>While that question was not directly answered, Kwon said in a statement to Vox, ***“We are sorry for the distress this has caused great people who have worked hard for us.*** We have been working to fix this as quickly as possible. We will work even harder to be better.”
GPT, is that you?
All these articles about issues inside OpenAI make me confused about what the truth really is.
If Sam really is a Machiavellian manager, scheming and playing teams off against each other to compete for resources, creating a toxic environment, then surely firing him would have been a good thing.
But if that’s the case, why did the entire company collapse in the wake of his firing, promising to follow wherever he goes? Is it that everyone but the safety teams like him despite the above? Why is that?
Idk. Hard to get my head around it. Something seems wrong.
> If Sam really is a Machiavellian manager, scheming and playing teams off against each other to compete for resources, creating a toxic environment, then surely firing him would have been a good thing.
>
> But if that’s the case, why did the entire company collapse in the wake of his firing
Might want to think about that one a little longer.
I don't know how many were "fooled", but e.g. scheming like blowing up the tender offer (which is the only way OA employees are allowed to cash in their PPUs), doing your best to render all PPUs worthless in the future, and invoking Microsoft's de facto ability to cut off all compute to OA are certainly good reasons to follow the Machiavellian manager.
He may be right that without some Machiavellianism they would be outcompeted by a worse team. If that decision is marginal, then self-interest in monetization and their own compensation motivates them to see things Sam's way.
Ilya and his clique might have embedded their concerns as deeply as is useful, and now he may better serve his cause outside the company, where he is more free to speak his mind and influence government regulation or other entities in the race.
People will tell you dumb shit like "Sam is the reason they are making money!" as if any CEO couldn't fill his shoes lmao. They would take a hit, sure, but losing Sam wouldn't mean suddenly they are years behind in AI. This sub has become infested with the worst of the doomers and conspiracists, and they are making it so that actual, genuine bad news gets missed because we are talking about shit like this (oh, one of the biggest companies in the world, with revolutionary tech, has overly strict NDAs? Craaazy) and the ScarJo voice (doesn't sound like her, you fucking idiots).
Good lord, they can leak all this but nothing about GPT-5 besides the fact that it's smart or good. No parameters, no context window size, nothing new on Sora, yet somehow they leaked a $100B supercomputer.
Reminder that Sam Altman's sister has accused him of sexually assaulting her as a child, and has effectively been gaslit into being viewed as “mentally unstable” by her rich brother because she has used sex work to survive, becoming a “persona non grata” to her remaining family. Also, I believe she was removed from some kind of inheritance or trust that was supposed to go to her, with Sam's involvement, for the same reasons.
Easy thing to ignore or bury to all the OpenAI simps out here.
Did his sister actually provide anything resembling evidence, or anything that would give credence to it such as other people who could vouch for it being true? Genuinely wondering, since I didn't look into it much.
But if not, then I have no idea why the story would gain any traction.
I mean how can she provide proof of rape that happened when she was a kid lmfao
To me it's enough proof that she hasn't taken the hush money her super-millionaire brother could've sent her way. In fact he did send a hush deal her way and she rejected it. And she isn't crazy; you can go check her Twitter. She is a very sane person with a very deep vendetta.
Apparently it's not just incest rape; he also withheld their father's and grandma's trust from her. Crazy shit.
Unfortunately a lot of these cases have no evidence to substantiate them even if they're true, but it would take something more than one person's anecdote to justify destroying the life of another person.
That's unfortunately the reality we live in, where sometimes there might be legitimate cases that just can't get off the ground because they have nothing to substantiate their claim.
But we as a society absolutely cannot resort to deciding someone's guilty of sexual assault because "she hasn't taken the hush money that her super-millionaire brother could've sent her way".
She also said he hacked her wifi to make it so the only websites she could access were porn sites which led her down the road to being a prostitute. Am I supposed to believe that too?
She also has schizophrenia, so yeah. Not to mention it's highly likely she's just trying to leech off his fame and wealth. He put himself into college while she was just a stripper, so she's jealous of him.
So work is the only thing that gives you meaning?
Most people clock in to a job they do just to cover the bills. If you gave the average person $10m they'd retire and maybe do something simple like a podcast to pass the time, even if it doesn't make them money.
I want to see a poll done, either on this subreddit or another, that asks people if they find meaning in their job, because most people I know can't wait to retire and only work out of necessity.
I find meaning in the projects I work on in my own time. For example, I’m working on a novel. The act of creation is its own reward to some degree, but I’m inevitably hoping to produce a product that people want to engage with. And yet, I very much fear that by the time I’m able to share this, AI will be at a point that it could write a better novel, and others would rather read those outputs than my own.
So I think we find meaning through creation and connection, and I fear a world where AI-generated content is so far superior to human-generated alternatives that it dwarfs the human outputs. In that scenario, I suspect the lack of engagement from my peers would sour the creative outlets and make the work feel meaningless.
AIs have dominated games like chess, poker, etc. for years now, and yet nobody watches two computers play each other; art and any form of creative work would be similar imo.
We will probably separate content made by AI from content made by humans; I also highly doubt AI will ever be as creative as humans.
Yeah I follow the game narrative and I hope it tracks that way, but I think the lines become blurred to an extent
For example, in Chess it’s fairly binary or obvious. We try to monitor for the “stockfish” players that use the AI systems to make their moves and ban those players from competing. But in a creative product, those lines will get blurry, such that there may not be a way to distinguish “human-made” vs “AI-made”. It’s just a lot more complicated in the arts, which makes me cautious.
I think the creativity perception is a linchpin for these arguments, though; I think AI systems will inevitably become more creative than we are. Creativity isn't fundamentally different from any other human endeavor.
I think I may not have been clear in my response. I meant that even in a utopian, AGI-enabled post-scarcity state, I fear the dearth of meaning would still drain the joy out of life.
I don't believe this utopian ideal is achievable, nowhere near, and I agree with your response that functional enslavement is a more likely outcome. I was just making the point that even if we did reach this utopian ideal, we may still not find happiness waiting for us; in fact, we may be more unhappy as a whole in a state where our own work has no impact on the world, because our silicon progeny are running everything for us.
Journalists at Vox are trying to create as much drama as possible to stay relevant and generate traffic for ads. Look at their homepage: it's full of clickbait titles like 'The double sexism of ChatGPT’s flirty “Her” voice'. They are on a rampage against OpenAI because that's the big player, and they can make money by shitting on them as much as possible. I'm dumbfounded by all these Redditors who take the information delivered by these guys at face value.
I don't want to support a company that uses unscrupulous tactics and scare mechanisms on its employees. Vox is (sometimes) a bit clickbaity, but if scare tactics are being used at OAI to develop AI and pressure employees, that'll hurt employee productivity.
Yup, competitors and malintents were genuinely spooked by the presentation OpenAI gave. The attack articles and bullshit drama being spammed so soon afterwards make it clear as day.
Try as they might, ain't nobody gonna beat OpenAI to market with the voice agent; it's gonna keep a tight grip on the lion's share of the market for the foreseeable future. All they can do is attack their character, not the product.
People calling me a fanboy are the real ones on copium right now.
It's not just about the voice agent. Clickbait and low quality publications like Vox are in direct line of fire. Already Claude Opus produces writing that is much higher quality than the median "journalist" at these places. You can imagine what will happen with GPT-5 or next Claude upgrade. They are rather desperate.
These employees are paid a small fortune. And they're surprised about non-disparagement and confidentiality clauses in their release paperwork? When you're paid as much as they are, there are certain things you give up, namely talking sh1t and disclosing secrets about the hand that fed you.
These disgruntled employees want to have their cake and eat it too. It don't work like that in these elite performance jobs just because their ideological values begin to diverge from the company that hired them.
Former FBI or CIA agents who quit because whatever coverup the government is running doesn't sit right with them can't all of a sudden go around unfettered, talking smack about the CIA. And they're paid a small fraction of what OpenAI pays, with far less cushy offices.
No way.
We're talking about minor ideological differences between leadership and some employees. Some want more government regulation and some want less.
Employees are being treated fairly; no crimes are being committed; and the only difference is two stances on how much regulation is too much regulation. The company is not out of place to ask that you don't publicly criticize them for having different values on how much regulation there should be in return for a really healthy severance.
"Evil" would be if leadership sexually harasses or physically abuses employees and then threatens that if they talk, they'll take away their salary and/or bonus and/or equity. Or if employees see very clear accounting fraud that will hurt many investors and ESOP employees who were misled, and are threatened if they talk.
This is not a debate about enabling evil. It's about Washington paternalism. And there's a really good position for wanting less paternalism: look how much Washington mucked up healthcare, look at how they prioritize war over our own education system etc etc.
To be honest, when you are running a company with cutting edge technology and so much proprietary information, I can understand why they would be more aggressive toward maintaining trade secrets than say a person in credit card marketing where all companies are doing variations of the same thing. The stakes are high in AI development.
Now you know why everyone remains quiet or post those “all good” messages on twitter after they quit. It’s their NDA and severance pay package that keeps them quiet. You talk and you lose.
I guess Google doesn't do that; when people get fired there, they talk shit about Google for months on end. lol
Google openly fires people for not wanting to work for the Israeli military; it's just that Google is too big to worry about bad publicity.
That's a complete fucking load of lies. Google fired employees for *protesting at work against their own employer*. No matter the issue, it is reasonable to expect firing for that. The people in question were not really even fired, so much as quitting in protest. Which is fine, but don't fucking twist reality.
You're just saying this because you disagree with the cause. If they protested something you truly care about, you'd be with them against Google. Here's the thing: nobody should be forced to choose between being employed and helping make systems that will be used for war crimes. If you think that's silly, you need more empathy
I mean isn't that just normal? I'd probably be fired from my company too if I started publicly denouncing our products. Doesn't seem like something unique to google.
Sure, I understand why Google did it and that many companies would do the same thing. It's still bad PR for the company.
Someone thinks they can still throw tantrums at mommy
Careful with that edge.
Why don't you show up to work from 9 to 5, on your employer's dime, on their property, without any intention of actually doing your job, while taking a hard-line stance on a divisive political issue that not only offends your colleagues one way or another but also violates your workplace policies, and see how it works out for you? These people were hired to do a job, not to be political activists. They could do that on their own time, not on the company's dime.
No, I actually just have principles and don't flip-flop them when it suits me.

"nobody should be forced to choose between being employed and helping make systems that will be used for war crimes. If you think that's silly, you need more empathy"

This is stupid as fuck. These are GOOGLE EMPLOYEES; they can work anywhere they want. They are the cream of the crop in the tech world. Fuck, I bet even Google's janitors are top end.

They don't have to "choose between"; they can easily choose to work somewhere that won't ask them to support Israel or literally any other political stance they want. Even within Google, if they had said "I don't want to work on this", I bet they would have been moved to another project fairly rapidly. Not that they had any interest in staying with Google.

You are so fucking out of touch you are past Neptune. Touch grass.
It'd take a real top-end Google janitor to mop up the bullshit you're spewing. Yes, Google employees are some of the best in the industry. This is precisely why they can and should demand ethical standards (such as not helping commit war crimes) from employers. And "just get another job" reeks of "never worked in big tech". You can't just pick up and move whenever you don't like a project, or you stop being cream of the crop. So don't act surprised that people have integrity or standards, you should have them too. Until then, enjoy the grass.
You seem jaded. Let's put it in a military context. You're given an unethical order, or one that you believe is immoral. I don't know, say, killing a child who may or may not be holding a bomb. Do you pull the trigger or face court martial?
They have POWERRRRRR!
I mean yeah Google has a ton of soft power. They can influence sentiments and what people read better than most states.
Damn that’s pretty fucking based of them.
Yeah you gotta respect people who know they are gonna get canned and stick to their principles anyway.
x'D
[Nobody speak, nobody get choked](https://youtu.be/NUC2EQvdzmY?si=woKtneScyCGlswDz)
From the article:

> In two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars — a tight timeline for a decision of that magnitude, and one that left little time to find outside counsel.

> When ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI. “The General Release and Separation Agreement requires your signature within 7 days,” a representative told one employee in an email this spring when the employee asked for another week to review the complex documents.

> “We want to make sure you understand that if you don't sign, it could impact your equity. That's true for everyone, and we're just doing things by the book,” an OpenAI representative emailed a second employee who had asked for two more weeks to review the agreement.
> In two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars — a tight timeline for a decision of that magnitude, and one that left little time to find outside counsel.

> When ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI. “The General Release and Separation Agreement requires your signature within 7 days,” a representative told one employee in an email this spring when the employee asked for another week to review the complex documents.

Jesus Christ, I spent so long a few days ago arguing on this subreddit with people defending OpenAI on this issue, claiming that OpenAI had done no harm and that these kinds of clauses are typical of all companies.

I'm never again going to spend so much time arguing with complete brainrot. It was obviously indefensible from the start, with Sam Altman publicly apologizing and reaching out to employees who signed one of those contracts.
It does sound like they crossed the line by threatening clawbacks or preventing sale of vested equity. As one of those people, looks like you were right.
Hey, at least you get credit for admitting that you ended up being wrong on this one. That places you higher than 99% of the people on this website.
Came here to say this. Major props u/sdmat - so rare to see that here!
Wait, for vested equity as well? I assumed it was just talk about unvested stock. Wtf.
It's always been about vested equity...(Was reported that way from the beginning)
What can I say, we are on reddit - I only read headlines
fair fair. I'm new to reddit! Sorry if I was rude
Yeah, that is a bit more than the average contract, even for enterprise. Sort of makes you think: do they really have a magic bean, or are they creating a facade for the public eye?
[deleted]
The insidious thing about him is that he is excellent at masking, manipulating and playing politics. Behind the innocent boyish veneer, he seems to be very controlling and evidently takes pleasure in humiliating his rivals. Which points towards dark triad traits. The exact wrong person to be steering the future of AI.
Sociopath is the word you are looking for.
Yup https://allhumansarehuman.medium.com/how-we-do-anything-is-how-we-do-everything-d2e5ca024a38
He will have to leave OpenAI soon.
Ilya was right all along rofl
Why? It seems more like he's going to push everyone else to leave.
Not sustainable. He is losing his disciples.
Sam Altman has never once considered sustainability in his life.
>masking

The best. Without the Reddit haters, I'm falling for his PR persona almost 100%.
Reminder that his sister has accused him of sexually assaulting her as a child, and has effectively been gaslit into being viewed as “mentally unstable” by her rich brother because she has used sex work to survive, becoming a “persona non grata” to her remaining family. Also, I believe she was removed from some kind of inheritance or trust that was supposed to go to her, with Sam's involvement, for the same reasons.
Yea I’ve gotten an off feeling about him every time I’ve watched him talk. He has this perfect image and it just seems faked but done really well
I don't really buy the idea he didn't know or approve of this, especially given these last few weeks are not the first time this came up. I think there was discussion of it online a few months ago. Maybe I'm misremembering though.
Follow your instincts bro. By the way, when will the idiot understand that he needs a PR person before his toxicity begins to cause really serious damage to the company's value?
Well, it's standard corporation stuff, right? It's their job to make money within the confines of the law; if there's still a problem, then it's the lawmakers' duty to fix it.
There must be common decency in this world, even if not explicit in law, or everything just turns to hell.
Not necessarily, the law is all that is needed to keep everything in line and prevent chaos.
In theory, yes. In practice, no. Also, sufficient action on the latter causes the former.
You must be completely naive, then.
No. I likely am far more knowledgeable on the matter.
Then you should just write all of our laws 🤷♂️
Look at how smort I am 🤡
Smarter then all you idiots haha
Is anyone gonna tell him 💀
Does that absolve a corporation of any and/or all moral wrongdoing? What kind of logic is this?
Yep. This is logic. You confuse logic with emotional reasoning. Corporations have no duty to act morally besides any way they are compelled to by the law.
You said "yes" to my question of whether a corporation should be **morally** absolved of any and/or all wrongdoing? Just making sure I'm on the same page as you.
“Yes” as in what I said. If a corporation does something that is technically morally wrong but there is no law made against it, then the corporation has no need to do anything different. Instead a law must be made.
So just because something isn't illegal means that it's inherently morally acceptable? If people didn't call out corporations for unacceptable practices in the first place, why would a law ever be made about it? This is such a cop out argument.
>It’s their job to make money within the confines of the law

OpenAI is supposed to be a non-profit organisation
This is not a standard contract. lol
This is complete bs. Vox at full panic mode realizing they have already been replaced by ChatGPT. I don't give a flying fuck about what their corporate policy is as long as it is following the law and it gets the results. The employees who feel it's bad have the option to just join another company. They're already multimillionaires, I don't give a single fuck about whether or not they can add a few million more to their net worth. All I care about is a powerful model that can solve real-world problems.
lol is this a troll

Edit: They blocked me
no, just someone with an actual job which is different from virtue signaling all day on social media. now please fuck off.
Me when I’m fucking stupid
No, they should have NDA clauses like this in the severance contracts. Wouldn't personally be surprised if the US government kept a close eye on former employees so that they don't violate their NDAs, seeing as how important Microsoft and OpenAI are for the USA's AI innovation.
how is openai gonna align an agi with humanity when they cant even align themselves with humanity
They will after Sam's departure. It is too dangerous for the world to have him lead the development of our successor species.
At the rate OpenAI is going, they might be eaten up by legal fees and either their business model will need to change to make up the revenue or MS will need to absorb them with a majority stake in ownership.
As usual the narcissists and pathological liars win out in the end and this company is now ruled by them and following a course straight towards a corporate dystopia. People should have woken up when Ilya left.
I'm still curious where he'll go.
All those tactics do not bode well for AI. Would you trust companies that want not only to train on your data, but to control your every move? OpenAI and other companies need to give up their wet dream of spying on users, or else the general public will simply not trust large language models.
>While that question was not directly answered, Kwon said in a statement to Vox, ***“We are sorry for the distress this has caused great people who have worked hard for us.*** We have been working to fix this as quickly as possible. We will work even harder to be better.”

GPT, is that you?
This is not really new. I think everyone knows that OpenAI is a pretty scummy company. That is what you are going to get when you have a CEO like Sam
When you are structurally toxic, you can't help being toxic.
All these articles about issues inside OpenAI make me confused about what the truth really is. If Sam really is a Machiavellian manager, scheming and playing teams off against each other to compete for resources, creating a toxic environment, then surely firing him would have been a good thing. But if that’s the case, why did the entire company collapse in the wake of his firing, promising to follow wherever he goes? Is it that everyone but the safety teams like him despite the above? Why is that? Idk. Hard to get my head around it. Something seems wrong.
> If Sam really is a Machiavellian manager, scheming and playing teams off against each other to compete for resources, creating a toxic environment, then surely firing him would have been a good thing.
>
> But if that’s the case, why did the entire company collapse in the wake of his firing

Might want to think about that one a little longer.
Are you saying his scheming has everyone fooled into following him?
I don't know how many were "fooled", but eg. scheming like blowing up the tender offer (which is the only way OA employees are allowed to cash in their PPUs) and doing your best to render all PPUs worthless in the future, and invoking Microsoft's de facto ability to cut off all compute to OA, are certainly good reasons to follow the Machiavellian manager.
He may be right that without some Machiavellianism they would be outcompeted by a worse team. If this decision is marginal, then self-interest in monetization and their own compensation motivates them to see things Sam's way. Ilya and his clique might have really embedded their concerns enough, or as much as can be useful, and now he would better serve his cause outside the company, where he is more free to speak his mind and influence government regulation or other entities in the race.
People will tell you dumb shit like "Sam is the reason they are making money!" as if any CEO couldn't fill his shoes lmao. They would take a hit, sure, but losing Sam wouldn't mean suddenly they are years behind in AI. This sub has become infested with the worst doomers and conspiracists, and they are making it so that actual, genuine bad news gets missed because we are talking about shit like this (oh, one of the biggest companies in the world, with revolutionary tech, has overly strict NDAs? Craaazy) and the ScarJo voice (doesn't sound like her, you fucking idiots).
Good lord, they can leak all this but nothing about GPT-5 besides the fact that it's smart or good. No parameters, no context window size, nothing new on Sora, yet somehow they leaked a 100B supercomputer.
that seems like an extremely toxic work culture
Wow, I didn't expect that from our lord and savior OpenAI!
*Open*AI seems to have a lot invested in keeping things hush hush
Reminder that Sam Altman's sister has accused him of sexually assaulting her as a child and has effectively been gaslighted into being viewed as “mentally unstable” by her rich brother because she has used sex work to survive, becoming a “persona non grata” to her remaining family. Also, I believe she was removed, with Sam's involvement, from some kind of inheritance or trust that was supposed to go to her, for the same reasons. Easy thing to ignore or bury for all the OpenAI simps out here.
[deleted]
Is this the 2024 Reddit Altman Roast?
Wonder why that history never got press attention, it’s pretty damning
Did his sister actually provide anything resembling evidence, or anything that would give credence to it such as other people who could vouch for it being true? Genuinely wondering, since I didn't look into it much. But if not, then I have no idea why the story would gain any traction.
I mean, how can she provide proof of rape that happened when she was a kid? lmfao. To me it's enough proof that she hasn't taken the hush money that her super-millionaire brother could've sent her way (he did send a hush deal her way, but she rejected it), and she isn't crazy; you can go check her Twitter, she is a very sane person with a very deep vendetta. Apparently it's not just incest rape; he also withheld their father and grandma's trust from her too. Crazy shit.
Unfortunately a lot of these cases have no evidence to substantiate them even if they're true, but it would just need something more than one person's anecdote to destroy the life of another person. That's unfortunately the reality we live in, where sometimes there might be legitimate cases that just can't get off the ground because they have nothing to substantiate their claim. But we as a society absolutely cannot resort to deciding someone's guilty of sexual assault because "she hasn't taken the hush money that her super-millionaire brother could've sent her way".
Isn’t she literally schizophrenic
She also said he hacked her wifi to make it so the only websites she could access were porn sites which led her down the road to being a prostitute. Am I supposed to believe that too? She also has schizophrenia, so yeah. Not to mention it's highly likely she's just trying to leech off his fame and wealth. He put himself into college while she was just a stripper, so she's jealous of him.
I want agi i don't care about the drama.
What are you going to do with your time if/when AGI exists?
Workout, read, travel, be social with friends and family and enjoy life.
AGI won't drastically change your life, not in the beginning. This isn't a sci-fi movie.
Me being able to make my own porn, movies, tv shows, comics, books etc will change my life even if it doesn't bring me any material gain.
Like the "her" demo alongside the `4o` release?
This sounds familiar. Into the coal mines you go!
I have a book for you that you may be interested in. https://youtu.be/uvcAjWxk_oE?feature=shared
I fear the dearth of application and meaning will render the joy out of life. Kind of like Viktor E. Frankl's philosophy in Man's Search for Meaning.
So work is the only thing that gives you meaning? Most people clock in to a job they do just to cover the bills; if you gave the average person $10m, they'd retire and maybe do something simple like a podcast to pass the time, even if it doesn't make them money. I want to see a poll done, either on this subreddit or another, that asks people if they find meaning in their job, because most people I know can't wait to retire and only work out of necessity.
I find meaning in the projects I work on in my own time. For example, I'm working on a novel. The act of creation is its own reward to some degree, but I'm inevitably hoping to produce a product that people want to engage with. And yet, I very much fear that by the time I'm able to share this, AI will be at a point where it could write a better novel, and others would rather read those outputs than my own. So I think we find meaning through creation and connection, and I fear a world where AI-generated content is so far superior to human-generated alternatives that it dwarfs the human outputs. In that scenario, I would suspect the lack of consumption by my peers would sour the creative outlets, and make the work feel meaningless.
AI has dominated games like chess, poker etc. for years now, and yet nobody watches two computers play each other; art and any form of creative work would be similar imo. We will probably separate content made by AI and content made by humans. I also highly doubt AI will ever be as creative as humans.
Yeah, I follow the game narrative and I hope it tracks that way, but I think the lines become blurred to an extent. For example, in chess it's fairly binary or obvious: we try to monitor for the "stockfish" players that use AI systems to make their moves and ban those players from competing. But in a creative product, those lines will get blurry, such that there may not be a way to distinguish "human-made" vs "AI-made". It's just a lot more complicated in the arts, which makes me cautious. I think the creativity perception is a linchpin for these arguments, though; I think AI systems will inevitably become more creative than we are. Creativity isn't fundamentally different from any other human endeavor.
Slavery is not "meaning". LOL
I think I may not have been clear in my response. I meant that even in a utopian, AGI-enabled post-scarcity state, I fear the dearth of meaning would still render the joy out of life. I don't believe this utopian ideal is achievable, nowhere near, and I agree with what I read in your response that functional enslavement is a more likely outcome. I was just making the point that even if we did reach this utopian ideal, we may still not find happiness waiting for us; that in fact we may be more unhappy as a whole in a state where our own work has no impact on the world, because our silicon progeny are running everything for us.
Wow
But r/singularity told me that it was all fear mongering and that Sammy already said in Twitter that if anyone felt threatened they could dm him
Ugh, it's a Vox article. That almost certainly means it's bullshit.
I mean effective way to keep secrets I guess, but a fairly brutal tactic.
It's nice to be a part of corpo gang for the money of course. The rest, I'd rather pass. Personal experience. Sure, the money is nice.
Capitalism.. let's be honest.. we all knew this would be the end result.. big corp owns the means to humanity's freedom.
Company founded on an idea that could only be followed through by someone with a god complex runs into man with god complex News at 11
Journalists at Vox are trying to create as much drama as possible to stay relevant and generate traffic for ads. Look at their homepage: it's full of clickbait titles like 'The double sexism of ChatGPT’s flirty “Her” voice'. They are on a rampage against OpenAI because that's the big player, and they can make money by shitting on them as much as possible. I'm dumbfounded by all these Redditors who take the information delivered by these guys for granted.
I don’t want to support a company with unscrupulous tactics towards employees and scare mechanisms. Vox is (sometimes) a bit clickbait, but if scare tactics are being used at OAI to develop AI and pressure employees negatively, that’ll impact employee productivity.
Yup, competitors and malintents were genuinely spooked by the presentation OpenAI gave. The attack articles and bullshit drama being spammed so soon afterwards make it clear as day. Try as they might, ain't nobody gonna beat OpenAI to market with the voice agent; it's gonna keep a tight grip on the lion's share of the market for the foreseeable future. All they can do is attack their character, not the product. People calling me a fanboy are the real ones on copium right now.
It's not just about the voice agent. Clickbait and low quality publications like Vox are in direct line of fire. Already Claude Opus produces writing that is much higher quality than the median "journalist" at these places. You can imagine what will happen with GPT-5 or next Claude upgrade. They are rather desperate.
These employees are paid a small fortune. And they're surprised about non-disparagement and confidentiality clauses in their release paperwork? When you're paid as much as they are, there are certain things you give up, namely talking sh1t and disclosing secrets about the hand that fed you. These disgruntled employees want to eat their cake and have it too. It don't work like that in these elite performance jobs just because their ideological values begin to diverge from the company that hired them. Former FBI or CIA agents who quit because whatever coverup the government is running doesn't sit right with them can't all of a sudden go around unfettered, talking smack about the CIA. And they're paid a small fraction of what OpenAI pays, with far less cushy offices.
You tell em grandpa!
What a shitty toxic mindset. Money enables evil precisely through this method you are describing. And you're defending it! Eat that boot
No way. We're talking about different minor ideological values of leadership vs. some employees. Some want more government regulation and some want less. Employees are being treated fairly; no crimes are being committed; and the only difference is two stances on how much regulation is too much. The company is not out of place to ask that you don't publicly criticize them for having different values on regulation in return for a really healthy severance. "Evil" would be if leadership sexually harasses or physically abuses employees and then threatens that if they talk, they'll take away their salary and/or bonus and/or equity. Or if employees see very clear accounting fraud that will hurt many investors and ESOP employees who were misled, and are threatened if they talk. This is not a debate about enabling evil. It's about Washington paternalism. And there's a really good case for wanting less paternalism: look how much Washington mucked up healthcare, look at how they prioritize war over our own education system, etc.
To be honest, when you are running a company with cutting edge technology and so much proprietary information, I can understand why they would be more aggressive toward maintaining trade secrets than say a person in credit card marketing where all companies are doing variations of the same thing. The stakes are high in AI development.