Why was Sam Altman fired? To accelerate AGI, slow it down, or neither? (please read description)
Never closes
To take less risk with AGI = the board wanted to take less risk and/or move slower than Sam, so they fired him
Neither
To take more risk with AGI = the board wanted to take more risk and/or move faster than Sam, so they fired him

I just resolved a market based on the same question and added my explanation below. I wonder what other users on Manifold think.

The Case for "To take less risk with AGI"

tl;dr: We cannot know for certain, but I lean towards the view that Sam was prioritizing AI commercialization over safety, while Ilya wanted to slow down development and place a greater emphasis on safety.

I think the board (specifically Ilya) believed that Sam was moving too quickly to commercialize AI models without ensuring safety came first. This concern seemed to be shared by others, including the founders of Anthropic, who left OpenAI under similar circumstances. There were also disagreements among leadership over how to manage the company. For instance, see what transpired with Willner below.

This leads me to conclude that there was a significant divide within OpenAI over the pace of development and whether enough resources were dedicated to ensuring AI system safety. Ilya believed the company should move more cautiously and allocate resources differently, and he convinced the board. The board made its decision.

Excerpts from articles that influenced my opinion:

  • Anthropic's founders left OpenAI because they believed that the company was moving too fast to commercialize its technology

  • Executives disagreed over how many people OpenAI needed to moderate its consumer-facing products. By the time OpenAI launched ChatGPT in late 2022, the trust and safety team numbered just over a dozen employees, according to two people with knowledge of the situation … Some employees worried OpenAI didn’t have enough employees tasked with investigating misuse of the platform … By July, when Willner left the company, citing family reasons …

  • Under Willner, a former Meta Platforms content moderation executive, the trust and safety team sometimes ran into conflicts with other executives at OpenAI. Developers working on apps built on OpenAI’s technology had complained, for instance, that the team’s vetting procedures took too long. That caused executives to overrule Willner on how the app review process worked.

  • Sam Altman alluded to a recent technical advance the company had made that allowed it to “push the veil of ignorance back and the frontier of discovery forward.”

  • In a sign Sutskever had become increasingly concerned about AI risks, he and Leike in recent months took on the leadership of a new team focused on limiting threats from artificial intelligence systems vastly smarter than humans. In a blogpost, OpenAI said it would dedicate a fifth of its computing resources to solving threats from “superintelligence,” which Sutskever and Leike wrote “could lead to the disempowerment of humanity or even human extinction.”

  • At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns. “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. (Another person said Sutskever may have misinterpreted a question related to a potential hostile takeover of OpenAI by other parties.)

  • Altman had spoken to Toner about a paper she co-wrote for Georgetown's Center for Security and Emerging Technology, where she is a director of strategy, because it seemed to criticize the company's safety approach and favor rival Anthropic, according to The New York Times. Toner defended her decision, according to the report. The disagreement over the paper led senior OpenAI executives, including chief scientist Ilya Sutskever, to discuss whether Toner should be removed as a board member. After those discussions, Sutskever joined the rest of the board in firing Altman, communicating it to the former chief executive himself over a Google Meet conference call.


My understanding is that the people voting Neither are basing this on Helen’s statements after she resigned from the board. Is this correct?

@Soli That, and Emmett and people familiar with the matter going out of their way to specify this wasn’t about AI safety, and it seeming clear from context that it was instead more so board drama / power struggles / related to sama trying to oust Helen and/or his not being candid (which is the official given reason for his original firing and seems more literal than was originally obvious)

@TheBayesian hmm but didn’t Sam and Helen’s disagreement start because of AI safety? Wasn’t Ilya the one who pulled the trigger? Weren’t there known disagreements between Ilya and Sam on safety? Also, how then should Ilya’s comments at the all-hands be interpreted?

  • Altman had spoken to Toner about a paper she co-wrote for Georgetown's Center for Security and Emerging Technology, where she is a director of strategy, because it seemed to criticize the company's safety approach and favor rival Anthropic, according to The New York Times. Toner defended her decision, according to the report. The disagreement over the paper led senior OpenAI executives, including chief scientist Ilya Sutskever, to discuss whether Toner should be removed as a board member. After those discussions, Sutskever joined the rest of the board in firing Altman, communicating it to the former chief executive himself over a Google Meet conference call.

  • At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns. “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. (Another person said Sutskever may have misinterpreted a question related to a potential hostile takeover of OpenAI by other parties.)

@Soli good questions! I hope anyone who has a stronger opinion than I do feels free to jump in and add anything I'm not considering or wtv. I might not be the right person to defend this position, a lot of my views are deferred to other ppl I think have good views lol.

My sense is that pointing to AI safety as the cause of conflicts within the board has been described as either a misdirection, or outright not a factor, or a background aspect that might influence what factions develop but didn't have a large weight on the firing. Idk how their disagreements started, but I wouldn't be that surprised if they started because of AI safety; after all, the board's goal is to have openai's agi benefit humanity, so it's bound to be a subject of a bunch of discussions. Nevertheless, it seems more to me like safety was a background thing that might have caused some disagreements occasionally, and that the main divide was suspicion about Altman's general character, and power struggles. Dunno if you read the latest big Helen articles but they might be worth a read.
Dunno if Ilya pulled the trigger, but does he even have the power to? My informedish guess is that nobody on the board, including Ilya, could "just" fire sama; it had to have been a decision that reached either unanimity or some kind of threshold + negotiations. So yeah, it wouldn't be too surprising to me if tensions had been stirring around sama's behaviour for months, and then sama did some pointed political maneuvering that backfired, made the rest of the board confident enough, and convinced Ilya that he had to oppose it too, which gave them the board power to go through with the firing. Turns out he seems to have thought that was a mistake in the end, but I really don't have a good sense of Ilya's thoughts and motivations about the situation, either about Altman or about the rest of the board or wtv. Maybe someone knows more. I would be surprised if it was led by Ilya, i.e. if he initiated the antagonistic conflict and "managed" to oust sama himself. Like, my model of the situation describes that as very unlikely, but model uncertainty means I could easily be wrong.

And yeah, I think most people who think for years about AI safety end up with disagreements, weak or strong, and that's fine, and it might have influenced some of the conflict, but that's a far cry, IMO, from "Sam was fired to slow down AGI". Seems like Sam was deceptive and tried to reduce the board's influence on the direction of the company (which they felt was their job), or at least the board felt that way, and that this is the actual thing that led to them firing him, rather than any specifics about AGI concerns. If you extrapolate "well, why would they want a competent CEO? To fulfill openai's mission, right? To build safe AGI that benefits humanity?", then yeah, since that is their mission it will always be an aspect of the reason behind their decisions, but really it wasn't like "oh, if we fire him, AGI will slow down"; more like "if we fire him, we can actually steer openai such that it fulfills the mission instead of worrying all day about office politics and whether sama is hiring enough employees to have political levers when we try to fire him"
^ofc most of that is conjecture, and I could be wrong, and I haven't done as much research on the topic as many people, so take that with a grain of salt I guess

About the first quote, I think more recent reporting might be relevant, but yeah it is pretty consistent with the picture I'm painting imo: she wrote a paper about AI governance, which is her field of study, and sama (according to this model where he's had a pattern of being deceptive) tried to use it to paint her as not fit for being on the board, either by trying to pick out some possible conflicts of interest she has, or wtv. Ilya had mostly been balancing the board by not taking the side of the Helen-and-co board members, but was compelled by this being sama shenanigans, switched sides, and the balance shifted and sama was finally fired (after some of the board members had thought that would be wise for weeks or months). So Ilya's switch seemed plausibly pretty immediate (which explains why he switched sides back quickly afterwards; it was a bit of an in-the-moment thing for him, unlike for the rest of Helen-and-co), but theirs did not.

Not sure it matters, but subjectively speaking I am shocked by how well this model feels like it fits the observations, like so many motivations are so much less confusing under this frame. Could be an illusion tho, idk. I do realize I haven't given much at all in the way of textual evidence. If that's what you were looking for, I'm not the best at compiling links lol, but anything specific you ask about I think I can find info on.

@TheBayesian ok you convinced me - thank you thank you thank you!!!!
