This market resolves based on my personal judgment of all evidence released within the next 30 days.
To take less Risk with AGI = the board wanted to take less risk and/or move slower than Sam, so they fired him
To take more Risk with AGI = the board wanted to take more risk and/or move faster than Sam, so they fired him
So, first of all, thank you for the kind + fair reviews.
So, what changed since I resolved this market?
After resolving the question and providing an explanation, I had conversations with various individuals, including @firstuserhere and @TheBayesian. During these conversations, I realized I had undervalued reports from credible sources who have more context than I do and whom I have no reason to distrust. I also realized that safety being a topic of tension at OpenAI doesn't necessarily mean it was the primary motivation behind Sam's firing.
What happens now?
I am no longer confident in my original assessment and would prefer to resolve this market in 1-2 days, after engaging in more discussions with members of the community and having some time to sleep on it.
Could this be motivated by bad reviews?
Most reviews were kind and fair. This has more to do with realizing that maybe I was forcing a narrative rather than interpreting all the evidence impartially. The idea that Sam could be fired over such a small matter did not align well with my worldview. I was also heavily biased by initial reports and by the fact that Helen's paper, which started the dispute, favored Anthropic's safety approach over OpenAI's.
Sources
Finally, thank you, @Eliza, for unresolving this market so I could fix a mistake.
Again, thank you to @TheBayesian and @firstuserhere for the conversations.
You can find a snippet of the conversations below.
tl;dr: We cannot know for certain, but I lean towards the view that Sam was prioritizing AI commercialization over safety, while Ilya wanted to slow down development and place greater emphasis on safety.
I took this question seriously and put in effort out of respect for everyone who participated in this market. However, we may not have all the facts, and my opinion is based on available evidence.
I think the board (specifically Ilya) believed that Sam was moving too quickly to commercialize AI models without ensuring safety came first. This concern seemed to be shared by others, including the founders of Anthropic, who left OpenAI under similar circumstances. There were also disagreements among leadership over how to manage the company. For instance, see what transpired with Willner below.
This leads me to conclude that there was a significant divide within OpenAI regarding the pace of development and insufficient resources dedicated to ensuring AI system safety. Ilya convinced the board that the company should move more cautiously and allocate resources differently. The board made their decision.
Excerpts from articles that influenced my opinion:
Anthropic Founders left OpenAI because they believed that the company was moving too fast to commercialize its technology
Executives disagreed over how many people OpenAI needed to moderate its consumer-facing products. By the time OpenAI launched ChatGPT in late 2022, the trust and safety team numbered just over a dozen employees, according to two people with knowledge of the situation … Some employees worried OpenAI didn’t have enough employees tasked with investigating misuse of the platform … By July, when Willner left the company, citing family reasons
Under Willner, a former Meta Platforms content moderation executive, the trust and safety team sometimes ran into conflicts with other executives at OpenAI. Developers working on apps built on OpenAI’s technology had complained, for instance, that the team’s vetting procedures took too long. That caused executives to overrule Willner on how the app review process worked.
Sam Altman alluded to a recent technical advance the company had made that allowed it to “push the veil of ignorance back and the frontier of discovery forward.”
In a sign Sutskever had become increasingly concerned about AI risks, he and Leike in recent months took on the leadership of a new team focused on limiting threats from artificial intelligence systems vastly smarter than humans. In a blogpost, OpenAI said it would dedicate a fifth of its computing resources to solving threats from “superintelligence,” which Sutskever and Leike wrote “could lead to the disempowerment of humanity or even human extinction.”
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns. “You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do. (Another person said Sutskever may have misinterpreted a question related to a potential hostile takeover of OpenAI by other parties.)
Altman had spoken to Toner about a paper she co-wrote for Georgetown's Center for Security and Emerging Technology, where she is a director of strategy, because it seemed to criticize the company's safety approach and favor rival Anthropic, according to The New York Times. Toner defended her decision, according to the report. The disagreement over the paper led senior OpenAI executives, including chief scientist Ilya Sutskever, to discuss whether Toner should be removed as a board member. After those discussions, Sutskever joined the rest of the board in firing Altman, communicating it to the former chief executive himself over a Google Meet conference call.
PS: I am ready for all your negative reviews 🤍 but this is honestly what I think and I would be lying if I resolved the question any other way
@Soli I respect your internal consistency while resolving this question. It is entirely your right to do so!
Nevertheless, I think you are significantly under-counting the importance of the multiple reputable reports we have saying Sam Altman was fired because he tried to oust Helen Toner in a power play (and in the course of doing so, misrepresented/lied to the board members about each other). Accelerating AI progress/slowing it down seems ancillary to the issue primarily at play.
@RobertCousineau Thank you for the comment. I appreciate you! I don’t agree, but maybe you are indeed right. I think Ilya played a bigger role than Helen. She is more of a side character in my opinion.
poll with the same question: https://manifold.markets/Soli/why-was-sam-altman-fired-to-acceler-f4af5d9003bd
I don't think the community would agree with me, but I am still curious to see what the results will look like.
@Soli yeah you should resolve this based on your opinion since that's the criteria, but if you want to see polls I also have a bunch, link here.
@Joshua thank you for sharing - my fingers are hovering over the unresolve option haha as the bad reviews are coming in. I truly think everyone is giving Helen too much credit and that Ilya played a significantly bigger role here, so I won’t change it, but I guess no more personal-opinion markets for me. I don’t like the pressure.
Edit: The results of your poll are not as bad as I thought. This means there are some that agree with me ☺️
@Soli Haha, you ran this market right/exactly as you said you would. The review system as it stands is really funky though/a popularity contest in a way that doesn't seem to add much value.
@Soli Yeah the review system isn't great in my opinion, definitely leads to weird pressure for subjective markets even though I don't think it's fair to leave bad reviews when resolution was known to be subjective from the start.
I do think it's best practice when running a subjective market to keep traders updated on your opinion if it seems like the market is not predicting your judgement well, but traders could also have been more proactive in asking for your opinion before trading instead of assuming your opinion would be similar to the market's.
And ultimately not much mana even changed hands here so it shouldn't be a big deal to either side. I'll leave a 5 star review for the well-written explanation of your reasoning.
@Joshua very good advice, thank you! I think for now no more subjective markets for me but in case I try them again in the future, I will make sure to keep participants updated.
The main problem was that I did not have an opinion till the market closed. I was hoping that some conclusive evidence would come to light, but this did not happen. I could still have formed an opinion and shared it with participants to avoid surprises.
Thank you for the review! ☺️☺️
@Soli If the market was "would the net effect of the firing be to accelerate AGI or slow it down", I would have said your resolution was accurate: firing him (and keeping him fired) would have the net effect of increasing caution about AGI. It's even possible that was an additional motive for some. But based on the various reports at this point, to the extent the information is public, I don't think they support the position that this was the primary underlying motivation / inciting incident that led to the firing.
@Soli For the record, as one of the holders of "neither", I have no objections to your resolution and have left a five star review, since this was clearly described as resolving to your own judgment.
@Soli FWIW, to acknowledge this explicitly: you can absolutely judge and resolve this any way you choose, it is marked explicitly as subjective. My comment was about the selection of input that you're stating you used to form your judgement, and the effect that would have on the resulting evaluation, as well as calling attention to the distinction between "what's the net effect" and "what was the actual cause". In effect, this being an opinion-based market, I'm asking the question of whether the resolution of this market was actually based on your opinion on the question posed by the market ("why did this happen") rather than your opinion on a related but different question ("what was the effect of this happening").
One of the difficulties here is that there is a lot of hypothesizing out there, and many people are reasoning backwards from net effects (which are easier to evaluate without access to as many of the undisclosed details). There's some somewhat clear public information on the root causes at this point, but it's sufficiently non-authoritative that the speculations still run rampant, and "played board politics, tried to oust someone, lost and got ousted, but then subsequently won after all" is a more boring and mundane explanation than "desperate battle about AGI safety and risk and the future of humanity!", so the boring and mundane explanation doesn't get as much traction or interest. And it is on balance true that the firing (if it stuck) would have had an effect on AGI safety; there's much more evidence to support that than evidence to support that that was the motivation for the firing.
At the end of the day, you should evaluate the available evidence and decide if you believe the available evidence supports "the board wanted to take less risk and/or move slower than Sam so they fired him", with that being the actual root-cause-and-effect.
> Participants in this market have been extra kind.
A bit of mana isn't something folks should be unkind over. ❤
@chrisjbillington I did not want to resolve this without explaining how I reached my conclusion. Can this wait a little while?