In a year, will we think that Sam Altman leaving OpenAI reduced AI risk?

Sam Altman just left OpenAI.

Will resolve to the majority opinion of a poll (with options "reduced" and "increased") of people who I think are trustworthy on this question. (Feel free to propose different resolution criteria.)

Edit: Clarification: I will resolve this market based on whether the board's initial decision to fire Sam Altman set off a chain of events that ultimately reduced/increased AI risk. So if Sam returns to OpenAI, this market would NOT resolve as N/A.


hominidan predicts NO

I think in the closest counterfactual world, the board thinks action against Sam doesn't seem so necessary or urgent. So either they take more time to craft PR-palatable messaging, or they settle for a less severe reprimand (a salary cut or a threat of sacking) while Sam convincingly expresses regret and alters his behavior. Or maybe instead of firing him, they move him to another position.
OP says the decision includes its second-order effects. IMHO the result has been increased control by a single man with a cult following, who has threatened to sack a board member for saying regulation is good, who has been lobbying for less regulation, and who is generally manipulative (e.g. see Irving, Graham, and his repeated inability to give a straight answer to questions regarding x-risk at the Senate hearing). Plus somehow, the event spurred an avalanche of EA criticism within the tech community.

seikonfertrad predicts YES

@hominidan I think I broadly agree with all of your points. But my crux is that I don't think the board would have become more capable with time. On the contrary, I suspect there would have been a slow loss of control. Few people would have tracked it and ~nobody would have had sufficient clout to counteract irresponsible action, whether the nonprofit structure persists or not.

The already-weak governing board took one of the only actions it thought was available to it to credibly signal we're in a pickle. Yes, maybe they could have waited and curated more of a PR stunt, but maybe they're also just human and were insufficiently resourced. Either way, I think this draws serious attention to the inadequacy of the governance structure and will (hopefully) result in an actually-resourced checks-and-balances system 1 year later.

Isaac King bought Ṁ100 of YES

@JonasVollmer Can you elaborate on who you think is trustworthy for this question?

Jonas Vollmer

@IsaacKing I might survey the people at or something like that

TheBayesian 🦚 bought Ṁ0 of YES

it seems plausible that the increase in ai safety-related drama might overshadow other effects by making governance of future ai systems easier (more attention, more visibility into what can go wrong) and reducing AI xrisk. i'm betting that and other possibilities make it higher than 20%; buy my limit order if you disagree (or lmk why this consideration is wrong)

Connor Williams bought Ṁ25 of YES

If this is accurate, it suggests that if Sam stays at Microsoft, OpenAI could splinter into a million pieces rather than everyone following him there. The former is presumably better than the latter for AI safety, so my estimate of the probability here has gone from ~15% to ~40%.

IC Rainbow predicts NO

Our only chance is not-killing-everyone-ist remainder of OpenAI merging into Anthropic.

Semiotic Rivalry bought Ṁ100 of YES

Rationally, I can see good arguments on either side. Emotionally, this feels very very bad. Watching what is probably a massive defection from the "don't kill everyone" equilibrium and seeing it get massive support hits different than just predicting that defection will happen.

IC Rainbow bought Ṁ100 of NO

Nadella is committed to racing ahead with a $50B spending spree.

That's one of the worst outcomes out there.

I hope they can keep their models shut and guarded, unlike Meta (joining which would be even worse).

Eli Tyre bought Ṁ35 of NO

It seems like there's substantial probability that this will be the event that creates yet another top-tier AGI lab.

Tobias bought Ṁ20 of NO

@EliTyre I'd guess that whether this is good depends on how long it takes Altman/Brockman to build up a SOTA lab at Microsoft (6-24 months?), and on how relevant this potential slowdown or "break" is for putting increased focus on safety.

Maybe equally important is whether OpenAI will be able to execute on safer AI capabilities or AI safety research. It could also be that OpenAI will be gutted and operationally incapable of achieving anything bad, but also anything good.

LR

@TobiasH Sam may find the MSFT environment more limited than a fresh startup he commands, even with Satya pulling all the levers for him. There are obviously advantages as well though.

Soli

This is the most creative sama-related market. Really nice one!

TheBayesian 🦚 bought Ṁ20 of YES

board stood its ground, W

Uzay bought Ṁ50 of NO

Would be curious to hear people's reasoning for YES.

TheBayesian 🦚 bought Ṁ20 of YES

@uzpg It puts a lot of attention on the safety issue, and provides additional evidence to the skeptics that this isn't a conspiracy / people pretending to believe there are x-risks to AI, that it's a real concern. Plus the interim CEO has mentioned being for a slowdown in AI progress, the board actually did its job in getting Sam out when he overstepped, etc. If Sam starts a new company with the accs, it seems likely that his public image will be worse, having been kicked out for safety-related reasons and thereafter advertising his recklessness to rally the e/accs, to the detriment of popular opinion... something like that. All wild guesses, but they probably add up to reducing x-risk. The big risk is if over half of OpenAI employees leave, then... well, hard to tell.

TheBayesian 🦚 predicts YES

also, if they stood their ground and the acc faction was like "ok let's all leave with sam", this market would probably have everything dropping? hasn't happened so far, might just be slow market reaction, but it seems plausible that people actually stay because the board gives reasonable cause for concern

Ryan Greenblatt predicts YES


> probably this market would have everything dropping? hasn't happened so far, might just be slow market reaction, but seems plausible that actually people stay because the board gives reasonable cause for concern

Pretty low liquidity on the markets for most people.

TheBayesian 🦚 bought Ṁ35 of YES

@RyanGreenblatt good point!