In a year, will we think that Sam Altman leaving OpenAI reduced AI risk?
12% chance (Nov 25)

Sam Altman just left OpenAI: https://openai.com/blog/openai-announces-leadership-transition

Will resolve to the majority opinion of a poll (with options "reduced" and "increased") of people who I think are trustworthy on this question. (Feel free to propose different resolution criteria.)

Edit: Clarification: I will resolve this market based on whether the board's initial decision to fire Sam Altman set off a chain of events that ultimately reduced/increased AI risk. So if Sam returns to OpenAI, this market would NOT resolve as N/A.

bought Ṁ202 YES

A bunch of safety-minded people getting fired or leaving is a bad sign for OpenAI. But they, just like the board, didn't have much sway over OpenAI's operations either. So it's plausibly a good sign for the world that more people can now speak more freely about what they may have seen? Willing to buy it up to 15%, and did.

Paired with the announcement of Paul Christiano as head of the US AISI, this at least provides reason to believe this sequence of events enables people to mount increasing pressure on OpenAI to improve its safety practices. But the two don't seem causally related, unless evidence of negligence comes out of the firing affair. https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety

bought Ṁ1,500 NO
sold Ṁ1,213 NO

@EliLifland I think I agree with you but I don't know who Jonas is going to poll

bought Ṁ150 NO

@Joshua He said below "I might survey the people at constellation.org or something like that"

@EliLifland this was expected, no? What did you think the old board could have accomplished had it stayed on and prepared a better case? I think looking at the board has been, and is, the wrong way to evaluate risks, given that it has little power.

I think the chaos has shown regulators more clearly that self-governance cannot be trusted. Demands for checks and balances are increasing each day, as is the acknowledgement of existential risks. Public board chaos was good for cementing worries imho (instead of well-curated drama-as-usual that just looks like a PR stunt).

I know EAs aren't viewed well in DC right now but the ideas are in the water supply, which is what ultimately matters?

@seikonfertrad I hope you're right, both for the world's sake and my market position's! Re: what was possible for the board, it's unclear, but it seems they misjudged the situation and have admitted as much. Plausibly they could have added more board members to buy time to figure things out. They also got killed in the PR war. The board does technically have tons of power ofc, but yeah, it's unclear how much practical power they have given the dynamics at play.

I hope you're right that it will motivate better governance! My sense is that the blowback has been substantial in the tech world; I'm less sure about DC (beyond the FTX/tech money/etc. criticisms). There may also be negative effects: labs may become less willing to put EAs/x-risk folks on boards, and might be scared of strict governance structures in general.

Re: ideas in the water supply, I'm not sure how much this incident contributed to that, as opposed to CAIS letter, ChatGPT, AI Safety Summit, etc.

With Sam being more outspoken, more people are seeing the governance issues and parallels to SBF that also threaten the market directly. Elon just took action that I suspect has a high likelihood of resulting in better scrutiny. https://twitter.com/xDaily/status/1763464048908382253

This, plus EU action against MSFT, seems to be leading to increasing scrutiny from regulators and likely an increase in evaluations. Whether that translates into actual risk reduction is another question, but in expectation I'm becoming more confident.

I also haven't seen much resulting animosity towards AI safety/EA from anyone outside of the tiny e/acc crowd, so overall I think safety is luckily winning out here. Conflict in public, even if slightly chaotic, seems to be helping everyone make sense of things faster.

I don't think the old board had the ability to create as strong a case as there is now distributed across many more actors.

Even if Elon isn't thinking clearly about safety, I think this lawsuit is likely to be positive in expectation. You can only sue successfully on solid grounds, which in the case of OpenAI might be its non-openness. What follows afterwards is more important and plausibly better informed as a result, even on the open source issue.

bought Ṁ100 NO

https://www.astralcodexten.com/p/sam-altman-wants-7-trillion

Sam Altman seems much more outwardly accelerationist post-firing.

predicts YES

@JonasVollmer, I request a clarification. Is this "AI risk", or "AI extinction risk"? I just assumed AI extinction risk (stupid of me), but re-reading the description, there is no reason to interpret it that way, and @31ff's comment below discussing mundane risks of AI reminded me that AI extinction risk is NOT the usual interpretation of "AI risk", even if it is the usual one for me and my friends.

It is entirely possible we will think Sam Altman leaving increased mundane risks of AI but reduced AI extinction risk or vice versa. How does this question get resolved in such cases?

predicts NO

@SanghyeonSeo I think when people on Manifold talk about AI risk it's almost always about "catastrophic" risk, including extinction, astronomical suffering, etc., but not bias/inequality as "mundane" risks.

bought Ṁ10 of YES

I consider that OpenAI with Sam Altman at the helm had already greatly increased AI risk. GPT-2 was just a toy for NLP researchers and nerds; GPT-3 was already quite dangerous capabilities-wise, yet remained too niche for bad actors to notice; but ChatGPT is what finally contaminated the web with awful AI outputs (grifters even printing entire "books" of that nonsense and selling courses on how to do it to their followers, increasing the noise exponentially), propaganda (https://www.newsguardtech.com/special-reports/ai-tracking-center/), and just bland text/broken code. Enough so that even OpenAI now seems cautious about updating its corpus, fearing the effects of training on LLM outputs (https://arxiv.org/abs/2305.17493). On top of that, many people have now wrongly assumed LLMs are the general-intelligence miracle we needed, increasing true real-world risk as companies attempt to automate away workers/managers with hallucination-prone stochastic parrots (https://arstechnica.com/information-technology/2023/05/ibm-pauses-hiring-around-7800-roles-that-could-be-replaced-by-ai/).

Thus, I would already resolve this YES: Sam's choice of a B2C business model inadvertently caused silent yet significant civilization-scale AI safety concerns, long before actual AGI could be achieved.

bought Ṁ50 NO from 32% to 31%
predicts NO

I think in the closest counterfactual world, the board decides that action against Sam isn't so necessary or urgent. So either they take more time to craft PR-palatable messaging, or they settle for a less severe reprimand (a salary cut or threat of sacking) and Sam convincingly expresses regret and alters his behavior. Or maybe instead of firing him, they move him to another position.
OP says the decision includes its second-order effects. IMHO the result has been increased control by a single man with a cult following, who has threatened to sack a board member for saying regulation is good, who has been lobbying for less regulation, and who is generally manipulative (see e.g. Irving, Graham, and his repeated inability to give a straight answer to questions regarding x-risk at the Senate hearing). Plus, somehow, the event spurred an avalanche of EA criticism within the tech community.

predicts YES

@hominidan I think I broadly agree with all of your points. But my crux is that I don't think the board would have become more capable with time; it already wasn't able to do the things you suggest. On the contrary, I suspect there would have been a slow loss of control. Few people would have tracked it, and ~nobody would have had sufficient clout to counteract irresponsible action, whether the nonprofit structure persisted or not.

The already-weak governing board took one of the only actions it thought was available to credibly signal that we're in a pickle. Yes, maybe they could have waited and curated more of a PR campaign, but maybe they're also just human and were insufficiently resourced. Either way, I think this draws serious attention to the inadequacy of the governance structure and will (hopefully) result in an actually-resourced checks-and-balances system 1 year later.

bought Ṁ100 of YES

@JonasVollmer Can you elaborate on who you think is trustworthy for this question?

@IsaacKing I might survey the people at constellation.org or something like that

bought Ṁ0 of YES

It seems plausible that the increase in AI-safety-related drama might overshadow other effects by making governance of future AI systems easier (more attention, more visibility into what can go wrong) and reducing AI x-risk. I'm betting that this and other possibilities make it higher than 20%; buy my limit order if you disagree (or lmk why this consideration is wrong).

bought Ṁ25 of YES

https://twitter.com/JacquesThibs/status/1727134087176204410

If this is accurate, it suggests that if Sam stays at Microsoft, OpenAI could splinter into a million pieces rather than everyone following him there. The former is presumably better than the latter for reducing AI risk, so my estimate of the probability here has gone from ~15% to ~40%.

predicts NO

Our only chance is not-killing-everyone-ist remainder of OpenAI merging into Anthropic.

bought Ṁ100 of YES

Rationally, I can see good arguments on either side. Emotionally, this feels very very bad. Watching what is probably a massive defection from the "don't kill everyone" equilibrium and seeing it get massive support hits different than just predicting that defection will happen.
