In a year, will we think that Sam Altman leaving OpenAI reduced AI risk?
Resolved NO on Nov 27

Sam Altman just left OpenAI: https://openai.com/blog/openai-announces-leadership-transition

Will resolve to the majority opinion of a poll (with options "reduced" and "increased") of people who I think are trustworthy on this question. (Feel free to propose different resolution criteria.)

Edit: Clarification: I will resolve this market based on whether the board's initial decision to fire Sam Altman set off a chain of events that ultimately reduced/increased AI risk. So if Sam returns to OpenAI, this market would NOT resolve as N/A.


Anonymous poll results:

Manifold market resolution: Did the OpenAI board's initial decision to fire Sam Altman set off a chain of events that ultimately reduced/increased AI risk?

Reduced AI risk: 18

Increased AI risk: 29

No opinion / show results: 8

Respondents: 53

What are people betting on?

It seems like a lot of the discourse in online AI safety spaces is focused on the backlash against the community, which seems like a misleading metric?

E.g. SB1047 struck me as better supported and debated than I would have otherwise expected, and in either world I didn't see it go through. Or Helen Toner putting her full self out there is likely better for her medium-term impact than one would think looking at the immediate vitriol.

Powerful lobbying interests would have mobilised eventually. Mobilising them sooner actually helps us learn how to handle them instead of getting surprised and overwhelmed by them later.

I continue to update in the direction of this drama having increased (a) government scrutiny and (b) commercialization pressures on OpenAI. Both seem to be reducing the race toward arbitrarily large models and bringing us more toward efficient features that seem more demonstrably controllable.

I am unsure about how to treat Ilya's new venture, but don't think this would have been avoided in most worlds.

bought Ṁ250 YES

@EliLifland this market isn't really about whether the board was right to fire him, though

@jskf Yeah, I think what's happened so far isn't enough, but I could see a case for the board's action being positive if the anti-sama momentum continues to build, and it is building to some extent on the board firing.

What do you mean exactly btw? Ex-ante vs. ex-post? Or right in some deontological sense?

@EliLifland I just meant this market is about how things actually play(ed) out, including the board resigning etc. Even if, knowing what we know now (or in the future), removing Sam Altman was justified, when they actually did it they failed; if they hadn't done it at the time, they might have been in a better position now that there's some more concrete behavior they could point to.

@jskf I don't think the board could easily have done significantly better, which is likely a crux for many here.

In most of my counterfactual worlds, the information we're getting now would not have leaked at this speed and scale, given the lack of precedent. Safetyists leaving would have been more easily brushed off as doomers gonna doom. The board also would not be in a stronger position, especially not memetically. The board drama motivated many more people to start triangulating.

The main worlds where stuff is better now are ones where the board could do some pivotal act closer to AGI. And I don't see how it would come to that.

I don't think there are many worlds where the board could have done much more than what it did. That is becoming especially clear right now, given the legal uncertainties.

In our world now we might have more racing or even another AGI lab, yes. But all of that seems more likely to happen with more checks and balances in place sooner than it would in the worlds where few remarkably strange things happened this early on. And I think those checks are what was missing more generally, more than control of OpenAI in particular.

bought Ṁ202 YES

A bunch of safety-minded people getting fired or leaving is a bad sign for OpenAI. But they, just like the board, didn't have much sway in OpenAI's operations either. So plausibly a good sign for the world that more people can now speak more about what they might have seen? Willing to buy it up to 15% and did.

Paired with the announcement of Paul Christiano as US AISI head, this at least provides reason to believe that this sequence of events enables people to mount increasing pressure on OpenAI to improve safety practices. But this doesn't seem causally related, unless we get evidence of negligence out of the firing affair. https://www.commerce.gov/news/press-releases/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety

sold Ṁ1,213 NO

@EliLifland I think I agree with you but I don't know who Jonas is going to poll

bought Ṁ150 NO

@Joshua He said below "I might survey the people at constellation.org or something like that"

@EliLifland this was expected, no? What did you think would have been possible for the old board had it stayed on and prepared a better case? I think looking at the board has been and is the wrong way to evaluate risks, given it has little power.

I think the chaos has shown regulators more clearly that self-governance cannot be trusted. Demands for checks and balances are increasing each day, as is the acknowledgement of existential risks. Public board chaos was good to cement worries imho (instead of well-curated drama-as-usual that just looks like PR stunts).

I know EAs aren't viewed well in DC right now but the ideas are in the water supply, which is what ultimately matters?

@seikonfertrad I hope you're right, both for the world's sake and my market position's sake! Re: what was possible for the board, it's unclear, but it seems like they definitely misjudged the situation and have admitted as much. Plausibly they could have added more board members to buy time to figure things out. They also got killed in the PR war. The board does technically have tons of power ofc, but yeah, it's unclear how much practical power they have given the dynamics at play.

I hope you're right that it will motivate better governance! My sense is that the blowback has been substantial in the tech world, am less sure about in DC (beyond the FTX/tech money/etc. criticisms). There may also be negative effects where labs are less likely to want to put EAs/x-risk folks on boards, and might be scared of strict governance structures in general.

Re: ideas in the water supply, I'm not sure how much this incident contributed to that, as opposed to CAIS letter, ChatGPT, AI Safety Summit, etc.

With Sam being more outspoken, more people are seeing the governance issues and parallels to SBF that also threaten the market directly. Elon just took action that I suspect has a high likelihood of resulting in better scrutiny. https://twitter.com/xDaily/status/1763464048908382253

This, plus EU action against MSFT, seems to lead to increasing scrutiny from regulators and likely an increase in evaluations. Now, whether that translates into actual risk reduction is another question but in expectation I'm becoming more confident.

I also haven't seen much resulting animosity towards AI safety/EA from anyone outside of the tiny e/acc crowd, so overall I think safety is luckily winning out here. Conflict in public, even if slightly chaotic, seems to be helping everyone make sense of things faster.

I don't think the old board had the ability to create as strong a case as there is now distributed across many more actors.

Even if Elon isn't thinking clearly about safety, I think this lawsuit is likely to be positive in expectation. You can only sue successfully on solid grounds, which in the case of OpenAI might be its non-openness. What follows afterwards is more important and plausibly better informed as a result, even on the open source issue.

bought Ṁ100 NO

https://www.astralcodexten.com/p/sam-altman-wants-7-trillion

Sam Altman seems much more outwardly accelerationist post-firing.

predicted YES

@JonasVollmer, I request a clarification. Is this "AI risk", or "AI extinction risk"? I just assumed AI extinction risk (stupid of me), but re-reading the description, there is no reason to interpret it that way, and @31ff's comment below discussing mundane risks of AI reminded me that AI extinction risk is NOT the usual interpretation of "AI risk", even if it is the usual one for me and my friends.

It is entirely possible we will think Sam Altman leaving increased mundane risks of AI but reduced AI extinction risk or vice versa. How does this question get resolved in such cases?

predicted NO

@SanghyeonSeo I think when people on Manifold talk about AI risk it's almost always about "catastrophic" risk, including extinction, astronomical suffering, etc., but not "mundane" risks like bias or inequality.

I consider that OpenAI with Sam Altman at the helm had already increased AI risk greatly. GPT-2 was just a toy for NLP researchers and nerds, and GPT-3 was already quite dangerous capabilities-wise yet was kept niche enough that bad actors didn't notice. But ChatGPT was what finally contaminated the web with awful AI outputs (grifters even printing entire "books" of that nonsense and selling courses on how to do it to their followers, increasing the noise exponentially), propaganda (https://www.newsguardtech.com/special-reports/ai-tracking-center/), and just bland text/broken code. Enough so that even OpenAI now seem cautious about updating their corpus, fearing the effects of training on LLM outputs (https://arxiv.org/abs/2305.17493). On top of that, many people have now wrongly assumed LLMs are the general-intelligence miracle we needed, increasing real-world risk as companies attempt to automate real-world workers/managers with hallucination-prone stochastic parrots (https://arstechnica.com/information-technology/2023/05/ibm-pauses-hiring-around-7800-roles-that-could-be-replaced-by-ai/).

Thus, I at least would already resolve it as YES, as Sam's choice of a B2C business model had inadvertently caused silent yet significant civilization-scale AI safety concerns, long before actual AGI could be achieved.

bought Ṁ50 NO from 32% to 31%
predicted NO

I think in the closest counterfactual world, the board decides action against Sam isn't so necessary or urgent. So either they take more time to craft PR-palatable messaging, or they are satisfied with a less severe reprimand like a salary cut / threat of sacking, and Sam convincingly expresses regret and alters his behavior. Or maybe instead of firing him, they move him to another position.
OP says the decision includes its second-order effects. IMHO it seems the result has been increased control by a single man with a cult following, who has threatened to sack a board member for saying regulation is good, who's been lobbying for less regulation, and who's generally manipulative (e.g. see Irving, Graham, and his repeated inability to give a straight answer to questions regarding x-risk at the Senate hearing). Plus somehow, the event spurred an avalanche of EA criticism within the tech community.

predicted YES

@hominidan I think I broadly agree with all of your points. But my crux is that I don't think the board would have become more capable with time, and it already wasn't able to do the things you suggest. On the contrary, I suspect there would have been a slow loss of control. Few people would have tracked it and ~nobody would have had sufficient clout to counteract irresponsible action, whether the nonprofit structure persists or not.

The already-weak governing board took one of the only actions it thought was available to it to credibly signal that we're in a pickle. Yes, maybe they could have waited and curated more of a PR stunt, but maybe they're also just human and were insufficiently resourced. Either way, I think this draws serious attention to the inadequacy of the governance structure and will (hopefully) result in an actually-resourced checks-and-balances system a year later.
