Why Was Sam Altman Fired [Mod Free Response Resolves to one]
Mar 31
74% Other
17% Misalignment between profit and non-profit goals that isn't covered by another explicit answer
5% AI Safety
2% We don't find out before 2024 Apr 1

Resolves to whichever of these is closest. If there are multiple separate reasons, I may resolve to % on several but strongly disfavour this.

A group of mods are going to resolve this, because it's pretty subjective. Currently the mood of mods is that if it were to resolve today (2023 Nov 30) it would resolve to "we don't know". I don't intend to trade on this market any more.

Rachel Freedman

If the reason is that Sam tried to oust a safety-focused board member, would that resolve to goal misalignment, AI safety, or other?

Joshua 🦚 bought Ṁ10 of Other YES

Toner just did another interview with the WSJ where she repeats that it wasn't about safety, as Shear and others have said before.

Personally I don't think any of the current options adequately summarize the reason Sam was fired, so I'm betting on Other.

TheBayesian 🦚

One big possibility being proposed is that Altman tried to oust another board member, or engaged in some similar power struggle, and it backfired. That could be interpreted as misalignment between profit and non-profit goals, if Altman was trying to oust members who were less focused on near-term funding for scaling research, or some such. Would it resolve to that, or to 'Other', in the sense that a failed ousting is far enough from the current options that it resolves Other?

Tim

@TheBayesian But why was he trying to oust Helen? Because of safety disagreements (and her publishing about them). No safety disagreements, no board action.

David Bolin

@Joroth It doesn't make any difference if the reason the board acted was not the safety disagreements but perceived bad behavior on Sam's part.

Then the reason is the bad behavior, not safety disagreements.

Tim

@DavidBolin Depends on how credulous you are about the stated reasons, I suppose.

Bad behavior is fairly subjective; the winning choice right now talks about a reason not covered by another answer. If not for safety concerns, the behavior wouldn't be perceived as "bad".

On a more traditional board, Sam being really timid about pushing the limits of AI models would be considered bad behavior. It would seem odd if the market resolved this way, but I'm still a newbie 🤷‍♂️

jackson polack

The board could act because of safety disagreements and bad behavior!

Tim

@jacksonpolack I think Nathan’s planning on picking just one though.

https://manifold.markets/NathanpmYoung/why-was-sam-altman-fired-mod-free-r#qg3QGszz5OEMFDeZs3wL

This at least gives me some confidence that if @NathanpmYoung and I have the same conception of the facts (and nothing changes by the time this resolves) then it would resolve AI Safety.

Caleb bought Ṁ125 of AI Safety YES

I'm interpreting the NY Times article as lending some credence to 'AI Safety', insofar as the CSET report arguably aimed to further AI safety.

Eli Tyre

Love the banner image.

It just needs more crazy, frenetic energy to capture the spirit of the week.

Joshua 🦚

DISMSHED 😂

Katja Grace bought Ṁ30 of Other YES

If he was not consistently candid, I'm understanding that to resolve to "other". (Like if it wasn't about a specific dishonesty.) I guess unless the dishonesty seems to be clearly forwarding profit motives. Curious if that's wrong.

Nathan Young

@KatjaGrace Yeah I guess, but like we have to be sure it was just that. Which I dunno, is pretty wild.

GCS bought Ṁ10 of AI Safety NO

https://twitter.com/eshear/status/1726526112019382275

"PPS: Before I took the job, I checked on the reasoning behind the change. The board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I'm not crazy enough to take this job without board support for commercializing our awesome models."

Rahul Sridhar sold Ṁ40 of Misalignment between... YES

Saw this interview with Emmett Shear (new interim CEO of OpenAI) and he's a full-on doomer, so now I buy that it was about safety.

https://x.com/eshear/status/1673016996555014147?s=20

T J

Why option semantics can be tricky.

Let's say (hypothetically) I want to bet on the possibility that the conflict was about the distribution of strategic focus between productionization/monetization vs deep research. What is the option I should select?

Some current trends exogenous to this question would tag that as an AI safety issue -- that Sam was more focused on profit or products, and not enough on risks. However, if I believe that this tagging is based on a poor assumption or some contingent facts, then it is less clear if I should bet on the AI Safety option. (As an example, it's possible that OpenAI focusing less on products and more on research is more likely to scale their models faster without early feedback from reality, increasing the chances of sharper emergence of takeover-worthy artificial agency.)

Nathan Young bought Ṁ0 of AI Safety NO

@TJ I think that depends on the deep research. Most situations like that sound like AI safety. But if it was capabilities research, then probably it would resolve to misalignment.

Ben Shindel sold Ṁ16 of AI Safety YES

Lol... see this is the problem with a market where the creator:

-Controls the options

-Is the only one who can add options

-Bets in their own market

AI safety was the "closest option" (at least judging by the near-consensus in the larger market), until you created a more specific option which was closer. Conveniently you're holding a large NO share on AI safety.

Don't think I'll be giving a high review on this market, and I'm surprised to see this from a moderator.

TheBayesian 🦚 bought Ṁ10 of AI Safety YES

@benshindel there are big advantages to the creator controlling the options though, like there are fewer silly options and fewer will resolve N/A (making the opportunity cost of bets higher). it is unfortunate that the (currently) winning option got added after all the others tho

TheBayesian 🦚

Also, it's far from settled; "Alignment coup" is at 72% in the other market, and "Misalignment with nonprofit mission" is at 75%. Kinda hard to tell which one is gonna be more significant, still.

Ben Shindel

@TheBayesian 🤷‍♂️

Mark Hamill

@TheBayesian "Alignment coup" isn't referring to the plain meaning of the phrase

TheBayesian 🦚 sold Ṁ18 of AI Safety YES

Could you rephrase? I'm not sure what you're saying

Mark Hamill

@TheBayesian It implies slowing down or stopping current capabilities work, which isn't inherent in "alignment coup" at all

Rebecca

@benshindel What do you mean by ‘closer’?

Nathan Young

@benshindel That's quite the accusation. Have you even read it?

"Misalignment between profit and non-profit goals that isn't covered by another explicit answer"

If it's misalignment related to AI safety then it resolves AI safety.

Ben Shindel

@NathanpmYoung Yes, I did

Ben Shindel

@NathanpmYoung Your bio says “tell me if I have done something you think is a bad norm.” This is a bad norm.

Nathan Young

@benshindel I am not paid to do this, nor do I get much benefit at all, other than closing markets sooner. I felt attacked and still don't understand what your issue is. I don't feel obliged to explain myself to you if our interactions are like this. I am a person, just like you, who doesn't like being accused of stuff. If you want to give me a bad rating, do so. I doubt I'll respond to you in future if our interactions keep on like this.

The other mods will choose the resolution to this market. If you can make a clear case, I'll happily dump my shares somehow, but that should be costly to you, not aspersions I don't understand.

Ben Shindel

@NathanpmYoung You’re welcome to dm if you don’t understand my issue with how you’ve run this market. I don’t personally care about profit (in fact, I’ve gained mana in this market) but your own bio explicitly says to raise situations like this to your attention. I probably won’t give you a bad review, tbh, because I think you’re acting in good faith. But you’re a moderator on this site so I think you should be held to a high standard when running markets.

Rebecca

@benshindel What is the issue that you’re trying to point out?

Nico

@benshindel I agree that Nathan should be held to higher standards and often falls short, but I’m failing to see the issue here.