resolved Feb 16
Sam has been misrepresenting the board members’ positions when talking to other board members
Sam tried to manipulate the board members
The board caught him lying several times
Because Sam/OpenAI have been intentionally misleading the board
Sam was "extremely good at becoming powerful", too often getting his way in various conflicts with the board. Past board members and employees who opposed him, on a variety of issues, were forced out. They wanted a less powerful CEO.
Interpersonal squabbles not related to AI safety
Told one too many little fibs/frequently dishonest about minor decisions/pathological liar
Sam tried to oust other board member(s)
Any reason whatsoever (resolves yes)
Something he did
Disagreement with Ilya Sutskever
Not Consistently Candid
Literally no other reason other than habitual lying and dishonesty, topic irrelevant
Sam defrauding/lying to the board
Sam had minor communication issues (which the board felt was enough to fire him)
Something related to Microsoft
Sam tried to compromise the independence of the independent board members by sending an email to staff “reprimanding” Helen Toner https://archive.ph/snLmn
We won't know, within the next year.
Internal power struggle, Ilya wanted Sam out
A power play with no straightforward case ever given by the board

RESOLUTION BEING DECIDED HERE, COMMENT OR FOREVER HOLD YOUR PEACE https://manifold.markets/sophiawisdom/why-was-sam-altman-fired#moQKORcoQ82R9NZnLCy3


Is there a proper way to debate / dispute the resolution process?

@ThothHermes join the discord and make your case

@ThothHermes what is it you want to debate/dispute, out of curiosity? Sophia listed the ways she intended to resolve and left the comment there for replies for a month
the market has been in a vegetative state for quite some time and Sophia isn't using the site, so it needed to be resolved. is there something in particular you don't agree with?

@shankypanky Unfortunately I wasn't able to keep track of the comments on this question before it resolved.

With questions like this one, it seems to me that it would usually be pretty hard to know the correct answer for certain even at resolution time (if ever). That said, sometimes I will still bet in these markets. If I do, it is usually because the current market probabilities seem too confident, so I will buy either a yes or a no with the intent to sell once the market moves closer to what I see as a more calibrated probability.

However, I think that if the question resolves to what someone decides are the correct answers, that means I may no longer be able to bet on what I think the true probabilities are, and it could mean that I should instead bet on what I predict that person will choose (or perhaps the outcome of a debate). So if I won't be able to track those conversations, that means I should probably just not bet on these kinds of markets.

It seems preferable to me to be able to bet in the market without having to worry about wading into the debate about the answers to this question.

I'm going to resolve most of the remaining ones to what Sophia commented 1 month ago. The few which have relatively ambiguous copy (e.g. "something related to Microsoft") or are still under contention I will N/A.

Hopefully by the next time this happens we will have learned that the creator sometimes has to add clarity to a user-submitted answer, so that each individual submitted answer has clear resolution criteria.


Here we go again

@sophiawisdom Can we just resolve based on that list you posted?

@sophiawisdom ⚠Please Resolve Your Closed Markets.

@SirCryptomind We need to know what happened in order to resolve a market.


@SirCryptomind I've got mana in here, so I leave the mod decisions to those without any bias. I'm sure she'll be back any time now...

@Joshua Gotcha. I don't know enough about the situation to be able to resolve myself. I bet in some of the Altman markets but only stuff I was sure about, not this crazy big market lol.

@Joshua this market is a constant in a rapidly changing world

@sophiawisdom Please bite the bullet on this and resolve it now. I am sure you will have someone who will disagree with you whichever way you go

@Orca I don't think the investigation in 6 months will be any more trustworthy of a proxy (speaking against my own interest here).

@ooe133 I think all data to resolve is there. More time will bring out more opinions but not the facts

@Orca That's interesting. I'm really interested in understanding the extent that Microsoft was involved in the weeks leading up to the incident (beyond the dev day, of course). What's your thinking/source on that? You could DM me if it's sensitive.

@ooe133 Michael -- I am sorry if I gave you the feeling that I have some inside information or source. I am just following the news, like most of the folks here. But anyway here are my thoughts.

Regarding any investigative report coming out, I am not sure why folks think it will come out, or that, if it does, it will be made available in its entirety and clarify anything about people's roles in this affair. Let us look at the factors at play here:

a) People with megaton EGOs

b) People with tons of Money. Lots of money involved

c) Everyone lawyered up like crazy

d) Everyone has signed NDAs

e) Danger of being blackballed or retaliated against is ultra high if you break the unspoken rules

f) Small community so you cannot escape the consequences

Given the above, any report that comes out will be highly sanitized and make everyone look like angels. It will mostly reiterate what is already known or surmised publicly, but will likely soften any rough edges and make things look more benign than reported so far. Everyone will be a winner.

Now regarding MSFT, again nothing more than what is out in the popular press. My thoughts are

1) MSFT was blissfully unaware of all this until it happened. They were under the impression that all was going well and they were on top of the world.

2) After this happened, MSFT put the hammer down and played the MOST important role in putting together the final compromise, in such a manner as not to compromise their own future (i.e., MSFT's future).

3) MSFT had too much at stake for this to fall apart. IMO, they have no meaningful and immediate AI offering that could replace ChatGPT and go head to head against GOOG, META, AMZN, etc. They were behind all of them and would have been too reliant on whatever ChatGPT access they had garnered.

4) MSFT will ensure that their actions during this episode remain under wraps, and any information on their role will be heavily curated by MSFT.

This whole thing clearly shows that the non-profit charter of OpenAI was hogwash. Too many powerful interests with tons of money were circling for it to remain a viable non-profit with a vision to better humanity. In the end, it had to serve the interests of those folks. Even the employees of OpenAI did not really believe in that mission (as evidenced by their willingness to bolt to MSFT).

Just my thoughts on this matter. Please feel free to comment.

Ok, I’ve resolved “NO” all the answers I think are obviously false. Here are my opinions on how to resolve the remaining questions. I want to resolve all the questions by the end of today (Jan 5), so if you disagree with my resolutions speak now or forever hold your peace.

Conflict of Interest with Other Ventures: NO.

There has been talk about his other ventures and how they might have been cause to fire him, but I don’t think there was any specific confirmation of this.

Lied about infra costs and covered it up: NO.

He lied about a lot of stuff, but I didn’t hear anything about infra costs specifically, and there’s certainly nothing about how it was specifically the lying about infra costs.

Negligence in Addressing AI Safety Concerns: NO.

It was about the lying

Sam intended to start a new company before he left: NO.

This seems likely (the chip company) but, like “conflict of interest” I don’t think it was a reason to fire him.

Trying to speed up capabilities despite safety: NO

it was about the lying

Something related to Microsoft: NO

it was about the lying

Fundamental disagreement about OpenAI's safety approach (the board wanted more safety): NO

it was about the lying

Allocating resources towards accelerating AGI without involving safety teams: NO

it was about the lying

Approving capabilities projects without informing safety team: NO

it was about the lying. Plausibly there were capabilities projects approved, but I don’t think it was for safety reasons. Possibly this gets a partial resolution.

Consistently making business decisions without Board knowledge: NO

it was about the lying. plausibly they are mad about this also; if so someone please point me to it.

Withholding Specific AI Project Developments or Outcomes from the Board: NO

haven’t heard anything about this

Sam was more interested in creating products and development speed, others were more interested in nonprofit mission and safety: NO

it was about the lying

Misalignment with nonprofit mission: NO

this was always incredibly vague, but i still think lying isn’t meaningfully captured by this.

Sam didn't notify the board of important Devday-related decisions: NO

not about notification, was about the lying

Board believes that OpenAI shouldn’t get deeper into customer products, but should be focused on providing foundational APIs for other companies to use (e.g. Quoras Poe): NO

it was about the lying

Disagreement over future direction of OpenAI and capital allocation for research vs commercial: NO

it was about the lying

Something related to a new unannounced model: NO

it was about the lying

Deception/miscommunication around selling employee shares: NO

haven’t heard about this specifically. Chance of partial resolution if this was one of the lies he was fired for

Tapping into the superalignment compute to scale or provide other commerical services and not telling the board about it: NO

it was about the lying

Using the superalignment allocated compute for something announced on devday, without board permission: NO

it was about the lying

Fundraising for OpenAI's next set of commercial products when the safety team repeatedly asked him not to: NO

it was about the lying

Fundamental disagreement about OpenAI's capabilities research: NO

it was about the lying

Sam wanted to scale the current models they had and squeeze the max utility out of them, while Ilya et al think that this is hindering their AGI efforts.: NO

it was about the lying

Ilya was upset after Sam previously tried to reduce his role at the company: NO

from my read, it was more that he thought sam was trying to stir trouble, but if people have citations for this i’m willing to give it partial resolution.

Sam hasn’t been honest about technical stuff that was going on at OpenAI: NO

from my understanding it was more general, and reading this answer makes me think it was specifically technical which I don’t believe. Possibly willing to give partial resolution

Internal power struggle, Ilya wanted Sam out: NO

This is kind of generic because we knew from the start Ilya wanted to get him out, but my reading of this is that Ilya wanted Sam out to be on top himself, which doesn’t seem corroborated

Interpersonal squabbles not related to AI safety: YES?

tempted to resolve this yes even though “squabbles” makes it seem pretty small

A power play with no straightforward case ever given by the board: NO

“ever given” makes it confusing, but even taking just “power play with no straightforward case”, it seems to be wrong

Told one too many little fibs/frequently dishonest about minor decisions/pathological liar: YES

seems strongly borne out by the evidence

Sam had minor communication issues (which the board felt was enough to fire him): partial resolution

It was “just” communication issues, none of which were major, so I’m tempted to resolve positively, but it also made it sound like the issue as a whole was small, which seems untrue. Willing to listen to evidence on this one, otherwise maybe I’ll resolve 30%

The board caught him lying several times: YES

this seems true

Because Sam/OpenAI have been intentionally misleading the board: YES

seems pretty true

Literally no other reason other than habitual lying and dishonesty, topic irrelevant: YES

seems true

Board determined Sam exhibited dark triad tendencies which they felt hindered OpenAI’s long term mission: YES

they didn’t identify dark triad tendencies per se, but the lying stuff seems like this. willing to listen to evidence on this one

The reasons given by the Former OpenAI Employees letter (https://news.ycombinator.com/item?id=38369570): NO

these reasons seem pretty old and so I don’t think they were related

Sam tried to oust other board member(s): NO

it seems to have been about the lying etc. and not mainly because he attempted to get helen toner off the board. maybe partial

Disagreement around filling board vacancies: NO

doesn’t seem like a main cause. maybe partial

Sam tried to compromise the independence of the independent board members by sending an email to staff “reprimanding” Helen Toner https://archive.ph/snLmn: NO

I don’t think it was about compromising their independence specifically. maybe partial

The board felt that blowing up OpenAI was better for humanity as a whole than allowing it to continue pushing AI progress forward: NO

was about lying

Sam was "extremely good at becoming powerful", too often getting his way in various conflicts with the board. Past board members and employees who opposed him, on a variety of issues, were forced out. They wanted a less powerful CEO.: YES?

seems true. seems less about the board’s specific desire for a less powerful CEO, and more about one whom they trust. maybe partial, maybe yes.

We won't know, within the next year.: NO

we seem to have good evidence

The board did not trust Sam to lead the company as it developed the Q* project: NO

not super about Q*?

Sam planned a purge of undesirable employees: NO

I’m unaware of evidence of this

Sam planned a purge of EA-adjacent board members and executives: NO

unaware of evidence

Sam tried to start an AI chip company: NO

this doesn’t seem like a significant reason

Something the board and Altman are legally restrained from talking about (due to, for example an NDA with a third party): NO

this doesn’t seem true at the moment

Sam tried to manipulate the board members: YES?

idk this seems generic but I guess

Sam has been misrepresenting the board members’ positions when talking to other board members: YES

this happened and was a major reason

I'm sure people can quibble about some specific ones that might get partial resolution or whatnot, but all in all I think this is pretty great. It was about the lying. Great work.

@sophiawisdom I think people are gonna disagree with a bunch of these 😆

I don’t think I’m invested in any of the controversial options that ppl will protest so I think I’m fairly unbiased but:

Sam tried to oust other board members should definitely resolve YES. There has been a TON of reporting that pretty clearly indicates that was a reason for his firing, if not the most important one. It also /does/ relate to his lying.

I DON’T think the “habitual lying” or “no reason other than lying” options should resolve YES. I think the board fired him for manipulating them by misleading them about key things, and were worried about his decision making because of this. I don’t think that that really counts as “habitual lying”. Like, I don’t think he was just lying about random stuff or that they would have fired him if he just had a tendency to lie about small things. They were worried about his personality traits and manipulation.

(Also this comment just reflects what I believe about the BOARD’s assessment and motivation, not whether that was sound or correct. In fact, I think they probably didn’t have a great assessment of Sam’s character. OpenAI please hire me 😆)

@benshindel This depends on some counterfactuals.

If he had tried to oust board members without lying in the process, would he have gotten fired? If yes, then he got fired for trying to oust board members. If no, then he didn't get fired for trying to oust board members, even though he did in fact try to oust board members.

If he'd lied similarly without trying to oust a board member, would he have been fired? If yes, then he got fired for lying. If no, then he didn't get fired for lying.

If he had not lied previously would he have been fired? If yes then he didn't get fired for habitual lying, he got fired for this instance of lying. If no, then he got fired for habitual lying, and not this instance specifically.

It's the impression I've gotten that this instance of lying whilst trying to oust a board member was the straw that broke the camel's back on top of a history of this sort of thing, and this one instance might not have been fireable by itself. And it's my impression that if he hadn't lied whilst trying to oust a board member, also, that wouldn't have been fireable. I'm more confident in the latter than the former, but don't have links at hand - if you do to demonstrate that I'm off-base though, that'd be great.

@sophiawisdom Seems pretty great. Thank you for running this market. The only thing I have an issue with here is the dark triad. That seems very speculative. Sure, you can call the lying machiavellianism, but what about narcissism and psychopathy? I haven't heard much evidence of these. I also can't imagine the members of the board ever thought to themselves "oh, he has the dark triad, better fire him".

I also agree with Ben about firing other board members. You can perhaps say that it was the trigger, not the underlying issue, but even the trigger is a cause. If he didn't try to remove Toner, he probably wouldn't have been fired in 2023, and perhaps it would have never happened.

"Not Consistently Candid" is an answer (my answer) that N/A'd early on. Now that the answers resolving YES are centered around lying, I'm curious what people think of this answer. I see it as basically synonymous with lying, and it's the exact phrase that the board used in the blog post explaining their actions.

No sour grapes here, I don't envy the position @sophiawisdom is in to resolve this market, or the work that went into keeping it organized along the way. I'm just here to predict outcomes, and I'm curious how well people think I predicted this one.


Something related to Microsoft: NO

Um... everything big that happens at OpenAI now is related to Microsoft? Specifically, Microsoft looms large over every single decision made by Altman, Sutskever, the board, the leadership team, etc.

Microsoft was obviously pressuring them to commercialize and expand, that commercialization and expansion was the catalyst for the conflict, and anyone anywhere at any time can set up another market to demonstrate this.

You'd need to dive deep into convoluted motivated legalese in order to say "Why was Sam Altman fired? Not because of something related to Microsoft!".

@ooe133 if the interpretation of that answer makes it so meaningless that it can't resolve NO, then it should resolve NA. A sensible interpretation is that it was something involving Microsoft to a degree meaningfully greater than the degree to which all goings on at OpenAI involve Microsoft in some way.

@ooe133 What is the “something related to Microsoft” that caused Sam Altman to be fired, if that’s what you believe?


Uh, no, not at all. That's not how logic or language works.

If it was something related to Microsoft, it resolves to "yes". If it was nothing related to Microsoft, it resolves to "no".


The Azure credits alone (which ChatGPT runs on) were more than enough, but if not that, then Microsoft's involvement in most of OpenAI's projects and growth is plenty. If the OpenAI leadership and board's decisions revolve around Microsoft as the elephant in the room, then "Something related to Microsoft" was one of the ways to describe why he was fired. If it had been Microsoft orchestrating the incident, the option would be "Microsoft orchestrated the incident", which itself would usually be covered up as standard business practice and would therefore require leaks, which also aren't reliable.

This is a pretty reasonable thing to think in a market that lists "Sam lied" in like 5 slightly different variations at the very top and hundreds of conjunctive answers that everyone ignored, and I'm surprised people think otherwise. Why was this market resolved so early at all on an issue this likely to be filled with subterfuge? Why not resolve the lab leak market years ago when the NYT kept saying it didn't happen?

If Sophia ever wrote on the market description that she was going to resolve it within a couple months, then I concede that it was entirely my fault for getting involved in this at all.

@ooe133 There's a convention in questions like these that answers whose resolution can already be known definitively at the time they are added get NAd because they are pointless.

So either it gets NAd if you read it literally, or you need to read into the spirit of what it was intended to mean as having a higher bar of relatedness, in which case it sounds like a NO.

There isn't a way for it to resolve YES, because the only argument you have for YES is that it can't resolve any other way, which triggers an NA because it's therefore a pointless thing to ask.

@chrisjbillington I do in fact still think that there are decent odds (>20%) of Microsoft having been deliberately involved and/or the relevant actors thinking a ton about Microsoft when they made their decision to initiate the incident. I didn't know about the convention you described, which I agree with.

I don't think this is an N/A, I think this has a 20% chance of resolving "yes" and that it is ridiculous to expect it to resolve within 2 months, same as lab leak. If Sophia said on the description that she would resolve it quickly (I think she might have?), it's 100% my fault for missing that, and frankly I wasn't aware of the convention either so don't worry about resolving it as "no".

I can make some kind of promise not to sell my positions if that specific market restarts (I think I have a decent track record for this). I didn't mean to make a fuss about this but I really, really liked this market at 20%, and I really, really don't get why people think this issue is resolved.

Like, Gell-Mann amnesia isn't some conspiracy theorist thing, it's literally something that people can test and prove true in under 3 minutes at any time.

@sophiawisdom I appreciate you writing all these up. 99% of these suggested resolutions haven't been objected to, maybe go ahead and resolve those now at least?

Personally, I would second Shump's objection that "Sam Tried to Oust Board Members" should clearly resolve Yes.

The original description of this market was:

These all include "and then lied about it to the board" because that's what they list as the specific thing he was fired for ("lack of candor")

Sam trying to oust Toner through lying to a board members about the opinion of another board member is the one big specific instance of Sam doing something and then lying about it that we know about. It was clearly a significant factor, as it was emphasized in every major article covering what Sam did to get fired.

Per the NYT:

Mr. Altman called other board members and said Ms. McCauley wanted Ms. Toner removed from the board, people with knowledge of the conversations said. When board members later asked Ms. McCauley if that was true, she said that was “absolutely false.”

“This significantly differs from Sam’s recollection of these conversations,” an OpenAI spokeswoman said, adding that the company was looking forward to an independent review of what transpired.

Some board members believed that Mr. Altman was trying to pit them against each other. Last month, they decided to act.

You can see this other market about why Sam was fired resolved to this option, as did this poll I ran about why Sam was fired. Of all the polls I ran based on the answers in this market, that answer had the most votes in its favor. The poll about him telling one too many lies also resolved yes, as most people don't view these as mutually exclusive. He was fired for having a general pattern of dishonesty, and the specific instance of his trying to oust Helen Toner was the most significant example of this.

I think I personally agree with the rest of the resolutions you proposed, and people have had enough time to object at this point.

@Joshua I didn’t see anyone specifically address Chris’ counterfactual on that?

I could very much imagine that “seeking to oust someone, even a board member, who is doing something harmful to the organization” is part of the expressed values of the mission and would be considered a positive act - provided it was done honestly.

Indeed, it could be argued that the board expressed those same values themselves in firing Sam.
