Why was Sam Altman fired? [Resolves to Public Poll in December - Same Answers/Format As Main Market]
resolved Dec 22
Resolved YES: Sam tried to oust other board member(s)
Resolved YES: Interpersonal squabbles not related to AI safety
Resolved YES: Disagreement around filling board vacancies
Resolved YES: Told one too many little fibs / frequently dishonest about minor decisions / pathological liar
Resolved YES: Sam tried to compromise the independence of the independent board members by sending an email to staff "reprimanding" Helen Toner (https://archive.ph/snLmn)
Resolved YES: Sam was "extremely good at becoming powerful", too often getting his way in various conflicts with the board. Past board members and employees who opposed him, on a variety of issues, were forced out. They wanted a less powerful CEO.
Resolved N/A: Interpersonal squabbles not related to AI safety (duplicate)
Resolved NO: Misalignment with nonprofit mission
Resolved NO: Sam was more interested in creating products and development speed, others were more interested in the nonprofit mission and safety
Resolved NO: Fundamental disagreement about OpenAI's safety approach
Resolved NO: The reasons given by the Former OpenAI Employees letter (https://news.ycombinator.com/item?id=38369570)
Resolved NO: Withholding specific AI project developments or outcomes from the board
Resolved NO: Sam intended to start a new company before he left
Resolved NO: Fundraising for OpenAI's next set of commercial products when the safety team repeatedly asked him not to
Resolved NO: Data leak / privacy issue
Resolved NO: Sexual misconduct

This is a derivative market of the main market on this subject: /sophiawisdom/why-was-sam-altman-fired

This market is meant to inform trading on that market by providing a gauge of public opinion.

This market will close on December 16th, and I will then create a separate Manifold Poll for each option, asking if it was a "significant factor" in Sam Altman's firing. The polls will run for several days to allow everyone time to respond to them.

The answers in this market will resolve based on the polling results, unless I believe that a poll has been manipulated by people who are obviously voting dishonestly. If this occurs, I may resolve an option by creating a private poll that only moderators can answer.

All polls will have a "see results" option so that users are not forced to vote in a poll if they are unsure how to vote but still want to see the full results.


All answers here are taken from the main market. If you want to submit additional answers, they must also be submissions on the main market. I also ask that you not submit any answers which are extremely low % on the main market, so that we can keep this market easier to navigate. I will N/A any submissions that I view as low-quality even if they were not N/A-ed in the original market.

I have also tried not to include too many duplicate options, though there can be reasonable disagreement about what counts as a duplicate. I may N/A answers that I think are too similar to other answers, such that the polling on them would likely be identical. For example, I don't think we need two answers about fundamental disagreements about OpenAI's safety approach.


These resolution rules should be considered to be in "Draft Form" upon creation, and I am open to modifying them while remaining within the spirit of the question.


Thanks for voting everyone!

Alright, I've got the polls up for every question, and I've put them all together in this dashboard!

I'll also put them all together here in a single block:

Eager to hear if anyone has hot takes for voting yes on a low % option or no on a high % option!

Alright, this is closed and I'm going to start making the polls! I'm thinking 3 days is probably enough time for them to run? Open to suggestions though, I might adjust the dates if people think that's too much or too little time.

@Joshua Man I really wish Manifold would let me put all these polls together into a mega-poll with approval voting.

@TheBayesian Yeah for lack of more concrete options, I might actually end up just voting purely based on Ezra because I generally trust him to be a good judge of complicated situations like this.

@Joshua This explanation also probably postdicts the events better than most other hypotheses, and is simple and realistic, etc

I'm finding it very hard to model what a board member believing they can't trust/control Sam looks like that doesn't ultimately cash out as safety concerns. Like, isn't the main reason you care about trusting or controlling your CEO that you want to make sure the mission of safe AGI is in good hands? Or, to put it another way, it seems like a contradiction to say you don't have any concerns about how safety is being done, but also that you don't trust the person in charge of doing it.

Edit: After Toner keeps explicitly saying on the record that it wasn't about safety and was just about trust/honesty, I no longer plan on voting yes on any safety answers.

So far, the options here are generally much higher than in the main market!

original market:

vs here:

Btw, interpersonal squabbles not related to AI safety is listed twice

@TonyPepperoni Ah my mistake, N/Aed one.
