Was the destruction of OpenAI the company really "consistent with the mission" of the non-profit's charter?
Resolved N/A (Jan 1)

You also informed the leadership team that allowing the company to be destroyed would be consistent with the mission.

The mission the board refers to is:

Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity.

We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

The charter exists to guide the board.

The charter will guide us in acting in the best interests of humanity throughout its development.

The board saying that allowing the company to be destroyed would be consistent with the mission implies that the destruction of OpenAI is consistent with their pursuit of developing an AGI that benefits humanity.

This further implies that OpenAI is either not on track to achieve AGI at all, or not on track to achieve an AGI that would benefit humanity. That is a very strong statement for the board to make.

I will evaluate the validity of this statement (keeping in mind the context in which it was made) and resolve the market accordingly. If you'd like to ask for clarifications, please do so in the comments. Subjective markets are hard to operationalize beforehand, and cooperation from traders is much appreciated.

Read the letter here: https://www.documentcloud.org/documents/24172377-letter-to-the-openai-board


The liquidity in this market isn't anywhere near enough to tell you guys the real story of what happened at OpenAI. It would barely cover the cost of the Coke...


which further implies that OpenAI is either not on track to achieve AGI at all, or not on track to achieve an AGI that would benefit humanity. That is a very strong statement for the board to make.

I guess I think OpenAI is likely to create AGI that benefits humanity, but is also likely to create AGI that destroys humanity, and weighting the possible scenarios, it comes out very negative (though I'm not confident about that). I think it's very plausible that the board has additional reason to believe the negative part is likely. It seems like by default capabilities research does not benefit humanity, so it isn't a strong statement to say OpenAI is not on track to achieve AGI that would benefit humanity, especially given how much faster the capabilities progress has been than the alignment progress, and how strongly the company culture has shifted to anti-slowing-down, anti-alignment-being-a-concern, etc.

@TheBayesian Actually, rephrasing

Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.

to

Step 1: Build safe AGI

Step 2: Share benefits (note they don't say "share AGI") with humanity

doesn't exclude the option of "build unsafe AGIs -> don't share them with humanity" which seems like the path they're going on atm, by first creating an automated alignment researcher (the goal of Superalignment) and then using it to align intelligences smarter than humans, should they reach that stage.

Also, even the announcement post (link) for superalignment says things like

Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems

written by then-board member Ilya Sutskever himself, which makes

board has additional reason to believe the negative part is likely. it seems like by default capabilities research does not benefit humanity

unlikely

@firstuserhere I think most AI safety people believe "build unsafe AGI, don't share it with humanity" isn't really a viable option.

I think OpenAI's LP announcement post is also useful (link)

In particular, the following paragraphs:

  • Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by attempting to build safe AGI and share the benefits with the world.

  • We’ve designed OpenAI LP to put our overall mission—ensuring the creation and adoption of safe and beneficial AGI—ahead of generating returns for investors.

  • The mission comes first even with respect to OpenAI LP’s structure. While we are hopeful that what we describe below will work until our mission is complete, we may update our implementation as the world changes. Regardless of how the world evolves, we are committed—legally and personally—to our mission.

  • Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict

  • However, we are also concerned about AGI’s potential to cause rapid change, whether through machines pursuing goals misspecified by their operator, malicious humans subverting deployed systems, or an out-of-control economy that grows without resulting in improvements to human lives.

The board saying that allowing the company to be destroyed would be consistent with the mission implies that the destruction of OpenAI is consistent with their pursuit of developing an AGI that benefits humanity.

which further implies that OpenAI is either not on track to achieve AGI at all, or not on track to achieve an AGI that would benefit humanity. That is a very strong statement for the board to make, one which the leadership and ~97% of OpenAI disagree with.

No, there's another option: OpenAI (like all the frontier AI companies) had a non-negligible probability of creating unsafe AGI that would cause destructive harm to humanity. That's not a strong statement; it's pretty much encoded in the mission statement. (Note that unaligned superintelligences can certainly close off paths to beneficial AGI, for example by wiping out humanity.)

What exactly did 97% of OpenAI disagree with? I'm pretty sure it wasn't that.

@jack Potentially disagrees with*


@jack I was interpreting this in the context of 97% of OAI threatening to quit if Sam was not made CEO again.

@jack

a non-negligible probability of unsafe AGI that would cause destructive harm to humanity.

But that was known from the very beginning. That's exactly why they have clause 2:

We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.

and they also say

Unaligned AGI could pose substantial risks to humanity and solving the AGI alignment problem could be so difficult that it will require all of humanity to work together.

Which means that something must've changed between then and now for their actions to be different.

@firstuserhere I know it was known from the beginning, that was part of my point?

Which means that something must've changed between then and now for their actions to be different

Of course something changed in the world we're talking about here - in this scenario, which I think is fairly likely, they lost confidence in Sam Altman to drive the development of safe AGI.


@jack Sure, so this market is about whether their lack of confidence in him was sufficiently justified, right? And also if they actually believed that or were lying to negotiate.

@Joshua Yes, my point is that the way I read the quoted text in the market description leaves out what I think is the most likely scenario relevant to this question.


@jack Gotcha, makes sense.

The mission was to ensure the development of safe AI. If the board felt that the company was developing unsafe AI, destroying the company doesn't do any worse at the mission than keeping the company up. That was the explicit purpose of having a nonprofit have control over the company. I'm confused at what the argument is for NO here.


@firstuserhere Can you explain what's going on here? This market's correct resolution is obviously YES; the board had an explicitly-stated mission, and there are certain states the company could get into that would conflict with the mission, so if the board couldn't reform the company, destroying it would be the most effective way to accomplish the mission. I don't see how anybody could reasonably disagree with this.

Like how if NASA's mission is to launch a rocket to the moon safely, and the rocket starts veering off course and they can't correct its trajectory, the best thing to do for the mission is to destroy the rocket before it hits the ground, since letting it hit the ground would be unsafe and would not further the mission of getting to the moon safely.

But the market seems to be hovering at 50%; why?

@IsaacKing

I guess because people are afraid you're going to misresolve?

First of all, I wouldn't make assumptions about other people's trading. Also, there's not really enough mana at stake to support that kind of statement, or any other reason to assume it.

This market's correct resolution is obviously YES

Can you explain why?


@firstuserhere I did, in the preceding two comments.

@IsaacKing

  • First of all, is this letter and the statement correct?

  • Second, is it now confirmed that the board fired Altman for specifically these reasons, and not over some petty dispute or political reasons, with nothing whatsoever to do with the mission of OpenAI?

  • Can the board be taken at its word about this statement?

    I don't know yet.

@IsaacKing My reading of this market was that the mission of the company was to create "Safe AGI" that benefits all humanity. So destroying the company would only be consistent with this mission if there was some reason to think that if Sam remained in power, he would create Unsafe AI. Like some secret internal drama about Sam ignoring safety concerns.


Oh, are you trying to ask whether destroying the company was actually the best course of action in this specific situation? I think that's a very different question, and the board didn't make that statement.

I think this question is extremely ambiguous and should be clarified as much as possible. I disagree with Isaac, but I also think if the question is intended to reflect what firstuserhere said in the above comment, that is very much not what the question says

first of all, is this letter and the statement correct? Second, is it now confirmed that the board fired Altman for specifically these reasons, and not over some petty dispute or political reasons, with nothing whatsoever to do with the mission of OpenAI? Can the board be taken at its word about this statement? I don't know yet.

@Joshua So for example, if it turned out that Sam was doing a great job of creating "Safe AGI" and the board fired him because they're secretly a bunch of bigots who don't like that sam is gay, then obviously destroying the company rather than let Sam return is not consistent with the mission.

So it depends on why Sam was fired, was my reading.

@IsaacKing Yes, the board can do many things that would be consistent with the mission but their responsibility is not just to ensure consistency with the mission but to serve it in the best way they can. Which means that if the board fired sam for such reasons, then firing him was the best way to serve their mission

@Joshua Exactly, and I'm waiting for "Why was Sam fired" and other related edge cases to sort out before making a judgement

@jack Indeed, clarification would be better. I'll rewrite the description, because the market was made in a rush when the statement had just come out and I kinda forgot about it with no activity happening here. Now I'll rephrase it better


@firstuserhere You said that the board's statement was "insane", which implies that you think it's highly unlikely that the company could ever be in a state where the statement is true. But the theory of "Sam was being unsafe and doing a power grab" is hovering at 30% or above in other markets, and is the mainstream theory, which I assume you're aware of. So these facts together imply that even if Sam were being unsafe, you would not believe that destroying the company was consistent with the mission, which is where my original comments came from.


@IsaacKing

I wouldn't read too hard into sophia's market because it's a mess, but if you do want to talk about what's "hovering at 30%" it's not the safety explanations. I'm not sure why this market is at 50% if the most plausible theory behind the firing right now is a more run of the mill corporate power struggle without any major ideological differences having been proven enough to resolve other markets.

@IsaacKing Further, I wouldn't draw strong conclusions from markets that have <20 traders and are also not traded on strongly, or actively.


@firstuserhere Ok, I'm sorry for the accusative tone of my comment, that was unwarranted.

I still think this market should resolve YES. The board was not saying that destroying OpenAI was necessary at that point in time, just that it could reasonably be necessary in a situation like a CEO who doesn't care about safety and is trying to take over the company.


@IsaacKing I think it's very important that the quote is "would be consistent" and not "could be consistent". I agree that a market about "could" would automatically resolve yes. I think "would" means that the board was saying that this was specifically, factually the case for OpenAI with Sam vs the destruction of OpenAI.

@IsaacKing Has Sam said anything about dismissing safety? As far as I can tell, he takes safety seriously and claims that he's not a full-on doomer but somewhere in the middle, i.e. he thinks safety will be extremely important but doesn't think that doom scenarios are very near-term.


@firstuserhere I think he plays both sides in order to get support from both. I have no idea what his actual beliefs and goals are.

https://twitter.com/sama/status/1540227243368058880
