Will Superalignment succeed, according to Eliezer Yudkowsky?
Resolved NO (May 20)

If @EliezerYudkowsky believes Superalignment has succeeded in the goal of "solv[ing] the core technical challenges of superintelligence alignment in four years [i.e. by July 5th, 2027]", this market will resolve YES. If the team dissolves, reorganizes, or pursues a separate research direction unlikely to lead to a solution to the alignment problem and Eliezer does not believe they have yet succeeded, this resolves NO.

Resolution will be based on Eliezer Yudkowsky's public communications (e.g. on the AI Alignment Forum or in the comments here). Resolution may be delayed after July 5th, 2027 until Eliezer's belief about this is clear to me. May resolve to a percentage if Eliezer so decides.


@jcb We can resolve this NO now, right?

@Joshua I had some hesitation, wanting to hear it directly from @EliezerYudkowsky, but I feel reasonably comfortable interpreting his retweet of this as cause for a NO resolution. (Eliezer, if this is wrong, feel free to correct us and I'll ask the mods to fix it.)


How does this resolve if Yudkowsky is dead?

@MartinRandall N/A. (If he wants to delegate to a successor, I'll have to think about whether to accept that.)

Are these the core technical challenges as Yudkowsky sees them, or as OpenAI sees them?

E.g., taking a safe pivotal act might be viewed as a technical challenge by one and as a political challenge by the other.

@MartinRandall The core technical challenges as Yudkowsky sees them. I think this is the most straightforward reading of the question, and it seems more meaningful and valuable than trying to grasp Yudkowsky's belief about whether Superalignment solved the core technical challenges as OpenAI sees them.

@jcb Then maybe this already resolves NO, if they are pursuing a separate research direction, i.e., pursuing the challenges as they see them.

@MartinRandall I can imagine an argument that they are already pursuing a direction unlikely to lead to a solution to the alignment problem as Yudkowsky sees it. But I have enough uncertainty about what Superalignment will produce that I'd be very hesitant to resolve early on those grounds (even with direct input from Eliezer to that effect). In spirit, this clause is about a pivot away from working directly on alignment.

I cannot think of a non-certain market that should have a lower percentage.
