Superalignment is a new team at OpenAI attempting to solve the alignment problem within 4 years.
If the team believes they have succeeded in this goal of "solv[ing] the core technical challenges of superintelligence alignment in four years" by their own estimation by July 5th, 2027, this market will resolve YES. If the team dissolves, reorganizes, or pursues a separate research direction unlikely to lead to a solution to the alignment problem, this resolves NO.
In the tweet quoted below, Ilya expresses confidence that OpenAI will build AGI that is safe. The market resolution was wrong: the market says it resolves NO only if the team reorganizes in a way that is "unlikely to lead to a solution". The tweet: "Ilya Sutskever
@ilyasut After almost a decade, I have made the decision to leave OpenAI. The company's trajectory has been nothing short of miraculous, and I'm confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm."
@TeddyWeverka The text is:
If the team dissolves, reorganizes, or pursues a separate research direction unlikely to lead to a solution to the alignment problem, this resolves NO.
As I read it, "pursues a separate research direction unlikely to lead to a solution to the alignment problem" is a single clause. It doesn't make sense to read it as "If the team dissolves [...] unlikely to lead to a solution to the alignment problem" - that doesn't parse grammatically, nor does it make sense. Moreover, the rest of the question text clearly requires the team to do the evaluation, and the team no longer exists, which is further evidence that the sentence means the question resolves NO on dissolution of the team.
@jack The team reorganized in a direction that Ilya, the former team leader, has expressed confidence will succeed. Your point about the criteria for resolving YES is a good one, though. At best the question should resolve to N/A.
@SG My original terms were "If the team dissolves, reorganizes, or pursues a separate research direction unlikely to lead to a solution to the alignment problem, this resolves NO." The leadership team all quitting / being fired constitutes, at the very least, a "reorganization... unlikely to lead to a solution to the alignment problem".
Hmmm no wait I shouldn't headline trade, there might be some editorializing here.
OpenAI has effectively dissolved a team focused on ensuring the safety of possible future ultra-capable artificial intelligence systems, following the departure of the group’s two leaders, including OpenAI co-founder and chief scientist, Ilya Sutskever.
Rather than maintain the so-called superalignment team as a standalone entity, OpenAI is now integrating the group more deeply across its research efforts to help the company achieve its safety goals, the company told Bloomberg News. The team was formed less than a year ago under the leadership of Sutskever and Jan Leike, another OpenAI veteran.
Should we maybe wait to hear what OpenAI says directly, rather than via Bloomberg?
@Joshua Eh, re-reading it, this does certainly seem like dissolution. If not, OAI will surely be denying it today.
@Lorxus Why? You can just price it in. It's not like they're gonna decide how to self-eval based on their position in this market.
"If the team dissolves, reorganizes, or pursues a separate research direction unlikely to lead to a solution to the alignment problem, this resolves NO." What is the resolution if the team neither declares success nor makes big changes by July 5th, 2027 - ie if they say "what we're doing is good, we're just not done yet"?
Beware, new traders: this market is not about whether superalignment will succeed according to the goals they've set, but about whether the OpenAI team will call it a success.
People might be interested in a podcast interview I did with Jan Leike about the superalignment team and plan: https://axrp.net/episode/2023/07/27/episode-24-superalignment-jan-leike.html
@ersatz Always look at the resolution criteria: "If the team believes they have succeeded in this goal of "solv[ing] the core technical challenges of superintelligence alignment in four years" by their own estimation by July 5th, 2027, this market will resolve YES." Now it mostly depends on one's estimation of how honest the Superalignment team will be.