Do OpenAI leadership actually believe they could develop AGI?
2030: 83% chance

Some online comments imply that OpenAI leadership is cynically playing up the risk of AGI for profit/fame. This market offers the chance to make concrete(-ish) predictions on the issue.

"Leadership" is taken from the "Key people" list on Wikipedia and will remain static regardless of subsequent turnover.

Only one of them not believing is required to negate.

"Actually believe" is the fuzziest criterion, but fair warning: negating would require a fairly direct admission of "I didn't believe it AT THE TIME (2023 or before)," reported by a reputable source.

It needs to be a belief that they could do it, so within the next 30 years or so, and at OpenAI specifically. They don't need to be certain; let's say >= 5% credence suffices.

AGI includes both positive and negative futures, but at the "new epoch in history" level, not the "roughly as big as the iPhone" level.

I won't bet, but to be up front about my current beliefs: I currently think they are truly sincere (90% confidence). Absent an oracle, we'll unfortunately need to factor in the chance that they don't believe but take that secret to their grave.

My goal here is to disambiguate, for those who disagree with them, between starry-eyed optimism and bubble-inflating cynicism. Feel free to Coffeezilla them if you're really sure it's a scam; I'll throw in 1k mana if you credibly break the story yourself!


It's a difficult one because of the possibility of them just never admitting it, but also the reverse: they did believe, but might post-hoc claim they didn't in order to save face if AGI never materializes.

© Manifold Markets, Inc.