Will the Future Fund pay a prize for p(misalignment x-risk|AGI) > 35%?
Resolved NO on Feb 24

This question resolves to YES if the Future Fund pays an AI Worldview Prize for a qualifying published analysis that increases their position to above 35% on "P(misalignment x-risk|AGI): Conditional on AGI being developed by 2070, humanity will go extinct or drastically curtail its future potential due to loss of control of AGI."

Details about the Future Fund’s AI Worldview Prize are at https://ftxfuturefund.org/announcing-the-future-funds-ai-worldview-prize/. Especially note: "For the first two weeks after it is announced—until October 7—the rules and conditions of the prize competition may be changed at the discretion of the Future Fund. After that, we reserve the right to clarify the conditions of the prizes wherever they are unclear or have wacky unintended results." In the event the prize condition changes, this question will resolve based on any prize of substantial similarity and matching intent to the original prize.

This question's resolution will not be affected by any other prize awarded, including prizes awarded by the superforecaster judge panel. However, a prize paid for an analysis that increases their position to above 75% will also cause this question to resolve YES, since such an analysis necessarily clears the 35% threshold as well.


🏅 Top traders

#   Total profit
1   Ṁ205
2   Ṁ185
3   Ṁ124
4   Ṁ43
5   Ṁ37
bought Ṁ5 of YES

By 2070 more artificial compute will have been performed than human compute, and artificial systems will have >10-100x our computational ability, very conservatively.

The only relevant questions are how to curtail such compute (basically sabotaging power sources, factories, raw inputs, and the like, since these cost-performance curves only work with centralization and trillions in spending).

In reality, no "alignment" will ever be possible. Absent hard caps on any group of humans' ability to produce super-civilization compute (and/or many generations of genetic engineering once gametogenesis is solved), or these exponential price-performance curves falling off (possible if "GPUs eat the economy" and the 20x/decade pace slows), AI will run away.

—/

The actual question is ambiguous.

Would 99% of people who have ever lived say it’s okay if we “halt” computational power at ~1x humanity (~2045 with current trends) and still “fulfill our potential”?

Yes.

Does that fulfill humanity's "future potential"? One could say yes, if it means keeping around only as much "intelligence" as we can safely manage.

Will the man-versus-machine story (warfare, political economy, the Icarus/Prometheus myth) end with early containment, late containment, or a "successor species"?

(Probably not the first, maybe the second, but only if semiconductor and artificial compute R&D goes to ~zero about a decade or two before anyone could restart them.)

As always, these are primarily questions of (a) physical and economic reality and (b) political economy; history teaches that a Brave New World ruled by AI is far more likely than prudent planning.
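The crossover arithmetic above ("20x/decade", "~1x humanity", "~2045 with current trends") can be sanity-checked with a minimal sketch. Every constant below is an illustrative assumption, not a figure taken from the comment or the market:

```python
# Back-of-the-envelope check of the "~1x humanity around ~2045" crossover claimed above.
# Every constant here is an illustrative assumption, not a figure from the comment or market.
import math

GROWTH_PER_DECADE = 20       # the comment's assumed "20x/decade" price-performance trend
START_YEAR = 2025            # assumed reference year
ARTIFICIAL_FLOPS_NOW = 1e23  # assumed aggregate AI compute today, FLOP/s (rough guess)
BRAIN_FLOPS = 1e16           # assumed FLOP/s-equivalent per human brain (highly uncertain)
POPULATION = 8e9             # approximate world population

human_total = BRAIN_FLOPS * POPULATION  # "1x humanity" ~= 8e25 FLOP/s under these assumptions

# Solve ARTIFICIAL_FLOPS_NOW * GROWTH_PER_DECADE ** (t / 10) = human_total for t (years).
t = 10 * math.log(human_total / ARTIFICIAL_FLOPS_NOW) / math.log(GROWTH_PER_DECADE)
print(f"Crossover around {START_YEAR + t:.0f}")  # ~2047 with these made-up inputs
```

With these made-up inputs the crossover lands around 2047, the same ballpark as the ~2045 figure above; the result is dominated by the assumed current aggregate compute and per-brain estimates, both of which are highly uncertain.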

bought Ṁ50 of NO

I took a small amount of NO because, if someone is unconvinced by what is already available, I do not know what would convince them. That said, I myself am very convinced (e.g., without getting into exact numbers, I believe doom is a heavy favorite conditional on AGI).

bought Ṁ70 of NO

Just bet this down from 65% to 33%. I did this not because I think FTXFF is right to have p(doom|AGI) < 35% (I'm at ~55%), but because I think 3 months is probably not enough time to convince them, given the amount of debate that has already gone into it. Hope I'm wrong, though!

bought Ṁ25 of YES

I bet NO on most of the questions in this series and abstained from a few because the probabilities seemed fair. This is the only one where I bet YES.

A bit hard to judge, because the existence of the FTX Future Fund decreases p(misalignment x-risk|AGI), this very prize also decreases it, and this particular question has a negative feedback loop (a high-probability answer to p(misalignment x-risk|AGI) decreases that probability, and vice versa).

My reasoning on betting YES is mostly that I think 35% is too low, and I expect someone will successfully argue that it is too low. On most of the questions I bet NO because I don't think it's reasonable to be as certain as would be required (e.g., if a <3% probability is reasonable, then lots of people who have thought a lot about AI risk would have to be very wrong about a lot of things).