Why did we survive AI until 2100?
- 55%: A small group made it impossible for anyone else to develop AI.
- 55%: A big anthropic shadow.
- 52%: A humanitarian catastrophe hampered AI progress.
- 45%: AI never got the capability to cause extinction.
- 45%: AI became safer as it got more powerful, without much human effort beyond some RLHF.
- 44%: Cognitive enhancement helped a lot.
- 35%: Humanity coordinated on a sufficiently long and strong AI moratorium.
- 34%: Jono from 2023 does not think I (the one being polled in 2100) qualify as a person.
- 31%: Brain uploading helped a lot.
- 28%: A plan to mitigate AI risks succeeded and already had a post about it on the Alignment Forum in 2023.
- 23%: Open-source AI created an egalitarian world where no one (or few) got into a position to (accidentally) kill everyone.
- 12%: Nobody wanted to develop AI anymore.
- 5%: Humanity spread out over independent space colonies.

I, or someone who inherits this question, will poll people on it in 2100 and resolve each answer to the proportion of poll respondents who answered "yes".
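As a minimal sketch of that resolution rule (the data format and the `poll_responses` / `resolve` names here are hypothetical illustrations, not part of the market), each answer resolves to the fraction of 2100 poll respondents who said yes:

```python
# Hypothetical sketch of the resolution rule: each answer resolves to
# the proportion of poll respondents who answered "yes".
# The data format is assumed: a dict mapping each answer to a list of
# True/False responses collected in the 2100 poll.
poll_responses = {
    "A big anthropic shadow": [True, True, False, True],
    "Brain uploading helped a lot": [False, False, True],
}

def resolve(responses):
    """Return each answer's resolution value: yes-count / total responses."""
    return {
        answer: sum(votes) / len(votes)  # True counts as 1, False as 0
        for answer, votes in responses.items()
        if votes  # skip answers nobody was polled on
    }

print(resolve(poll_responses))
# {'A big anthropic shadow': 0.75, 'Brain uploading helped a lot': 0.333...}
```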

I'll give you 5–100 Manifold bucks (mana) if you post another good possible answer in the comments.

The answer about whether the one being polled qualifies as a person is there to control for scenarios where something weird happens while the responsibility for resolving this question is passed down.

Apologies to any non-humans who join human discourse between now and 2100: I'll edit the term "humanity" once I find a non-confusing term that encapsulates the group of all nearby moral patients.

Huh, another AGI survival prediction market?

Yes, but this one is not a "pick one from many" market; it is a collection of independent yes/no questions, which I think is more informative. Related markets:
- By Isaac King
- By Yudkowsky
- By Yudkowsky's community
