I, or someone inheriting this question, will poll people on this question in 2100 and resolve each answer to the proportion of poll respondents who answered "yes".
I'll give you 5~100 Manifold bucks if you post another good possible answer in the comments.
The question about the polled being persons is there to control for scenarios where something weird happens while the responsibility for resolving this question is passed down.
Apologies to any non-humans who join human discourse between now and 2100. I'll edit the term "humanity" once I find a non-confusing term that encapsulates the group of all nearby moral patients.
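To make the resolution mechanic above concrete, here's a minimal sketch (a hypothetical helper, not any real Manifold API) of how each answer's resolution value would be computed from the 2100 poll:

```python
# Hypothetical sketch of the resolution described above: each answer
# resolves to the fraction of 2100 poll respondents who answered "yes".

def resolution_value(yes_votes: int, total_votes: int) -> float:
    """Value (between 0 and 1) that an answer resolves to."""
    if total_votes <= 0:
        raise ValueError("cannot resolve: nobody was polled")
    return yes_votes / total_votes

# Example: 37 of 50 respondents say "yes" -> the answer resolves to 74%.
print(resolution_value(37, 50))  # 0.74
```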
Huh, another AGI survival prediction market?
Yes, this one is not a "pick one from many" but just a collection of yes/no questions, which I think is more informative.
- By Isaac King
- By Yudkowsky
- By Yudkowsky's community
This question got me thinking about optimal formats for this, so I'm trying a weird one where all trades are cancelled after a week, but the question then re-resolves in 2060.
I am sufficiently pessimistic about humanity's ability to coordinate that I think most surviving worlds in 2100 are ones in which we are just lucky and it turns out that the relevant technology is much harder to invent than we think it is. Specifically, we might be lucky and:
A) The next AI breakthrough on the order of Transformers simply never arrives. LLMs keep getting better, but no amount of additional training data makes them a superintelligence.
or
B) It turns out there are no superweapons. Nanotechnology just doesn't work how we currently expect it to, engineering super-viruses turns out to be impossible, etc. Without any easy way to kill us all instantly, AI decides to work with us instead.
I am sure someone could phrase these better than I have, but they are what I'm hoping for. I still think we should be desperately trying to coordinate moratorium treaties, develop human intelligence augmentation, etc., but I doubt we pull those off.
@Joshua I’d like to second (B) especially. I work in nanoscience and I’m shocked by how seriously people take Eric Drexler’s ideas (I hesitate to say pseudoscience, but they’re certainly not very rigorous). I just don’t think it’s plausible that even a superintelligence could figure out how to engineer self-replicating nanobots and the like.
@Joshua Thanks. I totally missed this despite doing some searching. Maybe I'll close the market if the overlap is too large.
@Jono3h Don't close it! An unlinked multichoice is much better than those old linked parimutuel markets.
@Joshua Oh, this also exists; it's a duplicate of EY's market, but in the current linked format that allows shorting:
So I would expect it to perhaps have more accurate percentages than EY's original, even though it has fewer traders. Probably also worth including in the description?
My reasoning is that in general, large groups of people mostly make big changes in response to disasters.
Most of my probability mass is on things like energy scarcity, climate issues, etc. that simply make AI research infeasible.
Also significant is a failed takeover, which would make everyone understand the risk more viscerally. But that one is hard to estimate: it's hard to imagine an AI causing significant enough damage without also just winning.
If it turns out to be too hard to make creative agents, then we survive for free. I wouldn't count on it, but it could be true.