If an intelligence explosion occurs, this market resolves N/A. Otherwise:
Shortly after market close, I will post a Yes/No poll in this market's comments, in the Manifold Discord, and/or in whatever other appropriate Manifold-related spaces exist at that time. It will ask:
Do you believe that a rapid AI intelligence explosion poses a significant existential risk to humanity before 2075?
This market resolves to the percentage of Yes votes in the poll, rounded to the nearest integer.
The poll will be limited to one response per Manifold account, and individual votes will be public.
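For concreteness, here is a minimal sketch of the resolution arithmetic described above. The function name is hypothetical, and it assumes "rounded to the nearest integer" means round-half-up, which the description doesn't actually specify:

```python
# Hypothetical sketch of the resolution arithmetic described above.
# Assumes "rounded to the nearest integer" means round-half-up,
# which the market description does not specify.
def resolution_value(yes_votes: int, total_votes: int) -> int:
    """Percentage of Yes votes, rounded to the nearest integer."""
    assert total_votes > 0, "poll must have at least one vote"
    percentage = 100 * yes_votes / total_votes
    return int(percentage + 0.5)  # round half up

# Example: 37 Yes out of 52 votes -> 71.15...% -> market resolves to 71.
print(resolution_value(37, 52))  # 71
```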
All markets for each year:
Do you believe that a rapid AI intelligence explosion poses a significant existential risk to humanity within the next 50 years?
I feel like the vague terms "rapid" and "significant" make this question pretty unclear. I'm happy calling a 0.1% risk of extinction significant; on the other hand, I could see people thinking "yeah, maybe give it 1%, but that's not significant".
@TassiloNeubauer I'm OK with that. For practical purposes, someone who thinks the risk is 5% and that 5% is negligible will advocate for similar policies to someone who thinks a 5% risk would be significant but that the actual risk is only 0.000001%. The question is asking "are people going to be concerned about this thing?", and for most intents and purposes I don't think it really matters exactly why they're concerned.