@GustavoMafra Nope, just whatever you'd use for other things you do/don't consider significant risks. (Note that for most people that's more than just the probability of occurrence, but also the severity of the outcome.)
@GustavoMafra I would usually ignore probabilities under 1 in ten thousand, since that's about the probability that I won't make it to next year based on actuarial tables. But in this case even 1 in 10,000 could be significant: with about 10^10 people around, that's a million deaths in expectation. The reasonable thing is probably to compare p(doom) from AI with other x-risk probabilities. I think those are still higher even on this timescale, so I voted no. For instance, the base rate for nuclear war is about once a century (OOM estimate), but an all-out nuclear war could easily kill 1/100 of the population, so that's a personal risk of about 1 in ten thousand every year.
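If it helps to make the arithmetic explicit, here's a tiny sketch of those back-of-the-envelope numbers (the population and nuclear-war figures are order-of-magnitude assumptions, not precise data):

```python
# Rough sketch of the comparison above: expected deaths from a 1-in-10,000
# extinction-level event, and the implied per-person yearly risk from nuclear war.

world_population = 1e10          # order-of-magnitude assumption (~8e9 today)
p_doom_per_year  = 1e-4          # the 1-in-10,000 threshold discussed

expected_deaths = world_population * p_doom_per_year
print(f"Expected deaths at 1/10,000: {expected_deaths:.0e}")   # ~1e6, a million

p_nuclear_war_per_year = 1e-2    # "about once a century", OOM estimate
fraction_killed        = 1e-2    # "1/100 of the population"

individual_yearly_risk = p_nuclear_war_per_year * fraction_killed
print(f"Per-person yearly nuclear-war risk: {individual_yearly_risk:.0e}")  # ~1e-4
```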
@mariopasquato "I usually would ignore probabilities under 1 in ten thousand since that’s about the probability I will not make it to the next year based on actuarial tables."
I answered Yes, but that's because the question says "significant existential risk to humanity", not "significant existential risk to me". In the "actuarial tables of humanity", an AI explosion probably gets >10% of each year's total probability of death, but not in mine.
btw are your tables segmented by health state and habits or something? The one I'm looking at shows males in their 20s-30s with a 1/500 yearly death chance.
@ItsMe Think of things that you'd consider a significant risk. Then think about whether a rapid intelligence explosion is included in that category. :)
(I'm leaving the exact interpretation up to people's individual judgement.)