Artificial superintelligence (ASI) here means any artificial intelligence able to carry out any cognitive task better than 100% of the unenhanced biological human population.
P(doom) here means the probability of humanity being wiped out by misaligned ASI.
Ideally the individuals will have publicly expressed their P(doom) within the past year, either directly or indirectly (e.g. "I think it's practically guaranteed we're all gonna die" ≈ 99%, "I think it's a tossup whether we'll survive" ≈ 50%, "there's a small but significant risk it'll kill us" ≈ 10%, "the risks are negligible" ≈ 1%, and so on), or they may be contacted and asked for their P(doom) as defined above.
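For illustration only, here is one way that verbal-to-numeric reading could be encoded as a lookup table in Python; the phrases and values are just the examples above, and the name VERBAL_TO_PDOOM is hypothetical:

```python
# Illustrative (not official) mapping from hedged verbal statements to the
# P(doom) values they would be read as under this market's rules.
VERBAL_TO_PDOOM: dict[str, float] = {
    "practically guaranteed we're all gonna die": 0.99,
    "a tossup whether we'll survive": 0.50,
    "a small but significant risk it'll kill us": 0.10,
    "the risks are negligible": 0.01,
}
```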
If it is impossible to obtain a P(doom) (e.g. they are dead, or they refuse to give their opinion), then their option may resolve N/A.
"When Manifold thinks ASI is <1y away" here means the earliest point in time at which there is a Manifold market that asks whether ASI will be created before a deadline less than a year away, uses a definition of ASI at least as strict as the one in this market, has 50 or more traders, and has shown odds above 50% for the majority of the past month.
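To make that trigger concrete, here is a minimal Python sketch of the four-part check. The names and fields (MarketSnapshot, num_traders, etc.) are assumptions for illustration, not Manifold's actual API, and whether a candidate market's ASI definition is at least as strict would still be a human judgment, recorded here as a boolean:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MarketSnapshot:
    """Hypothetical snapshot of a candidate ASI market (illustrative schema)."""
    deadline: datetime                    # the candidate market's resolution deadline
    definition_at_least_as_strict: bool   # judged manually against this market's ASI definition
    num_traders: int
    daily_probs_past_month: list[float]   # one probability per day over the past month

def triggers_resolution(m: MarketSnapshot, now: datetime) -> bool:
    """True iff the candidate market satisfies all four conditions above."""
    deadline_under_a_year = m.deadline - now < timedelta(days=365)
    enough_traders = m.num_traders >= 50
    days_above_half = sum(p > 0.5 for p in m.daily_probs_past_month)
    majority_above_half = days_above_half > len(m.daily_probs_past_month) / 2
    return (deadline_under_a_year
            and m.definition_at_least_as_strict
            and enough_traders
            and majority_above_half)
```

The earliest time at which any such market passes this check would be the resolution point.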
@asmith how is it independent of the actions of society? Although it's unfortunately unlikely, wouldn't it be vastly decreased if, for example, society made it illegal to create ASI before a supermajority of AI scientists agreed that alignment had been solved? Also, how would being independent make it a bad measure? I remember an EA forum post arguing that it's not an ideal term for communicating with the general public, but in this case I think it's a useful abbreviation
@asmith If I am understanding correctly, I do think I agree with this - regardless of the doom probability, this could/should be something some people work on or look into anyway
@zyc could you elaborate? Although I think it's roughly 30%, if for example the true P(doom) were 10^-100, I think it would be reasonable not to bother working on it
@TheAllMemeingEye Yeah, so for me, even if the P(doom) is super small, it is still a technical field that people can work on; for governance the safety mindset is generally needed, but maybe with fewer resources/less attention