
Most AIs are not AGIs yet; AI alignment should ideally precede AGI by a good bit; and it is unclear how close we might be to an AI/AGI breakout or foom of any sort.
In my subjective judgment, will the first major problems coming from AIs be caused by AGI? I will count things like wars, plagues, deaths well above the base rate, and really significant economic crashes as problems, among other things.
I will not count, e.g., 1,000 deaths from car accidents following the wide adoption of self-driving vehicles as a major problem, since that would be a significant drop from the base rate.
If no qualifying events occur before 2030, I may choose to resolve this market N/A; that will probably depend on whether this is still an interesting question at that point.
Feb 19, 2:15pm: Will AGI be a problem? → Will AGI be a problem before non-G AI?