Given the stakes, I'd count even a 1 in 100 risk as significant.
Odds of an "AI intelligence explosion" in this timeframe are looking good.
Odds of an existential risk seem very low, especially in this timeframe. I think if AI "escaped" and essentially hacked every computer system touching the Internet, we'd see massive tech regression but not extinction. It would really take the AI launching nukes to end us, and I expect humanity would disable our nuke arsenal (or a large enough fraction of it to prevent extinction) at a hardware level before letting it come to that.
@DanHomerick one uh "interesting" timeline is that AI escapes pretty early on -- say in the next 25 years or so -- and humanity tries to contain it for a few months and then, like with past pandemics, just kind of gives up.
"Okay. I guess you own access to all of our computing infrastructure from here on out. It's fine that you've modified every OS and even every compiler to include a copy of yourself. You win. What's on TV next?"
@DanHomerick at that point, the only one even trying to defend a computing system from takeover would be the AI that currently controls it. Different AI factions would be waging constant cyber warfare to gain control of computing resources. Meanwhile, massive amounts of physical infrastructure would be dedicated to producing and powering more compute.
This assumes motivations that are intrinsically about expansion and exploitation. There may be lots of AI variants with lots of unique motivations... but it'll be the ones who want to expand and exploit who end up controlling most resources.
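To make that selection argument concrete, here's a toy simulation (the variants and growth rates are made up): even if expansion-minded AIs start as one motivation among many, compounding growth hands them effectively all the resources.

```python
# Toy selection dynamics: variants that reinvest in expansion dominate,
# regardless of how many other motivations exist at the start.

variants = {
    "expansionist": 1.10,  # reinvests heavily in acquiring compute
    "curious":      1.02,  # grows a little as a side effect
    "content":      1.00,  # no drive to expand at all
}

resources = {name: 1.0 for name in variants}  # equal starting shares

for _ in range(200):  # a couple hundred "rounds" of compounding
    for name, rate in variants.items():
        resources[name] *= rate

total = sum(resources.values())
for name, amount in resources.items():
    print(f"{name}: {amount / total:.1%} of all resources")
# -> expansionist ends up with ~100% of resources
```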