This poll asks what Manifold thinks the answer to the question in the title is.
Definitions for this question:
AGI: An artificially intelligent agent that is significantly better than humans at acting in environments and making decisions across a broad set of domains.
Alignment problem: getting such an agent to act in alignment with the goals of a group of humans (hopefully humanity as a whole).
The "by 2100" deadline is meant to limit resource input (no brute-forcing over an unbounded time span), but it isn't meant to be an exact cutoff.
This poll is in part inspired by this market: https://manifold.markets/Joshua/is-risk-of-extinction-from-ai-1-in?r=VG9ieTk2
Feel free to argue for or against in the comments. I may make and link derivative markets later.
Bet on the outcome of this poll: