In many sci-fi universes, humanity must stop using (or destroy) all computers to remain in control of its own destiny. This could stem from a typical AI doom scenario, or simply from the existence of AI preventing people from exercising individual and collective agency. The answer also depends on how likely any given computer-using civilization is to develop ASI.
Yuval Noah Harari, a prominent public intellectual and philosopher, discusses the incompatibility of ASI and human-directed history in this clip:
https://www.youtube.com/watch?v=2M7VU7EoLT4
If you believe an "AI Heaven" is likely, in which each person lives in an ASI-controlled paradise but cannot change the world beyond what the ASI allows, vote NO. If you believe that ASI is unlikely or impossible, vote YES.
Interpret the resolution date as "not anytime soon (hopefully)."
This question seems very vague to me; the details don't match my initial read of the title.
I read "Is human civilization compatible" as asking, "Is it reasonably possible for humans to have a setup where AIs follow their orders?", which is a much lower bar than something like, "Will our specific setups involve humans being overthrown?"