Assume that there are only two possible outcomes to continued AI development:
Some company/government develops a sufficiently powerful AI and becomes a singleton, forever cementing its power over humanity.
Some sufficiently powerful open source model is used to kill a majority of people on Earth and to cause widespread civilizational collapse.
Which of these possible outcomes do you regard as preferable?
enslavement forever vs collapse once
easy choice
(consider just removing the word "AI" if your hypothetical isn't grounded in reality anyway)
@Jan53274 yes, the first one doesn't seem stable if the alignment problem isn't solved, and even if it is solved, the "enslavement" part won't last long unless the company/government has some very specific beliefs
@paleink I also don't expect enslavement if alignment is solved, but who knows what an aligned AI thinks is best for us