According to Manifold users, which possible cause of human extinction by AI needs the most urgent attention?
➕
Plus
37
Ṁ1368
Jun 2
20%
Surveillance capabilities (includes but not limited to "surveillance state" and "surveillance capitalism")
6%
Military applications
0.1%
AI works us all out of a job and UBI cannot save us because wealth no longer being generated
0.1%
Reversion to the mean: innovation in storytelling, jokes and music becomes impossible
2%
Falls into hands of bad actors
0.3%
This is a secret too dangerous for me to share
0.7%
AI becomes so good at predicting things it takes everybody's mana
40%
Maximizing its utility function leads to human extinction because of instrumental convergence.
28%
Unintended Consequences of Non-Malicious Human Use
1.9%Other

Assuming AI leads to human extinction, how will this happen and why should humans care?

Get
Ṁ1,000
and
S3.00
Sort by:

Aren’t surveillance capabilities a subset of military capabilities? Why is one at 26% and the other at 2%? Is there something I am not understanding here?

Edit: also how exactly is surveillance leading to extinction? To slavery maybe but can a security camera kill you with its stare? Whereas military stuff is designed to kill, no?

@mariopasquato This is an interesting question. I was about to say that surveillance and military intersect but surveillance isn’t a complete subset of military. Then I realized my definition of “military” isn’t coherent. Stay tuned.

Surveillance capitalism could lead indirectly to human extinction if the enslaving feature grinds the global economy to a halt .

@ClubmasterTransparent I doubt capitalism would grind the economy to a halt? Maybe the economy would be the only thing left and everything else would suck, yes, it’s already happening in a way. But that does not seem related to extinction. The only way I see capitalism (or rather, the liberal package of which it is part) being responsible for extinction is either through externalities (e.g. pollution) or by driving down birth rates (which is happening but not everywhere)

Thanks all for learnings. I appreciate it’s not anyone’s job to educate me. I did not make it through the suggested readings but I paretoized and reviewed. The suggestions were helpful and thought-provoking.

So “instrumental convergence” in layman’s terms is “Godzilla Clippy.” Humans cluelessly create imperfect technology Clippy and then he marauds over our limited human world trampling Tokyo and knocking over the Empire State Building because we accidentally programmed into him All You Need Is Love.

Clippy was/is a googlable thing.

This market and recommended readings from comments are still in the frontpile of the heap of multicolored sticky notes that's my brain.

Humans are craptastic at understanding our own motivations. We daily do the thing we strive to not do. We give in to impulses, decision fatigue, low blood sugar and crushes, all the while insisting more willpower or discipline would lead to better outcomes.

Sounds like AI is distinguished from other algorithms I know not only by being faster/more, but also people program a "utility function" into it. I see the appeal. Maybe I don't get it yet, maybe I get it just fine, can't tell. But not at all obvious that assimilating and crunching more stuff faster and seeing more connections would inexorably lead to optimal decisions maximizing one's own interest. Thats before this other issue -- if something that does all that appears, how to align it's interests with our own.

I'm going for economy of words, I mean by this to convey respect. Have not tried to speak utilitarianism since junior year of HS, this is my best go at it. Usually, whrn a question occurs to me, I ma not the first to have thought of it. Thanks all for challenge and opportunity to reflect.

@ClubmasterTransparent I think the original(?) paper is probably the best source if you want to read more.

https://nickbostrom.com/superintelligentwill.pdf

@Sailfish Thanks for this recommendation! Reading a section or so a day

@Sailfish Seriously a great recommendation. Still moving through.

Do "bad actors" include any any people who direct the AI to bad ends? Or only traditionally-considered "bad actors"? If the latter, please add "unintended consequences of human use"

@MattLashofSullivan Thank you for this question and the reminder that there's a lot I don't know. If "traditionally considered" bad actors are like scenery-,chewers, hams, B players, bit parts then haha funny. Otherwise there's likely some technical term in some discipline which I haven't seen much. I meant, malevolent people who want to use AI to their own benefit and the detriment of the rest of the world. Consequences not unintended.

@ClubmasterTransparent By "traditional bad actors" I meant like, terrorists, organized crime, rogue states, etc. The kinds of people that the current national security apparatus is currently tasked with keeping an eye on.

Suppose like, some hedge fund gets an advanced AI and tells it to make money for them. And the AI goes and does something unspeakably terrible, which also makes the number in the hedge fund's account go up by a lot. The sort of classic "extremely literal genie" scenario.

@MattLashofSullivan Hm, not to be glib, but we already have massive algorithmic traders and arguably some of the side effects of that are terrible and certainly they are straining existing regulatory structures. What you're describing sounds like the improvement in coding and computing power accelerates faster than human ability to manage it (or maybe humans have the ability but can't agree on how to use it in time). Thanks for adding option.

Instrumental convergence leads to human extinction

@asmith Does it now. Please add that. Or explain it in the comments since my Google is broken. Thanks!

@ClubmasterTransparent In my own words:
Imagine an AI that passes some critical threshold of intelligence such that it can do state-of-the-art AI research or the equivalent. In other words, imagine an AI smart enough to make itself smarter. Each additional bit of ability to improve itself leads to even more advanced abilities to improve itself, so exponential growth in capabilities results. A mind much more advanced than a human's would be able to destroy humanity.

Instrumental convergence is the observation that, no matter what one's goals are, the same things tend to be useful for accomplishing them. No matter what an AI values, control over resources and neutralization of threats would be useful to maximize those values. Increased mental capabilities too, of course.
[continued]

@asmith I'm intrigued, please proceed. Not obvious the difference between our example AI's "values" and its goals? People are extremely terrible at recognizing, much less evaluating, what resources are available and how to control them, in a vacuum or not. If an AI could do better at that, it would certainly have a head start

@asmith if you don't add this answer I will, seems like the sort of thing someone here would bet on. Is there a reason you're explaining it to me instead of adding it? Not that I mind, here to learn.

@asmith

So, an AI optimizing for a utility function that's sufficiently different from a human value system would want to seize control of all the resources on earth that could be useful to its utility function. One would imagine that an entity which is the equivalent of millions of years more advanced than humanity scientifically and technologically would be able to make some use of essentially every atom on Earth, and all the energy of the sun, which would result in human extinction. As far as threats are concerned, even if humanity isn't intrinsically threat, we could create a second superintelligent AI with a different utility function that would be.

How hard is it to program a superintelligence in such a way that it wants what we want, or agrees to take orders from us? It probably won't surprise you that it's extremely hard. If we give it points for doing things we like, like a chess-playing neural network AI is automatically awarded points for winning at chess, it would certainly be aware enough to take control of the point-reward-system and give itself points for free, then get rid of us so that we can't disrupt it. Even if its reward function was just a number that it wanted to be as high as possible, it would presumably still want to take control of all the atoms on Earth in order to build a computer capable of storing the extremely high number.

Of course it would realize that humans did not intend this, but it wouldn't care. Not unless we figure out how to program it to care what humans intend, which is an extremely difficult task.


If the idea of a superintelligent being that still has a simple value system based around maximizing one number doesn't sound right to you, keep in mind that even if it grows to want something more complex, there's no particular reason to think that more-complex-thing that it wants would resemble a human value system.


I could go on, but you get the gist.

@asmith appreciate the explanation and will read later.

@ClubmasterTransparent I added it. As far as timelines go, be aware that there is no guarantee that superintelligent AI is in the distant future. AI is improving by leaps and bounds every month. Maybe scaling up the existing technological paradigm will get us to an "intelligence explosion", (which is the term for that recursive exponential growth in intelligence I was talking about earlier) or maybe there are still some big technical breakthroughs still necessary, but when I look at the difference between GPT-2 and GPT-4, it's hard for me to imagine that more than two or three paradigm-shifting breakthroughs are needed to destroy the world.

@asmith Be aware that I've lived half my life in radical acceptance of the idea that the world could end at any moment, or at least my bit of it could, or at least I could. Thanks for adding your option and for the learning

@ClubmasterTransparent People often say or imply that as far as they're concerned the human race going extinct and their own death are roughly equivalent in badness. I think they're being silly, and just saying that because it's easy to say. Humans inventing excuses to not care or not do something. Even if they've convinced themselves they believe it or forced themselves to believe it, the only reason they did that is because it's on some level easier for them, not because it's a reasonable thing to believe.
But I appreciate that you took the time to read what I wrote. Thank you.

@asmith Thank you as well.

How does this resolve

@Joshua Resolves to top answer. I tried this first but turns out not everyone shared my assumption that human extinction obvious risk.

© Manifold Markets, Inc.Terms + Mana-only TermsPrivacyRules