Is risk of extinction from AI >1% in the next century / Should we be spending >1% of our resources to prevent this?
resolved Jan 1
P(Extinction) < 1%, Spending should be < 1%
P(Extinction) < 1%, Spending should be > 1%
P(Extinction) > 1%, Spending should be < 1%
P(Extinction) > 1%, Spending should be > 1%

Many forecasts on this site estimate a >1% risk of human extinction in the next century from artificial superintelligence.

If that's true, it seems to me we should be dedicating far more resources to preventing this! At least 1%, if the risk of all of us dying is greater than 1%.

But what does Manifold think?

This isn't about how the money should be spent on preventing AI Doom; it's about how much should be spent in your best-case scenario.

Just imagine we're doing all the things that you think should be done, every nation is working together, and the funding is coming from wherever you think is best. The spending can be any mix of public and private you think is best.

To give a simple sense of scope, 1% of the US government's budget would mean about 60 billion dollars a year being spent on the problem.


A thought: How many people and resources do we have working on making planes safer? Plane crashes killed about 80k people in the last 50 years (https://en.wikipedia.org/wiki/Aviation_accidents_and_incidents). If AI has just a 1 in 10,000 chance of killing at least 10% of humans, then it is a bigger safety issue than plane crashes - and that's not even to mention the knock-on effects of such a global catastrophe, or the extreme downside risk of extinction. (To be clear, I think the probability of AI killing a large fraction of humanity is far greater than that.)
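A rough back-of-the-envelope version of that comparison (the ~8 billion world population is my own assumption; the 1-in-10,000 chance and the 10% fraction are the figures above):

```python
# Illustrative expected-fatalities comparison, not a forecast.
world_population = 8_000_000_000   # assumed current world population
p_catastrophe = 1 / 10_000         # hypothetical 1-in-10,000 chance from above
fraction_killed = 0.10             # "at least 10% of humans"

expected_ai_deaths = world_population * p_catastrophe * fraction_killed
plane_deaths_50_years = 80_000     # figure cited above

print(f"{expected_ai_deaths:,.0f}")   # 80,000 expected deaths
print(f"{plane_deaths_50_years:,}")   # 80,000 deaths over ~50 years
# Even at 1-in-10,000, the expected toll is on the same order as five decades
# of plane crashes, before counting knock-on effects.
```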

I bet there are far more people working on plane safety than AI safety, even though I think the risk of death due to AI is far far greater than the risk of death due to plane crash.

(Obviously another important question is how much marginal expected impact additional safety work has, but I also think that is high enough to be well worth it.)

@jack I don't think many people work on airplane safety at all. Safety is incorporated into design because we can reliably predict the safety of non-experimental aircraft using basic physics. Unsafe aircraft designs are never built, and well-maintained aircraft don't crash. The Boeing issues with the MAX are the exception that kinda proves my point, because the FAA allowed them to self-assure the quality of their designs, and they missed a single point of failure that caused the planes to literally dive into the ground no matter what the pilots did about it. The plane itself was perfectly safe until faulty sensors were given autonomous control over the pitch of the aircraft. Basically nobody was really working on "airplane safety," and those who did say something were suppressed by executives more concerned about delivering aircraft to the airlines that bought them on schedule.

@BTE A lot of people work on airplane safety as part of their work on airplanes in general. Whereas few people work on AI safety as part of their work on AI - and we should change that! There are also people whose roles are centered around airplane safety, like those at the NTSB.

The only reason you say "Unsafe aircraft designs are never built" is that we spend so much effort on the problem throughout the industry.

@jack NTSB is not about safety. They are an accident investigation agency. I think there is a big difference.

If spending >1% would avoid extinction (or, even better, produce aligned AGI), then it would be an easy choice. However, I can't see how - today, at least - we would spend that effectively.

@Tomoffer Interpretability research IMHO

Based on what reasoning do people think you'd be able to solve the alignment problem? You could give an ant as much money and as many resources as you want; it is not going to get a human to be aligned with its goals.

Why would this scenario be different? No point throwing money down a bottomless pit unless there is good reason to think that there is a solution to be found.

@Toby96 And that's before you even introduce the question of "which human(s) to align with". In reality it will be a small group of rich parasites with which a hypothetical AI would be aligned. No thanks!

We can't even align governments with humanity as a whole. We've had millennia of time to align those, and it isn't a solved problem. There is no reason to think that aligning superintelligence is feasible.

@Toby96 Made a poll about the feasibility of solving the alignment problem, for those interested: https://manifold.markets/Toby96/is-a-solution-to-the-agi-alignment?r=VG9ieTk2

While we should be spending more money on AI safety, there are much more pressing things that need to be done. The average 4-year-old could run our country better than its current leaders.

1% of the 2023 U.S. federal budget is $58 billion. That's enough to pay 100,000 AI researchers $580,000 each per year to work on alignment (or 200,000 researchers $290,000 each, etc.) - enough for the top 1,000 AI labs to hire 100 alignment specialists each while paying them ridiculous amounts of money. I'm not an expert, but that kind of sounds like overkill.
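As a quick sanity check on that arithmetic (treating the $58 billion figure and the headcounts above as given):

```python
# Sanity check on the salary math above.
budget = 58_000_000_000           # 1% of the 2023 U.S. federal budget, as cited

print(budget / 100_000)           # 580,000.0 -> 100,000 researchers at $580k each
print(budget / 200_000)           # 290,000.0 -> 200,000 researchers at $290k each
print(budget / (1_000 * 100))     # 580,000.0 -> 1,000 labs x 100 specialists each
```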

(FWIW, my P(AI Kills Everyone by 2100) is less than 10%, but probably higher than 5%.)

I do think that specifically would be a waste of resources, but I think there are a lot of other ways we could be directing resources towards the problem. I'd personally opt for 20 programs that each cost 3 billion instead of one big program that costs 60.

@evergreenemily you don't achieve results by hiring more people to attempt the impossible; you do it by doing what's necessary, no matter how hard it is.

@Joshua That makes absolutely zero sense. If this problem is as real as many of you think, just throwing as much money as possible at it IS NOT GOING TO HELP. How did the trillions spent to solve terrorism work out? How about the tens of billions already spent on fusion technology? The federal government WILL NOT solve this problem. Like it or not, it's gonna be up to industry to get it right.

@Joshua The Manhattan Project was the exception because the physics was known and the end goal was clearly within grasp. It was a problem-solving issue, not an inventing-new-things-from-scratch issue. It was solved because it was clearly solvable. That cannot be said for AGI, which has no such physics backing up the claims made about it.

That's a reasonable argument, and notably the poll description does say that the hypothetical spending could be any mix of public/private that you prefer.

Government spending could also just be the cost to bribe reluctant nations to participate in a worldwide ban of AGI and the cost of enforcing that ban.

@Joshua I don't think any amount of money would get that ban to work - opposition to it would be too widespread, and enforcement would be necessarily authoritarian (how do you stop people from secretly creating an AGI in their basement?)

@BTE Fusion was underfunded even with respect to the “fusion never” scenario. Bad example. Still, public funding of research that benefits the public and that private entities won’t do sounds like a good idea to me. We just need to agree on the definition of “good research” now :-P

@Cytokine congrats on being the most cautious person so far.

The focus on extinction seems a bit too narrow, but this may as well apply to any other major problem with AI (basically just watch Black Mirror). If the chance of at least one of those things going wrong is >1% (even if each is individually <1%), it's definitely worth investing to prevent that. Interpretability, fairness, etc.

Yeah I imagine a rising tide of AI Safety floats all boats.

Though I think the only thing in Black Mirror I would want tens of billions of dollars spent guarding against is the swarm of killer bee-bots from Season 3, Episode 6, I could be forgetting other cases.

> 1% AGI is impossible.

Not sure I follow. Like, you could believe there's an 80% chance AGI is impossible, but believe that in the 20% of worlds with AGI, 5% cause human extinction.

Do you mean you think there's <1% chance that AGI is possible?
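To make the decomposition explicit (using the illustrative 80% / 5% numbers from the comment above):

```python
# Illustrative decomposition of extinction risk when AGI might be impossible.
p_agi_impossible = 0.80
p_extinction_given_agi = 0.05

p_extinction = (1 - p_agi_impossible) * p_extinction_given_agi
print(p_extinction)   # 0.01 -> a 1% overall extinction risk
```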

probably more like > 15-20% AGI is impossible. I have never seen a compelling argument that it is possible, very similar to how I have never heard a compelling argument for god. The novelty/impressiveness of ChatGPT and the others has in fact diminished since their release, even just over the last 8 months or whatever, not gotten stronger.

@Joshua Sure. What I am saying is that talking about extinction is a huge mistake because it presumes AGI is possible; it takes it as a given. I am saying that is a mistake.

@BTE Want to bet on whether AGI is developed, maybe this century? I think it's very likely it will be.

What I am saying is that talking about extinction is a huge mistake because it presumes AGI is possible; it takes it as a given. I am saying that is a mistake.

Completely disagree with this rationale. To make things clearer, I'm going to talk about a hypothetical that isn't AGI - I dunno, let's say we're debating extinction due to aliens.

I can very well believe that aliens existing is impossible with 10% probability (I don't, this is a hypothetical) while also believing that extinction due to aliens is 10% probable. Those two outcomes are mutually exclusive, and they add up to 20%, which is less than 100% - so the combination is logically sound.

If you believe it's only 15-20% impossible, that should only reduce your extinction risk estimate by about 15-20%.
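A quick sketch of that scaling (the 10% conditional risk is just a placeholder number, not anyone's actual estimate):

```python
# If AGI being impossible rules out AI extinction, the overall risk is the
# conditional risk scaled by P(AGI is possible).
p_extinction_if_possible = 0.10     # placeholder conditional estimate

for p_impossible in (0.15, 0.20):
    overall = (1 - p_impossible) * p_extinction_if_possible
    print(f"P(impossible) = {p_impossible:.0%} -> overall risk {overall:.1%}")
# The estimate drops from 10% to 8.5% or 8.0% - reduced by the 15-20% factor,
# not eliminated.
```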

@BTE Impossible in what way? Like literally against the laws of physics? I don't see any reason why that would be the case.

@SemioticRivalry it might be against the laws of physics, but I was thinking more of it being unattainable because it's beyond the practical resource constraints of humanity, especially if it requires making very difficult trade-offs.
