Will I think all AI hell broke loose in 2024?
2025
4% chance

Will I, at the start of 2025, think that all hell broke loose the previous year?

This criterion feels worryingly subjective to me, and I'm not expecting people to trade it much before a bunch of potential traders have asked me a bunch of questions.

But for example, if you don't expect 2024 to encompass more change than happened in 2022-2023 inclusive, you can go ahead and bet NO. That would definitely be an insufficient amount of total change to resolve YES; I'm looking for something more than merely "change twice as fast as the average of the last two years".

Conversely, if you already expect that, by the end of 2024, there will be tank battles inside the USA, between tanks driven by different factions of US soldiers who previously downloaded ChatGPT or Claude and have now been persuaded by their apps to fight in those battles (with or without OpenAI/Anthropic having sponsored this on purpose), you should go ahead and bet YES. This is sufficient (but not necessary) to count for me as all hell breaking loose.

In the event that all human beings on Earth are dead at the end of 2024, this market should be resolved N/A, not YES or NO. The intent is to forecast the case of nonfatal hell only, so as not to introduce issues about the relative future valuation of Manifold mana.

ADDED 1: Sufficiently huge positive changes also count as YES; all heaven breaking loose is a special case of all hell breaking loose.

bought Ṁ100 of YES

It's more like 15%

bought Ṁ0 of NO

@ooe133 I'd take that bet

predicts YES

@Joshua I'm thinking 15% per year for the next 5 years (~45% chance of AI hell not breaking loose in any of those years; or ~65% chance of it not breaking loose if there's a 15% chance this year and a 7% chance of AI hell breaking loose in each of the following 4 years, given that it hasn't broken loose yet).
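The compounding in the comment above can be sanity-checked in a few lines (a sketch using the commenter's figures; the 15% and 7% per-year rates are their assumptions, not anything from the market itself):

```python
# Chance that AI hell does NOT break loose in any of the next 5 years,
# under the two assumptions from the comment above.

# Flat 15% per-year hazard:
survive_flat = (1 - 0.15) ** 5                  # ~0.44, i.e. ~45% no-hell

# 15% hazard this year, then 7% in each of the following 4 years:
survive_decay = (1 - 0.15) * (1 - 0.07) ** 4    # ~0.64, i.e. ~65% no-hell

print(f"{survive_flat:.2f} {survive_decay:.2f}")  # → 0.44 0.64
```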

I don't know about Claude-powered tanks fighting in Kansas or whatever's going on with that, but AI microtargeting has already been doing some pretty intense crap over the last 4 years and geopolitics are heating up, so ~15% still sounds reasonable to me.

At what point does the quantitative tip over into the qualitative for you? Someone earlier mentioned GDP; obviously a 100x increase in GDP (if that's meaningful) would be YES and a 2% increase on its own would be NO; somewhere in the middle there's a fuzzy region where quantity is becoming a quality all its own. On the other hand, it also feels like this is not capturing the spirit of the question you're asking.

To what extent does potential hell need to turn into kinetic hell? Are there specific AI capabilities (not just "what it's been hooked up to" but "what it can actually more-or-less do") that would qualify here, or does this question require that the capabilities actually be misused (by humans or by the AI itself)?

@josh I'm mostly interested in kinetic hell; or rather, potential hell figures in at a discount of two orders of magnitude, except to the extent that other people know about it and so it triggers actual kinetic sweeping political change.

This is about impact rather than misuse; good impacts also count, as do impacts that nobody intended, à la Covid.

The basic criterion is qualitative; I'm not setting a quantitative GDP threshold because I don't know the point at which a GDP increase turns into it feeling like our lives became frantic or one era of the world ended and was replaced by another; it partially depends on other things than GDP.

@EliezerYudkowsky To what degree do changing perceptions of AI affect your resolution criteria here? If AI capabilities have not gotten substantially higher and the volume of negative/garbage output from AI isn't substantially higher, but there's a public perception that 2024 was the year when (for instance) it stopped being reasonable to trust video, does that widespread perception change affect your resolution at all? (Prediction: no, for this market you care about the actual state of the world and not people's perception of that state, whether accurate or otherwise. Also prediction: if anything it would be a good thing if people started to notice.)

Conversely, does a substantial actual increase in the abuse of AI (e.g. floods of AI generated garbage) without any substantive increase in capabilities being used still count? (Prediction: yes, this counts.)

Is the base rate that "all hell breaks loose" around once or twice a decade?

2020s: 1 (COVID)

2010s: 0 (none)

2000s: 2 (9/11 and the 2008 financial crisis)

1990s: 1 (fall of the USSR)

Are there any on this list that you wouldn't count, or any that I missed? Going further back, what events would count? Certainly the world wars and Great Depression. What about the Cuban missile crisis (resolved peacefully), the American Civil War or French Revolution (all hell broke loose in the US and France respectively), or the dissolution of the Ottoman Empire (all hell broke loose in the Middle East, and it has continued to be hellish since)?

That means a base rate of around 10% per year. Then we need to consider whether AI developments make it more or less likely (really, how much more likely) and whether a potential 2024 hell-breaks-loose would be AI caused.
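As a quick sanity check on that base rate (a sketch; the per-decade tallies are the ones listed above, treating each decade as a full 10 years):

```python
# "All hell broke loose" events per decade, from the list above:
events = {"1990s": 1, "2000s": 2, "2010s": 0, "2020s": 1}

# 4 events across 40 years of history:
rate_per_year = sum(events.values()) / 40
print(rate_per_year)  # → 0.1
```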

(or what if all hell breaks loose, but although AI is involved (as it always is with everything nowadays) it isn't the primary cause? Like China and the US going to war over Taiwan: both sides use AI, but both sides have been preparing for this for decades. However, AI has made Taiwan's chip industry more valuable…)

@MatthewKhoriaty I think a key notion here is the extent to which AI hell is ongoing and neverending once it breaks loose, rather than being a one-time shift that settles. The French Revolution in most rich countries qualifies even if it only happens once; the Cuban missile crisis counts more if it looks to be happening twice a year ever after.

@EliezerYudkowsky which year would you say hell broke loose because of computers/smartphones?

@EliezerYudkowsky or is the analogy stupid and the comparison doesn’t make sense?

@Soli I wouldn't call it stupid but it doesn't feel like it makes very much sense? Computers have been a very soft and gradual process so far. Maybe if there was a direct nigh-monocausal link from smartphones to 9/11, the Great Recession, and Trump, I'd have said that cumulative hell had effectively broken loose due to smartphones after the Trump part.

bought Ṁ1 NO at 10%

would the release of an AI agent that is at least twice as capable as GPT-4 and is able to execute tasks for an extended period (e.g. 30 minutes) resolve this market as yes?

or would this market only resolve yes if the model is tied to causing significant harm/good in the world?

i guess in other words, can a significant increase in ai capabilities alone, without significant impact on the world, still resolve this as yes?

@Soli Not an increase of that lesser magnitude, if there's no downstream consequences? In general, lab results alone would have to be very extreme to count as hell breaking loose. Lab results that cause street mobs would count more so, if that happens to happen.

What about the following scenarios:

1) White collar workers unionize to protect their jobs against AI? Does that count?

2) Let’s say that it becomes a highly partisan talking point in the 2024 US presidential election and there’s unrest comparable to 1/6? Does that count as all hell breaking loose?

@KLiamSmith 2) no. 1) not unless it's a massive thing across lots of industries and is effective to some actual degree and has more large downstream consequences.

bought Ṁ190 of YES

Gonna make all heaven happen 💪💪💪

How would you feel about a situation where there are a number of significant protests (that make national or international headlines, let's say, much more than we've seen so far) but otherwise no unexpected tech developments?

@Putcallparity If the protests don't otherwise create great change, meh. Protests the magnitude of the Occupy protests would be, like, zero, on this scale.

@EliezerYudkowsky What if it results in widespread property damage, like summer 2020?

@Putcallparity Only if something changes downstream of that. If AIs had gotten Trump elected or built Covid-19 and this etiology was generally known, I would have probably said that counted.

Well, do you not feel like that all the time?

@Zozo001CoN Not yet, no.

Suppose a world where doom hasn’t happened yet, and world gdp has gone up 20%, AI drops a reliable mass producible flying car design on us, and we seem to be generally entering a very golden age of absurd prosperity. Still YES, or NO?

In other words, if massive shockingly big change but it seems net positive, how does the market resolve?

@DaveK YES for that conjunction. I'd have to think about whether a 20% GDP increase counts on its own, especially if it's much more pronounced in developing than in rich countries (faster catchup rather than faster frontiers, which, to be clear, would be something to morally celebrate even more than +20% on the frontiers, but would be less of a breaking-loose). In general, all heaven breaking loose counts as a special case of all hell breaking loose.