There will be only one superintelligence for a sufficiently long period that it will become a singleton
60% chance

Full question:

There will be only one superintelligence rather than multiple for a sufficiently long period that it will become more powerful than any single government (i.e., unipolar AI takeoff).

One of the questions from https://jacyanthis.com/big-questions.

Resolves according to my judgement of whether the criteria have been met, taking into account clarifications from @JacyAnthis, who made these predictions. (The goal is that they'd feel comfortable betting at their credence in this market, so I want the resolution criteria to match their intention.)

predicts YES

Come help me define "superintelligence" more rigorously here:

Is this about whether the winner that emerges from the race will be a singleton group, or whether the race itself will just be a single "group" so far ahead that it's a one-horse race?

predicts YES

@firstuserhere Not sure I understand the distinction you're drawing, can you elaborate?

predicts YES

@firstuserhere The events entailed in this market are (i) the creation of the first superintelligence, (ii) the first superintelligence becoming more powerful than any single government, (iii) the creation of the second superintelligence.

A "race" between AIs is only relevant for this market if it happens before (ii) or (iii). If you mean a "race" between multiple superintelligences, that can only happen after (iii), and thus it has no causal bearing on this market's resolution.

@JacyAnthis We shouldn't assume that superintelligences are by default general or of the same variety.

Given that AGI is considered possible, and ASI (artificial superintelligence) is expected to occur before AGI, it's quite plausible that we may see multiple flavors of ASI before the emergence of a true AGI. This could lead to a more diverse landscape of superintelligences with different strengths and capabilities, making the singleton scenario less certain.

ASI#1 may be capable of generalizing in domains {d1, d2, ..., d100}, whereas ASI#2 may be capable of generalizing in domains {d1, d10, d20, d30, ..., d1000}. Both may be ASIs, and both may satisfy your points (i) and (ii) without that implying point (iii).

@firstuserhere (and the formation of a singleton requires generalization across some larger set of domains that neither ASI#1 nor ASI#2 covers fully, but that each overlaps with, thus giving multiple ASIs without singleton formation)
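The set picture in the two comments above can be sketched concretely. This is only an illustration of the argument, with hypothetical domain labels; the specific sets and the "required" coverage threshold are assumptions, not anything defined by the market:

```python
# Illustrative sketch: two hypothetical ASIs with overlapping but distinct
# domain coverage, and a "singleton" test asking whether a single system
# covers every required domain on its own.

# ASI#1 generalizes over d1..d100.
asi1 = {f"d{i}" for i in range(1, 101)}

# ASI#2 generalizes over d1 plus every tenth domain up to d1000.
asi2 = {"d1"} | {f"d{10 * i}" for i in range(1, 101)}

# Assume (hypothetically) that singleton formation requires covering
# all of d1..d1000.
required = {f"d{i}" for i in range(1, 1001)}

def can_be_singleton(coverage: set, required: set) -> bool:
    """True only if one system's coverage includes all required domains."""
    return required <= coverage

print(can_be_singleton(asi1, required))         # False
print(can_be_singleton(asi2, required))         # False
print(can_be_singleton(asi1 | asi2, required))  # False: e.g. d101 is covered by neither
```

So on these assumptions, both systems count as ASIs under points (i) and (ii), yet neither one alone, nor even their union, meets the coverage a singleton would need, which is the commenter's point.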

predicts YES

@firstuserhere There's a lot there, but just to be clear, I see superintelligence as a subset of AGI. If being vastly more intelligent in a single domain counted, I think we'd need to do a lot of ad hoc exceptions so that current AIs didn't qualify.

But I would agree that superintelligences aren't necessarily identical (even in architecture), and that sort of specialization is relevant in the odds.

And just to reiterate, if there are multiple superintelligences and no singleton (as defined in the question), that is exactly the criterion for a NO resolution.

@JacyAnthis "If being vastly more intelligent in a single domain counted, I think we'd need to do a lot of ad hoc exceptions so that current AIs didn't qualify."

Yesterday I rewatched the AlphaGo documentary after many years. This reminds me of move 37 from game 2 and move 78 from game 5.

Move 37 was very unconventional and unexpected, but everyone agreed the move was brilliant, very hard for a human to find, and that it decided the eventual game result.

Move 78 from game 5 was a "stupid" move, as if the search algorithm had found a bad local optimum and spiraled from there. The next move was even worse, and the next worse still. AlphaGo's own estimated winning probability dropped from ~91% to ~50%, and as it kept making bad moves it eventually fell to, I think, ~9%.

And then, when everyone thought the game was done because the search algorithm had gotten stuck and was producing nonsensical results, AlphaGo won the game within the next two moves.

Measuring which areas or domains an ASI can generalize to is incredibly difficult without understanding the internal space the algorithm is capable of navigating, and where one domain ends and another begins. What seems creative or clever to humans may be trivial in the vast space of solutions. An ASI that appears able to do virtually anything may be called an AGI without necessarily being capable of generalizing to unknown domains.

bought Ṁ5 of NO

I think a lot of people talking about superintelligence have never run anything on a large scale. Data corruption and hardware failures are crippling when you're running hundreds of nodes.

bought Ṁ10 of NO

@MarkIngraham Oh yes, the famous Google that crashes every day, or the <20-week-old ChatGPT, out of a previously small company with little experience at this much scaling, still being up >95% of the time.

predicts NO

@Dreamingpast Google is massively parallel; in any given location you're only querying ten or twenty nodes. Google has a million servers, at 10% capacity and 10% doing search, so about 100 per country.
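The arithmetic behind that last claim can be made explicit, taking the commenter's figures at face value (one million servers, 10% utilized, 10% of those on search, roughly 100 countries — all of these are the commenter's assumptions, not verified data about Google):

```python
# Back-of-the-envelope check of the commenter's figures (their assumptions,
# not verified infrastructure data).

total_servers = 1_000_000
in_use = total_servers * 0.10        # 10% capacity -> 100,000 servers in use
doing_search = in_use * 0.10         # 10% of those on search -> 10,000
countries = 100                      # rough assumption

per_country = doing_search / countries
print(per_country)  # 100.0
```

Under those assumptions the "100 per country" figure does follow, though the conclusion is only as good as the inputs.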