Why did we survive AI until 2100?
65%
A big anthropic shadow
55%
A small group made it impossible for anyone else to develop AI.
52%
A humanitarian catastrophe hampered AI progress.
49%
AI became safer as it got more powerful without much human effort outside of some RLHFing.
45%
AI never got the capability to cause extinction.
44%
Cognitive enhancement helped a lot.
35%
Humanity coordinated on having a sufficiently long and strong AI moratorium.
34%
Jono from 2023 does not think I (the one being polled in 2100) qualify as a person.
31%
Brain uploading helped a lot.
28%
A plan to mitigate AI risks succeeded and already had a post about it on the Alignment Forum in 2023.
23%
Open-source AI created an egalitarian world where no one (or few) got into a position to (accidentally) kill everyone.
12%
Nobody wanted to develop AI anymore.
5%
Humanity spread out over independent space colonies.

I, or someone inheriting this question, will poll people on it in 2100 and resolve each answer to the proportion of poll respondents who answered "yes".
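As a concrete illustration of that resolution arithmetic, here is a minimal sketch in Python. The answer names, the poll responses, and the `resolution_percentages` helper are all made-up examples for illustration, not actual market data or Manifold functionality.

```python
# Hypothetical sketch of the 2100 resolution arithmetic described above.
# Each answer resolves to the fraction of poll respondents who said "yes" to it.

answers = [
    "A big anthropic shadow",
    "Humanity coordinated on having a sufficiently long and strong AI moratorium.",
]

# Made-up poll: each respondent answers "yes" or "no" for every option.
poll_responses = [
    {"A big anthropic shadow": "yes",
     "Humanity coordinated on having a sufficiently long and strong AI moratorium.": "no"},
    {"A big anthropic shadow": "yes",
     "Humanity coordinated on having a sufficiently long and strong AI moratorium.": "yes"},
]

def resolution_percentages(answers, responses):
    """Resolve each answer to the proportion of respondents who answered 'yes'."""
    results = {}
    for answer in answers:
        yes_votes = sum(1 for r in responses if r.get(answer) == "yes")
        results[answer] = yes_votes / len(responses)
    return results

print(resolution_percentages(answers, poll_responses))
# -> {'A big anthropic shadow': 1.0, 'Humanity coordinated ...': 0.5}
```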

I'll give you 5–100 Manifold bucks if you post another good possible answer in the comments.

The answer about whether the one being polled qualifies as a person is there to control for scenarios where something weird happened during the passing down of the responsibility for resolving this question.

Apologies to any non-humans who join human discourse between now and 2100. I'll edit the term "humanity" once I find a non-confusing term that encapsulates the group of all nearby moral patients.

Huh, another AGI survival prediction market?

Yes, this one is not a "pick one from many" but just a collection of yes/no questions, which I think is more informative.
- By Isaac King
- By Yudkowsky
- By Yudkowsky's community

answered 1y
Humanity coordinated on having a sufficiently long and strong AI moratorium.
1mo

I can't see a path where this would ever be true. AI is too valuable a thing for everyone to link hands and cooperate.

Additionally, this seems like an incomplete answer. Unless the moratorium lasts for nearly 75 years, the only way it'd help is by giving time to agree on some other "real" fix that will endure. But in that case, the moratorium will just be a footnote in history.

1mo

The world coordinated to get enough time to save humanity and the future, when a lot of people thought it was impossible.

Whatever the means to secure AI we find after this, it will not be a footnote in history; it is epic.

1mo

One path is that the most powerful AIs are never given intrinsic motivations of their own.

If a person uses AI as a tool to engineer a super virus, who actually killed humanity? The gun, or the person holding the gun?

In this "the person holding the AI did it" viewpoint, we're just as likely to see AI used to save humanity, e.g. used to help develop a vaccine.

To develop this a little further, what if we handed every org of more than 100 people the ability to launch a nuclear warhead? That kind of power in the hands of so many people is obviously folly, because someone is going to decide to launch.

AI is of course the warhead, but AI is more flexible. Every org is going to consider the risk, and many of them will set their AIs to work on building a nuclear defense shield (or equivalent).

Offense is usually easier than defense. I think humanity will be in for a rough time. But AI will get used for both roles, and it may ultimately be a human who's "to blame" if AI shatters civilization.

1y

“An AI singularity or intelligence explosion never happened.”

answered 1y
AI became safer as it got more powerful without much human effort outside of some RLHFing.
1y

Removed a typo; this sentence used to be:

"As AI became safer as it got more powerful without much human effort outside of some RLHFing"

1y

This question got me thinking about optimal formats for this, so I'm trying a weird one where all trades are cancelled after a week but then it re-resolves in 2060.

If humanity avoids AI Doom until 2060, what will have contributed to this? [Resolves N/A, then re-resolves in 2060]

1y

“AutoGPT6 escaped, started doing something, was caught and vivisected. This was enough of a warning shot to create AI CERN, which miraculously succeeded.”

I am sufficiently pessimistic about humanity's ability to coordinate that I think most surviving worlds in 2100 are ones in which we are just lucky and it turns out that the relevant technology is much harder to invent than we think it is. Specifically, we might be lucky and:

A) The next AI breakthrough on the order of Transformers simply never arrives. LLMs keep getting better, but no amount of additional training data makes them a superintelligence.

or

B) It turns out there are no superweapons. Nanotechnology just doesn't work how we currently expect it to, engineering super-viruses turns out to be impossible, etc. Without any easy way to kill us all instantly, AI decides to work with us instead.

I am sure someone could phrase these better than me, but they are what I'm hoping for. I still think we should be desperately trying to coordinate moratorium treaties and develop human intelligence augmentation etc, but I doubt we pull those off.

1y

@Joshua I’d like to second (B) especially. I work in nanoscience and I’m shocked by how seriously people take Eric Drexler’s ideas (I hesitate to say pseudoscience, but they’re certainly not very rigorous). I just don’t think it’s plausible that even a superintelligence could figure out how to engineer self-replicating nanobots and the like.

1y

Previous questions like this to mine for answers:

23%
J. Something 'just works' on the order of eg: train a predictive/imitative/generative AI on a human-generated dataset, and RLHF her to be unfailingly nice, generous to weaker entities, and determined to make the cosmos a lovely place.
12%
K. Somebody discovers a new AI paradigm that's powerful enough and matures fast enough to beat deep learning to the punch, and the new paradigm is much much more alignable than giant inscrutable matrices of floating-point numbers.
10%
H. Many competing AGIs form an equilibrium whereby no faction is allowed to get too powerful, and humanity is part of this equilibrium and survives and gets a big chunk of cosmic pie.
9%
Something wonderful happens that isn't well-described by any option listed. (The semantics of this option may change if other options are added.)
15%
Yudkowsky is trying to solve the wrong problem using the wrong methods based on a wrong model of the world derived from poor thinking and fortunately all of his mistakes have failed to cancel out
12%
Other
11%
Alignment is not properly solved, but core human values are simple enough that partial alignment techniques can impart these robustly. Despite caring about other things, it is relatively cheap for AGI to satisfy human values.
1y

The new answer format you're using is better, of course.

1y

@Joshua Thanks. I totally missed this despite doing some searching. Maybe I'll close the market if the overlap is too large.

1y

@Jono3h Okay, that being a "pick one" question makes it pretty unappealing. My market stays live!

1y

@Jono3h Don't close it! An unlinked multichoice is much better than those old linked parimutuel markets.

1y

@Joshua Oh this also exists which is a duplicate of EY's market but with the current linked format that allows shorting:

So I would expect it to perhaps have more accurate percentages than EY's original, even though it has fewer traders. Probably also worth including the description?

My reasoning is that in general, large groups of people mostly make big changes in response to disasters.

Most of my probability mass is on things like energy scarcity, climate issues, etc. that just make AI research unfeasible.

Also significant is a failed takeover, causing everyone to understand the risk more viscerally. But that's hard to estimate. It's hard to imagine an AI causing significant enough damage without also just winning.

If it turns out to be too hard to make creative agents, then we survive for free. I wouldn't count on it but possibly it's true.

reposted 1y

reposting because it's a good question, please submit!

1y

this is a great question, but it's too far beyond my planning horizon to expect useful resolution. no bet.

1y

@L 2030 or 2040 feel more bettable

1y

I can make a copy of this for 2040, though I expect similar results (and these questions cost me 30% of my entire capital to create).
Gimme bucks, or make one and I'll link it.
