At the end of 2023, will I believe that a rapid intelligence explosion is a plausible result of AI capabilities research, and the possibility is worth spending some non-negligible amount of effort investigating?
Resolved YES on Jan 3

I can't see any reason why it wouldn't be, but I also haven't looked into it that deeply. I know there are a lot of smart people who think it isn't possible, so it seems likely there's something they understand that I don't.

Resolves N/A if an intelligence explosion occurs before market close.

bought Ṁ11 of YES

The threat doesn't even need to come from this wave of AI to warrant some investigation.

bought Ṁ10 of NO

Is this worth spending your effort personally, or worth someone spending effort as part of civilization's cumulative endeavors?

@MartinRandall Civilization. Basically, is this a threat worth taking seriously?

predicted YES

@IsaacKing Its key claim seems to be "an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible". This seems like a niche reason not to expect an AI intelligence explosion to be plausible (e.g. most AI researchers do, I think, believe superhuman AI is possible). So my guess is that it isn't a particularly helpful thing to read on the topic.

predicted YES

@IsaacKing This page, which I mostly wrote, is about fast progress in the vicinity of human-level AI in general, but it mostly collects arguments I'd think of as separate from the intelligence explosion argument: https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/

How will your opinion change after an intelligence explosion?

@L My opinion is that there's a theoretical limit and we're going to hit it hard and fast next year. Then the "no foom" arguments will be true, because whatever foom exists will be done and over. (I also don't expect it to be qualitatively different from current-generation AI, even if it is distinctly superhuman: still hard to align, still quite possible.)

@L Resolves N/A if an intelligence explosion occurs before market close.

predicted YES

@IsaacKing Surely it should resolve YES in that case.

@vluzko On the contrary, taking the title literally, it should resolve NO, since if it's already occurred and I'm still alive, it's clearly no longer a future risk.

But I don't want this market to be affected by the probability of such an explosion occurring; I want it to be about people providing strong arguments to shift my beliefs.

https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

This article got posted below, I'm making a new thread for it in case anyone wants to talk about the arguments contained therein.

The core argument seems to be "no single intelligent human has invented some groundbreaking new technology on their own; it's always been a product of society and many different people working together and each contributing to a pool of shared knowledge. Therefore a single highly intelligent AI will not be able to effectively self-improve without that sort of support"

I'm not convinced the premise is true; I suspect there have been some inventions that were mostly made by one person. (Though of course they must have built on some pre-existing framework; none of the inventors were cavemen.) But even if we grant that premise, this doesn't seem like much of an argument that it can't happen in the future. (At the very least, an AI with enough computing power could simulate society in order to get further insights.)

We could apply the same argument to, say, mass murder. Throughout all of history, the only way to kill millions of people was as a communal effort, where thousands of people cooperated and shared their strength to accomplish this goal. Individual humans generally can't kill a lot of people on their own. Even ones with extreme physical strength don't do much better than the average. So I think we can safely say that there's no risk of one human ever gaining the capability to kill millions of people on their own.

predicted YES

@IsaacKing Yeah, I agree with you, I think the argument is wrongheaded on many levels. E.g. I see no reason to expect AI systems to be in any way like individual humans, rather than like collective intelligence.

bought Ṁ10 of YES

@jack I agree as well, and here's why.

When I read the phrase "The basic premise of intelligence explosion — that a "seed AI" will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false", I immediately thought of the Industrial Revolution. For most of human history we had comparatively little technology: it was only around 1760 (about fifteen years before the American Revolutionary War) that the Industrial Revolution began, and since then the share of the population working as farmers has dropped from roughly 90% to 27%. The majority of technological advancements were made in the last three centuries. That's a "sudden, recursive, runaway intelligence improvement loop" if I've ever seen one.

Additionally, the article assumes that "Answering 'yes' [to the question of whether an AI intelligence explosion could happen] would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself". This is what's known as the circular reasoning fallacy, where the reason given for a conclusion assumes the conclusion is true (https://www.youtube.com/watch?v=Id3TCbpWR2M). If we made an intelligence smarter than ourselves, we would have proven that an intelligent entity can design something smarter than itself.

The real question is whether an intelligence smarter than us would be able to avoid the logical fallacies we fall into, act morally, or ask humans whether its ideas sound reasonable. If the AI cannot do these things, then we have our apocalyptic intelligence explosion; if it can, then it will be able to self-regulate.

@SophiaLaird "The majority of technological advancements were made in the last 3 centuries. That's a "sudden, recursive, runaway intelligence improvement loop" if I've ever seen one."

Incremental innovation over three centuries = sudden, recursive, runaway intelligence improvement loop???

Um, no. Now, if your statement were "in the last three years" then maybe, but three centuries?!? That is silliness.

@BTE

"Incremental innovation over three centuries = sudden, recursive, runaway intelligence improvement loop???"

"Rapid" is relative. We talk about an intelligence explosion taking place in a matter of hours, but that's still incredibly slow compared to, say, the speed at which a computer processor runs.

When human technology remained the same for around 100,000 years and then improved by multiple orders of magnitude within the space of 0.2% of that, I think that can reasonably be considered "runaway". (It was clearly recursive.)

There is of course no sudden discontinuous jump, nor will there ever be. There are just changes in the growth rate, and if those changes are fast enough, they look like an inflection point when you zoom far enough out.
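
(To make the "0.2%" figure concrete, here's a minimal back-of-the-envelope sketch; the 100,000-year and roughly 200-year spans are just the rough figures used above, not precise historical data.)

```python
# Rough timescale comparison behind the "runaway" claim above.
# Both figures are rough assumptions, not precise history.
stagnant_years = 100_000  # approximate span of near-static pre-modern technology
takeoff_years = 200       # rough span since the Industrial Revolution

fraction = takeoff_years / stagnant_years
print(f"Takeoff window as a fraction of the stagnant period: {fraction:.1%}")
# prints: Takeoff window as a fraction of the stagnant period: 0.2%
```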

predicted NO

@IsaacKing I think the collective intelligence argument has merits. Take the "mass murder" analogy you make: effectively, one person can trigger a thermonuclear bomb and kill millions at once, but designing and building it took the efforts of thousands of people.

OK, the AI will also have the resources of the scientific literature, so it doesn't build on nothing. Let's assume it will be the equivalent of a team of 10 researchers; that would already be huge for a first AGI. It would probably need a lot of computing power, so it can't replicate at will. It will need to try its ideas and run experiments. Even measuring its own progress would be really challenging: how can it be sure the next iteration is genuinely more intelligent, rather than just better at solving a set of problems that won't generalize? So maybe after a few months it will get incremental progress. Then improving on that will be still more difficult, because of diminishing returns.

So we may have an intelligence explosion that takes 20 years, which would still be a huge thing. If you want hours, or even weeks, that seems very implausible to me.

bought Ṁ40 of YES

"Possible" is a low standard.

@StevenK Oh? How so?

predicted YES

@IsaacKing Well, it's a lot lower than "probable" or "plausible". I guess the main way an intelligence explosion could be impossible is if there were an upper bound to intelligence not far above the human level.

@StevenK To be clear, I mean "possible" in the sense that it's possible one could occur from humans developing AI technology.

@IsaacKing I haven't heard anyone claim it isn't possible; that sounds like an extremely strong position which would require strong arguments. I'm curious what their arguments are.

@jack Maybe I should have said "feasible" instead of "possible"? My impression is that a large swath of the AI capabilities research community thinks that an intelligence explosion is so infeasible as to not be worth worrying about.

@IsaacKing Ah, I see. That seems like a more plausible position, but still quite strong. I haven't previously seen any good arguments for it, but a quick Google found me https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec, which seems largely gibberish to me.

I think a more common position is that we're so far away from an intelligence explosion that it doesn't make sense to worry about it - by the time we get there, the state of AI research would look so different from now that any efforts now are meaningless. But that position just means infeasible now, not infeasible in the future.

sold Ṁ22 of YES

@jack Also they may think it's too unlikely to worry about given our world's circumstances, but still think it's possible in some worlds, e.g. if people were using different AI paradigms.

Will anyone mind if I edit the question to ask about the feasibility of an intelligence explosion within the next 50 years? I think that's closer to what I had meant to ask about, but I recognize that's not what I said and I don't want to mess up anyone's bets.

predicted YES

I'd say it's different. That something is possible is a weaker claim than that something is feasible within the next 50 years. You could be convinced of the former while not being convinced of the latter.

@VadimFomin Would you be ok with it if I removed the "50 years" part? How about something like:

Will I believe that a rapid intelligence explosion is a plausible result of AI capabilities research, and is worth spending some non-negligible amount of effort investigating?

predicted YES

@IsaacKing Don't know about other traders, but it's OK with me. The wording is nice, I think.

Ok, edit made. If anyone thinks this changes the position they want to hold, let me know and I'll compensate you the mana.

@jack The fact that that article starts off by dismissing intelligence explosions as "science fiction" just because they've appeared in fictional movies doesn't bode well, but I'll read through the whole thing just in case there's anything interesting there.

sold Ṁ36 of YES

@IsaacKing I agree this is an improvement to the question, and at 48 shares I think I lost under Ṁ10 of EV from the change, so it's fine.

predicted YES

@IsaacKing I skimmed the article and strongly disagreed with most of it, so I'm not recommending it, it's just the first thing I found.

@IsaacKing re 'My impression is that a large swath of the AI capabilities research community thinks that an intelligence explosion is so infeasible as to not be worth worrying about.'

I ran a big survey of ML researchers publishing at the NeurIPS and ICML conferences, and asked how likely the intelligence explosion argument below was to be broadly correct. Just over half said it had about an even chance, was likely, or was very likely.

Argument:

"If AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.

Over a short period (less than 5 years), this feedback loop could cause technological progress to become more than an order of magnitude faster."

https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Chance_that_the_intelligence_explosion_argument_is_about_right

There are a couple of other very relevant questions here: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Intelligence_explosion

@KatjaGrace I'm not sure that 5 years to go up one order of magnitude is really an "explosion" in the sense that I was thinking of. That's an annual growth rate of about 58%, which is certainly high, but seems like it's within the same range we're already in.
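
(For reference, a minimal sketch of the arithmetic behind the ~58% figure; it just converts "one order of magnitude in 5 years" into an equivalent compound annual growth rate.)

```python
# Convert "technological progress becomes 10x faster within 5 years"
# into the implied compound annual growth rate.
factor = 10   # one order of magnitude
years = 5

annual_growth = factor ** (1 / years) - 1
print(f"Implied annual growth rate: {annual_growth:.1%}")
# prints: Implied annual growth rate: 58.5%
```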

I think a question that better gets at what I'm thinking about would be this one:

Assume that HLMI will exist at some point. How likely do you think it is that there will be machine intelligence that is vastly better than humans at all professions (i.e. that is vastly more capable or vastly cheaper) within two years of that point?

Median response: 10%

And:

What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?

Median response: 10%.

Also, this selection process makes me concerned about selection bias:

We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. These people were selected by taking all of the authors at those conferences and randomly allocating them between this survey and a survey being run by others. We then contacted those whose email addresses we could find. We found email addresses in papers published at those conferences, in other public data, and in records from our previous survey and Zhang et al 2022. We received 738 responses, some partial, for a 17% response rate.

People who find it laughable that AI could be dangerous might not even bother responding to a survey about that.

predicted YES

@IsaacKing All seems right, except I'm less concerned about selection bias because a) the survey email was fairly ambiguous about what it would ask (something like 'future of the AI field', rather than anything sounding like 'wild science fiction superintelligence nonsense'), and b) we paid most respondents money (it varied by round, but often $50), so we hopefully mostly got people who like money rather than only people with strong views on the topic.

@KatjaGrace Ah, ok, that assuages my concerns as well.

