I can't see any reason why it wouldn't be, but I also haven't looked into it that deeply. I know there are a lot of smart people who think it isn't possible, so it seems likely there's something they understand that I don't.
Resolves N/A if an intelligence explosion occurs before market close.
how will your opinion change after an intelligence explosion?
@L my opinion is there's a theoretical limit and we're going to hit it hard and fast next year. then the "no foom" arguments will be true because what foom exists will be done and over. (I also don't expect it to be qualitatively different than current generation AI, even if it is distinctly superhuman. still hard to align, still quite possible.)
@vluzko On the contrary, taking the title literally, it should resolve NO, since if it's already occurred and I'm still alive, it's clearly no longer a future risk.
But I don't want this market to be affected by the probability of such an explosion occurring; I want it to be about people providing strong arguments to shift my beliefs.
This article got posted below, I'm making a new thread for it in case anyone wants to talk about the arguments contained therein.
The core argument seems to be "no single intelligent human has invented some groundbreaking new technology on their own; it's always been a product of society and many different people working together and each contributing to a pool of shared knowledge. Therefore a single highly intelligent AI will not be able to effectively self-improve without that sort of support"
I'm not convinced the premise is true; I suspect there have been some inventions that were mostly made by one person. (Though of course they must have built on some pre-existing framework; none of the inventors were cavemen.) But even if we grant that premise, this doesn't seem like much of an argument that it can't happen in the future. (At the very least, an AI with enough computing power could simulate society in order to get further insights.)
We could apply the same argument to, say, mass murder. Throughout all of history, the only way to kill millions of people was as a communal effort, where thousands of people cooperated and shared their strength to accomplish this goal. Individual humans generally can't kill a lot of people on their own. Even ones with extreme physical strength don't do much better than the average. So I think we can safely say that there's no risk of one human ever gaining the capability to kill millions of people on their own.
@jack I agree as well, and here's why.
When I read the phrase "The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false", I immediately thought of the Industrial Revolution. For most of human history we had comparatively little technology: it was only around 1760 that the Industrial Revolution began, after which the share of the population working as farmers eventually dropped from around 90% to under 30%. The majority of technological advancements were made in the last three centuries. That's a "sudden, recursive, runaway intelligence improvement loop" if I've ever seen one.
Additionally, the article asserts that "Answering 'yes' [to the question of whether an AI intelligence explosion could happen] would fly in the face of everything we know — again, remember that no human, nor any intelligent entity that we know of, has ever designed anything smarter than itself". This is the circular reasoning fallacy, where the reason given for a conclusion assumes that the conclusion is true (https://www.youtube.com/watch?v=Id3TCbpWR2M). If we made an intelligence smarter than ourselves, we would have proven that an intelligent entity can design something smarter than itself.
The real question that needs to be asked is whether an intelligence smarter than us would be able to avoid the logical fallacies we fall into, to act morally, or to ask humans whether its ideas sound reasonable. If the AI cannot do these things, then we have our apocalyptic intelligence explosion; if it can, then it will be able to self-regulate.
@SophiaLaird "The majority of technological advancements were made in the last 3 centuries. That's a "sudden, recursive, runaway intelligence improvement loop" if I've ever seen one."
Incremental innovation over three centuries = sudden, recursive, runaway intelligence improvement loop???
Um, no. Now, if your statement were "in the last three years" then maybe, but three centuries?!? That is silliness.
"Incremental innovation over three centuries = sudden, recursive, runaway intelligence improvement loop???"
"Rapid" is relative. We talk about an intelligence explosion taking place in a matter of hours, but that's still incredibly slow compared to, say, the speed at which a computer processor runs.
When human technology remained the same for around 100,000 years and then improved by multiple orders of magnitude within the space of 0.2% of that, I think that can reasonably be considered "runaway". (It was clearly recursive.)
There is of course no sudden discontinuous jump, nor will there ever be. There are just changes in the growth rate, and if those changes are fast enough, they look like an inflection point when you zoom far enough out.
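For concreteness, here's the back-of-the-envelope arithmetic behind that timescale comparison, as a quick sketch (the 100,000-year figure is from the comment above; the 260-year span for the industrial era is my own rough assumption):

```python
# Back-of-the-envelope: the industrial era as a fraction of the time
# anatomically modern humans have existed. Both figures are rough,
# order-of-magnitude estimates.
human_history_years = 100_000
industrial_era_years = 260  # roughly 1760 to the present
fraction = industrial_era_years / human_history_years
print(f"{fraction:.2%}")  # prints 0.26%, i.e. the "0.2%" ballpark above
```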
@IsaacKing I think the collective intelligence argument has merit. Take the "mass murder" analogy you make: one person can effectively trigger a thermonuclear bomb and kill millions at once, but designing and building it took the efforts of thousands of people.
Ok, the AI will also have the resources of the scientific literature, so it doesn't build on nothing. Let's assume it will be the equivalent of a team of 10 researchers; that would already be huge for a first AGI. It would probably need a lot of computing power, so it can't replicate at will. It will need to try its ideas and run experiments. Even measuring its own progress would be really challenging: how can it be sure the next iteration is really more intelligent, rather than just better at solving a set of problems that won't generalize? So maybe after a few months it will make some incremental progress. Then improving on that will be harder still, because of diminishing returns.
So we may have an intelligence explosion that takes 20 years, which would still be a huge thing. But hours, or even weeks? That seems very implausible to me.
"Possible" is a low standard.
@IsaacKing ah I see. That seems a more plausible position but still quite strong. I haven't previously seen any good arguments for it, but a quick Google found me
https://firstname.lastname@example.org/the-impossibility-of-intelligence-explosion-5be4a9eda6ec which seems largely gibberish to me
I think a more common position is that we're so far away from an intelligence explosion that it doesn't make sense to worry about it - by the time we get there, the state of AI research would look so different from now that any efforts now are meaningless. But that position just means infeasible now, not infeasible in the future.
Will anyone mind if I edit the question to ask about the feasibility of an intelligence explosion within the next 50 years? I think that's closer to what I had meant to ask about, but I recognize that's not what I said and I don't want to mess up anyone's bets.
I'd say it's different. That something is possible is a weaker claim than that something is feasible within the next 50 years. You could be convinced of the former while not being convinced of the latter.
Ok, edit made. If anyone thinks this changes the position they want to hold, let me know and I'll compensate you the mana.
@IsaacKing re 'My impression is that a large swath of the AI capabilities research community thinks that an intelligence explosion is so infeasible as to not be worth worrying about.'
I ran a big survey of ML researchers publishing at the NeurIPS and ICML conferences, and asked how likely the below intelligence explosion argument was to be broadly correct. Just over half said it had an about-even chance, was likely, or was very likely.
"If AI systems do nearly all research and development, improvements in AI will accelerate the pace of technological progress, including further progress in AI.
Over a short period (less than 5 years), this feedback loop could cause technological progress to become more than an order of magnitude faster."
There are a couple of other very relevant questions here: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Intelligence_explosion
@KatjaGrace I'm not sure that 5 years to go up one order of magnitude is really an "explosion" in the sense that I was thinking of. That's an annual growth rate of about 58%, which is certainly high, but seems like it's within the same range we're already in.
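Here's the arithmetic behind that 58% figure, as a quick sanity check (my own calculation, not part of the survey):

```python
# If overall progress becomes 10x faster within 5 years, the implied
# constant annual growth factor is the fifth root of 10.
annual_factor = 10 ** (1 / 5)
annual_growth_pct = (annual_factor - 1) * 100
print(f"{annual_growth_pct:.0f}% per year")  # prints "58% per year"
```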
I think a question that better gets at what I'm thinking about would be this one:
Assume that HLMI will exist at some point. How likely do you think it is that there will be machine intelligence that is vastly better than humans at all professions (i.e. that is vastly more capable or vastly cheaper) within two years of that point?
Median response: 10%
What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?
Median response: 10%.
Also, this selection process makes me concerned about selection bias:
We contacted approximately 4271 researchers who published at the conferences NeurIPS or ICML in 2021. These people were selected by taking all of the authors at those conferences and randomly allocating them between this survey and a survey being run by others. We then contacted those whose email addresses we could find. We found email addresses in papers published at those conferences, in other public data, and in records from our previous survey and Zhang et al 2022. We received 738 responses, some partial, for a 17% response rate.
People who find it laughable that AI could be dangerous might not even bother responding to a survey about that.
@IsaacKing That all seems right, except I'm less concerned about selection bias because a) the email invitation was fairly vague about what the survey would ask (something like 'the future of the AI field', rather than anything sounding like 'wild science fiction superintelligence nonsense'), and b) we paid most respondents money (it varied by round, but often $50), so hopefully we mostly got people who like money, rather than people with strong views on the topic.