Will this market contain a satisfactory definition of "superintelligence" by the end of May?
Resolved NO (Sep 10)

Help me brainstorm a clear definition of "superintelligence" that I can use in the resolution criteria of other markets. Here's what I have so far:

A superintelligence is any intelligent system that is far more intelligent than any human who existed prior to 2023. It approaches the theoretical maximum intelligence that can be obtained given the amount of computing power it has.

As some examples, a superintelligence running on the world's largest supercomputer in 2023 and connected to the internet should be able to:

  • Get a perfect score on any test designed for humans where such a score is theoretically achievable.

  • Solve any mathematical problem that we know to be in principle solvable with the amount of computing power it has available.

  • Pass as any human online after being given a chance to talk to them.

  • Consistently beat humans in all computer games. (Except trivial examples like "test for humanness and the human player wins", "flip a coin to determine the winner", etc.)

  • Design and deploy a complicated website such as a Facebook clone from scratch in under a minute.

  • Answer any scientific question more accurately than any human.


I think there's a deeper problem here. Superintelligence obviously means intelligence greater than some baseline, and it seems trivial to say you mean greater-than-human intelligence. However, human intelligence has been continuously augmented and improved by technology since we invented language (assuming you accept the premise of language as tech - regardless, it's been incrementing for a loooong time).

So, in the sense that technology is constantly advancing and humans integrate it, super-intelligence is constantly advancing. Any sufficiently advanced human who adequately uses new tech to exceed whatever metric of intelligence you set is "super-intelligent."


Do you mean a synthetic intelligence? Is "super" a moving target relative to any intelligence that integrates a human (bio-brain) as an element? Something that surpasses a system with a bio-brain as a major element? The primary element? What is major? What is primary? What if it includes a system modeled off a specific person's brain (e.g. brain "upload" tech)?

TL;DR: Definitions are hard.

predicted YES

@JustNo Hi, I politely disagree with your claim here that the reference point is so arbitrary as to make definition intractable - insofar as words point to clusters in concept-space, the concept of 'superintelligence' seems fairly crisp (at least more so than 'modernism'). In particular, I think most people, technically inclined or not, could look at any of the examples you give and determine with a pretty good degree of uniformity whether or not that conforms to their personal definition of superintelligence (at least for the cases we consider as real prospects of near-term future superintelligence; I definitely agree that this gets weirder if we don't impose a practicality constraint).


You're right that it feels a little funny to say "something epsilon smarter than 'humans'" is a superintelligence - we mean something with cognitive capabilities decisively and substantially better than those of humans. John von Neumann (humanity notwithstanding) would seem to be a close call, and we would probably call anything as far above JvN as he is above us an SI pretty unambiguously. If we're worried about this epsilon-exploit, we can add the desideratum that the SI obtains X% more expected utility over the extrapolated-human problem class than humans (see my comment from yesterday), where X is some number we would not readily expect humans to achieve without augmentation tantamount to SI, but something we would expect any SI to achieve - something like 50%, or maybe even a few hundred percentage points, depending on how intelligence explosion plays out.
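To make that desideratum a bit more concrete, here is one rough way to write it down. This is only a sketch of the idea above; the problem distribution D, the utility U, and the threshold X are placeholder notation, not anything formal proposed in the thread:

```latex
% Margin criterion (sketch): a system S qualifies if its expected utility over
% the extrapolated-human problem class D beats the best unaugmented human's
% by at least X percent. D, U, and X are illustrative placeholders.
\[
  \mathbb{E}_{p \sim \mathcal{D}}\!\left[ U(S, p) \right]
  \;\ge\;
  \left( 1 + \tfrac{X}{100} \right)
  \max_{h \,\in\, \mathrm{Humans}}
  \mathbb{E}_{p \sim \mathcal{D}}\!\left[ U(h, p) \right]
\]
```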

With respect to whether or not we would consider modern-tech augmented humans SI relative to our recent ancestors, this does indeed seem a question of gradual improvement (at least on certain metrics) - a high school student of today really does have tools permitting them to solve problems beyond the reach of the greatest of the pre-Newtonians. This improvement, however, is only actualized for a small class of problems (making predictions based on explicitly performed mathematics) encompassing a pretty small amount of the overall "probability weighted distribution of all problems faced by humanity", so even a 10x here wouldn't, under a more holistic definition, count as SI. There's an argument to be made that most of the probability mass would be under things an SI would not want to do (e.g., romance, hunting with groups of apes), but this seems a question more of 'not wanting to' (or of possibly being definitionally excluded for e.g., 'romance') than of 'not being able to'. I think it's fair to punt on this and say we can expect an SI to be good at things it wouldn't want to do, just by virtue of being smart enough to figure out how to if it really needed to.

(Sorry this got so long - I'm having a lot of fun, but maybe this isn't the right place :P )

predicted NO

@CRW this feels like a fine place to me, so long as we don't mind a casual and minimally edited dialog, and I enjoy your thoughts here.

[

Apologies for a rough answer. Very hard to edit on mobile, and this is just off the cuff before I've had any coffee. Hopefully it comes off as jovial in tone as I intend it.

]

I think anyone can define superintelligence however they please, and it is a useful idea - I also think it is going to be exceptionally hard to come up with a precise definition, and while "we'll know it when we see it" is passable for a lot of purposes, it's not great for resolving decades-long markets.

I also think the progress in human capabilities has been far from gradual, especially since the invention of the internet but even preceding it - however I'll readily admit that I tend to have a very broad definition/assessment of human capabilities that is inclusive of the machines and tools they use.

Ignoring mechanical systems (e.g. a crane allows a human to lift a piano into the sky) and sticking to more cognitive realms because we're interested in cognition, here are some mundane examples of existing improvements:

  • Without my calendar there's no way I could track all the things I do.

  • Timers allow me to measure minuscule amounts of time and as many spans as I want, well beyond my biological capacity.

  • Sketching with paper and pencil, let alone CAD software, allows me to communicate and render spatial information well beyond bio-human capacity.

  • Social media allows me to broadcast my thoughts to anyone willing to listen.

  • YouTube allows me to learn thousands of things in minutes, in a way that was unimaginable 20 years ago.

  • Search allows me to find resources and answer questions that would have taken extraordinary resources 40 years ago.

  • Prediction markets allow humans to collectively assess probability far better than an individual can.

  • Research like that of David Eagleman can use simple electronic devices to grant humans a "sense" of just about anything - much like my watch now gives me a sense of time, my pulse, and my physical activity.

When we start to embed computers and nano machines under our skin and inside our skull, combined with potential biological modifications... Well we live in interesting times.

I think at any given moment we can define a set of metrics that a human can't pass, especially without access to tools. However, I think those standards will seem silly if used for decades without revision (the use case here). Human augmentation is accelerating, and we might get overtaken dramatically by AGI that becomes self-improving, but until that happens what constitutes superhuman will be a moving (and accelerating) target to hit.

Last note: I am a subscriber to the concept of the extended mind, so that's the lens I look at things through :)

predicted YES

@JustNo Indeed empirically good fun :) The writing helps me think about things I should be thinking about anyway.

There's definitely something to be said about the "human intelligence comes from society/tools/language" hypotheses - I don't think I know the contours of the argument well enough to say if this is true or if our difference from pre-sapientes is just individual-cognitive. In my world model, I guess I'm not so worried about a changing baseline, as I see machine-intelligence explosion as very likely (haven't done the math, but I'd guess it's the default outcome of AGI from ML). 'Extended mind' makes an explication harder, since as we add tools, we alter the kinds of problems we need to use cognition for.

I'm definitely short on expecting any kind of biological SI:
- The argument against finding really good nootropics or the like seems solid, with minor exceptions
- Iterated embryo selection could 100% put egg on my face here if only anyone were trying to do it (I have no idea if we would know or if it'd be super secret, but it'd take a long time regardless)
- Bostrom gives a short argument that we should expect neuromodding to be AGI-complete - it holds at the surface level at which I can evaluate it, but could be outdated/wrong - even then there seems to be very little appetite for enhancement medicine of any kind in the medical-ethics / legal worlds

As for brain emulation: if we got it before GPT-DoubleThanos, it would seem to instantly be superintelligence just by virtue of hardware-overhang + access to itself + ability to copy. (i.e. with high likelihood - reality could still surprise me here too)

I agree there's a big possibility that if we don't let the goalpost move alongside 'baseline human' we'd end up including something lame and non-central like "human with 170 IQ who can edit three Excel spreadsheets at a time". But the worlds where there's not a clear divide between entities that are pre- vs. post-SI seem like a small target that I don't expect reality to hit. The smallness of that probability proceeds from a world-model that is super uncertain though.

Perhaps we can hack it together with a clause like "conditioned on contemporary humanity being qualitatively different from the putative SI" - in the other case I think I'd agree it's blurry enough that the word 'superintelligence' isn't a natural, clear, or unambiguous description of whatever its user would be talking about.

Hi, I'd like to offer my own 'two cents': I think a task-list of things an SI could handle is good, but the reason AGI-impossibilists have been able to move the goalposts so many times without looking (to the general public) stupid is that this doesn't give a really satisfactory idea of what intelligence is, just what it looks like from the outside (and it at least feels like we can imagine things that 'look the same from the outside' as an SI, but are not intelligent). As an edge case of something I'd say is "probably not SI in the way that most people mean it, even if it would still be an incredible eldritch nightmare", consider a Chaitin's constant oracle - 'it just outputs a number'.


The intelligence of an agentic SI is probably something like "Given (any) utility function, the ability to alter the state of the world such that it maximizes expected utility". If we get SI from a supervised-learning algo like GPT, I expect this to be essentially irrelevant - for a Simulator, the 'intelligence' is mostly in the 'world model' (to the extent that this at least effectively exists). So I can't see right now a natural way to unify these.
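For what it's worth, that agentic notion is close in spirit to Legg and Hutter's "universal intelligence" measure: score an agent by the utility it actually achieves, averaged across a weighted space of goals. A loose rendering, where the weights w_U, the goal space, and the final-state notation are my own shorthand rather than anything from this comment:

```latex
% Loose sketch of "agentic intelligence": average achieved utility across a
% weighted family of utility functions U. All symbols are placeholders.
\[
  \mathrm{Int}(A)
  \;=\;
  \sum_{U \in \mathcal{U}} w_U \,
  \mathbb{E}\!\left[\, U(s_{\mathrm{final}}) \;\middle|\; A \text{ acts to maximize } U \,\right]
\]
```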

A few possible ideas and why I don't like them:
- Optimization: no-free-lunch theorems imply that this doesn't work unless we specify that intelligence is optimizing over a particular prior in problem-space, which punts on a lot of the complexity

- Ability to achieve arbitrary goals: GPT7 (assuming that there's no LLM plateau) will definitely be superintelligent in some sense, and probably won't be (fundamentally) agentic; it's arguable that maybe supervised learning won't give you the ability to 'do causality' with your world-model, or would otherwise be 'missing something' - I'm not sure, but I can readily imagine a world where this doesn't matter.

- Ability to infer super-efficiently based on observations: Closer, but we still want to be able to make some kind of requirement that it be able to achieve its goals "if it wanted to", and this doesn't imply that. And it also doesn't feel obvious that an algorithm which is awful at inference but 'knows everything' should fail to meet the bar.

- Ability to make good arbitrary predictions about the future state of the world: same as the above; good but not enough. The ability to predict the outcome of some action combined with a great search over actions is sufficient, but I'm not sure if it's necessary.

There's a back-and-forth in me about whether or not GPT7 as an oracle-predictor of the future world state should count - after all, we don't think Schrödinger's equation is SI. But "GPT7 told to output a blueprint to achieve some goal" definitely does.

So I propose a nice-ish working definition for you, even if it's not super duper principled and I'm sure someone on the AI Alignment Forum has done better:

"A process is superintelligent with respect to a class of problems if it could (indirectly) achieve higher expected utility on them than any human"


By 'indirectly' we mean that if GPT7 can't solve problem X outright but can "make yourself smarter and then solve problem X", or similar within reason, that still counts - GPT7 with an operator to type the prompt "Write a plan to accomplish X" and then "implement this plan using the manipulators I just connected" should be fine too.


Then "A process is superintelligent [full stop] if it is so with respect to the prior over problem space corresponding any reasonable estimate of that prior made on the basis of human experience "

(with the caveat that a superintelligence would get into a different area of problem-space, so the prior it cares about isn't exactly this human one)

Or colloquially, "A process is superintelligent if it can do better than humans on all tasks that are human-relevant"
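A minimal sketch of how that could be checked in practice, treating a finite task list as a sample from the human-experience prior. Everything here - Problem, expected_utility, the weights - is illustrative placeholder code, not anything proposed in this thread:

```python
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class Problem:
    """One problem sampled from a human-experience-based prior over problem space."""
    name: str
    weight: float  # prior probability mass assigned to this problem


def expected_utility(solver: Callable[[Problem], float],
                     problems: Sequence[Problem]) -> float:
    """Prior-weighted expected utility of a solver over the sampled problems.

    solver(p) is assumed to return the expected utility the process obtains on
    problem p, where "indirect" solutions (e.g. building a tool first) count.
    """
    total = sum(p.weight for p in problems)
    return sum(p.weight * solver(p) for p in problems) / total


def is_superintelligent(candidate: Callable[[Problem], float],
                        best_human: Callable[[Problem], float],
                        problems: Sequence[Problem]) -> bool:
    """Working definition from the comment above: strictly higher prior-weighted
    expected utility than any human (summarized here by the best human)."""
    return expected_utility(candidate, problems) > expected_utility(best_human, problems)
```

Under this framing, a task list like the one in the market description is just the finite sample passed in, which is roughly the "thermometer" point below.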

It seems like task-lists are a reasonable thermometer for this definition, which makes some intuitive sense. I disagree with some of the commenters who want the metric to involve economics - economics is the subset of problems having resource constraints as a defining feature (which includes basically any constrained-optimization problem), but using the word that way makes it needlessly contingent on the state of "the economy" today. An AI which can't figure out how to do something like "twirl a pen like that guy on reddit" definitely loses some amount of claim to SI-ness even if no one is going to pay anyone for that.

"Get a perfect score on any test designed for humans where such a score is theoretically achievable." This seems unrealistic, since many tests have some ambiguous questions. Humans don't get perfect scores, and a superintelligence could do better than humans but still won't always get a perfect score, because the "correct" answers are imperfect.

"Pass as any human online after being given a chance to talk to them." If this is going to be one of the criteria, it should specify that the judge only gets access to the same data about that person as the AI.

"Design and deploy a complicated website such as a Facebook clone from scratch in under a minute." I'm not sure why the time required is critical. If it could do all the steps but took an hour due to limited computing power, it would still be superintelligent. And some steps in creating a facebook clone might fundamentally require more time, like if you have to talk with people in order to rent a data center.

What's unsatisfactory about what you have so far? It seems at least as precise as most other speculative topics on Manifold.

"It approches the theoretical maximum intelligence that can be obtained given the amount of computing power it has."

This requirement seems unnecessary: if there were some AI that everyone otherwise agreed was a "superintelligence" I think very few would change their mind if they found out it was running on a supercomputer e.g. 100x more powerful than the theoretical minimum.

@Aaron_ Hmm, but I think a superintelligence would be able to optimize itself to get closer to the limit.

When you say design and deploy a facebook clone in less than a minute, I'm assuming you mean 'hit the deploy button' in less than a minute? I think it's unfeasible for such a site to take less than a minute to build and deploy from the moment the button is pressed, unless maybe the AI optimizes the build and deploy process itself somehow?

@YoavTzfati Less than a minute of time where the superintelligence is working. Doesn't account for something like waiting for DNS changes to propagate.

Is the criterion for resolving this just your personal satisfaction, or would you refer the definition to some reasonably well-respected peers / fellow Manifoldians / people in the AI space for review before resolving?

@PipFoweraker Personal satisfaction.

On a lighter, more fun note, this quote from The Hitchhiker's Guide to the Galaxy is applicable also. Anything that can go through all 3 phases autonomously is intelligent. Goes through them faster than humans? Super!intelligence

"The History of every major Galactic Civilization tends to pass through three distinct and recognizable phases, those of Survival, Inquiry and Sophistication, otherwise known as the How, Why, and Where phases. For instance, the first phase is characterized by the question 'How can we eat?' the second by the question 'Why do we eat?' and the third by the question 'Where shall we have lunch?"

I actually like OpenAI's way of defining AGI, but I think it's better to treat that definition as one of superintelligence:

"artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work"

A supperintelligence is an entity that can reason about dinners

I think you'd need to have a more clear definition of what you mean by intelligence if you're defining superintelligence that way, since the term "intelligence" points to a cluster of traits and can vary on many dimensions. You could use a more well-defined metric like IQ, but then you might be leaving out parts of what you actually care about when you say "intelligence". You could also define superintelligence as an agent/system that can do all economically useful tasks more efficiently than any human who existed prior to 2023, or something like that.

I think the usual definition is "A system with comparative economic advantage to every human at every task". So nobody would pay me to do anything at all when they could pay the superintelligence to do it better for cheaper.

@Mira I quite like that definition - I suggest changing the words "every human" to "any human" though. The number of humans who exist should not impact the metric for superintelligence (which would happen in the extreme, given the above statement).

predicted YES

@Mira For what it's worth, I've only seen that sort of definition used for AGI, not for superintelligence. Superintelligence is usually meant as not just passing the bar of human intelligence, but some speculative, higher bar, e.g., 100x human intelligence, any cognitive task that's possible for its hardware.

@JacyAnthis "any cognitive task that's possible for its hardware" seems like too high a bar. The architecture and knowledge of any computing system is going to be a tradeoff between different cognitive tasks, where on the same hardware you could make a change that improves performance on one but makes it worse on another. So the "best" superintelligence for a given set of hardware will be optimized to its environment and the set of tasks that are most important for achieving its goals.

@Mira Yeah, I think this is a good definition for AGI (at least when you exclude physical tasks), but superintelligence is a higher bar.
