Will human brains be weaker than AI in every way by the end of 2028?
12% chance · Ṁ24k · closes 2028

Resolves yes if and only if human brains are unambiguously weaker than AI in every domain.

Resolution process:

  • To propose a YES resolution, comment with a reference to an AI that appears to be stronger than humans in every way.

  • To counterpropose a NO resolution, comment with a domain in which the given AI appears to be weaker than a human.

  • I or other commenters check whether the AI can prove its capability in the challenged domain.

  • If the AI disproves every counterproposed NO - that is, if no surviving challenge can be found - the question resolves yes (the loop is sketched in code below).
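
For concreteness, here is a minimal Python sketch of that challenge loop. Everything in it - resolve_market, can_demonstrate, the example data - is a hypothetical illustration of the process described above, not an actual mechanism of this market or of Manifold:

```python
from typing import Callable, Iterable

def resolve_market(
    ai: str,
    challenges: Iterable[str],
    can_demonstrate: Callable[[str, str], bool],
) -> str:
    """Resolve YES only if the AI disproves every counterproposed NO."""
    for domain in challenges:
        # Each challenge is a counterproposed NO: a domain where the AI
        # is claimed to be weaker than some human.
        if not can_demonstrate(ai, domain):
            # One surviving human advantage blocks a YES resolution.
            return "NO"
    # Every challenge was disproved; no human-unique capability remains.
    return "YES"

# Example usage with made-up demonstration results:
demos = {("some-ai", "chess"): True, ("some-ai", "fine motor control"): False}
print(resolve_market(
    "some-ai",
    ["chess", "fine motor control"],
    lambda ai, domain: demos.get((ai, domain), False),
))  # -> NO
```

The early return on the first surviving challenge mirrors the market's deliberate asymmetry: a single human-unique capability is enough to block a YES resolution.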

(At time of market creation, I expect this to resolve yes by 2024.)

This does not require AI to have replaced humans in practical tasks, only that there is at least one single integrated AI that can in fact beat us at absolutely everything.

Note: this includes understanding the human experience. An AI that passes this test would understand humans dramatically better than most of us understand ourselves, since it'd need to be able to empathize with us across substrates.

Note #2: This is not a per-watt question. The AI is allowed to use arbitrarily more compute than a human to do the task, as long as it can produce more accurate results than the best-trained human in a given task. This will likely require the AI to be one that does active exploration.


Resolves N/A because of the real-money switchover. However: "Negative payouts too large for resolution. Contact admin."

@L Why does this resolve N/A? What does the payout switch have to do with it?

predicts NO

Resolves yes if and only if human brains are unambiguously weaker than AI in every domain.

Does that include motor skills?

Like being able to control a humanoid robot body to run faster than a human?

Edit: someone already asked below, and the answer is yes.

I personally think this isn't particularly likely by 2028, but by 2033 I'd put even odds on it, and 80-90% odds by 2038, since I think we'll reach true AGI at some point in the early 2030s (at which point I expect a superintelligence to be developed within a maximum of two to three years, and potentially MUCH faster than that). I'm operating under the assumption that alignment is likely to be successful and that a rational, intelligent AGI/superintelligence will not be interested in killing all of us; I expect any AGI to have the capacity to feel empathy, which in turn means I expect its desire to commit genocide to be close to zero.

Also, this question's cover art looks like it would be the album cover of the best progressive death metal record of all time.

A lot of our brain is devoted to fine motor control; does this question measure that as well?

Also, just confirming that there needs to be a single AI system that outcompetes humans?

predicts YES

@firstuserhere yes - a single system that does absolutely every computational function of every human brain better than any human alive. a complete surpassing, with no remaining exceptions detectable by the resolution process, which is designed to be thorough.

I think we'll probably survive, but I also think we're going to hit hard takeoff before then; the only reason we survive is that it looks like we're having the appropriate discussions to figure out how not to be disempowered as a species.

@L "yes, single system that does absolutely every computational function of every human brain better than any human alive"

"(At time of market creation, I expect this to resolve yes by 2024.)"

This does not say much for your predictive abilities.

predicts YES

@DavidBolin coming back to this, the trend I was expecting to have taken off this year seems to have been slowed. we shall see. I think my trading page says more for my predictive abilities, though - which is to say, yeah, fair 'nuff.

predicts NO

I mean, we're better at getting wet. Does that count?

Betting yes because L resolving yes under ambiguity now looks pretty likely, and there is a lot of ambiguity in this question.

predicts NO

@Adam They were pretty explicit about wanting this to be skewed in favor of humans, so I trust that they'll resolve it correctly.

https://manifold.markets/IsaacKing/will-human-brains-be-weaker-than-ai-81148c3e0489#C6P8nkyBnq7IXJZW5jFP

predicts YES

@Adam I claim that, conditional on reaching 2028, either machines are drastically superintelligent, or we're dead; there's no other possibility.

predicts YES

@L and because I want to claim drastic superintelligence, I made a market which is incredibly hard for nonhumans to win: a single machine must be better than all humans, and a single human being able to do a single thing the machine cannot invalidates the claim and resolves it NO.

and on this market I am willing to buy up to 80% - because I think that, conditional on us surviving that long, it's because the drastically superintelligent machines allowed it, which would be a huge alignment success.

predicts YES

damn, capabilities doubters came out in force when I criticized them. efficient market indeed. so is anyone going to explain why, though? I want to hear more. I'd hope I'm incentivizing someone to explain why it's unlikely.

hi @IsaacKing, wanna say more about what your ai expectations are? I would love to hear why you think this isn't going to happen, in detail. Because if you have a mechanistic model right now it would be very useful!

predicts YES

@L at this point it's becoming indefensible to claim it won't happen, in my view. what in the world will stop it?

predicts NO

@L Any realistic world state in which this market's conditions have been met is a world in which we are all dead or soon will be.

predicts YES

@IsaacKing oh. okay. that's fair.

predicts YES

@L also, crap, maybe that makes this market useless. if this comment gets 6 hearts from stakeholders who already held stake before I wrote the comment, I'll resolve N/A.

predicts NO

@L If this market were really useless in that way, it would be a guaranteed YES market with probability near 0.

predicts NO

@L Please don't resolve this N/A. I've already incurred significant opportunity cost with the mana I've invested in this market, and I'd like to make my M$3000 if I'm still alive in 2028.

And you never know, I could be wrong. Maybe the AI and humans will get along just fine and you'll make a killing.

predicts YES

@IsaacKing ok, I won't resolve N/A

predicts YES

@L (there were no hearts on the "gets 6 hearts" comment at time of cancelling the offer to resolve N/A)

predicts YES

Ah shit.

I failed to even consider how to handle significantly augmented humans whose brains are not entirely biological.

To be clear, I originally intended - though I would not be surprised if some traders assumed differently, and thus I now need to consider this edge case by talking to as many folks as possible - for the comparison to be against humans with only biological matter in their brains.

I would like to update the description to explicitly exclude silicon neuroaugmentation and explicitly include biological neuroaugmentation. If humans are able to retain unique capabilities of any kind whatsoever - even ones that remain unique only because they are not useful to any AI by the time of resolution - and those humans do not have what 2023 knows as the modern electronics-based style of BCI in their brain, then this resolves no.

If others object and would assert that a human brain with BCIs should qualify as entirely human by the standards of this question, or alternately object that advanced drugs which significantly increase human capability while retaining a biological substrate should not qualify as entirely human for this question, then I will not change the criteria until discussion has completed. Until this concern is resolved, I will not trade.

