Will a sentient AI system have existed before 2040? [Resolves to 2100 expert consensus]
63% chance

Resolves to YES if, according to expert consensus in 2100, there existed an AI system which had some degree of subjective experience (i.e. was conscious/sentient) before 2040. Resolves to NO if the consensus is that no such system existed, and N/A if there is no consensus.

Note that "before 2040" includes the (current) past. So if it turns out that the only sentient AI ever had been some odd network that Schmidhuber trained in the 1990s, this question still resolves to YES.

If it turns out that the above definition of sentience/consciousness as having subjective experience is hopelessly confused or just plain inadequate in some significant way, it is left to the discretion of 2100's best reasoner systems whether it is in the spirit of the market to resolve this question according to whatever non-confused definition has been settled upon by then, or to resolve it N/A.

predicts NO

Why do people think this is going to happen? When I'm multiplying two matrices for my math class, do you also think those matrices are sentient?

@Timothy Not necessarily, but I also do not necessarily think that when you perform [pick any given chemical reaction happening in the brain] in your chemistry class that those molecules are sentient, so that reductio does not particularly move me.

I think the basic consideration here is that the only known examples of something experiencing sentience are also the only known examples of something being intelligent. (Speaking about life forms with a brain here, supposing that animals experience sentience; though much of the argument goes through even with just humans experiencing it.) That, to me, seems to imply a nontrivial prior that something sufficiently intelligent is also going to be sentient. But I do feel very confused and uncertain about it all.

predicts YES

@Timothy I feel like there is a tempting intuition which tells us to treat Magic Sentient Things (like human brains) and Mundane Non-Sentient Things (like matrix multiplication / computers / whatever) as two separate magisteria. Forgive me if I read too much into your question - I'm merely describing an intuition that is present in my brain, so I guessed that your brain is prone to it, too.

The thing is, things like matrix multiplication and things like brains aren't really two non-overlapping magisteria. Brains are made of the same mundane stuff as other computers; and thus, even though I don't understand the mechanism behind human sentience at all, I am guessing that it is based on a computation that could, in principle, be run on any other computer. Indeed I am guessing that if I pick just the right Turing machine and run it on just the right data, it would have the same internal experience as I do now.

As for why I don't just think it possible but am betting it will happen: that's just how I model the curve of our progress, I guess. I expect it will be possible, and I expect that someone will do the possible thing.

predicts YES

@Timothy Of course not. You also need nonlinearities.

More seriously, I have convincingly argued to myself that you can in principle run human consciousness on other hardware (using an argument similar to Chalmers's (https://consc.net/papers/qualia.html), as it turns out, though without the considerations about gradual changes: I just reasoned that, were an ideal computer simulation of a brain possible, it would by definition have to act the same way in all circumstances, so either it is conscious or consciousness is merely epiphenomenal).

This obviously doesn't imply that an arbitrary AI system will have consciousness, but it makes me think it's possible for one to. I assume that since we have it, it's either instrumentally useful in some way, in which case it will be deliberately designed in or selected for, or it is likely to emerge naturally in messy intelligent systems.
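The quip about nonlinearities is easy to demonstrate: stacking matrix multiplications without a nonlinearity collapses into a single matrix multiplication, which is why neural networks interleave nonlinear activations. A minimal sketch in plain Python (my own illustration, not from the thread):

```python
# Two "layers" that are nothing but matrix-vector multiplication.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):  # (A @ B)[i][j] = sum_k A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, -1.0], [2.0, 0.5]]
W2 = [[0.5, 1.0]]
x = [1.0, 2.0]

# Composing linear maps collapses into a single linear map:
deep_linear = matvec(W2, matvec(W1, x))           # [2.5]
one_matrix = matvec(matmul(W2, W1), x)            # [2.5] -- same map
assert deep_linear == one_matrix

# Insert a nonlinearity (ReLU) between the layers and the collapse fails:
relu = lambda v: [max(t, 0.0) for t in v]
deep_nonlinear = matvec(W2, relu(matvec(W1, x)))  # [3.0]
```

So "just matrix multiplication" without nonlinearities really is equivalent to one matrix, no matter how many layers are stacked.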

@Timothy If you could calculate the evolution of a simulated brain with pen and paper, I think the calculation would have consciousness. I think it is the information processing pattern that is responsible for it, rather than the hardware. (Even if there were quantum effects at play, quantum mechanics is computable too.)
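The pen-and-paper point can be illustrated with a toy neuron model. This is a hedged sketch only: a discrete leaky integrate-and-fire update with arbitrary constants, nowhere near a real brain simulation, but every step is arithmetic you could in principle carry out by hand.

```python
# A toy leaky integrate-and-fire neuron. Constants are arbitrary
# illustrations, not biological fits.
def lif_step(v, input_current, leak=0.1, threshold=1.0):
    """Return (new_voltage, spiked?) after one discrete time step."""
    v = v + input_current - leak * v   # integrate input, leak charge
    if v >= threshold:                 # fire and reset
        return 0.0, True
    return v, False

v, spikes = 0.0, 0
for step in range(20):
    v, fired = lif_step(v, input_current=0.3)
    spikes += fired                    # with these constants: 5 spikes
```

Each update is just `v = 0.9*v + 0.3` plus a threshold check; scale that up by many orders of magnitude and you have the kind of calculation being discussed.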

predicts YES

@Timothy Why do you think humans are sentient? I have protein in my shake; does that mean it's sentient?

@Timothy Standard views in philosophy of mind and cognitive science?: https://plato.stanford.edu/entries/computational-mind/
https://plato.stanford.edu/entries/functionalism/

"It's just X [i.e. matrix multiplication, next-word prediction, etc.]" arguments look less impressive when you remember that the brain is just a bunch of atoms obeying the rules of chemistry and physics.

predicts NO

@Irigi I wrote a long comment replying to everyone but it somehow vanished, so I'll just write a short one.

I think it is definitely possible to create sentient AI systems, I just think it is probably very hard.
I can imagine a normal computer made out of silicon being sentient. But because any universal Turing machine can simulate any other universal Turing machine, it would have to be possible to have a sentient computer made purely of cogs and gears, or of a steam engine. It's not very hard to build logic gates out of gears or steam. So maybe it's not just a computation that makes a thing sentient; maybe it's something else, even though that seems unlikely. What sentience is seems like the most profound question we have at the moment, and we probably won't solve it very soon. Of course, maybe we will make something sentient without knowing what sentience is. This is all very difficult.
I really don't know, but 50% still seems way too high.
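The universality point can be made concrete. In this sketch (my own illustration, not from the thread), NAND alone suffices to build every other logic gate, so any substrate that implements NAND (gears, steam valves, or silicon) can in principle run any computation:

```python
# Substrate independence in miniature: NAND is universal.
def nand(a, b):
    return 0 if (a and b) else 1

# Every other gate built from NAND alone:
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# A one-bit full adder: chain these and you have arithmetic,
# and from arithmetic, a general-purpose computer.
def full_adder(a, b, carry_in):
    s1 = xor(a, b)
    return xor(s1, carry_in), or_(and_(a, b), and_(s1, carry_in))

# full_adder(1, 1, 1) -> (1, 1), i.e. 1+1+1 = binary 11
```

Whether such a machine would be sentient is exactly the open question in this thread; the code only shows that the computations themselves transfer between substrates.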

predicts YES

@Thisistherealmessage I don't think subjective experience is a much higher bar to clear than the current state of the art in AI; many animals are considered sentient, but none have been able to use language despite great efforts.

predicts NO

@CodeandSolder Do you think having a subjective experience is a purely computational thing? As in, if we had the right program, my phone could be sentient?

@Thisistherealmessage

"if someone built a machine out of a big steam engine or out of gears, because any universal Turing machine can simulate every other universal Turing machine it would have to be possible to have a sentient computer purely made of cogs or steam. And that just doesn't seem right."

Why not? How are neurons different from cogs and steam? For me, it is hard to imagine a hundred trillion cogs working together on our timescale. But in principle this is how I imagine the brain: just very complex machinery. You state the very fact that strongly supports it (the point about Turing machines) and then dismiss it because of intuition?

My opinion on consciousness and its "hard problem" is that we are searching for something magical, to the point that we cannot even describe exactly what it is in naturalistic terms. Therefore it might be hard to prove that something has it. But I am quite confident we will eventually be able to make very good simulacra of humans, even if only via a terminal. If we agree that such things, indistinguishable from humans, have consciousness, we are there. If not, we are probably struggling with the definition of consciousness.

predicts YES

@Thisistherealmessage My model of reality does not involve anything supernatural, and as far as I can tell I am sentient, so a perfect simulation of my brain would be too. That would obviously require an absurd and most likely unachievable amount of compute, but there almost certainly exists a shorter path to the same goal.

predicts NO

@Irigi I am sure we have the same definition of consciousness. I just think we will create it in the next 30 years with 40% probability, not in the next 17 years with 50% probability.

I am not dismissing the idea that Turing machines can be sentient; I just think maybe there is a new kind of computation that the brain does, something we don't even have a word for yet. I am sure Turing machines would eventually also be able to do that; I think it will just take a while. Reality is so weird that there are probably a couple of general-relativity-level discoveries to make before we know what sentience is. Maybe, maybe not.

(Also, sorry for not responding for 27 days; I missed out on some potentially great discussions.)

@Thisistherealmessage I share your skepticism about the time scale.

Also, I can imagine a new physical theory that changes our view of the world in a manner similar to quantum mechanics. In such a theory, the world would not have to be Turing-computable, which would bring an interesting twist to the discussion. (I would not give this possibility more than 20% right now, but what do I know?)

predicts YES

@Irigi before quantum mechanics we had simple measurements that fundamentally could not be explained by the previous understanding; that no longer seems to be a large issue. 20% seems extremely high for what would amount to "new fundamentals of physics, located entirely within our brains". To me, the simplest model, "neurons made out of completely boring matter interacting with each other", explains everything we observe, and all alternatives are vastly more complex.
I'm not completely sure it will be computationally feasible with a general-purpose processing architecture; there may be things that need to be implemented as dedicated hardware to achieve the necessary performance, but as I understand the question, that's allowed.
And as for timelines, the XKCD below is from 2014.

predicts NO

@CodeandSolder how does "neurons made out of completely boring matter interact with each other" explain anything? Why do I feel like I have a self? Why do I have free will? Why do I have an experience?

predicts YES

@Irigi I don't believe free will is a correctly posed question, personally. As for the other two, I don't really see the surprising part: that seems like an efficient way to process information. How exactly it is implemented by the hardware is above my pay grade, but nothing I've seen suggests it shouldn't be possible. It doesn't seem to be on a fundamentally different level of complexity from the language or image understanding we've already achieved in silicon.

"20% seems extremely high for what would amount to 'new fundamentals of physics, located entirely within our brains'"

The 20% was for a new theory as different from the current state of the art as quantum mechanics was from classical mechanics. Going lower seems overconfident, given how far our current experiments are from the Planck scale. For it to be somehow connected to sentience, I would put 3% (0.6% in total; the 3% is conditional on such a theory existing in the first place). This is because I do not think there is a reason to invoke anything more at the energy scale of our brains, and there are no experiments hinting at it.

"How does 'neurons made out of completely boring matter interact with each other' explain anything? Why do I feel like I have a self? Why do I have free will? Why do I have an experience?"

I would not say it explains them; I would say we have strong reasons to believe all three systems (the simulated one, the actual silicon/steam one, and the brain) are equivalent in the direction that matters: they preserve the information processing and in principle lead to the same results, i.e. thinking.

On the other hand, I also do not think that the lack of answers to these questions invalidates the equivalence I mention above, or calls for a reinvestigation of physics. (Btw., I think the first two questions, the sense of self and free will, concern special feelings of the same sort as pain or pleasure. So I think in reality there is just one question, namely why we experience these feelings at all: "the hard problem of consciousness".)

@Thisistherealmessage @CodeandSolder

predicts YES

@Irigi yeah, I was questioning it in the context of "necessary for sentience but also not showing up in any other experiments we've done"; there absolutely is still some blank space in physics, and I'd put "something significant we don't know about now" even higher than 20%. From what I've seen, though, human brains generally operate several orders of magnitude above that space and are not generally functionally impacted by low-energy interactions (like electromagnetic waves, for example).

The rest matches my beliefs.

predicts YES

@CodeandSolder made a market on that:

predicts NO

What's with the big surge today?

@connorwilliams97 I don't think there's any particular reason; the market is still "converging" / often being seen for the first time, I suppose.

Very cool markets
