Resolves to YES if, according to expert consensus in 2100, there existed an AI system which had some degree of subjective experience (i.e. was conscious/sentient) before 2040. Resolves to NO if the consensus is that no such system existed, and N/A if there is no consensus.
Note that "before 2040" includes the (current) past. So if it turns out that the only sentient AI ever was some odd network that Schmidhuber trained in the 1990s, this question still resolves to YES.
If it turns out that the above definition of sentience/consciousness as having subjective experience is hopelessly confused, or just plain inadequate in some significant way, it is left to the discretion of 2100's best reasoner systems whether it is in the spirit of the market to resolve this question according to whatever non-confused definition has been settled upon by then, or to resolve it N/A.
Why do people think this is going to happen? When I'm multiplying two matrices for my math class, do you also think those matrices are sentient?
@Timothy Not necessarily, but I also do not necessarily think that when you perform [pick any given chemical reaction happening in the brain] in your chemistry class, those molecules are sentient, so that reductio does not particularly move me.
I think the basic consideration here is that the only known examples of something experiencing sentience are also the only known examples of something being intelligent. (Speaking about life forms with a brain here, supposing that animals experience sentience; though much of the argument goes through even with just humans experiencing it.) That, to me, seems to imply a nontrivial prior that something sufficiently intelligent is also going to be sentient. But I do feel very confused and uncertain about it all.
@Timothy I feel like there is a tempting intuition which tells us to treat Magic Sentient Things (like human brains) and Mundane Non-Sentient Things (like matrix multiplication / computers / whatever) as two separate magisteria. Forgive me if I read too much into your question - I'm merely describing an intuition that is present in my brain, so I guessed that your brain is prone to it, too.
The thing is, things like matrix multiplication and things like brains aren't really two non-overlapping magisteria. Brains are made of the same mundane stuff as other computers; and thus, even though I don't understand the mechanism behind human sentience at all, I am guessing that it is based on a computation that could, in principle, be run on any other computer. Indeed I am guessing that if I pick just the right Turing machine and run it on just the right data, it would have the same internal experience as I do now.
As for why I don't just think it's possible, but am betting it will happen - that's just how I model the curve of our progress, I guess. I expect it will be possible, and I expect that someone will do the possible thing.
@Timothy Of course not. You also need nonlinearities.
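The quip has a real kernel, which a few lines of illustrative Python (my own toy example, not anything from the thread) can show: without a nonlinearity between layers, any stack of matrix multiplications collapses into a single matrix multiplication, so "just multiplying matrices" buys you no extra expressive power.

```python
# Toy illustration: a purely linear "two-layer network" x -> W2 @ (W1 @ x)
# is exactly the one-layer network x -> (W2 @ W1) @ x. Inserting a
# nonlinearity (here ReLU) between the layers breaks that collapse.

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, x):
    """Apply a 2x2 matrix M to a 2-vector x."""
    return [M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]]

W1 = [[1.0, 2.0], [3.0, 4.0]]   # arbitrary example weights
W2 = [[0.5, -1.0], [2.0, 0.0]]
x = [1.0, -1.0]

collapsed = matmul(W2, W1)               # the whole linear stack as one matrix
two_layer = apply(W2, apply(W1, x))      # -> [0.5, -2.0]
one_layer = apply(collapsed, x)          # -> [0.5, -2.0], identical

relu = lambda v: [max(0.0, t) for t in v]
nonlinear = apply(W2, relu(apply(W1, x)))  # -> [0.0, 0.0], no longer linear

assert two_layer == one_layer
assert nonlinear != one_layer
```

Depth only matters once something nonlinear sits between the matrix multiplications - which is exactly why every practical neural network has activation functions.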
More seriously, I have convincingly argued to myself that you can in principle run human consciousness on other hardware (using an argument similar to Chalmers's (https://consc.net/papers/qualia.html), as it turns out, though without the considerations about gradual changes: I just reasoned that an ideal computer simulation of a brain would, by definition, have to act the same way in all circumstances, so either it is conscious or consciousness is purely epiphenomenal).
This obviously doesn't imply that an arbitrary AI system will have consciousness, but it makes me think it's possible for them to. I assume that if an AI system has it, it's either because consciousness is instrumentally useful in some way - in which case it will be deliberately designed in or selected for - or because it naturally emerges in messy intelligent systems.
@Timothy If you could calculate the evolution of a simulated brain with pen and paper, I think the calculation would have consciousness. I think it is the information-processing pattern that is responsible for it rather than the hardware. (Even if quantum effects were at play, quantum mechanics is computable too.)
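To make the pen-and-paper point concrete, here is a toy sketch (parameter values are illustrative, not taken from anywhere in this thread): one Euler step of a standard leaky integrate-and-fire neuron model. Every operation is plain arithmetic that a patient human could, in principle, carry out by hand; nothing in the update rule depends on what substrate performs it.

```python
# One Euler-integration step of a leaky integrate-and-fire neuron.
# Units are millivolts; all constants are illustrative placeholders.

def lif_step(v, i_input, dt=1.0, tau=10.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-70.0):
    """Advance membrane potential v by one step; return (new_v, spiked)."""
    dv = (-(v - v_rest) + i_input) / tau   # leak toward rest plus input drive
    v = v + dt * dv
    if v >= v_thresh:
        return v_reset, True               # spike, then reset
    return v, False

# Drive the neuron with a constant input current until it spikes.
v, spiked = -65.0, False
for _ in range(20):
    v, spiked = lif_step(v, i_input=20.0)
    if spiked:
        break
# With these constants the potential relaxes toward -45 mV, crosses the
# -50 mV threshold after roughly 14 steps, and the neuron spikes.
```

A whole-brain simulation would just be an astronomically large pile of updates like this one - which is the intuition behind saying the *pattern* of information processing, not the hardware, is what carries the computation.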
@Timothy Why do you think humans are sentient? I have protein in my shake; does that mean it's sentient?
What's with the big surge today?
@connorwilliams97 I don't think there's any particular reason; the market is still "converging" / often being seen for the first time, I suppose.