Will a sentient AI system have existed before 2025? [Resolves to 2100 expert consensus]
24% chance

Resolves to YES if, according to expert consensus in 2100, there existed an AI system which had some degree of subjective experience (i.e. was conscious/sentient) before 2025. Resolves to NO if the consensus is that no such system existed, and N/A if there is no consensus.

Note that "before 2025" includes the (current) past. So if it turns out that the only sentient AI ever was some odd network that Schmidhuber trained in the 1990s, this question still resolves to YES.

If the above definition of sentience/consciousness as having subjective experience turns out to be hopelessly confused or significantly inadequate, it is left to the discretion of 2100's best reasoner systems whether it is in the spirit of the market to resolve this question according to whatever non-confused definition has been settled upon by then, or to resolve it N/A.



bought Ṁ10 of NO

The concept of sentient AI is utter bunk; no.

predicts YES

@connorwilliams97 If this statement turned out to be true, then sentient humans might be problematic too...

predicts NO

@Swordfish42 no, because I am conscious and eliminative materialism is utter nonsense.

predicts YES

If a ~1.5 kg blob of neural tissue can be sentient, then at least in principle there must be ways to recreate this phenomenon of "sentience" artificially. I'm not saying that it is easy, but it must be physically possible.

In the worst-case scenario, you have to reverse-engineer how life works and encode an artificial (fully designed by a sentience) matter-state that will develop into an artificial lifeform capable of sentience. It's Artificial, It's Intelligent, It's Sentient. Boom, sentient AI. But that's probably not what you are looking for.

In the middle-case scenario, a sufficiently powerful computer should be able to simulate anything, including that very precious blob of neural tissue that is "undeniably sentient". But that's clearly beyond our capability for now, and probably will be for some time, and it's also probably not what you are looking for.

In the best-case scenario, you really, really overestimate how hard the "sentient" bit is (probably due to some rampant Human Exceptionalism, oh boy), and slapping a semi-permanent mind-state and some long-term memory onto a Large Language Model running in a complex self-query loop might be enough to pass the oh-so-elusive "sentience" threshold. If in doubt, stack some MOAR layers. If you want human-like performance, you can tuck most of it under the "subconsciousness" blanket! That way your construct can have the oh-so-familiar feeling that they definitely ARE conscious, while having no clue as to how or why!
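For the curious, here is a minimal, purely illustrative sketch of the kind of construct that paragraph gestures at: an LLM call wrapped in a self-query loop with a persistent mind-state and a long-term memory list. Every name here is hypothetical (`query_llm` is just a stand-in for any model API), and nothing about this plumbing claims to produce sentience.

```python
# Illustrative only: an "LLM + persistent mind-state + long-term memory"
# self-query loop. `query_llm` is a hypothetical placeholder for a real
# language-model call.

from dataclasses import dataclass, field


def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with an actual model API."""
    return f"(model output for: {prompt[:40]}...)"


@dataclass
class SelfQueryingAgent:
    mind_state: str = "freshly initialized"          # semi-permanent "mind-state"
    memory: list = field(default_factory=list)       # long-term memory

    def step(self, observation: str) -> str:
        # The "subconsciousness blanket": recall only a few recent memories.
        recalled = " | ".join(self.memory[-3:])
        prompt = (
            f"State: {self.mind_state}\n"
            f"Recalled: {recalled}\n"
            f"Observation: {observation}\n"
            "Update your state and respond."
        )
        response = query_llm(prompt)
        # Persist the interaction back into state and memory for the next loop.
        self.mind_state = response
        self.memory.append(f"{observation} -> {response}")
        return response


agent = SelfQueryingAgent()
for obs in ["hello", "are you conscious?", "how would you know?"]:
    print(agent.step(obs))
```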

Anyway, to conclude my stupid procrastination-mechanism essay: it is surely possible to create a sentient AI, but we would probably argue about what "Sentient" and "AI" mean.

Now, has anyone seen my marbles?

predicts YES

@Swordfish42 I'm not really expecting anyone to read this; I just had fun writing it. If anyone did, then I'm sorry.