
Last year, a former Google employee claimed AI had achieved sentience.
Link: https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/
Resolves when we receive another report of a supposedly "sentient" AI, or Skynet fulfills the prophecy of the Connors.
Edited per comments 4/24: threshold for proving false sentience removed. New requirement is only for a widespread report of sentient AI in reputable news media sources.
Apr 24, 11:27pm: Will there be another falsely reported sentient AI by the end of CY2023? → Will there be another widely reported sentient AI by the end of CY2023?
🏅 Top traders
# | Name | Total profit
---|---|---
1 | | Ṁ278
2 | | Ṁ239
3 | | Ṁ223
4 | | Ṁ216
5 | | Ṁ109
68% probability for this seems crazy to me. What are the chances that another Google/OpenAI/etc. engineer decides to stake their reputation on this? What are the chances journalists pick up the same kind of story again? And how many people who weren't already convinced of sentience in LaMDA/GPT-4/etc. will somehow be convinced by a new model coming out in 2023 (an improved version of Bard? Claude? Maybe GPT-4.5?)?
And see the attached market. As far as I know, nobody has made a serious, substantive claim that GPT-4 was sentient, even something easy like a blog post from an anonymous engineer or a panpsychist philosopher.
@JacyAnthis I don’t know how much it helps or how much the outcome of this market will be affected, but following Adele’s feedback below, I’ve amended the adverb in the OP. The original question asked whether there would be another *falsely* reported sentient AI, as opposed to a *widely* reported one. Either way, the recent widespread fascination is enough to convince me that someone will run this story again this year, given our eagerness to see these kinds of breakthroughs in this technology, whether true or not.
@LBeesley I think that was a good amendment! Adjudicating "false" sounds too contentious.
I would agree that if a Google DeepMind, OpenAI, or Meta engineer came out as forcefully as Blake Lemoine did, they would get at least one article about them in a major media outlet (and several more in minor outlets based on that).
@adele This is akin to the "existence of god" debate. Our general understanding of artificial intelligence is that it is the product of a prompt: a block of code created by humans. Until there is a scientific consensus among people far smarter than I am that an artificial intelligence has developed self-awareness, I'll continue my life assuming it is impossible and that any such news will be debunked, as it was last year.
@LBeesley In the world where an AI actually becomes sentient, I imagine it most likely would be controversial for some time and not resolve into a scientific consensus until several years later. I understand that your prior is essentially that this is impossible, but I think that this question is more accurately tracking "will there be a report of sentient AI by CY2023" in a prior-independent way than its stated title.
@jacksonpolack If it’s covered similarly to the way that LaMDA was covered, I’m resolving as a yes. That debacle made global headlines and was reported by multiple reputable media outlets.