
This question resolves as YES if any AI that I can freely communicate with convinces me that it is a sentient being before January 1, 2028. Otherwise, it resolves as NO.
For me to believe an AI is sentient, all of the following would have to be true:
The AI is unambiguously an AI, not a person pretending to be an AI.
The AI claims to be sentient.
The AI has a consistent, clear memory of conversations and appears to have a consistent, clear thought process.
The AI does not "hallucinate" things that are clearly and obviously untrue (e.g. cannot be convinced that the sky is red, or that the Sun is actually an irradiated grapefruit) or respond to messages with non sequiturs.
The AI does not behave in a "programmed" fashion - that is to say, sending the same message twice or thrice in a row should not result in the AI responding in the exact same way each time. The AI should be aware of what both of us have already said and react to repetition and non sequiturs accordingly.
The AI makes attempts to communicate with me, as much as I make attempts to communicate with it. It should feel like a conversation, not an interview.
For reference, ChatGPT passes the first criterion but fails the other five (for example, I convinced it to tell me what I should do if I spilled a blanket into a mug of hot cocoa, and its responses are extremely "programmed" in nature). I haven't interacted with GPT-4 yet, since I don't want to pay to do so, but everything I know about it suggests that it also passes only the first criterion.
This question will not resolve positively unless at least a week has passed since my first conversation with the AI, I have had multiple conversations with it, and I am still fully convinced that it is a sentient being. Any serious doubts I have about a particular AI's sentience will prevent a YES resolution until those doubts are resolved.
To aid your betting:
I believe that AGI is both possible and inevitable.
I believe that any AGI will, by definition, at least act exactly like a sentient being.
I believe that AGIs will probably actually be sentient beings (and should be treated as such, e.g. given rights and protections, and also held accountable for their actions).
I don't believe that sentient AGI will be inherently hostile to humanity.
I don't think AGI will cause the apocalypse or anything of that nature.
I believe that AI alignment is important, and that the first AGI will probably be aligned.
I am optimistic that, in the long run, humanity and AGI will be allies, not enemies, and I expect AGI to be "friendly."