All of the AIs widely available in early 2024 are very transparent about not being real people. In contrast, the traditional Turing test is designed to evaluate an AI that is actively trying to pretend to be human. What if we go one step further and imagine an AI that is trained to impersonate a specific living or historical person?
The impersonation should be convincing at the level of a 30-minute chat-based Turing test: a person somewhat familiar with the impersonated character shouldn't be able to reliably tell the simulation apart from the real deal. Of course, for people who are already dead a proper test is impossible, so we would need to infer the quality indirectly. If resolution is unclear, I may run a poll on Manifold and/or resolve some answers as N/A.
The person being impersonated should be prominent enough to have a Wikipedia page about them. They have to be known for their writing/speaking/thinking, being for example a scientist, a politician, or a writer. This prevents cheating the test by simulating an especially mundane or inarticulate person.
It's OK to test the AI on the skills its original had. A Richard Feynman bot should be able to solve physics problems and teach you how to play the bongo drums.
The AI is not expected to know non-public information about the person it impersonates, but it should be able to plausibly fake answers about their work.
In case full-brain simulation is demonstrated before these conditions are otherwise fulfilled, the market will resolve positively.
The AI has to be publicly available. A regular Manifold user should be able to chat with it either for free or for a reasonable price.
I will not bet on this market.
Update from 2024-02-06: added a clarification that the test can't be cheated by simulating a mundane or inarticulate person.
30 minutes is a pretty long Turing test. I assume we're talking text-based chat, right? Even so, that's a pretty high bar, I feel. I suspect questions of the type that an AI would refuse to answer would go a long way toward exposing the AI; remember, Turing tests are supposed to be adversarial, not collaborative.
@Adam Yes, 30 minutes of text-based chat. I feel like a short 5-minute test would be easy to fake, and I'm interested in an AI really "living" the person that it is impersonating.
Regarding the test being collaborative vs. adversarial: I'm not really interested in catching the AI on a technicality. It's more like this: imagine you are chatting with your favorite writer about their work. You are not actively trying to prove that they are fake, but after the conversation you will be asked whether you think it was the real person or an AI.
I understand that this is a bit vague, so maybe I'll resolve it by making a poll, especially if it's a simulation of a dead person and a proper test is impossible.
Any famous person, most famous people, or one famous person? I think it is far easier to pretend to be Mike Tyson than @EliezerYudkowsky (in text, not boxing).
@NivlacM I agree.
It can be any one person, but chatting with them has to be remarkable in some way, so they have to be someone who is known for their writing/speaking/thinking. They could be, for example, a scientist, politician, philosopher, or writer.
I'll add a clarification.