This market resolves to the year in which an AI system exists which is capable of passing a high quality, adversarial Turing test. It is used for the Big Clock on the manifold.markets/ai page.
The Turing test, originally called the imitation game by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
For proposed testing criteria, refer to this Metaculus Question by Matthew Barnett, or the Longbets wager between Ray Kurzweil and Mitch Kapor.
As of market creation, Metaculus predicts there is an ~88% chance that an AI will pass the Longbets Turing test before 2030, with a median community prediction of July 2028.
Manifold's current prediction of the specific Longbets Turing test can be found here:
/dreev/will-ai-pass-the-turing-test-by-202
This question is intended to determine the Manifold community's median prediction, not just of the Longbets wager specifically but of any similarly high-quality test.
Additional Context From Longbets:
One or more human judges interview computers and human foils using terminals (so that the judges won't be prejudiced against the computers for lacking a human appearance). The nature of the dialogue between the human judges and the candidates (i.e., the computers and the human foils) is similar to an online chat using instant messaging.
The computers as well as the human foils try to convince the human judges of their humanness. If the human judges are unable to reliably unmask the computers (as imposter humans) then the computer is considered to have demonstrated human-level intelligence.
Additional Context From Metaculus:
This question refers to a high quality subset of possible Turing tests that will, in theory, be extremely difficult for any AI to pass if the AI does not possess extensive knowledge of the world, mastery of natural language, common sense, a high level of skill at deception, and the ability to reason at least as well as humans do.
A Turing test is said to be "adversarial" if the human judges make a good-faith attempt, to the best of their abilities, to successfully unmask the AI as an impostor among the participants, and the human confederates make a good-faith attempt, to the best of their abilities, to demonstrate that they are humans. In other words, all of the human participants should be trying to ensure that the AI does not pass the test.
Note: These criteria are still in draft form, and may be updated to better match the spirit of the question. Your feedback is welcome in the comments.
Would the bins be under "your trades" or do you mean something else?
Oh, I see what you mean. Yeah, I don't know. My guess would be that the boundary is from January 1st 2024 to December 31st 2025.
@ManifoldAI I created a related market here which will resolve based on this market.
@Soli Huh, if it could be expected that all the humans for the Longbets Turing Test would be in roughly the same location, couldn't you easily sniff out the AIs by asking about how the place smells on the day of the test?
@jim which is not the same thing at all 😅 check the question with my linked tweet and comment for some context
@GooGhoul very interesting thought, you are probably right, but then I think this market is very wrong on the 2029 date since machines won't be smelling anytime soon
@Soli it's obviously not the same thing, but isn't it like the same thing as a camera is for the visual modality?
@jim i am not familiar with spectrometers but if they can indeed be used to allow machines to smell the same way a camera allows them to see then yes would suffice
@Soli But where would it get the information to make the right lie?
It's not like anyone maps out the smell of places in some database that the AI could've been trained on or have access to.