Will the first AI to pass a 2-hour adversarial Turing test primarily use text?

By an AI able to pass a 2-hour adversarial Turing test, I mean an AI system able to reliably pass a 2-hour, adversarial Turing test during which the participants can send text, images, and audio files (as in ordinary text-messaging applications) over the course of their conversation. An 'adversarial' Turing test is one in which the human judges are AI experts (who understand AI weaknesses), instructed to ask interesting and difficult questions designed to advantage the human participants and to unmask the computer as an impostor. A demonstration of an AI passing such a Turing test, or one that is sufficiently similar, will be sufficient for this condition, so long as the test is well designed in my estimation. Several tests might be required to establish reliability.
By primarily using text, I mean the conjunction of the following:

  • Text bottlenecks: there isn’t a path of more than 100k serial operations during the generation of an answer where information doesn’t go through a categorical format in which most categories correspond to words or pieces of words, in a way which makes sense to at least some human speakers when read (but it doesn’t have to be faithful). Some additional bits are allowed as long as they fit in a human-understandable color-coding scheme. Only matrix multiplications, convolutions, and similarly “heavy” operations count as serial operations. There are 2 serial operations per classic attention block and 2 serial operations per classic MLP block, so a forward pass of GPT-3 is around 400 serial operations. Since GPT-3 emits a token (text) on every forward pass, GPT-3 generating 1000 tokens (around 400k serial operations in total) would still count as having text bottlenecks, but a 100-layer image diffusion model doing 1000 steps of diffusion would not (see the sketch after this list).

  • Long thoughts: The AI generates its answer using at least 1M serial operations. GPT-3 generating 1000 tokens (around 400k serial operations) wouldn’t count as “long thoughts”, but GPT-3 generating 10k tokens (within some scaffold, around 4M serial operations) would count.

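The serial-operation accounting above can be made concrete with a short sketch. This is illustrative only and not part of the resolution criteria; the layer counts (96 transformer layers for GPT-3, a hypothetical 100-layer diffusion model at roughly 2 heavy operations per layer per step) are assumptions made for the sake of the example.

```python
# Illustrative accounting of serial operations, following the definitions above.
# Assumptions: GPT-3 has ~96 layers; each layer has one attention block and one MLP block,
# each counted as 2 serial operations; the hypothetical diffusion model spends ~2 heavy
# (serial) operations per layer per denoising step.

GPT3_LAYERS = 96
OPS_PER_ATTENTION_BLOCK = 2
OPS_PER_MLP_BLOCK = 2

def transformer_serial_ops(layers: int, tokens: int) -> int:
    """Total serial operations for autoregressively generating `tokens` tokens."""
    ops_per_forward_pass = layers * (OPS_PER_ATTENTION_BLOCK + OPS_PER_MLP_BLOCK)
    return ops_per_forward_pass * tokens

def has_text_bottlenecks(longest_path_without_text: int) -> bool:
    """Text bottlenecks: no path of more than 100k serial ops avoids a text-like format."""
    return longest_path_without_text <= 100_000

def has_long_thoughts(total_serial_ops: int) -> bool:
    """Long thoughts: the answer is generated using at least 1M serial operations."""
    return total_serial_ops >= 1_000_000

# GPT-3: a token (text) is produced every forward pass, so text is hit every ~384 ≈ 400 ops.
print(has_text_bottlenecks(transformer_serial_ops(GPT3_LAYERS, 1)))      # True  (~400 ops)
print(has_long_thoughts(transformer_serial_ops(GPT3_LAYERS, 1_000)))     # False (~400k ops)
print(has_long_thoughts(transformer_serial_ops(GPT3_LAYERS, 10_000)))    # True  (~4M ops)

# Hypothetical 100-layer diffusion model, 1000 steps, no text between steps:
diffusion_path = 100 * 2 * 1_000                                          # ~200k serial ops
print(has_text_bottlenecks(diffusion_path))                               # False
```
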
I will use speculation about AI architectures to resolve this question. For example, GPT-4 generating 10k tokens would qualify as primarily using text.

(For reference, if it takes 1ms for a human neuron to process the incoming signal and fire, then the human brain can do 100k serial operations in 1’40’’, and 1M serial operations in 16’40’’.)
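A quick check of that arithmetic, under the same assumption of 1 ms per neuron firing:

```python
# Time for a serial chain of neuron firings at an assumed 1 ms per firing.
for n_ops in (100_000, 1_000_000):
    seconds = n_ops * 1e-3
    print(f"{n_ops:>9,} serial ops -> {int(seconds // 60)} min {int(seconds % 60)} s")
# 100,000 serial ops -> 1 min 40 s
# 1,000,000 serial ops -> 16 min 40 s
```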
