Turing's Imitation Game
In 1950, Alan Turing devised a game to test whether computers could display intelligent behaviour indistinguishable from that of a human. In Turing's Imitation Game, now commonly known as the Turing Test, a human interrogator converses simultaneously with a human and a machine via text and aims to determine which is which. Turing thought that the open-ended nature of language would allow the interrogator to test the machine on virtually any topic.
What is this site?
Turing predicted that by 2000, computers would be able to fool human interrogators at least 30% of the time. While that prediction may not have come true on schedule, AI has improved dramatically over the last several years. We created this site to test how good current chatbots are at Turing's game, and to provide a platform for comparing different AI models as they progress. The site is part of research being carried out at the Language and Cognition lab at UC San Diego.
How does it work?
When a user joins the lobby, they are automatically assigned to either a human or an AI partner. The AI models are Large Language Models (LLMs), trained to generate plausible completions for text inputs. The AI is really just repeatedly asking the question "which word is most likely to come next, given what has been said so far?" We use several different AI models, each paired with a set of instructions called a "prompt", which tells the model what sorts of things to say in response to users. For example, the prompt will generally tell the model that the conversation is part of a Turing Test, and instruct it to act like a human.
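A real LLM is vastly more complex, but the core question it answers at every step can be sketched with a toy model. The sketch below (purely illustrative; the corpus and function names are invented for this example, and bear no relation to the site's actual models) counts which word most often follows each word in a tiny corpus, then uses those counts to "predict" the next word:

```python
from collections import Counter, defaultdict

# A toy corpus. Real LLMs are trained on vastly larger text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Answer: which word is most likely to come next, given `word`?"""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat" ("cat" follows "the" twice here)
```

An LLM does conceptually the same thing, but conditions on the entire conversation so far rather than a single preceding word, which is why a prompt placed at the start of the conversation can steer everything the model says afterwards.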
What happens to the data?
We store data from the conversations (including the text, the interrogator's decision, and the user accounts involved in the conversation) so that we can carry out research on how well the models perform, what sorts of questions people ask, and which techniques are successful for detecting AI. We will anonymize all data before using it for research, and we will never release personally identifiable data or sell your data to anyone.