Reality Catches Up to the Turing Test


“Do computers think? Some experts say yes, some say no.” —Time magazine, Jan. 23, 1950

How do we tell whether a machine thinks? Much of today’s discussion of the matter starts with British computer scientist Alan Turing (1912-1954). In 1950, Turing proposed that a computer that could converse naturally with humans would inevitably come to be considered intelligent. He illustrated his idea with an “imitation game”: A human judge poses a series of questions to two unseen beings, one a human and the other a computer imitating a human. Based on the answers, the judge tries to guess which is which. If the judge is unable to do so reliably, argued Turing, then the computer would have a strong claim to being capable of thought.

Interviewing a robot.

Source: Dall-E 3 / OpenAI

We use open-ended questions and answers to vet job applicants, presidential candidates, and clients on a psychoanalyst’s couch. Hence the “Turing test” for machine cognition. Be warned that the direct approach may not work: The judge can ask, Are you human?, but the computer is allowed to lie. It might be better to ask something more indirect, like What’s your favorite Beyoncé song and why or Tell me how you’d deal with a beloved relative who shoplifts.

Turing’s thesis was nuanced. He seems to have regarded machine consciousness as a metaphysical matter, one that could not be established in strictly scientific terms. His 1950 essay speaks mainly of human perceptions of computers. Turing predicted that, by the end of the 20th century, “the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

Yet more than 70 years later, the Turing test is often regarded as an instructive failure. One reason is the “ELIZA effect.” In the 1960s, MIT computer scientist Joseph Weizenbaum created ELIZA, a text-based program that would respond to typed comments in the fashion of a nondirective psychotherapist. ELIZA was prone to answer questions with questions (“Why do you say that?”). It could riff on key words in a human’s comments or repeat those comments nearly verbatim as if pondering them. When curious humans tried to test the limits of ELIZA’s knowledge, it could mask its ignorance (of almost everything) by making its response part of the “therapy.”

Human: Who is Sylvia Plath?
ELIZA: Does Sylvia Plath remind you of your mother?

There was not much more to ELIZA than that. It was certainly light years less capable than today’s chatbots. Yet Weizenbaum was astounded at how many people found its canned statements convincing. “What I had not realized,” he wrote, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
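The keyword-and-template trick Weizenbaum used is simple enough to sketch in a few lines. The following is an illustrative toy, not ELIZA’s actual script (the original used a much richer set of keywords and ranked decomposition rules), but it reproduces the pattern: scan the input for a keyword, fill a canned template with the matched fragment, and fall back to a stock deflection when nothing matches.

```python
import random
import re

# Keyword rules: a regex to match in the user's comment, and templates
# that reuse the captured fragment. (Illustrative rules, not ELIZA's script.)
RULES = [
    (r"\bI (?:am|'m) (.*)", ["Why do you say you are {0}?",
                             "How long have you been {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}.",
                     "Does your {0} remind you of anything?"]),
    (r"\bwho is (.*?)\??$", ["Does {0} remind you of your mother?"]),
]

# Stock deflections used when no keyword matches.
FALLBACKS = ["Why do you say that?", "Please go on.",
             "How does that make you feel?"]

def respond(comment: str) -> str:
    """Return a reply: first matching keyword rule wins, else a deflection."""
    for pattern, templates in RULES:
        m = re.search(pattern, comment, re.IGNORECASE)
        if m:
            return random.choice(templates).format(*m.groups())
    return random.choice(FALLBACKS)

print(respond("Who is Sylvia Plath?"))
# -> Does Sylvia Plath remind you of your mother?
```

The program has no model of the world, no memory of the conversation, and no notion of what “Sylvia Plath” means; the apparent thoughtfulness is entirely in the reader’s head, which is precisely the ELIZA effect.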

The Loebner Prizes

It’s easy to fool some of the people some of the time. This was notoriously demonstrated in the Loebner Prize competitions. In 1990, Hugh Loebner, an American inventor and gadfly, offered a $100,000 prize and a solid-gold medal to the first program to pass the Turing test. Loebner also funded an annual series of competitions with more modest cash prizes.

In 1990, nothing even remotely approaching Turing’s criterion existed. In any case, Turing had described a thought experiment, not a reality show. To have a viable competition, Loebner and his advisors redefined the challenge. They decreed that each competing agent (computer or human) would declare a category and answer questions only in that category. The first competition’s categories could have been pulled from Jeopardy!: “small talk,” “dry martinis,” and “Shakespeare.” Judges were further instructed to avoid deceptive or trick questions. This created a test very different from Turing’s.

The Loebner contestants did occasionally fool a judge. One judge ruled that a Shakespeare-savvy contestant had to be a computer because no human could possibly have such encyclopedic knowledge. (She did.) Weizenbaum submitted the first competition’s winning bot, an ELIZA retooled to spout “whimsical conversation.” By speaking in nonsequiturs, it sidestepped probing questions as skillfully as a politician.

Mainly the Loebner competitions fooled the media into thinking that something important had happened. Loebner had a Ph.D. in sociology and was otherwise known as an activist for the rights of New York’s sex workers. He told the press that the women he found attractive would not have sex with him except for money. He compared his predicament to that of Turing, a gay man in 1950s Britain who was apparently goaded to suicide. Loebner took out a mortgage on his home to fund his prize. That didn’t earn him much gratitude. For AI pioneer Marvin Minsky, the Loebner Prize was simply “obnoxious and stupid.” Minsky offered his own facetious prize for anyone who could put an end to the Loebner competition. Siri and Alexa may have accomplished that. They robbed the competition of whatever novelty it had. The last Loebner competition was held in 2019; the gold medal was never awarded.

Ironically, the competition was discontinued just a couple of years before it might have gotten interesting. Today we take glib chatbots for granted. But does ChatGPT think? Nearly everyone says no.

The history of artificial intelligence is one of moving goalposts. There was a time when serious people believed that a machine that could beat grandmasters at chess would have to be considered intelligent or even conscious. Then IBM’s Deep Thought began defeating grandmasters in 1988. It was a victory for the technology, but no one thought Deep Thought had a soul. Rather, the machine demonstrated that it is possible to be very, very good at chess without possessing general intelligence or subjective experience.

Now we’re having a goalpost moment with the Turing test. It appears that conversational ability is necessary but not sufficient for human-style thought. The question Turing asked in 1950 remains as pertinent as ever: If and when machines start to think…how will we know?
