So if you or I took the Turing test, would we pass? Or fail?
Hold that thought. Ever since we built our first computers, we’ve wondered: can they think? Can they be made to think? They are superb at number-crunching, at solving intricate problems, at repetitive tasks, even at keeping appointments in order. But that “smart” phone you’re reading this on—oh yes, it’s a computer—how smart is it? Can it think?
Which raises a more fundamental question: what’s meant by “thinking”? Computers can add up a row of figures in a fraction of a second, but is that thinking? Or ask this: if your friend adds up a row of figures swiftly, is she thinking? Her mind is working, but I suspect you’ll baulk at calling that “thinking”. Because there’s something almost trivial about such a task. Surely thinking implies knowledge, understanding, creativity, intelligence. Where are those when you’re adding up numbers?
And that gives you an idea of what researchers have focused on, in trying to make computers think as humans do, in the discipline known as artificial intelligence (AI).
How will AI researchers know when a computer is thinking? Well, let’s suppose we can get it to act like your pal Kanakadurga does when she’s thinking. In particular, suppose we ask a computer questions we’d ask Kanakadurga, and it gives us answers that are indistinguishable from Kanakadurga’s. Suppose it does so consistently. “Wow,” we’d say incredulously, “it’s thinking!”
Alan Turing, widely considered the father of computer science and AI, made exactly this argument in a seminal 1950 paper called “Computing Machinery and Intelligence”. It begins with the words “I propose to consider the question, ‘Can machines think?’” and it describes what’s now called the Turing Test.
A questioner puts a series of questions, in parallel, to Kanakadurga and a computer. The questions and answers go back and forth via a keyboard and screen, so the only information available to the questioner is what shows up on the screen. The computer is designed to offer answers that are as human-like as possible. Kanakadurga, we must assume, will be human-like anyway. (Is she?)
The questioner must decide: which of the answers comes from Kanakadurga? If he can’t decide, and if over several such tests he consistently can’t decide, the computer passes the test.
So think: would you pass such a test?
Naturally, AI folks set their sights on producing such a computer. Massachusetts Institute of Technology professor Joseph Weizenbaum built one in the 1960s, a simulated therapist he called Eliza. Here’s an excerpt from a short conversation I had with her (at http://www.chayden.net/eliza/Eliza.html, one of the many Elizas on the Web):
Eliza: How do you do? Please state your problem.
Me: I really am glad for someone who will listen.
E: Can you elaborate on that?
Me: I want to cry.
E: What would it mean to you if you got to cry?
Me: It would mean everything!
E: I’m not sure I understand you fully.
Me: I should get hold of myself and think clearly.
E: That’s quite interesting.
Me: Things look so bleak.
E: Do you feel strongly about discussing such things?
Me: I want to do better in school.
E: Why do you want to do better in school?
Me: I don’t want to be the worst in my class.
E: Your class?
E: I understand.
If you suspect Eliza is showing definite—even spooky—understanding, you’re in good company. Weizenbaum found that many who used Eliza were convinced she really understood their feelings. In fact, and a little disturbingly, they began unburdening themselves of their most private troubles to her. This so distressed Weizenbaum that he eventually became one of AI’s more vocal critics.
But given its success, you might just begin to think that Eliza passes the Turing Test.
Except the truth is that there’s no understanding going on here at all. Eliza is programmed to follow a set of simple rules, an algorithm, and any halfway decent programmer can rustle up a system like this. (Was a time when I could). In what sense, then, can we say Eliza is thinking? That she can truly understand a neurotic patient’s woes? That she is intelligent?
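To see how little is going on under the hood, here is a minimal sketch, in Python, of how an Eliza-style rule matcher might work. The patterns and replies below are my own illustrative inventions, not Weizenbaum’s actual script: each rule looks for a keyword pattern, grabs the rest of the sentence, flips the pronouns and echoes it back.

```python
import re

# Toy Eliza-style rules: (pattern, reply template) pairs.
# These are illustrative inventions, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i want to (.*)", re.I),
     "What would it mean to you if you got to {0}?"),
    (re.compile(r"i don't want to (.*)", re.I),
     "Why don't you want to {0}?"),
    (re.compile(r"i (?:feel|am) (.*)", re.I),
     "Do you often feel {0}?"),
]
DEFAULT = "Please go on."  # stock reply when no rule matches

def reflect(fragment):
    """Swap first- and second-person words so the echo reads naturally."""
    swaps = {"my": "your", "me": "you", "i": "you", "am": "are"}
    return " ".join(swaps.get(word.lower(), word) for word in fragment.split())

def respond(sentence):
    """Return the first matching rule's template, filled in; else the default."""
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return DEFAULT

print(respond("I want to cry."))         # -> What would it mean to you if you got to cry?
print(respond("Things look so bleak."))  # -> Please go on.
```

No knowledge, no understanding: just pattern matching and a pronoun swap. That a few rules like these can pass for an attentive therapist is exactly what unsettled Weizenbaum.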
Still, some AI-ers would suggest that intelligence is embodied in the algorithm itself. They say the mind works, in essence, like an enormously complicated algorithm. Others disagree. Intelligence, they say, cannot be simulated by computers carrying out the steps of an algorithm, however sophisticated. Intelligence means a certain consciousness that algorithms don’t have. Therefore, Eliza is by no means intelligent.
This debate rages on and that’s fine. Because AI’s fondest hope is to someday produce a better understanding of thinking and intelligence, and debate will do that. And from there, that holy grail: pass the Turing Test.
And someday I really do want to take it myself. Though if the questioner decides I’m the computer, do I pass? Or fail?
Once a computer scientist, Dilip D’Souza now lives in Mumbai and writes for his dinners. A Matter of Numbers will explore the joy of mathematics, with occasional forays into other sciences.
To read Dilip D’Souza’s previous columns, go to www.livemint.com/dilipdsouza