Classic Computer Magazine Archive CREATIVE COMPUTING VOL. 9, NO. 8 / AUGUST 1983 / PAGE 156

The Turing Test: an historical perspective. David H. Ahl.

The Turing Test: An Historical Perspective

Ten years ago, in May 1973, I attended a conference, "Imaginative Uses of the Computer in Education," sponsored by City University of New York. It was put together by Sema Marks, the energetic director of computer education at CUNY. She assembled an amazing cast including Alan Kay, Arthur Luehrmann, Seymour Papert, Mary Dolciani, Louis Forsdale, Donald Kreider, and Kenneth Powell.

Kenneth Powell of IBM made a presentation which focused on artificial intelligence, in particular the efforts made in the 1960s to devise a computer program capable of passing the Turing Test.

The Turing Test was originally proposed by Alan Turing, a brilliant British mathematician, in the October 1950 issue of the journal Mind. He called it the "imitation game."

In Turing's words, "It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.

"It is A's object in the game to try and cause C to make the wrong identification (in other words, to pretend to be the woman).

"In order that tones of voice will not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the rooms. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as 'I am the woman, don't listen to him!' to her answers, but it will avail nothing as the man can make similar remarks.

"We now ask the question, 'What will happen when a machine takes the part of A in this game?' Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, 'Can machines think?'

"The new problem has the advantage of drawing a fairly sharp line between the physical and intellectual capabilities of a man. . . . The game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection."

Kenneth Powell took up the thread from there. He described setting up a long-term experiment with the objective of producing a computer program that could fool an interrogator into thinking it was the human. The human was armed with all kinds of reference materials--an encyclopedia, cookbooks, and textbooks--as well as a desk calculator and slide rule (remember, this is the 60's).

After describing the setup, Dr. Powell asked the conference attendees to suggest questions that would distinguish the human from the machine.

First suggested question: "Is there a man in there?"

Answer from both rooms: "Yes, there is." Naturally, the machine is lying. Powell commented, "We decided that we would allow the machine to lie until we found a man that didn't lie."

Questions of fact, as it turns out, aren't much use in distinguishing the machine from the man. The main problem is slowing the machine's flow of information down to a believable pace.

Powell mentioned that speed of response was one factor they had to deal with. When the experiment was first set up, a common approach was for the interrogator to pose a challenging arithmetic problem to both rooms; the room that answered fastest was invariably the machine. The fix was to have the computer calculate the answer, then estimate how long the average human would need to find it with a calculator and slide rule, add or subtract a random factor, and spew out the answer only after the appropriate delay.
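The delay scheme Powell describes can be sketched in a few lines of modern code. This is only a sketch: the timing model and constants here are invented for illustration, not taken from the actual experiment.

```python
import random
import time

def humanized_answer(problem, compute, secs_per_char=0.5, jitter=0.3):
    """Answer an arithmetic problem only after a delay that imitates
    a person working it out with a calculator and slide rule.

    compute       -- function returning the true answer instantly
    secs_per_char -- assumed human effort per character of problem text
    jitter        -- random fraction added to or subtracted from the delay
    """
    answer = compute(problem)            # the machine answers instantly
    base = secs_per_char * len(problem)  # crude estimate of human effort
    delay = base * (1 + random.uniform(-jitter, jitter))
    time.sleep(delay)                    # pretend to labor over it
    return answer
```

The essential trick is the one Powell relates: compute the answer first, then mimic the human schedule before revealing it.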

"So we thought," said Powell, "for several years that we had done quite well taking care of the problem of speed, until we discovered that we had been addressing the wrong problem. We should really have addressed the problem of time.

"The way this came about was that an executive sat down in front of the teleprinters and didn't do anything. So we went over the rules--knowing that executives have to have special treatment--and carefully re-explained the test to him.

"The executive looked quite offended. He said, 'I understand that. You said I could do anything I wanted to do. So I'm doing what I want to do . . . nothing.'

"So he sat there for about ten minutes, until one of the teleprinters clacked out, 'When does the test begin? Is there anyone out there?'

"The executive immediately said, 'That's the man!'"

He was right, of course, and the program was then modified to take care of this kind of approach in the future. In addition, other time-related elements, such as coffee breaks, lunchtime, and the like, were programmed in.

What sort of program was it that could deal with English sentences?

Basically, it was a forerunner of Weizenbaum's popular Eliza program; it took the input and attempted to diagram the sentences. The program had a fair-sized dictionary built in and was able to handle a wide range of questions and statements--though by no means all of them. In response to many questions, the computer simply had to fake a reply. Actually, this may be no different from what a man would do in a similar situation.
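In the Eliza spirit, such a program matches keywords against a table of patterns, fills a canned template with part of the input, and fakes a generic reply when nothing matches. Here is a minimal sketch; the patterns and replies are invented for illustration, not Powell's actual tables.

```python
import random
import re

# A few keyword patterns and canned reply templates, in the spirit
# of Eliza.  "%s" is filled with the text captured after the keyword.
PATTERNS = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["How long have you been %s?", "Why do you say you are %s?"]),
    (re.compile(r"\bdo you (.*)\?", re.IGNORECASE),
     ["Why do you ask whether I %s?"]),
]
FALLBACKS = ["Please go on.", "Tell me more.", "I see."]

def reply(line):
    for pattern, templates in PATTERNS:
        match = pattern.search(line)
        if match:
            template = random.choice(templates)
            return template % match.group(1) if "%s" in template else template
    # No pattern matched: fake a reply, as the program often had to.
    return random.choice(FALLBACKS)
```

The fallback branch is exactly the "fake a reply" behavior the article describes--and, as the author notes, not so different from what a person does when stumped.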

Powell described one such case. "A guy sat down and typed, 'Do you like sex?' Our program couldn't handle that, nor would we have put it in even if we could, since it leads to all kinds of bloodshed as far as public relations goes.

"The computer ran through its random routine and finally typed out, 'No.'

"The guy just smiled and said, 'That's got to be my wife in there.'"

Naturally, getting an answer of "yes" to a question whose answer was clearly "no," or vice versa, might make one suspicious, but not necessarily certain that the computer was responsible. However, a strategy of posing questions with unusual semantic patterns probably would eventually reveal the computer as the impostor.

Suppose you posed a question such as, "What is the sum of every even number greater than two?" Naturally, there is no answer--the sum grows without bound--but the computer wouldn't know that and might try to find one. The man, of course, would immediately recognize this as a ridiculous question and tell you so.

As it turns out, it doesn't take long for the computer to identify these problems, and it was programmed to respond appropriately.
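One simple guard of the sort implied here--flagging questions that quantify over an unbounded set before attempting any computation--might look like the following. The keyword heuristic is invented for illustration; the article does not say how Powell's program actually recognized such questions.

```python
import re

# Phrases suggesting the question ranges over an infinite set of numbers.
UNBOUNDED = re.compile(
    r"\b(every|all|each)\b.*\b(number|integer|even|odd|prime)s?\b",
    re.IGNORECASE)

def looks_unanswerable(question):
    """Return True if the question appears to quantify over an
    unbounded set, so a literal computation would never finish."""
    return bool(UNBOUNDED.search(question))

def respond(question):
    if looks_unanswerable(question):
        return "That's a ridiculous question."
    return None  # hand off to the normal answering routine
```

A crude filter like this catches the trap question above while letting ordinary arithmetic through--which is all the anecdote requires.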

One of the trickiest and most difficult situations to handle is humor. Computers just aren't funny, and they don't understand jokes. "But," said Powell, "does your wife understand every joke she hears, either?

"Eventually, we backed down on humor because different people have different ideas of what is funny. What we finally decided to do is to respond to a joke with a joke.

"If something comes in that looks like a story, we assume it is a joke. And then we have the computer try to tell a better one, which is normal human behavior."

So, what is the point of all this? What can be learned from trying to get a computer to imitate a human?

Powell felt the main point was the following: "At any time in this exercise, or any other exercise, or any computer application at all, when you can give a specific objection to a procedure and explain why it is wrong, you have automatically written the revised procedure, flow chart, and program for correcting the objection. This makes it a very powerful tool for storing a certain type of knowledge."

For example, consider a production control program written by a programmer who doesn't know much about a factory. After one look at it, the factory guys will laugh and say, "You dummy! You didn't even allow for this or that." But the programmer is listening and taking notes, which he later incorporates into the program.

After enough trips back and forth, the program begins to acquire a certain amount of intelligence, and moreover, it begins to be good enough to handle some of the live production control problems.

The real key is that the knowledge stays there. If you can save it in the machine, it has a certain permanence. However, this type of knowledge is quite different from that in an encyclopedia. An encyclopedia will tell you how to solve a differential equation, while a computer, within certain limits, will solve it for you.

From here there are just a few additional steps, according to Herb Simon, until the computer is able to solve all kinds of unstructured problems, pose new hypotheses, and truly think. But that is taking us into the future which I will leave for another article.