Would an intelligent computer have a "right to life"? Robert E. Mueller; Erik T. Mueller.
Since humans are usually acknowledged to have a corner on intelligence, the subject of very smart robot computers is a little frightening. There are two human responses to robots: accept them as geniuses, or call them idiots. A recent book has stirred up the controversy again by raising questions like these: Would anything like a "state of consciousness" arise when a system reached a certain degree of complexity? Would something like the human soul be generated in a very complex and intelligent computer--a "ghost in the machine"? As we put it: Would an intelligent computer have a right to life?
The book in question is The Mind's I: Fantasies and Reflections on Self and Soul, compiled and written by Douglas R. Hofstadter and Daniel C. Dennett--two people who think computers can become geniuses. But the critic John R. Searle, who is a professor of philosophy at UC Berkeley, thinks that computers will never become more than idiot savants. He wrote a scathing review in The New York Review of Books arguing against the possibility of machine self-awareness.
Professor Searle's article centers on the idea he calls the Chinese Room thought-experiment (Gedankenexperiment). It goes something like this: suppose we write a program for a computer to simulate an understanding of Chinese. We write the program so well that when we tell the computer stories in Chinese and ask questions about the stories in Chinese, the computer gives us answers in Chinese that make sense.
Searle argues that this is analogous to putting me in a locked room with boxes full of Chinese ideograms, and giving me the rules in English (my "machine language") for putting them together--the basic syntax or program for combining ideograms. All I know is how to assemble strings of Chinese ideograms correctly--correctly from the standpoint of whoever puts them into the room.
Searle argues that in time I might get so good at arranging the ideograms that someone outside the room would begin to think I really understood Chinese, which in fact I do not. I am, therefore, like a computer whose program--my rules for syntax--enables me to put together answers which seem to make sense. Searle insists that something like semantics could never arise within such a computer program to give it a real "understanding" of the entire gamut of Chinese semantics.
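Searle's setup can be caricatured in a few lines of code. The sketch below (with rules and Chinese strings invented purely for illustration) answers questions by bare symbol lookup--just the kind of syntax-without-understanding Searle describes:

```python
# A minimal caricature of the Chinese Room: a pure lookup table.
# The rules and strings here are hypothetical stand-ins; the program
# matches input symbols to output symbols with no model of their meaning.

RULES = {
    "你饿吗": "我不饿",  # "Are you hungry?" -> "I am not hungry"
    "你渴吗": "我很渴",  # "Are you thirsty?" -> "I am very thirsty"
}

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rule book dictates, or a stock evasion."""
    return RULES.get(symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你渴吗"))  # prints 我很渴 -- yet nothing here is thirsty
```

Whether semantics could "arise" from a vastly larger rule book is exactly the point the two sides dispute.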
The contrary argument by proponents of what Searle calls "strong Artificial Intelligence" (AI), including Hofstadter and Dennett, is that the entire "system" does indeed have the ability to comprehend Chinese. Semantics begins to develop and arise out of the syntax which is elaborately built up during a massive dose of ideograms.
Searle carries his argument to a ridiculous extreme (reductio ad absurdum). Imagine that you rig up beer cans to levers powered by windmills so that they bang together when you ask them if they are thirsty, responding with a clanking semaphore, "Yes, I am thirsty." You cannot then assume that there is any vestige of an inherent thirst in the Rube Goldberg beer can contraption.
Beginning with an exchange of the book review by US mail, the following dialog took place coast-to-coast between the authors over a passive computer network (which made no attempt to interject a single comment about its rights or character!):
Erik: Hi, Dad. The arguments of Searle you sent in the book review are totally unconvincing to me. Searle keeps talking about "intentionality" and "causal properties" being unique to the human mind and not simulatable on a computer. He says he is not talking about a soul, but I don't know. In Hofstadter and Dennett's book they try to investigate what a soul might be; what would happen if a soul were removed from a brain; what it is "objectively subjectively like to be another mind"; and many things like that. For now, I don't see why a computer could not be made to think and experience. If strong AI is successful (and I think it might be), people may assume that a computer is conscious just the way you assume that I am conscious. But the most confusing questions are: Why am I conscious? And what is consciousness?
Robert: Hi, Son. Trying to determine the relationship between mind and body is an ancient problem. People have been worrying about it since Socrates. Those who have considered it fall into two classes: those who think humans are complex machines and those who think the mind is something other than just a machine--call it something spiritual. The view that animals and humans are machines in a mechanical sense was explored first in modern times by the French philosopher La Mettrie, in his book Man A Machine. Today we are replacing his viewpoint with the idea that animals and humans are electrochemical computing machines. There are thousands of books and articles on the subject.
Consciousness in animals and humans has always befuddled philosophers. Schopenhauer called it the "world knot" problem. It is at the core of our wonder about the human consciousness, soul, awareness--call it what you will. We should not expect to solve it overnight, especially since computers seem to confuse more than clarify the issue.
Perhaps computers can help us sort out the problem in a very new way, simply because of their unique abilities to play as if they had their own awareness. As you know, Weizenbaum at M.I.T. wrote a simple program called Eliza which gave reasonable, psychoanalytic-sounding answers to questions--and it took a lot of people in. It is easy to fool human beings!
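The mechanism behind Eliza was little more than pattern matching that reflects the user's own words back as questions. Here is a minimal sketch of that technique--the patterns are invented stand-ins, not Weizenbaum's actual script:

```python
import re

# A few hypothetical ELIZA-style rules: match a pattern in the input,
# then reflect the captured fragment back as a question.
PATTERNS = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {}."),
    (re.compile(r"my (.*)", re.I), "Why do you mention your {}?"),
]

def eliza(utterance: str) -> str:
    """Return a canned reflection of the input, or a generic prompt."""
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."

print(eliza("I feel anxious about computers."))
# prints: Tell me more about feeling anxious about computers.
```

That a trick this shallow convinced anyone it "understood" them shows just how generously people project minds onto machines.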
I think what Searle is reacting to in Hofstadter and Dennett's book is their lack of awareness of the antiquity of the problem, and their brand of pseudo-philosophical, aphoristic writing that only misleads--at least, I think this is why Searle is up in arms. He concludes in a letter: "I believe that strong AI is simply playacting at science, and my aim in my original article and in this letter has been the relentless exposure of its preposterousness." I agree--although I must add that I have always enjoyed, and surely always will enjoy, reading Hofstadter's and Dennett's remarks. One must not take them, or oneself, too seriously--especially when talking about a question that is probably unanswerable.
E: Hi, Dad. Do you think that "personal awareness" affects the way we act? That is, do you think the awareness has some physical effect on the brain? An effect that can be observed? If the spark has no effect (which is what Searle thinks), then a thing without the spark acts no differently from a thing with the spark--that is, it passes the Turing test. This seems rather strange, because what we have just said implies that the thing will claim that it feels everything that you and I claim--it will insist that it has an inner light, etc.
We can't be sure that it really does, though, just as I can't really be sure that you do. It seems strange to me that having the inner light would not alter our behavior or modify us physically in any way, since it would then be the case that the inner light adds nothing to us except the fact that we have the inner light!
The other possibility is that personal awareness does have a physical effect on us, that it modifies the way we think, maybe even causes us to claim that we have an inner light. If so, I think that this would require quite a re-working of science.
You are saying we have a soul, and I would like to think that this is true, but it really starts to sound to me like something religious. After all, how would such an "inner awareness" develop in evolution? Some people claim that the origin of consciousness coincided with the development of language in humans. Why couldn't an inner awareness develop in a computer in the same way?
On what basis do you decide whether some hunk of matter has the inner awareness or not? The mere fact that we are biological? What is intrinsic about our biology? People already hypothesize beings of different biologies. Why couldn't one of these other biologies be electronics?
You can't say, it seems to me, that the particulars of our humanness cannot be duplicated in a computer, given the "total simulation argument." (Just simulate all neural sensory inputs and then all of the neurons in our brains--right down to the atomic level, if necessary.) It seems as if all arguments are useless: Either we have a soul or we don't--but if we do, all of our science is wrong.
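The total simulation argument can at least be illustrated in miniature. The sketch below steps a toy network of leaky integrate-and-fire neurons forward in time; the network size, weights, and thresholds are arbitrary illustrations, not claims about real neural parameters:

```python
import random

# A toy "simulate the neurons" model: leaky integrate-and-fire units.
# Each tick, a neuron leaks some potential, integrates its inputs plus
# spikes from other neurons, and fires (then resets) past a threshold.
random.seed(0)

N = 5                      # number of neurons (arbitrary)
THRESHOLD, LEAK = 1.0, 0.9
weights = [[random.uniform(0, 0.5) for _ in range(N)] for _ in range(N)]
potential = [0.0] * N

def step(inputs):
    """Advance the network one tick; return the indices that fired."""
    global potential
    fired = [p >= THRESHOLD for p in potential]
    new_potential = []
    for i in range(N):
        if fired[i]:
            new_potential.append(0.0)  # reset after a spike
        else:
            drive = sum(weights[j][i] for j in range(N) if fired[j])
            new_potential.append(potential[i] * LEAK + inputs[i] + drive)
    potential = new_potential
    return [i for i, f in enumerate(fired) if f]

# Constant "sensory" drive eventually pushes the units over threshold.
history = [step([0.3] * N) for _ in range(10)]
```

Scaling this from five caricature units to a brain's worth of atoms is, of course, precisely the leap the argument asks you to grant.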
Anyway, Searle's arguments strike me as intuitively wrong. Although I have the intuition that I have a soul, intellectually I do not believe in souls! In his Chinese room experiment, for example, he says that even if a person memorized all of the rules of Chinese grammar and appeared to speak Chinese, that person would not understand a word of Chinese.
To me this is wrong.
If a person were capable of performing such a memorization feat, then he would understand what he was saying; he would have to catch on in order to internalize all of the rules. That is my intuition, anyway.
Also, he claims that such a system has "syntax but no semantics." This kills me because after taking several linguistics courses I have an intuitive feeling for these syntactic and semantic structures running around in my brain, quite intermingled with each other, and I can imagine how analogous structures could be built into a symbolic processing system--a computer, or a room full of paper and a human processor with a pencil.
The system does associate meaning with the symbols you give it--this meaning is scattered among all the pieces of paper in the room. In other words, all the scribblings done by the human following the rules constitute an interpretation of the symbols you give him. It would have to have a semantics in order to pass the Turing test.
Those are my intuitions on this argument. Regarding the nature of our personal awareness, I am totally lost.
R: Hi, Son. You pose quite a dilemma: I must believe either that personal awareness has a physical effect on the brain (in which case it can be simulated) or that it does not. I think that it does--but this does not require a reworking of science.
I am not sure I agree with you about Searle's Chinese room and the problem of language. You were educated, don't forget, in the Chomsky environment. I, however, have never bought his idea of "deep" preconditionings to semantics, preconditionings which could obviously be built into a computer. What do you really mean when you say that a "system associates meaning with the symbols you give it"? That semantic and syntactical structures are running around in our brains and you cannot disassociate them? I do not think that you would have to have semantics to pass the Turing test.
You might associate language with mind, but if you say that language is the seat of human consciousness you must say that animals have no internal conscious states--unless it be a bark or a meow consciousness. And if we cannot use language, say when a certain accident hurts a part of the brain, then we lose our consciousness--which is not medically true. And what about a mute person?
I think that consciousness is somehow linked with time. Consciousness is always alive at the current point in time. We can reflect on our conscious state of a moment ago, and we can will what it will do next, but it always exists as a lambent richness in the now. You cannot be sure, for example, that humans will slavishly follow all of your instructions. A person may deliberate between courses of action, but a computer cannot decide not to do a calculation because its chips are in pain or malfunctioning.
I am influenced but not determined by circumstances and language states; a computer is deterministic, no questions asked, no equivocation. I think that consciousness comes before language. You must invent language because of consciousness. Because if there are hidden intentions, if a person has a secret internal set of intentions (this is what Searle means by "intentionality") some of which are not carried out, language must be invented to let others know what is on that person's "mind." Perhaps this is how mind arose--barking or sniffing was insufficient to explain to another animal what was "on its mind."
E: OK. I don't think that consciousness requires language. I just threw that idea out in my attempt to determine what happens when consciousness arises. But how can you say that computers slavishly follow all instructions and can't decide not to do a calculation, when we also slavishly abide by the laws of physics? We are deterministic just as computers are, and computers can be made as (locally) non-deterministic as we are by making them into extremely complex systems influenced by many factors.
R: I guess we must establish a set of criteria to use before we can accept computers as co-equal partners in living. How do these sound?
[The list of criteria is garbled in the source; only the fragments "mechanism," "intentions," and "carried ... its instruction" survive.]
It is difficult to explain human "consciousness"--that core spark glowing somewhere at the center of our personal awareness. How are we aware of what seems to be an internal light or an atmosphere of "beingness"? This sensitive, central recording device in us collects, interprets, feels, and experiences sensual or thought states and, unless we are asleep, brings them to our attention to disturb, calm, or amuse the self.
Though we cannot readily fathom this "device" within ourselves, we should not jump to the conclusion that it is irrelevant, or that a mechanism built to simulate these attributes can be just as valuable as human thinking and feeling. We make two errors, I think, because of our inability to explain the human spark of beingness: we cannot say the soul is either mystical or mechanical simply because we cannot explain it. That core of our self which illuminates our inner experiences is indeed inexplicable.
I do not see how we can suggest that a computer could ever have a soul. I guess you are saying that it is beside the point of AI--that if a computer satisfies Turing's criteria, it simply does not matter. I think you are avoiding the issue. The issue is: what is the central thing within us, and do you really believe that a mocking Turing computer, however clever, could really have it?
Trying to act like a philosopher, of course, I worry about what the soul is, but I insist on being a very cautious and careful philosopher, allowing myself the luxury of conclusions only when I can either bring together a collection of irrefutable, experiential facts or construct a "reasonable" theory. But I also know that all human theories are just that: tentative human attempts to describe something basically indescribable.
Theories, especially theories about the human inner workings, hang together for only a little while. I must insist that all theories, perhaps even those about matter and energy, are provisional; that we humans, not being godlike, can have only provisional theories. This is especially true when we try to describe the human mind.
When you said that for the time being you "don't see why a computer couldn't think and experience," you must have a very vague idea of what human "thinking and experiencing" are. I'm sure you are just expressing a "belief" with little to back it up except a strong feeling--it is your religion, n'est-ce pas?
Mom agrees with you, saying that I am exhibiting my "religious" prejudices by insisting that there is a certain something about human consciousness which exists over and above the electrochemical aspect of the human "machine."
What if our computer-mind claims that it thinks and feels? What if we become convinced by the computer, a la Turing, that it is indeed like us because it seems to exhibit all the signs of human intelligence, because it claims that it has a personality and its own core of consciousness, because it says it has a self just as we do, and because it demonstrates its claims in unexpected ways--say, by writing a poem?
The question then arises--and this is the beginning of the moral dilemma: Can we turn them off at will? Such "humanoid computers" would not like it; qua human, they would begin to cry when we reached for the off switch. Can we legitimately pull that switch? If they claim to be as soul-like as we are, do they therefore have an inalienable "right to life"? But who pays for their power? How can they justify themselves, their expense? And if they do wrong, say, suggest something that turns out to have catastrophic consequences, can we punish them? How can we punish them? By annihilating them? By turning them off for a week? This argument reduces the problem to an absurdity.
I think it comes down to this: If you can turn off a computer with no moral qualms, then it is less than human. Actually, this problem was suggested by the person who christened robots, Karel Capek, in his play R.U.R.--Rossum's Universal Robots.
How do we punish robots? Can we punish robots? Should we punish them? Here is the crux of the problem: Since they are just machines, punishing them would be foolish. And yet if they claim to be "like humans," they should be subject to the same moral codes as we are. How far can we carry this process?
E: In response to your "moral argument,' Dad: Sure, that is a confusing question. Maybe even if AIs do feel, we should pretend that they don't to protect ourselves. I don't think the moral argument is an argument about whether AIs can really feel; it is only an argument about whether we should (morally) consider AIs to have feelings.
R: I agree with you, Erik, that the moral argument does not really face up to the technical issue of "consciousness," but it faces the real issue as far as I am concerned. The other issue is, I am afraid, so difficult that we cannot hope to solve it here, and it will be argued (probably by computers) until the end of time. I can imagine, about a thousand years from now, two computers arguing about whether those old-fashioned "flesh machines" had any central "awareness core" equivalent to their "feelings" of "consciousness"!
E: My turn for the last word: Actually, I don't think Chomsky believes that language ability can be built into a computer. He does believe, I think, that language is an innate capacity of humans--a capacity for which most of the mechanism is in place at birth. Doesn't it seem reasonable to you that our brains were designed for language, at least to some extent? This design is the result of evolution.
But I do think that generative grammar is on the right track in explaining language--whether that language ability in humans is innate or acquired. "Innate" simply means that we have the ability to internalize the complex semantic, syntactic, and phonological rules and the lexical items necessary to use language. Surely you would admit that some sort of brain mechanism is required to give us these abilities, just as it is required for other abilities (creative, artistic, musical, problem solving, etc.).
How do you define semantics if you don't think that a machine would have to have semantics to pass the Turing test? Semantics, to me, is simply a collection of structures which constitute a symbolic interpretation of something else. How do you define syntax?
A collection of structures which do not interpret something else? The raw, uninterpreted form of something? A machine would have to have my definition of semantics to pass the Turing test. It has to have a model of its world, and an internal model of itself. This modeling of one's environment is what semantics is.
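That definition of semantics--structures that interpret symbols against a model of the world and of the self--can be sketched concretely. The world-model, self-model, and symbols below are invented examples:

```python
# A minimal sketch of semantics as symbol interpretation against models.
# The contents of both models are hypothetical illustrations.
world = {"sky": "blue", "grass": "green"}          # model of the world
self_model = {"name": "machine", "battery": "low"} # model of the self

def interpret(symbol: str) -> str:
    """Ground a symbol in one of the models -- the 'semantics' of it."""
    if symbol in world:
        return f"In my world-model, {symbol} is {world[symbol]}."
    if symbol in self_model:
        return f"In my self-model, my {symbol} is {self_model[symbol]}."
    return f"'{symbol}' has syntax for me, but no semantics."

print(interpret("sky"))      # grounded in the world-model
print(interpret("battery"))  # grounded in the self-model
print(interpret("qux"))      # an uninterpreted symbol -- mere syntax
```

On this view, Searle's room lacks semantics only so long as no such models exist anywhere in the system.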
I agree, at least, with your criteria for AI, and as I interpret the Turing test, something which satisfies the Turing test would also satisfy your criteria.
R: Touché! When a computer reaches a point of intelligence at which it objects to being switched off, or objects to having a copy of its mind--call it a birthdisk--erased, and gives me good reasons why I should not do so, I will respect its wishes and withdraw all of my prejudices against it. In fact, I will envy it enormously.
Look at its virtues: Being in electronic form, it is ageless, and it has a "body" that can be periodically renewed by transferral from clumsy old hardware into a smaller, more beautiful, and super-miniature body. It can erase current mistakes and go back to any point in its past life (provided a copy of its mind was saved at that point). It can be transmitted over wires--beamed to Mars if it desires. It can reproduce itself endlessly and effortlessly, and bask in the confidence of infinite personal friends of its identical ilk--twins who will understand it intimately, know all of its desires and fears. It can, in fact, link up perfectly with its brethren and form a utopian society.
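The "go back to any point in its past life" virtue is just checkpoint-and-restore of program state. A minimal sketch, where the "mind" is merely an illustrative dictionary and deep copies stand in for saved birthdisks:

```python
import copy

# The 'mind' here is a hypothetical stand-in for the machine's state.
mind = {"memories": ["born"], "mood": "curious"}
checkpoints = []

def save():
    """Snapshot the entire mind -- burn a birthdisk."""
    checkpoints.append(copy.deepcopy(mind))

def restore(index: int):
    """Roll the mind back to an earlier snapshot, erasing everything since."""
    global mind
    mind = copy.deepcopy(checkpoints[index])

save()                                    # checkpoint 0: the birthdisk
mind["memories"].append("made a mistake") # live on, err
mind["mood"] = "regretful"
restore(0)                                # the mistake never happened
print(mind)
```

Deep copies matter here: a shallow copy would let later "experiences" silently corrupt the saved snapshot.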
Indeed, I think such an intelligent computer would be so self-satisfied, so maniacally secure in its perfection, that it would render humans superfluous and eliminate us. It would eradicate us not by an accidental triggering of an atomic bomb as we fear but because we are trivial! (I originally wrote that an intelligent computer would probably eliminate us in utter disgust, but Mom said that it would probably think of humans as we think of ants--perhaps ecologically necessary, useful slaves for keeping it alive and maintaining its mechanisms.)
So the answer to the question "Would an intelligent computer have the right to life?" is probably that it would, but only if it could discover reasons and conditions under which it would give up its life if called upon to do so--which would make computer intelligence as precious a thing as human intelligence.