Desire as Intelligence
Victor Ma
1. Introduction
In Isaac Asimov’s “Bicentennial Man,” Andrew is a robot who wants nothing more than to be human. Over the course of the story, Andrew gains his freedom, acquires an organic body, and comes to give orders to human beings. The story serves as a thought experiment for our accounts of intelligence, since Andrew passes many of the popular tests. Most of these accounts rely on an intuition of what it means to understand something.
The type of understanding I refer to is the kind we attribute only to intelligent beings, an attribution usually reserved for reasonable human beings. If I made a machine that could answer questions, its answers might resemble a human’s, but this kind of question answering is not enough for us to say that the machine understands the answers it gives. In this essay, I explore what it means to understand something in this sense, and why that matters for identifying intelligent beings. Specifically, I argue that Andrew in “Bicentennial Man” is an intelligent being because he understands as we do.
I will account for different levels of understanding through desire, and show that this understanding is what we mean by intelligence. I rely on the Turing Test for my account because Turing’s test bears on whether language is sufficient for identifying intelligence. First, I will discuss language as a condition for intelligence, and how it fails to do the work we need it to do. Then, I will improve the Turing Test by adding the condition of desire. In the third section, I will outline and respond to some problems that arise from relying on desire as an explanation of intelligence. In the fourth section, I focus on Asimov’s story to show how Andrew meets the condition of desire. In particular, I take up his struggle for freedom, which does not lend itself easily to an explanation in terms of language alone.
2. Desire as Intelligence
Alan Turing devised an imitation game to determine whether a machine can think (1950). There is an interrogator, and two subjects answering questions. One subject is male, and one is female. The interrogator must determine which is which. The male subject tries to convince the interrogator that he is the female, while the female tells the truth to help the interrogator. The better the male subject responds to questions, the harder the game is. Now replace the male subject with a machine. The test is whether a machine can play this game well enough to take the male subject’s place and achieve the same rate of success in making the interrogator guess wrong.

It has been argued that this is meant as a test for intelligence, and I will follow that line of interpretation. On this account, the Turing Test effectively says that we can judge a machine intelligent if it can answer questions as well as a deceiving male subject can. In other words, it measures the complexity of the subject’s understanding of language. However, the only way it tests this is by looking at how well the subject can manipulate the words it knows to fit the situation. While this may give us a way to tell when a machine is intelligent and understands language well enough, it does not explain why the machine can understand it so well. Hence, I need to explore the conditions for this level of understanding so that we may make the Turing Test more effective. I contend that desire states are essential for achieving a level of understanding high enough for something to count as an intelligent being, that is, for the machine to pass the Turing Test.
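To make the structure of the game concrete, here is a minimal sketch in Python. The respondent and interrogator functions are placeholders of my own invention; only the protocol itself comes from Turing’s description.

```python
import random

def imitation_game(interrogator, deceiver, truth_teller, questions):
    """One round of Turing's imitation game.

    The deceiver tries to pass as the truth-teller; the truth-teller
    answers honestly. The interrogator sees only labeled text and
    must guess which label hides the truth-teller.
    """
    # Hide the two respondents behind anonymous labels.
    assignment = {"X": deceiver, "Y": truth_teller}
    transcript = [(label, q, respond(q))
                  for q in questions
                  for label, respond in assignment.items()]
    guess = interrogator(transcript)  # interrogator names the truth-teller
    return guess == "Y"               # True if the interrogator was right

# Placeholder behavior: in Turing's variant, a machine takes the
# deceiver's role, and we measure how often the interrogator errs.
machine = lambda q: "I would say whatever a human would say."
human = lambda q: "I am answering truthfully."
naive_interrogator = lambda transcript: random.choice(["X", "Y"])

rounds = 1000
errors = sum(not imitation_game(naive_interrogator, machine, human,
                                ["What do you enjoy?"])
             for _ in range(rounds))
print(f"interrogator error rate: {errors / rounds:.2f}")
```

The machine passes, on this reading, if the interrogator’s error rate with the machine in place matches the error rate achieved against the deceiving male subject.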
When we look for an intelligent being here, we are not looking at a scale of how smart someone is: either something is intelligent or it is not. Consider an alphabet-writing contest. Sitting next to each other are the two contestants: an orangutan and a 15-year-old. Each has a minute to write as many distinct letters as possible. The 15-year-old finishes rather quickly and watches as the orangutan tries to catch up. The orangutan only manages to write the vowels. Is the 15-year-old intelligent? Most of us would agree that this is not an adequate test of whether a 15-year-old is intelligent, but we might well want to say that an orangutan who can write the vowels with paper and pen is an intelligent being. This hints that we have different criteria for intelligence depending on the species of the contestant. We hold the 15-year-old to a higher standard because we already consider him, along with ourselves, an intelligent being. So we must examine what criterion determines this kind of intelligence. If we view this in light of the Turing Test, then we must ask what drives our ability to use language as an indicator of intelligence.
There are different levels of understanding that point to what kind of work language does for us, but they also show that language is not sufficient for what we mean by an intelligent being. There is the kind of understanding that is merely procedural, a rote level of understanding. So, a calculator that can read “3,” “+,” and “6” in the syntactic structure “3 + 6” will output “9.” But this is hardly what we mean when we say that the calculator understands what 9 is. Then there is the kind of understanding that depends on context and emotional response, what I call the sympathy level of understanding. So, when my friend tells me that his father passed away, I tell him that I understand his pain. This is a different kind of understanding because it requires more than knowing what the words mean at face value. Particular emotions are essential to knowing what is meant by such a sentence, and these emotions are connected to our desires as intelligent beings. This shows how certain kinds of sentences are characterized by something deeper than the words: we cannot talk about them without associating them with our ideas about death, family, love, and so forth. That association supplies the context that lets us respond properly, that is, achieve the sympathy level of understanding.
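To make the rote level concrete, here is a minimal sketch in Python of the calculator case (my own illustration; the essay specifies no implementation). The program maps token shapes to an output by a fixed rule, with nothing corresponding to knowing what 9 is.

```python
def rote_calculator(expression: str) -> int:
    """Evaluate 'a + b' by mechanical rule-following alone.

    The program recognizes the syntactic structure and applies a
    fixed rule; nothing here amounts to understanding what the
    resulting number is.
    """
    left, op, right = expression.split()
    if op == "+":
        return int(left) + int(right)
    raise ValueError(f"no rule for operator {op!r}")

print(rote_calculator("3 + 6"))  # prints 9
```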
Now, when we talk, we do not ordinarily think about the level of understanding a proposition requires, but notice that language fails to translate when we move from one level to another. We use language to express both simple and more complex propositions, yet we cannot talk about the context-and-response type of statement without appealing to the sympathy level of understanding. So there is a point where language fails to do the work of understanding for us. It can account for the rote levels, and perhaps some of the sympathy levels, but it cannot explain how we know the meaning of something based on the emotions it presents. Desire picks up where language fails: it can account for our understanding at both the rote and sympathy levels.
To get a better sense of how desire fits into understanding, it might help to consider the following case. Suppose we conclude that we should not wear t-shirts when it is cold. Most of us can grasp this connection and take the advice not to wear t-shirts in the cold. We can give a machine an algorithm that matches all warm-weather clothing with warm weather, and cold-weather clothing with cold weather, so that it too can determine that it should not wear t-shirts in cold weather. However, it is far less clear that the machine recognizes when it is hot or cold. Few of us would think that, when the machine changes its clothes to match the weather, it realizes why it does this. Changing in and out of certain clothes has a special meaning to humans because we do not want to be hot or cold: it expresses our desire. The important point is that desire explains our action, whereas our understanding at the rote level does not depend on knowing anything about the function at all.

Consider another example. I was riding my brother’s bike while he was at work, though I was not supposed to, because it is his bike. I happened to break a part while riding it, a brass handle specific to my brother’s bike, so I did not know how to fix it before he got back. I called to ask him about it. He knew which handle I meant and began giving me instructions to fix it. I knew nothing about what the handle does, and I had not used it while riding. Yet I was able to put the handle back because my brother guided me through it. This is an example of a rote level of understanding: it relied on none of my knowledge of or motivation for working with bikes. I simply took what I heard over the phone and did what my brother told me. I might as well have been a voice-operated robot. Most people would say that I did not understand what happened in that transaction beyond the trivial facts: that I put the brass handle back, and that I did it with my hands. But if I had waited for my brother to come home and fix it, we would say that he knows how to fix the bike the way he wants it. He understands at a higher level than I do for that task. This difference is exactly the kind of work desire does: it fills in the function behind most statements that rely on the sympathy level of understanding. This difference in meaning is the type of understanding I refer to.
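Here is a sketch of the clothing algorithm the passage imagines, again in Python (the threshold and wardrobe are invented purely for illustration). The rule connects temperature to clothing, but nothing in the rule corresponds to wanting to be warm or cool.

```python
COLD_THRESHOLD_C = 10  # hypothetical cutoff, chosen only for illustration

def choose_clothing(temperature_c: float) -> str:
    """Match clothing to weather by rule, with no felt sense of either."""
    if temperature_c < COLD_THRESHOLD_C:
        return "coat"     # cold weather gets warm clothing
    return "t-shirt"      # warm weather gets light clothing

print(choose_clothing(3))   # coat
print(choose_clothing(25))  # t-shirt
```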
3. Problems with Desire
One might suggest that if emotions are so important to understanding, then I do not even need to talk about desire: emotion is what matters to understanding and can explain just as well what language cannot. This does not explain what we mean by intelligence, though. Many animals can express emotions, as a dog can be happy when it sees its owner arrive with bone treats, but we do not take this as a sign of intelligence. Furthermore, emotions cannot explain what goes on when we think of concepts like death. We say that only intelligent beings are afraid of dying, while certain animals can only be afraid of the pain involved prior to death. Something with emotions is attached to how things make it feel, but something with desire is attached to how it feels: the first is reactive, the second perceptive.
A more serious objection is that desire is not accessible. Language, at least, helps us communicate because we can detect it; it may turn out to be a mysterious process how we tell whether something has desires. I admit that there is little way around this, and I appeal to intuition here. We figure out whether something has desires based on the way it uses language. So language is still important here as an indication of understanding, but it is not what determines understanding. In effect, I am not trying to dismantle the Turing Test; I am adding to it by showing what kind of work desire does in that kind of test.
A final point I should address is that this account is limited by my definition of understanding. The strength of my essay relies on my readers accepting the definitions I have laid out for the context of my argument. For that reason, I have tried to give a comprehensive spectrum of the different kinds of understanding, and to show that desire figures in most categories, so as to appeal to the widest audience. I do not claim that all understanding can be accounted for through desire alone, because any proof of that would amount to manipulating an agent’s intent: I could say that anyone acts out of desire in every case simply by distorting the intent to link it back to desire.
4. Andrew and Desire
It seems that Andrew desires freedom, and this first becomes plausible when a local judge declares that Andrew should be free: “There is no right to deny freedom to any object with a mind advanced enough to grasp the concept and desire the state” (Asimov 530). We see that desire makes a difference because the judge originally wanted to decline Andrew’s request, thinking it absurd that a machine would come forward and ask for something it could not appreciate. However, Andrew replies that “only someone who wishes for freedom can be free. I wish for freedom” (Asimov 530). This shows that the judge was willing to grant Andrew freedom once he realized that Andrew knew what he was talking about when he said he wanted freedom. Since Andrew was a respected robot who belonged to a good family, it made little sense for him to seek freedom for anything other than the value of freedom itself. It is as if a rich family bought a dog and could afford everything the dog might need. We do not ordinarily think the dog has any reason to own itself, since its masters take care of it and its needs. But if the dog had wants, desires, we might need a different account. Andrew wanted freedom, so it was no longer acceptable for anyone to own him. At this stage, Andrew is still considered a robot; the judge who made this decision called him an “object,” and I think this was done purposely to suggest that desire may not be enough to make one human, but it does show that desire is at least an integral part of understanding a concept like freedom.
That Andrew is not yet considered human within the fiction does not mean we should not feel that he is. In the story, the World Court does not actually grant him the status of an intelligent being until the very end. But I think Asimov builds sympathy for Andrew by using desire early in the story to give Andrew’s friends a reason to believe he is an intelligent being. Anecdotes involving more trivial criteria follow, but I think we are supposed to believe that Andrew is already an intelligent being. Andrew’s friends lead me to believe that the ensuing criteria are only trivial, because they treat him as an intelligent being from the moment he gains his freedom, which Andrew earned by showing his desire for it. Here I shall consider some examples of Andrew’s interactions with his friends to show this.
Andrew is prompted to alter his vocabulary because he no longer has to serve the Martin family if he does not want to. “One day Little Sir—no, George!—came. Little Sir had insisted on that after the court decision. ‘A free robot doesn’t call anyone Little Sir,’ George had said” (Asimov 531). I take this as evidence that Asimov wanted us to see a significant difference between Andrew with court-established freedom and Andrew without it. That difference would not have arisen without the condition of desire that led to the court case. One might object that this only means certain people have come to accept his freedom, not that he deserves to be called an intelligent being. A historical parallel for this distinction comes from the Civil War: many people in the North were willing to accept African Americans’ freedom, but this did not entail that they regarded African Americans as their moral equals. I think this objection can be circumvented here, because the Martin family’s instructions are repeatedly presented as ways for Andrew to bridge the gap from serving the family as a robot to attending to his own needs, not as threats to his autonomy. They are, in other words, simply a device Asimov uses so that the story can build, with Andrew progressing slowly toward being comfortable with sentience.
Later in the story, Andrew’s friends are once again present to show us the accepted intelligence status of a more confident Andrew: “‘No, You arrange it.’ It didn’t even occur to Andrew that he was giving a flat order to a human being, he had grown accustomed to that on the Moon” (Asimov 553). Here we know that Andrew receives the respect due an intelligent being because his desire is recognized. Not only can he tell his friend to get something done for him, but Asimov also makes clear that Andrew has been giving orders for a while; he has grown used to it. Taken by itself, the exchange does not strike me as particularly odd; it is the narrator’s gloss that makes it stand out as a sign of Andrew’s confidence. Andrew gives an order to a human, and his wish is respected as if he himself were human. I think I am not surprised because Asimov allows this to grow within Andrew’s character, and the way we know Asimov lets it grow is that desire is established first. That gives Asimov room to build onto Andrew the other traits that help develop the story but contribute less significantly to Andrew’s intelligence. In other words, without the expression of desire, Asimov could not have established Andrew as an intelligent being so early. And had he not done that, we would not see Andrew’s friends assuring him that he may have achieved intelligence before the later developments, e.g., when he sues his manufacturers for an organic body.
5. Conclusion
Andrew’s expression of desire led other people in the story to believe that he understood what he was trying to do, namely, to gain his freedom because he wanted it. I have argued that this is what people mean when they say someone is an intelligent being. There will be much debate about whether this type of machine is possible. Even if it is not, we have to address the values underlying the conditions for understanding so that we can see what we even mean in saying this. There is, however, a mutually dependent relationship between desire and language in our understanding. Without desire, we cannot access the higher levels of understanding. Without language, we cannot normally express desire, which makes desire hard to recognize. This means we actually use both when we test for intelligence, but it is important to keep the two logically distinct. Desire does much of what language cannot do, and so it accounts for how our understanding works.
Works Cited
- Asimov, Isaac. “Bicentennial Man.” Machines That Think. Ed. Isaac Asimov, Patricia S. Warrick, and Martin H. Greenberg. New York: Holt, Rinehart and Winston, 1984. 519-561.
- Turing, Alan. “Computing Machinery and Intelligence.” Mind 59.236 (1950): 433-460.