Revisited is a series of blog posts in which I share words I have previously written for university, school or old blogs. View all Revisited posts here. Revisited 02 – written for: Grand Challenges of Artificial Intelligence (University of Aberdeen), December 2007.
–
In order to answer this question, let’s start by formulating definitions of both “machine” and “know”. I will begin with a simple dictionary definition of a machine:
“any mechanical or electrical device that transmits or modifies energy to perform or aid in the performance of human tasks”[1].
I will further narrow my discussion to artificial machines such as computers and robots. I have two reasons for this very specific starting point: firstly, any definition of a machine is somewhat arbitrary, and there is no single agreed, definitive one. Secondly, we need a starting point, since the “know” part of the question and our arguments around it may lead us to change or refine the “machine” definition. Having a starting point gives us common ground to argue from and aids progress towards a justified answer.
The word knowing feels intuitively simple to define but, although I believe that I know what knowing is, I realise that I cannot easily define it. Therein lies the problem: how do I know that a machine knows if I can’t define “know”? If knowing something means having knowledge, then I have to define knowledge. We can start with the classic definition of knowledge as “justified, true belief”. This definition, however, will not bring us much closer to determining whether a machine can know. We need to determine whether the machine believes, since if it doesn’t believe then it can’t hold a belief. Let us instead investigate the ways of arriving at a belief or, in other words, the ways of knowing. From this angle we can try to bridge from “how to know” to the population of knowers, and then ask whether a machine could belong to that population. The main ways of knowing are reason, emotion, perception and language. By looking closer at these, we may have some chance of reaching a conclusion to our question.
When we use reason, we can reach a belief either inductively or deductively. If I say I know that Thomas is a mammal, I should be able to justify my answer. Using deductive logic, I could take two statements, “All humans are mammals” and “Thomas is a human”, and conclude that Thomas is a mammal. Using inductive reason and perception, I could say that I have seen many humans and many mammals before, so I would guess Thomas is a mammal. Perception is largely based on our past experiences and on information and knowledge that we have gathered previously and believe to be correct. Whether the reasoning leads to a true belief depends on whether the inputs to the reasoning process were themselves true and justified. Reasoning is an ongoing, complex process which does not happen in isolation from language, emotions and feelings; the three are highly intertwined. I have feelings, but if I express them in a language unfamiliar to the listener then they are meaningless to that person. If I listen to a topic that I feel strongly about, I cannot get emotional about the subject if I do not understand the language. Clearly, then, language is significant in knowing something.
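To make the mechanical character of deduction concrete, the Thomas example above can be written as a few lines of code. The following is only a toy sketch; the fact and rule representation is my own invention for illustration, not taken from any particular AI system:

```python
# A toy forward-chaining deducer: beliefs are derived by mechanically
# applying "every A is a B" rules to stored facts, and nothing more.

facts = {("Thomas", "human")}    # the fact "Thomas is a human"
rules = [("human", "mammal")]    # the rule "all humans are mammals"

derived = True
while derived:                   # keep applying rules until nothing new appears
    derived = False
    for subject, category in list(facts):
        for premise, conclusion in rules:
            if category == premise and (subject, conclusion) not in facts:
                facts.add((subject, conclusion))
                derived = True

print(("Thomas", "mammal") in facts)  # True, reached purely by rule-following
```

The program arrives at “Thomas is a mammal” without anything we would naturally call understanding; it simply shuffles symbols according to rules, which is exactly the point at issue.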
When relating knowledge to a computer-based machine, we can certainly say that its ways of arriving at output, or generating its knowledge, fit most of the above categories, and hence it could possibly belong to the population of knowers. Take for example robot X, which has been equipped with some sensors (such as light and heat sensors), a complicated mathematical program and a voice recognition program. Give the creator of this robot some time to calibrate the machine, and soon there will be a lifelike figure, close to human, able to answer questions, react to changes in its surroundings and perform various other tasks. The robot does all this through logic, perception and language. Emotion in this case would be very difficult to implement, since emotion is a feeling of love, hate, disgust, fear and so on. However, according to functionalism, mental states are functional states. Computers are themselves simply machines that implement functions, and a functionalist would argue that mental states are like the software states of a computer. On this theory, it is possible that a machine running the right kind of computer program could have mental states: it could have beliefs, hopes and pains. But to what extent would the robot know something? It may be programmed to mimic the reaction to certain feelings, for example sounding a buzzer if it’s too hot, but this doesn’t mean that the machine feels the pain of burning heat. The robot, like many other artificially intelligent machines, is programmed to follow a fixed set of codes or rules. Following a code is like the human reflex system: it happens nearly instantaneously and automatically. But how many codes can a robot or computer system follow, and is that enough to emulate a real human being? Even if you accept the functionalist argument, does it prove that a programmed mental state is equivalent to a brain with feelings? This is nearly impossible to tell directly, which is why the test Alan Turing proposed in 1950 is worth examining.
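The reflex-like quality of such code-following is easy to illustrate. Here is a minimal sketch, where read_temperature() and sound_buzzer() are invented stand-ins for a hypothetical robot’s sensor and actuator, not any real robot API:

```python
# A hypothetical reflex loop: the "buzzer if too hot" rule from the text.
# read_temperature() and sound_buzzer() are invented stand-ins for hardware.

TEMPERATURE_LIMIT_C = 50.0

def read_temperature() -> float:
    """Stand-in for a heat sensor; a real robot would query hardware here."""
    return 72.3

def sound_buzzer() -> None:
    """Stand-in for an actuator; a real robot would drive a buzzer here."""
    print("BZZZT!")

def reflex_step() -> None:
    # Pure stimulus-response: nothing here represents what heat feels like.
    if read_temperature() > TEMPERATURE_LIMIT_C:
        sound_buzzer()

reflex_step()  # prints "BZZZT!" because 72.3 > 50.0
```

The condition fires and the actuator responds, exactly as a reflex would; the display of pain-behaviour and the feeling of pain come apart completely here.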
The Turing test[2] is an investigation of how well a computer can achieve lifelike conversation. It involves a computer and a human judge. The judge may ask the computer questions or hold a normal conversation with it. The judge and the computer are in different rooms, so that bodily appearance has no influence. The aim is for the computer to trick the judge into thinking it is another human, the reasoning being that if we can’t tell the machine apart from us, then we could infer that it has a mind. But will the computer actually know what it is saying, or will it just follow the rules it has been set? We can say that the computer can use deductive logic to work out the right answer and respond with what it calculates to be correct. The computer can also use inductive logic to build up its database automatically. It can accumulate a linguistic capability, but this capacity on its own is no more than the regeneration of symbols. The machine is capable of response, but it does not necessarily follow that it has a mind and knows what its responses mean. The Turing test therefore does not give me evidence of the machine’s ability to know something.
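The “regeneration of symbols” point can be made vivid with a crude pattern-matching responder in the tradition of Weizenbaum’s ELIZA. This is only a sketch; the two rules and their phrasings are invented for illustration:

```python
# A crude pattern-matching responder: it regenerates symbols from canned
# rules with no model whatsoever of what the words mean.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bare you (.+)\?", re.IGNORECASE),
     "Would it matter to you if I were {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return "Tell me more."  # fallback when no rule fires

print(respond("I feel cold today"))   # Why do you feel cold today?
print(respond("Are you a machine?"))  # Would it matter to you if I were a machine?
```

With enough such rules the output can feel eerily conversational, yet the program manipulates character strings only; nothing in it corresponds to knowing what “cold” or “machine” means.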
Let’s investigate feelings further. The only emotions and feelings you can feel are your own. I know that feelings are being felt by me, so I know I have a mind. We can’t touch or see a mind, so I know it’s different from a brain. If I dissect a brain (since it’s the part of the body that must hold knowledge), I will never find a physical part that I can touch and call a feeling. But, going back to my earlier statement, I still know I have feelings, and I believe this is beyond dispute. This reasoning underlies my belief that there is a separation of mind and body. René Descartes (1596–1650)[3] held this view and believed that mental and physical states are separate from one another. He said that the mind and soul do not follow the laws of physics, yet he still maintained that mind and body can affect each other. Taking this a step further, I contend that even if an emotion can be described, and a physical body such as a machine can be programmed to display that emotion, it still doesn’t prove that the machine feels it. It would have to have a mind to feel, and we can’t see the mind. A robot could be programmed to jump for joy when it hears certain words, so it can indeed display joy, but where is the proof that it feels joy? To be fair, we also cannot see the human mind, and in reality we have no proof that a human feels joy rather than merely acting. The difference, though, is that acting out joy itself requires that we feel the need to pretend to be happy, whereas the robot is only reacting to programmed word signals.
To answer the question of whether a machine can know, I believe we always come back to the discussion of feelings and the mind. A human can draw on all the areas of knowing and all the ways of knowing. The main difference is that the machine cannot develop its own feelings and emotions in order to know something. It can certainly be programmed to mimic them, and it can use reason and perception, but only to a certain extent. If it is not interacting with real objects and new events in the world and making up its own mind, then its ability to develop its knowledge base is restricted.
To conclude, I believe that the physical state is separate from the mental state (though the two are closely interlinked). If you acknowledge this, then there is no basis for believing that a machine has feelings and a mind, and I therefore conclude that a machine cannot know. I realise, though, that to some extent I am relying on a belief that I cannot fully justify.
[1] “machine”, WordNet, Princeton University: http://wordnet.princeton.edu/perl/webwn?s=machine
[2] Alan Turing, Wikipedia: http://en.wikipedia.org/wiki/Alan_turing
[3] Lecture 21 notes by Frank Guerin, University of Aberdeen: http://www.csd.abdn.ac.uk/~fguerin/teaching/CS1013/lec21print.ppt