
Philosophical Foundations


Presentation Transcript


  1. Philosophical Foundations Chapter 26

  2. Searle v. Dreyfus argument • Dreyfus argues that computers will never be able to simulate intelligence • Searle, on the other hand, allows that computers may someday be able to pass the Turing test [i.e., simulate human intelligence] • However, does this mean that computers are intelligent? Is simulation duplication? • Searle’s argument is an attack on the Turing test • It is directed against strong AI

  3. Simulation v. duplication • No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis • No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched

  4. Motivation • The Turing test is an inadequate determination of intelligence [it deals only with simulation] • The Turing test is an example of behaviorism • ‘states’ are defined by how they make people act • happiness is acting happy • love is acting in a loving manner • intelligence is acting in an intelligent manner • to understand the word ‘knife’ means to be able to use it • But behaviorism is inadequate • since love & happiness are more than simply the way in which a person acts, so too must intelligence be more than intelligent behavior • to be x, a person must be in the correct ‘state’

  5. Is behaviorism plausible? • Dualism is the belief that there are two substances that make up human beings: minds & bodies • These two substances are absolutely different & incompatible • Thus, to understand the mind we need not concern ourselves with the body • the mind can be abstracted from its ‘implementation’ in the brain [behaviorism] • Does AI thus subscribe to dualism? • Dualism is rejected by most philosophers

  6. Alternative to dualism • Biological naturalism says that consciousness, intentionality, etc. are caused/produced by the brain in the same way that bile is produced by the liver • there thus aren’t two substances • rather, the so-called mental phenomena are simply results of physical processes • realism? • There is something essentially biological about the human mind

  7. Argument • To show that behaviorism is an inadequate account of understanding/consciousness, Searle designed a famous thought experiment in which he is locked in a room • Under the door are slipped various Chinese characters which he does not understand • In the room with him is a rule set (in English) that tells him how to manipulate the characters that come under the door and what characters to slip back under the door, and a pad of paper for making intermediate calculations

  8. Argument continued • The Chinese characters slipped under the door are called ‘stories’ and ‘questions’ by the people providing them • The characters that Searle returns to the outside world are called ‘answers’ • The answers perfectly answer the questions about the stories that he was given • To an outside observer, it appears that Searle understands Chinese!
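
For concreteness, the room’s procedure can be sketched in a few lines of Python. The rule entries below are invented for illustration; a real rule book would be astronomically larger, but the character of each step is the same: uninterpreted symbols in, uninterpreted symbols out.

  RULE_BOOK = {
      # "if these characters come under the door, slip these back out"
      "你吃饭了吗？": "吃了，谢谢。",
      "故事里的人是谁？": "一个农夫。",
  }

  def manipulate(symbols_in: str) -> str:
      # Pure syntax: match the shape of the input, emit the paired output.
      # Nothing in this function consults what any character means.
      return RULE_BOOK.get(symbols_in, "？")

  print(manipulate("你吃饭了吗？"))  # looks like understanding; is only lookup

From outside, the lookup is indistinguishable from comprehension, which is exactly the situation the next slide exploits.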

  9. Argument concluded • However, it is manifest [given] that Searle doesn’t understand the stories, the questions or the answers he is giving • he doesn’t understand Chinese! • Thus, since intelligence requires a ‘state of understanding’ (the story must mean something to you), Searle can’t be said to understand Chinese although he gives the correct answers • correct input/output, but no understanding

  10. Conclusions • Similarly, just because a computer can produce the correct answers doesn’t mean that it is intelligent • Merely manipulating meaningless symbols is inadequate for intelligence; a ‘state of intelligence’ (intentionality) is also needed • what does it mean when I say x is intelligent? • problems with the behaviorist definition • Thus, a computer can pass the Turing test and still not be said to be intelligent

  11. Abstracting the argument [givens] • Brains cause minds [empirical fact] • Syntax [formalism] is not sufficient for semantics [contents] • syntax & semantics are qualitatively different aspects & no quantitative increase of the former will ever produce the latter • Computer programs are entirely defined by their formal, or syntactical, structure [definition] • the symbols have no meaning; they have no semantic content; they are not about anything • Minds have mental contents; specifically, they have semantic contents [empirical fact]

  12. Conclusions I • No program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds • The way that brain functions cause minds cannot be solely in virtue of running a computer program

  13. Conclusions II • Anything else that caused minds would have to have causal powers at least equivalent to those of the brain • For any artifact that we might build which had mental states equivalent to human mental states, the implementation of a computer program would not by itself be sufficient. Rather the artifact would have to have powers equivalent to the powers of the human brain

  14. The expected conclusion • The brain has the causal power to give rise to intentional [semantic] states • Computer programs can’t give rise to intentional [semantic] states since they’re only syntax • Thus, computer programs are not of the same causal power as brains • Thus, computer programs can’t give rise to the mind & consciousness
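
Slides 11–14 can be regimented as a short derivation; the predicate letters below are our own shorthand, not Searle’s notation:

  \begin{align*}
  &\text{A1: } \forall x\,(\mathrm{Prog}(x) \rightarrow \mathrm{SyntaxOnly}(x)) && \text{programs are purely formal}\\
  &\text{A2: } \forall x\,(\mathrm{SyntaxOnly}(x) \rightarrow \neg\mathrm{Sem}(x)) && \text{syntax does not yield semantics}\\
  &\text{A3: } \forall x\,(\mathrm{Mind}(x) \rightarrow \mathrm{Sem}(x)) && \text{minds have semantic contents}\\
  &\text{C: } \forall x\,(\mathrm{Prog}(x) \rightarrow \neg\mathrm{Mind}(x)) && \text{from A1, A2, and the contrapositive of A3}
  \end{align*}

The inference itself is valid, so the standard replies that follow must deny a premise, usually A2.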

  15. Objections • Systems reply • Russell & Norvig • Robot reply • Brain simulation reply • Other minds reply

  16. Systems reply • Objection: Perhaps not the man in the room, nor the rules in English, nor the scratch paper understand anything, but the system taken as a whole can be said to understand • Answer: Put the room within a single person • make the person memorize the rules, etc. • thus, there is no system over and above the person • the person still can’t be said to understand • syntactic information processing sub-systems can’t give rise to semantic content [can’t be called intelligent]

  17. Information processing • Further, it seems that if all we are requiring for intelligence is information processing, then everything can be seen as doing information processing • But this leads to absurd consequences • we don’t want to say that the stomach or a thunderstorm is intelligent • the stomach takes in something [food], processes it [digests it], and puts something out [energy] • but if our definition of intelligence is that it is ‘information processing’, why isn’t the stomach intelligent?
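
The reductio can be made vivid in code; the functions below are toy stand-ins, not physiology or meteorology:

  def stomach(food_kcal: float) -> float:
      # Input -> processing -> output: formally, digestion 'computes'.
      return food_kcal * 0.9  # illustrative efficiency, not a real figure

  def thunderstorm(humidity: float, pressure_hpa: float) -> float:
      # Any physical process can be redescribed as a function of its inputs.
      return humidity / pressure_hpa  # toy 'rainfall' output

  # If 'computes a function from inputs to outputs' were the whole test
  # for intelligence, both of these would pass it.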

  18. Russell & Norvig • Certain kinds of objects are incapable of conscious understanding (of Chinese) • The human, paper, and rule book are of this kind • If each of a set of objects is incapable of conscious understanding, then any system constructed from the objects is incapable of conscious understanding • Therefore, there is no conscious understanding in the Chinese room [as a whole] • But molecules, which make up brains, have no understanding either; by the same reasoning a brain could never understand, so the third premise must be rejected [cf. Brain simulation reply]

  19. Robot reply • Objection: If a robot were perceiving & acting in the world, then it would be intelligent • intentionality arises from being in a world • Answer: Put the Chinese room in the robot’s head • feed the robot’s perceptions into the Chinese room as Chinese characters & send the directions to the robot out as Chinese characters • we are in the same position as before: no intentionality, because everything is still happening formally

  20. Brain simulation reply • Objection: Simulate the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives replies to them • Answer: This simulates the wrong things about the brain • As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won’t have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states
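
What “simulating only the formal structure” amounts to can itself be sketched; the topology, weights, and update rule below are invented placeholders, not a real connectome:

  import numpy as np

  rng = np.random.default_rng(0)
  weights = rng.normal(size=(100, 100))  # stand-in synaptic weights
  fired = rng.random(100) > 0.5          # which neurons fired at t = 0

  for _ in range(10):
      # Threshold rule: a neuron fires iff its weighted input is positive.
      fired = weights @ fired.astype(float) > 0.0

  # Searle's reply, in these terms: the loop reproduces a firing sequence
  # (syntax) while leaving out whatever causal powers of actual neural
  # tissue produce intentional states.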

  21. Other minds reply • Objection: How do we know someone understands Chinese? --Only by their behavior • Answer: The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them • The thrust of the argument is that it couldn’t be just computational processes and their output because the computational processes and their output can exist without the cognitive state

  22. Minds & machines • Machines can think; we are just machines! • However, computational processes over formally defined elements are insufficient • i.e., a computer program is insufficient • formal elements can’t give rise to intentional states • they can only give rise to the next state in the computational device • only syntax, no semantics • interpretation is in the eyes of the beholder

  23. Meaning of the Chinese Room • Point of the Chinese room example: adding a formal system doesn’t suddenly make the man understand Chinese • The formal system doesn’t endow the man with intentionality vis-à-vis Chinese • Why would we expect it to endow a computer with intentionality? • E.g., computers don’t know that ‘4’ means 4 • Only more & more symbols; the process never grounds out in meaning
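
The ‘4’ point in miniature (plain Python, nothing hypothetical here):

  print(ord("4"))      # 52 -- the character '4' is just code point 52
  print(bin(52))       # '0b110100' -- which is just this bit pattern
  print(int("4") + 1)  # 5 -- arithmetic is rule-governed symbol shuffling

  # At no level does the machine 'ground out' on the number four itself;
  # each representation is defined only by its relations to other symbols.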

  24. Conclusion • We (I) don’t understand what “consciousness” or “self-awareness” is • If it flies like a duck, swims like a duck, walks like a duck, and quacks like a duck…?
