
Commentary on Searle


Presentation Transcript


  1. Commentary on Searle. Presented by Tim Hamilton.

  2. Robert P. Abelson, Dept. of Psychology, Yale • The act of writing the rules for symbol manipulation is itself a feat worth praising. • Our own learning ability comes from processing rules (addition, money, etc.), and it is assumed that as more rules are learned, our understanding increases. • According to Searle’s argument, someone does not really learn something without doing it personally.
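A minimal, hypothetical sketch (not from the presentation) of what "rules for symbol manipulation" can look like in practice: a lookup table that maps input symbol strings to output symbol strings. Nothing in the lookup understands Chinese; the fluent-looking reply comes from matching shapes against rules someone else wrote, which is why Abelson credits the rule-writer. The rule entries below are purely illustrative.

  # Hypothetical rule table in the spirit of the Chinese Room; entries are illustrative only.
  RULES = {
      "你好吗": "我很好",          # "How are you?" -> "I am fine"
      "你叫什么名字": "我叫小明",   # "What is your name?" -> "My name is Xiaoming"
  }

  def manipulate(symbols: str) -> str:
      """Apply the rulebook by shape-matching alone; no meaning is consulted."""
      return RULES.get(symbols, "对不起")  # fallback reply: "Sorry"

  print(manipulate("你好吗"))  # fluent-looking output, with no understanding in the lookup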

  3. Abelson, cont. • It is very common for humans to produce linguistic interchange in areas where we have no idea what we are talking about. Should we give a computer some credit when it performs as well as we do? • Programs lacking sensorimotor input may miss things, but why is intentionality so important?

  4. Abelson, cont. • Abelson says “Intentionality for knowledge is the appreciation of the conditions for its falsification.” • Psychologists cannot even answer the question of how we determine what to do when beliefs and facts do not agree; how can we expect a computer to do the same? • Conclusion: AI is too young a discipline for objections based on a lack of intentionality to be convincing.

  5. Ned Block, Dept. of Linguistics and Philosophy, MIT • Searle’s arguments depend on intuition, and in the presence of enough evidence intuition must be ignored. • Examples: The earth moves through space at over 100,000 kph. A grapefruit-sized chunk of grey organic matter is the seat of mentality. • Searle’s arguments against AI are really arguments against the view of cognition as formal symbol manipulation. Before we can reject this view, we must be presented with the evidence for it, in order to decide whether our intuition is valid.

  6. Block, cont. • A machine that manipulates descriptions of understanding does not itself understand. This still does not harm the theory of understanding as symbol manipulation. • Cognitive psychology tries to decompose all mental functions into symbol manipulation processes, broken down to the point where each internal “primitive” function is simply “a matter of hardware.”

  7. Block, cont. • Instead of one man trapped in a room manipulating Chinese symbols, what if there were an army, each member performing a single primitive operation and able to communicate with the others: the “cognitive homunculi head.” Is this network thinking? • The molecules in our bodies are slowly exchanged with our environment over time; what if we lived in a place where molecules were really tiny vehicles inhabited by beings smaller than sub-atomic particles? Would this affect our ability to think and understand? Would we now lack intentionality?

  8. Block, cont. • Intuitions about mentality are influenced by what we believe, so Searle needs to show that his intuition that the cognitive homunculi head lacks intentionality is not due to beliefs against symbol manipulation as cogitation. • The source of the intuition is an important component of a proper argument.

  9. Daniel Dennett, Center for Advanced Study in the Behavioral Sciences, Stanford • The AI of the time is “bedridden,” with its only mode of perception and action being linguistic. The AI community is aware of the shortcomings of this model. • Searle’s rebuttal to the “systems reply” is that a person (a whole system) understands language while the portion of their brain which processes language does not.

  10. Dennett, cont. • Searle’s example person who internalizes an entire symbol manipulation system would eventually learn and understand Chinese simply by noticing how their own actions vary with different Chinese inputs. • Searle’s insistence on the presence of intentionality raises two questions: What does the brain produce? What is the brain for?

  11. Dennett, cont. • Searle says the brain produces intentionality, while AI and others would say that the brain produces control. • Searle admits that a machine could produce control without intentionality, so what then is the use of intentionality if our exact actions can be produced without it?

  12. Roger C. Schank, Dept. of Computer Science, Yale • Agrees with Searle that the programs he has written do not think. • Disagrees that computer programs will never understand and can never explain human abilities. • Some theories employed in AI (e.g., scripts), when later tested on human subjects, have been shown to be accurate descriptions of human abilities.
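A toy, hypothetical sketch (not Schank's actual code) of the script idea mentioned above: a restaurant script represented as an ordered list of expected events, used to fill in events a story leaves unstated, which is the kind of inference script theory predicts humans also make. The event names are illustrative only.

  # Hypothetical restaurant script; event names are illustrative, not from Schank's work.
  RESTAURANT_SCRIPT = ["enter", "sit down", "order", "eat", "pay", "leave"]

  def infer_unstated_events(mentioned):
      """Return script events the story implies but never mentions, in script order."""
      return [event for event in RESTAURANT_SCRIPT if event not in mentioned]

  story = ["enter", "order", "leave"]   # "He went into the restaurant, ordered, and left."
  print(infer_unstated_events(story))   # ['sit down', 'eat', 'pay'] -- filled in by the script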

  13. Schank, cont. • Complex theories of understanding must be explained by computer programs, not in English. • Can a model of understanding tell us anything about understanding itself? This is relevant to both AI and psychology. • Schank argues that it is just as impossible for biology to explain what starts life as it is for psychology to explain what causes understanding.

  14. Schank, cont. • A model (robot) could be built that functions exactly as if it were alive. Is it? • Are programs that function as if they understand, understanding? • Schank himself says no, but then asks “Does the brain understand?” Humans themselves understand, but do the biochemical processes within their grey matter understand?

  15. Schank, cont. • Schank agrees that something that simply uses rules does not understand. • The hardware implementation, whether biological or mechanical, does not understand. • It is the person who writes the rules, the AI researcher, who understands. This suggests that perhaps there is some sort of passing-on of understanding taking place.
