
AI Philosophy: Computers and Their Limits




  1. AI Philosophy:Computers and Their Limits G51IAI – Introduction to AI Andrew Parkes http://www.cs.nott.ac.uk/~ajp/

  2. Natural Questions • Can a computer only have a limited intelligence? or maybe none at all? • Are there any limits to what computers can do? • What is a “computer” anyway?

  3. Turing Test • The test is conducted with two people and a machine. • One person plays the role of an interrogator and is in a separate room from the machine and the other person. • The interrogator only knows the person and machine as A and B. The interrogator does not know which is the person and which is the machine. • Using a teletype, the interrogator can ask A and B any question he/she wishes. The aim of the interrogator is to determine which is the person and which is the machine. • The aim of the machine is to fool the interrogator into thinking that it is a person. • If the machine succeeds then we can conclude that machines can think.

  4. Turing Test: Modern • You’re on the internet and open a chat line (modern teletype) to two others “A” and “B” • Out of A and B • one is a person • one is a machine trying to imitate a person (e.g. capable of discussing the X-factor?) • If you can’t tell the difference then the machine must be intelligent • Or at least act intelligent?

  5. Turing Test • Often “forget” the second person • Informally, the test is whether the “machine” behaves as if it is intelligent • This is a test of behaviour • It does not ask “does the machine really think?”

  6. Turing Test Objections • It is too culturally specific? • If B had never heard of “The X-Factor” then does it preclude intelligence? • What if B only speaks Romanian? • Think about this issue! • It tests only behaviour not real intelligence?

  7. Chinese Room • The system comprises: • a human, who only understands English • a rule book, written in English • two stacks of paper. • One stack of paper is blank. • The other has indecipherable symbols written on it. • In computing terms • the human is the CPU • the rule book is the program • the two stacks of paper are storage devices. • The system is housed in a room that is totally sealed with the exception of a small opening.

  8. Chinese Room: Process • The human sits inside the room waiting for pieces of paper to be pushed through the opening. • The pieces of paper have indecipherable symbols written upon them. • The human has the task of matching the symbols from the "outside" with the rule book. • Once the symbol has been found the instructions in the rule book are followed. • may involve writing new symbols on blank pieces of paper, • or looking up symbols in the stack of supplied symbols. • Eventually, the human will write some symbols onto one of the blank pieces of paper and pass these out through the opening.

  9. Chinese Room: Summary • A simple rule-processing system, but one in which the “rule processor” happens to be intelligent yet has no understanding of the rules • The set of rules might be very large • But this is philosophy, so ignore the practical issues

  10. Searle’s Claim • We have a system that is capable of passing the Turing Test and is therefore intelligent according to Turing. • But the system does not understand Chinese as it just comprises a rule book and stacks of paper which do not understand Chinese. • Therefore, running the right program does not necessarily generate understanding.

  11. Replies to Searle • The Systems Reply • The Robot Reply • The Brain Simulator Reply

  12. Blame the System! • The Systems Reply states that the system as a whole understands. • Searle responds that the system could be internalised into a brain and yet the person would still claim not to understand Chinese.

  13. “Make Data”? • The Robot Reply argues we could internalise everything inside a robot (android) so that it appears like a human. • Searle argues that nothing has been achieved by adding motors and perceptual capabilities.

  14. Brain-in-a-Vat • The Brain Simulator Reply argues we could write a program that simulates the brain (neurons firing etc.) • Searle argues we could emulate the brain using a series of water pipes and valves. Can we now argue that the water pipes understand? He claims not.

  15. AI Terminology • “Weak AI” • machine can possibly act intelligently • “Strong AI” • machines can actually think intelligently • AIMA: “Most AI researchers take the weak hypothesis for granted, and don’t care about the strong AI hypothesis” (Chap. 26. p. 947) • What is your opinion?

  16. What is a computer? • In discussions of “Can a computer be intelligent?” • Do we need to specify the “type” of the computer? • Does the architecture matter? • Matters in practice: need a fast machine, lots of memory, etc • But does it matter “in theory”?

  17. Turing Machine • A very simple computing device • storage: a tape on which one can read/write symbols from a list • processing: a “finite state automaton”

  18. Turing Machine: Storage • Storage: a tape on which one can read/write symbols from some fixed alphabet • tape is of unbounded length • you never run out of tape • have the options to • move to next “cell” of the tape • read/write a symbol

  19. Turing Machine: Processing • “finite state automaton” • The processor has a fixed finite number of internal states • there are “transition rules” that take the current symbol from the tape and tell it • what to write • whether to move the head left or right • which state to go to next
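The storage and processing described in slides 17–19 can be sketched in a few lines of Python. This is a minimal illustrative sketch, not from the lecture: the function name, the dict-as-tape representation, and the example "zero out the input" machine are all assumptions chosen for clarity.

```python
# A minimal Turing machine: a tape (a dict, unbounded in both directions)
# plus a finite-state controller whose transition rules map
# (state, symbol) -> (symbol to write, head move, next state).

BLANK = '_'

def run_turing_machine(rules, tape_str, start_state, halt_state):
    tape = {i: s for i, s in enumerate(tape_str)}    # storage
    head, state = 0, start_state                     # processing
    while state != halt_state:
        symbol = tape.get(head, BLANK)               # read current cell
        write, move, state = rules[(state, symbol)]  # apply transition rule
        tape[head] = write                           # write
        head += 1 if move == 'R' else -1             # move head
    lo, hi = min(tape), max(tape)
    return ''.join(tape.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

# Example machine (illustrative): scan right, flipping every 1 to 0,
# and halt on reaching blank tape.
rules = {
    ('scan', '1'): ('0', 'R', 'scan'),
    ('scan', '0'): ('0', 'R', 'scan'),
    ('scan', BLANK): (BLANK, 'R', 'done'),
}
```

Running it on the input `'1101'` leaves `'0000'` on the tape, matching the slide's picture of read/write/move steps driven entirely by the transition table.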

  20. Turing Machine Equivalences • The set of tape symbols does not matter! • If you have a Turing machine that uses one alphabet, then you can convert it to use another alphabet by changing the FSA properly • Might as well just use binary 0,1 for the tape alphabet

  21. Universal Turing Machine • This is a fixed machine that can simulate any other Turing machine • the “program” for the other TM is written on the tape • the UTM then reads the program and executes it • Cf. on any computer we can write a “DOS emulator” and so read a program from a “.exe” file

  22. Church-Turing Hypothesis “All methods of computing can be performed on a Universal Turing Machine (UTM)” • Many “computers” are equivalent to a UTM and hence all equivalent to each other • Based on the observation that • when someone comes up with a new method of computing • then it always has turned out that a UTM can simulate it, • and so it is no more powerful than a UTM

  23. Church-Turing Hypothesis • If you run an algorithm on one computer then you can get it to work on any other • as long as they have enough time and space, computers can all emulate each other • an operating system of 2070 will still be able to run a 1980s .exe file • Implies that abstract philosophical discussions of AI can ignore the actual hardware? • or maybe not? (see the Penrose argument later!)

  24. Does a Computer have any known limits? • Would like to answer: “Does a computer have any limit on intelligence?” • Simpler to answer: “Does a computer have any limits on what it can compute?” • e.g. ask whether certain classes of program can exist in principle • best-known example uses program termination:

  25. Program Termination • Prog 1: i=2 ; while ( i >= 0 ) { i++; } • Prog 2: i=2 ; while ( i <= 10 ) { i++; } • Prog 1 never halts(?) • Prog 2 halts
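The two slide programs transcribe directly into Python. Prog 1 is defined but deliberately never called: with i starting at 2 and only ever increasing, the condition i >= 0 stays true forever (at least mathematically; a fixed-width integer in C would eventually overflow).

```python
def prog1():          # never halts: i only grows, so i >= 0 is always true
    i = 2
    while i >= 0:
        i += 1

def prog2():          # halts: the loop exits once i exceeds 10
    i = 2
    while i <= 10:
        i += 1
    return i          # final value of i is 11
```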

  26. Program Termination • Determining program termination • Decide whether or not a program – with some given input – will eventually stop • would seem to need intelligence? • would exhibit intelligence?

  27. Halting Problem • SPECIFICATION: HALT-CHECKER • INPUT: 1) the code for a program P 2) an input I • OUTPUT: determine whether or not P halts eventually when given input I return true if “P halts on I”, false if it never halts • HALT-CHECKER itself must always halt eventually • i.e. it must always be able to answer true/false to “P halts on I”

  28. Halting Problem • SPECIFICATION: HALT-CHECKER • INPUT: the code for a program P, and an input I • OUTPUT: true if “P halts on I”, false otherwise • HALT-CHECKER could merely “run” P on I? • If “P halts on I” then eventually it will return true; but what if “P loops on I”? • BUT cannot wait forever to say it fails to halt! • Maybe we can detect all the loop states?
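Slide 28's idea of "merely running P on I" can be sketched to show exactly where it breaks down. This is an illustrative sketch, not from the slides: the function names, the generator convention (one yield per "step"), and the step budget are all assumptions. A bounded simulation can confirm halting, but when the budget runs out it cannot distinguish "loops forever" from "needs more time", so it cannot honestly answer false.

```python
def bounded_halt_check(program, arg, max_steps=10_000):
    # program is written as a generator; each yield counts as one step.
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True       # P definitely halts on this input
    return None               # budget exhausted: no verdict, NOT "false"

def halting_program(n):       # the slides' Prog 2, as a generator
    while n <= 10:
        n += 1
        yield

def looping_program(n):       # the slides' Prog 1: loops forever
    while True:
        yield
```

The checker answers True for the halting program but only None for the looping one: it can never rule out that one more step would have finished, which is why "just run it" fails the specification's requirement to always answer true or false.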

  29. Halting Problem • TURING RESULT: HALT-CHECKER (HC) cannot be programmed on a standard computer (Turing Machine) • it is “noncomputable” • Proof: Create a program by “feeding HALT-CHECKER to itself” and deriving a contradiction (you do not need to know the proof) • IMPACT: A solid mathematical result that a certain kind of program cannot exist
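The "feeding HALT-CHECKER to itself" idea can be sketched in Python (names are illustrative; this shows the proof idea, not a real checker). Given any candidate checker, we build a program that does the opposite of whatever the checker predicts about it, so no candidate can be right on every input.

```python
def make_troublemaker(halt_checker):
    # halt_checker(p, x) is any claimed implementation of HALT-CHECKER:
    # it must return True if p halts on x, False otherwise.
    def troublemaker(x):
        if halt_checker(troublemaker, x):
            while True:       # checker said "halts" - so loop forever
                pass
        return 0              # checker said "loops" - so halt at once
    return troublemaker
```

For example, a candidate that always answers False predicts that troublemaker loops, yet troublemaker then promptly halts, contradicting the prediction; a candidate answering True is refuted the other way. Since every candidate is wrong on some input, HALT-CHECKER cannot exist.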

  30. Other Limits? • The “Physical Symbol System Hypothesis” is basically • “a symbol-pushing system can be intelligent” • For the “symbol manipulation” let’s consider a “formal system”:

  31. “Formal System” Consists of • Axioms • statements taken as true within the system • Inference rules • rules used to derive new statements from the axioms and from other derived statements Classic Example: • Axioms: • All men are mortal • Socrates is a man • Inference Rule: “if something holds ‘for all X’ then it holds for any one X” • Derive • Socrates is mortal
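The Socrates example can be mechanised as a toy formal system. This encoding (the tuple representation and rule loop) is an illustrative assumption, not from the slides; it shows that deriving the conclusion is pure symbol pushing, with no understanding required.

```python
# Axioms of the classic example, encoded as tuples.
axioms = [
    ('forall', 'man', 'mortal'),   # all men are mortal
    ('is', 'Socrates', 'man'),     # Socrates is a man
]

def derive(statements):
    # Apply the single inference rule ("for all X" statements apply to
    # any named individual) until no new statements appear.
    derived = set(statements)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            if s[0] == 'forall':
                _, cls, prop = s
                for t in list(derived):
                    if t[0] == 'is' and t[2] == cls:
                        new = ('is', t[1], prop)
                        if new not in derived:
                            derived.add(new)
                            changed = True
    return derived
```

Running `derive(axioms)` adds `('is', 'Socrates', 'mortal')`: the system "turns the handle" on the axioms exactly as Russell and Whitehead hoped all of mathematics could.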

  32. Limits of Formal Systems • Systems can do logic • They have the potential to act (be?) intelligent • What can we do with “formal systems”?

  33. “Theorem Proving” • Bertrand Russell & Alfred Whitehead • Principia Mathematica 1910-13 • Attempts to derive all mathematical truths from axioms and inference rules • Presumption was that • all mathematics is just • set up the reasoning • then “turn the handle” • Presumption was destroyed by Gödel:

  34. Kurt Gödel • Logician, 1906-1978 • 1931, Incompleteness results • 1940s, “invented time travel” • demonstrated the existence of “rotating universes”: solutions to Einstein's general relativity containing paths for which • “on doing the loop you arrive back before you left” • Died of malnutrition

  35. Gödel's Theorem (1931) Applies to systems that are: • formal: • proof is by means of axioms and inference rules following some mechanical set of rules • no external “magic” • “consistent” • there is no statement X for which we can prove both X and “not X” • powerful enough to at least do arithmetic • the system has to be able to reason about the natural numbers 0,1,2,…

  36. Gödel's Theorem (1931) In any consistent formal system that includes arithmetic: “There are statements that are true but that the formal system cannot prove” • Note: it states the proof does not exist, not merely that we cannot find one • Very few people understand this theorem properly • I’m not one of them! • I don’t expect you to understand it either! … • just be aware of its existence as a known limit of what one can do with one kind of “symbol manipulation”

  37. Lucas/Penrose Claims • Book: “The Emperor's New Mind”, 1989, Roger Penrose, Oxford Professor of Mathematics • (Similar arguments also came from Lucas, 1960s) • Inspired by Gödel’s Theorem: • We can create a statement that we can see is true of a system, but that cannot be shown to be true within the system • Claim: we are able to show something that is true but that a Turing Machine would not be able to show • Claim: this demonstrates that the human is doing something a computer can never do • Generated a lot of controversy!!

  38. Penrose Argument • Based on the logic of Gödel’s Theorem • That there are things humans do that a computer cannot do • That humans do this because of physical processes within the brain that are noncomputable, i.e. that cannot be simulated by a computer • compare to “brain in a vat” !? • Hypothesis: quantum mechanical processes are responsible for the intelligence • Many (most?) believe that this argument is wrong

  39. Penrose Argument • Some physical processes within the brain are noncomputable, i.e. cannot be simulated by a computer (UTM) • These processes contribute to our intelligence • Hypothesis: quantum mechanical and quantum gravity processes are responsible for the intelligence (!!) • (Many believe that this argument is wrong)

  40. One Reply to Penrose • Humans are not consistent and so Gödel's theorem does not apply • Penrose Response: • In the end, people are consistent • E.g. one mathematician might make mistakes, but in the end the mathematical community is consistent and so the theorem applies

  41. Summary • Church-Turing Hypothesis • all known computers are equivalent in power • a simple Turing Machine can run anything we can program • Physical Symbol System Hypothesis • intelligence is just symbol pushing • There are known limits on “symbol-pushing” computers • halting problem, Gödel’s theorem • Penrose-Lucas: we can do things symbol-pushing computers can’t • Some “Turing Tests” will be failed by a computer • Some tasks cannot be performed by a “Chinese room” • but the argument is generally held to be in error

  42. Questions?
