
Strong vs. Weak AI




  1. Strong vs. Weak AI ECE 847: Digital Image Processing Stan Birchfield, Clemson University

  2. The coming takeover Common theme:
  - Robots become intelligent
  - Robots become independent
  - Robots get out of control
  - Robots must be subdued
  Why all the fuss? Need we fear?

  3. The central question
  • This is not just for entertainment
  • It has far-reaching implications
  • Two camps:
  • strong AI: There is no fundamental difference between man and machine
  • weak AI: Only people can think; machines cannot
  • Central question: Is there a fundamental difference between man and machine, or is it only a difference in computing power?

  4. The question restated • Stated another way, • Can computers think? • What does it mean to think?

  5. In favor of “Strong AI”
  • Strong AI argument #1: Look at what machines can do
  clean (Roomba vacuum cleaner)
  play soccer (RoboCup)

  6. They play music
  trumpet (Toyota’s trumpet-playing robot, 2008)
  organ (Ichiro Kato’s WABOT II reads music and plays the organ, 1984)
  conductor (Honda’s Asimo robot conducts the Detroit Symphony Orchestra, 2008)
  flute (Atsuo Takanishi’s flute-playing robot)

  7. They even compose music MySong: http://research.microsoft.com/~dan/mysong/

  8. … and have emotions Rity, a “sobot” (software robot), from Kim Jong-Hwan at the Korea Advanced Institute of Science and Technology (KAIST)

  9. In favor of “Strong AI”
  • Strong AI argument #2: Look at what people said machines would never do
  Hubert Dreyfus, Berkeley philosopher: No computer will ever beat me at chess
  1967: Richard Greenblatt’s MacHack program beat him
  Then Dreyfus: Well, no computer will beat a nationally ranked player
  But it did
  Then Dreyfus: Well, no computer will beat a world champion
  1997: Deep Blue beat Garry Kasparov
  (Kasparov vs. Deep Blue)

  10. The clincher
  • Many people are fond of saying, “They will never make a machine to replace the human mind --- it does many things which no machine could ever do.”
  • J. von Neumann gave a talk at Princeton (1948)
  • Question from the audience: “But of course, a mere machine can't really think, can it?”
  • Answer: “You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!”
  [from E. T. Jaynes, Probability Theory: The Logic of Science]

  11. “Weak AI” responses
  • Hubert Dreyfus, Berkeley philosopher: Nonformal aspects of thinking cannot be reduced to mechanized processes
  • John Searle, Berkeley philosopher: Chinese room experiment – blindly translating is not the same as thinking
  • Thomas Ray, Oklahoma zoologist: The carbon medium and the silicon medium are fundamentally different
  • Roger Penrose, British physicist and mathematician: Consciousness arises from a mysterious force of quantum effects
  • David Chalmers, UC Santa Cruz philosopher: The basis of consciousness may be a mysterious new type of property that has not yet been observed
  One thing in common: All appeal to materialistic (even if mysterious) explanations

  12. An alternative view
  • Thesis:
  • “Strong AI” is fundamentally wrong
  • There is a fundamental difference
  • Machines can never be equivalent to humans in all respects
  • “Weak AI” arguments are – well – weak, because they all assume materialism in their foundation
  • The dilemma is solved by recognizing the role of the spirit (or soul)

  13. The “Strong AI” model
  [Diagram: Computer and Human side by side, each mapping input to output – the Computer a deterministic algorithm running on silicon, the Human a deterministic algorithm running on carbon]

  14. The proposed model
  [Diagram: as before, but the Human is a deterministic algorithm running on carbon plus an immaterial spirit; the Computer remains a deterministic algorithm running on silicon]

  15. Turing machine
  • Recall the Turing machine:
  • This simple abstract device (the universal Turing machine) can simulate the behavior of any digital computer – past, present, or future!
  Benjamin Schumacher, http://physics.kenyon.edu/coolphys/thrmcmp/newcomp.htm
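The idea above can be made concrete with a minimal Turing machine simulator. This is an illustrative sketch, not the lecture's own code: the machine, its state names, and the bit-flipping example are all invented here for demonstration.

```python
# Minimal Turing machine simulator driven by a transition table.
# The example machine flips every bit on the tape, then halts at the
# first blank. All names here are illustrative, not from the lecture.

def run_tm(tape, transitions, state="start", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        if head >= len(tape):
            tape.append(blank)          # extend the tape on demand
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# (state, read symbol) -> (next state, symbol to write, head move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("0110", flip_bits))  # -> 1001
```

A universal machine is the same construction one level up: the transition table itself is read from the tape, which is why one fixed device can simulate any other.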

  16. The proposed model
  [Diagram: the Computer is a deterministic algorithm running on silicon; the Human is a deterministic algorithm running on carbon plus a contingency mechanism and an immaterial decision maker]

  17. Let’s zoom in
  [Diagram: the Human’s contingency mechanism – a decision made by the immaterial spirit selects between physical outcomes 0 and 1]
  http://gs.fanshawec.ca/tlc/math270/images/2_7_Bi2.jpg

  18. How can this model be tested?
  [Diagram: a device mapping input string 0010001010101… to output string 010111010001…]
  • Kolmogorov complexity of output string s is the length of the shortest program that outputs s
  • Example: K(22/7) < K(π)
  • Define: The complexity of a device is the maximum Kolmogorov complexity that it can output when no input is given
  • For Turing machines, K(output) ≤ K(input) + C(device)
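Kolmogorov complexity is uncomputable, but compressed length gives a usable upper bound, and a sketch along those lines illustrates the K(22/7) < K(π) example. The strings and the choice of zlib below are assumptions made for illustration.

```python
# Compressed length as a rough upper bound on Kolmogorov complexity.
# A regular string (the repeating decimal expansion of 22/7) compresses
# far better than a structureless one, mirroring K(22/7) < K(pi).
import random
import zlib

regular = "142857" * 100  # 22/7 = 3.142857142857... repeats forever
random.seed(0)
noisy = "".join(random.choice("0123456789") for _ in range(600))

# Both strings are 600 characters, but the regular one needs far
# fewer bits once its structure is exploited.
print(len(zlib.compress(regular.encode())))  # short encoding
print(len(zlib.compress(noisy.encode())))    # much longer encoding
```

The same idea underlies the device-complexity definition on the slide: a device that reliably emits strings no compressor can shrink must itself contain that complexity.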

  19. Conservation of information
  device: lossless compression algorithm (e.g., LZW)
  number of bits(output) < number of bits(input)
  complexity(output) = complexity(input)
  Process is reversible
  (Note: This complexity is entropy, not Kolmogorov complexity)
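The reversibility claim is easy to check in code. The sketch below uses zlib's DEFLATE (standing in for LZW, which the standard library does not ship) on an invented sample string.

```python
# Lossless compression reduces the number of bits but preserves all of
# the information: decompression recovers the input exactly, so the
# process is reversible.
import zlib

original = b"the quick brown fox jumps over the lazy dog " * 50
packed = zlib.compress(original)

print(len(packed) < len(original))          # True: fewer bits
print(zlib.decompress(packed) == original)  # True: nothing lost
```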

  20. Conservation of information
  device: lossy compression algorithm (e.g., JPEG)
  number of bits(output) < number of bits(input)
  complexity(output) < complexity(input)
  Process is NOT reversible

  21. Conservation of information
  device: downsample
  number of bits(output) < number of bits(input)
  complexity(output) < complexity(input)
  Process is NOT reversible
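Irreversibility can be shown with a toy one-dimensional signal standing in for an image row; the signal and the factor of 4 are arbitrary choices for this sketch.

```python
# Downsampling discards samples outright; no algorithm can recover
# them, so the process is not reversible. Upsampling afterwards can
# only guess at the missing detail.
signal = list(range(16))       # stand-in for one row of an image
small = signal[::4]            # keep every 4th sample
guess = [v for v in small for _ in range(4)]  # naive "inverse": repeat samples

print(small)            # [0, 4, 8, 12]
print(guess == signal)  # False: the discarded detail is gone
```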

  22. Conservation of information
  • Furby (1998)
  • speaks Furbish off-the-shelf
  • learns English over time
  • How does it do this?
  • Pre-programmed to speak English
  • Program causes more English to be used over time
  • Nothing is learned
  • No new information

  23. Information generation
  • We do not expect computers to generate new information
  • Rather, they only process existing information
  • This limitation is NOT dependent on the speed / computational power of the computer
  “You have illegal files on your computer!”
  “No officer. You see, I just bought this new processor, and it’s so powerful that it decided to create those files.”

  24. Information generation
  • We DO expect people to generate new information:
  • The basic requirement for a PhD is a contribution to human knowledge
  • Intellectual property assumes that knowledge is created by the inventors
  • Plagiarism is detected when one person’s work is similar to another’s
  • There is a distinction between original work and derivative work
  • Example: In 2005, students at MIT (Stribling et al.) wrote a computer program to generate research papers
  • An automatically generated paper was actually accepted for publication by the World Multi-Conference on Systemics, Cybernetics and Informatics (WMSCI)
  • Why was this such a scandal?
  • Why did people complain that the conference organizers had not reviewed submissions thoroughly, rather than conclude that computers had now reached human intelligence?

  25. Generating information
  [Diagram: a device mapping input string 0010001010101… to output string 010111010001…]
  • If complexity(output) > complexity(input), then the complexity must arise from the device itself (cf. Noam Chomsky’s black box for studying children’s innate ability to learn language)
  • But the human brain is not complex enough. Back-of-the-envelope calculation:
  • The Library of Congress has approx. 20 TB of written information, i.e., about 1.6 × 10^14 bits
  • The human genome contains 3 billion DNA rungs, for a total of about 6 × 10^9 bits of data
  • 1.6 × 10^14 >> 6 × 10^9
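The back-of-the-envelope comparison can be restated in consistent units. Assumptions in this sketch: 20 TB is taken as 20 × 10^12 bytes, and each of the genome's ~3 billion base pairs carries 2 bits.

```python
# The slide's back-of-the-envelope comparison, with both quantities
# expressed in bits so they are directly comparable.
library_bits = 20 * 10**12 * 8   # ~1.6e14 bits of written information
genome_bits = 3 * 10**9 * 2      # ~6e9 bits in the human genome

print(library_bits // genome_bits)  # the library holds ~26,000x more bits
```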

  26. Free will
  • Contingency mechanism enables humans to make free decisions
  • This is necessary for
  • moral responsibility
  • ethical standards
  • laws of justice (e.g., was the act intentional?)
  • self-improvement (Covey’s gap between stimulus and response)
  • In historic Christian theology, people are defined as “rational creatures”, which implies
  • free will
  • an immaterial, immortal soul
  “So God created man in his own image, in the image of God created he him; male and female created he them.” Gen. 1:27
  • Without free will, we are either
  • deterministic, or
  • random
  Either way leads to irrationality

  27. Consciousness
  • The common view is that consciousness arises from materialistic processes in the body
  • Why? Not because of evidence, but because of a prior philosophical commitment to naturalism
  • Kurzweil proposes to produce an exact replica of the brain
  • Then he will be automatically transported to the copy
  • Why should we think that, even if our brain could be copied, our consciousness would go with it?
  • What if the brain is copied multiple times? Will we have multiple consciousnesses?
  • This is a vain attempt at immortality (salvation by computer upload)
  • Note that all proponents of this idea predict that the technology will conveniently be in place by the time they are 70 years old – see Pattie Maes, “Why Immortality is a Dead Idea”, 1993

  28. Creativity
  • Consider a song as a point in high-dimensional space
  • Interpolating between songs may be possible (blending)
  • Creating new songs in a local neighborhood may be possible
  • Claim: Making meaningful macro-jumps is not possible
  • Even if it were possible, who would be the judge of quality? Computer or human?

  29. What’s wrong with the Turing test?
  • Turing test: one computer, one person, one judge
  • All communication via terminal
  • Goal: The judge tries to tell which is the computer and which is the person
  • The Turing test can never be used to tell whether there is a fundamental difference between computer and human
  • Reason: The judge is required to be a human
  • In other words:
  • Suppose computer = human
  • Then the human judge can be replaced by a computer judge
  • But now the test does not make any sense

  30. More…
  • Chess revisited: Sure, computers can play chess, but can they invent a new game to replace chess? Can they invent new rules?
  • Artificial life started with promise, then fizzled out as hopes were not realized
  • CAPTCHAs: reverse Turing tests
  • Church–Turing thesis
  • Complex specified information

  31. One final thought
  • Similarity between computers and animals:
  • Animals act by instinct
  • Animals do not have free will
  • Learn the lesson of Grizzly Man (Timothy Treadwell):
  • He thought bears were his friends
  • He thought they were misunderstood
  • He ignored warnings about getting too close
  • They killed him

  32. Is computer vision possible?
  • Distinction between
  • information processing: the information is changed from one form to another, or is lost
  • algorithms change information
  • information generation: the information is created
  • natural processes cannot create information
  • there is no algorithm to create information
  • information generation requires a contingency mechanism
  • supernatural or metanatural process -> spirit or soul
  • Will a computer ever be able to enjoy an aesthetically pleasing painting?
  if (painting_is_pretty) { printf("I love this painting"); }
