
Strong AI and Weak AI



  1. Strong AI and Weak AI • There are two entirely different schools of Artificial Intelligence: • Strong AI: • This is the view that a sufficiently programmed computer would actually be intelligent and would think in the same way that a human does. • Weak AI: • This is the use of methods modeled on intelligent behavior to make computers more efficient at solving problems.

  2. Defining AI • We can break this down even further by thinking about a heuristic which considers AI on two dimensions • thinking vs. acting • humanly vs. rationally

  3. “Flavors” of AI • Two dimensions: thinking vs. acting, humanly vs. rationally

                Humanly                          Rationally
  Thinking      Systems that think like humans   Systems that think rationally
  Acting        Systems that act like humans     Systems that act rationally

  4. Thinking Humanly • Cognitive Science (1960s) • Focus on the “rationale” that goes into making a decision. Claims that making the right decision isn’t intelligence if there isn’t a proper rationale. • Critics state we know too little about the workings of the brain, and that this approach too often makes unfounded leaps.

  5. Thinking Rationally • Laws of Thought • Traces its roots back to Aristotle, who tried to approach thinking in a normative, or prescriptive, way rather than a descriptive one. • The idea is that with proper logic a person (or computer) could always derive correct conclusions when provided with correct information • (∃x)(D(x) ∧ R(x)) → (∀x)(D(x) → S(x)) • Modus Ponens, Modus Tollens
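The rule of Modus Ponens mentioned above can be sketched as a tiny forward-chaining loop over propositional rules (a minimal illustration; the function and proposition names are invented, not from the slides):

```python
# Minimal sketch of Modus Ponens applied repeatedly (forward chaining).
# facts: set of known propositions; rules: list of (premise, conclusion).
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            # Modus Ponens: from P and P -> Q, infer Q.
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain({"Socrates_is_a_man"},
                      [("Socrates_is_a_man", "Socrates_is_mortal")])
```

This captures the "laws of thought" idea in miniature: given correct premises and sound rules, the conclusions that follow are guaranteed correct.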

  6. Thinking Rationally • There are several problems with this approach: • It is not always easy (or even possible) to codify a situation in the formal terms required by logical notation. • We don't always have perfect information. • Even if we have all the information and know how to interpret it, knowing how to solve a problem and actually doing so are still very different things. • Not all intelligent behavior is mediated by logical deliberation (e.g., blinking).

  7. Acting Humanly • The Turing Test • AI is defined by human-like behavior and a human-like experience. • Rather than define intelligence, we operationalize it.

  8. Acting Rationally • “Doing the right thing" … "that which is expected to maximize goal achievement, given the available information."  • Unlike the previous approach (thinking rationally) the process of "acting" rationally doesn't necessarily require "thinking."  (blinking fits in here). • In AI we define things that act rationally as “agents.”

  9. “Flavors” of AI • thinking vs. acting, humanly vs. rationally

                Humanly                                      Rationally
  Thinking      Systems that think like humans (STRONG AI)   Systems that think rationally
  Acting        Systems that act like humans (WEAK AI??)     Systems that act rationally (WEAK AI)

  10. Strong AI and Weak AI • Strong AI is currently the stuff of science fiction, although many believe that machines will indeed be capable of real thought at some point in the future. • This course is concerned with Weak AI (agents that act rationally).

  11. Strong Methods and Weak Methods • Not to be confused with Strong AI and Weak AI.

  12. Strong Methods vs. Weak Methods • Weak Methods : • employ systems such as logic, automated reasoning, and other general structures that can be applied to a wide range of problems; • do not incorporate any real knowledge about the world and the problem that is being solved.

  13. Strong Methods vs. Weak Methods • Strong Methods : • depend on a system being given a great deal of knowledge about its world and the problems that it might encounter • Example: Expert Systems with their strong reliance on domain knowledge. Note : The strong vs. weak methods dichotomy should not be confused with the distinction between strong and weak AI.

  14. Strong Methods and Weak Methods • Strong method systems rely on weak methods, as knowledge is useless without a way to handle that knowledge. • Weak methods are in no way inferior to strong methods – they simply do not employ world knowledge.

  15. Defining AI • AI goes beyond “normal” CS… • AI often works on problems that we know are intractable… • Consider chess.

  16. Defining AI • In the early 1970s someone wrote the following bit of trivia: • If every man, woman, and child on earth were to spend every waking moment playing chess (16 hours per day) at the rate of one game per minute, it would take 146 billion years to use every variation of the first 10 moves.
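Under an assumed early-1970s world population of about 4 billion, the variation count this trivia implies can be checked with simple arithmetic (a back-of-the-envelope sketch of the slide's own figures, not an exact chess-theoretic number):

```python
# Sanity check of the chess trivia above.
# Assumption: ~4 billion people in the early 1970s; other figures from the slide.
population = 4_000_000_000
games_per_year = 16 * 60 * 365      # 16 hours/day at one game per minute
years = 146_000_000_000             # 146 billion years

variations = population * games_per_year * years
print(f"Implied number of 10-move variations: {variations:.2e}")
```

This works out to roughly 2 × 10²⁶ games, which gives a concrete sense of why exhaustive search of even a short game prefix is hopeless.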

  17. Defining AI • Given this complexity, how the heck can humans EVER hope to play chess well??? • We don't often arrive at the best solutions, but we usually do arrive at solutions that are good enough. • This idea of satisficing rather than optimizing when confronted with an intractable problem is central to what AI is about.

  18. Defining AI • Most people would characterize AI as a subset of CS, saying that it focuses on a specific set of problems and techniques. • Given my view of CS as modeling the world, one might turn this relation on its head: CS as we usually practice it is in fact a subset of AI • An intelligent agent must be able to model the world, to think about whatever is in its environment, and to handle both tractable and intractable problems well enough to achieve its goals.

  19. Agents • “An agent is simply something that acts.” • An agent is an entity that is capable of perceiving its environment (through sensors) and responding appropriately to it (through actuators).

  20. Agents • If the agent is intelligent, it should be able to weigh alternatives. • “A rational agent is one that acts so as to achieve the best outcome or, when there is uncertainty, the best expected outcome.”

  21. Agents • An agent should be able to derive new information from data by applying sound logical rules. • It should possess extensive knowledge in the domain where it is expected to solve problems.

  22. Agents • We will consider truly intelligent, rational agents as entities which display: • Perception • Persistence • Adaptability • Autonomous control

  23. Agents and Environments • Agents include humans, robots, softbots, thermostats, etc. • The agent function maps from percept histories to actions: f : P* → A • The agent program runs on the physical architecture to produce f.
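The percept-history-to-action mapping can be sketched in a few lines (an illustrative sketch only; the class and the thermostat program are invented examples, not from the slides):

```python
# Sketch of the agent abstraction: the agent function f maps percept
# histories to actions; the agent program implements f on an architecture.
# All class and function names here are invented for illustration.
class Agent:
    def __init__(self, program):
        self.program = program      # the agent program
        self.percepts = []          # percept history so far

    def act(self, percept):
        self.percepts.append(percept)
        # f : P* -> A, realized by running the program on the history
        return self.program(self.percepts)

# A trivial thermostat-style agent program: act on the latest percept.
def thermostat_program(percepts):
    temperature = percepts[-1]
    return "heat_on" if temperature < 18 else "heat_off"

agent = Agent(thermostat_program)
```

Note the separation: the agent *function* is the abstract mapping, while the agent *program* is the concrete code that computes it.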

  24. Agents and Environments • Vacuum-Cleaner World

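The vacuum-cleaner world on these slides is commonly presented as a two-square environment with a simple reflex agent; a minimal sketch, assuming squares A and B and percepts of the form (location, status):

```python
# Sketch of a simple reflex agent for a two-square vacuum world.
# Assumed convention: percept = (location, status),
# e.g. ("A", "Dirty"); actions are "Suck", "Left", "Right".
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"               # clean the current square
    return "Right" if location == "A" else "Left"   # otherwise move on
```

The agent consults only the current percept, not the history, which is what makes it a *reflex* agent.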

  27. Rationality • A rational agent does the right thing. • What is the right thing? • One possibility: the action that will maximize success. • But what is success? The action that maximizes the agent’s goals. • Use a performance measure to evaluate the agent’s success. • So what would be a good performance measure for the vacuum agent?

  28. Rationality • Fixed performance measure evaluates the environment sequence • One point per square cleaned up in time T • One point per clean square per time step, minus one per move? • Penalize for more than k dirty squares? • A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.
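The "one point per clean square per time step" measure above can be made concrete in a small simulation (the environment loop and starting state are illustrative assumptions, not from the slides):

```python
# Score a reflex vacuum agent over a fixed number of time steps,
# awarding one point per clean square per step.
# Assumed setup: two squares A and B, both initially dirty, agent starts at A.
def run_vacuum(steps=10):
    world = {"A": "Dirty", "B": "Dirty"}
    location, score = "A", 0
    for _ in range(steps):
        # reflex rule: suck if the current square is dirty, else move
        if world[location] == "Dirty":
            world[location] = "Clean"
        else:
            location = "B" if location == "A" else "A"
        # performance measure: one point per clean square this step
        score += sum(1 for status in world.values() if status == "Clean")
    return score
```

Changing the scoring line (e.g., subtracting a point per move) changes which behavior counts as rational, which is the point of making the performance measure explicit.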

  29. Rationality • Rational agent definition: “For each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.”

  30. Rationality • Rationality is not • Omniscience • Clairvoyance • Success • Rationality implies • Exploration • Learning • Autonomy

  31. PEAS • To design a rational agent, we must specify the task environment (the “problems” to which rational agents are the “solutions”). • Performance measure • Environment • Actuators • Sensors • Example: the task of designing an automated taxi.

  32. PEAS • Performance measure? Safety, destination, profits, legality, comfort… • Environment? US streets/freeways, traffic, pedestrians, weather… • Actuators? Steering, accelerator, brake, horn, speaker/display… • Sensors? Video, accelerometers, gauges, engine sensors, keyboard, GPS, …
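The taxi's PEAS description can be organized as a simple record (the dataclass is just one illustrative way to structure a task environment; the field values come from the slide):

```python
from dataclasses import dataclass

# A PEAS task-environment description as a plain record.
@dataclass
class TaskEnvironment:
    performance: list   # what counts as success
    environment: list   # where the agent operates
    actuators: list     # how it acts
    sensors: list       # how it perceives

taxi = TaskEnvironment(
    performance=["safety", "destination", "profits", "legality", "comfort"],
    environment=["US streets/freeways", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn", "speaker/display"],
    sensors=["video", "accelerometers", "gauges", "engine sensors",
             "keyboard", "GPS"],
)
```

Writing the four PEAS components down explicitly is the first step of agent design: the performance measure defines the problem, and the other three constrain the solution.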
