Intelligent Behavior in Humans and Machines
Pat Langley
Computational Learning Laboratory
Center for the Study of Language and Information
Stanford University, Stanford, California USA
http://cll.stanford.edu/

Thanks to Herbert Simon, Allen Newell, John Anderson, David Nicholas, John Laird, Randy Jones, and many others for discussions that led to the ideas presented in this talk.
Basic Claims
• Early AI was closely linked to the study of human cognition.
• This alliance produced many ideas that have been crucial to the field's long-term development.
• Over the past 20 years, that connection has largely been broken, which has hurt our ability to pursue two of AI's original goals:
  • to understand the nature of the human mind
  • to achieve artifacts that exhibit human-level intelligence
Re-establishing the connection to psychology would help achieve these challenging objectives.
Outline of the Talk
• Review of early AI accomplishments that benefited from connections to cognitive psychology
• Examples of AI's current disconnection from psychology and some reasons behind this unfortunate development
• Ways that AI can benefit from renewed links to psychology
• Research on cognitive architectures as a promising avenue
• Steps we can take to encourage research along these lines
Early Links Between AI and Psychology
As AI emerged in the 1950s, one central insight was that computers might reproduce the complex cognition of humans. Some took human intelligence as an inspiration without trying to model the details. Others, like Herb Simon and Allen Newell, viewed themselves as psychologists aiming to explain human thought. This paradigm was pursued vigorously at Carnegie Tech, and it was respected elsewhere. The approach was well represented in the early edited volume Computers and Thought.
Early Research on Knowledge Representation
• Much initial work on representation dealt with the structure and organization of human knowledge:
  • Hovland and Hunt's (1960) CLS
  • Feigenbaum's (1963) EPAM
  • Quillian's (1968) semantic networks
  • Schank and Abelson's (1977) scripts
  • Newell's (1973) production systems
• Not all research was motivated by concerns with psychology, but it had a strong impact on the field.
Early Research on Problem Solving
• Studies of human problem solving also influenced early AI research:
  • Newell, Shaw, and Simon's (1958) Logic Theorist
  • Newell, Shaw, and Simon's (1961) General Problem Solver
  • de Groot's (1965) discovery of progressive deepening
  • VanLehn's (1980) analysis of impasse-driven errors
• Psychological studies led to key insights about both state-space and goal-directed heuristic search.
Early Research on Knowledge-Based Reasoning
• The 1980s saw many developments in knowledge-based reasoning that incorporated ideas from psychology:
  • expert systems (e.g., Waterman, 1986)
  • qualitative physics (e.g., Kuipers, 1984; Forbus, 1984)
  • model-based reasoning (e.g., Gentner & Stevens, 1983)
  • analogical reasoning (e.g., Gentner & Forbus, 1991)
• Research on natural language also borrowed many ideas from studies of structural linguistics.
Early Research on Learning and Discovery
• Many AI systems also served as models of human learning and discovery processes:
  • categorization (Hovland & Hunt, 1960; Feigenbaum, 1963; Fisher, 1987)
  • problem solving (Anzai & Simon, 1979; Anderson, 1981; Minton et al., 1989; Jones & VanLehn, 1994)
  • natural language (Reeker, 1976; Anderson, 1977; Berwick, 1979; Langley, 1983)
  • scientific discovery (Lenat, 1977; Langley, 1979)
• This work reflected the diverse forms of knowledge supported by human learning and discovery.
The Unbalanced State of Modern AI
Unfortunately, AI has moved away from modeling human cognition and become unfamiliar with results from psychology. Despite the historical benefits, many AI researchers now believe psychology has little to offer the field. Similarly, few psychologists believe that results from AI are relevant to modeling human behavior. This shift has taken place in a number of research areas, and it has occurred for a number of reasons.
Current Emphases in AI Research
• Knowledge representation
  • focus on restricted logics that guarantee efficient processing
  • less flexibility and power than observed in human reasoning
• Problem solving and planning
  • partial-order and, more recently, disjunctive planners
  • bear little resemblance to problem solving in humans
• Natural language processing
  • statistical methods with few links to psychology or linguistics
  • focus on tasks like information retrieval and extraction
• Machine learning
  • statistical techniques that learn far more slowly than humans
  • almost exclusive focus on classification and reactive control
Technological Reasons for the Shift
• One reason revolves around faster computer processors and larger memories, which have made possible new methods for:
  • playing games by carrying out far more search than humans
  • finding complicated schedules that trade off many factors
  • retrieving relevant items from large document repositories
  • inducing complex predictive models from large data sets
• These are genuine scientific advances, but AI might fare even better by incorporating insights from human behavior.
Formalist Trends in Computer Science
• Another factor involves AI's typical home in departments of computer science:
  • which often grew out of mathematics departments
  • where analytical tractability is a primary concern
  • where guaranteed optimality trumps heuristic satisficing
  • even when this restricts work to narrow problem classes
• Many AI faculty in such organizations view connections to psychology with intellectual suspicion.
Commercial Success of AI
• Another factor has been AI's commercial success, which has:
  • led many academics to study narrowly defined tasks
  • produced a bias toward near-term applications
  • caused an explosion of work on “niche AI”
• Moreover, component algorithms are much easier to evaluate experimentally, especially given available repositories.
• Such focused efforts are appropriate for corporate AI labs, but academic researchers should aim for higher goals.
Benefits: Understanding Human Cognition
• One reason for renewed interchange between the two fields is to understand the nature of human cognition:
  • because this would have important societal applications in education, interface design, and other areas;
  • because human intelligence comprises an important set of phenomena that demand scientific explanation.
• This remains an open and challenging problem, and AI systems remain the best way to tackle it.
Benefits: Source of Challenging Tasks
• Another reason is that observations of human abilities serve as an important source of challenges, such as:
  • understanding language at a deeper level than current systems
  • interleaving planning with execution in pursuit of many goals
  • learning complex knowledge structures from few experiences
  • carrying out creative activities in art and science
• Most work in AI sets its sights too low by focusing on tasks that hardly involve intelligence.
• Psychological studies reveal the impressive abilities of human cognition and pose new problems for AI research.
Benefits: Constraints on Intelligent Artifacts
To develop intelligent systems, we must constrain their design, and findings about human behavior can suggest:
• how the system can represent and organize knowledge;
• how the system can use that knowledge in performance;
• how the system can acquire knowledge from experience.
Some of the most interesting AI research uses psychological ideas as design heuristics, including evidence about abilities an intelligent system does not need (e.g., the capacity for rapid and extensive search).
Humans remain our only example of general intelligent systems, and insights about their operation deserve serious consideration.
AI and Cognitive Systems
In 1973, Allen Newell argued “You can’t play twenty questions with nature and win”. Instead, he proposed that we:
• move beyond isolated phenomena and capabilities to develop complete models of intelligent behavior;
• develop cognitive systems that make strong theoretical claims about the nature of the mind;
• view cognitive psychology and artificial intelligence as close allies with distinct but related goals.
Newell claimed that a successful framework should provide a unified theory of intelligent behavior. He associated these aims with the idea of a cognitive architecture.
Assumptions of Cognitive Architectures
Most cognitive architectures incorporate a variety of assumptions from psychological theories:
• short-term memories are distinct from long-term stores;
• memories contain modular elements cast as symbolic structures;
• long-term structures are accessed through pattern matching;
• cognition occurs in retrieval/selection/action cycles;
• performance and learning compose elements in memory.
These claims are shared by a variety of architectures, including ACT-R, Soar, Prodigy, and ICARUS.
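To make the retrieval/selection/action cycle concrete, here is a minimal sketch of a recognize-act loop over a short-term working memory and a long-term store of rules. It is only an illustration in the spirit of production-system architectures, not the actual machinery of ACT-R, Soar, Prodigy, or ICARUS; the rule format, the matches() helper, and the first-match conflict-resolution policy are simplifying assumptions.

```python
# Minimal recognize-act loop: a short-term working memory of active elements,
# a long-term store of modular rules, and repeated retrieval/selection/action
# cycles.  Everything here (rule format, matcher, conflict resolution) is an
# illustrative assumption, not the machinery of any real architecture.

# Long-term memory: modular (conditions, action) rules cast as symbolic structures.
LONG_TERM_MEMORY = [
    ({"goal": "greet", "person": "visitor"},
     lambda wm: wm.update({"say": "hello", "goal": "respond"})),
    ({"goal": "respond"},
     lambda wm: wm.update({"goal": "done"})),
]

# Short-term memory: a small set of currently active elements.
working_memory = {"goal": "greet", "person": "visitor"}


def matches(conditions, wm):
    """Pattern matching: every condition must hold in working memory."""
    return all(wm.get(attr) == value for attr, value in conditions.items())


def recognize_act(wm, max_cycles=10):
    """Run retrieval/selection/action cycles until no long-term rule applies."""
    for cycle in range(max_cycles):
        # Retrieval: collect the rules whose conditions match working memory.
        candidates = [rule for rule in LONG_TERM_MEMORY if matches(rule[0], wm)]
        if not candidates:
            break
        # Selection (conflict resolution): here, simply take the first match.
        conditions, action = candidates[0]
        # Action: firing the rule adds or revises elements in working memory.
        action(wm)
        print(f"cycle {cycle}: fired {conditions} -> {wm}")
    return wm


recognize_act(working_memory)
```

Running the sketch fires two rules on successive cycles and then halts when no stored structure matches, which is the basic cycle the assumptions above describe.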
Ideas about Representation
Cognitive psychology makes important representational claims:
• each element in a short-term memory is an active version of some structure in long-term memory;
• many mental structures are relational in nature, in that they describe connections or interactions among objects;
• concepts and skills encode different aspects of knowledge that are stored as distinct cognitive structures;
• long-term memories have hierarchical organizations that define complex structures in terms of simpler ones.
Many architectures adopt these assumptions about memory.
Architectural Commitment to Processes
In addition, a cognitive architecture makes commitments about:
• performance processes for:
  • retrieval, matching, and selection
  • inference and problem solving
  • perception and motor control
• learning processes that:
  • generate new long-term knowledge structures
  • refine and modulate existing structures
In most cognitive architectures, performance and learning are tightly intertwined, again reflecting influence from psychology.
Ideas about Performance
Cognitive psychology makes clear claims about performance:
• humans often resort to problem solving and search to solve novel, unfamiliar problems;
• problem solving depends on mechanisms for retrieval and matching, which occur rapidly and unconsciously;
• people use heuristics to find satisfactory solutions, rather than algorithms to find optimal ones;
• problem solving in novices requires more cognitive resources than experts’ use of automatized skills.
Many architectures embody these ideas about performance.
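As a small illustration of the satisficing claim, the following sketch accepts the first option that meets an aspiration level instead of scanning every candidate for the optimum. The candidate routes, their costs, and the threshold are invented purely for illustration.

```python
# Sketch contrasting satisficing with exhaustive optimization on a toy task:
# choose a route whose estimated cost is "good enough" rather than the best one.
# The candidates, costs, and aspiration level are made up for illustration.

CANDIDATE_ROUTES = {"via-highway": 42, "via-bridge": 37, "via-downtown": 55}
GOOD_ENOUGH = 45  # aspiration level: any route at or under this cost is acceptable


def satisfice(routes, threshold):
    """Return the first candidate that meets the aspiration level."""
    for name, cost in routes.items():
        if cost <= threshold:
            return name, cost        # stop searching as soon as one suffices
    return None


def optimize(routes):
    """Exhaustively examine every candidate to guarantee the optimum."""
    return min(routes.items(), key=lambda item: item[1])


print(satisfice(CANDIDATE_ROUTES, GOOD_ENOUGH))  # ('via-highway', 42)
print(optimize(CANDIDATE_ROUTES))                # ('via-bridge', 37)
```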
Ideas about Learning
Cognitive psychology has also developed ideas about learning:
• efforts to overcome impasses during problem solving can lead to new cognitive structures;
• learning can transform backward-chaining heuristic search into forward-chaining behavior;
• learning is incremental and interleaved with performance;
• structural learning involves monotonic addition of symbolic elements to long-term memory;
• transfer to new tasks depends on the amount of structure shared with previously mastered tasks.
Architectures often incorporate these ideas into their operation.
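Several of these claims can be illustrated with a toy sketch: when no stored skill covers a task (an impasse), the system falls back on search, then caches the solution as a new long-term structure so that later encounters with the same task are handled directly. The state space, the rule store, and the function names are assumptions for illustration, loosely in the spirit of chunking and knowledge compilation rather than any specific architecture's mechanism.

```python
# Illustrative sketch of impasse-driven learning: when no stored skill covers a
# task, fall back to search, then cache the solution as a new long-term rule.
# The state space and all names are invented for the example.
from collections import deque

# Primitive operators: a tiny state space given as a graph (an assumption).
OPERATORS = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
    "E": [],
}

learned_rules = {}  # (start, goal) -> stored solution path


def search(start, goal):
    """Breadth-first search, used only when an impasse occurs."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in OPERATORS[path[-1]]:
            frontier.append(path + [nxt])
    return None


def solve(start, goal):
    if (start, goal) in learned_rules:          # familiar task: apply stored skill
        print("retrieved learned rule")
        return learned_rules[(start, goal)]
    print("impasse: falling back to search")    # unfamiliar task: impasse
    path = search(start, goal)
    if path is not None:
        learned_rules[(start, goal)] = path     # monotonic addition to memory
    return path


print(solve("A", "E"))  # first call: impasse, search finds the path A-B-D-E
print(solve("A", "E"))  # second call: the cached rule is retrieved directly
```

The second call succeeds without search, showing learning interleaved with performance and a monotonic addition to the long-term store.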
Architectures as Programming Languages
Each cognitive architecture comes with a programming language that:
• includes a syntax linked to its representational assumptions;
• inputs long-term knowledge and initial short-term elements;
• provides an interpreter that runs the specified program;
• incorporates tracing facilities to inspect system behavior.
Such programming languages ease the construction and debugging of knowledge-based systems. Thus, ideas from psychology can support efficient development of software for intelligent systems.
Responses: Broader AI Education
Most current AI courses ignore the field’s history; we need a broader curriculum that covers its connections to:
• cognitive psychology
• structural linguistics
• logical reasoning
• philosophy of mind
These areas are more important to AI’s original agenda than are ones from mainstream computer science.
For one example, see http://cll.stanford.edu/reason-learn/, a course I have offered for the past three years.
Responses: Funding Initiatives
We also need funding to support additional AI research that:
• makes contact with ideas from computational psychology
• addresses the same range of tasks that humans can handle
• develops integrated cognitive systems that move beyond component algorithms
In recent years, DARPA and NSF have taken promising steps in this direction, with clear effects on the community. However, we need more funding programs along these lines.
Responses: Publication Venues
We also need places to present work in this paradigm, such as:
• AAAI’s new track for integrated intelligent systems
• this year’s Spring Symposium on AI meets Cognitive Science
• the special issue of AI Magazine on human-level intelligence
We need more outlets of this sort, but recent events have been moving the field in the right direction.
Closing Remarks
In summary, AI’s original vision was to understand the basis of intelligent behavior in humans and machines. Many early systems doubled as models of human cognition, while others made effective use of ideas from psychology. Recent years have seen far less research in this tradition, with AI becoming a set of narrow, specialized subfields. Re-establishing contact with ideas from psychology, including work on cognitive architectures, can remedy this situation. The next 50 years must see AI return to its psychological roots if it hopes to achieve human-level intelligence.
Closing Dedication
I would like to dedicate this talk to two of AI’s founding fathers:
• Allen Newell (1927 – 1992)
• Herbert Simon (1916 – 2001)
Both contributed to the field in many ways: posing new problems, inventing methods, writing key papers, and training students. They were both interdisciplinary researchers who contributed not only to AI but to other disciplines, including psychology. Allen Newell and Herb Simon were excellent role models whom we should all aim to emulate.