Paradigm Shift in AI Oren Etzioni Turing Center Note: some half-baked thoughts, please don’t circulate or cite
Basic Premises (I’m a…) • Materialist everything is made of atoms • Functionalist if you can instantiate it in neurons, you can also instantiate it in silicon. (what is ‘it’?) • This makes me an AI Optimist (long term) • We are very far from the boundary of “machine intelligence”, so medium-term optimist too! • Are we studying AI, though?!?
Outline • A bit of philosophy of science • Critique of Present AI • “Whenever you find yourself on the side of the majority, it is time to pause and reflect” – Mark Twain • Hints at a new paradigm
Science is done within paradigms • paradigm = set of shared assumptions, ideas, methods • Cathedral view: we keep accumulating bricks over hundreds of years until we have….
Paradigm Shift (The Duck View) Thomas Kuhn ’62 • Anomalies are explained away • When they accumulate, the paradigm is deemed inadequate • Change is unexpected, and revolutionary! • Or is it a rabbit?
Example from Physics • 19th Century (and earlier) light is a wave • How does light move in space? • Through the “luminiferous ether” • But no one has observed it… • 1887: Michelson-Morley experiment ingenious way to detect ether, but… • No ether was detected… • Other “cracks” in the Newtonian paradigm • 1905: theory of relativity
Critique of Current AI Paradigm • Subtask-driven research (e.g., parsing, concept learning, planning) • Formulate a narrow subtask, then spend way too many years solving it better & better • One-shot systems (e.g., run learning algorithm once on single data set, single concept) • Where is the intelligence in an AI system? • target concept, learning algorithm, representation, bias, pruning, training set all chosen by expert • labor-intensive, iterative refinement
Critique of AI cont. • Focus of the field is on: • Modules, not “complete systems” • Desired I/O is assumed and invented • Where do target concepts, goals come from? • Experimental metrics are surrogates for real performance (e.g., search-tree depth versus chess rating). • precision/recall of KnowItAll. • How well can the robot pick up cups? • “How” instead of “what” • Most papers describe a mechanism/algorithm/system/enhancement • Only a few tackle an issue/question (why does Naïve Bayes work?) • This makes me an AI Pessimist (short term)
So What do we Need? Rod Brooks (1991): • “Complete systems” • “Real sensing, real action” (Drosophila is a real creature!) • For me, this led to softbots (1991 - 1997) • Pitfall: low-level/engineering overhead • Pitfall: need background knowledge to succeed Ed Feigenbaum/Doug Lenat: • Machines that can learn/represent/utilize massive bodies of knowledge • Cyc, KnowItAll, MLNs are pieces of this • Lesson from Cyc/KnowItAll: “writing down” bits is easy • Lesson from MLNs: reasoning is still hard • Question: how to make progress? How to measure it? • Need bona fide, external performance metrics
External Measure of Performance • IQ score, SAT score, chess rating, Turing test • This is surprisingly tricky: • Peter Turney’s SAT analogy test • Demo • Halo Project
HALO project • Build a Scientific KB “Digital Aristotle” • Measure performance on AP science tests • The Hype: “computer passes AP test” • The Reality: “goals in this project are further than they appear” • Slides courtesy of Peter Clark (KCAP ’07)
Example question (physics) An alien measures the height of a cliff by dropping a boulder from rest and measuring the time it takes to hit the ground below. The boulder fell for 23 seconds on a planet with an acceleration of gravity of 7.9 m/s2. Assuming constant acceleration and ignoring air resistance, how high was the cliff? Example question (chemistry) Solutions of nickel nitrate and sodium hydroxide are mixed together. Which of the following statements is true? a. A precipitate will not form. b. A precipitate of sodium nitrate will be produced. c. Nickel hydroxide and sodium nitrate will be produced. d. Nickel hydroxide will precipitate. e. Hydrogen gas is produced from the sodium hydroxide.
Unrestricted natural language (“Consider the following possible situation in which a boulder first…”) is too hard for the computer to understand. Formal language (“∀x ∃y B(x) ∧ R(x,y) ∧ C(y)”) is too hard for the user. There lies a “sweet spot” between logic and full NL, CPL (“A boulder is dropped”), which is both human-usable and machine-understandable.
Example of a CPL encoding of a qn An alien measures the height of a cliff by dropping a boulder from rest and measuring the time it takes to hit the ground below. The boulder fell for 23 seconds on a planet with an acceleration of gravity of 7.9 m/s2. Assuming constant acceleration and ignoring air resistance, how high was the cliff? A boulder is dropped. The initial speed of the boulder is 0 m/s. The duration of the drop is 23 seconds. The acceleration of the drop is 7.9 m/s^2. What is the distance of the drop?
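The CPL encoding above maps directly onto the constant-acceleration kinematics it implies; a minimal sketch in Python (function and variable names are my own, not part of the HALO system):

```python
# Constant-acceleration kinematics for the dropped-boulder question.
# Uses the standard formula d = v0*t + (1/2)*a*t^2.

def drop_distance(v0: float, a: float, t: float) -> float:
    """Distance fallen from initial speed v0 with constant acceleration a over time t."""
    return v0 * t + 0.5 * a * t ** 2

# The question's numbers: dropped from rest (v0 = 0), g = 7.9 m/s^2, t = 23 s.
height = drop_distance(v0=0.0, a=7.9, t=23.0)
print(f"{height:.2f} m")  # 0.5 * 7.9 * 23^2 = 2089.55 m
```

The interesting part is not this arithmetic but getting from the English (or CPL) statement to the call above, which is exactly the interpretation step the slides discuss.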
Controlled Language for Question-Asking… • Controlled Language: Not a panacea! • Not just a matter of grammatical simplification • Only certain linguistic forms are understood • Many concepts, many ways of expressing each one • Huge effort to encode these in the interpreter • User has to learn acceptable forms • User needs to make common sense explicit • Man pulls rope, rope attached to sled → force on sled • 4 wheels support a car → ¼ weight on each wheel
Lessons from HALO Project • Setting an ambitious, externally-defined target is exciting but challenging • Grammatical simplification (via CL) is helpful, but only one layer of the onion! • Text leaves key information implicit • Need “common sense” to understand text • Need massive body of “background knowledge” and ability to reason over it • Need to articulate clear lessons • What have we learned from Soar? Cyc? KnowItAll?
Appealing Hypothesis? • AI will emerge from evolution, from neural soup,… • AI will emerge from scale up. • Let’s just continue doing what we’re doing • Perhaps gear it up to use massive data sets/machine cycles (VLSAI) • Then, we will “ride” Moore’s Law to success
Banko & Brill ’01 (case study) • Example problem: confusion set disambiguation • {principle, principal} • {then, than} • {to, two, too} • {whether, weather} • Approaches include: • Latent semantic analysis • Differential grammars • Decision lists • A variety of Bayesian classifiers
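To make the task concrete, here is a toy confusion-set disambiguator in the spirit of the Bayesian classifiers listed above: a naive-Bayes-style model over the words surrounding the target slot. This is a minimal sketch, not Banko & Brill's implementation; the class and all names are invented for illustration.

```python
# Toy confusion-set disambiguation: pick the most likely member of a
# confusion set (e.g., {then, than}) given the surrounding context words.
from collections import Counter, defaultdict
import math

class ConfusionSetClassifier:
    def __init__(self, confusion_set):
        self.confusion_set = confusion_set
        self.class_counts = Counter()                # P(label)
        self.feature_counts = defaultdict(Counter)   # P(word | label)

    def train(self, items):
        # Each training item: (context_words, correct_member_of_confusion_set)
        for context, label in items:
            self.class_counts[label] += 1
            for w in context:
                self.feature_counts[label][w] += 1

    def predict(self, context):
        total = sum(self.class_counts.values())
        vocab = {w for c in self.feature_counts.values() for w in c}

        def score(label):
            # log P(label) + sum of log P(word | label), with add-one smoothing
            s = math.log(self.class_counts[label] / total)
            n = sum(self.feature_counts[label].values())
            for w in context:
                s += math.log((self.feature_counts[label][w] + 1) / (n + len(vocab) + 1))
            return s

        return max(self.confusion_set, key=score)
```

The point of the case study is what happens to such a learner as the training corpus grows by orders of magnitude, not the model itself.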
Banko & Brill ‘01 • Collected a 1-billion-word English training corpus • 3 orders of magnitude larger than the largest corpus used previously for this problem • Consisted of: • News articles • Scientific abstracts • Government transcripts • Literature • Etc. • Test set: • 1 million words of WSJ text (none used in training)
Training on a Huge Corpus • Each learner trained at several cutoff points • First 1 million, then 5M, etc. • Items drawn by probabilistically sampling sentences from the different sources, weighted by source size. • Learners: • Naïve Bayes, perceptron, winnow, memory-based • Results: • Accuracy continues to increase log-linearly even out to 1 billion words of training data • BUT the size of the trained model also increases log-linearly as a function of training set size.
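One way to draw items "weighted by source size," as the sampling step describes; a guess at the procedure, not their code, and the corpus sizes below are hypothetical round numbers:

```python
# Sample a corpus source with probability proportional to its size,
# so larger sources contribute proportionally more training sentences.
import random

sources = {  # hypothetical word counts per source, for illustration only
    "news": 500_000_000,
    "abstracts": 200_000_000,
    "transcripts": 200_000_000,
    "literature": 100_000_000,
}

def sample_source(rng=random):
    names = list(sources)
    weights = [sources[n] for n in names]
    # random.choices performs weighted sampling with replacement
    return rng.choices(names, weights=weights, k=1)[0]
```

A sentence would then be drawn uniformly from within the chosen source, repeating until the desired cutoff (1M, 5M, …) is reached.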
Lessons from Banko & Brill • Relative performance changes with data set size • Performance continues to climb with increases in data set size • Caveats: • there is much more to their paper. I just took a biased “sample”. • the task considered is narrow and simple • However, similar phenomena have been observed in other settings and tasks • Lesson: ask what happens if I have 10x or 100x more data, cycles, memory?
Computer Chess Case Study • A complete system, in a real/toy domain • Simple, external performance metric • ~40 years to super-human performance • massive databases • knowledge engineering to choose features • automatic tuning of evaluation function parameters • Brute-force search coupled with heuristics for “selective extensions” • Deeper search (scale up!) led to a qualitative difference in performance
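The core of the brute-force-plus-heuristics recipe is depth-limited game-tree search with a heuristic evaluation at the frontier. A bare-bones sketch; the tree and scores below are toy stand-ins (not chess positions), and "selective extensions" would amount to letting `depth` shrink more slowly along promising lines:

```python
# Depth-limited negamax: search `depth` plies, then fall back on a
# heuristic evaluation. Deeper search = better play (the scale-up effect).

def negamax(state, depth, children, evaluate):
    """Best achievable score for the player to move at `state`."""
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)  # heuristic evaluation at the search frontier
    # My best move is the one that minimizes the opponent's best reply.
    return max(-negamax(k, depth - 1, children, evaluate) for k in kids)

# Toy game tree and frontier scores (from the perspective of the player to move).
tree = {"root": ["a", "b"], "a": ["a1", "a2"]}
scores = {"a1": -3, "a2": 1, "b": -2}

best = negamax("root", 2, lambda s: tree.get(s, []), lambda s: scores.get(s, 0))
```

Everything else in a real engine (move generation, evaluation features, tuning, opening/endgame databases) plugs into this skeleton.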
Achilles’ Heel of the Scale-Up Argument • These were narrow, well-formed problems • How do you apply these ideas to broader problems? • Take for example “monkeys at a typewriter” • they would eventually produce the world’s most amazing literature • But how would you know?!
Elements of a New AI Paradigm • Report lessons from major projects • Focus on ‘what’ is being computed? • Is it an advance? • Build complete systems in real-world test beds • Challenge: avoid engineering “rat holes” • Rely on ‘external’ performance metrics • Is AI making progress? • Ask new questions: can this program survive? • How does it formulate its goals? • Is it conscious? • Is this enough?
AI = Study of Ill Formed Problems • Conjecture = if you can define it as a search/optimization problem, then computer scientists will figure out how to solve it tractably (if that’s possible) • The fundamental challenge of AI today is to figure out how to map fluid and amazing human capabilities (NLU, Common sense, human navigation of the physical world, etc.) into formal problems. • The amazing thing is how little discussion there is of how to get from here to our goal!!!
Some Ill-Formed Problems • Softbot: a cyber-assistant with wide-ranging capabilities. • Would you let it send you email? • Would you give it your credit card? • A textbook learner: a program that reads a chapter and then answers questions about it • Machine Reading at Web scale: “read” the Web and leverage scale to compensate for limited subtlety • Life-long learner: a program that learns, but also learns how to learn better over time.
Automatic Formulation of Learning • Learning Problem = (labeled examples, hypothesis space, target concept) • Can the learner • Choose a target concept • Choose a representation for examples/hypotheses • Label some examples • Choose a learning algorithm • Evaluate the results
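The triple on this slide can be made concrete as a small data structure that a learner could, in principle, assemble and evaluate for itself. Everything below is an illustrative sketch of that framing, not an actual self-formulating system:

```python
# A learning problem as (labeled examples, hypothesis space, target concept),
# plus the evaluation step: score each hypothesis on held-out labeled data.
from dataclasses import dataclass, field
from typing import Any, Callable, List, Tuple

@dataclass
class LearningProblem:
    target_concept: str                              # what is being learned
    examples: List[Tuple[Any, bool]]                 # labeled examples
    hypothesis_space: List[Callable[[Any], bool]] = field(default_factory=list)

def best_hypothesis(problem: LearningProblem, holdout: List[Tuple[Any, bool]]):
    """Pick the hypothesis with the highest held-out accuracy."""
    def accuracy(h):
        return sum(h(x) == y for x, y in holdout) / len(holdout)
    return max(problem.hypothesis_space, key=accuracy)

# Illustrative instance: the learner has (somehow) chosen "is even" as a
# target concept, labeled a few examples, and proposed two hypotheses.
is_even = lambda x: x % 2 == 0
is_positive = lambda x: x > 0
problem = LearningProblem(
    target_concept="even number",
    examples=[(2, True), (3, False), (4, True)],
    hypothesis_space=[is_even, is_positive],
)
```

The hard, open questions are exactly the bullets above: where the target concept, the representation, and the labels come from when no human expert supplies them.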
Lifelong Learning • Nonstop learning/reasoning/action • Is this just a matter of a large enough data set? • Add in “recursive” learning • learning at time T is a function of learning at T-1 • Multiple problems, limited resources • Representation change • Ability to formulate own goals/learning problems
The Future of AI To borrow from Alan Kay: “The best way to predict the future of AI is to invent it!”
My Own View • What is your own goal? • write a paper versus “solve AI” • Science is done within paradigms • AI’s current paradigm is “statistical/probabilistic methods” • Paradigms shift when they are deemed inadequate • Change is unexpected, and revolutionary! “The Structure of Scientific Revolutions” by Thomas Kuhn
Cracks in the AI Paradigm • We are building increasingly powerful algorithms for very narrow tasks • Learning algorithms are “one shot” • we have parsing, but what about understanding? • Much of our progress is due to Moore’s Law • It’s time for a revolution…