
Artificial Intelligence- An Introduction

Govindrao Wanjari College of Engineering & Technology, Nagpur. Department of CSE. Session: 2017-18. Branch/Sem: CSE, 3rd sem. Topic: "Introduction to AI". Subject: Artificial Intelligence. Subject Teacher: Prof. K. A. Shendre. Artificial Intelligence- An Introduction.


Presentation Transcript


  1. Govindrao Wanjari College of Engineering & Technology, Nagpur, Department of CSE • Session: 2017-18 • Branch/Sem: CSE, 3rd sem, "Introduction to AI" • Subject: Artificial Intelligence • Subject Teacher: Prof. K. A. Shendre • Artificial Intelligence- An Introduction

  2. Tentative Outline • Introductory Lecture- AI, Learning (Intro) • Logic, Bayesian reasoning • Statistical Models, Reinforcement Learning • Special Topics

  3. Obvious question • What is AI? • Programs that behave externally like humans? • Programs that operate internally as humans do? • Computational systems that behave intelligently? • Rational behaviour?

  4. Turing Test • Human beings are intelligent • To be called intelligent, a machine must produce responses that are indistinguishable from those of a human Alan Turing

  5. Does AI have applications? • Autonomous planning and scheduling of tasks aboard a spacecraft • Beating Garry Kasparov in a chess match • Steering a driver-less car • Understanding language • Robotic assistants in surgery • Monitoring trade in the stock market to see if insider trading is going on

  6. A rich history • Philosophy • Mathematics • Economics • Neuroscience • Psychology • Control Theory • John McCarthy coined the term "Artificial Intelligence" in the 1950s (the 1956 Dartmouth workshop)

  7. Philosophy • Dealt with questions like: • Can formal rules be used to draw valid conclusions? • Where does knowledge come from? How does it lead to action? • David Hume proposed the principle of induction (later) • Aristotle- • Given the end to achieve • Consider by what means to achieve it • Consider how the above will be achieved …till you reach the first cause • Last in the order of analysis = First in the order of action • If you reach an impossibility, abandon search

  8. Mathematics • Boolean Logic (mid 1800s) • Intractability (1960s) • Polynomial vs. exponential growth • Intelligent behaviour = tractable subproblems, not large intractable problems • Probability • Gerolamo Cardano (1500s) - probability in terms of outcomes of gambling events • George Boole, Gerolamo Cardano

  9. Economics • How do we make decisions so as to maximize payoff? • How do we do this when the payoff may be far in the future? • Concept of utility (early 1900’s) • Game Theory (mid 1900’s) Leon Walras

  10. Neuroscience • Study of the nervous system, esp. the brain • A collection of simple cells can lead to thought and action • Cycle time: human neurons ~ milliseconds; computer gates ~ nanoseconds • Yet, by operating massively in parallel, the brain is still ~100,000 times faster at what it does

  11. Psychology • Behaviourism- stimulus leads to response • Cognitive science • Computer models can be used to understand the psychology of memory, language and thinking • The brain is now thought of in terms of computer science constructs like I/O units, and processing center

  12. Control Theory • Ctesibius of Alexandria- water clock with a regulator • Purposeful behaviour as arising from a regulatory mechanism to minimize the difference between goal state and current state (“error”)

  13. Does AI meet EE? • Robotics- the science and technology of robots, their design, manufacture, and application. • Liar! (1941) Isaac Asimov

  14. Mechatronics- mechanics, electronics and computing which, combined, make possible the generation of simpler, more economical, reliable and versatile systems • Cybernetics- the study of communication and control, typically involving regulatory feedback, in living organisms, in machines, and in combinations of the two (Norbert Wiener)

  15. An Agent • ‘Anything’ that can gather information about its environment and take action based on that information.

  16. The Environment • What all do we need to specify? • The action space • The percept space • The environment as a string of mappings from the action space to the percept space

  17. The World Model • Perception function • World dynamics / State transition function • Utility function- how does the agent know what constitutes “good” or “bad” behaviour
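The world model of slides 15-17 (perception function, world dynamics / state transition, utility function) can be sketched as a minimal agent loop. All class and function names here are illustrative, not from any library:

```python
# Minimal sketch of the agent / world-model pieces described above.
# All names are illustrative, not from a library.

class Environment:
    """World dynamics: maps (state, action) to a new state."""
    def __init__(self, initial_state, transition):
        self.state = initial_state
        self.transition = transition      # state transition function

    def step(self, action):
        self.state = self.transition(self.state, action)
        return self.state

class Agent:
    """Gathers percepts about the environment and acts on them."""
    def __init__(self, perceive, utility, actions):
        self.perceive = perceive          # perception function: state -> percept
        self.utility = utility            # scores how "good" an outcome is
        self.actions = actions            # the action space

    def act(self, state):
        # A rational (but not omniscient) choice: pick the action whose
        # perceived outcome scores highest under the utility function.
        return max(self.actions,
                   key=lambda a: self.utility(self.perceive(state), a))
```

For example, on a toy number line where the goal is to reach 5, an agent with actions `[-1, 0, +1]` and utility `-abs((percept + action) - 5)` steps toward 5 and then stays there.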

  18. What is Rationality? • Goal • Information / Knowledge • The purpose of action is to reach the goal, given the information/knowledge possessed by the agent • Is not omniscience • The notion of rationality does not necessarily include success of the actions chosen

  19. Environments • Accessible/Inaccessible • Deterministic/Non-deterministic • Static/Dynamic • Discrete/Continuous • E.g. Driving a car, a game of Chinese-checkers

  20. Agents • Reactive agents • No memory • Agents with memory

  21. Planning • Planning a policy = considering the future consequences of actions to choose the best one

  22. Seems okay so far? • Computational constraints • Can we possibly specify EXACTLY the domain the agent will work in? • A look-up table of reactions to percepts is far too big • Most things that could happen, don't

  23. Learning • Incomplete information about the environment • A changing environment • Use the sequence of percepts to estimate the missing details • Hard for us to articulate the knowledge needed to build AI systems – e.g. try writing a program to recognize visual input like various types of flowers

  24. What is Learning? • Herb Simon- “Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the tasks drawn from the same population more efficiently and more effectively the next time.” • But why do we believe we have the license to predict the future?

  25. Induction • David Hume- Scottish philosopher, economist • All we can say, think, or predict about nature must come from prior experience • Bertrand Russell- "If asked why we believe the sun will rise tomorrow, we shall naturally answer, 'Because it has always risen every day.' " David Hume

  26. Classifying Learning Problems • Supervised learning- Given a set of input/output pairs, learn to predict the output if faced with a new input. • Unsupervised Learning- Learning patterns in the input when no specific output values are supplied. • Reinforcement Learning- Learn to interact with the world from the reinforcement you get.

  27. Functions • Given a sample set of inputs and corresponding outputs, find a function to express this relationship • Pronunciation= Function from letters to sound • Bowling= Function from target location (or trajectory?) to joint torques • Diagnosis= Function from lab results to disease categories

  28. Aspects of Function Learning • Memory • Averaging • Generalization

  29. Govindrao Wanjari College of Engineering & Technology, Nagpur, Department of CSE • Session: 2017-18 • Branch/Sem: CSE, 3rd sem, "Informed Search" • Subject: Artificial Intelligence • Subject Teacher: Prof. K. A. Shendre

  30. Best First • Store is replaced by sorted data structure • Knowledge added by the “sort” function • No guarantees yet – depends on qualities of the evaluation function • ~ Uniform Cost with user supplied evaluation function.

  31. Uniform Cost • Now assume edges have positive cost • Storage = priority queue, scored by path cost • or a sorted list with lowest values first • Select: choose the minimum-cost path • Add: maintains the order • Check: careful – goal-test only the minimum-cost node • Complete & optimal • Time & space complexity like breadth-first

  32. Uniform Cost Example • Root – A cost 1 • Root – B cost 3 • A – C cost 4 • B – C cost 1 • C is the goal state • Why is Uniform Cost optimal? • Expanding a node is not the same as goal-testing it

  33. Watch the queue • R/0 // Path/path-cost • R-A/1, R-B/3 • R-B/3, R-A-C/5 • Note: you don't goal-test a node when it is generated – you put it in the queue and test it only when it comes off with minimum cost • R-B-C/4, R-A-C/5 • Popping R-B-C/4 finds the goal at the optimal cost 4
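The queue trace above can be reproduced with a short uniform-cost search. This is a sketch using Python's `heapq` as the priority queue, on the graph from slide 32 (R-A/1, R-B/3, A-C/4, B-C/1, goal C):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Uniform-cost search: priority queue ordered by path cost.
    The goal test happens when a node is *popped* (i.e. has minimum
    cost), not when it is generated -- that is what makes it optimal."""
    frontier = [(0, start, [start])]          # (path cost, node, path)
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:                      # test only the minimum-cost node
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for succ, step_cost in graph.get(node, []):
            heapq.heappush(frontier, (cost + step_cost, succ, path + [succ]))
    return None

# The example from slide 32: R-A/1, R-B/3, A-C/4, B-C/1, C is the goal.
graph = {"R": [("A", 1), ("B", 3)], "A": [("C", 4)], "B": [("C", 1)]}
```

Running `uniform_cost_search(graph, "R", "C")` pops R-A-C/5 last, returning the cheaper path R-B-C at cost 4 — exactly the order shown in the queue trace.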

  34. Concerns • What knowledge is available? • How can it be added to the search? • What guarantees are there? • Time • Space

  35. Greedy/Hill-climbing Search • Adding heuristic h(n) • h(n) = estimated cost of the cheapest solution from state n to the goal • Require h(goal) = 0 • Complete – no; it can be misled

  36. Examples: • Route Finding: goal from A to B • straight-line distance from current to B • 8-tile puzzle: • number of misplaced tiles • number and distance of misplaced tiles

  37. A* • Combines greedy and Uniform Cost • f(n) = g(n) + h(n) where • g(n) = current path cost to node n • h(n) = estimated cost from n to the goal • If h(n) <= true cost to the goal, then h is admissible • Best-first search using f with an admissible h is A* • Theorem: A* is optimal and complete

  38. Admissibility? • Route Finding: goal from A to B • straight-line distance from current to B • Less than true distance? • 8-tile puzzle: • number of misplaced tiles • Less than number of moves? • number and distance of misplaced tiles • Less than number of moves?
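A minimal A* sketch following the definition on slide 37, f(n) = g(n) + h(n). The heuristic values in the example are made up for illustration, but chosen to be admissible for the slide-32 graph (they never exceed the true cost to C):

```python
import heapq

def a_star(graph, start, goal, h):
    """A* search: expand in order of f(n) = g(n) + h(n).
    With an admissible h (h never overestimates the true cost to the
    goal, and h(goal) = 0), the first time the goal is popped the
    path found is optimal."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        for succ, step_cost in graph.get(node, []):
            g2 = g + step_cost
            heapq.heappush(frontier, (g2 + h(succ), g2, succ, path + [succ]))
    return None

# Slide-32 graph with illustrative admissible heuristic values.
graph = {"R": [("A", 1), ("B", 3)], "A": [("C", 4)], "B": [("C", 1)]}
h_values = {"R": 3, "A": 2, "B": 1, "C": 0}      # each <= true cost to C
```

With h ≡ 0 this reduces to uniform cost, as slide 40 notes; with the table above, `a_star(graph, "R", "C", h_values.get)` still returns the optimal path R-B-C at cost 4, but guided by f rather than g alone.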

  39. A* Properties • Dechter and Pearl: A* is optimal among all algorithms using h (any such algorithm must search at least as many nodes) • If 0 <= h1 <= h2 and h2 is admissible, then h1 is admissible and A* with h1 will search at least as many nodes as with h2 – so bigger is better • Sub-exponential if the error in h is within (approximately) the log of the true cost

  40. A* special cases • Suppose h(n) = 0 => Uniform Cost • Suppose every step cost is 1 and h(n) = 0 => Breadth First • If a non-admissible heuristic is allowed: • g(n) = 0, h(n) = 1/depth => Depth First • One code, many algorithms

  41. Heuristic Generation • Relaxation: make the problem simpler • Route-Planning • don't worry about paths: go straight • 8-tile puzzle • don't worry about physical constraints: pick up a tile and move it to its correct position • better: allow sliding over existing tiles • TSP • minimum spanning tree gives a lower bound on the tour • A heuristic should be easy to compute

  42. Iterative Deepening A* • Like iterative deepening, but: • Replaces depth limit with f-cost • Increase f-cost by smallest operator cost. • Complete and optimal
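The idea on this slide — depth-first search bounded by f-cost, with the bound then raised to the smallest f-value that exceeded it — can be sketched as follows (illustrative, not library code):

```python
def ida_star(graph, start, goal, h):
    """Iterative Deepening A*: depth-first search bounded by
    f = g + h; on failure, the bound grows to the smallest f-value
    that exceeded it, and the search restarts."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None                  # report the overflowing f-value
        if node == goal:
            return f, path
        next_bound = float("inf")
        for succ, cost in graph.get(node, []):
            if succ in path:                # avoid cycles on the current path
                continue
            t, found = dfs(succ, g + cost, bound, path + [succ])
            if found is not None:
                return t, found
            next_bound = min(next_bound, t)
        return next_bound, None

    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == float("inf"):           # no path at any bound
            return None
```

On the slide-32 graph the first pass (bound = h(R)) fails, the bound rises to 4, and the second pass finds R-B-C — optimal, like A*, but with only depth-first memory.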

  43. SMA* • Memory Bounded version due to authors • Beware authors. • SKIP

  44. Hill-climbing • Goal: optimizing an objective function • Does not require differentiable functions • Can be applied to "goal"-predicate problems, e.g. Boolean SAT with the number of satisfied clauses as the objective function • Intuition: always move to a better state

  45. Some Hill-Climbing Algorithms • Start = random state or a special state • Until (no improvement): • Steepest Ascent: find the best successor • OR (greedy): select the first improving successor • Go to that successor • Repeat the above process some number of times (Restarts) • Can be done with partial solutions or full solutions

  46. Hill-climbing Algorithm • In Best-first, replace storage by a single node • Works if there is a single hill • Use restarts if there are multiple hills • Problems: • finds a local maximum, not the global one • plateaux: large flat regions (happens in BSAT) • ridges: fast up the ridge, slow along the ridge • Not complete, not optimal • No memory problems

  47. Beam • Mix of hill-climbing and best first • Storage is a cache of best K states • Solves storage problem, but… • Not optimal, not complete

  48. Local (Iterative) Improving • Initial state = full candidate solution • Greedy hill-climbing: • if up, do it • if flat, probabilistically decide to accept move • if down, don’t do it • We are gradually expanding the possible moves.

  49. Local Improving: Performance • Solves 1,000,000 queen problem quickly • Useful for scheduling • Useful for BSAT • solves (sometimes) large problems • More time, better answer • No memory problems • No guarantees of anything

  50. Simulated Annealing • Like hill-climbing, but probabilistically allows down moves, controlled by current temperature and how bad move is. • Let t[1], t[2],… be a temperature schedule. • usually t[1] is high, t[k] = 0.9*t[k-1]. • Let E be quality measure of state • Goal: maximize E.
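The description above — probabilistically allow down moves, controlled by the current temperature and by how bad the move is, with schedule t[k] = 0.9 * t[k-1] — can be sketched as below. The acceptance probability exp(dE / t) is the usual Metropolis choice; the slide does not specify it, so treat that detail as an assumption:

```python
import math
import random

def simulated_annealing(E, neighbor, start, t0=10.0, decay=0.9, steps=200):
    """Maximize E. Uphill moves are always accepted; downhill moves are
    accepted with probability exp(dE / t), so worse moves become rarer
    as t cools (schedule from the slide: t[k] = 0.9 * t[k-1])."""
    current, t = start, t0
    best = current
    for _ in range(steps):
        candidate = neighbor(current)
        dE = E(candidate) - E(current)      # > 0 means an uphill move
        if dE >= 0 or random.random() < math.exp(dE / t):
            current = candidate
        if E(current) > E(best):
            best = current                  # remember the best state seen
        t = max(decay * t, 1e-9)            # cool; avoid division by zero
    return best
```

At high temperature this behaves like a random walk (escaping local maxima); as t approaches 0 it degenerates into plain hill-climbing, which is exactly the relationship the slide draws.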
