
Artificial Intelligence

This lecture transcript covers blind (uninformed) search methods in AI, including depth-first, uniform-cost, depth-limited, iterative-deepening, and bidirectional search. It also covers the criteria for evaluating search strategies (completeness, optimality, time and space complexity), practical notes on node bookkeeping, and how to avoid repeated states; informed (heuristic) search is previewed at the end.


Presentation Transcript


  1. Artificial Intelligence CS 165A, Thursday, October 11, 2007 • Today: • Blind search (Ch. 3) • Informed (heuristic) search methods (Ch. 4)

  2. Notes • 8-puzzle problem • There are two disjoint sets of states! (see the parity sketch below) • Homework assignment #1 posted • Due Tuesday, October 23rd • May work in groups of two • Get going on it right away! • Midterm scheduled: November 13th • “Mid-to-late-term exam”
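
The “two disjoint sets of states” remark refers to the parity argument for the 8-puzzle: a state can only reach states with the same inversion parity. A minimal Python sketch of that check; the function names and the flat-tuple-with-0-for-the-blank representation are illustrative, not from the course materials:

    def inversions(state):
        """Count inversions among tiles 1-8; 0 marks the blank and is ignored."""
        tiles = [t for t in state if t != 0]
        return sum(1 for i in range(len(tiles))
                     for j in range(i + 1, len(tiles))
                     if tiles[i] > tiles[j])

    def same_component(a, b):
        """On a 3x3 board, two states are mutually reachable iff their
        inversion counts have the same parity."""
        return inversions(a) % 2 == inversions(b) % 2

    # The goal is reachable from the second state but not the first:
    goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
    print(same_component((1, 2, 3, 4, 5, 6, 8, 7, 0), goal))  # False
    print(same_component((1, 2, 3, 4, 5, 6, 7, 0, 8), goal))  # True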

  3. Review: Search Criteria • Primary criteria to evaluate search strategies • Completeness • Is it guaranteed to find a solution (if one exists)? • Optimality • Does it find the “best” solution (if there is more than one)? • Time complexity • Number of nodes generated/expanded • (How long does it take to find a solution?) • Space complexity • How much memory does it require? • Some performance measures • Best case • Worst case • Average case • Real-world case

  4. Review: General Search Algorithm (Version 2) • Uses a queue (a list) and a queuing function to implement a search strategy • Queuing-Fn(queue, elements) inserts a set of elements into the queue and determines the order of node expansion

     function GENERAL-SEARCH(problem, QUEUING-FN) returns a solution or failure
       nodes ← MAKE-QUEUE(MAKE-NODE(INITIAL-STATE[problem]))
       loop do
         if nodes is empty then return failure
         node ← REMOVE-FRONT(nodes)
         if GOAL-TEST[problem] applied to STATE(node) succeeds then return node
         nodes ← QUEUING-FN(nodes, EXPAND(node, OPERATORS[problem]))
       end
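
A minimal Python sketch of this skeleton; the problem interface (initial_state, is_goal, successors) and the (state, path, cost) node tuples are my own illustrative choices, not the course code:

    from collections import deque

    def general_search(problem, queuing_fn):
        """Generic search skeleton: queuing_fn alone decides the expansion order.
        Assumes problem exposes initial_state, is_goal(state), and
        successors(state) yielding (action, next_state, step_cost)."""
        nodes = deque([(problem.initial_state, [], 0)])   # (state, path, path cost)
        while nodes:
            state, path, cost = nodes.popleft()           # REMOVE-FRONT
            if problem.is_goal(state):                    # GOAL-TEST
                return path                               # solution: list of actions
            children = [(s, path + [a], cost + c)         # EXPAND
                        for a, s, c in problem.successors(state)]
            nodes = queuing_fn(nodes, children)           # QUEUING-FN
        return None                                       # failure

    # Breadth-first search is one instance: enqueue children at the end (FIFO).
    def enqueue_at_end(nodes, children):
        nodes.extend(children)
        return nodes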

  5. Depth-First Search • Always expands one of the nodes at the deepest level of the tree • Low memory requirements • Problem: depth could be infinite • Uses a LIFO queue

     function DEPTH-FIRST-SEARCH(problem) returns a solution or failure
       return GENERAL-SEARCH(problem, ENQUEUE-AT-FRONT)
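
In the sketch above, depth-first search only changes the queuing function: children go on the front of the queue (LIFO). Again just a sketch:

    # Depth-first: insert children at the front so the deepest node is expanded next.
    def enqueue_at_front(nodes, children):
        nodes.extendleft(reversed(children))   # reversed() keeps left-to-right child order
        return nodes

    # e.g. depth_first_search = lambda problem: general_search(problem, enqueue_at_front)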

  6. Thursday Quiz • Given the initial node s, a goal node g, and the current node n: What is a heuristic function h(n) of the current node? • What does it mean for a search routine to be complete? Is breadth-first search complete?

  7. Example • [figure: state-space graph over nodes A–F and the corresponding search tree] • Queue contents at each step: (A) (B C) (D C) (C) (B D E) (D D E) (D E) (E) (F)

  8. Example: MU-Puzzle • State space description • Start state: MI • Goal state: MU • Operators: • xI → xIU • Mx → Mxx • III → U • UU → (null) • [figure: search tree rooted at MI] • Queue: (MI) (MIU MII) (MIUIU MII) (MIUIUIUIU MII) (MIUIUIUIUIUIUIUIU MII) …
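
The four operators are easy to express as a successor function; a sketch assuming states are plain strings such as 'MI' (not wired to any particular search code):

    def mu_successors(s):
        """All states reachable from s in one step of the MU-puzzle."""
        succs = []
        if s.endswith('I'):                       # xI -> xIU
            succs.append(s + 'U')
        if s.startswith('M'):                     # Mx -> Mxx
            succs.append('M' + s[1:] * 2)
        for i in range(1, len(s) - 2):            # III -> U
            if s[i:i + 3] == 'III':
                succs.append(s[:i] + 'U' + s[i + 3:])
        for i in range(1, len(s) - 1):            # UU -> (null)
            if s[i:i + 2] == 'UU':
                succs.append(s[:i] + s[i + 2:])
        return succs

    print(mu_successors('MI'))   # ['MIU', 'MII'], matching the queue trace above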

  9. Note on depth-first search • Is depth-first this? Or this? [figure: two possible expansion orders] • That is, when a node is expanded, is it expanded fully? • For our purposes, YES • Open all children of each node

  10. Depth-First Search • Complete? No • Optimal? No • Time complexity? Exponential: O(b^m) • Space complexity? Polynomial: O(bm) • b = maximum branching factor of the search tree • d = depth of an optimal solution (may be more than one) • m = maximum depth of the search tree (may be infinite)

  11. O(bm) • Why is the space complexity (memory usage) of depth-first search O(bm)? • Remove an expanded node once all of its descendants have been evaluated • At each of the m levels, you have to keep b nodes in memory • Example: b = 3, m = 6 • Nodes in memory: bm + 1 = 19 • Actually, (b-1)m + 1 = 13 nodes, the way we have been keeping our node list

  12. Uniform Cost Search • Similar to breadth-first search, but always expands the lowest-cost node, as measured by the path cost function g(n) • g(n) is the (actual) cost of getting to node n • Breadth-first search is actually a special case of uniform cost search, where g(n) = DEPTH(n) • If the path cost is monotonically increasing, uniform cost search will find the optimal solution

     function UNIFORM-COST-SEARCH(problem) returns a solution or failure
       return GENERAL-SEARCH(problem, ENQUEUE-IN-COST-ORDER)
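
Continuing the earlier sketch, uniform-cost search needs a queuing function that keeps the frontier ordered by g(n); sorting the whole deque is inefficient but shows the idea (a heap would be the practical choice):

    from collections import deque

    # Uniform cost: keep the frontier sorted by path cost g(n), so REMOVE-FRONT
    # always returns the cheapest node. Nodes are (state, path, cost) tuples.
    def enqueue_in_cost_order(nodes, children):
        nodes.extend(children)
        return deque(sorted(nodes, key=lambda node: node[2]))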

  13. Example • [figure: weighted state-space graph over nodes A–F with edge costs] • Try breadth-first and uniform-cost search on it

  14. Uniform-Cost Search • Complete? Yes • Optimal? Yes • Time complexity? Exponential: O(b^d) • Space complexity? Exponential: O(b^d) • Same as breadth-first

  15. Depth-Limited Search • Like depth-first search, but uses a depth cutoff to avoid long (possibly infinite), unfruitful paths • Do depth-first search up to depth limit l • Depth-first is the special case with l = ∞ • Problem: How to choose the depth limit l? • Some problem statements make it obvious (e.g., TSP), but others don’t (e.g., MU-puzzle) • Must explicitly represent node depth

     function DEPTH-LIMITED-SEARCH(problem, depth-limit) returns a solution or failure
       return GENERAL-SEARCH(problem, ENQUEUE-AT-FRONT-IF-UNDER-DEPTH-LIMIT)
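
In the same sketch, the depth limit can live inside the queuing function: children beyond the limit are simply never enqueued (node depth here is the length of the action path):

    # Returns a queuing function that drops children deeper than the limit.
    def enqueue_at_front_if_under(limit):
        def queuing_fn(nodes, children):
            kept = [c for c in children if len(c[1]) <= limit]   # depth = len(path)
            nodes.extendleft(reversed(kept))
            return nodes
        return queuing_fn

    # e.g. depth-limited search with l = 3:
    # general_search(problem, enqueue_at_front_if_under(3))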

  16. Depth-Limited Search • Complete? No, unless d ≤ l • Optimal? No • Time complexity? Exponential: O(b^l) • Space complexity? Polynomial: O(bl) • b = maximum branching factor of the search tree • d = depth of an optimal solution (may be more than one) • m = maximum depth of the search tree (may be infinite) • l = depth limit

  17. Iterative-Deepening Search • Since the depth limit is difficult to choose in depth-limited search, use depth limits of l = 0, 1, 2, 3, … • Do depth-limited search at each level

     function ITERATIVE-DEEPENING-SEARCH(problem) returns a solution or failure
       for depth ← 0 to ∞ do
         if DEPTH-LIMITED-SEARCH(problem, depth) succeeds then return result
       end
       return failure
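
Combining the pieces above gives an iterative-deepening sketch; it reuses general_search and enqueue_at_front_if_under from the earlier sketches and, like the pseudocode, loops forever if no solution exists at any depth:

    from itertools import count

    def iterative_deepening_search(problem):
        """Run depth-limited search with limits 0, 1, 2, ... until one succeeds."""
        for depth in count():
            result = general_search(problem, enqueue_at_front_if_under(depth))
            if result is not None:
                return result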

  18. Iterative-Deepening Search • IDS has advantages of • Breadth-first search – Optimal and complete • Depth-first search – Modest memory requirements • This is the preferred blind search method when the search space is large and the solution depth is unknown • Many states are expanded multiple times • Is this terribly inefficient? • No… and it’s great for memory (compared with breadth-first) • Why is it not particularly inefficient?

  19. IDS efficiency • When d = 2, the penalty for IDS is almost 100% • When d = 3, the penalty for IDS is about 50% • When d = 10, the penalty for IDS is about 11% • When d = 35, the penalty for IDS is about 3% This assumes the solution is in the “right bottom corner”
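
The exact penalty depends on the branching factor and on where the goal sits; a small back-of-the-envelope sketch for counting generated nodes (the b and d values below are my own example, not the lecture's assumptions):

    def nodes_dls(b, d):
        """Nodes generated by one depth-limited search of a full tree to depth d."""
        return sum(b ** i for i in range(d + 1))

    def nodes_ids(b, d):
        """Nodes generated by iterative deepening: depth-limited searches to 0..d."""
        return sum(nodes_dls(b, limit) for limit in range(d + 1))

    b, d = 10, 5
    print(nodes_ids(b, d), nodes_dls(b, d))          # 123456 vs 111111
    print(nodes_ids(b, d) / nodes_dls(b, d) - 1)     # ~0.11, about an 11% penalty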

  20. Iterative-Deepening Search • Complete? Yes • Optimal? Yes • Time complexity? Exponential: O(b^d) • Space complexity? Polynomial: O(bd) • b = maximum branching factor of the search tree • d = depth of an optimal solution (may be more than one) • m = maximum depth of the search tree (may be infinite)

  21. O(bd) • Why is the space complexity (memory usage) of iterative-deepening search O(bd)? • At each of the d levels, you have to keep b nodes in memory • Example: b = 3, d = 6 • Nodes in memory: bd + 1 = 19 • Actually, (b-1)d + 1 = 13 nodes, the way we have been keeping our node list

  22. Bidirectional Search • Forward search only: [figure]

  23. Bidirectional Search • Simultaneously search forward from the initial state and backward from the goal state • Much more efficient!

  24. Bidirectional Search • O(b^(d/2)) rather than O(b^d), hopefully • Both actions and predecessors (inverse actions) must be defined • Must test for intersection between the two searches • Constant time for the test? • Really a search strategy, not a specific search method • Often not practical… • Example: 4^10 ≈ 1,000,000 vs 2 × 4^5 ≈ 2,000
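
The intersection test is cheap if each side keeps its reached states in a hash set; a tiny illustrative sketch (the state names below are made up):

    # States reached so far from each direction (illustrative values).
    forward_seen = {"A", "B", "D"}     # expanding from the initial state
    backward_seen = {"F", "E", "D"}    # expanding from the goal state

    # Set intersection; each membership test is O(1) on average with hashing.
    meeting_points = forward_seen & backward_seen
    if meeting_points:
        print("The searches meet at", meeting_points)   # -> {'D'}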

  25. Bidirectional Search • Complete? Yes • Optimal? Yes • Time complexity? Exponential: O(b^(d/2)) • Space complexity? Exponential: O(b^(d/2)) • (Assuming breadth-first search is used, and no misses!)

  26. Summary of Search Criteria • b = max branching factor of the search tree • d = depth of the least-cost solution • m = max depth of the state space (may be infinite) • l = depth cutoff • (Slightly different from the textbook)

  27. When to Use Which Method? • What kinds of problems are most appropriate (or inappropriate) for each type of search method? • Hmm, that would be a good test question….

  28. Practical note about search algorithms • Why might you want to know this? • The computer can’t “see” the search graph like we can • No “bird’s-eye view”: make relevant information explicit! • What information should you keep for a node in the search tree? (see the sketch below) • State, e.g. (1 2 0) • Parent node (or perhaps complete ancestry), e.g. node #3 (or nodes 0, 2, 5, 11, 14) • Depth of the node, e.g. d = 4 • Path cost up to (and including) the node, e.g. g(node) = 12 • Operator that produced this node, e.g. operator #1
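
That bookkeeping maps naturally onto a small record type; a sketch with illustrative field names, not the course's actual data structure:

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        """One search-tree node carrying the information listed above."""
        state: Any                       # e.g. (1, 2, 0)
        parent: Optional["Node"] = None  # full ancestry recoverable by following parents
        depth: int = 0                   # e.g. d = 4
        path_cost: float = 0.0           # e.g. g(node) = 12
        operator: Any = None             # the operator that produced this node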

  29. Avoiding repeated states • It may be beneficial to explicitly avoid repeated states, especially where there are loops in the state graph and/or reversible operators • The search space can be very significantly pruned • How to deal with repeated states: • Do not return to the state we just came from (parent node) • Do not create paths with cycles (ancestor nodes) • Do not generate any state that was ever generated before (complete tree; see the sketch below) • But there is a cost • Have to keep track (in memory) of every state generated
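
The most aggressive option (never regenerate any previously generated state) amounts to adding a "seen" set to the earlier skeleton; a sketch assuming states are hashable:

    from collections import deque

    def graph_search(problem, queuing_fn):
        """Like general_search, but never enqueues a state generated before.
        The cost mentioned above: 'seen' holds every state ever generated."""
        seen = {problem.initial_state}
        nodes = deque([(problem.initial_state, [], 0)])
        while nodes:
            state, path, cost = nodes.popleft()
            if problem.is_goal(state):
                return path
            children = [(s, path + [a], cost + c)
                        for a, s, c in problem.successors(state)
                        if s not in seen]
            seen.update(s for s, _, _ in children)
            nodes = queuing_fn(nodes, children)
        return None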

  30. Avoiding repeated states: Example • For my missionaries-and-cannibals (M&C) implementation in Lisp: • With no checking: 11,851 nodes expanded, 760 MB • Checking parent: 55 nodes expanded, 17 kB • Checking ancestors: 26 nodes expanded, 7 kB • Checking all expanded nodes: 14 nodes, 4 kB memory

  31. Search as problem-solving • We have defined the problem • Data • States, initial state(s), goal test/state(s), path cost function • Operations • State transitions • Control – Search to find paths from initial states to goal states • Build up a search tree whose nodes and arcs correspond to nodes and arcs in the state space graph • Solution: “Best” path through the state space

  32. Next: • Informed (heuristic) search methods (Ch 4)
