
Local Search Warm-up



Presentation Transcript


  1. Local Search Warm-up How to find the top of Mount Everest in a thick fog while suffering from amnesia?

  2. AI: Representation and Problem Solving Local Search Instructors: Fei Fang & Pat Virtue Slide credits: CMU AI, http://ai.berkeley.edu

  3. Learning Objectives • Describe and implement the following local search algorithms • Iterative improvement algorithm with min-conflict heuristic for CSPs • Hill Climbing (Greedy Local Search) • Random Walk • Simulated Annealing • Beam Search • Genetic Algorithm • Identify completeness and optimality of local search algorithms • Compare different local search algorithms as well as contrast with classical search algorithms • Select appropriate local search algorithms for real-world problems

  4. Local Search • Can be applied to identification problems (e.g., CSPs), as well as some planning and optimization problems • Typically use a complete-state formulation, e.g., all variables assigned in a CSP (may not satisfy all the constraints)

  5. Iterative Improvement for CSPs

  6. Iterative Improvement for CSPs • Start with an arbitrary assignment, iteratively reassign variable values • While not solved, • Variable selection: randomly select a conflicted variable • Value selection with min-conflicts heuristic: choose a value that violates the fewest constraints (break ties randomly) • For n-Queens: Variables Q1, …, Qn, where Qi is the row of the queen in column i; Constraints: Qi ≠ Qj and |Qi − Qj| ≠ |i − j| for all i ≠ j
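The loop above can be sketched in Python for n-Queens. A minimal sketch; the names `min_conflicts` and `conflicts` and the step limit are illustrative choices, not from the slides.

```python
import random

def conflicts(assign, col, row, n):
    """Count queens in other columns that attack a queen at (col, row)."""
    return sum(
        1
        for c in range(n)
        if c != col and (assign[c] == row or abs(assign[c] - row) == abs(c - col))
    )

def min_conflicts(n, max_steps=100_000):
    """Iterative improvement for n-Queens: assign[c] = row of the queen in column c."""
    assign = [random.randrange(n) for _ in range(n)]       # arbitrary complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(assign, c, assign[c], n) > 0]
        if not conflicted:
            return assign                                  # all constraints satisfied
        col = random.choice(conflicted)                    # variable selection: random conflicted var
        scores = [conflicts(assign, col, r, n) for r in range(n)]
        best = min(scores)                                 # min-conflicts value selection,
        assign[col] = random.choice(                       # ties broken randomly
            [r for r in range(n) if scores[r] == best])
    return None                                            # step limit reached without a solution
```

On 8-Queens this typically converges after only a handful of reassignments, which is consistent with the near-constant runtime claimed on slide 9.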

  7. Demo – n-Queens • [Demo: n-queens – iterative improvement (L5D1)]

  8. Demo – Graph Coloring

  9. Iterative Improvement for CSPs • Given a random initial state, can solve n-Queens in almost constant time for arbitrary n with high probability (e.g., n = 10,000,000)! • The same holds for any randomly-generated CSP, except in a narrow range of the ratio of constraints to variables

  10. Local Search • A local search algorithm is… • Complete if it always finds a goal if one exists • Optimal if it always finds a global minimum/maximum Is Iterative Improvement for CSPs complete? No! It may get stuck in a local optimum

  11. State-Space Landscape In identification problems, the objective could be a function measuring how close you are to a valid solution, e.g., the number of conflicts in n-Queens/CSPs What’s the difference between a shoulder and a flat local maximum (both are plateaux)?

  12. Hill Climbing (Greedy Local Search) • Simple, general idea: • Start wherever • Repeat: move to the best “neighboring” state (successor state) • If no neighbors are better than current, quit Complete? No! Optimal? No!
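A minimal Python sketch of this loop; the helper names `neighbors` and `value` and the toy objective are assumptions for illustration:

```python
def hill_climb(state, neighbors, value):
    """Greedy local search: repeatedly move to the best successor;
    quit when no neighbor is strictly better than the current state."""
    while True:
        best = max(neighbors(state), key=value, default=None)
        if best is None or value(best) <= value(state):
            return state          # a local (not necessarily global) maximum
        state = best

# Toy run: maximize f(x) = -(x - 3)^2 over the integers, moving +/- 1.
f = lambda x: -(x - 3) ** 2
peak = hill_climb(0, lambda x: [x - 1, x + 1], f)
```

Because it stops as soon as no strict improvement exists, this version is neither complete nor optimal, matching the slide's answer.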

  13. Hill Climbing (Greedy Local Search) How to apply Hill Climbing to n-Queens? How is it different from Iterative Improvement? Define a state as a board with n queens on it, one in each column Define a successor (neighbor) of a state as one that is generated by moving a single queen to another square in the same column How many successors? (n(n − 1): each of the n queens can move to n − 1 other squares in its column)

  14. Hill Climbing (Greedy Local Search) What if there is a tie? Typically break ties randomly Make a sideways move if the best successor has the same value as the current state What if we do not stop there? • In 8-Queens, steepest-ascent hill climbing solves 14% of problem instances • Takes 4 steps on average when it succeeds, and 3 steps when it fails • When allowed up to 100 consecutive sideways moves, it solves 94% of problem instances • Takes 21 steps on average when it succeeds, and 64 steps when it fails

  15. Variants of Hill Climbing • Random-restart hill climbing • “If at first you don’t succeed, try, try again.” • Complete! • On what kind of landscape does random-restart hill climbing work best? • Stochastic hill climbing • Choose randomly from among the uphill moves, with probability dependent on the “steepness” (i.e., amount of improvement) • Converges more slowly than steepest ascent, but may find better solutions • First-choice hill climbing • Generate successors randomly (one by one) until one better than the current state is found • Suitable when there are too many successors to enumerate

  16. Variants of Hill Climbing • What if variables are continuous, e.g., find x that maximizes f(x)? • Gradient ascent • Use the gradient to find the best direction • Use the magnitude of the gradient to determine how big a step to take
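For a single continuous variable, gradient ascent can be sketched as below; the learning rate, step count, and toy objective are illustrative assumptions:

```python
def gradient_ascent(grad, x, lr=0.1, steps=1000):
    """Step in the gradient direction; the step size scales with the
    gradient's magnitude, so it shrinks automatically near a maximum."""
    for _ in range(steps):
        x = x + lr * grad(x)
    return x

# Maximize f(x) = -(x - 2)^2, whose gradient is f'(x) = -2(x - 2).
x_star = gradient_ascent(lambda x: -2 * (x - 2), x=0.0)
```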

  17. Piazza Poll: Hill Climbing 1. Starting from X, where do you end up? 2. Starting from Y, where do you end up? 3. Starting from Z, where do you end up? (Answer choices A, B, and C refer to points on a landscape figure.)

  18. Random Walk • Uniformly randomly choose a neighbor to move to • Complete but inefficient!

  19. Simulated Annealing • Combines random walk and hill climbing • Complete and efficient • Inspired by statistical physics • Annealing – Metallurgy • Heating metal to a high temperature, then cooling it slowly • Reaching a low-energy state • Simulated Annealing – Local Search • Allow downhill moves, and make them rarer as time goes on • Escape local maxima and reach a global maximum

  20. Simulated Annealing Control the change of the temperature T over time (the cooling schedule) Almost the same as hill climbing, except a successor is chosen at random Unlike hill climbing, it moves downhill with some probability

  21. Simulated Annealing • Bad moves are more likely to be allowed when T is high (at the beginning of the algorithm) • Worse moves are less likely to be allowed • Stationary distribution: p(x) ∝ e^(E(x)/T) (the Boltzmann distribution) • Guarantee: If T is decreased slowly enough, the algorithm will converge to an optimal state! • But! In reality, the more downhill steps you need to escape a local optimum, the less likely you are to ever make them all in a row
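A sketch of the algorithm in Python. The geometric cooling schedule, the acceptance rule exp(delta/T), and the toy 1-D objective are illustrative choices; the slides do not fix a particular schedule.

```python
import math
import random

def simulated_annealing(state, neighbor, value, t0=1.0, cooling=0.995, t_min=1e-3):
    """Hill climbing with a random successor: uphill moves are always
    accepted; downhill moves are accepted with prob. exp(delta / T)."""
    t, best = t0, state
    while t > t_min:
        nxt = neighbor(state)
        delta = value(nxt) - value(state)
        if delta > 0 or random.random() < math.exp(delta / t):
            state = nxt               # downhill acceptance gets rarer as T cools
        if value(state) > value(best):
            best = state              # remember the best state seen so far
        t *= cooling                  # geometric cooling schedule (one common choice)
    return best

# Toy run: maximize a bumpy 1-D objective over the integers 0..100.
f = lambda x: -abs(x - 70) + 5 * math.sin(x)
step = lambda x: max(0, min(100, x + random.choice([-1, 1])))
result = simulated_annealing(0, step, f)
```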

  22. Local Beam Search • Keep track of k states • In each iteration • Generate all successors of all k states • Only retain the k best successors among them all How is this different from running k local searches in parallel with different initial states? The searches communicate! “Come over here, the grass is greener!” Analogous to evolution / natural selection!
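A sketch of the loop in Python; the function name and toy objective are illustrative. Note how states starting far from the peak are dropped once the pooled successors from better regions outrank them, which is exactly the "communication" between searches:

```python
import heapq

def local_beam_search(starts, neighbors, value, iters=100):
    """Pool all successors of all k current states and keep the k best
    overall, so promising regions attract the whole beam."""
    k = len(starts)
    states = list(starts)
    for _ in range(iters):
        pool = set(states) | {s for st in states for s in neighbors(st)}
        states = heapq.nlargest(k, pool, key=value)   # k best across the whole pool
    return max(states, key=value)

# Toy run: three starts, one objective peak at x = 50; the beam quickly
# abandons the start at 90 and converges on the peak together.
f = lambda x: -(x - 50) ** 2
best = local_beam_search([0, 20, 90], lambda x: [x - 1, x + 1], f)
```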

  23. Limitations and Variants of Local Beam Search • Suffers from a lack of diversity; the states can quickly concentrate in a small region of the state space • Variant: Stochastic beam search • Randomly choose k successors (offspring), with the probability of choosing a given state (organism) proportional to its objective value (fitness)

  24. Genetic Algorithms • Inspired by evolutionary biology • Nature provides an objective function (reproductive fitness) that Darwinian evolution could be seen as attempting to optimize • A variant of stochastic beam search • Successors are generated by combining two parent states instead of modifying a single state (sexual reproduction rather than asexual reproduction)

  25. Genetic Algorithms for 8-Queens • State Representation: 8-digit string, each digit in {1, …, 8} (digit i = row of the queen in column i) • Fitness Function: number of non-attacking pairs of queens (28 for a solution) • Selection: Select individuals randomly with probability proportional to their fitness value (random selection with replacement) • Crossover: For each pair, choose a crossover point c, generate two offspring by crossing over the parent strings at c • Mutation (with some prob.): Choose a digit and change it to a different value in {1, …, 8} What if the population size is an odd number?

  26. Genetic Algorithms for 8-Queens • Why does crossover make sense here? • Would crossover work well without a selection operator?

  27. Genetic Algorithms • Start with a population of individuals (states) • In each iteration • Apply a fitness function to each individual in the current population • Apply a selection operator to select pairs of parents • Generate offspring by applying a crossover operator to the parents • For each offspring, apply a mutation operator with a (usually small) independent probability • For a specific problem, need to design these functions and operators • Successful use of genetic algorithms requires careful engineering of the state representation!
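The four operators can be put together for 8-Queens roughly as below. A sketch under assumptions: rows are encoded 0–7 rather than the slides' 1–8, population size and mutation rate are illustrative, and fitness weights get +1 so selection never sees an all-zero weight vector.

```python
import random

def fitness(s):
    """Number of non-attacking pairs of queens (28 = solved for 8 queens)."""
    attacks = sum(
        1
        for i in range(8) for j in range(i + 1, 8)
        if s[i] == s[j] or abs(s[i] - s[j]) == j - i
    )
    return 28 - attacks

def genetic_8queens(pop_size=100, generations=500, p_mut=0.1):
    """One generation = fitness -> selection -> crossover -> mutation."""
    pop = [[random.randrange(8) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        if any(fitness(s) == 28 for s in pop):
            break                                            # a solution appeared
        weights = [fitness(s) + 1 for s in pop]
        new_pop = []
        for _ in range(pop_size // 2):
            p1, p2 = random.choices(pop, weights=weights, k=2)   # fitness-proportional selection
            c = random.randrange(1, 8)                           # crossover point
            for kid in (p1[:c] + p2[c:], p2[:c] + p1[c:]):
                if random.random() < p_mut:                      # mutate one digit to a new value
                    i = random.randrange(8)
                    kid[i] = random.choice([v for v in range(8) if v != kid[i]])
                new_pop.append(kid)
        pop = new_pop
    return max(pop, key=fitness)

best = genetic_8queens()
```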

  28. Genetic Algorithms How is this different from the illustrated procedure on 8-Queens?

  29. Exercise: Traveling Salesman Problem • Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city? • Input: the pairwise distances d(i, j) between cities 1, …, n • Output: An ordered sequence (x0, x1, …, xn) with x0 = xn, and all other indices show up exactly once • Question: How to apply local search algorithms to this problem?
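One classic answer, sketched here for illustration (the slides leave the exercise open), is hill climbing on complete tours where a neighbor reverses one segment of the tour (a 2-opt move):

```python
def tour_length(tour, dist):
    """Total length of a closed tour (returns to the origin city)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Hill climbing on tours: keep applying segment reversals (2-opt
    moves) that shorten the route until no improving move remains."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(n - 1):
            for j in range(i + 2, n):
                cand = tour[:i + 1] + tour[i + 1:j + 1][::-1] + tour[j + 1:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

# Toy instance: 4 cities at the corners of a unit square; the optimal
# tour walks the perimeter and has length 4.
d = 2 ** 0.5  # diagonal distance
dist = [[0, 1, d, 1],
        [1, 0, 1, d],
        [d, 1, 0, 1],
        [1, d, 1, 0]]
best = two_opt([0, 2, 1, 3], dist)  # start from a crossing (suboptimal) tour
```

Each accepted move strictly shortens the tour, so the loop terminates at a local optimum; random restarts or simulated annealing can then be layered on top, just as for the other problems in this lecture.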

  30. Summary: Local Search • Maintain a constant number of current nodes or states, and move to “neighbors” or generate “offsprings” in each iteration • Do not maintain a search tree or multiple paths • Typically do not retain the path to the node • Advantages • Use little memory • Can potentially solve large-scale problems or get a reasonable (suboptimal or almost feasible) solution

  31. Learning Objectives • Describe and implement the following local search algorithms • Iterative improvement algorithm with min-conflict heuristic for CSPs • Hill Climbing (Greedy Local Search) • Random Walk • Simulated Annealing • Beam Search • Genetic Algorithm • Identify completeness and optimality of local search algorithms • Compare different local search algorithms as well as contrast with classical search algorithms • Select appropriate local search algorithms for real-world problems
