
Games with Chance / Other Search Algorithms


Presentation Transcript


1. Games with Chance / Other Search Algorithms
CPSC 315 – Programming Studio, Spring 2009, Project 2, Lecture 3
Adapted from slides of Yoonsuck Choe

2. Game Playing with Chance
• Minimax trees work well when the game is deterministic, but many games have an element of chance.
• Include chance nodes in the tree.
• Try to maximize/minimize the expected value.
• Or, take a pessimistic/optimistic approach.

3. Tree with Chance Nodes
• For each die roll (blue lines), evaluate each possible move (red lines).
(Figure: game tree with alternating Max, Chance, Min, and Chance levels.)

4. Expected Value
• For a variable x, the expected value is E[x] = Σ x · Pr(x), where Pr(x) is the probability of x occurring.
• Example: rolling a pair of dice; the expected value of the sum is 7 (worked out in the sketch below).
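A minimal sketch of the dice example in Python, assuming fair six-sided dice (the variable names are illustrative):

from fractions import Fraction
from itertools import product

# All 36 ordered rolls of two fair dice; each has Pr = 1/36.
rolls = [a + b for a, b in product(range(1, 7), repeat=2)]
pr = Fraction(1, len(rolls))

# E[x] = sum of x * Pr(x) over all outcomes.
expected_sum = sum(x * pr for x in rolls)
print(expected_sum)  # 7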

5. Expectiminimax: Evaluating the Tree
• Choosing a maximum (same idea for a minimum):
• Evaluate all chance nodes from a move.
• Find the expected value for that move.
• Choose the largest expected value.
(Figure: the same Max / Chance / Min / Chance tree as before.)
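A minimal expectiminimax sketch of that procedure, assuming a hypothetical game-state interface (moves, outcomes, result, is_terminal, evaluate) that is not defined in the slides:

# Sketch only: the state interface below is hypothetical.
def expectiminimax(state, depth, maximizing):
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    pick = max if maximizing else min
    best = None
    for move in state.moves():
        # Chance node: expected value of this move over all random outcomes.
        value = sum(prob * expectiminimax(state.result(move, outcome),
                                          depth - 1, not maximizing)
                    for outcome, prob in state.outcomes(move))
        best = value if best is None else pick(best, value)
    return best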

6. More on Chance
• Rather than the expected value, another approach could be used:
• Maximize the worst-case value (avoid catastrophe).
• Give high weight if a very good position is possible (a "knockout" move).
• Form a hybrid approach, weighting all of these options (see the sketch below).
• Note: the time complexity increases to O(b^m · n^m), where b is the branching factor, n is the number of possible chance outcomes, and m is the depth.
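One way to read the hybrid idea as code; this sketch and its weights are purely illustrative, not from the slides:

# Hypothetical hybrid evaluation of one move's chance outcomes: blend the
# expected value with the worst case (avoid catastrophe) and the best case
# (reward possible "knockout" positions). Weights are illustrative.
def hybrid_value(values, probs, w_expected=0.6, w_worst=0.3, w_best=0.1):
    expected = sum(p * v for p, v in zip(probs, values))
    return w_expected * expected + w_worst * min(values) + w_best * max(values)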

7. More on Game Playing
• Rigorous approaches to imperfect-information games are still being studied.
• Assume random moves by the opponent.
• Assume some sort of model based on a perfect-information model.
• There are indications that modeling the opponent's behavior is often of more value than evaluating the board position.

8. AI in Larger-Scale and Modern Computer Games
• The idealized situations described often don't extend to extremely complex and more continuous games.
• Even just listing the possible moves can be difficult.
• Consider writing the AI controller for a non-player opponent in a modern strategy game.
• The larger situation can be broken down into subproblems:
• Hierarchical approach.
• Use of state diagrams (a toy controller sketch follows below).
• Some subproblems are more easily solved, e.g. path planning.
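A toy sketch of that hierarchical, state-machine style of controller; every class, method, and state name here is hypothetical:

# Hypothetical top-level controller for a strategy-game opponent: a small
# state machine picks a high-level goal, and each state delegates to a
# simpler subproblem solver such as path planning. All names are invented.
class OpponentController:
    def __init__(self):
        self.state = "explore"

    def update(self, world):
        # Edges of the top-level state diagram.
        if world.under_attack():
            self.state = "defend"
        elif world.army_ready():
            self.state = "attack"

        # Each state reduces to an easier subproblem.
        if self.state == "defend":
            return world.move_units_to(world.home_base())
        if self.state == "attack":
            return world.plan_path(world.enemy_base())
        return world.plan_path(world.nearest_unexplored())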

9. AI in Larger-Scale and Modern Computer Games
• Use of simulation as opposed to a deterministic solution:
• Helps to explore a large range of states.
• Can create complex behavior wrapped up in autonomous agents.
• Fun vs. competent: the goal of the game is not necessarily for the computer to win.
• Often a collection of ad-hoc rules.
• "Cheating" allowed (e.g. Civilization).

10. General State Diagrams
• A list of possible states one can reach in the game (nodes).
• States can be abstracted into general conditions.
• Describe ways of moving from one state to another (edges).
• An edge is not necessarily a set move; it could be a general approach.
• Together these form a directed (and often cyclic) graph.
• Our minimax tree is a state diagram, but we hide any cycles.
• Sometimes we want to avoid repeated states.
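For concreteness, such a diagram can be stored as a plain adjacency list; the states below are hypothetical:

# A directed (and cyclic) state graph as an adjacency list:
# each state maps to the states reachable from it in one step.
state_graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["B", "E"],
    "D": ["A", "E"],   # D -> A closes a cycle
    "E": [],
}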

11. State Diagram
(Figure: an example state diagram with states A through K connected by directed edges.)

12. Exploring the State Diagram
• Explore for solutions using BFS or DFS.
• Depth-limited search: DFS, but only to a limited depth in the tree.
• Iterative deepening search: DFS one level deep, then two levels (repeating the first level), then three levels, etc. (see the sketch below).
• If there is a specific goal state, can use bidirectional search: search forward from the start and backward from the goal, trying to meet in the middle. Think of maze puzzles.
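A short sketch of depth-limited DFS and iterative deepening over an adjacency-list graph like the one above (function and variable names are illustrative):

# Depth-limited DFS with cycle checking along the current path only.
def depth_limited_search(graph, state, goal, limit, path=()):
    if state == goal:
        return list(path) + [state]
    if limit == 0:
        return None
    for nxt in graph.get(state, []):
        if nxt not in path:                      # avoid repeating states on this path
            found = depth_limited_search(graph, nxt, goal,
                                         limit - 1, path + (state,))
            if found is not None:
                return found
    return None

# Iterative deepening: run DFS to depth 1, then 2 (re-exploring level 1), etc.
def iterative_deepening(graph, start, goal, max_depth=50):
    for limit in range(1, max_depth + 1):
        found = depth_limited_search(graph, start, goal, limit)
        if found is not None:
            return found
    return None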

13. More Informed Search
• Traversing links, and reaching goal states, are not always of equal cost or value.
• Can have a heuristic function h(x): how close the state x is to the "goal" state.
• This is kind of like the board-evaluation/utility function in game play.
• Can use this to order other searches.
• Can use this to create a greedy approach (see the sketch below).
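A greedy best-first sketch that expands whichever frontier state the heuristic rates closest to the goal; the graph and heuristic are assumed inputs:

import heapq

# Greedy best-first search: always expand the frontier state with the
# smallest h(state). Not optimal, but often fast.
def greedy_best_first(graph, start, goal, h):
    frontier = [(h(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None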

14. A* Algorithm
• Avoid expanding paths that are already expensive.
• f(n) = g(n) + h(n)
• g(n) = current path cost from the start to node n.
• h(n) = estimate of the remaining distance to the goal.
• h(n) should never overestimate the actual cost of the best solution through that node.
• Then apply a best-first search; the value of f will only increase as paths are expanded.
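A compact A* sketch under those assumptions; the weighted-graph format and names are illustrative, not the slides' own code:

import heapq

# A* over a weighted graph: graph[state] is a list of (neighbor, step_cost)
# pairs and h is an admissible heuristic (never overestimates).
def a_star(graph, start, goal, h):
    frontier = [(h(start), 0, start, [start])]   # entries are (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for nxt, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + h(nxt), new_g, nxt, path + [nxt]))
    return None, float("inf")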
