

  1. Artificial Intelligence for Games Minor Games Programming Lecture 1

  2. Artificial Intelligence for Games • Introduction to game AI (self study) • Theory: Moving Game Agents Jan Verhoeven j.verhoeven@windesheim.nl

  3. Game AI • Game AI is not a subset of AI • Game AI • often covers techniques that are not considered “AI-like” • AI • uses techniques impractical in a game context

  4. What is Game AI? • Analogy • game AI is to "real" AI as • stage design is to architecture • The goal of game AI is to give the impression of intelligence • to avoid the impression of stupidity • to provide a reasonable challenge for the player

  5. Challenge • It is very possible to make the computer too smart • think: driving game or a chess game • The task of AI is to support the experience • many compromises from “optimal” required

  6. Not dumb • It is surprisingly hard to make the computer not dumb • “Why are computers so stupid?” • especially with limited computational resources • Example • Humans are good at navigating complex 3-D environments • Doing this efficiently is (still) an unsolved problem in AI

  7. But • Game AI is the future of games • Many designers see AI as a key limitation • the inability to model and use emotion • the inability of games to adapt to user’s abilities • the need for level designers to supply detailed guidance to game characters

  8. Study book • Literature: Mat Buckland, Programming Game AI by Example • http://www.ai-junkie.com

  9. What we will cover • Finite-state machines (Previous Knowledge !!) • the most basic technique for implementing game AI • fundamental to everything else • Steering behaviors • basic behaviors for avoiding stupidity while navigating the world • Path planning • the surprisingly tricky problem of getting from point A to point B • Action planning • assembling sequences of actions • Fuzzy logic • reasoning by degrees

  10. Today's Theory: Moving Game Agents (see study book: chapter 3) • What is an Autonomous Agent? • Steering Behaviors • Group Behaviors • Combining Steering Behaviors • Spatial Partitioning • Smoothing

  11. Movement • Two types • Reactive movement • Planned movement

  12. Steering behaviors • Tendencies of motion • that produce useful (interesting, plausible, etc.) navigation activity • by purely reactive means • without extensive prediction • Pioneering paper • Reynolds, 1999 (“Steering Behaviors for Autonomous Characters”, GDC)

  13. Examples • I want the insect monsters to swarm at the player all at once, but not get in each other's way. • I want the homing missile to track the ship and close in on it. • I want the guards to wander around, but not get too far from the treasure and not too close to each other. • I want pedestrians to cross the street, but avoid oncoming cars.

  14. Steering behavior solution • Write a mathematical rule • that describes accelerations to be made • in response to the state of the environment • Example: "don't hit the wall" • generate a backwards force inversely proportional to the distance to the wall • the closer you get, the more you will be pushed away • if you're going really fast, you'll get closer to the wall, but you'll slow down smoothly
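The wall rule above can be written as a small force function. A minimal Python sketch (the study book works in C++); the function name, the point-and-normal wall representation, and the max_range cutoff are placeholder assumptions:

```python
import numpy as np

def wall_repulsion(agent_pos, wall_point, wall_normal, max_range=50.0):
    """Push the agent away from the wall; the push grows as the distance shrinks."""
    # Signed distance from the agent to the wall plane (wall_normal is a unit vector
    # pointing away from the wall).
    dist = float(np.dot(agent_pos - wall_point, wall_normal))
    if dist <= 0.0 or dist > max_range:
        return np.zeros(2)                 # behind the wall or far enough away: no force
    return wall_normal * (1.0 / dist)      # inverse-distance repulsion along the wall normal
```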

  15. Combining forces • Behaviors can be combined by • summing the forces that they produce • Example: follow • I want the spy to follow the general, but not too close • two behaviors • go to general's location • creates a force pointing in his direction • not too close • a counter-force inversely proportional to distance • where the forces balance • is where the spy will tend to stay
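A sketch of the "follow, but not too close" example, summing an attractive pull toward the general with the inverse-distance counter-force; the names and weights are made up for illustration:

```python
import numpy as np

def follow_at_distance(spy_pos, general_pos, attract_weight=1.0, repel_weight=400.0):
    """Sum an attractive pull toward the general and an inverse-distance push-back.
    The spy settles near the radius where the two forces cancel."""
    offset = general_pos - spy_pos
    dist = float(np.linalg.norm(offset))
    if dist < 1e-6:
        return np.zeros(2)
    to_general = offset / dist
    attract = attract_weight * to_general            # constant pull toward the general
    repel = -(repel_weight / dist) * to_general      # push-back that dominates up close
    return attract + repel                           # net steering force
```

With these (arbitrary) weights the two forces cancel at repel_weight / attract_weight = 400 world units, which is where the spy tends to hover.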

  16. Steering Behaviors: Physics Model • Simple Vehicle Model • orientation, mass, position, velocity • max_force, max_speed • Forward Euler Integration • steering_force = truncate (steering_dir, max_force) • acceleration = steering_force / mass • velocity = truncate (velocity + acceleration, max_speed) • position = position + velocity
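The slide's pseudocode translates almost line for line. A minimal Python sketch (the book uses C++); an explicit timestep dt is added here, which the pseudocode leaves implicit at one frame:

```python
import numpy as np

def truncate(vec, max_length):
    """Clamp a vector's length to max_length without changing its direction."""
    length = np.linalg.norm(vec)
    return vec * (max_length / length) if length > max_length else vec

class Vehicle:
    def __init__(self, mass=1.0, max_force=10.0, max_speed=100.0):
        self.mass = mass
        self.max_force = max_force
        self.max_speed = max_speed
        self.position = np.zeros(2)
        self.velocity = np.zeros(2)
        self.heading = np.array([1.0, 0.0])          # orientation as a unit vector

    def update(self, steering_dir, dt):
        """One forward Euler step, following the slide's pseudocode."""
        steering_force = truncate(steering_dir, self.max_force)
        acceleration = steering_force / self.mass
        self.velocity = truncate(self.velocity + acceleration * dt, self.max_speed)
        self.position = self.position + self.velocity * dt
        speed = np.linalg.norm(self.velocity)
        if speed > 1e-6:
            self.heading = self.velocity / speed     # keep orientation aligned with motion
```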

  17. Steering Behaviors: Seek and Flee • Seek – Steer toward goal • Flee – Steer away from goal • Steering force is the difference between the desired velocity and the current velocity • Blue is steering force, magenta is velocity
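A sketch of Seek and Flee under that "desired minus current" formulation; argument names are placeholders:

```python
import numpy as np

def seek(agent_pos, agent_vel, target_pos, max_speed):
    """Desired velocity points at the target at full speed; steering = desired - current."""
    to_target = target_pos - agent_pos
    dist = np.linalg.norm(to_target)
    if dist < 1e-6:
        return np.zeros(2)
    desired_velocity = to_target / dist * max_speed
    return desired_velocity - agent_vel

def flee(agent_pos, agent_vel, threat_pos, max_speed):
    """Same idea, but the desired velocity points directly away from the threat."""
    away = agent_pos - threat_pos
    dist = np.linalg.norm(away)
    if dist < 1e-6:
        return np.zeros(2)
    desired_velocity = away / dist * max_speed
    return desired_velocity - agent_vel
```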

  18. Steering Behaviors: Pursue and Evade • Based on underlying Seek and Flee • Pursue – Predict future interception position of target and seek that point • Evade – Use future prediction as target to flee from (Another Chase and Evade demo)
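A sketch of Pursue using a simple look-ahead heuristic (distance divided by closing speed) to predict the interception point; the heuristic and names are assumptions, not the book's exact code:

```python
import numpy as np

def pursue(agent_pos, agent_vel, target_pos, target_vel, max_speed):
    """Seek the target's predicted future position instead of its current one."""
    to_target = target_pos - agent_pos
    # Simple look-ahead heuristic: distance divided by the combined closing speed.
    look_ahead = np.linalg.norm(to_target) / (max_speed + np.linalg.norm(target_vel))
    future_pos = target_pos + target_vel * look_ahead
    # Seek the predicted point (Evade is identical except it flees from that point).
    desired = future_pos - agent_pos
    dist = np.linalg.norm(desired)
    if dist < 1e-6:
        return np.zeros(2)
    return desired / dist * max_speed - agent_vel
```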

  19. Steering Behaviors: Wander • A type of random steering with long-term order • Steering in one frame is related to the steering in the previous frame • Maintains state • Red dot is the wander direction • Constrained to lie on the black circle • Randomly moved within the white circle each frame
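A sketch of the wander-circle idea; a full implementation would rotate the wander target from the agent's local space into world space, which is simplified away here, and the radius/distance/jitter values are placeholders:

```python
import random
import numpy as np

class Wander:
    """Random steering with long-term order: a target point constrained to a circle in
    front of the agent is jittered by a small random amount each frame."""
    def __init__(self, radius=1.0, distance=2.0, jitter=0.2):
        self.radius = radius          # the black circle's radius
        self.distance = distance      # how far ahead of the agent the circle sits
        self.jitter = jitter          # per-frame random displacement (the white circle)
        angle = random.uniform(0.0, 2.0 * np.pi)
        self.target = np.array([np.cos(angle), np.sin(angle)]) * radius   # state kept between frames

    def steer(self, heading):
        # Nudge the wander target, then re-project it back onto the circle.
        nudge = np.array([random.uniform(-1, 1), random.uniform(-1, 1)]) * self.jitter
        self.target = self.target + nudge
        self.target = self.target / np.linalg.norm(self.target) * self.radius
        # Offset the circle ahead of the agent along its (unit) heading.
        return heading * self.distance + self.target
```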

  20. Steering Behaviors: Arrival • Goal to arrive at target with zero velocity • Red circle is maximum distance before slowing down
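A sketch of Arrival: full-speed seek outside the slowing radius, with the desired speed ramping down to zero inside it; the slowing_radius value is a placeholder:

```python
import numpy as np

def arrive(agent_pos, agent_vel, target_pos, max_speed, slowing_radius=100.0):
    """Seek the target at full speed when far away, then scale the desired speed
    down to zero inside the slowing radius so the agent stops on the target."""
    to_target = target_pos - agent_pos
    dist = np.linalg.norm(to_target)
    if dist < 1e-6:
        return -agent_vel                       # on target: cancel any remaining velocity
    speed = max_speed * min(dist / slowing_radius, 1.0)
    desired_velocity = to_target / dist * speed
    return desired_velocity - agent_vel
```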

  21. Steering Behaviors: Obstacle Avoidance • White box is future path • Steering force strictly left or right • Braking force stronger as collision gets closer

  22. Obstacle avoidance II • Basic idea • project a box forward in the direction of motion • think of the box as a "corridor of safety" • as long as there are no obstacles in the box • motion forward is safe • To do this • find all of the objects that are nearby • too expensive to check everything • ignore those that are behind you • see if any of the obstacles overlap the box • if none, charge ahead • if several, find the closest one • this is what we have to avoid

  23. Obstacle avoidance III • Steering force • we want to turn away from the obstacle • just enough to miss it • we want to slow down • so we have time to correct • Need a steering force perpendicular* to the agent's heading • proportional to how far the obstacle protrudes into the detection box • Need a braking force anti-parallel to the agent's heading • proportional to our proximity to the obstacle * http://en.wikipedia.org/wiki/Perpendicular
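A sketch of that steering-plus-braking force, computed in the agent's local space (x along the heading, y lateral); it loosely follows the book's local-space treatment, but the constants are placeholders and the result still has to be rotated back into world space before use:

```python
import numpy as np

def obstacle_avoidance_force(local_obstacle, obstacle_radius, box_length):
    """Given the closest intersecting obstacle transformed into the agent's local space
    (x = forward along the heading, y = lateral), return a local-space steering force
    with a lateral component away from the obstacle and a braking component."""
    # Scale the correction up the deeper the obstacle sits inside the detection box.
    multiplier = 1.0 + (box_length - local_obstacle[0]) / box_length
    lateral = (obstacle_radius - local_obstacle[1]) * multiplier    # perpendicular to the heading
    braking_weight = 0.2                                            # tuning constant (placeholder)
    brake = (obstacle_radius - local_obstacle[0]) * braking_weight  # anti-parallel to the heading
    return np.array([brake, lateral])
```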

  24. Steering Behaviors: Hide • Hide attempts to position a vehicle so that an obstacle is always between itself and the agent (“the hunter”) it’s trying to hide from

  25. Steering Behaviors: Wall Following • Move parallel to and offset from the gray areas • Goal is to remain a given distance from the wall • Predict the object’s future position (black dot) • Project the future position onto the wall • Move out from the wall a set amount along the normal • Seek toward the new point (red circle)

  26. Steering Behaviors: Path Following • Path is connected line segments with a radius • Corrective steering only when straying off the path • Red dot is the predicted future position • Red circle is the closest spot on the path • Corrective steering toward the white circle farther down the path
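A rough sketch of path following along a polyline of waypoints (given as numpy arrays); the look-ahead time and the 10-unit offset down the path are arbitrary illustration values, not the book's code:

```python
import numpy as np

def path_follow(agent_pos, agent_vel, waypoints, path_radius, max_speed, look_ahead=1.0):
    """Predict a future position; if it strays outside the path radius, seek a point
    a little farther along the path, otherwise apply no corrective steering."""
    future = agent_pos + agent_vel * look_ahead
    # Find the closest point on the waypoint polyline to the predicted position.
    best_point, best_dist, best_dir = None, float("inf"), None
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        seg = b - a
        t = np.clip(np.dot(future - a, seg) / np.dot(seg, seg), 0.0, 1.0)
        point = a + t * seg
        dist = np.linalg.norm(future - point)
        if dist < best_dist:
            best_point, best_dist, best_dir = point, dist, seg / np.linalg.norm(seg)
    if best_dist <= path_radius:
        return np.zeros(2)                      # still inside the corridor: no correction
    target = best_point + best_dir * 10.0       # aim a bit farther down the path
    desired = target - agent_pos
    length = np.linalg.norm(desired)
    if length < 1e-6:
        return np.zeros(2)
    return desired / length * max_speed - agent_vel
```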

  27. Combined Steering Behaviors: Group Path Following • Path following with separation steering • Combined with a weighted sum • Path following has three times the weight of separation

  28. Combined Steering Behaviors: Leader Following (Group) • Combines separation and arrival • Arrival target is a point offset slightly behind the leader • Followers must move out of leader’s future path

  29. Combined Steering Behaviors: Leader Following (Queue) • Combines separation and arrival • Each object has a different leader

  30. Combined Steering Behaviors: Unaligned Collision Avoidance • Objects moving in all directions (unaligned) • Combines containment and avoidance • Future collisions predicted and objects steer away from collision site, or speed up / slow down

  31. Combined Steering Behaviors: Queuing • Seek doorway, Avoid gray walls, Separation from each other, Braking if others nearby or in front

  32.–34. Flocking (image-only slides)

  35. Flocking • First demonstrated by Craig Reynolds in his 1987 SIGGRAPH paper and movie • “Flocks, Herds, and Schools: A Distributed Behavioral Model” • Film: Stanley and Stella in “Breaking the Ice” • Used to give flocks of birds and schools of fish eerily realistic movement • Reynolds won an Oscar in 1997 for his flocking work • (Scientific and Technical Award) • Flocking is an example of emergent behavior (a-life) • Simple individual rules result in complex group behavior • Individual creatures are often called “boids” • PS2 technical demo • OpenSteer demo

  36. Flocking: Three simple rules • Separation • Alignment • Cohesion

  37. Separation • "Don't crowd" • Basic idea • generate a force based on the proximity of each other agent • sum all of the vectors • Result • Each agent moves in the direction that takes it furthest from the others • Neighbors disperse from each other

  38. Alignment • "Stay in step" • Basic idea • keep an agent's heading aligned with its neighbors • calculate the average heading and go that way • Result • the group moves in the same direction

  39. Cohesion • "Stay together" • Basic idea • opposite of separation • generate a force towards the center of mass of neighbors • Result • group stays together
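The three rules map onto three small force functions, each fed only the agent's neighbors (passed here as plain lists of numpy position or heading vectors); this is a minimal sketch, not the book's C++ code:

```python
import numpy as np

def separation(agent_pos, neighbor_positions):
    """Repulsive force away from each neighbor, weighted by inverse distance."""
    force = np.zeros(2)
    for pos in neighbor_positions:
        away = agent_pos - pos
        dist = np.linalg.norm(away)
        if dist > 1e-6:
            force += away / (dist * dist)     # closer neighbors push harder
    return force

def alignment(agent_heading, neighbor_headings):
    """Steer toward the average heading of the neighbors."""
    if not neighbor_headings:
        return np.zeros(2)
    return np.mean(neighbor_headings, axis=0) - agent_heading

def cohesion(agent_pos, agent_vel, neighbor_positions, max_speed):
    """Seek the neighbors' center of mass."""
    if not neighbor_positions:
        return np.zeros(2)
    center = np.mean(neighbor_positions, axis=0)
    desired = center - agent_pos
    dist = np.linalg.norm(desired)
    if dist < 1e-6:
        return np.zeros(2)
    return desired / dist * max_speed - agent_vel
```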

  40. Combining these behaviors • We get flocking • different weights and parameters yield different effects • animation • demo

  41. Implementation issues • Combining behaviors • each steering behavior outputs a force • it is possible for the total force to exceed the agent's acceleration capacity • What to do?

  42. Combination methods • Simplest: Weighted truncated sum • weight the behaviors, add up, and truncate at max_force • very tricky to get the weights right • must do all of the calculations • Better: Prioritization • Evaluate behaviors in a predefined order • obstacle avoidance first • wander last • Keep evaluating and adding until max_force is reached • Problem is getting the fixed priority right • Cheaper: Prioritized dithering • Associate a probability with each behavior • probabilities sum to 1 • That behavior will get its force applied a certain percentage of the time
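A sketch of the first two schemes (prioritized dithering is omitted for brevity); each behavior is assumed to hand back a raw force vector:

```python
import numpy as np

def truncate(vec, max_length):
    """Clamp a vector's length to max_length without changing its direction."""
    length = np.linalg.norm(vec)
    return vec * (max_length / length) if length > max_length else vec

def weighted_truncated_sum(forces_and_weights, max_force):
    """Simplest combination: evaluate everything, weight, sum, then clamp."""
    total = np.zeros(2)
    for force, weight in forces_and_weights:
        total += force * weight
    return truncate(total, max_force)

def prioritized_sum(forces_in_priority_order, max_force):
    """Prioritized combination: add forces in priority order and stop as soon as
    the force budget (max_force) is used up, so low-priority behaviors may be skipped."""
    total = np.zeros(2)
    for force in forces_in_priority_order:      # e.g. obstacle avoidance first, wander last
        remaining = max_force - np.linalg.norm(total)
        if remaining <= 0.0:
            break
        total += truncate(force, remaining)
    return total
```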

  43. Partitioning • We want to calculate the neighbors of each agent • if we look at all agents, that is an O(n²) operation • if there are many, many agents, too slow • Many techniques for speeding this up • basic idea is to consider only those agents that could be neighbors • carve up space and just look at the relevant bits • Very important in other parts of game programming, too • collision detection • view rendering

  44. Cell-space partition • Cover space with a grid • Maintain a list of agents in each cell • not that expensive since it is just an x,y threshold test • Calculate which grid cells could contain neighbors • check only those agents in the affected cells • O(n)
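A minimal sketch of a cell-space partition, assuming a square world with positions in [0, world_size); the class and method names are placeholders:

```python
import collections
import numpy as np

class CellSpacePartition:
    """Uniform grid over the world; neighbor queries only inspect cells that the
    query circle overlaps, instead of testing every agent (roughly O(n) overall)."""
    def __init__(self, world_size, cells_per_side):
        self.cell_size = world_size / cells_per_side
        self.cells_per_side = cells_per_side
        self.cells = collections.defaultdict(list)

    def _cell_index(self, pos):
        x = min(int(pos[0] / self.cell_size), self.cells_per_side - 1)
        y = min(int(pos[1] / self.cell_size), self.cells_per_side - 1)
        return x, y

    def add(self, agent_id, pos):
        self.cells[self._cell_index(pos)].append((agent_id, np.asarray(pos, dtype=float)))

    def neighbors(self, pos, radius):
        """Return the ids of agents within `radius` of `pos`, checking only nearby cells."""
        cx, cy = self._cell_index(pos)
        reach = int(radius / self.cell_size) + 1
        found = []
        for gx in range(cx - reach, cx + reach + 1):
            for gy in range(cy - reach, cy + reach + 1):
                for agent_id, apos in self.cells.get((gx, gy), []):
                    if np.linalg.norm(apos - np.asarray(pos, dtype=float)) <= radius:
                        found.append(agent_id)
        return found
```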

  45. Smoothing • Jitter occurs when behaviors switch in and out • obstacle avoidance kicks in when an object is in the detection box • but other behaviors push back towards the obstacle • Solution • average the heading over several updates Read the study book for a solution or take notice of: http://blogs.msdn.com/shawnhar/archive/2007/04/23/hysteresis.aspx
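A sketch of the averaging fix: keep the last few raw headings and return their normalized mean; the sample count is a tuning placeholder:

```python
import collections
import numpy as np

class HeadingSmoother:
    """Average the heading over the last few updates to damp the jitter caused by
    steering behaviors switching on and off."""
    def __init__(self, sample_count=10):
        self.samples = collections.deque(maxlen=sample_count)

    def update(self, raw_heading):
        self.samples.append(np.asarray(raw_heading, dtype=float))
        avg = np.mean(list(self.samples), axis=0)
        norm = np.linalg.norm(avg)
        return avg / norm if norm > 1e-6 else raw_heading   # renormalize the averaged heading
```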

  46. Alternative to Flocking: Simple Swarms • Computationally simpler • Doesn't enforce separation or prevent interpenetration • Example • Hundreds of spiders crawling around, up walls, and dropping from the ceiling – Tom Scutt, Tomb Raider series

  47. Simple Swarms attacking the Player • Outer zone • If heading toward player: waver heading • If heading away: steer toward player • Inner zone • Swirl around player • Flee after a random amount of time

  48. Swarm Intelligence • Technique based on collective behavior in decentralized, self-organized systems • Beni & Wang (1989) • Simple agents interact locally • Global behavior emerges • Ant colonies • Bird flocking • Animal herding • Swarm Robotics • Combining swarm intelligence with robotics

  49. Formations • Mimics military formations • How is it similar to flocking? • How is it different from flocking?

  50. Formations • Issues • Is there a leader? • Where do individuals steer towards? • What happens when they turn? • What happens when they change heading by 180 degrees? • What happens when there is a narrow pass? • Formation splitting and reforming?
