
General Game Playing

General Game Playing (GGP) is an intellectual activity that involves playing arbitrary games based solely on formal descriptions provided at runtime. GGP serves as a testbed for AI and addresses the limitations of conventional game-playing programs, which are narrowly focused on a single game and do not truly test the intelligence of the machine.


Presentation Transcript


  1. General Game Playing Michael Genesereth Logic Group Stanford University

  2. Game Playing • Human Game Playing • Intellectual Activity • Skill Comparison • Computer Game Playing • Testbed for AI • Limitations

  3. Limitations of Game Playing for AI • Narrowness: good at one game, not so good at others; cannot do anything else • Not really testing intelligence of machine: the programmer does all the interesting analysis / design; the machine simply follows the recipe

  4. General Game Playing General Game Players are systems able to play arbitrary games effectively based solely on formal descriptions supplied at “runtime”. Translation: They don’t know the rules until the game starts. Must figure out for themselves: legal moves, winning strategy in the face of incomplete info and resource bounds

  5. Versatility

  6. Novelty

  7. Annual GGP Competition Held at AAAI or IJCAI conference Administered by Stanford University (Stanford folks not eligible to participate)

  8. GGP-05 Winner Jim Clune

  9. International GGP Competition

  10. History Winners
  2005 - ClunePlayer - Jim Clune (USA)
  2006 - FluxPlayer - Schiffel, Thielscher (Germany)
  2007 - CadiaPlayer - Björnsson, Finnsson (Iceland)
  2008 - CadiaPlayer - Björnsson, Finnsson (Iceland)
  2010 - Ary - Méhat (France)
  2011 - TurboTurtle - Schreiber (USA)
  2012 - CadiaPlayer - Björnsson, Finnsson (Iceland)
  2013 - TurboTurtle - Schreiber (USA)
  2014 - Sancho - Draper (USA), Rose (UK)
  2015 - Galvanise - Emslie
  2016 - WoodStock - Piette (France)

  11. Carbon versus Silicon

  12. Human Race Being Defeated

  13. Game Description

  14. Multiplicity of Games

  15. Common Structure [State-machine diagram: states s1-s8 connected by actions a and b, with terminal payoffs 0, 25, 50, and 100]

  16. Multiple Player Game [Diagram: the same state machine with joint actions such as a/a, a/b, b/a and payoff pairs for the two players, e.g. 0/0, 25/25, 50/50, 100/0, 0/100]

  17. Direct Description Good News: Since all of the games that we are considering are finite, it is possible in principle to communicate game information in the form of state graphs. Problem: Size of description. Even though everything is finite, these sets can be large. Solution: Exploit regularities / structure in state graphs to produce compact encoding

  18. Structured State Machine [Diagram: the same state machine with states described by ground relations p(a), p(b), q(a,b), q(b,a), f(a), f(b) instead of opaque state names]

  19. States A state is a set of ground facts. For a board with x at (1,1), o at (2,2), and x at (3,3): cell(1,1,x) cell(1,2,b) cell(1,3,b) cell(2,1,b) cell(2,2,o) cell(2,3,b) cell(3,1,b) cell(3,2,b) cell(3,3,x) control(o)

  20. State Update Actions: x plays noop, o plays mark(1,3). Before: cell(1,1,x) cell(1,2,b) cell(1,3,b) cell(2,1,b) cell(2,2,o) cell(2,3,b) cell(3,1,b) cell(3,2,b) cell(3,3,x) control(o) After: cell(1,1,x) cell(1,2,b) cell(1,3,o) cell(2,1,b) cell(2,2,o) cell(2,3,b) cell(3,1,b) cell(3,2,b) cell(3,3,x) control(x)

  21. Game Description Language
  init(cell(1,1,b)) init(cell(1,2,b)) init(cell(1,3,b))
  init(cell(2,1,b)) init(cell(2,2,b)) init(cell(2,3,b))
  init(cell(3,1,b)) init(cell(3,2,b)) init(cell(3,3,b))
  init(control(x))
  legal(P,mark(X,Y)) :- true(cell(X,Y,b)) & true(control(P))
  legal(x,noop) :- true(control(o))
  legal(o,noop) :- true(control(x))
  next(cell(M,N,P)) :- does(P,mark(M,N))
  next(cell(M,N,Z)) :- does(P,mark(M,N)) & true(cell(M,N,Z)) & Z#b
  next(cell(M,N,b)) :- does(P,mark(J,K)) & true(cell(M,N,b)) & (M#J | N#K)
  next(control(x)) :- true(control(o))
  next(control(o)) :- true(control(x))
  terminal :- line(P)
  terminal :- ~open
  goal(x,100) :- line(x)
  goal(x,50) :- draw
  goal(x,0) :- line(o)
  goal(o,100) :- line(o)
  goal(o,50) :- draw
  goal(o,0) :- line(x)
  row(M,P) :- true(cell(M,1,P)) & true(cell(M,2,P)) & true(cell(M,3,P))
  column(N,P) :- true(cell(1,N,P)) & true(cell(2,N,P)) & true(cell(3,N,P))
  diagonal(P) :- true(cell(1,1,P)) & true(cell(2,2,P)) & true(cell(3,3,P))
  diagonal(P) :- true(cell(1,3,P)) & true(cell(2,2,P)) & true(cell(3,1,P))
  line(P) :- row(M,P)
  line(P) :- column(N,P)
  line(P) :- diagonal(P)
  open :- true(cell(M,N,b))
  draw :- ~line(x) & ~line(o)
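The legal/1 rules above can be mirrored in ordinary code. A minimal sketch (the fact-tuple representation and function names are my assumptions, not part of any GDL toolkit): a state is a set of ground-fact tuples, and legal moves are read off the blanks.

```python
def initial_state():
    """All nine cells blank, x to move -- mirrors the init/1 facts."""
    cells = {("cell", r, c, "b") for r in (1, 2, 3) for c in (1, 2, 3)}
    return cells | {("control", "x")}

def legal_moves(state, player):
    # legal(P,mark(X,Y)) :- true(cell(X,Y,b)) & true(control(P))
    if ("control", player) in state:
        return [("mark", f[1], f[2]) for f in state
                if f[0] == "cell" and f[3] == "b"]
    # legal(x,noop) :- true(control(o)), and symmetrically for o
    return [("noop",)]
```

In the initial state x has nine marking moves while o may only pass, matching slide 25.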

  22. Obfuscation What we see: next(cell(M,N,x)) :- does(white,mark(M,N)) & true(cell(M,N,b)) What the player sees: next(welcoul(M,N,himenoing)) :- does(himenoing,dukepse(M,N)) & true(welcoul(M,N,lorenchise))

  23. Game Playing

  24. Initial State cell(1,1,b) cell(1,2,b) cell(1,3,b) cell(2,1,b) cell(2,2,b) cell(2,3,b) cell(3,1,b) cell(3,2,b) cell(3,3,b) control(x)

  25. Legal Moves White’s moves: mark(1,1) mark(1,2) mark(1,3) mark(2,1) mark(2,2) mark(2,3) mark(3,1) mark(3,2) mark(3,3) Black’s moves: noop

  26. State Update Joint move: x plays mark(1,3), o plays noop. Before: cell(1,1,b) cell(1,2,b) cell(1,3,b) cell(2,1,b) cell(2,2,b) cell(2,3,b) cell(3,1,b) cell(3,2,b) cell(3,3,b) control(x) After: cell(1,1,b) cell(1,2,b) cell(1,3,x) cell(2,1,b) cell(2,2,b) cell(2,3,b) cell(3,1,b) cell(3,2,b) cell(3,3,b) control(o)
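The update above can be sketched the same way, under the same assumed fact-tuple representation (sets of tuples such as ("cell", 1, 3, "b") and ("control", "x")): the mover's mark is added, all other cells persist via the frame rules, and control alternates.

```python
def next_state(state, moves):
    """Apply a joint move (dict: player -> move) to a fact-tuple state.
    Mirrors the next/1 rules of the tic-tac-toe description."""
    new = set()
    for player, move in moves.items():
        if move[0] == "mark":   # next(cell(M,N,P)) :- does(P,mark(M,N))
            new.add(("cell", move[1], move[2], player))
    marked = {(f[1], f[2]) for f in new}
    for f in state:             # frame rules: untouched cells carry over
        if f[0] == "cell" and (f[1], f[2]) not in marked:
            new.add(f)
    # next(control(x)) :- true(control(o)), and vice versa
    new.add(("control", "o" if ("control", "x") in state else "x"))
    return new
```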

  27. Complete Game Graph Search [Diagram: complete tic-tac-toe game graph expanded from the empty board]

  28. Incomplete Game Tree Search [Diagram: partially expanded tic-tac-toe game tree] How do we evaluate non-terminal states?
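One standard shape for incomplete tree search, sketched here under an assumed game interface (the method names terminal/goal/successors/to_move are mine, not from the slides): search to a fixed depth and fall back on a heuristic evaluation at the frontier, which is exactly where the question above bites.

```python
def minimax_value(game, state, player, depth, evaluate):
    """Depth-limited minimax sketch. `evaluate` supplies a heuristic
    value for non-terminal frontier states (the `game` interface is
    assumed for illustration)."""
    if game.terminal(state):
        return game.goal(state, player)     # true payoff, e.g. 0..100
    if depth == 0:
        return evaluate(state, player)      # heuristic guess at the frontier
    values = [minimax_value(game, s, player, depth - 1, evaluate)
              for s in game.successors(state)]
    return max(values) if game.to_move(state) == player else min(values)
```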

  29. First Generation GGP (2005-2006) General Heuristics Goal proximity (everyone) Maximize mobility (Barney Pell) Minimize opponent’s mobility (Jim Clune)
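The two mobility heuristics can be combined in one line. A sketch under a hypothetical `game` object exposing `legal_moves` (not taken from any of the named players' actual code):

```python
def mobility(game, state, player):
    """Number of legal moves for `player` in `state` (hypothetical
    `game` interface, for illustration only)."""
    return len(game.legal_moves(state, player))

def mobility_score(game, state, player, opponent):
    # Pell-style mobility minus Clune-style opponent mobility:
    # prefer states where we have options and the opponent has few.
    return mobility(game, state, player) - mobility(game, state, opponent)
```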

  30. GGP-06 Final - Cylinder Checkers

  31. Second Generation GGP (2007 on) Monte Carlo Search Monte Carlo Tree Search UCT - Upper Confidence bounds applied to Trees

  32. Second Generation GGP Monte Carlo Search [Diagram: random playouts returning terminal payoffs of 0 or 100, with averaged estimates such as 25, 50, and 75 backed up the tree]
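The flat Monte Carlo estimate the diagram illustrates can be sketched in a few lines, again under an assumed game interface (terminal/successors/goal are placeholder names):

```python
import random

def monte_carlo_value(game, state, player, playouts=100):
    """Flat Monte Carlo sketch: estimate a state's value as the
    average payoff of random playouts run to a terminal state."""
    total = 0
    for _ in range(playouts):
        s = state
        while not game.terminal(s):
            s = random.choice(game.successors(s))  # random playout step
        total += game.goal(s, player)              # e.g. 0, 50, or 100
    return total / playouts
```

MCTS and UCT refine this by growing a tree and biasing playout selection toward promising branches rather than sampling uniformly.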

  33. Third Generation GGP Offline Processing of Game Descriptions • Finding and exploiting independences • Reformulation of descriptions • Automatic programming • Trade-off: cost of finding and using structure vs savings (cost sometimes proportional to size of description; savings sometimes proportional to size of game tree) • What human programmers do in creating game players

  34. Game Factoring Hodgepodge = Chess + Othello, with branching factor a for the chess part and b for the Othello part. Analysis of joint game: branching factor as given to players: a*b; fringe of tree at depth n as given: (a*b)^n; fringe at depth n factored: a^n + b^n
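The fringe-size comparison can be checked with a toy calculation (the branching factors 30 and 10 are placeholders for illustration, not claims about chess or Othello):

```python
# Compare joint vs factored search frontiers at depth n for assumed
# branching factors a (chess part) and b (Othello part).
a, b, n = 30, 10, 4
joint = (a * b) ** n        # (a*b)^n positions when searched jointly
factored = a ** n + b ** n  # a^n + b^n when the games are searched separately
print(joint, factored)      # 8100000000 vs 820000
```

Even at depth 4 the factored search visits roughly ten thousand times fewer positions, which is the point of the slide.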

  35. Other Reformulation Opportunities Factoring, e.g. Hodgepodge Bottlenecks, e.g. Triathlon Symmetry detection, e.g. Tic-Tac-Toe Dead State Removal

  36. Compilation Conversion of logic to a traditional programming language: simple, widely published algorithms; several orders of magnitude speedup; no asymptotic change. Conversion to Field Programmable Gate Arrays (FPGAs): several more orders of magnitude improvement

  37. Automatic Programming p(a,b) q(b,c) ~p(b,d) ∀x.∀y.(p(x,y) ⇒ q(x,y)) p(c,b)∨p(c,d) ∃x.p(x,d)

  38. Algorithmic Expertise Knuth in a Box

  39. Opponent Modeling

  40. Delayed Gratification [Tic-tac-toe board diagram]

  41. Conclusion

  42. General Game Playing is not a game

  43. Serious Applications

  44. John McCarthy The main advantage we expect the advice taker to have is that its behavior will be improvable merely by making statements to it, telling it about its … environment and what is wanted from it. To make these statements will require little, if any, knowledge of the program or the previous knowledge of the advice taker.

  45. Newell and Simon The General Problem Solver demonstrates how generality can be achieved by factoring the specific descriptions of individual tasks from the task-independent processes.

  46. Ed Feigenbaum The potential use of computers by people to accomplish tasks can be “one-dimensionalized” into a spectrum representing the nature of the instruction that must be given the computer to do its job. Call it the what-to-how spectrum. At one extreme of the spectrum, the user supplies his intelligence to instruct the machine with precision exactly how to do his job step-by-step. ... At the other end of the spectrum is the user with his real problem. ... He aspires to communicate what he wants done ... without having to lay out in detail all necessary subgoals for adequate performance.

  47. Robert Heinlein A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.


  49. Game Management

  50. Game Management Game Management is the process of administering a game in General Game Playing. Match = instance of a game. Components: Game Manager, Game Playing Protocol
