
Advances in Pattern Databases



  1. Advances in Pattern Databases Ariel Felner, Ben-Gurion University, Israel. Email: felner@bgu.ac.il

  2. Overview • Heuristic search and pattern databases • Disjoint pattern databases • Compressed pattern databases • Dual lookups in pattern databases • Current and future work

  3. Optimal path search algorithms • For small graphs that are given explicitly: algorithms such as Dijkstra’s shortest path, Bellman-Ford or Floyd-Warshall. Complexity: polynomial, e.g. O(n^2) for Dijkstra. • For very large graphs, which are defined implicitly: the A* algorithm, which is a best-first search algorithm.

  4. Best-first search schema • Sorts all generated nodes in an OPEN-LIST and chooses the node with the best cost value for expansion. • generate(x): insert x into the OPEN-LIST. • expand(x): delete x from the OPEN-LIST and generate its children. • BFS depends on its cost (heuristic) function. Different functions cause BFS to expand different nodes. [Figure: an open list ordered by cost: 20 25 30 30 35 35 35 40]

  5. Best-first search: Cost functions • g(x): the real distance from the initial state to x. • h(x): the estimated remaining distance from x to the goal state. • Examples: air distance, Manhattan Distance. • Different cost combinations of g and h: • f(x)=level(x): Breadth-First Search. • f(x)=g(x): Dijkstra’s algorithm. • f(x)=h’(x): Pure Heuristic Search (PHS). • f(x)=g(x)+h’(x): the A* algorithm (1968).

  6. A* (and IDA*) • A* is a best-first search algorithm that uses f(n)=g(n)+h(n) as its cost function. • f(x) in A* is an estimate of the cost of the shortest path to the goal via x. • A* is admissible, complete and optimally efficient [Pearl 84]. • Result: any other optimal search algorithm will expand at least all the nodes expanded by A*. [Figure: search frontiers of Breadth-First Search vs. A*]
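
  A minimal Python sketch of this scheme (an illustration only, not the solver used in the experiments): successors(state) yields (child, edge_cost) pairs and h(state) is an admissible heuristic, both hypothetical names, and states are assumed to be hashable.

      import heapq, itertools

      def astar(start, goal, successors, h):
          """Best-first search with f(n) = g(n) + h(n); returns the optimal cost."""
          counter = itertools.count()                 # tie-breaker so states are never compared
          open_list = [(h(start), 0, next(counter), start)]
          best_g = {start: 0}
          while open_list:
              f, g, _, state = heapq.heappop(open_list)   # expand the node with the lowest f
              if state == goal:
                  return g
              if g > best_g.get(state, float("inf")):
                  continue                            # stale open-list entry
              for child, cost in successors(state):
                  new_g = g + cost
                  if new_g < best_g.get(child, float("inf")):
                      best_g[child] = new_g
                      heapq.heappush(open_list, (new_g + h(child), new_g, next(counter), child))
          return None                                 # the goal is unreachable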

  7. Domains • 15 puzzle: 10^13 states. First solved by [Korf 85] with IDA* and Manhattan distance; takes 53 seconds. • 24 puzzle: 10^24 states. First solved by [Korf 96]; takes 2 days.

  8. Domains • Rubik’s cube • 10^19 states • First solved by [Korf 97] • Takes 2 days to solve

  9. (n,k) Top Spin Puzzle • n tokens arranged in a ring • States: any possible permutation of the tokens • Operators: Any k consecutive tokens can be reversed • The (17,4) version has 10^13 states • The (20,4) version has 10^18 states

  10. 4-peg Towers of Hanoi (TOH4) • There is a conjecture about the length of the optimal path, but it has not been proven. • Size: 4^k states for k disks.

  11. How to improve search? • Enhanced algorithms: • Perimeter search [Dillenburg and Nelson 95] • RBFS [Korf 93] • Frontier search [Korf and Zhang 2003] • Breadth-first heuristic search [Zhou and Hansen 04]  They all try to explore the search tree more effectively. • Better heuristics: more parts of the search tree will be pruned.

  12. Better heuristics • In the 3rd millennium we have very large memories. We can build large tables. • For enhanced algorithms: large open lists or transposition tables. They store nodes explicitly. • A more intelligent way is to store general knowledge. We can do this with heuristics.

  13. Subproblems - Abstractions • Many problems can be abstracted into subproblems that must also be solved. • A solution to the subproblem is a lower bound on the entire problem. • Example: Rubik’s cube [Korf 97] • Problem: the 3x3x3 Rubik’s cube. Subproblem: the 2x2x2 corner cubies.

  14. Pattern Database heuristics • A pattern database [Culberson & Schaeffer 96] is a lookup table that stores solutions to all configurations of the subproblem / abstraction / pattern. • This table is used as a heuristic during the search. • Example: Rubik’s cube. • Has 10^19 states. • The corner-cubies subproblem has 88 million states. • A table with 88 million entries fits in memory [Korf 97]. [Figure: the search space is mapped/projected onto the pattern space]
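
  As a rough illustration of how such a table can be built (a sketch, not the authors' code): run a breadth-first search backward from the goal pattern through the abstract pattern space, assuming unit-cost moves, hashable abstract states, and a hypothetical predecessors(pattern) helper.

      from collections import deque

      def build_pdb(goal_pattern, predecessors):
          """Backward breadth-first search over the pattern space: the first time a
          pattern is reached, its depth is its exact solution cost in the abstraction."""
          pdb = {goal_pattern: 0}
          queue = deque([goal_pattern])
          while queue:
              p = queue.popleft()
              for q in predecessors(p):
                  if q not in pdb:
                      pdb[q] = pdb[p] + 1
                      queue.append(q)
          return pdb        # maps every pattern to an admissible heuristic value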

  15. Non-additive pattern databases • Fringe pattern database [Culberson & Schaeffer 1996]. • Has only 259 million states. • Improvement of a factor of 100 over Manhattan Distance.

  16. Example - 15 puzzle • How many moves do we need to move tiles 2,3,6,7 from locations 8,12,13,14 to their goal locations? • The answer is stored in PDB[8][12][13][14]=18.

  17. 7-8 Disjoint Additive PDBs (DADB) • If you have many PDBs, take their maximum. • Values of disjoint databases can be added and are still admissible [Korf & Felner: AIJ-02, Felner, Korf & Hanan: JAIR-04]. • Additivity can be applied if the cost of a subproblem is composed only of the costs of moving objects from the corresponding pattern.
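
  A small sketch of what such an additive lookup amounts to (illustrative names only): project the state onto each disjoint tile group, look each projection up in its own PDB, and sum the results; project(state, group) is a hypothetical helper, and each PDB is assumed to count only moves of its own group's tiles.

      def additive_pdb_heuristic(state, groups, pdbs, project):
          """Disjoint additive PDBs: the per-group costs never count the same move
          twice, so their sum is still an admissible heuristic."""
          return sum(pdb[project(state, group)] for group, pdb in zip(groups, pdbs))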

  18. DADB: Tile puzzle results • 15 puzzle: 5-5-5, 6-6-3 and 7-8 partitionings. • 24 puzzle: 6-6-6-6 partitioning [Korf, AAAI 2005].

  19. Heuristics for the TOH • Infinite peg heuristic (INP): Each disk moves to its own temporary peg. • Additive pattern databases [Felner, Korf & Hanan, JAIR-04]

  20. Additive PDBs for TOH4 • Partition the disks into disjoint sets. • Store the cost of the complete pattern space of each set in a pattern database. • Add values from these PDBs to get the heuristic value. • The n-disk problem contains 4^n states. • The largest database that we stored was for 14 disks, which needed 4^14 = 256M entries (256MB at one byte per entry). [Figure: a 16-disk problem partitioned into groups of 6 and 10 disks]
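
  One natural way to index such a database (a sketch, assuming one byte per entry): treat the peg of each disk in the group as a base-4 digit, so a group of p disks maps onto a flat array of 4^p entries.

      def toh4_index(pegs):
          """Map the pegs (0-3) of the p disks in one pattern group to a unique
          index in [0, 4**p)."""
          index = 0
          for peg in pegs:          # the peg of each disk is one base-4 digit
              index = index * 4 + peg
          return index

      # e.g. a 14-disk group needs 4**14 = 268,435,456 entries, about 256MB at one byte each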

  21. TOH4: results • The difference between static and dynamic partitioning is covered in [Felner, Korf & Hanan: JAIR-04].

  22. Best Usage of Memory • Given 1 gigabyte of memory, how do we best use it with pattern databases? • [Holte, Newton, Felner, Meshulam and Furcy, ICAPS-2004] showed that it is better to use many small databases and take their maximum instead of one large database. • We will present a different (orthogonal) method [Felner, Meshulam & Holte: AAAI-04].

  23. Compressing pattern databases [Felner et al, AAAI-04] • Traditionally, each configuration of the pattern had a unique entry in the PDB. • Our main claim  nearby entries in PDBs are highly correlated !! • We propose to compress nearby entries by storing their minimum in one entry. • We show that  most of the knowledge is preserved. • Consequences: memory is saved, larger patterns can be used, and a speedup in search is obtained.

  24. Cliques in the pattern space • The values in a PDB for a clique are d or d+1. • In permutation puzzles cliques exist when only one object moves to another location. • Usually they have nearby entries in the PDB  A[4][4][4][4][4]. [Figure: a clique in TOH4; the nodes’ distances to the goal G are d or d+1]

  25. Compressing cliques • Assume a clique of size K with values d or d+1. • Store only one entry (instead of K) for the clique, with the minimum d. We lose at most 1. • A[4][4][4][4][4]  A[4][4][4][4][1] • Instead of 4^p we need only 4^(p-1) entries. • This can be generalized to a set of nodes with diameter D (for cliques, D=1). • A[4][4][4][4][4]  A[4][4][4][1][1] • In general: compressing by k disks reduces memory requirements from 4^p to 4^(p-k).
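
  Under the indexing sketch above (with the k smallest disks as the low-order base-4 digits), the compression can be sketched as follows; this is an illustration, not the exact implementation.

      def compress_pdb(pdb, k):
          """Entries that differ only in the positions of the k smallest disks are
          adjacent, so each block of 4^k entries collapses into one bucket that
          stores the minimum value; the heuristic stays admissible."""
          block = 4 ** k
          return [min(pdb[b * block:(b + 1) * block]) for b in range(len(pdb) // block)]

      def compressed_lookup(small_pdb, index, k):
          """Look up an original (uncompressed) index in the compressed table."""
          return small_pdb[index // (4 ** k)]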

  26. TOH4 results: 16 disks (14+2) • Memory was reduced by a factor of 1000!!! at a cost of only a factor of 2 in the search effort.

  27. TOH4: larger versions • Lossless compression is not efficient in this domain. • For the 17-disk problem a speedup of 3 orders of magnitude is obtained!!! • The 18-disk problem can be solved in 5 minutes!!

  28. Tile Puzzles • Storing PDBs for the tile puzzle: • (Simple mapping) A multi-dimensional array  A[16][16][16][16][16], size = 1.04MB. • (Packed mapping) A one-dimensional array  A[16*15*14*13*12], size = 0.52MB. • Time versus memory tradeoff !! [Figure: the goal state of the 15 puzzle and a clique in it]
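
  A sketch of the packed mapping (illustrative only): rank the 5 tile locations as a partial permutation of the 16 positions, which needs 16*15*14*13*12 = 524,160 entries instead of 16^5 = 1,048,576, at the cost of a little extra work per lookup.

      def packed_index(locations):
          """Mixed-radix ranking of 5 distinct locations out of 16."""
          index = 0
          remaining = list(range(16))            # positions not used yet
          for loc in locations:
              r = remaining.index(loc)           # rank of this location among the unused ones
              index = index * len(remaining) + r
              remaining.pop(r)
          return index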

  29. 15 puzzle results • A clique in the tile puzzle is of size 2. • We compressed the last index by two  A[16][16][16][16][8]

  30. Dual lookups in pattern databases [Felner et al, IJCAI-05]

  31. Symmetries in PDBs • Symmetric lookups were already performed in the first PDB paper [Culberson & Schaeffer 96]. • Examples: • Tile puzzles: reflect the tiles about the main diagonal. • Rubik’s cube: rotate the cube. • We can take the maximum among the different lookups. • These are all geometrical symmetries. • We suggest a new type of symmetry!!

  32. Regular and dual representation • Regular representation of a problem: • Variables – objects (tiles, cubies etc.) • Values – locations • Dual representation: • Variables – locations • Values – objects

  33. Regular vs. Dual lookups in PDBs • Regular question: Where are tiles {2,3,6,7} and how many moves are needed to gather them to their goal locations? • Dual question: Who are the tiles in locations {2,3,6,7} and how many moves are needed to distribute them to their goal locations?

  34. Regular and dual lookups • Regular lookup: PDB[8,12,13,14] • Dual lookup: PDB[9,5,12,15]
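
  The two lookups above can be sketched as follows for the tile puzzle, assuming the state is given as tile_of[location] = tile, that tile t's goal location is t itself, and a hypothetical lookup(pdb, locations) indexing helper; the blank is ignored here, which is part of why the tile puzzle is the "problematic" case on a later slide.

      def regular_and_dual_lookup(tile_of, pattern_tiles, pdb, lookup):
          # Invert the permutation: location_of[tile] = location.
          location_of = [0] * len(tile_of)
          for loc, tile in enumerate(tile_of):
              location_of[tile] = loc
          # Regular question: where are the pattern tiles? (tiles 2,3,6,7 -> PDB[8,12,13,14])
          regular = lookup(pdb, [location_of[t] for t in pattern_tiles])
          # Dual question: which tiles occupy the pattern tiles' goal locations?
          # (locations 2,3,6,7 hold tiles 9,5,12,15 -> PDB[9,5,12,15])
          dual = lookup(pdb, [tile_of[t] for t in pattern_tiles])
          return max(regular, dual)    # both values are admissible, so their maximum is too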

  35. Regular and dual in TopSpin • Regular lookup for C : PDB[1,2,3,7,6] • Dual lookup for C: PDB[1,2,3,8,9]

  36. Dual lookups • Dual lookups are possible when there is a symmetry between locations and objects: • Each object is in only one location and each location holds only one object. • Good examples: TopSpin, Rubik’s cube • Bad example: Towers of Hanoi • Problematic example: Tile Puzzles

  37. Inconsistency of dual lookups • Consistency of heuristics: |h(a)-h(b)| <= c(a,b). • Example: TopSpin, with c(b,c)=1: • Both lookups for B: PDB[1,2,3,4,5]=0 • Regular lookup for C: PDB[1,2,3,7,6]=1 • Dual lookup for C: PDB[1,2,3,8,9]=2

  38. Traditional Pathmax • Children inherit the f-value from their parent if it makes them larger. • Example: a parent with g=1, h=4, f=5 has a child with g=2, h=2, f=4 (an inconsistency); pathmax raises the child to g=2, h=3, f=5.

  39. Bidirectional pathmax (BPMX) • Bidirectional pathmax: h-values are propagated in both directions, decreasing by 1 along each edge. • Example: a node with h=2 has children with h=5 and h=1; BPMX raises the parent’s h to 4 (from the h=5 child) and then the other child’s h to 3. • If the IDA* threshold is 2 then with BPMX the right child will not even be generated!!
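
  A sketch of how BPMX can be folded into one IDA* iteration (unit edge costs and hypothetical successors / is_goal helpers assumed; only the child-to-parent propagation is made explicit, since each child recomputes its own heuristic on entry).

      def dfs_bpmx(state, g, threshold, h, successors, is_goal):
          """Returns (solved, bound); when no solution within the threshold is found,
          bound is the smallest f-value that exceeded it (the next IDA* threshold)."""
          h_state = h(state)
          if g + h_state > threshold:
              return False, g + h_state
          if is_goal(state):
              return True, g
          next_bound = float("inf")
          for child in successors(state):
              # BPMX: a child's heuristic minus the edge cost is a lower bound for the parent.
              h_state = max(h_state, h(child) - 1)
              if g + h_state > threshold:
                  return False, g + h_state   # the remaining children are not even generated
              solved, bound = dfs_bpmx(child, g + 1, threshold, h, successors, is_goal)
              if solved:
                  return True, bound
              next_bound = min(next_bound, bound)
          return False, next_bound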

  40. Results: (17,4) TopSpin puzzle • Nodes improvement (17r+17d) : 1451 • Time improvement (4r+4d) : 72 • We also solved the (20,4) TopSpin version.

  41. Results: Rubik’s cube • Data on 1000 states with 14 random moves • PDB of 7-edges cubies • Nodes improvement (24r+24d) : 250 • Time improvement (4r+4d) : 55

  42. Results: Rubik’s cube • With duals we improved Korf’s results on random instances by a factor of 1.5 using exactly the same PDBs.

  43. Results: tile puzzles • With duals, the time for the 24 puzzle drops from 2 days to 1 day.

  44. Discussion • Results for TopSpin and Rubik’s cube are better than those for the tile puzzles. • Dual PDB lookups and BPMX cutoffs are more effective when each operator changes a larger part of the state. • This is because the identities of the objects being queried change dramatically between consecutive states.

  45. Summary • Dual PDB lookups • BPMX cutoffs for inconsistent heuristics • State of the art solvers.

  46. Future work • More compression • Duality in search spaces • Which and how many symmetries to use • Other sources of inconsistencies • Better ways for propagating inconsistencies

  47. Ongoing and future work: compressing PDBs • An entry in the PDB of tiles (a,b,c,d) has the form <La, Lb, Lc, Ld> = distance. • Store the PDBs in a trie. • A PDB of 5 tiles will have a level in the trie for each tile; the values will be in the leaves of the trie. • This data structure enables flexibility and saves memory, as subtrees of the trie can be pruned.
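
  A minimal sketch of this layout (illustrative, not the final implementation): one trie level per tile, the PDB value only at the leaves, so that identical or correlated subtrees can later be folded.

      class TrieNode:
          __slots__ = ("children", "value")
          def __init__(self):
              self.children = {}    # next tile's location -> child node
              self.value = None     # PDB value, set only at the leaves

      def trie_insert(root, locations, value):
          """Store one entry <La, Lb, Lc, Ld, Le> = value."""
          node = root
          for loc in locations:
              node = node.children.setdefault(loc, TrieNode())
          node.value = value

      def trie_lookup(root, locations):
          node = root
          for loc in locations:
              node = node.children[loc]
          return node.value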

  48. Trie pruning • Simple (lossless) pruning: fold leaves with exactly the same values. No data will be lost. [Figure: five sibling leaves, all with value 2, folded into one]

  49. Trie pruning • Intelligent (lossy) pruning: fold leaves/subtrees which are correlated to each other (many options for this!!). Some data will be lost; admissibility is still kept. [Figure: sibling leaves with values 2, 2, 2, 4, 2 folded into a single entry holding the minimum, 2]

  50. Trie: Initial Results • A 5-5-5 partitioning stored in a trie with simple folding.
