
Decomposition-based Constraint Optimization and Distributed Execution
John Stedl, Tsoline Mikaelian, Martin Sachenbacher, September 2003



1. Decomposition-based Constraint Optimization and Distributed Execution
John Stedl, Tsoline Mikaelian, Martin Sachenbacher
September 2003

2. Distributed Execution
What is Distributed Execution?
• Each agent maintains a subset of the plan
• Each agent operates autonomously, using communication to coordinate activities
• Two phases:
  • a reformulation phase to create a dispatchable plan (all agents can communicate)
  • temporally flexible execution of the dispatchable plan (communication may be limited)
Motivation
• Reduced communication vs. a leader-follower architecture
• If agent-agent communication is limited, we need to perform distributed execution
• Reduces computation and memory requirements for individual agents
• Increased robustness (no single point of failure)
Approach
• Use intelligent distributed algorithm selection to map centralized algorithms for STNs and STNUs into distributed plan-running algorithms
• Sub-divide the problem into smaller pieces when necessary
[Figure: a distributed STN]

3. Distributed Execution Domains
1. Full Communication and No Uncertainty
• Use distributed versions of classical STN reformulation and dispatching algorithms*
2. Full Communication with Uncertainty
• Use distributed versions of the Dynamic Controllability algorithms for STNUs and the associated dispatching algorithm**
3. Limited Communication with Uncertainty
• Sacrifice some temporal flexibility in the plan to make the problem easier
• Break the plan up into a "natural" two-level hierarchy based on communication constraints, such that:
  • agents within a sub-plan are free to communicate
  • no communication is allowed between different sub-plans
• Leverage work on strong and dynamic controllability
[Figure: example domains: distributed hardware components, satellite constellations, rover teams on Mars (group one, group two)]
* Tsamardinos, Muscettola, & Morris, "Fast Transformations of Temporal Plans for Efficient Execution"
** Morris, Muscettola, & Vidal, "Dynamic Control of Plans with Temporal Uncertainty"

4. Distributed Mode Estimation
• Based on centralized N-step mode estimation, framed as an OCSP
• Solve the OCSP using hypertree decomposition
• Exploit the tree decomposition for distributed problem solving
• Distributed ME process:
[Figure: offline compilation phase (plant model, CCA, constraint hypergraph, hypertree decomposition) and online solution phase (observations, commands, U1, U2, U3, …, dynamic programming, distributed algorithm for solution extraction)]

5. Enabling Distributed Mode Estimation Through Decomposition-based Constraint Optimization
Martin Sachenbacher
September 2003

6. Constraint Satisfaction Problems
• Domains dom(xi)
• Variables X = x1, x2, …, xn
• Constraints C = c1, c2, …, cm
• Constraint cj ⊆ dom(xj1) × … × dom(xjk)
[Figure: example constraint network over variables u, v, x, y, z with 0/1 constraint tables]
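As a concrete sketch of these definitions (the variables, domains, and relations below are made up for illustration), a constraint can be stored extensionally as the set of tuples it allows, and a brute-force solver checks every assignment against every constraint:

```python
from itertools import product

# Hypothetical CSP: three 0/1 variables and two extensional constraints.
domains = {"x": {0, 1}, "y": {0, 1}, "z": {0, 1}}

# Each constraint c_j is a subset of dom(x_j1) x ... x dom(x_jk),
# stored as (scope, set of allowed tuples).
constraints = [
    (("x", "y"), {(0, 0), (1, 1)}),  # x == y
    (("y", "z"), {(0, 1), (1, 0)}),  # y != z
]

def solutions(domains, constraints):
    """Enumerate all assignments that satisfy every constraint (brute force)."""
    names = sorted(domains)
    for values in product(*(sorted(domains[v]) for v in names)):
        assignment = dict(zip(names, values))
        if all(tuple(assignment[v] for v in scope) in allowed
               for scope, allowed in constraints):
            yield assignment

print(list(solutions(domains, constraints)))  # the assignments with x == y and y != z
```

Decomposition methods exist precisely because this brute-force enumeration is exponential in the number of variables.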

7. CSP Decomposition
• Transform the CSP into an equivalent acyclic instance ("compilation")
• By combining the constraints responsible for cyclicity

8. Hypertree Decomposition
• See Gottlob et al., Artificial Intelligence 124 (2000)
• Tree T = (N, E) with labeling functions χ, λ such that:
  • For each cj ∈ C, there is at least one n ∈ N such that scope(cj) ⊆ χ(n) ("covering")
  • For each variable xi ∈ X, the set {n ∈ N | xi ∈ χ(n)} induces a connected subtree of T ("connectedness")
  • For each n ∈ N, χ(n) ⊆ scope(λ(n))
  • For each n ∈ N, scope(λ(n)) ∩ χ(Tn) ⊆ χ(n), where Tn is the subtree of T rooted at n
• The HT-width of a hypertree decomposition is defined as max(|λ(n)|), n ∈ N
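Two of these conditions lend themselves to a direct mechanical check. A sketch on a hypothetical two-node decomposition (the connectedness and subtree conditions additionally need the tree shape and are omitted here for brevity):

```python
# Toy hypertree decomposition (hypothetical): each node carries a chi-label
# (a set of variables) and a lam-label (a set of constraint names).
tree = {
    "root": {"chi": {"c", "x", "y"}, "lam": {"Or3"}},
    "n1":   {"chi": {"a", "c", "x"}, "lam": {"Or1"}},
}
scopes = {"Or3": {"c", "x", "y"}, "Or1": {"a", "c", "x"}}

def covering(tree, scopes):
    """Condition 1: every constraint scope fits inside some node's chi-label."""
    return all(any(s <= node["chi"] for node in tree.values())
               for s in scopes.values())

def chi_in_lam_scope(tree, scopes):
    """Condition 3: chi(n) is covered by the scopes of the constraints in lam(n)."""
    return all(node["chi"] <= set().union(*(scopes[c] for c in node["lam"]))
               for node in tree.values())

print(covering(tree, scopes), chi_in_lam_scope(tree, scopes))
```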

9. Example
• Boolean Polycell (see Williams, Ragno 2003)
[Figure: circuit with or-gates Or1, Or2, Or3 and and-gates And1, And2; observed inputs a = 1, b = 1, c = 1, d = 1, e = 0, internal wires x, y, z, observed outputs f = 0, g = 1]

10. Example
• Variables
  • a, b, c, d, e, f, g, x, y, z with domain {0, 1}
  • O1, O2, O3, A1, A2 with domain {ok, fty}
• Constraints
  • Or1(O1,a,c,x), Or2(O2,b,d,y), Or3(O3,c,e,z) model the or-gates
  • And1(A1,x,y,f), And2(A2,y,z,g) model the and-gates
Tables (excerpts):
O1  a c x        A2  y z g
ok  1 1 1        ok  1 1 1
fty 1 1 0        fty 0 0 1
fty 1 1 1        fty 0 1 1
                 fty 1 0 1
                 fty 1 1 1

11. Example
• Hypertree Decomposition (width 2); each node shows its χ-label and λ-label
• Root: {O3,A1,c,e,f,x,y,z} / {Or3,And1}
• Children, with the variables they share with the root:
  • {O1,a,c,x} / {Or1} (shares c, x)
  • {O2,b,d,y} / {Or2} (shares y)
  • {A2,g,y,z} / {And2} (shares y, z)

12. Example
• Resulting acyclic CSP, with the same tree structure as before
• Root {O3,A1,c,e,f,x,y,z} / {Or3,And1} (excerpt):
  (ok, ok, 1, 0, 0, 0, 1, 1), (ok, ok, 1, 0, 0, 0, 0, 1), (ok, ok, 1, 0, 0, 1, 0, 1), …
• Child relations {O1,a,c,x} / {Or1}, {O2,b,d,y} / {Or2}, {A2,g,y,z} / {And2} (excerpts):
  (ok, 1, 1, 1), (fty, 1, 1, 1), (fty, 1, 0, 1), (fty, 1, 1, 0), (fty, 1, 0, 0)
  (ok, 1, 1, 1), (fty, 1, 1, 1), (fty, 1, 1, 0)
  (ok, 1, 1, 1), (fty, 1, 1, 1), (fty, 1, 1, 0)

13. Semiring-CSPs and Optimization
• Domains dom(xi)
• Variables X = x1, x2, …, xn
• Constraints C = c1, c2, …, cm
• Set S; constraint ci : dom(xi1) × … × dom(xik) → S
• Operators ⊕ (defines projection) and ⊗ (defines join) on S, with neutral elements 0 and 1
• (S, ⊕, ⊗, 0, 1) forms a semiring structure
• Type T ⊆ V specifies the variables appearing in solutions
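A small sketch of the two operators, instantiated for the max-product semiring ([0,1], max, *, 0, 1) used on the next slide (the constraint values and variable names below are illustrative):

```python
# Soft constraints are maps {frozenset of (variable, value) pairs: value in S}.
# For the max-product semiring, the join multiplies and the projection takes max.

def join(c1, c2):
    """Semiring join: combine compatible assignments, multiplying their values."""
    out = {}
    for a1, v1 in c1.items():
        for a2, v2 in c2.items():
            d1, d2 = dict(a1), dict(a2)
            if all(d1[k] == d2[k] for k in d1.keys() & d2.keys()):
                out[frozenset({**d1, **d2}.items())] = v1 * v2
    return out

def project(c, keep):
    """Semiring projection: restrict to `keep`, taking max over eliminated vars."""
    out = {}
    for a, v in c.items():
        key = frozenset((k, x) for k, x in a if k in keep)
        out[key] = max(out.get(key, 0.0), v)
    return out

# illustrative mode priors for one or-gate and one and-gate
p_o1 = {frozenset({"O1": "ok"}.items()): 0.99,
        frozenset({"O1": "fty"}.items()): 0.01}
p_a1 = {frozenset({"A1": "ok"}.items()): 0.995,
        frozenset({"A1": "fty"}.items()): 0.005}

best = project(join(p_o1, p_a1), keep={"O1"})
# best value for O1 = ok is 0.99 * 0.995
```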

14. Example
• Boolean Polycell with probabilities
  • Or-gates: p(ok) = .99, p(fty) = .01
  • And-gates: p(ok) = .995, p(fty) = .005
• Semiring ([0,1], max, *, 0, 1)
• T = {O1, O2, O3, A1, A2}
Tables (excerpts):
O1  a c x  p         A2  y z g  p
ok  1 1 1  .99       ok  1 1 1  .995
fty 1 1 0  .01       fty 0 0 1  .005
fty 1 1 1  .01       fty 0 1 1  .005
                     fty 1 0 1  .005
                     fty 1 1 1  .005

15. Example
• Resulting acyclic SCSP
• Root {O3,A1,c,e,f,x,y,z} / {Or3,And1} (excerpt), with values:
  (ok, ok, 1, 0, 0, 0, 1, 1) .985, (ok, ok, 1, 0, 0, 0, 0, 1) .985, (ok, ok, 1, 0, 0, 1, 0, 1) .985, …
• Child relations {a,c,x} / {Or1}, {b,d,y} / {Or2}, {g,y,z} / {And2} (excerpts), with values:
  (ok, 1, 1, 1) .99, (fty, 1, 1, 1) .01, (fty, 1, 0, 1) .01, (fty, 1, 1, 0) .01, (fty, 1, 0, 0) .01
  (ok, 1, 1, 1) .99, (fty, 1, 1, 1) .01, (fty, 1, 1, 0) .01
  (ok, 1, 1, 1) .99, (fty, 1, 1, 1) .01, (fty, 1, 1, 0) .01

16. Solving Tree-Structured SCSPs
• Bottom-up phase for computing values
• Top-down phase for extracting solutions
• Polynomial in the width, highly parallelizable

17. Example
• Bottom-Up Phase
• Same acyclic SCSP as on slide 15; the bottom-up phase combines each root tuple's value with the values of its best consistent child tuples; after the pass the best root tuple has value .0097
[Tables as on slide 15: root excerpt values .985, .985, …, child values .99, .01, …]

18. Bottom-Up Phase (Dynamic Programming)
Function solve(node)
  For Each tuple ∈ node.relation
    For Each child ∈ node.children
      childTuple ← findBestConsistentTuple(c(child), tuple)
      If childTuple = ∅ Then
        c(node) ← c(node) \ {tuple}
      Else
        value(tuple) ← value(tuple) ⊗ value(childTuple)
      End If
    Next child
  Next tuple
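A runnable sketch of this pseudocode under the max-product semiring (the Node class, the tiny two-node example, and realizing findBestConsistentTuple as a max over consistent child tuples are illustrative assumptions):

```python
# Hypothetical structures: a node's relation maps tuples (assignments over its
# chi-label, stored as frozensets of (variable, value) pairs) to semiring values.
class Node:
    def __init__(self, relation, children=()):
        self.relation = dict(relation)
        self.children = list(children)

def consistent(t1, t2):
    """Two tuples are consistent if they agree on their shared variables."""
    d1, d2 = dict(t1), dict(t2)
    return all(d1[k] == d2[k] for k in d1.keys() & d2.keys())

def solve(node):
    """Bottom-up phase: children first, then join each tuple's value with the
    best consistent child value; tuples without consistent support are removed."""
    for child in node.children:
        solve(child)
    for tup in list(node.relation):
        for child in node.children:
            support = [v for ct, v in child.relation.items() if consistent(tup, ct)]
            if not support:                     # findBestConsistentTuple came up empty
                del node.relation[tup]
                break
            node.relation[tup] *= max(support)  # value(tuple) := value(tuple) * best

leaf = Node({frozenset({"y": 1}.items()): 0.99,
             frozenset({"y": 0}.items()): 0.01})
root = Node({frozenset({"x": 1, "y": 1}.items()): 0.995,
             frozenset({"x": 0, "y": 2}.items()): 0.5}, [leaf])
solve(root)
# the y = 2 tuple has no consistent leaf tuple and is dropped;
# the surviving tuple's value becomes 0.995 * 0.99
```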

19. Top-Down Solution Expansion
(True, 0)
• expand {Or3,And1}: … (O3A1cxyz = ok ok 1011, 0.0097) …
• expand {And2}: … (O3A1A2cxy = ok ok ok 111, 0.0097) …
• expand {Or2}: … (O2O3A1A2cx = ok ok ok ok 11, 0.0097) …
• expand {Or1}: … (O1O2O3A1A2 = fty ok ok ok ok, 0.0097)
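A sketch of this expansion as a recursive descent (the node dictionaries and their contents below are hypothetical, not the slides' relations): pick the best tuple at the root, then in each child the best tuple consistent with what has already been fixed.

```python
# Hypothetical node structure after the bottom-up phase:
# {"relation": {tuple_of_(var, value)_pairs: semiring value}, "children": [...]}

def extract(node, fixed=None):
    """Pick the best tuple at this node that agrees with `fixed`,
    then recurse into the children to extend the assignment."""
    fixed = dict(fixed or {})
    best = max((t for t in node["relation"]
                if all(fixed.get(k, v) == v for k, v in t)),
               key=lambda t: node["relation"][t])
    fixed.update(best)
    for child in node["children"]:
        fixed = extract(child, fixed)
    return fixed

leaf = {"relation": {(("O1", "fty"),): 0.0097, (("O1", "ok"),): 0.0001},
        "children": []}
root = {"relation": {(("A1", "ok"), ("O1", "fty")): 0.0097},
        "children": [leaf]}
print(extract(root))  # the leaf's O1 = ok tuple is filtered out by consistency
```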

20. Discussion
Conclusion
• Search-free
• Highly parallelizable
• Tractable, if HT-width is bounded
Current Work
• Extension to best-first enumeration
• Extension to symbolic encoding

21. Material
• CSP Decomposition Methods

22. CSP Decomposition Methods
• Biconnected Components [Freuder '85]
• Treewidth [Robertson and Seymour '86]
• Tree Clustering [Dechter and Pearl '89]
• Cycle Cutset [Dechter '92]
• Bucket Elimination [Dechter '97]
• Tree Clustering with Minimization [Faltings '99]
• Hinge Decomposition [Gyssens and Paredaens '84]
• Hypertree Decomposition [Gottlob et al. '99]
• …
