
Distributed Markov Chains

Distributed Markov Chains. P S Thiagarajan, School of Computing, National University of Singapore. Joint work with Madhavan Mukund, Sumit K Jha and Ratul Saha. Probabilistic dynamical systems: rich variety and theories of probabilistic dynamical systems.







  1. Distributed Markov Chains P S Thiagarajan School of Computing, National University of Singapore Joint work with Madhavan Mukund, Sumit K Jha and Ratul Saha

  2. Probabilistic dynamical systems • Rich variety and theories of probabilistic dynamical systems • Markov chains, Markov Decision Processes (MDPs), Dynamic Bayesian networks • Many applications • Size of the model is a bottleneck • Can we exploit concurrency theory? • We explore this in the setting of Markov chains.

  3. Our proposal • A set of interacting sequential systems. • Synchronize on common actions.

  6. Our proposal (figure: synchronization on action a, with outcome probabilities 0.2 and 0.8) • A set of interacting sequential systems. • Synchronize on common actions. • This leads to a joint probabilistic move by the participating agents.

  10. Our proposal • A set of interacting sequential systems. • Synchronize on common actions. • This leads to a joint probabilistic move by the participating agents. • More than two agents can take part in a synchronization. • More than two probabilistic outcomes are possible. • There can also be just one agent taking part in a synchronization. • Viewed as an internal probabilistic move (as in a Markov chain) by the agent.
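The joint probabilistic move described on this slide can be sketched in a few lines. This is a hedged illustration, not the paper's formal definition: the representation (a list of `(probability, target-states)` outcomes and a `loc` tuple of participating agents) is made up for the example.

```python
import random

# Minimal sketch of a synchronized probabilistic move in a DMC-like model.
# The agents listed in `loc` synchronize; one joint outcome is drawn from a
# single distribution over tuples of their target states. Agents not in
# `loc` keep their current local state.
def joint_move(global_state, loc, outcomes, rng=random.random):
    """global_state: tuple of local states, one per agent.
    loc: indices of the participating agents.
    outcomes: list of (probability, tuple of target states for loc)."""
    r, acc = rng(), 0.0
    for p, targets in outcomes:
        acc += p
        if r <= acc:
            break  # this joint outcome is chosen
    new_state = list(global_state)
    for i, t in zip(loc, targets):
        new_state[i] = t
    return tuple(new_state)

# Agents 0 and 2 synchronize on an action with outcomes 0.2 / 0.8;
# agent 1 does not participate and is untouched.
state = joint_move(("s0", "u0", "t0"), loc=(0, 2),
                   outcomes=[(0.2, ("s1", "t1")), (0.8, ("s2", "t2"))])
```

A synchronization with a single participant (`loc` of length one) degenerates to an ordinary internal Markov-chain move, matching the last bullet of the slide.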

  11. Our proposal • This type of system has been explored by Pighizzini et al. (“Probabilistic asynchronous automata”; 1996) • Language-theoretic study. • Our key idea: • impose a “determinacy of communications” restriction. • Study formal verification problems using partial order based methods. • We study here just one simple verification method.

  12. Some notations

  13. Some notations

  14. Determinacy of communications. (figure: component i with states s, s', s'' and enabled-action sets {a})

  15. Determinacy of communications. (figure: components i and j; states s, s', s'')

  16. Determinacy of communications. (figure: components i and j synchronizing on a from state s, with targets s' and s'') loc(a) = {i, j}; (s, s'), (s, s'') ∈ en(a)

  17. Not allowed! (figure: components i, j, k with transitions from s to s' and s'') Here act(s) would have more than one action.
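The restriction in slides 14–17 can be phrased as a simple static check: in each component, every local state may be the source of transitions labelled with at most one action (several probabilistic target states for that one action are fine). A minimal sketch, with a made-up transition-list representation:

```python
# Hedged sketch of the "determinacy of communications" check suggested by
# the slides: act(s), the set of actions with a transition out of local
# state s, must be a singleton for every reachable s.
def is_determinate(transitions):
    """transitions: iterable of (source_state, action, target_state)
    triples for one component."""
    acts = {}
    for s, a, _t in transitions:
        acts.setdefault(s, set()).add(a)
    # states with no outgoing transitions trivially satisfy the condition
    return all(len(actions) == 1 for actions in acts.values())
```

Two probabilistic targets under the same action (slide 16) pass the check; two distinct actions out of one state (slide 17) fail it.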

  18. Some notations

  19. Some notations

  20. Example • Two players each toss a fair coin • If the outcome is the same, they toss again • If the outcomes are different, the one who tossed heads wins
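The coin-toss game is easy to simulate directly; this sketch is just the slide's three rules in code, with an injectable random source for testing:

```python
import random

# The two-player game from the slide: both toss a fair coin; on equal
# outcomes they toss again, otherwise the player who tossed heads wins.
def play(rng=random.random):
    while True:
        c1, c2 = rng() < 0.5, rng() < 0.5   # True means heads
        if c1 != c2:
            return 1 if c1 else 2           # index of the winning player

# By symmetry each player should win about half the time.
random.seed(0)
wins1 = sum(play() == 1 for _ in range(10_000)) / 10_000
```

The point of the example in the talk is not the game itself but that the two tosses are independent local actions, while announcing the winner is a synchronized action.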

  21. Example: a two-component DMC

  22. Interleaved semantics. Coin tosses are local actions; deciding a winner is a synchronized action.

  23. Goal • We wish to analyze the behavior of a DMC in terms of its interleaved semantics. • Follow the Markov chain route. • Construct the path space: the set of infinite paths from the initial state. • Basic cylinder: a set of infinite paths with a common finite prefix. • Close under countable unions and complements.

  24. The transition system view (figure: a transition system on states 1–4 with edge probabilities 1, 2/5 and 3/5) B – the set of all paths that have the prefix 3 4 1 3 4. Pr(B) = 1 × 2/5 × 1 × 1 = 2/5
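The cylinder probability on this slide is just the product of the transition probabilities along the defining prefix, as in the standard Markov-chain path-space construction. A minimal sketch (the dictionary encoding of the chain is an assumption; the probabilities mirror the slide's figure):

```python
from math import prod

# Probability of a basic cylinder: the product of transition probabilities
# along its finite defining prefix.
def cylinder_prob(chain, prefix):
    """chain: dict mapping (state, next_state) -> transition probability.
    prefix: the finite state sequence defining the cylinder."""
    return prod(chain[(s, t)] for s, t in zip(prefix, prefix[1:]))

# Edge probabilities taken from the slide's figure (partially recovered).
chain = {(3, 4): 1.0, (4, 1): 2/5, (4, 2): 3/5, (1, 3): 1.0}
```

With the prefix 3 4 1 3 4 this reproduces the slide's computation Pr(B) = 1 × 2/5 × 1 × 1 = 2/5.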

  25. Concurrency • Events can occur independently of each other. • Interleaved runs can be (concurrency) equivalent. • We use Mazurkiewicz trace theory to group together equivalent runs: trace paths. • Infinite trace paths do not suffice. • We work with maximal infinite trace paths.
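The grouping of equivalent interleavings can be illustrated concretely. Two events are independent when they touch disjoint sets of components, and runs differing only by swaps of adjacent independent events belong to the same Mazurkiewicz trace. A hedged sketch (event names and the `loc` map are hypothetical) computes a canonical representative by sorting adjacent independent events:

```python
# Sketch of Mazurkiewicz trace equivalence: repeatedly swap adjacent
# independent (location-disjoint) events into a fixed order; equivalent
# interleavings then map to the same canonical word.
def canonical(run, loc):
    """run: list of event names; loc[e]: set of components event e touches."""
    events, changed = list(run), True
    while changed:
        changed = False
        for i in range(len(events) - 1):
            e, f = events[i], events[i + 1]
            if loc[e].isdisjoint(loc[f]) and f < e:  # independent, out of order
                events[i], events[i + 1] = f, e
                changed = True
    return events

# In the coin example, the two local tosses t1 and h2 are independent,
# but both depend on the winner-announcement w.
loc = {"t1": {1}, "h2": {2}, "w": {1, 2}}
```

Both interleavings of the two tosses collapse to one canonical word, while the synchronized event `w` stays after them.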

  26. (figure: interleaved transition system of the coin example – from (in1, in2), toss moves t1, h1, t2, h2 with probability 0.5 each pass through states such as (T1, in2), (in1, H2), (H1, T2), (T1, H2); winner moves w1, l2 lead to (W1, L2) and (L1, W2))

  27. The trace space • A basic trace cylinder is the one generated by a finite trace • Construct the σ-algebra by closing under countable unions and complements. • We must construct a probability measure over this σ-algebra. • For a basic trace cylinder we want its probability to be the product of the probabilities of all the events in the trace.

  28. (figure: the coin example again, with B a basic trace cylinder generated by a finite trace of two tosses) Pr(B) = 0.5 × 0.5 = 0.25

  29. The probability measure over the trace space. • But proving that this extends to a unique probability measure over the whole σ-algebra is hard. • To solve this problem: • Define a Markov chain semantics for a DMC. • Construct a bijection between the maximal traces of the interleaved semantics and the infinite paths of the Markov chain semantics. • Using Foata normal form • Transport the probability measure over the path space to the trace space.
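The Foata normal form mentioned here cuts a trace into successive "steps", each a maximal set of pairwise-independent events whose dependencies all lie in earlier steps; equivalent interleavings yield the same step sequence, which is what makes the bijection to Markov-chain paths work. A hedged sketch (same hypothetical `loc` encoding of event locations as before):

```python
# Sketch of Foata normal form: place each event one step after the latest
# step containing an event it depends on (shares a component with); events
# with no earlier dependency go into the first step.
def foata(run, loc):
    """run: list of event names; loc[e]: set of components event e touches."""
    steps = []
    for e in run:
        placed = False
        for k in range(len(steps) - 1, -1, -1):   # scan latest step first
            if any(not loc[e].isdisjoint(loc[f]) for f in steps[k]):
                if k + 1 < len(steps):
                    steps[k + 1].append(e)
                else:
                    steps.append([e])
                placed = True
                break
        if not placed:
            if steps:
                steps[0].append(e)
            else:
                steps.append([e])
    return [sorted(s) for s in steps]       # sort within steps for canonicity

loc = {"t1": {1}, "h2": {2}, "w": {1, 2}}
```

For the coin example, either interleaving of the independent tosses normalizes to the same two steps: the tosses together, then the synchronized winner event.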

  30. The Markov chain semantics.

  31. The Markov chain semantics.

  32. Markov chain semantics • What if there were many players? • Parallel probabilistic moves generate a large number of global moves. • This has a bearing on simulation time.

  33. Probabilistic Product Bounded LTL Local Bounded LTL • Each component i has a local set of atomic propositions • Interpreted over Si • Formulas are built from atomic propositions using boolean connectives and bounded temporal operators

  34. Probabilistic Product Bounded LTL Local Bounded LTL • Each component has a local set of atomic propositions • Formulas are built from atomic propositions over the (local) moves of the component Product Bounded LTL • Boolean combinations of Local Bounded LTL formulas Probabilistic Product Bounded LTL • Formulas placing a probability bound on a Product Bounded LTL formula • Close under boolean combinations
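The bounded temporal operators at the heart of Bounded LTL can be evaluated over a finite run by simple recursion. The formulas on the original slides were images and are not recoverable, so this is only an illustrative sketch: the tuple-based formula syntax is made up, and only a bounded-until fragment is shown.

```python
# Hedged sketch of bounded-LTL evaluation over a finite run, position by
# position. A run is a list of sets of atomic propositions; formulas are
# nested tuples (a hypothetical encoding, not the paper's syntax):
#   ("ap", p) | ("not", f) | ("and", f, g) | ("until", k, f, g)  for f U<=k g
def holds(run, i, phi):
    op = phi[0]
    if op == "ap":
        return phi[1] in run[i]
    if op == "not":
        return not holds(run, i, phi[1])
    if op == "and":
        return holds(run, i, phi[1]) and holds(run, i, phi[2])
    if op == "until":                       # f must hold until g, within k steps
        _, k, f, g = phi
        for j in range(i, min(i + k, len(run) - 1) + 1):
            if holds(run, j, g):
                return True
            if not holds(run, j, f):
                return False
        return False
    raise ValueError(f"unknown operator {op!r}")
```

The step bound `k` is what makes each formula decidable on a finite simulation prefix, which is exactly the property the SPRT-based model checking in the next slides relies on.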

  35. PBLTL over interleaved runs • Define component projections for interleaved runs. • Define satisfaction for local BLTL formulas and for product BLTL formulas. • Use the measure on traces to define the probability of satisfying a formula.

  36. Statistical model checking…

  37. SPRT based model checking • In our setting, each local BLTL formula for a component fixes a bound on the number of steps that the component needs to make; by then one will be able to decide if the formula is satisfied or not. • A Product BLTL formula induces a vector of bounds • Simulate the system till each component meets its bound • A little tricky: we cannot try to achieve this bound greedily.
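The slide names SPRT without details; for orientation, here is a minimal sketch of Wald's sequential probability ratio test as commonly used in statistical model checking, deciding whether Pr(φ) meets a threshold θ from a stream of per-run satisfaction bits. The indifference width `delta` and error bounds `alpha`, `beta` are illustrative parameters, not values from the talk.

```python
from math import log

# Wald's SPRT sketch: H0 says the satisfaction probability is at least
# theta + delta, H1 says it is at most theta - delta. Each sample x is 1 if
# a simulated run satisfied the formula, else 0.
def sprt(samples, theta, delta, alpha=0.05, beta=0.05):
    p0, p1 = theta + delta, theta - delta
    accept0 = log(beta / (1 - alpha))        # lower stopping boundary
    accept1 = log((1 - beta) / alpha)        # upper stopping boundary
    llr = 0.0                                # log-likelihood ratio H1 vs H0
    for x in samples:
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr <= accept0:
            return "accept H0"               # probability is high enough
        if llr >= accept1:
            return "accept H1"               # probability is too low
    return "inconclusive"                    # ran out of samples
```

In the setting of the slide, each sample would come from simulating the DMC until every component has met its step bound, then evaluating the Product BLTL formula on the run.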

  38. Case study Distributed leader election protocol [Itai-Rodeh] • Identical processes in a unidirectional ring • Each process randomly chooses an id and propagates it • When a process receives an id • If it is smaller than its own, suppress the message • If it is larger than its own, drop out and forward it • If it is equal to its own, mark a collision and forward it • If you get your own message back (the message hop count equals the ring size, which is known to all processes) • If no collision was recorded, you are the leader • If a collision occurred, these nodes go to the next round.
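Compressing the message passing away, the outcome of one Itai-Rodeh round depends only on the ids chosen: a process becomes leader exactly when its id is the unique maximum on the ring, and colliding maxima trigger another round. A hedged sketch of just that winner test (the function name and list representation are hypothetical):

```python
# One-round outcome of Itai-Rodeh style leader election, abstracting away
# the ring messages: the unique maximum id wins; tied maxima collide and
# force the next round.
def elect(ids):
    """ids: the round's randomly chosen ids, one per ring process.
    Returns the leader's index, or None on a collision among maxima."""
    m = max(ids)
    winners = [i for i, x in enumerate(ids) if x == m]
    return winners[0] if len(winners) == 1 else None
```

Repeating `elect` on freshly drawn random ids until it returns an index models the protocol's round structure; the probabilistic id choices are exactly the local probabilistic moves of the DMC model.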

  39. Case study… • In the Markov chain semantics: • Initial choice of identity: a probabilistic move with one alternative per possible id • Building the global Markov chain to analyze the system is expensive • Asynchronous semantics allows interleaved exploration

  40. Case study… Distributed leader election protocol [Itai-Rodeh]

  41. Case study Dining Philosophers Problem • Philosophers (processes) at a round table • Each process tries to eat when hungry, and needs both the forks to his right and left • The steps for a process are • move from thinking to hungry • when hungry, randomly choose to try and pick up the left or right fork • wait until the chosen fork is down and then pick it up • if the other fork is free, pick it up; otherwise, put the original fork down (and return to the random choice) • eat (being in possession of both forks) • when finished eating, put both forks down in any order and return to thinking.
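The randomized step of the protocol above can be sketched as a single attempt function. This is an illustrative simulation, not the paper's model: the fork representation (a list mapping each fork to its holder, or `None` when it is down) and the function name are assumptions.

```python
import random

# One attempt by hungry philosopher i in a ring of n: pick a random side,
# take that fork if it is down, then take the other fork if free;
# otherwise put the first fork back (the randomized backoff that avoids
# deterministic deadlock).
def try_to_eat(i, n, forks, rng=random.random):
    """forks: list of length n; forks[k] is the holder of fork k or None."""
    left, right = i, (i + 1) % n
    first, second = (left, right) if rng() < 0.5 else (right, left)
    if forks[first] is not None:
        return False                 # chosen fork is taken: keep waiting
    forks[first] = i
    if forks[second] is None:
        forks[second] = i
        return True                  # holds both forks and eats
    forks[first] = None              # put the original fork down, retry later
    return False
```

The random left/right choice is a purely local probabilistic move, while fork pick-up is the kind of interaction the DMC synchronization models.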

  42. Case study… Dining Philosophers Problem

  43. Other examples • Other PRISM case studies of randomized distributed algorithms • consensus protocols, gossip protocols… • Need to “translate” shared variables using a protocol • Probabilistic choices in typical randomized protocols are local • The DMC model allows communication to influence probabilistic choices • We have not exploited this yet! • Not represented in standard PRISM benchmarks

  44. Summary and future work • The interplay between concurrency and probabilistic dynamics is subtle and challenging. • But concurrency theory may offer new tools for factorizing stochastic dynamics. • Earlier work on probabilistic event structures [Katoen et al, Abbes et al, Varacca et al] also attempts to impose probabilities on concurrent structures. • Our work shows that taking formal verification as the goal offers valuable guidelines. • Need to develop other model checking methods for DMCs. • Finite unfoldings • Stubborn sets for PCTL-like specifications.
