
Causal Message Logging (FBL)




  1. Causal Message Logging (FBL) Rohit C Fernandes 10/23/01

  2. Overview • Intuition • Family Based Logging • Manetho

  3. System Model • Fixed Number of Processes (N → N) • Reliable FIFO Communication Channels • Fail-Stop Processes • Only non-determinism is in message receive ( receive_any ) • Rollback Recovery (checkpoints and message logs)

  4. Determinant • #m = &lt;sender, receiver, ssn, rsn&gt; (ssn = send sequence number, rsn = receive sequence number)
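As a concrete illustration, the determinant can be modeled as a small immutable record; the field names follow the slide, and the comments spell out the usual reading of ssn/rsn (this sketch is mine, not from the presentation):

```python
from dataclasses import dataclass

# Sketch of a determinant #m, assuming ssn and rsn are the sequence
# numbers the two endpoints assign to message m.
@dataclass(frozen=True)
class Determinant:
    sender: int    # id of the sending process
    receiver: int  # id of the delivering process
    ssn: int       # send sequence number (assigned by the sender)
    rsn: int       # receive sequence number (delivery order at the receiver)

d = Determinant(sender=0, receiver=1, ssn=5, rsn=3)
```

Freezing the dataclass makes determinants hashable, so they can be stored in sets and used as dictionary keys, which the later protocol sketches rely on.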

  5. Requirements of logging protocol • Tolerate up to f simultaneous failures • Low failure-free overhead • Only failed processes roll back • (What about optimistic and pessimistic protocols?)

  6. Causal Logging : Intuition • Piggyback determinant of non-deterministic event on outgoing messages • Determinant? • How to control the amount of piggybacking?
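A minimal sketch of the piggybacking idea, with hypothetical message and log representations (plain dicts and sets of hashable determinants):

```python
# Sketch: a sender attaches its unstable determinants to every outgoing
# message; each process that delivers the message becomes one more place
# where those determinants are logged.
def make_message(payload, unstable_dets):
    return {"payload": payload, "piggyback": set(unstable_dets)}

def deliver(msg, local_det_log):
    # the receiver logs every piggybacked determinant it sees
    local_det_log |= msg["piggyback"]
    return msg["payload"]
```

The open question on the slide is exactly what this sketch ignores: without some bound, `unstable_dets` grows with the causal history, which is what the rest of the talk addresses.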

  7. Controlling Piggyback Size • To recover from f failures, we need to store the determinant at f+1 places • f < 3 : easy • Recovering from 3 failures (?) • Clearly not optimal
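The f+1 requirement is a pigeonhole argument: if #m is replicated at f+1 processes, no set of f simultaneous crashes can erase every copy. A small exhaustive check, with assumed process ids:

```python
from itertools import combinations

f = 2
logging_sites = {0, 1, 2}                    # f + 1 = 3 copies of #m
for crashed in combinations(range(5), f):    # every possible f-crash
    # at most f of the f+1 logging sites can be among the crashed set
    assert logging_sites - set(crashed), "a copy of #m survives"
```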

  8. f=1

  9. f=2

  10. f=3?

  11. Definitions • Depend(m) = set of processes whose state causally depends on the delivery of m • Log(m) = set of processes where #m is logged • Stable(#m) : #m cannot be lost because of crashes

  12. No Orphan Condition • ∀m: (|Log(m)| ≤ f) ⟹ Depend(m) ⊆ Log(m) • In practice, Depend(m) = Log(m) if |Log(m)| ≤ f — WHY?
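The condition can be checked as a predicate over per-message sets; a sketch assuming `depend` and `log` map each message to a set of process ids:

```python
def no_orphans(depend, log, f):
    # For every message m: either #m is already stable (logged at more
    # than f places, so no f crashes can lose it), or every process that
    # depends on m also logs #m.
    return all(len(log[m]) > f or depend[m] <= log[m] for m in depend)
```

The third case below is the orphan risk the condition rules out: process 3 depends on m1 but the determinant lives at only two places.

```python
no_orphans({"m1": {0, 1}}, {"m1": {0, 1, 2}}, f=2)   # stable: True
no_orphans({"m1": {0, 1}}, {"m1": {0, 1}}, f=2)      # Depend ⊆ Log: True
no_orphans({"m1": {0, 1, 3}}, {"m1": {0, 1}}, f=2)   # orphan risk: False
```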

  13. Definitions • DLp : Determinant Log of process p • UnstableDLp : subset of DLp which p does not know to be stable • UnstableDLp(q) : set of determinants in UnstableDLp that p knows q does not already have
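These definitions are plain set differences; a sketch with hypothetical inputs (stable_p is what p knows to be stable, known_at_q is p's view of q's determinant log):

```python
def unstable_dl(dl_p, stable_p):
    # UnstableDL_p : determinants p holds but does not know to be stable
    return dl_p - stable_p

def unstable_dl_for(dl_p, stable_p, known_at_q):
    # UnstableDL_p(q) : those p also believes q is still missing
    return unstable_dl(dl_p, stable_p) - known_at_q
```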

  14. FBL protocol idea • How can p determine #m is not stable? • How can p determine if q has #m? • Piggyback extra information • Log(m)p : p’s estimated value of Log(m) • Important : Log(m)p must never overestimate Log(m)

  15. Inferring information from #m • When q receives #m from p, q can infer that Log(m) contains p, q and m.dest • Det : Protocol in which process p piggybacks the determinants in UnstableDLp(q)
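This inference rule can be sketched directly; est_log (mapping each determinant to an estimated Log(m)) is a hypothetical name of my own:

```python
from collections import namedtuple

Det = namedtuple("Det", "sender receiver ssn rsn")

def infer_on_receive(est_log, det, p, q):
    # Det-protocol inference: when q receives #m piggybacked from p,
    # it learns that p, q, and m's destination all hold #m.
    est_log.setdefault(det, set()).update({p, q, det.receiver})
    return est_log

est = infer_on_receive({}, Det(0, 2, 1, 1), p=0, q=1)
```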

  16. Inferring Information from Log(m) p • q can infer that Log(m)  Log(m) p • When received for the first time, q can infer that Log(m)  Log(m) p +1 • Log : Protocol where p piggybacks #m and Log(m) p for each #m in UnstableDLp(q)

  17. Inferring Information from Log(m)p • q can infer that Log(m) ⊇ Log(m)p ∪ Log(m)q • ⊕Log : Protocol where p piggybacks #m and the set Log(m)p for each #m in UnstableDLp(q)

  18. 2 more protocols • Log+ : Process p piggybacks the same data as Log, except that if Log(m)p has increased since the last time p piggybacked #m to q, #m and Log(m)p are piggybacked again on the next message m’ • ⊕Log+ : the analogous extension of ⊕Log

  19. Comparison :Det, Log, Log , Log+, Log+ • For f<3, Det logs no more determinants than the other protocols • Trade-off between extra information piggybacked per message and unnecessary copies of #m logged • Trade-off depends on applications pattern of communication

  20. Overhead Measurements • D : Number of determinants in UnstableDLp(q) • N : Number of determinants in DLp • w : Number of words required to encode a determinant

  21. Overheads • Det : Dw words • Log : D(w+1) words • ⊕Log : up to D(w+f) words • Log+ : up to N(w+1) words (?) • ⊕Log+ : up to N(w+f) words (?)

  22. Intuition for compacting • How to estimate Log(m)? • Idea : Try estimating Depend(m) instead • Intuition : Vector Clocks

  23. Dependence Matrix • Each process p maintains an n × n matrix DMatp • DMatp[p,*] : Vector clock of process p • DMatp[q,*] : p’s estimate of the vector clock of process q

  24. Update Rules • On receipt of a message m at p from q: • p generates #m • DMatp[p,p]++ • DMatp[p,*] = max(VCm, DMatp[p,*]) • DMatp[q,*] = max(VCm, DMatp[q,*]) • DMatp[i,i] = max(VCm[i], DMatp[i,i]) for all i • Now, given #m in DLp, what is Log(m)p ? • Log(m)p = {q | DMatp[q, m.dest] ≥ m.rsn} • Weak Dependency Vectors (Clocks)
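The update rules and the resulting Log(m)p estimate can be sketched as follows, with DMat a plain n×n list of lists and vc_m the vector clock piggybacked on m (a sketch under those assumptions, not the paper's implementation):

```python
def on_receive(dmat, p, q, vc_m):
    # p delivers a message m from q; vc_m is the vector clock on m
    n = len(dmat)
    dmat[p][p] += 1                              # DMat_p[p,p]++
    for i in range(n):
        dmat[p][i] = max(dmat[p][i], vc_m[i])    # DMat_p[p,*] = max(VC_m, .)
        dmat[q][i] = max(dmat[q][i], vc_m[i])    # DMat_p[q,*] = max(VC_m, .)
        dmat[i][i] = max(dmat[i][i], vc_m[i])    # diagonal entries

def log_estimate(dmat, dest, rsn):
    # Log(m)_p = {q | DMat_p[q, m.dest] >= m.rsn}
    return {q for q in range(len(dmat)) if dmat[q][dest] >= rsn}
```

Because every row in DMat is a lower bound on the corresponding process's real vector clock, `log_estimate` never overestimates Log(m), which is exactly the safety property slide 14 requires.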

  25. Implementation • Log+ : Piggyback DMatp on outgoing messages • DMatp can be used to estimate Log(m)p for all messages m for which p is a member of Depend(m) • ⊕Log+ : Need only piggyback an n × f matrix called the stability matrix

  26. Comparing Piggyback Overheads • Det : Dw words • Log : D(w+1) words • ⊕Log : up to D(w+f) words • ⊕Log+ : (D+nf)w words • Log+ : (D+n²)w words
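Plugging assumed sample values into these formulas makes the comparison concrete (D, n, f, w as defined on the earlier slides; the numbers are purely illustrative):

```python
D, n, f, w = 4, 8, 2, 4   # assumed sample values, words per determinant w = 4

overheads = {
    "Det":       D * w,             # 16 words: determinants only
    "Log":       D * (w + 1),       # 20 words: plus |Log(m)p| per determinant
    "oplusLog":  D * (w + f),       # up to 24 words: plus the set Log(m)p
    "oplusLog+": (D + n * f) * w,   # 80 words: plus the n x f stability matrix
    "Log+":      (D + n * n) * w,   # 272 words: plus the full n x n DMat
}
```

The matrix-based variants pay a fixed per-message cost (n·f or n² words) in exchange for never re-piggybacking individual determinants, so which side of the trade-off wins depends on D relative to n.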

  27. Experimental Results

  28. Pairwise comparison
