
  1. Computer Science 425 Distributed Systems (Fall 2009) Lecture 6: Multicast/Group Communication and Mutual Exclusion. Sections 15.2.2 & 12.2

  2. Acknowledgement • The slides used during this semester are based on ideas and material from the following sources: • Slides prepared by Professors M. Harandi, J. Hou, I. Gupta, N. Vaidya, Y-Ch. Hu, S. Mitra. • Slides from Professor S. Ghosh's course at the University of Iowa.

  3. Administrative • Homework 1 posted September 3 (Thursday) • Deadline: September 17 (Thursday) • MP1 posted September 8 (Tuesday) • Deadline: September 25 (Friday), 4-6pm demonstrations • MP1 tutorial September 14 (Monday), 7pm, 3401 Siebel Center

  4. Few Comments: VT and its Usage (Causal Ordering) [Figure: space-time diagram of processes P1, P2, P3, each starting with vector timestamp (0,0,0); messages carrying timestamps (1,0,0) and (1,1,0) are multicast, and each process's clock advances to (1,0,0) and then (1,1,0) as the messages are delivered in causal order]
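
As a quick refresher on how the vector timestamps above are used, here is a minimal Python sketch of the causal-ordering delivery test; the function names can_deliver and on_deliver are illustrative, not from the lecture:

    # Minimal sketch of causal-order delivery using vector timestamps (illustrative only).
    def can_deliver(msg_vt, sender, local_vt):
        # Deliver a message stamped msg_vt from 'sender' only if it is the next message
        # expected from that sender and no causally earlier message is still missing.
        if msg_vt[sender] != local_vt[sender] + 1:
            return False
        return all(msg_vt[k] <= local_vt[k] for k in range(len(local_vt)) if k != sender)

    def on_deliver(sender, local_vt):
        local_vt[sender] += 1   # account for the message just delivered

    # P3 (index 2) starts at (0,0,0) and receives P2's (1,1,0) before P1's (1,0,0):
    vt = [0, 0, 0]
    print(can_deliver([1, 1, 0], 1, vt))   # False: (1,0,0) from P1 not yet delivered
    print(can_deliver([1, 0, 0], 0, vt))   # True: deliver P1's message first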

  5. Plan for Today • Group Communication and Group Membership • Group View • Process View • View Synchrony • Mutual Exclusion • Semaphores review • Coordinator-based algorithm • Token Ring algorithm

  6. Group Communication • Multicast is also known as group communication because process groups are destinations of multicast messages • Multicast algorithms rely on groups • Open group: anyone can join (e.g., customers in Best Buy Store) • Closed group: membership is closed (e.g., class of 2010) • Before, we assumed membership of groups was statically defined • Practically, dynamic membership is required since members can join and leave

  7. Group Membership • Implementation of group communication • Needs a group membership service to manage the dynamic membership of groups, in addition to multicast communication • Multicast and group membership management are strongly interrelated. • Role of group membership • Providing an interface for group membership changes • Operations to create and destroy process groups • Operations to add processes to, or withdraw them from, a group • Implementing a failure detector • Operation to detect that a group member has crashed or left • Notifying members of group membership changes • Operation to notify members when a process joins or leaves • Performing group address expansion • Operation to map (expand) a group ID to the list of process IDs of its members
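
The four roles above can be read as an interface. The following Python sketch is only illustrative (the class name GroupMembershipService and its methods are assumptions, not an API from the lecture or the textbook):

    # Illustrative sketch of a group membership service interface.
    class GroupMembershipService:
        def __init__(self):
            self.groups = {}                 # group ID -> set of process IDs

        def create_group(self, gid):
            self.groups[gid] = set()

        def destroy_group(self, gid):
            del self.groups[gid]

        def join(self, gid, pid):
            self.groups[gid].add(pid)
            self.notify_view_change(gid)     # members learn of the new view

        def leave(self, gid, pid):           # also invoked by the failure detector
            self.groups[gid].discard(pid)
            self.notify_view_change(gid)

        def expand(self, gid):
            # Group address expansion: map a group ID to the current list of members.
            return sorted(self.groups[gid])

        def notify_view_change(self, gid):
            # In a real system the new view would be disseminated to all members,
            # e.g. via totally ordered multicast (see the following slides).
            print("new view of", gid, "=", self.expand(gid))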

  8. Group Communication • "Member" = process (e.g., RM) • Static groups: group membership is pre-defined • Dynamic groups: members may join and leave, as necessary [Figure: a Group Membership Management component handles Join, Leave, and Fail events and performs group address expansion; group sends are handed to a Multicast Communication component, which delivers to the expanded list of members]

  9. Group Views • Group view • The list of current group members • Maintained by the group membership service • View Vp(g) of a process p • Its current knowledge of the membership • Each member maintains its own local view • Example: Vp.0(g) = {p}, Vp.1(g) = {p, q}, Vp.2(g) = {p, q, r}, Vp.3(g) = {p, r} • A new group view is disseminated throughout the group whenever a member joins or leaves. • A member detecting the failure of another member reliably multicasts a "view change" message (requires causal-total ordering for multicasts)

  10. Inconsistent Views can Lead to Problems • Four members (0,1,2,3) are to send out 144 emails, divided equally among them • Assume that member 3 has left the group but only member 2 knows about it • 0 will send 144/4 = 36 emails (the first quarter, 1-36) • 1 will send 144/4 = 36 emails (the second quarter, 37-72) • 2 will send 144/3 = 48 emails (the last third, 97-144) • 3 has left, so emails 73-96 will never be sent!
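
The gap is easy to check mechanically. In this illustrative Python sketch, each member divides the 144 emails according to its own (possibly stale) view and sends the slice at its own position in that view:

    # Each member computes its slice of the 144 emails from its own view of the group.
    def my_slice(view, me, total=144):
        share = total // len(view)              # emails per member in *my* view
        index = sorted(view).index(me)          # my position within my view
        return range(index * share + 1, (index + 1) * share + 1)

    sent = set()
    sent.update(my_slice({0, 1, 2, 3}, 0))      # member 0 sends 1-36
    sent.update(my_slice({0, 1, 2, 3}, 1))      # member 1 sends 37-72
    sent.update(my_slice({0, 1, 2}, 2))         # member 2 (view without 3) sends 97-144
    print(sorted(set(range(1, 145)) - sent))    # emails 73-96 are never sent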

  11. Dealing with Open Groups • The view can change unpredictably, and no member may have exact information about who joined and who left at any given time • Views and their changes should propagate in the same order to all members • Current view (of all processes) V0(g) = {0,1,2,3} • Let 1 and 2 leave and 4 join the group concurrently • This view change can be ordered in many ways: • {0,1,2,3}, {0,1,3}, {0,3,4} OR • {0,1,2,3}, {0,2,3}, {0,3}, {0,3,4} OR • {0,1,2,3}, {0,3}, {0,3,4} • To ensure that every member observes these changes in the same order, changes in the view should be sent via TOTAL ORDER MULTICAST

  12. View Propagation • {Process 0}: deliver V0(g) = {0,1,2,3}; send m1, …; deliver V1(g) = {0,1,3}; send m2, send m3, …; deliver V2(g) = {0,3,4} • {Process 1}: deliver V0(g) = {0,1,2,3}; send m4, send m5; deliver V1(g) = {0,3}; send m6, …; deliver V2(g) = {0,3,4}

  13. View Delivery Guidelines • If a process j joins and continues its membership in group g that already contains process i, then eventually j appears in all views delivered by process i • If a process j permanently leaves group g that contains process i, then eventually j is excluded from all views delivered by process i

  14. Group Views • Requirements for view delivery • Order: If p delivers Vi(g) and then Vi+1(g), then no other process q delivers Vi+1(g) before Vi(g). • Integrity: If p delivers Vi(g), then p is in Vi(g). • Non-triviality: If process q joins a group and becomes reachable from process p, then eventually q will always be present in the views that are delivered at p.

  15. View-Synchronous Communication • Rule: with respect to each message, all correct processes have the same view • m sent in view V → m delivered in view V • View-synchronous communication = group membership service + reliable multicast

  16. View-Synchronous Communication • Integrity: If a process j delivers a view Vi(g), then Vi(g) must include j. • Validity: If a process k delivers a message m with k ∈ Vi(g), and another process j ∈ Vi(g) does not deliver that message m, then the next view Vi+1(g) delivered by k must exclude j. • Agreement: If a correct process k delivers a message m and Vi(g) with k ∈ Vi(g) before delivering the next view Vi+1(g), then every correct process j ∈ Vi(g) ∩ Vi+1(g) must deliver m before delivering Vi+1(g).

  17. Example • Let process 1 send out m and then crash • Possibility 1. No one delivers m, but each delivers the new view {0,2,3} • Possibility 2. Processes 0,2,3 deliver m and then deliver new view {0,2,3} • Possibility 3. Processes 2, 3 deliver m and then deliver new view {0,2,3}, but process 0 first delivers the view {0,2,3} and then delivers m. • Are these acceptable?

  18. Example: View Synchronous Communication [Figure: four scenarios with processes p, q, and r, each moving from view V(p,q,r) to view V(q,r) after p multicasts a message m and then crashes (X). Scenarios in which both q and r deliver m before delivering the new view V(q,r), or in which neither delivers m, are Allowed; scenarios in which only one of q, r delivers m, or in which m is delivered after the new view V(q,r), are Not Allowed]

  19. Comment about View Synchrony • When a new process joins, state transfer may be needed (at the view delivery point) to bring it up to date • "state" may be the list of all messages delivered so far (wasteful) • "state" could be the list of current server object values (e.g., in a bank database) • View Synchrony = "Virtual Synchrony" • Provides an abstraction of a synchronous network that hides the asynchrony of the underlying network from distributed applications • Used in the ISIS toolkit (NY Stock Exchange)

  20. Distributed Mutual Exclusion

  21. Why Mutual Exclusion? • Bank Database: Think of two simultaneous deposits of $10,000 into your bank account, each from one ATM. • Both ATMs read initial amount of $1000 concurrently from the bank server • Both ATMs add $10,000 to this amount (locally at the ATM) • Both write the final amount to the server • What’s wrong?

  22. Why Mutual Exclusion? • Bank Database: Think of two simultaneous deposits of $10,000 into your bank account, each from one ATM. • Both ATMs read initial amount of $1000 concurrently from the bank server • Both ATMs add $10,000 to this amount (locally at the ATM) • Both write the final amount to the server • What’s wrong? • ATMs need mutually exclusive access to your account entry at the server (or, to executing the code that modifies the account entry)
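
A minimal Python sketch of this lost update, using two threads in place of the two ATMs (illustrative only; the sleep merely widens the race window so the interleaving is reproducible):

    # Two unsynchronized "ATMs" deposit into the same account: one deposit is lost.
    import threading, time

    balance = 1000

    def deposit(amount):
        global balance
        local = balance            # read the current amount from the "server"
        time.sleep(0.01)           # the other ATM reads in the meantime
        balance = local + amount   # write back, overwriting the other deposit

    atm1 = threading.Thread(target=deposit, args=(10000,))
    atm2 = threading.Thread(target=deposit, args=(10000,))
    atm1.start(); atm2.start()
    atm1.join(); atm2.join()
    print(balance)                 # typically 11000, not the expected 21000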

  23. Mutual Exclusion • Critical section (CS) problem: Piece of code (at all clients) for which we need to ensure there is at most one client executing it at any point of time. • Solutions: • Semaphores, mutexes, etc. in local operating systems • Message-passing-based protocols in distributed systems: • enter() the critical section • AccessResource() in the critical section • exit() the critical section • Distributed mutual exclusion requirements: • Safety – At most one process may execute in CS at any time • Liveness – Every request for a CS is eventually granted • Fairness/Ordering (desirable) – Requests are granted in FIFO order

  24. Refresher - Semaphores • To synchronize access of multiple threads to common data structures • A semaphore is a protected variable with two operations • init: semaphore S = 1; • wait(S): while(1) { // each iteration of the loop body executes atomically if (S > 0) { S--; break; } } • signal(S): S++; // atomic

  25. Refresher - Semaphores • To synchronize access of multiple threads to common data structures • init: semaphore S = 1; • wait(S): while(1) { // each iteration of the loop body executes atomically if (S > 0) { S--; break; } } • signal(S): S++; // atomic • Usage pattern: enter() // enter critical region … access the resources … exit() // exit critical region

  26. How are semaphores used? One use: Mutual Exclusion – the bank ATM example. semaphore S = 1; ATM1: wait(S); // enter CS obtain bank amount; add in deposit; update bank amount; signal(S); // exit ATM2: extern semaphore S; wait(S); // enter CS obtain bank amount; add in deposit; update bank amount; signal(S); // exit
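
For comparison, here is a runnable version of the same example in Python, with threading.Semaphore standing in for the pseudocode semaphore (a sketch under that assumption, not the lecture's code):

    # Bank ATM example with a binary semaphore protecting the critical section.
    import threading

    S = threading.Semaphore(1)      # semaphore S = 1
    balance = 1000

    def deposit(amount):
        global balance
        S.acquire()                 # wait(S): enter CS
        local = balance             # obtain bank amount
        balance = local + amount    # add in deposit; update bank amount
        S.release()                 # signal(S): exit CS

    atms = [threading.Thread(target=deposit, args=(10000,)) for _ in range(2)]
    for t in atms: t.start()
    for t in atms: t.join()
    print(balance)                  # 21000: both deposits are applied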

  27. Distributed Mutual Exclusion • Neither shared variables nor facilities supplied by a single kernel can be used • Requires solutions to distributed mutual exclusion based solely on message passing!

  28. Distributed Mutual Exclusion: Performance Evaluation Criteria • Bandwidth: the total number of messages sent in each entry and exit operation. • Delay: • Client delay: the delay incurred by a process at each entry and exit operation (when no other process is waiting) • Synchronization delay: the time interval between one process exiting the critical section and the next process entering it (when there is only one process waiting) • These translate into throughput -- the rate at which processes can access the critical section, i.e., x processes per second. (These definitions are more precise than the ones in the textbook.)

  29. Assumptions • For all the algorithms studied, we assume • Reliable FIFO channels between every process pair • Each pair of processes is connected by reliable channels (such as TCP). Messages are eventually delivered to recipients’ input buffer in FIFO order. • Processes do not fail.

  30. Centralized Control of Distributed Mutual Exclusion • A central coordinator or leader L • Is appointed or elected • Grants permission to enter CS & keeps a queue of requests to enter the CS. • Ensures only one process at a time can access the CS • Separate handling of different CS’s

  31. Centralized Control of Distributed ME • Operations (the token gives access to the CS) • To enter a CS: send a request to L and wait for the token. • On exiting the CS: send a message to the coordinator to release the token. • Upon receipt of a request, if no other process holds the token, L replies with the token; otherwise, L queues the request. • Upon receipt of a release message, L removes the oldest entry in the queue (if any) and replies with the token. • Features: • Safety, liveness and ordering are guaranteed • It takes 3 messages per entry + exit operation. • Client delay: one round-trip time (request + grant) • Synchronization delay: one round-trip time (release + grant) • The coordinator becomes a performance bottleneck and a single point of failure.
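
A compact Python sketch of the coordinator's logic (illustrative only; message transmission is abstracted as a send(pid, msg) callback, which is not part of the lecture's material):

    # Central coordinator L for distributed mutual exclusion (messages abstracted by 'send').
    from collections import deque

    class Coordinator:
        def __init__(self, send):
            self.send = send               # send(pid, msg): deliver msg to process pid
            self.queue = deque()           # FIFO queue of waiting requesters
            self.holder = None             # process currently holding the token, if any

        def on_request(self, pid):
            if self.holder is None:
                self.holder = pid
                self.send(pid, "token")    # grant immediately
            else:
                self.queue.append(pid)     # otherwise queue the request

        def on_release(self, pid):
            self.holder = None
            if self.queue:                 # hand the token to the oldest waiter
                nxt = self.queue.popleft()
                self.holder = nxt
                self.send(nxt, "token")

    # Tiny demonstration with a 'send' that just prints each grant.
    L = Coordinator(lambda pid, msg: print(msg, "->", pid))
    L.on_request(1); L.on_request(2)       # process 1 gets the token, 2 is queued
    L.on_release(1)                        # token passes to process 2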

  32. Token Ring Approach • Processes are organized in a logical unidirectional ring: pi has a communication channel to p(i+1) mod (n+1). • Operations: • Only the process holding the token can enter the CS. • To enter the critical section, wait for the token. • To exit the CS, pi sends the token on to its neighbor. • If a process does not want to enter the CS when it receives the token, it forwards the token to the next neighbor. • Features: • Safety and liveness are guaranteed, but ordering is not. • Bandwidth: 1 message per exit • Client delay: 0 to (N+1) message transmissions. • Synchronization delay between one process's exit from the CS and the next process's entry: between 1 and N message transmissions. [Figure: processes P0, P1, …, Pn arranged in a ring around which the token circulates; labels mark the previous, current, and next holders of the token]
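
A sketch of a single ring member's behaviour, again in Python with sending abstracted (class and method names are illustrative; a ring of n processes p0 … p(n-1) is assumed here for simplicity):

    # One member of the token ring: the token circulates; only the holder may enter the CS.
    class RingProcess:
        def __init__(self, i, n, send):
            self.i, self.n = i, n
            self.send = send                # send(j, msg): channel to process pj
            self.wants_cs = False

        def request_cs(self):
            self.wants_cs = True            # simply wait for the token to arrive

        def on_token(self):
            if self.wants_cs:               # at most one process holds the token,
                self.enter_cs()             # so mutual exclusion is guaranteed
                self.wants_cs = False
            self.send((self.i + 1) % self.n, "token")   # forward to the ring neighbour

        def enter_cs(self):
            print("process", self.i, "in the critical section")

    # Tiny demonstration with sending replaced by a print (no real channel).
    p1 = RingProcess(1, 3, lambda j, msg: print(msg, "-> p%d" % j))
    p1.request_cs()
    p1.on_token()    # p1 enters its CS, then forwards the token to p2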

  33. Summary • Discussed the relation between multicast and group communication • Important issue of view synchrony • Important for MP1: please consider carefully the group membership protocol and view synchrony issues when designing Chat's membership service • Mutual exclusion • Review the semaphore concept from CS 241 or another OS class • Semaphores alone will not do the trick of protecting critical regions in distributed systems • Need much more complex message-passing algorithms to ensure distributed mutual exclusion • Next time • More complex distributed mutual exclusion algorithms • Leader election algorithms
