
Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications


Presentation Transcript


  1. Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek and Hari Balakrishnan MIT Laboratory for Computer Science SIGCOMM Proceedings, 2001 http://pdos.lcs.mit.edu/chord/

  2. Chord Contribution • Chord is a scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures.

  3. Peer-to-peer Application Characteristics • Direct connection, without a central point of management • Large scale • Concurrent node join • Concurrent node leave

  4. The Lookup Problem • A publisher stores a data item (Key=“Ma Yo-yo”, Value=Sonata…) at some node among N1..N6; a client elsewhere on the Internet issues Lookup(“Ma Yo-yo”) • The problem: which node holds the key?

  5. Some Approaches • Napster (DNS): centralized lookup • Gnutella: flooded queries • Freenet, Chord: decentralized, routed queries

  6. Centralized lookup (Napster) • Simple • Least info maintained in each node • A single point of failure • [Diagram: 1. N7 registers “Ma Yo-yo” with the DB server 2. Client: Lookup(“Ma Yo-yo”) 3. DB server: Here (“Ma Yo-yo”, N7) 4. Client downloads from N7]

  7. Flooded queries (Gnutella) • Robust • Maintain neighbor state in each node • Flooded messages over the network • [Diagram: the client’s Lookup(“Ma Yo-yo”) is flooded to neighbors N1..N8 until N7, which stores the key, replies Here (“Ma Yo-yo”, N7)]

  8. Routed queries (Freenet, Chord) • Decentralized • Maintain route info in each node • Scalable • [Diagram: the client’s Lookup(“Ma Yo-yo”) is forwarded node by node along routing state until N7 replies Here (“Ma Yo-yo”, N7)]

  9. Chord • Basic protocol • Node Joins • Stabilization • Failures and Replication

  10. Chord Properties • Assumption: no malicious participants • Efficiency: O(log(N)) messages per lookup • N is the total number of servers • Scalability: O(log(N)) state per node • Robustness: survives massive failures • Load balanced: spreading keys evenly

  11. Basic Protocol • Main operation: given a key, map it onto a node • Consistent hashing • Key identifier = SHA-1(title) • Node identifier = SHA-1(IP) • Find successor(key): maps key IDs to node IDs
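
A minimal sketch of this identifier scheme in Python (illustrative only; the names M, key_id, and node_id are mine, and reducing to a small m-bit ring is for readability — Chord itself keeps all 160 bits of SHA-1):

    import hashlib

    M = 7  # identifier bits for the toy ring used in these sketches

    def key_id(title: str) -> int:
        """Key identifier = SHA-1(title), reduced to the 2^M ring."""
        return int(hashlib.sha1(title.encode()).hexdigest(), 16) % (2 ** M)

    def node_id(ip: str) -> int:
        """Node identifier = SHA-1(IP address), reduced to the 2^M ring."""
        return int(hashlib.sha1(ip.encode()).hexdigest(), 16) % (2 ** M)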

  12. Consistent Hashing [Karger 97] • Target: web page caching • Like normal hashing, assigns items to buckets so that each bucket receives roughly the same number of items • Unlike normal hashing, a small change in the bucket set does not induce a total remapping of items to buckets

  13. Consistent Hashing • Circular 7-bit ID space: 2^7 = 128 identifiers • A key is stored at its successor: the node with the next-higher (or equal) ID • [Diagram: nodes N32, N90, N105 with keys K5 (“Key 5”), K20, K80 on the circle; K5 and K20 are stored at N32, K80 at N90, and key 105 at N105 itself]
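
A sketch of the storage rule under the same assumptions (the bisect-based successor() helper is mine, not the slide’s):

    import bisect

    def successor(node_ids: list[int], key: int, m: int = 7) -> int:
        """The node responsible for `key`: the first node ID >= key,
        wrapping around the 2^m circle."""
        ids = sorted(node_ids)
        i = bisect.bisect_left(ids, key % (2 ** m))
        return ids[i % len(ids)]  # wrap to the lowest ID past the top

    # The slide's ring: nodes N32, N90, N105 on the 7-bit circle.
    nodes = [32, 90, 105]
    assert successor(nodes, 5) == 32     # K5 -> N32
    assert successor(nodes, 20) == 32    # K20 -> N32
    assert successor(nodes, 80) == 90    # K80 -> N90
    assert successor(nodes, 105) == 105  # a key with ID 105 -> N105 itself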

  14. Basic Lookup • Inefficient: O(N) • [Diagram: on a ring of N10, N32, N60, N90, N105, N120, node N10 asks “Where is key 80?”; the query follows successor pointers through N32 and N60 to N90, which holds K80 and replies “N90 has K80”]
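
A sketch of this O(N) scheme, assuming each node knows only its successor (the Node class and the circular-interval helper are illustrative, not the paper’s pseudocode):

    class Node:
        def __init__(self, ident: int):
            self.id = ident
            self.successor: "Node" = self  # a lone node is its own successor
            self.keys: dict = {}           # key ID -> value stored at this node

    def in_half_open(x: int, a: int, b: int, m: int = 7) -> bool:
        """True if x lies in the circular interval (a, b] on the 2^m ring."""
        a, b, x = a % 2**m, b % 2**m, x % 2**m
        if a < b:
            return a < x <= b
        return x > a or x <= b  # interval wraps past zero (or spans the whole ring)

    def basic_lookup(start: Node, key: int) -> Node:
        """Follow successor pointers one hop at a time: O(N) hops worst case."""
        n = start
        while not in_half_open(key, n.id, n.successor.id):
            n = n.successor
        return n.successor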

  15. Finger Table (1) • Speeds up lookups • A routing table with m entries, called fingers • At node n, the i-th finger points to the successor of (n + 2^(i-1)) mod 2^m, for 1 ≦ i ≦ m • Allows O(log N)-time lookups

  16. Finger Table (2) • Example for node n = 1 with m = 3 • Finger starts: (1 + 2^(1-1)) mod 2^3 = 2, (1 + 2^(2-1)) mod 2^3 = 3, (1 + 2^(3-1)) mod 2^3 = 5 • Each finger points to the successor of its start
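
The same arithmetic in code, reusing the successor() helper above (function names are mine):

    def finger_starts(n: int, m: int) -> list[int]:
        """Start of the i-th finger interval: (n + 2^(i-1)) mod 2^m, 1 <= i <= m."""
        return [(n + 2 ** (i - 1)) % (2 ** m) for i in range(1, m + 1)]

    def build_finger_table(n: int, node_ids: list[int], m: int) -> list[int]:
        """The i-th finger is the successor of the i-th start."""
        return [successor(node_ids, s, m) for s in finger_starts(n, m)]

    assert finger_starts(1, 3) == [2, 3, 5]  # matches the slide's computation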

  17. Lookups take O(log(N)) hops • [Diagram: ring of N5, N10, N20, N32, N60, N80, N99, N110; a Lookup(K19) issued at N80 follows finger pointers, roughly halving the remaining distance each hop, until it reaches N20, the successor of key 19]
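
A sketch of finger-based routing, extending the Node above (the greedy closest-preceding-finger rule is the paper’s; the class layout and helper names are mine):

    def in_open(x: int, a: int, b: int, m: int = 7) -> bool:
        """True if x lies in the circular open interval (a, b)."""
        return x % 2**m != b % 2**m and in_half_open(x, a, b, m)

    class FingerNode(Node):
        def __init__(self, ident: int, m: int = 7):
            super().__init__(ident)
            self.m = m
            self.fingers: list["FingerNode"] = []  # fingers[i-1] = successor(n + 2^(i-1))

        def closest_preceding_finger(self, key: int) -> "FingerNode":
            """Highest finger lying strictly between this node and the key."""
            for f in reversed(self.fingers):
                if in_open(f.id, self.id, key, self.m):
                    return f
            return self

        def find_successor(self, key: int) -> "FingerNode":
            """Each greedy hop roughly halves the remaining ring distance,
            so a lookup takes O(log N) hops."""
            n = self
            while not in_half_open(key, n.id, n.successor.id, n.m):
                nxt = n.closest_preceding_finger(key)
                n = nxt if nxt is not n else n.successor  # ensure progress
            return n.successor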

  18. Node Join (1) • N36 joins a ring containing N25 and N40 (which stores K30 and K38) • Step 1: N36 performs Lookup(36) to find its place on the ring

  19. Node Join (2) • Step 2: N36 sets its own successor pointer to N40

  20. Node Join (3) • Step 3: Copy keys 26..36 from N40 to N36 (K30 moves to N36; K38 stays at N40)

  21. Node Join (4) • Step 4: Set N25’s successor pointer to N36 • Update finger pointers in the background • Correct successors produce correct lookups • The four steps are sketched in code below
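
The four steps above as one hedged sketch (the keys dict and the predecessor walk are simplifications for this toy ring; real Chord locates the predecessor by lookup and repairs fingers lazily):

    def join(new: Node, bootstrap: Node) -> None:
        # 1. Lookup(new.id): find the node currently responsible for it (N40).
        succ = basic_lookup(bootstrap, new.id)
        # 2. The new node sets its own successor pointer (N36 -> N40).
        new.successor = succ
        # 3. Copy the keys the new node now owns (26..36) from its successor;
        #    K30 moves to N36 while K38 stays at N40.
        moved = {k: v for k, v in succ.keys.items()
                 if not in_half_open(k, new.id, succ.id)}
        new.keys.update(moved)
        for k in moved:
            del succ.keys[k]
        # 4. Point the predecessor's successor at the new node (N25 -> N36);
        #    finger tables are then updated in the background.
        pred = succ
        while pred.successor is not succ:
            pred = pred.successor
        pred.successor = new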

  22. Stabilization • Stabilization algorithm periodically verifies and refreshes node knowledge • Successor pointers • Predecessor pointers • Finger tables
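
A sketch following the paper’s stabilize/notify pseudocode (assuming each Node also carries a predecessor field, which the toy Node above omits):

    def stabilize(n: Node) -> None:
        """Run periodically: adopt a node that has joined just past us."""
        x = getattr(n.successor, "predecessor", None)
        if x is not None and in_open(x.id, n.id, n.successor.id):
            n.successor = x  # someone joined between us and our old successor
        notify(n.successor, n)

    def notify(n: Node, candidate: Node) -> None:
        """`candidate` believes it may be n's predecessor."""
        pred = getattr(n, "predecessor", None)
        if pred is None or in_open(candidate.id, pred.id, n.id):
            n.predecessor = candidate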

  23. Failures and Replication • Must maintain correct successor pointers • [Diagram: ring of N10, N80, N85, N102, N113, N120; when nodes fail, N80 doesn’t know its correct successor, so a Lookup(90) gives an incorrect result]

  24. Solution: successor lists • Each node knows its r immediate successors • After a failure, a node knows its first live successor • Correct successors guarantee correct lookups
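
A sketch of the failover rule (the liveness predicate, e.g. a ping with a timeout, is passed in because it is deployment-specific and not specified on the slide):

    from typing import Callable

    def first_live(successor_list: list["Node"],
                   is_alive: Callable[["Node"], bool]) -> "Node":
        """Return the first live entry of the r-entry successor list."""
        for s in successor_list:
            if is_alive(s):
                return s
        raise RuntimeError("all r successors failed simultaneously")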

  25. Path length as a function of network size

  26. Failed Lookups versus Node Fail/Join Rate

  27. Chord Summary • Chord provides peer-to-peer hash lookup • Efficient: O(log(N)) messages per lookup • Robust as nodes fail and join • Does not handle malicious users • http://pdos.lcs.mit.edu/chord/
