
Ad-hoc Limited Scale-Free Models for Unstructured Peer-to-Peer Networks




  1. Ad-hoc Limited Scale-Free Models for Unstructured Peer-to-Peer Networks Hasan Guclu (gucluh@gmail.com) Los Alamos National Laboratory Durgesh Kumari (durgesh.rani@gmail.com) Murat Yuksel (yuksem@cse.unr.edu) University of Nevada – Reno

  2. Outline • Motivation • Topology Generation Mechanisms: Barabási-Albert (Preferential Attachment) Model; Our Model with Local Info, Hard Cutoffs, and Churn • Search Methods: Flooding, Normalized Flooding, Random Walk • Summary and Conclusions

  3. Motivation: Scale-Free Topologies Their characterization is independent of the system size N (i.e., the scale).

  4. Motivation Characteristics of the p2p overlay topology have significant effects on the search performance. (Plot: search efficiency vs. power-law exponent and connectedness, marking the ultra-small and small-world regimes.)

  5. Motivation Key Question: How to construct the overlay topology using only local information in p2p networks such that search efficiency is good? Scale-freeness (i.e., the power-law exponent) is related to search efficiency. Key Constraints: • No global knowledge • No peer wants to take on the load – hard cutoff on the degree • Local decisions affecting global behavior: When a new peer joins, how should it construct its list of neighbors? When a peer leaves, how should its neighbors rewire themselves to the network?

  6. Topology Generation Model: Preferential Attachment w/ Hard Cutoff How to construct a scale-free topology? Preferential Attachment (PA): include an existing peer with probability proportional to its current degree – prefer the peers with larger degree. Requires global info. We revise PA so that a node already at the maximum allowed degree (i.e., the hard cutoff) is skipped, and the procedure is tried again. Example: degrees 7, 4, 3, 3, 2, 2, 2, 1, 1, 1 (sum 26) give attachment probabilities 0.27, 0.15, 0.12, 0.12, 0.08, 0.08, 0.08, 0.04, 0.04, 0.04. Hmmm. Which node to have as a neighbor?
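The skip-and-retry rule above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the authors' implementation; the dictionary representation of degrees and the name `pick_neighbor` are assumptions.

```python
import random

def pick_neighbor(degrees, k_max):
    """Preferential attachment with a hard cutoff: sample a node with
    probability proportional to its degree, skipping any node whose
    degree has already reached the maximum allowed degree k_max."""
    candidates = [n for n, d in degrees.items() if d < k_max]
    if not candidates:
        raise RuntimeError("every node is at the hard cutoff")
    weights = [degrees[n] for n in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

With the degrees from the example above and, say, k_max = 5, the degree-7 node is never selected, and the remaining nodes are chosen with probability proportional to their degrees.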

  7. Topology Generation Model: Parameters Parameters of our topology construction framework: • probability of a node going down/leaving (churn) • horizon of available state information at join (j) • horizon of available state information at leave (l) • maximum degree a node is allowed to have (hard cutoff)

  8. Topology Generation Model: Join w/ j • Join() procedure: • Select a node J to start with • Collect J’s neighborhood topology information within j hops • Apply PA on the j sub-topology until m links are established • If m is larger than the nodes in the subtopology, repeat the procedure again until m links are established. J Hmmm. Which m nodes to have as a neighbor?

  9. Topology Generation Model: Rewiring w/ l after a Leave • Leave() procedure: • Select a node L to delete • Collect L’s neighborhood topology information within l hops • Make the l-hop sub-topology information available to L’s 1-hop neighbors, r1 and r2 • With L (and the rewiring node itself) removed from the sub-topology, r1 and r2 apply PA on their l-hop sub-topology until the lost link is restored with another peer
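The Leave() procedure can be sketched in the same style, again with assumed names and an adjacency-set representation; the l-hop horizon is collected before the node departs, mirroring the slide's order of steps.

```python
import random
from collections import deque

def nodes_within(adj, start, l):
    """BFS: collect all nodes within l hops of start."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist < l:
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    frontier.append((nbr, dist + 1))
    return seen

def leave(adj, leaving, l, k_max):
    """Leave(): collect the l-hop sub-topology around the departing node,
    delete the node, then let each former 1-hop neighbor restore its lost
    link by preferential attachment over that sub-topology."""
    horizon = nodes_within(adj, leaving, l)   # collected before departure
    horizon.discard(leaving)
    neighbors = list(adj.pop(leaving))
    for r in neighbors:
        adj[r].discard(leaving)
    for r in neighbors:
        candidates = [n for n in horizon
                      if n != r and n not in adj[r] and len(adj[n]) < k_max]
        if candidates:
            weights = [max(len(adj[n]), 1) for n in candidates]
            chosen = random.choices(candidates, weights=weights, k=1)[0]
            adj[r].add(chosen)
            adj[chosen].add(r)
```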

  10. Topology Generation Model: Growth with Joins and Leaves • The topology growth process calls the Join() or Leave() procedures depending on the amount of churn. • At every iteration: • Call Join() • Call Leave() with the churn probability • Keep iterating until the target network size is reached • Both the Join() and Leave() procedures ensure that node degrees stay below the hard cutoff. A peer is added at every iteration; a higher churn probability means more churn.

  11. Increase in j shifts degree distribution from Exponential to scale-free. Increase in j shifts degree distribution from Exponential to scale-free. Degree Distributions, =0 m=1, kc=50 m=1, no cutoff

  12. Degree Distributions, no churn (plots: m=3, no cutoff and m=3, kc=50). A larger m makes the shift less apparent. Lesson: Force peers to have a larger m to reduce the need for a large j.

  13. Degree Distributions, churn probability 0.3 (plots: m=1, kc=50 and m=1, no cutoff). The contribution of l in shifting the degree distribution is more significant. The hard cutoff does not affect this distribution shift.

  14. Search Methods • Flooding: the source node sends a message to all its neighbors, and every node that receives the message forwards it to all its neighbors except the one it received it from, until the target node receives the message • Normalized flooding: similar to flooding, but nodes send the message to at most m (the minimum number of links in the network) neighbors • Random walk: each node forwards the message to only one of its neighbors, excluding the node it received it from
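The three search methods can be sketched as below. This is a simplified simulation, not a protocol implementation: the `seen` set stands in for nodes not re-forwarding a message they have already handled, and setting `fanout=m` turns plain flooding into normalized flooding.

```python
import random
from collections import deque

def flood_hops(adj, source, target, fanout=None):
    """Flooding search: each node forwards the message onward to its
    neighbors; with fanout set, each node forwards to at most that many
    randomly chosen neighbors (normalized flooding). Returns the hop
    count at which the target is first reached, or -1 if never."""
    if source == target:
        return 0
    seen, frontier = {source}, deque([(source, 0)])
    while frontier:
        node, hops = frontier.popleft()
        nbrs = list(adj[node])
        if fanout is not None and len(nbrs) > fanout:
            nbrs = random.sample(nbrs, fanout)
        for nbr in nbrs:
            if nbr == target:
                return hops + 1
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops + 1))
    return -1

def random_walk_hops(adj, source, target, max_steps=10_000):
    """Random walk: forward to a single randomly chosen neighbor per step.
    Returns the step count on success, or -1 if max_steps is exhausted."""
    node, hops = source, 0
    while node != target and hops < max_steps:
        node = random.choice(list(adj[node]))
        hops += 1
    return hops if node == target else -1
```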

  15. Flooding, no churn, m=3. The cutoff is the main factor defining flooding search performance.

  16. Churn with larger lhelps flooding performance!! Churn with larger lhelps flooding performance!! Flooding: =0.1, 0.3; m=3 =0.3, kc=10 =0.1, kc=10 Lesson: Use churn as an opportunity to restructure the network topology by carefully rewiring the peers.

  17. Again, larger l reduces the negative effect of churn!! Again, larger l reduces the negative effect of churn. Normalized Flooding: m=3 j=2, l=0 j=2, l=1 Performance of Random Walk exhibit a similar behavior to Normalized Flooding. Lesson: State information at leave is more valuable than the one at join.

  18. Design Guidelines &amp; Principles • Force all peers to have a larger m (i.e., a minimum of 3) to reduce the need for a large j • Information at the time of leave is more valuable than information at the time of join • A little responsibility at leave results in significantly better search performance for the leftover network • Rewiring is helpful – churn can be used as an opportunity to restructure the network

  19. Summary &amp; Future Work • A generic topology growth model with churn, local state info, hard cutoffs, and rewiring • Scales larger than N=10,000 • Models looking at dynamic behavior are worth pursuing

  20. THE END Thank you! Acknowledgments This work was supported by the U.S. Department of Energy under contract DE-AC52-06NA25396 and by the US National Science Foundation under awards 0627039 and 0721542.
