Experiences With Internet Traffic Measurement and Analysis

  1. Experiences With Internet Traffic Measurement and Analysis • Vern Paxson • ICSI Center for Internet Research, International Computer Science Institute and Lawrence Berkeley National Laboratory • vern@icir.org • March 5th, 2004

  2. Outline • The 1990s: How is the Internet used? • Growth and diversity • Fractal traffic, “heavy tails” • End-to-end dynamics • Difficulties with measurement & analysis • The 2000s: How is the Internet abused? • Prevalence of misuse • Detecting attacks • Worms

  3. The 1990s: How is the Internet Used?

  4. [Figure: Internet growth over time; the fitted curve corresponds to 80% growth/year. Data courtesy of Rick Adams.]

  5. Internet Growth: Exponential • Growth of 80%/year • Sustained for at least ten years … • … before the Web even existed. • The Internet is always changing: you do not have much time to understand it before it changes again.

  6. Characterizing Site Traffic • Methodology: passively record traffic in/out of a site • Danzig et al (1992) • 3 sites, 24 hrs, all packet headers • Paxson (1994) • TCP SYN/FIN/RST control packets only • Gives hosts, sizes, start time, duration, application • Large filtering win (≈ 10-100:1 in packets, 1000s:1 in bytes) • Seven month-long traces at Lawrence Berkeley National Laboratory • Eight day-long traces from 6 other sites
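
A minimal sketch of the SYN/FIN methodology in Python: pairing control packets recovers start time, duration, and (via sequence numbers) size. The packet-record format here is hypothetical, not the study's actual tooling.

    from collections import namedtuple

    Pkt = namedtuple("Pkt", "ts src dst flags seq")  # one TCP control packet

    def connection_stats(packets):
        """Pair each SYN with its matching FIN to recover start time,
        duration, and (via sequence numbers) bytes sent."""
        syns, stats = {}, []
        for p in packets:
            key = (p.src, p.dst)
            if "S" in p.flags:
                syns[key] = p                  # connection opens
            elif "F" in p.flags and key in syns:
                s = syns.pop(key)
                stats.append({
                    "start": s.ts,
                    "duration": p.ts - s.ts,
                    # TCP sequence numbers count bytes, so FIN seq - SYN seq
                    # approximates the bytes sent in this direction
                    "bytes": (p.seq - s.seq - 1) % 2**32,
                })
        return stats

    trace = [Pkt(0.0, "a", "b", "S", 1000), Pkt(4.2, "a", "b", "F", 51001)]
    print(connection_stats(trace))             # one 4.2 s, ~50 KB connection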

  7. Findings from Site Studies • Traffic mix (which protocols are used; how many connections/bytes they contribute) varies widely from site to site. • Mix also varies at the same site over time. • Most connections have much heavier traffic in one direction than the other: • Even interactive login sessions (20:1)

  8. Findings from Site Studies, con’t • Many random variables associated with connection characteristics (sizes, durations) are best described with log-normal distributions • But often these are not particularly good fits • And often their parameters vary significantly between datasets • The largest connections in bulk transfers are very large • Tail behavior is unpredictable • Many of these findings differ from assumptions used in 1990s traffic modeling
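
The log-normal description above is easy to sketch: fit a normal to the log of the sizes and spot-check a fitted quantile against the data. Synthetic data stands in for a real trace here; real datasets, as the slide notes, often fit less well.

    import numpy as np

    def lognormal_fit(sizes):
        """MLE for a log-normal: mean and std of the log-transformed sizes."""
        logs = np.log(np.asarray(sizes, dtype=float))
        return logs.mean(), logs.std()

    rng = np.random.default_rng(5)
    data = rng.lognormal(mean=8.0, sigma=2.0, size=10_000)   # synthetic "sizes"
    mu, sigma = lognormal_fit(data)
    # a quick fit check: the log-normal median is e^mu
    print(mu, sigma, np.exp(mu), np.median(data))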

  9. Theory vs. Measured Reality: Scaling Behavior in Internet Traffic

  10. Burstiness • Long-established framework: Poisson modeling • Central idea: network events (packet arrivals, connection arrivals) are well-modeled as independent • In simplest form, there’s just a rate parameter, λ • It then follows that the time between “calls” (events) is exponentially distributed, and the # of calls in an interval is Poisson-distributed • Implications (if assumptions correct): • Aggregated traffic will smooth out quickly • Correlations are fleeting, bursts are limited
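
The Poisson picture is easy to reproduce; a small NumPy sketch (with an arbitrary rate λ) shows the rapid smoothing under aggregation that the model predicts:

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 100.0                                # rate parameter (events/sec)
    gaps = rng.exponential(1.0 / lam, size=1_000_000)
    arrivals = np.cumsum(gaps)                 # Poisson arrival times

    for T in (0.01, 0.1, 1.0):                 # count arrivals in bins of width T
        counts = np.histogram(arrivals, bins=np.arange(0.0, arrivals[-1], T))[0]
        # for Poisson traffic std/mean falls like 1/sqrt(lam*T):
        # aggregation smooths the process out quickly
        print(f"T={T}: mean={counts.mean():.1f} cv={counts.std() / counts.mean():.3f}")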

  11. Burstiness: Theory vs. Measurement • For Internet traffic, Poisson models have a fundamental problem: they greatly underestimate burstiness • Consider an arrival process: X_k gives the # of packets arriving during the kth interval of length T • Take a 1-hour trace of Internet traffic (1995) • Generate (batch) Poisson arrivals with the same mean and variance

  12.-14. [Figures: packet counts per interval for the measured trace vs. the batch Poisson surrogate, viewed at successively larger time scales (10, 100, 600), each plot marking the previous region; the measured traffic stays bursty while the Poisson surrogate smooths out.]

  15. Burstiness Over Many Time Scales • Real traffic has strong, long-range correlations • Power spectrum: • Flat for Poisson processes • For measured traffic, diverges to ∞ as the frequency f → 0 • To build Poisson-based models that capture this characteristic takes many parameters • But given the great variation in Internet traffic, we are desperate for parsimonious models (few parameters)
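
One way to see this is a plain periodogram of the binned counts; a sketch (not the talk's actual analysis) in which Poisson counts stay flat at low frequencies while long-range-dependent traffic diverges:

    import numpy as np

    def periodogram(counts):
        """Plain FFT periodogram of a demeaned count series."""
        x = counts - counts.mean()
        power = np.abs(np.fft.rfft(x)) ** 2 / len(x)
        freqs = np.fft.rfftfreq(len(x))
        return freqs[1:], power[1:]            # drop the zero-frequency bin

    # For Poisson counts the low-frequency end stays flat:
    rng = np.random.default_rng(1)
    f, p = periodogram(rng.poisson(100, size=4096).astype(float))
    print(p[:8].mean() / p[-8:].mean())        # ~1 here; >>1 for LRD traffic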

  16. Describing Traffic with Fractals • Landmark 1993 paper by Leland et al proposed capturing such characteristics (in Ethernet traffic) using self-similarity, a form of fractal-based modeling: • Parameterized by mean, variance, and Hurst parameter • Models predict burstiness on all time scales • Queueing delays / drop probabilities much higher than predicted by Poisson-based models
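
One common way to estimate the Hurst parameter is the variance-time method; a minimal sketch of that estimator (one of several, and not necessarily the one used in the 1993 paper):

    import numpy as np

    def hurst_variance_time(counts, levels=(1, 2, 4, 8, 16, 32, 64)):
        """Variance-time estimate: the slope of log Var(X^(m)) vs. log m
        is -beta, and H = 1 - beta/2 (H = 0.5 for Poisson, H > 0.5 if LRD)."""
        logm, logv = [], []
        for m in levels:
            n = len(counts) // m
            agg = counts[: n * m].reshape(n, m).mean(axis=1)  # aggregated series
            logm.append(np.log(m))
            logv.append(np.log(agg.var()))
        beta = -np.polyfit(logm, logv, 1)[0]
        return 1.0 - beta / 2.0

    rng = np.random.default_rng(2)
    print(hurst_variance_time(rng.poisson(100, 1 << 16).astype(float)))  # ~0.5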

  17. Heavy Tails • Key prediction from fractal modeling: • One way fractal traffic can arise in aggregate is if individual connections have activity periods (durations, sizes) whose distribution has infinite variance. • Infinite variance manifests in the distribution’s upper tail • Consider the Pareto distribution, with P[X > x] = (x/a)^(-α) • If α < 2, then the distribution has infinite variance • Can test for a Pareto fit by plotting log P[X > x] vs. log x • Straight line = Pareto distribution, and the slope estimates -α
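
The tail test is straightforward to sketch: fit a line to the upper portion of the empirical log-log complementary CDF. Synthetic Pareto data stands in for real connection sizes, and the 10% tail fraction is an arbitrary choice.

    import numpy as np

    def tail_alpha(sizes, tail_frac=0.1):
        """Fit a line to the upper tail of the empirical log-log CCDF;
        the negated slope estimates the Pareto shape parameter alpha."""
        x = np.sort(np.asarray(sizes, dtype=float))
        ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
        k = int(len(x) * (1.0 - tail_frac))        # keep only the upper tail
        xs, cs = x[k:-1], ccdf[k:-1]               # drop the last point (CCDF = 0)
        return -np.polyfit(np.log(xs), np.log(cs), 1)[0]

    # Sanity check on synthetic Pareto data with alpha = 1.3:
    rng = np.random.default_rng(3)
    sizes = (1.0 - rng.random(100_000)) ** (-1.0 / 1.3)   # inverse-CDF sampling
    print(tail_alpha(sizes))                # ~1.3, i.e. infinite variance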

  18. [Figure: log-log complementary distribution of Web connection sizes (226,386 observations); the upper tail (28,000 observations) fits α = 1.3 ⇒ infinite variance.]

  20. Self-Similarity & Heavy Tails, con’t • We find heavy-tailed sizes in many types of network traffic. Just a few extreme connections dominate the entire volume. • Theorems then give us that this traffic aggregates to self-similar behavior. • While self-similar models are parsimonious, they are not (alas) “simple”. • You can have self-similar correlations for which the magnitude of variations is small ⇒ still possible to have a statistical multiplexing gain, especially at very high aggregation • Smaller time scales behave quite differently. • When very highly aggregated, they can appear Poisson!
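
The mechanism behind those theorems can be illustrated directly: superpose independent on/off sources whose period lengths are Pareto-distributed. A toy sketch with arbitrary parameters:

    import numpy as np

    def onoff_aggregate(n_sources=50, n_slots=100_000, alpha=1.3, seed=4):
        """Superpose independent on/off sources whose on and off period
        lengths are Pareto (infinite variance for alpha < 2)."""
        rng = np.random.default_rng(seed)
        total = np.zeros(n_slots)
        for _ in range(n_sources):
            t, on = 0, bool(rng.integers(2))
            while t < n_slots:
                dur = int((1.0 - rng.random()) ** (-1.0 / alpha))  # Pareto length
                if on:
                    total[t : t + dur] += 1    # source sends 1 unit per slot
                t, on = t + max(dur, 1), not on
        return total

    traffic = onoff_aggregate()
    print(traffic.mean(), traffic.var())  # summary of the (bursty) aggregate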

  21. End-to-End Internet Dynamics: Routing & Packets

  22. End-to-End Dynamics • Ultimately what the user cares about is not what’s happening on a given link, but the concatenation of behaviors along all of the hops in an end-to-end path. • Measurement methodology: deploy measurement servers at numerous Internet sites, measure the paths between them • Exhibits N² scaling: N sites yield N·(N-1) ordered paths, so the number of paths grows quadratically with the number of sites.

  23. [Figure: map of “measurement infrastructure” sites in the 1994-1995 end-to-end dynamics study.]

  24. [Figure: paths in the study, illustrating the N² scaling effect.]

  25. End-to-End Routing Dynamics • Analysis of 40,000 “traceroute” measurements between 37 sites, 900+ end-to-end paths. • Route prevalence: • Most end-to-end paths through the Internet are dominated by a single route. • Route persistence: • 2/3 of routes remain unchanged for days/weeks • 1/3 of routes change on time scales of seconds to hours • Route symmetry: • More than half of all routes visited at least one different city in each direction • Very important for tracking connection state inside the network!
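
Route prevalence, for instance, reduces to a simple computation once each traceroute run is summarized as a tuple of hops; a minimal sketch with toy observations:

    from collections import Counter

    def route_prevalence(observed_routes):
        """observed_routes: one tuple of hops per traceroute run of a path.
        Returns the dominant route and the fraction of runs it covers."""
        counts = Counter(observed_routes)
        route, n = counts.most_common(1)[0]
        return route, n / len(observed_routes)

    runs = [("a", "b", "c")] * 9 + [("a", "x", "c")]   # toy observations
    print(route_prevalence(runs))      # dominant route prevails in 90% of runs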

  26. End-to-End Packet Dynamics • Analysis of 20,000 TCP bulk transfers of 100 KB between 36 sites • Each traced at both ends using tcpdump • Benefits of using TCP: • Real-world traffic • Can probe fine-grained time scales while still exercising congestion control • Drawbacks to using TCP: • Endpoint TCP behavior is a major analysis headache • TCP’s loading of the transfer path also complicates analysis

  27. End-to-End Packet Dynamics: Unusual Behavior • Out-of-order delivery: • Not uncommon. 0.6%-2% of all packets. • Strongly site-specific. • Generally little impact on performance. • Replicated packets: • Very rare, but does occur (e.g., 1 packet in, 22 out) • Corrupted packets (bad checksum): • Overall, 1 in 5,000 (!) • Stone/Partridge (2000): between 1 in 1,100 and 1 in 32,000 • Undetected: between 1 in 16 million and 1 in 10 billion

  28. End-to-End Packet Dynamics: Loss • Half of all 100 KB transfers experienced no loss • 2/3 of paths within the U.S. • The other half experienced significant loss: • Average 4-9%, but with wide variation • TCP loss is not well described as independent • Losses dominated by a few long-lived outages • (Keep in mind: this is 1994-1995!) • Subsequent studies: • Loss rates have gotten much better • Loss episodes well described as independent • Same holds for regions of stable delay and throughput • Time scales of constancy ≈ minutes or more

  29. Issues / Difficulties for Analyzing Internet Traffic: Measurement, Simulation & Analysis

  30. There is No Such Thing as “Typical” • Heterogeneity in: • Traffic mix • Range of network capabilities • Bottleneck bandwidth (orders of magnitude) • Round-trip time (orders of magnitude) • Dynamic range of network conditions • Congestion / degree of multiplexing / available bandwidth • Proportion of traffic that is adaptive/rigid/attack • Immense size & growth • Rare events will occur • New applications explode on the scene

  31. [Figure: traffic growth curve, doubling every 7-8 weeks for 2 years.]

  32. There is No Such Thing as “Typical”, con’t • New applications explode on the scene • Not just the Web, but: Mbone, Napster, KaZaA etc., IM • Even robust statistics fail. • E.g., median size of FTP data transfer at LBL • Oct. 1992: 4.5 KB (60,000 samples) • Mar. 1993: 2.1 KB • Mar. 1998: 10.9 KB • Dec. 1998: 5.6 KB • Dec. 1999: 10.9 KB • Jun. 2000: 62 KB • Nov. 2000: 10 KB • Danger: if you wrongly assume that something is “typical”, nothing tells you that you are wrong!

  33. The Search for Invariants • In the face of such diversity, identifying things that don’t change has immense utility • Some Internet traffic invariants: • Daily and weekly patterns • Self-similarity on time scales of 100s of msec and above • Heavy tails • both in activity periods and elsewhere, e.g., topology • Poisson user session arrivals • Log-normal sizes (excluding tails) • Keystroke interarrival times follow a Pareto distribution

  34. The Danger of Mental Models • [Figure: a distribution that looks, to the eye, like “exponential plus a constant offset”.]

  35. [Figure: not exponential: Pareto! Heavy tail: α ≈ 1.0.]

  37. Versus the Power of Modeling to Open Our Eyes • Fowler & Leland, 1991: • Traffic ‘spikes’ (which cause actual losses) ride on longer-term ‘ripples’, which in turn ride on still longer-term ‘swells’ • They lacked the vocabulary that came with self-similar modeling (1993) • Similarly, the 1993 self-similarity paper: • Characterized the aggregate without first studying and modeling the behavior of individual Ethernet users (sources) • Modeling led to the suggestion to investigate heavy tails

  38. Measurement Soundness • How well-founded is a given Internet measurement? • We can often use additional information to help calibrate. • One source: protocol structure • E.g., was a packet dropped by the network … or by the measurement device? • For TCP, we can check: did the receiver acknowledge it? • If Yes, then it was dropped by the measurement device • If No, then it was dropped by the network • Can also calibrate using additional information
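
The TCP check above amounts to a small piece of set logic; a sketch assuming a hypothetical trace format (sets of sequence numbers seen at each vantage point):

    def classify_missing(sent, seen_arriving, acked):
        """sent / seen_arriving / acked: sets of TCP sequence numbers seen
        at the sender, at the receiver-side monitor, and in returning ACKs."""
        verdicts = {}
        for seq in sent - seen_arriving:       # sent, but never seen arriving
            if seq in acked:
                # the receiver got it, so our monitor missed it
                verdicts[seq] = "dropped by measurement device"
            else:
                verdicts[seq] = "dropped by network"
        return verdicts

    print(classify_missing(sent={1, 2, 3}, seen_arriving={1, 3}, acked={1, 2}))
    # packet 2 was acknowledged, so the monitor (not the network) missed it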

  39. Calibration Using Additional Information: Packet Timings • [Figure: packet-timing plot with two annotated anomalies: a suspected routing change and a clock adjustment.]

  40. Reproducibility of Results (or lack thereof) • It is rare, though it sometimes occurs, that raw measurements are made available to other researchers for further analysis or confirmation. • It is rarer that analysis tools and scripts are made available, particularly in a coherent form that others can actually get to work. • It is rarer still that measurement glitches, “outliers,” analysis fudge factors, etc., are detailed. • In fact, researchers often cannot reproduce their own results.

  41. Towards Reproducible Results • Need to ensure a systematic approach to data reduction and analysis • I.e., a “paper trail” for how analysis was conducted, particularly when bugs are fixed • A methodology to do this: • Enforce discipline of using a single (master) script that builds all analysis results from the raw data • Maintain all intermediary/reduced forms of the data as explicitly ephemeral • Maintain a notebook of what was done and to what effect. • Use version control for scripts & notebook. • But also really need: ways to visualize what's changed in analysis results after a re-run.
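
A skeleton of such a master script might look like the following; file names and stages are hypothetical, and the reduction and plotting stages are left as stubs:

    import pathlib
    import shutil
    import subprocess

    RAW, OUT = pathlib.Path("raw"), pathlib.Path("build")

    def reduce_traces(src, dst):
        ...                            # parse raw packet traces into dst

    def make_figures(src, dst):
        ...                            # regenerate every plot from src

    def main():
        shutil.rmtree(OUT, ignore_errors=True)   # reduced data is ephemeral
        OUT.mkdir()
        # each stage reads only RAW or earlier OUT files, never edits RAW
        reduce_traces(RAW / "traces", OUT / "connections.csv")
        make_figures(OUT / "connections.csv", OUT / "figures")
        # record the exact code version that produced these results
        head = subprocess.run(["git", "rev-parse", "HEAD"],
                              capture_output=True, text=True).stdout
        (OUT / "provenance.txt").write_text(head)

    if __name__ == "__main__":
        main()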

  42. The 2000s: How is the Internet Abused?
