
A Framework for Classifying Denial of Service Attacks


Presentation Transcript


  1. A Framework for Classifying Denial of Service Attacks Alefiya Hussain, John Heidemann and Christos Papadopoulos. Presented by Nahur Fonseca, NRG, June 22nd, 2004

  2. This paper is NOT about… • Detecting DoS attacks, although they suggest an application for it in the end. • Responding to DoS attacks. • Dealing with smart attacks which exploit software bugs or protocol synchronization. (So don’t worry Mina, you can continue your plans to take over the World.)

  3. Problem and Motivation • Problem: need a robust and automatic way of classifying DoS attacks into two classes: single- and multi-source. • Because: different types of attacks (single- or multi-source) are handled differently. • Classification is not easy; for instance, source addresses can be spoofed by the attacker.

  4. Preliminaries • Zombies vs. reflectors • Single- vs. multi-source • Direct vs. reflection attacks

  5. Discussion • DWE Quiz: • Is this problem interesting at all? • What could make it a SIGCOMM paper? • [Optional] What is the related work? • What should be the OUTLINE of the rest of the presentation?

  6. Outline • Description of traces used • Four Classification Techniques • Evaluation of Results • Conclusion & Discussion & Validation

  7. Data Collection • Monitored two links at a moderate-size ISP. • Captured packet headers in both directions using tcpdump, and saved a trace every two minutes. • An attack is detected when: • the number of sources sending to the same destination exceeds 60 in 1 s, or • the traffic rate exceeds 40K packets/s. • Detected attacks were verified manually; the false positive rate was 25–35%, resulting in a total of 80 attacks over 5 months.
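
The slide's detection thresholds translate into a simple rule. Below is a minimal sketch, assuming per-destination traffic has already been aggregated into one-second windows; the `Window` record and the function name are my own illustrative choices, not anything from the paper.

```python
# Hypothetical sketch of the slide's detection rule: flag a one-second
# window of traffic toward one destination as a suspected attack when
# either threshold is crossed.
from collections import namedtuple

Window = namedtuple("Window", ["distinct_sources", "packets", "seconds"])

def is_suspected_attack(win: Window) -> bool:
    too_many_sources = win.distinct_sources > 60        # > 60 sources in 1 s
    too_fast = (win.packets / win.seconds) > 40_000     # > 40K packets/s
    return too_many_sources or too_fast

# Example: 75 distinct sources seen in one second -> flagged
print(is_suspected_attack(Window(75, 12_000, 1.0)))     # True
```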

  8. T1: Packet Header Analysis • Based on the IP ID and TTL fields filled in by the OS. • Idea: identify sequences of increasing ID numbers with a fixed TTL. • Classified 67 / 80 attacks. • Some statistics: 87% showed evidence of root access; TCP was the most prevalent protocol, followed by ICMP.
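
A rough sketch of how the ID/TTL heuristic above might be coded, assuming packets arrive in order as (TTL, IP ID) pairs. The function name and the `min_run` threshold are assumptions for illustration, not the authors' implementation; ID wraparound at 65535 is ignored for simplicity.

```python
# Count runs of at least `min_run` consecutive increasing IP ID values
# among packets that share the same TTL; such runs suggest a single OS
# counter, and hence a single real host, behind the packets.
from collections import defaultdict

def count_id_runs(packets, min_run=3):
    """packets: iterable of (ttl, ip_id) tuples in arrival order."""
    last_id = {}
    run_len = defaultdict(int)
    runs = 0
    for ttl, ip_id in packets:
        if ttl in last_id and ip_id > last_id[ttl]:
            run_len[ttl] += 1
            if run_len[ttl] == min_run:   # count each run once
                runs += 1
        else:
            run_len[ttl] = 0              # sequence broken, start over
        last_id[ttl] = ip_id
    return runs
```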

  9. T2: Arrival Rate Analysis • Single-source, multi-source and reflected attacks have different mean arrival rates. • Kruskal-Wallis one-way ANOVA test: F = 37 (>> 1), p = 1.7 × 10⁻¹¹ (<< 1). [Figure: attack rate (pkt/s, 10²–10⁵ on a log scale) for single-source, multi-source and reflected attacks]
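
For concreteness, a hedged illustration of the Kruskal-Wallis test the slide refers to, using SciPy; the per-attack rate values below are invented for the example and are not data from the traces.

```python
# Kruskal-Wallis one-way ANOVA on ranks: do the three attack classes
# have the same distribution of mean arrival rates?
from scipy.stats import kruskal

single_source = [52_000, 61_000, 48_000, 70_000]   # pkt/s, illustrative only
multi_source  = [18_000, 22_000, 15_000, 25_000]
reflected     = [4_000, 6_500, 5_200, 7_100]

stat, p_value = kruskal(single_source, multi_source, reflected)
print(f"statistic = {stat:.1f}, p = {p_value:.2e}")
# A large statistic and a tiny p-value indicate that the classes have
# significantly different arrival-rate distributions.
```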

  10. T3: Ramp-Up Behavior • Single-source attacks start at full throttle. • All multi-source attacks showed a ramp-up due to the synchronization of zombies. • (Left) one of the 13 unclassified attacks; (right) agrees with the header analysis. [Figures: attack rate (pkt/s) vs. time (seconds) for the two attacks]
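
One way to operationalize the ramp-up observation is to compare the onset rate against the steady-state rate. The sketch below is a toy heuristic of my own, not the paper's procedure; the window length and threshold are arbitrary.

```python
# Toy ramp-up check on `rates`, the per-second packet rate of an attack:
# multi-source attacks tend to start well below their steady-state rate,
# while single-source attacks start near full throttle.
def has_ramp_up(rates, onset_seconds=5, threshold=0.5):
    onset = sum(rates[:onset_seconds]) / onset_seconds
    steady = sum(rates[onset_seconds:]) / max(len(rates) - onset_seconds, 1)
    return onset < threshold * steady    # onset clearly below steady state

print(has_ramp_up([5, 12, 20, 35, 48, 60, 62, 61, 63, 60]))   # True: gradual start
print(has_ramp_up([58, 61, 60, 62, 59, 60, 61, 63, 60, 62]))  # False: full throttle
```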

  11. T4: Spectral Content Analysis • Treat the attack trace as a time series. • Consider segments in steady state only. • Compute the power spectral density S(f). • C(f) is the normalized cumulative power up to frequency f. • F(p) = C⁻¹(p), the lowest frequency containing a fraction p of the total power. [Figures: S(f) and C(f) over 0–500 Hz for (a) a single-source and (b) a multi-source attack]
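
The quantities on this slide map naturally onto a short SciPy routine: compute the periodogram, normalize its cumulative sum to get C(f), and invert it to obtain F(p). The 1 kHz sampling rate and the function name are assumptions for illustration, not taken from the paper.

```python
# Sketch of the spectral quantile F(p): the lowest frequency below which
# a fraction p of the signal's power is concentrated.
import numpy as np
from scipy.signal import periodogram

def spectral_quantile(arrivals_per_ms, p=0.6, fs=1000):
    """arrivals_per_ms: packet counts sampled at `fs` samples per second."""
    freqs, psd = periodogram(arrivals_per_ms, fs=fs)   # S(f)
    c = np.cumsum(psd) / np.sum(psd)                   # C(f), normalized to [0, 1]
    return freqs[np.searchsorted(c, p)]                # F(p): smallest f with C(f) >= p
```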

  12. The F(60%) Spectral Test • Single-source: F(60%) ∈ [240–295] Hz • Multi-source: F(60%) ∈ [142–210] Hz • The Wilcoxon rank-sum test is used to verify that the two classes have different F(60%) ranges.
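
A minimal sketch of the rank-sum comparison, assuming each attack has already been reduced to its F(60%) value; the Hz numbers are invented (chosen to fall inside the slide's ranges), not measured values.

```python
# Wilcoxon rank-sum test: do single- and multi-source attacks occupy
# different F(60%) ranges?
from scipy.stats import ranksums

f60_single = [250, 270, 260, 290, 245]   # Hz, illustrative
f60_multi  = [150, 180, 160, 200, 145]   # Hz, illustrative

stat, p_value = ranksums(f60_single, f60_multi)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.3f}")
# A small p-value supports the claim that the two classes have
# distinct F(60%) ranges.
```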

  13. Validation of the F(60%) Test • Observations at a smaller alternate site. • Controlled experiments over the Internet with varying topology (clustered vs. distributed) and number of attackers (1 to 5 Iperf clients). • Use of attack tools (punk, stream and synful) in a testbed network.

  14. Effect of Topology

  15. Effect of Increasing # of Attackers • Similar curves for the controlled experiments and the testbed attacks using hacker tools.

  16. Why? • Aggregation of two scaled sources? No! a1(t) = a(t) + a((s+ε)t) • Bunching of traffic (like ACK compression)? No! a2(t): delay the arrival of packets until 5–15 have accumulated, then send them all at once • Aggregation of two shifted sources? No! a3(t) = a(t) + a(t + δ + ε) • Aggregation of multiple slightly shifted sources? Yes! a3b(t) = Σ_i a(t + δ_i), 2 < i < n
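
To make the last bullet concrete, a small numerical experiment of my own (not from the paper): aggregate several slightly shifted copies of one synthetic stream and observe that the F(60%) quantile typically moves toward lower frequencies, as the slide argues. The Poisson source and the shift range are arbitrary choices.

```python
# Aggregating slightly shifted copies of one attack stream, in the spirit
# of a3b(t) = sum_i a(t + delta_i), and comparing spectral quantiles.
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
fs = 1000                                          # 1 kHz sampling, 10 s of data
a = rng.poisson(5, size=fs * 10).astype(float)     # one synthetic source

def f60(x):
    freqs, psd = periodogram(x, fs=fs)
    c = np.cumsum(psd) / np.sum(psd)
    return freqs[np.searchsorted(c, 0.6)]

shifts = rng.integers(1, 50, size=5)               # small random shifts delta_i
aggregate = sum(np.roll(a, int(d)) for d in shifts)

print(f"single source F(60%) ~ {f60(a):.0f} Hz")
print(f"aggregate     F(60%) ~ {f60(aggregate):.0f} Hz")
```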

  17. Conclusions • ‘Network security is an arms race.’ Hence the need for more robust techniques. • Once detection is done, spectral analysis can be used to identify the type of attack and trigger an appropriate response. • Contribution: a model of attack traffic patterns. • Use of statistical tests to make inferences about attack patterns.

  18. Discussion • How could a single source try to fool the spectral analysis tool? • What is the spectral signature of normal traffic? • What other types of patterns could we identify and design statistical tests for? • More thoughts?
