
Presentation Transcript


  1. Packet Classification Using Multidimensional Cutting. Sumeet Singh (UCSD), Florin Baboescu (UCSD), George Varghese (UCSD), Jia Wang (AT&T Labs-Research). Discussion Leader: Haoyu Song. Reviewed by Michela Becchi

  2. Outline • Introduction • Related works • HiCuts • HyperCuts • Evaluation • Conclusions

  3. Packet Classification • Rule-based packet handling, keyed on header fields: • Destination address • Source address • Protocol type • Destination and source ports • TCP flags

  4. Applications • Security • QoS • Network address translation • Traffic shaping • Monitoring • …

  5. Challenge • Classify packets at line rate (packet-processing speed) • Increasing link speeds: • 14% of links between core routers are OC-768 (40 Gbps) • 21% of links between edge routers are OC-192 (10 Gbps) • Memory-time tradeoff
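
To put the 40 Gbps figure in perspective, a back-of-the-envelope budget (a hedged sketch assuming 40-byte minimum-size packets, a figure not on the slide):

```python
# Rough line-rate budget; assumes 40-byte minimum-size packets, an
# assumption NOT taken from the slide, only a common worst case.
link_bps = 40e9                      # OC-768: 40 Gbps
min_pkt_bits = 40 * 8                # 40-byte packet
pps = link_bps / min_pkt_bits        # 125 million packets per second
ns_per_packet = 1e9 / pps            # 8 ns per classification
print(f"{pps / 1e6:.0f} Mpps, {ns_per_packet:.0f} ns per packet")
```

Roughly 8 ns per lookup leaves room for only a handful of on-chip SRAM accesses and essentially no DRAM accesses, which is the memory-time tradeoff the following slides address.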

  6. Terminology • Classifier: N rules R1, R2, …, RN • Rule Rj: array of k values (fields, or dimensions) • Rj[i]: specification that the i-th header field of a packet must match • Exact match: source address equal to 128.252.169.1 • Prefix match: destination address matches 128.252.* • Range match: destination port in range 0 to 255 • actionj: action associated with Rj • E.g. R=(128.252.*,*,TCP,23,*), action=block • Pkt1=(128.252.169.16,128.111.41.101,TCP,23,1025) matches R (port 23), so it is blocked • Pkt2=(128.252.169.16,128.111.41.101,TCP,79,1025) does not match (port 79)
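
To make the rule/packet terminology concrete, a minimal Python sketch (illustrative field encodings, not the paper's code):

```python
# Minimal sketch of k-field rule matching. A field spec is '*' (wildcard),
# a (lo, hi) tuple (range), a string ending in '.' (toy dotted-decimal
# prefix), or an exact value.

def field_matches(spec, value):
    if spec == "*":
        return True
    if isinstance(spec, tuple):                  # range match, e.g. (0, 255)
        return spec[0] <= value <= spec[1]
    if isinstance(spec, str) and spec.endswith("."):
        return str(value).startswith(spec)       # prefix match, e.g. '128.252.'
    return spec == value                         # exact match

def rule_matches(rule, packet):
    return all(field_matches(s, v) for s, v in zip(rule, packet))

def first_matching_rule(rules, packet):
    # Rules are kept in priority order; report the first match's action.
    for rule, action in rules:
        if rule_matches(rule, packet):
            return action
    return "default"

rules = [(("128.252.", "*", "TCP", 23, "*"), "block")]
pkt1 = ("128.252.169.16", "128.111.41.101", "TCP", 23, 1025)
pkt2 = ("128.252.169.16", "128.111.41.101", "TCP", 79, 1025)
print(first_matching_rule(rules, pkt1))   # 'block'   (port 23 matches)
print(first_matching_rule(rules, pkt2))   # 'default' (port 79 does not)
```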

  7. Memory-time tradeoff • Theoretical bounds for k fields and N rules: • O((log N)^(k-1)) time with linear space, or • O(log N) time with O(N^k) space • SRAM vs. DRAM • Hardware solutions: Ternary CAMs • Algorithmic solutions: • Linear search • EGT-PC • HiCuts • Note: update complexity is not considered for core routers

  8. TCAMs • Use parallelism in hardware • Pros: • Low latency and high throughput • Simple on-chip management scheme • Cons: • Power scaling (parallel comparisons) • Density scaling (more board area) • Time scaling (arbitration of the highest-priority match) • Rule multiplication for ranges, which must be converted to prefix format (see the sketch below) => Suitable mainly for small classifiers
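
The range-to-prefix expansion behind the rule-multiplication bullet can be sketched as follows (toy code, not a TCAM vendor API; the split shown is the standard aligned-block decomposition):

```python
# Toy illustration of why port ranges inflate TCAM entry counts: each range
# must be split into prefixes before it can be stored.

def range_to_prefixes(lo, hi, width=16):
    """Split [lo, hi] into a minimal set of prefixes over a width-bit field."""
    prefixes = []
    while lo <= hi:
        size = (lo & -lo) or (1 << width)   # largest aligned block starting at lo
        while lo + size - 1 > hi:           # shrink it until it fits inside [lo, hi]
            size >>= 1
        plen = width - size.bit_length() + 1
        prefixes.append(f"{lo:0{width}b}"[:plen] + "*")
        lo += size
    return prefixes

# The common destination-port range 1024-65535 needs 6 entries instead of 1:
print(range_to_prefixes(1024, 65535))
```

The destination-port range 1024-65535 alone becomes 6 prefixes, and the expansions of several range fields in one rule multiply together.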

  9. EGT-PC: Extended Grid-of-Tries with Path Compression • Idea: regardless of database size, any packet matches only a few rules, even when the rules are projected onto the source or destination field alone • Extends an efficient two-field classification algorithm with linear search • Worst-case search time ~ HiCuts optimized for speed • Memory requirement ~ HiCuts optimized for space

  10. HiCuts: Hierarchical Intelligent Cuttings • Decision-tree based algorithm • Linear search on leaves • Storage ~ depth of tree • Local optimization at each node decides the next dimension to cut • Limit the amount of linear search • Limit the amount of storage increase • Range checks => each cut is a hyperplane
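
A minimal sketch of the lookup this structure implies (illustrative Python; assumes numeric field values on the cut dimensions and the rule_matches helper from the Terminology sketch):

```python
# HiCuts-style lookup sketch (illustrative structure, not the paper's code).
# Internal nodes cut one dimension into equal-size intervals; leaves hold
# small rule buckets searched linearly in priority order.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    dim: int = 0                                   # dimension this node cuts
    lo: int = 0                                    # low end of the node's range in dim
    width: int = 1                                 # width of each equal-size cut
    children: List["Node"] = field(default_factory=list)
    rules: List[Tuple] = field(default_factory=list)   # leaf bucket, priority order

def classify(root: Node, packet) -> Optional[Tuple]:
    node = root
    while node.children:                           # one array index per internal node
        i = (packet[node.dim] - node.lo) // node.width
        i = min(max(i, 0), len(node.children) - 1)
        node = node.children[i]
    for rule in node.rules:                        # bounded linear search at the leaf
        if rule_matches(rule, packet):             # helper from the Terminology sketch
            return rule
    return None
```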

  11. HiCuts: an example • [Tree figure: successive cuts on Field2, Field3, Field4, Field5 over ranges such as 0..3, 4..7, 8..11, 12..15; leaves hold rule subsets (e.g. R7 R10 R11) that are searched linearly] • Example packet: (0010,1101,00,01,TCP) • Bucket size = 4

  12. From HiCuts to HyperCuts • Multiple cuts per node possible • Reduces the depth of the tree (fewer memory accesses) • Array indexing keeps the cost at one memory access per node (see the sketch below) • Hypercubes instead of hyperplanes
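
A hedged sketch of how several cuts at one node still resolve to a single array index, and hence one memory access per node (node.cuts is an illustrative layout, not the paper's data structure):

```python
# HyperCuts-style child selection sketch: a node may cut several dimensions
# at once; the per-dimension indices are folded row-major into one flat
# index, so locating the child still costs a single memory access.
# node.cuts is assumed to be a list of (dim, lo, width, ncuts) tuples.

def child_index(node, packet):
    index, stride = 0, 1
    for dim, lo, width, ncuts in node.cuts:
        i = min(max((packet[dim] - lo) // width, 0), ncuts - 1)
        index += i * stride          # fold this dimension into the flat index
        stride *= ncuts
    return index                     # next node is node.children[index]
```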

  13. Hypercube * Slide taken from S. Singh’s presentation

  14. Building the Decision Tree (1) • Step 1: Select the dimensions to cut • Goal: pick the dimensions leading to the most uniform distribution of rules • Alternatives: • Largest number of unique elements • # unique elements > mean of unique elements across dimensions • # unique elements / size of region • Idea: favor the dimensions with the highest entropy (one alternative is sketched below)
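
One of the listed alternatives ("# unique elements > mean"), sketched under the assumption that rules are tuples of per-field specs:

```python
# Dimension-selection heuristic sketch: project the node's rules onto each
# dimension, count distinct projections, and cut the dimensions whose count
# is above the mean. Illustrative names only.

def pick_dimensions(rules, num_dims):
    unique = [len({rule[d] for rule in rules}) for d in range(num_dims)]
    mean = sum(unique) / num_dims
    return [d for d in range(num_dims) if unique[d] > mean]
```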

  15. Building the Decision Tree (2) • Step 2: Select the number of cuts • Goal: create a search tree with minimal memory requirements • Alternative 1: • Require a minimum number of rules in each child node • Limit the maximum number of children to space factor * sqrt(# rules in current node) (sketched below) • Alternative 2 (greedy approach): • Determine a local optimum nc(i) for each dimension • Iteratively determine the best combination
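
A sketch of the space-factor limit from Alternative 1, assuming field specs have been normalized to (lo, hi) ranges (the default space factor of 4 is illustrative, not taken from the slide):

```python
# Number-of-cuts heuristic sketch: keep doubling the number of cuts while the
# total of rule copies plus child pointers stays within spfac * sqrt(#rules).
import math

def overlaps(spec, lo, hi):
    # spec is a (lo, hi) range; wildcards are assumed normalized to full ranges.
    return not (spec[1] < lo or spec[0] >= hi)

def choose_num_cuts(rules, dim, lo, hi, spfac=4):
    budget = spfac * math.sqrt(len(rules))
    ncuts = 1
    while ncuts * 2 <= hi - lo:
        trial = ncuts * 2
        width = (hi - lo) // trial
        cost = trial + sum(
            sum(1 for r in rules
                if overlaps(r[dim], lo + i * width, lo + (i + 1) * width))
            for i in range(trial))
        if cost > budget:            # doubling again would blow the space budget
            break
        ncuts = trial
    return ncuts
```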

  16. Refinements (1) • Node Merging: sibling nodes holding the same rule set are merged and stored once • Rule Overlap: within a node's region, a rule completely covered by a higher-priority rule can be removed (sketched below)
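
A sketch of the rule-overlap refinement; covers() is an assumed (hypothetical) helper meaning "the earlier rule matches every packet in this node's region that the later rule matches":

```python
# Rule-overlap pruning sketch: a rule completely covered by an earlier
# (higher-priority) rule can never be the winning match in this region,
# so it is dropped from the node's rule set.

def prune_overlapped(rules_in_priority_order):
    kept = []
    for rule in rules_in_priority_order:
        if not any(covers(earlier, rule) for earlier in kept):  # covers(): assumed helper
            kept.append(rule)
    return kept
```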

  17. Refinements (2) • Region Compaction: shrink the region covered by a node to the smallest box that still covers its rules (sketched below) • Pushing Common Rule Subsets Upwards: • rules shared by all children are moved up into the non-leaf node • a bitmap in the node header avoids extra memory accesses
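
A sketch of region compaction, assuming every field spec has been normalized to a (lo, hi) range:

```python
# Region-compaction sketch: shrink a node's region, per dimension, to the
# smallest box that still covers all of its rules, clipped to the original
# region.

def compact_region(region, rules):
    # region: list of (lo, hi) per dimension; each rule: list of (lo, hi) per dimension
    compacted = []
    for d, (lo, hi) in enumerate(region):
        new_lo = max(lo, min(r[d][0] for r in rules))
        new_hi = min(hi, max(r[d][1] for r in rules))
        compacted.append((new_lo, new_hi))
    return compacted
```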

  18.–21. Search Algorithm * Slides taken from S. Singh's presentation

  22. Evaluation • Memory: up to an order of magnitude less than HiCuts optimized for memory and than EGT-PC • Time: 3 to 10 times faster than HiCuts • On edge-router (ER) databases: HyperCuts ~ HiCuts (only source and destination IP are specified => effectively 2 dimensions) • On firewall (FW) databases: wildcarded IP addresses make HyperCuts outperform HiCuts • Synthetic databases: memory requirement grows linearly with the number of rules (except for FW-like databases with many wildcards)

  23. Conclusions • Idea of cutting in more than one dimension per node • Improvement in memory requirement • Still one memory access per node • Refinements to reduce wasted memory • Evaluation on industrial firewall databases and on synthetic databases • Limited tree depth: possible hardware implementation using pipelining and on-chip SRAM

  24. Questions?

  25. Evaluation Data (1)

  26. Evaluation Data (2)
