
FREERIDE: System Support for High Performance Data Mining


Presentation Transcript


  1. FREERIDE: System Support for High Performance Data Mining
  Ruoming Jin, Leo Glimcher, Xuan Zhang, Ge Yang, Gagan Agrawal
  Department of Computer and Information Sciences, Ohio State University

  2. Motivation: The Data Mining Problem
  • Datasets available for mining are often large
  • Our understanding of which algorithms and parameters will yield the desired insights is limited
  • The time required to create scalable implementations of different algorithms, and to run them with different parameters on large datasets, slows down the data mining process

  3. Project Overview
  • FREERIDE (Framework for Rapid Implementation of Datamining Engines) is the base system
  • Demonstrated on a variety of standard mining algorithms

  4. FREERIDE Offers:
  • The ability to rapidly prototype a high-performance mining implementation
  • Distributed memory parallelization
  • Shared memory parallelization
  • The ability to process large, disk-resident datasets
  • Only modest modifications to a sequential implementation are needed for the above three

  5. Key Observation from Mining Algorithms
  • Popular algorithms share a common canonical loop
  • This loop can be used as the basis for supporting a common middleware (a concrete sketch follows the loop below)

  While ( ) {
      forall (data instances d) {
          I = process(d)
          R(I) = R(I) op d
      }
      …….
  }
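To make the canonical loop concrete, here is a minimal sequential C++ sketch of the same reduction structure, using a two-center k-means pass as the running example; the Point type, the toy dataset, and the hard-coded iteration count are illustrative assumptions, not part of FREERIDE's actual interface.

    // A self-contained sketch of the canonical reduction loop, specialized
    // to a two-center k-means pass (illustrative; not FREERIDE's API).
    #include <array>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Point { double x, y; };

    int main() {
        const std::vector<Point> data = {{1, 1}, {1.5, 2}, {8, 8}, {9, 9.5}};
        std::array<Point, 2> centers = {{{0, 0}, {10, 10}}};   // initial guesses

        for (int iter = 0; iter < 10; ++iter) {                // While( ) { ... }
            std::array<Point, 2> sum = {{{0, 0}, {0, 0}}};     // reduction object R
            std::array<int, 2> count = {0, 0};

            for (const Point& d : data) {                      // forall(data instances d)
                // I = process(d): pick the nearest center for this point
                int i = (std::hypot(d.x - centers[0].x, d.y - centers[0].y) <
                         std::hypot(d.x - centers[1].x, d.y - centers[1].y)) ? 0 : 1;
                // R(I) = R(I) op d: fold the point into the reduction object
                sum[i].x += d.x; sum[i].y += d.y; ++count[i];
            }
            for (int i = 0; i < 2; ++i)                        // update after the pass
                if (count[i]) centers[i] = {sum[i].x / count[i], sum[i].y / count[i]};
        }
        std::printf("centers: (%g, %g) (%g, %g)\n",
                    centers[0].x, centers[0].y, centers[1].x, centers[1].y);
        return 0;
    }

The point-to-center assignment plays the role of process(d), and folding the point into the per-center sums is the R(I) = R(I) op d step; the middleware parallelizes exactly this kind of loop.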

  6. Shared Memory Parallelization Techniques
  • Full replication: create a copy of the reduction object for each thread
  • Full locking: associate a lock with each element
  • Optimized full locking: put each element and its lock on the same cache block (sketched below)
  • Fixed locking: use a fixed number of locks
  • Cache-sensitive locking: one lock for all elements in a cache block
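As one concrete illustration, the following C++ sketch shows optimized full locking, with each reduction element co-located with its own lock on a single cache line; the 64-byte line size, the PaddedCell type, and the double-valued elements are assumptions for exposition, not FREERIDE's actual layout code.

    // Sketch of optimized full locking: each reduction element shares a
    // cache line with its own lock (C++17; 64-byte line is an assumption).
    #include <cstddef>
    #include <mutex>
    #include <vector>

    struct alignas(64) PaddedCell {     // one cache line per element+lock pair
        std::mutex lock;
        double value = 0.0;
    };

    class ReductionObject {
    public:
        explicit ReductionObject(std::size_t n) : cells_(n) {}

        // R(I) = R(I) op d, guarded by the element's co-located lock; taking
        // the lock pulls the element's cache line in at the same time
        void accumulate(std::size_t i, double d) {
            std::lock_guard<std::mutex> g(cells_[i].lock);
            cells_[i].value += d;
        }

        // unsynchronized read, intended for use after worker threads join
        double get(std::size_t i) const { return cells_[i].value; }

    private:
        std::vector<PaddedCell> cells_;
    };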

  7. Memory Layout for Various Locking Schemes
  [Diagram comparing the layouts of fixed locking, full locking, optimized full locking, and cache-sensitive locking; legend: lock, reduction element. A cache-sensitive locking sketch follows.]
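For contrast with the optimized full locking sketch above, here is cache-sensitive locking under the same illustrative assumptions: a single lock guards every element packed into the remainder of its cache block, cutting the per-element memory overhead at the cost of coarser synchronization (the three-elements-per-block count is illustrative).

    // Sketch of cache-sensitive locking: one lock guards all the elements
    // packed into its cache block (block geometry is an assumption).
    #include <cstddef>
    #include <mutex>
    #include <vector>

    struct alignas(64) CacheBlock {
        std::mutex lock;          // single lock for the whole block
        double elems[3] = {};     // elements sharing the block (count illustrative)
    };

    class ReductionObject {
    public:
        explicit ReductionObject(std::size_t n) : blocks_((n + 2) / 3) {}

        void accumulate(std::size_t i, double d) {
            CacheBlock& b = blocks_[i / 3];
            std::lock_guard<std::mutex> g(b.lock);   // coarser than full locking
            b.elems[i % 3] += d;
        }

    private:
        std::vector<CacheBlock> blocks_;
    };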

  8. Trade-offs Between Techniques
  • Memory requirements: high memory requirements can cause memory thrashing
  • Contention: if the number of reduction elements is small, contention for locks can be a significant factor
  • Coherence cache misses and false sharing: more likely with a small number of reduction elements

  9. Combining Shared Memory and Distributed Memory Parallelization
  • Distributed memory parallelization replicates the reduction object across nodes
  • This combines naturally with full replication on shared memory (see the sketch below)
  • For locking schemes with non-trivial memory layouts, there are two options:
    • Communicate the locks as well
    • Copy the reduction elements to a separate buffer
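A rough sketch of how the two levels can compose, assuming an OpenMP/MPI implementation and a reduction object modeled as a plain array of doubles (all illustrative choices, not FREERIDE's actual code): each thread updates a private copy, the copies are merged into one per-node object, and MPI_Allreduce performs the global combine.

    // Sketch: full replication across threads within a node, plus a global
    // MPI reduction across nodes (reduction object modeled as a double array).
    #include <mpi.h>
    #include <omp.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);

        const int nelems = 1024;                    // size of the reduction object
        const int nthreads = omp_get_max_threads();
        // one private copy of the reduction object per thread (full replication)
        std::vector<std::vector<double>> local(
            nthreads, std::vector<double>(nelems, 0.0));

        #pragma omp parallel
        {
            std::vector<double>& mine = local[omp_get_thread_num()];
            #pragma omp for
            for (int d = 0; d < 100000; ++d)        // stand-in for the data pass:
                mine[d % nelems] += 1.0;            // I = process(d); R(I) op= d
        }

        // merge the per-thread copies into a single per-node reduction object
        std::vector<double> node(nelems, 0.0);
        for (int t = 0; t < nthreads; ++t)
            for (int i = 0; i < nelems; ++i)
                node[i] += local[t][i];

        // global combine across nodes (the distributed memory level)
        std::vector<double> global(nelems, 0.0);
        MPI_Allreduce(node.data(), global.data(), nelems,
                      MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }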

  10. Apriori Association Mining
  [Performance chart: 500 MB dataset, N2000, L20, 4 threads]

  11. K-means Shared Memory Parallelization

  12. Performance on a Cluster of SMPs: Apriori Association Mining

  13. Results from the EM Clustering Algorithm
  • EM is a popular data mining algorithm
  • Can we parallelize it using the same support that worked for another clustering algorithm (k-means) and for algorithms for other mining tasks?

  14. Results from FP-tree
  [Performance chart: FP-tree, 800 MB dataset, 20 frequent itemsets]

  15. A Case Study: Decision Tree Construction
  • Question: can we parallelize decision tree construction using the same framework?
  • Most existing parallel algorithms have a fairly different structure (sorting, writing back …)
  • Being able to support decision tree construction would significantly add to the usefulness of the framework

  16. Approach
  • Implemented the RainForest framework (Gehrke et al.)
  • Currently focused on RF-read
  • Overview of the algorithm (a skeleton follows this list):
    • While the stop condition is not satisfied:
      • read the data
      • build the AVC-groups for the active nodes
      • choose the splitting attributes and split the nodes
      • select a new set of nodes to process, as many as main memory can hold
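The skeleton below illustrates the RF-read control loop just described, with the AVC-group reduced to per-attribute, per-value class counts; the Record and Node types, the two-class assumption, and the toy in-memory dataset are stand-ins for exposition, not RainForest's actual interfaces.

    // Illustrative skeleton of the RF-read loop (a sketch, not RainForest's
    // actual code; Record, Node, and the toy dataset are assumptions).
    #include <array>
    #include <cstdio>
    #include <map>
    #include <vector>

    struct Record { std::vector<int> attrs; int label; };   // 2 classes: 0/1

    struct Node {
        std::vector<int> members;   // indices of records reaching this node
        // AVC-group: per attribute, per attribute value, class counts
        std::vector<std::map<int, std::array<long, 2>>> avc;
    };

    int main() {
        // toy stand-in for a disk-resident dataset: two binary attributes
        std::vector<Record> data = {
            {{0, 1}, 0}, {{0, 0}, 0}, {{1, 1}, 1}, {{1, 0}, 1}};
        const int nattrs = 2;

        Node root;
        for (int i = 0; i < (int)data.size(); ++i) root.members.push_back(i);
        std::vector<Node*> active = {&root};   // as many nodes as memory holds

        while (!active.empty()) {              // stop condition: no nodes left
            // one read pass over the data builds the AVC-groups of all
            // active nodes (this is the step FREERIDE parallelizes)
            for (Node* n : active) n->avc.assign(nattrs, {});
            for (Node* n : active)
                for (int i : n->members)
                    for (int a = 0; a < nattrs; ++a)
                        n->avc[a][data[i].attrs[a]][data[i].label]++;

            // choose splitting attributes and form the next set of nodes;
            // here we just report the counts and stop after one level
            for (Node* n : active)
                for (int a = 0; a < nattrs; ++a)
                    for (auto& [v, c] : n->avc[a])
                        std::printf("attr %d = %d: class counts [%ld, %ld]\n",
                                    a, v, c[0], c[1]);
            active.clear();                    // selecting new nodes is omitted
        }
        return 0;
    }

Note that building the AVC-groups is itself an instance of the canonical reduction loop, which is why FREERIDE's shared memory techniques apply to it.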

  17. Parallelization Strategies
  • Pure approach: apply only one of full replication, optimized full locking, or cache-sensitive locking
  • Vertical approach: use replication at the top levels of the tree, locking at the lower levels
  • Horizontal approach: use replication for attributes with a small number of distinct values, locking otherwise
  • Mixed approach: combine the vertical and horizontal approaches (a selection sketch follows)
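A minimal sketch of how the vertical and horizontal criteria might pick a technique per tree node: the depth and distinct-value cutoffs, the Technique enum, and the choose function are hypothetical illustrations, not FREERIDE's API.

    // Hypothetical per-node technique selection for the mixed approach;
    // the cutoffs and names are illustrative, not FREERIDE's actual API.
    #include <cstdio>

    enum class Technique { FullReplication, CacheSensitiveLocking };

    // Vertical criterion: replicate near the root, lock deeper in the tree.
    // Horizontal criterion: replicate low-cardinality attributes, lock others.
    Technique choose(int tree_depth, long distinct_values,
                     int depth_cutoff = 3, long value_cutoff = 64) {
        if (tree_depth <= depth_cutoff && distinct_values <= value_cutoff)
            return Technique::FullReplication;    // cheap to copy per thread
        return Technique::CacheSensitiveLocking;  // too costly to replicate
    }

    int main() {
        // near the root, low-cardinality attribute: replicate
        std::printf("%d\n", choose(1, 10) == Technique::FullReplication);
        // deep node or high-cardinality attribute: lock
        std::printf("%d\n", choose(8, 500) == Technique::CacheSensitiveLocking);
        return 0;
    }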

  18. Results
  [Performance chart of the pure versions: 1.3 GB dataset with 32 million records in the training set, function 7, decision tree depth = 16]

  19. Results: Combining Full Replication and Full Locking

  20. Results: Combining Full Replication and Cache-Sensitive Locking

  21. Combining Distributed Memory and Shared Memory Parallelization for Decision Trees
  • The key problem: the large size of the AVC-groups means a very high communication volume
  • This results in very limited speedups
  • Can we modify the algorithm to reduce the communication volume?

  22. SPIES on (a) FREERIDE
  • Developed a new communication-efficient decision tree construction algorithm: Statistical Pruning of Intervals for Enhanced Scalability (SPIES)
  • Combines RainForest with statistical pruning of intervals of numerical attributes to reduce memory requirements and communication volume
  • Does not require sorting of the data, or partitioning and writing back of records
  • Paper in the SDM regular program

  23. Applying FREERIDE to Scientific Data Mining
  • Joint work with Machiraju and Parthasarathy
  • Focuses on the feature extraction, tracking, and mining approach developed by Machiraju et al.
  • A feature is a region of interest in a dataset
  • A suite of algorithms for extracting and tracking features

  24. Summary
  • Demonstrated a common framework for parallelizing a wide range of mining algorithms:
    • Association mining: apriori and FP-tree
    • Clustering: k-means and EM
    • Decision tree construction
    • Nearest neighbor search
  • Supports both shared memory and distributed memory parallelism
  • A number of advantages:
    • Eases parallelization
    • Supports higher-level interfaces
