
Presentation Transcript


  1. Panel: New Opportunities in High Performance Data Analytics (HPDA) and High Performance Computing (HPC)
  The 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), July 21 – 25, 2014, The Savoia Hotel Regency, Bologna (Italy), July 22, 2014
  Geoffrey Fox, gcf@indiana.edu, http://www.infomall.org
  School of Informatics and Computing, Digital Science Center, Indiana University Bloomington

  2. SPIDAL (Scalable Parallel Interoperable Data Analytics Library)

  3. Introduction to SPIDAL
  • Learn from the success of PETSc, ScaLAPACK etc. as HPC libraries
  • Here we discuss Global Machine Learning (GML) as part of SPIDAL (Scalable Parallel Interoperable Data Analytics Library)
  • GML = machine learning parallelized over nodes
  • LML = Local Machine Learning: pleasingly parallel, run independently on each node (see the sketch after this slide)
  • Surprisingly little packaged scalable GML exists
  • Apache: Mahout has low performance and MLlib is just starting
  • R is largely sequential (best for local machine learning, LML)
  • Our experience is based on four big data algorithms
  • Dimension Reduction (Multidimensional Scaling, MDS)
  • Levenberg-Marquardt Optimization
  • Clustering: similar to Gaussian Mixture Models, PLSI (probabilistic latent semantic indexing), LDA (Latent Dirichlet Allocation)
  • Deep Learning
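As a concrete illustration of the GML/LML distinction above, the following minimal sketch (not from the slides; the data, the fit_local helper, and the use of in-memory partitions to stand in for nodes are all illustrative assumptions) contrasts pleasingly parallel per-partition fits with a single global fit that needs a reduction across partitions.

    # Sketch: LML vs GML (illustrative only; fit_local and the partitions are hypothetical)
    import numpy as np

    partitions = [np.random.randn(1000, 2) for _ in range(4)]   # data split across 4 "nodes"

    # LML (pleasingly parallel): each node fits its own model independently;
    # no communication between nodes is needed.
    def fit_local(data):
        return data.mean(axis=0)                                # e.g. a per-node centroid

    local_models = [fit_local(p) for p in partitions]

    # GML: one global model over all the data; every step needs a reduction
    # (here a sum) over contributions from all nodes, e.g. a global mean.
    total = sum(p.sum(axis=0) for p in partitions)              # stands in for an MPI Allreduce
    count = sum(len(p) for p in partitions)
    global_model = total / count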

  4. Some Core Machine Learning Building Blocks

  5. Some General Issues: Parallelism

  6. Some Parallelism Issues
  • All use parallelism over data points
  • Entities to cluster or map to Euclidean space
  • Except deep learning, which has parallelism over the pixel plane in neurons, not over items in the training set
  • This is because Stochastic Gradient Descent needs to look at only small numbers of data items at a time
  • Maximum Likelihood or $\chi^2$ both lead to a structure like: Minimize $\sum_{i=1}^{N}$ (positive nonlinear function of unknown parameters for item $i$)
  • All are solved iteratively with a (clever) first- or second-order approximation to the shift in the objective function
  • Sometimes the steepest descent direction; sometimes Newton's method
  • They have the classic Expectation Maximization structure (a sketch of this iterative structure follows below)
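A minimal sketch of the iterative structure described above, assuming a simple per-item squared-residual cost (the synthetic data, step size, and iteration count are illustrative, not from the slides): the objective is a sum over data items, and each iteration takes a first-order (steepest descent) step on the summed objective.

    # Sketch: minimize sum_{i=1}^{N} f(x_i; theta) by steepest descent (illustrative)
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))                          # N data items, 3 features
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=1000)

    theta = np.zeros(3)                                     # unknown parameters
    step = 1.0 / len(X)                                     # small first-order step
    for _ in range(200):                                    # iterative updates
        residual = X @ theta - y                            # per-item error
        grad = X.T @ residual                               # gradient of the sum of per-item costs
        theta -= step * grad                                # steepest-descent shift
    # theta now approximates the minimizer of the summed objective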

  7. Parameter “Server”
  • Note that learning networks have a huge number of parameters (11 billion in the Stanford work), so it is inconceivable to look at the second derivative
  • Clustering and MDS have lots of parameters, but it can be practical to look at the second derivative and use Newton’s method to minimize
  • Parameters are determined in a distributed fashion but are typically needed globally
  • MPI: use broadcast and “All…” collectives (a sketch of this approach follows below)
  • AI community: use a parameter server and access parameters as needed
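A minimal sketch of the MPI-collective approach mentioned above, using mpi4py (assumed available; the gradient is a random placeholder and the learning rate is illustrative). A parameter-server design would instead have each worker pull and push parameters to a dedicated server process rather than participating in an Allreduce.

    # Sketch: globally shared parameters via MPI collectives
    # (run under mpirun/mpiexec with several ranks)
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    params = np.zeros(1000)                     # model parameters, needed globally
    comm.Bcast(params, root=0)                  # every rank starts from the same values

    local_grad = np.random.randn(1000)          # placeholder for the gradient on local data
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)    # sum contributions from all ranks
    params -= 0.01 * global_grad / comm.Get_size()         # identical update on every rank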

  8. Some Important Cases
  • Need to cover non-vector semimetric spaces and vector spaces for clustering and dimension reduction (N points in the space)
  • Vector spaces have Euclidean distances and scalar products
  • Algorithms can be O(N), and these are best for clustering, but for MDS O(N) methods may not be best, as the obvious objective function is O(N^2)
  • MDS minimizes Stress $\sigma(X) = \sum_{i<j \le N} \mathrm{weight}(i,j)\,(\delta(i,j) - d(X_i, X_j))^2$ (see the sketch after this slide)
  • Semimetric spaces just have pairwise distances $\delta(i,j)$ defined between points in the space
  • Note that the matrix solvers all use conjugate gradient, which converges in 5-100 iterations; a big gain for a matrix with a million rows, since it removes a factor of N in time complexity
  • Full matrices, not sparse as in HPCG
  • In clustering, the ratio of #clusters to #points is important; new ideas are needed if this ratio is >~ 0.1
  • There is quite a lot of work on clever methods of reducing O(N^2) to O(N) and O(N log N)
  • This is extensively used in search but not in “arithmetic” as in MDS or semimetric clustering
  • The arithmetic is similar to fast multipole methods in O(N^2) particle dynamics
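A minimal sketch of the stress function above, computed directly from a weight matrix and a dissimilarity matrix (all inputs here are synthetic placeholders, not real data): the double sum over i < j is what makes the obvious objective O(N^2) in the number of points, as the slide notes.

    # Sketch: MDS stress sigma(X) = sum_{i<j} weight(i,j) * (delta(i,j) - d(X_i, X_j))^2
    import numpy as np

    rng = np.random.default_rng(1)
    N, k = 200, 3
    X = rng.normal(size=(N, k))                    # candidate embedding in k dimensions
    delta = rng.uniform(0.5, 2.0, size=(N, N))     # placeholder pairwise dissimilarities
    delta = (delta + delta.T) / 2                  # symmetrize
    weight = np.ones((N, N))                       # uniform weights weight(i, j)

    # Euclidean distances d(X_i, X_j) between all embedded points
    diff = X[:, None, :] - X[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))

    iu = np.triu_indices(N, k=1)                   # index pairs with i < j
    stress = (weight[iu] * (delta[iu] - d[iu]) ** 2).sum()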

  9. Some Futures
  • Always run MDS; it gives insight into the data
  • Leads to a data browser, as GIS does for spatial data
  • The claim is that algorithm changes gave as much performance increase as hardware changes in simulations. Will this happen in analytics?
  • Today is like parallel computing 30 years ago with regular meshes
  • We will learn how to adapt methods automatically to give “multigrid” and “fast multipole” like algorithms
  • Need to start developing the libraries that support Big Data
  • Understand architecture issues
  • Have coupled batch and streaming versions
  • Develop much better algorithms
  • Please join the SPIDAL (Scalable Parallel Interoperable Data Analytics Library) community
