
Parallel Research at Illinois



Presentation Transcript


  1. Parallel Research at Illinois. Parallel Everywhere. www.parallel.illinois.edu

  2. Parallel @ Illinois: UPCRC, Cloud Computing Testbed, Illiac IV, OpenSparc Center of Excellence, Extreme Scale Computing, CUDA Center of Excellence

  3. Taxonomy. Cloud: runs many loosely coupled queries or applications that are often I/O intensive; the focus is on dynamic provisioning, virtualization, cost reduction, etc. Costs are linear and fault tolerance is relatively easy. Supercomputer: runs a few tightly coupled, compute-intensive applications; the focus is on algorithms, performance, communication, and synchronization. Fault tolerance is hard and costs are nonlinear. Both can be humongous systems.

  4. This talk focuses on supercomputers, from petascale to exascale.

  5. Current Leader: Jaguar, a Cray XT5 at Oak Ridge.

  6. Supercomputing in Two Years: Blue Waters Computing System at Illinois
     System attribute / Blue Waters* value:
     Vendor: IBM
     Processor: Power 7
     Sustained Performance (PF): >1.0
     Number of Processor Cores: >200,000
     Amount of Memory (PB): >0.8
     Amount of Disk Storage (PB): >10
     Amount of Archival Storage (PB): >500
     External Bandwidth (Gbps): 100-400
     Cost: $200M; machine room designed for 20 MW
     * Reference petascale computing system (no accelerators).

  7. Think of a Supercomputer as a Large Scientific Instrument to Observe the Unobservable: molecular science, weather & climate forecasting, health, astronomy, earth science.

  8. Performance grows 1,000-fold every 11 years.
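
     Spelled out, the growth rate implied by that figure is
         1000^(1/11) ≈ 1.87 per year,
     i.e., performance roughly doubles every 13 months.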

  9. Exascale in 2020 • Extrapolation of current technology • 100M-1B threads: the compute power of each thread does not increase! • 100-500 MW
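
     The arithmetic behind these figures, assuming a sustained exaflop (10^18 flop/s) machine:
         10^18 flop/s / 10^9 threads = ~10^9 flop/s per thread,
         100 MW / 10^18 flop/s = 100 pJ per flop,
     so per-thread compute power stays flat and the energy budget is only 100-500 pJ per flop.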

  10. Main Issues Can we reduce power consumption? Can we use so many threads? Can we program them? Will the machine stay up? How do we deal with variability & jitter? How do we control & manage such a system?

  11. Low Power Design • Energy consumption might be reduced by an order of magnitude with aggressive technology and architecture changes • But the changes have a price • Low-power cores -- more cores • Aggressive voltage scaling -- more errors • Aggressive DRAM redesign -- less bandwidth • Hybrid architecture -- hard to program • This problem also affects large clouds, but to a lesser extent
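
     The usual justification for the "low-power cores -- more cores" and voltage-scaling levers (not stated on the slide, but the standard CMOS reasoning) is the dynamic-power relation
         P_dyn ≈ C · V^2 · f,
     so lowering supply voltage and frequency cuts power superlinearly and the lost throughput is recovered by adding cores; operating near the threshold voltage is what raises the error rate.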

  12. Scaling Applications to 1B Threads • Weak scaling: use a more powerful machine to solve a larger problem • Increase the application size and keep the running time constant; e.g., refine the grid • The larger problem may not be of interest • May want to scale time, not space (molecular dynamics) – not always easy to work in 4D • Cannot scale space without scaling time (iterative methods): granularity decreases and communication increases
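
     For contrast, the two standard scaling regimes written out (with P processors, problem size N, running time T):
         strong scaling: N fixed, ideally T ∝ 1/P;
         weak scaling: N ∝ P, ideally T constant (the regime described above).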

  13. Scaling Iterative Methods. Assume that the number of cores (and compute power) is increased by a factor of k^4. Space and time scales are refined by a factor of k, so the mesh size increases by a factor of k × k × k (= k^3). The local cell dimension decreases by a factor of k^(1/4); cell volume decreases by a factor of k^(3/4) while area decreases by a factor of k^(2/4), so the area-to-volume ratio (the communication-to-computation ratio) increases by a factor of k^(1/4).
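
     The underlying surface-to-volume argument, sketched generically for a cubic per-core subdomain of n mesh cells per side (independent of the particular exponents above):
         computation per step ∝ n^3,   halo communication per step ∝ 6·n^2,   ratio ∝ 6/n,
     so any scaling that shrinks the per-core subdomain drives the communication-to-computation ratio up.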

  14. Debugging and Tuning: Observing 1B Threads. Scalable infrastructure to control and instrument 1B threads. Parallel information compression to identify “anomalies”. Need the ability to express “normality” (global correctness and performance assertions).
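
     As a hedged illustration of what a “global correctness assertion” might look like in practice (the slide prescribes no API; MPI and the conserved-sum invariant below are assumptions of this sketch):

        /* Sketch of a global assertion: every rank contributes its local view and
         * the reduced value is checked against an invariant (hypothetical here). */
        #include <mpi.h>
        #include <math.h>
        #include <stdio.h>

        /* Hypothetical helper: each rank's local contribution to a conserved sum. */
        static double local_mass(int rank) { return 1.0 / (rank + 1.0); }

        int main(int argc, char **argv) {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double local = local_mass(rank), global = 0.0, expected = 0.0;
            MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
            for (int r = 0; r < size; ++r)          /* redundantly computed reference */
                expected += 1.0 / (r + 1.0);

            /* The global assertion: flag an anomaly if the invariant drifts. */
            if (rank == 0 && fabs(global - expected) > 1e-9 * fabs(expected)) {
                fprintf(stderr, "global assertion failed: %g vs %g\n", global, expected);
                MPI_Abort(MPI_COMM_WORLD, 1);
            }
            MPI_Finalize();
            return 0;
        }

     A performance assertion would look the same, reducing per-thread timings instead of state; the slide's point is that at 1B threads even this reduction, and the compression of its results, must itself be scalable infrastructure.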

  15. Decreasing Mean Time to Failure • Problem: • More transistors • Smaller transistors • Lower voltage • More manufacturing variance • Current technology: global, synchronized checkpoint; HW error detection • Future technology: • More HW fault detection • More efficient checkpoint (OS?, compiler? application?) • Fault resilient algorithms?
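
     To make the checkpoint/restart idea concrete, a minimal application-level sketch (the file name, state layout, and interval are assumptions, not the slide's design; real codes coordinate the checkpoint globally and write to parallel storage):

        #include <stdio.h>

        #define N 1024
        #define CKPT_INTERVAL 100        /* assumed interval, in iterations */
        #define CKPT_FILE "state.ckpt"   /* hypothetical checkpoint file */

        /* Write the iteration counter and the state array to stable storage. */
        static void save(const double *u, int step) {
            FILE *f = fopen(CKPT_FILE, "wb");
            if (!f) return;
            fwrite(&step, sizeof step, 1, f);
            fwrite(u, sizeof *u, N, f);
            fclose(f);
        }

        /* Resume from the last checkpoint if one exists; otherwise start fresh. */
        static void restore(double *u, int *step) {
            FILE *f = fopen(CKPT_FILE, "rb");
            if (!f) return;
            if (fread(step, sizeof *step, 1, f) != 1 ||
                fread(u, sizeof *u, N, f) != (size_t)N)
                *step = 0;
            fclose(f);
        }

        int main(void) {
            double u[N] = {0};
            int step = 0;
            restore(u, &step);                      /* recover after a failure, if possible */
            for (; step < 100000; ++step) {
                for (int i = 0; i < N; ++i) u[i] += 1e-6;   /* stand-in for real work */
                if (step % CKPT_INTERVAL == 0) save(u, step);
            }
            return 0;
        }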

  16. Handling Variability • Current parallel programming models implicitly assume that all tasks progress at the same rate • Future systems will have increased variability in the rate of progress • HW variability: power management, manufacturing variability, error correction • System variability: jitter • Application variability: dynamic, irregular computations • Dynamic load balancing overcomes variability, but increases communication and reduces locality
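
     A minimal sketch of the dynamic-load-balancing idea, using OpenMP's dynamic schedule (OpenMP and the irregular workload function are assumptions of the sketch, not the slide's prescription): chunks go to whichever thread is free instead of being split statically, which absorbs variability at the cost of scheduling overhead and locality, as the slide notes.

        #include <omp.h>
        #include <stdio.h>

        /* Hypothetical irregular workload: cost varies unpredictably per task. */
        static double work(int i) {
            double s = 0.0;
            for (int j = 0; j < (i % 97) * 1000; ++j) s += 1.0 / (j + 1.0);
            return s;
        }

        int main(void) {
            const int ntasks = 100000;
            double total = 0.0;
            /* schedule(dynamic, 16): idle threads grab the next 16 iterations,
             * so fast threads end up doing more tasks and slow ones fewer. */
            #pragma omp parallel for schedule(dynamic, 16) reduction(+:total)
            for (int i = 0; i < ntasks; ++i)
                total += work(i);
            printf("total = %f using up to %d threads\n", total, omp_get_max_threads());
            return 0;
        }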

  17. Controlling the System Observability: Can you probe state of any HW or SW component? Situational awareness: When cascading errors occur, how long before you find the root cause? System analytics: Can you mine logs to identify anomalies? This problem also affects large clouds, but to a lesser extent

  18. Summary. Supercomputing is fun (again): business as usual will not go far. Supercomputing is essential for progress on key scientific and societal issues. You might live through the end of Moore’s law: this will be a fundamental shift in our discipline and our industry.
