
Outline


Presentation Transcript


  1. Outline • Course Administration • Parallel Architectures • Overview • Details • Applications • Special Approaches • Our Class Computer • Four Bad Parallel Algorithms

  2. Parallel Computer Architectures • MPP – Massively Parallel Processors • The top of the top500 list consists mostly of MPPs, but clusters are “rising” • Clusters • “Simple” cluster (1 processor per node) • Cluster of small SMPs (small # processors / node) • Constellations (large # processors / node) • Older Architectures • SIMD – Single Instruction Multiple Data (CM2) • Vector processors (old Cray machines)

  3. Architecture Details • MPPs are built with specialized networks by vendors with the intent of being used as a parallel computer. Clusters are built from independent computers integrated through an aftermarket network. • Buzzword: “COTS” – commodity off-the-shelf components rather than custom architectures. • Clusters are a market reaction to MPPs, with the thought of being cheaper. • Originally considered to have slower communications, but they are catching up.

  4. More Details • 2) “NOW” – Networks of Workstations • Beowulf (Goddard in Greenbelt, MD) • Clusters of a small number of PCs • (Beowulf: a pre-10th-century poem in Old English about a Scandinavian warrior from the 6th century)

  5. More Details • Computers → SMPs • [Block diagrams, built from P (processor), M (memory), C (cache), and D (disk) units, comparing the world’s simplest computer, a standard computer, an MPP, and SMPs]

  6. More Details • 3) SMP (Symmetric Multiprocessor): [diagram: four processor/cache (P/C) units sharing memory (M) and disk] • NUMA – Non-Uniform Memory Access • 4) Constellation: every node is a large SMP

  7. More Details • 5) SIMD – Single Instruction Multiple Data • 6) Speeds: • Megaflops: 10^6 flops • Gigaflops: 10^9 flops – workstations • Teraflops: 10^12 flops – top 17 supercomputers; by 2005, every supercomputer in the top 500 • Petaflops: 10^15 flops – 2010?
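The flop-rate tiers above can be made concrete by asking how long a fixed workload takes at each sustained rate. A minimal sketch (the 10^15-operation workload size is a made-up illustration, not from the slides):

```python
# How long a hypothetical 10^15-flop job takes at each sustained rate
# from the slide. Workload size is illustrative only.
rates = {
    "megaflops": 1e6,
    "gigaflops": 1e9,
    "teraflops": 1e12,
    "petaflops": 1e15,
}
work = 1e15  # total floating-point operations (hypothetical)

for name, flops in rates.items():
    seconds = work / flops
    print(f"{name:>10}: {seconds:.0e} s")
```

At a petaflop the job takes a second; at a megaflop it takes decades, which is the gap the slide's units are meant to convey.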

  8. Moore’s Law: the number of transistors per square inch in an integrated circuit doubles every 18 months • Every decade, computer performance increases by two orders of magnitude
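The two claims on this slide are consistent: doubling every 18 months compounds to roughly a hundredfold (two orders of magnitude) over a 120-month decade. A quick check of the arithmetic:

```python
# Moore's law as stated: doubling every 18 months.
# Over a decade (120 months): 2 ** (120 / 18) = 2 ** 6.67 ≈ 102x,
# i.e. roughly two orders of magnitude.
months_per_doubling = 18
months_per_decade = 10 * 12

growth = 2 ** (months_per_decade / months_per_doubling)
print(f"growth per decade: {growth:.1f}x")
```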

  9. Applications of Parallel Computers • Traditionally: government labs, numerically intensive applications • Research Institutions • Recent Growth in Industrial Applications • 236 of the top 500 • Financial analysis, drug design and analysis, oil exploration, aerospace and automotive

  10. Goal of Parallel Computing • Solve bigger problems faster • Challenge of Parallel Computing • Coordinate, control, and monitor the computation

  11. Easiest Applications • Embarrassingly Parallel – lots of work that can be divided out with little coordination or communication • Examples: integration, Monte Carlo methods, adding numbers
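A minimal sketch of one of the examples above, an embarrassingly parallel Monte Carlo computation (estimating pi). Each worker draws its samples independently; the only coordination is summing the per-worker hit counts at the end. The worker count and sample sizes are illustrative choices, not from the slides:

```python
import random
from multiprocessing import Pool

def count_hits(args):
    """Count random points in the unit square that land inside the unit circle."""
    n, seed = args
    rng = random.Random(seed)  # independent stream per worker
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    workers, per_worker = 4, 100_000
    # Each worker runs with no communication until the final sum.
    with Pool(workers) as pool:
        hits = sum(pool.map(count_hits, [(per_worker, s) for s in range(workers)]))
    print("pi estimate:", 4 * hits / (workers * per_worker))
```

Because the workers never exchange data mid-computation, speedup is close to linear in the number of processors, which is exactly what makes such applications the "easiest."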

  12. Special Approaches • Distributed computing on the Internet • SETI@home – signal processing, 15 teraflops • Distributed.net – factoring the product of two large primes • Parabon – biomedical: protein folding, gene expression • Akamai Network – Tom Leighton, Danny Lewin • Thousands of servers spread globally that cache web pages and route traffic away from congested areas • Embedded computing: Mercury (the inverse of worldwide distribution)
