
Parallel Computing at a Glance


Presentation Transcript


  1. Parallel Computing at a Glance

  2. History Parallel Computing

  3. What is Parallel Processing? Processing of multiple tasks simultaneously on multiple processors is called parallel processing. [Figure: data streams D1, D2, D3 fed to processors P1, P2, P3, …, Pm, combined into a result R]

  4. Why Parallel Processing? • Computational requirements are ever increasing, in both scientific and business domains (grand challenge problems). • Sequential architectures are reaching their physical limits. • Hardware improvements such as pipelining and superscalar execution are not scalable and require sophisticated compiler technology. • Vector processing works well only for certain classes of problems. • The technology of parallel processing is mature. • Significant developments in networking technology are paving the way for heterogeneous computing.

  5. Hardware Architectures for Parallel Processing • Single instruction, single data (SISD) • Single instruction, multiple data (SIMD) • Multiple instruction, single data (MISD) • Multiple instruction, multiple data (MIMD)

  6. Single instruction, single data (SISD) Sequential computers: PC, Macintosh, workstation

  7. Single instruction, multiple data (SIMD) Vector machines: CRAY, Thinking Machines
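The SIMD idea can be sketched in a few lines: one instruction (here, addition) is applied in lockstep to every element of the data streams. This is a conceptual Python illustration only; real SIMD hardware performs the whole operation in a single vector instruction.

```python
def simd_add(xs, ys):
    # One "instruction" (add) applied element-wise across multiple data items
    return [x + y for x, y in zip(xs, ys)]

print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]
```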

  8. Multiple instruction, single data (MISD)

  9. Multiple instruction, multiple data (MIMD) Processors work asynchronously.

  10. Shared Memory MIMD Machine (tightly-coupled multiprocessor) • Examples: Silicon Graphics machines, Sun's SMP • All processors share memory through a single address space • Real address vs. virtual address • Threads • NUMA vs. UMA • Message passing vs. shared memory
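A minimal sketch of the shared-memory model listed above, using Python threads (which, like processors in a tightly-coupled multiprocessor, see one common address space): several workers update one shared variable, and a lock is needed to keep the updates consistent.

```python
import threading

counter = 0                      # shared memory: visible to every thread
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # synchronise access to the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: without the lock, updates could be lost
```

Without the lock, the read-modify-write on `counter` can interleave between threads and the final count may be less than 4000, which is exactly the hazard shared-memory MIMD machines must manage.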

  11. Distributed Shared Memory MIMD Machine (loosely-coupled multiprocessor) • Examples: C-DAC's PARAM, IBM's SP/2, Intel's Paragon

  12. Comparison between Shared Memory MIMD and Distributed Shared Memory MIMD

  13. Approaches to Parallel Programming • Data parallelism (SIMD) • Process parallelism • Farmer and worker model (master and slaves)
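The farmer/worker (master/slave) model above can be sketched with the standard library: a farmer puts tasks on a queue, a pool of workers pulls tasks and pushes results back. The workload here (squaring numbers) is only illustrative, not from the slides.

```python
import queue
import threading

tasks, results = queue.Queue(), queue.Queue()

def worker():
    while True:
        n = tasks.get()
        if n is None:            # sentinel: no more work for this worker
            break
        results.put(n * n)       # do the task, return the result

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for n in range(10):              # the farmer distributes the work
    tasks.put(n)
for _ in workers:                # one sentinel per worker to shut it down
    tasks.put(None)
for w in workers:
    w.join()

print(sorted(results.queue))     # squares of 0..9, in sorted order
```

The results arrive in whatever order the workers finish, so the farmer must not assume task order; sorting here is only for display.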

  14. PARAM Supercomputers

  15. PARAS Operating Environment It is a complete parallel programming environment. • OS kernel • Host servers • Compilers • Run-time environment • Parallel file system • On-line debugger and profiling tool • Graphics and visualization support • Networking interface • Off-line parallel processing tools • Program restructurers • Libraries These components span the program development environment, the program run-time environment, and utilities.

  16. PARAS Programming Model • PARAS microkernel • CONcurrent threads Environment (CORE) • POSIX threads interface • Popular message-passing interfaces such as MPI and PVM • Parallelizing compilers • Tools and debuggers for parallel programming • Load balancing and distribution tools

  17. Levels of Parallelism

  18. Levels of Parallelism
