
CONCEPTS-2



Presentation Transcript


  1. CONCEPTS-2 Computer Engg, IIT(BHU)

  2. Parallel Computer Architectures
     Taxonomy based on how processors, memory & interconnect are laid out and how resources are managed:
     - Massively Parallel Processors (MPP)
     - Symmetric Multiprocessors (SMP)
     - Cache-Coherent Non-Uniform Memory Access (CC-NUMA)
     - Clusters
     - Distributed Systems (Grids/P2P)

  3. Parallel Computer Architectures
     MPP
     - A large parallel processing system with a shared-nothing architecture
     - Consists of several hundred nodes with a high-speed interconnection network/switch
     - Each node consists of a main memory & one or more processors, and runs a separate copy of the OS
     SMP
     - 2-64 processors today
     - Shared-everything architecture: all processors share all the global resources available
     - A single copy of the OS runs on these systems

  4. Parallel Computer Architectures
     CC-NUMA
     - A scalable multiprocessor system with a cache-coherent non-uniform memory access architecture
     - Every processor has a global view of all of the memory
     Clusters
     - A collection of workstations/PCs interconnected by a high-speed network
     - Work as an integrated collection of resources, with a single system image spanning all nodes
     Distributed systems
     - Conventional networks of independent computers
     - Have multiple system images, as each node runs its own OS
     - The individual machines can be combinations of MPPs, SMPs, clusters, & individual computers

  5. Dawn of Computer Architectures
     - Vector Computers (VC), proprietary systems: provided the breakthrough needed for the emergence of computational science, but they were only a partial answer
     - Massively Parallel Processors (MPP), proprietary systems: high cost and a low performance/price ratio
     - Symmetric Multiprocessors (SMP): suffer from limited scalability
     - Distributed Systems: difficult to use and hard to extract parallel performance from
     - Clusters, gaining popularity:
       High Performance Computing - Commodity Supercomputing
       High Availability Computing - Mission-Critical Applications

  6. Technology Trends
     The performance of PC/workstation components has almost reached that of the components used in supercomputers:
     - Microprocessors (50% to 100% per year)
     - Networks (Gigabit SANs)
     - Operating Systems (Linux, ...)
     - Programming environments (MPI, ...)
     - Applications (.edu, .com, .org, .net, .shop, .bank)
     The performance of commodity systems improves much more rapidly than that of specialized systems

  7. Towards Cluster Computing
     - Since the early 1990s, there has been an increasing trend to move away from expensive, specialized proprietary parallel supercomputers towards clusters of computers (PCs, workstations)
     - From specialized traditional supercomputing platforms to cheaper, general-purpose systems consisting of loosely coupled components built from single- or multiprocessor PCs or workstations
     - Linking together two or more computers to jointly solve computational problems

  8. Cluster
     A cluster is a type of parallel and distributed processing system consisting of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource.
     A node: a single or multiprocessor system with memory, I/O facilities, & OS
     A cluster:
     - generally 2 or more computers (nodes) connected together, in a single cabinet or physically separated & connected via a LAN
     - appears as a single system to users and applications
     - provides a cost-effective way to gain features and benefits

  9. Parallel Processing
     - Use multiple processors to build MPP/DSM-like systems for parallel computing
     Network RAM
     - Use the memory associated with each workstation as an aggregate DRAM cache
     Software RAID (Redundant Array of Inexpensive/Independent Disks)
     - Use arrays of workstation disks to provide cheap, highly available and scalable file storage
     - Makes it possible to provide parallel I/O support to applications
     Multipath Communication
     - Use multiple networks for parallel data transfer between nodes

  10. Common Cluster Models
     - High Performance (dedicated)
     - High Throughput (idle-cycle harvesting)
     - High Availability (fail-over)
     - A Unified System: HP and HA within the same cluster

  11. Cluster Components
     Multiple high-performance computers:
     - PCs
     - Workstations
     - SMPs (CLUMPS)
     - Distributed HPC systems leading to Grid Computing

  12. System Disk and I/O
     - The overall improvement in disk access time has been less than 10% per year
     - Amdahl's law: the speed-up obtained from faster processors is limited by the slowest system component
     - Parallel I/O: carry out I/O operations in parallel, supported by a parallel file system based on hardware or software RAID

  13. High Performance Networks
     - Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps)
     - SCI (Scalable Coherent Interface; ~12 µs MPI latency)
     - ATM (Asynchronous Transfer Mode)
     - Myrinet (1.28 Gbps)
     - QsNet (Quadrics Supercomputing World; ~5 µs latency for MPI messages)
     - Digital Memory Channel
     - FDDI (Fiber Distributed Data Interface)
     - InfiniBand

  14. Parts of the Cluster Computers
     Fast communication protocols and services (user-level communication):
     - Active Messages (Berkeley)
     - Fast Messages (Illinois)
     - U-Net (Cornell)
     - XTP (Virginia)
     - Virtual Interface Architecture (VIA)

  15. Parts of the Cluster Computers
     Cluster middleware:
     - Single System Image (SSI)
     - System Availability (SA) infrastructure
     Hardware: DEC Memory Channel, DSM (Alewife, DASH), SMP techniques
     Operating system kernel/gluing layers: Solaris MC, UnixWare, GLUnix, MOSIX

  16. Parts of the Cluster Computers
     Applications and subsystems:
     - Applications (system management and electronic forms)
     - Runtime systems (software DSM, PFS, etc.)
     - Resource management and scheduling (RMS) software: Oracle Grid Engine, Platform LSF (Load Sharing Facility), PBS (Portable Batch System), Microsoft Compute Cluster Server (CCS)

  17. Communication S/W
     The communication infrastructure supports protocols for:
     - Bulk-data transport
     - Streaming data
     - Group communications
     The communication service provides the cluster with important QoS parameters:
     - Latency
     - Bandwidth
     - Reliability
     - Fault-tolerance

  18. Communication S/W
     Network services are designed as a hierarchical stack of protocols with a relatively low-level communication API, providing the means to implement a wide range of communication methodologies:
     - RPC
     - DSM
     - Stream-based and message-passing interfaces (e.g., MPI, PVM)

  19. Parts of Cluster Computers
     Parallel programming environments and tools:
     - Threads (PCs, SMPs, NOW, ...): POSIX Threads, Java Threads
     - MPI (Message Passing Interface): Linux, Windows, and many supercomputers
     - Parametric programming
     - Software DSMs (Shmem)
     - Compilers: C/C++/Java; parallel programming with C++ (MIT Press book)

  20. Parts of Cluster Computers
     - RAD (rapid application development) tools: GUI-based tools for parallel-programming modeling
     - Debuggers
     - Performance analysis tools
     - Visualization tools

  21. Applications
     - Sequential
     - Parallel/distributed Grand Challenge applications:
       Weather forecasting
       Quantum chemistry
       Molecular biology modeling
       Engineering analysis (CAD/CAM)
       PDBs, web servers, data mining

  22. Advantages of Clustering
     - High Performance
     - Expandability and Scalability
     - High Throughput
     - High Availability
