High Performance Computing – Supercomputers Robert Whitten Jr
Welcome! • Today’s Agenda: • Questions from last week • Parallelism • Supercomputers
What is a supercomputer? • One of the fastest computers available at a given point in time • The Top500 list ranks them and is updated every 6 months • Serial vs. parallel • Serial means doing one thing at a time, in sequence • Parallel means doing multiple things at the same time • Parallel is how supercomputers get their performance
Parallelism • Performing multiple tasks simultaneously increases the amount of work that can be done in a given amount of time
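To make the idea concrete, here is a minimal sketch (not from the lecture) in C with OpenMP: the work of summing an array is split across threads, and each thread handles its own portion of the loop. The array size, the sum, and the compile command are illustrative assumptions only.

/* Minimal sketch: splitting one loop's work across threads with OpenMP.
 * Compile with: gcc -fopenmp sum.c -o sum */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 50000000

int main(void) {
    double *a = malloc(N * sizeof(double));
    for (long i = 0; i < N; i++)
        a[i] = 1.0;                 /* dummy work to be summed */

    double sum = 0.0;
    double start = omp_get_wtime();

    /* Each thread sums its own slice of the array; the partial sums
     * are combined at the end by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %.0f in %.3f s using up to %d threads\n",
           sum, omp_get_wtime() - start, omp_get_max_threads());
    free(a);
    return 0;
}

Running with more threads (OMP_NUM_THREADS) shortens the time for the same amount of work, which is the point of the slide.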
Parallel example • How long would it take a person to build a car? • Assume they are a pretty good mechanic • Assume they could build a car in a month by themselves • Now we’re talking about serial processing • What if 2 mechanics worked on the car? • What if 3?...4? • Now we’re talking parallel processing
Shared Memory Parallelism • Imagine the mechanics all use the same bucket of bolts • Contention occurs whenever someone reaches for the same bolt as someone else • Communication occurs when bolts (data) need to be passed back and forth between mechanics
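A minimal shared-memory sketch of the bucket-of-bolts analogy, assuming C with OpenMP (the counter and loop bounds are illustrative, not anything from the slides): every thread updates one shared counter, and an atomic update makes them take turns.

/* Minimal sketch: shared-memory contention. All threads share one
 * counter (the "bucket of bolts"); without coordination their updates
 * collide. Compile with: gcc -fopenmp bolts.c -o bolts */
#include <stdio.h>
#include <omp.h>

int main(void) {
    long bolts_used = 0;            /* the shared resource */

    #pragma omp parallel for
    for (long i = 0; i < 1000000; i++) {
        /* Every thread reaches into the same "bucket"; the atomic
         * serializes the colliding updates, which is the contention
         * described above. Remove it and the count comes out wrong. */
        #pragma omp atomic
        bolts_used++;
    }

    printf("bolts used: %ld\n", bolts_used);
    return 0;
}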
Distributed memory parallelism • Now each mechanic has his or her own section of the car to work on • This is problem decomposition • Sharing of a common resource is no longer a major factor • What if one section is easier to do? • What if one mechanic is faster than the others? • Load balancing is needed when work has to be redistributed among the mechanics
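A minimal distributed-memory sketch of the same analogy, assuming C with MPI: each rank works on its own slice of the problem and partial results are passed back explicitly with a message. The slicing scheme and the MPI_Reduce call are illustrative choices, not the lecture's code.

/* Minimal sketch: distributed-memory decomposition with MPI. Each rank
 * owns its own section of the work (its own part of the "car") and only
 * explicit messages move data between ranks.
 * Compile/run: mpicc decomp.c -o decomp && mpirun -np 4 ./decomp */
#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Problem decomposition: each rank takes an equal slice of 0..N-1.
     * If one slice were cheaper than another, some ranks would finish
     * early and wait -- the load-balancing issue from the slide. */
    long chunk = N / size;
    long start = rank * chunk;
    long end   = (rank == size - 1) ? N : start + chunk;

    double local_sum = 0.0;
    for (long i = start; i < end; i++)
        local_sum += (double)i;

    /* Communication: partial results are passed back to rank 0. */
    double total = 0.0;
    MPI_Reduce(&local_sum, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f computed by %d ranks\n", total, size);

    MPI_Finalize();
    return 0;
}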
Supercomputers • Distributed systems • TeraGrid • SETI@home • Clusters • Each node is an individual computer • Link them all together and you've got a cluster • Supercomputers • Proprietary (Cray, IBM, etc.) • Custom interconnect networks
Distributed systems • Typically heterogeneous systems • PCs, Macs, Linux, etc. • All nodes are physically separated from other nodes • Geographically • Logically • Typically only communicate data back and forth • Works best if the data can be divided into independent chunks with little interdependence
Clusters • Typically homogeneous • There might be some differences in hardware, but minimal • Typically co-located • Share a common network • Ethernet, InfiniBand, etc.
Supercomputers • Typically homogeneous • Exceptions are out there (e.g., Roadrunner @ LANL) • Share a common network fabric • Interconnect between processors • Interconnect between nodes • Nodes are not independent of each other • Service nodes • Login nodes • I/O nodes • Compute nodes
Jobs • The unit of execution on a distributed system • Can be interactive or batch • Interactive means a user has to be present to enter data • Batch means data is read from files and the user does not have to be present • Batch allows for greater utilization of the machine since many jobs can be submitted at the same time
Homework • Send me that email if you haven't already: whittenrm1@ornl.gov
Questions? http://www.nccs.gov Oak Ridge National Laboratory • U.S. Department of Energy