Case Studies Class 4
LAM/MPI
• LAM/MPI is a high-quality open-source implementation of the Message Passing Interface specification, including all of MPI-1.2 and much of MPI-2. Intended for production as well as research use, LAM/MPI includes a rich set of features for system administrators, parallel programmers, application users, and parallel computing researchers.
LAM/MPI
• Cluster Friendly, Grid Capable
  • From its beginnings, LAM/MPI was designed to operate on heterogeneous clusters. With support for Globus and Interoperable MPI, LAM/MPI can span clusters of clusters.
• Performance
  • Several transport layers, including Myrinet, are supported by LAM/MPI. With TCP/IP, LAM imposes virtually no communication overhead, even at gigabit Ethernet speeds. New collective algorithms exploit hierarchical parallelism in SMP clusters.
• Empowering Developers
  • The xmpi profiling tool and parallel debugger support (e.g., TotalView or the Distributed Debugging Tool) enable in-depth application tuning and debugging.
• A Stable, Extensible Platform for Research
  • By writing to LAM/MPI's system services interface, developers can incorporate new functionality into LAM/MPI without having to understand its internal details; researchers can readily add support for new transport layers, new collective algorithms, boot protocols, checkpoint/restart, and more.
• Tools and Third-Party Applications
  • Because LAM/MPI implements the MPI standard, it can run most third-party parallel applications, which are developed against MPI. In addition, a number of auxiliary tools are available for LAM/MPI.
SPRNG
• The Scalable Parallel Random Number Generators Library
• SPRNG is a set of libraries for scalable and portable pseudorandom number generation, developed with the requirements of users running parallel Monte Carlo simulations in mind.
• Monte Carlo calculations consume a large fraction of all supercomputing cycles, and their accuracy is critically influenced by the quality of the random number generators used. While random number generation in sequential calculations has been well studied, albeit on less powerful computers, comparatively little work has been done in the context of parallel Monte Carlo applications. SPRNG seeks to fill this gap by implementing parallel random number generators that satisfy the requirements given below.
SPRNG
• Quality: provide "high-quality" pseudorandom numbers in a computationally inexpensive and scalable manner.
• Reproducibility: provide totally reproducible streams of parallel pseudorandom numbers, independent of the number of processors used in the computation and of the loading produced by sharing of the parallel computer.
• Locality: allow the creation of unique pseudorandom number streams on a parallel machine with minimal interprocessor communication.
• Portability: be portable between serial and parallel platforms, and be available on the most commonly used workstations and supercomputers.
MPICH and SPRNG
• Estimation of Pi using the Monte Carlo Method
• Consider a circle of radius r inscribed in a square board. Darts are thrown at the board. The ratio of the number of darts that fall inside the circle (n) to the total number thrown (N) is approximately equal to the ratio of the area of the circle to that of the square. The more random darts are thrown, the more accurate the approximation.
• n/N ~ Pi/4, so Pi can be estimated.
MPICH and SPRNG
• Estimation of Pi using the Monte Carlo Method
• [Figure: circle of radius r inscribed in a square of side 2r]
MPICH and SPRNG
• Estimation of Pi using the Monte Carlo Method
• Consider one quadrant of the circle in a unit square.
• Random coordinates (xi, yi) are generated with SPRNG, and the number of points (xi, yi) that fall within the quadrant is counted.
• The process can be parallelized: the counting workload can be distributed to multiple compute nodes.
• The partial counts can then be added together to form n, and the ratio n/N can be calculated.
• The method can be extended to find integrals of arbitrary functions.
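The quadrant estimate above can be sketched as a small serial C function. This is an illustrative sketch, not code from the case study: it uses POSIX `rand_r` as a stand-in for SPRNG, and the function name `estimate_pi` is made up. In the parallel version described on the slide, each rank would run its share of the N trials with its own SPRNG stream and the partial hit counts would be combined (e.g., with `MPI_Reduce`).

```c
#include <stdlib.h>

/* Serial sketch of the dart-throwing estimate: draw N points in the
 * unit square and count those landing inside the quarter circle.
 * A real run would draw (xi, yi) from SPRNG and split the trials
 * across MPI ranks, summing the partial counts into n. */
double estimate_pi(long trials, unsigned int seed)
{
    long hits = 0;
    for (long i = 0; i < trials; i++) {
        double x = (double)rand_r(&seed) / RAND_MAX;  /* xi in [0,1] */
        double y = (double)rand_r(&seed) / RAND_MAX;  /* yi in [0,1] */
        if (x * x + y * y <= 1.0)                     /* inside quadrant */
            hits++;
    }
    return 4.0 * (double)hits / (double)trials;       /* n/N ~ Pi/4 */
}
```

The error of the estimate shrinks like 1/sqrt(N), which is why large N (and hence parallelism) pays off.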
MPICH and SPRNG
• Himalaya Option Pricing using Monte Carlo and Quasi-Monte Carlo Simulation
• Like an Asian option, the Himalaya is a call on the average performance of the best stocks within the basket. Throughout the life of the option, there are particular measurement dates on which the best performer in the basket is removed; this process continues until all but one of the assets have been removed from the basket. The total return on this last stock is taken as the final measure. The payoff is the sum of all the measured returns over the life of the option.
• Implemented by the HKBU Mathematics Department.
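The measurement rule above can be sketched as a small payoff function. This is one illustrative reading of the description, not the HKBU implementation: it assumes one measurement date per asset and takes the simulated performances as a precomputed matrix (the Monte Carlo path generation that would fill that matrix is omitted).

```c
#define NASSETS 3  /* illustrative basket size */

/* Sketch of the Himalaya measurement rule: perf[t][a] is the
 * performance of asset a at measurement date t. At each date the
 * best remaining asset is locked in and removed from the basket;
 * the payoff is the sum of the locked-in performances. */
double himalaya_payoff(double perf[NASSETS][NASSETS])
{
    int removed[NASSETS] = {0};
    double payoff = 0.0;
    for (int t = 0; t < NASSETS; t++) {        /* one date per asset */
        int best = -1;
        for (int a = 0; a < NASSETS; a++)
            if (!removed[a] && (best < 0 || perf[t][a] > perf[t][best]))
                best = a;
        payoff += perf[t][best];               /* record its return */
        removed[best] = 1;                     /* drop the best performer */
    }
    return payoff;
}
```

In a pricer, this function would be evaluated on each simulated path (pseudorandom or quasi-random) and the discounted payoffs averaged.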
mpiJava
• mpiJava is an object-oriented Java interface to the standard Message Passing Interface (MPI). The interface was developed as part of the HPJava project, but mpiJava itself does not assume any special extensions to the Java language: it should be portable to any platform that provides compatible Java development and native MPI environments.
mpiJava
• Nozzle
• This code simulates a 2-D inviscid flow through an axisymmetric nozzle. The simulation yields contour plots of all flow variables, including velocity components, pressure, Mach number, density, entropy, and temperature. The plots show the location of any shock wave that would reside in the nozzle.
• The code also finds the steady-state solution to the 2-D Euler equations.