
MPI and Grid Computing


Presentation Transcript


  1. MPI and Grid Computing
  UNC-Wilmington, C. Ferner, 2008
  Nov 4, 2008

  2. MPICH-G2
  • MPICH is an implementation of the MPI standard
  • MPICH-G is an implementation of MPI that uses the Globus Toolkit
  • MPICH-G2 is a complete redesign and reimplementation of MPICH-G

  3. MPICH-G2
  • MPICH-G2 hides heterogeneity by using the Globus Toolkit for
    • authentication
    • authorization
    • executable staging
    • process control
    • communication
    • redirection of standard input and output
    • remote file access

  4. % grid-proxy-init
     % mpirun -np 6 myprog
  [Diagram: mpirun generates the RSL and passes it to globusrun, which consults MDS, uses GASS, and submits multiple jobs. DUROC authenticates and coordinates startup across the GRAMs; each GRAM initiates its job and detects termination through a local process-control mechanism (fork, Condor, or SGE), yielding processes P0 through P5.]
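
  The myprog being launched above is an ordinary MPI executable. A minimal sketch of such a program (hypothetical, not from the slides), which mpirun here would start as six processes:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id, 0..size-1 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes, 6 for -np 6 */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }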

  5. RSL
  • mpirun will generate the RSL, or one can create/provide the RSL
  • This is RSL version 1 (not XML)

  +
  ( &(resourceManagerContact="m1.utech.edu")
     (count=10)
     (jobtype=mpi)
     (label="subjob 0")
     (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0))
     (directory=/homes/users/smith)
     (executable=/homes/users/smith/myapp)
  )
  ( &(resourceManagerContact="m2.utech.edu")
     (count=10)
     (label="subjob 1")
     (environment=(GLOBUS_DUROC_SUBJOB_INDEX 1))
     (directory=/homes/users/smith)
     (executable=/homes/users/smith/myapp)
  )
  ( &(resourceManagerContact="c1.nlab.gov")
     (count=10)
     (jobtype=mpi)
     (label="subjob 2")
     (environment=(GLOBUS_DUROC_SUBJOB_INDEX 2))
     (directory=/users/smith)
     (executable=/users/smith/myapp)
  )
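
  Each subjob can discover its own index at run time from the GLOBUS_DUROC_SUBJOB_INDEX environment variable set in the RSL above. A minimal sketch of reading it in C (illustrative, not from the slides):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Set per subjob in the RSL above; NULL if not launched under DUROC. */
        const char *idx = getenv("GLOBUS_DUROC_SUBJOB_INDEX");

        if (idx != NULL)
            printf("Running as part of subjob %s\n", idx);
        else
            printf("GLOBUS_DUROC_SUBJOB_INDEX not set\n");

        return 0;
    }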

  6. Topology
  • Early implementations of MPI's collective communication operations assumed all processes were equidistant from one another.
  • This assumption is unlikely to hold in a Grid environment.
  [Diagram: twelve processes (0 through 11) spread across Site A and Site B, running on a shared-memory machine and two clusters (Cluster 1, Cluster 2).]

  7. Topology
  • Using the concept of communicators, MPICH-G2's collective communication operations are topology-aware.
  • It uses the concepts of “levels” and “colors”.
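
  The levels/colors idea can be illustrated with standard MPI: if every process knows a "color" identifying its site, MPI_Comm_split builds a site-local communicator, and a broadcast can be staged in two phases (across site leaders over the wide area, then within each site). A simplified sketch using only standard MPI (MPICH-G2's internal mechanism and attribute names are not shown here; the even/odd site assignment is a stand-in for real topology information):

    #include <stdio.h>
    #include <mpi.h>

    /* Two-phase, topology-aware broadcast sketch: one wide-area transfer
     * per site, then a fast broadcast inside each site. */
    int main(int argc, char **argv)
    {
        int world_rank, site_color, site_rank, data = 0;
        MPI_Comm site_comm, leader_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Assume each process can determine its site; here we fake it by
         * putting even ranks in site 0 and odd ranks in site 1. */
        site_color = world_rank % 2;

        /* All processes with the same color join one communicator. */
        MPI_Comm_split(MPI_COMM_WORLD, site_color, world_rank, &site_comm);
        MPI_Comm_rank(site_comm, &site_rank);

        /* A second split collects the site leaders (local rank 0) into
         * their own communicator for the wide-area phase. */
        MPI_Comm_split(MPI_COMM_WORLD,
                       site_rank == 0 ? 0 : MPI_UNDEFINED,
                       world_rank, &leader_comm);

        if (world_rank == 0)
            data = 42;  /* value to broadcast */

        /* Phase 1: broadcast among site leaders (crosses the WAN once
         * per site instead of once per process). */
        if (leader_comm != MPI_COMM_NULL)
            MPI_Bcast(&data, 1, MPI_INT, 0, leader_comm);

        /* Phase 2: each leader broadcasts within its own site. */
        MPI_Bcast(&data, 1, MPI_INT, 0, site_comm);

        printf("rank %d (site %d) got %d\n", world_rank, site_color, data);

        if (leader_comm != MPI_COMM_NULL)
            MPI_Comm_free(&leader_comm);
        MPI_Comm_free(&site_comm);
        MPI_Finalize();
        return 0;
    }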
