Message Passing and MPI (CS433, Spring 2001)

Presentation Transcript


  1. Message Passing and MPI
  CS433, Spring 2001
  Laxmikant Kale

  2. Message Passing
  [diagram: PE0 executes a send; the data is copied; PE1 executes a matching receive]

  3. Basic Message Passing
  • We will describe a hypothetical message passing system, with just a few calls that define the model
  • Later, we will look at real message passing models (e.g. MPI), with a more complex set of calls
  • Basic calls:
    • send(int proc, int tag, int size, char *buf);
    • recv(int proc, int tag, int size, char *buf);
    • recv may return the actual number of bytes received in some systems
  • tag and proc may be wildcarded in a recv:
    • recv(ANY, ANY, 1000, &buf);
  • broadcast
  • Other global operations (reductions)
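  A minimal sketch of how these hypothetical calls might be combined, using the model's ANY wildcard so that process 0 can collect results in whatever order they arrive (compute() is a hypothetical stand-in for local work, not part of the model):

    int result;
    if (myProcessorNum() == 0) {
      for (int i = 1; i < maxProcessors(); i++) {
        /* match a message from any sender, with any tag */
        recv(ANY, ANY, sizeof(int), (char *)&result);
        /* ... combine result ... */
      }
    } else {
      result = compute();                        /* hypothetical local work */
      send(0, 1, sizeof(int), (char *)&result);
    }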

  4. Pi with message passing
  int count = 0, c;
  main() {
    Seed s = makeSeed(myProcessorNum());
    /* each of the P processors generates its share of the samples */
    for (int i = 0; i < 100000/P; i++) {
      double x = random(s);
      double y = random(s);
      if (x*x + y*y < 1.0) count++;
    }
    send(0, 1, 4, (char *)&count);

  5. Pi with message passing
    if (myProcessorNum() == 0) {
      for (int i = 0; i < maxProcessors(); i++) {
        recv(i, 1, 4, (char *)&c);
        count += c;
      }
      printf("pi=%f\n", 4.0*count/100000);
    }
  } /* end function main */

  6. Collective calls
  • Message passing is often, but not always, used for the SPMD style of programming:
    • SPMD: Single Program, Multiple Data
    • All processors execute essentially the same program, and the same steps, but not in lockstep
    • All communication is almost in lockstep
  • Collective calls:
    • global reductions (such as max or sum)
    • syncBroadcast (often just called broadcast):
      • syncBroadcast(whoAmI, dataSize, dataBuffer);
      • whoAmI: sender or receiver
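  A sketch of how syncBroadcast might be used in this model (SENDER and RECEIVER are assumed constants for the whoAmI argument, and readInput() is a hypothetical stand-in for obtaining the data; neither is defined on the slides):

    double param;
    if (myProcessorNum() == 0) {
      param = readInput();                 /* only the root has the value */
      syncBroadcast(SENDER, sizeof(double), (char *)&param);
    } else {
      syncBroadcast(RECEIVER, sizeof(double), (char *)&param);
    }
    /* after the collective call, every process holds the same param */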

  7. Standardization of message passing
  • Historically:
    • nxlib (on Intel hypercubes)
    • nCUBE variants
    • PVM
    • Everyone had their own variants
  • MPI standard:
    • Vendors, ISVs, and academics got together with the intent of standardizing current practice
    • Ended up with a large standard
    • Popular, due to vendor support
    • Support for:
      • communicators: avoiding tag conflicts, ..
      • data types:
      • ..

  8. Basic MPI calls
  • MPI_Init(&argc, &argv); MPI_Finalize();
  • MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    • my_rank is an int (what is my processor's serial number?)
  • MPI_Comm_size(MPI_COMM_WORLD, &P);
    • P is an int: the total number of processors (processes)
  • MPI_Send(m, size, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
  • MPI_Recv(m, size, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
    • m is a char*; size, dest, source, and tag are ints; status is an MPI_Status
  • These 6 calls suffice to write many parallel programs!

  9. A Simple MPI Program
  #include <stdio.h>
  #include <string.h>
  #include "mpi.h"
  #define MPIW MPI_COMM_WORLD
  int main(int argc, char **argv) {
    int me, P;
    char buf[10] = "hello";
    MPI_Status status;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPIW, &me);
    MPI_Comm_size(MPIW, &P);
    /* every process except 0 first receives from its left neighbor */
    if (me != 0)
      MPI_Recv(buf, strlen(buf)+1, MPI_CHAR, me-1, 5, MPIW, &status);
    printf("%s from process %d\n", buf, me);
    /* then passes the message on to its right neighbor */
    if (me < P-1)
      MPI_Send(buf, strlen(buf)+1, MPI_CHAR, me+1, 5, MPIW);
    MPI_Finalize();
    return 0;
  }
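  With a typical MPI installation, a program like this would be compiled and launched along the lines of mpicc hello.c -o hello followed by mpirun -np 4 ./hello (the exact commands vary by installation). Each process then prints one "hello from process k" line; note that only the sends and receives are ordered, not the output, so the printed lines may appear in any order.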

  10. Review: Basic MPI calls
  • MPI_Init(&argc, &argv); MPI_Finalize();
  • MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    • my_rank is an int (what is my processor's serial number?)
  • MPI_Comm_size(MPI_COMM_WORLD, &P);
    • P is an int: the total number of processors (processes)
  • MPI_Send(m, size, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
  • MPI_Recv(m, size, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
    • m is a char*; size, dest, source, and tag are ints; status is an MPI_Status
  • These 6 calls suffice to write many parallel programs!
  So, what are MPI_CHAR and MPI_COMM_WORLD? Does MPI support other data types? Yes. Other worlds? Well, other "communicators", which are partitions of this one.
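  A minimal sketch of creating such a partition with MPI_Comm_split (a real MPI call, though not one of the six on these slides): the ranks are split into two halves, each with its own communicator, so tags used in one half cannot match messages in the other.

    #include "mpi.h"
    int main(int argc, char **argv) {
      int me, P;
      MPI_Comm half;                                 /* a smaller "world" */
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &me);
      MPI_Comm_size(MPI_COMM_WORLD, &P);
      /* color picks the partition; the key (me) orders ranks within it */
      MPI_Comm_split(MPI_COMM_WORLD, me < P/2 ? 0 : 1, me, &half);
      /* sends and receives on 'half' match only within this partition */
      MPI_Comm_free(&half);
      MPI_Finalize();
      return 0;
    }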

  11. MPI collective communications
  • Reductions and broadcasts
  • MPI_Bcast(msg, size, datatype, root, communicator);
    • Note: all processes must call this, including the root
    • It is an implicit "send" by the root and a "recv" by the others
  • MPI_Reduce(data, result, size, type, op, root, comm);
    • data and result are pointers; op specifies the operation (sum, max, min, ..)
    • size is the number of data items (not bytes)
  • MPI_Barrier(comm);
  • MPI_Gather:
    • collect data from everyone in one place
  • MPI_Scatter:
    • the reverse: distribute data from one place to everyone
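  As a closing sketch, here is the pi program from slides 4-5 redone with MPI_Reduce, so the hand-written receive loop disappears. The srand/rand seeding is an illustrative assumption, not taken from the slides:

    #include <stdio.h>
    #include <stdlib.h>
    #include "mpi.h"
    int main(int argc, char **argv) {
      int me, P, count = 0, total = 0;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &me);
      MPI_Comm_size(MPI_COMM_WORLD, &P);
      srand(me + 1);                               /* per-process seed */
      for (int i = 0; i < 100000 / P; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y < 1.0) count++;
      }
      /* every process contributes count; the sum arrives at root 0 */
      MPI_Reduce(&count, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
      if (me == 0) printf("pi=%f\n", 4.0 * total / 100000);
      MPI_Finalize();
      return 0;
    }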
