MPI Send and Receive
• Sending and receiving are two foundational concepts of MPI
• Almost every function in MPI can be implemented with basic send and receive calls
Overview of Sending and Receiving
• MPI’s send and receive calls operate in the following manner:
 - Process 0 wants to send a message to process 1
 - Process 0 packages the data into a buffer
 - The message is routed to the right location; the location is determined by the rank of the receiving process
 - Process 1 must acknowledge that it wants to receive process 0’s data
 - Once process 1 posts the matching receive, the data is transmitted
 - Process 0 can then go back to work
Overview of Sending and Receiving
• If process 0 has to send different types of messages to process 1:
 - MPI allows senders and receivers to attach message IDs, called tags, to their messages
 - Process 1 can request a message with a specific tag number
 - Messages with other tag numbers are buffered until process 1 is ready to receive them (see the sketch below)
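A minimal sketch, not part of the original slides, of how tags let the same pair of processes exchange different kinds of messages. The variable names (temperature, pressure) and the tag values are illustrative; the out-of-order receive relies on the implementation buffering the small tag-1 message, which is typical eager-protocol behavior for tiny payloads.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    int temperature = 72, pressure = 30;
    if (world_rank == 0) {
        // Two sends to the same destination, distinguished only by their tags (1 and 2)
        MPI_Send(&temperature, 1, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Send(&pressure, 1, MPI_INT, 1, 2, MPI_COMM_WORLD);
    } else if (world_rank == 1) {
        // Request the tag-2 message first; the tag-1 message stays buffered
        // until the matching receive with tag 1 is posted
        MPI_Recv(&pressure, 1, MPI_INT, 0, 2, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&temperature, 1, MPI_INT, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Process 1 received pressure %d and temperature %d\n",
               pressure, temperature);
    }

    MPI_Finalize();
    return 0;
}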
Overview of Sending and Receiving
• Prototypes for the MPI sending and receiving functions

MPI_Send(void* data, int count, MPI_Datatype datatype, int destination,
         int tag, MPI_Comm communicator)

MPI_Recv(void* data, int count, MPI_Datatype datatype, int source,
         int tag, MPI_Comm communicator, MPI_Status* status)
Overview of Sending and Receiving
• The first argument is the data buffer
• The second and third arguments describe the count and type of the elements that reside in the buffer
• MPI_Send sends exactly count elements
• MPI_Recv will receive at most count elements
• The fourth and fifth arguments specify the rank of the destination (for MPI_Send) or source (for MPI_Recv) process and the tag of the message
• The sixth argument specifies the communicator
• The last argument (for MPI_Recv only) provides information about the received message, such as its actual source, tag, and element count (see the sketch below)
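A short sketch, not from the original slides, of how the MPI_Status argument can be inspected after MPI_Recv; the tag value and buffer sizes are arbitrary choices for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    if (world_rank == 0) {
        int numbers[5] = {1, 2, 3, 4, 5};
        MPI_Send(numbers, 5, MPI_INT, 1, 7, MPI_COMM_WORLD);
    } else if (world_rank == 1) {
        // Receive at most 10 ints from any source with any tag, then use the
        // status object to find out what actually arrived
        int buffer[10];
        MPI_Status status;
        MPI_Recv(buffer, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);

        int count;
        MPI_Get_count(&status, MPI_INT, &count);
        printf("Received %d ints from process %d with tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }

    MPI_Finalize();
    return 0;
}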
Example 1

// Find out rank, size
int world_rank;
MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
int world_size;
MPI_Comm_size(MPI_COMM_WORLD, &world_size);

int number;
if (world_rank == 0) {
    number = -1;
    MPI_Send(&number, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
} else if (world_rank == 1) {
    MPI_Recv(&number, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("Process 1 received number %d from process 0\n", number);
}
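Example 1 is a fragment: it assumes it sits inside a complete program that calls MPI_Init and MPI_Finalize, and that the program is launched with at least two processes (for example with mpirun -np 2 after compiling with mpicc; the executable name is whatever you choose). With fewer than two processes, the send to rank 1 would target a nonexistent process.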
Example 2

int token;
if (world_rank != 0) {
    MPI_Recv(&token, 1, MPI_INT, world_rank - 1, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    printf("Process %d received token %d from process %d\n",
           world_rank, token, world_rank - 1);
} else {
    // Set the token's value if you are process 0
    token = -1;
}
MPI_Send(&token, 1, MPI_INT, (world_rank + 1) % world_size, 0,
         MPI_COMM_WORLD);

// Now process 0 can receive from the last process.
if (world_rank == 0) {
    MPI_Recv(&token, 1, MPI_INT, world_size - 1, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    printf("Process %d received token %d from process %d\n",
           world_rank, token, world_size - 1);
}
Example 2
• Process 0 initializes a value, which is then passed around the ring through every process
• The program terminates when process 0 receives the value back from the last process
• Extra care is taken to ensure that the program doesn’t deadlock:
 - Process 0 makes sure that it has completed its first send before it tries to receive the value from the last process
 - All of the other processes simply call MPI_Recv (receiving from their lower neighbor) and then MPI_Send (sending the value to their higher neighbor) to pass the value along the ring
• A deadlock-free alternative that combines the send and receive in a single call is sketched after the sample output below
Example 2
• Sample output when run with five processes:

Process 1 received token -1 from process 0
Process 2 received token -1 from process 1
Process 3 received token -1 from process 2
Process 4 received token -1 from process 3
Process 0 received token -1 from process 4
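As an aside that is not part of the original slides: MPI also provides MPI_Sendrecv, which pairs a send and a receive in one call and lets the library order the two operations internally, so a ring exchange cannot deadlock. The sketch below is a minimal illustration of that pattern; it performs a one-step neighbor exchange rather than the sequential token pass of Example 2, and the variable names are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    int send_token = world_rank;  // illustrative payload: each rank sends its own rank
    int recv_token;
    int next = (world_rank + 1) % world_size;
    int prev = (world_rank - 1 + world_size) % world_size;

    // Every process sends to its right neighbor and receives from its left
    // neighbor in a single combined call, so no manual ordering is needed
    MPI_Sendrecv(&send_token, 1, MPI_INT, next, 0,
                 &recv_token, 1, MPI_INT, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Process %d received %d from process %d\n",
           world_rank, recv_token, prev);

    MPI_Finalize();
    return 0;
}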