
Operating Systems



Presentation Transcript


  1. Operating Systems • Session 2: • Process & Threads • Parallel Computing • Multitasking, Multiprocessing & Multithreading • MPI Juan Carlos Martinez

  2. Parallel Computing A task is broken down into sub-tasks, performed by separate workers or processes. Processes interact by exchanging information. What do we basically need? • The ability to start the tasks. • A way for them to communicate.

  3. Parallel Computing Why do we need it? Speedup! Alternatives: • Simple local multithreaded applications: Java thread library • Clusters: MPI • Grid: Globus Toolkit 4, Grid Superscalar

  4. Parallel Computing Pros and Cons • Pro: better performance (speedup): overlapping blocking operations, taking advantage of multiple CPUs, prioritization, … • Con: more complex code • Con: concurrency problems: deadlocks, data integrity, …

  5. Multitasking, Multiprocessing and Multithreading Multitasking: the ability of an OS to switch among tasks quickly enough to give the appearance of simultaneous execution of those tasks, e.g. Windows XP. Multiprocessing vs Multithreading First… what's the difference between a process and a thread?

  6. Multitasking, Multiprocessing and Multithreading Multiprocessing vs Multithreading Differences between processes and threads: • The basic difference is that fork() creates a new process (a child process), whereas creating a thread creates no new process (everything stays within one process). • The relation is one-to-many (one process, many threads). • With threads, data can be shared with the other threads of the same process; with fork() it cannot, since each process gets its own copy. • Shared memory space vs. individual memory space. Now which one should be faster, and why? (See the sketch below.)
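A minimal sketch (not from the slides) of the point above: a child created with fork() gets its own copy of the parent's data, so an update made in the child is invisible to the parent.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int counter = 0;

int main(void) {
    pid_t pid = fork();                 /* new process: separate address space */
    if (pid < 0) { perror("fork"); return EXIT_FAILURE; }
    if (pid == 0) {                     /* child process */
        counter++;                      /* modifies the child's private copy */
        printf("child:  counter = %d\n", counter);    /* prints 1 */
        return EXIT_SUCCESS;
    }
    wait(NULL);                         /* parent waits for the child to finish */
    printf("parent: counter = %d\n", counter);        /* still 0: the copies are separate */
    return EXIT_SUCCESS;
}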

  7. Multitasking, Multiprocessing and Multithreading Multiprocessing • A multi-processing application is one that has multiple processes running on different processors of the same or different computers (even across OSs). • Each process has its own memory space, so it uses more memory resources. • The advantage of using processes instead of threads is that there is very little synchronization overhead between processes; in video software, for example, this can lead to faster renders than a multithreaded approach. • Another benefit: an error in one process does not affect the other processes. Contrast this with multi-threading, where an error in one thread can bring down all the threads in the process. • Further, individual processes may be run as different users and have different permissions.

  8. Multitasking, Multiprocessing and Multithreading Multithreading • Multi-threading refers to an application with multiple threads running within a process. • A thread is a stream of instructions within a process. Each thread has its own instruction pointer, set of registers and stack memory. The virtual address space is process-specific, i.e. common to all threads within a process, so data on the heap can be readily accessed by all threads, for good or ill. • Switching and synchronization costs are lower, and the shared address space (noted above) means data sharing requires no extra work. (See the sketch below.)
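A minimal sketch (not from the slides) of the shared-address-space behaviour described above: two POSIX threads increment the same global counter, and a mutex synchronizes the updates. Compile with, e.g., gcc -pthread.

#include <pthread.h>
#include <stdio.h>

int counter = 0;                                    /* shared by all threads of the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                  /* synchronize access to the shared counter */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);        /* two threads in the same address space */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);              /* 200000: both threads updated the same variable */
    return 0;
}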

  9. MPI • A message passing library specification • Message-passing model • Not a compiler specification (i.e. not a language) • Not a specific product • Designed for parallel computers, clusters, and heterogeneous networks

  10. MPI • Development began in early 1992 • Open process / broad participation • IBM, Intel, TMC, Meiko, Cray, Convex, nCUBE • PVM, p4, Express, Linda, … • Laboratories, universities, government • Final version of the draft in May 1994 • Public and vendor implementations are now widely available

  11. MPI Point to Point Communication • A message is sent from a sender to a receiver • There are several variations on how the sending of a message can interact with the program Synchronous • A synchronous communication does not complete until the message has been received • Like a fax or registered mail Asynchronous • An asynchronous communication completes as soon as the message is on its way • Like a postcard or email (A send/receive sketch follows below.)
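A minimal point-to-point sketch (not from the slides): rank 0 sends one integer to rank 1 with a blocking MPI_Send, and rank 1 receives it with a blocking MPI_Recv. Run with at least two processes.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* blocking send: returns once the send buffer can safely be reused */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive: returns once the message has arrived */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}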

  12. MPI Blocking and Non-blocking • Blocking operations only return when the operation has been completed (like a printer) • Non-blocking operations return right away and allow the program to do other work (like a TV capture card, which can record one channel while you watch another); a non-blocking sketch follows below Collective Communications • Point-to-point communications involve pairs of processes; many message passing systems also provide operations that allow larger numbers of processes to participate
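A minimal non-blocking sketch (not from the slides): the transfer is started with MPI_Isend/MPI_Irecv, the program is free to do other work, and MPI_Wait blocks until the operation has completed. Run with at least two processes.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 7;
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);   /* returns immediately */
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);   /* returns immediately */
    }

    /* ... the program could do other useful work here ... */

    if (rank == 0 || rank == 1) {
        MPI_Wait(&req, MPI_STATUS_IGNORE);          /* block until the operation has completed */
        if (rank == 1) printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}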

  13. MPI Types of Collective Transfers • Barrier: synchronizes processes; no data is exchanged, but the barrier blocks until all processes have called the barrier routine • Broadcast (sometimes multicast): a one-to-many communication in which one process sends one message to several destinations • Reduction: combines values from many processes into a single result, often useful in a many-to-one communication (See the sketch below.)
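A minimal collective sketch (not from the slides): rank 0 broadcasts a value to every process, each process contributes its rank to a sum with MPI_Reduce, and MPI_Barrier synchronizes everyone before printing.

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, np, value = 0, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    if (rank == 0) value = 100;
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);                 /* one-to-many: every rank now has 100 */

    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);  /* many-to-one: sum of all ranks at rank 0 */

    MPI_Barrier(MPI_COMM_WORLD);                                      /* everyone waits here */
    if (rank == 0)
        printf("broadcast value = %d, sum of ranks = %d\n", value, sum);

    MPI_Finalize();
    return 0;
}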

  14. MPI What's in a Message? An MPI message is an array of elements of a particular MPI datatype. All MPI messages are typed: the type of the contents must be specified in both the send and the receive.

  15. Basic MPI Data Types

  16. Basic MPI Data Types
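For reference, the basic MPI datatypes for C and the C types they correspond to (the standard C bindings): • MPI_CHAR: signed char • MPI_SHORT: signed short int • MPI_INT: signed int • MPI_LONG: signed long int • MPI_UNSIGNED_CHAR: unsigned char • MPI_UNSIGNED_SHORT: unsigned short int • MPI_UNSIGNED: unsigned int • MPI_UNSIGNED_LONG: unsigned long int • MPI_FLOAT: float • MPI_DOUBLE: double • MPI_LONG_DOUBLE: long double • MPI_BYTE and MPI_PACKED: no direct C equivalent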

  17. General MPI Program Structure

#include <mpi.h>                          /* MPI include file */

int main(int argc, char *argv[])
{
    int np, rank, ierr;                   /* variable declarations */

    /* initialize the MPI environment */
    ierr = MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    /* do work and make message passing calls */

    /* terminate the MPI environment */
    ierr = MPI_Finalize();
    return 0;
}

  18. Sample Program: Hello World!

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myrank, size;

    /* Initialize MPI */
    MPI_Init(&argc, &argv);

    /* Get my rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    /* Get the total number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Processor %d of %d: Hello World!\n", myrank, size);

    /* Terminate MPI */
    MPI_Finalize();
    return 0;
}
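With a typical MPI implementation (e.g. MPICH or Open MPI), the program can be compiled and launched along these lines; exact commands depend on the installation:

mpicc hello.c -o hello
mpirun -np 4 ./hello

Each of the four processes then prints its own "Processor <rank> of 4: Hello World!" line, in no particular order.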

  19. MPI Finally, a slightly more complex example: you have 4 computers in a cluster; call them A, B, C and D. Your application should perform the following operations: • V = A^t x B • W = A x B^t • X = V x B^t • Y = W x A^t • Z = X + Y Any ideas or suggestions for this?

  20. Reminders Don't forget to choose your course project. Have a good weekend!
