
Introduction to OpenMP



Presentation Transcript


  1. Introduction to OpenMP

  2. OpenMP Introduction

  3. Credits: www.sdsc.edu/~allans/cs260/lectures/OpenMP.ppt www.mgnet.org/~douglas/Classes/cs521-s02/...openmp/MPI-OpenMP.ppt OpenMP Homepage: http://www.openmp.org/

  4. Module Objectives
  • Introduction to the OpenMP standard
  • After completion, users should be equipped to implement OpenMP constructs in their applications and realize performance improvements on shared memory machines

  5. Definition
  Parallel Computing:
  • Computing multiple things simultaneously.
  • Usually means computing different parts of the same problem simultaneously.
  • In scientific computing, it often means decomposing a domain into more than one sub-domain and computing a solution on each sub-domain separately and simultaneously (or almost separately and simultaneously).

  6. Types of Parallelism
  • Perfect (a.k.a. Embarrassing, Trivial) Parallelism
    – Monte-Carlo Methods
    – Cellular Automata
  • Data Parallelism
    – Domain Decomposition
    – Dense Matrix Multiplication
  • Task Parallelism
    – Pipelining
    – Monte-Carlo?
    – Cellular Automata?

  7. Performance Measures
  • Peak Performance: theoretical upper bound on performance.
  • Sustained Performance: highest consistently achievable speed.
  • MHz: million cycles per second.
  • MIPS: million instructions per second.
  • Mflops: million floating point operations per second.
  • Speedup: sequential run time divided by parallel run time.

  8. Parallelism Issues
  • Programming notation
  • Algorithms and Data Structures
  • Load Balancing
  • Problem Size
  • Communication
  • Portability
  • Scalability

  9. Getting your feet wet

  10. Memory Types
  [diagram: shared memory — several CPUs connected to one common memory; distributed memory — each CPU paired with its own local memory]

  11. Clustered SMPs (symmetric multiprocessors)
  [diagram: multi-socket and/or multi-core nodes, each with local memory, joined by a cluster interconnect network]

  12. Distributed vs. Shared Memory
  • Shared: all processors share a global pool of memory
    – simpler to program
    – bus contention leads to poor scalability
  • Distributed: each processor physically has its own (private) memory
    – scales well
    – memory management is more difficult

  13. What is OpenMP?
  • OpenMP is a portable, multiprocessing API for shared memory computers
  • OpenMP is not a “language”
  • Instead, OpenMP specifies a notation as part of an existing language (FORTRAN, C) for parallel programming on a shared memory machine
  • Portable across different architectures
  • Scalable as more processors are added
  • Easy to convert sequential code to parallel

  14. Why should I use OpenMP?

  15. Where should I use OpenMP?

  16. OpenMP Specification
  OpenMP consists of three main parts:
  • Compiler directives, used by the programmer to communicate with the compiler
  • A runtime library, which enables the setting and querying of parallel parameters
  • Environment variables, which can be used to define a limited number of runtime parameters

  17. OpenMP Example Usage (1 of 2)
  [diagram: sequential program annotated with directives → OpenMP compiler; a compiler switch selects whether the output is the sequential or the parallel program]

  18. OpenMP Example Usage (2 of 2)
  • If you give the sequential switch, comments and pragmas are ignored.
  • If you give the parallel switch, comments and/or pragmas are read and cause translation into a parallel program.
  • Ideally, one source serves for both the sequential and the parallel program (a big maintenance plus).

  19. Simple OpenMP Program
  • Most OpenMP constructs are compiler directives or pragmas
  • The focus of OpenMP is to parallelize loops
  • OpenMP offers an incremental approach to parallelism

  20. OpenMP Programming Model
  OpenMP is a shared memory model.
  • Workload is distributed among threads
    – Variables can be shared among all threads, or duplicated and private to each thread
    – Threads communicate by sharing variables
  • Unintended sharing of data can lead to race conditions
    – race condition: when the program’s outcome changes as the threads are scheduled differently
  • To control race conditions, use synchronization (Chapter Four) to protect data conflicts
  • Careless use of synchronization can lead to deadlocks (Chapter Four)
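The race condition described above can be made concrete with a short sketch. This is an illustration, not from the slides: the function name count_hits is hypothetical, and the fix shown uses the critical construct (one of the synchronization tools the slide alludes to). Compiled without OpenMP support, the pragmas are ignored and the function simply runs sequentially.

```c
/* Increment a shared counter n times across all threads.
 * Without the critical section, hits++ on the shared variable
 * would be a race condition: two threads could read the same
 * old value and one increment would be lost. */
long count_hits(long n)
{
    long hits = 0;                  /* shared among threads */
    long i;
    #pragma omp parallel for shared(hits)
    for (i = 0; i < n; i++) {
        #pragma omp critical        /* one thread at a time */
        hits++;
    }
    return hits;
}
```

Build with the compiler's OpenMP switch (e.g. -fopenmp for GNU, per slide 25) to get the parallel version; either way the result is deterministic because of the critical section.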

  21. OpenMP Execution Model
  • Fork-join model of parallel execution
  • Begin execution as a single process (master thread)
  • Start of a parallel construct: the master thread creates a team of threads
  • End of a parallel construct: threads in the team wait until all team work has been completed
  • Only the master thread continues execution

  22. The Basic Idea

  23. OpenMP directive format in C
  • #pragma directives are defined by the C standard as a mechanism for compiler-specific tasks, e.g. ignore errors, generate special code
  • A #pragma must be ignored if not understood; thus, some OpenMP programs can be compiled for sequential OR parallel execution
  • Typically, OpenMP directives are enabled by a compiler option
  • OpenMP pragma usage:
      #pragma omp directive_name [ clause [ clause ] ... ] newline
  • Conditional compilation:
      #ifdef _OPENMP
      printf("%d avail. processors\n", omp_get_num_procs());
      #endif
  • Directives are case sensitive
  • Include file for library routines:
      #ifdef _OPENMP
      #include <omp.h>
      #endif
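The conditional-compilation idiom above can be packaged as a complete function. A minimal sketch (the helper name available_processors is illustrative): the same source compiles with or without OpenMP, because omp.h and the runtime call are only referenced when the compiler defines _OPENMP.

```c
#include <stdio.h>
#ifdef _OPENMP
#include <omp.h>        /* only pulled in when OpenMP is enabled */
#endif

/* One source for both sequential and parallel builds: query the
 * OpenMP runtime when _OPENMP is defined, otherwise fall back
 * to 1 so the program still builds and runs correctly. */
int available_processors(void)
{
#ifdef _OPENMP
    return omp_get_num_procs();
#else
    return 1;           /* sequential build: a single processor */
#endif
}
```

Either build of this function returns at least 1, so callers never need to know which variant they got.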

  24. Microsoft Visual Studio OpenMP Option

  25. Other Compilers
  • Intel (icc, ifort, icpc): -openmp
  • PGI (pgcc, pgf90, …): -mp
  • GNU (gcc, gfortran, g++): -fopenmp (needs version 4.2 or later)

  26. OpenMP parallel region construct
  • Block of code to be executed by multiple threads in parallel; each thread executes the same code redundantly!
  • C/C++:
      #pragma omp parallel [ clause [ clause ] ... ] newline
      { structured-block }
  • clause can be either or both of the following:
      private(comma-separated identifier-list)
      shared(comma-separated identifier-list)
  • If no private/shared list is given, shared is assumed for all variables

  27. OpenMP parallel region construct

  28. Communicating Among Threads
  Shared Memory Model:
  • threads read and write shared variables
  • no need for explicit message passing
  • change a variable’s storage attribute to private to minimize synchronization and improve cache reuse, because private variables are duplicated in every team member

  29. Storage Model – Data Scoping
  • Shared memory programming model: variables are shared by default
  • Global variables are SHARED among threads
    – C: file scope variables, static variables
  • Private variables exist only within the scope of each thread, i.e. they are uninitialized and undefined outside the data scope
    – loop index variables
    – stack variables in sub-programs called from parallel regions

  30. OpenMP -- example
      #include <stdio.h>
      int main() {
          // Do this part in parallel
          printf( "Hello, World!\n" );
          return 0;
      }

  31. OpenMP -- example
      #include <stdio.h>
      #include <omp.h>
      int main() {
          omp_set_num_threads(16);
          // Do this part in parallel
          #pragma omp parallel
          {
              printf( "Hello, World!\n" );
          }
          return 0;
      }

  32. OpenMP environment variables
  • OMP_NUM_THREADS sets the number of threads to use during execution
    – when dynamic adjustment of the number of threads is enabled, the value of this environment variable is the maximum number of threads to use
  • setenv OMP_NUM_THREADS 16 [csh, tcsh]
  • export OMP_NUM_THREADS=16 [sh, ksh, bash]
  • At runtime, a call such as omp_set_num_threads(6) overrides the environment variable

  33. OpenMP runtime library
  • omp_get_num_threads: returns the number of threads currently in the team executing the parallel region from which it is called
    – C/C++: int omp_get_num_threads(void);
  • omp_get_thread_num: returns the thread number within the team, which lies between 0 and omp_get_num_threads()-1, inclusive; the master thread of the team is thread 0
    – C/C++: int omp_get_thread_num(void);
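A sketch of both routines in use (the function name record_thread_ids is illustrative, and the sequential fallback definitions are an assumption for builds without OpenMP): each thread writes its own id into a shared array, and the master thread reports the team size.

```c
#ifdef _OPENMP
#include <omp.h>
#else
/* Sequential fallbacks so the same source builds without OpenMP */
static int omp_get_num_threads(void) { return 1; }
static int omp_get_thread_num(void)  { return 0; }
#endif

/* Each thread stores its own id into ids[]; returns the team
 * size.  tid is private (declared inside the region); ids and
 * nthreads are shared by all threads. */
int record_thread_ids(int ids[], int max)
{
    int nthreads = 1;
    #pragma omp parallel shared(ids, nthreads)
    {
        int tid = omp_get_thread_num();   /* 0 .. team size - 1 */
        if (tid < max)
            ids[tid] = tid;
        if (tid == 0)                     /* master thread reports */
            nthreads = omp_get_num_threads();
    }
    return nthreads;
}
```

Without OpenMP the region runs on the single master thread, so the function returns 1 and only ids[0] is written, which matches the slide's description of thread 0.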

  34. Programming Model - Fork/Join
      int main() {
          // serial region
          printf("Hello…");
          // parallel region (fork)
          #pragma omp parallel
          {
              printf("World");
          }
          // serial again (join)
          printf("!");
      }
  Output (with a team of 4 threads): Hello…WorldWorldWorldWorld!

  35. Programming Model – Thread Identification
  Master Thread:
  • Thread with ID=0
  • Only thread that exists in sequential regions
  • Depending on implementation, may have special purpose inside parallel regions
  • Some special directives affect only the master thread (like master)
  • Other threads in a team have ids 1..N-1
  [diagram: fork from thread 0 into threads 0–7, then join back to thread 0]

  36. Run-time Library: Timing
  There are two portable timing routines:
  • omp_get_wtime
    – portable wall clock timer; returns a double precision value that is the number of elapsed seconds from some point in the past
    – gives time per thread, possibly not globally consistent
    – difference two readings to get elapsed time in code
  • omp_get_wtick
    – time between ticks in seconds
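The "difference two readings" recipe can be wrapped as follows. A sketch under one stated assumption: the clock()-based fallback for non-OpenMP builds is only a rough sequential stand-in (it measures CPU time, not true wall time) so that the code still compiles and runs everywhere.

```c
#include <time.h>
#ifdef _OPENMP
#include <omp.h>
#endif

/* Wall-clock seconds from some point in the past (slide 36).
 * With OpenMP, this is the portable omp_get_wtime(); without it,
 * clock()/CLOCKS_PER_SEC is a crude sequential substitute. */
double wall_seconds(void)
{
#ifdef _OPENMP
    return omp_get_wtime();
#else
    return (double) clock() / CLOCKS_PER_SEC;
#endif
}
```

Typical use: double t0 = wall_seconds(); /* ... work ... */ double elapsed = wall_seconds() - t0;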

  37. Loop Constructs
  Because the use of parallel followed by a loop construct is so common, this shorthand notation is often used (note: the directive should be followed immediately by the loop):
      #pragma omp parallel for [ clause [ clause ] ... ] newline
      for ( ; ; ) { }
  Subsets of iterations are assigned to each thread in the team.

  38. Programming Model – Concurrent Loops
  • OpenMP easily parallelizes loops
  • Requirement: no data dependencies between iterations!
  • Preprocessor calculates loop bounds for each thread directly from the serial source
      #pragma omp parallel for
      for( i=0; i < 25; i++ ) {
          printf("Foo");
      }

  39. Sequential Matrix Multiply
      for( i=0; i<n; i++ )
          for( j=0; j<n; j++ ) {
              c[i][j] = 0.0;
              for( k=0; k<n; k++ )
                  c[i][j] += a[i][k]*b[k][j];
          }

  40. OpenMP Matrix Multiply
      #pragma omp parallel for
      for( i=0; i<n; i++ )
          for( j=0; j<n; j++ ) {
              c[i][j] = 0.0;
              for( k=0; k<n; k++ )
                  c[i][j] += a[i][k]*b[k][j];
          }
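One caveat the slide leaves implicit: in C, if j and k were declared outside the loop nest they would be shared, and threads would overwrite each other's indices. A self-contained sketch (the fixed size N and the function name matmul are illustrative) that keeps j and k private by declaring them inside the parallelized loop body:

```c
#define N 4

/* Parallel matrix multiply c = a * b.  Iterations of the outer
 * i loop are divided among threads; i is made private
 * automatically by the parallel for, while j and k are private
 * because they are declared inside the loop body, so each thread
 * gets its own copies. */
void matmul(double a[N][N], double b[N][N], double c[N][N])
{
    int i;
    #pragma omp parallel for
    for (i = 0; i < N; i++) {
        int j, k;                     /* private per thread */
        for (j = 0; j < N; j++) {
            c[i][j] = 0.0;
            for (k = 0; k < N; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
    }
}
```

The rows of c are disjoint across iterations of i, so there are no data dependencies between iterations, which is exactly the requirement from slide 38.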

  41. OpenMP parallel for directive
  clause can be one of the following:
  • private( list )
  • shared( list )
  • default( none | shared | private )
  • if ( Boolean expression )
  • reduction( operator : list )
  • schedule( type [ , chunk ] )
  • nowait
  • num_threads( N )
  Notes:
  • There is an implicit barrier at the end of for unless nowait is specified
  • If nowait is specified, threads do not synchronize at the end of the parallel loop
  • The schedule clause specifies how iterations of the loop are divided among the threads of the team; the default is implementation dependent

  42. OpenMP parallel/for directive
      #pragma omp parallel private(f)
      {
          f = 7;
          #pragma omp for
          for (i=0; i<20; i++)
              a[i] = b[i] + f * (i+1);
      } /* omp end parallel */
      // i is private; a, b are shared

  43. Default Clause
  • Note that the default storage attribute is DEFAULT(SHARED)
  • To change the default: DEFAULT(PRIVATE)
    – each variable in the static extent of the parallel region is made private as if specified by a private clause
    – mostly saves typing
    – (Fortran only; in C/C++, only default(shared) and default(none) are available)
  • DEFAULT(NONE): no default; you must list the storage attribute for each variable. USE THIS!

  44. If Clause
  • if ( Boolean expression )
  • executes the region in parallel if the expression is true; otherwise the parallel region runs serially
  • used to test whether there is sufficient work to justify the overhead of creating and terminating a parallel region
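A sketch of the if clause on a simple loop (the function name sum_array and the threshold 1000 are illustrative, not measured values; the reduction clause used here is introduced on slide 48): for small n the region runs serially, avoiding the fork/join overhead.

```c
/* Sum n values, going parallel only when the trip count is large
 * enough to repay the cost of forking a team of threads.  For
 * n <= 1000 the if clause makes the region execute serially. */
double sum_array(const double *x, int n)
{
    double total = 0.0;
    int i;
    #pragma omp parallel for reduction(+:total) if(n > 1000)
    for (i = 0; i < n; i++)
        total += x[i];
    return total;
}
```

The right threshold is workload- and machine-dependent; in practice it is found by timing the loop (e.g. with omp_get_wtime from slide 36) at several problem sizes.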

  45. Conditional Parallelism: Example
      for( i=0; i<n; i++ )
          #pragma omp parallel for if( n-i > 100 )
          for( j=i+1; j<n; j++ )
              for( k=i+1; k<n; k++ )
                  a[j][k] = a[j][k] - a[i][k]*a[i][j] / a[j][j];

  46. Data model
  • Private and shared variables
  • Variables in the global data space are accessed by all parallel threads (shared variables)
  • Variables in a thread’s private space can only be accessed by that thread (private variables)
    – several variations, depending on the initial values and whether the results are copied outside the region

  47.
      #pragma omp parallel for private( privIndx, privDbl )
      for ( i = 0; i < arraySize; i++ ) {
          for ( privIndx = 0; privIndx < 16; privIndx++ ) {
              privDbl = ( (double) privIndx ) / 16;
              y[i] = sin( exp( cos( - exp( sin(x[i]) ) ) ) ) + cos( privDbl );
          }
      }
  The parallel for loop index is private by default.

  48. Reduction Variables
  • #pragma omp parallel for reduction( op : list )
  • op is one of +, *, -, &, ^, |, &&, or ||
  • The variables in list must be used with this operator in the loop.
  • The variables are automatically initialized to sensible values.

  49. The reduction clause
      sum = 0.0;
      #pragma omp parallel for default(none) shared(n, x) reduction(+ : sum)
      for (int i=0; i<n; i++)
          sum = sum + x[i];
  • A private instance of sum is allocated to each thread
  • Each thread performs a local sum
  • Before terminating, each thread adds its local sum to the global sum variable
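The same pattern as a complete function, using a dot product instead of a plain sum (the function name dot_product is illustrative): each thread accumulates into a private copy of d, initialized to 0 for the + operator, and the copies are combined at the end of the loop.

```c
/* Dot product with a + reduction.  default(none) forces every
 * variable's storage attribute to be listed explicitly, as slide
 * 43 recommends; the loop index i is predetermined private. */
double dot_product(const double *x, const double *y, int n)
{
    double d = 0.0;
    int i;
    #pragma omp parallel for default(none) shared(x, y, n) reduction(+:d)
    for (i = 0; i < n; i++)
        d += x[i] * y[i];
    return d;
}
```

Compiled without OpenMP the pragma is ignored and the loop runs sequentially, giving the same result.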
