
OpenMP



Presentation Transcript


  1. OpenMP. Uses some of the slides for chapters 7 and 9 accompanying “Introduction to Parallel Computing”, Addison Wesley, 2003 (http://www-users.cs.umn.edu/~karypis/parbook), and some slides from OpenMP Usage / Orna Agmon Ben-Yehuda (www.haifux.org/lectures/209/openmp.pdf).

  2. OpenMP
  • An API for shared-memory parallel applications in C, C++ and Fortran.
  • High-level directives for thread manipulation.
  • OpenMP directives provide support for concurrency, synchronization, and data handling.
  • Based on #pragma compiler directives.

  3. Reminder – Quicksort
  • Choose one of the elements to be the pivot.
  • Rearrange the list so that all smaller elements come before the pivot and all larger ones come after it.
  • After the rearrangement, the pivot is in its final location!
  • Recursively rearrange the smaller and the larger elements.
  • Divide and conquer algorithm – suitable for parallel execution.

  4. Serial Quicksort code

void quicksort(int* values, int from, int to) {
    int pivotVal, i, j;
    if (from < to) {
        pivotVal = values[from];
        i = from;
        for (j = from + 1; j < to; j++) {
            if (values[j] <= pivotVal) {
                i = i + 1;
                swap(values[i], values[j]);
            }
        }
        swap(values[from], values[i]);  /* pivot reaches its final position */
        quicksort(values, from, i);
        quicksort(values, i + 1, to);
    }
}

[Figure: partitioning steps (a)-(e), showing the pivot's final position.]

  5. Naïve parallel Quicksort – Try 1

void quicksort(int* values, int from, int to) {
    int pivotVal, i, j;
    if (from < to) {
        pivotVal = values[from];
        i = from;
        for (j = from + 1; j < to; j++) {
            if (values[j] <= pivotVal) {
                i = i + 1;
                swap(values[i], values[j]);
            }
        }
        swap(values[from], values[i]);
        /* Sections are performed in parallel */
        #pragma omp parallel sections
        {
            #pragma omp section
            quicksort(values, from, i);
            #pragma omp section
            quicksort(values, i + 1, to);
        }
    }
}

Without omp_set_nested(1) no threads will be created recursively. Is this efficient?

  6. Naïve parallel Quicksort – Try 2

void quicksort(int* values, int from, int to) {
    int pivotVal, i, j, create_thread;
    int max_threads = omp_get_max_threads();
    static int threads_num = 1;
    if (from < to) {
        // … choose pivot and do swaps …
        #pragma omp atomic
        create_thread = (threads_num++ < max_threads);
        if (create_thread) {
            #pragma omp parallel sections
            {
                #pragma omp section
                quicksort(values, from, i);
                #pragma omp section
                quicksort(values, i + 1, to);
            }
            #pragma omp atomic
            threads_num--;
        } else {
            #pragma omp atomic
            threads_num--;
            quicksort(values, from, i);
            quicksort(values, i + 1, to);
        }
    }
}

Does this compile? What’s the atomic action?

  7. Naïve parallel Quicksort – Try 3

void quicksort(int* values, int from, int to) {
    int pivotVal, i, j, create_thread;
    if (from < to) {
        // … choose pivot and do swaps …
        create_thread = change_threads_num(1);
        if (create_thread) {
            #pragma omp parallel sections
            {
                #pragma omp section
                quicksort(values, from, i);
                #pragma omp section
                quicksort(values, i + 1, to);
            }
            change_threads_num(-1);
        } else {
            quicksort(values, from, i);
            quicksort(values, i + 1, to);
        }
    }
}

int change_threads_num(int change) {
    static int threads_num = 0;
    static int max_threads = omp_get_max_threads();
    int do_change;
    #pragma omp critical
    {
        do_change = (threads_num += change) < max_threads;
        threads_num = min(max_threads, threads_num);
    }
    return do_change;
}

Can’t return from block. Is this always better than non-atomic actions? What goes wrong if thread count is inaccurate?

  8. Breaking Out
  • Breaking out of parallel loops is not allowed and will not compile.
  • A continue can often replace a break, depending on the probabilities known to the programmer.
  • An exit call hidden inside a wrapper function will not be caught at compile time, but the results are undefined.
  • If the exit data is important, it is better to collect status in a variable and exit after the loop ends.
  • Treat the possibility that more than one thread reaches a certain error state.
  • Be careful with other constructs as well (e.g. barriers).

  9. Lessons from naïve Quicksort
  • Control the number of threads when creating them recursively.
  • There are no compare-and-swap operations in OpenMP.
  • Critical sections are relatively expensive.
  • An implicit flush occurs after barriers, critical sections and atomic operations (the latter only for the accessed address).

  10. Efficient Quicksort
  • The naïve implementation performs the partitioning serially.
  • We would like to parallelize the partitioning.
  • In each step:
  • Pivot selection
  • Local rearrangement (why?)
  • Global rearrangement

  11. Efficient global rearrangement
  • Without efficient global rearrangement the partitioning is still serial.
  • Each processor keeps track of the elements smaller (Si) and larger (Li) than the pivot.
  • We then sum prefixes to find the final location of each value.
  • The prefix sum can be done in parallel (PPC).

  12. Code for the brave…

void p_quicksort(int* A, int n) {
    struct proc *d;
    int i, p, s, ss, p_loc, done = 0;
    int *temp, *next, *cur = A;
    next = temp = malloc(sizeof(int) * n);
    /* Block executed by all threads in parallel */
    #pragma omp parallel shared(next, done, d, p) private(i, s, ss, p_loc)
    {
        /* Use thread ID and threads num to divide work */
        int c = omp_get_thread_num();
        p = omp_get_num_threads();
        /* Some work must be done only once; implicit barrier follows */
        #pragma omp single
        d = malloc(sizeof(struct proc) * p);
        d[c].start = c * n / p;
        d[c].end = (c + 1) * n / p - 1;
        d[c].sec = d[c].done = 0;
        do {
            s = d[c].start;
            if (c == d[c].sec && !d[c].done) {
                int p_loc = quick_select(cur + s, d[c].end - s + 1);
                swap(cur[s], cur[s + p_loc]);   /* set pivot at start */
            }
            ss = d[d[c].sec].start;             /* pivot location */
            /* Wait for all threads to arrive before moving on */
            #pragma omp barrier
            /* Local rearrangement */
            for (i = d[c].start; i <= d[c].end; i++)
                if (cur[i] <= cur[ss]) {
                    swap(cur[s], cur[i]);
                    s++;
                }

  13. More code for the brave…

            /* Prefix sum */
            d[c].s_i = max(s - d[c].start, 0);
            d[c].l_i = max(d[c].end - s + 1, 0);
            #pragma omp barrier
            if (c == d[c].sec)
                d[c].p_loc = s_prefix_sum(d, d[c].sec, p) + d[c].start - 1;
            /* Global rearrangement */
            #pragma omp barrier
            p_loc = d[c].p_loc = d[d[c].sec].p_loc;
            memcpy(next + ss + d[c].s_i, cur + d[c].start,
                   sizeof(int) * (s - d[c].start));
            memcpy(next + p_loc + d[c].l_i + 1, cur + s,
                   sizeof(int) * (d[c].end - s + 1));
            #pragma omp barrier
            /* Put pivot in place */
            if (d[c].sec == c && !d[c].done) {
                swap(next[ss], next[p_loc]);
                cur[p_loc] = next[p_loc];   /* the pivot is in its final position */
            }
            #pragma omp barrier
            #pragma omp single
            {
                done = p_rebalance(d, p);
                swap(cur, next);
            }
        } while (!done);
        /* One more iteration sequentially */
        seq_quicksort(cur + d[c].start, 0, d[c].end - d[c].start + 1);
        #pragma omp barrier
        if (A != cur)
            memcpy(A + n*c/p, cur + n*c/p,
                   sizeof(int) * (n*(c+1)/p - n*c/p));
    }   /* out of parallel block – implicit barrier */
    free(d);
    free(temp);
}

  14. Lessons from efficient Quicksort
  • An implicit barrier takes place at the exit of the parallel construct and of the work-sharing constructs – for, sections and single – unless nowait is specified; critical and ordered do not imply a barrier.
  • Sometimes we need to rethink parallel algorithms and make them different from the serial ones.
  • Algorithms should be chosen by actual performance, not only by complexity.
  • Pivot selection is more important in the efficient Quicksort.

  15. Moving data between threads
  • Barriers may be overkill (overhead...).
  • OpenMP's flush (list) directive can be used for specific variables:
  • Write the variable on thread A
  • Flush the variable on thread A
  • Flush the variable on thread B
  • Read the variable on thread B
  • Parameters are flushed in the order of the list.
  • More about the OpenMP memory model later in this course.

  16. Backup

  17. Efficient Quicksort – last bits

int p_rebalance(struct proc* d, int p) {
    int c, new_sec, sec, done = 1;
    new_sec = 1;
    for (c = 0; c < p; c++) {
        d[c].sec = sec = new_sec ? c : max(sec, d[c].sec);
        new_sec = 0;
        if (c+1 < p && d[c].sec == d[c+1].sec &&
            d[c].p_loc >= d[c].start && d[c].p_loc <= d[c].end) {
            d[c+1].start = d[c].p_loc + 1;
            d[c].end = d[c].p_loc - 1;
            new_sec = 1;
        }
        if (c+2 < p && d[c+2].sec == d[c].sec &&
            d[c].p_loc >= d[c+1].start && d[c].p_loc <= d[c+1].end &&
            d[c+2].end - d[c].p_loc > d[c].p_loc - d[c].start) {
            d[c+1].start = d[c].p_loc + 1;
            d[c].end = d[c].p_loc - 1;
            new_sec = 1;
        }
        if (d[c].end < d[c].start) {
            d[c].sec = c;
            new_sec = 1;
            d[c].done = 1;
        } else if (d[c].end > d[sec].start && d[c].sec != c) {
            done = 0;
        }
    }
    return done;
}

This is the serial implementation. It could also be done in parallel.
