
Lecture 11 Sorting


Presentation Transcript


  1. Lecture 11 Sorting Parallel Computing Fall 2008

  2. Sorting Algorithm • Rearranging a list of numbers into increasing (strictly speaking, nondecreasing) order.

  3. Potential Speedup • O(n log n) is optimal for any sequential sorting algorithm that does not use special properties of the numbers. • The best we can expect, based upon a sequential O(n log n) sorting algorithm and using n processors, is an optimal parallel time complexity of O(n log n)/n = O(log n). • This has been obtained, but the constant hidden in the order notation is extremely large.

  4. Compare-and-Exchange Sorting Algorithms: Compare and Exchange • Form the basis of several, if not most, classical sequential sorting algorithms. • Two numbers, say A and B, are compared. If A > B, A and B are exchanged, i.e.: if (A > B) { temp = A; A = B; B = temp; }

  5. Message-Passing Method • P1 sends A to P2, and P2 sends B to P1. Both processes then perform the compare operation: P1 keeps the smaller of A and B, and P2 keeps the larger (for an ascending sort, with P1 holding the earlier position in the list).
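As an illustration, here is a minimal sketch of this exchange in C with MPI. It is not from the original slides: the ranks, tag, sample values, and the assumption of exactly two processes are all illustrative.

    /* Compare-exchange between two processes via MPI_Sendrecv.
       Sketch only: assumes exactly two processes, ranks 0 and 1,
       each holding one int. After the exchange, rank 0 keeps the
       smaller value and rank 1 keeps the larger. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, mine, other;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        mine = (rank == 0) ? 42 : 7;          /* illustrative data */
        int partner = 1 - rank;
        /* Each process sends its value and receives its partner's. */
        MPI_Sendrecv(&mine, 1, MPI_INT, partner, 0,
                     &other, 1, MPI_INT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Both compare; one keeps the min, the other the max. */
        if (rank == 0)
            mine = (mine < other) ? mine : other;   /* keep smaller */
        else
            mine = (mine > other) ? mine : other;   /* keep larger */
        printf("rank %d keeps %d\n", rank, mine);
        MPI_Finalize();
        return 0;
    }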

  6. Merging Two Sublists
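The slide's figure is not reproduced here. As a stand-in, a minimal C sketch of merging two sorted sublists into one sorted list (array names and the function signature are illustrative):

    /* Merge sorted sublists a[0..na-1] and b[0..nb-1]
       into out[0..na+nb-1]. */
    void merge(const int *a, int na, const int *b, int nb, int *out) {
        int i = 0, j = 0, k = 0;
        while (i < na && j < nb)
            out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        while (i < na) out[k++] = a[i++];   /* drain leftovers */
        while (j < nb) out[k++] = b[j++];
    }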

  7. Bubble Sort • First, the largest number is moved to the end of the list by a series of compares and exchanges, starting at the opposite end. • These actions are repeated with subsequent numbers, stopping just before the previously positioned number. • In this way, the larger numbers move ("bubble") toward one end of the list, as in the sketch below.
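A minimal C sketch of this sequential bubble sort (function and variable names are illustrative):

    /* Bubble sort: after pass i, the largest remaining element has
       bubbled into its final position at the end of the range. */
    void bubble_sort(int *a, int n) {
        for (int i = n - 1; i > 0; i--)          /* shrinking range */
            for (int j = 0; j < i; j++)
                if (a[j] > a[j + 1]) {           /* compare-exchange */
                    int temp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = temp;
                }
    }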

  8. Bubble Sort

  9. Time Complexity • Number of compare-and-exchange operations: (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2. • This indicates a time complexity of O(n^2), given that a single compare-and-exchange operation has constant complexity, O(1).

  10. Parallel Bubble Sort • An iteration can start before the previous iteration has finished, provided it does not overtake the previous bubbling action.

  11. Odd-Even (Transposition) Sort • A variation of bubble sort that operates in two alternating phases, an even phase and an odd phase. • Even phase: even-numbered processes exchange numbers with their right neighbors. • Odd phase: odd-numbered processes exchange numbers with their right neighbors.

  12. Sequential Odd-Even Transposition Sort
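The slide's listing is not reproduced here; the following is a minimal C sketch of the sequential algorithm (names are illustrative):

    /* Odd-even transposition sort: n alternating phases.
       Even phases compare-exchange pairs (0,1), (2,3), ...;
       odd phases compare-exchange pairs (1,2), (3,4), ... */
    void odd_even_sort(int *a, int n) {
        for (int phase = 0; phase < n; phase++)
            for (int i = phase % 2; i + 1 < n; i += 2)
                if (a[i] > a[i + 1]) {
                    int temp = a[i];
                    a[i] = a[i + 1];
                    a[i + 1] = temp;
                }
    }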

  13. Parallel Odd-Even Transposition Sort • Sorting eight numbers.

  14. Parallel Odd-Even Transposition Sort • Consider the one-item-per-processor case. • There are n iterations; in each iteration, each processor does one compare-exchange, all of which can be done in parallel. • The parallel run time of this formulation is Θ(n). • This is cost-optimal with respect to the base serial algorithm (the process-time product is n · Θ(n) = Θ(n^2), matching bubble sort) but not with respect to the optimal O(n log n) serial algorithm.
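A hedged sketch of this one-element-per-process formulation in C with MPI (the function name, tag, and pairing details are illustrative assumptions, not from the slides):

    /* One element per process: p phases; in each phase a process
       pairs with a neighbor and keeps the min (left partner) or
       max (right partner). Assumes p processes, one int each. */
    #include <mpi.h>

    void odd_even_one_per_proc(int *mine, int rank, int p) {
        for (int phase = 0; phase < p; phase++) {
            int partner;
            if (phase % 2 == 0)                    /* even phase */
                partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
            else                                   /* odd phase */
                partner = (rank % 2 == 0) ? rank - 1 : rank + 1;
            if (partner < 0 || partner >= p)
                continue;                          /* idle this phase */
            int other;
            MPI_Sendrecv(mine, 1, MPI_INT, partner, 0,
                         &other, 1, MPI_INT, partner, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (rank < partner)                    /* left keeps min */
                *mine = (*mine < other) ? *mine : other;
            else                                   /* right keeps max */
                *mine = (*mine > other) ? *mine : other;
        }
    }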

  15. Parallel Odd-Even Transposition Sort

  16. Parallel Odd-Even Transposition Sort • Consider a block of n/p elements per processor. • The first step is a local sort. • In each subsequent step, the compare-exchange operation is replaced by the compare-split operation (sketched below). • There are p phases, with each phase performing Θ(n/p) comparisons and Θ(n/p) communication. • The parallel run time of the formulation is Tp = Θ((n/p) log(n/p)) [local sort] + Θ(n) [comparisons] + Θ(n) [communication]. • The parallel formulation is cost-optimal for p = O(log n).
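A minimal C sketch of the compare-split operation, reusing the merge sketch from slide 6 (the names and the keep_low convention are illustrative): each of the two partners merges its own sorted block with the partner's and keeps either the lower or the upper half.

    void merge(const int *a, int na, const int *b, int nb, int *out);
    /* merge() is the slide-6 sketch above. */

    /* Compare-split: given my sorted block mine[0..m-1] and the
       partner's sorted block theirs[0..m-1] (already exchanged),
       merge them and keep the lower half (keep_low != 0) or the
       upper half. work[] must hold 2*m ints. */
    void compare_split(int *mine, const int *theirs, int m,
                       int *work, int keep_low) {
        merge(mine, m, theirs, m, work);
        if (keep_low)
            for (int i = 0; i < m; i++) mine[i] = work[i];
        else
            for (int i = 0; i < m; i++) mine[i] = work[m + i];
    }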

  17. Quicksort • A very popular sequential sorting algorithm that performs well, with an average sequential time complexity of O(n log n). • The list is first divided into two sublists, arranged so that all numbers in one sublist are smaller than all numbers in the other. • This is achieved by first selecting one number, called a pivot, against which every other number is compared. If a number is less than the pivot, it is placed in one sublist; otherwise, it is placed in the other sublist. • The pivot could be any number in the list, but often the first number in the list is chosen. The pivot itself could be placed in one sublist, or it could be separated and placed in its final position.
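A minimal C sketch of the sequential algorithm as just described, choosing the first number as the pivot and placing the pivot in its final position (names are illustrative):

    /* Quicksort with the first element as pivot. The pivot is
       separated and placed in its final position on each call. */
    void quicksort(int *a, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = a[lo], i = lo;
        for (int j = lo + 1; j <= hi; j++)
            if (a[j] < pivot) {            /* grow the "smaller" sublist */
                i++;
                int t = a[i]; a[i] = a[j]; a[j] = t;
            }
        a[lo] = a[i]; a[i] = pivot;        /* pivot to final position */
        quicksort(a, lo, i - 1);           /* sort the two sublists */
        quicksort(a, i + 1, hi);
    }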

  18. Quicksort

  19. Quicksort • Example of the quicksort algorithm sorting a sequence of size n = 8.

  20. Parallel Quicksort • Let's start with recursive decomposition: the list is partitioned by a single process, and then each of the subproblems is handled by a different processor. • The time for this algorithm is lower-bounded by Ω(n)! • It is not cost-optimal, as the process-time product is Ω(n^2). • Can we parallelize the partitioning step? In particular, if we can use n processors to partition a list of length n around a pivot in O(1) time, we have a winner. • This is difficult to do on real machines, though.

  21. Parallel Quicksort • Using tree allocation of processes

  22. Parallel Quicksort • With the pivot value withheld within the processes.

  23. Analysis • A fundamental problem with all tree constructions: the initial division is done by a single processor, which seriously limits speed. • The tree in quicksort will not, in general, be perfectly balanced. • Pivot selection is very important to make quicksort operate fast.

  24. Parallelizing Quicksort: PRAM Formulation • We assume a CRCW (concurrent read, concurrent write) PRAM, with concurrent writes resulting in an arbitrary write succeeding. • The formulation works by creating pools of processors. Initially, every processor is assigned to the same pool and holds one element. • Each processor attempts to write its element to a common location (for the pool). • Each processor then reads the location back. If the value read back is greater than the processor's own value, the processor assigns itself to the 'left' pool; otherwise, it assigns itself to the 'right' pool. • Each pool performs this operation recursively. • Note that the algorithm generates a tree of pivots. The depth of the tree gives the expected parallel runtime; its average value is O(log n).
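A small sequential C simulation of one such CRCW partitioning round; the arbitrary concurrent-write winner is modeled as processor 0, and the values are illustrative, not from the slides:

    /* Simulate one CRCW round for a single pool of 8 "processors",
       one value each. Every processor writes its value to the pool's
       common location; an arbitrary write wins (here: processor 0).
       Each processor reads the location back and picks its next pool. */
    #include <stdio.h>

    int main(void) {
        int val[8] = {33, 21, 13, 54, 82, 33, 40, 72};
        int common = val[0];              /* arbitrary write winner */
        for (int p = 0; p < 8; p++) {
            int pivot = common;           /* concurrent read-back */
            const char *next = (val[p] < pivot) ? "left" : "right";
            printf("proc %d (value %d) -> %s pool\n", p, val[p], next);
        }
        return 0;
    }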

  25. Parallelizing Quicksort: PRAM Formulation • A binary tree generated by the execution of the quicksort algorithm. Each level of the tree represents a different array-partitioning iteration. If pivot selection is optimal, then the height of the tree is Θ(log n), which is also the number of iterations.

  26. Parallelizing Quicksort: PRAM Formulation The execution of the PRAM algorithm on the array shown in (a).

  27. End Thank you!
