
CS342 Data Structures



  1. CS342 Data Structures Complexity Analysis

  2. Complexity Analysis: topics • Introduction • Simple examples • The Big-O notation • Properties of Big-O notation • Ω and Θ notations • More examples of complexity analysis • Asymptotic complexity • The best, average, and worst cases • Amortized complexity

  3. Definition of a good program • Correctness: does it work? • Ease of use: is it user friendly? • Efficiency: does it run fast? does it consume much memory? • Others • easy to maintain • …

  4. Why it is important to study the complexity of algorithms • It addresses the efficiency (time and space) problem. Complexity analysis allows one to estimate • the time needed to execute an algorithm (time complexity) • the amount of memory needed to execute the algorithm (space complexity).

  5. Space complexity • Instruction space: space needed to store the compiled code. • Data space: space needed to store all constants, variables, dynamically allocated memory, and the environment stack (used to save information needed to resume execution of partially completed functions). Although memory is inexpensive today, space can still be important for portable devices (where size and power consumption are important).

  6. Time or computational complexity • Simply measuring the total execution time of an algorithm is not an objective metric, because the execution time depends on many factors such as the processor architecture, the efficiency of the compiler, and the semiconductor technology used to make the processor. • A more meaningful approach is to identify one or more key or primary operations and determine the number of times these operations are performed. Key operations are those that take much more time to execute than other types of operations. The time needed to execute an algorithm depends on the amount of data, n: as n increases, so does the execution time. The big-O notation provides an (asymptotic) estimate (upper bound) of the execution time when n becomes large.

  7. Example of a key-instruction count: locating the minimum element in an array of n elements • Assuming the comparison operation is the key operation (i.e., the assignment operation is not counted here): T(n) = n – 1
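A minimal sketch of this example (function and variable names are illustrative, not from the slides); the comparison is executed exactly once per remaining element:

    #include <cstddef>

    // Returns the index of the minimum element of a[0..n-1].
    // Precondition: n >= 1. The key operation, a[i] < a[minIdx],
    // runs exactly once in each of the n - 1 loop iterations,
    // so T(n) = n - 1.
    std::size_t findMinIndex(const int a[], std::size_t n) {
        std::size_t minIdx = 0;
        for (std::size_t i = 1; i < n; ++i)   // n - 1 iterations
            if (a[i] < a[minIdx])             // 1 comparison each
                minIdx = i;
        return minIdx;
    }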

  8. A more involved example: the selection sort of an n-element array a into ascending order • Again, assume that comparison is the key operation or instruction. • The algorithm requires (n – 1) passes; each pass locates the minimum value in the unsorted sub-list and swaps it with a[i], where i = 0 for pass 1, 1 for pass 2, and so on. In pass 1, there are (n – 1) comparison operations. The number of comparisons decreases by 1 in each following pass, which leads to: • T(n) = (n – 1) + (n – 2) + … + 3 + 2 + 1 = n(n – 1) / 2 = n² / 2 – n / 2
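A sketch of selection sort matching this count (identifiers are illustrative):

    #include <cstddef>
    #include <utility>  // std::swap

    // Sorts a[0..n-1] into ascending order. Pass i scans the
    // unsorted sub-list a[i..n-1] for its minimum, costing
    // (n - 1 - i) comparisons, then swaps it into position i.
    // Summing over the n - 1 passes:
    // T(n) = (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons.
    void selectionSort(int a[], std::size_t n) {
        for (std::size_t i = 0; i + 1 < n; ++i) {
            std::size_t minIdx = i;
            for (std::size_t j = i + 1; j < n; ++j)
                if (a[j] < a[minIdx])          // key operation
                    minIdx = j;
            std::swap(a[i], a[minIdx]);
        }
    }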

  9. The best, worst, and average case • Using linear search as an example: searching for an element in an n-element array. • best case: T(n) = 1 • worst case: T(n) = n • average case: T(n) = n / 2 • The complexity analysis should take into account the best, worst, and average cases, although the worst case is the most important.
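A sketch of the linear search behind these counts (names are illustrative):

    #include <cstddef>

    // Linear search: returns the index of key in a[0..n-1], or -1.
    // Best case: key is a[0], 1 comparison. Worst case: key is
    // last or absent, n comparisons. Average case (key present,
    // all positions equally likely): (1 + 2 + ... + n) / n
    // = (n + 1) / 2, i.e., about n / 2.
    int linearSearch(const int a[], std::size_t n, int key) {
        for (std::size_t i = 0; i < n; ++i)
            if (a[i] == key)       // key operation
                return static_cast<int>(i);
        return -1;
    }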

  10. The big-O notation • The big-O estimate can be obtained from T(n) by taking its dominant term; the dominant term is the term that increases fastest as the value of n increases. • Examples • T(n) = n – 1: n is the dominant term (why?), therefore T(n) is O(n), which reads "big-O of n". • T(n) = n² / 2 – n / 2: n² / 2 is the dominant term (why?); after dropping the constant factor 1/2, we obtain O(n²). We say that selection sort has a running time of O(n²).

  11. Orders of magnitude of big-O functions • In practice, a small set of big-O functions covers the running time of most algorithms. • Constant time: O(1), e.g., access the first element of an array, append an element to the end of a list, etc. • Linear: O(n), e.g., linear search • Quadratic: O(n²), e.g., selection sort • Cubic: O(n³), e.g., matrix multiplication; slow • Logarithmic: O(log₂n), e.g., binary search (sketched below); very fast • O(n log₂n), e.g., quicksort • Exponential: O(2ⁿ); very slow
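To make the logarithmic class concrete, a sketch of binary search on a sorted array (names are illustrative); each comparison halves the remaining range:

    #include <cstddef>

    // Binary search in a sorted array a[0..n-1]: each iteration
    // halves the search interval, so the loop runs at most
    // log2(n) + 1 times, giving O(log2 n) comparisons.
    int binarySearch(const int a[], std::size_t n, int key) {
        std::size_t lo = 0, hi = n;           // search a[lo..hi-1]
        while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return static_cast<int>(mid);
            if (a[mid] < key)  lo = mid + 1;  // discard left half
            else               hi = mid;      // discard right half
        }
        return -1;                            // not found
    }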

  12. Contributions of the individual terms of f(n) = n² + 100n + log₁₀n + 1000, and how they vary with n
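The table that accompanied this slide did not survive the transcript, but its numbers are easy to regenerate. A small sketch (illustrative, not the original table) showing that n² eventually dwarfs the other terms:

    #include <cmath>
    #include <cstdio>

    // Prints the contribution of each term of
    // f(n) = n^2 + 100n + log10(n) + 1000 for growing n.
    int main() {
        std::printf("%10s %14s %12s %10s %6s\n",
                    "n", "n^2", "100n", "log10(n)", "1000");
        for (double n = 10; n <= 1e6; n *= 10)
            std::printf("%10.0f %14.0f %12.0f %10.1f %6d\n",
                        n, n * n, 100 * n, std::log10(n), 1000);
        return 0;
    }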

  13. Formal definition of big-O notation • Definition: f(n) is O(g(n)) if there exist two positive numbers c and N such that f(n) ≤ cg(n) for all n ≥ N. The definition reads "f(n) is big-O of g(n)": g is an upper bound on the value of f, meaning f grows at most as fast as g as n increases.

  14. Big-O notation: an example • Let f(n) = 2n² + 3n + 1. • Based on the definition, f(n) = O(n²), with g(n) = n², if and only if we can find c and N such that 2n² + 3n + 1 ≤ cn² for all n ≥ N. Since n is positive, we can divide by n² and rewrite the inequality as 2 + 3 / n + 1 / n² ≤ c. There is an infinite set of solutions to this inequality. For example, it is satisfied by c ≥ 6 with N = 1, or by c ≥ 3¾ with N = 2, etc.; i.e., the pair c = 6, N = 1 is a solution, and so is c = 3¾, N = 2. • To find the best (c, N) pair, we determine the smallest value of n for which the dominant term of f(n) (i.e., 2n²) becomes larger than all the other terms. Setting 2n² ≥ 3n and 2n² > 1, both inequalities clearly hold when n = 2. Substituting n = 2 into 2 + 3 / n + 1 / n² ≤ c, we obtain c ≥ 3¾.
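A quick numerical check of the pair (c, N) = (3¾, 2) found above (a sketch; the loop bound 20 is arbitrary):

    #include <cstdio>

    // Verifies that 2n^2 + 3n + 1 <= 3.75 * n^2 for n = 2..20,
    // confirming the pair c = 3.75, N = 2. Equality holds at n = 2.
    int main() {
        for (int n = 2; n <= 20; ++n) {
            double f = 2.0 * n * n + 3 * n + 1;
            double bound = 3.75 * n * n;
            std::printf("n=%2d  f(n)=%6.0f  c*g(n)=%7.2f  %s\n",
                        n, f, bound, f <= bound ? "ok" : "FAIL");
        }
        return 0;
    }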

  15. Solutions to the inequality 2n² + 3n + 1 ≤ cn²

  16. Comparison of the functions for different values of c and N

  17. Classes of algorithms and their execution times (assuming processor speed = 1 MIPS)

  18. Typical functions g(n) used in big-O estimates

  19. Properties of big-O notation • Transitivity: if f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)). • If f(n) is O(h(n)) and g(n) is O(h(n)), then f(n) + g(n) is O(h(n)). • The function a·n^k is O(n^k). • The function n^k is O(n^(k+j)) for any positive j. • If f(n) = cg(n), then f(n) is O(g(n)). • The function log_a n is O(log_b n) for any positive numbers a and b ≠ 1. The proofs are rather straightforward and appear on p. 57 of the textbook.
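The last property follows from the change-of-base identity; a one-line derivation for bases a, b > 1:

    \[
      \log_a n \;=\; \frac{\log_b n}{\log_b a} \;=\; c \cdot \log_b n,
      \qquad c = \frac{1}{\log_b a} > 0,
    \]

so log_a n is O(log_b n) by the property f(n) = cg(n) ⇒ f(n) is O(g(n)).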

  20. The Ω notation • Definition: the function f(n) is Ω(g(n)) if there exist positive numbers c and N such that f(n) ≥ cg(n) for all n ≥ N. It reads "f is big-Omega of g" and implies that cg(n) is a lower bound on f(n). • The above definition implies: f(n) is Ω(g(n)) if and only if g(n) is O(f(n)). • Similar to the big-O situation, there can be an infinite number of (c, N) solutions. For practical purposes, only the tightest Ω is of interest, i.e., the one representing the largest lower bound.

  21. The Θ (theta) notation • Definition: f(n) is Θ(g(n)) if there exist positive numbers c₁, c₂, and N such that c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ N. It reads "f has the order of magnitude of g": f is on the order of g, i.e., f and g grow at the same rate in the long run (as n continues to increase).
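As a worked instance, the selection-sort count T(n) = n² / 2 – n / 2 from slide 8 is Θ(n²), since

    \[
      \tfrac{1}{4} n^2 \;\le\; \tfrac{n^2}{2} - \tfrac{n}{2} \;\le\; \tfrac{1}{2} n^2
      \qquad \text{for all } n \ge 2,
    \]

so c₁ = 1/4, c₂ = 1/2, and N = 2 satisfy the definition.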

  22. Some observations • Comparing two algorithms, O(n) and O(n²): which one is faster? • But what about f(n) = 10n² + 10⁸n? Is it O(n) or O(n²)? It is O(n²), yet for smaller values of n (e.g., n ≤ 10⁶) the second term dominates. So it is sometimes necessary to consider constants which are very large.
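The crossover point is where the two terms are equal:

    \[
      10 n^2 = 10^8 n \;\Longrightarrow\; n = 10^7,
    \]

so the linear term 10⁸n dominates for all n below 10⁷, even though f(n) is O(n²).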

  23. Additional examples

    for (i = sum = 0; i < n; i++)
        sum += a[i];
    // 2 assignments before the loop and 2 per iteration (i++ and
    // sum += a[i]): 2 + 2n assignments (key ops) in total; O(n).

    for (i = 0; i < n; i++) {
        for (j = 1, sum = a[0]; j <= i; j++)
            sum += a[j];
        cout << "Sum for sub-array 0 thru " << i
             << " is " << sum << endl;
    }
    // 1 + 3n + 2(1 + 2 + … + (n – 1)) = 1 + 3n + n(n – 1)
    // = O(n) + O(n²) = O(n²) assignment operations before the
    // program is completed.

  24. Rigorous analyses of best, average, and worst cases • Best case: the smallest number of steps over all inputs of size n. • Worst case: the maximum number of steps over all inputs of size n. • Average case: the number of steps averaged over all inputs of size n; the analysis may sometimes be quite involved. There are a couple of examples on pages 65 and 66 of the textbook.

  25. The amortized complexity • Takes into consideration the interdependence between operations and their results. It will be discussed in more detail in conjunction with data structures.
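As a preview (a sketch, not the textbook's treatment; class and member names are illustrative): appending to an array that doubles its capacity when full is the classic example. A single append is occasionally O(n), but the expensive copies are rare enough that the cost per append, averaged over the whole sequence of operations, is O(1).

    #include <cstddef>

    // A dynamic array whose push_back doubles capacity when full.
    // One push_back is O(size) when it triggers a copy, but n
    // consecutive push_backs copy at most 1 + 2 + 4 + ... + n < 2n
    // elements in total, so the amortized cost per push_back is O(1).
    struct IntVec {
        int* data = nullptr;
        std::size_t size = 0, cap = 0;

        void push_back(int x) {
            if (size == cap) {                 // grow: rare, O(size)
                std::size_t newCap = cap ? 2 * cap : 1;
                int* bigger = new int[newCap];
                for (std::size_t i = 0; i < size; ++i)
                    bigger[i] = data[i];       // copy old contents
                delete[] data;
                data = bigger;
                cap = newCap;
            }
            data[size++] = x;                  // common case: O(1)
        }
        ~IntVec() { delete[] data; }
    };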

  26. as·ymp·tote (ăsʹĭm-tōt´, -ĭmp-) noun, Mathematics. A line considered a limit to a curve in the sense that the perpendicular distance from a moving point on the curve to the line approaches zero as the point moves an infinite distance from the origin.

  27. am·or·tize (ămʹər-tīz´, ə-môrʹ-) verb, transitive. am·or·tized, am·or·tiz·ing, am·or·tiz·es • 1. To liquidate (a debt, such as a mortgage) by installment payments or payment into a sinking fund. • 2. To write off an expenditure for (office equipment, for example) by prorating it over a certain period.

  28. Computational complexity • The complexity of an algorithm is measured in terms of space and time. • Space: the main memory (RAM) needed to run the program (the implementation of the algorithm); less important today because of advances in VLSI technology. • Time: the total execution time of an algorithm is not meaningful by itself, because it depends on the hardware (architecture or instruction set), on the language in which the algorithm is coded, and on the efficiency of the compiler (optimizer). A more meaningful metric is the number of key operations as a function of the input size n, which can be the number of elements in an array or a data file.

  29. The asymptotic behavior of math functions • as·ymp·tote (ăsʹĭm-tōt´, -ĭmp-) noun, Mathematics. A line considered a limit to a curve in the sense that the perpendicular distance from a moving point on the curve to the line approaches zero as the point moves an infinite distance from the origin. • Examples: tan(θ) near θ = π/2, e⁻ˣ as x → ∞, etc.

  30. The asymptotic complexity: an example • A simple loop that sums the n elements of array a:

    for (int i = sum = 0; i < n; i++)
        sum += a[i];
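Counting assignments as on slide 23, this loop performs T(n) = 2 + 2n of them; the constants drop out asymptotically:

    \[
      \lim_{n \to \infty} \frac{2 + 2n}{n} = 2,
    \]

so T(n) is O(n).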
