
Analyzing Algorithms and Problems

Lecture 5. Chapter 2. Analyzing Algorithms and Problems. Prof. Sin-Min Lee Department of Computer Science. Euclid's Algorithm.


Presentation Transcript


  1. Lecture 5 Chapter 2 Analyzing Algorithms and Problems Prof. Sin-Min Lee Department of Computer Science

  2. Euclid's Algorithm • In Book VII of his Elements (written about 300 BC), Euclid gives an algorithm to calculate the greatest common divisor (highest common factor) of two numbers x and y, where x < y. It can be stated as: • 1. Divide y by x with remainder r. • 2. Replace y by x, and x by r. • 3. Repeat step 1 until r is zero. • When this algorithm terminates, y is the greatest common divisor.
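The three steps can be sketched directly in code (a minimal sketch; the variable names x, y, r follow the slide):

```python
def gcd(x, y):
    """Greatest common divisor by Euclid's algorithm."""
    while x != 0:
        r = y % x        # step 1: divide y by x with remainder r
        y, x = x, r      # step 2: replace y by x, and x by r
    return y             # when r reaches zero, y holds the GCD
```

Note that if the arguments arrive with x > y, the first iteration simply swaps them, so the x < y precondition is not strictly required here.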

  3. Euclid's Algorithm • Euclid's Algorithm determines the greatest common divisor of two natural numbers a, b: that is, the largest natural number d such that d | a and d | b. • Example: GCD(33, 21) = 3 • 33 = 1*21 + 12 • 21 = 1*12 + 9 • 12 = 1*9 + 3 • 9 = 3*3

  4. Basic Algorithm Analysis • Questions • How does one calculate the running time of an algorithm? • How can we compare two different algorithms? • How do we know if an algorithm is 'optimal'?

  5. Time vs. Input Graph

  6. How do you know if you have the best algorithm? • A lower bound can be established because a certain minimum number of operations must be performed to solve the problem. Thus, any algorithm that performs only that number of operations is optimal, even if no algorithm using that number of steps has yet been found.

  7. Hanoi Tower • Long ago, there were three towers in Hanoi. In the beginning, 64 disks of different sizes were stacked on tower A, with smaller disks always resting on larger ones. A monk had to move all the disks to tower C, following two rules: • 1. Only one disk may be moved at a time, from one tower to any other; • 2. At all times, smaller disks must rest on larger disks; a larger disk may never be placed on a smaller one.
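The monk's procedure is naturally recursive (a sketch under the standard formulation; the function name and tower labels are illustrative): to move n disks, first clear the n - 1 smaller disks onto the spare tower, move the largest disk, then re-stack the smaller disks on top of it. Moving n disks this way takes 2^n - 1 moves, which for the legend's 64 disks is 2^64 - 1.

```python
def hanoi(n, source, target, spare, moves):
    """Move n disks from source to target, recording each move."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))              # move the largest disk
    hanoi(n - 1, spare, target, source, moves)  # re-stack the smaller disks

moves = []
hanoi(3, "A", "C", "B", moves)   # solves the 3-disk puzzle in 2**3 - 1 = 7 moves
```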

  8. The Tower of Hanoi (sometimes referred to as the Tower of Brahma or the End of the World Puzzle) was invented by the French mathematician, Edouard Lucas, in 1883. He was inspired by a legend that tells of a Hindu temple where the pyramid puzzle might have been used for the mental discipline of young priests. Legend says that at the beginning of time the priests in the temple were given a stack of 64 gold disks, each one a little smaller than the one beneath it. Their assignment was to transfer the 64 disks from one of the three poles to another, with one important proviso—a large disk could never be placed on top of a smaller one. The priests worked very efficiently, day and night. When they finished their work, the myth said, the temple would crumble into dust and the world would vanish.

  9. Count the number of basic operations performed by the algorithm on the worst-case input. • A basic operation could be: • An assignment • A comparison between two variables • An arithmetic operation between two variables • The worst-case input is the input for which the most basic operations are performed.

  10. Algorithm Efficiency • Determining the efficiency of an Algorithm is done by counting the number of basic operations that are carried out when the algorithm is executed • A basic operation is an operation carried out by the algorithm that is language independent, such as addition, subtraction, comparison of two values, etc…
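As an illustration of worst-case counting (linear search is a hypothetical example chosen here, not one from the slides), the worst-case input for searching a list is an absent target, which forces one comparison per element:

```python
def linear_search(items, target):
    """Return (index, comparison count), counting comparisons as the basic operation."""
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1           # one basic operation per element examined
        if x == target:
            return i, comparisons  # found early: fewer than n comparisons
    return -1, comparisons         # worst case: target absent, all n comparisons
```

On a list of n elements the comparison count ranges from 1 (best case) to n (worst case), and it is the worst case that determines the bound.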

  11. Determination of Efficiency • How can we determine the number of basic operations for every possible input? Impossible! • Instead, we devise a way to compare algorithms to each other in relative terms, in order to get a “feel” for their efficiency. • We do this by gauging the algorithm’s asymptotic growth rate.

  12. Asymptotic Notation for Big O • Big O (omicron) can be defined as: f ∈ O(g) if lim n→∞ f(n)/g(n) = c • Note: the following two conditions must be met: • The limit must exist • The limit c < ∞

  13. An Example of solving Big O • Example: • Thus c = 35, and n₀ = 1

  14. Big Omega (Ω) • Definition: f ∈ Ω(g) if lim n→∞ f(n)/g(n) = c, where: • The limit must be greater than 0 • The limit may be ∞

  15. Definition:

  16. Big Theta (Θ) • Theta can be defined as: f ∈ Θ(g) if lim n→∞ f(n)/g(n) = c, where: • The limit c is greater than 0 • The limit c is smaller than ∞

  17. Comparison of Ω, Θ, and Ο • Ο(g): functions that grow no faster than g • Θ(g): functions that grow at the same rate as g • Ω(g): functions that grow at least as fast as g • Together, Big O and Ω establish the lower and upper boundaries of asymptotic order. • Θ(g) = Ο(g) ∩ Ω(g)
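The identity Θ(g) = Ο(g) ∩ Ω(g) can be illustrated numerically (the functions f and g below are chosen purely for illustration): when the ratio f(n)/g(n) settles toward a constant c with 0 < c < ∞, f is bounded both above (so f ∈ O(g)) and below (so f ∈ Ω(g)), hence f ∈ Θ(g).

```python
def ratio(f, g, n):
    """The quantity whose limit the definitions refer to."""
    return f(n) / g(n)

f = lambda n: 3 * n**2 + 5 * n   # illustrative function
g = lambda n: n**2               # comparison function

# the ratio approaches c = 3 as n grows: 0 < c < infinity, so f is Theta(g)
for n in (10, 1_000, 100_000):
    print(n, ratio(f, g, n))
```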

  18. Notes about Asymptotic Notation • Big O notation should be of the smallest order possible • Lower-order and constant terms can be dropped, as long as a higher-order term with a nonzero coefficient remains

  19. The Importance of Asymptotic Order • An algorithm that runs in O(n) time is generally better than an algorithm that runs in O(n²) time.

  20. Clarification • When talking about Big O, Ω, Θ, and little o, ω, we do not talk about specific functions (that is, there is no function that is exclusively Big O of something else). • When we say that function f is big O of function g, we mean that f is a member of the set of functions which satisfy the given criteria for big O. • The proper notation is to say that f is an element of the set of functions identified as O(g), or more explicitly: • f ∈ O(g) • Remember, f ∈ O(g)

  21. Big O Notation • Big O (omicron) notation is used to define a set of functions which grow no faster than a given function • Let g be a function from the nonnegative integers to the positive real numbers

  22. Definition of big O • O(g) is the set of all functions f from the nonnegative integers into the positive real numbers such that the limit of f over g, as n approaches ∞, is equal to a constant c such that c < ∞

  23. Example of big O (1) • You have three functions: • g: y = 0.5x • f1: y = 0.2x • f2: y = 2x • You want to check whether f1 and f2 are O of g

  24. Example of big O (2) • Both limit values are less than ∞ • Both f1 and f2 are elements of O(g)

  25. Example of big O (3) • In both cases, as the input value increased, the limit of the ratio between f1, f2 and g remained less than ∞ • f1 and f2 ∈ O(g), or: f1 and f2 are O of g • In practice, this means that the functions in question (in our case f1 and f2) grow at a rate that is no faster than g.
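These ratios can be checked directly (same three functions as in the example):

```python
g  = lambda x: 0.5 * x
f1 = lambda x: 0.2 * x
f2 = lambda x: 2.0 * x

# both ratios are constant for all x > 0, hence finite in the limit
for x in (1, 100, 10_000):
    print(x, f1(x) / g(x), f2(x) / g(x))  # 0.4 and 4.0 at every x
```

Since both ratios stay at finite constants (0.4 and 4.0) no matter how large x grows, both f1 and f2 qualify as O(g).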

  26. When Big O shows a difference • Graph: • Pink: g = 1,000,000n • Yellow: f = eⁿ

  27. Summing up Big O • In order for f ∈ O(g), the following two conditions must be met: • The limit must exist • The limit c < ∞

  28. The Omega set • The Omega set (Ω) is the “big brother” of Big O • Ω(g) is the set of functions f such that: lim n→∞ f(n)/g(n) = c > 0
