
Dynamic programming



  1. Dynamic programming

  2. Dynamic programming
  • Dynamic Programming is an algorithm design technique for optimization problems: often minimizing or maximizing.
  • Like divide and conquer, DP solves problems by combining solutions to subproblems.
  • Unlike divide and conquer, subproblems are not independent.
  • Subproblems may share subsubproblems; however, the solution to one subproblem may not affect the solutions to other subproblems of the same problem. (More on this later.)

  3. What is Dynamic Programming?
  • Dynamic programming solves optimization problems by combining solutions to subproblems.
  • "Programming" refers to a tabular method with a series of choices, not "coding".

  4. What is Dynamic Programming? …contd
  • A set of choices must be made to arrive at an optimal solution.
  • As choices are made, subproblems of the same form arise frequently.
  • The key is to store the solutions of subproblems to be reused in the future.

  5. What is Dynamic Programming? …contd
  • Recall the divide-and-conquer approach:
    • Partition the problem into independent subproblems.
    • Solve the subproblems recursively.
    • Combine solutions of subproblems.
  • This contrasts with the dynamic programming approach.

  6. What is Dynamic Programming? …contd
  • Dynamic programming is applicable when subproblems are not independent, i.e., subproblems share subsubproblems.
  • Solve every subsubproblem only once and store the answer for use when it reappears.
  • A divide-and-conquer approach will do more work than necessary.

  7. A Sequence of 4 Steps
  A dynamic programming approach consists of a sequence of 4 steps:
  1. Characterize the structure of an optimal solution.
  2. Recursively define the value of an optimal solution.
  3. Compute the value of an optimal solution in a bottom-up fashion.
  4. Construct an optimal solution from computed information.

  8. Properties of a problem that can be solved with dynamic programming
  • Simple subproblems
    We should be able to break the original problem into smaller subproblems that have the same structure.
  • Optimal substructure of the problems
    The solution to the problem must be a composition of subproblem solutions.
  • Subproblem overlap
    Optimal solutions to unrelated subproblems can contain subproblems in common.

  9. Example – Fibonacci Numbers using recursion
  Function f(n)
    if n = 0
      output 0
    else if n = 1
      output 1
    else
      output f(n-1) + f(n-2)

  10. Example – Fibonacci Numbers using recursion
  • Run time: T(n) = T(n-1) + T(n-2) + Θ(1).
  • This recurrence grows exponentially: the run time increases by a constant factor with each increment of n, and is O(2^n).
  • This is a bad algorithm, as there are numerous repeated calculations of the same values.

  11. Example – Fibonacci Numbers using dynamic programming
  Recursion tree for f(5) (note the repeated subproblems):
                 f(5)
               /      \
            f(4)      f(3)
           /    \    /    \
        f(3)  f(2) f(2)  f(1)
        /   \
     f(2)  f(1)

  12. Example – Fibonacci Numbers using dynamic programming
  • Dynamic programming calculates from bottom to top.
  • Values are stored for later use.
  • This reduces repetitive calculation.
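
  The bottom-up idea on this slide can be sketched in Python as follows; this is a minimal illustration (the function name fib and the two-variable rolling window are our own choices, not from the slides):

    def fib(n):
        # Bottom-up: compute f(0), f(1), ..., f(n) once each, reusing stored values.
        if n < 2:
            return n
        prev, curr = 0, 1                     # f(0), f(1)
        for _ in range(2, n + 1):
            prev, curr = curr, prev + curr    # advance to f(i-1), f(i)
        return curr

    print(fib(5))   # 5
    print(fib(10))  # 55

  Only the last two values are kept, so the repeated calls in the recursion tree above collapse into a single O(n) pass.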

  13. Longest Common Subsequence
  Problem: Given 2 sequences, X = x1, ..., xm and Y = y1, ..., yn, find a common subsequence whose length is maximum.

  14. Other sequence questions
  • Edit distance: Given 2 sequences, X = x1, ..., xm and Y = y1, ..., yn, what is the minimum number of deletions, insertions, and changes that you must do to change one to another?
  • Protein sequence alignment: Given a score matrix on amino acid pairs, s(a, b) for a, b ∈ A, and 2 amino acid sequences, X = x1, ..., xm ∈ A^m and Y = y1, ..., yn ∈ A^n, find the alignment with lowest score…

  15. More problems
  • Optimal BST: Given a sequence K = k1 < k2 < ··· < kn of n sorted keys, with a search probability pi for each key ki, build a binary search tree (BST) with minimum expected search cost.
  • Matrix chain multiplication: Given a sequence of matrices A1 A2 … An, with Ai of dimension mi × ni, insert parentheses to minimize the total number of scalar multiplications.
  • Minimum convex decomposition of a polygon, hydrogen placement in protein structures, …

  16. Matrix-chain Multiplication

  17. Matrix-chain Multiplication
  • Suppose we have a sequence or chain A1, A2, …, An of n matrices to be multiplied.
  • That is, we want to compute the product A1A2…An.
  • There are many possible ways (parenthesizations) to compute the product.

  18. Matrix-chain Multiplication …contd
  • Example: consider the chain A1, A2, A3, A4 of 4 matrices.
  • Let us compute the product A1A2A3A4.
  • There are 5 possible ways:
    1. (A1(A2(A3A4)))
    2. (A1((A2A3)A4))
    3. ((A1A2)(A3A4))
    4. ((A1(A2A3))A4)
    5. (((A1A2)A3)A4)

  19. Matrix-chain Multiplication …contd
  • To compute the number of scalar multiplications necessary, we must know:
    • The algorithm to multiply two matrices
    • The matrix dimensions
  • Can you write the algorithm to multiply two matrices?

  20. Algorithm to Multiply 2 Matrices
  Input: Matrices A (p×q) and B (q×r)
  Result: Matrix C (p×r) resulting from the product A·B
  MATRIX-MULTIPLY(A, B)
  1. for i ← 1 to p
  2.   for j ← 1 to r
  3.     C[i, j] ← 0
  4.     for k ← 1 to q
  5.       C[i, j] ← C[i, j] + A[i, k] · B[k, j]
  6. return C
  • The scalar multiplication in line 5 dominates the time to compute C.
  • Number of scalar multiplications = pqr.
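
  For reference, the same algorithm in runnable Python, as a sketch using plain lists of lists (matrix_multiply is our own name); the triple loop makes it easy to see where the pqr count comes from:

    def matrix_multiply(A, B):
        # A is p x q, B is q x r; the result C is p x r.
        p, q, r = len(A), len(B), len(B[0])
        assert len(A[0]) == q, "inner dimensions must agree"
        C = [[0] * r for _ in range(p)]
        for i in range(p):
            for j in range(r):
                for k in range(q):     # p * q * r scalar multiplications in total
                    C[i][j] += A[i][k] * B[k][j]
        return C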

  21. Matrix-chain Multiplication …contd
  • Example: Consider three matrices A (10×100), B (100×5), and C (5×50).
  • There are 2 ways to parenthesize:
  • ((AB)C) = D (10×5) · C (5×50)
    AB: 10·100·5 = 5,000 scalar multiplications
    DC: 10·5·50 = 2,500 scalar multiplications
    Total: 7,500
  • (A(BC)) = A (10×100) · E (100×50)
    BC: 100·5·50 = 25,000 scalar multiplications
    AE: 10·100·50 = 50,000 scalar multiplications
    Total: 75,000

  22. Matrix-chain Multiplication …contd
  • Matrix-chain multiplication problem:
  • Given a chain A1, A2, …, An of n matrices, where for i = 1, 2, …, n, matrix Ai has dimension pi-1 × pi.
  • Parenthesize the product A1A2…An such that the total number of scalar multiplications is minimized.
  • The brute force method of exhaustive search takes time exponential in n.

  23. Dynamic Programming Approach
  • The structure of an optimal solution
  • Let us use the notation Ai..j for the matrix that results from the product Ai Ai+1 … Aj.
  • An optimal parenthesization of the product A1A2…An splits the product between Ak and Ak+1 for some integer k, where 1 ≤ k < n.
  • First compute matrices A1..k and Ak+1..n; then multiply them to get the final matrix A1..n.

  24. Dynamic Programming Approach …contd
  • Key observation: the parenthesizations of the subchains A1A2…Ak and Ak+1Ak+2…An must also be optimal if the parenthesization of the chain A1A2…An is optimal. (Why?)
  • That is, the optimal solution to the problem contains within it the optimal solutions to subproblems.

  25. Dynamic Programming Approach …contd
  • Recursive definition of the value of an optimal solution
  • Let m[i, j] be the minimum number of scalar multiplications necessary to compute Ai..j.
  • The minimum cost to compute A1..n is m[1, n].
  • Suppose the optimal parenthesization of Ai..j splits the product between Ak and Ak+1 for some integer k, where i ≤ k < j.

  26. Dynamic Programming Approach …contd
  • Ai..j = (Ai Ai+1 … Ak) · (Ak+1 Ak+2 … Aj) = Ai..k · Ak+1..j
  • Cost of computing Ai..j = cost of computing Ai..k + cost of computing Ak+1..j + cost of multiplying Ai..k and Ak+1..j.
  • Cost of multiplying Ai..k and Ak+1..j is pi-1 · pk · pj.
  • m[i, j] = m[i, k] + m[k+1, j] + pi-1 · pk · pj, for i ≤ k < j
  • m[i, i] = 0, for i = 1, 2, …, n

  27. Dynamic Programming Approach …contd
  m[i, j] = 0                                                        if i = j
  m[i, j] = min over i ≤ k < j of { m[i, k] + m[k+1, j] + pi-1 · pk · pj }   if i < j
  • But… the optimal parenthesization occurs at one value of k among all possible i ≤ k < j.
  • Check all of these and select the best one.

  28. Dynamic Programming Approach …contd
  • To keep track of how to construct an optimal solution, we use a table s.
  • s[i, j] = value of k at which Ai Ai+1 … Aj is split for an optimal parenthesization.
  • Algorithm: next slide
    • First computes costs for chains of length l = 1.
    • Then for chains of length l = 2, 3, … and so on.
    • Computes the optimal cost bottom-up.

  29. Algorithm to Compute Optimal Cost
  Input: Array p[0…n] containing matrix dimensions, and n
  Result: Minimum-cost table m and split table s
  MATRIX-CHAIN-ORDER(p[ ], n)
    for i ← 1 to n
      m[i, i] ← 0
    for l ← 2 to n
      for i ← 1 to n-l+1
        j ← i+l-1
        m[i, j] ← ∞
        for k ← i to j-1
          q ← m[i, k] + m[k+1, j] + p[i-1] · p[k] · p[j]
          if q < m[i, j]
            m[i, j] ← q
            s[i, j] ← k
    return m and s
  • Takes O(n³) time
  • Requires O(n²) space
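
  The pseudocode translates almost line for line into Python; here is one possible sketch (0-indexed lists carrying the 1-based indices of the slides):

    import math

    def matrix_chain_order(p):
        # p[0..n] holds the dimensions: matrix A_i is p[i-1] x p[i].
        n = len(p) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: min cost for A_i..A_j
        s = [[0] * (n + 1) for _ in range(n + 1)]   # s[i][j]: optimal split point k
        for l in range(2, n + 1):                   # l = chain length
            for i in range(1, n - l + 2):
                j = i + l - 1
                m[i][j] = math.inf
                for k in range(i, j):
                    q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if q < m[i][j]:
                        m[i][j] = q
                        s[i][j] = k
        return m, s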

  30. Constructing Optimal Solution
  • Our algorithm computes the minimum-cost table m and the split table s.
  • The optimal solution can be constructed from the split table s.
  • Each entry s[i, j] = k shows where to split the product Ai Ai+1 … Aj for the minimum cost.
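
  One way to read the split table back out is the standard recursive walk; a sketch (print_optimal_parens is our own name, and it returns a string rather than printing):

    def print_optimal_parens(s, i, j):
        # Reconstruct a parenthesization for A_i..A_j from the split table s.
        if i == j:
            return f"A{i}"
        k = s[i][j]
        return "(" + print_optimal_parens(s, i, k) + print_optimal_parens(s, k + 1, j) + ")"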

  31. Example
  • Show how to multiply this matrix chain optimally.
  • Solution on the board.
  • Minimum cost: 15,125
  • Optimal parenthesization: ((A1(A2A3))((A4A5)A6))
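
  The matrix chain itself was shown on the board, so it is not in this transcript. Assuming the standard textbook instance with dimensions p = (30, 35, 15, 5, 10, 20, 25), which matches the stated answer, the sketches above reproduce it:

    m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])  # assumed dimensions
    print(m[1][6])                        # 15125
    print(print_optimal_parens(s, 1, 6))  # ((A1(A2A3))((A4A5)A6))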

  32. Elements of Dynamic Programming
  For dynamic programming to be applicable, an optimization problem must have:
  • Optimal substructure
    An optimal solution to the problem contains within it optimal solutions to subproblems (but this may also mean a greedy strategy applies).
  • Overlapping subproblems
    The space of subproblems must be small; i.e., the same subproblems are encountered over and over.

  33. Longest Common Subsequence Problem (LCSS)

  34. Longest Common Subsequence Problem (LCSS)
  • What is the difference between a substring and a subsequence?
  • A substring is a contiguous "mini string" within a string.
  • A subsequence is a number of characters in the same order as within the original string, but it may be non-contiguous.
  String A:       a b b a c d e   (positions 1-2-3-4-5-6-7)
  Substring B:    a b b a         (positions 1-2-3-4)
  Subsequence C:  a a c e         (positions 1-4-5-7)

  35. LCSS
  • Finding the longest common subsequence (LCS) is important for applications like spell checking or finding alignments of DNA strings in the field of bioinformatics (e.g. BLAST).
  • How does it work? There are several methods, e.g. the brute force approach, or the dynamic programming method described on the following slides.

  36. Brute Force Method
  • For every subsequence of X, check whether it is a subsequence of Y.
  • Time: Θ(n · 2^m).
    • There are 2^m subsequences of X to check.
    • Each subsequence takes Θ(n) time to check: scan Y for the first letter, then the second, and so on.
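
  For concreteness, a brute-force sketch in Python (exponential, for tiny inputs only; the names are ours):

    from itertools import combinations

    def lcs_brute_force(X, Y):
        # Try every subsequence of X, longest first: 2^m candidates,
        # each checked against Y with a single Theta(n) left-to-right scan.
        def is_subsequence(s, t):
            it = iter(t)
            return all(c in it for c in s)   # each 'in' resumes the scan of t
        for k in range(len(X), -1, -1):
            for cand in combinations(X, k):
                if is_subsequence(cand, Y):
                    return "".join(cand)
        return ""

    print(lcs_brute_force("abba", "aacbe"))  # "ab", an LCS of length 2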

  37. Optimal Substructure Theorem
  Let Z = z1, …, zk be any LCS of X and Y.
  1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
  2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,
  3. or zk ≠ yn and Z is an LCS of X and Yn-1.
  Notation: prefix Xi = x1, ..., xi is the first i letters of X.
  This says what any longest common subsequence must look like; do you believe it?

  38. Optimal Substructure Theorem
  Let Z = z1, …, zk be any LCS of X and Y.
  1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
  2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,
  3. or zk ≠ yn and Z is an LCS of X and Yn-1.
  Proof (case 1: xm = yn): Any common subsequence Z' that does not end in xm = yn can be made longer by adding xm = yn to the end. Therefore,
  • the longest common subsequence (LCS) Z must end in xm = yn;
  • Zk-1 is a common subsequence of Xm-1 and Yn-1; and
  • there is no longer CS of Xm-1 and Yn-1, or Z would not be an LCS.

  39. Optimal Substructure Theorem
  Let Z = z1, …, zk be any LCS of X and Y.
  1. If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
  2. If xm ≠ yn, then either zk ≠ xm and Z is an LCS of Xm-1 and Y,
  3. or zk ≠ yn and Z is an LCS of X and Yn-1.
  Proof (case 2: xm ≠ yn and zk ≠ xm): Since Z does not end in xm,
  • Z is a common subsequence of Xm-1 and Y, and
  • there is no longer CS of Xm-1 and Y, or Z would not be an LCS.

  40. Recursive Solution
  • Define c[i, j] = length of an LCS of Xi and Yj. By the optimal substructure theorem:
    c[i, j] = 0                              if i = 0 or j = 0
    c[i, j] = c[i-1, j-1] + 1                if i, j > 0 and xi = yj
    c[i, j] = max(c[i-1, j], c[i, j-1])      if i, j > 0 and xi ≠ yj
  • We want c[m, n]. This gives a recursive algorithm and solves the problem. But does it solve it well?
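
  A direct top-down sketch of this recurrence in Python, with memoization so each (i, j) pair is solved only once (functools.lru_cache plays the role of the table; lcs_length is our own name):

    from functools import lru_cache

    def lcs_length(X, Y):
        @lru_cache(maxsize=None)
        def c(i, j):
            # Length of an LCS of the prefixes X_i and Y_j.
            if i == 0 or j == 0:
                return 0
            if X[i - 1] == Y[j - 1]:             # x_i = y_j (1-based on the slides)
                return c(i - 1, j - 1) + 1
            return max(c(i - 1, j), c(i, j - 1))
        return c(len(X), len(Y))

    print(lcs_length("springtime", "printing"))  # 6 (e.g. "printi")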

  41. Recursive Solution
  [Recursion tree for c[springtime, printing]: the root branches into c[springtim, printing] and c[springtime, printin]; at the next level c[springti, printing], c[springtim, printin] (twice!), and c[springtime, printi] appear, then c[springt, printing], c[springti, printin], c[springtim, printi], c[springtime, print], and so on; the same subproblems recur over and over.]

  42. Recursive Solution
  • Keep track of c[a, b] in a table of n·m entries, filled either:
    • top-down (memoization), or
    • bottom-up.

  43. Computing the length of an LCS
  LCS-LENGTH(X, Y)
    m ← length[X]
    n ← length[Y]
    for i ← 1 to m
      c[i, 0] ← 0
    for j ← 0 to n
      c[0, j] ← 0
    for i ← 1 to m
      for j ← 1 to n
        if xi = yj
          then c[i, j] ← c[i-1, j-1] + 1
               b[i, j] ← "↖"
          else if c[i-1, j] ≥ c[i, j-1]
            then c[i, j] ← c[i-1, j]
                 b[i, j] ← "↑"
            else c[i, j] ← c[i, j-1]
                 b[i, j] ← "←"
    return c and b
  • b[i, j] points to the table entry whose subproblem we used in solving the LCS of Xi and Yj.
  • c[m, n] contains the length of an LCS of X and Y.
  • Time: O(mn)

  44. Constructing an LCS
  PRINT-LCS(b, X, i, j)
    if i = 0 or j = 0
      then return
    if b[i, j] = "↖"
      then PRINT-LCS(b, X, i-1, j-1)
           print xi
    elseif b[i, j] = "↑"
      then PRINT-LCS(b, X, i-1, j)
    else PRINT-LCS(b, X, i, j-1)
  • Initial call is PRINT-LCS(b, X, m, n).
  • When b[i, j] = "↖", we have extended the LCS by one character. So the LCS consists of the entries with "↖" in them.
  • Time: O(m+n)
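
  Both slides together in runnable Python: fill the c table bottom-up, then retrace it to recover an LCS. This sketch recomputes the direction during the walk instead of storing a separate b table (a common variant; the behavior is the same):

    def lcs(X, Y):
        m, n = len(X), len(Y)
        c = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                if X[i - 1] == Y[j - 1]:
                    c[i][j] = c[i - 1][j - 1] + 1            # the "↖" case
                else:
                    c[i][j] = max(c[i - 1][j], c[i][j - 1])  # "↑" or "←"
        out, i, j = [], m, n
        while i > 0 and j > 0:                # retrace the choices from c[m][n]
            if X[i - 1] == Y[j - 1]:
                out.append(X[i - 1]); i -= 1; j -= 1
            elif c[i - 1][j] >= c[i][j - 1]:
                i -= 1
            else:
                j -= 1
        return "".join(reversed(out))

    print(lcs("springtime", "printing"))  # an LCS of length 6, such as "printi"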

  45. Optimal Binary Search Trees
  • Problem: Given a sequence K = k1 < k2 < ··· < kn of n sorted keys, with a search probability pi for each key ki.
  • Want to build a binary search tree (BST) with minimum expected search cost.
  • Actual cost = # of items examined.
  • For key ki, cost = depthT(ki) + 1, where depthT(ki) = depth of ki in BST T.

  46. Expected Search Cost
  E[search cost in T] = Σ i=1..n (depthT(ki) + 1) · pi
                      = 1 + Σ i=1..n depthT(ki) · pi        (15.16)
  since the sum of the probabilities is 1.

  47. Example
  • Consider 5 keys with these search probabilities: p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.
  [BST: k2 at the root, with children k1 and k4; k4 has children k3 and k5.]
  i   depthT(ki)   depthT(ki)·pi
  1   1            0.25
  2   0            0
  3   2            0.1
  4   1            0.2
  5   2            0.6
                   Σ = 1.15
  Therefore, E[search cost] = 1 + 1.15 = 2.15.

  48. Example
  • p1 = 0.25, p2 = 0.2, p3 = 0.05, p4 = 0.2, p5 = 0.3.
  [BST: k2 at the root, with children k1 and k5; k5 has left child k4, which has left child k3.]
  i   depthT(ki)   depthT(ki)·pi
  1   1            0.25
  2   0            0
  3   3            0.15
  4   2            0.4
  5   1            0.3
                   Σ = 1.10
  Therefore, E[search cost] = 1 + 1.10 = 2.10.
  This tree turns out to be optimal for this set of keys.
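
  A quick numerical check of both examples in Python (the depth lists encode the two trees above; expected_search_cost is our own helper):

    def expected_search_cost(depths, probs):
        # E[cost] = sum of (depth(k_i) + 1) * p_i  =  1 + sum of depth(k_i) * p_i
        return sum((d + 1) * p for d, p in zip(depths, probs))

    p = [0.25, 0.2, 0.05, 0.2, 0.3]
    print(expected_search_cost([1, 0, 2, 1, 2], p))  # ~2.15 (first tree)
    print(expected_search_cost([1, 0, 3, 2, 1], p))  # ~2.10 (second tree, optimal)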

  49. Example
  • Observations:
    • An optimal BST may not have smallest height.
    • An optimal BST may not have the highest-probability key at the root.
  • Build by exhaustive checking?
    • Construct each n-node BST.
    • For each, assign keys and compute the expected search cost.
    • But there are Ω(4^n / n^(3/2)) different BSTs with n nodes.

  50. Optimal Substructure
  • Any subtree of a BST contains keys in a contiguous range ki, ..., kj for some 1 ≤ i ≤ j ≤ n.
  • If T is an optimal BST and T contains a subtree T' with keys ki, ..., kj, then T' must be an optimal BST for keys ki, ..., kj.
  • Proof: Cut and paste.
