  1. Chapter 24. Single-Source Shortest Paths

  2. Shortest Paths
  • How to find the shortest route between two points on a map.
  • Input:
  • Directed graph G = (V, E)
  • Weight function w : E → R
  • Weight of path p = (v0, v1, . . . , vk): w(p) = w(v0, v1) + w(v1, v2) + . . . + w(vk-1, vk), the sum of edge weights on path p.
  • Shortest-path weight u to v: δ(u, v) = min { w(p) : p is a path from u to v } if there exists a path from u to v, ∞ otherwise.
  • Shortest path u to v is any path p such that w(p) = δ(u, v).
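
  To make these definitions concrete, here is a minimal Python sketch (not from the original slides): a weighted digraph stored as an adjacency dict of dicts, plus a helper that computes the path weight w(p). The graph, its weights, and the name path_weight are invented for illustration.

    # A small weighted digraph as an adjacency dict: graph[u][v] = w(u, v).
    # Vertices and weights below are made up for illustration.
    graph = {
        's': {'t': 6, 'y': 7},
        't': {'x': 5, 'y': 8, 'z': -4},
        'x': {'t': -2},
        'y': {'x': -3, 'z': 9},
        'z': {'s': 2, 'x': 7},
    }

    def path_weight(graph, path):
        """w(p): the sum of the edge weights w(v[i-1], v[i]) along path p."""
        return sum(graph[u][v] for u, v in zip(path, path[1:]))

    print(path_weight(graph, ['s', 'y', 'x', 't']))  # 7 + (-3) + (-2) = 2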

  3. Example: shortest paths from s
  • The shortest path might not be unique.
  • When we look at shortest paths from one vertex to all other vertices, the shortest paths are organized as a tree.
  • Can think of weights as representing any measure that accumulates linearly along a path and that we want to minimize.
  • Examples: time, cost, penalties, loss.
  • Generalization of breadth-first search to weighted graphs.

  4. .. continued
  • Variants
  • Single-source: Find shortest paths from a given source vertex s ∈ V to every vertex v ∈ V.
  • Single-destination: Find shortest paths to a given destination vertex.
  • Single-pair: Find shortest path from u to v. No way known that's better in the worst case than solving single-source.
  • All-pairs: Find shortest path from u to v for all u, v ∈ V.
  • Negative-weight edges are OK, as long as no negative-weight cycles are reachable from the source.
  • If we have a negative-weight cycle, we can just keep going around it, and get δ(s, v) = -∞ for all v on the cycle.
  • But OK if the negative-weight cycle is not reachable from the source.
  • Some algorithms work only if there are no negative-weight edges in the graph. We'll be clear when they're allowed and not allowed.

  5. .. continued
  • Optimal Substructure
  Lemma: Any subpath of a shortest path is a shortest path.
  Proof: Cut-and-paste. Suppose a shortest path p from u to v passes through vertices x and y, so that p decomposes into subpaths pux (u to x), pxy (x to y), and pyv (y to v). Then δ(u, v) = w(p) = w(pux) + w(pxy) + w(pyv). Now suppose there exists a shorter path p'xy from x to y, so that w(p'xy) < w(pxy). Construct p' by replacing pxy with p'xy. Then w(p') = w(pux) + w(p'xy) + w(pyv) < w(pux) + w(pxy) + w(pyv) = w(p). So p wasn't a shortest path after all!

  6. .. continued
  • Cycles
  Shortest paths can't contain cycles:
  • Already ruled out negative-weight cycles.
  • Positive-weight: we can get a shorter path by omitting the cycle.
  • Zero-weight: no reason to use them ⇒ assume that our solutions won't use them.
  • Output of a single-source shortest-path algorithm
  For each vertex v ∈ V:
  • d[v] = δ(s, v).
  • Initially, d[v] = ∞.
  • Reduces as the algorithm progresses, but always maintain d[v] ≥ δ(s, v).
  • Call d[v] a shortest-path estimate.
  • π[v] = predecessor of v on a shortest path from s.
  • If no predecessor, π[v] = NIL.
  • π induces a tree, the shortest-path tree.

  7. .. continued
  • Initialization
  All the shortest-paths algorithms start with INIT-SINGLE-SOURCE.
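
  The INIT-SINGLE-SOURCE pseudocode itself is not reproduced in this transcript. Below is a minimal Python sketch of what it does, keeping d and π in dictionaries rather than as vertex attributes; the names init_single_source, d, and pi are illustrative.

    import math

    def init_single_source(vertices, s):
        """Set d[v] = infinity and pi[v] = NIL (None) for every vertex, then d[s] = 0."""
        d = {v: math.inf for v in vertices}   # shortest-path estimates
        pi = {v: None for v in vertices}      # predecessors
        d[s] = 0
        return d, pi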

  8. .. continued
  • Relaxing an edge (u, v)
  Can we improve the shortest-path estimate for v by going through u and taking the edge (u, v)?
  For all the single-source shortest-paths algorithms we'll look at,
  • start by calling INIT-SINGLE-SOURCE,
  • then relax edges.
  The algorithms differ in the order in which they relax edges and in how many times they relax each edge.
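
  The RELAX pseudocode is likewise not in the transcript; here is a Python sketch of the relaxation test, consistent with the init_single_source sketch above (the weight function w is assumed to be a dict keyed by edge; names are illustrative).

    def relax(u, v, w, d, pi):
        """If the path through u improves the current estimate for v,
        lower d[v] and record u as v's predecessor."""
        if d[u] + w[(u, v)] < d[v]:
            d[v] = d[u] + w[(u, v)]
            pi[v] = u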

  9. Shortest-paths Properties
  • Based on calling INIT-SINGLE-SOURCE once and then calling RELAX zero or more times.
  • Triangle Inequality: For all (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).
  • Upper-bound Property: Always have d[v] ≥ δ(s, v) for all v. Once d[v] = δ(s, v), it never changes.
  • No-path Property: If δ(s, v) = ∞, then d[v] = ∞ always.
  • Convergence Property: If s ⇝ u → v is a shortest path, d[u] = δ(s, u), and we call RELAX(u, v, w), then d[v] = δ(s, v) afterward.
  • Path-relaxation Property: Let p = v0, v1, . . . , vk be a shortest path from s = v0 to vk. If we relax, in order, (v0, v1), (v1, v2), . . . , (vk-1, vk), even intermixed with other relaxations, then d[vk] = δ(s, vk).

  10. .. continued
  • Triangle Inequality: For all (u, v) ∈ E, we have δ(s, v) ≤ δ(s, u) + w(u, v).
  Proof: The weight of a shortest path s ⇝ v is ≤ the weight of any path s ⇝ v. The path s ⇝ u → v is a path s ⇝ v, and if we use a shortest path s ⇝ u, its weight is δ(s, u) + w(u, v). ■
  • Upper-bound Property: Always have d[v] ≥ δ(s, v) for all v. Once d[v] = δ(s, v), it never changes.
  Proof: Initially true. Suppose there exists a vertex such that d[v] < δ(s, v). Without loss of generality, v is the first vertex for which this happens. Let u be the vertex that causes d[v] to change. Then d[v] = d[u] + w(u, v). So d[v] < δ(s, v) ≤ δ(s, u) + w(u, v) (triangle inequality) ≤ d[u] + w(u, v) (v is first violation), i.e., d[v] < d[u] + w(u, v). This contradicts d[v] = d[u] + w(u, v). Once d[v] reaches δ(s, v), it never goes lower. It never goes up, since relaxations only lower shortest-path estimates. ■

  11. .. continued
  • No-path Property: If δ(s, v) = ∞, then d[v] = ∞ always.
  Proof: d[v] ≥ δ(s, v) = ∞ ⇒ d[v] = ∞. ■
  • Convergence Property: If s ⇝ u → v is a shortest path, d[u] = δ(s, u), and we call RELAX(u, v, w), then d[v] = δ(s, v) afterward.
  Proof: After relaxation, d[v] ≤ d[u] + w(u, v) (RELAX code) = δ(s, u) + w(u, v) = δ(s, v) (lemma: optimal substructure). Since d[v] ≥ δ(s, v), we must have d[v] = δ(s, v). ■

  12. .. continued
  • Path-relaxation Property: Let p = v0, v1, . . . , vk be a shortest path from s = v0 to vk. If we relax, in order, (v0, v1), (v1, v2), . . . , (vk-1, vk), even intermixed with other relaxations, then d[vk] = δ(s, vk).
  Proof: Induction to show that d[vi] = δ(s, vi) after (vi-1, vi) is relaxed.
  Basis: i = 0. Initially, d[v0] = 0 = δ(s, v0) = δ(s, s).
  Inductive step: Assume d[vi-1] = δ(s, vi-1). Relax (vi-1, vi). By the convergence property, d[vi] = δ(s, vi) afterward, and d[vi] never changes. ■

  13. The Bellman-Ford Algorithm
  • Allows negative-weight edges.
  • Computes d[v] and π[v] for all v ∈ V.
  • Returns TRUE if no negative-weight cycles are reachable from s, FALSE otherwise.
  • Core: the first for loop relaxes all edges |V| - 1 times.
  • Time: Θ(VE).
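
  Since the BELLMAN-FORD pseudocode figure is not reproduced here, the following is a runnable Python sketch of the algorithm as just described: |V| - 1 passes that relax every edge, followed by one pass that checks whether any edge can still be relaxed. The representation (an edge list plus a weight dict w) and the names are illustrative.

    import math

    def bellman_ford(vertices, edges, w, s):
        """Returns (no_negative_cycle, d, pi) for source s."""
        d = {v: math.inf for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        for _ in range(len(vertices) - 1):      # |V| - 1 passes
            for (u, v) in edges:                # relax every edge in each pass
                if d[u] + w[(u, v)] < d[v]:
                    d[v] = d[u] + w[(u, v)]
                    pi[v] = u
        for (u, v) in edges:                    # any edge still relaxable =>
            if d[u] + w[(u, v)] < d[v]:         # negative-weight cycle reachable from s
                return False, d, pi
        return True, d, pi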

  14. Example:
  • The values you get on each pass and how quickly the algorithm converges depend on the order of relaxation.
  • But it is guaranteed to converge after |V| - 1 passes, assuming no negative-weight cycles.
  Proof: Use the path-relaxation property. Let v be reachable from s, and let p = v0, v1, . . . , vk be a shortest path from s to v, where v0 = s and vk = v. Since p is acyclic, it has ≤ |V| - 1 edges, so k ≤ |V| - 1. Each iteration of the for loop relaxes all edges:
  • The first iteration relaxes (v0, v1).
  • The second iteration relaxes (v1, v2).
  • The kth iteration relaxes (vk-1, vk).
  By the path-relaxation property, d[v] = d[vk] = δ(s, vk) = δ(s, v). ■

  15. .. continued
  How about the TRUE/FALSE return value?
  • Suppose there is no negative-weight cycle reachable from s. At termination, for all (u, v) ∈ E, d[v] = δ(s, v) ≤ δ(s, u) + w(u, v) (triangle inequality) = d[u] + w(u, v). So BELLMAN-FORD returns TRUE.
  • Now suppose there exists a negative-weight cycle c = v0, v1, . . . , vk, where v0 = vk, reachable from s. Then w(c) = Σi=1..k w(vi-1, vi) < 0. Suppose (for contradiction) that BELLMAN-FORD returns TRUE. Then d[vi] ≤ d[vi-1] + w(vi-1, vi) for i = 1, 2, . . . , k. Summing around c: Σi=1..k d[vi] ≤ Σi=1..k (d[vi-1] + w(vi-1, vi)) = Σi=1..k d[vi-1] + Σi=1..k w(vi-1, vi). Since v0 = vk, each vertex appears exactly once in each of the summations Σ d[vi] and Σ d[vi-1], so these sums are equal and cancel, leaving 0 ≤ Σi=1..k w(vi-1, vi). This contradicts c being a negative-weight cycle!
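
  As a quick illustration of the return value, using the bellman_ford sketch given after slide 13 on a made-up three-vertex graph with a reachable negative-weight cycle:

    # The cycle x -> y -> x has weight (-3) + 2 = -1 < 0 and is reachable from s,
    # so some edge remains relaxable after |V| - 1 passes and FALSE is returned.
    V = ['s', 'x', 'y']
    E = [('s', 'x'), ('x', 'y'), ('y', 'x')]
    w = {('s', 'x'): 1, ('x', 'y'): -3, ('y', 'x'): 2}
    ok, d, pi = bellman_ford(V, E, w, 's')
    print(ok)  # False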

  16. Example:

  17. Single-source shortest paths in a directed acyclic graph
  • Since the graph is a dag, we're guaranteed no negative-weight cycles.
  • Algorithm: topologically sort the vertices, then take each vertex in topologically sorted order and relax all edges leaving it.
  • Example:
  • Time: Θ(V + E): the topological sort plus relaxing each edge exactly once.
  • Correctness: Because we process vertices in topologically sorted order, the edges of any path are relaxed in order of their appearance on the path. ⇒ Edges on any shortest path are relaxed in order. ⇒ By the path-relaxation property, correct.
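
  A runnable Python sketch of the dag algorithm described above (the pseudocode figure is not in the transcript). For compactness this sketch topologically sorts by repeatedly removing in-degree-0 vertices (Kahn's method) rather than by the DFS-based sort discussed on the next slide; the representation (adjacency lists plus a weight dict w) and the names are illustrative.

    import math
    from collections import deque

    def dag_shortest_paths(vertices, adj, w, s):
        """Topologically sort the dag, then relax the edges leaving each vertex
        in topologically sorted order."""
        # Topological sort via in-degrees (Kahn's method).
        indeg = {v: 0 for v in vertices}
        for u in vertices:
            for v in adj[u]:
                indeg[v] += 1
        ready = deque(v for v in vertices if indeg[v] == 0)
        order = []
        while ready:
            u = ready.popleft()
            order.append(u)
            for v in adj[u]:
                indeg[v] -= 1
                if indeg[v] == 0:
                    ready.append(v)
        # One relaxation per edge, taking tails in topological order.
        d = {v: math.inf for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        for u in order:
            for v in adj[u]:
                if d[u] + w[(u, v)] < d[v]:
                    d[v] = d[u] + w[(u, v)]
                    pi[v] = u
        return d, pi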

  18. Reference: Topological Sort
  • A topological sort of a dag G = (V, E) is a linear ordering of all its vertices such that if G contains an edge (u, v), then u appears before v in the ordering.
  • (If the graph is not acyclic, then no linear ordering is possible.)
  • A topological sort of a graph can be viewed as an ordering of its vertices along a horizontal line so that all directed edges go from left to right.
  • Topological sorting is thus different from the usual kind of sorting.
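
  A minimal Python sketch of a DFS-based topological sort matching the definition above (the input is assumed to be a dag, since otherwise no topological order exists; the names are illustrative).

    def topological_sort(vertices, adj):
        """List the vertices in reverse order of DFS finishing time, so that
        for every edge (u, v), u appears before v."""
        visited = set()
        finished = []

        def dfs(u):
            visited.add(u)
            for v in adj[u]:
                if v not in visited:
                    dfs(v)
            finished.append(u)            # u finishes after all its successors

        for u in vertices:
            if u not in visited:
                dfs(u)
        return list(reversed(finished))   # reverse order of finishing times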

  19. Dijkstra's Algorithm
  • No negative-weight edges.
  • Essentially a weighted version of breadth-first search.
  • Instead of a FIFO queue, uses a priority queue.
  • Keys are shortest-path weights (d[v]).
  • Have two sets of vertices:
  • S = vertices whose final shortest-path weights are determined,
  • Q = priority queue = V - S.
  • DIJKSTRA(V, E, w, s)
        INIT-SINGLE-SOURCE(V, s)
        S ← ∅
        Q ← V
        while Q ≠ ∅
            do u ← EXTRACT-MIN(Q)
               S ← S ∪ {u}
               for each vertex v ∈ Adj[u]
                   do RELAX(u, v, w)
  • Looks a lot like Prim's algorithm, but computing d[v], and using shortest-path weights as keys.
  • Dijkstra's algorithm can be viewed as greedy, since it always chooses the "lightest" ("closest"?) vertex in V - S to add to S.
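
  A runnable Python sketch of Dijkstra's algorithm using heapq as the binary-heap priority queue. Since heapq has no DECREASE-KEY, this version pushes a new heap entry whenever d[v] improves and skips stale entries on extraction, a common workaround rather than a literal translation of the pseudocode above; the names and graph representation are illustrative.

    import heapq
    import math

    def dijkstra(vertices, adj, w, s):
        """Assumes all edge weights are nonnegative."""
        d = {v: math.inf for v in vertices}
        pi = {v: None for v in vertices}
        d[s] = 0
        pq = [(0, s)]                       # (shortest-path estimate, vertex)
        done = set()                        # S: vertices with final d values
        while pq:
            du, u = heapq.heappop(pq)       # EXTRACT-MIN
            if u in done:
                continue                    # stale entry; skip it
            done.add(u)                     # S = S U {u}
            for v in adj[u]:                # relax each edge leaving u
                if du + w[(u, v)] < d[v]:
                    d[v] = du + w[(u, v)]
                    pi[v] = u
                    heapq.heappush(pq, (d[v], v))
        return d, pi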

  20. Example:
  • Order of adding to S: s, y, z, x.
  • Correctness:
  Loop invariant: At the start of each iteration of the while loop, d[v] = δ(s, v) for all v ∈ S.
  Initialization: Initially, S = ∅, so trivially true.
  Termination: At the end, Q = ∅ ⇒ S = V ⇒ d[v] = δ(s, v) for all v ∈ V.
  Maintenance: Need to show that d[u] = δ(s, u) when u is added to S in each iteration. Suppose there exists u such that d[u] ≠ δ(s, u). Without loss of generality, let u be the first vertex for which d[u] ≠ δ(s, u) when u is added to S. Observations:
  • u ≠ s, since d[s] = δ(s, s) = 0.
  • Therefore, s ∈ S, so S ≠ ∅.
  • There must be some path s ⇝ u, since otherwise d[u] = δ(s, u) = ∞ by the no-path property.
  So, there is a path s ⇝ u. This means there is a shortest path p from s to u.

  21. .. continued (maintenance of invariant)
  Just before u is added to S, path p connects a vertex in S (i.e., s) to a vertex in V - S (i.e., u). Let y be the first vertex along p that's in V - S, and let x ∈ S be y's predecessor. Decompose p into s ⇝ x → y ⇝ u (could have x = s or y = u, so that the subpath p1 = s ⇝ x or p2 = y ⇝ u may have no edges).
  Claim: d[y] = δ(s, y) when u is added to S.
  Proof: x ∈ S, and u is the first vertex such that d[u] ≠ δ(s, u) when u is added to S ⇒ d[x] = δ(s, x) when x was added to S. We relaxed (x, y) at that time, so by the convergence property, d[y] = δ(s, y). ■ (claim)
  Now we can get a contradiction to d[u] ≠ δ(s, u): y is on a shortest path s ⇝ u, and all edge weights are nonnegative ⇒ δ(s, y) ≤ δ(s, u) ⇒ d[y] = δ(s, y) ≤ δ(s, u) ≤ d[u] (upper-bound property). Also, both y and u were in Q when we chose u, so d[u] ≤ d[y] ⇒ d[u] = d[y]. Therefore, d[y] = δ(s, y) = δ(s, u) = d[u]. This contradicts the assumption that d[u] ≠ δ(s, u). Hence, Dijkstra's algorithm is correct. ■

  22. .. continued
  Analysis: Like Prim's algorithm, the running time depends on the implementation of the priority queue.
  • If a binary heap, each operation takes O(lg V) time ⇒ O((V + E) lg V) = O(E lg V) if all vertices are reachable from the source.
  • If a Fibonacci heap:
  • Each of the |V| EXTRACT-MIN operations takes O(lg V) amortized time.
  • Each of the at most |E| DECREASE-KEY operations (one per relaxation) takes O(1) amortized time.
  • Therefore, time is O(V lg V + E).
