
An Introduction to Network Coding


Presentation Transcript


  1. An Introduction to Network Coding Muriel Médard Associate Professor EECS Massachusetts Institute of Technology Ralf Koetter Director Institute for Communications Engineering Technical University of Munich

  2. Outline of course • An introduction to network coding: • Network model • Algebraic aspects • Delay issues • Network coding for wireless multicast: • Distributed randomized coding • Erasure reliability • Use of feedback • Optimization in choice of subgraphs • Distributed optimization • Dealing with mobility • Relation to compression • Network coding in non-multicast: • Algorithms • Heuristics • Network coding for delay reduction in wireless downloading • Security with network coding: • Byzantine security • Wiretapping aspects

  3. Network coding • Canonical example [Ahlswede et al. 00] • What choices can we make? • No longer distinct flows, but information [Figure: butterfly network with source s, relay nodes t, u, w, x and sinks y, z; the source injects bits b1 and b2]

  4. Network coding • Picking a single bit does not work • Time sharing does not work • No longer distinct flows, but information [Figure: sending b1 alone on the bottleneck link means one sink receives b1 twice and never learns b2]

  5. Network coding • Need to use the algebraic nature of data • No longer distinct flows, but information [Figure: sending the sum b1 + b2 on the bottleneck link lets each sink combine it with the bit it received directly and recover the other bit]
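
The coding on this slide is easy to check mechanically. Below is a minimal Python sketch (not from the slides; the node roles and function names are assumed from the figure) in which the bottleneck node forwards b1 XOR b2 and each sink recovers the bit it did not receive directly.

```python
# Minimal sketch of the butterfly example: the middle node forwards the
# XOR b1 ^ b2, and each sink recovers the bit it did not receive directly.
# Node roles follow the figure on this slide; names are illustrative.

def relay(b1: int, b2: int) -> int:
    """Bottleneck node codes the two incoming bits into one outgoing bit."""
    return b1 ^ b2

def sink_y(b1_direct: int, coded: int) -> int:
    """Sink that received b1 directly recovers b2 from the coded bit."""
    return b1_direct ^ coded

def sink_z(b2_direct: int, coded: int) -> int:
    """Sink that received b2 directly recovers b1 from the coded bit."""
    return b2_direct ^ coded

for b1 in (0, 1):
    for b2 in (0, 1):
        c = relay(b1, b2)
        assert sink_y(b1, c) == b2 and sink_z(b2, c) == b1
print("both sinks recover (b1, b2) for every input pair")
```

The same one bit per unit time on the bottleneck serves both sinks, which is exactly what forwarding a single uncoded bit or time sharing cannot achieve.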

  6. [KM01, 02, 03]

  7. A simple example

  8. A simple example

  9. Transfer matrix

  10. Linear network system

  11. Solutions

  12. Multicast

  13. Multicast

  14. One source, disjoint multicasts

  15. Delays

  16. Delays

  17. Delays

  18. Network coding for multicast: • Distributed randomized coding • Erasure reliability • Use of feedback • Optimization in choice of subgraphs • Distributed optimization • Dealing with mobility

  19. Randomized network coding • The effect of the network is that of a transfer matrix from sources to receivers • To recover symbols at the receivers, we require sufficient degrees of freedom – an invertible matrix in the coefficients of all nodes • The realization of the determinant of the matrix will be non-zero with high probability if the coefficients are chosen independently and randomly • Probability of success over field F is at least (1 − d/|F|)^η, where d is the number of receivers and η is the number of links with independently, randomly chosen coefficients • Randomized network coding can use any multicast subgraph which satisfies the min-cut max-flow bound for each receiver [HKMKE03, HMSEK03, WCJ03], for any number of sources, even when correlated [HMEK04] [Figure: a coding node j forming its output from endogenous inputs and an exogenous input]
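
The invertibility claim can be illustrated numerically. The sketch below (illustrative, not the slide's analysis) draws k × k matrices with i.i.d. uniform entries over a prime field GF(q), a simple stand-in for a receiver's transfer matrix, and estimates how often they are invertible as q grows; the field sizes and k are chosen here for the example.

```python
# Monte Carlo sketch: estimate the probability that a k x k matrix with
# i.i.d. uniform entries over a prime field GF(q) is invertible, i.e. that
# a receiver's transfer matrix gives enough degrees of freedom to decode.

import random

def is_invertible_mod(matrix, q):
    """Gauss-Jordan elimination over GF(q); returns True if full rank."""
    a = [row[:] for row in matrix]
    n = len(a)
    for col in range(n):
        pivot = next((r for r in range(col, n) if a[r][col] % q != 0), None)
        if pivot is None:
            return False
        a[col], a[pivot] = a[pivot], a[col]
        inv = pow(a[col][col], q - 2, q)          # inverse via Fermat (q prime)
        a[col] = [(x * inv) % q for x in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [(x - f * y) % q for x, y in zip(a[r], a[col])]
    return True

k, trials = 4, 5000
for q in (2, 17, 257):
    hits = sum(
        is_invertible_mod([[random.randrange(q) for _ in range(k)] for _ in range(k)], q)
        for _ in range(trials)
    )
    print(f"GF({q}): empirical P(invertible) ~ {hits / trials:.3f}")
```

For GF(2) and k = 4 the empirical value should sit near 0.31 (the product of (1 − 2^−i) for i = 1..4), while for larger fields it approaches 1, consistent with the high-probability claim on the slide.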

  20. Erasure reliability • Packet losses in networks result from • congestion, • buffer overflows, • (in wireless) outage due to fading or change in topology • Prevailing approach for reliability: Request retransmission • Not suitable for • high-loss environments, • multicast, • real-time applications.

  21. Erasure reliability • Alternative approach: Forward Error Correction (FEC) • Multiple description codes • Erasure-correcting codes (e.g. Reed-Solomon, Tornado, LT, Raptor) • End-to-end: Connection as a whole is viewed as a single channel; coding is performed only at the source node.

  22. Erasure reliability – single flow • Consider a two-link tandem with link erasure probabilities ε1 and ε2 • End-to-end erasure coding: capacity is (1 − ε1)(1 − ε2) packets per unit time. • As two separate channels: capacity is min(1 − ε1, 1 − ε2) packets per unit time. • Can use block erasure coding on each channel, but delay is a problem. • Network coding: the minimum cut is the capacity • For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [Lun et al. 04, 05], [Dana et al. 04]: • Nodes store received packets in memory • Random linear combinations of memory contents are sent out • Delay expressions generalize Jackson networks to the innovative packets • Can be used in a rateless fashion
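
As an illustrative numeric check (the numbers are chosen here, not taken from the slide): with ε1 = ε2 = 0.1, end-to-end coding achieves 0.9 × 0.9 = 0.81 packets per unit time, per-link coding achieves min(0.9, 0.9) = 0.9, and network coding also sustains the min-cut of 0.9 while avoiding the per-link block-decoding delay.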

  23. Feedback for reliability • Parameters we consider: • delay incurred at B: excess time, relative to the theoretical minimum, that it takes for k packets to be communicated, disregarding any delay due to the use of the feedback channel • block size • feedback: number of feedback packets used (feedback rate Rf = number of feedback messages / number of received packets) • memory requirement at B • achievable rate from A to C

  24. Feedback for reliability • Follows the approach of Pakzad et al. 05, Lun et al. 06 • Scheme V allows us to achieve the min-cut rate while keeping the average memory requirement at node B finite • Note that the feedback delay for Scheme V is smaller than that of conventional ARQ (which has Rf = 1) by a factor of Rf • Feedback is required only on link BC [Fragouli et al. 07]

  25. Erasure reliability • For erasures, correlated or not, we can in the multicast case deal with average flows uniquely [LME04], [LMK05], [DGPHE04] • We consider a scheme [LME04] where • nodes store received packets in memory; • random linear combinations of memory contents are sent out at every transmission opportunity (without waiting for a full block). • The scheme achieves the capacity attainable under arbitrary coding at every node for • unicast and multicast connections • networks with point-to-point and broadcast links.

  26. Scheme for erasure reliability • We have k message packets w1, w2, . . . , wk (fixed-length vectors over Fq) at the source. • (Uniformly-)random linear combinations of w1, w2, . . . , wk are injected into the source's memory according to a process with rate R0. • At every node, (uniformly-)random linear combinations of memory contents are sent out; • received packets are stored into memory. • In every packet, we store a length-k vector over Fq representing the transformation it is of w1, w2, . . . , wk: the global encoding vector.
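
As a concrete illustration of the storage-and-recombination step, here is a minimal Python sketch. The specifics (a prime field of size q = 257, k = 4 message packets, a Packet structure carrying the global encoding vector in its header) are assumptions made for the example, not details from the slides.

```python
# Minimal sketch of what a coding node does at a transmission opportunity:
# it draws fresh random coefficients, mixes whatever packets sit in its
# memory, and mixes their length-k global encoding vectors the same way.

import random
from dataclasses import dataclass

q = 257          # illustrative prime field size
k = 4            # number of original message packets w1..wk

@dataclass
class Packet:
    encoding: list   # length-k global encoding vector (header)
    payload: list    # symbols over GF(q)

def recombine(memory):
    """Emit a uniformly random GF(q)-linear combination of stored packets.

    Assumes `memory` is a non-empty list of Packet objects.
    """
    coeffs = [random.randrange(q) for _ in memory]
    length = len(memory[0].payload)
    enc = [0] * k
    pay = [0] * length
    for c, pkt in zip(coeffs, memory):
        enc = [(e + c * g) % q for e, g in zip(enc, pkt.encoding)]
        pay = [(p + c * s) % q for p, s in zip(pay, pkt.payload)]
    return Packet(enc, pay)
```

Intermediate nodes never need to know the wi: mixing headers and payloads with the same coefficients keeps each outgoing header consistent with its payload.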

  27. Coding scheme • Since all coding is linear, we can write any packet x as a linear combination of w1, w2, . . . , wk: x = γ1 w1 + γ2 w2 + · · · + γk wk. • The vector γ = (γ1, . . . , γk) is the global encoding vector of x. • We send the global encoding vector along with x, in its header, incurring a constant overhead. • The side information provided by γ is very important to the functioning of the scheme.
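
To show why the header is what makes the scheme work, here is a decoding sketch under the same illustrative prime-field setup as above (again an assumption, not the paper's implementation): once a receiver holds k packets whose global encoding vectors are linearly independent, Gauss-Jordan elimination on the rows [γ | payload] recovers w1, . . . , wk.

```python
# Decoding sketch: eliminate on the global encoding vectors; whatever row
# operations reduce gamma to the identity recover w1..wk from the payloads.

def decode(packets, q):
    """packets: list of (encoding_vector, payload); returns w1..wk or None."""
    k = len(packets[0][0])
    rows = [list(enc) + list(pay) for enc, pay in packets]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col] % q), None)
        if pivot is None:
            return None                       # not enough innovative packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], q - 2, q)   # inverse via Fermat (q prime)
        rows[col] = [(x * inv) % q for x in rows[col]]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(x - f * y) % q for x, y in zip(rows[r], rows[col])]
    return [rows[i][k:] for i in range(k)]    # row i now holds w_{i+1}
```

With fewer than k innovative packets the elimination stalls and returns None, which is exactly the insufficient-degrees-of-freedom situation from slide 19.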

  28. Outline of proof • Keep track of the propagation of innovative packets - packets whose auxiliary encoding vectors (transformation with respect to the n packets injected into the source's memory) are linearly independent across particular cuts. • Can show that, if R0 is less than capacity and the input process is Poisson, then the propagation of innovative packets through any node forms a stable M/M/1 queueing system in steady state. • So Ni, the number of innovative packets in the network, is a time-invariant random variable with finite mean. • We obtain delay expressions using, in effect, a generalization of Jackson networks for the innovative packets.
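
For readers who have not seen the queueing step, the finite-mean claim rests on standard M/M/1 facts; the mapping of rates below is illustrative rather than the paper's exact notation. With Poisson injections at rate R0 and a cut of capacity C > R0:

```latex
\[
  \rho = \frac{R_0}{C} < 1, \qquad
  \mathbb{E}[\text{innovative packets in system at the cut}] = \frac{\rho}{1-\rho}, \qquad
  \mathbb{E}[\text{delay per packet}] = \frac{1}{C - R_0}
\]
```

Summing such finite-mean quantities over the nodes, in the spirit of the Jackson-network generalization mentioned on the slide, is what gives Ni a finite mean whenever R0 is strictly below capacity.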

  29. Comments for erasure reliability • Particularly suitable for • overlay networks using UDP, and • wireless packet networks (have erasures and can perform coding at all nodes). • Code construction is completely decentralized. • Scheme can be operated ratelessly - can be run indefinitely until successful reception.

  30. Coding for packet losses - unicast Average number of transmissions required per packet in random networks of varying size. Sources and sinks were chosen randomly according to a uniform distribution. Paths or subgraphs were chosen in each random instance to minimize the total number of transmissions required, except in the cases of end-to-end retransmission and end-to-end coding, where they were chosen to minimize the number of transmissions required by the source node. [Lun et al. 04]

  31. Explicit Feedback - Main Idea • Store linear combinations of original packets • No need to store information commonly known at all receivers (i.e. VΔ) • Vector spaces representing knowledge: V: knowledge of the sender; Vj: knowledge of receiver j; VΔ: common knowledge of all receivers [Figure: vector spaces V and V1, V2, V3, …, Vn] (Sundararajan et al. 07)

  32. Algorithm Outline [Figure: per-slot pipeline for slot t – incorporate arrivals of slot (t-1), incorporate channel state feedback, separate out common knowledge; the spaces V(t-1), Vj(t-1), VΔ(t-1) evolve into V(t), Vj(t), VΔ(t) via Vj'(t), VΔ'(t), U''(t), Uj''(t)] • V(t-1): knowledge of the sender after incorporating slot (t-1) arrivals • Vj(t-1): knowledge of receiver j at the end of slot (t-1)

  33. Algorithm Outline [Figure: same per-slot pipeline as the previous slide] • Vj'(t): knowledge of receiver j after incorporating feedback

  34. Algorithm Outline [Figure: same per-slot pipeline as the previous slide] • Remaining part of the information: the spaces U''(t) and Uj''(t) shown in the figure

  35. Algorithm Outline [Figure: same per-slot pipeline as the previous slide] • We can ensure that, for all j, the condition shown on the slide figure holds • Therefore, it is sufficient to store linear combinations corresponding to some basis of U''(t)

  36. Incremental version of the vector spaces [Figure: same per-slot pipeline, expressed in the incremental spaces U(t), Uj(t), U''(t), Uj''(t), UΔ'(t)] • All the operations can be performed even after excluding the common knowledge from all the vector spaces, since it is not relevant any more. • Define U(t) and Uj(t) such that V(t) = VΔ(t) ⊕ U(t) and Vj(t) = VΔ(t) ⊕ Uj(t).

  37. Incremental version of the vector spaces [Figure: same per-slot pipeline in the incremental spaces] • Let Uj'(t) be the incremental knowledge of receiver j after including feedback • Then the incremental common knowledge is UΔ'(t) = ∩j Uj'(t), and we get the remaining incremental spaces U''(t) and Uj''(t) by separating out UΔ'(t)

  38. The Algorithm [Figure: per-slot pipeline in the incremental spaces] In time slot t: • Compute U(t-1) after including slot (t-1) arrivals into U''(t-1) • Using the feedback, compute the Uj'(t)'s and hence UΔ'(t) • Compute a basis BΔ for UΔ'(t) • Extend this basis to a basis B for U(t-1)

  39. The Algorithm [Figure: per-slot pipeline in the incremental spaces] • Replace the current queue contents with linear combinations of packets whose coefficient vectors are those in B\BΔ • Express all the vector spaces in terms of the new basis B\BΔ. This basis spans U''(t) • Compute a linear combination g which is in U''(t) but not in any of the Uj''(t)'s (except if Uj''(t) = U''(t)). (This is possible iff the field size exceeds the number of receivers)
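
One possible way to realize the last step in code: given a basis of U''(t) and, for each receiver, a basis of Uj''(t) (all as coefficient vectors over a prime field GF(q)), draw random combinations until one falls outside every proper Uj''(t). The Python sketch below is self-contained; the names (rank_mod, pick_innovative) and the brute-force retry loop are assumptions standing in for the deterministic choice the slide alludes to, and the loop terminates quickly once q exceeds the number of receivers.

```python
# Sketch: pick a coefficient vector g in span(U'') that no receiver with a
# proper subspace Uj'' already knows, by random trial over GF(q).

import random

def rank_mod(rows, q):
    """Rank of a list of vectors over GF(q) by Gaussian elimination."""
    a = [r[:] for r in rows]
    rank, ncols = 0, len(a[0]) if a else 0
    for col in range(ncols):
        piv = next((r for r in range(rank, len(a)) if a[r][col] % q), None)
        if piv is None:
            continue
        a[rank], a[piv] = a[piv], a[rank]
        inv = pow(a[rank][col], q - 2, q)
        a[rank] = [(x * inv) % q for x in a[rank]]
        for r in range(len(a)):
            if r != rank and a[r][col]:
                f = a[r][col]
                a[r] = [(x - f * y) % q for x, y in zip(a[r], a[rank])]
        rank += 1
    return rank

def pick_innovative(U_basis, receiver_bases, q):
    """Return g in span(U_basis) not in span(Uj) for any proper Uj."""
    while True:
        coeffs = [random.randrange(q) for _ in U_basis]
        g = [sum(c * v[i] for c, v in zip(coeffs, U_basis)) % q
             for i in range(len(U_basis[0]))]
        if all(x == 0 for x in g):
            continue
        ok = True
        for Uj in receiver_bases:
            if rank_mod(Uj, q) == rank_mod(U_basis, q):
                continue                      # Uj''(t) = U''(t): skip this receiver
            if rank_mod(Uj + [g], q) == rank_mod(Uj, q):
                ok = False                    # g already known to receiver j
                break
        if ok:
            return g

# Example: U'' of dimension 3 over GF(257), two receivers with proper subspaces.
U = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(pick_innovative(U, [[[1, 0, 0]], [[0, 1, 0], [0, 0, 1]]], 257))
```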

  40. Bounding the queue size • Q(t): physical queue size at the end of slot t • The bound on the next slide can be proved using the following property of subspaces, where X + Y denotes span(X ∪ Y): dim(X + Y) + dim(X ∩ Y) = dim X + dim Y

  41. Bounding the queue size • Q(t): physical queue size at the end of slot t • The bound states that the physical queue size is at most the sum of the virtual queue sizes: • LHS: physical queue size (the amount of the sender's knowledge that is not known at all receivers) • RHS: virtual queue size (sum of backlogs in linear degrees of freedom of all the receivers)
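
The inequality itself appears only as an image in the original deck; a hedged reconstruction from the LHS/RHS descriptions above reads:

```latex
\[
  \underbrace{\dim V(t) - \dim V_{\Delta}(t)}_{\text{physical queue size}}
  \;\le\;
  \underbrace{\sum_{j}\bigl(\dim V(t) - \dim V_{j}(t)\bigr)}_{\text{sum of receiver backlogs (virtual queues)}},
  \qquad
  V_{\Delta}(t) = \bigcap_{j} V_{j}(t)
\]
```

This follows by induction on the number of receivers from the subspace identity dim(X + Y) + dim(X ∩ Y) = dim X + dim Y quoted on the previous slide.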

  42. Summary: Uncoded vs. Coded • Knowledge represented by: uncoded networks use the set of received packets; coded networks use the vector space spanned by the coefficient vectors of the received linear combinations. • Amount of knowledge: uncoded, the number of packets received; coded, the number of linearly independent (innovative) linear combinations of packets received (i.e., the dimension of the vector space). • Queue stores: uncoded, all undelivered packets; coded, linear combinations of packets which form a basis for the coset space of the common knowledge at all receivers. • Update rule after each transmission: uncoded, if a packet has been received by all receivers, drop it; coded, recompute the common knowledge space VΔ and compute a new set of linear combinations so that their span is independent of VΔ.

  43. Complexity and Overhead • All computations can be performed on incremental versions of the vector spaces (the common knowledge can be excluded everywhere) – hence complexity tracks the queue size • Overhead for the coding coefficients depends on how many uncoded packets get coded together. This may not track the queue size • Overhead can be bounded by bounding the busy period of the virtual queues • Question: given this use of coding for queueing, how do we manage the network resources to take advantage of network coding?
