IWQoS, June 2000
Engineering for QoS and the limits of service differentiation
Jim Roberts (james.roberts@francetelecom.fr)
The central role of QoS
• feasible technology
• quality of service: transparency, response time, accessibility
• service model: resource sharing, priorities, ...
• network engineering: provisioning, routing, ...
• a viable business model
Engineering for QoS: a probabilistic point of view
• statistical characterization of traffic
  • notions of expected demand and random processes
  • for packets, bursts, flows, aggregates
• QoS in statistical terms
  • transparency: Pr [packet loss], mean delay, Pr [delay > x], ...
  • response time: E [response time], ...
  • accessibility: Pr [blocking], ...
• QoS engineering, based on a three-way relationship between demand, performance and capacity
Outline
• traffic characteristics
• QoS engineering for streaming flows
• QoS engineering for elastic traffic
• service differentiation
Internet traffic is self-similar
• a self-similar process
  • variability at all time scales
• due to:
  • infinite variance of flow size
  • TCP-induced burstiness
• a practical consequence
  • difficult to characterize a traffic aggregate
[figure: Ethernet traffic, Bellcore 1989]
Traffic on a US backbone link (Thomson et al, 1997)
• traffic intensity is predictable ...
• ... and stationary in the busy hour
Traffic on a French backbone link
• traffic intensity is predictable ...
• ... and stationary in the busy hour
[figure: traffic by time of day (00h-18h), tue-mon]
IP flows
• a flow = one instance of a given application
  • a "continuous flow" of packets
• basically two kinds of flow, streaming and elastic
• streaming flows
  • audio and video, real time and playback
  • rate and duration are intrinsic characteristics
  • not rate adaptive (an assumption)
  • QoS: negligible loss, delay, jitter
• elastic flows
  • digital documents (Web pages, files, ...)
  • rate and duration are measures of performance
  • QoS: adequate throughput (response time)
Flow traffic characteristics
• streaming flows
  • constant or variable rate
  • compressed audio (O(10^3 bit/s)) and video (O(10^6 bit/s))
  • highly variable duration
  • a Poisson flow arrival process (?)
• elastic flows
  • infinite variance size distribution
  • rate adaptive
  • a Poisson flow arrival process (??)
[figure: variable rate video trace]
Modelling traffic demand
• stream traffic demand
  • arrival rate x bit rate x duration
• elastic traffic demand
  • arrival rate x size
• a stationary process in the "busy hour"
  • eg, Poisson flow arrivals, independent flow size
[figure: traffic demand (Mbit/s) vs time of day, with the busy hour marked]
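The two demand formulas above can be illustrated with a short sketch; the numeric values are invented purely for illustration.

```python
def stream_demand(flow_arrival_rate, bit_rate, mean_duration):
    """Stream traffic demand = arrival rate x bit rate x duration (bit/s)."""
    return flow_arrival_rate * bit_rate * mean_duration

def elastic_demand(flow_arrival_rate, mean_size_bits):
    """Elastic traffic demand = arrival rate x size (bit/s)."""
    return flow_arrival_rate * mean_size_bits

# e.g. 10 streaming flows/s at 100 kbit/s lasting 60 s on average:
print(stream_demand(10, 100e3, 60) / 1e6, "Mbit/s")   # 60.0 Mbit/s
# e.g. 50 elastic flows/s with mean document size 1 Mbit:
print(elastic_demand(50, 1e6) / 1e6, "Mbit/s")        # 50.0 Mbit/s
```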
Outline • traffic characteristics • QoS engineering for streaming flows • QoS engineering for elastic traffic • service differentiation
Open loop control for streaming traffic
• a "traffic contract"
• QoS guarantees rely on
  • traffic descriptors + admission control + policing
• time scale decomposition for performance analysis
  • packet scale
  • burst scale
  • flow scale
[figure: network with user-network and network-network interfaces]
Packet scale: a superposition of constant rate flows
• constant rate flows
  • packet size / inter-packet interval = flow rate
  • maximum packet size = MTU
• buffer size for negligible overflow?
  • over all phase alignments ...
  • ... assuming independence between flows
• worst case assumptions:
  • many low rate flows
  • MTU-sized packets
• buffer sizing for the M/D_MTU/1 queue
  • Pr [queue > x] ~ C e^{-rx}
[figure: log Pr [saturation] vs buffer size; curves approach the M/D_MTU/1 bound as flow count and packet size increase]
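The geometric tail Pr [queue > x] ~ C e^{-rx} can be checked numerically; a minimal sketch, assuming unit (MTU-sized) deterministic service times and Poisson arrivals at load 0.8, using the standard Lindley recursion for the waiting time.

```python
import random

def md1_tail(load, n=200_000, seed=1):
    """Simulate an M/D/1 queue (Poisson arrivals, unit deterministic
    service) via the Lindley recursion and return the waiting-time tail
    Pr[W > x] for x = 0..9 service times."""
    random.seed(seed)
    w, samples = 0.0, []
    for _ in range(n):
        samples.append(w)
        inter = random.expovariate(load)   # mean inter-arrival = 1/load
        w = max(w + 1.0 - inter, 0.0)      # add one service, drain idle time
    return [sum(s > x for s in samples) / n for x in range(10)]

tail = md1_tail(0.8)
# plotted on a log scale, the tail decays roughly linearly,
# consistent with Pr[queue > x] ~ C e^{-rx}
```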
The "negligible jitter conjecture"
• constant rate flows acquire jitter
  • notably in multiplexer queues
• conjecture:
  • if all flows are initially CBR and, in all queues, Σ flow rates < service rate
  • then they never acquire sufficient jitter to become worse for performance than a Poisson stream of MTU packets
• M/D_MTU/1 buffer sizing remains conservative
Burst scale: fluid queueing models
• assume flows have an instantaneous rate
  • eg, rate of on/off sources
• bufferless or buffered multiplexing?
  • bufferless: Pr [arrival rate > service rate] < ε
  • buffered: E [arrival rate] < service rate
[figure: packet arrivals and the corresponding burst-level arrival rate]
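The bufferless criterion Pr [arrival rate > service rate] < ε can be evaluated exactly for independent on/off sources; a sketch assuming each source is "on" with a fixed probability at a fixed peak rate (the parameter values are illustrative).

```python
from math import comb

def overload_prob(n_sources, p_on, peak_rate, link_rate):
    """Pr[combined instantaneous rate > link rate] for n independent
    on/off sources, each 'on' with probability p_on at rate peak_rate:
    a binomial tail over the number of simultaneously active sources."""
    max_on = int(link_rate // peak_rate)   # largest active count that fits
    return sum(comb(n_sources, k) * p_on**k * (1 - p_on)**(n_sources - k)
               for k in range(max_on + 1, n_sources + 1))

# 100 sources, active 10% of the time at 1 Mbit/s, on a 20 Mbit/s link
# (mean load 10 Mbit/s, i.e. 50% utilisation):
eps = overload_prob(100, 0.1, 1.0, 20.0)
```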
Buffered multiplexing performance: impact of burst parameters
[figure: log Pr [saturation] vs buffer size, starting from Pr [rate overload] at buffer size 0; curves compare:]
• longer vs shorter burst lengths
• more vs less variable burst lengths
• long range vs short range dependent bursts
• in each case the first alternative gives a heavier saturation tail
Choice of token bucket parameters?
• the token bucket is a virtual queue
  • service rate r
  • buffer size b
• non-conformance depends on
  • burst size and variability
  • and long range dependence
• a difficult choice for conformance
  • r >> mean rate ...
  • ... or b very large
[figure: non-conformance probability vs bucket depth (b, b')]
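The virtual-queue view of the token bucket can be sketched directly: tokens accrue at rate r up to depth b, and a packet conforms if enough tokens are available when it arrives. This is a minimal sketch of the conformance test only; real policers add per-packet timing and marking details.

```python
def conformant(packets, r, b):
    """Classify packets against a token bucket of rate r and depth b.
    packets is a time-sorted list of (arrival_time, size) pairs.
    Returns a list of booleans, one per packet."""
    tokens, last_t, verdicts = b, 0.0, []
    for t, size in packets:
        tokens = min(b, tokens + r * (t - last_t))  # refill since last packet
        last_t = t
        if size <= tokens:
            tokens -= size          # conformant packet consumes tokens
            verdicts.append(True)
        else:
            verdicts.append(False)  # non-conformant; tokens are not debited
    return verdicts

# a back-to-back burst of three 1500-byte packets against r = 1000 byte/s,
# b = 3000 bytes: only the first two fit in the bucket
print(conformant([(0.0, 1500), (0.0, 1500), (0.0, 1500)], 1000, 3000))
# → [True, True, False]
```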
Bufferless multiplexing, alias "rate envelope multiplexing"
• provisioning and/or admission control ensure Pr [Lt > C] < ε
• performance depends only on the stationary rate distribution
  • loss rate = E [(Lt - C)+] / E [Lt]
  • insensitivity to self-similarity
[figure: combined input rate Lt vs time, clipped at output rate C]
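Because the loss rate E [(Lt - C)+] / E [Lt] depends only on the stationary rate distribution, it can be computed directly for the on/off source model used above (the parameter values are again illustrative).

```python
from math import comb

def fluid_loss_rate(n, p_on, peak, C):
    """Bufferless loss rate E[(L-C)+]/E[L] when the combined rate L is
    peak times a Binomial(n, p_on) count of active on/off sources."""
    pmf = [comb(n, k) * p_on**k * (1 - p_on)**(n - k) for k in range(n + 1)]
    mean_rate = sum(k * peak * pmf[k] for k in range(n + 1))
    mean_excess = sum(max(k * peak - C, 0.0) * pmf[k] for k in range(n + 1))
    return mean_excess / mean_rate

# 100 sources at 1 Mbit/s peak, on 10% of the time, link rate 20 Mbit/s:
loss = fluid_loss_rate(100, 0.1, 1.0, 20.0)
```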
Efficiency of bufferless multiplexing
• small amplitude of rate variations ...
  • peak rate << link rate (eg, 1%)
• ... or low utilisation
  • overall mean rate << link rate
• we may have both in an integrated network
  • priority to streaming traffic
  • residue shared by elastic flows
Flow scale: admission control
• accept a new flow only if transparency is preserved, given
  • the flow traffic descriptor
  • the current link status
• no satisfactory solution for buffered multiplexing
  • (we do not consider deterministic guarantees)
  • unpredictable statistical performance
• measurement-based control for bufferless multiplexing
  • given the flow peak rate
  • and the current measured rate (instantaneous rate, mean, variance, ...)
• an uncritical decision threshold if streaming traffic is light
  • as in an integrated network
Provisioning for negligible blocking
• "classical" teletraffic theory; assume
  • Poisson flow arrivals, rate λ
  • constant rate per flow r
  • mean duration 1/μ
• mean demand A = (λ/μ) r bit/s
• blocking probability for capacity C
  • B = E(C/r, A/r)
• E(m,a) is Erlang's formula:
  • E(m,a) = (a^m / m!) / Σ_{k=0..m} (a^k / k!)
• scale economies
• generalizations exist:
  • for different rates
  • for variable rates
[figure: utilization a/m achievable at E(m,a) = 0.01, vs m = 0..100]
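Erlang's formula is usually evaluated with the stable recursion E(k,a) = a E(k-1,a) / (k + a E(k-1,a)). The sketch below also exposes the scale economies by finding, for each m, the offered load that keeps blocking at 1% (the bisection helper is our own scaffolding, not part of the talk).

```python
def erlang_b(m, a):
    """Erlang's blocking formula E(m, a), computed with the stable
    recursion E(0,a) = 1, E(k,a) = a*E(k-1,a) / (k + a*E(k-1,a))."""
    e = 1.0
    for k in range(1, m + 1):
        e = a * e / (k + a * e)
    return e

def max_load(m, target=0.01):
    """Largest offered load a (erlangs) with E(m,a) <= target, by bisection."""
    lo, hi = 0.0, 2.0 * m
    for _ in range(60):
        mid = (lo + hi) / 2
        if erlang_b(m, mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

# achievable utilization a/m at 1% blocking grows with m (scale economies):
# [round(max_load(m) / m, 2) for m in (10, 50, 100)]
```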
Outline
• traffic characteristics
• QoS engineering for streaming flows
• QoS engineering for elastic traffic
• service differentiation
Closed loop control for elastic traffic
• reactive control
  • end-to-end protocols (eg, TCP)
  • queue management
• time scale decomposition for performance analysis
  • packet scale
  • flow scale
Packet scale: bandwidth and loss rate
• a multi-fractal arrival process
• but loss and bandwidth are related by TCP (cf. Padhye et al.)
• thus p = B^{-1}(bandwidth share): ie, the loss rate depends on the bandwidth share
[figure: congestion-avoidance throughput B(p) vs loss rate p]
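Padhye et al. derive a detailed throughput formula; the sketch below uses only the simplified square-root law B(p) = (MSS/RTT) sqrt(3/(2p)), with illustrative MSS and RTT values, to show the inversion p = B^{-1}(bandwidth share).

```python
from math import sqrt

def tcp_throughput(p, mss=1460.0, rtt=0.1):
    """Simplified congestion-avoidance throughput law (bytes/s):
    B(p) = (MSS/RTT) * sqrt(3/(2p)). The full Padhye et al. formula
    additionally accounts for timeouts and receiver window limits."""
    return (mss / rtt) * sqrt(3.0 / (2.0 * p))

def loss_for_share(bw, mss=1460.0, rtt=0.1):
    """Invert B(p): the loss rate a TCP flow must see to hold bandwidth
    share bw, i.e. p = B^{-1}(bw) = 1.5 * (MSS/(RTT*bw))^2."""
    return 1.5 * (mss / (rtt * bw)) ** 2

p = loss_for_share(1e6)   # loss rate consistent with a ~1 Mbyte/s share
# tcp_throughput(p) recovers the target share, illustrating the fixed point
```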
Packet scale: bandwidth sharing
• reactive control (TCP, scheduling) shares bottleneck bandwidth unequally
  • depending on RTT, protocol implementation, etc.
  • and differentiated services parameters
• optimal sharing in a network: objectives and algorithms ...
  • max-min fairness, proportional fairness, maximal utility, ...
• ... but response time depends more on the traffic process than on the static sharing algorithm!
[figure: example, a linear network with routes 0, 1, ..., L]
Flow scale: performance of a bottleneck link
• assume perfect fair shares
  • link rate C, n elastic flows
  • each flow served at rate C/n
• assume Poisson flow arrivals
  • an M/G/1 processor sharing queue
  • load ρ = arrival rate x mean size / C
[figure: fair sharing of link capacity C, modelled as a processor sharing queue]
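The M/G/1 processor-sharing model gives a closed-form expected response time that is insensitive to the flow-size distribution beyond its mean; a minimal sketch with illustrative numbers.

```python
def ps_response_time(size_bits, C, load):
    """Expected response time of a flow of the given size in an M/G/1
    processor-sharing queue: E[T | size] = size / (C * (1 - load)).
    Insensitive to the size distribution beyond its mean."""
    assert 0 <= load < 1, "queue is unstable for load >= 1"
    return size_bits / (C * (1 - load))

# a 1 Mbit document on a 10 Mbit/s link at 80% load takes 0.5 s on
# average, versus 0.1 s on an empty link: sharing stretches transfers
# by the factor 1/(1 - load)
t = ps_response_time(1e6, 10e6, 0.8)
```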