Integrated Dynamic Congestion Controller Andreas Pitsillides, Petros Ioannou, Marios Lestas, Loukas Rossides
A recent remark
‘Networks are very complex. Do not kid yourselves otherwise.’
Debasis Mitra, Senior VP Research, Bell Labs
Panel discussion at Infocom 2001 (Organiser: Ariel Orda) on Modelling of the Shrew (beast): Quest for a ‘Model’ Network Model
Congestion Control Problem
• Generally accepted that:
  • Network congestion control remains a critical issue, given the growing size, demand and speed (bandwidth) of increasingly integrated-services networks.
  • Despite research efforts spanning a few decades and a large number of proposed schemes, there is no universally accepted scheme.
• Current solutions in existing networks:
  • Are developed using intuition, based on simple non-linear schemes.
  • Exhibit poor performance (oscillatory or chaotic behaviour, unfairness).
  • Become increasingly ineffective, especially as non-elastic (multimedia) applications are massively deployed.
  • Cannot easily scale up.
Potential Problems of Congestion Control
• Large scale and complexity.
• Distributed nature.
• Large feedback delays (increasing bandwidth-delay products).
• Diverse nature of carried traffic.
• Diverse nature of application requirements.
• Unpredictable and time-varying user behaviour.
• Lack of appropriate dynamic models: although small-scale dynamics are easy to describe mathematically, large-scale modelling is difficult.
• Expectation of guaranteed quality of service.
• Need to provide fairness without dramatically increasing the computational cost.
Existing Approaches to Congestion Control
• It is clear that existing TCP congestion avoidance mechanisms (RFC 2001), while necessary and powerful, are not sufficient.
  • Basically, there is a limit to how much control can be accomplished from the edges of the network.
  • Some mechanisms are needed in routers to complement endpoint congestion avoidance mechanisms. (The need for gateway control was realised early; e.g. see [Jacobson, 1988], where gateway-side control is advocated as necessary future work; RED, …)
• Evolutionary view: in TCP/IP, and earlier in ATM, we see a progressive shift of controls from the edges of the network (initially open loop, then edge binary feedback) to network-assisted control. Feedback is also shifting from implicit to explicit, and from pure binary to multivalued and explicit.
• TCP itself, as a deterministic process, creates chaos, which generates self-similarity ("The Chaotic Nature of TCP Congestion Control", Andras Veres, Infocom 2000 best paper award).
The Chaotic Nature of TCP Congestion Control (Andras Veres, Infocom 2000 best paper award)
• [Figure: the congestion window processes of two competing TCP sources.]
• The results support that the traffic of a single-TCP microflow is consistent with asymptotic second-order self-similarity with H > 0.5.
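For reference, one standard way to state this self-similarity claim (the definition below is background, not taken from the original slide) is through the variance of the aggregated process:

```latex
% X^{(m)} denotes the process averaged over non-overlapping blocks of size m.
\operatorname{Var}\!\big(X^{(m)}\big) \;\sim\; c\, m^{2H-2}
\quad \text{as } m \to \infty,
\qquad \tfrac{1}{2} < H < 1
```

A Hurst parameter H > 0.5 therefore indicates long-range dependence, in contrast to H = 0.5 for Poisson-like (short-range dependent) traffic.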
Need for modelling
• Now a hot topic: for example, Infocom 2001 devoted a panel discussion to "Modelling of the Shrew (beast): Quest for a ‘Model’ Network Model".
• We have to look at wider networks now: packets are too microscopic, coarser-grained models are needed; hence the resurgence of fluid models.
• Modelling for control has only just entered the debate.
• Other measures of network control performance are starting to be discussed: delay, loss and throughput are not enough; we must also look at, e.g., robustness, complexity, …
Integrated Dynamic Congestion Controller (Our Proposal)
• A generic scheme for handling multiple differentiated classes of traffic.
• The dynamic fluid flow model used is developed from packet flow conservation considerations and by matching the behaviour of an M/M/1 queue at equilibrium.
• The control strategy is model-based dynamic feedback linearization, with proportional plus integral action and adaptation.
• The methodology is general and independent of technology (for example TCP/IP or ATM).
• The proposed IDCC algorithm can be classified as network-assisted congestion control and uses queue length information for feedback.
Differentiated Services
• Following the same spirit adopted by the IETF Diff-Serv group, we define classes of aggregate behaviour:
  • Premium Traffic Service (Expedited Forwarding): designed for non-elastic applications with stringent per-packet delay and loss requirements, which can specify upper bounds on their traffic needs and required quality of service. Any regulation of this type of traffic has to be achieved at the connection phase (admission control; a sketch of such a check is given after this slide).
  • Ordinary Traffic Service (Assured Forwarding): intended for elastic applications that have relaxed delay requirements and allow their rate into the network to be controlled.
  • Best Effort Traffic Service: opportunistically uses any instantaneous leftover capacity from both the Premium and Ordinary Traffic Services.
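A minimal sketch of the connection-phase regulation mentioned for the Premium class, assuming (hypothetically) that each Premium flow declares a peak rate and is admitted only if the declared total stays within the maximum Premium capacity. The variable names and the peak-rate criterion are illustrative assumptions, not the paper's admission control algorithm.

```python
# Illustrative peak-rate admission check for Premium (EF-like) traffic.
# 'admitted_peak_rates' and 'C_p_max' are hypothetical bookkeeping variables.

def admit_premium_flow(declared_peak_rate, admitted_peak_rates, C_p_max):
    """Admit a new Premium flow only if the total declared peak rate
    still fits within the maximum capacity reserved for Premium traffic."""
    if sum(admitted_peak_rates) + declared_peak_rate <= C_p_max:
        admitted_peak_rates.append(declared_peak_rate)
        return True   # admitted; the flow may send up to its declared bound
    return False      # rejected at the connection phase
```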
Implementation of Control Strategy
• The capacity of the Premium Traffic is dynamically allocated, up to a maximum. The algorithm uses the error (queue size − reference queue size) as the feedback signal to calculate the capacity to be allocated. The sampling interval Ts is a design parameter.
• The Ordinary Traffic controller takes the leftover capacity into account, uses the error between the queue length and its reference value as the feedback signal, and updates the common rate allocated to the Ordinary Traffic users.
• The Best Effort Traffic Service operates at the packet/cell scale and uses any instantaneous leftover capacity: in the absence of packets in the server buffer it allows a transmission. This process is however packet-size dependent, and care should be taken to account for this.
(A sketch of this per-interval control loop is given after this slide.)
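A minimal sketch of the per-interval control loop described above. The function names (premium_capacity, ordinary_rate), the projection bounds and the interface are illustrative assumptions, not the actual IDCC implementation; the specific control laws are given on later slides.

```python
# Illustrative sketch of the IDCC per-interval control loop (hypothetical names).
# Executed once every sampling interval Ts at the output port of a switch.

def control_step(x_p, x_o, x_p_ref, x_o_ref, C_server, C_p_max,
                 premium_capacity, ordinary_rate):
    """One IDCC control step (sketch).

    x_p, x_o         : measured Premium / Ordinary queue lengths
    x_p_ref, x_o_ref : reference (target) queue lengths
    C_server         : total link capacity
    C_p_max          : maximum capacity that may be given to Premium traffic
    premium_capacity, ordinary_rate : the control laws (see later slides)
    """
    # Premium: allocate capacity from the queue-length error, capped at C_p_max.
    e_p = x_p - x_p_ref
    C_p = min(max(premium_capacity(e_p, x_p), 0.0), C_p_max)

    # Ordinary: whatever capacity is left over serves the Ordinary queue;
    # a common allowed rate is computed and fed back to the Ordinary sources.
    C_o = C_server - C_p
    e_o = x_o - x_o_ref
    allowed_rate = max(ordinary_rate(e_o, x_o, C_o), 0.0)

    # Best Effort: no explicit allocation; it transmits only when the server
    # buffer is momentarily empty (handled at the packet/cell time scale).
    return C_p, allowed_rate
```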
Dynamic Model of a Single Queue
• Assumptions:
  • The link has a FIFO discipline and a common buffer.
  • Packets arrive according to a Poisson process.
  • Packet transmission time is proportional to packet length.
  • Packet lengths are exponentially distributed with mean 1.
• The model follows from flow conservation, with the server utilisation chosen so that the equilibrium queue length matches the M/M/1 queueing formula (see the reconstruction below).
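A reconstruction of the fluid flow model these assumptions lead to; the notation x (queue length), λ (arrival rate) and C (link capacity) is ours, since the original slide's equations are not reproduced here.

```latex
% Flow conservation: the queue grows at the arrival rate minus the departure rate,
% where the departure rate is the capacity times the utilisation G(x).
\dot{x}(t) = \lambda(t) - C(t)\,G\!\big(x(t)\big)

% Matching the M/M/1 equilibrium  x = \rho/(1-\rho), i.e. \rho = x/(1+x),
% gives  G(x) = x/(1+x), so the single-queue model becomes
\dot{x}(t) = -\,C(t)\,\frac{x(t)}{1 + x(t)} + \lambda(t)
```

Here x(t) is interpreted as the ensemble-average queue length, and the model reduces to the M/M/1 steady-state relation when ẋ = 0 with constant λ and C.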
Validity of the Model • Time evolution of the network system queue state obtained using OPNET simulation (broken line) and solution of fluid flow model (solid line). There is reasonable agreement.
Integrated Congestion Control Strategy (Premium Traffic)
• The allocated capacity is calculated from the Premium queue length and its error with respect to the reference, using a feedback-linearizing control law with adaptation (a reconstruction is sketched below), where:
  • Pr[.] is a projection operator (keeping the signals within their admissible ranges).
  • a and δ are design constants affecting the convergence rate and performance.
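A plausible reconstruction of such a feedback-linearizing law for the Premium class, built on the fluid model above. The exact form of the original equations is not reproduced; in particular, the estimator update shown is an assumption consistent with "proportional action plus adaptation".

```latex
% Premium queue dynamics (from the fluid model), with x_p the queue length,
% \lambda_p the (unknown) Premium arrival rate and C_p the allocated capacity:
\dot{x}_p = -\,C_p\,\frac{x_p}{1 + x_p} + \lambda_p

% Feedback-linearizing capacity allocation: cancel the nonlinearity and impose
% first-order error dynamics with rate a > 0, using an estimate \hat{\lambda}_p:
C_p(t) = \Pr\!\left[\, \frac{1 + x_p}{x_p}\Big( \hat{\lambda}_p(t) + a\,\big(x_p(t) - x_p^{\mathrm{ref}}\big) \Big) \right]

% Adaptation (assumed form): drive the estimate from the same queue error,
% with \delta > 0 setting the adaptation speed:
\dot{\hat{\lambda}}_p(t) = \Pr\!\left[\, \delta\,\big(x_p(t) - x_p^{\mathrm{ref}}\big) \right]
```

When the projection is inactive, substituting C_p into the queue dynamics gives ẋ_p = −a (x_p − x_p^ref) + (λ_p − λ̂_p), so the queue error decays while the estimator tracks the arrival rate.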
Integrated Congestion Control Strategy (Ordinary Traffic)
• The capacity allocated to the outgoing Ordinary Traffic is what is left over after the allocation to the Premium Traffic.
• Using feedback linearization, we choose the controlled traffic input rate (the common rate fed back to the Ordinary sources) so that the Ordinary queue tracks its reference; a reconstruction is sketched below. The law involves a single design constant setting the convergence rate.
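A plausible reconstruction of the Ordinary Traffic rate law under the same fluid model; the symbols (x_o, C_o, C_server, the gain k) are ours and the exact original equation is not reproduced.

```latex
% Leftover capacity serves the Ordinary queue (C_server is the total link capacity):
C_o(t) = C_{\mathrm{server}} - C_p(t)

% Ordinary queue dynamics with controlled aggregate input rate \lambda_o:
\dot{x}_o = -\,C_o\,\frac{x_o}{1 + x_o} + \lambda_o

% Feedback-linearizing choice of the common input rate, with design constant k > 0:
\lambda_o(t) = \Pr\!\left[\, C_o(t)\,\frac{x_o(t)}{1 + x_o(t)} \;-\; k\,\big(x_o(t) - x_o^{\mathrm{ref}}\big) \right]
```

When the projection is inactive this yields ẋ_o = −k (x_o − x_o^ref), i.e. the Ordinary queue length converges exponentially to its reference (proportional action only, consistent with the steady-state offset noted in the evaluation).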
Simulation Topology
• Simulation network model:
  • Different link delays were chosen to emulate LAN and WAN conditions (WAN RTT = 120 msec).
  • Sources are on-off (the period is adjusted to simulate different loading conditions).
  • VBR and CBR sources were also used.
  • Different sampling periods (0.085-1 msec) were also used.
Simulation results
• Switch 2 (bottleneck link): time evolution of the Ordinary Traffic queue length for a LAN and a WAN, with changing reference queue values and different sampling times (Ts = 0.085 - 1 msec).
Switch 2 (bottleneck link): time evolution of the Premium Traffic queue length for a LAN and a WAN, with changing reference queue values.
Typical behaviour of the time evolution of the common calculated allowed cell rate at Switch 2 for a LAN and a WAN configuration.
Typical behaviour of the time evolution of the transmission rate of controlled sources using Switch 2 for LAN and WAN configurations.
Controller Evaluation
• The system exhibits good transient behaviour:
  • A fast speed of response is observed in all scenarios.
  • There are no high overshoots or undershoots.
  • No cyclic behaviour is observed.
• Reasonable insensitivity to different control periods was exhibited.
• There is not much degradation in performance due to longer feedback delays.
• In the case of Ordinary Traffic, a sizeable offset in the queue length was observed for each reference setting. This can be avoided by introducing integral action; this, however, might add unnecessary complexity.
Fairness in Ordinary Traffic
• The network test configuration used to investigate whether the algorithm provides fairness among Ordinary Traffic sources is shown below. All sources are assumed to be saturated.
Fairness in Ordinary Traffic
• The bandwidth allocation of the Ordinary Traffic sources for LAN and WAN environments is shown below. All sources are dynamically allocated their fair share at all times, with no discrimination (for example due to their geographic proximity to the bottleneck switch).
Future Work
• Investigate how this scheme interacts with TCP:
  • The scheme will be tested in the ns2 simulator in a Diff-Serv environment with other interacting TCP sources.
  • Will it be TCP friendly?
  • TCP applications using existing congestion controls will probably induce high-frequency components in the queue size of the congested switch. These need to be filtered out to avoid oscillations.
Future Work (cont.)
• Investigate how the network and the sources use the calculated aggregate sending rate to send the right number of packets:
  • What type of feedback signal is used (implicit or explicit)? Explicit signals add to the packet overhead.
  • How do the sources react to this information? Is a rate-based or a window-based protocol used? In the latter case, how is the conversion made? (See the relation after this slide.)
  • The aggregate sending rate will probably need to be normalised to avoid oscillations and underutilisation of the network. Such oscillations were actually observed in the WAN test environment.
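On the rate-to-window conversion question: the usual first-order relation, stated here as background rather than as the adopted design, is that a window of W outstanding segments of size MSS over a round-trip time RTT corresponds to a sending rate of roughly

```latex
\lambda \;\approx\; \frac{W \cdot \mathrm{MSS}}{\mathrm{RTT}}
\qquad\Longleftrightarrow\qquad
W \;\approx\; \frac{\lambda \cdot \mathrm{RTT}}{\mathrm{MSS}}
```

so a window-based source could translate the advertised aggregate rate into a window using its current RTT estimate.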
Future Work (cont.)
• Investigate whether other models are more appropriate in some scenarios:
  • Higher-order models can be evaluated.
  • The suggested model is based on Poisson arrivals; however, Internet traffic has been shown sometimes to exhibit self-similar behaviour, which makes the Poisson assumption inapplicable.
• Choose the optimal value of the sampling frequency:
  • Higher sampling frequencies provide better control.
  • Lower sampling frequencies reduce the computational cost.
  • The optimal choice is directly related to the feedback delay and the settling time.
• Calculate the controller design parameters to optimise the system performance.
University of Cyprus (UCY), Computer Science Department, Networks Group
• Faculty:
  • Associate Prof. Andreas Pitsillides.
• Students:
  • Mr. Loukas Rossides.
  • Mr. Giannos Mylonas.
  • Mr. Chrysostomos Chrysostomou.
  • Mr. Marios Lestas.
• Main collaborators:
  • Prof. Petros Ioannou (University of Southern California).
  • Ahmet Sekercioglu (Monash University, Melbourne, Australia).
  • Athanasios Vasilakos (ICS-FORTH, Greece).
Congestion
• Definition: a network state in which performance degrades due to saturation of network resources (communication links, network switches, processor cycles and memory buffers).
• Congestion control: the set of actions taken by the network to minimise the intensity, spread and duration of congestion.
• Optimal control of networks of queues: a well-known, much-studied and notoriously difficult problem, even for the simplest of cases. E.g. Papadimitriou and Tsitsiklis show that the problem of optimally controlling a simple network of queues with simple arrival and service distributions and multiple customer classes is complete for exponential time (i.e. provably intractable).
Classification of Congestion Control Schemes
• End-to-end vs. hop-by-hop (where are the control actions taken?).
• Closed loop vs. open loop.
• Window-based vs. rate-based control.
• Explicit vs. implicit (what type of feedback signals are used?).
• Reactive vs. preventive.
[Diagram: end-to-end level and hop level, each shown as a controller acting on a system.]
Congestion can be Sensed or Predicted by:
• Packet loss:
  • Sensed by the queue as an overflow.
  • Sensed by the destination (via sequence numbers) and acknowledged to the user.
  • Sensed by the sender due to lack of acknowledgement (timeout mechanism; see the sketch after this list).
• Packet delay:
  • Inferred from the queue size.
  • Observed by the destination and acknowledged to the user (e.g. time stamps).
  • Observed by the sender, e.g. by a packet probe measuring the RTT.
• Loss of throughput:
  • Observed from the sender queue size (waiting time in the queue).
• Other signals:
  • Queue size.
  • Available bandwidth.
  • Mark rate.
  • Calculated signals.
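As an illustration of the sender-side signals listed above (lack of acknowledgement and RTT probing), here is a sketch of a standard retransmission-timeout estimator in the spirit of RFC 6298. It is background on how a TCP-like sender infers congestion implicitly, not part of the IDCC scheme; the class name and interface are illustrative.

```python
# Standard-style RTO estimator: the sender smooths RTT samples and infers
# (possible) congestion when an ACK does not arrive within the timeout.

class RtoEstimator:
    ALPHA, BETA = 1 / 8, 1 / 4        # gains as recommended in RFC 6298

    def __init__(self):
        self.srtt = None               # smoothed RTT
        self.rttvar = None             # RTT variation

    def update(self, rtt_sample):
        """Feed one measured RTT sample and return the updated timeout."""
        if self.srtt is None:          # first measurement
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt_sample)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt_sample
        return self.rto()

    def rto(self):
        # If no ACK arrives within this interval, the sender assumes loss/congestion.
        return self.srtt + 4 * self.rttvar
```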