Off-Piste QoS-aware Routing Protocol By Yigal Eliaspur
Problem Background • The main challenge in QoS routing is to be able to respond to online requests with • Reasonable response time • Minimal network overhead (messages, memory, processing time) • Minimal probability of blocking (request failure). • A typical request is to reserve a certain resource along a path from a transaction source to a transaction destination. • The resource may be an additive resource (e.g. delay) and/or a non-additive resource (e.g. bandwidth). • The request may apply to unicast or multicast traffic.
Available Solutions • The solutions today can be partitioned into three broad classes: • Source routing algorithms • Transform a distributed problem into a centralized one. • Maintain in each node a complete global state of the dynamic network resources. • Distributed routing algorithms • The path computation is distributed among the intermediate nodes between the source and the destination. • Single-path search – usually assumes global state in each node. • Multi-path search (flooding) – uses only local state in each node. • Hierarchical routing algorithms • Each node maintains only a partial global state • To cope with the scalability problem of global state in large internetworks.
Related Work • Our OPsAR protocol (Off-Piste QoS-aware Routing) belongs to the family of multi-path, distributed routing algorithms. • Other works related to this family are: • Selective flooding – the multi-path search is performed only on pre-computed routes. • Ticket-based probing – every probing (search) message must carry at least one ticket, so the total number of tickets limits the multi-path search. • QMRP and S-QMRP (Scalable Distributed QoS Multicast Routing Protocol): • The unicast route toward the destination is checked first. • If that fails, a selective scanning mechanism is applied. • The scanning is controlled by Maximum Branching Degree and Maximum Branching Level parameters. • QoSMIC – best suited to a multicast environment, since it looks for a point on a multicast tree to "hook" the new receiver onto.
The OPsAR Protocol • The main motivation of OPsAR is to improve the tradeoff between the overhead of the protocol and the success ratio it produces. • In OPsAR, a node keeps track of recent QoS messages to learn about resource availability to and from various target points. • The learning is reflected in the node's "knowledge-state". • Efficient path selection is done by leveraging the knowledge-state at the nodes. • The OPsAR protocol is built of 3 main stages: • Try phase – a single-path search and reservation is applied from the transaction source to the transaction destination. • Scan phase – a multi-path search without reservation is applied from the transaction destination back to the source. • Try 2 phase – the scan-phase results are evaluated and the best candidate path is reserved from the transaction source to the destination.
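The three stages above compose into a simple control flow. This is a hypothetical sketch, not the OPsAR specification: each phase function is a placeholder, and only the ordering (Try, then Scan, then Try 2) is taken from the slides.

```python
# Hypothetical control-flow sketch of an OPsAR transaction.
# The three phase functions are placeholders supplied by the caller.

def opsar_transaction(try_phase, scan_phase, try2_phase, src, dst, bw):
    path = try_phase(src, dst, bw)
    if path is not None:
        return path                        # single-path search succeeded
    candidates = scan_phase(dst, src, bw)  # multi-path search, no reservation
    if not candidates:
        return None                        # transaction blocked
    return try2_phase(src, dst, bw, candidates)
```

A usage example with stub phases: if the Try phase fails, the scan results feed the Try 2 phase, which reserves the chosen route.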
Try Phase • A path search from the transaction source toward the transaction target. • The Try phase follows the shortest path as long as it has the required resources. • A deviation from the shortest path takes an "off-piste" route that leverages the knowledge-state to optimize the routing protocol. • The deviation from the shortest path is bounded. • If resources cannot be reserved within that bound, the resources that have already been reserved are released, and a request is sent to the transaction target to begin the Scan phase.
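A minimal sketch of the Try phase under assumed data structures (a bandwidth-labeled adjacency dict and a precomputed next-hop table); the greedy widest-link detour rule and all names are illustrative, not taken from the OPsAR paper.

```python
# Sketch of the Try phase: follow the shortest path while its links have the
# requested bandwidth, allowing a bounded number of off-piste deviations.

def try_phase(graph, shortest_next_hop, src, dst, bw, offpiste_limit):
    """Return the reserved path, or None if no path exists within the bound.

    graph: dict node -> {neighbor: available bandwidth}
    shortest_next_hop: dict node -> next hop on the unicast route to dst
    """
    path, node, deviations = [src], src, 0
    while node != dst:
        preferred = shortest_next_hop[node]
        if graph[node].get(preferred, 0) >= bw:
            nxt = preferred                  # stay on the shortest path
        else:
            deviations += 1                  # take an off-piste detour
            if deviations > offpiste_limit:
                return None                  # bound exceeded: trigger Scan phase
            candidates = [n for n, b in graph[node].items()
                          if b >= bw and n not in path]
            if not candidates:
                return None
            nxt = max(candidates, key=lambda n: graph[node][n])  # widest link
        graph[node][nxt] -= bw               # reserve hop by hop
        path.append(nxt)
        node = nxt
    return path
```

With an off-piste limit of 0 this degenerates to a plain RSVP-style reservation along the unicast route.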
Scan Phase • The scan process is based on a limited Breadth-First Search (BFS) from the transaction target toward the transaction source. • We neither reserve resources in the Scan phase nor keep any state that relates to the specific scan. • As in the Try phase, the scanning process takes advantage of the knowledge-state to optimize the search. • The branching is limited by: • A ticketing scheme to bound the total number of paths. • A maximum branching degree (MBD) at each node, in order to increase the variety of potential paths to traverse. • An off-piste counter to limit the distance from the shortest path, similar to the Try phase. • A branch is terminated during the scanning process if: • The off-piste limit is reached and the unicast route does not have the resources, • Or no outgoing link has the requested resources.
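The ticket and MBD limits can be sketched as follows. This is an assumed simplification (no off-piste counter, tickets split evenly among branches); the real protocol's ticket-splitting policy is not specified on the slide.

```python
# Sketch of a ticket- and MBD-limited search from the target back to the
# source. No resources are reserved; feasible paths are only collected.

def scan_phase(graph, target, source, bw, tickets, mbd):
    """Return the feasible paths (target -> source) found by the scan."""
    results = []
    stack = [([target], tickets)]
    while stack:
        path, budget = stack.pop()
        node = path[-1]
        if node == source:
            results.append(path)
            continue
        candidates = [n for n, b in graph[node].items()
                      if b >= bw and n not in path]
        candidates = candidates[:min(mbd, budget)]  # MBD and tickets both cap branching
        if not candidates:
            continue                                # branch terminated
        # Split the remaining tickets among the branches (each gets >= 1).
        share, extra = divmod(budget, len(candidates))
        for i, n in enumerate(candidates):
            stack.append((path + [n], share + (1 if i < extra else 0)))
    return results
```

With a single ticket the scan degenerates to one probe; the total number of concurrent branches never exceeds the ticket count.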
Try 2 Phase • If the transaction source receives several successful scan messages, it initiates the Try 2 phase. • It chooses the "best" route from the successful scan messages and asks to reserve the resources along that path. • If a reservation failure along the explicit route is detected, • OPsAR tries to route the reservation request message to the transaction destination using alternative routes that the off-piste mechanism offers. • If that fails, a nack message is returned to the transaction source, indicating the need to choose another explicit route from the previous scan results.
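The slide does not define what "best" means; a plausible choice, shown here purely as an assumption, is a widest-shortest rule over the scan results.

```python
# Assumed "best route" selection for the Try 2 phase: prefer the shortest
# path, breaking ties by the largest bottleneck bandwidth the scan reported.
# The record layout ('path', 'bottleneck_bw') is invented for illustration.

def choose_best(scan_results):
    return min(scan_results,
               key=lambda r: (len(r['path']), -r['bottleneck_bw']))
```

Under this rule a 3-hop route with 45 Mb/s of headroom beats both a longer route and an equally short but narrower one.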
Knowledge State - definition • Each node maintains • A local state in which it holds its links' status and the resource availability on them. • A bounded list of records • <target node, outgoing link> • For each record, the resource availability is maintained with respect to that outgoing link: • Max BW toward the target node. • Max BW from the target node. • This information is updated occasionally and is time-stamped to identify its last update. • This time is used by the aging mechanism.
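The record structure above might look like the following sketch. The field names, the oldest-first eviction policy, and the age-out value are all assumptions; the slides only state that the list is bounded and time-stamped.

```python
# Minimal sketch of the per-node knowledge-state table: a bounded map from
# (target node, outgoing link) to bandwidth estimates with an update time.

class KnowledgeState:
    def __init__(self, max_records=1000, max_age=60.0):
        self.records = {}            # (target, outgoing_link) -> record
        self.max_records = max_records
        self.max_age = max_age       # seconds before a record ages out (assumed)

    def update(self, target, link, bw_to, bw_from, now):
        if len(self.records) >= self.max_records and (target, link) not in self.records:
            # Bounded list: evict the oldest record to make room (assumed policy).
            oldest = min(self.records, key=lambda k: self.records[k]['updated'])
            del self.records[oldest]
        self.records[(target, link)] = {
            'max_bw_to': bw_to,      # Max BW toward the target node
            'max_bw_from': bw_from,  # Max BW from the target node
            'updated': now,          # timestamp used by the aging mechanism
        }

    def lookup(self, target, link, now):
        rec = self.records.get((target, link))
        if rec is None or now - rec['updated'] > self.max_age:
            return None              # missing or aged out
        return rec
```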
Knowledge State - usage • Any OPsAR protocol message traversing a node is used to update the knowledge-state (KS). • Each OPsAR protocol message includes the following relevant fields: • Max BW To Origin • Max BW From Origin • There are three main operations the KS is involved in: • KS record creation/update • OPsAR message fields update • Routing decision • The choice is made according to the resource availability along the various links toward the target, and according to how recent that information is. • This is based on three levels of outgoing links (neighbors), maintained per target node: • Fresh • Stale • Old
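The three-level routing decision might be sketched as below. The slide names only the Fresh/Stale/Old levels; the age thresholds and the tie-break by reported bandwidth are invented for illustration.

```python
# Sketch of the Fresh / Stale / Old classification and of a routing decision
# that prefers fresher KS records. Thresholds are assumed values.

FRESH_AGE, STALE_AGE = 10.0, 60.0   # seconds (illustrative)

def classify(record, now):
    age = now - record['updated']
    if age <= FRESH_AGE:
        return 'fresh'
    if age <= STALE_AGE:
        return 'stale'
    return 'old'

def pick_next_hop(ks_records, bw, now):
    """Prefer fresh links that reported enough bandwidth, then stale, then old."""
    rank = {'fresh': 0, 'stale': 1, 'old': 2}
    feasible = [(link, rec) for link, rec in ks_records.items()
                if rec['max_bw_to'] >= bw]
    if not feasible:
        return None
    # Among feasible links: freshest level first, then highest reported bandwidth.
    return min(feasible,
               key=lambda lr: (rank[classify(lr[1], now)], -lr[1]['max_bw_to']))[0]
```

Note that a fresh link with modest bandwidth beats an old link that once reported more: recency outranks magnitude in this sketch.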
Simulation Model • NS2 simulator • Power-law network topology • As the node degree increases, the number of nodes with that degree decreases exponentially. • Used the topology generator described in Osnat's work (On the tomography of networks and multicast trees). • The generator was extended to support BW allocation. • The bandwidth on the links was uniformly distributed over {10, 34, 45, 100} Mb/s. • In order to make sure that congestion would first occur in the core network, we reassigned the bandwidth of the endpoints to 1000 Mb/s. • We also conducted tests with hierarchical bandwidth assignment chosen from {10 Mb/s, 100 Mb/s, 1 Gb/s, 10 Gb/s}. • This backbone/metro type of over-provisioned BW allocation showed almost no congestion for BW reservation requests. • Therefore, the topologies simulated were only large edge networks and ISP-like networks.
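The link-bandwidth assignment described above can be reproduced with a few lines; the function and set names are illustrative, not from the simulation code.

```python
import random

# Illustrative sketch of the described bandwidth assignment: core links drawn
# uniformly from {10, 34, 45, 100} Mb/s, endpoint links overridden to
# 1000 Mb/s so that congestion occurs in the core first.

CORE_BW = [10, 34, 45, 100]   # Mb/s

def assign_bandwidth(links, edge_nodes, rng=None):
    rng = rng or random.Random(0)
    bw = {}
    for u, v in links:
        if u in edge_nodes or v in edge_nodes:
            bw[(u, v)] = 1000                 # over-provisioned endpoint link
        else:
            bw[(u, v)] = rng.choice(CORE_BW)  # uniform over the core set
    return bw
```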
Simulation Basis • 600 nodes were used. • Transaction endpoints were chosen out of 120 edge nodes. • Most of the graphs are the result of 10,000 transactions performed on six different generated topologies. • We ran each simulation with 5 different protocol types: • Traditional RSVP – allocates the QoS requirement along the unicast route toward the transaction destination. • S-QMRP* – an adaptation of S-QMRP to unicast routing • Basically the same as OPsAR, but without KS and off-piste counter support. • S-QMRP*D – S-QMRP* with off-piste counter support. • OPsAR • OPT – implemented as a BFS which finds the shortest path that fulfills the bandwidth QoS requirements.
Simulation Basis (cont.) • We performed the following simulations and evaluated their relationship to the reservation success ratio: • Message overhead • Memory usage • Amount of concurrent transactions • Number of edge nodes • Number of destination nodes • The cost and performance gain of using the Try & Scan phases • Gradual deployment within the RSVP framework
Memory Usage vs. Success Ratio (cont.) • The amount of memory sufficient to achieve about 85% success ratio is very reasonable. • The memory is theoretically bounded by • The out-degree of a core node (or by the age-out threshold, which is 9 in our case) • Times the number of possible transaction-destination nodes. • In the largest simulation, this theoretical number was 160KB. • The average memory consumption was about 10% of the theoretical bound. • 60KB was the actual limit set in the simulation code.
Message Overhead vs. Success Ratio (cont.) • We studied all the possible parameter combinations within a specific range: • branching degree • scanning deviation • and number of tickets. • Each simulation result generated one point in the graph. • OPsAR vs. S-QMRP* • For the same amount of message overhead, OPsAR improves the success ratio by up to 30% over S-QMRP*. • OPsAR vs. RSVP • Increasing the overhead by five times yields about three times the success ratio. • Another point to consider is that the average path length is about 8 hops when deviation is allowed, versus 4 hops when deviation is forbidden (e.g. RSVP). • OPsAR vs. OPT • The overhead/success ratio of OPT is 20.6, while the overhead/success ratio of OPsAR at 200K messages is 29.8, about 45% more than OPT.
Number of Destination Nodes vs. Success Ratio (cont.) • We ran the simulation with a constant 25% of the nodes as edge nodes (as opposed to the 20% we usually used). • The number of candidate destination nodes varied from 1% up to the whole set of edge nodes (25%). • The candidate set of source nodes was always the whole set of edge nodes. • Only the links from those destination nodes were assigned a bandwidth capacity of 1000 Mb/s.
The Cost and Performance Gain of Using Try&Scan Phases (cont.) • The Scan phase uses extra time and messages beyond the Try phase. • Our simulations showed that the time to complete a Try followed by a Scan is three times the time it takes to complete the Try phase alone.
Gradual Deployment within RSVP Framework (cont.) • At first glance, there is no inherent limitation in the protocol that prohibits its use in an incremental manner. • The "RSVP only" routers were selected based on their distance from the core. • The edge routers have a better chance of being chosen as "RSVP only" routers. • From the learning-mechanism perspective, the available capacity of the links between "RSVP only" routers is ignored. • Future work can focus on deployment methods for the OPsAR protocol that maintain the gain obtained from the learning mechanism.
Future Work • Machine-learning improvements • The overall scheme of our protocol is an intelligent choice of routes out of a full Breadth-First Search (BFS). • Future research can focus on improving the educated choice of routes while limiting the memory overhead. • We expect to find ways to use machine-learning techniques to achieve that goal. • KS aggregation • Save memory by aggregating the information, using techniques like longest-prefix matching on transaction destinations. • Packet losses, link/node failures • Should be relatively easy to handle using timers and retries for messages, and using soft-state reservation. • Additive resources • Handling additive resources, like delay, requires minor changes to the protocols presented. • Tuning the KS parameters • Linearly increasing the fresh neighbor group did not increase the performance, and sometimes caused it to degrade. • Increasing the age-out threshold did not improve the performance either, even though it increases the total memory requirements. • Further research must be conducted in order to explore the interdependencies among the various variables of OPsAR, and to automatically learn and choose the optimal values, possibly using machine-learning techniques.