Routing
Murat Demirbas, SUNY Buffalo
Routing patterns in WSN Model: static, large-scale WSN • Convergecast: nodes forward their data to the base station over multiple hops; scenario: a monitoring application • Broadcast: the base station pushes data to all nodes in the WSN; scenario: network reprogramming • Data driven: nodes subscribe to data of interest to them; scenario: an operator queries nearby nodes for some data (similar to querying)
Outline • Convergecast • Routing tree • Grid routing • Reliable bursty broadcast • Broadcast • Flood, Flood-Gossip-Flood • Trickle • Polygonal broadcast, Fire-cracker • Data driven • Directed diffusion • Rumor routing
Routing tree • The most commonly used approach is to induce a spanning tree over the network • The root is the base station • Each node forwards data to its parent • In-network aggregation is possible at intermediate nodes • Initial construction of the tree is problematic: broadcast storm (recall “Complex Behavior at Scale”) • Link status changes non-deterministically • Snooping on nearby traffic to choose high-quality neighbors pays off (a sketch follows below) • Taming the Underlying Challenges of Reliable Multihop Routing • Trees are fragile: a change somewhere in the tree can trigger escalating changes in the rest (or leave a deformed structure)
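The snooping idea can be made concrete: estimate each neighbor's link quality from overheard traffic and pick the parent with the lowest end-to-end expected-transmission cost. Below is a minimal Python sketch assuming an ETX-like cost metric and an assumed EWMA weight; it is not the exact algorithm of the cited paper.

ALPHA = 0.1  # EWMA weight for link-quality updates (assumed value)

class NeighborTable:
    def __init__(self):
        self.quality = {}   # neighbor id -> estimated delivery ratio (0..1]
        self.adv_cost = {}  # neighbor id -> cost the neighbor advertises

    def snoop(self, nbr, received, advertised_cost):
        """Update estimates from an overheard (or missed) packet."""
        q = self.quality.get(nbr, 0.5)
        self.quality[nbr] = (1 - ALPHA) * q + ALPHA * (1.0 if received else 0.0)
        self.adv_cost[nbr] = advertised_cost

    def best_parent(self):
        """Pick the neighbor minimizing total expected transmissions."""
        best, best_cost = None, float("inf")
        for nbr, q in self.quality.items():
            if q <= 0.1:              # ignore very poor links
                continue
            cost = self.adv_cost.get(nbr, float("inf")) + 1.0 / q
            if cost < best_cost:
                best, best_cost = nbr, cost
        return best, best_cost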
Grid Routing Protocol • The protocol is simple: it requires each mote to send only one three-byte msg every T seconds • The protocol is reliable: it can overcome random msg loss and mote failures • Routing on the grid is stateless: the region perturbed by node failures is bounded to their local neighborhood
The Logical Grid • The motes are named as if they form an M×N logical grid • Each mote is named by a pair (i, j), where i = 0 .. M-1 and j = 0 .. N-1 • The network root is mote (0, 0) • Physical connectivity between motes is a superset of their connectivity in the logical grid [figure: a 3×2 logical grid of motes (0,0)–(2,1), shown next to a physical topology whose links include all logical-grid links]
Neighbors • Each mote (i, j) has • two low-neighbors (i-H, j) and (i, j-H) • two high-neighbors (i+H, j) and (i, j+H) • H is a positive integer called the tree hop • If a mote (i, j) receives a msg from any mote other than its low- and high-neighbors, (i, j) discards the msg [figure: mote (i, j) with its four neighbors]
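These rules translate directly into code. A minimal sketch, following the slide's definitions:

def low_neighbors(i, j, H):
    return [(i - H, j), (i, j - H)]

def high_neighbors(i, j, H):
    return [(i + H, j), (i, j + H)]

def accept_msg(i, j, sender, H):
    """A mote discards msgs from anyone but its low- and high-neighbors."""
    return sender in low_neighbors(i, j, H) + high_neighbors(i, j, H)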
Communication Pattern • Each mote (i, j) can send msgs whose ultimate destination is mote (0, 0) • The motes need to maintain an incoming spanning tree rooted at (0, 0): each mote maintains a pointer to its parent • When a mote (i, j) has a msg, it forwards the msg to its parent; this continues until the msg reaches mote (0, 0) [figure: example spanning tree with H = 2]
Choosing the Parent • Usually, each mote (i, j) chooses one of its low-neighbors (i-H, j) or (i, j-H) to be its parent • If both of its low-neighbors fail, then (i, j) chooses one of its high-neighbors (i+H, j) or (i, j+H) to be its parent; this is called an inversion • Example: there is one inversion at mote (2, 2) because the two low-neighbors of (2, 2) have failed [figure: grid with two failed motes, H = 2]
Inversion Count • Each mote (i, j) maintains • the id (x, y) of its parent, and • the value c of its inversion count: the number of inversions that occur along the tree path from (i, j) to (0, 0) • The inversion count c has an upper bound cmax • Example: [figure: each mote labeled with its parent id and inversion count, e.g. “(3,2), 1”; several motes failed; H = 2]
Protocol Message • If a mote (i, j) has a parent, then every T seconds it sends a msg with three fields: connected(i, j, c), where c is the inversion count of mote (i, j) • Otherwise, mote (i, j) does nothing • Every T seconds, mote (0, 0) sends a msg with three fields: connected(0, 0, 0)
Acquiring a Parent • Initially, every mote (i, j) has no parent. • When mote (i, j) has no parent and receives connected(x, y, e), (i, j) chooses (x, y) as its parent • if (x, y) is its low-neighbor, or • if (x, y) is its high-neighbor and e < cmax • When mote (i, j) receives a connected(x, y, e) and chooses (x, y) to be its parent, (i, j) computes its inversion count c as: • if (x, y) is low-neighbor, c := e • if (x, y) is high-neighbor, c := e + 1
Keeping the Parent • If mote (i, j) has a parent (x, y) and receives any connected(x, y, e) then (i, j) updates its inversion count c as: • if (x, y) is low-neighbor, c := e • if (x, y) is high-neighbor and e < cmax, c := e + 1 • if (x, y) is high-neighbor and e = cmax, then (i, j) loses its parent
Losing the Parent • There are two scenarios that cause mote (i, j) to lose its parent (x, y) • (i, j) receives a connected(x, y, cmax) msg and (x, y) happens to be a high-neighbor of (i, j) • (i, j) does not receive any connected(x, y, e) msg for kT seconds
Replacing the Parent • If mote (i, j) has a parent (x, y) and receives a connected(u, v, f) msg, where (u, v) is a neighbor of (i, j), and (i, j) detects that adopting (u, v) as its parent (using f to compute its inversion count c) would reduce the value of c, then (i, j) adopts (u, v) as its parent and recomputes its inversion count
Allowing Long Links • Add the following rule to the previous rules for acquiring and replacing a parent: • If any mote (i,j) ever receives a message connected(0,0,0), then mote (i,j) makes mote (0,0) its parent
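The parent-maintenance rules above (acquire, keep, replace, lose, long links) fit in a small state machine. A minimal Python sketch, assuming illustrative values for H and cmax; the kT-second timeout for a silent parent is omitted:

CMAX = 3   # upper bound on inversion counts (assumed value)
H = 2      # tree hop (assumed value)

class Mote:
    def __init__(self, i, j):
        self.i, self.j = i, j
        self.parent = None     # (x, y) of the current parent, or None
        self.c = None          # inversion count

    def is_low(self, x, y):
        return (x, y) in [(self.i - H, self.j), (self.i, self.j - H)]

    def is_high(self, x, y):
        return (x, y) in [(self.i + H, self.j), (self.i, self.j + H)]

    def count_via(self, x, y, e):
        """Inversion count if (x, y), with count e, were the parent."""
        return e if self.is_low(x, y) else e + 1

    def on_connected(self, x, y, e):
        if (x, y) == (0, 0):                       # long-link rule
            self.parent, self.c = (0, 0), 0
        elif not (self.is_low(x, y) or self.is_high(x, y)):
            pass                                   # discard: not a neighbor
        elif self.parent is None:                  # acquiring a parent
            if self.is_low(x, y) or e < CMAX:
                self.parent, self.c = (x, y), self.count_via(x, y, e)
        elif self.parent == (x, y):                # keeping the parent
            if self.is_high(x, y) and e == CMAX:
                self.parent, self.c = None, None   # losing the parent
            else:
                self.c = self.count_via(x, y, e)
        elif self.count_via(x, y, e) < self.c:     # replacing the parent
            self.parent, self.c = (x, y), self.count_via(x, y, e)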
Application context • A Line in the Sand (Lites) • field sensor-network experiment for real-time target detection, classification, and tracking • A target can be detected by tens of nodes, producing a traffic burst • Bursty convergecast • Deliver traffic bursts to a nearby base station
Problem statement • Only 33.7% of packets are delivered with the default TinyOS messaging stack • Unable to support precise event classification • Objectives • close to 100% reliability • close to optimal event goodput (real-time) • experimental study for high fidelity
Network setup • Network • 49 MICA2s in a 7 × 7 grid • 5-foot separation • Power level: 9 (for a 2-hop reliable communication range) • Logical Grid Routing (LGR) • it uses reliable links • it spreads traffic uniformly [figure: the 7 × 7 grid and the base station]
Traffic trace from Lites • Packets generated in a 7 × 7 subgrid when a vehicle passes across the middle of the Lites network • Optimal event goodput: 6.66 packets/second
Retransmission-based packet recovery • At each hop, retransmit a packet if the corresponding ACK is not received within a constant time • Synchronous explicit ACK (SEA) • explicit ACK immediately after packet reception • shorter retransmission timer • Stop-and-wait implicit ACK (SWIA) • forwarded packet serves as the ACK • longer retransmission timer (a sketch of the SEA variant follows)
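A minimal sketch of hop-by-hop retransmission with a synchronous explicit ACK: the sender retransmits after a fixed timeout until ACKed or the retry budget is exhausted. The radio interface, retry count, and timeout value are hypothetical placeholders, not from the paper.

M = 2            # max retransmissions per hop (assumed)
TIMEOUT = 0.05   # retransmission timeout in seconds (assumed)

def send_with_retx(radio, pkt):
    """Return True once the packet is ACKed, retrying up to M times."""
    for _ in range(M + 1):
        radio.send(pkt)
        if radio.wait_for_ack(pkt.seq, timeout=TIMEOUT):  # hypothetical API
            return True
    return False    # give up: retry budget exhausted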
SEA • Retransmission does not help much, and may even decrease reliability and goodput • Similar observations when adjusting contention window of B-MAC and using S-MAC • Retransmission-incurred contention
SWIA • Again, retransmission does not help • Compared with SEA, longer delay and lower goodput/reliability • longer retransmission timer & blocking flow control • More ACK losses, and thus more unnecessary retransmissions
Protocol RBC • Differentiated contention control • reduce channel contention caused by packet retransmissions • Window-less block ACK • non-blocking flow control • reduce ACK loss • Fine-grained tuning of retransmission timers
Window-less block ACK Non-blocking window-less queue management • Unlike sliding-window based block ACK, in-order packet delivery is not required: packets are timestamped • For block ACK, sender and receiver maintain the “order” in which packets have been transmitted • The “order” is identified without a sliding window, so there is no upper bound on the number of un-ACKed packet transmissions
Sender: queue management [figure: a static physical queue of packet buffers organized into ranked virtual queues (VQ), VQ0 (high) through VQM (low), plus VQM+1 for empty buffers; occupied buffers are labeled with their buffer/packet IDs; M: max. # of retransmissions]
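A minimal sketch of these ranked virtual queues, under assumptions about details the slides leave out: VQk holds packets transmitted k times without an ACK, VQM+1 models free buffers, and the sender always transmits from the highest-ranked non-empty VQ. Retransmission timing (covered two slides below) is omitted.

from collections import deque

M = 2   # max retransmissions per hop (assumed value)

class SenderQueues:
    def __init__(self):
        # vq[0] .. vq[M] hold un-ACKed packets; vq[M+1] models free buffers
        self.vq = [deque() for _ in range(M + 2)]

    def enqueue(self, pkt):
        self.vq[0].append(pkt)               # fresh packets enter VQ0

    def next_to_send(self):
        """Transmit from the highest-ranked (lowest-index) non-empty VQ."""
        for k in range(M + 1):
            if self.vq[k]:
                pkt = self.vq[k].popleft()
                if k < M:
                    self.vq[k + 1].append(pkt)   # await ACK; may retransmit
                else:
                    self.vq[M + 1].append(pkt)   # retries exhausted: free buffer
                return pkt
        return None

    def on_block_ack(self, acked_seqs):
        """Free every buffer whose packet the receiver acknowledged."""
        for k in range(1, M + 1):
            acked = [p for p in self.vq[k] if p.seq in acked_seqs]
            self.vq[k] = deque(p for p in self.vq[k] if p.seq not in acked_seqs)
            self.vq[M + 1].extend(acked)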
Differentiated contention control • Schedule channel access across nodes • Higher priority in channel access is given to • nodes having fresher packets • nodes having more queued packets
Implementation of contention control • The rank of a node j is the tuple (M - k, |VQk|, ID(j)), where • M: maximum number of retransmissions per hop • VQk: the highest-ranked non-empty virtual queue at j • ID(j): the ID of node j • A node with a larger rank value (compared lexicographically) has higher priority • Neighboring nodes exchange their ranks • Lower-ranked nodes leave the floor to higher-ranked ones
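A minimal sketch of the rank comparison: fresher packets (a higher-ranked VQ, i.e. smaller k) win first, then longer queues, then node ID as a tie-breaker. M's value is an assumption.

M = 2   # max retransmissions per hop (assumed value)

def rank(node_id, queues):
    """queues: virtual queues VQ0..VQM at this node (lists of packets)."""
    for k, vq in enumerate(queues):
        if vq:                               # highest-ranked non-empty VQ
            return (M - k, len(vq), node_id)
    return (0, 0, node_id)                   # nothing queued: lowest priority

def may_transmit(my_rank, neighbor_ranks):
    """Defer whenever some neighbor advertises a larger (lexicographic) rank."""
    return all(my_rank > r for r in neighbor_ranks)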
Fine-tuning the retransmission timer • Timeout value: a tradeoff between • delay of necessary retransmissions • probability of unnecessary retransmissions • In RBC • dynamically estimate the ACK delay • conservatively choose the timeout value; also reset timers upon packet and ACK loss (a sketch follows)
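One way to realize this: estimate the ACK delay with an EWMA plus a deviation term and choose the timeout conservatively. This mirrors the slide's idea only; the estimator form and the constants are assumptions, not RBC's exact rules.

ALPHA, BETA, K = 0.125, 0.25, 4   # assumed smoothing weights / safety factor

class AckTimer:
    def __init__(self, initial=0.1):
        self.est = initial        # smoothed ACK-delay estimate (seconds)
        self.dev = initial / 2    # smoothed deviation

    def sample(self, delay):
        """Feed in a measured delay between a send and its (block) ACK."""
        self.dev = (1 - BETA) * self.dev + BETA * abs(delay - self.est)
        self.est = (1 - ALPHA) * self.est + ALPHA * delay

    def timeout(self):
        """Conservative timeout: mean plus K deviations."""
        return self.est + K * self.dev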
Event-wise results • Retransmission helps improve reliability and goodput • close to optimal goodput (6.37 vs. 6.66 packets/second) • Compared with SWIA, delay is significantly reduced • 1.72 vs. 18.77 seconds
Distribution of packet generation and reception • RBC: packet reception smooths out and almost matches packet generation • SEA: many packets are lost despite quick packet reception • SWIA: significant delay and packet loss
Field deployment (http://www.cse.ohio-state.edu/exscal) • A Line in the Sand (Lites) • ~100 MICA2s • 10 × 20 m² field • sensors: magnetometer, micro-impulse radar (MIR) • ExScal • ~1,000 XSMs, ~200 Stargates • 288 × 1,260 m² field • sensors: passive infrared (PIR), acoustic sensor, magnetometer
Flooding • Forward the message upon hearing it for the first time • Leads to broadcast storms and loss of messages • Obvious optimizations are possible (see the sketch below) • the node sets a timer upon receiving the message for the first time • the delay might be based on RSSI • if, before the timer expires, the node hears the message broadcast T times, then the node decides not to broadcast
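A minimal sketch of this counter-based suppression: count duplicate receptions during a random backoff and forward only if fewer than T copies were overheard. The timer API, threshold, and backoff range are illustrative assumptions.

import random

T = 2              # suppression threshold (assumed)
MAX_DELAY = 0.1    # max random backoff in seconds (assumed)

class FloodState:
    def __init__(self):
        self.copies = {}   # msg id -> number of times overheard

    def on_receive(self, radio, msg):
        if msg.id in self.copies:
            self.copies[msg.id] += 1     # duplicate: just count it
            return
        self.copies[msg.id] = 1
        # hypothetical timer API: fire the callback after a random backoff
        radio.set_timer(random.uniform(0, MAX_DELAY),
                        lambda: self.on_timer(radio, msg))

    def on_timer(self, radio, msg):
        if self.copies[msg.id] < T:      # too few copies heard: forward
            radio.broadcast(msg)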
Flooding, gossiping, flooding, … • Flood a message upon first hearing it • Gossip periodically (less frequently) to ensure that there are no missed messages • Upon detecting a missed message, disseminate it by flooding again • Best-effort flooding (fast) followed by guaranteed-coverage gossiping (slow), followed by best-effort flooding • The algorithm takes care of delivery to loosely connected sections of the WSN (a sketch follows) Livadas and Lynch, 2003
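A minimal sketch of the flood-then-gossip recovery loop: each node periodically gossips the set of message ids it has seen; mismatches trigger a re-flood. All radio APIs here are hypothetical placeholders, not from the paper.

class FloodGossipNode:
    def __init__(self, radio):
        self.radio = radio
        self.seen = {}                 # msg id -> message

    def on_new_message(self, msg):
        if msg.id not in self.seen:    # first hearing: flood (fast path)
            self.seen[msg.id] = msg
            self.radio.broadcast(msg)

    def on_gossip_timer(self):
        # slow path: periodically advertise everything we have seen
        self.radio.broadcast_summary(set(self.seen))

    def on_summary(self, their_ids):
        missing = their_ids - set(self.seen)
        if missing:
            self.radio.request(missing)                  # pull missed messages
        for msg_id in set(self.seen) - their_ids:
            self.radio.broadcast(self.seen[msg_id])      # push what they lack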
Trickle • See Phil Levis’s talk.
Polygonal broadcasts • Imaginary polygonal tilings for supporting communication, e.g., a 1-bit broadcast scheme for a hexagonal tiling [figure: hexagonal tiling with cells labeled 0 and 1] Dolev, Herman, Lahiani, “Brief announcement: polygonal broadcast, secret maturity and the firing sensors”, PODC 2004
Fire-cracker protocol • Firecracker uses a combination of routing and broadcasting to rapidly deliver a piece of data to every node in a network • To start dissemination, the data source sends the data to distant points in the network • Once the data reaches its destinations, broadcast-based dissemination begins along the paths • By using an initial routing phase, Firecracker can disseminate at a faster rate than scalable broadcasts while sending fewer packets • The selection of points to route to has a large effect on performance (a sketch of the two phases follows)
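A minimal sketch of the two phases described above. The seed-selection, routing, and broadcast primitives are hypothetical placeholders; how seeds are actually chosen is exactly the design question the slide raises.

def firecracker_disseminate(source, data, network, num_seeds=3):
    # Phase 1: route the data to a few distant seed points.
    seeds = network.pick_distant_nodes(source, num_seeds)   # hypothetical
    for seed in seeds:
        network.route(source, seed, data)    # multihop unicast
    # Phase 2: broadcast-based dissemination starts at the source, the
    # seeds, and (in the real protocol) nodes along the routing paths.
    for node in [source] + seeds:
        node.start_broadcast(data)           # hypothetical broadcast primitive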
Directed Diffusion • Protocol initiated by the destination (through a query) • Data has attributes; the sink broadcasts interests • Nodes diffuse the interest toward producers via a sequence of local interactions • Nodes receiving the broadcast set up a gradient (leading back toward the sink) • Intermediate nodes opportunistically fuse interests and aggregate, correlate, or cache data • Reinforcement and negative reinforcement are used to converge to an efficient distribution (a sketch follows below)
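A minimal sketch of interest diffusion and gradient setup, following the slide's description only; reinforcement, data rates, and attribute matching are simplified into hypothetical placeholders.

class DiffusionNode:
    def __init__(self, node_id, radio):
        self.id = node_id
        self.radio = radio
        self.gradients = {}   # interest key -> set of neighbors to send data to

    def on_interest(self, interest, from_nbr):
        """Cache the interest, set a gradient toward the sender, re-flood."""
        key = interest.key()                 # hypothetical attribute-based key
        first_time = key not in self.gradients
        self.gradients.setdefault(key, set()).add(from_nbr)
        if first_time:
            self.radio.broadcast(interest)   # diffuse toward producers

    def on_data(self, data):
        """Forward sensed or received data down every matching gradient."""
        for nbr in self.gradients.get(data.key(), ()):
            self.radio.send(nbr, data)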
Directed diffusion Intanagonwiwat, Govindan, and Estrin, “Directed diffusion: a scalable and robust communication paradigm for sensor networks”, Proc. 6th ACM/IEEE Int’l Conf. on Mobile Computing and Networking (MobiCom), 2000.
Directed Diffusion… [figure: the sink's interest is flooded directionally toward the source; each node sets up a gradient pointing back toward the sink]
Directed Diffusion… [figure: data flows from the source to the sink along the established gradients]