Efficient Clustering for Improving Network Performance in Wireless Sensor Networks Bracha Hod Joint work with: Tal Anker, Danny Bickson and Danny Dolev The Hebrew University of Jerusalem
Outline • Introduction • Related work • Motivation and main contribution • Belief Propagation (BP) • Clustering using BP • Simulation results • Summary
Introduction • A cluster-based network is divided into subsets (clusters) • Each cluster contains a single leader (cluster head) and several ordinary nodes
Introduction • Clustering main objectives • Minimize the total transmission power aggregated over all nodes in the selected path • Balance the load to prolong the network lifetime • Clustering advantages • Increase network scalability • Support data aggregation • Reduce energy consumption • Clustering challenges • Optimal cluster selection is a hard problem • Cluster maintenance is essential
Related Work • Many research efforts • LEACH - Low Energy Adaptive Clustering Hierarchy(Heinzelman et al, 2002) • HEED - Hybrid, Energy-Efficient, Distributed clustering (Younis et al, 2004) • VCA - Voting-based Clustering Algorithm (Qin et al, 2005) • EEUC - Energy-Efficient Unequal Clustering (Li et al, 2005)
Motivation • Network performance is important • Retransmissions and dropped packets waste energy • Since the network is usually dense and some nodes are redundant, network lifetime should be measured by the time the system remains available to provide its services
Main Contribution • We propose a novel approach based on Belief Propagation (BP) • Considers both local properties of a node and joint characteristics of a group of nodes • Makes better use of the available information • Incurs a small, constant overhead • Resulting in • Better network performance • Balanced power consumption among the nodes • Our scalable and practical implementation of BP can be used for other inference goals
Belief Propagation (BP) • BP is an iterative algorithm for computing maximum or marginal posterior probabilities via local message passing • BP offers rapid convergence, accurate results and good performance in asynchronous environments • When performed on trees, BP converges to the correct values in a finite number of iterations
The Min-Sum Algorithm (MS) • The goal is to minimize the overall cost in the network, based on the local cost functions and the constraints between the nodes • Each node transmits to its neighbors a message with its local and joint costs. Each neighbor updates its own belief accordingly and transmits the new belief • Gradually the information is propagated through the network until the nodes converge to a common belief
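As a concrete illustration of this message-passing scheme, below is a minimal Python sketch of min-sum cluster-head election on a tree, written in the textbook unicast form (the implementation described on the next slide uses broadcast tables and integer arithmetic instead). The function names and dict-based structures are illustrative assumptions, not the authors' code; the fallback rule that a node with no link to the elected head selects itself mirrors the rule stated in the example slides.

```python
# Illustrative sketch only: min-sum cluster-head election on a routing tree.

def local_cost(node, candidate, self_costs, links):
    """Cost `node` pays if `candidate` is elected cluster head."""
    if candidate == node:
        return self_costs[node]                          # serve as head itself
    return links[node].get(candidate, self_costs[node])  # join it, or elect itself

def message(sender, receiver, candidate, tree, self_costs, links):
    """Min-sum message: cost of sender's whole subtree (looking away from
    the receiver) under the hypothesis that `candidate` is the cluster head."""
    cost = local_cost(sender, candidate, self_costs, links)
    for nb in tree[sender]:
        if nb != receiver:
            cost += message(nb, sender, candidate, tree, self_costs, links)
    return cost

def belief(node, candidate, tree, self_costs, links):
    """Total network cost, as seen by `node`, if `candidate` becomes the head."""
    return local_cost(node, candidate, self_costs, links) + sum(
        message(nb, node, candidate, tree, self_costs, links) for nb in tree[node])

# Topology of the five-node example on the following slides.
self_costs = {"A": 80, "B": 85, "C": 100, "D": 83, "E": 151}
links = {"A": {"B": 7, "C": 6, "D": 8}, "B": {"A": 7, "C": 5},
         "C": {"A": 6, "B": 5, "D": 4, "E": 4}, "D": {"A": 8, "C": 4}, "E": {"C": 4}}
tree = {"A": ["B", "C", "D"], "B": ["A"], "C": ["A", "E"], "D": ["A"], "E": ["C"]}

beliefs = {c: belief("A", c, tree, self_costs, links) for c in self_costs}
head = min(beliefs, key=beliefs.get)
print(head, beliefs[head])  # -> C 119
```

Run on the example topology, the belief computed at node A settles on node C with a total cost of 119; on a tree, every node reaches the same minimum, as the example slides show round by round.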
Efficient Implementation • The Min-Sum variation of BP requires simple operations and works well with integer values • Avoids floating-point calculations • Broadcast messages instead of the traditional unicast messages in BP • Conserves communication resources • The routing tree is used as the message-passing tree • No special maintenance or overhead
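To make the integer-only broadcast point concrete, here is a hedged sketch of how such a message could be packed. The field layout, sizes and helper names are assumptions; 65535 is used as the "unreachable" sentinel that appears in the example tables.

```python
# Illustrative sketch of a compact, integer-only broadcast payload.
import struct

UNREACHABLE = 0xFFFF  # 65535, the "no usable link" sentinel seen in the example

def pack_costs(node_id, costs):
    """Encode a {candidate_id: cost} table as unsigned 16-bit integers so that
    motes avoid floating point and the whole table fits into one broadcast."""
    payload = struct.pack("!HB", node_id, len(costs))
    for cand, cost in sorted(costs.items()):
        payload += struct.pack("!HH", cand, min(cost, UNREACHABLE))
    return payload

def unpack_costs(payload):
    """Inverse of pack_costs: recover (sender id, cost table) from a broadcast."""
    node_id, n = struct.unpack_from("!HB", payload)
    costs = {}
    for i in range(n):
        cand, cost = struct.unpack_from("!HH", payload, 3 + 4 * i)
        costs[cand] = cost
    return node_id, costs

# Example: a node with id 2 advertises a round-1 table (cf. node C in the example).
msg = pack_costs(2, {2: 100, 0: 6, 1: 5, 3: 4, 4: 4})
assert unpack_costs(msg) == (2, {0: 6, 1: 5, 2: 100, 3: 4, 4: 4})
```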
Example • Network: the routing tree is also used as the message-passing tree • Topology (figure): nodes A–E with self costs A = 80, B = 85, C = 100, D = 83, E = 151 and link costs A–B = 7, A–C = 6, A–D = 8, B–C = 5, C–D = 4, C–E = 4
Example • Round 1: Messages transmitted by all the nodes (65535 marks a candidate with no usable link)
A: {A: 80, B: 7, C: 6, D: 8}
B: {B: 85, A: 7, C: 5, E: 65535}
C: {C: 100, A: 6, B: 5, D: 4, E: 4}
D: {D: 83, A: 8, C: 4}
E: {E: 151, C: 4, B: 65535, D: 65535}
• Processing by node A: • A: A->A + B->A + C->A + D->A = 80 + 7 + 6 + 8 = 101 • B: A->B + B->B + C->B + D->D = 7 + 85 + 5 + 83 = 180 • If B is selected to be the cluster head, D selects itself • …
Example • Round 2: Messages transmitted by all the nodes
A: {A: 101, B: 180, C: 115, D: 180}
B: {B: 92, A: 87, C: 11, E: 65535}
C: {C: 110, A: 237, B: 163, D: 163, E: 235}
D: {D: 91, A: 88, C: 10}
E: {E: 155, C: 104, B: 65535, D: 65535}
• Processing by node A: • A->A: 80 • B->A: 87 – 80 = 7 • C->A: 237 – 80 = 157 • The message from E is propagated to A • D->A: 88 – 80 = 8 • Total cost: 80 + 7 + 157 + 8 = 252
Example • Round 3: Messages transmitted by all the nodes
A: {A: 252, B: 331, C: 119, D: 331}
B: {B: 180, A: 101, C: 115, E: 65535}
C: {C: 119, A: 252, B: 331, D: 331, E: 252}
D: {D: 180, A: 101, C: 115}
E: {E: 235, C: 110, B: 65535, D: 65535}
• After round 3, all the nodes converge to a common belief – node C should be the cluster head
• Converged costs per candidate head: A = 252, B = 331, C = 119 (minimum), D = 331
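The converged beliefs can be checked directly against the topology: a candidate head's total cost is its own self cost, plus the link cost of every node that can join it, plus the self cost of every node that cannot (such a node selects itself, as in the round-1 processing):
• C: 100 + 6 + 5 + 4 + 4 = 119
• A: 80 + 7 + 6 + 8 + 151 = 252 (E has no link to A and elects itself)
• B: 85 + 7 + 5 + 83 + 151 = 331 (D and E elect themselves)
• D: 83 + 8 + 4 + 85 + 151 = 331 (B and E elect themselves)
C gives the minimum total cost, matching the common belief reached by all nodes.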
Clustering using MS • Two events trigger a clustering process • A regular node does not have a cluster head • Periodically, by each cluster head, to balance the power consumption • Message-passing properties • 1-hop vicinity, for a localized and distributed process • Completely asynchronous operation • The number of rounds is bounded and determined a priori, to avoid the impact of the environment
Clustering using MS • Every message contains • The self cost of being a cluster head • The cost of connecting to other cluster heads • The final state decision (in the last round) • Cost metric (see the sketch below) • The self cost is based on the expected energy consumption in a period and the residual battery power • The expected energy consumption considers the node degree and the distance to the base station • The joint cost is based on the link quality and the residual battery power
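The slides specify only which inputs each cost term depends on; the sketch below shows one way the terms could be combined. The concrete formulas, scaling factors and the [0, 1] normalization are illustrative assumptions, not the authors' metric.

```python
# Illustrative cost metric sketch; all constants and formulas are assumptions.

def expected_energy(degree, hops_to_base, rx_unit=1.0, tx_unit=2.0):
    """Rough per-period consumption of a head: receive from each cluster member
    and forward the aggregated data toward the base station."""
    return degree * rx_unit + hops_to_base * tx_unit

def self_cost(degree, hops_to_base, residual_battery):
    """Cost of serving as cluster head (integer, for the efficient
    implementation); residual_battery is normalised to [0, 1]."""
    return int(10 * expected_energy(degree, hops_to_base)
               + 100 * (1.0 - residual_battery))

def joint_cost(link_quality, residual_battery):
    """Cost of joining a neighbouring head; link_quality and the candidate's
    residual_battery are both normalised to [0, 1]."""
    return int(100 * (1.0 - link_quality) + 100 * (1.0 - residual_battery))
```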
Simulation Model • Simulation in TOSSIM, the TinyOS simulator • 250 nodes, including a single base station • Link Estimation and Parent Selection routing protocol • Shortest-path metric combined with link quality • Surge application for data aggregation • Power parameters of the Berkeley Mica2 mote • Variable power levels for cluster heads and regular nodes
HEED • Cluster heads are selected with a probability based on their residual energy • If a node hears no cluster-head announcements, it either elects itself with its current probability or doubles its probability for the next iteration • A local and efficient method that achieves very good results in simulations
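For comparison with the BP-based scheme, here is a compact sketch of the HEED-style election loop summarized above. The constants and the stubbed radio-listening callback are illustrative assumptions, not the parameters used in the cited HEED paper or in these simulations.

```python
# Illustrative sketch of HEED-style probabilistic cluster-head election.
import random

def heed_election(residual_energy, max_energy,
                  hears_head=lambda: False,
                  c_prob=0.05, p_min=1e-4):
    """Return True if this node elects itself cluster head, False if it joins
    an announced head. `hears_head` stands in for listening to the radio."""
    ch_prob = max(c_prob * residual_energy / max_energy, p_min)
    while True:
        if hears_head():
            return False                    # a neighbour announced: join it
        if ch_prob >= 1.0 or random.random() < ch_prob:
            return True                     # announce headship
        ch_prob = min(ch_prob * 2.0, 1.0)   # no heads heard: double and retry
```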
Performance Evaluation • Data collection time • Clustering using BP achieves more than 40% higher throughput than HEED
Performance Evaluation • Data collection rate during the network lifetime • Clustering with BP achieves better routing, deployment and stability
Performance Evaluation • BP achieves better deployment and network stability than HEED, measured by • Average hop count • Number of re-clustering processes
Performance Evaluation • Clustering overhead: BP incurs more overhead than HEED because its messages are larger • Network lifetime: HEED has a very small advantage because BP transmits more packets
Summary • We present a new framework for clustering based on BP • The approach is fully distributed, localized, asynchronous, robust and scalable • Using all the available information, rather than only a subset of the parameters, yields better results and better network performance • Future work • Comparing the BP algorithm with the theoretically optimal clustering algorithm
Appendix – Notations of BP • X_i – set of possible states of node i • φ_i(x_i) – local distribution function of node i • ψ_ij(x_i, x_j) – joint function of two connected nodes i and j • m_{i→j}^t(x_j) – unicast message from node i to node j at round t about the state that node j should be in • m_i^t – broadcast message from node i to its direct neighbors N(i)
MS Formulation • Message-passing scheme • Message update rule • Belief calculation • Efficient implementation in sensors using broadcast messages and integer calculations only
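A standard min-sum formulation, written in the appendix's notation, reads as follows (a reconstruction under the usual BP conventions, not necessarily the authors' exact equations):

$$m^{t}_{i\to j}(x_j) = \min_{x_i \in X_i}\Big[\phi_i(x_i) + \psi_{ij}(x_i, x_j) + \sum_{k \in N(i)\setminus\{j\}} m^{t-1}_{k\to i}(x_i)\Big]$$

$$b_i(x_i) = \phi_i(x_i) + \sum_{k \in N(i)} m^{t}_{k\to i}(x_i), \qquad x_i^{*} = \arg\min_{x_i \in X_i} b_i(x_i)$$

In the broadcast variant each node transmits a single message carrying its whole belief table, and a receiver recovers its personalized message by subtracting its own earlier contribution, as in the round-2 processing of the example (e.g. 87 – 80 = 7).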