
Joint Power and Channel Minimization in Topology Control: A Cognitive Network Approach


Presentation Transcript


  1. Joint Power and Channel Minimization in Topology Control: A Cognitive Network Approach Jorge Mori Alexander Yakobovich Michael Sahai Lev Faynshteyn

  2. Problem Definition An ad-hoc wireless network topology faces two problems: • Power consumption • Mobile devices have a limited power supply • Overcrowded spectrum • Too many devices try to use the same frequency simultaneously, resulting in interference

  3. Previous Work Interference avoidance has led to three viewpoints: • Radio • Minimize channel interference at link-level • Topology • Channel assignments made in an already existing topology • Network • A combination of channel assignment with routing

  4. Previous Work • Two assumptions are made in prior work: • Power control • Channel control • Power-control approaches: • Burkhart et al. assign weights to connections equal to the number of radios the connection interferes with. • Used with the MMLIP, MAICPC and IMST algorithms. • Use of a radio interference function, in which the interference contribution of a radio is the maximum interference of all connections incident upon it. • Used in the MMMIP and LILT algorithms.

  5. Previous Work (Cont.) • Channel control: • Assumes that the connectivity of the network is fixed and that two radios can only communicate if they share a common channel, of which there are fewer available than needed.

  6. Researchers’ Approach • Their work assumes that radios regulate both power and channel selection. • A two-phase, two-cognitive-element approach to: • Power assignment • Channel assignment • A game-theoretic model is used to analyze the behaviors of these elements.

  7. Methodology A two-phase game model is used: • The first phase is a pure power control game where POWERCONTROL elements attempt to minimize their transmit power level and maintain network connectivity. • The output of the first phase is a power-efficient topology, which is fed into the second phase, where CHANNELCONTROL elements selfishly play the channel selection game.

  8. Methodology (Cont.) The POWERCONTROL elements utilize the δ-Improvement Algorithm (DIA), sketched below:
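A minimal Python sketch of the idea behind DIA as described on the previous slide: each node repeatedly lowers its own transmit power in steps of δ as long as the resulting symmetric-link topology stays connected. The reach(u, v, p) predicate, the connectivity test, and the example at the end are illustrative assumptions, not the authors' implementation.

import itertools

def connected(nodes, powers, reach):
    """Return True if the symmetric-link graph induced by the current
    transmit powers is connected. reach(u, v, p) says whether node u
    can reach node v when transmitting at power p (assumed predicate)."""
    adj = {u: set() for u in nodes}
    for u, v in itertools.combinations(nodes, 2):
        # Keep a link only if both endpoints can reach each other.
        if reach(u, v, powers[u]) and reach(v, u, powers[v]):
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def delta_improvement(nodes, powers, reach, delta):
    """Each node keeps lowering its power by delta while connectivity holds
    (a better-response dynamic in the power-control game)."""
    improved = True
    while improved:
        improved = False
        for u in nodes:
            trial = dict(powers)
            trial[u] = powers[u] - delta
            if trial[u] >= 0 and connected(nodes, trial, reach):
                powers, improved = trial, True
    return powers

# Example: nodes on a line; u reaches v at power p iff p covers the distance.
positions = {0: 0.0, 1: 1.0, 2: 2.5, 3: 4.0}
reach = lambda u, v, p: p >= abs(positions[u] - positions[v])
print(delta_improvement(list(positions), {u: 5.0 for u in positions}, reach, 0.5))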

  9. Methodology (Cont.) The CHANNELCONTROL elements utilize LOCAL-RS, a localized version of the Random Sequential coloring algorithm, sketched below:
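A minimal sketch of the Random Sequential coloring rule that LOCAL-RS localizes, under the usual greedy reading: vertices are visited in a random order and each takes the lowest-indexed channel not already used by a conflicting neighbor. The distributed machinery that lets each node decide from neighborhood information alone is omitted, and the conflict graph below is a made-up example.

import random

def rs_coloring(adjacency, num_channels, rng=random):
    """adjacency maps each node to the set of nodes it interferes with."""
    order = list(adjacency)
    rng.shuffle(order)
    channel = {}
    for v in order:
        used = {channel[u] for u in adjacency[v] if u in channel}
        free = [c for c in range(num_channels) if c not in used]
        # Fall back to a random channel if no conflict-free channel is left.
        channel[v] = free[0] if free else rng.randrange(num_channels)
    return channel

# Example: a small conflict graph over four radios and three channels.
conflicts = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(rs_coloring(conflicts, num_channels=3))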

  10. Optimized Approach – Power Control Use a Minimum Spanning Tree (MST) algorithm to solve the power control problem: • G = (V, E, W) denotes the input undirected stochastic graph: • V - vertex set • E - edge set • W - matrix of probability distributions of the edge weights in the stochastic graph • Each node of the graph is a learning automaton • The resulting network is described by a triple <A, α, W>, where: • A = {A1, A2, ..., Am} - set of learning automata • α = {α1, α2, ..., αm} - set of action sets, in which αi = {αi1, αi2, ..., αij, ..., αir} defines the set of actions that can be taken by learning automaton Ai, for each αi ∈ α • Weight wij is the cost associated with edge e(i, j)

  11. MST Algorithm • Step 1. The learning automata are sequentially and randomly activated and choose one of their actions according to their action probability vectors. Automata are activated until either the number of selected edges is greater than or equal to (n − 1) or there are no more automata that have not already been activated. • Step 2. The weight of the traversed spanning tree is computed and then compared to the dynamic threshold Tk. • Step 3. If the weight of the traversed spanning tree is less than or equal to the dynamic threshold, i.e. Wτi(k + 1) ≤ Tk, the activated automata are rewarded with probability di(k) in accordance with the LR-P learning algorithm; otherwise the activated automata are penalized. • Step 4. Steps 1 through 3 are repeated until the product of the probabilities of the edges along the traversed spanning tree is greater than a certain threshold or the number of traversed trees exceeds a pre-specified threshold.
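A minimal, self-contained sketch of one reading of the learning-automata procedure on slides 10 and 11. Several details are assumptions: the dynamic threshold Tk is taken to be the running average of the weights of the trees traversed so far, the reward and penalty rates of the LR-P scheme are arbitrary, and the cycle checks and the probability-product stopping rule are simplified to a fixed number of stages.

import random

def lrp_reward(p, i, a=0.1):
    """L R-P reward: shift probability mass toward the chosen action i."""
    return [q + a * (1 - q) if j == i else (1 - a) * q for j, q in enumerate(p)]

def lrp_penalize(p, i, b=0.05):
    """L R-P penalty: shift probability mass away from the chosen action i."""
    r = len(p)
    return [(1 - b) * q if j == i else b / (r - 1) + (1 - b) * q
            for j, q in enumerate(p)]

def sample(p, rng):
    """Draw an action index according to the probability vector p."""
    x, acc = rng.random(), 0.0
    for i, q in enumerate(p):
        acc += q
        if x <= acc:
            return i
    return len(p) - 1

def la_mst(nodes, edges, sample_weight, stages=500, rng=random):
    """nodes: node ids; edges: (u, v) pairs; sample_weight(u, v): draws a
    random weight for edge (u, v) from the stochastic graph's distribution."""
    incident = {u: [e for e in edges if u in e] for u in nodes}
    prob = {u: [1.0 / len(incident[u])] * len(incident[u]) for u in nodes}
    threshold, best = None, None
    for k in range(1, stages + 1):
        # Step 1: activate the automata in random order; each picks one of its
        # incident edges until n - 1 distinct edges have been selected.
        chosen, picks = set(), {}
        for u in rng.sample(nodes, len(nodes)):
            if len(chosen) >= len(nodes) - 1:
                break
            picks[u] = sample(prob[u], rng)
            chosen.add(tuple(sorted(incident[u][picks[u]])))
        # Step 2: weigh the traversed edge set and update the dynamic
        # threshold (here assumed to be the running average of past weights).
        weight = sum(sample_weight(u, v) for u, v in chosen)
        threshold = weight if threshold is None else ((k - 1) * threshold + weight) / k
        # Step 3: reward the activated automata if the tree is cheap enough,
        # otherwise penalize them.
        for u, i in picks.items():
            prob[u] = lrp_reward(prob[u], i) if weight <= threshold else lrp_penalize(prob[u], i)
        if best is None or weight < best[0]:
            best = (weight, chosen)
    return best

# Example: a 4-node stochastic graph with noisy edge weights.
base = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 3.0, (1, 3): 2.0}
noisy = lambda u, v: base[(u, v)] + random.uniform(-0.2, 0.2)
print(la_mst([0, 1, 2, 3], list(base), noisy))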

  12. Optimized Approach – Channel Control • The resulting network is described by the pair <A, α>, where: • A = {A1, A2, …, Am} denotes the set of learning automata • α = {α1, α2, …, αm} denotes the set of action sets • αi = {αi1, αi2, …, αir} defines the set of actions that can be taken by learning automaton Ai, for each αi ∈ α • The set of colors with which each vertex vi can be colored forms the set of actions that can be taken by learning automaton Ai

  13. Channel Control Algorithm • Step 1. Color selection phase • For all learning automata, do in parallel: • Each automaton Ai picks one of the colors that has not been selected yet • Vertex Vi is colored with the color corresponding to the selected action • The selected color is added to the list of colors (the color-set) with which the graph may be legally colored at this stage • Step 2. Updating the dynamic threshold and action probabilities • If the cardinality of the color-set (in a legal coloring) created at this stage is less than or equal to the dynamic threshold Tk, then: • Threshold Tk is set to the cardinality of the color-set selected at this stage • All learning automata reward their actions and update their action probability vectors using the LR-P reinforcement scheme • Otherwise: • Each learning automaton updates its probability vector by penalizing its chosen action • Step 3. Stopping condition • The process of selecting legal colorings of the graph and updating the action probabilities is repeated until the product of the probabilities of choosing the colors of a legal coloring (called PLC) is greater than a certain threshold, or the number of colorings exceeds a pre-specified threshold. The coloring chosen last before the algorithm stops is the coloring with the smallest color-set among all proper colorings.
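A minimal, self-contained sketch of the coloring game above, one stage per iteration. The LR-P rates, the fallback when an automaton's sampled color conflicts with an already-colored neighbor, and the fixed stage count standing in for the PLC stopping rule are all illustrative assumptions.

import random

def lrp_reward(p, i, a=0.1):
    """L R-P reward: shift probability mass toward the chosen action i."""
    return [q + a * (1 - q) if j == i else (1 - a) * q for j, q in enumerate(p)]

def lrp_penalize(p, i, b=0.05):
    """L R-P penalty: shift probability mass away from the chosen action i."""
    r = len(p)
    return [(1 - b) * q if j == i else b / (r - 1) + (1 - b) * q
            for j, q in enumerate(p)]

def sample(p, rng):
    """Draw an action index according to the probability vector p."""
    x, acc = rng.random(), 0.0
    for i, q in enumerate(p):
        acc += q
        if x <= acc:
            return i
    return len(p) - 1

def la_coloring(adjacency, num_colors, stages=300, rng=random):
    """adjacency maps each vertex to the set of vertices it conflicts with."""
    nodes = list(adjacency)
    prob = {v: [1.0 / num_colors] * num_colors for v in nodes}
    threshold, best = num_colors, None
    for _ in range(stages):
        # Step 1: color-selection phase; every vertex picks a color that is
        # legal with respect to its already-colored neighbors.
        color = {}
        for v in rng.sample(nodes, len(nodes)):
            forbidden = {color[u] for u in adjacency[v] if u in color}
            free = [c for c in range(num_colors) if c not in forbidden]
            c = sample(prob[v], rng)
            if c in forbidden and free:
                c = rng.choice(free)
            color[v] = c
        used = len(set(color.values()))
        # Step 2: update the dynamic threshold and the action probabilities.
        if used <= threshold:
            threshold, best = used, dict(color)
            for v in nodes:
                prob[v] = lrp_reward(prob[v], color[v])
        else:
            for v in nodes:
                prob[v] = lrp_penalize(prob[v], color[v])
    # Step 3: the fixed stage count stands in for the PLC stopping rule.
    return best, threshold

# Example: the smallest color-set found for a small conflict graph.
conflicts = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(la_coloring(conflicts, num_colors=4))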

  14. Thank you. Questions?
