
Adaptive Offloading Inference for Delivering Applications in Pervasive Computing Environments








  1. Adaptive Offloading Inference for Delivering Applications in Pervasive Computing Environments Presented by Jinhui Qin

  2. Outline: • Introduction (3 slides) • Proposed approach (1 slide) • System overview (3 slides) • Design and algorithms (5 slides) • Performance evaluation (5 slides) • Conclusion (1 slide)

  3. Motivation • It is challenging to deliver complex applications on mobile devices • Mobile devices are resource-constrained • Limitations of existing approaches • Degrading an application’s fidelity • Adaptation efficiency is limited by coarse-grained approaches • Rewriting an application is expensive Introduction (Slide 1 of 3)

  4. Main idea of AIDE • AIDE: Adaptive Infrastructure for Distributed Execution • A fine-grained runtime offloading system • Main idea • Dynamically partition the application at runtime • Offload part of the application’s execution to a powerful nearby surrogate device Introduction (Slide 2 of 3)

  5. Key problems • Which objects should be offloaded? • When should the offloading action be triggered? • OLIE solves the above problems Introduction (Slide 3 of 3)

  6. OLIE, the proposed approach • Makes intelligent offloading decisions • Timely triggering of adaptive offloading • Intelligent selection of an application partitioning policy • Uses the Fuzzy Control model • Focuses only on relieving the memory constraint • Enables AIDE to deliver resource-intensive applications with minimum overhead Proposed approach (Slide 1 of 1)

  7. System overview • Triggering the offloading action and making offloading decisions • Transforming method invocations on offloaded objects into remote invocations System overview (Slide 1 of 3)

  8. Program (Java) execution information • Class: A • Memory: 5 KB • AccessFreq: 10 • Location: surrogate • isNative: false • InteractionFreq: 12 • BandwidthRequirement: 1 KB System overview (Slide 2 of 3)
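The per-class attributes above can be sketched as a small record. This is a hypothetical Python representation for illustration only (the field names mirror the slide, but the actual AIDE data structure is not shown in the talk):

```python
from dataclasses import dataclass

@dataclass
class ExecNode:
    """One node of the execution graph; attributes follow the slide."""
    name: str               # Java class name, e.g. "A"
    memory_kb: int          # Memory: footprint of the class
    access_freq: int        # AccessFreq: how often the class is accessed
    location: str           # Location: "mobile" or "surrogate"
    is_native: bool         # isNative: native classes cannot be offloaded
    interaction_freq: int   # InteractionFreq with neighboring classes
    bandwidth_req_kb: int   # BandwidthRequirement of its interactions

# The example node from the slide:
node_a = ExecNode("A", 5, 10, "surrogate", False, 12, 1)
```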

  9. OLIE making offloading decisions • Monitoring • Tracking the amount of free space in the Java heap, obtained from the JVM garbage collector • Bandwidth and delay are estimated by periodically invoking the ping system utility. • Making offloading decisions • The new target memory utilization • Classes to be offloaded • Classes to be pulled back System overview (Slide 3 of 3)
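As a rough illustration of the ping-based estimation mentioned above, the delay can be read out of a line of `ping` output; a minimal sketch (the sample line and the regex are assumptions for illustration, not part of OLIE):

```python
import re

def parse_ping_rtt_ms(ping_line: str) -> float:
    """Extract the round-trip time in milliseconds from one ping output line."""
    m = re.search(r"time=([\d.]+)\s*ms", ping_line)
    if m is None:
        raise ValueError("no RTT found in ping output")
    return float(m.group(1))

# A typical line as printed by the ping utility (sample values):
sample = "64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=2.41 ms"
rtt_ms = parse_ping_rtt_ms(sample)
```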

  10. OLIE design and algorithms • Goal • To relieve the memory constraint with minimum overhead • Migration cost • Remote data access delay • Remote function invocation delay Design and algorithms (Slide 1 of 5)

  11. Triggering of adaptive offloading • A generic fuzzy inference engine based on fuzzy logic theory • Based on the Fuzzy Control model • Typical decision-making rule specifications, e.g.: If (AvailMem is low) and (AvailBW is high) Then NewMemSize := low; If (AvailMem is low) and (AvailBW is moderate) Then NewMemSize := average; • Membership functions define the mappings between numerical values and linguistic values Design and algorithms (Slide 2 of 5)
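The two rules above can be evaluated as in a Mamdani-style fuzzy controller, where a conjunction fires with the minimum of its antecedent memberships. A minimal sketch, assuming the rule table is encoded as a dict (the encoding is illustrative, not OLIE's actual engine):

```python
# Rule table from the slide: (AvailMem term, AvailBW term) -> NewMemSize term
RULES = {
    ("low", "high"): "low",
    ("low", "moderate"): "average",
}

def infer(avail_mem: dict, avail_bw: dict) -> dict:
    """Fire every rule; AND = min of memberships, keep the strongest
    confidence per output linguistic value."""
    out = {}
    for (mem_term, bw_term), target in RULES.items():
        weight = min(avail_mem.get(mem_term, 0.0), avail_bw.get(bw_term, 0.0))
        out[target] = max(out.get(target, 0.0), weight)
    return out

# AvailMem is 80% "low" / 20% "moderate"; AvailBW is 60% "high" / 40% "moderate".
result = infer({"low": 0.8, "moderate": 0.2}, {"high": 0.6, "moderate": 0.4})
# result == {"low": 0.6, "average": 0.4}
```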

  12. Value mappings • Below a threshold, AvailMem belongs to the linguistic value low with 100% confidence • In the transition region, AvailMem’s membership in low decreases linearly from 100% to 0% • In the overlap region, AvailMem belongs to both low and moderate, but with different confidence probabilities Design and algorithms (Slide 3 of 5)
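A piecewise-linear membership function of this shape could look as follows; the 10% and 30% breakpoints are illustrative assumptions, not values from the paper:

```python
def membership_low(avail_mem_pct: float, full: float = 10.0,
                   zero: float = 30.0) -> float:
    """Membership of AvailMem (as % of total memory) in 'low':
    1.0 at or below `full`%, falling linearly to 0.0 at `zero`%."""
    if avail_mem_pct <= full:
        return 1.0
    if avail_mem_pct >= zero:
        return 0.0
    return (zero - avail_mem_pct) / (zero - full)
```

In the 10–30% band the value is in (0, 1), so the same AvailMem reading can simultaneously carry nonzero membership in a neighboring term such as moderate, matching the overlap described on the slide.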

  13. Intelligent partitioning selection • All nodes with isNative=true are merged into one node N to form the first partition set • The coalescing process • Examines each neighbor of N and selects nodes to merge into the first partition set based on one of several policies: • OLIE_MB (BandwidthRequirement) • Minimize the wireless network transmission load • OLIE_ML (InteractionFreq) • Minimize the interaction delay • OLIE_Combined (BandwidthRequirement, InteractionFreq, memory) • Keep the most active classes • Those with the largest InteractionFreq and BandwidthRequirement • Offload the most inactive classes • Those with the smallest InteractionFreq and the largest amount of memory Design and algorithms (Slide 4 of 5)
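The three policies might be sketched as scoring functions over N's neighbors; note that the exact scoring used for OLIE_Combined below (activity per unit of memory) is a guess at the paper's intent, not its actual formula:

```python
def pick_neighbor(neighbors: list, policy: str) -> dict:
    """Select which neighbor of the merged node N to pull into the
    mobile-side partition next, under one of the three OLIE policies."""
    if policy == "OLIE_MB":        # minimize wireless transmission load
        return max(neighbors, key=lambda n: n["bandwidth_req"])
    if policy == "OLIE_ML":        # minimize interaction delay
        return max(neighbors, key=lambda n: n["interaction_freq"])
    if policy == "OLIE_Combined":  # keep active, memory-light classes local
        return max(neighbors, key=lambda n:
                   (n["bandwidth_req"] + n["interaction_freq"]) / n["memory"])
    raise ValueError(f"unknown policy: {policy}")

neighbors = [
    {"name": "B", "bandwidth_req": 3, "interaction_freq": 9, "memory": 2},
    {"name": "C", "bandwidth_req": 7, "interaction_freq": 4, "memory": 10},
]
```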

  14. Decision-making algorithm

  Mi: memory size of Java class i;
  EG = {N0, N1, ..., Nn}: execution graph;
  CMT: the maximum memory size for a class node;

  NewMemoryUtilization := -1;
  while (offloading service is on)
      while (no significant changes happen)
          perform executions and update EG accordingly;
          while (Mi > CMT)
              create a new node to represent class i;
      // make the adaptive offloading triggering decision
      SetLingVar();           // set numerical values for all input linguistic variables
      fuzzify();              // map the numerical values to linguistic values
      FuzzyInferenceEngine(); // check rules and update NewMemoryUtilization
      defuzzify();            // map the linguistic values back to numerical values
      if (NewMemoryUtilization == -1) then
          offloading is not triggered;
      else // make the partitioning decision
          merge all non-offloadable classes into a node N;
          while (size(EG) > 1)
              merge(N, one of its neighbors NBj);
              if (current cut is better) bestPos := NBj;
          Partition_mobiledevice = {N0, ..., N_bestPos};
          Partition_surrogate = {N_bestPos+1, ..., Nn};

  Design and algorithms (Slide 5 of 5)

  15. Performance evaluation • Using extensive trace-driven simulations • Application execution traces • Executed on a Linux desktop machine • Collected by querying an instrumented JVM • Trace file records • Method invocations • Data field accesses • Object creations and deletions • Wireless network traces • Collected with the ping system utility on an IBM ThinkPad • IEEE 802.11 WaveLAN network card Performance evaluation (Slide 1 of 5)

  16. Simulator • Only considers the average RTT for small packets (about 2.4 ms on average) • Remote function invocation overhead: RTT/2 • Remote data access overhead: RTT • Migration overhead: Memory(classes to be migrated) / current available bandwidth Performance evaluation (Slide 2 of 5)
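The three overhead terms above reduce to simple arithmetic. A sketch using the slide's 2.4 ms average RTT (the unit choices in the parameter names are assumptions):

```python
RTT_MS = 2.4  # average small-packet RTT reported on the slide

def remote_function_call_ms() -> float:
    """Remote function invocation overhead: half a round trip."""
    return RTT_MS / 2

def remote_data_access_ms() -> float:
    """Remote data access overhead: a full round trip."""
    return RTT_MS

def migration_s(memory_kb: float, bandwidth_kb_per_s: float) -> float:
    """Migration overhead = size of migrated classes / available bandwidth."""
    return memory_kb / bandwidth_kb_per_s
```

For example, migrating 100 KB of classes over a 50 KB/s link would cost 2 s, dwarfing the millisecond-scale remote call overheads.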

  17. Compared approaches • Random and LRU • Both use one simple fixed policy: availMemory < 5% of totalMemory && newMemoryUtilization < 80% of totalMemory • The Random algorithm keeps randomly selected classes • The LRU algorithm offloads the least recently used classes according to the AccessFreq of each class Performance evaluation (Slide 3 of 5)
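The LRU baseline can be sketched as: sort classes by AccessFreq and offload the coldest ones until enough memory is reclaimed. A hypothetical sketch (the dict keys and the stopping rule are assumptions for illustration):

```python
def lru_victims(classes: list, need_kb: int) -> list:
    """Pick the least-used classes (smallest AccessFreq) to offload
    until at least `need_kb` of heap memory is freed."""
    victims, freed = [], 0
    for c in sorted(classes, key=lambda c: c["access_freq"]):
        if freed >= need_kb:
            break
        victims.append(c["name"])
        freed += c["memory_kb"]
    return victims

classes = [
    {"name": "A", "access_freq": 10, "memory_kb": 5},
    {"name": "B", "access_freq": 2,  "memory_kb": 8},
    {"name": "C", "access_freq": 7,  "memory_kb": 4},
]
# Coldest first: B (freq 2) frees 8 KB, then C (freq 7) reaches 12 KB.
```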

  18. Experimental applications Performance evaluation (Slide 4 of 5)

  19. Results: offloading overhead (in seconds) for the three applications (DIA, Biomer, JavaNote), broken down into migration time, remote data access time, and remote function call time. [chart] Performance evaluation (Slide 5 of 5)

  20. Conclusion • OLIE relieves memory constraints for mobile devices with much lower overhead than other common approaches • Major contributions • Identifying two key decision-making problems • Applying the Fuzzy Control model to OLIE • Proposing three policies for selecting application partitions Conclusion (Slide 1 of 1)

  21. Questions?
