
Automated Generation of Context-Aware Tests


Presentation Transcript


  1. Automated Generation of Context-Aware Tests • Zhimin Wang (University of Nebraska–Lincoln) • Sebastian Elbaum (University of Nebraska–Lincoln) • David S. Rosenblum (University College London) • Research funded in part by the EPSRC and the Royal Society

  2. Background: Ubiquitous Computing Systems • Context-Aware • Execution driven by changes in the execution environment • Sensed through middleware invocation of context handlers • Context is an additional input space that must be explored adequately during testing • Adaptive • Execution must adapt to context changes • Changes to multiple context variables may occur simultaneously • An important emerging class of software systems • Example sensed contexts: radio signal strength, location, nearby device IDs, battery level, available memory
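  As a rough sketch of the handler mechanism described above (the interface and names are hypothetical, not the Context Toolkit's actual API), a middleware layer might dispatch sensed context changes to registered handlers along these lines:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of middleware-driven context handling:
    // a sensed change (location, battery level, ...) arrives as an event,
    // and the middleware invokes every handler registered for that type.
    interface ContextHandler {
        String contextType();                 // e.g., "location", "battery"
        void onContextChange(String newValue);
    }

    class ContextMiddleware {
        private final List<ContextHandler> handlers = new ArrayList<>();

        void register(ContextHandler h) { handlers.add(h); }

        // Called by the sensing layer. Each invocation is an additional
        // input to the application, which is why context forms an input
        // space that tests must explore.
        void contextSensed(String type, String value) {
            for (ContextHandler h : handlers) {
                if (h.contextType().equals(type)) {
                    // Dispatched on its own thread: simultaneous changes to
                    // multiple context variables yield concurrent handlers.
                    new Thread(() -> h.onContextChange(value)).start();
                }
            }
        }
    }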

  3. Problem Statement: Testing Ubiquitous Systems • Discovering concurrency faults in context-aware ubiquitous systems • Failures occur frequently during attempts to handle multiple context changes • Existing testing techniques have limited effectiveness in discovering the underlying faults • Example context events: New SMS, Found Wi-Fi
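  To make the failure mode concrete, here is a minimal, hypothetical illustration (not TourApp code) of the kind of lost-update race that can arise when handlers for two context events, such as an arriving SMS and a discovered Wi-Fi network, interleave on shared state:

    // Hypothetical lost-update race between two context handlers that
    // both append a notification to a shared display. If the "New SMS"
    // and "Found Wi-Fi" handlers interleave between the read and the
    // write below, one handler's update is silently lost.
    class SharedDisplay {
        private String status = "";

        void append(String msg) {           // not synchronized
            String current = status;        // handler A reads "status"
            // ... handler B may run here and read the same old value ...
            status = current + msg + "\n";  // the later write wins
        }
    }

  Whether the fault manifests depends entirely on the interleaving, which is why uncontrolled test runs detect it only sporadically.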

  4. Additional Challenges during Testing • Hard to control when and how to input contexts • Middleware can introduce noise • Interference can occur between context handlers • Hard to define a precise oracle • Execution differs under various vectors of context inputs • Real environment is not available • Too many sensors are required • Sensed contexts can be inconsistent • Example: The border between rooms at an exhibition

  5. Contributions • CAPPs: Context-Aware Program Points • A model of how context affects program execution • CAPPs-Based Test Adequacy Criteria • Criteria for evaluating test suite effectiveness • Defined in terms of sets of test drivers • A test driver is a sequence of CAPPs to cover • CAPPs-Driven Test Suite Enhancement • Automated exploration of alternative interleavings of context handler invocations • Schedules interleavings via special instrumentation (sketched below)
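  A minimal sketch of the instrumentation idea, under assumed names (this is not the authors' implementation): the instrumentor inserts a call at each CAPP that blocks the executing handler until the test driver releases it, so the driver dictates the interleaving:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;
    import java.util.concurrent.Semaphore;

    // Hypothetical CAPPs-driven scheduler. The instrumentor inserts
    // Scheduler.reached("cappN") at each context-aware program point;
    // the test driver then releases CAPPs in the order prescribed by
    // the selected adequacy criterion.
    class Scheduler {
        private static final ConcurrentMap<String, Semaphore> gates =
                new ConcurrentHashMap<>();

        private static Semaphore gate(String capp) {
            return gates.computeIfAbsent(capp, k -> new Semaphore(0));
        }

        // Inserted at a CAPP: the handler blocks until it is released.
        static void reached(String capp) throws InterruptedException {
            gate(capp).acquire();
        }

        // Called by the test driver to enact one step of the plan.
        static void release(String capp) {
            gate(capp).release();
        }
    }

  A test driver for the CAPP sequence { capp1, capp2 } would call Scheduler.release("capp1") and then Scheduler.release("capp2"), enacting that interleaving regardless of thread timing.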

  6. Application for Case Study: TourApp • Released with the Context Toolkit

  7. Application for Case Study: TourApp • Location: Registration Room • Application Response: Pop-Up Registration Form

  8. Application for Case Study: TourApp • Location: Demo Room 1 • Application Response: Display Lecture Information

  9. Application for Case Study: TourApp • Power: Low Battery! • Application Response: Confine Display Updates

  10. Overview of Testing Infrastructure (pipeline diagram) • Inputs: Context-Aware Program (P), Test Suite (T), Selected Context Adequacy Criteria • Components: CAPPs Identifier, Context Driver Generator, Program Instrumentor, Context Manipulator • Intermediate artifacts: Annotated Flow Graph of P, Test Drivers (D) • Outputs: Achieved Coverage and Test Case Extension, Feedback on Coverage

  11. Test Adequacy Criteria: Context Adequacy (CA) • A test driver covering at least one CAPP in each type of context handler • Examples: { capp1, capp2 }, { capp3, capp2 }, or { capp2, capp1 }

  12. Test Adequacy Criteria: Switch-to-Context Adequacy (StoC-k) • A set of test drivers covering all possible combinations of k switches between context handlers • StoC-1 example: { capp1, capp2 }, { capp5, capp3 }, { capp3, capp3 }, { capp5, capp5 }
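  Assuming each context handler is represented by a single CAPP (a simplification; the paper's internal representation is not shown here), the StoC-1 requirements amount to all ordered pairs of handler CAPPs, including a handler switching to another instance of itself:

    import java.util.ArrayList;
    import java.util.List;

    // Sketch: enumerate StoC-1 driver requirements as every ordered
    // pair (from, to) of representative CAPPs, i.e. every possible
    // single switch between context handlers.
    class StoCEnumerator {
        static List<String[]> stoc1(List<String> handlerCapps) {
            List<String[]> drivers = new ArrayList<>();
            for (String from : handlerCapps)
                for (String to : handlerCapps)
                    drivers.add(new String[] { from, to });
            return drivers;
        }

        public static void main(String[] args) {
            // Three handler CAPPs yield 3 x 3 = 9 required drivers.
            for (String[] d : stoc1(List.of("capp1", "capp3", "capp5")))
                System.out.println("{ " + d[0] + ", " + d[1] + " }");
        }
    }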

  13. Test Adequacy Criteria: Switch-with-Preempted-Capp Adequacy (StoC-k-FromCapp) • A set of test drivers covering all possible combinations of k switches between context handlers, with each switch exercised at every CAPP • StoC-1-FromCapp example: { capp1, capp2 }, { capp3, capp2 }, { capp4, capp5 }, { capp2, capp3 }, { capp5, capp1 }, { capp6, capp3 }, { capp3, capp3 }, { capp5, capp5 }

  14. Case Study Design and Settings: TourApp • 11 KLOC of Java, 4 seeded faults • Test suite of 36 end-to-end test cases • Executing the test suite takes 10 minutes • Studied 4 versions: • originalTourApp: unmodified original • manipulatedTourApp: instrumented with calls to our scheduler methods • delayShortTourApp: instrumented with 1–3 second random delays (sleep()) • delayLongTourApp: instrumented with 1–10 second random delays (sleep())
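  The delay variants can be pictured as follows (a sketch; the method name and placement are assumptions): the instrumentor inserts a randomized sleep() at each instrumented point, perturbing thread timing rather than controlling it the way manipulatedTourApp does:

    import java.util.Random;

    // Hypothetical delay instrumentation for delayShortTourApp:
    // a random 1-3 second sleep at each instrumented point.
    // delayLongTourApp would call randomDelay(1, 10) instead.
    class DelayInstrumentation {
        private static final Random RNG = new Random();

        static void randomDelay(int minSeconds, int maxSeconds) {
            int millis =
                    (minSeconds + RNG.nextInt(maxSeconds - minSeconds + 1)) * 1000;
            try {
                Thread.sleep(millis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

  Random sleeps shuffle interleavings only by chance, which foreshadows why the controlled scheduling in manipulatedTourApp fares better in the results that follow.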

  15. Results: Cost (Timings with manipulatedTourApp) • Summary of study: execution time increases with more demanding context coverage criteria, as more context scenarios are required

  16. Results: Feasibility (Drivers in manipulatedTourApp) • Summary of study: some test drivers were not realised in the application within the set timeouts • Improvements in the generation of D may be needed via better flow-sensitive analysis

  17. Results: Effectiveness (Percentage of Contextual Coverage and Fault Detection) • Summary of study: coverage decreases with more powerful criteria, but only slightly for manipulatedTourApp • manipulatedTourApp performs the best, especially with more powerful criteria

  18. Related Work • Deterministic testing of concurrent programs (Carver & Tai, Taylor et al.) • Concurrency intrinsic to program, not execution environment • Metamorphic testing for context-aware applications (Tse et al.) • Oracles embodying metamorphic properties must be defined by the tester • Data flow coverage criteria for context-aware applications (Lu et al.) • Does not support the notion of CAPPs or manipulation of test executions • Random sleeps for test perturbation (Edelstein et al.) • Inferior to controlled scheduling of context switches

  19. Conclusion • Defined the CAPPs model of how context changes affect context-aware applications • Defined test adequacy criteria in terms of this model • Created an automated technique to guide test executions in a way that systematically explores many interesting context change scenarios • Demonstrated the superiority of this technique in discovering concurrency faults

  20. Future Work • Investigate Additional Classes of Faults • User interface ‘faults’ • Adaptation priority faults • Memory leaks • ContextNotifier and TestingEmulator • Emulation infrastructure for testing • (Tools and libraries from vendors are pathetic!) • Simulation-Driven Testing • Test case execution driven by mobility traces from simulation runs

  21. Questions? http://www.ubival.org/
