
Black-box conformance testing for real-time systems


Presentation Transcript


  1. Black-box conformance testing for real-time systems Stavros Tripakis VERIMAG Joint work with Moez Krichen

  2. Black-box conformance testing. Does the SUT (system under test) conform to the Specification? [Diagram: the Tester feeds inputs to the black-box SUT, observes its outputs, and issues verdicts (pass/fail/?).]

  3. Model-based testing • The specification is given as a formal model. • The SUT also behaves according to an unknown model (black-box). • Conformance of SUT to the specification is formally defined w.r.t. these models.

  4. Real-time testing. [Diagram: the Tester exchanges inputs and outputs with the SUT and issues verdicts (pass/fail).] Our models of preference: in theory, timed automata; in practice, the IF language (www-verimag.imag.fr/~async/IF/). The tester observes events and their time-stamps.

  5. Plan of talk • Specification model • Conformance relation • Analog & digital tests • Test generation • Tool and case studies

  6. Plan of talk • Specification model • Conformance relation • Analog & digital tests • Test generation • Tool and case studies

  7. Specification model: general timed automata with input/output/unobservable actions • Timed automata = finite-state machines + clocks. • Input/output actions: interface with environment and tester. • Unobservable actions: model the partial observability of the tester; good for compositional specifications.

  8. Simple example 1: “Output b at most 4 time units after receiving input a.” [Automaton: on input a?, reset x := 0; then output b! with guard x ≤ 4.]
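
To make the example concrete, here is a minimal sketch of how this two-location specification could be encoded as a data structure. The dictionary layout, lambda guards, and names are invented for illustration; they are not the IF/TTG input format.

```python
# Illustrative encoding (not the IF/TTG format) of the specification
# "output b at most 4 time units after receiving input a".
# Each edge: (source, action, guard over clock values, clocks to reset, target).

SPEC = {
    "initial": "idle",
    "clocks": ["x"],
    "edges": [
        ("idle",    "a?", lambda v: True,          ["x"], "waiting"),
        ("waiting", "b!", lambda v: v["x"] <= 4.0, [],    "idle"),
    ],
}
```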

  9. Compositional specifications. [Diagram: components A, B, and C composed; they interact through internal (unobservable) actions.]

  10. Compositional specifications with internal (unobservable) actions (continued).

  11. Modeling assumptions on the environment. [Diagram: the environment composed with the system (spec).] Compose the specification with a model of the environment, and export the interactions between them (make them observable).

  12. Simple example 2: “Output b at most 4 time units after receiving input a, provided a is received no later than 10 time units.” [Automaton: input a? with guard x ≤ 10, reset x := 0; then output b! with guard x ≤ 4.] Constraints on the inputs model assumptions; constraints on the outputs model requirements.

  13. Simple example 2, a compositional modeling of the same example: the requirement automaton from example 1 (a?, reset x := 0; b! with x ≤ 4) composed with an environment automaton that emits a! under guard y ≤ 10 (reset y := 0).

  14. Plan of talk • Specification model • Conformance relation • Analog & digital tests • Test generation • Tool and case studies

  15. Conformance relation: tioco • A timed extension of Tretmans’ ioco (input-output conformance relation). • Informally, A tioco B if every output of the implementation, including time delays, is allowed by the specification. • A: implementation/SUT (input-complete). • B: specification (not necessarily input-complete, so as to model environment assumptions).

  16. Conformance relation • Formally: A tioco B (A: implementation, B: specification) iff ∀σ ∈ Traces(B). out(A after σ) ⊆ out(B after σ).

  17. Conformance relation • where: A after σ = { s | ∃ρ ∈ Seq. s0 -ρ-> s ∧ proj(ρ, Obs) = σ } and out(S) = delays(S) ∪ outputs(S).

  18. Conformance relation • where: delays(S) = { t ∈ ℝ | ∃s ∈ S. ∃ρ ∈ UnobsSeq. time(ρ) = t ∧ s -ρ-> } and outputs(S) = { a ∈ Outputs | ∃s ∈ S. s -a-> }.
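
To make the set-inclusion definition concrete, the sketch below checks tioco over finite enumerations; this is illustrative only, since tioco is defined over dense time and tools like TTG work symbolically. The helpers out_after_impl and out_after_spec (returning the set of outputs and sampled delays possible after a trace) are assumed inputs, not part of the talk.

```python
# tioco over finite approximations: dense-time delays are represented
# here by finitely many sampled values inside the out() sets.

def tioco(out_after_impl, out_after_spec, spec_traces):
    """A tioco B iff for every observable trace sigma of the spec B,
    out(A after sigma) is included in out(B after sigma), where out()
    contains both output actions and possible delays."""
    return all(out_after_impl(sigma) <= out_after_spec(sigma)
               for sigma in spec_traces)
```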

  19. Examples. Spec: “Output b at most 4 time units after receiving input a.” [a?, reset x := 0; then b! with x ≤ 4.]

  20. Examples (continued). Impl 1: a?, reset x := 0; then b! exactly when x = 4.

  21. Examples (continued). Impl 1 (b! exactly at x = 4): OK!

  22. Examples (continued). Impl 2: a?, reset x := 0; then b! with x ≤ 2.

  23. Examples (continued). Impl 2 (b! with x ≤ 2): OK!

  24. Examples (continued). Impl 3: a?, reset x := 0; then b! exactly when x = 5.

  25. Examples (continued). Impl 3 (b! at x = 5): NOT OK!

  26. Examples (continued). Impl 4: accepts input a? but never outputs b.

  27. Examples (continued). Impl 4: NOT OK! Under tioco, delays count as outputs, so silently letting more than 4 time units pass after a is itself an output the specification does not allow.

  28. Plan of talk • Specification model • Conformance relation • Analog & digital tests • Test generation • Tool and case studies

  29. Timed tests • Two types of tests: • Analog-clock tests: • Can measure real-time precisely • Difficult to implement for real-time SUTs • Good (flexible) for discrete-time SUTs with unknown time step • Digital-clock tests: • Can count “ticks” of a periodic clock/counter • Implementable for any SUT • Conservative (may say PASS when it’s FAIL)

  30. Timed tests • Analog-clock tests: they can observe real time precisely. • Digital-clock (or periodic-sampling) tests: they only have access to a periodic clock. [Timelines illustrating both follow on the next slide.]

  31. Timed tests • Analog-clock tests: they can observe real time precisely, e.g. the timed trace a at 1.3, b at 2.4, c at 2.7. • Digital-clock (or periodic-sampling) tests: they only have access to a periodic clock, e.g. with ticks at times 1, 2, 3 they observe only the sequence tick, a, tick, b, c.

  32. Note • Digital-clock tests do not mean we discretize time: • The specification is still dense-time. • Only the capabilities of the observer are discrete-time. • Many dense-time traces will look the same to the digital observer (verdict approximation).
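
A small sketch shows the verdict approximation concretely, assuming a perfectly periodic tick of period 1 (the function name and encoding are invented for illustration): two different dense-time traces collapse to the same digital observation.

```python
# Map a dense-time trace [(event, timestamp), ...] to the sequence a
# digital-clock tester observes: events interleaved with 'tick's.

def digitize(timed_trace, period=1.0):
    out, next_tick = [], period
    for event, t in sorted(timed_trace, key=lambda p: p[1]):
        while next_tick <= t:       # emit all ticks firing before this event
            out.append("tick")
            next_tick += period
        out.append(event)
    return out

# Two different dense-time traces, same digital observation:
digitize([("a", 1.3), ("b", 2.4), ("c", 2.7)])  # ['tick', 'a', 'tick', 'b', 'c']
digitize([("a", 1.9), ("b", 2.1), ("c", 2.9)])  # ['tick', 'a', 'tick', 'b', 'c']
```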

  33. Plan of talk • Specification model • Conformance relation • Analog & digital tests • Test generation • Tool and case studies

  34. Untimed tests • Can be represented as finite trees (“strategies”). [Tree: the tester sends input i; each possible output o1, o2, o3, o4 leads either to a verdict (pass/fail) or to a next input i1, i2, i3 and a deeper subtree.]

  35. Digital-clock tests • Can also be represented as finite trees. [Same tree shape, with an extra branch at each node for tick, modeling the tick of the tester’s clock; unexpected observations lead to fail.]
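
A hypothetical encoding of such a test tree, with invented observation names: each node either is a verdict or specifies the next input together with one branch per possible observation (outputs plus the tester’s own tick); any observation without a branch yields fail.

```python
# A node is either a verdict string or ("send", input, branches).
TEST = ("send", "i", {
    "o1":   "fail",
    "o2":   ("send", "i1", {"o3": "pass", "tick": "fail"}),
    "tick": ("send", "i2", {"o4": "pass", "tick": "fail"}),
})

def run(tree, sut_step):
    """Execute a test tree; sut_step(input) returns the next
    observation ('o1', ..., or 'tick'). Unlisted observations fail."""
    while isinstance(tree, tuple):
        _, inp, branches = tree
        tree = branches.get(sut_step(inp), "fail")
    return tree  # 'pass' or 'fail'

run(TEST, lambda inp: "o1")  # -> 'fail'
```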

  36. Analog-clock tests • Cannot be represented as finite trees: after sending input i, an output may arrive after any of an infinite number of unknown delays (0.1, 0.11, 0.2, ...), so the tree would need infinitely many branches before reaching pass/fail. • Solution: on-the-fly testing.

  37. On-the-fly testing • Generate the testing strategy during test execution. • Symbolic generation. • Can be applied to digital-clock testing as well.

  38. Test generation principle. Maintain a current estimate = the set of possible states of the specification. On each observation (event or delay), keep the runs matching the observation to obtain the next estimate. If the estimate becomes empty, FAIL.
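
A minimal sketch of this loop, assuming an explicit (enumerated) state space; successors(state, obs) is a hypothetical helper returning the specification states reachable via the observation plus any unobservable moves. Real tools compute the same update symbolically.

```python
def on_the_fly(initial_states, observations, successors):
    estimate = set(initial_states)      # current estimate
    for obs in observations:            # obs is an event or a delay
        estimate = {s2 for s in estimate for s2 in successors(s, obs)}
        if not estimate:                # no run of the spec matches: FAIL
            return "FAIL"
    return "PASS"
```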

  39. Test generation algorithmics • Sets of states are represented symbolically (standard timed automata technology, DBMs, etc.) • Updates amount to performing some type of symbolic reachability. • Implemented in verification tools, e.g., Kronos. • IF has more: dynamic clock creation/deletion, activity analysis, parametric DBMs, etc.
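
For readers unfamiliar with DBMs, the sketch below shows the core idea under simplifying assumptions (no strict/non-strict bound distinction): entry D[i][j] bounds the clock difference x_i - x_j, with index 0 standing for the constant 0, and Floyd-Warshall tightening yields the canonical form. This is illustrative, not the IF implementation.

```python
INF = float("inf")

def canonical(D):
    """Tighten a DBM to canonical form (all-pairs shortest paths)."""
    n = len(D)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# Zone 0 <= x1 <= 4 over one clock x1:
# D[0][1] bounds 0 - x1 (so x1 >= 0); D[1][0] bounds x1 - 0 (so x1 <= 4).
zone = canonical([[0, 0], [4, 0]])
```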

  40. Digital-clock test generation • Can be on-the-fly or static; same algorithms. • Trick: compose the original specification automaton with a “Tick” automaton (tick! when z = 1, reset z := 0) to obtain a new specification automaton, then generate an “untimed” tester in which tick is observable. • Clock skew, etc., can be modeled using other “Tick” automata.
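
In the same illustrative encoding as the earlier SPEC sketch (names invented), the Tick automaton of this slide is a single self-looping location; composing it with the specification by the usual product construction gives the new specification automaton.

```python
# The "Tick" automaton: emit tick! exactly when clock z reaches 1,
# then reset z (same invented edge format as the SPEC sketch).
TICK = {
    "initial": "run",
    "clocks": ["z"],
    "edges": [
        ("run", "tick!", lambda v: v["z"] == 1.0, ["z"], "run"),
    ],
}
# To model skew, replace the guard, e.g. lambda v: 0.9 <= v["z"] <= 1.1,
# as the slide suggests with other "Tick" automata.
```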

  41. Recent advances • Representing analog-clock tests as timed automata. • Coverage criteria.

  42. Timed automata testers • On-the-fly testing needs to be fast: the tester interacts with the SUT in real time, BUT reachability computations can be costly. • Can we generate a timed automaton tester? • The problem is undecidable in general: timed automata are not determinizable. • Pragmatic approach: fix the number of clocks of the tester and their reset positions, then synthesize the rest (locations, guards, etc.).

  43. Timed automata testers • Example. Spec: a? (reset x := 0), then b! with 1 ≤ x ≤ 4. Tester: emit a! (reset x := 0); observing b? with 1 ≤ x ≤ 4 gives PASS; observing b? with x < 1, or waiting past x > 4 without b, gives FAIL.

  44. Coverage • A single test is not enough. • Exhaustive test suite up to given depth: • Explosion: # of tests grows exponentially! • Coverage: few tests, some guarantees. • Various criteria: • Location: cover locations of specification. • Edge: cover edges of specification. • State: cover states (location,clocks) of spec. • Algorithms: • Based on symbolic reachability graph. • Performance can be impressive: 8 instead of 15000 tests.
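
The coverage-driven selection can be pictured as a set-cover problem; below is a greedy sketch (a classic heuristic, not necessarily the talk’s exact algorithm), where tests maps each candidate test to the set of specification locations (or edges) its runs visit.

```python
def cover(tests, targets):
    """Greedily pick few tests whose runs together cover all targets
    (specification locations or edges)."""
    chosen, uncovered = [], set(targets)
    while uncovered:
        best = max(tests, key=lambda t: len(tests[t] & uncovered))
        if not tests[best] & uncovered:
            break                        # remaining targets unreachable
        chosen.append(best)
        uncovered -= tests[best]
    return chosen
```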

  45. Plan of talk • Specification model • Conformance relation • Analog & digital tests • Test generation • Tool and case studies

  46. TTG: Timed Test Generation implementation • Implemented on top of the IF environment.

  47. Tool • Input language: IF timed automata • Dynamic creation of processes/channels. • Synchronous/asynchronous communication. • Priorities, variables, buffers, external data structures, etc. • Tool options: • Generate analog tester (or monitor). • Generate digital test/monitor suite: • Interactively (user guided). • Exhaustive up to given length. • Coverage (current work).

  48. Real-time monitoring/testing. [Diagram: a Monitor passively observes the SUT’s outputs and issues verdicts (pass/fail); a Tester additionally drives the SUT with inputs, observes its outputs, and issues verdicts (pass/fail).]

  49. A sample test generated by TTG

  50. Case studies • A number of simple examples tried out. • A simple light controller: 15000 digital tests up to depth 8; 8 tests suffice to cover the specification. • A larger example: the NASA K9 Rover executive. • SUT: 30000 lines of C++ code. • TA specification generated automatically from mission plans. • Monitors generated automatically from the TA specs. • Traces generated by NASA and checked by us.
