
Center for Information Technology - IRST



Presentation Transcript


  1. Center for Information Technology - IRST
Code Based Test Case Generation
Nadia Alshahwan, Gordon Fraser, Yue Jia, Kiran Lakhotia, David Schuler, Paolo Tonella

  2. Premise
Different test case generators experience different problems, so we have to distinguish among different classes of generators…
• search-based generators (SBG)
• dynamic symbolic generators (DSG)
• explicit-state model checkers (ESMC)
• hybrid/integrated generators (HYB)
• random generators (RND)

  3. Premise
…and generators’ goals:
• structural coverage
• killing mutants
• violating assertions (including generating exceptions or crashes)

  4. Brainstorming the open problems…
• P1: string inputs that satisfy some language and are parsed
• P2: unbounded inputs: lists, arrays, (also) strings
• P3: complex data structures
• P4: loops
• P5: method sequences necessary to prepare for testing
• P6: mock database, file-system, environment synthesis
• P7: mock called functions / services synthesis
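As a hypothetical illustration of P1 (the function and its "key=value" format are invented for this sketch, not taken from the slides): when the code under test parses its string input against even a tiny language, a random generator almost never produces a string that survives the validation, so the code behind it stays uncovered.

```python
def parse_config(s):
    """Hypothetical code under test: accepts only 'key=value;key=value' strings."""
    entries = {}
    for part in s.split(";"):
        if "=" not in part:
            raise ValueError("malformed entry: %r" % part)
        key, value = part.split("=", 1)
        entries[key] = value
    return entries  # reached only by inputs drawn from the 'key=value' language
```

A string in the language, such as `"host=irst;port=80"`, exercises the deep path, while virtually any random string fails at the first validation check; this is why P1 calls for grammar-aware input generation rather than unconstrained string search.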

  5. Brainstorming the open problems…
• P8: propagation of infection to output for killing mutants
• P9: code that uses type-dependent constructs (e.g., instanceof)
• P10: code that uses reflection
• P11: semantically meaningful input and preconditions on input
• P12: understandability of the test suite
• P13: understandability of assertion violations produced by automatically generated test cases
• P14: generation of tests for goals that go beyond coverage
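P8 can be made concrete with a small invented example (both functions below are hypothetical, not from the slides): a mutant can infect the intermediate program state while the effect is masked before it reaches the output, so only carefully chosen inputs kill it.

```python
def scaled(x):
    y = x + 1        # original statement
    return y // 100  # coarse output can mask the infected value of y

def scaled_mutant(x):
    y = x - 1        # mutant: '+' replaced by '-'
    return y // 100
```

For x = 5 the mutant infects y (4 instead of 6), yet both versions return 0 and the mutant survives; only an input near the division boundary, such as x = 99, propagates the infection to the output (1 vs. 0). A generator aiming at mutation score must therefore reason about propagation, not just about reaching the mutated statement.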

  6. Clustering the problems…
• C1: complex data structures:
  • P1: string inputs
  • P2: unbounded inputs
  • P3: complex data structures
  • P11b: preconditions on inputs
• C2: complex control flow and code constructs:
  • P4: loops
  • P5: method sequences
  • P9: type-dependent constructs
  • P10: reflection
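A minimal sketch of the method-sequence problem in C2 (class, methods, and the token value are all invented): the interesting branch of `query` is reachable only after a specific sequence of prior calls, which the generator must discover.

```python
class Connection:
    """Hypothetical class under test: query() needs a prior open/login sequence."""

    def __init__(self):
        self._open = False
        self._authenticated = False

    def open(self):
        self._open = True

    def login(self, token):
        # Authentication succeeds only on an open connection with the right token.
        if self._open and token == "secret":
            self._authenticated = True

    def query(self, q):
        if not self._authenticated:
            raise RuntimeError("not authenticated")
        return "result:" + q  # target branch: needs open(); login("secret") first
```

Calling `query` on a fresh object covers only the error branch; covering the return statement requires synthesizing the sequence `open(); login("secret"); query(...)`, which is exactly the search space P5 points at.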

  7. Clustering the problems…
• C3: automated synthesis of mocks:
  • P6: mock database, file-system
  • P7: mock called functions / services synthesis
• C4: code generation goals:
  • P8: propagation of the fault infection
  • P14: go beyond coverage
• C5: understandability of generated tests:
  • P11a: semantically meaningful inputs
  • P12: understandability of the test suite
  • P13: understandability of assertion violations
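To make C3 concrete, the sketch below hand-writes the kind of in-memory file-system double that a tool like the proposed AUTOMOCK would synthesize automatically (function names and file contents are invented for illustration).

```python
import io

def count_words(path, open_fn=open):
    # Code under test touches the environment only through open_fn,
    # so a test can substitute an in-memory double for the real file system.
    with open_fn(path) as f:
        return sum(len(line.split()) for line in f)

def fake_open(path):
    # Hand-written stand-in for a synthesized mock file system.
    files = {"notes.txt": "alpha beta\ngamma\n"}
    return io.StringIO(files[path])
```

With the double in place, `count_words("notes.txt", open_fn=fake_open)` runs without touching the disk; automating the construction of such doubles from observed environment interactions is the synthesis problem behind P6 and P7.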

  8. Promising directions
• C1: complex data structures
• C3: automated synthesis of mocks
• C5: understandability of generated tests

  9. Papers
• AUTOMOCK: automated synthesis of a mock environment for test case generation.
• A novel approach to generate complex data structures in automated test data generation.

  10. Collaboration
Joint research to develop AUTOMOCK.

  11. Grant proposal
A method to improve the understandability of automatically generated test cases.
