
A Review of Software Testing - P. David Coward

This review explores the different types of software testing, including functional and structural testing, and discusses various testing techniques such as partition analysis and random testing. The importance of thorough testing and the use of metrics to assess test coverage are also addressed.


Presentation Transcript


  1. A Review of Software Testing - P. David Coward

  2. What is Testing? • Does the software match its requirements? • The specification itself might not be correct • Testing attempts to assess how well these tasks have been performed • The expected output is compared with the output from the test

  3. Two Types of Testing • Functional: addresses whether the program produces the correct output • Non-functional: implementing the customer's functions is not good enough on its own; the software must also meet additional requirements, such as satisfying legal obligations, performing within specified response times, and meeting documentation standards • Regression testing: the name given to testing carried out following modifications

  4. Views of Testing • The aims of testing can be constructive or destructive • Constructive: demonstrating that the software works; such testing may be too gentle and so miss some faults • Destructive: deliberately finding faults in the software; this can bruise the programmer's ego, so one solution is a separate QA team • Testers have to demonstrate the absence of faults by actively looking for them • Metrics must be used to assess the thoroughness of the tests and to develop the techniques that are applied

  5. Testing Strategies • Functional: the functions are identified from the specification and any faults documented • Two steps: I. identify the functions, II. create test data which will check those functions • Structural: used to exercise code that is present in the software but not apparent from the requirements, e.g. a method to retrieve data from a database

  6. Functional Testing • Does not care how the program performs its functions (also known as black-box testing) • Rules are constructed for the direct identification of functions and test data from systematic design documentation • It is not enough to classify faults; the properties of the faults have to be distinguished • The tester submits test cases to the program based on the intended function of the program • An oracle states the expected output for a particular test case

  7. Structural Testing • White-box testing (based upon the detailed design rather than the functional requirements) • The program is executed with test cases and the functions it performs are compared with the required functions for congruence • Structural testing involves execution, and a test case may exercise only a single path through the program • Ideally every path would be tested, but the number of paths grows by combinatorial explosion
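The combinatorial explosion in path counts can be sketched with a toy example (the function below is hypothetical, purely for illustration): each independent branch doubles the number of paths, so n branches yield 2**n of them.

```python
# Each independent if-statement doubles the path count: 3 branches
# give 2**3 = 8 distinct paths through this hypothetical function.
from itertools import product

def classify(a: bool, b: bool, c: bool) -> int:
    result = 0
    if a:            # branch 1
        result += 1
    if b:            # branch 2
        result += 2
    if c:            # branch 3
        result += 4
    return result

# Enumerate every path by driving all combinations of branch outcomes.
paths = {classify(a, b, c) for a, b, c in product([False, True], repeat=3)}
print(len(paths))  # 8
```

With 30 such branches there would be over a billion paths, which is why testing every path is rarely practical.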

  8. Structural Testing Cont. • Problems: • Infeasible path: a path that can never be executed because the predicates at its conditional statements contradict one another • Island code: code that can never be reached because it follows an unconditional transfer of control or a program termination and is not the destination of any jump
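A minimal sketch of an infeasible path, using a hypothetical function: no input can satisfy both predicates, so the path that takes both True branches can never execute.

```python
# The predicates x > 10 and x < 5 contradict each other, so the path
# through both True branches ("gt10,lt5") is infeasible.
def f(x: int) -> str:
    taken = []
    if x > 10:
        taken.append("gt10")
    if x < 5:
        taken.append("lt5")
    return ",".join(taken)

print(f(12))  # gt10
print(f(3))   # lt5
# No integer makes f return "gt10,lt5": that path is infeasible.
```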

  9. Static vs. Dynamic Analysis • Static: testing without executing the software with data (a mathematical approach) • Dynamic analysis: probes are inserted in the software and record the frequency of execution of elements of the program during execution • Dynamic analysis provides a bridge between functional and structural testing: initially, functional testing dictates a set of test cases whose coverage is examined by dynamic analysis; the code can then be analysed structurally to determine test cases which will exercise code left idle by the previous tests
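The probes of dynamic analysis can be pictured as simple execution counters; the instrumented function below is a hypothetical sketch, not a real instrumentation tool.

```python
# Probes record how often each element of the program executes.
# Here a dict of counters stands in for the probes.
hits = {"then": 0, "else": 0}

def classify(x: int) -> str:
    if x > 0:
        hits["then"] += 1    # probe on the then-branch
        return "positive"
    hits["else"] += 1        # probe on the else-branch
    return "non-positive"

for v in [3, -1, 7]:
    classify(v)
print(hits)  # {'then': 2, 'else': 1}
```

A count of zero against a probe would reveal code left idle by the test set, suggesting a further structural test case.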

  10. Various Testing Techniques • Static structural: no execution; the features assessed vary with the technique • Symbolic execution: no real execution; data values are replaced by symbolic values, producing one algebraic expression per output variable; a middle ground between testing with data and program proving • Performed on a flow graph containing the decision points and the assignments associated with each branch; traversing the flow graph from the entry point along a particular path produces a list of statements and branch predicates
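A toy illustration of symbolic execution, assuming string-built expressions stand in for real symbolic values: instead of running on data, each assignment builds up an algebraic expression, ending with one expression per output variable.

```python
# Toy symbolic execution: inputs are the symbols X and Y rather than
# concrete values, and assignments compose algebraic expressions.
def symbolic_run() -> str:
    x, y = "X", "Y"         # symbolic input values
    a = f"({x} + {y})"      # a = x + y
    b = f"({a} * 2)"        # b = a * 2
    return b                # the output variable's expression

print(symbolic_run())  # ((X + Y) * 2)
```

A real symbolic executor would also collect the branch predicates along the chosen flow-graph path as a path condition.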

  11. Various Testing Techniques Cont. • Partition analysis: uses symbolic execution to identify sub-domains of the input data domain; the specification is expressed in a manner close to program code • Program proving: the common method is inductive assertion verification • Assertions are placed at the beginning and end of selected procedures; each assertion describes the procedure's function mathematically • If the truth of the input assertion on entry to the code ensures the truth of the output assertion on exit, then the procedure is correct with respect to its assertions • "Validity" is achieved when a wide audience is unable to disprove the proof
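The input and output assertions of inductive assertion verification might be sketched as executable checks; the integer square-root procedure below is a hypothetical example, and a real proof would establish the implication mathematically rather than at run time.

```python
# Input assertion on entry, output assertion on exit: together they
# describe the procedure's function mathematically.
def integer_sqrt(n: int) -> int:
    assert n >= 0                          # input assertion
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) ** 2       # output assertion
    return r

print(integer_sqrt(10))  # 3
```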

  12. Various Testing Techniques Cont. • Anomaly analysis: • 1st level: the compiler checks syntax • 2nd level: searches for anomalies which are not outlawed by the programming language: • the existence of unexecutable (island) code • problems concerning array bounds • failure to initialize variables • labels and variables which are declared but never used • jumps into and out of loops • This method does not find a fault as such, only a potential one (a data-flow anomaly)
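A minimal data-flow anomaly checker over a toy straight-line program can illustrate the second level; the (target, sources) encoding of statements is an assumption made purely for this sketch. As the slide notes, a flagged variable is only a potential fault.

```python
# Flag variables that are read before ever being assigned: a classic
# data-flow anomaly. Each statement is (target, [source variables]).
def find_anomalies(program):
    defined = set()
    anomalies = []
    for target, sources in program:
        for s in sources:
            if s not in defined:
                anomalies.append(s)   # used before initialization
        defined.add(target)
    return anomalies

toy = [
    ("a", []),           # a = <constant>
    ("b", ["a", "c"]),   # b = a + c  <- c is never initialized
]
print(find_anomalies(toy))  # ['c']
```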

  13. Dynamic Functional Test Cases • Test cases are executed with no consideration given to the detailed design • Domain testing: cases are created from an informal classification of the requirements into domains; the data or the requirements provide the foundation of the cases, and results are compared with expectations • Random testing: produces data without reference to the code or the specification; the main tool is a random number generator, from which a measure of operational reliability can be derived • Disadvantage: no guarantee of complete coverage of the code, since only a small subset of the possible input data is used
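Random testing can be sketched as follows, with a hypothetical abs_clamp function under test and a seeded generator; note that even a run with zero failures gives no guarantee that every branch was covered.

```python
# Random testing: generate inputs with a random number generator and
# compare the program under test against an oracle.
import random

def abs_clamp(x: int) -> int:      # hypothetical program under test
    if x < 0:
        x = -x
    return 100 if x > 100 else x

random.seed(42)                    # reproducible test data
failures = 0
for _ in range(1000):
    x = random.randint(-1000, 1000)
    expected = min(abs(x), 100)    # oracle for the expected output
    if abs_clamp(x) != expected:
        failures += 1
print(failures)  # 0
```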

  14. Dynamic Functional Test Cases Cont. • Adaptive perturbation testing: assesses the quality of a set of test cases and uses that effectiveness measure to generate further test cases which increase the effectiveness • Makes use of executable assertions which the developer inserts into the software • An assertion is a statement about the reasonableness of the values of variables • Initial test runs record any assertion violations; optimisation routines then search for the best value to replace a discarded value so as to maximise assertion violations, continuing until the number of violations can no longer be increased
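One way to picture adaptive perturbation testing, with a hypothetical buggy routine and hand-picked candidate values: an executable assertion states what counts as reasonable, and the perturbed value that maximises assertion violations is kept.

```python
# The executable assertion says the result must be non-negative and
# equal |x|. The routine under test (assumed, deliberately buggy)
# forgets to negate negative inputs.
def my_abs(x: int) -> int:
    return x                      # bug: should be -x when x < 0

def violation_count(x: int) -> int:
    y = my_abs(x)
    return 0 if (y >= 0 and y == abs(x)) else 1

candidates = [7, 3, 0, -2, -9]    # perturbed replacements for one case
best = max(candidates, key=violation_count)
print(best)  # -2: a negative input triggers the assertion violation
```

A real optimisation routine would keep perturbing around the best value until the number of violations could no longer be increased.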

  15. Dynamic Functional Test Cases Cont. • Cause-effect graphing: explores combinations of inputs joined by Boolean logic operators; practical only for small cases
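A sketch of the input combinations behind cause-effect graphing, using a hypothetical access rule with two Boolean causes and one effect:

```python
# Enumerate every combination of Boolean causes and record the effect
# each one produces, as a cause-effect decision table.
from itertools import product

def effect(valid_user: bool, has_permission: bool) -> str:
    return "grant" if valid_user and has_permission else "deny"

table = {(u, p): effect(u, p) for u, p in product([False, True], repeat=2)}
print(table[(True, True)])                        # grant
print(sum(v == "deny" for v in table.values()))   # 3
```

The table doubles in size with each added cause, which is why the technique is only used for small cases.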

  16. Dynamic Structural Test Cases • The software is executed with test cases, and the test cases are created from an analysis of the software itself • Domain and computational testing: strategies for selecting test cases; they use the structure of the program and the selected paths to identify domains • Path computation: a set of algebraic expressions, one for each output variable; the path condition is the conjunction of the constraints along the path; a path whose condition has no solution is infeasible and cannot be executed • Domain error: occurs when a test case follows the wrong path because of a fault in a conditional statement
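A domain error can be sketched with a boundary fault in a conditional statement (both functions below are hypothetical): an input on the domain boundary is routed down the wrong path.

```python
# The fault > vs >= shifts the domain boundary, so the boundary value
# x = 10 follows the wrong path in the faulty version.
def correct(x: int) -> str:
    return "high" if x >= 10 else "low"

def faulty(x: int) -> str:
    return "high" if x > 10 else "low"   # boundary fault

print(correct(10), faulty(10))  # high low
```

This is why domain testing favours test cases on and near the boundaries between input domains.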

  17. Dynamic Structural Test Cases Cont. • Automatic test data generation: repeated use of path generation and predicate solving produces potential test cases for which high coverage of the paths is probable • High coverage may still not show that the software meets its specification, perhaps because the program omits one of the functions defined in the specification; formal specification methods are a solution to this • Mutation analysis: concerned with the quality of the test data (testing the test data) • Mutants are modified copies of the program, and the same test data is run against each mutant • If a mutant's result differs from the original's, the mutant is of no further use (dead) • If the result is the same, the mutant is of interest (live: the change has not been detected) • The ratio of dead to live mutants is the benchmark
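Mutation analysis might be sketched like this, with a toy program and mutants that are purely illustrative; note the third mutant is behaviourally equivalent to the original, so it stays live whatever test data is used.

```python
# Run the same test data against mutants of the original program.
# A mutant that produces a different result is dead (the test data
# detected the change); one that survives is live.
def original(a: int, b: int) -> int:
    return a + b

mutants = [
    lambda a, b: a - b,   # + mutated to -
    lambda a, b: a * b,   # + mutated to *
    lambda a, b: b + a,   # equivalent mutant: behaviour unchanged
]

tests = [(2, 3), (0, 5)]
dead = sum(any(m(a, b) != original(a, b) for a, b in tests)
           for m in mutants)
print(dead, len(mutants) - dead)  # 2 1
```

A high dead-to-live ratio suggests the test data is good at detecting small changes to the program.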

  18. Conclusion • The objective of software testing is to gain confidence in the software • There are many testing techniques which aim to help achieve thorough testing • Debate continues as to whether correctness can be inferred when a set of test cases finds no errors • For the production of correct software, the wider the range of testing techniques used, the better the software is likely to be

  19. Questions?
