
Software Testing and Debugging

YU. Department of Software Engineering. Software Testing and Debugging. Spring 2017. Mohammed Akour Assistant Professor. Software Engineering.


Presentation Transcript


  1. YU Department of Software Engineering Software Testing and Debugging Spring 2017 Mohammed Akour Assistant Professor

  2. Software Engineering • Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. • Source: IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology

  3. What is Software Testing? • Software testing is the process of operating software under specified conditions, observing or recording the results and making an evaluation of some aspect of the software. (IEEE/ANSI std 610.12-1990)

  4. What is Software Testing? • Software testing is the process of uncovering evidence of defects in software systems. A defect can be introduced during any phase of development or maintenance and results from one or more “bugs”, i.e., mistakes, misunderstandings, omissions, or even misguided intent on the part of the developers. (McGregor & Sykes 2001)

  5. Why is Testing So Difficult? • Poorly Constructed and Documented Software • Time and Budget Constraints • Large or Infinite Input Space, i.e., input values, combinations, and orderings • Generation of Test Oracles • Anticipating and Replicating the User Environment, i.e., hardware, operating system, and applications

  6. Testing Terminology • Caution: Terms widely abused in the literature! • Failure – is the manifested inability of a system or component to perform a required function within specified limits e.g., abnormal termination, or unmet time and space constraints of the software. • Fault - incorrect step, process, or data definition in the software. • Error - a human action that produces a fault. (Binder 2000 and McGregor & Sykes 2001)

  7. Testing Terminology (cont’d) • A bug refers to an error or a fault. • Debugging is the process of tracking down the source of failures (errors and/or faults) and making repairs. • A test case specifies the pretest state of the component under test (CUT) and its environment, the test inputs and conditions, and the expected results. • A test point is a specific value for test case input state variables.
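The test-case and test-point definitions above can be sketched as data. This is a minimal, hypothetical layout (the names `TestCase`, `add`, and `run_case` are illustrative, not from the slides): a test case records the pretest state, the inputs, and the expected result, while a test point is one concrete choice of input values.

```c
#include <assert.h>

/* Hypothetical component under test (CUT): a simple adder. */
int add(int a, int b) { return a + b; }

/* A test case: pretest state, test inputs, and expected result. */
typedef struct {
    const char *pretest_state; /* state of the CUT and its environment */
    int a, b;                  /* a test point: concrete input values   */
    int expected;              /* expected result for that test point   */
} TestCase;

/* Returns 1 (pass) when the actual output matches the expectation. */
int run_case(const TestCase *tc) {
    return add(tc->a, tc->b) == tc->expected;
}
```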

  8. Inspections and Testing • Software inspections: concerned with analysis of the static system representation to discover problems (static verification) • May be supplemented by tool-based document and code analysis. • Software testing: concerned with exercising and observing product behaviour (dynamic verification) • The system is executed with test data and its operational behaviour is observed.

  9. Software Inspections • These involve people examining the source representation with the aim of discovering anomalies and defects. • Inspections do not require execution of the system, so they may be used before implementation. • They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.). • They have been shown to be an effective technique for discovering program errors.

  10. Inspections and Testing • Inspections and testing are complementary and not opposing verification techniques. • Both should be used during the V & V process. • Inspections can check conformance with a specification but not conformance with the customer’s real requirements. • Inspections cannot check non-functional characteristics such as performance, usability, etc.

  11. Test-Driven Development • Test-driven development (TDD) is an approach to program development in which you inter-leave testing and code development. • Tests are written before code and ‘passing’ the tests is the critical driver of development. • You develop code incrementally, along with a test for that increment. You don’t move on to the next increment until the code that you have developed passes its test.

  12. TDD Process Activities • Start by identifying the increment of functionality that is required. This should normally be small and implementable in a few lines of code. • Write a test for this functionality and implement this as an automated test. • Run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality so the new test will fail. • Implement the functionality and re-run the test. • Once all tests run successfully, you move on to implementing the next chunk of functionality.
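The activities above can be sketched as one TDD increment in C. This is an illustrative example (the `is_even` increment and the test name are invented, not from the slides): the test was written first, initially failed against an empty implementation, and the smallest code that makes it pass was then added.

```c
#include <assert.h>

/* Increment of functionality: report whether a number is even.
 * The test below was written first; this one-liner was then
 * added to make it pass. */
int is_even(int n) { return n % 2 == 0; }

/* Automated test for this increment; it is re-run together with
 * all previously written tests before moving on to the next
 * chunk of functionality. */
void test_is_even(void) {
    assert(is_even(4) == 1);
    assert(is_even(7) == 0);
}
```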

  13. Software Testing Principles • A necessary part of a test case is a definition of the expected output or result. • a test case must consist of two components: • A description of the input data to the program. • A precise description of the correct output of the program for that set of input data. • If there are no expectations, there can be no surprises

  14. Principles (Cont’d) • A programmer should avoid attempting to test his or her own program. • After a programmer has constructively designed and coded a program, it is extremely difficult to suddenly change perspective and look at the program with a destructive eye. • Testing is more effective and successful if someone else does it. • A programming organization should not test its own programs. • From the organization’s viewpoint, the testing process may be seen as decreasing the probability of meeting the schedule and cost objectives. • Thoroughly inspect the results of each test.

  15. Principles (Cont’d) • Test cases must be written for input conditions that are invalid and unexpected, as well as for those that are valid and expected. • Examining a program to see if it does what it is supposed to do is only half the battle; the other half is seeing whether the program does what it is not supposed to do. • A payroll program that produces the correct paychecks is still an erroneous program if it also produces extra checks for nonexistent employees. • Avoid throwaway test cases unless the program is truly a throwaway program.
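As a sketch of testing invalid as well as valid inputs (the `safe_divide` function is hypothetical, not from the slides): the invalid case, division by zero, gets its own test case with its own expected outcome, rather than only testing the happy path.

```c
/* Returns 0 and writes the quotient on valid input; returns -1 on
 * the invalid, unexpected input den == 0 instead of crashing.
 * A test suite must exercise both outcomes. */
int safe_divide(int num, int den, int *out) {
    if (den == 0) {
        return -1;      /* invalid, unexpected input: reject it */
    }
    *out = num / den;
    return 0;           /* valid input: quotient written to *out */
}
```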

  16. Principles (Cont’d) • Do not plan a testing effort under the tacit assumption that no errors will be found. • The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section. • For instance, if a program consists of two modules A and B, and five errors have been found in module A and only one error has been found in module B, and if module A has not been purposely subjected to a more rigorous test, then this principle tells us that the likelihood of more errors in module A is greater than the likelihood of more errors in module B. • Testing is an extremely creative and intellectually challenging task.

  17. Testing Process Overview • The testing perspective must be considered, preferably by professional testers, when development methods and tools are selected. • The form and quality of the requirements specification also affects the testing process. • Product requirements are the source of test cases in system and acceptance testing. • System testers should participate in the gathering and validation of the requirements – they need to understand the requirements, assess risks, and check for testability.

  18. Testing Process (Cont’d) • Testability: • The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. [IEEE 610] • The degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met. [IEEE 610] • Test criteria - The criteria that a system or component must meet in order to pass a given test. [IEEE 610]

  19. Testing Process (Cont’d) There are two types of testing criteria: • Test data selection criterion – a rule used to determine which test cases to select. • Test data adequacy criterion – a rule used to determine whether or not sufficient testing has been performed. • A test data selection criterion serves as the basis for selecting a test set to satisfy some goal, while a test data adequacy criterion checks whether a previously selected test set satisfies the goal. [Weyuker 93]

  20. True Or False • What is your definition of software testing? • “Testing is the process of demonstrating that errors are not present.” • “The purpose of testing is to show that a program performs its intended functions correctly.” • “Testing is the process of establishing confidence that a program does what it is supposed to do.” • A program certainly has errors if it “does what it is not supposed to do.”

  21. True Or False (Cont’d) • Testing is the process of executing a program with the intent of finding errors. • A good test case has a high probability of detecting an undiscovered error. • A successful test case is one that detects an undiscovered error. • The concept of a program without errors is basically unrealistic. • A program can contain errors even if the program “does what it is supposed to do.”

  22. Dimensions of Software Testing • Each dimension represents an important consideration over a continuum of possible levels of effort or approaches • Who performs the testing? (Developers, Developers and Independent Testers, or Independent Testers) • Which pieces will be tested? (Test Nothing, Test a Sample, or Test Everything)

  23. Dimensions (Cont’d) • When is testing performed? (Test Components as They Are Developed, Test Every Day, or Test All Components Together at the End) • How is testing performed? (Knowledge of Specification Only, or Knowledge of Specification and Implementation) • How much testing is adequate? (from No Testing to Exhaustive Testing)

  24. Test Plan A test plan should: • Identify the roles each person will be assigned. • For each role allocate time and effort. • Schedule time allocated for each part of the testing effort. (Note that the development schedule usually drives much of the testing schedule) • Identify the resources needed for the testing effort e.g., hardware, software, expertise

  25. Roles in Testing Process Traditional Roles: • Unit Tester – responsible for testing the individual classes (or clusters of classes) as they are produced. • Integration Tester – responsible for testing a set of objects that are being brought together from different development sources, e.g., individuals or teams. • System Tester – has domain knowledge and is responsible for independently verifying that the completed application satisfies the requirements.

  26. Roles (Cont’d) • Test Manager – responsible for managing the test process i.e., requesting, coordinating, and making effective use of the resources allocated. • Team Lead – technical leadership for the test program, including test approach. • Test Engineers (usability, manual, automated, network, security) – specialist testers in each of the areas. • Test Environment Specialist – installs test tools and establishes test-tool environment

  27. Computer-Based Testing • The most important consideration in program testing is the design and creation of effective test cases. • Computer-based testing complements, but does not replace, human testing. • There is no guarantee that all errors will be exposed. • Complete testing is impossible.

  28. The key question • What subset of all possible test cases has the highest probability of detecting the most errors? • Random input testing • The process of testing a program by selecting, at random, some subset of all possible input values • In terms of the likelihood of detecting the most errors, a randomly selected collection of test cases has little chance of being close to an optimal subset • The least effective method

  29. Practical Method • Exhaustive black-box and white-box testing is impossible. • The recommended testing procedure is to start with some black-box testing techniques and supplement them with some white-box testing techniques.

  30. White Box Testing • Requires access to code • Looks at the program code and performs testing by mapping program code to functionality

  31. Why White Box Testing? • The program code truly represents what the program actually does and not just what it is intended to do! • It minimizes delay between defect injection and defect detection (i.e., does not postpone detection to be done by someone else). • Can catch “obvious” programming errors that do not necessarily map to common user scenarios (e.g., divide by zero).
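For example (an illustrative sketch, not from the slides; `average` is an invented function), reading the code of an averaging routine exposes a divide-by-zero on empty input, a case that common user scenarios might never exercise:

```c
/* Average of n values. Inspecting the code reveals that without
 * the guard, n == 0 would divide by zero even if users "never"
 * pass an empty list. Returning 0 for the empty case is a design
 * choice made explicit by the white-box view. */
int average(const int *v, int n) {
    if (n == 0) return 0;   /* guard found by reading the code */
    int sum = 0;
    for (int i = 0; i < n; i++) sum += v[i];
    return sum / n;
}
```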

  32. Types of White Box Testing

  33. Static Testing • Involves only the source code and not the executables or binaries • Does not involve executing the program on a machine but rather humans going through it or the usage of specialized tools • Some of the things checked by static testing: • Whether the code works according to the functional requirements • Whether the code follows all applicable standards • Whether code for any required functionality is missing • Whether the code handles errors properly
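A sketch of the kind of defect static testing catches without running anything (illustrative code, not from the slides): the condition below was written with `=` where `==` was intended, so the check always succeeds. A reviewer or a lint tool spots this in the source; dynamic tests of common scenarios might not.

```c
/* Defective version: '=' assigns 1 to role, the condition is
 * always true, and every caller is treated as an admin. */
int is_admin_buggy(int role) {
    if (role = 1) return 1;   /* defect: assignment, not comparison */
    return 0;
}

/* Corrected version after the review comment. */
int is_admin(int role) {
    return role == 1;
}
```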

  34. Static Testing by Humans • Humans can find errors that computers can’t. • Multiple humans can provide multiple perspectives. • A human evaluation can compare the code against the specifications more thoroughly.

  35. Static Testing Types • Different types • Desk checking of the code • Code walkthrough • Code review • Code inspection • Increasing the involvement of people • More variety of perspectives • Increasing formalism • Increasing the likelihood of identifying more complex defects

  36. Desk Checking • Author informally checks the code against the specifications and corrects defects found. • No structured method or formalism is required to ensure completeness. • No log or checklist is maintained. • It relies only on the thoroughness of the author.

  37. Code Walkthrough • Group oriented (as against desk checking) • Brings in multiple perspectives • Multiple specific roles (author, moderator, inspector, etc.),

  38. Roles in a formal inspection • Author • Author of the work product • Makes available the required material to the reviewers • Fixes defects that are reported • Moderator • Controls the meeting(s) • Inspectors (reviewers) • Prepare by reading the required documents • Take part in the meeting(s) and report defects • Scribe • Takes down notes during the meeting • Assigned in advance, often by rotation • Can participate in the review to the extent possible • Documents the minutes and circulates them to participants

  39. White-Box Testing Techniques • Statement coverage • Decision coverage • Condition coverage • Decision-condition coverage • Multiple-condition coverage

  40. White-Box Testing • It is concerned with the degree to which test cases exercise or cover the logic of source code of the program • The ultimate white-box testing is to execute every path in the program—an unrealistic goal with a program with loops. • If we back away from executing every path, a reasonable goal is to execute every statement at least once.

  41. Statement Coverage

void foo(int a, int b, int x) {
    if (a > 1 && b == 0) {
        x = x / a;
    }
    if (a == 2 || x > 1) {
        x = x + 1;
    }
}

How many paths are in this program? Figure 4.1

  42. All possible Paths Which paths provide the most coverage?

  43. Statement Coverage • A set of test cases that traverses every statement at least once. • Path ace provides statement coverage. • The test case consists of (A=2, B=0, X=4) for input, and the expected output is (X=3). • i.e., execute every statement at least once. • This criterion is weak and can leave errors undetected: • Maybe the first condition should be an OR rather than an AND. • Maybe the second condition should be X>0 rather than X>1. • The path abd (A=1, B=1, X=1) leaves X unchanged, so if X was supposed to change, the error goes undetected.
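The slide's example can be made executable as a sketch. Here `foo` is rewritten to return x, since in the original x is passed by value and the caller never sees the change:

```c
#include <assert.h>

int foo(int a, int b, int x) {
    if (a > 1 && b == 0) x = x / a;   /* decision 1 */
    if (a == 2 || x > 1) x = x + 1;   /* decision 2 */
    return x;
}

/* Path ace (both decisions true) executes every statement once. */
void statement_coverage(void) {
    assert(foo(2, 0, 4) == 3);   /* 4/2 = 2, then 2+1 = 3 */
}
```

One test point suffices for statement coverage here, which is exactly why the criterion is weak: the untouched path abd is never exercised.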

  44. Decision Coverage or Branch Coverage • A stronger logic-coverage criterion than statement coverage. • We must write enough test cases to cover each decision with both the true and false outcomes at least once. • In case of multiple choices in a decision, we must write test cases to cover all choices at least once • Switch statement in C and Java • Some programming languages may have multiple entry points for a module or program (PL/1)

  45. Decision Coverage (Cont’d) • A more precise definition for the decision-coverage testing is • to exercise every possible outcome of all decisions at least once and that each point of entry is invoked at least once.

  46. Decision Coverage vs. Statement coverage • Decision coverage provides statement coverage in most circumstances • every statement must be executed if every branch direction is executed.

  47. Decision Coverage vs. Statement coverage • Exceptions • Programs with no decisions. • Programs or subroutines with multiple entry points: • A given statement might be executed only if the program is entered at a particular entry point. • So a stronger definition of decision coverage: • Decision-coverage testing requires that each outcome of each decision is exercised at least once, that each statement is exercised at least once, and that each entry point is invoked at least once.

  48. Decision Coverage • There are two sets of test cases that satisfy the criterion:
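One such set can be derived from the `foo` example (rewritten here to return x so the results are checkable): two test points together cover both outcomes of both decisions.

```c
#include <assert.h>

int foo(int a, int b, int x) {
    if (a > 1 && b == 0) x = x / a;   /* decision 1 */
    if (a == 2 || x > 1) x = x + 1;   /* decision 2 */
    return x;
}

void decision_coverage(void) {
    /* a=3, b=0, x=3: decision 1 true, decision 2 false (path acd). */
    assert(foo(3, 0, 3) == 1);   /* x = 3/3 = 1, second if skipped  */
    /* a=2, b=1, x=1: decision 1 false, decision 2 true (path abe). */
    assert(foo(2, 1, 1) == 2);   /* x = 1+1 = 2                     */
}
```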

  49. Decision Coverage Problems! • Only a 50% chance of catching the “X must be changed” error (if it is stated and described in the expected outcomes of the test cases) • Only the path abd will catch this error, and we have only a 50% chance of selecting the set of test cases that contains abd. • If the second decision were in error (if it should have said X<1 instead of X>1), the mistake would not be detected by one of the two candidate test-case sets. Therefore, we need a better coverage criterion.

  50. Condition Coverage • This testing requires that we write enough test cases so that each condition in each decision takes on all possible values at least once and that each point of entry is invoked at least once • A stronger logic-coverage test than the decision coverage
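Sketched on the same `foo` example (again rewritten to return x): the four conditions are a>1, b==0, a==2, and x>1, and just two test points give each of them both truth values, treating each condition as if fully evaluated as on the slides (note that C's && and || actually short-circuit). The same pair also shows why condition coverage alone can deceive: it can miss decision outcomes entirely.

```c
#include <assert.h>

int foo(int a, int b, int x) {
    if (a > 1 && b == 0) x = x / a;   /* decision 1 */
    if (a == 2 || x > 1) x = x + 1;   /* decision 2 */
    return x;
}

void condition_coverage(void) {
    /* a=1, b=0, x=3: a>1 F, b==0 T, a==2 F, x>1 T. */
    assert(foo(1, 0, 3) == 4);        /* decision 1 false, 2 true */
    /* a=2, b=1, x=1: a>1 T, b==0 F, a==2 T, x>1 F. */
    assert(foo(2, 1, 1) == 2);        /* decision 1 false, 2 true */
    /* Every condition takes both values, yet decision 1 is never
       true and decision 2 is never false: condition coverage here
       does not give decision coverage. */
}
```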
