
Software Engineering week 5



1. Software Engineering week 5
Madalina Croitoru, IUT Montpellier (croitoru@lirmm.fr)

2. Software design
• Computer programs are composed of multiple, interacting modules
• The goal of system design is to:
  • decide what the modules are
  • decide what the modules should contain
  • decide how the modules should interact

3. Criteria for design

4. Five principles for good design
• Linguistic modular units
• Few interfaces
• Small interfaces
• Explicit interfaces
• Information hiding
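These are Bertrand Meyer's five modularity principles. As a hedged sketch (the IntStack class below is hypothetical, not from the slides), the Java code illustrates two of them, information hiding and a small interface: clients see only push and pop, while the backing array and counter stay private and can change without touching any caller.

    // Hypothetical example (not from the slides): information hiding in Java.
    // Clients depend only on the small public interface; the representation
    // (array + count) is hidden and free to change.
    public class IntStack {
        private int[] items = new int[10]; // hidden representation
        private int count = 0;

        public void push(int value) {
            if (count == items.length) {
                // grow transparently; callers never notice the representation change
                items = java.util.Arrays.copyOf(items, count * 2);
            }
            items[count++] = value;
        }

        public int pop() {
            if (count == 0) {
                throw new IllegalStateException("empty stack");
            }
            return items[--count];
        }
    }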

5. Waterfall model

6. Testing
“Program testing can be used to show the presence of defects, but never their absence.” (Edsger W. Dijkstra)

7. Testing
• Critically important for quality software
• Industry averages:
  • 30-85 errors per 1000 lines of code
  • 0.5-3 errors per 1000 lines of code NOT detected before delivery
• The ability to test a system depends on a good, detailed requirements document

8. Errors
• Errors in a program can be either:
  • compile-time (syntax errors, etc.): cheap to fix
  • run-time (logical errors): expensive to fix
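To make the distinction concrete, here is a small illustrative sketch (mine, not from the slides): the commented-out line would be rejected before the program ever runs, while the array access compiles cleanly and fails only on execution.

    public class ErrorKindsDemo {
        public static void main(String[] args) {
            // Compile-time error (cheap to fix): caught before the program runs.
            // int n = "hello";  // if uncommented, javac error: incompatible types

            // Run-time (logical) error (expensive to fix): compiles cleanly,
            // fails only when this line actually executes.
            int[] a = {1, 2, 3};
            System.out.println(a[3]);  // throws ArrayIndexOutOfBoundsException
        }
    }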

9. Systematic about testing
• Analogy: a scientific experiment in chemistry
• To find out whether some process works:
  • state the expected result before the experiment
  • know the precise conditions under which the experiment runs
  • the experiment must be repeatable

10. What about the goal of testing?
• What to measure?
• Achieve an acceptable level of confidence that the system behaves correctly under all circumstances of interest
• But what
  • … is a level of confidence?
  • … is correct behavior?
  • … are circumstances of interest?

11. Testing strategies
• It is never possible for the designer to anticipate EVERY use of the system
• Offline strategies:
  • syntax checking
  • dry runs / inspections / reviews
• Online strategies:
  • black box testing
  • white box testing

12. Syntax checking
• Detecting errors at compile time is preferable to having them occur at run time
• Beyond plain syntax checking, compilers can run deeper checks, e.g.:
  • this line never gets executed
  • this variable does not get initialised
  (the Java compiler reports such findings as warnings or errors)
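As an illustrative sketch (mine, not from the slides): javac in fact treats both checks named above as hard errors rather than warnings, which is exactly the "catch it at compile time" behaviour the slide argues for.

    public class StaticChecksDemo {
        static int f() {
            return 1;
            // System.out.println("dead");  // if uncommented, javac error: unreachable statement
        }

        public static void main(String[] args) {
            int x;
            if (args.length > 0) {
                x = 42;
            }
            // System.out.println(x);  // if uncommented, javac error:
            //                         // "variable x might not have been initialized"
            System.out.println(f());
        }
    }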

13. Inspection / Review
• Review: informal
• …
• Inspection: formal

14. Why review?
• Everyone makes mistakes
• Programming is a social activity (or should be)
• Find errors in the program early (before it is run for the first time)
• Improve the programming skills of all involved
• Anything can be reviewed (…, use cases, documentation, …)

15. How to hold a review meeting?
• Purpose: evaluate a software product to
  • determine its suitability for its intended use
  • identify discrepancies from specifications and standards
• Participants read the documents in advance, then bring their comments to a meeting for discussion
• A review
  • may provide recommendations and suggest alternatives
  • may be held at any time during a project
  • need not reach conclusions on all points

16. What should not happen in a review?
• Attempts to improve the program on the spot
• Blaming programmers
• Finger pointing

17. More formal: inspection
• The idea behind inspection: Michael Fagan (IBM, 1976)
• Purpose: detect and identify software product anomalies by systematic peer evaluation
• The inspection leader is not the author; the leader
  • is a trained “moderator”
  • organizes the selection of inspectors
  • distributes the documents
  • leads the meeting
  • ensures all follow-up actions are taken

18. How to inspect?
• Set an agenda and maintain it
• Limit debate and rebuttal
• Do not attempt to solve every problem noted
• Take written notes
• Insist on advance preparation
• Conduct meaningful training for all participants
• Inspect your earlier inspections

19. Dry runs
• A team of programmers mentally executes the code using simple test data
• Expensive in terms of human resources
• Infeasible for many systems

20. Black box testing
• We ignore the internals of the system and focus on the RELATIONSHIP between inputs and outputs
• Exhaustive testing would mean examining the output of the system for every conceivable input: not practical
• Use equivalence partitioning and boundary analysis instead

21. Black Box Testing
• Examines all functions and compares actual to expected results
• Typically used in later testing stages:
  • system tests
  • acceptance tests
• Example: “array search”. Input: an int array a and an int v. Output: the index of v in a, or -1 if v is not in a.

    int binsearch(int[] a, int v) {
        int low = 0;
        int high = a.length - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (a[mid] > v)
                high = mid - 1;
            else if (a[mid] < v)
                low = mid + 1;
            else
                return mid;
        }
        return -1;
    }
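A minimal black-box sketch (mine, not from the slides): the test cases are derived purely from the input/output specification, a sorted array a and a value v, without looking at the loop inside. Run with java -ea to enable assertions.

    // Hypothetical black-box tests, derived only from the spec
    // "index of v in a, or -1 if v not in a" (assumes a is sorted).
    public class BinsearchBlackBoxTest {
        public static void main(String[] args) {
            int[] a = {2, 5, 8, 13};
            assert binsearch(a, 8) == 2;            // v present: expect its index
            assert binsearch(a, 3) == -1;           // v absent: expect -1
            assert binsearch(new int[0], 7) == -1;  // empty array: expect -1
            System.out.println("all black-box tests passed");
        }

        // binsearch as defined on the slide above
        static int binsearch(int[] a, int v) {
            int low = 0, high = a.length - 1;
            while (low <= high) {
                int mid = (low + high) / 2;
                if (a[mid] > v) high = mid - 1;
                else if (a[mid] < v) low = mid + 1;
                else return mid;
            }
            return -1;
        }
    }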

22. Equivalence partitioning
• Say the system asks for a number between 100 and 999
• Three equivalence classes of input:
  • less than 100
  • 100 to 999
  • greater than 999
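A minimal sketch (the inRange validator is hypothetical, not from the slides): under equivalence partitioning, one representative value per class is assumed to behave like every other member of that class. Run with java -ea to enable assertions.

    // Hypothetical example: one test value per equivalence class.
    public class EquivalencePartitioningDemo {
        // Assumed validator for "a number between 100 and 999 (inclusive)".
        static boolean inRange(int n) {
            return n >= 100 && n <= 999;
        }

        public static void main(String[] args) {
            assert !inRange(50);    // representative of "less than 100"
            assert inRange(500);    // representative of "100 to 999"
            assert !inRange(5000);  // representative of "greater than 999"
            System.out.println("one representative per class checked");
        }
    }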

23. Boundary Analysis
• Most programs fail at input boundaries
• The system asks for a number between 100 and 999 inclusive
• The boundaries are 100 and 999
• We therefore test the values: 99, 100, 101, 998, 999, 1000
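Continuing the hypothetical inRange validator from the previous sketch, boundary analysis tests just below, at, and just above each boundary:

    public class BoundaryAnalysisDemo {
        static boolean inRange(int n) {
            return n >= 100 && n <= 999;
        }

        public static void main(String[] args) { // run with: java -ea BoundaryAnalysisDemo
            int[] values   = { 99,    100,  101,  998,  999,  1000  };
            boolean[] want = { false, true, true, true, true, false };
            for (int i = 0; i < values.length; i++) {
                assert inRange(values[i]) == want[i] : "failed at " + values[i];
            }
            System.out.println("all boundary values behave as expected");
        }
    }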

24. White box testing
• We use knowledge of the internal structure of the system to guide the development of tests
• Examine every possible run of the system
• If that is not possible, aim to test every statement at least once

25. White Box Testing
• Examines paths and decisions by looking inside the program
• Typically used in early testing stages:
  • unit tests
  • integration tests
• The same binsearch as above, now examined from the inside:

    int binsearch(int[] a, int v) {
        int low = 0;
        int high = a.length - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (a[mid] > v)
                high = mid - 1;
            else if (a[mid] < v)
                low = mid + 1;
            else
                return mid;
        }
        return -1;
    }
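A minimal white-box sketch (mine, not from the slides): inputs are chosen by reading the code so that every branch of the loop body, and the final return, executes at least once. Run with java -ea to enable assertions.

    public class BinsearchWhiteBoxTest {
        public static void main(String[] args) {
            int[] a = {2, 5, 8};
            assert binsearch(a, 5) == 1;  // hits the "else return mid" branch first
            assert binsearch(a, 2) == 0;  // exercises the "a[mid] > v" branch
            assert binsearch(a, 8) == 2;  // exercises the "a[mid] < v" branch
            assert binsearch(a, 9) == -1; // loop terminates; "return -1" executes
            System.out.println("every branch of binsearch executed");
        }

        // binsearch as on the slide above
        static int binsearch(int[] a, int v) {
            int low = 0, high = a.length - 1;
            while (low <= high) {
                int mid = (low + high) / 2;
                if (a[mid] > v) high = mid - 1;
                else if (a[mid] < v) low = mid + 1;
                else return mid;
            }
            return -1;
        }
    }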

26. Testing plans
• Rigorous test plans have to be devised
• Usually generated from the requirements analysis document (for black box tests) and from the program code (for white box tests)
• Distinguish between:
  • unit tests
  • integration tests
  • system tests
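As a hedged illustration of what a unit test inside such a plan might look like (the JUnit 5 framework and the Search class name are my assumptions, not from the slides, with binsearch assumed to live as a static method of Search):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical JUnit 5 unit tests for the binsearch of the earlier slides.
    class SearchTest {
        @Test
        void findsPresentValue() {
            assertEquals(2, Search.binsearch(new int[]{2, 5, 8, 13}, 8));
        }

        @Test
        void returnsMinusOneWhenAbsent() {
            assertEquals(-1, Search.binsearch(new int[]{2, 5, 8, 13}, 3));
        }
    }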

27. Alpha and beta testing
• In-house testing is called alpha testing
• Beta testing involves distributing TESTED code to prospective customers for evaluation and use
• Delivering buggy beta code is embarrassing
