
  1. Software Engineering Lecture 19: Object-Oriented Testing & Technical Metrics

  2. Today’s Topics • Evaluating OOA and OOD Models • Unit, Class & Integration Testing • OO Design Metrics • Class-Oriented Metrics • Operation-Oriented Metrics • Testing Metrics • Project Metrics

  3. O-O Programs are Different • High Degree of Reuse • Does this mean more, or less testing? • Unit Testing vs. Class Testing • What is the right “unit” in OO testing? • Review of Analysis & Design • Classes appear early, so defects can be recognized early as well

  4. Testing OOA and OOD Models • Correctness (of each model element) • Syntactic (notation, conventions): review by modeling experts • Semantic (conforms to real problem): review by domain experts • Consistency (of each class) • Revisit CRC & Class Diagram • Trace delegated responsibilities • Examine / adjust cohesion of responsibilities

  5. Model Testing [2] • Evaluating the Design • Compare behavioral model to class model • Compare behavioral & class models to the use cases • Inspect the detailed design for each class (algorithms & data structures)

  6. Unit Testing • What is a “Unit”? • Traditional: a “single operation” • O-O: encapsulated data & operations • Smallest testable unit = class (many operations) • Inheritance • Testing “in isolation” is impossible; operations must be tested every place they are used

  7. Testing under Inheritance [Class diagram: Shape defines move(); subclasses Circle, Square, and Ellipse each override resize()] Q: What if the implementation of resize() for each subclass calls the inherited operation move()? A: Shape cannot be completely tested unless we also test Circle, Square, & Ellipse!
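To make the slide's example concrete, here is a minimal Java sketch of the hierarchy; the fields and method bodies are assumptions for illustration, not from the lecture:

```java
// Shape defines move(); each subclass overrides resize(), and resize()
// invokes the inherited move(), so move() runs under each subclass's state.
abstract class Shape {
    protected int x, y;
    void move(int dx, int dy) { x += dx; y += dy; }  // inherited, never overridden
    abstract void resize(double factor);
}

class Circle extends Shape {
    double radius = 1.0;
    @Override
    void resize(double factor) {
        radius *= factor;
        move(0, 0);  // exercises the inherited move() in Circle's context
    }
}
```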

  8. Integration Testing • O-O Integration: Not Hierarchical • Coupling is not via hierarchical subroutine calls • “Top-down” and “Bottom-up” have little meaning • Integrating one operation at a time is difficult • Indirect interactions among operations

  9. O-O Integration Testing • Thread-Based Testing • Integrate set of classes required to respond to one input or event • Integrate one thread at a time • Example: Event-Dispatching Thread vs. Event Handlers in Java • Implement & test all GUI events first • Add event handlers one at a time
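A sketch of this idea in Java Swing; the frame, button, and handler are illustrative, not from the lecture:

```java
import javax.swing.*;

// Thread-based integration: build and test the GUI with no handlers
// attached first, then integrate one event handler (one "thread" of
// behavior) at a time and re-test.
public class ThreadBasedIntegrationDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {  // GUI work runs on the event-dispatching thread
            JFrame frame = new JFrame("Thread-based integration");
            JButton save = new JButton("Save");
            save.addActionListener(e -> System.out.println("save handled"));  // added in its own step
            frame.getContentPane().add(save);
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```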

  10. O-O Integration [2] • Use-Based Testing • Implement & test independent classes first • Then implement dependent classes (layer by layer, or cluster-based) • Simple driver classes or methods sometimes required to test lower layers
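A sketch of use-based integration, assuming a hypothetical independent class AccountStore and a throwaway driver class:

```java
// The independent, lowest-layer class (AccountStore) is exercised by a
// simple driver before any dependent layer that uses it is integrated.
import java.util.HashMap;
import java.util.Map;

class AccountStore {                                   // independent: depends on no project class
    private final Map<String, Double> balances = new HashMap<>();
    void put(String id, double balance) { balances.put(id, balance); }
    double get(String id) { return balances.getOrDefault(id, 0.0); }
}

public class AccountStoreDriver {
    public static void main(String[] args) {           // run with: java -ea AccountStoreDriver
        AccountStore store = new AccountStore();
        store.put("A-1", 250.0);
        assert store.get("A-1") == 250.0 : "stored balance should be retrievable";
        assert store.get("missing") == 0.0 : "unknown id should default to 0.0";
        System.out.println("AccountStore driver passed");
    }
}
```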

  11. Validation Testing • Details of objects not visible • Focus on user-observable input and output • Methods: • Utilize use cases to derive tests (both manual & automatic) • Black-box testing for automatic tests

  12. Test Case Design • Focus: “Designing sequences of operations to exercise the states of a class instance” • Challenge: Observability • Do we have methods that allow us to inspect the inner state of an object? • Challenge: Inheritance • Can test cases for a superclass be used to test a subclass?
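A sketch of such a test in JUnit 5; the Connection class and its getState() inspector are hypothetical, with getState() serving as the observability hook the slide asks about:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// A tiny state machine with an explicit state inspector for tests.
class Connection {
    enum State { IDLE, OPEN, CLOSED }
    private State state = State.IDLE;
    State getState() { return state; }                 // exposes inner state to tests
    void open() { state = State.OPEN; }
    void send(String msg) {
        if (state != State.OPEN) throw new IllegalStateException("not open");
    }
    void close() { state = State.CLOSED; }
}

class ConnectionTest {
    @Test
    void openSendCloseSequence() {                     // one operation sequence = one test case
        Connection c = new Connection();
        assertEquals(Connection.State.IDLE, c.getState());
        c.open();
        c.send("ping");
        assertEquals(Connection.State.OPEN, c.getState());
        c.close();
        assertEquals(Connection.State.CLOSED, c.getState());
    }
}
```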

  13. Test Case Checklist [Berard ’93] • Identify unique tests & associate with a particular class • Describe purpose of the test • Develop list of testing steps: • Specified states to be tested • Operations (methods) to be tested • Exceptions that might occur • External conditions & changes thereto • Supplemental information (if needed)

  14. Object-Oriented Metrics • Five characteristics [Berard ’95]: • Localization: operations used in many classes • Encapsulation: metrics for classes, not modules • Information Hiding: should be measured & improved • Inheritance: adds complexity, should be measured • Object Abstraction: metrics represent level of abstraction

  15. Design Metrics [Whitmire ’97] • Size • Population (# of classes, operations) • Volume (dynamic object count) • Length (e.g., depth of inheritance) • Functionality (# of user functions) • Complexity • How classes are interrelated

  16. Design Metrics [2] • Coupling • # of collaborations between classes, number of method calls, etc. • Sufficiency • Does a class reflect the necessary properties of the problem domain? • Completeness • Does a class reflect all the properties of the problem domain? (for reuse)

  17. Design Metrics [3] • Cohesion • Do the attributes and operations in a class achieve a single, well-defined purpose in the problem domain? • Primitiveness (Simplicity) • Degree to which class operations can’t be composed from other operations

  18. Design Metrics [4] • Similarity • Comparison of structure, function, behavior of two or more classes • Volatility • The likelihood that a change will occur in the design or implementation of a class

  19. Class-Oriented Metrics • Of central importance in evaluating object-oriented design (which is inherently class-based) • A variety of metrics proposed: • Chidamber & Kemerer (1994) • Lorenz & Kidd (1994) • Harrison, Counsell & Nithi (1998)

  20. Weighted Methods per Class • Assume class C has n methods with complexity measures c1 … cn: WMC(C) = Σ ci (i = 1 … n) • Complexity is a function of the # of methods and their complexity • Issues: • How to count methods? (inheritance) • Normalize ci to 1.0
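A minimal sketch of the computation; the complexity values are invented (e.g., per-method cyclomatic complexity):

```java
import java.util.List;

// WMC: sum the complexity values ci of a class's n methods.
public class WmcDemo {
    static double wmc(List<Double> methodComplexities) {
        return methodComplexities.stream().mapToDouble(Double::doubleValue).sum();
    }
    public static void main(String[] args) {
        // A class with 4 methods of complexity 1, 1, 3, 5 has WMC = 10;
        // if each ci is normalized to 1.0, WMC reduces to the method count.
        System.out.println(wmc(List.of(1.0, 1.0, 3.0, 5.0)));
    }
}
```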

  21. Depth of Inheritance Tree • Maximum length from a node C to the root of the tree • PRO: inheritance = reuse • CON: Greater depth implies greater complexity • Hard to predict behavior under inheritance • Greater design complexity (effort)

  22. DIT Example [Figure from SEPA 5/e: a four-level class hierarchy, levels numbered 1–4] DIT = 4 (the longest path from the root to a child node in the hierarchy)
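A small reflection-based sketch of measuring inheritance depth in Java:

```java
// Walk getSuperclass() to the root. Conventions differ on whether the root
// counts as depth 0 or 1; this version counts superclass edges to Object.
public class DitDemo {
    static int depth(Class<?> c) {
        int d = 0;
        while ((c = c.getSuperclass()) != null) d++;   // one step per superclass edge
        return d;
    }
    public static void main(String[] args) {
        // ArrayList -> AbstractList -> AbstractCollection -> Object: prints 3
        System.out.println(depth(java.util.ArrayList.class));
    }
}
```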

  23. Number of Children • Subclasses immediately subordinate to class C are its children • As # of children (NOC) increases: • PRO: more reuse • CON: parent becomes less abstract • CON: more testing required

  24. Coupling Between Objects • Number of collaborations for a given class C • As CBO increases: • CON: reusability decreases • CON: harder to modify, test • CBO should be minimized
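An invented example: Order collaborates with three other classes, so CBO(Order) = 3, and each collaborator must be understood, stubbed, or exercised whenever Order is modified or tested:

```java
class Customer  { String id = "c-1"; }
class Inventory { boolean reserve(String sku) { return true; } }
class Payment   { boolean charge(String customerId, double amount) { return true; } }

class Order {
    boolean place(Customer c, Inventory inv, Payment pay, String sku, double amt) {
        return inv.reserve(sku) && pay.charge(c.id, amt);   // touches all three collaborators
    }
}

public class CboDemo {
    public static void main(String[] args) {
        System.out.println(new Order().place(new Customer(), new Inventory(),
                                             new Payment(), "sku-1", 9.99));
    }
}
```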

  25. Response For A Class • Response Set: the set of methods that can potentially execute in response to some message • RFC: the # of methods in the response set • As RFC increases: • CON: effort for testing increases • CON: design complexity increases
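An invented example of a response set, counting a class's own methods plus every method they can invoke:

```java
class Formatter { String bold(String s) { return "*" + s + "*"; } }

class Report {
    private final Formatter f = new Formatter();
    String title(String t) { return f.bold(t); }   // calls Formatter.bold
    String render() { return title("Summary"); }   // calls Report.title
}
// Response set of Report = { render, title, Formatter.bold }, so RFC = 3.

public class RfcDemo {
    public static void main(String[] args) {
        System.out.println(new Report().render());   // prints *Summary*
    }
}
```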

  26. Lack of Cohesion in Methods • LCOM: # of methods that access one or more of the same attributes • When LCOM is high: • More coupling between methods • Additional design complexity • When LCOM is low: • Lack of cohesion? (e.g., control panel gauges) • Reduced design complexity
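An invented example of the low-LCOM case: the pressure methods share only the pressure attribute and the temperature methods share only temperature, so few methods access the same attributes. That may be fine (independent control-panel gauges) or a hint that the class should be split:

```java
class ControlPanel {
    private double pressure, temperature;
    void setPressure(double v)    { pressure = v; }      // pressure pair shares only 'pressure'
    double getPressure()          { return pressure; }
    void setTemperature(double v) { temperature = v; }   // temperature pair shares only 'temperature'
    double getTemperature()       { return temperature; }
}

public class LcomDemo {
    public static void main(String[] args) {
        ControlPanel p = new ControlPanel();
        p.setPressure(2.5);
        System.out.println(p.getPressure());
    }
}
```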

  27. Class Size • Number of operations • Inherited & Local • Number of attributes • Inherited & Local • These may be added, but… • They lack the weighting for complexity which WMC provides

  28. Method Inheritance Factor • Proportion of inherited methods to total methods available in a class: MIF = Σ Mi(Ci) / Σ Ma(Ci), summed over all classes Ci • A way to measure inheritance (and the additional design & testing complexity)
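A rough single-class sketch of the ratio; the real MOOD metric sums Mi and Ma over every class Ci in the system, and reflection-based counting here is only approximate:

```java
// getMethods() sees only public methods (inherited ones included), while
// getDeclaredMethods() sees every method the class itself declares, so the
// subtraction below is an approximation of the inherited-method count.
public class MifDemo {
    static double mifLike(Class<?> c) {
        int available = c.getMethods().length;           // public methods, inherited included
        int declared  = c.getDeclaredMethods().length;   // methods declared by c itself
        int inherited = Math.max(available - declared, 0);
        return available == 0 ? 0.0 : (double) inherited / available;
    }
    public static void main(String[] args) {
        System.out.printf("MIF-like ratio for ArrayList: %.2f%n",
                          mifLike(java.util.ArrayList.class));
    }
}
```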

  29. Operation-Oriented Metrics • Average Operation Size (OSavg) • LOC not a good measure • Better: number of messages sent • Should strive to minimize • Operation Complexity (OC) • E.g., Function Points; minimize • Average # Parameters (NPavg) • Larger = more complex collaborations between objects; try to minimize

  30. O-O Testing Metrics • Percent Public & Protected (PAP) • Percentage of attributes that are public or protected • Higher: greater chance of side-effects • Public Access to Data (PAD) • # of classes that can access data in another (encapsulation violation) • Higher: greater chance of side-effects

  31. Testing Metrics [2] • Number of Root Classes (NOR) • # of distinct class hierarchies • Higher: increased testing effort, since test cases must be defined for each • Fan-In (FIN) • In an O-O context, FIN > 1 means multiple inheritance • FIN > 1 should be avoided! (Java disallows it for classes, as shown below)
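A quick demonstration that Java enforces FIN ≤ 1 for classes (the class and interface names are made up):

```java
public class FinDemo {
    interface Gauge  { }
    interface Logger { }
    static class Device { }
    static class Sensor { }
    // static class Hybrid extends Device, Sensor { }  // does not compile:
    //                                                 // Java rejects multiple class inheritance
    static class Panel extends Device implements Gauge, Logger { }  // multiple interfaces are fine
    public static void main(String[] args) {
        System.out.println(new Panel() instanceof Device);  // prints true
    }
}
```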

  32. Project Metrics • Number of Scenario Scripts (NSS) • Proportional to # classes, methods… • Strong indicator of program size • Number of Key Classes (NKC) • Unique to solution (not reused) • Higher: substantial development work • Number of Subsystems (NSUB) • Impact: resource allocation, parallel scheduling, integration effort

  33. Questions?
