
Test Scenario

Presentation Transcript


  1. JeeHyun Hwang Test Scenario

  2. Example • Test deleteUserAccount code

  3. Ideas • Idea 1: Find various scenarios to cover certain coverage criteria • Idea 2: One test case, while changing variables to cover many policies (only changing … variables) within context • Idea 3: High coverage of code while maintaining high coverage of access control • Idea 4: Find dead access control code results • Idea 5: Formulate a request set • Idea 6: Flow test (after one thing, A -> A') yes or no • Idea 7: Role combination - similarity

  4. Example • Test coverage while achieving high coverage on access control policies?

  5. Example • Test coverage while achieving high coverage on access control policies?

  6. Access Control • Access control is one of the most widely used privacy and security mechanisms • used to control which principals (e.g., users or processes) have access to which resources • Access control is often governed by security policies called Access Control Policies (ACP) • Security policies are often specified and maintained separately from application code
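
  A minimal sketch of this separation, assuming a simplified in-memory policy model; the real subjects use externally specified policies (e.g., XACML or OrBAC), so the Rule/Request types, rule values, and the first-applicable/default-deny evaluation here are illustrative assumptions only.

      // Illustrative only: a tiny in-memory stand-in for an externally specified access control policy.
      import java.util.List;

      public class MiniAcp {
          enum Decision { PERMIT, DENY }

          // A request names a principal, an action, and a resource, e.g., (faculty, write, grades).
          record Request(String subject, String action, String resource) {}

          // A rule maps a matching request to a decision; rules live in the policy, not in application code.
          record Rule(String id, String subject, String action, String resource, Decision decision) {
              boolean matches(Request q) {
                  return subject.equals(q.subject()) && action.equals(q.action()) && resource.equals(q.resource());
              }
          }

          // First-applicable evaluation with a default-deny fall-through (an assumption for this sketch).
          static Decision evaluate(List<Rule> policy, Request q) {
              for (Rule r : policy) {
                  if (r.matches(q)) return r.decision();
              }
              return Decision.DENY;
          }

          public static void main(String[] args) {
              List<Rule> policy = List.of(
                  new Rule("r1", "faculty", "write", "grades", Decision.PERMIT),
                  new Rule("r2", "student", "write", "grades", Decision.DENY));
              System.out.println(evaluate(policy, new Request("faculty", "write", "grades"))); // PERMIT
          }
      }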

  7. Motivation • Security requirements change over time -> security policies often evolve • Security policy changes may introduce security faults (e.g., unauthorized access) • System developers execute system test cases to ensure that the behavior changes introduced by security policy changes are as expected

  8. Problem • Two pitfalls of executing all existing system test cases • Executing all existing system test cases is time consuming • Existing system test cases may not sufficiently expose the behavior changes induced by security policy changes • There is no existing approach for effectively testing applications in the context of security policy evolution

  9. Our Goal • Regression system test cases for policy evolution • Select and execute only those system test cases (from an existing test suite) that expose behavior changes • Augment system test cases to expose behavior changes that are not exposed by existing system tests

  10. Challenges • Select and augment regression system test cases impacted by policy changes, with low false positives and false negatives • Requires correctly analyzing the effects of policy changes • Requires correctly monitoring the interactions between system test cases and security policies

  11. Definition: Coverage • Coverage for security policies • measure which rules of the policy are involved (called “covered”) in policy evaluation [Martin et al. WWW 07]
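
  A small sketch of this coverage notion under the same simplified policy model as in the earlier sketch; the types, values, and first-applicable matching are assumptions made for illustration, not the actual policy engine.

      import java.util.*;

      public class RuleCoverage {
          record Request(String subject, String action, String resource) {}
          record Rule(String id, String subject, String action, String resource) {
              boolean matches(Request q) {
                  return subject.equals(q.subject()) && action.equals(q.action()) && resource.equals(q.resource());
              }
          }

          // A rule is "covered" if it is involved in evaluating at least one request of the request set.
          static Set<String> coveredRules(List<Rule> policy, List<Request> requests) {
              Set<String> covered = new LinkedHashSet<>();
              for (Request q : requests) {
                  for (Rule r : policy) {
                      if (r.matches(q)) { covered.add(r.id()); break; } // first-applicable: one rule decides
                  }
              }
              return covered;
          }

          public static void main(String[] args) {
              List<Rule> policy = List.of(
                  new Rule("r1", "faculty", "write", "grades"),
                  new Rule("r2", "student", "read", "grades"));
              Set<String> covered = coveredRules(policy, List.of(new Request("faculty", "write", "grades")));
              System.out.printf("rule coverage: %d of %d rules covered %s%n", covered.size(), policy.size(), covered);
          }
      }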

  12. Test Selection Technique I • Find system test cases impacted by policy changes via mutation analysis [Setup: rule-test correlation] • Derive policy P's mutant Pm by changing rule ri's decision (e.g., Permit -> Deny) • Collect the requests Q issued by system test cases T • Evaluate Q against P and Pm, respectively, and find the requests Qimp (Qimp ⊆ Q) that expose different policy behaviors • Correlate ri with the system tests Timp (Timp ⊆ T) that issue the requests Qimp • Continue until each rule's correlated system test cases have been found in turn
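
  A sketch of this setup phase, reusing the simplified rule/request model from the earlier sketches; the test names, rule values, and the decision-flip used to build Pm are illustrative assumptions.

      import java.util.*;

      public class MutationCorrelation {
          enum Decision { PERMIT, DENY }
          record Request(String subject, String action, String resource) {}
          record Rule(String id, String subject, String action, String resource, Decision decision) {
              boolean matches(Request q) {
                  return subject.equals(q.subject()) && action.equals(q.action()) && resource.equals(q.resource());
              }
              Rule flipped() { // mutant rule: Permit -> Deny or Deny -> Permit
                  return new Rule(id, subject, action, resource,
                      decision == Decision.PERMIT ? Decision.DENY : Decision.PERMIT);
              }
          }

          static Decision evaluate(List<Rule> policy, Request q) {
              for (Rule r : policy) if (r.matches(q)) return r.decision();
              return Decision.DENY; // default fall-through
          }

          // For each rule ri: build mutant Pm by flipping ri's decision, evaluate the requests Q
          // collected from the system tests T against P and Pm, and correlate ri with the tests
          // whose requests expose different decisions (the Qimp / Timp of the slide).
          static Map<String, Set<String>> correlate(List<Rule> p, Map<String, List<Request>> testRequests) {
              Map<String, Set<String>> ruleToTests = new LinkedHashMap<>();
              for (int i = 0; i < p.size(); i++) {
                  List<Rule> pm = new ArrayList<>(p);
                  pm.set(i, p.get(i).flipped());
                  Set<String> impacted = new LinkedHashSet<>();
                  for (Map.Entry<String, List<Request>> e : testRequests.entrySet()) {
                      for (Request q : e.getValue()) {
                          if (evaluate(p, q) != evaluate(pm, q)) { impacted.add(e.getKey()); break; }
                      }
                  }
                  ruleToTests.put(p.get(i).id(), impacted);
              }
              return ruleToTests;
          }

          public static void main(String[] args) {
              List<Rule> p = List.of(
                  new Rule("r1", "faculty", "write", "grades", Decision.PERMIT),
                  new Rule("r2", "student", "write", "grades", Decision.DENY));
              Map<String, List<Request>> tests = Map.of(
                  "testGradeEntry",  List.of(new Request("faculty", "write", "grades")),
                  "testStudentView", List.of(new Request("student", "read", "grades")));
              System.out.println(correlate(p, tests)); // {r1=[testGradeEntry], r2=[]}
          }
      }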

  13. Test Selection Technique I - cont. [Test selection for policy changes] • Find the rules R impacted by the policy changes • Select the system test cases correlated with a rule r ∈ R • Cost: given n rules in P, the collected requests must be evaluated 2*n times (against P and Pm for each rule). However, the setup process can be conducted before policy changes are encountered.
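
  A sketch of the selection step, assuming a rule-test correlation map produced by a setup pass like the one sketched above; the rule IDs and test names are hypothetical.

      import java.util.*;

      public class SelectImpactedTests {
          // Given the rule-test correlation from the setup phase and the rules R impacted by a
          // policy change, select only the system test cases correlated with some rule r in R.
          static Set<String> select(Map<String, Set<String>> ruleToTests, Set<String> changedRules) {
              Set<String> selected = new LinkedHashSet<>();
              for (String rule : changedRules) {
                  selected.addAll(ruleToTests.getOrDefault(rule, Set.of()));
              }
              return selected;
          }

          public static void main(String[] args) {
              Map<String, Set<String>> ruleToTests = Map.of(
                  "r1", Set.of("testGradeEntry"),
                  "r2", Set.of("testStudentSubmit", "testGradeEntry"));
              // Only r2 changed, so only its correlated tests need to be re-executed.
              System.out.println(select(ruleToTests, Set.of("r2")));
          }
      }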

  14. Test Selection Technique II • Find system test cases impacted by policy changes by analyzing which rules are evaluated (i.e., covered) [Setup: rule-test correlation] • Execute system test cases T • Detect which rules are evaluated (covered) by each system test case • Correlate a rule r with its corresponding system test cases
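
  A sketch of this coverage-based setup, again over the simplified model assumed earlier; once the rule-test map is built, selection proceeds exactly as in the sketch after slide 13.

      import java.util.*;

      public class CoverageCorrelation {
          record Request(String subject, String action, String resource) {}
          record Rule(String id, String subject, String action, String resource) {
              boolean matches(Request q) {
                  return subject.equals(q.subject()) && action.equals(q.action()) && resource.equals(q.resource());
              }
          }

          // Execute each system test once, observe which rules its requests cause the policy
          // evaluation to cover, and correlate every covered rule with that test.
          static Map<String, Set<String>> correlate(List<Rule> policy, Map<String, List<Request>> testRequests) {
              Map<String, Set<String>> ruleToTests = new LinkedHashMap<>();
              for (Map.Entry<String, List<Request>> e : testRequests.entrySet()) {
                  for (Request q : e.getValue()) {
                      for (Rule r : policy) {
                          if (r.matches(q)) { // r is covered by this test's request
                              ruleToTests.computeIfAbsent(r.id(), k -> new LinkedHashSet<>()).add(e.getKey());
                              break; // first-applicable: only the first matching rule decides
                          }
                      }
                  }
              }
              return ruleToTests;
          }

          public static void main(String[] args) {
              List<Rule> policy = List.of(
                  new Rule("r1", "faculty", "write", "grades"),
                  new Rule("r2", "student", "read", "grades"));
              Map<String, List<Request>> tests = Map.of(
                  "testGradeEntry", List.of(new Request("faculty", "write", "grades")));
              System.out.println(correlate(policy, tests)); // {r1=[testGradeEntry]}
          }
      }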

  15. Test Selection Technique II - cont. [Test selection for policy changes] • Find the rules R impacted by the policy changes • Select the system test cases correlated with a rule r ∈ R • Cost: given n rules in P, we need to execute T only once. However, the setup process can be conducted before policy changes are encountered.

  16. Test Selection Technique III • Find system test cases impacted by policy changes by recording and evaluating requests [Setup: request collection] • Record all requests issued to the policy decision point (PDP) for each system test case

  17. Test Selection Technique III - cont. [Test selection for policy changes] • Select the requests (with their corresponding system test cases) that evaluate to different decisions under the two policy versions • Cost: given n rules, we need to execute all system test cases only once.
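
  A sketch of replaying the recorded requests against two policy versions, using the same simplified model; in the actual setting the requests would be captured at the PDP of a real XACML/OrBAC engine, which this sketch only imitates.

      import java.util.*;

      public class RequestReplaySelection {
          enum Decision { PERMIT, DENY }
          record Request(String subject, String action, String resource) {}
          record Rule(String subject, String action, String resource, Decision decision) {
              boolean matches(Request q) {
                  return subject.equals(q.subject()) && action.equals(q.action()) && resource.equals(q.resource());
              }
          }

          static Decision evaluate(List<Rule> policy, Request q) {
              for (Rule r : policy) if (r.matches(q)) return r.decision();
              return Decision.DENY;
          }

          // Replay the requests recorded per system test against the old and new policy versions;
          // select the tests whose recorded requests evaluate to different decisions.
          static Set<String> select(List<Rule> oldP, List<Rule> newP, Map<String, List<Request>> recorded) {
              Set<String> selected = new LinkedHashSet<>();
              for (Map.Entry<String, List<Request>> e : recorded.entrySet()) {
                  for (Request q : e.getValue()) {
                      if (evaluate(oldP, q) != evaluate(newP, q)) { selected.add(e.getKey()); break; }
                  }
              }
              return selected;
          }

          public static void main(String[] args) {
              List<Rule> oldP = List.of(new Rule("student", "write", "grades", Decision.DENY));
              List<Rule> newP = List.of(new Rule("student", "write", "grades", Decision.PERMIT)); // rule changed
              Map<String, List<Request>> recorded = Map.of(
                  "testStudentSubmit", List.of(new Request("student", "write", "grades")),
                  "testStudentView",   List.of(new Request("student", "read", "grades")));
              System.out.println(select(oldP, newP, recorded)); // [testStudentSubmit]
          }
      }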

  18. Test Augmentation Technique • Augment system test cases for policy evolution • Collect the request-response pairs qs that expose different policy behaviors • Select only the pairs qsi (qsi ⊆ qs) that cannot be issued from existing system tests • Find system test cases that issue requests with high similarity to qsi, by counting the number of common attribute values • Two requests (faculty, write, grades) and (student, write, grades) have two common attribute values • Manually modify the system test cases to issue a request q (q ∈ qsi)
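
  A sketch of the similarity measure used to pick which existing test to modify by hand; the attribute layout (subject, action, resource) and the test/request names are assumptions for illustration.

      import java.util.*;

      public class RequestSimilarity {
          record Request(String subject, String action, String resource) {
              List<String> attrs() { return List.of(subject, action, resource); }
          }

          // Similarity = number of common attribute values, e.g.
          // (faculty, write, grades) vs. (student, write, grades) -> 2.
          static int similarity(Request a, Request b) {
              int common = 0;
              for (int i = 0; i < a.attrs().size(); i++) {
                  if (a.attrs().get(i).equals(b.attrs().get(i))) common++;
              }
              return common;
          }

          // Find the existing test whose issued request is most similar to a difference-exposing
          // request qsi that no existing test issues; that test is the candidate to modify manually.
          static String closestTest(Map<String, Request> testToRequest, Request target) {
              String best = null;
              int bestScore = -1;
              for (Map.Entry<String, Request> e : testToRequest.entrySet()) {
                  int s = similarity(e.getValue(), target);
                  if (s > bestScore) { bestScore = s; best = e.getKey(); }
              }
              return best;
          }

          public static void main(String[] args) {
              Request target = new Request("student", "write", "grades"); // difference-exposing, not yet issued
              Map<String, Request> tests = Map.of(
                  "testFacultyWritesGrades", new Request("faculty", "write", "grades"),
                  "testStudentReadsCourses", new Request("student", "read", "courses"));
              System.out.println(closestTest(tests, target)); // testFacultyWritesGrades (2 common values vs. 1)
          }
      }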

  19. Evaluation Subjects • A collection of Java programs interacting with security policies

  20. Metrics • True Positive: # correctly selected test cases / # selected test cases • False Positive: test cases selected although they do not expose behavior differences • False Negative: test cases that expose behavior differences but are not selected • True Negative: test cases correctly left unselected • Elapsed time for execution • The number of test runs
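
  These counts are commonly summarized as precision and recall; a minimal sketch with the standard formulas, using hypothetical counts (the numbers are not results from the evaluation).

      public class SelectionMetrics {
          // Precision: fraction of selected test cases that truly expose policy behavior differences.
          // Recall: fraction of the difference-exposing test cases that were actually selected.
          static double precision(int truePositives, int falsePositives) {
              return (double) truePositives / (truePositives + falsePositives);
          }

          static double recall(int truePositives, int falseNegatives) {
              return (double) truePositives / (truePositives + falseNegatives);
          }

          public static void main(String[] args) {
              // Hypothetical counts: 8 correctly selected, 2 selected in error, 1 difference-exposing test missed.
              System.out.printf("precision=%.2f recall=%.2f%n", precision(8, 2), recall(8, 1));
          }
      }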

  21. Research Questions • RQ1: How effectively do our proposed techniques select system test cases under policy changes? • Precision and recall • Cost of each technique: elapsed time for execution and the number of test runs • RQ2: How effectively does our test augmentation technique suggest system test cases (which expose policy behavior differences) when existing system test cases cannot expose such differences? • Precision and recall

  22. Open Questions • How to correlate unit test cases with each changed location? • Our techniques are sound under the assumption that we apply only rule-decision-change mutation • For rule addition/deletion, we may correlate unit test cases with a default-fall-through rule or with non-applicable cases • Considering other types of mutants (e.g., rule combination) would be challenging

  23. Open Questions - cont. • How to partition the difference-exposing policy unit test cases produced by Margrave? • For OrBAC, each rule is evaluated by only one request. I think that each request represents one category. (I need to synthesize the output of Margrave to find all possible requests.) • In general, an XACML policy may include rules applicable to more than one request, so we may categorize requests based on their covering rules. Consider that req1 and req2 both cover rule 1. We classify these two requests into the same category.
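
  A sketch of the categorization mentioned in the last bullet, assuming the simplified model again (with a "*" wildcard so one rule can cover several requests): requests covered by the same rule fall into the same category.

      import java.util.*;

      public class RequestPartition {
          record Request(String subject, String action, String resource) {}
          record Rule(String id, String subject, String action, String resource) {
              boolean matches(Request q) {
                  return match(subject, q.subject()) && match(action, q.action()) && match(resource, q.resource());
              }
              private static boolean match(String pattern, String value) {
                  return pattern.equals("*") || pattern.equals(value);
              }
          }

          // Group difference-exposing requests by their covering rule; requests covered by the same
          // rule form one category, so one representative per category may be enough to test.
          static Map<String, List<Request>> partitionByCoveringRule(List<Rule> policy, List<Request> requests) {
              Map<String, List<Request>> categories = new LinkedHashMap<>();
              for (Request q : requests) {
                  for (Rule r : policy) {
                      if (r.matches(q)) {
                          categories.computeIfAbsent(r.id(), k -> new ArrayList<>()).add(q);
                          break;
                      }
                  }
              }
              return categories;
          }

          public static void main(String[] args) {
              List<Rule> policy = List.of(new Rule("rule1", "*", "write", "grades"));
              List<Request> reqs = List.of(
                  new Request("faculty", "write", "grades"),
                  new Request("student", "write", "grades")); // req1 and req2 both cover rule1
              System.out.println(partitionByCoveringRule(policy, reqs));
          }
      }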
