
Cache and Pipeline Sensitive Fixed Priority Scheduling for Preemptive Real-Time Systems



  1. Jörn Schneider: Cache and Pipeline Sensitive Fixed Priority Scheduling for Preemptive Real-Time Systems. Saarland University, Germany. 21st IEEE Real-Time Systems Symposium, Orlando

  2. Outline • Introduction • Cache Interference Analysis • Pipeline Preemption Analysis • Isolated and integrated RTA Method • Isolated vs. Integrated • Conclusion and Future Work

  3. Classical model for Response Time Analysis (RTA): response time = single-task WCET + interference by other tasks
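For reference, this classical model corresponds to the standard fixed-priority response-time recurrence (not quoted from the talk; standard notation with C_i the WCET of task i, T_j the period of a higher-priority task j, and hp(i) the set of tasks with higher priority than i), iterated to a fixed point:

\[ R_i^{(k+1)} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j, \qquad R_i^{(0)} = C_i \]

The traditional and improved models on the next slides add cache reload and pipeline preemption costs to the interference term of this recurrence.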

  4. Classical model for Response Time Analysis (RTA), interference as in traditional methods: WCET of other tasks + cache reload costs [Basumalick et al., LCTRTS '94], [Busquets-Mataix et al., RTAS '96], [Lee et al., RTSS '97], ...

  5. Improved model for Response Time Analysis (RTA): interference = WCET of other tasks + scheduler, plus cache reload costs and pipeline preemption costs

  6. Interaction between cache and pipeline behavior • Cache misses and pipeline stalls can coincide in such a way that the cache misses have a reduced impact or none at all • Prefetch queues buffer instructions, thus hiding the negative effect of instruction cache misses

  7. Novel model for Response Time Analysis (RTA), interference with the integrated method: WCET of other tasks + scheduler, pipeline preemption costs, and cache reload costs

  8. Outline • Introduction • Cache Interference Analysis • Pipeline Preemption Analysis • Isolated and integrated RTA Method • Isolated vs. Integrated • Conclusion and Future Work

  9. Cache Interference Analysis • Extrinsic cache behavior is determined by Conflicting Cache Sets (CCS) • A CCS is a cache set to which more references to distinct memory blocks map than fit into the set
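As an illustration of this definition, here is a minimal sketch of how conflicting cache sets could be identified for a set-associative cache. The function name, the simple modulo set mapping, and all parameters are assumptions made for the example; this is not the paper's analyzer.

```python
from collections import defaultdict

def conflicting_cache_sets(block_addresses, num_sets, associativity):
    """Return the indices of cache sets that are referenced by more
    distinct memory blocks than the set can hold (conflicting sets)."""
    blocks_per_set = defaultdict(set)
    for block in set(block_addresses):               # distinct memory blocks only
        blocks_per_set[block % num_sets].add(block)  # simple modulo set mapping
    return {s for s, blocks in blocks_per_set.items()
            if len(blocks) > associativity}

# Example: 4 sets, 2-way associative; blocks 0, 4, 8 all map to set 0 -> conflict
print(conflicting_cache_sets([0, 1, 4, 8, 5], num_sets=4, associativity=2))
# -> {0}
```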

  10. Cache Interference Analysis. Illustration: a conflicting cache set X of Task 2, whose cache lines (Line 0, Line 1) are alternately occupied by memory blocks of Task 1, Task 2, and the scheduler

  11. Cache Interference Analysis: the Cache Interference Analyzer takes the memory references of the set of interfering tasks for Task_i and outputs all possibly conflicting cache sets for Task_i

  12. Outline • Introduction • Cache Interference Analysis • Pipeline Preemption Analysis • Isolated and integrated RTA Method • Isolated vs. Integrated • Conclusion and Future Work

  13. Pipeline Preemption Analysis • WCET prediction is needed for the tasks and for the scheduler • Entering (input) pipeline state (p.s.) of the scheduler: either consider every possible input p.s. or make the prediction independent of the input p.s. • Entering and re-entering pipeline state of tasks: make the prediction independent of the input p.s., i.e. assume an empty pipeline and therefore pretend a pipeline flush

  14. Pipeline Preemption Analysis • For each program point of each task the worst-case Pipeline Flush Time is computed • The worst-case Pipeline Flush Time at the end of a task or scheduler execution is computed and added to the WCET
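Written out as formulas, one possible reading of these two bullets is the following (a hedged paraphrase, not quoted from the paper; F_i reuses the symbol that later appears as the Pipeline Preemption Analyzer's output, and P_i denotes the program points of task i):

\[ F_i = \max_{p \in P_i} \mathit{flushTime}(p), \qquad \mathit{WCET}_i' = \mathit{WCET}_i + \mathit{flushTime}(\mathit{end}_i) \]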

  15. Pipeline Preemption Analysis • The Pipeline Preemption Analysis is safe if the following assumptions hold: • (A1) An empty pipeline cannot cause a longer delay than a nonempty one • (A2) The time needed to flush the pipeline is at least as large as any prolongation due to pipeline effects

  16. Pipeline Preemption Analysis • Limitations (L) and Workarounds (W) • L: For CPUs with prefetch queues empty pipelines can cause longer delays than nonempty ones [violation of (A1)] • W: Increase Pipeline Flush Time by prefetch queue fill time • L: Out-of-order execution could cause delays larger than flush time [violation of (A2)] • W: • Artificially influence pipeline state by special instructions or • Consider all possible (re-)entering pipeline states

  17. Outline • Introduction • Cache Interference Analysis • Pipeline Preemption Analysis • Isolated and integrated RTA Method • Isolated vs. Integrated • Conclusion and Future Work

  18. Response Time Analysis • Fixed Priority Scheduling • Rate monotonic priority assignment or • Deadline monotonic priority assignment • Periodic and sporadic tasks • Sporadic tasks: minimum inter-arrival time must be guaranteed • Communicating tasks (blocking) • ICPP (Immediate Ceiling Priority Protocol)
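With the ICPP blocking term added, the recurrence from the introduction extends in the usual way (again standard notation, not quoted from the paper: B_i is the worst-case blocking time task i can suffer from lower-priority tasks under ICPP):

\[ R_i^{(k+1)} = C_i + B_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(k)}}{T_j} \right\rceil C_j \]

The method presented here additionally accounts for the scheduler's execution and for the cache and pipeline preemption costs introduced on the earlier slides.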

  19. Response Time Analysis (Invariants) • (I1): The scheduler is never interrupted • (I2): Every switch between tasks is done by the scheduler • (I1) ==> No preemption costs for the scheduler, but release jitter • (I2) ==> Sporadic tasks can be treated like periodic tasks

  20. Isolated RTA Method • Separate addition of cache reload costs and pipeline preemption costs on top of the WCET of other tasks + scheduler (a sketch of this per-preemption accounting follows below)
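A minimal sketch of how such a separate, per-preemption addition could look inside the response-time iteration. This is illustrative only: gamma[j] lumps together a cache reload cost and a pipeline flush cost per preemption by task j, and the indexing, convergence test, and numbers are assumptions, not the paper's equations.

```python
import math

def isolated_response_time(C, T, gamma, i, deadline):
    """Fixed-point iteration for the response time of task i.
    Each preemption by a higher-priority task j contributes its WCET C[j]
    plus a per-preemption cost gamma[j] (cache reload + pipeline flush).
    Tasks are indexed by priority: index 0 is the highest priority."""
    R = C[i]
    while True:
        R_next = C[i] + sum(math.ceil(R / T[j]) * (C[j] + gamma[j])
                            for j in range(i))      # interference + preemption costs
        if R_next == R or R_next > deadline:        # fixed point reached or deadline missed
            return R_next
        R = R_next

# Illustrative task set (all numbers made up): WCETs, periods, per-preemption costs
C, T, gamma = [1, 2, 4], [5, 10, 30], [1, 1, 0]
print(isolated_response_time(C, T, gamma, i=2, deadline=30))   # -> 18
```

The point of the isolated scheme is visible in the gamma[j] term: cache reload and pipeline flush penalties are charged once per preemption, regardless of how the preempted task actually reuses the cache afterwards.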

  21. Isolated RTA Method. Illustration: cache-analysis results (Always Hit / Always Miss classifications) for a conflicting cache set shared by a preempted and a preempting task; cache reload costs per preemption = 1 x miss penalty

  22. Isolated RTA Method. Tool-chain diagram: the WCET analysis (CFG Builder, Value Analyzer, Cache Analyzer, Pipeline Analyzer, Path Analysis) takes the executable program and the annotated program source and computes the WCET; the Cache Interference Analyzer delivers the conflicting cache sets X_i,j and the Pipeline Preemption Analyzer the flush times F_i; the WCETs, X_i,j and F_i feed the RTA for Task_1, ..., Task_n

  23. Restrictions of Isolated Approaches • Overlapping of preemption-caused cache misses with pipeline effects is ignored ==> overestimations • Underestimations can occur for CPUs with dynamic pipeline decisions (e.g. superscalar CPUs), because of higher-than-expected miss penalties, cf. [Lundqvist and Stenström, RTSS '99]

  24. Q: How to consider pipeline behavior changes for additional (e.g. preemption-caused) cache misses? • First idea: compute the local effects of additional cache misses on the pipeline. • But for CPUs with dynamic pipeline decisions these effects are no longer local!

  25. Q: How to consider pipeline behavior changes for additional (e.g. preemption-caused) cache misses? A: Predict all possible cache misses statically and consider them during the pipeline analysis step!

  26. Integrated RTA Method. Illustration: the cache-analysis results of the preempted task are updated; references mapping to a cache set that conflicts with the possibly preempting tasks + scheduler are re-classified from Always Hit to Always Miss

  27. Integrated RTA Method • Compensation of preemption-caused miss penalties by pipeline effects is possible • Remaining cache reload costs are considered per instruction execution, not per preemption

  28. Integrated RTA Method. Tool-chain diagram: as before, the WCET analysis (CFG Builder, Value Analyzer, Cache Analyzer, Pipeline Analyzer, Path Analysis) works on the executable program and the annotated program source, but now the conflicting cache sets from the Cache Interference Analyzer are taken into account within the WCET analysis itself, so the resulting WCETs and the flush times F_i from the Pipeline Preemption Analyzer already reflect preemption-caused misses; WCETs and F_i feed the RTA for Task_1, ..., Task_n

  29. Outline • Introduction • Cache Interference Analysis • Pipeline Preemption Analysis • Isolated and integrated RTA Method • Isolated vs. Integrated • Conclusion and Future Work

  30. Isolated vs. Integrated Method • Contra Isolated Method • Applicability limited to a certain set of CPUs • No compensation of preemption-caused cache reload costs by pipeline effects possible • Cache-related preemption costs increase unboundedly with the number of preemptions • Contra Integrated Method • Overestimations if no. of reuses of cache lines > no. of preemptions (illustrated below)
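A back-of-envelope illustration of this trade-off. All numbers and both cost models are simplified assumptions (pipeline compensation is ignored entirely); they only mimic "costs charged per preemption" versus "costs charged per reuse of a conflicting cache line".

```python
MISS_PENALTY = 10  # cycles, made-up value

def isolated_extra_cost(preemptions, conflicting_lines):
    # isolated view: every preemption reloads each conflicting cache line once
    return preemptions * conflicting_lines * MISS_PENALTY

def integrated_extra_cost(reuses_of_conflicting_lines):
    # integrated view: every executed reuse of a conflicting line counts as a miss,
    # independent of how often the task is actually preempted
    return reuses_of_conflicting_lines * MISS_PENALTY

# Many preemptions, few reuses: the integrated bound is smaller
print(isolated_extra_cost(50, 4), integrated_extra_cost(20))    # 2000 vs. 200
# Few preemptions, many reuses: the isolated bound is smaller
print(isolated_extra_cost(2, 4), integrated_extra_cost(200))    # 80 vs. 2000
```

This matches the experimental observation on the next slide: with a high number of preemptions the integrated method wins, with a low number the isolated method wins.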

  31. Practical Experiments • Comparison of the Simple Isolated (Busquets-Mataix et al., RTAS '96), Isolated, and Integrated Method for a sample task set • Results: • High number of preemptions ==> Integrated Method wins • Low number of preemptions ==> Isolated Method wins • Isolated always better than Simple Isolated Method

  32. Response time of two low-priority tasks as a function of the period of a high-priority task

  33. Outline • Introduction • Cache Interference Analysis • Pipeline Preemption Analysis • Isolated and integrated RTA Method • Isolated vs. Integrated • Conclusion and Future Work

  34. Conclusion • Pipeline-related preemption costs are considered by both presented RTA methods • Isolated approaches are shown to be limited to a certain class of CPUs • The integrated method is the first to allow compensation of preemption-caused cache misses by pipeline effects • The integrated method is not limited to a certain class of CPUs • Both methods can be used together and the best results can be picked

  35. Future Work • More sophisticated computation of worst-case blocking times for communicating tasks • Improvement of Integrated Approach for low numbers of preemptions

  36. More Information • http://www.cs.uni-sb.de/~js • http://www.absint.de
