
Task Allocation and Scheduling


Presentation Transcript


  1. Task Allocation and Scheduling • Problem: How to assign tasks to processors and to schedule them in such a way that deadlines are met • Our initial focus: uniprocessor task scheduling • Extensions: to multiprocessors

  2. Uniprocessor Task Scheduling • Initial Assumptions: • Each task is periodic • Periods of different tasks may be different • Worst-case task execution times are known • Relative deadline of a task is equal to its period • No dependencies between tasks: they are independent • Only resource constraint considered is execution time • No critical sections • Preemption costs are negligible • Tasks must be completed for output to have any value
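The task model described by these assumptions can be captured in a small sketch (the class name and example values are illustrative, not from the slides):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PeriodicTask:
    """A periodic task under the initial assumptions: relative deadline == period."""
    name: str
    period: float   # inter-arrival time, equal to the relative deadline
    wcet: float     # worst-case execution time

    @property
    def utilization(self) -> float:
        return self.wcet / self.period

# Illustrative task set
tasks = [PeriodicTask("T1", period=4, wcet=1),
         PeriodicTask("T2", period=5, wcet=2),
         PeriodicTask("T3", period=20, wcet=3)]
total_u = sum(t.utilization for t in tasks)  # 0.25 + 0.4 + 0.15 = 0.8
```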

  3. Standard Scheduling Algorithms • Rate-Monotonic (RM) Algorithm: • Static priority • Higher-frequency tasks have higher priority • Earliest-Deadline First (EDF) Algorithm: • Dynamic priority • Task with the earliest absolute deadline has highest priority

  4. RMA • Task priority is inversely proportional to the task period (directly proportional to task frequency) • At any moment, the processor is either • idle if there are no tasks to run, or • running the highest-priority task available • A lower-priority task can suffer many preemptions • To a task, lower-priority tasks are effectively invisible

  5. RMA • Example • Schedulability criteria: • Sufficiency condition (Liu & Layland, 1973) • Necessary & sufficient conditions (Joseph & Pandya, 1986; Lehoczky, Sha, Ding 1989)
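The Liu & Layland (1973) sufficiency condition cited above is a pure utilization test: n tasks are RM-schedulable if their total utilization does not exceed n(2^(1/n) - 1). A minimal sketch (utilization values are illustrative):

```python
def rm_sufficient(utilizations):
    """Liu & Layland sufficient (not necessary) test for RM schedulability:
    passes if total utilization <= n * (2^(1/n) - 1)."""
    n = len(utilizations)
    bound = n * (2 ** (1.0 / n) - 1)   # ~0.7798 for n = 3, -> ln 2 as n grows
    return sum(utilizations) <= bound

# Total utilization 0.7 is under the 3-task bound of ~0.7798: schedulable.
print(rm_sufficient([0.2, 0.3, 0.2]))   # True
# Total 0.9 exceeds the bound: the test fails, but the set is not
# necessarily unschedulable -- an exact check is still needed.
print(rm_sufficient([0.4, 0.3, 0.2]))   # False
```

Failing this test is inconclusive, which is why the necessary-and-sufficient conditions (Joseph & Pandya; Lehoczky, Sha, Ding) matter.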

  6. RMA • Critical Instant of a Task: An instant at which a request for that task will have the largest response time • Critical Time-zone of a Task: Interval between a critical instant of that task and the completion time of that task • Critical Instant Theorem: Critical instant of a task T_i occurs whenever T_i arrives simultaneously with all higher-priority tasks

  7. RMA: Schedulability Check • The Critical Instant Theorem leads to a schedulability check: • If a task is released at the same time as all of the tasks of higher priority and it meets its deadline, then it will meet its deadline under all circumstances

  8. RMA: Schedulability Test • If a task is released simultaneously with all higher-priority tasks, determine when it will be done • If this completion time is no later than this task’s deadline, we have succeeded with this task • Find a systematic procedure to turn this process into a necessary-and-sufficient schedulability check
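The systematic procedure alluded to above is usually done as a fixed-point iteration (the Joseph & Pandya response-time test): starting from the critical instant, iterate R = C + sum over higher-priority tasks of ceil(R / T_j) * C_j until R stops changing. A sketch with assumed example values:

```python
import math

def response_time(wcet, deadline, higher):
    """Worst-case completion time of a task released at its critical instant.
    higher: list of (wcet, period) pairs for all higher-priority tasks."""
    r = wcet
    while True:
        r_next = wcet + sum(math.ceil(r / t) * c for c, t in higher)
        if r_next == r:
            return r        # fixed point reached: worst-case response time
        if r_next > deadline:
            return None     # completion would fall past the deadline
        r = r_next

# Task with C=3, D=12, preempted by (C=1, T=4) and (C=2, T=5) -- assumed values.
print(response_time(3, 12, [(1, 4), (2, 5)]))  # 10: meets the 12-unit deadline
```

Because preemptions can only add work, the iteration is monotone, and capping it at the deadline makes it both a necessary and sufficient check for that task.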

  9. RMA: Schedulability • Start with a single-task set and obtain its schedulability conditions • Extend this to a two-task set • Exploit any intuition gained to generalize this

  10. RM Schedulability

  11. Earliest Deadline First (EDF) • Same assumptions as before • This is a dynamic priority algorithm: the relative priorities of tasks can change with time • The task with the earliest absolute deadline has the processor • Schedulability Test: Total utilization of task set must not exceed 1.
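The EDF schedulability test above reduces to a one-line utilization check; a sketch with hypothetical (wcet, period) pairs:

```python
def edf_schedulable(tasks):
    """EDF on one processor with deadline == period: feasible iff
    total utilization sum(C_i / T_i) is no greater than 1."""
    return sum(c / t for c, t in tasks) <= 1.0

print(edf_schedulable([(1, 4), (2, 5), (3, 10)]))  # 0.25 + 0.4 + 0.3 = 0.95 -> True
print(edf_schedulable([(2, 4), (2, 5), (3, 10)]))  # 0.5 + 0.4 + 0.3 = 1.2 -> False
```

Unlike the RM bound, this test is exact: utilization at most 1 is both necessary and sufficient under these assumptions.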

  12. EDF • Lemma 3.8: If a deadline is missed for the first time at some time t_miss, the processor must have been continuously busy over [0,t_miss]. • Theorem 3.11: A task set is schedulable iff its total utilization is no greater than 1.

  13. EDF: When Deadline != Period

  14. EDF: When Deadline != Period (contd.) • The schedulability condition is checked for all t >= d_max
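When relative deadlines are shorter than periods, the plain utilization test no longer suffices. A standard check in this setting is the processor-demand criterion, to which the "t >= d_max" condition above appears to belong; the formulation below is an assumption on my part, not taken from the slides, and the task parameters are hypothetical:

```python
import math

def demand(tasks, t):
    """Processor demand in [0, t]: total work of all jobs both released
    and due by time t. tasks: (wcet, relative_deadline, period) triples."""
    total = 0.0
    for c, d, p in tasks:
        if t >= d:  # at least one job of this task is due by t
            total += (math.floor((t - d) / p) + 1) * c
    return total

# Feasible under EDF iff demand(tasks, t) <= t at every absolute deadline t
tasks = [(1, 3, 4), (2, 4, 5)]            # hypothetical example values
deadlines = [3, 4, 7, 9, 11, 12, 14]      # deadlines falling in the test window
print(all(demand(tasks, t) <= t for t in deadlines))  # True
```

Only the absolute deadlines in a bounded interval need to be checked, since demand changes only at those points.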

  15. Critical Sections • Remove the assumption that tasks can be preempted at any time • If a task is within a critical section of code • It may be preempted • However, until that task finishes executing that critical section, no other task can enter it (irrespective of its priority) • Obvious effect: Some higher-priority tasks which also need to enter a critical section will have to wait • Less obvious effect: Priority-inversion can occur

  16. Example From J. W. Liu: Real-Time Systems, Prentice-Hall, 2000

  17. Critical Sections (contd.) From J. W. Liu, op cit.

  18. Priority Inheritance Protocol • Key feature is the priority inheritance rule: • When a higher-priority task A gets blocked due to resource R by a lower-priority task B, B inherits the priority of A. • When B releases R, the priority of B reverts to the value it held before it inherited the priority of A. • Priority inheritance is transitive.
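The inheritance rule can be sketched as follows (task names and priorities are illustrative; reverting directly to the base priority simplifies the "value it held before" rule on the slide, which differs only when inheritance nests):

```python
class Task:
    """Minimal sketch of a task for the priority inheritance rule."""
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority
        self.priority = base_priority   # current (possibly inherited) priority

def block_on(blocked, holder):
    # Inheritance rule: the resource holder inherits the blocked task's
    # priority if it is higher than its own.
    if blocked.priority > holder.priority:
        holder.priority = blocked.priority

def release(holder):
    # On releasing the resource, revert (simplified: back to base priority).
    holder.priority = holder.base_priority

A = Task("A", base_priority=10)   # high-priority task
B = Task("B", base_priority=1)    # low-priority task holding resource R
block_on(A, B)
print(B.priority)  # 10: B now runs at A's priority, so no medium-priority
                   # task can preempt it inside the critical section
release(B)
print(B.priority)  # 1
```

The inherited priority is exactly what prevents the unbounded priority inversion seen in the earlier example: a medium-priority task can no longer run ahead of B while A waits.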

  19. Priority Ceiling Protocol • The priority ceiling of any resource is the highest priority of all the tasks requiring that resource. • The current priority ceiling of the system is the highest priority ceiling of the resources currently locked. • A task that requires no critical section resources proceeds according to the traditional approach

  20. When task A requests resource R, • If R is held by another task, it is blocked. • If R is free, • If A’s priority is greater than the current system priority ceiling, A is granted access to R • If A’s priority is not greater than the current system priority ceiling, then it is blocked unless A holds resources whose priority ceiling equals the system priority ceiling. • Blocking tasks inherit the priority of the tasks they block (as in the priority inheritance protocol)
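The grant rule in step 20 can be sketched as a predicate (resource names and ceiling values are illustrative; higher numbers mean higher priority):

```python
def may_lock(prio, locked, held_by_me, ceiling):
    """Priority-ceiling grant check for a request on a *free* resource.
    locked: all resources currently locked by any task;
    held_by_me: the subset of those held by the requesting task;
    ceiling: maps each resource to its priority ceiling."""
    if not locked:
        return True
    system_ceiling = max(ceiling[r] for r in locked)
    if prio > system_ceiling:
        return True
    # Otherwise blocked, unless the requester itself holds a resource
    # whose ceiling equals the current system ceiling.
    return any(ceiling[r] == system_ceiling for r in held_by_me)

ceiling = {"R1": 10, "R2": 5}
print(may_lock(7, {"R1"}, set(), ceiling))     # False: 7 <= ceiling 10, blocked
print(may_lock(7, {"R1"}, {"R1"}, ceiling))    # True: requester set the ceiling itself
print(may_lock(12, {"R2"}, set(), ceiling))    # True: 12 > ceiling 5
```

Refusing a free resource when the requester cannot clear the ceiling is what rules out the hold-and-wait cycles that cause deadlock.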

  21. The priority ceiling protocol: • Prevents deadlocks from ever occurring • Ensures that no task can be blocked for more than the duration of one critical section

  22. Example From J. Liu, op cit.

  23. Properties of the Ceiling Protocol • Deadlock is not possible • Transitive blocking does not occur, i.e., a task which blocks another task cannot itself be blocked. • Each task can be blocked for the duration of at most one critical section. • The longest critical section provides a bound on the blocking time.

  24. IRIS Tasks • IRIS = Increased Reward for Increased Service • Also called “imprecise” tasks • Consist of: • Mandatory portion, which has to be executed • Optional portion • Reward function linking the execution time to resulting quality of output • Examples: Search and numerical algorithms

  25. Identical Linear Reward Fn • If the mandatory portion of every task is zero, EDF is optimal, i.e., it yields the maximal reward. • If the mandatory portion of at least one task is non-zero, the problem gets more complex • See Algorithm IRIS1 on page 99

  26. IRIS 1 Example

  27. Non-identical Linear Rewards • Basic Idea: • Check if the mandatory portions can be scheduled. If not, then give up • Otherwise, keep augmenting the task set with optional portions of tasks in descending order of weights, and running IRIS1 on them

  28. IRIS2 Algorithm

  29. Identical Concave Rewards • Captures the property of diminishing returns seen in many iterative algorithms • Consider here tasks with zero mandatory portions • Tactic: Ensure that the optional time given to each task is as equal as possible • Example: Aperiodic tasks. • Start from the end of the schedule & work backwards

  30. Sporadic Tasks • In EDF, simply use the deadline of the sporadic task to determine its priority • In RM, create a “sporadic server”: a periodic task that acts as a placeholder for the sporadic tasks. • There are several obvious ways to manage the sporadic server

  31. Task Assignment • Scheduling tasks on a multiprocessor is generally an NP-complete problem • Traditional heuristics do it in two steps: • Assign or allocate tasks to processors • Use a uniprocessor scheduling algorithm to schedule tasks assigned to each processor • Do this iteratively, if necessary

  32. Assignment Algorithms • Bin packing: • First fit • Best fit
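Both heuristics can be sketched over task utilizations, using a per-processor capacity of 1.0 (the EDF bound; in practice the capacity would come from whichever uniprocessor test is used, and the utilization values below are illustrative):

```python
def first_fit(utils, capacity=1.0):
    """First fit: place each task on the first processor with room,
    opening a new processor when none fits."""
    procs = []
    for u in utils:
        for p in procs:
            if sum(p) + u <= capacity:
                p.append(u)
                break
        else:
            procs.append([u])
    return procs

def best_fit(utils, capacity=1.0):
    """Best fit: among feasible processors, pick the fullest one
    (least remaining capacity)."""
    procs = []
    for u in utils:
        candidates = [p for p in procs if sum(p) + u <= capacity]
        if candidates:
            max(candidates, key=sum).append(u)
        else:
            procs.append([u])
    return procs

utils = [0.6, 0.5, 0.4, 0.3, 0.2]
print(first_fit(utils))  # [[0.6, 0.4], [0.5, 0.3, 0.2]]: 2 processors
```

Neither heuristic is optimal (bin packing is NP-hard), but both give simple constant-factor approximations, after which each processor's task set is checked with a uniprocessor test.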

  33. Fault-Tolerant Scheduling • Fault Tolerance: The ability of a system to suffer component failures and still function adequately • Fault-Tolerant Scheduling: Save enough time in a schedule that the system can still function despite a certain number of processor failures

  34. FT-Scheduling: Model • System Model • Multiprocessor system • Each processor has its own memory • Tasks are preloaded into assigned processors • Task Model • Tasks are independent of one another • Schedules are created ahead of time

  35. Basic Idea • Preassign backup copies, called ghosts. • Assign ghosts to the processors along with the primary copies • A ghost and a primary copy of the same task can’t be assigned to the same processor • For each processor, all the primaries and a particular subset of the ghost copies assigned to it should be feasibly schedulable on that processor

  36. Requirements • Two main variations: • Current and future iterations of the task have to be saved if a processor fails • Only future iterations need to be saved; the current iteration can be discarded
