D.2.1 The Task Dispatching Model
1
[The task dispatching model specifies preemptive
scheduling, based on conceptual priority-ordered ready queues.]
Dynamic Semantics
2
A task runs (that is, it becomes a
running
task) only when it is ready (see
9.2) and the
execution resources required by that task are available. Processors are allocated
to tasks based on each task's active priority.
3
It is implementation defined whether, on a multiprocessor,
a task that is waiting for access to a protected object keeps its processor
busy.
3.a
Implementation defined: Whether,
on a multiprocessor, a task that is waiting for access to a protected
object keeps its processor busy.
4
{task dispatching}
{dispatching, task} {task
dispatching point [distributed]} {dispatching
point [distributed]} Task dispatching
is the process by which one ready task is selected for execution on a processor.
This selection is done at certain points during the execution of a task called
task dispatching points. A task reaches a task dispatching point whenever
it becomes blocked, and whenever it becomes ready. In addition, the completion
of an
accept_statement (see
9.5.2),
and task termination are task dispatching points for the executing task. [Other
task dispatching points are defined throughout this Annex.]
4.a
Ramification: On multiprocessor
systems, more than one task can be chosen, at the same time, for execution
on more than one processor, as explained below.
5
{ready queue} {head
(of a queue)} {tail (of
a queue)} {ready task}
{task dispatching policy [partial]}
{dispatching policy for tasks
[partial]} Task dispatching policies
are specified in terms of conceptual
ready queues, task states,
and task preemption. A ready queue is an ordered list of ready tasks.
The first position in a queue is called the
head of the queue,
and the last position is called the
tail of the queue. A task
is
ready if it is in a ready queue, or if it is running. Each
processor has one ready queue for each priority value. At any instant,
each ready queue of a processor contains exactly the set of tasks of
that priority that are ready for execution on that processor, but are
not running on any processor; that is, those tasks that are ready, are
not running on any processor, and can be executed using that processor
and other available resources. A task can be on the ready queues of more
than one processor.
5.a
Discussion: The core language
defines a ready task as one that is not blocked. Here we refine this
definition and talk about ready queues.
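The conceptual model above (one FIFO queue per priority value, with a head and a tail) can be sketched in a few lines of Python. This is a non-normative illustration; the priority range and task names are made up for the example.

```python
from collections import deque

# Illustrative sketch of conceptual ready queues: one FIFO list per
# priority value. The priority range and task names are assumptions.
NUM_PRIORITIES = 4
ready_queues = [deque() for _ in range(NUM_PRIORITIES)]

def make_ready(task, priority):
    """Place a ready (non-running) task at the tail of its priority's queue."""
    ready_queues[priority].append(task)

make_ready("T1", 2)
make_ready("T2", 2)  # same priority: queued behind T1
make_ready("T3", 1)

# The first position is the head of the queue, the last is the tail.
assert ready_queues[2][0] == "T1" and ready_queues[2][-1] == "T2"
```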
6
{running task} Each
processor also has one
running task, which is the task currently
being executed by that processor. Whenever a task running on a processor
reaches a task dispatching point, one task is selected to run on that
processor. The task selected is the one at the head of the highest priority
nonempty ready queue; this task is then removed from all ready queues
to which it belongs.
6.a
Discussion: There is always
at least one task to run, if we count the idle task.
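The selection rule at a task dispatching point can be illustrated with a small Python sketch, assuming a dictionary of per-priority queues (the task names and priority values are hypothetical): the task at the head of the highest-priority nonempty ready queue is chosen and removed from the queue.

```python
from collections import deque

# Hypothetical ready queues keyed by priority; contents are illustrative.
ready_queues = {3: deque(), 2: deque(["A", "B"]), 1: deque(["C"])}

def select_next_task():
    """Return the head of the highest-priority nonempty ready queue,
    removing it from that queue; fall back to a conceptual idle task."""
    for prio in sorted(ready_queues, reverse=True):
        if ready_queues[prio]:
            return ready_queues[prio].popleft()
    return "idle"  # there is always at least one task to run

assert select_next_task() == "A"  # queue 3 is empty; head of queue 2 wins
```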
7
{preemptible resource}
A preemptible resource is a resource that while allocated
to one task can be allocated (temporarily) to another instead. Processors are
preemptible resources. Access to a protected object (see
9.5.1)
is a nonpreemptible resource.
{preempted task} When
a higher-priority task is dispatched to the processor, and the previously running
task is placed on the appropriate ready queue, the latter task is said to be
preempted.
7.a
Reason: A processor that
is executing a task is available to execute tasks of higher priority,
within the set of tasks that that processor is able to execute. Write
access to a protected object, on the other hand, cannot be granted to
a new task before the old task has released it.
8
{task dispatching point [partial]}
{dispatching point [partial]}
A new running task is also selected whenever there
is a nonempty ready queue with a higher priority than the priority of
the running task, or when the task dispatching policy requires a running
task to go back to a ready queue. [These are also task dispatching points.]
8.a
Ramification: Thus, when
a task becomes ready, this is a task dispatching point for all running
tasks of lower priority.
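Preemption as described above can be sketched as follows, again as a non-normative Python model with invented task names: when a higher-priority task becomes ready, the previously running task is placed back on the ready queue for its priority and the new task takes the processor.

```python
from collections import deque

# Illustrative preemption sketch; task names and priorities are assumptions.
ready_queues = {2: deque(), 1: deque()}
running = ("Low", 1)  # (task, active priority)

def on_task_ready(task, prio):
    """A task becoming ready is a dispatching point for lower-priority
    running tasks: the running task is re-queued and preempted."""
    global running
    ready_queues[prio].append(task)
    if prio > running[1]:
        preempted_task, preempted_prio = running
        ready_queues[preempted_prio].append(preempted_task)
        running = (ready_queues[prio].popleft(), prio)

on_task_ready("High", 2)
assert running == ("High", 2)
assert list(ready_queues[1]) == ["Low"]  # the preempted task is ready again
```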
Implementation Permissions
9
An implementation is allowed to define additional
resources as execution resources, and to define the corresponding allocation
policies for them. Such resources may have an implementation-defined effect
on task dispatching (see
D.2.2).
9.a
Implementation defined: The
effect of implementation-defined execution resources on task dispatching.
10
An implementation may place implementation-defined
restrictions on tasks whose active priority is in the Interrupt_Priority
range.
10.a
Ramification: For example,
on some operating systems, it might be necessary to disallow them altogether.
This permission applies to tasks whose priority is set to interrupt level
for any reason: via a pragma, via a call to Dynamic_Priorities.Set_Priority,
or via priority inheritance.
NOTES
11
7 Section 9 specifies under
which circumstances a task becomes ready. The ready state is affected
by the rules for task activation and termination, delay statements, and
entry calls. {blocked [partial]} When
a task is not ready, it is said to be blocked.
12
8 An example of a possible
implementation-defined execution resource is a page of physical memory,
which needs to be loaded with a particular page of virtual memory before
a task can continue execution.
13
9 The ready queues are purely
conceptual; there is no requirement that such lists physically exist
in an implementation.
14
10 While a task is running,
it is not on any ready queue. Any time the task that is running on a
processor is added to a ready queue, a new running task is selected for
that processor.
15
11 In a multiprocessor system,
a task can be on the ready queues of more than one processor. At the
extreme, if several processors share the same set of ready tasks, the
contents of their ready queues are identical, and so they can be viewed
as sharing one ready queue, and can be implemented that way. [Thus, the
dispatching model covers multiprocessors where dispatching is implemented
using a single ready queue, as well as those with separate dispatching
domains.]
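The shared-queue case in note 11 can be sketched in Python, assuming two processors and made-up task names: processors that share the same set of ready tasks can be implemented with one shared ready queue, each processor taking the task at its head.

```python
from collections import deque

# Illustrative sketch: processors sharing one set of ready tasks modeled
# as a single shared ready queue. CPU and task names are assumptions.
shared_queue = deque(["T1", "T2", "T3"])

# At a dispatching point on each processor, the head of the shared queue
# is taken; choosing a task removes it from every queue it is on (here, one).
running = {cpu: shared_queue.popleft() for cpu in ("P0", "P1")}

assert running == {"P0": "T1", "P1": "T2"}
assert list(shared_queue) == ["T3"]
```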
16