previous methods that prevents chained blocking and mutual deadlocks. In this protocol, each resource is assigned a priority ceiling, which is the highest priority of any task that may request its service. The following rules then apply (see the sketch after the list):
1. A higher priority task always preempts a lower priority task.
2. A task cannot enter its critical section unless its priority is higher than the priority ceiling
of all resources currently locked by other tasks.
3. A lower priority task that blocks a higher priority task temporarily inherits the priority
of the higher priority task.
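To make these rules concrete, the following sketch (in Python, using illustrative Task and Resource classes that are not drawn from any particular kernel API) shows the admission test imposed by rule 2 and the priority inheritance required by rule 3; rule 1 is assumed to be handled by the underlying preemptive scheduler, and numerically larger values denote higher priority.

    # Minimal sketch of the priority ceiling protocol's locking rules.
    # The classes and function names are illustrative, not a real kernel API.

    class Resource:
        def __init__(self, name, ceiling):
            self.name = name
            self.ceiling = ceiling    # highest priority of any task that may lock it
            self.holder = None        # task currently inside this critical section

    class Task:
        def __init__(self, name, priority):
            self.name = name
            self.base_priority = priority
            self.priority = priority  # may be raised temporarily by rule 3

    def may_lock(task, resources):
        # Rule 2: the task may enter a critical section only if its priority is
        # higher than the ceiling of every resource locked by another task.
        return all(task.priority > r.ceiling
                   for r in resources
                   if r.holder is not None and r.holder is not task)

    def lock(task, resource, resources):
        if may_lock(task, resources):
            resource.holder = task
            return True               # task enters its critical section
        # Rule 3: each lower priority task whose lock blocks this request
        # temporarily inherits the blocked task's priority.
        for r in resources:
            if r.holder not in (None, task) and r.ceiling >= task.priority:
                r.holder.priority = max(r.holder.priority, task.priority)
        return False                  # task is blocked

    def unlock(task, resource):
        resource.holder = None
        task.priority = task.base_priority   # drop inherited priority (simplified)

In a real kernel the inherited priority would be recomputed as each lock is released; the single reset in unlock() is a simplification for brevity.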
The only difference between this protocol and the Priority Inheritance Protocol is the addition
of rule 2. However, this rule is what prevents chained blocking and mutual deadlocking by
forcing a total ordering of executing and suspended critical sections. The relative complexity
of implementing this protocol is its primary drawback.

A real-world problem that often arises is the existence of only a finite number of priority levels. Most algorithmic research assumes the availability of infinite priority levels for ease of modeling and analysis, but in practical systems this is simply not the case. Studies of the effects of limited priorities have shown that they reduce the degree of schedulability (maximum processor utilization) and produce a form of priority inversion. For a system scheduled using the rate-monotonic algorithm with a potential requirement of 100,000 priority levels, limiting the system to only 256 priority levels reduces the maximum processor utilization to 0.9986 of its unrestricted value, which is negligible. Reducing the number of priority levels to 16, however, reduces the maximum processor utilization to 0.7025 of its unrestricted value, which starts to become significant, and reducing it to four priority levels reduces the maximum utilization to a factor of 0.0811. This translates into less than 7.5% processor utilization if task deadlines are to be guaranteed.

Additionally, reducing the number of priority levels means that tasks of slightly different priority must be grouped together under a single priority level. This causes another form of priority inversion for which there is no workaround analogous to the priority ceiling protocol, because a task with a higher real priority can block on a task within the same priority group that has a lower real priority.
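The grouping effect can be illustrated with a small sketch. The mapping function and the uniform grouping scheme below are assumptions made for illustration, not a method given in the text; they simply show how tasks with distinct rate-monotonic priorities (a shorter period means a higher priority) end up sharing a level when only a few levels are available.

    # Collapse distinct rate-monotonic priorities onto a limited set of levels.
    def assign_limited_priorities(periods, num_levels):
        ranked = sorted(periods)                    # rate-monotonic order
        group_size = -(-len(ranked) // num_levels)  # ceiling division
        return {p: i // group_size for i, p in enumerate(ranked)}

    # Eight tasks squeezed into four levels: the tasks with periods 10 and 12
    # share level 0 even though their real priorities differ.
    print(assign_limited_priorities([10, 12, 20, 25, 40, 50, 80, 100], 4))
    # {10: 0, 12: 0, 20: 1, 25: 1, 40: 2, 50: 2, 80: 3, 100: 3}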
The lack of performance guarantees under overload conditions does not have an easy workaround with static priority-based algorithms.
Most attempts to counter this deficiency involve modifying the scheduling algorithm to use
some form of adaptive priority. These algorithms will be explored later when dynamic priority
algorithms are discussed.

The difficulty in obtaining high processor utilization is another problem that has no simple workaround. It tends to be task-set specific, and general methods to ensure
high processor utilization under these scheduling algorithms do not exist. Most attempts at
countering this problem add a “server task” or something similar to attempt to “capture” the idle
processor time, but then the scheduling algorithm falls into the class of best effort algorithms,
another form of dynamic priority algorithm that will be discussed later.
Future Research
Much of the current work has focused on uniprocessor applications of the static priority-based
scheduling algorithms. When multiple processors are involved, the complexity of the problem
increases exponentially. However, with today’s multiprocessor systems becoming more and
more common, this is an area that needs further research. Burchard, Liebeherr, Oh, and Son,
among others, have begun to investigate the problem of adapting static priority-based scheduling
algorithms to multiprocessors. They have proposed a method that develops more stringent
schedulability conditions so that more tasks can be assigned to each processor using rate-
monotonic scheduling than was previously considered feasible. However, their results depend upon “novel” schedulability conditions, and more work is needed before their techniques are ready for real-world use.
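For context, the sketch below shows the baseline that such work refines: tasks are packed onto processors first-fit, and each processor is checked against the classical Liu and Layland utilization bound n(2^(1/n) - 1). This is an illustrative assumption only; it does not implement the “novel” schedulability conditions proposed by Burchard, Liebeherr, Oh, and Son.

    # First-fit partitioning of tasks (given by utilization) onto processors,
    # using the classic Liu and Layland rate-monotonic bound as the
    # per-processor schedulability test. Illustrative baseline only.

    def ll_bound(n):
        return n * (2 ** (1.0 / n) - 1)     # utilization bound for n tasks

    def first_fit_partition(utilizations, num_procs):
        procs = [[] for _ in range(num_procs)]
        for u in utilizations:
            for p in procs:
                if sum(p) + u <= ll_bound(len(p) + 1):
                    p.append(u)             # task fits on this processor
                    break
            else:
                return None                 # no processor can accept the task
        return procs

    # Example: six tasks partitioned across two processors.
    print(first_fit_partition([0.3, 0.25, 0.2, 0.35, 0.1, 0.15], 2))
    # [[0.3, 0.25, 0.2], [0.35, 0.1, 0.15]]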
Lortz and Shin have also investigated multiprocessor scheduling and
have concluded that in multiprocessor systems, the implementation of mutual exclusion and task