
Principles of Operating Systems



Notes  smooth progression, without significant bursts of activity for any one task or group of tasks. Proportionate progress is used as a measure of temporal fairness. Assuming that processing time is divided into equal-length slots, a schedule exhibits proportionate progress if, for every process p and every integer time t, the absolute value of the difference between the amount of processor time (number of slots) that should have been allocated to p by time t and the amount actually allocated is strictly less than 1. Baruah has shown that proportionate progress is a stronger requirement than periodic scheduling: all schedules exhibiting proportionate progress are periodic, but not all periodic schedules display proportionate progress.
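The fairness condition above can be checked mechanically. The following sketch (the function name and data layout are illustrative, not from the text) tests whether a slot-by-slot schedule keeps every process's lag strictly below 1:

```python
# A minimal sketch of the proportionate-progress test: a schedule is fair if,
# for every process p and every slot boundary t, |w_p * t - allocated(p, t)| < 1,
# where w_p is p's required share of the processor.

def is_proportionate(schedule, weights):
    """schedule: list of process ids, one per unit-length slot (None = idle).
    weights: dict mapping process id -> required share (0 < w <= 1)."""
    allocated = {p: 0 for p in weights}
    for t, running in enumerate(schedule, start=1):
        if running is not None:
            allocated[running] += 1
        for p, w in weights.items():
            # lag = ideal allocation minus actual allocation at time t
            if abs(w * t - allocated[p]) >= 1:
                return False
    return True

weights = {"a": 0.5, "b": 0.5}
print(is_proportionate(["a", "b", "a", "b"], weights))  # alternating: True
print(is_proportionate(["a", "a", "b", "b"], weights))  # bursty: lag hits 1, False
```

Note how the second schedule delivers the same totals over four slots but fails the test: temporal fairness constrains every prefix of the schedule, not just the final tally.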
The weight-monotonic algorithm first defines a weight for each task in the system, where the task's weight is the ratio of its required execution time to its period. At each time slot, the processor is assigned to the task with the greatest weight, with ties broken arbitrarily. At first glance, this looks very similar to the rate-monotonic algorithm. The difference is that the rate-monotonic algorithm makes scheduling decisions based solely on the task's period, while the weight-monotonic algorithm also considers its execution requirement. This has the effect of elevating the priority of a task that would have been given low priority under the rate-monotonic algorithm, so that it receives a proportionally larger share of the available processing time. It might seem that this would negate the advantages gained from rate-monotonic scheduling, but experiments show that the weight-monotonic algorithm often successfully schedules task sets that were unschedulable under the rate-monotonic algorithm. The reason for this apparent contradiction is the difference in the definition of “contending tasks” between schedules exhibiting proportionate progress and periodic schedules. Essentially, in proportionate progress schedules, high priority tasks do not completely take over the processor, which allows lower priority tasks an opportunity to run and avoids starvation.
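The weight-monotonic rule just described can be sketched as follows, assuming unit-length slots and a fresh job released at the start of each period (the helper name and task encoding are invented for illustration):

```python
# Hypothetical sketch of the weight-monotonic rule: each task's weight is its
# execution requirement divided by its period, and every slot is given to the
# pending task with the greatest weight.

def weight_monotonic(tasks, slots):
    """tasks: dict id -> (exec_time, period). Returns the schedule as a list."""
    weights = {i: c / p for i, (c, p) in tasks.items()}
    remaining = {i: 0 for i in tasks}       # work left in the current period
    schedule = []
    for t in range(slots):
        for i, (c, p) in tasks.items():
            if t % p == 0:                  # new period: release a fresh job
                remaining[i] = c
        pending = [i for i in tasks if remaining[i] > 0]
        if pending:
            # ties broken arbitrarily (here: by task id)
            chosen = max(pending, key=lambda i: (weights[i], i))
            remaining[chosen] -= 1
            schedule.append(chosen)
        else:
            schedule.append(None)           # idle slot
    return schedule

# Task "a": 1 unit every 2 slots (weight 0.5); task "b": 1 unit every 4 (0.25).
print(weight_monotonic({"a": (1, 2), "b": (1, 4)}, 4))  # ['a', 'b', 'a', None]
```

A rate-monotonic version of this sketch would differ only in the `key`: it would rank tasks by shortest period rather than by weight, ignoring the execution requirement.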
Static priority-based scheduling algorithms have a number of problems that must be addressed when they are used for real-world problems. Under overload conditions, it is difficult to guarantee the performance of static priority-based algorithms, and it can also be quite difficult to guarantee performance while achieving high processor utilization. Without some kind of algorithmic extension, such as a server task, static priority-based algorithms have no provision for handling “soft” or aperiodic processes. This topic is addressed in more detail later in this unit, when dynamic priority algorithms are discussed. A phenomenon known as priority inversion, in which a high priority task blocks while waiting for a lower priority task to release some resource, can also be a problem, although there are several
techniques to counter this situation. The Priority Inheritance Protocol, Priority Ceiling Emulation, and the Priority Ceiling Protocol are all useful in alleviating the problem. The Priority Inheritance Protocol acts when a higher priority task is blocked by a lower priority task: the lower priority task inherits the priority of the higher priority task, which prevents a medium priority task from preempting it. When the lower priority task releases the resource on which the higher priority task was blocked, it is returned to its original priority level. Although the Priority Inheritance Protocol prevents unbounded blocking of a higher priority task by a lower priority task, it does not guarantee that deadlocks will not occur. It also suffers from the possibility of chained blocking.
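The inheritance mechanism described above might be sketched as follows; the class names and lock interface are hypothetical, chosen only to show how the holder's priority is raised while it blocks a higher priority task and restored on release:

```python
# Illustrative sketch (invented names) of the Priority Inheritance Protocol:
# while a high priority task is blocked on a resource, the holder runs at the
# blocked task's priority, so a medium priority task cannot preempt it; on
# release the holder drops back to its base priority.

class Task:
    def __init__(self, name, priority):
        self.name = name
        self.base = self.priority = priority

class InheritanceLock:
    def __init__(self):
        self.holder = None

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
            return True
        # Caller is blocked: the holder inherits the higher priority.
        if task.priority > self.holder.priority:
            self.holder.priority = task.priority
        return False

    def release(self):
        self.holder.priority = self.holder.base   # restore original priority
        self.holder = None

low, high = Task("low", 1), Task("high", 3)
lock = InheritanceLock()
lock.acquire(low)
lock.acquire(high)          # high blocks; low now runs at priority 3
print(low.priority)         # 3
lock.release()
print(low.priority)         # 1
```

While `low` runs at priority 3, a hypothetical medium task at priority 2 cannot preempt it, which is exactly the inversion scenario the protocol exists to bound.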
Chained blocking occurs when two lower priority tasks each hold a resource that a higher priority task needs; the higher priority task then has to wait for both lower priority tasks to release their resources before it can run. Priority Ceiling Emulation combats priority inversion by selectively inhibiting preemption. With this method, the priority of a low priority task is raised high enough to prevent it from being blocked by a medium priority task. To accomplish this, the highest priority of any task that will lock a resource is kept as an attribute of that resource. Whenever a task is granted access to the resource, its priority is temporarily raised to the ceiling priority associated with the resource. When the task has finished with the resource, it is returned to its original priority.
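A minimal sketch of this ceiling mechanism, with invented names: the resource carries the highest priority of any task that locks it, and acquiring the resource temporarily raises the holder to that ceiling.

```python
# Illustrative sketch of Priority Ceiling Emulation: the resource's ceiling is
# the maximum priority of any task that ever locks it, and the holder runs at
# that ceiling until release, so no task at or below the ceiling can preempt it
# while it holds the resource.

class Task:
    def __init__(self, name, priority):
        self.name, self.priority = name, priority

class CeilingResource:
    def __init__(self, ceiling):
        self.ceiling = ceiling   # max priority of any task that locks this
        self.holder = None

    def acquire(self, task):
        # Under ceiling emulation a potential contender can never be running
        # when the resource is held, so the resource is always free here.
        assert self.holder is None
        self.holder = task
        task.saved = task.priority
        task.priority = max(task.priority, self.ceiling)  # raise to ceiling

    def release(self):
        self.holder.priority = self.holder.saved          # restore priority
        self.holder = None

res = CeilingResource(ceiling=3)   # some priority-3 task also uses this resource
low = Task("low", 1)
res.acquire(low)
print(low.priority)   # 3: a medium priority task can no longer preempt "low"
res.release()
print(low.priority)   # 1
```

Because the priority boost happens at acquisition time, before any contention arises, this scheme trades some unnecessary preemption inhibition for the deadlock-freedom and single-blocking guarantees stated next.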
Under this protocol, deadlocks cannot occur and a task can be blocked at most once by a lower priority task. The Priority Ceiling Protocol is a combination of the two



        88                                LOVELY PROFESSIONAL UNIVERSITY