synchronization is handled better by global semaphores based on a task's blocking tolerance (the amount of time it can be blocked without missing its deadline) rather than the task's priority. In this scenario, simple FIFO queues for the semaphores perform better than rate-monotonic queues, in direct opposition to what provides the best performance on a uniprocessor. These results underscore the vast differences between uniprocessor and multiprocessor real-time scheduling that need to be explored. Another topic that needs additional work is the integration of static priority-based scheduling algorithms with adaptive scheduling algorithms, both to handle aperiodic and soft-deadline tasks and to provide a smooth, graceful degradation of system performance under overload conditions while still providing the advantages of static priority-based scheduling under normal conditions.
3.6.7 Dynamic Planning-based Algorithms

Static priority-based scheduling algorithms were shown to have many advantages, but two of their disadvantages have received a significant amount of study: their low processor utilization and their poor handling of aperiodic and soft-deadline tasks. These deficiencies have prompted researchers to search for remedies, and that research has produced a class of scheduling algorithms known as dynamic planning-based algorithms. Dynamic planning-based algorithms attempt to improve the response and performance of a system for aperiodic and soft-deadline tasks while continuing to guarantee the performance of the hard-deadline periodic tasks.

The traditional way of handling aperiodic and soft-deadline tasks in a system containing periodic tasks with hard deadlines was to let the aperiodic or soft tasks run in the background, which meant that they were serviced only when the processor had nothing else to do. The result was unpredictable and usually rather poor response to these tasks. The other approach was to model each aperiodic task as a periodic task with a period equal to the minimum time between its arrivals and then schedule it with the same algorithm used for the real periodic tasks. This tended to be extremely wasteful of CPU cycles, because the minimum time between arrivals is usually much smaller than the average; for example, if arrivals are at least 10 ms apart but 100 ms apart on average, capacity is reserved for roughly ten times the work the task actually presents.

The general model for these types of algorithms is a system in which all periodic tasks have hard deadlines equal to the end of their periods, their periods are constant, and their worst-case execution times are constant. All aperiodic tasks (and soft-deadline tasks) are assumed to have no deadlines, and their arrival, or ready, times are unknown. Dynamic planning-based algorithms tend to be quite flexible in servicing aperiodic tasks while still maintaining the completion guarantees for hard-deadline tasks. Most of the algorithms also provide a form of guarantee for aperiodic tasks: they will not accept a task for execution if they cannot guarantee its on-time completion. Additionally, most of these algorithms can provide higher processor utilization than static priority-based algorithms while still including task completion guarantees.
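To illustrate this accept-or-reject guarantee, the following sketch in C admits a new task only if executing the deadline-ordered plan from the current time would still meet every deadline, including the newcomer's. The task structure, the names, and the simplifying assumption that all planned tasks are ready now are choices made for this example, not details taken from the text.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical planned-task record: remaining worst-case execution time
 * and absolute deadline, both in the same time unit. */
typedef struct {
    long remaining;   /* worst-case execution time still needed */
    long deadline;    /* absolute deadline */
} task_t;

/* Acceptance test sketch: 'plan' holds the already-guaranteed tasks,
 * sorted by increasing deadline.  The new task is admitted only if,
 * running the plan in deadline order starting at 'now', every task
 * (including the new one) would still finish by its deadline. */
bool can_guarantee(const task_t plan[], size_t n, task_t new_task, long now)
{
    long finish = now;
    size_t i = 0;
    bool new_done = false;

    while (i < n || !new_done) {
        const task_t *t;
        if (!new_done && (i == n || new_task.deadline <= plan[i].deadline)) {
            t = &new_task;        /* slot the new task in by its deadline */
            new_done = true;
        } else {
            t = &plan[i++];
        }
        finish += t->remaining;
        if (finish > t->deadline)
            return false;         /* some deadline would be missed: reject */
    }
    return true;                  /* all deadlines still met: accept */
}

A real system would also have to account for partially executed tasks and for tasks that become ready later; the sketch only shows the shape of the decision, namely rejecting whenever the guarantee cannot be given.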
Earliest Deadline First (EDF) scheduling was one of the first dynamic planning-based algorithms proposed, and it provides the basis for many of the algorithms currently being studied and proposed. The concept behind EDF is relatively simple: when it is time to select a task to execute, select the task that has the earliest deadline. One advantage of EDF is that if a task set is scheduled with the Earliest Deadline First algorithm, there is no idle processor time prior to system overload. This means that, unlike static priority-based algorithms, a system using EDF suffers overload only when the execution-time requirement of the task set exceeds 100% of the available time, compared with approximately 69% for rate-monotonic scheduling.
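For reference, the roughly 69% figure is the well-known Liu and Layland least upper bound on schedulable utilization for rate-monotonic scheduling of n independent periodic tasks:

    U(n) = n(2^{1/n} - 1),        lim_{n→∞} U(n) = ln 2 ≈ 0.693

For two tasks the bound is 2(√2 − 1) ≈ 0.828, and it falls toward ln 2 ≈ 0.693 as the number of tasks grows; any task set whose total utilization stays below the bound is guaranteed schedulable by the rate-monotonic algorithm.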
Of course, in reality there will be some overhead associated with scheduling and context switches, but when using EDF scheduling, processor utilization can approach 100%.
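As a concrete illustration of the EDF selection rule, here is a minimal sketch in C that picks the ready task with the earliest absolute deadline; the ready-queue layout and names are assumptions made for the example, not part of any particular system.

#include <stddef.h>

/* Hypothetical ready-queue entry: an identifier and an absolute deadline. */
typedef struct {
    int  id;
    long deadline;    /* absolute deadline */
} ready_task_t;

/* EDF selection: return the index of the ready task with the earliest
 * deadline, or -1 if the ready queue is empty. */
int edf_select(const ready_task_t ready[], size_t n)
{
    if (n == 0)
        return -1;               /* nothing to run */

    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        if (ready[i].deadline < ready[best].deadline)
            best = i;            /* earlier deadline wins */
    }
    return (int)best;
}

A production scheduler would normally keep the ready tasks in a deadline-ordered heap or list so that selection is constant-time or logarithmic rather than a linear scan, but the policy itself is exactly this comparison of deadlines.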
The Predictive Deadline (PD) algorithm extends the EDF algorithm to predict timing faults and to reject less critical tasks during overload conditions. The PD algorithm assumes cyclic task behavior, where each cycle is defined by the task's ready time and its deadline. Tasks are required to estimate their execution time, which is used by the algorithm to predict overloads. Each task is also assigned a priority, which is used to shed less important tasks during overload conditions. Tasks are then scheduled by earliest deadline. As each task is inserted into the ready queue, the sum of the execution times for all tasks preceding


