blocks are split into smaller pieces and more processors are added to the problem. The algorithm then begins again at the point of assigning processes and their resources to a processor.
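Under assumed helper names, the outer loop of such an iterative assignment scheme might be pictured with the small C sketch below; it is only an illustration of the splitting-and-reassignment idea, not the exact algorithm described above, and the feasibility test is a deliberately trivial stand-in.

#include <stdbool.h>
#include <stdio.h>

static int processors = 1, blocks = 4;   /* assumed starting configuration */

static void assign_to_processors(void) { printf("assign %d block(s) to %d processor(s)\n", blocks, processors); }
static bool schedule_found(void)       { return blocks >= 8 && processors >= 2; }  /* toy feasibility test */
static bool split_blocks(void)         { if (blocks >= 16) return false; blocks *= 2; return true; }
static void add_processor(void)        { processors++; }

int main(void)
{
    for (;;) {
        assign_to_processors();          /* assign processes and their resources   */
        if (schedule_found())            /* attempt to construct a feasible        */
            break;                       /* pre-run-time schedule                  */
        if (!split_blocks())             /* otherwise split blocks into smaller    */
            add_processor();             /* pieces, or add a processor, and begin  */
    }                                    /* the assignment step again              */
    printf("feasible with %d processor(s), %d block(s)\n", processors, blocks);
    return 0;
}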
Future Research
More work needs to be done on incorporating error detection, recovery and fault tolerance in
pre-run-time scheduled systems. Although a technique for doing this was presented earlier
in this unit, more robust methods need to be developed. Many pre-run-time scheduling
techniques require a severe set of constraints in order to bound the amount of computation
necessary to produce a schedule. These constraints often limit the applicability of the technique
to real-world conditions. More work needs to be done to eliminate as many of these constraints as possible if pre-run-time scheduling is to be a viable solution for anything other than highly constrained systems.
3.6.4 On-line Scheduling Algorithms
On-line scheduling algorithms compute a schedule in real time as processes arrive. The on-line scheduler assumes no knowledge of the characteristics of processes that have not yet arrived. The major advantages of on-line scheduling are that process characteristics need not be known in advance and that on-line schedulers tend to be flexible and easily adaptable to environmental changes.
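To make the definition concrete, the following C sketch shows one possible on-line dispatcher in the earliest-deadline-first style: it chooses the next process to run by looking only at processes that have already arrived, and consults nothing about future arrivals. The process structure, the arrived[] array and the function names are assumptions made for the illustration, not part of any particular system.

#include <stddef.h>

struct process {
    int  id;
    long deadline;                      /* absolute deadline, e.g. in clock ticks */
};

/* Pick, from the processes that have arrived so far, the one with the
   earliest deadline.  The on-line scheduler has no information about
   processes that have not yet arrived.                                */
struct process *pick_next(struct process *arrived[], size_t n)
{
    struct process *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (best == NULL || arrived[i]->deadline < best->deadline)
            best = arrived[i];
    }
    return best;                        /* NULL means nothing is ready to run */
}

Each time a process arrives or completes, pick_next() is re-evaluated over the current ready queue; the resulting schedule is therefore produced entirely at run time.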
Many arguments have, however, been made against run-time scheduling. Static priority-driven schedulers are capable of generating only a limited subset of the possible schedules. This, and the basic assumption that the system has no knowledge of the characteristics of processes that have not yet arrived, severely restricts the potential of the system to meet timing and resource-sharing requirements. If the scheduler does not have such knowledge, it is impossible to guarantee that system timing constraints will be met. No matter how sophisticated the scheduler is, it is always possible that some newly arriving task will have characteristics that cause it, or some other process(es), to miss a deadline.
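A small numeric sketch, with assumed figures, makes this concrete. Suppose process A arrives at time 0 with a non-preemptable computation time of 4 and a deadline at time 10, and process B, unknown to the scheduler at time 0, arrives at time 2 with a computation time of 2 and a deadline at time 5. An on-line scheduler that starts A immediately cannot start B before time 4, so B completes at time 6 and misses its deadline of 5. Had the processor been left idle until time 2, B could have run from 2 to 4 and A from 4 to 8, and both deadlines would have been met; only a scheduler that knew in advance about B's arrival could make that choice.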
More generally, there are times when it is necessary to allow the processor to become idle if all timing constraints are to be met, such as when a high-priority task with a far-away deadline must idle while waiting for the arrival of a lower-priority task with a near-term deadline. Neither static nor dynamic algorithms can deal with this type of problem.

Because of the relatively small number of
possible schedules that can be produced by a run-time scheduler, CPU utilization is usually lower
than that provided by a pre-run-time scheduler. Reliance on run-time mechanisms for process
synchronization and mutual exclusion, such as rendezvous and monitors, creates timing issues
that are very difficult to predict. Additionally, the use of such mechanisms allows scheduling
decisions to be made at the process level, rather than relegating them to the system. This
introduces yet another source of uncertainty in predicting system performance. Other problems
include starvation, priority inversion, and deadlock. In pre-run-time scheduling, it is possible to
avoid all this overhead by defining precedence and exclusion relations between processes and
process segments to allow synchronization and mutual exclusion. Allowing interrupts to occur
at any random time from internal or external events further increases the unpredictability of a
system. It also requires a significant amount of resources for context switching. Pre-run-time
scheduling significantly reduces the amount of run-time resources needed for scheduling and
context switching. In pre-run-time scheduling, aperiodic interrupts are noted but not serviced until the corresponding periodic handler is scheduled to run (a short sketch of this pattern is given at the end of this section). This reduces time spent in context
switches and gives greater latitude in scheduling the entire system. Systems scheduled using on-
line algorithms are usually “proved” by testing and/or simulation. As has been observed, testing can only show the presence of errors; no amount of testing or simulation can prove a system error-free. Additionally, in on-line scheduling the amount of time and resources available for
scheduling is severely restricted and the scheduling problem is computationally hard. Although
these arguments are compelling, many of today’s real-time systems use on-line scheduling
simply because it does perform reasonably well under most circumstances and it is flexible.
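As noted earlier, a pre-run-time system typically only records an interrupt when it occurs and does the actual servicing from a handler that runs in its pre-scheduled slot. The following C fragment is a minimal sketch of that pattern; the names device_isr, poll_device_events and service_event are illustrative assumptions, not taken from any particular kernel.

#include <stdatomic.h>

void service_event(void);               /* hypothetical routine doing the real work */

static atomic_int pending_events = 0;   /* interrupts noted but not yet serviced    */

/* Interrupt service routine: only note that the event happened. */
void device_isr(void)
{
    atomic_fetch_add(&pending_events, 1);
}

/* Periodic handler, run only in its pre-run-time scheduled slot,
   services everything noted since its previous activation.       */
void poll_device_events(void)
{
    int n = atomic_exchange(&pending_events, 0);
    while (n-- > 0)
        service_event();
}

Because the real work is confined to the handler's scheduled slot, the unpredictability and context-switch cost of servicing interrupts at arbitrary points is avoided, which is the advantage claimed for pre-run-time scheduling above.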