Page 95 - DCAP403_Operating System
Notes: The design of the short-term scheduler is one of the critical areas in overall system design, because of its immediate effects on system performance from the user's point of view. It is usually one of the trickiest parts as well: since most processor architectures have their own task-switching facilities, the implementation of the process-switch mechanism is generally machine-dependent.
The result is that the actual process switch software is usually written in the assembly language
of a particular machine, whether the operating system is meant to be portable across different
machines or not.
5.6 Multiple Processor Scheduling
The development of appropriate scheduling schemes for multiprocessor systems is problematic. Not only are uniprocessor algorithms not directly applicable, but some apparently correct methods are counterintuitive.
The scheduling problem for multiprocessor systems can be generally stated as “How can you
execute a set of tasks T on a set of processors P subject to some set of optimizing criteria C?”
The most common goal of scheduling is to minimize the expected runtime of a task set. Examples of other scheduling criteria include minimizing cost, minimizing communication delay, giving priority to certain users' processes, or meeting the needs of specialized hardware devices.
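The most common criterion, minimizing the expected runtime of the task set, can be sketched with the well-known longest-processing-time-first heuristic (the task set and function names below are illustrative, not from any particular system):

```python
import heapq

def lpt_schedule(task_times, num_procs):
    """Longest-processing-time-first: a classic heuristic for minimizing
    the overall runtime (makespan) of a task set T on processors P."""
    # Min-heap of (current load, processor id); ties break on the id.
    procs = [(0, p) for p in range(num_procs)]
    heapq.heapify(procs)
    assignment = {}
    # Place the longest tasks first, always on the least-loaded processor.
    for task, t in sorted(task_times.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(procs)
        assignment[task] = p
        heapq.heappush(procs, (load + t, p))
    makespan = max(load for load, _ in procs)
    return assignment, makespan

assignment, makespan = lpt_schedule({"t1": 5, "t2": 4, "t3": 3, "t4": 2}, 2)
print(makespan)  # 7: one processor runs 5+2, the other 4+3
```

Finding a truly optimal assignment is NP-hard in general, which is why heuristics of this kind are used in practice.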
The scheduling policy for a multiprocessor system usually embodies a mixture of several of these criteria.

Issues in Multiprocessor Scheduling

Solutions to the scheduling problem come in two general forms: algorithms and scheduling systems.
Algorithms concentrate on policy while scheduling systems provide mechanism to implement
the algorithms. Some scheduling systems run outside the operating system kernel, while others
are part of a tightly-integrated distributed or parallel operating system.
Distributed systems communicate via message-passing, while parallel systems use shared
memory. A task is the unit of computation in computing systems, and a job consists of one or
more cooperating tasks. Global scheduling involves assigning a task to a particular processor
within the system.
Local scheduling determines which of the set of available tasks at a processor runs next on that
processor. Task migration can change the global mapping by moving a task to a new processor.
If you have several jobs, each composed of many tasks, you can either assign several processors
to a single job, or you can assign several tasks to a single processor. The former is known as space
sharing, and the latter is called time sharing.
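A toy illustration of the two modes (the even processor split and the fixed turn-taking order are simplifying assumptions):

```python
def space_share(jobs, procs):
    """Space sharing: each job gets its own dedicated subset of processors."""
    k = len(procs) // len(jobs)  # even split, purely for illustration
    return {job: procs[i * k:(i + 1) * k] for i, job in enumerate(jobs)}

def time_share(tasks, proc):
    """Time sharing: several tasks take turns on one processor, shown here
    simply as the order in which they share it."""
    return [(proc, t) for t in tasks]

print(space_share(["j1", "j2"], [0, 1, 2, 3]))  # {'j1': [0, 1], 'j2': [2, 3]}
print(time_share(["t1", "t2", "t3"], 0))        # [(0, 't1'), (0, 't2'), (0, 't3')]
```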
Global scheduling is often used to perform load sharing. Load sharing allows busy processors
to off-load some of their work to less busy processors. Load balancing is a special case of load
sharing, in which the goal is to keep the load even across all processors. Sender-initiated load
sharing occurs when busy processors try to find idle processors to off-load some work. Receiver-
initiated load sharing occurs when idle processors seek busy processors. It is now accepted
wisdom that full load balancing is generally not worth doing, as the small gain in execution time
over simpler load sharing is more than offset by the effort expended in maintaining the balanced
load.
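Sender-initiated load sharing can be sketched as follows (the load threshold and the deterministic scan for an under-loaded peer are simplifying assumptions; real systems typically probe a random subset of peers instead):

```python
def sender_initiated(loads, threshold=2):
    """Sender-initiated load sharing: each processor whose load exceeds
    the (illustrative) threshold off-loads one task at a time to the
    first peer it finds below the threshold. Receiver-initiated sharing
    would invert this: idle processors would scan for busy ones."""
    loads = list(loads)
    for sender in range(len(loads)):
        while loads[sender] > threshold:
            receiver = next((p for p, load in enumerate(loads)
                             if p != sender and load < threshold), None)
            if receiver is None:
                break  # no under-loaded peer found; keep the work local
            loads[sender] -= 1
            loads[receiver] += 1
    return loads

print(sender_initiated([5, 0, 1]))  # [2, 2, 2]
```

Note that this sketch happens to fully balance the example load; a real load-sharing policy would stop as soon as no processor is over the threshold, precisely to avoid the cost of chasing a perfect balance.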
As the system runs, new tasks arrive while old tasks complete execution (or are served). If the
arrival rate is greater than the service rate, then the system is said to be unstable. If tasks are serviced at least as fast as they arrive, the system is said to be stable. If the arrival rate is just
slightly less than the service rate for a system, an unstable scheduling policy can push the system
into instability. A stable policy will never make a stable system unstable.
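The stability condition reduces to comparing the two rates (a minimal sketch; the example rates are arbitrary):

```python
def is_stable(arrival_rate, service_rate):
    """The system keeps up only if tasks are serviced at least as fast
    as they arrive, i.e. arrival rate <= service rate."""
    return arrival_rate <= service_rate

print(is_stable(9.0, 10.0))   # True: service keeps up with arrivals
print(is_stable(10.5, 10.0))  # False: the backlog grows without bound
```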
88 LOVELY PROFESSIONAL UNIVERSITY