Page 66 - DCAP403_Operating System
Unit 4: Process Management
Multitasking and multiprogramming, the two techniques that aim to use computing
resources optimally, were dealt with at length in the previous unit. In this unit you will
learn about yet another technique that has brought a remarkable improvement in the utilization of
resources - the thread.
A thread is a finer abstraction of a process.
Recall that a process is defined by the resources it uses and by the location at which it is executing
in memory. There are many instances, however, in which it would be useful for resources
to be shared and accessed concurrently. This concept is so useful that several modern operating
systems provide mechanisms to support it through a thread facility.
Thread Structure
A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization,
and consists of a program counter, a register set, and a stack. It shares with peer threads its code
section, data section, and operating-system resources such as open files and signals, collectively
known as a task.
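This division can be illustrated with a minimal sketch, using Python's threading module as a stand-in for the thread facility described here (the variable and function names are illustrative, not from the text): each thread's locals live on its own private stack, while module-level data plays the role of the shared data section visible to all peer threads.

```python
import threading

shared_data = []  # plays the role of the shared data section: visible to all peer threads

def worker(thread_id):
    local_value = thread_id * 10  # lives on this thread's private stack
    shared_data.append(local_value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared_data))  # every thread's contribution is visible: [0, 10, 20, 30]
```

Each `worker` computed its value in a stack-local variable, yet all results land in the one shared list, because every thread runs inside the same address space.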
A traditional or heavyweight process is equal to a task with one thread. A task does nothing if
no threads are in it, and a thread must be in exactly one task. The extensive sharing makes CPU
switching among peer threads and the creation of threads inexpensive, compared with context
switches among heavyweight processes. Although a thread context switch still requires a register
set switch, no memory-management-related work need be done. As in any parallel processing
environment, however, multithreading a process may introduce concurrency control problems that
require the use of critical sections or locks.
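The need for a lock can be shown with a short sketch (again using Python's threading module; the counter and iteration counts are illustrative). The read-modify-write on the shared counter is the critical section; the lock ensures the updates of peer threads do not interleave.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:  # critical section: the read-modify-write must not interleave
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 - without the lock, lost updates could leave it lower
```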
Also, some systems implement user-level threads in user-level libraries rather than via system
calls, so thread switching does not need to call the operating system or cause an interrupt
to the kernel. Switching between user-level threads can be done independently of the operating
system and, therefore, very quickly. Thus, blocking a thread and switching to another thread
is a reasonable solution to the problem of how a server can handle many requests efficiently.
User-level threads do have disadvantages, however. For instance, if the kernel is single-threaded,
then any user-level thread executing a system call will cause the entire task to wait until the
system call returns.
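The thread-per-request idea can be sketched as follows (a hypothetical `handle_request` stands in for real request handling, and `time.sleep` for a blocking I/O system call). The sketch assumes kernel-supported threads, where blocking one thread does not stall its peers; with the purely user-level threads just described, each blocking call would stall the entire task.

```python
import threading
import time

def handle_request(req_id, results):
    time.sleep(0.1)  # stands in for a blocking I/O system call
    results.append(req_id)

results = []
start = time.monotonic()
workers = [threading.Thread(target=handle_request, args=(i, results))
           for i in range(5)]
for w in workers:
    w.start()
for w in workers:
    w.join()
elapsed = time.monotonic() - start

# The five blocking requests overlap: total time is close to a single sleep,
# not the 0.5 s a purely sequential server would need.
print(len(results), round(elapsed, 1))
```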
You can grasp the functionality of threads by comparing multiple-thread control with
multiple-process control. With multiple processes, each process operates independently of the
others; each process has its own program counter, stack, registers, and address space. This type of
organization is useful when the jobs performed by the processes are unrelated. Multiple processes
can perform the same task as well. For instance, multiple processes can provide data to remote
machines in a network file system implementation.
However, it is more efficient to have one process containing multiple threads serve the same
purpose. In the multiple-process implementation, each process executes the same code but
has its own memory and file resources. One multi-threaded process uses fewer resources than
multiple redundant processes, including memory, open files, and CPU scheduling overhead. For
example, as Solaris evolves, network daemons are being rewritten as kernel threads to greatly
increase the performance of those network server functions.
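The resource sharing can be sketched in a few lines (the `serve` function and log file are hypothetical, and Python's threading module again stands in): every peer thread writes through one open file object, where redundant processes would each hold their own copy of such resources.

```python
import os
import tempfile
import threading

# One open file, shared by all peer threads - a single set of OS resources.
fd, path = tempfile.mkstemp()
log = os.fdopen(fd, "w")
lock = threading.Lock()

def serve(client_id):
    with lock:  # serialize writes to the shared file
        log.write(f"served client {client_id}\n")

threads = [threading.Thread(target=serve, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
log.close()

with open(path) as f:
    lines = f.readlines()
os.remove(path)

print(len(lines))  # 3 - one shared file handle served every request
```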
Threads operate, in many respects, in the same manner as processes. Threads can be in one of
several states: ready, blocked, running, or terminated.
A thread within a process executes sequentially, and each thread has its own stack and program
counter. Threads can create child threads, and can block waiting for system calls to complete; if
one thread is blocked, another can run. However, unlike processes, threads are not independent
of one another. Because all threads can access every address in the task, a thread can read or write