5.12 Thrashing
5.12.1 Cause of Thrashing
5.12.2 Working-set Model
5.12.3 Page-fault Frequency
5.13 Summary
5.14 Keywords
5.15 Review Questions
5.16 Further Readings
Objectives
After studying this unit, you will be able to:
• Explain memory management
• Understand swapping
• Discuss contiguous-memory allocation
• Explain paging
• Discuss thrashing
•	Provide an overview of page replacement
• Understand LRU page replacement
Introduction
In this unit, we discuss various ways to manage memory. The memory management algorithms
vary from a primitive bare-machine approach to paging and segmentation strategies. Each
approach has its own advantages and disadvantages. Selection of a memory-management
method for a specific system depends on many factors, especially on the hardware design of the
system. As we shall see, many algorithms require hardware support, although recent designs
have closely integrated the hardware and operating system.
Memory is central to the operation of a modern computer system. Memory consists of a large
array of words or bytes, each with its own address. The CPU fetches instructions from memory
according to the value of the program counter. These instructions may cause additional loading
from and storing to specific memory addresses. A typical instruction-execution cycle, for example,
first fetches an instruction from memory. The instruction is then decoded and may cause operands
to be fetched from memory. After the instruction has been executed on the operands, results
may be stored back in memory. The memory unit sees only a stream of memory addresses; it
does not know how they are generated (by the instruction counter, indexing, indirection, literal
addresses, and so on) or what they are for (instructions or data). Accordingly, we can ignore
how a memory address is generated by a program. We are interested only in the sequence of
memory addresses generated by the running program.
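To make this concrete, the following is a minimal sketch (not taken from the text) of a toy fetch-execute loop. The instruction encoding and the fetch() routine are invented purely for illustration; the point is that the memory unit simply returns the word at each requested address and cannot tell an instruction fetch from an operand fetch.

/* Toy fetch-execute cycle: the memory unit sees only a stream of addresses. */
#include <stdio.h>

#define MEM_SIZE 16

/* Invented encoding: opcode 1 = load the word at the given address, 0 = halt. */
static int memory[MEM_SIZE] = {
    /* addr 0 */ 1, 10,   /* load from address 10 */
    /* addr 2 */ 1, 11,   /* load from address 11 */
    /* addr 4 */ 0, 0,    /* halt */
    0, 0, 0, 0,
    /* addr 10 */ 42, 7   /* data words */
};

static int fetch(int addr) {
    /* Memory just returns the word at the requested address. */
    printf("memory access: address %d\n", addr);
    return memory[addr];
}

int main(void) {
    int pc = 0;                       /* program counter */
    for (;;) {
        int opcode  = fetch(pc);      /* instruction fetch */
        int operand = fetch(pc + 1);
        pc += 2;
        if (opcode == 0)              /* halt */
            break;
        int value = fetch(operand);   /* operand fetch: yet another address */
        printf("loaded value %d\n", value);
    }
    return 0;
}

Running the sketch prints one line per memory access, interleaving instruction fetches and data fetches in a single undifferentiated address stream, which is exactly the view the memory unit has.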
5.1 Address Binding
Usually, a program resides on a disk as a binary executable file. The program must be brought
into memory and placed within a process for it to be executed. Depending on the memory
management in use, the process may be moved between disk and memory during its execution.
The collection of processes on the disk that are waiting to be brought into memory for execution
forms the input queue.
The normal procedure is to select one of the processes in the input queue and to load that
process into memory. As the process is executed, it accesses instructions and data from memory.
Eventually, the process terminates, and its memory space is declared available.
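The sketch below illustrates this procedure under simplifying assumptions of our own: the input queue is a plain array, and names such as load_into_memory() and run_to_completion() are invented for the example rather than real system calls.

/* Illustrative only: processes waiting on disk form an input queue; one is
 * selected, loaded into memory, run to completion, and its memory freed. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct process {
    const char *name;   /* binary executable on disk */
    size_t      size;   /* memory its image needs, in bytes */
    char       *image;  /* NULL while the process is still on disk */
};

/* The input queue: processes on disk waiting to be brought into memory. */
static struct process input_queue[] = {
    { "editor",   4096, NULL },
    { "compiler", 8192, NULL },
    { "shell",    2048, NULL },
};

static void load_into_memory(struct process *p) {
    p->image = malloc(p->size);       /* allocate a memory space */
    memset(p->image, 0, p->size);     /* stand-in for reading the binary from disk */
    printf("loaded %s (%zu bytes)\n", p->name, p->size);
}

static void run_to_completion(struct process *p) {
    printf("running %s ... done\n", p->name);
}

int main(void) {
    size_t n = sizeof input_queue / sizeof input_queue[0];
    for (size_t i = 0; i < n; i++) {  /* select the next process in the queue */
        load_into_memory(&input_queue[i]);
        run_to_completion(&input_queue[i]);
        free(input_queue[i].image);   /* memory space declared available */
        input_queue[i].image = NULL;
    }
    return 0;
}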