Notes: The page table is used by the CPU dispatcher to define the hardware page table
when a process is to be allocated the CPU. Paging therefore increases the
context-switch time.
5.5.2 Hardware Support
Each operating system has its own methods for storing page tables. Most allocate a page table
for each process. A pointer to the page table is stored with the other register values (like the
instruction counter) in the process control block. When the dispatcher is told to start a process,
it must reload the user registers and define the correct hardware page-table values from the
stored user page table.

The hardware implementation of the page table can be done in several ways. In the simplest
case, the page table is implemented as a set of dedicated registers. These registers should be
built with very high-speed logic to make the paging-address translation efficient. Every access
to memory must go through the paging map, so efficiency is a major consideration. The CPU
dispatcher reloads these registers, just as it reloads the other registers. Instructions to load or
modify the page-table registers are, of course, privileged, so that only the operating system can
change the memory map. The DEC PDP-11 is an example of such an architecture. The address
consists of 16 bits, and the page size is 8 KB. The page table thus consists of eight entries that
are kept in fast registers.

The use of registers for the page table is satisfactory if the page table is reasonably small (for
example, 256 entries). Most contemporary computers, however, allow the page table to be very
large (for example, 1 million entries). For these machines, the use of fast registers to implement
the page table is not feasible. Rather, the page table is kept in main memory, and a page-table
base register (PTBR) points to the page table. Changing page tables requires changing only
this one register, substantially reducing context-switch time.
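To make the PDP-11 numbers above concrete, the following C sketch splits a 16-bit logical
address into a 3-bit page number and a 13-bit offset and looks the frame up in an 8-entry
table. The frame values are invented purely for illustration, and the array merely stands in
for the dedicated hardware registers.

#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch of PDP-11-style paging: 16-bit logical addresses
   and 8 KB pages give a 3-bit page number and a 13-bit offset.         */
#define PAGE_SHIFT 13                         /* 8 KB = 2^13 bytes       */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define NUM_PAGES  8                          /* 16 - 13 = 3 bits        */

/* Invented frame numbers; this array stands in for the eight fast
   page-table registers.                                                 */
static uint32_t page_table[NUM_PAGES] = {5, 2, 7, 0, 3, 6, 1, 4};

static uint32_t translate(uint16_t logical)
{
    uint32_t page   = logical >> PAGE_SHIFT;  /* high 3 bits             */
    uint32_t offset = logical & PAGE_MASK;    /* low 13 bits             */
    uint32_t frame  = page_table[page];       /* "register" lookup       */
    return (frame << PAGE_SHIFT) | offset;    /* physical address        */
}

int main(void)
{
    uint16_t addr = 0x3ABC;                   /* page 1, offset 0x1ABC   */
    printf("logical 0x%04X -> physical 0x%05X\n",
           (unsigned)addr, (unsigned)translate(addr));
    return 0;
}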
The problem with this approach is the time required to access a user memory location. If we
want to access location i, we must first index into the page table, using the value in the PTBR
offset by the page number for i. This task requires a memory access. It provides us with the
frame number, which is combined with the page offset to produce the actual address. We can
then access the desired place in memory. With this scheme, two memory accesses are needed
to access a byte (one for the page-table entry, one for the byte). Thus, memory access is slowed
by a factor of 2. This delay would be intolerable under most circumstances. We might as well
resort to swapping!
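The cost of this scheme can be made explicit with a small simulation. In the sketch below
(page size, table layout, and values are invented for the demonstration and do not model any
particular machine), the page table lives in simulated main memory at the address held in the
PTBR, so every logical reference is counted as two physical accesses: one for the page-table
entry and one for the byte itself.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SHIFT 8                          /* tiny 256-byte pages       */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)
#define MEM_SIZE   4096

static uint8_t  mem[MEM_SIZE];                /* simulated physical memory */
static uint32_t ptbr;                         /* page-table base register  */

static uint8_t read_byte(uint32_t logical, int *accesses)
{
    uint32_t page   = logical >> PAGE_SHIFT;
    uint32_t offset = logical & PAGE_MASK;

    /* Access 1: fetch the page-table entry at PTBR + page. */
    uint32_t frame;
    memcpy(&frame, &mem[ptbr + page * sizeof(uint32_t)], sizeof(frame));
    (*accesses)++;

    /* Access 2: fetch the byte itself from the mapped frame. */
    (*accesses)++;
    return mem[(frame << PAGE_SHIFT) | offset];
}

int main(void)
{
    ptbr = 0;                                 /* page table kept in frame 0     */
    uint32_t table[4] = {2, 3, 1, 0};         /* page i maps to frame table[i]  */
    memcpy(mem, table, sizeof(table));
    mem[(2u << PAGE_SHIFT) | 5] = 42;         /* plant data at page 0, offset 5 */

    int accesses = 0;
    uint8_t value = read_byte(5, &accesses);
    printf("value = %u, memory accesses = %d\n", (unsigned)value, accesses);
    return 0;
}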
The standard solution to this problem is to use a special, small, fast lookup hardware cache,
called a translation look-aside buffer (TLB). The TLB is associative, high-speed memory. Each
entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory
is presented with an item, it is compared with all keys simultaneously. If the item is found, the
corresponding value field is returned. The search is fast; the hardware, however, is expensive.
Typically, the number of entries in a TLB is small, often numbering between 64 and 1,024. The
TLB is used with page tables in the following way. The TLB contains only a few of the page-
table entries. When a logical address is generated by the CPU, its page number is presented to
the TLB. If the page number is found, its frame number is immediately available and is used
to access memory. The whole task may take less than 10 per cent longer than it would if an
unmapped memory reference were used.
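A rough software analogue of the hit path is sketched below. The TLB is modelled as a small
array of (page, frame) pairs; the loop merely imitates the parallel key comparison that real
associative hardware performs, and the entry count and names are illustrative only.

#include <stdint.h>
#include <stdbool.h>

#define TLB_SIZE 64                     /* illustrative entry count       */

struct tlb_entry {
    bool     valid;
    uint32_t page;                      /* key (tag)                      */
    uint32_t frame;                     /* value                          */
};

static struct tlb_entry tlb[TLB_SIZE];

/* Returns true on a TLB hit and stores the frame number in *frame;
   real hardware would compare all keys simultaneously.                   */
static bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_SIZE; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame = tlb[i].frame;      /* hit: frame available at once   */
            return true;
        }
    }
    return false;                       /* miss: consult the page table   */
}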
If the page number is not in the TLB (known as a TLB miss), a memory reference to the page
table must be made. When the frame number is obtained, we can use it to access memory. In
addition, we add the page number and frame number to the TLB, so that they will be found
quickly on the next reference. If the TLB is already full of entries, the operating system must
select one for replacement. Replacement policies range from least recently used (LRU) to random.
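Continuing the same illustrative structures, the sketch below handles a miss by reading the
frame number from a simplified in-memory page table, installing the (page, frame) pair in the
TLB, and, when the TLB is full, evicting a randomly chosen victim (random being one of the
replacement policies mentioned above).

#include <stdlib.h>

#define DEMO_PAGES 256
static uint32_t page_table_mem[DEMO_PAGES];   /* hypothetical page -> frame map */

static void tlb_insert(uint32_t page, uint32_t frame)
{
    for (int i = 0; i < TLB_SIZE; i++) {
        if (!tlb[i].valid) {                  /* use a free slot if one exists  */
            tlb[i] = (struct tlb_entry){true, page, frame};
            return;
        }
    }
    /* TLB full: replace a randomly chosen entry. */
    int victim = rand() % TLB_SIZE;
    tlb[victim] = (struct tlb_entry){true, page, frame};
}

static uint32_t translate_page(uint32_t page)
{
    uint32_t frame;
    if (!tlb_lookup(page, &frame)) {          /* TLB miss                       */
        frame = page_table_mem[page];         /* extra memory reference         */
        tlb_insert(page, frame);              /* cache it for the next access   */
    }
    return frame;
}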