
Unit 14: Case Study of Linux Operating System



Figure 14.17: Operation of the Buddy Algorithm
Now suppose that a second request comes in for 8 pages. This can be satisfied directly, because the splits made for the first request left a free chunk of exactly that size. The kernel itself is hardwired; no part of it is ever paged out. The rest of memory is available for user pages, the paging cache, and other purposes. The page cache holds pages containing file blocks that have recently been read, or have been read in advance in expectation of being used in the near future, as well as pages of file blocks that need to be written to disk, such as those created by user-mode processes that have been swapped out to disk. It is dynamic in size and competes for the same pool of pages as the user processes. The paging cache is not really a separate cache, but simply the set of user pages that are no longer needed and are waiting to be paged out. If a page in the paging cache is reused before it is evicted from memory, it can be reclaimed quickly.
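To make the splitting step concrete, the following user-space C sketch models a 64-page pool with one free-list counter per power-of-two order. It is not kernel code: the pool size and the names order_for_pages and alloc_order are illustrative assumptions. It shows a first 8-page request splitting the 64-page block and a second 8-page request being satisfied directly from the free chunk the splits left behind.

#include <stdio.h>

#define MAX_ORDER 6                 /* assume a 64-page pool: 2^6 pages */

/* One free-block counter per order; a real allocator keeps linked lists
   of free blocks, but counters are enough to show the splitting logic. */
static int free_blocks[MAX_ORDER + 1];

/* Round a request in pages up to the smallest power-of-two order. */
static int order_for_pages(int pages)
{
    int order = 0;
    while ((1 << order) < pages)
        order++;
    return order;
}

/* Allocate 2^order pages: find the smallest free block that fits and
   halve it repeatedly, returning the unused halves (buddies) to the
   lower-order free lists until a block of the requested order remains. */
static int alloc_order(int order)
{
    int o = order;
    while (o <= MAX_ORDER && free_blocks[o] == 0)
        o++;                        /* find a big-enough free block */
    if (o > MAX_ORDER)
        return -1;                  /* out of memory */
    free_blocks[o]--;
    while (o > order) {             /* split, keeping one buddy free */
        o--;
        free_blocks[o]++;
    }
    return order;
}

int main(void)
{
    free_blocks[MAX_ORDER] = 1;     /* initially one free 64-page block */

    int first  = alloc_order(order_for_pages(8));  /* splits 64 -> 32 -> 16 -> 8 */
    int second = alloc_order(order_for_pages(8));  /* satisfied directly */
    printf("first: order %d, second: order %d\n", first, second);
    printf("free 8-page blocks left: %d\n", free_blocks[3]);
    return 0;
}

A real buddy allocator additionally records which blocks are in use so that, on free, a block can be coalesced with its buddy back into a larger block.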
The buddy algorithm leads to considerable internal fragmentation, because if you want a 65-page chunk, you have to ask for and get a 128-page chunk. To alleviate this problem, Linux has a second memory allocator, the slab allocator, which takes chunks obtained with the buddy algorithm and then carves slabs (smaller units) from them, managing those smaller units separately. Since the kernel frequently creates and destroys objects of certain types (e.g., task_struct), it relies on so-called object caches. These caches consist of pointers to one or more slabs, each of which can store a number of objects of the same type. Each of the slabs may be full, partially full, or empty.
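As a rough illustration of the object-cache interface, the sketch below shows how a kernel module might create and use a slab cache. The calls kmem_cache_create, kmem_cache_alloc, kmem_cache_free, and kmem_cache_destroy are real kernel services, while struct my_obj, the cache name, and the module boilerplate are hypothetical.

#include <linux/module.h>
#include <linux/slab.h>

/* Hypothetical object type standing in for something like task_struct. */
struct my_obj {
    int id;
    char name[32];
};

static struct kmem_cache *my_cache;    /* the object cache */
static struct my_obj *obj;

static int __init slab_demo_init(void)
{
    /* Create an object cache: slabs holding my_obj-sized slots are
       carved out of pages obtained from the buddy allocator. */
    my_cache = kmem_cache_create("my_obj_cache", sizeof(struct my_obj),
                                 0, SLAB_HWCACHE_ALIGN, NULL);
    if (!my_cache)
        return -ENOMEM;

    /* Allocation first tries a partially full slab, then an empty one,
       and only then asks the buddy allocator for a fresh slab. */
    obj = kmem_cache_alloc(my_cache, GFP_KERNEL);
    if (!obj) {
        kmem_cache_destroy(my_cache);
        return -ENOMEM;
    }
    obj->id = 1;
    return 0;
}

static void __exit slab_demo_exit(void)
{
    kmem_cache_free(my_cache, obj);    /* the slot returns to its slab */
    kmem_cache_destroy(my_cache);      /* slabs go back to the buddy allocator */
}

module_init(slab_demo_init);
module_exit(slab_demo_exit);
MODULE_LICENSE("GPL");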

For instance, when the kernel needs to allocate a new process descriptor, that is, a new task_struct, it looks in the object cache for task structures and first tries to find a partially full slab in which to allocate the new task_struct object. If no such slab is available, it looks through the list of empty slabs. Finally, if necessary, it allocates a new slab, places the new task structure there, and links this slab with the task-structure object cache. The kmalloc kernel service, which allocates physically contiguous memory regions in the kernel address space, is in fact built on top of the slab and object-cache interface described here. A third memory allocator, vmalloc, is also available and is used when the requested memory need only be contiguous in virtual space, not in physical memory. In practice, this is true for most of the requested memory. One exception is devices, which live on the other side of the memory bus and the memory management unit, and therefore do not understand virtual addresses. However, the use of vmalloc results in some performance degradation, so it is used primarily for allocating large amounts of contiguous virtual address space, such as for dynamically inserting kernel modules. All these memory allocators are derived from those in System V.
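The contrast between the two services can be sketched as follows. kmalloc/kfree and vmalloc/vfree are the actual kernel calls, but the buffer names, the sizes, and the surrounding module are illustrative assumptions.

#include <linux/module.h>
#include <linux/slab.h>      /* kmalloc/kfree: physically contiguous memory */
#include <linux/vmalloc.h>   /* vmalloc/vfree: only virtually contiguous    */

static void *dma_buf;        /* e.g., a buffer a device will access directly */
static void *big_table;      /* a large table only the CPU will touch        */

static int __init alloc_demo_init(void)
{
    /* kmalloc: physically contiguous, suitable when a device (which does
       not understand virtual addresses) must see the buffer as one run. */
    dma_buf = kmalloc(4096, GFP_KERNEL);

    /* vmalloc: only virtually contiguous; the pages may be scattered in
       physical memory, so it suits large CPU-only allocations such as
       the memory needed when a kernel module is loaded. */
    big_table = vmalloc(4 * 1024 * 1024);

    if (!dma_buf || !big_table) {
        kfree(dma_buf);                /* both calls tolerate NULL */
        vfree(big_table);
        return -ENOMEM;
    }
    return 0;
}

static void __exit alloc_demo_exit(void)
{
    kfree(dma_buf);
    vfree(big_table);
}

module_init(alloc_demo_init);
module_exit(alloc_demo_exit);
MODULE_LICENSE("GPL");

The trade-off is that kmalloc must find physically contiguous pages (ultimately via the slab layer and the buddy allocator), while vmalloc stitches scattered pages together through page-table entries, which is where its extra overhead comes from.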





