
Design Issues for Paging

Last Updated : 13 Mar, 2023

Paging is an important concept in memory management. In paging, the operating system divides the logical memory of each incoming program into fixed-size blocks called pages, and divides physical (main) memory into blocks of the same size called frames (or page frames); the backing store on disk is divided into blocks of the same size as well. Any page can be placed in any available frame in main memory. The memory manager keeps track of which pages of which programs are in memory, and the page table records the mapping from virtual addresses to physical addresses. Paging has made multiprogramming very effective.
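As a small illustration of how the page table connects virtual and physical addresses, the following C sketch splits a virtual address into a page number and an offset and looks the page number up in a tiny page table. The page size, table contents, and addresses are invented for the example.

/*
 * A minimal sketch of the translation a page table performs: a virtual
 * address is split into a page number and an offset, the page number is
 * looked up in the page table, and the resulting frame number is combined
 * with the offset. The page size, table contents, and addresses below are
 * invented for the example.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096                 /* 4 KB pages                        */
#define NUM_PAGES 16                   /* a tiny address space              */

/* page_table[p] holds the frame number of virtual page p, or -1 if the
 * page is not currently in memory (touching it would cause a page fault). */
static int page_table[NUM_PAGES] = {
    3, 7, -1, 0, 5, -1, 2, 9, -1, -1, 1, 4, -1, 6, 8, -1
};

int translate(uint32_t vaddr, uint32_t *paddr)
{
    uint32_t page   = vaddr / PAGE_SIZE;            /* virtual page number  */
    uint32_t offset = vaddr % PAGE_SIZE;            /* offset in the page   */

    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                                  /* page fault           */

    *paddr = (uint32_t)page_table[page] * PAGE_SIZE + offset;
    return 0;
}

int main(void)
{
    uint32_t vaddr = 0x1A2C;           /* virtual page 1, offset 0xA2C      */
    uint32_t paddr;

    if (translate(vaddr, &paddr) == 0)
        printf("virtual 0x%X -> physical 0x%X\n",
               (unsigned)vaddr, (unsigned)paddr);
    else
        printf("page fault at 0x%X\n", (unsigned)vaddr);
    return 0;
}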

To get good performance out of a paging system, the following design issues have to be addressed:

  • Working set
  • Local and Global allocation
  • Page size
  • Shared pages
  • Shared libraries
  • Mapped Files
  • Cleaning Policy
  • Virtual memory interface

Working Set :

The set of pages that a process is currently using is called its working set. If the entire working set of a process is in memory, the process runs quickly and causes few page faults. Program execution follows the principle of locality of reference: during any phase of execution, a process references only a relatively small fraction of its pages. If the memory available to a process is too small to hold its working set, the process causes page faults continuously; a program that faults every few instructions is said to be thrashing. The working-set model is designed to reduce the page fault rate: the operating system keeps track of each process's working set and uses that information to make sure the working set is in memory before the process runs.
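The C sketch below illustrates the working-set model: at each point in a made-up page reference string, it counts the distinct pages touched in the last few references, which is exactly the set the operating system tries to keep resident.

/*
 * A sketch of the working-set model: the working set at time t is the set
 * of distinct pages referenced in the last DELTA references. The reference
 * string and the window size are made up for illustration.
 */
#include <stdio.h>

#define DELTA    4                    /* working-set window (in references) */
#define MAX_PAGE 10

int main(void)
{
    int refs[] = {1, 2, 1, 3, 4, 4, 1, 5, 6, 5};
    int n = (int)(sizeof(refs) / sizeof(refs[0]));

    for (int t = 0; t < n; t++) {
        int seen[MAX_PAGE] = {0};
        int size = 0;

        /* count distinct pages among the last DELTA references up to time t */
        for (int k = t; k >= 0 && k > t - DELTA; k--) {
            if (!seen[refs[k]]) {
                seen[refs[k]] = 1;
                size++;
            }
        }
        printf("t=%d  ref=%d  working-set size=%d\n", t, refs[t], size);
    }
    return 0;
}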

Local and Global Allocation :

A local allocation algorithm assigns a fixed number of frames to every running process; when a page fault occurs, only the pages held in that process's own frames are considered for replacement. If the process's working set grows, its page fault rate rises, because its allocation stays fixed. A global allocation algorithm dynamically shares page frames among all runnable processes, so the number of frames assigned to each process varies over time; on a page fault, pages belonging to any process are candidates for replacement. Global allocation generally performs better when working-set sizes vary considerably over the lifetime of a process.
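The difference between the two policies comes down to which frames are eligible victims on a page fault. The C sketch below makes that concrete with a simplified FIFO victim selection over a hypothetical set of frames; the frame ownership, timestamps, and FIFO choice are assumptions made for the example.

/*
 * A simplified contrast between local and global replacement: on a page
 * fault, a local policy picks a victim only among the faulting process's
 * own frames, while a global policy may evict any frame in the system.
 */
#include <stdio.h>

#define NUM_FRAMES 8

struct frame {
    int owner;        /* process id holding the frame                     */
    int page;         /* virtual page currently loaded                    */
    int loaded_at;    /* load time, used for FIFO victim selection        */
};

static struct frame frames[NUM_FRAMES];

/* Pick the oldest frame; if 'local' is set, consider only frames owned by
 * the faulting process 'pid'. Returns the frame index, or -1 if none.     */
int pick_victim(int pid, int local)
{
    int victim = -1;
    for (int i = 0; i < NUM_FRAMES; i++) {
        if (local && frames[i].owner != pid)
            continue;                              /* local: skip others   */
        if (victim < 0 || frames[i].loaded_at < frames[victim].loaded_at)
            victim = i;
    }
    return victim;
}

int main(void)
{
    /* a hypothetical snapshot: process 1 owns frames 0-4, process 2 the rest */
    for (int i = 0; i < NUM_FRAMES; i++) {
        frames[i].owner     = (i < 5) ? 1 : 2;
        frames[i].page      = i;
        frames[i].loaded_at = NUM_FRAMES - i;      /* frame 7 is the oldest */
    }
    printf("local victim for process 1:  frame %d\n", pick_victim(1, 1));
    printf("global victim for process 1: frame %d\n", pick_victim(1, 0));
    return 0;
}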

Page Size :

Page size is an important parameter in the design of a paging system. Choosing an optimum page size means balancing several competing factors:

  • A large page size reduces the number of pages a process needs, and therefore keeps the page table small.
  • A large page size increases wasted space: on average, half of the last page allocated to a process is unused, and this internal fragmentation grows with the page size.
  • A small page size reduces the waste in the last page but makes the page table larger.
  • Balancing these costs gives the optimum page size p = √(2 × s × e), where s is the average process size in bytes and e is the size of a page table entry in bytes (see the sketch below).
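The formula comes from minimizing the combined overhead per process, roughly s×e/p for the page table plus p/2 for internal fragmentation in the last page. The C sketch below evaluates the formula and the overhead at a few candidate page sizes; the values of s and e are example figures, not measurements.

/*
 * A quick numeric check of the page-size trade-off. The overhead per
 * process is approximately s*e/p (page table) + p/2 (internal
 * fragmentation in the last page), which is minimized at p = sqrt(2*s*e).
 * Compile with: cc pagesize.c -lm
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double s = 1 << 20;    /* assumed average process size: 1 MB           */
    double e = 8;          /* assumed page table entry size: 8 bytes       */

    printf("optimum page size ~ %.0f bytes\n", sqrt(2.0 * s * e)); /* 4096 */

    /* overhead at a few candidate page sizes                              */
    for (int p = 1024; p <= 16384; p *= 2)
        printf("p = %5d  overhead = %.0f bytes\n", p, s * e / p + p / 2.0);
    return 0;
}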

Shared Pages :

Shared pages are used to improve the performance of the paging system. When several users run the same program at the same time, keeping a separate copy of its pages for each of them would duplicate the same contents in memory, so it is preferable to share the pages: a single copy of each read-only page (such as program text) is kept in memory and mapped into every process that needs it. Pages that a process writes are normally kept private to that process, so data written by one process does not disturb the others. The main advantage of shared pages is that only one copy of a shared page exists in memory, which reduces memory overhead and allows RAM to be used more efficiently.
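As a minimal illustration of page sharing, the C sketch below (assuming a Unix-like system with mmap and fork) creates one anonymous shared page, forks, and lets both processes read and write the same physical page, which exists only once in RAM.

/*
 * A minimal sketch of two processes sharing a page: the parent maps one
 * anonymous shared page and forks; parent and child then see the same
 * physical page.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* one page shared between parent and child */
    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                     /* child writes into the page    */
        strcpy(page, "written by the child");
        return 0;
    }

    wait(NULL);                            /* parent reads the same frame   */
    printf("parent reads: %s\n", page);

    munmap(page, 4096);
    return 0;
}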

Shared Libraries :

A shared library is a file whose contents are divided into page-sized blocks, and each block is loaded into memory in units of pages, on demand. A shared library provides functions that more than one program can use: when a program starts up, the library is mapped into its virtual address space, and if another program already has the library in memory, the same pages are simply mapped again instead of being loaded from disk a second time. An advantage of a shared library is that the user can replace its file with another version without touching the programs that use it.

With shared libraries, many programs gain access to the same set of functions through a single copy of the library. Another advantage is that if a function in the shared library is modified, the programs that use it still work as before without being recompiled, because they refer to the function in the library rather than carrying their own copy. An example is a graphics application that calls routines in a graphics library such as GDI: if the graphics library is upgraded, the application does not need to be recompiled, since it simply references the functions in the new version.
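A common way to see shared libraries at work is run-time loading. The C sketch below (assuming a Unix-like system with dlopen) maps the standard math library into the process and calls one of its functions; the library's code pages can be shared with every other process that uses it.

/*
 * A sketch of run-time loading of a shared library. The standard math
 * library and its cos() function are used as the example.
 * Compile with: cc dlopen_demo.c -ldl
 */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* map the shared library into this process's virtual address space    */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* look up one of the functions the library provides                   */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}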

Mapped Files :

A memory-mapped file is a file whose contents are mapped into a process's virtual address space, so the file can be read and written as if it were ordinary memory. A process maps a file onto part of its address space by making a system call; the operating system then uses the page table to bring the file's pages into memory on demand. Mapped files also support sharing a file among many processes: because a mapped page refers to the same physical page backed by the file, a write made through one process's shared mapping is visible through the others. Mapped files therefore use memory and disk space more efficiently than keeping private copies of a file in each process, and they are also used for communication between processes.
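A brief C sketch of a mapped file on a Unix-like system follows: the mmap call makes the file's contents appear in the process's address space, and the pages are brought in from disk only when they are touched. The file name is a placeholder.

/*
 * A sketch of memory-mapping a file and reading it through ordinary
 * memory accesses.
 */
#include <stdio.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("example.txt", O_RDONLY);        /* placeholder file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* map the whole file read-only into our virtual address space          */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(data, 1, st.st_size, stdout);           /* read it like memory   */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}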

Cleaning Policy :

The cleaning policy is implemented by a background process called the paging daemon, an important component of the paging system. The paging daemon keeps track of the number of free page frames and makes sure a supply of them is available before they are needed: if too few frames are free, it selects pages with the page replacement algorithm, writes back any that are dirty, and frees them, so that a clean frame is ready whenever a page fault occurs.
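A rough sketch of such a cleaning policy is shown below in C: a daemon routine wakes up, and if the number of free frames has fallen below a low-water mark it frees pages until a high-water mark is reached. The frame counts and the one-page "clean" step are placeholders, not a real replacement algorithm.

/*
 * A rough sketch of a paging-daemon cleaning policy driven by low- and
 * high-water marks on the number of free frames.
 */
#include <stdio.h>

#define LOW_WATER  16      /* wake the daemon below this many free frames   */
#define HIGH_WATER 32      /* stop cleaning once this many frames are free  */

static int free_frames = 12;           /* hypothetical current state        */

/* stand-in for the page replacement algorithm writing back one victim page */
static void clean_one_page(void)
{
    free_frames++;
}

void paging_daemon_tick(void)
{
    if (free_frames >= LOW_WATER)
        return;                        /* enough free memory, nothing to do */

    printf("daemon: only %d free frames, cleaning...\n", free_frames);
    while (free_frames < HIGH_WATER)
        clean_one_page();
    printf("daemon: done, %d free frames\n", free_frames);
}

int main(void)
{
    paging_daemon_tick();              /* would normally run periodically   */
    return 0;
}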

Virtual Memory Interface :

So far, paging has been treated as invisible to processes, but some systems give programmers control over the memory map through a virtual memory interface (VMI). Memory mapping is the translation of a virtual address within a process's address space to an actual memory location, and the interface exposes part of that mapping to user processes. When a process requests a mapping, the operating system records in its page tables where each page of the region resides in physical memory. The virtual memory interface lets a process give a name to a region of its virtual memory; the program can later use addresses in that region like any other memory, with the data paged in from a backing file, and other processes can map the same named region.

Giving processes access to the virtual memory interface makes it possible for multiple processes to share the same physical memory while each works in its own address space. The benefits include ease of use, the ability to run many processes on a computer at once, and sharing of memory between processes, which can be used for fast interprocess communication.
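One concrete form such an interface takes on Unix-like systems is named shared memory: a process gives a memory region a name with shm_open and maps it, and any other process can map the same region by that name. The sketch below is illustrative; the region name and size are invented.

/*
 * A sketch of named shared memory as a user-visible virtual memory
 * interface. Another process can shm_open("/demo_region") and mmap it to
 * see the same data.
 * Compile with: cc shm_demo.c -lrt   (-lrt may not be needed on newer systems)
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* create (or open) a named shared memory object                        */
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "hello from a named memory segment");
    printf("%s\n", region);

    munmap(region, 4096);
    close(fd);
    shm_unlink("/demo_region");        /* remove the name when done          */
    return 0;
}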

