
Paging in Operating System

Last Updated : 22 May, 2025

Paging is a memory management technique in which a program is broken into small fixed-size blocks called pages, and these pages are moved from secondary storage (such as a hard drive) into main memory (RAM) as needed.

To keep track of where each page is stored in memory, the operating system uses a page table. This table records the mapping between logical page numbers (used by the program) and physical frame numbers (actual locations in RAM). The memory management unit (MMU) uses the page table to convert logical addresses into physical addresses, so the program can access the correct data in memory.
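For example, if the page table maps logical page 2 of a process to physical frame 5, a reference to page 2, offset 100 is redirected by the MMU to frame 5, offset 100 in RAM (the page and frame numbers here are purely illustrative).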

Paging in Memory Management

Paging is a memory management technique that addresses common challenges in allocating and managing memory efficiently. The points below explain why paging is needed as a memory management technique:

  • Memory isn’t always available in a single block: Programs often need more memory than is available in a single contiguous block. Paging breaks memory into smaller, fixed-size pieces, making it easier to allocate scattered free spaces.
  • Process size can grow or shrink: Programs don’t need to occupy contiguous memory, so they can grow dynamically without having to be moved.

Terminologies Associated with Memory Management

  • Logical Address Space or Virtual Address Space: The Logical Address Space, also known as the Virtual Address Space, refers to the set of all possible logical addresses that a process can generate during its execution. It is a conceptual range of memory addresses used by a program and is independent of the actual physical memory (RAM).
  • Physical Address Space: The Physical Address Space refers to the total range of addresses available in a computer's physical memory (RAM). It represents the actual memory locations that can be accessed by the system hardware to store or retrieve data.

Important Features of Paging

  • Logical to physical address mapping: Paging divides a process's logical address space into fixed-size pages. Each page maps to a frame in physical memory, enabling flexible memory management.
  • Fixed page and frame size: Pages and frames have the same fixed size. This simplifies memory management and improves system performance.
  • Page table entries: Each logical page is represented by a page table entry (PTE). A PTE stores the corresponding frame number and control bits (a simplified sketch of such an entry appears after this list).
  • Number of page table entries: The page table has one entry per logical page. Thus, its size equals the number of pages in the process's address space.
  • Page table stored in main memory: The page table is kept in main memory. This can add overhead when processes are swapped in or out.
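
The page table entry described in the list above can be sketched roughly in C. The exact layout and the set of control bits are architecture-specific, so the fields below are only representative, not a real hardware format:

```c
#include <stdint.h>

/* A simplified page table entry (PTE).
 * Real hardware packs these fields into a single word whose exact
 * layout depends on the architecture; this struct is only a sketch. */
typedef struct {
    uint32_t frame_number;   /* physical frame that holds this page    */
    uint8_t  valid;          /* 1 if the page is currently in memory   */
    uint8_t  dirty;          /* 1 if the page has been written to      */
    uint8_t  referenced;     /* 1 if the page was recently accessed    */
    uint8_t  writable;       /* protection bit: 0 = read-only          */
} pte_t;

/* One entry per logical page, as noted above. */
#define NUM_PAGES 8
static pte_t page_table[NUM_PAGES];
```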

Working of Paging

When a process requests memory, the operating system allocates one or more page frames to the process and maps the process's logical pages to the physical page frames. When a program runs, its pages are loaded into any available frames in the physical memory.

Each program has a page table, which the operating system uses to keep track of where each page is stored in physical memory. When a program accesses data, the system uses this table to convert the program's address into a physical memory address.

Steps Involved in Paging:

Step 1 - Divide Memory: Logical address space → pages, physical memory → frames.

Step 2 - Allocate Pages: Load pages into available frames.

Step 3 - Page Table: Map logical pages to physical frames.

Step 4 - Translate Address: Convert logical addresses to physical addresses.

Step 5 - Handle Page Faults: Load missing pages from disk.

Step 6 - Run Program: The CPU uses the page table during execution.
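
These steps can be sketched in C as below. This is a minimal, single-level page table model assuming 1 K-word pages; page_table and load_page_from_disk() are hypothetical names used only for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SIZE 1024u          /* words per page (assumed)             */
#define NUM_PAGES 8              /* pages in the logical address space   */

typedef struct {
    uint32_t frame_number;       /* frame that currently holds the page  */
    bool     valid;              /* is the page present in memory?       */
} pte_t;

static pte_t page_table[NUM_PAGES];           /* Step 3: the page table  */

/* Hypothetical stand-in for Step 5: bring the missing page in from disk
 * and return the frame it was loaded into. */
static uint32_t load_page_from_disk(uint32_t page)
{
    return page % 4;             /* pretend a free frame was chosen      */
}

/* Step 4: translate a logical address into a physical address. */
uint32_t translate(uint32_t logical_address)
{
    uint32_t page   = logical_address / PAGE_SIZE;    /* page number p   */
    uint32_t offset = logical_address % PAGE_SIZE;    /* page offset d   */

    if (!page_table[page].valid) {                    /* page fault      */
        page_table[page].frame_number = load_page_from_disk(page);
        page_table[page].valid        = true;
    }

    /* Physical address = frame number * frame size + offset. */
    return page_table[page].frame_number * PAGE_SIZE + offset;
}
```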

  • If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits
  • If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits

The mapping from virtual to physical addresses is done by the Memory Management Unit (MMU), which is a hardware device, and this mapping is known as the paging technique.

  • The Physical Address Space is conceptually divided into a number of fixed-size blocks, called frames.
  • The Logical Address Space is also split into fixed-size blocks, called pages.
  • Page Size = Frame Size

Example

  • Physical Address = 12 bits, then Physical Address Space = 4 K words
  • Logical Address = 13 bits, then Logical Address Space = 8 K words
  • Page size = frame size = 1 K words (assumption)

Number of Frames = Physical Address Space / Frame Size = 4 K / 1 K = 4 = 2^2
Number of Pages = Logical Address Space / Page Size = 8 K / 1 K = 8 = 2^3


The address generated by the CPU is divided into

  1. Page Number (p): The number of bits required to represent a page in the Logical Address Space; it identifies which page of the process is being referenced and is used as an index into the page table.
  2. Page Offset (d): The number of bits required to represent a particular word within a page, i.e. log2(page size); it gives the word's position inside that page.

A Physical Address is divided into two main parts:

  1. Frame Number (f): The number of bits required to represent a frame in the Physical Address Space; it identifies which frame of physical memory is being accessed.
  2. Frame Offset (d): The number of bits required to represent a particular word within a frame, i.e. log2(frame size); it gives the word's position inside that frame.

So, a physical address in this scheme may be represented as follows:

Physical Address = (Frame Number << Number of Bits in Frame Offset) + Frame Offset
where "<<" represents a bitwise left shift operation.


Hardware implementation of Paging

The page table can be implemented in hardware using a set of dedicated registers, but this is satisfactory only when the page table is small. If the page table contains a large number of entries, we can use a TLB (Translation Look-aside Buffer), a special, small, fast look-up hardware cache.

  • The TLB is associative, high-speed memory.
  • Each entry in TLB consists of two parts: a tag and a value.
  • When this memory is used, an item is compared with all tags simultaneously. If the item is found, the corresponding value is returned.
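
A rough software sketch of this associative lookup is shown below; a real TLB compares all tags in parallel in hardware, so the loop here is only for illustration:

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16

typedef struct {
    uint32_t tag;       /* page number                       */
    uint32_t value;     /* frame number for that page        */
    bool     valid;     /* does this entry hold a mapping?   */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Look up a page number in the TLB.
 * Returns true on a TLB hit and stores the frame number in *frame;
 * on a miss, the page table in main memory must be consulted instead. */
bool tlb_lookup(uint32_t page, uint32_t *frame)
{
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].tag == page) {  /* hardware checks all tags at once */
            *frame = tlb[i].value;
            return true;                           /* TLB hit  */
        }
    }
    return false;                                  /* TLB miss */
}
```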
Hit and Miss In Paging

Main memory access time = m
If the page table is kept in main memory,
Effective Access Time = m (to access the page table) + m (to access the required word in memory) = 2m
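
For example, if m = 100 ns, each memory reference effectively costs 2 × 100 = 200 ns without a TLB. With a TLB of access time c and hit ratio h, the effective access time is commonly estimated as EAT = h × (c + m) + (1 − h) × (c + 2m); with the assumed values c = 20 ns, m = 100 ns and h = 0.8, this gives EAT = 0.8 × 120 + 0.2 × 220 = 140 ns.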

Read more about - TLB hit and miss

Advantages of Paging

  • Eliminates External Fragmentation: Paging divides memory into fixed-size blocks (pages and frames), so processes can be loaded wherever there is free space in memory. This prevents wasted space due to fragmentation.
  • Efficient Memory Utilization: Since pages can be placed in non-contiguous memory locations, even small free spaces can be utilized, leading to better memory allocation.
  • Supports Virtual Memory: Paging enables the implementation of virtual memory, allowing processes to use more memory than physically available by swapping pages between RAM and secondary storage.
  • Ease of Swapping: Individual pages can be moved between physical memory and disk (swap space) without affecting the entire process, making swapping faster and more efficient.
  • Improved Security and Isolation: Each process works within its own set of pages, preventing one process from accessing another's memory space.

Disadvantages of Paging

  • Internal Fragmentation: If the size of a process is not a perfect multiple of the page size, the unused space in the last page results in internal fragmentation.
  • Increased Overhead: Maintaining the Page Table requires additional memory and processing. For large processes, the page table can grow significantly, consuming valuable memory resources.
  • Page Table Lookup Time: Accessing memory requires translating logical addresses to physical addresses using the page table. This additional step increases memory access time, although Translation Lookaside Buffers (TLBs) can help reduce the impact.
  • I/O Overhead During Page Faults: When a required page is not in physical memory (page fault), it needs to be fetched from secondary storage, causing delays and increased I/O operations.
  • Complexity in Implementation: Paging requires sophisticated hardware and software support, including the Memory Management Unit (MMU) and algorithms for page replacement, which add complexity to the system.

Read more about - Memory Management Unit (MMU)

Also read - Multilevel Paging in Operating System
Also read - Paged Segmentation and Segmented Paging

