Translation Lookaside Buffer (TLB) in Paging

Last Updated : 11 Jul, 2025

In an operating system that uses paging for memory management, a page table is created for each process, and each of its entries is a Page Table Entry (PTE). A PTE contains the frame number (the location in main memory where the page resides) along with some other useful bits (e.g., valid/invalid bit, dirty bit, protection bits). In short, the PTE tells where in main memory the actual page is residing.
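As a rough illustration (a minimal sketch, not the layout used by any particular OS or hardware; the field names and choice of bits are assumptions), a PTE can be modeled as a small record holding the frame number plus the control bits described above:

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    """Illustrative PTE; field names and bit choices are assumptions."""
    frame_number: int        # physical frame that holds the page
    valid: bool = False      # is the page currently in main memory?
    dirty: bool = False      # has the page been modified since it was loaded?
    read_only: bool = False  # simple protection bit

# A per-process page table is then just a mapping from page number to PTE.
page_table = {
    0: PageTableEntry(frame_number=5, valid=True),
    1: PageTableEntry(frame_number=9, valid=True, dirty=True),
}
```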

Now the question is where to place the page table so that the overall access (reference) time is as small as possible. The underlying problem is to quickly reach main memory contents based on the address generated by the CPU (i.e., the logical/virtual address). One early idea was to store the page table in registers, since registers are high-speed memory and their access time is very low.

The idea is to place the page table entries in registers; for each request generated by the CPU (a virtual address), the page number is matched against these entries, which then tell where in main memory the corresponding page resides. Everything seems fine so far, but the problem is that the register set is small (in practice it can accommodate at most about 0.5K to 1K page table entries), while a process may be large and its page table correspondingly large (say 1M entries), so the registers cannot hold all the PTEs of the page table. This is therefore not a practical approach.

To overcome this size issue, the entire page table was kept in main memory. But the problem now is that two main memory references are required for every access:

  1. One to read the page table and find the frame number. 
  2. One to go to the address formed from that frame number and fetch the data. 

To overcome this problem, a high-speed cache for page table entries, called the Translation Lookaside Buffer (TLB), is introduced. The TLB is a special cache that keeps track of recently used translations, i.e., it holds the page table entries that have been used most recently. Given a virtual address, the processor first examines the TLB: if the page table entry is present (a TLB hit), the frame number is retrieved and the physical address is formed. If the entry is not found in the TLB (a TLB miss), the page number is used as an index into the page table in main memory. If the page itself is not present in main memory, a page fault is raised; otherwise the frame number is read from the page table, and the TLB is then updated to include the new entry.

 

Steps in TLB hit

  1. CPU generates a virtual (logical) address. 
  2. The address is looked up in the TLB and the entry is found (a hit). 
  3. The corresponding frame number is retrieved, which tells where in main memory the page lies. 

Steps in TLB miss

  1. CPU generates a virtual (logical) address. 
  2. The address is looked up in the TLB and the entry is not found (a miss). 
  3. The page number is now matched against the page table residing in main memory (assuming the page table contains all PTEs). 
  4. The corresponding frame number is retrieved, which tells where in main memory the page lies. 
  5. The TLB is updated with the new PTE (if there is no free space, a replacement policy such as FIFO, LRU or MFU is used to evict an entry).
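
The hit and miss flows above can be sketched as a small simulation. This is a simplified model assuming a fixed-size TLB with LRU replacement and a single-level page table that already contains every PTE; the names (`translate`, `TLB_SIZE`) and the toy page-to-frame mapping are made up for illustration:

```python
from collections import OrderedDict

TLB_SIZE = 4                                   # illustrative capacity
tlb = OrderedDict()                            # page number -> frame number, in LRU order
page_table = {page: page + 100 for page in range(16)}  # toy page table: page -> frame

def translate(page_number):
    """Return the frame number for a page, updating the TLB on a miss."""
    if page_number in tlb:                     # TLB hit
        tlb.move_to_end(page_number)           # refresh its LRU position
        return tlb[page_number]

    frame = page_table[page_number]            # TLB miss: consult the page table in memory
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)                # evict the least recently used entry
    tlb[page_number] = frame                   # cache the new translation
    return frame

print(translate(3))   # miss: fetched from the page table, then cached in the TLB
print(translate(3))   # hit: served directly from the TLB
```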

Effective Memory Access Time (EMAT)

The TLB is used to reduce the effective memory access time, since it is a high-speed associative cache.

EMAT = h × (c + m) + (1 − h) × (c + n × m)

where:

  • h is the hit ratio of the TLB,
  • m is the memory access time,
  • c is the TLB access time, and
  • n is the number of main memory accesses needed on a TLB miss, which depends on how many levels of page table must be walked.

The value of n can be read off from the paging structure as follows:

  • n = 1 --> no page table,
  • n = 2 --> one page table,
  • n = 3 --> two page tables.
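
As a quick worked example (the numbers are made up purely for illustration): with a 90% hit ratio, a 10 ns TLB, a 100 ns memory, and a single-level page table (n = 2), the formula gives 0.9 × (10 + 100) + 0.1 × (10 + 2 × 100) = 99 + 21 = 120 ns:

```python
def emat(h, c, m, n):
    """Effective memory access time: h*(c + m) + (1 - h)*(c + n*m)."""
    return h * (c + m) + (1 - h) * (c + n * m)

# Illustrative values, not measurements from any real system.
print(emat(h=0.9, c=10, m=100, n=2))  # 120.0 ns
```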

