Techniques to handle Thrashing
Prerequisite - Virtual Memory
Thrashing is a condition in which the system spends a major portion of its time servicing page faults, while the actual useful processing done is negligible.
Causes of thrashing:
- High degree of multiprogramming.
- Lack of frames.
- Page replacement policy.
Thrashing's Causes
Thrashing seriously degrades the execution performance of the operating system. When CPU utilization is low, the process scheduling mechanism tries to load more processes into memory at the same time, increasing the degree of multiprogramming.
In this situation, the number of processes in memory can exceed what the available frames can support, since each process is allotted only a limited number of frames to work with.
If a high-priority process arrives and no frame is vacant at the moment, the process currently occupying a frame is moved to secondary storage, and the freed frame is allotted to the higher-priority process.
As soon as memory fills up, processes start spending much of their time swapping in required pages. Because most processes are then waiting for pages, CPU utilization drops again.
As a result, a high degree of multiprogramming and a lack of frames are two of the most common reasons for thrashing in an operating system.

The basic concept is that if a process is allocated too few frames, there will be too many and too frequent page faults. As a result, no useful work gets done by the CPU, and CPU utilization falls drastically. The long-term scheduler then tries to improve CPU utilization by loading more processes into memory, thereby increasing the degree of multiprogramming. This results in a further decrease in CPU utilization, triggering a chain reaction of higher page faults followed by an increase in the degree of multiprogramming, called Thrashing.
Locality Model -
A locality is a set of pages that are actively used together. The locality model states that as a process executes, it moves from one locality to another. A program is generally composed of several different localities which may overlap.
For example, when a function is called, it defines a new locality where memory references are made to the instructions of the function, its local variables, some global variables, etc. Similarly, when the function exits, the process leaves this locality.
Techniques to Handle Thrashing:
1. Working Set Model -
This model is based on the above-stated concept of the Locality Model.
The basic principle states that if we allocate enough frames to a process to accommodate its current locality, it will fault only when it moves to a new locality. But if the allocated frames are fewer than the size of the current locality, the process is bound to thrash.
According to this model, based on parameter A, the working set is defined as the set of pages in the most recent 'A' page references. Hence, all the actively used pages would always end up being a part of the working set.
The accuracy of the working set depends on the value of parameter A. If A is too large, the working set may span several localities. On the other hand, if A is too small, it might not cover the current locality entirely.
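As a rough sketch (not part of the original article), the working set for a window parameter A can be computed from a page-reference string as shown below; the reference string and the helper name working_set are hypothetical, chosen only for illustration.

```python
def working_set(references, t, A):
    """Set of pages referenced in the most recent A references, ending at time t."""
    start = max(0, t - A + 1)
    return set(references[start:t + 1])

# Hypothetical page-reference string for a single process
refs = [1, 2, 1, 3, 2, 1, 4, 4, 5, 4, 5, 6, 5, 6]

# With A = 5, the working set near the start of the trace differs from the one
# near the end, reflecting the process moving from one locality to another.
print(working_set(refs, t=5, A=5))    # {1, 2, 3}
print(working_set(refs, t=13, A=5))   # {4, 5, 6}
```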
If D is the total demand for frames and WSS_i is the working set size for process i,
D = \sum_{i} WSS_i
Now, if 'm' is the number of frames available in the memory, there are 2 possibilities:
- (i) D > m, i.e., the total demand exceeds the number of available frames; thrashing will occur because some processes will not get enough frames.
- (ii) D <= m, i.e., the demand can be met by the available frames; there will be no thrashing.
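A minimal sketch of this D versus m check, using hypothetical working-set sizes:

```python
# Hypothetical working-set sizes WSS_i (in frames) for three processes
wss = {"P1": 12, "P2": 20, "P3": 9}

m = 36                    # assumed number of frames available in memory
D = sum(wss.values())     # total demand: D = sum of WSS_i

if D > m:
    # Total demand exceeds the available frames: thrashing is likely,
    # so the OS would typically suspend (swap out) some process.
    print(f"D = {D} > m = {m}: thrashing likely, suspend a process")
else:
    print(f"D = {D} <= m = {m}: no thrashing expected")
```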
2. Page Fault Frequency -
A more direct approach to handling thrashing uses the Page-Fault Frequency concept.

The problem associated with thrashing is a high page fault rate, so the idea here is to control the page fault rate directly.
If the page fault rate is too high, it indicates that the process has too few frames allocated to it. Conversely, a low page fault rate indicates that the process has too many frames.
Upper and lower limits can be established on the desired page fault rate.
If the page fault rate falls below the lower limit, frames can be removed from the process. Similarly, if the page fault rate exceeds the upper limit, more frames can be allocated to the process.
In other words, the system should be kept operating within the region bounded by these upper and lower limits.
Here too, if the page fault rate is high and no free frames are available, some processes can be suspended and their frames reallocated to other processes; the suspended processes can be restarted later.
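The page-fault-frequency idea can be pictured as a simple control loop. The sketch below is only illustrative: the threshold values, the process dictionary, and the measure_fault_rate callback are assumptions made for this example, not part of any real OS API.

```python
UPPER = 0.10   # assumed upper bound on the acceptable page fault rate
LOWER = 0.02   # assumed lower bound

def rebalance(process, free_frames, measure_fault_rate):
    """One pass of a page-fault-frequency style frame adjustment (illustrative sketch)."""
    rate = measure_fault_rate(process)
    if rate > UPPER:
        if free_frames > 0:
            process["frames"] += 1           # too many faults: grant another frame
            free_frames -= 1
        else:
            # No free frames left: suspend the process and reclaim its frames,
            # to be restarted later once memory pressure eases.
            process["suspended"] = True
            free_frames += process["frames"]
            process["frames"] = 0
    elif rate < LOWER and process["frames"] > 1:
        process["frames"] -= 1               # too few faults: reclaim a frame
        free_frames += 1
    return free_frames

# Example use with a made-up fault-rate reading:
p = {"frames": 4, "suspended": False}
free = rebalance(p, free_frames=0, measure_fault_rate=lambda proc: 0.25)
print(p, free)   # process suspended, its 4 frames returned to the free pool
```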
FAQ:
What is thrashing in virtual memory?
Thrashing is a condition in which the system spends a major portion of its time servicing page faults while the actual useful processing done is negligible.
What are the causes of thrashing?
A high degree of multiprogramming, a lack of frames, and a poor page replacement policy are the main causes of thrashing.
How does the working set model handle thrashing?
The working set model allocates frames to a process based on its current locality. If the allocated frames cannot accommodate that locality, the process will thrash. Based on a parameter 'A', the working set is defined as the set of pages referenced in the most recent 'A' page references, so the actively used pages are kept in memory.
What is the locality model in virtual memory?
A locality is a set of pages that are actively used together. The locality model states that, as a process executes, it moves from one locality to another. A program is generally composed of several localities, which may overlap.
How does the page fault frequency approach handle thrashing?
The page fault frequency approach controls the page fault rate of a process directly. If the page fault rate is too high, the process has too few frames allocated to it; if it is too low, the process has too many frames. Establishing upper and lower limits on the desired page fault rate, and adjusting frame allocation accordingly, prevents excessive swapping and thrashing.