Difference Between Preemptive and Non-Preemptive CPU Scheduling Algorithms
Last Updated: 04 Sep, 2024
Preemptive scheduling permits a high-priority task to interrupt a running process: it is used when a process transitions from the running state to the ready state or from the waiting state to the ready state. In non-preemptive scheduling, by contrast, any new process must wait until the running process finishes its CPU burst. In this article, we discuss what preemptive and non-preemptive scheduling are and how they differ.
What is Preemptive Scheduling?
In preemptive scheduling, the scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state. Scheduling is preemptive when it takes place under either of the following circumstances:
- When a process moves from the running state to the ready state, for example when an interrupt occurs.
- When a process moves from the waiting state to the ready state, for example when an I/O operation completes.
When a process transitions from the running to the ready state or from the waiting to the ready state, preemptive scheduling is used. The process is given resources (mainly CPU cycles) for a set period before they are taken away. If the process still has CPU burst time remaining, it is placed back in the ready queue, where it waits for its next opportunity to run.
Example of Preemptive Scheduling: Consider four processes P0, P1, P2, and P3 that use the preemptive scheduling technique.
| Process ID | Arrival Time | Burst Time |
|------------|--------------|------------|
| P0         | 3            | 2          |
| P1         | 2            | 4          |
| P2         | 0            | 6          |
| P3         | 1            | 4          |

[Figure: Gantt chart for preemptive scheduling of the example processes]
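To make the preemptive behaviour concrete, below is a minimal Python sketch that schedules the four example processes with Shortest Remaining Time First (SRTF). SRTF is an assumption made purely for illustration, since the article does not state which preemptive policy its chart uses; the point is that the scheduler re-evaluates the ready queue at every time unit and may take the CPU away from the running process.

```python
# Minimal sketch of a preemptive scheduler (Shortest Remaining Time First).
# Assumption: SRTF is used here only as an illustrative preemptive policy.

processes = {          # pid: (arrival_time, burst_time)
    "P0": (3, 2),
    "P1": (2, 4),
    "P2": (0, 6),
    "P3": (1, 4),
}

remaining = {pid: burst for pid, (_, burst) in processes.items()}
completion = {}
time = 0

while remaining:
    # Processes that have arrived and still need CPU time.
    ready = [pid for pid in remaining if processes[pid][0] <= time]
    if not ready:
        time += 1                      # CPU idles until the next arrival
        continue
    # Preemption point: re-evaluated every time unit, so a newly arrived
    # process with a shorter remaining burst takes the CPU away.
    current = min(ready, key=lambda pid: remaining[pid])
    remaining[current] -= 1
    time += 1
    if remaining[current] == 0:
        completion[current] = time
        del remaining[current]

for pid, (arrival, burst) in processes.items():
    turnaround = completion[pid] - arrival
    waiting = turnaround - burst
    print(f"{pid}: completion={completion[pid]}, turnaround={turnaround}, waiting={waiting}")
```

Running the sketch prints each process's completion, turnaround, and waiting time; note how P2 is preempted at t = 1 when P3 arrives with a shorter remaining burst.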
Advantages of Preemptive Scheduling
- Because a single process cannot monopolize the CPU, this approach is more reliable.
- Events such as interrupts or new arrivals are handled promptly, even if they pause the task currently in progress.
- It improves the average response time.
- The benefits are greater when this technique is used in a multiprogramming environment.
- The operating system ensures that every running process gets a fair share of CPU time.
Disadvantages of Preemptive Scheduling
- It consumes additional, limited computational resources on scheduling overhead.
- Suspending the running process, switching the context, and dispatching the newly arrived process all take time.
- If several high-priority processes arrive at the same time, low-priority processes may have to wait for a long time.
What is Non-Preemptive Scheduling?
In non-preemptive scheduling, once a process enters the running state it cannot be preempted until it completes its CPU burst. The scheduling scheme is non-preemptive (or cooperative) when scheduling takes place only under the following circumstances:
- When a process switches from the running state to the waiting state (for example, because of an I/O request or a call to wait() for a child process to terminate).
- When a process terminates.
Under non-preemptive scheduling, a process cannot be stopped until it terminates or switches to the waiting state on its own. A process with a long CPU burst can therefore make processes with shorter bursts starve.
Example of Non-Preemptive Scheduling: Consider four processes P0, P1, P2, and P3 that use the non-preemptive scheduling technique.
| Process ID | Arrival Time | Burst Time |
|------------|--------------|------------|
| P0         | 3            | 2          |
| P1         | 2            | 4          |
| P2         | 0            | 6          |
| P3         | 1            | 4          |

[Figure: Gantt chart for non-preemptive scheduling of the example processes]
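For contrast, here is a minimal sketch of non-preemptive First Come First Serve (FCFS) on the same processes. FCFS is assumed here only because it is the classic non-preemptive policy (the comparison table below also cites it); once a process starts, it keeps the CPU for its entire burst.

```python
# Minimal sketch of non-preemptive FCFS scheduling for the same four
# processes. Once a process starts, it keeps the CPU for its full burst.

processes = [           # (pid, arrival_time, burst_time)
    ("P0", 3, 2),
    ("P1", 2, 4),
    ("P2", 0, 6),
    ("P3", 1, 4),
]

time = 0
for pid, arrival, burst in sorted(processes, key=lambda p: p[1]):  # by arrival
    time = max(time, arrival)   # wait for the process to arrive if CPU is idle
    start = time
    time += burst               # no preemption: run to completion
    turnaround = time - arrival
    waiting = start - arrival
    print(f"{pid}: start={start}, completion={time}, "
          f"turnaround={turnaround}, waiting={waiting}")
```

With these arrival times, P2 runs first and holds the CPU for its full 6-unit burst, so the short process P0 does not start until t = 14, illustrating how one long burst delays shorter ones.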
Advantages of Non-Preemptive Scheduling
- It has a minimal scheduling overhead.
- It is a very simple technique to implement.
- Less processing power is used.
- It provides a high throughput.
Disadvantages of Non-Preemptive Scheduling
- Response time to processes is poor, especially for interactive tasks.
- A bug (such as an infinite loop) in the running process can freeze the machine, since the OS cannot preempt it.
Difference Between Preemptive and Non-Preemptive Scheduling
| Preemptive Scheduling | Non-Preemptive Scheduling |
|---|---|
| The CPU is allocated to a process for a limited amount of time. | The CPU is allocated to a process until it finishes its execution or switches to the waiting state. |
| The executing process can be interrupted in the middle of its execution. | The executing process is not interrupted in the middle of its execution. |
| It switches processes between the ready and running states and maintains the ready queue. | It does not switch a process from the running state back to the ready state. |
| If high-priority processes keep arriving in the ready queue, a low-priority process may wait for a long time and can starve. | If the CPU is allocated to a process with a large burst time, processes with small burst times may starve. |
| It is flexible: a critical process gets the CPU as soon as it arrives in the ready queue, no matter which process is currently executing. | It is rigid: even if a critical process enters the ready queue, the running process is not disturbed. |
| It incurs a cost for maintaining the integrity of shared data. | It incurs no such cost. |
| It leads to more context switches. | It leads to fewer context switches than preemptive scheduling. |
| A process can be interrupted in the midst of its execution, with the CPU taken away and allocated to another process. | A process relinquishes the CPU only when it finishes its current CPU burst. |
| It affects the design of the operating system kernel. | It does not affect the design of the OS kernel. |
| It is more complex. | It is simpler, but can be inefficient. |
| Example: Round Robin. | Example: First Come First Serve (FCFS). |
Whether preemptive scheduling is better than non-preemptive scheduling, or vice versa, cannot be said definitively. It depends on how well the scheduler minimizes the average waiting time of the processes and maximizes CPU utilization.
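Since the table names Round Robin as the typical preemptive policy, the sketch below shows a simple Round Robin scheduler over the same four processes. The quantum of 2 time units is an assumption chosen purely for illustration.

```python
# Minimal Round Robin sketch (quantum of 2). The quantum value is an
# assumption for illustration only.
from collections import deque

QUANTUM = 2
processes = [("P0", 3, 2), ("P1", 2, 4), ("P2", 0, 6), ("P3", 1, 4)]
processes.sort(key=lambda p: p[1])          # admit in arrival order

time = 0
queue = deque()
remaining = {pid: burst for pid, _, burst in processes}
arrivals = deque(processes)

def admit(now):
    """Move every process that has arrived by `now` into the ready queue."""
    while arrivals and arrivals[0][1] <= now:
        queue.append(arrivals.popleft()[0])

admit(time)
while queue or arrivals:
    if not queue:                            # CPU idles until the next arrival
        time = arrivals[0][1]
        admit(time)
    pid = queue.popleft()
    run = min(QUANTUM, remaining[pid])
    time += run
    remaining[pid] -= run
    admit(time)                              # newcomers queue ahead of the preempted one
    if remaining[pid] > 0:
        queue.append(pid)                    # preempted: back to the ready queue
    else:
        print(f"{pid} finishes at t={time}")
```

Each process runs for at most one quantum before being moved to the back of the ready queue, which is exactly the periodic preemption described earlier.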
Conclusion
Preemptive scheduling allows better handling of high-priority processes and improves system responsiveness, at the cost of increased context-switching overhead. Non-preemptive scheduling is simpler and has lower overhead, but it can lead to inefficiencies and longer waiting times for some processes. Either way, some processes may starve: under preemptive scheduling, low-priority processes can starve, and under non-preemptive scheduling, processes with short burst times can starve while a long process holds the CPU. Choose the algorithm according to your needs: if high-priority tasks must complete regardless of how long lower-priority processes wait, use preemptive scheduling; otherwise, non-preemptive scheduling is sufficient.