I/O scheduling in Operating Systems
Last Updated: 10 May, 2025
Input/Output (I/O) operations are how a computer communicates with external devices such as hard drives, keyboards, printers, and network interfaces. These operations involve transferring data into and out of the system, whether it's reading a file, saving a document, printing, or sending data over a network.
Since I/O devices are much slower than the CPU, efficient management of I/O requests is crucial. This is where I/O Scheduling Algorithms come in. These algorithms determine the order in which I/O requests are handled to improve system performance, reduce wait times, and ensure fairness among processes. Common algorithms include FCFS (First-Come-First-Serve), SSTF (Shortest Seek Time First), and SCAN.
In simple terms, I/O (Input/Output) refers to the communication between a computer system and the outside world. This could be:
- Reading a file from a hard drive
- Saving a document
- Printing a page
- Transferring data over a network
- Using a keyboard or mouse
I/O operations involve communication between the CPU and external devices like hard drives, keyboards, and printers. These operations are managed by the operating system, device drivers, and other system programs. The following are the steps involved in an I/O operation:
1. I/O Request Initiation: When a user or program requests an I/O operation (like opening a file), the OS communicates with the device driver to handle the request.
2. I/O Traffic Controller: The I/O Traffic Controller keeps track of the status of all devices, control units, and communication channels. It ensures that devices are ready and available to handle the request.
3. I/O Scheduler: The I/O Scheduler determines the order in which I/O requests are processed. It manages access to devices based on priority, fairness, and availability to optimize system performance.
4. I/O Device Handler: The I/O Device Handler manages device interrupts and oversees the data transfer. It ensures that data is transferred between the device and memory (or CPU) properly.
5. Completion and Notification: Once the I/O operation is complete, the OS informs the program or user that the task is finished.
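To make the flow above concrete, here is a toy sketch (not real operating-system code) of a request passing through a queue, a scheduling decision, and a device handler. All class and function names (IORequest, submit, run_device) are illustrative assumptions, not part of any actual OS interface.

```python
from collections import deque

class IORequest:
    def __init__(self, pid, track):
        self.pid = pid      # process that issued the request
        self.track = track  # disk track to access

pending = deque()           # queue of requests awaiting service

def submit(req):
    """Steps 1-2: the OS accepts the request and queues it."""
    pending.append(req)

def run_device():
    """Steps 3-5: pick the next request (FCFS here), transfer, notify."""
    while pending:
        req = pending.popleft()                                       # scheduling decision
        print(f"servicing track {req.track} for process {req.pid}")   # device handler
        print(f"process {req.pid} notified: I/O complete")            # completion

submit(IORequest(1, 98))
submit(IORequest(2, 37))
run_device()
```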
Why is I/O Scheduling Needed?
Imagine a busy post office. People line up to send or receive packages. If the staff helps people randomly, it might waste time, especially if some tasks are quick and others take longer. To make the process faster and fair, the post office needs a system to schedule who gets served when.
This is exactly what I/O scheduling does in an operating system. When multiple processes request I/O at the same time, the OS needs to decide the order in which to handle these requests. The goal is to:
- Improve overall system performance
- Reduce wait time
- Avoid long delays for any single process
- Make efficient use of I/O devices (especially hard drives)
I/O Scheduling Algorithms
I/O Scheduling algorithms are used by the operating system to manage the order in which I/O operations are processed. This ensures efficient use of system resources, reduces delays, and improves overall performance, especially when there are multiple requests to the same or different devices.
A few I/O scheduling algorithms are:
- FCFS [First Come First Serve]
- SSTF [Shortest Seek Time First]
- SCAN
- LOOK
Every scheduling algorithm aims to minimize arm movement, mean response time, and variance in response time. An overview of these algorithms is given below:
1. First Come First Serve [FCFS]
It is one of the simplest device-scheduling algorithms, since it is easy to program and essentially fair to users (I/O devices): requests are serviced strictly in the order they arrive. Its main drawback is a potentially high seek time, because the arm may swing back and forth across the disk, so any algorithm that reduces seek time generally performs better.
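As a rough illustration, the sketch below computes the total head movement under FCFS for a hypothetical request queue, assuming the head starts at track 53; the queue values and starting position are made up for the example.

```python
# Minimal FCFS sketch: serve requests strictly in arrival order and
# add up how far the disk arm travels.
def fcfs_seek_distance(requests, head):
    total = 0
    for track in requests:           # arrival order, no reordering
        total += abs(track - head)   # arm movement for this request
        head = track
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]   # hypothetical request queue
print(fcfs_seek_distance(queue, head=53))     # 640 tracks of movement
```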
Read more about FCFS Algorithm
2. Shortest Seek Time First [SSTF]
It uses the same idea as Shortest Job First in process scheduling, where the shortest jobs are served first and longer jobs wait for their turn. Applied to I/O scheduling, the pending request whose track is closest to the one currently being served (the one with the shortest seek distance) is satisfied next. The main advantage over FCFS is that it minimizes overall seek time. However, it favors easy-to-reach requests and keeps postponing those that are far out of the way, which can lead to starvation.
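Continuing with the same hypothetical queue and starting track, a minimal sketch of the SSTF ordering might look like this; the helper name sstf_order is purely illustrative.

```python
# Minimal SSTF sketch: repeatedly service the pending request closest
# to the current head position.
def sstf_order(requests, head):
    pending = list(requests)
    order, total = [], 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))  # shortest seek next
        total += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, total

order, moved = sstf_order([98, 183, 37, 122, 14, 124, 65, 67], head=53)
print(order)   # [65, 67, 37, 14, 98, 122, 124, 183]
print(moved)   # 236 tracks, versus 640 for FCFS on the same queue
```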
Read more about SSTF Algorithm
3. SCAN Algorithm
SCAN uses a status flag that records the direction of the arm: whether it is moving toward the center of the disk or toward the outer edge. The algorithm moves the arm from one end of the disk toward the center, servicing every request in its way. When it reaches the innermost track, it reverses direction and moves toward the outer tracks, again servicing every request in its path.
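Below is a simplified sketch of how a SCAN service order could be derived, assuming a 200-track disk and an initial sweep toward track 0; both assumptions, and the request values, are for illustration only.

```python
# Simplified SCAN sketch: sweep in one direction to the disk edge,
# then reverse and service the remaining requests.
def scan_order(requests, head, disk_size=200, direction="down"):
    lower = sorted(t for t in requests if t < head)    # tracks below the head
    upper = sorted(t for t in requests if t >= head)   # tracks at or above it
    if direction == "down":
        # sweep down servicing lower tracks, touch track 0, then sweep up
        return lower[::-1] + [0] + upper
    # sweep up servicing upper tracks, touch the last track, then sweep down
    return upper + [disk_size - 1] + lower[::-1]

# Track 0 in the output marks the arm reaching the innermost track
# before reversing; it is not an actual request.
print(scan_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [37, 14, 0, 65, 67, 98, 122, 124, 183]
```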
Read more about SCAN Algorithm
4. LOOK [Elevator Algorithm]
It is a variation of the SCAN algorithm: the arm does not necessarily travel all the way to either end of the disk, but only as far as the last pending request in the current direction. It looks ahead to the next request before moving. A natural question is "Why use LOOK over SCAN?" The major advantage of LOOK over SCAN is that it avoids moving the arm to the edges of the disk when no requests are waiting there, cutting unnecessary arm movement and reducing waiting times.
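For comparison, a minimal LOOK sketch under the same hypothetical queue and head position follows; note that the arm reverses at the last pending request rather than at the disk edge.

```python
# Minimal LOOK sketch: like SCAN, but the arm turns around at the last
# pending request in the current direction instead of the disk edge.
def look_order(requests, head, direction="down"):
    lower = sorted(t for t in requests if t < head)
    upper = sorted(t for t in requests if t >= head)
    if direction == "down":
        return lower[::-1] + upper     # reverse at the lowest request, not track 0
    return upper + lower[::-1]         # reverse at the highest request, not the last track

print(look_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [37, 14, 65, 67, 98, 122, 124, 183]
```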
Read more about LOOK Disk Scheduling Algorithm
5. Other Variations of SCAN
- N-Step SCAN: services only the requests that are already pending when a sweep begins; requests that arrive while the arm is moving are held back and grouped into the next sweep.
- C-SCAN [Circular SCAN]: the arm services requests in one direction only; when it reaches the end, it returns to the start of the disk without servicing requests on the way back, which gives a more uniform wait time. To know more, refer to the difference between SCAN and C-SCAN.
- C-LOOK [Optimized version of C-SCAN]: C-LOOK improves on C-SCAN by moving the disk arm only as far as the last request in one direction, then jumping to the lowest pending request. It avoids scanning unused tracks, reducing unnecessary movement (a short sketch follows this list).
To know more, refer to the Difference between C-LOOK and C-SCAN.
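As a quick illustration of the C-LOOK ordering described above, the following sketch serves all requests at or above the head position in ascending order and then jumps back to the lowest pending request; the queue and head position are the same hypothetical values used earlier.

```python
# Minimal C-LOOK sketch: service upward-bound requests in order, then
# jump straight to the lowest pending request (no servicing on the jump,
# and no travel to the disk edges).
def c_look_order(requests, head):
    upper = sorted(t for t in requests if t >= head)   # served on the way up
    lower = sorted(t for t in requests if t < head)    # served after the jump
    return upper + lower

print(c_look_order([98, 183, 37, 122, 14, 124, 65, 67], head=53))
# [65, 67, 98, 122, 124, 183, 14, 37]
```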