Unit - 1 Introduction To Operating System
Functions of Operating System
An operating system performs each of the following functions:
1. Process management: Process management helps OS to create and delete processes. It also provides
mechanisms for synchronization and communication among processes.
2. Memory management: The memory management module performs allocation and de-allocation of memory space to programs in need of this resource.
3. File management: It manages all file-related activities such as organization, storage, retrieval, naming, sharing, and protection of files.
4. Device Management: Device management keeps track of all devices. The module responsible for this task is known as the I/O controller. It also performs allocation and de-allocation of devices.
5. I/O System Management: One of the main objectives of any OS is to hide the peculiarities of hardware devices from the user.
6. Secondary-Storage Management: Systems have several levels of storage which includes primary
storage, secondary storage, and cache storage. Instructions and data must be stored in primary storage
or cache so that a running program can reference it.
7. Security: The security module protects the data and information of a computer system against malware threats and unauthorized access.
8. Command interpretation: This module interprets commands given by the user and uses system resources to process those commands.
9. Networking: A distributed system is a group of processors which do not share memory, hardware
devices, or a clock. The processors communicate with one another through the network.
10. Job accounting: Keeping track of the time and resources used by various jobs and users.
11. Communication management: Coordination and assignment of compilers, interpreters, and other software resources to the various users of the computer system.
Modern computers consist of processors, memories, timers, disks, mice, network interfaces, printers, and
a wide variety of other devices. In the alternative view, the job of the operating system is to provide for an
orderly and controlled allocation of the processors, memories, and input/output devices among the
various programs competing for them.
When a computer (or network) has multiple users, the need for managing and protecting the memory,
input/output devices, and other resources is even greater, since the users might otherwise interfere with
one another. In addition, users often need to share not only hardware, but information (files, databases,
etc.) as well. In short, this view of the operating system holds that its primary task is to keep track of which
programs are using which resources, to grant resource requests, to account for usage, and to mediate
conflicting requests from different programs and users.
Resource management includes multiplexing (sharing) resources in two different ways:
1. Time Multiplexing
2. Space Multiplexing
1. Time Multiplexing : When the resource is time multiplexed, different programs or users take turns using
it. First one of them gets to use the resource, then another, and so on.
For example: With only one CPU and multiple programs that want to run on it, the operating system first allocates the CPU to one of them; after it has run long enough, another one gets to use the CPU, then another, and then eventually the first one again.
Determining how the resource is time multiplexed – who goes next and for how long – is the task of the
operating system.
2. Space Multiplexing : In space multiplexing, instead of the customers taking turns, each one gets part of
the resource.
For example: Main memory is normally divided up among several running programs, so each one can be
resident at the same time (for example, in order to take turns using the CPU). Assuming there is enough
memory to hold multiple programs, it is more efficient to hold several programs in memory at once rather
than give one of them all of it, especially if it only needs a small fraction of the total. Of course, this raises
issues of fairness, protection, and so on, and it is up to the operating system to solve them.
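The round-robin style of time multiplexing described above can be sketched as a toy loop. This is an illustrative Python sketch, not a real scheduler; the program names, burst times, and quantum are made-up examples.

```python
from collections import deque

# Toy illustration of time multiplexing: one "CPU" is shared among several
# programs by giving each a fixed quantum of time in turn.
def round_robin(burst_times, quantum):
    """burst_times: dict of program name -> total CPU time needed.
    Returns the order in which the programs used the CPU."""
    queue = deque(burst_times.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)              # this program gets the CPU now
        remaining -= quantum
        if remaining > 0:               # not finished: go to the back of the line
            queue.append((name, remaining))
    return order

# Three programs taking turns on a single CPU, one time unit at a time
print(round_robin({"A": 3, "B": 2, "C": 1}, quantum=1))
# → ['A', 'B', 'C', 'A', 'B', 'A']
```

Determining "who goes next and for how long" here is just FIFO order with a fixed quantum; real operating systems use far more elaborate policies.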
Operating System Structure
An operating system is a construct that allows the user application programs to interact with the system
hardware. Since the operating system is such a complex structure, it should be created with utmost care so
it can be used and modified easily. An easy way to do this is to create the operating system in parts. Each
of these parts should be well defined with clear inputs, outputs and functions.
1. Simple Structure : There are many operating systems that have a rather simple structure. These
started as small systems and rapidly expanded much further than their scope. A common example
of this is MS-DOS. It was originally designed for a small group of people, and there was no indication that it would become so popular.
An image to illustrate the structure of MS-DOS is as follows −
It is better that operating systems have a modular structure, unlike MS-DOS. That would lead to greater
control over the computer system and its various applications. The modular structure would also allow the
programmers to hide information as required and implement internal routines as they see fit without
changing the outer specifications.
2. Layered Structure : One way to achieve modularity in the operating system is the layered approach.
In this, the bottom layer is the hardware and the topmost layer is the user interface.
An image demonstrating the layered approach is as follows −
As seen from the image, each upper layer is built on the bottom layer. All the layers hide some structures,
operations etc from their upper layers.
One problem with the layered structure is that each layer needs to be carefully defined. This is necessary
because the upper layers can only use the functionalities of the layers below them.
What is Shell and Kernel
Both the shell and the kernel are parts of the operating system, and both are used for performing operations on the system. When a user gives a command to perform an operation, the request first goes to the shell. The shell is also called the interpreter: it translates the user's command into a form the machine can act on, and then the request is transferred to the kernel. So the shell is simply the interpreter of commands, converting the request of the user into machine language.
The kernel is called the heart of the operating system, and every operation is performed through it. When the kernel receives a request from the shell, it processes the request and displays the results on the screen. The various types of operations performed by the kernel are as follows:
1) It controls the state of each process, i.e. it checks whether the process is running or waiting for a request from the user.
2) It provides memory to the processes running on the system, i.e. the kernel handles allocation and de-allocation: when we request a service, the kernel provides memory to the process, and afterwards it releases the memory that was given to the process.
3) The kernel maintains a timetable for all running processes, i.e. it prepares a schedule that allots CPU time to the various processes, and it also moves waiting and suspended jobs into a different memory area.
4) When the kernel determines that the programs do not fit into main memory, it uses secondary storage to hold them temporarily; that is, part of the system's storage can be used as temporary memory.
5) The kernel also maintains all the files stored in the computer system, and it stores them in such a way that no one can read or write a file without permission. For this, the kernel provides facilities such as passwords, and all files are stored in an organized manner.
As we have seen, many functions are performed by the kernel, but these functions are never shown to the user: the working of the kernel is transparent to the user.
Unix is an operating system. It is the interface between the user and the hardware. It performs a variety of tasks including file handling, memory management, controlling hardware devices, process management and many more. There are various versions of Unix: Solaris Unix, HP Unix, AIX, etc. Linux is a flavor of Unix, and it is free and open-source. Unix is popular at the enterprise level because it supports a multi-user environment. Kernel and Shell are two components in the Unix architecture. The kernel is the heart of the operating system: it controls all the tasks of the system, while the shell is the interface that allows the users to communicate with the kernel.
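The shell-as-interpreter, kernel-as-executor division can be sketched in a few lines. The sketch below is a toy Python illustration: `tiny_shell` plays the role of the shell (interpreting the text of the command), while `subprocess.run` asks the kernel to create a process that carries it out. A real shell adds piping, redirection, job control, and much more.

```python
import shlex
import subprocess

# A minimal "shell" sketch: interpret a command line typed by the user
# (split it into a program name and arguments), then hand the request to
# the kernel, which creates a process to run the program.
def tiny_shell(command_line):
    argv = shlex.split(command_line)     # the shell's job: interpret the text
    result = subprocess.run(argv,        # the kernel's job: create the process
                            capture_output=True, text=True)
    return result.stdout

print(tiny_shell("echo hello from the shell"), end="")
```

Here the "translation into machine language" is done once, when `echo` was compiled; the shell's contribution is interpreting the command line and requesting process creation.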
An operating system is a framework that enables user application programs to interact with system
hardware. The operating system does not perform any functions on its own, but it provides an environment
in which various apps and programs can do useful work. The operating system may be observed from the
point of view of the user or the system, and it is known as the user view and the system view. In this
article, you will learn the views of the operating system.
Viewpoints of Operating System
The operating system may be observed from the viewpoint of the user or the system. It is known as the
user view and the system view. There are mainly two types of views of the operating system. These are as
follows:
1. User View
2. System View
User View
The user view depends on the system interface that is used by the users. Some systems are designed for a
single user to monopolize the resources to maximize that user's work. In these cases, the OS is designed primarily for ease of use, with some attention paid to performance and little to resource utilization.
The user viewpoint focuses on how the user interacts with the operating system through the usage of
various application programs. In contrast, the system viewpoint focuses on how the hardware interacts
with the operating system to complete various tasks.
1. Single User View Point
Most computer users use a monitor, keyboard, mouse, printer, and other accessories to operate their
computer system. In some cases, the system is designed to maximize the output of a single user. As a result, more attention is paid to accessibility, and resource allocation is less important. These systems are designed for a single-user experience and meet the needs of a single user, where performance is not given as much focus as in multi-user systems.
2. Multiple User View Point
Another example of user views in which the importance of user experience and performance is given is
when there is one mainframe computer and many users on their computers trying to interact with their
kernels over the mainframe to each other. In such circumstances, memory allocation by the CPU must be
done effectively to give a good user experience. The client-server architecture is another good example
where many clients may interact through a remote server, and the same constraints of effective use of
server resources may arise.
3. Handheld User View Point
Moreover, the touchscreen era has given you the best handheld technology ever. Smartphones interact via
wireless devices to perform numerous operations, but they're not as efficient as a computer interface,
limiting their usefulness. However, their operating system is a great example of creating a device focused
on the user's point of view.
4. Embedded System User View Point
Some systems, like embedded systems, lack a user point of view. The remote control used to turn a TV on or off is part of an embedded system in which the electronic device communicates with another program; here the user viewpoint is limited to the ways the user can engage with the application.
System View
The OS may also be viewed as just a resource allocator. A computer system comprises various sources,
such as hardware and software, which must be managed effectively. The operating system manages the
resources, decides between competing demands, controls the program execution, etc. According to this
point of view, the operating system's purpose is to maximize performance. The operating system is
responsible for managing hardware resources and allocating them to programs and users to ensure
maximum performance.
From the user point of view, we've discussed the numerous applications that require varying degrees of
user participation. However, we are more concerned with how the hardware interacts with the operating
system than with the user from a system viewpoint. The hardware and the operating system interact for a
variety of reasons, including:
1. Resource Allocation
The hardware contains several resources like registers, caches, RAM, ROM, CPUs, I/O interaction, etc.
These are all resources that the operating system manages and allocates when an application program demands them. The operating system uses a variety of strategies to get the most out of the hardware resources and to maximize processing and memory space, including paging, virtual memory, caching, and so on. These are very important in the
case of various user viewpoints because inefficient resource allocation may affect the user viewpoint,
causing the user system to lag or hang, reducing the user experience.
2. Control Program
The control program controls how input and output devices (hardware) interact with the operating system.
The user may request an action that can only be done with I/O devices; in this case, the operating system must properly communicate with, control, detect, and handle such devices.
Batch Operating System
The purpose of this operating system was mainly to transfer control from one job to another as soon as the
job was completed. It contained a small set of programs called the resident monitor that always resided in
one part of the main memory. The remaining part is used for servicing jobs.
Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time between two jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high, then
the other four jobs will never be executed, or they will have to wait for a very long time. Hence the other
processes get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires the input
of two numbers from the console, then it will never get it in the batch processing scenario since the user is
not present at the time of execution.
Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each process
needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process performs its I/O, the CPU can start executing other processes. Therefore, multiprogramming improves the efficiency of the system.
Advantages of Multiprogramming OS
o Throughput of the system increases, as the CPU almost always has a program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various system resources are used efficiently, but they do not provide any user interaction with the computer system.
Multiprocessing Operating System
In Multiprocessing, Parallel computing is achieved. There are more than one processors present in the
system which can execute more than one process at the same time. This will increase the throughput of
the system.
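The idea of multiple processes executing at the same time can be sketched with Python's `multiprocessing` module. This is an illustrative sketch (the function and input list are made-up examples), assuming a POSIX system; with more than one processor, the two worker processes can run truly in parallel.

```python
from multiprocessing import Pool

# Each worker is a separate process with its own address space; the Pool
# distributes pieces of the work among them.
def square(n):
    return n * n

def parallel_squares(numbers):
    # Two worker processes compute elements of the list concurrently
    # (genuinely in parallel on a multiprocessor system).
    with Pool(processes=2) as pool:
        return pool.map(square, numbers)

if __name__ == "__main__":
    print(parallel_squares([1, 2, 3, 4]))   # → [1, 4, 9, 16]
```

Because processes do not share memory, the inputs and results are passed between them by the library behind the scenes, which is part of the overhead of multiprocessing compared with threads.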
Network Operating System
An operating system, which includes software and associated protocols to communicate with other
computers via a network conveniently and cost-effectively, is called Network Operating System.
Real-Time Operating System
An application of a real-time system exists in the case of military applications: if you want to drop a missile, then the missile is supposed to be dropped with a certain precision.
What is Process?
A Process is an execution of a specific program. It is an active entity that carries out the purpose of the
application. Multiple processes may be related to the same program. For example, if you double-click on
Google Chrome browser, you start a process that runs Google Chrome and when you open another
instance of Chrome, you essentially create a second process.
KEY DIFFERENCE
• Process is an executing part of a program whereas a program is a group of ordered operations to
achieve a programming goal.
• The process has a shorter and minimal lifespan whereas program has a longer lifespan.
• A process holds many resources such as memory addresses, disk, and printer, while a program only needs memory space on the disk to store its instructions.
• When we distinguish between process and program, Process is a dynamic or active entity whereas
Program is a passive or static entity.
• To differentiate program and process, Process has considerable overhead whereas Program has no
significant overhead cost.
Features of Program
• A program is a passive entity. It stores a group of instructions to be executed.
• Various processes may be related to the same program.
• A user may run multiple programs, and the operating system handles internal activities such as memory management for them.
• The program can’t perform any action without a run. It needs to be executed to realize the steps
mentioned in it.
• The operating system allocates main memory to store the program's instructions.
Features of Process
• A process has a very limited lifespan.
• A process may generate one or more child processes, and eventually it terminates (dies), like a human being.
• Like humans, a process carries information such as who its parent is, when it was created, the address space of its allocated memory, and security properties including ownership credentials and privileges.
• Processes are allocated system resources like file descriptors and network ports.
Summary
• A Program is an executable file which contains a certain set of instructions written to complete the
specific job or operation on your computer.
• A Process is an execution of a specific program. It is an active entity that carries out the purpose of the application.
• A program is a passive entity. It stores a group of instructions to be executed.
• Processes are allocated system resources like file descriptors and network ports.
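The program-versus-process distinction can be demonstrated directly: launching the same program twice yields two distinct processes, each with its own process ID, just like opening two instances of a browser. A minimal Python sketch (the one-line child script is a made-up example):

```python
import subprocess
import sys

# One program, many processes: each launch of the same program creates a
# new process with its own process ID.
def run_program_instance():
    # The "program" here is a one-line script that reports its own PID.
    proc = subprocess.run(
        [sys.executable, "-c", "import os; print(os.getpid())"],
        capture_output=True, text=True)
    return int(proc.stdout)

pid_a = run_program_instance()
pid_b = run_program_instance()
print(pid_a != pid_b)   # → True: same program (passive), two processes (active)
```

The script file on disk is the passive program; each run of it is an active process with its own identity and resources.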
The process, from its creation to completion, passes through various states. The minimum number of
states is five.
The names of the states are not standardized although the process may be in one of the following states
during execution.
1. New : A program which is going to be picked up by the OS into the main memory is called a new process.
2. Ready : Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from secondary memory and puts them in main memory.
The processes which are ready for the execution and reside in the main memory are called ready state
processes. There can be many processes present in the ready state.
3. Running : One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes for a
particular time will always be one. If we have n processors in the system then we can have n processes
running simultaneously.
4. Block or wait : From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination : When a process finishes its execution, it comes to the termination state. All the context of the process (its Process Control Block) is deleted, and the process is terminated by the operating system.
6. Suspend ready : A process in the ready state which is moved from main memory to secondary memory due to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If main memory is full and a higher priority process arrives for execution, the OS has to make room for it in main memory by moving a lower priority process out to secondary memory. The suspend ready processes remain in secondary memory until main memory becomes available.
7. Suspend wait : Instead of removing a process from the ready queue, it is better to remove a blocked process that is waiting for some resource in main memory. Since it is already waiting for a resource to become available, it may as well wait in secondary memory and make room for a higher priority process. These processes complete their execution once main memory becomes available and their wait is finished.
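The states and transitions above can be summarized as a small state machine. The sketch below is illustrative only; the exact set of states and transitions varies between textbooks.

```python
# Which state-to-state transitions the OS allows, following the seven
# states described in this section.
ALLOWED = {
    "new":           {"ready"},
    "ready":         {"running", "suspend ready"},
    "running":       {"ready", "block/wait", "terminated"},
    "block/wait":    {"ready", "suspend wait"},   # I/O done, or swapped out
    "suspend ready": {"ready"},                   # brought back into main memory
    "suspend wait":  {"suspend ready"},           # wait finished while swapped out
    "terminated":    set(),
}

def is_valid(path):
    """Check that every consecutive pair of states is an allowed transition."""
    return all(b in ALLOWED[a] for a, b in zip(path, path[1:]))

# A typical life cycle: created, scheduled, blocks on I/O, resumes, finishes
print(is_valid(["new", "ready", "running", "block/wait",
                "ready", "running", "terminated"]))   # → True
print(is_valid(["new", "running"]))   # → False: must pass through ready first
```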
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each of
the process states and PCBs of all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.
The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready and run queues; the run queue can have only one entry per processor core on the system (in the diagram above, it has been merged with the CPU).
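The queueing behavior can be sketched with FIFO queues. This is an illustrative Python sketch: the process names are made-up examples, and a real OS would store full PCBs in the queues rather than names.

```python
from collections import deque

# Sketch of process scheduling queues: "PCBs" (here just process names)
# are unlinked from one queue and moved to another as their state changes.
ready_queue = deque(["P1", "P2", "P3"])   # in main memory, waiting for the CPU
device_queue = deque()                    # blocked on an unavailable I/O device

running = ready_queue.popleft()           # scheduler dispatches P1 to the CPU
device_queue.append(running)              # P1 requests I/O: moved to device queue
running = ready_queue.popleft()           # CPU is given to P2 in the meantime

print(running, list(device_queue))        # → P2 ['P1']
```

The FIFO discipline shown here is only one of the policies the OS might apply to each queue.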
Types of Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler
1. Long Term Scheduler : It is also called a job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the queue and loads them into memory for execution, where they become candidates for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be absent or minimal; time-sharing operating systems, for example, have no long-term scheduler. The long-term scheduler acts when a process changes state from new to ready.
2. Short Term Scheduler : It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects a process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.
3. Medium Term Scheduler : Medium-term scheduling is a part of swapping. It removes processes from memory, reducing the degree of multiprogramming, and it is in charge of handling the swapped-out processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot
make any progress towards completion. In this condition, to remove the process from memory and make
space for other processes, the suspended process is moved to the secondary storage. This process is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.
Comparison among Scheduler
S.N. | Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
2 | Speed is lesser than the short-term scheduler | Speed is the fastest among the three | Speed is in between the short-term and long-term schedulers
5 | It selects processes from the pool and loads them into memory for execution | It selects those processes which are ready to execute | It can re-introduce a process into memory so that its execution can be continued
A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a thread of execution or thread of control. Threads execute within a process, and there can be more than one thread inside a process. Each thread of the same process makes use of a separate program counter and a stack of activation records and control blocks. A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, each tab can be viewed as a thread. MS Word uses many threads: formatting text in one thread, processing input in another thread, and so on.
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new process.
o Threads can share common data, so they do not need to use Inter-Process Communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads, and each thread
is treated as a job, the number of jobs done in the unit time increases. That is why the throughput
of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.
o Responsiveness: When a process is split into several threads, the process can remain responsive, since one thread can respond to the user while another thread completes its execution.
o Communication: Multiple-thread communication is simple because the threads share the same
address space, while in process, we adopt just a few exclusive communication strategies for
communication between two processes.
o Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared between threads; each thread has its own stack and registers.
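The shared address space that makes thread communication simple can be seen directly. In the Python sketch below (the worker names and counts are made-up examples), two threads append to the same list with no inter-process communication; a lock is used only to serialize the shared updates.

```python
import threading

# Threads of one process share its address space: both workers append to
# the same list, with no copying or IPC. Each thread still has its own
# stack and registers.
shared = []                        # lives in the shared address space
lock = threading.Lock()            # serialize access to the shared data

def worker(name, count):
    for i in range(count):
        with lock:
            shared.append((name, i))

t1 = threading.Thread(target=worker, args=("A", 3))
t2 = threading.Thread(target=worker, args=("B", 3))
t1.start(); t2.start()
t1.join(); t2.join()

print(len(shared))   # → 6: both threads wrote into the same list
```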
Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.
• User-level thread
The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes. Examples: Java threads, POSIX threads, etc.
Advantages of User-level threads
1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be applied to such types of operating systems that do not support threads at
the kernel-level.
3. It is faster and more efficient.
4. Context switch time is shorter than the kernel-level threads.
5. It does not require modifications of the operating system.
6. User-level threads representation is very simple. The register, PC, stack, and mini thread control
blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
PROCESS Pi:
    FLAG[i] = true
    while ( (turn != i) AND (CS is not free) ) { wait; }
    // CRITICAL SECTION
    FLAG[i] = false
    turn = j;   // choose another process to go to CS
• Assume there are N processes (P1, P2, … PN) and every process at some point of time requires to
enter the Critical Section
• A FLAG[] array of size N is maintained which is by default false. So, whenever a process requires to
enter the critical section, it has to set its flag as true. For example, If Pi wants to enter it will set
FLAG[i]=TRUE.
• Another variable called TURN indicates the process number which is currently waiting to enter into the CS.
• The process which enters into the critical section while exiting would change the TURN to another
number from the list of ready processes.
• Example: turn is 2 then P2 enters the Critical section and while exiting turn=3 and therefore P3
breaks out of wait loop.
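Peterson's algorithm for the special case of two processes can be run directly. The sketch below is an illustrative Python version (the counter workload is a made-up example); its correctness relies on sequentially consistent memory, which CPython's global interpreter lock effectively provides. On real hardware, memory barriers would also be needed.

```python
import threading

# Classic two-process Peterson's algorithm: flag[i] means "thread i wants
# in"; turn says whose turn it is to yield to the other.
flag = [False, False]
turn = 0
counter = 0                        # the shared resource

def enter(i):
    global turn
    j = 1 - i
    flag[i] = True                 # announce intent to enter
    turn = j                       # politely give the other side priority
    while flag[j] and turn == j:   # busy-wait while the other is interested
        pass

def leave(i):
    flag[i] = False                # exit section: no longer interested

def worker(i, iterations):
    global counter
    for _ in range(iterations):
        enter(i)
        counter += 1               # critical section: unprotected increment
        leave(i)

threads = [threading.Thread(target=worker, args=(i, 20000)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # → 40000: no updates were lost
```

Without the enter/leave calls, the two `counter += 1` sequences could interleave and lose updates; with them, mutual exclusion holds.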
Synchronization Hardware
Sometimes the problems of the critical section are also resolved by hardware. Some operating systems offer a lock functionality: a process acquires a lock when entering the critical section and releases the lock after leaving it.
So when another process tries to enter the critical section, it will not be able to, as the section is locked. It can enter only once the lock is free, by acquiring the lock itself.
Mutex Locks
Synchronization hardware is not a simple method for everyone to implement, so a strict software method known as Mutex Locks was also introduced.
In this approach, in the entry section of code, a LOCK is obtained over the critical resources used inside the
critical section. In the exit section that lock is released.
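The entry-section/exit-section pattern maps directly onto a mutex lock API. A minimal Python sketch (the account-balance scenario and thread counts are made-up examples):

```python
import threading

# Mutex lock pattern: acquire the lock in the entry section, release it in
# the exit section, so only one thread is in the critical section at a time.
lock = threading.Lock()
balance = 0

def deposit(amount, times):
    global balance
    for _ in range(times):
        lock.acquire()        # entry section: obtain the lock
        balance += amount     # critical section: update shared resource
        lock.release()        # exit section: release the lock

threads = [threading.Thread(target=deposit, args=(1, 10000)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)   # → 40000
```

In idiomatic Python the acquire/release pair is usually written as `with lock:`, which guarantees the release even if the critical section raises an exception.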
Semaphore Solution
Semaphore is simply a variable that is non-negative and shared between threads. It is another algorithm or
solution to the critical section problem. It is a signaling mechanism and a thread that is waiting on a
semaphore, which can be signaled by another thread.
It uses two atomic operations, 1)wait, and 2) signal for the process synchronization.
Example
WAIT ( S ):
    while ( S <= 0 )
        ;            // busy wait
    S = S - 1;

SIGNAL ( S ):
    S = S + 1;
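The wait and signal operations above correspond to acquire and release on a counting semaphore. A Python sketch, assuming a resource that at most two threads may hold at once (the thread count and hold time are made-up examples):

```python
import threading
import time

# A counting semaphore initialized to 2: at most 2 threads may hold the
# "resource" at the same time.
sem = threading.Semaphore(2)
active = 0
peak = 0
guard = threading.Lock()           # only to record the peak safely

def use_resource():
    global active, peak
    sem.acquire()                  # WAIT(S): S = S - 1, block if S would go below 0
    with guard:
        active += 1
        peak = max(peak, active)
    time.sleep(0.01)               # simulate holding the resource briefly
    with guard:
        active -= 1
    sem.release()                  # SIGNAL(S): S = S + 1, wake a waiter

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads: t.start()
for t in threads: t.join()
print(peak <= 2)   # → True: never more than 2 holders at once
```

With an initial value of 1, the same semaphore behaves as a binary semaphore (a mutex-like lock).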
Summary:
• Process synchronization is the task of coordinating the execution of processes in a way that no two
processes can have access to the same shared data and resources.
• The four elements of the critical section are 1) Entry section 2) Critical section 3) Exit section 4) Remainder section
• A critical section is a segment of code which can be accessed by only a single process at a specific point of time.
• Three rules which must be enforced by a critical section solution are: 1) Mutual Exclusion 2) Progress 3) Bounded Waiting
• Mutual Exclusion is a special type of binary semaphore which is used for controlling access to the
shared resource.
• Progress applies when no one is in the critical section and someone wants in: the decision on who enters next cannot be postponed indefinitely.
• Under bounded waiting, after a process makes a request to enter its critical section, there is a limit on how many other processes can get into their critical sections before it.
• Peterson's solution is a widely used solution to the critical section problem.
• Problems of the Critical Section are also resolved by synchronization of hardware
• Synchronization hardware is not a simple method to implement for everyone, so the strict software
method known as Mutex Locks was also introduced.
• Semaphore is another algorithm or solution to the critical section problem
Why do we need Scheduling?
In multiprogramming, if the long-term scheduler picks mostly I/O-bound processes, then most of the time
the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, then there may always be a
possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule the jobs
to get optimal utilization of the CPU and to avoid the possibility of deadlock.
CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait. The period during which the process
is executing on the CPU is called a CPU burst, and the period during which it is waiting for an I/O request
to be handled is called an I/O burst.
Processes alternate between these two states. Process execution begins with a CPU burst. That is followed
by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the
final CPU burst ends with a system request to terminate execution as shown in the following figure:
It is the responsibility of the CPU scheduler to allot a process to the CPU whenever the CPU is idle. The
CPU scheduler selects a process from the ready queue and allocates the CPU to it. The scheduling which
takes place when a process switches from the running state to the ready state or from the waiting state
to the ready state is called Preemptive Scheduling.
On the other hand, the scheduling which takes place when a process terminates or switches from the
running state to the waiting state is called Non-Preemptive Scheduling.
The basic difference between preemptive and non-preemptive scheduling lies in their names: under
preemptive scheduling a running process can be preempted and another process scheduled in its place,
whereas under non-preemptive scheduling a running process cannot be interrupted.
Let us discuss the differences between Preemptive and Non-Preemptive Scheduling in brief with the help
of the comparison chart shown below.
BASIS FOR COMPARISON | PREEMPTIVE SCHEDULING | NON-PREEMPTIVE SCHEDULING
Basic | The CPU is allocated to a process for a limited time. | The CPU is allocated to a process until it terminates or switches to the waiting state.
Interrupt | An executing process can be interrupted in the middle of execution. | An executing process runs until it terminates or switches to waiting.
Starvation | If a high-priority process frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long CPU burst is running on the CPU, another process with less CPU burst time may starve.
Overhead | Has the overhead of switching processes between the ready and running states. | Has no such switching overhead.
Flexibility | Flexible. | Rigid.
Cost | Has costs associated. | Has no costs associated.
1. The basic difference between preemptive and non-preemptive scheduling is that in preemptive
scheduling the CPU is allocated to a process for a limited time, while in non-preemptive
scheduling the CPU is allocated to a process until it terminates or switches to the waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle of execution whereas,
the executing process in non-preemptive scheduling is not interrupted in the middle of execution.
3. Preemptive scheduling has the overhead of switching the process from the ready state to the running
state and vice versa, and of maintaining the ready queue. On the other hand, non-preemptive scheduling
has no overhead of switching the process from the running state to the ready state.
4. In preemptive scheduling, if a process with high priority frequently arrives in the ready queue, then
a process with low priority may have to wait for a long time and may starve. On the other hand,
in non-preemptive scheduling, if the CPU is allocated to a process with a larger burst time, then
processes with small burst times may have to starve.
5. Preemptive scheduling is quite flexible because critical processes are allowed to access the CPU as
they arrive in the ready queue, no matter what process is executing currently. Non-preemptive
scheduling is rigid: even if a critical process enters the ready queue, the process running on the CPU
is not disturbed.
6. Preemptive scheduling has costs associated with it, as it has to maintain the integrity of shared data,
which is not the case with non-preemptive scheduling.
CPU Scheduling Criteria
Different CPU scheduling algorithms have different properties and the choice of a particular algorithm
depends on the various factors. Many criteria have been suggested for comparing CPU scheduling
algorithms.
The criteria include the following:
1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible.
Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies from 40 to 90
percent depending on the load upon the system.
2. Throughput –
A measure of the work done by CPU is the number of processes being executed and completed per unit
time. This is called throughput. The throughput may vary depending upon the length or duration of
processes.
3. Turnaround time –
For a particular process, an important criterion is how long it takes to execute that process. The time
elapsed from the time of submission of a process to the time of completion is known as the turnaround
time. Turn-around time is the sum of times spent waiting to get into memory, waiting in ready queue,
executing in CPU, and waiting for I/O.
4. Waiting time –
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the ready
queue.
5. Response time –
In an interactive system, turnaround time is not the best criterion. A process may produce some output
fairly early and continue computing new results while previous results are being output to the user.
Thus another criterion is the time taken from the submission of a request until the first
response is produced. This measure is called response time.
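As a worked illustration of these criteria, the sketch below computes turnaround and waiting times for a small hypothetical process set (the arrival, burst, and completion figures are invented, consistent with first-come, first-served execution):

```python
# name -> (arrival time, burst time, completion time), all hypothetical
procs = {"P1": (0, 5, 5), "P2": (1, 3, 8), "P3": (2, 4, 12)}

turnaround = {}  # completion - arrival: total time in the system
waiting = {}     # turnaround - burst: time spent in the ready queue
for name, (arrival, burst, completion) in procs.items():
    turnaround[name] = completion - arrival
    waiting[name] = turnaround[name] - burst

print(turnaround)  # {'P1': 5, 'P2': 7, 'P3': 10}
print(waiting)     # {'P1': 0, 'P2': 4, 'P3': 6}
```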
Operating System Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. The following popular process scheduling algorithms are discussed in this
chapter −
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so
that once a process enters the running state, it cannot be preempted until it completes its allotted time,
whereas the preemptive scheduling is based on priority where a scheduler may preempt a low priority
running process anytime when a high priority process enters into a ready state.
Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13
Average Wait Time: (0 + 4 + 6 + 13) / 4 = 5.75
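These waiting times follow from first-come, first-served order over the process set used in this example (arrival times 0 to 3, burst times 5, 3, 8, 6). A short simulation, added here as a sketch, reproduces them:

```python
# (name, arrival time, burst time), listed in arrival order
procs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]

time = 0
waiting = {}
for name, arrival, burst in procs:   # FCFS serves in arrival order
    time = max(time, arrival)        # CPU may sit idle until arrival
    waiting[name] = time - arrival   # time spent in the ready queue
    time += burst                    # run to completion (non-preemptive)

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting, avg_wait)  # {'P0': 0, 'P1': 4, 'P2': 6, 'P3': 13} 5.75
```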
Process | Arrival Time | Execution Time | Service Time
P0 | 0 | 5 | 0
P1 | 1 | 3 | 5
P2 | 2 | 8 | 14
P3 | 3 | 6 | 8
Waiting time of each process is as follows −
Process | Waiting Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 14 - 2 = 12
P3 | 8 - 3 = 5
Average Wait Time: (0 + 4 + 12 + 5) / 4 = 21 / 4 = 5.25
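The service and waiting times above come from non-preemptive shortest-job-next selection: whenever the CPU becomes free, the arrived process with the smallest burst time runs next. A sketch of that rule over the same process set:

```python
# name -> (arrival time, burst time), from the table above
procs = {"P0": (0, 5), "P1": (1, 3), "P2": (2, 8), "P3": (3, 6)}

time, waiting, pending = 0, {}, dict(procs)
while pending:
    ready = {n: ab for n, ab in pending.items() if ab[0] <= time}
    if not ready:                                 # CPU idle: jump ahead
        time = min(a for a, _ in pending.values())
        continue
    name = min(ready, key=lambda n: ready[n][1])  # shortest burst first
    arrival, burst = pending.pop(name)
    waiting[name] = time - arrival                # time in the ready queue
    time += burst                                 # run to completion

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting, avg_wait)  # {'P0': 0, 'P1': 4, 'P3': 5, 'P2': 12} 5.25
```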
P0 | (0 - 0) + (12 - 3) = 9
P1 | (3 - 1) = 2