
Unit - 1 Introduction To Operating System

The document discusses the introduction to operating systems. It defines an operating system as software that acts as an interface between computer hardware and the user. It provides key features like memory management, multitasking, file systems, and security. Operating systems allow users to run applications without knowing hardware details and make computers easy to use. However, issues can arise if the OS has problems and operating systems software can be expensive. The document then discusses functions of operating systems like process management, memory management, and file management. It describes how the operating system acts as a resource manager, allocating processors, memory, and I/O devices among programs.


UNIT - 1

Introduction to Operating System


What is an Operating System?
An Operating System (OS) is software that acts as an interface between computer hardware components
and the user. Every computer system must have at least one operating system to run other programs.
Applications like browsers, MS Office, Notepad, games, etc., need some environment in which to run and perform
their tasks.
The OS helps you communicate with the computer without knowing how to speak the computer's
language. It is not possible for the user to use any computer or mobile device without an operating
system.

Features of Operating System (OS)


Here is a list of important features of an OS:
• Protected and supervisor mode
• Disk access and file systems
• Device drivers
• Networking
• Security
• Program execution
• Memory management and virtual memory
• Multitasking
• Handling I/O operations
• Manipulation of the file system
• Error detection and handling
• Resource allocation
• Information and resource protection

Advantage of Operating System


• Allows you to hide the details of hardware by creating an abstraction
• Easy to use with a GUI
• Offers an environment in which a user may execute programs/applications
• The operating system makes sure that the computer system is convenient to use
• The operating system acts as an intermediary between applications and the hardware components
• It presents the computer system's resources in an easy-to-use format
• Acts as an intermediary between all the hardware and software of the system

Disadvantages of Operating System


• If any issue occurs in the OS, you may lose all the contents that have been stored in your system
• Operating system software can be quite expensive for small organizations, which adds a burden on
them (example: Windows)
• It is never entirely secure, as a threat can occur at any time

Functions of Operating System


Some typical operating system functions may include managing memory, files, processes, I/O system &
devices, security, etc.

Below are the main functions of Operating System:

The operating system software performs each of the following functions:

1. Process management: Process management helps the OS to create and delete processes. It also provides
mechanisms for synchronization and communication among processes.
2. Memory management: The memory management module performs the task of allocation and de-
allocation of memory space to programs in need of these resources.
3. File management: It manages all file-related activities such as organization, storage, retrieval,
naming, sharing, and protection of files.
4. Device management: Device management keeps track of all devices. The module responsible for
this task is known as the I/O controller. It also performs the task of allocation and de-allocation of the
devices.
5. I/O system management: One of the main objectives of any OS is to hide the peculiarities of
hardware devices from the user.
6. Secondary-storage management: Systems have several levels of storage, including primary
storage, secondary storage, and cache storage. Instructions and data must be stored in primary storage
or cache so that a running program can reference them.
7. Security: The security module protects the data and information of a computer system against malware
threats and unauthorized access.
8. Command interpretation: This module interprets commands given by the user and uses system
resources to process those commands.
9. Networking: A distributed system is a group of processors that do not share memory, hardware
devices, or a clock. The processors communicate with one another through the network.
10. Job accounting: Keeping track of the time and resources used by various jobs and users.
11. Communication management: Coordination and assignment of compilers, interpreters, and other
software resources among the various users of the computer system.
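
The process-management function above can be observed from user space. The following is a minimal Python sketch (the child program and printed messages are invented for illustration): the OS creates a process on our behalf, and communicate() gives us synchronization with it.

```python
import subprocess
import sys

# Ask the OS to create a new process running a small Python program.
proc = subprocess.Popen([sys.executable, "-c", "print('child process running')"],
                        stdout=subprocess.PIPE, text=True)

# Synchronization: wait for the child to finish and collect its output.
out, _ = proc.communicate()
print("child said:", out.strip())
print("exit code:", proc.returncode)   # 0 means the child terminated normally
```

Under the hood, the subprocess module relies on the OS's own process-creation system calls, so this is the process-management function being exercised, not reimplemented.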

Operating System as Resource Manager


Let us understand how the operating system works as a Resource Manager.
• Nowadays, all modern computers consist of processors, memories, timers, network interfaces,
printers, and many other devices.
• In the bottom-up view, the operating system provides an orderly and controlled allocation of the
processors, memories, and I/O devices among the various programs.
• The operating system allows multiple programs to be in memory and run at the same time.
• Resource management includes multiplexing (sharing) resources in two different ways: in time
and in space.
• When a resource is time multiplexed, different programs take turns using it. First one uses the
resource, then the next one that is ready in the queue, and so on. For example: sharing the printer
one job after another.
• In space multiplexing, instead of the customers taking turns, each one gets part of the resource.
For example − main memory is divided among several running programs, so each one can be resident
at the same time.
The diagram given below shows the functioning of OS as a resource manager −

Modern computers consist of processors, memories, timers, disks, mice, network interfaces, printers, and
a wide variety of other devices. In the alternative view, the job of the operating system is to provide for an
orderly and controlled allocation of the processors, memories, and input/output devices among the
various programs competing for them.
When a computer (or network) has multiple users, the need for managing and protecting the memory,
input/output devices, and other resources is even greater, since the users might otherwise interfere with
one another. In addition, users often need to share not only hardware, but information (files, databases,
etc.) as well. In short, this view of the operating system holds that its primary task is to keep track of which
programs are using which resources, to grant resource requests, to account for usage, and to mediate
conflicting requests from different programs and users.
Resource management includes multiplexing (sharing) resources in two different ways:
1. Time Multiplexing
2. Space Multiplexing
1. Time Multiplexing : When the resource is time multiplexed, different programs or users take turns using
it. First one of them gets to use the resource, then another, and so on.
For example: with only one CPU and multiple programs that want to run on it, the operating system first
allocates the CPU to one program; after it has run long enough, another one gets to use the CPU, then
another, and then eventually the first one again.
Determining how the resource is time multiplexed – who goes next and for how long – is the task of the
operating system.
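
The turn-taking described above can be sketched as a tiny round-robin simulation in Python (job names, burst times, and the quantum are invented for illustration):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which jobs receive the CPU under time multiplexing."""
    queue = deque(burst_times.items())   # each entry: (job, remaining time)
    schedule = []
    while queue:
        job, remaining = queue.popleft()
        schedule.append(job)             # the job gets the CPU for one quantum
        remaining -= quantum
        if remaining > 0:                # not finished: rejoin the back of the queue
            queue.append((job, remaining))
    return schedule

print(round_robin({"A": 3, "B": 2, "C": 1}, quantum=2))
```

Here job A needs more than one quantum, so it takes its turn, waits for B and C, and then runs again, exactly the "who goes next and for how long" decision the text describes.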
2. Space Multiplexing : In space multiplexing, instead of the customers taking turns, each one gets part of
the resource.
For example: Main memory is normally divided up among several running programs, so each one can be
resident at the same time (for example, in order to take turns using the CPU). Assuming there is enough
memory to hold multiple programs, it is more efficient to hold several programs in memory at once rather
than give one of them all of it, especially if it only needs a small fraction of the total. Of course, this raises
issues of fairness, protection, and so on, and it is up to the operating system to solve them.
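
A rough Python sketch of space multiplexing, with made-up program names and partition sizes: each resident program receives its own region of a fixed-size memory, all at the same time.

```python
def partition(memory_size, requests):
    """Assign each program a contiguous partition; returns {name: (start, end)}."""
    layout, next_free = {}, 0
    for name, size in requests:
        if next_free + size > memory_size:
            raise MemoryError(f"no room for {name}")   # fairness/protection issue
        layout[name] = (next_free, next_free + size)
        next_free += size
    return layout

# Three programs share one 100-unit memory simultaneously.
print(partition(100, [("editor", 30), ("compiler", 50), ("shell", 10)]))
```

Real memory managers are far more elaborate (paging, protection bits, and so on), but the core idea is the same: the resource is divided in space rather than in time.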
Operating System Structure
An operating system is a construct that allows the user application programs to interact with the system
hardware. Since the operating system is such a complex structure, it should be created with utmost care so
it can be used and modified easily. An easy way to do this is to create the operating system in parts. Each
of these parts should be well defined with clear inputs, outputs and functions.
1. Simple Structure : There are many operating systems that have a rather simple structure. These
started as small systems and rapidly expanded well beyond their original scope. A common example
of this is MS-DOS. It was designed simply for a niche group of people. There was no indication
that it would become so popular.
An image to illustrate the structure of MS-DOS is as follows −

It is better that operating systems have a modular structure, unlike MS-DOS. That would lead to greater
control over the computer system and its various applications. The modular structure would also allow the
programmers to hide information as required and implement internal routines as they see fit without
changing the outer specifications.
2. Layered Structure : One way to achieve modularity in the operating system is the layered approach.
In this, the bottom layer is the hardware and the topmost layer is the user interface.
An image demonstrating the layered approach is as follows −

As seen from the image, each upper layer is built on the bottom layer. All the layers hide some structures,
operations etc from their upper layers.
One problem with the layered structure is that each layer needs to be carefully defined. This is necessary
because the upper layers can only use the functionalities of the layers below them.
What is Shell and Kernel
Both the shell and the kernel are parts of the operating system, and both are used for performing
operations on the system. When a user gives a command for performing an operation, the request goes
to the shell. The shell is also called the interpreter: it translates the user's command into a form the
machine can process and then transfers the request to the kernel. So the shell is simply the interpreter
of commands, converting the request of the user into machine language.
The kernel is called the heart of the operating system, and every operation is performed by using the
kernel. When the kernel receives a request from the shell, it processes the request and displays the
results on the screen. The various types of operations performed by the kernel are as follows:
1) It controls the state of each process, i.e. it checks whether a process is running or is waiting for a
request from the user.
2) It provides memory for the processes running on the system, i.e. the kernel performs allocation and
de-allocation. When a process requests a service, the kernel provides memory to it, and it later releases
the memory given to that process.
3) The kernel also maintains a timetable for all running processes: it prepares the schedule, allotting
CPU time to the various processes, and puts waiting and suspended jobs into separate memory areas.
4) When the kernel determines that main memory is not large enough to store the programs, it uses
secondary storage to hold programs temporarily, i.e. part of the disk can be used as temporary (swap)
memory.
5) The kernel also maintains all the files stored in the computer system and protects them so that no one
can read or write a file without permission. It provides facilities such as passwords, and all files are
stored in an organized manner.
As we have seen, there are many functions performed by the kernel, but these functions are never shown
to the user; the functions of the kernel are transparent to the user.
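
A toy illustration of the shell's role as command interpreter, written in Python (run_command is a hypothetical helper, not a real shell): it translates a command line into an argument list and asks the kernel, via the subprocess module, to execute it.

```python
import shlex
import subprocess
import sys

def run_command(line):
    """Interpret one command line: split it into argv and ask the OS to run it."""
    argv = shlex.split(line)                 # the shell's translation step
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.stdout

# Ask the kernel to run a child process, the way a real shell would.
interp = shlex.quote(sys.executable)         # quote in case the path has spaces
print(run_command(f'{interp} -c "print(1 + 1)"'))
```

The division of labour matches the text: the function parses and interprets the command (the shell's job), while the actual process creation and execution happen inside the kernel.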

Difference Between Kernel and Shell


The main difference between kernel and shell is that the kernel is the core of the operating system that
controls all the tasks of the system, while the shell is the interface that allows the users to communicate
with the kernel.
Unix is an operating system. It is the interface between the user and the hardware. It performs a variety of
tasks including file handling, memory management, controlling hardware devices, process management, and
many more. There are various versions of Unix: Solaris Unix, HP Unix, AIX, etc. Linux is a flavor of Unix, and it
is free and open-source. Unix is popular at the enterprise level because it supports multi-user environments.
Kernel and Shell are two components in the Unix architecture. The kernel is the heart of the operating system,
while the shell is a utility that processes the user's requests.


Views of Operating System

An operating system is a framework that enables user application programs to interact with system
hardware. The operating system does not perform useful work on its own, but it provides an environment
in which various applications and programs can do useful work. The operating system may be observed
from the point of view of the user or of the system; these are known as the user view and the system
view. In this article, you will learn the views of the operating system.
Viewpoints of Operating System
There are mainly two types of views of the operating system. These are as
follows:
1. User View
2. System View
User View
The user view depends on the system interface that is used by the users. Some systems are designed for a
single user to monopolize the resources in order to maximize that user's work. In these cases, the OS is
designed primarily for ease of use, with some emphasis on performance and none on resource utilization.
The user viewpoint focuses on how the user interacts with the operating system through the usage of
various application programs. In contrast, the system viewpoint focuses on how the hardware interacts
with the operating system to complete various tasks.
1. Single User View Point
Most computer users use a monitor, keyboard, mouse, printer, and other accessories to operate their
computer system. In some cases, the system is designed to maximize the output of a single user. As a
result, more attention is paid to ease of use, and resource allocation is less important. These systems are
designed for a single-user experience and meet the needs of a single user, where performance is not
given as much focus as in multi-user systems.
2. Multiple User View Point
Another example of the user view, in which both user experience and performance matter, is when there
is one mainframe computer and many users at their terminals interacting with it. In such circumstances,
CPU time and memory must be allocated effectively to give every user a good experience. The
client-server architecture is another good example, where many clients may interact through a remote
server, and the same constraints of effective use of server resources arise.
3. Handheld User View Point
Moreover, the touchscreen era has given us the best handheld technology ever. Smartphones interact via
wireless devices to perform numerous operations, but they are not as efficient as a full computer
interface, which limits their usefulness. However, their operating system is a great example of creating a
device focused on the user's point of view.
4. Embedded System User View Point
Some systems, like embedded systems, mostly lack a user point of view. The remote control used to
turn the TV on or off is part of an embedded system in which the electronic device communicates with
another program; the user viewpoint is limited to the few ways the user can engage with the application.
System View
The OS may also be viewed as just a resource allocator. A computer system comprises various sources,
such as hardware and software, which must be managed effectively. The operating system manages the
resources, decides between competing demands, controls the program execution, etc. According to this
point of view, the operating system's purpose is to maximize performance. The operating system is
responsible for managing hardware resources and allocating them to programs and users to ensure
maximum performance.
From the user point of view, we've discussed the numerous applications that require varying degrees of
user participation. However, we are more concerned with how the hardware interacts with the operating
system than with the user from a system viewpoint. The hardware and the operating system interact for a
variety of reasons, including:
1. Resource Allocation
The hardware contains several resources like registers, caches, RAM, ROM, CPUs, I/O interaction, etc.
These are all resources that the operating system needs when an application program demands them. Only
the operating system can allocate resources, and it uses several tactics and strategies to maximize its
processing and memory space. The operating system uses a variety of strategies to get the most out of the
hardware resources, including paging, virtual memory, caching, and so on. These are very important in the
case of various user viewpoints because inefficient resource allocation may affect the user viewpoint,
causing the user system to lag or hang, reducing the user experience.
2. Control Program
The control program controls how input and output devices (hardware) interact with the operating system.
The user may request an action that can only be done with I/O devices; in this case, the operating system
must be able to properly communicate with, control, detect, and handle such devices.

Types of Operating Systems


An operating system is a well-organized collection of programs that manages the computer hardware. It is
a type of system software that is responsible for the smooth functioning of the computer system.

Batch Operating System


In the 1970s, batch processing was very popular. In this technique, similar types of jobs were batched
together and executed one batch at a time. People used to have a single computer, which was called a
mainframe.
In a batch operating system, access is given to more than one person; the users submit their respective
jobs to the system for execution.
The system puts all of the jobs in a queue on a first-come, first-served basis and then executes the jobs
one by one. The users collect their respective outputs when all the jobs have been executed.

The purpose of this operating system was mainly to transfer control from one job to another as soon as the
job was completed. It contained a small set of programs called the resident monitor that always resided in
one part of the main memory. The remaining part is used for servicing jobs.

Advantages of Batch OS
o The use of a resident monitor improves computer efficiency as it eliminates CPU idle time between two
jobs.
Disadvantages of Batch OS
1. Starvation
Batch processing suffers from starvation.
For Example:
There are five jobs J1, J2, J3, J4, and J5, present in the batch. If the execution time of J1 is very high, then
the other four jobs will never be executed, or they will have to wait for a very long time. Hence the other
processes get starved.
2. Not Interactive
Batch Processing is not suitable for jobs that are dependent on the user's input. If a job requires the input
of two numbers from the console, then it will never get it in the batch processing scenario since the user is
not present at the time of execution.
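
The starvation example with jobs J1..J5 can be checked with a small first-come, first-served calculation in Python (the burst times are invented; J1 is deliberately long):

```python
def fcfs_waiting_times(bursts):
    """Return each job's waiting time under first-come, first-served batching."""
    waits, elapsed = {}, 0
    for job, burst in bursts:
        waits[job] = elapsed    # a job waits for everything queued before it
        elapsed += burst
    return waits

# J1's execution time is very high, so J2..J5 wait a very long time.
jobs = [("J1", 100), ("J2", 2), ("J3", 2), ("J4", 2), ("J5", 2)]
print(fcfs_waiting_times(jobs))
```

The output shows J2 waiting 100 time units even though its own burst is only 2, which is the starvation effect described above.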
Multiprogramming Operating System
Multiprogramming is an extension to batch processing where the CPU is always kept busy. Each process
needs two types of system time: CPU time and IO time.
In a multiprogramming environment, when a process does its I/O, The CPU can start the execution of other
processes. Therefore, multiprogramming improves the efficiency of the system.

Advantages of Multiprogramming OS
o Throughput increases, as the CPU always has one program to execute.
o Response time can also be reduced.
Disadvantages of Multiprogramming OS
o Multiprogramming systems provide an environment in which various systems resources are used
efficiently, but they do not provide any user interaction with the computer system.
Multiprocessing Operating System
In Multiprocessing, Parallel computing is achieved. There are more than one processors present in the
system which can execute more than one process at the same time. This will increase the throughput of
the system.

Advantages of Multiprocessing operating system:


o Increased reliability: Due to the multiprocessing system, processing tasks can be distributed among
several processors. This increases reliability as if one processor fails, the task can be given to
another processor for completion.
o Increased throughput: as the number of processors increases, more work can be done in less time.
Disadvantages of Multiprocessing operating System
o Multiprocessing operating system is more complex and sophisticated as it takes care of multiple
CPUs simultaneously.
Multitasking Operating System

The multitasking operating system is a logical extension of a multiprogramming system that
enables multiple programs to run simultaneously. It allows a user to perform more than one computer
task at the same time.
Advantages of Multitasking operating system
o This operating system is more suited to supporting multiple users simultaneously.
o The multitasking operating systems have well-defined memory management.
Disadvantages of Multitasking operating system
o The processor stays busy switching among multiple tasks in a multitasking
environment, so the CPU generates more heat.
Network Operating System

An Operating system, which includes software and associated protocols to communicate with other
computers via a network conveniently and cost-effectively, is called Network Operating System.

Advantages of Network Operating System


o In this type of operating system, network traffic reduces due to the division between clients and the
server.
o This type of system is less expensive to set up and maintain.
Disadvantages of Network Operating System
o In this type of operating system, the failure of any node in a system affects the whole system.
o Security and performance are important issues. So trained network administrators are required for
network administration.
Real Time Operating System
In real-time systems, each job carries a certain deadline within which the job is supposed to be
completed; otherwise a huge loss occurs, or, even if the result is produced, it is completely
useless.

Applications of real-time systems exist in the military: if you want to launch a
missile, the missile must be launched with a certain precision.

Advantages of Real-time operating system:


o It is easy to lay out, develop, and execute real-time applications under a real-time operating system.
o A real-time operating system achieves maximum utilization of devices and systems.
Disadvantages of Real-time operating system:
o Real-time operating systems are very costly to develop.
o Real-time operating systems are very complex and can consume critical CPU cycles.
Time-Sharing Operating System
In the Time Sharing operating system, computer resources are allocated in a time-dependent fashion to
several programs simultaneously. Thus it helps to provide a large number of user's direct access to the
main computer. It is a logical extension of multiprogramming. In time-sharing, the CPU is switched among
multiple programs given by different users on a scheduled basis.
A time-sharing operating system allows many users to be served simultaneously, so sophisticated CPU
scheduling schemes and Input/output management are required.
Time-sharing operating systems are very difficult and expensive to build.
Advantages of Time Sharing Operating System
o The time-sharing operating system provides effective utilization and sharing of resources.
o This system reduces CPU idle and response time.
Disadvantages of Time Sharing Operating System
o They require very high data transmission rates in comparison to other methods.
o Security and integrity of user programs loaded in memory and data need to be maintained as many
users access the system at the same time.
Distributed Operating System
The Distributed Operating system is not installed on a single machine, it is divided into parts, and these
parts are loaded on different machines. A part of the distributed Operating system is installed on each
machine to make their communication possible. Distributed Operating systems are much more complex,
large, and sophisticated than Network operating systems because they also have to take care of varying
networking protocols.

Advantages of Distributed Operating System


o The distributed operating system provides sharing of resources.
o This type of system is fault-tolerant.
Disadvantages of Distributed Operating System
o Protocol overhead can dominate computation cost.
Program Vs Process | Difference between Process and Program

What is Program in OS?


A Program is an executable file which contains a certain set of instructions written to complete the specific
job or operation on your computer. For example, Google browser chrome.exe is an executable file which
stores a set of instructions written in it which allow you to open the browser and explore web pages.
Programs are not stored in the primary memory of your computer. Instead, they are stored on a disk or in
secondary memory on your PC or laptop. To run, they are loaded into primary memory and executed by
the kernel.

What is Process?
A Process is an execution of a specific program. It is an active entity that actions the purpose of the
application. Multiple processes may be related to the same program. For example, if you double-click on
Google Chrome browser, you start a process that runs Google Chrome and when you open another
instance of Chrome, you essentially create a second process.
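
A quick Python sketch of the program-versus-process distinction: launching the same program (here, the Python interpreter itself) twice yields two distinct processes, each with its own PID.

```python
import subprocess
import sys

# The same program, started twice: each start creates a separate process.
code = "import os; print(os.getpid())"
p1 = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
p2 = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)

pid1, pid2 = int(p1.stdout), int(p2.stdout)
print("two processes of the same program:", pid1, pid2)
```

One executable file on disk (the passive program) gave rise to two active entities with different identifiers, mirroring the Chrome example above.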


KEY DIFFERENCE
• Process is an executing part of a program whereas a program is a group of ordered operations to
achieve a programming goal.
• The process has a shorter and minimal lifespan whereas program has a longer lifespan.
• Process contains many resources like a memory address, disk, printer while Program needs memory
space on the disk to store all instructions.
• When we distinguish between process and program, Process is a dynamic or active entity whereas
Program is a passive or static entity.
• To differentiate program and process, Process has considerable overhead whereas Program has no
significant overhead cost.

Features of Program
• A program is a passive entity. It stores a group of instructions to be executed.
• Various processes may be related to the same program.
• A user may run multiple programs, where the operating system handles internal activities such
as memory management.
• The program can't perform any action without being run. It needs to be executed to realize the steps
mentioned in it.
• The operating system allocates main memory to store programs instructions.

Features of Process
• A process has a very limited lifespan.
• A process may also generate one or more child processes, and, like a living being, it eventually dies (terminates).
• Like humans, even a process has information such as who its parent is, when it was created, the address
space of its allocated memory, and security properties including ownership credentials and privileges.
• Processes are allocated system resources like file descriptors and network ports.

What is the Difference between Program and Process?


Here is the main difference between Process and Program:

Difference between Program and Process


Parameter | Process | Program
Definition | An executing part of a program is called a process. | A program is a group of ordered operations to achieve a programming goal.
Nature | The process is an instance of the program being executed. | The nature of the program is passive, so it is unlikely to do anything until it gets executed.
Resource management | The resource requirement is quite high in the case of a process. | The program only needs memory for storage.
Overheads | Processes have considerable overhead. | No significant overhead cost.
Lifespan | The process has a shorter and very limited lifespan, as it gets terminated after the completion of its task. | A program has a longer lifespan, as it is stored on disk until it is manually deleted.
Creation | New processes require duplication of the parent process. | No such duplication is needed.
Required resources | A process holds resources like CPU, memory address space, disk, I/O, etc. | The program is stored on disk in some file and does not require any other resources.
Entity type | A process is a dynamic or active entity. | A program is a passive or static entity.
Contains | A process contains many resources like a memory address, disk, printer, etc. | A program needs memory space on disk to store all its instructions.

Summary

• A Program is an executable file which contains a certain set of instructions written to complete the
specific job or operation on your computer.
• A Process is an execution of a specific program. It is an active entity that actions the purpose of the
application.
• A program is a passive entity. It stores a group of instructions to be executed.
• Processes are allocated system resources like file descriptors and network ports.

What is Process Control Block (PCB)?


Process Control Block is a data structure that contains information about a process. The process
control block is also known as a task control block, an entry of the process table, etc.
It is very important for process management, as the data structuring for processes is done in terms of the
PCB. It also defines the current state of the operating system.
While creating a process, the operating system performs several operations. To identify processes, it
assigns a process identification number (PID) to each process. As the operating system supports multi-
programming, it needs to keep track of all the processes. For this task, the process control block (PCB) is
used to track the process's execution status. Each block contains information about the process
state, program counter, stack pointer, status of opened files, scheduling algorithms, etc. All this
information is required and must be saved when the process is switched from one state to another. When
the process makes a transition from one state to another, the operating system must update the information
in the process's PCB.
A process control block (PCB) contains information about the process, i.e. registers, quantum, priority, etc. The process table is an array of PCBs; logically, it contains a PCB for each of the current processes in the system.
Structure of the Process Control Block
The process control block stores many data items that are needed for efficient process management.
Some of these data items are explained with the help of the given diagram −

The following are the data items −


• Process State : This specifies the process state i.e. new, ready, running, waiting or terminated.
• Process Number: This shows the number of the particular process.
• Program Counter : This contains the address of the next instruction that needs to be executed in the
process.
• Registers : This specifies the registers that are used by the process. They may include accumulators,
index registers, stack pointers, general purpose registers etc.
• List of Open Files : These are the different files that are associated with the process.
• CPU Scheduling Information : The process priority, pointers to scheduling queues etc. is the CPU
scheduling information that is contained in the PCB. This may also include any other scheduling
parameters.
• Memory Management Information : The memory management information includes the page tables
or the segment tables depending on the memory system used. It also contains the value of the base
registers, limit registers etc.
• I/O Status Information : This information includes the list of I/O devices used by the process, the list of
files etc.
• Accounting information : The time limits, account numbers, amount of CPU used, process numbers
etc. are all a part of the PCB accounting information.
• Location of the Process Control Block : The process control block is kept in a memory area that is
protected from the normal user access. This is done because it contains important process information.
Some of the operating systems place the PCB at the beginning of the kernel stack for the process as it is
a safe location.
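As an illustration, the fields listed above can be collected into a simple data structure. This is a minimal sketch in Python; the field names are ours, and a real kernel's PCB is a far richer C structure:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Minimal sketch of a Process Control Block; real kernels store far more.
    pid: int                       # process identification number
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    priority: int = 0              # CPU-scheduling information
    open_files: list = field(default_factory=list)   # I/O status information
    memory_base: int = 0           # base register (memory management)
    memory_limit: int = 0          # limit register
    cpu_time_used: int = 0         # accounting information

# On every state transition the OS updates the process's PCB:
pcb = PCB(pid=42)
pcb.state = "ready"
print(pcb.pid, pcb.state)  # 42 ready
```

The dataclass only makes the bookkeeping visible; an operating system would keep these blocks in protected kernel memory, as the section above explains.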
Process state diagram
Process States
State Diagram

The process, from its creation to completion, passes through various states. The minimum number of
states is five.
The names of the states are not standardized although the process may be in one of the following states
during execution.
1. New : A program which is going to be picked up by the OS into the main memory is called a new process.
2. Ready : Whenever a process is created, it directly enters the ready state, in which it waits for the CPU to be assigned. The OS picks new processes from secondary memory and puts them in main memory.
The processes which are ready for the execution and reside in the main memory are called ready state
processes. There can be many processes present in the ready state.
3. Running : One of the processes from the ready state will be chosen by the OS depending upon the
scheduling algorithm. Hence, if we have only one CPU in our system, the number of running processes for a
particular time will always be one. If we have n processors in the system then we can have n processes
running simultaneously.
4. Block or wait : From the Running state, a process can make the transition to the block or wait state
depending upon the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned, or for input from the user, the OS moves this process to the block or wait state and assigns the CPU to other processes.
5. Completion or termination : When a process finishes its execution, it enters the termination state. The entire context of the process (its Process Control Block) is deleted and the process is terminated by the operating system.
6. Suspend ready : A process in the ready state which is moved from main memory to secondary memory due to a lack of resources (mainly primary memory) is said to be in the suspend ready state.
If main memory is full and a higher-priority process arrives for execution, the OS has to make room for it in main memory by moving a lower-priority process out to secondary memory. Suspend ready processes remain in secondary memory until main memory becomes available.
7. Suspend wait : Instead of removing a process from the ready queue, it is often better to remove a blocked process that is waiting for some resource in main memory. Since it is already waiting for a resource to become available, it might as well wait in secondary memory and make room for a higher-priority process. These processes complete their execution once main memory becomes available and their wait is finished.
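The transitions described above form a small state machine. The sketch below rejects moves the model does not allow; the state names follow this section, and the exact transition set (for example, whether a suspend-wait process returns via suspend-ready or via waiting) varies between textbooks:

```python
# Allowed transitions in the seven-state process model described above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspend_ready"},
    "running": {"waiting", "ready", "terminated"},   # ready = preempted
    "waiting": {"ready", "suspend_wait"},
    "suspend_ready": {"ready"},
    "suspend_wait": {"suspend_ready", "waiting"},    # both paths are used
    "terminated": set(),
}

def move(state, new_state):
    """Return the new state, or raise if the transition is illegal."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A process that runs, blocks on I/O, runs again, and finishes:
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = move(s, nxt)
print(s)  # terminated
```

Attempting, say, `move("new", "running")` raises an error, which mirrors the rule that a new process must pass through the ready state before being scheduled.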
Process Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each of
the process states and PCBs of all processes in the same execution state are placed in the same queue.
When the state of a process is changed, its PCB is unlinked from its current queue and moved to its new
state queue.
The Operating System maintains the following important process scheduling queues −
• Job queue − This queue keeps all the processes in the system.
• Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
• Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.

The OS can use different policies to manage each queue (FIFO, Round Robin, Priority, etc.). The OS scheduler determines how to move processes between the ready queue and the run queue, which can have only one entry per processor core on the system; in the above diagram, the run queue has been merged with the CPU.

Types of Schedulers

Schedulers are special system software which handle process scheduling in various ways. Their main task
is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types −
• Long-Term Scheduler
• Short-Term Scheduler
• Medium-Term Scheduler

1. Long Term Scheduler : It is also called the job scheduler. A long-term scheduler determines which programs are admitted to the system for processing. It selects processes from the job queue and loads them into memory for execution, where they become eligible for CPU scheduling.
The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O bound and
processor bound. It also controls the degree of multiprogramming. If the degree of multiprogramming is
stable, then the average rate of process creation must be equal to the average departure rate of
processes leaving the system.
On some systems, the long-term scheduler may be minimal or absent; time-sharing operating systems, for example, have no long-term scheduler. The long-term scheduler comes into play when a process changes state from new to ready.

2. Short Term Scheduler : It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with the chosen set of criteria. It carries out the transition of a process from the ready state to the running state: the CPU scheduler selects one process from among the processes that are ready to execute and allocates the CPU to it.
Short-term schedulers, also known as dispatchers, make the decision of which process to execute next.
Short-term schedulers are faster than long-term schedulers.

3. Medium Term Scheduler : Medium-term scheduling is a part of swapping. It removes the processes
from the memory. It reduces the degree of multiprogramming. The medium-term scheduler is in-
charge of handling the swapped out-processes.
A running process may become suspended if it makes an I/O request. A suspended process cannot make any progress towards completion. In this condition, to remove the process from memory and make
space for other processes, the suspended process is moved to the secondary storage. This process is
called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to
improve the process mix.
Comparison among Schedulers
1. Role: The long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.
2. Speed: The long-term scheduler is slower than the short-term scheduler; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between.
3. Multiprogramming: The long-term scheduler controls the degree of multiprogramming; the short-term scheduler provides less control over it; the medium-term scheduler reduces it.
4. Time-sharing systems: The long-term scheduler is almost absent or minimal; the short-term scheduler is also minimal; the medium-term scheduler is a part of time-sharing systems.
5. Selection: The long-term scheduler selects processes from the job pool and loads them into memory for execution; the short-term scheduler selects from the processes that are ready to execute; the medium-term scheduler can re-introduce a swapped-out process into memory so that its execution can be continued.

Threads in Operating System

A thread is a single sequential flow of execution of the tasks of a process, so it is also known as a thread of execution or a thread of control. A process can contain more than one thread. Each thread of the same process uses a separate program counter, a stack of activation records, and its own control block. A thread is often referred to as a lightweight process.
A process can be split into many threads. For example, in a browser, each tab can be viewed as a thread. MS Word uses many threads: formatting text in one thread, processing input in another thread, and so on.
Need of Thread:
o It takes far less time to create a new thread in an existing process than to create a new process.
o Threads can share common data, so they do not need to use Inter-Process Communication.
o Context switching is faster when working with threads.
o It takes less time to terminate a thread than a process.
Benefits of Threads
o Enhanced throughput of the system: When the process is split into many threads, and each thread
is treated as a job, the number of jobs done in the unit time increases. That is why the throughput
of the system also increases.
o Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.
o Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.
o Responsiveness: When the process is split into several threads, the process can keep responding to the user even while some of its threads are busy or blocked.
o Communication: Communication between multiple threads is simple because the threads share the same address space, whereas for communication between two processes we must adopt special inter-process communication strategies.
o Resource sharing: Resources such as code, data, and files can be shared among all threads within a process. Note: the stack and registers cannot be shared between threads; each thread has its own stack and register set.
Types of Threads
In the operating system, there are two types of threads.
1. Kernel level thread.
2. User-level thread.

• User-level thread
The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented entirely by the user. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages the process as if it were single-threaded. Examples: Java threads, POSIX threads.
Advantages of User-level threads
1. User-level threads are easier to implement than kernel-level threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
4. Context-switch time is shorter than for kernel-level threads.
5. They do not require modifications to the operating system.
6. The representation of user-level threads is very simple: the registers, PC, stack, and mini thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

• Kernel level thread


The operating system recognizes kernel-level threads. For each thread and each process, the system keeps a thread control block and a process control block. Kernel-level threads are implemented by the operating system: the kernel knows about all the threads and manages them, and it offers system calls to create and manage threads from user space. The implementation of kernel threads is more difficult than that of user threads, and context-switch time is longer for kernel threads. However, if one kernel-level thread performs a blocking operation, another thread of the same process can continue execution. Examples: Windows, Solaris.
Advantages of Kernel-level threads
1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process that has a large number of threads.
3. Kernel-level threads are good for applications that frequently block.
Disadvantages of Kernel-level threads
1. The kernel thread manages and schedules all threads.
2. The implementation of kernel threads is more difficult than that of user threads.
3. Kernel-level threads are slower than user-level threads.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
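To illustrate the components above: each worker thread below has its own stack (its local loop variable) and its own program counter, while all of them share the process's global data, so updates to the shared counter need a lock. A minimal Python sketch:

```python
import threading

counter = 0                 # data segment: shared by every thread
lock = threading.Lock()     # protects the shared counter

def worker(n):
    # n and i live on this thread's own stack; counter is shared.
    global counter
    for i in range(n):
        with lock:          # shared data still needs synchronization
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000: all four threads updated the same shared variable
```

Because the threads share one address space, no inter-process communication mechanism is needed, matching the "Communication" and "Resource sharing" benefits listed earlier.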

What is Process Synchronization?


Process Synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time.
It is especially needed in a multi-process system where multiple processes run together and more than one process tries to gain access to the same shared resource or data at the same time.
Concurrent access can make shared data inconsistent: a change made by one process is not necessarily visible when other processes access the same shared data. To avoid this kind of data inconsistency, the processes need to be synchronized with each other.

How Process Synchronization Works?


For example, suppose process A is changing the data in a memory location while another process B is trying to read the data from the same memory location. There is a high probability that the data read by the second process will be erroneous.
Sections of a Program
Here, are four essential elements of the critical section:
• Entry Section: It is part of the process which decides the entry of a particular process.
• Critical Section: This part allows one process to enter and modify the shared variable.
• Exit Section: The exit section allows the other processes that are waiting in the entry section to enter the critical section. It also ensures that a process that has finished its execution is removed through this section.
• Remainder Section: All other parts of the Code, which is not in Critical, Entry, and Exit Section, are
known as the Remainder Section.

What is Critical Section Problem?


A critical section is a segment of code which can be accessed by only a single process at a specific point of time. The section contains shared data and resources that also need to be accessed by other processes.
• The entry to the critical section is handled by the wait() function, and it is represented as P().
• The exit from a critical section is controlled by the signal() function, represented as V().
In the critical section, only a single process can be executed. Other processes, waiting to execute their
critical section, need to wait until the current process completes its execution.
Rules for Critical Section
Any solution to the critical section problem must enforce three rules:
• Mutual Exclusion: Mutual Exclusion is a special type of binary semaphore which is used for
controlling access to the shared resource. It includes a priority inheritance mechanism to avoid
extended priority inversion problems. Not more than one process can execute in its critical section
at one time.
• Progress: If no process is in the critical section and some process wants to enter, then only the processes that are not in their remainder section take part in deciding which process will enter next, and this decision must be made in a finite time.
• Bounded Waiting: After a process requests entry into its critical section, there is a bound on the number of times other processes may enter their critical sections before the request is granted; when that limit is reached, the system must allow the requesting process to enter its critical section.
Solutions To The Critical Section
In process synchronization, the critical section plays the main role, so the critical section problem must be solved. Here are some widely used methods to solve the critical section problem.
Peterson Solution
Peterson's solution is a widely used solution to the critical section problem. The algorithm was developed by the computer scientist Gary L. Peterson, which is why it is named Peterson's solution.
In this solution, when one process is executing in its critical section, the other process executes only the rest of its code, and vice versa. This method thereby ensures that only a single process runs in the critical section at a specific time.
Example

PROCESS Pi:
    FLAG[i] = true
    while (turn != i AND CS is not free) { wait; }
    // CRITICAL SECTION
    FLAG[i] = false
    turn = j   // choose another process to go to the CS
• Assume there are N processes (P1, P2, … PN) and every process at some point of time requires to
enter the Critical Section
• A FLAG[] array of size N is maintained which is by default false. So, whenever a process requires to
enter the critical section, it has to set its flag as true. For example, If Pi wants to enter it will set
FLAG[i]=TRUE.
• Another variable called TURN indicates the number of the process that is currently waiting to enter the CS.
• When a process exits the critical section, it changes TURN to the number of another process from the list of ready processes.
• Example: turn is 2 then P2 enters the Critical section and while exiting turn=3 and therefore P3
breaks out of wait loop.
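The idea can be sketched in its classic two-process form. This is a demonstration in Python, not a production technique: it relies on CPython's global interpreter lock making the plain reads and writes of `flag` and `turn` behave sequentially consistently, which real implementations must not assume:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # which process yields when both want to enter
count = 0               # shared data guarded by the critical section

def process(i):
    global turn, count
    j = 1 - i                        # index of the other process
    for _ in range(2000):
        flag[i] = True               # entry section: announce intent
        turn = j                     # politely let the other go first
        while flag[j] and turn == j:
            pass                     # busy-wait while the other is inside
        count += 1                   # critical section (not atomic by itself)
        flag[i] = False              # exit section

threads = [threading.Thread(target=process, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 4000: mutual exclusion prevented lost updates
```

Without the entry and exit sections, the unprotected `count += 1` could lose increments; Peterson's flags and turn variable serialize access to it.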
Synchronization Hardware
Sometimes the critical section problem is also resolved by hardware. Some operating systems offer a lock functionality where a process acquires a lock when entering the critical section and releases the lock after leaving it.
So when another process tries to enter the critical section, it cannot do so while the section is locked; it can enter only once the lock is free, by acquiring the lock itself.
Mutex Locks
Synchronization hardware is not a simple method for everyone to implement, so a strict software method known as mutex locks was also introduced.
In this approach, in the entry section of code, a LOCK is obtained over the critical resources used inside the
critical section. In the exit section that lock is released.
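A minimal sketch of this acquire-in-entry, release-in-exit pattern, using Python's `threading.Lock` as the mutex:

```python
import threading

lock = threading.Lock()
balance = 0   # shared resource protected by the mutex

def deposit(amount, times):
    global balance
    for _ in range(times):
        lock.acquire()        # entry section: obtain the lock
        balance += amount     # critical section
        lock.release()        # exit section: release the lock

threads = [threading.Thread(target=deposit, args=(1, 5000)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 10000: no update was lost
```

Only one thread at a time can hold the lock, so the read-modify-write on `balance` is never interleaved.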
Semaphore Solution
A semaphore is simply a non-negative variable that is shared between threads. It is another solution to the critical section problem. It is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread.
It uses two atomic operations, 1)wait, and 2) signal for the process synchronization.
Example
WAIT(S):
    while (S <= 0)
        ;            // busy wait
    S = S - 1;

SIGNAL(S):
    S = S + 1;
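The wait and signal operations above busy-wait; practical semaphores instead put the waiting thread to sleep. A small Python sketch built on a condition variable (the class and method names are ours):

```python
import threading

class Semaphore:
    """Counting semaphore; wait() blocks instead of spinning on S <= 0."""
    def __init__(self, value=1):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):              # P(S)
        with self.cond:
            while self.value <= 0:
                self.cond.wait()     # sleep until signalled
            self.value -= 1

    def signal(self):            # V(S)
        with self.cond:
            self.value += 1
            self.cond.notify()       # wake one waiting thread

s = Semaphore(1)   # initialized to 1, this acts as a binary semaphore
shared = []

def task(name):
    s.wait()
    shared.append(name)   # critical section: one thread at a time
    s.signal()

threads = [threading.Thread(target=task, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))  # [0, 1, 2]
```

Initializing the counter to a value greater than one would instead limit the critical section to that many concurrent threads, which is how counting semaphores guard resource pools.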
Summary:
• Process synchronization is the task of coordinating the execution of processes so that no two processes can access the same shared data and resources at the same time.
• The four elements of the critical section are 1) entry section 2) critical section 3) exit section 4) remainder section.
• A critical section is a segment of code which can be accessed by only a single process at a specific point of time.
• The three rules which must be enforced by any solution are: 1) Mutual Exclusion 2) Progress 3) Bounded Waiting.
• Mutual exclusion can be provided by a special binary semaphore (a mutex) which is used for controlling access to the shared resource.
• Progress means that when no one is in the critical section and someone wants in, the decision about who enters next is made in a finite time.
• Bounded waiting means that after a process requests entry into its critical section, there is a limit on how many times other processes can enter their critical sections before it.
• Peterson's solution is a widely used solution to the critical section problem.
• The critical section problem can also be resolved by synchronization hardware.
• Synchronization hardware is not a simple method for everyone to implement, so the strict software method known as mutex locks was also introduced.
• A semaphore is another solution to the critical section problem.
• Peterson’s solution is widely used solution to critical section problems.
• Problems of the Critical Section are also resolved by synchronization of hardware
• Synchronization hardware is not a simple method to implement for everyone, so the strict software
method known as Mutex Locks was also introduced.
• Semaphore is another algorithm or solution to the critical section problem
Why do we need Scheduling?
In multiprogramming, if the long-term scheduler picks too many I/O-bound processes, then most of the time the CPU remains idle. The task of the operating system is to optimize the utilization of resources.
If most of the running processes change their state from running to waiting, there may even be a possibility of deadlock in the system. Hence, to reduce this overhead, the OS needs to schedule jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.
CPU-I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait. The state of process under execution is
called CPU burst and the state of process under I/O request & its handling is called I/O burst.
Processes alternate between these two states. Process execution begins with a CPU burst. That is followed
by an I/O burst, which is followed by another CPU burst, then another I/O burst, and so on. Eventually, the
final CPU burst ends with a system request to terminate execution as shown in the following figure:

Difference Between Preemptive and Non-Preemptive Scheduling in OS

It is the responsibility of CPU scheduler to allot a process to CPU whenever the CPU is in the idle state. The
CPU scheduler selects a process from ready queue and allocates the process to CPU. The scheduling which
takes place when a process switches from running state to ready state or from waiting state to ready state
is called Preemptive Scheduling.
On the other hand, the scheduling which takes place when a process terminates, or switches from the running to the waiting state, is called Non-Preemptive Scheduling.
The basic difference between preemptive and non-preemptive scheduling lies in the names themselves: in preemptive scheduling, a running process can be preempted, i.e. rescheduled before it finishes; in non-preemptive scheduling, a running process cannot be preempted.
Let us discuss the differences between the both Preemptive and Non-Preemptive Scheduling in brief with
the help of comparison chart shown below.
Comparison of Preemptive and Non-Preemptive Scheduling
• Basic: In preemptive scheduling, the resources are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
• Interrupt: In preemptive scheduling, a process can be interrupted in between. In non-preemptive scheduling, a process cannot be interrupted until it terminates or switches to the waiting state.
• Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, another process with less CPU burst time may starve.
• Overhead: Preemptive scheduling has the overhead of scheduling the processes. Non-preemptive scheduling does not have this overhead.
• Flexibility: Preemptive scheduling is flexible. Non-preemptive scheduling is rigid.
• Cost: Preemptive scheduling has an associated cost. Non-preemptive scheduling does not.

Key Differences Between Preemptive and Non-Preemptive Scheduling

1. The basic difference between preemptive and non-preemptive scheduling is that in preemptive
scheduling the CPU is allocated to the processes for the limited time. While in Non-preemptive
scheduling, the CPU is allocated to the process till it terminates or switches to waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle of execution whereas,
the executing process in non-preemptive scheduling is not interrupted in the middle of execution.
3. Preemptive scheduling has the overhead of switching processes between the ready and running states, and vice versa, and of maintaining the ready queue. On the other hand, non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.
4. In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, then a low-priority process has to wait for a long time and may starve. On the other hand, in non-preemptive scheduling, if the CPU is allocated to a process with a larger burst time, then processes with a small burst time may starve.
5. Preemptive scheduling is quite flexible because critical processes are allowed to access the CPU as soon as they arrive in the ready queue, no matter what process is executing currently. Non-preemptive scheduling is rigid: even if a critical process enters the ready queue, the process running on the CPU is not disturbed.
6. Preemptive scheduling has an associated cost, as it has to maintain the integrity of shared data, which is not the case with non-preemptive scheduling.
CPU Scheduling Criteria
Different CPU scheduling algorithms have different properties and the choice of a particular algorithm
depends on the various factors. Many criteria have been suggested for comparing CPU scheduling
algorithms.
The criteria include the following:
1. CPU utilisation –
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilisation can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending on the load upon the system.
2. Throughput –
A measure of the work done by CPU is the number of processes being executed and completed per unit
time. This is called throughput. The throughput may vary depending upon the length or duration of
processes.
3. Turnaround time –
For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
4. Waiting time –
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in the ready
queue.
5. Response time –
In an interactive system, turnaround time is not the best criterion. A process may produce some output fairly early and continue computing new results while previous results are being output to the user. Thus another criterion is the time taken from the submission of a request until the first response is produced. This measure is called response time.
Operating System Scheduling algorithms
A Process Scheduler schedules different processes to be assigned to the CPU based on particular scheduling algorithms. We discuss four popular process scheduling algorithms in this chapter −
• First-Come, First-Served (FCFS) Scheduling
• Shortest-Job-Next (SJN) Scheduling
• Round Robin(RR) Scheduling
• Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its CPU burst, whereas preemptive scheduling is based on priority: a scheduler may preempt a low-priority running process at any time when a high-priority process enters the ready state.

First Come First Serve (FCFS)


• Jobs are executed on first come, first serve basis.
• It is a non-preemptive scheduling algorithm.
• Easy to understand and implement.
• Its implementation is based on FIFO queue.
• Poor in performance as average wait time is high.

Wait time of each process is as follows −


Process Wait Time : Service Time - Arrival Time

P0 0-0=0

P1 5-1=4

P2 8-2=6

P3 16 - 3 = 13
Average Wait Time: (0+4+6+13) / 4 = 5.75
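The wait times in the table above can be reproduced with a short simulation, assuming (consistently with the service times shown) that P0 to P3 arrive at times 0, 1, 2, 3 with CPU bursts of 5, 3, 8, and 6:

```python
def fcfs(arrival, burst):
    """Wait time of each process under First Come First Serve.
    Processes must be listed in order of arrival."""
    time, waits = 0, []
    for a, b in zip(arrival, burst):
        time = max(time, a)        # CPU may idle until the job arrives
        waits.append(time - a)     # wait = service time - arrival time
        time += b                  # run the job to completion
    return waits

waits = fcfs([0, 1, 2, 3], [5, 3, 8, 6])
print(waits, sum(waits) / len(waits))  # [0, 4, 6, 13] 5.75
```

Because the long burst of P2 sits ahead of P3, P3 waits 13 units, which is why FCFS tends to perform poorly on average wait time.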

Shortest Job Next (SJN)


• This is also known as shortest job first, or SJF
• This is a non-preemptive scheduling algorithm (a preemptive variant, shortest remaining time first, also exists).
• Best approach to minimize waiting time.
• Easy to implement in Batch systems where required CPU time is known in advance.
• Impossible to implement in interactive systems where the required CPU time is not known.
• The processor should know in advance how much time the process will take.
Given: Table of processes, and their Arrival time, Execution time
Process Arrival Time Execution Time Service Time

P0 0 5 0

P1 1 3 5

P2 2 8 14

P3 3 6 8
Waiting time of each process is as follows −
Process Waiting Time

P0 0-0=0

P1 5-1=4

P2 14 - 2 = 12

P3 8-3=5
Average Wait Time: (0 + 4 + 12 + 5)/4 = 21 / 4 = 5.25
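The service and waiting times in the tables above can be reproduced with a small non-preemptive SJF simulation over the same arrivals and execution times:

```python
def sjn(arrival, burst):
    """Non-preemptive Shortest Job Next: waiting time per process."""
    n = len(arrival)
    done, waits = [False] * n, [0] * n
    time, finished = 0, 0
    while finished < n:
        # Among arrived, unfinished jobs, pick the one with the shortest burst.
        ready = [i for i in range(n) if not done[i] and arrival[i] <= time]
        if not ready:                  # CPU idle: jump to the next arrival
            time = min(arrival[i] for i in range(n) if not done[i])
            continue
        i = min(ready, key=lambda k: burst[k])
        waits[i] = time - arrival[i]   # wait = service time - arrival time
        time += burst[i]               # run the chosen job to completion
        done[i] = True
        finished += 1
    return waits

waits = sjn([0, 1, 2, 3], [5, 3, 8, 6])
print(waits, sum(waits) / len(waits))  # [0, 4, 12, 5] 5.25
```

Note how P3 (burst 6) is scheduled ahead of the earlier-arriving P2 (burst 8), which lowers the average wait relative to FCFS.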

Round Robin Scheduling


• Round Robin is a preemptive process scheduling algorithm.
• Each process is given a fixed time to execute, called a quantum.
• Once a process is executed for a given time period, it is preempted and other process executes for
a given time period.
• Context switching is used to save states of preempted processes.

Wait time of each process is as follows −


Process Wait Time : Service Time - Arrival Time

P0 (0 - 0) + (12 - 3) = 9

P1 (3 - 1) = 2

P2 (6 - 2) + (14 - 9) + (20 - 17) = 12


P3 (9 - 3) + (17 - 12) = 11
Average Wait Time: (9+2+12+11) / 4 = 8.5
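The wait times above are consistent with the same four processes (arrivals 0 to 3, bursts 5, 3, 8, 6) scheduled with a time quantum of 3; the quantum is our inference from the figures shown. A small simulation:

```python
from collections import deque

def round_robin(arrival, burst, quantum):
    """Waiting time per process under Round Robin.
    Processes must be listed in order of arrival."""
    n = len(arrival)
    remaining = list(burst)
    waits = [0] * n
    last_ready = list(arrival)     # when each process last became ready
    q, time, nxt, finished = deque(), 0, 0, 0
    while finished < n:
        while nxt < n and arrival[nxt] <= time:    # admit new arrivals
            q.append(nxt); nxt += 1
        if not q:                                  # CPU idle until next arrival
            time = arrival[nxt]
            continue
        i = q.popleft()
        waits[i] += time - last_ready[i]           # time spent waiting in queue
        run = min(quantum, remaining[i])
        time += run
        remaining[i] -= run
        while nxt < n and arrival[nxt] <= time:    # arrivals during the slice
            q.append(nxt); nxt += 1
        if remaining[i] > 0:
            last_ready[i] = time                   # preempted: back of the queue
            q.append(i)
        else:
            finished += 1
    return waits

waits = round_robin([0, 1, 2, 3], [5, 3, 8, 6], quantum=3)
print(waits, sum(waits) / len(waits))  # [9, 2, 12, 11] 8.5
```

Each preemption is a context switch whose state save and restore is the overhead the section mentions; a smaller quantum improves responsiveness but increases that overhead.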

Multiple-Level Queues Scheduling


Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.
• Multiple queues are maintained for processes with common characteristics.
• Each queue can have its own scheduling algorithms.
• Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based
on the algorithm assigned to the queue.
