Os Notes

The document discusses operating systems, including what they are, their functions and objectives. It describes different types of operating systems such as batch, time-sharing, distributed and real-time operating systems. It also covers how to choose an operating system and examples of common operating systems.

An Operating System lies in the category of system software. It manages all the resources of the computer and acts as an interface between the software and the computer hardware. The operating system is designed in such a way that it can manage the overall resources and operations of the computer.
Operating System is a fully integrated set of specialized programs that handle all the
operations of the computer. It controls and monitors the execution of all other
programs that reside in the computer, which also includes application programs and
other system software of the computer. Examples of Operating Systems are
Windows, Linux, Mac OS, etc.
An Operating System (OS) is a collection of software that manages computer
hardware resources and provides common services for computer programs. The
operating system is the most important type of system software in a computer
system.
What is an Operating System Used for?
The operating system helps in improving the computer software as well as hardware. Without an OS, it would be very difficult for any application to be user-friendly. The Operating System provides the user with an interface that makes applications attractive and user-friendly. The operating system comes with a large number of device drivers that make OS services reachable to the hardware environment. Every application present in the system requires the Operating System. The operating system works as a communication channel between system hardware and system software. It lets an application use the hardware without knowing the actual hardware configuration. It is one of the most important parts of the system and is therefore present in every device, whether large or small.

Operating System
Functions of the Operating System
• Resource Management: The operating system manages and allocates
memory, CPU time, and other hardware resources among the various programs
and processes running on the computer.
• Process Management: The operating system is responsible for starting,
stopping, and managing processes and programs. It also controls the scheduling
of processes and allocates resources to them.
• Memory Management: The operating system manages the computer’s
primary memory and provides mechanisms for optimizing memory usage.
• Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such
as access controls and encryption.
• Job Accounting: It keeps track of time and resources used by various jobs
or users.
• File Management: The operating system is responsible for organizing and
managing the file system, including the creation, deletion, and manipulation of
files and directories.
• Device Management: The operating system manages input/output devices
such as printers, keyboards, mice, and displays. It provides the necessary drivers
and interfaces to enable communication between the devices and the computer.
• Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols,
and sharing resources such as printers and files over a network.
• User Interface: The operating system provides a user interface that enables
users to interact with the computer system. This can be a Graphical User
Interface (GUI), a Command-Line Interface (CLI), or a combination of both.
• Backup and Recovery: The operating system provides mechanisms for
backing up data and recovering it in case of system failures, errors, or disasters.
• Virtualization: The operating system provides virtualization capabilities
that allow multiple operating systems or applications to run on a single physical
machine. This can enable efficient use of resources and flexibility in managing
workloads.
• Performance Monitoring: The operating system provides tools for
monitoring and optimizing system performance, including identifying
bottlenecks, optimizing resource usage, and analyzing system logs and metrics.
• Time-Sharing: The operating system enables multiple users to share a
computer system and its resources simultaneously by providing time-sharing
mechanisms that allocate resources fairly and efficiently.
• System Calls: The operating system provides a set of system calls that
enable applications to interact with the operating system and access its resources.
System calls provide a standardized interface between applications and the
operating system, enabling portability and compatibility across different
hardware and software platforms.
• Error-detecting Aids: These contain methods that include the production of
dumps, traces, error messages, and other debugging and error-detecting methods.
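The system-call bullet above can be made concrete from Python, whose `os` module is a thin wrapper over the kernel's system-call interface. This is a minimal sketch assuming a Unix-like system (it uses `/dev/null`):

```python
import os

# Each os.* function below delegates to a kernel system call
# (comments name the POSIX call; assumes a Unix-like system).
pid = os.getpid()                        # getpid(): ask the kernel for our process ID
fd = os.open("/dev/null", os.O_WRONLY)   # open(): kernel returns a file descriptor
written = os.write(fd, b"discarded")     # write(): returns the number of bytes written
os.close(fd)                             # close(): release the descriptor

print(pid, written)                      # written is 9, the length of b"discarded"
```

Note how the application never touches the device itself: every operation is a request to the kernel through the standardized system-call interface.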
Objectives of Operating Systems
Let us now see some of the objectives of the operating system, which are mentioned
below.
• Convenient to use: One of the objectives is to make the computer system
more convenient to use in an efficient manner.
• User Friendly: To make the computer system more interactive with a more
convenient interface for the users.
• Easy Access: To provide easy access to users for using resources by acting
as an intermediary between the hardware and its users.
• Management of Resources: For managing the resources of a computer in a
better and faster way.
• Controls and Monitoring: By keeping track of who is using which resource,
granting resource requests, and mediating conflicting requests from different
programs and users.
• Fair Sharing of Resources: Providing efficient and fair sharing of resources
between the users and programs.
Types of Operating Systems
• Batch Operating System: A Batch Operating System is a type of operating
system that does not interact with the computer directly. There is an operator
who takes similar jobs having the same requirements and groups them into
batches.
• Time-sharing Operating System: Time-sharing Operating System is a type
of operating system that allows many users to share computer resources
(maximum utilization of the resources).
• Distributed Operating System: A Distributed Operating System is a type of operating system that manages a group of different computers and makes them appear to be a single computer. These operating systems are designed to operate on a network of computers. They allow multiple users to access shared resources and communicate with each other over the network. Examples include Microsoft Windows Server and various distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of
operating system that runs on a server and provides the capability to manage
data, users, groups, security, applications, and other networking functions.
• Real-time Operating System: Real-time Operating System is a type of
operating system that serves a real-time system and the time interval required to
process and respond to inputs is very small. These operating systems are
designed to respond to events in real time. They are used in applications that
require quick and deterministic responses, such as embedded systems, industrial
control systems, and robotics.
• Multiprocessing Operating System: Multiprocessor Operating Systems are
used in operating systems to boost the performance of multiple CPUs within a
single computer system. Multiple CPUs are linked together so that a job can be
divided and executed more quickly.
• Single-User Operating Systems: Single-User Operating Systems are
designed to support a single user at a time. Examples include Microsoft
Windows for personal computers and Apple macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are
designed to support multiple users simultaneously. Examples include Linux and
Unix.
• Embedded Operating Systems: Embedded Operating Systems are designed
to run on devices with limited resources, such as smartphones, wearable devices,
and household appliances. Examples include Google’s Android and Apple’s iOS.
• Cluster Operating Systems: Cluster Operating Systems are designed to run
on a group of computers, or a cluster, to work together as a single system. They
are used for high-performance computing and for applications that require high
availability and reliability. Examples include Rocks Cluster Distribution and
OpenMPI.
How to Choose an Operating System?
There are several factors to be considered while choosing the best Operating System for our use. These factors are mentioned below.
• Price Factor: Price is one of the factors in choosing the correct Operating System, as some OSes are free, like Linux, while others are paid, like Windows and macOS.
• Accessibility Factor: Some Operating Systems are easy to use, like macOS and iOS, while others are a little more complex to understand, like Linux. You should choose the Operating System that is most accessible to you.
• Compatibility Factor: Some Operating Systems support very few applications, whereas others support many more. You should choose an OS that supports the applications you require.
• Security Factor: Security is also a factor in choosing the correct OS; macOS provides some additional security, while Windows has somewhat fewer security features.
Examples of Operating Systems
• Windows (GUI-based, PC)
• GNU/Linux (Personal, Workstations, ISP, File, and print server, Three-tier
client/Server)
• macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
• Android (Google’s Operating System for smartphones/tablets/smartwatches)
• iOS (Apple’s OS for iPhone, iPad, and iPod Touch)
Functions of Operating System
An Operating System acts as a communication bridge (interface) between the user and
computer hardware. The purpose of an operating system is to provide a platform on
which a user can execute programs conveniently and efficiently.
An operating system is a piece of software that manages the allocation of Computer
Hardware. The coordination of the hardware must be appropriate to ensure the correct
working of the computer system and to prevent user programs from interfering with
the proper working of the system.
The main goal of the Operating System is to make the computer environment more
convenient to use and the Secondary goal is to use the resources most efficiently.
What is an Operating System?
An operating system is a program that manages a computer’s hardware. It also
provides a basis for application programs and acts as an intermediary between the
computer user and computer hardware. The main task an operating system carries out
is the allocation of resources and services, such as the allocation of memory, devices,
processors, and information. The operating system also includes programs to manage
these resources, such as a traffic controller, a scheduler, a memory
management module, I/O programs, and a file system. The operating system simply
provides an environment within which other programs can do useful work.
Why are Operating Systems Used?
Operating System is used as a communication channel between the Computer
hardware and the user. It works as an intermediate between System Hardware and
End-User. Operating System handles the following responsibilities:
• It controls all the computer resources.
• It provides valuable services to user programs.
• It coordinates the execution of user programs.
• It provides resources for user programs.
• It provides an interface (virtual machine) to the user.
• It hides the complexity of software.
• It supports multiple execution modes.
• It monitors the execution of user programs to prevent errors.
Functions of an Operating System

Memory Management

The operating system manages the Primary Memory or Main Memory. Main memory
is made up of a large array of bytes or words where each byte or word is assigned a
certain address. Main memory is fast storage and it can be accessed directly by the
CPU. For a program to be executed, it should be first loaded in the main memory. An
operating system manages the allocation and deallocation of memory to various
processes and ensures that one process does not consume the memory allocated
to another. An Operating System performs the following activities for Memory
Management:
• It keeps track of primary memory, i.e., which bytes of memory are used by
which user program. The memory addresses that have already been allocated and
the memory addresses of the memory that has not yet been used.
• In multiprogramming, the OS decides the order in which processes are granted
memory access, and for how long.
• It Allocates the memory to a process when the process requests it and
deallocates the memory when the process has terminated or is performing an I/O
operation.
Memory Management
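The allocation and deallocation activities above can be sketched with a toy first-fit allocator. This is a hypothetical illustration only (`ToyMemoryManager` is invented for this example; real kernels use far more sophisticated structures such as paging):

```python
# Toy first-fit allocator: tracks which address ranges are in use per process.
class ToyMemoryManager:
    def __init__(self, size):
        self.allocations = {}          # process name -> (start, length)
        self.free_list = [(0, size)]   # (start, length) of each free hole

    def allocate(self, pid, length):
        for i, (start, hole) in enumerate(self.free_list):
            if hole >= length:                     # first hole big enough
                self.allocations[pid] = (start, length)
                if hole == length:
                    del self.free_list[i]          # hole consumed exactly
                else:
                    self.free_list[i] = (start + length, hole - length)
                return start
        return None                                # no hole large enough

    def free(self, pid):
        start, length = self.allocations.pop(pid)
        self.free_list.append((start, length))     # no coalescing in this sketch

mm = ToyMemoryManager(100)
print(mm.allocate("p1", 30))   # placed at address 0
print(mm.allocate("p2", 50))   # placed at address 30
mm.free("p1")                  # hole at 0 becomes available again
print(mm.allocate("p3", 25))   # reuses the freed hole at address 0
```

The OS's bookkeeping is exactly this kind of record: which addresses belong to which process, and which are free.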

Processor Management

In a multi-programming environment, the OS decides the order in which processes have access to the processor, and how much processing time each process has. This function of the OS is called Process Scheduling. An operating system manages the processor's work by allocating various jobs to it and ensuring that each process receives enough time from the processor to function properly. It keeps track of the status of processes; the program that performs this task is known as the traffic controller. It allocates the CPU (that is, the processor) to a process and de-allocates the processor when the process is no longer required.
Processor Management
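From a program's point of view, processor management is mostly invisible: a process asks the OS to create children, and the scheduler decides when each one actually runs. A minimal sketch using Python's `multiprocessing` module (which creates real OS processes):

```python
from multiprocessing import Process
import os

# Each Process below becomes a real OS process; the kernel's scheduler
# decides when each one gets the CPU. We only create, start, and wait.
def worker(n):
    print(f"child {os.getpid()} handling job {n}")

if __name__ == "__main__":
    jobs = [Process(target=worker, args=(i,)) for i in range(3)]
    for p in jobs:
        p.start()    # OS creates the process and puts it on the ready queue
    for p in jobs:
        p.join()     # block until the scheduler has run it to completion
        print("exit code:", p.exitcode)   # 0 on clean exit
```

The order in which the three children print is up to the scheduler, not the program — which is precisely the point of processor management.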

Device Management

An OS manages device communication via its respective drivers. It performs the following activities for device management.
• Keeps track of all devices connected to the system. Designates a program
responsible for every device known as the Input/Output controller.
• Decide which process gets access to a certain device and for how long.
• Allocates devices effectively and efficiently. Deallocates devices when they
are no longer required.
• There are various input and output devices. An OS controls the working of
these input-output devices.
• It receives the requests from these devices, performs a specific task, and
communicates back to the requesting process.

File Management

A file system is organized into directories for efficient or easy navigation and usage.
These directories may contain other directories and other files. An Operating System
carries out the following file management activities. It keeps track of where
information is stored, user access settings, the status of every file, and more. These
facilities are collectively known as the file system. An OS keeps track of information
regarding the creation, deletion, transfer, copy, and storage of files in an organized
way. It also maintains the integrity of the data stored in these files, including the file
directory structure, by protecting against unauthorized access.
File Management
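The file-management activities above (creation, metadata tracking, renaming, deletion) all go through the OS. A short sketch using Python's standard library, which delegates each call to the underlying file system:

```python
import os
import tempfile

# Every call below is serviced by the OS's file system: it allocates the
# file, records its metadata, and updates the directory structure.
d = tempfile.mkdtemp()                  # create a scratch directory
path = os.path.join(d, "notes.txt")

with open(path, "w") as f:              # creation
    f.write("hello")                    # the OS schedules the actual disk write

st = os.stat(path)                      # metadata the file system tracks for us
print("size in bytes:", st.st_size)     # 5 bytes: "hello"

os.rename(path, path + ".bak")          # manipulation: update the directory entry
os.remove(path + ".bak")                # deletion: the OS reclaims the space
os.rmdir(d)
```

The program never touches the disk layout directly; it only names files, and the OS maintains where the bytes actually live.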

User Interface or Command Interpreter

The user interacts with the computer system through the operating system. Hence OS
acts as an interface between the user and the computer hardware. This user interface is
offered through a set of commands or a graphical user interface (GUI). Through this
interface, the user interacts with the applications and the machine hardware.

Command Interpreter
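The core loop of a command interpreter is simple: read a line, split it into a command and its arguments, ask the OS to run it, and report the result. A minimal sketch (the `run_command` helper is invented for this illustration, and it assumes a Unix-like `echo`):

```python
import shlex
import subprocess

# A one-command "shell": parse the line, hand it to the OS, return status.
def run_command(line):
    argv = shlex.split(line)   # e.g. 'ls -l /tmp' -> ['ls', '-l', '/tmp']
    if not argv:
        return None
    # subprocess.run asks the OS to create a process for the command
    result = subprocess.run(argv, capture_output=True, text=True)
    return result.returncode, result.stdout

code, out = run_command("echo hello from the shell")
print(code, out.strip())   # exit status 0 and the command's output
```

A real shell adds built-ins, pipes, redirection, and job control on top of this same read-parse-execute cycle.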

Booting the Computer


The process of starting or restarting the computer is known as booting. Starting the computer from a completely powered-off state is called cold booting; warm booting is the process of using the operating system to restart the computer.

Security

The operating system protects user data using password protection and similar techniques. It also prevents unauthorized access to programs and user data. The
operating system provides various techniques which assure the integrity and
confidentiality of user data. The following security measures are used to protect user
data:
• Protection against unauthorized access through login.
• Protection against intrusion by keeping the firewall active.
• Protecting the system memory against malicious access.
• Displaying messages related to system vulnerabilities.

Job Accounting

The operating system keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users. In a multitasking OS where multiple programs run simultaneously, the OS determines which applications should run, in which order, and how time should be allocated to each application.
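The accounting records the kernel keeps per process can be read back from a program. A small sketch using Python's `resource` module (POSIX-only), which exposes the kernel's per-process usage counters:

```python
import resource

# getrusage() returns the accounting data the kernel has recorded for this
# process: CPU time in user mode vs. kernel (system) mode, among others.
usage = resource.getrusage(resource.RUSAGE_SELF)
print("user CPU seconds:  ", usage.ru_utime)
print("system CPU seconds:", usage.ru_stime)
```

These are the raw numbers a job-accounting facility aggregates per user or per job.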

Error-Detecting Aids

The operating system constantly monitors the system to detect errors and avoid
malfunctioning computer systems. From time to time, the operating system checks
the system for any external threat or malicious software activity. It also checks the
hardware for any type of damage. This process displays several alerts to the user so
that the appropriate action can be taken against any damage caused to the system.

Coordination Between Other Software and Users

Operating systems also coordinate and assign interpreters, compilers, assemblers, and other software to the various users of the computer systems. In simpler terms,
think of the operating system as the traffic cop of your computer. It directs and
manages how different software programs can share your computer’s resources
without causing chaos. It ensures that when you want to use a program, it runs
smoothly without crashing or causing problems for others. So, it’s like the friendly
officer ensuring a smooth flow of traffic on a busy road, making sure everyone gets
where they need to go without any accidents or jams.
Performs Basic Computer Tasks
The management of various peripheral devices such as the mouse, keyboard, and
printer is carried out by the operating system. Today most operating systems are
plug-and-play. These operating systems automatically recognize and configure the
devices with no user interference.

Network Management

• Network Communication: Think of them as traffic cops for your internet traffic. Operating systems help computers talk to each other and the internet. They manage how data is packaged and sent over the network, making sure it arrives safely and in the right order.
• Settings and Monitoring: Think of them as the settings and security guard
for your internet connection. They also let you set up your network connections,
like Wi-Fi or Ethernet, and keep an eye on how your network is doing. They
make sure your computer is using the network efficiently and securely, like
adjusting the speed of your internet or protecting your computer from online
threats.
Services Provided by an Operating System
The Operating System provides certain services to the users which can be listed in
the following manner:
• User Interface: Almost all operating systems have a user interface (UI).
This interface can take several forms. One is a command-line interface(CLI),
which uses text commands and a method for entering them (say, a keyboard for
typing in commands in a specific format with specific options). Another is a
batch interface, in which commands and directives to control those commands
are entered into files, and those files are executed. Most commonly, a graphical user interface (GUI) is used. The interface is a window system with a pointing device to direct I/O, choose from menus, and make selections, and a keyboard to enter text.
• Program Execution: The Operating System is responsible for the execution
of all types of programs whether it be user programs or system programs. The
Operating System utilizes various resources available for the efficient running of
all types of functionalities.
• Handling Input/Output Operations: The Operating System is responsible
for handling all sorts of inputs, i.e., from the keyboard, mouse, desktop, etc. The
Operating System does all interfacing most appropriately regarding all kinds of
Inputs and Outputs.
For example, there is a difference between all types of peripheral devices such
as mice or keyboards, the Operating System is responsible for handling data
between them.
• Manipulation of File System: The Operating System is responsible for
making decisions regarding the storage of all types of data or files, i.e., floppy
disk/hard disk/pen drive, etc. The Operating System decides how the data should
be manipulated and stored.
• Resource Allocation: The Operating System ensures the proper use of all
the resources available by deciding which resource to be used by whom for how
much time. All the decisions are taken by the Operating System.
• Accounting: The Operating System tracks an account of all the
functionalities taking place in the computer system at a time. All the details such
as the types of errors that occurred are recorded by the Operating System.
• Information and Resource Protection: The Operating System is
responsible for using all the information and resources available on the machine
in the most protected way. The Operating System must foil an attempt from any
external resource to hamper any sort of data or information.
• Communication: The operating system implements communication
between one process to another process to exchange information. Such
communication may occur between processes that are executing on the same
computer or between processes that are executing on different computer systems
tied together by a computer network.
• System Services: The operating system provides various system services,
such as printing, time and date management, and event logging.
• Error Detection: The operating system needs to detect and correct errors constantly. Errors may occur in the CPU and memory hardware (for example, a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or a lack of paper in the printer), and in the user program (an arithmetic overflow, an attempt to access an illegal memory location, or excessive use of CPU time). For each type of error, the operating system should take the appropriate action to ensure correct and consistent computing.
All these services are ensured by the Operating System for the convenience of the
users to make the programming task easier. All different kinds of Operating Systems
more or less provide the same services.
Characteristics of Operating System
• Virtualization: Operating systems can provide Virtualization capabilities,
allowing multiple operating systems or instances of an operating system to run
on a single physical machine. This can improve resource utilization and provide
isolation between different operating systems or applications.
• Networking: Operating systems provide networking capabilities, allowing
the computer system to connect to other systems and devices over a network.
This can include features such as network protocols, network interfaces,
and network security.
• Scheduling: Operating systems provide scheduling algorithms that
determine the order in which tasks are executed on the system. These algorithms
prioritize tasks based on their resource requirements and other factors to
optimize system performance.
• Interprocess Communication: Operating systems provide mechanisms for
applications to communicate with each other, allowing them to share data and
coordinate their activities.
• Performance Monitoring: Operating systems provide tools for monitoring
system performance, including CPU usage, memory usage, disk usage, and
network activity. This can help identify performance bottlenecks and optimize
system performance.
• Backup and Recovery: Operating systems provide backup and recovery
mechanisms to protect data in the event of system failure or data loss.
• Debugging: Operating systems provide debugging tools that allow
developers to identify and fix software bugs and other issues in the system.
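The interprocess-communication characteristic above can be shown with an OS-provided pipe: the kernel carries a message from one process to another. A minimal sketch using `multiprocessing.Pipe`:

```python
from multiprocessing import Pipe, Process

# The OS provides the pipe; the two processes only read and write ends of it.
def child(conn):
    conn.send("pong")   # the kernel buffers and delivers the message
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    print(parent_end.recv())   # blocks until the child's message arrives
    p.join()
```

Pipes, shared memory, message queues, and sockets are all variations on this same idea: the OS mediates the data transfer between otherwise isolated processes.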
An Operating System performs all the basic tasks like managing files, processes,
and memory. Thus operating system acts as the manager of all the resources,
i.e. resource manager. Thus, the operating system becomes an interface between
the user and the machine. It is one of the most required software that is present in
the device.
An Operating System is a type of software that works as an interface between the system programs and the hardware. There are several types of Operating Systems, many of which are mentioned below. Let's have a look at them.
Types of Operating Systems
There are several types of Operating Systems which are mentioned below.
• Batch Operating System
• Multi-Programming System
• Multi-Processing System
• Multi-Tasking Operating System
• Time-Sharing Operating System
• Distributed Operating System
• Network Operating System
• Real-Time Operating System

1. Batch Operating System

This type of operating system does not interact with the computer directly. There is an operator who takes similar jobs having the same requirements and groups them into batches. It is the responsibility of the operator to sort jobs with similar needs.

Batch Operating System
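The operator's grouping step can be sketched in a few lines. The job names and requirements below are hypothetical, invented for this illustration:

```python
# Batch-operator sketch: group submitted jobs by the environment they
# require, the way an operator batches similar jobs before running them.
jobs = [("payroll", "COBOL"), ("report", "FORTRAN"),
        ("billing", "COBOL"), ("stats", "FORTRAN")]

batches = {}
for name, requirement in jobs:
    batches.setdefault(requirement, []).append(name)

print(batches)   # jobs with the same requirement end up in the same batch
```

Each batch can then be loaded and run without reconfiguring the machine between jobs, which is what makes batching efficient.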


Advantages of Batch Operating System
• Processors of batch systems know how long a job will take once it is in the queue.
• Multiple users can share the batch systems.
• The idle time for the batch system is very low.
• It is easy to manage large work repeatedly in batch systems.
Disadvantages of Batch Operating System
• It is very difficult to guess or know the time required for any job to complete before it is queued.
• The computer operators should be well acquainted with batch systems.
• Batch systems are hard to debug.
• They are sometimes costly.
• The other jobs will have to wait for an unknown time if any job fails.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.

2. Multi-Programming Operating System

In a Multiprogramming Operating System, more than one program is present in main memory and any one of them can be kept in execution. This is basically used for better utilization of resources.

MultiProgramming
Advantages of Multi-Programming Operating System
• Multi Programming increases the Throughput of the System.
• It helps in reducing the response time.
Disadvantages of Multi-Programming Operating System
• There is no facility for user interaction with the system while programs are executing.

3. Multi-Processing Operating System

Multi-Processing Operating System is a type of Operating System in which more than one CPU is used for the execution of processes. It improves the throughput of the system.
Multiprocessing
Advantages of Multi-Processing Operating System
• It increases the throughput of the system.
• As it has several processors, so, if one processor fails, we can proceed with
another processor.
Disadvantages of Multi-Processing Operating System
• Due to the presence of multiple CPUs, it can be more complex and somewhat more difficult to understand.

4. Multi-Tasking Operating System

A Multitasking Operating System is simply a multiprogramming Operating System with the added facility of a Round-Robin scheduling algorithm. It can run multiple programs simultaneously.
There are two types of Multi-Tasking Systems which are listed below.
• Preemptive Multi-Tasking
• Cooperative Multi-Tasking
Multitasking
Advantages of Multi-Tasking Operating System
• Multiple Programs can be executed simultaneously in Multi-Tasking
Operating System.
• It comes with proper memory management.
Disadvantages of Multi-Tasking Operating System
• The system can overheat when heavy programs are run repeatedly.

5. Time-Sharing Operating Systems

Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU time, as they all use a single system. These systems are also known as Multitasking Systems. The tasks can be from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches over to the next task.
Time-Sharing OS
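The quantum-based switching described above can be simulated in a few lines. This is a simplified round-robin sketch (burst times are invented; a real scheduler also handles I/O waits and priorities):

```python
from collections import deque

# Round-robin simulation: each task runs for at most one quantum of ticks,
# then goes to the back of the ready queue if it still has work left.
def round_robin(burst_times, quantum):
    queue = deque(burst_times.items())   # (task name, remaining ticks)
    order = []                           # which task ran in each time slice
    while queue:
        name, remaining = queue.popleft()
        order.append(name)               # the task runs for one quantum
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # preempted, requeue
    return order

# Three tasks needing 3, 5, and 2 ticks, with a quantum of 2 ticks:
print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))
# ['A', 'B', 'C', 'A', 'B', 'B'] — the CPU cycles through the ready tasks
```

Because every waiting task gets the CPU within one full cycle of the queue, each user sees the machine respond even though it is shared.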
Advantages of Time-Sharing OS
• Each task gets an equal opportunity.
• Fewer chances of duplication of software.
• CPU idle time can be reduced.
• Resource Sharing: Time-sharing systems allow multiple users to share
hardware resources such as the CPU, memory, and peripherals, reducing the cost
of hardware and increasing efficiency.
• Improved Productivity: Time-sharing allows users to work concurrently,
thereby reducing the waiting time for their turn to use the computer. This
increased productivity translates to more work getting done in less time.
• Improved User Experience: Time-sharing provides an interactive
environment that allows users to communicate with the computer in real time,
providing a better user experience than batch processing.
Disadvantages of Time-Sharing OS
• Reliability problem.
• One must have to take care of the security and integrity of user programs and
data.
• Data communication problem.
• High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and other
overheads that come with supporting multiple users.
• Complexity: Time-sharing systems are complex and require advanced
software to manage multiple users simultaneously. This complexity increases the
chance of bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of user
access, authentication, and authorization to ensure the security of data and
software.
Examples of Time-Sharing OS with explanation
• IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was
first introduced in 1972. It is still in use today, providing a virtual machine
environment that allows multiple users to run their own instances of operating
systems and applications.
• TSO (Time Sharing Option): TSO is a time-sharing operating system that
was first introduced in the 1960s by IBM for the IBM System/360 mainframe
computer. It allowed multiple users to access the same computer simultaneously,
running their own applications.
• Windows Terminal Services: Windows Terminal Services is a time-sharing
operating system that allows multiple users to access a Windows server remotely.
Users can run their own applications and access shared resources, such as
printers and network storage, in real-time.

6. Distributed Operating System

These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, and that too at a great pace. Various autonomous interconnected computers communicate with each other using a shared communication network. Independent systems possess their own memory unit and CPU. These are referred to as loosely coupled systems or distributed systems. These systems' processors differ in size and function. The major benefit of working with these types of operating systems is that a user can always access files or software that are not actually present on his own system but on some other system connected within the network; i.e., remote access is enabled within the devices connected to that network.

Distributed OS
Advantages of Distributed Operating System
• Failure of one will not affect the other network communication, as all
systems are independent of each other.
• Electronic mail increases the data exchange speed.
• Since resources are being shared, computation is highly fast and durable.
• Load on host computer reduces.
• These systems are easily scalable as many systems can be easily added to the
network.
• Delay in data processing reduces.
Disadvantages of Distributed Operating System
• Failure of the main network will stop the entire communication.
• The languages used to establish distributed systems are not yet well-defined.
• These systems are not readily available, as they are very expensive. Moreover,
the underlying software is highly complex and not yet well understood.
Examples of Distributed Operating Systems are LOCUS, etc.
A distributed OS must tackle the following issues:
• Networking causes delays in the transfer of data between nodes of a
distributed system. Such delays may lead to an inconsistent view of data located
in different nodes, and make it difficult to know the chronological order in
which events occurred in the system.
• Control functions like scheduling, resource allocation, and deadlock
detection have to be performed in several nodes to achieve computation speedup
and provide reliable operation when computers or networking components fail.
• Messages exchanged by processes present in different nodes may travel over
public networks and pass through computer systems that are not controlled by
the distributed operating system. An intruder may exploit this feature to tamper
with messages, or create fake messages to fool the authentication procedure and
masquerade as a user of the system.

7. Network Operating System

These systems run on a server and provide the capability to manage data, users,
groups, security, applications, and other networking functions. These types of
operating systems allow shared access to files, printers, security, applications, and
other networking functions over a small private network. Another important aspect
of network operating systems is that all users are well aware of the underlying
configuration and of all other users within the network, their individual connections,
etc., which is why these computers are popularly known as tightly coupled systems.

Network Operating System


Advantages of Network Operating System
• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware up-gradation are easily integrated into the
system.
• Server access is possible remotely from different locations and types of
systems.
Disadvantages of Network Operating System
• Servers are costly.
• User has to depend on a central location for most operations.
• Maintenance and updates are required regularly.

8. Real-Time Operating System

These types of OSs serve real-time systems. The time interval required to process
and respond to inputs is very small. This time interval is called response time.
Real-time systems are used when there are time requirements that are very strict
like missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
• Hard Real-Time Systems:
Hard real-time OSs are meant for applications where time constraints are very
strict and even the shortest possible delay is unacceptable. These systems are
built for life-critical uses, such as automatic parachutes or airbags, which must be
readily available in case of an accident. Virtual memory is rarely found in these
systems.
• Soft Real-Time Systems:
These OSs are for applications where time-constraint is less strict.
Real-Time Operating System
Advantages of RTOS
• Maximum Consumption: Maximum utilization of devices and systems,
thus more output from all the resources.
• Task Shifting: The time taken to shift between tasks in these systems is very
short. For example, older systems take about 10 microseconds to shift from one
task to another, while the latest systems take about 3 microseconds.
• Focus on Application: Focus on running applications and less importance
on applications that are in the queue.
• Real-time operating system in the embedded system: Since the size of
programs is small, RTOS can also be used in embedded systems like in transport
and others.
• Error Free: These types of systems are designed to minimize errors.
• Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS
• Limited Tasks: Very few tasks run at the same time, and the system
concentrates on a few applications to avoid errors.
• Heavy Use of System Resources: These systems often require expensive,
specialized system resources.
• Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
• Device Drivers and Interrupt Signals: An RTOS needs specific device drivers
and interrupt signals so that it can respond to interrupts as quickly as possible.
• Thread Priority: Setting thread priorities is difficult, as these systems rarely
switch between tasks.
Real-time operating systems are used in scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic
control systems, etc.
The fundamental goal of an Operating System is to execute user programs and to
make tasks easier. Various application programs along with hardware systems are
used to perform this work. Operating System is software that manages and controls
the entire set of resources and effectively utilizes every part of a computer. The
figure shows how OS acts as a medium between hardware units and application
programs.

Need for Operating System


OS as a platform for Application programs: The operating system provides a
platform, on top of which, other programs, called application programs can run.
These application programs help users to perform a specific task easily. It acts as an
interface between the computer and the user. It is designed in such a manner that it
operates, controls, and executes various applications on the computer.

Managing Input-Output unit: The operating system also allows the computer to
manage its own resources such as memory, monitor, keyboard, printer, etc.
Management of these resources is required for effective utilization. The operating
system controls the various system input-output resources and allocates them to the
users or programs as per their requirements.

Multitasking: The operating system manages memory and allows multiple


programs to run in their own space and even communicate with each other through
shared memory. Multitasking gives users a good experience as they can perform
several tasks on a computer at a time.
A platform for other software applications: Different application programs are
needed by users to carry out particular system tasks. These applications are managed
and controlled by the OS to ensure their effectiveness. It serves as an interface
between the user and the applications, in other words.
Controls memory: It helps in controlling the computer’s main memory.
Additionally, it allocates and deallocates memory for all tasks and
applications.
Looks after system files: It helps with system file management. All of the data on
the system exists as files, and the OS facilitates simple interaction with
them.
Provides Security: It helps keep the system and applications safe through the
authorization process. Thus, the OS provides security to the system.
Functions of an Operating System
An operating system has a variety of functions to perform. Some of the prominent
functions of an operating system can be broadly outlined:
Processor Management: This deals with the management of the Central Processing
Unit (CPU). The operating system takes care of the allotment of CPU time to
different processes. Deciding which process gets the CPU, and for how long, is
called scheduling. There are various types of scheduling techniques used by
operating systems:
• Shortest Job First(SJF): The process which needs the shortest CPU time is
scheduled first.
• Round Robin Scheduling: Each process is assigned a fixed CPU execution
time in a cyclic way.
• Priority-Based Scheduling (Non-Preemptive): In this scheduling, processes
are scheduled according to their priorities, i.e., the highest priority process is
scheduled first. If the priorities of the two processes match, then schedule
according to arrival time.
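As a rough illustration of how such a policy behaves, the sketch below (hypothetical job data, not code from any real scheduler) computes per-job waiting times under non-preemptive SJF when all jobs arrive at time 0:

```python
# Hedged sketch: non-preemptive Shortest Job First for jobs that all
# arrive at time 0. Burst times are hypothetical.

def sjf_waiting_times(burst_times):
    """Return each job's waiting time when jobs run shortest-first."""
    order = sorted(range(len(burst_times)), key=lambda i: burst_times[i])
    waiting = [0] * len(burst_times)
    clock = 0
    for i in order:
        waiting[i] = clock           # job i waits until the CPU is free
        clock += burst_times[i]      # then runs for its whole burst
    return waiting

bursts = [6, 8, 7, 3]
waits = sjf_waiting_times(bursts)
print(waits, sum(waits) / len(waits))   # the shortest job (3) waits 0
```

Running the shortest job first minimizes the average waiting time, which is why SJF is often used as a baseline when comparing scheduling policies.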
Context Switching: In most multitasking OSs, multiple running processes on the
system may need a change of state in execution. Even if there are multiple processes
being executed at any one point in time, only one task is executed in the foreground,
while the others are put in the background. So the process that is in the foreground
is determined by the priority-based scheduling, and the OS saves the execution state
of the previous process before switching to the current one. This is known as
context switching.
Device Management: The Operating System communicates with the hardware and
the attached devices and maintains a balance between them and the CPU. This is all
the more important because the CPU processing speed is much higher than that of
I/O devices. In order to optimize the CPU time, the operating system employs two
techniques – Buffering and Spooling.
Buffering: In this technique, input and output data are temporarily stored in Input
Buffer and Output Buffer. Once the signal for input or output is sent to or from the
CPU respectively, the operating system through the device controller moves the data
from the input device to the input buffer and from the output buffer to the output
device. In the case of input, if the buffer is full, the operating system sends a signal
to the program which processes the data stored in the buffer. When the buffer
becomes empty, the program informs the operating system which reloads the buffer
and the input operation continues.
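The buffering idea can be sketched with a bounded queue between a fast producer (the CPU side) and a slower consumer (the device side). This is a simplified model, not real device-driver code:

```python
# Simplified model of buffering: a bounded, thread-safe buffer decouples
# the producer (CPU) from the consumer (I/O device).
import queue
import threading

buf = queue.Queue(maxsize=4)   # the bounded "I/O buffer"
received = []

def device():
    # Consumer: drains the buffer, like a device controller would.
    while True:
        item = buf.get()
        if item is None:       # sentinel: input finished
            break
        received.append(item)

t = threading.Thread(target=device)
t.start()
for block in range(8):
    buf.put(block)             # put() blocks whenever the buffer is full
buf.put(None)
t.join()
print(received)                # all blocks arrive, in order
```

The key point is that the producer only stalls when the buffer is completely full, so the CPU and the device can work at their own speeds most of the time.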
Spooling (Simultaneous Peripheral Operation On-Line): This is a device
management technique used for processing different tasks on the same input/output
device. When there are various users on a network sharing the same resource then it
can be a possibility that more than one user might give it a command at the same
point in time. So, the operating system temporarily stores the data of every user on
the hard disk of the computer to which the resource is attached. The individual user
need not wait for the execution process to be completed. Instead, the operating
system sends the data from the hard disk to the resource one by one.
Example: printer
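A print spool can be modelled as a simple FIFO queue standing in for the spool area on disk; the function and user names here are illustrative:

```python
# Illustrative spooling model: jobs are stored immediately in a FIFO
# spool, so users never wait for the printer itself.
from collections import deque

spool = deque()                      # stands in for the spool area on disk

def submit(user, document):
    """Accept a job instantly; the submitting user is not blocked."""
    spool.append((user, document))

def printer_run():
    """The printer consumes spooled jobs one by one, in order."""
    printed = []
    while spool:
        printed.append(spool.popleft())
    return printed

submit("alice", "report.pdf")
submit("bob", "slides.pdf")
print(printer_run())                 # jobs come out in submission order
```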
Memory management: In a computer, both the CPU and the I/O devices interact
with the memory. When a program needs to be executed it is loaded onto the main
memory till the execution is completed. Thereafter that memory space is freed and
is available for other programs. The common memory management techniques used
by the operating system are Partitioning and Virtual Memory.
Partitioning: The total memory is divided into various partitions of the same size or
different sizes. This helps to accommodate a number of programs in the memory.
The partition can be fixed i.e. remains the same for all the programs in the memory
or variable i.e. memory is allocated when a program is loaded onto the memory. The
latter approach causes less wastage of memory but in due course of time, it may
become fragmented.
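Fixed partitioning with a first-fit placement policy (one common choice among several) can be sketched as follows, using made-up partition and job sizes:

```python
# Hedged sketch: first-fit allocation into fixed memory partitions.
# Leftover space inside a partition is internal fragmentation.

def first_fit(partitions, jobs):
    """Return, for each job, the index of its partition (or None)."""
    free = list(partitions)
    placement = []
    for size in jobs:
        for i, cap in enumerate(free):
            if cap >= size:
                placement.append(i)
                free[i] = 0          # fixed partitioning: whole partition used
                break
        else:
            placement.append(None)   # nothing big enough is available
    return placement

# Job of size 417 finds no free partition large enough and must wait.
print(first_fit([100, 500, 200, 300], [212, 417, 112, 426]))
```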
Virtual Memory: This is a technique used by operating systems that allows the user
to load programs that are larger than the main memory of the computer. In this
technique, the program can execute even if it cannot be completely loaded
into main memory, leading to efficient memory utilization.
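The core idea, that only a subset of a program's pages needs to be resident at once, can be illustrated with a page-replacement simulation. The sketch below uses FIFO replacement and a hypothetical reference string; real systems typically use more sophisticated policies such as LRU approximations:

```python
# Hedged sketch of demand paging: count page faults for a reference
# string when only n_frames pages fit in memory, with FIFO eviction.
from collections import deque

def fifo_page_faults(references, n_frames):
    frames = deque()
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1              # page fault: load from backing store
            if len(frames) == n_frames:
                frames.popleft()     # evict the oldest resident page
            frames.append(page)
    return faults

print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # → 9
```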
File Management: The operating system manages the files, folders, and directory
systems on a computer. Any data on a computer is stored in the form of files and the
operating system keeps the information about all of them using the File Allocation
Table (FAT), or a data structure called an inode in Linux. The FAT stores general
information about files like filename, type (text or binary), size, starting address,
and access mode (sequential/indexed sequential/direct/relative). The file manager of
the operating system helps to create, edit, copy, allocate memory to the files, and
also updates the FAT. The operating system also takes care that files are opened
with proper access rights to read or edit them.
Operating System Services
The main purpose of the operating system is to provide an environment for the
execution of programs. Thus, an operating system provides certain services to
programs and the users of those programs.
1. Program Execution
• The operating system provides a convenient environment where users can
run their programs.
• The operating system allocates memory to programs and loads
them into an appropriate location so that they can execute. The users do not have to
worry about these tasks.
2. I/O Operations
• In order to execute, a program usually requires I/O operations. For
example, it may need to read a file and print the output.
• Users cannot control I/O devices directly while these operations are performed.
• All I/O is performed under the control of the operating system.
3. Communication
• The various processes executing on a system may need to communicate in
order to exchange data or information.
• The operating system provides this communication by using a facility
for message passing. In message passing, packets of information are moved
between processes by the operating system.
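On a POSIX system, message passing between two processes can be sketched with a pipe, a kernel-managed channel. This example assumes Linux or macOS, where `os.fork` is available:

```python
# POSIX-only sketch of message passing: a parent and child process
# exchange bytes through a kernel-managed pipe. Assumes os.fork works.
import os

r, w = os.pipe()                     # the kernel creates the channel
pid = os.fork()                      # duplicate the current process
if pid == 0:                         # child: send one message, then exit
    os.close(r)
    os.write(w, b"hello from child")
    os._exit(0)
else:                                # parent: receive the message
    os.close(w)
    msg = os.read(r, 1024)
    os.waitpid(pid, 0)               # reap the child
    print(msg.decode())
```

The pipe is owned by the kernel, not by either process, which is exactly the role the operating system plays in inter-process communication.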
Types of Operating Systems
There are various types of operating systems, as follows:

Batch OS

In this system, the OS does not forward jobs and tasks to the CPU directly. It
functions by combining similar job types into a single group, also referred to as
a “batch”; hence the name batch operating system. Payroll systems, bank
statements, etc. are some examples.

Time-Shared OS

A time-shared OS runs multiple tasks seemingly simultaneously: the system
switches between tasks as needed, so each task gets its own share of CPU time.
For this reason, it is also called multitasking. The amount of time each task
receives is known as the quantum.

Distributed OS

There are multiple CPUs present in this system. The OS distributes tasks equally
among all the processors. There is no shared memory or clock between the
processors; the OS manages all communication through various communication
channels. LOCUS is one example.

Network OS

In these OSs, a server is connected to a variety of systems. This setup makes it
possible to share resources like files, printers, applications, etc., and also
provides the ability to manage these resources.
Features of OS
An operating system carries out a variety of tasks. A few of its features are:

1. Memory Control
This is the management of primary (main) memory. The program being run must
reside in main memory, and more than one program may be active at once, so
memory must be managed. The operating system:
• Allocates and releases memory.
• Keeps track of who uses which area of primary memory, and how often.
• Enables memory distribution during multiprocessing.

2. Management and Scheduling of Processors

When a system has multiple processes running, the OS determines how and when
each process will use the CPU. So, CPU Scheduling is another name for it.

3. File Management

The files on a system are stored in different directories. The OS:


1. Keeps records of the status and locations of files.
2. Is responsible for the allocation and deallocation of resources.
An operating system is software that acts as an intermediary between the user and
computer hardware. It is a program with the help of which we are able to run
various applications. It is the one program that is running all the time. Every
computer must have an operating system to smoothly execute other programs.
The OS coordinates the use of the hardware and application programs for various
users. It provides a platform for other application programs to work. The operating
system is a set of special programs that run on a computer system that allows it to
work properly. It controls input-output devices, execution of programs, managing
files, etc.
Services of Operating System
1. Program execution
2. Input Output Operations
3. Communication between Process
4. File Management
5. Memory Management
6. Process Management
7. Security and Privacy
8. Resource Management
9. User Interface
10. Networking
11. Error handling
12. Time Management

Program Execution

It is the Operating System that manages how a program is going to be executed. It


loads the program into the memory after which it is executed. The order in which
they are executed depends on the CPU Scheduling Algorithms. A few are FCFS,
SJF, etc. When programs are in execution, the Operating System also handles
deadlocks, i.e., situations where processes block each other indefinitely while
waiting for resources. The Operating
System is responsible for the smooth execution of both user and system programs.
The Operating System utilizes various resources available for the efficient running
of all types of functionalities.

Input Output Operations

Operating System manages the input-output operations and establishes


communication between the user and device drivers. Device drivers are software
associated with hardware and managed by the OS so that devices stay properly in
sync. The OS also provides programs with access to input-output devices
when needed.

Communication between Processes

The Operating system manages the communication between processes.


Communication between processes includes data transfer among them. If the
processes are not on the same computer but connected through a computer network,
then also their communication is managed by the Operating System itself.

File Management

The operating system helps in managing files also. If a program needs access to a
file, it is the operating system that grants access. These permissions include read-
only, read-write, etc. It also provides a platform for the user to create, and delete
files. The Operating System is responsible for making decisions regarding the
storage of all types of data and files, e.g., on a floppy disk, hard disk, pen drive,
etc. The Operating System decides how the data should be manipulated and stored.

Memory Management

Let’s understand memory management by the OS in a simple way. Imagine a cricket team
with a limited number of players. The team manager (OS) decides whether an
upcoming player will be in the playing 11, in the squad of 15, or not included in
the team at all, based on his performance. In the same way, the OS first checks
whether an incoming program fulfils all requirements to get memory space; if so,
it checks how much memory space will be sufficient for the program and then loads
the program into memory at a certain location. It thus prevents programs from
using unnecessary memory.

Process Management

Let’s understand process management in a unique way. Imagine our kitchen stove
as the CPU, where all cooking (execution) really happens, and the chef as the OS,
who uses the kitchen stove (CPU) to cook different dishes (programs). The chef (OS)
has to cook many dishes (programs), so he ensures that no particular dish (program)
takes an unnecessarily long time and that all dishes (programs) get a chance to be
cooked (executed). The chef (OS) schedules time for all the dishes (programs) so
that the kitchen (the whole system) runs smoothly, and thus cooks (executes) all
the different dishes (programs) efficiently.

Security and Privacy

• Security: The OS keeps our computer safe from unauthorized users by adding
a security layer to it. Security is essentially a layer of protection that shields
the computer from threats like viruses and hackers. The OS provides defenses
such as firewalls and anti-virus software, ensuring the safety of the computer
and of personal information.

• Privacy: The OS gives us the ability to keep essential information hidden,
like having a lock on our door that only we can open. In other words, it respects
our secrets and provides facilities to keep them safe.

Resource Management

System resources are shared between various processes. It is the Operating system
that manages resource sharing. It also manages the CPU time among processes
using CPU Scheduling Algorithms. It also helps in the memory management of the
system. It also controls input-output devices. The OS also ensures the proper use of
all the resources available by deciding which resource to be used by whom.

User Interface

A user interface is essential, and all operating systems provide one. Users
interact with the operating system either through a command-line interface or
through a graphical user interface (GUI). In a command-line interface, the
command interpreter executes each user-specified command.
A GUI offers the user a mouse-based window and menu system as an interface.

Networking

This service enables communication between devices on a network, such as


connecting to the internet, sending and receiving data packets, and managing
network connections.

Error Handling

The Operating System handles errors occurring in the CPU, in input-output
devices, etc. It ensures that errors do not occur frequently, fixes them when they
do, and prevents processes from reaching a deadlock. It also watches for any
errors or bugs that may occur during any task. A well-secured OS can additionally
act as a countermeasure, preventing breaches of the computer system from external
sources and handling them when they occur.
Time Management

Imagine a traffic light as the OS. It tells all the cars (programs) whether they
should stop (red: a waiting queue), get ready (yellow: the ready queue), or move
(green: under execution), and this control changes after a certain interval of
time on each side of the road (computer system) so that the cars (programs) from
all sides of the road move smoothly, without congestion.
Introduction of Process Management
A process is a program in execution. For example, when we write a program in C or
C++ and compile it, the compiler creates binary code. The original code and binary
code are both programs. When we actually run the binary code, it becomes a process.
A process is an ‘active’ entity instead of a program, which is considered a ‘passive’
entity. A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple instances begin
(multiple processes are created).
Process management includes various tools and techniques such as process mapping,
process analysis, process improvement, process automation, and process control. By
applying these tools and techniques, organizations can streamline their processes,
eliminate waste, and improve productivity. Overall, process management is a critical
aspect of modern business operations and can help organizations achieve their goals
and stay competitive in today’s rapidly changing marketplace.
What is Process Management?
If the operating system supports multiple users, then these services are very
important. The operating system has to keep track of all processes, schedule
them, and dispatch them one after another. Meanwhile, each user
should feel that he has full control of the CPU. Process management refers to the
techniques and strategies used by organizations to design, monitor, and control their
business processes to achieve their goals efficiently and effectively. It involves
identifying the steps involved in completing a task, assessing the resources required
for each step, and determining the best way to execute the task.
Process management can help organizations improve their operational efficiency,
reduce costs, increase customer satisfaction, and maintain compliance with regulatory
requirements. It involves analyzing the performance of existing processes, identifying
bottlenecks, and making changes to optimize the process flow.
Some of the systems call in this category are as follows.
• Create a child process identical to the parent.
• Terminate a process
• Wait for a child process to terminate
• Change the priority of the process
• Block the process
• Ready the process
• Dispatch a process
• Suspend a process
• Resume a process
• Delay a process
• Fork a process
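Several of the calls listed above have direct POSIX counterparts. The sketch below (Linux/macOS only, where `os.fork` exists) creates a child, terminates it with a status code, and waits for it in the parent:

```python
# POSIX-only sketch: fork a child ("create a child process identical to
# the parent"), terminate it, and wait for it from the parent.
import os

pid = os.fork()
if pid == 0:                        # child branch: pid is 0 here
    os._exit(7)                     # terminate with exit status 7
child, status = os.waitpid(pid, 0)  # parent blocks until the child ends
print(child == pid, os.WEXITSTATUS(status))
```

Other calls in the list, such as changing priority, correspond to interfaces like POSIX `nice`/`setpriority`, which are omitted here for brevity.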

What Does a Process Look Like in Memory?

A process in memory is laid out as follows:

Explanation of Process
• Text Section: Contains the compiled program code. The current activity is
represented by the value of the Program Counter.
• Stack: The stack contains temporary data, such as function parameters, returns
addresses, and local variables.
• Data Section: Contains the global variables.
• Heap Section: Memory dynamically allocated to the process during its run time.
Key Components of Process Management
Below are some key component of process management.
• Process mapping: Creating visual representations of processes to understand
how tasks flow, identify dependencies, and uncover improvement opportunities.
• Process analysis: Evaluating processes to identify bottlenecks, inefficiencies,
and areas for improvement.
• Process redesign: Making changes to existing processes or creating new ones
to optimize workflows and enhance performance.
• Process implementation: Introducing the redesigned processes into the
organization and ensuring proper execution.
• Process monitoring and control: Tracking process performance, measuring
key metrics, and implementing control mechanisms to maintain efficiency and
effectiveness.
Importance of Process Management System
It is critical to comprehend the significance of process management for any manager
overseeing a firm. It does more than just make workflows smooth. Process
Management makes sure that every part of business operations moves as quickly as
possible.
By implementing business process management, we can avoid errors caused by
inefficient human labor and cut down on time lost on repetitive operations. It also
keeps data loss and process step errors at bay. Additionally, process management
guarantees that resources are employed effectively, increasing the cost-effectiveness
of our company. Process management not only makes business operations better, but
it also ensures that our procedures meet the needs of our clients. This raises
income and improves customer satisfaction.
Characteristics of a Process
A process has the following attributes.
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU)
• Accounting information: Amount of CPU used for process execution, time
limits, execution ID, etc.
• I/O status information: For example, devices allocated to the process, open
files, etc
• CPU scheduling information: For example, Priority (Different processes
may have different priorities, for example, a shorter process assigned high priority
in the shortest job first scheduling)
All of the above attributes of a process are also known as the context of the process.
Every process has its own process control block (PCB), i.e. each process will have a
unique PCB. All of the above attributes are part of the PCB.
States of Process
A process is in one of the following states:
• New: A newly created process, or a process being created.
• Ready: After the creation process moves to the Ready state, i.e. the process is
ready for execution.
• Run: Currently running process in CPU (only one process at a time can be
under execution in a single processor)
• Wait (or Block): When a process requests I/O access.
• Complete (or Terminated): The process completed its execution.
• Suspended Ready: When the ready queue becomes full, some processes are
moved to a suspended ready state.
• Suspended Block: When the waiting queue becomes full, some waiting processes
are moved to a suspended block state.
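The transitions between these states can be written down as a small table. The event names below are illustrative, not standard kernel terminology:

```python
# Illustrative sketch: the process lifecycle as a state machine.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "run",
    ("run", "timeout"): "ready",        # preempted, back to the ready queue
    ("run", "io_request"): "wait",
    ("wait", "io_done"): "ready",
    ("run", "exit"): "terminated",
}

def step(state, event):
    # Illegal events leave the state unchanged in this simplified model.
    return TRANSITIONS.get((state, event), state)

s = "new"
for e in ["admit", "dispatch", "io_request", "io_done", "dispatch", "exit"]:
    s = step(s, e)
print(s)    # the process ends up terminated
```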
Context Switching of Process
The process of saving the context of one process and loading the context of another
process is known as Context Switching. In simple terms, it is like loading and
unloading the process from the running state to the ready state.

When Does Context Switching Happen?

1. When a high-priority process comes to a ready state (i.e. with higher priority than
the running process).
2. An Interrupt occurs.
3. A switch between user and kernel mode occurs (though this does not always require a context switch).
4. Preemptive CPU scheduling is used.

Context Switch vs Mode Switch

A mode switch occurs when the CPU privilege level is changed, for example when a
system call is made or a fault occurs. The kernel works in a more privileged mode
than a standard user task. If a user process wants to access things that are only
accessible to the kernel, a mode switch must occur. The currently executing process
need not be changed during a mode switch. A mode switch must typically occur before
a process context switch can occur. Only the kernel can cause a context switch.
CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time or spends more time in the running
state. An I/O-bound process requires more I/O time and less CPU time. An I/O-bound
process spends more time in the waiting state.
Process scheduling is an integral part of process management in the operating
system. It refers to the mechanism the operating system uses to determine which
process to run next. The goal of process scheduling is to improve overall system
performance by maximizing CPU utilization, minimizing execution time, and
improving system response time.
Process Scheduling Algorithms
The operating system can use different scheduling algorithms to schedule processes.
Here are some commonly used scheduling algorithms:
• First-come, first-served (FCFS): This is the simplest scheduling algorithm,
where the process is executed on a first-come, first-served basis. FCFS is non-
preemptive, which means that once a process starts executing, it continues until it
is finished or waiting for I/O.
• Shortest Job First (SJF): SJF selects the process with the shortest burst time.
The burst time is the time a process takes to complete its execution. SJF
minimizes the average waiting time of processes.
• Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives
each process a fixed amount of CPU time in turn. If a process does not complete
its execution within the allotted time, it is preempted and added to the end of
the queue. RR ensures fair distribution of CPU time to all processes and avoids
starvation.
• Priority Scheduling: This scheduling algorithm assigns priority to each
process and the process with the highest priority is executed first. Priority can be
set based on process type, importance, or resource requirements.
• Multilevel queue: This scheduling algorithm divides the ready queue into
several separate queues, each queue having a different priority. Processes are
queued based on their priority, and each queue uses its own scheduling algorithm.
This scheduling algorithm is useful in scenarios where different types of processes
have different priorities.
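As one concrete example, Round Robin is easy to simulate. The sketch below (hypothetical burst times, all processes ready at time 0) returns each process's completion time for a given quantum:

```python
# Hedged sketch: Round Robin scheduling with a fixed quantum, assuming
# all processes are in the ready queue at time 0.
from collections import deque

def round_robin(bursts, quantum):
    """Return each process's completion time under Round Robin."""
    ready = deque(enumerate(bursts))      # (pid, remaining burst time)
    done = [0] * len(bursts)
    clock = 0
    while ready:
        pid, rem = ready.popleft()
        run = min(quantum, rem)
        clock += run
        rem -= run
        if rem:
            ready.append((pid, rem))      # quantum expired: back of queue
        else:
            done[pid] = clock             # process finished
    return done

print(round_robin([5, 3, 1], quantum=2))  # → [9, 8, 5]
```

Note how the short process (burst 1) finishes early even though it arrived last in the queue, which is the fairness property RR is chosen for.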
Advantages of Process Management
• Improved Efficiency: Process management can help organizations identify
bottlenecks and inefficiencies in their processes, allowing them to make changes
to streamline workflows and increase productivity.
• Cost Savings: By identifying and eliminating waste and inefficiencies,
process management can help organizations reduce costs associated with their
business operations.
• Improved Quality: Process management can help organizations improve the
quality of their products or services by standardizing processes and reducing
errors.
• Increased Customer Satisfaction: By improving efficiency and quality,
process management can enhance the customer experience and increase
satisfaction.
• Compliance with Regulations: Process management can help organizations
comply with regulatory requirements by ensuring that processes are properly
documented, controlled, and monitored.
Disadvantages of Process Management
• Time and Resource Intensive: Implementing and maintaining process
management initiatives can be time-consuming and require significant resources.
• Resistance to Change: Some employees may resist changes to established
processes, which can slow down or hinder the implementation of process
management initiatives.
• Overemphasis on Process: Overemphasis on the process can lead to a lack of
focus on customer needs and other important aspects of business operations.
• Risk of Standardization: Standardizing processes too much can limit
flexibility and creativity, potentially stifling innovation.
• Difficulty in Measuring Results: Measuring the effectiveness of process
management initiatives can be difficult, making it challenging to determine their
impact on organizational performance.
Process Table and Process Control Block (PCB)
While creating a process, the operating system performs several operations. To
identify the processes, it assigns a process identification number (PID) to each process.
As the operating system supports multi-programming, it needs to keep track of all the
processes. For this task, the process control block (PCB) is used to track the process’s
execution status. Each block of memory contains information about the process state,
program counter, stack pointer, status of opened files, scheduling algorithms, etc.
All this information is required and must be saved when the process is switched from
one state to another. When the process makes a transition from one state to another,
the operating system must update information in the process’s PCB. A process control
block (PCB) contains information about the process, i.e. registers, quantum, priority,
etc. The process table is an array of PCBs; logically, it contains a PCB for each of the current processes in the system.
1. Pointer: It is a stack pointer that is required to be saved when the process is
switched from one state to another to retain the current position of the process.
2. Process state: It stores the respective state of the process.
3. Process number: Every process is assigned a unique id known as process ID
or PID which stores the process identifier.
4. Program counter: It stores the counter, which contains the address of the next instruction that is to be executed for the process.
5. Register: These are the CPU registers, which include the accumulator, base register, and general-purpose registers.
6. Memory limits: This field contains information about the memory management system used by the operating system. This may include page tables, segment tables, etc.
7. Open files list: This information includes the list of files opened for a process.
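The fields listed above can be pictured as a single data structure. The sketch below is illustrative only; real kernels implement the PCB as a C structure (for example, Linux's task_struct), and the field names here are simplified assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified Process Control Block mirroring the fields listed above."""
    pid: int                           # 3. process number / identifier
    state: str = "new"                 # 2. process state
    program_counter: int = 0           # 4. address of the next instruction
    registers: dict = field(default_factory=dict)   # 5. saved CPU registers
    memory_limits: tuple = (0, 0)      # 6. e.g. base/limit of the address space
    open_files: list = field(default_factory=list)  # 7. open files list

pcb = PCB(pid=42)
pcb.state = "ready"                    # the OS updates the PCB on a state change
pcb.registers["accumulator"] = 7
print(pcb.pid, pcb.state)  # 42 ready
```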
Additional Points to Consider for Process Control Block (PCB)
• Interrupt handling: The PCB also contains information about the interrupts
that a process may have generated and how they were handled by the operating
system.
• Context switching: The process of switching from one process to another is
called context switching. The PCB plays a crucial role in context switching by
saving the state of the current process and restoring the state of the next process.
• Real-time systems: Real-time operating systems may require additional
information in the PCB, such as deadlines and priorities, to ensure that time-
critical processes are executed in a timely manner.
• Virtual memory management: The PCB may contain information about a
process’s virtual memory management, such as page tables and page fault
handling.
• Inter-process communication: The PCB can be used to facilitate inter-
process communication by storing information about shared resources and
communication channels between processes.
• Fault tolerance: Some operating systems may use multiple copies of the PCB
to provide fault tolerance in case of hardware failures or software errors.
Advantages-
1. Efficient process management: The process table and PCB provide an
efficient way to manage processes in an operating system. The process table
contains all the information about each process, while the PCB contains the
current state of the process, such as the program counter and CPU registers.
2. Resource management: The process table and PCB allow the operating
system to manage system resources, such as memory and CPU time, efficiently.
By keeping track of each process’s resource usage, the operating system can
ensure that all processes have access to the resources they need.
3. Process synchronization: The process table and PCB can be used to
synchronize processes in an operating system. The PCB contains information
about each process’s synchronization state, such as its waiting status and the
resources it is waiting for.
4. Process scheduling: The process table and PCB can be used to schedule
processes for execution. By keeping track of each process’s state and resource
usage, the operating system can determine which processes should be executed
next.
Disadvantages-
1. Overhead: The process table and PCB can introduce overhead and reduce
system performance. The operating system must maintain the process table and
PCB for each process, which can consume system resources.
2. Complexity: The process table and PCB can increase system complexity and
make it more challenging to develop and maintain operating systems. The need to
manage and synchronize multiple processes can make it more difficult to design
and implement system features and ensure system stability.
3. Scalability: The process table and PCB may not scale well for large-scale
systems with many processes. As the number of processes increases, the process
table and PCB can become larger and more difficult to manage efficiently.
4. Security: The process table and PCB can introduce security risks if they are
not implemented correctly. Malicious programs can potentially access or modify
the process table and PCB to gain unauthorized access to system resources or
cause system instability.
Miscellaneous accounting and status data: This field includes information about the amount of CPU used, time constraints, job or process numbers, etc. The process control block also stores the register contents, known as the execution context of the processor, saved when the process was blocked from running. This execution context enables the operating system to restore the process's execution context when the process returns to the running state. When the process makes a transition from one state to another, the operating system updates the information in the process's PCB. The operating system maintains pointers to each process's PCB in a process table so that it can access the PCB quickly.
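The process table described above can be sketched as a mapping from PID to a PCB-like record, so the OS can reach any process's PCB quickly. Field names are hypothetical:

```python
# Illustrative process table: maps PID -> PCB-like record (field names hypothetical).
process_table = {}

def create_process(pid):
    """On creation, the OS allocates a PCB and registers it in the process table."""
    process_table[pid] = {"state": "new", "program_counter": 0}

def update_state(pid, new_state):
    """On every state transition, the OS updates the process's PCB entry."""
    process_table[pid]["state"] = new_state

create_process(1)
create_process(2)
update_state(1, "ready")
print(process_table[1]["state"])  # ready
print(len(process_table))         # 2
```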
Operations on Processes
A process is an activity of executing a program. Basically, it is a program under execution. Every process needs certain resources to complete its task.
Operation on a Process
The execution of a process is a complex activity involving various operations. The following operations are performed during the execution of a process:
Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system,
the user, or the old process itself. There are several events that lead to process creation. Some of these events are the following:
1. When we start the computer, the system creates several background processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. The batch system takes initiation of a batch job.
Scheduling/Dispatching
This is the event or activity in which the state of the process changes from ready to running: the operating system moves the process from the ready state into the running state. Dispatching is done by the operating system when resources are free or the process has a higher priority than the ongoing process. There are various other cases in which a process in the running state is preempted and a process in the ready state is dispatched by the operating system.
Blocking
When a process invokes an input-output system call that blocks it, the operating system puts the process into blocked mode, in which the process waits for the input-output to complete. Hence, on the demand of the process itself, the operating system blocks the process and dispatches another process to the processor. Thus, in the process-blocking operation, the operating system puts the process in a 'waiting' state.
Preemption
When a timeout occurs, meaning the process has not finished within its allotted time interval and the next process is ready to execute, the operating system preempts the process. This operation is only valid where CPU scheduling supports preemption. It typically happens in priority scheduling, where the arrival of a high-priority process preempts the ongoing process. Hence, in the process preemption operation, the operating system puts the process in a 'ready' state.
Process Termination
Process termination is the activity of ending the process. In other words, process termination is the release of the computer resources taken by the process for its execution. As with creation, several events may lead to process termination. Some of them are:
1. The process completes its execution fully and it indicates to the OS that it has
finished.
2. The operating system itself terminates the process due to service errors.
3. There may be a problem in hardware that terminates the process.
4. One process can be terminated by another process.
Process Schedulers in Operating System
In computing, a process is the instance of a computer program that is being executed
by one or many threads. Scheduling is important in many different computer
environments. One of the most important areas of scheduling is which programs will
work on the CPU. This task is handled by the Operating System (OS) of the computer
and there are many different ways in which we can choose to configure programs.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process based on a
particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
Process scheduler
Categories of Scheduling
Scheduling falls into one of two categories:
• Non-preemptive: In this case, a process's resources cannot be taken away before the process has finished running. Resources are switched only when a running process terminates or transitions to a waiting state.
• Preemptive: In this case, the OS assigns resources to a process for a predetermined period. The process switches from the running state to the ready state, or from the waiting state to the ready state, during resource allocation. This switching happens because the CPU may give other processes priority and substitute the higher-priority process for the currently active one.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
It brings the new process to the 'Ready State'. It controls the degree of multiprogramming, i.e., the number of processes present in a ready state at any point in time. It is important that the long-term scheduler makes a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks spend much of their time in input and output operations, while CPU-bound processes spend most of their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two. It operates at a high level and is typically used in batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it into the running state. Note: the short-term scheduler only selects the process to schedule; it does not itself load the process into the running state. This is where all the scheduling algorithms are used. The CPU scheduler is responsible for ensuring there is no starvation due to processes with high burst times.
Short Term Scheduler
The dispatcher is responsible for loading the process selected by the short-term scheduler onto the CPU (moving it from the ready state to the running state). Context switching is done by the dispatcher. A dispatcher does the following:
• Switching context.
• Switching to user mode.
• Jumping to the proper location in the newly loaded program.
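The three dispatcher duties above can be sketched as follows. This is a toy model with hypothetical field names, not an actual OS dispatcher:

```python
def dispatch(current, next_proc, cpu):
    """Toy dispatcher: hand the CPU from `current` to `next_proc`."""
    # 1. Switching context: save the old process's position, mark it ready again.
    if current is not None:
        current["saved_pc"] = cpu["pc"]
        current["state"] = "ready"
    # Load where the new process should continue (or start) executing.
    cpu["pc"] = next_proc.get("saved_pc", next_proc["entry_point"])
    # 2. Switching to user mode before handing control to the user program.
    cpu["mode"] = "user"
    # 3. "Jumping" to the proper location: the new program counter takes effect.
    next_proc["state"] = "running"
    return next_proc

cpu = {"pc": 0, "mode": "kernel"}
p1 = {"entry_point": 100, "state": "ready"}
running = dispatch(None, p1, cpu)
print(cpu["pc"], cpu["mode"], running["state"])  # 100 user running
```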
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up. It is helpful in maintaining a balance between the I/O-bound and CPU-bound processes. It reduces the degree of multiprogramming.
Medium Term Scheduler
Some Other Schedulers
• I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use various
algorithms to determine the order in which I/O operations are executed, such
as FCFS (First-Come, First-Served) or RR (Round Robin).
• Real-time schedulers: In real-time systems, real-time schedulers ensure that
critical tasks are completed within a specified time frame. They can prioritize and
schedule tasks using various algorithms such as EDF (Earliest Deadline First) or
RM (Rate Monotonic).
Comparison Among Schedulers
• Nature: The long-term scheduler is a job scheduler, the short-term scheduler is a CPU scheduler, and the medium-term scheduler is a process-swapping scheduler.
• Speed: The long-term scheduler is generally slower than the short-term scheduler, which is the fastest of the three; the medium-term scheduler's speed lies in between.
• Degree of multiprogramming: The long-term scheduler controls the degree of multiprogramming, the short-term scheduler gives less control over it, and the medium-term scheduler reduces it.
• Time-sharing systems: The long-term scheduler is barely present or nonexistent in time-sharing systems, the short-term scheduler is a minimal part of them, and the medium-term scheduler is a component of them.
• Function: The long-term scheduler selects the processes that are ready to execute and brings them into memory; the medium-term scheduler can re-introduce a swapped-out process into memory so that its execution can be continued.
Two-State Process Model
The terms "running" and "non-running" describe the two states in the two-state process model.
1. Running: The process that is currently being executed on the CPU is in the running state.
2. Not running: Processes that are not currently running are kept in a queue, awaiting execution. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. This is where the dispatcher comes in: when a process is stopped, it is moved to the back of the waiting queue, or discarded if it has completed (whether it succeeded or failed). In either case, the dispatcher then chooses the next process to run from the queue.
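The two-state queue behaviour described above can be modelled with a FIFO queue. Python's deque stands in for the linked list here, and the process names are hypothetical:

```python
from collections import deque

not_running = deque()   # queue of processes awaiting execution
running = None          # the single process currently on the CPU

def admit(proc):
    """A process enters the system and waits in the not-running queue."""
    not_running.append(proc)

def dispatch():
    """The dispatcher takes the next process from the front of the queue."""
    global running
    running = not_running.popleft() if not_running else None
    return running

def pause():
    """A stopped (but unfinished) process moves to the back of the queue."""
    global running
    if running is not None:
        not_running.append(running)
        running = None

admit("P1")
admit("P2")
dispatch()         # P1 starts running
pause()            # P1 is paused and requeued behind P2
print(dispatch())  # P2
```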
Context Switching
In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in
the Process Control block. A context switcher makes it possible for multiple processes
to share a single CPU using this method. A multitasking operating system must
include context switching among its features.
The state of the currently running process is saved into its process control block when the scheduler switches the CPU from executing one process to another. The state used to set the program counter, registers, etc. for the process that will run next is then loaded from that process's own PCB. After that, the second process can start running.
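The save/restore sequence in the paragraph above can be sketched like this, with the CPU and PCBs modelled as plain dictionaries (the field names are assumptions for illustration):

```python
def context_switch(old_pcb, new_pcb, cpu):
    """Save the running process's context into its PCB, then load the next one's."""
    old_pcb["context"] = dict(cpu)   # save program counter, registers, etc.
    old_pcb["state"] = "ready"
    cpu.clear()                      # load the saved context of the next process
    cpu.update(new_pcb.get("context", {"pc": 0, "registers": {}}))
    new_pcb["state"] = "running"

cpu = {"pc": 500, "registers": {"acc": 7}}
p_old = {"state": "running"}
p_new = {"state": "ready", "context": {"pc": 120, "registers": {"acc": 0}}}
context_switch(p_old, p_new, cpu)
print(cpu["pc"])               # 120 -- p_new resumes where it left off
print(p_old["context"]["pc"])  # 500 -- p_old can be resumed later
```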
The context of a process, saved in and restored from its PCB, includes:
• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
Context Switching in Operating System
An operating system is a program loaded onto a computer that manages all the other programs running on it, including application programs. In other words, the OS is an interface between the user and the computer hardware.
In this section, we will learn what context switching is in an operating system, see how it works, and understand the triggers of context switching.
What is Context Switching in an Operating System?
Context switching in an operating system involves saving the context or state of a
running process so that it can be restored later, and then loading the context or state of
another process and running it.
Context switching refers to the method the system uses to switch the CPU from one process to another so that each process can perform its job.
Example of Context Switching
Suppose there are N processes in the system, each represented by a Process Control Block (PCB). One process is running on the CPU to do its job, while other processes, ordered by priority, queue up to use the CPU and complete their jobs.
Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context
switch. When a context switch occurs, the kernel saves the context of the old process
in its PCB and loads the saved context of the new process scheduled to run. Context-
switch time is pure overhead because the system does no useful work while switching.
Switching speed varies from machine to machine, depending on the memory speed,
the number of registers that must be copied, and the existence of special instructions
(such as a single instruction to load or store all registers). A typical speed is a few
milliseconds. Context-switch times are highly dependent on hardware support. For
instance, some processors (such as the Sun UltraSPARC) provide multiple sets of
registers. A context switch here simply requires changing the pointer to the current
register set. Of course, if there are more active processes than there are register sets,
the system resorts to copying register data to and from memory, as before. Also, the
more complex the operating system, the greater the amount of work that must be done
during a context switch.
Need of Context Switching
Context switching enables all processes to share a single CPU to finish their execution while the status of each task is stored. When a process is reloaded into the system, its execution resumes at the same point where it was stopped.
The operating system’s need for context switching is explained by the reasons listed
below.
• One process does not directly switch to another within the system. Context
switching makes it easier for the operating system to use the CPU’s resources to
carry out its tasks and store its context while switching between multiple
processes.
• Context switching allows a single CPU to handle requests from multiple processes without the need for any additional processors.
Context Switching Triggers
The three different categories of context-switching triggers are as follows.
• Interrupts
• Multitasking
• User/Kernel switch
Interrupts: When the CPU requests that data be read from a disk, or any other interrupt occurs, a context switch lets the CPU run the interrupt handler so the interruption is serviced quickly, after which the interrupted process can resume.
Multitasking: The ability for a process to be switched from the CPU so that another
process can run is known as context switching. When a process is switched, the
previous state is retained so that the process can continue running at the same spot in
the system.
Kernel/User Switch: This trigger is used when the OS needs to switch between user mode and kernel mode.
What is a Process Control Block (PCB)?
The Process Control Block (PCB), also known as a Task Control Block, represents a process in the operating system. A PCB is a data structure used by the computer to store all information about a process; it is also called the process descriptor. When a process is created (started or installed), the operating system creates a corresponding PCB.
State Diagram of Context Switching
Working of Context Switching
When a context switch between two processes occurs, for example, when a higher-priority process appears in the ready queue, the following steps take place:
• The state of the current process is saved so that it can be rescheduled later.
• The process state, including registers and operating-system-specific information, is stored in the PCB.
• The PCB is kept in kernel memory.
• The saved process's PCB is placed in the queue of processes ready to run.
• The operating system stops executing the current process and selects another process from the queue by consulting its PCB.
• The program counter and other state are loaded from the selected process's PCB, and execution continues in the selected process.
• Process or thread priority values can affect which process is selected from the queue.
Preemptive and Non-Preemptive Scheduling
You will discover the distinction between preemptive and non-preemptive scheduling
in this article. But first, you need to understand preemptive and non-preemptive
scheduling before going over the differences.
Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources (mainly CPU
cycles) are allocated to the process for a limited amount of time and then taken away,
and the process is again placed back in the ready queue if that process still has CPU
burst time remaining. That process stays in the ready queue till it gets its next chance
to execute.
Algorithms based on preemptive scheduling are Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version), etc.
Preemptive scheduling has a number of advantages and disadvantages. The following are preemptive scheduling's benefits and drawbacks:
Advantages
1. Because a process may not monopolize the processor, it is a more reliable method.
2. High-priority tasks can interrupt ongoing work, so urgent events are handled promptly.
3. The average response time is improved.
4. Utilizing this method in a multiprogramming environment is more advantageous.
5. The operating system makes sure that every process gets a fair share of CPU time.
Disadvantages
1. It consumes more computational resources.
2. Suspending the running process, changing the context, and dispatching the new incoming process all take extra time.
3. The low-priority process would have to wait if multiple high-priority
processes arrived at the same time.
Non-Preemptive Scheduling
Non-preemptive Scheduling is used when a process terminates, or a process switches
from running to the waiting state. In this scheduling, once the resources (CPU cycles)
are allocated to a process, the process holds the CPU till it gets terminated or reaches
a waiting state. Non-preemptive scheduling does not interrupt a process running on the CPU in the middle of its execution. Instead, it waits till the process completes its CPU burst time, and then it can allocate the CPU to another process.
Algorithms based on non-preemptive scheduling are Shortest Job First (SJF, in its non-preemptive form), Priority (non-preemptive version), etc.
Non-preemptive scheduling has both advantages and disadvantages. The following are non-preemptive scheduling's benefits and drawbacks:
Advantages
1. It has a minimal scheduling burden.
2. It is a very easy procedure.
3. Less computational resources are used.
4. It has a high throughput rate.
Disadvantages
1. Its response time to processes can be high.
2. Bugs can cause a computer to freeze up.
Key Differences Between Preemptive and Non-Preemptive Scheduling
1. In preemptive scheduling, the CPU is allocated to the processes for a limited
time whereas, in Non-preemptive scheduling, the CPU is allocated to the process
till it terminates or switches to the waiting state.
2. The executing process in preemptive scheduling is interrupted in the middle of execution when a higher-priority process arrives, whereas the executing process in non-preemptive scheduling is not interrupted in the middle of execution and waits until its execution is complete.
3. In preemptive scheduling, there is the overhead of switching the process from the ready state to the running state and vice versa, and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching the process from the running state to the ready state.
4. In preemptive scheduling, if high-priority processes frequently arrive in the ready queue, then a low-priority process has to wait for a long time and may starve. In non-preemptive scheduling, if the CPU is allocated to a process with a larger burst time, then processes with smaller burst times may starve.
5. Preemptive scheduling attains flexibility by allowing the critical processes to
access the CPU as they arrive in the ready queue, no matter what process is
executing currently. Non-preemptive scheduling is called rigid as even if a critical
process enters the ready queue the process running CPU is not disturbed.
6. Preemptive scheduling has to maintain the integrity of shared data, which is why it has an associated cost; this is not the case with non-preemptive scheduling.
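The starvation and waiting-time differences above can be quantified with a small simulation comparing non-preemptive SJF against its preemptive counterpart, SRTF, on the same hypothetical workload:

```python
def srtf_avg_wait(procs):
    """Preemptive SRTF: each time unit, run the ready job with least remaining time."""
    remaining = {name: burst for name, _, burst in procs}
    arrival = {name: arr for name, arr, _ in procs}
    t, finish = 0, {}
    while remaining:
        ready = [n for n in remaining if arrival[n] <= t]
        if not ready:            # CPU idle until the next arrival
            t += 1
            continue
        n = min(ready, key=lambda x: remaining[x])
        remaining[n] -= 1        # run the chosen job for one time unit
        t += 1
        if remaining[n] == 0:
            finish[n] = t
            del remaining[n]
    burst = {name: b for name, _, b in procs}
    waits = [finish[n] - arrival[n] - burst[n] for n in finish]
    return sum(waits) / len(waits)

def sjf_avg_wait(procs):
    """Non-preemptive SJF: once a job starts, it runs to completion."""
    pending, t, waits = list(procs), 0, []
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:
            t += 1
            continue
        job = min(ready, key=lambda p: p[2])
        waits.append(t - job[1])  # waiting time = start time - arrival time
        t += job[2]
        pending.remove(job)
    return sum(waits) / len(waits)

# (name, arrival time, burst time) -- hypothetical workload
procs = [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
print(srtf_avg_wait(procs))  # 6.5
print(sjf_avg_wait(procs))   # 7.75
```

On this workload, preemption lowers the average waiting time, at the price of extra context switches that the simulation does not charge for.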
Comparison Chart
• Basic: In preemptive scheduling, resources (CPU cycles) are allocated to a process for a limited time. In non-preemptive scheduling, once resources are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
• Interrupt: A preemptively scheduled process can be interrupted in between; a non-preemptively scheduled process cannot be interrupted until it terminates itself or its time is up.
• Starvation: In preemptive scheduling, if a high-priority process frequently arrives in the ready queue, a low-priority process may starve. In non-preemptive scheduling, if a process with a long burst time is running on the CPU, a later process with a smaller CPU burst time may starve.
• Overhead: Preemptive scheduling has the overhead of scheduling the processes and of frequent context switching; non-preemptive scheduling has lower overhead since context switching is less frequent.
• Flexibility: Preemptive scheduling is flexible; non-preemptive scheduling is rigid.
• Cost: Preemptive scheduling has an associated cost; non-preemptive scheduling has no such cost.
• CPU utilization: CPU utilization is high in preemptive scheduling and low in non-preemptive scheduling.
• Waiting time: Waiting time is less in preemptive scheduling and high in non-preemptive scheduling.
• Response time: Response time is less in preemptive scheduling and high in non-preemptive scheduling.
• Decision making: In preemptive scheduling, decisions are made by the scheduler based on priority and time-slice allocation. In non-preemptive scheduling, decisions are made by the process itself, and the OS just follows the process's instructions.
• Process control: The OS has greater control over the scheduling of processes under preemptive scheduling and less control under non-preemptive scheduling.
• Examples: Examples of preemptive scheduling are Round Robin and Shortest Remaining Time First. Examples of non-preemptive scheduling are First Come First Serve and Shortest Job First.
CPU Scheduling in Operating Systems
Scheduling of processes/work is done to finish the work on time. CPU scheduling is a process that allows one process to use the CPU while another process is delayed (on standby) due to the unavailability of a resource such as I/O, thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Tutorial on CPU Scheduling Algorithms in Operating System
Whenever the CPU becomes idle, the operating system must select one of the processes in the ready queue to be executed. The selection is carried out by the short-term (CPU) scheduler, which selects from among the processes in memory that are ready to execute and allocates the CPU to one of them.
What is a process?
In computing, a process is the instance of a computer program that is being
executed by one or many threads. It contains the program code and its activity.
Depending on the operating system (OS), a process may be made up of multiple
threads of execution that execute instructions concurrently.
How is process memory used for efficient operation?
The process memory is divided into four sections for efficient operation:
• The text section is composed of compiled program code, which is read in from fixed (non-volatile) storage when the program is launched.
• The data section is made up of global and static variables, allocated and initialized prior to executing main.
• The heap is used for flexible, or dynamic, memory allocation and is managed by calls to new, delete, malloc, free, etc.
• The stack is used for local variables; space on the stack is reserved for local variables when they are declared.
What is Process Scheduling?
Process Scheduling is the process of the process manager handling the removal of an
active process from the CPU and selecting another process based on a specific
strategy.
Process scheduling is an integral part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
• Long term or Job Scheduler
• Short term or CPU Scheduler
• Medium-term Scheduler
Why do we need to schedule processes?
• Scheduling is important in many different computer environments. One of the
most important areas is scheduling which programs will work on the CPU. This
task is handled by the Operating System (OS) of the computer and there are many
different ways in which we can choose to configure programs.
• Process scheduling allows the OS to allocate CPU time to each process. Another important reason to use a process scheduling system is that it keeps the CPU busy at all times, which yields shorter response times for programs.
• Considering that there may be hundreds of programs that need to run, the OS
must launch a program, stop it, switch to another program, and so on. The way
the OS swaps one program for another on the CPU is called "context switching".
By context-switching programs in and out of the available CPUs quickly enough,
the OS can give the user the illusion that all of their programs are running
at once.
• So, given that only one program can run on a given CPU at a time, and that
the OS can swap programs in and out using context switches, how do we choose
which program to run, and when?
• That's where scheduling comes in! First, you pick a metric, for example
"time until completion", defined as the interval between a process entering
the system and the process finishing. Second, you choose a scheduling policy
that optimizes this metric; here, we want our tasks to finish as soon as
possible.
What is the need for CPU scheduling algorithm?
CPU scheduling is the process of deciding which process will own the CPU to use
while another process is suspended. The main function of CPU scheduling is to
ensure that whenever the CPU is idle, the OS selects one of the processes
available in the ready queue to run.
In multiprogramming, if the long-term scheduler selects mostly I/O-bound
processes, then most of the time the CPU remains idle. The goal of an
effective scheduler is to improve resource utilization.
If most processes sit in the waiting state rather than running, system
throughput collapses. So, to minimize this waste, the OS needs to schedule
tasks in order to make full use of the CPU and avoid the possibility of
deadlock.

Objectives of Process Scheduling Algorithm:

• Utilization of CPU at maximum level. Keep CPU as busy as possible.


• Allocation of CPU should be fair.
• Throughput should be Maximum. i.e. Number of processes that complete
their execution per time unit should be maximized.
• Minimum turnaround time, i.e. time taken by a process to finish execution
should be the least.
• There should be a minimum waiting time and the process should not starve
in the ready queue.
• Minimum response time. It means that the time until a process produces its
first response should be as low as possible.
What are the different terminologies to take care of in any CPU Scheduling
algorithm?
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival
time.
Turn Around Time = Completion Time – Arrival Time
• Waiting Time(W.T): Time Difference between turn around time and burst
time.
Waiting Time = Turn Around Time – Burst Time
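As a sanity check, the two formulas above can be applied to a small hand-made workload. The process names and times below are illustrative, not taken from the text; completion times are computed assuming the processes run back-to-back in FCFS order.

```python
# Worked example of Turn Around Time and Waiting Time,
# assuming an illustrative three-process FCFS workload.

processes = [
    # (name, arrival_time, burst_time)
    ("P1", 0, 4),
    ("P2", 1, 3),
    ("P3", 2, 1),
]

time = 0
for name, arrival, burst in processes:
    time = max(time, arrival) + burst  # completion time under FCFS
    turnaround = time - arrival        # Turn Around Time = CT - AT
    waiting = turnaround - burst       # Waiting Time = TAT - BT
    print(name, time, turnaround, waiting)
```

For P2, for instance, completion is at time 7, so TAT = 7 - 1 = 6 and WT = 6 - 3 = 3.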
Things to take care while designing a CPU Scheduling algorithm?
Different CPU Scheduling algorithms have different structures and the choice of a
particular algorithm depends on a variety of factors. Many conditions have been
raised to compare CPU scheduling algorithms.
The criteria include the following:
• CPU utilization: The main purpose of any CPU algorithm is to keep the CPU
as busy as possible. Theoretically, CPU usage can range from 0 to 100 but in a
real-time system, it varies from 40 to 90 percent depending on the system load.
• Throughput: A measure of CPU performance is the number of processes
executed and completed per unit of time. This is called throughput.
Throughput may vary depending on the length or duration of the processes.
• Turnaround Time: For a particular process, an important criterion is how
long it takes to execute that process. The time elapsed from submission of a
process to its completion is known as the turnaround time. Turnaround time is
the sum of the time spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and waiting for I/O.
• Waiting Time: The Scheduling algorithm does not affect the time required to
complete the process once it has started performing. It only affects the waiting
time of the process i.e. the time spent in the waiting process in the ready queue.
• Response Time: In an interactive system, turnaround time is not the best
criterion. A process may produce some output early and continue computing new
results while previous results are being output to the user. Therefore,
another measure is the time from submission of a request until the first
response is produced. This measure is called response time.
What are the different types of CPU Scheduling Algorithms?
There are mainly two types of scheduling methods:
• Preemptive Scheduling: Preemptive scheduling is used when a process
switches from running state to ready state or from the waiting state to the ready
state.
• Non-Preemptive Scheduling: Non-Preemptive scheduling is used when a
process terminates, or when a process switches from the running state to the
waiting state.

Different types of CPU Scheduling Algorithms


Let us now learn about these CPU scheduling algorithms in operating systems one by
one:

1. First Come First Serve:

FCFS is considered to be the simplest of all operating system scheduling algorithms.


First come first serve scheduling algorithm states that the process that requests the
CPU first is allocated the CPU first and is implemented by using FIFO queue.
Characteristics of FCFS:
• FCFS is a non-preemptive CPU scheduling algorithm.
• Tasks are always executed on a First-come, First-serve concept.
• FCFS is easy to implement and use.
• This algorithm is not much efficient in performance, and the wait time is quite
high.
Advantages of FCFS:
• Easy to implement
• First come, first serve method
Disadvantages of FCFS:
• FCFS suffers from Convoy effect.
• The average waiting time is much higher than the other algorithms.
• FCFS is very simple and easy to implement and hence not much efficient.
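A minimal FCFS sketch in Python, assuming each process is a (pid, arrival, burst) tuple; the names are illustrative. The example call also demonstrates the convoy effect: one long job arriving first forces the short jobs to wait.

```python
# Minimal FCFS sketch (assumed input format: (pid, arrival, burst) tuples).

def fcfs(procs):
    """Return {pid: (completion, turnaround, waiting)} under FCFS."""
    clock = 0
    result = {}
    for pid, arrival, burst in sorted(procs, key=lambda p: p[1]):
        clock = max(clock, arrival) + burst  # each process runs to completion
        result[pid] = (clock, clock - arrival, clock - arrival - burst)
    return result

# Convoy effect: a long job arriving first makes the short jobs wait.
print(fcfs([("long", 0, 20), ("short1", 1, 2), ("short2", 2, 2)]))
```

Here "short1" waits 19 time units even though its own burst is only 2, which is why FCFS's average waiting time is so high.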
2. Shortest Job First(SJF):

Shortest Job First (SJF) is a scheduling policy that selects the waiting
process with the smallest execution time to execute next. This scheduling
method may or may not be preemptive. It significantly reduces the average
waiting time for other processes waiting to be executed.

Characteristics of SJF:
• Shortest Job first has the advantage of having a minimum average waiting
time among all operating system scheduling algorithms.
• Each task is associated with the unit of time it needs to complete (its burst time).
• It may cause starvation if shorter processes keep coming. This problem can be
solved using the concept of ageing.
Advantages of Shortest Job first:
• As SJF reduces the average waiting time thus, it is better than the first come
first serve scheduling algorithm.
• SJF is generally used for long term scheduling
Disadvantages of SJF:
• One of the demerit SJF has is starvation.
• Many times it becomes complicated to predict the length of the upcoming
CPU request
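The behaviour can be sketched as follows, assuming non-preemptive SJF over (pid, arrival, burst) tuples with illustrative names: whenever the CPU is free, the ready process with the smallest burst time runs next.

```python
# Sketch of non-preemptive SJF (assumed input: (pid, arrival, burst) tuples).

def sjf(procs):
    remaining = sorted(procs, key=lambda p: p[1])  # order by arrival time
    clock, order = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= clock]
        if not ready:                   # CPU idle until the next arrival
            clock = remaining[0][1]
            continue
        pid, arrival, burst = min(ready, key=lambda p: p[2])  # smallest burst
        remaining.remove((pid, arrival, burst))
        clock += burst                  # run the chosen job to completion
        order.append(pid)
    return order

print(sjf([("A", 0, 8), ("B", 1, 4), ("C", 2, 1)]))  # → ['A', 'C', 'B']
```

Note that once A starts, it cannot be preempted; but when A finishes, the 1-unit job C jumps ahead of the 4-unit job B.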

3. Longest Job First(LJF):


Longest Job First (LJF) is just the opposite of Shortest Job First (SJF): as
the name suggests, this algorithm is based on the fact that the process with
the largest burst time is processed first. LJF is usually non-preemptive;
its preemptive variant is called Longest Remaining Time First (LRTF).
Characteristics of LJF:
• Among all the processes waiting in a waiting queue, CPU is always assigned
to the process having largest burst time.
• If two processes have the same burst time then the tie is broken using FCFS
i.e. the process that arrived first is processed first.
• LJF CPU Scheduling can be of both preemptive and non-preemptive types.
Advantages of LJF:
• No other task can be scheduled until the longest job or process completes
its execution.
• All the jobs or processes finish at the same time approximately.
Disadvantages of LJF:
• Generally, the LJF algorithm gives a very high average waiting time and
average turn-around time for a given set of processes.
• This may lead to convoy effect.

4. Priority Scheduling:

Preemptive Priority CPU Scheduling Algorithm is a pre-emptive method of CPU


scheduling algorithm that works based on the priority of a process. In this
algorithm, the scheduler assigns each process a priority, and the most
important process is executed first. In the case of a conflict, that is,
where there is more than one process with equal priority, preemptive priority
scheduling falls back on the FCFS (First Come First Serve) algorithm.
Characteristics of Priority Scheduling:
• Schedules tasks based on priority.
• When higher priority work arrives while a task with lower priority is
executing, the higher priority process takes the place of the lower priority
process, and
• the latter is suspended until the execution of the former is complete.
• The lower the number assigned, the higher the priority level of a process.
Advantages of Priority Scheduling:
• The average waiting time is less than FCFS
• Less complex
Disadvantages of Priority Scheduling:
• One of the most common demerits of the Preemptive priority CPU scheduling
algorithm is the Starvation Problem. This is the problem in which a process has to
wait for a longer amount of time to get scheduled into the CPU. This condition is
called the starvation problem.
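A small sketch of the preemptive variant, assuming (pid, arrival, burst, priority) tuples where a lower number means higher priority; ties fall back to arrival order (FCFS), as described above. This is an illustrative per-time-unit simulation, not a production scheduler.

```python
# Sketch of preemptive priority scheduling; lower number = higher priority.
# Input format (pid, arrival, burst, priority) is an assumption for this demo.

def preemptive_priority(procs):
    remaining = {p[0]: p[2] for p in procs}  # pid -> burst time left
    info = {p[0]: p for p in procs}
    clock, timeline = 0, []
    while any(remaining.values()):
        ready = [pid for pid in remaining
                 if remaining[pid] > 0 and info[pid][1] <= clock]
        if not ready:
            clock += 1                       # CPU idle this tick
            continue
        # Highest priority wins; equal priorities break ties by arrival (FCFS).
        pid = min(ready, key=lambda q: (info[q][3], info[q][1]))
        remaining[pid] -= 1                  # run one time unit
        timeline.append(pid)
        clock += 1
    return timeline

# Low-priority L starts, then H (priority 1) arrives at t=1 and preempts it.
print(preemptive_priority([("L", 0, 3, 2), ("H", 1, 2, 1)]))
```

The printed timeline shows L running for one tick, then H preempting and finishing before L resumes.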

5. Round robin:

Round Robin is a CPU scheduling algorithm where each process is cyclically


assigned a fixed time slot. It is the preemptive version of First come First Serve CPU
Scheduling algorithm. Round Robin CPU Algorithm generally focuses on Time
Sharing technique.
Characteristics of Round robin:
• It’s simple, easy to use, and starvation-free, as all processes get a
balanced CPU allocation.
• It is one of the most widely used methods in CPU scheduling.
• It is considered preemptive as the processes are given to the CPU for a very
limited time.
Advantages of Round robin:
• Round robin seems to be fair as every process gets an equal share of CPU.
• The newly created process is added to the end of the ready queue.
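A minimal round-robin sketch, assuming for simplicity that all processes arrive at time 0 and are given as (pid, burst) pairs; the quantum is the fixed time slot, and a process whose slice expires is placed back at the end of the ready queue.

```python
# Minimal round-robin sketch (assumed input: (pid, burst) pairs, arrival = 0).
from collections import deque

def round_robin(procs, quantum):
    queue = deque(procs)                  # FIFO ready queue
    clock, completion = 0, {}
    while queue:
        pid, burst = queue.popleft()
        run = min(quantum, burst)         # run for at most one quantum
        clock += run
        if burst > run:
            queue.append((pid, burst - run))  # unfinished: back of the queue
        else:
            completion[pid] = clock       # finished within this slice
    return completion

print(round_robin([("A", 5), ("B", 3), ("C", 1)], quantum=2))
```

With quantum 2, the short job C finishes at time 5 even though it entered behind A and B, which is the time-sharing fairness the algorithm aims for.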

6. Shortest Remaining Time First:

Shortest remaining time first is the preemptive version of the Shortest job first
which we have discussed earlier where the processor is allocated to the job closest to
completion. In SRTF the process with the smallest amount of time remaining until
completion is selected to execute.
Characteristics of Shortest remaining time first:
• The SRTF algorithm makes the processing of jobs faster than the SJF
algorithm, provided its overhead is not counted.
• The context switch is done a lot more times in SRTF than in SJF and
consumes the CPU’s valuable time for processing. This adds up to its processing
time and diminishes its advantage of fast processing.
Advantages of SRTF:
• In SRTF the short processes are handled very fast.
• The system also requires very little overhead since it only makes a decision
when a process completes or a new process is added.
Disadvantages of SRTF:
• Like the shortest job first, it also has the potential for process starvation.
• Long processes may be held off indefinitely if short processes are continually
added.
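SRTF can be sketched as a per-time-unit simulation, assuming (pid, arrival, burst) tuples with illustrative names: at every tick, the ready process with the least remaining time runs, so a newly arrived short job preempts a longer one.

```python
# Sketch of SRTF (preemptive SJF); assumed input: (pid, arrival, burst) tuples.

def srtf(procs):
    remaining = {p[0]: p[2] for p in procs}  # pid -> time left
    arrival = {p[0]: p[1] for p in procs}
    clock, completion = 0, {}
    while remaining:
        ready = [p for p in remaining if arrival[p] <= clock]
        if not ready:
            clock += 1                        # CPU idle this tick
            continue
        pid = min(ready, key=lambda q: remaining[q])  # least remaining time
        remaining[pid] -= 1                   # run one time unit
        clock += 1
        if remaining[pid] == 0:
            del remaining[pid]
            completion[pid] = clock
    return completion

# B (burst 4) preempts A (burst 7) at t=2; C (burst 1) preempts B at t=4.
print(srtf([("A", 0, 7), ("B", 2, 4), ("C", 4, 1)]))
```

Note how A, despite arriving first, finishes last: each shorter newcomer takes the CPU away, which is exactly the starvation risk listed above.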
9. Multiple Queue Scheduling:

Processes in the ready queue can be divided into different classes where each class
has its own scheduling needs. For example, a common division is a foreground
(interactive) process and a background (batch) process. These two classes have
different scheduling needs. For this kind of situation Multilevel Queue Scheduling is
used.

The typical process classes are described as follows:


• System Processes: The operating system itself has processes to run,
generally termed system processes.
• Interactive Processes: An interactive (foreground) process is one that
interacts with the user and therefore needs fast response times.
• Batch Processes: Batch processing is generally a technique in the Operating
system that collects the programs and data together in the form of a batch before
the processing starts.
Advantages of multilevel queue scheduling:
• The main merit of the multilevel queue is that it has a low scheduling
overhead.
Disadvantages of multilevel queue scheduling:
• Starvation problem
• It is inflexible in nature

10. Multilevel Feedback Queue Scheduling:

Multilevel Feedback Queue Scheduling (MLFQ) CPU Scheduling is


like Multilevel Queue Scheduling, but here processes can move between the
queues, which makes it much more efficient than plain multilevel queue
scheduling.
Characteristics of Multilevel Feedback Queue Scheduling:
• In a plain multilevel queue scheduling algorithm, processes are permanently
assigned to a queue on entry to the system and are not allowed to move
between queues.
• That permanent assignment has the advantage of low scheduling overhead,
but the disadvantage of being inflexible.
• MLFQ lifts this restriction: processes may move between queues, for
example demoted after using too much CPU time or promoted after waiting
too long.
Advantages of Multilevel feedback queue scheduling:
• It is more flexible
• It allows different processes to move between different queues
Disadvantages of Multilevel feedback queue scheduling:
• It also produces CPU overheads
• It is the most complex algorithm.
CPU Scheduling Criteria
CPU scheduling is the method by which the operating system decides which
process or task the CPU will run at any given moment. It is an essential part
of modern operating systems, as it enables multiple processes to run
concurrently on the same processor. In short, the CPU scheduler decides the
order and priority of the processes to run and allocates CPU time based on
various parameters such as CPU utilization, throughput, turnaround time,
waiting time, and response time.
CPU scheduling is essential for the system’s performance and ensures that processes
are executed correctly and on time. Different CPU scheduling algorithms have other
properties and the choice of a particular algorithm depends on various factors. Many
criteria have been suggested for comparing CPU scheduling algorithms.
Criteria of CPU Scheduling
CPU Scheduling has several criteria. Some of them are mentioned below.

CPU utilization

The main objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible. Theoretically, CPU utilization can range from 0 to 100 but in a real-time
system, it varies from 40 to 90 percent depending on the load upon the system.

Throughput

A measure of the work done by the CPU is the number of processes being executed
and completed per unit of time. This is called throughput. The throughput may vary
depending on the length or duration of the processes.

Turnaround Time

For a particular process, an important criterion is how long it takes to execute that
process. The time elapsed from the time of submission of a process to the time of
completion is known as the turnaround time. Turn-around time is the sum of times
spent waiting to get into memory, waiting in the ready queue, executing in CPU, and
waiting for I/O.
Turn Around Time = Completion Time - Arrival Time.

Waiting Time

A scheduling algorithm does not affect the time required to complete the process once
it starts execution. It only affects the waiting time of a process i.e. time spent by a
process waiting in the ready queue.
Waiting Time = Turnaround Time - Burst Time.

Response Time

In an interactive system, turn-around time is not the best criterion. A process may
produce some output fairly early and continue computing new results while previous
results are being output to the user. Thus another criterion is the time taken from
submission of the process of the request until the first response is produced. This
measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first
time) - Arrival Time

Completion Time

The completion time is the time when the process stops executing, which means that
the process has completed its burst time and is completely executed.

Priority

If the operating system assigns priorities to processes, the scheduling mechanism


should favor the higher-priority processes.

Predictability

A given process always should run in about the same amount of time under a similar
system load.

Importance of Selecting the Right CPU Scheduling Algorithm for Specific


Situations
It is important to choose the correct CPU scheduling algorithm because
different algorithms weight the CPU scheduling criteria differently.
Different algorithms have different strengths and weaknesses, and choosing
the wrong CPU scheduling algorithm in a given situation can result in
suboptimal performance of the system.
Example:
Here are some examples of CPU scheduling algorithms that work well in different
situations.
• The Round Robin scheduling algorithm works well in a time-sharing system
where tasks have to be completed in a short period of time.
• The SJF scheduling algorithm works best in a batch processing system where
shorter jobs have to be completed first in order to increase throughput.
• The Priority scheduling algorithm works better in a real-time system where
certain tasks have to be prioritized so that they can be completed in a
timely manner.
Factors Influencing CPU Scheduling Algorithms
There are many factors that influence the choice of CPU scheduling algorithm. Some
of them are listed below.
• The number of processes.
• The processing time required.
• The urgency of tasks.
• The system requirements.
Selecting the correct algorithm will ensure that the system will use system resources
efficiently, increase productivity, and improve user satisfaction.
Introduction of Process Synchronization
Process Synchronization is the coordination of execution of multiple processes in a
multi-process system to ensure that they access shared resources in a controlled and
predictable manner. It aims to resolve the problem of race conditions and other
synchronization issues in a concurrent system.
The main objective of process synchronization is to ensure that multiple processes
access shared resources without interfering with each other and to prevent the
possibility of inconsistent data due to concurrent access. To achieve this, various
synchronization techniques such as semaphores, monitors, and critical sections are
used.
In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems.
Process synchronization is an important aspect of modern operating systems, and it
plays a crucial role in ensuring the correct and efficient functioning of multi-process
systems.
On the basis of synchronization, processes are categorized as one of the following two
types:
• Independent Process: The execution of one process does not affect the
execution of other processes.
• Cooperative Process: A process that can affect or be affected by other
processes executing in the system.
Process synchronization problem arises in the case of Cooperative processes also
because resources are shared in Cooperative processes.

Race Condition
When more than one process executes the same code or accesses the same memory
or shared variable, there is a possibility that the output or the value of
the shared variable is wrong. Since all of the processes are "racing" to
produce the result, this condition is known as a race condition: several
processes access and manipulate the same data concurrently, and the outcome
depends on the particular order in which the accesses take place.
A race condition is a situation that may occur inside a critical section. This happens
when the result of multiple thread execution in the critical section differs according to
the order in which the threads execute. Race conditions in critical sections can be
avoided if the critical section is treated as an atomic instruction. Also, proper thread
synchronization using locks or atomic variables can prevent race conditions.
Example:
Let’s understand one example to understand the race condition better:
Let’s say there are two processes, P1 and P2, which share a common variable
(shared=10). Both processes are present in the ready queue, waiting for their
turn to be executed.
Suppose process P1 comes under execution first. The CPU copies the shared
variable (shared=10) into P1's local variable (X=10) and increments it by 1
(X=11). Then, when the CPU reads the line sleep(1), it switches from the
current process P1 to process P2 in the ready queue, and P1 goes into the
waiting state for 1 second.
Now the CPU executes process P2 line by line: it copies the shared variable
(shared=10) into P2's local variable (Y=10) and decrements Y by 1 (Y=9).
When the CPU reads sleep(1), the current process P2 also goes into the
waiting state, and the CPU remains idle for some time as there is no process
in the ready queue.
After P1's 1 second completes, it re-enters the ready queue and the CPU
executes its remaining line of code, storing the local variable (X=11) into
the shared variable (shared=11). The CPU again idles until P2's 1 second
completes; when P2 re-enters the ready queue, the CPU executes P2's remaining
line, storing the local variable (Y=9) into the shared variable (shared=9).
Initially Shared = 10

Process 1          Process 2
---------          ---------
int X = shared     int Y = shared
X++                Y--
sleep(1)           sleep(1)
shared = X         shared = Y

Note: The expected final value of the shared variable after P1 and P2 run is
10 (P1 increments shared from 10 to 11, and P2 then decrements it from 11
back to 10). But due to the lack of proper synchronization, we get an
undesired value instead.

Actual meaning of race-condition

• If P1's write-back happens first and P2's second (first P1 -> then P2), the
final value of the shared variable is 9.
• If P2's write-back happens first and P1's second (first P2 -> then P1), the
final value of the shared variable is 11.
• Here the values 9 and 11 are racing: if we execute these two processes on
our computer system, sometimes we will get 9 and sometimes 11 as the final
value of the shared variable. This phenomenon is called a race condition.
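The interleaving described above can be replayed deterministically in plain Python (no real threads needed) to see the lost update: the sleep(1) calls force each process to be switched out between its read and its write, so the second write-back clobbers the first.

```python
# Deterministic replay of the P1/P2 interleaving described above.

shared = 10

# Process P1 reads and increments, then is switched out at sleep(1).
x = shared          # P1: X = shared  -> 10
x += 1              # P1: X++         -> 11

# Context switch: P2 reads the still-unchanged shared variable.
y = shared          # P2: Y = shared  -> 10
y -= 1              # P2: Y--         -> 9

# P1 wakes first and writes back; P2 then overwrites it.
shared = x          # shared = 11
shared = y          # shared = 9 (P1's update is lost)

print(shared)       # 9, not the expected 10
```

Swapping the last two assignments yields 11 instead, which is exactly the 9-vs-11 race described above.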
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time.
The critical section contains shared variables that need to be synchronized to maintain
the consistency of data variables. So the critical section problem means designing a
way for cooperative processes to access shared resources without creating data
inconsistencies.
In the entry section, the process requests for entry in the Critical Section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other processes
are waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
the critical section next, and the selection can not be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
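A minimal sketch of enforcing mutual exclusion in Python with threading.Lock: the shared-counter increment is the critical section, and acquiring/releasing the lock plays the role of the entry and exit sections.

```python
# Mutual exclusion sketch: only one thread at a time runs the critical section.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:            # entry section: acquire the lock
            counter += 1      # critical section: touch the shared variable
        # exit section: the lock is released when the with-block ends

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always 40000 with the lock in place
```

Without the lock, the four threads' read-modify-write sequences could interleave and lose updates, exactly as in the race-condition example above.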
Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore
can be signaled by another thread. This differs from a mutex, which can be
released only by the thread that acquired it (the thread that called the wait
function).
A semaphore uses two atomic operations, wait and signal for process
synchronization.
A Semaphore is an integer variable, which can be accessed only through two
operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
• Binary Semaphores: They can only be either 0 or 1. They are also known as
mutex locks, as they can provide mutual exclusion. All the processes share
the same mutex semaphore, initialized to 1. A process must wait until the
semaphore's value is 1; it then sets the value to 0 and enters its critical
section. When it completes its critical section, it resets the value back to
1 so that some other process can enter its critical section.
• Counting Semaphores: They can have any value and are not restricted to a
certain domain. They can be used to control access to a resource that has a
limitation on the number of simultaneous accesses. The semaphore can be
initialized to the number of instances of the resource. Whenever a process wants
to use that resource, it checks if the number of remaining instances is more than
zero, i.e., the process has an instance available. Then, the process can enter its
critical section thereby decreasing the value of the counting semaphore by 1.
After the process is over with the use of the instance of the resource, it can leave
the critical section thereby adding 1 to the number of available instances of the
resource.
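A sketch of a counting semaphore in Python, assuming a resource with two instances; threading.Semaphore plays the role of wait() and signal(), and the peak counter (an illustrative name, not part of the semaphore API) verifies that no more than two threads are ever inside the guarded section at once.

```python
# Counting semaphore sketch: a resource with 2 instances, 10 client threads.
import threading
import time

MAX_INSTANCES = 2
sem = threading.Semaphore(MAX_INSTANCES)   # initialized to instance count
in_use = 0
peak = 0
state_lock = threading.Lock()              # protects the bookkeeping counters

def use_resource():
    global in_use, peak
    with sem:                              # wait(): decrement, block at 0
        with state_lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)                   # ... use one resource instance ...
        with state_lock:
            in_use -= 1
    # signal(): the count is incremented when the with-block exits

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak <= MAX_INSTANCES)  # True: never more than 2 threads inside at once
```

The semaphore's count tracks the number of free instances, so the eleventh, twelfth, etc. waiters simply block until a holder signals.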
Advantages of Process Synchronization
• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
• Adds overhead to the system
• This can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlocks if not implemented properly.
