OS Notes
Operating System
Functions of the Operating System
• Resource Management: The operating system manages and allocates
memory, CPU time, and other hardware resources among the various programs
and processes running on the computer.
• Process Management: The operating system is responsible for starting,
stopping, and managing processes and programs. It also controls the scheduling
of processes and allocates resources to them.
• Memory Management: The operating system manages the computer’s
primary memory and provides mechanisms for optimizing memory usage.
• Security: The operating system provides a secure environment for the user,
applications, and data by implementing security policies and mechanisms such
as access controls and encryption.
• Job Accounting: It keeps track of time and resources used by various jobs
or users.
• File Management: The operating system is responsible for organizing and
managing the file system, including the creation, deletion, and manipulation of
files and directories.
• Device Management: The operating system manages input/output devices
such as printers, keyboards, mice, and displays. It provides the necessary drivers
and interfaces to enable communication between the devices and the computer.
• Networking: The operating system provides networking capabilities such as
establishing and managing network connections, handling network protocols,
and sharing resources such as printers and files over a network.
• User Interface: The operating system provides a user interface that enables
users to interact with the computer system. This can be a Graphical User
Interface (GUI), a Command-Line Interface (CLI), or a combination of both.
• Backup and Recovery: The operating system provides mechanisms for
backing up data and recovering it in case of system failures, errors, or disasters.
• Virtualization: The operating system provides virtualization capabilities
that allow multiple operating systems or applications to run on a single physical
machine. This can enable efficient use of resources and flexibility in managing
workloads.
• Performance Monitoring: The operating system provides tools for
monitoring and optimizing system performance, including identifying
bottlenecks, optimizing resource usage, and analyzing system logs and metrics.
• Time-Sharing: The operating system enables multiple users to share a
computer system and its resources simultaneously by providing time-sharing
mechanisms that allocate resources fairly and efficiently.
• System Calls: The operating system provides a set of system calls that
enable applications to interact with the operating system and access its resources.
System calls provide a standardized interface between applications and the
operating system, enabling portability and compatibility across different
hardware and software platforms.
• Error-detecting Aids: These contain methods that include the production of
dumps, traces, error messages, and other debugging and error-detecting methods.
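The system-call interface described above can be explored from a high-level language: Python's `os` module wraps POSIX system calls such as `getpid(2)`, `open(2)`, `write(2)`, `read(2)`, and `unlink(2)`. A minimal sketch, assuming a POSIX-like system:

```python
import os
import tempfile

# getpid(2): ask the kernel for this process's identifier.
pid = os.getpid()
assert pid > 0

# File management through system calls: create, write, close.
fd, path = tempfile.mkstemp()      # open(2) under the hood
os.write(fd, b"hello, kernel")     # write(2)
os.close(fd)                       # close(2)

# Reopen and read the data back through the kernel.
fd = os.open(path, os.O_RDONLY)    # open(2)
data = os.read(fd, 100)            # read(2)
os.close(fd)
os.remove(path)                    # unlink(2)
print(data.decode())               # -> hello, kernel
```

Because every request goes through this standardized interface, the same Python code runs unchanged on any platform where these calls are implemented.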
Objectives of Operating Systems
Let us now see some of the objectives of the operating system, which are mentioned
below.
• Convenient to Use: One of the objectives is to make the computer system more convenient and efficient to use.
• User Friendly: To make the computer system more interactive, with a more convenient interface for users.
• Easy Access: To provide users easy access to resources by acting as an intermediary between the hardware and its users.
• Management of Resources: For managing the resources of a computer in a
better and faster way.
• Controls and Monitoring: By keeping track of who is using which resource,
granting resource requests, and mediating conflicting requests from different
programs and users.
• Fair Sharing of Resources: Providing efficient and fair sharing of resources
between the users and programs.
Types of Operating Systems
• Batch Operating System: A Batch Operating System is a type of operating
system that does not interact with the computer directly. There is an operator
who takes similar jobs having the same requirements and groups them into
batches.
• Time-sharing Operating System: Time-sharing Operating System is a type
of operating system that allows many users to share computer resources
(maximum utilization of the resources).
• Distributed Operating System: A Distributed Operating System manages a group of different computers and makes them appear to be a single computer. These operating systems are designed to operate on a network of computers. They allow multiple users to access shared resources and communicate with each other over the network. Examples include Microsoft Windows Server and various distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of
operating system that runs on a server and provides the capability to manage
data, users, groups, security, applications, and other networking functions.
• Real-time Operating System: Real-time Operating System is a type of
operating system that serves a real-time system and the time interval required to
process and respond to inputs is very small. These operating systems are
designed to respond to events in real time. They are used in applications that
require quick and deterministic responses, such as embedded systems, industrial
control systems, and robotics.
• Multiprocessing Operating System: Multiprocessor Operating Systems are used to boost performance by coordinating multiple CPUs within a single computer system. Multiple CPUs are linked together so that a job can be divided and executed more quickly.
• Single-User Operating Systems: Single-User Operating Systems are
designed to support a single user at a time. Examples include Microsoft
Windows for personal computers and Apple macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are
designed to support multiple users simultaneously. Examples include Linux and
Unix.
• Embedded Operating Systems: Embedded Operating Systems are designed
to run on devices with limited resources, such as smartphones, wearable devices,
and household appliances. Examples include Google’s Android and Apple’s iOS.
• Cluster Operating Systems: Cluster Operating Systems are designed to run
on a group of computers, or a cluster, to work together as a single system. They
are used for high-performance computing and for applications that require high
availability and reliability. Examples include Rocks Cluster Distribution and
OpenMPI.
How to Choose an Operating System?
Several factors need to be considered while choosing the best Operating System for our use. These factors are mentioned below.
• Price Factor: Price is one factor in choosing the correct Operating System: some operating systems, like Linux, are free, while others, like Windows and macOS, are paid.
• Accessibility Factor: Some Operating Systems are easy to use, like macOS and iOS, while others are a little more complex to understand, like Linux. So, you must choose the Operating System you find most accessible.
• Compatibility Factor: Some Operating Systems support very few applications, whereas others support many more. You must choose the OS that supports the applications you require.
• Security Factor: Security is also a factor in choosing the correct OS; for example, macOS provides some additional security features, while Windows has somewhat fewer.
Examples of Operating Systems
• Windows (GUI-based, PC)
• GNU/Linux (Personal, Workstations, ISP, File, and print server, Three-tier
client/Server)
• macOS (Macintosh), used for Apple’s personal computers and workstations
(MacBook, iMac).
• Android (Google’s Operating System for smartphones/tablets/smartwatches)
• iOS (Apple’s OS for iPhone, iPad, and iPod Touch)
Functions of Operating System
An Operating System acts as a communication bridge (interface) between the user and
computer hardware. The purpose of an operating system is to provide a platform on
which a user can execute programs conveniently and efficiently.
An operating system is a piece of software that manages the allocation of Computer
Hardware. The coordination of the hardware must be appropriate to ensure the correct
working of the computer system and to prevent user programs from interfering with
the proper working of the system.
The main goal of the Operating System is to make the computer environment more
convenient to use and the Secondary goal is to use the resources most efficiently.
What is an Operating System?
An operating system is a program that manages a computer’s hardware. It also
provides a basis for application programs and acts as an intermediary between the
computer user and computer hardware. The main task an operating system carries out
is the allocation of resources and services, such as the allocation of memory, devices,
processors, and information. The operating system also includes programs to manage
these resources, such as a traffic controller, a scheduler, a memory
management module, I/O programs, and a file system. The operating system simply
provides an environment within which other programs can do useful work.
Why are Operating Systems Used?
Operating System is used as a communication channel between the Computer
hardware and the user. It works as an intermediate between System Hardware and
End-User. Operating System handles the following responsibilities:
• It controls all the computer resources.
• It provides valuable services to user programs.
• It coordinates the execution of user programs.
• It provides resources for user programs.
• It provides an interface (virtual machine) to the user.
• It hides the complexity of software.
• It supports multiple execution modes.
• It monitors the execution of user programs to prevent errors.
Functions of an Operating System
Memory Management
The operating system manages the Primary Memory or Main Memory. Main memory
is made up of a large array of bytes or words where each byte or word is assigned a
certain address. Main memory is fast storage and can be accessed directly by the
CPU. For a program to be executed, it must first be loaded into main memory. An
operating system manages the allocation and deallocation of memory to various
processes and ensures that one process does not consume the memory allocated
to another. An Operating System performs the following activities for Memory
Management:
• It keeps track of primary memory, i.e., which bytes of memory are used by
which user program. The memory addresses that have already been allocated and
the memory addresses of the memory that has not yet been used.
• In multiprogramming, the OS decides the order in which processes are granted
memory access, and for how long.
• It allocates memory to a process when the process requests it and deallocates the memory when the process has terminated or is performing an I/O operation.
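The bookkeeping described above, tracking which regions of memory are allocated and which are free, can be sketched as a toy first-fit allocator. All names here (`Allocator`, `alloc`, `free`) are illustrative, not a real OS API:

```python
# Toy first-fit allocator over a fixed memory pool: the OS keeps a list
# of free holes and a table of allocated regions.
class Allocator:
    def __init__(self, size):
        self.free_list = [(0, size)]   # list of (start, length) holes
        self.allocated = {}            # start address -> length

    def alloc(self, length):
        for i, (start, hole) in enumerate(self.free_list):
            if hole >= length:         # first hole big enough wins
                self.free_list[i] = (start + length, hole - length)
                self.allocated[start] = length
                return start
        return None                    # out of memory

    def free(self, start):
        length = self.allocated.pop(start)
        self.free_list.append((start, length))  # no coalescing in this sketch

mem = Allocator(100)
a = mem.alloc(30)   # -> 0
b = mem.alloc(50)   # -> 30
mem.free(a)         # the first 30 bytes become a hole again
c = mem.alloc(20)   # -> 80 (first fit finds the tail hole first)
print(a, b, c)      # -> 0 30 80
```

A real allocator would also coalesce adjacent holes and might use best-fit or buddy allocation instead; first fit keeps the sketch short.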
File Management
A file system is organized into directories for efficient or easy navigation and usage.
These directories may contain other directories and other files. An Operating System
carries out the following file management activities. It keeps track of where
information is stored, user access settings, the status of every file, and more. These
facilities are collectively known as the file system. An OS keeps track of information
regarding the creation, deletion, transfer, copy, and storage of files in an organized
way. It also maintains the integrity of the data stored in these files, including the file
directory structure, by protecting against unauthorized access.
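These file-management services (create, write, inspect, delete) are what high-level file APIs ultimately call into. A small sketch using Python's `pathlib`:

```python
import tempfile
from pathlib import Path

# The OS file system underneath: create a directory, create and write a
# file, check its status, then delete both.
root = Path(tempfile.mkdtemp())      # directory creation
f = root / "notes.txt"
f.write_text("operating systems")    # file creation + write
assert f.exists()                    # status tracking
assert f.read_text() == "operating systems"
f.unlink()                           # file deletion
assert not f.exists()
root.rmdir()                         # directory deletion
```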
The user interacts with the computer system through the operating system. Hence, the OS acts as an interface between the user and the computer hardware. This user interface is offered through a set of commands or a graphical user interface (GUI). Through this interface, the user interacts with the applications and the machine hardware.
Command Interpreter
Security
The operating system uses password protection and similar techniques to protect user data. It also prevents unauthorized access to programs and user data, and provides various techniques that assure the integrity and confidentiality of user data. The following security measures are used to protect user data:
• Protection against unauthorized access through login.
• Protection against intrusion by keeping the firewall active.
• Protecting the system memory against malicious access.
• Displaying messages related to system vulnerabilities.
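The first measure, login protection, rests on never storing the password itself, only a salted hash of it. A minimal sketch (real systems use purpose-built password hashes such as bcrypt or scrypt; PBKDF2 from the standard library keeps this example short):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Store only (salt, digest), never the plaintext password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    # compare_digest avoids timing side channels on the comparison.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")
print(verify("s3cret", salt, digest))   # -> True
print(verify("wrong", salt, digest))    # -> False
```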
Job Accounting
The operating system keeps track of the time and resources used by various tasks and users; this information can be used to track resource usage for a particular user or group of users. In a multitasking OS where multiple programs run simultaneously, the OS determines which applications should run in which order and how much time should be allocated to each application.
Error-Detecting Aids
The operating system constantly monitors the system to detect errors and avoid
malfunctioning computer systems. From time to time, the operating system checks
the system for any external threat or malicious software activity. It also checks the
hardware for any type of damage. This process displays several alerts to the user so
that the appropriate action can be taken against any damage caused to the system.
Network Management
The operating system provides networking capabilities such as establishing and managing network connections, handling network protocols, and sharing resources such as printers and files over the network.
Multiprogramming
Advantages of Multi-Programming Operating System
• Multiprogramming increases the throughput of the system.
• It helps in reducing the response time.
Disadvantages of Multi-Programming Operating System
• There is no facility for user interaction with the system while a job is executing.
Time-Sharing OS
Each task is given some time to execute so that all the tasks work smoothly. Each user gets a share of CPU time, as they all use a single system. These systems are also known as Multitasking Systems. The tasks can be from a single user or from different users. The time that each task gets to execute is called a quantum. After this time interval is over, the OS switches over to the next task.
Advantages of Time-Sharing OS
• Each task gets an equal opportunity.
• Fewer chances of duplication of software.
• CPU idle time can be reduced.
• Resource Sharing: Time-sharing systems allow multiple users to share
hardware resources such as the CPU, memory, and peripherals, reducing the cost
of hardware and increasing efficiency.
• Improved Productivity: Time-sharing allows users to work concurrently,
thereby reducing the waiting time for their turn to use the computer. This
increased productivity translates to more work getting done in less time.
• Improved User Experience: Time-sharing provides an interactive
environment that allows users to communicate with the computer in real time,
providing a better user experience than batch processing.
Disadvantages of Time-Sharing OS
• Reliability problem.
• One must take care of the security and integrity of user programs and data.
• Data communication problem.
• High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and other
overheads that come with supporting multiple users.
• Complexity: Time-sharing systems are complex and require advanced
software to manage multiple users simultaneously. This complexity increases the
chance of bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of user
access, authentication, and authorization to ensure the security of data and
software.
Examples of Time-Sharing OS with explanation
• IBM VM/CMS: IBM VM/CMS is a time-sharing operating system that was
first introduced in 1972. It is still in use today, providing a virtual machine
environment that allows multiple users to run their own instances of operating
systems and applications.
• TSO (Time Sharing Option): TSO is a time-sharing operating system that
was first introduced in the 1960s by IBM for the IBM System/360 mainframe
computer. It allowed multiple users to access the same computer simultaneously,
running their own applications.
• Windows Terminal Services: Windows Terminal Services is a time-sharing
operating system that allows multiple users to access a Windows server remotely.
Users can run their own applications and access shared resources, such as
printers and network storage, in real-time.
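The quantum-based switching that defines time-sharing can be sketched as a round-robin simulation: each task runs for at most one quantum, then goes to the back of the ready queue if it is not finished. The task names and burst times below are made up for illustration:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which tasks get the CPU, one quantum at a time."""
    queue = deque(burst_times.items())       # (name, remaining time)
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                   # this task gets the CPU now
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the queue
    return order

# Three tasks share one CPU with a quantum of 2 time units.
print(round_robin({"A": 4, "B": 2, "C": 5}, quantum=2))
# -> ['A', 'B', 'C', 'A', 'C', 'C']
```

Note how every task gets an equal opportunity, the advantage listed above, at the cost of the context-switching overhead listed among the disadvantages.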
Distributed OS
Advantages of Distributed Operating System
• Failure of one system will not affect communication among the others, as all systems are independent of each other.
• Electronic mail increases the data exchange speed.
• Since resources are shared, computation is very fast and durable.
• Load on host computer reduces.
• These systems are easily scalable as many systems can be easily added to the
network.
• Delay in data processing reduces.
Disadvantages of Distributed Operating System
• Failure of the main network will stop the entire communication.
• The languages and tools used to establish distributed systems are not yet well defined.
• These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood.
Examples of Distributed Operating Systems include LOCUS.
The distributed OS must tackle the following issues:
• Networking causes delays in the transfer of data between nodes of a
distributed system. Such delays may lead to an inconsistent view of data located
in different nodes, and make it difficult to know the chronological order in
which events occurred in the system.
• Control functions like scheduling, resource allocation, and deadlock
detection have to be performed in several nodes to achieve computation speedup
and provide reliable operation when computers or networking components fail.
• Messages exchanged by processes present in different nodes may travel over
public networks and pass through computer systems that are not controlled by
the distributed operating system. An intruder may exploit this feature to tamper
with messages, or create fake messages to fool the authentication procedure and
masquerade as a user of the system.
Network OS
These systems run on a server and provide the capability to manage data, users, groups, security, applications, and other networking functions. These types of operating systems allow shared access to files, printers, security, applications, and other networking functions over a small private network. One more important aspect of Network Operating Systems is that all the users are well aware of the underlying configuration, of all other users within the network, their individual connections, etc., which is why these computers are popularly known as loosely coupled systems.
Real-Time OS
These types of OSs serve real-time systems, where the time interval required to process and respond to inputs is very small. This time interval is called the response time. Real-time systems are used when the time requirements are very strict, as in missile systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
• Hard Real-Time Systems:
Hard Real-Time OSs are meant for applications where time constraints are very
strict and even the shortest possible delay is not acceptable. These systems are
built for saving life like automatic parachutes or airbags which are required to be
readily available in case of an accident. Virtual memory is rarely found in these
systems.
• Soft Real-Time Systems:
These OSs are for applications where time-constraint is less strict.
Advantages of RTOS
• Maximum Utilization: Maximum utilization of devices and systems, thus more output from all the resources.
• Task Shifting: The time assigned for shifting tasks in these systems is very short. For example, older systems take about 10 microseconds to shift from one task to another, while the latest systems take about 3 microseconds.
• Focus on Application: The focus is on running applications, with less importance given to applications waiting in the queue.
• Real-Time Operating Systems in Embedded Systems: Since the programs are small, an RTOS can also be used in embedded systems, such as in transport and other domains.
• Error Free: These types of systems are designed to minimize errors.
• Memory Allocation: Memory allocation is best managed in these types of systems.
Disadvantages of RTOS
• Limited Tasks: Very few tasks run at the same time, and concentration is limited to a few applications in order to avoid errors.
• Heavy Use of System Resources: An RTOS can consume substantial system resources, which are expensive as well.
• Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
• Device Drivers and Interrupt Signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as quickly as possible.
• Thread Priority: It is not good to set thread priorities, as these systems rarely switch tasks.
Real-Time Operating Systems are used in scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.
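The deadline-driven behavior that separates hard from soft real-time systems can be sketched with an earliest-deadline-first (EDF) scheduler. This is a simplified static version (real EDF re-evaluates deadlines as tasks arrive), and the task names and times are made up for illustration:

```python
def edf_schedule(tasks):
    """tasks: list of (name, runtime, deadline).
    Run tasks earliest-deadline-first; report any deadline misses.
    A soft real-time system tolerates entries in `missed`;
    a hard real-time system cannot."""
    clock = 0
    order, missed = [], []
    for name, runtime, deadline in sorted(tasks, key=lambda t: t[2]):
        clock += runtime            # task occupies the CPU for its runtime
        order.append(name)
        if clock > deadline:
            missed.append(name)     # finished after its deadline
    return order, missed

order, missed = edf_schedule([("sensor", 2, 3), ("log", 4, 10), ("actuate", 1, 4)])
print(order, missed)   # -> ['sensor', 'actuate', 'log'] []
```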
The fundamental goal of an Operating System is to execute user programs and to
make tasks easier. Various application programs along with hardware systems are
used to perform this work. Operating System is software that manages and controls
the entire set of resources and effectively utilizes every part of a computer. The
figure shows how OS acts as a medium between hardware units and application
programs.
Managing Input-Output unit: The operating system also allows the computer to
manage its own resources such as memory, monitor, keyboard, printer, etc.
Management of these resources is required for effective utilization. The operating
system controls the various system input-output resources and allocates them to the
users or programs as per their requirements.
Batch OS
In this system, jobs and tasks are not forwarded to the CPU directly by the OS. It functions by combining similar job types into a single group, also referred to as a “batch”; hence, batch operating system. Payroll systems, bank statements, etc. are some examples.
Distributed OS
There are multiple CPUs present in this system. All of the processors receive equal task distribution from the OS. There is no shared memory or clock time between the processors. The OS manages all of its communication through various communication channels. LOCUS is just one example.
1. Memory Control
This is the control of primary or main memory. The program being run must reside in main memory, and more than one program may be active at once, so managing memory is necessary. The operating system allocates and releases memory, keeps track of who uses which area of primary memory and how often, and enables memory distribution while multiprocessing.
2. Process Control
When a system has multiple processes running, the OS determines how and when each process will use the CPU. Hence, this is also known as CPU Scheduling.
3. File Management
The operating system helps in managing files as well. If a program needs access to a file, it is the operating system that grants access, with permissions such as read-only, read-write, etc. It also provides a platform for the user to create and delete files. The Operating System is responsible for making decisions regarding the storage of all types of data or files on media such as floppy disks, hard disks, pen drives, etc., and decides how the data should be manipulated and stored.
Process Management
Let’s understand process management in a unique way. Imagine our kitchen stove as the CPU, where all cooking (execution) really happens, and the chef as the OS, who uses the kitchen stove (CPU) to cook different dishes (programs). The chef (OS) has to cook different dishes (programs), so he ensures that no particular dish (program) takes an unnecessarily long time and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) basically schedules time for all dishes (programs) so that the kitchen (the whole system) runs smoothly, and thus all the different dishes (programs) are cooked (executed) efficiently.
Resource Management
System resources are shared between various processes. It is the Operating system
that manages resource sharing. It also manages the CPU time among processes
using CPU Scheduling Algorithms. It also helps in the memory management of the
system. It also controls input-output devices. The OS also ensures the proper use of
all the resources available by deciding which resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the operating system either through a command-line interface (CLI) or through a graphical user interface (GUI). The command interpreter executes the next user-specified command.
A GUI offers the user a mouse-based window and menu system as an interface.
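The read-dispatch-execute loop of a command interpreter can be sketched in a few lines. This toy shell only supports made-up built-in commands (`set`, `get`, `echo`); a real shell would also fork and exec external programs:

```python
def make_shell():
    """Return a function that interprets one command line at a time."""
    env = {}   # the interpreter's state (like shell variables)

    def run(line):
        cmd, *args = line.split()           # parse: command word + arguments
        builtins = {                        # dispatch table of built-ins
            "set":  lambda k, v: env.__setitem__(k, v) or "",
            "get":  lambda k: env.get(k, ""),
            "echo": lambda *a: " ".join(a),
        }
        return builtins[cmd](*args)         # look up and execute

    return run

sh = make_shell()
sh("set user alice")
print(sh("echo hello"))   # -> hello
print(sh("get user"))     # -> alice
```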
Error Handling
The Operating System also handles errors occurring in the CPU, in Input-Output devices, etc. It ensures that errors do not occur frequently and fixes them. It also prevents processes from coming to a deadlock, and looks for any type of error or bug that can occur while any task is running. A well-secured OS sometimes also acts as a countermeasure, preventing any sort of breach of the computer system from an external source and handling it.
Time Management
Imagine a traffic light as the OS, which tells all the cars (programs) whether they should stop (red: a simple queue), get ready to start (yellow: the ready queue), or move (green: under execution). This light (the control) changes after a certain interval of time on each side of the road (the computer system) so that the cars (programs) from all sides of the road move smoothly without traffic.
Introduction of Process Management
A process is a program in execution. For example, when we write a program in C or
C++ and compile it, the compiler creates binary code. The original code and binary
code are both programs. When we actually run the binary code, it becomes a process.
A process is an ‘active’ entity, in contrast to a program, which is considered a ‘passive’ entity. A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple instances begin
(multiple processes are created).
Process management includes various tools and techniques such as process mapping,
process analysis, process improvement, process automation, and process control. By
applying these tools and techniques, organizations can streamline their processes,
eliminate waste, and improve productivity. Overall, process management is a critical
aspect of modern business operations and can help organizations achieve their goals
and stay competitive in today’s rapidly changing marketplace.
What is Process Management?
If the operating system supports multiple users, then the services under this heading are very important. In this regard, the operating system has to keep track of all the completed processes, schedule them, and dispatch them one after another. However, each user should feel that he has full control of the CPU. Process management also refers to the techniques and strategies used by organizations to design, monitor, and control their business processes to achieve their goals efficiently and effectively. It involves identifying the steps involved in completing a task, assessing the resources required for each step, and determining the best way to execute the task.
Process management can help organizations improve their operational efficiency,
reduce costs, increase customer satisfaction, and maintain compliance with regulatory
requirements. It involves analyzing the performance of existing processes, identifying
bottlenecks, and making changes to optimize the process flow.
Some of the system calls in this category are as follows.
• Create a child process identical to the parent’s
• Terminate a process
• Wait for a child process to terminate
• Change the priority of the process
• Block the process
• Ready the process
• Dispatch a process
• Suspend a process
• Resume a process
• Delay a process
• Fork a process
Explanation of Process
• Text Section: Contains the program code; the process’s current activity is represented by the value of the Program Counter.
• Stack: Contains temporary data, such as function parameters, return addresses, and local variables.
• Data Section: Contains the global variables.
• Heap Section: Memory dynamically allocated to the process during its run time.
Key Components of Process Management
Below are some key components of process management.
• Process mapping: Creating visual representations of processes to understand
how tasks flow, identify dependencies, and uncover improvement opportunities.
• Process analysis: Evaluating processes to identify bottlenecks, inefficiencies,
and areas for improvement.
• Process redesign: Making changes to existing processes or creating new ones
to optimize workflows and enhance performance.
• Process implementation: Introducing the redesigned processes into the
organization and ensuring proper execution.
• Process monitoring and control: Tracking process performance, measuring
key metrics, and implementing control mechanisms to maintain efficiency and
effectiveness.
Importance of Process Management System
It is critical for any manager overseeing a firm to comprehend the significance of process management. It does more than just make workflows smooth; process management makes sure that every part of business operations moves as quickly as possible.
By implementing business process management, we can avoid errors caused by inefficient human labor and cut down on time lost on repetitive operations. It also keeps data loss and process-step errors at bay. Additionally, process management guarantees that resources are employed effectively, increasing the cost-effectiveness of our company. Process management not only improves business operations but also makes sure that our procedures meet the needs of our clients. This raises income and improves customer satisfaction.
Characteristics of a Process
A process has the following attributes.
• Process Id: A unique identifier assigned by the operating system.
• Process State: Can be ready, running, etc.
• CPU registers: Like the Program Counter (CPU registers must be saved and
restored when a process is swapped in and out of the CPU)
• Accounting information: Amount of CPU time used for process execution, time limits, execution ID, etc.
• I/O status information: For example, devices allocated to the process, open
files, etc
• CPU scheduling information: For example, priority (different processes may have different priorities; for instance, a shorter process may be assigned high priority in shortest-job-first scheduling).
All of the above attributes of a process are also known as the context of the process. Every process has its own process control block (PCB), i.e. each process will have a unique PCB. All of the above attributes are part of the PCB.
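A PCB holding the attributes listed above can be sketched as a simple record type. The field names are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block: the saved context of one process."""
    pid: int                                          # process id
    state: str = "new"                                # new/ready/run/wait/terminated
    program_counter: int = 0                          # CPU register to restore
    registers: dict = field(default_factory=dict)     # other saved registers
    priority: int = 0                                 # scheduling information
    cpu_time_used: int = 0                            # accounting information
    open_files: list = field(default_factory=list)    # I/O status information

p = PCB(pid=42, priority=1)
p.state = "ready"
print(p.pid, p.state)   # -> 42 ready
```

Because all of this context lives in the PCB, the kernel can swap the process off the CPU and later resume it exactly where it left off.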
States of Process
A process is in one of the following states:
• New: Newly Created Process (or) being-created process.
• Ready: After the creation process moves to the Ready state, i.e. the process is
ready for execution.
• Run: Currently running process in CPU (only one process at a time can be
under execution in a single processor)
• Wait (or Block): When a process requests I/O access.
• Complete (or Terminated): The process completed its execution.
• Suspended Ready: When the ready queue becomes full, some processes are
moved to a suspended ready state
• Suspended Block: When the waiting queue becomes full.
Context Switching of Process
The process of saving the context of one process and loading the context of another
process is known as Context Switching. In simple terms, it is like unloading the
running process into the ready state and loading another process to run. A context
switch occurs in the following cases:
1. When a high-priority process comes to the ready state (i.e. with higher priority
than the running process).
2. An interrupt occurs.
3. A user-to-kernel mode switch happens (though this does not always require one).
4. Preemptive CPU scheduling is used.
A mode switch occurs when the CPU privilege level is changed, for example when a
system call is made or a fault occurs. The kernel works in a more privileged mode
than a standard user task. If a user process wants to access things that are only
accessible to the kernel, a mode switch must occur. The currently executing process
need not be changed during a mode switch. A mode switch typically occurs before a
process context switch can take place, since only the kernel can cause a context
switch.
CPU-Bound vs I/O-Bound Processes
A CPU-bound process requires more CPU time or spends more time in the running
state. An I/O-bound process requires more I/O time and less CPU time. An I/O-bound
process spends more time in the waiting state.
Process scheduling is an integral part of process management in the operating
system. It refers to the mechanism used by the operating system to determine which
process to run next. The goal of process scheduling is to improve overall system
performance by maximizing CPU utilization, minimizing execution time, and improving
system response time.
Process Scheduling Algorithms
The operating system can use different scheduling algorithms to schedule processes.
Here are some commonly used scheduling algorithms:
• First-come, first-served (FCFS): This is the simplest scheduling algorithm,
where the process is executed on a first-come, first-served basis. FCFS is non-
preemptive, which means that once a process starts executing, it continues until it
is finished or waiting for I/O.
• Shortest Job First (SJF): SJF is a non-preemptive scheduling algorithm that
selects the process with the shortest burst time. The burst time is the time a
process takes to complete its execution. SJF minimizes the average waiting time of
processes.
• Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives
each process a fixed time slice (quantum) in turn. If a process does not complete
its execution within the allotted quantum, it is preempted and added to the end of
the ready queue. RR ensures fair distribution of CPU time to all processes and
avoids starvation.
• Priority Scheduling: This scheduling algorithm assigns priority to each
process and the process with the highest priority is executed first. Priority can be
set based on process type, importance, or resource requirements.
• Multilevel queue: This scheduling algorithm divides the ready queue into
several separate queues, each queue having a different priority. Processes are
queued based on their priority, and each queue uses its own scheduling algorithm.
This scheduling algorithm is useful in scenarios where different types of processes
have different priorities.
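As a rough sketch of how these policies differ, the following computes the average waiting time for FCFS and SJF over the same workload (the burst times are made-up example values, not from any real system):

```python
def avg_waiting_time(burst_times):
    """Average waiting time when processes run back-to-back in the given order."""
    waiting, elapsed = 0, 0
    for burst in burst_times:
        waiting += elapsed   # this process waited for all earlier ones to finish
        elapsed += burst
    return waiting / len(burst_times)

bursts = [24, 3, 3]                      # arrival order: P1, P2, P3 (example data)
fcfs = avg_waiting_time(bursts)          # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # SJF: run shortest burst first

print(fcfs)  # 17.0 -> (0 + 24 + 27) / 3
print(sjf)   # 3.0  -> (0 + 3 + 6) / 3
```

Running the two short jobs first cuts the average waiting time from 17 to 3 time units, which is exactly why SJF is optimal for this metric.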
Advantages of Process Management
• Improved Efficiency: Process management can help organizations identify
bottlenecks and inefficiencies in their processes, allowing them to make changes
to streamline workflows and increase productivity.
• Cost Savings: By identifying and eliminating waste and inefficiencies,
process management can help organizations reduce costs associated with their
business operations.
• Improved Quality: Process management can help organizations improve the
quality of their products or services by standardizing processes and reducing
errors.
• Increased Customer Satisfaction: By improving efficiency and quality,
process management can enhance the customer experience and increase
satisfaction.
• Compliance with Regulations: Process management can help organizations
comply with regulatory requirements by ensuring that processes are properly
documented, controlled, and monitored.
Disadvantages of Process Management
• Time and Resource Intensive: Implementing and maintaining process
management initiatives can be time-consuming and require significant resources.
• Resistance to Change: Some employees may resist changes to established
processes, which can slow down or hinder the implementation of process
management initiatives.
• Overemphasis on Process: Overemphasis on the process can lead to a lack of
focus on customer needs and other important aspects of business operations.
• Risk of Standardization: Standardizing processes too much can limit
flexibility and creativity, potentially stifling innovation.
• Difficulty in Measuring Results: Measuring the effectiveness of process
management initiatives can be difficult, making it challenging to determine their
impact on organizational performance.
Process Table and Process Control Block (PCB)
While creating a process, the operating system performs several operations. To
identify the processes, it assigns a process identification number (PID) to each process.
As the operating system supports multiprogramming, it needs to keep track of all the
processes. For this task, the process control block (PCB) is used to track each
process’s execution status. Each PCB contains information about the process state,
program counter, stack pointer, status of opened files, scheduling algorithm, etc.
All this information must be saved when the process is switched from one state to
another. When the process makes a transition from one state to another, the
operating system must update the information in the process’s PCB. A process control
block (PCB) contains information about the process, i.e. registers, quantum,
priority, etc. The process table is an array of PCBs; that is, it logically contains
a PCB for each of the current processes in the system.
1. Pointer: It is a stack pointer that is required to be saved when the process is
switched from one state to another to retain the current position of the process.
2. Process state: It stores the respective state of the process.
3. Process number: Every process is assigned a unique id known as process ID
or PID which stores the process identifier.
4. Program counter: It stores the program counter, which contains the address of
the next instruction to be executed for the process.
5. Registers: These are the CPU registers, which include the accumulator, base
and index registers, and general-purpose registers.
6. Memory limits: This field contains the information about memory
management system used by the operating system. This may include page tables,
segment tables, etc.
7. Open files list : This information includes the list of files opened for a process.
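The fields above can be sketched as a simple data structure. This is a minimal illustration of a PCB layout; the field names mirror the list above and are not any real kernel's definition:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                       # 3. process number / identifier
    state: str = "new"             # 2. process state
    program_counter: int = 0       # 4. address of the next instruction
    registers: dict = field(default_factory=dict)  # 5. saved CPU registers
    memory_limits: tuple = (0, 0)  # 6. e.g. a (base, limit) register pair
    open_files: list = field(default_factory=list) # 7. open files list
    stack_pointer: int = 0         # 1. saved stack pointer

pcb = PCB(pid=42)          # the OS would create this at process creation
pcb.state = "ready"        # and update it on every state transition
print(pcb.pid, pcb.state)  # 42 ready
```

A real PCB is a kernel structure (e.g. `task_struct` in Linux) with many more fields, but the idea is the same: one record per process holding everything needed to suspend and resume it.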
Additional Points to Consider for Process Control Block (PCB)
• Interrupt handling: The PCB also contains information about the interrupts
that a process may have generated and how they were handled by the operating
system.
• Context switching: The process of switching from one process to another is
called context switching. The PCB plays a crucial role in context switching by
saving the state of the current process and restoring the state of the next process.
• Real-time systems: Real-time operating systems may require additional
information in the PCB, such as deadlines and priorities, to ensure that time-
critical processes are executed in a timely manner.
• Virtual memory management: The PCB may contain information about a
process’s virtual memory management, such as page tables and page fault
handling.
• Inter-process communication: The PCB can be used to facilitate inter-
process communication by storing information about shared resources and
communication channels between processes.
• Fault tolerance: Some operating systems may use multiple copies of the PCB
to provide fault tolerance in case of hardware failures or software errors.
Advantages of Process Table and PCB
1. Efficient process management: The process table and PCB provide an
efficient way to manage processes in an operating system. The process table
contains all the information about each process, while the PCB contains the
current state of the process, such as the program counter and CPU registers.
2. Resource management: The process table and PCB allow the operating
system to manage system resources, such as memory and CPU time, efficiently.
By keeping track of each process’s resource usage, the operating system can
ensure that all processes have access to the resources they need.
3. Process synchronization: The process table and PCB can be used to
synchronize processes in an operating system. The PCB contains information
about each process’s synchronization state, such as its waiting status and the
resources it is waiting for.
4. Process scheduling: The process table and PCB can be used to schedule
processes for execution. By keeping track of each process’s state and resource
usage, the operating system can determine which processes should be executed
next.
Disadvantages of Process Table and PCB
1. Overhead: The process table and PCB can introduce overhead and reduce
system performance. The operating system must maintain the process table and
PCB for each process, which can consume system resources.
2. Complexity: The process table and PCB can increase system complexity and
make it more challenging to develop and maintain operating systems. The need to
manage and synchronize multiple processes can make it more difficult to design
and implement system features and ensure system stability.
3. Scalability: The process table and PCB may not scale well for large-scale
systems with many processes. As the number of processes increases, the process
table and PCB can become larger and more difficult to manage efficiently.
4. Security: The process table and PCB can introduce security risks if they are
not implemented correctly. Malicious programs can potentially access or modify
the process table and PCB to gain unauthorized access to system resources or
cause system instability.
Miscellaneous accounting and status data: This field includes information about
the amount of CPU used, time constraints, job or process number, etc. The process
control block also stores the register contents, known as the execution context of
the processor, saved when the process was blocked from running. This execution
context enables the operating system to restore a process’s execution context when
the process returns to the running state. When the process makes a transition from
one state to another, the operating system updates the information in the process’s
PCB. The operating system maintains pointers to each process’s PCB in a process
table so that it can access the PCB quickly.
Operations on Processes
A process is an activity of executing a program. Basically, it is a program under
execution. Every process needs certain resources to complete its
task.
Operation on a Process
The execution of a process is a complex activity. It involves various operations.
Following are the operations that are performed while execution of a process:
Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system,
the user, or the old process itself. Several events can lead to process creation;
some of them are the following:
1. When we start the computer, the system creates several background processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. The batch system takes initiation of a batch job.
Scheduling/Dispatching
The event or activity in which the state of the process is changed from ready to run. It
means the operating system puts the process from the ready state into the running
state. Dispatching is done by the operating system when the resources are free or the
process has higher priority than the ongoing process. There are various other cases in
which the process in the running state is preempted and the process in the ready state
is dispatched by the operating system.
Blocking
When a process invokes an input-output system call, the process is put into block
mode, i.e. a mode where the process waits for the input-output to complete. Hence,
on the demand of the process itself, the operating system blocks the process and
dispatches another process to the processor. In the process-blocking operation, the
operating system puts the process in a ‘waiting’ state.
Preemption
When a timeout occurs, meaning the process has not finished within the allotted
time interval and the next process is ready to execute, the operating system
preempts the process. This operation is only valid where CPU scheduling supports
preemption. It also happens in priority scheduling, where the arrival of a
high-priority process preempts the ongoing process. In the process-preemption
operation, the operating system puts the process in a ‘ready’ state.
Process Termination
Process termination is the activity of ending the process. In other words, it is
the release of the computer resources taken by the process for its execution. Like
creation, several events may lead to process termination. Some of them are:
1. The process completes its execution fully and it indicates to the OS that it has
finished.
2. The operating system itself terminates the process due to service errors.
3. There may be a problem in hardware that terminates the process.
4. One process can be terminated by another process.
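The operations above (creation, dispatching, blocking, preemption, termination) amount to a state machine over the process states. As an illustrative sketch (the transition table is a simplification, not a real kernel's rules):

```python
# Legal transitions between process states, per the operations described above.
TRANSITIONS = {
    "new":        {"ready"},                           # creation / admission
    "ready":      {"running"},                         # scheduling / dispatching
    "running":    {"ready", "waiting", "terminated"},  # preemption / blocking / exit
    "waiting":    {"ready"},                           # I/O completes
    "terminated": set(),
}

class Process:
    def __init__(self, pid):
        self.pid, self.state = pid, "new"

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(1)
# create -> dispatch -> block on I/O -> I/O done -> dispatch -> exit
for s in ("ready", "running", "waiting", "ready", "running", "terminated"):
    p.move_to(s)
print(p.state)  # terminated
```

Note that there is no direct new-to-running or waiting-to-running edge: a process must always pass through the ready state, which is exactly what the dispatching operation does.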
Process Schedulers in Operating System
In computing, a process is the instance of a computer program that is being executed
by one or many threads. Scheduling is important in many different computer
environments. One of the most important areas of scheduling is which programs will
work on the CPU. This task is handled by the Operating System (OS) of the computer
and there are many different ways in which we can choose to configure programs.
What is Process Scheduling?
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process based on a
particular strategy.
Process scheduling is an essential part of a Multiprogramming operating system. Such
operating systems allow more than one process to be loaded into the executable
memory at a time and the loaded process shares the CPU using time multiplexing.
Process scheduler
Categories of Scheduling
Scheduling falls into one of two categories:
• Non-preemptive: In this case, a process’s resource cannot be taken before the
process has finished running. When a running process finishes and transitions to a
waiting state, resources are switched.
• Preemptive: In this case, the OS assigns resources to a process for a
predetermined period. The process switches from the running state to the ready
state or from the waiting state to the ready state during resource allocation. This
switching happens because the CPU may give other processes priority and substitute
the currently active process with a higher-priority process.
Types of Process Schedulers
There are three types of process schedulers:
1. Long Term or Job Scheduler
It brings the new process to the ‘Ready State’. It controls the Degree of Multi-
programming, i.e., the number of processes present in a ready state at any point in
time. It is important that the long-term scheduler make a careful selection of both I/O
and CPU-bound processes. I/O-bound tasks are those that spend much of their time on
input and output operations, while CPU-bound processes are those that spend their
time on the CPU. The job scheduler increases efficiency by maintaining a balance
between the two. It operates at a high level and is typically used in
batch-processing systems.
2. Short-Term or CPU Scheduler
It is responsible for selecting one process from the ready state and scheduling it
into the running state. Note: the short-term scheduler only selects the process to
schedule; it does not load the process into the running state. This is where all
the scheduling algorithms are used. The CPU scheduler is responsible for ensuring
that no starvation occurs due to processes with high burst times.
3. Medium-Term Scheduler
It is responsible for suspending and resuming the process. It mainly does swapping
(moving processes from main memory to disk and vice versa). Swapping may be
necessary to improve the process mix or because a change in memory requirements
has overcommitted available memory, requiring memory to be freed up. It is helpful
in maintaining a perfect balance between the I/O bound and the CPU bound. It
reduces the degree of multiprogramming.
Some Other Schedulers
• I/O schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use various
algorithms to determine the order in which I/O operations are executed, such
as FCFS (First-Come, First-Served) or RR (Round Robin).
• Real-time schedulers: In real-time systems, real-time schedulers ensure that
critical tasks are completed within a specified time frame. They can prioritize and
schedule tasks using various algorithms such as EDF (Earliest Deadline First) or
RM (Rate Monotonic).
Comparison Among Schedulers

Long-Term Scheduler | Short-Term Scheduler | Medium-Term Scheduler
It is a job scheduler. | It is a CPU scheduler. | It is a process-swapping scheduler.
It is barely present or nonexistent in a time-sharing system. | It is minimal in a time-sharing system. | It is a component of time-sharing systems.
Context Switching
In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in
the Process Control Block. A context switcher makes it possible for multiple
processes to share a single CPU using this method. A multitasking operating system
must include context switching among its features. The context that is saved and
restored includes:
• Program Counter
• Scheduling information
• The base and limit register value
• Currently used register
• Changed State
• I/O State information
• Accounting information
Context Switching in Operating System
An operating system is a program loaded into a computer that manages all the other
application programs running on it. In other words, the OS is an interface between
the user and the computer hardware.
What is Context Switching in an Operating System?
Context switching in an operating system involves saving the context or state of a
running process so that it can be restored later, and then loading the context or
state of another process and running it.
Context switching refers to the method used by the system to change a process from
one state to another using the CPUs present in the system to perform its job.
Example of Context Switching
Suppose the OS has N processes stored in process control blocks (PCBs). One process
runs on the CPU to do its job, while other processes, ordered by priority, queue up
to use the CPU to complete their jobs.
Switching the CPU to another process requires performing a state save of the current
process and a state restore of a different process. This task is known as a context
switch. When a context switch occurs, the kernel saves the context of the old process
in its PCB and loads the saved context of the new process scheduled to run. Context-
switch time is pure overhead because the system does no useful work while switching.
Switching speed varies from machine to machine, depending on the memory speed,
the number of registers that must be copied, and the existence of special instructions
(such as a single instruction to load or store all registers). A typical speed is a few
milliseconds. Context-switch times are highly dependent on hardware support. For
instance, some processors (such as the Sun UltraSPARC) provide multiple sets of
registers. A context switch here simply requires changing the pointer to the current
register set. Of course, if there are more active processes than there are register sets,
the system resorts to copying register data to and from memory, as before. Also, the
more complex the operating system, the greater the amount of work that must be done
during a context switch
Need of Context Switching
Context switching enables all processes to share a single CPU to finish their
execution while the status of each task is stored. When a process is reloaded into
the system, its execution resumes at the point where it was interrupted.
The operating system’s need for context switching is explained by the reasons listed
below.
• One process does not directly switch to another within the system. Context
switching makes it easier for the operating system to use the CPU’s resources to
carry out its tasks and store its context while switching between multiple
processes.
• Context switching allows a single CPU to handle multiple process requests
concurrently without the need for any additional processors.
Context Switching Triggers
The three different categories of context-switching triggers are as follows.
• Interrupts
• Multitasking
• User/Kernel switch
Interrupts: When the CPU has requested that data be read from a disc and an
interrupt occurs, a context switch lets the system hand control to the interrupt
handler so the event can be dealt with quickly.
Multitasking: The ability for a process to be switched from the CPU so that another
process can run is known as context switching. When a process is switched, the
previous state is retained so that the process can continue running at the same spot in
the system.
Kernel/User Switch: This trigger is used when the OS needs to switch between user
mode and kernel mode.
What is Process Control Block(PCB)?
The Process Control Block (PCB), also known as a Task Control Block, represents a
process in the operating system. A PCB is a data structure used by the computer to
store all the information about a process; it is also called a process descriptor.
When a process is created (started or installed), the operating system creates a
corresponding PCB.
State Diagram of Context Switching
Working of Context Switching
Context switching between two processes happens based on priority in the ready
queue of process control blocks. The following steps are involved:
• The state of the current process must be saved for rescheduling.
• The process state contains records, credentials, and operating system-specific
information stored on the PCB or switch.
• The PCB can be stored in a single layer in kernel memory or in a custom OS
file.
• A handle to the PCB is added to the queue of processes that are ready to run.
• The operating system suspends the execution of the current process and selects a
process from the ready queue by accessing its PCB.
• Load the PCB’s program counter and continue execution in the selected
process.
• Process/thread values can affect which processes are selected from the queue,
this can be important.
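The steps above can be sketched as a toy context switch: save the "CPU" registers into the old process's PCB, then load the new PCB's saved context. The register names and PCB fields here are invented for illustration, not any real architecture's:

```python
cpu = {"pc": 0, "acc": 0}   # pretend CPU state: program counter + one register

def context_switch(old_pcb, new_pcb, cpu):
    old_pcb["context"] = dict(cpu)   # state save of the current process into its PCB
    old_pcb["state"] = "ready"
    cpu.clear()
    cpu.update(new_pcb["context"])   # state restore of the selected process
    new_pcb["state"] = "running"

p1 = {"pid": 1, "state": "running", "context": {}}
p2 = {"pid": 2, "state": "ready", "context": {"pc": 500, "acc": 7}}

cpu.update({"pc": 120, "acc": 3})    # P1 has been executing for a while
context_switch(p1, p2, cpu)          # kernel switches the CPU from P1 to P2

print(cpu)            # {'pc': 500, 'acc': 7}  -> P2's context is now on the CPU
print(p1["context"])  # {'pc': 120, 'acc': 3}  -> P1's context is saved in its PCB
```

The time spent inside `context_switch` corresponds to the pure overhead mentioned earlier: no user-visible work happens while the save and restore run.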
Preemptive and Non-Preemptive Scheduling
You will discover the distinction between preemptive and non-preemptive scheduling
in this article. But first, you need to understand preemptive and non-preemptive
scheduling before going over the differences.
Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running state to the
ready state or from the waiting state to the ready state. The resources (mainly CPU
cycles) are allocated to the process for a limited amount of time and then taken away,
and the process is again placed back in the ready queue if that process still has CPU
burst time remaining. That process stays in the ready queue till it gets its next chance
to execute.
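Round robin is the classic example of this preemptive behaviour. The sketch below (with made-up burst times) shows each process being preempted back to the ready queue whenever its quantum expires while it still has CPU burst time remaining:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion order of process indices under round robin."""
    queue = deque((i, b) for i, b in enumerate(bursts))  # the ready queue
    order = []
    while queue:
        pid, remaining = queue.popleft()       # dispatch: ready -> running
        remaining -= min(quantum, remaining)   # run for at most one quantum
        if remaining:
            queue.append((pid, remaining))     # preempted: back to ready queue
        else:
            order.append(pid)                  # finished: running -> terminated
    return order

print(round_robin([5, 3, 1], quantum=2))  # [2, 1, 0]
```

With bursts of 5, 3 and 1 and a quantum of 2, the shortest process finishes first even though it arrived last in the queue, because the longer processes keep being preempted.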
Comparison Chart
Basis | Preemptive Scheduling | Non-Preemptive Scheduling
Overhead | It has the overhead of scheduling the processes. | It does not have scheduling overhead.
To know further, you can refer to our detailed article on States of a Process in
Operating system.
What is Process Scheduling?
Process Scheduling is the process of the process manager handling the removal of an
active process from the CPU and selecting another process based on a specific
strategy.
Process scheduling is an integral part of multiprogramming operating systems. Such
operating systems allow more than one process to be loaded into executable memory
at a time, and the loaded processes share the CPU using time multiplexing.
There are three types of process schedulers:
• Long term or Job Scheduler
• Short term or CPU Scheduler
• Medium-term Scheduler
Why do we need to schedule processes?
• Scheduling is important in many different computer environments. One of the
most important areas is scheduling which programs will work on the CPU. This
task is handled by the Operating System (OS) of the computer and there are many
different ways in which we can choose to configure programs.
• Process scheduling allows the OS to allocate CPU time to each process. Another
important reason to use a process scheduling system is that it keeps the CPU busy
at all times, which reduces the response time of programs.
• Considering that there may be hundreds of programs that need to run, the OS must
launch a program, stop it, switch to another program, etc. The way the OS switches
the CPU from one program to another is called “context switching”. By
context-switching programs in and out of the available CPUs, the OS can give the
user the illusion of running any number of programs at once.
• So now that we know one program runs at a time on a given CPU, and that the
operating system can swap programs in and out using a context switch, how do we
choose which programs to run, and in what order?
• That’s where scheduling comes in! First, you determine a metric, for example
“the amount of time until completion”, defined as the time interval between a task
entering the system and its completion. Second, you choose a schedule that
minimizes that metric: we want our tasks to finish as soon as possible.
What is the need for CPU scheduling algorithm?
CPU scheduling is the process of deciding which process will own the CPU while
another process is suspended. The main function of CPU scheduling is to ensure that
whenever the CPU would otherwise remain idle, the OS selects one of the processes
available in the ready queue.
In multiprogramming, if the long-term scheduler selects multiple I/O-bound
processes, then most of the time the CPU remains idle. The function of an effective
scheduler is to improve resource utilization.
If most processes change their state from running to waiting, the CPU sits idle and
overall performance suffers. In order to minimize this, the OS needs to schedule
tasks so as to make full use of the CPU and avoid the possibility of deadlock.
Shortest job first (SJF) is a scheduling discipline that selects the waiting
process with the smallest execution time to execute next. This scheduling method
may or may not be preemptive. It significantly reduces the average waiting time of
the processes waiting to be executed.
Characteristics of SJF:
• Shortest Job first has the advantage of having a minimum average waiting
time among all operating system scheduling algorithms.
• Each job is associated with a unit of time it takes to complete.
• It may cause starvation if shorter processes keep coming. This problem can be
solved using the concept of ageing.
Advantages of Shortest Job first:
• As SJF reduces the average waiting time thus, it is better than the first come
first serve scheduling algorithm.
• SJF is generally used for long term scheduling
Disadvantages of SJF:
• One of the demerit SJF has is starvation.
• Many times it becomes complicated to predict the length of the upcoming
CPU request
To learn about how to implement this CPU scheduling algorithm, please refer to our
detailed article on Shortest Job First.
4. Priority Scheduling:
5. Round robin:
Shortest Remaining Time First (SRTF):
Shortest remaining time first is the preemptive version of the shortest job first
algorithm discussed earlier, where the processor is allocated to the job closest to
completion. In SRTF, the process with the smallest amount of time remaining until
completion is selected to execute.
Characteristics of Shortest remaining time first:
• The SRTF algorithm processes jobs faster than the SJF algorithm, provided its
overhead is not counted.
• The context switch is done a lot more times in SRTF than in SJF and
consumes the CPU’s valuable time for processing. This adds up to its processing
time and diminishes its advantage of fast processing.
Advantages of SRTF:
• In SRTF the short processes are handled very fast.
• The system also requires very little overhead since it only makes a decision
when a process completes or a new process is added.
Disadvantages of SRTF:
• Like the shortest job first, it also has the potential for process starvation.
• Long processes may be held off indefinitely if short processes are continually
added.
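A minimal SRTF sketch: at every time unit, run the arrived process with the least remaining time, preempting whatever was running before. The process names, arrival times, and bursts below are made-up example data:

```python
def srtf(procs):
    """procs: list of (pid, arrival, burst). Returns {pid: completion_time}."""
    remaining = {pid: burst for pid, _, burst in procs}
    arrival = {pid: arr for pid, arr, _ in procs}
    done, t = {}, 0
    while remaining:
        ready = [p for p in remaining if arrival[p] <= t]
        if not ready:
            t += 1                # CPU idles until the next arrival
            continue
        p = min(ready, key=lambda q: remaining[q])  # shortest remaining time wins
        remaining[p] -= 1         # run the chosen process for one time unit
        t += 1
        if remaining[p] == 0:
            done[p] = t
            del remaining[p]
    return done

print(srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 2)]))
```

Here P1 starts first but is preempted as soon as the shorter P2 and P3 arrive; P1's long burst is held off until every shorter job has finished, which is exactly the starvation risk noted above.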
9. Multiple Queue Scheduling:
Processes in the ready queue can be divided into different classes where each class
has its own scheduling needs. For example, a common division is a foreground
(interactive) process and a background (batch) process. These two classes have
different scheduling needs. For this kind of situation Multilevel Queue Scheduling is
used.
CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible. Theoretically, CPU utilization can range from 0 to 100 but in a real-time
system, it varies from 40 to 90 percent depending on the load upon the system.
Throughput
A measure of the work done by the CPU is the number of processes being executed
and completed per unit of time. This is called throughput. The throughput may vary
depending on the length or duration of the processes.
Turnaround Time
For a particular process, an important criterion is how long it takes to execute that
process. The time elapsed from the time of submission of a process to the time of
completion is known as the turnaround time. Turn-around time is the sum of times
spent waiting to get into memory, waiting in the ready queue, executing in CPU, and
waiting for I/O.
Turn Around Time = Completion Time - Arrival Time.
Waiting Time
A scheduling algorithm does not affect the time required to complete the process once
it starts execution. It only affects the waiting time of a process i.e. time spent by a
process waiting in the ready queue.
Waiting Time = Turnaround Time - Burst Time.
Response Time
In an interactive system, turnaround time is not the best criterion. A process may
produce some output fairly early and continue computing new results while previous
results are being output to the user. Thus another criterion is the time taken from the
submission of a request until the first response is produced. This measure is called
response time.
Response Time = Time of First CPU Allocation - Arrival Time
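The formulas above can be checked with a small sketch. The process data below is illustrative, assuming a simple non-preemptive FCFS schedule:

```python
# FCFS schedule for three sample processes: compute Turnaround, Waiting,
# and Response Time using the formulas above.
processes = [  # (name, arrival_time, burst_time) -- illustrative values
    ("P1", 0, 5),
    ("P2", 1, 3),
    ("P3", 2, 8),
]

clock = 0
for name, arrival, burst in processes:  # FCFS: run in arrival order
    start = max(clock, arrival)         # when the CPU is allocated first
    completion = start + burst
    turnaround = completion - arrival   # TAT = Completion - Arrival
    waiting = turnaround - burst        # WT  = TAT - Burst
    response = start - arrival          # RT  = First Allocation - Arrival
    clock = completion
    print(f"{name}: TAT={turnaround} WT={waiting} RT={response}")
```

Note that under non-preemptive FCFS, waiting time equals response time, since a process runs to completion once it first gets the CPU.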
Completion Time
The completion time is the time when the process stops executing, which means that
the process has completed its burst time and is completely executed.
Priority
Predictability
A given process always should run in about the same amount of time under a similar
system load.
Race Condition
When more than one process is executing the same code or accessing the same
memory or any shared variable in that condition there is a possibility that the output
or the value of the shared variable is wrong so for that all the processes doing the race
to say that my output is correct this condition known as a race condition. Several
processes access and process the manipulations over the same data concurrently, and
then the outcome depends on the particular order in which the access takes place.
A race condition is a situation that may occur inside a critical section. This happens
when the result of multiple thread execution in the critical section differs according to
the order in which the threads execute. Race conditions in critical sections can be
avoided if the critical section is treated as an atomic instruction. Also, proper thread
synchronization using locks or atomic variables can prevent race conditions.
Example:
Let's understand the race condition better with an example.
Say there are two processes, P1 and P2, which share a common variable (shared=10).
Both processes are in the ready queue waiting for their turn to execute. Suppose
process P1 runs first: the CPU copies the shared variable (shared=10) into P1's local
variable (X=10) and increments it by 1 (X=11). When the CPU then reaches the line
sleep(1), it switches from P1 to process P2 in the ready queue, and P1 goes into a
waiting state for 1 second.
The CPU now executes P2 line by line, copying the shared variable (shared=10) into
P2's local variable (Y=10) and decrementing Y by 1 (Y=9). When the CPU reaches
sleep(1), P2 also goes into a waiting state, and the CPU stays idle for a while since
the ready queue is empty. After P1's 1 second elapses and it re-enters the ready queue,
the CPU resumes P1 and executes its remaining line of code, storing the local variable
(X=11) into the shared variable (shared=11). The CPU again idles until P2's 1 second
elapses; when P2 re-enters the ready queue, the CPU executes P2's remaining line,
storing the local variable (Y=9) into the shared variable (shared=9).
Initially Shared = 10

Process 1     Process 2
---------     ---------
X++           Y--
sleep(1)      sleep(1)
shared = X    shared = Y
Note: We expect the final value of the shared variable after P1 and P2 run to be 10
(P1 increments it by 1 and P2 decrements it by 1, so it should return to shared=10).
But we get an undesired value due to a lack of proper synchronization.
• If the interleaved order of execution is first P1, then P2 (so P2 writes last),
the final value of the shared variable is 9.
• If the interleaved order of execution is first P2, then P1 (so P1 writes last),
the final value of the shared variable is 11.
• Here the outcomes 9 and 11 are racing: if we run these two processes on a real
system, sometimes we get 9 and sometimes 11 as the final value of the shared
variable, instead of the expected 10. This phenomenon is called a race condition.
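The P1/P2 scenario above can be sketched with Python threads. To make the lost update reproducible, a barrier and an event (stand-ins for the sleep(1) calls) force both reads to happen before either write, with P2 writing last:

```python
import threading

shared = 10
reads_done = threading.Barrier(2)   # both threads read before either writes
p1_wrote = threading.Event()        # force P2 to write after P1

def p1():
    global shared
    x = shared          # read shared (10) into local X
    x += 1              # X++ -> 11
    reads_done.wait()   # models sleep(1): the reads interleave first
    shared = x          # write back 11
    p1_wrote.set()

def p2():
    global shared
    y = shared          # read shared (10) into local Y
    y -= 1              # Y-- -> 9
    reads_done.wait()
    p1_wrote.wait()     # P2 writes last in this interleaving
    shared = y          # write back 9, silently overwriting P1's update

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(shared)  # 9 -- the expected value 10 is lost
```

Without the barrier and event, the outcome would depend on the actual thread interleaving, which is exactly the non-determinism a race condition describes.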
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time.
The critical section contains shared variables that need to be synchronized to maintain
the consistency of data variables. So the critical section problem means designing a
way for cooperative processes to access shared resources without creating data
inconsistencies.
In the entry section, the process requests entry into the critical section.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion: If a process is executing in its critical section, then no
other process is allowed to execute in the critical section.
• Progress: If no process is executing in the critical section and other processes
are waiting outside the critical section, then only those processes that are not
executing in their remainder section can participate in deciding which will enter
the critical section next, and the selection can not be postponed indefinitely.
• Bounded Waiting: A bound must exist on the number of times that other
processes are allowed to enter their critical sections after a process has made a
request to enter its critical section and before that request is granted.
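A lock is the simplest way to satisfy mutual exclusion. In the hypothetical sketch below, acquiring the lock plays the role of the entry section and releasing it the exit section, so the shared counter stays consistent:

```python
import threading

counter = 0
lock = threading.Lock()   # guards the critical section

def worker():
    global counter
    for _ in range(100_000):
        with lock:        # entry section: only one thread may proceed
            counter += 1  # critical section: update the shared variable
        # exit section: the lock is released when the 'with' block ends

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- no updates are lost
```

Removing the lock would reintroduce the race condition described earlier, since `counter += 1` is a read-modify-write, not an atomic instruction.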
Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be
signaled by another thread. This is different from a mutex, which can be released only
by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process
synchronization.
A Semaphore is an integer variable, which can be accessed only through two
operations wait() and signal().
There are two types of semaphores: Binary Semaphores and Counting Semaphores.
• Binary Semaphores: They can only be either 0 or 1. They are also known as
mutex locks, as the locks can provide mutual exclusion. All the processes can
share the same mutex semaphore that is initialized to 1. A process then has to
wait until the semaphore is 1; it sets the semaphore to 0 and starts its critical
section. When it completes its critical section, it resets the value of the
semaphore to 1 so that some other process can enter its critical section.
• Counting Semaphores: They can have any value and are not restricted to a
certain domain. They can be used to control access to a resource that has a
limitation on the number of simultaneous accesses. The semaphore can be
initialized to the number of instances of the resource. Whenever a process wants
to use that resource, it checks if the number of remaining instances is more than
zero, i.e., the process has an instance available. Then, the process can enter its
critical section thereby decreasing the value of the counting semaphore by 1.
After the process is over with the use of the instance of the resource, it can leave
the critical section thereby adding 1 to the number of available instances of the
resource.
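A counting semaphore can be sketched with Python's `threading.Semaphore`. The limit of 3 instances below is an assumed example value; the peak counter just verifies that no more than 3 threads ever hold the resource at once:

```python
import threading
import time

slots = threading.Semaphore(3)  # resource with 3 instances (example limit)
active = 0                      # how many threads currently hold an instance
peak = 0                        # highest concurrency observed
state = threading.Lock()        # protects the two counters above

def use_resource():
    global active, peak
    slots.acquire()             # wait(): blocks when all 3 instances are busy
    with state:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)            # hold an instance briefly
    with state:
        active -= 1
    slots.release()             # signal(): return the instance

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3
```

Ten threads compete for the resource, but acquire() caps the number inside the "critical" region at the semaphore's initial value, exactly as described above.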
Advantages of Process Synchronization
• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
• Adds overhead to the system
• This can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlocks if not implemented properly.