
RAJ KUMAR GOEL INSTITUTE OF TECHNOLOGY, Ghaziabad
Operating Systems
BCS-401

Unit: II

Operating System

Mr. H.S. Tomer
Assistant Professor, CSE (AI&ML)
B.Tech, 4th Semester
1
Course Outcomes

Course outcome: after completion of this course, students will be able to:

CO1: Understand the structure and functions of OS (K1, K2)

CO2: Understand the principles of concurrency and deadlocks (K2)

CO3: Learn about processes, threads and scheduling algorithms (K1, K2)

CO4: Learn various memory management schemes (K2)

CO5: Study I/O management and file systems (K2, K4)


2
Course Outcome of Unit 2

After completion of this unit, students will be able to:

• CO2: Understand the principles of concurrency and deadlocks.

3
Process Synchronization

Process: a process is a program in execution, and it forms the basis of all
computation. A process is not the same as the program code; it is much more
than that. A process is an 'active' entity, whereas a program is a 'passive'
entity.
Processes in a computer system can be either:
• Independent processes: a process is independent when it cannot affect, and
cannot be affected by, the other processes executing in the system.
Independent processes do not share data with any other process.
• Cooperating processes: a process is cooperating when it can affect, or be
affected by, the other processes executing in the system. A cooperating
process shares data with other processes.
4
Process Synchronization

When two or more processes cooperate with each other, their order of
execution must be preserved; otherwise conflicts can arise in their
execution and incorrect outputs can be produced.

Process Synchronization: the procedure involved in preserving the
appropriate order of execution of cooperating processes is known as
process synchronization. Various synchronization mechanisms are used to
synchronize processes.

6
Process Synchronization
Concurrency:- Concurrency in operating systems refers to the ability of an
operating system to handle multiple tasks or processes at the same time.

• With the increasing demand for high performance computing, concurrency


has become a critical aspect of modern computing systems.
• Operating systems that support concurrency can execute multiple tasks
simultaneously, leading to better resource utilization, improved
responsiveness, and enhanced user experience.
• Concurrency is essential in modern operating systems due to the increasing
demand for multitasking, real-time processing, and parallel computing.
• Concurrency also introduces new challenges such as race conditions,
deadlocks, and priority inversion, which need to be managed effectively to
ensure the stability and reliability of the system.
7
Process Synchronization

Principles of Concurrency: the principles of concurrency in operating
systems are designed to ensure that multiple processes or threads can
execute efficiently and effectively, without interfering with each other or
causing deadlock.
Concurrent system design frequently requires developing dependable
strategies for coordinating process execution, data interchange, memory
allocation, and scheduling, in order to decrease response time and maximize
throughput.

8
Process Synchronization

Advantages of Concurrency in Operating System:-


• Improved performance − Concurrency allows multiple tasks to be
executed simultaneously, improving the overall performance of the system.
• Resource utilization − Concurrency allows better utilization of system
resources, such as CPU, memory, and I/O devices.
• Scalability − Concurrency can improve the scalability of the system by
allowing it to handle an increasing number of tasks and users without
degrading performance.
• Fault tolerance − Concurrency can improve the fault tolerance of the
system by allowing tasks to be executed independently. If one task fails, it
does not affect the execution of other tasks.

9
Process Synchronization

Problems in Concurrency: these problems can be difficult to debug and
diagnose, and they often require careful design and implementation of
concurrency mechanisms to avoid.
1. Race conditions: occur when the output of a system depends on the
order and timing of events, which leads to unpredictable behavior.
A race condition typically occurs when two or more threads read, write,
and possibly make decisions based on memory that they are accessing
concurrently (a minimal example is sketched after item 5 below).
2. Deadlocks: occur when two or more processes or threads are waiting
for each other to release resources, resulting in a circular wait.
A deadlock is a situation where a set of processes are blocked because
each process is holding a resource and waiting for another resource
acquired by some other process.
10
Process Synchronization
3. Starvation: occurs when a process or thread cannot access the resources
it needs to complete its task because other processes or threads are hogging
those resources. The starved process or thread is unable to make progress.
Starvation is a resource-management problem in which a process does not get
the resources it needs for a long time, because they keep being allocated to
other processes.
4. Priority inversion: occurs when a low-priority process or thread holds a
resource that a high-priority process or thread needs, resulting in the
high-priority process or thread being blocked.
Priority inversion is a bug in which a high-priority task is indirectly
preempted by a low-priority task.

11
Process Synchronization

5. Memory consistency: memory consistency refers to the order in which
memory operations performed by different processes or threads become
visible to one another. In a concurrent system, memory consistency can be
challenging to ensure, and violations lead to incorrect behavior.

12
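To make the race condition described in item 1 concrete, here is a minimal sketch (not from the notes) using POSIX threads; the shared counter and the iteration count are illustrative. Compile with -pthread.

    /* race.c - two threads increment a shared counter without synchronization */
    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;                     /* shared data */

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++)
            counter++;                    /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* expected 2000000, but the printed value is usually smaller,
           because the two increments interleave */
        printf("counter = %ld\n", counter);
        return 0;
    }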
Process Synchronization

Critical Section: the regions of a program that access shared resources and
may cause race conditions are called critical sections. To avoid race
conditions among processes, we need to ensure that only one process at a
time executes within the critical section.

A race condition typically occurs when two or more threads read, write, and
possibly make decisions based on memory that they are accessing
concurrently.
14
Process Synchronization
• Producer-Consumer Problem: the Producer-Consumer problem is a
classical multi-process synchronization problem; that is, we are trying
to achieve synchronization among more than one process.
• The problem is defined as follows: there is a fixed-size buffer
and a Producer process, and a Consumer process.
• The Producer process creates an item and adds it to the shared
buffer.
• The Consumer process takes items out of the shared buffer and
“consumes” them.

15
Process Synchronization

The following conditions must hold in the Producer-Consumer problem:

• The producer should produce data only when the buffer is not full. If the
buffer is full, the producer is not allowed to add any item to it.

• The consumer can consume data only when the buffer is not empty. If the
buffer is empty, the consumer is not allowed to take any item from it.

• The producer and the consumer should not access the buffer at the same
time (a minimal unsynchronized sketch below shows why).
17
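To show why the producer and the consumer must be synchronized, here is a minimal unsynchronized sketch; BUFFER_SIZE, the circular buffer, and the helpers produce_item/consume_item are illustrative, not from the notes. The two unprotected updates of count form exactly the race condition described earlier.

    #define BUFFER_SIZE 8

    int produce_item(void);              /* assumed helpers, provided elsewhere */
    void consume_item(int item);

    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;                 /* next free slot / next filled slot */
    int count = 0;                       /* items currently in the buffer */

    void producer(void) {
        while (1) {
            int item = produce_item();
            while (count == BUFFER_SIZE)
                ;                        /* buffer full: wait */
            buffer[in] = item;
            in = (in + 1) % BUFFER_SIZE;
            count++;                     /* unprotected: races with count-- */
        }
    }

    void consumer(void) {
        while (1) {
            while (count == 0)
                ;                        /* buffer empty: wait */
            int item = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            count--;                     /* unprotected: races with count++ */
            consume_item(item);
        }
    }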
Process Synchronization
Critical Section: the regions of a program that access shared resources and
may cause race conditions are called critical sections. To avoid race
conditions among processes, we need to ensure that only one process at a
time executes within the critical section.
The critical-section problem is to design a set of protocols that ensure a
race condition among the processes never arises.
Proper use of critical sections and process-synchronization mechanisms is
essential in concurrent programming, to ensure correct access to shared
resources and to avoid race conditions, deadlocks, and other
synchronization-related issues.

25
Process Synchronization
In order to synchronize cooperating processes, our main task is to solve the
critical-section problem. We need to provide a solution that satisfies the
following conditions (a generic skeleton is sketched below):
Mutual Exclusion: only one process can be inside the critical section at any
time. If any other process requires the critical section, it must wait until
the section is free.
Progress: if a process does not need to enter the critical section, it must
not prevent other processes from entering it; the decision of which process
enters next cannot be postponed indefinitely.
Bounded Waiting: there must be a bound on how long a process waits to enter
the critical section; no process should be kept waiting endlessly to enter
its critical section.
27
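Every solution to the critical-section problem follows the same general structure, sketched below in the pseudocode style used later in these notes; entry_section() and exit_section() stand for whichever protocol (Peterson's algorithm, test-and-set, a semaphore) is being used.

    do {
        entry_section();     /* request permission to enter            */
        /* critical section: access shared resources here */
        exit_section();      /* announce that the section is free      */
        /* remainder section: work that does not touch shared data */
    } while (true);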
Process Synchronization
Advantages of critical section in process synchronization:
1. Prevents race conditions: By ensuring that only one process can execute
the critical section at a time, race conditions are prevented, ensuring
consistency of shared data.
2. Provides mutual exclusion: Critical sections provide mutual exclusion to
shared resources, preventing multiple processes from accessing the same
resource simultaneously and causing synchronization-related issues.
3. Reduces wasted CPU time: by allowing waiting processes to block rather
than busy-wait, critical sections avoid wasting CPU cycles, improving
overall system efficiency.
4. Simplifies synchronization: critical sections simplify the
synchronization of shared resources, as only one process can access the
resource at a time, eliminating the need for more complex synchronization
mechanisms.
28
Process Synchronization

Disadvantages of critical section in process synchronization:-


1. Overhead: implementing critical sections using synchronization
mechanisms like semaphores and mutexes can introduce additional
overhead, slowing down program execution.
2. Deadlocks: Poorly implemented critical sections can lead to deadlocks,
where multiple processes are waiting indefinitely for each other to release
resources.
3. Can limit parallelism: If critical sections are too large or are executed
frequently, they can limit the degree of parallelism in a program, reducing its
overall performance.

29
Process Synchronization

Mutual exclusion is a property of process synchronization which states
that "no two processes can be in the critical section at any given point of
time". Any process-synchronization technique must satisfy the mutual
exclusion property, without which it is not possible to get rid of race
conditions.
If a process Pi is executing a critical section for a data item "ds", any
other process wishing to execute a critical section for the same data item
"ds" must wait until Pi finishes executing its critical section. Thus the
critical section for a data item ds is a mutually exclusive region with
respect to access to the shared data "ds", and this concept is called
mutual exclusion.

30
Process Synchronization
The mutual exclusion problem is to design a pre-protocol (entry section) and
a post-protocol (exit section), based on either hardware or software, that:
• prevent two processes from being in their critical sections at the same
time,
• provide the desirable no-deadlock and no-starvation properties, and
• allow critical sections to be executed atomically.

Atomic Transactions: an operation is atomic if its steps are performed as a
single, indivisible unit.
Operations that are not atomic, but interruptible and performed by multiple
processes, can cause problems.

31
Process Synchronization
Solutions to the Mutual Exclusion Problem:
• Software solutions: correctness does not rely on any special hardware
support.
  Two-process solutions:
  1. Algorithm 1
  2. Algorithm 2
  3. Algorithm 3 (Dekker's algorithm)
  4. Algorithm 4 (Peterson's algorithm)
  N-process solution:
  1. Lamport's Bakery Algorithm

32
Process Synchronization
• Hardware solutions: rely on special machine instructions.
  1. Disabling interrupts
  2. Special machine instructions
  3. Test and Set Lock
• Operating system solutions: provide functions and data structures to
programmers.
  1. Semaphores
  2. Mutex

33
Process Synchronization
Dekker's Algorithm:
Dekker's algorithm is the first known software solution to the critical
section problem. There are several versions of the algorithm; the fifth
(final) version satisfies all the conditions below and is the most efficient
of them. A solution to the critical section problem must ensure the
following three conditions:
• Mutual Exclusion
• Progress
• Bounded Waiting

37
Process Pi:
    do {
        flag[i] = true;
        while (flag[j]) {
            if (turn == j) {
                flag[i] = false;
                while (turn == j);
                flag[i] = true;
            }
        }
        /* critical section */
        turn = j;
        flag[i] = false;
        /* remainder section */
    } while (true);

Process Pj (symmetric, with i and j interchanged):
    do {
        flag[j] = true;
        while (flag[i]) {
            if (turn == i) {
                flag[j] = false;
                while (turn == i);
                flag[j] = true;
            }
        }
        /* critical section */
        turn = i;
        flag[j] = false;
        /* remainder section */
    } while (true);
38
Process Synchronization
Peterson's Algorithm: this algorithm satisfies all three conditions of the
critical section problem.
The two processes share two variables:
1. int turn
2. boolean flag[2]
Initially flag[0] = flag[1] = false, and the initial value of turn is
immaterial. A sketch of the algorithm is given below.

39
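A minimal sketch of Peterson's algorithm for two processes, written in the same C-like pseudocode as the Dekker listing above; i is the index of the current process and j = 1 - i is the other, and the empty while loop is an intentional busy-wait.

    int turn;                 /* whose turn it is to enter                */
    int flag[2] = {0, 0};     /* flag[i] == 1 means Pi wants to enter     */

    /* code for process Pi (i is 0 or 1, j = 1 - i) */
    do {
        flag[i] = 1;          /* announce intention to enter              */
        turn = j;             /* give priority to the other process       */
        while (flag[j] && turn == j)
            ;                 /* busy-wait while Pj wants in and has the turn */
        /* critical section */
        flag[i] = 0;          /* leave: no longer interested              */
        /* remainder section */
    } while (1);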
Process Synchronization

Test and Set Lock: it is a hardware-based synchronization mechanism.
It uses a test-and-set instruction to provide synchronization among
processes executing concurrently.
The test-and-set instruction executes atomically: if one process is
executing a test-and-set, no other process can begin another test-and-set
on the same lock until the first one has finished.
Lock value = 0 means the critical section is currently vacant and no process
is inside it.
Lock value = 1 means the critical section is currently occupied and a
process is inside it.
A sketch of the instruction and its use is given below.

44
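A sketch of the test-and-set approach; here TestAndSet is written as a C function only to model the atomic hardware instruction — in real code the atomicity comes from the CPU, not from C.

    int lock = 0;                 /* 0: critical section free, 1: occupied */

    /* models the atomic instruction: returns the old value of *target
       and sets *target to 1, all as one indivisible step */
    int TestAndSet(int *target) {
        int old = *target;
        *target = 1;
        return old;
    }

    /* code for each process */
    do {
        while (TestAndSet(&lock))
            ;                     /* spin until the lock was 0 */
        /* critical section */
        lock = 0;                 /* release the lock */
        /* remainder section */
    } while (1);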
Process Synchronization
Semaphores: a semaphore is an integer variable, shared between processes or
threads, that is used to solve the critical section problem by means of two
atomic operations, wait and signal, which are used for process
synchronization.
The definitions of wait and signal are as follows:
Wait: the wait operation decrements the value of its argument S. If S is
zero or negative (no resource available), the caller waits until S becomes
positive before the decrement takes effect.

Signal: the signal operation increments the value of its argument S,
potentially allowing a waiting process to proceed. A sketch of the two
operations is given below.
49
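A classical busy-waiting sketch of the two operations; it is assumed here that each call executes atomically. Real kernels block the caller instead of spinning, but the semantics are the same.

    /* S is initialised to the number of available resources
       (1 for a binary semaphore) */
    void wait(int *S) {
        while (*S <= 0)
            ;            /* no resource available: keep waiting */
        (*S)--;          /* claim one resource */
    }

    void signal(int *S) {
        (*S)++;          /* release one resource */
    }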
Process Synchronization
Types of Semaphores
There are two main types of semaphores: counting semaphores and binary
semaphores.
Counting Semaphores: these are integer-valued semaphores with an
unrestricted value domain. They are used when a resource has multiple
instances. When an instance is released, the count is incremented; when an
instance is acquired, the count is decremented.
Binary Semaphores: binary semaphores are like counting semaphores, but their
value is restricted to 0 and 1. The wait operation succeeds only when the
semaphore is 1 (setting it to 0), and the signal operation sets a semaphore
whose value is 0 back to 1. Binary semaphores are sometimes easier to
implement than counting semaphores.
50
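As a concrete illustration (assuming a POSIX system; not from the notes), a binary semaphore used as a mutex with the standard sem_init/sem_wait/sem_post calls fixes the counter race shown earlier. Compile with -pthread.

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t mutex;                 /* binary semaphore protecting the counter */
    long counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; i++) {
            sem_wait(&mutex);    /* entry section    */
            counter++;           /* critical section */
            sem_post(&mutex);    /* exit section     */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&mutex, 0, 1);  /* initial value 1 -> binary semaphore */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* now reliably 2000000 */
        sem_destroy(&mutex);
        return 0;
    }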
Process Synchronization

Classical problems of Synchronization:-


A number of classical synchronization problems serve as examples of a large
class of concurrency-control problems. We use semaphores for
synchronization, since that is the traditional way to present such
solutions; however, actual implementations could use mutex locks in place of
binary semaphores. The following are considered classical synchronization
problems:
1. Bounded-buffer (or Producer-Consumer) Problem
2. Dining-Philosophers Problem
3. Sleeping Barber Problem
4. Readers and Writers Problem

51
Process Synchronization

1. Bounded-buffer (or Producer-Consumer) Problem

52
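A standard sketch of the bounded-buffer solution, written in the same wait/signal pseudocode used elsewhere in these notes: mutex protects the buffer, empty counts free slots, full counts filled slots. The buffer size N and the helpers produce_item/consume_item are illustrative.

    #define N 8                      /* buffer size (illustrative) */
    int buffer[N];
    int in = 0, out = 0;

    semaphore mutex = 1;             /* mutual exclusion on the buffer */
    semaphore empty = N;             /* counts empty slots             */
    semaphore full  = 0;             /* counts filled slots            */

    void producer(void) {
        int item;
        while (1) {
            item = produce_item();   /* assumed helper         */
            wait(empty);             /* wait for a free slot   */
            wait(mutex);             /* enter critical section */
            buffer[in] = item;
            in = (in + 1) % N;
            signal(mutex);           /* leave critical section */
            signal(full);            /* one more filled slot   */
        }
    }

    void consumer(void) {
        int item;
        while (1) {
            wait(full);              /* wait for a filled slot */
            wait(mutex);
            item = buffer[out];
            out = (out + 1) % N;
            signal(mutex);
            signal(empty);           /* one more free slot     */
            consume_item(item);      /* assumed helper         */
        }
    }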
Process Synchronization

2. Dining-Philosophers Problem

56
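In the dining-philosophers problem, five philosophers alternately think and eat, sharing five chopsticks, one between each pair of neighbours; a philosopher needs both adjacent chopsticks to eat. A common (but deadlock-prone) sketch in the same wait/signal pseudocode, with think() and eat() as assumed helpers:

    semaphore chopstick[5] = {1, 1, 1, 1, 1};   /* one binary semaphore per chopstick */

    /* code for philosopher i (0 <= i <= 4) */
    void philosopher(int i) {
        while (1) {
            think();                            /* assumed helper          */
            wait(chopstick[i]);                 /* pick up left chopstick  */
            wait(chopstick[(i + 1) % 5]);       /* pick up right chopstick */
            eat();                              /* assumed helper          */
            signal(chopstick[i]);               /* put down left           */
            signal(chopstick[(i + 1) % 5]);     /* put down right          */
        }
    }

If all five philosophers pick up their left chopstick at the same time, each waits forever for the right one: a deadlock. Standard fixes include allowing at most four philosophers at the table at once, or having one philosopher pick up the right chopstick first.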
Process Synchronization
3. Sleeping Barber Problem
A barbershop consists of a waiting room with n chairs and the barber room containing the
barber chair. If there are no customers to be served, the barber goes to sleep. If a customer
enters the barbershop and all chairs are occupied, then the customer leaves the shop. If the
barber is busy but chairs are available, then the customer sits in one of the free chairs. If the
barber is asleep, the customer wakes up the barber.

60
Process Synchronization
The solution uses three semaphores and a shared counter:
Chairs: n (the number of waiting-room chairs)
Semaphore customers (initially 0): counts waiting customers.
Semaphore barbers (initially 0): the number of idle barbers, 0 or 1.
Semaphore mutex (initially 1): used for mutual exclusion on the shared data.
int waiting = 0: a shared variable that counts waiting customers; this copy
of the customer count is needed because the value of a semaphore cannot be
read directly.

61
Process Synchronization
Customer process:
    do {
        wait(mutex);
        if (waiting < chairs) {
            waiting = waiting + 1;
            signal(customers);
            signal(mutex);
            wait(barbers);
            get_haircut();
        } else {
            signal(mutex);
        }
    } while (true);

Barber process:
    do {
        wait(customers);
        wait(mutex);
        waiting = waiting - 1;
        signal(barbers);
        signal(mutex);
        cut_hair();
    } while (true);
62
Process Synchronization
Interprocess communication (IPC) is the mechanism provided by the operating
system that allows processes to communicate with each other. This
communication could involve a process letting another process know that some
event has occurred, or the transfer of data from one process to another.

The models of interprocess communication are as follows −


•Shared Memory Model
•Message Passing Model

67
Process Synchronization
Shared Memory Model
Shared memory is the memory that can be simultaneously accessed by multiple
processes. This is done so that the processes can communicate with each other. All
POSIX systems, as well as Windows operating systems use shared memory.
Advantage of Shared Memory Model
Memory communication is faster on the shared memory model as compared to the
message passing model on the same machine.
Disadvantages of Shared Memory Model
Some of the disadvantages of shared memory model are as follows −
All the processes that use the shared memory model need to make sure that they
are not writing to the same memory location.
Shared memory model may create problems such as synchronization and memory
protection that need to be addressed.
69
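As a concrete illustration (assuming a POSIX system; the object name /demo_shm, the size, and the message are illustrative, and on Linux the program may need to be linked with -lrt), a minimal shared-memory writer using shm_open and mmap:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        const char *name = "/demo_shm";          /* illustrative object name */
        const size_t size = 4096;

        int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  /* create/open the object */
        if (fd < 0) { perror("shm_open"); return 1; }
        ftruncate(fd, size);                              /* set its size */

        char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(p, "hello from the writer");      /* now visible to any process
                                                    that maps the same object */
        munmap(p, size);
        close(fd);
        /* a reader would shm_open(name, O_RDONLY, 0) and mmap with PROT_READ;
           shm_unlink(name) removes the object when it is no longer needed */
        return 0;
    }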
Process Synchronization

Message Passing Model


Multiple processes can read and write data to the message queue without being
connected to each other. Messages are stored on the queue until their recipient
retrieves them. Message queues are quite useful for interprocess communication and
are used by most operating systems.
The size of the message can be fixed or variable.
Advantage of Messaging Passing Model
The message passing model is much easier to implement than the shared memory
model.
Disadvantage of Messaging Passing Model
The message passing model has slower communication than the shared memory
model because the connection setup takes time.

70
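A minimal sketch using POSIX message queues (assumptions: a POSIX system providing <mqueue.h>; the queue name /demo_mq and the sizes are illustrative; on Linux, link with -lrt). Sender and receiver are shown in one program for brevity, but they could just as well be separate processes opening the same queue name.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10, .mq_msgsize = 128 };
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);        /* sender: enqueue the message */

        char buf[128];
        unsigned prio;
        mq_receive(mq, buf, sizeof(buf), &prio);     /* receiver: dequeue (blocks if empty) */
        printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink("/demo_mq");                       /* remove the queue */
        return 0;
    }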
Process Synchronization

Direct Communication:
In this type of communication, a link is established directly between the
two communicating processes, which must name each other explicitly. Between
any pair of communicating processes, only one such link exists.
Indirect Communication:
Indirect communication is established through a shared mailbox (port).
Processes can communicate only if they share a mailbox, and a pair of
processes may share several communication links (mailboxes). These links can
be unidirectional or bidirectional.

71
Faculty Video Links, Youtube & NPTEL Video Links and Online
Courses Details

Youtube/other Video Links

• https://www.youtube.com/watch?v=_zOTMOubT1M
• https://www.youtube.com/playlist?list=PLmXKhU9FNesSFvj6gASuWmQd23Ul5omtD
• https://www.youtube.com/watch?v=x_UpLHXF9dU
• https://www.youtube.com/watch?v=cviEfwtdcEE
• https://nptel.ac.in/courses/106108101
