5-Process Synchronisation

Process synchronization is crucial in operating systems for managing shared resources and preventing issues like race conditions and deadlocks. Techniques such as mutexes, semaphores, and monitors are employed to ensure mutual exclusion and data consistency among concurrent processes. The document also discusses the critical section problem, cooperative processes, race conditions, and various synchronization methods to maintain system reliability and efficiency.

Process Synchronisation

1. Introduction to Process Synchronisation


Process synchronisation is a key concept in operating systems that allows multiple processes
to run at the same time without issues. It plays a vital role in managing shared resources such
as memory, files, and databases, preventing data inconsistencies, and ensuring system
reliability. Synchronisation mechanisms help coordinate processes, enforcing constraints to
avoid issues like race conditions, deadlocks, and resource starvation. By implementing proper
synchronisation techniques, operating systems maintain the integrity and efficiency of
concurrent process execution.

1.1 Importance of Process Synchronisation

• Ensures Data Consistency: Prevents multiple processes from making conflicting updates to shared data. For example, in a banking system where multiple processes try to update a customer's account balance simultaneously, synchronisation ensures that only one process can update the balance at a time. This maintains the accuracy of the account information and prevents errors such as overdrawn funds or incorrect balances.
• Avoids Deadlocks: Prevents system freezes by handling resource distribution efficiently. For instance, in a multi-threaded application, synchronisation can prevent deadlocks by carefully controlling the order in which resources are acquired, so that processes do not block each other and can continue to run smoothly.
• Improves System Efficiency: Enables parallel execution without compromising accuracy.
• Facilitates Inter-process Communication (IPC): Helps processes coordinate their tasks efficiently.

2. The Critical Section Problem


The critical section problem arises when many processes need to use shared resources at the
same time during concurrent processing. A section of the program, known as the critical
section, is where these shared resources are accessed and modified. To maintain consistency
and prevent data errors, it's essential to enforce mutual exclusion, allowing only one process
to work in its critical section at a time. Proper synchronisation mechanisms, such as locks,
semaphores, and monitors, help manage access to critical sections and prevent potential
conflicts.

2.1 Solution to the Critical Section Problem

A solution to the critical section problem must satisfy the following three conditions:

1. Mutual Exclusion: No two processes may execute in their critical sections simultaneously.
2. Progress: If no process is in the critical section and some processes wish to enter, one of them must be allowed to proceed. For example, a file access system might use a lock so that only one process writes to a file at a time, preventing data corruption while still guaranteeing that waiting processes eventually get in.
3. Bounded Waiting: There must be a limit on the number of times a process is bypassed before it gains access to the critical section. For instance, in a ticket reservation system, each customer requesting a ticket is placed in a queue, so no single customer is constantly overlooked in favour of others; this keeps the reservation process orderly and fair.

2.2 Example of the Critical Section Problem

Consider two processes, P1 and P2, accessing a shared variable:

int counter = 0;

void P1() {
    counter = counter + 1;
}

void P2() {
    counter = counter - 1;
}

If both processes execute simultaneously without synchronisation, the final value of the
counter can be unpredictable, leading to inconsistencies.

2.3 Methods to Solve the Critical Section Problem

• Peterson's Algorithm: Provides a software-based solution using flag and turn variables.
• Locks (Mutex): Ensure exclusive access to the critical section by locking resources.
• Monitors: A high-level abstraction that provides built-in synchronisation.

2.3.1 Peterson’s Algorithm

Peterson's algorithm is a classic software-based solution to the critical section problem in concurrent programming. It ensures mutual exclusion between two processes that need to access a shared resource, without using hardware-based locking mechanisms.

How Peterson’s Algorithm Works

It relies on two shared variables:

1. flag[i]: An array where flag[i] = true indicates that process i wants to enter the
critical section.
2. turn: A variable indicating whose turn it is to enter the critical section.

Algorithm Steps for Two Processes (P0 and P1)


Entry Section (Requesting Critical Section)

1. Process i sets flag[i] = true (indicating it wants to enter).
2. Process i sets turn = j (giving the other process a chance).
3. Process i waits while flag[j] == true and turn == j.

Critical Section

• The process executes its critical section safely.

Exit Section (Leaving Critical Section)

• Process i sets flag[i] = false (releasing the resource).

Peterson’s Algorithm in C
#include <stdio.h>
#include <pthread.h>
#include <stdbool.h>

#define NUM_ITERATIONS 5

// Note: volatile alone does not give the memory-ordering guarantees
// Peterson's algorithm needs on modern hardware; this is illustrative.
volatile bool flag[2] = {false, false};
volatile int turn;

void *process(void *arg) {
    int i = *(int *)arg;
    int j = 1 - i; // Index of the other process

    for (int k = 0; k < NUM_ITERATIONS; k++) {
        // Entry section
        flag[i] = true;
        turn = j;
        while (flag[j] && turn == j)
            ; // Busy wait

        // Critical section
        printf("Process %d is in the critical section.\n", i);

        // Exit section
        flag[i] = false;
    }
    return NULL;
}

int main() {
    pthread_t t0, t1;
    int p0 = 0, p1 = 1;

    pthread_create(&t0, NULL, process, &p0);
    pthread_create(&t1, NULL, process, &p1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}

Why Use Peterson’s Algorithm?

• Ensures mutual exclusion (only one process enters at a time).
• Guarantees progress (a process gets access if no one else is inside).
• Enforces bounded waiting (a process won't be blocked indefinitely).

Limitations

• Works only for two processes (not scalable to many processes).
• Requires busy waiting, which can be inefficient.

2.3.2 Locks (Mutex) in Process Synchronisation


A lock (also called a mutex, short for mutual exclusion) is a synchronisation mechanism used
in multi-threaded or multi-process environments to ensure that only one thread/process can
access a shared resource at a time.

How a Mutex Works

A mutex has two states:

1. Locked (Acquired): A thread that locks the mutex gains exclusive access to the critical section.
2. Unlocked (Released): When the thread releases the mutex, other threads can acquire it.

Example of Mutex Usage

Consider a scenario where multiple threads try to update a shared variable:

Without Mutex (Race Condition Example)

#include <stdio.h>
#include <pthread.h>

int counter = 0; // Shared resource

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        counter++; // Race condition (no protection)
    }
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("Final Counter Value: %d\n", counter); // Inconsistent result due to race condition
    return 0;
}

Issue: If both threads modify the counter simultaneously, operations can interleave
incorrectly, leading to inconsistent results.

With Mutex (Preventing Race Condition)

#include <stdio.h>
#include <pthread.h>

int counter = 0;
pthread_mutex_t lock; // Mutex declaration

void *increment(void *arg) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   // Acquire lock
        counter++;
        pthread_mutex_unlock(&lock); // Release lock
    }
    return NULL;
}

int main() {
    pthread_t t1, t2;
    pthread_mutex_init(&lock, NULL); // Initialise mutex
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    pthread_mutex_destroy(&lock);    // Destroy mutex
    printf("Final Counter Value: %d\n", counter); // Correct value (no race condition)
    return 0;
}

Solution: The mutex ensures that only one thread modifies the counter at a time,
preventing race conditions.

Characteristics of Mutex Locks

• Mutual Exclusion: Only one thread can hold the lock at a time.
• Blocking: If a thread tries to acquire a locked mutex, it must wait.
• Explicit Locking & Unlocking: The programmer must manually acquire and release the lock.

Where Are Mutexes Used?

• Database Transactions: Preventing concurrent writes to avoid corruption.
• File Access: Ensuring that only one process writes to a file at a time.
• Multithreading in OS: Protecting shared memory between threads.

Drawbacks of Mutexes

1. Deadlocks: Two threads can wait indefinitely for each other's lock.
2. Starvation: A thread may never get a chance to acquire the lock.
3. Performance Overhead: Locking and unlocking introduce some delay.

2.3.3 Monitors in Process Synchronisation

A monitor is a high-level synchronisation construct that encapsulates shared resources and provides mutual exclusion, ensuring that only one process (or thread) accesses the critical section at a time. It is an abstraction over locks (mutexes) and condition variables, making synchronisation easier and less error-prone.

Key Features of Monitors


1. Encapsulation: Combines data (the shared resource) and the methods that operate on it inside a single construct.
2. Automatic Mutual Exclusion: Only one thread can execute a monitor's function at a time.
3. Condition Variables: Allow processes to wait until a certain condition is met before proceeding.
4. Simplifies Synchronisation: Provides a structured way to handle shared resources compared to low-level locks.

Monitor Structure

A monitor consists of:

• Shared Variables: Data that multiple processes/threads need to access.
• Procedures/Methods: Functions that operate on the shared variables.
• Synchronisation Mechanism: Ensures only one thread enters at a time.

Example of a Monitor (Pseudo-Code)


monitor SharedResource {
    int resource = 0;
    condition canAccess;

    void acquire() {
        if (resource == 1)
            wait(canAccess);  // Wait if resource is busy
        resource = 1;
    }

    void release() {
        resource = 0;
        signal(canAccess);    // Notify waiting threads
    }
}

How It Works:

• If a process wants to use the resource, it calls acquire(). If another process is using it, it waits.
• When the process is done, it calls release(), which signals waiting processes.

Monitors in Programming Languages

Some languages provide built-in monitor support:

• Java (synchronized methods and blocks)
• Python (threading.Condition())
• C++ (monitor-like behaviour using std::mutex and std::condition_variable)

Example: Monitor in Java (Synchronized Methods)

Java provides synchronized methods to implement monitors.

class MonitorExample {
    private int resource = 0;

    public synchronized void acquire() {
        while (resource == 1) {
            try { wait(); } catch (InterruptedException e) {}
        }
        resource = 1;
    }

    public synchronized void release() {
        resource = 0;
        notify(); // Wake up a waiting thread
    }
}

Benefit: The synchronized keyword ensures that only one thread executes a synchronized method of the object at a time.

Comparison: Monitors vs. Mutexes vs. Semaphores

Feature              Monitors  Mutexes  Semaphores
Encapsulation        Yes       No       No
Mutual Exclusion     Yes       Yes      Sometimes (a counting semaphore admits multiple)
Condition Variables  Yes       No       No
Ease of Use          High      Medium   Low (prone to deadlocks)

Advantages of Monitors

• Higher-Level Abstraction: Easier to use than raw mutexes and semaphores.
• Automatic Synchronisation: Ensures only one thread accesses the shared resource at a time.
• Encapsulation: Bundles synchronisation logic with the shared data.

Disadvantages of Monitors

• Limited Control: Cannot handle advanced synchronisation such as priority scheduling.
• Not Available in All Languages: Some languages require manual implementation.

Use Cases of Monitors

• Thread Synchronisation in Java (synchronized methods)
• Process Communication in Operating Systems
• Managing Shared Buffers (the Producer-Consumer Problem)
3. Cooperative Processes

Cooperative processes are processes that can affect or be affected by other processes while executing. Unlike independent processes, cooperative processes share data and resources, making synchronisation essential to avoid errors.

3.1 Advantages of Cooperative Processes

• Inter-process communication: Allows processes to share information efficiently.
• Resource sharing: Enables multiple processes to use shared resources without redundancy.
• Modular design: Helps in designing modular systems where different tasks run in parallel.

3.2 Example of Cooperative Processes

Consider a text editor and an auto-save feature running as separate processes. The auto-save
process needs access to the same file as the text editor. Without synchronisation, data
inconsistencies or corruption may occur.

4. Race Condition

A race condition occurs when multiple processes or threads access and modify shared data concurrently, leading to unpredictable and inconsistent outcomes depending on the order of execution.

4.1 Example of a Race Condition

In a banking application, multiple users may try to transfer money from the same account at the same time. If proper synchronisation mechanisms are not in place, this could result in incorrect balances or even lost transactions, leading to financial loss and customer dissatisfaction.

Consider two ATM transactions updating a bank balance:

int balance = 1000;

void withdraw(int amount) {
    if (balance >= amount) {
        balance = balance - amount;
    }
}

If two users simultaneously attempt to withdraw $500, both processes may read the balance
as $1000 before updating it, leading to an incorrect final balance of $500 instead of $0.

4.2 Preventing Race Conditions

To prevent race conditions, proper synchronisation mechanisms such as locks, semaphores, or monitors must be used.
5. Semaphores

Semaphores are synchronisation tools used to control access to shared resources. A semaphore is an integer variable that supports two atomic operations:

1. Wait (P operation): Decrements the semaphore value; if the value becomes negative, the process blocks. For example, if two users try to transfer money from the same account at the same time, a semaphore guarding the account balance lets only one of them make changes at a time, preventing discrepancies in the final balance.
2. Signal (V operation): Increments the semaphore value and wakes up a waiting process, if any exist. For instance, when a customer withdraws money from an ATM, a semaphore can ensure that only one transaction is processed at a time, so the balance is updated correctly and the account cannot be overdrawn.

5.1 Types of Semaphores

• Binary Semaphore: Takes values 0 and 1, acting as a lock. For example, a binary semaphore can be used in a multi-threaded program to limit access to a shared resource, such as a printer, ensuring only one thread can access it at a time. This prevents conflicts and ensures proper printing order.
• Counting Semaphore: Takes non-negative values, managing multiple instances of a resource. For instance, a counting semaphore can be used in a server application to control the number of clients accessing a database at once, ensuring that the database is not overloaded with requests. This helps maintain system stability and prevent data corruption.

5.2 Example of Semaphore Implementation


#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t semaphore;

void *process(void *arg) {
    int id = *(int *)arg;
    sem_wait(&semaphore);  // P operation: enter the critical section
    printf("Process %d is in the critical section\n", id);
    sem_post(&semaphore);  // V operation: leave the critical section
    return NULL;
}

int main() {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    sem_init(&semaphore, 0, 1); // Binary semaphore (initial value 1)
    pthread_create(&t1, NULL, process, &id1);
    pthread_create(&t2, NULL, process, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&semaphore);
    return 0;
}

In this example, a semaphore ensures that only one process enters the critical section at a time, avoiding race conditions.

5.3 Semaphore Implementation in Operating Systems

• Process Synchronisation in UNIX: UNIX systems use semaphores for managing shared memory.
• Producer-Consumer Problem: Semaphores coordinate data production and consumption in a shared buffer.
• Readers-Writers Problem: Semaphores ensure that readers and writers access shared resources fairly.

6. Conclusion

Process synchronisation is essential for concurrent execution to prevent race conditions and
ensure consistency in shared resources. Mechanisms such as semaphores provide a reliable
way to control access to critical sections. By implementing proper synchronisation, systems
can achieve safe and efficient multitasking.
