
OS: Unit 2

Process Concept in Operating Systems

A process is a program in execution. It is the fundamental unit of work in an operating system (OS).
A process is different from a program, as a program is just passive code, while a process is active,
requiring CPU execution, memory, and resources.

1. What is a Process?

 A process is an executing instance of a program.


 It consists of:
o Program Code (Text Section) – The actual instructions.
o Data Section – Global variables.
o Heap – Dynamically allocated memory.
o Stack – Function calls, local variables.
o Process Control Block (PCB) – Stores process-related information.

2. Process States

A process goes through different states during its lifecycle:

1. New – Process is created but not yet loaded into memory.


2. Ready – Process is waiting in the ready queue for CPU execution.
3. Running – Process is being executed by the CPU.
4. Waiting (Blocked) – Process is waiting for an I/O operation or event.
5. Terminated (Exit) – Process has finished execution or was forcibly stopped.

🔹 State Transition Diagram:

New → Ready → Running → (Waiting/Ready) → Terminated

3. Process Control Block (PCB)

The PCB is a data structure maintained by the OS to store process-related information.

Contents of PCB:

 Process ID (PID) – Unique identifier for the process.


 Process State – Ready, Running, Waiting, etc.
 Program Counter – Address of the next instruction.
 CPU Registers – Stores execution context.
 Memory Management Info – Page tables, segment tables, etc.
 I/O Status Info – Files and devices used by the process.
 CPU Scheduling Info – Priority, scheduling queue details.

🔹 The OS uses the PCB to manage and switch between processes efficiently.
4. Process Scheduling

The OS schedules processes to maximize CPU utilization and efficiency.

Types of Process Schedulers:

1. Long-Term Scheduler (Job Scheduler)


o Decides which processes to load into memory for execution.
o Controls the degree of multiprogramming.
2. Short-Term Scheduler (CPU Scheduler)
o Decides which process to run next on the CPU.
o Runs frequently (milliseconds).
3. Medium-Term Scheduler (Swapper)
o Temporarily removes processes from memory (swapping) to optimize performance.

5. Context Switching

🔹 What is Context Switching?

 When the CPU switches from one process to another, it saves the state of the current process
and loads the state of the next process.
 The saved data includes registers, program counter, and PCB details.

🔹 Why is it needed?

 Allows multitasking by switching between processes.


 Introduces overhead, as no useful work is done during switching.

6. Process Creation and Termination

🔹 How a Process is Created?

 A new process is created using system calls like fork() (Unix/Linux).


 A parent process creates child processes.

Process Creation Steps:

1. Assigns a unique PID to the new process.


2. Copies the parent’s PCB to the child’s PCB.
3. Allocates memory for the new process.
4. Adds the process to the Ready Queue.

🔹 How a Process is Terminated?

 A process terminates when:


o Execution completes.
o It is killed due to an error.
o The parent process terminates (cascading termination).

7. Inter-Process Communication (IPC)

🔹 What is IPC?

 When processes need to communicate (e.g., sharing data), they use Inter-Process
Communication (IPC) mechanisms.

Types of IPC:

1. Shared Memory – Processes share a memory region for data exchange.


2. Message Passing – Processes send and receive messages via the OS (send(), receive()).

Difference Between Busy Wait and Blocking Wait


1. Busy Waiting

 Definition: A process continuously checks a condition in a loop, consuming CPU cycles.


 Example: Continuously checking if a resource is available.
 Impact: Wastes CPU time as the process remains in an active state.
 Usage: Used in spinlocks, Dekker’s algorithm, Test-and-Set Locks.

2. Blocking Waiting

 Definition: A process suspends itself and allows the OS to put it in a waiting state until an
event occurs.
 Example: A process waiting for I/O or acquiring a lock.
 Impact: Saves CPU time as the process sleeps and resumes only when needed.
 Usage: Used in semaphores, condition variables, and OS scheduling.

3. Key Differences Table


Feature Busy Waiting Blocking Waiting

CPU Usage Wastes CPU cycles Saves CPU cycles

Process State Always running Suspended (sleep)

Efficiency Inefficient for long waits More efficient

Common Uses Spinlocks, polling Semaphores, system calls

Example while(!condition); sem_wait(&mutex);

Summary

Concept Description

Process A program in execution

Process States New, Ready, Running, Waiting, Terminated

PCB Stores process info (PID, registers, memory, etc.)

Schedulers Long-term (Job), Short-term (CPU), Medium-term (Swapper)

Context Switching Switching between processes (CPU overhead)

Process Creation fork() system call creates new processes

IPC Shared Memory, Message Passing

Principles of Concurrency in Operating Systems


1. What is Concurrency?

 Concurrency refers to the ability of an operating system to execute multiple processes or threads seemingly simultaneously.
 It allows efficient resource sharing and improves CPU utilization.
 Concurrency does not always mean parallel execution (which requires multiple processors/cores).

2. Challenges of Concurrency

Concurrency introduces several challenges that must be handled properly:

A. Race Conditions

 When multiple processes/threads access shared resources simultaneously, the final outcome
depends on execution order.
 Example: Two threads updating a shared bank account balance might lead to incorrect results.

B. Deadlocks

 When two or more processes wait indefinitely for each other to release resources.
 Example: Process A locks Resource X, Process B locks Resource Y, but both need each
other’s resource to proceed.

C. Starvation

 A process waits indefinitely because other higher-priority processes keep executing.

D. Critical Section Problem

 The critical section is a part of the code where shared resources are accessed.
 A solution must ensure mutual exclusion, so only one process enters the critical section at a
time.

3. Principles of Concurrency

To effectively manage concurrency, operating systems follow these principles:

1. Mutual Exclusion

 Ensures that only one process or thread can access a shared resource at a time.
 Prevents race conditions and data inconsistency.
 Implemented using:
o Locks (Mutex)
o Semaphores
o Monitors

2. Synchronization

 Processes must coordinate their execution to avoid conflicts.


 Achieved using:
o Semaphores (wait() and signal())
o Message Passing (send/receive mechanisms)
o Condition Variables

3. Interleaving and Overlapping Execution

 A process may pause execution while waiting for I/O and another process gets CPU time.
 In single-core systems, processes are executed by interleaving instructions.
 In multi-core systems, processes may run in true parallelism.

4. Process and Thread Management

 The OS must schedule processes and threads efficiently to balance system performance.
 Types of scheduling:
o Preemptive Scheduling – OS can interrupt a running process (e.g., Round Robin).
o Non-Preemptive Scheduling – Once a process starts, it runs until it completes or
blocks.

5. Deadlock Prevention & Avoidance

 The OS must detect and handle deadlocks, ensuring no two processes block each other
indefinitely.
 Methods:
o Deadlock Prevention – Restrict resource allocation to avoid circular waiting.
o Deadlock Detection & Recovery – Detect deadlocks and force termination if
necessary.
4. Techniques for Handling Concurrency

A. Locks and Mutexes (Mutual Exclusion Locks)

 Ensures only one thread can access a resource at a time.


 Example in C using Pthreads Mutex:

pthread_mutex_lock(&mutex);
// Critical Section
pthread_mutex_unlock(&mutex);

B. Semaphores

 Binary Semaphore (Mutex) – Works like a lock (0 or 1).


 Counting Semaphore – Allows multiple processes to access a limited resource.
 Example:

wait(S); // Decrease semaphore


// Critical Section
signal(S); // Increase semaphore

C. Monitors

 High-level synchronization construct that allows only one process inside the monitor at a
time.

D. Message Passing

 Processes communicate by sending and receiving messages instead of shared memory.

5. Advantages of Concurrency

✅ Increased CPU Utilization – No idle CPU time.


✅ Improved System Throughput – More processes completed in a given time.
✅ Better Resource Utilization – Efficient use of memory, I/O, and CPU.
✅ Faster Response Time – Essential for real-time and interactive systems.

6. Disadvantages of Concurrency

❌ Complex Synchronization – Requires careful handling of shared resources.


❌ Risk of Deadlocks – Processes may end up waiting forever.
❌ Increased Overhead – Context switching and scheduling increase CPU workload.

Summary Table
Principle Description

Mutual Exclusion Prevents multiple processes from accessing a shared resource simultaneously.

Synchronization Ensures correct execution order among processes.

Deadlock Handling Prevents and resolves processes waiting indefinitely.

Process Scheduling Efficiently manages CPU time for different processes.

Producer-Consumer Problem (Bounded Buffer Problem)


1. What is the Producer-Consumer Problem?

 The Producer-Consumer Problem is a classic synchronization problem in concurrent programming.
 It involves two processes (or threads):
o Producer: Generates data and puts it into a shared buffer.
o Consumer: Retrieves and processes data from the buffer.
 The challenge is ensuring proper synchronization so the producer and consumer do not
interfere with each other.

2. Bounded Buffer Problem

 The producer-consumer problem is often implemented with a finite (bounded) buffer of size
N.
 The constraints:
1. The producer must wait if the buffer is full.
2. The consumer must wait if the buffer is empty.
3. Only one process (producer or consumer) should modify the buffer at a time to
prevent race conditions.

3. Solution Using Semaphores

To handle synchronization, we use three semaphores:

1. Mutex (binary semaphore) – Ensures mutual exclusion while accessing the buffer.
2. Empty (counting semaphore) – Tracks empty slots in the buffer.
3. Full (counting semaphore) – Tracks filled slots in the buffer.

Semaphore Initial Values:

 Mutex = 1 (Only one process can modify the buffer at a time)


 Empty = N (Initially, all N slots are empty)
 Full = 0 (Initially, there are no produced items)

4. Producer-Consumer Algorithm Using Semaphores


🔹 Producer Process:

wait(empty);   // Wait if buffer is full (no empty slot)
wait(mutex);   // Lock the buffer

// Add item to buffer

signal(mutex); // Unlock the buffer
signal(full);  // Increase filled-slot count

🔹 Consumer Process:

wait(full);    // Wait if buffer is empty (no filled slot)
wait(mutex);   // Lock the buffer

// Remove item from buffer

signal(mutex); // Unlock the buffer
signal(empty); // Increase empty-slot count

5. C Implementation (Using Pthreads and Semaphores)


#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define BUFFER_SIZE 5

int buffer[BUFFER_SIZE];
int in = 0, out = 0;

sem_t empty, full;     // Counting semaphores
pthread_mutex_t mutex; // Mutex lock

void *producer(void *arg) {
    int item;
    for (int i = 0; i < 10; i++) { // Producing 10 items
        item = i;
        sem_wait(&empty);              // Wait if buffer is full
        pthread_mutex_lock(&mutex);    // Lock buffer
        buffer[in] = item;
        printf("Producer produced: %d\n", item);
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);  // Unlock buffer
        sem_post(&full);               // Signal that an item was produced
        sleep(1);
    }
    return NULL;
}

void *consumer(void *arg) {
    int item;
    for (int i = 0; i < 10; i++) { // Consuming 10 items
        sem_wait(&full);               // Wait if buffer is empty
        pthread_mutex_lock(&mutex);    // Lock buffer
        item = buffer[out];
        printf("Consumer consumed: %d\n", item);
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);  // Unlock buffer
        sem_post(&empty);              // Signal that a slot was freed
        sleep(2);
    }
    return NULL;
}

int main() {
    pthread_t prod, cons;

    sem_init(&empty, 0, BUFFER_SIZE);  // Initially, all slots are empty
    sem_init(&full, 0, 0);             // Initially, no produced items
    pthread_mutex_init(&mutex, NULL);

    pthread_create(&prod, NULL, producer, NULL);
    pthread_create(&cons, NULL, consumer, NULL);

    pthread_join(prod, NULL);
    pthread_join(cons, NULL);

    sem_destroy(&empty);
    sem_destroy(&full);
    pthread_mutex_destroy(&mutex);

    return 0;
}

6. Explanation of Implementation

 The producer waits if the buffer is full (sem_wait(empty)) and then inserts an item.
 The consumer waits if the buffer is empty (sem_wait(full)) before consuming an item.
 pthread_mutex_lock(&mutex) ensures that only one thread accesses the buffer at a time.
 Semaphores (empty and full) track the availability of buffer slots.

7. Alternative Solutions

A. Using Condition Variables (Monitors)

 Instead of semaphores, we can use condition variables with pthread_cond_wait() and pthread_cond_signal().
 Condition variables allow blocking without busy waiting.

B. Using Message Passing (IPC)

 Instead of a shared buffer, processes communicate using pipes or message queues.


8. Key Takeaways
Concept Description

Producer-Consumer Problem Synchronization problem where the producer adds and the consumer removes items from a shared buffer.

Race Condition Occurs if both access the buffer simultaneously without proper synchronization.

Mutual Exclusion Ensured using Mutex Locks (pthread_mutex_t).

Synchronization Managed using Semaphores (sem_t empty, full).

Blocking Operations sem_wait() (wait) and sem_post() (signal) prevent race conditions.

Conclusion

 The producer-consumer problem demonstrates the need for synchronization in concurrent programming.
 Using semaphores and mutex locks, we ensure proper coordination between producer and consumer.
 The bounded buffer problem is widely used in multithreading, operating systems, and real-time applications.

Mutual Exclusion in Operating Systems

1. What is Mutual Exclusion?

 Mutual Exclusion (Mutex) is a fundamental principle in concurrent programming.


 It ensures that only one process (or thread) accesses a shared resource (critical section) at
a time to prevent data inconsistency.
 Without mutual exclusion, multiple processes might modify shared data simultaneously,
leading to race conditions and unpredictable outcomes.

2. The Critical Section Problem

A critical section is a part of the program where shared resources (e.g., files, memory, database) are
accessed.
The critical section problem arises when multiple processes attempt to execute their critical sections
simultaneously, leading to data corruption or unexpected behavior.

Example of the Problem

Consider two processes P1 and P2 updating a shared bank account balance:

// Process P1
balance = balance + 100;

// Process P2
balance = balance - 50;

If P1 and P2 execute simultaneously, the final balance may be incorrect due to inconsistent updates.
The goal of mutual exclusion is to prevent such conflicts by ensuring only one process modifies
balance at a time.

3. Requirements for Mutual Exclusion

A good mutual exclusion mechanism should satisfy these conditions:

1. Mutual Exclusion:
o Only one process can be inside the critical section at a time.
2. Progress:
o If no process is in the critical section, other processes should not be unnecessarily
delayed.
3. Bounded Waiting (Fairness):
o A process waiting for access should eventually enter the critical section (avoids
starvation).
4. No Assumptions About CPU Speed or Number of Processors:
o The solution should work regardless of hardware specifications.

4. Methods to Achieve Mutual Exclusion

There are software-based and hardware-based solutions to achieve mutual exclusion.

A) Software-Based Solutions

These solutions rely on algorithms that do not require special hardware support.

1. Peterson’s Algorithm (For Two Processes)

 Uses two shared variables:


o flag[i]: Indicates if a process wants to enter the critical section.
o turn: Determines whose turn it is to enter the critical section.
 Ensures mutual exclusion, progress, and bounded waiting.

// Process i
flag[i] = true;
turn = j;
while (flag[j] && turn == j); // Wait if other process wants to enter

// Critical Section

flag[i] = false; // Exit critical section

 Problem: Works only for two processes, not scalable.


B) Hardware-Based Solutions

These solutions rely on special CPU instructions to enforce atomic operations.

1. Disabling Interrupts

 A process disables CPU interrupts before entering the critical section and enables them
afterward.
 Drawback:
o Works in single-core systems but fails in multi-core CPUs since other cores can still
execute processes.

2. Test and Set (TAS) Lock

 Uses an atomic hardware instruction to check and update a shared lock variable.

while (test_and_set(&lock)); // Busy waiting until the lock was free
// Critical Section
lock = false; // Release lock

 Problem: Causes busy waiting, wasting CPU cycles.

C) Synchronization Mechanisms (OS-Based Solutions)

1. Semaphores (Integer Counters for Resource Management)

 Semaphores are integer variables used to control access to shared resources.


 Two operations:
o wait(S): Decreases S. If S < 0, process waits.
o signal(S): Increases S, waking up a waiting process.

Example:

wait(mutex); // Lock
// Critical Section
signal(mutex); // Unlock

 Types of Semaphores:
o Binary Semaphore (Mutex Lock): 0 or 1 value (like a lock).
o Counting Semaphore: Allows multiple processes (e.g., resource pools).

2. Mutex Locks (Binary Locks for Single Process Access)

 A mutex (mutual exclusion lock) is a special binary semaphore used for exclusive access.

Example using Pthreads in C:


pthread_mutex_lock(&mutex);
// Critical Section
pthread_mutex_unlock(&mutex);

 Drawback: Can lead to deadlocks if not managed properly.

5. Issues in Mutual Exclusion

A) Race Condition

 Occurs when multiple processes read and write shared data simultaneously, leading to
inconsistent results.
 Solution: Use synchronization mechanisms like mutexes or semaphores.

B) Deadlocks

 Occurs when two or more processes wait indefinitely for resources held by each other.
 Example: Process A waits for Resource X, while Process B waits for Resource Y, but both
resources are locked.
 Solution:
o Avoid circular wait conditions.
o Use deadlock prevention techniques (e.g., priority allocation).

C) Starvation

 Occurs when a low-priority process never gets access because higher-priority processes keep
executing.
 Solution:
o Use Fair Scheduling (FIFO queues).
o Apply priority aging (increase process priority over time).

6. Summary of Mutual Exclusion Solutions

Solution Type Pros Cons

Peterson’s Algorithm No hardware needed Works only for 2 processes

Disabling Interrupts Simple Not suitable for multi-core CPUs

Test & Set Lock Works for multiple processes Causes busy waiting

Semaphores Efficient for multiple processes Can cause deadlocks if misused

Mutex Locks Simple & effective Deadlock risk

Conclusion
 Mutual exclusion is essential for concurrent programming to avoid race conditions and
ensure data integrity.
 Different techniques (software, hardware, OS-level) are used based on system requirements.
 Choosing the right synchronization method depends on system constraints like
performance, fairness, and resource availability.

Critical Section Problem in Operating Systems


1. What is the Critical Section Problem?

 The critical section is the part of a program where a process accesses shared resources
(variables, files, memory, etc.).
 The Critical Section Problem arises when multiple processes try to access shared resources
simultaneously, leading to race conditions and data inconsistency.
 The goal is to ensure only one process executes in the critical section at a time (Mutual
Exclusion).

2. Conditions of the Critical Section Problem

A correct solution must satisfy these three conditions:

1. Mutual Exclusion
o Only one process can be inside the critical section at a time.
2. Progress
o If no process is in the critical section, then other processes must not be blocked from
entering.
3. Bounded Waiting
o A process must enter the critical section within a finite time (avoiding starvation).

3. Solution Approaches

A. Software-Based Solutions

1. Peterson’s Algorithm
o Works for two processes using two shared variables:
 flag[i] = true (indicates process wants to enter).
 turn = j (gives priority to one process at a time).

do {
flag[i] = true;
turn = j;
while (flag[j] && turn == j); // Wait if other process is in CS

// Critical Section

flag[i] = false; // Exit CS
} while (true);

2. Dekker’s Algorithm
o Similar to Peterson’s but ensures mutual exclusion using a turn variable.
B. Hardware-Based Solutions

1. Disable Interrupts
o A process disables CPU interrupts before entering the critical section and enables
them after leaving.
o Issues: Not suitable for multiprocessor systems.
2. Test-and-Set (Atomic Instruction)
o Uses a hardware-supported atomic instruction to check and set a lock.

do {
while (TestAndSet(lock)); // Busy waiting

// Critical Section

lock = false; // Exit CS
} while (true);

C. OS-Based Solutions (Synchronization Mechanisms)

1. Semaphores
o A binary semaphore (mutex) ensures only one process enters the critical section at a
time.

sem_wait(mutex);
// Critical Section
sem_post(mutex);

2. Mutex Locks
o Simple lock/unlock mechanism.

pthread_mutex_lock(&mutex);
// Critical Section
pthread_mutex_unlock(&mutex);

3. Monitors
o High-level synchronization, where only one process can access the monitor’s critical
section.

4. Deadlock and Starvation in the Critical Section

 Deadlock: Occurs when two or more processes wait indefinitely for each other.
 Starvation: A low-priority process may never get a chance to enter the critical section.

Solution:

 Use Fair Scheduling (FIFO Queue)


 Set Time Limits for waiting processes
 Use Priority Inversion Handling

5. Summary Table
Approach Description

Peterson’s Algorithm Software-based, works for two processes.

Test-and-Set Lock Hardware-supported, atomic instruction.

Semaphores OS-supported, wait() and signal().

Mutex Locks Lightweight lock for mutual exclusion.

Monitors High-level synchronization in programming languages.

Conclusion

 The critical section problem is key to concurrent programming and operating system
synchronization.
 Semaphores, mutexes, and monitors are widely used to ensure safe access to shared
resources.

Dekker’s Solution to the Critical Section Problem


1. What is Dekker’s Algorithm?

 Dekker’s algorithm is the first known software-based solution to the critical section
problem.
 It ensures mutual exclusion for two processes without using special hardware instructions.
 Uses two flags (to indicate interest in entering the critical section) and a turn variable (to
decide which process gets priority).

2. Working Principle of Dekker’s Algorithm

Uses Three Key Elements:

1. Flags (flag[0], flag[1])


o Each process sets its flag to true before entering the critical section.
2. Turn Variable (turn)
o Indicates which process has priority when both want to enter.
3. Mutual Exclusion
o If both processes want to enter, priority is given to the process whose turn it is.

3. Dekker’s Algorithm (C Implementation)


#include <stdio.h>
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

#define NUM_ITERATIONS 5

volatile bool flag[2] = {false, false};
volatile int turn = 0;

void *process_0(void *arg) {
for (int i = 0; i < NUM_ITERATIONS; i++) {
flag[0] = true; // Process 0 wants to enter
while (flag[1]) { // If Process 1 also wants to enter
if (turn == 1) { // Give turn to Process 1
flag[0] = false;
while (turn == 1); // Wait for the turn to change
flag[0] = true;
}
}

// Critical Section
printf("Process 0 in critical section\n");
sleep(1);

// Exit Section
turn = 1;
flag[0] = false;
}
return NULL;
}

void *process_1(void *arg) {
for (int i = 0; i < NUM_ITERATIONS; i++) {
flag[1] = true; // Process 1 wants to enter
while (flag[0]) { // If Process 0 also wants to enter
if (turn == 0) { // Give turn to Process 0
flag[1] = false;
while (turn == 0); // Wait for the turn to change
flag[1] = true;
}
}

// Critical Section
printf("Process 1 in critical section\n");
sleep(1);

// Exit Section
turn = 0;
flag[1] = false;
}
return NULL;
}

int main() {
pthread_t t0, t1;

pthread_create(&t0, NULL, process_0, NULL);
pthread_create(&t1, NULL, process_1, NULL);
pthread_join(t0, NULL);
pthread_join(t1, NULL);

return 0;
}

4. Explanation of Code

 Each process sets its flag to true before entering the critical section.
 If both processes want to enter, the turn variable determines who goes first.
 The waiting process repeatedly checks turn until it gets access.
 After exiting, the process gives turn to the other process and sets its flag to false.

5. Properties of Dekker’s Algorithm

✅ Ensures Mutual Exclusion – Only one process enters at a time.


✅ Ensures Progress – If one process wants to enter, it is allowed unless the other is using the critical
section.
✅ Avoids Starvation – The turn variable ensures fairness.
❌ Limited to Two Processes – Does not work efficiently for more than two processes.
❌ Busy Waiting – A process continuously checks turn, leading to CPU wastage.

6. Comparison with Peterson’s Algorithm


Feature Dekker’s Algorithm Peterson’s Algorithm

Concept Uses flag array and turn variable Uses flag array and turn variable

Efficiency Uses busy waiting Less busy waiting

Suitability Works for two processes only Works for two processes only

Implementation More complex, historically important Simpler, but assumes atomic, in-order memory access

7. Summary

 Dekker’s algorithm is a software-based solution to the critical section problem for two
processes.
 It ensures mutual exclusion, progress, and bounded waiting using two flags and a turn
variable.
 It has been replaced by modern synchronization primitives like mutexes, semaphores,
and monitors.
Peterson’s Algorithm for the Critical Section Problem
1. What is Peterson’s Algorithm?

 Peterson’s Algorithm is a software-based solution for mutual exclusion in two-process synchronization.
 It ensures that only one process enters the critical section at a time using two shared
variables:
o flag[i]: Indicates if process i wants to enter the critical section.
o turn: Decides whose turn it is to enter the critical section.

2. Working Principle of Peterson’s Algorithm

Three Key Elements:

1. Mutual Exclusion – Only one process can enter the critical section at a time.
2. Progress – If one process wants to enter, it must be allowed if no other process is in the
critical section.
3. Bounded Waiting – No process waits indefinitely to enter the critical section.

3. Peterson’s Algorithm (C Implementation)


#include <stdio.h>
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

#define NUM_ITERATIONS 5

volatile bool flag[2] = {false, false}; // Flags for each process
volatile int turn = 0; // Decides which process gets priority

void *process_0(void *arg) {
for (int i = 0; i < NUM_ITERATIONS; i++) {
flag[0] = true; // Process 0 wants to enter
turn = 1; // Give turn to process 1

while (flag[1] && turn == 1); // Wait if Process 1 is in critical section

// Critical Section
printf("Process 0 in critical section\n");
sleep(1);

// Exit Section
flag[0] = false;
}
return NULL;
}

void *process_1(void *arg) {
for (int i = 0; i < NUM_ITERATIONS; i++) {
flag[1] = true; // Process 1 wants to enter
turn = 0; // Give turn to process 0
while (flag[0] && turn == 0); // Wait if Process 0 is in critical section

// Critical Section
printf("Process 1 in critical section\n");
sleep(1);

// Exit Section
flag[1] = false;
}
return NULL;
}

int main() {
pthread_t t0, t1;

pthread_create(&t0, NULL, process_0, NULL);
pthread_create(&t1, NULL, process_1, NULL);

pthread_join(t0, NULL);
pthread_join(t1, NULL);

return 0;
}

4. Explanation of Code

1. Entry Section:
o Each process sets its flag to true to indicate it wants to enter.
o It sets turn to the other process, allowing it to proceed if it also wants to enter.
o If the other process already has the turn and wants to enter, it waits.
2. Critical Section:
o The process performs its operations safely without interference.
3. Exit Section:
o The process resets its flag to false, allowing the other process to enter.

5. Properties of Peterson’s Algorithm

✅ Ensures Mutual Exclusion – Only one process enters at a time.


✅ Ensures Progress – If one process is waiting, it gets access when the other leaves.
✅ Ensures Bounded Waiting – No process starves indefinitely.
❌ Limited to Two Processes – Cannot handle more than two processes efficiently.
❌ May Not Work on Modern CPUs – Due to compiler optimizations and out-of-order execution.

6. Comparison with Dekker’s Algorithm


Feature Peterson’s Algorithm Dekker’s Algorithm

Concept Uses flag and turn Uses flag and turn

Efficiency Less busy waiting More complex busy waiting

Suitability Works for two processes only Works for two processes only

Implementation Simpler, widely used More complex, historically important

7. Summary

 Peterson’s Algorithm is a software solution for two-process mutual exclusion.


 It prevents race conditions by ensuring only one process enters the critical section at a
time.
 Modern operating systems use semaphores, mutexes, and locks instead of Peterson’s
Algorithm.

Semaphores in Operating Systems


1. What is a Semaphore?

 A semaphore is a synchronization primitive used for process synchronization and mutual exclusion.
 It is an integer variable that helps control access to shared resources.
 Semaphores prevent race conditions in concurrent programming.

2. Types of Semaphores

(A) Counting Semaphore

 Holds a value from 0 to N.


 Used to manage multiple instances of a resource.
 Example: Controlling access to a pool of database connections.

(B) Binary Semaphore (Mutex)

 Holds only 0 or 1 (like a lock).


 Used for mutual exclusion (one process enters at a time).
 Example: Ensuring one thread writes to a file at a time.

3. Operations on Semaphores

Semaphores use two main operations:

(A) Wait (P) Operation

 Decreases the semaphore value by 1.


 If the value is 0 (no resource available), the process waits: the classic definition below busy-waits, while practical implementations block the process instead.

P(S):
while (S ≤ 0) wait; // Wait if resource is unavailable
S = S - 1; // Acquire the resource

(B) Signal (V) Operation

 Increases the semaphore value by 1.


 If a process was waiting, it gets unblocked.

V(S):
S = S + 1; // Release the resource

4. Example: Semaphore in C (POSIX Threads)

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

sem_t semaphore; // Declare semaphore

void *thread_function(void *arg) {


sem_wait(&semaphore); // Wait (P operation)
printf("Thread %ld entered critical section\n", pthread_self());
sleep(2); // Simulate work
printf("Thread %ld leaving critical section\n", pthread_self());
sem_post(&semaphore); // Signal (V operation)
return NULL;
}

int main() {
pthread_t t1, t2;

sem_init(&semaphore, 0, 1); // Initialize semaphore (binary)

pthread_create(&t1, NULL, thread_function, NULL);


pthread_create(&t2, NULL, thread_function, NULL);

pthread_join(t1, NULL);
pthread_join(t2, NULL);

sem_destroy(&semaphore); // Clean up
return 0;
}

5. Real-World Applications of Semaphores

✅ Process Synchronization – Prevents race conditions in multithreading.


✅ Resource Allocation – Controls access to limited resources (e.g., printers, database connections).
✅ Producer-Consumer Problem – Ensures proper synchronization between producers and
consumers.
6. Comparison with Mutex
Feature     Semaphore                 Mutex

Value       0 to N (Counting)         0 or 1 (Binary)

Ownership   Shared among processes    Owned by one thread

Uses        Synchronization           Mutual Exclusion

Difference Between Concurrent and Parallel Execution


Feature     Concurrent Execution                                     Parallel Execution

Definition  Tasks appear to run simultaneously but actually switch   Tasks run simultaneously on multiple cores

Execution   Single core (time-sharing) or multi-core                 Multi-core or multi-processor

Speedup     Improves responsiveness                                  Improves execution speed

Example     Multithreading                                           Multiprocessing

Used In     Web servers, real-time OS                                Data processing, deep learning

Test and Set Operation in Synchronization


1. What is Test and Set?

 Test and Set (TAS) is a hardware-supported atomic operation used in synchronization.


 It is primarily used to implement locks and mutual exclusion in multiprocessor systems.
 Key Idea: It checks and modifies a variable atomically, preventing race conditions.

2. How Test and Set Works?

Test and Set operates on a lock variable (usually a Boolean or integer).

1. Read the current value of the variable.


2. Set the variable to 1 (locked state).
3. Return the old value.

If the previous value was 0, the process acquires the lock.


If it was already 1, the process must wait.

3. Test and Set Algorithm (Pseudo-code)

boolean TestAndSet(boolean *lock) {


boolean old_value = *lock; // Read current value
*lock = true; // Set to true (lock acquired)
return old_value; // Return previous value
}
4. Using Test and Set for Locking

boolean lock = false;

void enter_critical_section() {
while (TestAndSet(&lock)); // Busy-wait until lock is free
}

void exit_critical_section() {
lock = false; // Release the lock
}

5. Properties of Test and Set


Feature Description

Atomicity Ensures that read and write occur as a single, unbreakable step.

Busy Waiting If the lock is held, other processes continuously check, leading to CPU wastage.

Mutual Exclusion Only one process can enter the critical section at a time.

6. Disadvantages of Test and Set

❌ Leads to Busy Waiting (Spinlock) – Wastes CPU cycles if waiting processes keep checking.
❌ Can cause Starvation – Some processes might never get the lock if others keep acquiring it.
✅ Solution – Use Test and Set with Backoff or use more advanced techniques like Mutexes and
Semaphores.

Dining Philosophers Problem


1. What is the Dining Philosophers Problem?

 A classical synchronization problem introduced by Edsger Dijkstra.


 Scenario:
o Five philosophers sit around a circular table.
o Each philosopher alternates between thinking and eating.
o A fork is placed between each pair of philosophers.
o A philosopher needs two forks (left and right) to eat.
o The challenge is to avoid deadlock and starvation while ensuring mutual exclusion.

2. Representation of the Problem


         P1
     🍴      🍴
  P5            P2
     🍴      🍴
     P4      P3
         🍴

 Philosophers (P1–P5) are processes.


 Forks (🍴) are shared resources.
 Philosophers must pick both left and right forks to eat.

3. Issues in the Problem

🔴 Deadlock: All philosophers pick up their left fork and wait for the right fork, leading to an infinite
wait.
🟡 Starvation: Some philosophers might never get to eat if others keep eating continuously.

4. Solutions to the Dining Philosophers Problem

(A) Using Semaphores (Resource Hierarchy)


#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define N 5 // Number of philosophers


sem_t forks[N]; // Semaphore for forks

void *philosopher(void *num) {


int id = *(int *)num;

while (1) {
printf("Philosopher %d is thinking...\n", id);
sleep(1);

// Pick left fork


sem_wait(&forks[id]);
// Pick right fork
sem_wait(&forks[(id + 1) % N]);

// Eating
printf("Philosopher %d is eating...\n", id);
sleep(2);

// Release forks
sem_post(&forks[id]);
sem_post(&forks[(id + 1) % N]);

printf("Philosopher %d finished eating...\n", id);


}
}

int main() {
pthread_t philosophers[N];
int ids[N];

for (int i = 0; i < N; i++) {


sem_init(&forks[i], 0, 1);
}

for (int i = 0; i < N; i++) {


ids[i] = i;
pthread_create(&philosophers[i], NULL, philosopher, &ids[i]);
}

for (int i = 0; i < N; i++) {


pthread_join(philosophers[i], NULL);
}

return 0;
}

❌ Caution: this basic version can still deadlock. If all five philosophers pick up their left fork at the same moment, each waits forever for the right fork held by a neighbor. Breaking the symmetry, as in the orderings below, removes the circular wait.

(B) Solution Using Even-Odd Ordering

1. Even philosophers pick the right fork first, then the left.
2. Odd philosophers pick the left fork first, then the right.
✅ This prevents circular wait, avoiding deadlock.

(C) Arbitrator Solution (Using a Butler Process)

 A butler (or arbitrator) ensures that at most (N-1) philosophers attempt to eat at a time.
 If a philosopher wants to eat, they must ask the butler for permission (for example, a counting semaphore initialized to N-1).
 With at most N-1 philosophers competing for N forks, at least one can always acquire both forks, so circular wait cannot occur.
 ✅ Prevents deadlock and starvation.
 (The Chandy/Misra algorithm is a different, message-based scheme that uses "dirty" and "clean" fork tokens.)

5. Summary Table
Solution                  Deadlock-Free?   Starvation-Free?   Complexity

Naive Semaphores (A)      ❌ No            ❌ No              Medium

Even-Odd Ordering (B)     ✅ Yes           ✅ Yes             Low

Butler / Arbitrator (C)   ✅ Yes           ✅ Yes             High

Sleeping Barber Problem


1. Problem Statement

The Sleeping Barber Problem is a classic synchronization


problem that illustrates process scheduling in an operating
system.

Scenario:

 A barber has a barber chair and a waiting room


with N chairs.
 If there are no customers, the barber sleeps.
 When a customer arrives:
o If a chair is available, the customer sits and
waits.
o If all chairs are full, the customer leaves.
 When the barber finishes a haircut, they check for
waiting customers.
 If customers are present, the barber wakes up and
serves them.
 If no customers are present, the barber goes back to
sleep.

2. Challenges

🔴 Race Condition: Multiple customers arriving at the same


time may cause inconsistent queue management.
🔴 Deadlock: If not handled correctly, customers and the
barber may get stuck waiting indefinitely.
🔴 Starvation: If priority is not managed, some customers
may never get a turn.

3. Solution Using Semaphores (C


Implementation)

We use three semaphores:

 Customers (Semaphore) – Counts the number of


waiting customers.
 Barber (Semaphore) – Indicates whether the barber
is ready.
 Mutex (Semaphore) – Ensures mutual exclusion
when modifying the waiting count.

C Implementation
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

#define CHAIRS 5 // Number of waiting chairs



sem_t customers; // Counts waiting customers


sem_t barber; // Barber availability
sem_t mutex; // Ensures mutual exclusion
int waiting = 0; // Number of waiting customers

void *barber_function(void *arg) {


while (1) {
sem_wait(&customers); // Wait for a customer
sem_wait(&mutex); // Lock waiting count update
waiting--; // One customer gets a haircut
sem_post(&barber); // Barber is now ready
sem_post(&mutex); // Unlock waiting count
printf("Barber is cutting hair...\n");
sleep(2); // Simulate haircut time
}
}

void *customer_function(void *arg) {


sem_wait(&mutex); // Lock waiting count
if (waiting < CHAIRS) {
waiting++;
printf("Customer %ld is waiting...\n", pthread_self());
sem_post(&customers); // Notify barber
sem_post(&mutex); // Unlock waiting count
sem_wait(&barber); // Wait for barber
printf("Customer %ld is getting a haircut...\n",
pthread_self());
} else {
sem_post(&mutex); // Unlock and leave if no chairs
printf("Customer %ld left (No seats available)...\n",
pthread_self());
}
}

int main() {
pthread_t barber_thread, customer_threads[10];

// Initialize semaphores
sem_init(&customers, 0, 0);
sem_init(&barber, 0, 0);
sem_init(&mutex, 0, 1);

// Create barber thread


pthread_create(&barber_thread, NULL, barber_function,
NULL);

// Create customer threads


for (int i = 0; i < 10; i++) {
pthread_create(&customer_threads[i], NULL,
customer_function, NULL);
sleep(1); // Simulate customer arrival time
}

// Join customer threads


for (int i = 0; i < 10; i++) {
pthread_join(customer_threads[i], NULL);
}

return 0;
}

4. Explanation of Solution

1. Customers arrive:
o If there is a seat, they wait.
o If no seat is available, they leave.
2. Barber sleeps when no customers:
o The barber waits for a customer to arrive.
3. When a customer is present:
o The barber wakes up and starts cutting hair.
4. Mutual exclusion (mutex) ensures safe access to
the waiting count.

5. Key Features of Solution

✅ Avoids Deadlock – The barber is always either cutting


hair or sleeping.
✅ Prevents Starvation – customers are served first-come,
first-served, assuming the semaphore wakes waiters in FIFO order.
✅ Efficient Synchronization – Uses semaphores for mutual
exclusion and signaling.

6. Summary Table
Feature             Description

Barber Sleeping     If no customers, the barber sleeps.

Customers Waiting   If seats are available, customers wait.

Synchronization     Semaphores prevent race conditions.

Deadlock-Free       The barber always eventually cuts hair.

Starvation-Free     Customers are served in order of arrival.
Inter-Process Communication (IPC) Model & Schemes
1. What is Inter-Process Communication (IPC)?

 Definition: Inter-Process Communication (IPC) allows processes to exchange data and


synchronize their actions in a multitasking operating system.
 Why IPC is needed?
o Processes may need to share data or coordinate actions.
o Enables communication between independent processes.
o Used in client-server architectures, multiprocessing systems, and distributed
computing.

2. IPC Models

(A) Shared Memory Model

 Processes share a common memory space.


 One process writes data to shared memory, and another process reads it.
 Requires synchronization mechanisms (e.g., semaphores, mutexes) to avoid race
conditions.

✅ Advantages:

 Faster than message passing (no kernel intervention).


 Efficient for large data transfers.

❌ Disadvantages:

 Requires synchronization mechanisms (e.g., semaphores).


 Security risks (one process can overwrite another's data).

🔹 Example: Shared Memory in C (Using shmget, shmat, shmdt, shmctl)

(B) Message Passing Model

 Processes communicate by sending and receiving messages via the operating system.
 Does not require shared memory.
 Can be synchronous (blocking) or asynchronous (non-blocking).

✅ Advantages:

 No shared memory needed (safer).


 Useful for distributed systems (e.g., Client-Server architecture).

❌ Disadvantages:

 Slower than shared memory (involves kernel intervention).


 Message size limits can be restrictive.

🔹 Example: Message Queues, Sockets, Pipes

3. IPC Schemes (Techniques)

(1) Pipes (Anonymous & Named)

 Pipes provide one-way communication between related processes.


 Anonymous Pipes: Used between parent-child processes (e.g., shell pipelines).
 Named Pipes (FIFOs): Allow communication between unrelated processes.

🔹 Example:

// Simple pipe example in C (parent writes, child reads)
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int pipefd[2];
    char buffer[16];

    pipe(pipefd);              // pipefd[0] = read end, pipefd[1] = write end
    if (fork() == 0) {
        close(pipefd[1]);      // child: close unused write end
        read(pipefd[0], buffer, sizeof(buffer));
        printf("Child read: %s\n", buffer);
    } else {
        close(pipefd[0]);      // parent: close unused read end
        write(pipefd[1], "Hello", 6);   // include the '\0'
        wait(NULL);            // wait for the child to finish
    }
    return 0;
}

✅ Pros: Simple, fast.


❌ Cons: Unidirectional, only for local communication.

(2) Message Queues

 Provides structured message passing.


 Messages are queued in the kernel and retrieved in order.
 Used in distributed and real-time systems.

🔹 Example: POSIX message queues (mq_open, mq_send, mq_receive).

✅ Pros: No shared memory needed, can handle multiple messages.


❌ Cons: Kernel overhead, limited message size.

(3) Shared Memory

 Processes map a common memory region and communicate via it.


 Needs synchronization tools like mutexes or semaphores to prevent conflicts.

🔹 Example: Using shmget, shmat, shmdt in C.


✅ Pros: Fast, efficient for large data.
❌ Cons: Needs synchronization (prone to race conditions).

(4) Semaphores

 Used for process synchronization and resource sharing.


 A counter variable is used to control access to shared resources.
 Can be binary (mutex) or counting semaphores.

🔹 Example: POSIX semaphores (sem_init, sem_wait, sem_post).

✅ Pros: Avoids race conditions.


❌ Cons: Complex, prone to deadlocks.

(5) Sockets

 Used for communication between processes on different machines (networked IPC).


 Supports TCP (connection-based) and UDP (connectionless) communication.
 Used in client-server models.

🔹 Example: Using socket(), bind(), send(), recv() in C.

✅ Pros: Works across networks.


❌ Cons: Slower due to network latency.

4. Comparison Table of IPC Methods


IPC Method       Communication Type      Synchronization Needed?   Used In

Pipes            Unidirectional          No                        Parent-Child Processes

Message Queues   Bidirectional           No                        Distributed Systems

Shared Memory    Bidirectional           Yes                       High-Speed Local Communication

Semaphores       N/A (Synchronization)   Yes                       Synchronizing Shared Resources

Sockets          Bidirectional           No                        Network Communication

5. Summary

 Shared Memory is the fastest but requires synchronization.


 Message Passing (Queues, Pipes, Sockets) is safer and better for distributed systems.
 Semaphores help avoid race conditions but can cause deadlocks.
 Sockets allow IPC over networks, enabling client-server communication.

Process Generation in Operating Systems


1. What is Process Generation?

 Process generation refers to the creation and management of processes in an operating


system.
 A process is created when a program is loaded and executed.
 New processes are typically generated using system calls like fork() and exec() in UNIX-
based systems.

2. Process Creation Steps

1. Parent process requests creation (via system call).


2. Operating system allocates resources (memory, CPU time, etc.).
3. Process Control Block (PCB) is initialized (stores process info).
4. Process is assigned a unique Process ID (PID).
5. New process enters the Ready or Running state (based on scheduling).

3. Process Creation Methods

(A) fork() System Call (UNIX/Linux)

 Creates a child process that is a copy of the parent.


 Child process gets a new PID but shares resources with the parent.
 Returns 0 to the child and PID of the child to the parent.

🔹 Example:

#include <stdio.h>
#include <unistd.h>

int main() {
pid_t pid = fork();
if (pid == 0)
printf("Child process: PID = %d\n", getpid());
else
printf("Parent process: PID = %d\n", getpid());
return 0;
}

✅ Advantage: Simple and efficient for process creation.


❌ Disadvantage: Parent and child may compete for resources.

(B) exec() System Call


 Replaces the current process with a new program.
 Used when a child needs to run a different executable from its parent.

🔹 Example:

#include <stdio.h>
#include <unistd.h>

int main() {
execl("/bin/ls", "ls", "-l", NULL);
return 0;
}

✅ Advantage: Efficient for executing new programs.


❌ Disadvantage: The old process is replaced, not duplicated.

(C) spawn() System Call (Windows)

 Windows alternative to fork().


 Creates a new process with separate memory.

4. Process Hierarchy (Parent-Child Relationship)

 A parent process spawns one or more child processes.


 The child can further create subprocesses → forms a process tree.

🔹 Example:

Parent (PID 100)


├── Child 1 (PID 101)
├── Child 2 (PID 102)
│ ├── Subprocess 1 (PID 103)
│ ├── Subprocess 2 (PID 104)

5. Process Termination

A process can terminate due to:

1. Normal exit (exit() system call).


2. Error (Exception/Faults).
3. Killed by another process (kill command in UNIX).
4. Parent terminates (wait() system call ensures cleanup).

6. Summary Table
Method    Function

fork()    Duplicates the calling process

exec()    Replaces a process with a new program

spawn()   Creates a new process in Windows

exit()    Terminates a process

wait()    Parent waits for child termination
