OS Unit 3

A critical resource is a shared component in a computer system that must be accessed by only one process or thread at a time to prevent errors and ensure data integrity. Critical sections are parts of programs that access shared resources, requiring mechanisms like semaphores and mutexes for controlled access. The document also discusses various synchronization problems, including the Readers-Writers Problem and the Dining Philosophers Problem, and highlights the importance of interprocess communication (IPC) for data sharing and coordination among processes.


Critical Resource – Definition and Explanation

A critical resource is a shared hardware or software component in a computer system that
must not be accessed by more than one process or thread at the same time.

Key Points:

 Used by multiple processes or threads.

 Needs controlled access to avoid errors, conflicts, or data corruption.

 Protecting it ensures data integrity and system stability.

Examples of Critical Resources:

Resource Type Example Why It’s Critical

I/O Devices Printer Two print jobs mixing leads to garbage output

Files Log file or config file One write may overwrite another

Shared Memory Cache or buffer Simultaneous access may corrupt data

Databases Bank account records Concurrent updates can lead to wrong balance

CPU Registers Shared among threads May lose data if both write at the same time

What Happens Without Protection?

If critical resources are not properly managed, it can lead to:

 Race Conditions

 Deadlocks

 Inconsistent outputs

 System crashes

How to Protect Critical Resources:

Operating systems use synchronization mechanisms to manage access:

 Semaphores

 Mutexes

 Monitors

 Locks

 Peterson’s Algorithm
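
To see why unprotected access is dangerous and how a lock fixes it, here is a minimal Python sketch (an illustration added to these notes, not from the original): two threads increment a shared counter, with a mutex guarding the update so no increments are lost.

```python
import threading

counter = 0                # shared critical resource
lock = threading.Lock()    # mutex protecting the critical section

def worker(n):
    global counter
    for _ in range(n):
        with lock:         # entry section: acquire the mutex
            counter += 1   # critical section: update the shared data
                           # exit section: lock released automatically

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000: every increment is preserved
```

Without the lock, the read-modify-write of `counter += 1` from two threads can interleave and lose updates; with it, the final count is always exactly 200000.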
Critical Section – Definition and Explanation

A critical section is a part of a program where the process accesses shared resources (like
shared memory, files, or printers) that can be used by only one process at a time.

Why is it Important?

 Prevents data corruption and race conditions.

 Ensures only one process/thread can execute that section at a time.

 Protects the integrity of shared resources.

Structure of a Critical Section Program:

Each process that accesses shared data must follow this pattern:

Entry Section → Critical Section → Exit Section → Remainder Section

Section Function

Entry Section Code that requests permission to enter the critical section

Critical Section Code that accesses/modifies the shared resource

Exit Section Code that releases the resource

Remainder Section Other code that doesn’t access shared resources

Critical Section Problem – 3 Conditions

To design correct critical section handling, three conditions must be satisfied:

1. Mutual Exclusion – Only one process in the critical section at a time.

2. Progress – If no process is in the critical section, selection of the next process to enter
should not be delayed unnecessarily.

3. Bounded Waiting – A limit must exist on how long a process waits to enter its critical
section.

Semaphore: Implementation and Explanation

A semaphore is a synchronization tool used in operating systems to control access to
critical resources and prevent issues like race conditions, deadlocks, and inconsistent data.

What is a Semaphore?

A semaphore is an integer variable used to signal and control access between processes.
There are two main types:

1. Counting Semaphore – Can have any integer value (used for managing a resource
with multiple instances).

2. Binary Semaphore (Mutex) – Only 0 or 1 (used for mutual exclusion; only one
process at a time).

Basic Operations:

Semaphores support two atomic operations:

Operation Description

wait(S) (also called P(S) or down(S)) Decrements the semaphore. If the result < 0, the process is blocked.

signal(S) (also called V(S) or up(S)) Increments the semaphore. If any process is waiting, it gets unblocked.

Example – Semaphore Implementation:

// Define a semaphore
int S = 1; // Initial value 1 for mutex

// Wait operation (busy-waiting version)
void wait(int *S) {
    while (*S <= 0); // Busy wait until semaphore is positive
    (*S)--;          // Enter critical section
}

// Signal operation
void signal(int *S) {
    (*S)++; // Exit critical section
}

Using Semaphore for Mutual Exclusion:

semaphore mutex = 1;

Process 1:

wait(mutex);   // Entry Section
// Critical Section
signal(mutex); // Exit Section

Process 2:

wait(mutex);   // Entry Section
// Critical Section
signal(mutex); // Exit Section

This ensures only one process at a time enters the critical section.

Issues and Solutions:

 Busy Waiting (CPU keeps checking): Resolved using blocking semaphores (e.g., in
OS-level implementations).

 Deadlock: Can occur if processes don’t follow proper ordering.

 Starvation: A process may wait indefinitely if others keep entering first.

Semaphore Advantages:

 Prevents race conditions.

 Provides mutual exclusion and synchronization.

 Supports multiple instances of a resource (counting semaphores).

Semaphore Limitations:

 Complex to implement correctly.

 Poor design may lead to deadlock or starvation.

 Requires careful use of wait() and signal().
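
The busy-waiting problem mentioned above can be avoided with a blocking semaphore. Here is a minimal Python sketch (the class name `BlockingSemaphore` is this note's own; a real OS does this in the kernel): waiting processes sleep on a condition variable instead of spinning on the CPU.

```python
import threading

class BlockingSemaphore:
    """Sketch of a blocking (non-busy-waiting) counting semaphore."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):                      # P(S) / down(S)
        with self._cond:
            while self._value <= 0:
                self._cond.wait()        # sleep instead of spinning
            self._value -= 1

    def signal(self):                    # V(S) / up(S)
        with self._cond:
            self._value += 1
            self._cond.notify()          # wake one waiting thread

# Use it as a mutex: five threads enter the critical section one at a time
sem = BlockingSemaphore(1)
shared = []

def task(tag):
    sem.wait()
    shared.append(tag)   # critical section
    sem.signal()

ts = [threading.Thread(target=task, args=(i,)) for i in range(5)]
for t in ts:
    t.start()
for t in ts:
    t.join()
print(len(shared))  # 5
```

The key difference from the busy-wait version is that `wait()` suspends the caller until `signal()` wakes it, so no CPU time is burned while blocked.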


Bounded Buffer (Producer-Consumer) Problem Using Semaphores

Goal:

 Producer adds items to a shared buffer.

 Consumer removes items from the same buffer.

 The buffer has limited size, so:

o Producer must wait if the buffer is full.

o Consumer must wait if the buffer is empty.

Semaphores Used:

Semaphore Initial Value Purpose

mutex 1 Ensures mutual exclusion for buffer access

empty N (buffer size) Counts empty slots in the buffer

full 0 Counts filled slots in the buffer

C-style Pseudocode Implementation:

#define N 5 // Size of buffer

int buffer[N];
int in = 0, out = 0;

semaphore mutex = 1; // For mutual exclusion
semaphore empty = N; // Initially, all slots are empty
semaphore full = 0;  // Initially, no item is produced

// Producer Process
void producer() {
    int item;
    while (true) {
        item = produce_item(); // Create an item
        wait(empty);           // Wait if buffer is full
        wait(mutex);           // Enter critical section
        buffer[in] = item;     // Add item to buffer
        in = (in + 1) % N;
        signal(mutex);         // Exit critical section
        signal(full);          // One more item produced
    }
}

// Consumer Process
void consumer() {
    int item;
    while (true) {
        wait(full);            // Wait if buffer is empty
        wait(mutex);           // Enter critical section
        item = buffer[out];    // Remove item from buffer
        out = (out + 1) % N;
        signal(mutex);         // Exit critical section
        signal(empty);         // One more empty slot
        consume_item(item);    // Use the item
    }
}
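
The pseudocode above can be run directly as a Python sketch using `threading.Semaphore` (a translation added for illustration, with a finite item list instead of infinite loops): the same three semaphores coordinate one producer and one consumer over a 5-slot buffer.

```python
import threading

N = 5
buffer = [None] * N
in_ptr = out_ptr = 0

mutex = threading.Semaphore(1)  # mutual exclusion for buffer access
empty = threading.Semaphore(N)  # counts empty slots
full = threading.Semaphore(0)   # counts filled slots

produced = list(range(10))      # items the producer will create
consumed = []

def producer():
    global in_ptr
    for item in produced:
        empty.acquire()              # wait(empty): block if buffer is full
        mutex.acquire()              # wait(mutex): enter critical section
        buffer[in_ptr] = item
        in_ptr = (in_ptr + 1) % N
        mutex.release()              # signal(mutex): exit critical section
        full.release()               # signal(full): one more item produced

def consumer():
    global out_ptr
    for _ in range(len(produced)):
        full.acquire()               # wait(full): block if buffer is empty
        mutex.acquire()              # wait(mutex): enter critical section
        item = buffer[out_ptr]
        out_ptr = (out_ptr + 1) % N
        mutex.release()              # signal(mutex): exit critical section
        empty.release()              # signal(empty): one more empty slot
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, ..., 9] in order
```

Because the buffer is circular and protected by `mutex`, the consumer receives every item exactly once and in FIFO order, regardless of how the two threads interleave.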
Readers-Writers Problem (Using Semaphores)

The Readers-Writers Problem is a classic synchronization problem that deals with shared
data being accessed by:

 Multiple readers (who only read and do not modify), and

 Multiple writers (who modify the data).

Problem Statement:

 Multiple readers can read simultaneously (no data modification).

 Only one writer can write at a time (no other writer or reader should be accessing
the data).

 The goal is to maximize concurrency (allow as many readers as possible without
violating safety).

Challenges:

 Prevent race conditions.

 Ensure mutual exclusion for writers.

 Avoid reader or writer starvation (depending on the variant).

Reader Process:

wait(mutex);
read_count++;
if (read_count == 1)
    wait(rw_mutex); // First reader locks out writers
signal(mutex);

// ---- Reading Section ----
read_data();
// --------------------------

wait(mutex);
read_count--;
if (read_count == 0)
    signal(rw_mutex); // Last reader unlocks writers
signal(mutex);

Writer Process:

wait(rw_mutex);

// ---- Writing Section ----
write_data();
// --------------------------

signal(rw_mutex);

Explanation:

 The first reader locks rw_mutex to block writers.

 Multiple readers can enter the critical section at the same time.

 Writers must wait until all readers are done (i.e., read_count == 0).

 mutex protects the modification of read_count to avoid race conditions.

Drawbacks:

 Writer starvation: If readers keep coming, a writer may never get a chance.

 This is a reader-priority solution.

Variants:

1. Writer-priority (gives preference to writers).

2. Fair solution (avoids starvation for both).
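
The reader-priority scheme above can be exercised with a short Python sketch (an added illustration; the `data` dictionary and thread counts are arbitrary choices): readers share `rw_mutex` through `read_count`, while each writer takes it exclusively.

```python
import threading

read_count = 0
mutex = threading.Semaphore(1)     # protects read_count
rw_mutex = threading.Semaphore(1)  # exclusive lock held by writers

data = {"value": 0}                # the shared data
snapshots = []                     # what each reader observed

def reader():
    global read_count
    mutex.acquire()
    read_count += 1
    if read_count == 1:
        rw_mutex.acquire()         # first reader locks out writers
    mutex.release()

    snapshots.append(data["value"])  # ---- reading section ----

    mutex.acquire()
    read_count -= 1
    if read_count == 0:
        rw_mutex.release()         # last reader unlocks writers
    mutex.release()

def writer():
    rw_mutex.acquire()
    data["value"] += 1             # ---- writing section ----
    rw_mutex.release()

threads = [threading.Thread(target=writer) for _ in range(3)] + \
          [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(data["value"])  # 3: each writer's update applied exactly once
```

Whatever the interleaving, the invariant holds: no reader ever observes a half-finished write, and the final value is exactly the number of writers.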


Dining Philosophers Problem Using Monitors – Explained Simply

Problem Statement:

The Dining Philosophers Problem is a classic synchronization problem. It describes five
philosophers sitting around a table, each alternating between thinking and eating. There is
one chopstick between each pair of philosophers, and a philosopher needs both the left
and right chopsticks to eat.

The main goal is to design a system so that:

 No two neighboring philosophers eat at the same time.

 No philosopher starves (waits forever to eat).

 Deadlock is avoided.

What is a Monitor?

A monitor is a high-level synchronization construct that:

 Provides mutual exclusion automatically.

 Uses condition variables (wait() and signal()) to control access.

Monitor-Based Solution Idea:

 Each philosopher has a state: THINKING, HUNGRY, or EATING.

 Use a monitor to manage these states and access to chopsticks.

 Allow only one philosopher to execute monitor code (the shared logic) at a time.

Key Concepts:

Philosopher States:

enum { THINKING, HUNGRY, EATING };

Monitor Structure:

monitor DiningPhilosophers {
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];    // One condition variable per philosopher

    void pickup(int i);   // Philosopher i tries to eat
    void putdown(int i);  // Philosopher i finishes eating
    void test(int i);     // Check if philosopher i can eat now
};

Monitor Function Definitions:

pickup(i) – Called when philosopher i wants to eat:

pickup(int i) {
    state[i] = HUNGRY;
    test(i);                // Check if both neighbors are not eating
    if (state[i] != EATING)
        self[i].wait();     // Wait if philosopher i can't eat now
}

putdown(i) – Called after philosopher i finishes eating:

putdown(int i) {
    state[i] = THINKING;
    test((i + 4) % 5);      // Try to wake up the left neighbor
    test((i + 1) % 5);      // Try to wake up the right neighbor
}

test(i) – Called to check if philosopher i can start eating:

test(int i) {
    if (state[i] == HUNGRY &&
        state[(i + 4) % 5] != EATING &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        self[i].signal();   // Allow philosopher i to eat
    }
}
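
The monitor above can be sketched in Python (a translation added for illustration: one internal lock provides the monitor's automatic mutual exclusion, and one `Condition` per philosopher plays the role of self[i]).

```python
import threading

THINKING, HUNGRY, EATING = 0, 1, 2

class DiningPhilosophers:
    """Monitor-style sketch: self.lock = the monitor's implicit mutex,
    self.self_[i] = the condition variable self[i]."""
    def __init__(self, n=5):
        self.n = n
        self.state = [THINKING] * n
        self.lock = threading.Lock()
        self.self_ = [threading.Condition(self.lock) for _ in range(n)]

    def _test(self, i):
        left, right = (i + self.n - 1) % self.n, (i + 1) % self.n
        if (self.state[i] == HUNGRY and
                self.state[left] != EATING and
                self.state[right] != EATING):
            self.state[i] = EATING
            self.self_[i].notify()          # self[i].signal()

    def pickup(self, i):
        with self.lock:                     # enter the monitor
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:  # can't eat yet
                self.self_[i].wait()        # self[i].wait()

    def putdown(self, i):
        with self.lock:                     # enter the monitor
            self.state[i] = THINKING
            self._test((i + self.n - 1) % self.n)  # wake left neighbor
            self._test((i + 1) % self.n)           # wake right neighbor

dp = DiningPhilosophers()
dp.pickup(0)             # philosopher 0 eats
dp.pickup(2)             # philosopher 2 is not 0's neighbor, so can eat too
t = threading.Thread(target=dp.pickup, args=(1,))  # 1 must wait for 0 and 2
t.start()
dp.putdown(0)
dp.putdown(2)            # putting down chopsticks wakes philosopher 1
t.join()
print(dp.state[1] == EATING)  # True
```

Note the `while` loop around `wait()`: the philosopher re-checks its state after being woken, which is the standard pattern for condition variables.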


Interprocess Communication (IPC) – Definition and Explanation

Definition:

Interprocess Communication (IPC) is the mechanism that allows processes to exchange
data and coordinate with each other, either within the same system or over a network.
Since processes run independently and have their own memory space, IPC provides a way
to share information or signals between them.

Main Purposes of IPC:

 Data sharing

 Synchronization

 Event signaling

 Resource sharing

Advantages of Interprocess Communication:

Advantage Description

1. Data Sharing Enables processes to exchange data safely and efficiently.

2. Modularity Promotes modular programming by allowing separate programs to cooperate.

3. Concurrency Supports parallel execution of tasks while enabling communication.

4. Synchronization Helps avoid race conditions and ensures consistent access to shared resources.

5. Efficiency Shared memory IPC can be much faster than other methods (like message passing).

6. Scalability Useful in distributed systems and multiprocessing environments.

How Communication Takes Place in a Shared Memory Environment

In a shared memory environment, two or more processes communicate by accessing a
common area of memory that they all can read and write to. This method is widely used
because it's fast and allows high-bandwidth communication between processes.

Step-by-Step Communication Process:

1. Creation of Shared Memory:

One process (usually the parent or server) creates a shared memory segment using OS
system calls.

 In UNIX/Linux: shmget(key, size, IPC_CREAT | permissions);

2. Attaching Shared Memory:

Processes that want to use the shared memory attach it to their address space.

 System call: shmat(shmid, NULL, 0);

3. Communication via Memory:

 One process writes data to the shared memory.

 The other process reads data from the same memory region.

Example:

 Process A (Writer) places a message in shared memory.

 Process B (Reader) reads that message from the same memory.

4. Synchronization:

Since multiple processes may access the shared memory simultaneously, synchronization
tools are used to avoid race conditions:

 Semaphores

 Mutexes

 Monitors

These tools ensure that only one process at a time accesses the shared section when
required.

5. Detaching and Removing Shared Memory:

 Once communication is done, processes detach the shared memory (shmdt()).

 The memory segment can then be deleted using shmctl() with IPC_RMID.
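
The create/attach/detach/remove lifecycle above can be sketched in Python with the `multiprocessing.shared_memory` module (an analogy added for illustration, not the System V shmget API itself): one handle creates the segment, a second handle attaches to it by name, and both sides see the same bytes.

```python
from multiprocessing import shared_memory

# "Server" creates the segment (analogous to shmget with IPC_CREAT)
creator = shared_memory.SharedMemory(create=True, size=16)

# A second handle attaches to the same segment by name (analogous to shmat)
attached = shared_memory.SharedMemory(name=creator.name)
attached.buf[:5] = b"Hello"      # writer side stores a message
msg = bytes(creator.buf[:5])     # reader side sees it immediately
print(msg)                       # b'Hello'

attached.close()                 # detach (analogous to shmdt)
creator.close()
creator.unlink()                 # remove the segment (analogous to IPC_RMID)
```

Because both handles map the same memory, no copy is made: the write through `attached` is instantly visible through `creator`, which is why shared memory is the fastest IPC method and also why it needs external synchronization.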
Interprocess Communication (IPC) on a Single Computer System

What is IPC?

Interprocess Communication (IPC) refers to the techniques and mechanisms used by
processes running on the same operating system to exchange data and coordinate actions.

Why is IPC Needed?

On a single computer:

 Processes are independent with separate memory spaces.

 To cooperate (e.g., share results or coordinate execution), they must communicate
using IPC mechanisms provided by the OS.

Common IPC Mechanisms on a Single System:

IPC Mechanism Description

Shared Memory Processes share a region of memory for fast data exchange.

Pipes One-way communication channel. Used between parent-child processes.

Named Pipes (FIFOs) Like pipes but accessible by unrelated processes using a name.

Message Queues OS-managed queue where processes send/receive structured messages.

Semaphores Used for signaling and synchronization, not direct data transfer.

Signals Lightweight notification between processes (e.g., interrupt a process).

Sockets Can be used for communication even between processes on the same system.

Memory-mapped Files Map files into memory so that changes by one process are visible to another.

Examples of IPC in Action:

Shared Memory with Semaphores:

 One process writes data to shared memory.

 Another process reads it.

 Semaphores ensure one doesn't read/write while the other is modifying the data.

Pipes:

$ ls | grep ".txt"

Here, the output of ls is piped to grep. This is IPC using anonymous pipes.

Message Queues (C Example Concept):

msgsnd() // sends a message to the queue

msgrcv() // receives a message from the queue
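
The msgsnd()/msgrcv() pattern can be sketched in Python with a thread-safe queue standing in for the kernel's message queue (an analogy added for illustration; the sender/receiver names are this note's own): the receiver blocks until a message arrives, just as msgrcv() does.

```python
import queue
import threading

mq = queue.Queue()   # stands in for the kernel-managed message queue

def sender():
    for text in (b"first", b"second"):
        mq.put(text)                   # analogous to msgsnd()

received = []

def receiver():
    for _ in range(2):
        received.append(mq.get())      # analogous to msgrcv(); blocks
                                       # until a message is available

t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [b'first', b'second']
```

As with real message queues, communication is asynchronous: the sender never waits for the receiver, and messages are buffered in order until read.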

Synchronization in IPC:

To prevent conflicts (like two processes writing at the same time), IPC is often paired
with synchronization tools like:

 Semaphores

 Mutexes

 Monitors

 Condition variables

Advantages of IPC on a Single System:

Benefit Description

Process Coordination Enables cooperation between processes

Efficiency Especially with shared memory

Modularity Enables processes to be designed independently

Synchronization Support Helps maintain data consistency


How Pipes Are Created and Used in IPC (Interprocess Communication)

PIPES – Overview

A pipe is a unidirectional communication channel that allows data to flow from one
process to another. It's one of the simplest forms of IPC, mainly used for parent-child
process communication.

Creation of a Pipe (Anonymous Pipe)

In C (UNIX/Linux systems), you can create a pipe using the pipe() system call.

Syntax:

int fd[2];
pipe(fd);

 fd[0] – for reading

 fd[1] – for writing

Example of Anonymous Pipe (Parent to Child):

int fd[2];
char buffer[32];
pipe(fd);

if (fork() == 0) {
    // Child Process
    close(fd[1]); // Close unused write end
    read(fd[0], buffer, sizeof(buffer));
} else {
    // Parent Process
    close(fd[0]); // Close unused read end
    write(fd[1], "Hello", 5);
}

Pipes like this only work between related processes (e.g., parent-child).
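
The same two-ended pipe can be demonstrated in Python with os.pipe() (a simplified sketch added for illustration: for clarity both ends are used in one process, whereas the C example forks a child).

```python
import os

r, w = os.pipe()          # like int fd[2]; pipe(fd); -> r = fd[0], w = fd[1]

os.write(w, b"Hello")     # write end: send 5 bytes into the pipe
os.close(w)               # closing the write end signals end-of-data

msg = os.read(r, 32)      # read end: receive what was written
os.close(r)
print(msg)  # b'Hello'
```

The data flows one way only, from the write descriptor to the read descriptor; after fork(), the parent and child would each close the end they do not use, exactly as in the C example above.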

Named Pipes (FIFOs)

What is a Named Pipe?

A Named Pipe, also called a FIFO (First In, First Out), is a special file used for IPC between
unrelated processes.

 It exists in the file system.

 Created using mkfifo() or mknod().

How to Create a Named Pipe:

In Terminal:

mkfifo mypipe

This creates a special file named mypipe.

In C Code:

mkfifo("mypipe", 0666); // 0666 = read & write permissions

IPC Using Named Pipes – Step-by-Step:

Process 1 (Writer):

int fd = open("mypipe", O_WRONLY);
write(fd, "Hello", 5);
close(fd);

Process 2 (Reader):

int fd = open("mypipe", O_RDONLY);
read(fd, buffer, 5);
close(fd);

Workflow of Named Pipe IPC:

Process A (Writer)              Process B (Reader)
        |                               |
open("mypipe", O_WRONLY)        open("mypipe", O_RDONLY)
        |                               |
     write() ------------------->    read()
        |                               |
     close()                         close()
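
This workflow can be run end-to-end in Python with os.mkfifo (a POSIX-only sketch added for illustration; a writer thread stands in for the second process, and the FIFO path is created in a temporary directory).

```python
import os
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "mypipe")
os.mkfifo(path, 0o666)               # create the FIFO special file

def writer():
    fd = os.open(path, os.O_WRONLY)  # blocks until a reader opens the FIFO
    os.write(fd, b"Hello")
    os.close(fd)

t = threading.Thread(target=writer)
t.start()

fd = os.open(path, os.O_RDONLY)      # blocks until the writer opens it
msg = os.read(fd, 5)
os.close(fd)
t.join()
os.remove(path)                      # the FIFO persists until deleted
print(msg)  # b'Hello'
```

Note how both open() calls block until the other end is opened: this is the "Blocking I/O" limitation listed below, and it is also what synchronizes the two unrelated parties.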

Advantages of Named Pipes:

Feature Benefit

Works between unrelated processes Useful for independent applications

Easy to use Simple file-like operations (read, write)

Persistent name Exists until deleted from the file system

Limitations:

 Unidirectional: One-way data flow (unless two pipes are used).

 Blocking I/O: If one end isn’t opened, the other blocks.

 Slower than shared memory for large data.


Message Queue Structure in the Kernel

What is a Message Queue in IPC?

A message queue is a method of interprocess communication (IPC) that allows multiple
processes to exchange messages using a queue data structure managed by the kernel.

 Each message is stored as a record.

 The OS maintains the message queue and ensures ordering and access control.

Purpose:

 Allows asynchronous communication between processes.

 Messages are stored in the queue until the receiving process reads them.

Kernel Structure of a Message Queue

The kernel structures a message queue from the following components:

Key Components in the Kernel:

Component Description

msqid_ds The main message queue structure maintained in the kernel.

msg_perm Permissions (user ID, group ID, read/write rights).

msg_first Pointer to the first message in the queue.

msg_last Pointer to the last message in the queue.

msg_type Message type (user-defined long integer, used for filtering messages).

msg_text Actual data content of the message.

msg_size Size of the message in bytes.

msg_next Pointer to the next message in the queue (linked-list structure).
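
To make the structure concrete, here is a toy Python model of these components (an illustration added to these notes: the class and its behavior are a simplification, with a Python list standing in for the msg_first/msg_next linked list). It shows the one feature that distinguishes message queues from plain pipes: msgrcv-style filtering by msg_type.

```python
from collections import namedtuple

# Each record carries the fields the kernel stores per message
Message = namedtuple("Message", ["msg_type", "msg_text"])

class MessageQueueModel:
    """Toy model of msqid_ds: an ordered chain of typed messages."""
    def __init__(self):
        self.messages = []               # msg_first .. msg_last chain

    def msgsnd(self, msg_type, msg_text):
        self.messages.append(Message(msg_type, msg_text))

    def msgrcv(self, msg_type=0):
        # msg_type == 0: take the first message regardless of type
        # msg_type  > 0: take the first message of exactly that type
        for i, m in enumerate(self.messages):
            if msg_type == 0 or m.msg_type == msg_type:
                return self.messages.pop(i)
        return None                      # queue empty / no match

mq = MessageQueueModel()
mq.msgsnd(1, b"log line")
mq.msgsnd(2, b"command")
m = mq.msgrcv(2)          # receive by type, skipping earlier messages
print(m.msg_text)         # b'command'
```

Type filtering is why msg_type is stored with every message: a receiver can pull only the messages addressed to it while unrelated messages stay queued.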

Advantages:

 Fully managed by the kernel.

 Supports asynchronous, non-blocking communication.

 Allows multiple senders and receivers.

Limitations:

 Limited queue size (can be adjusted via system settings).

 Kernel memory usage increases with large/queued messages.

 Slower than shared memory for very large data transfers.

Detailed Explanation of Kernel Data Structures for Shared Memory

In Interprocess Communication (IPC) using Shared Memory, the operating system kernel
plays a key role in managing memory regions that can be accessed by multiple processes.

To efficiently manage this, the kernel maintains internal data structures to:

 Track shared memory segments.

 Manage access rights.

 Control process attachment/detachment.

 Track memory usage and synchronization.

Main Kernel Data Structures for Shared Memory

shmid_ds – Shared Memory Descriptor Structure:

The shmid_ds structure contains metadata about each shared memory segment.
Structure Fields:

struct shmid_ds {
    struct ipc_perm shm_perm; // Permissions (UID, GID, access)
    size_t shm_segsz;         // Size of segment in bytes
    time_t shm_atime;         // Last attach time
    time_t shm_dtime;         // Last detach time
    time_t shm_ctime;         // Last change time
    pid_t shm_cpid;           // PID of creator
    pid_t shm_lpid;           // PID of last shmat()/shmdt()
    shmatt_t shm_nattch;      // Number of current attaches
};

ipc_perm – Permission Control Structure:

Embedded inside shmid_ds, this structure handles access control for the shared segment.

Key Fields:

struct ipc_perm {
    key_t __key; // Unique identifier
    uid_t uid;   // Owner's user ID
    gid_t gid;   // Owner's group ID
    mode_t mode; // Access modes (read/write)
};

Shared Memory Segment Table

The kernel maintains an internal table (array or linked list) of all shared memory
segments, each represented by a shmid_ds structure.

Entry No. Key Size (bytes) Creator PID Attach Count

0 12345 4096 1012 2

1 12346 2048 1013 1

Process Page Table Mapping

 When a process attaches to a shared memory segment using shmat(), the kernel
maps the shared memory into the process's page table.

 This allows the process to access the shared region as part of its address space.

Lifecycle of Shared Memory (Kernel View):

1. Creation: shmget() allocates a segment and creates an entry in the kernel's segment
table (shmid_ds).

2. Attachment: shmat() maps the segment into the process’s address space;
shm_nattch is incremented.

3. Access: The process reads/writes directly to the memory.

4. Detachment: shmdt() removes the mapping; shm_nattch is decremented.

5. Deletion: shmctl(..., IPC_RMID, ...) marks the segment for removal once all processes
detach.

How the Kernel Manages Synchronization:

 The kernel does not provide built-in locking for shared memory.

 User-level synchronization mechanisms like semaphores, mutexes, or monitors
must be used to prevent race conditions.

Summary Table:

Data Structure Description

shmid_ds Holds shared memory segment metadata

ipc_perm Access control info for the segment

Page Table Maps shared memory into process address space

Kernel Table Stores all active shared memory segments
