Interprocess Communication And
Synchronization
Interprocess Communication
Processes within a system may be independent or cooperating.
A cooperating process is one that can affect or be affected by other processes, including by sharing data.
Reasons for cooperating processes:
Information sharing: several users may want to access the same piece of information, which requires concurrent access to such shared resources.
Computation speedup: a task is broken down into subtasks so that each of them runs in parallel.
Modularity: a system is constructed in a modular fashion by dividing it into small functional units.
Cooperating processes need interprocess communication (IPC)
Two models of IPC
Shared memory
Message passing
Message Passing Method
In this method, processes communicate with each other without using any kind of shared memory. If two processes P1 and P2 want to communicate with each other, they proceed as follows:
Establish a communication link (if a link already exists, there is no need to establish it again).
Start exchanging messages using basic primitives.
We need at least two primitives:
– send(message, destination) or send(message)
– receive(message, host) or receive(message)
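As one concrete realization of these primitives, the sketch below uses POSIX message queues, with mq_send() and mq_receive() standing in for send() and receive(). The queue name /demo_q and the message size are arbitrary choices for this example (link with -lrt on Linux).

/* Minimal message-passing sketch using POSIX message queues. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
    mqd_t mq = mq_open("/demo_q", O_CREAT | O_RDWR, 0600, &attr);

    if (fork() == 0) {                     /* child: the receiver */
        char buf[64];
        mq_receive(mq, buf, sizeof(buf), NULL);   /* receive(message) */
        printf("received: %s\n", buf);
        return 0;
    }
    const char *msg = "hello";             /* parent: the sender */
    mq_send(mq, msg, strlen(msg) + 1, 0);  /* send(message, destination) */
    wait(NULL);
    mq_close(mq);
    mq_unlink("/demo_q");                  /* remove the queue */
    return 0;
}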
Direct Communication
Processes must name each other explicitly:
send (P, message) – send a message to process P
receive(Q, message) – receive a message from process Q
Properties of communication link
Links are established automatically
A link is associated with exactly one pair of communicating
processes
Between each pair there exists exactly one link
The link may be unidirectional, but is usually bi-directional
Indirect Communication
In this addressing, message is sent and received using
a mailbox. A mailbox can be abstractly viewed as an
object into which message may be placed. Message
may be extracted by a process.
In this type, sender and receiver processes should
share a mailbox to communicate.
The types of communication link made through a mailbox:
One-to-one link: one sender wants to communicate with one receiver, so only one link is established.
Many-to-one link: multiple senders want to communicate with a single receiver. Example: in a client-server system there is one server process and many client processes. Here, the mailbox is known as a port.
One-to-many link: one sender wants to communicate with multiple receivers, i.e., broadcasting a message.
Many-to-many link: multiple senders want to communicate with multiple receivers.
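System V message queues behave much like the mailbox described above: any process that knows the queue's key can place messages into it or extract messages from it. Below is a minimal sketch; the key value 1234 is an arbitrary choice for the example.

/* Mailbox-style (indirect) communication via a System V message queue. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/wait.h>
#include <unistd.h>

struct mymsg { long mtype; char mtext[64]; };

int main(void) {
    int qid = msgget((key_t)1234, IPC_CREAT | 0600);  /* the "mailbox" */

    if (fork() == 0) {                     /* receiver extracts a message */
        struct mymsg m;
        msgrcv(qid, &m, sizeof(m.mtext), 1, 0);
        printf("got: %s\n", m.mtext);
        return 0;
    }
    struct mymsg m = { .mtype = 1 };       /* sender places a message */
    strcpy(m.mtext, "via mailbox");
    msgsnd(qid, &m, sizeof(m.mtext), 0);
    wait(NULL);
    msgctl(qid, IPC_RMID, NULL);           /* remove the mailbox */
    return 0;
}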
Shared memory model:
Shared memory is a region of memory shared between two or more processes.
Each process has its own address space; if a process wants to pass information from its own address space to other processes, this is only possible with IPC (interprocess communication) techniques.
Communication is done via this shared memory, where changes made by one process can be seen by another process.
SYSTEM CALLS USED ARE:
ftok(): is used to generate a unique key.
shmget(): int shmget(key_t key, size_t size, int shmflg); upon successful completion, shmget() returns an identifier for the shared memory segment.
shmat(): before you can use a shared memory segment, you have to attach yourself to it using shmat(): void *shmat(int shmid, const void *shmaddr, int shmflg); shmid is the shared memory id. shmaddr specifies a particular address to use, but we should set it to NULL so that the OS chooses the address automatically.
shmdt(): when you are done with the shared memory segment, your program should detach itself from it using shmdt(): int shmdt(const void *shmaddr);
shmctl(): when you detach from shared memory, it is not destroyed. To destroy it, shmctl() is used: shmctl(shmid, IPC_RMID, NULL);
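Putting these calls together, the sketch below has a parent write a string into a shared segment which its forked child then reads; the ftok() arguments (the path "." and project id 'A') are arbitrary choices for the example.

/* Minimal shared-memory sketch using the calls described above. */
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    key_t key = ftok(".", 'A');                  /* generate a unique key */
    int shmid = shmget(key, 1024, IPC_CREAT | 0600);

    char *data = shmat(shmid, NULL, 0);          /* attach the segment */
    strcpy(data, "hello from shared memory");    /* write into it */

    if (fork() == 0) {       /* child inherits the attachment and reads */
        printf("child read: %s\n", data);
        return 0;
    }
    wait(NULL);
    shmdt(data);                                 /* detach */
    shmctl(shmid, IPC_RMID, NULL);               /* destroy the segment */
    return 0;
}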
Signals
A signal is essentially a one-bit message: it notifies a process that an event has occurred but carries no other data.
Signals are a way to asynchronously notify the
occurrence of an event:
Timer
I/O completion
Program exceptions
Other user-defined actions
Both sending and receiving are asynchronous: the sender does not wait for delivery, and the receiver is interrupted whenever the signal arrives.
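As a minimal sketch, the program below uses sigaction() to register a handler for SIGINT and is then notified asynchronously when the user presses Ctrl-C.

/* Asynchronous notification via a signal handler. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

volatile sig_atomic_t got_signal = 0;

void handler(int signo) {
    (void)signo;
    got_signal = 1;          /* do only async-signal-safe work here */
}

int main(void) {
    struct sigaction sa = { 0 };
    sa.sa_handler = handler;
    sigaction(SIGINT, &sa, NULL);   /* register the handler */

    printf("press Ctrl-C to deliver SIGINT...\n");
    while (!got_signal)
        pause();                    /* sleep until a signal arrives */
    printf("signal received asynchronously\n");
    return 0;
}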
Pipe
A pipe is a communication medium between two or more related processes.
It can be either within one process or a communication between the child and the
parent processes.
Communication can also be multi-level such as communication between the parent,
the child and the grand-child, etc.
Communication is achieved by one process writing into the pipe and other reading
from the pipe.
The pipe() system call creates two file descriptors: one for writing into the pipe and another for reading from it.
Pipe mechanism can be viewed with a real-time scenario such as filling water with
the pipe into some container, say a bucket, and someone retrieving it, say with a
mug. The filling process is nothing but writing into the pipe and the reading process
is nothing but retrieving from the pipe. This implies that one output (water) is input
for the other (bucket).
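The bucket analogy translates directly into code: in the sketch below the parent "fills" the pipe through the write descriptor and the child "retrieves" from the read descriptor.

/* Parent-child communication over a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                /* fd[0] is the read end, fd[1] the write end */

    if (fork() == 0) {       /* child: the reader */
        char buf[64];
        close(fd[1]);        /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        buf[n] = '\0';
        printf("child read: %s\n", buf);
        close(fd[0]);
        return 0;
    }
    close(fd[0]);            /* parent: the writer */
    const char *msg = "water through the pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}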
Process Synchronization
Process Synchronization
A co-operating process is one that can affect or be affected
by other processes executing in the system.
Such co-operative processes may either share a logical
address space or be allowed to share data only through files.
When co-operating processes execute concurrently, the system requires mechanisms to ensure the orderly execution of those processes.
Thus, process synchronization ensures proper co-ordination among the processes. When cooperating processes share data, process synchronization maintains data consistency.
Process synchronization can be provided by using several different tools such as semaphores, mutexes and monitors.
Concept of race condition:
When several processes access and manipulate the
same data at the same time, they may enter into a race
condition.
A race condition is a flaw in a system where the output of a process depends on the sequence or timing of other processes.
Race conditions occur among processes that share common storage, where each process can read from and write to that shared storage.
Thus, a race condition occurs due to improper synchronization of shared memory access.
Let us assume two processes P1 and P2 each want to increment the value of a global integer (say i) by 1. The following sequence of operations could take place:
int i = 10;
P1 reads the value of i from memory into a register: 10
P2 reads the value of i from memory into a register: 10
P1 increments the value in its register (10 + 1 = 11) and writes it back to memory.
P2 increments the value in its register (10 + 1 = 11) and writes it back to memory.
The resultant value is i = 11, although two increments should have produced i = 12.
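The same lost-update effect is easy to reproduce with two threads. In the sketch below (compile with -pthread and without optimization), two threads each increment a shared counter one million times with no synchronization, and the final count usually falls short of the expected 2,000,000.

/* Demonstrating a race condition on a shared counter. */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
int counter = 0;             /* shared, unprotected */

void *increment(void *arg) {
    (void)arg;
    for (int k = 0; k < N; k++)
        counter++;           /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected %d, got %d\n", 2 * N, counter);
    return 0;
}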
Critical Section Problem
Consider a system of n processes {P0, P1, ..., Pn-1}
Each process has a critical section segment of code
The process may be changing common variables, updating a table, writing a file, etc.
When one process is in its critical section, no other process may be in its critical section
The critical section problem is to design a protocol to solve this.
Each process must ask permission to enter its critical section in an entry section, may follow the critical section with an exit section, and then a remainder section
Especially challenging with preemptive kernels
Critical Section
The general structure of process Pi is shown below.
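The figure that usually accompanies this slide has the shape reconstructed in the C-style pseudocode below: an infinite loop in which the entry section, critical section, exit section and remainder section follow one another.

do {
    /* entry section: ask permission to enter */

    /* critical section: access shared data */

    /* exit section: announce that we have left */

    /* remainder section: all other work */
} while (true);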
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their critical
sections
2. Progress - If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of times
that other processes are allowed to enter their critical sections
after a process has made a request to enter its critical section and
before that request is granted
Critical Section Problem
A critical section is a code segment that can be accessed by only one process at a time. The critical section contains shared variables, access to which must be synchronized to maintain the consistency of the data.
Algorithm 1
Initially, the turn value is set to 0.
Turn value = 0 means it is the turn of process P0 to enter the critical section.
Turn value = 1 means it is the turn of process P1 to enter the critical section.
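A minimal C-style sketch of this strict-alternation scheme follows (the variable and function names are illustrative):

int turn = 0;                 /* shared: whose turn it is, 0 or 1 */

void process(int i) {         /* i is 0 for P0, 1 for P1 */
    for (;;) {
        while (turn != i)
            ;                 /* entry section: busy-wait for our turn */
        /* critical section */
        turn = 1 - i;         /* exit section: hand the turn over */
        /* remainder section */
    }
}

Its weakness is that it forces P0 and P1 to alternate strictly: a process can be kept out of its critical section even when the other process has no interest in entering, which violates the progress requirement. Peterson's Solution, next, removes this restriction.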
Peterson’s Solution
Peterson’s Solution is a classic software-based solution to the critical section problem.
In Peterson’s solution, we have two shared variables:
boolean flag[i]: initialized to FALSE; initially no process is interested in entering the critical section.
int turn: indicates whose turn it is to enter the critical section.
Peterson’s Solution preserves all three conditions :
Mutual Exclusion is assured as only one process can
access the critical section at any time.
Progress is also assured, as a process outside the
critical section does not block other processes from
entering the critical section.
Bounded Waiting is preserved as every process gets a
fair chance.
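A minimal sketch of Peterson's algorithm in C follows; the names enter_region() and leave_region() are illustrative, and on modern hardware the shared accesses would additionally need memory barriers or C11 atomics to behave as intended.

#include <stdbool.h>

volatile bool flag[2] = { false, false };  /* who wants to enter */
volatile int turn;                         /* whose turn to defer to */

void enter_region(int i) {    /* entry section for process i */
    int j = 1 - i;
    flag[i] = true;           /* declare interest */
    turn = j;                 /* politely let the other go first */
    while (flag[j] && turn == j)
        ;                     /* busy-wait (the busy waiting noted below) */
}

void leave_region(int i) {    /* exit section */
    flag[i] = false;          /* no longer interested */
}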
Disadvantages of Peterson’s Solution
It involves Busy waiting
It is limited to 2 processes.
Bakery Algorithm
The Bakery algorithm is one of the simplest known solutions to the mutual exclusion problem for the general case of N processes. It is a critical section solution for N processes, and it preserves the first-come, first-served property.
Before entering its critical section, a process receives a number. The holder of the smallest number enters the critical section.
If processes Pi and Pj receive the same number, then Pi is served first if i < j; otherwise Pj is served first.
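A C sketch of the bakery algorithm's entry and exit sections follows (the names lock and unlock are illustrative; as with Peterson's solution, real hardware would also need memory barriers):

#include <stdbool.h>

#define N 5
volatile bool choosing[N];   /* true while a process picks its number */
volatile int  number[N];     /* 0 means "not trying to enter" */

void lock(int i) {
    choosing[i] = true;
    int max = 0;             /* take one more than the largest number held */
    for (int k = 0; k < N; k++)
        if (number[k] > max) max = number[k];
    number[i] = max + 1;
    choosing[i] = false;

    for (int j = 0; j < N; j++) {
        while (choosing[j])
            ;                /* wait until j has finished choosing */
        while (number[j] != 0 &&
               (number[j] < number[i] ||
                (number[j] == number[i] && j < i)))
            ;                /* the smaller (number, id) pair goes first */
    }
}

void unlock(int i) {
    number[i] = 0;           /* leave: discard our number */
}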
Hardware solution for mutual exclusion
Disabling Interrupt
Perhaps the most obvious way of achieving mutual exclusion is to allow
a process to disable interrupts before it enters its critical section and then
enable interrupts after it leaves its critical section.
By disabling interrupts the CPU will be unable to switch processes. This
guarantees that the process can use the shared variable without another
process accessing it.
But disabling interrupts is a major undertaking. At best, the computer will not be able to service interrupts for, perhaps, a long time (who knows what a process is doing in its critical section?). At worst, the process may never re-enable interrupts, thus (effectively) crashing the computer.
Although disabling interrupts might seem a good solution its
disadvantages far outweigh the advantages.
Test And Set
Test And Set is a hardware solution to the synchronization problem.
In Test And Set, we have a shared lock variable which can take either of two values, 0 or 1. The test-and-set instruction atomically sets this variable to 1 and returns its previous value, so a process that gets back 0 knows it has acquired the lock.
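A minimal sketch of a spinlock built on this primitive, using C11's atomic_flag (whose atomic_flag_test_and_set() atomically sets the flag and returns its old value):

#include <stdatomic.h>

atomic_flag lock_var = ATOMIC_FLAG_INIT;   /* the shared lock variable */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock_var))
        ;        /* old value was 1: someone holds the lock, keep spinning */
}

void release(void) {
    atomic_flag_clear(&lock_var);          /* set the lock back to 0 */
}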
Semaphores
Semaphore is simply a variable.
This variable is used to solve the critical section problem and to
achieve process synchronization in the multiprocessing environment.
The two most common kinds of semaphores are counting semaphores
and binary semaphores.
A counting semaphore can take non-negative integer values, while a binary semaphore can take only the values 0 and 1.
What is Semaphore?
Semaphore is simply a variable that is non-negative
and shared between threads. A semaphore is a
signaling mechanism, and a thread that is waiting on a
semaphore can be signaled by another thread.
It uses two atomic operations for process synchronization:
1) wait
2) signal
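On POSIX systems the two operations map onto sem_wait() (wait) and sem_post() (signal). The sketch below uses a binary semaphore, initialized to 1, to protect a shared counter between two threads; compile with -pthread.

/* wait/signal with a POSIX binary semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t sem;
int shared = 0;

void *worker(void *arg) {
    (void)arg;
    for (int k = 0; k < 100000; k++) {
        sem_wait(&sem);      /* wait(): decrement, block while 0 */
        shared++;            /* critical section */
        sem_post(&sem);      /* signal(): increment, wake a waiter */
    }
    return NULL;
}

int main(void) {
    sem_init(&sem, 0, 1);    /* binary semaphore, initial value 1 */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d (expected 200000)\n", shared);
    sem_destroy(&sem);
    return 0;
}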