Distributed System
CS-7001
Experiment List
S. No.  Experiment                                              Date    Sign
1   Clock synchronization using Lamport’s algorithm.
2   Clock synchronization using total ordering.
3   Implementing a mutual exclusion algorithm.
4   Implementing the Bully algorithm.
5   Banker’s algorithm for distributed deadlock.
    CASE STUDIES:
6   Case study on CORBA.
7   Case study on a reservation system.
8   Case study on an inventory system.
9   Case study on a chain system.
10  Case study on online counseling.
11  Case study on university management.
12  Study of RMI.
Lab 1: Partial Ordering (Lamport’s algorithm)
Introduction:
Assign sequence numbers to messages
All cooperating processes can agree on the order of events
(contrast with physical clocks, which give the time of day)
Assume no central time source
Each system maintains its own local clock
No total ordering of events
No concept of happened-when
Lamport’s “happened-before” notation
a → b: event a happened before event b
e.g., a: the sending of a message, b: the receipt of that message
Transitive:
if a → b and b → c then a → c
Algorithm/Technique Used:
• Each message carries a timestamp of the sender’s clock
• When a message arrives:
– if receiver’s clock < message timestamp
set system clock to (message timestamp + 1)
– else do nothing
• Clock must be advanced between any two events in the same process
• Algorithm allows us to maintain time ordering among related events
– Partial ordering
Pseudo Code:
Each processor has a local logical clock: my_TS
Each event e has a timestamp e.TS
Each message m carries the timestamp m.TS of the sending event
Implementing the logical clocks: Lamport’s algorithm (pseudo-code)
---------------------------------------------------------------------
Initially, my_TS = 0

on event e do
    if e is the receipt of message m then
        my_TS := max(m.TS, my_TS) + 1;
        e.TS := my_TS
    elseif e is an internal event then
        my_TS := my_TS + 1;
        e.TS := my_TS
    elseif e is the sending of message m then
        my_TS := my_TS + 1;
        e.TS := my_TS;
        m.TS := my_TS
    end
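The same rules can be written as a short runnable sketch, given here in Python for reference; the class and method names (LamportClock, internal, send, receive) are illustrative, not part of the handout.

class LamportClock:
    """Minimal Lamport logical clock following the pseudo-code above."""

    def __init__(self):
        self.my_ts = 0                      # local logical clock

    def internal(self):
        """Internal event: just advance the clock."""
        self.my_ts += 1
        return self.my_ts                   # e.TS

    def send(self):
        """Sending event: advance the clock and stamp the outgoing message."""
        self.my_ts += 1
        return self.my_ts                   # e.TS, also carried as m.TS

    def receive(self, m_ts):
        """Receipt event: jump past the sender's timestamp if it is ahead."""
        self.my_ts = max(m_ts, self.my_ts) + 1
        return self.my_ts                   # e.TS

# Example: P advances to 1 on send; Q, still at 0, receives and jumps to 2.
p, q = LamportClock(), LamportClock()
m_ts = p.send()
print(q.receive(m_ts))                      # -> 2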
Lab 2: Total Ordering (Lamport’s algorithm)
Introduction:
a → b, b → c, …: local events sequenced
i → c, f → d, d → g, …: Lamport imposes a send → receive relationship
Unique (totally ordered) timestamps
Algorithm/Technique Used:
We can force each timestamp to be unique
Define global logical timestamp (Ti, i)
Ti represents local Lamport timestamp
i represents process number (globally unique)
E.g. (host address, process ID)
Compare timestamps:
(Ti, i) < (Tj, j)
if and only if
Ti < Tj or
Ti = Tj and i < j
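In a language with lexicographic pair comparison this rule needs no extra code; the values below are chosen only for illustration.

# (Ti, i) < (Tj, j)  iff  Ti < Tj, or Ti = Tj and i < j
# Python compares tuples left to right, which matches the rule exactly.
print((5, 2) < (6, 1))   # True: 5 < 6, so the process numbers are not consulted
print((5, 2) < (5, 3))   # True: equal Lamport times, tie broken by 2 < 3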
Pseudo Code:
Each processor has a local logical clock: my_TS
Each event e has a timestamp (e.TS, pid)
Each message m carries the timestamp (m.TS, m.pid) of the sending event
Implementing the logical clocks: Total ordering algorithm (pseudo-code)
---------------------------------------------------------------------
Initially, my_TS = 0

on event e do
    if e is the receipt of message m then
        my_TS := max(m.TS, my_TS) + 1;
        e.TS := (my_TS, pid)
    elseif e is an internal event then
        my_TS := my_TS + 1;
        e.TS := (my_TS, pid)
    elseif e is the sending of message m then
        my_TS := my_TS + 1;
        e.TS := (my_TS, pid);
        (m.TS, m.pid) := (my_TS, pid)
    end
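A runnable Python sketch of the same idea (TotalOrderClock and its methods are illustrative names): every event is stamped with a (Lamport time, pid) pair, so events from different processes always compare unambiguously.

class TotalOrderClock:
    """Lamport clock whose event timestamps are (time, pid) pairs,
    giving a total order: ties in time are broken by process id."""

    def __init__(self, pid):
        self.pid = pid
        self.my_ts = 0

    def internal(self):
        self.my_ts += 1
        return (self.my_ts, self.pid)       # e.TS

    def send(self):
        self.my_ts += 1
        return (self.my_ts, self.pid)       # carried on the message as (m.TS, m.pid)

    def receive(self, m_stamp):
        m_ts, _m_pid = m_stamp
        self.my_ts = max(m_ts, self.my_ts) + 1
        return (self.my_ts, self.pid)       # e.TS

# Two processes whose first events are concurrent still get distinct,
# totally ordered timestamps: (1, 1) < (1, 2).
p1, p2 = TotalOrderClock(1), TotalOrderClock(2)
e1, e2 = p1.internal(), p2.internal()
print(sorted([e2, e1]))                     # [(1, 1), (1, 2)]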
Lab 3: Mutual Exclusion
(Ricart’s algorithm: an extension of Lamport’s)
Introduction:
Exclusive access to a shared resource by a process must be
ensured. This exclusive access is called mutual exclusion
between processes.
The sections of a program that need exclusive access to
shared resources are referred to as critical sections.
Algorithm/Technique Used:
Distributed algorithm using reliable multicast and logical clocks
Process wants to enter critical section:
Compose message containing:
Identifier (machine ID, process ID)
Name of resource
Timestamp (totally-ordered, Lamport)
Send request to all processes in group
Wait until everyone gives permission
Enter critical section / use resource
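On the receiving side, a process replies immediately unless it is using the resource or has an earlier (totally ordered) request of its own, in which case it defers its reply. Below is a minimal Python sketch of that decision only; MutexState, on_request, and the send_reply hook are illustrative names, not the handout’s code.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class MutexState:
    pid: int
    requesting: bool = False                      # have we sent our own request?
    in_cs: bool = False                           # currently inside the critical section?
    my_request: Tuple[int, int] = (0, 0)          # (lamport_ts, pid) of our pending request
    deferred: List[int] = field(default_factory=list)  # pids owed a deferred reply

def on_request(state: MutexState, req_ts: int, req_pid: int,
               send_reply: Callable[[int], None]) -> None:
    """Decide, on an incoming request, whether to reply now or defer.

    Timestamps are compared with the totally ordered (time, pid) rule
    from Lab 2; send_reply is an assumed transport hook.
    """
    their_request = (req_ts, req_pid)
    if state.in_cs or (state.requesting and state.my_request < their_request):
        state.deferred.append(req_pid)            # our earlier request wins: defer
    else:
        send_reply(req_pid)                       # no conflict: grant permission now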
Pseudo Code:
Implementation: Lamport’s first fast lock
Code to be executed by process i. Variable Y is initialized to free.
The delay at line 7 must be long enough for any process that has already
read Y = free at line 3 to complete lines 5, 6, and (if appropriate) 10 and 11.
start:
    X := i
    if Y <> free
        goto start
    Y := i
    if X <> i
        { delay }
        if Y <> i
            goto start
    { critical section }
    Y := free
    { non-critical section }
    goto start
Lamport's second fast lock
Code to be executed by process i. Variable Y is initialized to free and
each element of the B array is initialized to false.
start:
    B[i] := true
    X := i
    if Y <> free
        B[i] := false
        repeat until Y = free
        goto start
    Y := i
    if X <> i
        B[i] := false
        for j := 1 to N
            repeat while B[j]
        if Y <> i
            repeat until Y = free
            goto start
    { critical section }
    Y := free
    B[i] := false
    { non-critical section }
    goto start
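The second fast lock translates almost line for line into Python. The sketch below is illustrative only: it relies on CPython’s interpreter to preempt the busy-wait loops, and the names FREE, acquire, and release are assumptions, not the handout’s interface.

import threading

N = 3                         # number of competing threads (arbitrary for the demo)
FREE = 0                      # sentinel meaning "no owner"; thread ids run 1..N

X = FREE                      # shared variables of Lamport's second fast lock
Y = FREE
B = [False] * (N + 1)         # B[1..N]; index 0 unused

def acquire(i):
    """Entry protocol for thread i, following the pseudo-code above."""
    global X, Y
    while True:                        # start:
        B[i] = True
        X = i
        if Y != FREE:
            B[i] = False
            while Y != FREE:           # repeat until Y = free
                pass
            continue                   # goto start
        Y = i
        if X != i:
            B[i] = False
            for j in range(1, N + 1):  # for j := 1 to N, repeat while B[j]
                while B[j]:
                    pass
            if Y != i:
                while Y != FREE:       # repeat until Y = free
                    pass
                continue               # goto start
        return                         # lock held; caller enters the critical section

def release(i):
    global Y
    Y = FREE
    B[i] = False

# Demo: each thread bumps a shared counter inside the critical section.
# May take a few seconds because the waiting is done by busy loops.
count = 0

def worker(i):
    global count
    for _ in range(100):
        acquire(i)
        count += 1                     # critical section
        release(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(1, N + 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                           # expected: 300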
Lab 4: Election Algorithm (Bully)
1. Introduction:
If each process has a unique identity and the identities are ordered, we can
elect the non-crashed node with the minimum or maximum identity.
Formally, an algorithm is an election algorithm iff:
Each process has the same local algorithm
The algorithm is decentralized
It can be initiated by any number of processes in the system
It reaches a terminal configuration, and in each reachable terminal
configuration one process is in state leader and the rest are in the
state lost
2. Algorithm/Technique Used:
When a process P determines that the current coordinator is down
because of message timeouts or failure of the coordinator to initiate a
handshake, it performs the following sequence of actions:
1. P broadcasts an election message (inquiry) to all other processes
with higher process IDs.
2. If P hears from no process with a higher process ID than it, it wins
the election and broadcasts victory.
3. If P hears from a process with a higher ID, P waits a certain
amount of time for that process to broadcast itself as the leader. If
it does not receive this message in time, it re-broadcasts the
election message.
Note that if P receives a victory message from a process with a lower
ID number, it immediately initiates a new election. This is how the
algorithm gets its name - a process with a higher ID number will
bully a lower ID process out of the coordinator position as soon as it
comes online.
3. Pseudo Code:
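A minimal Python sketch of the election sequence described above, simulated synchronously; the bully_election function and its arguments are illustrative assumptions, not the handout’s interface.

def bully_election(initiator, pids, alive):
    """One round of the Bully algorithm, simulated synchronously.

    pids  : all process ids in the group (unique, ordered)
    alive : set of ids that have not crashed
    Returns the id that ends up as coordinator.
    """
    higher = [p for p in pids if p > initiator]
    responders = [p for p in higher if p in alive]   # processes that answer the election message
    if not responders:
        return initiator                             # nobody higher answered: initiator wins
    # A higher process takes over and runs its own election in turn;
    # repeating this always ends at the highest live process.
    return bully_election(min(responders), pids, alive)

# Example: 5 was coordinator and has crashed; 2 notices and starts an election.
pids = [1, 2, 3, 4, 5]
alive = {1, 2, 3, 4}
print(bully_election(2, pids, alive))   # -> 4, the highest live process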
Lab 5: Deadlock
1. Expected outcomes:
A deadlock-safe state.
2. Introduction:
Deadlock prevention is possible because of the presence of atomic
transactions.
Deadlock avoidance is never used in distributed systems; in fact, it
is not even used in single-processor systems.
The problem is that the banker’s algorithm needs to know (in
advance) how much of each resource every process will
eventually need. This information is rarely, if ever, available.
3. Algorithm/Technique Used:
A mathematical way to determine whether a deadlock has occurred, or may occur:
the resource allocation graph G = ( V, E ), containing nodes and edges.
V: nodes consist of processes { P1, P2, P3, ... } and resource
types { R1, R2, ... }
E: edges are ( Pi, Rj ) or ( Ri, Pj )
An arrow from a process to a resource indicates the process is
requesting the resource. An arrow from a resource to a process shows
that an instance of the resource has been allocated to the process.
A process is drawn as a circle, a resource type as a square; dots inside the
square represent the number of instances of that resource type. A request
points to the square; an assignment comes from a dot.
If the graph contains no cycles, then no process is deadlocked.
If there is a cycle, then:
i. If resource types have multiple instances, then deadlock
MAY exist.
ii. If each resource type has 1 instance, then deadlock has
occurred.
4. Pseudo Code:
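A small Python sketch of the cycle test on the resource allocation graph described above; the edge-list representation and the has_cycle name are illustrative assumptions.

def has_cycle(edges):
    """Detect a cycle in a directed resource allocation graph.

    edges: iterable of (u, v) pairs, e.g. ('P1', 'R1') for a request edge
           and ('R1', 'P2') for an assignment edge.
    """
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])

    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on current DFS path / finished
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GREY
        for nxt in graph[node]:
            if color[nxt] == GREY:        # back edge: a cycle exists
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# P1 holds R1 and wants R2; P2 holds R2 and wants R1 -> cycle -> possible deadlock.
edges = [('R1', 'P1'), ('P1', 'R2'), ('R2', 'P2'), ('P2', 'R1')]
print(has_cycle(edges))   # True

# With single-instance resource types a cycle means deadlock has occurred;
# with multiple instances it only means deadlock MAY exist (see above).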