Chapter 4 discusses process management, covering process concepts and creation, coordination (scheduling) of processes, communication between processes, synchronization, and deadlock. It explains why running multiple processes improves CPU utilization, multitasking, and processing speed, describes the role of threads in executing functions concurrently, and details scheduling algorithms and the communication mechanisms between processes: signals, pipes, shared memory, messages, and sockets.
Chapter 4 PROCESS MANAGEMENT

 Process concepts
 Coordination of processes
 Communication between processes
 Synchronize processes
 Deadlock
Process

 A process is a program in execution.
 Each process has an address space, an instruction pointer, and its own set of registers and stack.
 A process may require resources such as the CPU, main memory, files, and I/O devices.
 The operating system uses a scheduler to decide when to suspend the currently running process and which process to run next.
 The system contains both operating-system processes and user processes.
Purpose of running multiple processes simultaneously

 Increased CPU utilization (higher multiprogramming level)
 Increased multitasking level
 Increased processing speed

Increased CPU utilization (higher multiprogramming level)

 During execution, most processes alternate between CPU bursts (using the CPU) and I/O bursts (using I/O devices), as follows:

 If there is only one process in the system, the CPU is completely idle during that process's I/O bursts. The idea behind increasing the number of processes in the system is to utilize the CPU: while process 1 performs I/O, the operating system can use the CPU to run process 2, and so on.
Increased multitasking level

 The CPU rotates among the processes in very short time slices, giving the impression that the system is executing many processes simultaneously.
Increased processing speed

 Some problems can be processed in parallel: if the work is split into several units operating at the same time, processing time is saved.
 For example, consider evaluating the expression kq = a * b + c * d. If (a * b) and (c * d) are computed concurrently, the processing time is shorter than with sequential execution.
Thread

 A process can create multiple threads.
 Each thread performs a function, and the threads execute concurrently by sharing the CPU.
 Threads in the same process share the process's address space, but each thread has its own instruction pointer, set of registers, and stack.
 A thread can also create sub-threads, and it can be in the various states, just like a process.
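As a minimal sketch of these ideas (and of the kq = a * b + c * d example from the earlier slide), the C program below creates two POSIX threads that compute the two products concurrently while sharing the process's global variables; the variable names and worker functions are illustrative, not part of the slides.

#include <pthread.h>
#include <stdio.h>

/* shared globals: all threads of one process see the same address space */
int a = 2, b = 3, c = 4, d = 5;
int ab, cd;

void *mul_ab(void *arg) { ab = a * b; return NULL; }   /* first thread computes a*b  */
void *mul_cd(void *arg) { cd = c * d; return NULL; }   /* second thread computes c*d */

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, mul_ab, NULL);   /* the process creates two threads */
    pthread_create(&t2, NULL, mul_cd, NULL);
    pthread_join(t1, NULL);                    /* wait for both partial products  */
    pthread_join(t2, NULL);
    printf("kq = %d\n", ab + cd);              /* kq = a*b + c*d = 26             */
    return 0;
}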
Communication between threads

 Processes can communicate with each other only through mechanisms provided by the operating system.
 Threads communicate easily through the process's global variables.
 Threads can be managed by the operating system, or managed jointly by the operating system and the process.
Thread example

 A process with three threads; each thread has its own stack.
Thread implementation

 Implementation in kernel space
 Implementation in user space
 Implementation in both kernel space and user space
Thread implementation in kernel space

 The thread table is stored in kernel space.
 Threads are scheduled by the operating system.
Thread implementation in user space

 The thread table is stored in user space.
 Threads are scheduled by the process itself.
Thread implementation in both kernel space and user space

 A kernel (operating-system) thread manages several threads of the process.
Thread scheduling example

 Process quantum = 50 msec
 Thread quantum = 5 msec
 Process A has 3 threads; process B has 4 threads.

(Left figure) thread scheduling is done at the user-space level; (right figure) thread scheduling is done at the kernel-space level.
Process states
 New: The process is being created
 Running: Instructions are being executed
 Waiting: The process is waiting for some event to occur
 Ready: The process is waiting to be assigned to a processor
 Terminated: The process has finished execution
Processing modes of a process

Two processing modes:
• Privileged mode
• Non-privileged mode
Data structure of the process control block

 The process control block (PCB) is a memory area that stores descriptive information about the process:
 Process identifier (1)
 Process status (2)
 Context of the process (3):
 CPU state, processor, main memory, resources in use, resources created
 Communication information (4):
 parent process, child processes, priority
 Statistical information (5)
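A minimal C sketch of what such a PCB record might look like; the field names, sizes, and types below are illustrative and do not reflect the layout of any particular operating system.

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    int           pid;              /* (1) process identifier                   */
    proc_state_t  state;            /* (2) process status                       */
    /* (3) context of the process */
    unsigned long registers[16];    /*     saved CPU registers                  */
    unsigned long program_counter;  /*     saved instruction pointer            */
    void         *address_space;    /*     main-memory / page-table information */
    int           open_files[16];   /*     resources in use                     */
    /* (4) communication information */
    int           parent_pid;
    int           child_pids[8];
    int           priority;
    /* (5) statistical information */
    unsigned long cpu_time_used;
};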
Operations on processes

 Create process (create)


 Destroy process (destroy)
 Suspend process (suspend)
 Resume process (resume)
 Change process priority (change priority)
Create process (create)

 Assign an identifier to the new process
 Put the process into the system's management lists
 Determine the priority of the process
 Create a PCB for the process
 Allocate initial resources to the process
Destroy process (destroy)

 Reclaim the system resources allocated to the process
 Remove the process from all system management lists
 Destroy the PCB of the process
Allocate resources for the process

 Resource management block
 The objectives of the allocation technique:
 Ensure that only a valid number of processes concurrently access non-shared resources.
 Allocate the resources a process requires within an acceptable delay.
 Optimize resource use.
Process Scheduling

 The operating system coordinates processes through the scheduler and the dispatcher.
 The scheduler uses an appropriate algorithm to select the next process to run.
 The dispatcher saves the context of the suspended process and assigns the CPU to the process selected by the scheduler.
 Scheduling objectives:
 Fairness
 Efficiency
 Reasonable response time
 Short turnaround time (time spent in the system)

Process characteristics

 I/O-bound processes: many CPU bursts, each one short.
 CPU-bound processes: fewer CPU bursts, each one long.
 Interactive or batch processing
 Priority of the process
 CPU time already used by the process
 Time remaining for the process to complete
Scheduling principles

 Exclusive scheduling (non-preemptive)
 The running process monopolizes the CPU until it releases it voluntarily
 Not suitable for multi-user systems
 Non-exclusive scheduling (preemptive)
 Avoids letting one process monopolize the CPU
 Can lead to inconsistencies when accessing shared data -> appropriate synchronization methods are needed to resolve them
 Prioritization is more complex
 Incurs additional cost when switching the CPU between processes
Timing of scheduling

Scheduling decisions may be made when:
 running -> blocked
 for example, waiting for an I/O operation or waiting for a child process to finish
 running -> ready
 for example, when an interrupt occurs
 blocked -> ready
 for example, when an I/O operation finishes
 The process finishes.
 A process with a higher priority appears
 (this last case applies only to non-exclusive, i.e. preemptive, scheduling)
Scheduling lists

 Job list
 Ready list
 Waiting lists

(Diagram: the ready list feeds the CPU; an I/O or resource request moves the process to the waiting list of that resource; when its time quantum expires, the process returns to the ready list; a process waiting for an interrupt stays in a waiting list until the interrupt occurs.)
Types of scheduling

 Job scheduling
 Selects which job is loaded into main memory for execution
 Determines the degree of multiprogramming
 Operates at a low frequency
 Process scheduling
 Selects a ready process (loaded in main memory, with enough resources to run) and allocates the CPU to it.
 Operates at a high frequency (about once every 100 ms).
 Must use efficient algorithms
Scheduling algorithms

 FIFO algorithm
 Round Robin algorithm
 Priority algorithm
 Shortest-job-first algorithm (SJF)
 Multiple priority algorithm
 Lottery Scheduling Strategy
(Lottery)
FIFO algorithm

Non-preemptive (exclusive) scheduling

Process | Time to enter RL | Processing time
P1      |        0         |       24
P2      |        1         |        3
P3      |        2         |        3

Gantt chart: P1 (0-24), P2 (24-27), P3 (27-30)

 The waiting times are 0 for P1, (24 - 1) for P2, and (27 - 2) for P3.
 The average waiting time: (0 + 23 + 25) / 3 = 16 milliseconds.
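The average-waiting-time calculation above can be reproduced with a short C sketch; this is a simplified model that, as in this example, assumes each process has already arrived by the time the CPU becomes free for it.

#include <stdio.h>

int main(void)
{
    int arrival[] = {0, 1, 2};     /* time each process enters the ready list */
    int burst[]   = {24, 3, 3};    /* processing time of P1, P2, P3           */
    int n = 3, clock = 0;
    double total_wait = 0;

    for (int i = 0; i < n; i++) {          /* FIFO: serve in arrival order */
        if (clock < arrival[i]) clock = arrival[i];
        total_wait += clock - arrival[i];  /* waiting time of process i    */
        clock += burst[i];                 /* run it to completion         */
    }
    printf("average waiting time = %.2f ms\n", total_wait / n);   /* 16.00 */
    return 0;
}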
Round Robin algorithm

Process | Time to enter RL | Processing time
P1      |        0         |       24
P2      |        1         |        3
P3      |        2         |        3

Quantum: 4 milliseconds

Gantt chart: P1 (0-4), P2 (4-7), P3 (7-10), P1 (10-14), P1 (14-18), P1 (18-22), P1 (22-26), P1 (26-30)

 The average waiting time: (6 + 3 + 5) / 3 = 4.66 milliseconds.
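The Gantt chart and the result above can be checked with a small Round Robin simulation in C; it is a sketch that assumes processes arriving during a time slice join the ready queue before the preempted process does, which matches this example.

#include <stdio.h>

int main(void)
{
    int burst[]  = {24, 3, 3};             /* remaining time of P1, P2, P3 */
    int full[]   = {24, 3, 3};             /* original burst times         */
    int arrive[] = {0, 1, 2};
    int finish[3], n = 3, quantum = 4, clock = 0;
    int queue[64], head = 0, tail = 0;
    int next_arrival = 1;

    queue[tail++] = 0;                     /* P1 is ready at time 0 */
    while (head < tail) {
        int p = queue[head++];
        int slice = burst[p] < quantum ? burst[p] : quantum;
        clock += slice;
        burst[p] -= slice;
        /* processes that arrived during this slice join the queue first */
        while (next_arrival < n && arrive[next_arrival] <= clock)
            queue[tail++] = next_arrival++;
        if (burst[p] > 0) queue[tail++] = p;   /* preempted: back of the queue */
        else              finish[p] = clock;   /* finished                     */
    }
    double wait = 0;
    for (int i = 0; i < n; i++)
        wait += finish[i] - arrive[i] - full[i];   /* waiting = turnaround - burst */
    printf("average waiting time = %.2f ms\n", wait / n);   /* 4.67 (14/3, rounded to 4.66 on the slide) */
    return 0;
}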
Priority algorithm

 Priority: P2 > P3 > P1

Process | Time to enter RL | Priority | Processing time
P1      |        0         |    3     |       24
P2      |        1         |    1     |        3
P3      |        2         |    2     |        3

Non-preemptive priority algorithm
Gantt chart: P1 (0-24), P2 (24-27), P3 (27-30)
The average waiting time: (0 + 23 + 25) / 3 = 16 milliseconds

Preemptive priority algorithm
Gantt chart: P1 (0-1), P2 (1-4), P3 (4-7), P1 (7-30)
The average waiting time: (6 + 0 + 2) / 3 = 2.7 milliseconds
Shortest-job-first algorithm (SJF)

 t: the processing time required by the process
 Priority p = 1/t

Process | Time to enter RL | Processing time
P1      |        0         |        6
P2      |        1         |        8
P3      |        2         |        4
P4      |        3         |        2

Non-preemptive SJF
Gantt chart: P1 (0-6), P4 (6-8), P3 (8-12), P2 (12-20)
The average waiting time: (0 + 11 + 6 + 3) / 4 = 5 milliseconds

Preemptive SJF (shortest remaining time first)
Gantt chart: P1 (0-3), P4 (3-5), P1 (5-8), P3 (8-12), P2 (12-20)
The average waiting time: (2 + 11 + 6 + 0) / 4 = 4.75 milliseconds
Multiple priority algorithm

The ready list is divided into several lists. Each list contains processes of the same priority and has its own scheduling algorithm.
Multilevel Feedback scheduling algorithm
Lottery Scheduling Strategy (Lottery)

 Each process is given one or more "lottery tickets".
 The OS draws a "winning" ticket; the process that holds this ticket receives the CPU.
 A preemptive algorithm.
 Simple, low cost, and fair to processes.
COMMUNICATION BETWEEN PROCESSES

 Purpose:
 to share information (files, memory, ...)
 to cooperate in completing a job
 Mechanisms:
 Communication by signal (Signal)
 Communication by pipe (Pipe)
 Communication via shared memory (Shared memory)
 Communication by message (Message)
 Communication by socket
Communication by signal (Signal)

Signal  | Description
SIGINT  | The user pressed Ctrl-C to interrupt the process
SIGILL  | The process executed an invalid instruction
SIGKILL | Request to terminate the process
SIGFPE  | Arithmetic error, e.g. division by zero
SIGSEGV | The process accessed an invalid memory address
SIGCLD  | A child process has terminated

Signals can be sent by:
- Hardware
- The operating system
- A process
- The user

When a process receives a signal it can:
- Call a signal-handling function.
- Handle the signal in its own way.
- Ignore the signal.
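A minimal C sketch of a process installing a handler for SIGINT; the handler name and message are illustrative.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_sigint(int signo)
{
    /* called asynchronously when the user presses Ctrl-C */
    write(STDOUT_FILENO, "caught SIGINT\n", 14);
}

int main(void)
{
    signal(SIGINT, on_sigint);   /* register the signal-handling function      */
    /* signal(SIGINT, SIG_IGN) would ignore the signal instead                 */
    while (1)
        pause();                 /* wait for signals to arrive                 */
}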
Communication by pipe (Pipe)

 Data transfer: a stream of bytes (FIFO).
 A process reading from the pipe is blocked if the pipe is empty, and waits until the pipe has data to read.
 A process writing to the pipe is blocked if the pipe is full, and waits until the pipe has room to store data.
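A minimal sketch of parent/child communication through a pipe in C, using the standard pipe/fork/read/write calls; the message text is just an example.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char buf[32];

    pipe(fd);                           /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                  /* child: the writing process        */
        close(fd[0]);
        write(fd[1], "hello", 6);       /* would block only if the pipe were full */
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                       /* parent: the reading process       */
    read(fd[0], buf, sizeof buf);       /* blocks until the pipe has data    */
    printf("parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);
    return 0;
}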
Communication via shared memory (shared memory)

 The shared memory exists independently of the processes.
 A process must attach the shared memory segment to its own address space.

Shared memory is:
- the fastest method of exchanging data between processes;
- it must be protected by synchronization mechanisms;
- it cannot be applied effectively in distributed systems.
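A minimal System V shared-memory sketch in C; shmget/shmat/shmdt/shmctl are standard calls, while the key, size, and stored string are illustrative. In real code the segment would still need to be protected by a synchronization mechanism.

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* create (or open) a 4 KB shared segment identified by key 0x1234 */
    int shmid = shmget(0x1234, 4096, IPC_CREAT | 0666);

    /* attach the shared memory into this process's address space */
    char *mem = (char *)shmat(shmid, NULL, 0);

    strcpy(mem, "data visible to every process that attaches this segment");
    printf("%s\n", mem);

    shmdt(mem);                        /* detach from this address space */
    shmctl(shmid, IPC_RMID, NULL);     /* remove the segment when done   */
    return 0;
}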
Communication by message (Message)

 Establish a link between the two processes
 Use the send and receive functions provided by the operating system to exchange messages
 Ways to communicate by message:
 Indirect communication
 Send(A, message): send a message to port (mailbox) A
 Receive(A, message): receive a message from port (mailbox) A
 Direct communication
 Send(P, message): send a message to process P
 Receive(Q, message): receive a message from process Q
Example: producer-consumer problem

void producer()
{  while (1)
   {  create_product();
      send(consumer, product);      // send the product to the consumer
   }
}
void consumer()
{  while (1)
   {  receive(producer, product);   // the consumer waits to receive a product
      consume(product);
   }
}
Communication by socket

 Each process needs to create its own socket.
 Each socket is bound to a different port.
 Read/write operations on the sockets are the exchange of data between the two processes.
 Ways to communicate by socket:
 Mail-style communication (the socket acts like a post office)
 The sending process writes data to its socket; the data is transferred to the socket of the receiving process.
 The receiving process receives the data by reading it from its own socket.
 Telephone-style communication (the socket acts like a switchboard)
 The two processes must establish a connection before transmitting/receiving data, and the connection is maintained throughout the data transfer.
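A minimal connection-oriented ("telephone-style") client sketch in C using the standard BSD socket calls; the address 127.0.0.1 and port 5000 are placeholders, and a matching server is assumed to exist.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);     /* each process creates its own socket */

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(5000);             /* the peer's socket is bound to a port */
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);

    if (connect(s, (struct sockaddr *)&server, sizeof server) == 0) {
        char buf[64];
        write(s, "hello", 5);                    /* writing = sending data to the peer   */
        int n = read(s, buf, sizeof buf - 1);    /* reading = receiving data             */
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
    }
    close(s);
    return 0;
}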
PROCESS SYNCHRONIZATION

 Ensure that concurrently executing processes do not corrupt each other's results.
 Mutual exclusion requirement:
 At any one time, only one process may access a non-shareable resource.
 Synchronization requirement:
 processes need to cooperate to get the job done.
 Two "synchronization problems" need to be solved:
 the mutual-exclusion problem (the "critical section problem")
 the coordination problem.
Critical Section

 The critical section is the segment of a process's code in which it accesses shared resources (variables, files, ...); errors may occur if several processes execute it at the same time.
 Example:
 if (account >= withdrawal_money) account = account - withdrawal_money;
 else print "cannot withdraw money!";
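As a sketch of why this code is a critical section, the C program below lets two threads run the withdrawal code on the same account without any mutual exclusion; depending on the interleaving, both can pass the balance check and the account can go negative. The variable names and the artificial delay are ours.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int account = 100;                     /* shared balance */

void *withdraw(void *arg)
{
    int amount = 100;
    if (account >= amount) {           /* both threads may pass this test ...      */
        usleep(1000);                  /* widen the window to make the race likely */
        account = account - amount;    /* ... and both then subtract               */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, withdraw, NULL);
    pthread_create(&t2, NULL, withdraw, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("final balance = %d\n", account);   /* may print -100 */
    return 0;
}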
Conditions required when solving the critical section problem

1. No two processes may be in the critical section at the same time.
2. No assumption is made about the relative speed of the processes, nor about the number of processors.
3. A process outside the critical section must not prevent other processes from entering the critical section.
4. No process must wait indefinitely to enter the critical section.
Groups of synchronization solutions
 Busy Waiting
 Sleep And Wakeup
 Semaphore
 Monitor
 Message.
Busy Waiting

 Software solutions
 Algorithm using a flag variable
 Algorithm using an alternating (turn) variable
 Peterson algorithm
 Hardware solutions
 Disabling interrupts
 Using the TSL instruction (Test and Set Lock)
Algorithm using a flag variable (works for many processes)

 lock == 0: there is no process in the critical section.
 lock == 1: there is a process in the critical section.

lock = 0;
while (1)
{
   while (lock == 1);       // busy-wait while another process is in the critical section
   lock = 1;
   critical_section();
   lock = 0;
   noncritical_section();
}

 Violated condition: "no two processes may be in the critical section at the same time" (both processes can pass the while test before either one sets lock = 1).
Algorithm using an alternating (turn) variable (for 2 processes)

 Two processes A and B share a turn variable:
 turn == 0: process A may enter the critical section
 turn == 1: process B may enter the critical section

// Process A
while (1)
{
   while (turn == 1);       // wait while it is B's turn
   critical_section();
   turn = 1;                // hand the turn to B
   noncritical_section();
}

// Process B
while (1)
{
   while (turn == 0);       // wait while it is A's turn
   critical_section();
   turn = 0;                // hand the turn to A
   noncritical_section();
}

 The two processes can never be in the critical section at the same time, because at any moment turn has only one value.
 Violated condition: a process can be prevented from entering the critical section by another process that is not in its critical section (strict alternation).
Peterson algorithm (for 2 processes)

 Two shared variables: turn and flag[2] (type int).
 flag[0] = flag[1] = FALSE
 turn is initialized to 0 or 1.
 flag[i] = TRUE (i = 0, 1) means that Pi wants to enter the critical section; turn = i means that it is Pi's turn.
 To enter the critical section:
 Pi sets flag[i] = TRUE to indicate that it wants to enter the critical section.
 Pi sets turn = j, offering process Pj the chance to enter first.
 If Pj is not interested in entering the critical section (flag[j] = FALSE), Pi can enter.
 If flag[j] = TRUE, Pi has to wait until flag[j] = FALSE.
 When Pi leaves the critical section, it resets flag[i] to FALSE.
Peterson algorithm (code)

// Process P0 (i = 0)
while (TRUE)
{
   flag[0] = TRUE;                         // P0 announces that it wants to enter the critical section
   turn = 1;                               // offer P1 the chance to go first
   while (turn == 1 && flag[1] == TRUE);   // if P1 wants to enter, P0 waits
   critical_section();
   flag[0] = FALSE;                        // P0 leaves the critical section
   noncritical_section();
}

// Process P1 (i = 1)
while (TRUE)
{
   flag[1] = TRUE;                         // P1 announces that it wants to enter the critical section
   turn = 0;                               // offer P0 the chance to go first
   while (turn == 0 && flag[0] == TRUE);   // if P0 wants to enter, P1 waits
   critical_section();
   flag[1] = FALSE;                        // P1 leaves the critical section
   noncritical_section();
}
Disabling interrupts

 The process disables all interrupts before entering the critical section and re-enables them when leaving the critical section.
 Not safe for the system.
 Does not work on systems with multiple processors.
Using the TSL instruction (Test and Set Lock)

The TSL instruction tests and updates a memory word (the lock variable) in a single atomic, non-interruptible operation.

boolean Test_And_Set_Lock(boolean *lock)
{
   boolean temp = *lock;    // return the original value of the lock variable
   *lock = TRUE;
   return temp;
}

boolean lock = FALSE;       // shared variable

while (TRUE)
{
   while (Test_And_Set_Lock(&lock));   // busy-wait until the lock was free
   critical_section();
   lock = FALSE;
   noncritical_section();
}

 Works on systems with multiple processors.
Solution Group: SLEEP and WAKEUP

 Using the SLEEP and WAKEUP command


 Using Semaphore structure
 Using Monitors structure
 Using Message
Using the SLEEP and WAKEUP commands

 SLEEP: the calling process blocks itself and gives the CPU back to another process.
 WAKEUP: the OS moves a blocked process back to the ready list so that it can continue execution.
 A process that is not eligible to enter the critical section calls SLEEP to block itself until another process calls WAKEUP to free it.
 A process calls WAKEUP when leaving the critical section to wake up a waiting process, giving that process the opportunity to enter the critical section.
Using the SLEEP and WAKEUP commands

int busy = FALSE;    // TRUE: a process is in the critical section, FALSE otherwise
int blocked = 0;     // counts the number of blocked processes
while (TRUE)
{
   if (busy)
   {
      blocked = blocked + 1;
      sleep();
   }
   else busy = TRUE;
   critical_section();
   busy = FALSE;
   if (blocked > 0)
   {
      wakeup();               // wake up one waiting process
      blocked = blocked - 1;
   }
   noncritical_section();
}
Using the Semaphore structure

 A semaphore s has the following properties:
 An integer value e (initialized to a non-negative value);
 A queue f: the list of processes waiting on the semaphore s.
 Two operations on the semaphore s:
 Down(s): e = e - 1.
 If e < 0, the process must wait in f (sleep); otherwise the process continues.
 Up(s): e = e + 1.
 If e <= 0, select a process from f to continue execution (wake it up).
Using the Semaphore structure

P is the process performing the Down(s) or Up(s) operation.

Down(s)
{  e = e - 1;
   if (e < 0)
   {  status(P) = blocked;      // switch P to the blocked (waiting) state
      enter(P, f);              // put P into the queue f
   }
}
Up(s)
{  e = e + 1;
   if (e <= 0)
   {
      exit(Q, f);               // take a process Q out of the queue f
      status(Q) = ready;        // move Q to the ready state
      enter(Q, ready-list);     // put Q on the system's ready list
   }
}
Using the Semaphore structure

 The operating system must implement the Down and Up operations so that each executes atomically (with mutual exclusion).
 The structure of the semaphore:
class semaphore
{
   int e;
   PCB *f;       // the semaphore's own waiting list
public:
   down();
   up();
};
 |e| = the number of processes waiting on f (when e < 0).
Solving the critical section problem with semaphores

 Use a semaphore s whose value e is initialized to 1.
 All processes use the same program structure:

semaphore s = 1;    // e of semaphore s is 1
while (1)
{
   Down(s);
   critical_section();
   Up(s);
   noncritical_section();
}
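The same structure written with POSIX semaphores in C, where sem_wait corresponds to Down and sem_post to Up; the shared counter and loop bounds are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t s;                 /* semaphore protecting the critical section */
long  counter = 0;       /* shared variable                           */

void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        sem_wait(&s);    /* Down(s): enter the critical section */
        counter++;       /* critical section                    */
        sem_post(&s);    /* Up(s): leave the critical section   */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&s, 0, 1);  /* e initialized to 1 */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* always 200000 */
    return 0;
}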
Solving the critical section problem with semaphores

Example:

Step | Process | Operation | e(s) | In CS | Queue f(s)
  1  |    A    |   Down    |   0  |   A   |
  2  |    B    |   Down    |  -1  |   A   | B
  3  |    C    |   Down    |  -2  |   A   | B, C
  4  |    A    |    Up     |  -1  |   B   | C
Solving coordination problems with semaphores

 Two processes P1 and P2: P1 executes Job_1 and P2 executes Job_2.
 Job_1 must finish before Job_2 starts. Let P1 and P2 share a semaphore s initialized with e(s) = 0:

semaphore s = 0;    // shared by the two processes
P1:
{
   job1();
   Up(s);           // wake up P2
}
P2:
{
   Down(s);         // wait for P1 to wake it up
   job2();
}

If Down and Up are placed incorrectly or omitted, the synchronization fails.
Problems when using semaphores

If a process forgets to call Up(s), then after it leaves the critical section no other process will ever be allowed into the critical section!

e(s) = 1;
while (1)
{
   Down(s);
   critical_section();
   // Up(s) is missing here
   noncritical_section();
}
Problems when using semaphores

Using semaphores can cause deadlock.

Two processes P1 and P2 share two semaphores s1 = s2 = 1.

P1:
{
   down(s1); down(s2);
   ...
   up(s1); up(s2);
}
P2:
{
   down(s2); down(s1);
   ...
   up(s2); up(s1);
}

If the operations interleave as follows:
P1: down(s1), P2: down(s2), P1: down(s2), P2: down(s1)
then s1 = s2 = -1 and P1, P2 wait for each other forever.
Using the Monitor structure

 A monitor is a special structure (class) containing:
 exclusive methods (critical sections)
 variables (shared by the processes)
 Variables in the monitor can only be accessed by the methods of the monitor.
 At any one time, only one process can be active inside a monitor.
 Condition variable c:
 used to synchronize the use of the variables in the monitor.
 Two operations: Wait(c) and Signal(c).
Using the Monitor structure

Wait(c)
{  status(P) = blocked;       // move P to the waiting state
   enter(P, f(c));            // put P into the queue f(c) of condition variable c
}

Signal(c)
{  if (f(c) != NULL)
   {
      exit(Q, f(c));          // take a process Q that is waiting on c
      status(Q) = ready;      // move Q to the ready state
      enter(Q, ready-list);   // put Q on the ready list
   }
}

Wait(c): switches the calling process to the waiting (blocked) state and puts it into the queue of condition variable c.
Signal(c): if there is a process waiting in the queue of c, reactivate that process; the calling process then leaves the monitor. If no process is waiting in the queue of c, the Signal(c) call is ignored.
Using the Monitor structure

monitor <name_monitor>     // declare a monitor shared by the processes
{
   <shared variables>;
   <condition variables>;
   <exclusive methods>;
}

// Process Pi: structure of process i
while (1)
{
   noncritical_section();
   <name_monitor>.Method_i;   // execute exclusive job i (the critical section)
   noncritical_section();
}
Using the Monitor structure

 The risk of implementing the synchronization incorrectly is greatly reduced.
 Very few languages support the monitor structure.

Note: "busy waiting" solutions do not require a context switch, while "sleep and wakeup" solutions spend time on context switching.
Monitors and the dining philosophers problem

(Figure: the five philosophers seated around a table.)
Monitors and the dining philosophers problem

monitor philosopher
{
   enum {thinking, hungry, eating} state[5];   // shared state variables for the philosophers
   condition self[5];                          // condition variables used to synchronize the dinner

   // exclusive methods (critical sections)
   void init();            // initialization method
   void test(int i);       // test the conditions before letting philosopher i eat
   void pickup(int i);     // pickup method
   void putdown(int i);    // putdown method
}
Monitors and the dining philosophers problem

void philosopher()   // initialization method (constructor)
{  // the initial state of every philosopher is "thinking"
   for (int i = 0; i < 5; i++) state[i] = thinking;
}
void test(int i)
{  // if philosopher i is hungry and the philosophers on its left and right are not eating,
   // then give philosopher i the food
   if ((state[i] == hungry) && (state[(i + 4) % 5] != eating)
       && (state[(i + 1) % 5] != eating))
   {
      self[i].signal();    // wake up philosopher i if it is waiting
      state[i] = eating;   // philosopher i is eating
   }
}
Monitors and the dining philosophers problem

void pickup(int i)
{
   state[i] = hungry;                        // philosopher i is hungry
   test(i);                                  // check before giving food to philosopher i
   if (state[i] != eating) self[i].wait();   // wait for the resource
}
void putdown(int i)
{
   state[i] = thinking;    // philosopher i is thinking
   test((i + 4) % 5);      // check the philosopher on the right; if possible, let it eat
   test((i + 1) % 5);      // check the philosopher on the left; if possible, let it eat
}
Monitors and the dining philosophers problem

// Structure of process Pi, which runs the dinner of philosopher i
philosopher pp;    // shared monitor variable
Pi:
while (1)
{
   noncritical_section();
   pp.pickup(i);     // pickup is a critical section and is accessed exclusively
   eat();            // eating
   pp.putdown(i);    // putdown is a critical section and is accessed exclusively
   noncritical_section();
}
Using Message

 One process controls the use of the resource; many other processes request the resource.

while (1)
{
   Send(controller, request_message);     // request the resource and switch to blocked
   Receive(controller, accept_message);   // receive the message granting use of the resource
   critical_section();                    // use the shared resource exclusively
   Send(controller, end_message);         // announce the end of resource use
   noncritical_section();
}

In distributed systems, the message-exchange mechanism is simpler and is used to solve the synchronization problem.
DEADLOCKS

 A set of processes is deadlocked if each process in the set is waiting for a resource that another process in the set is holding.
Conditions for deadlock to appear

 Condition 1 (mutual exclusion): non-shareable resources are used.
 Condition 2 (hold and wait): processes hold resources while requesting further non-shareable resources.
 Condition 3 (no preemption): resources cannot be taken back from the process that is holding them.
 Condition 4 (circular wait): there is a cycle in the resource allocation graph.
Resource allocation graph

(Figures) Process A holds resource R. Process B requests resource S. Process C holds U and requests T; process D holds T and requests U: the set of processes {C, D} is deadlocked.
Example of deadlock

(Figure) Is there a deadlock in this case?
Methods for handling and preventing deadlocks

 Use a resource allocation algorithm so that deadlock can never happen.
 Allow deadlock to occur, then find a way to recover from it.
 Ignore deadlock handling and assume that the system never becomes deadlocked.
Preventing deadlocks

 Condition 1 is almost unavoidable.
 To prevent condition 2 from occurring:
 The process must request all necessary resources before it begins executing.
 When the process requests a new resource and is denied:
 it releases the resources it is holding;
 the old resources are later allocated again together with the new resource.
 To prevent condition 3 from occurring:
 take resources back from blocked processes and return them to the process when it leaves the blocked state.
 To prevent condition 4 from occurring:
 a process holding resource Ri may only request resource Rj if F(Rj) > F(Ri), where F is a fixed ordering of the resources.
Resource allocation algorithms to avoid deadlocks

 Algorithm to determine the safe state


 Banker algorithm
Algorithm to determine the safe state

 int NumResources;   // number of resource types
 int NumProcs;       // number of processes in the system
 int Available[NumResources];   // Available[r] = the number of free instances of resource r

 int Max[NumProcs, NumResources];
 // Max[p,r] = the maximum demand of process p for resource r

 int Allocation[NumProcs, NumResources];
 // Allocation[p,r] = the number of instances of resource r allocated to process p

 int Need[NumProcs, NumResources];
 // Need[p,r] = Max[p,r] - Allocation[p,r] = the number of instances of resource r that process p still needs

 int Finish[NumProcs] = false;
 // Finish[p] = true means process p can run to completion
Algorithm to determine the safe state

 ST1.
 If there exists a process i such that:
 Finish[i] = false                        // process i has not finished executing
 Need[i,j] <= Available[j] for every j    // all resource needs of process i can be met
 then do ST2, else go to ST3.

 ST2. Allocate to process i all the resources it still needs:
 Available[j] = Available[j] - Need[i,j]        for every j   // give process i the resources it needs
 Allocation[i,j] = Allocation[i,j] + Need[i,j]  for every j
 Need[i,j] = 0                                  for every j
 Finish[i] = true                                             // mark process i as finished
 Available[j] = Available[j] + Allocation[i,j]  for every j   // process i terminates and returns all of its resources
 Go to ST1.

 ST3. If Finish[i] = true for every i, then "the system is in a safe state"; otherwise "the system is not in a safe state".
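A compact C sketch of this safe-state check; the matrix dimensions are fixed small constants purely for illustration, and the work[] vector is updated with the net effect of ST2 (the terminating process returns everything it was allocated).

#include <stdbool.h>
#include <string.h>

#define NP 4   /* number of processes      */
#define NR 3   /* number of resource types */

/* Returns true if the state (Available, Allocation, Need) is safe. */
bool is_safe(int Available[NR], int Allocation[NP][NR], int Need[NP][NR])
{
    int  work[NR];
    bool finish[NP] = {false};
    memcpy(work, Available, sizeof work);

    for (;;) {
        bool found = false;
        for (int i = 0; i < NP; i++) {            /* ST1: find an unfinished process */
            if (finish[i]) continue;
            bool ok = true;
            for (int j = 0; j < NR; j++)          /* whose needs can all be met      */
                if (Need[i][j] > work[j]) { ok = false; break; }
            if (!ok) continue;

            for (int j = 0; j < NR; j++)          /* ST2 (net effect): the process   */
                work[j] += Allocation[i][j];      /* finishes and returns resources  */
            finish[i] = true;
            found = true;
        }
        if (!found) break;                        /* no further process can proceed  */
    }
    for (int i = 0; i < NP; i++)                  /* ST3: safe iff all could finish  */
        if (!finish[i]) return false;
    return true;
}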


Banker's Algorithm

Process Pi requests kr instances of resource r.

 ST1. If kr <= Need[i,r] for every requested resource r, go to ST2; otherwise report an error (the process has exceeded its declared maximum).
 ST2. If kr <= Available[r] for every requested resource r, go to ST3; otherwise Pi must wait.
 ST3. For every requested resource r, try the allocation:
 Available[r] = Available[r] - kr;
 Allocation[i,r] = Allocation[i,r] + kr;
 Need[i,r] = Need[i,r] - kr;
 ST4. Check the safe state of the system (using the algorithm to determine the safe state).

Banker's algorithm: when a process requests resources, the OS tries the allocation and then determines whether the resulting state of the system is SAFE. If the state is safe, the resources the process requires are actually allocated; otherwise the process must wait.
Example - resource allocation to avoid deadlocks

        Max          Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     3  2  2      1  0  0        4  1  2
  P2     6  1  3      2  1  1
  P3     3  1  4      2  1  1
  P4     4  2  2      0  0  2

 Process P2 requests 4 more instances of R1 and 1 more instance of R3.
 Can this request be granted without the risk of deadlock?
Example - resource allocation to avoid deadlocks

 ST0: Compute Need, the remaining need of each process i for each resource j:
 Need[i,j] = Max[i,j] - Allocation[i,j]

        Need         Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     2  2  2      1  0  0        4  1  2
  P2     4  0  2      2  1  1
  P3     1  0  3      2  1  1
  P4     4  2  0      0  0  2
Example - resource allocation to avoid deadlocks

 ST1 + ST2: P2's resource request satisfies the conditions of ST1 and ST2.
 ST3: try allocating to P2 and update the system state:

        Need         Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     2  2  2      1  0  0        0  1  1
  P2     0  0  1      6  1  2
  P3     1  0  3      2  1  1
  P4     4  2  0      0  0  2
Example - resource allocation to avoid deadlocks

 ST4: Check the safe state of the system (using the algorithm to determine the safe state).
 Choose, in turn, a process whose remaining needs can be met:
 Choose P2: try the allocation; assume P2 finishes and then reclaim its resources:
 Available[j] = Available[j] + Allocation[i,j]

        Need         Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     2  2  2      1  0  0        6  2  3
  P2     0  0  0      0  0  0
  P3     1  0  3      2  1  1
  P4     4  2  0      0  0  2
Example - resource allocation to avoid deadlocks

+ Choose P1:

        Need         Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     0  0  0      0  0  0        7  2  3
  P2     0  0  0      0  0  0
  P3     1  0  3      2  1  1
  P4     4  2  0      0  0  2
Example - resource allocation to avoid deadlocks

+ Choose P3:

        Need         Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     0  0  0      0  0  0        9  3  4
  P2     0  0  0      0  0  0
  P3     0  0  0      0  0  0
  P4     4  2  0      0  0  2
Example - resource allocation to avoid deadlocks

+ Choose P4:

        Need         Allocation     Available
        R1 R2 R3     R1 R2 R3       R1 R2 R3
  P1     0  0  0      0  0  0        9  3  6
  P2     0  0  0      0  0  0
  P3     0  0  0      0  0  0
  P4     0  0  0      0  0  0

All processes can be granted their maximum requirements, so the state of the system is safe; therefore the resources requested by P2 can be allocated without deadlock.
Exercise:

           Allocation     Max          Available
 Process    A  B  C       A  B  C      A  B  C
  P1        3  0  1      10  7  4      6  2  2
  P2        3  2  1       8  5  3
  P3        2  1  3       6  3  4
  P4        0  3  0       9  6  3

 Process P2 requests (2, 0, 1). Using the Banker's algorithm, determine whether the current state is safe and whether the request can be granted.
Deadlock detection algorithms
 For Resources with only one instance
 For Resources with many instances
For resources with only one instance

 Use the wait-for graph:
 built from the resource-allocation graph by removing the vertices that represent resource types.
 An edge from Pi to Pj means that Pi is waiting for Pj to release a resource that Pi needs.
 The system is deadlocked if and only if the wait-for graph contains a cycle.
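A minimal C sketch of cycle detection in the wait-for graph by depth-first search; the adjacency matrix wait_for[i][j] = 1 means "Pi waits for Pj", and the names and sizes are illustrative.

#include <stdbool.h>

#define N 5   /* number of processes */

/* DFS over the wait-for graph; an edge back to a vertex on the current path is a cycle. */
static bool dfs(int u, const int wait_for[N][N], bool visited[N], bool on_path[N])
{
    visited[u] = on_path[u] = true;
    for (int v = 0; v < N; v++) {
        if (!wait_for[u][v]) continue;
        if (on_path[v]) return true;                              /* cycle -> deadlock */
        if (!visited[v] && dfs(v, wait_for, visited, on_path)) return true;
    }
    on_path[u] = false;
    return false;
}

bool has_deadlock(const int wait_for[N][N])
{
    bool visited[N] = {false}, on_path[N] = {false};
    for (int i = 0; i < N; i++)
        if (!visited[i] && dfs(i, wait_for, visited, on_path))
            return true;
    return false;
}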
Example

Process | Requests | Holds
  P1    | R1       | R2
  P2    | R3, R4   | R1
  P3    | R5       | R4
  P4    | R2       | R5
  P5    | R3       |

(Figures: the resource-allocation graph for these requests and the wait-for graph derived from it, used to test for a cycle.)
For resources with many instances

 Step 1: Select the first process Pi whose resource requirements can be met.
 If there is no such process, the system is deadlocked.
 Step 2: Try allocating the resources to Pi and check the system state.
 If the system is safe, go to Step 3;
 otherwise, return to Step 1 and try the next Pi.
 Step 3: Allocate the resources to Pi. If every Pi can be satisfied, the system is not deadlocked; otherwise go back to Step 1.
Deadlock correction

 Kill processes that are in the deadlock state
 Kill processes until the deadlock-causing cycle is broken
 Base the choice on factors such as priority, processing time, the number of resources held, the number of resources still required, ...
 Reclaim resources
 Select a victim: which process will have its resources taken away, and which resources?
 Rollback: when a process's resources are reclaimed, the process must be restored to an earlier state from before the deadlock.
 "Resource starvation": how do we ensure that the same process is not always the one whose resources are reclaimed?
