Chapter 6: Synchronization

Operating System Concepts – 9th Edition Silberschatz, Galvin and Gagne ©2013
Chapter 6: Synchronization Tools

Outline
 6.1 Background
 6.2 The Critical-Section Problem
 6.3 Peterson's Solution
 6.4 Synchronization Hardware
 6.5 Mutex Locks
 6.6 Semaphores

OBJECTIVES
 To present the concept of process synchronization.
 To introduce the critical-section problem, whose solutions can be used to ensure the consistency of shared data.
 To present both software and hardware solutions of the critical-section problem.
 To examine several classical process-synchronization problems.
 To explore several tools that are used to solve process synchronization problems.
Introduction (Recall from Chapter 3)

• Independent process
– A process that does not share data with any other process.
– Cannot affect or be affected by the execution of another process.

• Cooperating process
– Can affect or be affected by the execution of another process.
– Processes either directly share a logical address space (code & data) or share data through files or messages.
6.1 Background
 Concurrent access: access to the same data at the same time by more than one process. Thus:
 Concurrent access to shared data may result in data inconsistency.
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes.
 Race condition: a problem that arises in concurrent execution.
 The synchronization mechanism is usually provided by both the hardware and the operating system.
 Illustration of the problem – the producer-consumer problem.
 Basic assumption – load and store instructions are atomic.
6.1 Background: Why Does a Race Condition Occur?

Two processes may be:
 executing simultaneously, and
 trying to access the same global variable.
[Figure: illustration of a race condition, the critical section, and mutual exclusion.
Source: http://undergraduate.csse.uwa.edu.au/units/CITS2230/handouts/Lecture09/lecture9.pdf]
Producer-Consumer Problem
• Several processes work together to complete a common task – process cooperation.
• A common paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.

Examples:
– A compiler produces assembly code that is consumed by an assembler.
– A web server provides HTML files that are consumed by the client web browser requesting the resource.
6.1 Background
• Suppose we want a solution that fills all the buffers, where the producer and the consumer processes increment and decrement the same variable.
• We can do so by adding an integer variable counter that keeps track of the number of full buffers. Initially, counter is set to 0.
• The variable counter is incremented by the producer after it produces a new buffer and is decremented by the consumer after it consumes a buffer.
• The code is shown on the next two slides.
Recall from Chapter 3:
• To enable the producer and the consumer to execute concurrently, a buffer (shared memory) is used: the producer puts data into it while the consumer takes data out of it.
– An unbounded buffer provides unlimited buffer size.
– A bounded buffer assumes that there is a fixed buffer size.
• The producer puts data into one slot while the consumer retrieves data from another slot.
• The producer and consumer need to be synchronized so that the consumer does not retrieve data before the producer has put the data into the buffer.
• Therefore:
– the producer needs to wait if the buffer is full;
– the consumer needs to wait if the buffer is empty.
Producer Process

while (true) {
    /* produce an item in next_produced */

    while (counter == BUFFER_SIZE)
        ;   /* do nothing */

    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter = counter + 1;
}
Consumer Process

while (true) {
    while (counter == 0)
        ;   /* do nothing */

    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter = counter - 1;

    /* consume the item in next_consumed */
}
Bounded-Buffer Problem
• Shared variables
– in, out, counter and buffer[ ]
– Initial values: in = out = counter = 0

• Both routines are correct if executed independently.

• However, they can give incorrect results if run concurrently when the processes are not synchronized properly.

(For slides 21–24: Silberschatz, A., Galvin, P.B., and Gagne, G. Operating System Concepts (8th Edition). John Wiley & Sons: Asia, 2010, p. 254.)
Buffer[6] with a producer and a consumer

 If the current counter = 5 and the statements
counter++   (producer)
counter--   (consumer)
are executed concurrently . . .

 Then counter may end up as 4, 5 or 6.

 The correct value is 5 (obtained when the processes are synchronized properly).

 Each statement consists of several machine instructions:

counter++ (Producer):
    reg1 ← counter
    reg1 ← reg1 + 1
    counter ← reg1

counter-- (Consumer):
    reg2 ← counter
    reg2 ← reg2 - 1
    counter ← reg2
Race Condition
 counter = counter + 1 could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
 counter = counter - 1 could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
 Consider this execution interleaving with counter = 5 initially:
S0: producer executes register1 = counter        {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = counter        {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes counter = register1        {counter = 6}
S5: consumer executes counter = register2        {counter = 4}

Synchronization problem (race condition)
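The interleaving above can be reproduced on a real machine. The following is a minimal sketch, assuming a POSIX system with pthreads (the iteration count and the file name race.c are arbitrary choices, not from the slides): two threads update the shared counter with no synchronization, so the final value is usually not the expected 0.

/* race.c – unsynchronized shared counter (illustrative sketch) */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, unprotected */

static void *producer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter = counter + 1;           /* load, add, store: not atomic */
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < 1000000; i++)
        counter = counter - 1;
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %ld (expected 0)\n", counter);
    return 0;
}

Compiling with gcc race.c -o race -lpthread and running it a few times typically prints different nonzero values, which is exactly the race condition traced above.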
Race Condition – Definitions
• Def 1: A situation in which at least two processes perform operations on shared data and the outcome depends on the order in which the data is accessed.
• The result of a process must be independent of the speed of execution of other concurrent processes; it should depend only on the control/data synchronization mechanisms.
• Def 2: A situation in which multiple threads access a data item without coordination in a multithreaded application, possibly causing inconsistent results (depending on which thread reaches the data item first).
* (Threads of a multithreaded process normally access shared variables.)

Silberschatz, A., Galvin, P.B., and Gagne, G. Operating System Concepts (9th Edition). John Wiley & Sons: Asia, 2014, p. 255.
Solution to the Race Condition
• Ensure that only one process at a time can manipulate the shared variable (e.g., counter).

• Therefore, process synchronization is needed:
– it means coordinating the activities of two or more processes.

• Synchronization is necessary to ensure that interdependent code is executed in the proper sequence.
Race Condition (Cont.)
 How do we solve the race condition?

 We need to make sure that:
 the statement
counter = counter + 1
is executed as an "atomic" action; that is, while it is being executed, no other instruction can be executed concurrently;
 similarly for
counter = counter - 1

 The ability to execute an instruction, or a number of instructions, atomically is crucial for solving many synchronization problems.
Exercise 6.1:

Based on the sequence of machine instructions below, which has a synchronization problem, suggest how to overcome the race condition.

Solution 6.1:

The producer should finish updating counter just before the consumer reads its value:

T0  Producer  reg1 ← counter    {reg1 = 5}
T1  Producer  reg1 ← reg1 + 1   {reg1 = 6}
T2  Producer  counter ← reg1    {counter = 6}
T3  Consumer  reg2 ← counter    {reg2 = 6}
T4  Consumer  reg2 ← reg2 - 1   {reg2 = 5}
T5  Consumer  counter ← reg2    {counter = 5}
6.2 The Critical-Section Problem
 A critical section is a part of a program (a segment of code in each process).
 Consider a system of n processes {P0, P1, ..., Pn-1}.
 Each process has a critical-section segment of code, in which the process may be changing common variables, updating a table, writing a file, etc.
 When one process is in its critical section, no other process may be in its critical section.
 The critical-section problem is to design a protocol that ensures this.
 Each process must ask permission to enter its critical section in the entry section code; it then executes the critical section; once it finishes executing the critical section, it enters the exit section code. The process then enters the remainder section code.
General Structure of a Process Entering the Critical Section

 General structure of process Pi:

[Figure 6.1: General structure of a process entering the critical section – entry section, critical section, exit section, remainder section]
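The same structure, written as a C-like skeleton (a sketch only; the entry and exit sections are placeholders to be filled in by the algorithms that follow):

do {
    /* entry section: ask permission to enter   */
        /* critical section                     */
    /* exit section: announce that we are done  */
        /* remainder section                    */
} while (true);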
Algorithm for Process Pi: Disabling Interrupts

 The entry section to the critical section is "disable interrupts".
 Execute the critical section.
 The exit section is "enable interrupts".
 Implementation issues:
 Uniprocessor systems: the currently running code would execute without preemption.
 Multiprocessor systems: generally too inefficient; operating systems using this approach are not broadly scalable.

 Is this an acceptable solution?
 It is impractical if the critical-section code takes a long time to execute.
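A sketch of this approach in kernel-style pseudocode, where disable_interrupts() and enable_interrupts() stand for whatever privileged primitives the hardware actually provides (they are not a portable API):

do {
    disable_interrupts();     /* entry section */
    /* critical section */
    enable_interrupts();      /* exit section */
    /* remainder section */
} while (true);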
Algorithm for Process Pi: Strict Alternation

Use a shared variable turn to indicate which process may enter next:

do {
    while (turn == j)
        ;                 /* busy wait */
    /* critical section */
    turn = j;
    /* remainder section */
} while (true);
Figure 6.2

• The algorithm guarantees mutual exclusion: only one process at a time is in the critical section. But...
• Busy waiting.
Requirements for CS Solutions
Assumptions:
 Each process executes at a non-zero speed.
 No assumption is made concerning the relative speed of the n processes or the number of CPUs.

A solution to the CS problem must satisfy three essential criteria:
 Mutual Exclusion
 Progress
 Bounded Waiting
Solution to Critical-Section Problem
Must satisfy the following three requirements:
1. Mutual Exclusion - If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections
2. Progress - If no process is executing in its critical section
and there exist some processes that wish to enter their critical
section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its
critical section and before that request is granted
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes

Four types of locking mechanisms to solve the CS problem:

1. Software-defined (strict alternation, flagging, and Peterson's algorithm)

2. Hardware support (test-and-set and swap)

3. Support from the operating system (semaphores)

4. Support from the programming language (Ada rendezvous)
6.3 Software Solution: Peterson's Algorithm

 A good algorithmic software solution.
 A two-process solution.
 Assume that the load and store machine-language instructions are atomic; that is, they cannot be interrupted.
 The two processes share two variables:
 int turn;
 boolean flag[2];

 The variable turn indicates whose turn it is to enter the critical section.
 The flag array is used to indicate whether a process is ready to enter the critical section: flag[i] = true implies that process Pi is ready.
Algorithm for Process Pi

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;                 /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
} while (true);
Figure 6.2 The structure of process Pi in Peterson's solution

 It is provable that the three CS requirements are met:
1. Mutual exclusion is preserved: Pi enters its CS only if either flag[j] == false or turn == i.
2. The progress requirement is satisfied.
3. The bounded-waiting requirement is met.
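For readers who want to try Peterson's algorithm, the following is a minimal two-thread sketch. It is not the textbook's listing: it uses C11 sequentially consistent atomics so that the slide's assumption about atomic, non-reordered loads and stores actually holds on modern hardware (plain variables would not be guaranteed to work; see the later slide on hardware support). The names enter_region/leave_region and the iteration count are illustrative.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];
static atomic_int turn;
static long counter = 0;               /* shared data protected by the algorithm */

static void enter_region(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);      /* "I am ready to enter my CS" */
    atomic_store(&turn, j);            /* "but you may go first" */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                              /* busy wait */
}

static void leave_region(int i) {
    atomic_store(&flag[i], false);     /* "I no longer want the CS" */
}

static void *worker(void *arg) {
    int i = (int)(long)arg;
    for (int k = 0; k < 1000000; k++) {
        enter_region(i);
        counter++;                     /* critical section */
        leave_region(i);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}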
Exercise 1
• Rewrite Peterson's algorithm as a CS solution for process 0 (P0).
• Rewrite Peterson's algorithm as a CS solution for process 1 (P1).

Answer
Initial SHARED values: flag[0] = FALSE, flag[1] = FALSE

P0:
do {
    flag[0] = TRUE;
    turn = 1;
    while (flag[1] && turn == 1)
        ;  /* do nothing */
    /* critical section */
    flag[0] = FALSE;
    /* remainder section */
} while (TRUE);

P1:
do {
    flag[1] = TRUE;
    turn = 0;
    while (flag[0] && turn == 0)
        ;  /* do nothing */
    /* critical section */
    flag[1] = FALSE;
    /* remainder section */
} while (TRUE);

Initially the flags are false. When a process wants to execute its critical section, it sets its own flag to true and sets turn to the index of the other process. This means that the process wants to execute, but it will allow the other process to run first. The process busy-waits until the other process has finished its own critical section.
After this the current process enters its critical section and adds or removes a random number from the shared buffer. After completing the critical section, it sets its own flag to false, indicating that it does not wish to execute any more.

P0's entry and exit sections, annotated:

flag[0] = true;                       /* "I am ready to enter my CS." */
turn = 1;                             /* "But you may execute your CS first." */
while (flag[1] == true && turn == 1)  /* "If you are ready and it's your turn, I'll wait." */
    ;                                 /* P0 waits; otherwise P0 enters its CS */
/* CS */
flag[0] = false;                      /* "I don't want to enter any more." */

Entrance to the CS is granted to process P0:
• if P1 does not want to enter its CS (flag[1] == FALSE), or
• if P1 has given priority to P0 by setting turn = 0.
Exercise 2
• What is the event at each ti if two concurrent processes wish to enter their CSs?

• Check whether the following are preserved:
– Mutual exclusion
– Progress
– Bounded waiting
Answer: Execution of the algorithm for P0 and P1

Time  flag[0]  flag[1]  turn  Events
t0    TRUE     FALSE    1     P0 requests to enter CS0
t1    TRUE     FALSE    1     P0 enters CS0
t2    TRUE     TRUE     0     P1 requests to enter CS1
t3    FALSE    TRUE     0     P0 executes RS0
t4    FALSE    TRUE     0     P1 enters CS1
t5    FALSE    TRUE     0     P0 executes RS0
t6    FALSE    FALSE    0     P1 executes RS1
t7    FALSE    FALSE    0     P0 executes RS0
t8    FALSE    TRUE     0     P1 requests to enter CS1

Mutual exclusion is preserved: at no time are P0 and P1 in their critical sections together.
Peterson's algorithm: the table shows which process can enter its CS for different values of flag[] and turn.

flag[0]  flag[1]  turn  Process that can enter its CS
TRUE     FALSE    1     P0
FALSE    TRUE     0     P1
TRUE     TRUE     0     P0
TRUE     TRUE     1     P1
Exercise 3: continue from the previous example

Time  flag[0]  flag[1]  turn  Events
t10   TRUE     FALSE    1     P0 requests to enter CS0
t11   TRUE     TRUE     0     P1 requests to enter CS1
t12   TRUE     TRUE     0     P0 enters CS0
t13   TRUE     TRUE     0     P1 busy-waits in its loop
t14   FALSE    TRUE     0     P0 executes RS0
t15   FALSE    TRUE     0     P1 enters CS1
t16   TRUE     TRUE     1     P0 requests to enter CS0
t17   TRUE     TRUE     1     P0 busy-waits in its loop
t18   TRUE     FALSE    1     P1 executes RS1
t19   TRUE     FALSE    1     P0 enters CS0

The trace illustrates progress (as soon as P0 leaves its CS at t14, the waiting P1 enters at t15) and bounded waiting (P1, which requested at t11, and P0, which requested again at t16, each wait for at most one CS entry by the other process).
Solution to the Critical-Section Problem Using Locks

 Software-based solutions such as Peterson's are not guaranteed to work on modern computer architectures.
 Many systems therefore provide hardware support for implementing critical-section code.
 All of these solutions are based on the idea of locking: protecting critical regions via locks.

 Code:
do {
    /* acquire lock */
        /* critical section */
    /* release lock */
        /* remainder section */
} while (TRUE);
Synchronization Hardware
 Modern machines provide special atomic hardware instructions to implement locks.
 Atomic = non-interruptible.

 Two types of instructions:
 test a memory word and set its value;
 swap the contents of two memory words.
Test-and-Set Instruction
• Test-and-Set is a single indivisible machine instruction, known simply as TS, introduced by IBM for its multiprocessing System 360/370 computers.

• In a single machine cycle, it tests whether the key is available and, if it is, sets it to unavailable.

• The key is a single bit in a storage location that can contain
• 0 (if it is free) or
• 1 (if it is busy).
test_and_set Instruction
 Definition:
boolean test_and_set(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Figure 6.3 The definition of the test_and_set() instruction

 Properties:
 Executed atomically.
 Returns the original value of the passed parameter.
 Sets the new value of the passed parameter to TRUE.
Test-and-Set Applied to the CS Problem
• A process (P1) tests the condition code with the Test-and-Set instruction before entering a CS.

• If no other process is in the CS, P1 is allowed to proceed and the condition code is changed from 0 to 1.

• Later, when P1 exits the CS, the condition code is reset to 0 so that another process can enter its CS.

• On the other hand, if P1 finds a busy condition code, it is placed in a waiting loop where it keeps testing the condition code until it becomes free.
Solution Using test_and_set()
 Shared boolean variable lock, initialized to FALSE.
 Each process wishing to execute CS code:

do {
    while (test_and_set(&lock))
        ;  /* do nothing */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
Figure 6.4 Mutual-exclusion implementation with test_and_set()

 This solution results in busy waiting.
 What about bounded waiting? This algorithm does not satisfy bounded waiting; read the textbook for the solution.
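On current hardware the same primitive is available through C11's atomic_flag, whose atomic_flag_test_and_set() performs an atomic test-and-set. A minimal spinlock sketch built on it (the names spin_lock/spin_unlock are illustrative, not a standard API):

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == lock available */

void spin_lock(void) {
    while (atomic_flag_test_and_set(&lock))
        ;   /* busy wait until the previous value was "clear" */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock);                 /* exit section: release the lock */
}

/* Usage:
   spin_lock();
   ... critical section ...
   spin_unlock();
*/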
6.5 Mutex Locks
The previous solutions are complicated and generally inaccessible to application programmers.

OS designers therefore build software tools to solve the critical-section problem.

The simplest of these tools is the mutex lock, which has a Boolean variable available associated with it to indicate whether the lock is available or not.

However, there is a more robust tool that can behave similarly to a mutex lock but can also provide more sophisticated ways for processes to synchronize their activities.
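In practice, application programmers normally use an existing mutex implementation rather than writing their own acquire()/release(). A short sketch protecting the earlier shared counter with a POSIX mutex (the function name increment() is illustrative):

#include <pthread.h>

static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

void increment(void) {
    pthread_mutex_lock(&counter_lock);    /* acquire the lock */
    counter = counter + 1;                /* critical section */
    pthread_mutex_unlock(&counter_lock);  /* release the lock */
}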
Semaphores: Overview

A semaphore is originally a method of visual signaling, usually by means of flags or lights.

Semaphores are used in:
– railways (a raised arm means the track is clear; a lowered arm means the track is busy and the train must wait);
– communication between navy ships over long distances.

A semaphore has a similar function in an OS: it signals if and when a resource is free and can be used by a process.
Semaphores in the OS:

 A protocol mechanism for task communication.

 Specifically, semaphores are used to:
 control access to a shared resource (mutual exclusion);
 signal the occurrence of an event;
 allow two tasks to synchronize their activities.
Semaphores
 A synchronization tool that provides more sophisticated ways (than mutex locks) for processes to synchronize their activities.
 Semaphore S – an integer variable.
 Can only be accessed via two indivisible (atomic) operations:
 when one process modifies the semaphore variable S, no other process can modify it concurrently;
 the operations are wait() and signal(), originally called P() and V().

 Definition of the wait() operation:
wait(S) {
    while (S <= 0)
        ;   // busy wait
    S = S - 1;
}
(A process calling the wait() operation must wait until S > 0, i.e. a positive value.)
(S <= 0 implies a busy critical section/region.)

 Definition of the signal() operation:
signal(S) {
    S++;
}
Types of Semaphores
 Counting semaphore – the integer value can range over an unrestricted domain.

 Binary semaphore – the integer value can range only between 0 and 1.
 Behaves the same as a mutex lock.

 A counting semaphore S can be implemented using binary semaphores.
Semaphore Usage
Semaphores can solve various synchronization problems.
 A solution to the CS problem:
 create a semaphore synch initialized to 1:
wait(synch);
    /* CS */
signal(synch);
Semaphore Usage
Consider P1 and P2, where code segment S1 must happen before code segment S2.
Create a semaphore synch initialized to 0.
P1:
    S1;
    signal(synch);
P2:
    wait(synch);
    S2;

Because synch is initialized to 0, P2 executes S2 only after P1 has invoked signal(synch), which is after statement S1 has been executed.
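The same ordering guarantee can be tried with POSIX unnamed semaphores (sem_init/sem_wait/sem_post). A minimal sketch, assuming a POSIX system; the thread function names p1/p2 are illustrative:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t synch;              /* initialized to 0 in main() */

static void *p1(void *arg) {
    printf("S1\n");              /* statement S1 */
    sem_post(&synch);            /* signal(synch) */
    return NULL;
}

static void *p2(void *arg) {
    sem_wait(&synch);            /* wait(synch): blocks until P1 signals */
    printf("S2\n");              /* statement S2 runs only after S1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&synch, 0, 0);      /* initial value 0 */
    pthread_create(&t2, NULL, p2, NULL);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&synch);
    return 0;
}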
Semaphore Implementation: Busy Waiting
 Must guarantee that no two processes can execute wait() and signal() on the same semaphore at the same time.

 Thus, the implementation itself becomes a critical-section problem, where the wait and signal code are placed in the critical section.

 This implementation is based on busy waiting in the critical-section implementation (that is, in the code for wait() and signal()).
 But the implementation code is short.
 There is little busy waiting if the critical section is rarely occupied.

 Can we implement semaphores with no busy waiting? Yes – the textbook has the answer.
Semaphore Implementation with No Busy Waiting

 With each semaphore there is an associated waiting queue.
 Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to the next record in the list

typedef struct {
    int value;
    struct process *list;
} semaphore;

 Two operations:
 block – place the process invoking the operation on the appropriate waiting queue.
 wakeup – remove one of the processes from the waiting queue and place it in the ready queue.
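As a concrete (user-level) picture of the same idea, here is a minimal sketch of a counting semaphore with no busy waiting, built on a POSIX mutex and condition variable: pthread_cond_wait() plays the role of block() and pthread_cond_signal() the role of wakeup(). The type csem_t and its functions are illustrative, not the textbook's kernel implementation.

#include <pthread.h>

typedef struct {
    int value;
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
} csem_t;

void csem_init(csem_t *s, int value) {
    s->value = value;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
}

void csem_wait(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value <= 0)                          /* no CPU spinning:        */
        pthread_cond_wait(&s->nonzero, &s->lock);  /* block() on waiting queue */
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

void csem_signal(csem_t *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);              /* wakeup() one waiter */
    pthread_mutex_unlock(&s->lock);
}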
Example: The Producer-Consumer Problem with Semaphores
Initial values: mutex = 1, empty = n, full = 0

Producer:
do {
    ....
    /* produce an item in nextp */
    ....
    wait(empty);     /* decrement the empty count */
    wait(mutex);
    ....
    /* add nextp to the buffer */
    ....
    signal(mutex);
    signal(full);    /* increment the full count */
} while (TRUE);

Consumer:
do {
    wait(full);      /* decrement the full count */
    wait(mutex);
    ....
    /* remove an item from the buffer to nextc */
    ....
    signal(mutex);
    signal(empty);   /* increment the empty count */
    ....
    /* consume the item in nextc */
    ....
} while (TRUE);
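A runnable counterpart of the pseudocode above, assuming a POSIX system: empty and full are counting semaphores, and a pthread mutex plays the role of the binary semaphore mutex. BUFFER_SIZE, the item type, the iteration counts and the printf are illustrative choices, not from the slides.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;                 /* counts empty slots, init n */
static sem_t full_slots;                  /* counts full slots,  init 0 */
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);           /* wait(empty) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        buffer[in] = item;                /* add item to buffer */
        in = (in + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&full_slots);            /* signal(full) */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int k = 0; k < 20; k++) {
        sem_wait(&full_slots);            /* wait(full) */
        pthread_mutex_lock(&mutex);       /* wait(mutex) */
        int item = buffer[out];           /* remove item from buffer */
        out = (out + 1) % BUFFER_SIZE;
        pthread_mutex_unlock(&mutex);     /* signal(mutex) */
        sem_post(&empty_slots);           /* signal(empty) */
        printf("consumed %d\n", item);    /* consume the item */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}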
Exercise:

Based on the previous example, fill in the following table with the values of the variables used.

Assume the initial values: mutex = 1, empty = 5, full = 0.

Producer:                        Consumer:
       wait()   signal()                wait()   signal()
empty                            empty
mutex                            mutex
full                             full

• The producer starts first, since its wait(empty = 5) succeeds while the consumer blocks at wait(full = 0).

• Once the producer updates full = 1, the consumer proceeds with all the values already updated by the producer.

Answer (initial values: mutex = 1, empty = 5, full = 0):

Producer:                          Consumer:
wait(empty = 5)   → empty = 4      wait(full = 1)    → full = 0
wait(mutex = 1)   → mutex = 0      wait(mutex = 1)   → mutex = 0
signal(mutex = 0) → mutex = 1      signal(mutex = 0) → mutex = 1
signal(full = 0)  → full = 1       signal(empty = 4) → empty = 5

Producer:                        Consumer:
       wait()   signal()                wait()   signal()
empty  4                         empty            5
mutex  0        1                mutex  0         1
full            1                full   0
6.6.3 Deadlock and Starvation
 Incorrect use of semaphore operations can produce:

 Deadlock – two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
 Starvation – indefinite blocking:
 a process may never be removed from the semaphore queue in which it is suspended.
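A standard illustration of such a deadlock, in the same pseudocode style as the earlier slides: let S and Q be two semaphores, both initialized to 1. If P0 and P1 interleave as shown, each ends up waiting for a signal() that only the other (blocked) process can issue.

        P0                         P1
     wait(S);                   wait(Q);
     wait(Q);                   wait(S);
       ...                        ...
     signal(S);                 signal(Q);
     signal(Q);                 signal(S);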
Summary

• Mutual exclusion
– Ensures that only one process at a time manipulates shared data, preventing race conditions.
– Maintained with test-and-set, WAIT and SIGNAL, and semaphores (P, V, and mutex).

• Processes are synchronized using hardware and software mechanisms.
End of Chapter 6
