Semaphores
Arvind Krishnamurthy
Spring 2001

These lecture notes cover synchronization for concurrent programs: locks, semaphores, monitors, and condition variables as high-level primitives that enforce atomic access to shared state and let threads coordinate their activities, together with the low-level atomic operations (load/store, interrupt disabling, test-and-set) used to implement them.


Atomic read-modify-write

- HW provides some special instructions.
- test&set (most architectures): read the value, write 1 back to memory.
  - If x == 0, test&set(x) returns 0 and sets x to 1.
  - If x == 1, test&set(x) returns 1 and x remains 1.

A spinlock built on test&set:

    lock value = 0;
    Lock::Acquire() { while (test&set(value) == 1); }
    Lock::Release() { value = 0; }
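As a concrete sketch (not from the slides), the same spin-acquire loop can be written with C++'s std::atomic_flag, whose test_and_set() member plays the role of the hardware test&set; the class and member names are illustrative:

    #include <atomic>

    // Test&set spinlock sketch.
    class SpinLock {
        std::atomic_flag value = ATOMIC_FLAG_INIT;
    public:
        void Acquire() {
            // test_and_set() returns the previous value: keep spinning while it was already 1.
            while (value.test_and_set(std::memory_order_acquire))
                ;   // busy-wait
        }
        void Release() {
            value.clear(std::memory_order_release);   // value = 0
        }
    };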

Eliminating Busy-Wait

- Use a queue, but still require an atomic "go to sleep; guard = 0;"
- Why doesn't Release perform a "value = FREE" on one branch?

    Lock::Acquire() {
        while (test&set(guard));
        if (value == BUSY) {
            Put on queue of threads waiting for lock;
            Go to sleep and set guard to 0;
        } else {
            value = BUSY;
            guard = 0;
        }
    }

    Lock::Release() {
        while (test&set(guard));
        if anyone on wait queue {
            Take a waiting thread off the wait queue and
            put it at the front of the ready queue;
        } else {
            value = FREE;
        }
        guard = 0;
    }

Test-and-Set on Multiprocessors

(diagram: processors P1 and P2, each with a cache C1 and C2, sharing a bus to memory)

- Each processor repeatedly executes a test_and_set.
- In hardware, it is implemented as:
  - Fetch the old value
  - Write a "1" blindly
- In a cached system, the write results in invalidations to the other caches.
- The simple algorithm therefore results in a lot of bus traffic.
- Wrap an extra test around it (test-and-test-and-set):

    lock:   if (!location)
                if (!test-and-set(location))
                    return;
            goto lock;
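A minimal sketch of test-and-test-and-set in portable C++ (not from the slides); std::atomic::exchange(1) stands in for the hardware test&set, and the variable name location mirrors the pseudocode above:

    #include <atomic>

    std::atomic<int> location{0};

    void ttas_lock() {
        for (;;) {
            // Spin on an ordinary (cached) load first: this generates no bus traffic
            // until the holder releases the lock and invalidates the cache line.
            while (location.load(std::memory_order_relaxed) != 0)
                ;
            // Only then attempt the expensive atomic read-modify-write.
            if (location.exchange(1, std::memory_order_acquire) == 0)
                return;     // we observed 0 and set it to 1: lock acquired
        }
    }

    void ttas_unlock() {
        location.store(0, std::memory_order_release);
    }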

Ticket Lock for Multiprocessors

- Hardware support: fetch-and-increment
- Obtain a ticket number and wait for your turn

    Lock:
        my_ticket = fetch_and_increment(next_ticket);
        while (my_ticket != now_serving);

    Unlock:
        now_serving++;

- Ensures fairness
- Still could result in a lot of bus transactions (every processor caches now_serving); each release results in P invalidations and P loads

Array Locks

- Problem with ticket locks: everyone is polling the same location
- Distribute the shared value, and do directed "unlocks"

    Lock:
        my_slot = fetch_and_increment(next_slot);
        if (my_slot % numProcs == 0)
            fetch_and_add(next_slot, -numProcs);
        my_slot = my_slot % numProcs;
        while (slots[my_slot] == must_wait);
        slots[my_slot] = must_wait;

    Unlock:
        slots[(my_slot + 1) % numProcs] = has_lock;
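A sketch of the ticket lock in C++, using std::atomic::fetch_add as the fetch-and-increment; the class and member names are illustrative:

    #include <atomic>

    class TicketLock {
        std::atomic<unsigned> next_ticket{0};
        std::atomic<unsigned> now_serving{0};
    public:
        void Lock() {
            // fetch_add returns the old value: that is our ticket number.
            unsigned my_ticket = next_ticket.fetch_add(1, std::memory_order_relaxed);
            // Every waiter spins on the same now_serving location (the source
            // of the bus traffic the array lock tries to avoid).
            while (now_serving.load(std::memory_order_acquire) != my_ticket)
                ;
        }
        void Unlock() {
            now_serving.fetch_add(1, std::memory_order_release);
        }
    };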

Summary

- Many threads + sharing state = race conditions
  - one thread modifying while others are reading/writing
- How to solve? To make multiple threads behave like one safe sequential thread, force only one thread at a time to use shared state.
- Pure load/store methods are too complex and difficult.
- The general solution is to use high-level primitives, e.g., locks, which let us bootstrap to arbitrarily sized atomic units.
- Load/store, disabling/enabling interrupts, and atomic read-modify-write instructions are all ways that we can implement high-level atomic operations.

The big picture (layered, top to bottom):

- Concurrent Applications
- High-Level Atomic API: Locks, Semaphores, Monitors (condition variables), Send/Receive
- Low-Level Atomic Ops: Load/Store, Interrupt disable, Test&Set
- Underneath: Interrupt (timer or I/O completion), Scheduling, Multiprocessor

The plan

- So far:
  - definition of locks; how to implement locks; how to program with locks
- What is next?
  - definitions of semaphores, monitors, and condition variables
  - how to implement them?
  - how to use them to solve synchronization problems?
  - What are the right synchronization abstractions to make it easy to build concurrent programs?
- Future:
  - message-passing and IPC
  - language support for concurrency

Motivations

- Writing concurrent programs is hard because:
  - you need to worry about multiple concurrent activities reading and writing the same memory;
  - ordering matters.
- Synchronization is a way of coordinating multiple concurrent activities that are using shared state.
- Locks, semaphores, condition variables, and monitors are just various ways of structuring the sharing.

Semaphores (Dijkstra 1965)

- Semaphores are a kind of generalized lock.
- They were the main synchronization primitive used in early Unix.
- Semaphores have a non-negative integer value and support two operations:
  - semaphore->P(): an atomic operation that waits for the semaphore to become positive, then decrements it by 1
  - semaphore->V(): an atomic operation that increments the semaphore by 1, waking up a waiting P, if any
- Semaphores are like integers except:
  (1) they take only non-negative values; (2) only P and V are allowed --- you can't read or write the value except to set it initially; (3) the operations must be atomic: two P's that occur together can't decrement the value below zero, and a thread going to sleep in P won't miss a wakeup from V, even if they both happen at about the same time.

Implementing semaphores

- P means "test" (proberen in Dutch); V means "increment" (verhogen in Dutch).
- Standard tricks:
  - Can be built using interrupt disable/enable
  - Or using test-and-set
  - Use a queue to prevent unnecessary busy-waiting
- Binary semaphores:
  - Like a lock; also known as a "mutex"; can only have the value 0 or 1 (unlike the counting semaphore defined on the previous slide, which can take any non-negative integer value)
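A sketch of a counting semaphore in C++. The slide's tricks (interrupt disable/enable, or test-and-set plus an explicit wait queue) are kernel-level mechanisms, so this user-level sketch instead gets atomicity from a std::mutex and uses a std::condition_variable as the wait queue; the P/V method names follow the slides:

    #include <mutex>
    #include <condition_variable>

    class Semaphore {
        std::mutex m;
        std::condition_variable cv;   // queue of threads sleeping in P
        unsigned value;               // non-negative integer value
    public:
        explicit Semaphore(unsigned initial) : value(initial) {}

        void P() {                    // wait for value to become positive, then decrement
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return value > 0; });
            --value;
        }
        void V() {                    // increment, waking up one waiting P, if any
            std::lock_guard<std::mutex> lk(m);
            ++value;
            cv.notify_one();
        }
    };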

How to use semaphores

- Binary semaphores can be used for mutual exclusion: initial value of 1; P() is called before the critical section; V() is called after the critical section.

    semaphore->P();
    // critical section goes here
    semaphore->V();

- Scheduling constraints:
  - having one thread wait for something to happen
  - Example: Thread::Join, which must wait for a thread to terminate. By setting the initial value to 0 instead of 1, we can implement waiting on a semaphore.
- Controlling access to a finite resource

Scheduling constraints

- One thing must happen after another:

    Initial value of semaphore = 0;
    Fork a child thread
    Thread::Join calls P      // will wait until something makes the semaphore positive
    Thread finish calls V     // makes the semaphore positive and wakes up
                              // the thread waiting in Join
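A sketch of the Join pattern, assuming the Semaphore class sketched under "Implementing semaphores" above; the initial value of 0 follows the slide, while the function names and use of std::thread are illustrative:

    #include <thread>

    Semaphore done(0);          // initial value 0: Join must wait for the child

    void child() {
        // ... child's work ...
        done.V();               // "Thread finish calls V"
    }

    int main() {
        std::thread t(child);   // fork a child thread
        done.P();               // "Thread::Join calls P": blocks until the child calls V
        t.join();               // reclaim the std::thread object
        return 0;
    }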

Scheduling with Semaphores

- In general, scheduling dependencies between threads T1, T2, ..., Tn can be enforced with n-1 semaphores S1, S2, ..., Sn-1, used as follows:
  - T1 runs and signals V(S1) when done.
  - Tm waits on Sm-1 (using P) and signals V(Sm) when done.
- (contrived) example: schedule print(f(x,y))

    float x, y, z;
    sem Sx = 0, Sy = 0, Sz = 0;

    T1:             T2:             T3:
    x = ...;        P(Sx);          P(Sz);
    V(Sx);          P(Sy);          print(z);
    y = ...;        z = f(x,y);     ...
    V(Sy);          V(Sz);
    ...             ...

Example: producer-consumer with a bounded buffer

- Example: cpp file.S | as
- (diagram: Producer -> bounded buffer -> Consumer)
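A sketch of the print(f(x,y)) schedule, again assuming the Semaphore class sketched earlier; the concrete values assigned to x and y and the stand-in for f are illustrative:

    #include <cstdio>
    #include <thread>

    float x, y, z;
    Semaphore Sx(0), Sy(0), Sz(0);

    void T1() { x = 1.0f; Sx.V(); y = 2.0f; Sy.V(); }
    void T2() { Sx.P(); Sy.P(); z = x + y; /* stand-in for f(x,y) */ Sz.V(); }
    void T3() { Sz.P(); std::printf("%f\n", z); }

    int main() {
        std::thread a(T1), b(T2), c(T3);   // start in any order; semaphores enforce the schedule
        a.join(); b.join(); c.join();
        return 0;
    }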

Producer-consumer: problem definition

- Producer puts things into a shared buffer; consumer takes them out.
- Need synchronization for coordinating producer and consumer.
- Don't want producer and consumer to have to operate in lockstep:
  - so put a fixed-size buffer between them
  - need to synchronize access to this buffer
  - Producer needs to wait if the buffer is full
  - Consumer needs to wait if the buffer is empty
- Semaphores are used for both mutual exclusion and scheduling.
- Another example: Coke machine. The producer is the delivery person; the consumers are students and faculty.

Producer-consumer with semaphores (1)

- Correctness constraints:
  - Consumer must wait for producer to fill buffers, if all are empty (scheduling constraint)
  - Producer must wait for consumer to empty buffers, if all are full (scheduling constraint)
  - Only one thread can manipulate the buffer queue at a time (mutual exclusion)
- General rule of thumb: use a separate semaphore for each constraint

    Semaphore fullBuffers;    // consumer's constraint: if 0, no coke in machine
    Semaphore emptyBuffers;   // producer's constraint: if 0, nowhere to put more coke
    Semaphore mutex;          // mutual exclusion

Producer-consumer with semaphores (2)

    Semaphore fullBuffers = 0;            // initially no coke
    Semaphore emptyBuffers = numBuffers;  // initially, # of empty slots; a semaphore
                                          // used to count how many resources there are
    Semaphore mutex = 1;                  // no one using the machine

    Producer() {
        emptyBuffers.P();   // check if there is space for more coke
        mutex.P();          // make sure no one else is using machine
        put 1 Coke in machine;
        mutex.V();          // ok for others to use machine
        fullBuffers.V();    // tell consumers there is now a Coke in the machine
    }

    Consumer() {
        fullBuffers.P();    // check if there is a coke in the machine
        mutex.P();          // make sure no one else is using machine
        take 1 Coke out;
        mutex.V();          // next person's turn
        emptyBuffers.V();   // tell producer we need more
    }

- What if we have 2 producers and 2 consumers?

Order of P&Vs

What happens if each thread grabs the mutex before checking its scheduling constraint? (Same semaphore declarations as above.)

    Producer() {
        mutex.P();          // make sure no one else is using machine
        emptyBuffers.P();   // check if there is space for more coke
        put 1 Coke in machine;
        fullBuffers.V();    // tell consumers there is now a Coke in the machine
        mutex.V();          // ok for others to use machine
    }

    Consumer() {
        mutex.P();          // make sure no one else is using machine
        fullBuffers.P();    // check if there is a coke in the machine
        take 1 Coke out;
        emptyBuffers.V();   // tell producer we need more
        mutex.V();          // next person's turn
    }

- Deadlock --- two or more processes are waiting indefinitely for an event that can be caused by only one of the waiting processes.
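A runnable sketch of the first (correct) ordering, assuming the Semaphore class sketched under "Implementing semaphores"; the buffer size and the number of items are illustrative:

    #include <cstdio>
    #include <deque>
    #include <thread>

    const int numBuffers = 3;
    Semaphore fullBuffers(0);            // initially no coke
    Semaphore emptyBuffers(numBuffers);  // initially, # of empty slots
    Semaphore mutex(1);                  // no one using the machine
    std::deque<int> machine;             // the shared bounded buffer

    void producer() {
        for (int coke = 0; coke < 10; coke++) {
            emptyBuffers.P();            // wait for a free slot
            mutex.P();                   // exclusive access to the buffer
            machine.push_back(coke);     // put 1 Coke in machine
            mutex.V();
            fullBuffers.V();             // tell consumers there is now a Coke
        }
    }

    void consumer() {
        for (int i = 0; i < 10; i++) {
            fullBuffers.P();             // wait for a coke
            mutex.P();
            int coke = machine.front();  // take 1 Coke out
            machine.pop_front();
            mutex.V();
            emptyBuffers.V();            // tell producer a slot is free
            std::printf("consumed %d\n", coke);
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join(); c.join();
        return 0;
    }

The reordered version deadlocks when the machine is empty: the consumer holds mutex while it blocks in fullBuffers.P(), so the producer can never get past mutex.P() to add a Coke, and each thread is waiting for an event only the other can cause.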
