Synchronization Part 2

Solutions to the critical section problem

Issues with the software-based solution:
• It involves busy waiting.
• It is limited to 2 processes.

Hardware-based solution to the critical section

TestAndSet
TestAndSet is a hardware-supported solution to the synchronization problem. It uses a shared lock variable that can take one of two values: 0 (unlocked) or 1 (locked). Before entering the critical section, a process tests the lock. If it is locked, the process keeps waiting until the lock becomes free; if it is not locked, the process takes the lock and executes its critical section.

TestAndSet preserves mutual exclusion and progress, but it does not guarantee bounded waiting.
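
• A minimal sketch of a TestAndSet-style spinlock, using C11's atomic_flag (the actual instruction is hardware-specific; this is an illustration, not code from the slides):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;     /* clear = 0 = unlocked */

void acquire(void) {
    /* Atomically set the flag to 1 and return its previous value;
       spin (busy-wait) while it was already 1 (locked). */
    while (atomic_flag_test_and_set(&lock))
        ;                                /* busy waiting */
}

void release(void) {
    atomic_flag_clear(&lock);            /* back to 0 (unlocked) */
}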

Busy waiting
• Busy waiting, also known as spinning or busy looping, is a process synchronization technique in which a process/task repeatedly checks whether a condition is satisfied before proceeding with its execution. In busy waiting, a process executes instructions that test whether the entry condition is true, such as the availability of a lock or resource in the computer system.

• Need for Busy Waiting
• Busy waiting is used in operating systems to achieve mutual exclusion. Mutual exclusion prevents processes from accessing shared resources simultaneously. The critical section is the region of program code in which concurrent access must be avoided. Inside its critical section, a process is granted exclusive control over the shared resources, without interference from the other processes.

• Limitations of Busy Waiting
• Busy waiting has the following limitations:
• It keeps the CPU busy the whole time, until its condition is satisfied.
• Synchronization mechanisms that rely on busy waiting suffer from the priority inversion problem.
• Busy waiting consumes more power.

• Priority inversion is a situation that can occur when a low-priority task is holding a resource, such as a semaphore, for which a higher-priority task is waiting.

Bounded buffer problem
• The producer-consumer problem refers to a classic
synchronization issue involving two types of processes:
producers and consumers. These processes share a
common, finite-sized buffer. The producers generate
data items and place them into the buffer, while
consumers retrieve and process these items.

1.Shared Buffer: There's a fixed-size buffer or queue where
producers place items and from which consumers retrieve them.
This buffer is shared among the producers and consumers.
2.Producers: These are processes or threads responsible for
generating data items and inserting them into the buffer.
Producers must ensure that they do not insert items into a full
buffer. If the buffer is full, producers may need to wait until there is
space available.
3.Consumers: These are processes or threads that retrieve data
items from the buffer and process them. Consumers must ensure
that they do not consume items from an empty buffer. If the buffer
is empty, consumers may need to wait until there are items
available.
4.Synchronization Mechanisms: To coordinate the
activities of producers and consumers and ensure safe
access to the shared buffer, synchronization
mechanisms are used. This typically involves techniques
like semaphores, mutexes, or condition variables.
5.Producer-Consumer Algorithm: Various algorithms
can be used to solve the producer-consumer problem,
ensuring proper synchronization and avoiding issues like
race conditions, deadlock, or buffer overflow. Classic
solutions include using counting semaphores to track
the number of items in the buffer and signaling when
it's full or empty.
Solution to the classic problem
• Initialize Semaphores and Mutex:

• Create two semaphores: one to track the number of empty slots in the buffer
(emptyCount) and another to track the number of filled slots (fullCount).
• Create a mutex (bufferMutex) to control access to the buffer.
• Producer Code:

• Wait on the emptyCount semaphore to ensure there is space in the buffer.


• Acquire the bufferMutex to access the buffer.
• Add the produced item to the buffer.
• Release the bufferMutex.
• Signal the fullCount semaphore to indicate that a new item has been added to
the buffer.
• Consumer Code:

• Wait on the fullCount semaphore to ensure there are items in the buffer.
• Acquire the bufferMutex to access the buffer.
• Remove an item from the buffer.
• Release the bufferMutex.
• Signal the emptyCount semaphore to indicate that a slot in the buffer
has been emptied.
• This solution ensures that producers wait if the buffer is full and
consumers wait if the buffer is empty, preventing race conditions and
buffer overflow/underflow.
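
• The steps above map directly onto POSIX counting semaphores and a mutex. A minimal sketch, reusing the names emptyCount, fullCount, and bufferMutex from the slides (the buffer size N and the int item type are assumptions for illustration):

#include <pthread.h>
#include <semaphore.h>

#define N 10                             /* buffer capacity (assumed) */

int buffer[N];
int in = 0, out = 0;

sem_t emptyCount, fullCount;
pthread_mutex_t bufferMutex = PTHREAD_MUTEX_INITIALIZER;

void init(void) {
    sem_init(&emptyCount, 0, N);         /* all N slots empty at start */
    sem_init(&fullCount, 0, 0);          /* no slots filled at start */
}

void produce(int item) {
    sem_wait(&emptyCount);               /* wait for an empty slot */
    pthread_mutex_lock(&bufferMutex);    /* exclusive buffer access */
    buffer[in] = item;
    in = (in + 1) % N;
    pthread_mutex_unlock(&bufferMutex);
    sem_post(&fullCount);                /* signal: one more filled slot */
}

int consume(void) {
    sem_wait(&fullCount);                /* wait for a filled slot */
    pthread_mutex_lock(&bufferMutex);
    int item = buffer[out];
    out = (out + 1) % N;
    pthread_mutex_unlock(&bufferMutex);
    sem_post(&emptyCount);               /* signal: one more empty slot */
    return item;
}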

Reader-writer problem
• The reader-writer problem is another classic
synchronization problem in computer science and
operating systems. It involves multiple processes
concurrently accessing a shared resource, where some
processes only read from the resource (readers), while
others also write to it (writers). The goal is to ensure
that data integrity is maintained while allowing for
maximum concurrency.

1.Readers: Processes that only read from the shared
resource. Multiple readers can access the resource
simultaneously without any issue.
2.Writers: Processes that both read from and modify the
shared resource. Writers need exclusive access to the
resource to ensure data consistency.

1.Data Integrity: To maintain data integrity, no two
writers should be allowed to access the resource
simultaneously. Additionally, writers should be given
priority over readers to prevent readers from accessing
outdated or inconsistent data while the resource is
being modified.
2.Concurrency: The problem should be solved in a way
that maximizes concurrency, allowing multiple readers
to access the resource simultaneously whenever
possible.

1.First Readers-Writers Problem: In this variation,
readers have priority over writers. If a reader is reading
from the resource, writers are prevented from accessing
it until all readers have finished reading.
2.Second Readers-Writers Problem: In this variation,
writers have priority over readers. Once a writer wants
to access the resource, it blocks any subsequent
readers or writers until it finishes its write operation.

3.Fair Readers-Writers Problem: This variation
ensures fairness by allowing both readers and writers to
access the resource in the order they request it. It
prevents starvation of either readers or writers.
• Solutions to the reader-writer problem often use
synchronization primitives like semaphores, mutexes, or
monitors to coordinate access to the shared resource
and enforce the desired behavior regarding concurrency
and data integrity.
• Overall, the reader-writer problem is about efficiently managing concurrent access to a shared resource by readers and writers while ensuring data integrity and maximizing concurrency.
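
• As an illustration, a minimal sketch of the first (readers-priority) variant using a POSIX semaphore and a mutex; the names wrt, countLock, and readCount are assumptions:

#include <pthread.h>
#include <semaphore.h>

sem_t wrt;                               /* initialize to 1: writer exclusion */
pthread_mutex_t countLock = PTHREAD_MUTEX_INITIALIZER;
int readCount = 0;                       /* number of active readers */

void writer(void) {
    sem_wait(&wrt);                      /* exclusive access to the resource */
    /* ... write to the shared resource ... */
    sem_post(&wrt);
}

void reader(void) {
    pthread_mutex_lock(&countLock);
    if (++readCount == 1)
        sem_wait(&wrt);                  /* first reader locks out writers */
    pthread_mutex_unlock(&countLock);

    /* ... read the shared resource ... */

    pthread_mutex_lock(&countLock);
    if (--readCount == 0)
        sem_post(&wrt);                  /* last reader readmits writers */
    pthread_mutex_unlock(&countLock);
}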
Dining Philosophers problem
• The Dining Philosophers problem is another classic
synchronization problem in computer science,
particularly in the field of concurrent programming and
operating systems. It illustrates the challenges of
resource allocation and deadlock avoidance in a multi-
threaded or multi-process environment.

1.Scenario: The problem imagines a group of
philosophers sitting around a table with a bowl of
spaghetti in front of each. Between each pair of
adjacent philosophers, there's a single fork.
2.Activity: The philosophers spend their time thinking
and eating spaghetti. To eat, a philosopher needs to
pick up the two forks adjacent to them.

• The goal of the Dining Philosophers problem is to find a
solution that prevents deadlock and ensures that each
philosopher gets a chance to eat without causing
starvation.

1.Resource Hierarchy Solution: Assign a unique
numerical identifier to each fork and require
philosophers to pick up forks in ascending order of their
identifiers. This solution prevents the circular wait
condition that leads to deadlock.
2.Chandy/Misra Solution: Introduce a process for
requesting and releasing forks, where a philosopher can
only pick up forks if they are both available. If not, they
put themselves on a waiting list and notify the other
philosophers when a fork becomes available.

3.Dijkstra's Solution: Use a global resource manager to
allocate forks to philosophers. Philosophers must
request both forks from the manager, and the manager
only grants permission if both forks are available. This
solution ensures that no two adjacent philosophers can
eat simultaneously.
• These solutions aim to prevent deadlock and starvation
by introducing rules and mechanisms for coordinating
access to shared resources (forks) among philosophers.
They illustrate principles of concurrency control and
resource allocation in multi-process or multi-threaded
environments.
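
• A minimal sketch of the resource-hierarchy solution, using one pthread mutex per fork (N = 5 philosophers assumed; each mutex must be initialized with pthread_mutex_init before use):

#include <pthread.h>

#define N 5
pthread_mutex_t fork_mutex[N];           /* one mutex per fork */

/* Resource hierarchy: always pick up the lower-numbered fork first,
   so a circular wait can never form. */
void philosopher(int i) {
    int left = i, right = (i + 1) % N;
    int first  = left < right ? left : right;
    int second = left < right ? right : left;

    pthread_mutex_lock(&fork_mutex[first]);
    pthread_mutex_lock(&fork_mutex[second]);
    /* ... eat ... */
    pthread_mutex_unlock(&fork_mutex[second]);
    pthread_mutex_unlock(&fork_mutex[first]);
    /* ... think ... */
}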
1.Constraint: The constraint is that a philosopher can
only pick up the forks that are adjacent to them.
Therefore, a philosopher must wait until both forks they
need are available before they can eat.
2.Concurrency Issue: If each philosopher tries to pick
up their left and right fork simultaneously, they may
end up in a deadlock situation where each philosopher
is holding one fork and waiting for the other, causing all
of them to starve.


Monitor

• A monitor in an operating system is one method for achieving process synchronization. Programming-language support allows a monitor to enforce mutual exclusion between different activities in a system. The wait() and notify() constructs are synchronization functions available in the Java programming language.

• Monitors are a programming-language feature that helps control access to shared data. A monitor is a collection of shared data structures, procedures, and the synchronization needed between concurrent procedure calls, and is therefore also referred to as a synchronization tool. Languages that support monitors include Java, C#, Visual Basic, Ada, and Concurrent Euclid. Processes running outside the monitor can call its procedures, but they cannot access its internal variables.
Syntax of monitor in OS

• A monitor in an OS has a simple syntax, similar to how we define a class:
Monitor monitorName {
    variables_declaration;
    condition_variables;

    procedure p1 { ... };
    procedure p2 { ... };
    ...
    procedure pn { ... };

    {
        initializing_code;
    }
}
• In an operating system, a monitor is essentially a class that includes variable declarations, condition variables, procedures (functions), and an initializing-code block for synchronizing processes.
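
• C has no built-in monitor construct, but its effect can be approximated: shared data plus the procedures that operate on it, all guarded by one mutex so only one thread is ever "inside". A sketch with a hypothetical shared counter:

#include <pthread.h>

static pthread_mutex_t monitorLock = PTHREAD_MUTEX_INITIALIZER;
static int count = 0;                    /* variables_declaration */

void increment(void) {                   /* procedure p1 */
    pthread_mutex_lock(&monitorLock);
    count++;
    pthread_mutex_unlock(&monitorLock);
}

int read_count(void) {                   /* procedure p2 */
    pthread_mutex_lock(&monitorLock);
    int value = count;
    pthread_mutex_unlock(&monitorLock);
    return value;
}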

Characteristics of Monitors in OS
• Mutual Exclusion: Monitors ensure mutual exclusion,
which means only one process or thread can be inside
the monitor at any given time. This property prevents
concurrent processes from accessing shared resources
simultaneously and eliminates the risk of data
corruption or inconsistent results due to race conditions.

• Encapsulation: Monitors encapsulate both the shared
resource and the procedures that operate on it. By
bundling the resource and the relevant procedures
together, monitors provide a clean and organized
approach to managing concurrent access. This
encapsulation simplifies the design and maintenance of
concurrent programs, as the necessary synchronization
logic is localized within the monitor.

• Synchronization Primitives: Monitors often support
synchronization primitives, such as condition variables.
Condition variables enable threads within the monitor to
wait for specific conditions to become true or to signal
other threads when certain conditions are met. These
primitives allow for efficient coordination among threads
and help avoid busy-waiting, which can waste CPU
cycles.

• Blocking Mechanism: When a process or thread
attempts to enter a monitor that is already in use, it is
blocked and put in a queue (entry queue) until the
monitor becomes available. This blocking mechanism
avoids busy-waiting and allows other processes to
proceed while waiting for their turn to access the
monitor.

Local Data: Each thread that enters a monitor has its own local data or stack, which
means the variables declared within a monitor procedure are unique to each thread’s
execution. This feature prevents interference between threads and ensures that data
accessed within the monitor remains consistent for each thread.

• Priority Inheritance: In some advanced
implementations of monitors, a priority inheritance
mechanism can be used to prevent priority inversion.
When a higher-priority thread is waiting for a lower-
priority thread to release a resource inside the monitor,
the lower-priority thread’s priority may be temporarily
elevated to avoid unnecessary delays caused by priority
inversion scenarios.

• High-Level Abstraction: Monitors provide a higher-
level abstraction for concurrency management
compared to low-level synchronization mechanisms like
semaphores or spinlocks. This abstraction reduces the
complexity of concurrent programming and makes it
easier to write correct and maintainable code.

Components of a Monitor in an operating system

• Shared Resource:
The shared resource is the data or resource that multiple processes or threads need to access in a mutually exclusive manner. Examples of shared resources include critical sections of code, global variables, or any data structure that must be accessed atomically.

• Entry Queue:
The entry queue is a data structure that holds the
processes or threads that are waiting to enter the
monitor and access the shared resource. When a
process or thread tries to enter the monitor while it is
already being used by another process, it is placed in
this queue, and its execution is temporarily suspended
until the monitor becomes available.

Entry Procedures (or Monitor Procedures)
• Entry procedures are special procedures that provide
access to the shared resource and enforce mutual
exclusion. When a process or thread wants to access
the shared resource, it must call one of these entry
procedures. The monitor’s implementation ensures that
only one process or thread can execute an entry
procedure at a time, thus achieving mutual exclusion.

Condition Variables
• Condition variables enable communication and
synchronization between processes or threads within
the monitor. They allow threads to wait until a specific
condition is satisfied or to signal other threads when
certain conditions become true. Condition variables are
crucial for avoiding busy-waiting, which can be
inefficient and wasteful of system resources.

Local Data (or Local Variables):
• Each process or thread that enters the monitor has its
own set of local data or local variables. These variables
are unique to each thread’s execution and are not
shared between threads. Local data allows each thread
to work independently within the monitor without
interfering with other threads’ data.

• Condition Variables
The condition variables of the monitor can be subjected
to two different types of operations:
• Wait
• Signal
• Consider a condition variable (y) is declared in the
monitor:

• y.wait(): the process that applies the wait operation on a condition variable is suspended and placed in that condition variable's queue of blocked processes.

• y.signal(): when a process applies the signal operation on the condition variable, one of the processes blocked on it is given a chance to execute.
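
• Outside a monitor-supporting language, pthread condition variables in C provide the same wait/signal pattern. A sketch (the flag ready is a hypothetical condition):

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  y = PTHREAD_COND_INITIALIZER;
int ready = 0;                           /* the condition being waited on */

void waiting_process(void) {
    pthread_mutex_lock(&m);
    while (!ready)                       /* y.wait(): suspend until signaled */
        pthread_cond_wait(&y, &m);       /* releases m while blocked */
    /* ... proceed: the condition now holds ... */
    pthread_mutex_unlock(&m);
}

void signaling_process(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&y);             /* y.signal(): wake one blocked process */
    pthread_mutex_unlock(&m);
}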

Precedence graphs
• A precedence graph is a directed acyclic graph (DAG) that represents the execution ordering among several processes. It consists of nodes and edges, where the nodes represent processes and the edges represent the flow of execution.

• Operating systems utilize a data structure called a precedence
graph to show the interdependencies between various tasks or
processes. Another name for it is a Task Dependency Graph.
Several processes may be running at once in a multi-tasking
operating system, and some of these processes may wait for
others to finish before they can start executing. These
dependencies are represented by a Precedence graph, which
is a directed graph with each node being a process or task and
edges denoting dependencies between tasks. In the
precedence graph, each node's label indicates which process
or task it corresponds to, and each edge's label indicates the
kind of dependence that exists between tasks.
Consider the following list of project-related tasks
• Create the interface.

• Create the database's code.

• Create the front-end code.

• Create the back-end code.

• Check the system as a whole.


• We may create a Precedence Graph to show how these
jobs are interdependent. Each job will be represented as
a node in this graph, and dependencies will be shown as
edges connecting the nodes.
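
• As a sketch, this task DAG can be encoded as an adjacency matrix in C, using the labels A-E introduced in the discussion below (the encoding itself is an illustration):

#include <stdio.h>

/* Tasks: A = design UI, B = database code, C = front-end code,
   D = back-end code, E = test the whole system. */
enum { A, B, C, D, E, NTASKS };

/* edge[i][j] = 1 means task i must finish before task j can start */
int edge[NTASKS][NTASKS] = {
    [A] = { [B] = 1 },                   /* UI design before database code */
    [B] = { [C] = 1, [D] = 1 },          /* database before front/back end */
    [C] = { [E] = 1 },                   /* both halves before the test */
    [D] = { [E] = 1 },
};

int main(void) {
    for (int i = 0; i < NTASKS; i++)
        for (int j = 0; j < NTASKS; j++)
            if (edge[i][j])
                printf("task %c precedes task %c\n", 'A' + i, 'A' + j);
    return 0;
}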

• Since the user interface design must be finished before the database code is implemented, task A in the graph is a prerequisite for task B. Tasks C and D in turn depend on task B, since the database must be set up before either the front-end or the back-end code can be written. Finally, task E depends on every other task, since it tests the fully working system.

• The graph's edges can carry labels showing the type of dependency between the jobs. For instance, the edge from A to B may be labelled "Design UI", denoting that task A (designing the user interface) must complete before task B (writing the database code) can begin.

There are two typical forms of dependencies

• Control dependence occurs when one process must complete before another can start. Data dependence occurs when one task needs the results of another task in order to begin. Scheduling algorithms employ precedence graphs to decide the sequence in which activities should be completed, ensuring efficiency and prompt completion.

Process hierarchy
• Nowadays, all general-purpose operating systems permit a user to create and destroy processes. A process can create several new processes during its execution.
• The creating process is called the parent process, and the new process is called the child process.

• Execution − The child process executes concurrently with the parent, or the parent waits until all its children terminate (see the fork() sketch below).
• Sharing − The parent and child may share all resources (such as memory or files), the children may share a subset of the parent's resources, or the parent and children may share no resources at all.
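
• A minimal fork() sketch: the parent creates a child and waits for it to terminate (the printed messages are illustrative):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                  /* parent creates a child process */
    if (pid == 0) {
        printf("child: pid %d\n", (int)getpid());  /* child's work goes here */
    } else {
        waitpid(pid, NULL, 0);           /* parent waits for the child */
        printf("parent: child %d terminated\n", (int)pid);
    }
    return 0;
}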

Threads
• A thread is a small unit of processing that runs
independently within a program. Threads are essential
in programming languages because they enable
developers to write concurrent and parallel applications
that can perform multiple tasks simultaneously.

Threads in Operating Systems

• Threads are an essential component of modern operating systems; they help increase the performance of applications as well as of the entire system. A thread is a smaller unit of execution within a process that shares the same memory and resources. Threads can be created and managed by the operating system or by the application itself, and they can be implemented in various ways in an operating system.
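
• A minimal sketch creating two threads within one process using POSIX threads (the worker function is a hypothetical example):

#include <pthread.h>
#include <stdio.h>

/* Both threads share the process's memory but have their own stack,
   register set, and program counter. */
void *worker(void *arg) {
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);              /* wait for both threads */
    pthread_join(t2, NULL);
    return 0;
}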

• [Figure: threads of a single process in an operating system]


Components of Threads in Operating System

• Threads in an operating system have the following three components:
• Stack Space
• Register Set
• Program Counter

• Threads offer several advantages in the operating
system, resulting in improved performance. Some
reasons why threads are necessary for the operating
system include:
• Threads use the same data and code, reducing
operational costs between threads.
• Creating and terminating threads is faster than creating
or terminating processes.
• Context switching in threads is faster than in processes.

Why Multithreading is needed in Operating Systems

• Multithreading divides a single process into multiple threads, instead of creating a new process, in order to achieve parallelism and improve performance. This approach provides several advantages:
• Resource sharing among threads, increased program responsiveness even if parts of the program are blocked, and improved economy compared to creating separate processes.

• By sharing resources within a single process, threads
can work more efficiently than processes, which can be
costly to create and manage.

Process vs Thread

[Table: process vs thread comparison]
Types of Threads in Operating System
• Threads in Operating System are of the following two
types.
1.User-level Threads in Operating System
2.Kernel Level Threads in Operating System

• User-level threads in Operating System
User-level threads are not recognized by the operating system; they are implemented and managed entirely in user space. If a user-level thread performs a blocking operation, the entire process is blocked. The kernel does not interact with user-level threads individually; it treats the process containing them as a single-threaded process. Implementing user-level threads is straightforward.

• Advantages of User-Level Threads in Operating
Systems
The following are some of the advantages of User-Level
Threads in Operating Systems.
• User-level threads are highly efficient and have fast
switching times that are comparable to procedure calls.
• They do not require intervention from the operating
system and offer great flexibility, making them
adaptable to the specific needs of an application.

• Disadvantages of User-Level Threads in Operating
Systems
User-Level Threads in Operating Systems also have the
following disadvantages.
• Since the operating system is not aware of user-level
threads, it cannot effectively schedule them.
• If a user-level thread performs a blocking operation, the
entire process will be blocked.
• User-level threads cannot fully utilize multi-core
processors, as only one thread can run on a single core
at any given time.
• Kernel-level Threads in Operating System
The operating system manages and supports kernel-
level threads. These threads are controlled by the
kernel, which provides more visibility and control over
thread execution. However, this increased control and
visibility come with a cost, including higher overhead
and potential scalability problems.

• Advantages of Kernel-level threads in Operating System
The Advantages of Kernel-Level Threads in Operating Systems
are given below.
• Kernel-level threads in Operating Systems are fully recognized
and managed by the kernel, which enables the scheduler to
handle them more efficiently.
• Since kernel-level threads are managed directly by the operating
system, they provide better performance compared to user-level
threads. The kernel can schedule them more efficiently, resulting
in better resource utilization and reduced overhead.
• If a kernel-level thread is blocked, the kernel can still schedule
another thread for execution.
• Disadvantages of Kernel-level threads in Operating
System
Kernel-Level Threads in Operating Systems have the following
disadvantages.
• Compared to user-level threads in Operating System, creating
and managing kernel-level threads is slower and less efficient.
Kernel-level threads also carry extra overhead: each requires a thread
control block, which contains information about the thread's state and
execution context.
• Creating and managing these control blocks can lead to
resource wastage and scheduling overhead, which can make
kernel-level threads less efficient in terms of system resources.
• Issues with Threading
• Multithreading also faces the following drawbacks.
• When multiple threads access shared resources or
perform operations that rely on the order of execution,
issues such as race conditions, deadlocks, and
synchronization problems may arise.
• Multithreading can also make debugging and testing
more difficult and lead to performance problems due to
overhead and competition for system resources.

There exist three established multithreading models describing the relationship between user and kernel threads:

• Many-to-one multithreading model
• One-to-one multithreading model
• Many-to-many multithreading model

Many-to-one multithreading model

• The many-to-one model maps many user-level threads to one kernel thread. This type of relationship facilitates an effective context-switching environment and is easily implemented even on a simple kernel with no thread support.
• The disadvantage of this model is that, since only one kernel-level thread is scheduled at any given time, it cannot take advantage of the hardware acceleration offered by multithreaded processors or multiprocessor systems. All thread management is done in user space, so if one thread blocks, the whole process blocks.
One-to-one multithreading model

• The one-to-one model maps a single user-level thread to a single kernel-level thread. This type of relationship facilitates running multiple threads in parallel. However, the benefit comes with a drawback: every new user thread requires creating a corresponding kernel thread, an overhead that can hinder the performance of the parent process. The Windows and Linux operating systems try to tackle this problem by limiting the growth of the thread count.

Many-to-many multithreading model

• In this model, there are several user-level threads and several kernel-level threads. The number of kernel threads created depends on the particular application; the developer can create threads at both levels, and the counts need not be equal. The many-to-many model is a compromise between the other two models. If a thread makes a blocking system call, the kernel can schedule another thread for execution, and the model avoids the complexity present in the previous models. Though it allows the creation of multiple kernel threads, true concurrency cannot be achieved, because the kernel can schedule only one process at a time.
Scheduler activations

• One technique for communication between the user-thread library and the kernel is known as scheduler activation.

• It works as follows: the kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto an available virtual processor. Moreover, the kernel must inform the application about certain events.

• This procedure is known as an upcall. Upcalls are
handled by the thread library with an upcall handler,
and upcall handlers must run on a virtual processor.
One event that triggers an upcall occurs when an
application thread is about to block.

• Scheduler activation involves transitioning a thread
from a blocked or waiting state to a runnable state
when the conditions it was waiting for are met. This
activation process typically occurs in response to events
such as I/O completion, timer expiration, or inter-
process communication signals.
