week_9

The document discusses the differences between concurrency and parallelism, emphasizing that concurrency involves managing overlapping tasks while parallelism focuses on executing tasks simultaneously using multiple processing units. It also covers thread synchronization techniques, including mutexes, semaphores, and condition variables, which are essential for preventing data corruption when multiple threads access shared resources. Additionally, it addresses memory management in operating systems, highlighting concepts such as memory allocation, fragmentation, and techniques like compaction and memory pooling to optimize memory usage.


Difference Between Concurrency and Parallelism

Managing the execution of many tasks so that they can overlap and advance simultaneously is called concurrency. Carrying out numerous tasks on different processing units to achieve maximum performance is called parallelism.

Concurrent and parallel programming are made possible through multithreading and multiprocessing techniques. Concurrency is enabled via multithreading, by running numerous threads within a single process, while parallelism is enabled via multiprocessing, by running many processes simultaneously on separate CPU cores.

Thread Synchronization Thread Synchronization


Programming for many threads requires careful Thread synchronization is necessary when multiple threads
consideration of thread synchronization. access shared resources or variables simultaneously.
Preventing conflicts and race conditions entails The primary goals of synchronization are:
coordinating the execution of several threads and ensuring Mutual Exclusion and
that shared resources are accessed and modified securely. Coordination
Threads can interfere with one another without adequate
synchronization, resulting in data corruption, inconsistent
results, or unexpected behavior.
Mutual Exclusion
Ensuring that only one thread can access a shared resource or a critical code section at a time. This prevents data corruption or inconsistent states caused by concurrent modifications.

Coordination
Allowing threads to communicate and coordinate their activities effectively. This includes tasks like signaling other threads when a condition is met, or waiting for a certain condition to be satisfied before proceeding.

Some commonly used techniques include locks (mutexes), semaphores, and condition variables.

Lock (Mutex)

Definition:
A lock, usually called a mutex (short for mutual exclusion), is a fundamental synchronization primitive that permits mutually exclusive access to a resource. It is used to control access to shared resources in a multithreaded environment: only one thread can hold the lock at a time, while other threads wait for it to be released. By allowing only one thread at a time to access a shared resource, it ensures that concurrent access by multiple threads doesn't lead to data corruption or inconsistency.

Usage:
Mutexes are typically used around critical sections of code, where shared resources are accessed or modified. Before entering a critical section, a thread locks the mutex, ensuring exclusive access to the resource. After completing its work in the critical section, the thread unlocks the mutex, allowing other threads to access the resource.

Operations:
Locking: A thread locks a mutex before entering a critical section using operations like pthread_mutex_lock() in POSIX threads or std::mutex::lock() in C++. If the mutex is already locked by another thread, the calling thread blocks until the mutex becomes available.
Unlocking: After completing its work in the critical section, the thread unlocks the mutex using operations like pthread_mutex_unlock() in POSIX threads or std::mutex::unlock() in C++. This allows other threads waiting on the mutex to acquire it and enter the critical section.
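To make these operations concrete, here is a minimal sketch in POSIX C (not from the slides; the shared counter, iteration count, and two-thread setup are assumptions for illustration). Two threads increment a shared counter, and the mutex guarantees each increment happens inside the critical section:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t counter_mutex = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;                      /* shared resource protected by counter_mutex */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_mutex);   /* enter the critical section */
        counter++;                            /* safe: only one thread at a time */
        pthread_mutex_unlock(&counter_mutex); /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);       /* always 200000 with the mutex held */
    return 0;
}

Without the lock/unlock pair, the two threads could interleave their read-modify-write steps and lose updates.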
Threads can be categorized broadly into two types: POSIX threads and Windows threads. The underlying logic and concepts of threading are the same regardless of the threading model or API used; however, the names of the methods, functions, and synchronization primitives differ between POSIX threads and Windows threads.

POSIX Threads (pthreads):
POSIX threads are standardized threads based on the POSIX standard. POSIX stands for "Portable Operating System Interface"; it is a family of standards that defines the interface between an operating system and application software. POSIX-compliant operating systems adhere to these standards, ensuring compatibility across different systems. POSIX threads are commonly used in Unix and Unix-like operating systems such as Linux, macOS, and various flavors of Unix. Functions and synchronization primitives are provided by the pthreads API, such as pthread_create(), mutexes, condition variables, etc.

Here's a comparison:

Creating Threads:
POSIX Threads:
    pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);
Windows Threads:
    CreateThread(LPSECURITY_ATTRIBUTES lpThreadAttributes, SIZE_T dwStackSize,
                 LPTHREAD_START_ROUTINE lpStartAddress, LPVOID lpParameter,
                 DWORD dwCreationFlags, LPDWORD lpThreadId);

Mutexes:
POSIX Threads:
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_lock(&mutex);
    pthread_mutex_unlock(&mutex);
Windows Threads:
    CRITICAL_SECTION cs;
    InitializeCriticalSection(&cs);
    EnterCriticalSection(&cs);
    LeaveCriticalSection(&cs);

Deadlock

Suppose we have two threads, thread_function1 and thread_function2, and two mutexes. Each thread locks one mutex and then attempts to lock the other. If both threads lock one mutex and then attempt to lock the other, they may end up waiting indefinitely for each other to release the mutex they need, resulting in a deadlock.

To prevent deadlocks, it is crucial to acquire mutexes in a consistent order across all threads. In the example above, if both threads always locked mutex1 before mutex2 (or vice versa), the deadlock would be avoided.
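A minimal sketch in POSIX C of the consistent-ordering fix (the thread and mutex names follow the slides; everything else is assumed for illustration). Because both threads acquire mutex1 before mutex2, the circular wait that causes a deadlock cannot arise:

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

/* Both threads lock mutex1 first, then mutex2: a consistent order across
 * all threads, so neither can hold one lock while waiting forever for the other. */
static void *thread_function1(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mutex1);
    pthread_mutex_lock(&mutex2);
    printf("thread 1 holds both mutexes\n");
    pthread_mutex_unlock(&mutex2);
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

static void *thread_function2(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mutex1);      /* same order as thread 1, NOT mutex2 first */
    pthread_mutex_lock(&mutex2);
    printf("thread 2 holds both mutexes\n");
    pthread_mutex_unlock(&mutex2);
    pthread_mutex_unlock(&mutex1);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, thread_function1, NULL);
    pthread_create(&t2, NULL, thread_function2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

If thread_function2 locked mutex2 first instead, each thread could grab one mutex and block forever waiting for the other.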
Semaphore

A semaphore is a synchronization object that maintains a count. It allows multiple threads to enter a critical section up to a specified limit. If the limit is reached, subsequent threads will be blocked until a thread releases the semaphore.

Consider a scenario where we have a shared resource (e.g., a database connection, a file, etc.), and we want to limit the number of threads that can access this resource simultaneously. We'll use a semaphore to achieve this.

We initialize a semaphore connection_sem with an initial value of MAX_CONNECTIONS, which represents the maximum number of connections allowed simultaneously.

Each thread, before accessing the shared resource (here, simply printing a message), must call sem_wait(&connection_sem) to decrement the semaphore. If the semaphore's value is greater than zero, the thread continues; otherwise, it blocks until a connection is available. After the thread finishes using the resource, it calls sem_post(&connection_sem) to increment the semaphore, indicating that it's done with the resource.
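The slides describe this example in prose without reproducing the code, so the following is a reconstruction in POSIX C (a sketch; MAX_CONNECTIONS = 3 and five worker threads are assumed values):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define MAX_CONNECTIONS 3                     /* assumed limit for the example */
#define NUM_THREADS     5

static sem_t connection_sem;

static void *use_connection(void *arg)
{
    long id = (long)arg;
    sem_wait(&connection_sem);                /* decrement; block if no connection is free */
    printf("thread %ld acquired a connection\n", id);
    sleep(1);                                 /* simulate using the shared resource */
    printf("thread %ld releasing the connection\n", id);
    sem_post(&connection_sem);                /* increment; wake one waiting thread */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    sem_init(&connection_sem, 0, MAX_CONNECTIONS);
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, use_connection, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    sem_destroy(&connection_sem);
    return 0;
}

At most three of the five threads hold a "connection" at any moment; the other two block in sem_wait() until one is released.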

Producer-Consumer Problem

It is a classic synchronization problem in computer science that involves two processes, a producer and a consumer, sharing a common, fixed-size buffer or queue. The producer's job is to generate data items and put them into the buffer, and the consumer's job is to take items out of the buffer and consume them.
The problem arises in scenarios where there are multiple producers and consumers operating concurrently, and it requires coordination between them to ensure that the producer does not produce items when the buffer is full and the consumer does not consume items when the buffer is empty. The main challenge is to prevent race conditions and ensure that producers and consumers operate safely and efficiently.

Shared Buffer:
There is a shared buffer or queue of fixed size where items are stored. The buffer can be implemented using an array, a linked list, or any other suitable data structure.

Producer:
The producer generates data items and adds them to the buffer. If the buffer is full, the producer waits until there is space available in the buffer. After producing an item and adding it to the buffer, the producer signals the consumer that an item is available.

Consumer:
The consumer takes items out of the buffer and consumes them. If the buffer is empty, the consumer waits until there are items available in the buffer. After consuming an item from the buffer, the consumer signals the producer that a slot in the buffer is available for a new item.

Condition Variables

Condition variables allow threads to wait for a specific condition to be met before proceeding. They provide a mechanism for threads to signal each other and coordinate their activities.

In the producer-consumer example, a producer thread produces items and adds them to a shared buffer, while a consumer thread consumes items from the buffer. Two condition variables, buffer_not_full and buffer_not_empty, synchronize the producer and consumer threads, ensuring that the buffer is not full before producing and not empty before consuming.
Producer Function:
It runs in an infinite loop, generating items and adding them to the buffer. It first acquires the condition for the buffer not being full (buffer_not_full). If the buffer is full (len(buffer) == MAX_BUFFER_SIZE), it waits until notified that the buffer is not full (buffer_not_full.wait()). Once space is available, it produces an item (here, a random number between 1 and 100) and adds it to the buffer. After adding an item, it notifies the consumer that the buffer is not empty (buffer_not_empty.notify()). It then simulates some processing time before the next iteration.

Condition Variables Condition Variables


Consumer Function: These condition variables (buffer_not_full
It also runs in an infinite loop, consuming items from the
buffer. and buffer_not_empty) synchronize the
It first acquires the condition for the buffer not being producer and consumer threads, ensuring
empty (buffer_not_empty). that the producer doesn't produce when the
If the buffer is empty (len(buffer) == 0), it waits until
notified that the buffer is not empty buffer is full and the consumer doesn't
(buffer_not_empty.wait()). consume when the buffer is empty.
Once an item is available, it consumes the first item from
the buffer. When the buffer state changes, the
After consuming an item, it notifies the producer that the appropriate condition variable notifies
buffer is not full (buffer_not_full.notify()). the waiting thread, allowing it to proceed
It then simulates some processing time before the next
iteration. with its task.
Memory Management in Operating System (OS)

Memory is an important part of the computer that is used to store data. Its management is critical to the computer system because the amount of main memory available in a computer system is very limited, and at any time many processes are competing for it.

Moreover, to increase performance, several processes are executed simultaneously. For this, we must keep several processes in main memory, so it is even more important to manage them effectively.

Memory Allocation: When a program is launched, the operating system allocates memory space for it. There are typically two main types of memory allocation:

Static Allocation: The memory allocation is done at compile time. Each program is given a fixed memory space, which cannot be changed during runtime.

Dynamic Allocation: Memory allocation is done at runtime. Programs request memory dynamically as they need it. Common techniques for dynamic allocation include stack allocation and heap allocation.
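As an illustration (not from the slides; the array sizes are arbitrary), here is a short C sketch contrasting the two: the global array's size is fixed at compile time, while the heap buffer is requested at runtime with malloc():

#include <stdio.h>
#include <stdlib.h>

static int static_table[256];              /* static allocation: size fixed at compile time */

int main(void)
{
    int n = 1000;                          /* size known only at runtime */
    int *dynamic_table = malloc(n * sizeof *dynamic_table);   /* dynamic (heap) allocation */
    if (dynamic_table == NULL)
        return 1;

    static_table[0] = 42;                  /* lives for the whole program run */
    dynamic_table[0] = 42;                 /* lives until free() is called */

    printf("%d %d\n", static_table[0], dynamic_table[0]);
    free(dynamic_table);                   /* dynamic memory must be released explicitly */
    return 0;
}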

Address Binding: Once memory is allocated to a program, the operating system needs to bind, or associate, the program's instructions and data with actual memory addresses. There are mainly three types of address binding:

Compile Time Binding: The addresses are generated by the compiler and are fixed before the program is executed.
Load Time Binding: Addresses are assigned to the program's variables and instructions when the program is loaded into memory for execution.
Execution Time Binding: Also known as dynamic binding; addresses are assigned to the program's instructions and data during runtime.

Memory Protection: Modern operating systems implement memory protection mechanisms to prevent one program from accessing or modifying the memory of another program. This protection is crucial for system stability and security.

Memory Sharing: Memory management facilitates memory sharing between processes, which allows multiple processes to access the same portion of memory. This is particularly useful for inter-process communication (IPC) and for efficient memory usage.
Virtual Memory: Virtual memory is a technique used to allow programs to use more memory than is physically available in the system. It provides a way to store inactive memory pages on disk temporarily, freeing up physical memory for other processes. When a program needs a page of memory that's not in physical RAM, the operating system swaps it in from disk, and vice versa. This illusion of a larger memory space improves system performance and allows for smoother multitasking.

Memory Fragmentation: Memory fragmentation occurs when memory is allocated and deallocated in a way that leaves small gaps of unusable memory between allocated blocks. There are two main types:
External Fragmentation: Free memory is broken into small pieces, making it challenging to allocate memory to larger programs, even when there's enough total free memory.
Internal Fragmentation: Memory blocks allocated to a process may be larger than what the process needs, leading to wasted memory within each block.

To manage fragmentation, operating systems use techniques like compaction, which rearranges memory to place all free memory together, and memory pooling, which preallocates memory in fixed-size blocks to reduce fragmentation.

Compaction:
Compaction is a technique used in memory management to reduce external fragmentation. External fragmentation occurs when free memory blocks are scattered throughout the memory space, making it difficult to allocate contiguous memory blocks for larger processes even if the total free memory is sufficient.

Here's how compaction works:
Identifying Fragmentation: The operating system constantly monitors memory usage and identifies areas of free memory scattered between allocated blocks.
Memory Relocation: When fragmentation is detected, the operating system relocates allocated memory blocks to consolidate free memory into a single contiguous block.
Updating Pointers: After relocating memory blocks, the operating system updates the pointers and references to the moved memory locations to reflect the new addresses.
Compacting: Once all memory blocks are relocated, the operating system compacts the memory space so that all free memory is contiguous. This reduces fragmentation and makes it easier to allocate larger memory blocks to processes.
Impact on Performance: Compaction can be a resource-intensive process, as it involves moving memory contents and updating references. It is typically performed during periods of low system activity to minimize disruption to running processes.

Memory Pooling:
Memory pooling, also known as object pooling, is a technique used to manage memory allocation by preallocating fixed-size blocks of memory from which individual allocations are made. This technique is commonly used in scenarios where memory allocation and deallocation occur frequently, such as in multi-threaded or real-time systems.
Here's how memory pooling works (a small sketch follows the list):
Preallocation: During initialization or when memory is needed, a pool of fixed-size memory blocks is preallocated from the operating system.
Object Allocation: When a program needs memory, it requests memory from the preallocated pool. Instead of requesting memory from the operating system each time, the program retrieves a block from the pool.
Reuse: Once an object is no longer needed, it is returned to the memory pool rather than being deallocated. This allows the memory to be reused for future allocations.
Efficiency: Memory pooling improves efficiency by reducing the overhead associated with dynamic memory allocation and deallocation. Since memory blocks are preallocated and reused, there's no need for costly memory allocation operations from the operating system.
Contiguous Allocation: Memory pooling ensures that memory allocations are contiguous within each block, which can improve cache efficiency and reduce memory fragmentation.
Customization: Memory pools can be customized based on the specific requirements of the application, such as allocating pools with different block sizes or using pools for specific types of objects.
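A minimal sketch of such a fixed-size-block pool in C (illustrative only; the block size, pool size, and function names are assumptions, not from the slides). Free blocks are kept on a singly linked free list threaded through the blocks themselves:

#include <stdio.h>

#define BLOCK_SIZE  64                             /* assumed fixed block size */
#define POOL_BLOCKS 128                            /* assumed number of blocks in the pool */

static unsigned char pool[POOL_BLOCKS][BLOCK_SIZE];   /* preallocated contiguous region */
static void *free_list = NULL;

static void pool_init(void)
{
    for (int i = 0; i < POOL_BLOCKS; i++) {        /* preallocation: link every block */
        *(void **)pool[i] = free_list;
        free_list = pool[i];
    }
}

static void *pool_alloc(void)
{
    if (free_list == NULL)
        return NULL;                               /* pool exhausted */
    void *block = free_list;                       /* object allocation: pop a block */
    free_list = *(void **)block;
    return block;
}

static void pool_free(void *block)
{
    *(void **)block = free_list;                   /* reuse: push the block back */
    free_list = block;
}

int main(void)
{
    pool_init();
    void *a = pool_alloc();
    void *b = pool_alloc();
    printf("allocated %p and %p from the pool\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}

Allocation and release are constant-time pointer operations, which is where the efficiency gain over repeated malloc()/free() calls comes from.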

The memory manager is used to keep track of the status of memory locations, whether free or allocated. It addresses primary memory by providing abstractions, so that software perceives that a large memory is allocated to it. The memory manager permits computers with a small amount of main memory to execute programs larger than the size or amount of available memory; it does this by moving information back and forth between primary memory and secondary memory using the concept of swapping.

The memory manager is also responsible for protecting the memory allocated to each process from being corrupted by another process; if this is not ensured, the system may exhibit unpredictable behavior. Memory managers should also enable sharing of memory space between processes. Thus, two programs can reside at the same memory location, although at different times.
Memory Management Techniques:
Memory management techniques can be classified into the following main categories:
Contiguous memory management schemes
Non-contiguous memory management schemes

Contiguous memory management schemes:
In a contiguous memory management scheme, each program occupies a single contiguous block of storage locations, i.e., a set of memory locations with consecutive addresses. When a process requests memory, the operating system finds a large enough block of contiguous memory and allocates it to the process.

Single contiguous memory management scheme:
The single contiguous memory management scheme is the simplest memory management scheme, used in the earliest generation of computer systems. In this scheme, the main memory is divided into two contiguous areas or partitions. The operating system resides permanently in one partition, and the user process is loaded into the other partition.

Advantages of the single contiguous memory management scheme:
Simple to implement.
Easy to manage and design.
Once a process is loaded, it is given the full processor's time, and no other process will interrupt it.

Disadvantages of the single contiguous memory management scheme:
Wastage of memory space due to unused memory, as the process is unlikely to use all of the available memory space.
The CPU remains idle while waiting for the disk to load the binary image into main memory.
A program cannot be executed if it is too large to fit into the available main memory space.
It does not support multiprogramming, i.e., it cannot handle multiple programs simultaneously.
Multiple Partitioning:
The single contiguous memory management scheme is inefficient, as it limits computers to executing only one program at a time, resulting in wasted memory space and CPU time. The problem of inefficient CPU use can be overcome with multiprogramming, which allows more than one program to run concurrently.

To switch between two processes, the operating system needs to load both processes into main memory. The operating system therefore divides the available main memory into multiple parts to load multiple processes into main memory. Thus multiple processes can reside in main memory simultaneously.

The multiple partitioning schemes can be of two types:
Fixed Partitioning
Dynamic Partitioning

Fixed Partitioning:
In a fixed partitioning (or static partitioning) memory management scheme, the main memory is divided into several fixed-sized partitions. These partitions can be of the same size or of different sizes.
Each partition can hold a single process. The number of partitions determines the degree of multiprogramming, i.e., the maximum number of processes in memory. These partitions are made at the time of system generation and remain fixed after that.

Advantages of the fixed partitioning memory management scheme:
Simple to implement.
Easy to manage and design.

Disadvantages of the fixed partitioning memory management scheme:
This scheme suffers from internal fragmentation.
The number of partitions is specified at the time of system generation.

Dynamic Partitioning:
Dynamic partitioning was designed to overcome the problems of the fixed partitioning scheme. In a dynamic partitioning scheme, each process occupies only as much memory as it requires when loaded for processing. Requesting processes are allocated memory until the entire physical memory is exhausted or the remaining space is insufficient to hold the requesting process. In this scheme the partitions are of variable size, and the number of partitions is not defined at system generation time.

Advantages of the dynamic partitioning memory management scheme:
Simple to implement.
Easy to manage and design.

Disadvantages of the dynamic partitioning memory management scheme:
This scheme suffers from external fragmentation.
Allocation and deallocation are more complex, since partitions are created and released at runtime rather than at system generation time.
Non-contiguous memory management schemes:
In a non-contiguous memory management scheme, the program is divided into different blocks and loaded at different portions of memory that need not necessarily be adjacent to one another. These schemes can be classified depending upon the size of the blocks and whether the blocks reside in main memory or not.

What is paging?
Paging is a technique that eliminates the requirement of contiguous allocation of main memory. In paging, the main memory is divided into fixed-size blocks of physical memory called frames. The size of a frame is kept the same as that of a page, to make maximum use of main memory and avoid external fragmentation.

Advantages of paging:
Pages reduce external fragmentation.
Simple to implement.
Memory efficient.
Due to the equal size of frames, swapping becomes very easy.
It is used for faster access to data.
What is Segmentation?
Segmentation is a technique that eliminates the requirement of contiguous allocation of main memory. In segmentation, the main memory is divided into variable-size blocks of physical memory called segments. It is based on the way the programmer structures their programs: with segmented memory allocation, each job is divided into several segments of different sizes, one for each module. Functions, subroutines, the stack, arrays, etc., are examples of such modules.

Summary
Process
Context Switching
Threads
Clarification About Stack and Heap
Programs, Processes, and Threads
Concurrency
Difference Between Concurrency and Parallelism
Thread Synchronization
Lock (Mutex)
Deadlock
Semaphore
Producer-Consumer Problem
Condition Variables
Memory Management in Operating System (OS)
Summary
Syllabus has been completed
Questions?

Next Week
Until the 1st of May, all students are required to upload their reports to the LMS and send their project folders by email via WeTransfer, and to submit a hard copy in class on the first lesson day in May (the 3rd). Treat it as if you had a final exam on the 3rd of May.
Between 3-24 May, final projects will be presented in class.

May your coding be as efficient as your memory management, your threads as synchronized as your teamwork, and your future as promising as the systems you'll build.

Best of luck in your careers!
