Operating System
Mutual Exclusion
Mutual exclusion, commonly enforced with a mutex (a mutual
exclusion lock), is a mechanism used to prevent concurrent access to
shared resources, ensuring that only one thread can enter a critical
section at a time.
It is implemented to avoid race conditions, which can occur when
multiple threads attempt to access and modify shared data
simultaneously.
Mutual exclusion ensures that a thread of execution does not enter
the critical section while another thread is already using it,
preventing data instability and ensuring data integrity.
In a mutex, when one thread is performing a write operation
(modifying the shared resource), other threads are not allowed to
access the same object until the writing thread has completed its
operation and released the object.
Mutexes are an essential tool in concurrency control, providing
synchronization and coordination among threads to prevent data
corruption, inconsistencies, and conflicts in shared resources.
There are four conditions that apply to mutual exclusion, listed
below:
Mutual exclusion must be ensured among different processes when
accessing shared resources: no two processes may be inside their
critical sections at the same time.
No assumptions should be made about the relative speeds of the
processes.
A process that is outside its critical section must not block another
process's access to the critical section.
When multiple processes want to enter their critical sections, each
must be granted access within a finite time, i.e. no process should be
kept waiting in an unbounded loop.
Mutual Exclusion with Busy Waiting
Mutual exclusion with busy waiting is a synchronization technique
used to implement mutual exclusion, where a process or thread
continuously checks for the availability of a lock or critical section.
Here are five points summarizing mutual exclusion with busy
waiting (a minimal spinlock sketch follows the list):
Mutual exclusion with busy waiting involves a process or thread
repeatedly checking the status of a lock or critical section in a loop
until it becomes available.
When a process wants to enter a critical section, it checks if the lock
is available. If it is not, the process enters a loop and continuously
checks until the lock becomes available.
Busy waiting can be resource-intensive since the process is actively
waiting, consuming CPU cycles even when it is not performing
useful work.
Mutual exclusion with busy waiting is often used in situations where
the expected wait time for the lock is short and it is more efficient
than other synchronization techniques.
Care should be taken when implementing mutual exclusion with
busy waiting to avoid issues like starvation (a process being unable
to access the critical section) or priority inversion (lower-priority
processes blocking higher-priority ones).
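As an illustration, the busy-waiting idea can be sketched in C11
with an atomic test-and-set flag. This is a minimal sketch, not a
production lock; the names spin_lock and spin_unlock are invented
for the example.

#include <stdatomic.h>

/* A minimal busy-waiting spinlock (illustrative sketch). */
atomic_flag lock_taken = ATOMIC_FLAG_INIT;

void spin_lock(void) {
    /* Loop until test-and-set observes the flag clear: the busy wait. */
    while (atomic_flag_test_and_set(&lock_taken))
        ;  /* consumes CPU cycles until the lock becomes available */
}

void spin_unlock(void) {
    atomic_flag_clear(&lock_taken);  /* release so another thread can enter */
}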
lock variables:
Lock variables, or mutexes, are used for mutual exclusion and
control access to shared resources in concurrent programming.
Threads or processes acquire a lock variable before entering a
critical section and release it afterward.
Lock variables support atomic operations, ensuring that only one
thread can acquire the lock at a time.
Blocking locks suspend threads attempting to acquire the lock until
it becomes available, while non-blocking locks allow threads to
check the lock's status without blocking (a blocking-lock sketch
follows this list).
Lock variables are essential for preventing race conditions and
maintaining data consistency in multithreaded or multiprocessing
environments.
Careful usage of lock variables is crucial to avoid issues like
deadlocks and lock contention, ensuring effective and efficient
concurrent programming.
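For contrast with busy waiting, a blocking lock can be sketched
with the POSIX pthread_mutex API; the worker function and shared
counter below are placeholders invented for the example.

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
long shared_counter = 0;  /* shared resource (placeholder) */

void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);    /* suspends the thread if the lock is held */
    shared_counter++;          /* critical section: one thread at a time */
    pthread_mutex_unlock(&m);  /* release so a waiting thread can proceed */
    return NULL;
}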
Peterson's algorithm (or Peterson's solution) is a
concurrent programming algorithm for mutual exclusion that
allows two or more processes to share a single-use resource
without conflict, using only shared memory for communication.
It was formulated by Gary L. Peterson in 1981.[1] While
Peterson's original formulation worked with only two
processes, the algorithm can be generalized for more than
two.[2]
The algorithm
The algorithm uses two variables: flag and turn. A flag[n] value
of true indicates that process n wants to enter the critical
section. Entrance to the critical section is granted for process
P0 if P1 does not want to enter its critical section or if P1 has
given priority to P0 by setting turn to 0.
Peterson's algorithm
bool flag[2] = {false, false};  /* flag[n] is true when process n wants to enter */
int turn;                       /* index of the process given priority */
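Building on these declarations, the entry and exit sections for
process self (0 or 1) can be sketched as below. Note this is a
simplified sketch: on modern hardware, flag and turn would need
atomic or volatile qualifiers and memory barriers, which are omitted
here for clarity.

void enter_region(int self) {      /* self is 0 or 1 */
    int other = 1 - self;          /* the other process */
    flag[self] = true;             /* announce intent to enter */
    turn = other;                  /* give priority to the other process */
    while (flag[other] && turn == other)
        ;  /* busy wait until the other process exits or yields priority */
}

void leave_region(int self) {
    flag[self] = false;            /* done; the other process may enter */
}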
File Structure
A file structure is a predefined format organized in a way that the
operating system understands. Each file has an exclusively defined
structure, which is based on its type.
There are three types of file structures in an OS:
A text file: It is a series of characters that is organized in lines.
An object file: It is a series of bytes that is organized into blocks.
A source file: It is a series of functions and processes.
File Access
File access is a process that determines the way files are accessed
and read into memory. Generally, an operating system supports a
single access method, though some operating systems support
multiple access methods.
Three file access methods are:
Sequential access
Direct random access
Index sequential access
Sequential Access
In this type of file access method, records are accessed in a certain
pre-defined sequence. In the sequential
access method, information stored in the file is also processed one
by one. Most compilers access files using
this access method.
Direct Random Access
The random access method is also called direct random access. This
method allows a record to be accessed directly. Each record has its
own address, through which it can be directly accessed for reading
and writing.
Index Sequential Access
This type of accessing method is based on simple sequential access.
In this access method, an index is built
for every file, with a direct pointer to different memory blocks. In
this method, the Index is searched
sequentially, and its pointer can access the file directly. Multiple
levels of indexing can be used to offer
greater efficiency in access. It also reduces the time needed to access
a single record.
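The idea can be illustrated with a minimal one-level index in C; the
record and index layouts here are invented for the example.

/* Hypothetical one-level index: each entry maps a key range to a block. */
struct index_entry { int first_key; long block_offset; };

/* Search the index sequentially, then jump directly to the data block. */
long locate_block(const struct index_entry idx[], int n, int key) {
    long offset = -1;  /* -1 means the key precedes the first indexed block */
    for (int i = 0; i < n && idx[i].first_key <= key; i++)
        offset = idx[i].block_offset;  /* last block whose range may hold key */
    return offset;
}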
File Attributes
A file has a name and data. Moreover, it also stores metadata such
as the file creation date and time, current size, last modified date,
etc. All this information constitutes the attributes of a file.
Here, are some important File attributes used in OS:
Name: It is the only information stored in a human-readable form.
Identifier: Every file is identified by a unique tag number within a
file system known as an identifier.
Location: Points to file location on device.
Type: This attribute is required for systems that support various
types of files.
Size: Attribute used to display the current file size.
Protection: This attribute assigns and controls the access rights of
reading, writing, and executing the file.
Time, date, and security: Used for protection, security, and
monitoring.
File Operations
A file is an abstract data type. The OS provides system calls to
create, write, read, reposition, delete, and truncate files; a POSIX
sketch of these calls follows the list below.
Creating a file – First, space in the file system must be found for
the file. Second, an entry for the new file must be made in the
directory.
Writing a file – To write a file, specify both the name of the file and
the information to be written to the
file. The system must keep a write pointer to the location in the file
where the next write is to take place.
Reading a file – To read from a file, the directory is searched for
the associated entry, and the system needs to keep a read pointer to
the location in the file where the next read is to take place. Because
a process is usually either reading from or writing to a file, the
current operation location can be kept as a per-process current-file-
position pointer.
Repositioning within a file – The directory is searched for the
appropriate entry, and the current file position pointer is
repositioned to a given value. This operation is also known as a file
seek.
Deleting a file – To delete a file, search the directory for the named
file. When found, release all file space and erase the directory entry.
Truncating a file – The user may want to erase the contents of a
file but keep its attributes. This function allows all attributes to
remain unchanged except for the file length.
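On a POSIX system, these operations map onto familiar system
calls. A minimal sketch, with an invented file name and error
handling omitted for brevity:

#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[5];
    int fd = open("example.txt", O_CREAT | O_RDWR, 0644);  /* create */
    write(fd, "hello", 5);     /* write: advances the write pointer */
    lseek(fd, 0, SEEK_SET);    /* reposition (file seek) to the start */
    read(fd, buf, 5);          /* read from the current position */
    ftruncate(fd, 0);          /* truncate: erase contents, keep attributes */
    close(fd);
    unlink("example.txt");     /* delete: remove the directory entry */
    return 0;
}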
Directory
A directory can be defined as a listing of the related files on the
disk. The directory may store some or all of the file attributes.
To get the benefit of different file systems on different operating
systems, a hard disk can be divided into a number of partitions of
different sizes. The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which, all the files
of the partition can be listed. A
directory entry is maintained for each file in the directory which
stores all the information related to that file.
Single-level directory:
Definition (5 points):
A directory structure where all files are contained within a single
directory.
It is the simplest form of directory organization.
All files have unique names within the directory.
Easy to implement and understand.
Limited scalability and potential for name collisions.
Advantages (4 points):
Easy implementation and support.
Faster searching for smaller files.
Simple file operations like creation, searching, deletion, and
updating.
Logical organization of files within a single directory.
Disadvantages (4 points):
Potential for name collisions when multiple users have files with the
same name.
Slow searching as the directory grows larger.
Inability to group similar files together or create subdirectories.
Limited scalability for managing a large number of files or users.
Two-level directory:
Definition (5 points):
A directory structure where each user has a separate user files
directory (UFD).
Users' UFDs have similar structures and list only their own files.
The system's master file directory (MFD) is searched to validate user
IDs.
Provides a solution to the problem of file name conflicts in a single-
level directory.
Users cannot create subdirectories, but different users can have
files with the same name.
Advantages (4 points):
Avoids name collisions among users' files.
Provides security by preventing access to other users' files.
Easy searching of files within a user's directory.
Each user has their own dedicated directory for better file
organization.
Disadvantages (4 points):
Inability to share files among users.
Users cannot create subdirectories within their directories.
Lack of scalability for managing a large number of files or users.
Potential for slower searching as the number of files within each
user's directory increases.
Tree Structure/Hierarchical Structure:
Definition (5 points):
A directory structure resembling an upside-down tree, with the root
directory at the top.
Users can create subdirectories and store files within their
directories.
Each user has their own directory, and they cannot modify the root
directory data.
Users do not have access to other users' directories, enhancing
privacy and security.
Allows for a more flexible and scalable organization of files and
subdirectories.
Advantages (4 points):
Allows the creation of subdirectories for better file organization.
Easier searching within the directory structure.
Facilitates file sorting and organization, including important and
unimportant files.
More scalable than single-level or two-level directories for
managing large amounts of data.
Disadvantages (4 points):
Inability to share files among users within the directory structure.
Complicated searching if there are many levels of subdirectories.
Users cannot modify the root directory data.
Files may need to be split across multiple directories if they exceed
the capacity of a single directory.
Contiguous Allocation:
Definition (8 points):
Contiguous allocation is a file system implementation method where
files are stored in consecutive blocks on a storage medium.
In this method, each file occupies a contiguous block of storage
space.
The starting location of a file's storage block and its length are
recorded in a file allocation table or similar data structure (a
minimal sketch appears after the points below).
The contiguous blocks are allocated when a file is created and
released when the file is deleted.
This method is commonly used in early file systems and some
embedded systems.
Functions (8 points):
File Allocation: Contiguous allocation determines and manages the
allocation of contiguous blocks of storage space for files.
File Creation: When a new file is created, contiguous blocks of
storage are reserved to store the file's data.
File Reading: The starting location and length of a file's contiguous
block allow for efficient retrieval of file data.
File Writing: Contiguous allocation ensures that file data is written
to consecutive blocks for optimal performance.
File Deletion: When a file is deleted, the contiguous blocks it
occupied are marked as available for reuse.
File System Navigation: Contiguous allocation allows for easy
navigation within a file system since file blocks are stored
consecutively.
File Access Efficiency: Contiguous allocation can provide efficient
access to large files since the blocks are stored contiguously.
File System Optimization: Contiguous allocation simplifies the file
system implementation and can be efficient for certain workloads.
Advantages (5 points):
Simple Implementation: Contiguous allocation is straightforward to
implement compared to other allocation methods.
Sequential Access: It enables efficient sequential access of files,
especially when accessing large files.
Minimal Fragmentation: Since files are stored contiguously, there is
minimal fragmentation of storage space.
Low Overhead: The allocation table or data structure used in
contiguous allocation has low overhead, resulting in efficient file
system operations.
Fast File Retrieval: With contiguous allocation, file retrieval can be
faster compared to other allocation methods since there is no need to
traverse multiple non-contiguous blocks.
Disadvantages (5 points):
External Fragmentation: Contiguous allocation can lead to external
fragmentation as files are created and deleted, leaving gaps of
unused space that may not be efficiently utilized.
File Size Limitations: The contiguous nature of allocation may
impose limitations on the maximum file size, as large contiguous
blocks may not be available.
Poor Space Utilization: If the available free space is not contiguous,
it can lead to wasted space since it may not be usable for storing
files.
Limited Flexibility: Contiguous allocation makes it challenging to
insert or expand files dynamically, as it requires finding contiguous
free space.
Disk Defragmentation: Over time, as files are created and deleted,
the file system may become fragmented, requiring periodic disk
defragmentation to optimize storage utilization and performance.
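As mentioned above, a directory entry under contiguous allocation
can be sketched as a (start, length) pair; the structure below is
illustrative, not any particular file system's on-disk format.

/* Illustrative contiguous-allocation directory entry. */
struct file_entry {
    char name[32];
    int  start_block;   /* first block of the file */
    int  length;        /* number of consecutive blocks */
};

/* Block i of the file is simply start_block + i: direct access is trivial. */
int block_of(const struct file_entry *f, int i) {
    return (i >= 0 && i < f->length) ? f->start_block + i : -1;  /* -1: out of range */
}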
Linked List Allocation
Definition (8 points):
Linked List Allocation is a file system implementation method that
uses a linked list data structure, known as the File Allocation Table
(FAT), to keep track of the storage blocks allocated to each file.
In this method, each file is divided into blocks of storage, and the
FAT maintains a chain of pointers that link these blocks together (a
toy traversal sketch follows the points below).
The FAT contains an entry for each storage block, indicating whether
it is allocated to a file or free.
The entries in the FAT store the addresses or pointers to the next
block in the file's chain, allowing for non-contiguous allocation of
storage space.
Linked List Allocation is commonly used in file systems like FAT16
and FAT32.
Functions (8 points):
File Allocation: Linked List Allocation uses the FAT to allocate and
manage storage blocks for files by updating the pointers in the
linked list.
File Creation: When a new file is created, the FAT allocates and
updates the linked list with the blocks required to store the file's
data.
File Reading: The FAT allows for efficient retrieval of file data by
following the linked list of blocks.
File Writing: Linked List Allocation enables efficient writing of file
data by appending new blocks and updating the pointers in the
linked list.
File Deletion: When a file is deleted, the FAT marks the
corresponding blocks as free, removing them from the linked list.
File System Navigation: Linked List Allocation allows for
navigation within the file system using the FAT to traverse the linked
lists associated with each file.
File Access Efficiency: Linked List Allocation can efficiently handle
files of varying sizes, as it allows non-contiguous allocation of
blocks.
File System Optimization: The use of linked lists in the FAT enables
flexibility in file allocation and storage management.
Advantages (5 points):
Efficient Space Utilization: Linked List Allocation minimizes
internal fragmentation, as it can allocate non-contiguous blocks,
utilizing storage space more efficiently.
Flexibility in File Size: This method allows for dynamic file size
changes, as files can be easily expanded by adding new blocks to the
linked list.
Simplified File Insertion and Deletion: Linked List Allocation
simplifies the process of inserting and deleting files, as it involves
updating the linked list pointers rather than shifting blocks.
Easy File System Maintenance: The use of the FAT in Linked List
Allocation makes file system maintenance tasks like file system
consistency checking and repair more manageable.
Support for Multiple File Systems: Linked List Allocation using FAT
is widely supported across various operating systems and platforms,
making it compatible and accessible.
Disadvantages (5 points):
Poor Sequential Access Performance: Retrieving data from a file
stored in non-contiguous blocks can result in slower sequential
access compared to contiguous allocation methods.
Increased Overhead: Linked List Allocation introduces additional
overhead in terms of storage space required for the FAT, which can
impact overall storage efficiency.
Limited Performance for Large Files: As the file size grows, the
overhead of traversing the linked list increases, leading to potential
performance degradation.
Fragmentation Over Time: Linked List Allocation can suffer from
external fragmentation as files are created, modified, and deleted,
leading to scattered free blocks within the storage space.
Complex File System Recovery: In case of FAT corruption or
failure, recovering the file system and reconstructing the linked lists
can be complex and time-consuming.
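The chain-of-pointers idea can be sketched with a toy FAT in C; the
FAT_FREE and FAT_EOF markers and the table size are invented
for illustration.

#define FAT_FREE (-1)   /* block not allocated (illustrative marker) */
#define FAT_EOF  (-2)   /* last block of a file (illustrative marker) */

int fat[1024];          /* fat[b] holds the next block in the file's chain */

/* Follow a file's chain from its first block, counting its blocks. */
int file_length_in_blocks(int first_block) {
    int count = 0;
    for (int b = first_block; b != FAT_EOF; b = fat[b])
        count++;        /* each hop follows one pointer in the linked list */
    return count;
}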
I/O device management
I/O device management in an operating system involves
recognizing and initializing connected devices, ensuring they are
ready for communication with the system.
It facilitates data transfer between I/O devices and the rest of the
system, handling synchronization, buffering, and error handling to
maintain data integrity.
The operating system handles interrupts generated by devices,
suspending and resuming programs to service the interrupt requests
and coordinate device activities.
Classification of I/O Devices:
Machine Readable or Block Devices:
Definition: Machine-readable or block devices are I/O devices that
read or write data in fixed-sized blocks or chunks.
Characteristics:
a. They operate at the block level and access data in fixed-sized blocks.
b. Examples include hard disk drives (HDDs), solid-state drives (SSDs), and USB flash drives.
c. They provide high-speed data transfer rates and are typically used for storage purposes.
d. Accessing data from block devices usually involves seeking and reading or writing entire blocks at once.
e. These devices are often managed by the operating system's file system.
User Readable or Character Devices:
Definition: User-readable or character devices are I/O devices that
read or write data character by character.
Characteristics:
a. They operate at the character level and handle data one character at a time.
b. Examples include keyboards, mice, joysticks, and barcode scanners.
c. They provide a way for users to input or output data interactively.
d. The operating system usually processes each character as it arrives or is sent to the device.
e. These devices often require device drivers to translate user input or output into a format that the operating system can understand.
Communication Devices:
Definition: Communication devices facilitate the transmission of
data between different systems or devices.
Characteristics:
a. They enable communication between computers or other devices over various communication channels.
b. Examples include network interface cards (NICs), modems, and wireless adapters.
c. They provide connectivity options such as Ethernet, Wi-Fi, Bluetooth, or cellular networks.
d. Communication devices allow for data exchange and networking capabilities.
e. They are often managed by the operating system's network stack and require appropriate drivers to function.
Controllers
Device drivers are software modules that can be plugged into an OS
to handle a particular device. The operating system takes help from
device drivers to handle all I/O devices.
The device controller works as an interface between a device and a
device driver. I/O units (keyboard, mouse, printer, etc.) typically
consist of a mechanical component and an electronic component,
where the electronic component is called the device controller.
There is always a device controller and a device driver for each
device to communicate with the operating system. A device
controller may be able to handle multiple devices. As an interface,
its main task is to convert a serial bit stream to a block of bytes and
to perform error correction as necessary.
Any device connected to the computer is connected by a plug and
socket, and the socket is connected to a device controller. In a
common model for connecting the CPU, memory, controllers, and
I/O devices, the CPU and the device controllers all use a common
bus for communication.
Aspect | Memory-Mapped I/O | I/O-Mapped I/O
Addressing | I/O devices are accessed through memory addresses. | I/O devices have separate addresses, distinct from memory.
Address Range | I/O devices share the same address space as main memory. | I/O devices have an address space distinct from memory.
Address Decoding | The memory controller decodes memory addresses for I/O operations. | A separate I/O controller decodes I/O addresses.
Instructions | Load and store instructions are used to access I/O devices. | Dedicated IN and OUT instructions are used for I/O access.
Bus Utilization | Utilizes the same bus for both memory and I/O operations. | Requires separate bus lines for I/O operations.
Device Interaction | I/O devices are accessed as if they were memory locations. | I/O devices have dedicated ports for data transfer.
Interrupt Handling | Interrupts can be handled using memory-mapped interrupt vectors. | Interrupt handling typically requires I/O ports.
Address Conflicts | Address conflicts may occur if I/O devices use overlapping memory addresses. | A separate address space reduces the chance of conflicts.
Flexibility | Offers flexibility by using memory instructions for I/O operations. | Provides dedicated instructions for I/O operations.
Compatibility with Legacy Systems | May be compatible with legacy systems that use memory-mapped I/O. | May be compatible with legacy systems that use I/O-mapped I/O.
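In C, a memory-mapped device register is conventionally accessed
through a volatile pointer; the register address and bit layout below
are invented for illustration.

#include <stdint.h>

/* Hypothetical memory-mapped status register at an invented address. */
#define UART_STATUS ((volatile uint32_t *)0x4000A000)
#define READY_BIT   0x1

int device_ready(void) {
    return (*UART_STATUS & READY_BIT) != 0;  /* read the register like memory */
}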
Aspect | Interrupt I/O | Polled I/O
Handling Mechanism | The I/O device interrupts the CPU when it requires attention. | The CPU continuously checks the status of I/O devices.
CPU Utilization | The CPU can perform other tasks while waiting for I/O completion. | The CPU is actively involved in checking I/O device status.
Responsiveness | Provides faster responsiveness, as the CPU can handle other tasks. | Response time depends on the polling frequency of the CPU.
Overhead | Lower CPU overhead, as the CPU is not continuously checking device status. | Higher CPU overhead due to constant polling.
Throughput | Higher throughput, as the CPU can perform other tasks concurrently. | Lower throughput, as the CPU is dedicated to polling.
Synchronization | The device and CPU are synchronized through interrupts. | Device and CPU synchronization is not as direct.
Complexity | Requires interrupt handling mechanisms in hardware and software. | Simpler to implement, as no interrupt handling is needed.
Latency | Lower latency, as the CPU can respond immediately to device events. | Higher latency, as the CPU must wait for the polling interval.
Scalability | Well-suited for handling multiple I/O devices simultaneously. | May become less efficient with multiple I/O devices.
Flexibility | Provides flexibility for handling diverse I/O device operations. | Limited flexibility in handling complex I/O operations.
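A polled-I/O read can be sketched as a loop over such a status
register; as before, the register addresses are invented for the
example.

#include <stdint.h>

/* Polled I/O sketch with invented register addresses. */
#define STATUS_REG ((volatile uint32_t *)0x4000A000)
#define DATA_REG   ((volatile uint32_t *)0x4000A004)
#define READY_BIT  0x1

uint8_t read_byte_polled(void) {
    while ((*STATUS_REG & READY_BIT) == 0)
        ;  /* busy wait: the CPU repeatedly checks the device status */
    return (uint8_t)(*DATA_REG);  /* device is ready; read the data */
}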
DMA (Direct Memory Access)
Direct memory access (DMA) is a method that allows an
input/output (I/O) device to send or receive data
directly to or from the main memory, bypassing the CPU to speed up
memory operations.
The process is managed by a chip known as a DMA controller
(DMAC).
Goals of I/O Software:
Uniform Naming:
Provide a consistent and abstracted naming system for I/O devices,
allowing users to access them without being aware of the underlying
hardware names.
Enable easy interchangeability of devices without modifying
application code.
Synchronous versus Asynchronous:
Support both synchronous and asynchronous I/O operations to
accommodate different requirements.
Asynchronous operations allow the CPU to continue processing
other tasks while waiting for I/O completion.
Provide interrupt-driven mechanisms to handle asynchronous I/O
and manage blocking states efficiently.
Device Independence:
Achieve device independence by providing a common interface that
allows programs to access various I/O devices.
Allow programs to interact with different devices without rewriting
the code for each specific device.
Enable portability and ease of use by abstracting hardware-specific
details from application code.
Buffering:
Implement buffering mechanisms to manage data transfer between
devices and memory efficiently.
Break data into smaller groups and transfer it to buffers for
examination and processing.
Optimize I/O performance by balancing the rate of data filling and
emptying from buffers.
Error Handling:
Incorporate error handling mechanisms to detect and handle errors
generated by I/O devices.
Allow lower-level components, such as controllers, to handle errors
and prevent them from reaching higher levels.
Provide robust error reporting and recovery strategies to ensure
reliable I/O operations.
Shareable and Non-Shareable Devices:
Support both shareable and non-shareable devices in the I/O
software.
Enable multiple processes to share devices like hard disks, while
ensuring exclusive access to non-shareable devices like printers.
Implement resource allocation and scheduling mechanisms to
manage device sharing efficiently.
Handling I/O Methods:
Programmed I/O:
Data transfer controlled by the CPU.
Continuously checks I/O devices for inputs.
Carries out I/O requests until no further input signal is received.
Examples: Printing a document where the request is sent through the
CPU to the printer.
Interrupt-Based I/O:
Data transfer activity controlled by interrupts.
CPU can continue processing other tasks until an input signal from
an I/O device is received.
Interrupts the CPU to handle the I/O request.
Example: Keyboard strokes generating an interrupt signal to the
CPU to process the keystroke.
Direct Memory Access (DMA) I/O:
Direct transfer of data between memory and I/O devices.
Data transfer occurs without CPU involvement, reducing CPU
overhead.
The DMA controller manages the transfer between memory and I/O
devices.
Example: Transferring pictures from a camera plugged into a USB
port, where the DMA handles the transfer instead of the CPU.
IO Software Layers
Input/output software is typically organized into the following four
layers:
Interrupt handlers
Device drivers
Device-independent input/output software
User-space input/output software
Disk Scheduling
Disk scheduling is done by operating systems to schedule I/O
requests arriving for the disk. Disk scheduling is also known as I/O
scheduling. Disk scheduling is important because:
Multiple I/O requests may arrive from different processes, and only
one I/O request can be served at a time by the disk controller. Thus,
other I/O requests need to wait in the waiting queue and need to be
scheduled.
Two or more requests may be far from each other, which can result
in greater disk arm movement.
Hard drives are one of the slowest parts of the computer system and
thus need to be accessed in an efficient manner.
FCFS
FCFS is the simplest of all the disk scheduling algorithms. In
FCFS, the requests are addressed in the order they arrive in the disk
queue. Let us understand this with the help of an example.
Example:
Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190),
and the current position of the read/write head is 50.
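Under FCFS, total head movement is the sum of the absolute
differences between consecutive head positions. A quick calculation
in C for the request order above:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int requests[] = {82, 170, 43, 140, 24, 16, 190};
    int n = sizeof(requests) / sizeof(requests[0]);
    int head = 50, total = 0;
    for (int i = 0; i < n; i++) {          /* serve requests in arrival order */
        total += abs(requests[i] - head);  /* distance moved for this request */
        head = requests[i];
    }
    printf("Total head movement: %d\n", total);  /* prints 642 */
    return 0;
}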