Document 2
The operating system can be implemented with the help of various structures. The structure of the OS
depends mainly on how the various standard components of the operating system are interconnected and
merged into the kernel. This article discusses a variety of operating system implementation structures and
explains how and why they function.
What is a System Structure for an Operating System?
A system structure for an operating system is like the blueprint of how an OS is organized and how its
different parts interact with each other. Because operating systems have complex structures, we want a
structure that is easy to understand so that we can adapt an operating system to meet our specific needs.
Similar to how we break down larger problems into smaller, more manageable subproblems, building an
operating system in pieces is simpler, and each of those pieces becomes a well-defined component of the
operating system. The strategy for integrating different operating system components within the kernel can
be thought of as an operating system structure. As will be discussed below, various types of structures are
used to implement operating systems.
Simple Structure
Simple structure operating systems do not have well-defined structures and are small, simple, and limited.
The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an
operating system. In MS-DOS, application programs are able to access the basic I/O routines. These types
of operating systems cause the entire system to crash if one of the user programs fails.
Advantages of Simple Structure
• It delivers better application performance because of the few interfaces between the application
program and the hardware.
• It is easy for kernel developers to develop such an operating system.
Types of Kernel
The kernel manages the system's resources and facilitates communication between hardware and software
components. Kernels come in different types; let's discuss each type along with its advantages and
disadvantages:
1. Monolithic Kernel
It is one of the types of kernel where all operating system services operate in kernel space. It has
dependencies between system components and a huge codebase, which makes it complex.
Example: Unix, Linux.
2. Micro Kernel
It is a type of kernel that takes a minimalist approach: only core services such as virtual memory and
thread scheduling run in kernel space, while the rest are put in user space. With fewer services in kernel
space, it is more stable. It is used in small OSes.
Example: QNX, MINIX, L4.
3. Hybrid Kernel
It is a combination of the monolithic kernel and the microkernel. It has the speed and design of a monolithic
kernel and the modularity and stability of a microkernel.
Example: Windows NT, macOS.
4. Exo Kernel
It is the type of kernel that follows the end-to-end principle. It has as few hardware abstractions as possible
and allocates physical resources directly to applications.
Example: Nemesis, ExOS.
5. Nano Kernel
It is a very small kernel in which the code running in privileged mode is kept minimal, offering hardware
abstraction but almost no services of its own.
Example: EROS etc.
Functions of Kernel
The kernel is responsible for various critical functions that ensure the smooth operation of the computer
system. These functions include:
1. Process Management
2. Memory Management
3. Device Management
4. File System Management
5. Resource Management
6. Security and Access Control
7. Inter-Process Communication
Working of Kernel
• A kernel loads first into memory when an operating system is loaded and remains in memory until
the operating system is shut down. It is responsible for various tasks such as disk management,
task management, and memory management.
• The kernel has a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entries point to entries in the region
table.
• The kernel loads an executable file into memory during the exec system call, as the sketch below
illustrates.
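As a hedged illustration of that exec step, the following minimal C sketch (POSIX, assuming a Unix-like
system; the ls command is just an example) creates a child with fork() and replaces its image via execlp(),
at which point the kernel loads the new executable into the child's memory:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();              /* the kernel adds a new entry to the process table */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {
        /* child: the kernel loads the ls executable into this process's memory */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(EXIT_FAILURE);
    }
    waitpid(pid, NULL, 0);           /* parent waits for the child to finish */
    return 0;
}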
Advantages
• Efficient Resource Management
• Process Management
• Hardware Abstraction
Disadvantages
• Limited Flexibility
• Dependency on Hardware
A process is divided into segments. The chunks that a program is divided into, which are not necessarily all
of the same size, are called segments. Segmentation gives the user's view of the process, which paging
does not provide. Here the user's view is mapped to physical memory.
• Base Address: It contains the starting physical address where the segments reside in memory.
• Segment Limit: It specifies the length of the segment.
Segmentation is crucial to efficient memory management within an operating system. Translating a
two-dimensional logical address (segment number, offset) into a one-dimensional physical address works
as follows: the segment number indexes the segment table, the offset is checked against that segment's
limit, and the segment's base address is added to the offset.
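To make this concrete, here is a minimal C sketch of that base/limit check; the segment-table contents and
addresses are made-up illustrative values, not any real architecture's layout:

#include <stdio.h>
#include <stdlib.h>

/* One segment-table entry: starting physical address (base) and length (limit). */
struct segment {
    unsigned int base;
    unsigned int limit;
};

/* Translate a two-dimensional logical address <seg, offset> into a
   one-dimensional physical address, trapping on an out-of-range offset. */
unsigned int translate(const struct segment *table, int seg, unsigned int offset) {
    if (offset >= table[seg].limit) {
        fprintf(stderr, "addressing error: offset %u >= limit %u\n",
                offset, table[seg].limit);
        exit(EXIT_FAILURE);
    }
    return table[seg].base + offset;
}

int main(void) {
    struct segment table[] = { {1400, 1000}, {6300, 400}, {4300, 1100} };
    printf("<2, 53>  -> %u\n", translate(table, 2, 53));   /* 4300 + 53 = 4353 */
    printf("<1, 399> -> %u\n", translate(table, 1, 399));  /* 6300 + 399 = 6699 */
    return 0;
}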
Fragmentation
Fragmentation arises because, as processes are loaded into and removed from memory, small free holes
are left behind. These holes cannot be assigned to new processes, either because they are not combined
or because they do not fulfil the memory requirements of a process. To achieve a good degree of
multiprogramming, we must reduce this waste of memory. Operating systems exhibit two types of
fragmentation:
1. Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a
process is larger than the requested size, leaving some unused space inside the block.
Example: Suppose fixed partitioning is used for memory allocation, with free blocks of 3 MB,
6 MB, and 7 MB in memory. Now a new process p4 of size 2 MB comes and demands a block of
memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any
other process. This is called internal fragmentation.
2. External fragmentation: In external fragmentation, we have free memory blocks, but we cannot
assign them to a process because the blocks are not contiguous. Example: Continuing the
example above, suppose three processes p1, p2, and p3 come with sizes 2 MB, 4 MB, and 7 MB
respectively, and are allocated the memory blocks of size 3 MB, 6 MB, and 7 MB. After allocation,
the blocks holding p1 and p2 have 1 MB and 2 MB left over. Suppose a new process p4 comes
and demands a 3 MB block of memory; that much free memory exists in total, but we cannot
assign it because it is not contiguous. This is called external fragmentation.
An Operating System is a type of system software. It basically manages all the resources of the computer. An
operating system acts as an interface between the software and different parts of the computer or the
computer hardware. The operating system is designed in such a way that it can manage the overall
resources and operations of the computer.
Operating System is a fully integrated set of specialized programs that handle all the operations of the
computer. It controls and monitors the execution of all other programs that reside in the computer, which
also includes application programs and other system software of the computer. Examples of Operating
Systems are Windows, Linux, Mac OS, etc.
An Operating System (OS) is a collection of software that manages computer hardware resources and
provides common services for computer programs. In this article, we will see the basics of operating
systems in detail. The main objectives of an operating system are:
• Convenient to use: One of the objectives is to make the computer system more convenient to
use in an efficient manner.
• User Friendly: To make the computer system more interactive with a more convenient interface
for the users.
• Easy Access: To provide easy access to users for using resources by acting as an intermediary
between the hardware and its users.
• Management of Resources: For managing the resources of a computer in a better and faster
way.
• Controls and Monitoring: By keeping track of who is using which resource, granting resource
requests, and mediating conflicting requests from different programs and users.
• Fair Sharing of Resources: Providing efficient and fair sharing of resources between the users
and programs.
Types of Operating Systems
• Batch Operating System: A Batch Operating System is a type of operating system that does not
interact with the computer directly. There is an operator who takes similar jobs having the same
requirements and groups them into batches.
• Time-sharing Operating System: Time-sharing Operating System is a type of operating system
that allows many users to share computer resources (maximum utilization of the resources).
• Distributed Operating System: Distributed Operating System is a type of operating system that
manages a group of different computers and makes them appear to be a single computer. These
operating systems are designed to operate on a network of computers. They allow multiple users
to access shared resources and communicate with each other over the network. Examples
include Microsoft Windows Server and various distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of operating system that runs
on a server and provides the capability to manage data, users, groups, security, applications, and
other networking functions.
• Real-time Operating System: Real-time Operating System is a type of operating system that
serves a real-time system and the time interval required to process and respond to inputs is very
small. These operating systems are designed to respond to events in real time. They are used in
applications that require quick and deterministic responses, such as embedded systems,
industrial control systems, and robotics.
• Multiprocessing Operating System: Multiprocessor Operating Systems use multiple CPUs
within a single computer system to boost performance. The CPUs are linked together so that a
job can be divided and executed more quickly.
• Single-User Operating Systems: Single-User Operating Systems are designed to support a
single user at a time. Examples include Microsoft Windows for personal computers and Apple
macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are designed to support multiple
users simultaneously. Examples include Linux and Unix.
• Embedded Operating Systems: Embedded Operating Systems are designed to run on devices
with limited resources, such as smartphones, wearable devices, and household appliances.
Examples include Google’s Android and Apple’s iOS.
A process is a program in execution. For example, when we write a program in C or C++ and compile it,
the compiler creates binary code. The original code and binary code are both programs. When we actually
run the binary code, it becomes a process. A process is an 'active' entity, in contrast to a program, which is
considered a 'passive' entity. A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes
are created).
In this article, we will discuss process management in detail, along with the different states of a process,
its advantages, disadvantages, etc.
• Text Section: Contains the compiled program code. The current activity of the process is
represented by the value of the Program Counter.
• Stack: The stack contains temporary data, such as function parameters, return addresses, and
local variables.
• Data Section: Contains the global variables.
• Heap Section: Memory dynamically allocated to the process during its run time.
Characteristics of a Process
A process has the following attributes: a process ID, a process state, a program counter and CPU
registers, CPU scheduling information, memory-management information, accounting information, and
I/O status information.
States of Process
A process is in one of the following states: New, Ready, Running, Waiting (blocked on an event or I/O),
or Terminated.
Process Operations
Process operations in an operating system refer to the various activities the OS performs to manage
processes. These operations include process creation, process scheduling, execution and killing the
process. Here are the key process operations
Process Creation
Process creation in an operating system (OS) is the act of generating a new process. This new process is
an instance of a program that can execute independently.
Scheduling
Once a process is ready to run, it enters the “ready queue.” The scheduler’s job is to pick a process from
this queue and start its execution.
Execution
Execution means the CPU starts working on the process. During this time, the process might:
• Move to a waiting queue if it needs to perform an I/O operation.
• Be preempted if a higher-priority process needs the CPU.
A running process is switched out when:
• A higher-priority process comes to the ready state.
• An interrupt occurs.
• A user-to-kernel mode switch happens (though this does not always force it).
• Preemptive CPU scheduling is used.
A mode switch occurs when the CPU privilege level is changed, for example when a system call is made
or a fault occurs. The kernel works in a more privileged mode than a standard user task. If a user process
wants to access things that are only accessible to the kernel, a mode switch must occur. The currently
executing process need not be changed during a mode switch. A mode switch typically must occur before
a process context switch can take place. Only the kernel can cause a context switch.
Each process is represented in the OS by a Process Control Block (PCB), which stores the following
information:
• Pointer: The stack pointer, which must be saved when the process switches from one state to
another in order to retain the current position of the process.
• Process state: It stores the respective state of the process.
• Process number: Every process is assigned a unique id known as process ID or PID which
stores the process identifier.
• Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.
• Registers: The PCB saves the process-specific CPU registers. When a process is running and
its time slice expires, the current values of these registers are stored in the PCB and the process
is swapped out. When the process is scheduled to run again, the register values are read from
the PCB and written back to the CPU registers. This is the main purpose of the registers in
the PCB.
• Memory limits: This field contains information for the memory-management system used by
the operating system, which may include page tables, segment tables, etc.
• List of Open files: This information includes the list of files opened for a process.
Common CPU scheduling algorithms include:
• First-Come, First-Served (FCFS): This is the simplest scheduling algorithm; processes are
executed on a first-come, first-served basis. FCFS is non-preemptive, which means that once a
process starts executing, it continues until it is finished or waiting for I/O (a worked sketch follows
this list).
• Shortest Job First (SJF): SJF selects the process with the shortest burst time, where the burst
time is the time a process takes to complete its execution; in its basic form it is non-preemptive.
SJF minimizes the average waiting time of processes.
• Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives each process a
fixed time quantum in turn. If a process does not complete its execution within that quantum, it is
preempted and added to the end of the ready queue. RR ensures fair distribution of CPU time to
all processes and avoids starvation.
• Priority Scheduling: This scheduling algorithm assigns priority to each process and the process
with the highest priority is executed first. Priority can be set based on process type, importance,
or resource requirements.
• Multilevel Queue: This scheduling algorithm divides the ready queue into several separate
queues, each with a different priority. Processes are assigned to a queue based on their priority
or type.
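To make the FCFS case concrete, here is a hedged C sketch with made-up burst times (all processes
assumed to arrive at time 0) that computes each process's waiting and turnaround time; it is a worked
example, not a real scheduler:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};                 /* made-up CPU burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0, total_tat = 0;

    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];            /* turnaround = waiting + burst */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, wait, tat);
        total_wait += wait;
        total_tat  += tat;
        wait += burst[i];                     /* the next process starts after this one */
    }
    printf("average waiting=%.2f average turnaround=%.2f\n",
           (double)total_wait / n, (double)total_tat / n);
    return 0;
}

Running the same processes in a different order (for example, shortest first) changes the average waiting
time, which is exactly the trade-off the algorithms above differ on.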
Inter Process Communication (IPC)
Processes can coordinate and interact with one another using a method called inter-process
communication (IPC). By facilitating cooperation between processes, it significantly improves the
effectiveness, modularity, and simplicity of software systems.
Types of Process
• Independent process
• Co-operating process
An independent process is not affected by the execution of other processes while a co-operating process
can be affected by other executing processes. Although one might think that independently running
processes execute most efficiently, in reality there are many situations where cooperation can increase
computational speed, convenience, and modularity. Inter-process communication (IPC) is a mechanism
that allows processes to communicate with each other and synchronize their actions. Communication
between these processes can be seen as a method of cooperation between them. Processes can
communicate with each other through both of the following:
Methods of IPC
• Shared Memory
• Message Passing
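As a hedged sketch of the first method, the following C program (POSIX, assuming a Unix-like system)
creates a shared-memory region with mmap and lets a parent and child communicate through it; a real
application would add proper synchronization instead of relying on wait():

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* An anonymous shared mapping stays visible to both parent and child after fork. */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                       /* child: the writer */
        strcpy(shared, "hello from the child");
        return 0;
    }
    wait(NULL);                              /* crude synchronization, for the demo only */
    printf("parent read: %s\n", shared);     /* parent: the reader */
    munmap(shared, 4096);
    return 0;
}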
Now we will start our discussion of communication between processes via message passing. In this
method, processes communicate with each other without using any kind of shared memory. If two
processes p1 and p2 want to communicate with each other, they proceed as follows:
• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
– send (message, destination) or send (message)
– receive (message, host) or receive (message)
The message size can be fixed or variable. A fixed size is easy for the OS designer but complicated for the
programmer, while a variable size is easy for the programmer but complicated for the OS designer. A
standard message has two parts: a header and a body. The header stores the message type, destination
ID, source ID, message length, and control information. The control information covers things such as what
to do if the buffer space runs out, the sequence number, and the priority. Messages are generally sent in
FIFO order.
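A hedged C sketch of these primitives, using a Unix pipe as the communication link: the pipe provides the
FIFO queue, write plays the role of send(message), and read plays the role of receive(message):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int link[2];                        /* link[0]: receive end, link[1]: send end */
    char buf[64];

    if (pipe(link) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                  /* child acts as the sender, p1 */
        const char *msg = "request #1";
        write(link[1], msg, strlen(msg) + 1);   /* send(message) */
        return 0;
    }
    read(link[0], buf, sizeof buf);     /* receive(message), delivered in FIFO order */
    printf("p2 received: %s\n", buf);   /* parent acts as the receiver, p2 */
    wait(NULL);
    return 0;
}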
Message Passing Through Communication Link
Now we will discuss the methods of implementing communication links. While implementing a link, some
questions need to be kept in mind, such as how the link is established, whether it can be associated with
more than two processes, and how much capacity it has.
A link has some capacity that determines the number of messages that can reside in it temporarily; for this,
every link has an associated queue, which can be of zero capacity, bounded capacity, or unbounded
capacity. With zero capacity, the sender waits until the receiver informs it that the message has been
received. In the non-zero-capacity cases, a process does not know after a send operation whether the
message has been received; for that, the sender must communicate with the receiver explicitly.
Implementation of the link depends on the situation; it can be either a direct communication link or an
indirect communication link.
Direct communication links are implemented when the processes use a specific process identifier for
the communication, but it is hard to identify the sender ahead of time. A print server, for example, cannot
know in advance which process will send it a request.
Indirect communication is done via a shared mailbox (port), which consists of a queue of messages.
The sender keeps messages in the mailbox and the receiver picks them up.
Synchronous and Asynchronous Message Passing
A blocked process is one that is waiting for some event, such as a resource becoming available or the
completion of an I/O operation. IPC is possible between processes on the same computer as well as
between processes running on different computers, i.e., in a networked/distributed system. In both cases,
a process may or may not be blocked while sending or receiving a message, so message passing may be
blocking or non-blocking. Blocking is considered synchronous: a blocking send blocks the sender until the
message is received by the receiver, and a blocking receive blocks the receiver until a message is
available. Non-blocking is considered asynchronous: a non-blocking send lets the sender send the
message and continue, and a non-blocking receive returns either a valid message or null. On careful
analysis, we can conclude that a non-blocking send is more natural for a sender, since it may need to send
messages to several different processes.
A thread is a single sequence stream within a process. Threads are also called lightweight processes, as
they possess some of the properties of processes. Each thread belongs to exactly one process. In an
operating system that supports multithreading, a process can consist of many threads. Threads run truly in
parallel only when there is more than one CPU; on a single CPU, threads must context-switch to share it.
Each thread has its own:
• Stack Space
• Register Set
• Program Counter
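The following hedged C sketch (POSIX threads, compile with -pthread) creates two threads inside one
process: each gets its own stack, register set, and program counter, while both share the process's global
data:

#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                    /* data section: shared by all threads */

void *worker(void *arg) {
    long id = (long)arg;                   /* local variable: lives on this thread's own stack */
    shared_counter++;                      /* unsynchronized update: acceptable only in a demo */
    printf("thread %ld running, counter=%d\n", id, shared_counter);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}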
Types of Thread in Operating System
Threads are of two types, user-level threads and kernel-level threads. These are described below.
A User-Level Thread is a type of thread that is not created using system calls; the kernel plays no part in
the management of user-level threads, so they can be easily implemented by the user. To the kernel, a
process that contains only user-level threads appears as a single-threaded process. Let's look at the
advantages and disadvantages of User-Level Threads.
The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors,
and critical sections are used.
In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important
aspect of modern operating systems, and it plays a crucial role in ensuring the correct and efficient
functioning of multi-process systems.
Any solution to the critical section problem must satisfy three requirements:
• Mutual Exclusion : If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
• Progress : If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter the critical section next, and the selection can
not be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s
solution, we have two shared variables:
• boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical section
• int turn: The process whose turn is to enter the critical section.
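Rendered directly in C, Peterson's solution for two processes (i and j = 1 - i) looks like the sketch below.
Note that this classic busy-waiting version is a teaching sketch: on modern hardware it additionally needs
atomic operations or memory barriers to be reliable.

#include <stdbool.h>

volatile bool flag[2] = {false, false};  /* flag[i]: process i wants to enter */
volatile int turn = 0;                   /* whose turn it is to enter */

void enter_critical_section(int i) {
    int j = 1 - i;                /* the other process */
    flag[i] = true;               /* declare interest */
    turn = j;                     /* politely yield the turn to the other process */
    while (flag[j] && turn == j)
        ;                         /* busy-wait while the other is interested and has the turn */
}

void exit_critical_section(int i) {
    flag[i] = false;              /* no longer interested */
}

int main(void) {
    /* In real use, two concurrent processes or threads would bracket their
       critical sections with enter_critical_section(0) / (1). */
    enter_critical_section(0);
    exit_critical_section(0);
    return 0;
}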
Deadlock and starvation are closely related problems; to compare them, we need to get acquainted with
both of these terms in a deeper way. So, let's get started.
Deadlock is a situation in which two or more processes require resources to complete their execution, but
those resources are held by other waiting processes, so the execution of none of them can complete.
Four Conditions That Must Hold for Deadlock to Occur
1. Mutual Exclusion: Only one process can access a resource at any point in time.
2. Hold and Wait: A process holds at least one resource while waiting for additional resources that
are held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from the process that holds them.
4. Circular Wait: A cyclic chain of processes exists in which every process holds at least one
resource and waits for a resource held by the next process in the chain.
Prevention of Deadlock
To prevent deadlock, it is necessary to remove at least one of the four conditions above. For instance:
1. Avoid Circular Wait: Adopt a proper hierarchy of resources so that they are always requested in
a fixed order (see the sketch after this list).
2. Release Resources: Make processes release their resources and signal if they cannot proceed
any further.
3. Preemption: Allow the system to take resources away from a process under specific
circumstances.
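A hedged C sketch (POSIX threads) of point 1: both threads acquire the two locks in one fixed order, so
the circular-wait condition can never form:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;  /* rank 1 in the hierarchy */
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;  /* rank 2 in the hierarchy */

void *worker(void *arg) {
    /* Every thread takes lock_a before lock_b, never the reverse,
       so no cycle of waiting threads can arise. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("thread %ld holds both resources\n", (long)arg);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}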
Advantages of Deadlock
• Detection and Prevention Research : Helps to create the detection and prevention algorithms,
which in turn contribute towards the overall evolution of computer science, especially in the
management of resources.
• Resource Allocation Efficiency Studies: Assists researchers in studying the utilization and
allocation of resources in operating systems, and thus encourages innovative designs.
• Ensures Process Fairness: When solved effectively, it maintains that no process takes more
than its fair share of the resources, hence achieving overall system fairness.
• Avoidance Strategy Improvements: Has kindled innovation in ways of minimizing deadlocks,
informing better resource-allocation approaches and enhancing the design of systems.
Disadvantages of Deadlock
• System Freeze: The most serious disadvantage is that the system may lock up, with no process
able to make progress because the resources they need are mutually exclusive and already held.
• Resource Wastage: Those resources trapped in the deadlocked processes cannot be utilized by
the other process, and this leads to low efficiency of the system.
• System Instability: Deadlocks can result in the unavailability of necessary services that can
result in the crashing of the system.
• High Complexity in Detection: To identify deadlocks is a computationally intensive process,
particularly when there are numerous processes and resources in the system.
What is Starvation?
Starvation is the problem that occurs when high-priority processes keep executing while low-priority
processes are blocked for an indefinite time. In a heavily loaded computer system, a steady stream of
higher-priority processes can prevent a low-priority process from ever getting the CPU. In starvation,
resources are continuously utilized by high-priority processes. The problem of starvation can be resolved
using aging, in which the priority of long-waiting processes is gradually increased.
Causes of Starvation
1. Priority Scheduling: If there are always higher-priority processes available, then the lower-
priority processes may never be allowed to run.
2. Resource Utilization: Resources are continually used by higher-priority processes, leaving
lower-priority processes starved.
Prevention of Starvation
Starvation can be prevented using the technique known as aging: the priority of a process increases the
longer it waits, which guarantees that even low-priority processes eventually run (see the sketch below).
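A hedged C sketch of aging with made-up numbers and a made-up reset policy: on every scheduling tick
the highest-priority process runs and its priority drops back, while every waiting process is aged upward,
so even the lowest-priority process soon gets a turn:

#include <stdio.h>

#define NPROC 3

int main(void) {
    int priority[NPROC] = {1, 5, 9};    /* made-up: higher value = higher priority */

    for (int tick = 0; tick < 10; tick++) {
        int best = 0;                   /* pick the highest-priority process */
        for (int i = 1; i < NPROC; i++)
            if (priority[i] > priority[best]) best = i;
        printf("tick %d: run P%d\n", tick, best + 1);

        priority[best] = 0;             /* demo policy: after running, go to the back */
        for (int i = 0; i < NPROC; i++)
            if (i != best) priority[i]++;   /* aging: waiting processes gain priority */
    }
    return 0;
}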
Advantages of Starvation
• Prioritizes Critical Tasks: High-priority tasks run immediately, which is valuable in, for instance,
real-time systems where some tasks have tight deadlines.
• Efficiency for High-Priority Processes: Helps guarantee effective utilization of resources by
critical, time-sensitive processes, enhancing performance in many systems.
• Optimizes Resource Usage for Priority Processes: Resources are constantly reallocated to
where they are needed most, i.e., kept well aligned with high-priority jobs.
• Simplicity in Scheduling: Starvation is a common result of simple priority scheduling, which is
comparatively easy to implement compared with other algorithms.
Disadvantages of Starvation
• Process Delays: Some less important processes may never run, or run very late, which can
translate into very long wait times or even system failure.
• Unfair Resource Distribution: Resource allocation becomes unfair; for example, I/O-bound
processes may starve and never get their turn.
• System Inefficiency: In the long run, system efficiency may suffer, as less important tasks pile
up waiting for the CPU.
• Potential for Resource Starvation: Essential yet lower-priority processes may never execute,
potentially resulting in poor service and degraded system health.
The term memory can be defined as a collection of data in a specific format. It is used to store instructions
and process data. The memory comprises a large array or group of words or bytes, each with its own
location. The primary purpose of a computer system is to execute programs. These programs, along with
the information they access, should be in the main memory during execution. The CPU fetches instructions
from memory according to the value of the program counter.
• Static Loading: Static loading loads the entire program into memory at a fixed address, which
requires more memory space.
• Dynamic Loading: For a process to execute, the entire program and all data of the process must
be in physical memory, so the size of a process is limited by the size of physical memory. To
gain proper memory utilization, dynamic loading is used: a routine is not loaded until it is called,
and all routines reside on disk in a relocatable load format. One advantage of dynamic loading is
that an unused routine is never loaded. Dynamic loading is useful when large amounts of code
are needed only to handle infrequently occurring cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more object files generated
by a compiler and combines them into a single executable file.
• Static Linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only
static linking, in which system language libraries are treated like any other object module.
• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic
linking, “Stub” is included for each appropriate library routine reference. A stub is a small piece of
code. When the stub is executed, it checks whether the needed routine is already in memory or
not. If not available then the program loads the routine into memory.
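On Unix-like systems, the stub idea can be seen in explicit run-time loading with dlopen/dlsym, which are
standard POSIX calls; the hedged sketch below loads the math library at run time and looks up cos (the
library name libm.so.6 assumes a typical Linux system; link with -ldl on older glibc):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the shared library only when it is actually needed. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    /* Look up the routine's address, much as a resolved stub would. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(handle); return 1; }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}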
Memory Management with Monoprogramming (Without Swapping)
This is the simplest memory management approach: memory is divided into two sections, one for the
operating system and one for the user program.
Memory Management with Fixed Partitions
• A memory partition scheme with a fixed number of partitions was introduced to support
multiprogramming. This scheme is based on contiguous allocation.
• Each partition is a block of contiguous memory.
• Memory is partitioned into a fixed number of partitions.
• Each partition is of fixed size.
An address generated by the CPU is commonly referred to as a logical address, while the address seen by
the memory unit is known as the physical address. A logical address can be mapped to a physical address
by hardware with the help of a base register; this is known as dynamic relocation of memory references.
Memory Allocation
To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest
methods is to divide memory into several fixed-sized partitions, where each partition contains exactly one
process. Thus, the degree of multiprogramming is bounded by the number of partitions.
• Fixed partition allocation: In this method, a process is selected from the input queue and
loaded into a free partition. When the process terminates, the partition becomes available for
other processes.
• Variable partition allocation: In this method, the operating system maintains a table that
indicates which parts of memory are available and which are occupied by processes. Initially, all
memory is available for user processes and is considered one large block of available memory,
known as a "hole". When a process arrives and needs memory, we search for a hole that is large
enough to store this process. If one is found, we allocate only as much memory as the process
needs, keeping the rest available to satisfy future requests. While allocating memory, the dynamic
storage allocation problem arises, which concerns how to satisfy a request of size n from a list of
free holes. There are several solutions to this problem:
First Fit
In first fit, the first free hole that is large enough for the process is allocated. For example, a 40 KB block
may be the first available free hole that can store process A (size 25 KB), because the blocks before it are
too small.
Best Fit
In best fit, the smallest hole that is big enough for the process is allocated. For this, we search the entire
list, unless the list is ordered by size. In the same example, we traverse the complete list and find that a
25 KB hole, the last one, is the best-suited hole for process A (size 25 KB). With this method, memory
utilization is maximized compared to the other allocation techniques.
Worst Fit
In worst fit, the largest available hole is allocated to the process. This method produces the largest leftover
hole, as the sketch below illustrates.
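The three strategies can be compared side by side with a hedged C sketch over a made-up list of free
holes; each function returns the index of the chosen hole, or -1 if nothing fits:

#include <stdio.h>

int first_fit(const int holes[], int n, int need) {
    for (int i = 0; i < n; i++)
        if (holes[i] >= need) return i;        /* first hole that is big enough */
    return -1;
}

int best_fit(const int holes[], int n, int need) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need && (best == -1 || holes[i] < holes[best]))
            best = i;                          /* smallest hole that still fits */
    return best;
}

int worst_fit(const int holes[], int n, int need) {
    int worst = -1;
    for (int i = 0; i < n; i++)
        if (holes[i] >= need && (worst == -1 || holes[i] > holes[worst]))
            worst = i;                         /* largest hole: biggest leftover */
    return worst;
}

int main(void) {
    int holes[] = {40, 10, 30, 60};            /* made-up free hole sizes, in KB */
    int n = sizeof holes / sizeof holes[0];
    printf("first fit for 25 KB: hole %d\n", first_fit(holes, n, 25));  /* 0 (40 KB) */
    printf("best fit  for 25 KB: hole %d\n", best_fit(holes, n, 25));   /* 2 (30 KB) */
    printf("worst fit for 25 KB: hole %d\n", worst_fit(holes, n, 25));  /* 3 (60 KB) */
    return 0;
}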
Swapping
A process must reside in main memory to be executed. Swapping is the act of temporarily moving a
process from main memory to secondary memory, which is slow compared to main memory, and later
bringing it back. Swapping allows more processes to be run than can fit into memory at one time. The
major part of swap time is transfer time, and the total transfer time is directly proportional to the amount of
memory swapped. Swapping is also known as roll out, roll in, because if a higher-priority process arrives
and wants service, the memory manager can swap out a lower-priority process and then load and execute
the higher-priority process. After the higher-priority work finishes, the lower-priority process is swapped
back into memory and continues execution.
Paging
Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical
memory. This scheme permits the physical address space of a process to be non-contiguous.
• Logical Address or Virtual Address (represented in bits): An address generated by the CPU.
• Logical Address Space or Virtual Address Space (represented in words or bytes): The set
of all logical addresses generated by a program.
• Physical Address (represented in bits): An address actually available on a memory unit.
• Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses.
Example:
• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words, then Logical Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words, then Physical Address = log2(2^24) = 24 bits
The mapping from virtual to physical address is done by the memory management unit (MMU) which is a
hardware device and this mapping is known as the paging technique.
• The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
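Because page size equals frame size, translating a logical address reduces to splitting it into a page
number (high-order bits) and an offset (low-order bits). The hedged C sketch below does this for a made-up
4 KB page size and a tiny illustrative page table:

#include <stdio.h>

#define PAGE_SIZE 4096u                        /* made-up: 4 KB pages, 12 offset bits */

int main(void) {
    unsigned int page_table[] = {5, 9, 7, 3};  /* page -> frame, illustrative only */
    unsigned int logical = 0x2ABC;             /* a made-up logical address */

    unsigned int page     = logical / PAGE_SIZE;   /* page number: high-order bits */
    unsigned int offset   = logical % PAGE_SIZE;   /* offset: low-order bits */
    unsigned int frame    = page_table[page];      /* page-table lookup */
    unsigned int physical = frame * PAGE_SIZE + offset;

    printf("logical 0x%X -> page %u, offset 0x%X -> physical 0x%X\n",
           logical, page, offset, physical);
    return 0;
}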
What is Thrashing?
In computer science, thrashing is the poor performance of a virtual memory (or paging) system that occurs
when the same pages are loaded repeatedly because there is not enough main memory to keep them
resident. Depending on the configuration and algorithms, the actual throughput of a system can degrade
by multiple orders of magnitude.
To know more clearly about thrashing, first, we need to know about page fault and swapping.
• Page fault: We know every program is divided into pages. A page fault occurs when a program
attempts to access data or code that is in its address space but not currently located in system
RAM.
• Swapping: Whenever a page fault happens, the operating system will try to fetch that page from
secondary memory and try to swap it with one of the pages in RAM. This process is called swapping.
A computer file is defined as a medium used for saving and managing data in the computer system. The
data stored in the computer system is completely in digital format, although there can be various types of
files that help us to store the data.
• FAT (File Allocation Table): An older file system used by older versions of Windows and other
operating systems.
• NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.
• ext (Extended File System): A file system commonly used on Linux and Unix-based operating
systems.
• HFS (Hierarchical File System): A file system used by macOS.
• APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.
File Directories
The collection of files is a file directory. The directory contains information about the files, including
attributes, location, and ownership. Much of this information, especially that concerned with storage, is
managed by the operating system. The directory is itself a file, accessible by various file-management
routines.
• Name
• Type
• Address
• Current length
• Maximum length
• Date last accessed
• Date last updated
• Owner id
• Protection information
The operations performed on a directory are: searching for a file, creating a file, deleting a file, listing the
directory, renaming a file, and traversing the file system.
Single-Level Directory
In this, all files are contained in the same directory, which leads to two limitations:
• Naming Problem: Users cannot have the same name for two files.
• Grouping Problem: Users cannot group files according to their needs.
Two-Level Directory
In this, a separate directory is maintained for each user.
• Path Name: Due to the two levels, every file has a path name used to locate it.
• Different users can now have files with the same name.
• Searching is efficient in this method.
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is grouping capability.
We have absolute or relative path name for a file.
Contiguous Allocation
A single continuous set of blocks is allocated to a file at the time of file creation. Thus, this is a pre-allocation
strategy, using variable size portions. The file allocation table needs just a single entry for each file, showing
the starting block and the length of the file. This method is best from the point of view of the individual
sequential file. Multiple blocks can be read in at a time to improve I/O performance for sequential processing.
It is also easy to retrieve a single block.
Linked Allocation (Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain. Again
the file table needs just a single entry for each file, showing the starting block and the length of the file.
Although pre-allocation is possible, it is more common simply to allocate blocks as needed. Any free block
can be added to the chain. The blocks need not be contiguous. An increase in file size is always possible
if a free disk block is available, as the sketch below illustrates.
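A hedged C sketch of this chaining using a FAT-style next-block table (all block numbers are made up):
the file table stores only the starting block, and each table entry points to the next block in the chain, with
-1 marking end of file:

#include <stdio.h>

#define NBLOCKS   16
#define END_BLOCK (-1)

int main(void) {
    int next[NBLOCKS];                 /* next[b] = block following b in its file's chain */
    for (int i = 0; i < NBLOCKS; i++) next[i] = END_BLOCK;

    /* A file occupying blocks 9 -> 3 -> 12; the blocks need not be contiguous. */
    int start = 9;
    next[9]  = 3;
    next[3]  = 12;
    next[12] = END_BLOCK;

    /* Reading the file sequentially means following the chain from the start block. */
    for (int b = start; b != END_BLOCK; b = next[b])
        printf("read block %d\n", b);
    return 0;
}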
Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation
table contains a separate one-level index for each file: The index has one entry for each block allocated to
the file. The allocation may be on the basis of fixed-size blocks or variable-sized blocks. Allocation by blocks
eliminates external fragmentation, whereas allocation by variable-size blocks improves locality.
Disk Formatting
Low-level (physical) formatting divides the disk into sectors before storing data, so that the disk controller
can read and write. Each sector typically holds a header, a data area (usually 512 bytes), and a trailer with
an error-correcting code (ECC). To use a disk to hold files, the operating system still needs to record its
own data structures on the disk:
1. Partition the disk into one or more groups of cylinders, each treated as a logical disk.
2. Perform logical formatting, or "create a file system": the OS stores the initial file-system data
structures, including maps of free and allocated space, on the disk.
For efficiency, most file systems group blocks into clusters. Disk I/O runs in blocks; file I/O runs in clusters.
Boot block:
• The bootstrap program is required for a computer to start running when it is powered up or
rebooted. It initializes all components of the system, from CPU registers to device controllers and
the contents of main memory, then locates the OS kernel on disk, loads it into memory, and
jumps to its initial address to start operating-system execution.
• The bootstrap is stored in Read-Only Memory (ROM), because ROM needs no initialization and
sits at a fixed location from which the processor can begin executing when powered up or reset.
Because ROM is read-only, it also cannot be infected by a computer virus.
• The difficulty is that changing the bootstrap code requires changing the ROM hardware chips.
Therefore, most systems store only a small bootstrap loader program in the boot ROM, whose
job is to bring the full bootstrap program in from disk; a modified version of the full bootstrap can
then simply be written onto the disk.
• The full bootstrap program is stored in the "boot blocks" of the disk.
• A disk that has a boot partition is called a boot disk or system disk.
Bad Blocks: Disk sectors sometimes become defective; blocks that can no longer be read or written
reliably are called bad blocks, and controllers typically handle them by remapping them to spare sectors,
a scheme known as sector sparing.
More broadly, disk management by the operating system involves the following tasks:
1. Partitioning: This involves dividing a single physical disk into multiple logical partitions. Each
partition can be treated as a separate storage device, allowing for better organization and
management of data.
2. Formatting: This involves preparing a disk for use by creating a file system on it. This process
typically erases all existing data on the disk.
3. File system management: This involves managing the file systems used by the operating system
to store and access data on the disk. Different file systems have different features and
performance characteristics.
4. Disk space allocation: This involves allocating space on the disk for storing files and directories.
Some common methods of allocation include contiguous allocation, linked allocation, and indexed
allocation.
5. Disk defragmentation: Over time, as files are created and deleted, the data on a disk can become
fragmented, meaning that it is scattered across the disk. Disk defragmentation involves
rearranging the data on the disk to improve performance.
The process of implementing, operating, and maintaining a device through an operating system is called
device management. When we use computers, we have various devices connected to the system, such as
a mouse, keyboard, scanner, printer, and pen drives. All of these are devices, and the operating system
acts as an interface that allows the users to communicate with them. An operating system is responsible
for successfully establishing the connection between these devices and the system, and it uses the
concept of drivers to do so.
Usually, systems are hardware or physical devices like computers, laptops, servers, cell phones, etc.
Additionally, they might be virtual, like virtual switches or virtual machines. A program may require a variety
of computer resources (devices) to run to completion. It is the operating system's responsibility to allocate
resources wisely, and the operating system alone is in charge of determining whether a resource is
available. It deals not only with device allocation but also with deallocation, which means that a device or
resource must be released by a process once its use is over.
• Block Device: It stores information in fixed-size blocks, each one with its own address. Example:
disks.
• Character Device: It delivers or accepts a stream of characters; the individual characters are not
addressable. Examples: printers, keyboards, etc.
• Network Device: It is used for transmitting data packets.
Features of Device Management in Operating System
• The operating system is responsible for managing device communication through the devices'
respective drivers.
• The operating system keeps track of all devices by using a program known as an input-output
controller.
• It decides which process to assign the CPU to and for how long.
• The OS is responsible for fulfilling the requests of devices to access a process.
• It connects the devices to various programs in an efficient way, without errors.
• It deallocates devices when they are not in use.
Types of Devices
1. Dedicated Device
Certain devices are assigned to only one task at a time in device management until that task releases them.
Plotters, printers, tape drives, and other similar devices require this kind of allocation method because
sharing them with numerous users at the same time will be inconvenient. The drawback of these devices
is the inefficiency that results from assigning the device to a single user throughout the entirety of the task
execution process, even in cases when the device is not utilized exclusively.
2. Shared Device
There are numerous processes that these devices could be assigned to. Disk-DASD could be shared
concurrently by many processes by interleaving their requests. All issues must be resolved by pre-
established policies, and the Device Manager closely monitors the interleaving.
3. Virtual Device
Virtual devices are dedicated devices that have been converted into shared devices, making them a hybrid
of the two types. For instance, a spooling program that routes all print requests to a disk can turn a printer
into a shared device. A print job is routed to the disk rather than delivered straight to the printer, and only
once it is ready, with all the necessary formatting and sequencing, is it sent to the printer. This method can
increase usability and performance by turning a single printer into a number of virtual printers.
Device Tracking
The operating system keeps track of all devices by using a program known as the input/output controller.
Apart from enabling communication through the drivers, the operating system is also responsible for
keeping track of all the devices connected to the system. If a device requests a process that is currently
being executed by the CPU, the operating system must signal the CPU to release that process and move
on to the next process from main memory, so that the device's request can be fulfilled. That is why the
operating system continuously checks the status of all devices, using the specialized program known as
the input/output controller.
Process Assignment
The operating system decides which process to assign to the CPU and for how long: it selects an
appropriate process from main memory and sets up the time for which that process will execute on the
CPU. The operating system is also responsible for fulfilling the requests of devices to access a process. If
the printer requests a process that is currently being executed by the CPU, it is the responsibility of the
operating system to fulfill that request: it tells the CPU to release that process so that it can be assigned to
the printer.
Connection
The operating system connects the devices to various programs in an efficient way and without errors. We
access these devices through software, because we cannot directly access the keyboard, mouse, printers,
scanners, etc. The operating system helps us establish an efficient, error-free connection with these
devices via various software applications.
Device Allocation
Device allocation refers to the process of assigning specific devices to processes or users. It ensures that
each process or user has exclusive access to the required devices or shares them efficiently without
interference.
Device Deallocation
The operating system deallocates devices when they are no longer in use. While drivers and devices are in
use, they occupy space in memory, so it is the operating system's responsibility to continuously check
which devices are in use and which are not, and to release a device once it is no longer being used.
What is OpenMP?
OpenMP is a set of compiler directives as well as an API for programs written in C, C++, or FORTRAN that
provides support for parallel programming in shared-memory environments. OpenMP identifies parallel
regions as blocks of code that may run in parallel. Application developers insert compiler directives into
their code at parallel regions, and these directives instruct the OpenMP run-time library to execute the
region in parallel. The following C program illustrates a compiler directive above the parallel region
containing the printf() statement:
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    /* sequential code */

    /* a #pragma ends at the newline, so the parallel block must open on the next line */
    #pragma omp parallel
    {
        printf("I am a parallel region.\n");
    }

    /* sequential code */
    return 0;
}
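Compiled with an OpenMP-aware compiler (for example gcc -fopenmp demo.c, where demo.c is just a
placeholder file name), the run-time library executes the printf() once per thread in the team, typically one
thread per available core.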