Operating Systems Structures

The operating system can be implemented with the help of various structures. The structure of the OS
depends mainly on how the various standard components of the operating system are interconnected and
merge into the kernel. This article discusses a variety of operating system implementation structures and
explains how and why they function.
What is a System Structure for an Operating System?
A system structure for an operating system is like the blueprint of how an OS is organized and how its
different parts interact with each other. Because operating systems have complex structures, we want a
structure that is easy to understand so that we can adapt an operating system to meet our specific needs.
Similar to how we break down larger problems into smaller, more manageable subproblems, it is simpler to build an operating system in pieces, where each piece is a well-defined component of the system. The strategy for integrating these different components within the kernel can be thought of as an operating system structure. As discussed below, various types of structures are used to implement operating
systems.
Simple Structure
Simple structure operating systems do not have well-defined structures and are small, simple, and limited.
The interfaces and levels of functionality are not well separated. MS-DOS is an example of such an
operating system. In MS-DOS, application programs are able to access the basic I/O routines. These types
of operating systems cause the entire system to crash if one of the user programs fails.
Advantages of Simple Structure
• It delivers better application performance because there are few interfaces between the application
program and the hardware.
• It is easy for kernel developers to develop such an operating system.

Disadvantages of Simple Structure


• The structure is very complicated, as no clear boundaries exist between modules.
• It does not enforce data hiding in the operating system.
Monolithic Structure
A monolithic structure is a type of operating system architecture where the entire operating system is
implemented as a single large process in kernel mode. Essential operating system services, such as
process management, memory management, file systems, and device drivers, are combined into a single
code block.
Advantages of Monolithic Structure
• Performance is fast: since everything runs in a single block, communication between components
is quick.
• It is easier to build because all parts live in one code block.
Disadvantages of Monolithic Structure
• It is hard to maintain, as a small error can affect the entire system.
• It carries security risks, because a bug in any component runs with full kernel privileges.
Micro-Kernel Structure
The micro-kernel structure designs the operating system by removing all non-essential components from the
kernel and implementing them as system and user programs. This results in a smaller kernel called the
micro-kernel. An advantage of this structure is that new services are added in user space, so the kernel
does not need to be modified. It is also more secure and reliable: if a service fails, the rest
of the operating system remains untouched. Mach, on which the macOS kernel is partly based, is an example of this type of kernel.
Advantages of Micro-kernel Structure
• It makes the operating system portable to various platforms.
• Because microkernels are small, they can be tested effectively.
Disadvantages of Micro-kernel Structure
• The increased level of inter-module communication degrades system performance.
Hybrid-Kernel Structure
The hybrid-kernel structure is a combination of the monolithic-kernel and micro-kernel structures. It combines
properties of both into a more advanced and practical approach: the speed and simple design of a
monolithic kernel together with the modularity and stability of a
micro-kernel.
Advantages of Hybrid-Kernel Structure
• It offers good performance, as it implements the advantages of both structures.
• It supports a wide range of hardware and applications.
• It provides better isolation and security by implementing micro-kernel approach.
• It enhances overall system reliability by separating critical functions into micro-kernel for
debugging and maintenance.
Disadvantages of Hybrid-Kernel Structure
• It increases the overall complexity of the system by implementing both structures (monolithic and
micro), making the system difficult to understand.
• The layer of communication between the micro-kernel and other components adds overhead
and decreases performance compared to a monolithic kernel.
Exo-Kernel Structure
The exo-kernel structure follows the end-to-end principle: the kernel provides as few hardware abstractions as
possible and simply allocates physical resources (CPU time, memory pages, disk blocks) to applications,
which build their own abstractions on top of them.
Layered Structure
In a layered structure, the OS is broken into a number of layers (levels), which gives designers much more
control over the system. The bottom layer (layer 0) is the hardware, and the topmost layer
(layer N) is the user interface. The layers are designed so that each layer uses only the functions of the lower-
level layers. This simplifies debugging: if an error occurs while a layer is being debugged, the error must
be in that layer, because the lower-level layers have already been
debugged. The main disadvantage of this structure is that at each layer, data needs to be modified and
passed on, which adds overhead to the system. Moreover, careful planning of the layers is necessary, as a
layer can use only lower-level layers. UNIX is an example of this structure.
Advantages of Layered Structure
• Layering makes it easier to enhance the operating system, as the implementation of a layer can
be changed easily without affecting the other layers.
• It is very easy to perform debugging and system verification.

Disadvantages of Layered Structure


• In this structure, the application’s performance is degraded compared to the simple structure.
• It requires careful planning for designing the layers, as the higher layers use the functionalities of
only the lower layers.
Modular Structure
It is considered the best approach for an OS. It involves designing a modular kernel: the kernel contains
only a set of core components, and other services are added as dynamically loadable modules,
either at boot time or at run time. It resembles the layered structure in that each module has defined
and protected interfaces, but it is more flexible than a layered structure, because a module can call any other
module. Solaris, for example, is organized this way.
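As a user-space analogy (not actual kernel code), the following Python sketch loads a hypothetical "module" at run time, the way a modular kernel inserts a driver after boot; all names here are made up for illustration:

```python
import importlib.util
import os
import tempfile

# Source of a hypothetical loadable module; in a real modular kernel
# this would be a compiled driver or service.
module_source = """
def service_name():
    return "scheduler-extension"
"""

# Write the "module" out and load it dynamically, the way a modular
# kernel adds a service at boot time or run time.
path = os.path.join(tempfile.mkdtemp(), "ext_module.py")
with open(path, "w") as f:
    f.write(module_source)

spec = importlib.util.spec_from_file_location("ext_module", path)
ext = importlib.util.module_from_spec(spec)
spec.loader.exec_module(ext)

# The "core" can now call into the newly loaded module.
print(ext.service_name())
```

The point of the analogy is that the core never needed to be rebuilt: the new service arrived through a defined, protected interface after the core was already running.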
Virtual Machines (VMs)
A virtual machine abstracts the hardware of our personal computer, including the CPU,
disk drives, RAM, and NIC (Network Interface Card), into several different execution contexts, giving us
the impression that each execution environment is a separate computer. VirtualBox is one example.
Similarly, an operating system uses CPU scheduling and virtual-memory techniques to run multiple
processes concurrently while making it appear as though each one has its own processor and its own
memory.
What is a Shell?
The shell is a command-line interface that allows the user to enter commands to interact with the operating
system. It acts as an intermediary between the user and the kernel, interpreting commands entered by the
user and translating them into instructions that the kernel can execute. The shell also provides various
features like command history, tab completion, and scripting capabilities to make it easier for the user to
work with the system.
Command Line Shell
The shell can be accessed through a command-line interface. A special program, called Terminal in
Linux/macOS or Command Prompt in Windows, lets the user type in human-readable commands
such as “cat” or “ls”, which are then executed, with the result displayed back on the terminal.
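The read-run-display loop described above can be sketched in Python (a minimal example using the standard subprocess module, assumed to run on a system where "echo" is available; real shells also handle quoting, pipes, and built-ins):

```python
import subprocess

def run_command(line):
    """Minimal shell step (illustrative): parse a command line, hand it
    to the OS for execution, and capture the output the way a terminal
    would display it."""
    args = line.split()  # very naive parsing: no quotes, pipes, or redirects
    result = subprocess.run(args, capture_output=True, text=True)
    return result.stdout

# A real shell loops: read a line, run it, print the result.
print(run_command("echo hello"))
```

A full shell wraps this in a loop, keeps command history, and interprets built-in commands itself instead of handing everything to the OS.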
Graphical Shells
Graphical shells provide means for manipulating programs based on the graphical user interface (GUI),
by allowing for operations such as opening, closing, moving, and resizing windows, as well as switching
focus between windows. Windows and Ubuntu are good examples of operating systems that
provide a GUI for interacting with programs; users do not need to type in commands for
every action.
Advantages of the Command Line Shell
• Efficient command execution
• Scripting capability
Disadvantages of the Command Line Shell
• Limited visualization
• Steep learning curve
What is Kernel?
The kernel is the core component of the operating system: it manages system resources and provides
services to other programs running on the system. It acts as a bridge between user programs and system
resources such as the CPU and I/O devices, and it
is responsible for tasks such as memory management, process scheduling, and device drivers. The kernel
operates at a lower level than the shell and interacts directly with the hardware of the computer.

Types of Kernel
The kernel manages the system’s resources and facilitates communication between hardware and software
components. Kernels come in several types; let’s discuss each type along with its advantages and
disadvantages:

1. Monolithic Kernel
It is a type of kernel in which all operating system services operate in kernel space. It has
dependencies between system components and a huge, complex code base.

Example:

Unix, Linux, Open VMS, XTS-400 etc.

2. Micro Kernel
This type of kernel takes a minimalist approach: the kernel itself handles only essentials such as virtual
memory and thread scheduling, and the rest runs in user space. With fewer services in kernel space it is
more stable, and it is often used in small operating systems.
Example :

Mach, L4, AmigaOS, Minix, K42 etc.

3. Hybrid Kernel
It is a combination of a monolithic kernel and a microkernel: it has the speed and design of a monolithic
kernel together with the modularity and stability of a microkernel.
Example :

Windows NT, Netware, BeOS etc.

4. Exo Kernel
It is a type of kernel that follows the end-to-end principle. It provides as few hardware abstractions as
possible and allocates physical resources directly to applications.

Example :

Nemesis, ExOS etc.


5. Nano Kernel
It is a type of kernel that offers hardware abstraction but no system services. Since a microkernel also
keeps system services out of the kernel, the terms micro kernel and nano kernel have become nearly analogous.

Example :

EROS etc.

Functions of Kernel
The kernel is responsible for various critical functions that ensure the smooth operation of the computer
system. These functions include:
1. Process Management
2. Memory Management
3. Device Management
4. File System Management
5. Resource Management
6. Security and Access Control
7. Inter-Process Communication

Working of Kernel
• The kernel is the first part of the operating system to load into memory, and it remains there until
the operating system is shut down. It is responsible for tasks such as disk
management, task management, and memory management.
• The kernel maintains a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entries point to entries in the region
table.
• The kernel loads an executable file into memory during the ‘exec’ system call.
Advantages
• Efficient Resource Management
• Process Management
• Hardware Abstraction

Disadvantages
• Limited Flexibility
• Dependency on Hardware

Segmentation in Operating System

In segmentation, a process is divided into chunks called segments, which are not necessarily all the same
size. Segmentation preserves the user’s view of the process, which paging
does not: the user’s view is mapped directly onto physical memory.

Types of Segmentation in Operating Systems


• Virtual Memory Segmentation: Each process is divided into a number of segments, but the
segmentation is not done all at once. This segmentation may or may not take place at the run
time of the program.
• Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in segmentation. A table
stores the information about all such segments and is called Segment Table.
What is Segment Table?
It maps a two-dimensional logical address into a one-dimensional physical address. Each of its entries
has:

• Base Address: It contains the starting physical address where the segments reside in memory.
• Segment Limit: Also known as segment offset. It specifies the length of the segment.

Segmentation is crucial for efficient memory management within an operating system.
Translation of a Two-dimensional Logical Address to a One-dimensional Physical Address

The address generated by the CPU is divided into:

• Segment number (s): the index of the segment, used to look up its entry in the segment table.


• Segment offset (d): the position of the referenced data within the segment.
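The translation above can be sketched in Python, using a made-up segment table whose entries hold a (base address, limit) pair per segment:

```python
# Illustrative segment table: index = segment number, entry = (base, limit).
# All values are invented for the example.
segment_table = [
    (1400, 1000),   # segment 0 starts at address 1400, is 1000 bytes long
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(s, d):
    """Map a two-dimensional logical address (s, d) to a one-dimensional
    physical address: check the offset against the segment limit, then
    add the segment's base address."""
    base, limit = segment_table[s]
    if d >= limit:
        # The hardware would trap to the OS here (a segmentation fault).
        raise MemoryError("offset beyond segment limit")
    return base + d

print(translate(2, 53))    # → 4353  (4300 + 53)
print(translate(0, 999))   # → 2399  (1400 + 999)
```

Note the two memory accesses implied by this scheme: one to read the segment-table entry and one to reach the translated physical address, which is exactly the access-time overhead listed among the disadvantages below.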
Advantages of Segmentation in Operating System
• Reduced Internal Fragmentation : Segmentation can reduce internal fragmentation compared
to fixed-size paging, as segments can be sized according to the actual needs of a process.
However, internal fragmentation can still occur if a segment is allocated more space than it
actually uses.
• Segment Table consumes less space in comparison to Page table in paging.
• As a complete module is loaded all at once, segmentation improves CPU utilization.
• Segmentation is close to the user’s view of memory: users can divide their programs into
modules, and each module becomes a separate segment.
• The user specifies the segment size, whereas, in paging, the hardware determines the page size.
• Segmentation is a method that can be used to segregate data from security operations.
• Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments can be of
variable size, and processes can be designed to have multiple segments, allowing for more fine-
grained memory allocation.
• Sharing: Segmentation allows for sharing of memory segments between processes. This can be
useful for inter-process communication or for sharing code libraries.
• Protection: Segmentation provides a level of protection between segments, preventing one
process from accessing or modifying another process’s memory segment. This can help increase
the security and stability of the system.
Disadvantages of Segmentation in Operating System
• External Fragmentation : As processes are loaded into and removed from memory, the free memory
space is broken into little pieces, causing external fragmentation. This is a notable difference from
paging, where external fragmentation does not occur.
• Overhead is associated with keeping a segment table for each activity.
• Due to the need for two memory accesses, one for the segment table and the other for main
memory, access time to retrieve the instruction increases.
• Fragmentation: As mentioned, segmentation can lead to external fragmentation as memory
becomes divided into smaller segments. This can lead to wasted memory and decreased
performance.
• Overhead: Using a segment table can increase overhead and reduce performance. Each
segment table entry requires additional memory, and accessing the table to retrieve memory
locations can increase the time needed for memory operations.
• Complexity: Segmentation can be more complex to implement and manage than paging. In
particular, managing multiple segments per process can be challenging, and the potential for
segmentation faults can increase as a result.

Fragmentation
Fragmentation occurs when processes are loaded into and removed from memory, leaving behind small
free holes. These holes often cannot be assigned to new processes because they are not
combined or do not meet a process's memory requirement. To maintain a good degree of multiprogramming,
we must reduce this wasted memory. Operating systems exhibit two types of
fragmentation:

1. Internal fragmentation: Internal fragmentation occurs when a memory block allocated to a
process is larger than the requested size, leaving some unused space inside the block.
Example: Suppose fixed partitioning is used, with blocks of 3MB, 6MB, and 7MB in memory. A
new process p4 of size 2MB arrives and demands a block of memory. It gets the 3MB
block, but 1MB of that block is wasted and cannot be allocated to any other process.
This is called internal fragmentation.
2. External fragmentation: In external fragmentation, there is enough free memory in total, but we
cannot assign it to a process because the free blocks are not contiguous. Example: Continuing
the example above, three processes p1, p2, and p3 arrive with sizes 2MB, 4MB, and 7MB
and are allocated the 3MB, 6MB, and 7MB blocks respectively.
After allocation, 1MB is left over in p1's block and 2MB in p2's block. Now a new
process p4 arrives demanding a 3MB block of memory. That much free memory exists, but we cannot
assign it because it is not contiguous. This is called external fragmentation.
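Both examples can be checked with a short Python sketch, using the block and process sizes from the text:

```python
# The example from the text: fixed partitions of 3, 6 and 7 MB, with
# processes p1 = 2 MB, p2 = 4 MB, p3 = 7 MB placed into them.
blocks = [3, 6, 7]
processes = [2, 4, 7]

# Internal fragmentation: space allocated inside a block but never used.
internal = [b - p for b, p in zip(blocks, processes)]
print(sum(internal))   # → 3  (1 MB + 2 MB + 0 MB wasted inside the blocks)

# External fragmentation: 3 MB is free in total, yet p4 (3 MB) still
# cannot be placed, because no single hole is 3 MB of contiguous memory.
p4 = 3
fits = any(hole >= p4 for hole in internal)
print(fits)            # → False
```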

An Operating System is a type of system software that manages all the resources of the computer. It acts
as an interface between the software and the computer hardware, and it is designed to manage the overall
resources and operations of the computer.

Operating System is a fully integrated set of specialized programs that handle all the operations of the
computer. It controls and monitors the execution of all other programs that reside in the computer, which
also includes application programs and other system software of the computer. Examples of Operating
Systems are Windows, Linux, Mac OS, etc.

An Operating System (OS) is a collection of software that manages computer hardware resources and
provides common services for computer programs. In this article, we will see the basics of operating
systems in detail.

Functions of the Operating System


• Resource Management: The operating system manages and allocates memory, CPU time, and
other hardware resources among the various programs and processes running on the computer.
• Process Management: The operating system is responsible for starting, stopping, and managing
processes and programs. It also controls the scheduling of processes and allocates resources to
them.
• Memory Management: The operating system manages the computer’s primary memory and
provides mechanisms for optimizing memory usage.
• Security: The operating system provides a secure environment for the user, applications, and
data by implementing security policies and mechanisms such as access controls and encryption.
• Job Accounting: It keeps track of time and resources used by various jobs or users.
• File Management: The operating system is responsible for organizing and managing the file
system, including the creation, deletion, and manipulation of files and directories.
• Device Management: The operating system manages input/output devices such as printers,
keyboards, mice, and displays. It provides the necessary drivers and interfaces to enable
communication between the devices and the computer.
• Networking: The operating system provides networking capabilities such as establishing and
managing network connections, handling network protocols, and sharing resources such as
printers and files over a network.
• User Interface: The operating system provides a user interface that enables users to interact
with the computer system. This can be a Graphical User Interface (GUI), a Command-Line
Interface (CLI), or a combination of both.
• Backup and Recovery: The operating system provides mechanisms for backing up data and
recovering it in case of system failures, errors, or disasters.
• Virtualization: The operating system provides virtualization capabilities that allow multiple
operating systems or applications to run on a single physical machine. This can enable efficient
use of resources and flexibility in managing workloads.
• Performance Monitoring: The operating system provides tools for monitoring and optimizing
system performance, including identifying bottlenecks, optimizing resource usage, and analyzing
system logs and metrics.
• Time-Sharing: The operating system enables multiple users to share a computer system and its
resources simultaneously by providing time-sharing mechanisms that allocate resources fairly
and efficiently.
Objectives of Operating Systems
Let us now see some of the objectives of the operating system, which are mentioned below.

• Convenient to use: One of the objectives is to make the computer system more convenient to
use in an efficient manner.
• User Friendly: To make the computer system more interactive with a more convenient interface
for the users.
• Easy Access: To provide easy access to users for using resources by acting as an intermediary
between the hardware and its users.
• Management of Resources: For managing the resources of a computer in a better and faster
way.
• Controls and Monitoring: By keeping track of who is using which resource, granting resource
requests, and mediating conflicting requests from different programs and users.
• Fair Sharing of Resources: Providing efficient and fair sharing of resources between the users
and programs.
Types of Operating Systems
• Batch Operating System: A Batch Operating System is a type of operating system that does not
interact with the computer directly. There is an operator who takes similar jobs having the same
requirements and groups them into batches.
• Time-sharing Operating System: Time-sharing Operating System is a type of operating system
that allows many users to share computer resources (maximum utilization of the resources).
• Distributed Operating System: Distributed Operating System is a type of operating system that
manages a group of different computers and makes them appear to be a single computer. These
operating systems are designed to operate on a network of computers. They allow multiple users
to access shared resources and communicate with each other over the network. Examples
include Microsoft Windows Server and various distributions of Linux designed for servers.
• Network Operating System: Network Operating System is a type of operating system that runs
on a server and provides the capability to manage data, users, groups, security, applications, and
other networking functions.
• Real-time Operating System: Real-time Operating System is a type of operating system that
serves a real-time system and the time interval required to process and respond to inputs is very
small. These operating systems are designed to respond to events in real time. They are used in
applications that require quick and deterministic responses, such as embedded systems,
industrial control systems, and robotics.
• Multiprocessing Operating System: Multiprocessing Operating Systems use multiple CPUs
within a single computer system to boost performance. The CPUs
are linked together so that a job can be divided and executed more quickly.
• Single-User Operating Systems: Single-User Operating Systems are designed to support a
single user at a time. Examples include Microsoft Windows for personal computers and Apple
macOS.
• Multi-User Operating Systems: Multi-User Operating Systems are designed to support multiple
users simultaneously. Examples include Linux and Unix.
• Embedded Operating Systems: Embedded Operating Systems are designed to run on devices
with limited resources, such as smartphones, wearable devices, and household appliances.
Examples include Google’s Android and Apple’s iOS.

Introduction of Process Management

A process is a program in execution. For example, when we write a program in C or C++ and compile it,
the compiler creates binary code. The original code and binary code are both programs. When we actually
run the binary code, it becomes a process. A process is an ‘active’ entity instead of a program, which is
considered a ‘passive’ entity. A single program can create many processes when run multiple times; for
example, when we open a .exe or binary file multiple times, multiple instances begin (multiple processes
are created).
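On POSIX systems this can be demonstrated with Python's os.fork(), which turns one running program into two processes (a sketch; it will not run on Windows):

```python
import os

# One passive program, two active processes: os.fork() (POSIX only)
# clones the running process, so the same code backs both of them.
r, w = os.pipe()
pid = os.fork()
if pid == 0:                         # child: a brand-new process
    os.close(r)
    os.write(w, str(os.getpid()).encode())
    os._exit(0)
else:                                # parent: the original process
    os.close(w)
    child_pid = int(os.read(r, 32))
    os.waitpid(pid, 0)               # reap the child
    print(child_pid != os.getpid())  # → True: two distinct process IDs
```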

In this article, we will discuss process management in detail, along with the different states of a process,
its advantages, disadvantages, etc.

How Does a Process Look Like in Memory?


A process in memory is divided into several distinct sections, each serving a different purpose. Here’s how
a process typically looks in memory:

• Text Section: Contains the compiled program code. The current activity is represented
by the value of the Program Counter.
• Stack: The stack contains temporary data, such as function parameters, return addresses, and
local variables.
• Data Section: Contains the global and static variables.
• Heap Section: Contains memory dynamically allocated to the process during its run time.
Characteristics of a Process
A process has the following attributes.

• Process Id: A unique identifier assigned by the operating system.


• Process State: Can be ready, running, etc.
• CPU Registers: Like the Program Counter (CPU registers must be saved and restored when a
process is swapped in and out of the CPU)
• Accounting Information: Amount of CPU used for process execution, time limits, execution ID, etc
• I/O Status Information: For example, devices allocated to the process, open files, etc
• CPU Scheduling Information: For example, Priority (Different processes may have different
priorities, for example, a shorter process assigned high priority in the shortest job first scheduling)
All of the above attributes of a process are together known as the context of the process. Every process
has its own process control block (PCB), i.e. each process has a unique PCB, and all of the above
attributes are part of it.

States of Process
A process is in one of the following states:

• New: The process is being created.


• Ready: After creation, the process moves to the ready state, i.e. it is ready for
execution.
• Running: The process currently running on the CPU (only one process at a time can be executing
on a single processor).
• Wait (or Block): The process has requested I/O access and is waiting for it.
• Complete (or Terminated): The process has completed its execution.
• Suspended Ready: When the ready queue becomes full, some ready processes are moved to the
suspended ready state.
• Suspended Block: When the waiting queue becomes full, some waiting processes are moved to
the suspended block state.

Process Operations
Process operations in an operating system refer to the various activities the OS performs to manage
processes. These include process creation, process scheduling, execution, and killing the
process. Here are the key process operations:
Process Creation
Process creation in an operating system (OS) is the act of generating a new process. This new process is
an instance of a program that can execute independently.
Scheduling
Once a process is ready to run, it enters the “ready queue.” The scheduler’s job is to pick a process from
this queue and start its execution.
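One simple, illustrative scheduling policy is round-robin; the sketch below shows the scheduler picking processes from the ready queue one time slice at a time (process names and burst times are made up):

```python
from collections import deque

def round_robin(ready_queue, quantum=2):
    """Toy round-robin scheduler: pick the process at the head of the
    ready queue, run it for one time slice (quantum), and requeue it
    if it still has work left. Returns the order of execution."""
    order = []
    q = deque(ready_queue)              # entries: (name, remaining burst time)
    while q:
        name, remaining = q.popleft()   # scheduler picks the next process
        order.append(name)
        remaining -= quantum            # it runs for one time slice
        if remaining > 0:
            q.append((name, remaining)) # not finished: back to the ready queue
    return order

print(round_robin([("p1", 3), ("p2", 2), ("p3", 5)]))
```

Real schedulers weigh priorities, I/O behavior, and fairness; this sketch only shows the pick-run-requeue cycle.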
Execution
Execution means the CPU starts working on the process. During this time, the process might:
• Move to a waiting queue if it needs to perform an I/O operation.
• Be preempted if a higher-priority process needs the CPU.

Killing the Process


After the process finishes its tasks, the operating system ends it and removes its Process Control Block
(PCB).
Context Switching of Process
The process of saving the context of one process and loading the context of another process is known as
Context Switching. In simple terms, it is like loading and unloading the process from the running state to
the ready state.
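The save-and-load step can be modeled with plain dictionaries standing in for the CPU state and the PCBs (all values are illustrative):

```python
# Toy model of a context switch: the "context" of each process is the
# CPU state saved in its PCB (fields simplified for illustration).
cpu = {"pc": 104, "registers": [7, 7], "running": "p1"}
pcbs = {
    "p1": {"pc": 0, "registers": []},
    "p2": {"pc": 540, "registers": [3, 9]},   # p2's context, saved earlier
}

def context_switch(cpu, pcbs, next_pid):
    old = cpu["running"]
    # 1. Save the context of the running process into its PCB.
    pcbs[old]["pc"] = cpu["pc"]
    pcbs[old]["registers"] = list(cpu["registers"])
    # 2. Load the context of the next process from its PCB.
    cpu["pc"] = pcbs[next_pid]["pc"]
    cpu["registers"] = list(pcbs[next_pid]["registers"])
    cpu["running"] = next_pid

context_switch(cpu, pcbs, "p2")
print(cpu["pc"], cpu["running"])   # → 540 p2
print(pcbs["p1"]["pc"])            # → 104  (p1 can resume from here later)
```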
Advantages of Process Management
• Running Multiple Programs: Process management lets you run multiple applications at the
same time, for example, listen to music while browsing the web.
• Process Isolation: It ensures that different programs don’t interfere with each other, so a
problem in one program won’t crash another.
• Fair Resource Use: It makes sure resources like CPU time and memory are shared fairly among
programs, so even lower-priority programs get a chance to run.
• Smooth Switching: It efficiently handles switching between programs, saving and loading their
states quickly to keep the system responsive and minimize delays.
Disadvantages of Process Management
• Overhead: Process management uses system resources because the OS needs to keep track of
various data structures and scheduling queues. This requires CPU time and memory, which can
affect the system’s performance.
• Complexity: Designing and maintaining an OS is complicated due to the need for complex
scheduling algorithms and resource allocation methods.
• Deadlocks: To keep processes running smoothly together, the OS uses mechanisms like
semaphores and mutex locks. However, these can lead to deadlocks, where processes get stuck
waiting for each other indefinitely.
• Increased Context Switching: In multitasking systems, the OS frequently switches between
processes. Storing and loading the state of each process (context switching) takes time and
computing power, which can slow down the system.
When Does Context Switching Happen?
A context switch happens when:

• A high-priority process arrives in the ready state (i.e. with a higher priority than the running
process).
• An interrupt occurs.
• A switch between user and kernel mode takes place (though this does not by itself require one).
• Preemptive CPU scheduling is used.

Context Switch vs Mode Switch

A mode switch occurs when the CPU privilege level is changed, for example when a system call is made
or a fault occurs. The kernel works in a more privileged mode than a standard user task. If a user process
wants to access things that are only accessible to the kernel, a mode switch must occur. The currently
executing process need not be changed during a mode switch. A mode switch typically must occur before
a process context switch can occur, and only the kernel can cause a context switch.

Structure of the Process Control Block


A Process Control Block (PCB) is a data structure used by the operating system to manage information
about a process. The process control block keeps track of many important pieces of information needed to
manage processes efficiently. The diagram helps explain some of these key data items.

• Pointer: A stack pointer that must be saved when the process switches from one state to another, so that the process can resume from its current position.
• Process state: It stores the respective state of the process.
• Process number: Every process is assigned a unique id known as process ID or PID which
stores the process identifier.
• Program counter: Program Counter stores the counter, which contains the address of the next
instruction that is to be executed for the process.
• Registers: When a process is running and its time slice expires, the current values of the process-specific registers are stored in the PCB and the process is swapped out. When the process is scheduled to run again, the register values are read from the PCB and written back to the CPU registers. This is the main purpose of the register fields in the PCB.
• Memory limits: This field contains information about the memory-management structures the operating system uses for this process, such as page tables or segment tables.
• List of Open files: This information includes the list of files opened for a process.
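The fields above can be pictured as a simple record. Below is a minimal, illustrative sketch in Python — the field names and types are simplifications for this example, not an actual OS structure:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class PCB:
    """Illustrative Process Control Block; fields mirror the list above."""
    pid: int                                   # unique process number
    state: str = "new"                         # process state
    program_counter: int = 0                   # address of next instruction
    registers: Dict[str, int] = field(default_factory=dict)   # saved CPU registers
    memory_limits: Tuple[int, int] = (0, 0)    # simplified (base, limit) pair
    open_files: List[str] = field(default_factory=list)       # list of open files

pcb = PCB(pid=42)
pcb.state = "ready"
pcb.registers["acc"] = 7      # saved on a context switch, restored when rescheduled
print(pcb.pid, pcb.state)     # 42 ready
```

A real kernel stores the PCB in kernel memory and links PCBs into scheduling queues; this record only shows what kind of information lives there.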

What is CPU scheduling?


CPU Scheduling is a process that allows one process to use the CPU while another process is delayed due
to unavailability of any resources such as I / O etc, thus making full use of the CPU. In short, CPU scheduling
decides the order and priority of the processes to run and allocates the CPU time based on various
parameters such as CPU usage, throughput, turnaround, waiting time, and response time. The purpose of
CPU Scheduling is to make the system more efficient, faster, and fairer.
Types of CPU Scheduling
In real-time scheduling, the system's correctness depends not only on the computation's logical outcome but also on the time at which the results are generated. Tasks or processes respond to external events as they happen in "real time," so processes have to keep up with them.
In multiple-processor scheduling, a system with many processors that share the same memory, bus, and input/output devices is referred to as a multiprocessor. The bus links all of the computer's other parts, including the RAM and I/O devices, to the processors.

Criteria of CPU Scheduling


CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are essential metrics used to evaluate the efficiency of scheduling algorithms. CPU scheduling has several criteria; some of them are mentioned below.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically, CPU utilization can range from 0 to 100 percent, but in a real system it typically varies from 40 to 90 percent depending on the load on the system.
2. Throughput
A measure of the work done by the CPU is the number of processes being executed and completed per
unit of time. This is called throughput. The throughput may vary depending on the length or duration of the
processes.
3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. The time elapsed from the submission of a process to its completion is known as the turnaround time. Turnaround time is the sum of the times spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and waiting for I/O.
Turn Around Time = Completion Time – Arrival Time.
4. Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it starts execution.
It only affects the waiting time of a process i.e. time spent by a process waiting in the ready queue.
Waiting Time = Turnaround Time – Burst Time.
5. Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce some output
fairly early and continue computing new results while previous results are being output to the user. Thus
another criterion is the time taken from submission of the process of the request until the first response is
produced. This measure is called response time.
Response Time = Time of First CPU Allocation – Arrival Time
6. Completion Time
The completion time is the time when the process stops executing, which means that the process has
completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor the higher-
priority processes.
8. Predictability
A given process should always run in about the same amount of time under a similar system load.
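The formulas above can be checked with a small helper. This is a sketch assuming all times are given in the same units; the function name is arbitrary:

```python
def metrics(arrival, burst, completion, first_run):
    """Compute turnaround, waiting, and response time from the formulas above."""
    turnaround = completion - arrival        # Completion Time - Arrival Time
    waiting = turnaround - burst             # Turnaround Time - Burst Time
    response = first_run - arrival           # First CPU Allocation - Arrival Time
    return turnaround, waiting, response

# A process arrives at t=0, first gets the CPU at t=2, runs 4 units, finishes at t=8.
print(metrics(arrival=0, burst=4, completion=8, first_run=2))  # (8, 4, 2)
```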

Factors Influencing CPU Scheduling Algorithms


There are many factors that influence the choice of CPU scheduling algorithm. Some of them are listed
below.

• The number of processes.


• The processing time required.
• The urgency of tasks.
• The system requirements.
Selecting the correct algorithm will ensure that the system will use system resources efficiently, increase
productivity, and improve user satisfaction.

CPU Scheduling Algorithms


There are several CPU Scheduling Algorithms, that are listed below.

• First Come First Served (FCFS)


• Shortest Job First (SJF)
• Longest Job First (LJF)
• Priority Scheduling
• Round Robin (RR)
• Shortest Remaining Time First (SRTF)
• Longest Remaining Time First (LRTF)
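As a concrete illustration, here is a sketch of the simplest of these, FCFS, computing completion, turnaround, and waiting times for a small workload. The input format is an assumption made for this example only:

```python
def fcfs(processes):
    """processes: list of (pid, arrival, burst), sorted by arrival time."""
    t = 0
    result = {}
    for pid, arrival, burst in processes:
        t = max(t, arrival)       # CPU may sit idle until the process arrives
        t += burst                # non-preemptive: run to completion
        result[pid] = {"completion": t,
                       "turnaround": t - arrival,
                       "waiting": t - arrival - burst}
    return result

schedule = fcfs([("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)])
print(schedule["P2"])  # {'completion': 7, 'turnaround': 6, 'waiting': 3}
```

Note how P3, with the shortest burst, waits longest relative to its size — the "convoy effect" that motivates SJF and Round Robin.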

Process Scheduling Algorithms


The operating system can use different scheduling algorithms to schedule processes. Here are some
commonly used timing algorithms:

• First-Come, First-Served (FCFS): This is the simplest scheduling algorithm, where the process
is executed on a first-come, first-served basis. FCFS is non-preemptive, which means that once a
process starts executing, it continues until it is finished or waiting for I/O.
• Shortest Job First (SJF): SJF selects the process with the shortest burst time — the time a process takes to complete its execution. In its basic form it is non-preemptive, and it minimizes the average waiting time of processes.
• Round Robin (RR): Round Robin is a preemptive scheduling algorithm that gives each process a fixed time quantum. If a process does not complete its execution within that quantum, it is preempted and added to the end of the ready queue. RR ensures a fair distribution of CPU time to all processes and avoids starvation.
• Priority Scheduling: This scheduling algorithm assigns priority to each process and the process
with the highest priority is executed first. Priority can be set based on process type, importance,
or resource requirements.
• Multilevel Queue: This scheduling algorithm divides the ready queue into several separate queues, each with a different priority. Processes are assigned to a queue based on properties such as process type or priority.
Inter Process Communication (IPC)

Processes can coordinate and interact with one another using a method called inter-process communication (IPC). By enabling processes to cooperate, it improves the efficiency, modularity, and convenience of software systems.

Types of Process
• Independent process
• Co-operating process
An independent process is not affected by the execution of other processes while a co-operating process
can be affected by other executing processes. Though one can think that those processes, which are
running independently, will execute very efficiently, in reality, there are many situations when cooperative
nature can be utilized for increasing computational speed, convenience, and modularity. Inter-process
communication (IPC) is a mechanism that allows processes to communicate with each other and
synchronize their actions. The communication between these processes can be seen as a method of
cooperation between them. Processes can communicate with each other through both:
Methods of IPC

• Shared Memory
• Message Passing

Shared Memory Method


There are two processes: Producer and Consumer . The producer produces some items and the Consumer
consumes that item. The two processes share a common space or memory location known as a buffer
where the item produced by the Producer is stored and from which the Consumer consumes the item if
needed. There are two versions of this problem: the first one is known as the unbounded buffer problem in
which the Producer can keep on producing items and there is no limit on the size of the buffer, the second
one is known as the bounded buffer problem in which the Producer can produce up to a certain number of
items before it starts waiting for Consumer to consume it. We will discuss the bounded buffer problem. First,
the Producer and the Consumer will share some common memory, then the producer will start producing
items. If the total produced item is equal to the size of the buffer, the producer will wait to get it consumed
by the Consumer. Similarly, the consumer will first check for the availability of the item. If no item is available,
the Consumer will wait for the Producer to produce it. If there are items available, Consumer will consume
them. The pseudo-code to demonstrate is provided below:
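In place of pseudo-code, here is a minimal Python sketch of the bounded-buffer case, using the standard library's thread-safe queue as the shared buffer (the buffer size and item counts are arbitrary choices for this illustration):

```python
import threading
import queue

buf = queue.Queue(maxsize=3)      # bounded buffer with 3 slots
consumed = []

def producer():
    for item in range(5):
        buf.put(item)             # blocks while the buffer is full

def consumer():
    for _ in range(5):
        consumed.append(buf.get())  # blocks while the buffer is empty

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4]
```

A real shared-memory implementation would use a raw buffer plus explicit synchronization; `queue.Queue` hides that machinery but shows the blocking behavior described above.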

Message Passing Method

Now, We will start our discussion of the communication between processes via message passing. In this
method, processes communicate with each other without using any kind of shared memory. If two
processes p1 and p2 want to communicate with each other, they proceed as follows:
• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
We need at least two primitives:
– send (message, destination) or send (message)
– receive (message, host) or receive (message)
The message size can be of fixed size or of variable size. If it is of fixed size, it is easy for an OS designer
but complicated for a programmer and if it is of variable size then it is easy for a programmer but complicated
for the OS designer. A standard message can have two parts: header and body. The header part is used
for storing the message type, destination id, source id, message length, and control information. The control information includes details such as what to do if the receiver runs out of buffer space, a sequence number, and a priority. Generally, messages are sent in FIFO order.
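A rough sketch of the send/receive primitives, with each message split into a header and body as described. The dictionary layout is an assumption for illustration, not a standard format:

```python
import queue

mailbox = queue.Queue()   # FIFO delivery, as described above

def send(message, dest, src_id="p1", dst_id="p2"):
    # The header stores ids and message length; real headers also carry
    # control information such as sequence numbers and priority.
    dest.put({"header": {"src": src_id, "dst": dst_id, "len": len(message)},
              "body": message})

def receive(q):
    return q.get()        # blocks until a message is available

send("hello", mailbox)
send("world", mailbox)
first = receive(mailbox)
print(first["body"], first["header"]["len"])  # hello 5
```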
Message Passing Through Communication Link

Direct and Indirect Communication link

Now, We will start our discussion about the methods of implementing communication links. While
implementing the link, there are some questions that need to be kept in mind like :

• How are links established?


• Can a link be associated with more than two processes?
• How many links can there be between every pair of communicating processes?
• What is the capacity of a link? Is the size of a message that the link can accommodate fixed or
variable?
• Is a link unidirectional or bi-directional?

A link has some capacity that determines the number of messages that can reside in it temporarily for which
every link has a queue associated with it which can be of zero capacity, bounded capacity, or unbounded
capacity. In zero capacity, the sender waits until the receiver informs the sender that it has received the
message. In non-zero capacity cases, a process does not know whether a message has been received or
not after the send operation. For this, the sender must communicate with the receiver explicitly.
Implementation of the link depends on the situation; it can be either a direct communication link or an indirect communication link.
Direct communication links are implemented when the processes use a specific process identifier for the communication, but it is hard to identify the sender ahead of time.
For example: the print server.
Indirect communication is done via a shared mailbox (port), which consists of a queue of messages. The sender keeps messages in the mailbox and the receiver picks them up.
Synchronous and Asynchronous Message Passing
A process that is blocked is one that is waiting for some event, such as a resource becoming available or
the completion of an I/O operation. IPC is possible between the processes on same computer as well as
on the processes running on different computer i.e. in networked/distributed system. In both cases, the
process may or may not be blocked while sending a message or attempting to receive a message so
message passing may be blocking or non-blocking. Blocking is considered synchronous: a blocking send means the sender is blocked until the message is received by the receiver, and a blocking receive means the receiver blocks until a message is available. Non-blocking is considered asynchronous: a non-blocking send lets the sender send the message and continue, and a non-blocking receive returns either a valid message or null. For a sender, it is often more natural to be non-blocking after sending, since it may need to send messages to several different processes.

A thread is a single sequence stream within a process. Threads are also called lightweight processes as
they possess some of the properties of processes. Each thread belongs to exactly one process. In an
operating system that supports multithreading, a process can consist of many threads. But threads can run truly in parallel only if there is more than one CPU; otherwise, two threads must take turns via context switching on that single CPU.

What is Thread in Operating Systems?


In a process, a thread refers to a single sequential flow of activity being executed. Such a flow is also known as a thread of execution or a thread of control. Any operating system process can execute threads, and a process can have multiple threads.

Why Do We Need Thread?


• Threads run in parallel improving the application performance. Each such thread has its own CPU
state and stack, but they share the address space of the process and the environment.
• Threads can share common data so they do not need to use inter-process communication. Like
the processes, threads also have states like ready, executing, blocked, etc.
• Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.
• Each thread has its own Thread Control Block (TCB). Like the process, a context switch occurs
for the thread, and register contents are saved in (TCB). As threads share the same address
space and resources, synchronization is also required for the various activities of the thread.
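The points above — threads sharing the process's address space while keeping their own execution state — can be seen in a small Python example. Splitting a sum between two threads is an arbitrary choice for this sketch:

```python
import threading

partial = [0, 0]   # shared list: both threads see it, each writes its own slot

def worker(slot, lo, hi):
    # Each thread has its own stack and locals, but 'partial' is shared.
    partial[slot] = sum(range(lo, hi))

t1 = threading.Thread(target=worker, args=(0, 0, 500))
t2 = threading.Thread(target=worker, args=(1, 500, 1000))
t1.start(); t2.start()
t1.join(); t2.join()
print(sum(partial))  # 499500, the same as sum(range(1000))
```

Because the threads share `partial`, no inter-process communication is needed to combine their results — exactly the advantage listed above.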
Components of Threads
These are the basic components of a thread.

• Stack Space
• Register Set
• Program Counter
Types of Thread in Operating System
Threads are of two types. These are described below.

• User Level Thread


• Kernel Level Thread

1. User Level Threads

A user-level thread is a type of thread that is not created using system calls; the kernel plays no part in managing user-level threads, so they can be implemented entirely in user space. To the kernel, a process containing user-level threads looks like a single-threaded process and is scheduled as one unit. Let's look at the advantages and disadvantages of user-level threads.

Advantages of User-Level Threads

• Implementation of the User-Level Thread is easier than Kernel Level Thread.


• Context Switch Time is less in User Level Thread.
• User-Level Thread is more efficient than Kernel-Level Thread.
• Because of the presence of only Program Counter, Register Set, and Stack Space, it has a
simple representation.

Disadvantages of User-Level Threads

• There is a lack of coordination between Thread and Kernel.


• In case of a page fault, the whole process can be blocked.
2. Kernel Level Threads
A kernel-level thread is a type of thread that the operating system recognizes and manages directly. The kernel maintains its own thread table to keep track of all threads in the system, and it handles thread management itself. Kernel-level threads have a somewhat longer context-switching time.
Advantages of Kernel-Level Threads
• The kernel has up-to-date information on all threads.
• Applications that block frequently are better handled by kernel-level threads.
• Whenever a process requires more processing time, the kernel can allocate more time to its threads.
Disadvantages of Kernel-Level threads
• Kernel-Level Thread is slower than User-Level Thread.
• Implementation of this type of thread is a little more complex than a user-level thread.

Multi-threading is the execution of multiple threads of the same process at the same time.

There are three multi-threading models:

• Many-to-many model
• Many-to-one model
• One-to-one model

Many to Many Model


In this model, multiple user-level threads are multiplexed onto the same or a smaller number of kernel-level threads. The number of kernel-level threads is specific to the machine. The advantage of this model is that if one user thread is blocked, other user threads can be scheduled onto other kernel threads, so the system does not block when a particular thread blocks.
It is considered the most flexible multi-threading model.

Many to One Model


In this model, multiple user threads are mapped to a single kernel thread. When a user thread makes a blocking system call, the entire process blocks. Since there is only one kernel thread, only one user thread can enter the kernel at a time, so multiple threads cannot run on multiple processors simultaneously.
Thread management is done at the user level, which makes it efficient.

One to One Model


In this model, there is a one-to-one relationship between kernel threads and user threads, so multiple threads can run on multiple processors. The drawback of this model is that creating a user thread requires creating a corresponding kernel thread.
Because each user thread is backed by a separate kernel thread, if one user thread makes a blocking system call, the other user threads are not blocked.
Process Synchronization is the coordination of execution of multiple processes in a multi-process system
to ensure that they access shared resources in a controlled and predictable manner. It aims to resolve the
problem of race conditions and other synchronization issues in a concurrent system.

The main objective of process synchronization is to ensure that multiple processes access shared
resources without interfering with each other and to prevent the possibility of inconsistent data due to
concurrent access. To achieve this, various synchronization techniques such as semaphores, monitors,
and critical sections are used.

In a multi-process system, synchronization is necessary to ensure data consistency and integrity, and to
avoid the risk of deadlocks and other synchronization problems. Process synchronization is an important
aspect of modern operating systems, and it plays a crucial role in ensuring the correct and efficient
functioning of multi-process systems.

What is Race Condition?


When more than one process executes the same code, or accesses the same memory or shared variable, there is a possibility that the final value of the shared variable is wrong: all the processes "race" to access it, and the result depends on which one wins. This is known as a race condition. When several processes access and manipulate the same data concurrently, the outcome depends on the particular order in which the accesses take place. A race condition typically arises inside a critical section, where the result of multiple threads executing differs according to the order in which the threads run. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction; proper thread synchronization using locks or atomic variables also prevents race conditions.
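A small demonstration in Python: two threads increment a shared counter, once with a lock and once without. The locked version is always correct; the unlocked version may lose updates, though whether it does on a given run depends on the interpreter's thread switching:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n, use_lock):
    global counter
    for _ in range(n):
        if use_lock:
            with lock:
                counter += 1   # read-modify-write is atomic w.r.t. the other thread
        else:
            counter += 1       # unsynchronized: interleaving can lose updates

def run(use_lock):
    global counter
    counter = 0
    threads = [threading.Thread(target=add, args=(100_000, use_lock)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    return counter

print(run(use_lock=True))    # 200000, every time
print(run(use_lock=False))   # may fall short of 200000 when a race occurs
```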

Critical Section Problem


A critical section is a code segment that can be accessed by only one process at a time. The critical section
contains shared variables that need to be synchronized to maintain the consistency of data variables. So
the critical section problem means designing a way for cooperative processes to access shared resources
without creating data inconsistencies.

Any solution to the critical section problem must satisfy three requirements:

• Mutual Exclusion : If a process is executing in its critical section, then no other process is
allowed to execute in the critical section.
• Progress : If no process is executing in the critical section and other processes are waiting
outside the critical section, then only those processes that are not executing in their remainder
section can participate in deciding which will enter the critical section next, and the selection can
not be postponed indefinitely.
• Bounded Waiting : A bound must exist on the number of times that other processes are allowed
to enter their critical sections after a process has made a request to enter its critical section and
before that request is granted.
Peterson’s Solution
Peterson’s Solution is a classical software-based solution to the critical section problem. In Peterson’s
solution, we have two shared variables:

• boolean flag[i]: Initialized to FALSE, initially no one is interested in entering the critical section
• int turn: The process whose turn is to enter the critical section.

Peterson’s Solution preserves all three conditions


• Mutual Exclusion is assured, as only one process can access the critical section at any time.
• Progress is also assured, as a process outside the critical section does not block other processes from entering the critical section.
• Bounded Waiting is also preserved, as each process gets its chance to enter the critical section after at most one entry by the other process.

Disadvantages of Peterson’s Solution


• It involves busy waiting. (In Peterson's solution, the statement "while(flag[j] && turn == j);" is responsible for this. Busy waiting is not favored because it wastes CPU cycles that could be used to perform other tasks.)
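A sketch of Peterson's solution for two threads in Python. Note this relies on sequentially consistent memory accesses, which CPython's interpreter effectively provides; compiled code on real hardware would additionally need memory fences:

```python
import threading

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to yield
count = 0               # shared data protected by the critical section

def process(i, iterations=1000):
    global turn, count
    j = 1 - i
    for _ in range(iterations):
        flag[i] = True                    # entry section: declare interest
        turn = j                          # give the other process priority
        while flag[j] and turn == j:
            pass                          # busy wait (the cost noted above)
        count += 1                        # critical section
        flag[i] = False                   # exit section

t0 = threading.Thread(target=process, args=(0,))
t1 = threading.Thread(target=process, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # 2000 — no increments are lost
```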
Semaphores
A semaphore is a signaling mechanism: a thread that is waiting on a semaphore can be signaled by another thread. This is different from a mutex, which can be signaled only by the thread that called the wait function.
A semaphore uses two atomic operations, wait and signal for process synchronization.
There are two types of semaphores: Binary Semaphores and Counting Semaphores .
• Binary Semaphores: They can only be either 0 or 1. They are also known as mutex locks, as the
locks can provide mutual exclusion . All the processes can share the same mutex semaphore that
is initialized to 1.
• Counting Semaphores: They can have any value and are not restricted to a certain domain.
They can be used to control access to a resource that has a limitation on the number of
simultaneous accesses. The semaphore can be initialized to the number of instances of the
resource. Whenever a process wants to use that resource, it checks if the number of remaining
instances is more than zero, i.e., the process has an instance available.
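A counting semaphore from Python's standard library, sketched here as guarding a resource with 3 instances. The timing, thread count, and peak-tracking bookkeeping are arbitrary additions for demonstration:

```python
import threading
import time

pool = threading.Semaphore(3)    # counting semaphore: 3 resource instances
guard = threading.Lock()         # protects the bookkeeping below
in_use = 0
peak = 0

def use_resource():
    global in_use, peak
    with pool:                   # wait(): decrements; blocks while the count is zero
        with guard:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)         # pretend to use the resource
        with guard:
            in_use -= 1
    # leaving the 'with pool' block performs signal(): increments the count

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(peak)  # never exceeds 3
```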

What is The Readers-Writers Problem?


The Readers-Writers Problem is a classic synchronization issue in operating systems that involves
managing access to shared data by multiple threads or processes. The problem addresses the scenario
where:
• Readers: Multiple readers can access the shared data simultaneously without causing any issues
because they are only reading and not modifying the data.
• Writers: Only one writer can access the shared data at a time to ensure data integrity, as writers
modify the data, and concurrent modifications could lead to data corruption or inconsistencies.
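A classic readers-preference sketch of this problem uses a reader count plus two locks — the first reader locks writers out and the last reader lets them back in. This is a standard textbook structure, written here in Python:

```python
import threading

read_count = 0
read_count_lock = threading.Lock()   # protects read_count
resource = threading.Lock()          # writers need this exclusively
shared = {"value": 0}
reads = []

def reader():
    global read_count
    with read_count_lock:
        read_count += 1
        if read_count == 1:
            resource.acquire()       # first reader locks writers out
    reads.append(shared["value"])    # many readers may read concurrently here
    with read_count_lock:
        read_count -= 1
        if read_count == 0:
            resource.release()       # last reader lets writers back in

def writer():
    with resource:                   # exclusive access while modifying
        shared["value"] += 1

threads = [threading.Thread(target=writer) for _ in range(3)]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # 3
```

Because readers keep extending their hold on `resource`, a steady stream of readers can starve writers — one reason writer-preference variants of this solution exist.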
Bounded Buffer Problem in Operating System using Semaphores
In the Bounded Buffer Problem there are three entities: the storage buffer slots, the producer, and the consumer. The producer tries to store data in empty buffer slots while the consumer tries to remove data from filled slots.
Advantages of Process Synchronization
• Ensures data consistency and integrity
• Avoids race conditions
• Prevents inconsistent data due to concurrent access
• Supports efficient and effective use of shared resources
Disadvantages of Process Synchronization
• Adds overhead to the system
• This can lead to performance degradation
• Increases the complexity of the system
• Can cause deadlock if not implemented properly
Deadlock is a condition wherein all the processes are holding one resource each and are waiting for
another process to release another resource. Starvation is a state that prevents lower-precedence
processes from getting the resources. Starvation arises when procedures with critical importance keep on
utilizing the resources frequently.

Therefore, to compare these two, we first need to understand both terms in more depth.

In other words, “Deadlock is a situation in which two or more processes require resources to complete their
execution, but those resources are held by another process. Due to which the execution of the process is
not completed.”
There are Four Conditions That May Occur in Deadlock

1. Mutual Exclusion: Only one process can access a given resource at any point in time.
2. Hold and Wait: Processes hold at least one resource while waiting for additional resources that are held by other processes.
3. No Preemption: Resources cannot be forcibly taken away from the process that holds them.
4. Circular Wait: A cyclic chain of processes is present, and every process is holding at least one
resource and waiting for some process that is holding the resource that is required by the
immediate successive process.

Prevention of Deadlock

To avoid deadlock, it is necessary to remove at least one of the prerequisites that may lead to that. For
instance:

1. Avoid Circular Wait: Impose a global ordering on resources and require processes to request them in that order.
2. Release Resources: Ensure processes release the resources they hold if they cannot proceed any further.
3. Preemption: Allow the system to take resources away from a process under specific circumstances.
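The first technique — breaking circular wait with a global ordering — can be sketched in Python with two locks. The threads request the locks in opposite orders at the call site, but the helper always acquires them in one fixed order (here by object id; a real system would use stable resource numbers):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
finished = []

def use_both(first, second):
    # Always acquire in one global order, regardless of the order requested.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            finished.append(threading.current_thread().name)  # critical section

t1 = threading.Thread(target=use_both, args=(lock_a, lock_b))
t2 = threading.Thread(target=use_both, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print(len(finished))  # 2 — both complete; acquiring in raw opposite orders could deadlock
```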
Advantages of Deadlock
• Detection and Prevention Research : Helps to create the detection and prevention algorithms,
which in turn contribute towards the overall evolution of computer science, especially in the
management of resources.
• Resource Allocation Efficiency Studies: Assists researchers in studying the utilization and allocation of resources in operating systems, encouraging innovative designs.
• Ensures Process Fairness: When solved effectively, it maintains that no process takes more
than its fair share of the resources, hence achieving overall system fairness.
• Avoidance Strategy Improvements: Has driven innovation in techniques for minimizing deadlocks, informing resource-allocation approaches that improve system design.
Disadvantages of Deadlock
• System Freeze: The most serious disadvantage is that the system may be locked and no
process can go on as critical sections are mutually exclusive.
• Resource Wastage: Those resources trapped in the deadlocked processes cannot be utilized by
the other process, and this leads to low efficiency of the system.
• System Instability: Deadlocks can result in the unavailability of necessary services that can
result in the crashing of the system.
• High Complexity in Detection: To identify deadlocks is a computationally intensive process,
particularly when there are numerous processes and resources in the system.
What is Starvation?
Starvation is the problem that occurs when high-priority processes keep executing and low-priority processes get blocked for an indefinite time. In a heavily loaded computer system, a steady stream of higher-priority processes can prevent a low-priority process from ever getting the CPU. In starvation, resources are continuously utilized by high-priority processes. The problem of starvation can be resolved using aging, in which the priority of long-waiting processes is gradually increased.
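A toy sketch of aging: each time the scheduler picks a process, every waiting process's effective priority rises, so a low-priority process cannot wait forever. The data layout and boost factor are arbitrary choices for this illustration:

```python
def pick_with_aging(processes, age_boost=1):
    """Pick the process with the highest effective priority; age the rest."""
    chosen = max(processes, key=lambda p: p["priority"] + age_boost * p["waiting"])
    for p in processes:
        p["waiting"] = 0 if p is chosen else p["waiting"] + 1
    return chosen["pid"]

procs = [{"pid": "low", "priority": 1, "waiting": 0},
         {"pid": "high", "priority": 5, "waiting": 0}]
order = [pick_with_aging(procs) for _ in range(6)]
print(order)  # "low" eventually gets picked despite its lower base priority
```

Without the `waiting` term, `max` would select "high" on every round and "low" would starve.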
Causes of Starvation
1. Priority Scheduling: If there are always higher-priority processes available, then the lower-
priority processes may never be allowed to run.
2. Resource Utilization: We see that resources are always used by more significant priority
processes and leave a lesser priority process starved.
Prevention of Starvation
Starvation can be cured using a technique that is regarded as aging. In aging, priority of process increases
with time and thus guarantees that poor processes will equally run in the system.
Advantages of Starvation
• Prioritizes Critical Tasks: High-priority tasks run immediately, which matters in, for example, real-time systems where some tasks have strict deadlines.
• Efficiency for High-Priority Processes: Helps guarantee that critical or time-sensitive processes use resources effectively, improving performance in many systems.
• Optimizes Resource Usage for Priority Processes: Resources are continually reallocated to where they are needed most, keeping them aligned with high-priority jobs.
• Simplicity in Scheduling: Starvation is a common side effect of simple priority scheduling, which is easier to implement than more carefully balanced algorithms.
Disadvantages of Starvation
• Process Delays: Less important processes may run very late or never run at all, which can mean very long wait times or even the failure of those tasks.
• Unfair Resource Distribution: Resource allocation becomes unfair; I/O-bound and other low-priority processes may starve and never get their turn.
• System Inefficiency: In the long run, overall system efficiency may suffer as postponed lower-priority tasks accumulate and clog the system.
• Potential for Resource Starvation: Essential yet lower-priority processes may never execute, potentially resulting in poor service and degraded system health.

Memory Management in Operating System

The term memory can be defined as a collection of data in a specific format. It is used to store instructions
and process data. The memory comprises a large array or group of words or bytes, each with its own
location. The primary purpose of a computer system is to execute programs. These programs, along with
the information they access, should be in the main memory during execution. The CPU fetches instructions
from memory according to the value of the program counter.

To achieve a degree of multiprogramming and proper utilization of memory, memory management is important. Many memory management methods exist, reflecting various approaches, and the effectiveness of each algorithm depends on the situation.

What is Main Memory?


Main memory is central to the operation of a modern computer. It is a large array of words or bytes, ranging in size from hundreds of thousands to billions, and serves as a repository of rapidly available information shared by the CPU and I/O devices. Main memory is where programs and data are kept while the processor is actively using them. Because it is closely coupled to the processor, moving instructions and data into and out of the processor is extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile: RAM loses its data when power is interrupted.
What is Memory Management?
In a multiprogramming computer, the Operating System resides in a part of memory, and the rest is used
by multiple processes. The task of subdividing the memory among different processes is called Memory
Management. Memory management is a method in the operating system to manage operations between
main memory and disk during process execution. The main aim of memory management is to achieve
efficient utilization of memory.

Logical and Physical Address Space


• Logical Address Space: An address generated by the CPU is known as a “Logical Address”, also called a virtual address. The logical address space can be defined as the size of the process. A logical address can change.
• Physical Address Space: An address seen by the memory unit (i.e., the one loaded into the memory address register) is known as a “Physical Address”, also called a real address. The set of all physical addresses corresponding to these logical addresses is known as the physical address space. A physical address is computed by the MMU: the run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.
Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different types of loading:

• Static Loading: The entire program is loaded at once into a fixed address. It requires more memory space.
• Dynamic Loading: If the entire program and all data of a process must be in physical memory for the process to execute, the size of a process is limited to the size of physical memory. To improve memory utilization, dynamic loading is used: a routine is not loaded until it is called, and all routines reside on disk in a relocatable load format. One advantage of dynamic loading is that an unused routine is never loaded, which is especially useful when large amounts of code are needed only to handle infrequent cases.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more object files generated
by a compiler and combines them into a single executable file.

• Static Linking: In static linking, the linker combines all necessary program modules into a single
executable program. So there is no runtime dependency. Some operating systems support only
static linking, in which system language libraries are treated like any other object module.
• Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic
linking, “Stub” is included for each appropriate library routine reference. A stub is a small piece of
code. When the stub is executed, it checks whether the needed routine is already in memory or
not. If not available then the program loads the routine into memory.
Memory Management with Monoprogramming (Without Swapping)

This is the simplest memory management approach: memory is divided into two sections:

• One part for the operating system
• One part for the user program

In this approach:

• The operating system keeps track of the first and last locations available for allocation of the user program
• The operating system is loaded either at the bottom or at the top of memory
• Interrupt vectors are often located in low memory, so it makes sense to load the operating system in low memory
• Sharing of data and code does not make much sense in a single-process environment
• The operating system can be protected from user programs with the help of a fence register
Advantages of Monoprogramming

• It is a simple management approach


Disadvantages of Monoprogramming

• It does not support multiprogramming


• Memory is wasted
Multiprogramming with Fixed Partitions (Without Swapping)

• A memory partition scheme with a fixed number of partitions was introduced to support
multiprogramming. This scheme is based on contiguous allocation
• Memory is partitioned into a fixed number of partitions
• Each partition is a block of contiguous memory of fixed size

Logical vs Physical Address

An address generated by the CPU is commonly referred to as a logical address, while the address seen by the memory unit is known as the physical address. A logical address can be mapped to a physical address by hardware with the help of a base register; this is known as dynamic relocation of memory references.

Contiguous Memory Allocation


The main memory should accommodate both the operating system and the different client processes.
Therefore, the allocation of memory becomes an important task in the operating system. The memory is
usually divided into two partitions: one for the resident operating system and one for the user processes.
We normally need several user processes to reside in memory simultaneously. Therefore, we need to
consider how to allocate available memory to the processes that are in the input queue waiting to be brought
into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.

Memory Allocation
To make proper use of memory, it must be allocated in an efficient manner. One of the simplest methods is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. The degree of multiprogramming is thus bounded by the number of partitions.

• Fixed partition allocation: In this method, a process is selected from the input queue and
loaded into a free partition. When the process terminates, the partition becomes available for
other processes.
• Variable partition allocation: In this method, the operating system maintains a table that indicates
which parts of memory are available and which are occupied by processes. Initially, all memory
available for user processes is considered one large block of available memory, known as a
“Hole”. When a process arrives and needs memory, we search for a hole that is large enough to
store it. If one is found, we allocate only as much memory as is needed, keeping the rest
available to satisfy future requests. This raises the dynamic storage allocation problem: how to
satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First Fit
In First Fit, the first free hole that is large enough is allocated to the process. For example, a 40 KB memory block may be the first free hole that can store process A (size 25 KB) because the two blocks before it do not have sufficient space.
Best Fit
In Best Fit, we allocate the smallest hole that is big enough for the process. This requires searching the entire list, unless the list is ordered by size. For example, traversing the complete list might show that the last hole, of 25 KB, is the best fit for process A (size 25 KB). This method achieves the highest memory utilization compared to the other allocation techniques.
Worst Fit
In Worst Fit, we allocate the largest available hole to the process. This method produces the largest leftover hole, which may be more useful for subsequent requests.

Swapping
When a process is executed, it must reside in main memory. Swapping temporarily moves a process from main memory, which is fast, to secondary storage (the backing store), and later brings it back. Swapping allows more processes to be run than can fit into memory at one time. The main cost of swapping is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll out, roll in: if a higher-priority process arrives and wants service, the memory manager can swap out a lower-priority process and then load and execute the higher-priority one. After the higher-priority work finishes, the lower-priority process is swapped back into memory and continues execution.

Paging
Paging is a memory management scheme that eliminates the need for a contiguous allocation of physical
memory. This scheme permits the physical address space of a process to be non-contiguous.

• Logical Address or Virtual Address (represented in bits): An address generated by the CPU.
• Logical Address Space or Virtual Address Space (represented in words or bytes): The set
of all logical addresses generated by a program.
• Physical Address (represented in bits): An address actually available on a memory unit.
• Physical Address Space (represented in words or bytes): The set of all physical addresses
corresponding to the logical addresses.
Example:

• If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G words (1 G = 2^30)
• If Logical Address Space = 128 M words = 2^7 * 2^20 words = 2^27 words, then Logical Address = log2(2^27) = 27 bits
• If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M words (1 M = 2^20)
• If Physical Address Space = 16 M words = 2^4 * 2^20 words = 2^24 words, then Physical Address = log2(2^24) = 24 bits
The mapping from virtual to physical addresses is done by the memory management unit (MMU), which is a hardware device; this mapping scheme is known as paging.

• The Physical Address Space is conceptually divided into several fixed-size blocks, called frames.
• The Logical Address Space is also split into fixed-size blocks, called pages.
• Page Size = Frame Size
What is Thrashing?
In computer science, thrashing is the poor performance of a virtual memory (or paging) system that occurs when the same pages are loaded repeatedly because there is not enough main memory to keep them resident. Depending on the configuration and algorithms, the actual throughput of a system can degrade by multiple orders of magnitude.

To know more clearly about thrashing, first, we need to know about page fault and swapping.

• Page fault: Every program is divided into pages. A page fault occurs when a
program attempts to access data or code in its address space that is not currently in
system RAM.
• Swapping: Whenever a page fault happens, the operating system will try to fetch that page from
secondary memory and try to swap it with one of the pages in RAM. This process is called swapping.

File Systems in Operating System

A computer file is a named unit used for saving and managing data in the computer system. The data stored in the computer system is entirely in digital format, and there are various types of files that help us store different kinds of data.

What is a File System?


A file system is a method an operating system uses to store, organize, and manage files and directories on
a storage device. Some common types of file systems include:

• FAT (File Allocation Table): An older file system used by older versions of Windows and other
operating systems.
• NTFS (New Technology File System): A modern file system used by Windows. It supports
features such as file and folder permissions, compression, and encryption.
• ext (Extended File System): A file system commonly used on Linux and Unix-based operating
systems.
• HFS (Hierarchical File System): A file system used by macOS.
• APFS (Apple File System): A new file system introduced by Apple for their Macs and iOS
devices.
File Directories
A file directory is a collection of files. The directory contains information about the files, including attributes, location, and ownership. Much of this information, especially that concerned with storage, is managed by the operating system. The directory is itself a file, accessible by various file-management routines.

The information contained in a device directory includes:

• Name
• Type
• Address
• Current length
• Maximum length
• Date last accessed
• Date last updated
• Owner id
• Protection information
The operations performed on a directory are:

• Search for a file


• Create a file
• Delete a file
• List a directory
• Rename a file
• Traverse the file system
Single-Level Directory
In this, a single directory is maintained for all the users.

• Naming Problem: Users cannot have the same name for two files.
• Grouping Problem: Users cannot group files according to their needs.
Two-Level Directory
In this, a separate directory is maintained for each user.

• Path Name: Due to two levels there is a path name for every file to locate that file.
• Now, we can have the same file name for different users.
• Searching is efficient in this method.
Tree-Structured Directory
The directory is maintained in the form of a tree. Searching is efficient and also there is grouping capability.
We have absolute or relative path name for a file.

Advantages of File System


• Organization: A file system allows files to be organized into directories and subdirectories,
making it easier to manage and locate files.
• Data Protection: File systems often include features such as file and folder permissions, backup
and restore, and error detection and correction, to protect data from loss or corruption.
• Improved Performance: A well-designed file system can improve the performance of reading
and writing data by organizing it efficiently on disk.
Disadvantages of File System
• Compatibility Issues: Different file systems may not be compatible with each other, making it
difficult to transfer data between different operating systems.
• Disk Space Overhead: File systems may use some disk space to store metadata and other
overhead information, reducing the amount of space available for user data.
• Vulnerability: File systems can be vulnerable to data corruption, malware, and other security
threats, which can compromise the stability and security of the system.
File Allocation Methods
There are several types of file allocation methods. These are mentioned below.

Contiguous Allocation

A single contiguous set of blocks is allocated to a file at the time of file creation. This is a pre-allocation strategy using variable-size portions. The file allocation table needs just a single entry for each file, showing the starting block and the length of the file. This method is best for an individual sequential file: multiple blocks can be read in at a time to improve I/O performance for sequential processing, and it is also easy to retrieve a single block.
Linked Allocation (Non-Contiguous Allocation)
Allocation is on an individual block basis. Each block contains a pointer to the next block in the chain. Again, the file table needs just a single entry for each file, showing the starting block and the length of the file. Although pre-allocation is possible, it is more common simply to allocate blocks as needed. Any free block can be added to the chain, and the blocks need not be contiguous. An increase in file size is always possible if a free disk block is available.

Indexed Allocation
It addresses many of the problems of contiguous and chained allocation. In this case, the file allocation
table contains a separate one-level index for each file: The index has one entry for each block allocated to
the file. The allocation may be on the basis of fixed-size blocks or variable-sized blocks. Allocation by blocks
eliminates external fragmentation, whereas allocation by variable-size blocks improves locality.

Disk Management of the Operating System Includes:


• Disk Format
• Booting from disk
• Bad block recovery

The low-level format or physical format:

Divides the disk into sectors before storing data so that the disk controller can read and write. Each sector typically consists of:

• a header and trailer holding bookkeeping information and an error-correcting code (ECC), and
• a data area, typically 512 bytes.

On top of this physical format, the operating system imposes its own data structures to store files on the disk.

It is conducted in two stages:

1. Partition the disk into one or more groups of cylinders. Each group is treated as a logical disk.

2. Logical formatting, or "creating the file system": the OS stores the initial file-system data structures on the disk, including maps of free and allocated space.

For efficiency, most file systems group blocks into clusters: disk I/O is done in blocks, while file I/O is done in clusters.

Boot block:

• A bootstrap program is required for a computer to start booting after it is powered up or
rebooted. It initializes all components of the system, from CPU registers to device controllers
and the contents of main memory, then locates the OS kernel on disk, loads it into memory,
and jumps to an initial address to start operating-system execution.
• The bootstrap is stored in Read-Only Memory (ROM), because ROM requires no initialization
and sits at a fixed location where the processor can begin executing when powered up or
reset. Because of its read-only nature, ROM also cannot be infected by a computer virus.
• The difficulty is that changing the bootstrap code requires changing the ROM hardware chips.
Therefore, most systems store only a small bootstrap loader program in the boot ROM, whose
job is to bring the full bootstrap program in from disk.
• The full bootstrap program is stored at a fixed location on disk known as the “boot blocks”,
and a modified version can simply be written onto the disk.
• A disk that has a boot partition is called a boot disk or system disk.

Bad Blocks:

• Disks are error-prone because their moving parts have small tolerances.
• Most disks even leave the factory with bad blocks, which are handled in a variety of ways.
• The controller maintains a list of bad blocks.
• The controller can be instructed to replace each bad sector logically with one of the spare
sectors. This scheme is known as sector sparing or forwarding.
• A soft error triggers the data-recovery process.
• However, an unrecoverable hard error may result in data loss and require manual intervention.
• Failure of a disk can be total, requiring replacement and restoration from backup, or limited
to a few bad sectors.
Some common disk management techniques used in operating systems include:

1. Partitioning: This involves dividing a single physical disk into multiple logical partitions. Each
partition can be treated as a separate storage device, allowing for better organization and
management of data.
2. Formatting: This involves preparing a disk for use by creating a file system on it. This process
typically erases all existing data on the disk.
3. File system management: This involves managing the file systems used by the operating system
to store and access data on the disk. Different file systems have different features and
performance characteristics.
4. Disk space allocation: This involves allocating space on the disk for storing files and directories.
Some common methods of allocation include contiguous allocation, linked allocation, and indexed
allocation.
5. Disk defragmentation: Over time, as files are created and deleted, the data on a disk can become
fragmented, meaning that it is scattered across the disk. Disk defragmentation involves
rearranging the data on the disk to improve performance.

Advantages of disk management include:

1. Improved organization and management of data.


2. Efficient use of available storage space.
3. Improved data integrity and security.
4. Improved performance through techniques such as defragmentation.
Disadvantages of disk management include:

1. Increased system overhead due to disk management tasks.


2. Increased complexity in managing multiple partitions and file systems.
3. Increased risk of data loss due to errors during disk management tasks.

Overall, disk management is an essential aspect of operating-system management and can greatly improve system performance and data integrity when implemented properly.

Device Management in Operating System

The process of implementation, operation, and maintenance of a device by an operating system is called
device management. When we use computers we will have various devices connected to our system like
mouse, keyboard, scanner, printer, and pen drives. So all these are the devices and the operating system
acts as an interface that allows the users to communicate with these devices. An operating system is
responsible for successfully establishing the connection between these devices and the system. The
operating system uses the concept of drivers to establish a connection between these devices with the
system.

What is Device Management?


Device management within an operating system controls every piece of hardware and virtual device on a PC or computer. Input/output devices are assigned to processes by the device-management system based on their importance. Depending on the situation, these devices may also be temporarily or permanently reallocated.

Devices are usually physical hardware, such as computers, laptops, servers, and cell phones. They can also be virtual, like virtual switches or virtual machines. A program may require a variety of computer resources (devices) to run to completion, and it is the operating system's responsibility to allocate them wisely. The operating system alone determines whether a resource is available. It handles not only device allocation but also deallocation: a device or resource must be reclaimed from a process once its use is over.

Functions of Device Management


• Keeps track of all devices; the program responsible for this is called the I/O
controller.
• Monitoring the status of each device such as storage drivers, printers, and other peripheral
devices.
• Enforcing preset policies and making a decision on which process gets the device when and for
how long.
• Allocates and deallocates the device efficiently.
Types of Device Management
There are three main types of devices:

• Block Device: It stores information in fixed-size blocks, each with its own address. Example:
disks.
• Character Device: It delivers or accepts a stream of characters; individual characters are not
addressable. Examples: printers, keyboards.
• Network Device: It is used for transmitting data packets.
Features of Device Management in Operating System
• The operating system is responsible for managing device communication through the devices'
respective drivers.
• The operating system keeps track of all devices by using a program known as the input/output
controller.
• It decides which process gets a device, when, and for how long.
• The OS is responsible for fulfilling device requests to access a process.
• It connects devices to various programs efficiently and without error.
• It deallocates devices when they are not in use.
Types of Devices

1. Dedicated Device

Certain devices are assigned to only one job at a time and are held until that job releases them. Plotters, printers, tape drives, and other similar devices require this kind of allocation method because sharing them among several users at the same time would be impractical. The drawback of these devices is inefficiency: the device is assigned to a single user for the entire job, even during periods when it is not actually being used.

2. Shared Device

These devices can be assigned to numerous processes. A disk (DASD) can be shared concurrently by many processes by interleaving their requests. The interleaving is closely monitored by the Device Manager, and any conflicts must be resolved by pre-established policies.

3. Virtual Device

Virtual devices are dedicated devices that have been converted into shared devices, making them a hybrid of the two types. For instance, a spooling program can turn a printer into a shared device by routing all print requests to a disk. A print job is not delivered straight to the printer; it is held on the disk until it is ready with all the necessary formatting and sequencing, at which point it is sent to the printer. This method can increase usability and performance by turning a single printer into a number of virtual printers.

What are the Various Techniques for Accessing a Device?


• Polling: In this instance, a CPU keeps an eye on the status of the device to share data. Busy-
waiting is a drawback, but simplicity is a plus. In this scenario, when an input/output operation is
needed, the computer simply keeps track of the I/O device’s status until it’s ready, at which time it
is accessed. Stated differently, the computer waits for the device to be ready.
• Interrupt-Driven I/O: Notifying the associated driver of the device’s availability is the device
controller’s job. One interrupt for each keyboard input results in slower data copying and
movement for character devices, but the advantages include more effective use of CPU cycles. A
block of bytes is created from a serial bit stream by a device controller. It also does error
correction if needed. It consists of two primary parts: a data buffer that an operating system can
read or write to, and device registers for communication with the CPU.
• DMA (Direct Memory Access): Data transfers are carried out by a separate (DMA) controller.
This approach has the benefit of not requiring the CPU to copy data, but the drawback
that a process cannot access data that is in transit.
• Double Buffering: This mode of access uses two buffers. One fills up while the other is used,
and vice versa. This technique is frequently employed in graphics and animation to hide the
line-by-line scanning from the viewer.
Device Drivers
The operating system is responsible for managing device communication through each device's driver. A system has many devices, such as a mouse, printer, and scanner, and the operating system establishes communication between these devices and the computer through their respective drivers. Every device has its own driver; without it, the device cannot communicate with the rest of the system.

Device Tracking
The operating system keeps track of all devices by using a program known as the input/output controller. Besides enabling communication through drivers, the operating system continuously monitors the status of every connected device. If a device requests a process currently being executed by the CPU, the operating system signals the CPU to release that process and move on to the next one from main memory, so that the requested process can fulfill the device's request. To do this, the operating system must continuously check the status of all devices, which is the job of the specialized I/O controller program.

Process Assignment
The operating system decides which process to assign to the CPU and for how long. It selects an appropriate process from main memory and sets how long that process is allowed to execute on the CPU. The operating system is also responsible for fulfilling device requests to access a process: if the printer requests the process currently being executed by the CPU, the operating system tells the CPU to release that process and assigns it to the printer.

Connection
The operating system connects devices to various programs efficiently and without error. We cannot access a keyboard, mouse, printer, or scanner directly; we access these devices through software. The operating system helps establish an efficient, error-free connection with these devices with the help of various software applications.

Device Allocation
Device allocation refers to the process of assigning specific devices to processes or users. It ensures that
each process or user has exclusive access to the required devices or shares them efficiently without
interference.

Device Deallocation
The operating system deallocates devices when they are no longer in use. Devices and their drivers occupy memory while in use, so it is the responsibility of the operating system to continuously check which devices are in use and to release any device that is no longer being used.

What is OpenMP?
OpenMP is a set of compiler directives as well as an API for programs written in C, C++, or FORTRAN that
provides support for parallel programming in shared-memory environments. OpenMP identifies parallel
regions as blocks of code that may run in parallel. Application developers insert compiler directives into
their code at parallel regions, and these directives instruct the OpenMP run-time library to execute the
region in parallel. The following C program illustrates a compiler directive above the parallel region
containing the printf() statement:

#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    /* sequential code */

    /* the directive must end at the newline; the block that follows
       is executed by a team of threads */
    #pragma omp parallel
    {
        printf("I am a parallel region.\n");
    }

    /* sequential code */
    return 0;
}
