OS Full Units
UNIT – I
Introduction: operating system, history (1990s to 2000 and beyond), distributed computing, parallel
computation. Process concepts: definition of process, process states – Life cycle of a process, process
management – process state transitions, process control block (PCB), process operations, suspend and
resume, context switching, Interrupts – Interrupt processing, interrupt classes, Inter process communication
- signals, message passing.
UNIT – II
Asynchronous concurrent processes: mutual exclusion - critical section, mutual exclusion primitives,
implementing mutual exclusion primitives, Peterson's algorithm, software solutions to the mutual exclusion
problem - n-thread mutual exclusion - Lamport's Bakery Algorithm. Semaphores – Mutual exclusion with
Semaphores, thread synchronization with semaphores, counting semaphores, implementing semaphores.
Concurrent programming: monitors, message passing.
UNIT – III
Deadlock and indefinite postponement: Resource concepts, four necessary conditions for deadlock,
deadlock prevention, deadlock avoidance and Dijkstra's Banker's algorithm, deadlock detection, deadlock
recovery.
UNIT – IV
Job and processor scheduling: scheduling levels, scheduling objectives, scheduling criteria, preemptive vs
non-preemptive scheduling, interval timer or interrupting clock, priorities, scheduling algorithms - FIFO
scheduling, RR scheduling, quantum size, SJF scheduling, SRT scheduling, HRN scheduling, multilevel
feedback queues, Fairshare scheduling.
UNIT – V
Real Memory organization and Management: Memory organization, Memory management, Memory
hierarchy, Memory management strategies, contiguous vs non-contiguous memory allocation, single user
contiguous memory allocation, fixed partition multiprogramming, variable partition multiprogramming,
Memory swapping.
Virtual Memory organization: virtual memory basic concepts, multilevel storage organization, block
mapping, paging basic concepts, segmentation, paging /segmentation systems.
Virtual Memory Management: Demand Paging, Page replacement strategies.
UNIT - I
INTRODUCTION:
• An operating system acts as an intermediary between the user of a computer and the computer
hardware.
• OS is software that manages the computer hardware.
• Purpose of OS: Provide an environment in which a user can execute programs in a convenient and
efficient manner.
• Mainframe OS: Optimize utilization of hardware.
• Personal computer OS: Support complex games, business applications, etc.
• Mobile computer OS: A user can easily interface with the computer to execute programs.
• The operating system is the one program running at all times on the computer – usually called the
kernel.
Along with the kernel, there are two other types of programs.
System programs: Associated with the OS but are not necessarily part of the kernel.
Application programs: All programs not associated with the operation of the system.
• Mobile OS: core kernel & middleware.
Middleware: Set of software frameworks that provide additional services to application
developers.
Features of a core kernel with middleware: support for databases, multimedia, graphics, etc.
WHAT IS AN OS?
• The software that controls the hardware.
• The layer of software between applications and the hardware.
• An operating system is software that enables applications to interact with a computer's hardware.
• The operating system is a "black box" between the applications and the hardware they run on that
ensures the proper result, given appropriate inputs.
• Operating systems are primarily resource managers – they manage hardware, including processors,
memory, input/output devices and communication devices.
• They manage applications and other software abstractions.
HISTORY OF OPERATING SYSTEM
The First Generation (1945-55) : Vacuum Tubes and Plugboards
• The first electronic computers were developed without any operating system.
• There were no programming languages.
• In these early days, a single group of people designed, built, programmed, operated, and
maintained each machine.
• All programming was done in absolute machine language.
• Programs were often entered by wiring up plugboards to control the machine's basic functions.
• By the early 1950s, punched cards started to be used.
• It was now possible to write programs on cards and read them in, instead of using
plugboards.
The Second Generation (1955-65) : Transistors and Batch Systems
Batch system
Batch processing is the execution of a series of programs ("jobs") on a computer without manual
intervention.
Figure 1.4 An early batch system. (a) Programmers bring cards to 1401. (b) 1401 reads batch of jobs
onto tape. (c) Operator carries input tape to 7094. (d) 7094 does computing. (e) Operator carries
output tape to 1401. (f) 1401 prints output.
After a job finished, its results were written to an output tape, which was then printed offline by a small
computer such as the IBM 1401.
The Third Generation (1965-1980) : ICs and Multiprogramming
• The 7094 was a word-oriented, large-scale scientific computer used for numerical
calculations in science and engineering.
• On the other hand, the 1401 was a character-oriented commercial computer widely
used for commercial work.
• Both of these machines were very large, and many customers also needed smaller machines.
• IBM produced the System/360 to solve these problems.
• All the machines had the same architecture and instruction set, so programs written for one machine
could run on all the others.
• The 360 was designed to handle both scientific and commercial computing.
• The 360 was the first major computer line to use (small-scale) integrated circuits.
• OS/360 was the operating system used in these third generation computers.
• Multiprogramming was first used in OS/360.
3. Linux Growth:
Linux matured with distributions like Ubuntu, Red Hat, and Debian.
It became the standard OS for servers, supercomputers, and embedded systems.
4. Mobile OS Emergence:
iOS (2007) and Android (2008) transformed mobile computing.
Based on UNIX and Linux respectively, they brought touch interfaces and app
ecosystems.
5. Cloud & Virtualization:
Operating systems adapted to cloud computing and virtualization (e.g., via VMware,
Docker, AWS).
Server OSes became modular and container-friendly.
6th Generation (2010 – Present): smartphones, cloud systems, IoT devices ▶ iOS, Android, Windows 10/11,
Linux (Ubuntu, RHEL); virtual machines, containers, embedded Linux, cloud OS ▶ Docker, Kubernetes,
RTOS, Embedded
DISTRIBUTED COMPUTING
What is Distributed Computing?
Distributed computing refers to a system where processing and data storage are distributed across
multiple devices or systems, rather than being handled by a single central device. In a distributed system,
each device or system has its own processing capabilities and may also store and manage its own data.
These devices or systems work together to perform tasks and share resources, with no single device
serving as the central hub.
One example of a distributed computing system is a cloud computing system, where resources
such as computing power, storage, and networking are delivered over the Internet and accessed on
demand. In this type of system, users can access and use shared resources through a web browser or
other client software.
Components
There are several key components of a Distributed Computing System
• Devices or Systems: The devices or systems in a distributed system have their own processing
capabilities and may also store and manage their own data.
• Network: The network connects the devices or systems in the distributed system, allowing them to
communicate and exchange data.
• Resource Management: Distributed systems often have some type of resource management system
in place to allocate and manage shared resources such as computing power, storage, and networking.
The architecture of a Distributed Computing System is typically a Peer-to-Peer Architecture, where
devices or systems can act as both clients and servers and communicate directly with each other.
Characteristics
There are several characteristics that define a Distributed Computing System
• Multiple Devices or Systems: Processing and data storage are distributed across multiple devices or
systems.
• Peer-to-Peer Architecture: Devices or systems in a distributed system can act as both clients and
servers, as they can both request and provide services to other devices or systems in the network.
• Shared Resources: Resources such as computing power, storage, and networking are shared among
the devices or systems in the network.
• Horizontal Scaling: Scaling a distributed computing system typically involves adding more devices
or systems to the network to increase processing and storage capacity. This can be done through
hardware upgrades or by adding additional devices or systems to the network.
PARALLEL COMPUTATION
Parallel computation is a method of performing multiple calculations or processes simultaneously,
with the goal of solving problems more efficiently, especially those that are large or complex. It involves
dividing a task into smaller sub-tasks that can be executed at the same time on multiple processors or
cores.
Distributed computing vs. parallel computation:
• Distributed computing shares data among connected computers, often leading to higher latency; parallel
computation shares less data, typically within the same memory space, reducing latency.
• Distributed computing is more resilient to hardware failures, as tasks can be rerouted to other nodes;
parallel computation is less resilient to hardware failures, but individual tasks are isolated.
• Distributed computing can scale horizontally by adding more machines to the network; parallel
computation can scale vertically by adding more processors or cores to a single machine.
• Distributed computing has a complex programming model due to the need for handling distributed
resources; parallel computation often uses simpler programming models, especially in shared-memory
architectures.
• Distributed computing may have dependencies on remote data, affecting execution speed; parallel
computation minimizes data dependency, allowing tasks to execute independently.
• Distributed computing offers greater flexibility in terms of hardware and geographical distribution;
parallel computation is more rigid in terms of hardware requirements, often centralized.
• In distributed computing, resource utilization may vary based on the load and distribution of tasks;
parallel computation optimizes resource utilization by dividing tasks efficiently among processors.
PROCESS CONCEPTS:
DEFINITION OF PROCESS:
• A program in execution
• An asynchronous activity
• The "animated spirit" of a procedure
• The "locus of control" of a procedure in execution
The data structure that describes a process is called a "process descriptor" or a "process control block" (PCB).
There are two key concepts:
1. A process is an "entity" – each process has its own address space, which consists of a text region, a
data region and a stack region.
Text region: stores the code that the processor executes.
Data region: stores variables and dynamically allocated memory that the process uses during
execution.
Stack region: stores instructions and local variables for active procedure calls.
2. A process is a "program in execution".
PROCESS STATES
• When a process executes, it passes through different states.
• These stages may differ in different operating systems.
• In general, a process can have one of the following five states at a time.
• New: This is the initial state, when a process is first started/created.
• Ready: The process is waiting to have the processor allocated to it by the operating system so
that it can run.
o A process may come into this state after leaving the New state, or while running, if it is interrupted
by the scheduler so that the CPU can be assigned to some other process.
• Running: After the Ready state, the process state is set to Running and the processor executes its
instructions.
• Waiting: The process moves into the Waiting state if it needs to wait for a resource, such as user
input, or for a file to become available.
• Terminated: Once the process finishes its execution, or is terminated by the operating system, it is
moved to the Terminated state, where it waits to be removed from main memory.
(In the simpler two-state model, any process that is not currently using the CPU – whether waiting for
input or data, or simply paused – is said to be Not Running.)
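The five-state life cycle can be sketched as a transition table; this is an illustrative Python sketch whose state names mirror the list above (it models the textbook diagram, not any real OS):

```python
from enum import Enum, auto

# Illustrative sketch of the five-state process life cycle described above;
# the transition table mirrors the state descriptions, not any real OS.
class ProcessState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

TRANSITIONS = {
    ProcessState.NEW: {ProcessState.READY},
    ProcessState.READY: {ProcessState.RUNNING},       # dispatched by the scheduler
    ProcessState.RUNNING: {ProcessState.READY,        # preempted by the scheduler
                           ProcessState.WAITING,      # blocked waiting for a resource
                           ProcessState.TERMINATED},  # finished or killed
    ProcessState.WAITING: {ProcessState.READY},       # the awaited resource became available
    ProcessState.TERMINATED: set(),
}

def can_transition(src, dst):
    """True if the life cycle permits moving directly from src to dst."""
    return dst in TRANSITIONS[src]
```

Note that a Waiting process cannot go straight back to Running: when its resource becomes available it first re-enters Ready and competes for the CPU again.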
PROCESS MANAGEMENT
Process Management in an operating system (OS) involves overseeing the life cycle of processes, from
their creation to termination. This ensures efficient CPU utilization, multitasking, and system stability.
PROCESS OPERATIONS
A process may spawn a new process
– The creating process is called the parent process
– The created process is called the child process
– Exactly one parent process creates a child
– When a parent process is destroyed, operating systems typically respond in one of two ways:
• Destroy all child processes of that parent
• Allow child processes to proceed independently of their parents
Fig. Process creation hierarchy.
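Process spawning can be illustrated with the POSIX `fork` call, which Unix-like systems use to create a child process (a minimal Python sketch; `os.fork` is Unix-only, and Python is an illustrative choice):

```python
import os

# A minimal sketch of parent/child process creation on a Unix-like system.
pid = os.fork()                          # the parent "spawns" a child process
if pid == 0:
    # Child process: os.fork() returned 0 here.
    os._exit(0)                          # child terminates immediately
else:
    # Parent process: pid holds the child's process ID.
    child, status = os.waitpid(pid, 0)   # wait for (and reap) the child
```

Here the parent waits for its child rather than being destroyed first, so neither of the two parent-destruction policies above comes into play.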
SUSPEND AND RESUME
Suspending a process
• Indefinitely removes the process from contention for time on a processor, without destroying it.
• Useful for detecting security threats and for software debugging purposes.
• A suspension may be initiated by the process being suspended or by another process.
• A suspended process must be resumed by another process.
• Two suspended states:
• suspendedready
• suspendedblocked
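Suspension initiated by another process can be sketched with the POSIX signals SIGSTOP and SIGCONT (an illustrative, Unix-only Python sketch; the `sleep 5` child stands in for any running process):

```python
import os
import signal
import subprocess

# One process suspending and resuming another via POSIX signals.
# The "sleep 5" child is an arbitrary stand-in for any running process.
child = subprocess.Popen(["sleep", "5"])

os.kill(child.pid, signal.SIGSTOP)   # suspend: child leaves contention for the CPU
# A stopped process cannot resume itself; another process must send SIGCONT.
os.kill(child.pid, signal.SIGCONT)   # resume
child.terminate()                    # clean up: send SIGTERM
child.wait()                         # reap the child
```

This mirrors the point above that a suspended process must be resumed by another process.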
CONTEXT SWITCHING
Context Switching in an operating system is a critical function that allows the CPU to efficiently
manage multiple processes. By saving the state of a currently active process and loading the state of
another, the system can handle various tasks simultaneously without losing progress. This switching
mechanism ensures optimal use of the CPU, enhancing the system's ability to perform multitasking
effectively.
Fig. Working of context switching.
INTERRUPT:
Interrupts can come from various sources, and they can be categorized into two main types:
1. Hardware Interrupts:
These are triggered by external hardware devices, like keyboards, mice, or network interfaces, to
signal the CPU that they need processing.
For example:
o Keyboard input: When a key is pressed, a hardware interrupt is generated to inform
the CPU to process the input.
o Timer interrupts: Generated by a timer to ensure the CPU doesn't get stuck in
long-running processes.
o I/O devices: Devices like hard drives, printers, etc., can send
interrupts when they are ready to transfer data.
Hardware interrupts are generally further classified into:
Maskable interrupts (IRQ): These can be disabled (masked) by the CPU if it is currently
processing something more urgent.
Non-maskable interrupts (NMI): These cannot be disabled and typically indicate critical
hardware errors, like a system crash or power failure.
2. Software Interrupts:
These are triggered by software programs to request a service from the operating system or to
handle a system call. For instance, a program may need access to system resources like file
handling, memory allocation, or input/output operations.
Examples include system calls for reading/writing files, managing memory, or handling
processes.
INTERRUPT CLASSES:
Interrupt classes in operating systems can be categorized by their source and behavior, including
hardware, software, timer, external, internal (exceptions), and maskable/non-maskable interrupts. Hardware
interrupts originate from external devices, while software interrupts are triggered by programs or
exceptional conditions.
Detailed Breakdown:
Hardware Interrupts:
These interrupts are generated by external hardware devices like keyboards, mice, network cards, and
other peripheral devices.
Software Interrupts:
These interrupts are generated by software or due to exceptional conditions, such as an error or a
system call.
Timer Interrupts:
These are periodic interrupts that occur at regular intervals, often used for scheduling tasks or
time-based operations.
External Interrupts:
Similar to hardware interrupts, these originate from external sources, but may also include
interrupts generated by other modules within the system.
Internal Interrupts (Exceptions):
These are interrupts caused by exceptional conditions within the processor itself, such as a
division by zero or attempting to access an invalid memory address.
Maskable vs. Non-Maskable Interrupts:
Maskable interrupts can be temporarily disabled by the operating system, while non-maskable
interrupts are critical and must be handled immediately.
Interrupt Priority:
Interrupts can be assigned different priorities to determine which ones should be handled first.
Interrupt Handling:
When an interrupt occurs, the processor suspends its current task, saves the state, and executes an
interrupt handler routine to handle the interrupt. After the interrupt is handled, the processor restores its
state and resumes the original task.
INTER PROCESS COMMUNICATION (IPC):
Common IPC mechanisms (mechanism – description – typical use):
• FIFOs (Named Pipes): Like pipes but with a name; can be used by unrelated processes. – Communication
between unrelated processes.
• Message Queues: Messages are sent to and retrieved from a queue maintained by the OS. – Queue-based
communication.
• Shared Memory: A memory segment is shared between processes; the fastest IPC, but needs
synchronization. – Large data exchange with synchronization.
• Semaphores: Used to control access to shared resources (mostly for synchronization). – Prevent race
conditions.
• Signals: The OS sends a simple signal to a process (like an interrupt). – Process control (e.g., kill, notify).
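Of these mechanisms, a pipe between related processes can be sketched as follows (a minimal, Unix-only Python sketch):

```python
import os

# A minimal sketch of an anonymous pipe between related processes
# (a parent and the child it forks); Unix-only.
r, w = os.pipe()                     # r: read end, w: write end
pid = os.fork()
if pid == 0:
    # Child: writes one message into the pipe, then exits.
    os.close(r)
    os.write(w, b"hello from child")
    os.close(w)
    os._exit(0)
else:
    # Parent: reads the child's message from the other end.
    os.close(w)
    data = os.read(r, 1024)
    os.close(r)
    os.waitpid(pid, 0)               # reap the child
```

An anonymous pipe like this only connects related processes; a FIFO (named pipe) lifts that restriction by giving the pipe a filesystem name.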
SIGNALS:
What is a Signal?
A signal is a limited form of inter-process communication used in Unix/Linux systems. It is a software
interrupt sent to a process to notify it that an event has occurred.
Common Signals:
Signal Description
SIGINT Interrupt (Ctrl+C)
SIGTERM Termination request
SIGKILL Force kill a process (cannot be caught)
SIGSTOP Stop/suspend process
SIGCONT Continue a stopped process
SIGUSR1 User-defined signal 1
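Installing a handler for one of these signals can be sketched in Python (an illustrative sketch; SIGUSR1 is chosen because, unlike SIGKILL, it can be caught):

```python
import os
import signal

# Catching a signal with a handler. SIGUSR1 is the user-defined signal
# from the table above; SIGKILL, by contrast, cannot be caught this way.
received = []

def handler(signum, frame):
    received.append(signum)              # runs when the signal is delivered

signal.signal(signal.SIGUSR1, handler)   # install the handler
os.kill(os.getpid(), signal.SIGUSR1)     # send SIGUSR1 to this very process
```

The handler interrupts normal control flow when the signal arrives, which is why signals are described above as a software interrupt.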
MESSAGE PASSING:
What is Message Passing?
Message passing is an IPC method where processes send and receive messages using OS-provided
mechanisms like message queues, mailboxes, or sockets.
Characteristics:
Direct or indirect communication
Synchronous (blocking) or asynchronous (non-blocking)
Supports structured communication (e.g., with headers, priorities)
Key Methods:
Mechanism Description
Message Queues Queue in the kernel that stores messages sent between processes.
Sockets Useful for both local and network communication.
Mailboxes Named communication objects, used mostly in certain OSes such as Windows or an RTOS.