Operating System (OS)

An operating system (OS) is a crucial software component that manages computer hardware and software resources. It serves as an intermediary between users and the computer hardware, allowing users to interact with the system and run applications.

History of Computing

1940-1950: Early Computers
• This decade marked the rise of early computers, with notable companies like IBM and GM emerging.
• The primary focus was on single-task operations.

1950-1960: Mainframe Computers
• The introduction of the IBM 360 represented a significant advancement.
• These systems were designed for commercial use, mainly for calculations and data processing.

1961-1970: Time-Sharing Systems
• The concept of time-sharing was introduced in 1961, allowing multiple users to share computer resources.
• During this period, the UNIX operating system was developed by Ken Thompson and Dennis Ritchie, becoming one of the most reliable operating systems.

1970-1980: Personal Computers
• The first personal computers were introduced, including desktop models.
• Notable operating systems included MS-DOS and the Apple Macintosh.

1980-1990: Graphical User Interfaces (GUIs)
• The introduction of graphical interfaces transformed user interactions with computers, moving from text-based to graphical interactions.
• In 1985, Microsoft launched Windows 1.0, developed by Bill Gates, which allowed users to interact using clicks and graphical elements.
• Linux was introduced as an open-source alternative to Windows, enabling developers to modify and enhance its features.

1990-2000: Networking and Advanced Operating Systems
• Networking became essential, allowing computers to communicate over networks.
• Notable Windows versions during this decade included Windows 95, Windows 2000, and Windows XP.
• The introduction of 32-bit computing significantly increased processing capacity.

2000-2010: Evolution of Windows
• Windows XP developed into Windows 7, making things easier and better for users.
• Windows Vista had many updates but got mixed feedback.
• Windows 7 simplified things with icons and took away the search bar.
• Windows 8 (2012) introduced touch features, making it easier to use on tablets.
• Windows 10 and Windows 11 (2021) added new technologies like the Internet of Things (IoT) and artificial intelligence (AI) to help with decision-making.
• Virtualization technology allowed multiple virtual computers to run on one machine, reducing the need for extra hardware.

Modules of Operating System:
• Process Management:
Controls the running of programs, making sure each gets time on the CPU and doesn't interfere with others.
• Memory Management:
Decides how memory is shared among programs and keeps track of what's being used.
• File System Management:
Organizes and controls data storage, letting you create, delete, and manage files.
• Input/Output Management:
Handles communication between the computer and devices like keyboards or printers.
• Security:
Protects data and resources by controlling access and preventing unauthorized use.
• Network Management:
Manages connections between computers for sharing data over networks.
• Interrupt Management:
Responds to signals from devices or programs that need immediate attention, pausing other tasks temporarily.

Lecture # 02

Computer system

A computer system is a combination of hardware, software, and peripheral devices that work together to perform computational tasks and manage data.

Hardware: It provides the basic computing resources. It includes the central processing unit (CPU), memory (RAM), storage devices (hard drives or SSDs), input devices (keyboard, mouse), and output devices (monitor, printer).

Operating System (OS): It controls and coordinates the use of hardware resources among various application programs and users. The OS acts as an intermediary between the hardware and the software, managing tasks such as memory allocation, process scheduling, input/output operations, and file management to ensure that programs run efficiently and users can interact with the system.

Users: They include people, machines, other computers, etc.

System software manages the hardware of a computer and provides a platform for running application software.

Application program:

An application program (or application software) is a type of software designed to help users perform specific tasks or solve particular problems.

Examples of application programs include:
• Word processors (e.g., Microsoft Word) – for writing and editing text documents.
• Web browsers (e.g., Google Chrome, Firefox) – for browsing the internet.
• Spreadsheet software (e.g., Microsoft Excel) – for organizing data in tables and performing calculations.
• Media players (e.g., VLC, Windows Media Player) – for playing audio and video files.
• Graphic design software (e.g., Adobe Photoshop, Canva) – for creating and editing images or designs.

Objective of OS:

Convenience:
The primary objective of the OS is convenience—the OS makes the computer system easy and efficient to use, allowing users to focus on their tasks without needing to understand technical details.

Efficiency:
The secondary objective is efficiency. Efficiency refers to how well the operating system manages the computer's resources, such as the CPU, memory, and storage. The goal is to use these resources as effectively as possible, ensuring that programs run quickly and smoothly while also minimizing errors. Both speed and accuracy matter for efficiency.

Ability to evolve:
The OS should allow effective development, testing, and introduction of new system functions without interfering with service.

Role of operating system

Govern:
The operating system functions like a government, managing the basic resources of a computer system: hardware, software, and data. The OS provides the means and environment for the proper use of these resources, ensuring that they are allocated and utilized effectively.

Resource allocator:
A computer system has many resources, and each resource is used to solve some problem. The operating system is responsible for managing the use of system resources and decides which request should obtain the required resource.

Control program:
It controls the execution of user programs and devices; it is responsible for monitoring and controlling the operation of software. It also controls the hardware components of the computer system and ensures that resources are used properly.

Lecture #03

BATCH OS:

What is a Batch Operating System:
A batch operating system processes jobs in groups (called batches) instead of handling them one by one. This was common in the 1970s, especially with large mainframe computers. Users do not interact directly with the system; instead, they prepare their tasks (jobs) using offline devices like punch cards and give them to a computer operator.

Jobs with similar needs are grouped together and run at the same time to speed up the process. Once a job is completed, its memory is freed up for the next task.

Types of Batch Operating Systems (Simplified)

1. Simple Batch System:
o In this type, users don't interact directly with the computer. They prepare jobs (tasks) with program instructions, control information, and data, usually on punch cards.
o Jobs are executed one by one, and the output appears after some time.

2. Multi-programmed Batch System:
o This system stores many jobs in memory at once.
o The operating system selects a job, executes it, and if the job needs to wait for resources, the system switches to another job. This improves CPU utilization and makes the system more efficient.

Advantages:
• Increased Throughput: Efficiently handles a large volume of jobs.
• Reduced CPU Idle Time: Minimizes user interaction, allowing for automated processing of repetitive tasks.
• Time-Saving: Jobs with similar requirements are grouped to enhance processing speed.

Disadvantages:
• Debugging Challenges: Errors may not be discovered until the entire batch is executed.
• Lack of Flexibility: Users must wait for the entire batch to complete, which can lead to delays.
• Complex Job Scheduling: If one job fails, it may delay subsequent jobs, leading to inefficiencies.
MULTIPROGRAMMING OS:

In operating systems, multiprogramming refers to a technique that allows multiple programs to reside in memory and be executed by the CPU concurrently.

The OS manages these programs, ensuring efficient CPU utilization by switching between tasks when one is idle, such as waiting for input/output (I/O) operations.

How it Works:
• In a multiprogramming system, multiple programs are loaded into memory at the same time.
• The OS switches between these programs, executing them in a way that keeps the CPU busy.
• If one program is waiting for an I/O operation to complete, the OS assigns the CPU to another ready program.

Key Concepts:
1. CPU Scheduling: The OS uses algorithms (like Round Robin or Shortest Job Next) to decide which process to execute next.
2. Context Switching: The process of saving the current state of a program and loading the state of another.
3. Memory Management: Multiple programs require proper management of the available memory through techniques like partitioning or paging.

Advantages:
• Efficient CPU Utilization: The CPU is not left idle while waiting for a task to complete.
• Increased Productivity: Multiple users or programs can share the system resources simultaneously.
• Faster Response Time: For users, it appears that multiple tasks are being processed at the same time.

Disadvantages:
1. More memory needed – Multiple programs need to be in memory.
2. Can slow down – Too many programs can make the computer slower.
3. Complex management – The OS needs to carefully manage which program runs and when.

Types of Multiprogramming Operating Systems

Here are the two main types:

1. Multitasking Operating System
• Definition: Runs two or more programs at the same time by quickly switching between them.

2. Multiuser Operating System
• Definition: Allows multiple users to access a powerful central computer at the same time from different terminals.

CLUSTER OPERATING SYSTEM:

A cluster operating system (Cluster OS) is designed to manage a cluster of interconnected computers (nodes) working together as a single system. The primary goal is to provide high availability, scalability, and reliability for applications by distributing workloads across multiple machines.

A cluster system uses multiple CPUs to complete a task. It consists of two or more individual systems combined together. These systems share storage and processing resources and are closely linked through a LAN network.

Types:

Asymmetric Clustering System

In an asymmetric clustering system, one node is in hot-standby mode, while the other nodes run essential applications.
• Hot Standby Mode: This mode acts as a failsafe. The hot-standby node is part of the system and continuously monitors the active server.
• Failover: If the active server fails, the hot-standby node quickly takes over its role, ensuring the system remains operational.

Symmetric Clustering System

In a symmetric clustering system, multiple nodes run all applications and monitor each other simultaneously.
• Resource Utilization: All hardware resources are actively used, making this system more efficient than asymmetric clustering systems.
• Redundancy: Since there are no standby nodes, every node contributes to processing tasks and ensuring system reliability.

Real-Time Operating System:

A Real-Time Operating System (RTOS) is designed for applications requiring immediate response to events. It ensures that tasks are completed within strict time constraints by prioritizing critical tasks.

Hard Real-Time Systems:
• Definition: A hard real-time system guarantees that tasks will be completed within strict time constraints. If a deadline is missed, it can lead to severe consequences or system failures.
• Characteristics:
o Deterministic Behavior: Responses to inputs and outputs occur within a specified timeframe.
o Guaranteed Processing: Tasks must be completed by their deadlines to ensure system reliability.
o Examples:
▪ Aerospace Systems: Software used in pilot control systems for jets, where any delay could jeopardize safety.
▪ Industrial Machines: Systems that control machinery, where timing is critical for safe operations.

Soft Real-Time Systems:
• Definition: A soft real-time system is less restrictive than a hard real-time system. It does not guarantee that tasks will be completed within a specific time frame, but aims for timely execution.
• Characteristics:
o Flexibility: These systems can tolerate missed deadlines without severe consequences.
o Performance: While tasks are expected to be completed promptly, occasional delays are acceptable and do not lead to system failure.
o Examples:
▪ Multimedia Applications: Streaming video or audio, where minor delays may cause buffering but do not stop playback.
▪ Networking Applications: Systems like VoIP that rely on timely packet delivery but can handle some delays without critical failures.

Interrupts

An interrupt is a signal sent to the CPU by hardware or software indicating that an event needs immediate attention. Interrupts allow the CPU to respond to important events in real time, rather than waiting for the current task to complete.

Hardware Interrupts:
• These interrupts are generated by hardware devices, such as keyboards, mice, or disk drives, to signal that they need processing.

Software Interrupts:
• These interrupts are generated by programs when they require operating system services.

How Interrupts Work:
1. Interrupt Signal:
o When an interrupt happens, the CPU stops its current task.
2. Interrupt Handler:
o The operating system runs a special piece of code called an interrupt handler to deal with the interrupt.
3. Return to Previous Task:
o After handling the interrupt, the CPU goes back to the task it was doing before.

Lecture#04

Process:

A process is a program in execution and serves as the basic unit of resource allocation in a system. A program is a passive entity; a process, on the other hand, is an active entity that has a defined starting and ending point. It must execute and progress sequentially.

Attributes of a process:

Program counter:
The program counter keeps track of which instruction is currently being executed and what will be executed next.

Stack:
The stack is a special area in memory that stores temporary data for a process. It keeps track of things like:
• Function Parameters: Values that are passed to functions.
• Local Variables: Variables that are only used within a function.
• Return Addresses: Where the program should go back after a function finishes executing.

The stack works in LIFO (Last In, First Out) order. This means that the last item added to the stack is the first one to be removed.

Data Section:
This is the part of the process that contains global and static variables.

Heap:
The heap is the region of a process's memory used for dynamic memory allocation, where data can be created and freed at run time in no fixed order.

STATES OF A PROCESS:

The state of a process refers to its current status during its execution.

1. New:
o The process is being created but is not yet ready for execution.
2. Ready:
o The process is loaded into memory and is ready to run, waiting for CPU time.
3. Running:
o The process is actively executing instructions on the CPU.
4. Waiting (or Blocked):
o The process is waiting for some event to occur (e.g., I/O operation completion) before it can continue execution.
5. Terminated:
o The process has finished execution or has been terminated by the system.

A loop can occur when a process repeatedly transitions between certain states without making progress towards termination.

Typical state transitions:
new -> ready
ready -> running
running -> ready
running -> waiting
running -> terminated

Lecture 05:

Process Control Block

A Process Control Block (PCB) is a data structure used by the operating system to store information about a process. It contains essential details that the operating system needs to manage and control processes effectively.

1. Process State:
Indicates the current state of the process (e.g., New, Ready, Running, Waiting, or Terminated).

2. Program Counter (PC):
The Program Counter keeps track of the next instruction to be executed in the program.

3. CPU Registers:
CPU registers are small storage areas in the CPU that hold important data for a running process.

4. CPU Scheduling Information:
This is the information the operating system uses to decide which process gets to use the CPU at any given time.
Goal: It ensures that multiple processes can share the CPU effectively, maximizing efficiency and minimizing waiting time.

5. Memory Management Information:
This part stores information about how a process uses memory.
What it Includes: It has details like where the process's memory starts (base) and how much memory it can use (limit). It can also include page tables, which help the system manage memory more efficiently.

6. I/O Status Information:
This information includes the list of I/O devices used by the process, the list of open files, etc.

7. Accounting Information:
This part of the PCB includes various details about a process's resource usage.
What it Includes:
Time Limits: How long a process is allowed to run.
Account Numbers: Identifiers for billing or resource tracking.
CPU Usage: The amount of CPU time the process has consumed.
Process Numbers: Unique identifiers for each process.
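The seven PCB fields above map naturally onto a plain record type. A minimal sketch in Python — the field names and defaults are illustrative only, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # 1. Process state (New, Ready, Running, Waiting, Terminated)
    state: str = "New"
    # 2. Program counter: address of the next instruction
    program_counter: int = 0
    # 3. CPU registers saved/restored across context switches
    registers: dict = field(default_factory=dict)
    # 4. Scheduling information (e.g. a priority value)
    priority: int = 0
    # 5. Memory-management information (base and limit of the address space)
    mem_base: int = 0
    mem_limit: int = 0
    # 6. I/O status: open files / devices in use
    open_files: list = field(default_factory=list)
    # 7. Accounting: CPU time consumed, process identifier
    cpu_time_used: float = 0.0
    pid: int = 0

pcb = PCB(pid=1, state="Ready", mem_base=0x4000, mem_limit=0x1000)
print(pcb.state)   # Ready
```

A real OS keeps one such record per process and updates it on every state change and context switch.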
SCHEDULING QUEUE:

A scheduling queue is a data structure or mechanism used in operating systems or other software environments to manage and organize tasks, processes, or events that need to be executed, often based on priority or time constraints.

Types of Scheduling Queues in Operating Systems

1. Job Queue
o Contains all the processes that are submitted for execution.
o These processes are waiting for system resources (e.g., CPU, I/O) to be allocated.
2. Ready Queue
o Holds the processes that are ready to run but waiting for CPU access.
o These processes have all the resources except the CPU and are waiting for their turn to execute.
3. Waiting (or Blocked) Queue
o Contains processes that are waiting for an I/O event or other resources.
o Once the required event completes, the process is moved back to the ready queue.

[Diagram: processes flow between the job queue, the ready queue, and the waiting queue.]

Lecture 06:

Long-term scheduler:
The long-term scheduler, or job scheduler, selects processes from this pool and loads them into memory for execution.

Short-term scheduler:
The short-term scheduler, or CPU scheduler, selects from among the processes that are ready to execute and allocates the CPU to one of them.

Medium-term scheduler:
• Purpose: Manages swapping – temporarily removing or suspending processes from memory to reduce load and later resuming them.
• Function: Helps free up memory by suspending processes (moving them to secondary storage) and bringing them back when resources are available.
• Frequency: Runs occasionally when the system is under heavy load or needs memory optimization.

Context Switching
• Definition: The process of saving the current state of a process (context) and restoring the state of another process so the CPU can switch from one process to another.

Key Steps:
1. Save the current process's state (like program counter, registers) to memory.
2. Load the next process's state into the CPU.
3. Resume execution of the next process.

Why It's Needed:
• Allows multitasking by enabling the CPU to switch between processes, giving the illusion that multiple processes are running simultaneously.
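The save/restore steps above can be illustrated with Python generators: suspending a generator saves its local state and resume point, and resuming it restores them — a loose analogy for a context switch, not how a kernel actually does it:

```python
def process(name, steps):
    # Each yield suspends this "process", implicitly saving its state
    # (locals and resume point) so the "CPU" can run something else.
    for i in range(1, steps + 1):
        yield f"{name} step {i}"

def round_robin_dispatch(procs):
    # Dispatcher: resume each process in turn until all terminate.
    trace = []
    ready = list(procs)
    while ready:
        p = ready.pop(0)
        try:
            trace.append(next(p))   # restore state and run one step
            ready.append(p)         # suspend; back of the ready queue
        except StopIteration:
            pass                    # process terminated
    return trace

trace = round_robin_dispatch([process("P1", 2), process("P2", 2)])
print(trace)  # ['P1 step 1', 'P2 step 1', 'P1 step 2', 'P2 step 2']
```

The interleaved trace shows the "illusion" of simultaneous execution: each process makes progress even though only one runs at a time.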
Inter-Process Communication (IPC)
• Definition: A mechanism that allows processes to communicate and share data with each other, either within the same system or across networks.

Why IPC is Important:
• Enables cooperation between processes.

Types of processes:
• Independent process.
• Cooperative process.

Independent process:
A process that cannot be affected by any other process executing in the system is called an independent process. An independent process cannot share data with any other process.

Cooperative process:
A process that can be affected by other processes executing in the system is called a cooperative process. A cooperative process can share both data and information.

In cooperative processes, memory sharing and message passing are two key methods for enabling data sharing and communication among different processes or threads.

Shared Memory

Definition: Memory sharing involves multiple processes accessing a common memory space. This allows them to read from and write to the same memory location.

How It Works:
1. Create Memory: One process asks the operating system for a block of memory.
2. Attach Memory: Other processes connect to this memory area.
3. Share Data: Processes can read and write data in this shared space.
4. Control Access: Use tools (like semaphores) to avoid problems when multiple processes try to use it at the same time.
5. Detach When Done: Processes disconnect from the memory when they finish using it.

Message Passing:

Processes communicate by sending messages to each other, which helps manage data sharing without shared memory.

What It Is: A method for processes to send messages to each other instead of sharing memory.

How It Works:
1. Send Message: One process sends a message to another process.
2. Receive Message: The other process gets the message and can act on it.
3. Can Be Synchronous or Asynchronous:
o Synchronous: The sender waits for the receiver to get the message.
o Asynchronous: The sender sends the message and continues without waiting.

Mailbox in Inter-Process Communication (IPC)

What It Is: A mailbox is a method used for message passing between processes. It acts as a storage area where messages can be sent and received.

How It Works:
1. Create Mailbox: One process creates a mailbox.
2. Send Message: When a process wants to communicate, it sends a message to the mailbox.
3. Store Messages: The mailbox stores the messages until the receiving process retrieves them.
4. Receive Message: The receiving process checks the mailbox and reads the messages.

Lecture 07:

THREAD

A thread is a basic unit of CPU utilization; it is also called a lightweight process. It is a sequence of execution within a process, so we can say that it behaves like a process within the process.

Key Features:
• Lightweight: Threads are more efficient than processes because they share the same resources (like memory) of their parent process.
• Concurrent Execution: Multiple threads can run at the same time, performing different tasks.
• Shared Data: Threads can access shared data easily since they operate within the same memory space.

Thread Control Block (TCB)

What It Is: A Thread Control Block (TCB) is a data structure used by the operating system to manage threads.

Characteristics/attributes:
• Thread ID: A unique identifier for the thread.
• Thread State: The current state of the thread (e.g., running, ready, blocked).
• Program Counter (PC): The address of the next instruction to be executed by the thread.
• Registers: The values of CPU registers used by the thread, which must be saved and restored during context switches.
• Stack Pointer: A pointer to the top of the thread's stack, which holds local variables and function calls.
• Priority: The priority level of the thread, which helps in scheduling decisions.

Data Section
• What It Is: Part of memory that holds global and static variables.
• Types:
o Initialized Data: Variables that are given a value before the program runs (e.g., int x = 10;).
o Uninitialized Data: Variables declared but not assigned a value (e.g., int y;).
• Lifetime: Exists for the duration of the program.

Address Section
• What It Is: The range of memory addresses allocated for a program.
• Components:
o Code Segment: Where the program's instructions are stored.
o Data Segment: Contains global/static variables.
o Heap: For dynamic memory allocation (e.g., using malloc).
o Stack: For function calls and local variables.

Comparison Between Processes and Threads
• Weight: A process is heavyweight (more overhead); a thread is lightweight (less overhead).
• Independence: A process can exist without threads; a thread cannot exist without a process.
• Control Block: A process requires a Process Control Block (PCB); a thread requires a Thread Control Block (TCB).
• Variables: A process requires external variables; a thread uses local variables.
• Resource Requirement: A process requires more resources (memory, CPU, etc.); a thread requires fewer resources.
• Lifecycle: If a thread finishes, its process continues; if a process ends, all threads within it are terminated.
• Communication: Inter-process communication (IPC) is complex; communication within the same process is easier.
• Switching Overhead: Context switching between processes needs more work; switching between threads is easier and faster.
• Creation: Creating a process is more complex; creating a thread is easier and less costly.
• Multi-processing vs. multi-threading: Multi-processing is harder to manage; multi-threading is easier and more efficient.
• Throughput: Processes give generally lower throughput; threading increases throughput.
• Basic unit: A process is the basic unit of resource allocation; a thread is the basic unit of CPU utilization.

Lecture #08

CPU SCHEDULING:

CPU scheduling in an operating system is a way to decide which process (task) should use the CPU when there are multiple tasks waiting to be executed. Since the CPU can handle only one process at a time, it switches between them to make sure each gets its turn.

Objectives:
• Maximize CPU utilization.
• Maximize utilization of other resources.
• Maximize throughput.
• Minimize execution time.
• Minimize waiting time.
• Minimize turnaround time. (Turnaround Time: Time taken from process submission to completion.)
• Fair access to each process.

Example (from the lecture diagram):
Pa = 60 sec of CPU work, Pb = 60 sec of CPU work.
Total execution time = Pa + Pb = 60 + 60 = 120 sec.
With each process's waiting time included, the total elapsed time = 240 sec.
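The worked examples in Lecture 09 repeatedly use two bookkeeping formulas: turnaround time = completion time − arrival time, and waiting time = turnaround time − burst time. A small helper sketch:

```python
def turnaround(completion, arrival):
    # Turnaround time: total time from submission to completion.
    return completion - arrival

def waiting(completion, arrival, burst):
    # Waiting time: turnaround time minus the actual CPU (burst) time.
    return turnaround(completion, arrival) - burst

# e.g. a process with arrival time 2, burst time 2, completion time 13:
print(turnaround(13, 2))   # 11
print(waiting(13, 2, 2))   # 9
```

Every table in the scheduling examples that follows can be checked with these two lines of arithmetic.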
I/O and CPU Burst

The execution of a process consists of an alternation of CPU bursts and I/O bursts. Process execution begins and ends with a CPU burst; in between, CPU activity is suspended whenever an I/O operation is needed.

1. What is a CPU Burst?
A CPU burst refers to the period when a process is actively using the CPU for computation. During this time, the process doesn't need any input or output operations—it only needs the CPU.
• Example: Calculating numbers or executing code.

2. What is an I/O Burst?
An I/O burst occurs when a process waits for input or output operations to complete, such as reading from a file or receiving data from the user. During this time, the CPU is not needed.
• Example: Waiting for data to be read from the disk.

Cycle of CPU and I/O Bursts
A typical process execution alternates between CPU bursts and I/O bursts.
1. CPU Burst → The process uses the CPU to perform calculations.
2. I/O Burst → The process waits for input/output, such as file access or user input.
3. This cycle repeats until the process completes.

I/O bound
A process is called I/O bound if its CPU bursts are shorter than its I/O bursts.

CPU bound
A process is called CPU bound if its CPU bursts are longer than its I/O bursts.

Thrashing
Thrashing occurs in an operating system when the CPU spends more time swapping pages in and out of memory (paging) than executing actual processes. This leads to a severe drop in system performance.

Preemptive Scheduling
The CPU can switch between processes at any time.

Non-Preemptive Scheduling
A process runs to completion without interruption.
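The I/O-bound and CPU-bound definitions above amount to comparing a process's CPU-burst lengths against its I/O-burst lengths. A simplified sketch — the average-based rule here is an assumption for illustration; real systems estimate this behavior dynamically:

```python
def classify(cpu_bursts, io_bursts):
    # A process is I/O bound if its CPU bursts are short relative to
    # its I/O bursts, and CPU bound in the opposite case.
    avg_cpu = sum(cpu_bursts) / len(cpu_bursts)
    avg_io = sum(io_bursts) / len(io_bursts)
    return "I/O bound" if avg_cpu < avg_io else "CPU bound"

print(classify([2, 3, 2], [20, 25, 30]))   # I/O bound
print(classify([40, 50], [5, 4]))          # CPU bound
```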
CPU scheduling decisions take place under one of four • It is non-preemptive, meaning that no process
conditions: can force the current process to stop.
1. When a process switches from the running state Example: Like waiting in a line—whoever comes first
to the waiting state. gets served first.
2. When a process switches from the running state
Disadvantages of FCFS:
to the ready state.
3. When a process switches from the waiting state 1. Poor Average Waiting Time: If a long process
to the ready state arrives first, it can make shorter processes wait a
4. When a process terminates. long time (convoy effect).
For conditions 1 and 4 there is no choice - A new process Advantages:
must be selected.
• Simple and easy to implement.
For conditions 2 and 3 there is a choice - To either
continue running the current process or select a different • Fair, as every process gets served in order.
one.
If scheduling takes place only under conditions 1 and 4,
the system is said to be non-preemptive, or cooperative.
Under these conditions, once a process starts running it
keeps running, until it either voluntarily blocks or until it Process ID Arrival time Burst time
finishes. Otherwise, the system is said to be preemptive. P1 2 2
Dispatcher P2 5 6
P3 0 4
The dispatcher is the module that gives control of the
CPU to the process selected by the scheduler. This P4 0 7
function involves: P5 7 4
• Switching context.
• Switching to user mode. Solution
• Jumping to the proper location in the newly
loaded program.
Gantt chart
The dispatcher needs to be as fast as possible, as it is run
on every context switch. The time consumed by the
dispatcher is known as dispatch latency.
Lecture 09:

First-Come, First-Served (FCFS)

It is one of the simplest scheduling algorithms used in operating systems and other queuing systems. It operates on the principle that the process which requests the CPU (or resource) first is served first. Below is a detailed overview.

How FCFS Works:
• Processes are executed in the order in which they arrive in the queue.
• Once a process starts execution, it cannot be interrupted until it is completed.
• It is non-preemptive, meaning that no process can force the current process to stop.

Example: Like waiting in a line—whoever comes first gets served first.

Disadvantages of FCFS:
1. Poor Average Waiting Time: If a long process arrives first, it can make shorter processes wait a long time (convoy effect).

Advantages:
• Simple and easy to implement.
• Fair, as every process gets served in order.

Process ID | Arrival time | Burst time
P1 | 2 | 2
P2 | 5 | 6
P3 | 0 | 4
P4 | 0 | 7
P5 | 7 | 4

Solution (Gantt chart order: P3 → P4 → P1 → P2 → P5)

Process ID | Arrival time | Burst time | Completion time | Waiting time (CT − BT − AT)
P1 | 2 | 2 | 13 | 9
P2 | 5 | 6 | 19 | 8
P3 | 0 | 4 | 4 | 0
P4 | 0 | 7 | 11 | 4
P5 | 7 | 4 | 23 | 12

Average waiting time = (9 + 8 + 0 + 4 + 12) / 5 = 33/5 = 6.6 time units
Average completion time = (13 + 19 + 4 + 11 + 23) / 5 = 70/5 = 14 time units
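The completion and waiting times in the table above can be reproduced with a short simulation. Ties on arrival time (P3 and P4) are broken by list order here, which matches the Gantt order in the solution:

```python
def fcfs(procs):
    # procs: list of (name, arrival, burst); served strictly in arrival order.
    time, results = 0, {}
    for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
        time = max(time, arrival) + burst               # run to completion
        results[name] = (time, time - arrival - burst)  # (CT, WT)
    return results

procs = [("P1", 2, 2), ("P2", 5, 6), ("P3", 0, 4), ("P4", 0, 7), ("P5", 7, 4)]
res = fcfs(procs)
avg_wait = sum(w for _, w in res.values()) / len(res)
print(res["P5"])   # (23, 12)
print(avg_wait)    # 6.6
```

`sorted` is stable, so P3 stays ahead of P4 even though both arrive at time 0.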
Shortest Job First (SJF)

The shortest job first (SJF), or shortest job next, is a scheduling policy that selects the waiting process with the smallest execution time to execute next.

Advantages:
• Minimizes waiting time.
• Maximum throughput.

Disadvantages:
• Long processes may get delayed (risk of starvation).

There are two types:
• Non-preemptive: Once a process starts, it can't be interrupted.
• Preemptive (Shortest Remaining Time First): If a shorter process arrives, it interrupts the current one.

Non-preemptive:

Process Queue | Burst time | Arrival time
P1 | 6 | 2
P2 | 2 | 5
P3 | 8 | 1
P4 | 3 | 0
P5 | 4 | 4

Process Queue | Burst time | Arrival time | Completion time | Waiting time
P1 | 6 | 2 | 9 | 1
P2 | 2 | 5 | 11 | 4
P3 | 8 | 1 | 23 | 14
P4 | 3 | 0 | 3 | 0
P5 | 4 | 4 | 15 | 7

Average waiting time = (0 + 1 + 4 + 7 + 14) / 5 = 26/5 = 5.2

Preemptive:

Process Queue | Burst time | Arrival time
P1 | 6 | 2
P2 | 2 | 5
P3 | 8 | 1
P4 | 3 | 0
P5 | 4 | 4
Process Burst Arrival Co Waiti
mpl ng •Flexible: You can assign priorities based on the
Queue time time
importance of each task.
ete time
time Disadvantages:
P1 6 2 15 7
• Starvation: Lower-priority processes may never
P2 2 5 7 0 execute if higher-priority tasks keep arriving.
• Complexity: It can be tricky to assign the right
P3 8 1 23 14 priorities.
P4 3 0 3 0
P5 4 4 10 2 Non Preemptive priority scheduling:
Average Waiting Time = 0+7+0+2+14/5 = 23/5 =4.6
PRIORITY SCHEDULING:

Priority Scheduling is a scheduling algorithm used in operating systems where each process is assigned a priority, and the process with the highest priority is executed first. It ensures that important tasks are completed quickly, but it can lead to issues like starvation if lower-priority processes are never executed.

Types of Priority Scheduling:

1. Preemptive Priority Scheduling:
   o If a new process with a higher priority arrives, it interrupts the currently running process.
   o Example: If Process A (priority 5) is running and Process B (priority 2) arrives, Process B preempts A (a lower number indicates a higher priority).

2. Non-preemptive Priority Scheduling:
   o The currently running process completes before switching, even if a higher-priority process arrives during execution.

Advantages:

• Important tasks first: It ensures that the most important jobs run quickly.

• Good for real-time systems: Works well when some tasks (like emergencies) need immediate attention.

• Flexible: You can assign priorities based on the importance of each task.

Disadvantages:

• Starvation: Lower-priority processes may never execute if higher-priority tasks keep arriving.

• Complexity: It can be tricky to assign the right priorities.

Non-preemptive priority scheduling works the same way except that the running process is never interrupted.

Preemptive priority scheduling example (a lower number means a higher priority):
Process Id   Priority   Arrival Time   Burst Time   Completion Time   Waiting Time
1            2          0              1            1                 0
2            6          1              7            22                14
3            3          2              3            5                 0
4            5          3              6            16                7
5            4          4              5            10                1
6            10         5              15           45                25
7            9          15             8            30                7

Average Waiting Time = (0 + 14 + 0 + 7 + 1 + 25 + 7)/7 = 54/7 = 7.71
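The preemptive variant can be sketched as a unit-time simulation (illustrative only; the process table mirrors the example above, with lower numbers meaning higher priority):

```python
# Preemptive priority scheduling, simulated one time unit at a time:
# at every tick, run the arrived, unfinished process with the highest
# priority (lowest number). Illustrative sketch only.

def priority_preemptive(processes):
    # processes: dict name -> (priority, arrival, burst)
    remaining = {n: b for n, (_, _, b) in processes.items()}
    completion, time = {}, 0
    while remaining:
        ready = [n for n in remaining if processes[n][1] <= time]
        if not ready:                 # CPU idle until a process arrives
            time += 1
            continue
        current = min(ready, key=lambda n: processes[n][0])
        remaining[current] -= 1
        time += 1
        if remaining[current] == 0:
            del remaining[current]
            completion[current] = time
    return completion

procs = {
    "P1": (2, 0, 1), "P2": (6, 1, 7), "P3": (3, 2, 3), "P4": (5, 3, 6),
    "P5": (4, 4, 5), "P6": (10, 5, 15), "P7": (9, 15, 8),
}
comp = priority_preemptive(procs)
waits = {n: comp[n] - procs[n][1] - procs[n][2] for n in procs}
print(comp)
print("AWT =", round(sum(waits.values()) / len(waits), 2))  # 7.71
```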
Lecture 10:

Round Robin is a CPU scheduling algorithm in which each process is cyclically assigned a fixed time slot.

• The Round Robin CPU algorithm is based on the time-sharing technique.

• The period of time for which a process or job is allowed to run in a preemptive method is called the time quantum.

• Each process in the ready queue is assigned the CPU for one time quantum. If the process finishes within that quantum, it terminates; otherwise it goes back to the ready queue and waits for its next turn.
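The round-robin cycle can be sketched as a small simulation (illustrative sketch; it assumes all processes arrive at time 0 and uses a fixed quantum of 5):

```python
from collections import deque

# Round Robin: each process runs for at most one time quantum, then
# goes to the back of the ready queue. Illustrative sketch; all
# processes are assumed to arrive at time 0.

def round_robin(processes, quantum):
    # processes: list of (name, burst_time)
    queue = deque(processes)
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        time += run
        if remaining == run:
            finish[name] = time
        else:
            queue.append((name, remaining - run))
    return finish

procs = [("P1", 21), ("P2", 3), ("P3", 6), ("P4", 2)]
finish = round_robin(procs, quantum=5)
waiting = {n: finish[n] - b for n, b in procs}  # arrival time is 0
print(finish)   # completion times
print(waiting)  # waiting times
```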
Processes   Burst Time   Finish Time   Waiting Time
P1          21           32            32 - 21 = 11
P2          3            8             8 - 3 = 5
P3          6            21            21 - 6 = 15
P4          2            15            15 - 2 = 13

(All processes arrive at time 0 and the time quantum is 5, so waiting time = finish time - burst time.)

Multi-Level Queue Scheduling Algorithm

Multi-Level Queue (MLQ) Scheduling is a CPU scheduling algorithm where processes are divided into multiple queues, with each queue assigned a specific priority and scheduling policy. It is commonly used in systems that need to handle different types of tasks, such as real-time, batch, and interactive processes.

How It Works:

1. Classification of Processes:
   Processes are categorized based on their type or priority (e.g., system processes, interactive tasks, batch jobs). Each category is assigned to a different queue.

2. Separate Scheduling Algorithms for Queues:
   Each queue can use a different scheduling algorithm, such as:
   o First-Come-First-Serve (FCFS) for batch jobs.
   o Round Robin (RR) for interactive tasks.
   o Priority Scheduling for critical processes.

3. Priority Between Queues:
   The queues are ordered by priority. Higher-priority queues are given CPU time first. If there are processes in a higher-priority queue, the CPU won't allocate time to lower-priority queues until the higher-priority queue is empty.

4. No Process Movement Between Queues:
   Once a process is assigned to a queue, it stays there for its entire lifetime, unlike multi-level feedback queue scheduling, where processes can move between queues.

Advantages:

1. Specialization: Each type of task can be scheduled using the most appropriate algorithm.

2. Simplicity: Easy to manage when processes are clearly categorized.

3. Separation of Concerns: Prevents high-priority tasks from being affected by low-priority tasks.

Disadvantages:

1. Starvation: Low-priority queues may never get CPU time if higher-priority queues are always busy.

2. Lack of Flexibility: Processes can't move between queues, even if their behavior changes (e.g., a batch job becoming interactive).

3. Overhead: Requires careful configuration and tuning of priorities and algorithms for each queue.

What is Multi-Level Feedback Queue (MLFQ) Scheduling?

A Multi-Level Feedback Queue (MLFQ) is a CPU scheduling algorithm that uses multiple queues with different priorities to manage processes efficiently. It dynamically adjusts the priority of processes based on their behavior, moving processes between queues. This ensures better responsiveness and resource utilization while preventing starvation (when low-priority tasks never get executed).

Key Features of MLFQ:

1. Multiple Queues:
   o Each queue has a different priority level.
   o Processes in higher-priority queues run before those in lower ones.

2. Feedback Mechanism:
   o If a process uses its entire time slice without finishing, it moves to a lower-priority queue.
   o If a process behaves well (short bursts), it can stay in or move to a higher-priority queue.

3. Time Slicing:
   o Each queue has its own time limit (quantum) for running processes.

4. Dynamic Priority Adjustment:
   o Processes can be promoted or demoted between queues based on performance.

5. Preemption:
   o A high-priority process can interrupt a low-priority process to ensure quick execution.

6. Boosting:
   o Processes stuck too long in low-priority queues are boosted to higher queues to prevent starvation.

Advantages of MLFQ:

• Efficient CPU Utilization: Balances between short, interactive tasks and long-running jobs.

• Fairness: Boosting ensures long-waiting processes get a chance to run.
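A toy two-level MLFQ can illustrate the feedback (demotion) rule. The two quanta and the process set below are assumed for illustration, not taken from the notes, and boosting is omitted to keep the sketch short:

```python
from collections import deque

# Toy two-level MLFQ: queue 0 (high priority, quantum 2) and
# queue 1 (low priority, quantum 4). A process that uses up its
# whole quantum in queue 0 without finishing is demoted to queue 1.
# Illustrative sketch; a real MLFQ would also boost starved processes.

def mlfq(processes, quanta=(2, 4)):
    # processes: list of (name, burst_time), all arriving at time 0
    queues = [deque(processes), deque()]
    time, finish = 0, {}
    while any(queues):
        level = 0 if queues[0] else 1      # run the highest non-empty queue
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        time += run
        if remaining == run:
            finish[name] = time
        else:
            # Used a full quantum without finishing: demote from
            # queue 0 to queue 1; queue-1 processes stay in queue 1.
            queues[min(level + 1, 1)].append((name, remaining - run))
    return finish

print(mlfq([("A", 3), ("B", 6), ("C", 1)]))
```

Note how the short job C finishes ahead of A and B even though it was submitted last in the high-priority queue: the long jobs are demoted after one quantum, which is exactly the responsiveness benefit described above.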