
Unit- I

Fundamentals of Operating Systems


An operating system acts as an intermediary between the user of a computer and computer
hardware. The purpose of an operating system is to provide an environment in which a user can
execute programs conveniently and efficiently.
An operating system is software that manages computer hardware. The hardware must provide
appropriate mechanisms to ensure the correct operation of the computer system and to prevent
user programs from interfering with the proper operation of the system. A more common definition
is that the operating system is the one program running at all times on the computer (usually called
the kernel), with all else being application programs.
An operating system is concerned with the allocation of resources and services, such as memory,
processors, devices, and information. The operating system correspondingly includes programs to
manage these resources, such as a traffic controller, a scheduler, a memory management module,
I/O programs, and a file system.

An operating system (OS) serves as a resource manager


An operating system (OS) serves as a resource manager for a computer system, overseeing and
controlling various hardware and software resources to ensure efficient and secure operation. Here
are some key aspects of how an operating system functions as a resource manager:
1. Processor Management:

• The OS allocates CPU time to various processes, managing the execution of multiple
tasks concurrently.

• It implements scheduling algorithms to determine the order in which processes run, optimizing overall system performance.

2. Memory Management:

• The OS is responsible for allocating and deallocating memory to different processes and applications.

• It manages virtual memory, allowing processes to use more memory than physically
available by using techniques like paging and swapping.

3. File System Management:

• The OS provides a file system that organizes and stores data on storage devices. It
manages file creation, deletion, and access.

• It ensures data integrity and security through permissions and access controls.

4. Device Management:

• The OS controls and coordinates communication with peripheral devices such as printers, disk drives, and network interfaces.

• It provides device drivers that act as intermediaries between the hardware and higher-level software.

5. I/O (Input/Output) Management:

• The OS oversees input and output operations, managing data transfers between the
CPU, memory, and external devices.

• It handles buffering, caching, and error handling to ensure reliable and efficient I/O
operations.

6. Security and Protection:

• The OS enforces security policies and access controls to protect the system and its
resources from unauthorized access.

• It provides user authentication mechanisms, file permissions, and encryption to safeguard data.

7. Network Management:

• In a networked environment, the OS manages network connections, protocols, and data transfer between systems.

• It may include features for remote access, distributed computing, and network
security.
8. Error Handling:

• The OS monitors the system for errors and exceptions, providing mechanisms to
handle and recover from faults.

• It logs errors, generates error messages, and may attempt to recover from errors when
possible.
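The processor-management points above mention scheduling algorithms without showing one. As an illustrative sketch (not from the original text), here is round-robin scheduling, one of the simplest policies an OS can use to share CPU time among processes; the function name and the `(name, burst_time)` job representation are assumptions made for this example.

```python
from collections import deque

def round_robin(jobs, quantum):
    """Round-robin CPU scheduling sketch.

    jobs: list of (name, burst_time) pairs; quantum: CPU time slice.
    Returns the names of the jobs in order of completion.
    """
    queue = deque(jobs)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            # Job used its whole slice: preempt it and requeue the remainder.
            queue.append((name, remaining - quantum))
        else:
            # Job finishes within this slice.
            finished.append(name)
    return finished
```

With a quantum of 2, a job needing 3 units is preempted once, so a shorter job submitted after it can still finish first — the fairness property round-robin is chosen for.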

The Role of OS as a Resource Manager


• A computer system holds various resources such as processors, memory, timers, disks, and printers.
• The OS manages these resources and allocates them to particular programs.
• As a resource manager, the OS provides controlled allocation of the processors, memory, and I/O devices among the various programs.
• For this reason the OS is also called a Resource Allocator, which is one of its main roles.
• In a computer system, multiple user programs run simultaneously.
• The CPU itself is one kind of resource, and the OS decides how much processor time to give to the execution of each user program.

Structure of operating System (Role of Kernel and Shell)


The kernel is the central component of an operating system: it manages the operations of the computer and its hardware, most importantly memory and CPU time. The kernel acts as a bridge between applications and the data processing performed at the hardware level, using inter-process communication and system calls.
The kernel is the first part of the operating system to load into memory, and it remains there until the operating system is shut down. It is responsible for tasks such as disk management, task management, and memory management.
It decides which process should be allocated to the processor for execution and which processes should be kept in main memory. It thus acts as an interface between user applications and hardware. The kernel's main aim is to manage communication between software (user-level applications) and hardware (the CPU, memory, and disks).
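The system-call bridge described above can be seen from any user program. As a small illustration (mine, not the author's), these Python standard-library calls each forward a request to the kernel rather than touching hardware directly:

```python
import os

# User programs never manipulate hardware or kernel data directly; they
# request services through system calls. These library functions wrap the
# underlying getpid() and getcwd() system calls on a typical OS.
pid = os.getpid()   # ask the kernel for this process's identifier
cwd = os.getcwd()   # ask the kernel for the current working directory
```

Every file read, memory allocation, and process creation in the sections that follow ultimately crosses this same user-space/kernel boundary.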
Objectives of Kernel :

• To establish communication between user-level applications and hardware.
• To decide state of incoming processes.
• To control disk management.
• To control memory management.
• To control task management.
Types of Kernel :

1. Monolithic Kernel –
All operating system services run in kernel space. Its components are tightly interdependent, and the code base is large and complex.
Example:
Unix, Linux, OpenVMS, XTS-400, etc.

2. Micro Kernel –
A kernel with a minimalist approach: only essentials such as virtual memory and thread scheduling run in kernel space, while the remaining services run in user space. With fewer services in kernel space it is more stable, and it is often used in small operating systems.
Example:

Mach, L4, AmigaOS, Minix, K42, etc.

3. Hybrid Kernel –
It is the combination of both the monolithic kernel and the microkernel: it aims for the speed and design of a monolithic kernel with the modularity and stability of a microkernel.
Example:

Windows NT, macOS (XNU), etc.

4. Exo-Kernel –
A kernel type that follows the end-to-end principle. It provides as few hardware abstractions as possible and allocates physical resources directly to applications.
Example:

Nemesis, ExOS, etc.


5. Nano Kernel –
A kernel type that offers hardware abstraction but no system services of its own. Since a microkernel also provides almost no system services, the terms microkernel and nanokernel have become nearly analogous.
Example:

EROS, etc.
Shells
The shell is the outermost layer of the operating system. Shells incorporate a programming
language to control processes and files, as well as to start and control other programs. The
shell manages the interaction between you and the operating system by prompting you for
input, interpreting that input for the operating system, and then handling any resulting
output from the operating system.
Shells provide a way for you to communicate with the operating system. This
communication is carried out either interactively (input from the keyboard is acted upon
immediately) or as a shell script. A shell script is a sequence of shell and operating system
commands that is stored in a file.
When you log in to the system, the system locates the name of a shell program to execute.
After it is executed, the shell displays a command prompt. This prompt is usually a $ (dollar
sign). When you type a command at the prompt and press the Enter key, the shell evaluates
the command and attempts to carry it out. Depending on your command instructions, the
shell writes the command output to the screen or redirects the output. It then returns the
command prompt and waits for you to type another command.
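The prompt–read–execute–wait cycle just described can be sketched in a few lines. This is a minimal illustration (assuming a POSIX system and Python 3.9+; `run_command` is a name invented for this example, not a real shell's internals), showing how a shell forks a child, replaces the child's image with the command, and waits for it:

```python
import os
import shlex

def run_command(line):
    """One iteration of a shell's core loop: fork a child, exec the
    command in it, and wait for it to finish. Returns the exit code."""
    args = shlex.split(line)          # split the line into command + args
    pid = os.fork()
    if pid == 0:                      # child: replace this image with the command
        try:
            os.execvp(args[0], args)
        except OSError:
            os._exit(127)             # conventional "command not found" status
    _, status = os.waitpid(pid, 0)    # parent: wait, then the shell would reprompt
    return os.waitstatus_to_exitcode(status)
```

A real shell wraps this in a loop with a `$` prompt, plus redirection and job control; the fork/exec/wait skeleton is the same.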

• Korn shell
The Korn shell (ksh command) is backward compatible with the Bourne shell (bsh
command) and contains most of the Bourne shell's features as well as several of the
best features of the C shell.
• Bourne shell
The Bourne shell is an interactive command interpreter and command programming
language.
• C shell
The C shell is an interactive command interpreter and a command programming
language. It uses syntax that is similar to the C programming language.
Views of Operating System
An operating system is a framework that enables user application programs to interact with
system hardware. The operating system does not perform any functions on its own; it
provides an environment in which various applications and programs can do useful work. The
operating system may be observed from the point of view of the user or of the system, known
respectively as the user view and the system view.
Viewpoints of Operating System

The operating system may be observed from the viewpoint of the user or the system. It is
known as the user view and the system view. There are mainly two types of views of the
operating system. These are as follows:

1. User View
2. System View

1. User View

The user view depends on the system interface that is used by the users. Some systems are
designed for a single user to monopolize the resources, maximizing that user's work. In these
cases, the OS is designed primarily for ease of use, with some attention to performance and
little or none to resource utilization.

The user viewpoint focuses on how the user interacts with the operating system through the
usage of various application programs. In contrast, the system viewpoint focuses on how the
hardware interacts with the operating system to complete various tasks.

• Single User View Point

Most computer users use a monitor, keyboard, mouse, printer, and other accessories to
operate their computer system. In some cases, the system is designed to maximize the work
of a single user, so more attention is paid to ease of use and less to resource allocation.
Such systems are built around a single-user experience, where overall resource utilization
matters less than it does in multi-user systems.

• Multiple User View Point

Another user view arises when many users at individual terminals interact with a single
mainframe computer. In such circumstances, the OS must allocate CPU time and memory among
the users effectively so that each gets a good experience. Client-server architecture is
another good example: many clients interact through a remote server, and the same concern
of using server resources effectively arises.
• Handheld User View Point

The touchscreen era has brought powerful handheld technology. Smartphones interact via
wireless interfaces to perform numerous operations, though their interfaces are more limited
than a full computer's, which restricts their usefulness. Nevertheless, their operating
systems are a good example of designing a device around the user's point of view.

• Embedded System User View Point

Some systems, such as embedded systems, have little or no user view. The remote control used
to turn a TV on or off is part of an embedded system in which one electronic device
communicates with another program; the user view is limited to a few simple controls with
which the user engages.

2. System View

The OS may also be viewed as just a resource allocator. A computer system comprises various
resources, hardware and software, which must be managed effectively. The operating
system manages the resources, decides between competing demands, controls program
execution, and so on. From this point of view, the operating system's purpose is to maximize
performance: it is responsible for managing hardware resources and allocating them among
programs and users.

From the user point of view, we've discussed the numerous applications that require varying
degrees of user participation. However, we are more concerned with how the hardware
interacts with the operating system than with the user from a system viewpoint. The
hardware and the operating system interact for a variety of reasons, including:

• Resource Allocation

The hardware contains several resources: registers, caches, RAM, ROM, CPUs, I/O
devices, and so on. The operating system allocates these resources when an application
program demands them, and it uses several strategies, including paging, virtual memory, and
caching, to make the most of its processing power and memory space. Efficient allocation
matters to the user view as well: poor resource allocation can make the system lag or hang,
degrading the user experience.

• Control Program

The control program governs how input and output devices (hardware) interact with the
operating system. The user may request an action that can only be performed by I/O devices;
the operating system must therefore be able to communicate with, control, detect, and
handle such devices.
Evolution and Types of Operating System

An operating system (OS) is a software program that serves as an intermediary between computer
hardware and the user. It coordinates the execution of application programs, software
resources, and computer hardware, and it manages resources such as files, memory,
input/output, and a variety of peripheral devices such as disk drives and printers.
Every computer system must have at least one operating system to run other
applications: browsers, MS Office, Notepad, games, and other applications all require an
environment in which to execute and fulfill their functions. This section traces the
evolution of operating systems over the years.

What is Evolution of Operating Systems?

Operating systems have progressed from slow and expensive systems to today’s
technology, which has exponentially increased computing power at comparatively modest
costs. So let’s have a detailed look at the evolution of operating systems.

The operating system can be classified into four generations, as follows:


First Generation
Second Generation
Third Generation
Fourth Generation

First Generation (1945-1955)


Serial Processing. The evolution of operating systems began with serial processing, which
marks the start of electronic computing systems as alternatives to mechanical computers.
Mechanical computing devices limited calculation speed and were prone to error. In this
generation there was no operating system: the computer was given instructions that were
carried out directly.

Through the 1940s and 1950s, programmers worked directly with the hardware, without an
operating system. The main problems were scheduling and setup time. Users signed up for
blocks of machine time, much of which was wasted on setup: loading the compiler, loading
the source program, saving the compiled program, linking, and buffering. If an error
occurred partway through, the whole process was restarted.
Example: the earliest electronic computers, such as the ENIAC, ran programs directly on the hardware with no operating system.
Second Generation (1955-1965)

Batch System
Batched systems marked the second generation in the evolution of operating systems.
In batch processing, jobs are collected into a series and then completed sequentially.
Computers of this generation still had no full operating system, although early system
software such as FMS and IBSYS provided some operating-system functionality, improving
computer utilization. Jobs were submitted on punched cards and tapes, described with a Job
Control Language, and executed one after another under a resident monitor. Programs were
read from punched cards onto tape and fed to the processing unit; when the computer finished
one job, it immediately moved on to the next item on the tape. Though inconvenient for
users, this design kept the expensive computer as busy as possible by running a steady
stream of jobs. Memory protection kept jobs from modifying the memory occupied by the
monitor, and a timer kept any single job from monopolizing the system. CPU utilization was
still poor: while the input and output devices were in use, the processor sat idle.
Example: MVS Operating System of IBM is an example of a batch processing operating
system.

Third Generation (1965-1980)


Multi-Programmed Batched System
The third generation in the evolution of operating systems brought multi-programmed
batched systems, designed to serve numerous users simultaneously. Interactive users could
communicate with the computer via online terminals, making the operating system multi-user
and multiprogrammed. Several jobs are kept in main memory at once, and the processor
determines which program to run through job-scheduling algorithms.
Example: IBM's OS/360 (with MFT and MVT) is a classic multi-programmed batch operating
system.

Fourth Generation (1980-Now)


The operating system is employed in this age for computer networks where users are aware
of the existence of computers connected to one another.
The era of networked computing has already begun, and users are comforted by a Graphical
User Interface (GUI), which is an incredibly comfortable graphical computer interface. In
the fourth generation, the time-sharing operating system and the Macintosh operating
system came into existence.

Time-Sharing Operating System

Time-sharing had a great impact on the evolution of operating systems. Multiple users
access the system via terminals at the same time, and the processor's time is divided
among them. Early time-sharing programs used command-line interfaces: the user typed
commands and received written responses, with the interaction scrolling by like a roll of
paper, much as on an electric teletype. Time-sharing developed as a replacement for batch
systems. Because each interaction needs only a fraction of a second of CPU time, a fast
machine can switch rapidly among many users' processes, giving each user the impression of
the computer's full attention while actually serving many interactive users at once.
Example: the Unix operating system is an example of a time-sharing OS.

Macintosh Operating System

It was based on decades of research into graphical operating systems and applications for
personal computers. Ivan Sutherland's pioneering Sketchpad program, developed in the early
1960s, already employed many of the characteristics of today's graphical user interfaces,
but the hardware of the time cost millions of dollars and filled a room. Research begun on
such massive computers, combined with later hardware improvements, eventually made the
Macintosh commercially and economically viable. Research laboratories continued to build
prototypes in the Sketchpad tradition, and this work served as the foundation for later
products.

Example: Mac OS X 10.6.8 Snow Leopard and OS X 10.7.5 Lion are examples of Macintosh
operating systems.
Process & Thread Management:
Process: A process is a program that has been dispatched from the ready state and scheduled
on the CPU for execution. A process is represented in the operating system by a PCB (Process
Control Block). A process can create other processes, which are known as child processes.
A process takes comparatively long to create and terminate, and it is isolated: it does not
share memory with any other process. A process can be in the states new, ready, running,
waiting, terminated, and suspended.
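Two properties just stated — a process can create child processes, and processes are isolated from each other — can be demonstrated directly. This is an illustrative sketch (assuming a POSIX system and Python 3.9+; `spawn_child` is a name chosen for the example):

```python
import os

# Module-level variable; after fork() the parent and child each hold an
# independent copy, because processes do not share memory.
value = 10

def spawn_child():
    """Create a child process with fork() and collect its exit status."""
    pid = os.fork()
    if pid == 0:                      # child branch
        global value
        value = 99                    # changes only the child's copy
        os._exit(42)                  # child terminates with its own status
    _, status = os.waitpid(pid, 0)    # parent waits for the child to terminate
    return os.waitstatus_to_exitcode(status)
```

After the call, the parent still sees `value == 10`: the child's write never crossed the process boundary, which is exactly the isolation the comparison table below attributes to processes.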
Thread: A thread is a segment of a process; a process can contain multiple threads, all held
within it. A thread has three states: Running, Ready, and Blocked.

Difference between Process and Thread:

| S.No | Process | Thread |
|------|---------|--------|
| 1. | A process is a program in execution. | A thread is a segment of a process. |
| 2. | Takes more time to terminate. | Takes less time to terminate. |
| 3. | Takes more time to create. | Takes less time to create. |
| 4. | Takes more time for context switching. | Takes less time for context switching. |
| 5. | Less efficient in terms of communication. | More efficient in terms of communication. |
| 6. | Multiprogramming holds the concept of multiple processes. | Multiple threads need no multiprogramming, because a single process consists of multiple threads. |
| 7. | Processes are isolated. | Threads share memory. |
| 8. | A process is called a heavyweight process. | A thread is lightweight, as the threads of a process share code, data, and resources. |
| 9. | Process switching uses an operating-system interface and causes an interrupt to the kernel. | Thread switching does not require calling the operating system. |
| 10. | If one process is blocked, the execution of other processes is unaffected. | If one user-level thread is blocked, all other user-level threads of that process are blocked. |
| 11. | A process has its own Process Control Block, stack, and address space. | A thread has the parent's PCB, its own Thread Control Block and stack, and a shared address space. |
| 12. | Changes to the parent process do not affect child processes. | All threads of a process share the address space and other resources, so changes to the main thread may affect the behavior of the other threads. |
| 13. | Creation involves a system call. | Creation involves no system call; threads are created using APIs. |
| 14. | Processes do not share data with each other. | Threads share data with each other. |
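Rows 7 and 14 of the table — threads share memory and data — can be shown in a few lines. As an illustrative sketch (the variable names are invented for this example), two threads of one process update the same variable, which two separate processes could not do without explicit shared memory:

```python
import threading

# `counter` lives in the process's address space, so every thread of the
# process sees and updates the same variable.
counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:          # serialize updates to the shared variable
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The lock is essential: because the memory is shared, unsynchronized `counter += 1` from two threads is a race condition, a hazard processes avoid by being isolated.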
# Program vs Process

| Aspect | Program | Process |
|--------|---------|---------|
| Definition | A set of instructions written in a programming language that defines a specific task or functionality. | An instance of a program currently being executed by the operating system; it includes program code, data, and system resources. |
| Nature | Static | Dynamic |
| Lifecycle | Fixed | Dynamic |
| Storage | Exists as a file on disk or in memory. | Exists in memory while running. |
| State | Inactive (until executed). | Active (running) or inactive (terminated). |
| Execution | Not directly executable by the computer; needs an interpreter or compiler. | Executable by the computer's CPU. |
| Resources | Does not consume system resources while not running. | Consumes system resources during execution. |
| Interaction | Interacts with users or other programs through input/output operations. | Interacts with the operating system and other processes through system calls. |
| Creation | Created by a programmer or developer. | Created by the operating system when the program is executed. |
| Termination | Controlled by the user or the program. | Controlled by the operating system, or completes its execution naturally. |
| Parallel Execution | Multiple instances can run in parallel if started separately. | Multiple processes can run concurrently, utilizing multiple CPU cores. |
| Relationship | Can be part of a larger software system or application. | A single program can give rise to multiple processes, each independent. |
| Examples | Microsoft Word, Photoshop, web browsers. | Running instances of a web browser, background system tasks, printing documents. |

# Process Control Block (PCB)

1. Process ID(PID): A distinct Process ID (PID) on the PCB serves as the process's identifier within
the operating system. The operating system uses this ID to keep track of, manage, and
differentiate among processes.
2. State: The state of the process, such as running, waiting, ready, or terminated, is indicated.
The operating system makes use of this data to schedule and manage operations.
3. Program Counter (PC): The PCB stores the program counter value, which holds the address
of the next instruction to be executed by the process. During a context switch the program
counter of the running process is saved in its PCB and later restored, so that execution
continues where it left off.
4. CPU registers: Holds the contents of the CPU registers associated with the process.
Examples include stack pointers, general-purpose registers, and program status flags.
Saving and restoring these register values lets a process resume uninterrupted after a
context switch.
5. Priority: Some operating systems provide a priority value to each process to decide the order
in which processes receive CPU time. The PCB may have a priority field that determines the
process's priority level, allowing the scheduler to distribute CPU resources appropriately.
6. I/O information: The PCB maintains information about I/O devices and data related to the
process. Open file descriptors, I/O buffers, and pending I/O requests are all included. Storing
this information enables the operating system to manage I/O operations and efficiently
handle input/output requests.
7. Pointer: This field contains the address of the next PCB in the ready state. It also
helps the operating system maintain the control-flow relationship between parent and
child processes.
8. Accounting information: Keeps track of the process's resource utilization data, such as CPU
time, memory usage, and I/O activities. This data aids in performance evaluation and
resource allocation choices.
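The fields listed above can be gathered into a small data structure. This is an illustrative sketch only — real kernels use C structures (for example, Linux's `task_struct`), and the field names here simply mirror the list:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Toy Process Control Block with the fields described above."""
    pid: int                                  # 1. process identifier
    state: str = "new"                        # 2. new/ready/running/waiting/terminated
    program_counter: int = 0                  # 3. address of the next instruction
    registers: dict = field(default_factory=dict)   # 4. saved CPU registers
    priority: int = 0                         # 5. scheduling priority
    open_files: list = field(default_factory=list)  # 6. I/O information
    next_pcb: "PCB | None" = None             # 7. link to the next ready PCB
    cpu_time_used: float = 0.0                # 8. accounting information
```

During a context switch, the OS writes the running process's registers and program counter into its PCB, then reads another process's PCB to resume it.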
State Transition diagram

A process passes through several stages from beginning to end; there are at least five states.
During execution the process is in exactly one of these states at a time, although the state
names are not standardized across operating systems. Each process goes through several such
stages throughout its life cycle.

The states of a process are as follows:

1. New (Create): In this step, the process is about to be created but not yet created. It is the
program that is present in secondary memory that will be picked up by OS to create the
process.
2. Ready: New -> Ready to run. After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory. The process here is ready to run and
is waiting to get the CPU time for its execution. Processes that are ready for execution by
the CPU are maintained in a queue called ready queue for ready processes.
3. Running: The process is chosen from the ready queue by the CPU for execution and the
instructions within the process are executed by any one of the available CPU cores.
4. Blocked or Waiting : Whenever the process requests access to I/O or needs input from the
user or needs access to a critical region(the lock for which is already acquired) it enters the
blocked or waits state. The process continues to wait in the main memory and does not
require CPU. Once the I/O operation is completed the process goes to the ready state.
5. Terminated or Completed: The process has finished its execution. The resources
allocated to the process are released or deallocated.
Transitions between these states are typically triggered by various events:
• Admission: The process moves from the "New" state to the "Ready" state when it is
admitted to the system.
• Dispatch: The process transitions from the "Ready" state to the "Running" state when the
operating system scheduler assigns the CPU to it.
• I/O Request: The process may move from the "Running" state to the "Blocked" state when
it issues an I/O request and has to wait for the I/O operation to complete.
• I/O Completion: When the I/O operation is completed, the process can move from the
"Blocked" state back to the "Ready" state.
• Completion: The process transitions from the "Running" state to the "Terminated" state
when it finishes its execution.
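The five transitions above form a small state machine, which can be written down as a lookup table. This is an illustrative encoding (the event names follow the bullets above; `next_state` is a name invented for the example):

```python
# (current_state, event) -> next_state, exactly as in the transition list.
TRANSITIONS = {
    ("new", "admit"): "ready",            # Admission
    ("ready", "dispatch"): "running",     # Dispatch
    ("running", "io_request"): "waiting", # I/O Request
    ("waiting", "io_complete"): "ready",  # I/O Completion
    ("running", "exit"): "terminated",    # Completion
}

def next_state(state, event):
    """Apply one event; an event that is illegal in this state is ignored."""
    return TRANSITIONS.get((state, event), state)
```

Chaining the events traces a full life cycle: new → ready → running → waiting → ready → running → terminated.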
Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each
of the process states and PCBs of all processes in the same execution state are placed in the same
queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved
to its new state queue.

The operating system maintains the following important process scheduling queues:

● Job queue − This queue keeps all the processes in the system.

● Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.

● Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.
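Moving a PCB between these queues is what a state change looks like in practice. As an illustrative sketch (PCBs are shown as plain strings; the queue names follow the list above):

```python
from collections import deque

job_queue = deque(["P1", "P2", "P3"])  # every process in the system
ready_queue = deque()                  # processes in memory, ready to run
device_queue = deque()                 # processes blocked on one I/O device

# A state change unlinks the PCB from its current queue and appends it
# to the queue of its new state:
ready_queue.append(job_queue[0])       # P1 admitted: new -> ready
running = ready_queue.popleft()        # P1 dispatched: ready -> running
device_queue.append(running)           # P1 requests I/O: running -> waiting
```

When the device completes, the OS would pop `P1` from the device queue and append it back to the ready queue.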

Types of Schedulers

• Schedulers are specialised system programs that manage process scheduling in a
variety of ways. Their primary responsibility is to choose which jobs to admit into
the system and which process to run next. There are three types of schedulers:

Long-Term: Also called the job scheduler. The long-term scheduler determines which
programs are admitted into the system for processing. It picks processes from the job
queue and loads them into memory for execution, making them available for CPU
scheduling.

Short-Term: Also called the CPU scheduler. Its major goal is to improve system
performance according to a chosen set of criteria. It manages the transition from the
ready state to the running state: the CPU scheduler chooses a process from those that
are ready to run and allocates the CPU to it.

Medium-Term: Performs medium-term scheduling, i.e. swapping: it removes processes from
memory, reducing the degree of multiprogramming, and the swapped-out processes remain
under the medium-term scheduler's control until they are brought back in.
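A short-term scheduler's decision can be sketched in one function. This is a toy priority policy (lowest priority number wins), invented for illustration — real CPU schedulers use many different criteria:

```python
def short_term_schedule(ready_queue):
    """Pick the next process to run from the ready queue: here, the PCB
    with the smallest priority number (a simple priority policy)."""
    return min(ready_queue, key=lambda pcb: pcb["priority"])

# Two ready processes; the scheduler dispatches the higher-priority one.
ready = [{"pid": 1, "priority": 5}, {"pid": 2, "priority": 1}]
chosen = short_term_schedule(ready)
```

The long-term scheduler decides what enters `ready` in the first place; the medium-term scheduler may remove entries from it by swapping processes out.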
Concept of Threads, Benefits, and Types of Threads
Threads in Operating System (OS)
A thread is a single sequential flow of execution of tasks within a process, so it is also
known as a thread of execution or thread of control. A process can contain more than one
thread. Each thread of the same process has its own program counter, stack of activation
records, and control block. A thread is often referred to as a lightweight process.

Types of Threads

1. Kernel level thread.

2. User-level thread.

User-level thread
The operating system does not recognize user-level threads; they are implemented entirely in
user space and are easy to implement. If a user-level thread performs a blocking operation,
the whole process is blocked. The kernel knows nothing about user-level threads and manages
the process as if it were single-threaded.

Advantages of User-level threads


1. User-level threads are easier to implement than kernel-level threads.

2. User-level threads can be used on operating systems that do not support threads at the
kernel level.

3. They are fast and efficient.

4. Context-switch time is shorter than for kernel-level threads.

5. They require no modifications to the operating system.

6. Their representation is very simple: the registers, PC, stack, and a small thread
control block are stored in the address space of the process.

7. Threads can be created, switched, and synchronized without kernel intervention.

Disadvantages of User-level threads


1. User-level threads lack coordination between the thread and the kernel.

2. If a thread causes a page fault, the entire process is blocked.

Kernel-level thread

The operating system recognizes kernel-level threads. The system maintains a thread control block
for each thread and a process control block for each process. Kernel-level threads are implemented
by the operating system: the kernel knows about all the threads and manages them, and it offers
system calls to create and manage threads from user space. Kernel threads are more difficult to
implement than user threads, and their context-switch time is longer. However, if one kernel thread
performs a blocking operation, another thread can continue to execute. Examples: Windows, Solaris.
Advantages of Kernel-level threads
1. The kernel is fully aware of all threads.

2. The scheduler may decide to give more CPU time to a process that has a large number
of threads.

3. Kernel-level threads are good for applications that block frequently.

Disadvantages of Kernel-level threads


1. The kernel must manage and schedule all threads, which adds overhead.

2. Kernel threads are more difficult to implement than user threads.

3. Kernel-level threads are slower than user-level threads.

Components of Threads
Any thread has the following components.

1. Program counter

2. Register set

3. Stack space

Benefits of Threads
1. Enhanced throughput of the system: When the process is split into many threads, and each
thread is treated as a job, the number of jobs done in the unit time increases. That is why the
throughput of the system also increases.

2. Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.

3. Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.

4. Responsiveness: When the process is split into several threads, and when a thread completes
its execution, that process can be responded to as soon as possible.

5. Communication: Multiple-thread communication is simple because the threads share the


same address space, while in process, we adopt just a few exclusive communication strategies
for communication between two processes.

6. Resource sharing: Resources can be shared between all threads within a process, such as
code, data, and files. Note: The stack and register cannot be shared between threads. There is
a stack and register for each thread.
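The resource-sharing benefit can be illustrated with a short sketch using Python's `threading` module (whose threads are kernel-level on most platforms). The variable and function names here are illustrative, not from the text above:

```python
import threading

counter = 0                  # shared: all threads see the same variable
lock = threading.Lock()      # shared memory requires synchronization

def work(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:           # protect the shared variable from races
            counter += 1

# Four threads of one process, all updating the same address space.
threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: every thread updated the same shared variable
```

With separate processes, the same program would need an explicit inter-process communication mechanism to share `counter`; with threads, sharing is automatic because code and data live in one address space.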

CPU i/O burst cycle


• Almost all programs have some alternating cycle of CPU number crunching and waiting for I/O
of some kind. ( Even a simple fetch from memory takes a long time relative to CPU speeds. )
• In a simple system running a single process, the time spent waiting for I/O is wasted, and those
CPU cycles are lost forever.
• A scheduling system allows one process to use the CPU while another is waiting for I/O,
thereby making full use of otherwise lost CPU cycles.
• The challenge is to make the overall system as "efficient" and "fair" as possible, subject to
varying and often dynamic conditions, and where "efficient" and "fair" are somewhat
subjective terms, often subject to shifting priority policies.
CPU-I/O Burst Cycle

Almost all processes alternate between two states in a continuing cycle, as shown in the figure:

1. A CPU burst of performing calculations.
2. An I/O burst, waiting for data transfer in or out of the system.

Scheduling Criteria in OS

There are different CPU scheduling algorithms with different properties. The choice of algorithm
depends on various factors such as waiting time, efficiency, and CPU utilization.
Scheduling is the process of allowing one process to use the CPU while the execution of another
process is put on hold because CPU resources are unavailable.
Types of Scheduling Criteria in an Operating System

There are many criteria suggested for comparing CPU scheduling algorithms, some of which are:

• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time


CPU utilization- The objective of any CPU scheduling algorithm is to keep the CPU as busy as possible
and to maximize its usage. In theory CPU utilization ranges from 0 to 100%, but in real systems it is
typically 50 to 90%, depending on the system's load.

Throughput- A measure of the work done by the CPU, directly proportional to the number of processes
executed and completed per unit of time. It varies with the duration or length of the processes.

Turnaround time- An important scheduling criterion for any process is how long it takes to execute.
Turnaround time is the time elapsed from submission of a process to its completion. It is the sum of
the time spent waiting to get into memory, waiting in the ready queue, performing I/O, and executing
on the CPU.

Waiting time- Once execution starts, the scheduling algorithm does not affect the time required for
the process to complete. It affects only the waiting time of the process, i.e. the time the process
spends waiting in the ready queue.

Response time- Turnaround time is not the best criterion for comparing scheduling algorithms in an
interactive system, because a process may produce some output early while still computing other
results. Another criterion is therefore the time taken from submission of a request until the first
response is produced.

Maximize:

• CPU utilization - Keep the CPU operating at its peak and as busy as possible.
• Throughput - The number of processes that complete their execution per unit of time.

Minimize:

• Waiting time - The amount of time a process waits in the ready queue.
• Response time - The time required to generate the first response after submission.
• Turnaround time - The amount of time required to execute a specific process.
FCFS (FIRST-COME, FIRST-SERVED) Scheduling

FCFS is the simplest CPU-scheduling algorithm. In the FCFS algorithm, the process that requests the
CPU first is allocated the CPU first. The implementation of FCFS is managed with a FIFO
(first in, first out) queue. FCFS scheduling is non-preemptive: once the CPU has been allocated to
a process, that process keeps the CPU until it completes its work or releases the CPU by
requesting I/O.

FCFS Scheduling Mathematical Examples

In CPU-scheduling problems some standard terms are used; for clarity they are defined as follows −

• Arrival time (AT) − Arrival time is the time at which the process arrives in ready queue.
• Burst time (BT) or CPU time of the process − Burst time is the amount of time a
particular process needs to complete its execution.
• Completion time (CT) − Completion time is the time at which the process has been
terminated.
• Turn-around time (TAT) − The total time from arrival time to completion time is known
as turn-around time. TAT can be written as,
Turn-around time (TAT) = Completion time (CT) – Arrival time (AT) or, TAT = Burst time (BT) +
Waiting time (WT)

• Waiting time (WT) − Waiting time is the time at which the process waits for its
allocation while the previous process is in the CPU for execution. WT is written as,
Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)

• Response time (RT) − Response time is the time at which the CPU is allocated to a
particular process for the first time. In non-preemptive scheduling, waiting time
and response time are generally the same.
• Gantt chart − A Gantt chart is a visualization that helps in scheduling and managing
particular tasks in a project. It is used while solving scheduling problems to show
how the processes are allocated under different algorithms.
Example (the CT, TAT, WT, and RT columns are computed using FCFS and the formulas above):

Process No.   Arrival Time   Burst Time   Completion Time   TAT   WT   RT

P1                 0              2              2            2    0    0

P2                 1              2              4            3    1    1

P3                 5              3              8            3    0    0

P4                 6              4             12            6    2    2

Gantt chart: P1 (0-2), P2 (2-4), idle (4-5), P3 (5-8), P4 (8-12).
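The example above can be reproduced with a small FCFS simulation. This is a minimal sketch; the function and variable names are illustrative:

```python
def fcfs(processes):
    """Simulate non-preemptive FCFS scheduling.

    processes: list of (name, arrival_time, burst_time), assumed already
    sorted by arrival time. Returns {name: (CT, TAT, WT)}.
    """
    time = 0
    result = {}
    for name, at, bt in processes:
        time = max(time, at)   # CPU may sit idle until the process arrives
        time += bt             # run the process to completion
        ct = time
        tat = ct - at          # TAT = CT - AT
        wt = tat - bt          # WT = TAT - BT
        result[name] = (ct, tat, wt)
    return result

table = fcfs([("P1", 0, 2), ("P2", 1, 2), ("P3", 5, 3), ("P4", 6, 4)])
for name, row in table.items():
    print(name, row)
# P1 (2, 2, 0)
# P2 (4, 3, 1)
# P3 (8, 3, 0)
# P4 (12, 6, 2)
```

Note the idle gap from t=4 to t=5: P2 completes at t=4 but P3 does not arrive until t=5, so the CPU waits, exactly as `max(time, at)` models.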

SJF Scheduling

The shortest job first (SJF), also known as shortest job next (SJN), is a scheduling policy that
selects the waiting process with the smallest execution time to execute next. It can be either
preemptive or non-preemptive.

Characteristics of SJF Scheduling:

• Shortest Job first has the advantage of having a minimum average waiting time among
all scheduling algorithms.

• It is a Greedy Algorithm.

• It may cause starvation if shorter processes keep coming. This problem can be solved using
the concept of ageing.

• It is often impractical because the operating system may not know the burst times in
advance and therefore cannot sort by them. While execution time cannot be predicted
exactly, several methods can be used to estimate it, such as a weighted average of
previous execution times.

• SJF can be used in specialized environments where accurate estimates of running time are
available.

Algorithm:

• Sort all the processes according to the arrival time.

• Then select that process that has minimum arrival time and minimum Burst time.

• After completion of the process make a pool of processes that arrives afterward till the
completion of the previous process and select that process among the pool which is
having minimum Burst time.
Example
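The algorithm steps above can be sketched as a non-preemptive SJF simulation. The function name, variable names, and sample processes below are illustrative assumptions, not from the text:

```python
def sjf(processes):
    """Non-preemptive SJF: among the processes that have arrived, always
    run the one with the smallest burst time next.

    processes: list of (name, arrival_time, burst_time).
    Returns the execution order and {name: (CT, TAT, WT)}.
    """
    remaining = sorted(processes, key=lambda p: p[1])  # sort by arrival time
    time, order, result = 0, [], {}
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                     # CPU idle until the next arrival
            time = remaining[0][1]
            ready = [p for p in remaining if p[1] <= time]
        name, at, bt = min(ready, key=lambda p: p[2])   # minimum burst time
        remaining.remove((name, at, bt))
        time += bt                        # run the chosen process to completion
        result[name] = (time, time - at, time - at - bt)  # CT, TAT, WT
        order.append(name)
    return order, result

order, table = sjf([("P1", 0, 3), ("P2", 1, 6), ("P3", 2, 1)])
print(order)  # ['P1', 'P3', 'P2']
```

At t=0 only P1 has arrived, so it runs first; when it finishes at t=3, both P2 and P3 are waiting, and SJF picks P3 because its burst (1) is shorter than P2's (6).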

Round Robin Scheduling Algorithm

In this tutorial, we are going to learn about the Round Robin CPU scheduling algorithm, one of the
most widely used process scheduling algorithms. It removes several of the flaws we observed in the
previously discussed CPU scheduling algorithms.

Much of Round Robin's popularity comes from the fact that it works only in a preemptive manner,
which makes it fair and responsive.

Important Abbreviations

1. CPU - - - > Central Processing Unit

2. AT - - - > Arrival Time

3. BT - - - > Burst Time

4. WT - - - > Waiting Time

5. TAT - - - > Turn Around Time

6. CT - - - > Completion Time

7. FIFO - - - > First In First Out

8. TQ - - - > Time Quantum

Round Robin CPU Scheduling

Round Robin is one of the most important CPU scheduling algorithms in the history of CPU
scheduling. It uses a time quantum (TQ): each time a process runs, one quantum is subtracted
from its remaining burst time, so the process completes in chunks.

Time sharing is the main emphasis of the algorithm. Each step of this algorithm is carried out
cyclically. The system defines a specific time slice, known as a time quantum.

First, the processes that are eligible enter the ready queue. The process at the head of the
ready queue is then executed for one time quantum. After its quantum expires, the process is
removed from the head of the queue; if it still requires more time to complete, it is added back
to the tail of the ready queue.

The ready queue never holds duplicate entries for the same process, since holding the same
process more than once would be redundant. Once a process completes its execution, it is not
returned to the ready queue.
Advantages
The Advantages of Round Robin CPU Scheduling are:

1. A fair amount of CPU time is allocated to each job.

2. Because it does not depend on knowing burst times in advance, it can genuinely be
implemented in real systems.

3. It is not affected by the convoy effect or the starvation problem that occur in the
First Come First Serve scheduling algorithm.

Disadvantages

The Disadvantages of Round Robin CPU Scheduling are:

1. Very small time slices reduce CPU output because of the increased number of context switches.

2. The Round Robin approach spends more time on context switching.

3. The time quantum has a significant impact on its performance.

4. Processes cannot be assigned priorities.

Example
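As a simple illustration, here is a Round Robin sketch with a time quantum of 2, assuming all processes arrive at t=0. The function name and sample data are illustrative:

```python
from collections import deque

def round_robin(processes, tq):
    """Round Robin with time quantum tq.

    processes: list of (name, burst_time), all assumed to arrive at t=0.
    Returns {name: completion_time}.
    """
    queue = deque(processes)          # FIFO ready queue
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(tq, remaining)      # run for at most one quantum
        time += run
        if remaining > run:           # unfinished: back to the tail of the queue
            queue.append((name, remaining - run))
        else:                         # finished: record completion, do not requeue
            completion[name] = time
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], tq=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

The execution order is P1, P2, P3, P1, P2, P1: each process gets the CPU for up to two units, and any process with remaining work rejoins the tail of the queue, exactly as described above.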
