Fundamentals of Operating Systems
1. Process Management:
• The OS allocates CPU time to various processes, managing the execution of multiple
tasks concurrently.
2. Memory Management:
• It manages virtual memory, allowing processes to use more memory than physically
available by using techniques like paging and swapping.
3. File Management:
• The OS provides a file system that organizes and stores data on storage devices. It
manages file creation, deletion, and access.
• It ensures data integrity and security through permissions and access controls.
4. Device Management:
• It provides device drivers that act as intermediaries between the hardware and higher-
level software.
5. I/O Management:
• The OS oversees input and output operations, managing data transfers between the
CPU, memory, and external devices.
• It handles buffering, caching, and error handling to ensure reliable and efficient I/O
operations.
6. Security Management:
• The OS enforces security policies and access controls to protect the system and its
resources from unauthorized access.
7. Network Management:
• It may include features for remote access, distributed computing, and network
security.
8. Error Handling:
• The OS monitors the system for errors and exceptions, providing mechanisms to
handle and recover from faults.
• It logs errors, generates error messages, and may attempt to recover from errors when
possible.
1. Monolithic Kernel –
In a monolithic kernel, all operating system services run in kernel space. There are
dependencies between system components, and the code base is huge and complex.
Example:
Unix, Linux, OpenVMS, XTS-400, etc.
2. Micro Kernel –
A microkernel takes a minimalist approach: only core services such as virtual memory and
thread scheduling run in kernel space, and the rest run in user space. With fewer services
in kernel space, it is more stable. It is often used in small operating systems.
Example:
Mach, L4, QNX, MINIX, etc.
3. Hybrid Kernel –
A hybrid kernel combines the monolithic kernel and the microkernel: it has the speed and
design of a monolithic kernel together with the modularity and stability of a microkernel.
Example:
Windows NT, macOS (XNU), etc.
4. Exo-Kernel –
It is the type of kernel that follows the end-to-end principle: it provides as few hardware
abstractions as possible and allocates physical resources directly to applications.
Example:
EROS etc.
Shells
The shell is the outermost layer of the operating system. Shells incorporate a programming
language to control processes and files, as well as to start and control other programs. The
shell manages the interaction between you and the operating system by prompting you for
input, interpreting that input for the operating system, and then handling any resulting
output from the operating system.
Shells provide a way for you to communicate with the operating system. This
communication is carried out either interactively (input from the keyboard is acted upon
immediately) or as a shell script. A shell script is a sequence of shell and operating system
commands that is stored in a file.
When you log in to the system, the system locates the name of a shell program to execute.
After it is executed, the shell displays a command prompt. This prompt is usually a $ (dollar
sign). When you type a command at the prompt and press the Enter key, the shell evaluates
the command and attempts to carry it out. Depending on your command instructions, the
shell writes the command output to the screen or redirects the output. It then returns the
command prompt and waits for you to type another command.
• Korn shell
The Korn shell (ksh command) is backward-compatible with the Bourne shell (bsh
command) and contains most of the Bourne shell features, as well as several of the
best features of the C shell.
• Bourne shell
The Bourne shell is an interactive command interpreter and command programming
language.
• C shell
The C shell is an interactive command interpreter and a command programming
language. It uses syntax that is similar to the C programming language.
Views of Operating System
An operating system is a framework that enables user application programs to interact with
system hardware. The operating system does not perform any functions on its own, but it
provides an environment in which various applications and programs can do useful work. The
operating system may be observed from the point of view of the user or of the system; these
are known as the user view and the system view. In this article, you will learn about the
views of the operating system.
Viewpoints of Operating System
There are mainly two types of views of the operating system. These are as follows:
1. User View
2. System View
1. User View
The user view depends on the system interface that is used by the users. Some systems are
designed for a single user to monopolize the resources, to maximize that user's work. In these
cases, the OS is designed primarily for ease of use, with some attention paid to performance
and little or none to resource utilization.
The user viewpoint focuses on how the user interacts with the operating system through the
usage of various application programs. In contrast, the system viewpoint focuses on how the
hardware interacts with the operating system to complete various tasks.
Most computer users use a monitor, keyboard, mouse, printer, and other accessories to
operate their computer system. In some cases, the system is designed to maximize the output
of a single user. As a result, more attention is paid to ease of use, and resource allocation is
less important. Such systems are designed for a single-user experience, where performance
is not as much of a focus as it is in multi-user systems.
Another example, in which both user experience and performance matter, is one mainframe
computer serving many users who interact with it through terminals on their own computers.
In such circumstances, CPU time and memory must be allocated effectively to give every user
a good experience. The client-server architecture is another good example, where many
clients interact through a remote server and the same constraint of using server resources
effectively arises.
• Handheld User Viewpoint
The touchscreen era has produced the best handheld technology yet. Smartphones interact
via wireless networks to perform numerous operations, but their interfaces are not as capable
as a full computer interface, which limits their usefulness. Even so, their operating systems
are a great example of designing a device around the user's point of view.
Some systems, such as embedded systems, lack a user point of view. The remote control used
to turn the TV on or off is part of an embedded system in which the electronic device
communicates with another program; the user viewpoint is limited, allowing the user only a
narrow way to engage with the application.
2. System View
The OS may also be viewed as just a resource allocator. A computer system comprises various
resources, such as hardware and software, which must be managed effectively. The operating
system manages the resources, decides between competing demands, controls the program
execution, etc. According to this point of view, the operating system's purpose is to maximize
performance. The operating system is responsible for managing hardware resources and
allocating them to programs and users to ensure maximum performance.
From the user point of view, we've discussed the numerous applications that require varying
degrees of user participation. From the system viewpoint, however, we are more concerned
with how the hardware interacts with the operating system than with the user. The
hardware and the operating system interact for a variety of reasons, including:
• Resource Allocation
The hardware contains several resources like registers, caches, RAM, ROM, CPUs, I/O
interaction, etc. These are all resources that the operating system needs when an application
program demands them. Only the operating system can allocate resources, and it uses
several strategies to maximize its use of processing time and memory space. The operating
system uses a variety of strategies to get the most out of the hardware resources, including
paging, virtual memory, caching, and so on. These are very important in the case of various
user viewpoints because inefficient resource allocation may affect the user viewpoint, causing
the user system to lag or hang, reducing the user experience.
• Control Program
The control program controls how input and output devices (hardware) interact with the
operating system. The user may request an action that can only be done with I/O devices; in
this case, the operating system must properly communicate with, control, detect, and
handle such devices.
Evolution and Types of Operating System
An operating system (OS) is a software program that serves as a conduit between computer
hardware and the user. It is a piece of software that coordinates the execution of application
programs, software resources, and computer hardware. It also aids in the control of
software and hardware resources such as file management, memory management,
input/output, and a variety of peripheral devices such as a disc drive, printers, and so
on. Every computer system must have at least one operating system to run other
applications. Browsers, MS Office, Notepad, games, and other applications require an
environment in which to execute and fulfill their functions. This blog explains the evolution
of operating systems over the years.
Operating systems have progressed from slow and expensive systems to today’s
technology, which has exponentially increased computing power at comparatively modest
costs. So let’s have a detailed look at the evolution of operating systems.
Batch System
Batched systems marked the second generation in the evolution of operating systems.
In this generation, batch processing was implemented, which allows jobs to be collected
in a series and then completed sequentially. Computer systems of this generation did not
have a full operating system, although various operating system functionalities were
available, such as FMS and IBSYS. Batch processing was used to improve computer
utilization. Jobs were scheduled and submitted on cards and tapes; then, using Job Control
Language, they were executed successively under the monitor. The software was written on
punch cards and then transferred to tape for processing. When the computer finished one
job, it immediately moved on to the next item on the tape. Although this was inconvenient
for users, it was designed to keep the expensive computer as busy as possible by running a
continuous stream of jobs. Memory protection prevented the memory space that makes up the
monitor from being changed, and a timer prevented any job from monopolizing the system. A
drawback was poor CPU utilization: while the input and output devices were in use, the
processor remained idle.
Example: MVS Operating System of IBM is an example of a batch processing operating
system.
Time-Sharing System
Time-sharing had a great impact on the evolution of operating systems. Multiple users can
access the system via terminals at the same time, and the processor's time is divided among
them. Programs with a command-line user interface required written responses to prompts or
written commands, and the interaction scrolled like a roll of paper. Time-sharing was
initially used to replace batch systems. The user interacts directly with the computer via
a printing port, much like an electric teletype. A few users shared the computer at once,
and each activity was served for a fraction of a second before the system moved on to the
next, so a fast server could act on many users' processes at once, giving each user the
impression of receiving full attention. Multiple programs share the computer system
interactively under time-sharing.
Example: Unix Operating System is an example of a time-sharing OS.
Macintosh
The Macintosh was based on decades of research into graphical operating systems and
applications for personal computers. Sutherland's pioneering Sketchpad program, developed
in the early 1960s, employed many of the characteristics of today's graphical user
interfaces, but the hardware cost millions of dollars and took up a room. After many
research gaps, this early work on large computers, together with hardware improvements,
made the Macintosh commercially and economically viable. Many research laboratories are
still working on research prototypes like Sketchpad, which served as the foundation for
anticipated products.
Example: Mac OS X 10.6.8 Snow Leopard and OS X 10.7.5 Lion are some examples of
Macintosh OS.
Process & Thread Management:
Process: Processes are programs that have been dispatched from the ready state and are
scheduled on the CPU for execution. Each process is represented by a PCB (Process Control
Block). A process can create other processes, known as child processes. A process takes more
time to terminate, and it is isolated: it does not share memory with any other process. A
process can be in the following states: new, ready, running, waiting, terminated, and
suspended.
Thread: A thread is a segment of a process: a process can have multiple threads, and these
threads are contained within the process. A thread has three states: Running, Ready,
and Blocked.
Process vs. Thread:
• A process takes more time to terminate; a thread takes less time to terminate.
• A process takes more time to create; a thread takes less time to create.
• If one process is blocked, it does not affect the execution of other processes; if one
user-level thread is blocked, all other user-level threads of that process are blocked.
The PCB stores the following information about a process:
1. Process ID(PID): A distinct Process ID (PID) on the PCB serves as the process's identifier within
the operating system. The operating system uses this ID to keep track of, manage, and
differentiate among processes.
2. State: The state of the process, such as running, waiting, ready, or terminated, is indicated.
The operating system makes use of this data to schedule and manage operations.
3. Program Counter(PC): The program counter value, which indicates the address of the
following instruction to be performed in the process, is stored on the PCB. The program
counter is saved in the PCB of the running process during context switches and then restored
to let execution continue where it left off.
4. CPU registers: Stores the current contents of the CPU registers associated with the process,
such as stack pointers, general-purpose registers, and program status flags. By saving and
restoring register values during context switches, processes can continue operating
uninterrupted.
5. Priority: Some operating systems provide a priority value to each process to decide the order
in which processes receive CPU time. The PCB may have a priority field that determines the
process's priority level, allowing the scheduler to distribute CPU resources appropriately.
6. I/O information: The PCB maintains information about I/O devices and data related to the
process. Open file descriptors, I/O buffers, and pending I/O requests are all included. Storing
this information enables the operating system to manage I/O operations and efficiently
handle input/output requests.
7. Pointer: This field contains the address of the next PCB in the ready queue. It helps
the operating system maintain the scheduling queues and an easy control flow between parent
processes and child processes.
8. Accounting information: Keeps track of the process's resource utilization data, such as CPU
time, memory usage, and I/O activities. This data aids in performance evaluation and
resource allocation choices.
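The fields above can be sketched as a simplified data structure. This is an illustrative Python sketch only: real PCBs are kernel-internal C structures (for example, `task_struct` in Linux), and every field name here is an assumption chosen to mirror the numbered list above.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Simplified, illustrative Process Control Block (field names are assumptions)."""
    pid: int                                         # 1. process ID
    state: str = "new"                               # 2. new/ready/running/waiting/terminated
    program_counter: int = 0                         # 3. address of the next instruction
    registers: dict = field(default_factory=dict)    # 4. saved CPU register contents
    priority: int = 0                                # 5. scheduling priority
    open_files: list = field(default_factory=list)   # 6. I/O information
    next_pcb: "PCB | None" = None                    # 7. pointer to the next PCB in the queue
    cpu_time_used: float = 0.0                       # 8. accounting information

pcb = PCB(pid=42, priority=5)
pcb.state = "ready"   # updated by the OS on a state transition
```

During a context switch the OS would save the running process's program counter and register contents into its PCB, then restore those of the next process.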
State Transition Diagram
A process passes through several stages from beginning to end; there is a minimum of five
states. Although a process is always in one of these states during execution, the names of
the states are not standardized across operating systems. Each process goes through several
stages throughout its life cycle.
1. New (Create): In this step, the process is about to be created but not yet created. It is the
program that is present in secondary memory that will be picked up by OS to create the
process.
2. Ready: New -> Ready to run. After the creation of a process, the process enters the ready
state i.e. the process is loaded into the main memory. The process here is ready to run and
is waiting to get the CPU time for its execution. Processes that are ready for execution by
the CPU are maintained in a queue called ready queue for ready processes.
3. Running: The process is chosen from the ready queue by the CPU for execution and the
instructions within the process are executed by any one of the available CPU cores.
4. Blocked or Waiting : Whenever the process requests access to I/O or needs input from the
user or needs access to a critical region(the lock for which is already acquired) it enters the
blocked or waits state. The process continues to wait in the main memory and does not
require CPU. Once the I/O operation is completed the process goes to the ready state.
5. Terminated or Completed: The process has finished its execution. The resources
allocated to the process are released or deallocated.
Transitions between these states are typically triggered by various events:
• Admission: The process moves from the "New" state to the "Ready" state when it is
admitted to the system.
• Dispatch: The process transitions from the "Ready" state to the "Running" state when the
operating system scheduler assigns the CPU to it.
• I/O Request: The process may move from the "Running" state to the "Blocked" state when
it issues an I/O request and has to wait for the I/O operation to complete.
• I/O Completion: When the I/O operation is completed, the process can move from the
"Blocked" state back to the "Ready" state.
• Completion: The process transitions from the "Running" state to the "Terminated" state
when it finishes its execution.
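The event-driven transitions above can be captured as a small lookup table. This is a minimal Python sketch: the state and event names follow the list above, and `next_state` is a hypothetical helper, not a real OS API.

```python
# (current state, event) -> next state, taken from the transition list above.
TRANSITIONS = {
    ("new", "admission"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "I/O request"): "blocked",
    ("blocked", "I/O completion"): "ready",
    ("running", "completion"): "terminated",
}

def next_state(current, event):
    """Return the state reached from `current` on `event`, or None if invalid."""
    return TRANSITIONS.get((current, event))

print(next_state("new", "admission"))      # a new process is admitted -> ready
print(next_state("ready", "I/O request"))  # invalid: only a running process does I/O
```

Encoding the diagram as data makes it easy to check that no illegal transition (such as ready to terminated) is ever taken.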
Scheduling Queues
The OS maintains all PCBs in Process Scheduling Queues. The OS maintains a separate queue for each
of the process states and PCBs of all processes in the same execution state are placed in the same
queue. When the state of a process is changed, its PCB is unlinked from its current queue and moved
to its new state queue.
The Operating System maintains the following important process scheduling queues:
● Job queue − This queue keeps all the processes in the system.
● Ready queue − This queue keeps a set of all processes residing in main memory, ready and waiting
to execute. A new process is always put in this queue.
● Device queues − The processes which are blocked due to unavailability of an I/O device constitute
this queue.
Types of Schedulers
Long-Term: Also called the job scheduler. A long-term scheduler determines which
programs are admitted into the system for processing. It picks processes from the queue
and loads them into memory so they can be executed; a process must be loaded into memory
to be eligible for CPU scheduling.
Short-Term: Also called the CPU scheduler. Its major goal is to improve system
performance according to a chosen set of criteria. It carries out the transition of a
process from the ready state to the running state: the CPU scheduler selects a process
from those that are ready to run and allocates the CPU to it.
Types of Threads
1. Kernel-level thread.
2. User-level thread.
User-level thread
The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented by the user. If a user-level thread performs a
blocking operation, the whole process is blocked. The kernel knows nothing about user-level
threads and manages the process as if it were single-threaded.
Advantages of user-level threads:
1. User-level threads can be used on operating systems that do not support threads at the
kernel level.
2. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.
Disadvantages of user-level threads:
1. The scheduler may give more CPU time to a process that has a large number of threads.
2. Kernel-level threads are better suited for applications that block frequently.
Components of Threads
Any thread has the following components.
1. Program counter
2. Register set
3. Stack space
Benefits of Threads
1. Enhanced throughput of the system: When the process is split into many threads, and each
thread is treated as a job, the number of jobs done in the unit time increases. That is why the
throughput of the system also increases.
2. Effective Utilization of Multiprocessor system: When you have more than one thread in one
process, you can schedule more than one thread in more than one processor.
3. Faster context switch: The context switching period between threads is less than the process
context switching. The process context switch means more overhead for the CPU.
4. Responsiveness: When a process is split into several threads, one thread can respond to
the user as soon as it finishes its part of the work.
5. Resource sharing: Resources such as code, data, and files can be shared among all threads
within a process. Note: the stack and registers cannot be shared between threads; each
thread has its own stack and register set.
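The resource-sharing benefit can be illustrated with a short Python sketch in which several threads of the same process update one shared variable. The lock is needed precisely because the data is shared by all threads, while each thread still runs on its own stack.

```python
import threading

counter = 0                      # data shared by every thread in this process
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:               # shared data must be updated under mutual exclusion
            counter += 1

# Four threads of one process, all incrementing the same counter.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                   # 4000: every thread saw and updated the same variable
```

Had each worker been a separate process instead, each copy of `counter` would be private and no locking (or sharing) would occur without explicit inter-process communication.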
CPU-I/O Burst Cycle: Almost all processes alternate between two states in a continuing
cycle, as shown in the figure:
1. a CPU burst of performing calculations, and
2. an I/O burst, waiting for data transfer in or out of the system.
Scheduling Criteria in OS
There are different CPU scheduling algorithms with different properties. The choice of
algorithm depends on various factors such as waiting time, efficiency, CPU utilization, etc.
Scheduling is the process of allowing one process to use the CPU while the execution of
another process is put on hold because CPU resources are unavailable.
Types of Scheduling Criteria in an Operating System
There are many criteria suggested for comparing CPU scheduling algorithms, some of which
are:
• CPU utilization
• Throughput
• Turnaround time
• Waiting time
• Response time
CPU utilization- The objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible and to maximize its usage. In theory, CPU utilization ranges from 0 to 100 percent,
but in real systems it typically ranges from about 50 to 90 percent, depending on the
system's load.
Throughput- It is a measure of the work done by the CPU, which is directly proportional to
the number of processes executed and completed per unit of time. It varies depending on the
duration or length of the processes.
Turnaround time- An important scheduling criterion for any process is how long it takes to
execute. Turnaround time is the interval from the time of submission to the time of
completion. It is the sum of the time spent waiting to get into memory, waiting in the
ready queue, doing I/O, and executing on the CPU.
Waiting time- Once execution starts, the scheduling algorithm does not change the time
required for the process to complete. The only thing it affects is the waiting time of the
process, i.e. the time the process spends waiting in the ready queue.
Response time- Turnaround time is not the best criterion for comparing scheduling
algorithms in an interactive system, since a process may produce some output early while
continuing to compute other results. A better criterion is the time taken from submission
of a request until the first response is produced.
Maximize:
• CPU utilization - It makes sure that the CPU is kept as busy as possible.
• Throughput - It is the number of processes that complete their execution per unit of time.
Minimize:
• Turnaround time - the total time taken from submission of a process to its completion.
• Waiting time - the time a process spends waiting in the ready queue.
• Response time - the time from submission of a request until the first response is produced.
FCFS Scheduling
FCFS (First Come First Serve) is considered the simplest CPU-scheduling algorithm. In the
FCFS algorithm, the process that requests the CPU first is allocated the CPU first. The
implementation of the FCFS algorithm is managed with a FIFO (first in, first out) queue.
FCFS scheduling is non-preemptive: once the CPU has been allocated to a process, that
process keeps the CPU until it terminates or releases the CPU, for example by requesting
I/O.
In CPU-scheduling problems some terms are used while solving the problems, so for conceptual
purpose the terms are discussed as follows −
• Arrival time (AT) − Arrival time is the time at which the process arrives in ready queue.
• Burst time (BT) or CPU time of the process − Burst time is the unit of time in which a
particular process completes its execution.
• Completion time (CT) − Completion time is the time at which the process has been
terminated.
• Turn-around time (TAT) − The total time from arrival time to completion time is known
as turn-around time. TAT can be written as,
Turn-around time (TAT) = Completion time (CT) – Arrival time (AT) or, TAT = Burst time (BT) +
Waiting time (WT)
• Waiting time (WT) − Waiting time is the time at which the process waits for its
allocation while the previous process is in the CPU for execution. WT is written as,
Waiting time (WT) = Turn-around time (TAT) – Burst time (BT)
• Response time (RT) − Response time is the time at which the CPU is allocated to a
particular process for the first time. In the case of non-preemptive scheduling, waiting
time and response time are generally the same.
• Gantt chart − A Gantt chart is a visualization that helps in scheduling and managing
tasks in a project. It is used while solving scheduling problems to show how processes
are allocated to the CPU under different algorithms.
Example:
Process   Arrival Time (AT)   Burst Time (BT)
P1        0                   2
P2        1                   2
P3        5                   3
P4        6                   4
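For the process set above, a minimal FCFS simulation can compute completion time (CT), turnaround time (TAT = CT - AT), and waiting time (WT = TAT - BT) for each process. This is an illustrative Python sketch, not production scheduler code:

```python
def fcfs(processes):
    """FCFS: serve processes in arrival order; non-preemptive."""
    time = 0
    results = []
    for name, at, bt in sorted(processes, key=lambda p: p[1]):
        time = max(time, at)      # CPU may sit idle until the process arrives
        ct = time + bt            # completion time
        tat = ct - at             # turnaround time = CT - AT
        wt = tat - bt             # waiting time = TAT - BT
        results.append((name, ct, tat, wt))
        time = ct
    return results

# Process set from the example table: (name, arrival time, burst time)
for row in fcfs([("P1", 0, 2), ("P2", 1, 2), ("P3", 5, 3), ("P4", 6, 4)]):
    print(row)   # (name, CT, TAT, WT)
```

Note that the CPU idles from time 4 to 5 while waiting for P3 to arrive, and P4 then waits 2 time units behind P3.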
SJF Scheduling
Shortest job first (SJF), also known as shortest job next (SJN), is a scheduling policy
that selects the waiting process with the smallest execution time to execute next. It can
be preemptive or non-preemptive.
• Shortest Job first has the advantage of having a minimum average waiting time among
all scheduling algorithms.
• It is a Greedy Algorithm.
• It may cause starvation if shorter processes keep coming. This problem can be solved using
the concept of ageing.
• It is often impractical because the operating system may not know burst times in advance
and therefore cannot sort on them. While it is not possible to know execution times
exactly, several methods can be used to estimate them, such as a weighted average of
previous execution times.
• SJF can be used in specialized environments where accurate estimates of running time are
available.
Algorithm:
• Sort all the processes according to their arrival time.
• Select the process that has the minimum arrival time and minimum burst time.
• After that process completes, build a pool of the processes that arrived before the
previous process finished, and from that pool select the process with the minimum
burst time.
Example
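As a worked example, here is a minimal non-preemptive SJF simulation in Python. The process set (P1 to P4 with the arrival and burst times below) is hypothetical, chosen only to illustrate the selection rule above:

```python
def sjf_nonpreemptive(processes):
    """Non-preemptive SJF: among arrived processes, run the one with the smallest burst."""
    remaining = sorted(processes, key=lambda p: (p[1], p[2]))  # by arrival, then burst
    time, results = 0, []
    while remaining:
        ready = [p for p in remaining if p[1] <= time]
        if not ready:                                  # CPU idles until the next arrival
            time = min(p[1] for p in remaining)
            ready = [p for p in remaining if p[1] <= time]
        name, at, bt = min(ready, key=lambda p: p[2])  # pick the smallest burst time
        remaining.remove((name, at, bt))
        time += bt                                     # runs to completion (non-preemptive)
        results.append((name, time, time - at, time - at - bt))  # (name, CT, TAT, WT)
    return results

# Hypothetical process set: (name, arrival time, burst time)
for row in sjf_nonpreemptive([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]):
    print(row)
```

While P1 runs, P2, P3, and P4 all arrive; at time 7 the scheduler picks P3 (burst 1) even though P2 arrived earlier, which is exactly the SJF selection rule.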
In this tutorial, we are going to learn about the Round Robin CPU process scheduling
algorithm. This algorithm is special because it addresses many of the flaws observed in the
previous CPU scheduling algorithms. Much of the popularity of Round Robin comes from the
fact that it works in a preemptive manner, which makes it fair and responsive.
Important Abbreviations
Round Robin is one of the most widely used CPU scheduling algorithms in the history of CPU
scheduling. Round Robin scheduling uses a Time Quantum (TQ): a fixed slice of CPU time that
is subtracted from a process's remaining burst time on each turn, allowing a chunk of the
process to be completed.
Time Sharing is the main emphasis of the algorithm. Each step of this algorithm is carried out
cyclically. The system defines a specific time slice, known as a time quantum.
First, the processes that are eligible enter the ready queue. The first process in the
ready queue is then executed for one time quantum. If, after that, the process still
requires more time to complete its execution, it is added back to the end of the ready
queue. The ready queue never holds duplicate entries: each process appears at most once,
since holding the same process twice would introduce redundancy. Once a process's execution
is complete, it is removed from the ready queue and not returned to it.
Advantages
The advantages of Round Robin CPU scheduling are:
1. Because it does not depend on burst time, it can actually be implemented in a real
system.
2. It is not affected by the convoy effect or the starvation problem that can occur with
the First Come First Serve CPU scheduling algorithm.
Disadvantages
1. A very small time quantum (slice) results in frequent context switches and decreased
CPU output.
Example
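As an example of the mechanism described above, here is a minimal Round Robin simulation in Python. All processes are assumed to arrive at time 0, and the process set and the quantum of 2 are hypothetical values chosen for illustration:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round Robin: each process runs at most one time quantum, then re-enters the queue."""
    queue = deque(processes)              # (name, remaining burst); all arrive at time 0
    time, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)     # run one quantum, or less if the process finishes
        time += run
        if remaining - run > 0:
            queue.append((name, remaining - run))  # unfinished: back to the end of the queue
        else:
            completion[name] = time       # finished: record CT, do not re-queue
    return completion

print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
```

With a quantum of 2 the CPU cycles through P1, P2, P3, P1, P2, P1; P3 finishes at time 5, P2 at time 8, and P1 at time 9, and no process ever appears in the queue twice at once.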