Unit 1&3

An operating system (OS) manages computer hardware and provides an interface for users to interact with applications. It controls hardware resources, facilitates process management, memory management, file management, and I/O system management. Various types of operating systems include mainframe systems, personal computer systems, multiprocessor systems, distributed systems, and real-time systems, each designed to meet specific user and resource management needs.


Operating System:

 An operating system is a program which manages all the computer hardware.

 It provides the base for application programs and acts as an intermediary between a
user and the computer hardware.
 The operating system has two main objectives:
 Firstly, an operating system controls the computer’s hardware.
 Secondly, it provides an interactive interface to the user
and interprets commands so that the user can communicate with the hardware.
 The operating system is a very important part of almost every computer system.
Managing Hardware

 The prime objective of an operating system is to manage & control the various
hardware resources of a computer system.
 These hardware resources include the processor, memory, disk space and so on.
 The output is displayed on the monitor. In addition to communicating with
the hardware, the operating system provides an error handling procedure and
displays error notifications.
 If a device is not functioning properly, the operating system cannot
communicate with it.
Providing an Interface
The operating system organizes applications so that users can easily access, use and store them.
 It provides a stable and consistent way for applications to deal with the hardware
without the user having to know the details of the hardware.
 If a program is not functioning properly, the operating system again takes
control, stops the application and displays an appropriate error message.
 Computer system components are divided into 5 parts:
 Computer hardware
 Operating system
 Utilities
 Application programs
 End users

 Hardware – provides basic computing resources (CPU, memory, I/O devices).

 Operating system – controls and coordinates the use of the hardware among the
various application programs for the various users.

 Application programs – define the ways in which the system resources are used to
solve the computing problems of the users (compilers, database systems, video
games, business programs).

 Users (people, machines, other computers)

 The operating system controls and coordinates the use of hardware among the various
application programs for the various users.
 It is a program that directly interacts with the hardware.
 The operating system is the first program loaded into the computer, and it remains in
memory at all times thereafter.
Operating System Definitions

 Resource allocator – manages and allocates resources.


 Control program – controls the execution of user programs and the operation of I/O
devices.
 Kernel – the one program running at all times (all else being application
programs).
 Components of OS: an OS has two parts: (1) Kernel, (2) Shell.
o Kernel is the active part of an OS, i.e., it is the part of the OS running at all times.
It is the program which interacts with the hardware. Ex: device drivers, dll
files, system files etc.
o Shell is called the command interpreter. It is a set of programs used to
interact with the application programs. It is responsible for the execution of
instructions given to the OS (called commands).
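
As a rough illustration of the shell's command-interpreter role, the sketch below (a minimal example assuming a POSIX system; the name minish and the 64-argument limit are arbitrary choices) reads a command, creates a child process to run it, and waits for the child to finish:

/* minish.c - a minimal command-interpreter sketch (POSIX assumed) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[256];
    char *argv[64];

    for (;;) {
        printf("minish> ");
        if (fgets(line, sizeof line, stdin) == NULL)     /* EOF: leave the shell */
            break;

        /* split the command line into words */
        int argc = 0;
        for (char *tok = strtok(line, " \t\n"); tok && argc < 63; tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;                                    /* empty line: prompt again */

        pid_t pid = fork();                              /* create a child process */
        if (pid == 0) {
            execvp(argv[0], argv);                       /* replace the child with the command */
            perror("execvp");                            /* reached only if exec fails */
            _exit(1);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);                       /* the shell waits for the command */
        } else {
            perror("fork");
        }
    }
    return 0;
}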
System goals

 The purpose of an operating system is to provide an environment in which a user
can execute programs.
 Its primary goal is to make the computer system convenient for the user.
 Its secondary goal is to use the computer hardware in an efficient manner.
View of operating system
 User view: The user view of the computer varies with the interface being used.
Examples are Windows XP, Vista, Windows 7 etc. Most computer users sit in
front of a personal computer (PC); in this case the operating system is
designed mostly for ease of use, with some attention paid to resource utilization.
Some users sit at a terminal connected to a mainframe or minicomputer. In this
case other users are accessing the same computer through other terminals.
These users share resources and may exchange information. The operating
system in this case is designed to maximize resource utilization, to ensure that
all available CPU time, memory and I/O are used efficiently and that no individual
user takes more than his/her fair share. Other users sit at
workstations connected to a network of other workstations and servers. These
users have dedicated resources, but they also share resources such as networking
and servers (file, compute and print servers). Here the operating system is
designed to compromise between individual usability and resource utilization.
 System view: From the computer's point of view, the operating system is the
program most intimately involved with the hardware. An operating system manages
resources, both hardware and software, which may be required to solve a problem,
such as CPU time, memory space, file storage space, I/O devices and so on.
That is why the operating system acts as the manager of these resources. Another
view of the operating system is as a control program. A control program
manages the execution of user programs to prevent errors and improper use of
the computer. It is especially concerned with the operation and control of
the I/O devices.

Types of Operating System


1. Mainframe System: Mainframes were the first computers used to handle many
commercial and scientific applications. The growth of mainframe systems can be traced from
simple batch systems, where the computer runs one and only one application, to time
shared systems, which allow user interaction with the computer system.

a. Batch /Early System: Early computers were physically large machines. The
common input devices were card readers and tape drives. The common output
devices were line printers, tape drives and card punches. In these systems
the user did not interact directly with the computer system. Instead, the user
prepared a job, which consisted of the program, data and some control
information, and submitted it to the computer operator; after some time
the output appeared. The operating system in these early computers was fairly
simple: its main task was to transfer control automatically from one job to the next. The
operating system always resides in memory. To speed up processing,
operators batched jobs with similar needs and ran them together as a group.
The disadvantage of a batch system is that in this execution environment the
CPU is often idle, because the I/O devices are much slower than the
CPU.

b. Multiprogrammed System: Multiprogramming increases CPU
utilization by organizing jobs so that the CPU always has one job to
execute; this is the idea behind the multiprogramming concept. The operating system
keeps several jobs in memory simultaneously, as shown in the figure below.

This set of jobs is a subset of the jobs kept in the job pool. The operating system
picks and begins to execute one of the jobs in memory. In this
environment, when a job needs to wait, the CPU is simply switched to another job, and so
on. The multiprogramming operating system is sophisticated because the
operating system makes decisions for the user; this is known as scheduling.
If several jobs are ready to run at the same time, the system chooses one
among them. This is known as CPU scheduling. The disadvantages of a
multiprogrammed system are:
 It does not provide user interaction with the computer system
during program execution.
 The introduction of disk technology solved these problems: rather
than being read from the card reader, jobs are read into the disk. This form of
processing is known as spooling.
SPOOL stands for simultaneous peripheral operations online. It uses the disk
as a huge buffer for reading from input devices and for storing output data
until the output devices can accept them. It is also used for processing data at
remote sites. The remote processing is done at its own speed, with no CPU
intervention. Spooling overlaps the input and output of one job with the computation
of other jobs. Spooling has a beneficial effect on the performance of the
system by keeping both the CPU and the I/O devices working at much higher rates.
c. Time Sharing System: The time sharing system is also known as a multi-user
system. The CPU executes multiple jobs by switching among them, but the
switches occur so frequently that the user can interact with each program
while it is running. An interactive computer system provides direct
communication between a user and the system. The user gives instructions to the
operating system or to a program directly using a keyboard or mouse and waits
for immediate results, so the response time will be short. The time sharing
system allows many users to share the computer simultaneously. Since each
action in this system is short, only a little CPU time is needed for each user.
The system switches rapidly from one user to the next, so each user feels as if
the entire computer system is dedicated to his use, even though it is being
shared by many users. The disadvantages of time sharing systems are:
 It is more complex than a multiprogrammed operating system.
 The system must have memory management & protection, since several
jobs are kept in memory at the same time.
 A time sharing system must also provide a file system, so disk management is
required.
 It provides a mechanism for concurrent execution, which requires complex
CPU scheduling schemes.

2. Personal Computer System/Desktop System: Personal computers appeared in
the 1970s. They are microcomputers that are smaller & less expensive than mainframe
systems. Instead of maximizing CPU & peripheral utilization, these systems opt for
maximizing user convenience & responsiveness. At first, file protection was not
necessary on a personal machine. But when other computers and other users can
access the files on a PC, file protection becomes necessary. The lack of protection
made it easy for malicious programs to destroy data on such systems. These
programs may be self-replicating & they spread via worm or virus mechanisms.
They can disrupt entire companies or even worldwide networks. E.g.: Windows 98,
Windows 2000, Linux.
3. Multiprocessor Systems/ Parallel Systems/ Tightly Coupled Systems:
These systems have more than one processor in close communication, sharing
the computer bus, clock, memory & peripheral devices. Ex: UNIX, LINUX.
Multiprocessor systems have 3 main advantages.
a. Increased throughput: i.e. the number of processes computed per unit time. By
increasing the number of processors, more work can be done in less time. The
speed-up ratio with N processors is not N, but less than N, because a
certain amount of overhead is incurred in keeping all the parts working
correctly.
b. Increased reliability: If functions can be properly distributed among several
processors, then the failure of one processor will not halt the system, but only slow
it down. This ability to continue to operate in spite of failure makes the
system fault tolerant.
c. Economy of scale: Multiprocessor systems can save money as they can share
peripherals, storage & power supplies.
The various types of multiprocessing systems are:
 Symmetric Multiprocessing (SMP): Each processor runs an identical copy
of the operating system & these copies communicate with one another as
required. Ex: Encore's version of UNIX for the Multimax computer. Virtually
all modern operating systems, including Windows NT, Solaris, Digital UNIX,
OS/2 & LINUX, now provide support for SMP.
 Asymmetric Multiprocessing (Master – Slave Processors): Each processor
is assigned a specific task. A master processor controls the system &
schedules & allocates the work to the slave processors. Ex: Sun's operating
system SunOS version 4 provides asymmetric multiprocessing.
4. Distributed System/Loosely Coupled Systems: In contrast to tightly coupled
systems, the processors do not share memory or a clock. Instead, each processor
has its own local memory. The processors communicate with each other by various
communication lines such as high speed buses or telephone lines. Distributed
systems depend on networking for their functionality. By being able to
communicate, distributed systems are able to share computational tasks and provide
a rich set of features to users. Networks vary by the protocols used, the
distances between the nodes and the transport media. TCP/IP is the most common
network protocol. The processors in a distributed system vary in size and function.
They may be microprocessors, workstations, minicomputers, or large general purpose
computers. Network types are based on the distance between the nodes, such as
LAN (within a room, floor or building) and WAN (between buildings, cities or
countries). The advantages of distributed systems are resource sharing, computation
speed up, reliability, and communication.
5. Real time Systems: A real time system is used when there are rigid time
requirements on the operation of a processor or the flow of data. Sensors bring data to
the computer. The computer analyzes the data and adjusts controls to modify the
sensor inputs. Systems that control scientific experiments, medical imaging
systems and some display systems are real time systems. The disadvantages of real
time systems are:
a. A real time system is considered to function correctly only if it returns the
correct result within the time constraints.
b. Secondary storage is limited or missing; instead, data is usually stored in short
term memory or ROM.
c. Advanced OS features are absent. Real time systems are of two types:
 Hard real time systems: These guarantee that critical tasks are
completed on time; each critical task must finish by its deadline.
 Soft real time systems: This is a less restrictive type of real time system, where
a critical task gets priority over other tasks and retains that priority until it
completes. These have more limited utility than hard real time systems.
Missing an occasional deadline is acceptable,
e.g. QNX, VxWorks. Digital audio or multimedia is included in this category.
A real time OS is a special purpose OS in which there are rigid time requirements on the
operation of a processor. A real time OS has well defined, fixed time constraints.
Processing must be done within the time constraints or the system will fail. A real
time system is said to function correctly only if it returns the correct result within
the time constraints. These systems are characterized by having time as a key
parameter.

Basic Functions of Operation System:


The various functions of operating system are as follows:
1. Process Management:
 A program does nothing unless its instructions are executed by a CPU. A process is
a program in execution. A time shared user program such as a compiler is a process.
A word processing program being run by an individual user on a PC is a process.
 A system task such as sending output to a printer is also a process. A process needs
certain resources, including CPU time, memory, files & I/O devices, to accomplish its
task.
 These resources are either given to the process when it is created or allocated to it
while it is running. The OS is responsible for the following activities of process
management.
 Creating & deleting both user & system processes.
 Suspending & resuming processes.
 Providing mechanism for process synchronization.
 Providing mechanism for process communication.
 Providing mechanism for deadlock handling.
2. Main Memory Management:

The main memory is central to the operation of a modern computer system. Main
memory is a large array of words or bytes, ranging in size from hundreds of
thousands to billions. Main memory stores the quickly accessible data shared by the
CPU & I/O devices. The central processor reads instructions from main memory
during the instruction fetch cycle & it both reads & writes data from main memory
during the data fetch cycle. The main memory is generally the only large storage
device that the CPU is able to address & access directly. For example, for the CPU
to process data from disk, those data must first be transferred to main memory by
CPU-generated I/O calls. Instructions must be in memory for the CPU to execute
them. The OS is responsible for the following activities in connection with memory
management.
 Keeping track of which parts of memory are currently being used & by whom.
 Deciding which processes are to be loaded into memory when memory
space becomes available.
 Allocating &deallocating memory space as needed.
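
From a program's point of view, the "allocating & deallocating memory space" activity shows up as memory requests made to the OS. A minimal sketch, assuming a POSIX/Linux-style mmap() interface (illustrative only; mmap is one of several mechanisms):

/* mem_demo.c - requesting and releasing memory from the OS (POSIX mmap, illustrative) */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096;                      /* one typical page */

    /* ask the OS to map an anonymous, private region into our address space */
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(region, "hello from a freshly mapped page");
    printf("%s\n", region);

    munmap(region, len);                    /* give the memory back to the OS */
    return 0;
}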
3. File Management:
File management is one of the most important components of an OS. A computer can
store information on several different types of physical media; magnetic tape,
magnetic disk & optical disk are the most common media. Each medium is
controlled by a device, such as a disk drive or tape drive, that has unique
characteristics. These characteristics include access speed, capacity, data transfer
rate & access method (sequential or random). For convenient use of the computer
system, the OS provides a uniform logical view of information storage. The OS
abstracts from the physical properties of its storage devices to define a logical
storage unit, the file. A file is a collection of related information defined by its creator.
The OS is responsible for the following activities of file management.
 Creating & deleting files.
 Creating & deleting directories.
 Supporting primitives for manipulating files & directories.
 Mapping files into secondary storage.
 Backing up files on non-volatile media.
4. I/O System Management:
One of the purposes of an OS is to hide the peculiarities of specific hardware
devices from the user. For example, in UNIX the peculiarities of I/O devices are
hidden from the bulk of the OS itself by the I/O subsystem. The I/O subsystem
consists of:
 A memory management component that includes buffering, caching & spooling.
 A general device-driver interface and drivers for specific hardware devices. Only
the device driver knows the peculiarities of the specific device to which it is
assigned.

5. Secondary Storage Management:


The main purpose of a computer system is to execute programs. These programs, with
the data they access, must be in main memory during execution. Because main
memory is too small to accommodate all data & programs, & because the data it
holds are lost when power is lost, the computer system must provide secondary
storage to back up main memory. Most modern computer systems use disks as the
storage medium to store data & programs. The operating system is responsible for
the following activities of disk management.
 Free space management.
 Storage allocation.
 Disk scheduling
Because secondary storage is used frequently it must be used efficiently.

Networking:
A distributed system is a collection of processors that don't share memory, peripheral
devices or a clock. Each processor has its own local memory & clock, and the processors
communicate with one another through various communication lines, such as high speed
buses or networks. The processors in the system are connected through communication
networks, which can be configured in a number of different ways. The communication
network design must consider message routing & connection strategies, and the problems
of contention & security.
Protection or security:
If a computer system has multiple users & allows the concurrent execution of multiple
processes, then the various processes must be protected from one another's activities.
For that purpose, mechanisms ensure that files, memory segments, CPU & other
resources can be operated on by only those processes that have gained proper
authorization from the OS.
Command interpretation:
One of the most important functions of the OS is command interpretation, where it acts
as the interface between the user & the OS.

System Calls:
System calls provide the interface between a process & the OS. These are usually
available in the form of assembly language instructions. Some systems allow system
calls to be made directly from a high level language program like C, BCPL and PERL
etc. System calls occur in different ways, depending on the computer in use. System
calls can be roughly grouped into 5 major categories.

1. Process Control:
 End, abort: A running program needs to be able to halt its execution either
normally (end) or abnormally (abort).
 Load, execute: A process or job executing one program may want to load and
execute another program.
 Create process, terminate process: There is a system call specifically for the
purpose of creating a new process or job (create process or submit job). We may
want to terminate a job or process that we created (terminate process), if we find
that it is incorrect or no longer needed.
 Get process attributes, set process attributes: If we create a new job or process,
we should be able to control its execution. This control requires the ability to
determine & reset the attributes of a job or process (get process attributes, set
process attributes).
 Wait time: After creating new jobs or processes, we may need to wait for them
to finish their execution (wait time).
 Wait event, signal event: We may wait for a specific event to occur (wait
event). The jobs or processes then signal when that event has occurred (signal
event).
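
A minimal sketch of how several of these calls look in a POSIX environment (call names differ between operating systems; fork, waitpid, exit and abort stand in here for the generic create process, wait and end/abort calls described above):

/* proc_calls.c - create process, get attributes, wait, terminate (POSIX, illustrative) */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t child = fork();                       /* "create process" */
    if (child < 0) {
        perror("fork");
        abort();                                /* "abort": abnormal termination */
    }

    if (child == 0) {
        /* "get process attributes": our own id and our parent's id */
        printf("child pid=%d, parent pid=%d\n", (int)getpid(), (int)getppid());
        exit(42);                               /* "end": normal termination with a status */
    }

    int status = 0;
    waitpid(child, &status, 0);                 /* "wait": parent blocks until the child ends */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}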
2. File Manipulation:
 Create file, delete file: We first need to be able to create & delete files. Both the
system calls require the name of the file & some of its attributes.
 Open file, close file: Once the file is created, we need to open it & use it. We
close the file when we are no longer using it.
 Read, write, reposition file: After opening, we may also read, write or
reposition the file (rewind or skip to the end of the file).
 Get file attributes, set file attributes: For either files or directories, we need to
be able to determine the values of various attributes & reset them if necessary.
Two system calls, get file attributes & set file attributes, are required for this
purpose.
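
A minimal sketch of these categories using the POSIX file-manipulation calls (open, read, write, lseek, fstat, close, unlink); the file name demo.txt is an arbitrary example:

/* file_calls.c - create, write, reposition, read and inspect a file (POSIX, illustrative) */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void) {
    const char *name = "demo.txt";              /* example file name (arbitrary) */

    int fd = open(name, O_CREAT | O_RDWR | O_TRUNC, 0644);   /* "create file" / "open file" */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello, file\n", 12);             /* "write" */
    lseek(fd, 0, SEEK_SET);                     /* "reposition" back to the start */

    char buf[32] = {0};
    read(fd, buf, sizeof buf - 1);              /* "read" */
    printf("read back: %s", buf);

    struct stat st;
    fstat(fd, &st);                             /* "get file attributes" */
    printf("size = %lld bytes\n", (long long)st.st_size);

    close(fd);                                  /* "close file" */
    unlink(name);                               /* "delete file" */
    return 0;
}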
3. Device Management:
 Request device, release device: If there are multiple users of the system, we must first
request the device. After we have finished with the device, we must release it.
 Read, write, reposition: Once the device has been requested & allocated to us,
we can read, write & reposition the device.

4. Information maintenance:
 Get time or date, set time or date: Most systems have a system call to return the
current date & time or set the current date & time.
 Get system data, set system data: Other system calls may return information
about the system like number of current users, version number of OS, amount of
free memory etc.
 Get process attributes, set process attributes: The OS keeps information
about all its processes & there are system calls to access this information.
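
A small sketch of information-maintenance calls as they appear on a POSIX system (time, getpid and getpriority/setpriority are used here as stand-ins for the generic get/set calls above):

/* info_calls.c - querying time, process id and scheduling priority (POSIX, illustrative) */
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void) {
    time_t now = time(NULL);                        /* "get time or date" */
    printf("current time: %s", ctime(&now));

    printf("process id: %d\n", (int)getpid());      /* one process attribute */

    int prio = getpriority(PRIO_PROCESS, 0);        /* "get process attributes": nice value */
    printf("current nice value: %d\n", prio);

    /* "set process attributes": lower our own priority (raise the nice value) */
    if (setpriority(PRIO_PROCESS, 0, prio + 1) != 0)
        perror("setpriority");
    return 0;
}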
5. Communication: There are two modes of communication such as:
 Message passing model: Information is exchanged through an inter-process
communication facility provided by the operating system. Each computer in a
network has a name by which it is known. Similarly, each process has a process
name which is translated to an equivalent identifier by which the OS can refer to
it. The get hostid and get processid system calls do this translation. These
identifiers are then passed to the general purpose open & close calls provided by
the file system, or to a specific open connection system call. The recipient process
must give its permission for communication to take place with an accept
connection call. The source of the communication, known as the client, & the
receiver, known as the server, exchange messages using the read message & write
message system calls. The close connection call terminates the connection.
 Shared memory model: Processes use map memory system calls to gain access to
regions of memory owned by other processes. They exchange information by
reading & writing data in the shared areas. The processes ensure that they are not
writing to the same location simultaneously.
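
A minimal sketch of the message passing model using a POSIX pipe between a parent and a child process (the pipe stands in here for the open connection / read message / write message calls described above; the "request" text is arbitrary):

/* msg_pipe.c - message passing between parent and child via a pipe (POSIX, illustrative) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fds[2];
    if (pipe(fds) != 0) { perror("pipe"); return 1; }   /* create the communication channel */

    pid_t pid = fork();
    if (pid == 0) {                        /* child: the "client" sends a message */
        close(fds[0]);                     /* child only writes */
        const char *msg = "request: page /index.html";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    /* parent: the "server" receives the message */
    close(fds[1]);                         /* parent only reads */
    char buf[64] = {0};
    read(fds[0], buf, sizeof buf - 1);
    printf("server received: %s\n", buf);
    close(fds[0]);
    wait(NULL);
    return 0;
}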

SYSTEM PROGRAMS:
System programs provide a convenient environment for program development &
execution. They are divided into the following categories.
 File manipulation: These programs create, delete, copy, rename, print &
manipulate files and directories.
 Status information: Some programs ask the system for date, time & amount
of available memory or disk space, no. of users or similar status information.
 File modification: Several text editors are available to create and modify the
contents of files stored on disk.
 Programming language support: Compilers, assemblers & interpreters are
provided to the user with the OS.
 Program loading and execution: Once a program is assembled or
compiled, it must be loaded into memory to be executed.
 Communications: These programs provide the mechanism for creating virtual
connections among processes, users and different computer systems.
 Application programs: Most operating systems are supplied with programs that are
useful to solve common problems or perform common operations. Ex: web browsers,
word processors & text formatters etc.
System structure:
1. Simple structure: There are several commercial systems that don't have a well-
defined structure. Such operating systems begin as small, simple & limited
systems and then grow beyond their original scope. MS-DOS is an example of
such a system. It was not divided into modules carefully. Another example of
limited structuring is the UNIX operating system.

(MS DOS Structure)


2. Layered approach: In the layered approach, the OS is broken into a number of
layers (levels), each built on top of lower layers. The bottom layer (layer 0) is
the hardware & the topmost layer (layer N) is the user interface. The main advantage of
the layered approach is modularity.
The layers are selected such that each uses the functions (or operations) & services of only
lower-level layers.

 This approach simplifies debugging & system verification, i.e. the first layer can be
debugged without concerning the rest of the system. Once the first layer is
debugged, its correct functioning is assumed while the 2nd layer is debugged & so
on.
 If an error is found during the debugging of a particular layer, the error must be on
that layer because the layers below it are already debugged. Thus the design &
implementation of the system are simplified when the system is broken down into
layers.
 Each layer is implemented using only operations provided by lower layers. A layer
doesn’t need to know how these operations are implemented; it only needs to know
what these operations do.

 The layered approach was first used in the THE operating system, which was defined in six layers.

Layers   Functions
5        User Program
4        I/O Management
3        Operator Process Communication
2        Memory Management
1        CPU Scheduling
0        Hardware

The main disadvantage of the layered approach is:


 The main difficulty with this approach involves the careful definition of the
layers, because a layer can use only those layers below it. For example, the
device driver for the disk space used by virtual memory algorithm must be at
a level lower than that of the memory management routines, because
memory management requires the ability to use the disk space.
 It is less efficient than a non-layered system (each layer adds overhead to
the system call & the net result is a system call that takes longer than on
a non-layered system).

Virtual Machines:

By using CPU scheduling & virtual memory techniques, an operating system can create
the illusion of multiple processes, each executing on its own processor with its own virtual
memory. Each process is provided with a virtual copy of the underlying computer. The
resources of the physical computer are shared to create the virtual machines. CPU scheduling
can be used to create the appearance that users have their own processor.

Implementation: Although the virtual machine concept is useful, it is difficult to


implement since much effort is required to provide an exact duplicate of the underlying
machine. The CPU is being multiprogrammed among several virtual machines, which
slows down the virtual machines in various ways.
Difficulty: A major difficulty with this approach is regarding the disk system. The
solution is to provide virtual disks, which are identical in all respects except size. These
are known as mini disks in IBM’s VM OS. The sum of sizes of all mini disks should be
less than the actual amount of physical disk space available.

Operating System Services


An operating system provides an environment for the execution of programs. It
provides certain services to programs. The various services provided by an operating
system are as follows:
 Program Execution: The system must be able to load a program into memory
and to run that program. The program must be able to end its execution,
either normally or abnormally.
 I/O Operation: A running program may require I/O, which may involve a
file or an I/O device. For specific devices, special functions may be desired.
Therefore the operating system must provide a means to do I/O.
 File System Manipulation: Programs need to create and delete files by
name and to read and write files. Therefore the operating system must maintain
every file correctly.
 Communication: Communication is implemented via shared memory or by
the technique of message passing, in which packets of information are moved
between processes by the operating system.
 Error detection: The operating system should take the appropriate action on
the occurrence of errors of any type, such as arithmetic overflow, access to an
illegal memory location, or excessive use of CPU time.
 Resource Allocation: When multiple users are logged on to the system, the
resources must be allocated to each of them. For proper distribution of the
resources among the various processes, the operating system uses CPU
scheduling routines, which determine which process will be allocated the
resource.
 Accounting: The operating system keeps track of which users use how much and
which kinds of computer resources.
 Protection: The operating system is responsible for both hardware and
software protection. It protects the information stored in a
multiuser computer system.
System Call:
 System calls provide an interface between a process and the operating system.
 System calls allow user-level processes to request services from the operating
system that the process itself is not allowed to perform.
 For example, for I/O a process invokes a system call telling the operating system to
read or write a particular area, and this request is satisfied by the operating system.
 The following are the different types of system calls provided by an operating system:
 Process control
o end, abort
o load, execute
o create process, terminate process
o get process attributes, set process attributes
o wait for time
o wait event, signal event
o allocate and free memory
 File management
o create file, delete file
o open, close
o read, write, reposition
o get file attributes, set file attributes
 Device management
o request device, release device
o read, write, reposition
o get device attributes, set device attributes
o logically attach or detach devices
 Information maintenance
o get time or date, set time or date
o get system data, set system data
o get process, file, or device attributes
o set process, file, or device attributes
 Communications
o create, delete communication connection
o send, receive messages
o transfer status information
o attach or detach remote devices
System Calls vs. System Programs:
 System calls act as an interface between user-level applications and the operating
system kernel; system programs provide higher-level services and simplify complex
operations.
 System calls enable users to request services or resources from the operating system;
system programs offer a user-friendly interface and facilitate administrative tasks.
 System calls serve as the fundamental building blocks of an operating system; system
programs are the glue that holds the operating system together.
 System calls allow users to perform essential tasks such as file management, process
control, and memory allocation; system programs enable users to interact with the
operating system more intuitively and efficiently.
Unit -3
Process Management:
Process: A process or task is an instance of a program in execution. The execution of
a process must progress in a sequential manner; at any time, at most one instruction is
executed on its behalf. A process includes the current activity, as represented by the value
of the program counter and the contents of the processor's registers. It also includes the
process stack, which contains temporary data (such as method parameters, return addresses
and local variables), & a data section, which contains global variables.
Difference between process & program:
 A program by itself is not a process. A program in execution is known as a
process.

 A program is a passive entity, such as the contents of a file stored on disk


whereas a process is an active entity, with a program counter specifying the
next instruction to execute and a set of associated resources. The CPU may be
shared among several processes, with some scheduling algorithm being used to
determine when to stop work on one process and service a different
one.
 A process can be described as either:
 I/O bound process – spends more time doing I/O than
computation.
 CPU bound process – spends more time doing computation.

Process state: As a process executes, it changes state. The state of a process is defined
by the current activity of that process. Each process may be in one of the following
states.
 New: The process is being created.
 Ready: The process is waiting to be assigned to a processor.
 Running: Instructions are being executed.
 Waiting: The process is waiting for some event to occur.
 Terminated: The process has finished execution.
Many processes may be in ready and waiting state at the same time. But only one
process can be running on any processor at any instant.
Process scheduling:

Scheduling is a fundamental function of an OS. When a computer is multiprogrammed, it
has multiple processes competing for the CPU at the same time. If only one CPU is
available, then a choice has to be made regarding which process to execute next. This
decision-making process is known as scheduling, and the part of the OS that makes this
choice is called the scheduler. The algorithm it uses in making this choice is called the
scheduling algorithm.

Scheduling queues: As processes enter the system, they are put into a job queue.
This queue consists of all processes in the system. The processes that are residing in main
memory and are ready & waiting to execute are kept on a list called the ready queue.

 Job queue – set of all processes in the system

 Ready queue – set of all processes residing in main memory,


ready and waiting to execute

 Device queues – set of processes waiting for an I/O device


 Processes migrate among the various queues
This queue is generally stored as a linked list. A ready queue header contains pointers to
the first & final PCB in the list. The PCB includes a pointer field that points to the next
PCB in the ready queue. The lists of processes waiting for a particular I/O device are
kept on a list called device queue. Each device has its own device queue. A new process
is initially put in the ready queue. It waits in the ready queue until it is selected for
execution & is given the CPU.
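
A simplified sketch of such a ready queue as a linked list of PCBs, with head and tail pointers and enqueue/dequeue operations (the pcb fields shown are reduced to the minimum needed for the list; a real PCB holds much more, as described later in this unit):

/* ready_queue.c - a ready queue kept as a linked list of PCBs (simplified sketch) */
#include <stdio.h>
#include <stdlib.h>

struct pcb {                    /* only the fields needed for the queue itself */
    int pid;
    struct pcb *next;           /* pointer to the next PCB in the ready queue */
};

struct ready_queue {
    struct pcb *head;           /* first PCB in the list */
    struct pcb *tail;           /* final PCB in the list */
};

/* put a process at the tail of the ready queue */
void enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* take the process at the head of the ready queue (the next one to get the CPU) */
struct pcb *dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}

int main(void) {
    struct ready_queue rq = {0};
    for (int pid = 1; pid <= 3; pid++) {
        struct pcb *p = malloc(sizeof *p);
        p->pid = pid;
        enqueue(&rq, p);
    }
    for (struct pcb *p; (p = dequeue(&rq)) != NULL; free(p))
        printf("dispatching process %d\n", p->pid);
    return 0;
}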

SCHEDULERS:
A process migrates between the various scheduling queues throughout its lifetime.
The OS must select processes for scheduling from these queues in some
fashion. This selection process is carried out by the appropriate scheduler. In a batch
system, more processes are submitted than can be executed immediately. So these
processes are spooled to a mass storage device like a disk, where they are kept for later
execution.
Types of schedulers:
There are 3 types of schedulers mainly used:
1. Long term scheduler:
 The long term scheduler selects processes from the disk & loads them into
memory for execution.
 It controls the degree of multi-programming i.e. no. of processes in memory.
 It executes less frequently than other schedulers.
 If the degree of multiprogramming is stable, then the average rate of process
creation is equal to the average departure rate of processes leaving the
system. So, the long term scheduler needs to be invoked only when a
process leaves the system. Due to the longer intervals between executions, it can
afford to take more time to decide which process should be selected for
execution.
 Most processes in the system are either I/O bound or CPU bound. An I/O
bound process (e.g. an interactive 'C' program) is one that spends more of its
time doing I/O operations than it spends doing computations. A CPU bound
process is one that spends more of its time doing computations than I/O
operations (e.g. a complex sorting program). It is important that the long term
scheduler selects a good mix of I/O bound & CPU bound processes.

2. Short - term scheduler:


 The short term scheduler selects among the process that are ready to execute
& allocates the CPU to one of them.
 The primary distinction between these two schedulers is the frequency of
their execution.
 The short-term scheduler must select a new process for the CPU quite
frequently. It must execute at least once every 100 ms.
 Due to the short duration of time between executions, it must be very fast.

3. Medium - term scheduler:


Some operating systems introduce an additional intermediate level of scheduling
known as medium - term scheduler. The main idea behind this scheduler is that
sometimes it is advantageous to remove processes from memory & thus reduce
the degree of multiprogramming. At some later time, the process can be
reintroduced into memory & its execution can be continued from where it had
left off. This is called swapping. The process is swapped out & swapped in
later by the medium term scheduler. Swapping is necessary to improve the process
mix, or when a change in memory requirements overcommits the available memory,
which requires some memory to be freed up.
Dispatcher:

Dispatcher is the module that gives control of the CPU to the process
selected by the short -term scheduler. This function involves:

 Switching Context
o When CPU switches to another process, the system must
save the state of the old process and load the saved state
for the new process.
o Context-switch time is overhead; the system does no useful work while
switching.
o Time dependent on hardware support.
 Switching to user mode
 Jumping to the proper location in the user program to restart that program.

The dispatcher should be as fast as possible, given that it is invoked during


every process switch. The time it takes for the dispatcher to stop one process
and start another running is known as dispatch latency.

Process control block:


Each process is represented in the OS by a process
control block. It is also known as a task control block.
A process control block contains many pieces of information associated with a specific
process. It includes the following information.
 Process state: The state may be new, ready, running, waiting or terminated state.
 Program counter: It indicates the address of the next instruction to be
executed for this process.
 CPU registers: The registers vary in number & type, depending on the
computer architecture. They include accumulators, index registers, stack
pointers & general purpose registers, plus any condition-code information.
This state must be saved when an interrupt occurs, to allow the process to be
continued correctly afterward.
 CPU scheduling information: This information includes the process priority,
pointers to scheduling queues & any other scheduling parameters.
 Memory management information: This information may include such
information as the values of the base & limit registers, the page tables or the
segment tables, depending upon the memory system used by the operating
system.
 Accounting information: This information includes the amount of CPU
and real time used, time limits, account number, job or process numbers and
so on.
 I/O Status Information: This information includes the list of I/O devices
allocated to this process, a list of open files and so on. The PCB simply
serves as the repository for any information that may vary from process to
process.
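
A simplified C structure mirroring the PCB fields listed above (field names and sizes are illustrative; real operating systems use far more elaborate structures):

/* pcb.c - a process control block sketch with the fields described above (illustrative) */
#include <stdio.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int              pid;              /* process identifier */
    enum proc_state  state;            /* process state */
    unsigned long    program_counter;  /* address of the next instruction */
    unsigned long    registers[16];    /* saved CPU registers (count is illustrative) */
    int              priority;         /* CPU-scheduling information */
    struct pcb      *next_in_queue;    /* link used by the scheduling queues */
    unsigned long    base, limit;      /* memory-management information */
    unsigned long    cpu_time_used;    /* accounting information */
    int              open_files[16];   /* I/O status: descriptors of open files */
};

int main(void) {
    struct pcb p = { .pid = 1, .state = NEW, .priority = 3 };
    printf("PCB for process %d occupies %zu bytes\n", p.pid, sizeof p);
    return 0;
}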

Thread

 A thread, sometimes called a


lightweight process (LWP), is a basic
unit of CPU utilization; it comprises a
thread ID, a program counter, a register
set, and a stack.

 It shares with other threads belonging


to the same process its code section,
data section, and other operating-
system resources, such as open files
and signals.

 A traditional (or heavyweight) process has a single thread of control. If the process
has multiple threads of control, it can do more than one task at a time.

Single-threaded and multithreaded


Ex: A web browser might have one thread display images or text while another thread
retrieves data
from the network. A word processor may have a thread for displaying graphics, another
thread for reading keystrokes from the user, and a third thread for performing spelling and
grammar checking in the background.
In certain situations a single application may be required to perform several similar tasks. For
example, a web server accepts client requests for web pages, images, sound, and so forth. A
busy web server may have several (perhaps hundreds) of clients concurrently accessing it. If
the web server ran as a traditional single threaded process, it would be able to service only
one client at a time.
One solution is to have the server run as a single process that accepts requests. When the
server receives a request, it creates a separate process to service that request. In fact, this
process-creation method was in common use before threads became popular. Process creation
is very heavyweight. It is generally more efficient to use one process that contains multiple
threads to serve the same purpose than to create a new process for each request. This
approach multithreads the web-server process. The server creates a separate thread that
listens for client requests; when a request is made, rather than creating another process, it
creates another thread to service the request.
Threads also play a vital role in remote procedure call (RPC) systems. RPCs allow inter-
process communication by providing a communication mechanism similar to ordinary
function or procedure calls.
Typically, RPC servers are multithreaded. When a server receives a message, it services the
message using a separate thread. This allows the server to service several concurrent requests.
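
A minimal sketch, using POSIX threads (compile with -pthread), of the thread-per-request idea described above; the "requests" here are simulated rather than read from a network:

/* threaded_server.c - one thread per request, in the spirit of the web-server example
   (POSIX threads; the request handling is simulated, not a real network server) */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

/* each request is serviced by its own thread, sharing the process's address space */
void *service_request(void *arg) {
    int id = *(int *)arg;
    printf("thread servicing request %d\n", id);
    sleep(1);                               /* stand-in for the real work (I/O, parsing, ...) */
    printf("request %d done\n", id);
    return NULL;
}

int main(void) {
    pthread_t workers[3];
    int ids[3] = {1, 2, 3};

    /* the "listener" creates a new thread for every incoming request */
    for (int i = 0; i < 3; i++)
        pthread_create(&workers[i], NULL, service_request, &ids[i]);

    /* wait for the outstanding requests to finish before the process exits */
    for (int i = 0; i < 3; i++)
        pthread_join(workers[i], NULL);
    return 0;
}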

Benefits
The benefits of multithreaded programming can be broken down into four major categories:

 Responsiveness: Multithreading an interactive application may allow a program to


continue running even if part of it is blocked or is performing a lengthy operation,
thereby increasing responsiveness to the user. For instance, a multithreaded web
browser could still allow user interaction in one thread while an image is being loaded
in another thread.

 Resource sharing: By default, threads share the memory and the resources of the
process to which they belong. The benefit of code sharing is that it allows an
application to have several different threads of activity all within the same address
space.

 Economy: Allocating memory and resources for process creation is costly.


Alternatively, because threads share resources of the process to which they belong, it
is more economical to create and context switch threads. It can be difficult to gauge
empirically the difference in overhead for creating and maintaining a process rather
than a thread, but in general it is much more time consuming to create and manage
processes than threads. In Solaris 2, creating a process is about 30 times slower than is
creating a thread, and context switching is about five times slower.

 Utilization of multiprocessor architectures: The benefits of multithreading can be
greatly increased in a multiprocessor architecture, where each thread may be running
in parallel on a different processor. A single-threaded process can only run on one
CPU, no matter how many are available. Multithreading on a multi-CPU
machine increases concurrency. In a single processor architecture, the CPU generally
moves between each thread so quickly as to create an illusion of parallelism, but in
reality only one thread is running at a time.

The OS can support threads at the following two levels:

User-Level Threads
User-level threads are implemented in user-level libraries, rather than via system calls, so
thread switching does not need to call the operating system or cause an interrupt to the
kernel. In fact, the kernel knows nothing about user-level threads and manages them as if
they were single-threaded processes.
Advantages:

 User-level threads do not require modification to operating systems.

 Simple Representation: Each thread is represented simply by a PC, registers, stack


and a small control block, all stored in the user process address space.

 Simple Management: This simply means that creating a thread, switching between
threads and synchronization between threads can all be done without intervention of
the kernel.

 Fast and Efficient: Thread switching is not much more expensive than a procedure
call.
Disadvantages:

 There is a lack of coordination between threads and operating system kernel.

 User-level threads require non-blocking system calls, i.e., a multithreaded kernel.


Kernel-Level Threads
In this method, the kernel knows about and manages the threads. Instead of a thread table
in each process, the kernel has a thread table that keeps track of all threads in the system.
The operating system kernel provides system calls to create and manage threads.
Advantages:

 Because the kernel has full knowledge of all threads, the scheduler may decide to give
more time to a process having a large number of threads than to a process having a
small number of threads.

 Kernel-level threads are especially good for applications that frequently block.
Disadvantages:

 Kernel-level threads are slow and inefficient. For instance, thread operations are
hundreds of times slower than those of user-level threads.
 Since the kernel must manage and schedule threads as well as processes, it requires a
full thread control block (TCB) for each thread to maintain information about threads.
As a result there is significant overhead and increased kernel complexity.
Multithreading Models (Management of Threads):

Threads may be provided either at the user level, for user threads,
or by the kernel, for kernel threads. User threads are supported above
the kernel and are managed without kernel support, whereas kernel
threads are supported and managed directly by the operating system.
There must exist a relationship between user threads and kernel threads.
There are three common ways of establishing this relationship.

A. Many-to-One Model:

The many-to-one model maps many user-level


threads to one kernel thread. Thread
management is done by the thread library in user
space, so it is efficient; but the entire process
will block if a thread makes a blocking system
call. Also, because only one thread can access
the kernel at a time, multiple threads are unable
to run in parallel on multiprocessors.

B. One-to-One Model:

The one-to-one model maps each user


thread to a kernel thread. It provides more
concurrency than the many-to-one model
by allowing another thread to run when a
thread makes a blocking system call; it also
allows multiple threads to run in parallel
on multiprocessors. The only drawback to
this model is that creating a user thread
requires creating the corresponding kernel
thread.

C. Many-to-Many Model:

The many-to-many model multiplexes many user-level threads to a
smaller or equal number of kernel threads. The many-to-one model limits
concurrency (only one thread can access the kernel at a time), while the one-to-one
model requires a kernel thread for every user thread.

The many-to-many model suffers from
neither of these shortcomings:
Developers can create as many user
threads as necessary, and the
corresponding kernel threads can run
in parallel on a multiprocessor. Also,
when a thread performs a blocking
system call, the kernel can schedule
another thread for execution.

CPU Scheduling Concept:

The main objective of CPU Scheduling is to maximize CPU


utilization. Basically we use process scheduling to maximize CPU
utilization. Process Scheduling is done by following ways:

CPU-I/O Burst Cycle:

The success of CPU scheduling


depends on an observed property of
processes: Process execution consists
of a cycle of CPU execution and I/O
wait. Processes alternate between
these two states. Process execution
begins with a CPU burst. That is
followed by an I/O burst, which is
followed by another CPU burst, then
another I/O burst, and so on.
Eventually, the final CPU burst ends
with a system request to terminate
execution.
Scheduling Performance Criteria:

 CPU Utilization: We want to keep the CPU as busy as possible.


Conceptually, CPU utilization can range from 0 to 100 percent. In
a real system, it should range from 40 percent (for a lightly loaded
system) to 90 percent (for a heavily used system).

Processor Utilization = (Processor Busy Time / (Processor Busy Time + Processor Idle time))*100
 Throughput: the number of processes that are completed per
time unit is called throughput.

Throughput = No. of Processes Completed / Time Unit

 Turnaround Time: The amount of time to execute a particular
process is called turnaround time.

Turnaround Time = T(Process Completed) – T(Process Submitted)


 Waiting Time: the amount of time that a process spends waiting in the ready
queue.
Waiting Time = Turnaround Time – Processing Time
 Response Time: time from the submission of a request until the
first response is produced. This measure, called response time.

Response Time = T(First Response) – T(submission of request)

 Optimization Criteria:
 Max CPU utilization
 Max throughput
 Min turnaround time
 Min waiting time
 Min response time
Scheduling Algorithms:

A. First-Come, First-Served Scheduling


B. Shortest-Job-First Scheduling
C. Priority Scheduling
D. Round-Robin Scheduling
E. Multilevel Queue Scheduling
F. Multilevel Feedback Queue Scheduling

A. First-Come, First-Served Scheduling (FCFS):


With this scheme, the process that requests the CPU first is
allocated the CPU first. The implementation of the FCFS policy is
easily managed with a FIFO queue. When a process enters the ready
queue, its PCB is linked onto the tail of the queue. When the CPU is free,
it is allocated to the process at the head of the queue. The running
process is then removed from the queue.
Example:
Process p1,p2,p3,p4,p5 having arrival time of 0,2,3,5,8
microseconds and processing time 3,3,1,4,2 microseconds, Draw Gantt
Chart & Calculate Average Turn Around Time, Average Waiting Time,
CPU Utilization & Throughput using FCFS.

Processes   Arrival Time   Processing Time   T.A.T. = T(P.C.)-T(P.S.)   W.T. = TAT-T(Proc.)
P1          0              3                 3-0=3                      3-3=0
P2          2              3                 6-2=4                      4-3=1
P3          3              1                 7-3=4                      4-1=3
P4          5              4                 11-5=6                     6-4=2
P5          8              2                 13-8=5                     5-2=3
GANTT CHART:
P1 P2 P3 P4 P5
0 3 6 7 11 13
Average T.A.T. =(3+4+4+6+5)/5 = 22/5 = 4.4 Microsecond

Average W.T. = (0+1+3+2+3)/5 =9/5 = 1.8 Microsecond


CPU Utilization = (13/13)*100 = 100%

Throughput = 5/13 = 0.38 processes per microsecond
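
The same FCFS calculation can be expressed as a short program. The sketch below reproduces the worked example above (arrival times 0, 2, 3, 5, 8 and processing times 3, 3, 1, 4, 2) and prints the per-process and average turnaround and waiting times:

/* fcfs.c - first-come, first-served scheduling for the worked example above (sketch) */
#include <stdio.h>

int main(void) {
    int n = 5;
    int arrival[] = {0, 2, 3, 5, 8};        /* processes already ordered by arrival time */
    int burst[]   = {3, 3, 1, 4, 2};
    int completion[5], tat[5], wait[5];

    int clock = 0, total_tat = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        if (clock < arrival[i])             /* CPU idles until the next process arrives */
            clock = arrival[i];
        clock += burst[i];                  /* run the process to completion */
        completion[i] = clock;
        tat[i]  = completion[i] - arrival[i];     /* turnaround = completion - submission */
        wait[i] = tat[i] - burst[i];              /* waiting = turnaround - processing */
        total_tat  += tat[i];
        total_wait += wait[i];
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat[i], wait[i]);
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n",
           (double)total_tat / n, (double)total_wait / n);
    printf("Throughput  = %.2f processes per microsecond\n", (double)n / clock);
    return 0;
}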

B. Shortest-Job-First Scheduling (SJF):

 Associate with each process the length of its next CPU burst. Use
these lengths to schedule the process with the shortest time
 Two schemes:
i. Nonpreemptive – once the CPU is given to the process, it cannot
be preempted until it completes its CPU burst.
ii. Preemptive – if a new process arrives with a CPU burst
length less than the remaining time of the currently executing
process, preempt. This scheme is known as Shortest-
Remaining-Time-First (SRTF).
 SJF is optimal – gives minimum average waiting time for a given set of
processes
Example:
Process p1,p2,p3,p4 having burst time of 6,8,7,3 microseconds. Draw
Gantt Chart & Calculate Average Turn Around Time, Average Waiting
Time, CPU Utilization & Throughput using SJF.
Processes   Burst Time   T.A.T. = T(P.C.)-T(P.S.)   W.T. = TAT-T(Proc.)
P4          3            3-0=3                      3-3=0
P1          6            9-0=9                      9-6=3
P3          7            16-0=16                    16-7=9
P2          8            24-0=24                    24-8=16

GANTT CHART
P4 P1 P3 P2
0 3 9 16 24

Average T.A.T. =(3+9+16+24)/4 = 13 microsecond


Average W.T. = (0+3+9+16)/4 =28/4 = 7 microsecond

CPU Utilization = (24/24)*100 = 100%


Throughput = 4/24
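
A short sketch of non-preemptive SJF for the worked example above (all four processes are assumed to arrive at time 0, as in the table): sort the jobs by burst time, then accumulate completion times:

/* sjf.c - non-preemptive shortest-job-first for the worked example above (sketch;
   all processes are assumed to arrive at time 0) */
#include <stdio.h>
#include <stdlib.h>

struct job { int id; int burst; };

/* order jobs by burst time: the shortest next CPU burst runs first */
static int by_burst(const void *a, const void *b) {
    return ((const struct job *)a)->burst - ((const struct job *)b)->burst;
}

int main(void) {
    struct job jobs[] = { {1, 6}, {2, 8}, {3, 7}, {4, 3} };
    int n = 4;

    qsort(jobs, n, sizeof jobs[0], by_burst);

    int clock = 0, total_tat = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        int wait = clock;                   /* waiting time = time spent before starting */
        clock += jobs[i].burst;             /* turnaround = completion time (arrival is 0) */
        printf("P%d: TAT=%d WT=%d\n", jobs[i].id, clock, wait);
        total_tat  += clock;
        total_wait += wait;
    }
    printf("Average TAT = %.2f, Average WT = %.2f\n",
           (double)total_tat / n, (double)total_wait / n);
    return 0;
}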

C. Priority Scheduling:

 A priority number (integer) is associated with each process

 The CPU is allocated to the process with the highest priority


(smallest integer = highest priority)
 Problem: Starvation – low priority processes may never execute
 Solution: Aging – as time progresses, increase the priority of the process

Example: Process p1,p2,p3,p4,p5 having burst time of 10,1,2,1,5


microseconds and priorities are 3,1,4,5,2. Draw Gantt Chart &
Calculate Average Turn Around Time, Average Waiting Time, CPU
Utilization & Throughput using Priority Scheduling.

Processes   Priority   Processing Time   T.A.T. = T(P.C.)-T(P.S.)   W.T. = TAT-T(Proc.)
P2          1          1                 1-0=1                      1-1=0
P5          2          5                 6-0=6                      6-5=1
P1          3          10                16-0=16                    16-10=6
P3          4          2                 18-0=18                    18-2=16
P4          5          1                 19-0=19                    19-1=18

GANTT CHART:
P2 P5 P1 P3 P4
0 1 6 16 18 19

Average T.A.T. =(1+6+16+18+19)/5 = 12 microsecond


Average W.T. = (0+1+6+16+18)/5 =41/5 = 8.2 microsecond
CPU Utilization = (19/19)*100 = 100%
Throughput = 5/19
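
The aging idea mentioned above (raising the priority of processes that have waited a long time, so that low-priority processes are not starved) can be sketched as follows; the aging interval of 5 ticks and the priority values are arbitrary choices for illustration:

/* aging.c - a sketch of aging: the longer a process waits, the better its priority becomes
   (smaller number = higher priority; the 5-tick aging interval is an arbitrary choice) */
#include <stdio.h>

struct proc { int id; int priority; int waited; };

int main(void) {
    struct proc ready[] = { {1, 3, 0}, {2, 1, 0}, {3, 7, 0} };
    int n = 3;

    /* simulate a few scheduling ticks during which these processes keep waiting */
    for (int tick = 1; tick <= 15; tick++) {
        for (int i = 0; i < n; i++) {
            ready[i].waited++;
            if (ready[i].waited % 5 == 0 && ready[i].priority > 0)
                ready[i].priority--;        /* aging: raise priority after every 5 ticks of waiting */
        }
    }
    for (int i = 0; i < n; i++)
        printf("P%d now has priority %d\n", ready[i].id, ready[i].priority);
    return 0;
}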
D. Round-Robin Scheduling:
 Each process gets a small unit of CPU time (time quantum),
usually 10-100 milliseconds. After this time has elapsed, the
process is preempted and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum
is q, then each process gets 1/n of the CPU time in chunks of at
most q time units at once. No process waits more than (n-1)q
time units.
 Used for time sharing & multiuser O.S.

Example:
Process p1,p2,p3 having processing time of 24,3,3 milliseconds.
Draw Gantt Chart & Calculate Average Turn Around Time, Average
Waiting Time, CPU Utilization & Throughput using Round Robin with
time slice of 4milliseconds.

Processes   Processing Time   T.A.T. = T(P.C.)-T(P.S.)   W.T. = TAT-T(Proc.)
P1          24                30-0=30                    30-24=6
P2          3                 7-0=7                      7-3=4
P3          3                 10-0=10                    10-3=7

GANTT CHART
P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

Average T.A.T. = (30+7+10)/3 = 47/3 = 15.67 millisecond


Average W.T. = (6+4+7)/3 =17/3 = 5.67 millisecond
CPU Utilization = (30/30)*100 = 100%
Throughput = 3/30=0.1
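
A short sketch of round-robin scheduling for the worked example above (quantum 4 ms, all processes arriving at time 0); cycling over the remaining processes in order reproduces the Gantt chart and the averages shown:

/* rr.c - round-robin scheduling for the worked example above (sketch; quantum = 4 ms,
   all three processes arrive at time 0) */
#include <stdio.h>

int main(void) {
    int n = 3, quantum = 4;
    int burst[]     = {24, 3, 3};
    int remaining[] = {24, 3, 3};
    int completion[3] = {0};
    int clock = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {           /* cycle through the ready processes */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;                     /* run the process for one time slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {            /* finished: record its completion time */
                completion[i] = clock;
                done++;
            }
        }
    }

    double total_tat = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        int tat = completion[i];                /* arrival is 0, so TAT = completion time */
        int wt  = tat - burst[i];
        printf("P%d: TAT=%d WT=%d\n", i + 1, tat, wt);
        total_tat  += tat;
        total_wait += wt;
    }
    printf("Average TAT = %.2f ms, Average WT = %.2f ms\n", total_tat / n, total_wait / n);
    return 0;
}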
E. Multilevel Queue Scheduling
1. The ready queue is partitioned into separate queues (for example, one
for foreground/interactive processes and one for background/batch processes).
2. The processes are permanently assigned to one queue, based
on some property of the process, such as memory size, process
priority or process type.
3. Each queue has its own scheduling algorithm.
4. Scheduling must be done between the queues.
 Fixed priority scheduling; Possibility of starvation.
 Time slice (RR) Scheduling – each queue gets a certain
amount of CPU time which it can schedule amongst its

processes

F. Multilevel Feedback Queue Scheduling


1. A process can move between the various queues
2. Process using too much CPU time will be moved to a lower priority queue.
3. Process waiting too long in a lower priority queue may be
moved to a higher priority queue
4. This form of aging prevents starvation.
5. Example: in an MFQ scheduler with 3 queues, the scheduler
first executes all the processes in queue 0. Only when queue 0 is
empty will it execute processes in queue 1. Similarly, processes
in queue 2 will be executed only if queues 0 and 1 are empty.
6. Three queues:
Q0 – time quantum 8 milliseconds(RR)

Q1 – time quantum 16 milliseconds(RR)

Q2 – FCFS
7. Scheduling
1. A new job enters queue Q0. When it gains CPU, job
receives 8 milliseconds. If it does not finish in 8
milliseconds, job is moved to tail of queue Q1.
2. If queue Q0 is empty, the process at the head of Q1 is given
a quantum of 16 additional milliseconds. If it still does not
complete, it is preempted and moved to queue Q2.
Processes in queue Q2 are run on an FCFS basis only
when queues Q0 and Q1 are empty.
Multilevel-feedback-queue scheduler defined by the following parameters:
1. number of queues
2. scheduling algorithms for each queue
3. method used to determine when to upgrade a process
4. method used to determine when to demote a process
5. method used to determine which queue a process will
enter when that process needs service
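
A simplified sketch of the three-queue example above: each job gets up to 8 ms in Q0, is demoted to Q1 for up to 16 more ms if it does not finish, and completes under FCFS in Q2. For simplicity the sketch shows the demotion rule for each job in isolation; a real scheduler would interleave the queues, always serving Q0 before Q1 and Q1 before Q2. The burst lengths 5, 20 and 40 ms are arbitrary.

/* mlfq.c - a simplified sketch of the 3-queue multilevel feedback example above:
   8 ms quantum in Q0, 16 ms quantum in Q1, FCFS in Q2 (no I/O, no arrivals) */
#include <stdio.h>

int main(void) {
    int quantum[2] = {8, 16};               /* time quanta for Q0 and Q1; Q2 is FCFS */
    int burst[]    = {5, 20, 40};           /* example CPU bursts, in milliseconds */
    int n = 3;

    for (int i = 0; i < n; i++) {
        int remaining = burst[i];
        int level = 0;

        /* run in Q0, then Q1; if the quantum is exhausted, demote to the next queue */
        while (level < 2 && remaining > 0) {
            int slice = remaining < quantum[level] ? remaining : quantum[level];
            remaining -= slice;
            printf("P%d ran %2d ms in Q%d\n", i + 1, slice, level);
            if (remaining > 0)
                level++;                     /* did not finish: move to the lower-priority queue */
        }
        if (remaining > 0)                   /* still not done: finish under FCFS in Q2 */
            printf("P%d ran %2d ms in Q2 (FCFS)\n", i + 1, remaining);
    }
    return 0;
}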
Multiple-Processor Scheduling:
CPU scheduling more complex when multiple CPUs are available.

1. Homogeneous multiprocessor system : Processors are identical


in terms of functionality; any available processor can be used to run any
process in the queue. Load sharing can be done. In order to avoid
imbalance, we can let all processes go into one common queue and be
scheduled onto any available processor.
Scheduling approach in Homogeneous MP
Two scheduling approach are used.

 Self scheduling:
Each processor is self-scheduling: each processor examines the common ready queue
and selects a process to execute. A problem arises when more than one
processor accesses the same process in the ready queue. Different techniques
are used to resolve this problem.

 Master - Slave structure :


In this approach, one processor is appointed as the scheduler for the other
processors. The other processors only execute user code.
2. Heterogeneous multiprocessor system
Processors are different in these systems. Only programs compiled for a given
processor's instruction set can be run on that processor.
 Simpler than homogeneous multiprocessor system.
 No need of data sharing.
