Operating Systems: Internals and Design Principles
Seventh Edition
By William Stallings

Chapter 11
I/O Management and Disk Scheduling
An artifact can be thought of as a meeting point, an “interface” in today’s terms, between an “inner” environment, the substance and organization of the artifact itself, and an “outer” environment, the surroundings in which it operates. If the inner environment is appropriate to the outer environment, or vice versa, the artifact will serve its intended purpose.
(Herbert Simon, The Sciences of the Artificial)
Categories of I/O Devices

Human readable
● suitable for communicating with the computer user
● printers, terminals, video display, keyboard, mouse

Machine readable
● suitable for communicating with electronic equipment
● disk drives, USB keys, sensors, controllers

Communication
● suitable for communicating with remote devices
● modems, digital line drivers
Differences in I/O Devices
Devices differ in a number of areas:

Data Rate
● there may be differences of several orders of magnitude between the data transfer rates

Application
● the use to which a device is put has an influence on the software

Complexity of Control
● the effect on the operating system is filtered by the complexity of the I/O module that controls the device

Unit of Transfer
● data may be transferred as a stream of bytes or characters or in larger blocks

Data Representation
● different data encoding schemes are used by different devices

Error Conditions
● the nature of errors, the way in which they are reported, their consequences, and the available range of responses differ from one device to another
Data Rates
[Figure: typical data rates for a range of I/O devices]
Organization of the I/O Function
Three techniques for performing I/O are:

Programmed I/O
● the processor issues an I/O command on behalf of a process to an I/O module; that process then busy waits for the operation to be completed before proceeding

Interrupt-driven I/O
● the processor issues an I/O command on behalf of a process
● if non-blocking, the processor continues to execute instructions from the process that issued the I/O command
● if blocking, the next instruction the processor executes is from the OS, which will put the current process in a blocked state and schedule another process

Direct memory access (DMA)
● a DMA module controls the exchange of data between main memory and an I/O module; the processor is involved only at the start and completion of the transfer
(a minimal sketch contrasting programmed and interrupt-driven I/O follows)
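To make the contrast concrete, here is a minimal, illustrative Python sketch (not from the text): a thread inside FakeDevice stands in for the I/O module, programmed_io busy-waits on a completion flag, and interrupt_driven_io lets the "processor" keep running until a completion callback fires. All names here are invented for the example.

# Sketch only: a toy device that "completes" an I/O operation after a delay.
import threading
import time

class FakeDevice:
    def __init__(self):
        self.done = threading.Event()

    def start_io(self, on_complete=None):
        def work():
            time.sleep(0.1)          # stand-in for the device transferring data
            self.done.set()          # "status register" now shows completion
            if on_complete:
                on_complete()        # stand-in for raising an interrupt
        threading.Thread(target=work, daemon=True).start()

def programmed_io(dev):
    dev.start_io()
    while not dev.done.is_set():     # processor busy waits, doing no useful work
        pass
    print("programmed I/O: transfer complete")

def interrupt_driven_io(dev):
    dev.start_io(on_complete=lambda: print("interrupt: transfer complete"))
    print("interrupt-driven I/O: processor keeps executing other instructions")
    dev.done.wait()                  # the process blocks here until the "interrupt"

if __name__ == "__main__":
    programmed_io(FakeDevice())
    interrupt_driven_io(FakeDevice())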
Evolution of the I/O Function
1. The processor directly controls a peripheral device
2. A controller or I/O module is added
3. Same configuration as step 2, but now interrupts are employed
4. The I/O module is given direct access to memory via DMA
5. The I/O module is enhanced to become a separate processor, with a specialized instruction set
6. The I/O module has a local memory of its own and is, in fact, a computer in its own right
Direct Memory Access (DMA)
[Figure: alternative DMA configurations]
Design Objectives

Efficiency
● major effort in I/O design
● important because I/O operations often form a bottleneck
● most I/O devices are extremely slow compared with main memory and the processor
● the area that has received the most attention is disk I/O

Generality
● desirable to handle all devices in a uniform manner
● applies to the way processes view I/O devices and the way the operating system manages I/O devices and operations
● diversity of devices makes it difficult to achieve true generality
● use a hierarchical, modular approach to the design of the I/O function
Hierarchical Design
● functions of the operating system should be separated according to their complexity, their characteristic time scale, and their level of abstraction
● leads to an organization of the operating system into a series of layers
● each layer performs a related subset of the functions required of the operating system
● layers should be defined so that changes in one layer do not require changes in other layers
A Model of I/O Organization
[Figure: a model of I/O organization; a minimal code sketch of such a layering follows]
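As an illustration of the layered model, here is a minimal Python sketch. It assumes the layers for a local peripheral device are logical I/O, device I/O, and scheduling and control, as in the chapter's model; the class names and the simulated read path are inventions for the example. Each layer talks only to the layer directly below it, so a change inside one layer does not ripple into the others.

# Sketch only: three stacked layers between a user process and the "hardware".
class SchedulingAndControl:
    """Bottom layer: queues and issues operations to the (simulated) device."""
    def read_block(self, block_no):
        return f"<raw contents of block {block_no}>"

class DeviceIO:
    """Converts logical I/O requests into device commands."""
    def __init__(self, controller):
        self.controller = controller
    def read(self, block_no):
        return self.controller.read_block(block_no)

class LogicalIO:
    """Presents the device to user processes as a simple sequence of blocks."""
    def __init__(self, device_io):
        self.device_io = device_io
    def read(self, block_no):
        return self.device_io.read(block_no)

# A user process sees only the top layer.
fs = LogicalIO(DeviceIO(SchedulingAndControl()))
print(fs.read(42))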
Buffering
Perform input transfers in advance of requests being made and perform output transfers some time after the request is made

Block-oriented device
● stores information in blocks that are usually of fixed size
● transfers are made one block at a time
● possible to reference data by its block number
● disks and USB keys are examples

Stream-oriented device
● transfers data in and out as a stream of bytes
● no block structure
● terminals, printers, communications ports, and most other devices that are not secondary storage are examples
No Buffer
● without a buffer, the OS directly accesses the device when it needs to

Single Buffer
● the operating system assigns a buffer in main memory for an I/O request
Block-Oriented Single Buffer
● input transfers are made to the system buffer
● reading ahead (anticipated input) is done in the expectation that the block will eventually be needed
● when the transfer is complete, the process moves the block into user space and immediately requests another block
(a minimal sketch of this scheme follows)
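A minimal, illustrative Python sketch of block-oriented single buffering with read-ahead. FakeDisk and SingleBufferedReader are invented names, and the "move into user space" is just a variable copy; the point is only the ordering: hand over the buffered block, then immediately start fetching the next one.

# Sketch only: one system buffer plus read-ahead of the next block.
class FakeDisk:
    def read_block(self, n):
        return f"block-{n}"

class SingleBufferedReader:
    def __init__(self, disk):
        self.disk = disk
        self.buffer = None        # the single system buffer
        self.buffered_no = None
        self._fill(0)             # anticipated input: prefetch block 0

    def _fill(self, n):
        self.buffer = self.disk.read_block(n)
        self.buffered_no = n

    def read(self, n):
        if self.buffered_no != n:           # read-ahead guess was wrong
            self._fill(n)
        data = self.buffer                  # "move the block into user space"
        self._fill(n + 1)                   # immediately request the next block
        return data

reader = SingleBufferedReader(FakeDisk())
print(reader.read(0), reader.read(1), reader.read(7))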
Performance
● the performance of disk I/O depends on the operating system and on the nature of the I/O channel
SCAN
● the arm moves in one direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction; then the direction is reversed
● favors jobs whose requests are for tracks nearest to both the innermost and outermost tracks, but does not cause starvation

C-SCAN (Circular SCAN)
● restricts scanning to one direction only
● when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
● reduces the maximum delay experienced by new requests
(a sketch of both orderings follows)
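A minimal Python sketch (illustrative, not the book's implementation) of how SCAN and C-SCAN would order a queue of pending track requests given the current head position. The request list 55, 58, 39, 18, 90, 160, 150, 38, 184 with the head starting at track 100 is the kind of example used in the chapter.

# Sketch only: compute the service order for SCAN and C-SCAN.
def scan_order(requests, head, direction="up"):
    """SCAN: sweep in one direction to the last pending track, then reverse."""
    lower = sorted(t for t in requests if t < head)
    upper = sorted(t for t in requests if t >= head)
    if direction == "up":
        return upper + lower[::-1]     # sweep up, then back down
    return lower[::-1] + upper         # sweep down, then back up

def cscan_order(requests, head):
    """C-SCAN: sweep up only; after the last track, return and sweep up again."""
    lower = sorted(t for t in requests if t < head)
    upper = sorted(t for t in requests if t >= head)
    return upper + lower               # wrap around to the lowest pending track

pending = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print("SCAN  :", scan_order(pending, head=100, direction="up"))
print("C-SCAN:", cscan_order(pending, head=100))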
N-Step-SCAN
● segments the disk request queue into subqueues of length N
● subqueues are processed one at a time, using SCAN
● while a queue is being processed, new requests must be added to some other queue
● if fewer than N requests are available at the end of a scan, all of them are processed with the next scan
FSCAN
● uses two subqueues
● when a scan begins, all of the requests are in one of the queues, with the other empty
● during the scan, all new requests are put into the other queue
● service of new requests is deferred until all of the old requests have been processed
(see the sketch after this list)
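A minimal, illustrative Python sketch of FSCAN's two-queue idea: the waiting queue is frozen at the start of a sweep and serviced in SCAN order, while requests that arrive during the sweep go into the other queue and wait for the next sweep. The class and method names are assumptions made up for the example.

# Sketch only: freeze one queue per sweep; arrivals go to the other queue.
class FSCAN:
    def __init__(self):
        self.active = []      # requests being serviced in the current sweep
        self.waiting = []     # requests that arrived during the current sweep

    def add_request(self, track):
        self.waiting.append(track)

    def next_sweep(self, head):
        # Freeze the waiting queue: it becomes the active queue for this sweep.
        self.active, self.waiting = self.waiting, []
        upward = sorted(t for t in self.active if t >= head)
        downward = sorted((t for t in self.active if t < head), reverse=True)
        return upward + downward       # SCAN order over the frozen queue only

sched = FSCAN()
for t in (55, 58, 39, 18, 90):
    sched.add_request(t)
print(sched.next_sweep(head=50))   # services only the five frozen requests
sched.add_request(160)             # arrives mid-sweep, deferred to the next sweep
print(sched.next_sweep(head=90))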
Summary
● I/O architecture is the computer system's interface to the outside world
● a key aspect of I/O is the use of buffers that are controlled by I/O utilities rather than by application processes
● the use of buffers also decouples the actual I/O transfer from the address space of the application process
● two of the most widely used approaches to improving disk I/O performance are disk scheduling and the disk cache
● a disk cache is a buffer, usually kept in main memory, that functions as a cache of disk blocks between disk memory and the rest of main memory (a minimal sketch follows)
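A minimal, illustrative Python sketch of a disk cache: a fixed number of disk blocks are held in main memory and replaced in least-recently-used order (LRU is used here purely for illustration as one common replacement policy). DiskCache and FakeDisk are invented names.

# Sketch only: an LRU cache of disk blocks kept in main memory.
from collections import OrderedDict

class DiskCache:
    def __init__(self, disk, capacity=64):
        self.disk = disk                  # assumed to expose read_block(n)
        self.capacity = capacity
        self.blocks = OrderedDict()       # block number -> block contents

    def read_block(self, n):
        if n in self.blocks:              # cache hit: serve from main memory
            self.blocks.move_to_end(n)
            return self.blocks[n]
        data = self.disk.read_block(n)    # cache miss: go to the disk
        self.blocks[n] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict the least recently used block
        return data

class FakeDisk:
    def read_block(self, n):
        return f"block-{n}"

cache = DiskCache(FakeDisk(), capacity=2)
cache.read_block(1); cache.read_block(2); cache.read_block(1)
cache.read_block(3)                       # evicts block 2, the LRU block
print(list(cache.blocks))                 # -> [1, 3]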