I/O in Linux

The document discusses I/O in Linux operating systems. It covers several topics: 1) I/O devices connect to computers via ports and buses. Common buses include PCI, expansion, and SCSI. 2) Devices communicate with registers for data, status, control. Memory-mapped I/O also allows direct memory access. 3) Interrupts notify the CPU when devices need attention, handled by interrupt handlers in an interrupt table.

Uploaded by senthilkumarm50
Copyright © Attribution Non-Commercial (BY-NC)

I/O in Linux

INTRODUCTION :

I/O devices can be roughly categorized as storage, communications, user-interface, and other.

Devices communicate with the computer via signals sent over wires or through the air.

Devices connect with the computer via ports, e.g. a serial or parallel port.

A common set of wires connecting multiple devices is termed a bus.

Buses include rigid protocols for the types of messages that can be sent across the bus and
the procedures for resolving contention issues.

 The PCI bus connects high-speed high-bandwidth devices to the memory
subsystem.
 The expansion bus connects slower low-bandwidth devices, which typically transfer
data one character at a time.
 The SCSI bus connects a number of SCSI devices to a common SCSI controller.
 In a daisy-chain bus, a string of devices is connected to each other like beads
on a chain, and only one of the devices is directly connected to the host.
One way of communicating with devices is through registers associated with each port.
Registers may be one to four bytes in size, and may typically include ( a subset of ) the
following four:

 The data-in register is read by the host to get input from the device.

 The data-out register is written by the host to send output.

 The status register has bits read by the host to ascertain the status of the device,
such as idle, ready for input, busy, error, transaction complete, etc.
 The control register has bits written by the host to issue commands or to change
settings of the device such as parity checking, word length, or full- versus half-
duplex operation.
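The register model above can be sketched in C. The layout and status-bit names below are illustrative, not taken from any real device; an actual driver would use the widths and bit assignments given in the device's datasheet.

```c
#include <stdint.h>

/* Illustrative status bits for a hypothetical port. */
#define STATUS_BUSY  0x01  /* device is executing a command      */
#define STATUS_READY 0x02  /* data-in register holds fresh input */
#define STATUS_ERROR 0x04  /* last transaction failed            */

struct port_regs {
    uint8_t data_in;   /* read by the host to get input        */
    uint8_t data_out;  /* written by the host to send output   */
    uint8_t status;    /* read by the host: busy, ready, error */
    uint8_t control;   /* written by the host: commands, modes */
};

/* Poll ( busy-wait ) until the device is no longer busy,
 * then read one byte from the data-in register. */
uint8_t read_when_ready(volatile struct port_regs *p)
{
    while (p->status & STATUS_BUSY)
        ;                       /* spin: polling, no interrupts */
    return p->data_in;
}
```

On real hardware the struct would overlay a fixed I/O address rather than ordinary memory; the polling loop is the "programmed I/O" style that interrupts ( next section ) are designed to avoid.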

Another technique for communicating with devices is memory-mapped I/O.

 In this case a certain portion of the processor's address space is mapped to the
device, and communications occur by reading and writing directly to/from those
memory areas.
 Memory-mapped I/O is suitable for devices which must move large quantities of
data quickly, such as graphics cards.
 Memory-mapped I/O can be used either instead of or more often in combination
with traditional registers. For example, graphics cards still use registers for
control information such as setting the video mode.
 A potential problem exists with memory-mapped I/O: if a process is allowed to
write directly to the address space used by a memory-mapped I/O device, errant
or malicious writes can corrupt the device, so access to these regions must be
protected.

Interrupts

Interrupts allow devices to notify the CPU when they have data to transfer or when an
operation is complete, allowing the CPU to perform other duties when no I/O transfers need its
immediate attention.

The CPU has an interrupt-request line that is sensed after every instruction.

The CPU then performs a state save, and transfers control to the interrupt handler
routine at a fixed address in memory. ( The CPU catches the interrupt and dispatches the
interrupt handler. )

The interrupt handler determines the cause of the interrupt, performs the necessary
processing, performs a state restore, and executes a return-from-interrupt instruction to return
the CPU to the state it was in before the interrupt. ( The interrupt handler clears the interrupt by
servicing the device. )
The above description is adequate for simple interrupt-driven I/O, but there are three
needs in modern computing which complicate the picture:

1. The need to defer interrupt handling during critical processing,

2. The need to determine which interrupt handler to invoke, without having
to poll all devices to see which one needs attention, and

3. The need for multi-level interrupts, so the system can differentiate
between high- and low-priority interrupts for proper response.

These issues are handled in modern computer architectures with interrupt-controller hardware.
Most CPUs now have two interrupt-request lines: One that is non-maskable for critical
error conditions and one that is maskable, that the CPU can temporarily ignore during critical
processing.

The interrupt mechanism accepts an address, which is usually one of a small set of
numbers for an offset into a table called the interrupt vector. This table ( usually located in low
physical memory ) holds the addresses of routines prepared to process specific interrupts.

The number of possible interrupt handlers still exceeds the range of defined interrupt
numbers, so multiple handlers can be interrupt chained. Effectively the addresses held in the
interrupt vector are the head pointers for linked lists of interrupt handlers.
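Such chaining can be sketched as follows, with illustrative names: each vector entry heads a linked list of handlers, and dispatch walks the chain until one handler claims the interrupt.

```c
#include <stddef.h>

/* Each vector entry heads a linked list of handlers that share
 * the same interrupt number. */
struct handler {
    int (*service)(int irq);   /* returns 1 if it handled the interrupt */
    struct handler *next;
};

#define NVECTORS 256
static struct handler *vector[NVECTORS];

/* Prepend a handler to the chain for one interrupt number. */
void chain(int irq, struct handler *h)
{
    h->next = vector[irq];
    vector[irq] = h;
}

/* Walk the chain until some handler claims the interrupt. */
int dispatch(int irq)
{
    for (struct handler *h = vector[irq]; h; h = h->next)
        if (h->service(irq))
            return 1;
    return 0;   /* spurious interrupt: nobody claimed it */
}

/* Example handler, standing in for a real device's service routine. */
static int ticks;
static int timer_service(int irq) { (void)irq; ticks++; return 1; }
```

This is the same shape the kernel uses for shared IRQ lines: each registered handler inspects its own device and claims the interrupt only if that device actually raised it.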

On the Intel Pentium, interrupt vector entries 0 to 31 are non-maskable and reserved for
serious hardware and other errors. Maskable interrupts, including normal device I/O interrupts,
begin at interrupt 32.

Modern interrupt hardware also supports interrupt priority levels, allowing systems to
mask off only lower-priority interrupts while servicing a high-priority interrupt, or conversely to
allow a high-priority signal to interrupt the processing of a low-priority one.

At boot time the system determines which devices are present, and loads the appropriate
handler addresses into the interrupt table.

Direct Memory Access

For devices that transfer large quantities of data ( such as disk controllers ), it is wasteful
to tie up the CPU transferring data in and out of registers one byte at a time.

 Instead this work can be off-loaded to a special processor, known as the Direct
Memory Access, DMA, Controller.
 The host issues a command to the DMA controller, indicating the location where
the data is located, the location where the data is to be transferred to, and the
number of bytes of data to transfer. The DMA controller handles the data transfer,
and then interrupts the CPU when the transfer is complete.
 A simple DMA controller is a standard component in modern PCs, and many
bus-mastering I/O cards contain their own DMA hardware.
 Handshaking between DMA controllers and their devices is accomplished
through two wires called the DMA-request and DMA-acknowledge wires.
 While the DMA transfer is going on the CPU does not have access to the PCI bus
( including main memory ), but it does have access to its internal registers and
primary and secondary caches.
 DMA can be done in terms of either physical addresses or virtual addresses that
are mapped to physical addresses. The latter approach is known as Direct Virtual
Memory Access, DVMA, and allows direct data transfer from one memory-
mapped device to another without using the main memory chips.
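The command-block idea above can be sketched in C, with the completion interrupt modelled as a callback; all names are illustrative and no real DMA hardware is involved.

```c
#include <stddef.h>
#include <string.h>

/* A DMA command names a source, a destination, and a byte count;
 * the controller moves the data and then "interrupts" the CPU,
 * modelled here as a completion callback. */
struct dma_cmd {
    const void *src;
    void       *dst;
    size_t      count;
    void      (*on_complete)(struct dma_cmd *);  /* stands in for the IRQ */
};

static int completions;   /* how many transfers have finished */

static void note_done(struct dma_cmd *c) { (void)c; completions++; }

void dma_start(struct dma_cmd *cmd)
{
    /* The controller moves the whole block without the CPU copying
     * it byte by byte, then raises the completion interrupt. */
    memcpy(cmd->dst, cmd->src, cmd->count);
    if (cmd->on_complete)
        cmd->on_complete(cmd);
}
```

The key point the sketch preserves: the host sets up the command once, goes off to do other work, and only hears from the controller again at completion time.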

Application I/O Interface :

User application access to a wide variety of different devices is accomplished through
layering, and through encapsulating all of the device-specific code into device drivers, while
application layers are presented with a common interface for all ( or at least large general
categories of ) devices.
Most devices can be characterized as either block I/O, character I/O, memory mapped file
access, or network sockets. A few devices are special, such as time-of-day clock and the system
timer.

Most OSes also have an escape, or back door, which allows applications to send commands
directly to device drivers if needed. In UNIX this is the ioctl( ) system call ( I/O Control ).
ioctl( ) takes three arguments: the file descriptor for the device driver being accessed, an integer
indicating the desired function to be performed, and an address used for communicating or
transferring additional information.
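As an illustration of those three arguments, the standard FIONREAD request reports how many bytes are waiting to be read; here it is applied to a pipe standing in for a device, and the helper name is our own.

```c
#include <sys/ioctl.h>
#include <unistd.h>

/* Queue a few bytes on a pipe, then ask via ioctl( ) how many
 * bytes are waiting.  The pipe stands in for a device file. */
int bytes_waiting(void)
{
    int fds[2];
    int n = -1;

    if (pipe(fds) != 0)
        return -1;
    if (write(fds[1], "hello", 5) != 5)   /* queue five bytes */
        return -1;
    /* fd, request code, address for the result: ioctl's three arguments */
    ioctl(fds[0], FIONREAD, &n);
    close(fds[0]);
    close(fds[1]);
    return n;
}
```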
Network Devices

Because network access is inherently different from local disk access, most systems
provide a separate interface for network devices.

One common and popular interface is the socket interface, which acts like a cable or
pipeline connecting two networked entities. Data can be put into the socket at one end, and read
out sequentially at the other end. Sockets are normally full-duplex, allowing for bi-directional
data transfer.

The select( ) system call allows servers ( or other applications ) to identify sockets which
have data waiting, without having to poll all available sockets.
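A minimal sketch of that pattern using a single descriptor ( a real server would pass its whole set of sockets ); the helper name is illustrative.

```c
#include <sys/select.h>
#include <unistd.h>

/* Return 1 if fd has data ready within `secs` seconds, 0 if not. */
int ready_to_read(int fd, int secs)
{
    fd_set readfds;
    struct timeval tv = { .tv_sec = secs, .tv_usec = 0 };

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    /* select( ) blocks until a descriptor is ready or the timeout expires */
    if (select(fd + 1, &readfds, NULL, NULL, &tv) <= 0)
        return 0;
    return FD_ISSET(fd, &readfds) ? 1 : 0;
}
```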

Blocking and Non-blocking I/O :

With blocking I/O a process is moved to the wait queue when an I/O request is made, and
moved back to the ready queue when the request completes, allowing other processes to run in
the meantime.

With non-blocking I/O the I/O request returns immediately, whether the requested I/O
operation has ( completely ) occurred or not. This allows the process to check for available data
without getting hung completely if it is not there.

One approach for programmers to implement non-blocking I/O is to have a multi-threaded
application, in which one thread makes blocking I/O calls ( say to read a keyboard or mouse ),
while other threads continue to update the screen or perform other tasks.

A subtle variation of the non-blocking I/O is the asynchronous I/O, in which the I/O request
returns immediately allowing the process to continue on with other tasks, and then the process is
notified ( via changing a process variable, or a software interrupt, or a callback function ) when
the I/O operation has completed and the data is available for use. ( The regular non-blocking I/O
returns immediately with whatever results are available, but does not complete the operation and
notify the process later. )
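Non-blocking reads can be sketched with the standard O_NONBLOCK flag; the helper name and the use of a pipe in place of a device are illustrative.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Try to read one byte without blocking.  Returns the byte (0..255),
 * or -1 if no data was available right now. */
int try_read_byte(int fd)
{
    unsigned char c;
    int flags = fcntl(fd, F_GETFL, 0);
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* switch fd to non-blocking */

    ssize_t n = read(fd, &c, 1);
    if (n == 1)
        return c;
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return -1;              /* would have blocked: no data yet */
    return -1;                  /* EOF or real error */
}
```

With blocking I/O the same read( ) would park the process on the wait queue until a byte arrived; here it returns immediately either way, and the caller decides what to do with "no data yet".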
Kernel I/O Subsystem :

I/O Scheduling :

 Scheduling I/O requests can greatly improve overall efficiency. Priorities can also play a
part in request scheduling.

 The classic example is the scheduling of disk accesses, as discussed in detail in chapter
12.

 Buffering and caching can also help, and can allow for more flexible scheduling options.

 On systems with many devices, separate request queues are often kept for each device.

Buffering :

Buffering of I/O is performed for ( at least ) 3 major reasons:

Speed differences between two devices. ( See Figure 13.10 below. ) A slow device may
write data into a buffer, and when the buffer is full, the entire buffer is sent to the fast device all
at once. So that the slow device still has somewhere to write while this is going on, a second
buffer is used, and the two buffers alternate as each becomes full. This is known as double
buffering. ( Double buffering is often used in ( animated ) graphics, so that one screen image can
be generated in a buffer while the other ( completed ) buffer is displayed on the screen. This
prevents the user from ever seeing any half-finished screen images. )
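The alternating-buffer scheme can be sketched as follows; the buffer size and names are illustrative.

```c
#include <string.h>

/* Two fixed-size buffers alternate: the slow producer fills one while
 * the other is handed whole to the fast consumer, then the roles swap. */
#define BUFSZ 4

struct dbuf {
    char buf[2][BUFSZ];
    int  fill;      /* index of the buffer being filled */
    int  used;      /* bytes in the fill buffer so far  */
};

/* Add one byte; returns a pointer to a full buffer when one is ready
 * to be handed to the fast device, or NULL otherwise. */
const char *dbuf_put(struct dbuf *d, char c)
{
    d->buf[d->fill][d->used++] = c;
    if (d->used < BUFSZ)
        return NULL;
    const char *full = d->buf[d->fill];
    d->fill ^= 1;               /* swap: start filling the other buffer */
    d->used = 0;
    return full;
}
```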

Data transfer size differences. Buffers are used in particular in networking systems to
break messages up into smaller packets for transfer, and then for re-assembly at the receiving
side.

To support copy semantics. For example, when an application makes a request for a disk
write, the data is copied from the user's memory area into a kernel buffer. The application
can then change its copy of the data, but the data which eventually gets written out to disk is the
version of the data at the time the write request was made.
Caching :

Caching involves keeping a copy of data in a faster-access location than where the data is
normally stored.

Buffering and caching are very similar, except that a buffer may hold the only copy of a
given data item, whereas a cache is just a duplicate copy of some other data stored elsewhere.

Buffering and caching go hand-in-hand, and often the same storage space may be used
for both purposes. For example, after a buffer is written to disk, then the copy in memory can be
used as a cached copy ( until that buffer is needed for other purposes ).

Spooling and Device Reservation :

A spool ( Simultaneous Peripheral Operations On-Line ) buffers data for ( peripheral )
devices such as printers that cannot support interleaved data streams.

If multiple processes want to print at the same time, they each send their print data to files
stored in the spool directory. When each file is closed, then the application sees that print job as
complete, and the print scheduler sends each file to the appropriate printer one at a time.

Support is provided for viewing the spool queues, removing jobs from the queues,
moving jobs from one queue to another queue, and in some cases changing the priorities of jobs
in the queues.

Spool queues can be general ( any laser printer ) or specific ( printer number 42. )

OSes can also provide support for processes to request / get exclusive access to a
particular device, and/or to wait until a device becomes available.

Error Handling :
 I/O requests can fail for many reasons, either transient ( e.g. a buffer overflow ) or
permanent ( e.g. a disk crash ).

 I/O requests usually return an error bit ( or more ) indicating the problem. UNIX systems
also set the global variable errno to one of a hundred or so well-defined values to indicate
the specific error that has occurred. ( See errno.h for a complete listing, or man errno. )

 Some devices, such as SCSI devices, are capable of providing much more detailed
information about errors, and even keep an on-board error log that can be requested by
the host.
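For example, opening a nonexistent file fails and sets errno to ENOENT; strerror( ) turns such codes into readable text. The path and helper name below are illustrative.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Try to open a file read-only; on failure return the errno value
 * the kernel set ( e.g. ENOENT: "No such file or directory" ). */
int open_or_errno(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return errno;
    close(fd);
    return 0;
}
```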

I/O Protection :

The I/O system must protect against either accidental or deliberate erroneous I/O.

User applications are not allowed to perform I/O in user mode; all I/O requests are handled
through system calls that must be performed in kernel mode.

Memory mapped areas and I/O ports must be protected by the memory management system,
but access to these areas cannot be totally denied to user programs. ( Video games and some
other applications need to be able to write directly to video memory for optimal performance for
example. ) Instead the memory protection system restricts access so that only one process at a
time can access particular parts of memory, such as the portion of the screen memory
corresponding to a particular window.

Kernel Data Structures

 The kernel maintains a number of important data structures pertaining to the I/O system,
such as the open file table.

 These structures are object-oriented, and flexible to allow access to a wide variety of I/O
devices through a common interface.
 Windows NT carries the object-orientation one step further, implementing I/O as a
message-passing system from the source through various intermediaries to the device.

Transforming I/O Requests to Hardware Operations

Users request data using file names, which must ultimately be mapped to specific blocks of
data from a specific device managed by a specific device driver.

DOS uses the colon separator to specify a particular device ( e.g. C:, LPT:, etc. )

UNIX uses a mount table to map filename prefixes ( e.g. /usr ) to specific mounted devices.
Where multiple entries in the mount table match different prefixes of the filename the one that
matches the longest prefix is chosen. ( e.g. /usr/home instead of /usr where both exist in the
mount table and both match the desired file. )
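The longest-prefix rule can be sketched as follows ( illustrative names; a real kernel matches whole path components rather than raw character prefixes ):

```c
#include <string.h>

/* Pick the mount-table entry whose prefix matches the longest leading
 * part of the path.  Note: real implementations compare whole path
 * components, so "/usrfoo" would not match the "/usr" entry. */
const char *best_mount(const char *path, const char *prefixes[], int n)
{
    const char *best = NULL;
    size_t best_len = 0;

    for (int i = 0; i < n; i++) {
        size_t len = strlen(prefixes[i]);
        if (len > best_len && strncmp(path, prefixes[i], len) == 0) {
            best = prefixes[i];
            best_len = len;
        }
    }
    return best;
}
```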

UNIX uses special device files, usually located in /dev, to represent and access physical
devices directly.

Each device file has a major and minor number associated with it, stored and displayed
where the file size would normally go.

The major number is an index into a table of device drivers, and indicates which device
driver handles this device. ( E.g. the disk drive handler. )

The minor number is a parameter passed to the device driver, and indicates which specific
device is to be accessed, out of the many which may be handled by a particular device driver.
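On glibc systems the major( ) and minor( ) macros from <sys/sysmacros.h> decode a device number into exactly these two parts; the major/minor values below are illustrative.

```c
#include <sys/sysmacros.h>   /* major( ), minor( ), makedev( ) on glibc */
#include <sys/types.h>

/* Decode the driver index ( major ) and the per-device parameter
 * ( minor ) from a device number, as `ls -l /dev` displays them. */
unsigned dev_major(dev_t d) { return major(d); }
unsigned dev_minor(dev_t d) { return minor(d); }
```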

A series of lookup tables and mappings makes the access of different devices flexible, and
somewhat transparent to users.
References:

1. Abraham Silberschatz, Peter Baer Galvin, and Greg Gagne, "Operating System Concepts", Eighth Edition.
