COS - Week 12

Chapter 12:

Secondary-Storage Structure

Operating System Concepts – 8th Edition, Silberschatz, Galvin and Gagne ©2009
Chapter 12: Secondary-Storage Structure

 Overview of Mass Storage Structure


 Disk Structure
 Disk Attachment
 Disk Scheduling
 Disk Management
 Swap-Space Management
 RAID Structure
 Stable-Storage Implementation
 Tertiary Storage Devices
 Operating System Issues
 Performance Issues

Operating System Concepts – 8th Edition 12.2 Silberschatz, Galvin and Gagne ©2009
Overview of Mass Storage Structure
 Magnetic disks provide bulk of secondary storage of modern computers
 Drives rotate at 60 to 200 times per second
 Transfer rate is rate at which data flow between drive and computer
 Positioning time (random-access time) is time to move disk arm to desired
cylinder (seek time) and time for desired sector to rotate under the disk head
(rotational latency)
 Head crash results from disk head making contact with the disk surface
 That’s bad
 Disks can be removable
 Drive attached to computer via I/O bus
 Busses vary, including EIDE, ATA, SATA, USB, Fibre Channel, SCSI
 Host controller in computer uses bus to talk to disk controller built into drive or
storage array

Operating System Concepts – 8th Edition 12.4 Silberschatz, Galvin and Gagne ©2009
Overview of Mass Storage Structure (Cont.)

 Magnetic tape
 Was early secondary-storage medium
 Relatively permanent and holds large quantities of data
 Access time slow
 Random access ~1000 times slower than disk
 Mainly used for backup, storage of infrequently-used data, transfer
medium between systems
 Kept in spool and wound or rewound past read-write head
 Once data under head, transfer rates comparable to disk
 20-200GB typical storage
 Common technologies are 4mm, 8mm, 19mm, LTO-2 and SDLT

Operating System Concepts – 8th Edition 12.5 Silberschatz, Galvin and Gagne ©2009
Moving-head Disk Mechanism

Operating System Concepts – 8th Edition 12.6 Silberschatz, Galvin and Gagne ©2009
Disk Structure
 Disk drives are addressed as large 1-dimensional arrays of logical blocks,
where the logical block is the smallest unit of transfer.

 The 1-dimensional array of logical blocks is mapped into the sectors of the
disk sequentially.
 Sector 0 is the first sector of the first track on the outermost cylinder.
 Mapping proceeds in order through that track, then the rest of the tracks
in that cylinder, and then through the rest of the cylinders from
outermost to innermost.

Operating System Concepts – 8th Edition 12.7 Silberschatz, Galvin and Gagne ©2009
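As an illustration of the sequential mapping just described, here is a tiny sketch that converts a logical block number into a (cylinder, track, sector) triple. The geometry constants are made up for illustration; real drives hide their true geometry behind the logical-block interface.

```python
# Hypothetical geometry; numbers chosen only for the example.
SECTORS_PER_TRACK = 63
TRACKS_PER_CYLINDER = 16   # number of surfaces/heads

def block_to_chs(block):
    """Map a logical block number to (cylinder, track, sector),
    with cylinder 0 as the outermost, per the sequential mapping."""
    cylinder, rest = divmod(block, TRACKS_PER_CYLINDER * SECTORS_PER_TRACK)
    track, sector = divmod(rest, SECTORS_PER_TRACK)
    return cylinder, track, sector

# Example: the first block of the second cylinder
print(block_to_chs(16 * 63))   # -> (1, 0, 0)
```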
Disk Attachment
 Host-attached storage accessed through I/O ports talking to I/O busses
 SCSI itself is a bus, up to 16 devices on one cable, SCSI initiator requests
operation and SCSI targets perform tasks
 Each target can have up to 8 logical units (disks attached to the device
controller)
 FC is high-speed serial architecture
 Can be switched fabric with 24-bit address space – the basis of storage
area networks (SANs) in which many hosts attach to many storage
units
 Can be arbitrated loop (FC-AL) of 126 devices

Operating System Concepts – 8th Edition 12.8 Silberschatz, Galvin and Gagne ©2009
Network-Attached Storage
 Network-attached storage (NAS) is storage made available over a network
rather than over a local connection (such as a bus)
 NFS and CIFS are common protocols
 Implemented via remote procedure calls (RPCs) between host and storage
 New iSCSI protocol uses IP network to carry the SCSI protocol

Operating System Concepts – 8th Edition 12.9 Silberschatz, Galvin and Gagne ©2009
Storage Area Network
 Common in large storage environments (and becoming more common)
 Multiple hosts attached to multiple storage arrays - flexible

Operating System Concepts – 8th Edition 12.10 Silberschatz, Galvin and Gagne ©2009
Disk Scheduling
 The operating system is responsible for using hardware efficiently — for the
disk drives, this means having a fast access time and disk bandwidth.
 Access time has two major components
 Seek time is the time for the disk arm to move the heads to the cylinder
containing the desired sector.
 Rotational latency is the additional time waiting for the disk to rotate the
desired sector to the disk head.
 Minimize seek time
 Seek time ≈ seek distance
 Disk bandwidth is the total number of bytes transferred, divided by the total
time between the first request for service and the completion of the last
transfer.

Operating System Concepts – 8th Edition 12.11 Silberschatz, Galvin and Gagne ©2009
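For concreteness, a back-of-the-envelope calculation of the two access-time components; the drive parameters (7200 RPM, 4 ms average seek, 100 MB/s transfer rate) are made up for illustration.

```python
# Hypothetical drive parameters.
rpm = 7200
avg_seek_ms = 4.0
transfer_mb_per_s = 100.0

rotation_ms = 60_000 / rpm              # one full rotation: ~8.33 ms
avg_rot_latency_ms = rotation_ms / 2    # on average, half a rotation: ~4.17 ms
transfer_4kb_ms = 4 / (transfer_mb_per_s * 1024) * 1000   # ~0.04 ms for a 4 KB block

access_ms = avg_seek_ms + avg_rot_latency_ms + transfer_4kb_ms
print(f"average access time ~ {access_ms:.2f} ms")   # ~ 8.21 ms
```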
Disk Scheduling (Cont.)
 Several algorithms exist to schedule the servicing of disk I/O requests.
 We illustrate them with a request queue (0-199).

98, 183, 37, 122, 14, 124, 65, 67

Head pointer 53

Operating System Concepts – 8th Edition 12.12 Silberschatz, Galvin and Gagne ©2009
FCFS

Illustration shows total head movement of 640 cylinders.

Operating System Concepts – 8th Edition 12.13 Silberschatz, Galvin and Gagne ©2009
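A minimal FCFS simulation of the example queue (head at cylinder 53; requests 98, 183, 37, 122, 14, 124, 65, 67), reproducing the 640-cylinder total quoted above:

```python
def fcfs(head, requests):
    """Total head movement when requests are serviced in arrival order."""
    total = 0
    for r in requests:
        total += abs(r - head)
        head = r
    return total

print(fcfs(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # -> 640
```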
SSTF
 Selects the request with the minimum seek time from the current head
position.
 SSTF scheduling is a form of SJF scheduling; may cause starvation of
some requests.
 Illustration shows total head movement of 236 cylinders.

Operating System Concepts – 8th Edition 12.14 Silberschatz, Galvin and Gagne ©2009
SSTF (Cont.)

Operating System Concepts – 8th Edition 12.15 Silberschatz, Galvin and Gagne ©2009
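The same example under SSTF, repeatedly picking the pending request nearest the current head position; this reproduces the 236-cylinder total quoted on the previous slide:

```python
def sstf(head, requests):
    """Total head movement when the closest pending request is served next."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf(53, [98, 183, 37, 122, 14, 124, 65, 67]))   # -> 236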
SCAN
 The disk arm starts at one end of the disk, and moves toward the other end,
servicing requests until it gets to the other end of the disk, where the head
movement is reversed and servicing continues.
 Sometimes called the elevator algorithm.
 Illustration shows total head movement of 208 cylinders.

Operating System Concepts – 8th Edition 12.16 Silberschatz, Galvin and Gagne ©2009
SCAN (Cont.)

Operating System Concepts – 8th Edition 12.17 Silberschatz, Galvin and Gagne ©2009
C-SCAN
 Provides a more uniform wait time than SCAN.
 The head moves from one end of the disk to the other, servicing requests
as it goes. When it reaches the other end, however, it immediately returns
to the beginning of the disk, without servicing any requests on the return
trip.
 Treats the cylinders as a circular list that wraps around from the last cylinder
to the first one.

Operating System Concepts – 8th Edition 12.18 Silberschatz, Galvin and Gagne ©2009
C-SCAN (Cont.)

Operating System Concepts – 8th Edition 12.19 Silberschatz, Galvin and Gagne ©2009
LOOK
A version of SCAN.
The arm only goes as far as the last request in each direction, then
immediately reverses. It does not have to go all the way to the end of
the disk first.

Operating System Concepts – 8th Edition 12.20 Silberschatz, Galvin and Gagne ©2009
C-LOOK
 Version of C-SCAN
 Arm only goes as far as the last request in each direction, then reverses
direction immediately, without first going all the way to the end of the disk.

Operating System Concepts – 8th Edition 12.21 Silberschatz, Galvin and Gagne ©2009
C-LOOK (Cont.)

Operating System Concepts – 8th Edition 12.22 Silberschatz, Galvin and Gagne ©2009
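The four sweep-based algorithms above differ only in where the arm turns around (disk edge vs. last pending request) and whether it sweeps back or jumps to the far end. Below is a simulation sketch, assuming the head starts moving toward higher cylinder numbers and counting the wrap seek as head movement; both conventions vary, which is why quoted totals for these algorithms can differ.

```python
def service_order(head, requests, max_cyl=199, circular=False, look=False):
    """Cylinders visited in order, assuming the head moves toward higher
    numbers first.  look=True turns around (or wraps) at the last pending
    request instead of the disk edge; circular=True wraps to the low end
    instead of sweeping back."""
    up = sorted(r for r in requests if r >= head)
    down = sorted(r for r in requests if r < head)
    order = list(up)
    if not look:
        order.append(max_cyl)              # SCAN / C-SCAN travel to the edge
    if circular:
        if not look:
            order.append(0)                # C-SCAN returns to cylinder 0
        order += down                      # then services the low requests going up
    else:
        order += list(reversed(down))      # SCAN / LOOK sweep back downward
    return order

def total_movement(head, order):
    return sum(abs(b - a) for a, b in zip([head] + order, order))

queue = [98, 183, 37, 122, 14, 124, 65, 67]
for name, kwargs in [("SCAN", {}), ("C-SCAN", dict(circular=True)),
                     ("LOOK", dict(look=True)), ("C-LOOK", dict(circular=True, look=True))]:
    order = service_order(53, queue, **kwargs)
    print(f"{name:6s} {total_movement(53, order):3d}  {order}")
```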
Selecting a Disk-Scheduling Algorithm
 SSTF is common and has a natural appeal
 SCAN and C-SCAN perform better for systems that place a heavy load on
the disk.
 Performance depends on the number and types of requests.
 Requests for disk service can be influenced by the file-allocation method.
 The disk-scheduling algorithm should be written as a separate module of
the operating system, allowing it to be replaced with a different algorithm if
necessary.
 Either SSTF or LOOK is a reasonable choice for the default algorithm.

Operating System Concepts – 8th Edition 12.23 Silberschatz, Galvin and Gagne ©2009
Disk Management
 Low-level formatting, or physical formatting — Dividing a disk into sectors
that the disk controller can read and write.
 To use a disk to hold files, the operating system still needs to record its own
data structures on the disk.
 Partition the disk into one or more groups of cylinders.
 Logical formatting or “making a file system”.
 Boot block initializes system.
 The bootstrap is stored in ROM.
 Bootstrap loader program.
 Methods such as sector sparing used to handle bad blocks.

Operating System Concepts – 8th Edition 12.24 Silberschatz, Galvin and Gagne ©2009
Booting from a Disk in Windows 2000

Operating System Concepts – 8th Edition 12.25 Silberschatz, Galvin and Gagne ©2009
Swap-Space Management
 Swap-space — Virtual memory uses disk space as an extension of main
memory.
 Swap-space can be carved out of the normal file system, or, more
commonly, it can be in a separate disk partition.
 Swap-space management
 4.3BSD allocates swap space when process starts; holds text segment
(the program) and data segment.
 Kernel uses swap maps to track swap-space use.
 Solaris 2 allocates swap space only when a page is forced out of
physical memory, not when the virtual memory page is first created.

Operating System Concepts – 8th Edition 12.26 Silberschatz, Galvin and Gagne ©2009
Data Structures for Swapping on Linux
Systems

Operating System Concepts – 8th Edition 12.27 Silberschatz, Galvin and Gagne ©2009
RAID Structure
 RAID – multiple disk drives provide reliability via redundancy.

 RAID is arranged into six different levels.

Operating System Concepts – 8th Edition 12.28 Silberschatz, Galvin and Gagne ©2009
RAID (cont)
 Several improvements in disk-use techniques involve the use of multiple
disks working cooperatively.

 Disk striping uses a group of disks as one storage unit.

 RAID schemes improve performance and improve the reliability of the
storage system by storing redundant data.
 Mirroring or shadowing keeps duplicate of each disk.
 Block interleaved parity uses much less redundancy.

Operating System Concepts – 8th Edition 12.29 Silberschatz, Galvin and Gagne ©2009
RAID Levels

Operating System Concepts – 8th Edition 12.30 Silberschatz, Galvin and Gagne ©2009
RAID
RAID Level 0: The virtual disk is divided into strips of k sectors each. Sectors
0 to k-1 are located on strip 0, sectors k to 2k-1 on strip 1, and so on (see the
sketch below).
 The strips are distributed round-robin across the disks in the array.
 Data is written to and read from different disks at the same time, so read
and write throughput is high.
 If any one of the disks fails, data is lost.
Operating System Concepts – 8th Edition 12.31 Silberschatz, Galvin and Gagne ©2009
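A sketch of the RAID 0 address translation described above; the strip size k and the number of disks are arbitrary example values.

```python
K = 4          # sectors per strip (example value)
NUM_DISKS = 3  # disks in the array (example value)

def raid0_locate(virtual_sector):
    """Map a virtual sector to (disk, physical sector on that disk)."""
    strip, offset = divmod(virtual_sector, K)
    disk = strip % NUM_DISKS              # strips dealt round-robin
    strip_on_disk = strip // NUM_DISKS
    return disk, strip_on_disk * K + offset

print(raid0_locate(0))    # -> (0, 0)  first strip on disk 0
print(raid0_locate(5))    # -> (1, 1)  second strip, disk 1
print(raid0_locate(13))   # -> (0, 5)  fourth strip wraps back to disk 0
```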
RAID
RAID Level 1: Mirrored disks.
 Each disk has a mirror disk that holds an exact copy of its data.
 Every write goes to both disks, so write performance is lower; read
performance is good.
 When one disk fails, the other disk is used immediately.
 To repair, the failed disk is replaced with a new one and the contents of the
healthy disk are copied onto it exactly.

Operating System Concepts – 8th Edition 12.32 Silberschatz, Galvin and Gagne ©2009
RAID (0 + 1) and (1 + 0)

Operating System Concepts – 8th Edition 12.33 Silberschatz, Galvin and Gagne ©2009
RAID
RAID 2, which is rarely used in practice, stripes data at the bit (rather than
block) level, and uses a Hamming code for error correction. The disks are
synchronized by the controller to spin at the same angular orientation (they
reach index at the same time), so it generally cannot service multiple
requests simultaneously

Note: Hamming codes are a family of linear error-correcting codes.

Operating System Concepts – 8th Edition 12.34 Silberschatz, Galvin and Gagne ©2009
RAID
RAID 3, which is rarely used in practice, consists of byte-level striping with a
dedicated parity disk. One of the characteristics of RAID 3 is that it generally
cannot service multiple requests simultaneously, which happens because any
single block of data will, by definition, be spread across all members of the set and
will reside in the same physical location on each disk. Therefore, any I/O operation
requires activity on every disk and usually requires synchronized spindles.
This makes it suitable for applications that demand the highest transfer rates in
long sequential reads and writes, for example uncompressed video editing.
Applications that make small reads and writes from random disk locations will get
the worst performance out of this level.

Operating System Concepts – 8th Edition 12.35 Silberschatz, Galvin and Gagne ©2009
RAID
RAID 4 consists of block-level striping with a dedicated parity disk. As a result of
its layout, RAID 4 provides good performance of random reads, while the
performance of random writes is low due to the need to write all parity data to a
single disk, unless the filesystem is RAID-4-aware and compensates for that.
An advantage of RAID 4 is that it can be quickly extended online, without parity
recomputation, as long as the newly added disks are completely filled with 0-
bytes.
In diagram 1, a read request for block A1 would be serviced by disk 0. A
simultaneous read request for block B1 would have to wait, but a read request for
B2 could be serviced concurrently by disk 1.

Operating System Concepts – 8th Edition 12.36 Silberschatz, Galvin and Gagne ©2009
RAID
RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4, parity
information is distributed among the drives. It requires that all drives but one be present to
operate. Upon failure of a single drive, subsequent reads can be calculated from the
distributed parity such that no data is lost. RAID 5 requires at least three disks.
There are many layouts of data and parity in a RAID 5 disk drive array depending upon the
sequence of writing across the disks, that is:
1. the sequence of data blocks written, left to right or right to left on the disk array, of
disks 0 to N.
2. the location of the parity block at the beginning or end of the stripe.
3. the location of the first block of a stripe with respect to parity of the previous stripe.
In comparison to RAID 4, RAID 5's distributed parity evens
out the stress of a dedicated parity disk among all RAID
members. Additionally, write performance is increased since
all RAID members participate in the serving of write
requests. Although it will not be as efficient as a striping
(RAID 0) setup, because parity must still be written, this is
no longer a bottleneck.
Since parity calculation is performed on the full stripe, small
changes to the array experience write amplification: in the
worst case when a single, logical sector is to be written, the
original sector and the according parity sector need to be
read, the original data is removed from the parity, the new
data calculated into the parity and both the new data sector
and the new parity sector are written.

Operating System Concepts – 8th Edition 12.37 Silberschatz, Galvin and Gagne ©2009
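The parity arithmetic behind the full-stripe write, the small-write read-modify-write, and rebuild after a disk failure is plain XOR. A toy sketch with byte-string "strips" (real arrays operate on whole blocks):

```python
from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"        # toy data strips of one stripe
parity = xor_blocks(d0, d1, d2)               # full-stripe parity

# Small write: update d1 only (read-modify-write on data + parity).
new_d1 = b"bbbb"
new_parity = xor_blocks(parity, d1, new_d1)   # old parity ^ old data ^ new data
assert new_parity == xor_blocks(d0, new_d1, d2)

# Rebuild after losing d0: XOR the surviving strips with the parity.
rebuilt_d0 = xor_blocks(new_parity, new_d1, d2)
assert rebuilt_d0 == d0
```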
RAID
RAID 6 extends RAID 5 by adding another parity block; thus, it uses block-level
striping with two parity blocks distributed across all member disks.
As in RAID 5, there are many layouts of RAID 6 disk arrays depending upon the
direction the data blocks are written, the location of the parity blocks with respect
to the data blocks and whether or not the first data block of a subsequent stripe is
written to the same drive as the last parity block of the prior stripe. The figure to
the right is just one of many such layouts.
According to the Storage Networking Industry Association (SNIA), the definition of
RAID 6 is: "Any form of RAID that can continue to execute read and write requests
to all of a RAID array's virtual disks in the presence of any two concurrent disk
failures. Several methods, including dual check data computations (parity
and Reed–Solomon), orthogonal dual parity check data and diagonal parity, have
been used to implement RAID Level 6."

Operating System Concepts – 8th Edition 12.38 Silberschatz, Galvin and Gagne ©2009
Stable-Storage Implementation
 Write-ahead log scheme requires stable storage.

 To implement stable storage:
 Replicate information on more than one nonvolatile storage medium with
independent failure modes.
 Update information in a controlled manner to ensure that we can
recover the stable data after any failure during data transfer or recovery
(see the sketch below).

Operating System Concepts – 8th Edition 12.39 Silberschatz, Galvin and Gagne ©2009
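A toy sketch of the two rules above, assuming two replica files and a simple "write the first copy, force it to disk, then write the second" protocol. The file names are illustrative, and a real implementation also uses error-detecting codes to spot a torn write within a single copy.

```python
import os

COPIES = ["stable.0", "stable.1"]   # two replicas with independent failure modes (illustrative)

def stable_write(data: bytes) -> None:
    # Update the copies strictly one after the other, forcing each to disk
    # before starting the next, so a crash can interrupt at most one copy.
    for path in COPIES:
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())

def recover() -> bytes:
    a = open(COPIES[0], "rb").read()
    b = open(COPIES[1], "rb").read()
    if a == b:
        return a
    # The copies disagree, so the crash interrupted an update that was never
    # acknowledged; propagating either value restores a consistent state.
    return b
```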
Tertiary Storage Devices
 Low cost is the defining characteristic of tertiary storage.

 Generally, tertiary storage is built using removable media

 Common examples of removable media are floppy disks and CD-ROMs;
other types are available.

Operating System Concepts – 8th Edition 12.40 Silberschatz, Galvin and Gagne ©2009
Removable Disks

 Floppy disk — thin flexible disk coated with magnetic material, enclosed
in a protective plastic case.

 Most floppies hold about 1 MB; similar technology is used for
removable disks that hold more than 1 GB.
 Removable magnetic disks can be nearly as fast as hard disks, but
they are at a greater risk of damage from exposure.

Operating System Concepts – 8th Edition 12.41 Silberschatz, Galvin and Gagne ©2009
Removable Disks (Cont.)
 A magneto-optic disk records data on a rigid platter coated with magnetic
material.
 Laser heat is used to amplify a large, weak magnetic field to record a
bit.
 Laser light is also used to read data (Kerr effect).
 The magneto-optic head flies much farther from the disk surface than a
magnetic disk head, and the magnetic material is covered with a
protective layer of plastic or glass; resistant to head crashes.

 Optical disks do not use magnetism; they employ special materials that are
altered by laser light.

Operating System Concepts – 8th Edition 12.42 Silberschatz, Galvin and Gagne ©2009
WORM Disks
 The data on read-write disks can be modified over and over.
 WORM (“Write Once, Read Many Times”) disks can be written only once.
 Thin aluminum film sandwiched between two glass or plastic platters.
 To write a bit, the drive uses a laser light to burn a small hole through the
aluminum; information can be destroyed but not altered.
 Very durable and reliable.
 Read-only disks, such as CD-ROM and DVD, come from the factory with the
data pre-recorded.

Operating System Concepts – 8th Edition 12.43 Silberschatz, Galvin and Gagne ©2009
Tapes
 Compared to a disk, a tape is less expensive and holds more data, but
random access is much slower.
 Tape is an economical medium for purposes that do not require fast random
access, e.g., backup copies of disk data, holding huge volumes of data.
 Large tape installations typically use robotic tape changers that move tapes
between tape drives and storage slots in a tape library.
 stacker – library that holds a few tapes
 silo – library that holds thousands of tapes
 A disk-resident file can be archived to tape for low cost storage; the
computer can stage it back into disk storage for active use.

Operating System Concepts – 8th Edition 12.44 Silberschatz, Galvin and Gagne ©2009
Operating System Issues
 Major OS jobs are to manage physical devices and to present a virtual
machine abstraction to applications

 For hard disks, the OS provides two abstractions:
 Raw device – an array of data blocks.
 File system – the OS queues and schedules the interleaved requests
from several applications.

Operating System Concepts – 8th Edition 12.45 Silberschatz, Galvin and Gagne ©2009
Application Interface
 Most OSs handle removable disks almost exactly like fixed disks — a new
cartridge is formatted and an empty file system is generated on the disk.
 Tapes are presented as a raw storage medium, i.e., an application does
not open a file on the tape, it opens the whole tape drive as a raw
device.
 Usually the tape drive is reserved for the exclusive use of that application.
 Since the OS does not provide file system services, the application must
decide how to use the array of blocks.
 Since every application makes up its own rules for how to organize a tape, a
tape full of data can generally only be used by the program that created it.

Operating System Concepts – 8th Edition 12.46 Silberschatz, Galvin and Gagne ©2009
Tape Drives
 The basic operations for a tape drive differ from those of a disk drive.
 locate positions the tape to a specific logical block, not an entire track
(corresponds to seek).
 The read position operation returns the logical block number where the
tape head is.
 The space operation enables relative motion.
 Tape drives are “append-only” devices; updating a block in the middle of the
tape also effectively erases everything beyond that block.
 An EOT mark is placed after a block that is written.

Operating System Concepts – 8th Edition 12.47 Silberschatz, Galvin and Gagne ©2009
File Naming
 The issue of naming files on removable media is especially difficult when we
want to write data on a removable cartridge on one computer, and then use
the cartridge in another computer.
 Contemporary OSs generally leave the name space problem unsolved for
removable media, and depend on applications and users to figure out how
to access and interpret the data.
 Some kinds of removable media (e.g., CDs) are so well standardized that all
computers use them the same way.

Operating System Concepts – 8th Edition 12.48 Silberschatz, Galvin and Gagne ©2009
Hierarchical Storage Management (HSM)

 A hierarchical storage system extends the storage hierarchy beyond
primary memory and secondary storage to incorporate tertiary storage —
usually implemented as a jukebox of tapes or removable disks.
 Usually incorporate tertiary storage by extending the file system.
 Small and frequently used files remain on disk.
 Large, old, inactive files are archived to the jukebox.
 HSM is usually found in supercomputing centers and other large
installations that have enormous volumes of data.

Operating System Concepts – 8th Edition 12.49 Silberschatz, Galvin and Gagne ©2009
Speed
 Two aspects of speed in tertiary storage are bandwidth and latency.

 Bandwidth is measured in bytes per second.


 Sustained bandwidth – average data rate during a large transfer; # of
bytes/transfer time.
Data rate when the data stream is actually flowing.
 Effective bandwidth – average over the entire I/O time, including seek
or locate, and cartridge switching.
Drive’s overall data rate.

Operating System Concepts – 8th Edition 12.50 Silberschatz, Galvin and Gagne ©2009
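A quick numeric illustration of the difference between the two measures; all numbers below are made up.

```python
# The point is only that effective bandwidth can be far below sustained
# bandwidth once locate and cartridge-switch time is included.
bytes_transferred = 500 * 1024**2        # a 500 MB read from tape
transfer_time_s = 10.0                   # time the data stream is actually flowing
locate_and_switch_s = 90.0               # cartridge switch + locate before the read

sustained = bytes_transferred / transfer_time_s
effective = bytes_transferred / (transfer_time_s + locate_and_switch_s)
print(f"sustained ~ {sustained / 1024**2:.0f} MB/s")   # ~ 50 MB/s
print(f"effective ~ {effective / 1024**2:.0f} MB/s")   # ~ 5 MB/s
```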
Speed (Cont.)

 Access latency – amount of time needed to locate data.


 Access time for a disk – move the arm to the selected cylinder
and wait for the rotational latency; < 35 milliseconds.
 Access on tape requires winding the tape reels until the selected
block reaches the tape head; tens or hundreds of seconds.
 Generally say that random access within a tape cartridge is about
a thousand times slower than random access on disk.
 The low cost of tertiary storage is a result of having many cheap
cartridges share a few expensive drives.
 A removable library is best devoted to the storage of infrequently used
data, because the library can only satisfy a relatively small number of
I/O requests per hour.

Operating System Concepts – 8th Edition 12.51 Silberschatz, Galvin and Gagne ©2009
Reliability
 A fixed disk drive is likely to be more reliable than a removable disk or tape
drive.

 An optical cartridge is likely to be more reliable than a magnetic disk or
tape.

 A head crash in a fixed hard disk generally destroys the data, whereas the
failure of a tape drive or optical disk drive often leaves the data cartridge
unharmed.

Operating System Concepts – 8th Edition 12.52 Silberschatz, Galvin and Gagne ©2009
Cost
 Main memory is much more expensive than disk storage

 The cost per megabyte of hard disk storage is competitive with magnetic
tape if only one tape is used per drive.

 The cheapest tape drives and the cheapest disk drives have had about the
same storage capacity over the years.

 Tertiary storage gives a cost savings only when the number of cartridges is
considerably larger than the number of drives.

Operating System Concepts – 8th Edition 12.53 Silberschatz, Galvin and Gagne ©2009
Price per Gigabyte of DRAM, From 1991 to 2020

Operating System Concepts – 8th Edition 12.54 Silberschatz, Galvin and Gagne ©2009
Price per Megabyte of DRAM, From 1981 to 2004

Operating System Concepts – 8th Edition 12.55 Silberschatz, Galvin and Gagne ©2009
Price per Megabyte of Magnetic Hard Disk, From 2009 to 2023

Operating System Concepts – 8th Edition 12.56 Silberschatz, Galvin and Gagne ©2009
Price per Megabyte of Magnetic Hard Disk, From 1981 to 2004

Operating System Concepts – 8th Edition 12.57 Silberschatz, Galvin and Gagne ©2009
Price per Megabyte of a Tape Drive, From 1984-2000

Operating System Concepts – 8th Edition 12.58 Silberschatz, Galvin and Gagne ©2009
Operating System Concepts – 8th Edition 12.59 Silberschatz, Galvin and Gagne ©2009
End of Chapter 12

Operating System Concepts – 8th Edition, Silberschatz, Galvin and Gagne ©2009
