File System Implementation
© Minh Quân
Sept-2013
CE108
Free-Space Management
Efficiency and Performance
Objectives
To describe the details of implementing local file systems and directory structures
To discuss block allocation and free-block algorithms and trade-offs
A few terms
External fragmentation
Internal fragmentation
Direct access
Sequential access
Hash table
Directory Implementation
Linear list of file names with pointer to the data blocks
Simple to program
Time-consuming to execute: linear search time
Could keep ordered alphabetically via a linked list, or use a B+ tree
Hash table: a linear list with a hash data structure
Decreases directory search time
Collisions: situations where two file names hash to the same location
Only good if entries are fixed size, or use a chained-overflow method (a sketch of this follows below)
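As an illustration of the hash-table approach (a toy in-memory sketch, not any real file system's on-disk format), the C fragment below hashes file names into a fixed-size bucket array and resolves collisions by chaining:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BUCKETS 128          /* fixed-size table, as the slide assumes */

struct dir_entry {
    char name[32];           /* file name */
    unsigned first_block;    /* pointer to the file's data blocks */
    struct dir_entry *next;  /* chained-overflow list for collisions */
};

static struct dir_entry *table[BUCKETS];

/* Simple string hash; two names landing in one bucket is a "collision". */
static unsigned hash(const char *name) {
    unsigned h = 5381;
    while (*name) h = h * 33 + (unsigned char)*name++;
    return h % BUCKETS;
}

void dir_add(const char *name, unsigned first_block) {
    struct dir_entry *e = malloc(sizeof *e);
    unsigned h = hash(name);
    strncpy(e->name, name, sizeof e->name - 1);
    e->name[sizeof e->name - 1] = '\0';
    e->first_block = first_block;
    e->next = table[h];      /* chain onto the bucket */
    table[h] = e;
}

/* Lookup is O(1) on average, versus a linear scan of the whole directory. */
struct dir_entry *dir_lookup(const char *name) {
    for (struct dir_entry *e = table[hash(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e;
    return NULL;
}

int main(void) {
    dir_add("notes.txt", 42);
    dir_add("a.out", 97);
    struct dir_entry *e = dir_lookup("notes.txt");
    if (e) printf("%s starts at block %u\n", e->name, e->first_block);
    return 0;
}
```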
Allocation Methods
An allocation method refers to how disk blocks are allocated for files:
Contiguous Allocation
Mapping from logical to physical: Q = LA / block size gives the block to access (added to the file's starting block), and R = LA mod block size gives the displacement into that block (a small sketch follows the list below)
Best performance in most cases
Simple: only starting location (block #) and length (number of blocks) are required
Problems include finding space for a file, knowing file size, external fragmentation, and the need for compaction off-line (downtime) or on-line
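A minimal sketch of that mapping, assuming 512-byte blocks for illustration (the structure and names here are hypothetical):

```c
#include <stdio.h>

#define BLOCK_SIZE 512   /* assumed block size for illustration */

/* A contiguously allocated file: only start block and length are needed. */
struct file {
    unsigned start;      /* first physical block number */
    unsigned length;     /* number of blocks */
};

/* Map a logical byte address to (physical block, offset within block). */
int map(const struct file *f, unsigned logical_addr,
        unsigned *phys_block, unsigned *offset) {
    unsigned q = logical_addr / BLOCK_SIZE;   /* which block of the file */
    unsigned r = logical_addr % BLOCK_SIZE;   /* displacement in that block */
    if (q >= f->length) return -1;            /* past end of file */
    *phys_block = f->start + q;               /* blocks are consecutive */
    *offset = r;
    return 0;
}

int main(void) {
    struct file f = { .start = 1000, .length = 8 };
    unsigned block, off;
    if (map(&f, 1300, &block, &off) == 0)
        printf("logical 1300 -> block %u, offset %u\n", block, off);
    return 0;
}
```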
Extent-Based Systems
Many newer file systems (e.g., Veritas File System) use a modified contiguous allocation scheme
Extent-based file systems allocate disk blocks in extents
An extent is a contiguous set of disk blocks
Extents are allocated for file storage; a file consists of one or more extents (a small sketch follows)
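A sketch of how a file might be represented under extent-based allocation (illustrative C structures, not Veritas's actual layout): each extent is a starting block plus a length, and a logical block is located by walking the extent list.

```c
#include <stdio.h>

/* One extent: a contiguous run of disk blocks. */
struct extent {
    unsigned start;   /* first physical block of the run */
    unsigned length;  /* number of contiguous blocks */
};

/* A file is one or more extents; new extents are added as the file grows. */
struct extent_file {
    struct extent ext[8];    /* illustrative fixed cap */
    unsigned nextents;
};

/* Map a logical block number to a physical block by walking the extents. */
long extent_map(const struct extent_file *f, unsigned logical_block) {
    for (unsigned i = 0; i < f->nextents; i++) {
        if (logical_block < f->ext[i].length)
            return (long)(f->ext[i].start + logical_block);
        logical_block -= f->ext[i].length;   /* skip past this extent */
    }
    return -1;   /* beyond end of file */
}

int main(void) {
    /* A file made of two extents: blocks 100-103 and 500-501. */
    struct extent_file f = { .ext = { {100, 4}, {500, 2} }, .nextents = 2 };
    printf("logical block 5 -> physical block %ld\n", extent_map(&f, 5));
    return 0;
}
```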
Linked Allocation
Each file is a linked list of disk blocks: blocks may be scattered anywhere on the disk
Each block contains data plus a pointer to the next block
File ends at a nil pointer
No external fragmentation, and no compaction needed
Free-space management system is called when a new block is needed
Efficiency can be improved by clustering blocks into groups, but this increases internal fragmentation
Reliability can be a problem: a lost or damaged pointer breaks the chain
Locating a block can take many I/Os and disk seeks, so there is no efficient direct access (see the sketch below)
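The following toy simulation (an in-memory array standing in for the disk) illustrates why random access is costly under linked allocation: reaching logical block n means following n pointers, one disk read per hop.

```c
#include <stdio.h>

#define NBLOCKS 64
#define NIL (-1)

/* Simulated disk: each block stores a pointer to the file's next block. */
static int next_block[NBLOCKS];

/* Return the physical block holding logical block n, counting the reads. */
int linked_lookup(int first_block, int n, int *reads) {
    int b = first_block;
    *reads = 0;
    while (n-- > 0 && b != NIL) {
        b = next_block[b];   /* one disk I/O per pointer followed */
        (*reads)++;
    }
    return b;                /* NIL means past end of file */
}

int main(void) {
    /* File scattered over blocks 9 -> 16 -> 1 -> 25, ending at a nil pointer. */
    for (int i = 0; i < NBLOCKS; i++) next_block[i] = NIL;
    next_block[9] = 16; next_block[16] = 1; next_block[1] = 25;

    int reads, b = linked_lookup(9, 3, &reads);
    printf("logical block 3 is physical block %d after %d reads\n", b, reads);
    return 0;
}
```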
File-Allocation Table
FAT (File Allocation Table) variation
Beginning of volume has the table, indexed by block number
Much like a linked list, but faster on disk and cacheable
New block allocation is simple (see the sketch below)
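A minimal sketch of the FAT idea (the table size and end-of-chain marker are assumptions for illustration): the next-pointers live in one table at the start of the volume, so a chain can be followed, and cached, without touching the data blocks, and allocating a new block is just finding a free entry and linking it in.

```c
#include <stdio.h>

#define FAT_SIZE 64
#define EOC (-1)            /* assumed end-of-chain marker */
#define FREE 0              /* assumed free-block marker */

static int fat[FAT_SIZE];   /* fat[b] = next block of the file, or EOC */

/* Follow the chain in the FAT itself; no data blocks need to be read. */
int fat_nth_block(int first_block, int n) {
    int b = first_block;
    while (n-- > 0 && b != EOC)
        b = fat[b];
    return b;
}

/* New block allocation: find a free entry and link it onto the file. */
int fat_append(int last_block) {
    for (int b = 2; b < FAT_SIZE; b++) {      /* blocks 0-1 reserved here */
        if (fat[b] == FREE) {
            fat[b] = EOC;
            if (last_block != EOC) fat[last_block] = b;
            return b;
        }
    }
    return EOC;   /* disk full */
}

int main(void) {
    for (int b = 0; b < FAT_SIZE; b++) fat[b] = FREE;
    int first = fat_append(EOC);              /* start a new file */
    int second = fat_append(first);           /* grow it by one block */
    printf("file occupies blocks %d -> %d -> end\n",
           first, fat_nth_block(first, 1) == second ? second : -1);
    return 0;
}
```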
Indexed Allocation
Each file has its own index block(s) of pointers to its data blocks (a lookup sketch follows the figure note below)
(figures: the logical view of an index table pointing to the file's data blocks, and a two-level scheme in which an outer index points to index tables that in turn point to the file's blocks)
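A minimal sketch of single-level indexed allocation (an in-memory model, not any particular file system's format): the index block's i-th entry points at the file's i-th data block, so access is direct.

```c
#include <stdio.h>

#define PTRS_PER_INDEX 128   /* assumed: pointers that fit in one index block */
#define NIL (-1)

/* An index block: the i-th entry points at the file's i-th data block. */
struct index_block {
    int ptr[PTRS_PER_INDEX];
};

/* Direct access: read the index block, then the data block it points at. */
int indexed_lookup(const struct index_block *idx, int logical_block) {
    if (logical_block < 0 || logical_block >= PTRS_PER_INDEX) return NIL;
    return idx->ptr[logical_block];
}

int main(void) {
    struct index_block idx;
    for (int i = 0; i < PTRS_PER_INDEX; i++) idx.ptr[i] = NIL;
    idx.ptr[0] = 9; idx.ptr[1] = 16; idx.ptr[2] = 1;   /* scattered data blocks */

    printf("logical block 2 -> physical block %d\n", indexed_lookup(&idx, 2));
    return 0;
}
```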
Combined Scheme: UNIX UFS (4K bytes per block, 32-bit addresses)
Note: More index blocks than can be addressed with 32-bit file pointer
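As a rough check of this note, assuming the classic UFS layout of 12 direct pointers plus single, double, and triple indirect blocks (the count of 12 direct pointers is an assumption; the 4 KB blocks and 4-byte block addresses are from the slide above):

```c
#include <stdio.h>

int main(void) {
    unsigned long long block = 4096;            /* 4 KB per block */
    unsigned long long ptrs  = block / 4;       /* 4-byte addresses -> 1024 per block */

    unsigned long long max_bytes =
        12ULL * block +                         /* direct blocks (assumed 12) */
        ptrs * block +                          /* single indirect */
        ptrs * ptrs * block +                   /* double indirect */
        ptrs * ptrs * ptrs * block;             /* triple indirect */

    /* Roughly 4 TB addressable, far beyond the 4 GB reachable by a
       32-bit byte offset -- which is the point of the note above. */
    printf("max file size ~ %llu bytes (~%llu GB)\n",
           max_bytes, max_bytes >> 30);
    return 0;
}
```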
Performance
Best method depends on file access type
Contiguous: great for both sequential and random access
Linked: good for sequential, not random
Declare access type at creation -> select either contiguous or linked
Indexed: more complex; a single block access could require 2 index-block reads and then the data-block read
Clustering can help improve throughput and reduce CPU overhead
Performance (Cont.)
Adding instructions to the execution path to save one disk I/O is reasonable
Intel Core i7 Extreme Edition 990x (2011) at 3.46 GHz ≈ 159,000 MIPS (http://en.wikipedia.org/wiki/Instructions_per_second)
Typical disk drive: 250 I/Os per second
159,000 MIPS / 250 ≈ 636 million instructions during one disk I/O
Fast SSD drives provide 60,000 IOPS
159,000 MIPS / 60,000 ≈ 2.65 million instructions during one disk I/O
Free-Space Management
File system maintains free-space list to track available blocks/clusters
Bit vector or bit map (n blocks): bit[i] = 1 if block[i] is free, 0 if block[i] is allocated
Block number of the first free block = (number of bits per word) * (number of 0-value words) + offset of first 1 bit
CPUs have instructions to return the offset within a word of the first 1 bit (see the sketch after the example below)
Example:
block size = 4 KB = 2^12 bytes
disk size = 2^40 bytes (1 terabyte)
n = 2^40 / 2^12 = 2^28 bits (or 32 MB of bit map)
with clusters of 4 blocks -> 8 MB of memory
Easy to get contiguous files
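A small sketch of that scan, using GCC/Clang's __builtin_ffsl as the "first 1 bit" instruction (the 1-means-free convention follows the slide; the map size and block numbers are illustrative):

```c
#include <stdio.h>
#include <limits.h>

#define NWORDS 8
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

/* bit i of the map: 1 => block i is free, 0 => block i is allocated. */
static unsigned long freemap[NWORDS];

/* First free block = bits-per-word * (# of all-zero words) + offset of the
   first 1 bit in the first non-zero word (the CPU's find-first-set). */
long first_free_block(void) {
    for (unsigned w = 0; w < NWORDS; w++) {
        if (freemap[w] != 0)
            return (long)(w * BITS_PER_WORD) + __builtin_ffsl((long)freemap[w]) - 1;
    }
    return -1;   /* no free blocks */
}

int main(void) {
    freemap[2] = 1UL << 5;   /* with 64-bit words: only block 2*64 + 5 = 133 is free */
    printf("first free block: %ld\n", first_free_block());
    return 0;
}
```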
Grouping
Modify the linked-list free list to store the addresses of the next n-1 free blocks in the first free block, plus a pointer to the next block that contains free-block pointers (like this one)
Counting
Because space is frequently contiguously used and freed (with contiguous allocation, extents, or clustering), keep the address of the first free block and a count of the following free blocks
The free-space list then has entries containing addresses and counts (a sketch of such an entry follows)
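A sketch of what a counting entry might look like (illustrative C structure, not any real on-disk record format): each entry is a starting block plus a count of consecutive free blocks.

```c
#include <stdio.h>

/* One counting entry: "count" free blocks starting at block "start". */
struct free_run {
    unsigned start;
    unsigned count;
};

int main(void) {
    /* e.g., blocks 100-103 and 250-255 free: two entries instead of ten bits. */
    struct free_run free_list[] = { {100, 4}, {250, 6} };
    int n = sizeof free_list / sizeof free_list[0];

    unsigned total = 0;
    for (int i = 0; i < n; i++)
        total += free_list[i].count;
    printf("%u free blocks described by %d entries\n", total, n);
    return 0;
}
```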
Space Maps
Used in ZFS
Consider meta-data I/O on very large file systems: full data structures like bit maps could not fit in memory -> thousands of I/Os
Divides device space into metaslab units and manages metaslabs; a given volume can contain hundreds of metaslabs
Each metaslab has an associated space map
Uses the counting algorithm, but records to a log file rather than the file system: a log of all block activity, in time order, in counting format
On metaslab activity -> load the space map into memory in a balanced-tree structure indexed by offset, replay the log into that structure, and combine contiguous free blocks into single entries
Efficiency and Performance
Efficiency dependent on:
Disk allocation and directory algorithms
Types of data kept in a file's directory entry
Pre-allocation or as-needed allocation of metadata structures
Fixed-size or varying-size data structures
Performance improved by:
Keeping data and metadata close together
Buffer cache: a separate section of main memory for frequently used blocks
Synchronous writes sometimes requested by apps or needed by the OS: no buffering / caching, so writes must hit disk before acknowledgement
Asynchronous writes more common, buffer-able, faster
Free-behind and read-ahead: techniques to optimize sequential access
Reads are frequently slower than writes
Ex. 1
Consider a system where free space is kept in a free-space list.
a. Suppose that the pointer to the free-space list is lost. Can the system reconstruct the free-space list? Explain your answer.
b. Consider a file system similar to the one used by UNIX with indexed allocation. How many disk I/O operations might be required to read the contents of a small local file at /a/b/c? Assume that none of the disk blocks is currently being cached.
Ex. 2
Some file systems allow disk storage to be allocated at different levels of granularity. For instance, a file system could allocate 4 KB of disk space as a single 4-KB block or as eight 512-byte blocks. How could we take advantage of this flexibility to improve performance? What modifications would have to be made to the free-space management scheme in order to support this feature?
Ex. 3
Consider a file system on a disk that has both logical and physical block sizes of 512 bytes. Assume that the information about each file is already in memory. For each of the three allocation strategies (contiguous, linked, and indexed), answer these questions:
a. How is the logical-to-physical address mapping accomplished in this system? (For indexed allocation, assume that a file is always less than 512 blocks long.)
b. If we are currently at logical block 10 (the last block accessed was block 10) and want to access logical block 4, how many physical blocks must be read from the disk?
Ex. 4
Fragmentation on a storage device could be eliminated by recompaction of the information. Typical disk devices do not have relocation or base registers (such as are used when memory is to be compacted), so how can we relocate files? Give three reasons why recompacting and relocating files are often avoided.