OS_Question&Answers_M4 & M5

Uploaded by muttagisiddappa4

1 What is a Translation Look-aside Buffer (TLB)? Explain the TLB in detail for a simple paging system, with a neat diagram.
Translation Look-aside Buffer
• The TLB is a special, small, fast-lookup hardware cache.
• Each entry in the TLB consists of two parts: a key (or tag) and a value.
• When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast; the hardware, however, is expensive. Typically, the number of entries in a TLB is small, often between 64 and 1,024.
• The TLB contains only a few of the page-table entries.
Working:
• When a logical address is generated by the CPU, its page number is presented to the TLB.
• If the page number is found (TLB hit), its frame number is immediately available and is used to access memory.
• If the page number is not in the TLB (TLB miss), a memory reference to the page table must be made. The obtained frame number can then be used to access memory (Figure 1).

Figure: Paging hardware with TLB

• In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference.
• If the TLB is already full of entries, the OS must select one for replacement.
• The percentage of times that a particular page number is found in the TLB is called the hit ratio.
Advantage: the search operation is fast.
Disadvantage: the hardware is expensive.
• Some TLBs have wired-down entries that cannot be removed.
• Some TLBs store an ASID (address-space identifier) in each entry; the ASID uniquely identifies each process and provides address-space protection for that process.
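The lookup described above can be sketched as a small simulation (the page-table contents and TLB capacity below are hypothetical; a real TLB is associative hardware, not a dictionary):

```python
# Minimal TLB simulation: a small cache consulted before the page table.
PAGE_TABLE = {0: 5, 1: 6, 2: 1, 3: 2}   # page -> frame (hypothetical)
TLB_CAPACITY = 2

tlb = {}            # cached page -> frame entries
hits = misses = 0

def translate(page):
    """Return the frame for `page`, consulting the TLB first."""
    global hits, misses
    if page in tlb:                       # TLB hit: frame available at once
        hits += 1
        return tlb[page]
    misses += 1                           # TLB miss: walk the page table
    frame = PAGE_TABLE[page]
    if len(tlb) >= TLB_CAPACITY:          # TLB full: evict an entry (FIFO here)
        tlb.pop(next(iter(tlb)))
    tlb[page] = frame                     # cache it for the next reference
    return frame

for p in [0, 0, 1, 0, 2]:
    translate(p)
print(hits, misses)   # 2 3 -- repeat references hit, first references miss
```

The hit ratio here is 2/5; a real TLB on a program with good locality reaches hit ratios well above 90%.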
2 Explain thrashing with a diagram.

THRASHING
If the number of frames allocated to a low-priority process falls below the minimum number required by the computer architecture, we suspend that process's execution.
A process is thrashing if it is spending more time paging than executing.
If a process does not have enough frames, it will quickly page-fault. To service the fault it must replace some page; but since all its pages are in active use, it must replace a page that will be needed again right away. Consequently, it quickly faults again and again, replacing pages that it must then immediately bring back in. This high paging activity, the phenomenon of excessively moving pages back and forth between memory and secondary storage, is called thrashing.
Cause of Thrashing
Thrashing results in severe performance problems.
The operating system monitors CPU utilization. If utilization is too low, the degree of multiprogramming is increased by introducing new processes to the system.
A global page-replacement algorithm replaces pages with no regard to the process to which they belong, so a faulting process can take frames away from other processes, causing them to fault in turn.

The figure shows thrashing:

As the degree of multiprogramming increases, CPU utilization also increases, although more slowly, until a maximum is reached.
If the degree of multiprogramming is increased even further, thrashing sets in and CPU utilization drops sharply.
At this point, to increase CPU utilization and stop thrashing, we must decrease the degree of multiprogramming.
We can limit the effects of thrashing by using a local replacement algorithm. To prevent thrashing, we must provide a process with as many frames as it needs.
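A toy simulation shows how the fault rate explodes once a process has fewer frames than its working set (FIFO replacement and a hypothetical cyclic reference string; not from the text):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for a reference string under FIFO replacement."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) >= frames:
                resident.discard(order.popleft())  # evict the oldest page
            resident.add(page)
            order.append(page)
    return faults

# A process that cycles through 4 pages: its working set is 4 frames.
refs = [0, 1, 2, 3] * 25          # 100 references
for frames in (2, 3, 4):
    print(frames, fifo_faults(refs, frames))
```

With 4 frames the process faults only 4 times in 100 references; with 3 frames (one fewer than its working set) every single reference faults, which is exactly the thrashing behaviour described above.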

3 Discuss in detail contiguous memory allocation. Illustrate with examples the internal and external fragmentation problems encountered in contiguous memory allocation.

Memory Allocation
Two types of memory partitioning are:
1. Fixed-sized partitioning
2. Variable-sized partitioning

1. Fixed-sized Partitioning
• The memory is divided into fixed-sized partitions.
• Each partition may contain exactly one process.
• The degree of multiprogramming is bound by the number of partitions.
• When a partition is free, a process is selected from the input queue and loaded into the free partition.
• When the process terminates, the partition becomes available for another process.
2. Variable-sized Partitioning
• The OS keeps a table indicating which parts of memory are available and which are occupied.
• A hole is a block of available memory. Normally, memory contains a set of holes of various sizes.
• Initially, all memory is available for user processes and is considered one large hole.
• When a process arrives, it is allocated memory from a hole large enough to hold it.
• If we find such a hole, we allocate only as much memory as is needed and keep the remaining memory available to satisfy future requests.
Three strategies are used to select a free hole from the set of available holes:
1. First fit: Allocate the first hole that is big enough. Searching can start either at the beginning of the set of holes or at the location where the previous first-fit search ended.
2. Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless the list is ordered by size. This strategy produces the smallest leftover hole.
3. Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by size. This strategy produces the largest leftover hole.
First fit and best fit are better than worst fit in terms of decreasing time and storage utilization.
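The three strategies can be sketched as follows, using the common textbook request of 212 KB against holes of 100, 500, 200, 300, and 600 KB (illustrative values, not from the text above):

```python
def select_hole(holes, request, strategy):
    """Pick the index of a hole that can satisfy `request`, or None.

    holes: list of free-hole sizes, in the order they appear in memory.
    """
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]        # first hole big enough
    if strategy == "best":
        return min(candidates)[1]      # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]      # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]      # free holes, in KB
for s in ("first", "best", "worst"):
    print(s, holes[select_hole(holes, 212, s)])
# first 500, best 300, worst 600
```

Best fit leaves the smallest leftover hole (300 - 212 = 88 KB); worst fit leaves the largest (600 - 212 = 388 KB).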

3. Fragmentation
Two types of memory fragmentation:
1. Internal fragmentation
2. External fragmentation

1. Internal Fragmentation
• The general approach is to break physical memory into fixed-sized blocks and allocate memory in units based on the block size.
• The memory allocated to a process may be slightly larger than the requested memory.
• The difference between the requested memory and the allocated memory is called internal fragmentation, i.e. unused memory that is internal to a partition.
2. External Fragmentation
• External fragmentation occurs when there is enough total memory space to satisfy a request, but the available spaces are not contiguous (i.e. storage is fragmented into a large number of small holes).
• Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation.
• Statistical analysis of first fit reveals that, given N allocated blocks, another 0.5 N blocks will be lost to fragmentation. This property is known as the 50-percent rule.
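A small worked illustration of internal fragmentation (a 4 KB block size is assumed here; the text above does not fix one):

```python
import math

BLOCK = 4096                      # fixed block size in bytes (assumed)

def internal_fragmentation(request):
    """Bytes wasted when `request` bytes are rounded up to whole blocks."""
    allocated = math.ceil(request / BLOCK) * BLOCK
    return allocated - request

# A 13,000-byte request gets 4 blocks (16,384 bytes): 3,384 bytes are
# allocated but never used -- that slack is the internal fragmentation.
print(internal_fragmentation(13000))   # 3384
print(internal_fragmentation(4096))    # 0 (an exact fit wastes nothing)
```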
4 Discuss briefly the demand-paging memory-management scheme.

DEMAND PAGING
Demand paging is similar to a paging system with swapping. When we want to execute a process, we swap it into memory; otherwise, it is not loaded into memory.
Pure demand paging: never bring a page into main memory until it is required. We can start executing a process without loading any of its pages into main memory. A page fault occurs for each non-memory-resident page; after the page is brought into memory, the process continues to execute, and a page fault occurs again for the next missing page.
A swapper manipulates entire processes, whereas a pager manipulates the individual pages of a process.
▪ Bring a page into memory only when it is needed.
▪ Less I/O needed.
▪ Less memory needed.
▪ Faster response.
▪ More users supported.
▪ Page is needed ⇒ reference to it:
▪ invalid reference ⇒ abort;
▪ not in memory ⇒ bring into memory.
▪ Lazy swapper: never swaps a page into memory unless that page will be needed.
▪ A swapper that deals with pages is a pager.
Hardware support: demand paging requires the same hardware as paging and swapping.
1. Page table: has the ability to mark an entry invalid through a valid–invalid bit.
2. Secondary memory: holds the pages that are not present in main memory.

Performance of Demand Paging: Demand paging can have a significant effect on the performance of a computer system.
✓ Let p be the probability of a page fault (0 ≤ p ≤ 1).
✓ Effective access time = (1 − p) × ma + p × page-fault service time,
  where ma is the memory-access time.
✓ The effective access time is directly proportional to the page-fault rate, so it is important to keep the page-fault rate low in demand paging.
Fig: Transfer of paged memory to contiguous disk space

Basic concept: Instead of swapping in a whole process, the pager swaps only the necessary pages into memory. It thus avoids reading unused pages, decreasing the swap time and the amount of physical memory needed.
The valid–invalid bit scheme can be used to distinguish between pages that are on disk and pages that are in memory. With each page-table entry, a valid–invalid bit is associated
(v ⇒ in memory, i ⇒ not in memory).
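The effective-access-time formula can be checked with a quick calculation (the memory-access and page-fault service times below are illustrative assumptions, not values from the text):

```python
def effective_access_time(p, ma_ns, fault_ns):
    """EAT = (1 - p) * ma + p * page-fault service time (nanoseconds)."""
    return (1 - p) * ma_ns + p * fault_ns

# Assume ma = 200 ns and a page-fault service time of 8 ms.
ma, fault = 200, 8_000_000
print(effective_access_time(0.0, ma, fault))     # 200.0 (no faults)
print(effective_access_time(0.001, ma, fault))   # 8199.8
```

Even one fault in a thousand references inflates the average access time from 200 ns to about 8,200 ns, roughly a 40x slowdown, which is why the fault rate must be kept very low.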

5 What is the principle behind paging? Explain its operation, clearly indicating how the logical
addresses are converted to physical addresses.

Basic Method of Paging


The basic method for implementing paging involves breaking physical memory into fixed-sized blocks
called frames and breaking logical memory into blocks of the same size called pages.
When a process is to be executed, its pages are loaded into any available memory frames from the
backing store.
The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.

The hardware support for paging is illustrated in Figure.

Figure : Paging hardware


The address generated by the CPU is divided into two parts (Figure 2):
1. Page number (p): used as an index into the page table. The page table contains the base address of each page in physical memory.
2. Offset (d): combined with the base address to define the physical address. This physical address is sent to the memory unit.
The page table maps the page number to a frame number, yielding a physical address that also has two parts: the frame number and the offset within that frame.
The number of bits in the frame number determines how many frames the system can address, and the number of bits in the offset determines the size of each frame.
The paging model of memory is shown in Figure.

Figure: Paging model of logical and physical memory.

The logical address is laid out as follows:

  | page number | page offset |
  |      p      |      d      |

To show how logical memory maps into physical memory, consider a page size of 4 bytes and a physical memory of 32 bytes (8 frames), as shown in the figure below.
a. Logical address 0 is page 0, offset 0, and page 0 is in frame 5. Logical address 0 therefore maps to physical address (5 × 4) + 0 = 20.
b. Logical address 3 is page 0, offset 3, and page 0 is in frame 5. Logical address 3 maps to physical address (5 × 4) + 3 = 23.
c. Logical address 4 is page 1, offset 0, and page 1 is mapped to frame 6. So logical address 4 maps to physical address (6 × 4) + 0 = 24.
d. Logical address 13 is page 3, offset 1, and page 3 is mapped to frame 2. So logical address 13 maps to physical address (2 × 4) + 1 = 9.
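The four mappings above can be reproduced with a few lines (the page table is taken from the example; pages not listed are assumed unmapped):

```python
PAGE_SIZE = 4                      # bytes, as in the example
PAGE_TABLE = {0: 5, 1: 6, 3: 2}    # page -> frame, from the example

def translate(logical):
    """Split a logical address into (p, d) and compute frame * size + d."""
    page, offset = divmod(logical, PAGE_SIZE)
    return PAGE_TABLE[page] * PAGE_SIZE + offset

for addr in (0, 3, 4, 13):
    print(addr, "->", translate(addr))   # 20, 23, 24, 9
```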
6 Differentiate between paging and segmentation.

1. Paging: the program is divided into fixed-sized pages. Segmentation: the program is divided into variable-sized sections.
2. Paging: the operating system is accountable. Segmentation: the compiler is accountable.
3. Paging: the page size is determined by the hardware. Segmentation: the section size is given by the user.
4. Paging: faster in comparison to segmentation. Segmentation: slow.
5. Paging: could result in internal fragmentation. Segmentation: could result in external fragmentation.
6. Paging: the logical address is split into a page number and a page offset. Segmentation: the logical address is split into a section number and a section offset.
7. Paging: comprises a page table that encloses the base address of every page. Segmentation: comprises a segment table that encloses the segment number and segment offset.
8. Paging: the page table is employed to keep up the page data. Segmentation: the section table maintains the section data.
9. Paging: the operating system must maintain a free-frame list. Segmentation: the operating system maintains a list of holes in main memory.
10. Paging: invisible to the user. Segmentation: visible to the user.
11. Paging: the processor needs the page number and offset to calculate the absolute address. Segmentation: the processor uses the segment number and offset to calculate the full address.
12. Paging: it is hard to allow sharing of procedures between processes. Segmentation: facilitates sharing of procedures between processes.
13. Paging: a programmer cannot efficiently handle data structures. Segmentation: can efficiently handle data structures.
14. Paging: protection is hard to apply. Segmentation: protection is easy to apply.
15. Paging: the page size must always equal the frame size. Segmentation: there is no constraint on the size of segments.
16. Paging: a page is referred to as a physical unit of information. Segmentation: a segment is referred to as a logical unit of information.
17. Paging: results in a less efficient system. Segmentation: results in a more efficient system.
7 Describe the steps in handling a page fault, with a diagram.

Page Fault
If a page is needed that was not originally loaded, then a page-fault trap is generated.
Steps in Handling a Page Fault
1. The memory address requested is first checked, to make sure it was a valid memory request.
2. If the reference is to an invalid page, the process is terminated. Otherwise, if the page is not present in
memory, it must be paged in.
3. A free frame is located, possibly from a free-frame list.
4. A disk operation is scheduled to bring in the necessary page from disk.
5. After the page is loaded to memory, the process's page table is updated with the new frame number, and
the invalid bit is changed to indicate that this is now a valid page reference.
6. The instruction that caused the page fault must now be restarted from the beginning.

Fig: steps in handling page fault
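The six steps above can be sketched as a handler (all of the data structures and helper names below are toy stand-ins; a real handler lives in the kernel):

```python
def handle_page_fault(page, page_table, valid_pages, free_frames, disk):
    """Sketch of the page-fault steps; returns the frame chosen."""
    # Steps 1-2: validate the reference; terminate the process if invalid.
    if page not in valid_pages:
        raise MemoryError("invalid reference: terminate process")
    # Step 3: locate a free frame from the free-frame list.
    frame = free_frames.pop()
    # Step 4: schedule a disk read for the page (simulated as a lookup).
    _contents = disk[page]
    # Step 5: update the page table and flip the invalid bit to valid.
    page_table[page] = (frame, "v")
    # Step 6: the faulting instruction is then restarted (not modelled).
    return frame

pt = {}
print(handle_page_fault(3, pt, valid_pages={0, 1, 2, 3},
                        free_frames=[7, 9], disk={3: "data"}))   # 9
print(pt)   # {3: (9, 'v')}
```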

8 What are the three methods for allocating disk space? Explain with suitable examples.

ALLOCATION METHODS
Allocation methods address the problem of allocating space to files so that disk space is utilized effectively,
and files can be accessed quickly.
Three methods exist for allocating disk space.
1. Contiguous allocation
2. Linked allocation
3. Indexed allocation
Contiguous allocation:
Requires that each file occupies a set of contiguous blocks on the disk.
Accessing a file is easy – only need the starting location (block #) and length (number of blocks)
Contiguous allocation of a file is defined by the disk address and length (in block units) of the first block. If
the file is n blocks long and starts at location b, then it occupies blocks b, b + 1, b + 2, ... ,b + n - 1. The
directory entry for each file indicates the address of the starting block and the length of the area allocated for
this file.
Accessing a file that has been allocated contiguously is easy. For sequential access, the file system
remembers the disk address of the last block referenced and when necessary, reads the next block. For direct
access to block i of a file that starts at block b, we can immediately access block b + i. Thus, both sequential
and direct access can be supported by contiguous allocation.
Disadvantages:
1. Finding space for a new file is difficult. The system chosen to manage free space determines how this task
is accomplished. Any management system can be used, but some are slower than others.
2. Satisfying a request of size n from a list of free holes is a problem. First fit and best fit are the most
common strategies used to select a free hole from the set of available holes.
3. The above algorithms suffer from the problem of external fragmentation.
▪ As files are allocated and deleted, the free disk space is broken into pieces.
▪ External fragmentation exists whenever free space is broken into chunks.
▪ It becomes a problem when the largest contiguous chunk is insufficient for a request; storage is
fragmented into a number of holes, none of which is large enough to store the data.
▪ Depending on the total amount of disk storage and the average file size, external fragmentation may be a
minor or a major problem.
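The block arithmetic for contiguous allocation can be shown directly (the directory entry below is hypothetical):

```python
# Directory entry for a contiguously allocated file: (start block b, length n).
directory = {"report.txt": (14, 3)}    # occupies blocks 14, 15, 16 (assumed)

def blocks_of(name):
    """The file occupies blocks b, b+1, ..., b+n-1."""
    b, n = directory[name]
    return list(range(b, b + n))

def direct_access(name, i):
    """Direct access to logical block i is simply physical block b + i."""
    b, n = directory[name]
    if not 0 <= i < n:
        raise IndexError("block out of range")
    return b + i

print(blocks_of("report.txt"))         # [14, 15, 16]
print(direct_access("report.txt", 2))  # 16
```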

Linked Allocation:
Solves the problems of contiguous allocation.
Each file is a linked list of disk blocks: blocks may be scattered anywhere on the disk.
The directory contains a pointer to the first and last blocks of a file.
Creating a new file requires only the creation of a new entry in the directory.
Writing to a file causes the free-space management system to find a free block.
➢ This new block is written to and is linked to the end of the file.
➢ Reading from a file requires only reading blocks by following the pointers from block to block.

Advantages
There is no external fragmentation.
Any free blocks on the free list can be used to satisfy a request for disk space.
The size of a file need not be declared when the file is created.
A file can continue to grow as long as free blocks are available.
It is never necessary to compact disk space for the sake of linked allocation (however, file access
efficiency may require it)
Indexed allocation:
Brings all the pointers together into one location called the index block.
Each file has its own index block, which is an array of disk-block addresses.
The ith entry in the index block points to the ith block of the file. The directory contains the address of the index block. To find and read the ith block, we use the pointer in the ith index-block entry.
When the file is created, all pointers in the index block are set to nil. When the ith block is first written, a block is obtained from the free-space manager and its address is put in the ith index-block entry.
Indexed allocation supports direct access without suffering from external fragmentation, because any free block on the disk can satisfy a request for more space.
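A toy sketch of an index block (the slot count and block numbers are assumptions for illustration):

```python
NIL = None

class IndexedFile:
    """One index block holding disk-block pointers for a single file."""
    def __init__(self, slots=8):
        self.index_block = [NIL] * slots   # all pointers start as nil

    def write_block(self, i, free_space):
        """First write to block i grabs a block from the free-space manager."""
        if self.index_block[i] is NIL:
            self.index_block[i] = free_space.pop()
        return self.index_block[i]

    def read_block(self, i):
        return self.index_block[i]         # ith entry -> ith block of the file

free = [42, 17, 99]        # hypothetical free-space pool
f = IndexedFile()
f.write_block(0, free)     # allocates block 99
f.write_block(3, free)     # allocates block 17
print(f.index_block[:4])   # [99, None, None, 17]
```

Note how blocks 99 and 17 need not be adjacent: any free block satisfies the request, which is why indexed allocation avoids external fragmentation.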

Disadvantages:
▪ Suffers from some of the same performance problems as linked allocation.
▪ Index blocks can be cached in memory; however, data blocks may be spread all over the disk volume.
▪ Indexed allocation does suffer from wasted space.
▪ The pointer overhead of the index block is generally greater than the pointer overhead of linked
allocation.

9 Explain the access matrix model of implementing protection in OS.

ACCESS MATRIX
Our model of protection can be viewed as a matrix, called an access matrix. It is a general model of
protection that provides a mechanism for protection without imposing a particular protection policy.
The rows of the access matrix represent domains, and the columns represent objects.
Each entry in the matrix consists of a set of access rights.
The entry access(i,j) defines the set of operations that a process executing in domain Di can invoke on
object Oj.

In the above diagram, there are four domains and four objects—three files (F1, F2, F3) and one printer.
A process executing in domain D1 can read files F1 and F3. A process executing in domain D4 has the
same privileges as one executing in domain D1; but in addition, it can also write onto files F1 and F3.
When a user creates a new object Oj, the column Oj is added to the access matrix with the appropriate
initialization entries, as dictated by the creator.
A process executing in one domain can be switched to another domain. When we switch a process from one domain to another, we are executing an operation (switch) on an object (the domain).
Domain switching from domain Di to domain Dj is allowed if and only if the access right switch ∈ access(i, j). Thus, in the given figure, a process executing in domain D2 can switch to domain D3 or to domain D4. A process in domain D4 can switch to D1, and one in domain D1 can switch to domain D2.
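The matrix, the access check, and the switch right can be sketched with a dictionary (the entries below loosely follow the four-domain example described above, and are partly assumed since the figure itself is not reproduced here):

```python
# Access matrix as a sparse map: (domain, object) -> set of rights.
# Domains also appear as objects, which is what makes switching work.
access = {
    ("D1", "F1"): {"read"}, ("D1", "F3"): {"read"}, ("D1", "D2"): {"switch"},
    ("D2", "D3"): {"switch"}, ("D2", "D4"): {"switch"},
    ("D4", "F1"): {"read", "write"}, ("D4", "F3"): {"read", "write"},
    ("D4", "D1"): {"switch"},
}

def allowed(domain, obj, op):
    """access(i, j) check: is `op` in the rights set for this entry?"""
    return op in access.get((domain, obj), set())

print(allowed("D1", "F1", "read"))    # True
print(allowed("D1", "F1", "write"))   # False
print(allowed("D2", "D3", "switch"))  # True  (domain switch D2 -> D3)
```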

Allowing controlled change in the contents of the access-matrix entries requires three additional
operations: copy, owner, and control.
The ability to copy an access right from one domain (or row) of the access matrix to another is denoted
by an asterisk (*) appended to the access right. The copy right allows the copying of the access right
only within the column for which the right is defined. In the below figure, a process executing in
domain D2 can copy the read operation into any entry associated with file F2. Hence, the access matrix
of figure (a) can be modified to the access matrix shown in figure (b).

This scheme has two variants:


1. A right is copied from access(i,j) to access(k,j); it is then removed from access(i,j). This action is a
transfer of a right, rather than a copy.
2. Propagation of the copy right (limited copy): here, when the right R* is copied from access(i,j) to access(k,j), only the right R (not R*) is created. A process executing in domain Dk cannot further copy the right R.

We also need a mechanism to allow addition of new rights and removal of some rights. The owner right
controls these operations. If access(i,j) includes the owner right, then a process executing in domain Di,
can add and remove any right in any entry in column j.
For example, in below figure (a), domain D1 is the owner of F1, and thus can add and delete any valid
right in column F1. Similarly, domain D2 is the owner of F2 and F3 and thus can add and remove any
valid right within these two columns. Thus, the access matrix of figure(a) can be modified to the access
matrix shown in figure(b) as follows.
A mechanism is also needed to change the entries in a row. If access(i,j) includes the control right, then a
process executing in domain Di, can remove any access right from row j. For example, in figure, we
include the control right in access(D3, D4). Then, a process executing in domain D3 can modify domain
D4.

IMPLEMENTATION OF ACCESS MATRIX


Different methods of implementing the access matrix (which is sparse)
Global Table
Access Lists for Objects
Capability Lists for Domains
Lock-Key Mechanism

1. Global Table
This is the simplest implementation of the access matrix.
A set of ordered triples <domain, object, rights-set> is maintained in a file. Whenever an operation M is executed on an object Oj within domain Di, the table is searched for a triple <Di, Oj, Rk> with M ∈ Rk. If this triple is found, the operation is allowed to continue; otherwise, an exception (or error) condition is raised.
Drawback: the table is usually large and thus cannot be kept in main memory, so additional I/O is needed.
2. Access Lists for Objects
Each column in the access matrix can be implemented as an access list for one object. Empty entries are discarded. The resulting list for each object consists of ordered pairs <domain, rights-set>, defining all domains' access rights for that object. When an operation M is executed on object Oj by a process in domain Di, we search the access list for object Oj, looking for an entry <Di, Rk> with M ∈ Rk. If the entry is found, we allow the operation; if it is not, we check the default set. If M is in the default set, we allow the access. Otherwise, access is denied, and an exception condition occurs. For efficiency, we may check the default set first and then search the access list.
3. Capability Lists for Domains
A capability list for a domain is a list of objects together with the operations allowed on those objects.
An object is often represented by its name or address, called a capability.
To execute operation M on object Oj, the process executes the operation M, specifying the capability
for object Oj as a parameter. Simple possession of the capability means that access is allowed.

Capabilities are distinguished from other data in one of two ways:


1. Each object has a tag to denote its type either as a capability or as accessible data.
2. The address space associated with a program can be split into two parts. One part is accessible to the
program and contains the program's normal data and instructions. The other part, containing the
capability list, is accessible only by the operating system.

4. A Lock-Key Mechanism
The lock-key scheme is a compromise between access lists and capability lists.
Each object has a list of unique bit patterns, called locks. Each domain has a list of unique bit
patterns, called keys.
A process executing in a domain can access an object only if that domain has a key that matches one
of the locks of the object.
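A minimal sketch of the lock-key check (the bit patterns below are hypothetical):

```python
# Lock-key: objects carry locks, domains carry keys; access needs a match.
object_locks = {"F1": {0b1010, 0b0110}}          # per-object lock patterns
domain_keys  = {"D1": {0b0110}, "D2": {0b0001}}  # per-domain key patterns

def can_access(domain, obj):
    """A domain may access an object iff one of its keys matches a lock."""
    return bool(domain_keys.get(domain, set()) & object_locks.get(obj, set()))

print(can_access("D1", "F1"))   # True  (key 0b0110 matches a lock)
print(can_access("D2", "F1"))   # False (no matching key)
```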

10 List typical file attributes and file operations indicating their purpose.

File Attributes
A file is named, for the convenience of its human users, and is referred to by its name. A name is usually a
string of characters, such as example.c
When a file is named, it becomes independent of the process, the user, and even the system that created it.

A file's attributes vary from one operating system to another but typically consist of these:
Name: The symbolic file name is the only information kept in human readable form.
Identifier: This unique tag, usually a number, identifies the file within the file system; it is the non-human-readable name for the file.
Type: This information is needed for systems that support different types of files.
Location: This information is a pointer to a device and to the location of the file on that device.
Size: The current size of the file (in bytes, words, or blocks) and possibly the maximum allowed size are
included in this attribute.
Protection: Access-control information determines who can do reading, writing, executing, and so on.
Time, date, and user identification: This information may be kept for creation, last modification, and
last use. These data can be useful for protection, security, and usage monitoring.
The information about all files is kept in the directory structure, which also resides on secondary storage.
Typically, a directory entry consists of the file's name and its unique identifier. The identifier in turn locates
the other file attributes.
File Operations
A file is an abstract data type. To define a file properly, we need to consider the operations that can be
performed on files.
1. Creating a file: Two steps are necessary to create a file,
a) Space in the file system must be found for the file.
b) An entry for the new file must be made in the directory.
2. Writing a file: To write a file, we make a system call specifying both the name of the file and the
information to be written to the file. Given the name of the file, the system searches the directory to find the
file's location. The system must keep a write pointer to the location in the file where the next write is to take
place. The write pointer must be updated whenever a write occurs.
3. Reading a file: To read from a file, we use a system call that specifies the name of the file and where the next block of the file should be put. Again, the directory is searched for the associated entry, and the system needs to keep a read pointer to the location in the file where the next read is to take place. Once the read has taken place, the read pointer is updated. Because a process is usually either reading from or writing to a file, the current operation location can be kept as a per-process current file-position pointer.
4. Repositioning within a file: The directory is searched for the appropriate entry, and the current-file-position pointer is repositioned to a given value. Repositioning within a file need not involve any actual I/O. This file operation is also known as a file seek.
5. Deleting a file: To delete a file, search the directory for the named file. Having found the associated
directory entry, then release all file space, so that it can be reused by other files, and erase the directory entry.
6. Truncating a file: The user may want to erase the contents of a file but keep its attributes. Rather than
forcing the user to delete the file and then recreate it, this function allows all attributes to remain unchanged
but lets the file be reset to length zero and its file space released.
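The operations above map directly onto ordinary system calls; a short demonstration using Python's standard library (the file name and contents are arbitrary):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:        # create + write: directory entry and space
    f.write("hello, file system")

with open(path, "r+") as f:       # read and reposition (seek)
    print(f.read(5))              # read from the current position -> "hello"
    f.seek(7)                     # reposition the file-position pointer
    print(f.read())               # -> "file system"
    f.truncate(5)                 # truncate: keep attributes, cut length to 5

print(os.path.getsize(path))      # 5
os.remove(path)                   # delete: release space, erase the entry
```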

11 What do you mean by a free-space list? With suitable examples, explain the methods of implementing the free-space list.
Free Space Management
The space created after deleting files can be reused. Another important aspect of disk management is keeping track of free space on the disk. The list that keeps track of free disk space is called the free-space list. To create a file, we search the free-space list for the required amount of space and allocate that space to the new file; this space is then removed from the free-space list. When a file is deleted, its disk space is added to the free-space list. The free-space list is implemented in different ways, as explained below.
a) Bit Vector
One simple approach is to use a bit vector, in which each bit represents a disk block: 1 if the block is free, 0 if it is allocated. Fast algorithms exist for quickly finding contiguous blocks of a given size.
For example, consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, and 18 are free and the rest of the blocks are allocated. The free-space bit map would be:
0011110011111100011
This scheme is easy to implement and very efficient at finding the first free block or 'n' consecutive free blocks on the disk.
The downside is that a 40 GB disk requires over 5 MB just to store the bitmap (assuming 1 KB blocks).
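A sketch of decoding such a bit vector (stored here as a string for readability; a real implementation packs it into machine words):

```python
def free_blocks(bitmap):
    """Decode a free-space bit vector: bit i == '1' means block i is free."""
    return [i for i, bit in enumerate(bitmap) if bit == "1"]

def first_free(bitmap):
    """Index of the first free block."""
    return bitmap.index("1")

bitmap = "0011110011111100011"          # the example bit map above
print(free_blocks(bitmap))  # [2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18]
print(first_free(bitmap))   # 2
```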

b) Linked List
A linked list can also be used to keep track of all free blocks. Traversing the list and finding a contiguous run of blocks of a given size are not easy, but fortunately these are not frequently needed operations; generally, the system just adds and removes single blocks from the beginning of the list.
The FAT keeps track of the free list as just one more linked list in the table.

c) Grouping
A variation on the linked-list free list: the first free block stores the addresses of n free blocks. The first n − 1 of these are actually free; the last block contains the addresses of another n free blocks, and so on. The addresses of a large number of free blocks can thus be found quickly.

d) Counting
When there are multiple contiguous blocks of free space, the system can keep track of the starting address of the group and the number of contiguous free blocks. Rather than keeping a list of n free disk addresses, we keep the address of the first free block and the count of free contiguous blocks that follow it. The overall list is thus shortened. This is similar to the extent method of allocating blocks.
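The counting scheme amounts to run-length encoding the free list; a sketch using the free blocks from the bit-vector example:

```python
def counting_free_list(free):
    """Compress a sorted list of free blocks into (start, count) runs."""
    runs, start, count = [], None, 0
    for b in free:
        if start is not None and b == start + count:
            count += 1                       # extends the current run
        else:
            if start is not None:
                runs.append((start, count))  # close the finished run
            start, count = b, 1
    if start is not None:
        runs.append((start, count))
    return runs

print(counting_free_list([2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18]))
# [(2, 4), (8, 6), (17, 2)] -- 12 addresses shortened to 3 pairs
```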

12 Explain bad-block recovery briefly.


Bad Blocks
Disks are prone to sector failure due to the fast movement of the read/write head; sometimes the whole disk must be replaced. Groups of sectors that are defective are called bad blocks.
There are different ways to handle bad blocks:
Some bad blocks are handled manually, e.g. in MS-DOS.

Some controllers replace each bad sector logically with one of the spare sectors (extra sectors). The schemes used are sector sparing (or forwarding) and sector slipping.

1. In MS-DOS, the format command scans the disk to find bad blocks. If it finds a bad block, it writes a special value into the corresponding FAT entry to tell the allocation routines not to use that block.

2. In SCSI disks, bad blocks are found during the low-level formatting at the factory and is updated
over the life of the disk. Low-level formatting also sets aside spare sectors not visible to the
operating system. The controller can be told to replace each bad sector logically with one of the
spare sectors. This scheme is known as sector sparing or forwarding.
A typical bad-sector transaction might be as follows:
The operating system tries to read logical block 87.
The controller finds that the sector is bad. It reports this finding to the operating system.
The next time the system is rebooted, a special command is run to tell the SCSI controller to replace
the bad sector with a spare.
After that, whenever the system requests logical block 87, the request is translated into the
replacement sector's (spare) address by the controller.
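The forwarding logic in this transaction can be sketched as a toy Python model (the `Controller` class and the spare-sector numbers are invented for illustration; real controllers do this in firmware): the controller keeps a remap table, and each request for a bad logical block is translated to its spare.

```python
class Controller:
    """Toy model of sector sparing (forwarding): keep a remap
    table from bad logical blocks to spare sectors."""
    def __init__(self, spares):
        self.spares = list(spares)   # pool of spare sector numbers
        self.remap = {}              # bad logical block -> spare sector

    def mark_bad(self, block):
        self.remap[block] = self.spares.pop(0)   # forward to the next spare

    def translate(self, block):
        return self.remap.get(block, block)      # good blocks pass through

ctl = Controller(spares=[500, 501])
ctl.mark_bad(87)            # block 87 reported bad after a failed read
print(ctl.translate(87))    # 500 -- later requests for 87 reach the spare
print(ctl.translate(88))    # 88  -- good blocks are unaffected
```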

3. Some controllers replace bad blocks by sector slipping.


Example: Suppose that logical block 17 becomes defective and the first available spare follows sector
202. Then, sector slipping remaps all the sectors from 17 to 202, moving them all down one spot. That
is, sector 202 is copied into the spare, then sector 201 into 202, and then 200 into 201, and so on, until
sector 18 is copied into sector 19. Slipping the sectors in this way frees up the space of sector 18, so
sector 17 can be mapped to it.
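Sector slipping can be simulated on a tiny in-memory "disk" (purely illustrative; a real controller performs the copies at the firmware level). Every sector between the bad one and the spare is copied down one slot, and the bad sector is remapped to the slot freed next to it.

```python
def sector_slip(disk, bad, spare):
    """Simulate sector slipping: each sector between the bad one and
    the spare moves down one slot (202 -> spare, 201 -> 202, ...,
    18 -> 19 in the text's example), freeing the slot after the bad
    sector so the bad sector can be remapped to it."""
    for s in range(spare, bad + 1, -1):
        disk[s] = disk[s - 1]          # copy one slot down
    return disk, {bad: bad + 1}        # logical `bad` now uses the freed slot

# Tiny disk standing in for sectors 0..4; sector 1 is bad, 4 is the spare.
disk, remap = sector_slip(list("abcde"), bad=1, spare=4)
print(disk, remap)   # ['a', 'b', 'c', 'c', 'd'] {1: 2}
```

After slipping, the data that lived in slots 2 and 3 now lives in slots 3 and 4, and writes to logical sector 1 go to the freed slot 2.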
13 Explain the various access methods of files.

ACCESS METHODS
Files store information. When it is used, this information must be accessed and read into computer
memory. The information in the file can be accessed in several ways.
Some of the common methods are:

1. Sequential access
The simplest access method is sequential access. Information in the file is processed in order, one record
after the other.
Reads and writes make up the bulk of the operations on a file.
A read operation (read next) reads the next portion of the file and automatically advances a file pointer,
which tracks the I/O location.
The write operation (write next) appends to the end of the file and advances the pointer to the end of the
newly written material.
A file can be reset to the beginning, and on some systems a program may be able to skip forward or
backward n records for some integer n (perhaps only for n = 1).

Figure: Sequential-access file.
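A minimal Python sketch of the sequential model (the class and method names are invented for illustration): read next advances a file pointer, write next appends and moves the pointer past the new material, and reset rewinds to the beginning.

```python
class SequentialFile:
    """Minimal model of sequential access: a single file pointer
    that reads advance and reset rewinds."""
    def __init__(self, records):
        self.records = list(records)
        self.pos = 0                      # the file pointer

    def read_next(self):
        rec = self.records[self.pos]
        self.pos += 1                     # automatically advance
        return rec

    def write_next(self, rec):
        self.records.append(rec)          # append to the end of the file
        self.pos = len(self.records)      # pointer past the new material

    def reset(self):
        self.pos = 0                      # rewind to the beginning

f = SequentialFile(["rec0", "rec1"])
print(f.read_next())   # rec0
print(f.read_next())   # rec1
f.reset()
print(f.read_next())   # rec0
```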


2. Direct Access
A file is made up of fixed-length logical records that allow programs to read and write records rapidly in
no particular order.
The direct-access method is based on a disk model of a file, since disks allow random access to any file
block. For direct access, the file is viewed as a numbered sequence of blocks or records.
For example, we may read block 14, then read block 53, and then write block 7. There are no restrictions on
the order of reading or writing for a direct-access file.
Direct-access files are of great use for immediate access to large amounts of information such as
Databases, where searching becomes easy and fast.
For the direct-access method, the file operations must be modified to include the block number as a
parameter. Thus, we have read n, where n is the block number, rather than read next, and write n rather than
write next.
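Direct access maps naturally onto seeking in a byte stream. The sketch below (the block size and helper names are assumptions for illustration) reads and writes numbered fixed-length blocks in arbitrary order using a file's seek operation.

```python
import tempfile

BLOCK = 16  # bytes per logical block (illustrative size)

def write_block(f, n, data):
    """Direct access write: position at block n, then write."""
    f.seek(n * BLOCK)
    f.write(data.ljust(BLOCK).encode())

def read_block(f, n):
    """Direct access read: position at block n, then read one block."""
    f.seek(n * BLOCK)
    return f.read(BLOCK).decode().rstrip()

with tempfile.TemporaryFile() as f:
    write_block(f, 53, "block-53")   # no ordering restriction:
    write_block(f, 7, "block-7")     # block 53 first, then block 7
    print(read_block(f, 53))         # block-53
    print(read_block(f, 7))          # block-7
```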

14 What is protection? Distinguish between mechanisms and policies.


Refer to ppt shared (chapter 14) access matrix
15 Explain the various types of directory structure.

The most common schemes for defining the logical structure of a directory are described below.
1. Single-level Directory
2. Two-Level Directory
3. Tree-Structured Directories
4. Acyclic-Graph Directories
5. General Graph Directory
Single-level Directory
o The simplest directory structure is the single-level directory. All files are contained in the same directory,
which makes it easy to support and understand.

The entire system contains only one directory, which contains only one entry per file.
Advantages:
Implementation is easy.
If there are fewer files, searching is faster.
Disadvantages:
o We cannot have 2 files with the same name, because of name collision.
o Searching time increases with the increase in directory size (number of
files increases).
o Protection cannot be implemented for multiple users.
o Finding a unique name for every file is difficult when the number of files increases or when the system has
more than one user.
o As directory structure is single, uniqueness of file name has to be maintained, which is difficult when there
are multiple users.
o Even a single user on a single-level directory may find it difficult to remember the names of all the files as
the number of files increases.
o It is not uncommon for a user to have hundreds of files on one computer system and an equal number of
additional files on another system. Keeping track of so many files is a daunting task.
Two-Level Directory
o In the two-level directory structure, each user has his or her own user file directory (UFD). The UFDs
have similar structures, but each lists only the files of a single user.
o When a user refers to a particular file, only his own UFD is searched. Different users may have files with
the same name, as long as all the file names within each UFD are unique.
o To create a file for a user, the operating system searches only that user's UFD to ascertain whether another
file of that name exists. To delete a file, the operating system confines its search to the local UFD; thus, it
cannot accidentally delete another user's file that has the same name.
o When a user job starts or a user logs in, the system's Master file directory (MFD) is searched. The MFD is
indexed by user name or account number, and each entry points to the UFD for that user.
Advantage:
▪ No file name-collision among different users.
▪ Efficient searching.

Disadvantage
▪ Users are isolated from one another and can’t cooperate on the same task.
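The MFD-to-UFD lookup can be sketched as two levels of dictionaries (the class and method names are invented for illustration): creating a file searches only the owner's UFD, so two users may hold files with the same name.

```python
class TwoLevelDirectory:
    """Sketch of MFD -> UFD lookup: the same file name may exist
    under different users without collision."""
    def __init__(self):
        self.mfd = {}                # user name -> UFD (file name -> data)

    def add_user(self, user):
        self.mfd[user] = {}          # a fresh, empty UFD for this user

    def create(self, user, name, data=None):
        ufd = self.mfd[user]         # only this user's UFD is searched
        if name in ufd:
            raise FileExistsError(name)
        ufd[name] = data

d = TwoLevelDirectory()
d.add_user("alice")
d.add_user("bob")
d.create("alice", "test.txt")
d.create("bob", "test.txt")          # no name collision: different UFDs
```

A duplicate create for the same user fails, mirroring the uniqueness rule within one UFD.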

3. Tree Structured Directories


o A tree is the most common directory structure.
o The tree has a root directory, and every file in the system has a unique path name.
o A directory contains a set of files or subdirectories. A directory is simply another file, but it is treated in a
special way.
o All directories have the same internal format. One bit in each directory entry defines the entry as a file (0)
or as a subdirectory (1). Special system calls are used to create and delete directories.

Two types of path-names:


1. Absolute path-name: begins at the root.
2. Relative path-name: defines a path from the current directory.

How to delete directory?


1. To delete an empty directory: Just delete the directory.
2. To delete a non-empty directory:
▪ First, delete all files in the directory.
▪ If any subdirectories exist, this procedure must be applied recursively to them.
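The recursive deletion procedure above can be sketched with nested dictionaries standing in for directories (an illustrative model, not a real file-system API): a dict models a subdirectory, anything else models a file.

```python
def delete_dir(tree, path):
    """Recursively delete the directory (or file) named by `path`,
    a list of components; subdirectories are deleted first."""
    *parents, name = path
    node = tree
    for p in parents:                     # walk down to the parent
        node = node[p]
    target = node[name]
    if isinstance(target, dict):          # non-empty directory:
        for child in list(target):
            delete_dir(target, [child])   # apply the procedure recursively
    del node[name]                        # finally remove the entry itself

root = {"home": {"a.txt": 1, "docs": {"b.txt": 2}}}
delete_dir(root, ["home"])
print(root)   # {}
```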

Advantage:
▪ Users can be allowed to access the files of other users.
Disadvantages:
▪ A path to a file can be longer than a path in a two-level directory.
▪ Sharing of files (or directories) is prohibited.
4. Acyclic Graph Directories
The common subdirectory should be shared. A shared directory or file will exist in the file system in two
or more places at once. A tree structure prohibits the sharing of files or directories.
An acyclic graph is a graph with no cycles. It allows directories to share subdirectories and files.

The same file or subdirectory may be in two different directories. The acyclic graph is a natural
generalization of the tree-structured directory scheme.
Two methods to implement shared files (or subdirectories):
1. Create a new directory-entry called a link. A link is a pointer to another file (or subdirectory).
2. Duplicate all information about shared files in both sharing directories.
Two problems:
1. A file may have multiple absolute path-names.
2. Deletion may leave dangling-pointers to the non-existent file.
Solutions to the deletion problem:
1. Use back-pointers: preserve the file until all references to it are deleted.
2. With symbolic links, deleting a link leaves the file intact; if the file itself is deleted, the links are left
dangling and can be removed when they are next accessed.
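The back-pointer solution behaves like a reference count on the file. The sketch below (invented names, illustrative only) removes directory entries one at a time and reports that the file is reclaimable only when the last reference is gone.

```python
class SharedFile:
    def __init__(self, data, nlink):
        self.data = data
        self.nlink = nlink     # number of directory entries (back-pointers)

def unlink(directory, name):
    """Remove one directory entry; the file's data is reclaimable
    only when the last reference to it has been deleted."""
    f = directory.pop(name)
    f.nlink -= 1
    return f.nlink == 0        # True -> no references remain

home = {}
shared = SharedFile("payload", nlink=2)
home["a"] = shared             # the same file appears in two places
home["b"] = shared
print(unlink(home, "a"))   # False: one reference still remains
print(unlink(home, "b"))   # True: last reference deleted
```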
5. General Graph Directory
o Problem: If there are cycles, we want to avoid searching components twice.
o Solution: Limit the number of directories accessed in a search.
o Problem: With cycles, the reference-count may be non-zero even when it is no longer possible to refer to
a directory (or file). (A value of 0 in the reference count means that there are no more references to the file or
directory, and the file can be deleted).
o Solution: Garbage-collection scheme can be used to determine when the last reference has been deleted.

Garbage collection involves two passes:


First pass traverses the entire file-system and marks everything that can be accessed.
A second pass collects everything that is not marked onto a list of free-space.
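The two passes can be sketched as a classic mark-and-sweep over a directory graph that may contain cycles (node names are invented for illustration):

```python
def mark_and_sweep(graph, root):
    """Two-pass collection over a directory graph that may contain
    cycles: mark everything reachable, then collect the rest."""
    marked, stack = set(), [root]
    while stack:                         # first pass: mark
        node = stack.pop()
        if node not in marked:
            marked.add(node)
            stack.extend(graph[node])
    return [n for n in graph if n not in marked]   # second pass: free list

# 'a' and 'b' form a reachable cycle; 'x' and 'y' form an unreachable one.
g = {"root": ["a"], "a": ["b"], "b": ["a"], "x": ["y"], "y": ["x"]}
print(mark_and_sweep(g, "root"))   # ['x', 'y']
```

Note that the cycle x-y keeps both nodes' reference counts non-zero even though neither is reachable, which is exactly why reference counting alone fails and garbage collection is needed.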
