ESE Solved QB: Operating System
Question Bank Solution
Course code: 17YCS502
Course title: Operating System

1. Define the importance of operating system.

An operating system is the most important software that runs on a computer. It controls the computer's memory, processes, and all software and hardware. Several computer programs normally run at the same time, all of which need to access the computer's processor (CPU), memory, and storage.

2. Define the purpose of multitasking in OS.

Multitasking is used to keep all of a computer's resources at work as much of the time as possible. It is controlled by the operating system, which loads programs into the computer for processing and oversees their execution until they are finished.

3. Describe the advantage of process management in OS.

Increases efficiency.
Reduces costs.
Improves quality.
Gives better control over operations.
Reduces turnaround times, and thus the production and delivery times of the service.

4. Describe the role of scheduling in operating system.

Process scheduling allows the OS to allocate CPU time to each process. Another important reason to use a process scheduling system is that it keeps the CPU busy at all times, which gives programs a lower response time.

5. State the problem of mutual exclusion in OS.

The mutual-exclusion solution to this makes the shared resource available only while the process is in a specific code segment called the critical section. It controls access to the shared resource by controlling the execution of that part of the program in which the resource is used.

6. List out the 4 components of deadlock in OS.

Mutual Exclusion (Mutex)
Hold and Wait
No Preemption
Circular Wait

7. Describe main memory swapping in OS.

Swapping is a memory management scheme in which any process can be temporarily swapped from main memory to secondary memory so that the main memory can be made available for other processes.

8. Define the requirement of page replacement in OS.

In an operating system, page replacement refers to a scenario in which a page in main memory must be replaced by a page from secondary memory. Page replacement occurs due to page faults. Page replacement algorithms such as FIFO, Optimal, LRU, LIFO, and Random help the operating system decide which page to replace.

9. List out the four major activities of OS in file management.

The creation and deletion of files.
The creation and deletion of directories.
The support of primitives for manipulating files and directories.
The mapping of files onto secondary storage.

10. Define the security problems with operating systems.

1. Threat: a program that has the potential to harm the system seriously.
2. Attack: a breach of security that allows unauthorized access to a resource.

Operating systems can be targeted with Denial of Service attacks, where the goal is to overwhelm the system's resources, making the system unavailable to legitimate users. This can be achieved through network-based attacks, resource exhaustion, or other methods.

11. Explain Batch Operating System.

Batch processing was very popular in the 1970s. Jobs were executed in batches on a single computer known as a mainframe. Users of batch operating systems do not interact directly with the computer. Each user prepares a job using an offline device such as punch cards and submits it to the computer operator. Jobs with similar requirements are grouped and executed as a group to speed up processing. Once the programmers have left their programs with the operator, the operator sorts the programs with similar needs into batches. These job groups are treated as a batch and executed simultaneously.
DR. RAIS ABDUL HAMID KHAN
A computer system with this operating system performs the following batch processing activities:

1. A job is a single unit that consists of a preset sequence of commands, data, and programs.
2. Jobs are processed in the order in which they are received, i.e., first come, first served.
3. These jobs are stored in memory and executed without the need for manual intervention.
4. When a job completes successfully, the operating system releases its memory.

Types of Batch Operating System

There are mainly two types of batch operating system:

1. Simple Batched System

In the simple batched system there is no direct interaction between the user and the computer. The user creates the job on punch cards and submits it to the computer operator. The operator then makes batches, and the computer executes them sequentially.

2. Multi-programmed Batched System

A multiprogrammed batch system is a computer operating system that uses queues to schedule multiple programs and processes at the same time. This system allows the computer to keep track of all the programs and processes that need to be run, and then run them in the order in which they are supposed to be run.

12. Describe Time-Sharing Operating Systems with its advantages and disadvantages.

An operating system (OS) is basically a collection of software that manages computer hardware resources and provides common services for computer programs. The operating system is a crucial component of the system software in a computer system, and time-sharing operating systems are one of its important types.

Time-sharing enables many people, located at various terminals, to use a particular computer system at the same time. Multitasking, or time-sharing, is a logical extension of multiprogramming: the processor's time is shared among multiple users simultaneously.

The main difference between time-sharing systems and multiprogrammed batch systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.

Multiple jobs are executed by the CPU by switching between them, but the switches occur so frequently that the user receives an immediate response. For example, in transaction processing, the processor executes each user program in a short burst or quantum of computation; if n users are present, each user gets a time quantum. When a user submits a command, the response time is a few seconds at most.

An operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of time. Computer systems that were designed primarily as batch systems have been modified into time-sharing systems.

Advantages of time-sharing operating systems:

It provides the advantage of quick response.
This type of operating system avoids duplication of software.
It reduces CPU idle time.

Disadvantages of time-sharing operating systems:

Time sharing has problems of reliability.
Questions of security and integrity of user programs and data can be raised.
Problems of data communication can occur.

13. Explain Real-Time Operating System.

A real-time operating system (RTOS) is a special-purpose operating system used in computers that have strict time constraints for any job to be performed. It is employed mostly in systems in which the results of the computations are used to influence a process while it is executing. Whenever an event external to the computer occurs, it is communicated to the computer with the help of a sensor used to monitor the event. The sensor produces a signal that is interpreted by the operating system as an interrupt. On receiving an interrupt, the operating system invokes a specific process or a set of processes to serve the interrupt.

This process runs uninterrupted unless a higher-priority interrupt occurs during its execution. Therefore, there must be a strict hierarchy of priority among the interrupts. The interrupt with the highest priority must be allowed to initiate its process, while lower-priority interrupts are kept in a buffer to be handled later. Interrupt management is important in such an operating system.

Real-time systems employ special-purpose operating systems because conventional operating systems do not provide such performance.

Examples of real-time operating systems:

o MTS
o Lynx
o QNX
o VxWorks, etc.

Applications of real-time operating systems (RTOS):

RTOS is used in real-time applications that must work within specific deadlines. Common areas of application are given below.

o Real-time operating systems are used in radar systems.
o Real-time operating systems are used in missile guidance.
o Real-time operating systems are used in online stock trading.
o Real-time operating systems are used in cell phone switching systems.
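The time-quantum idea behind time-sharing (Q12) can be illustrated with a tiny round-robin simulation. This sketch is ours, not part of the original answer; the process names and burst times are made up.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Give each process one quantum in turn; return the completion order.

    bursts: dict mapping process name -> remaining burst time.
    """
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                      # completes in this slice
        else:
            queue.append((name, remaining - quantum))  # preempted, back of queue
    return finished

# Three users' jobs sharing one CPU with a quantum of 3 time units.
print(round_robin({"P1": 5, "P2": 2, "P3": 9}, quantum=3))  # ['P2', 'P1', 'P3']
```

Note how the short job P2 finishes first even though it arrived second: frequent switching is what gives interactive users a fast response.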
Types of Real-Time Operating System

Hard Real-Time operating system:

In hard RTOS, all critical tasks must be completed within the specified time duration, i.e., within the given deadline. Missing the deadline would result in critical failures such as damage to equipment or even loss of human life.

For example, consider the airbags provided by carmakers together with the steering wheel in the driver's seat. When the driver applies the brakes at a particular instant, the airbags inflate and prevent the driver's head from hitting the wheel. Had there been a delay of even milliseconds, it would have resulted in an accident. Similarly, consider stock trading software: if someone wants to sell a particular share, the system must ensure that the command is performed within the given critical time. Otherwise, if the market falls abruptly, the trader may suffer a huge loss.

Soft Real-Time operating system:

Soft RTOS accepts a few delays. In this kind of RTOS there is a deadline assigned for a particular job, but a delay of a small amount of time is acceptable. So deadlines are handled softly by this kind of RTOS.

For example, this type of system is used in online transaction systems and livestock price quotation systems.

Virtual Machines

A virtual machine (VM) is a virtual environment that functions as a virtual computer system with its own CPU, memory, network interface, and storage, created on a physical hardware system.

VMs are isolated from the rest of the system, and multiple VMs can exist on a single piece of hardware, such as a server. That is, a VM is a simulated image of application software and an operating system that is executed on a host computer or server. It has its own operating system and software, and the host facilitates resources to the virtual computers.

Characteristics of virtual machines

The characteristics of virtual machines are as follows:

Multiple OS instances use the same hardware and partition resources between the virtual computers.
Separate security and configuration identity.
Ability to move the virtual computers between the physical host computers as holistically integrated files.

(A diagram in the original shows the difference between a single OS with no VM and multiple OSes with VMs.)

Threads

A thread is a single sequential flow of execution of tasks of a process, so it is also known as a thread of execution or thread of control. Threads execute inside a process, and there can be more than one thread inside a process. Each thread of the same process uses a separate program counter, a stack of activation records, and control blocks. A thread is often referred to as a lightweight process.
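The idea that several threads of one process share the same address space while each has its own flow of control can be sketched as follows (an illustrative sketch; the names are ours):

```python
import threading

# All threads of this process share 'results'; each thread's locals
# (like 'square' below) live on that thread's own stack.
results = []
results_lock = threading.Lock()

def worker(n):
    square = n * n                  # private to this thread
    with results_lock:
        results.append(square)      # shared process memory

threads = [threading.Thread(target=worker, args=(n,)) for n in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9, 16]
```

The list is sorted before printing because thread completion order is not deterministic, which is itself a reminder that threads run as independent flows of control.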
User-level thread

The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.

Examples: Java threads, POSIX threads, etc.

1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not support threads at the kernel level.
3. They are faster and more efficient.
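The "invisible to the kernel" property can be mimicked in miniature: the "threads" below are plain Python generators scheduled entirely by user code, much as a user-level thread library multiplexes its threads onto one process without the OS ever seeing them. This is only an analogy sketch; the names are ours.

```python
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # yield = voluntarily give up the CPU

def run(tasks):
    """A tiny user-space scheduler: round-robins over the generators."""
    trace = []
    while tasks:
        t = tasks.pop(0)
        try:
            trace.append(next(t))
            tasks.append(t)        # not finished: requeue it
        except StopIteration:
            pass                   # task finished
    return trace

print(run([task("A", 2), task("B", 2)]))  # ['A:0', 'B:0', 'A:1', 'B:1']
```

Just as the answer above warns, if any one of these "threads" performed a blocking system call, the whole process (and with it every other task) would stall, since the kernel only sees a single thread of execution.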
ln – create links (shortcuts) to other files
cat – display file contents on the terminal
clear – clear the terminal
ps – display the processes running in the terminal
man – access the manual for any Linux command
grep – search for a specific string in an output
echo – display a line of text on the terminal
wget – download files from the internet
whoami – display the username of the current user
sort – sort the contents of a file
cal – view a calendar in the terminal
whereis – view the location of the command typed after this command
df – check the details of the file system
wc – count the lines, words, and characters in a file using different options

17. Describe shell programming. What are the advantages and disadvantages of Shell programming?

Shell programming refers to the creation and execution of scripts using a shell, which is a command-line interpreter or user interface for operating systems. The shell acts as an interface between the user and the operating system, allowing users to interact with the system by typing commands.

Here are some key aspects of shell programming, along with its advantages and disadvantages:

Features of Shell Programming:

1. Scripting Language: Shell scripts are typically written in scripting languages such as Bash (Bourne Again SHell), sh (Bourne Shell), or other shell languages. These scripts contain a series of commands that can be executed sequentially or conditionally.

2. Automation: Shell scripts can automate repetitive operations, making it easier for users to perform tasks without manual intervention.

3. Command Execution: Shell scripts can execute system commands, manipulate files and directories, and perform various system-level operations. This makes them powerful for system administrators and developers.

4. Variables and Control Structures: Shell programming supports variables and control structures like loops and conditional statements, enabling users to write more complex and dynamic scripts.

Advantages of Shell Programming:

1. Ease of Use: Shell programming provides a simple and easy-to-learn syntax. Users can quickly write scripts to perform tasks without the need for compiling or linking.

2. Rapid Development: Shell scripts can be written and executed quickly, facilitating rapid development and prototyping of solutions.

3. Compatibility: Shell scripts are generally portable across different Unix-like operating systems. This means that scripts written for one shell can often be executed on other systems with minimal modifications.

4. System Administration: Shell scripts are widely used in system administration tasks, allowing administrators to automate routine operations and manage system configurations efficiently.

Disadvantages of Shell Programming:

1. Performance: Shell scripts may not be as efficient as programs written in compiled languages. They are interpreted at runtime, which can result in slower execution compared to compiled languages.

2. Limited Features: Shell scripting may lack certain advanced features found in more powerful programming languages. This can be a limitation for complex software development tasks.

3. Security Concerns: Shell scripts may pose security risks if not written carefully. Poorly written scripts may inadvertently expose vulnerabilities or allow unauthorized access.

4. Debugging Challenges: Debugging shell scripts can be challenging, especially for complex scripts. Limited debugging tools are available compared to integrated development environments for traditional programming languages.

In summary, shell programming is a powerful tool for automating tasks and managing system configurations, but it has its limitations, particularly in terms of performance and features. It is best suited for tasks such as system administration and automation, where its simplicity and ease of use are advantageous.

18. Explain Process Control Block.

A Process Control Block (PCB), also known as a Task Control Block or task struct, is a data structure used by operating systems to manage information about a running process. The PCB is a crucial component of process management and is responsible for storing various details related to a process, allowing the operating system to manage and control processes effectively. Each active process in the system has its own PCB.
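As an illustrative sketch (not from the original answer; the field names are assumptions modeled on typical PCB contents), a PCB can be pictured as a small record, with a context switch saving one process's registers and loading another's:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                                   # unique process identifier
    state: str = "ready"                       # running / ready / blocked / terminated
    program_counter: int = 0                   # next instruction address
    registers: dict = field(default_factory=dict)
    priority: int = 0
    open_files: list = field(default_factory=list)

def context_switch(old, new, cpu_registers):
    """Save the CPU registers into the old PCB; load the new PCB's registers."""
    old.registers = dict(cpu_registers)
    old.state = "ready"
    new.state = "running"
    return dict(new.registers)

p1 = PCB(pid=1, state="running", registers={"acc": 7})
p2 = PCB(pid=2, registers={"acc": 42})
cpu = context_switch(p1, p2, {"acc": 7})
print(cpu)  # {'acc': 42}: p2's saved register state is now on the CPU
```

A real kernel's task struct is far more elaborate, but the save-and-restore pattern is the same one described for context switching below.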
The information stored in a Process Control Block typically includes:

1. Process State: Indicates the current state of the process (e.g., running, ready, blocked, terminated).

2. Program Counter (PC): The address of the next instruction to be executed by the process.

3. CPU Registers: The values of the CPU registers for the process. This includes the contents of general-purpose registers, the program counter, and other relevant registers.

4. Process ID (PID): A unique identifier assigned to each process to distinguish it from other processes in the system.

5. Priority: The priority level of the process, which may be used by the scheduler to determine the order in which processes are executed.

6. Memory Management Information: Information about the process's memory space, including base and limit registers, page tables, and other memory-related details.

7. Open Files: A list of files that the process has opened during its execution.

10. I/O Status Information: Information about the I/O devices the process is using, including open files, the status of I/O operations, etc.

When a context switch occurs (i.e., the operating system switches from executing one process to another), the contents of the CPU registers are saved into the PCB of the currently running process, and the PCB of the new process to be executed is loaded into the CPU registers. This allows the operating system to switch seamlessly between processes.

The Process Control Block plays a crucial role in process scheduling, resource management, and overall system stability. It ensures that the state of each process is appropriately maintained and that the operating system can effectively manage multiple processes running concurrently on a computer system.

19. Illustrate the Architecture of Operating System.

The architecture of an operating system refers to its overall structure and the way its components interact to perform various functions. The architecture of an operating system can be complex, but here is a high-level illustration of the key components commonly found in many operating systems:

1. Kernel:
- The kernel is the core component of the operating system.
- It provides essential services for all other parts of the operating system.
- It manages system resources, such as the CPU, memory, and I/O devices.
- The kernel is responsible for handling system calls and managing the overall system state.

2. Device Drivers:
- Device drivers are specialized modules that enable the operating system to communicate with hardware devices.
- They provide an abstraction layer between the hardware and the rest of the operating system, allowing software to interact with devices without needing to understand their low-level details.

3. System Libraries:
- System libraries contain reusable, standardized code that applications can use to perform common tasks.
- They provide an interface between the applications and the operating system, making it easier for developers to create software without having to deal with low-level details.

6. Process Management:
- Process management involves creating, scheduling, and terminating processes.
- The operating system manages the execution of multiple processes, ensuring they share resources efficiently and run concurrently.

7. Memory Management:
- Memory management is responsible for allocating and deallocating memory for processes.
- It includes mechanisms such as virtual memory, which allows processes to use more memory than is physically available by swapping data between RAM and storage.

8. I/O Management:
- I/O management handles input and output operations, allowing processes to communicate with external devices.
- It includes device drivers, interrupt handling, and buffering mechanisms to efficiently manage data transfer between the CPU and peripherals.

First-Come, First-Served (FCFS) and Shortest Job First (SJF) are two fundamental scheduling algorithms used in operating systems to manage the execution of processes. They differ in their approach to prioritizing processes and determining the order in which they are run.

First-Come, First-Served (FCFS) Scheduling Algorithm

FCFS, also known as the FIFO (First In, First Out) algorithm, is a non-preemptive scheduling algorithm that prioritizes processes based on their arrival time. The process that arrives first is the first to be executed, and this order is maintained until all processes have finished.

Shortest Job First (SJF) Scheduling Algorithm

Shortest Job First (SJF) is a scheduling algorithm used in operating systems for managing the execution of processes. The key idea behind SJF is to prioritize processes based on their burst time, the time it takes for a process to execute from start to finish. The process with the shortest burst time is scheduled first.
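The difference between the two policies can be seen by computing waiting times on one workload. This is an illustrative sketch (the burst times are made up; all processes are assumed to arrive at time 0):

```python
def waiting_times(bursts):
    """Waiting time of each job when run non-preemptively in the given order."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # a job waits while the jobs before it run
        elapsed += b
    return waits

bursts = [6, 8, 7, 3]                  # burst times in arrival order P1..P4

fcfs = waiting_times(bursts)           # FCFS: run in arrival order
sjf = waiting_times(sorted(bursts))    # SJF: run shortest burst first

print(sum(fcfs) / len(fcfs))  # 10.25  (waits 0, 6, 14, 21)
print(sum(sjf) / len(sjf))    # 7.0    (waits 0, 3, 9, 16)
```

Running the shortest job first always minimizes the average waiting time for a fixed set of ready jobs, which is why SJF beats FCFS here; its practical weakness is that burst times must be known or estimated in advance.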
21. Explain the classical problems for process synchronization in OS.

Synchronization Problems

These problems are used for testing nearly every newly proposed synchronization scheme. The following problems of synchronization are considered classical problems:

1. Bounded-Buffer (or Producer-Consumer) Problem
2. Dining-Philosophers Problem
3. Readers and Writers Problem
4. Sleeping Barber Problem

These are summarized below.

Bounded-Buffer (or Producer-Consumer) Problem

The bounded-buffer problem is also called the producer-consumer problem; it is the generalized form of the producer-consumer problem. The solution to this problem is to create two counting semaphores, "full" and "empty", to keep track of the current number of full and empty buffers, respectively. Producers produce a product and consumers consume the product, but both use one of the buffers each time.

Dining-Philosophers Problem

The dining-philosophers problem states that K philosophers are seated around a circular table with one chopstick between each pair of philosophers. A philosopher may eat if he can pick up the two chopsticks adjacent to him. One chopstick may be picked up by either of its adjacent philosophers, but not both. This problem involves the allocation of limited resources to a group of processes in a deadlock-free and starvation-free manner.

Readers and Writers Problem

Suppose that a database is to be shared among several concurrent processes. Some of these processes may want only to read the database, whereas others may want to update (that is, to read and write) the database. We distinguish between these two types of processes by referring to the former as readers and to the latter as writers. In OS terms, this situation is called the readers-writers problem. Problem parameters:

One set of data is shared among a number of processes.
Once a writer is ready, it performs its write. Only one writer may write at a time.
If a process is writing, no other process can read the data.
If at least one reader is reading, no other process can write.
Readers may not write; they only read.

Sleeping Barber Problem

Consider a barber shop with one barber, one barber chair, and N chairs to wait in. When there are no customers, the barber goes to sleep in the barber chair and must be woken when a customer comes in. When the barber is cutting hair, new customers take empty seats to wait, or leave if there is no vacancy. This is basically the sleeping barber problem.

22. Discuss the mechanism of process synchronization in OS.

Process synchronization is a crucial concept in operating systems that involves coordinating the execution of multiple processes to ensure they share resources in a controlled and orderly manner. The primary goal is to prevent data inconsistency and avoid race conditions, where the outcome of concurrent execution depends on the timing of events. Below are some key mechanisms used for process synchronization:

1. Mutual Exclusion:
- Semaphore: Semaphores are integer variables used to control access to critical sections. They have two standard operations: wait (decrement) and signal (increment). A semaphore can be used to ensure that only one process at a time can execute a critical section.
- Mutex (Mutual Exclusion): Mutexes are binary semaphores that allow or restrict access to a critical section. A process can lock the mutex before entering the critical section and unlock it when leaving, ensuring exclusive access.

2. Hardware Instructions:
- Test-and-Set (TAS) and Compare-and-Swap (CAS): These atomic hardware instructions provide a way to implement mutual exclusion. TAS sets a variable and returns its previous value, while CAS updates a variable only if its current value matches an expected value.

3. Monitors:
- A monitor is a high-level abstraction that encapsulates shared data and the procedures that operate on that data. Only one process can execute a monitor procedure at a time, ensuring mutual exclusion.
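The counting-semaphore solution to the bounded-buffer problem of Q21 ("full" counts filled slots, "empty" counts free slots, a mutex protects the buffer itself) can be sketched with Python's threading module. The names are ours; this is an illustration, not the original answer's code.

```python
import threading
from collections import deque

CAPACITY = 3
buffer = deque()
empty = threading.Semaphore(CAPACITY)  # free slots available to the producer
full = threading.Semaphore(0)          # filled slots available to the consumer
mutex = threading.Lock()               # protects the buffer itself

consumed = []

def producer(items):
    for item in items:
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()           # signal: one more filled slot

def consumer(count):
    for _ in range(count):
        full.acquire()           # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal: one more free slot

p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start()
p.join(); c.join()

print(consumed)  # [0, 1, 2, ..., 9]: all items delivered, in order
```

The producer blocks whenever all three slots are full and the consumer blocks whenever the buffer is empty, so neither can overrun the other; the semaphores carry exactly the "full"/"empty" bookkeeping described above.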
4. Condition Variables:
- Condition variables are used to block a process until a certain condition is true. They are often associated with a mutex to ensure that the condition check and the subsequent operation are atomic.

5. Semaphores:
- Besides mutual exclusion, semaphores can be used for process synchronization. Counting semaphores allow a specified number of processes to access a resource simultaneously.

6. Message Passing:
- Processes can communicate and synchronize using message passing. Synchronization is achieved by sending and receiving messages at appropriate points in the execution.

7. Deadlock Handling:
- Deadlocks can occur when two or more processes are unable to proceed because each is waiting for the other to release a resource. Techniques like deadlock detection, prevention, and recovery are employed to handle deadlocks.

8. Barrier Synchronization:
- Barriers are synchronization mechanisms that allow multiple processes to wait for each other at a predefined point in the program. Once all processes reach the barrier, they are released simultaneously.

9. Readers-Writers Problem:
- In situations where multiple processes need to access shared data, the readers-writers problem arises. Solutions involve providing exclusive access to writers while allowing multiple readers to access the data concurrently.

Effective process synchronization is essential for maintaining the integrity of shared data and preventing race conditions. The choice of synchronization mechanism depends on the specific requirements and characteristics of the system and the processes involved.

23. Describe the 4 types of deadlock in OS.

Deadlock is a situation in operating systems where two or more processes cannot proceed because each is waiting for the other to release a resource. There are four necessary conditions for deadlock, and each type of deadlock is associated with one of these conditions:

1. Mutual Exclusion:
- Condition: At least one resource must be held in a non-shareable mode, meaning that only one process can use the resource at a time.
- Deadlock Type: Mutual Exclusion Deadlock
- Description: Processes holding resources prevent others from accessing them.

2. Hold and Wait:
- Condition: A process must be holding at least one resource and waiting to acquire additional resources held by other processes.
- Deadlock Type: Hold and Wait Deadlock
- Description: Processes hold resources while waiting for others, without releasing the resources they already hold.

3. No Preemption:
- Condition: Resources cannot be preempted (taken away) from a process; they must be explicitly released by the process holding them.
- Deadlock Type: No Preemption Deadlock
- Description: Held resources cannot be reclaimed by the system, so blocked processes cannot be freed by force.

4. Circular Wait:
- Condition: There must be a circular chain of two or more processes, each waiting for a resource held by the next one in the chain.
- Deadlock Type: Circular Wait Deadlock
- Description: A cycle of waiting occurs among processes, with each waiting for a resource held by the next.

These four conditions together are known as the Coffman conditions, and they are necessary for the occurrence of a deadlock. If all four conditions are present simultaneously, a deadlock can occur.

Preventing deadlocks typically involves addressing one or more of these conditions. Common strategies include resource allocation policies, deadlock detection algorithms, and deadlock recovery mechanisms. Operating systems use techniques such as resource allocation graphs and the banker's algorithm to manage resources and avoid or resolve deadlocks.

24. Explain the causes of fragmentation in OS.

Fragmentation in operating systems refers to the phenomenon where the memory or storage space becomes divided into small, non-contiguous blocks, making it challenging to allocate large contiguous blocks of memory for processes or files. There are two main types of fragmentation: external fragmentation and internal fragmentation. Here are the causes of each:

1. External Fragmentation:
- Definition: External fragmentation occurs when free memory or storage is scattered throughout the system, but there is not enough contiguous free space to satisfy a memory or storage request.
- Causes:
- Allocation and Deallocation of Variable-Sized Blocks: As processes or files are allocated and deallocated, memory or storage becomes divided into small, non-contiguous chunks. Over time, these fragments can accumulate.
- Variable Partition Sizes: If the memory or storage is divided into variable-sized partitions, the remaining small gaps between partitions can lead to external fragmentation.

2. Internal Fragmentation:
- Definition: Internal fragmentation occurs when a process or file is allocated more memory or storage space than it actually needs, and the excess space is wasted.
- Causes:
- Fixed Partition Sizes: If the memory or storage is divided into fixed-sized partitions, a process may be allocated a partition larger than its actual size. The difference between the allocated space and the space actually needed is internal fragmentation.
- Memory Allocation Algorithms: Certain memory allocation algorithms, like first-fit or best-fit, may lead to internal fragmentation. For example, if the smallest available block that satisfies a request is larger than the requested size, internal fragmentation occurs.

Examples:

- External Fragmentation Example: Suppose you have three free memory blocks of sizes 20 KB, 15 KB, and 25 KB. A request for 30 KB of contiguous memory cannot be satisfied, even though the total free memory is 60 KB.

- Internal Fragmentation Example: In a fixed-size partitioning system, if each partition is 100 KB and a process needs only 70 KB of memory, it will be allocated a whole partition, resulting in 30 KB of internal fragmentation.

Impact:

- Performance Degradation: Fragmentation can lead to inefficient use of memory or storage, reducing the overall performance of the system.

- Increased Overhead: Memory management algorithms and techniques to deal with fragmentation can introduce additional overhead.

Here's an overview of how the buddy system works:

1. Memory Partitioning:
- The entire available memory is initially considered a single, large block.
- This block is then recursively split into smaller blocks, each being a power of 2 in size.

2. Power-of-2 Blocks:
- The block sizes are typically 2^0, 2^1, 2^2, 2^3, and so on.
- Each block is assigned an address and labelled with its size.

3. Buddy Assignment:
- When a process requests a specific amount of memory, the system allocates a block that is at least as large as the requested size.
- If the allocated block is larger than needed, it is split into two buddies of half the size.
- The system keeps track of which blocks are currently allocated and which are free.

4. Merging Buddies:
- When a process releases memory, the system checks whether the adjacent buddy block is also free; if so, the two buddies are merged back into a single larger block.

Advantages:

1. Reduced Fragmentation: The buddy system helps minimize external fragmentation because it ensures that free blocks are powers of 2, making it more likely to find a block that matches the requested size.

2. Efficient Splitting and Merging: The splitting and merging of memory blocks are straightforward and efficient in the buddy system, as they involve dividing or combining blocks of the same size.

3. Simplicity: The buddy system is relatively simple to implement compared to some other memory allocation algorithms.

Disadvantages and Considerations:

1. Internal Fragmentation: The buddy system may still have some internal fragmentation, especially when a process is allocated a block that is larger than its exact size.

3. Larger blocks may be allocated even if a smaller block could satisfy a request, leading to potential memory wastage.

Despite these considerations, the buddy system is a practical and efficient memory allocation strategy, particularly in scenarios where power-of-2 block sizes align well with the nature of memory requests in the system.
deal with fragmentation can introduce additional overhead. 26. Describe the purpose of the working set model.
block is also free and of the same size (i.e., the buddy).
- If both blocks are free and have the same size, they are merged back
To mitigate fragmentation, operating systems may use techniques such as
into a larger block.
compaction (rearranging memory to create contiguous blocks) or dynamic
27. Explain different Accessing Methods of a File.
memory allocation algorithms that try to minimize fragmentation, such as
the buddy system or memory paging. 5. Allocation and Deallocation:
- The buddy system ensures that memory is allocated and deallocated in File access methods refer to the techniques or mechanisms used to read
powers of 2, making it efficient for splitting and merging blocks. and write data to files in a computer system. Different methods are
25. Explain the buddy system of memory allocation in OS.
- When a process requests memory, the system looks for the smallest employed based on the requirements of the application and the underlying
available block that can accommodate the request. file system. Here are some common file access methods:
The buddy system is a memory allocation technique used in operating
systems to manage dynamic memory allocation in a way that minimizes
Advantages of the Buddy System: 1. Sequential Access:
fragmentation. It works by dividing the available memory into fixed-size
- In sequential access, data is read or written in a linear fashion from the
blocks that are powers of 2. Each block is then treated as a "buddy" to
beginning to the end of the file.
another block of the same size.
- Reading or writing operations must occur in sequential order, and you cannot directly access arbitrary locations within the file.
- Example: Reading a text file line by line.

2. Random Access:
- Random access allows direct access to any specific location within the file.
- Each block or record within the file has a unique address or index, and you can jump to any location without going through the entire file sequentially.
- Random access is suitable for applications that need quick and direct access to specific pieces of data.
- Example: Accessing data in a database file using indexing.

3. Indexed Sequential Access Method (ISAM):
- ISAM combines elements of both sequential and random access.
- An index is maintained to allow direct access to specific records, while maintaining the overall sequential order.
- This method aims to provide the speed of random access with the efficiency of sequential access.
- Example: A file system that maintains an index for quick access but still reads data sequentially.

4. Direct Access File:
- In direct access files, data can be accessed directly by specifying its logical block or record number.
- This method is particularly efficient for large files where jumping to a specific location is crucial.
- Direct access requires the support of file systems that can map logical addresses to physical disk locations.
- Example: Reading a specific page in a book stored as a file on disk.

5. Hashed File Access:
- Hashing is a method where a hash function is used to calculate a location or address in the file based on the data being stored.
- Hashing is useful for quick retrieval of data if the key is known.
- It is commonly used in database systems for indexing.
- Example: Accessing records in a database using a hashed index.

6. Memory-Mapped File Access:
- Memory-mapped file access allows a file to be mapped into the virtual memory of a process.
- The file can be accessed as if it were an array in memory, allowing direct manipulation.
- Changes made to the memory-mapped region are reflected in the file and vice versa.
- Example: Using memory-mapped files for efficient I/O operations in programming.

The choice of file access method depends on factors such as the nature of the data, the type of application, and the performance requirements. Different file systems and programming languages provide support for various access methods.

28. Discuss the various disk scheduling algorithms in operating system.

Disk scheduling algorithms are used in operating systems to determine the order in which disk I/O requests are serviced. The primary goal is to optimize the use of disk resources and reduce the overall response time for I/O operations. Here are some common disk scheduling algorithms:

1. First-Come-First-Serve (FCFS):
- Principle: Requests are serviced in the order they arrive in the disk queue.
- Advantage: Simple and easy to implement.
- Disadvantage: Can lead to poor performance, especially when there is a mix of short and long I/O requests (the "convoy effect").

2. Shortest Seek Time First (SSTF):
- Principle: The request with the shortest seek time (distance between the current head position and the track of the request) is serviced first.
- Advantage: Reduces the total seek time, improving disk performance.
- Disadvantage: May cause starvation for requests located far from the disk arm's current position.

3. SCAN (Elevator) Algorithm:
- Principle: The disk arm moves in one direction (up or down) servicing requests until it reaches the end of the disk, at which point it reverses direction.
- Advantage: Fairly simple and prevents starvation for requests at one end of the disk.
- Disadvantage: Requests at the two ends of the disk may experience higher response times, since the arm passes the middle tracks more frequently.
4. C-SCAN (Circular SCAN) Algorithm:
- Principle: Similar to SCAN, but the disk arm only goes in one direction (e.g., from the outermost track to the innermost track), and upon reaching the end, it jumps to the other end without servicing requests in between.
- Advantage: Reduces arm movement, providing a more predictable service time.
- Disadvantage: May result in increased response time for requests near the disk's ends.

5. LOOK Algorithm:
- Principle: Similar to SCAN but does not go all the way to the end of the disk. It reverses direction when there are no more requests in the current direction.
- Advantage: Reduces response time for requests close to the current position of the disk arm.
- Disadvantage: Similar to SCAN, it may cause starvation for requests at one end of the disk.

6. C-LOOK (Circular LOOK) Algorithm:
- Principle: Similar to LOOK, but instead of reversing, the arm jumps from the last request in one direction back to the first request at the other end, without servicing requests during the jump.
- Advantage: Reduces arm movement and provides a more predictable service time.
- Disadvantage: Similar to C-SCAN, it may result in increased response time for requests near the disk's ends.

The choice of a disk scheduling algorithm depends on the specific characteristics of the I/O workload and the desired performance goals. Different algorithms offer different trade-offs in terms of fairness, throughput, and response time.

29. Define the functions of OS.

The operating system (OS) serves as a crucial software layer that facilitates communication and interaction between computer hardware and software applications. It performs a variety of essential functions to ensure efficient and secure operation of a computer system. Here are some of the primary functions of an operating system:

1. Process Management:
- Process Creation and Termination: The OS is responsible for creating, scheduling, and terminating processes, which are instances of running programs.
- Process Scheduling: The OS manages the execution of multiple processes, determining which process gets access to the CPU at any given time.

2. Memory Management:
- Memory Allocation: The OS allocates and deallocates memory for processes, ensuring efficient utilization of available memory.
- Virtual Memory: Many operating systems support virtual memory, allowing processes to use more memory than physically available by swapping data between RAM and storage.

3. File System Management:
- File Creation, Deletion, and Manipulation: The OS provides functions to create, delete, and manipulate files and directories.
- File Access Control: It manages access to files, ensuring proper security and permissions.

4. Device Management:
- Device Drivers: The OS interacts with hardware devices through device drivers, enabling communication between applications and hardware components.
- Device Allocation: It allocates and deallocates resources to devices, resolving conflicts and ensuring fair access.

5. Security and Protection:
- User Authentication: The OS authenticates users and controls access to system resources.
- Data Encryption: It may provide encryption mechanisms to protect sensitive data.
- Firewall and Antivirus Integration: Some operating systems include security features like firewalls and antivirus tools to protect against external threats.

6. User Interface:
- Command-Line Interface (CLI) or Graphical User Interface (GUI): The OS provides a user interface for interaction, allowing users to communicate with the system and run applications.

7. Networking:
- Network Protocols and Services: The OS supports networking protocols and services, enabling communication between computers on a network.
- Network Configuration: It manages network configurations, including IP addresses, routing tables, and network settings.

8. Error Handling:
- Fault Tolerance: The OS may include features to detect and recover from errors, ensuring system stability.
- Error Logging: It logs system errors and events for diagnostics and troubleshooting.

9. System Calls and APIs:
- Application Programming Interface (API): The OS provides an interface for applications through system calls and APIs, allowing developers to access OS services.

10. Task Management:
- Task Synchronization and Communication: The OS facilitates communication and synchronization between processes, preventing conflicts and ensuring data consistency.
- Multitasking: It allows multiple tasks or processes to run concurrently, sharing the CPU.

These functions collectively ensure that a computer system operates smoothly, efficiently manages resources, provides a secure environment, and supports the execution of various applications and services. The specific features and capabilities may vary across different operating systems.

30. Define monolithic kernel.

A monolithic kernel is a type of operating system kernel architecture in which the entire operating system, including its core functions, device drivers, and system call interface, executes in kernel space. In a monolithic kernel, all operating system services run as a single, large program in privileged mode, with direct access to the underlying hardware.

31. Define thread in OS.

A thread is a single sequential flow of execution within a process, so it is also known as a thread of execution or thread of control. A process can contain more than one thread, and the threads of a process share its resources while executing independently.

32. List out the main objective of scheduling in OS.

Scheduling in operating systems involves determining the order in which processes are executed by the CPU. The main objectives of scheduling are to optimize system performance, improve resource utilization, and provide a fair and efficient allocation of resources. Here are the main objectives of scheduling:

1. CPU Utilization:
- Objective: Keep the CPU as busy as possible to maximize its utilization.
- Rationale: A highly utilized CPU ensures that processes are executed efficiently, minimizing idle time.

2. Throughput:
- Objective: Maximize the number of processes that are completed per unit of time.
- Rationale: Increased throughput means more tasks are finished in a given timeframe, improving the overall system performance.

3. Turnaround Time:
- Objective: Minimize the time taken to execute a process from its submission to its completion.
- Rationale: Short turnaround time indicates quicker response and better user satisfaction.

4. Waiting Time:
- Objective: Minimize the total time processes spend waiting in the ready queue.
- Rationale: Reducing waiting time enhances system responsiveness and efficiency.

5. Response Time:
- Objective: Minimize the time it takes for a system to respond to a user's input.
- Rationale: Fast response times improve user experience and interactivity.

6. Fairness:
- Objective: Provide fair access to CPU resources for all processes.
- Rationale: Ensures that no process is unfairly starved of CPU time, promoting equitable resource allocation.

7. Predictability:
- Objective: Achieve consistent and predictable performance.
- Rationale: Predictable behaviour aids in system management and allows users and applications to plan their activities more effectively.

8. Balancing Workload:
- Objective: Distribute CPU workload evenly among processes and system resources.
- Rationale: Balancing workload prevents certain processes from monopolizing resources, leading to better overall system performance.

9. Adaptability and Responsiveness:
- Objective: Adjust the scheduling strategy dynamically based on system load and changes in the workload.
- Rationale: Ensures the system adapts to varying demands and remains responsive to the changing environment.

10. Resource Utilization:
- Objective: Optimize the use of system resources, not just the CPU.
- Rationale: Efficiently utilize other resources like memory, I/O devices, and network to avoid bottlenecks.

33. Define two basic operations of message passing.

Message passing is a communication method used in distributed systems, parallel computing, and inter-process communication within operating systems. Two fundamental operations associated with message passing are sending a message and receiving a message.

1. Sending a Message:
- Definition: Sending a message involves a process or a component generating a message and transmitting it to another process or component.
- Process: The sending process creates a message containing information, data, or instructions to be communicated.
- Transmission: The message is then transmitted through a communication channel, which could be shared memory, a network, or any other communication medium.
- Destination: The message is delivered to the intended recipient process or component.
2. Receiving a Message:
- Definition: Receiving a message involves a process or a component waiting for and accepting a message from another process or component.
- Waiting: The receiving process enters a state of waiting or polling until a message arrives.
- Arrival: Upon the arrival of a message, the receiving process retrieves the message from the communication channel or message queue.
- Processing: The receiving process then processes the content of the received message, using the information, data, or instructions as needed.

These two basic operations of sending and receiving messages form the foundation of message passing as a communication paradigm. Message passing enables communication and coordination between independent processes, allowing them to exchange information and synchronize their activities. This communication method is essential for building distributed and concurrent systems, where multiple processes or components need to collaborate to achieve a common goal. The effectiveness of message passing often depends on the underlying communication infrastructure and the mechanisms used to ensure reliable and orderly communication between processes.

34. Describe the purpose of memory partitioning.

Memory partitioning is a memory management technique used in operating systems to divide the available memory space into multiple partitions, each of which can be allocated to different processes. The purpose of memory partitioning is to efficiently manage the allocation of memory to processes, allowing them to coexist in the computer's memory. Here are the main purposes of memory partitioning:

1. Multi-Programming and Multi-Processing:
- Purpose: Enable the concurrent execution of multiple processes or programs.
- Explanation: Memory partitioning allows multiple processes to reside in memory simultaneously. Each process is allocated a separate partition, and the operating system schedules the execution of these processes, making the computer appear as if it can run multiple tasks at the same time.

2. Isolation of Processes:
- Purpose: Prevent interference between processes and ensure that each process operates independently.
- Explanation: By assigning separate memory partitions to different processes, memory partitioning helps isolate the address spaces of individual processes. This isolation ensures that a process cannot unintentionally access or modify the memory used by another process.

3. Protection:
- Purpose: Provide a level of protection for each process's memory space.
- Explanation: Memory partitioning allows the operating system to set access permissions and boundaries for each partition. This helps prevent unauthorized access to or modification of memory regions, enhancing the security and stability of the system.

4. Efficient Utilization of Memory:
- Purpose: Optimize the use of available memory resources.
- Explanation: By allocating memory in fixed or variable-sized partitions, the operating system can use memory more efficiently. This prevents fragmentation and ensures that the available memory is used effectively to accommodate multiple processes.

5. Simplified Memory Allocation and Deallocation:
- Purpose: Simplify the management of memory allocation and deallocation.
- Explanation: Memory partitioning simplifies the process of allocating and deallocating memory for processes. When a process starts, it is allocated a partition; when it completes or is terminated, its partition is deallocated, making it available for other processes.

6. Dynamic Loading and Linking:
- Purpose: Support dynamic loading and linking of programs.
- Explanation: Dynamic loading allows a program to be loaded into memory only when it is needed, and dynamic linking allows different programs to share common code. Memory partitioning facilitates these features by providing separate memory spaces for different programs, making it easier to load and link them at runtime.

7. Ease of Implementation:
- Purpose: Provide a straightforward and practical approach to memory management.
- Explanation: Memory partitioning is a relatively simple technique compared to other memory management methods. It is easy to implement and manage, making it suitable for various systems, especially those with fixed-size partitions.

Memory partitioning can be implemented using different schemes, such as fixed partitioning, variable partitioning, and dynamic partitioning, each with its own advantages and limitations. The choice of a specific partitioning scheme depends on the characteristics and requirements of the operating system and the applications it supports.

35. Define demand paging in OS.

Demand paging is a memory management technique used in virtual memory systems to improve memory usage and system performance: pages are brought into main memory only when they are requested or needed by the CPU. Rather than loading an entire program into memory at the start, the operating system loads only the necessary pages of the program at runtime.

36. List out the most important issues in the design of a real time operating system.

Designing a real-time operating system (RTOS) involves addressing specific challenges and requirements unique to systems that must respond to events within stringent timing constraints. Here are some of the most important issues in the design of a real-time operating system:
= (45-40) + (40-10) + (10-0) + (50-0) + (65-50) + (75-65) + (90-75) + (135-90) + (170-135)

= 5 + 30 + 10 + 50 + 15 + 10 + 15 + 45 + 35

= 215

FCFS stands for First-Come-First-Serve. It is the simplest of the disk scheduling algorithms: the queued I/O requests are serviced in the order in which they arrive in the disk queue, so the process that issues a request first receives service first. The queue is managed as a FIFO queue.

Example:

Let's take a disk with 180 tracks (0-179) and the disk queue having input/output requests in the following order: 75, 90, 40, 135, 50, 170, 65, 10. The initial head position of the Read/Write head is 45. Find the total number of track movements of the Read/Write head using the FCFS algorithm.

Solution:

The initial head position is 45. Total head movements

= (75-45) + (90-75) + (90-40) + (135-40) + (135-50) + (170-50) + (170-65) + (65-10)

= 30 + 15 + 50 + 95 + 85 + 120 + 105 + 55

= 555

39. What is blocking and buffering in operating system?

Blocking and buffering are concepts related to input/output (I/O) operations in operating systems. They are mechanisms used to manage the transfer of data between processes and I/O devices efficiently.

1. Blocking:
- Definition: Blocking refers to the state in which a process is temporarily stopped or "blocked" while waiting for an I/O operation to complete.
- Explanation: When a process initiates an I/O operation, it may have to wait until the operation is finished before continuing its execution. During this waiting period, the process is said to be blocked. Blocking is typical in synchronous I/O operations, where the process directly waits for the I/O to complete before proceeding.

2. Buffering:
- Definition: Buffering involves temporarily storing data in a buffer before it is consumed or processed by a program or transmitted to an I/O device.
- Explanation: Instead of directly transferring data between processes and I/O devices, a buffer is used as an intermediate storage area. This allows for more efficient handling of data transfers, especially when there is a difference in data transfer rates between the producer (e.g., a process) and the consumer (e.g., an I/O device). Buffering helps decouple the production and consumption rates, reducing the likelihood of blocking due to speed mismatches.

Key Differences:

- Timing:
- Blocking: Occurs during the actual I/O operation, where the process waits for the operation to complete.
- Buffering: Involves storing data in an intermediate buffer, often before or after the I/O operation.

- Concurrency:
- Blocking: Can lead to inefficiencies, as the process is halted until the I/O operation finishes.
- Buffering: Supports concurrent execution, allowing processes to continue working while data is being transferred in the background.

- Data Transfer Rates:
- Blocking: Can be a concern when the producer and consumer have different data transfer rates, potentially leading to idle time.
- Buffering: Helps address speed mismatches by temporarily storing data, allowing for more flexible data transfer rates.

- Example:
- Blocking: A process reading from a file may be blocked until the requested data is read from the storage device.
- Buffering: A printer spooler may use buffering to store print jobs temporarily, allowing the printer to continue processing jobs even if the printing speed is slower than the data generation speed.

In many cases, a combination of blocking and buffering may be employed to optimize I/O operations. Buffering can help mitigate the impact of blocking by allowing processes to proceed while data is transferred in the background. The choice of these mechanisms depends on the specific requirements and characteristics of the I/O operations in a given system.
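The seek-distance arithmetic in the disk-scheduling example above can be cross-checked with a short sketch. This is an illustrative addition, not part of the original solution; the 215 total that precedes the FCFS example is consistent with SCAN servicing the same queue with the head first moving toward track 0 (an inference, since that problem statement is not included here), which the second function assumes.

```python
def fcfs_seek(head, requests):
    """Total head movement when requests are serviced in arrival order (FCFS)."""
    total = 0
    for track in requests:
        total += abs(track - head)  # seek distance to the next queued request
        head = track
    return total

def scan_seek_toward_zero(head, requests):
    """Total head movement for SCAN when the head first sweeps down to track 0,
    then reverses and sweeps up to the highest requested track.
    Assumes at least one request lies above the starting head position."""
    return head + max(requests)  # head -> 0 costs `head` tracks; 0 -> highest costs max(requests)

queue = [75, 90, 40, 135, 50, 170, 65, 10]
print(fcfs_seek(45, queue))              # 555, matching the FCFS solution above
print(scan_seek_toward_zero(45, queue))  # 215, matching the earlier total
```

The FCFS total simply sums the absolute jumps between consecutive queued tracks, which is why a mix of near and far requests inflates it compared to SCAN's single sweep in each direction.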
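The blocking-versus-buffering distinction discussed above can be illustrated with a minimal sketch using Python's standard library (an illustrative addition, not part of the original answer): a bounded queue acts as the buffer between a producer and a slower consumer, so the producer blocks only when the buffer is full and the consumer blocks only when it is empty.

```python
import queue
import threading
import time

buf = queue.Queue(maxsize=4)   # bounded buffer between producer and consumer
consumed = []

def producer():
    for item in range(8):
        buf.put(item)          # blocks only while the buffer is full
    buf.put(None)              # sentinel: signals that no more data is coming

def consumer():
    while True:
        item = buf.get()       # blocks while the buffer is empty
        if item is None:
            break
        time.sleep(0.01)       # simulate a consumer slower than the producer
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)                # [0, 1, 2, 3, 4, 5, 6, 7]
```

Because the buffer decouples the two rates, the producer finishes early and the consumer drains the remaining items in the background, mirroring the printer-spooler example above.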