
os short notes

The document outlines the characteristics of deadlock, including mutual exclusion, hold and wait, no-preemption, and circular wait conditions. It also discusses deadlock recovery methods, file system management techniques, disk scheduling, and various system architectures. Additionally, it covers mobile operating systems, distributed systems, and performance parameters related to disk operations.

Uploaded by

Prajakta Mhaske
Copyright
© All Rights Reserved

Deadlock characterization

1. Mutual Exclusion Condition: The resources involved are non-shareable; only one process
at a time can use a resource. Explanation: At least one resource must be held in a
non-shareable mode, that is, only one process at a time claims exclusive control of it.
If another process requests that resource, the requesting process must be delayed
until the resource has been released.

2. Hold and Wait Condition: A process holding at least one resource is waiting to acquire
additional resources held by other processes. Explanation: There must be a process that is
holding a resource already allocated to it; while waiting for additional resources that are
currently being held by other processes.

3. No-Preemption Condition: Resources already allocated to a process cannot be preempted.
Explanation: A resource cannot be taken away from a process; it is released only when the
process completes or voluntarily gives it up.

4. Circular Wait Condition: The processes in the system form a circular list or chain; where
each process in the list is waiting for a resource held by the next process in the list.
Explanation: A set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for
a resource held by P1, P1 is waiting for a resource held by P2, …, Pn-1 is waiting for a
resource held by Pn, and Pn is waiting for a resource held by P0.

1. List all deadlock recovery methods

Deadlock recovery involves breaking the deadlock cycle once it has been detected. Common
methods include:

• Process Termination: Kill one or more processes involved in the deadlock. You can
either:

o Abort all deadlocked processes, or

o Abort one at a time until the cycle breaks.

• Resource Preemption: Temporarily take resources away from some processes and
give them to others. Requires rollback and tracking resource usage carefully.

• Process Rollback: Roll back one or more processes to a previous safe state and
restart them later.

• Combined Approach: Often a mix of the above methods is used, depending on system
constraints.

2. List file system free space management techniques

These are ways to keep track of unallocated disk space:


• Bit Map (Bit Vector): Each block on the disk is represented by a bit; 0 indicates free, 1
indicates used. Simple but requires scanning.

• Linked List: Free blocks are linked together like a list; traversal is needed to find
suitable blocks.

• Grouping: Stores the address of a set of free blocks in a block, and that block
contains the address of the next group.

• Counting: Maintains the address of the first free block and the number of contiguous
free blocks. Useful if free blocks tend to be contiguous.
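The bit-map idea is small enough to sketch in C. This is a toy in-memory version (the block count and helper names are made up for illustration; a real file system keeps the bitmap on disk):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define NUM_BLOCKS 64

/* One bit per disk block: 0 = free, 1 = used. */
static uint8_t bitmap[NUM_BLOCKS / 8];

static void mark_used(int block) { bitmap[block / 8] |=  (uint8_t)(1 << (block % 8)); }
static void mark_free(int block) { bitmap[block / 8] &= (uint8_t)~(1 << (block % 8)); }
static int  is_free(int block)   { return !(bitmap[block / 8] & (1 << (block % 8))); }

/* Scan the bitmap for the first free block; -1 if the disk is full.
 * This linear scan is the "requires scanning" cost mentioned above. */
static int first_free_block(void) {
    for (int b = 0; b < NUM_BLOCKS; b++)
        if (is_free(b))
            return b;
    return -1;
}
```

For example, after marking blocks 0–2 used, the first free block found is 3; freeing block 1 makes it the first free block again.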

3. What is disk scheduling?

Disk scheduling refers to the way the OS decides the order in which disk I/O requests are
handled. Because disk head movement is slow compared to memory, efficient scheduling
improves system performance. Common algorithms include:

• FCFS (First Come First Serve)

• SSTF (Shortest Seek Time First)

• SCAN and C-SCAN

• LOOK and C-LOOK
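To see why the choice of algorithm matters, the total head movement can be computed for a request queue. A C sketch (helper names are illustrative) comparing FCFS with SSTF:

```c
#include <assert.h>
#include <stdlib.h>

/* Total head movement when requests are served strictly in arrival order (FCFS). */
static int fcfs_seek_total(int head, const int *req, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        total += abs(req[i] - head);
        head = req[i];
    }
    return total;
}

/* Total head movement when the closest pending request is always served next (SSTF). */
static int sstf_seek_total(int head, int *req, int n) {
    int total = 0;
    for (int served = 0; served < n; served++) {
        int best = -1;
        for (int i = 0; i < n; i++)   /* find nearest unserved request */
            if (req[i] >= 0 && (best < 0 || abs(req[i] - head) < abs(req[best] - head)))
                best = i;
        total += abs(req[best] - head);
        head = req[best];
        req[best] = -1;               /* mark as served */
    }
    return total;
}
```

For the classic queue 98, 183, 37, 122, 14, 124, 65, 67 with the head starting at cylinder 53, FCFS moves the head 640 cylinders in total while SSTF moves it only 236.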

4. What is deadlock?

A deadlock is a situation where a group of processes are blocked forever, each waiting for a
resource that another process holds. It arises when four conditions hold simultaneously:

These four conditions are the same as in the deadlock characterization above:

1. Mutual Exclusion: resources are non-shareable; only one process at a time can use a
resource.

2. Hold and Wait: a process holding at least one resource waits for additional resources
held by other processes.

3. No Preemption: resources cannot be forcibly taken from a process; they are released
only voluntarily.

4. Circular Wait: a set {P0, P1, …, Pn} of waiting processes exists in which each Pi waits
for a resource held by Pi+1, and Pn waits for a resource held by P0.

5. Define object-based architecture

In an object-based architecture, the system is designed using objects: entities that contain
both data and methods. Each object encapsulates its state and communicates with others
via message passing. This is often seen in distributed systems or middleware platforms like
CORBA or RMI.

6. List system architectures

System architectures describe how components are structured in an OS. Examples include:

• Monolithic Architecture: All services run in kernel space (e.g., traditional UNIX).

• Layered Architecture: OS is divided into layers, each built on the one below.

• Microkernel Architecture: Only essential components run in the kernel; others run in
user space.

• Client-Server Architecture: Services are provided over a network by dedicated servers.

• Distributed Architecture: Components reside on different networked computers but
function as a single system.

7. List any four commercial mobile operating systems

• Android – developed by Google; open-source and widely used.

• iOS – Apple’s OS for iPhones and iPads.

• HarmonyOS – Developed by Huawei as an alternative to Android.

• KaiOS – Lightweight OS for feature phones, based on Linux.

8. What is kernel?

The kernel is the core component of an operating system (OS). It directly interacts with
hardware, manages communication between hardware and software, controls system
resources, and ensures efficient and secure system operation.

What the Kernel Does:

The kernel is responsible for:

1. Process Management

o Scheduling processes and managing CPU time.

2. Memory Management

o Allocating and freeing memory for applications.

3. Device Management

o Communicating with hardware via drivers (keyboard, mouse, disk, etc.).

4. File System Management

o Handling reading/writing files and directory structures.

5. System Call Handling

o Acts as a bridge between applications and the hardware.

How It Works:

When you run a program:

• It makes system calls (like open, read, write).

• The kernel receives these requests, talks to the hardware, and returns the result to
the app.
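This request path can be sketched with POSIX system calls. A toy round-trip (the file path and helper name are illustrative) that writes a message through the kernel and reads it back:

```c
#include <assert.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write a short message to a file and read it back, purely through system
 * calls: each call traps into the kernel, which drives the file system and
 * device driver, then returns the result to the application. */
static ssize_t roundtrip(const char *path, const char *msg, char *out, size_t cap) {
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0) return -1;
    write(fd, msg, strlen(msg));    /* kernel: file system + disk driver */
    close(fd);

    fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    ssize_t n = read(fd, out, cap); /* kernel copies data back to user space */
    close(fd);
    return n;
}
```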

Types include:

• Monolithic Kernel

• Microkernel

• Hybrid Kernel
9. What are the features of mobile operating systems?

Mobile OSes are optimized for portability, battery efficiency, and user interaction. Key
features include:

• Touchscreen support

• Battery and power management

• Wireless connectivity (Wi-Fi, LTE, Bluetooth)

• Sensor support (GPS, accelerometer, gyroscope)

• App sandboxing and security

• Efficient UI/UX for smaller screens

10. List the types of distributed systems

• Client-Server: Clients request services, servers provide them.

• Peer-to-Peer (P2P): Each node acts as both client and server.

• Three-Tier/N-Tier: Presentation, logic, and data layers are separated.

• Clustered Systems: Multiple systems work together closely, like one system.

• Cloud Computing Systems: Resources are delivered as services over the internet.

11. What is request edge?

In a Resource Allocation Graph (RAG), a request edge is a directed edge from a process to a
resource.

• Denoted as: P → R

• Meaning: Process P is requesting resource R.

• These edges indicate that the process is blocked, waiting for the resource to become
available.

12. What is safe state?

A safe state in deadlock handling is one where the system can allocate resources to all
processes in some sequence without leading to deadlock.

• In a safe state, the OS can avoid deadlock by carefully scheduling resource allocation.

• An unsafe state does not necessarily mean deadlock, but only from a safe state can the
OS guarantee that deadlock is avoided.

13. List the names of any two disk allocation methods of disk space

1. Contiguous Allocation

Description:
Each file is stored in a single contiguous block of disk space.

Advantages:

• Fast access (especially for sequential reads).

• Simple to implement.

Disadvantages:

• Can cause external fragmentation.

• Difficult to grow a file if adjacent space is not available.

2. Linked Allocation

Description:
Each file is stored as a linked list of disk blocks, where each block contains a pointer to the
next.

Advantages:

• No external fragmentation.

• Easy to grow files dynamically.

Disadvantages:

• Slower random access (must follow pointers).

• Extra space needed for pointers in each block.

Other: Indexed Allocation.
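The pointer-chasing cost of linked allocation can be sketched with a FAT-style in-memory next-block table in C (sizes and names are illustrative):

```c
#include <assert.h>

#define DISK_BLOCKS 32
#define END_OF_FILE (-1)

/* FAT-style table: next[b] is the block that follows b in its file. */
static int next_block[DISK_BLOCKS];

/* Walk the chain from the file's first block to its k-th block.
 * This is why random access is slow under linked allocation:
 * reaching block k costs k pointer traversals. */
static int nth_block(int start, int k) {
    int b = start;
    while (k-- > 0 && b != END_OF_FILE)
        b = next_block[b];
    return b;
}
```

For a file stored in blocks 4 → 9 → 2, reading its third block means following two pointers from the start; contiguous allocation would reach it with simple arithmetic instead.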

14. List disk performance parameters

1. Seek Time – Time to move the disk arm to the desired track.

2. Rotational Latency – Time for the disk to rotate the desired sector under the
read/write head.
3. Transfer Time – Time to read/write the data once the sector is located.

4. Access Time – Sum of seek time and rotational latency.

15. Define distributed system

A distributed system is a collection of independent computers that appear to users as a
single coherent system.

Features:

• Shared resources

• Concurrency

• Fault tolerance

• Transparency (location, access, replication, etc.)

16. What is size scalability?

Size scalability refers to the ability of a system (especially distributed systems) to handle
growth in the number of users, nodes, or services without performance loss.
E.g., Adding more servers to a cloud service without slowing it down.

17. List the different architectural styles of distributed operating systems

1. Layered Architecture

2. Client-Server Model

3. Peer-to-Peer Model

4. Object-Based Architecture

5. Microkernel Architecture

Each style offers trade-offs between performance, scalability, and complexity.

18. What is kernel? (Already answered in Q8)

See Q8 – core component of the OS managing hardware and system resources.

19. What is RISC in ARM architecture?


RISC = Reduced Instruction Set Computing

• A CPU design principle where instructions are simple and execute in a single cycle.

• ARM uses RISC to achieve high performance with low power consumption, making
it ideal for mobile devices.

20. Write any two special service requirements of mobile operating system

1. Energy Efficiency – Mobile OS must manage power efficiently to extend battery life.

2. Mobility & Connectivity – Must support mobile networks (e.g., LTE, Wi-Fi) and
enable seamless transitions.

21. What is claim edge?

In a Resource Allocation Graph:

• A claim edge is a dashed edge from a process to a resource, indicating that the
process may request the resource in the future.

• Used in Banker’s algorithm to model potential future requests.

22. What is request edge? (Duplicate – see Q11)

Edge from a process to a resource, indicating a current request.

23. List any two file attributes

1. File Name – Human-readable identifier.

2. File Size – Amount of space the file occupies.

Others include file type, access permissions, creation/modification dates, etc.

24. Write any two design goals of distributed systems

1. Transparency

Definition:
Transparency means hiding the complexity and internal workings of the distributed system
from users and applications, making it appear as a single, unified system.
Types of transparency include:

• Access Transparency: Users can access resources without knowing their physical
location.

• Location Transparency: The name of a resource doesn’t reveal its location.

• Replication Transparency: Users are unaware of the fact that multiple copies of
resources exist.

• Concurrency Transparency: Multiple users can access shared resources without
conflicts.

• Failure Transparency: The system continues functioning properly even when
components fail.

Why it's important:
It simplifies the development and use of the system and improves usability, reliability, and
portability.

2. Scalability

Definition:
Scalability refers to the ability of a distributed system to handle growth — in terms of users,
data, and computing resources — without significant performance degradation.

Scalability involves:

• Size scalability: Adding more users or resources.

• Geographical scalability: Operating efficiently across large distances.

• Administrative scalability: Managing a growing system without too much complexity.

Why it's important:
Scalable systems can grow with the needs of users or organizations, supporting more
devices, locations, and workloads over time.

25. What are advantages of Windows Mobile OS?

1. Integration with Microsoft services – Seamless access to tools like MS Office,
Outlook, OneDrive.

2. Familiar Interface – UI similar to the Windows desktop OS, reducing the learning
curve for users.

3. Enterprise Features – Better suited for business users, with enhanced security and
device management.

26. Difference Between Disk Scheduling Algorithms

SCAN vs LOOK:

• SCAN (Elevator Algorithm): The disk arm moves in one direction fulfilling requests
until it reaches the end, then reverses direction.

• LOOK: Similar to SCAN but reverses direction at the last request instead of going to
the physical end of the disk.

LOOK vs C-LOOK:

• LOOK: Services requests in both directions, reversing at last request.

• C-LOOK: Services in one direction only, then jumps to the beginning without servicing
requests on the return.

SCAN vs C-SCAN:

• SCAN: Services in both directions.

• C-SCAN: Services in one direction only and goes back to start unserviced, providing
uniform wait time.

27. Sensor Network

A sensor network is a collection of spatially distributed autonomous sensors that monitor
physical/environmental conditions (e.g., temperature, motion).

• Used in IoT, smart homes, health monitoring.

• Requires efficient OS support for power, wireless comm, real-time responses.

28. Centralized Organization System Architecture

In centralized architecture, a single system or server provides services to all users. All
decisions and data are stored in this central node.

Centralized Organization System Architecture – Definition & Explanation

Definition:
A Centralized System Architecture is an organizational model in which all processing,
control, and data storage are handled by a single central system (or server), and all user
devices (clients or terminals) rely on it for computing services.

Explanation:

In this architecture:

• There is one central computer (mainframe or server).

• All resources (CPU, memory, storage, software) are hosted centrally.

• Clients are usually simple terminals or thin clients with minimal processing
capabilities.

• All requests, processing, and responses happen through the central system.

Real-World Example:

• Old mainframe systems used in banks or universities, where user terminals were
connected to a powerful central server.

• Modern example: Web-based apps where the logic and data are hosted on a single
centralized cloud server.

Advantages:

1. Easier to manage and maintain

o All data and applications are in one place.

2. Centralized control

o Security and access can be tightly managed.

3. Lower client-side cost

o Clients don’t need high processing power.

Disadvantages:

1. Single Point of Failure

o If the central server fails, the entire system goes down.


2. Scalability Issues

o Difficult to scale with increasing users and data.

3. Network Dependency

o Requires stable and fast network for users to connect reliably.

Use Cases:

• Legacy enterprise applications

• Small-scale internal business tools

• Simple client-server setups

Visual Concept (Text-Based):


Users/Clients

| | |

v v v

+-------------------+

| Central Server |

| (CPU, Storage, DB)|

+-------------------+

29. Define:

Seek Time

Seek time is the time it takes for the disk drive’s read/write head to move to the track or
cylinder where the desired data is stored.

• It’s a major component of total disk access time.

• Lower seek time means faster access to data.

• Depends on how far the head must move.


Example: Moving from track 20 to track 80 takes more time than moving from 20 to 25.

Rotational Latency

Rotational latency is the time it takes for the desired sector of the disk to rotate under the
read/write head after the head is in position.

• It depends on the rotational speed of the disk (measured in RPM).

• On average, it is half the time of a full rotation.

Example: For a disk spinning at 7200 RPM, one full rotation takes ~8.33 ms, so average
rotational latency is ~4.17 ms.
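The 7200 RPM figures follow directly from the rotation speed; a minimal C sketch (function names are illustrative):

```c
#include <assert.h>

/* One full rotation in milliseconds: 60,000 ms per minute divided by RPM. */
static double full_rotation_ms(double rpm) { return 60000.0 / rpm; }

/* Average rotational latency is half of one full rotation, since on average
 * the desired sector is half a revolution away when the head arrives. */
static double avg_latency_ms(double rpm) { return full_rotation_ms(rpm) / 2.0; }
```

At 7200 RPM this gives roughly 8.33 ms per rotation and about 4.17 ms average latency, matching the example above.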

30. Benefits of Distributed Systems

• Resource sharing

• Scalability

• Fault tolerance

• Improved performance and load balancing

• Transparency for users

31. Two Deadlock Prevention Strategies

1. Hold and Wait Prevention – Require a process to request all of its resources at once,
before it begins execution.

2. Circular Wait Prevention – Impose a global ordering on resources; requests must be
made in increasing order.

32. Sequential & Direct File Access Methods

Sequential & Direct File Access Methods in Operating Systems

When accessing data in a file, the access method defines how the operating system and
application interact with the file contents. Two of the most common file access methods are:

1. Sequential Access Method

Definition:
• Data is accessed in a linear, sequential order, one record after another.

• This is the simplest and most commonly used access method.

Characteristics:

• Data must be read or written from the beginning to the desired location.

• Does not support random access.

• Suitable for devices like magnetic tapes or for file types like log files, audio/video
streams.

Example (C-style sketch; assumes an open file descriptor file and a record variable):

while (read(file, &record, sizeof(record))) {
    /* process each record in order */
}

Advantages:

• Simple to implement

• Good for files that are always processed from beginning to end (e.g., media files)

Disadvantages:

• Inefficient for accessing random records

• Cannot skip directly to a specific record

2. Direct (Random) Access Method

Definition:

• Data can be accessed directly using an index or record number.

• Allows reading/writing to any part of the file without reading the data before it.

Characteristics:

• Each block or record has a unique address or number.

• Suitable for databases, ISAM, hard disks, and files that require updates frequently.

Example (C-style sketch):

fseek(file, record_number * sizeof(record), SEEK_SET);
fread(&record, sizeof(record), 1, file);

Advantages:

• Fast and efficient for random data access

• Ideal for indexed or structured files

Disadvantages:

• Slightly more complex to implement

• Not all storage devices support direct access (e.g., magnetic tape)

Comparison Table

Feature Sequential Access Direct Access

Access Order Linear (start to end) Any location in the file

Speed for Random Access Slow Fast

Complexity Simple More complex

Use Case Logs, media streams Databases, indexed files

Supported Devices Tape drives Hard disks, SSDs

33. Android vs iPhone OS

Feature Android iPhone (iOS)

Customization High Limited

App Stores Google Play + others Apple App Store only

Open Source Yes No

Hardware Various brands Apple only

34. Special Constraints & Requirements of Mobile OS


Special Constraints & Requirements of Mobile Operating Systems

Mobile Operating Systems (like Android, iOS, KaiOS, HarmonyOS) are designed specifically
for portable, battery-powered, wireless-connected devices such as smartphones, tablets,
and wearables.

Compared to desktop OS, mobile OS must operate under unique constraints and serve
special requirements. Here's a detailed explanation:

1. Security & App Isolation

• Mobile devices are always connected, making them more vulnerable to attacks.

• Mobile OS uses sandboxing to isolate apps and protect user data.

• Must include:

o Secure boot

o App permissions

o Encrypted storage

o Biometric security (fingerprint, face recognition)

2. Power Management / Energy Efficiency

• Mobile devices rely on battery power, so the OS must optimize energy use.

• Features include:

o CPU throttling

o Sleep/idle modes

o Background activity limits

o App doze modes (e.g., Android Doze)

3. Wireless Connectivity & Mobility Support

• Mobile OS must support:

o Cellular (4G/5G)

o Wi-Fi, Bluetooth

o NFC
o Seamless handoff between networks (e.g., from Wi-Fi to mobile data)

• Devices should continue functioning while moving between locations.

4. Real-Time Responsiveness

• The OS must provide instant feedback to user inputs (touch, swipe).

• Needs low-latency processing for smooth UI interactions.

• Essential for:

o Gaming

o Multimedia streaming

o Real-time messaging

5. Touchscreen-Centric UI

• UI must be optimized for touch-based interaction, not mouse/keyboard.

• Includes:

o Gestures (swipe, pinch, tap)

o Soft keyboard management

o Responsive layout scaling (adaptive UIs)

6. App Ecosystem & Lifecycle Management

• Mobile OS needs to manage apps efficiently:

o Foreground/background task control

o Notifications

o App lifecycle (pause, resume, stop)

• Avoid excessive background processing to save resources.

7. Sensor Integration

• Support for sensors like:

o Accelerometer
o Gyroscope

o GPS

o Proximity sensor

o Ambient light sensor

• Must provide APIs for apps to use these safely and efficiently.

8. Limited Resources (CPU, RAM, Storage)

• Mobile hardware is less powerful than desktops.

• OS must:

o Use resources efficiently

o Prioritize essential processes

o Minimize background memory usage

9. Cloud & Sync Support

• Modern mobile OS integrates with cloud services for:

o Backup

o Sync (contacts, messages, files)

o Push notifications

• Example: iCloud for iOS, Google Drive for Android

10. Location & Context Awareness

• OS must support context-based services (e.g., GPS-based apps, activity recognition).

• Also needs to balance privacy with functionality.

In Summary

Constraint/Requirement Why It's Important

Security & Isolation Prevent data theft, malware


Power Efficiency Preserve battery life

Wireless & Mobility Always-connected operation

Real-Time Responsiveness Smooth user experience

Touchscreen UI Touch-first design

App Lifecycle Management Efficient memory, multitasking

Sensor Support Context-aware applications

Limited Resources Works on lower CPU, memory

Cloud Integration Sync across devices

Location Awareness Navigation, smart services

35. Two Deadlock Prevention Strategies (Same as Q31)

Deadlock prevention is a technique used in operating systems and concurrent programming
to ensure that deadlocks never occur. A deadlock is a situation where a set of processes
becomes stuck in a circular wait, each waiting for a resource held by another. To prevent
this, the system ensures that at least one of the necessary conditions for deadlock cannot
hold.

Here are two common deadlock prevention strategies:

1. Resource Allocation Ordering (Impose a Total Order on Resources)

Strategy:
The system assigns a global ordering (priority) to all resources. Processes are required to
request resources in increasing order of enumeration. If a process needs multiple resources,
it must request them in the defined order.

How it prevents deadlock:
This strategy breaks the circular wait condition, one of the four necessary conditions for
deadlock. Since no process can request a lower-priority resource after holding a higher-
priority one, circular chains of waiting are not possible.

Example:

• Suppose the system has resources R1, R2, and R3 with an order: R1 < R2 < R3.
• If a process holds R2, it can only request R3, not R1.

• If all processes follow this order, circular waiting cannot happen.

Pros:

• Simple to implement.

• No need to track historical allocations.

Cons:

• Not always practical (especially if the process doesn’t know what resources it will
need in advance).

• May reduce concurrency and lead to inefficient resource utilization.
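In code, resource ordering is the familiar lock-ordering discipline. A pthreads sketch (names are illustrative; a toy, not a full resource manager) in which every thread takes lock A (R1) before lock B (R2), so a circular wait can never form:

```c
#include <assert.h>
#include <pthread.h>

/* Two resources with a fixed global order: lock_a (R1) before lock_b (R2). */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;

/* Every thread acquires the locks in the same order, so the circular-wait
 * pattern (thread 1 holds A and wants B while thread 2 holds B and wants A)
 * is impossible by construction. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&lock_a);   /* R1 first, always */
        pthread_mutex_lock(&lock_b);   /* then R2 */
        shared_counter++;
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }
    return NULL;
}
```

If one thread instead locked B before A, the program could deadlock exactly as described above; the fixed order is what rules that out.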

2. Preemptive Resource Allocation

Strategy:
The system allows resources to be preempted (i.e., taken away) from a process. If a process
holding some resources requests another resource that cannot be immediately allocated,
the system may force it to release its current resources and retry later.

How it prevents deadlock:
This strategy breaks the no-preemption condition, another of the four necessary deadlock
conditions. By allowing the system to take back resources, processes cannot hold resources
indefinitely while waiting for others.

Example:

• A process P1 holds resource R1 and requests R2.

• R2 is not available, so the system forces P1 to release R1.

• Now R1 becomes available to others, and P1 must retry its request sequence later.

Pros:

• More flexible than ordering strategies.

• Can potentially improve resource usage.

Cons:

• Preempting resources can be complex, especially for non-sharable or critical
resources.

• May cause inconsistency or the need for rollback mechanisms.


36. Sequential & Direct File Access Methods (Same as Q32)

37. Cloud Computing System

Delivers computing services (storage, servers, networking, software) over the internet.

• On-demand availability

• Scalable

• Pay-as-you-go model

• Examples: AWS, Azure, Google Cloud

38. Desktop OS vs Mobile OS

Feature Desktop OS Mobile OS

Input Keyboard & Mouse Touchscreen

Power Use High Optimized

Interface Complex, multi-window Simple, app-based

Portability Less High

39. Necessary Conditions for Deadlock

1. Mutual Exclusion

2. Hold and Wait

3. No Preemption

4. Circular Wait

Example: Two processes each holding one resource and requesting the other's.

Diagram: Simple circular RAG with 2 processes and 2 resources.

40. Directory Structure

Organizes files in a file system:

• Single-Level: All files in one directory.

• Two-Level: Separate user directories.

• Tree: Hierarchical (like Windows).

• Acyclic Graph: Allows shared files.

• General Graph: Allows cycles (less common).

41. Deadlock Recovery Methods (Repeat of Q1)

42. Advantages of Distributed System (Repeat of Q30)

43. Relative Path

A relative path is a file or directory path interpreted relative to the current working directory.

Example:
./folder/file.txt (relative to current directory)

44. Deadlock Definition (Repeat of Q4)

45. Disk Allocation Methods

1. Contiguous

2. Linked

3. Indexed

46. Banker’s Algorithm

Used to avoid deadlocks. Processes declare max resource needs. OS only allocates if system
stays in a safe state.

Steps (safety check):

1. Work = Available; mark all processes unfinished.

2. Find an unfinished process whose Need ≤ Work.

3. If one exists, assume it runs to completion: Work = Work + its Allocation; mark it
finished.

4. Repeat; if all processes can finish, the system is in a safe state.
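The steps above are the safety check at the heart of the Banker's algorithm; a C sketch (array sizes fixed for illustration):

```c
#include <assert.h>
#include <string.h>

#define NPROC 5
#define NRES  3

/* Safety check: returns 1 if some order exists in which every process can
 * obtain its remaining Need from Work and run to completion. */
static int is_safe(int avail[NRES],
                   int alloc[NPROC][NRES],
                   int need[NPROC][NRES]) {
    int work[NRES], finish[NPROC] = {0};
    memcpy(work, avail, sizeof work);

    for (int done = 0; done < NPROC; ) {
        int progress = 0;
        for (int p = 0; p < NPROC; p++) {
            if (finish[p]) continue;
            int ok = 1;
            for (int r = 0; r < NRES; r++)          /* Need ≤ Work? */
                if (need[p][r] > work[r]) { ok = 0; break; }
            if (ok) {                               /* pretend p finishes */
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[p][r];         /* it returns its resources */
                finish[p] = 1;
                done++; progress = 1;
            }
        }
        if (!progress) return 0;                    /* no process can proceed */
    }
    return 1;
}
```

With a textbook-style example of Available = (3,3,2) and the allocation/need matrices in the test below, the check reports the state safe; with nothing available it reports unsafe.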

47. Directory Operations

• Create/Delete File

• Create/Delete Directory

• Rename

• List contents

• Navigate (change directory)

48. Problem with SSTF

• Starvation: Requests far from the current head position may never be served if closer
requests keep arriving.

Problem with SSTF (Shortest Seek Time First) Disk Scheduling Algorithm

Quick Overview: What is SSTF?

SSTF selects the I/O request that is closest to the current head position, minimizing seek
time for each request.

• It’s like always picking the nearest stop first.

• Seeks to reduce the average head movement.

Major Problems / Drawbacks of SSTF

1. Starvation of Distant Requests

• If new, nearby requests keep coming in, farther requests might never get serviced.

• These distant requests may wait indefinitely—this is called starvation.

Example:

• Head at 53, requests at [54, 56, 58, 60, 120]

• If requests keep arriving near 50–60, the 120 request might never be chosen.
2. Unfairness

• SSTF favors requests that are closer to the head.

• Older, distant requests can be repeatedly bypassed, which is unfair in multi-user
environments.

3. Not Optimal for All Loads

• SSTF works well only when requests are evenly distributed.

• In skewed or heavy-load environments, performance degrades due to poor global
decision-making.

4. No Real-Time Guarantees

• SSTF doesn’t consider deadlines or time constraints.

• Real-time systems that require predictable access times may fail under SSTF.

5. Can Cause Zig-Zag Movement

• Head may keep jumping back and forth if new requests appear on either side.

• This leads to increased wear and inefficiency, especially on traditional hard drives.

6. Doesn’t Ensure Minimum Total Seek Time

• While it minimizes the next seek, it doesn’t guarantee the optimal overall schedule
(as compared to algorithms like LOOK or SCAN).

49. Cluster Computing System

A cluster computing system is a group of independent computers connected by a fast local
network that work together closely and behave like a single system. Clusters are commonly
used for high availability, load balancing, and parallel processing.

Layered Components of Android Architecture:

1. Linux Kernel (Base Layer)

• Acts as the hardware abstraction layer.

• Manages core system services such as:

o Process management

o Memory management

o Device drivers

o Power management

o Networking

Android uses a modified version of the Linux kernel.

2. Hardware Abstraction Layer (HAL)

• A set of C/C++ libraries that provide standard interfaces for hardware vendors.

• Allows the Android system to communicate with hardware devices like camera, GPS,
audio, etc.

3. Native Libraries and Android Runtime

a) Native Libraries

• Written in C/C++ and compiled for the device.

• Includes:

o Surface Manager – UI rendering

o SQLite – Database engine

o OpenGL/ES – 2D/3D graphics

o WebKit – Browser engine

o SSL – Internet security

b) Android Runtime (ART)

• Replaces the older Dalvik Virtual Machine.

• Executes apps using Ahead-of-Time (AOT) compilation.

• Includes core Java libraries.


4. Application Framework

• Provides high-level APIs for app developers.

• Manages core Android functions like:

o Activity lifecycle

o Window management

o Content providers

o Location services

o Telephony

Acts as the bridge between apps and system components.

5. Applications (Top Layer)

• All pre-installed and third-party apps:

o Dialer, SMS, Browser, Camera

o User-installed apps (e.g., WhatsApp, Instagram)

• Built using the Java/Kotlin + XML + Android SDK

Text-Based Diagram: Android Architecture


|----------------------------------------------------|

| Applications Layer |

|----------------------------------------------------|

| System Apps & User Apps (UI Layer) |

|----------------------------------------------------|

| Application Framework Layer |

|----------------------------------------------------|

| Activity Manager | Content Providers | Telephony...|


|----------------------------------------------------|

| Android Runtime + Native Libraries |

|----------------------------------------------------|

| ART (Runtime) | SQLite | WebKit | SSL | OpenGL ES |

|----------------------------------------------------|

| Hardware Abstraction Layer (HAL) |

|----------------------------------------------------|

| Linux Kernel (Memory, Drivers, I/O) |

|----------------------------------------------------|

Why This Architecture?

• Modular design – easy updates and replacement of components

• Security – sandboxing via the Linux kernel

• Performance – native libraries + ART runtime boost efficiency

• Portability – runs on various device hardware

50. Two Mobile OS Special Requirements (Repeat of Q20)

51. Types of Distributed System (Repeat of Q10)

52. Android Architecture (with Diagram - Text Description)

1. Linux Kernel – Hardware abstraction

2. Libraries – SQLite, WebKit, OpenGL

3. Android Runtime – ART and Java APIs

4. Application Framework – Activity manager, content providers

5. Applications – Gmail, Maps, etc.

53. Deadlock Recovery Methods (Repeat)


54. Free Space Management Techniques (Repeat)

55. Disk Scheduling (Repeat)

56. Deadlock Definition (Repeat)

57. Object-Based Architecture (Repeat)

58. System Architectures of Distributed OS (Repeat of Q17)

59. Four Commercial Mobile OS (Repeat of Q7)

60. Kernel (Repeat)

61. Features of Mobile OS (Repeat of Q9)

62. Types of Distributed Systems (Repeat)

63. Goals of Distributed Systems

64. Scan vs C-Scan Disk Scheduling (Already covered in Q26)

The goals of distributed systems are what guide their design and operation. These goals
ensure that the system functions efficiently, reliably, and transparently across multiple
independent components.

Here’s a breakdown of the main goals of distributed systems:

1. Transparency
Transparency means hiding the complexity of the distributed nature from users and
applications. There are several types:

Type Description

Access Transparency Users can access remote resources as if they were local.

Location Transparency Users don’t need to know the physical location of a resource.

Migration Transparency Resources can move without affecting the user.

Replication Transparency Users are unaware of multiple copies of a resource.

Concurrency Transparency Multiple users can access resources simultaneously without conflicts.

Failure Transparency The system hides failures and continues to operate smoothly.

2. Scalability

A distributed system should be able to:

• Scale up: Handle increasing loads (more users, data, etc.)

• Scale out: Add more machines to improve performance

• Maintain performance without a complete redesign.

3. Fault Tolerance (Reliability)

The system must continue to operate correctly even when parts fail. This involves:

• Redundancy: Multiple nodes can take over if one fails.

• Failure detection and recovery mechanisms.

4. Performance

The system should provide fast response times and efficient processing by:

• Distributing tasks optimally

• Reducing communication overhead

• Load balancing across nodes


5. Security

Even in a distributed environment, the system must ensure:

• Authentication: Verifying identity

• Authorization: Access control

• Encryption: Data privacy

• Integrity & Auditing

6. Resource Sharing

A distributed system should enable efficient sharing of:

• Hardware resources (e.g., printers, disk space)

• Software resources (e.g., applications)

• Data (e.g., files, databases)

7. Openness (Interoperability)

• The system should support heterogeneous environments (different OS, hardware,


programming languages).

• Open standards (like TCP/IP, REST, SOAP) help systems communicate smoothly.

8. Maintainability and Modularity

• Systems should be easy to update, scale, or debug.

• Modular design allows individual components to be improved or replaced without


affecting the whole.

Summary Table:

Goal Description

Transparency Hide complexities from users/applications


Scalability Handle growing workloads gracefully

Fault Tolerance Continue working despite failures

Performance Fast, efficient operation

Security Protect data and access

Resource Sharing Enable use of distributed resources

Openness Support diverse systems and technologies

Maintainability Easy to upgrade and manage

65. Resource Allocation Graph Example

What is a Resource Allocation Graph (RAG)?

A RAG is a directed graph used to represent the state of resource allocation in a system.

• Nodes:

o Processes: Denoted as circles (e.g., P1, P2)

o Resources: Denoted as squares (e.g., R1, R2)

• Edges:

o Request Edge (→): From a process to a resource, meaning the process is


requesting that resource.

o Assignment Edge (←): From a resource to a process, meaning the resource is


allocated to that process.

Example Scenario

Let’s say:

• Processes: P1, P2

• Resources: R1, R2 (each with 1 instance)

Step-by-step Example
1. P1 holds R1 → Edge from R1 → P1 (assignment)

2. P1 requests R2 → Edge from P1 → R2 (request)

3. P2 holds R2 → Edge from R2 → P2

4. P2 requests R1 → Edge from P2 → R1

Resource Allocation Graph Representation:


R1 ──assigned──> P1 ──requests──> R2 ──assigned──> P2 ──requests──> R1   (cycle)

• R1 is assigned to P1 → R1 → P1

• R2 is assigned to P2 → R2 → P2

• P1 is waiting for R2 → P1 → R2

• P2 is waiting for R1 → P2 → R1

Deadlock Detection:

This graph contains a cycle:


P1 → R2 → P2 → R1 → P1

This cycle indicates a deadlock:

• P1 waits for R2 (held by P2)

• P2 waits for R1 (held by P1)

Neither process can proceed—classic deadlock!


Summary Table:

Symbol Meaning

◯ Process (e.g., P1)

□ Resource (e.g., R1)

→ Request (Process → Resource)

← Assignment (Resource → Process)
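The cycle check above can be sketched as a small depth-first search over the graph (node and function names are illustrative, not a standard API):

```python
def has_cycle(graph):
    """Depth-first search for a cycle in a directed graph."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for neighbor in graph.get(node, []):
            if color[neighbor] == GRAY:   # back edge -> cycle found
                return True
            if color[neighbor] == WHITE and visit(neighbor):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# Edges from the example: R1->P1 (assignment), P1->R2 (request),
# R2->P2 (assignment), P2->R1 (request).
rag = {
    "R1": ["P1"],
    "P1": ["R2"],
    "R2": ["P2"],
    "P2": ["R1"],
}
print(has_cycle(rag))  # True -> deadlock
```

For single-instance resources like R1 and R2 here, a cycle in the RAG is both necessary and sufficient for deadlock.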


66. Benefits of Distributed Systems

Distributed systems offer numerous advantages by leveraging multiple interconnected


computers. Here are the key benefits:

a. Resource Sharing

• Multiple users and systems can share hardware, software, and data across the
network.

b. Scalability

• Easy to scale horizontally by adding more machines.

• Ideal for handling growing workloads.

c. Fault Tolerance & Reliability

• If one node fails, others can take over.

• Redundancy ensures continuous availability of services.

d. Performance

• Parallel processing improves speed and responsiveness.

• Tasks can be distributed to reduce overall execution time.

e. Geographic Distribution

• Supports global operations with nodes in different locations.

• Reduces latency for geographically distributed users.


f. Cost-Effectiveness

• Can use low-cost commodity hardware instead of expensive mainframes.

g. Flexibility

• Easier to update or maintain individual components without shutting down the


entire system.

67. Deadlock Prevention Strategies

A deadlock occurs when processes are stuck waiting for resources held by each other, and
none can proceed. Prevention strategies avoid this by breaking one of the deadlock
conditions.

Deadlock Conditions (Coffman’s Conditions):

1. Mutual Exclusion

2. Hold and Wait

3. No Preemption

4. Circular Wait

To prevent deadlocks, one or more of these must be broken:

a. Eliminate Hold and Wait

• Require all resources to be requested at once.

• Process can only proceed if all are available.

b. Eliminate Circular Wait

• Impose a strict ordering of resource allocation.

• Processes must request resources in increasing order (e.g., R1 < R2 < R3).

c. Eliminate No Preemption

• Allow the system to forcibly take resources from a process.

• If a process can’t get all needed resources, it releases what it holds.

d. Eliminate Mutual Exclusion

• Not always possible, but where feasible, design resources to be sharable (like read-
only files).
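Strategy (b), ordered resource acquisition, can be sketched with Python locks; the indices stand in for a global resource ordering (names are illustrative):

```python
import threading

locks = [threading.Lock() for _ in range(3)]  # stand-ins for R1, R2, R3

def acquire_in_order(needed):
    """Always acquire locks in ascending index order."""
    for i in sorted(needed):
        locks[i].acquire()

def release_all(needed):
    for i in sorted(needed, reverse=True):
        locks[i].release()

def worker(needed):
    acquire_in_order(needed)
    release_all(needed)

# Both threads need the same two resources, but because each sorts its
# request set, neither can hold R2 while waiting for R1 -> no circular wait.
t1 = threading.Thread(target=worker, args=({0, 1},))
t2 = threading.Thread(target=worker, args=({1, 0},))
t1.start(); t2.start()
t1.join(); t2.join()
print("finished without deadlock")
```

Without the sorting step, one thread taking R1-then-R2 while the other takes R2-then-R1 is exactly the circular-wait scenario.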
68. Sequential & Direct File Access

These are two file access methods that define how data is read from or written to files on
disk:

1⃣ Sequential Access

Definition:

• Data is accessed in order, one record after another (like reading a tape).

Suitable For:

• Files read/written from start to finish (e.g., logs, text files).

Advantages:

• Simple to implement.

• Good for batch processing.

Disadvantages:

• Slow if you need to access data in the middle or end.

2⃣ Direct (or Random) Access

Definition:

• Data can be read or written at any position in the file directly using a position index
or block number.

Suitable For:

• Databases, binary files, applications that require fast lookup.

Advantages:

• Fast access to any part of the file.

• Efficient for read-heavy operations.

Disadvantages:

• More complex implementation.

• May require fixed-size records or indexing.


Comparison Table:

Feature Sequential Access Direct Access

Access Order One-by-one, in order Any order, random access

Speed Slower for large files Faster for specific access

Use Case Logs, reports Databases, media files

Implementation Simple Complex (needs indexing)
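The difference can be sketched with ordinary Python file I/O, assuming fixed-size records so a record's byte offset is computable (file name and record format are made up for the example):

```python
import os
import tempfile

RECORD_SIZE = 8  # fixed-size records make direct access trivial

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"rec{i:05d}".encode())   # each record is exactly 8 bytes

# Sequential access: read records one after another from the start.
with open(path, "rb") as f:
    first = f.read(RECORD_SIZE)

# Direct access: seek straight to record 3 without reading records 0-2.
with open(path, "rb") as f:
    f.seek(3 * RECORD_SIZE)
    third = f.read(RECORD_SIZE)

print(first.decode(), third.decode())  # rec00000 rec00003
```

This is why direct access "may require fixed-size records or indexing": without them, the byte offset of record N cannot be computed.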

69. Architectural Styles in Distributed OS

• Layered – Clear separation, easier to manage

• Client-Server – Widely used, simple

• Peer-to-Peer – Equal peers, better load balancing

• Microkernel – Better modularity

Example in Detail (Client-Server): Clients send requests, servers respond. This simplifies management and scales easily.

70. Deadlock Recovery Methods (Repeat)

71. Disk Allocation Methods

Disk allocation methods are techniques used by the operating system to allocate disk space
to files stored on a storage device. The goal is to manage space efficiently, avoid
fragmentation, and allow fast access to files.

There are three primary disk allocation methods:

1⃣ Contiguous Allocation

Definition:

Allocates a single continuous block of space on the disk for each file.
Advantages:

• Fast access – simple to calculate location (just starting block + offset).

• Minimal metadata – only need start block and file length.

Disadvantages:

• External fragmentation – free space may be scattered.

• Difficult to grow files – if next blocks are used, file must be moved.

Example:

File Start Block Length (Blocks)

A 5 3

B 8 2

2⃣ Linked Allocation

Definition:

Each file is a linked list of disk blocks; each block contains data and a pointer to the next
block.

Advantages:

• No fragmentation.

• Easy to grow files (just add a new block to the chain).

Disadvantages:

• Slower access – must follow pointers block-by-block.

• No random access (sequential only).

• Pointer overhead in each block.

Example:

File A: Block 2 → Block 10 → Block 7 → null

Each block contains data + a pointer to the next.

3⃣ Indexed Allocation
Definition:

Each file has its own index block, which contains pointers to all the disk blocks used by the
file.

Advantages:

• Supports direct/random access.

• No external fragmentation.

• Efficient for both small and large files.

Disadvantages:

• Index block adds overhead.

• May need multilevel indexing for very large files.

Example:

Index Block for File A

5 → 8 → 11 → 14 → 20

These blocks (5, 8, 11…) store the actual data.

Comparison Table:

Feature Contiguous Linked Indexed

Access Type Fast, Random Sequential only Fast, Random

Fragmentation External None None

File Growth Difficult Easy Easy

Overhead Low Pointer per block One index block

File Size Limit Limited by space No limit (theoretically) Depends on index size
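Linked allocation can be sketched with a small in-memory model (a dict standing in for the disk; block numbers follow the File A example above):

```python
disk = {}  # block number -> (data, pointer to next block)

def write_linked(blocks, data_chunks):
    """Chain the given free blocks into one file."""
    for (block, chunk), nxt in zip(zip(blocks, data_chunks),
                                   blocks[1:] + [None]):
        disk[block] = (chunk, nxt)
    return blocks[0]          # the directory stores only the start block

def read_linked(start):
    """Follow pointers block-by-block (sequential access only)."""
    data, block = [], start
    while block is not None:
        chunk, block = disk[block]
        data.append(chunk)
    return "".join(data)

# File A occupies blocks 2 -> 10 -> 7, as in the example above.
start = write_linked([2, 10, 7], ["he", "ll", "o"])
print(read_linked(start))  # hello
```

Note how reading block 7 requires first visiting blocks 2 and 10; that pointer-chasing is exactly why linked allocation cannot support random access.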

72. Disk Performance Parameters (Repeat of Q14)

1. Seek Time
• Definition: Time taken by the disk’s read/write head to move to the track where the
required data is stored.

• Measured in: milliseconds (ms)

• Types:

o Full Stroke Seek Time: Time to move from the innermost to the outermost
track.

o Average Seek Time: Typical time taken to locate any random track (commonly
used in specs).

Lower seek time = faster data location.

2. Rotational Latency (Rotational Delay)

• Definition: Time the disk takes to rotate and position the desired sector under the
read/write head.

• Depends on: Disk rotation speed (measured in RPM).

• Formula:

Average Rotational Latency = (1/2) × (60 / RPM) seconds

Faster spinning disks have lower rotational latency.

3. Data Transfer Rate

• Definition: Rate at which data can be read from or written to the disk.

• Types:

o Internal Transfer Rate: Between disk platter and buffer.

o External Transfer Rate: Between buffer and host system.

• Measured in: MB/s or GB/s

Higher transfer rate = faster read/write performance.

4. Access Time
• Definition: Total time to access data from the disk.

• Formula:

Access Time = Seek Time + Rotational Latency

This is the true measure of how long it takes to start reading or writing data.
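As a quick worked example of both formulas (the 7200 RPM and 9 ms figures are assumed, illustrative numbers, not from the notes):

```python
rpm = 7200        # assumed disk rotation speed
seek_ms = 9.0     # assumed average seek time in ms

# Average rotational latency = 1/2 * (60 / RPM), converted to ms.
latency_ms = 0.5 * (60 / rpm) * 1000

# Access time = seek time + rotational latency.
access_ms = seek_ms + latency_ms

print(round(latency_ms, 2), round(access_ms, 2))  # 4.17 13.17
```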

5. IOPS (Input/Output Operations Per Second)

• Definition: Number of read/write operations a disk can perform in one second.

• Important for: Random access workloads like databases.

• Higher is better, especially for SSDs.

6. Disk Throughput

• Definition: Amount of data processed per unit time, factoring in overheads.

• Related to: Transfer rate, file system efficiency, and I/O patterns.

Parameter Unit Description

Seek Time ms Time to locate the correct track

Rotational Latency ms Time to rotate disk to correct sector

Data Transfer Rate MB/s, GB/s Speed of data transfer

Access Time ms Total time to access data (seek + rotation)

IOPS ops/sec Input/Output operations per second

Throughput MB/s Real-world sustained data transfer rate

Queue Depth count Number of concurrent I/O operations allowed

Cache Size MB, GB Internal memory for faster access

73. Distributed System Definition (Repeat)

Distributed System – Definition


A Distributed System is a collection of independent computers (also called nodes or
machines) that work together as a single system to provide a unified experience to users.
These computers are connected through a network and coordinate their actions by passing
messages.

Even though the system consists of multiple separate components, it appears to users and
applications as a single coherent system.

Key Characteristics of Distributed Systems:

1. Multiple Independent Components:


Each machine (node) operates independently but cooperatively.

2. Resource Sharing:
Components share hardware (like printers), data (files, databases), or processing
power.

3. Concurrency:
Multiple processes can run simultaneously across different nodes.

4. Scalability:
The system can grow by adding more nodes without much reconfiguration.

5. Fault Tolerance:
If one node fails, others can take over its responsibilities to avoid total system failure.

6. Transparency:
The user does not need to know where services or data are located. This includes:

o Location transparency (resources appear local),

o Access transparency (uniform interface),

o Replication transparency, etc.

74. Grid Computing

Grid Computing is a type of distributed computing where a network of computers work


together to perform large tasks, typically ones that require massive processing power or
data storage. Instead of relying on a single powerful supercomputer, grid computing takes
advantage of the combined processing power of multiple independent systems—often
geographically dispersed—to achieve a common goal.

Key Features of Grid Computing:


1. Resource Sharing:
Multiple computers (often called nodes) contribute CPU power, storage, memory, or
software to the grid.

2. Distributed Architecture:
Resources are spread out across multiple locations but are interconnected through a
network (often the internet).

3. Scalability:
New nodes can easily be added to the grid, allowing it to grow based on demand.

4. Heterogeneity:
Nodes in the grid can have different operating systems, hardware, or configurations.

5. Task Scheduling:
A central system or middleware manages how tasks are broken down and distributed
across the grid.

6. Fault Tolerance:
If one node fails, tasks can be redistributed to other nodes without stopping the
entire process.

Advantages of Grid Computing:

1. Grid computing environments are very modular in performance.

2. Grid computing ensures easy scaling of applications.

3. Grid computing adopts the use of open source, trust, transparency, and technology.

4. Grid computing allows seamless sharing and distribution of computing resources across networks.

5. Grid computing is capable of making better use of the hardware that already exists.

6. Grid computing has also proved useful in combining resources with other organizations.

7. The failure rate of grid computing is low.

Disadvantages of Grid Computing:

1. Grid computing requires robust and fast interconnection between resources.

2. It suffers from proprietary approaches.

Grid Computing vs. Other Models

Feature | Grid Computing | Cloud Computing | Cluster Computing

Resource Ownership | Shared from multiple organizations | Provided by a single cloud vendor | Usually from a single organization

Resource Location | Geographically dispersed | Centralized in data centers | Typically located together

Scalability | High | Very High | Moderate

Cost Model | Often volunteer or shared | Pay-as-you-go | Usually fixed


How It Works (Simplified):

1. A user submits a job (e.g., complex computation, rendering a film, simulating


molecules).

2. The job is split into smaller tasks.

3. A grid scheduler assigns these tasks to available nodes.

4. Each node processes its task independently.

5. Results are sent back to the central system and compiled into the final output.

Real-World Examples:

• Scientific Research:
SETI@home (Search for Extraterrestrial Intelligence) used idle home computers to
analyze radio signals from space.

• Medical Research:
Folding@home simulates protein folding to help understand diseases like Alzheimer's
or cancer.

• Finance:
Risk analysis and modeling that require processing vast amounts of data.

• Engineering/Graphics:
Complex 3D rendering for movies or simulations.

75. Size Scalability in Distributed Systems

What is Size Scalability?

Size scalability refers to the ability of a distributed system to grow in size — such as adding
more users, devices, nodes, or services — without degrading performance or reliability.

A size-scalable distributed system:

• Continues to work efficiently as it grows.

• Handles increased load without major redesign.

• Maintains performance, security, and responsiveness.


Why is it Important?

Distributed systems often span:

• Data centers

• Global networks

• IoT and sensor networks

• Cloud environments

As users and data grow exponentially, systems must be able to scale to handle:

• More requests

• More storage

• More processing power

Key Aspects of Size Scalability

1. Incremental Growth

o The system should allow adding new resources (e.g., nodes or servers)
without major disruption.

o Example: Adding new servers to a cloud system like AWS or Google Cloud.

2. Load Distribution

o As size grows, load should be evenly distributed to prevent bottlenecks.

o Load balancers and distributed hash tables (DHTs) help here.

3. Decentralized Control

o Centralized control becomes a bottleneck in large systems.

o Scalable systems use peer-to-peer or distributed algorithms.

4. Efficient Resource Discovery

o Finding data or services must remain fast, even as the system grows.

o Techniques: Caching, indexing, partitioning.

5. Minimal Performance Degradation

o System performance should not drop sharply when more nodes or clients are
added.
o Example: Adding 1000 users should not make a website 10x slower.

Design Strategies for Size Scalability

• Horizontal Scaling: Add more machines (not just upgrade existing ones).

• Partitioning (Sharding): Break large databases into smaller, manageable parts.

• Distributed Caching: Reduce the load on central storage.

• Eventual Consistency Models: Avoid strict locking to improve scalability.

Example:

Imagine a file-sharing system like BitTorrent:

• More users = more sharing = faster downloads.

• The system becomes more efficient as it grows — a sign of excellent size scalability.
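The partitioning strategy listed above can be sketched with simple hash-based sharding (a toy model; real systems often prefer consistent hashing so that adding a node moves fewer keys):

```python
import hashlib

def shard_for(key, num_nodes):
    """Map a key deterministically to one of num_nodes shards."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# Distribute 10,000 synthetic keys over 4 shards.
keys = [f"user{i}" for i in range(10000)]
counts = [0, 0, 0, 0]
for k in keys:
    counts[shard_for(k, 4)] += 1

print(counts)  # each shard gets roughly 2500 keys
```

The even spread is what keeps any single node from becoming a bottleneck as the system grows, which is the essence of size scalability.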

76. Architectural Style in Distributed OS

Architectural Styles in Distributed Operating Systems (DOS)


In distributed systems, architectural style refers to how system components
(like nodes, servers, or services) are organized and interact across the network.
These styles impact performance, fault tolerance, scalability, and complexity
of the system.
Let’s dive into the main architectural styles found in Distributed Operating
Systems (DOS):

1. Layered Architecture

Structure:

• System is organized in layers, each layer built upon the one below it.
• Each layer performs specific functions and interacts only with adjacent
layers.

Example Layers:
• Hardware
• Communication
• Middleware
• Services
• Applications

Advantages:

• Simplicity and modular design


• Easy debugging and testing
• Layer isolation helps manage complexity

Disadvantages:

• Not very flexible


• Performance overhead due to multiple layers

2. Client-Server Architecture

Structure:

• Clients send requests.


• Servers process and respond to those requests.
• Communication is over a network.

Example:

• A distributed file system: client requests a file → server delivers the file.

Advantages:
• Simple to implement and understand
• Centralized control and security
• Scalable with added servers

Disadvantages:
• Server can become a bottleneck
• Single point of failure unless redundancy is used

3. Peer-to-Peer (P2P) Architecture

Structure:

• All nodes (peers) are equal, each acting as both client and server.
• No central coordination.

Example:

• BitTorrent, blockchain networks

Advantages:

• Highly scalable
• No central point of failure
• Efficient resource utilization

Disadvantages:
• Complex coordination
• Less control over security and consistency

4. Object-Based Architecture

Structure:

• System is composed of interacting objects (like in OOP).


• Each object encapsulates data and behavior, and communicates using
remote method calls.

Example:

• CORBA, Java RMI

Advantages:
• Good for complex systems
• Natural mapping with object-oriented design

Disadvantages:

• Communication overhead
• Performance can degrade with many objects

5. Microkernel Architecture

Structure:

• Only minimal services (e.g., communication, memory, process


management) are in the kernel.
• Other services run as user-space servers and interact via messaging.

Advantages:
• High modularity and security
• Easy to update or replace components
• Better fault isolation

Disadvantages:

• Messaging overhead can reduce performance

6. Hybrid or Modular Architecture


• Combines features of other styles (e.g., client-server + microkernel).
• Used in complex, large-scale systems like cloud operating systems.

77. Problem with SCAN Scheduling


Problem with SCAN Disk Scheduling Algorithm
Quick Recap: What is SCAN?

The SCAN algorithm, also called the elevator algorithm, moves the disk head in one
direction, servicing all pending requests, and then reverses direction to service requests on
the return path — just like an elevator picking up people going up, then coming back down.

Problems / Drawbacks of SCAN Scheduling:

1. Long Wait Time for Requests Just Behind the Head

If a request arrives just after the head has passed its position, it must wait until the head
goes to the end and comes back.

• This causes unfairness and long delays for some requests.

Example:

• Head at 50, moving toward 199.

• A request at 49 just arrives — it must wait for the head to reach 199 and return to
49.

2. High Seek Time for Light Loads

When the number of pending requests is small, SCAN may cause the head to move long
distances unnecessarily.

• Performance doesn't scale well in low load conditions.

3. Unfairness to Edge Requests

Requests near the ends of the disk (e.g., near cylinder 0 or 199) can get less frequent
service if most requests are near the center.

• These edge requests might be serviced only once every full pass.

4. More Head Movement Than Necessary

• The head travels all the way to the end of the disk, even if there are no requests at
the extreme end.
• This can lead to unnecessary seek time.

5. Reduced Performance Compared to LOOK

LOOK is a better alternative that avoids going to the end if there are no pending requests.

• SCAN's performance can suffer in scenarios where many requests are clustered in
one area.

6. No Priority Handling

SCAN treats all requests equally, regardless of:

• Age of the request

• Priority of the process

• Real-time constraints

So, real-time systems may suffer with SCAN.

78. Problem with LOOK Scheduling

LOOK is a variant of the SCAN (Elevator) disk scheduling algorithm. In LOOK, the disk arm only moves as far as the last request in each direction, instead of going all the way to the end of the disk (as in SCAN). Similar to SCAN, it may still cause unfair wait times for lower-priority requests.

How LOOK Works (Quick Recap):

• The head moves toward the end of the disk but stops at the last request in that direction.

• Then it reverses direction and continues servicing requests.

• It "looks ahead" for pending requests and doesn't go to the disk's physical end unless there's a request there.

Problems / Drawbacks of LOOK Scheduling:

1. Starvation of Requests in the Opposite Direction

If requests keep arriving in the direction the head is moving, requests in the opposite direction may wait a long time. For example, if the head is moving up (towards higher-numbered tracks) and new requests keep appearing in that direction, the head might never reverse to serve requests in the lower direction.

2. Not Fair for All Requests

LOOK can lead to unfair scheduling: requests near the current head position are served faster than those further away, so lower-priority or distant requests may experience higher wait times.

3. Overhead in Changing Direction

Like SCAN, when the head changes direction it takes extra time to reverse and start moving the other way. This reversal doesn't always improve performance significantly compared to one-direction algorithms like C-LOOK.

4. Can Be Outperformed by C-LOOK

In many cases, C-LOOK provides better performance because it eliminates the back-and-forth motion and provides more uniform wait times. LOOK may result in slightly longer average seek times than C-LOOK on large request queues.

5. Requires Additional Logic to "Look Ahead"

Unlike SCAN, which simply goes to the end, LOOK must check for pending requests in the current direction before moving, which adds implementation complexity.


79. Banker's Algorithm Example

Given:

• Max, Allocated, and Available matrices

• Use the safety algorithm to determine whether the system is in a safe state.

80. Examples of Disk Scheduling Algorithms

• FCFS: 10 → 22 → 20 → 2 → 40

• SSTF: Serve closest request next

• SCAN: Sweep across the disk

• LOOK: Sweep only up to last request

• C-SCAN: One-way and return to start

• C-LOOK: Like C-SCAN but jump to lowest request

Worked examples follow for:

1. Banker’s Algorithm

2. Disk Scheduling Algorithms (for FCFS, SSTF, SCAN, LOOK, C-SCAN, C-LOOK)

1. Banker's Algorithm – Example

Given:
Process | Max | Allocation | Need

P0 | 7 5 3 | 0 1 0 | 7 4 3

P1 | 3 2 2 | 2 0 0 | 1 2 2

P2 | 9 0 2 | 3 0 2 | 6 0 0

P3 | 2 2 2 | 2 1 1 | 0 1 1

P4 | 4 3 3 | 0 0 2 | 4 3 1

Available Resources: 3 3 2

Step 1: Check if system is in a safe state

Let’s apply the Banker’s Algorithm.

Start:

• Work = Available = (3 3 2)

• Finish = [False, False, False, False, False]

Step-by-step:

1. P1: Need (1 2 2) ≤ Work (3 3 2) →


→ Work becomes (3+2, 3+0, 2+0) = (5 3 2), Finish[P1] = True

2. P3: Need (0 1 1) ≤ Work (5 3 2) →


→ Work becomes (5+2, 3+1, 2+1) = (7 4 3), Finish[P3] = True

3. P0: Need (7 4 3) ≤ Work (7 4 3) →


→ Work becomes (7+0, 4+1, 3+0) = (7 5 3), Finish[P0] = True

4. P2: Need (6 0 0) ≤ Work (7 5 3) →


→ Work becomes (7+3, 5+0, 3+2) = (10 5 5), Finish[P2] = True

5. P4: Need (4 3 1) ≤ Work (10 5 5) →


→ Work becomes (10+0, 5+0, 5+2) = (10 5 7), Finish[P4] = True

Safe Sequence: P1 → P3 → P0 → P2 → P4

System is in a safe state.
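The safety check above can be sketched in a few lines of Python (matrices copied from the example; the scan restarts from P0 after each grant, which reproduces the same safe sequence):

```python
need  = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
alloc = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
work  = [3, 3, 2]                  # Available
finish = [False] * 5
sequence = []

while not all(finish):
    for p in range(5):
        # Grant the first unfinished process whose Need fits in Work,
        # then pretend it runs to completion and releases its allocation.
        if not finish[p] and all(need[p][j] <= work[j] for j in range(3)):
            work = [work[j] + alloc[p][j] for j in range(3)]
            finish[p] = True
            sequence.append(f"P{p}")
            break
    else:
        break                      # no candidate process -> unsafe state

print(all(finish), sequence)  # True ['P1', 'P3', 'P0', 'P2', 'P4']
```

Note that safe sequences are not unique; a scan that keeps going instead of restarting would find a different (equally valid) ordering.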

2. Disk Scheduling Algorithms Example

Given Disk Queue:


Request order = [98, 183, 37, 122, 14, 124, 65, 67]
Initial head position = 53

a) FCFS (First-Come-First-Serve)

Order: 53 → 98 → 183 → 37 → 122 → 14 → 124 → 65 → 67


Total head movement:
= |98-53| + |183-98| + |37-183| + |122-37| + |14-122| + |124-14| + |65-124| + |67-65|
= 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 cylinders

b) SSTF (Shortest Seek Time First)

Start at 53, choose the nearest request each time.

Steps:

1. From 53 → 65 (12)

2. From 65 → 67 (2)

3. From 67 → 37 (30)

4. From 37 → 14 (23)

5. From 14 → 98 (84)

6. 98 → 122 (24)

7. 122 → 124 (2)

8. 124 → 183 (59)

Total movement = 12 + 2 + 30 + 23 + 84 + 24 + 2 + 59 = 236 cylinders

c) SCAN (Elevator Algorithm)

Assume moving towards 0 initially.

Order: 53 → 37 → 14 → 0 → then reverse → 65 → 67 → 98 → 122 → 124 → 183

Movement:
= |53-37| + |37-14| + |14-0| + |0-65| + |65-67| + |67-98| + |98-122| + |122-124| + |124-
183|
= 16 + 23 + 14 + 65 + 2 + 31 + 24 + 2 + 59 = 236 cylinders
d) LOOK

Go only as far as the last request in each direction (no edge of disk).

Descending first: 53 → 37 → 14
Then reverse: 65 → 67 → 98 → 122 → 124 → 183

Movement = 16 + 23 + (65-14) + 2 + 31 + 24 + 2 + 59 = 208 cylinders

e) C-SCAN

Only service in one direction, go to end, jump to beginning, continue.

Direction: Towards higher

Order: 53 → 65 → 67 → 98 → 122 → 124 → 183 → (end 199) → 0 → 14 → 37

Movement:
= (65-53) + (67-65) + (98-67) + (122-98) + (124-122) + (183-124) + (199-183) + (199-0) + (14-
0) + (37-14)
= 12 + 2 + 31 + 24 + 2 + 59 + 16 + 199 + 14 + 23 = 382 cylinders

f) C-LOOK

Only go as far as highest request, then jump to lowest and continue.

Order: 53 → 65 → 67 → 98 → 122 → 124 → 183 → (jump) → 14 → 37

Movement =
12 + 2 + 31 + 24 + 2 + 59 + (183-14) + 23 = 322 cylinders
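These head-movement totals are easy to verify with a short script; FCFS and SSTF are sketched here (the sweep-based algorithms follow the same pattern of summing absolute seek distances):

```python
requests = [98, 183, 37, 122, 14, 124, 65, 67]
head = 53

def fcfs(head, queue):
    """Service requests in arrival order; sum the seek distances."""
    total = 0
    for r in queue:
        total += abs(r - head)
        head = r
    return total

def sstf(head, queue):
    """Always service the pending request closest to the head."""
    pending, total = list(queue), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(fcfs(head, requests), sstf(head, requests))  # 640 236
```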
