Unit 3 Notes of Operating Systems With Linux For B.SC AI & ML Sem III

The document provides an overview of operating systems with a focus on deadlock management, memory management techniques, and virtual memory concepts. It covers definitions, characteristics, and algorithms related to deadlocks, memory allocation methods, fragmentation types, and various memory management commands and system calls in Linux. Key topics include deadlock prevention, detection, recovery, paging, segmentation, and virtual memory management strategies.

SEMESTER - III

UNIT III (Operating Systems with Linux)

1. Deadlock System Models

Definition:
A deadlock occurs when every process in a set is waiting for a resource held by another process in the same set, so none of them can ever proceed.

System Model Assumptions:

 Resources:

o Examples: CPU, printer, disk, files, semaphores.

o Each resource type may have one or more instances.

 Process Execution:

1. Request resource.

2. Use resource.

3. Release resource.

Resource Allocation State:


The OS keeps a record of:

 Which resources are free.

 Which resources are allocated to which processes.

Diagram – Resource Usage Cycle:


Request → Use → Release

2. Deadlock Characterization

Deadlock occurs if all 4 Coffman conditions hold:

1. Mutual Exclusion → Only one process can use a resource at a time.

2. Hold and Wait → A process holds one resource and waits for another.

3. No Preemption → Resources cannot be forcibly taken away.

4. Circular Wait → A cycle of processes exists, each waiting for a resource held by the next.

Example Circular Wait:


P1 → R1 → P2 → R2 → P3 → R3 → P1

Break any one of these, and deadlock cannot occur.


3. Resource Allocation Graph (RAG)

Purpose:
Visual representation of processes, resources, and allocations.

Symbols:

 Process (P1, P2, …) → Circle

 Resource (R1, R2, …) → Rectangle

 Instance of Resource → Dot inside rectangle

Edges:

 Request Edge: Pi → Rj (process asking for resource)

 Assignment Edge: Rj → Pi (resource assigned to process)

Example Diagram:


(P1) ○ → □ (R1) → ○ (P2)

Deadlock check:

 No cycle → No deadlock.

 Cycle with single instance per resource → Deadlock exists.

 Cycle with multiple instances → Deadlock may exist.
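The deadlock check above (cycle ⇒ deadlock, for single-instance resources) can be sketched in Python; the graph encoding and names are illustrative, not part of the syllabus:

```python
# Deadlock check on a Resource-Allocation Graph with one instance per
# resource type, where a cycle implies deadlock. Nodes are process and
# resource names; edges are request (P -> R) and assignment (R -> P) edges.

def has_cycle(graph):
    """Return True if the directed graph contains a cycle (DFS with colors)."""
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on stack / finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:  # back edge -> cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# P1 -> R1 -> P2 -> R2 -> P3 -> R3 -> P1 (the circular wait shown earlier)
rag = {"P1": ["R1"], "R1": ["P2"], "P2": ["R2"],
       "R2": ["P3"], "P3": ["R3"], "R3": ["P1"]}
print(has_cycle(rag))   # True -> deadlock
```

Removing any edge of the cycle makes `has_cycle` return False, matching the rule that no cycle means no deadlock.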

4. Deadlock Prevention

Goal: Ensure one or more Coffman conditions never hold.


| Coffman Condition | How to Prevent |
|-------------------|----------------|
| Mutual Exclusion  | Make resources shareable where possible (e.g., read-only files). |
| Hold and Wait     | Request all resources at once, or release held resources before requesting new ones. |
| No Preemption     | Preempt resources when needed; let the process retry later. |
| Circular Wait     | Number each resource; request resources only in increasing order. |

Note: Prevention can reduce system efficiency.
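The circular-wait rule (acquire resources only in increasing number order) can be demonstrated with Python threads; this is a toy sketch, and the resource numbers and thread names are illustrative:

```python
# Circular-wait prevention: every thread acquires its locks in one
# global order (sorted by resource number), so no wait cycle can form.
import threading

resources = {1: threading.Lock(), 2: threading.Lock()}
done = []

def worker(name, needed):
    order = sorted(needed)          # enforce increasing-order acquisition
    for r in order:
        resources[r].acquire()
    done.append(name)               # critical section using both resources
    for r in reversed(order):
        resources[r].release()

# Both threads want resources {1, 2}, stated in different orders; sorting
# makes the actual acquisition order identical, so deadlock is impossible.
t1 = threading.Thread(target=worker, args=("T1", [1, 2]))
t2 = threading.Thread(target=worker, args=("T2", [2, 1]))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(done))   # ['T1', 'T2']
```

Without the `sorted()` call, T1 holding lock 1 while T2 holds lock 2 could produce exactly the hold-and-wait cycle described above.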

5. Deadlock Detection and Recovery

When used: If system does not prevent or avoid deadlocks.

Detection:

 Single instance per resource → Use a Wait-For Graph (replace resources in the RAG with direct process-to-process edges). A cycle ⇒ deadlock.

 Multiple instances per resource → Use a detection algorithm similar to Banker's.

Recovery Methods:

1. Process Termination

o Abort all deadlocked processes, or abort one at a time until resolved.

o Select victim based on priority, resources held, etc.

2. Resource Preemption

o Take resource from some process.

o Rollback process to previous safe point.

6. Banker's Algorithm (Deadlock Avoidance)

Idea:
Always keep system in a Safe State (some sequence exists where all processes can finish).

Data Structures:

 Available: Vector showing free instances of each resource type.

 Max: Max demand of each process.

 Allocation: Currently allocated resources.

 Need: Max – Allocation.


Safety Algorithm (Check Safe State)

1. Work = Available
Finish[i] = false for all processes.

2. Find a process Pi with Finish[i] == false and Need[i] ≤ Work.

3. If found:

o Work = Work + Allocation[i]

o Finish[i] = true

o Repeat Step 2.

4. If all Finish[i] == true → Safe state.

Resource Request Algorithm

1. If Request[i] ≤ Need[i] → OK, else error.

2. If Request[i] ≤ Available → OK, else wait.

3. Temporarily allocate resources:

o Available -= Request[i]

o Allocation[i] += Request[i]

o Need[i] -= Request[i]

4. Run Safety Algorithm:

o If safe → Grant request.

o If unsafe → Rollback.

Banker’s Algorithm Example Diagram:


[Available]

[Max Matrix]

[Allocation Matrix]

[Need Matrix]

Safety Check

Safe or Unsafe
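The Safety Algorithm above can be sketched in Python. The matrices below are a commonly used illustrative example (five processes, three resource types), not data from these notes:

```python
# Banker's Safety Algorithm sketch: returns a safe sequence, or None if
# the state is unsafe.
def is_safe(available, max_demand, allocation):
    n = len(allocation)                     # number of processes
    m = len(available)                      # number of resource types
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]              # Need = Max - Allocation
    work = list(available)                  # Work = Available
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # Work = Work + Allocation[i]
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return None                     # no runnable process -> unsafe
    return sequence                         # all Finish[i] true -> safe

available  = [3, 3, 2]
max_demand = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
print(is_safe(available, max_demand, allocation))   # [1, 3, 0, 2, 4]
```

A returned sequence such as P1 → P3 → P0 → P2 → P4 is the "some sequence exists where all processes can finish" from the definition of a safe state.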

Memory Management

1. Main Memory
Definition:
Main Memory (RAM – Random Access Memory) is the primary storage of the computer that stores programs
and data currently in use. It is directly accessible by the CPU.
Characteristics:
 Volatile: Contents are lost when power is turned off.
 Fast access compared to secondary storage.
 Stores both instructions and data.
 Divided into a set of addressable units (bytes or words).
Memory Hierarchy Diagram:
CPU Registers (Fastest, Smallest)
Cache Memory (Very Fast)
Main Memory (RAM)
Secondary Storage (HDD/SSD)
Tertiary Storage (Optical Discs, Tapes)
____________________________________________________________________________________
2. Contiguous Memory Allocation
Definition:
A memory management technique where each process is allocated a single contiguous block of memory
addresses.
How it works:
 Main memory is divided into fixed-size or variable-size partitions.
 Each process occupies one contiguous partition.
Advantages:
 Simple to implement.
 Easy to keep track of memory allocation.
Disadvantages:
 Leads to fragmentation.
 Limits process size to available partition size.
Diagram:
Main Memory:
+------------+
|     OS     |
+------------+
| Process 1  |
+------------+
| Process 2  |
+------------+
| Free Space |
+------------+

3. Fragmentation
Fragmentation occurs when memory is used inefficiently, leading to wasted space.
Types:
1. Internal Fragmentation
o Wasted space inside allocated memory blocks.

o Happens when fixed-size blocks are assigned to processes smaller than the block size.

2. External Fragmentation
o Wasted space outside allocated memory blocks.

o Happens when free memory is scattered in small chunks between allocated blocks.

Diagram:
Example of External Fragmentation:
[Proc1][Free(2K)][Proc2][Free(1K)][Proc3][Free(3K)]
Total Free = 6K but not contiguous → cannot store a 6K process.
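The external-fragmentation example above can be verified with a few lines of Python (hole sizes taken from the diagram):

```python
# External fragmentation demo: total free memory is sufficient, yet no
# single contiguous hole can hold the request.
free_holes = [2048, 1024, 3072]          # the 2K, 1K and 3K holes above
request = 6 * 1024                        # an incoming 6K process

total_free = sum(free_holes)
fits_contiguously = any(hole >= request for hole in free_holes)
print(total_free >= request)   # True  (6K is free in total)
print(fits_contiguously)       # False (largest hole is only 3K)
```

Compaction (sliding allocated blocks together) or paging would make the 6K of scattered space usable.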

4. Paging
Definition:
A memory management technique that eliminates external fragmentation by dividing both the logical memory
and physical memory into fixed-size blocks.
 Logical Memory Blocks: Pages
 Physical Memory Blocks: Frames
 Each page maps to a frame.
Advantages:
 No external fragmentation.
 Processes can be placed in non-contiguous memory.
Disadvantages:
 Small memory overhead for page tables.
 Internal fragmentation possible (last page not fully used).
Diagram:
Logical Memory:
Page 0
Page 1
Page 2
Page 3
Physical Memory:
Frame 2 <- Page 0
Frame 5 <- Page 1
Frame 1 <- Page 2
Frame 4 <- Page 3

Page Table Example:

| Page No. | Frame No. |
|----------|-----------|
|    0     |     2     |
|    1     |     5     |
|    2     |     1     |
|    3     |     4     |
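Assuming a page size of 1 KB (the notes do not fix one), the page table above turns into a short address-translation sketch:

```python
# Logical -> physical address translation through a page table.
PAGE_SIZE = 1024                        # assumed page size: 1 KB
page_table = {0: 2, 1: 5, 2: 1, 3: 4}  # page no. -> frame no. (table above)

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # a missing page would be a page fault
    return frame * PAGE_SIZE + offset

# Logical address 2050 lies in page 2 at offset 2; page 2 maps to frame 1.
print(translate(2050))   # 1026
```

The offset is unchanged by translation; only the page number is replaced by the frame number, which is why paging needs no contiguity in physical memory.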

5. Segmentation
Definition:
A memory management technique where logical memory is divided into variable-sized segments based on
the logical divisions of a program (e.g., code, data, stack).
Key Idea:
Each segment has:
 Base Address: Starting location in physical memory.
 Limit: Length of the segment.
Advantages:
 Matches logical program structure.
 Supports sharing and protection.
Disadvantages:
 External fragmentation possible.
 More complex management.
Diagram:
Logical Memory:
Segment 0: Code
Segment 1: Data
Segment 2: Stack

Segment Table:
| Segment | Base | Limit |
|---------|------|-------|
| 0 | 1000 | 400 |
| 1 | 2000 | 300 |
| 2 | 3000 | 200 |

Physical Memory:
[Code][Data][Stack]
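Using the segment table above, translation with the base/limit protection check can be sketched in Python (illustrative):

```python
# Segment-table translation: physical = base + offset, with a limit check.
segment_table = {0: (1000, 400),   # segment -> (base, limit), table above
                 1: (2000, 300),
                 2: (3000, 200)}

def seg_translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                 # protection: offset must be < limit
        raise IndexError("segmentation fault: offset exceeds segment limit")
    return base + offset

print(seg_translate(1, 150))   # 2150  (offset 150 into the Data segment)
```

An offset of 350 into segment 1 would exceed its limit of 300 and trap to the OS, which is how segmentation provides protection.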

Quick Summary Table

| Technique             | Division Type          | Fragmentation Type | Contiguity Required | Complexity |
|-----------------------|------------------------|--------------------|---------------------|------------|
| Contiguous Allocation | Fixed/Variable         | Internal/External  | Yes                 | Simple     |
| Paging                | Fixed-size blocks      | Internal           | No                  | Moderate   |
| Segmentation          | Variable-size segments | External           | No                  | Higher     |

Virtual Memory
1. Virtual Memory
Definition:
Virtual memory is a memory management technique that gives an application the illusion that it has
contiguous and large memory space, even though the physical memory (RAM) might be smaller.
Key idea:
 Programs are divided into pages (fixed size).
 These pages can be stored partly in RAM and partly in disk (secondary storage).
 The OS automatically moves data between RAM and disk when needed.
Benefits:
1. Run programs larger than physical RAM.
2. Isolation – each process works in its own memory space.
3. Efficient memory utilization.
Diagram – Virtual Memory Concept:
Process's Virtual Address Space (e.g., 4 GB)
┌─────────────────────┐
│ Page 0 │
│ Page 1 │
│ Page 2 │
│ Page 3 │
│ ... │
└─────────────────────┘
↓ Mapping (via Page Table)
Physical Memory (e.g., 1 GB RAM)
┌─────────────────────┐
│ Frame 5 ← Page 0 │
│ Frame 2 ← Page 3 │
│ Frame 8 ← Page 1 │
│ ... │
└─────────────────────┘
Missing pages → Stored on disk (Swap Area)

2. Demand Paging
Definition:
In demand paging, pages are loaded into RAM only when they are needed during program execution (not
all at once).
How it works:
1. CPU tries to access a page.
2. If page is in RAM → execute.
3. If page is not in RAM → page fault occurs → OS loads it from disk into RAM.
Advantages:
 Saves memory (only necessary pages are loaded).
 Can run large programs with small RAM.
Disadvantage:
 Page fault delay – disk access is slow.
Diagram – Demand Paging Flow:
CPU Request → Page in Memory? → Yes → Continue
                    ↓ No
            Page Fault occurs
                    ↓
     OS brings page from Disk → RAM
                    ↓
          Resume execution

3. Page Replacement
When RAM is full and a new page is needed, the OS must replace an existing page in RAM with the required
one.
This decision is called Page Replacement.
Goals:
 Minimize the page fault rate.
 Keep frequently used pages in memory.

4. Page Replacement Algorithms


4.1 FIFO (First-In-First-Out)
 Replace the oldest loaded page.
 Easy but might replace frequently used pages.

Example:

Frames: 3
Pages: 7, 0, 1, 2, 0, 3, 0, 4
Faults: Replace in the order they came.
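A minimal FIFO simulation (Python sketch) reproduces the fault count for this reference string:

```python
# FIFO page replacement: on a fault with full frames, evict the page
# that was loaded earliest.
from collections import deque

def fifo_faults(pages, n_frames):
    frames = deque()                    # left end = oldest loaded page
    faults = 0
    for p in pages:
        if p not in frames:             # page fault
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()        # evict the oldest page
            frames.append(p)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4], 3))   # 7
```

Only the second access to page 0 hits; every other access faults, illustrating that FIFO ignores how recently a page was used.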

4.2 Optimal (OPT)


 Replace the page that will not be used for the longest time in the future.
 Best theoretically, but needs future knowledge (used for benchmarking).

4.3 LRU (Least Recently Used)


 Replace the page that hasn’t been used for the longest time in the past.
 Tracks usage history.

4.4 Clock Algorithm (Second Chance)


 Pages arranged in a circular list.
 Each has a reference bit (0 or 1).
 On replacement, check bit:
o If 0 → replace.

o If 1 → set to 0 and move to next.
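The second-chance policy above can be sketched in Python (the frame count and reference string below are illustrative):

```python
# Clock (second chance) replacement: a circular hand scans the frames;
# a set reference bit buys the page one more pass before eviction.
def clock_faults(pages, n_frames):
    frames = [None] * n_frames
    ref = [0] * n_frames
    hand = 0
    faults = 0
    for p in pages:
        if p in frames:
            ref[frames.index(p)] = 1         # hit: set reference bit
            continue
        faults += 1
        while ref[hand] == 1:                # bit 1 -> clear it, move on
            ref[hand] = 0
            hand = (hand + 1) % n_frames
        frames[hand] = p                     # bit 0 (or empty) -> replace
        ref[hand] = 1
        hand = (hand + 1) % n_frames
    return faults

print(clock_faults([7, 0, 1, 2, 0, 3, 0], 3))   # 5
```

On this string Clock matches LRU's fault count while only needing one bit per frame instead of full usage history.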


Diagram – LRU Example (3 Frames):
Pages: 7, 0, 1, 2, 0, 3, 0
Step by step:
7 | - | - → Fault
7 | 0 | - → Fault
7 | 0 | 1 → Fault
2 replaces 7 (LRU) → Fault
...
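The LRU trace above can be completed with a short simulation (Python sketch; an OrderedDict stands in for the usage-history tracking):

```python
# LRU page replacement using an OrderedDict as a recency list.
from collections import OrderedDict

def lru_faults(pages, n_frames):
    frames = OrderedDict()              # first entry = least recently used
    faults = 0
    for p in pages:
        if p in frames:
            frames.move_to_end(p)       # hit: mark as most recently used
            continue
        faults += 1                     # page fault
        if len(frames) == n_frames:
            frames.popitem(last=False)  # evict least recently used page
        frames[p] = True
    return faults

print(lru_faults([7, 0, 1, 2, 0, 3, 0], 3))   # 5
```

The five faults are exactly the steps marked "Fault" in the trace: 7, 0, 1, then 2 replacing 7, then 3 replacing 1; both accesses to 0 after loading are hits.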

5. Allocation of Frames
Definition:
How physical frames (RAM slots for pages) are given to processes.
Methods:
1. Equal Allocation – each process gets the same number of frames.
2. Proportional Allocation – frames allocated based on process size.
3. Priority Allocation – more frames to high-priority processes.
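Proportional allocation can be sketched as follows; the process sizes and frame count are illustrative:

```python
# Proportional frame allocation: frames_i = (size_i / total_size) * m,
# rounded down (any leftover frames are handed out by a separate policy).
def proportional_allocation(sizes, total_frames):
    total = sum(sizes.values())
    return {p: (s * total_frames) // total for p, s in sizes.items()}

# Two processes of 10 KB and 127 KB sharing 62 free frames:
print(proportional_allocation({"P1": 10, "P2": 127}, 62))
# {'P1': 4, 'P2': 57}  (one frame left over for the allocator to assign)
```

Equal allocation would instead give each process 31 frames, wasting most of P1's share, which is why proportional allocation is usually preferred for processes of very different sizes.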

6. Thrashing
Definition:
Thrashing happens when the system spends more time servicing page faults (swapping pages in and out of RAM) than executing
actual instructions.
Causes:
 Too many processes running.
 Too few frames allocated to each process.
 High page fault rate.
Symptoms:
 CPU utilization drops sharply even though the system is busy (no real progress).
 Disk I/O extremely high.
Solution:
 Reduce the degree of multiprogramming (run fewer processes).
 Give processes more frames.
Thrashing Diagram:
High CPU demand → More processes → Less memory per process
→ More Page Faults → More Swapping → Less CPU time → Thrashing
Linux Operating System – Memory Management
Memory management in Linux is responsible for:
 Allocating and freeing memory for processes.
 Monitoring memory usage.
 Handling swapping, paging, and caching.
 Providing system calls for processes to request or release memory.

1. Memory Management Commands in Linux

| Command | Purpose | Example Usage | Output Description |
|---------|---------|---------------|--------------------|
| free | Shows memory usage (RAM & swap) | free -h | Displays total, used, and free memory in human-readable format. |
| top | Real-time view of memory & CPU usage | top | Shows running processes, %MEM usage, swap, cache. |
| htop | Improved version of top (needs installation) | htop | Interactive, color-coded memory and CPU monitor. |
| vmstat | Virtual memory statistics | vmstat 2 | Shows memory, swap, CPU stats every 2 sec. |
| cat /proc/meminfo | Detailed memory info | cat /proc/meminfo | Displays system memory parameters directly from /proc. |
| ps | Show process-specific memory | ps aux --sort=-%mem | Lists processes sorted by memory usage. |
| smem | Report memory usage per process (needs installation) | smem | Shows proportional set size (PSS) for processes. |
| sar | Historical memory usage | sar -r 1 3 | Shows memory stats at 1-second intervals, 3 times. |
| watch free -h | Auto-refresh memory usage | watch -n 2 free -h | Refreshes every 2 sec. |
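The /proc/meminfo format shown above can also be read programmatically. A Python sketch, using made-up sample values so it runs anywhere (on Linux, read the real /proc/meminfo instead):

```python
# Parse /proc/meminfo-style "Key:   value kB" lines into a dict.
SAMPLE = """MemTotal:       16303428 kB
MemFree:         8123456 kB
MemAvailable:   12345678 kB
"""

def parse_meminfo(text):
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        info[key] = int(rest.split()[0])   # numeric value, in kB
    return info

mem = parse_meminfo(SAMPLE)
print(mem["MemTotal"])    # 16303428
```

On a real system, replace `SAMPLE` with `open("/proc/meminfo").read()`; the field names and kB units are the same.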

2. Memory Management Related System Calls


System calls allow programs to interact with memory at the kernel level.

| System Call | Purpose | Syntax | Description |
|-------------|---------|--------|-------------|
| brk() | Change end of data segment | int brk(void *end_data_segment); | Expands/shrinks heap memory. |
| sbrk() | Increment data segment size | void *sbrk(intptr_t increment); | Moves program break by specified bytes. |
| mmap() | Map files/devices into memory | void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset); | Allocates memory by mapping a file or anonymous region. |
| munmap() | Unmap memory | int munmap(void *addr, size_t length); | Frees a memory mapping created by mmap(). |
| mlock() | Lock memory | int mlock(const void *addr, size_t len); | Prevents swapping of memory to disk. |
| munlock() | Unlock memory | int munlock(const void *addr, size_t len); | Allows memory to be swapped again. |
| shmget() | Create shared memory segment | int shmget(key_t key, size_t size, int shmflg); | For inter-process communication. |
| shmat() | Attach shared memory | void *shmat(int shmid, const void *shmaddr, int shmflg); | Maps shared memory to process address space. |
| shmdt() | Detach shared memory | int shmdt(const void *shmaddr); | Removes shared memory from process address space. |
| shmctl() | Control shared memory | int shmctl(int shmid, int cmd, struct shmid_ds *buf); | Performs operations on shared memory segments. |
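Python's standard mmap module wraps the mmap()/munmap() calls, which makes them easy to demonstrate without writing C. A small anonymous-mapping sketch:

```python
# An anonymous, private mapping via Python's mmap module -- roughly
# mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0).
import mmap

region = mmap.mmap(-1, 4096)   # fd = -1 -> anonymous mapping, 4096 bytes
region[:5] = b"hello"          # write through the mapping like a bytearray
print(region[:5])              # b'hello'
region.close()                 # releases the mapping, like munmap()
```

Mapping a real file instead (passing its file descriptor rather than -1) gives the file-backed mmap() usage from the table above.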

3. Diagram – Linux Memory Layout


Here’s a simple labeled diagram you can draw in exams:
+----------------------+
| Command-line Args |
+----------------------+
| Environment Variables|
+----------------------+
| Stack (local vars) | <- grows down
+----------------------+
| Heap (malloc/free) | <- grows up
+----------------------+
| BSS (uninitialized) |
+----------------------+
| Data (initialized) |
+----------------------+
| Text (program code) |
+----------------------+

(In exams, use arrows to show stack grows downward and heap upward.)
4. Key Points for Exams
 free -h → Quick memory check.
 top → Live view of memory usage.
 cat /proc/meminfo → Detailed kernel memory stats.
 System calls like mmap() and brk() manage process memory.
 Shared memory is handled using shmget(), shmat(), shmdt(), shmctl().
 Linux uses virtual memory with paging for efficiency.

----- The End ----
