2. Processes
I. Process Concepts and States
Q1. What is a process? Explain the different states a process passes through in its life cycle (with a state transition chart).
A process is an executing instance of a program.
Unlike a static program stored on disk, a process is a dynamic entity that carries its own current activity, which includes the program counter, CPU registers, and allocated resources (such as
memory, files, and I/O devices).
In simple terms, while a program is a passive set of instructions, a process is that program in action.
States in a Process’s Life Cycle
In the simplest model, a process passes through three main states:
1. Running
The process is actively executing on the CPU.
It is performing its instructions and using the CPU at that moment.
2. Ready
The process is loaded in memory and prepared to run but is waiting for CPU time.
Although it is ready for execution, the CPU is currently busy with other processes.
3. Blocked (Waiting)
The process cannot proceed because it is waiting for an external event (such as I/O completion or input availability).
Even if the CPU becomes available, a blocked process remains suspended until the event occurs.
State Transition Diagram Overview
A simplified state transition chart for a process is as follows:
Running → Blocked: The process must wait for an external event (e.g., I/O completion) and is blocked until that event occurs.
Running → Ready: The process’s time slice expires or a higher priority process preempts it.
Ready → Running: The scheduler assigns CPU time to the process.
Blocked → Ready: Once the external event occurs, the process becomes ready again.
These transitions ensure efficient resource management and smooth multitasking within the operating system.
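To make the chart concrete, here is a minimal sketch in C that encodes the three states and their legal transitions as an enum plus a transition function. The state and event names are illustrative, not taken from any real kernel:

```c
#include <stdio.h>

/* The three-state process model: Ready, Running, Blocked. */
typedef enum { READY, RUNNING, BLOCKED } proc_state;
typedef enum { DISPATCH, TIMEOUT, WAIT_EVENT, EVENT_DONE } sched_event;

/* Apply one transition; illegal combinations leave the state unchanged
 * (note there is no direct Blocked -> Running edge). */
proc_state transition(proc_state s, sched_event e) {
    switch (s) {
    case READY:   if (e == DISPATCH)   return RUNNING; break; /* scheduler assigns CPU */
    case RUNNING: if (e == TIMEOUT)    return READY;          /* time slice expires    */
                  if (e == WAIT_EVENT) return BLOCKED; break; /* waits for I/O         */
    case BLOCKED: if (e == EVENT_DONE) return READY;   break; /* I/O completes         */
    }
    return s;
}

int main(void) {
    const char *name[] = { "Ready", "Running", "Blocked" };
    sched_event script[] = { DISPATCH, WAIT_EVENT, EVENT_DONE, DISPATCH, TIMEOUT };
    proc_state s = READY;
    for (int i = 0; i < 5; i++) {
        proc_state next = transition(s, script[i]);
        printf("%s -> %s\n", name[s], name[next]);
        s = next;
    }
    return 0;
}
```

Running the scripted sequence of events prints exactly the four transitions listed above; an event such as EVENT_DONE in the Running state is silently rejected, mirroring the rule that a blocked process must first become ready before it can run again.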
Q6. What are various criteria for a good process scheduling algorithm? Explain any two preemptive scheduling algorithms.
A good process scheduling algorithm should optimize various system performance metrics while ensuring fairness and efficiency. Key criteria include:
CPU Utilization: Keep the CPU as busy as possible.
Throughput: Maximize the number of processes completed per time unit.
Turnaround Time: Minimize the total time taken from process submission to its completion.
Waiting Time: Reduce the time a process spends in the ready queue waiting for CPU allocation.
Response Time: Minimize the time from when a request is submitted until the first response is produced.
Fairness: Ensure each process receives a fair share of the CPU.
Overhead: Keep scheduling decisions fast and resource-light to avoid excessive overhead.
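As a quick worked illustration with made-up numbers: a process that arrives at time 0, needs a 5 ms CPU burst, and completes at time 12 has a turnaround time of 12 − 0 = 12 ms and a waiting time of 12 − 5 = 7 ms (turnaround = completion − arrival; waiting = turnaround − burst).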
Preemptive Scheduling Algorithms: Two common preemptive scheduling algorithms are:
Round Robin (RR) Scheduling:
Mechanism: Each process in the ready queue is assigned a fixed time quantum.
Process: A process executes for its allotted time; if not finished, it is preempted and moved to the back of the queue.
Advantages: Provides fairness and responsiveness, making it ideal for time-sharing systems.
Challenges: The choice of time quantum is critical; too short a quantum increases context-switch overhead, while too long a quantum degrades response times (see the simulation sketch after the SRTF description below).
Shortest Remaining Time First (SRTF):
Mechanism: This is the preemptive version of the Shortest Job First algorithm.
Process: The process with the smallest remaining CPU burst time is selected to run. If a new process arrives with a shorter remaining time, it preempts the current process.
Advantages: Minimizes average waiting time and optimizes turnaround time.
Challenges: Can lead to starvation of longer processes if shorter processes keep arriving.
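To make these policies tangible, here is a minimal Round Robin simulation in C. It is a sketch under simplifying assumptions: all processes arrive at time 0, and the burst times and quantum are made-up values. The same harness becomes SRTF if, instead of cycling through the queue, you always pick the process with the smallest remaining burst:

```c
#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void) {
    int burst[N] = { 10, 5, 8 };          /* hypothetical CPU bursts (ms) */
    int remaining[N], waiting[N] = { 0 };
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    int time = 0, done = 0;
    while (done < N) {
        for (int i = 0; i < N; i++) {     /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            time += slice;                /* process i runs for one slice  */
            remaining[i] -= slice;
            if (remaining[i] == 0) {      /* finished at current `time`    */
                waiting[i] = time - burst[i];  /* waiting = completion - burst
                                                  (arrival is 0 here)      */
                done++;
            }
        }
    }

    double avg = 0;
    for (int i = 0; i < N; i++) {
        printf("P%d: burst=%2d  waiting=%2d  turnaround=%2d\n",
               i + 1, burst[i], waiting[i], waiting[i] + burst[i]);
        avg += waiting[i];
    }
    printf("average waiting time = %.2f ms\n", avg / N);
    return 0;
}
```

With these numbers the average waiting time is (13 + 12 + 13) / 3 ≈ 12.67 ms. Note that the sketch charges nothing for context switches, which is precisely the overhead a too-small quantum would add in practice.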
Q7. Explain pre-emptive and non-pre-emptive scheduling algorithms (with one example for each).
Pre-emptive Scheduling Algorithms
Pre-emptive scheduling allows the operating system to interrupt a running process when a higher-priority process becomes ready or when the allotted CPU time (time slice) expires.
This ensures that no single process monopolizes the CPU and improves overall system responsiveness.
This method is especially effective in time-sharing environments where quick response times are crucial.
Example:
In Round Robin (RR) scheduling, each process is assigned a fixed time quantum. When this time expires, if the process is not finished, it is preempted and placed at the back of the ready queue so that other processes get a chance to run.
Advantages:
Fair distribution of CPU time
Improved responsiveness
Disadvantages:
Increased overhead due to frequent context switches
Non-Pre-emptive Scheduling Algorithms
In non pre-emptive scheduling, once a process is granted the CPU, it runs to completion or until it voluntarily releases the CPU by waiting for an I/O operation or terminating.
The scheduler does not forcibly remove a process even if a higher-priority process becomes ready.
This method is simple and has lower overhead because context switches are minimal.
Example:
First Come, First Served (FCFS) scheduling, where processes are executed in the order in which they arrive in the ready queue.
Advantages:
Simplicity in implementation
Lower context switching overhead
Disadvantages:
Can cause longer wait times for shorter processes if they arrive after longer ones
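A quick worked example of this drawback, with hypothetical bursts: if P1 (24 ms), P2 (3 ms), and P3 (3 ms) arrive in that order, FCFS yields waiting times of 0, 24, and 27 ms, an average of 17 ms. Had the two short jobs arrived first, the average would drop to (0 + 3 + 6) / 3 = 3 ms. This penalty on short jobs stuck behind a long one is known as the convoy effect.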
Q11. Explain thread structure and explain why threads are used.
Thread Structure
Execution Context:
Each thread has its own program counter, CPU registers, and private stack.
These elements maintain the thread’s individual execution state and call history.
Shared Resources:
Threads share the same address space, heap, and open file descriptors of their parent process.
This resource sharing makes threads more lightweight compared to full processes, as they do not require separate copies of the process’s resources.
Lightweight Nature:
Due to the limited amount of data maintained independently (i.e., just the execution context), threads incur minimal overhead during creation and context switching.
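The split between shared and private state is easy to observe with POSIX threads. In the following sketch (illustrative; error handling omitted), both threads increment a single global counter through a mutex, demonstrating the shared address space, while the local variable id lives at a distinct address on each thread's private stack:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;                   /* shared: one copy per process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    int id = *(int *)arg;                 /* private: on this thread's stack */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);        /* shared data needs synchronization */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    printf("thread %d done; &id = %p (distinct per thread)\n", id, (void *)&id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int id1 = 1, id2 = 2;
    pthread_create(&t1, NULL, worker, &id1);
    pthread_create(&t2, NULL, worker, &id2);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);  /* 200000 */
    return 0;
}
```

Compile with gcc -pthread; the final count is 200000 because both threads updated the same shared_counter, while the printed stack addresses differ.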
Why Threads Are Used
Enhanced Concurrency:
Threads allow multiple tasks to run concurrently within the same process, which increases the overall efficiency and responsiveness of applications.
Improved Performance:
In multi-core systems, threads can execute in parallel on different cores, speeding up computational tasks and improving throughput.
Efficient Resource Sharing:
Since threads share a common memory space, communication between them is faster and requires less overhead than inter-process communication.
Better Responsiveness:
For interactive applications like web servers and GUI-based programs, using threads enables quick responses to user inputs by delegating different tasks to separate threads.
Lower Overhead:
Threads incur lower context switching costs compared to full processes, making them ideal for applications requiring frequent task switching.
These characteristics highlight why threads are often referred to as lightweight processes and why they play a critical role in modern operating systems.
Q14. List the main differences and similarities between threads and processes.
Similarities Between Threads and Processes
Execution Unit:
Both threads and processes represent units of execution in an operating system.
They are scheduled onto the CPU by the operating system and can run concurrently.
State and Lifecycle:
Both maintain states such as running, ready, waiting (blocked), and terminated.
They undergo similar lifecycle transitions during their execution.
Resource Utilization:
Both require certain resources such as CPU time, memory, and I/O access for execution.
Differences Between Threads and Processes
Memory Allocation:
Processes:
Have their own separate memory space and system resources.
Isolation between processes provides enhanced security and stability.
Threads:
Share the same address space and resources within a process.
This sharing makes threads more lightweight and faster to create.
Context Switching Overhead:
Processes: Switching between processes involves saving and loading a full set of registers, memory maps, etc., leading to higher overhead.
Threads: Context switches between threads are less resource-intensive because they maintain only a minimal execution context (e.g., program counter, registers, stack).
Intercommunication:
Processes: Communication between processes generally requires inter-process communication (IPC) mechanisms, which can be more complex and slower.
Threads: Since threads share the same memory, inter-thread communication is simpler and faster, though it requires careful synchronization to avoid race conditions.
These points highlight that while both threads and processes are essential for concurrent execution, threads are considered lightweight due to their minimal overhead and resource sharing.
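The memory-allocation difference can be demonstrated directly on a POSIX system. In this sketch (illustrative; error handling omitted), a write by a forked child is invisible to the parent because the child received its own copy of the address space, while a write by a thread is visible because the address space is shared:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>
#include <pthread.h>

int value = 0;

void *thread_body(void *arg) {
    (void)arg;
    value = 99;                /* same address space: parent sees this */
    return NULL;
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {            /* child process */
        value = 42;            /* writes its own copy only */
        exit(0);
    }
    wait(NULL);
    printf("after child process wrote 42: value = %d\n", value);  /* still 0 */

    pthread_t t;
    pthread_create(&t, NULL, thread_body, NULL);
    pthread_join(t, NULL);
    printf("after thread wrote 99:        value = %d\n", value);  /* 99 */
    return 0;
}
```

The first line prints 0 and the second prints 99, which is the isolation-versus-sharing trade-off summarized above: processes gain protection from one another, threads gain cheap communication at the cost of needing synchronization.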