OS Unit-2 Q&A
Process States:
•When a process executes, it passes through different states. These states may differ between
operating systems.
•In general, a process can be in one of the following states at a time:
1. New: A program that has been submitted and is about to be picked up by the OS and loaded
into main memory is called a new process.
2. Ready: Whenever a process is created, it directly enters the ready state, in which it
waits for the CPU to be assigned. The OS picks new processes from secondary
memory and puts them in main memory.
Processes that are ready for execution and reside in main memory are called
ready-state processes. There can be many processes present in the ready state.
3. Running: One of the processes in the ready state is chosen by the OS, depending
upon the scheduling algorithm. Hence, if we have only one CPU in our system, the number of
running processes at any given time will always be one. If we have n processors in the
system, then we can have n processes running simultaneously.
4. Block or wait: From the running state, a process can make the transition to the block or
wait state, depending upon the scheduling algorithm or the intrinsic behavior of the process.
When a process waits for a certain resource to be assigned or for input from the user,
the OS moves this process to the block or wait state and assigns the CPU to other
processes.
5. Completion or termination: When a process finishes its execution, it enters the
termination state. The entire context of the process (its Process Control Block) is deleted
and the process is terminated by the operating system.
6. Suspend ready: A process in the ready state that is moved to secondary memory from
main memory due to a lack of resources (mainly primary memory) is said to be in the
suspend ready state.
If main memory is full and a higher-priority process arrives for execution, the OS
has to make room for it in main memory by moving a lower-priority
process out to secondary memory. Suspend ready processes remain in secondary
memory until main memory becomes available.
7. Suspend wait: Instead of removing a process from the ready queue, it is better to remove
a blocked process that is waiting for some resource in main memory. Since it is
already waiting for a resource to become available, it might as well wait in
secondary memory and make room for a higher-priority process. Such processes complete
their execution once main memory becomes available and their wait is finished.
Operations on the Process
1. Creation: Once a process is created, it enters the ready queue
(main memory) and becomes ready for execution.
2. Scheduling: Out of the many processes present in the ready queue, the operating system
chooses one process and starts executing it. Selecting the process that is to be executed next
is known as scheduling.
3. Execution: Once the process is scheduled for execution, the processor starts executing
it. A process may move to the blocked or wait state during execution; in that case the
processor starts executing other processes.
4. Deletion/killing: Once the purpose of the process has been served, the OS kills the
process. The context of the process (PCB) is deleted and the process is terminated by
the operating system.
3. Define Process Control Block (PCB) and explain it in detail with a diagram?
Ans) Process Control Block (PCB):
• A Process Control Block is a data structure maintained by the Operating System for every
process.
• The PCB is identified by an integer process ID (PID).
• A PCB keeps all the information needed to keep track of a process, as listed below –
1. Process State: The current state of the process, i.e., whether it is new, ready, running, or
waiting.
2. Process ID: Unique identification for each process in the operating system.
3. Program Counter: A pointer to the address of the next instruction to be
executed for this process.
4. CPU Registers: The various CPU registers whose contents must be saved for the process
when it leaves the running state.
5. CPU Scheduling Information: Process priority and other scheduling information which is
required to schedule the process.
6. Memory Management Information: This includes information such as the page table, memory
limits, and segment table, depending on the memory system used by the operating system.
7. Accounting Information: This includes the amount of CPU time used for process execution, time
limits, execution ID, etc.
8. I/O Status Information: This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on Operating System and may contain
different information in different operating systems. Here is a simplified diagram of a PCB−
The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.
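To make these fields concrete, here is a simplified, hypothetical C struct for a PCB. The field names and sizes are illustrative only; a real kernel structure (for example, Linux's task_struct) contains far more information.

#include <sys/types.h>

/* Hypothetical, simplified PCB layout for illustration only. */
enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    pid_t           pid;               /* unique process ID */
    enum proc_state state;             /* current process state */
    unsigned long   program_counter;   /* address of the next instruction */
    unsigned long   registers[16];     /* saved CPU register contents */
    int             priority;          /* CPU scheduling information */
    void           *page_table;        /* memory-management information */
    unsigned long   cpu_time_used;     /* accounting information */
    int             open_files[16];    /* I/O status information */
    struct pcb     *next;              /* link for the scheduler's queues */
};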
1. Process Creation: Processes need to be created in the system for different operations. This can
be done by the following events –
• User request for process creation
• System initialization
• Execution of a process-creation system call by a running process
• Batch job initialization
A process may be created by another process using fork(). The creating process is called the
parent process and the created process is the child process. A child process can have only one
parent but a parent process may have many children. Both the parent and child processes have
the same memory image, open files, and environment strings. However, they have distinct
address spaces.
A diagram that demonstrates process creation using fork() is as follows −
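Since the diagram is not reproduced here, the following minimal C sketch shows the same idea: fork() duplicates the calling (parent) process, and both the parent and the child continue execution from the point of the call.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* create a child process */

    if (pid < 0) {                    /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {            /* child branch */
        printf("Child:  PID=%d, parent PID=%d\n", getpid(), getppid());
        exit(EXIT_SUCCESS);
    } else {                          /* parent branch */
        printf("Parent: PID=%d, child PID=%d\n", getpid(), pid);
        wait(NULL);                   /* reap the child's exit status */
    }
    return 0;
}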
2. Process Preemption: An interrupt mechanism is used in preemption to suspend the
currently executing process; the next process to execute is determined by the short-term
scheduler. Preemption makes sure that all processes get some CPU time for execution.
A diagram that demonstrates process preemption is as follows −
3. Process Blocking: A process is blocked if it is waiting for some event to occur. This
event is often I/O, since I/O operations are carried out by devices and do not require the
processor. After the event is complete, the process again goes to the ready state.
A diagram that demonstrates process blocking is as follows −
4. Process Termination: After the process has completed the execution of its last instruction,
it is terminated.
• The resources held by a process are released after it is terminated.
• A child process can be terminated by its parent process if its task is no longer
relevant.
• The child process sends its status information to the parent process before it
terminates.
• Also, when a parent process is terminated, its child processes are terminated as well,
since child processes cannot run once their parent is terminated.
Eg: While playing a movie on a device, the audio and video are controlled by different threads
in the background.
The above diagram shows the difference between a single-threaded process and a
multithreaded process and the resources that are shared among threads in a multithreaded
process.
Components of Thread
A thread has the following three components:
1. Program Counter
2. Register Set
3. Stack Space
Why do we Need Threads?
Threads in the operating system provide multiple benefits and improve the overall
performance of the system. Some of the reasons threads are needed in the operating system
are:
•Since threads use the same data and code, the cost of communication between threads is low.
•Creating and terminating a thread is faster compared to creating or terminating a process.
•Context switching is faster in threads compared to processes.
Benefits of Threads
• Enhanced throughput of the system: When a process is split into many threads, and
each thread is treated as a job, the number of jobs completed per unit time increases, and
therefore the throughput of the system increases as well.
• Effective utilization of multiprocessor systems: When there is more than one thread in
one process, more than one thread can be scheduled on more than one processor.
• Faster context switch: The context-switch time between threads is less than that between
processes; a process context switch means more overhead for the CPU.
• Responsiveness: When a process is split into several threads, the process can respond to
the user as soon as one of its threads completes its work.
• Communication: Communication between multiple threads is simple because the threads share the
same address space, while communication between two processes requires dedicated
inter-process communication mechanisms.
• Resource sharing: Resources such as code, data, and files can be shared among all threads
within a process. Note: the stack and registers cannot be shared between threads;
there is a separate stack and register set for each thread.
•Each thread has its own set of registers and stack space. There can be multiple threads in a
single process having the same or different functionality.
•Threads are also termed lightweight processes as they share common resources.
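To make resource sharing concrete, the following C sketch (using POSIX threads, one of the thread libraries named below) starts two threads that update a counter in the shared address space; a mutex serializes the updates. Compile with gcc -pthread.

#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                    /* shared by all threads in the process */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);         /* serialize access to shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);                /* both threads updated the same counter */
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* prints 200000 */
    return 0;
}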
Types of Threads
User-level thread
The operating system does not recognize user-level threads. User-level threads can be easily
implemented, and they are implemented and managed by the user. If a user-level thread performs
a blocking operation, the whole process is blocked. The kernel knows nothing about
user-level threads and manages them as if they were single-threaded processes.
Examples: Java threads, POSIX threads, etc.
Advantages of User-level threads
1. The user threads can be easily implemented than the kernel thread.
2. User-level threads can be applied to such types of operating systems that do not
support threads at the kernel-level.
3. It is faster and more efficient.
4. Context switch time is shorter than the kernel-level threads.
5. It does not require modifications of the operating system.
6. User-level threads representation is very simple. The register, PC, stack, and mini
thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.
Disadvantages of User-level threads
1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.
Kernel-level thread
Kernel-level threads are recognized by the operating system. There is a thread control block and
a process control block in the system for each thread and each process. Kernel-level
threads are implemented by the operating system: the kernel knows about all the
threads and manages them, and it offers system calls to create and manage threads from
user space. The implementation of kernel threads is more difficult than that of user threads,
and context-switch time is longer for kernel threads. However, if one kernel-level thread performs a
blocking operation, another thread's execution can continue. Examples: Windows, Solaris.
Advantages of Kernel-level threads
1. The kernel can simultaneously schedule multiple threads of the same process on
multiple processors.
2. If one thread in a process is blocked, the kernel can schedule another thread of the
same process.
3. Kernel routines themselves can be multithreaded.
A process simply means any program in execution, while a thread is a segment of a
process. The main differences between process and thread are mentioned below:
• Processes use more resources and hence are termed heavyweight processes; threads share
resources and hence are termed lightweight processes.
• Creation and termination times of processes are slower; creation and termination times of
threads are faster.
• Processes have their own code and data/files; threads share code and data/files within a process.
• Eg: opening two different browsers (two processes) vs. opening two tabs in the same
browser (two threads).
8. Explain the concept of multithreading in detail?
Ans) Multithreading Models
• Some operating systems provide a combined user-level thread and kernel-level thread facility;
Solaris is a good example of this combined approach.
Many-to-Many Model
• In a combined system, multiple threads within the same application can run in parallel on
multiple processors, and a blocking system call need not block the entire process.
Many-to-One Model
• In this model, multiple user threads are mapped to one kernel thread.
• When a user thread makes a blocking system call, the entire process blocks.
• As there is only one kernel thread and only one user thread can access the kernel at a time,
multiple threads cannot access multiple processors at the same time.
One-to-One Model
• In this model, there is a one-to-one relationship between kernel and user threads, and
multiple threads can run on multiple processors.
• The problem with this model is that creating a user thread requires creating the corresponding
kernel thread.
• As each user thread is connected to a different kernel thread, if any user thread makes a
blocking system call, the other user threads won’t be blocked.
9. Define scheduler and explain the different types of schedulers in detail?
Ans) Schedulers:
•Schedulers are special system software which handle process scheduling in various ways.
•Their main task is to select the jobs to be submitted into the system and to decide which
process to run next.
•Long-term scheduler (or job scheduler) – selects which processes should be brought into the
ready queue.
•Short-term scheduler (or CPU scheduler) – selects which process should be executed next
and allocates the CPU.
•Medium-term scheduler (or process swapping scheduler) – swaps processes between main
memory and secondary storage, suspending and later resuming them.
The following major process scheduling queues are maintained by the Operating System:
Each queue can be managed by the OS using distinct policies (FIFO, Priority, Round Robin,
etc.). The OS scheduler governs how processes move between the ready queue and the run
queue; the run queue can have only one entry per processor core in the system.
The Job Queue stores all processes that are entered into the system.
The Ready Queue holds processes in the ready state.
Device Queues hold processes that are waiting for any device to become available.
For each I/O device, there are separate device queues.
The ready queue is where a new process is initially placed. It sits in the ready queue, waiting
to be chosen for execution, or dispatched. Once the process has been assigned to the CPU and
is running, one of the following events can occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and then wait for it to terminate.
• As a result of an interrupt, the process could be forcibly removed from the CPU and
returned to the ready queue.
In the first two cases, the process eventually moves from the waiting state back to the ready
state and returns to the ready queue. This cycle repeats until the process terminates, at which
point it is removed from all queues and its PCB and resources are deallocated.
11. Write the differences among the different types of schedulers in detail?
Ans) The long-term, short-term, and medium-term schedulers differ as follows:
1. Alternate name:
• Long-term scheduler: also called a job scheduler.
• Short-term scheduler: also called a CPU scheduler.
• Medium-term scheduler: also called a process swapping scheduler.
2. Degree of multiprogramming:
• Long-term scheduler: controls the degree of multiprogramming.
• Short-term scheduler: has less control over the degree of multiprogramming.
• Medium-term scheduler: reduces the degree of multiprogramming.
3. Usage in a time-sharing system:
• Long-term scheduler: almost absent or minimal in a time-sharing system.
• Short-term scheduler: minimal in a time-sharing system.
• Medium-term scheduler: a part of the time-sharing system.
4. Process state transition:
• Long-term scheduler: new to ready.
• Short-term scheduler: ready to running.
• Medium-term scheduler: not present.
Ans) CPU scheduling is the process of determining which process or task is to be executed
by the central processing unit (CPU) at any given time. It is an important component of
modern operating systems that allows multiple processes to run simultaneously on a single
processor. The CPU scheduler determines the order and priority in which processes are
executed and allocates CPU time accordingly, based on various criteria such as CPU
utilization, throughput, turnaround time, waiting time, and response time. Efficient CPU
scheduling is crucial for optimizing system performance and ensuring that processes are
executed in a fair and timely manner.
1. CPU Utilization: CPU utilization measures how busy the CPU is kept. A high CPU
utilization indicates that the CPU is busy and working efficiently, processing as many tasks
as possible. However, too high a CPU utilization can also result in a system slowdown due to
excessive competition for resources.
2. Throughput: Throughput is a criterion used in CPU scheduling that measures the number
of tasks or processes completed within a specific period. It is important to maximize
throughput because it reflects the efficiency and productivity of the system. A high
throughput indicates that the system is processing tasks efficiently, which can lead to
increased productivity and faster completion of tasks.
3. Turnaround Time: Turnaround time is a criterion used in CPU scheduling that measures
the time it takes for a task or process to complete from the moment it is submitted to the
system until it is fully processed and ready for output. It is important to minimize turnaround
time because it reflects the overall efficiency of the system and can affect user satisfaction
and productivity.
A short turnaround time indicates that the system is processing tasks quickly and efficiently,
leading to faster completion of tasks and improved user satisfaction. On the other hand, a
long turnaround time can result in delays, decreased productivity, and user dissatisfaction.
4. Waiting Time: Waiting time is a criterion used in CPU scheduling that measures the
amount of time a task or process waits in the ready queue before it is processed by the CPU.
It is important to minimize waiting time because it reflects the efficiency of the scheduling
algorithm and affects user satisfaction.
A short waiting time indicates that tasks are being processed efficiently and quickly, leading
to improved user satisfaction and productivity. On the other hand, a long waiting time can
result in delays and decreased productivity, leading to user dissatisfaction. Some CPU
scheduling algorithms that prioritize waiting time include Shortest Job First (SJF), Priority
scheduling, and Multilevel Feedback Queue (MLFQ). These algorithms aim to prioritize
short and simple tasks or give higher priority to more important tasks, which can lead to
shorter waiting times and improved system efficiency.
5. Response Time: Response time is a criterion used in CPU scheduling that measures the
time it takes for the system to respond to a user's request or input. It is important to minimize
response time because it affects user satisfaction and the overall efficiency of the system. A
short response time indicates that the system is processing tasks quickly and efficiently,
leading to improved user satisfaction and productivity. On the other hand, a long response
time can result in user frustration and decreased productivity. Some CPU scheduling
algorithms that prioritize response time include Round Robin, Priority scheduling, and
Multilevel Feedback Queue (MLFQ). These algorithms aim to prioritize tasks that require
immediate attention, such as user input, to reduce response time and improve system
efficiency.
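The worked examples later in this unit compute these criteria with two simple formulas:
Turnaround Time (TAT) = Completion Time (CT) − Arrival Time (AT)
Waiting Time (WT) = TAT − Burst Time (BT)
For example, a process that arrives at time 2, needs a 2-unit CPU burst, and completes at
time 13 has TAT = 13 − 2 = 11 and WT = 11 − 2 = 9.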
Ans) Interprocess communication (IPC) is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of data
from one process to another.
A diagram that illustrates interprocess communication is as follows −
Pipe: A pipe is a unidirectional data channel. Two pipes can be used to create a
two-way data channel between two processes. This uses standard input and output
methods. Pipes are used in all POSIX systems as well as in Windows operating systems
(a minimal sketch of pipe-based communication appears after this list).
Socket: The socket is the endpoint for sending or receiving data in a network. This is
true for data sent between processes on the same computer or data sent between
different computers on the same network. Most of the operating systems use sockets
for interprocess communication.
File: A file is a data record that may be stored on a disk or acquired on demand by a
file server. Multiple processes can access a file as required. All operating systems use
files for data storage.
Signal: Signals are useful in interprocess communication in a limited way. They are
system messages that are sent from one process to another. Normally, signals are not
used to transfer data but are used for remote commands between processes.
Shared Memory: Shared memory is the memory that can be simultaneously accessed
by multiple processes. This is done so that the processes can communicate with each
other. All POSIX systems, as well as Windows operating systems use shared memory.
Message Queue: Multiple processes can read and write data to the message queue
without being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
A diagram that demonstrates message queue and shared memory methods of interprocess
communication is as follows −
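The pipe-based sketch promised above: a minimal C program in which the parent creates a pipe and forks a child; the child writes a message into the pipe's write end and the parent reads it from the read end.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];                          /* fd[0]: read end, fd[1]: write end */
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                     /* child: writer */
        close(fd[0]);                   /* close the unused read end */
        const char *msg = "hello from child";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        return 0;
    }
    /* parent: reader */
    close(fd[1]);                       /* close the unused write end */
    char buf[64];
    read(fd[0], buf, sizeof buf);
    printf("Parent received: %s\n", buf);
    close(fd[0]);
    wait(NULL);                         /* reap the child */
    return 0;
}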
14. Explain the modes of scheduling algorithms (preemptive and non-preemptive)?
•A CPU scheduling algorithm is used to determine which process will use CPU for execution
and which processes to hold or remove from execution.
•The main goal or objective of CPU scheduling algorithms in OS is to make sure that the
CPU is never in an idle state, meaning that the OS has at least one of the processes ready for
execution among the available processes in the ready queue.
There are two such modes:
1. Preemptive Approach
2. Non-Preemptive Approach
Preemptive Scheduling:
Preemptive scheduling is used when a process switches from running state to ready state or
from the waiting state to the ready state.
•Whenever a high-priority process comes in, the lower-priority process that has occupied the
CPU is preempted.
•That is, it releases the CPU, and the high-priority process takes the CPU for its execution.
Non-Preemptive Scheduling:
•Non-preemptive scheduling is used when a process terminates or switches from the running
state to the waiting state.
•That is, once a process is running on the CPU, it will release it only by switching to the
waiting state or by terminating.
15. Explain the different types of CPU scheduling algorithms along with examples
(FCFS, SJF, Round Robin, Priority)?
Ans) Types of CPU scheduling Algorithms:
There are mainly six types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
6. Shortest Remaining Time
1. First Come First Serve (FCFS):
FCFS is the simplest CPU scheduling algorithm. Here, the CPU allots its resources to
processes in a serial order: the CPU is allotted to each process in the order in which it
arrived.
We can also say that the First Come First Serve CPU scheduling algorithm follows First
In First Out (FIFO) in the ready queue.
Characteristics of FCFS:
Advantages:
Involves no complex logic and just picks processes from the ready queue one by one.
Easy to implement and understand.
Every process will eventually get a chance to run so no starvation occurs.
Disadvantages:
Waiting time for processes with less execution time is often very long.
It favors CPU-bound processes over I/O-bound processes.
Leads to convoy effect.
Causes lower device and CPU utilization.
Poor performance as the average wait time is high.
Example:
Consider the given table below and find Completion time (CT), Turn-around time (TAT),
Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting
time.
P1 2 2
P2 5 6
P3 0 4
P4 0 7
P5 7 4
Solution:
Gantt chart (FCFS runs jobs in arrival order: P3 and P4 arrive at time 0, then P1, P2, P5):
| P3 | P4 | P1 | P2 | P5 |
0    4    11   13   19   23
For this problem, CT, TAT, WT, and RT are shown in the table below (TAT = CT − AT,
WT = TAT − BT; under FCFS the response time equals the waiting time):
Process ID  AT  BT  CT  TAT        WT         RT
P1          2   2   13  13-2= 11   11-2= 9    9
P2          5   6   19  19-5= 14   14-6= 8    8
P3          0   4   4   4-0= 4     4-4= 0     0
P4          0   7   11  11-0= 11   11-7= 4    4
P5          7   4   23  23-7= 16   16-4= 12   12
Average Waiting time = (9+8+0+4+12)/5 = 33/5 = 6.6 time unit (time unit can be
considered as milliseconds)
Average Turn-around time = (11+14+4+11+16)/5 = 56/5 = 11.2 time unit (time unit can be
considered as milliseconds)
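A short C sketch that reproduces the calculation above. The processes are listed in their FCFS (arrival) order, and the printed CT, TAT, and WT values match the table.

#include <stdio.h>

#define N 5

int main(void) {
    /* process IDs, arrival times, and burst times from the example,
       already sorted into FCFS (arrival) order */
    int id[N] = {3, 4, 1, 2, 5};
    int at[N] = {0, 0, 2, 5, 7};
    int bt[N] = {4, 7, 2, 6, 4};
    int clock = 0;
    double total_tat = 0, total_wt = 0;

    for (int i = 0; i < N; i++) {
        if (clock < at[i]) clock = at[i];   /* CPU idles until the process arrives */
        clock += bt[i];                     /* non-preemptive: run to completion */
        int ct = clock, tat = ct - at[i], wt = tat - bt[i];
        total_tat += tat;
        total_wt  += wt;
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n", id[i], ct, tat, wt);
    }
    printf("Average TAT = %.1f, Average WT = %.1f\n", total_tat / N, total_wt / N);
    return 0;
}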
2. Shortest Job First (SJF):
Shortest Job First, called SJF for short, depends heavily on the burst times of
processes, in addition to their arrival times.
Here, the CPU allots its resources to the process in the ready queue that has the
least burst time.
If two processes in the ready queue have the same burst time, either may be chosen
for execution; in actual operating systems, sequential allocation of resources then
takes place.
Characteristics:
o SJF (Shortest Job First) has the least average waiting time. This is because all the
heavy processes are executed last; because of this, all the very small processes are
executed first, which prevents starvation of small processes.
o It requires an estimate of the time needed for each activity, which is used as the
measure for scheduling.
o If shorter processes continue to arrive, starvation of longer processes might result.
The idea of aging can be used to overcome this issue.
o Shortest Job First can be executed in both preemptive and non-preemptive ways.
Advantages
o SJF is used because it has the least average waiting time among the CPU
scheduling algorithms.
o SJF is frequently used for long-term scheduling.
Disadvantages
o Starvation is one of the negative traits Shortest Job First CPU Scheduling Algorithm
exhibits.
o Often, it becomes difficult to forecast how long the next CPU burst will take.
Example: Consider the following processes:
Process ID  Arrival Time  Burst Time
P0          1             3
P1          2             6
P2          0             2
P3          3             7
P4          2             4
P5          5             5
The resulting times (TAT = CT − AT, WT = TAT − BT) are:
Process ID  AT  BT  CT  TAT  WT
P0          1   3   5    4    1
P1          2   6   20   18   12
P2          0   2   2    2    0
P3          3   7   27   24   17
P4          2   4   9    7    4
P5          5   5   14   9    4
Gantt Chart (non-preemptive SJF):
| P2 | P0 | P4 | P5 | P1 | P3 |
0    2    5    9    14   20   27
Average Completion Time = (5+20+2+27+9+14)/6 = 77/6 ≈ 12.83 time units
Average Turn-around Time = (4+18+2+24+7+9)/6 = 64/6 ≈ 10.67 time units
Average Waiting Time = (1+12+0+17+4+4)/6 = 38/6 ≈ 6.33 time units
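A C sketch of non-preemptive SJF that reproduces the schedule above: at every decision point it picks the arrived, unfinished process with the smallest burst time.

#include <stdio.h>

#define N 6

int main(void) {
    /* arrival and burst times for P0..P5 from the example above */
    int at[N] = {1, 2, 0, 3, 2, 5};
    int bt[N] = {3, 6, 2, 7, 4, 5};
    int done[N] = {0}, ct[N];
    int clock = 0;

    for (int completed = 0; completed < N; ) {
        int pick = -1;
        for (int i = 0; i < N; i++)        /* shortest available job */
            if (!done[i] && at[i] <= clock && (pick == -1 || bt[i] < bt[pick]))
                pick = i;
        if (pick == -1) { clock++; continue; }  /* CPU idle: nothing has arrived */
        clock += bt[pick];                 /* non-preemptive: run to completion */
        ct[pick] = clock;
        done[pick] = 1;
        completed++;
    }
    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
               i, ct[i], ct[i] - at[i], ct[i] - at[i] - bt[i]);
    return 0;
}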
3. Priority Scheduling:
Priority CPU scheduling differs from the remaining CPU scheduling algorithms: here,
each and every process has a certain priority number.
In preemptive priority scheduling, the process with the highest priority always holds the
CPU, and a newly arrived higher-priority process preempts the running one; in the
non-preemptive variant, the running process is allowed to finish before the highest-priority
waiting process is chosen. When there is a conflict, that is, when there are multiple
processes with equal priority, the FCFS (First Come First Serve) approach is used to break
the tie.
Advantages
o The typical or average waiting time for priority CPU scheduling is shorter than for
First Come First Serve (FCFS).
o It is easier to handle priority CPU scheduling.
o It is less complex.
Disadvantages
o The starvation problem is one of the most prevalent flaws of the priority CPU
scheduling algorithm. Because of this issue, a low-priority process may have to wait a
very long time to be scheduled onto the CPU.
Examples:
Now, let us explain this problem with the help of an example of Priority Scheduling
S. No Process ID Arrival Time Burst Time Priority
1 P1 0 5 5
2 P2 1 6 4
3 P3 2 2 0
4 P4 3 1 2
5 P5 4 7 1
6 P6 4 6 3
Here, in this problem, the highest priority number is the least prioritized:
5 has the least priority and 0 has the highest priority.
S. No  Process ID  Arrival Time  Burst Time  Priority  CT  TAT  WT
1      P1          0             5           5         5   5    0
2      P2          1             6           4         27  26   20
3      P3          2             2           0         7   5    3
4      P4          3             1           2         15  12   11
5      P5          4             7           1         14  10   3
6      P6          4             6           3         21  17   11
Solution: The table above follows the non-preemptive variant: P1 is the only process at
time 0 and runs to completion, after which the highest-priority waiting process is chosen
each time.
Gantt Chart:
| P1 | P3 | P5 | P4 | P6 | P2 |
0    5    7    14   15   21   27
Average Completion Time = (5+27+7+15+14+21)/6 = 89/6 ≈ 14.83 time units
Average Turn-around Time = (5+26+5+12+10+17)/6 = 75/6 = 12.5 time units
Average Waiting Time = (0+20+3+11+3+11)/6 = 48/6 = 8 time units
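A C sketch of the non-preemptive priority schedule computed above; a lower priority number means higher priority, as in the example.

#include <stdio.h>

#define N 6

int main(void) {
    /* arrival times, burst times, and priorities for P1..P6 from the example */
    int at[N] = {0, 1, 2, 3, 4, 4};
    int bt[N] = {5, 6, 2, 1, 7, 6};
    int pr[N] = {5, 4, 0, 2, 1, 3};
    int done[N] = {0}, ct[N];
    int clock = 0;

    for (int completed = 0; completed < N; ) {
        int pick = -1;
        for (int i = 0; i < N; i++)        /* highest-priority available process */
            if (!done[i] && at[i] <= clock && (pick == -1 || pr[i] < pr[pick]))
                pick = i;
        if (pick == -1) { clock++; continue; }  /* CPU idle */
        clock += bt[pick];                 /* non-preemptive: run to completion */
        ct[pick] = clock;
        done[pick] = 1;
        completed++;
    }
    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
               i + 1, ct[i], ct[i] - at[i], ct[i] - at[i] - bt[i]);
    return 0;
}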
4. Round Robin Scheduling:
Round Robin is a CPU scheduling mechanism that cycles through the ready queue,
assigning each task a fixed time slot (time quantum).
It is the First Come First Serve CPU scheduling technique operated in preemptive mode,
and it frequently emphasizes the time-sharing method.
o Every process receives an equal share of CPU time, so Round Robin appears to be
equitable.
o A newly created process is added to the end of the ready queue.
Examples:
Problem: Consider the following processes:
Process ID  Arrival Time  Burst Time
P0          1             3
P1          0             5
P2          3             2
P3          4             3
P4          2             1
Solution:
Process ID  AT  BT  CT  TAT  WT
P0          1   3   5    4    1
P1          0   5   14   14   9
P2          3   2   7    4    2
P3          4   3   10   6    3
P4          2   1   3    1    0
Gantt Chart (derived from the completion times in the table above):
| P1 | P0 | P4 | P0 | P2 | P3 | P1 |
0    1    2    3    5    7    10   14
Average Turn-around Time = (4+14+4+6+1)/5 = 29/5 = 5.8 time units
Average Waiting Time = (1+9+2+3+0)/5 = 15/5 = 3 time units
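Finally, a C sketch of Round Robin with a FIFO ready queue. The time quantum used by the original example is not preserved in the source, so the quantum below is an assumption, and the computed times may therefore differ from the table above.

#include <stdio.h>

#define N 5

int main(void) {
    /* arrival and burst times for P0..P4 from the example above */
    int at[N] = {1, 0, 3, 4, 2};
    int bt[N] = {3, 5, 2, 3, 1};
    int rem[N], ct[N], in_q[N] = {0};
    int q[64], head = 0, tail = 0;         /* simple FIFO ready queue */
    int clock = 0, finished = 0;
    const int quantum = 2;                 /* assumed time quantum */

    for (int i = 0; i < N; i++) rem[i] = bt[i];

    while (finished < N) {
        for (int i = 0; i < N; i++)        /* admit newly arrived processes */
            if (!in_q[i] && at[i] <= clock) { q[tail++] = i; in_q[i] = 1; }
        if (head == tail) { clock++; continue; }   /* CPU idle */

        int p = q[head++];                 /* dispatch the front of the queue */
        int run = rem[p] < quantum ? rem[p] : quantum;
        clock += run;
        rem[p] -= run;

        for (int i = 0; i < N; i++)        /* arrivals during this slice queue first */
            if (!in_q[i] && at[i] <= clock) { q[tail++] = i; in_q[i] = 1; }
        if (rem[p] > 0)
            q[tail++] = p;                 /* re-queue the preempted process */
        else { ct[p] = clock; finished++; }
    }
    for (int i = 0; i < N; i++)
        printf("P%d: CT=%2d TAT=%2d WT=%2d\n",
               i, ct[i], ct[i] - at[i], ct[i] - at[i] - bt[i]);
    return 0;
}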