Os Unit-2 Q&A

UNIT-2

1. Define process and Explain about the concept of process in detail?


Ans) Process Definition: A process is basically a program in execution, or an instance of a
program being executed.
•The execution of a process must progress in a sequential fashion.
• A process is not the same as its program code; it is much more than that.
• A process is an 'active' entity, as opposed to a program, which is considered a 'passive' entity.
• Attributes held by a process include hardware state, memory, CPU, etc.

Process memory is divided into four sections for efficient working:


•The Text section is made up of the compiled program code, read in from non-volatile storage
when the program is launched.
•The Data section is made up of the global and static variables, allocated and initialized prior
to executing main.
•The Heap is used for the dynamic memory allocation, and is managed via calls to new, delete,
malloc, free, etc.
•The Stack is used for local variables. Space on the stack is reserved for local variables when
they are declared.

Process States:
•When a process executes, it passes through different states. These stages may differ in different
operating systems.
•In general, a process can have one of the following five states at a time.

1 Start: This is the initial state when a process is first started/created.


2 Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to
have the processor allocated to them by the operating system so that they can run. A process
may come into this state after the Start state, or while running, when it is interrupted by the
scheduler so that the CPU can be assigned to some other process.
3 Running: Once the process has been assigned to a processor by the OS scheduler, the process
state is set to running and the processor executes its instructions.
4 Waiting: Process moves into the waiting state if it needs to wait for a resource, such as waiting
for user input, or waiting for a file to become available.
5 Terminated or Exit: Once the process finishes its execution, or it is terminated by the
operating system, it is moved to the terminated state where it waits to be removed from main
memory.

Process State Diagram:

2. Explain about process state diagram in detail with a neat diagram?

Ans) Process States


State Diagram:
The process, from its creation to completion, passes through various states. The minimum
number of states is five.
The names of the states are not standardized although the process may be in one of the
following states during execution.

1. New: A program which is about to be picked up by the OS into the main memory is called
a new process.
2. Ready: Whenever a process is created, it directly enters the ready state, in which it
waits for the CPU to be assigned. The OS picks new processes from the secondary
memory and puts all of them in the main memory.
The processes which are ready for the execution and reside in the main memory are called
ready state processes. There can be many processes present in the ready state.
3. Running: One of the processes from the ready state will be chosen by the OS depending
upon the scheduling algorithm. Hence, if we have only one CPU in our system, the number of
running processes for a particular time will always be one. If we have n processors in the
system then we can have n processes running simultaneously.
4. Block or wait: From the running state, a process can make the transition to the block or
wait state depending upon the intrinsic behaviour of the process. When a process waits for a
certain resource to be assigned, or for input from the user, the OS moves this process to the
block or wait state and assigns the CPU to other processes.
5. Completion or termination: When a process finishes its execution, it enters the
termination state. All the context of the process (its Process Control Block) is deleted and
the process is terminated by the operating system.
6. Suspend ready: A process in the ready state which is moved to secondary memory from
the main memory due to a lack of resources (mainly primary memory) is said to be in the
suspend ready state.
If the main memory is full and a higher-priority process arrives for execution, the OS has to
make room for it in the main memory by moving a lower-priority process out to secondary
memory. Suspend ready processes remain in secondary memory until main memory becomes
available.
7. Suspend wait: Instead of removing a process from the ready queue, it is better to remove
a blocked process that is waiting for some resource in the main memory. Since it is already
waiting for a resource to become available, it might as well wait in secondary memory and
make room for a higher-priority process. These processes resume execution once main
memory becomes available and their wait is finished.
Operations on the Process
1. Creation: Once the process is created, it will be ready and come into the ready queue
(main memory) and will be ready for the execution.
2. Scheduling: Out of the many processes present in the ready queue, the operating system
chooses one process and starts executing it. Selecting the process which is to be executed
next is known as scheduling.
3. Execution: Once the process is scheduled for execution, the processor starts executing
it. A process may enter the blocked or wait state during execution; in that case the
processor starts executing other processes.
4. Deletion/killing: Once the purpose of the process is served, the OS kills the process. The
context of the process (its PCB) is deleted and the process is terminated by the operating
system.

3. Define Process Control Block (PCB) and explain about PCB in detail with diagram?
Ans) Process Control Block (PCB):
• A Process Control Block is a data structure maintained by the Operating System for every
process.
• The PCB is identified by an integer process ID (PID).
• A PCB keeps all the information needed to keep track of a process as listed below in the table –

1. Process State: The current state of the process i.e., whether it is ready, running, waiting, or
whatever.

2. Process privileges: This is required to allow/disallow access to system resources.

3. Process ID: Unique identification for each process in the operating system.

4. Pointer: A pointer to parent process.

5. Program Counter: The program counter is a pointer to the address of the next instruction to
be executed for this process.

6. CPU registers: The various CPU registers whose contents must be saved when the process
leaves the running state and restored when it resumes execution.

7. CPU Scheduling Information: Process priority and other scheduling information which is
required to schedule the process.

8. Memory management information: This includes the information of page table, memory
limits, Segment table depending on memory used by the operating system.

9. Accounting information: This includes the amount of CPU time used for process execution,
time limits, execution ID, etc.

10. I/O status information: This includes a list of I/O devices allocated to the process.
The architecture of a PCB is completely dependent on the operating system and may contain
different information in different operating systems. Here is a simplified diagram of a PCB −

Process Control Block(PCB)Diagram:

The PCB is maintained for a process throughout its lifetime, and is deleted once the process
terminates.

4. Explain about operations performed on processes?


Ans) There are many operations that can be performed on processes. Some of these are process
creation, process preemption, process blocking, and process termination. These are given in
detail as follows –

1.Process Creation: Processes need to be created in the system for different operations. This can
be done by the following events –
 User request for process creation
 System initialization
 Execution of a process creation system call by a running process
 Batch job initialization
A process may be created by another process using fork(). The creating process is called the
parent process and the created process is the child process. A child process can have only one
parent but a parent process may have many children. Both the parent and child processes have
the same memory image, open files, and environment strings. However, they have distinct
address spaces.
A diagram that demonstrates process creation using fork() is as follows −
2. Process Preemption: An interrupt mechanism is used in preemption: it suspends the
currently executing process, and the short-term scheduler determines the next process to
execute. Preemption makes sure that all processes get some CPU time.
A diagram that demonstrates process preemption is as follows −

3. Process Blocking: A process is blocked if it is waiting for some event to occur. This
event is often I/O, since I/O operations are handled by devices and do not require the
processor. After the event is complete, the process goes back to the ready state.
A diagram that demonstrates process blocking is as follows −
4.Process Termination

 After the process has completed the execution of its last instruction, it is terminated.
 The resources held by a process are released after it is terminated.
 A child process can be terminated by its parent process if its task is no longer
relevant.
 The child process sends its status information to the parent process before it
terminates.
 Also, when a parent process is terminated, its child processes are terminated as well,
since the child processes cannot run if the parent process is terminated.

5. Define Thread and explain about benefits of threads in detail?

Ans) Threads in Operating System


•In computers, a single process might have multiple functionalities running in parallel, where
each functionality can be considered as a thread.
•Each thread has its own set of registers and stack space. There can be multiple threads in a
single process having the same or different functionality.
•Threads are also termed lightweight processes as they share common resources.
What is Thread in Operating System?
 Thread is a sequential flow of tasks within a process. Threads in an operating system
can be of the same or different types.
 Threads are used to increase the performance of the applications.
 Each thread has its own program counter, stack, and set of registers. However, the
threads of a single process might share the same code and data/file

Eg: While playing a movie on a device the audio and video are controlled by different threads
in the background.
The above diagram shows the difference between a single-threaded process and a
multithreaded process and the resources that are shared among threads in a multithreaded
process.
Components of Thread
A thread has the following three components:
1.Program Counter
2.Register Set
3.Stack space
Why do we Need Threads?
Threads in the operating system provide multiple benefits and improve the overall
performance of the system. Some of the reasons threads are needed in the operating system
are:
•Since threads use the same data and code, the operational cost between threads is low.
•Creating and terminating a thread is faster compared to creating or terminating a process.
•Context switching is faster in threads compared to processes.

Types of Thread

1. User Level Thread:

User-level threads are implemented and managed by the user, and the kernel is not aware of them.

2. Kernel level Thread:

Kernel level threads are implemented and managed by the OS.

Benefits of Threads

 Enhanced throughput of the system: When the process is split into many threads, and
each thread is treated as a job, the number of jobs done in the unit time increases. That is
why the throughput of the system also increases.
 Effective Utilization of Multiprocessor system: When you have more than one thread in
one process, you can schedule more than one thread in more than one processor.
 Faster context switch: The context switching period between threads is less than the
process context switching. The process context switch means more overhead for the CPU.
 Responsiveness: When the process is split into several threads, and when a thread
completes its execution, that process can be responded to as soon as possible.
 Communication: Multiple-thread communication is simple because the threads share the
same address space, while in process, we adopt just a few exclusive communication
strategies for communication between two processes.
 Resource sharing: Resources can be shared between all threads within a process, such as
code, data, and files. Note: The stack and register cannot be shared between threads.
There is a stack and register for each thread.

6. Explain about different types of threads in detail with neat diagrams?


Ans) Threads in Operating System
•In computers, a single process might have multiple functionalities running in parallel, where
each functionality can be considered as a thread.

•Each thread has its own set of registers and stack space. There can be multiple threads in a
single process having the same or different functionality.

•Threads are also termed lightweight processes as they share common resources.

Types of Threads

In the operating system, there are two types of threads.

1. Kernel level thread.


2. User-level thread.

User-level thread

The operating system does not recognize user-level threads. User threads can be easily
implemented, and they are implemented by the user. If a user-level thread performs a
blocking operation, the whole process is blocked. The kernel knows nothing about
user-level threads and manages them as if they were single-threaded processes.
Examples: Java threads, POSIX threads, etc.

Advantages of User-level threads

1. User threads can be implemented more easily than kernel threads.
2. User-level threads can be used on operating systems that do not support threads
at the kernel level.
3. They are faster and more efficient.
4. Context switch time is shorter than for kernel-level threads.
5. They do not require modifications of the operating system.
6. The user-level thread representation is very simple: the registers, PC, stack, and mini
thread control blocks are stored in the address space of the user-level process.
7. It is simple to create, switch, and synchronize threads without the intervention of the
kernel.

Disadvantages of User-level threads

1. User-level threads lack coordination between the thread and the kernel.
2. If a thread causes a page fault, the entire process is blocked.

Kernel level thread

The operating system recognizes kernel-level threads. There is a thread control block and a
process control block in the system for each thread and process. Kernel-level threads are
implemented by the operating system: the kernel knows about all the threads and manages
them, and it offers system calls to create and manage threads from user space. The
implementation of kernel threads is more difficult than user threads, and context switch time
is longer. However, if one kernel thread performs a blocking operation, another thread can
continue execution. Examples: Windows, Solaris.
Advantages of Kernel-level threads

1. The kernel-level thread is fully aware of all threads.


2. The scheduler may decide to give more CPU time to processes that have a large
number of threads.
3. Kernel-level threads are good for applications that frequently block.

Disadvantages of Kernel-level threads

1. The kernel manages and schedules all threads, which adds overhead.


2. The implementation of kernel threads is more difficult than user threads.
3. Kernel-level threads are slower than user-level threads.

7. Explain about the differences between process and threads?

Ans) Process vs Thread

Process simply means any program in execution, while a thread is a segment of a
process. The main differences between process and thread are mentioned below:

• Resources: Processes use more resources and hence are termed heavyweight processes;
threads share resources and hence are termed lightweight processes.

• Creation and termination: Creation and termination of processes is slower; creation and
termination of threads is faster.

• Code and data: Processes have their own code and data/files; threads share code and
data/files within a process.

• Communication: Communication between processes is slower; communication between
threads is faster.

• Context switching: Context switching between processes is slower; context switching
between threads is faster.

• Independence: Processes are independent of each other; threads are interdependent (they
can read, write, or change another thread's data).

• Example: Opening two different browsers is like two processes; opening two tabs in the
same browser is like two threads.
8. Explain about the concept of multithreads in detail?
Ans) Multithreading Models
• Some operating systems provide a combined user-level and kernel-level thread facility;
Solaris is a good example of this combined approach.
 In a combined system, multiple threads within the same application can run in parallel on
multiple processors, and a blocking system call need not block the entire process.

 Multithreading models are three types


 Many to many relationship.
 Many to one relationship.
 One to one relationship.

Many to many relationship:


 In this model, multiple user threads are multiplexed onto the same or a smaller number of
kernel-level threads.
 The number of kernel-level threads is specific to the machine. The advantage of this model
is that if a user thread is blocked, the other user threads can be scheduled onto other kernel
threads.
 Thus, the system doesn't block when a particular thread is blocked.
 It is the best multithreading model.
Many to One Model :

 In this model, multiple user threads are mapped to one kernel thread.
 In this model, when a user thread makes a blocking system call, the entire process blocks.
 As there is only one kernel thread and only one user thread can access the kernel at a time,
multiple threads cannot run on multiple processors at the same time.

Thread management is done at the user level, so it is more efficient.

One to One Model:

 In this model, there is a one-to-one relationship between kernel and user threads.
 In this model, multiple threads can run on multiple processors.
 The problem with this model is that creating a user thread requires creating a
corresponding kernel thread.
• As each user thread is connected to a different kernel thread, if any user thread makes a
blocking system call, the other user threads won't be blocked.
9. Define scheduler and explain about different types of schedulers in detail?
Ans) Schedulers:

•Schedulers are special system software which handle process scheduling in various ways.
•Their main task is to select the jobs to be submitted into the system and to decide which
process to run next.

Process Schedulers are of three types–


•Long-Term Scheduler
•Short-Term Scheduler
•Medium-Term Scheduler

•Long-term scheduler (or job scheduler) – selects which processes should be brought into the
ready queue.
•Short-term scheduler (or CPU scheduler) – selects which process should be executed next
and allocates CPU.

Short-term scheduler is invoked very frequently (milliseconds) (must be fast)


• Long-term scheduler is invoked very infrequently (seconds, minutes) (may be slow)
• The long-term scheduler controls the degree of multiprogramming

Processes can be described as either:


I/O-bound process – spends more time doing I/O than computations, many short CPU bursts
CPU-bound process – spends more time doing computations; few very long CPU bursts.

Long Term Scheduler:


It is also called a job scheduler.
• A long-term scheduler determines which programs are admitted to the system for processing.
• When a process changes state from new to ready, it is the long-term scheduler that has
admitted it.
• It selects processes from the queue and loads them into memory for execution.
• Processes are loaded into memory for CPU scheduling.
• It also controls the degree of multiprogramming.
• On some systems, the long-term scheduler may not be available or may be minimal.
• Time-sharing operating systems have no long-term scheduler.

Short Term Scheduler:


It is also called the CPU scheduler.
• Its main objective is to increase system performance in accordance with the chosen set of
criteria.
• It is the change of ready state to running state of the process.
• CPU scheduler selects a process among the processes that are ready to execute and allocates
CPU to one of them.
• Short-term schedulers, also known as dispatchers, make the decision of which process to
execute next.
• Short-term schedulers are faster than long-term schedulers.

Medium Term Scheduler:


Medium-term scheduling is a part of swapping.
• It removes the processes from the memory.
• It reduces the degree of multiprogramming.
• The medium-term scheduler is in-charge of handling the swapped-out processes.
• A running process may become suspended if it makes an I/O request.
• A suspended process cannot make any progress towards completion.
• In this condition, to remove the process from memory and make space for other processes, the
suspended process is moved to the secondary storage.
• This process is called swapping, and the process is said to be swapped out or rolled out.
• Swapping may be necessary to improve the process mix.
10. Explain about scheduling queues in detail?
Ans) Process Scheduling Queues: All PCBs (Process Control Blocks) are kept in process
scheduling queues by the OS. Each processing state has its own queue in the OS, and the
PCBs of all processes in the same execution state are placed in the same queue. When a
process's status changes, its PCB is unlinked from its present queue and moved to the queue
for its new state.

The following major process scheduling queues are maintained by the Operating System:

Job queue – It contains all of the system’s processes.


Ready queue – This queue maintains a list of all processes in the main memory that are
ready to run. This queue is always filled with new processes.
Device queue – This queue is made up of processes that are blocked while waiting for an
I/O device.

Each queue can be managed by the OS using distinct policies (FIFO, Priority, Round Robin,
etc.). The OS scheduler governs how tasks are moved between the ready and run queues;
the run queue can hold only one entry per processor core.

Two-State Process Model


The running and not-running states of the two-state process model are detailed below.
Running
Whenever a new process is created, it enters the system in the running state.
Not running
Processes that are not currently executing are kept in a queue, waiting for their turn to
execute. Each queue entry is a pointer to a particular process, and the queue is implemented
using a linked list. The dispatcher works as follows: when a process is interrupted, it is
transferred to the waiting queue; once the process is completed or aborted, it is discarded. In
either case, the dispatcher then chooses a process from the queue to run.
What are Scheduling Queues?

 The Job Queue stores all processes that are entered into the system.
 The Ready Queue holds processes in the ready state.
 Device Queues hold processes that are waiting for any device to become available.
For each I/O device, there are separate device queues.
The ready queue is where a new process is initially placed. It sits in the ready queue, waiting
to be chosen for execution or dispatched. One of the following occurrences can happen once
the process has been assigned to the CPU and is running:

 The process could issue an I/O request and then be placed in an I/O queue.
 The process could create a new sub-process and then wait for it to terminate.
 As a result of an interrupt, the process could be forcibly removed from the CPU and
returned to the ready queue.

The process eventually moves from the waiting state to the ready state in the first two
circumstances and then returns to the ready queue. This cycle is repeated until a process is
terminated, at which point it is removed from all queues and its PCB and resources are
deallocated.

11. Write the differences among the different types of schedulers in detail?

Ans) Differences among the different types of schedulers (short-term, medium-term, and
long-term) in an operating system:

1. Alternate name:
• Short-term: also called the CPU scheduler.
• Medium-term: also called the process swapping scheduler.
• Long-term: also called the job scheduler.

2. Degree of multiprogramming:
• Short-term: provides lesser control over the degree of multiprogramming.
• Medium-term: reduces the degree of multiprogramming.
• Long-term: controls the degree of multiprogramming.

3. Speed:
• Short-term: very fast.
• Medium-term: between that of the short-term and long-term schedulers.
• Long-term: slower than the medium-term scheduler.

4. Usage in time-sharing systems:
• Short-term: minimal in a time-sharing system.
• Medium-term: a part of time-sharing systems.
• Long-term: almost absent or minimal in a time-sharing system.

5. Purpose:
• Short-term: selects from among the processes that are ready to execute.
• Medium-term: can reintroduce a swapped-out process into memory so that its execution
can be continued.
• Long-term: selects processes from the pool and loads them into memory for execution.

6. Process state transition:
• Short-term: ready to running.
• Medium-term: not present (the process is suspended).
• Long-term: new to ready.

7. Selection of process:
• Short-term: selects a new process for the CPU quite frequently.
• Medium-term: selects a process that does not currently need to be fully loaded in RAM
and swaps it out to the swap partition.
• Long-term: selects a good mix of I/O-bound and CPU-bound processes.
12. Explain about the concept of scheduling criteria?

Ans) CPU scheduling is the process of determining which process or task is to be executed
by the central processing unit (CPU) at any given time. It is an important component of
modern operating systems that allows multiple processes to run simultaneously on a single
processor. The CPU scheduler determines the order and priority in which processes are
executed and allocates CPU time accordingly, based on various criteria such as CPU
utilization, throughput, turnaround time, waiting time, and response time. Efficient CPU
scheduling is crucial for optimizing system performance and ensuring that processes are
executed in a fair and timely manner.

Importance of CPU Scheduling Criteria

CPU scheduling criteria are important for several reasons −

 Efficient resource utilization − By maximizing CPU utilization and throughput,


CPU scheduling ensures that the processor is being used to its full potential. This
leads to increased productivity and efficient use of system resources.
 Fairness − CPU scheduling algorithms that prioritize waiting time and response time
help ensure that all processes have a fair chance to access the CPU. This is important
in multi-user environments where multiple users are competing for the same
resources.
 Responsiveness − CPU scheduling algorithms that prioritize response time ensure
that processes that require immediate attention (such as user input or real-time
systems) are executed quickly, improving the overall responsiveness of the system.
 Predictability − CPU scheduling algorithms that prioritize turnaround time provide a
predictable execution time for processes, which is important for meeting deadlines
and ensuring that critical tasks are completed on time.

CPU Scheduling Criteria

There are some CPU scheduling criteria given below –


1. CPU Utilization: CPU utilization is a criterion used in CPU scheduling that measures the
percentage of time the CPU is busy processing a task. It is important to maximize CPU
utilization because when the CPU is idle, it is not performing any useful work, and this can
lead to wasted system resources and reduced productivity.

A high CPU utilization indicates that the CPU is busy and working efficiently, processing as
many tasks as possible. However, a too high CPU utilization can also result in a system
slowdown due to excessive competition for resources.

2. Throughput: Throughput is a criterion used in CPU scheduling that measures the number
of tasks or processes completed within a specific period. It is important to maximize
throughput because it reflects the efficiency and productivity of the system. A high
throughput indicates that the system is processing tasks efficiently, which can lead to
increased productivity and faster completion of tasks.

Throughput is particularly important in batch processing environments where the goal is to


complete as many jobs as possible within a specific time frame. Maximizing throughput can
lead to improved system performance and increased productivity.

3. Turnaround Time: Turnaround time is a criterion used in CPU scheduling that measures
the time it takes for a task or process to complete from the moment it is submitted to the
system until it is fully processed and ready for output. It is important to minimize turnaround
time because it reflects the overall efficiency of the system and can affect user satisfaction
and productivity.

A short turnaround time indicates that the system is processing tasks quickly and efficiently,
leading to faster completion of tasks and improved user satisfaction. On the other hand, a
long turnaround time can result in delays, decreased productivity, and user dissatisfaction.

4. Waiting Time: Waiting time is a criterion used in CPU scheduling that measures the
amount of time a task or process waits in the ready queue before it is processed by the CPU.
It is important to minimize waiting time because it reflects the efficiency of the scheduling
algorithm and affects user satisfaction.

A short waiting time indicates that tasks are being processed efficiently and quickly, leading
to improved user satisfaction and productivity. On the other hand, a long waiting time can
result in delays and decreased productivity, leading to user dissatisfaction. Some CPU
scheduling algorithms that prioritize waiting time include Shortest Job First (SJF), Priority
scheduling, and Multilevel Feedback Queue (MLFQ). These algorithms aim to prioritize
short and simple tasks or give higher priority to more important tasks, which can lead to
shorter waiting times and improved system efficiency.

5. Response Time: Response time is a criterion used in CPU scheduling that measures the
time it takes for the system to respond to a user's request or input. It is important to minimize
response time because it affects user satisfaction and the overall efficiency of the system. A
short response time indicates that the system is processing tasks quickly and efficiently,
leading to improved user satisfaction and productivity. On the other hand, a long response
time can result in user frustration and decreased productivity. Some CPU scheduling
algorithms that prioritize response time include Round Robin, Priority scheduling, and
Multilevel Feedback Queue (MLFQ). These algorithms aim to prioritize tasks that require
immediate attention, such as user input, to reduce response time and improve system
efficiency.
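These criteria can be computed directly from a process's timestamps. The sketch below (with hypothetical values, just to fix the formulas) shows how turnaround, waiting, and response time relate to arrival, first run, completion, and burst time:

```python
def metrics(arrival, first_run, completion, burst):
    """Standard per-process scheduling metrics."""
    turnaround = completion - arrival     # total time in the system
    waiting = turnaround - burst          # time spent in the ready queue
    response = first_run - arrival        # delay before first CPU access
    return turnaround, waiting, response

# Hypothetical process: arrives at 2, first runs at 5, finishes at 12,
# and needs 4 units of CPU time in total.
print(metrics(arrival=2, first_run=5, completion=12, burst=4))  # (10, 6, 3)
```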

13. Explain about Inter process Communication in detail?

Ans) Inter process communication is the mechanism provided by the operating system that
allows processes to communicate with each other. This communication could involve a
process letting another process know that some event has occurred or the transferring of data
from one process to another.

Synchronization in Inter Process Communication: Synchronization is a necessary part of
inter process communication. It is either provided by the inter process control mechanism or
handled by the communicating processes. Some of the methods to provide synchronization
are as follows −

 Semaphore: A semaphore is a variable that controls the access to a common
resource by multiple processes. The two types of semaphores are binary semaphores
and counting semaphores.
 Mutual Exclusion: Mutual exclusion requires that only one process thread can enter
the critical section at a time. This is useful for synchronization and also prevents race
conditions.
 Barrier: A barrier does not allow individual processes to proceed until all the
processes reach it. Many parallel languages and collective routines impose barriers.
 Spinlock: This is a type of lock. The processes trying to acquire this lock wait in a
loop while checking if the lock is available or not. This is known as busy waiting
because the process is not doing any useful operation even though it is active.
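The semaphore and mutual-exclusion ideas above can be sketched with Python's threading module (a minimal illustration of the concepts, not tied to any particular operating system):

```python
import threading

counter = 0
lock = threading.Lock()          # mutual exclusion: one thread in the critical section
slots = threading.Semaphore(2)   # counting semaphore: at most 2 threads in this region

def worker():
    global counter
    with slots:                  # acquire a semaphore slot
        with lock:               # enter the critical section
            counter += 1         # protected update: no race condition

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # all 10 increments survive because of mutual exclusion
```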

Approaches to Interprocess Communication

The different approaches to implement interprocess communication are given as follows −

 Pipe: A pipe is a data channel that is unidirectional. Two pipes can be used to create a
two-way data channel between two processes. This uses standard input and output
methods. Pipes are used in all POSIX systems as well as Windows operating systems.
 Socket: The socket is the endpoint for sending or receiving data in a network. This is
true for data sent between processes on the same computer or data sent between
different computers on the same network. Most of the operating systems use sockets
for interprocess communication.
 File: A file is a data record that may be stored on a disk or acquired on demand by a
file server. Multiple processes can access a file as required. All operating systems use
files for data storage.
 Signal: Signals are useful in interprocess communication in a limited way. They are
system messages that are sent from one process to another. Normally, signals are not
used to transfer data but are used for remote commands between processes.
 Shared Memory: Shared memory is the memory that can be simultaneously accessed
by multiple processes. This is done so that the processes can communicate with each
other. All POSIX systems, as well as Windows operating systems use shared memory.
 Message Queue: Multiple processes can read and write data to the message queue
without being connected to each other. Messages are stored in the queue until their
recipient retrieves them. Message queues are quite useful for interprocess
communication and are used by most operating systems.
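As a small illustration of the message-queue approach, Python's multiprocessing module provides a `Queue` that two processes can share (a sketch of the idea; real systems may use OS-level pipes or message queues directly):

```python
from multiprocessing import Process, Queue

def producer(q):
    # one process writes messages into the queue
    for i in range(3):
        q.put(f"message {i}")    # messages wait in the queue until retrieved
    q.put(None)                  # sentinel: no more messages

def consume(q):
    # the recipient retrieves messages without being connected to the sender
    received = []
    while (msg := q.get()) is not None:
        received.append(msg)
    return received

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()                    # producer and consumer run as separate processes
    messages = consume(q)
    p.join()
    print(messages)
```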

14. Explain about the modes of scheduling algorithms (preemptive and non preemptive)?

Ans) Scheduling algorithms OR CPU scheduling algorithms:

•A CPU scheduling algorithm is used to determine which process will use CPU for execution
and which processes to hold or remove from execution.

•The main goal or objective of CPU scheduling algorithms in OS is to make sure that the
CPU is never in an idle state, meaning that the OS has at least one of the processes ready for
execution among the available processes in the ready queue.

Modes in CPU Scheduling Algorithms

There are two modes in CPU Scheduling Algorithms. They are:

1.Preemptive Approach

2.Non-Preemptive Approach

Preemptive Scheduling:

Preemptive scheduling is used when a process switches from running state to ready state or
from the waiting state to the ready state.

•In these algorithms, processes are assigned with priority.

•Whenever a high-priority process comes in, the lower-priority process that has occupied the
CPU is preempted.

•That is, it releases the CPU, and the high-priority process takes the CPU for its execution.

Non-Preemptive Scheduling:

Non-Preemptive scheduling is used when a process terminates, or when a process switches
from the running state to the waiting state.
•In these algorithms, we cannot preempt a running process.
•That is, once a process is running on the CPU, it releases the CPU only by switching to the
waiting state or by terminating.

15. Explain about different types of CPU scheduling algorithm along with examples ?
(FCFS,SJF,ROUND ROBIN,PRIORITY).
Ans) Types of CPU scheduling Algorithms:
There are mainly six types of process scheduling algorithms
1. First Come First Serve (FCFS)
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
6. Shortest Remaining Time

First Come First Serve:

 This is the first type of CPU Scheduling Algorithm. Here we are going to learn how
the CPU allots resources to processes.
 In the First Come First Serve CPU Scheduling Algorithm, the CPU allots its
resources to processes in the order in which they arrive: the process that requests the
CPU first is allocated the CPU first.
 We can also say that the First Come First Serve CPU Scheduling Algorithm follows a
First In First Out (FIFO) discipline in the ready queue.

Characteristics of FCFS:

 FCFS is a non-preemptive CPU scheduling algorithm: once a process starts, it runs to completion.


 Tasks are always executed on a First-come, First-serve concept.
 FCFS is easy to implement and use.
 This algorithm is not very efficient in performance, and the wait time is quite high.

Advantages:

 Involves no complex logic and just picks processes from the ready queue one by one.
 Easy to implement and understand.
 Every process will eventually get a chance to run so no starvation occurs.

Disadvantages:

 Waiting time for processes with less execution time is often very long.
 It favors CPU-bound processes over I/O-bound processes.
 Leads to convoy effect.
 Causes lower device and CPU utilization.
 Poor performance as the average wait time is high.

Example:

Consider the given table below and find Completion time (CT), Turn-around time (TAT),
Waiting time (WT), Response time (RT), Average Turn-around time and Average Waiting
time.

Process ID Arrival time Burst time

P1 2 2

P2 5 6

P3 0 4

P4 0 7

P5 7 4

Solution:

Gantt chart

| P3 | P4 | P1 | P2 | P5 |
0    4    11   13   19   23
For this problem CT, TAT, WT, RT is shown in the given table −

Process ID Arrival time Burst time CT TAT=CT-AT WT=TAT-BT RT

P1 2 2 13 13-2= 11 11-2= 9 9

P2 5 6 19 19-5= 14 14-6= 8 8

P3 0 4 4 4-0= 4 4-4= 0 0

P4 0 7 11 11-0= 11 11-7= 4 4

P5 7 4 23 23-7= 16 16-4= 12 12

Average Waiting time = (9+8+0+4+12)/5 = 33/5 = 6.6 time unit (time unit can be
considered as milliseconds)

Average Turn-around time = (11+14+4+11+16)/5 = 56/5 = 11.2 time unit (time unit can be
considered as milliseconds)
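The FCFS calculation above can be reproduced with a short simulation (a Python sketch; the process data is taken from the table above):

```python
# FCFS: sort by arrival time, run each process to completion in order.
processes = [  # (name, arrival time, burst time)
    ("P1", 2, 2), ("P2", 5, 6), ("P3", 0, 4), ("P4", 0, 7), ("P5", 7, 4),
]

results = {}
time = 0
for name, at, bt in sorted(processes, key=lambda p: p[1]):
    time = max(time, at)       # CPU may sit idle until the process arrives
    time += bt                 # run to completion (non-preemptive)
    ct = time
    tat = ct - at              # turnaround time = completion - arrival
    wt = tat - bt              # waiting time = turnaround - burst
    results[name] = (ct, tat, wt)

avg_wt = sum(r[2] for r in results.values()) / len(results)
avg_tat = sum(r[1] for r in results.values()) / len(results)
print(avg_wt, avg_tat)  # 6.6 11.2 — matches the averages computed above
```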

Shortest Job First scheduling:

 This is another type of CPU Scheduling Algorithm. Here we are going to learn how
the CPU allots resources to processes.
 In the Shortest Job First CPU Scheduling Algorithm, the CPU allots its resources
to the process in the ready queue that has the least Burst Time.
 The scheduling decision therefore depends mainly on the Burst Times, together with
the Arrival Times of the processes.
 If two processes in the Ready Queue have the same Burst Time, the tie is usually
broken by arrival order (First Come First Serve).
 Shortest Job First can be called SJF in short form.

Characteristics:

o SJF (Shortest Job First) has the least average waiting time. This is because the
heavy processes are executed last, so all the small processes are executed first.
o If shorter processes keep arriving, long processes may starve. The idea of aging
can be used to overcome this issue.
o Shortest Job First can be executed in a preemptive way (Shortest Remaining Time
First) as well as a non-preemptive way.

Advantages

o SJF is used because it has the least average waiting time of the CPU
Scheduling Algorithms.
o SJF is also used as a long-term (job) scheduling algorithm, where estimated run
times are available.

Disadvantages

o Starvation of long processes is one of the negative traits the Shortest Job First CPU
Scheduling Algorithm exhibits.
o Often, it becomes difficult to forecast how long the next CPU burst will take.

Examples for Shortest Job First

Process ID Arrival Time Burst Time

______ _______ _______

P0 1 3

P1 2 6

P2 0 2

P3 3 7

P4 2 4

P5 5 5

Process ID Arrival Time Burst Time Completion Time TAT = CT - AT WT = TAT - BT

P0 1 3 5 4 1

P1 2 6 20 18 12

P2 0 2 2 2 0

P3 3 7 27 24 17

P4 2 4 9 7 3

P5 5 5 14 9 4

Gantt Chart:

| P2 | P0 | P4 | P5 | P1 | P3 |
0    2    5    9    14   20   27

Now let us find out Average Completion Time, Average Turn Around Time, Average
Waiting Time.

Average Completion Time:

1. Average Completion Time = ( 5 + 20 + 2 + 27 + 9 + 14 ) / 6


2. Average Completion Time = 77/6
3. Average Completion Time = 12.833

Average Waiting Time:

1. Average Waiting Time = ( 1 + 12 + 0 + 17 + 3 + 4 ) / 6


2. Average Waiting Time = 37 / 6
3. Average Waiting Time = 6.166

Average Turn Around Time:

1. Average Turn Around Time = ( 4 + 18 + 2 + 24 + 7 + 9 ) / 6


2. Average Turn Around Time = 64 / 6
3. Average Turn Around Time = 10.666
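A non-preemptive SJF simulation (a Python sketch; P2's arrival is taken as 0, matching the solution table) reproduces these completion times:

```python
# Non-preemptive SJF: at each completion, pick the arrived process
# with the smallest burst time.
procs = {  # name: (arrival time, burst time)
    "P0": (1, 3), "P1": (2, 6), "P2": (0, 2),
    "P3": (3, 7), "P4": (2, 4), "P5": (5, 5),
}

completion = {}
time = 0
remaining = dict(procs)
while remaining:
    ready = {n: ab for n, ab in remaining.items() if ab[0] <= time}
    if not ready:                       # CPU idle: jump to the next arrival
        time = min(ab[0] for ab in remaining.values())
        continue
    name = min(ready, key=lambda n: ready[n][1])  # least burst time wins
    time += remaining.pop(name)[1]      # run to completion
    completion[name] = time

tat = {n: completion[n] - procs[n][0] for n in procs}   # turnaround
wt = {n: tat[n] - procs[n][1] for n in procs}           # waiting
print(sum(wt.values()) / 6)   # 37/6 ≈ 6.166
```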

Priority CPU Scheduling:

 This is another type of CPU Scheduling Algorithm. Here we are going to learn how
the CPU allots resources to processes.
 The Priority CPU Scheduling is different from the remaining CPU Scheduling
Algorithms. Here, each and every process has a certain Priority number.

 There are two conventions for Priority Values:

o In some systems, the highest number is considered the highest priority.


o In others (as in the example below), the lowest number is considered the highest priority.

Preemptive Priority Scheduling: In the preemptive variant, whenever a process with a higher
priority than the running process arrives, the running process is preempted and the
higher-priority process takes the CPU. When there is a conflict, that is, when two or more
processes have equal priority, the tie is broken by the FCFS (First Come First Serve)
approach.

Characteristics

o Priority CPU scheduling organizes tasks according to importance.


o When a task with a lower priority is being performed while a task with a higher
priority arrives, the task with the lower priority is replaced by the task with the higher
priority, and the latter is stopped until the execution is finished.
o A process's priority level rises as the allocated number decreases.

Advantages

o The typical or average waiting time for Priority CPU Scheduling is shorter than First
Come First Serve (FCFS).
o It is easier to handle Priority CPU scheduling
o It is less complex

Disadvantages

o The Starvation Problem is one of the Priority CPU Scheduling
Algorithm's most prevalent flaws. Because of this issue, a low-priority process may
wait a very long time to be scheduled onto the CPU if higher-priority processes keep
arriving. This is called the starvation problem.

Examples:

Now, let us explain this problem with the help of an example of Priority Scheduling
S. No Process ID Arrival Time Burst Time Priority

___ ______ _______ _______ _______

1 P1 0 5 5

2 P2 1 6 4

3 P3 2 2 0

4 P4 3 1 2

5 P5 4 7 1

6 P6 4 6 3

Here, in this problem, the highest priority number is the least prioritized.

This means 5 has the least priority and 0 has the highest priority.

S. No Process Id Arrival Time Burst Time Priority Completion Time TAT = CT - AT WT = TAT - BT

1 P1 0 5 5 5 5 0

2 P2 1 6 4 27 26 20

3 P3 2 2 0 7 5 3

4 P4 3 1 2 15 12 11

5 P5 4 7 1 14 10 3

6 P6 4 6 3 21 17 11

Solution:

Gantt Chart:

| P1 | P3 | P5 | P4 | P6 | P2 |
0    5    7    14   15   21   27

(In this example the priorities are applied non-preemptively: P1 runs to completion
before the higher-priority processes that arrived during its execution are scheduled.)
Average Completion Time

1. Average Completion Time = ( 5 +27 +7 +15 +14 + 21 ) / 6


2. Average Completion Time = 89 / 6
3. Average Completion Time = 14.8333

Average Waiting Time

1. Average Waiting Time = ( 0 + 20 + 3 + 11 + 3 + 11 ) / 6


2. Average Waiting Time = 48 / 6
3. Average Waiting Time = 8

Average Turn Around Time

1. Average Turn Around Time = ( 5 + 26 + 5 + 12 + 10 + 17 ) / 6


2. Average Turn Around Time = 75 / 6
3. Average Turn Around Time = 12.5
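A non-preemptive priority simulation (a Python sketch; lower number = higher priority, as in the example above) reproduces the completion times from the table:

```python
# Non-preemptive priority scheduling: at each completion, pick the
# arrived process with the highest priority (lowest priority number).
procs = {  # name: (arrival time, burst time, priority)
    "P1": (0, 5, 5), "P2": (1, 6, 4), "P3": (2, 2, 0),
    "P4": (3, 1, 2), "P5": (4, 7, 1), "P6": (4, 6, 3),
}

completion = {}
time = 0
remaining = dict(procs)
while remaining:
    ready = {n: v for n, v in remaining.items() if v[0] <= time}
    if not ready:                       # CPU idle: jump to the next arrival
        time = min(v[0] for v in remaining.values())
        continue
    name = min(ready, key=lambda n: ready[n][2])  # lowest number wins
    time += remaining.pop(name)[1]      # run to completion
    completion[name] = time

wt = {n: completion[n] - procs[n][0] - procs[n][1] for n in procs}
print(sum(wt.values()) / 6)   # 48/6 = 8.0
```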
Round Robin CPU Scheduling

 Round Robin is a CPU scheduling mechanism that cycles through the ready queue,
assigning each task a fixed time slot (the time quantum).

 It is the First Come First Serve CPU Scheduling technique run in preemptive mode.

 The Round Robin CPU algorithm is designed primarily for Time-Sharing systems.

Round Robin CPU Scheduling Algorithm characteristics include:

o Because all processes receive a balanced CPU allocation, it is straightforward, simple


to use, and starvation-free.
o It is one of the most used techniques for CPU core scheduling. Because processes are
only allowed access to the CPU for a brief period of time (one time quantum), it is
seen as preemptive.

The benefits of round robin CPU Scheduling:

o Every process receives an equal share of CPU time, therefore Round Robin appears
to be equitable.
o A newly created process is added to the end of the ready queue.

Examples:
Problem

Process ID Arrival Time Burst Time

P0 1 3

P1 0 5

P2 3 2

P3 4 3

P4 2 1

Solution:

Process ID Arrival Time Burst Time Completion Time Turn Around Time Waiting Time

P0 1 3 5 4 1

P1 0 5 14 14 9

P2 3 2 7 4 2

P3 4 3 10 6 3

P4 2 1 3 1 0

Gantt Chart:

Average Completion Time = 7.8

Average Turn Around Time = 5.8

Average Waiting Time = 3
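A generic Round Robin simulator can be sketched as follows (the process set and the time quantum of 2 here are illustrative, not the ones from the table above, since the example's quantum is not stated):

```python
from collections import deque

def round_robin(procs, quantum):
    """procs: list of (name, arrival, burst), sorted by arrival time."""
    completion = {}
    remaining = {n: b for n, _, b in procs}
    queue, time, i = deque(), 0, 0
    while len(completion) < len(procs):
        # admit everything that has arrived by now
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0]); i += 1
        if not queue:                     # CPU idle until the next arrival
            time = procs[i][1]
            continue
        name = queue.popleft()
        run = min(quantum, remaining[name])
        time += run                       # run for one quantum (or less)
        remaining[name] -= run
        # arrivals during this slice enter the queue before the
        # preempted process rejoins at the tail
        while i < len(procs) and procs[i][1] <= time:
            queue.append(procs[i][0]); i += 1
        if remaining[name] == 0:
            completion[name] = time
        else:
            queue.append(name)            # preempted: back of the queue
    return completion

print(round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)], quantum=2))
```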
