Os-Ii-Notes 1
In this article, you will learn about concurrency in the operating system, along with its
principles, issues, advantages, and disadvantages.
What is Concurrency?
Concurrency is the execution of multiple instruction sequences at the same time, with several processes making progress during overlapping time periods.
Principles of Concurrency
It's difficult to spot a programming error because failures are usually not repeatable,
since the states of the shared components differ each time the code is executed.
It could be inefficient for the OS to lock the resource and prevent other
processes from using it.
Advantages of Concurrency in OS
1. Better Performance
Disadvantages of Concurrency
2. Deadlock
In concurrent computing, a deadlock occurs when one group member waits for another
member, including itself, to take action, such as sending a message or releasing a lock. Software and
hardware locks are commonly used to arbitrate shared resources and
implement process synchronization in parallel computing, distributed systems,
and multiprocessing.
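As noted above, locks are commonly used to arbitrate access to shared resources. A minimal Python sketch (the counter, thread count, and iteration count are illustrative assumptions, not part of the original notes):

```python
import threading

counter = 0                      # shared resource
lock = threading.Lock()          # software lock arbitrating access

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:               # only one thread may update at a time
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000: the lock prevents lost updates
```

Without the lock, the read-modify-write on `counter` could interleave between threads, losing updates.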
3. Blocking
A blocked process is waiting for some event, such as the availability of a resource
or the completion of an I/O operation. A process may be blocked for a long time
waiting for terminal input; if that process also needs to update some data
periodically, such long waits are very undesirable.
4. Race Conditions
A race condition occurs when the outcome of execution depends on the particular order in which concurrent processes access shared data.
5. Starvation
Starvation occurs when a runnable process is overlooked indefinitely by the scheduler: although it is ready to run, it is never chosen.
Whenever the CPU becomes idle, the operating system must select one of the
processes in the ready queue for execution. The selection is carried out by the
short-term (CPU) scheduler, which chooses among the processes in memory that
are ready to execute and allocates the CPU to one of them.
What is a process?
The process memory is divided into four sections for efficient operation:
• The text section consists of the compiled program code, which is
read from non-volatile storage when the program is launched.
• The data section is made up of global and static variables, allocated and
initialized before the main program runs.
• The heap is used for flexible, or dynamic, memory allocation and is managed
by calls to new, delete, malloc, free, etc.
• The stack is used for local variables. Space on the stack is reserved
for a local variable when it is declared.
What is Process Scheduling?
Process Scheduling is the activity of the process manager that handles the
removal of the running process from the CPU and the selection of another process
on the basis of a particular strategy.
Process Scheduling is an integral part of multiprogramming operating systems.
Such operating systems allow more than one process to be loaded into
executable memory at a time, and the loaded processes share the CPU using
time multiplexing.
There are three types of process schedulers:
• Long term or Job Scheduler
• Short term or CPU Scheduler
• Medium-term Scheduler
That’s where scheduling comes in! First, you determine a metric, for example
“the amount of time until completion”. We will define this metric as “the time
interval between when a task enters the system and when it is completed”.
Second, you choose a scheduling policy that minimizes this metric: we want
our tasks to finish as soon as possible.
CPU scheduling is the process of deciding which process will own the CPU
while another process is suspended. The main function of CPU
scheduling is to ensure that whenever the CPU becomes idle, the OS
selects one of the processes available in the ready queue.
In multiprogramming, if the long-term scheduler selects multiple I/O-bound
processes, then most of the time the CPU remains idle. The function of
an effective scheduler is to improve resource utilization.
• Arrival Time: Time at which the process arrives in the ready queue.
• Completion Time: Time at which process completes its execution.
• Burst Time: Time required by a process for CPU execution.
• Turn Around Time: Time Difference between completion time and arrival
time. Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T): Time Difference between turnaround time and burst
time. Waiting Time = Turn Around Time – Burst Time
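The two formulas above can be applied directly. A small sketch using made-up arrival, burst, and completion times:

```python
# Hypothetical process data: (name, arrival time, burst time, completion time)
processes = [("P1", 0, 4, 4), ("P2", 1, 3, 7), ("P3", 2, 1, 8)]

results = {}
for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # Turn Around Time = Completion - Arrival
    waiting = turnaround - burst        # Waiting Time = Turn Around Time - Burst
    results[name] = (turnaround, waiting)

print(results)  # {'P1': (4, 0), 'P2': (6, 3), 'P3': (6, 5)}
```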
• Waiting Time: A scheduling algorithm does not affect the time required to
complete a process once it starts execution; it only affects the waiting
time of a process, i.e. the time spent by the process waiting in the ready
queue.
• Completion Time: The completion time is the time when the process
stops executing, which means that the process has completed its burst
time and is completely executed.
Preemptive Scheduling
Preemptive scheduling is used when a process switches from the running
state to the ready state or from the waiting state to the ready state. The
resources (mainly CPU cycles) are allocated to the process for a limited
amount of time and then taken away, and the process is again placed back in
the ready queue if that process still has CPU burst time remaining. That
process stays in the ready queue till it gets its next chance to execute.
Non-Preemptive Scheduling
Non-preemptive Scheduling is used when a process terminates, or a process
switches from running to the waiting state. In this scheduling, once the
resources (CPU cycles) are allocated to a process, the process holds the CPU
till it gets terminated or reaches a waiting state. Non-preemptive
scheduling does not interrupt a process running on the CPU in the middle of its
execution. Instead, it waits till the process completes its CPU burst time, and
then it can allocate the CPU to another process.
Key Differences Between Preemptive and Non-Preemptive Scheduling
• Overhead: Preemptive scheduling has the overhead of scheduling the
processes; non-preemptive scheduling does not.
• CPU Utilization: CPU utilization is high in preemptive scheduling and low
in non-preemptive scheduling.
• Examples: Examples of preemptive scheduling are Round Robin and Shortest
Remaining Time First; examples of non-preemptive scheduling are First Come
First Serve and Shortest Job First.
1. First Come First Serve (FCFS):
First Come First Serve is the simplest CPU scheduling algorithm: the process that requests the CPU first is allocated the CPU first.
Characteristics of FCFS:
• FCFS is a non-preemptive CPU scheduling algorithm.
• Tasks are always executed on a First-come, First-serve concept.
• FCFS is easy to implement and use.
• This algorithm is not very efficient in performance, and the wait time is
quite high.
Advantages of FCFS:
• Easy to implement
• First come, first serve method
Disadvantages of FCFS:
• FCFS suffers from Convoy effect.
• The average waiting time is much higher than the other algorithms.
• Although FCFS is very simple and easy to implement, it is not very efficient.
To learn about how to implement this CPU scheduling algorithm, please refer
to our detailed article on First come, First serve Scheduling.
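A minimal FCFS sketch in Python (the process list is a made-up example, already sorted by arrival time):

```python
# (name, arrival time, burst time), sorted by arrival
processes = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]

time = 0
waiting = {}
for name, arrival, burst in processes:
    time = max(time, arrival)      # CPU idles until the process arrives
    waiting[name] = time - arrival # time spent in the ready queue
    time += burst                  # runs to completion, no preemption

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting)  # {'P1': 0, 'P2': 3, 'P3': 5}
```

Note how the short process P3 waits behind the longer earlier arrivals, illustrating the convoy effect mentioned above.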
2. Shortest Job First(SJF):
Shortest Job First (SJF) is a scheduling algorithm that selects the waiting
process with the smallest execution time to execute next. This scheduling
method may or may not be preemptive. It significantly reduces the average
waiting time for other processes waiting to be executed.
Characteristics of SJF:
• Shortest Job first has the advantage of having a minimum average
waiting time among all operating system scheduling algorithms.
• Each task is associated with an estimate of the time it needs to complete.
• It may cause starvation if shorter processes keep coming. This problem
can be solved using the concept of ageing.
Advantages of Shortest Job first:
• As SJF reduces the average waiting time thus, it is better than the first
come first serve scheduling algorithm.
• SJF is generally used for long term scheduling
Disadvantages of SJF:
• One of the demerits of SJF is starvation.
• Many times, it becomes complicated to predict the length of the upcoming
CPU request
To learn about how to implement this CPU scheduling algorithm, please refer
to our detailed article on Shortest Job First.
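A sketch of the non-preemptive variant described above (the process data is made up for illustration):

```python
# (name, arrival time, burst time), hypothetical values
processes = [("P1", 0, 6), ("P2", 1, 2), ("P3", 2, 4)]

time, order = 0, []
remaining = list(processes)
while remaining:
    ready = [p for p in remaining if p[1] <= time]
    if not ready:
        time = min(p[1] for p in remaining)  # jump to the next arrival
        continue
    job = min(ready, key=lambda p: p[2])     # smallest burst among ready
    remaining.remove(job)
    time += job[2]                           # runs to completion
    order.append(job[0])

print(order)  # ['P1', 'P2', 'P3']
```

P1 runs first even though it is the longest job, because it is the only process ready at time 0 and, being non-preemptive, it is never interrupted.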
3. Longest Job First(LJF):
Characteristics of LJF:
• Among all the processes waiting in the waiting queue, the CPU is always
assigned to the process having the largest burst time.
• If two processes have the same burst time then the tie is broken
using FCFS i.e. the process that arrived first is processed first.
• LJF CPU Scheduling can be of both preemptive and non-preemptive
types.
Advantages of LJF:
• No other task can be scheduled until the longest job or process executes
completely.
• All the jobs or processes finish at approximately the same time.
Disadvantages of LJF:
• Generally, the LJF algorithm gives a very high average waiting
time and average turn-around time for a given set of processes.
• This may lead to convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer
to our detailed article on the Longest job first scheduling.
4. Priority Scheduling:
In priority scheduling, the CPU is allocated to the ready process with the highest priority; processes with equal priorities are scheduled in FCFS order.
5. Round Robin:
Round Robin is a preemptive scheduling algorithm in which each process is cyclically assigned the CPU for a fixed time slice (quantum); when the quantum expires, the process is moved to the back of the ready queue.
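Round Robin cyclically gives each ready process a fixed time quantum. A minimal sketch (the quantum and burst times are made-up values, and all processes are assumed to arrive at time 0):

```python
from collections import deque

quantum = 2
queue = deque([("P1", 5), ("P2", 3), ("P3", 1)])  # (name, remaining burst)

time, finish = 0, {}
while queue:
    name, remaining = queue.popleft()
    run = min(quantum, remaining)   # run for at most one quantum
    time += run
    if remaining - run > 0:
        queue.append((name, remaining - run))  # back of the ready queue
    else:
        finish[name] = time         # process completed
print(finish)  # {'P3': 5, 'P2': 8, 'P1': 9}
```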
6. Shortest Remaining Time First (SRTF):
Shortest remaining time first is the preemptive version of the Shortest Job
First algorithm discussed earlier, in which the processor is allocated to the
job closest to completion. In SRTF, the process with the smallest amount of
time remaining until completion is selected to execute.
Disadvantages of SRTF:
• Like the shortest job first, it also has the potential for process starvation.
• Long processes may be held off indefinitely if short processes are
continually added.
To learn about how to implement this CPU scheduling algorithm, please refer
to our detailed article on the shortest remaining time first.
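A preemptive SRTF sketch, simulated one time unit at a time (process data is a made-up example):

```python
processes = {"P1": (0, 5), "P2": (1, 2)}  # name: (arrival, burst)

remaining = {name: burst for name, (arrival, burst) in processes.items()}
time, finish = 0, {}
while remaining:
    ready = [n for n in remaining if processes[n][0] <= time]
    if not ready:
        time += 1
        continue
    current = min(ready, key=lambda n: remaining[n])  # least remaining time
    remaining[current] -= 1    # run for one time unit; may be preempted next
    time += 1
    if remaining[current] == 0:
        del remaining[current]
        finish[current] = time
print(finish)  # {'P2': 3, 'P1': 7}
```

P2 preempts P1 at time 1 because its remaining time (2) is smaller than P1's (4).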
7. Longest Remaining Time First (LRTF):
The longest remaining time first is the preemptive version of the longest job
first scheduling algorithm. The operating system uses it to schedule incoming
processes in a systematic way: the processes with the longest remaining
processing time are scheduled first.
Advantages of LRTF:
• No other process can execute until the longest task executes completely.
• All the jobs or processes finish at approximately the same time.
Disadvantages of LRTF:
• This algorithm gives a very high average waiting time and average turn-
around time for a given set of processes.
• This may lead to a convoy effect.
To learn about how to implement this CPU scheduling algorithm, please refer
to our detailed article on the longest remaining time first.
8. Highest Response Ratio Next:
Highest Response Ratio Next (HRRN) is a non-preemptive algorithm that schedules the ready process with the highest response ratio, where Response Ratio = (Waiting Time + Burst Time) / Burst Time.
Advantages of HRRN:
• HRRN Scheduling algorithm generally gives better performance than
the shortest job first Scheduling.
• There is a reduction in waiting time for longer jobs and also it encourages
shorter jobs.
Disadvantages of HRRN:
• HRRN cannot be implemented exactly in practice, because the burst time of
every job cannot be known in advance.
• In this scheduling, there may occur an overload on the CPU.
To learn about how to implement this CPU scheduling algorithm, please refer
to our detailed article on Highest Response Ratio Next.
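The response ratio (W + S) / S can be computed directly. A sketch using made-up arrival and burst times at a hypothetical current time:

```python
time = 10                              # hypothetical current time
ready = [("P1", 4, 6), ("P2", 8, 3)]   # (name, arrival, burst)

def response_ratio(proc):
    name, arrival, burst = proc
    waiting = time - arrival           # W: time spent waiting so far
    return (waiting + burst) / burst   # (W + S) / S

chosen = max(ready, key=response_ratio)
print(chosen[0])  # P1
```

P1 is chosen (ratio 2.0 vs about 1.67 for P2): the longer a job waits, the higher its ratio climbs, which is how HRRN avoids starvation.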
Processes in the ready queue can be divided into different classes where
each class has its own scheduling needs. For example, a common division is
a foreground (interactive) process and a background (batch) process.
These two classes have different scheduling needs. For this kind of
situation Multilevel Queue Scheduling is used.
The description of the processes in the above diagram is as follows:
• System Processes: The operating system itself has processes to run, generally
termed System Processes.
• Interactive Processes: An Interactive Process is a process that interacts
with the user and therefore requires quick response times.
• Batch Processes: Batch processing is generally a technique in the
Operating system that collects the programs and data together in the form
of a batch before the processing starts.