
Scheduling

Dr. Darshan Ruikar


Scheduling algorithms (non-preemptive and preemptive), fair share scheduler of Unix

Lesson Schedule
1. Introduction to scheduling, key terms related to scheduling; compare and contrast different non-preemptive and preemptive scheduling algorithms
2. Scheduling process
3. Fair share scheduler
4. System call for time, clock profile
Learning outcomes

Topic Learning Outcomes (with course outcome CO and Bloom level BL):
1. Define key terms related to scheduling, such as process, job, turnaround time, waiting time, response time, and throughput. (CO4, L2)
2. Compare and contrast different non-preemptive scheduling algorithms (FCFS, SJF, Priority) based on their performance characteristics and suitability for different workloads. (CO4, L4)
3. Compare and contrast different preemptive scheduling algorithms (Round Robin, SJF with preemption, Priority with preemption) based on their performance characteristics and suitability for different workloads. (CO6, L4)
4. Explain the concept of fair-share scheduling and how it is implemented in the Unix operating system. (CO4, L3)
CPU Scheduling

CPU scheduling is the process by which the operating system decides which task or program gets to use the CPU at a particular time. Since many programs can run at the same time, the OS must manage the CPU's time so that every program gets a proper chance to run. The purpose of CPU scheduling is to make the system more efficient and faster.

CPU scheduling is a key part of how an operating system works: it decides which task (or process) the CPU should work on at any given time. This is important because a CPU can handle only one task at a time, while there are usually many tasks that need to be processed.
Terminologies Used in CPU Scheduling

• Arrival Time: The time at which the process arrives in the ready queue.
• Completion Time: The time at which the process completes its execution.
• Burst Time: The time required by a process for CPU execution.
• Turn Around Time: The difference between completion time and arrival time.
  Turn Around Time = Completion Time – Arrival Time
• Waiting Time (W.T.): The difference between turn around time and burst time.
  Waiting Time = Turn Around Time – Burst Time
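For example (values chosen purely for illustration): if a process arrives at time 2, needs a CPU burst of 4, and completes at time 9, then Turn Around Time = 9 – 2 = 7 and Waiting Time = 7 – 4 = 3.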


CPU Scheduling Algorithm Design Criteria

CPU Utilization: the main purpose of any CPU scheduling algorithm is to keep the CPU as busy as possible.

Throughput: the number of processes completed per unit of time; it should be maximized.

Turn Around Time: the time from a process's arrival to its completion (Turn Around Time = Completion Time – Arrival Time); it should be minimized.

Waiting Time: the scheduling algorithm does not affect the time a process needs to execute once it has started running; it only affects the time the process spends waiting in the ready queue, which should be minimized.

Response Time: in an interactive system, turnaround time is not the best criterion. A process may produce some output early and continue computing new results while the previous results are delivered to the user. A better measure is therefore the time from the submission of a request until the first response is produced; it should be minimized.
Types of CPU Scheduling Algorithms

Non-Preemptive Scheduling: used when a process terminates, or when a process switches from the running state to the waiting state.

Preemptive Scheduling: used when a process switches from the running state to the ready state, or from the waiting state to the ready state.
First come first serve (FCFS)

Advantages of FCFS
• Easy to implement
• Processes are served strictly in the order they arrive

Disadvantages of FCFS
• FCFS suffers from the convoy effect.
• The average waiting time is often much higher than with other algorithms.
• Although FCFS is very simple and easy to implement, it is not very efficient.
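As a minimal illustration of FCFS (the process names, arrival times, and burst times below are invented for the example), this self-contained C sketch runs processes strictly in arrival order and computes completion, turnaround, and waiting times using the formulas defined earlier:

#include <stdio.h>

/* Minimal FCFS sketch: processes are listed in arrival order and each
 * runs to completion before the next one starts. */
struct proc { const char *name; int arrival; int burst; };

int main(void) {
    struct proc p[] = { {"P1", 0, 5}, {"P2", 1, 3}, {"P3", 2, 8} };  /* example data */
    int n = sizeof p / sizeof p[0];
    int time = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        if (time < p[i].arrival)
            time = p[i].arrival;                      /* CPU idles until arrival */
        int completion = time + p[i].burst;
        int turnaround = completion - p[i].arrival;   /* TAT = CT - AT */
        int waiting    = turnaround - p[i].burst;     /* WT  = TAT - BT */
        printf("%s: completion=%d turnaround=%d waiting=%d\n",
               p[i].name, completion, turnaround, waiting);
        total_wait += waiting;
        time = completion;
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}

With values like these, one long burst at the front makes every later process wait, which is exactly the convoy effect mentioned above.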
Shortest job first (SJF)

• Advantages of SJF
  • SJF reduces the average waiting time, so it performs better than the first come first serve scheduling algorithm.
  • SJF is generally used for long-term scheduling.
• Disadvantages of SJF
  • SJF can lead to starvation of long processes.
  • It is often difficult to predict the length of the next CPU burst.
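A rough sketch of non-preemptive SJF in C (burst times invented; all processes are assumed to arrive at time 0 to keep the example short): the scheduler simply sorts the ready processes by burst time and runs the shortest first.

#include <stdio.h>
#include <stdlib.h>

/* Non-preemptive SJF sketch: with all processes ready at time 0,
 * picking the shortest job first is just a sort by burst time. */
struct proc { const char *name; int burst; };

static int by_burst(const void *a, const void *b) {
    return ((const struct proc *)a)->burst - ((const struct proc *)b)->burst;
}

int main(void) {
    struct proc p[] = { {"P1", 6}, {"P2", 8}, {"P3", 3}, {"P4", 4} };  /* example data */
    int n = sizeof p / sizeof p[0];

    qsort(p, n, sizeof p[0], by_burst);   /* shortest burst runs first */

    int time = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        printf("%s waits %d, then runs for %d\n", p[i].name, time, p[i].burst);
        total_wait += time;               /* waiting time = start time when all arrive at 0 */
        time += p[i].burst;
    }
    printf("average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}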
Round robin

• It is simple, easy to use, and starvation-free, since all processes get a balanced share of the CPU.
• It is one of the most widely used methods in CPU scheduling.
• Time sharing: it is preemptive, since each process is given the CPU for only a limited time slice.
• Round robin is fair in the sense that every process gets an equal share of the CPU.
• A newly created process is added to the end of the ready queue.
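A minimal round-robin simulation sketch in C (the quantum of 2 and the burst times are assumptions made for this example): each unfinished process gets at most one quantum per pass over the ready set, so the CPU is shared in turns.

#include <stdio.h>

#define QUANTUM 2   /* assumed time quantum for the example */

/* Round-robin sketch: all processes are assumed ready at time 0, and a
 * simple circular scan over the array stands in for the ready queue. */
int main(void) {
    const char *name[] = { "P1", "P2", "P3" };
    int remaining[]    = { 5, 3, 4 };     /* example burst times */
    int n = 3, done = 0, time = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;                 /* this process already finished */
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            printf("t=%2d: %s runs for %d\n", time, name[i], slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("t=%2d: %s completes\n", time, name[i]);
                done++;
            }
        }
    }
    return 0;
}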
Pre-emptive Priority scheduling

• Schedules tasks based on priority.
• When a higher-priority process arrives while a lower-priority process is executing, the lower-priority process is preempted; it resumes only after the higher-priority process has completed.
• The lower the number assigned to a process, the higher its priority level.
• Starvation problem: a low-priority process may have to wait a very long time before it is scheduled on the CPU.
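A tick-by-tick sketch of preemptive priority scheduling in C (arrival times, burst times, and priority numbers are invented for the example; a lower number means higher priority, as stated above): at every time unit the scheduler picks the highest-priority process that has arrived and is not yet finished, so a newly arrived high-priority process preempts the running one.

#include <stdio.h>

/* Preemptive priority sketch: at each tick, run the arrived, unfinished
 * process with the smallest priority number. */
struct proc { const char *name; int arrival; int remaining; int prio; };

int main(void) {
    struct proc p[] = {
        { "P1", 0, 4, 3 },   /* lower priority, arrives first       */
        { "P2", 1, 2, 1 },   /* higher priority, preempts P1 at t=1 */
        { "P3", 2, 3, 2 },
    };
    int n = sizeof p / sizeof p[0], finished = 0, time = 0;

    while (finished < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)
            if (p[i].arrival <= time && p[i].remaining > 0 &&
                (pick < 0 || p[i].prio < p[pick].prio))
                pick = i;
        if (pick < 0) { time++; continue; }   /* no ready process: CPU idles */
        printf("t=%2d: %s runs\n", time, p[pick].name);
        if (--p[pick].remaining == 0)
            finished++;
        time++;
    }
    return 0;
}

If high-priority processes kept arriving, P1 would keep being pushed back, which is the starvation problem described above.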
Multiple Queue Scheduling

Processes in the ready queue can be divided into different classes, where each class has its own scheduling needs.

For example, a common division:
• foreground (interactive) processes
• background (batch) processes

Disadvantages:
• Starvation problem
• It is inflexible in nature (a process cannot move between queues)
Multilevel Feedback Queue Scheduling

Like Multilevel Queue Scheduling, but here a process can move between the queues. This makes it much more flexible and efficient than multilevel queue scheduling.

A multilevel feedback queue scheduler is defined by the following parameters:
• The number of queues
• The scheduling algorithm for each queue
• The method used to determine when to upgrade a process to a higher-priority queue
• The method used to determine when to demote a process to a lower-priority queue
• The method used to determine which queue a process will enter when it needs service
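The parameter list above can be pictured as a configuration record. The following C sketch only shows one possible configuration (the queue count, quanta, policies, and promotion/demotion rules are all invented for illustration), not a working scheduler:

#include <stdio.h>

#define NUM_QUEUES 3   /* assumed number of queues for the example */

/* Illustrative configuration mirroring the MLFQ parameters listed above. */
struct mlfq_config {
    int num_queues;                  /* the number of queues                          */
    const char *policy[NUM_QUEUES];  /* scheduling algorithm for each queue           */
    int quantum[NUM_QUEUES];         /* time quantum used by each queue               */
    int promote_after_wait;          /* when to upgrade to a higher-priority queue    */
    int demote_after_quanta;         /* when to demote to a lower-priority queue      */
    int entry_queue;                 /* queue a process enters when it needs service  */
};

int main(void) {
    struct mlfq_config cfg = {
        .num_queues          = NUM_QUEUES,
        .policy              = { "RR", "RR", "FCFS" },
        .quantum             = { 8, 16, 0 },   /* 0: run to completion in the last queue */
        .promote_after_wait  = 100,            /* e.g. aging after waiting 100 ticks     */
        .demote_after_quanta = 1,              /* demote after using one full quantum    */
        .entry_queue         = 0,              /* new processes start in the top queue   */
    };
    for (int i = 0; i < cfg.num_queues; i++)
        printf("queue %d: %s, quantum %d\n", i, cfg.policy[i], cfg.quantum[i]);
    return 0;
}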
Something important
On a time-sharing system, the kernel allocates CPU to a process for a period of
time called the time slice or time quantum.

After the time quantum expires, it preempts the process and schedules another
one.

Every process has a priority associated with it.

Priority is also a parameter in deciding which process to schedule next.

The kernel recalculates the priority of the running process when it returns to user mode from kernel mode, and it periodically readjusts the priority of every "ready-to-run" process in user mode.
Process Scheduling in Unix

Round robin with multilevel feedback
/*
 * Algorithm: schedule_process
 * Input:  none
 * Output: none
 */
{
    while (no process picked to execute)
    {
        for (every process on run queue)
            pick highest priority process that is loaded in memory;
        if (no process eligible to execute)
            idle the machine;    /* an interrupt takes the machine out of the idle state */
    }
    remove chosen process from run queue;
    switch context to that of chosen process, resume its execution;
}
Example of process scheduling

• Each process table entry contains a priority field.
• The priority is a function of recent CPU usage: the priority is lower if a process has recently used the CPU.
• The range of priorities can be partitioned into two classes: user priorities and kernel priorities.
The kernel calculates process priorities in these process states:
• It assigns a priority to a process about to go to sleep. This priority depends solely on the reason for the sleep. Processes that sleep in lower-level algorithms tend to cause more system bottlenecks the longer they are inactive; hence they receive a higher priority than processes that would cause fewer bottlenecks. For instance, a process sleeping while waiting for the completion of disk I/O has a higher priority than a process waiting for a free buffer, because the first process already holds a buffer and, once its I/O completes, it is likely to release the buffer and other resources, making more resources available to the system.
• The kernel adjusts the priority of a process that returns from kernel mode to user mode. The priority must be lowered to a user-level priority. The kernel penalizes the executing process in fairness to other processes, since it has just used valuable kernel resources.
• The clock handler adjusts the priorities of all processes in user mode at 1-second intervals (on System V) and causes the kernel to go through the scheduling algorithm, preventing any process from monopolizing the CPU.
Example
• CPU = decay(CPU) = CPU / 2
• Priority = (CPU / 2) + base priority  (consider base priority = 60)
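As a quick worked example with these formulas (the usage count of 60 is chosen only for illustration): if a process has accumulated a CPU usage count of 60 when the clock handler runs, its count decays to 60 / 2 = 30 and its new priority is 30 / 2 + 60 = 75. If it gets no CPU during the next second, the count decays again to 15 and the priority becomes 15 / 2 + 60 = 67 (integer arithmetic), so a process that waits gradually regains a more favourable priority value.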
Fair share scheduler

• In fair-share scheduling, there are groups of processes, and CPU time is allocated equally to all the groups, even if the number of processes in each group is different.
• priority = (CPU usage / 2) + (Group CPU usage / 2) + base priority
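A quick worked example of the fair-share formula (the usage values are invented, base priority = 60): consider a process with an individual CPU usage of 30 whose group has a total CPU usage of 30, and an identical process whose group has been idle (group usage 0). The first gets priority 30/2 + 30/2 + 60 = 90, the second 30/2 + 0/2 + 60 = 75. Since a lower value is more favourable in Unix, the process in the less active group is scheduled sooner, which is the fair-share effect.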
