SDC 2.2 2.3 Scheduling Algorithms

Scheduling Algorithms

Some material © Silberschatz, Galvin, and Gagne, 2002


Basic Concepts
Why?
• Maximum CPU utilization is obtained with multiprogramming:
– The CPU is switched among processes to make the OS more productive.
– Several processes are kept in memory at one time.
– When one process has to wait, the OS takes the CPU away from that process
and gives it to another process.

• Scheduling is a fundamental OS function.


CPU–I/O Burst Cycle
• Process execution consists of a cycle of
CPU execution and I/O wait.
• Processes alternate between these two
states:
– process execution begins with a CPU
burst,
– followed by an I/O burst, and so on;
– the last CPU burst ends with a system
request to terminate execution.

• The CPU burst distribution is the main
concern.
Scheduling Criteria
• CPU utilization –
– keep the CPU as busy as possible;
– may vary from 0 to 100%.
– In a real system, it should range from 40% (lightly loaded) to
90% (heavily loaded).

• Throughput –
– Number of processes that complete their execution per time unit

• Turnaround time –
– amount of time to execute a particular process
– The interval from the time of submission of a process to the
time of completion.
Scheduling Criteria
• Waiting time –
– amount of time a process has been waiting in the ready
queue (the sum of the periods spent waiting in the ready queue)

• Response time –
– amount of time from when a request was submitted
until the first response is produced, not the final output (relevant
for time-sharing environments)
Scheduling Algorithm Optimization Criteria

• Max CPU utilization


• Max throughput
• Min turnaround time
• Min waiting time
• Min response time
CPU Scheduler
• Short-term scheduler selects from among the processes in
ready queue, and allocates the CPU to one of them
– Queue may be ordered in various ways
CPU Scheduler
• CPU scheduling decisions may take place when a process:

1) Switches from running to waiting state


E.g., an I/O request, or invocation of wait() for the termination of
one of the child processes

2) Switches from running to ready state


E.g., when an interrupt occurs

3) Switches from waiting to ready


E.g., completion of I/O

4) Terminates
CPU Scheduler
• Scheduling under 1 and 4 is nonpreemptive

• All other scheduling is preemptive


– Consider access to shared data
– Consider preemption while in kernel mode
– Consider interrupts occurring during crucial OS activities
Non-Preemptive Scheduling
• Under non-preemptive scheduling,

– once the CPU has been allocated to a process, the process keeps
the CPU until it releases the CPU either by terminating or by
switching to the waiting state.

• This scheduling method was used by Microsoft Windows 3.x.


Preemptive Scheduling
– Windows 95 introduced preemptive scheduling.

– All subsequent versions of Windows operating systems have used
preemptive scheduling.
Dispatcher
• Dispatcher module gives control of the CPU to the process
selected by the short-term scheduler; this involves:
– switching context
– switching to user mode
– jumping to the proper location in the user program to restart that
program

• Dispatch latency – the time it takes for the dispatcher to stop one
process and start another running.
Non Pre-emptive Algorithms
First- Come, First-Served (FCFS) Scheduling
• Simplest CPU scheduling Algorithm

• Process that requests CPU first, is allocated CPU first

• Implemented using FIFO Queue


• When a process enters the ready queue, its PCB is linked onto the
tail of the queue.
• When CPU is free, it is allocated to the process at the head
of the queue.

• The average waiting time for FCFS is often quite long.


First- Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
• Suppose that the processes arrive in the order: P1 , P2 ,
P3
The Gantt Chart for the schedule is:
P1 P2 P3
0 24 27 30

• Waiting time for P1 = 0; P2 = 24; P3 = 27

• Average waiting time: (0 + 24 + 27)/3 = 17


FCFS Scheduling (Cont.)
• FCFS is Non-preemptive

• Once the CPU is allocated to a process, that process keeps
the CPU until it releases it, either by terminating or by
requesting I/O.

• Not good for time-sharing systems


FCFS Scheduling (Cont.)
Process Burst Time
P1 24
P2 3
P3 3

Suppose that the processes arrive in the order: P2, P3, P1
• The Gantt chart for the schedule is:

P2 P3 P1
0 3 6 30
• Waiting time for P1 = 6; P2 = 0; P3 = 3


• Average waiting time: (6 + 0 + 3)/3 = 3
• Much better than previous case
• Convoy effect – short processes stuck behind a long process
Shortest-Job-First (SJF) Scheduling
• Associate with each process the length of its next CPU burst
– Use these lengths to schedule the process with the shortest
time

• SJF is optimal –
– gives minimum average waiting time for a given set of processes
Shortest-Job-First (SJF) Scheduling
• Two schemes:
– Non-preemptive
– preemptive – Also known as the
Shortest-Remaining-Time-First (SRTF).

• Non-preemptive SJF is optimal


– if all the processes are ready simultaneously– gives minimum
average waiting time for a given set of processes.

• SRTF is optimal
– if the processes may arrive at different times
Example of Non-preemptive SJF
Process Arrival Time Burst Time
P1 0.0 6
P2 2.0 8
P3 4.0 7
P4 5.0 3

• SJF scheduling chart

P4 P1 P3 P2
0 3 9 16 24

• Average waiting time = (3 + 16 + 9 + 0) / 4 = 7


Example for Non-Preemptive SJF with
different arrival times
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (non-preemptive)

P1

0 7

• At time 0, P1 is the only process, so it gets the CPU


and runs to completion
Example for Non-Preemptive SJF
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• Once P1 has completed the queue now holds P2, P3 and P4

P1 P3 P2 P4

0 7 8 12 16
• P3 gets the CPU first since it is the shortest. P2 then P4 get the CPU in turn
(based on arrival time)

• Avg waiting time = (0 + 6 + 3 + 7)/4 = 4


Round Robin (RR)
What ?
• Designed for time sharing systems

• Similar to FCFS but preemption is added

• Time Quantum / Time Slice – a small unit of time is defined, usually
10-100 milliseconds
Round Robin (RR)

What ?
• Ready queue is treated as Circular Queue

• The CPU scheduler goes around the ready queue, allocating the
CPU to each process for a time interval of up to 1 time
quantum.
Round Robin (RR)
How?
• Ready Queue is kept as FIFO queue of processes.

• New processes added to the tail of the ready queue.

• The CPU scheduler picks the first process from the ready
queue, sets a timer to interrupt after 1 time quantum and
dispatches the process.
Round Robin (RR)
How?(contd.)
• Two scenarios:
– The process's CPU burst < 1 TQ:
• the process will itself release the CPU
voluntarily.
– CPU burst > 1 TQ:
• the timer will go off,
• causing an interrupt to the OS;
• a context switch will be done, and the process will be
put at the tail of the ready queue.
Example of RR with Time Quantum = 4
Process Burst Time
P1 24
P2 3
P3 3
• The Gantt chart is:

P1 P2 P3 P1 P1 P1 P1 P1
0 4 7 10 14 18 22 26 30

• Typically, higher average turnaround than SJF, but better response.
Round Robin (RR)
• RR is preemptive
• If there are n processes in the ready queue and the
time quantum is q,
– then each process gets 1/n of the CPU time in
chunks of at most q time units at once.
– No process waits more than (n-1)q time units.
Round Robin (RR)
• Performance
– TQ large ⇒ RR is the same as FIFO

– TQ small ⇒ RR is called processor sharing:
it appears to the user as if each of n processes has its
own processor running at 1/n the speed of the real
processor
– q must be large with respect to the context-switch time,
otherwise the overhead is too high
Time Quantum and Context Switch Time

• The TQ should be large with respect to the context-switch
time.
• TQ is usually 10 ms to 100 ms; a context switch takes < 10
μsec.
• Otherwise, much time will be wasted in context switching.
Turnaround Time Varies With The Time Quantum

• Typically, higher average turnaround than SJF, but better
response.

• The average turnaround time of a set of processes does not
necessarily improve as the time quantum increases.
– The ATT can be improved if most processes finish their next CPU burst
in a single TQ.

• Thumb rule: “80% of CPU bursts should be shorter than Tq”


Pre-emptive Algorithms
Shortest Remaining Time First (SRTF)
• In the Shortest Remaining Time First (SRTF) scheduling
algorithm,
– The process with the smallest amount of time remaining until
completion is selected to execute.
P1 waiting time: 0 + 4-2 = 2
P2 waiting time: 0
The average waiting time(AWT): (0 + 2) / 2 = 1
Example for Preemptive SJF (SRTF)

Process Arrival Time Burst Time


P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

• Time 0 – P1 gets the CPU Ready = [(P1,7)]


• Time 2 – P2 arrives – CPU has P1 with rem. time=5, Ready =
[(P2,4)] – P2 gets the CPU
• Time 4 – P3 arrives – CPU has P2 with rem. time = 2, Ready =
[(P1,5),(P3,1)] – P3 gets the CPU
P1 P2 P3

0 2 4 5
Example for Preemptive SJF (SRTF)
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4

• Time 5 – P3 completes and P4 arrives - Ready =


[(P1,5),(P2,2),(P4,4)] – P2 gets the CPU
• Time 7 – P2 completes – Ready = [(P1,5),(P4,4)] – P4 gets the
CPU
• Time 11 – P4 completes, P1 gets the CPU
P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16
Example for Preemptive SJF (SRTF)
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
• SJF (preemptive)

P1 P2 P3 P2 P4 P1

0 2 4 5 7 11 16
• Waiting time for each process:
• P1 = (0 + (11−2)) = 9, P2 = ((2−2) + (5−4)) = 1, P3 = (4−4) = 0, P4 = (7−5) = 2
• Average waiting time = (9 + 1 + 0 + 2)/4 = 3
Example of Shortest-remaining-time-first
• Now we add the concepts of varying arrival times and preemption to
the analysis
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5

Try solving this…
Example of Shortest-remaining-time-first
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
• Time 0 – P1 gets the CPU Ready = [(P1,8)]
• Time 1 – P2 arrives – CPU has P1 with rem. time=7, Ready = [(P2,4)] – P2 gets
the CPU
• Time 2 – P3 arrives – CPU has P2 with rem. time = 3, Ready = [(P1,7)(P3,9)]
– P2 continues with the CPU
• Time 3- P4 arrives- CPU has P2 with rem. time=2, Ready =[(P1,7)(P3,9)(P4,5)]
– P2 continues with the CPU
• After P2 finishes, then P4 executes , then P1 , finally P3
• Preemptive SJF Gantt chart:

P1 P2 P4 P1 P3
0 1 5 10 17 26

• Average waiting time = [(10−1) + (1−1) + (17−2) + (5−3)]/4 = 26/4 = 6.5 msec


Estimating the Length of Next CPU Burst
• Problem with SJF: it is very difficult to know exactly the length of
the next CPU burst.
• Idea: based on observations in the recent past, we can try to
predict it.

Exponential averaging:
• t(n) = length of the nth CPU burst
• τ(n) = the prediction for the nth burst (built from all past bursts)
• using a weighting factor 0 ≤ α ≤ 1,
• the prediction for the next CPU burst is: τ(n+1) = α·t(n) + (1 − α)·τ(n)
Estimating the Length of Next CPU Burst
• This formula defines an exponential average of the measured
lengths of previous CPU bursts.
• t(n) = length of the nth CPU burst; it contains the most recent information.
• τ(n) = the stored past history (the prediction built from all past bursts).
• α controls the relative weight of recent and past history in our
prediction.
• We expect that the next CPU burst will be similar in length to
the previous ones.
• τ(n+1) = α·t(n) + (1 − α)·τ(n)
Examples of Exponential Averaging
• α = 0
– τ(n+1) = τ(n)
– Recent history does not count
• α = 1
– τ(n+1) = t(n)
– Only the most recent CPU burst counts
– history is assumed to be old and irrelevant.

τ(n+1) = α·t(n) + (1 − α)·τ(n)
Estimating the Length of Next CPU Burst
• More commonly α = 1/2, so recent history and past history are
equally weighted.
• The initial τ(0) can be defined as a constant or as an overall
system average.

• τ(n+1) = α·t(n) + (1 − α)·τ(n)


Examples of Exponential Averaging
• To understand the behavior of the exponential
average,
• we can expand the formula for τ(n+1) by repeatedly substituting for
τ(n):
τ(n+1) = α·t(n) + (1 − α)·α·t(n−1) + …
+ (1 − α)^j·α·t(n−j) + …
+ (1 − α)^(n+1)·τ(0)

• Since both α and (1 − α) are less than or equal to 1,
each successive term has less weight than its
predecessor.
Estimating the Length of Next CPU Burst
• The figure shows the exponential average with α = 1/2 and τ(0) = 10.
Priority Scheduling
• A priority number (integer) is associated with each process

• The CPU is allocated to the process with the highest priority
(smallest integer ⇒ highest priority)
Priority Scheduling

• SJF is a special case of priority scheduling:
– process priority = the inverse of the predicted next CPU burst
– The larger the CPU burst, the lower the priority, and
vice versa.

• Equal-priority processes are scheduled in FCFS order
– FCFS can be used to break ties.


Priority Scheduling

• We discuss priorities in terms of high and low.

• Priorities are a fixed range of numbers, such as 0 to
7 or 0 to 4,095.
– There is no general agreement on whether 0 is the highest or lowest.
– Some systems use low numbers to represent low
priority; others use low numbers for high priority.
– Here, we use low numbers for high priority.
Priority Scheduling
• Priorities can be defined:
– Internally
– externally

• Internally defined –
– use some measurable quantities;
– e.g., time limits, memory requirements, the number of open
files, and the ratio of average I/O burst to average CPU burst have
been used.

• Externally defined –
– set by criteria that are external to the OS:
– importance of the process,
– type and amount of funds being paid for computer use,
– the department sponsoring the work,
– often political factors.
Priority Scheduling
• Priority scheduling can be:
• pre-emptive
• non-pre-emptive

• When a process arrives at the ready queue,

• its priority is compared with the priority of the
currently running process.
Priority Scheduling
Scenario:
• The priority of the newly arrived process is higher
than the priority of the currently running process.

Characteristics:
• A preemptive priority scheduling algorithm will
• preempt the CPU.

• A non-preemptive priority scheduling algorithm will

• simply put the new process at the head of the ready
queue.
Priority Scheduling

• Problem ⇒ Starvation / indefinite blocking

• Solution ⇒ Aging
Priority Scheduling
• Problem ⇒ Starvation / indefinite blocking

• Low-priority processes may never execute
– leaving some low-priority processes waiting indefinitely for the
CPU.

• Solution ⇒ Aging – as time progresses, increase the priority
of processes that have waited in the system for a long time.

• E.g., if priorities range from 127 (low) to 0 (high),
decrement the priority of a waiting process by 1 every 15
minutes.
Example of Non Pre-emptive Priority Scheduling
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2
• Priority scheduling Gantt chart:

P2 P5 P1 P3 P4
0 1 6 16 18 19

Waiting Time of Processes-


P1=6
P2=0
P3=16
P4=18
P5=1
• Average waiting time = =(6+0+16+18+1)/5=41/5=8.2 msec
Preemptive Priority Scheduling

Process Id Priority Arrival Time Burst Time


1 2 0 1
2 6 1 7
3 3 2 3
4 5 3 6
5 4 4 5
6 10 5 15
7 9 15 8
Preemptive Priority Scheduling
At time=0, P1 arrives with a burst time of 1 unit and priority 2.
Since no other process is available, it will be scheduled till the next job
arrives or till its completion (whichever is earlier).
Ready Queue = {(P1, BT=1, PR=2)}

At time=1, P2 arrives.
P1 terminates.
No other process is available at this time, hence the operating system has to
schedule P2 regardless of the priority assigned to it.
Ready Queue = {(P2, BT=7, PR=6)}
Preemptive Priority Scheduling
At time=2, P3 arrives; the priority of P3 is higher than that of P2.
P2 (remaining time=6, PR=6) is preempted; P3 is selected.
Ready Queue = {(P2, Rem BT=6, PR=6)}
Preemptive Priority Scheduling
During the execution of P3, three more processes P4, P5 and P6 become
available.
Ready Queue = {(P2, Rem BT=6, PR=6), (P4, BT=6, PR=5), (P5, BT=5, PR=4),
(P6, BT=15, PR=10)}
Since all these three have priority lower than P3,
there is no pre-emption; P3 continues.
P3 terminates.
P5 will be scheduled next, with the highest priority among the available processes.
Preemptive Priority Scheduling
During the execution of P5, all the processes have become
available in the ready queue. From this point on, the
algorithm behaves as non-preemptive
priority scheduling.

Ready Queue = {(P2, Rem BT=6, PR=6), (P4, BT=6, PR=5),
(P6, BT=15, PR=10)}

P5 terminates.
P4, having the highest priority (lowest number) in the ready
queue, is selected and executed till completion.
Ready Queue = {(P2, Rem BT=6, PR=6),
(P6, BT=15, PR=10)}

P4 terminates.
Preemptive Priority Scheduling
Ready Queue = {(P2, Rem BT=6, PR=6),
(P6, BT=15, PR=10)}

At time=15, P7 arrives; its priority (9) is lower than that of the
running process, so there is no pre-emption.
Ready Queue = {(P2, Rem BT=6, PR=6), (P6, BT=15, PR=10), (P7, BT=8, PR=9)}
P2 is selected next, as it has the highest priority, and executes till completion.
P2 terminates.
Preemptive Priority Scheduling
Ready Queue = {(P6, BT=15, PR=10), (P7, BT=8, PR=9)}
P7 will be selected, as it has the higher priority of the two.
The only remaining process is P6; having the lowest priority, it is executed last.
Preemptive Priority Scheduling
Process Id Priority Arrival Time Burst Time Completion Time Turnaround Time Waiting Time
1 2 0 1 1 1 0
2 6 1 7 22 21 14
3 3 2 3 5 3 0
4 5 3 6 16 13 7
5 4 4 5 10 6 1
6 10 5 15 45 40 25
7 9 15 8 30 15 7

Turnaround Time = Completion Time − Arrival Time

Waiting Time = Turnaround Time − Burst Time, or the sum of times spent in the ready queue

Avg Waiting Time = (0+14+0+7+1+25+7)/7 = 54/7 ≈ 7.71 units

https://www.javatpoint.com/os-preemptive-priority-scheduling
Multilevel Queue
What?
• Processes are classified into different groups.
• Common division is
– foreground (interactive)
– background (batch)

Why?
• These Processes
– have different response time requirements ,
– so have different scheduling needs.
– Foreground process have higher priority over background
processes
Multilevel Queue
How?
• The Multi Level Queue Scheduling Algorithm (MQSA)
partitions the ready queue into several separate queues.

• Each process is permanently assigned to one queue, based
on some property such as:
– process priority
– process type
– memory size
Multilevel Queue
• Each queue has its own scheduling
algorithm:
– foreground – RR
– background – FCFS

• Scheduling must also be done between the
queues:
– fixed-priority scheduling
– time slicing
Multilevel Queue
• Fixed-priority scheduling:
– the foreground queue has absolute priority over
the background queue;
– serve all processes from foreground, then from
background;
– possibility of starvation.
Multilevel Queue
• Time slicing between the queues:
– each queue gets a certain amount of CPU
time which it can schedule amongst its
processes;
– e.g., 80% of CPU time to foreground in RR,
– 20% to background in FCFS.
Multilevel Queue Scheduling
Multilevel Queue- Possibility of starvation

• MQSA with 5 queues

• Each queue has absolute
priority over lower-priority
queues.
• No process in the batch queue
could run
– unless the system,
interactive, and
interactive-editing queues
were all empty.
Multilevel Queue- Possibility of starvation

• If an interactive editing
process entered the ready
queue while a batch
process was running,
– the batch process would be
preempted.
Multilevel Queue

• Processes do not move between queues,

– since processes do not change their foreground or
background nature.

• Advantage-
– Low scheduling overhead

• Disadvantage –
– Being Inflexible
Multilevel Feedback Queue

• A process can move between the various
queues.

• The idea is to separate processes with different
CPU-burst characteristics:
– if a process uses too much CPU time,
• it will be moved to a lower-priority queue;
– this scheme leaves I/O-bound and interactive
processes
• in the higher-priority queues.
Multilevel Feedback Queue

Starvation:
• a process that waits too long in a lower-priority
queue.

Solution:
• Aging can be implemented to prevent
starvation:
– such a process may be moved to a higher-priority queue.
Multilevel Feedback Queue

• A multilevel-feedback-queue scheduler is defined by the
following parameters:
– number of queues
– scheduling algorithms for each queue
– method used to determine when to upgrade a process
– method used to determine when to demote a process
– method used to determine which queue a process will
enter when that process needs service
• Most general scheme but most complex
Example of Multilevel Feedback Queue
• Three queues:
– Q0 – RR with time
quantum 8 milliseconds
– Q1 – RR time quantum 16
milliseconds
– Q2 – FCFS
Example of Multilevel Feedback Queue
• The scheduler first executes all
processes in Queue 0.
– Only when Queue 0 is
empty
• will it execute processes in
Queue 1.
– Queue 2 will be executed
• only when Queue 0 and
Queue 1 are empty.
Example of Multilevel Feedback Queue
• A process that arrives
for Q1
– will preempt a process in
Q2
• A process that arrives
for Q0
– will preempt a process in
Q1
Example of Multilevel Feedback Queue
• Scheduling
– A new job enters queue Q0
which is served RR
• When it gains CPU, job
receives 8 milliseconds
• If it does not finish in 8
milliseconds, job is
moved to tail of queue
Q1
Example of Multilevel Feedback Queue
• Scheduling
– If Q0 is empty, the process at
the head of Q1 is given a time
quantum of 16 milliseconds.
• At Q1 the job receives 16
additional milliseconds.
• If it still does not
complete, it is preempted
and moved to queue Q2.
Example of Multilevel Feedback Queue
• Scheduling
– Processes with a CPU burst of
8 ms or less are given high
priority.
– Processes with a CPU burst > 8
but < 24 ms are also served
quickly, with less priority.
– Long processes automatically
sink to Queue 2.
Linux Scheduling
Linux Scheduling Prior to kernel Version 2.5

• Prior to kernel version 2.5, Linux ran a variation of the traditional
UNIX scheduling algorithm.

• Two problems with the traditional UNIX scheduler:

– it does not provide adequate support for SMP
systems;
– it does not scale well as the number of tasks on the
system grows.
Linux Scheduling Through Version 2.5

• Version 2.5 moved to constant-order O(1) scheduling
time,
• regardless of the number of tasks on the system.
• The new scheduler also provides
– increased support for SMP,
– including processor affinity and load balancing,
– as well as fairness and support for
interactive tasks.
Linux Scheduling Through Version 2.5
• The Linux scheduler is
– a preemptive,
– priority-based algorithm with two separate priority ranges:
– a real-time range from 0 to 99 and
– a nice value ranging from 100 to 140.

Linux Scheduling Through Version 2.5
• These two ranges map into a global priority scheme wherein
– numerically lower values indicate higher priorities.

Linux Scheduling Through Version 2.5

• Unlike schedulers for many other systems, including
– Solaris and
– Windows XP,
• Linux assigns
– higher-priority tasks longer time quanta and
– lower-priority tasks shorter time quanta.
Linux Scheduling Through Version 2.5

• A runnable task is considered eligible for execution on
the CPU
– as long as it has time remaining in its time slice.

• When a task has exhausted its time slice,
– it is considered expired and
– is not eligible for execution again until all other tasks
have also exhausted their time quanta.

• The kernel maintains a list of all runnable tasks
– in a runqueue data structure.
Linux Scheduling Through Version 2.5

• Because of its support for SMP,
– each processor maintains
• its own runqueue and
• schedules itself independently.
– Each runqueue contains two priority arrays:
• active
• expired
Linux Scheduling Through Version 2.5

• The active array


– contains all tasks with time remaining in their time
slices,
• The expired array
– contains all expired tasks.
Linux Scheduling Through Version 2.5

• Each of these priority arrays contains


– a list of tasks indexed according to priority
• The scheduler chooses the task
– with the highest priority from the active array
– for execution on the CPU.
Linux Scheduling Through Version 2.5
• On multiprocessor machines,
– each processor is scheduling the highest-priority task
– from its own run queue structure.

• When all tasks have exhausted their time slices (that is, the
active array is empty),
– the two priority arrays are exchanged;
– the expired array becomes the active array, and vice
versa.
Linux Scheduling (Cont.)
• Real-time scheduling according to POSIX.1b
– Real-time tasks are assigned static priorities
• Other tasks have
– dynamic priorities that are based on their nice values
plus or minus the value 5.
– The interactivity of a task determines whether
• the value 5 will be added to or subtracted from the
nice value.
Linux Scheduling (Cont.)
• A task's interactivity is determined by
– how long it has been sleeping while waiting for I/O.
– Tasks that are more interactive typically have longer
sleep times and therefore are more likely to have
adjustments closer to -5,
– As the scheduler favors interactive tasks, The result of
such adjustments will be higher priorities for these tasks.
– Conversely, tasks with shorter sleep times are often
more CPU-bound and thus will have their priorities
lowered.
Linux Scheduling (Cont.)
• Tasks that are more interactive typically have longer
sleep times and therefore are more likely to have
adjustments closer to -5,
• Interactive Processes :
– These interact constantly with their users, and
therefore spend a lot of time waiting for keypresses
and mouse operations.
– When input is received, the process must be woken up
quickly, or the user will find the system to be
unresponsive.
– Typically, the average delay must fall between 50 and
150 ms.
– The variance of such delay must also be bounded, or the
user will find the system to be erratic.
– Typical interactive programs are command shells, text
editors, and graphical applications.
Linux Scheduling in Version 2.6.23 +
Completely Fair Scheduler (CFS)

• It has been the default scheduler since version 2.6.23.

• Elegant handling of I/O-bound and CPU-bound processes.

• Each runnable process has a virtual runtime associated with
it in its PCB (process control block).
Linux Scheduling in Version 2.6.23 +
• Whenever a context switch happens,
– the current running process's virtual runtime is increased by
– virtualruntime_currprocess += T,
where T is the time for which it executed most recently.

• The virtual runtime of a process therefore increases monotonically.

• Initially, every process has some starting virtual runtime.

Linux Scheduling in Version 2.6.23 +
• CFS is a quite simple algorithm for process scheduling.

• It is implemented using red-black trees, not queues.

– All runnable processes in main memory are
inserted into a red-black tree, and
– whenever a new process arrives, it is inserted into the tree.
– Red-black trees are self-balancing binary
search trees.
Linux Scheduling in Version 2.6.23 +
• When a context switch occurs:
– the virtual runtime of the process that was
executing is updated;
– the next process chosen is the one
• with the lowest virtual runtime,
• which is the leftmost node of the red-black tree.
Linux Scheduling in Version 2.6.23 +
– If the current process still has some burst time left, it is
re-inserted into the red-black tree.

– This way each process gets a fair share of execution time:
• after every context switch the virtual runtime of the running process
increases,
• and thus the ordering shuffles.
CFS Performance
Thread Scheduling
Thread Scheduling
• Distinction between user-level and kernel-level threads

• On operating systems that support them,

– it is kernel-level threads, not processes, that are being
scheduled by the operating system.

– User-level threads are managed by a thread library, and
the kernel is unaware of them.
Thread Scheduling
• To run on a CPU, user-level threads
– must ultimately be mapped to an associated kernel-
level thread,
– Although this mapping may be indirect and may use a
lightweight process (LWP).
Thread Scheduling
• LWP –
– Lightweight processes are threads in
user space that act as an
interface for the ULTs to access the
physical CPU resources.

– The thread library schedules which
thread of a process runs on which
LWP, and for how long.

– The number of LWPs created by the
thread library depends on the type
of application.
Thread Scheduling
• When we say the thread library schedules user threads onto
available LWPs,
– we do not mean that the thread is actually running on a
CPU;
– this would require the operating system to schedule the
kernel thread onto a physical CPU.
Process Contention Scope (PCS)
• Many-to-one and many-to-many models,
– thread library schedules user-level threads to run on
LWP
– Known as process-contention scope (PCS) since
scheduling competition is within the process
– Typically done via priority set by programmer

• The contention takes place among threads within a same


process.
System Contention Scope (SCS)
• Kernel thread scheduled onto available CPU is system-
contention scope (SCS)
• i.e. Decide which Kernel thread to execute on the CPU
• Competition for the CPU with SCS scheduling takes place
among all threads in system

• The contention takes place among all threads in the system


PCS Scheduling
• PCS is done according to priority-
– the scheduler selects the runnable thread with the highest
priority to run.

• User-level thread priorities


– are set by the programmer and
– are not adjusted by the thread library,
– Although some thread libraries may allow the
programmer to change the priority of a thread.
PCS Scheduling
PCS will typically
– preempt the thread currently running in favor of a
higher-priority thread;
– However, there is no guarantee of time slicing among
threads of equal priority.
Pthread Scheduling

• API allows specifying either PCS or SCS


during thread creation
– PTHREAD_SCOPE_PROCESS schedules threads
using PCS scheduling
– PTHREAD_SCOPE_SYSTEM schedules threads
using SCS scheduling
Pthread
• POSIX Threads, usually referred to as Pthreads,
• is an execution model that exists independently of any
language,
• as a parallel execution model.
• It allows a program to control multiple different flows of work
that overlap in time.
• Each flow of work is referred to as a thread, and creation and
control over these flows is achieved by making calls to the POSIX
Threads API.
• POSIX Threads is an API defined by the standard POSIX.1c.
Pthread Scheduling
• Systems using the one-to-one model
– such as Windows XP, Solaris, and Linux,
– schedule threads using only SCS.

• Can be limited by the OS – Linux and Mac OS X only allow
PTHREAD_SCOPE_SYSTEM.
Pthread Scheduling
• The Pthread API provides two functions for getting and setting
the contention scope policy:

– pthread_attr_setscope(pthread_attr_t *attr, int scope)

– pthread_attr_getscope(pthread_attr_t *attr, int *scope)


Pthread Scheduling
pthread_attr_setscope(pthread_attr_t *attr, int scope)

• 1st parameter for both functions


– A pointer to the pthread_attr_t structure that defines the attributes to
use when creating new threads

• 2nd parameter :
– The new value for the contention scope attribute
– either PTHREAD_SCOPE_SYSTEM or PTHREAD_SCOPE_PROCESS
– indicating how the contention scope is to be set.

Description:
– Sets the thread contention scope attribute in the thread attribute
object attr to scope.
http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.lib_ref%2Ftopic%2Fp%2Fpthread_attr_setscope.html
Pthread Scheduling
pthread_attr_getscope(pthread_attr_t *attr, int *scope)

• 1st parameter for both functions


– A pointer to the pthread_attr_t structure that defines the attributes to
use when creating new threads

• 2nd parameter :
– A pointer to a location where the function can store the current
contention scope.

Description:
• Gets the thread contention scope attribute from the thread attribute
object attr and returns it in scope.

http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.lib_ref%2Ftopic%2Fp%2Fpthread_attr_setscope.html
Pthread Scheduling API
• The program first determines the existing contention scope and sets
it to PTHREAD_SCOPE_SYSTEM.
• It then creates five separate threads that will run using the SCS
scheduling policy.
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5
int main(int argc, char *argv[]) {
int i, scope;
pthread_t tid[NUM_THREADS];
pthread_attr_t attr;
/* get the default attributes */
pthread_attr_init(&attr);
/* first inquire on the current scope */
if (pthread_attr_getscope(&attr, &scope) != 0)
fprintf(stderr, "Unable to get scheduling scope\n");
else {
if (scope == PTHREAD_SCOPE_PROCESS)
printf("PTHREAD_SCOPE_PROCESS");
else if (scope == PTHREAD_SCOPE_SYSTEM)
printf("PTHREAD_SCOPE_SYSTEM");
else
fprintf(stderr, "Illegal scope value.\n");
}
Pthread Scheduling
pthread_attr_init()
• Initialize a thread-attribute object

Synopsis:
#include <pthread.h>
int pthread_attr_init( pthread_attr_t *attr );

Arguments:
• attr -A pointer to the pthread_attr_t structure that you want
to initialize.

http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.lib_ref%2Ftopic%2Fp%2Fpthread_attr_setscope.html
Pthread Scheduling API
/* set the scheduling algorithm to PCS or SCS */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
/* create the threads */
for (i = 0; i < NUM_THREADS; i++)
pthread_create(&tid[i],&attr,runner,NULL);
/* now join on each thread */
for (i = 0; i < NUM_THREADS; i++)
pthread_join(tid[i], NULL);
}
/* Each thread will begin control in this function */
void *runner(void *param)
{
/* do some work ... */
pthread_exit(0);
}
Pthread Scheduling
pthread_create()
• Create a thread

Synopsis:
#include <pthread.h>
int pthread_create( pthread_t* thread, const pthread_attr_t*
attr, void* (*start_routine)(void* ), void* arg );

http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.lib_ref%2Ftopic%2Fp%2Fpthread_attr_setscope.html
Pthread Scheduling
int pthread_create( pthread_t* thread, const pthread_attr_t*
attr, void* (*start_routine)(void* ), void* arg );
Arguments:
• thread –
– NULL, or a pointer to a pthread_t object where the
function can store the thread ID of the new thread.
• attr -
– A pointer to a pthread_attr_t structure that specifies the
attributes of the new thread.
– Instead of manipulating the members of this structure
directly, use pthread_attr_init() and the
pthread_attr_set_* functions.
– If you modify the attributes in attr after creating the
thread, the thread's attributes aren't affected.
Pthread Scheduling
int pthread_create( pthread_t* thread, const pthread_attr_t*
attr, void* (*start_routine)(void* ), void* arg );
Arguments:
• start_routine-
– The routine where the thread begins, with arg as its only argument. If
start_routine() returns,
– There's an implicit call to pthread_exit(), using the return value of
start_routine() as the exit status.
– The thread in which main() was invoked behaves differently. When it
returns from main(), there's an implicit call to exit(), using the return
value of main() as the exit status.
• arg The argument to pass to start_routine.

http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.lib_ref%2Ftopic%2Fp%2Fpthread_attr_setscope.html
Pthread Scheduling
• int pthread_create( pthread_t* thread, const pthread_attr_t*
attr, void* (*start_routine)(void* ), void* arg );

• pthread_create(&tid[i],&attr,runner,NULL);

http://www.qnx.com/developers/docs/qnxcar2/index.jsp?topic=%2Fcom.qnx.doc.neutrino.lib_ref%2Ftopic%2Fp%2Fpthread_attr_setscope.html
Pthread Scheduling
Joining
• The simplest method of synchronization is to join the threads
as they terminate.

• Joining really means waiting for termination.

• Joining is accomplished by one thread waiting for the


termination of another thread. The waiting thread calls
pthread_join()
Pthread Scheduling
int pthread_join( pthread_t thread, void** value_ptr );
e.g. pthread_join(tid[i], NULL);
Arguments:
• thread –
– The target thread whose termination you're waiting for,
– Pass it the thread ID of the thread that you wish to join

• value_ptr –
– An optional value_ptr, which can be used to store the termination
return value from the joined thread.
– NULL, or a pointer to a location where the function can store the value
passed to pthread_exit() by the target thread.
– You can pass in a NULL if you aren't interested in this value
Pthread Scheduling
• pthread_join() returns an integer value: 0 on success, or one of
the following error codes on failure.

Return Value-
• 0 if the call was successful and this guarantees the given
thread has terminated.
• EDEADLK- a deadlock was detected.
• EINVAL- the given thread is not joinable
• ESRCH- the given thread ID can’t be found.
Pthread Scheduling API
/* set the scheduling algorithm to PCS or SCS */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
/* create the threads */
for (i = 0; i < NUM_THREADS; i++)
pthread_create(&tid[i],&attr,runner,NULL);
/* now join on each thread */
for (i = 0; i < NUM_THREADS; i++)
pthread_join(tid[i], NULL);
}
/* Each thread will begin control in this function */
void *runner(void *param)
{
/* do some work ... */
pthread_exit(0);
}
The calling thread waits for every thread with the
pthread_join call in the loop
Pthread Scheduling API
#include <pthread.h>
#include <stdio.h>
#define NUM_THREADS 5

void *runner(void *param); /* forward declaration */

int main(int argc, char *argv[]) {
int i, scope;
pthread_t tid[NUM_THREADS];
pthread_attr_t attr;
/* get the default attributes */
pthread_attr_init(&attr);
/* first inquire on the current scope */
if (pthread_attr_getscope(&attr, &scope) != 0)
fprintf(stderr, "Unable to get scheduling scope\n");
else {
if (scope == PTHREAD_SCOPE_PROCESS)
printf("PTHREAD_SCOPE_PROCESS");
else if (scope == PTHREAD_SCOPE_SYSTEM)
printf("PTHREAD_SCOPE_SYSTEM");
else
fprintf(stderr, "Illegal scope value.\n");
}
/* set the scheduling algorithm to PCS or SCS */
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
/* create the threads */
for (i = 0; i < NUM_THREADS; i++)
pthread_create(&tid[i], &attr, runner, NULL);
/* now join on each thread */
for (i = 0; i < NUM_THREADS; i++)
pthread_join(tid[i], NULL);
}
/* Each thread will begin control in this function */
void *runner(void *param)
{
/* do some work ... */
pthread_exit(0);
}
Pthread Scheduling
pthread_exit()
• Terminate a thread
Synopsis:
#include <pthread.h>
void pthread_exit( void* value_ptr );
Arguments:
• value_ptr A pointer to a value that you want to be made available to any
thread joining the thread that you're terminating.
Description:
• Terminates the calling thread.
• If the thread is joinable, the value value_ptr is made available to any
thread joining the terminating thread (only one thread can get the return
status).
• If the thread is detached, all system resources allocated to the thread are
immediately reclaimed.
