OS - Module 4 - Process Scheduling

The document discusses process scheduling in operating systems. It covers the scheduler and its objectives of maximizing CPU utilization and performance. It describes different scheduling algorithms like FCFS, SJF, priority, and round robin. It also discusses CPU-bound versus I/O-bound processes and types of scheduling queues. Process scheduling aims to optimize metrics like throughput, turnaround time, waiting time, and response time.

Process Scheduling

Module 4
Operating Systems

Module Outline
• Scheduler
o Scheduling algorithm
o Objectives of Scheduling
o Criteria for scheduling
o CPU-bound vs I/O-bound processes
• Types of Scheduling
• Process scheduling queues
o FCFS
o SJF
o Priority
o Round Robin
o Multilevel feedback queues scheduling
• BSD Unix scheduling
• Multiple processor scheduling

Process Scheduling 2
Operating Systems

Scheduling
• Scheduler
o The scheduler is the part of the operating system that decides which process to run
next when several runnable processes are in the ready queue
o The scheduler is concerned with deciding policy, not with providing a mechanism
o It is important because it can have a large effect on resource utilization and the
overall performance of the system
• Scheduling Algorithm
o In a multiprogramming computer, several processes compete for use of the
processor
o At any time only one process is running while the others wait for the processor to
become free
o The scheduling algorithm is the logic that determines the order in which processes
run when more than one is ready
o The aim of process scheduling is to achieve objectives such as maximum CPU
utilization and improved overall performance

Process Scheduling 3
Operating Systems

Process Scheduling

Process Scheduling 4
Operating Systems

Process Scheduling
• Maximum CPU utilization obtained with
multiprogramming
• CPU–I/O Burst Cycle – Process execution
consists of a cycle of CPU execution and I/O
wait
• CPU burst followed by I/O burst
• CPU burst distribution is of main concern

Process Scheduling 5
Operating Systems

Histogram of CPU-burst Times

Process Scheduling 6
Operating Systems

CPU Scheduler
• Short-term scheduler selects from among the processes in ready queue,
and allocates the CPU to one of them
o Queue may be ordered in various ways
• CPU scheduling decisions may take place when a process:
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready state
4. Terminates
• Scheduling under 1 and 4 is non-preemptive
• All other scheduling is preemptive
o Consider access to shared data
o Consider preemption while in kernel mode
o Consider interrupts occurring during crucial OS activities

Process Scheduling 7
Operating Systems

Dispatcher
• Dispatcher module gives control of the CPU to the process selected by
the short-term scheduler; this involves:
o switching context
o switching to user mode
o jumping to the proper location in the user program to restart that program

• Dispatch latency
o time it takes for the dispatcher to stop one process and start another running

Process Scheduling 8
Operating Systems

Scheduling Performance
• CPU utilization
o keep the CPU as busy as possible
• Throughput
o # of processes that complete their execution per time unit
• Turnaround time
o amount of time to execute a particular process
• Waiting time
o amount of time a process has been waiting in the ready queue
• Response time
o amount of time it takes from when a request was submitted until the first
response is produced, not output (for time-sharing environment)
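
• For the worked examples later in this module, the following standard relations (not stated
explicitly on this slide) connect these metrics for a process i with arrival time A_i, CPU
burst B_i, and completion time C_i:

    \text{turnaround}_i = C_i - A_i, \qquad \text{waiting}_i = \text{turnaround}_i - B_i = C_i - A_i - B_i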
Process Scheduling 9
Operating Systems

Scheduling Algorithm Objectives


• Max CPU utilization
• Max throughput
• Min turnaround time
• Min waiting time
• Min response time

Process Scheduling 10
Operating Systems

Criteria for Scheduling


• Priority
o Priority assigned to that process or job
• Class
o Class of job, i.e. batch or on-line or real-time
• Resource requirements
o E.g. expected run time, memory required, etc
• I/O or CPU bound
o i.e. whether job is I/O bound or CPU bound
• Resources used to date
o i.e. the amount of processor time already consumed
• Waiting time to date
o i.e. the amount of time spent waiting for service so far
Process Scheduling 11
Operating Systems

Scheduling Philosophies
• Preemptive
o The system may preempt the CPU from a process at any time;
▪ Prevents any one process from using the CPU for too long
▪ Can lead to race conditions (requires synchronization techniques)
• Non-preemptive
o Each process voluntarily gives up the CPU
o It is;
▪ Simple
▪ Easy to implement
▪ Not suitable for multi-user systems
o If the running process becomes blocked, the next process can be scheduled

Process Scheduling 12
Operating Systems

Process Scheduling Queues


• Job queue
o Set of all the processes in the system
o It also includes the processes from the queues given below

• Ready queue
o Set of all the processes residing in main memory (ready or waiting)
• Device queue
o Set of processes waiting for an I/O device

Process Scheduling 13
Operating Systems

Levels of Scheduling
• Scheduling can be exercised at three levels
• High-level scheduling (Long-term scheduler)
o Allows only a limited number of processes into the ready queue to compete for the CPU
o Once that limit is reached it holds back further processes, batch processes first and,
ultimately, interactive processes as well
o Invoked infrequently
o May be slow
• Medium-level scheduling (Medium-term scheduler)
o It is concerned with the decision to temporarily remove a process from the system (to
reduce system load)
o It also decides when to reintroduce (swap in) a process
o What if there is room in memory and both a new process and a swapped-out
process want to be loaded?
▪ In this case, the medium-level scheduler has to work in close conjunction with the high-level
scheduler

Process Scheduling 14
Operating Systems

Levels of Scheduling
• Low level-scheduling (Short-term scheduler)
o Handles the decisions of which process is to be assigned to the processor (dispatcher)
o Invoked very frequently
o Must execute fast
o Its scheduling could be preemptive or non-preemptive
o Dispatch latency should be minimum
▪ Dispatch latency is the time it takes for the dispatcher to stop one process and start another running
▪ Example:
• Assume a process runs for 90ms and scheduler run for 10ms
• Overheads = 10/(90+10) = 10%

• Any OS may use one or all of these levels, depending upon the desired level of control
• These levels have to interact amongst themselves quite closely to ensure that the
computing resources are managed optimally
• The exact algorithms for these levels and the interactions between them are quite complex

Process Scheduling 15
Operating Systems

Scheduling Algorithms

Process Scheduling 16
Operating Systems

First Come First Serve Scheduling


• FCFS simply assigns the processor to the process that is first in the ready queue
• When the processor becomes free, the next process at the head of the ready queue is
selected
• Consider the following set of three processes that arrive at time ‘0’
o Process ID Priority Burst Time
o P1 2 30 ms
o P2 3 6 ms
o P3 1 4 ms

Gantt chart: | P1 (0–30) | P2 (30–36) | P3 (36–40) |

• The waiting time for each process is P1 = 0, P2 = 30, P3 = 36
• The average waiting time is (0 + 30 + 36)/3 = 22 ms
• The average waiting time under FCFS may vary substantially if the processes’ CPU burst
times vary greatly
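
A minimal C sketch of the waiting-time calculation above (the process data comes from the slide; the array layout and identifiers are illustrative only):

```c
#include <stdio.h>

/* FCFS with all processes arriving at time 0:
 * each process waits for the sum of the bursts ahead of it. */
int main(void) {
    int burst[] = {30, 6, 4};              /* P1, P2, P3 burst times in ms */
    int n = sizeof burst / sizeof burst[0];
    int elapsed = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waits %d ms\n", i + 1, elapsed);
        total_wait += elapsed;
        elapsed += burst[i];               /* next process waits until this one finishes */
    }
    printf("average waiting time = %.1f ms\n", (double)total_wait / n);
    return 0;
}
```

Running it prints waits of 0, 30, and 36 ms and an average of 22.0 ms, matching the figures above.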

Process Scheduling 17
Operating Systems

First Come First Serve Scheduling


• Salient points of FCFS
o It is the simplest scheduling algorithm
o Also known as first in, first out (FIFO)
o FCFS is non-preemptive
o It performs much better for long processes than for short processes
o It tends to favor CPU-bound processes over I/O-bound processes
o It is not suitable for time-sharing systems
o FCFS is rarely used on its own but is effective when combined with other schemes

Process Scheduling 18
Operating Systems

Shortest Job First Scheduling


• This algorithm schedules the job with the shortest next CPU burst first
• When the CPU is available, it is assigned to the process that has the smallest CPU burst time
• If two processes have the same CPU burst, FCFS scheduling is used to break the tie
• Moving a short process before a long one
o Decreases the waiting time of the short process
o Increases the waiting time of the long process
o Consequently, the average waiting time decreases
• This results in a smaller number of PCBs in the ready or blocked queues
• The search time will be smaller, thus improving the response time
• The number of satisfied users will increase
• A long job in the queue may be delayed indefinitely by a succession of smaller jobs arriving in the
ready queue
• This can be avoided by assigning a higher external priority to such important jobs
• The OS can, at any time, calculate a resultant priority based on both factors
• Also known as shortest job next
Process Scheduling 19
Operating Systems

Shortest Job First Scheduling


• Salient Points of SJF
o SJF may be preemptive or non-preemptive
o It will preempt the currently executing process if its remaining time is greater
than the CPU burst of a newly arrived process
o This scheme is known as shortest remaining time first
o SJF is optimal in the sense that it yields the smallest average waiting time
o In general, it is difficult to predict the CPU time requirement for a process
o SJF is optimal for batch jobs for which the run times are known in advance
o SJF reduces average waiting time over FCFS

Process Scheduling 20
Operating Systems

Shortest Job First Scheduling


Non-Preemptive SJF Scheduling
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
◼ SJF (non-preemptive)

Gantt chart: | P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16) |

◼ Average waiting time = (0 + 6 + 3 + 7)/4 = 4
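
The waiting times follow directly from start time minus arrival time (each process runs in a single uninterrupted burst here):

    W_{P_1} = 0 - 0 = 0,\quad W_{P_2} = 8 - 2 = 6,\quad W_{P_3} = 7 - 4 = 3,\quad W_{P_4} = 12 - 5 = 7,\quad \bar{W} = \frac{0+6+3+7}{4} = 4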

Process Scheduling 21
Operating Systems

Shortest Job First Scheduling


Preemptive SJF Scheduling
Process Arrival Time Burst Time
P1 0.0 7
P2 2.0 4
P3 4.0 1
P4 5.0 4
◼ SJF (preemptive)

Gantt chart: | P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16) |

◼ Average waiting time = (9 + 1 + 0 + 2)/4 = 3

Process Scheduling 22
Operating Systems

Round Robin Scheduling


• The Round-Robin (RR) scheduling is designed especially for time
sharing systems
• In this algorithm there is a small unit of time known as the time quantum
• A time quantum is generally 10–100 ms
• The CPU scheduler goes around the ready queue, allocating the CPU to
each process for a time interval of up to 1 quantum

Process Scheduling 23
Operating Systems

Round Robin Scheduling


• For RR scheduling we keep the ready queue as a FIFO queue of
processes
• The CPU scheduler picks the first process from the ready queue and sets a timer
to interrupt after 1 quantum
• If the process’s remaining burst time is less than 1 quantum
o the process releases the CPU voluntarily and the scheduler proceeds to the next process
• If the process’s remaining burst time is greater than 1 quantum
o the process is preempted and sent back to the ready queue, and the CPU scheduler selects
the next process in the ready queue for scheduling

Process Scheduling 24
Operating Systems

Round Robin Scheduling


Process Burst Time
P1 53
P2 17
P3 68
P4 24

• Time quantum = 20 ms
• The Gantt chart is:

Gantt chart: | P1 (0–20) | P2 (20–37) | P3 (37–57) | P4 (57–77) | P1 (77–97) |
             | P3 (97–117) | P4 (117–121) | P1 (121–134) | P3 (134–154) | P3 (154–162) |

• Typically, higher average turnaround than SJF, but better response
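
A small C sketch that reproduces the schedule above (burst times are taken from the slide; the queue handling is simplified and the identifiers are illustrative):

```c
#include <stdio.h>

#define N 4
#define QUANTUM 20

int main(void) {
    int burst[N] = {53, 17, 68, 24};   /* remaining burst time of P1..P4 */
    int time = 0, remaining = N;

    /* All processes are in the ready queue at time 0; cycle through them
     * round-robin, giving each at most QUANTUM ms per turn. */
    while (remaining > 0) {
        for (int i = 0; i < N; i++) {
            if (burst[i] == 0) continue;
            int slice = burst[i] < QUANTUM ? burst[i] : QUANTUM;
            printf("P%d runs %3d-%3d\n", i + 1, time, time + slice);
            time += slice;
            burst[i] -= slice;
            if (burst[i] == 0) remaining--;
        }
    }
    return 0;
}
```

A production scheduler re-queues a preempted process at the tail of the ready queue; for this particular workload the simple cyclic scan above happens to produce the same order as the Gantt chart.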

Process Scheduling 25
Operating Systems

Priority Scheduling
• In priority scheduling a priority is associated with each process and
the CPU is allocated to the process with the highest priority
• Equal priority processes are scheduled in FCFS order
• Priorities are generally some fixed range of numbers, such as 0, 1, 5,
10
• There is no general agreement on whether ‘0’ represents the highest or the lowest
priority
• Priority can be defined on the bases of following
o Process time limit
o Memory requirements
o Number of open files
o Ratio of average I/O burst to CPU burst
Process Scheduling 26
Operating Systems

Priority Scheduling - Salient points


• The priority mechanism can be static or dynamic
o A static priority mechanism is easy to implement and has relatively low overheads
o A dynamic priority mechanism is more complex to implement and has greater overheads
• Priority scheduling may be preemptive or non-preemptive
o The choice arises when a high-priority process arrives in the queue while a low-priority process
is executing
• A process that is ready to run but lacking the CPU can be considered blocked
• The solution to indefinite blocking is aging
• Aging is a technique of gradually increasing the priority of processes that wait for
a long time
• An OS allows only a limited number of priority classes
• In VAX / VMS the priority range is 0 to 31
• In Unix the priority range is -20 to 20
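
A toy illustration of aging (the structure, constants, and boost interval here are hypothetical; the slide does not specify how much or how often priorities are raised):

```c
#include <stddef.h>

#define AGING_STEP 1      /* hypothetical boost applied on every scan */
#define MAX_PRIORITY 0    /* assume 0 is the highest priority here */

struct proc {
    int priority;         /* lower number = higher priority (assumption) */
    int waiting_ticks;    /* time spent so far in the ready queue */
};

/* Called periodically: every waiting process gains a little priority,
 * so long waiters eventually outrank freshly arrived high-priority work. */
void age_ready_queue(struct proc *ready, size_t n) {
    for (size_t i = 0; i < n; i++) {
        ready[i].waiting_ticks++;
        if (ready[i].priority > MAX_PRIORITY)
            ready[i].priority -= AGING_STEP;
    }
}
```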
Process Scheduling 27
Operating Systems

Multilevel Queue
• Ready queue is partitioned into separate queues:
foreground (interactive)
background (batch)
• Each queue has its own scheduling algorithm
o foreground – RR
o background – FCFS
• Scheduling must be done between the queues
o Fixed priority scheduling; (i.e., serve all from foreground then from
background)
▪ Possibility of starvation
o Time slice – each queue gets a certain amount of CPU time which it can
schedule amongst its processes; e.g., 80% to the foreground queue in RR and
20% to the background queue in FCFS

Process Scheduling 28
Operating Systems

Multilevel Queue Scheduling

Process Scheduling 29
Operating Systems

Multilevel Feedback Queue


• A process can move between the various queues
o aging can be implemented this way
• Multilevel-feedback-queue scheduler defined by the following
parameters:
o number of queues
o scheduling algorithms for each queue
o method used to determine when to upgrade a process
o method used to determine when to demote a process
o method used to determine which queue a process will enter when that
process needs service

Process Scheduling 30
Operating Systems

Multilevel Feedback Queue


Example
• Three queues:
o Q0 – time quantum 8 milliseconds
o Q1 – time quantum 16 milliseconds
o Q2 – FCFS

• Scheduling
o A new job enters queue Q0 which is served FCFS
▪ When it gains CPU, job receives 8 milliseconds
▪ If it does not finish in 8 milliseconds, job is moved to queue Q1
o At Q1 job is again served FCFS and receives 16 additional milliseconds
▪ If it still does not complete, it is preempted and moved to queue Q2
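
A condensed C sketch of the three-queue policy described above (the queue labels and function names are illustrative, not taken from the slide):

```c
/* Demotion rule for the example: Q0 quantum 8 ms, Q1 quantum 16 ms,
 * Q2 runs FCFS until completion. `level` is the queue a job currently sits in. */
enum { Q0, Q1, Q2 };

int quantum_for(int level) {
    switch (level) {
    case Q0: return 8;
    case Q1: return 16;
    default: return -1;          /* Q2: no quantum, FCFS to completion */
    }
}

/* Called when a job's quantum expires before the job finishes:
 * push it down one level, where it waits until Q0 and Q1 are empty. */
int demote(int level) {
    return level < Q2 ? level + 1 : Q2;
}
```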

Process Scheduling 31
Operating Systems

Multilevel Feedback Queues

Process Scheduling 32
Operating Systems

Thread Scheduling

Process Scheduling 33
Operating Systems

Thread Scheduling
• Distinction between user-level and kernel-level threads
• When threads supported, threads scheduled, not processes
• Process-contention scope (PCS)
o Thread library schedules user-level threads to run on an available LWP (lightweight process)
o Scheduling competition is within the process
o Typically done via priority set by programmer
o Many-to-one and many-to-many models

• System-contention scope (SCS)


o Kernel thread scheduled onto available CPU
o Competition among all threads in system

Process Scheduling 34
Operating Systems

Pthread Scheduling
• API allows specifying either PCS or SCS during thread creation
o PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling
o PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling

• Can be limited by OS
o Linux and macOS only allow PTHREAD_SCOPE_SYSTEM
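
A short C example of selecting the contention scope through the Pthreads attribute API (error handling omitted; whether PTHREAD_SCOPE_PROCESS is accepted depends on the OS, as noted above):

```c
#include <pthread.h>
#include <stdio.h>

void *runner(void *arg) {
    /* thread body would go here */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);

    /* Query the default contention scope, then request system scope (SCS). */
    pthread_attr_getscope(&attr, &scope);
    printf("default scope is %s\n",
           scope == PTHREAD_SCOPE_PROCESS ? "PTHREAD_SCOPE_PROCESS"
                                          : "PTHREAD_SCOPE_SYSTEM");

    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

    pthread_create(&tid, &attr, runner, NULL);
    pthread_join(tid, NULL);
    return 0;
}
```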

Process Scheduling 35
Operating Systems

Multiple-Processor Scheduling
• CPU scheduling more complex when multiple CPUs are available
• Multiprocessor may be any one of the following architectures:
o Multicore CPUs
o Multithreaded cores
o NUMA systems
o Heterogeneous multiprocessing

Process Scheduling 38
Operating Systems

Multiple-Processor Scheduling
• Symmetric multiprocessing (SMP) is where each processor is self
scheduling
• All threads may be in a common ready queue (a)
• Each processor may have its own private queue of threads (b)

Process Scheduling 39
Operating Systems

Multicore Processors
• Recent trend to place multiple processor cores on same physical chip
• Faster and consumes less power
• Multiple threads per core also growing
o Takes advantage of memory stall to make progress on another thread while
memory retrieve happens

Process Scheduling 40
Operating Systems

Multithreaded Multicore System


• Each core has > 1 hardware threads
• If one thread has a memory stall, switch to another thread

Process Scheduling 41
Operating Systems

Multithreaded Multicore System


• Chip-multithreading (CMT) assigns
each core multiple hardware threads
o Intel refers to this as hyperthreading

• On a quad-core system with 2 hardware threads per core, the operating
system sees 8 logical processors

Process Scheduling 42
Operating Systems

Multithreaded Multicore System


• Two levels of scheduling:
o The operating system decides which software thread to run on a logical CPU
o Each core decides which hardware thread to run on the physical core

Process Scheduling 43
Operating Systems

Multiple-Processor Scheduling – Load Balancing


• If SMP, need to keep all CPUs loaded for efficiency

• Load balancing
o attempts to keep workload evenly distributed
• Push migration
o periodic task checks load on each processor, and if found pushes task from
overloaded CPU to other CPUs
• Pull migration
o idle processors pulls waiting task from busy processor

Process Scheduling 44
Operating Systems

Multiple-Processor Scheduling – Processor Affinity


• When a thread has been running on one processor, the cache contents of
that processor stores the memory accesses by that thread
• We refer to this as a thread having affinity for a processor (i.e., “processor
affinity”)
• Load balancing may affect processor affinity as a thread may be moved
from one processor to another to balance loads, yet that thread loses the
contents of what it had in the cache of the processor it was moved off of
• Soft affinity
o the operating system attempts to keep a thread running on the same processor, but
no guarantees
• Hard affinity
o allows a process to specify a set of processors it may run on
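
As one concrete illustration of hard affinity, Linux exposes it through sched_setaffinity (a Linux-specific call; the slide itself does not name an API, so this is only an example):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;

    /* Pin the calling process to CPUs 0 and 2 only. */
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    CPU_SET(2, &mask);

    if (sched_setaffinity(0 /* this process */, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("process now restricted to CPUs 0 and 2\n");
    return 0;
}
```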
Process Scheduling 45
Operating Systems

NUMA and CPU Scheduling


• If the operating system is NUMA-aware, it will assign memory closest
to the CPU on which the thread is running

Process Scheduling 46
Operating Systems

Real-Time Process Scheduling

Process Scheduling 47
Operating Systems

Real-Time CPU Scheduling


• Can present obvious challenges
• Soft real-time systems
o Critical real-time tasks have the highest priority, but no guarantee as to when
tasks will be scheduled
• Hard real-time systems
o task must be serviced by its deadline

Process Scheduling 48
Operating Systems

Real-Time CPU Scheduling


• Event latency – the amount of time
that elapses from when an event
occurs to when it is serviced
• Two types of latencies affect
performance
o Interrupt latency
▪ time from arrival of interrupt to start of
routine that services interrupt
o Dispatch latency
▪ time for the scheduler to take the current process off the
CPU and switch to another

Process Scheduling 49
Operating Systems

Interrupt Latency

Process Scheduling 50
Operating Systems

Dispatch Latency
• Conflict phase of dispatch
latency
o Preemption of any process
running in kernel mode
o Release by low-priority process
of resources needed by high-
priority processes

Process Scheduling 51
Operating Systems

Priority-based Scheduling
• For real-time scheduling, scheduler must support preemptive,
priority-based scheduling
o But only guarantees soft real-time
• For hard real-time must also provide ability to meet deadlines
• Processes have new characteristics: periodic processes require the CPU at
constant intervals
o Each has processing time t, deadline d, and period p
o 0 ≤ t ≤ d ≤ p
o The rate of a periodic task is 1/p

Process Scheduling 52
Operating Systems

Rate Monotonic Scheduling


• Each task is assigned a priority based on the inverse of its period
• Shorter periods → higher priority
• Longer periods → lower priority

• P1 is assigned a higher priority than P2
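
A useful companion result (the Liu–Layland bound, not shown on the slide): n periodic tasks with processing times t_i and periods p_i are guaranteed to be schedulable under rate-monotonic scheduling if

    \sum_{i=1}^{n} \frac{t_i}{p_i} \;\le\; n\left(2^{1/n} - 1\right)

The bound is about 83% for two tasks and approaches roughly 69% as n grows; task sets that exceed it may still be schedulable, but this has to be checked case by case.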

Process Scheduling 53
Operating Systems

Missed Deadlines with Rate Monotonic Scheduling


• Process P2 misses finishing its deadline at time 80

Process Scheduling 54
Operating Systems

Earliest Deadline First Scheduling (EDF)


• Priorities are assigned according to deadlines
o The earlier the deadline, the higher the priority
o The later the deadline, the lower the priority

Process Scheduling 55
Operating Systems

Proportional Share Scheduling


• T shares are allocated among all processes in the system
• An application receives N shares where N < T
• This ensures each application will receive N / T of the total processor
time
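
A small hypothetical example (the numbers are illustrative, not from the slide): suppose T = 100 shares and applications A, B, and C hold 50, 15, and 20 shares respectively. Then

    A:\ \frac{50}{100} = 50\%, \qquad B:\ \frac{15}{100} = 15\%, \qquad C:\ \frac{20}{100} = 20\%

of the processor time is guaranteed, leaving 15 shares unallocated; an admission controller would refuse to start a new client requesting more than those remaining 15 shares.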

Process Scheduling 56
Operating Systems

POSIX Real-Time Scheduling


• The POSIX.1b standard
• API provides functions for managing real-time threads
• Defines two scheduling classes for real-time threads
1. SCHED_FIFO - threads are scheduled using a FCFS strategy with a FIFO
queue
▪ There is no time-slicing for threads of equal priority
2. SCHED_RR - similar to SCHED_FIFO except time-slicing occurs for threads of
equal priority
• Defines two functions for getting and setting the scheduling policy:
1. pthread_attr_getschedpolicy(pthread_attr_t *attr, int *policy)
2. pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy)
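
A minimal C example using this API (the priority value is illustrative and error handling is kept simple; SCHED_FIFO typically requires appropriate privileges):

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

void *rt_worker(void *arg) {
    /* real-time work would go here */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    struct sched_param param;
    int policy;

    pthread_attr_init(&attr);
    pthread_attr_getschedpolicy(&attr, &policy);
    printf("default policy: %s\n", policy == SCHED_FIFO ? "SCHED_FIFO"
                                 : policy == SCHED_RR   ? "SCHED_RR"
                                                        : "SCHED_OTHER");

    /* Ask for FIFO real-time scheduling with an explicit priority. */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    param.sched_priority = 10;                       /* illustrative value */
    pthread_attr_setschedparam(&attr, &param);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    if (pthread_create(&tid, &attr, rt_worker, NULL) != 0)
        perror("pthread_create");
    else
        pthread_join(tid, NULL);
    return 0;
}
```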

Process Scheduling 57
Operating Systems

Operating System Examples

Unix, Linux, Windows, and Solaris scheduling

Process Scheduling 60
Operating Systems

Operating System Examples


• Linux scheduling
• Windows scheduling
• Solaris scheduling

Process Scheduling 61
Operating Systems

BSD Unix Scheduling


• BSD Unix uses a multilevel feedback queue approach, with 32 run queues
• System processes use run queues 0 through 7
• User processes use run queues 8 through 31
• The dispatcher selects a process from the highest-priority non-empty queue
• Within a queue, RR scheduling is used
• Time quanta vary among implementations, but all are less than 100 ms
• Each process has an external nice priority
• The nice priority can vary between -20 and 20; -20 is the highest and 20 the lowest
user priority
• Each process’s current priority is recalculated after each time quantum

Process Scheduling 62
Operating Systems

Linux Scheduling Through Version 2.5


• Prior to kernel version 2.5, Linux ran a variation of the standard UNIX scheduling
algorithm
• Version 2.5 moved to constant order O(1) scheduling time
o Preemptive, priority based
o Two priority ranges: time-sharing and real-time
o Real-time range from 0 to 99 and nice value from 100 to 140
o Map into global priority with numerically lower values indicating higher priority
o Higher priority gets a larger time quantum
o Task run-able as long as time left in time slice (active)
o If no time left (expired), not run-able until all other tasks use their slices
o All run-able tasks tracked in per-CPU runqueue data structure
▪ Two priority arrays (active, expired)
▪ Tasks indexed by priority
▪ When no more active tasks remain, the arrays are exchanged
o Worked well, but poor response times for interactive processes

Process Scheduling 63
Operating Systems

Linux Scheduling in Version 2.6.23 +


Completely Fair Scheduler (CFS)
• Scheduling classes
o Each has specific priority
o Scheduler picks highest priority task in highest scheduling class
o Rather than quanta based on fixed time allotments, CPU allocation is based on a
proportion of CPU time
o Two scheduling classes are included; others can be added
▪ default
▪ real-time

Process Scheduling 64
Operating Systems

Linux Scheduling in Version 2.6.23 +


• Quantum calculated based on nice value from -20 to +19
o Lower value is higher priority
o CFS calculates a target latency – an interval of time during which every runnable task
should run at least once
o The target latency can increase if, say, the number of active tasks increases

• CFS scheduler maintains per task virtual run time in variable vruntime
o Associated with decay factor based on priority of task – lower priority is
higher decay rate
o Normal default priority yields virtual run time = actual run time

• To decide next task to run, scheduler picks task with lowest virtual run
time
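
A conceptual C sketch of this selection rule (the real kernel keeps tasks in a red-black tree keyed by vruntime; the linear scan and field names here are purely illustrative):

```c
#include <stddef.h>

struct task {
    const char *name;
    unsigned long long vruntime;   /* weighted CPU time already consumed */
};

/* Pick the runnable task that has received the least weighted CPU time so far. */
struct task *pick_next_task(struct task *rq, size_t n) {
    struct task *next = NULL;
    for (size_t i = 0; i < n; i++)
        if (next == NULL || rq[i].vruntime < next->vruntime)
            next = &rq[i];
    return next;
}
```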

Process Scheduling 65
Operating Systems

CFS Performance

Process Scheduling 66
Operating Systems

Linux Scheduling
• Real-time scheduling according to POSIX.1b
o Real-time tasks have static priorities
• Real-time plus normal map into global priority scheme
• Nice value of -20 maps to global priority 100
• Nice value of +19 maps to priority 139

Process Scheduling 67
Operating Systems

Linux Scheduling
• Linux supports load balancing, but is also NUMA-aware
• A scheduling domain is a set of CPU cores that can be balanced against
one another
• Domains are organized by what they share (e.g., cache memory); the goal
is to keep threads from migrating between domains

Process Scheduling 68
Operating Systems

Windows Scheduling
• Windows uses a priority-based, preemptive scheduling algorithm
• Windows scheduler ensures that highest priority thread will always run
o Thread runs until
1. blocks
2. uses time slice
3. preempted by higher-priority thread
• Windows dispatcher uses a 32-level priority scheme
• Priorities are divided into two classes
o Variable class (priorities from 0 to 15)
o Real time class (priorities from 16 to 31)
• Priority 0 is reserved for the memory-management thread
• Dispatcher uses a queue for each scheduling priority
• If no run-able thread, runs idle thread

Process Scheduling 69
Operating Systems

Windows Scheduling: Priority Classes


• Win 32 API identifies several priority classes. These are;
o REALTIME_PRIORITY_CLASS
o HIGH_PRIORITY_CLASS
o ABOVE_NORMAL_PRIORITY_CLASS
o NORMAL_PRIORITY_CLASS
o BELOW_NORMAL_PRIORITY_CLASS
o IDLE_PRIORITY_CLASS
• Within each of these priority classes there are relative priorities, which are:
o TIME_CRITICAL, HIGHEST, ABOVE_NORMAL, NORMAL, BELOW_NORMAL,
LOWEST, IDLE
• Priority class and relative priority combine to give numeric priority
• Base priority is NORMAL within the class

Process Scheduling 70
Operating Systems

Windows Scheduling: Priority Classes


• If quantum expires, priority lowered, but never below base
• If wait occurs, priority boosted depending on what was waited for
• Foreground window given 3x priority boost
• Windows 7 added user-mode scheduling (UMS)
o Applications create and manage threads independent of kernel
o For large number of threads, much more efficient
o UMS schedulers come from programming language libraries like
C++ Concurrent Runtime (ConcRT) framework

Process Scheduling 71
Operating Systems

Windows Priorities

Process Scheduling 72
Operating Systems

Solaris Scheduling
• Solaris uses priority-based process scheduling
• It has six classes of priority, which are;
o Real time (RT)
o System (SYS)
o Fair Share (FSS)
o Fixed priority (FP)
o Time sharing (TS) (default)
o Interactive (IA)
• Given thread can be in one class at a time
• Each class includes different set of priorities
• Each class has its own scheduling algorithm
• Real-time processes have the highest priority; interactive and time-sharing
processes have the lowest
• Solaris uses system class to run kernel processes, such as scheduler
o System class is reserved for kernel processes
Process Scheduling 73
Operating Systems

Solaris Dispatch Table

Process Scheduling 74
Operating Systems

Solaris Scheduling

Process Scheduling 75
Operating Systems

Solaris Scheduling
• Scheduler converts class-specific priorities into a per-thread global
priority
o Thread with highest priority runs next
o Runs until (1) blocks, (2) uses time slice, (3) preempted by higher-priority
thread
o Multiple threads at same priority selected via RR

Process Scheduling 76
Operating Systems

Evaluation of CPU Schedulers by Simulation

Process Scheduling 83
Operating Systems

Implementation
• Even simulations have limited accuracy
• Just implement new scheduler and test in real systems
o High cost, high risk
o Environments vary

• Most flexible schedulers can be modified per-site or per-system


• Or APIs to modify priorities
• But again environments vary

Process Scheduling 84
