RTOS Module 2 - Full
MODULE II
CONTENTS
PREPARED BY DIVYA HARIKUMAR_ASST PROFESSOR_DEPT OF ECE_SCTCE
Scheduling Objectives
Be fair while allocating resources to the processes.
Maximize throughput of the system.
Maximize the number of users receiving acceptable response times.
Be predictable.
Balance resource use.
Avoid indefinite postponement.
Enforce priorities.
Give preference to processes holding key resources.
Give better service to processes that have desirable behavior patterns.
CPU and I/O Burst Cycle
🠶 Process execution consists of a cycle of CPU execution and I/O wait.
🠶 Processes alternate between these two states.
🠶 Process execution begins with a CPU burst, followed by an I/O burst, then another CPU burst, and so on.
🠶 The last CPU burst ends with a system request to terminate execution rather than with another I/O burst.
🠶 The durations of these CPU bursts have been measured.
🠶 An I/O-bound program typically has many short CPU bursts, while a CPU-bound program might have a few very long CPU bursts.
🠶 This can help in selecting an appropriate CPU-scheduling algorithm.
Preemptive Scheduling
🠶 Preemptive scheduling is used when a process switches from the running state to the ready state or from the waiting state to the ready state.
🠶 The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away, and the process is placed back in the ready queue.
🠶 If that process still has CPU burst time remaining, it stays in the ready queue till it gets its next chance to execute.
Non-Preemptive Scheduling
🠶 Non-preemptive scheduling is used when a process terminates, or when a process switches from the running to the waiting state.
🠶 In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.
🠶 In non-preemptive scheduling, a process running on the CPU is not interrupted in the middle of its execution.
🠶 Instead, the scheduler waits till the process completes its CPU burst and only then allocates the CPU to another process.
Scheduling Criteria
There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:
CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
Throughput - Number of processes completed per unit time.
Turnaround time - Time required for a particular process to complete, from submission time to completion.
Waiting time - How much time processes spend in the ready queue waiting for their turn to get on the CPU.
Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
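The turnaround and waiting time definitions above can be sketched in a few lines. The process data here is hypothetical, chosen only to illustrate the formulas:

```python
processes = [
    # (name, arrival_time, burst_time, completion_time) - hypothetical values
    ("P1", 0, 5, 5),
    ("P2", 1, 3, 8),
    ("P3", 2, 1, 9),
]

for name, arrival, burst, completion in processes:
    turnaround = completion - arrival   # submission to completion
    waiting = turnaround - burst        # time spent in the ready queue
    print(name, "turnaround =", turnaround, "waiting =", waiting)
```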
First Come First Serve (FCFS)
Advantages-
It is simple and easy to understand.
It can be easily implemented using a queue data structure.
It does not lead to starvation.
Disadvantages-
It does not consider the priority or burst time of the processes.
It suffers from the convoy effect, i.e., short processes have to wait behind processes with higher burst time that arrived before them.
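The convoy effect can be seen with a short sketch. The burst times are hypothetical, and all processes are assumed to arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Per-process waiting times when bursts run in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)  # each process waits for everyone ahead of it
        clock += burst
    return waits

# A long job arriving first makes every short job behind it wait (convoy effect).
print(fcfs_waiting_times([24, 3, 3]))  # -> [0, 24, 27], average 17
# The same jobs with the long one last give a much lower average wait.
print(fcfs_waiting_times([3, 3, 24]))  # -> [0, 3, 6], average 3
```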
Shortest Job First (SJF)
🠶 Processes which have the shortest burst time are scheduled first.
🠶 If two processes have the same burst time, then FCFS is used to break the tie.
🠶 SJF can be either a non-preemptive or a preemptive scheduling algorithm.
🠶 Best approach to minimize waiting time.
🠶 Easy to implement in batch systems where the required CPU time is known in advance.
🠶 Impossible to implement in interactive systems where the required CPU time is not known.
Disadvantages-
The processor should know in advance how much time a process will take.

The Gantt chart is prepared according to the Non-Preemptive priority scheduling.
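As a sketch, non-preemptive SJF for jobs that all arrive at time 0 reduces to sorting by burst time (ties break in FCFS order, as noted above). The burst values below are hypothetical:

```python
def sjf_average_waiting_time(bursts):
    """Average waiting time under non-preemptive SJF, all arrivals at t=0."""
    # Sort by burst time; the original index breaks ties (FCFS order).
    order = sorted(range(len(bursts)), key=lambda i: (bursts[i], i))
    waits, clock = [], 0
    for i in order:
        waits.append(clock)   # this job waited for all shorter jobs before it
        clock += bursts[i]
    return sum(waits) / len(bursts)

print(sjf_average_waiting_time([6, 8, 7, 3]))  # -> 7.0 (runs in order 3, 6, 7, 8)
```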
Round Robin Scheduling
🠶 This scheduler allocates the CPU to each process in the ready queue for a time interval of up to 1 time quantum in FIFO (circular) fashion.
🠶 If the process is still running at the end of the quantum, it will be preempted from the CPU. A context switch will be executed, and the process will be put at the tail of the ready queue.
🠶 Then the scheduler will select the next process in the ready queue.
🠶 The ready queue is treated as a circular queue.
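The circular-queue behavior above can be sketched as follows. The process names and burst times are hypothetical, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, q):
    """Return the execution order as (process, slice_length) pairs."""
    ready = deque(bursts.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()   # next process in FIFO order
        run = min(q, remaining)             # run for up to one time quantum
        timeline.append((name, run))
        if remaining > run:                 # preempted: back to the tail
            ready.append((name, remaining - run))
    return timeline

print(round_robin({"P1": 5, "P2": 3}, q=2))
# -> [('P1', 2), ('P2', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```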
Advantages-
🠶 It is actually implementable in the system because it does not depend on the burst time.
🠶 It doesn't suffer from the problem of starvation or convoy effect.
🠶 All the jobs get a fair allocation of CPU.
🠶 It gives the best performance in terms of average response time.
🠶 It is best suited for time-sharing systems, client-server architecture and interactive systems.
Disadvantages-
🠶 It leads to long waits for processes with larger burst time, as they have to repeat the cycle many times.
🠶 The higher the time quantum, the higher the response time in the system.
🠶 Its performance heavily depends on the time quantum.
🠶 Priorities cannot be set for the processes.
🠶 The lower the time quantum, the higher the context switching overhead in the system.
🠶 Deciding a perfect time quantum is a very difficult task.
🠶 With decreasing value of time quantum:
Number of context switches increases
Response time decreases
Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.
🠶 If the time quantum is very small, the RR approach is called processor sharing and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.
🠶 If the time slice is chosen to be very small (close to the context switching period), then context switching overheads will be very high, thus affecting the system throughput adversely.
🠶 With increasing value of time quantum:
Number of context switches decreases
Response time increases
Chances of starvation increase
Thus, a higher value of time quantum is better in terms of the number of context switches.
🠶 With increasing value of time quantum, Round Robin Scheduling tends to become FCFS Scheduling.
🠶 When the time quantum tends to infinity, Round Robin Scheduling becomes FCFS Scheduling.
🠶 The performance of Round Robin scheduling heavily depends on the value of the time quantum.
🠶 The time slice has to be chosen carefully. It should be small enough to give good response to the interactive users.
🠶 At the same time, it should be large enough to keep the context-switching overhead low.
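The trade-off can be illustrated by counting preemptions for a workload at different quanta. The burst times are hypothetical:

```python
import math

def preemptions(bursts, q):
    """Number of times processes are preempted back into the ready queue."""
    # A process with burst b runs in ceil(b / q) slices, so it is
    # preempted ceil(b / q) - 1 times.
    return sum(math.ceil(b / q) - 1 for b in bursts)

bursts = [10, 6, 4]
for q in (1, 2, 5, 100):
    print("q =", q, "preemptions =", preemptions(bursts, q))
# With a very large quantum the count drops to 0 and RR degenerates to FCFS.
```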
If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time and average turnaround time.
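The process table for this exercise is not reproduced here, so as a sketch, here is the same calculation for a hypothetical set of jobs, all arriving at time 0, with time quantum = 2 units:

```python
from collections import deque

def rr_metrics(bursts, q=2):
    """Round Robin turnaround and waiting times (all arrivals at t=0)."""
    ready = deque(bursts.items())
    completion, clock = {}, 0
    while ready:
        name, left = ready.popleft()
        run = min(q, left)
        clock += run
        if left > run:
            ready.append((name, left - run))  # preempted, back to the tail
        else:
            completion[name] = clock
    turnaround = {p: completion[p] for p in bursts}            # completion - arrival
    waiting = {p: turnaround[p] - bursts[p] for p in bursts}   # turnaround - burst
    return turnaround, waiting

tat, wt = rr_metrics({"P1": 4, "P2": 3, "P3": 5})
print("average turnaround =", round(sum(tat.values()) / 3, 2))  # -> 9.67
print("average waiting =", round(sum(wt.values()) / 3, 2))      # -> 5.67
```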
Multilevel Queue Scheduling
🠶 A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.
🠶 The processes are permanently assigned to one queue, generally based on some property of the process, such as memory size, process priority, or process type.
🠶 There must be scheduling between the queues, which is commonly implemented as fixed-priority preemptive scheduling.
🠶 Each queue has absolute priority over lower-priority queues. No process in the batch queue, for example, could run unless the queues for system processes, interactive processes, and interactive editing processes were all empty.
🠶 If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
Multilevel Feedback Queue Scheduling
🠶 Multilevel feedback queue scheduling allows a process to move between queues.
🠶 The idea is to separate processes with different CPU-burst characteristics.
🠶 If a process uses too much CPU time, it will be moved to a lower-priority queue.
🠶 This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
🠶 Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. This form of aging prevents starvation.
🠶 For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
The scheduler first executes all processes in queue 0.
Only when queue 0 is empty will it execute processes in queue 1.
Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
A process that arrives for queue 1 will preempt a process in queue 2.
A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
🠶 A multilevel feedback queue scheduler is defined by the following parameters:
o The number of queues
o The scheduling algorithm for each queue
o The method used to determine when to upgrade a process to a higher-priority queue
o The method used to determine when to demote a process to a lower-priority queue
o The method used to determine which queue a process will enter when that process needs service
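A minimal sketch of the three-queue example above. The per-level quanta and the job mix are assumptions for illustration; all jobs arrive at time 0, so arrival-based preemption of lower queues is not modeled:

```python
from collections import deque

QUANTA = [2, 4, 8]  # assumed time quantum for queue 0, 1, 2

def mlfq(bursts):
    """Return an execution trace of (process, queue_level, slice_length)."""
    queues = [deque(), deque(), deque()]
    for name, burst in bursts.items():
        queues[0].append((name, burst))     # new jobs enter queue 0
    trace = []
    while any(queues):
        # Run the highest-priority non-empty queue.
        level = next(i for i, q in enumerate(queues) if q)
        name, left = queues[level].popleft()
        run = min(QUANTA[level], left)
        trace.append((name, level, run))
        if left > run:                      # used its full quantum: demote
            queues[min(level + 1, 2)].append((name, left - run))
    return trace

print(mlfq({"A": 3, "B": 10}))
# -> [('A', 0, 2), ('B', 0, 2), ('A', 1, 1), ('B', 1, 4), ('B', 2, 4)]
```

Note how the CPU-hungry job B sinks to queue 2 while the short remainder of A finishes in queue 1 first, which is exactly the separation by CPU-burst characteristics described above.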
Threads
Thread- Structure. User and kernel level threads, Multi-
threading models.
Threads
Difference between process and thread
🠶 Process: Processes are basically the programs that are dispatched from the ready state and are scheduled in the CPU for execution. A PCB (Process Control Block) holds the information about a process. A process takes more time to terminate, and it is isolated, meaning it does not share memory with any other process. A process can have the following states: new, ready, running, waiting, terminated, and suspended.
🠶 Thread: A thread is a segment of a process, which means a process can have multiple threads, and these multiple threads are contained within a process. A thread has three states: Running, Ready, and Blocked.
🠶 Opening a new browser (say Chrome, etc.) is an example of creating a process. At this point, a new process will start to execute. On the contrary, opening multiple tabs in the browser is an example of creating threads.
Need of Thread:
🠶 It takes far less time to create a new thread in an existing process than to create a new process.
🠶 Threads can share common data; they do not need to use Inter-Process Communication.
🠶 Context switching is faster when working with threads.
🠶 It takes less time to terminate a thread than a process.
Threads
🠶 A thread is a basic unit of CPU utilization.
🠶 It comprises a thread ID, a program counter, a register set, and a stack.
🠶 A thread shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
🠶 A traditional (or heavyweight) process has a single thread of control.
🠶 If a process has multiple threads of control, it can perform more than one task at a time.
🠶 There can be more than one thread inside a process. Each thread of the same process makes use of a separate program counter and a stack of activation records and control blocks. A thread is often referred to as a lightweight process.
🠶 A thread is a mechanism to provide multiple execution controls to the processes.
Single and Multithreaded Processes
Benefits of thread over process
🠶 Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
🠶 Resource sharing: Resources like code, data, and files can be shared among all threads within a process. The benefit of sharing code and data is that it allows an application to have several different threads of activity within the same address space. Note: the stack and registers can't be shared among the threads. Each thread has its own stack and registers.
Benefits of thread over process
🠶 Economy: Allocating memory and resources for process creation is costly. Because threads share resources of the process to which they belong, it is more economical to create and context-switch threads. In general, it is much more time consuming to create and manage processes than threads. In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
🠶 Faster context switch: Context switch time between threads is lower compared to process context switch. Process context switching requires more overhead from the CPU.
🠶 Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes have to follow a specific communication technique for communication between two processes.
🠶 Enhanced throughput of the system: If a process is divided into multiple threads, and each thread function is considered as one job, then the number of jobs completed per unit of time is increased, thus increasing the throughput of the system.
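The shared address space described above can be demonstrated with a short sketch using Python's standard threading module: all threads update one ordinary variable directly, with a lock for synchronization, instead of using an inter-process communication mechanism.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        with lock:          # shared data still needs synchronization
            counter += 1

# Four threads of the same process, all touching the same variable.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 4000: every thread updated the same memory
```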
🠶 Threads are classified as:
🠶 user threads and
🠶 kernel threads
User-level thread
🠶 Thread management is done by a user-level thread library rather than via system calls.
🠶 Therefore, thread switching does not need to call the operating system or cause an interrupt to the kernel.
🠶 The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user performs a user-level thread blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
🠶 User-level threads are small and fast compared to kernel-level threads.
🠶 These threads are represented by registers, the program counter (PC), a stack, and a small process control block. Furthermore, there is no kernel interaction in user-level thread synchronization.
🠶 This is also known as the many-to-one thread mapping, as the OS assigns every thread in a multithreaded program to a single execution context. Every multithreaded process is treated as a single execution unit by the OS.
Advantages of User-level threads
🠶 User threads can be implemented more easily than kernel threads.
🠶 User-level threads can be used on operating systems that do not support threads at the kernel level.
🠶 They are faster and more efficient.
🠶 Context switch time is shorter than for kernel-level threads.
🠶 They do not require modifications of the operating system.
🠶 They are more portable and may be run on any OS.
🠶 Simple representation: User-level thread representation is very simple. The registers, PC, stack, and mini thread control blocks are stored in the address space of the user-level process.
🠶 Simple management: It is simple to create, switch, and synchronize threads without the intervention of the kernel.
Disadvantages of User-level threads
🠶 User-level threads lack coordination between the thread and the kernel. Therefore, the process as a whole gets one time slice irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads.
🠶 User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process gets blocked.
🠶 User-level threads don't support system-wide scheduling priorities.
🠶 They are not appropriate for a multiprocessor system.
Multithreading Models
🠶 Many-to-One
🠶 One-to-One
🠶 Many-to-Many

Many-to-One
🠶 Examples:
🠶 Solaris Green Threads
🠶 GNU Portable Threads
One-to-One
🠶 Each user-level thread maps to a kernel thread.
🠶 It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call.
🠶 It also allows multiple threads to run in parallel on multiprocessors.
🠶 The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread.
🠶 Because the overhead of creating kernel threads can burden the performance of an application, most implementations of this model restrict the number of threads supported by the system.
🠶 Examples
🠶 Windows NT/XP/2000
🠶 Linux
🠶 Solaris 9 and later
Many-to-Many Model
🠶 Many user-level threads are multiplexed to a smaller or equal number of kernel threads.
Draw the Gantt chart and calculate the average waiting time
and turn-around time for these processes if time quantum is 2
units
References
🠶 https://www.javatpoint.com
🠶 https://www.javatpoint.com/multiple-processors-scheduling-in-operating-system
🠶 https://www.javatpoint.com/user-level-vs-kernel-level-threads-in-operating-system
🠶 Textbook: Abraham Silberschatz, 'Operating System Principles', Wiley India, 7th edition, 2011