RTOS

MODULE II
CONTENTS

🠶 Process Scheduling: FCFS, SJF, Priority, Round-Robin.


🠶 Multilevel Queue and Multilevel Feedback Queue
Scheduling.
🠶 Multiprocessor scheduling.
Process Scheduling
🠶 Scheduling of processes/work is done to finish the work on
time.
🠶 Process Scheduling is the activity of the process manager that handles the removal of an active process from the CPU and the selection of another process based on a specific strategy.
🠶 Process Scheduling is an integral part of Multi-programming
applications.
🠶 Process Scheduling allows the OS to allocate CPU time
for each process.
🠶 Another important reason to use a process scheduling system is that it keeps the CPU busy at all times, which reduces response time for programs.
There are three types of process schedulers:
🠶 Long term or Job
Scheduler
🠶 Short term or CPU
Scheduler
🠶 Medium-term Scheduler
Long term Scheduler
🠶 Long term scheduler is also known as job scheduler.
🠶 It chooses the processes from the pool (secondary memory)
and keeps them in the ready queue maintained in the
primary memory.
🠶 Long Term scheduler mainly controls the degree of
Multiprogramming. The purpose of long term scheduler is to
choose a perfect mix of IO bound and CPU bound processes
among the jobs present in the pool.
🠶 If the job scheduler chooses more IO bound processes then all
of the jobs may reside in the blocked state all the time and the
CPU will remain idle most of the time. This will reduce the
degree of Multiprogramming.
🠶 Therefore, the Job of long term scheduler is very critical
and may affect the system for a very long time.
Short term scheduler
🠶 Short term scheduler is also known as CPU scheduler.
🠶 It selects one of the jobs from the ready queue and dispatches it to the CPU for execution.
🠶 A scheduling algorithm is used to select which job is going to be dispatched for execution.
🠶 The job of the short term scheduler can be very critical: if it selects a job whose CPU burst time is very high, then all the jobs after it will have to wait in the ready queue for a very long time.
🠶 This problem is called starvation, and it may arise if the short term scheduler makes mistakes while selecting the job.
Medium term scheduler
🠶 Medium term scheduler takes care of the swapped out processes.
🠶 If a running process needs some IO time for its completion, its state must be changed from running to waiting.
🠶 Medium term scheduler is used for this purpose.
🠶 It removes the process from the running state to make
room for the other processes.
🠶 Such processes are the swapped out processes and this
procedure is called swapping.
🠶 The medium term scheduler is responsible for suspending and
resuming the
processes.
🠶 It reduces the degree of multiprogramming. The swapping is
necessary to have a perfect mix of processes in the ready
queue.
CPU Scheduling
🠶 In uni-programming systems like MS-DOS, when a process waits for any I/O operation to be done, the CPU remains idle. This is an overhead, since it wastes time and causes the problem of starvation. In multiprogramming systems, however, the CPU doesn't remain idle during the waiting time of a process; it starts executing other processes. The operating system has to define which process the CPU will be given.
🠶 In Multiprogramming systems, the Operating system schedules the
processes on the CPU to have the maximum utilization of it and this
procedure is called CPU scheduling. The Operating System uses various
scheduling algorithm to schedule the processes.
🠶 This is a task of the short term scheduler to schedule the CPU for the number
of processes present in the Job Pool.
🠶 Whenever the running process requests some IO operation then the short
term scheduler saves the current context of the process (also called PCB)
and changes its state from running to waiting.
🠶 During the time, process is in waiting state; the Short term scheduler picks
another process from the ready queue and assigns the CPU to this process.
🠶 This procedure is called context switching.

PREPARED BY DIVYA HARIKUMAR_ASST PROFESSOR_DEPT OF ECE_SCTCE
Scheduling Objectives
 Be Fair while allocating resources to the processes.
 Maximize throughput of the system.
 Maximize number of users receiving acceptable response
times.
 Be predictable.
 Balance resource use.
 Avoid indefinite postponement.
 Enforce Priorities.
 Give preference to processes holding key resources.
 Give better service to processes that have desirable behavior
patterns.
CPU and I/O Burst Cycle
🠶 Process execution consists of a cycle of CPU execution and I/O
wait.
🠶 Processes alternate between these two states.
🠶 Process execution begins with a CPU burst, followed by
an I/O burst, then another CPU burst ... Etc.
🠶 The last CPU burst will end with a system request to terminate
execution rather than with another I/O burst.
🠶 The durations of these CPU bursts have been measured.
🠶 An I/O-bound program typically has many short CPU bursts, while a CPU-bound program might have a few very long CPU bursts.
🠶 This can help to select an appropriate CPU-scheduling
algorithm.
Preemptive Scheduling
🠶 Preemptive scheduling is used when a process switches
from running state to ready state or from waiting state
to ready state.
🠶 The resources (mainly CPU cycles) are allocated to the process for a limited amount of time and then taken away, and the process is placed back in the ready queue.
🠶 If that process still has CPU burst time remaining, that
process stays in ready queue till it gets next chance to
execute.
Non-Preemptive Scheduling
🠶 Non-preemptive Scheduling is used when a process
terminates, or a process switches from running to waiting
state.
🠶 In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it terminates or reaches a waiting state.
🠶 In non-preemptive scheduling, a process running on the CPU is not interrupted in the middle of its execution.
🠶 Instead, the scheduler waits till the process completes its CPU burst time, and then it can allocate the CPU to another process.
Scheduling Criteria
There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:
 CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles. On a real system, CPU usage should range from 40% (lightly loaded) to 90% (heavily loaded).
 Throughput - Number of processes completed per unit time.
 Turnaround time - Time required for a particular process to
complete, from submission time to completion.
 Waiting time - How much time processes spend in the ready queue waiting their turn to get on the CPU.
 Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
🠶 Arrival Time: Time at which the process arrives in the ready queue.
🠶 Completion Time: Time at which process completes its execution.
🠶 Burst Time: Time required by a process for CPU execution.
🠶 Turn Around Time: Time Difference between completion time
and arrival time.
Turn Around Time = Completion Time – Arrival Time
🠶 Waiting Time(W.T): Time Difference between turnaround time
and burst time.
Waiting Time = Turn Around Time – Burst Time
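The two formulas above can be sketched directly as code; the completion, arrival and burst values in the example are hypothetical.

```python
# A minimal sketch of the two formulas above; the process values are hypothetical.
def turnaround_time(completion, arrival):
    """Turn Around Time = Completion Time - Arrival Time"""
    return completion - arrival

def waiting_time(turnaround, burst):
    """Waiting Time = Turn Around Time - Burst Time"""
    return turnaround - burst

# Example: a process arriving at t=2, needing 4 units of CPU, finishing at t=9.
tat = turnaround_time(9, 2)   # 7
wt = waiting_time(tat, 4)     # 3
print(tat, wt)
```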
Process Scheduling
🠶 A Process Scheduler schedules different processes to be
assigned to the CPU based on particular scheduling
algorithms.
1.First-Come, First-Served (FCFS) Scheduling
2.Shortest-Job-First (SJF) Scheduling
3.Priority Scheduling
4.Shortest Remaining Time
5.Round Robin(RR) Scheduling
6.Multiple-Level Queues Scheduling

These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed so that once a process enters the running state, it cannot be preempted until it completes its allotted time, whereas preemptive scheduling is based on priority, where a scheduler may preempt a low priority running process anytime when a high priority process enters a ready state.
First Come First Serve (FCFS)
FCFS is considered the simplest of all operating system scheduling algorithms.
🠶 It is a non-preemptive scheduling algorithm.
🠶 Jobs are executed on first come, first serve basis.
🠶 The process which arrives first in the ready queue is firstly
assigned the CPU.
🠶 Easy to understand and implement.
🠶 Its implementation is based on FIFO queue.
🠶 Poor in performance as average wait time is high
Advantages-
It is simple and easy to understand.
It can be easily implemented using a queue data structure.
It does not lead to starvation.

Disadvantages-
It does not consider the priority or burst time of the processes.
It suffers from the convoy effect, i.e. processes with a higher burst time that arrive before processes with a smaller burst time make the shorter processes wait.
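As a rough sketch (not taken from the slides), FCFS can be simulated by serving processes in arrival order; the (arrival, burst) pairs below are hypothetical, and the last example shows the convoy effect.

```python
# Sketch of FCFS scheduling, assuming hypothetical (arrival, burst) pairs.
def fcfs(processes):
    """processes: list of (arrival, burst) tuples.
    Returns per-process (completion, turnaround, waiting), in input order."""
    order = sorted(range(len(processes)), key=lambda i: processes[i][0])
    time, result = 0, [None] * len(processes)
    for i in order:
        arrival, burst = processes[i]
        time = max(time, arrival)      # CPU may sit idle until the job arrives
        time += burst                  # run to completion (non-preemptive)
        result[i] = (time, time - arrival, time - arrival - burst)
    return result

# Convoy effect: a long job arriving first delays every short job behind it.
print(fcfs([(0, 10), (1, 2), (2, 2)]))
```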
Shortest Job First (SJF)
🠶 Processes which have the shortest burst time are scheduled first.
🠶 If two processes have the same burst time, then FCFS is used to break the tie.
🠶 SJF can be either non-preemptive or preemptive.
🠶 Best approach to minimize waiting time.
🠶 Easy to implement in batch systems where the required CPU time is known in advance.
🠶 Impossible to implement in interactive systems where the required CPU time is not known.
🠶 The processor should know in advance how much time the process will take.
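Non-preemptive SJF with the FCFS tie-break described above might be sketched as follows; the workload is hypothetical.

```python
# Non-preemptive SJF sketch (hypothetical process tuples), FCFS breaks ties.
def sjf(processes):
    """processes: list of (arrival, burst). Returns completion time per process."""
    n, time = len(processes), 0
    done, remaining = [None] * n, set(range(n))
    while remaining:
        ready = [i for i in remaining if processes[i][0] <= time]
        if not ready:                              # CPU idle: jump to next arrival
            time = min(processes[i][0] for i in remaining)
            continue
        # shortest burst first; earlier arrival (FCFS) breaks ties
        i = min(ready, key=lambda j: (processes[j][1], processes[j][0]))
        time += processes[i][1]                    # runs to completion
        done[i] = time
        remaining.remove(i)
    return done

print(sjf([(0, 7), (2, 4), (4, 1), (5, 4)]))
```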

🠶 The preemptive mode of Shortest Job First is called Shortest Remaining Time First (SRTF).

Advantages-
🠶 SRTF is optimal and guarantees the minimum average waiting time.
🠶 It provides a standard for other algorithms, since no other algorithm performs better than it.

Disadvantages-
🠶 It cannot be implemented practically, since the burst time of the processes cannot be known in advance.
🠶 It leads to starvation for processes with larger burst time.
🠶 Priorities cannot be set for the processes.
🠶 Processes with larger burst time have poor response time.
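A minimal sketch of SRTF, simulating one time unit at a time; the workload is hypothetical, and a real scheduler would react to events rather than poll every tick.

```python
# SRTF (preemptive SJF) sketch, simulated one time unit at a time.
def srtf(processes):
    """processes: list of (arrival, burst). Returns completion time per process."""
    n = len(processes)
    remaining = [b for _, b in processes]
    done, time = [None] * n, 0
    while any(r > 0 for r in remaining):
        ready = [i for i in range(n)
                 if processes[i][0] <= time and remaining[i] > 0]
        if not ready:                  # nothing has arrived yet: CPU idles
            time += 1
            continue
        i = min(ready, key=lambda j: remaining[j])  # least remaining time wins
        remaining[i] -= 1
        time += 1
        if remaining[i] == 0:
            done[i] = time
    return done

print(srtf([(0, 8), (1, 4), (2, 9), (3, 5)]))
```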
🠶 Since the CPU scheduling policy is SJF non-preemptive
Priority Based Scheduling
🠶 In Priority scheduling, there is a priority number assigned to each process.
🠶 In some systems, the lower the number, the higher the priority.
🠶 While, in the others, the higher the number, the higher will be the priority.
🠶 The Process with the higher priority among the available processes is given the CPU.
🠶 In case of a tie, it is broken by FCFS Scheduling
🠶 There are two types of priority scheduling algorithm.
🠶 Preemptive priority scheduling
🠶 Non Preemptive Priority scheduling.
🠶 The priority number assigned to each process may or may not vary.
🠶 If the priority number doesn't change throughout the life of the process, it is called static priority, while if it keeps changing at regular intervals, it is called dynamic priority.
Non Preemptive Priority Scheduling
🠶 The Processes are scheduled according to the priority number assigned to them.
🠶 Once the process gets scheduled, it will run till the completion.
🠶 Generally, the lower the priority number, the higher is the priority of the process.
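Assuming the lower-number-is-higher-priority convention stated above, non-preemptive priority scheduling might look like this sketch; the (arrival, burst, priority) tuples are hypothetical.

```python
# Non-preemptive priority sketch; lower number = higher priority (assumption
# matching the slide). Processes are hypothetical (arrival, burst, priority).
def priority_np(processes):
    n, time = len(processes), 0
    done, remaining = [None] * n, set(range(n))
    while remaining:
        ready = [i for i in remaining if processes[i][0] <= time]
        if not ready:                  # CPU idle: jump to the next arrival
            time = min(processes[i][0] for i in remaining)
            continue
        # highest priority first; FCFS (earlier arrival) breaks ties
        i = min(ready, key=lambda j: (processes[j][2], processes[j][0]))
        time += processes[i][1]        # once scheduled, runs to completion
        done[i] = time
        remaining.remove(i)
    return done

print(priority_np([(0, 4, 2), (1, 3, 1), (2, 1, 3)]))
```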

🠶 A Gantt chart is prepared according to the Non Preemptive priority scheduling.
GANTT CHART

From the Gantt chart prepared, we can determine the completion time of every process. The turnaround time, waiting time and response time can then be determined.
Preemptive Priority Scheduling
🠶 At the time of arrival of a process in the ready queue, its Priority is compared with
the priority of the other processes present in the ready queue as well as with the one
which is being executed by the CPU at that point of time.
🠶 The One with the highest priority among all the available processes will be given the
CPU next.
🠶 The difference between preemptive priority scheduling and non preemptive priority
scheduling is that, in the preemptive priority scheduling, the job which is being
executed can be stopped at the arrival of a higher priority job.
GANTT chart Preparation
🠶 At time 0, P1 arrives with a burst time of 1 unit and priority 2. Since no other process is available, it will be scheduled till the next job arrives or till its completion (whichever comes first).
🠶 At time 1, P2 arrives. P1 has completed its execution and no other process is available at this time, hence the operating system has to schedule P2 regardless of the priority assigned to it.
🠶 The next process P3 arrives at time unit 2; the priority of P3 is higher than that of P2. Hence the execution of P2 will be stopped and P3 will be scheduled on the CPU.
🠶 During the execution of P3, three more processes P4, P5 and P6 become available.
🠶 Since all three have a priority lower than the process in execution, they can't preempt it.
🠶 P3 will complete its execution, and then P5 will be scheduled, having the highest priority among the available processes.
GANTT chart Preparation continued…
🠶 During the execution of P5, all the remaining processes become available in the ready queue.
🠶 At this point, the algorithm starts behaving like Non Preemptive Priority Scheduling: once all the processes are available in the ready queue, the OS simply takes the process with the highest priority and executes it till completion.
🠶 In this case, P4 will be scheduled and executed till completion.
🠶 Once P4 is completed, the process with the highest priority available in the ready queue is P2. Hence P2 will be scheduled next.
🠶 P2 is given the CPU till completion; its remaining burst time is 6 units. P7 will be scheduled after this.
🠶 The only remaining process is P6, with the least priority; the operating system has no choice but to execute it. It will be executed last.
🠶 Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions:
 The arrival time of all the processes is the same
 All the processes become available
🠶 The waiting time for the process having the highest priority will always be zero in preemptive mode.
🠶 The waiting time for the process having the highest priority may not be zero in non-preemptive mode.
Round Robin Scheduling

🠶 One of the most popular scheduling algorithms, and one which can actually be implemented in most operating systems.
🠶 The Algorithm focuses on Time Sharing.
🠶 This is the preemptive version of first come first serve scheduling.
🠶 In this algorithm, every process gets executed in a cyclic way.
🠶 CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
🠶 A small unit of time, called a time quantum, or time slice, is assigned to each
process. Usually a quantum is 10 to 100 ms.
🠶 If the execution of the process is completed during that time quantum then the
process will terminate else the running process is preempted and sent to the
ready queue.
🠶 Then, the processor is assigned to the next arrived process.

🠶 This scheduler allocates the CPU to each process in the ready queue for a
time interval of up to 1 time quantum in FIFO (circular) fashion.
🠶 If the process is still running at the end of the quantum, it will be preempted from the CPU. A context switch will be executed, and the process will be put at the tail of the ready queue.
🠶 Then the scheduler will select the next process in the ready queue.
🠶 The ready queue is treated as a circular queue.
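The circular-queue mechanism above can be sketched as follows, assuming all processes arrive at t = 0 (a simplification; real workloads have staggered arrivals).

```python
# Round Robin sketch with a circular ready queue (hypothetical workload).
from collections import deque

def round_robin(bursts, quantum):
    """bursts: burst times of processes, all arriving at t=0.
    Returns completion time per process."""
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    done, time = [None] * len(bursts), 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # at most one time quantum
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # preempted: back to the tail
        else:
            done[i] = time                 # finished within this slice
    return done

print(round_robin([5, 3, 1], quantum=2))
```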
Advantages-
🠶 It is actually implementable in a system because it does not depend on the burst time.
🠶 It doesn't suffer from the problem of starvation or the convoy effect.
🠶 All the jobs get a fair allocation of CPU.
🠶 It gives the best performance in terms of average response time.
🠶 It is best suited for time sharing systems, client-server architectures and interactive systems.

Disadvantages-
🠶 It leads to starvation for processes with larger burst times, as they have to repeat the cycle many times. The higher the time quantum, the higher the response time in the system.
🠶 Its performance heavily depends on the time quantum.
🠶 Priorities cannot be set for the processes.
🠶 The lower the time quantum, the higher the context switching overhead in the system.
🠶 Deciding a perfect time quantum is really a very difficult task.
🠶 With decreasing value of time quantum:
Number of context switches increases
Response time decreases
Chances of starvation decrease
Thus, a smaller value of time quantum is better in terms of response time.
🠶 If the time quantum is very small, the RR approach is called processor sharing and appears to the users as though each of n processes has its own processor running at 1/n the speed of the real processor.
🠶 If the time slice is chosen to be very small (closer to the context switching period), then context switching overheads will be very high, affecting system throughput adversely.
🠶 With increasing value of time quantum:
Number of context switches decreases
Response time increases
Chances of starvation increase
Thus, a higher value of time quantum is better in terms of number of context switches.
🠶 With increasing value of time quantum, Round Robin Scheduling tends to become FCFS Scheduling.
🠶 When time quantum tends to infinity, Round Robin Scheduling becomes FCFS
Scheduling.
🠶 The performance of Round Robin scheduling heavily depends on the value of time
quantum.
🠶 The time slice has to be carefully chosen. It should be small enough to give a good response to interactive users.
🠶 At the same time, it should be large enough to keep the context-switching overhead low.
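As a rough illustration of this tradeoff, assuming all processes arrive at t = 0 and counting one switch after every time slice except the very last (exact counts depend on the scheduler's conventions), the number of context switches falls as the quantum grows.

```python
# Rough context-switch count for RR, all arrivals at t=0 (an assumption).
import math

def context_switches(bursts, quantum):
    # each process needs ceil(burst/quantum) slices; a switch follows every
    # slice except the final one of the whole schedule
    slices = sum(math.ceil(b / quantum) for b in bursts)
    return slices - 1

bursts = [10, 10, 10]
for q in (1, 5, 10):
    print(q, context_switches(bursts, q))
```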
Note: Take the arrival time as 0 for all processes.

If the CPU scheduling policy is Round Robin with time quantum = 2 units, calculate the average waiting time and average turnaround time.
Multilevel Queue Scheduling
🠶 A multi-level queue scheduling algorithm partitions the ready queue
into several separate queues.
🠶 The processes are permanently assigned to one queue, generally based on some
property of the process, such as memory size, process priority, or process type.
🠶 There must be scheduling between the queues, which is
commonly implemented
as a fixed priority preemptive scheduling.

🠶 Each queue has its own scheduling algorithm.


🠶 Let us consider an example of a multilevel queue-scheduling
algorithm with five
queues:
1. System Processes
2. Interactive Processes
3. Interactive Editing Processes
4. Batch Processes
5. Student Processes
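The fixed-priority selection between separate queues can be sketched as below; the queue names follow the five-queue example above, while the queue contents are hypothetical.

```python
# Fixed-priority preemptive selection across separate ready queues (sketch).
from collections import deque

queues = {                           # listed from highest to lowest priority
    "system": deque(),
    "interactive": deque(["vim"]),
    "interactive_editing": deque(),
    "batch": deque(["payroll"]),
    "student": deque(["homework"]),
}

def pick_next(queues):
    """Always serve the highest-priority non-empty queue."""
    for name, q in queues.items():   # dicts preserve insertion order
        if q:
            return name, q.popleft()
    return None                      # nothing to run

print(pick_next(queues))
```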
Multilevel Queue Scheduling continued…

🠶 Each queue has absolute priority over lower-priority queues. No process in the batch
queue, for example, could run unless the queues for system processes, interactive
processes, and interactive editing processes were all empty.
🠶 If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
Multilevel Feedback Queue Scheduling
🠶 Multilevel feedback queue scheduling allows a process to move between queues.
🠶 The idea is to separate processes with different CPU-burst characteristics.
🠶 If a process uses too much CPU time, it will be moved to a lower-priority queue.
🠶 This scheme leaves I/O-bound and interactive processes in the higher-priority queues.
🠶 Similarly, a process that waits too long in a lower priority queue may be
moved to a higher-priority queue. This form of aging prevents starvation.
🠶 For example, consider a multilevel feedback queue scheduler with three queues, numbered from 0 to 2.
The scheduler first executes all processes in queue 0.
Only when queue 0 is empty will it execute processes in queue 1. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty.
A process that arrives for queue 1 will preempt a process in queue 2.
A process that arrives for queue 0 will, in turn, preempt a process in queue 1.
🠶 A multilevel feedback queue scheduler is defined by the following parameters:
o The number of queues
o The scheduling algorithm for each queue
o The method used to determine when to upgrade a process to a higher-priority queue
o The method used to determine when to demote a process to a lower-priority queue
o The method used to determine which queue a process will enter when that process needs service
Threads
Thread- Structure. User and kernel level threads, Multi-
threading models.
Threads
Difference between process and thread
🠶 Process: Processes are basically the programs that are dispatched from the ready state and are scheduled on the CPU for execution. The PCB (Process Control Block) holds the context of a process. A process takes more time to terminate, and it is isolated, meaning it does not share memory with any other process. A process can have the following states: new, ready, running, waiting, terminated, and suspended.

🠶 Thread: Thread is the segment of a process which means a process can have multiple threads
and these multiple threads are contained within a process. A thread has three states: Running,
Ready, and Blocked.
🠶 Opening a new browser (say Chrome, etc) is an example of creating a process. At this point,
a new process will start to execute. On the contrary, opening multiple tabs in the browser is
an example of creating the thread.



 Many software packages that run on modern desktop PCs are multithreaded.
 An application is typically implemented as a separate process with several threads of control.
Example:
 A web browser might have one thread display images or text while another thread retrieves
data from the network.
 A word processor may have a thread for displaying graphics, another thread
for responding to keystrokes from the user, and a third thread for performing spelling and
grammar checking in the background.
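The browser and word-processor examples above can be imitated with Python's threading module; the task names are hypothetical, and note that CPython threads interleave I/O-style work rather than running CPU-bound code in parallel.

```python
# Sketch of the multithreaded pattern above using Python's threading module.
import threading

results = []
lock = threading.Lock()              # threads share the list, so guard it

def task(name):
    # stand-in for work such as rendering, fetching data, or spellchecking
    with lock:
        results.append(name)

threads = [threading.Thread(target=task, args=(n,))
           for n in ("display", "network", "spellcheck")]
for t in threads:
    t.start()                        # all three run within one process
for t in threads:
    t.join()                         # wait for every thread to finish

print(sorted(results))
```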

Need of Thread:
🠶 It takes far less time to create a new thread in an existing process than to create a new process.
🠶 Since threads can share common data, they do not need to use inter-process communication.
🠶 Context switching is faster when working with threads.
🠶 It takes less time to terminate a thread than a process.
Threads
🠶 A thread is a basic unit of CPU utilization.
🠶 It comprises a thread ID, a program counter, a register set, and a stack.
🠶 Thread shares with other threads belonging to the same process its code
section, data section, and other operating-system resources, such as open files and
signals.
🠶 A traditional (or heavyweight) process has a single thread of control.
🠶 If a process has multiple threads of control, it can perform more than one task at a
time.
🠶 Threads execute within a process, and there can be more than one thread inside a process. Each thread of the same process makes use of a separate program counter, a stack of activation records, and control blocks. A thread is often referred to as a lightweight process.
🠶 Thread is a mechanism to provide multiple execution control to the processes.
Single and Multithreaded Processes
Benefits of thread over process
🠶 Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.

🠶 Effective utilization of multiprocessor systems: If we have multiple threads in a single process, then we can schedule them on multiple processors, which makes process execution faster. The benefits of multithreading are greatly increased in a multiprocessor architecture, where threads may run in parallel on different processors. A single-threaded process can only run on one CPU, no matter how many are available. Multithreading on a multi-CPU machine increases concurrency.

🠶 Resource sharing: Resources like code, data, and files can be shared among all threads
within a process. The benefit of sharing code and data is that it allows an application to
have several different threads of activity within the same address space. Note: stack and
registers can’t be shared among the threads. Each thread has its own stack and
registers.
Benefits of thread over process continued…
🠶 Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. In general, it is much more time-consuming to create and manage processes than threads. In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
🠶 Faster context switch: Context switch time between threads is lower compared to
process context switch. Process context switching requires more overhead from
the CPU.
🠶 Communication: Communication between multiple threads is easier, as the threads share a common address space, while between processes we have to follow a specific inter-process communication technique.
🠶 Enhanced throughput of the system: If a process is divided into multiple threads,
and each thread function is considered as one job, then the number of jobs
completed per unit of time is increased, thus increasing the throughput of the
system.
🠶 Threads are classified as:
🠶 user threads and
🠶 kernel threads

User-level thread
🠶 Thread management done by user level thread library rather than via systems call.
🠶 Therefore, thread switching does not need to call the operating system or cause an interrupt to the kernel.
🠶 The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user performs a user-level thread blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes.
🠶 User-level threads are smaller and faster compared to kernel-level threads.
🠶 These threads are represented by registers, the program counter (PC), stack, and
some small process control. Furthermore, there is no kernel interaction in user-level
thread synchronization.
🠶 This is also known as many-to-one thread mapping: the OS maps all the threads of a multithreaded program onto a single execution context and treats every multithreaded process as a single execution unit.
Advantages of User-level threads
🠶 User threads can be more easily implemented than kernel threads.
🠶 User-level threads can be used on operating systems that do not support threads at the kernel level.
🠶 It is faster and more efficient.
🠶 Context switch time is shorter than the kernel-level threads.
🠶 It does not require modifications of the operating system.
🠶 These are more portable and these threads may be run on any OS.
🠶 Simple Representation: User-level threads representation is very
simple. The register, PC, stack, and mini thread control blocks
are stored in the address space of the user-level process.
🠶 Simple management: It is simple to create, switch, and synchronize
threads without the intervention of the kernel.
Disadvantages of User-level threads
🠶 User-level threads lack coordination between the thread and the kernel. Therefore, the process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads.
🠶 User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process gets blocked.
🠶 User-level threads don't support system-wide scheduling priorities.
🠶 It is not appropriate for a multiprocessor system.

Three primary thread libraries:


🠶 POSIX Pthreads
🠶 Win32 threads
🠶 Java threads
Kernel level thread
🠶 Kernel threads are recognized by the operating system and are implemented by the operating system.
🠶 There is a thread control block and a process control block in the system for each thread and process in kernel-level threading.
🠶 The kernel knows about all the threads and manages them.
🠶 The kernel-level thread offers a system call to create and manage the
threads from user-space.
🠶 The implementation of kernel threads is more difficult than that of user threads.
🠶 Context switch time is longer for kernel threads.
🠶 Individual processes do not have their own thread table; instead, the kernel has one that keeps track of all the threads in the system. If a thread wishes to create a new thread or stop an existing one, it makes a kernel call that performs the work.
🠶 The kernel-level threads table contains each thread's registers, status, and other
information. The data is identical to that of user-level threads, except it is now
in kernel space rather than user space.
Advantages of Kernel-level threads

1. The kernel is fully aware of all threads.
2. The scheduler may decide to give more CPU time to a process having a large number of threads than to a process having a small number of threads.
3. If a kernel-level thread is blocked, it does not block all other threads in the same process.
4. Several threads of the same process might be scheduled on different CPUs in kernel-level threading. Kernel routines can be multithreaded as well.

Disadvantages of Kernel-level threads

1. The kernel must manage and schedule threads as well as processes. It requires a full thread control block (TCB) for each thread to maintain information about threads.
2. The implementation of kernel threads is more difficult than that of user threads; kernel-level threads take longer to create and maintain.
3. Kernel-level thread operations are slow and inefficient; they can be hundreds of times slower than user-level thread operations.
4. A mode switch to kernel mode is required to transfer control from one thread to another in a process.
Features | User Level Thread | Kernel Level Thread
Implemented by | Implemented by the users. | Implemented by the OS.
Context switch time | Less. | More.
Multithreading | Multithreaded applications cannot take advantage of multiprocessing with user-level threads. | May be multithreaded.
Implementation | Easy to implement. | Complicated to implement.
Blocking operation | If a user-level thread performs a blocking operation, the whole process is blocked. | If a kernel-level thread is blocked, it does not block all other threads in the same process.
Recognition | The OS doesn't recognize it. | It is recognized by the OS.
Thread management | The thread library includes the source code for thread creation, data transfer, thread destruction, message passing, and thread scheduling. | The application code on kernel-level threads does not include thread management code; it is simply an API to the kernel mode.
Hardware support | Doesn't need hardware support. | Requires hardware support.
Creation and management | May be created and managed much faster. | Takes much time to create and handle.
Examples | Java threads, POSIX threads. | Windows, Solaris.
Operating system | Any OS may support it. | Only specific OSs support it.
Multithreading Models
🠶 Many-to-One
🠶 One-to-One
🠶 Many-to-Many
Many-to-One
🠶 Many user-level threads are mapped to a single kernel thread.
🠶 Thread management is done by the thread library in user space, so it is efficient; but the entire process will block if a thread makes a blocking system call.
🠶 Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors.
🠶 Examples:
🠶 Solaris Green Threads
🠶 GNU Portable Threads
One-to-One
🠶 Each user-level thread maps to kernel thread.
🠶 It provides more concurrency than the many-to-one model by
allowing another thread to run when a thread makes a
blocking system call.
🠶 It also allows multiple threads to run in parallel on
multiprocessors.
🠶 The only drawback to this model is that creating a user thread
requires creating the corresponding kernel thread.
🠶 Because the overhead of creating kernel threads can burden the
performance of an application, most implementations of this model
restrict the number of threads supported by the system.

🠶 Examples
🠶 Windows NT/XP/2000
🠶 Linux
🠶 Solaris 9 and later
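🠶 In CPython on Linux, for instance, the standard `threading` module follows the one-to-one model: each `Thread` object is backed by one kernel thread. A minimal sketch (the module and calls are standard; the worker logic is made up for illustration):

```python
import threading

results = []  # shared data section, visible to all threads in the process

def worker(n):
    # Each Thread below is backed by its own kernel thread (one-to-one),
    # so a blocking system call here would not stall the other workers.
    results.append(n * n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # [0, 1, 4, 9]
```

Because each thread maps to a kernel thread, the kernel can schedule the workers on different CPUs, matching the model described above.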
Many-to-Many Model
🠶 Allows many user-level threads to be mapped to a smaller or equal number of kernel threads.
🠶 Developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.
🠶 Also, when a thread performs a blocking system call, the kernel can schedule another thread for execution.
🠶 Examples:
🠶 Solaris prior to version 9
🠶 IRIX, HP-UX, and Tru64 UNIX
Thread Control Block in Operating System
🠶 Thread Control Blocks (TCBs) represent the threads generated in the system.
🠶 A TCB contains information about the thread, such as its ID and state.
TCB
🠶 The components are defined below:
🠶 Thread ID: a unique identifier assigned by the operating system to the thread when it is created.
🠶 Thread state: the state of the thread, which changes as the thread progresses through the system.
🠶 CPU information: everything the OS needs to know about the thread's execution, such as how far it has progressed and what data is being used.
🠶 Thread priority: the weight (or priority) of the thread relative to other threads, which helps the thread scheduler determine which thread should be selected next from the READY queue.
🠶 A pointer to the process that triggered the creation of this thread.
🠶 A pointer to the thread(s) created by this thread.
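🠶 The fields above can be sketched as a simple record; every field name below is illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TCB:
    """Illustrative thread control block; all field names are hypothetical."""
    thread_id: int                         # unique ID assigned at creation
    state: str = "READY"                   # thread state: READY, RUNNING, BLOCKED, ...
    program_counter: int = 0               # CPU information: how far the thread has progressed
    registers: dict = field(default_factory=dict)   # saved register contents
    priority: int = 0                      # weight used to pick the next thread from READY
    parent_process: Optional[int] = None   # pointer to the process that created this thread
    children: list = field(default_factory=list)    # threads created by this thread

tcb = TCB(thread_id=1, priority=5)
print(tcb.state)  # READY
```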
Multiple Processor Scheduling in Operating System
🠶 Multiple processor scheduling, or multiprocessor scheduling, focuses on designing a scheduling function for a system with more than one processor.
🠶 Multiple CPUs share the load in multiprocessor scheduling so that various processes run simultaneously.
🠶 Multiprocessor scheduling is complex compared to single-processor scheduling.
🠶 When the processors are identical, any process can be run on any processor at any time.
🠶 The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so the system is tightly coupled.
🠶 These systems are used to process bulk amounts of data and are mainly used in applications such as satellite systems and weather forecasting.
🠶 There are cases when the processors are identical, i.e., homogeneous, in terms of their functionality; then any available processor can run any process in the queue.
🠶 Multiprocessor systems may be heterogeneous (different kinds of CPUs) or homogeneous (the same CPU). There may be special scheduling constraints, such as devices connected via a private bus to only one CPU.
🠶 Just as there is no policy or rule that can be declared the best scheduling solution for a system with a single processor, there is no best scheduling solution for a system with multiple processors.
Approaches to Multiple Processor Scheduling
🠶 There are two approaches to multiple processor scheduling in the OS: Symmetric Multiprocessing and Asymmetric Multiprocessing.
🠶 Symmetric Multiprocessing: each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes. Scheduling proceeds by having the scheduler on each processor examine the ready queue and select a process to execute.
🠶 Asymmetric Multiprocessing: all scheduling decisions and I/O processing are handled by a single processor called the master server. The other processors execute only user code. This is simple and reduces the need for data sharing.
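🠶 On Linux, the symmetric case can be observed directly: every processor is normally eligible to run a process, and the affinity mask can be narrowed to confine work to one CPU, loosely mimicking an asymmetric split. A sketch (the `os.sched_*` calls are Linux-specific):

```python
import os

# CPUs the scheduler may currently run this process on; under
# symmetric multiprocessing this is normally every CPU.
original = os.sched_getaffinity(0)   # 0 = the calling process
print(f"eligible CPUs: {sorted(original)}")

# Pin the process to a single CPU, as if this work were confined
# to one designated processor, then restore the original mask.
one_cpu = {min(original)}
os.sched_setaffinity(0, one_cpu)
pinned = os.sched_getaffinity(0)
os.sched_setaffinity(0, original)
```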
Questions on Module 2
1. Schedule the following processes with the FCFS and Round Robin algorithms with a time quantum of 2 ms, assuming all processes arrive at time zero. Also state the performance of the system.
2. Compare user level threads and Kernel level threads.


3. Discuss the different types of multiprocessor scheduling operations.
4. Compare the FCFS and Round-Robin scheduling algorithms.
5. Explain the possible scheduling of user-level threads with a 50 ms process quantum and threads that run 5 ms per CPU burst.
6. Explain the Shortest Remaining Time First algorithm with a suitable example.
7. Explain thread scheduling algorithms used in operating systems in detail.
8. Schedule the given 5 processes with Round Robin scheduling. Draw the Gantt chart and calculate the average waiting time and turn-around time for these processes if the time quantum is 2 units.
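🠶 A Round Robin answer (question 8) can be cross-checked with a short simulation: run the processes in quantum-sized slices from a ready queue and average the waiting and turnaround times. The burst times below are made up for illustration; all processes arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Average (waiting, turnaround) time for processes arriving at t = 0."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    ready = deque(range(n))          # ready queue of process indices
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run                     # process i runs for one quantum (or less)
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # preempted: back of the ready queue
        else:
            finish[i] = t            # completion time
    # arrival time is 0, so turnaround = finish and waiting = finish - burst
    waiting = [finish[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n, sum(finish) / n

avg_wt, avg_tat = round_robin([5, 3, 1, 2, 3], quantum=2)
print(avg_wt, avg_tat)  # 7.4 10.2
```

The same function works for any burst list and quantum, so it can verify a hand-drawn Gantt chart.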
References
🠶 https://www.javatpoint.com
🠶 https://www.javatpoint.com/multiple-processors-scheduling-in-operating-system
🠶 https://www.javatpoint.com/user-level-vs-kernel-level-threads-in-operating-system
🠶 Textbook: Abraham Silberschatz, 'Operating System Principles', Wiley India, 7th edition, 2011
Threads
Thread structure, user and kernel level threads, multithreading models.
Difference between process and thread
🠶 Process: Processes are the programs that are dispatched from the ready state and scheduled on the CPU for execution. A PCB (Process Control Block) represents each process. A process takes more time to terminate, and it is isolated: it does not share memory with any other process. A process can be in the following states: new, ready, running, waiting, terminated, and suspended.
🠶 Thread: A thread is a segment of a process, meaning a process can have multiple threads, and these threads are contained within the process. A thread has three states: Running, Ready, and Blocked.
🠶 Opening a new browser (say, Chrome) is an example of creating a process; at this point a new process starts to execute. By contrast, opening multiple tabs in the browser is an example of creating threads.
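🠶 The tab-versus-browser distinction can be checked in code: a thread runs inside the same process (same PID) but has its own identity. A small sketch using Python's standard `threading` module:

```python
import os
import threading

seen = {}

def record():
    # Runs in a separate thread but in the SAME process as the caller.
    seen["pid"] = os.getpid()
    seen["tid"] = threading.get_ident()

t = threading.Thread(target=record)
t.start()
t.join()

same_process = (seen["pid"] == os.getpid())               # True: one address space
distinct_thread = (seen["tid"] != threading.get_ident())  # True: two threads
print(same_process, distinct_thread)  # True True
```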

🠶 Many software packages that run on modern desktop PCs are multithreaded.
🠶 An application typically is implemented as a separate process with several threads of control.
Example:
🠶 A web browser might have one thread display images or text while another thread retrieves data from the network.
🠶 A word processor may have a thread for displaying graphics, another thread for responding to keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.
Need of Threads:
🠶 It takes far less time to create a new thread in an existing process than to create a new process.
🠶 Threads can share common data, so they do not need Inter-Process Communication.
🠶 Context switching is faster when working with threads.
🠶 It takes less time to terminate a thread than a process.
Threads
🠶 A thread is a basic unit of CPU utilization.
🠶 It comprises a thread ID, a program counter, a register set, and a stack.
🠶 A thread shares with other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals.
🠶 A traditional (or heavyweight) process has a single thread of control.
🠶 If a process has multiple threads of control, it can perform more than one task at a time.
🠶 There can be more than one thread inside a process. Each thread of the same process uses a separate program counter, a stack of activation records, and control blocks. A thread is often referred to as a lightweight process.
🠶 Threads are a mechanism to provide multiple flows of execution within a process.
Single and Multithreaded Processes
Benefits of threads over processes
🠶 Responsiveness: If a process is divided into multiple threads and one thread completes its execution, its output can be returned immediately. Multithreading an interactive application may allow a program to continue running even if part of it is blocked or performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser can still allow user interaction in one thread while an image is being loaded in another.
🠶 Effective utilization of multiprocessor systems: With multiple threads in a single process, we can schedule multiple threads on multiple processors, making process execution faster. The benefits of multithreading are greatly increased in a multiprocessor architecture, where threads may run in parallel on different processors. A single-threaded process can only run on one CPU, no matter how many are available; multithreading on a multi-CPU machine increases concurrency.
🠶 Resource sharing: Resources like code, data, and files can be shared among all threads within a process, which allows an application to have several different threads of activity within the same address space. Note: the stack and registers cannot be shared among threads; each thread has its own stack and registers.
🠶 Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. In general it is much more time-consuming to create and manage processes than threads. In Solaris, for example, creating a process is about thirty times slower than creating a thread, and context switching is about five times slower.
🠶 Faster context switch: Context switch time between threads is lower than for a process context switch, which requires more overhead from the CPU.
🠶 Communication: Communication between multiple threads is easier, as the threads share a common address space, while processes must use a specific inter-process communication technique to communicate.
🠶 Enhanced throughput of the system: If a process is divided into multiple threads and each thread's function is considered one job, then the number of jobs completed per unit time increases, thus increasing the throughput of the system.
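🠶 The resource-sharing point can be demonstrated directly: all threads append into the same list in the process's data section, while each keeps its own private stack. A small sketch (the worker function and counts are made up for illustration):

```python
import threading

shared = []                # lives in the process's data section
lock = threading.Lock()    # shared data needs synchronization

def producer(name, count):
    for i in range(count):
        with lock:
            shared.append((name, i))   # every thread writes to the SAME list

workers = [threading.Thread(target=producer, args=(f"t{k}", 10)) for k in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()

print(len(shared))  # 30: all three threads shared one list
```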
🠶 Threads are classified as:
🠶 user threads and
🠶 kernel threads
User-level threads
🠶 Thread management is done by a user-level thread library rather than via system calls.
🠶 Therefore, thread switching does not need to call the operating system or cause an interrupt to the kernel.
🠶 The operating system does not recognize user-level threads. User threads can be easily implemented, and they are implemented by the user. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages each multithreaded process as if it were a single-threaded process.
🠶 User-level threads are small and fast compared to kernel-level threads.
🠶 These threads are represented by registers, the program counter (PC), a stack, and a small thread control block. Furthermore, there is no kernel interaction in user-level thread synchronization.
🠶 This is also known as many-to-one mapping, as the OS assigns every thread in a multithreaded program to a single execution context; each multithreaded process is treated as a single execution unit by the OS.
Advantages of User-level threads
🠶 User threads are easier to implement than kernel threads.
🠶 User-level threads can be used on operating systems that do not support threads at the kernel level.
🠶 They are fast and efficient.
🠶 Context switch time is shorter than for kernel-level threads.
🠶 They do not require modifications to the operating system.
🠶 They are more portable and may run on any OS.
🠶 Simple representation: the register set, PC, stack, and small thread control blocks are stored in the address space of the user-level process.
🠶 Simple management: threads can be created, switched, and synchronized without kernel intervention.
Disadvantages of User-level threads
🠶 User-level threads lack coordination between the thread and the kernel. Therefore, the process as a whole gets one time slice, irrespective of whether the process has one thread or 1000 threads within it. It is up to each thread to relinquish control to other threads.
🠶 User-level threads require non-blocking system calls, i.e., a multithreaded kernel. Otherwise, the entire process will be blocked in the kernel, even if there are runnable threads left in the process. For example, if one thread causes a page fault, the whole process is blocked.
🠶 User-level threads don't support system-wide scheduling priorities.
🠶 They are not appropriate for a multiprocessor system.
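🠶 The "relinquish control" behaviour can be sketched with generators standing in for user-level threads: a tiny round-robin scheduler runs entirely in user space with no kernel involvement, and a thread that never yielded would starve all the others, which is exactly the drawback above. (Python's stdlib has no real user-level thread library; this is only an illustration.)

```python
from collections import deque

def user_thread(name, steps):
    # Each yield is a voluntary "relinquish control" point; code that
    # never yields (e.g. a blocking call) would stall every other
    # user-level thread in this process.
    for i in range(steps):
        yield f"{name}:{i}"

def scheduler(threads):
    """Round-robin over generator-based user-level threads, in user space."""
    trace = []
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))   # run until the next yield
            ready.append(t)         # back of the ready queue
        except StopIteration:
            pass                    # thread finished
    return trace

trace = scheduler([user_thread("A", 2), user_thread("B", 2)])
print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1']
```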

Three primary thread libraries:
🠶 POSIX Pthreads
🠶 Win32 threads
🠶 Java threads
Kernel-level threads
🠶 Kernel threads are recognized and implemented by the operating system.
🠶 There is a thread control block and a process control block in the system for each thread and each process.
🠶 The kernel knows about all the threads and manages them.
🠶 Kernel-level threading offers system calls to create and manage threads from user space.
🠶 The implementation of kernel threads is more difficult than that of user threads.
🠶 Context switch time is longer for kernel threads.
🠶 Individual processes do not have a thread table; instead, the kernel has one that keeps track of all the threads in the system. If a thread wishes to create a new thread or stop an existing one, it makes a kernel call that performs the work.
🠶 The kernel's thread table contains each thread's registers, status, and other information. The data is identical to that of user-level threads, except it is now in kernel space rather than user space.
