OS Unit 2
Prepared by:
Dr. Khushboo Jain
Scheduling Queues
Several queues are implemented in an operating system, such as the Job Queue, the Ready Queue, and Device Queues.
• Job Queue: It consists of all processes in the system. As processes enter the system, they are
put into a job queue.
• Ready Queue: The processes that are residing in main memory, ready and waiting to execute, are kept on a list called the Ready Queue. The ready queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list. Each PCB includes a pointer field that points to the next PCB in the ready queue (see the sketch after this list).
• Device Queue: Each device has its own device queue. It contains the list of processes waiting
for a particular I/O device.
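A minimal sketch of how the ready queue could be kept as a linked list of PCBs, as described above; the struct fields and names are illustrative, not any real kernel's definitions:

/* Illustrative PCB: a real PCB holds far more state (registers, memory info, ...). */
struct pcb {
    int pid;                  /* process identifier */
    struct pcb *next;         /* pointer to the next PCB in the ready queue */
};

/* Ready-queue header: pointers to the first and final PCBs in the list. */
struct ready_queue {
    struct pcb *head;
    struct pcb *tail;
};

/* Link a PCB onto the tail of the ready queue. */
void rq_enqueue(struct ready_queue *q, struct pcb *p) {
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

/* Dispatch: unlink and return the PCB at the head of the queue. */
struct pcb *rq_dequeue(struct ready_queue *q) {
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head) q->tail = NULL;
    }
    return p;
}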
Scheduling Queues (Continuing…)
Consider the given Queuing Diagram:
• Two types of queues are present: the Ready Queue and a set of Device Queues.
• The CPU and I/O are the resources that serve the queues.
• A new process is initially put in the ready queue, where it waits until it is selected for execution (dispatched).
• Once the process is allocated the CPU and is executing, one of several events could occur:
1. The process could issue an I/O request and then be placed in an I/O queue.
2. The process could create a new child process and wait for the child's termination.
3. The process could be removed forcibly from the CPU, because of an interrupt, and be put back in the ready queue.
Fig: Queuing diagram representation of process scheduling
Schedulers
A process migrates among the various scheduling queues throughout its lifetime. For scheduling purposes, the operating system must select processes from these queues. This selection is carried out by the Scheduler.
There are three types of Schedulers:
1. Long Term Scheduler
2. Short Term Scheduler
3. Medium Term Scheduler
Long Term Scheduler (New to ready state)
• Initially, processes are spooled to a mass-storage device (i.e., hard disk), where they are kept for later execution.
• Long-term scheduler or job scheduler selects processes from this pool and loads them into main
memory for execution. (i.e. from Hard disk to Main memory).
• The long-term scheduler executes much less frequently; minutes may elapse between the creation of one new process and the next.
• The long-term scheduler controls the degree of multiprogramming (the number of processes in
memory).
Schedulers (Cont…)
The long-term scheduler selects a good process mix of I/O-bound and CPU-bound
processes.
• If all processes are I/O bound, the ready queue will almost always be empty and the CPU will remain idle for long periods, because I/O device processing takes a lot of time.
• If all processes are CPU bound, the I/O waiting queue will almost always be empty: I/O devices will be idle while the CPU is busy most of the time.
• Thus, if the system maintains a good mix of CPU-bound and I/O-bound processes, system performance improves.
Note: Time-sharing systems such as UNIX and Microsoft Windows systems often have no long-term scheduler but
simply put every new process in memory for the short-term scheduler.
Context Switching
• Switching the CPU from one process to another
process requires performing a state save of the
current process and a state restore of a different
process. This task is known as a Context Switch.
Process Creation
• During its lifetime, a process may create several new processes.
• The creating process is called the parent process, and the new processes are called child processes.
• Each of these new processes may create other processes, forming a tree of processes.
• The operating system identifies processes according to a process identifier (pid).
• The pid is a unique integer number for each process in the system.
• The pid can be used as an index to access various attributes of a process within the kernel.
Operations on Processes (Continuing…)
The figure below shows the process tree for the Linux OS, giving the name of each process and its process ID (pid). In Linux, a process is called a task.
1. fork(): The fork() system call is used to create a new process (child process). The child process is a
duplicate of the parent process, sharing the same code, but with separate memory and execution.
After the fork, both the parent and child processes run the next instruction.
2. exec(): The exec() system call is used to replace the current process's memory space with a new
program. It loads and runs a new executable file within the same process, effectively transforming the
process into the new program. It is commonly used after fork().
3. wait(): The wait() system call allows a parent process to pause its execution until one or more of its
child processes finish. It ensures that the parent doesn't proceed until the child terminates, returning
control to the parent when the child process ends.
4. exit(): The exit() system call is used by a process to terminate its execution. It ends the process and
returns a status code to the operating system, signaling whether the process ended successfully or
encountered an error.
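A minimal sketch of these four calls working together on a POSIX system; /bin/ls is just an illustrative program for exec() to load:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a duplicate child process */
    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* Child: replace this process image with a new program. */
        execlp("/bin/ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else {
        /* Parent: pause until the child terminates. */
        int status;
        wait(&status);
        printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    }
    exit(0);                            /* terminate, returning a status code */
}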
Continuing…
Zombie Process: A zombie process is a process that has terminated, but its
parent has not yet called wait(). Although its resources are deallocated, its entry
in the process table remains to hold the exit status. Once the parent calls wait(),
the process identifier and table entry are released.
Orphan Process: An orphan process occurs when the parent terminates without
calling wait(), leaving the child process without a parent. In Linux and UNIX,
orphan processes are adopted by the init process, which periodically calls wait()
to clean up the orphan’s exit status and process identifier.
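A minimal sketch that deliberately creates a short-lived zombie on a POSIX system; the sleep length is arbitrary:

#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(0);        /* child terminates immediately */
    /* The parent has not called wait() yet, so the child is a zombie:
       its process-table entry remains, holding the exit status
       (visible as state 'Z' in ps output). */
    sleep(10);
    wait(NULL);         /* reap the child; its pid and table entry are released */
    return 0;
}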
Process Scheduling
Process scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive.
• In a single-processor system, only one process can run at a
time. Others must wait until the CPU is free and can be
rescheduled.
• Otherwise, the CPU sits idle while a process waits for an I/O operation to complete; the CPU can resume executing the process only when the I/O operation finishes, so a lot of CPU time is wasted.
• The objective of multiprogramming is to have some process
always running to maximize CPU utilization.
• When several processes are in main memory and one process is waiting for I/O, the operating system takes the CPU away from that process and gives it to another process. Hence there is no wastage of CPU time.
Process Scheduling
Concepts of Process Scheduling
1. CPU–I/O Burst Cycle
2. CPU Scheduler
3. Preemptive Scheduling
4. Dispatcher
Preemptive Scheduling
CPU-scheduling decisions may take place under the following four cases:
1. When a process switches from the running state to the waiting state.
Example: as the result of an I/O request or an invocation of wait( ) for the
termination of a child process.
2. When a process switches from the running state to the ready state.
Example: when an interrupt occurs
3. When a process switches from the waiting state to the ready state.
Example: at completion of I/O.
4. When a process terminates.
When scheduling takes place only under situations 1 and 4, the scheme is non-preemptive (cooperative); scheduling under situations 2 and 3 is preemptive. Mac OS X, Windows 95, and all subsequent versions of Windows use preemptive scheduling.
Process Scheduling
Dispatcher
The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. The dispatcher's function involves:
1. Switching context
2. Switching to user mode
3. Jumping to the proper location in the user program to restart
that program.
The dispatcher should be as fast as possible, since it is invoked
during every process switch. The time it takes for the dispatcher
to stop one process and start another process running is known
as the Dispatch Latency.
Process Scheduling
Different CPU-scheduling algorithms have different properties, and the
choice of a particular algorithm may favor one class of processes over
another.
Many criteria have been suggested for comparing CPU-scheduling
algorithms:
• CPU utilization: CPU must be kept as busy as possible. CPU utilization can range
from 0 to 100 percent. In a real system, it should range from 40 to 90 percent.
• Throughput: The number of processes that are completed per time unit.
• Turn-Around Time: It is the interval from the time of submission of a process to
the time of completion. Turnaround time is the sum of the periods spent waiting to
get into memory, waiting in the ready queue, executing on the CPU and doing I/O.
• Waiting time: It is the amount of time that a process spends waiting in the ready
queue.
• Response time: It is the time from the submission of a request until the first response is produced. Interactive systems use response time as their measure.
Note: It is desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time.
Process Scheduling Algorithms
Process/CPU scheduling deals with the problem of deciding which
of the processes in the ready queue is to be allocated the CPU.
Different CPU-scheduling algorithms are:
1. First-Come, First-Served Scheduling (FCFS)
2. Shortest-Job-First Scheduling (SJF)
3. Priority Scheduling
4. Round Robin Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling
First-Come, First-Served Scheduling (FCFS)
In FCFS, the process that requests the CPU first is allocated the CPU first.
• FCFS scheduling algorithm is Non-preemptive.
• Once the CPU has been allocated to a process, that process keeps the CPU until it releases it.
• FCFS can be implemented by using FIFO queues.
• When a process enters the ready queue, its PCB is linked onto the tail of the queue.
• When the CPU is free, it is allocated to the process at the head of the queue.
• The running process is then removed from the queue.
Example 1: Consider the following set of processes that arrive at time 0, in the order P1, P2, P3, with the length of the CPU burst given in milliseconds:

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Gantt Chart for FCFS:
| P1 (0–24) | P2 (24–27) | P3 (27–30) |
First-Come, First-Served Scheduling (FCFS)
The average waiting time under the FCFS policy is often quite long.
• The waiting time is 0 milliseconds for process P1, 24 milliseconds for process P2 and
27 milliseconds for process P3.
• Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds.
If the processes arrive in the order P2, P3, P1 instead, the average waiting time is (6 + 0 + 3)/3 = 3 milliseconds, whereas in the order P1, P2, P3 it is 17 milliseconds.
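A minimal sketch that computes the FCFS waiting times for the first ordering; the array and variable names are illustrative:

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};   /* P1, P2, P3, all arriving at time 0 */
    int n = 3, wait = 0, total_wait = 0;

    /* Under FCFS each process waits for the bursts of all earlier arrivals. */
    for (int i = 0; i < n; i++) {
        total_wait += wait;     /* 0, then 24, then 27 */
        wait += burst[i];
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}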
Disadvantage of FCFS:
The FCFS scheduling algorithm is non-preemptive; it allows one process to keep the CPU for a long time. Hence it is not suitable for time-sharing systems.
Shortest-Job-First Scheduling (SJF)
SJF algorithm is defined as “when the CPU is available, it is assigned to the process
that has the smallest next CPU burst”. If the next CPU bursts of two processes are the
same, FCFS scheduling is used between the two processes.
The length of the next CPU burst is generally predicted as an exponential average of the lengths of previous CPU bursts:
• Let t_n be the length of the nth CPU burst (i.e., the most recent information).
• Let τ_n store the past history.
• Let τ_{n+1} be the predicted value for the next CPU burst:
τ_{n+1} = α·t_n + (1 − α)·τ_n
• α controls the relative weight of recent and past history in the prediction (0 ≤ α ≤ 1).
• If α = 0, then τ_{n+1} = τ_n: recent history has no effect.
• If α = 1, then τ_{n+1} = t_n: only the most recent CPU burst matters.
• If α = 1/2, recent history and past history are equally weighted.
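A minimal sketch of this prediction; the burst lengths, initial guess τ0 = 10, and α = 1/2 are illustrative:

#include <stdio.h>

int main(void) {
    double alpha = 0.5;                           /* weight of recent vs. past history */
    double tau = 10.0;                            /* τ0: initial guess for the first burst */
    double t[] = {6, 4, 6, 4, 13, 13, 13};        /* measured CPU burst lengths */

    for (int n = 0; n < 7; n++) {
        printf("predicted %.1f, actual %.0f\n", tau, t[n]);
        tau = alpha * t[n] + (1 - alpha) * tau;   /* τ_{n+1} = α·t_n + (1 − α)·τ_n */
    }
    return 0;
}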
Shortest Remaining Time First Scheduling (SRTF)
SRTF is the pre-emptive SJF algorithm.
• A new process may arrive at the ready queue while a previous process is still executing.
• The next CPU burst of the newly arrived process may be shorter than what remains of the currently executing process.
• SRTF will preempt the currently executing process and execute the shortest job.
Consider the four processes with arrival times and burst times in milliseconds:

Process   Arrival Time   Burst Time (ms)
P1        0              8
P2        1              4
P3        2              9
P4        3              5
Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger
than the time required by process P2 (4 milliseconds), so process P1 is preempted and
process P2 is scheduled.
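A minimal sketch that simulates SRTF one millisecond at a time for these four processes; the tie-breaking rule and names are illustrative:

#include <stdio.h>
#define N 4

int main(void) {
    int arrival[N] = {0, 1, 2, 3};
    int burst[N]   = {8, 4, 9, 5};
    int remaining[N], finish[N], done = 0;
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    /* At every millisecond, run the arrived process with the shortest
       remaining time, preempting the current process if necessary. */
    for (int t = 0; done < N; t++) {
        int cur = -1;
        for (int i = 0; i < N; i++)
            if (arrival[i] <= t && remaining[i] > 0 &&
                (cur < 0 || remaining[i] < remaining[cur]))
                cur = i;
        if (cur < 0) continue;          /* CPU idle: nothing has arrived yet */
        if (--remaining[cur] == 0) { finish[cur] = t + 1; done++; }
    }

    double total = 0;
    for (int i = 0; i < N; i++) {
        int wait = finish[i] - arrival[i] - burst[i];
        printf("P%d waiting time = %d ms\n", i + 1, wait);
        total += wait;
    }
    printf("average = %.2f ms\n", total / N);
    return 0;
}

For this input the simulation produces the schedule P1 (0–1), P2 (1–5), P4 (5–10), P1 (10–17), P3 (17–26), for an average waiting time of (9 + 0 + 15 + 2)/4 = 6.5 milliseconds.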
Round Robin (RR) Scheduling
Consider the following set of processes that arrive at time 0, in the order P1, P2, P3, with Time Quantum = 4:

Process   Burst Time (ms)
P1        24
P2        3
P3        3

Gantt chart of Round Robin Scheduling:
| P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30) |
• If we use a time quantum of 4 milliseconds, then process P1 gets the first 4
milliseconds.
• Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2.
• Process P2's CPU burst is 3 ms, so it does not need a full 4 milliseconds; it quits before its time quantum expires, and the CPU is then given to the next process, P3.
• Once each process has received 1 time quantum, the CPU is returned to process P1
for an additional time quantum.
Note: A rule of thumb is that 80 percent of the CPU bursts should be shorter than the time quantum.
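A simplified sketch of this example, assuming all processes arrive at time 0 (so cycling through the array matches the FIFO ready-queue order):

#include <stdio.h>
#define N 3
#define QUANTUM 4

int main(void) {
    int burst[N] = {24, 3, 3};          /* P1, P2, P3 */
    int remaining[N], wait[N];
    int t = 0, done = 0;
    for (int i = 0; i < N; i++) remaining[i] = burst[i];

    /* Give each unfinished process at most one quantum per round. */
    while (done < N) {
        for (int i = 0; i < N; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            t += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                wait[i] = t - burst[i]; /* completion time minus burst (arrival = 0) */
                done++;
            }
        }
    }

    double total = 0;
    for (int i = 0; i < N; i++) total += wait[i];
    printf("average waiting time = %.2f ms\n", total / N);  /* 5.67 for this example */
    return 0;
}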
Multilevel Queue
• The ready queue is partitioned into separate queues, e.g.:
• foreground (interactive)
• background (batch)
• Processes are permanently assigned to a given queue.
• Each queue has its own scheduling algorithm:
• foreground – RR
• background – FCFS
• Scheduling must be done between the queues:
• Fixed priority scheduling; (i.e., serve all from foreground then from background).
Possibility of starvation.
• Time slice – each queue gets a certain amount of CPU time which it can schedule
amongst its processes; i.e., 80% to foreground in RR and 20% to background in FCFS
Multilevel Feedback Queue Scheduling
Three queues:
Q0 – RR with time quantum 8 milliseconds
Q1 – RR time quantum 16 milliseconds
Q2 – FCFS
Scheduling
A new job enters queue Q0, which is served FCFS.
When it gains the CPU, the job receives 8 milliseconds.
If it does not finish in 8 milliseconds, the job is moved to queue Q1.
At Q1 the job is again served FCFS and receives 16 additional milliseconds.
If it still does not complete, it is preempted and moved to queue Q2.
Fig 4. Example of multilevel feedback queue [1]
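A minimal sketch of one job being demoted through the three queues above; the job's total CPU demand (30 ms) is illustrative, and a real scheduler would interleave many jobs per queue:

#include <stdio.h>

int main(void) {
    int remaining = 30;                  /* illustrative total CPU demand, in ms */
    int quantum[2] = {8, 16};            /* time quanta of Q0 and Q1 */

    for (int q = 0; q < 2 && remaining > 0; q++) {
        int slice = remaining < quantum[q] ? remaining : quantum[q];
        remaining -= slice;
        printf("Q%d: ran %d ms, %d ms left\n", q, slice, remaining);
    }
    if (remaining > 0)
        printf("Q2 (FCFS): runs the remaining %d ms to completion\n", remaining);
    return 0;
}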
Thread
• Definition: Threads are the smallest unit of execution within a process.
• Benefits:
• Improved Performance: Allows concurrent execution, making better use of
CPU.
• Resource Sharing: Threads within the same process share memory and
resources.
• Responsiveness: Enhances the responsiveness of applications by performing
background tasks.
• Simplicity: Simplifies program design by breaking tasks into smaller threads.
Multicore Programming
• Multicore or multiprocessor systems put pressure on programmers; challenges include:
• Dividing activities
• Balance
• Data splitting
• Data dependency
• Testing and debugging
• Parallelism implies a system can perform more than one task simultaneously
• Single-threaded Process:
• Only one thread of execution.
• Simpler but less efficient for concurrent tasks.
• Multi-threaded Process :
• Multiple threads execute independently within a process.
• Provides better performance and responsiveness.
• Model Types:
• User-Level Threads (ULT): Managed by user-level libraries.
• Kernel-Level Threads (KLT): Managed by the operating system kernel.
• Hybrid Threads: Combination of ULT and KLT.
Single and Multithreaded Processes
• User-Level Threads (ULT) features:
• Visibility: the kernel does not recognize ULTs, only the process.
• Context switching: faster and less costly.
• Kernel-Level Threads (KLT) features:
• Visibility: the OS kernel is aware of each thread.
• Context switching: more costly compared to ULT.
• Hybrid Threads features:
• Management: both user-level libraries and kernel support.
Summary for User-Level Threads (ULT): managed by a user-level thread library; not visible to the OS; lower overhead and faster context switching; limited to single-core processors, with less support from the OS.
• An example of such a system is Solaris. Multithreading models are of three types:
• Many-to-One
• One-to-One
• Many-to-Many
Many-to-One Model
• Many user-level threads mapped to single
kernel thread
• One thread blocking causes all to block
• Multiple threads may not run in parallel on a multicore system, because only one may be in the kernel at a time
• Few systems currently use this model
• Examples:
• Solaris Green Threads
• GNU Portable Threads
• Thread library provides programmer with API for creating and managing threads
• Two primary ways of implementing
• Library entirely in user space
• Kernel-level library supported by the OS
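A minimal sketch using the POSIX Pthreads library (on Linux, a kernel-level library supported by the OS); compile with -pthread:

#include <stdio.h>
#include <pthread.h>

/* Thread body: receives a pointer argument and returns a pointer result. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int id = 1;
    pthread_create(&tid, NULL, worker, &id);  /* create and start the thread */
    pthread_join(tid, NULL);                  /* wait for it to finish */
    return 0;
}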
Pthread Scheduling
2. Scheduling Policies:
• SCHED_FIFO (First In, First Out): Threads are executed in the order they
are scheduled, without preemption.
• SCHED_RR (Round Robin): Threads are given equal time slices in a cyclic
manner.
• SCHED_OTHER: Default policy with time-sharing, preemptive scheduling.
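A minimal sketch that requests SCHED_RR through thread attributes, assuming a Linux system; real-time policies typically require elevated privileges, so the create call may fail with EPERM:

#include <stdio.h>
#include <pthread.h>
#include <sched.h>

void *worker(void *arg) {
    printf("running under the requested policy\n");
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED); /* use attr, not inherited policy */
    pthread_attr_setschedpolicy(&attr, SCHED_RR);                /* round-robin time slices */
    param.sched_priority = 1;                                    /* RR priorities start at 1 */
    pthread_attr_setschedparam(&attr, &param);

    pthread_t tid;
    int rc = pthread_create(&tid, &attr, worker, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create failed (rc=%d)\n", rc);
    else
        pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}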
Threading Issues
2. Deadlocks: A situation where two or more threads are waiting indefinitely for resources
held by each other.
• Prevention: Use strategies like resource ordering, timeout mechanisms, or deadlock
detection and recovery.
3. Starvation: Occurs when a thread is perpetually denied necessary resources due to the
prioritization of other threads.
• Prevention: Implement fair scheduling policies to ensure all threads get a fair share of
resources.
Threading Issues
4. Thread Overhead: Managing threads incurs overhead in terms of memory and CPU usage.
• Mitigation: Use thread pooling and minimize the number of threads to optimize
performance.
5. Context Switching: The process of saving and restoring thread states, which can be costly
in terms of performance.
• Optimization: Reduce context switching by optimizing thread management and using
efficient scheduling algorithms.
Case study: Process Management in Linux
1. Process Identity:
• Process ID (PID): A unique identifier assigned to each process.
• Parent Process ID (PPID): The PID of the process that created the current process.
• User ID (UID) and Group ID (GID): Identifiers for the user and group that own the process, used for
permission checks and resource access.
2. Process Environment:
• Environment Variables: Key-value pairs that define the operating environment for the process. These
variables can influence the behavior of the process and its child processes.
• Command Line Arguments: Parameters passed to the process at startup, which can affect its execution.
• Current Working Directory: The directory in which the process operates, impacting file access and
relative paths.
Case study: Process Management in Linux (cont…)
3. Process Context:
• Memory Management: Information about the process’s memory usage, including code, data, and stack segments.
• CPU State: The process’s current execution state, including the values of registers, program counter, and other
CPU-specific information.
• File Descriptors: References to open files or sockets that the process uses for input and output operations.
• Scheduling Information: Details related to process scheduling, such as priority and scheduling policies.
Process and Threads
• Linux uses the same internal representation for processes and threads; a thread is simply a
new process that happens to share the same address space as its parent
• A distinction is only made when a new thread is created by the clone() system call
• fork creates a new process with its own entirely new process context
• clone creates a new process with its own identity, but that is allowed to share the data
structures of its parent
• Using clone gives an application fine-grained control over exactly what is shared between
two threads
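A minimal sketch of clone() on Linux, sharing the parent's address space the way a thread does; the flag set and stack size are illustrative (portable programs normally use fork() or the Pthreads API instead):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>
#include <signal.h>
#include <sys/wait.h>

static int shared = 0;        /* visible to the child because CLONE_VM shares memory */

static int child_fn(void *arg) {
    shared = 42;              /* this write lands in the parent's address space too */
    return 0;
}

int main(void) {
    const int STACK_SIZE = 64 * 1024;
    char *stack = malloc(STACK_SIZE);

    /* CLONE_VM shares the address space; CLONE_FS and CLONE_FILES share
       filesystem information and the file-descriptor table. The stack
       pointer is passed as the top of the block (stacks grow downward). */
    pid_t pid = clone(child_fn, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);
    printf("shared = %d\n", shared);   /* prints 42 */
    free(stack);
    return 0;
}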
Stages of a Process in Linux