
CHAPTER TWO:
PROCESSES AND PROCESS MANAGEMENT
Process Concept
An operating system executes a variety of programs:
Batch system – jobs
Time-shared systems – user programs or tasks
Textbook uses the terms job and process almost
interchangeably
Process – a program in execution; process execution must
progress in sequential fashion
A process includes:
program counter
stack
data section
Difference between a process and a program

The difference between a process and a program is
subtle.

A process is an activity of some kind: it has a program,
input, output, and a state.

A single processor may be shared among several
processes

In contrast, a program is something that may be
stored on disk, not doing anything.

A process will need certain resources— such as
CPU time, memory, files, and I/O devices to
accomplish its task.

These resources are allocated to the process either
when it is created or while it is executing.
Process Creation

Operating systems need some way to create
processes.

Four principal events cause processes to be
created:

System initialization.

Execution of a process-creation system call by a
running process.

A user request to create a new process.

Initiation of a batch job.

A process may create several new processes, via
a create-process system call

The creating process is called a parent process

the new processes are called the children of that
process

Each of these new processes may in turn create
other processes, forming a tree of processes.

Operating systems identify processes according to a
unique process identifier (or pid).
A tree of processes on a typical Solaris
system.

When a process creates a new process, two possibilities
exist in terms of execution:
(1).The parent continues to execute concurrently with its children.

(2).The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address
space of the new process:
(1).The child process is a duplicate of the parent process (it has the
same program and data as the parent).
(2).The child process has a new program loaded into it.
Process Termination

After a process has been created, it starts running and
does its job.

Later the new process will terminate, usually due to one
of the following conditions.
1. Normal exit (voluntary)

Most processes terminate because they have done their
work.

When a compiler has compiled the program given to it, it
executes a system call to tell the operating system (OS)
that it is finished. For example:

This call is exit in UNIX.

The corresponding call in Windows is ExitProcess.
2.Error exit (voluntary).

Termination because the process discovers a fatal error.
example :

If a user asks the compiler to compile the program foo.c and no
such file exists, the compiler simply announces this fact and exits.
3.Fatal error (involuntary).

Termination due to an error caused by the process.
Examples:

Executing an illegal instruction.

Referencing non-existent memory.

Dividing by zero.
4.Killed by another process (involuntary)

Termination because some process executes a system call
telling the operating system to kill some other process.

The killer must have the necessary authorization to issue
the kill.

In UNIX this call is kill. The corresponding Win32
function is TerminateProcess.
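Case 4 can be sketched with the UNIX kill call named above. This is a POSIX-only illustration; here the parent owns the child, so it has the authorization to send the signal:

```python
import os
import signal
import time

# POSIX-only sketch of "killed by another process": the parent creates a
# child that would run forever, then terminates it with kill().
pid = os.fork()
if pid == 0:
    while True:                      # child: never exits voluntarily
        time.sleep(0.1)
else:
    time.sleep(0.2)                  # give the child a moment to start
    os.kill(pid, signal.SIGKILL)     # involuntary termination
    _, status = os.waitpid(pid, 0)
    print("child killed by signal", os.WTERMSIG(status))
```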
Process State

As a process executes, it changes state:
◦ new: The process is being created
◦ running: Instructions are being executed
◦ waiting: The process is waiting for some event to
occur
◦ ready: The process is waiting to be assigned to a
processor
◦ terminated: The process has finished execution
Diagram of Process State
Process Control Block (PCB)
Information associated with each process
Process state
Program counter
CPU registers
CPU scheduling information
Memory-management information
Accounting information
I/O status information
Process Control Block (PCB)
Process state:- The state may be new, ready, running, waiting,
halted, and so on.
Program counter:- The counter indicates the address of the next
instruction to be executed for this process.
CPU registers:- The registers vary in number and type,
depending on the computer architecture.
CPU-scheduling information:- This information includes a process
priority, pointers to scheduling queues, and any other scheduling
parameters.
Memory-management information:- This may include the values of
the base and limit registers.
Accounting information:- This information includes the amount
of CPU and real time used, time limits, account numbers, job or
process numbers, and so on.
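The PCB fields listed above can be modeled as a toy record. Real kernels keep this in a C structure (Linux's task_struct, for example); the field names here are illustrative, not any particular kernel's:

```python
from dataclasses import dataclass, field

# A toy model of the information a PCB carries for each process.
@dataclass
class PCB:
    pid: int
    state: str = "new"             # new, ready, running, waiting, terminated
    program_counter: int = 0       # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    priority: int = 0              # CPU-scheduling information
    base: int = 0                  # memory-management: base register
    limit: int = 0                 # memory-management: limit register
    cpu_time_used: float = 0.0     # accounting information
    open_files: list = field(default_factory=list)  # I/O status

p = PCB(pid=42)
p.state = "ready"
print(p)
```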
CPU Switch From Process to Process
Threads

In traditional OS, each process has an address space and a
single thread of control.

In fact, a thread has almost the same definition as a process.

A thread is a basic unit of CPU utilization; it comprises a
thread ID, a program counter, a register set, and a stack.

It shares with other threads belonging to the same process
its code section, data section, and other operating-system
resources.

A traditional (or heavyweight) process has a single thread
of control.

If a process has multiple threads of control, it can perform
more than one task at a time.
Figure of difference between a traditional(single threaded) process and a
multithreaded process.

single-threaded process multithreaded process


Benefits of multithreading

Responsiveness:- Multithreading an interactive application may
allow a program to continue running even if part of it is blocked
or performing a lengthy operation, thereby increasing
responsiveness to the user.

For instance, a multithreaded Web browser could allow user
interaction in one thread while an image is being loaded in
another thread.
Resource sharing:- Processes may only share resources through
techniques such as shared memory or message passing.

Threads share the memory and the resources of the process to
which they belong by default.
Economy:- Allocating memory and resources for process creation is
costly.

Because threads share the resources of the process to which they
belong, it is more economical to create and context-switch threads.

Scalability:- The benefits of multithreading can be greatly increased in a
multiprocessor architecture, where threads may be running in parallel
on different processors.

A single-threaded process can only run on one processor, regardless
how many are available.

Multithreading on a multi-CPU machine increases parallelism.
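The resource-sharing benefit can be shown directly: all threads of one process see the same data section, so these workers update one shared counter without any IPC. A minimal sketch (the lock prevents lost updates when increments interleave):

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:                 # avoid lost updates on the shared data
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 4 threads x 10,000 increments = 40000
```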
INTERPROCESS COMMUNICATION

Processes executing concurrently in the operating
system may be either independent processes or
cooperating processes.

A process is independent if it cannot affect or be
affected by the other processes executing in the
system.

A process is cooperating if it can affect or be affected
by the other processes executing in the system.

There are several reasons for providing an
environment that allows process cooperation

Information sharing:- Several users may be interested in the
same piece of information (for instance, a shared file), so the
environment must allow concurrent access to it.
INTERPROCESS COMMUNICATION

Computation speedup:- If we want a particular task to
run faster, we must break it into subtasks, each of which
will be executing in parallel with the others.

Modularity:- We may want to construct the system in a
modular fashion,dividing the system functions into
separate processes or threads.

Convenience:- Even an individual user may work on
many tasks at the same time. For instance, a user may be
editing, printing, and compiling in parallel.

There are two fundamental models of interprocess
communication:-
1 shared memory

In this model, a region of memory that is shared by cooperating
processes is established.

Processes can then exchange information by reading and writing
data to the shared region.

Shared memory can be faster than message passing
2 message passing

In the message-passing model, communication takes place by
means of messages exchanged between the cooperating processes.

Message passing is useful for exchanging smaller amounts of data,
because no conflicts need be avoided.

Message passing is also easier to implement.
Figure of Communications models. (a) Shared memory. (b) Message
passing.
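Both models can be sketched with raw OS facilities (POSIX only): an anonymous shared mapping stands in for a shared-memory region, and a pipe carries an explicit message between parent and child:

```python
import mmap
import os

shm = mmap.mmap(-1, 16)    # anonymous MAP_SHARED region, inherited on fork
r, w = os.pipe()           # one-way message channel

pid = os.fork()
if pid == 0:                        # child: the producer
    shm[0:2] = b"99"                # shared memory: write directly
    os.write(w, b"hello")           # message passing: send through the pipe
    os._exit(0)
else:                               # parent: the consumer
    os.waitpid(pid, 0)
    msg = os.read(r, 5)             # receive the message
    print(shm[0:2], msg)
```

Note the asymmetry: the shared-memory write needs no system call after setup (hence its speed), while each message requires a kernel-mediated send and receive.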
Process Scheduling

In a multiprogrammed system, multiple processes compete for the CPU at
the same time.

This situation occurs whenever two or more of them are
simultaneously in the ready state.

If only one CPU is available, a choice has to be made which process to
run next.

The part of the operating system that makes the choice is called the
scheduler.

The algorithm it uses to make the choice among processes is called the
scheduling algorithm.

Under non-preemptive scheduling, a process keeps the CPU until it
releases it, either by terminating or by switching to the waiting
state.

Under preemptive scheduling, the CPU can be switched from one running
process to another.
Process Scheduling Queues
Job queue – set of all processes in the system
Ready queue – set of all processes residing in main
memory, ready and waiting to execute
Device queues – set of processes waiting for an I/O
device
Processes migrate among the various queues
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling
Schedulers
Long-term scheduler (or job scheduler) – selects which
processes should be brought into the ready queue
Short-term scheduler (or CPU scheduler) – selects
which process should be executed next and allocates CPU
Scheduling Criteria
CPU utilization – keep the CPU as busy as possible
Throughput – number of processes that complete
their execution per time unit
Turnaround time – amount of time to execute a
particular process
Waiting time – amount of time a process has been
waiting in the ready queue
Response time – amount of time it takes from when a
request was submitted until the first response is
produced, not output (for time-sharing environment)
Scheduling Algorithm Optimization Criteria
Max CPU utilization
Max throughput
Min turnaround time
Min waiting time
Min response time
First-Come, First-Served (FCFS) Scheduling

the simplest CPU scheduling algorithm

the process that requests the CPU first is allocated the CPU
first.

The implementation of the FCFS policy is easily managed with
a FIFO queue.

When a process enters the ready queue, its PCB is linked onto
the tail of the queue.

When the CPU is free, it is allocated to the process at the head
of the queue.

The running process is then removed from the queue.

On the negative side, the average waiting time under the FCFS
policy is often quite long.
First-Come, First-Served (FCFS) Scheduling
Process Burst Time
P1 24
P2 3
P3 3
Suppose that the processes arrive in the order: P1 , P2 , P3
The Gantt Chart for the schedule is:

Waiting time for P1 = 0; P2 = 24; P3 = 27
Average waiting time: (0 + 24 + 27)/3 = 17

| P1 | P2 | P3 |
0         24    27    30
FCFS Scheduling (Cont)
Suppose that the processes arrive in the order
P2 , P3 , P1
The Gantt chart for the schedule is:
Waiting time for P1 = 6; P2 = 0; P3 = 3
Average waiting time: (6 + 0 + 3)/3 = 3
Much better than the previous case
Convoy effect: short processes stuck waiting behind a long
process

| P2 | P3 | P1 |
0       3      6                  30
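Both FCFS examples reduce to the same rule: each process waits for the total burst time of everything ahead of it in the queue. A small sketch that reproduces the two averages above:

```python
def fcfs_waits(bursts):
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)     # time spent waiting in the ready queue
        clock += b              # this burst delays everything behind it
    return waits

w1 = fcfs_waits([24, 3, 3])     # arrival order P1, P2, P3
w2 = fcfs_waits([3, 3, 24])     # arrival order P2, P3, P1
print(w1, sum(w1) / 3)          # [0, 24, 27] -> average 17.0
print(w2, sum(w2) / 3)          # [0, 3, 6]   -> average 3.0
```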
Shortest Job First (SJF) Scheduling
Associate with each process the length of its next CPU
burst.
Use these lengths to schedule the process with the
shortest time
SJF is optimal – gives minimum average waiting time for a
given set of processes

The difficulty is knowing the length of the next CPU request
 If the next CPU bursts of two processes are the same,FCFS
scheduling is used to break the tie.
SJF Scheduling (Continue...)

As an example of SJF scheduling, consider the following set of
processes,with the length of the CPU burst given in milliseconds:
Process Burst Time
P1 6
P2 8
P3 7
P4 3
The waiting time is 3 milliseconds for process P1 , 16 milliseconds for process
P2 , 9 milliseconds for process P3 , and 0 milliseconds for process P4

Using SJF scheduling, we would schedule these processes according


to the following Gantt chart:

Thus, the average waiting time is (3 + 16 + 9 + 0)/4 = 7 milliseconds.

By comparison, if we were using the FCFS scheduling scheme, the
average waiting time would be 10.25 milliseconds
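With all processes arriving at time 0, nonpreemptive SJF amounts to running the jobs in order of burst length. A sketch that reproduces the example above:

```python
def sjf_waits(bursts):
    order = sorted(bursts, key=lambda p: bursts[p])   # shortest job first
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock
        clock += bursts[p]
    return waits

w = sjf_waits({"P1": 6, "P2": 8, "P3": 7, "P4": 3})
print(w)                     # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(w.values()) / 4)   # 7.0
```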
• The SJF algorithm can be either preemptive or nonpreemptive
• The choice arises when a new process arrives at the ready queue
while a previous process is still executing.
• The next CPU burst of the newly arrived process may be shorter than
what is left of the currently executing process.
• A preemptive SJF algorithm will preempt the currently executing
process
• whereas a nonpreemptive SJF algorithm will allow the currently
running process to finish its CPU burst.
• Preemptive SJF scheduling is sometimes called shortest-remaining-
time-first scheduling.
• As an example, consider the following four processes, with the length
of the CPU burst given in milliseconds:
Process Arrival Time Burst Time
P1 0 8
P2 1 4
P3 2 9
P4 3 5
then the resulting preemptive SJF schedule is as depicted in the following
Gantt chart:

| P1 | P2 | P4 | P1 | P3 |
0       1      5       10     17      26

Process P1 is started at time 0, since it is the only process in the queue.
Process P2 arrives at time 1.

The remaining time for process P1 (7 milliseconds) is larger than the time
required by process P2 (4 milliseconds), so process P1 is preempted, and
process P2 is scheduled.

The average waiting time for this example is [(10 − 1) + (1 − 1) + (17 − 2) + (5 −
3)]/4 = 26/4 = 6.5 milliseconds.

Nonpreemptive SJF scheduling would result in an average waiting time of 7.75
milliseconds.
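Shortest-remaining-time-first can be simulated one time unit at a time: at every tick, the ready process with the least remaining burst runs, so a newly arrived short job preempts a long one. A sketch reproducing the example above:

```python
def srtf_waits(procs):                 # procs: {name: (arrival, burst)}
    remaining = {p: b for p, (a, b) in procs.items()}
    finish, clock = {}, 0
    while remaining:
        ready = [p for p in remaining if procs[p][0] <= clock]
        run = min(ready, key=lambda p: remaining[p])   # least remaining
        remaining[run] -= 1
        clock += 1
        if remaining[run] == 0:
            finish[run] = clock
            del remaining[run]
    # waiting time = finish - arrival - burst
    return {p: finish[p] - a - b for p, (a, b) in procs.items()}

w = srtf_waits({"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)})
print(w, sum(w.values()) / 4)   # average 6.5
```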
Priority Scheduling
A priority number (integer) is associated with each process
The CPU is allocated to the process with the highest priority
(smallest integer  highest priority)
Preemptive
nonpreemptive
SJF is a priority scheduling where priority is the predicted
next CPU burst time
Problem  Starvation – low priority processes may never
execute
Solution  Aging – as time progresses, increase the priority
of the process (decrease the value of the integer).
Process Burst Time Priority
P1 10 3
P2 1 1
P3 2 4
P4 1 5
P5 5 2

Using priority scheduling, we would schedule these processes according to
the following Gantt chart:

| P2 | P5 | P1 | P3 | P4 |
0       1      6       16     18    19

The average waiting time is 8.2 milliseconds.



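With all processes arriving at time 0, nonpreemptive priority scheduling runs them in order of priority number (smallest integer = highest priority). A sketch reproducing the 8.2 ms average above:

```python
def priority_waits(procs):             # procs: {name: (burst, priority)}
    order = sorted(procs, key=lambda p: procs[p][1])   # highest priority first
    waits, clock = {}, 0
    for p in order:
        waits[p] = clock
        clock += procs[p][0]
    return waits

w = priority_waits({"P1": (10, 3), "P2": (1, 1), "P3": (2, 4),
                    "P4": (1, 5), "P5": (5, 2)})
print(w, sum(w.values()) / 5)   # average 8.2
```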
Round Robin (RR)
 Each process gets a small unit of CPU time (time quantum), usually 10-
100 milliseconds. After this time has elapsed, the process is preempted
and added to the end of the ready queue.
 If there are n processes in the ready queue and the time quantum is q,
then each process gets 1/n of the CPU time in chunks of at most q time
units at once. No process waits more than (n-1)q time units.
 Performance
 q large  FIFO
 q small  q must be large with respect to context switch, otherwise
overhead is too high
Process Burst Time
P1 24
P2 3
P3 3

The Gantt chart is:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0       4      7      10     14     18     22     26     30
Typically, higher average turnaround than SJF, but
better response


Let’s calculate the average waiting time for this schedule.

P1 waits for 6 milliseconds (10 – 4),

P2 waits for 4 milliseconds,

P3 waits for 7 milliseconds.

Thus, the average waiting time is 17/3 = 5.66 milliseconds.
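Round robin can be sketched with a FIFO queue: a process runs for at most q time units, then, if unfinished, goes back to the tail. This reproduces the schedule and average above:

```python
from collections import deque

def rr_waits(procs, q):                # procs: {name: burst}
    remaining = dict(procs)
    queue = deque(procs)
    finish, clock = {}, 0
    while queue:
        p = queue.popleft()
        run = min(q, remaining[p])     # run one quantum, or less to finish
        clock += run
        remaining[p] -= run
        if remaining[p] == 0:
            finish[p] = clock
        else:
            queue.append(p)            # preempted: back to the tail
    return {p: finish[p] - b for p, b in procs.items()}

w = rr_waits({"P1": 24, "P2": 3, "P3": 3}, q=4)
print(w, sum(w.values()) / 3)   # average 5.66...
```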
Time Quantum and Context Switch Time
Multilevel Queue Scheduling

Multilevel Feedback Queues

NUMA and CPU Scheduling


Algorithm Evaluation
Deterministic modeling – takes a particular
predetermined workload and defines the
performance of each algorithm for that
workload
Queueing models
Implementation
Evaluation of CPU schedulers by
Simulation
Dispatch Latency
Solaris 2 Scheduling
Multiple-Processor Scheduling
CPU scheduling more complex when multiple CPUs are
available
Homogeneous processors within a multiprocessor
Asymmetric multiprocessing – only one processor accesses
the system data structures, alleviating the need for data
sharing
Symmetric multiprocessing (SMP) – each processor is self-
scheduling, all processes in common ready queue, or each
has its own private queue of ready processes
Processor affinity – process has affinity for processor on
which it is currently running
soft affinity
hard affinity
