Chapter 4: Schedulers

3.3 Process Scheduling


Process Scheduling Queues

• A scheduler is special system software that handles process
scheduling in various ways. Its main task is to select the jobs
to be submitted into the system and to decide which process to run.
• Queues are of three types:
• Job queue – set of all processes in the system
• Ready queue – set of all processes residing in
main memory, ready and waiting to execute
• Device queues – set of processes waiting for an
I/O device
• Processes migrate among the various queues
Schedulers
• Long-term scheduler (or job scheduler) –
selects which processes should be brought into
the ready queue
• Short-term scheduler (or CPU scheduler) –
selects which process should be executed next
and allocates CPU
• Short-term scheduler is invoked very frequently
(milliseconds) ⇒ must be fast
Schedulers (Cont.)
• Long-term scheduler is invoked very infrequently
(seconds, minutes) ⇒ may be slow
• The long-term scheduler controls the degree of
multiprogramming
• Processes can be described as either:
– I/O-bound process – spends more time doing I/O
than computations, many short CPU bursts
– CPU-bound process – spends more time doing
computations; few very long CPU bursts
• Some systems have no long-term scheduler
– Every new process is loaded into memory
Cont…
• The long-term scheduler is also called the job scheduler. It is
invoked very infrequently (seconds, minutes) and may be slow.
• The long-term scheduler determines which programs are admitted
to the system for processing.
• The job scheduler selects processes from the queue and loads them
into memory for execution; once loaded, a process becomes
available for CPU scheduling.
Cont..

• The primary objective of the job scheduler is to provide a
balanced mix of jobs, such as I/O-bound and processor-bound.
• It also controls the degree of multiprogramming.
• If the degree of multiprogramming is stable, then the
average rate of process creation must be equal to
the average departure rate of processes leaving the
system.
• Time-sharing OSs have no long-term scheduler. The long-term
scheduler is used when a process changes state from new to ready.
Cont..

• The short-term scheduler is also called the CPU scheduler. It is
invoked very frequently (milliseconds) and must be fast.
• Its main objective is to increase system performance in accordance
with the chosen set of criteria. It handles the change of a process
from the ready state to the running state.
• The CPU scheduler selects a process from among the processes
that are ready to execute and allocates the CPU to it.
Cont…
• The short-term scheduler, also known as the dispatcher, executes
most frequently and makes the fine-grained decision of which
process to execute next.
• The short-term scheduler is faster than the long-term scheduler.
Cont..
Medium Term Scheduler
• Medium-term scheduling is part of swapping. It removes processes
from memory.
• It reduces the degree of multiprogramming (the number of
programs/tasks running at the same time). The medium-term
scheduler is in charge of handling the swapped-out processes.
Ready Queue And Various I/O Device Queues
Representation of Process Scheduling

Cont…
• A running process may become suspended if
it makes an I/O request. A suspended
process cannot make any progress
towards completion.
• In this situation, to remove the process from
memory and make space for other processes,
the suspended process is moved to
secondary storage.
Cont..
• This process is called swapping, and the
process is said to be swapped out or rolled
out. Swapping may be necessary to improve
the process mix.
3.4 Operations on Processes
Process Creation
• A parent process creates child processes, which, in
turn, create other processes, forming a tree of
processes
• Resource sharing options
– Parent and children share all resources
– Children share subset of parent’s resources
– Parent and child share no resources
• Execution options
– Parent and children execute concurrently
– Parent waits until some or all of its children
have terminated
Process Creation (Cont.)
• Address space options
– Child process is a duplicate of the parent
process (same program and data)
– Child process has a new program loaded
into it
• UNIX example
– fork() system call creates a new process
– exec() system call used after a fork() to
replace the memory space of the process
with a new program
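
A minimal sketch of this fork()/exec() pattern in C; running ls -l in the child is just an arbitrary example program:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* create a new (child) process */

    if (pid < 0) {                   /* fork failed */
        perror("fork");
        return 1;
    } else if (pid == 0) {           /* child: replace its memory image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(1);
    } else {                         /* parent: wait for the child to finish */
        wait(NULL);
        printf("child %d completed\n", (int)pid);
    }
    return 0;
}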
Process Termination
• A process executes its last statement and asks the operating
system to terminate it (via exit())
– Exit (or return) status value from child is received by
the parent (via wait())
– Process’ resources are deallocated by the operating
system
• A parent may terminate execution of its child processes (e.g.,
via the kill() function) when:
– Child has exceeded allocated resources
– Task assigned to child is no longer required
– The parent is exiting, and the operating system does not allow
a child to continue if its parent terminates (cascading
termination)
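
A small sketch of the exit()/wait() interaction, showing how the parent receives the child's exit status; the status value 42 is an arbitrary example:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        exit(42);                            /* child: terminate with a status value */
    } else if (pid > 0) {
        int status;
        wait(&status);                       /* parent blocks until the child exits */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}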
3.5 Interprocess Communication
Cooperating Processes
• An independent process is one that cannot affect or be
affected by the execution of another process
• A cooperating process can affect or be affected by the
execution of another process in the system
Advantages of process cooperation
– Information sharing (of the same piece of data)
– Computation speed-up (break a task into smaller
subtasks)
– Modularity (dividing up the system functions)
– Convenience (to do multiple tasks simultaneously)
Cont…
• Two fundamental models of interprocess
communication
– Shared memory (a region of memory is
shared)
– Message passing (exchange of messages
between processes)
Inter Process Communication (IPC)
• Interprocess communication (IPC) refers to a
mechanism whereby the operating system allows
processes to communicate with each other.
• This involves synchronizing their actions and
managing shared data.
Communications Models

Figure: the message-passing and shared-memory models
Shared Memory Systems
• Shared memory requires communicating processes to
establish a region of shared memory
• Information is exchanged by reading and writing data in
the shared memory
• A common paradigm for cooperating processes is the
producer-consumer problem
• A producer process produces information that is consumed
by a consumer process
– unbounded-buffer places no practical limit on the
size of the buffer
– bounded-buffer assumes that there is a fixed buffer
size
– Buffer means a data area shared by programs
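
A rough sketch of the bounded-buffer scheme in C. In a real shared-memory setup the buffer would live in a region created with something like shm_open()/mmap() so two processes can access it, and the busy-waiting shown here would be replaced by proper synchronization; the single-process main() only demonstrates the buffer logic:

#include <stdio.h>

#define BUFFER_SIZE 10

typedef struct { int data; } item;

/* In a real system this buffer would be placed in a shared-memory
 * region so both the producer and consumer processes can see it. */
item buffer[BUFFER_SIZE];
int in  = 0;   /* next free slot, advanced by the producer */
int out = 0;   /* next full slot, advanced by the consumer */

/* Producer: insert an item, busy-waiting while the buffer is full. */
void produce(item next_produced)
{
    while (((in + 1) % BUFFER_SIZE) == out)
        ;  /* buffer full: do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}

/* Consumer: remove an item, busy-waiting while the buffer is empty. */
item consume(void)
{
    while (in == out)
        ;  /* buffer empty: do nothing */
    item next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return next_consumed;
}

int main(void)
{
    /* Single-process demonstration of the buffer logic only. */
    for (int i = 0; i < 5; i++)
        produce((item){ .data = i });
    for (int i = 0; i < 5; i++)
        printf("consumed %d\n", consume().data);
    return 0;
}

Note that this circular scheme can hold at most BUFFER_SIZE - 1 items at a time.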
Message-Passing Systems
• Mechanism to allow processes to communicate
and to synchronize their actions
• No address space needs to be shared; this is
particularly useful in a distributed processing
environment (e.g., a chat program)
• Message-passing facility provides two operations:
– send(message) – message size can be fixed
or variable
– receive(message)
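
send() and receive() are abstract operations; as one concrete illustration in C, an ordinary UNIX pipe gives the same effect, with write() playing the role of send() and read() the role of receive():

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    char msg[32];

    if (pipe(fd) == -1) {                        /* fd[0] = read end, fd[1] = write end */
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                           /* child acts as the sender */
        close(fd[0]);
        const char *text = "hello";
        write(fd[1], text, strlen(text) + 1);    /* send(message) */
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                                /* parent acts as the receiver */
    read(fd[0], msg, sizeof msg);                /* receive(message) */
    printf("received: %s\n", msg);
    close(fd[0]);
    wait(NULL);
    return 0;
}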
Cont..
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
• Logical implementation of communication link
– Direct or indirect communication
– Synchronous or asynchronous
communication
– Automatic or explicit buffering
Direct Communication
• Processes must name each other explicitly:
– send (P, message) – send a message to process P
– receive(Q, message) – receive a message from
process Q
• Properties of communication link
– Links are established automatically between every
pair of processes that want to communicate
– A link is associated with exactly one pair of
communicating processes
– Between each pair there exists exactly one link
– The link may be unidirectional, but is usually bi-
directional
Cont..
• Disadvantages
– Limited modularity of the resulting process
definitions
– Hard-coding of identifiers is less desirable
than indirection techniques
Indirect Communication

• Messages are sent to and received from
mailboxes (also referred to as ports)
– Each mailbox has a unique id
– Processes can communicate only if they
share a mailbox
• send(A, message) – send message to
mailbox A
• receive(A, message) – receive message from
mailbox A
Cont…
• Properties of communication link
– Link is established between a pair of
processes only if both have a shared
mailbox
– A link may be associated with more than
two processes
– Between each pair of processes, there may
be many different links, with each link
corresponding to one mailbox
Indirect Communication (continued)
• For a shared mailbox, messages are received based on the
following methods:
– Allow a link to be associated with at most two
processes
– Allow at most one process at a time to execute a
receive() operation
– Allow the system to select arbitrarily which process
will receive the message (e.g., a round robin
approach)
• Mechanisms provided by the operating system
– Create a new mailbox
– Send and receive messages through the mailbox
– Destroy a mailbox
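
As an illustrative sketch only, POSIX message queues provide a mailbox-like facility covering all three mechanisms; the mailbox name /demo_mailbox and the attribute values below are arbitrary examples:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <mqueue.h>      /* POSIX message queues; link with -lrt on Linux */

int main(void)
{
    const char *name = "/demo_mailbox";        /* arbitrary mailbox name */
    struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 10,
                            .mq_msgsize = 64, .mq_curmsgs = 0 };
    char buf[64];

    /* Create the mailbox */
    mqd_t mq = mq_open(name, O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* Send and receive a message through the mailbox */
    mq_send(mq, "ping", 5, 0);                 /* send(A, message)    */
    mq_receive(mq, buf, sizeof buf, NULL);     /* receive(A, message) */
    printf("got: %s\n", buf);

    /* Destroy the mailbox */
    mq_close(mq);
    mq_unlink(name);
    return 0;
}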
Synchronization
• Message passing may be either blocking or non-blocking
• Blocking is considered synchronous
– Blocking send has the sender block until the message
is received
– Blocking receive has the receiver block until a
message is available
• Non-blocking is considered asynchronous
– Non-blocking send has the sender send the message
and continue
– Non-blocking receive has the receiver receive a valid
message or null
• When both send() and receive() are blocking, we have a
rendezvous (meeting at an agreed time and place)
between the sender and the receiver
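
A small sketch of the difference, using a UNIX pipe as the message channel: a plain read() is a blocking receive, while switching the descriptor to O_NONBLOCK makes read() return immediately (with EAGAIN) when no message is available:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[16];

    pipe(fd);

    /* A plain read(fd[0], ...) here would block until data arrives
     * (blocking receive); instead we demonstrate the non-blocking case. */
    int flags = fcntl(fd[0], F_GETFL, 0);
    fcntl(fd[0], F_SETFL, flags | O_NONBLOCK);

    ssize_t n = read(fd[0], buf, sizeof buf);   /* non-blocking receive */
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no message available, continuing without blocking\n");

    close(fd[0]);
    close(fd[1]);
    return 0;
}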
Buffering
• Whether communication is direct or indirect, messages
exchanged by communicating processes reside in a
temporary queue
• These queues can be implemented in three ways:
1. Zero capacity – the queue has a maximum length of
zero
- Sender must block until the recipient receives the
message
2. Bounded capacity – the queue has a finite length of n
- Sender must wait if queue is full
3. Unbounded capacity – the queue length is unlimited
- Sender never blocks
