
Chapter 2
Process Management

Operating System

1
Processes

• Process Concept
• Process Scheduling
• Operations on Processes
• Threads
• Cooperating Processes
• Interprocess Communication

2
Process vs. Program
Program
• It is a sequence of instructions defined to perform some task.
• It is a passive entity.
Process
• It is a program in execution.
• It is an instance of a program running on a computer.
• It is an active entity.
A process includes:
• program counter
• stack
• data section

3
Types of Processes
Sequential Processes: Execution progresses in a sequential fashion, i.e. one
instruction after another. At any point in time, at most one process is being
executed.
Concurrent Processes: There are two types of concurrent processes.
– True Concurrency (Multiprocessing): Two or more processes are executed
simultaneously in a multiprocessor environment. Supports real parallelism.
– Apparent Concurrency (Multiprogramming): Two or more processes appear to
execute in parallel on a uniprocessor by rapidly switching from one process
to another.

4
Process State
• During its lifetime, a process passes through a number of states.
• As a process executes, it changes state:
– New: The process is being created.
– Ready: The process is waiting to be assigned to a processor; it can run as
soon as the OS dispatches it.
– Running: Instructions are being executed; the process currently holds the CPU.
– Waiting: The process is waiting for some event to occur, such as the
completion of an I/O operation.
– Terminated: The process has finished execution.

5
Diagram of Process State

6
Process Control Block (PCB)

• The PCB is the representation of a process in the operating system.
• It is also known as the process descriptor or task control block (TCB).
• It is a data structure that contains important information about the
process, which includes:
– Process state
– Program counter
– CPU registers
– CPU scheduling information
– Memory-management information
– Accounting information
– I/O status information

7
Process Control Block (PCB)
(Cont.)

8
Process Control Block (PCB)
(Cont.)
• Process state: The state may be new, ready, running, waiting, halted, and so on.
• Program counter: The counter indicates the address of the next instruction to be
executed for this process.
• CPU registers: The registers vary in number and type, depending on the computer
architecture. They include accumulators, index registers, stack pointers, and general-
purpose registers, plus any condition-code information. Along with the program counter,
this state information must be saved when an interrupt occurs, to allow the process to
be continued correctly afterward.
• CPU-scheduling information: This information includes the process priority, pointers to
scheduling queues, and any other scheduling parameters.
• Memory-management information: This may include the values of the base and limit
registers, the page tables, or the segment tables, depending on the memory system used
by the operating system.
• Accounting information: This includes the amount of CPU and real time used, time
limits, account numbers, job or process numbers, and so on.
• I/O status information: This includes the list of I/O devices allocated to the process,
a list of open files, and so on.
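
• To make these fields concrete, the sketch below shows a hypothetical,
simplified PCB as a C struct. The field names, types, and sizes are
illustrative assumptions only; they do not reflect the layout used by any
real kernel (Linux's equivalent, task_struct, has many more fields).

/* Hypothetical, simplified PCB for illustration only. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int             pid;                 /* process identifier */
    enum proc_state state;               /* process state */
    uint64_t        program_counter;     /* address of the next instruction */
    uint64_t        registers[16];       /* saved CPU registers */
    int             priority;            /* CPU-scheduling information */
    struct pcb     *next_in_queue;       /* link into a scheduling queue */
    uint64_t        mem_base, mem_limit; /* memory-management information */
    uint64_t        cpu_time_used;       /* accounting information */
    int             open_files[16];      /* I/O status information */
};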

9
Context Switch

• When the CPU switches to another process, the system must save the
state of the old process and load the saved state of the new process.
• Context-switch time is pure overhead; the system does no useful work
while switching.
• The switch time depends on hardware support (for example, on how much
state must be saved and restored).
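
• As a rough user-level analogy of a context switch, the sketch below uses
the POSIX <ucontext.h> routines to save one execution context and resume
another. This is only an illustration: a real kernel switch also saves
privileged state and may switch address spaces, which ordinary C code
cannot do.

/* User-level "context switch" between main and a task (Linux/glibc). */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t ctx_main, ctx_task;

static void task(void) {
    printf("task: running, switching back to main\n");
    swapcontext(&ctx_task, &ctx_main);     /* save task, resume main */
    printf("task: resumed\n");             /* reached on the second switch */
}

int main(void) {
    static char stack[64 * 1024];          /* stack for the new context */

    getcontext(&ctx_task);
    ctx_task.uc_stack.ss_sp   = stack;
    ctx_task.uc_stack.ss_size = sizeof stack;
    ctx_task.uc_link          = &ctx_main; /* where to go when task() returns */
    makecontext(&ctx_task, task, 0);

    printf("main: switching to task\n");
    swapcontext(&ctx_main, &ctx_task);     /* save main, run task */
    printf("main: back in main, switching once more\n");
    swapcontext(&ctx_main, &ctx_task);
    printf("main: done\n");
    return 0;
}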

10
Context Switch (Cont.)

11
Process Scheduling Queues

• Job queue – set of all processes in the system.
• Ready queue – set of all processes residing in main memory, ready and
waiting to execute.
• Device queues – sets of processes waiting for an I/O device.
• Processes migrate between the various queues.
• Processes can be described as either:
– I/O-bound – spends more time doing I/O than computation; many short
CPU bursts.
– CPU-bound – spends more time doing computation; few very long CPU
bursts.

12
Representation of Process
Scheduling

13
Process Schedulers

• Long-term scheduler (or job scheduler) – selects which processes should be
brought into the ready queue.
– The long-term scheduler is invoked very infrequently (seconds,
minutes) ⇒ it may be slow.
– The long-term scheduler controls the degree of multiprogramming.
• Short-term scheduler (or CPU scheduler) – selects which process should be
executed next and allocates the CPU.
– The short-term scheduler is invoked very frequently (milliseconds) ⇒
it must be fast.
• Some operating systems, such as time-sharing systems, may introduce an
additional, intermediate level of scheduling.
• This medium-term scheduler performs swapping of processes.
14
Addition of Medium Term
Scheduling

15
Process Creation

• A parent process creates child processes, which in turn create other
processes, forming a tree of processes.
• Resource sharing options:
– Parent and children share all resources.
– Children share a subset of the parent's resources.
– Parent and child share no resources.
• Execution options:
– Parent and children execute concurrently.
– Parent waits until the children terminate.

16
Process Creation (Cont.)

• Address space options:
– The child is a duplicate of the parent.
– The child has a new program loaded into it.
• UNIX examples (see the sketch below):
– The fork system call creates a new process.
– The execve system call is used after a fork to replace the process's
memory space with a new program.
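
• A minimal sketch of these two calls, assuming a UNIX-like system with
/bin/ls installed: the child replaces its memory image with execve() while
the parent waits for it to terminate.

/* fork() + execve() + waitpid(): create a child and run a new program in it. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid < 0) {                      /* fork failed */
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {              /* child: load a new program */
        char *argv[] = { "ls", "-l", NULL };
        char *envp[] = { NULL };
        execve("/bin/ls", argv, envp);
        perror("execve");               /* reached only if execve fails */
        _exit(EXIT_FAILURE);
    } else {                            /* parent: wait for the child */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
    }
    return 0;
}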

17
Process Termination
• A process executes its last statement and asks the operating system to
delete it (exit).
– Output data are returned from the child to the parent (via wait).
– The process's resources are deallocated by the operating system.
• A parent may terminate the execution of its children (abort) when:
– The child has exceeded its allocated resources.
– The task assigned to the child is no longer required.
– The parent is exiting.
– Some operating systems do not allow a child to continue if its parent
terminates; all of its children are then terminated as well
(cascading termination).

18
Threads

• A thread is a dispatchable unit of work (a lightweight process) that has
its own context, state, and stack.
• A thread is a single point of execution within a process.
• A process is a collection of one or more threads plus the associated
system resources.
• Traditional operating systems are single-threaded systems.
• There are two main places to implement threads: user space and the
kernel. The choice is somewhat controversial, and a hybrid implementation
is also possible.
– User space – the kernel knows nothing about the threads. As far as the
kernel is concerned, it is managing ordinary, single-threaded
processes.
– Kernel space – the kernel knows about and manages the threads.
19
Threads (Cont.)

20
Multithreading

• Multithreading is a technique in which a process executing an application
is divided into threads that can run concurrently (see the sketch below).
• All threads share the same address space, and a separate thread table is
needed to manage the threads.
• The benefits of multithreaded programming fall into four major categories:
– Responsiveness
– Resource sharing
– Economy
– Utilization of multiprocessor architectures
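
• A minimal sketch using POSIX threads (compile with -pthread): both
threads run inside one process, so they see the same global data. The
thread function and variable names are illustrative assumptions.

/* Two threads sharing the address space of one process. */
#include <pthread.h>
#include <stdio.h>

static const char *shared_msg = "data in the shared address space";

static void *worker(void *arg) {
    long id = (long)arg;
    /* every thread of the process sees the same global variable */
    printf("thread %ld reads: %s\n", id, shared_msg);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}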

21
Cooperating Processes

• The concurrent processes executing in the operating system may be either
independent processes or cooperating processes.
• An independent process cannot affect or be affected by the execution of
another process.
• A cooperating process can affect or be affected by the execution of
another process.
• Advantages of process cooperation:
– Information sharing
– Computation speed-up
– Modularity
– Convenience

22
Interprocess Communication
(IPC)
• A mechanism for processes to communicate and to synchronize their
actions.
• Message system – processes communicate with each other without
resorting to shared variables.
• The IPC facility provides two operations:
• send(message) – the message size may be fixed or variable
• receive(message)
• If processes P and Q wish to communicate, they need to:
• establish a communication link between them
• exchange messages via send/receive
• Implementation of the communication link (see the pipe sketch below):
• physical (e.g., shared memory, hardware bus)
• logical (e.g., logical properties)
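
• As one possible realization of such a link, the sketch below uses an
anonymous UNIX pipe between a parent and a child: write() plays the role
of send(message) and read() the role of receive(message). The pipe is only
an illustration, not the only way an IPC facility can be built.

/* Message passing over a pipe between parent (sender) and child (receiver). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {                            /* child: the receiver */
        close(fd[1]);                             /* close unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* receive(message) */
        if (n > 0) {
            buf[n] = '\0';
            printf("child received: %s\n", buf);
        }
        return 0;
    }

    close(fd[0]);                                 /* parent: the sender */
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));               /* send(message) */
    close(fd[1]);
    wait(NULL);
    return 0;
}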
23
Con’s

• DIRECT COMMUNICATION:
• Processes must name each other explicitly:
• send (P, message) – send a message to process P
• receive(Q, message) – receive a message from process Q
• Properties of communication link
• Links are established automatically.
• A link is associated with exactly one pair of communicating processes.
• Between each pair there exists exactly one link.
• The link may be unidirectional, but is usually bi-directional.

24
• INDIRECT COMMUNICATION:
• Messages are sent to and received from mailboxes (also referred to
as ports).
• Each mailbox has a unique id.
• Processes can communicate only if they share a mailbox.
• Properties of the communication link:
• A link is established only if the processes share a common mailbox.
• A link may be associated with many processes.
• Each pair of processes may share several communication links.
• A link may be unidirectional or bi-directional.
• Operations (see the message-queue sketch below):
• create a new mailbox
• send and receive messages through the mailbox
• destroy a mailbox
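
• The mailbox model maps naturally onto POSIX message queues. The sketch
below creates a named mailbox, sends a message into it, and receives the
message back; the name /demo_mailbox and the queue sizes are illustrative
assumptions (on Linux, link with -lrt).

/* Indirect communication through a named mailbox (POSIX message queue). */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };

    /* create (or open) the mailbox */
    mqd_t mq = mq_open("/demo_mailbox", O_CREAT | O_RDWR, 0600, &attr);
    if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

    /* send a message into the mailbox ... */
    const char *msg = "queued job #1";
    mq_send(mq, msg, strlen(msg) + 1, 0);

    /* ... and receive it back (another process could do this instead) */
    char buf[64];
    if (mq_receive(mq, buf, sizeof buf, NULL) > 0)
        printf("received: %s\n", buf);

    mq_close(mq);
    mq_unlink("/demo_mailbox");                   /* destroy the mailbox */
    return 0;
}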
25
• Race Conditions:
• In some operating systems, processes that are working together may
share some common storage that each one can read and write.
• E.g. a printer spooler.
• Situations where two or more processes are reading or writing some
shared data and the final result depends on who runs precisely when are
called race conditions. Put differently, a situation where several processes
access and manipulate the same data concurrently, and the outcome of the
execution depends on the particular order in which the accesses take place,
is called a race condition.

26
• When a process wants to print a file, it enters the file name in a special
spooler directory. Another process, the printer daemon, periodically
checks to see if there are any files to be printed; if there are, it
prints them and then removes their names from the directory.
• Imagine that our spooler directory has a very large number of slots,
numbered 0, 1, 2, ..., each one capable of holding a file name. Also
imagine that there are two shared variables: out, which points to the
next file to be printed, and in, which points to the next free slot in the
directory. These two variables might well be kept in a two-word file
available to all processes. At a certain instant, slots 0 to 3 are empty
(those files have already been printed) and slots 4 to 6 are full (with the
names of files queued for printing). More or less simultaneously,
processes A and B decide they want to queue a file for printing. This
situation is shown in the following figure; a minimal code sketch of the
resulting race follows.
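
• A minimal code sketch of such a race, using two threads instead of the
two spooler processes for brevity (compile with -pthread): both update a
shared "next free slot" counter without synchronization, so updates can be
lost and the final value varies from run to run.

/* Race condition: unsynchronized updates to a shared variable. */
#include <pthread.h>
#include <stdio.h>

static long in = 0;                      /* "next free slot", shared data */

static void *enqueue_jobs(void *arg) {
    for (int i = 0; i < 100000; i++)
        in++;                            /* read-modify-write, not atomic */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, enqueue_jobs, NULL);
    pthread_create(&b, NULL, enqueue_jobs, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("in = %ld (expected 200000)\n", in);  /* often less: lost updates */
    return 0;
}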

27
• Conditions required to avoid race conditions:
• 1. No two processes may be simultaneously inside their critical
regions.
• 2. No assumptions may be made about speeds or the number of
CPUs.
• 3. No process running outside its critical region may block other
processes.
• 4. No process should have to wait forever to enter its critical region.

28
• CRITICAL SECTION:
• n processes all compete to use some shared data.
• Each process has a code segment, called the critical section, in which the shared data is
accessed.
• Problem – ensure that when one process is executing in its critical section, no other
process is allowed to execute in its critical section.
• General structure of process Pi:
•   repeat
•     entry section
•       critical section
•     exit section
•       remainder section
•   until false;
• Requirements for a solution to the critical-section problem:
• 1. Mutual Exclusion. If process Pi is executing in its critical section, then no other
process can be executing in its critical section.
• 2. Progress. If no process is executing in its critical section and there exist some processes
that wish to enter their critical sections, then the selection of the process that will enter
its critical section next cannot be postponed indefinitely.
• 3. Bounded Waiting. A bound must exist on the number of times that other processes are
allowed to enter their critical sections after a process has made a request to enter its
critical section and before that request is granted.

29
• SEMAPHORES:
• A synchronization tool that does not require busy waiting.
• A semaphore S is an integer variable.
• It can be accessed only via two indivisible (atomic) operations:
• wait(S):   while S ≤ 0 do no-op;
             S := S – 1;
• signal(S): S := S + 1;
• Counting semaphore – the integer value can range over an unrestricted
domain.
• Binary semaphore – the integer value can range only between 0 and 1;
it can be simpler to implement.
• A counting semaphore S can be implemented using binary semaphores.
A POSIX example is sketched below.
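
• A minimal sketch of wait and signal using a POSIX unnamed semaphore as a
binary semaphore protecting a critical section (compile with -pthread;
assumes Linux, where sem_init on unnamed semaphores is supported).
sem_wait() corresponds to wait(S) and sem_post() to signal(S).

/* Mutual exclusion with a binary semaphore. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;                      /* binary semaphore, initialised to 1 */
static long balance = 0;                 /* shared data */

static void *deposit(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);                /* wait(S): entry section */
        balance++;                       /* critical section */
        sem_post(&mutex);                /* signal(S): exit section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);              /* 0 = shared between threads only */
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %ld\n", balance);  /* always 200000 with the semaphore */
    sem_destroy(&mutex);
    return 0;
}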

30
