Unit 2

Define process and process control block


A process is a program in execution. It is an active entity that carries out instructions and interacts with the system's resources.

In an operating system, a process is more than just a piece of code; it is a program undergoing execution, including its current activity, resources, and state. It represents the fundamental unit of work that the system manages.

A Process Control Block (PCB) is a data structure that contains the information the operating system keeps about a process. The process control block is also known as a task control block, or an entry in the process table.

It is very important for process management, as all the data the operating system keeps about a process is structured in terms of the PCB. Taken together, the PCBs also reflect the current state of the operating system.

Structure of the Process Control Block

The process control block stores many data items that are needed for efficient process management. Some of these data items are explained with the help of the given diagram −

The following are the data items −

Process State
This specifies the process state i.e. new, ready, running, waiting or terminated.

Process Number

This is the unique identifier (PID) of the particular process.

Program Counter

This contains the address of the next instruction that needs to be executed in the process.

Registers

This specifies the registers that are used by the process. They may include accumulators, index
registers, stack pointers, general purpose registers etc.

List of Open Files

These are the different files that are associated with the process

CPU Scheduling Information

The process priority, pointers to scheduling queues etc. is the CPU scheduling information that is
contained in the PCB. This may also include any other scheduling parameters.

Memory Management Information

The memory management information includes the page tables or the segment tables depending on
the memory system used. It also contains the value of the base registers, limit registers etc.

I/O Status Information

This information includes the list of I/O devices used by the process, the list of files etc.

Accounting information

The time limits, account numbers, amount of CPU used, process numbers etc. are all a part of the
PCB accounting information.

Location of the Process Control Block

The process control block is kept in a memory area that is protected from normal user access, because it contains important process information. Some operating systems place the PCB at the beginning of the kernel stack for the process, as this is a safe location.

Explain process state diagram


A process is a program in execution, and it is more than the program code (called the text section). This concept applies under every operating system, because each task the operating system performs requires a process to carry it out.

As a process executes, it changes state. The state of a process is defined by the current activity of the process.

Each process may be in any one of the following states −

 New − The process is being created.

 Running − In this state the instructions are being executed.


 Waiting − The process is in waiting state until an event occurs like I/O operation completion
or receiving a signal.

 Ready − The process is waiting to be assigned to a processor.

 Terminated − The process has finished execution.

It is important to know that only one process can be running on any processor at any instant. Many
processes may be ready and waiting.

Now let us see the state diagram of these process states −

Explanation

Step 1 − Whenever a new process is created, it is admitted into the ready state.

Step 2 − The scheduler's dispatcher selects one of the ready processes and moves it to the running state.

Step 3 − If the running process must wait for an I/O operation or some other event, it moves from the running state to the waiting state; if a higher-priority process becomes ready, the running process is preempted and returned to the ready state.

Step 4 − Whenever the I/O operation or event completes, an interrupt signal moves the process from the waiting state back to the ready state.

Step 5 − Whenever a process completes its execution in the running state, it exits to the terminated state, which marks the completion of the process.

Describe process creation and termination


Processes in most operating systems (both Windows and Linux) form a hierarchy, so a new process is always created by an existing process. The process that creates the new one is called the parent process, and the newly created process is called the child process. A process can create multiple new processes while it is running by issuing process-creation system calls.

1. When a new process is created, the operating system assigns a unique Process Identifier (PID) to it
and inserts a new entry in the primary process table.
2. Then the required memory space for all the elements of the process, such as the program, data, and stack, is allocated, including space for its Process Control Block (PCB).
3. Next, the various values in PCB are initialized such as,

1. The process identification part is filled with PID assigned to it in step (1) and also its parent's
PID.

2. The processor register values are mostly filled with zeroes, except for the stack pointer and program counter. The stack pointer is filled with the address of the stack allocated in step (2), and the program counter is filled with the address of the program's entry point.

3. The process state information would be set to 'New'.

4. The priority would be lowest by default, but the user can specify a priority during creation. The operating system then links this process to the scheduling queue, and the process state is changed from 'New' to 'Ready'. Now the process competes for the CPU.

5. Additionally, the operating system will create some other data structures such as log files or
accounting files to keep track of process activity.

Process Termination

A process terminates itself when it finishes executing its last statement, at which point it invokes the exit() system call and the operating system deletes its context. Then all the resources held by that process, such as physical and virtual memory, I/O buffers, and open files, are reclaimed by the operating system. A process P can also be terminated either by the operating system or by the parent process of P.

A parent may terminate a child process for one of the following reasons:

1. The task assigned to the child is no longer required.

2. The child has used more resources than its allotted limit.

3. The parent itself is exiting, and as a result all of its children are terminated. This is called cascaded termination.

A process can be terminated/deleted in many ways. Some of the ways are:

1. Normal termination: The process completes its task and calls an exit() system call. The
operating system cleans up the resources used by the process and removes it from the
process table.

2. Abnormal termination/Error exit: A process may terminate abnormally if it encounters an error or needs to stop immediately. This can happen through the abort() system call.

3. Termination by parent process: A parent process may terminate a child process when the child finishes its task. This is done using the kill() system call.

4. Termination by signal: The parent process can also send specific signals like SIGSTOP to
pause the child or SIGKILL to immediately terminate it.

Discuss the relation between processes


In an operating system, processes can have independent or cooperating relationships. Independent
processes do not affect or are affected by other processes. Cooperating processes, on the other
hand, can interact and influence each other, enabling information sharing, faster computation
through task division, and other benefits. The OS manages these processes through techniques like
context switching, inter-process communication, and synchronization.

Types of Process Relationships:

 Independent Processes:

These processes operate in isolation. They do not share data or resources with other processes and
are not impacted by their execution.

 Cooperating Processes:

These processes can interact and share data or resources. This cooperation can be beneficial for
various reasons, including:

 Information sharing: Processes can exchange data, enabling collaboration and communication.

 Speeding up computation: By dividing a task into smaller parts, multiple processes can work concurrently, potentially reducing overall execution time.

 Modularity: Complex systems can be designed as cooperating processes, each handling a specific part of the overall functionality.

 Convenience: Certain tasks are more easily achieved through the interaction of multiple processes, such as a producer-consumer model where one process generates data and another consumes it.

Define Thread and describe multithreading


A thread is a single sequence stream within a process. Threads are also called lightweight
processes as they possess some of the properties of processes. Each thread belongs to exactly one
process.

 In an operating system that supports multithreading, a process can consist of many threads. Threads can run truly in parallel only if there is more than one CPU; on a single CPU, threads must context switch to share it.

 All threads belonging to the same process share - code section, data section, and OS
resources (e.g. open files and signals)

 But each thread has its own Thread Control Block (TCB), with a thread ID, program counter, register set, and a stack.

 Any operating system process can execute threads; that is, a single process can have multiple threads.

Why Do We Need Thread?

 Threads run in a concurrent manner, which improves application performance. Each such thread has its own CPU state and stack, but they share the address space of the process and its environment. For example, when we work in Microsoft Word or Google Docs, we notice that while we are typing, multiple things happen together (formatting is applied, the page changes, and auto save happens).

 Threads can share common data so they do not need to use inter-process communication.
Like the processes, threads also have states like ready, executing, blocked, etc.

 Priority can be assigned to the threads just like the process, and the highest priority thread is
scheduled first.

 Each thread has its own Thread Control Block (TCB). As with a process, a context switch can occur for a thread, and the register contents are saved in the TCB. Because threads share the same address space and resources, synchronization is also required for the various activities of the threads.

Components of Threads

These are the basic components of a thread.

 Stack Space: Stores local variables, function calls, and return addresses specific to the
thread.

 Register Set: Hold temporary data and intermediate results for the thread's execution.

 Program Counter: Tracks the current instruction being executed by the thread.

Types of Thread in Operating System

Threads are of two types. These are described below.

 User Level Thread

 Kernel Level Thread


What is Multi-Threading?

A thread is also known as a lightweight process. The idea is to achieve parallelism by dividing a
process into multiple threads. For example, in a browser, multiple tabs can be different threads. MS
Word uses multiple threads: one thread to format the text, another thread to process inputs, etc.
More advantages of multithreading are discussed below.

Multithreading is a technique used in operating systems to improve the performance and responsiveness of computer systems. Multithreading allows multiple threads (i.e., lightweight processes) to share the resources of a single process, such as the CPU, memory, and I/O devices.
Single Threaded vs Multi-threaded Process

Multithreading can also be done without kernel support, using user-level threads. Historically, some Java Virtual Machine (JVM) implementations managed threads this way ("green threads"), scheduling them inside the JVM independently of the underlying operating system.

With user-level threads, the application itself manages the creation, scheduling, and execution of threads without relying on the operating system's kernel. The application contains a threading library that handles thread creation, scheduling, and context switching. The operating system is unaware of user-level threads and treats the entire process as a single-threaded entity.

Benefits of Thread in Operating System

 Responsiveness: If a process is divided into multiple threads, then when one thread completes its work, its output can be returned immediately.

 Faster context switch: Context switch time between threads is lower compared to the
process context switch. Process context switching requires more overhead from the CPU.

 Effective utilization of multiprocessor systems: If we have multiple threads in a single process, then we can schedule them on multiple processors. This makes process execution faster.

 Resource sharing: Resources like code, data, and files can be shared among all threads
within a process. Note: Stacks and registers can't be shared among the threads. Each thread
has its own stack and registers.

 Communication: Communication between multiple threads is easier, as the threads share a common address space, while between processes we have to use specific inter-process communication techniques.
 Enhanced throughput of the system: If a process is divided into multiple threads, and each
thread function is considered as one job, then the number of jobs completed per unit of time
is increased, thus increasing the throughput of the system.
