Key Operations of Operating Systems

An operating system facilitates interaction between user applications and system hardware, managing processes, memory, devices, and files. Key operations include process scheduling, memory allocation, device management through drivers, and file organization. Additionally, it ensures protection and security against internal and external threats while providing essential services such as program execution and input/output operations.


Operating System Operations

An operating system is a construct that allows user application programs to interact with
the system hardware. The operating system by itself does no useful work; rather, it provides
an environment in which different applications and programs can do useful work. The major
operations of the operating system are process management, memory management, device
management, and file management.

Process Management

The operating system is responsible for managing processes, i.e., assigning the processor to
one process at a time. This is known as process scheduling. The different algorithms used for
process scheduling are FCFS (first come, first served), SJF (shortest job first), priority
scheduling, and round robin scheduling. There are many scheduling queues that are used to
handle processes in process management. When processes enter the system, they are put
into the job queue. The processes that are ready to execute in main memory are kept in
the ready queue. The processes that are waiting for an I/O device are kept in the device
queue.
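FCFS, the simplest of the algorithms named above, serves the ready queue strictly in arrival order. A minimal sketch (process names and burst times are made up for illustration):

```python
from collections import deque

# Hypothetical (process_id, burst_time) pairs, already in arrival order.
ready_queue = deque([("P1", 5), ("P2", 3), ("P3", 8)])

clock = 0
finish_times = {}
while ready_queue:
    pid, burst = ready_queue.popleft()  # FCFS: serve in arrival order
    clock += burst                      # run the process to completion
    finish_times[pid] = clock

print(finish_times)  # {'P1': 5, 'P2': 8, 'P3': 16}
```

A real scheduler dispatches to the CPU rather than advancing a simulated clock, but the queue discipline is the same.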

Memory Management

Memory management plays an important part in the operating system. It deals with memory
and with moving processes from disk to primary memory for execution and back again.

The activities performed by the operating system for memory management are −

 The operating system assigns memory to processes as required. This can be done
using the best fit, first fit, and worst fit algorithms.
 All memory is tracked by the operating system, i.e., it notes which memory parts
are in use by processes and which are free.
 The operating system deallocates memory from processes as required. This may
happen when a process has terminated or when it no longer needs the memory.
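
The three placement algorithms differ only in which adequate free block they pick. A sketch, using illustrative free-block sizes:

```python
def allocate(blocks, request, strategy):
    """Return the index of the chosen free block, or None if none fits.
    blocks: list of free-block sizes; request: amount of memory needed."""
    candidates = [(size, i) for i, size in enumerate(blocks) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # lowest index that fits
    if strategy == "best":
        return min(candidates)[1]                      # smallest block that fits
    if strategy == "worst":
        return max(candidates)[1]                      # largest block of all

free = [100, 500, 200, 300, 600]
print(allocate(free, 212, "first"))  # 1 (500 is the first block big enough)
print(allocate(free, 212, "best"))   # 3 (300 is the tightest fit)
print(allocate(free, 212, "worst"))  # 4 (600 leaves the largest remainder)
```

Best fit minimizes leftover space per allocation, while worst fit leaves the leftover hole as large (and thus as reusable) as possible.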
Device Management

There are many I/O devices handled by the operating system, such as the mouse, keyboard,
disk drive, etc. Different device drivers can be connected to the operating system to
handle a specific device. The device controller is an interface between the device and the
device driver. The user applications can access all the I/O devices through the device drivers,
which are device-specific code.

File Management

Files are used to provide a uniform view of data storage by the operating system. All files
are mapped onto physical devices that are usually non-volatile, so data is safe in the case of
system failure.

The files can be accessed by the system in two ways, i.e., sequential access and direct access −

 Sequential Access
The information in a file is processed in order using sequential access. The file's
records are accessed one after another. Most programs, such as editors,
compilers, etc., use sequential access.
 Direct Access
In direct access, or relative access, the file's records can be read and written in
any order.
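The two access methods can be sketched with fixed-length records in an ordinary file (the file path and record contents are made up for illustration):

```python
import os
import tempfile

# Write four hypothetical fixed-length (4-byte) records to a temporary file.
path = os.path.join(tempfile.mkdtemp(), "records.dat")
with open(path, "wb") as f:
    for rec in (b"REC0", b"REC1", b"REC2", b"REC3"):
        f.write(rec)

RECLEN = 4
with open(path, "rb") as f:
    first = f.read(RECLEN)   # sequential: records come one after another
    second = f.read(RECLEN)
    f.seek(3 * RECLEN)       # direct: jump straight to record number 3
    third = f.read(RECLEN)

print(first, second, third)  # b'REC0' b'REC1' b'REC3'
```

With fixed-length records, direct access to record n is just a seek to n × record-length, which is why databases favor this layout.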

Process Management
A process is a program in execution. For example, when we write a program in C or
C++ and compile it, the compiler creates binary code. The original code and binary
code are both programs. When we actually run the binary code, it becomes a process.
A process is an ‘active’ entity instead of a program, which is considered a ‘passive’
entity. A single program can create many processes when run multiple times;
operating systems have to keep track of all these processes, schedule them,
and dispatch them one after another. However, each user should feel that they have full
control of the CPU.
Process memory is divided into four sections for efficient working:
 The Text section is made up of the compiled program code, read in from
non-volatile storage when the program is launched.
 The Data section is made up of the global and static variables, allocated and
initialized prior to executing main.
 The Heap is used for dynamic memory allocation during the process's run
time, and is managed via calls to new, delete, malloc, free, etc.
 The Stack is used for local variables. Space on the stack is reserved for local
variables when they are declared.

Characteristics of a Process
A process has the following attributes.
 Process Id: A unique identifier assigned by the operating system.
 Process State: Can be ready, running, etc.
 CPU Registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
 Accounting Information: Amount of CPU time used for process execution, time limits,
execution ID, etc.
 I/O Status Information: For example, devices allocated to the process, open files, etc
 CPU Scheduling Information: For example, Priority (Different processes may have
different priorities, for example, a shorter process assigned high priority in the shortest
job first scheduling)
States of Process
A process is in one of the following states:
 New: Newly Created Process (or) being-created process.
 Ready: After the creation process moves to the Ready state, i.e. the process is ready for
execution.
 Running: The process currently executing on the CPU (only one process at a time can be
under execution on a single processor).
 Wait (or Block): When a process requests I/O access.
 Complete (or Terminated): The process completed its execution.
 Suspended Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state.
 Suspended Block: When the waiting queue becomes full, some waiting processes are
moved to a suspended block state.
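
The states above and the legal moves between them can be sketched as a small transition table (a simplified model; real kernels track more states and transitions):

```python
# Allowed transitions between the process states listed above (simplified).
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running", "suspended ready"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready", "suspended block"},
    "suspended ready": {"ready"},
    "suspended block": {"suspended ready", "waiting"},
    "terminated": set(),
}

def move(state, target):
    """Validate and perform one state transition."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = move("new", "ready")     # admitted to the system
s = move(s, "running")       # dispatched to the CPU
s = move(s, "waiting")       # process requested I/O
print(s)                     # waiting
```

Note that a new process can never jump straight to running: it must pass through the ready queue first.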

Process Control Block (PCB)


To identify processes, the operating system assigns a process identification number (PID) to
each process. As the operating system supports multiprogramming, it needs to keep track of
all processes. For this task, the process control block (PCB) is used to track each process's
execution status. Each PCB contains information about the process state, program counter,
stack pointer, status of opened files, scheduling information, etc.
Structure of the Process Control Block
A Process Control Block (PCB) is a data structure used by the operating system to manage
information about a process. The process control block keeps track of many important pieces
of information needed to manage processes efficiently. The diagram helps explain some of
these key data items.
 Pointer: It is a stack pointer that is required to be saved when the process is switched
from one state to another to retain the current position of the process.
 Process state: It stores the respective state of the process.
 Process number: Every process is assigned a unique id known as process ID or PID
which stores the process identifier.
 Program counter: Program Counter stores the counter, which contains the address of the
next instruction that is to be executed for the process.
 Registers: When a process is running and its time slice expires, the current values of the
process-specific registers are stored in the PCB and the process is swapped out. When the
process is scheduled to run again, the register values are read from the PCB and written
back to the CPU registers. This is the main purpose of the registers field in the PCB.
 Memory limits: This field contains the information about memory management
system used by the operating system. This may include page tables, segment tables, etc.
 List of Open files: This information includes the list of files opened for a process.
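
The PCB fields described above, and the save/restore at the heart of a context switch, can be sketched as a data structure (field names and register contents are illustrative, not any particular kernel's layout):

```python
from dataclasses import dataclass, field

# A simplified PCB mirroring the fields described above.
@dataclass
class PCB:
    pid: int
    state: str = "new"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)  # saved on context switch
    memory_limits: tuple = (0, 0)                  # e.g. (base, limit)
    open_files: list = field(default_factory=list)
    stack_pointer: int = 0

def context_switch(out_pcb, cpu_registers, in_pcb):
    """Save the outgoing process's registers, restore the incoming one's."""
    out_pcb.registers = dict(cpu_registers)  # save state of current process
    out_pcb.state = "ready"
    cpu_registers.clear()                    # restore state of next process
    cpu_registers.update(in_pcb.registers)
    in_pcb.state = "running"

p1 = PCB(pid=1, registers={"pc": 100, "acc": 7})
p2 = PCB(pid=2, registers={"pc": 500, "acc": 0})
cpu = dict(p1.registers)        # p1 is currently on the CPU
context_switch(p1, cpu, p2)
print(p1.state, p2.state, cpu["pc"])  # ready running 500
```
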
Additional Points to Consider for Process Control Block (PCB)
 Interrupt Handling: The PCB also contains information about the interrupts that a
process may have generated and how they were handled by the operating system.
 Context Switching: The process of switching from one process to another is called
context switching. The PCB plays a crucial role in context switching by saving the
state of the current process and restoring the state of the next process.
 Real-Time Systems: Real-time operating systems may require additional information
in the PCB, such as deadlines and priorities, to ensure that time-critical processes are
executed in a timely manner.
 Virtual Memory Management: The PCB may contain information about a
process’s virtual memory management, such as page tables and page fault handling.
 Inter-Process Communication: The PCB can be used to facilitate inter-process
communication by storing information about shared resources and communication
channels between processes.
 Fault Tolerance: Some operating systems may use multiple copies of the PCB to
provide fault tolerance in case of hardware failures or software errors.

Memory Management in Operating System

In a multiprogramming computer, the Operating System resides in a part of memory, and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called Memory Management. Memory management is a method in the operating
system to manage operations between main memory and disk during process execution. The
main aim of memory management is to achieve efficient utilization of memory.
Logical and Physical Address Space
 Logical Address Space: An address generated by the CPU is known as a “Logical
Address”. It is also known as a Virtual address. Logical address space can be defined as
the size of the process. A logical address can be changed.
 Physical Address Space: An address seen by the memory unit (i.e., the one loaded into
the memory address register of the memory) is commonly known as a "Physical Address".
A physical address is also known as a real address. The set of all physical addresses
corresponding to these logical addresses is known as the physical address space. The
run-time mapping from virtual to physical addresses is done by a hardware device called
the memory management unit (MMU); hence a physical address is computed by the
MMU. The physical address always remains constant.
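
The simplest form of this mapping is base-and-limit translation, sketched below (the base and limit values are made up; real MMUs usually translate through page tables instead):

```python
# Base-and-limit address translation: a minimal MMU sketch.
BASE, LIMIT = 14000, 3000   # process loaded at physical 14000, size 3000

def translate(logical_addr):
    """Map a CPU-generated logical address to a physical address."""
    if not 0 <= logical_addr < LIMIT:
        # Hardware would raise a trap to the OS here.
        raise ValueError(f"trap: address {logical_addr} out of bounds")
    return BASE + logical_addr  # physical = base + logical

print(translate(0))     # 14000
print(translate(2999))  # 16999
```

The limit check is what gives each process a protected address space: a logical address can never reach outside the region the OS assigned.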
Swapping
When a process is executed, it must reside in main memory. Swapping is the act of
temporarily moving a process from main memory, which is fast, to secondary memory,
which is slower. Swapping allows more processes to be run than can fit into memory at one
time. The main cost of swapping is transfer time, and the total time is directly proportional
to the amount of memory swapped. Swapping is also known as roll out, roll in, because if a
higher priority process arrives and wants service, the memory manager can swap out a
lower priority process and then load and execute the higher priority process. After the
higher priority work finishes, the lower priority process is swapped back into memory and
continues its execution.

Storage Management

The operating system abstracts from the physical properties of its storage devices to define a
logical storage unit, the file. The operating system maps files onto physical media and
accesses these files via the storage devices.

File management

File management is one of the most visible components of an operating system. Computers
can store information on several different types of physical media. Magnetic disk, optical
disk, and magnetic tape are the most common. Each medium is controlled by a device, such
as a disk drive or tape drive, that also has its own unique characteristics. These properties
include access speed, capacity, data-transfer rate, and access method (sequential or random).
A file is a collection of related information defined by its creator. Commonly, files represent
programs (both source and object forms) and data. Data files may be numeric, alphabetic,
alphanumeric, or binary. Files may be free-form (for example, text files), or they may be
formatted rigidly (for example, fixed fields). The operating system implements the abstract
concept of a file by managing mass-storage media, such as tapes and disks, and the devices
that control them. In addition, files are normally organized into directories to make them
easier to use. Finally, when multiple users have access to files, it may be desirable to control
which user may access a file and how that user may access it (for example, read, write,
append). The operating system is responsible for the following activities in connection with
file management:

• Creating and deleting files
• Creating and deleting directories to organize files
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable (nonvolatile) storage media

Mass-Storage Management

Most modern computer systems use disks as the principal on-line storage medium for both
programs and data. Most programs—including compilers, assemblers, word processors,
editors, and formatters—are stored on a disk until loaded into memory. They then use the
disk as both the source and destination of their processing. Hence, the proper management of
disk storage is of central importance to a computer system. The operating system is
responsible for the following activities in connection with disk management:

• Free-space management

• Storage allocation

• Disk scheduling

Caching
Cache is a type of memory that is used to increase the speed of data access. Normally, the
data required for any process resides in the main memory. However, it is transferred to the
cache memory temporarily if it is used frequently enough. The process of storing and
accessing data from a cache is known as caching.

 In an uncached system, there is no cache memory. So, all the data required by the
processor during execution is obtained from main memory. This is a comparatively
time consuming process.
 In contrast to this, a cached system contains a cache memory. Any data required by
the processor is searched in the cache memory first. If it is not available there then
main memory is searched. The cache system yields faster results than the uncached
system because cache is much faster than main memory.

Advantages of Cache Memory

 Cache memory is faster than main memory as it is located on the processor chip itself.
Its speed is comparable to the processor registers and so frequently required data is
stored in the cache memory.
 The memory access time is considerably less for cache memory as it is quite fast. This
leads to faster execution of any process. The cache memory can store data temporarily
as long as it is frequently required. After the use of any data has ended, it can be
removed from the cache and replaced by new data from main memory.
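
The lookup order described above (cache first, then main memory) can be sketched as a tiny read-through cache. The memory contents, cache size, and FIFO eviction policy are illustrative; real caches are hardware structures with set-associative lookup and policies like LRU:

```python
# A tiny read-through cache: look in the cache first, fall back to
# (slow) main memory on a miss, evicting the oldest entry when full.
main_memory = {addr: addr * 10 for addr in range(1000)}  # fabricated contents
cache = {}
CACHE_SIZE = 4
hits = misses = 0

def load(addr):
    global hits, misses
    if addr in cache:                 # cache hit: fast path
        hits += 1
        return cache[addr]
    misses += 1                       # cache miss: go to main memory
    if len(cache) >= CACHE_SIZE:
        cache.pop(next(iter(cache)))  # evict oldest entry (FIFO)
    cache[addr] = main_memory[addr]
    return cache[addr]

for addr in (5, 6, 5, 5, 6):          # repeated accesses hit the cache
    load(addr)
print(hits, misses)  # 3 2
```

Only the first access to each address pays the main-memory cost; the repeats are served from the cache, which is the whole point of caching frequently used data.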

What is Protection and Security in Operating Systems?

OS uses two sets of techniques to counter threats to information namely:

 Protection
 Security

Protection

Protection tackles the system's internal threats. It provides a mechanism for controlling access
to processes, programs, and user resources. In simple words, it specifies which files a specific
user can access, view, and modify, to maintain the proper functioning of the system. It
allows the safe sharing of a common physical address space or a common logical address
space, which means that multiple users can access the shared memory without interfering
with one another.

Let's take an example for a better understanding, suppose In a small organization there are
four employees p1, p2, p3, p4, and two data resources r1 and r2. The various departments
frequently exchange information but not sensitive information between all employees. The
employees p1 and p2 can only access r1 data resources and employees p3 and p4 can only
access r2 resources. If the employee p1 tries to access the data resource r2, then the employee
p1 is restricted from accessing that resource. Hence, p1 will not be able to access the r2
resource.
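
The example above is exactly an access-control matrix, and can be sketched as one (the employee and resource names come from the example; the dictionary encoding is illustrative):

```python
# Access-control matrix for the organization example: which employees
# may access which data resources.
access = {
    "p1": {"r1"},
    "p2": {"r1"},
    "p3": {"r2"},
    "p4": {"r2"},
}

def can_access(employee, resource):
    """Protection check: is this employee allowed to use this resource?"""
    return resource in access.get(employee, set())

print(can_access("p1", "r1"))  # True
print(can_access("p1", "r2"))  # False: p1 is restricted from r2
```

In a real OS the same idea appears as file permission bits or access-control lists, checked by the kernel on every open.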

Security

Security tackles the system's external threats. System resources such as saved data, disks,
memory, etc. are secured by the security system against harmful modification, unauthorized
access, and inconsistency. It provides mechanisms (encryption and authentication) to verify
the user before allowing access to the system. Security rests on three attributes:
confidentiality (prevention of unauthorized access to information), integrity (prevention of
unauthorized modification), and availability (prevention of unauthorized withholding of
resources).

Operating-System Services

An operating system is software that acts as an intermediary between the user and computer
hardware. It is a program with the help of which we are able to run various applications. It is
the one program that is running all the time. Every computer must have an operating system
to smoothly execute other programs.

Services of Operating System


 Program execution

 Input Output Operations


 Communication between Process
 File Management
 Memory Management
 Process Management
 Security and Privacy
 Resource Management
 User Interface
 Networking
 Error handling
 Time Management
Program Execution
It is the Operating System that manages how a program is going to be executed. It loads the
program into the memory after which it is executed. The order in which they are executed
depends on the CPU Scheduling Algorithms. A few are FCFS, SJF, etc. When the program is
in execution, the Operating System also handles deadlock i.e. no two processes come for
execution at the same time. The Operating System is responsible for the smooth execution of
both user and system programs. The Operating System utilizes various resources available for
the efficient running of all types of functionalities.
Input Output Operations
Operating System manages the input-output operations and establishes communication
between the user and device drivers. Device drivers are software that is associated with
hardware that is being managed by the OS so that the sync between the devices works
properly. It also provides access to input-output devices to a program when needed.
Communication between Processes
The Operating system manages the communication between processes. Communication
between processes includes data transfer among them. If the processes are not on the same
computer but connected through a computer network, then also their communication is
managed by the Operating System itself.
File Management
The operating system helps in managing files also. If a program needs access to a file, it is the
operating system that grants access. These permissions include read-only, read-write, etc. It
also provides a platform for the user to create and delete files. The operating system is
responsible for making decisions regarding the storage of all types of data and files, i.e., on
a floppy disk, hard disk, pen drive, etc., and it decides how the data should be manipulated
and stored.
Memory Management
Let's understand memory management by the OS in a simple way. Imagine a cricket team
with a limited number of players. The team manager (the OS) decides whether an upcoming
player will be in the playing 11, the playing 15, or not included in the team at all, based on
his performance. In the same way, the OS first checks whether an upcoming program fulfils
all the requirements to get memory space; if so, it checks how much memory space will be
sufficient for the program and then loads the program into memory at a certain location. It
thus prevents a program from using unnecessary memory.
Process Management
Let's understand process management in a unique way. Imagine our kitchen stove as the
CPU, where all cooking (execution) really happens, and the chef as the OS, who uses the
kitchen stove (CPU) to cook different dishes (programs). The chef (OS) has to cook many
dishes (programs), so he ensures that no particular dish (program) takes unnecessarily long
and that all dishes (programs) get a chance to be cooked (executed). The chef (OS) schedules
time for all dishes (programs) so that the kitchen (the whole system) runs smoothly, and thus
cooks (executes) all the different dishes (programs) efficiently.
Security and Privacy
 Security: The OS keeps our computer safe from unauthorized users by adding a security
layer to it. Security is a layer of protection that shields the computer from threats such as
viruses and hackers. The OS provides defenses like firewalls and antivirus software to
ensure the safety of the computer and of personal information.
 Privacy: The OS gives us the facility to keep our essential information hidden, like
having a lock on our door where only you can enter. It respects our secrets and provides
the facility to keep them safe.
Resource Management
System resources are shared between various processes. It is the Operating system that
manages resource sharing. It also manages the CPU time among processes using CPU
Scheduling Algorithms. It also helps in the memory management of the system. It also
controls input-output devices. The OS also ensures the proper use of all the resources
available by deciding which resource to be used by whom.
User Interface
A user interface is essential, and all operating systems provide one. Users interact with the
operating system either through a command-line interface (CLI) or through a graphical user
interface (GUI). The command interpreter executes the next user-specified command; a GUI
offers the user a mouse-based window and menu system as an interface.
Networking
This service enables communication between devices on a network, such as connecting to the
internet, sending and receiving data packets, and managing network connections.
Error Handling
The operating system also handles errors occurring in the CPU, in input-output devices,
etc. It ensures that an error does not occur frequently and fixes the errors. It also prevents
processes from reaching a deadlock, and it looks for any type of error or bug that can occur
during any task. A well-secured OS sometimes also acts as a countermeasure to prevent any
sort of breach of the computer system from an external source and to handle such breaches.
Time Management
Imagine a traffic light as the OS, which tells each car (program) whether it should
stop (red => waiting queue), get ready (yellow => ready queue), or move (green => under
execution). The light (control) changes after a certain interval of time on each side of the
road (computer system) so that the cars (programs) from all sides of the road move smoothly
without congestion.

What is a System Call?

A system call is the method by which a computer program requests a service from the kernel
of the operating system on which it is running. In other words, it is the interface through
which programs interact with the operating system's kernel.

Features of System Calls


 Interface: System calls provide a well-defined interface between user programs and the
operating system. Programs make requests by calling specific functions, and the
operating system responds by executing the requested service and returning a result.
 Protection: System calls are used to access privileged operations that are not available
to normal user programs. The operating system uses this privilege to protect the system
from malicious or unauthorized access.
 Kernel Mode: When a system call is made, the program is temporarily switched from
user mode to kernel mode. In kernel mode, the program has access to all system
resources, including hardware, memory, and other processes.
 Context Switching: A system call requires a context switch, which involves saving the
state of the current process and switching to the kernel mode to execute the requested
service. This can introduce overhead, which can impact system performance.
 Error Handling: System calls can return error codes to indicate problems with the
requested service. Programs must check for these errors and handle them appropriately.
 Synchronization: System calls can be used to synchronize access to shared resources,
such as files or network connections. The operating system provides synchronization
mechanisms, such as locks or semaphores, to ensure that multiple programs can access
these resources safely.

Types of System Calls

There are commonly five types of system calls. These are as follows:

1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication

Process Control

Process control is the class of system calls used to direct processes. Examples of process
control include creating and loading a process, executing it, and ending, aborting, or
terminating it.

File Management

File management is the class of system calls used to handle files. Examples of file
management include creating files, deleting files, opening, closing, reading, and writing.

Device Management

Device management is the class of system calls used to deal with devices. Examples of
device management include requesting a device, releasing a device, reading, writing, and
getting device attributes.
Information Maintenance

Information maintenance is the class of system calls used to maintain information. Examples
of information maintenance include getting or setting the time or date and getting or setting
system data.

Communication

Communication is the class of system calls used for inter-process communication. Examples
of communication include creating and deleting communication connections and sending
and receiving messages.

Category                 Windows                          Unix

Process Control          CreateProcess()                  fork()
                         ExitProcess()                    exit()
                         WaitForSingleObject()            wait()
File Manipulation        CreateFile()                     open()
                         ReadFile()                       read()
                         WriteFile()                      write()
                         CloseHandle()                    close()
Device Management        SetConsoleMode()                 ioctl()
                         ReadConsole()                    read()
                         WriteConsole()                   write()
Information Maintenance  GetCurrentProcessID()            getpid()
                         SetTimer()                       alarm()
                         Sleep()                          sleep()
Communication            CreatePipe()                     pipe()
                         CreateFileMapping()              shmget()
                         MapViewOfFile()                  mmap()
Protection               SetFileSecurity()                chmod()
                         InitializeSecurityDescriptor()   umask()
                         SetSecurityDescriptorGroup()     chown()

open()

The open() system call allows you to access a file on a file system. It allocates resources to
the file and provides a file descriptor that the process may refer to. A file may be opened by
many processes at once or restricted to a single process; it all depends on the file system and
its structure.

read()

It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.

The file to be read is identified by its file descriptor, which is obtained by opening the file
with open() before reading.

wait()

In some systems, a process may have to wait for another process to complete its execution
before proceeding. When a parent process makes a child process, the parent process
execution is suspended until the child process is finished. The wait() system call is used to
suspend the parent process. Once the child process has completed its execution, control is
returned to the parent process.

write()

It is used to write data from a user buffer to a device like a file. This system call is one way
for a program to generate data. It takes three arguments in general:

o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.

fork()

Processes create copies of themselves using the fork() system call. It is one of the most
common ways to create processes in operating systems. After fork(), the parent and child
processes run concurrently; if the parent needs the child's result, it calls wait() to suspend
itself until the child completes, at which point control returns to the parent.
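
A minimal fork()/wait() sketch (POSIX only; the exit status 42 is an arbitrary illustration):

```python
import os

# fork() returns 0 in the child and the child's PID in the parent;
# wait() is what actually suspends the parent until the child exits.
pid = os.fork()
if pid == 0:
    os._exit(42)                       # child: terminate with status 42
child_pid, status = os.wait()          # parent: suspend until child finishes
exit_code = os.WEXITSTATUS(status)     # decode the child's exit status
print(exit_code)  # 42
```
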
close()

It is used to end file system access. When this system call is invoked, it signifies that the
program no longer requires the file; as a result, the buffers are flushed, the file metadata is
updated, and the file's resources are de-allocated.

exec()

When an executable file replaces an earlier executable file in an already executing process,
this system call is invoked. As a new process is not created, the old process identifier stays,
but the new program replaces the old one's data, stack, heap, etc.

exit()

The exit() system call is used to end program execution. This call indicates that the thread's
execution is complete, which is especially useful in multi-threaded environments. The
operating system reclaims the resources used by the process after the exit() call.

Process Scheduling

The operating system uses the various schedulers for process scheduling described below.

Long term scheduler

Long term scheduler is also known as job scheduler. It chooses the process from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.
This scheduler decides which programs should be executed from the pool of programs
waiting in the job queue. The long-term scheduler considers factors such as system
utilization, available memory, and the types of programs that are currently running. It
decides which programs should be admitted to the system, and it loads them into memory
for execution.

Short term scheduler

Once a process is admitted to the system, the short-term scheduler (also known as the CPU
scheduler) takes over. The short-term scheduler is responsible for selecting which process
should be executed next from the pool of processes that are currently waiting in the ready
queue. This scheduler is a faster and more frequent process than the long-term scheduler, as
it operates on a millisecond or nanosecond time scale. The short-term scheduler considers
factors such as process priority, remaining burst time, and the amount of CPU time that
each process has consumed so far.

Medium term scheduler

The medium-term scheduler is a less common type of scheduler that is used in some
operating systems. The medium-term scheduler is responsible for temporarily removing
processes from the memory when there is a shortage of memory resources. It does this by
swapping out some of the processes from the memory and storing them on the hard disk.
This frees up memory resources for other processes to use. Later, when the swapped-out
processes are needed again, the medium-term scheduler swaps them back into memory.

Process Queues

The operating system manages various types of queues, one for each of the process states.
The PCB of a process is stored in the queue of the process's current state. If a process moves
from one state to another, its PCB is unlinked from the old state's queue and added to the
queue of the state into which the transition is made.

Job Queue

Initially, all processes are stored in the job queue. The job queue is maintained in secondary
memory. The long term scheduler (job scheduler) picks some of the processes and puts them
in primary memory.

Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a
process from the ready queue and dispatches it to the CPU for execution.

Waiting Queue

When a process needs an I/O operation in order to complete its execution, the OS changes
the state of the process from running to waiting. The context (PCB) associated with the
process is stored in the waiting queue and is used by the processor when the process
finishes its I/O.

Arrival Time

The time at which the process enters into the ready queue.

Burst Time

The total amount of CPU time required to execute the whole process is called the burst
time. It does not include waiting time. Since it is difficult to estimate a process's
execution time before it actually runs, scheduling algorithms based purely on burst time
cannot be implemented exactly in practice.

Completion Time

The time at which the process enters into the completion state or the time at which the
process completes its execution.
Turn around time

The total amount of time spent by the process from its arrival to its completion.

Waiting Time

The total amount of time for which the process waits for the CPU to be assigned.

Response Time

The difference between the arrival time and the time at which the process first gets the CPU.
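The metrics defined above can be computed for a concrete schedule. The sketch below assumes a simple non-preemptive first-come-first-served run with made-up arrival and burst times; for non-preemptive scheduling, response time equals waiting time.

```python
processes = [        # (pid, arrival time, burst time) - illustrative values
    ("P1", 0, 4),
    ("P2", 1, 3),
    ("P3", 2, 1),
]

results = {}
clock = 0
for pid, arrival, burst in processes:
    start = max(clock, arrival)           # moment the CPU is first assigned
    completion = start + burst            # completion time
    turnaround = completion - arrival     # arrival -> completion
    waiting = turnaround - burst          # time spent waiting for the CPU
    response = start - arrival            # first CPU assignment - arrival
    results[pid] = (completion, turnaround, waiting, response)
    clock = completion

print(results["P2"])   # (7, 6, 3, 3)
```

P2 arrives at time 1 but only gets the CPU at time 4 when P1 finishes, so its turnaround is 6, of which 3 is waiting.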

CPU Scheduling

In uniprogramming systems like MS-DOS, when a process waits for an I/O operation to
finish, the CPU remains idle. Uniprogramming is a type of operating system in which only
one program can be executed at a time. This idle time is an overhead, since it wastes CPU
time and causes the problem of starvation.

 "overhead" refers to the extra resources that are required by the system to perform
tasks beyond what is required for the user's program. Overhead can include the time,
memory, processing power, or other system resources that are needed to manage and
run the operating system itself.
 However, in multiprogramming systems, the CPU does not remain idle while a process
waits; it starts executing other processes instead.
 The operating system has to decide which process the CPU will be given to.
 In multiprogramming systems, the OS schedules the processes on the CPU to have
the maximum utilization of it and this procedure is called CPU scheduling. The
Operating system uses various scheduling algorithm to schedule the processes.
 It is the task of the short-term scheduler to schedule the CPU among the processes
present in the ready queue.
 Whenever the running process requests an I/O operation, the short-term scheduler
saves the current context of the process (its PCB) and changes its state from running
to waiting. While the process is in the waiting state, the short-term scheduler picks
another process from the ready queue and assigns the CPU to it. This procedure is
called context switching.
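The context-switch procedure described in the last bullet can be sketched as follows. All structures here are invented for illustration; a real kernel saves registers and memory-management state into the PCB rather than mutating a dictionary.

```python
from collections import deque

ready_queue = deque([{"pid": "P2", "state": "ready"}])
waiting_queue = deque()

def context_switch(running):
    # Save the running process's context (PCB) and mark it waiting.
    running["state"] = "waiting"
    waiting_queue.append(running)
    # Short-term scheduler picks the next process from the ready queue.
    nxt = ready_queue.popleft()
    nxt["state"] = "running"
    return nxt

current = {"pid": "P1", "state": "running"}
current = context_switch(current)          # P1 requests I/O; P2 takes the CPU
print(current["pid"], current["state"])    # P2 running
```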
Operation on a Process
The execution of a process is a complex activity involving various operations. The
following operations are performed during the execution of a process:

Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system, the
user, or the old process itself. There are several events that lead to the process creation.
Some of the events are the following:
1. When we start the computer, the system creates several background processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. A batch system initiates a batch job.
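Event 3 above, a process creating a new process itself, can be demonstrated with a minimal sketch: the running Python process spawns a child process that prints a message. The message text is invented for this example.

```python
import subprocess
import sys

# The current (parent) process creates a new child process.
# sys.executable is the running Python interpreter.
result = subprocess.run(
    [sys.executable, "-c", "print('child process running')"],
    capture_output=True, text=True,
)
print(result.stdout.strip())   # child process running
```

On Unix-like systems the kernel-level mechanism behind this is the fork/exec pair of system calls; `subprocess.run` is used here only because it is portable.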
Scheduling/Dispatching
The event or activity in which the state of the process is changed from ready to
running: the operating system moves the process from the ready state into the running
state. Dispatching is done by the operating system when resources are free or the
process has a higher priority than the ongoing process. There are various other cases
in which the running process is preempted and a process in the ready state is
dispatched by the operating system.
Blocking
When a process invokes an input-output system call that blocks it, the operating system
puts the process into blocked mode, in which the process waits for the input-output to
complete. Thus, on the demand of the process itself, the operating system blocks the
process and dispatches another process to the processor. Hence, in the process-blocking
operation, the operating system puts the process in a 'waiting' state.
Preemption
When a timeout occurs, meaning the process has not finished within the allotted time
interval and the next process is ready to execute, the operating system preempts the
running process. This operation is only valid where CPU scheduling supports preemption.
It typically happens in priority scheduling, where the arrival of a higher-priority
process preempts the ongoing one. Hence, in the preemption operation, the operating
system puts the process in a 'ready' state.
Process Termination
Process termination is the activity of ending a process. In other words, process
termination is the release of the computer resources taken by the process during its
execution. Like creation, termination may also be triggered by several events. Some
of them are:
1. The process completes its execution fully and it indicates to the OS that it has finished.
2. The operating system itself terminates the process due to service errors.
3. There may be a problem in hardware that terminates the process.
4. One process can be terminated by another process.
