Key Operations of Operating Systems
An operating system is a construct that allows user application programs to interact with
the system hardware. The operating system does not perform useful work by itself; rather, it
provides an environment in which applications and programs can do useful work. The major
operations of the operating system are process management, memory management, device
management, and file management.
Process Management
The operating system is responsible for managing processes, i.e., assigning the processor to
one process at a time. This is known as process scheduling. The different algorithms used for
process scheduling are FCFS (first come, first served), SJF (shortest job first), priority
scheduling, and round robin scheduling. There are many scheduling queues that are used to
handle processes in process management. When processes enter the system, they are put
into the job queue. The processes that are ready to execute in the main memory are kept in
the ready queue. The processes that are waiting for the I/O device are kept in the device
queue.
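To make the scheduling idea concrete, here is a minimal sketch of FCFS scheduling. The process names, arrival times, and burst times are made-up illustrative values, not part of the original text.

```python
# A minimal sketch of FCFS (first come, first served) scheduling.
# Each process runs to completion in arrival order.

def fcfs(processes):
    """processes: list of (name, arrival_time, burst_time), sorted by arrival."""
    clock = 0
    waiting_times = {}
    for name, arrival, burst in processes:
        clock = max(clock, arrival)      # CPU may sit idle until the process arrives
        waiting_times[name] = clock - arrival
        clock += burst                   # run the process to completion
    return waiting_times

jobs = [("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 8)]
print(fcfs(jobs))  # {'P1': 0, 'P2': 4, 'P3': 6}
```

P2 arrives at time 1 but waits until P1 finishes at time 5, giving it a waiting time of 4; SJF or round robin would order the same jobs differently.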
Memory Management
Memory management plays an important part in the operating system. It deals with memory and
with moving processes from disk to primary memory for execution and back again.
The activities performed by the operating system for memory management are −
The operating system assigns memory to processes as required. This can be done
using best fit, first fit, and worst fit algorithms.
All memory is tracked by the operating system, i.e., it notes which parts of memory
are in use by processes and which are free.
The operating system deallocates memory from processes as required. This may
happen when a process has been terminated or if it no longer needs the memory.
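The three allocation strategies above can be sketched over a list of free-hole sizes. The hole sizes and the request size below are illustrative values.

```python
# First fit, best fit, and worst fit over a list of free-hole sizes.
# Each function returns the index of the chosen hole, or None if no hole fits.

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i                 # first hole big enough
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None   # largest hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))  # 1 (the 500-unit hole)
print(best_fit(holes, 212))   # 3 (the 300-unit hole)
print(worst_fit(holes, 212))  # 4 (the 600-unit hole)
```

For the same request, the three strategies can choose three different holes, which is why they trade off differently between speed and fragmentation.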
Device Management
There are many I/O devices handled by the operating system, such as the mouse, keyboard, disk
drive, etc. Different device drivers can be connected to the operating system to
handle a specific device. The device controller is an interface between the device and the
device driver. User applications access the I/O devices through the device drivers,
which are device-specific code.
File Management
Files are used by the operating system to provide a uniform view of data storage. All files
are mapped onto physical devices that are usually non-volatile, so data is safe in the case of
system failure.
The files can be accessed by the system in two ways i.e. sequential access and direct access −
Sequential Access
With sequential access, the information in a file is processed in order: the file's
records are accessed one after another. Most programs, such as editors and
compilers, use sequential access.
Direct Access
In direct access, or relative access, a file's records can be accessed in any order for read
and write operations.
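The two access methods can be demonstrated on the same file with Python's file API. The file name and the fixed 4-byte record size are illustrative choices.

```python
# Sequential vs. direct access on a file of fixed-size records.

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "records.bin")
with open(path, "wb") as f:
    for i in range(5):
        f.write(bytes([i]) * 4)        # five fixed-size 4-byte records

# Sequential access: records are read one after another, in order.
with open(path, "rb") as f:
    first = f.read(4)
    second = f.read(4)

# Direct (relative) access: jump straight to record 3 with seek().
with open(path, "rb") as f:
    f.seek(3 * 4)                      # record_number * record_size
    third_record = f.read(4)

print(first, second, third_record)
```

With fixed-size records, direct access is just an offset computation; that is why databases favor it while compilers and editors read sequentially.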
Process Management
A process is a program in execution. For example, when we write a program in C or
C++ and compile it, the compiler creates binary code. The original code and binary
code are both programs. When we actually run the binary code, it becomes a process.
A process is an ‘active’ entity, whereas a program is considered a ‘passive’
entity. A single program can create many processes when run multiple times;
operating systems have to keep track of all the running processes, schedule them,
and dispatch them one after another. However, each user should feel that they have full
control of the CPU.
Process memory is divided into four sections for efficient working :
The Text section is made up of the compiled program code, read in from
non-volatile storage when the program is launched.
The Data section is made up of the global and static variables, allocated and
initialized prior to executing main.
The Heap is used for dynamic memory allocation by the process during its run
time, and is managed via calls to new, delete, malloc, free, etc.
The Stack is used for local variables. Space on the stack is reserved for local
variables when they are declared.
Characteristics of a Process
A process has the following attributes.
Process Id: A unique identifier assigned by the operating system.
Process State: Can be ready, running, etc.
CPU Registers: Like the Program Counter (CPU registers must be saved and restored
when a process is swapped in and out of the CPU)
Accounting Information: Amount of CPU used for process execution, time limits,
execution ID, etc.
I/O Status Information: For example, devices allocated to the process, open files, etc
CPU Scheduling Information: For example, Priority (Different processes may have
different priorities, for example, a shorter process assigned high priority in the shortest
job first scheduling)
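The attributes above are what an operating system keeps in the process control block (PCB). The sketch below models a PCB as a Python dataclass; the field names and values are illustrative, not a real kernel structure.

```python
# A sketch of a process control block (PCB) holding the attributes above.

from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                        # Process Id
    state: str = "new"              # Process State: new/ready/running/waiting/terminated
    program_counter: int = 0        # CPU Registers (only the PC shown here)
    priority: int = 0               # CPU Scheduling Information
    cpu_time_used: float = 0.0      # Accounting Information
    open_files: list = field(default_factory=list)  # I/O Status Information

pcb = PCB(pid=42, priority=3)
pcb.state = "ready"                 # state transitions update the PCB in place
print(pcb.pid, pcb.state)           # 42 ready
```

When a process is swapped off the CPU, the scheduler saves the register contents into exactly this kind of structure so execution can resume later.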
States of Process
A process is in one of the following states:
New: A newly created process, or a process that is being created.
Ready: After the creation process moves to the Ready state, i.e. the process is ready for
execution.
Running: Currently running process in CPU (only one process at a time can be under
execution in a single processor).
Wait (or Block): When a process requests I/O access.
Complete (or Terminated): The process completed its execution.
Suspended Ready: When the ready queue becomes full, some processes are moved to a
suspended ready state.
Suspended Block: When the waiting queue becomes full, some processes are moved to a
suspended block state.
Memory Management
In a multiprogramming computer, the operating system resides in one part of memory, and the
rest is used by multiple processes. The task of subdividing the memory among different
processes is called memory management. Memory management is the method by which the operating
system manages the movement of processes between main memory and disk during execution. The
main aim of memory management is to achieve efficient utilization of memory.
Logical and Physical Address Space
Logical Address Space: An address generated by the CPU is known as a “Logical
Address”. It is also known as a Virtual address. Logical address space can be defined as
the size of the process. A logical address can be changed.
Physical Address Space: An address seen by the memory unit (i.e., the one loaded into
the memory address register of the memory) is commonly known as a “Physical Address”.
A physical address is also known as a real address. The set of all physical addresses
corresponding to the logical addresses is known as the physical address space. A physical
address is computed by the MMU: the run-time mapping from virtual to physical addresses
is done by a hardware device called the Memory Management Unit (MMU). The physical address
always remains constant.
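The simplest MMU scheme, relocation with a base and a limit register, can be sketched in a few lines. The base and limit values here are illustrative.

```python
# A sketch of run-time address translation with a base + limit MMU.
# Every CPU-generated (logical) address is checked and then relocated.

BASE = 14000    # where the process is loaded in physical memory
LIMIT = 3000    # size of the process's logical address space

def translate(logical_address):
    """Map a CPU-generated logical address to a physical address."""
    if not 0 <= logical_address < LIMIT:
        # Out-of-range access: the hardware traps to the operating system.
        raise ValueError("addressing error: trap to the operating system")
    return BASE + logical_address

print(translate(346))   # 14346
```

The process only ever sees addresses 0 to LIMIT-1; where it actually lives in physical memory (BASE) can change between runs without recompiling the program, which is why the logical address is said to be changeable while each physical address is fixed.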
Swapping
When a process is executed, it must reside in main memory. Swapping is the act of
temporarily moving a process out of main memory, which is fast, into secondary memory,
which is slower, and back again. Swapping allows more processes to be run than can fit
into memory at one time. The main cost of swapping is transfer time, and the total transfer
time is directly proportional to the amount of memory swapped. Swapping is also known as
roll out, roll in, because if a higher-priority process arrives and wants service, the memory
manager can swap out a lower-priority process and then load and execute the higher-priority
process. After the higher-priority work finishes, the lower-priority process is swapped back
into memory and continues its execution.
Storage Management
The operating system abstracts from the physical properties of its storage devices to define a
logical storage unit, the file. The operating system maps files onto physical media and
accesses these files via the storage devices.
File management
File management is one of the most visible components of an operating system. Computers
can store information on several different types of physical media. Magnetic disk, optical
disk, and magnetic tape are the most common. Each medium is controlled by a device, such
as a disk drive or tape drive, that also has its own unique characteristics. These properties
include access speed, capacity, data-transfer rate, and access method (sequential or random).
A file is a collection of related information defined by its creator. Commonly, files represent
programs (both source and object forms) and data. Data files may be numeric, alphabetic,
alphanumeric, or binary. Files may be free-form (for example, text files), or they may be
formatted rigidly (for example, fixed fields). The operating system implements the abstract
concept of a file by managing mass-storage media, such as tapes and disks, and the devices
that control them. In addition, files are normally organized into directories to make them
easier to use. Finally, when multiple users have access to files, it may be desirable to control
which user may access a file and how that user may access it (for example, read, write,
append). The operating system is responsible for the following activities in connection with
file management:
• Creating and deleting files
• Creating and deleting directories
• Supporting primitives for manipulating files and directories
• Mapping files onto secondary storage
• Backing up files on stable (non-volatile) storage
Mass-Storage Management
Most modern computer systems use disks as the principal on-line storage medium for both
programs and data. Most programs—including compilers, assemblers, word processors,
editors, and formatters—are stored on a disk until loaded into memory. They then use the
disk as both the source and destination of their processing. Hence, the proper management of
disk storage is of central importance to a computer system. The operating system is
responsible for the following activities in connection with disk management:
• Free-space management
• Storage allocation
• Disk scheduling
Caching
Cache is a type of memory that is used to increase the speed of data access. Normally, the
data required for any process resides in the main memory. However, it is transferred to the
cache memory temporarily if it is used frequently enough. The process of storing and
accessing data from a cache is known as caching.
In an uncached system, there is no cache memory. So, all the data required by the
processor during execution is obtained from main memory. This is a comparatively
time consuming process.
In contrast to this, a cached system contains a cache memory. Any data required by
the processor is searched in the cache memory first. If it is not available there then
main memory is searched. The cache system yields faster results than the uncached
system because cache is much faster than main memory.
Cache memory is faster than main memory as it is located on the processor chip itself.
Its speed is comparable to the processor registers and so frequently required data is
stored in the cache memory.
The memory access time is considerably less for cache memory as it is quite fast. This
leads to faster execution of any process. The cache memory can store data temporarily
as long as it is frequently required. After the use of any data has ended, it can be
removed from the cache and replaced by new data from the main memory.
Protection
Protection tackles the system's internal threats. It provides a mechanism for controlling access
to processes, programs, and user resources. In simple words, it specifies which files a specific
user can access, view, and modify, to maintain the proper functioning of the system. It
allows the safe sharing of a common physical address space or common logical address
space, which means that multiple users can access the memory safely.
Let's take an example for a better understanding, suppose In a small organization there are
four employees p1, p2, p3, p4, and two data resources r1 and r2. The various departments
frequently exchange information but not sensitive information between all employees. The
employees p1 and p2 can only access r1 data resources and employees p3 and p4 can only
access r2 resources. If the employee p1 tries to access the data resource r2, then the employee
p1 is restricted from accessing that resource. Hence, p1 will not be able to access the r2
resource.
Security
Security tackles the system's external threats. The safety of their system resources such
as saved data, disks, memory, etc. is secured by the security systems against harmful
modifications, unauthorized access, and inconsistency. It provides a mechanism (encryption
and authentication) to authenticate the user before allowing access to the system. Security
can be achieved through three attributes: confidentiality (prevention of unauthorized access
to resources), integrity (prevention of unauthorized modification),
and availability (prevention of unauthorized withholding of resources).
Operating-System Services
An operating system is software that acts as an intermediary between the user and computer
hardware. It is a program with the help of which we are able to run various applications. It is
the one program that is running all the time. Every computer must have an operating system
to smoothly execute other programs.
A system call is the method by which a computer program requests a service from the kernel of
the operating system on which it is running. In other words, it is the way programs interact
with the operating system.
There are commonly five types of system calls. These are as follows:
1. Process Control
2. File Management
3. Device Management
4. Information Maintenance
5. Communication
Process Control
Process control system calls are used to direct processes. Examples include creating a
process, loading, executing, aborting, ending, and terminating a process.
File Management
File management system calls are used to handle files. Examples include creating, deleting,
opening, closing, reading, and writing files.
Device Management
Device management system calls are used to deal with devices. Examples include reading from
and writing to a device, getting device attributes, and releasing a device.
Information Maintenance
Information maintenance system calls are used to maintain information. Examples include
getting or setting the time and date, and getting or setting system data.
Communication
Communication system calls are used for interprocess communication. Examples include
creating and deleting communication connections, and sending and receiving messages.
open()
The open() system call allows a process to access a file on a file system. It allocates
resources to the file and provides a handle, the file descriptor, that the process can refer
to. A file may be opened by many processes at once or by a single process only; it depends on
the file system and the file's structure.
read()
It is used to obtain data from a file on the file system. It accepts three arguments in general:
o A file descriptor.
o A buffer to store read data.
o The number of bytes to read from the file.
The file must first be opened using open(); the file descriptor it returns identifies the
file for subsequent reads.
wait()
In some systems, a process may have to wait for another process to complete its execution
before proceeding. When a parent process creates a child process and then calls wait(), the
parent's execution is suspended until the child process is finished. Once the child process
has completed its execution, control is returned to the parent process.
write()
It is used to write data from a user buffer to a device like a file. This system call is one way
for a program to generate data. It takes three arguments in general:
o A file descriptor.
o A pointer to the buffer in which data is saved.
o The number of bytes to be written from the buffer.
fork()
Processes generate clones of themselves using the fork() system call. It is one of the most
common ways to create processes in operating systems. After fork(), the parent and child
run concurrently; if the parent needs to wait for the child, it calls wait(), which suspends
it until the child completes, at which point control returns to the parent process.
close()
It is used to end file system access. When this system call is invoked, it signifies that the
program no longer requires the file, and the buffers are flushed, the file information is
altered, and the file resources are de-allocated as a result.
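The open()/write()/read()/close() sequence can be demonstrated with Python's os module, which wraps the underlying system calls on POSIX systems. The file name and contents are illustrative.

```python
# The open/write/read/close system-call sequence via the os module.

import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # open(): returns a file descriptor
os.write(fd, b"hello, kernel")                 # write(): fd + buffer of bytes
os.close(fd)                                   # close(): flush and release resources

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 64)                         # read(): fd + max bytes to read
os.close(fd)
print(data)                                    # b'hello, kernel'
```

Note how every call after open() takes the file descriptor, not the file name: the descriptor is the handle the kernel hands back for this process's access to the file.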
exec()
This system call is invoked when an executable file replaces the one already running in an
existing process. No new process is built, so the old process identifier stays, but the new
program replaces the process's text, data, heap, and stack.
exit()
The exit() system call is used to end program execution. This call indicates that the
thread's execution is complete, which is especially useful in multi-threaded environments.
The operating system reclaims the resources used by the process after the exit() system
call.
Process Scheduling
The operating system uses the various schedulers described below for process scheduling.
Long term scheduler is also known as job scheduler. It chooses the process from the pool
(secondary memory) and keeps them in the ready queue maintained in the primary memory.
This scheduler decides which programs should be executed from the pool of programs
waiting in the job queue. The long-term scheduler considers factors such as system
utilization, available memory, and the types of programs that are currently running. It
decides which programs should be admitted to the system, and it loads them into memory
for execution.
Once a process is admitted to the system, the short-term scheduler (also known as the CPU
scheduler) takes over. The short-term scheduler is responsible for selecting which process
should be executed next from the pool of processes that are currently waiting in the ready
queue. This scheduler is a faster and more frequent process than the long-term scheduler, as
it operates on a millisecond or nanosecond time scale. The short-term scheduler considers
factors such as process priority, remaining burst time, and the amount of CPU time that
each process has consumed so far.
The medium-term scheduler is a less common type of scheduler that is used in some
operating systems. The medium-term scheduler is responsible for temporarily removing
processes from the memory when there is a shortage of memory resources. It does this by
swapping out some of the processes from the memory and storing them on the hard disk.
This frees up memory resources for other processes to use. Later, when the swapped-out
processes are needed again, the medium-term scheduler swaps them back into memory.
Process Queues
The operating system manages various types of queues for each of the process states. The
PCB related to the process is also stored in the queue of the same state. If the process is
moved from one state to another state then its PCB is also unlinked from the corresponding
queue and added to the other state queue in which the transition is made.
Job Queue
Initially, all processes are stored in the job queue. It is maintained in secondary
memory. The long-term scheduler (job scheduler) picks some of the processes and puts them in
primary memory.
Ready Queue
The ready queue is maintained in primary memory. The short-term scheduler picks a job from
the ready queue and dispatches it to the CPU for execution.
Waiting Queue
When a process needs some I/O operation in order to complete its execution, the OS changes
the state of the process from running to waiting. The context (PCB) associated with the
process is stored in the waiting queue and will be used by the processor when the
process finishes the I/O.
Arrival Time
The time at which the process enters into the ready queue.
Burst Time
The total amount of CPU time required to execute the whole process is called the
burst time. It does not include waiting time. It is difficult to know the execution
time of a process before executing it; hence, scheduling algorithms based purely on
burst time cannot be implemented exactly in practice.
Completion Time
The time at which the process enters into the completion state or the time at which the
process completes its execution.
Turn around time
The total amount of time spent by the process from its arrival to its completion.
Waiting Time
The total amount of time for which the process waits for the CPU to be assigned.
Response Time
The difference between the arrival time and the time at which the process first gets the CPU.
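The metrics above can be worked through for a single process under non-preemptive scheduling. The arrival, burst, and start times below are illustrative values.

```python
# A worked example of the timing metrics for one process that runs
# without preemption once it first gets the CPU.

arrival = 2        # enters the ready queue at t = 2
burst = 5          # needs 5 units of CPU time
start = 6          # first gets the CPU at t = 6 (CPU was busy until then)
completion = start + burst          # t = 11: runs to completion

turnaround = completion - arrival   # total time in the system
waiting = turnaround - burst        # time spent waiting for the CPU
response = start - arrival          # delay until the first CPU allocation

print(turnaround, waiting, response)   # 9 4 4
```

Waiting time equals response time here only because the process is never preempted; under round robin, a process can wait again after its first slice, making waiting time exceed response time.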
CPU Scheduling
In uniprogramming systems like MS-DOS, when a process waits for an I/O operation to be
done, the CPU remains idle. Uniprogramming is a type of operating system in which only
one program can be executed at a time. This is an overhead, since it wastes time and
causes the problem of starvation.
"overhead" refers to the extra resources that are required by the system to perform
tasks beyond what is required for the user's program. Overhead can include the time,
memory, processing power, or other system resources that are needed to manage and
run the operating system itself.
However, in multiprogramming systems, the CPU doesn't remain idle during the
waiting time of a process; it starts executing other processes.
The operating system has to decide which process the CPU will be given to.
In multiprogramming systems, the OS schedules the processes on the CPU to have
the maximum utilization of it and this procedure is called CPU scheduling. The
Operating system uses various scheduling algorithm to schedule the processes.
It is the task of the short-term scheduler to schedule the CPU among the
processes present in the ready queue.
Whenever the running process requests some I/O operation, the short-term
scheduler saves the current context of the process (in its PCB) and changes its
state from running to waiting. While the process is in the waiting state, the short-term
scheduler picks another process from the ready queue and assigns the CPU to
it. This procedure is called context switching.
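The preempt-save-requeue cycle can be sketched with round robin scheduling, where a context switch happens every time a quantum expires. The quantum and burst values are illustrative, and the "context" saved here is just the remaining burst time.

```python
# A sketch of round-robin scheduling: each context switch saves the
# running process's remaining work (its "context") and re-queues it.

from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: burst_time}. Returns the order of CPU time slices."""
    ready = deque(bursts.items())
    timeline = []
    while ready:
        name, remaining = ready.popleft()   # dispatch: ready -> running
        run = min(quantum, remaining)
        timeline.append(name)
        remaining -= run
        if remaining > 0:                   # quantum expired: preempt,
            ready.append((name, remaining)) # save context, back of the queue
    return timeline

print(round_robin({"P1": 5, "P2": 3, "P3": 2}, 2))
# ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```

Each re-queued tuple plays the role of a saved PCB: when the process is dispatched again, execution resumes from exactly where the last time slice ended.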
Operation on a Process
The execution of a process is a complex activity. It involves various operations. Following
are the operations that are performed while execution of a process:
Creation
This is the initial step of the process execution activity. Process creation means the
construction of a new process for execution. This might be performed by the system, the
user, or the old process itself. There are several events that lead to the process creation.
Some of the events are the following:
1. When we start the computer, the system creates several background processes.
2. A user may request to create a new process.
3. A process can create a new process itself while executing.
4. A batch system initiates a batch job.
Scheduling/Dispatching
This is the event or activity in which the state of the process is changed from ready to
running, i.e., the operating system moves the process from the ready state into the running
state.
Dispatching is done by the operating system when the resources are free or the process has
higher priority than the ongoing process. There are various other cases in which the process
in the running state is preempted and the process in the ready state is dispatched by the
operating system.
Blocking
When a process invokes an input-output system call, the process is put in blocked mode,
in which it waits for the input-output to complete. On the demand of the process itself,
the operating system blocks the process and dispatches another process to the processor.
Hence, in a process-blocking operation, the operating system puts the process in a
‘waiting’ state.
Preemption
When a timeout occurs, meaning the process has not finished within its allotted time
interval and the next process is ready to execute, the operating system preempts the
running process. This operation is only valid where CPU scheduling supports preemption.
It typically happens in priority scheduling, where the arrival of a high-priority process
preempts the ongoing one. Hence, in a process preemption operation, the operating
system puts the process in a ‘ready’ state.
Process Termination
Process termination is the activity of ending a process. In other words, process
termination is the release of the computer resources taken by the process for its execution.
As with creation, several events may lead to process termination. Some of them are:
1. The process completes its execution fully and it indicates to the OS that it has finished.
2. The operating system itself terminates the process due to service errors.
3. There may be a problem in hardware that terminates the process.
4. One process can be terminated by another process.