
Overview of Operating Systems in COM 311

1. The document discusses the definition, evolution, and functions of operating systems. It defines an operating system as software that manages computer hardware and resources and provides services for other programs.
2. The evolution of operating systems is described from early computers in the 1940s-50s to modern systems. Key developments included time-sharing in the 1960s, personal computers in the 1970s, and Windows and Linux in the 1980s-90s.
3. The main functions of an operating system are described as memory management, processor management, device management, file management, and security, error detection, and coordination between software and users.

Uploaded by

Aremu Ridwan
  • Definition of Operating System
  • Supported OS Features by 1970s
  • Processor and Memory Management
  • Types of Operating Systems
  • Operating System Properties
  • Spooling and Interrupt Handling
  • OS Structure and Architecture
  • System Calls and Processes
  • Process Scheduling
  • Multithreading
  • Process and Thread Differences
  • Threading Issues
  • Mutual Exclusion
  • Process Interaction
  • Scheduling Algorithms
  • Deadlock and Resource Management

COM 311 (OPERATING SYSTEM I)

COM 311 (Operating System I) notes by Orioke O. O. (Mrs.) Page | 1


DEFINITION OF OPERATING SYSTEM

An operating system (OS) is software that manages computer hardware and software resources
and provides common services for computer programs. The operating system is an essential component of
the system software in a computer system. Application programs usually require an operating system to
function.
An operating system can also be defined as a program that manages a computer’s hardware.
It also provides a basis for application programs and acts as an intermediary between the computer user
and the computer hardware.

An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.

[Figure: Structure of an operating system]

Evolution of Operating Systems


The evolution of operating systems is closely tied to the development of computer systems and how users use them. Here is a quick tour through the past fifty years of computing systems, in timeline form.

Early Evolution
 1945: ENIAC, Moore School of Engineering, University of Pennsylvania
 1949: EDSAC and EDVAC
 1949: BINAC, a successor to the ENIAC
 1951: UNIVAC by Remington Rand
 1952: IBM 701
 1956: The interrupt
 1954-1957: FORTRAN was developed

Operating Systems by the late 1950s


By the late 1950s, operating systems had improved considerably and supported the following:
 Single-stream batch processing
 Common, standardized input/output routines for device access
 Program transition capabilities to reduce the overhead of starting a new job
 Error recovery to clean up after a job terminated abnormally
 Job control languages that allowed users to specify the job definition and resource requirements

Operating Systems in the 1960s


 1961: The dawn of minicomputers
 1962: Compatible Time-Sharing System (CTSS) from MIT
 1963: Burroughs Master Control Program (MCP) for the B5000 system
 1964: IBM System/360
 1960s: Disks become mainstream
 1966: Minicomputers get cheaper, more powerful, and really useful
 1967-1968: The mouse
 1964 and onward: Multics
 1969: The UNIX Time-Sharing System from Bell Telephone Laboratories

Supported OS Features by 1970s


 Multi-user and multitasking systems were introduced.
 Dynamic address translation hardware and virtual machines came into the picture.
 Modular architectures came into existence.
 Personal, interactive systems came into existence.

Accomplishments after 1970


 1971: Intel announces the microprocessor
 1972: IBM comes out with VM: the Virtual Machine Operating System
 1973: UNIX 4th Edition is published
 1973: Ethernet
 1974: The Personal Computer Age begins
 1974: Gates and Allen write BASIC for the Altair
 1976: Apple II
 August 12, 1981: IBM introduces the IBM PC
 1983: Microsoft begins work on MS-Windows
 1984: Apple Macintosh comes out
 1990: Microsoft Windows 3.0 comes out
 1991: GNU/Linux
 1992: The first Windows virus comes out
 1993: Windows NT
 2007: iOS
 2008: Android OS

Research and development work still goes on: new operating systems are being developed and existing ones improved to enhance the overall user experience while making operating systems faster and more efficient than ever before.
Importance of the operating system
Following are some of the important functions of an operating system:
 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
Memory Management
Memory management refers to the management of primary memory, or main memory. Main memory is a large array of words or bytes, where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An operating system does the following activities for memory management −
 Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are free.
 In multiprogramming, the OS decides which process will get memory, when, and how much.
 Allocates memory when a process requests it.
 De-allocates memory when a process no longer needs it or has terminated.
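The bookkeeping above can be sketched as a toy first-fit allocator. This is a minimal illustration, not any real OS's allocator; the `MemoryManager` class and its region list are invented for the example:

```python
class MemoryManager:
    """Toy first-fit allocator illustrating OS memory bookkeeping."""

    def __init__(self, size):
        # Each entry: [start, length, owner]; owner is None when free.
        self.regions = [[0, size, None]]

    def allocate(self, pid, length):
        """Give `length` units to process `pid` using first fit."""
        for i, (start, rlen, owner) in enumerate(self.regions):
            if owner is None and rlen >= length:
                self.regions[i] = [start, length, pid]
                if rlen > length:  # keep the unused remainder free
                    self.regions.insert(i + 1, [start + length, rlen - length, None])
                return start
        raise MemoryError("no free region large enough")

    def free(self, pid):
        """De-allocate every region owned by `pid`."""
        for r in self.regions:
            if r[2] == pid:
                r[2] = None

mm = MemoryManager(100)
a = mm.allocate("P1", 30)   # P1 gets [0, 30)
b = mm.allocate("P2", 50)   # P2 gets [30, 80)
mm.free("P1")               # [0, 30) is free again
c = mm.allocate("P3", 20)   # first fit reuses the freed region
```

The `regions` list is exactly the "what part is in use by whom, what part is free" record the text describes; a real allocator would also coalesce adjacent free regions.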



Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An operating system does the following activities for processor management −
 Keeps track of the processor and the status of each process. The program responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when the process no longer requires it.
Device Management
An operating system manages device communication via the devices' respective drivers. It does the following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets a device, when, and for how much time.
 Allocates devices efficiently.
 De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
An operating system does the following activities for file management −
 Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System performs −
 Security − By means of password and similar other techniques, it prevents unauthorized access to
programs and data.
 Control over system performance − Recording delays between request for a service and
response from the system.
 Job accounting − Keeping track of time and resources used by various jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and other debugging and
error detecting aids.
 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer system.

Summary: basic functions of an operating system


1. It boots the computer.
2. It performs basic computer tasks, e.g. managing the various peripheral devices such as the mouse and keyboard.
3. It provides a user interface, e.g. a command line or a graphical user interface (GUI).
4. It handles system resources such as the computer's memory and the sharing of central processing unit (CPU) time among various applications or peripheral devices.
5. It provides file management, which refers to the way that the operating system manipulates, stores, retrieves and saves data.
6. Error handling is done by the operating system. It takes preventive measures whenever required to avoid errors.
Types of Operating System
Operating systems have existed since the very first computer generation and have kept evolving over time.
Operating systems are classified by the number of users they support into single-user, multi-user and network operating systems.
a. Single-user operating systems: Single-user operating systems are designed to run on stand-alone microcomputers or PCs. Their primary function is to manage the resources of that PC and control its activities in their entirety. Examples are: MS-DOS, PC-DOS, CP/M, Windows 3.1, etc.



b. Multi-User Operating System: Multi-user operating systems were prevalent during the era of centralized processing, in which jobs ran on a host (CPU) to which terminals (dumb terminals) were connected. These operating systems run on minicomputer systems. Examples are: PC-MOS and UNIX.
c. Network Operating System: Network operating systems are primarily concerned with communication among the various systems and peripherals on the network, as well as the flow of data across the network and the sharing of resources on the network. Examples are:
i. Windows 3.11 (workgroup), 95, 98, NT, 2000 Pro and Server, XP, 2003, etc.
ii. Novell NetWare, UNIX and Linux

Following are a few of the most commonly used types of operating system.
Batch operating system
Batch processing is a technique in which the operating system collects programs and data together in a batch before processing starts. The operating system does the following activities related to batch processing.
 The OS defines a job, which has a predefined sequence of commands, programs and data, as a single unit.
 The OS keeps a number of jobs in memory and executes them without any manual intervention.
 Jobs are processed in order of submission, i.e., in first-come, first-served fashion.
 When a job completes its execution, its memory is released and the output for the job is copied into an output spool for later printing or processing.

The problems with batch systems are the following:

 Lack of interaction between the user and the job.
 The CPU is often idle, because the mechanical I/O devices are slower than the CPU.
 It is difficult to provide the desired priority.
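The first-come, first-served behaviour described above can be sketched as follows. The job names and run times are made up for illustration:

```python
from collections import deque

def run_batch(jobs):
    """Process (name, runtime) jobs strictly in order of submission.

    Returns each job's completion time, illustrating batch FCFS
    execution with no user interaction once the batch starts."""
    queue = deque(jobs)          # jobs wait in submission order
    clock, completed = 0, []
    while queue:
        name, runtime = queue.popleft()   # first come, first served
        clock += runtime                  # CPU runs the job to completion
        completed.append((name, clock))   # output spooled when the job ends
    return completed

print(run_batch([("job1", 5), ("job2", 2), ("job3", 4)]))
```

Note how a short job ("job2") still waits behind a long one — the priority problem the text mentions.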

Time-sharing operating systems


Time sharing is a technique which enables many people, located at various terminals, to use a particular computer system at the same time. Time-sharing, or multitasking, is a logical extension of multiprogramming. The processor's time is divided into slices; each user is given a time quantum, and the processor is shared among multiple users, hence the term time-sharing. The main difference between multiprogrammed batch systems and time-sharing systems is that in multiprogrammed batch systems the objective is to maximize processor use, whereas in time-sharing systems the objective is to minimize response time.
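The slicing of processor time into quanta can be sketched with a simple round-robin loop. This is a minimal sketch; real time-sharing systems also handle I/O, priorities, and context-switch costs:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Share the CPU by giving each job at most `quantum` units per turn."""
    queue = deque(jobs)          # (name, remaining_time) pairs
    order = []
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        order.append((name, slice_))          # this turn's time slice
        if remaining > slice_:                # unfinished: back of the queue
            queue.append((name, remaining - slice_))
    return order

# Three users sharing one processor with a quantum of 2 units:
print(round_robin([("A", 3), ("B", 5), ("C", 2)], quantum=2))
```

Because every job gets a turn within a few quanta, response time stays short even while a long job ("B") is still running.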

Distributed operating System


A distributed environment refers to multiple independent CPUs or processors in a computer system. The operating system does the following activities related to distributed environments. Data processing jobs are distributed among the processors according to which one can perform each job most efficiently.
 The OS distributes computation logic among several physical processors.
 The processors do not share memory or a clock.
 Instead, each processor has its own local memory.
 The OS manages the communications between the processors. They communicate with each other through various communication lines.
Processors in a distributed system may vary in size and function. These processors are referred to as sites, nodes, computers and so on.

Network operating System


A network operating system runs on a server and provides the server with the capability to manage data, users, groups, security, applications, and other networking functions. The primary purpose of the network operating system is to allow shared file and printer access among multiple computers in a network, typically a local area network (LAN) or a private network, or to other networks. Examples of network operating systems are Microsoft Windows Server 2003, Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
Real Time operating System
A real-time system is defined as a data processing system in which the time interval required to process and respond to inputs is so small that it can control its environment.
Real-time processing is always online, whereas an online system need not be real-time. The time taken by the system to respond to an input and display the required updated information is termed the response time. In this method, the response time is very small compared to online processing.
Real-time systems are used when there are rigid time requirements on the operation of a processor or the flow of data, and real-time systems can be used as control devices in dedicated applications. A real-time operating system has well-defined, fixed time constraints, otherwise the system will fail. Examples include scientific experiments, medical imaging systems, industrial control systems, weapon systems, robots, home-appliance controllers, air traffic control systems, etc.
Real-time systems are usually dedicated, embedded systems. The operating system does the following activities related to real-time system activity.
 In such systems, operating systems typically read from and react to sensor data.
 The operating system must guarantee response to events within fixed periods of time to ensure correct performance.

Operating System - Properties


Following are a few of the very important tasks that an operating system handles.

Multitasking
Multitasking refers to multiple jobs being executed by the CPU simultaneously by switching between them. Switches occur so frequently that the users may interact with each program while it is running.

Multiprogramming
When two or more programs reside in memory at the same time, sharing the processor is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute. The following figure shows the memory layout for a multiprogramming system.



 The operating system keeps several jobs in memory at a time. This set of jobs is a subset of the jobs kept in the job pool. The operating system picks and begins to execute one of the jobs in memory.
 Multiprogrammed systems provide an environment in which the various system resources are utilized effectively, but they do not provide for user interaction with the computer system. Jobs entering the system are kept in memory.
 The operating system picks and begins to execute one of the jobs in memory. Having several programs in memory at the same time requires some form of memory management.
 A multiprogramming operating system monitors the state of all active programs and system resources. This ensures that the CPU is never idle unless there are no jobs.

Advantages
1. High CPU utilization.
2. It appears that many programs are allotted CPU almost simultaneously.

Disadvantages
1. CPU scheduling is required.
2. To accommodate many jobs in memory, memory management is required.

Interactivity
Interactivity refers to the ability of a user to interact with the computer system. The operating system does the following activities related to interactivity.
 The OS provides the user an interface to interact with the system.
 The OS manages input devices to take input from the user. For example, the keyboard.
 The OS manages output devices to show output to the user. For example, the monitor.
 The OS response time needs to be short, since the user submits a request and waits for the result.
Spooling
Spooling is an acronym for simultaneous peripheral operations on-line. Spooling refers to putting the data of various I/O jobs in a buffer. This buffer is a special area in memory or on hard disk which is accessible to I/O devices. The operating system does the following activities related to spooling.
 The OS handles I/O device data spooling, as devices have different data access rates.
 The OS maintains the spooling buffer, which provides a waiting station where data can rest while the slower device catches up.
 The OS enables parallel computation through spooling, since the computer can perform I/O in parallel fashion. It becomes possible to have the computer read data from a tape, write data to disk, and write output to a printer while it is doing its computing task.

Advantages of Spooling
1. The spooling operation uses a disk as a very large buffer.
2. Spooling is capable of overlapping the I/O operations of one job with the processor operations of another job.
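The spooling buffer acting as a waiting station can be sketched like this. The `PrintSpooler` class and the job names are invented for the example; it only illustrates the idea of the CPU depositing output and moving on while the slow device drains the buffer later:

```python
from collections import deque

class PrintSpooler:
    """Toy spooler: the CPU deposits output into a buffer and continues
    computing, while the slower printer drains the buffer later."""

    def __init__(self):
        self.buffer = deque()    # the spool: a waiting station for output

    def spool(self, job_name, data):
        # The CPU returns immediately; it never waits for the printer.
        self.buffer.append((job_name, data))

    def drain(self):
        """Printer catches up: consume everything queued so far, in order."""
        printed = []
        while self.buffer:
            printed.append(self.buffer.popleft())
        return printed

spooler = PrintSpooler()
spooler.spool("job1", "report.txt")   # CPU keeps computing in the meantime
spooler.spool("job2", "chart.txt")
out = spooler.drain()                 # printer processes jobs in order
print(out)
```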

The function of the OS in relation to memory management, interrupt handling and information management
Memory Management
Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and correspondingly updates the status.

The operating system is responsible for the following activities in connection with memory management:
• Keeping track of which parts of memory are currently being used and who is using them
• Deciding which processes (or parts of processes) and data to move into and out of memory
• Allocating and deallocating memory space as needed

Management and interrupt handling


Interrupts: An interrupt is an asynchronous signal from hardware indicating the need for attention or a
synchronous event in software indicating the need for a change in execution. A hardware interrupt causes
the processor to save its state of execution via a context switch, and begin execution of an interrupt
handler. Software interrupts are usually implemented as instructions in the instruction set, which cause a
context switch to an interrupt handler similar to a hardware interrupt. Interrupts are a commonly used
technique for computer multitasking, especially in real-time computing. Such a system is said to be
interrupt-driven. An act of interrupting is referred to as an interrupt request ("IRQ"). Interrupts were
introduced as a way to avoid wasting the processor's valuable time in polling loops, waiting for external
events.
Why interrupts:
People like connecting devices. A computer is much more than the CPU: we have peripherals like the keyboard, mouse, screen, disk drives, scanner, printer, sound card, camera, etc. These devices occasionally need CPU service, but we can't predict when. External events typically occur on a macroscopic timescale, and we want to keep the CPU busy between events.

Interrupt handling
An interrupt handler, also known as an interrupt service routine (ISR), is a callback subroutine in an
operating system or device driver whose execution is triggered by the reception of an interrupt. Interrupt
handlers have a multitude of functions, which vary based on the reason the interrupt was generated and
the speed at which the Interrupt Handler completes its task.

Management of interrupts
Give each device a wire (interrupt line) that it can use to signal the processor.
• When an interrupt is signaled, the processor executes a routine called an interrupt handler to deal with the interrupt.
• There is no overhead when no requests are pending.
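The interrupt-line scheme above can be sketched as a dispatch table mapping line numbers to handlers. The line numbers and handler actions are made up for illustration; real hardware vectors to ISR addresses rather than Python functions:

```python
# Map each interrupt line number to its handler, mirroring how a
# processor vectors to an interrupt service routine (ISR).
handlers = {}
log = []

def register(line, handler):
    """Driver installs its ISR for a given interrupt line."""
    handlers[line] = handler

def raise_interrupt(line):
    """Device signals its line; the processor runs the matching handler.
    No polling happens - nothing runs when no request is pending."""
    handlers[line]()

register(1, lambda: log.append("keyboard ISR ran"))
register(4, lambda: log.append("disk ISR ran"))

raise_interrupt(4)   # disk signals line 4
raise_interrupt(1)   # keyboard signals line 1
print(log)
```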

Structure of an Operating System


An operating system is composed of a kernel, possibly some servers, and possibly some user-level libraries. The kernel provides operating system services through a set of procedures, which may be invoked by user processes through system calls. System calls look like procedure calls when they appear in a program, but transfer control to the kernel when they are invoked at run time. (read is an example of a system call in Unix.)
In some operating systems, the kernel and user processes run in a single (physical or virtual) address space. In these systems, a system call is simply a procedure call. In more robust systems, the kernel runs



in its own address space and in a special privileged mode in which it can execute instructions not available
to ordinary user processes. In such systems, a system call invokes a trap as discussed below.
A trap is a hardware event that invokes a trap handler in the kernel. The trap indicates, in appropriate hardware status fields, the kind of trap. Traps may be generated implicitly by the hardware to signal events such as division by zero and address faults (which we will discuss later), or explicitly by a process when it executes a special instruction. Explicit or user-initiated traps are used to handle system calls. A system call stores the name of the call and its arguments on the stack and generates a user-initiated trap. The trap handler in the kernel knows, from the type of the trap, that it is a user-initiated trap asking for a system call; it finds the name of the system call and calls the appropriate kernel procedure to handle the call, passing it the arguments stored on the stack.
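The dispatch path described above can be sketched as follows. The call names, the dictionary standing in for the stack, and the "kernel procedures" are all invented for the example; real kernels dispatch on a numeric syscall index via a hardware trap:

```python
# Kernel-side table of system call procedures (illustrative only).
def sys_read(fd, count):
    return f"read {count} bytes from fd {fd}"

def sys_write(fd, data):
    return f"wrote {len(data)} bytes to fd {fd}"

SYSCALL_TABLE = {"read": sys_read, "write": sys_write}

def trap_handler(stack):
    """User-initiated trap: the call name and arguments were pushed on
    the 'stack'; the kernel looks up the name and dispatches to the
    matching kernel procedure, passing along the stored arguments."""
    name, args = stack["name"], stack["args"]
    return SYSCALL_TABLE[name](*args)

# A user process 'executes' a system call by trapping into the kernel:
result = trap_handler({"name": "read", "args": (3, 128)})
print(result)
```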

The operating system itself is an arrangement of software, with some sort of access from the user
programs. The software ranges from the tiny operating systems in single chip control systems (less than
2,000 lines of code) up to large mainframe operating system software (more than 1,000,000 lines of
code). The operating system is often the largest software module that runs on a machine, and a lot of time
has been spent studying operating systems and their architectures.
In general, an operating system is a set of event-driven utilities. The events which drive it are:
 System calls by user processes
 I/O (interrupt) requests
 Fault recognition and response

The design of an operating system architecture traditionally follows the separation of concerns principle.


This principle suggests structuring the operating system into relatively independent parts that provide
simple individual features, thus keeping the complexity of the design manageable.
Besides managing complexity, the structure of the operating system can influence key features such as robustness and efficiency.

[Figure: A view of operating system services]

Structure/Organization/Layout of Operating Systems:


 Monolithic (one unstructured program)
 Layered
 Microkernel
 Distributed
 Virtual machines
The monolithic operating systems were developed in the early days of computing. Here the OS could be
thought of as the only task running on the machine. It would call user programs (like subroutines) when
it was not performing system tasks.



These operating systems were large collections of service routines, compiled all at once, and with very little
internal structure. The monolithic operating system has died out over the years, and has been replaced
with more modular architectures.

Layered Systems
A layered design of the operating system architecture attempts to achieve robustness by structuring the
architecture into layers with different privileges. The most privileged layer would contain code dealing with
interrupt handling and context switching, the layers above that would follow with device drivers, memory
management, file systems, user interface, and finally the least privileged layer would contain the
applications.
 With the layered approach, the bottom layer is the hardware, while the highest layer is the user
interface.
o The main advantage is simplicity of construction and debugging.
o The main difficulty is defining the various layers.
o The main disadvantage is that the OS tends to be less efficient than other implementations.
The Microsoft Windows NT operating system is an example: its lowest level is a monolithic kernel, but many OS components sit at a higher level while still being part of the OS. Components are divided into layers grouping similar components. Each layer interacts only with the layer below it (requesting services) and the layer above it (answering requests). The highest layer is made up of the applications, while the lowest layer contains the hardware.
Advantages of layered structure
 Simplicity of construction and debugging
Disadvantages of layered structures
 The OS tends to be less efficient than other implementations, making it slow.
 It may be difficult to define the layers.
[Figure: Layered architecture]

Microkernels architecture



This structures the operating system by removing all nonessential portions from the kernel and implementing them as system- and user-level programs.
 Generally, microkernels provide minimal process and memory management, plus a communications facility.
 Communication between components of the OS is provided by message passing.
The benefits of the microkernel are as follows:
 Extending the operating system becomes much easier.
 Changes to the kernel tend to be fewer, since the kernel is smaller.
 The microkernel also provides more security and reliability.
The main disadvantage is poor performance, due to increased system overhead from message passing.

MACH is a prominent example of a microkernel that has been used in contemporary operating systems,
including the NextStep and OpenStep systems and, notably, OS X. Most research operating systems also
qualify as microkernel operating systems.
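Message passing between OS components can be sketched with simple mailboxes. The server names and message formats here are made up; real microkernels use kernel-mediated IPC primitives, not shared Python dictionaries:

```python
from collections import deque

# Each user-level server owns a mailbox; components never call each
# other directly -- they exchange messages through the (micro)kernel.
mailboxes = {"fs_server": deque(), "mem_server": deque()}

def send(dest, message):
    """Kernel copies the message into the destination's mailbox."""
    mailboxes[dest].append(message)

def receive(dest):
    """Server pulls its next pending request, in arrival order."""
    return mailboxes[dest].popleft()

# A client asks the file-system server to do work on its behalf:
send("fs_server", ("open", "/tmp/data"))
send("fs_server", ("close", 3))
first = receive("fs_server")   # serviced in arrival order
print(first)
```

The cost of these copies on every request is exactly the message-passing overhead cited as the microkernel's main disadvantage.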

Virtual Machines
A virtual machine takes the layered approach to its logical next step. It treats the hardware and the operating system kernel as though they were all hardware.
A virtual machine provides an interface identical to the underlying bare hardware. The operating system host creates the illusion that a process has its own processor (and virtual memory).

[Figure: (a) Non-virtual machine — processes run on a single kernel, which runs directly on the hardware. (b) Virtual machine — several kernels, each with its own processes, run on separate virtual machines provided by a virtual-machine implementation layer on top of the hardware.]



Set of system calls
These architectures take a reverse view of the computer, representing the operating system as being a
collection of utilities or services provided by the computer to the user processes. Most modern operating
systems are of this sort.
This is sometimes called the kernel or nucleus approach. This approach provides a minimal set of
functions, and the rest of the operating system is constructed out of them. When the computer is
executing any of these kernel functions, it is said to be in kernel or supervisor mode. When the computer is
not executing the functions, it is said to be in user mode.

Operating System - Processes


Process
A process is a program in execution. The execution of a process must progress in a sequential fashion.
A process is defined as an entity which represents the basic unit of work to be implemented in the system.
The components of a process are the following.

Object program: the code to be executed.

Data: the data to be used while executing the program.
Resources: the resources required while executing the program.
Status: the status of the process execution. A process can run to completion only when all requested resources have been allocated to it. Two or more processes may execute the same program, each using its own data and resources.

Program
A program by itself is not a process. It is a static entity made up of program statements, while a process is a dynamic entity. A program contains the instructions to be executed by the processor.
A program occupies space at a single place in main memory and stays there. A program does not perform any action by itself.

Process States
As a process executes, it changes state. The state of a process is defined as the current activity of the process.
A process can be in one of the following five states at a time.
New: The process is just being created.
Ready: The process is waiting to be assigned to a processor. Ready processes are waiting to have the processor allocated to them by the operating system so that they can run.
Running: Process instructions are being executed (i.e., the process is currently being executed).
Waiting: The process is waiting for some event to occur before it can continue executing (such as the completion of an I/O operation; the running process may become suspended by invoking an I/O module).
Terminated: The process has finished execution (i.e., the process has been released from the pool of executable processes by the operating system).

Whenever a process changes state, the operating system reacts by placing the process's PCB in the list that corresponds to its new state. Only one process can be running on any processor at any instant, while many processes may be in the ready and waiting states.
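The five states and their legal transitions can be sketched as a small state machine. This is a minimal sketch: the transition table encodes only the textbook diagram, and real schedulers track far more per process:

```python
# Legal transitions between the five process states described above.
TRANSITIONS = {
    "new": {"ready"},
    "ready": {"running"},
    "running": {"ready", "waiting", "terminated"},
    "waiting": {"ready"},
    "terminated": set(),
}

class Process:
    def __init__(self, pid):
        self.pid, self.state = pid, "new"

    def move_to(self, state):
        """Change state, rejecting transitions the diagram doesn't allow."""
        if state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {state}")
        self.state = state

p = Process(1)
p.move_to("ready")      # admitted by the OS
p.move_to("running")    # dispatched to the processor
p.move_to("waiting")    # blocks on an I/O operation
p.move_to("ready")      # I/O completes
print(p.state)
```

Note that only "running" can reach "terminated": a ready or waiting process must be dispatched before it can finish.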

Suspended Processes
Characteristics of a suspended process:
1. A suspended process is not immediately available for execution.
2. The process may or may not be waiting on an event.
3. To prevent its execution, a process may be suspended by the OS, a parent process, the process itself, or an agent.
4. The process may not be removed from the suspended state until the agent orders the removal.
Swapping is used to move all of a process from main memory to disk. The OS swaps out a whole process by putting it in the suspended state and transferring it to disk.
Reasons for process suspension
1. Swapping
2. Timing
3. Interactive user request
4. Parent process request
Swapping: The OS needs to release main memory in order to bring in a process that is ready to execute.
Timing: A process may be suspended while waiting for the next time interval.
Interactive user request: A process may be suspended for debugging purposes by the user.
Parent process request: To modify the suspended process or to coordinate the activity of various descendants.
Process Control Block (PCB)
Each process has a process control block (PCB). The PCB is the data structure used by the operating system to group together all the information it needs about a particular process. A PCB contains the following fields:
 Pointer
 Process State
 Process Number
 Program Counter
 CPU Registers
 Memory Allocation
 Event Information
 List of Open Files

1. Pointer : Points to another process control block. Pointer is used for maintaining the scheduling list.
2. Process State : Process state may be new, ready, running, waiting and so on.
3. Program Counter : It indicates the address of the next instruction to be executed for this process.
4. Event information : For a process in the blocked state this field contains information concerning the
event for which the process is waiting.
5. CPU registers: These include general purpose registers, stack pointers, index registers, accumulators, etc. The number and type of registers depend entirely on the computer architecture.
6. Memory Management Information: This information may include the values of the base and limit registers. It is used to deallocate memory when the process terminates.
7. Accounting Information: This includes the amount of CPU and real time used, time limits, job or process numbers, account numbers, etc. The process control block also includes information about CPU scheduling, I/O resource management, file management, priority, and so on.
The PCB simply serves as the repository for any information that may vary from process to process. When
a process is created, hardware registers and flags are set to the values provided by the loader or linker.
Whenever the process is suspended, the contents of the processor registers are usually saved on the stack, and the pointer to the related stack frame is stored in the PCB. In this way, the hardware state can be restored when the process is scheduled to run again.
Operating System - Process Scheduling
Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy.
Process scheduling is an essential part of a multiprogramming operating system. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Scheduling Queues
Scheduling queues refer to queues of processes or devices. When a process enters the system, it is put into a job queue, which consists of all processes in the system. The operating system also maintains other queues, such as device queues. A device queue holds the processes waiting for a particular I/O device; each device has its own device queue.
The figure below shows the queueing diagram of process scheduling.
 Queues are represented by rectangular boxes.
 The circles represent the resources that serve the queues.
 The arrows indicate the process flow in the system.

Queues are of two types:
 Ready queue
 Device queue
A newly arrived process is put in the ready queue, where it waits until the CPU is allocated to it. Once the CPU is assigned to a process, that process executes. While the process is executing, any one of the following events can occur.
 The process could issue an I/O request and then it would be placed in an I/O queue.
 The process could create a new subprocess and wait for its termination.
 The process could be removed forcibly from the CPU as a result of an interrupt, and put back in the ready queue.
Two State Process Model
The two state process model refers to the running and not-running states, which are described below.

Running: When a new process is created by the operating system, it enters the system in the running state.
Not Running: Processes that are not running are kept in a queue, waiting for their turn to execute. Each entry in the queue is a pointer to a particular process, and the queue is implemented using a linked list. The dispatcher works as follows: when a process is interrupted, it is transferred to the waiting queue; if the process has completed or aborted, it is discarded. In either case, the dispatcher then selects a process from the queue to execute.
Schedulers
Schedulers are special system software which handle process scheduling in various ways. Their main task is to select the jobs to be submitted into the system and to decide which process to run. Schedulers are of
three types:
 Long Term Scheduler
 Short Term Scheduler
 Medium Term Scheduler
Long Term Scheduler
It is also called the job scheduler. The long term scheduler determines which programs are admitted to the system for processing: it selects processes from the queue and loads them into memory for execution, where they become available to the CPU scheduler. The primary objective of the job scheduler is to provide a balanced mix of jobs, such as I/O-bound and processor-bound jobs. It also controls the degree of multiprogramming: if the degree of multiprogramming is stable, then the average rate of process creation must equal the average departure rate of processes leaving the system.
On some systems, the long term scheduler may be absent or minimal; time-sharing operating systems have no long term scheduler. The long term scheduler acts when a process changes state from new to ready.
Short Term Scheduler
It is also called the CPU scheduler. Its main objective is to increase system performance in accordance with a chosen set of criteria. It handles the change from the ready state to the running state of a process: the CPU scheduler selects from among the processes that are ready to execute and allocates the CPU to one of them.
The short term scheduler, also known as the dispatcher, executes most frequently and makes the fine-grained decision of which process to execute next. The short term scheduler is faster than the long term scheduler.
Medium Term Scheduler
Medium term scheduling is part of the swapping function. It removes the processes from the memory. It
reduces the degree of multiprogramming. The medium term scheduler is in charge of handling the
swapped out-processes.
Medium term scheduler is shown in the Figure below

Queueing diagram with medium term scheduler


A running process may become suspended by making an I/O request. A suspended process cannot make any progress towards completion, so in this condition it is removed from memory to make space for other processes. Moving a suspended process to secondary storage is called swapping, and the process is said to be swapped out or rolled out. Swapping may be necessary to improve the process mix.

Comparison between Schedulers

S/n | Long Term | Short Term | Medium Term
1 | It is the job scheduler. | It is the CPU scheduler. | It is the swapping scheduler.
2 | Speed is less than the short term scheduler. | Speed is very fast. | Speed is in between both.
3 | It controls the degree of multiprogramming. | Less control over the degree of multiprogramming. | Reduces the degree of multiprogramming.
4 | Absent or minimal in time-sharing systems. | Minimal in time-sharing systems. | Time-sharing systems use a medium term scheduler.
5 | Selects processes from the pool and loads them into memory for execution. | Selects from among the processes that are ready to execute. | A process can be reintroduced into memory and its execution continued.
6 | Process state change: New to Ready. | Process state change: Ready to Running. | -
7 | Selects a good mix of I/O-bound and CPU-bound processes. | Selects a new process for the CPU quite frequently. | -

Context switch
A context switch is the mechanism of storing and restoring the state, or context, of a CPU in the process control block so that process execution can be resumed from the same point at a later time. Using this technique, a context switcher enables multiple processes to share a single CPU. Context switching is an essential feature of a multitasking operating system.
When the scheduler switches the CPU from executing one process to executing another, the context switcher saves the contents of all processor registers for the process being removed from the CPU in its process descriptor. The context of a process is represented in the process control block of the process. Context switch time is pure overhead, and context switching can significantly affect performance, since modern computers have many general and status registers to be saved.
Context switch times are highly dependent on hardware support. A context switch requires (n + m) × b × K time units to save the state of a processor with n general registers and m status registers, assuming b store operations are required to save each register and each store instruction requires K time units. Some hardware systems employ two or more sets of processor registers to reduce the context switching time. When the process is switched, the following information is stored:
1. Program Counter
2. Scheduling Information
3. Base and limit register value
4. Currently used register
5. Changed State
6. I/O State
7. Accounting
Operation on Processes
Several operations are possible on processes. Processes must be created and deleted dynamically, and the operating system must provide the environment for these operations. We discuss the two main operations on processes:
1. Create a process
2. Terminate a process
Create a Process
The operating system creates a new process with the specified or default attributes and identifier. A process may create several new subprocesses. The syntax for creating a new process is
CREATE( processId, attributes )
Two names are used for the processes involved: the parent process and the child process.

The parent process is the creating process; the child process is created by the parent. A child process may create further subprocesses, so the processes form a tree. When the operating system issues a CREATE system call, it obtains a new process control block from the pool of free memory, fills its fields with the provided and default parameters, and inserts the PCB into the ready list, making the specified process eligible to run.
When a process is created, it requires some parameters: priority, level of privilege, memory requirements, access rights, memory protection information, etc. A process will need certain resources, such as CPU time, memory, files and I/O devices, to complete its operation. When a process creates a subprocess, the subprocess may obtain its resources directly from the operating system; otherwise it uses the resources of its parent process.
When a process creates a new process, two possibilities exist in terms of execution.
1. The parent continues to execute concurrently with its children.
2. The parent waits until some or all of its children have terminated.
For address space, two possibilities occur:
1. The child process is a duplicate of the parent process.
2. The child process has a program loaded into it.
Terminate a Process
The DELETE system call is used to terminate a process. A process may delete itself or be deleted by another process; a process can cause the termination of another process via an appropriate system call. The operating system
reacts by reclaiming all resources allocated to the specified process, closing files opened by or for the
process. PCB is also removed from its place of residence in the list and is returned to the free pool. The
DELETE service is normally invoked as a part of orderly program termination.
Following are the reasons for a parent process to terminate a child process.
1. The task given to the child is no longer required.
2. Child has exceeded its usage of some of the resources that it has been allocated.
3. Operating system does not allow a child to continue if its parent terminates.

Operating System - Multi-Threading


What is Thread?
A thread is a flow of execution through the process code, with its own program counter, system registers and stack. A thread is also called a lightweight process. Threads provide a way to improve application performance through parallelism; they represent a software approach to improving operating system performance by reducing overhead, and a thread is comparable to a classical process.
Each thread belongs to exactly one process, and no thread can exist outside a process. Each thread represents a separate flow of control. Threads have been successfully used in implementing network servers and web servers. They also provide a suitable foundation for parallel execution of applications on shared-memory multiprocessors. The following figure shows the working of single-threaded and multithreaded processes.

Difference between Process and Thread

S.N. | Process | Thread
1 | Process is heavy weight or resource intensive. | Thread is light weight, taking fewer resources than a process.
2 | Process switching needs interaction with the operating system. | Thread switching does not need to interact with the operating system.
3 | In multiple processing environments, each process executes the same code but has its own memory and file resources. | All threads can share the same set of open files and child processes.
4 | If one process is blocked, then no other process can execute until the first process is unblocked. | While one thread is blocked and waiting, a second thread in the same task can run.
5 | Multiple processes without using threads use more resources. | Multiple threaded processes use fewer resources.
6 | In multiple processes, each process operates independently of the others. | One thread can read, write or change another thread's data.
Advantages of Thread
 Thread minimizes context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.
 Economy- It is more economical to create and context switch threads.
 Utilization of multiprocessor architectures to a greater scale and efficiency.

Types of Thread
Threads are implemented in following two ways
 User Level Threads -- User managed threads
 Kernel Level Threads -- Operating System managed threads acting on kernel, an operating
system core.
User Level Thread
With user level threads, all of the work of thread management is done by the application, and the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts. The application begins with a single thread and begins running in that thread.
The following figure shows the user level thread.

Advantages of user level threads over kernel level threads:
1. Thread switching does not require Kernel mode privileges.
2. User level thread can run on any operating system.
3. Scheduling can be application specific.
4. User level threads are fast to create and manage.
Disadvantages of user level thread:
1. In a typical operating system, most system calls are blocking.
2. Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
In kernel level threads, thread management is done by the kernel; there is no thread management code in the application area. Kernel threads are supported directly by the operating system. Any application can be programmed to be multithreaded, and all of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for individual threads within the process.
Scheduling by the kernel is done on a thread basis. The kernel performs thread creation, scheduling and management in kernel space. Kernel threads are generally slower to create and manage than user threads.
Advantages of Kernel level thread:
1. The kernel can simultaneously schedule multiple threads from the same process on multiple processors.
2. If one thread in a process is blocked, the kernel can schedule another thread of the same process.
3. Kernel routines themselves can be multithreaded.
Disadvantages:
1. Kernel threads are generally slower to create and manage than the user threads.
2. Transfer of control from one thread to another within same process requires a mode switch to the
Kernel.
Multithreading Models
Some operating systems provide a combined user level thread and kernel level thread facility; Solaris is a good example of this combined approach. In a combined system, multiple threads within the same application can run in parallel on multiple processors, and a blocking system call need not block the entire process.
There are three multithreading models:
1. Many to many relationship.
2. Many to one relationship.
3. One to one relationship.

1. Many to Many Model

In this model, many user level threads are multiplexed onto a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
The figure below shows the many to many model. In this model, developers can create as many user threads as necessary, and the corresponding kernel threads can run in parallel on a multiprocessor.

2. Many to One Model

The many to one model maps many user level threads to one kernel level thread. Thread management is done in user space. When a thread makes a blocking system call, the entire process blocks. Only one thread can access the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.

Systems whose thread libraries are implemented entirely in user space, on an operating system that does not support kernel threads, use the many to one model.

3. One to One Model


There is a one to one relationship between user level threads and kernel level threads. This model provides more concurrency than the many to one model.

It also allows another thread to run when a thread makes a blocking system call, and it supports multiple threads executing in parallel on multiprocessors. The disadvantage of this model is that creating a user thread requires creating the corresponding kernel thread. OS/2, Windows NT and Windows 2000 use the one to one relationship model.

Difference between User Level & Kernel Level Threads

s/no | User Level Threads | Kernel Level Threads
1 | User level threads are faster to create and manage. | Kernel level threads are slower to create and manage.
2 | Implemented by a thread library at the user level. | Supported directly by the operating system.
3 | User level threads can run on any operating system. | Kernel level threads are specific to the operating system.
4 | Support provided at the user level is called user level threads. | Support provided by the kernel is called kernel level threads.
5 | Multithreaded applications cannot take advantage of multiprocessing. | Kernel routines themselves can be multithreaded.
Threading Issues

Thread cancellation is the task of terminating a thread before it has completed. For example, in a multithreaded environment, several threads may concurrently search through a database; if one thread returns the result, the remaining threads might be cancelled.
Thread cancellation is of two types:
1. Asynchronous cancellation
2. Deferred cancellation
In asynchronous cancellation, one thread immediately terminates the target thread. In deferred cancellation, the target thread periodically checks whether it should terminate, which allows it to terminate itself in an orderly fashion. The problem with asynchronous cancellation is that resources allocated to the thread, or shared data it was updating together with other threads, may be left in an inconsistent state; system-wide resources are not freed if threads are cancelled asynchronously. Nevertheless, most operating systems allow a process or thread to be cancelled asynchronously.

Principle of Concurrency
In a single-processor multiprogramming system, processes are interleaved in time to yield the appearance of simultaneous execution. Even though parallel processing is not achieved, and even though there is a certain amount of overhead involved in switching back and forth between processes, interleaved execution provides major benefits in processing efficiency and in program structuring. In a multiple processor system, it is possible not only to interleave the execution of multiple processes but also to overlap them. At first glance, it may seem that interleaving and overlapping represent fundamentally different modes of execution and present different problems. In fact, both techniques can be viewed as examples of concurrent processing, and both present the same problems. The relative speed of execution of a process depends on the activities of other processes, the way in which the operating system handles interrupts, and the scheduling policies of the operating system.
Concurrency presents the following difficulties:
1. The sharing of global resources. For example, if two processes both make use of the same global
variable and both perform reads and writes on that variable, then the order in which the various reads and
writes are executed is critical.
2. It is difficult for the operating system to manage the allocation of resources optimally.
3. It is very difficult to locate a programming error because results are typically not deterministic and
reproducible.
For example:

void echo()
{
    chin = getchar();   /* read one character from the keyboard */
    chout = chin;       /* copy it to the output variable */
    putchar(chout);     /* send it to the display */
}
This procedure shows the essential elements of a program that will provide a character echo procedure;
input is obtained from a keyboard one keystroke at a time. Each input character is stored in variable chin.
It is then transferred to variable chout and sent to the display. Any program can call this procedure
repeatedly to accept user input and display it on the user's screen.
Consider a single-processor multiprogramming system supporting a single user. The user can jump from one application to another, and each application uses the same keyboard for input and the same screen for output. Because each application needs to use the procedure echo, it makes sense for it to be a shared procedure that is loaded into a portion of memory global to all applications. Thus, only a single copy of the echo procedure is used, saving space.
The sharing of main memory among processes is useful to permit efficient and close interaction among processes. Consider the following sequence:
1. Process P1 invokes the echo procedure and is interrupted immediately after getchar returns its value
and stores it in chin. At this point, the most recently entered character, x, is stored in variable chin.
2. Process P2 is activated and invokes the echo procedure, which runs to conclusion, inputting and then
displaying a single character, y, on the screen.
3. Process P1 is resumed. By this time, the value x has been overwritten in chin and therefore lost.
Instead, chin contains y, which is transferred to chout and displayed.
Thus, the first character is lost and the second character is displayed twice, because of the shared global variable chin: one process updated the global variable and was then interrupted, allowing another process to alter the variable before the first process could use its value. Suppose, however, that only one process at a time may be in that procedure. Then the foregoing sequence would result in the following:
1. Process P1 invokes the echo procedure and is interrupted immediately after the conclusion of the input
function. At this point, the most recently entered character, x, is stored in variable chin.
2. Process P2 is activated and invokes the echo procedure. However, because P1 is still inside the echo
procedure, although currently suspended, P2 is blocked from entering the procedure. Therefore, P2 is
suspended awaiting the availability of the echo procedure.
3. At some later time, process P1 is resumed and completes execution of echo. The proper character, x, is displayed.
4. When P1 exits echo, this removes the block on P2. When P2 is later resumed, the echo procedure is successfully invoked.
Therefore it is necessary to protect shared global variables, and the only way to do that is to control the code that accesses them.
Race Condition
A race condition occurs when multiple processes or threads read and write data items so that the final
result depends on the order of execution of instructions in the multiple processes.
Suppose that two processes, P1 and P2, share the global variable a. At some point in its execution, P1
updates a to the value 1, and at some point in its execution, P2 updates a to the value 2. Thus, the two
tasks are in a race to write variable a. In this example the "loser" of the race (the process that updates
last) determines the final value of a.
The operating system is therefore concerned with the following:
1. The operating system must be able to keep track of the various processes.
2. The operating system must allocate and deallocate various resources for each active process.
3. The operating system must protect the data and physical resources of each process against unintended
interference by other processes.
4. The functioning of a process, and the output it produces, must be independent of the speed at which its
execution is carried out relative to the speed of other concurrent processes.

Process interaction can be classified as:
• Processes unaware of each other
• Processes indirectly aware of each other
• Processes directly aware of each other
Concurrent processes come into conflict with each other when they are competing for the use of
the same resource.
Two or more processes need to access a resource during the course of their execution. Each
process is unaware of the existence of the other processes. There is no exchange of information between
the competing processes.
Requirements for Mutual Exclusion
1. Mutual exclusion must be enforced: Only one process at a time is allowed into its critical section, among
all processes that have critical sections for the same resource or shared object.
2. A process that halts in its non critical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no
deadlock or starvation.
4. When no process is in a critical section, any process that requests entry to its critical section must be
permitted to enter without delay.
5. No assumptions are made about relative process speeds or number of processors.
6. A process remains inside its critical section for a finite time only.
Mutual Exclusion – Software Support
Software approaches can be implemented for concurrent processes that executes on a single processor or
a multiprocessor machine with shared main memory.
Dekker's Algorithm: Dekker's algorithm provides mutual exclusion for two processes based solely on software. Each of these processes loops indefinitely, repeatedly entering and re-entering its critical section. A process (P0 or P1) that wishes to execute its critical section first enters the igloo and examines the blackboard. If that process's number is written on the blackboard, the process leaves the igloo and proceeds to its critical section; otherwise it waits for its turn, re-entering the igloo to check the blackboard and repeating this exercise until it is allowed to enter its critical section. This procedure is known as busy waiting.
In formal terms, there is a shared global variable:

var turn : 0..1;

Process 0:
    while turn ≠ 0 do (nothing);
    <critical section>;
    turn := 1;

Process 1:
    while turn ≠ 1 do (nothing);
    <critical section>;
    turn := 0;
Process scheduling
A Process Scheduler schedules different processes to be assigned to the CPU based on particular
scheduling algorithms. There are six popular process scheduling algorithms which we are going to discuss
in this chapter −
 First-Come, First-Served (FCFS) Scheduling
 Shortest-Job-Next (SJN) Scheduling
 Priority Scheduling
 Shortest Remaining Time
 Round Robin(RR) Scheduling
 Multiple-Level Queues Scheduling
These algorithms are either non-preemptive or preemptive. Non-preemptive algorithms are designed
so that once a process enters the running state, it cannot be preempted until it completes its allotted
time, whereas the preemptive scheduling is based on priority where a scheduler may preempt a low
priority running process anytime when a high priority process enters into a ready state.
First Come First Serve (FCFS)
 Jobs are executed on first come, first serve basis.
 It is a non-preemptive scheduling algorithm.

 Easy to understand and implement.
 Its implementation is based on FIFO queue.
 Poor in performance as average wait time is high.

Wait time of each process is as follows −

Process | Wait Time : Service Time - Arrival Time
P0 | 0 - 0 = 0
P1 | 5 - 1 = 4
P2 | 8 - 2 = 6
P3 | 16 - 3 = 13

Average Wait Time: (0+4+6+13) / 4 = 5.75
Shortest Job Next (SJN)
 This is also known as shortest job first, or SJF
 This is a non-preemptive scheduling algorithm; its pre-emptive variant is Shortest Remaining Time, discussed below.
 Best approach to minimize waiting time.
 Easy to implement in Batch systems where required CPU time is known in advance.
 Impossible to implement in interactive systems where required CPU time is not known.
 The scheduler should know in advance how much time the process will take.

Wait time of each process is as follows −

Process | Wait Time : Service Time - Arrival Time
P0 | 3 - 0 = 3
P1 | 0 - 0 = 0
P2 | 16 - 2 = 14
P3 | 8 - 3 = 5

Average Wait Time: (3+0+14+5) / 4 = 5.50
Priority Based Scheduling
 Priority scheduling is a non-preemptive algorithm and one of the most common scheduling
algorithms in batch systems.
 Each process is assigned a priority. Process with highest priority is to be executed first and so on.
 Processes with same priority are executed on first come first served basis.
 Priority can be decided based on memory requirements, time requirements or any other resource
requirement.

Wait time of each process is as follows −

Process | Wait Time : Service Time - Arrival Time
P0 | 9 - 0 = 9
P1 | 6 - 1 = 5
P2 | 14 - 2 = 12
P3 | 0 - 0 = 0
Average Wait Time: (9+5+12+0) / 4 = 6.5
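The schedule in the table (P3 first, then P1, P0, P2) can be sketched with assumed burst times and priorities (lower number = higher priority); these values are illustrative, chosen to match the table's start times. As in the table's arithmetic, jobs are treated as all available at time 0; the table additionally subtracts each job's arrival time from its start time, so its numbers are slightly lower.

```python
# Sketch of non-preemptive priority scheduling with all jobs available at
# time 0. Bursts and priorities are assumed; lower number = higher priority.
def priority_wait_times(procs):
    """procs: list of (name, burst, priority). Returns {name: wait}."""
    t, waits = 0, {}
    for name, burst, _ in sorted(procs, key=lambda p: p[2]):
        waits[name] = t   # time spent waiting before being dispatched
        t += burst        # runs to completion once dispatched
    return waits

jobs = [("P0", 5, 3), ("P1", 3, 2), ("P2", 8, 4), ("P3", 6, 1)]
print(priority_wait_times(jobs))   # {'P3': 0, 'P1': 6, 'P0': 9, 'P2': 14}
```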
Shortest Remaining Time
 Shortest remaining time (SRT) is the preemptive version of the SJN algorithm.
 The processor is allocated to the job closest to completion, but it can be preempted by a newly
ready job with a shorter time to completion.
 Impossible to implement in interactive systems where required CPU time is not known.
 It is often used in batch environments where short jobs need to be given preference.
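SRT can be illustrated with a unit-time simulation: after every time unit, the job with the least remaining work is chosen, preempting the current job if necessary. The workload is the same assumed one as in the earlier sketches.

```python
# Unit-time simulation of Shortest Remaining Time (preemptive SJN):
# after every time unit, the job with the least remaining work runs next.
def srt_wait_times(procs):
    """procs: list of (name, arrival, burst). Returns {name: wait}."""
    arrival = {n: a for n, a, b in procs}
    remaining = {n: b for n, a, b in procs}
    waits = {n: 0 for n, a, b in procs}
    t, finished = 0, 0
    while finished < len(procs):
        ready = [n for n in remaining if arrival[n] <= t and remaining[n] > 0]
        if not ready:
            t += 1
            continue
        run = min(ready, key=lambda n: remaining[n])  # least remaining time wins
        for n in ready:
            if n != run:
                waits[n] += 1     # every other ready process waits this tick
        remaining[run] -= 1
        if remaining[run] == 0:
            finished += 1
        t += 1
    return waits

jobs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(srt_wait_times(jobs))   # {'P0': 3, 'P1': 0, 'P2': 12, 'P3': 5}
```

Notice that P0 is preempted at time 1 when P1 arrives with a shorter burst, which is exactly the behaviour that distinguishes SRT from non-preemptive SJN.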
Round Robin Scheduling
 Round Robin is a preemptive process scheduling algorithm.
 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for its quantum, it is preempted, and another process executes for
its quantum.
 Context switching is used to save states of preempted processes.
Wait time of each process is as follows −
Process    Wait Time : Service Time - Arrival Time
P0         (0 - 0) + (12 - 3) = 9
P1         (3 - 1) = 2
P2         (6 - 2) + (14 - 9) + (20 - 17) = 12
P3         (9 - 3) + (17 - 12) = 11
Average Wait Time: (9 + 2 + 12 + 11) / 4 = 8.5
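The Round Robin timeline above can be reproduced with the assumed workload (arrivals 0, 1, 2, 3; bursts 5, 3, 8, 6) and a quantum of 3, inferred from the table. One convention matters for matching the table: a newly arriving process enters the ready queue ahead of a process that was just preempted.

```python
from collections import deque

# Round Robin sketch (quantum = 3). Workload is inferred from the table;
# arrivals are enqueued before a just-preempted process is re-queued.
def rr_wait_times(procs, quantum):
    """procs: list of (name, arrival, burst). Returns {name: wait}."""
    procs = sorted(procs, key=lambda p: p[1])
    remaining = {n: b for n, a, b in procs}
    last_ready = {n: a for n, a, b in procs}  # when each process last became ready
    waits = {n: 0 for n, a, b in procs}
    queue, i, t = deque(), 0, 0
    while queue or i < len(procs):
        if not queue:                          # CPU idle: jump to next arrival
            t = max(t, procs[i][1])
        while i < len(procs) and procs[i][1] <= t:
            queue.append(procs[i][0])
            i += 1
        name = queue.popleft()
        waits[name] += t - last_ready[name]    # time spent in the ready queue
        run = min(quantum, remaining[name])
        t += run
        remaining[name] -= run
        while i < len(procs) and procs[i][1] <= t:
            queue.append(procs[i][0])          # new arrivals queue up first
            i += 1
        if remaining[name] > 0:
            last_ready[name] = t
            queue.append(name)                 # preempted process re-queued last
    return waits

jobs = [("P0", 0, 5), ("P1", 1, 3), ("P2", 2, 8), ("P3", 3, 6)]
print(rr_wait_times(jobs, quantum=3))   # {'P0': 9, 'P1': 2, 'P2': 12, 'P3': 11}
```

The result matches the table: average wait time (9 + 2 + 12 + 11) / 4 = 8.5.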
Multiple-Level Queues Scheduling
Multiple-level queues are not an independent scheduling algorithm. They make use of other existing
algorithms to group and schedule jobs with common characteristics.
 Multiple queues are maintained for processes with common characteristics.
 Each queue can have its own scheduling algorithms.
 Priorities are assigned to each queue.
For example, CPU-bound jobs can be scheduled in one queue and all I/O-bound jobs in another queue.
The Process Scheduler then alternately selects jobs from each queue and assigns them to the CPU based
on the algorithm assigned to the queue.
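The queue-selection step can be sketched minimally as below. The queue names and contents are illustrative; in a real scheduler each queue would run its own internal algorithm (for example, round robin for interactive jobs and FCFS for batch jobs).

```python
from collections import deque

# Sketch of a two-level queue: the high-priority system queue is always
# served before the low-priority user queue. Both queues here are plain
# FIFO for simplicity; names are made up for illustration.
def pick_next(system_q, user_q):
    if system_q:
        return system_q.popleft()
    if user_q:
        return user_q.popleft()
    return None   # nothing runnable

system_q = deque(["pager"])
user_q = deque(["editor", "compiler"])
print(pick_next(system_q, user_q))  # 'pager': high-priority queue drained first
print(pick_next(system_q, user_q))  # 'editor'
```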

Interrupts
The CPU hardware has a wire called the interrupt-request line, which the CPU senses after executing
every instruction. When the CPU detects that a controller has asserted a signal on the interrupt-request
line, it saves its state, such as the current value of the instruction pointer, and jumps to the interrupt-handler
routine at a fixed address. The interrupt handler determines the cause of the interrupt, performs the
necessary processing, and executes a return-from-interrupt instruction to restore the CPU to the execution
state it had before the interrupt.
This basic interrupt mechanism enables the CPU to respond to an asynchronous event, such as a
device controller becoming ready for service. Most CPUs have two interrupt request lines:
 Non-maskable interrupt − such interrupts are reserved for events like unrecoverable
memory errors.
 Maskable interrupt − such interrupts can be switched off by the CPU before the execution of
critical instruction sequences that must not be interrupted.
The interrupt mechanism accepts an address − a number that selects a specific interrupt-handling
routine from a small set. In most architectures, this address is an offset into a table called the
interrupt vector table. This vector contains the memory addresses of specialized interrupt handlers.
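The vector-table lookup can be mimicked in software as below. This is only an analogy: real vector tables are hardware-defined structures in memory, and the handler names and vector numbers here are made up for illustration.

```python
# Software analogy of an interrupt vector table: the interrupt number
# indexes a table of handler routines. Handlers and vector numbers are
# illustrative, not a real architecture's assignments.
def handle_divide_error():
    return "divide error handled"

def handle_keyboard():
    return "keyboard input serviced"

interrupt_vector = {
    0: handle_divide_error,   # assumed vector for a CPU fault
    1: handle_keyboard,       # assumed vector for a device interrupt
}

def dispatch(vector_number):
    handler = interrupt_vector[vector_number]  # look up the specialized routine
    return handler()                           # run it, then "return from interrupt"

print(dispatch(1))   # keyboard input serviced
```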

Deadlock
In a multiprogramming environment, several processes may compete for a finite number of resources. A
process requests resources; if the resources are not available at that time, the process enters a wait state.
It may happen that waiting processes will never again change state, because the resources they have
requested are held by other waiting processes. This situation is called deadlock.
If a process requests an instance of a resource type, the allocation of any instance of the type will satisfy
the request. If it will not, then the instances are not identical, and the resource type classes have not been
defined properly. A process must request a resource before using it, and must release the resource after
using it. A process may request as many resources as it requires to carry out its designated task. Under the
normal mode of operation, a process may utilize a resource in only the following sequence:
1. Request: If the request cannot be granted immediately, then the requesting process must wait until it
can acquire the resource.
2. Use: The process can operate on the resource.
3. Release: The process releases the resource.
In deadlock, processes never finish executing, and system resources are tied up, preventing other jobs from
ever starting.
Necessary Conditions
A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Mutual exclusion: At least one resource must be held in a non-sharable mode; that is, only one
process at a time can use the resource. If another process requests that resource, the requesting process
must be delayed until the resource has been released.
2. Hold and wait : There must exist a process that is holding at least one resource and is waiting to
acquire additional resources that are currently being held by other processes.
3. No preemption : Resources cannot be preempted; that is, a resource can be released only voluntarily
by the process holding it, after that process has completed its task.
4. Circular wait: There must exist a set {P0, P1, ..., Pn } of waiting processes such that P0 is waiting for
a resource that is held by P1, P1 is waiting for a resource that is held by P2, …., Pn-1 is waiting for a
resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

Resource-Allocation Graph
Deadlocks can be described more precisely in terms of a directed graph called a system resource-allocation
graph. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, …, Pn},
the set consisting of all the active processes in the system, and R = {R1, R2, …, Rm}, the set consisting of
all resource types in the system. A directed edge from process Pi to resource type Rj is denoted by Pi → Rj;
it signifies that process Pi requested an instance of resource type Rj and is currently waiting for that
resource.
A directed edge from resource type Rj to process Pi is denoted by Rj → Pi; it signifies that an instance
of resource type Rj has been allocated to process Pi.
A directed edge Pi→ Rj is called a request edge; a directed edge Rj →Pi is called an assignment edge.
When process Pi requests an instance of resource type Rj, a request edge is inserted in the resource-
allocation graph.
When this request can be fulfilled, the request edge is instantaneously transformed into an
assignment edge. When the process no longer needs access to the resource, it releases the resource, and
as a result the assignment edge is deleted.
From the definition of a resource-allocation graph, it can be shown that, if the graph contains no cycles, then no
process in the system is deadlocked. If, on the other hand, the graph contains a cycle, then a deadlock
may exist. If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred.
If the cycle involves only a set of resource types, each of which has only a single instance, then a
deadlock has occurred, and each process involved in the cycle is deadlocked. In this case, a cycle in the graph
is both a necessary and a sufficient condition for the existence of deadlock. A resource-allocation graph thus
consists of a set of vertices V and a set of edges E. V is partitioned into two types:
P = {P1, P2, …, Pn}, the set consisting of all the processes in the system.
R = {R1, R2, …, Rm}, the set consisting of all resource types in the system.
request edge – directed edge Pi → Rj
assignment edge – directed edge Rj → Pi

If each resource type has several instances, then a cycle does not necessarily imply that a deadlock
has occurred. In this case, a cycle in the graph is a necessary but not a sufficient condition for the existence of
deadlock. Suppose that process P3 requests an instance of resource type R2. Since no resource instance is
currently available, a request edge P3 → R2 is added to the graph. At this point, two minimal cycles exist
in the system:
P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by
process P3. Process P3, on the other hand, is waiting for either process P1 or process P2 to release
resource R2. In addition, process P1 is waiting for process P2 to release resource R1.

Methods For Handling Deadlock

There are three different methods for dealing with the deadlock problem:
1. We can use a protocol to ensure that the system will never enter a deadlock state.
2. We can allow the system to enter a deadlock state and then recover.
3. We can ignore the problem altogether and pretend that deadlocks never occur in the system. This
solution is the one used by most operating systems, including UNIX.
Deadlock avoidance, on the other hand, requires that the operating system be given in advance additional
information concerning which resources a process will request and use during its lifetime. With this
additional knowledge, the system can decide for each request whether or not the process should wait. Each
request requires that the system consider the resources currently available, the resources currently allocated
to each process, and the future requests and releases of each process, to decide whether the current request
can be satisfied or must be delayed.
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then
a deadlock situation may occur. If a system does not ensure that a deadlock will never occur, and also does
not provide a mechanism for deadlock detection and recovery, then we may arrive at a situation where the
system is in a deadlock state yet has no way of recognizing what has happened.
Deadlock Prevention
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring that at least
one of these conditions cannot hold, we can prevent the occurrence of a deadlock.
Mutual Exclusion
The mutual-exclusion condition must hold for non-sharable resources. For example, a printer cannot be
simultaneously shared by several processes. Sharable resources, on the other hand, do not require
mutually exclusive access, and thus cannot be involved in a deadlock.
Hold and Wait
1. One protocol that can be used requires each process to request and be allocated all its resources
before it begins execution, so that whenever a process requests a resource, it does not hold any other
resources.
2. An alternative protocol allows a process to request resources only when the process has none. A
process may request some resources and use them; before it can request any additional resources,
however, it must release all the resources that it is currently allocated.
There are two main disadvantages to these protocols. First, resource utilization may be low, since many
of the resources may be allocated but unused for a long period. For instance, a process that uses a tape
drive and a disk file early on can release them and later request the disk file and printer again, but only if it
can be sure that its data will remain on the disk file; if it cannot be assured of that, it must request all
resources at the beginning under both protocols. Second, starvation is possible.
No Preemption
If a process that is holding some resources requests another resource that cannot be immediately
allocated to it, then all resources it is currently holding are preempted; that is, these resources are implicitly
released. The preempted resources are added to the list of resources for which the process is waiting. The
process will be restarted only when it can regain its old resources, as well as the new ones that it is
requesting.
Circular Wait
One way to ensure that the circular-wait condition never holds is to impose a total ordering of all resource
types, and to require that each process requests resources in an increasing order of enumeration.
Let R = {R1, R2, ..., Rn} be the set of resource types. We assign to each resource type a unique integer
number, which allows us to compare two resources and to determine whether one precedes another in our
ordering. Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers.
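The ordering rule can be sketched with locks standing in for resources. The resource names and the ordering F are assumed values for illustration; the point is that every process acquires in increasing F order, so no cycle of waits can form.

```python
import threading

# Sketch of the total-ordering rule: F assigns each resource a unique
# integer, and every process acquires locks in increasing F order, so
# the circular-wait condition can never hold. Names are illustrative.
F = {"tape": 1, "disk": 2, "printer": 3}
locks = {name: threading.Lock() for name in F}

def acquire_in_order(names):
    for name in sorted(names, key=lambda n: F[n]):
        locks[name].acquire()   # always taken lowest-F first

def release_all(names):
    for name in names:
        locks[name].release()

acquire_in_order(["printer", "tape"])   # actually takes tape first, then printer
print(locks["tape"].locked(), locks["printer"].locked(), locks["disk"].locked())
release_all(["printer", "tape"])
```

Even though the caller asked for the printer first, the tape (F = 1) is acquired before the printer (F = 3), which is exactly what rules out circular wait.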
Deadlock Avoidance
Deadlock-prevention algorithms prevent deadlocks by restraining how requests can be made. The
restraints ensure that at least one of the necessary conditions for deadlock cannot occur and, hence, that
deadlocks cannot hold. Possible side effects of preventing deadlocks by this method, however, are low
device utilization and reduced system throughput.
An alternative method for avoiding deadlocks is to require additional information about how
resources are to be requested. For example, in a system with one tape drive and one printer, we might be
told that process P will request first the tape drive, and later the printer, before releasing both resources.
Process Q on the other hand, will request first the printer, and then the tape drive. With this knowledge of
the complete sequence of requests and releases for each process we can decide for each request whether
or not the process should wait.
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that
there can never be a circular-wait condition. The resource-allocation state is defined by the number of
available and allocated resources, and the maximum demands of the processes.
Safe State
A state is safe if the system can allocate resources to each process (up to its maximum) in some order and
still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence. A
sequence of processes <P1, P2, …, Pn> is a safe sequence for the current allocation state if, for each Pi, the
resources that Pi can still request can be satisfied by the currently available resources plus the resources
held by all the Pj, with j < i. In this situation, if the resources that process Pi needs are not immediately
available, then Pi can wait until all Pj have finished. When they have finished, Pi can obtain all of its needed
resources, complete its designated task, return its allocated resources, and terminate. When Pi terminates,
Pi+1 can obtain its needed resources, and so on.

If no such sequence exists, then the system state is said to be unsafe.


Resource-Allocation Graph Algorithm
Suppose that process Pi requests resource Rj. The request can be granted only if converting the request
edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the
resource-allocation graph.

Banker's Algorithm
The resource-allocation graph algorithm is not applicable to a resource-allocation system with multiple
instances of each resource type. The deadlock-avoidance algorithm that we describe next is applicable to
such a system, but is less efficient than the resource-allocation graph scheme. This algorithm is commonly
known as the banker's algorithm.
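The safety check at the heart of the banker's algorithm can be sketched as below: repeatedly find a process whose remaining need can be satisfied from the work vector, pretend it runs to completion, and reclaim its allocation. The Available, Allocation, and Need values are illustrative example matrices, not data from these notes.

```python
# Sketch of the banker's safety check over n processes and m resource
# types. Returns whether the state is safe and one safe sequence found.
def is_safe(available, allocation, need):
    work = list(available)
    n, m = len(allocation), len(available)
    finish = [False] * n
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # Pi finishes; reclaim its resources
                finish[i] = True
                sequence.append(i)
                progress = True
    return all(finish), sequence

available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
safe, seq = is_safe(available, allocation, need)
print(safe, seq)   # True [1, 3, 4, 0, 2]
```

The full banker's algorithm grants a request only after tentatively applying it and confirming with this check that the resulting state is still safe.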
Deadlock Detection
If a system does not employ either a deadlock-prevention or a deadlock-avoidance algorithm, then a
deadlock situation may occur. In this environment, the system must provide:
1. An algorithm that examines the state of the system to determine whether a deadlock has occurred.
2. An algorithm to recover from the deadlock.
Single Instance of Each Resource Type
If all resources have only a single instance, then we can define a deadlock-detection algorithm that uses a
variant of the resource-allocation graph, called a wait-for graph. We obtain this graph from the
resource-allocation graph by removing the nodes of type resource and collapsing the appropriate edges.
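On a wait-for graph, a deadlock exists exactly when the graph contains a cycle, so detection reduces to cycle detection, sketched here with a depth-first search. The example edges are illustrative.

```python
# Deadlock detection on a wait-for graph (single-instance resources):
# a deadlock exists iff the graph has a cycle. DFS marks nodes on the
# current path (GRAY) so a back edge reveals a cycle.
def has_deadlock(wait_for):
    """wait_for: dict mapping each process to the processes it waits on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(p):
        color[p] = GRAY                  # p is on the current DFS path
        for q in wait_for.get(p, ()):
            c = color.get(q, WHITE)
            if c == GRAY:                # back edge: cycle found
                return True
            if c == WHITE and dfs(q):
                return True
        color[p] = BLACK                 # fully explored, not in a cycle
        return False
    return any(color.get(p, WHITE) == WHITE and dfs(p) for p in wait_for)

print(has_deadlock({"P1": {"P2"}, "P2": {"P3"}, "P3": {"P1"}}))  # True
print(has_deadlock({"P1": {"P2"}, "P2": set()}))                 # False
```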
Several Instances of a Resource Type
The wait-for graph scheme is not applicable to a resource-allocation system with multiple instances of each
resource type.
The detection algorithm for this case uses the following data structures:
Available: A vector of length m indicates the number of available resources of each type.
Allocation: An n x m matrix defines the number of resources of each type currently allocated to each
process.
Request: An n x m matrix indicates the current request of each process. If Request[i, j] = k, then process
Pi is requesting k more instances of resource type Rj.
Detection-Algorithm Usage
If deadlocks occur frequently, then the detection algorithm should be invoked
frequently. Resources allocated to deadlocked processes will be idle until the deadlock can be broken.
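Using the Available, Allocation, and Request structures described above, the detection algorithm can be sketched as below. It resembles the banker's safety check, but is optimistic: any process whose current request can be met is assumed to eventually finish and return its allocation; processes left unfinished are deadlocked. The example values are illustrative.

```python
# Sketch of multi-instance deadlock detection. Returns the indices of
# the deadlocked processes (empty list means no deadlock).
def deadlocked_processes(available, allocation, request):
    work = list(available)
    n, m = len(allocation), len(available)
    finish = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):
                    work[j] += allocation[i][j]   # assume Pi finishes; reclaim
                finish[i] = True
                progress = True
    return [i for i in range(n) if not finish[i]]

available = [0, 0, 0]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 3], [2, 1, 1], [0, 0, 2]]
request = [[0, 0, 0], [2, 0, 2], [0, 0, 0], [1, 0, 0], [0, 0, 2]]
print(deadlocked_processes(available, allocation, request))  # [] (no deadlock)
```

If P2's request is changed to need one instance of the third resource type, the chain of reclaims breaks and processes 1 through 4 come back as deadlocked.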

Recovery from Deadlock


When a detection algorithm determines that a deadlock exists, several alternatives exist. One possibility is
to inform the operator that a deadlock has occurred, and to let the operator deal with the deadlock
manually. The other possibility is to let the system recover from the deadlock automatically. There are two
options for breaking a deadlock. One solution is simply to abort one or more processes to break the circular
wait. The second option is to preempt some resources from one or more of the deadlocked processes.

Process Termination
To eliminate deadlocks by aborting a process, we use one of two methods. In both methods, the system
reclaims all resources allocated to the terminated processes.
• Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great
expense, since these processes may have computed for a long time, and the results of these partial
computations must be discarded and probably recomputed.
• Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable
overhead, since after each process is aborted a deadlock-detection algorithm must be invoked to determine
whether processes are still deadlocked.
Resource Preemption
To eliminate deadlocks using resource preemption, we successively preempt some resources from
processes and give these resources to other processes until the deadlock cycle is broken.
Three issues must be considered in order to recover from deadlock using resource preemption:
1. Selecting a victim
2. Rollback
3. Starvation