ADV. OPERATING SYSTEMS
OPERATING SYSTEM OVERVIEW
What is an Operating System?
Operating systems are those programs that interface the machine
with the applications programs. The main function of these systems is
to dynamically allocate the shared system resources to the
executing programs. As such, research in this area is clearly
concerned with the management and scheduling of memory,
processes, and other devices.
—WHAT CAN BE AUTOMATED?: THE COMPUTER SCIENCE AND
ENGINEERING RESEARCH STUDY,
MIT Press, 1980
What is an Operating System?
An Operating System is a program or collection of
programs that makes it easier for us to use a computer.
An Operating System provides a simpler abstraction of the
underlying hardware.
An Operating System is a resource manager.
Examples:
• DOS, OS/2, Windows XP, Windows 2000
• Ubuntu, FreeBSD, Fedora, Solaris, Mac OS
• iOS, Android, Symbian OS, LynxOS
Objectives of an Operating System
A program that controls the execution of
application programs
An interface between applications and
hardware
Main objectives of an OS:
• Convenience
• Efficiency
• Ability to evolve
COMPUTER HARDWARE AND
SOFTWARE INFRASTRUCTURE
OPERATING SYSTEM SERVICES
Program development
Program execution
Access to I/O devices
Controlled access to files
System access
Error detection and response
Accounting
KEY INTERFACES
1. Instruction Set Architecture (ISA)
2. Application Binary Interface (ABI)
3. Application Programming Interface
(API)
API VS ABI
An API is a contract between pieces of source code: It
defines the parameters to a function, the function's
return value, and attributes such as whether
inheritance is allowed.
An API is enforced by the compiler: An API is
instructions to the compiler about what source code
can and cannot do. We also often speak about the
API in terms of the prerequisites, behavior, and error
conditions of functions. In that sense, an API is also
consumed by humans: An API is instructions to a
programmer about what functions expect and do.
API VS ABI
An ABI is a contract between pieces of binary code: It defines the
mechanisms by which functions are invoked, how parameters
are passed between caller and callee, how return values are
provided to callers, how libraries are implemented, and how
programs are loaded into memory.
An ABI is enforced by the linker: An ABI is the rules about how
unrelated code must work together. An ABI is also rules about
how processes coexist on the same system. For example, on a
Unix system, an ABI might define how signals are executed, how
a process invokes system calls, what endianness is used, and
how stacks grow. In that sense, an ABI is a set of rules enforced
by the operating system on a specific architecture.
https://www.quora.com/What-exactly-is-an-Application-Binary-Interface-ABI
http://stackoverflow.com/questions/2171177/what-is-application-binary-interface-abi
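To make the distinction concrete, here is a minimal C sketch (the struct, the
distance() function, and its behavior are made up for illustration). The
prototype and its documented behavior are the API; the ABI is everything the
compiled caller and callee must additionally agree on, such as how struct
point is laid out in memory, whether the arguments travel in registers or on
the stack, and where the double result comes back.

#include <math.h>

struct point { double x; double y; };

/* API: the prototype the compiler checks every caller against. */
double distance(struct point a, struct point b)
{
    return sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

int main(void)
{
    struct point p = {0.0, 0.0}, q = {3.0, 4.0};
    /* ABI: how p and q reach distance() and how the result is returned is
       decided by the platform's calling convention, not by this source. */
    return distance(p, q) == 5.0 ? 0 : 1;
}

Recompiling against a changed prototype is an API concern; swapping in a
library built with a different structure layout or calling convention, without
recompiling the caller, is an ABI concern.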
THE ROLE OF AN OS
A computer is a set of resources for the
movement, storage, and processing of data
and for the control of these functions.
The OS is responsible for managing these
resources
Normally, we think of a control mechanism as
something external to that which is controlled.
Example: Heating System and Thermostat
OPERATING SYSTEM AS SOFTWARE
The OS functions in the same way as ordinary computer
software, i.e., a program, or suite of programs, executed
by the processor
EVOLUTION OF OPERATING SYSTEMS
A major OS will evolve over time for a
number of reasons:
Hardware Upgrades
New Types of Hardware
New Services
Bug Fixes
EVOLUTION OF OPERATING SYSTEMS
Stages include:
• Serial Processing
• Simple Batch Systems
• Multiprogrammed Batch Systems
• Time-Sharing Systems
SERIAL PROCESSING
Earliest computers:
• No operating system
• Programmers interacted directly with the computer hardware
• Computers ran from a console with display lights, toggle
switches, some form of input device, and a printer
• Users have access to the computer in "series"
Problems:
• Scheduling: most installations used a hardcopy sign-up sheet
to reserve computer time; time allocations could run short or
long, resulting in wasted computer time
• Setup time: a considerable amount of time was spent just on
setting up the program to run
Figure: IBM 7094 (early 1960s)
Figure: IBM 701 console
SIMPLE BATCH SYSTEMS
Early computers were very expensive
Important to maximize processor utilization
Monitor:
• User no longer has direct access to the processor
• Jobs are submitted to a computer operator, who batches them
together and places them on an input device
• The program branches back to the monitor when finished
MONITOR POINT OF VIEW
Monitor controls the
sequence of events
Resident Monitor is software
that always resides in
memory
Monitor reads in job and
gives control
Job returns control to
monitor
PROCESSOR POINT OF VIEW
Processor executes instruction from the memory
containing the monitor
Executes the instructions in the user program until
it encounters an ending or error condition
“control is passed to a job” means processor is
fetching and executing instructions in a user
program
“control is returned to the monitor” means that the
processor is fetching and executing instructions
from the monitor program
JOB CONTROL LANGUAGE (JCL)
Special type of programming language used to provide
instructions to the monitor:
• what compiler to use
• what data to use
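As an illustration (the exact control cards varied by installation): a classic
job deck might begin with a card naming the job, followed by a card telling
the monitor which compiler to invoke, the source program itself, a card
marking the start of the data, the data, and finally an end-of-job card that
returns control to the monitor.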
DESIRABLE HARDWARE
FEATURES
Memory protection for monitor
• while the user program is executing, it must not alter the
memory area containing the monitor
Timer
• prevents a job from monopolizing the system
Privileged instructions
• can only be executed by the monitor
Interrupts
• gives OS more flexibility in controlling user programs
MODES OF OPERATION
User Mode:
• user program executes in user mode
• certain areas of memory are protected from user access
• certain instructions may not be executed
Kernel Mode:
• monitor executes in kernel mode
• privileged instructions may be executed
• protected areas of memory may be accessed
SIMPLE BATCH SYSTEM OVERHEAD
Processor time alternates between execution of user
programs and execution of the monitor
Sacrifices:
Some main memory is now given over to the monitor
Some processor time is consumed by the monitor
Despite the overhead, the simple batch system improves
utilization of the computer
MULTIPROGRAMMED BATCH
SYSTEMS
• Processor is often idle, even with automatic job sequencing
• I/O devices are slow compared to the processor
UNIPROGRAMMING
The processor spends a certain amount of time
executing, until it reaches an I/O instruction; it must
then wait until that I/O instruction concludes before
proceeding
MULTIPROGRAMMING
There must be enough memory to hold the OS (resident monitor) and
one user program
When one job needs to wait for I/O, the processor can switch to the
other job, which is likely not waiting for I/O
MULTIPROGRAMMING
Multiprogramming
also known as multitasking
memory is expanded to hold three, four, or more
programs and switch among all of them
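As a rough, assumed-numbers illustration of why this helps: suppose a job
computes for 5 ms and then waits 45 ms for its I/O to finish. Running alone,
the processor is busy only 5 ms out of every 50 ms, about 10% utilization.
With a second similar job in memory, the processor can execute it during the
first job's I/O wait, so up to 10 ms of every 50 ms becomes useful work, and
each additional resident job raises utilization further as long as the I/O
waits overlap.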
MULTIPROGRAMMING EXAMPLE
EFFECTS ON RESOURCE
UTILIZATION
Table 2.2 Effects of Multiprogramming on Resource Utilization
UTILIZATION HISTOGRAMS
TIME-SHARING SYSTEMS
Can be used to handle multiple interactive jobs
Processor time is shared among multiple users
Multiple users simultaneously access the
system through terminals, with the OS
interleaving the execution of each user program
in a short burst or quantum of computation
BATCH MULTIPROGRAMMING
VS. TIME SHARING
Table 2.3 Batch Multiprogramming versus Time Sharing
COMPATIBLE TIME-SHARING SYSTEM (CTSS)
• One of the first time-sharing operating systems
• Developed at MIT by a group known as Project MAC
• Ran on a computer with 32,000 36-bit words of main memory,
with the resident monitor consuming 5000 words of that memory!
• To simplify both the monitor and memory management, a program
was always loaded to start at the location of the 5000th word
CTSS TIME SLICING
• System clock generates interrupts at a rate of approximately
one every 0.2 seconds
• At each interrupt the OS regained control and could assign the
processor to another user
• At regular time intervals the current user would be preempted
and another user loaded in
• Old user programs and data were written out to disk
• Old user program code and data were restored in main memory
when that program was next given a turn
CTSS OPERATION
DIFFERENT ARCHITECTURAL
APPROACHES
Demands on operating systems require new
ways of organizing the OS
Different Approaches and Design Elements
• Microkernel Architecture
• Multithreading
• Symmetric Multiprocessing
• Distributed Operating Systems
• Object-Oriented Design
MICROKERNEL ARCHITECTURE
Assigns only a few essential functions to the kernel:
• address spaces
• interprocess communication (IPC)
• basic scheduling
The approach:
• simplifies implementation
• provides flexibility
• is well suited to a distributed environment
MULTITHREADING
Technique in which a process, executing an application,
is divided into threads that can run concurrently
Thread
• dispatchable unit of work
• includes a processor context and its own data area to
enable subroutine branching
• executes sequentially and is interruptible
Process
• a collection of one or more threads and associated
system resources
• programmer has greater control over the modularity of
the application and the timing of application related
events
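As a minimal sketch using POSIX threads (the worker function and its messages
are invented for illustration): one process creates two threads; each thread
is a separately dispatchable unit of work with its own flow of control, while
both share the process's address space and other resources.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical worker: each thread is an independently dispatchable,
   interruptible unit of work. */
static void *worker(void *arg)
{
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        printf("thread %s: step %d\n", name, i);   /* runs concurrently with the other thread */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    /* One process, two threads: both share the same address space and open files. */
    pthread_create(&t1, NULL, worker, "A");
    pthread_create(&t2, NULL, worker, "B");
    pthread_join(t1, NULL);   /* wait for the threads to finish */
    pthread_join(t2, NULL);
    return 0;
}

On typical Unix-like systems this is compiled with the -pthread flag.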
SYMMETRIC
MULTIPROCESSING (SMP)
Term that refers to a computer hardware architecture
and also to the OS behavior that exploits that
architecture
Several processes can run in parallel
Multiple processors are transparent to the user
• these processors share the same main memory and I/O facilities
• all processors can perform the same functions
The OS takes care of scheduling of threads or processes
on individual processors and of synchronization among
processors
SMP ADVANTAGES
• Performance: more than one process can be running
simultaneously, each on a different processor
• Availability: failure of a single processor does not halt the
system
• Incremental Growth: performance of a system can be enhanced by
adding an additional processor
• Scaling: vendors can offer a range of products based on the
number of processors configured in the system
Figure: Multiprogramming
VIRTUAL MACHINES AND
VIRTUALIZATION
Virtualization
Enables a single PC or server to simultaneously run
multiple operating systems or multiple sessions of a
single OS
A machine can host numerous applications, including
those that run on different operating systems, on a
single platform
Host operating system can support a number of virtual
machines (VM)
each has the characteristics of a particular OS and, in
some versions of virtualization, the characteristics of a
particular hardware platform.
VIRTUAL MEMORY CONCEPT
VIRTUAL MACHINE ARCHITECTURE
Process perspective:
• the machine on which it executes consists of the virtual memory space
assigned to the process
• the processor registers it may use
• the user-level machine instructions it may execute
• OS system calls it may invoke for I/O
• ABI defines the machine as seen by a process
Application perspective:
• machine characteristics are specified by high-level language capabilities and
OS system library calls
• API defines the machine for an application
OS perspective:
• processes share a file system and other I/O resources
• system allocates real memory and I/O resources to the processes
• ISA provides the interface between the system and machine
PROCESS AND SYSTEM
VIRTUAL MACHINES
SYMMETRIC MULTIPROCESSOR
OS CONSIDERATIONS
A multiprocessor OS must provide all the functionality of a
multiprogramming system plus additional features to
accommodate multiple processors
Key design issues:
• Simultaneous concurrent processes or threads: kernel routines
need to be reentrant to allow several processors to execute the
same kernel code simultaneously
• Scheduling: any processor may perform scheduling, which
complicates the task of enforcing a scheduling policy
• Synchronization: with multiple active processes having potential
access to shared address spaces or shared I/O resources, care
must be taken to provide effective synchronization
• Memory management: the reuse of physical pages is the biggest
problem of concern
• Reliability and fault tolerance: the OS should provide graceful
degradation in the face of processor failure
MULTICORE OS CONSIDERATIONS
The design challenge for a many-core multicore system is to
efficiently harness the multicore processing power and
intelligently manage the substantial on-chip resources
Potential for parallelism exists at three levels:
• hardware parallelism within each core processor, known as
instruction-level parallelism (ILP)
• potential for multiprogramming and multithreaded execution
within each processor (TLP)
• potential for a single application to execute in concurrent
processes or threads across multiple cores (CMP)
GENERAL UNIX ARCHITECTURE
MONOLITHIC KERNEL
• Includes virtually all of the OS functionality in one large
block of code that runs as a single process with a single
address space
• All the functional components of the kernel have access to all
of its internal data structures and routines
MODULAR KERNEL (LOADABLE MODULES)
• Linux is structured as a collection of modules: relatively
independent blocks
• A module is an object file whose code can be linked to and
unlinked from the kernel at runtime
• A module is executed in kernel mode on behalf of the current
process
• Modules have two important characteristics:
• Dynamic linking
• Stackable modules
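As a sketch of what a loadable module looks like in source form (a minimal,
hypothetical Linux example; the module name and messages are invented, and
the kernel build machinery needed to compile it is omitted): the init and
exit routines are the hooks the kernel invokes when the module's object code
is linked into, and later unlinked from, the running kernel.

#include <linux/module.h>   /* core module support */
#include <linux/kernel.h>   /* printk */
#include <linux/init.h>     /* __init / __exit markers */

/* Called when the module's object code is linked into the running kernel. */
static int __init hello_init(void)
{
    printk(KERN_INFO "hello module: linked into the kernel\n");
    return 0;   /* a nonzero return would abort loading */
}

/* Called when the module is unlinked from the kernel. */
static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello module: unlinked from the kernel\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");

On Linux such a module is typically loaded with insmod or modprobe and
removed with rmmod; while loaded, its code runs in kernel mode on behalf of
the process that requests its services.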
KERNEL AND SHELL
Unix-like systems divide the OS into
• Kernel
• The lowest part of the OS that talks to the physical hardware.
• Implements Process/Memory Management etc.
• Runs in supervisor mode.
• Shell
• Accepts commands from the user.
• Shells for Unix-like systems allow combining simple programs to
achieve a complex task.
• Runs in user mode.
SYSTEM CALLS
System calls are the mechanism through which services of
the operating systems are sought.
Examples
• Starting a new process or thread
• Reading contents of a file
• Exiting a program
SYSTEM CALLS
A system call starts with a C/C++ procedure call
The procedure stores the call number in a special place (such
as a register) and executes a trap instruction.
The system enters kernel mode and starts execution from a
fixed memory location determined by the call number.
After performing the task in kernel mode the system
returns to user mode and transfers control back to
the user program.
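A small sketch of this sequence on a Linux/glibc system (write() and syscall()
are standard there, but the call numbers and trap mechanism are
architecture-specific): the library wrapper write() stores the call number
and arguments where the kernel expects them and executes the trap; the
syscall() wrapper makes that step explicit by letting the program pass the
call number itself.

#include <unistd.h>
#include <sys/syscall.h>
#include <string.h>

int main(void)
{
    const char *msg = "hello via system call\n";

    /* Usual route: the C library wrapper sets up the call number and traps. */
    write(1, msg, strlen(msg));

    /* Same service requested explicitly: pass the call number ourselves.
       SYS_write is this platform's number for the write system call. */
    syscall(SYS_write, 1, msg, strlen(msg));
    return 0;
}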
SYSTEM CALLS FOR PROCESS
MANAGEMENT
Call                                     Description
pid = fork()                             Create a child process identical to the parent
pid = waitpid(pid, &statloc, options)    Wait for a child to terminate
s = execve(name, argv, environp)         Replace a process' core image
exit(status)                             Terminate process execution and return status
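A minimal sketch combining these calls (the program /bin/ls and its arguments
are arbitrary choices for the example): the parent forks a child, the child
replaces its core image with a new program, and the parent waits for the
child to terminate.

#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    pid_t pid = fork();                      /* create a child identical to the parent */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: replace its core image with a new program. */
        char *argv[] = { "ls", "-l", NULL };
        char *envp[] = { NULL };
        execve("/bin/ls", argv, envp);
        perror("execve");                    /* only reached if execve fails */
        exit(1);
    }
    int status;
    waitpid(pid, &status, 0);                /* parent waits for the child to terminate */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}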
SYSTEM CALLS FOR FILE
MANAGEMENT
Call                                     Description
fd = open(file, how, ...)                Open a file for reading, writing, or both
s = close(fd)                            Close an open file
n = read(fd, buffer, nbytes)             Read data from a file into a buffer
n = write(fd, buffer, nbytes)            Write data from a buffer into a file
position = lseek(fd, offset, whence)     Move the file pointer
s = stat(name, &buf)                     Get a file's status information
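A short sketch tying these together (the path /tmp/demo.txt is an arbitrary
example and error handling is kept minimal): create and write a file, move
the file pointer back to the start, read the data back, and query the file's
status.

#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <stdio.h>

int main(void)
{
    char buf[32];
    struct stat st;

    int fd = open("/tmp/demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);  /* open for reading and writing */
    if (fd < 0) { perror("open"); return 1; }

    write(fd, "hello", 5);                   /* write data from a buffer into the file */
    lseek(fd, 0, SEEK_SET);                  /* move the file pointer back to the start */
    ssize_t n = read(fd, buf, sizeof buf);   /* read the data back into a buffer */
    printf("read %zd bytes\n", n);

    close(fd);                               /* close the open file */
    stat("/tmp/demo.txt", &st);              /* get the file's status information */
    printf("file size: %lld bytes\n", (long long)st.st_size);
    return 0;
}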
SYSTEM CALLS FOR DIRECTORY
MANAGEMENT
Call                                     Description
s = mkdir(name, mode)                    Create a new directory
s = rmdir(name)                          Remove an empty directory
s = link(name1, name2)                   Create a new entry, name2, pointing to name1
s = unlink(name)                         Remove a directory entry
s = mount(special, name, flag)           Mount a file system
s = umount(special)                      Unmount a file system
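A small sketch of the directory and link calls (the paths under /tmp are
arbitrary; mount and umount are left out because they require privileges and
a real file system to mount): create a directory and a file, add a second
link to the file, then remove both entries and the directory.

#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    mkdir("/tmp/demo_dir", 0755);                            /* create a new directory */

    int fd = open("/tmp/demo_dir/a.txt", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) close(fd);

    link("/tmp/demo_dir/a.txt", "/tmp/demo_dir/b.txt");      /* new entry b.txt pointing to a.txt */

    unlink("/tmp/demo_dir/b.txt");                           /* remove the directory entries */
    unlink("/tmp/demo_dir/a.txt");
    rmdir("/tmp/demo_dir");                                  /* remove the (now empty) directory */
    return 0;
}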
MISCELLANEOUS SYSTEM
CALLS
Call                                     Description
s = chdir(dirname)                       Change the working directory
s = chmod(name, mode)                    Change a file's protection bits
s = kill(pid, signal)                    Send a signal to a process
seconds = time(&seconds)                 Get the elapsed time since Jan. 1, 1970
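Finally, a brief sketch of the miscellaneous calls (the /tmp paths are
arbitrary, and the kill example sends signal 0, which only checks that the
target process exists rather than delivering a real signal):

#include <unistd.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <signal.h>
#include <time.h>
#include <stdio.h>

int main(void)
{
    chdir("/tmp");                           /* change the working directory */

    int fd = open("demo_misc.txt", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) close(fd);
    chmod("demo_misc.txt", 0600);            /* change the file's protection bits */

    time_t seconds = 0;
    time(&seconds);                          /* seconds elapsed since Jan. 1, 1970 */
    printf("seconds since the epoch: %lld\n", (long long)seconds);

    kill(getpid(), 0);                       /* signal 0: probe that the process exists */
    return 0;
}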