UNIT-4
Tagged architectures and multi-level UNIX
The Unix file system is a logical method of organizing and storing large amounts of information in
a way that makes it easy to manage. A file is the smallest unit in which information is stored.
The Unix file system has several important features. All data in Unix is organized into files. All files
are organized into directories. These directories are organized into a tree-like structure called the
file system.
Files in a Unix system are organized into a multi-level hierarchy known as a directory tree.
At the very top of the file system is a directory called “root” which is represented by a “/”. All
other files are “descendants” of root.
Directories and files, and their descriptions:
/ : The slash / character alone denotes the root of the filesystem tree.
/bin : Stands for “binaries” and contains certain fundamental utilities, such as ls or cp, which
are generally needed by all users.
/boot : Contains all the files required for a successful boot process.
/dev : Stands for “devices”. Contains file representations of peripheral devices and pseudo-
devices.
/etc : Contains system-wide configuration files and system databases. Originally also contained
“dangerous maintenance utilities” such as init, but these have typically been moved to /sbin or
elsewhere.
/home : Contains the home directories for the users.
/lib : Contains system libraries, and some critical files such as kernel modules or device
drivers.
/media : Default mount point for removable devices, such as USB sticks, media players, etc.
/mnt : Stands for “mount”. Contains filesystem mount points. These are used, for example, if
the system uses multiple hard disks or hard disk partitions. It is also often used for remote
(network) filesystems, CD-ROM/DVD drives, and so on.
/proc : procfs virtual filesystem showing information about processes as files.
/root : The home directory for the superuser “root” – that is, the system administrator. This
account’s home directory is usually on the initial filesystem, and hence not in /home (which
may be a mount point for another filesystem) in case specific maintenance needs to be
performed, during which other filesystems are not available. Such a case could occur, for
example, if a hard disk drive suffers physical failures and cannot be properly mounted.
/tmp : A place for temporary files. Many systems clear this directory upon startup; it might
have tmpfs mounted atop it, in which case its contents do not survive a reboot, or it might be
explicitly cleared by a startup script at boot time.
/usr : Originally the directory holding user home directories, its use has changed. It now holds
executables, libraries, and shared resources that are not system critical, like the X Window
System, KDE, Perl, etc. However, on some Unix systems, some user accounts may still have a
home directory that is a direct subdirectory of /usr, such as the default in Minix. (On modern
systems, these user accounts are often related to server or system use, and not directly used by
a person.)
/usr/bin : This directory stores all binary programs distributed with the operating system not
residing in /bin, /sbin or (rarely) /etc.
/usr/include : Stores the development headers used throughout the system. Header files are
mostly used by the #include directive in C/C++ programming language.
/usr/lib : Stores the required libraries and data files for programs stored within /usr or
elsewhere.
/var : Short for “variable”. A place for files that may change often, especially in size; for
example e-mail sent to users on the system, or process-ID lock files.
/var/log : Contains system log files.
/var/mail : The place where all the incoming mails are stored. Users (other than root) can
access their own mail only. Often, this directory is a symbolic link to /var/spool/mail.
/var/spool : Spool directory. Contains print jobs, mail spools and other queued tasks.
/var/tmp : A place for temporary files which should be preserved between system reboots.
Types of Unix files – The UNIX file system contains several different types of files:
1. Ordinary files – An ordinary file is a file on the system that contains data, text, or program
instructions.
Used to store your information, such as some text you have written or an image you have
drawn. This is the type of file that you usually work with.
Always located within/under a directory file.
Do not contain other files.
In long-format output of ls -l, this type of file is specified by the “-” symbol.
2. Directories – Directories store both special and ordinary files. For users familiar with
Windows or Mac OS, UNIX directories are equivalent to folders. A directory file contains an
entry for every file and subdirectory that it houses. If you have 10 files in a directory, there will
be 10 entries in the directory. Each entry has two components.
(1) The Filename
(2) A unique identification number for the file or directory (called the inode number)
Branching points in the hierarchical tree.
Used to organize groups of files.
May contain ordinary files, special files or other directories.
Never contain “real” information which you would work with (such as text). Basically, just
used for organizing files.
All files are descendants of the root directory (named /), located at the top of the tree.
In long-format output of ls –l , this type of file is specified by the “d” symbol.
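The two-component directory entry described above (filename plus inode number) can be observed directly. As an illustrative sketch in Python, os.scandir exposes both parts of each entry:

```python
import os
import tempfile

# A scratch directory with one file in it.
d = tempfile.mkdtemp()
open(os.path.join(d, "notes.txt"), "w").close()

# Each directory entry pairs a filename with an inode number.
entries = [(e.name, e.inode()) for e in os.scandir(d)]
print(entries)
```

The inode number reported for each entry is the same unique identifier that `ls -i` prints.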
3. Special Files – Used to represent a real physical device such as a printer, tape drive or
terminal, used for Input/Output (I/O) operations. Device or special files are used for device
Input/Output (I/O) on UNIX and Linux systems. They appear in a file system just like an ordinary
file or a directory.
On UNIX systems there are two flavors of special files for each device: character special files and
block special files.
When a character special file is used for device Input/Output(I/O), data is transferred one
character at a time. This type of access is called raw device access.
When a block special file is used for device Input/Output(I/O), data is transferred in large
fixed-size blocks. This type of access is called block device access.
For terminal devices, it’s one character at a time. For disk devices though, raw access means
reading or writing in whole chunks of data – blocks, which are native to your disk.
In long-format output of ls -l, character special files are marked by the “c” symbol.
In long-format output of ls -l, block special files are marked by the “b” symbol.
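The type character that ls -l prints is derived from the file's mode bits. As a hedged sketch, Python's stat module can reproduce the mapping described above:

```python
import os
import stat

def type_char(path):
    """Return the ls -l type character for a path."""
    mode = os.lstat(path).st_mode          # lstat: don't follow symlinks
    if stat.S_ISLNK(mode):  return "l"     # symbolic link
    if stat.S_ISREG(mode):  return "-"     # ordinary file
    if stat.S_ISDIR(mode):  return "d"     # directory
    if stat.S_ISCHR(mode):  return "c"     # character special file
    if stat.S_ISBLK(mode):  return "b"     # block special file
    if stat.S_ISFIFO(mode): return "p"     # named pipe
    if stat.S_ISSOCK(mode): return "s"     # socket
    return "?"

print(type_char("/tmp"))       # a directory
print(type_char("/dev/null"))  # a character device
```

The checks mirror the symbols listed in this section; ls itself reads the same mode bits.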
4. Pipes – UNIX allows you to link commands together using a pipe. The pipe acts as a temporary
file which only exists to hold data from one command until it is read by another. A Unix pipe
provides a one-way flow of data. The output of the first command is used as the
input to the second command. To make a pipe, put a vertical bar (|) on the command
line between two commands. For example: who | wc -l
In long-format output of ls –l , named pipes are marked by the “p” symbol.
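The plumbing the shell sets up for who | wc -l can be sketched programmatically. This example substitutes ls / for who so it produces output on any system; the pipe mechanics are identical:

```python
import subprocess

# Connect two commands with a pipe, as the shell does for `who | wc -l`.
p1 = subprocess.Popen(["ls", "/"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["wc", "-l"], stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()                        # let p2 see EOF when p1 finishes
count = p2.communicate()[0].decode().strip()
print(count)                             # number of entries in /
```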
5. Sockets – A Unix socket (or Inter-process communication socket) is a special file which
allows for advanced inter-process communication. A Unix Socket is used in a client-server
application framework. In essence, it is a stream of data, very similar to network stream (and
network sockets), but all the transactions are local to the file system.
In long-format output of ls -l, Unix sockets are marked by “s” symbol.
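A minimal illustration of the "local stream" idea: socketpair creates two already-connected Unix-domain sockets, a bidirectional byte stream with no network involved:

```python
import socket

# A connected pair of Unix-domain sockets: all traffic stays local.
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
parent.sendall(b"hello")
msg = child.recv(5)
print(msg)            # b'hello'
parent.close()
child.close()
```

Client-server applications normally use named socket files (bind/connect on a filesystem path) instead, but the data flow is the same.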
6. Symbolic Link – A symbolic link is used for referencing some other file of the file
system. A symbolic link is also known as a soft link. It contains a text form of the path to the file it
references. To an end user, a symbolic link appears to have its own name, but when you try
reading or writing data to this file, it instead redirects these operations to the file it points to.
If we delete the soft link itself, the data file is still there. If we delete the source file or
move it to a different location, the symbolic link will not function properly.
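The behavior described above (deleting the link keeps the data; deleting the source leaves a dangling link) can be demonstrated with a short sketch:

```python
import os
import tempfile

d = tempfile.mkdtemp()
target = os.path.join(d, "data.txt")
link = os.path.join(d, "data.lnk")

with open(target, "w") as f:
    f.write("payload")
os.symlink(target, link)               # the link stores the target's path as text

via_link = open(link).read()           # reads are redirected to the target
os.remove(target)                      # delete the source file
still_a_link = os.path.lexists(link)   # the link itself survives...
dangling = not os.path.exists(link)    # ...but it now points at nothing
print(via_link, still_a_link, dangling)
```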
Traps
are triggered by a user program to invoke functionality of the OS. Suppose a user
application requires something to be printed on the screen: it would set off a trap,
and the operating system would write the data to the screen.
A trap is a software-produced interrupt that can be caused by various factors, including
an error in an instruction, such as division by zero or illegal memory access. A trap may also
be generated when a user program makes a specific service request of the OS.
Traps are called synchronous events because they are caused by the execution of the current
instruction. System calls are one type of trap, in which the
program asks the operating system for a certain service, and the operating system
subsequently generates an interrupt to allow the program to access that service.
Traps are more deliberate than ordinary interrupts: the running code
relies on them as its standard mechanism for interacting with
the OS, and the trap instruction is issued anew each time a
system service is needed.
Mechanism of Trap in the Operating System
The user program on the CPU usually makes use of library calls to
make system calls. The library routine's job is to validate the
program's parameters, create a data structure to transfer the
arguments from the application to the operating system's kernel,
and then execute special instructions known as traps or software
interrupts.
These special instructions, or traps, have operands that help
determine which kernel service the application requires.
As a result, when the process is set to execute the traps, the
interrupt saves the user code's state, switches to supervisor mode,
and then dispatches the relevant kernel procedure that may offer
the requested service.
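A small illustration of the path from library call to trap: os.write is a thin wrapper that marshals its arguments and executes the trap instruction that switches the CPU to supervisor mode and dispatches the kernel's write handler. Under strace, the single call below shows up as one write(2) system call:

```python
import os

# Library call -> trap -> kernel service -> return to user mode.
r, w = os.pipe()
os.write(w, b"trapped\n")    # one write() system call under the hood
data = os.read(r, 8)         # one read() system call
print(data)
os.close(r)
os.close(w)
```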
Configuring Rules in Trap Policies
Rules define the action a policy should take in response to a
specific type of incoming event. Each rule consists of the
following:
A condition for the incoming data
The condition is the part of a policy that describes the data
source.
Settings for the outgoing event
The settings define the actual event data that Operations
Connector sends to OMi.
A policy must contain at least one rule. If the policy contains
multiple rules, they are evaluated consecutively. After the
condition is matched in one rule, rule evaluation stops.
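The first-match semantics can be sketched as follows; the rule kinds and event fields here are illustrative stand-ins, not the product's actual API:

```python
# Rules are evaluated consecutively; evaluation stops at the first match.
rules = [
    {"kind": "suppress_on_match", "match": lambda ev: ev["severity"] == "info"},
    {"kind": "event_on_match",    "match": lambda ev: ev["severity"] == "critical"},
]

def process(event):
    for rule in rules:               # consecutive evaluation
        if rule["match"](event):
            return rule["kind"]      # stop at the first matching rule
    return "no_match"

print(process({"severity": "critical"}))   # event_on_match
print(process({"severity": "info"}))       # suppress_on_match
```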
To access
In the Operations Connector user interface, click in the toolbar,
then Interceptor. The SNMP Trap Policy editor opens.
Alternatively, double-click an existing SNMP trap policy to edit it.
Click Rules to open the policy Rules page.
Rule types
The rule types are:
Event on matched rule. If matched, Operations
Connector sends an event to OMi. The event uses the settings
defined for the rule. If you do not configure these settings, the
default settings are used.
Suppress on matched rule. If matched, Operations
Connector stops processing and does not send an event to OMi.
Suppress on unmatched rule. If not matched, Operations
Connector stops processing and does not send an event to OMi.
Kernel Hooking Basics
Kernel in general, whether it be Windows or Linux, is a subject not too
many take an interest in, nor is it widely taught. It is, however, a place
for various types of malicious activities, not often in plain sight and not
always easy to detect, many times due to the fact that kernels are not
properly understood. This paper gives an overview of what a kernel is,
its architecture, common concepts and nomenclatures, and finally
explains how kernel hooking works. The focus will mainly be on the
Microsoft Windows kernel, although Linux kernel will be mentioned
throughout.
Kernel Hooking
The term hooking [5] covers a range of techniques used to alter or
augment the behavior of an operating system, an application or any
other software components by intercepting function calls, messages
and events passed between the different software components. The
code that performs the interception of function calls, events or
messages is called a hook. Typically hooks are inserted while software is
already running, but hooking is a tactic that can also be employed prior
to the application being started.
The two main methods of hooking are:
Physical modification: Achieved by physically modifying an
executable or library before an application is running. Through
techniques of reverse engineering, you can also achieve hooking. This
is typically used to intercept function calls to either monitor them or
replace them entirely. For example, by using a disassembler, the
entry point of a function within a module can be found. It can then be
altered to instead dynamically load some other library module and
then have it execute desired methods within that loaded library. If
applicable, another related approach by which hooking can be
achieved is by altering the import table of an executable. This table
can be modified to load any additional library modules as well as
changing what external code is invoked when a function is called by
the application. An alternative method for achieving function hooking
is by intercepting function calls through a wrapper library. When
creating a wrapper, you make your own version of a library that an
application loads with all the same functionality of the original library
that it will replace. That is, all the functions that are accessible are
essentially the same between the original and the replacement. This
wrapper library can be designed to call any of the functionality from
the original library, or replace it with an entirely new set of logic.
Runtime modification: Operating systems and software may provide
the means to easily insert event hooks at runtime. Microsoft
Windows for example, allows you to insert hooks that can be used to
process or modify system events and application events for dialogs,
scrollbars, and menus as well as other items. It also allows a hook to
insert, remove, process or modify keyboard and mouse events. Linux
provides another example where hooks can be used in a similar
manner to process network events within the kernel through
NetFilter.
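In miniature, runtime hooking means wrapping a function so that every call is intercepted, observed, and then forwarded to the original. This user-space Python analogy (not kernel code) shows the pattern:

```python
import os

# Runtime hooking in miniature: replace a function with a wrapper that
# intercepts every call, records it, then delegates to the original.
calls = []
original_getcwd = os.getcwd

def hooked_getcwd():
    calls.append("getcwd")        # the "hook": observe the call
    return original_getcwd()      # ...then forward to the real code

os.getcwd = hooked_getcwd         # install the hook
os.getcwd()                       # callers are unaware of the interception
os.getcwd = original_getcwd       # remove the hook
print(calls)                      # ['getcwd']
```

Kernel hooks follow the same shape, but the pointer being swapped lives in a kernel table (a system-call table entry, an import table slot, or a function prologue) rather than a module attribute.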
SELinux type enforcement: design, implementation, and
pragmatics.
Security-Enhanced Linux (SELinux) is a security architecture
for Linux® systems that allows administrators to have more
control over who can access the system. It was originally
developed by the United States National Security Agency (NSA)
as a series of patches to the Linux kernel using Linux Security
Modules (LSM).
SELinux was released to the open source community in 2000,
and was integrated into the upstream Linux kernel in 2003.
How does SELinux work?
SELinux defines access controls for the applications, processes,
and files on a system. It uses security policies, which are a set of
rules that tell SELinux what can or can’t be accessed, to enforce
the access allowed by a policy.
When an application or process, known as a subject, makes a
request to access an object, like a file, SELinux checks with an
access vector cache (AVC), where permissions are cached for
subjects and objects.
If SELinux is unable to make a decision about access based on
the cached permissions, it sends the request to the security
server. The security server checks for the security context of the
app or process and the file. Security context is applied from the
SELinux policy database. Permission is then granted or denied.
If permission is denied, an "avc: denied" message will be
available in /var/log/messages.
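The decision path above (consult the AVC cache first, fall back to the security server on a miss, cache the result) can be sketched as a toy model; the policy entries here are invented for illustration:

```python
# Toy model of SELinux's access decision path.  The policy table is
# a made-up stand-in for the SELinux policy database.
policy = {("httpd_t", "httpd_sys_content_t", "read"): True}
avc = {}                                 # access vector cache

def check(subject_type, object_type, perm):
    key = (subject_type, object_type, perm)
    if key in avc:                       # fast path: cached decision
        return avc[key]
    decision = policy.get(key, False)    # slow path: ask the security server
    avc[key] = decision                  # cache the result for next time
    if not decision:
        print(f"avc: denied {{ {perm} }} for {subject_type} -> {object_type}")
    return decision

print(check("httpd_t", "httpd_sys_content_t", "read"))   # True
print(check("httpd_t", "shadow_t", "read"))              # denied -> False
```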
How to configure SELinux
There are a number of ways that you can configure SELinux to
protect your system. The most common are targeted policy or
multi-level security (MLS).
Targeted policy is the default option and covers a range of
processes, tasks, and services. MLS can be very complicated and
is typically only used by government organizations.
You can tell what mode your system is supposed to be running in by
looking at the /etc/sysconfig/selinux file. The file will have a
section that shows you whether SELinux is in permissive mode,
enforcing mode, or disabled, and which policy is supposed to be
loaded.
SELinux labeling and type enforcement
Type enforcement and labeling are the most important concepts
for SELinux.
SELinux works as a labeling system, which means that all of the
files, processes, and ports in a system have an SELinux label
associated with them. Labels are a logical way of grouping things
together. The kernel manages the labels during boot.
Labels are in the format user:role:type:level (level is optional).
User, role, and level are used in more advanced implementations
of SELinux, like with MLS. Label type is the most important for
targeted policy.
SELinux uses type enforcement to enforce a policy that is defined
on the system. Type enforcement is the part of an SELinux policy
that defines whether a process running with a certain type can
access a file labeled with a certain type.
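A label such as system_u:object_r:httpd_sys_content_t:s0 can be split into the four fields described above; a small parsing sketch:

```python
# SELinux labels have the form user:role:type:level (level optional).
def parse_label(label):
    parts = label.split(":", 3)
    fields = ["user", "role", "type", "level"]
    return dict(zip(fields, parts))

# A label typical of content served under a targeted policy.
label = parse_label("system_u:object_r:httpd_sys_content_t:s0")
print(label["type"])   # httpd_sys_content_t -- the field targeted policy checks
```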
Enabling SELinux
If SELinux has been disabled in your environment, you can
enable SELinux by editing /etc/selinux/config and setting
SELINUX=permissive. Since SELinux was not previously enabled,
you don't want to set it to enforcing right away, because the
system will likely have things mislabeled that can keep the system
from booting.
You can force the system to automatically relabel the
filesystem by creating an empty file named .autorelabel in the root
directory and then rebooting. If the system has too many errors,
you should reboot while in permissive mode in order for the boot
to succeed. After everything has been relabeled, set SELinux to
enforcing with /etc/selinux/config and reboot, or run setenforce 1.
If a sysadmin is less familiar with the command line, there are
graphic tools available that can be used to manage SELinux.
SELinux provides an additional layer of security for your system
that is built into Linux distributions. It should remain on so that it
can protect your system if it is ever compromised.
How to handle SELinux errors
When you get an error in SELinux, there is something that needs
to be addressed. It is likely one of these four common problems:
1. The labels are wrong. If your labeling is incorrect you can
use the tools to fix the labels.
2. A policy needs to be fixed. This could mean that you need
to inform SELinux about a change you’ve made, or you
might need to adjust a policy. You can fix it using booleans
or policy modules.
3. There is a bug in the policy. It could be that a bug exists in
the policy that needs to be addressed.
4. The system has been broken into. Although SELinux can
protect your systems in many scenarios, the possibility for a
system to be compromised still exists. If you suspect that
this is the case, take action immediately.
kmem
The kmem tracing system captures events related to object and
page allocation within the kernel. Broadly speaking there are five
major subheadings.
Slab allocation of small objects of unknown type (kmalloc)
Slab allocation of small objects of known type
Page allocation
Per-CPU Allocator Activity
External Fragmentation
This document describes what each of the tracepoints is and why
they might be useful.
1. Slab allocation of small objects of unknown type
kmalloc call_site=%lx ptr=%p bytes_req=%zu
bytes_alloc=%zu gfp_flags=%s
kmalloc_node call_site=%lx ptr=%p bytes_req=%zu
bytes_alloc=%zu gfp_flags=%s node=%d
kfree call_site=%lx ptr=%p
Heavy activity for these events may indicate that a specific cache
is justified, particularly if kmalloc slab pages are becoming
significantly internally fragmented as a result of the allocation
pattern. By correlating kmalloc with kfree, it may be possible to
identify memory leaks and where the allocation sites were.
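Correlating kmalloc with kfree, as suggested above, amounts to tracking live pointers: anything allocated but never freed is a leak candidate, and call_site says where it came from. The trace lines in this sketch are made-up samples in the documented format:

```python
import re

trace = """\
kmalloc call_site=ffffffff8112d8a0 ptr=0xffff8800c0ffee00 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL
kmalloc call_site=ffffffff8112d8a0 ptr=0xffff8800c0ffee40 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL
kfree call_site=ffffffff8112d900 ptr=0xffff8800c0ffee00
"""

live = {}                                   # ptr -> call_site of the allocation
for line in trace.splitlines():
    m = re.match(r"(kmalloc|kfree) call_site=(\S+) ptr=(\S+)", line)
    if not m:
        continue
    event, call_site, ptr = m.groups()
    if event == "kmalloc":
        live[ptr] = call_site               # allocation: remember the site
    else:
        live.pop(ptr, None)                 # free: pointer is no longer live

print(live)    # whatever remains was never freed -- a leak candidate
```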
2. Slab allocation of small objects of known type
kmem_cache_alloc call_site=%lx ptr=%p
bytes_req=%zu bytes_alloc=%zu gfp_flags=%s
kmem_cache_alloc_node call_site=%lx ptr=%p
bytes_req=%zu bytes_alloc=%zu gfp_flags=%s node=%d
kmem_cache_free call_site=%lx ptr=%p
These events are similar in usage to the kmalloc-related events
except that it is likely easier to pin the event down to a specific
cache. At the time of writing, no information is available on what
slab is being allocated from, but the call_site can usually be used
to extrapolate that information.
3. Page allocation
mm_page_alloc page=%p pfn=%lu order=%d
migratetype=%d gfp_flags=%s
mm_page_alloc_zone_locked page=%p pfn=%lu order=%u
migratetype=%d cpu=%d percpu_refill=%d
mm_page_free page=%p pfn=%lu order=%d
mm_page_free_batched page=%p pfn=%lu order=%d
cold=%d
These four events deal with page allocation and freeing.
mm_page_alloc is a simple indicator of page allocator activity.
Pages may be allocated from the per-CPU allocator (high
performance) or the buddy allocator.
If pages are allocated directly from the buddy allocator, the
mm_page_alloc_zone_locked event is triggered. This event is
important as high amounts of activity imply high activity on the
zone->lock. Taking this lock impairs performance by disabling
interrupts, dirtying cache lines between CPUs and serialising
many CPUs.
When a page is freed directly by the caller, only the
mm_page_free event is triggered. Significant amounts of activity
here could indicate that the callers should be batching their
activities.
When pages are freed in batch, the mm_page_free_batched event
is also triggered. Broadly speaking, pages are taken off the LRU list
in bulk and freed in batch with a page list. Significant amounts of
activity here could indicate that the system is under memory
pressure and can also indicate contention on the zone->lru_lock.
4. Per-CPU Allocator Activity
mm_page_alloc_zone_locked page=%p pfn=%lu
order=%u migratetype=%d cpu=%d percpu_refill=%d
mm_page_pcpu_drain page=%p pfn=%lu
order=%d cpu=%d migratetype=%d
In front of the page allocator is a per-cpu page allocator. It exists
only for order-0 pages, reduces contention on the zone->lock and
reduces the amount of writing on struct page.
When a per-CPU list is empty or pages of the wrong type are
allocated, the zone->lock will be taken once and the per-CPU list
refilled. The event triggered is mm_page_alloc_zone_locked for
each page allocated with the event indicating whether it is for a
percpu_refill or not.
When the per-CPU list is too full, a number of pages are freed,
each of which triggers a mm_page_pcpu_drain event.
The individual nature of the events is so that pages can be
tracked between allocation and freeing. A number of drain or refill
pages that occur consecutively imply the zone->lock being taken
once. Large amounts of per-CPU refills and drains could imply an
imbalance between CPUs where too much work is being
concentrated in one place. It could also indicate that the per-CPU
lists should be a larger size. Finally, large amounts of refills on
one CPU and drains on another could be a factor in causing large
amounts of cache line bounces due to writes between CPUs and
worth investigating if pages can be allocated and freed on the
same CPU through some algorithm change.
5. External Fragmentation
mm_page_alloc_extfrag page=%p pfn=%lu
alloc_order=%d fallback_order=%d pageblock_order=%d
alloc_migratetype=%d fallback_migratetype=%d
fragmenting=%d change_ownership=%d
External fragmentation affects whether a high-order allocation will
be successful or not. For some types of hardware, this is
important although it is avoided where possible. If the system is
using huge pages and needs to be able to resize the pool over
the lifetime of the system, this value is important.
A large number of these events implies that memory is fragmenting
and that high-order allocations will start failing at some time in the
future. One means of reducing the occurrence of this event is to
future. One means of reducing the occurrence of this event is to
increase the size of min_free_kbytes in increments of
3*pageblock_size*nr_online_nodes where pageblock_size is
usually the size of the default hugepage size.
The Vmem Allocator
The kmem allocator relies on two lower-level system services to
create slabs: a virtual address allocator to provide kernel virtual
addresses, and VM routines to back those addresses with
physical pages and establish virtual-to-physical translations. The
scalability of large systems was limited by the old virtual address
allocator (the resource map allocator). It tended to fragment the
address space badly over time, its latency was linear in the
number of fragments, and the whole thing was single-threaded.
Virtual address allocation is, however, just one example of the
more general problem of resource allocation. For our purposes,
a resource is anything that can be described by a set of integers.
For example: virtual addresses are subsets of the 64-bit integers;
process IDs are subsets of the integers [0, 30000]; and minor
device numbers are subsets of the 32-bit integers.
Vmem Objectives
A good resource allocator should have the following
properties:
o A powerful interface that can cleanly express the most
common resource allocation problems
o Constant-time performance, regardless of the size of the
request or the degree of fragmentation
o Linear scalability to any number of CPUs
o Low fragmentation, even if the operating system runs at full
throttle for years
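In the spirit of "a resource is anything describable by a set of integers", a toy first-fit allocator over an integer range sketches the interface idea. Real vmem is constant-time and scalable; this sketch is neither, and its class and method names are invented:

```python
# Toy first-fit allocator over an integer resource (e.g. IDs, addresses).
class Arena:
    def __init__(self, start, end):
        self.free = [(start, end)]               # list of free half-open spans

    def alloc(self, size):
        for i, (lo, hi) in enumerate(self.free):
            if hi - lo >= size:                  # first span big enough
                self.free[i] = (lo + size, hi)   # carve the request off its front
                return lo
        raise MemoryError("arena exhausted")

    def free_span(self, start, size):
        self.free.append((start, start + size))  # no coalescing in this sketch

ids = Arena(0, 30000)        # e.g. the process-ID space mentioned above
a = ids.alloc(1)
b = ids.alloc(1)
print(a, b)                  # 0 1
```

First-fit over a span list is exactly the linear-time, fragmentation-prone behavior of the old resource map allocator that vmem was designed to replace.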
Object-oriented operating system
An object-oriented operating system is an operating system that is
designed, structured, and operated
using object-oriented programming principles.
An object-oriented operating system is in contrast to an object-
oriented user interface or programming framework, which can be
run on a non-object-oriented operating system like DOS or Unix.
There are already object-based language concepts involved in the
design of a more typical operating system such as Unix. While a
more traditional language like C does not support object-
orientation as fluidly as more recent languages, the notion of, for
example, a file, stream, or device driver (in Unix, each
represented as a file descriptor) can be considered a good
example of objects. They are, after all, abstract data types, with
various methods in the form of system calls, whose behavior varies
based on the type of object and whose implementation details are
hidden from the caller.
Object-orientation has been defined as objects + inheritance, and
inheritance is only one approach to the more general problem
of delegation that occurs in every operating system. [2] Object-
orientation has been more widely used in the user interfaces of
operating systems than in their kernels.
Athene
Athene is an object-based operating system first released in
2000 by Rocklyte Systems.[3][4] The user environment was
constructed entirely from objects that are linked together at
runtime. Applications for Athene could also be created using
this methodology and were commonly scripted using the
object scripting language Dynamic Markup Language (DML).
Objects could be shared between processes by
creating them in shared memory and locking them as
needed for access. Athene's object framework was multi-
platform, allowing it to be used in Windows and Linux
environments for developing object-oriented programs. The
company went defunct and the project abandoned sometime
in 2009.
BeOS
BeOS[5] was an object-oriented operating system released in
1995, which used objects and the C++ language for
the application programming interface (API). The kernel was
written in C with C++ wrappers in user space. The OS did
not see mainstream usage and proved commercially
unviable, however it has seen continued usage and
development by a small enthusiast community.
Choices
Choices is an object-oriented operating system developed at
the University of Illinois at Urbana–Champaign.[6][7] It is
written in C++ and uses objects to represent core kernel
components like the central processing
unit (CPU), processes, and so on. Inheritance is used to
separate the kernel into portable machine-independent
classes and small non-portable dependent classes. Choices
has been ported to and runs on SPARC, x86, and ARM.
GEOS
PC/GEOS is a light-weight object-oriented multitasking
graphical operating system with sophisticated window and
desktop management featuring scalable fonts. It is mostly
written in an object-oriented x86 assembly language dialect
and some C/C++ and is designed to run on DOS (similar to
Microsoft Windows up to Windows Me). GEOS was
originally developed by Berkeley Softworks in 1990, which
later became GeoWorks Corporation, and it continues to
be maintained by BreadBox Computer Company.
Related software suites were named Ensemble and New
Deal Office. Adaptations exist for various palmtops, and 32-
bit systems with non-x86-CPUs.
Haiku
Haiku (originally named OpenBeOS), is an open-source
replacement for BeOS. It reached its first development
milestone in September 2009 with the release of Haiku
R1/Alpha 1. The x86 distribution is compatible with BeOS at
both source and binary level. Like BeOS, it is written
primarily in C++ and provides an object-oriented API. It is
actively developed.
IBM i (OS/400, i5/OS)
IBM introduced OS/400 in 1988. This OS ran exclusively on
the AS/400 platform. Renamed IBM i in 2008, this operating
system runs exclusively on Power Systems, which
can also run AIX and Linux. IBM i uses an object-oriented
methodology and integrates a database (Db2 for i). The IBM
i OS has a 128-bit unique identifier for each object.
IBM OS/2 2.0
IBM's first priority-based, pre-emptive multitasking, graphical,
window-based operating system included an object-oriented
user shell. It was designed for the Intel 80386 that
released in 1992. ArcaOS, a new OS/2 based operating
system initially called Blue Lion[8] is being developed by Arca
Noae. The first version was released in May 2017.
IBM TopView
TopView was an object-oriented operating environment that
loaded on a PC running DOS, and then took control from DOS. At
that point it effectively became an object-oriented operating
system with an object-oriented API (TopView API). It was
IBM's first multi-tasking, window-based, object-oriented
operating system for the PC, led by David C. Morrill and
released in February 1985.
Java-based
Given that Oracle's (formerly Sun Microsystems') Java is
today one of the most dominant object-oriented languages, it
is no surprise that Java-based operating systems have been
attempted. In this area, ideally, the kernel would consist of
the bare minimum needed to support a Java virtual
machine (JVM). This is the only component of such an
operating system that would have to be written in a language
other than Java. Built on the JVM and basic hardware
support, it would be possible to write the rest of the operating
system in Java; even parts of the system that are more
traditionally written in a lower-level language such as C, for
example device drivers, can be written in Java.
Examples of attempts at such an operating system
include JavaOS, JOS,[9] JNode, and JX.
Lisp-based
An object-oriented operating system written in
the Lisp dialect Lisp Machine Lisp (and later Common Lisp)
was developed at MIT. It was commercialized with Lisp
Machines from Symbolics, Lisp Machines Inc. and Texas
Instruments. Symbolics called their operating
system Genera. It was developed with the Flavors object-
oriented extension of Lisp, then with New Flavors, and then
with the Common Lisp Object System (CLOS).
Xerox developed several workstations with an operating
system written in Interlisp-D. Interlisp-D provided object-
oriented extensions like LOOPS and CLOS.
Movitz and Mezzano are two more recent attempts at
operating systems written in Common Lisp.
Medos-2
Medos-2 is a single-user, object-oriented operating system
made for the Lilith line of workstations (processor: Advanced
Micro Devices (AMD) 2901), developed in the early 1980s
at ETH Zurich by Svend Erik Knudsen with advice
from Niklaus Wirth. It is built entirely from modules of the
programming language Modula-2.[10][11][12] It was succeeded
at ETH Zurich by the Oberon system, and a variant
named Excelsior was developed for the Kronos workstation
by the Kronos Research Group (KRG) of the Modular
Asynchronous Developable Systems (MARS) project at the
Novosibirsk Computing Center, Siberian Branch of the
Academy of Sciences of the Soviet Union.
Microsoft Singularity
Singularity is an experimental operating system based on
Microsoft's .NET Framework. It is comparable to Java-based
operating systems.
Microsoft Windows NT
Windows NT is a family of operating systems
(including Windows 7, 8, Phone 8, 8.1, Windows 10, 10
Mobile, Windows 11 and Xbox) produced by Microsoft, the
first version of which was released in July 1993. It is a high-
level programming language-based, processor-
independent, multiprocessing, multi-user operating system. It
is best described as object-based rather than object-oriented
as it does not include the full inheritance properties of object-
oriented languages.[14]
The Object Manager is in charge of managing NT objects.
As part of this responsibility, it maintains an
internal namespace where various operating system
components, device drivers, and Win32 programs can store
and look up objects. The NT Native API provides routines
that allow user-space (user-mode) programs to browse the
namespace and query the status of objects located there,
but the interfaces are undocumented. NT supports per-
object (file, function, and role) access control lists, allowing a
rich set of security permissions to be applied to systems and
services. WinObj is a Windows NT program that uses the NT
Native API (provided by NTDLL.DLL) to access and display
information on the NT Object Manager's namespace.
Memory Hierarchy
The memory in a computer can be divided into five hierarchies
based on speed as well as use. The processor can move from
one level to another based on its requirements. The five
hierarchies in the memory are registers, cache, main memory,
magnetic disks, and magnetic tapes. The first three hierarchies
are volatile memories, which means that they automatically lose
their stored data when there is no power, whereas the last two
hierarchies are non-volatile, which means they store data
permanently.
A memory element is a set of storage devices that stores
binary data in the form of bits. In general, memory storage
can be classified into two categories: volatile and non-volatile.
Memory Hierarchy in Computer Architecture
The memory hierarchy design in a computer system mainly
includes different storage devices. Most computers are built with
extra storage so that they can run workloads beyond the capacity
of main memory. The following memory hierarchy diagram is a
hierarchical pyramid for computer memory. The memory hierarchy
design is divided into two types: primary (internal) memory and
secondary (external) memory.
Memory Hierarchy
Primary Memory
The primary memory is also known as internal memory, and it
is accessible by the processor directly. This memory includes
main memory, cache memory, and CPU registers.
Secondary Memory
The secondary memory is also known as external memory, and
it is accessible by the processor through an input/output
module. This memory includes optical disks, magnetic disks, and
magnetic tape.
Characteristics of Memory Hierarchy
The memory hierarchy characteristics mainly include the
following.
Performance
Earlier, computer systems were designed without a memory
hierarchy, and the speed gap between main memory and the
CPU registers grew because of the huge disparity in access
time, which lowered the performance of the system. So an
enhancement was required. The memory hierarchy model was
designed to provide this enhancement and increase the
system's performance.
Capacity
The capacity of the memory hierarchy is the total amount of data
the memory can store. Whenever we move from top to bottom
in the memory hierarchy, the capacity increases.
Access Time
The access time in the memory hierarchy is the interval between
a request to read or write and the moment the data becomes
available. Whenever we move from top to bottom in the memory
hierarchy, the access time increases.
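The access-time trade-off between adjacent levels can be made concrete with the standard average-access-time formula, AMAT = hit time + miss rate x miss penalty. The sketch below is illustrative; the nanosecond figures are assumed values, not measurements of any particular machine.

```python
def effective_access_time(hit_time_ns, miss_rate, miss_penalty_ns):
    """Average time per access when a fraction `miss_rate` of requests
    must fall through to the slower level of the hierarchy."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Assumed figures: a 2 ns cache with a 5% miss rate, backed by 60 ns main memory.
amat = effective_access_time(hit_time_ns=2, miss_rate=0.05, miss_penalty_ns=60)
print(amat)  # 5.0 -- far closer to cache speed than to main-memory speed
```

This is why a small fast level in front of a large slow one pays off: with a high hit rate, the average access time stays close to that of the fast level.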
Cost per bit
When we move from bottom to top in the memory hierarchy,
the cost per bit increases, which means internal memory is
expensive compared with external memory.
Memory Hierarchy Design
The memory hierarchy in computers mainly includes the following.
Registers
Usually, a register is static RAM (SRAM) in the processor of
the computer that holds a data word, typically 32 or 64 bits.
The program counter register is the most important one and is
found in all processors. Most processors also use a status word
register as well as an accumulator. The status word register is
used for decision making, and the accumulator is used to store
data such as the results of mathematical operations. Complex
instruction set computers (CISC) usually have only a few
registers and lean on main memory instead, whereas reduced
instruction set computers (RISC) have more registers.
Cache Memory
Cache memory is also found in the processor; however, it may
occasionally be a separate IC (integrated circuit). It is divided
into levels and holds chunks of data that are frequently used
from main memory. When the processor has a single core, it
will usually have two or more cache levels. Present multi-core
processors typically have three levels: two private levels for
each core, and one level shared among all cores.
Main Memory
The main memory in the computer is the memory unit that the
CPU communicates with directly. It is the main storage unit of
the computer: a fast as well as large memory used for storing
data throughout the operation of the computer. This memory is
made up of RAM as well as ROM.
Magnetic Disks
The magnetic disks in the computer are circular plates fabricated
of plastic or metal and coated with magnetized material.
Frequently, both faces of the disk are used, and many disks
may be stacked on one spindle, with read/write heads available
for every surface. All the disks rotate together at high speed.
Bits are stored on the magnetized surface in spots along
concentric circles called tracks. The tracks are usually divided
into sections known as sectors.
Magnetic Tape
Magnetic tape is a conventional magnetic recording medium,
made of a thin magnetizable coating on a long, narrow strip of
plastic film. It is mainly used to back up huge amounts of data.
Whenever the computer needs to access a tape, it first mounts
the tape to access the data; once the data has been read, the
tape is unmounted. The access time of magnetic tape is much
slower, and it can take a few minutes to access a tape.
Advantages of Memory Hierarchy
The need for a memory hierarchy includes the following.
Memory distribution is simple and economical
Reduces external fragmentation
Data can be spread all over
Permits demand paging & pre-paging
Swapping will be more proficient
Thus, this is all about memory hierarchy. From the above
information, we can conclude that it is mainly used to decrease
the cost per bit and the average access time, and to increase
the storage capacity.
Multiprocessor Operating system
A multiprocessor system is one in which more than one
processor works in parallel to perform the required operations.
The multiple processors are connected to shared physical
memory, computer buses, clocks, and peripheral devices.
The main objective of using a multiprocessor operating system is
to increase the execution speed of the system and to provide
high computing power.
Advantages
The advantages of multiprocessor systems are as follows −
If there are multiple processors working at the same time,
more processes can be executed in parallel. Therefore the
throughput of the system will increase.
Multiprocessor systems are more reliable. Because there is
more than one processor, the failure of any one processor
will not bring the system to a halt. The system will become
slow if this happens, but it will still work.
Electricity consumption of a multiprocessor system is less
than that of single-processor systems. This is because, in
single-processor systems, many processes have to be executed
by only one processor, so there is a lot of load on it. But in
a multiple-processor system, there are many processors to
execute the processes, so the load on each processor is
comparatively less, and the electricity consumed is also less.
Fields
The different types of multiprocessor operating systems in use
are as follows −
Asymmetric Multiprocessor − Every processor is assigned
predefined tasks in this operating system, and the master
processor has the power to run the entire system. In other
words, it uses a master-slave relationship.
Symmetric Multiprocessor − In this system, every processor
runs an identical copy of the OS, and the processors can
communicate with one another. All processors are connected
as peers, meaning there is no master-slave relation.
Shared memory Multiprocessor − As the name indicates, all
central processing units share a common memory.
Uniform Memory Access Multiprocessor (UMA) − In this
system, all processors can access all of memory at a uniform
speed.
Distributed memory Multiprocessor − A computer system
consisting of a number of processors, each with its own local
memory, connected through a network; that is, every
processor has its own private memory.
NUMA Multiprocessor − The abbreviation NUMA stands for
Non-Uniform Memory Access Multiprocessor. In such a system,
some regions of memory can be accessed at a faster rate than
others; the access time depends on where the memory is
located relative to the processor.
Operations on the File
A file is a collection of logically related data recorded on
secondary storage in the form of a sequence of bits, bytes,
lines, or records. The content of a file is defined by its creator.
The various operations which can be performed on a file,
such as read, write, open, and close, are called file operations.
These operations are performed by the user using the
commands provided by the operating system. Some common
operations are as follows:
1. Create operation:
This operation is used to create a file in the file system. It is the
most widely used operation performed on the file system. To
create a new file of a particular type, the associated application
program calls the file system. The file system allocates space to
the file. As the file system knows the format of the directory
structure, an entry for this new file is made in the appropriate
directory.
2. Open operation:
This is the most common operation performed on a file.
Once a file is created, it must be opened before the
file-processing operations can be performed. When the user
wants to open a file, they provide the file name of the particular
file to open. This tells the operating system to invoke the open
system call and pass the file name to the file system.
3. Write operation:
This operation is used to write information into a file. A
write system call is issued that specifies the name of the file and
the data to be written to the file. The file length is increased by
the amount of data written, and the file pointer is repositioned
after the last byte written.
4. Read operation:
This operation reads the contents from a file. A Read pointer is
maintained by the OS, pointing to the position up to which the
data has been read.
5. Re-position or Seek operation:
The seek system call re-positions the file pointer from the current
position to a specific place in the file, i.e. forward or backward
depending on the user's requirement. This operation is generally
supported by file management systems that allow direct-access
files.
6. Delete operation:
Deleting the file not only deletes all the data stored inside the file,
it also frees the disk space occupied by it. In order to delete the
specified file, the directory is searched. When the directory entry is
located, all the associated file space and the directory entry are
released.
7. Truncate operation:
Truncating deletes a file's contents while keeping its attributes. The
file is not completely deleted; instead, the information stored inside
the file is erased, resetting its length.
8. Close operation:
When the processing of the file is complete, it should be closed so
that all the changes made are permanent and all the resources
occupied are released. Closing deallocates all the internal
descriptors that were created when the file was opened.
9. Append operation:
This operation adds data to the end of the file.
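The operations above can be sketched with Python's os module, whose functions map closely onto the underlying system calls (open, write, lseek, read, ftruncate, close, unlink). The file name demo.txt is just an illustrative assumption.

```python
import os

fd = os.open("demo.txt", os.O_RDWR | os.O_CREAT, 0o644)  # create + open

os.write(fd, b"hello world")   # write: length grows, pointer moves past the data
os.lseek(fd, 0, os.SEEK_SET)   # seek: reposition the file pointer to the start
data = os.read(fd, 5)          # read: consume 5 bytes from the pointer onward
print(data)                    # b'hello'

os.lseek(fd, 0, os.SEEK_END)   # append: seek to the end before writing
os.write(fd, b"!")

os.ftruncate(fd, 5)            # truncate: keep only the first 5 bytes
os.close(fd)                   # close: make changes permanent, free the descriptor

os.unlink("demo.txt")          # delete: release the directory entry and file space
```

Note how the file pointer is shared by read, write, and seek: every operation after open is expressed relative to it.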
I/O buffering and its Various Techniques
A buffer is a memory area that stores data being transferred between two devices
or between a device and an application.
Uses of I/O Buffering :
Buffering is done to deal effectively with a speed mismatch between the
producer and consumer of the data stream.
A buffer is created in main memory to accumulate the bytes received from the
modem.
After receiving the data in the buffer, the data is transferred from the buffer to
disk in a single operation.
This process of data transfer is not instantaneous, therefore the modem needs
another buffer in order to store additional incoming data.
When the first buffer is filled, a request is made to transfer its data to disk.
The modem then starts filling the second buffer with additional incoming data
while the data in the first buffer is being transferred to disk.
When both buffers have completed their tasks, the modem switches back to
the first buffer while the data from the second buffer is transferred to the disk.
The use of two buffers decouples the producer and the consumer of the data,
thus relaxing the timing requirements between them.
Buffering also accommodates devices that have different data-transfer
sizes.
Types of various I/O buffering techniques :
1. Single buffer :
A buffer is provided by the operating system in the system portion of main
memory.
Block-oriented device –
The system buffer takes the input.
After taking the input, the block is transferred to user space by the process,
and the process then requests another block.
Two blocks work simultaneously: while one block of data is processed by the
user process, the next block is being read in.
The OS can swap the processes.
The OS can move the data from the system buffer to user processes.
Stream-oriented device –
Line-at-a-time operation is used for scroll-mode terminals. The user inputs one
line at a time, with a carriage return signaling the end of the line.
Byte-at-a-time operation is used on forms-mode terminals, where each
keystroke is significant.
2. Double buffer :
Block-oriented –
There are two buffers in the system.
One buffer is used by the driver or controller to store data while waiting for it to
be taken by a higher level of the hierarchy.
The other buffer is used to store data from the lower-level module.
Double buffering is also known as buffer swapping.
A major disadvantage of double buffering is that it increases the complexity of
the process.
If the process performs rapid bursts of I/O, then double buffering may be
insufficient.
Stream-oriented –
For line-at-a-time I/O, the user process need not be suspended for input or
output unless the process runs ahead of the double buffer.
For byte-at-a-time operations, the double buffer offers no advantage over a
single buffer of twice the length.
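The alternation just described can be sketched sequentially. This is a simplified simulation of buffer swapping, not a real driver: an actual implementation overlaps filling and draining in time using interrupts or DMA, whereas here the swap happens in order within one loop.

```python
def double_buffered(stream):
    """Sequential sketch of buffer swapping: the 'device' fills the back
    buffer while the consumer drains the front one, then the roles swap."""
    front, back = None, None
    consumed = []
    for block in stream:
        back = block                  # driver fills the back buffer...
        if front is not None:
            consumed.append(front)    # ...while the consumer drains the front
        front, back = back, None      # swap: back becomes the new front
    if front is not None:
        consumed.append(front)        # drain whatever is left at end of stream
    return consumed

print(double_buffered(["block-0", "block-1", "block-2"]))
# ['block-0', 'block-1', 'block-2'] -- order is preserved across swaps
```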
3. Circular buffer :
When more than two buffers are used, the collection of buffers is itself referred
to as a circular buffer.
Here, the data is not passed directly from the producer to the consumer,
because the data could be changed by buffers being overwritten before they
had been consumed.
The producer can only fill up to buffer i-1 while the data in buffer i is waiting to
be consumed.
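The i-1 rule above is exactly what the modulo arithmetic of a ring buffer enforces: with n slots, at most n-1 may hold unconsumed data, so that the "full" and "empty" states stay distinguishable. A minimal single-threaded sketch (the slot count and item values are arbitrary assumptions):

```python
class CircularBuffer:
    def __init__(self, n):
        self.buf = [None] * n
        self.head = 0   # next slot the consumer reads (buffer i)
        self.tail = 0   # next slot the producer writes

    def is_empty(self):
        return self.head == self.tail

    def is_full(self):
        # Producer may only fill up to buffer i-1.
        return (self.tail + 1) % len(self.buf) == self.head

    def put(self, item):
        if self.is_full():
            raise BufferError("producer must wait: buffer full")
        self.buf[self.tail] = item
        self.tail = (self.tail + 1) % len(self.buf)

    def get(self):
        if self.is_empty():
            raise BufferError("consumer must wait: buffer empty")
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item

rb = CircularBuffer(4)          # 4 slots hold at most 3 unconsumed items
for x in (10, 20, 30):
    rb.put(x)
print(rb.is_full())             # True
print(rb.get(), rb.get())       # 10 20 -- FIFO order is preserved
```

A real device driver would add locking or interrupt masking around put and get; the index arithmetic stays the same.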
Swapping in Operating System
Swapping is a memory management scheme in which any process
can be temporarily swapped from main memory to secondary
memory so that the main memory can be made available for
other processes. It is used to improve main memory utilization. In
secondary memory, the place where the swapped-out process is
stored is called swap space.
The purpose of swapping in an operating system is to access the
data present on the hard disk and bring it into RAM so that the
application programs can use it. The thing to remember is that
swapping is used only when the data is not present in RAM.
Although the process of swapping affects the performance of the
system, it helps to run larger processes, and more than one of
them. This is the reason why swapping is also referred to as a
technique for memory compaction.
The concept of swapping is divided into two further concepts:
swap-in and swap-out.
o Swap-out is the method of removing a process from RAM and
adding it to the hard disk.
o Swap-in is the method of removing a program from the hard
disk and putting it back into main memory (RAM).
Example: Suppose the user process's size is 2048 Kb and it is
swapped to a standard hard disk with a data transfer rate of
1 Mbps. Let us calculate how long it will take to transfer the
process from main memory to secondary memory.
1. User process size is 2048 Kb
2. Data transfer rate is 1 Mbps = 1024 Kbps
3. Time = process size / transfer rate
4. = 2048 / 1024
5. = 2 seconds
6. = 2000 milliseconds
7. Taking both swap-out and swap-in time into account, the
process will take 4000 milliseconds.
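The arithmetic above can be checked in a few lines, using the example's own convention that 1 Mbps equals 1024 Kbps:

```python
process_size_kb = 2048      # user process size from the example
transfer_rate_kbps = 1024   # 1 Mbps, per the example's convention

one_way_seconds = process_size_kb / transfer_rate_kbps
round_trip_ms = 2 * one_way_seconds * 1000   # swap-out plus swap-in

print(one_way_seconds)   # 2.0
print(round_trip_ms)     # 4000.0
```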
Advantages of Swapping
1. It helps the CPU to manage multiple processes within a
single main memory.
2. It helps to create and use virtual memory.
3. Swapping allows the CPU to perform multiple tasks
simultaneously. Therefore, processes do not have to wait
very long before they are executed.
4. It improves main memory utilization.
Disadvantages of Swapping
1. If the computer system loses power during substantial
swapping activity, the user may lose all information related
to the program.
2. If the swapping algorithm is not good, the method can
increase the number of page faults and decrease the overall
processing performance.