Overview of Operating Systems Functions
CHAPTER ONE
Overview of Operating Systems
1. What is an Operating System?
In a computer system, we find four main components: the hardware, the operating system, the application programs, and the users. The hardware provides the basic computing resources. The application programs define the way in which these resources are used to solve the computing problems of the users. The operating system controls and coordinates the use of the hardware among the various system programs and application programs for the various users.
We can view an operating system as a resource allocator. A computer system has many resources (hardware and software) that may be required to solve a problem: CPU time, memory space, file storage space, input/output devices, etc. The operating system acts as the manager of these resources and allocates them to specific programs and users as necessary for their tasks. Since there may be many, possibly conflicting, requests for resources, the operating system must decide which requests to grant so that it can operate the computer system fairly and efficiently.
An operating system is a control program. This program controls the execution of user programs to prevent errors and improper use of the computer. Operating systems exist because they are a reasonable way to solve the problem of creating a usable computing system. The fundamental goal of a computer system is to execute user programs and solve user problems.
While there is no universally agreed upon definition of the concept of an operating system, the
following is a reasonable starting point:
A computer’s operating system is a group of programs designed to serve two basic purposes:
To control the allocation and use of the computing system’s resources among the various
users and tasks, and
To provide an interface between the computer hardware and the programmer that simplifies
and makes feasible the creation, coding, debugging, and maintenance of application
programs.
An effective operating system should accomplish the following functions:
Act as a command interpreter by providing a user-friendly environment.
Facilitate communication with other users.
Facilitate directory and file creation, along with security options.
Provide routines that handle the intricate details of I/O programming.
Provide access to compilers to translate programs from high-level languages to machine language.
Provide a loader program to move the compiled program code to the computer's memory for execution.
Assure that when there are several active processes in the computer, each will get fair and non-interfering access to the central processing unit for execution.
Take care of storage and device allocation.
Provide for long term storage of user information in the form of files.
Permit system resources to be shared among users when appropriate, and be protected from
unauthorized or mischievous intervention as necessary.
Though system programs such as editors and translators and the various utility programs (such as sort and file-transfer programs) are not usually considered part of the operating system, the operating system is responsible for providing access to these system resources.
2. Functions of Operating System
The main functions of an operating system are as follows:
Process Management
Memory Management
Secondary Storage Management
I/O Management
File Management
Protection
Networking Management
Command Interpretation.
Extending the machine
2.1. Process Management
The CPU executes a large number of programs. While its main concern is the execution of user programs, the CPU is also needed for other system activities. These activities are called processes. A process is a program in execution. Typically, a batch job is a process. A time-shared user program is a process. A system task, such as spooling, is also a process. For now, a process may be considered as a job or a time-shared program, but the concept is actually more general.
The operating system is responsible for the following activities in connection with process management:
The creation and deletion of both user and system processes
The suspension and resumption of processes.
The provision of mechanisms for process synchronization
The provision of mechanisms for deadlock handling.
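The flavour of these services can be seen through the process interface of a Unix-like system. The short C sketch below is illustrative only, assuming a POSIX environment: fork() creates a new process, exec runs a program in it, and the parent suspends in waitpid() until the child terminates, touching on creation, suspension/resumption, and deletion.

/* process.c - minimal sketch of POSIX process creation (Unix-like OS assumed) */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* create a new process */
    if (pid < 0) {                   /* creation failed */
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {                  /* child: run a program, then terminate */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(127);
    }
    int status;
    waitpid(pid, &status, 0);        /* parent suspends until child terminates */
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}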
2.2. Memory Management
Memory is one of the most expensive resources in a computer system. Memory is a large array of words or bytes, each with its own address. Interaction is achieved through a sequence of reads from and writes to specific memory addresses. The CPU fetches from and stores in memory.
There are various algorithms that depend on the particular situation to manage the memory.
Selection of a memory management scheme for a specific system depends upon many factors,
but especially upon the hardware design of the system. Each algorithm requires its own hardware
support.
The operating system is responsible for the following activities in connection with memory
management.
Keep track of which parts of memory are currently being used and by whom.
Decide which processes are to be loaded into memory when memory space becomes
available.
Allocate and deallocate memory space as needed.
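As a rough illustration of this bookkeeping, the toy C program below (all names hypothetical, not from the text) tracks which fixed-size blocks of a simulated memory pool are in use and by whom, and allocates free blocks first-fit; a real memory manager is, of course, far more elaborate.

/* memsim.c - toy first-fit allocator over fixed-size blocks (illustrative only) */
#include <stdio.h>
#include <stdbool.h>

#define NBLOCKS 64

static bool in_use[NBLOCKS];          /* which parts of "memory" are used */
static int  owner[NBLOCKS];           /* and by whom (process id) */

/* allocate n contiguous blocks for process pid; return first block or -1 */
int mem_alloc(int pid, int n)
{
    for (int start = 0; start + n <= NBLOCKS; start++) {
        int len = 0;
        while (len < n && !in_use[start + len])
            len++;
        if (len == n) {                       /* first hole big enough */
            for (int i = 0; i < n; i++) {
                in_use[start + i] = true;
                owner[start + i]  = pid;
            }
            return start;
        }
    }
    return -1;                                /* no hole: caller must wait */
}

/* deallocate: release every block owned by pid */
void mem_free(int pid)
{
    for (int i = 0; i < NBLOCKS; i++)
        if (in_use[i] && owner[i] == pid)
            in_use[i] = false;
}

int main(void)
{
    printf("P1 gets block %d\n", mem_alloc(1, 8));
    printf("P2 gets block %d\n", mem_alloc(2, 4));
    mem_free(1);                              /* P1 terminates */
    printf("P3 gets block %d\n", mem_alloc(3, 6)); /* reuses P1's hole */
    return 0;
}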
2.3. Secondary Storage Management
The main purpose of a computer system is to execute programs. These programs, together with the data they access, must be in main memory during execution. Since main memory is too small to permanently accommodate all data and programs, the computer system must provide secondary storage to back up main memory. Most modern computer systems use disks as the primary on-line storage for information, both programs and data. Most programs, like compilers, assemblers, sort routines, editors, formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as both the source and destination of their processing.
Hence the proper management of disk storage is of central importance to a computer system.
There are few alternatives. Magnetic tape systems are generally too slow. In addition, they are
limited to sequential access. Thus tapes are more suited for storing infrequently used files, where
speed is not a primary concern.
The operating system is responsible for the following activities in connection with disk
management:
Free space management
Storage allocation
Disk scheduling.
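Disk scheduling, the last activity above, can be made concrete with a small sketch. The C program below (queue contents and head position are illustrative values, not from the text) services a queue of cylinder requests in shortest-seek-time-first (SSTF) order, one common scheduling policy.

/* sstf.c - shortest-seek-time-first disk scheduling (illustrative sketch) */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 }; /* pending cylinders */
    int n = sizeof queue / sizeof queue[0];
    bool done[8] = { false };
    int head = 53, total = 0;                 /* current head position */

    for (int served = 0; served < n; served++) {
        int best = -1, bestdist = 0;
        for (int i = 0; i < n; i++) {         /* pick the nearest pending request */
            if (done[i]) continue;
            int dist = abs(queue[i] - head);
            if (best < 0 || dist < bestdist) { best = i; bestdist = dist; }
        }
        done[best] = true;
        total += bestdist;
        head = queue[best];
        printf("service cylinder %d (seek %d)\n", head, bestdist);
    }
    printf("total head movement: %d cylinders\n", total);
    return 0;
}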
2.4. I/O Management
One of the purposes of an operating system is to hide the peculiarities of specific hardware devices from the user. For example, in UNIX, the peculiarities of I/O devices are hidden from the bulk of the operating system itself by the I/O system. The operating system is responsible for the following activities in connection with I/O management:
Providing a buffer-caching system
Providing general device-driver code
Running the driver software for specific hardware devices as and when required.
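The "general device driver code" mentioned above is essentially a uniform interface that every driver implements. A hedged sketch in C: the struct and names below are hypothetical, but they mirror the way Unix drivers supply open/read/close entry points that the rest of the OS calls without knowing which device is behind them.

/* driver.c - sketch of a uniform device-driver interface (names illustrative) */
#include <stdio.h>
#include <stddef.h>

/* operations every driver must provide; the OS calls through this table */
struct device_driver {
    const char *name;
    int  (*open)(void);
    long (*read)(char *buf, size_t len);
    void (*close)(void);
};

/* one concrete "device": a console that always reads a fixed string */
static int  console_open(void)  { puts("console: opened"); return 0; }
static long console_read(char *buf, size_t len)
{
    return (long)snprintf(buf, len, "hello from console\n");
}
static void console_close(void) { puts("console: closed"); }

static struct device_driver console = {
    "console", console_open, console_read, console_close
};

/* device-independent I/O: the caller never sees which hardware is behind it */
void copy_to_stdout(struct device_driver *dev)
{
    char buf[128];
    if (dev->open() == 0) {
        long n = dev->read(buf, sizeof buf);
        fwrite(buf, 1, (size_t)n, stdout);
        dev->close();
    }
}

int main(void) { copy_to_stdout(&console); return 0; }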
2.5. File Management
File management is one of the most visible services of an operating system. Computers can store
information in several different physical forms: magnetic tape, disk, and drum are the most
common forms. Each of these devices has its own characteristics and physical organization.
For convenient use of the computer system, the operating system provides a uniform logical
view of information storage. The operating system abstracts from the physical properties of its
storage devices to define a logical storage unit, the file. Files are mapped, by the operating
system, onto physical devices.
A file is a collection of related information defined by its creator. Commonly, files represent
programs (both source and object forms) and data. Data files may be numeric, alphabetic or
alphanumeric. Files may be free-form, such as text files, or may be rigidly formatted. In general, a file is a sequence of bits, bytes, lines, or records whose meaning is defined by its creator and user. It is a very general concept.
The operating system implements the abstract concept of the file by managing mass storage devices, such as tapes and disks. Files are normally organized into directories to ease their use. Finally, when multiple users have access to files, it may be desirable to control by whom and in what ways files may be accessed.
The operating system is responsible for the following activities in connection with file management:
The creation and deletion of files and directories.
The support of primitives for manipulating files and directories.
The mapping of files onto disk storage.
Backup of files on stable (nonvolatile) storage.
Protection and security of the files.
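On a Unix-like system, these activities surface as a handful of system calls. A minimal sketch (POSIX assumed; file names are illustrative): create a directory and a file, write to the file, read it back, and delete both.

/* files.c - minimal sketch of POSIX file and directory primitives */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    mkdir("demo", 0755);                              /* directory creation */

    int fd = open("demo/note.txt", O_CREAT | O_WRONLY, 0644);  /* file creation */
    write(fd, "hello, file\n", 12);                   /* manipulation primitive */
    close(fd);

    char buf[64];
    fd = open("demo/note.txt", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof buf - 1);
    close(fd);
    buf[n > 0 ? n : 0] = '\0';
    printf("read back: %s", buf);

    unlink("demo/note.txt");                          /* file deletion */
    rmdir("demo");                                    /* directory deletion */
    return 0;
}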
2.6. Protection
The various processes in an operating system must be protected from each other's activities. For that purpose, the operating system provides various mechanisms to ensure that the files, memory segments, CPU, and other resources can be operated on only by those processes that have gained proper authorization from the operating system.
For example, memory addressing hardware ensures that a process can only execute within its
own address space. The timer ensures that no process can gain control of the CPU without
relinquishing it. Finally, no process is allowed to do its own I/O, to protect the integrity of the
various peripheral devices. Protection refers to a mechanism for controlling the access of programs, processes, or users to the resources defined by a computer system. This mechanism must provide a means for specifying the controls to be imposed, together with some means of enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can often prevent contamination of a healthy
subsystem by a subsystem that is malfunctioning. An unprotected resource cannot defend against
use (or misuse) by an unauthorized or incompetent user.
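For files, Unix-like systems enforce such authorization through permission bits. In the small POSIX sketch below (file name illustrative), the program restricts a file to owner-read-only and then asks the operating system, via access(), whether writing would be permitted; the kernel, not the program, makes the decision.

/* protect.c - sketch of file protection via POSIX permission bits */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("secret.txt", O_CREAT | O_WRONLY, 0600);  /* owner-only */
    close(fd);

    chmod("secret.txt", 0400);           /* owner read-only: revoke writing */

    /* the kernel, not the program, decides whether access is authorized
       (note: a privileged user such as root may still be granted access) */
    if (access("secret.txt", W_OK) == 0)
        printf("write access granted\n");
    else
        printf("write access denied by the OS\n");

    unlink("secret.txt");
    return 0;
}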
2.7. Networking
A distributed system is a collection of processors that do not share memory or a clock. Instead,
each processor has its own local memory, and the processors communicate with each other
through various communication lines, such as high speed buses or telephone lines. Distributed
systems vary in size and function. They may involve microprocessors, workstations,
minicomputers, and large general purpose computer systems.
The processors in the system are connected through a communication network, which can be configured in a number of different ways. The network may be fully or partially connected. The communication network design must consider routing and connection strategies, and the problems of contention and security. A distributed system provides the user with access to the
various resources the system maintains. Access to a shared resource allows computation speed-
up, data availability, and reliability.
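The communication itself is typically exposed to programs through sockets. A minimal sketch (POSIX sockets; the address 192.0.2.1 and port 9000 are placeholders, not from the text): one machine opens a TCP connection to another and sends a message over the "communication line".

/* netsend.c - sketch: send one message to another machine (POSIX sockets) */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* a TCP endpoint */

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(9000);              /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr); /* placeholder address */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) == 0) {
        const char *msg = "hello from a peer\n";
        write(fd, msg, strlen(msg));            /* the "communication line" */
    } else {
        perror("connect");
    }
    close(fd);
    return 0;
}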
2.8. Command Interpretation
One of the most important components of an operating system is its command interpreter. The
command interpreter is the primary interface between the user and the rest of the system.
Many commands are given to the operating system by control statements. When a new job is
started in a batch system or when a user logs in to a time-shared system, a program which reads
and interprets control statements is automatically executed. This program is variously called (1)
the control card interpreter, (2) the command line interpreter, (3) the shell (in Unix), and so on.
Its function is quite simple: get the next command statement, and execute it.
The command statements themselves deal with process management, I/O handling, secondary
storage management, main memory management, file system access, protection, and networking.
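This get-and-execute cycle is easy to sketch. The toy command interpreter below (Unix-like system assumed; it accepts single-word commands only, for brevity) reads a line, forks a child, and lets the child exec the command while the shell waits.

/* minishell.c - a toy command interpreter: read a command, run it, repeat */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];
    for (;;) {
        printf("mini$ ");                       /* prompt */
        if (!fgets(line, sizeof line, stdin))   /* get the next command */
            break;                              /* EOF: leave the shell */
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0') continue;
        if (strcmp(line, "exit") == 0) break;

        pid_t pid = fork();                     /* execute it in a child */
        if (pid == 0) {
            execlp(line, line, (char *)NULL);   /* no arguments, for brevity */
            perror(line);
            _exit(127);
        }
        waitpid(pid, NULL, 0);                  /* wait, then prompt again */
    }
    return 0;
}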
2.9. Extending the machine
The operating system hides the internal complications of the hardware and presents the user with a view of the machine that is simpler and easier to use.
E.g., files instead of physical disk addresses.
3. Goals of Operating System
The primary objective of a computer system is to execute instructions efficiently and to increase the productivity of the processing resources attached to it, such as hardware resources, software resources, and the users. In other words, we can say that maximum CPU utilization is the main objective, because the CPU is the main device used for the execution of programs or instructions. In brief, the goals are:
The primary goal of an operating system is to make the computer convenient to use.
The secondary goal is to use the hardware in an efficient manner.
4. Computer System Components
Hardware - provides basic computing resources (CPU, memory, I/O devices).
Operating system - controls and coordinates the use of the hardware among the various
application programs for the various users.
Applications programs - define the ways in which the system resources are used to solve the
computing problems of the users (compilers, database systems, video games, business
programs).
Users- (people, machines, other computers).
The abstract view of the components of a computer system and the positioning of OS is shown in
the Figure 1.
[Figure 1: abstract view of the components of a computer system (figure not reproduced).]
5. History of Operating Systems
5.1. Zeroth Generation (1940s)
The actual operation of these early computers took place without the benefit of an operating
system. Early programs were written in machine language and each contained code for initiating
operation of the computer itself.
The mode of operation was called "open shop": users signed up for computer time, and when a user's time arrived, the entire (in those days quite large) computer system was turned over to the user. The individual user (programmer) was responsible for all machine set-up and operation, and for subsequent clean-up and preparation for the next user. This system was clearly inefficient and dependent on the varying competencies of individual programmers as operators.
5.2. First Generation (1951-1956)
The first generation marked the beginning of commercial computing, including the introduction
of Eckert and Mauchly’s UNIVAC I in early 1951, and a bit later, the IBM 701 which was also
known as the Defence Calculator. The first generation was characterized again by the vacuum
tube as the active component technology.
Operation continued without the benefit of an operating system for a time. The mode was called
"closed shop" and was characterized by the appearance of hired operators who would select the job to be run, perform the initial program load of the system, run the user's program, and then select another job, and so forth. Programs began to be written in higher level, procedure-oriented languages,
and thus the operator’s routine expanded. The operator now selected a job, ran the translation
program to assemble or compile the source program, and combined the translated object program
along with any existing library programs that the program might need for input to the linking
program, loaded and ran the composite linked program, and then handled the next job in a
similar fashion.
Application programs were run one at a time, and were translated with absolute computer addresses that bound them to be loaded and run from these preassigned storage addresses set by the translator, obtaining their data from specific physical I/O devices. There was no provision for moving a program to a different location in storage for any reason. Similarly, a program bound to specific devices could not be run at all if any of these devices was busy or broken.
The inefficiencies inherent in the above methods of operation led to the development of the mono-programmed operating system, which eliminated some of the human intervention in running jobs and provided programmers with a number of desirable functions. The OS consisted of a permanently resident kernel in main storage, and a job scheduler and a number of utility programs kept in secondary storage. User application programs were preceded by control or specification cards (in those days, computer programs were submitted on data cards) which informed the OS of what system resources (software resources such as compilers and loaders; and hardware resources such as tape drives and printers) were needed to run a particular application. The systems were designed to be operated as batch processing systems.
These systems continued to operate under the control of a human operator who initiated
operation by mounting a magnetic tape that contained the operating system executable code onto
a “boot device”, and then pushing the IPL (Initial Program Load) or “boot” button to initiate the
bootstrap loading of the operating system. Once the system was loaded, the operator entered the
date and time, and then initiated the operation of the job scheduler program which read and
interpreted the control statements, secured the needed resources, executed the first user program,
recorded timing and accounting information, and then went back to begin processing of another
user program, and so on, as long as there were programs waiting in the input queue to be
executed.
The first generation saw the evolution from hands-on operation to closed shop operation to the
development of mono-programmed operating systems. At the same time, the development of
programming languages was moving away from the basic machine languages; first to assembly
language, and later to procedure oriented languages, the most significant being the development
of FORTRAN by John W. Backus in 1956. Several problems remained, however. The most obvious was the inefficient use of system resources, which was most evident when the CPU waited while the relatively slower, mechanical I/O devices were reading or writing program data.
In addition, system protection was a problem because the operating system kernel was not
protected from being overwritten by an erroneous application program. Moreover, other user
programs in the queue were not protected from destruction by executing programs.
5.3. Second Generation (1956-1964)
The second generation of computer hardware was most notably characterized by transistors
replacing vacuum tubes as the hardware component technology. In addition, some very
important changes in hardware and software architectures occurred during this period. For the
most part, computer systems remained card and tape-oriented systems. Significant use of random
access devices, that is, disks, did not appear until towards the end of the second generation.
Program processing was, for the most part, provided by large centralized computers operated
under mono-programmed batch processing operating systems.
The most significant innovations addressed the problem of excessive central processor delay due
to waiting for input/output operations. Recall that programs were executed by processing the
machine instructions in a strictly sequential order. As a result, the CPU, with its high-speed electronic components, was often forced to wait for completion of I/O operations which involved mechanical devices (card readers and tape drives) that were orders of magnitude slower. This problem led to the introduction, near the end of the first generation, of the data channel: an integral, special-purpose computer with its own instruction set, registers, and control unit, designed to process input/output operations separately and asynchronously from the operation of the computer's main CPU. The data channel saw widespread adoption in the second generation.
The data channel allowed some I/O to be buffered. That is, a program’s input data could be read
“ahead” from data cards or tape into a special block of memory called a buffer. Then, when the
user’s program came to an input statement, the data could be transferred from the buffer
locations at the faster main memory access speed rather than the slower I/O device speed.
Similarly, a program's output could be written to another buffer and later moved from the buffer to the printer, tape, or card punch. What made this all work was the data channel's ability to work asynchronously and concurrently with the main processor. Thus, the slower mechanical I/O
could be happening concurrently with main program processing. This process was called I/O
overlap.
The data channel was controlled by a channel program set up by the operating system I/O control
routines and initiated by a special instruction executed by the CPU. Then, the channel
independently processed data to or from the buffer. This provided communication from the CPU
to the data channel to initiate an I/O operation. It remained for the channel to communicate to the
CPU such events as data errors and the completion of a transmission. At first, this communication was handled by polling: the CPU stopped its work periodically and polled the channel to determine if there was any message.
Polling was obviously inefficient (imagine stopping your work periodically to go to the post
office to see if an expected letter has arrived) and led to another significant innovation of the
second generation – the interrupt. The data channel was able to interrupt the CPU with a message
– usually "I/O complete." In fact, the interrupt idea was later extended from I/O to allow signaling of a number of exceptional conditions such as arithmetic overflow, division by zero, and time-run-out. Of course, interval clocks were added in conjunction with the latter, and thus the operating system came to have a way of regaining control from an exceptionally long or indefinitely looping program.
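The contrast between the two styles can be sketched in ordinary C, with a timer signal standing in for the hardware interrupt: the handler below arrives asynchronously, like the data channel's "I/O complete" interrupt, while the main program keeps computing instead of periodically stopping to poll. This is an analogy only, not how a kernel is written.

/* interrupt.c - SIGALRM stands in for a hardware "I/O complete" interrupt */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t io_done = 0;

static void on_interrupt(int sig)   /* the "interrupt handler" */
{
    (void)sig;
    io_done = 1;                    /* announce completion asynchronously */
}

int main(void)
{
    signal(SIGALRM, on_interrupt);
    alarm(2);                       /* the "device" finishes in ~2 seconds */

    /* the CPU does useful work until the interrupt arrives; it never has
       to stop and query the device the way polling would require */
    unsigned long work = 0;
    while (!io_done)
        work++;                     /* main program processing continues */

    printf("I/O complete; did %lu units of work meanwhile\n", work);
    return 0;
}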
These hardware developments led to enhancements of the operating system. I/O and data channel communication and control became functions of the operating system, both to relieve the application programmer from the difficult details of I/O programming and to protect the integrity of the system. The operating system was also enhanced to provide improved service to users by segmenting jobs and running shorter jobs first (during "prime time") and relegating longer jobs to lower priority or night-time runs. System libraries became more widely available and more comprehensive as new utilities and application software components became available to programmers.
In order to further mitigate the I/O wait problem, systems were set up to spool the input batch from slower I/O devices, such as the card reader, to the much higher speed tape drive, and similarly to spool the output from the higher speed tape to the slower printer. In this scenario, the user submitted a job at a window, a batch of jobs was accumulated and spooled from cards to tape "off line," the tape was moved to the main computer, the jobs were run, and their output was collected on another tape that was later taken to a satellite computer for off-line tape-to-printer output. Users then picked up their output at the submission window.
Toward the end of this period, as random access devices became available, tape-oriented operating systems began to be replaced by disk-oriented systems. With the more sophisticated disk hardware and the operating system supporting a greater portion of the programmer's work, the computer system that users saw was more and more removed from the actual hardware: users saw a virtual machine.
The second generation was a period of intense operating system development. Also it was the
period for sequential batch processing. But the sequential processing of one job at a time
remained a significant limitation. Thus, there continued to be low CPU utilization for I/O bound
jobs and low I/O device utilization for CPU bound jobs. This was a major concern, since
computers were still very large (room-size) and expensive machines. Researchers began to experiment with multiprogramming and multiprocessing in their computing services, which came to be called time-sharing systems. A noteworthy example is the Compatible Time Sharing System (CTSS), developed at MIT during the early 1960s.
5.4. Third Generation (1964-1979)
The third generation officially began in April 1964 with IBM’s announcement of its System/360
family of computers. Hardware technology began to use integrated circuits (ICs) which yielded
significant advantages in both speed and economy.
Operating system development continued with the introduction and widespread adoption of multiprogramming. This was marked first by the appearance of more sophisticated I/O buffering in the form of spooling operating systems, such as the HASP (Houston Automatic Spooling) system that accompanied the IBM OS/360 system. These systems worked by introducing two new system programs: a system reader to move input jobs from cards to disk, and a system writer to move job output from disk to printer, tape, or cards. Operation of the spooling system was, as before, transparent to the computer user, who perceived input as coming directly from the cards and output as going directly to the printer.
The idea of taking fuller advantage of the computer's data channel I/O capabilities continued to develop. That is, designers recognized that I/O needed only to be initiated by a CPU instruction; the actual I/O data transmission could take place under control of a separate and asynchronously operating channel program. Thus, by switching control of the CPU between the currently executing user program, the system reader program, and the system writer program, it was possible to keep the slower mechanical I/O devices running and to minimize the amount of time the CPU spent waiting for I/O completion. The net result was an increase in system throughput and resource utilization, to the benefit of both users and providers of computer services.
This concurrent operation of three programs (more properly, apparent concurrent operations,
since systems had only one CPU, and could, therefore execute just one instruction at a time)
required that additional features and complexity be added to the operating system. First, the fact
that the input queue was now on disk, a direct access device, freed the system scheduler from the
first-come-first-served policy so that it could select the “best” next job to enter the system
(looking for either the shortest job or the highest priority job in the queue). Second, since the
CPU was to be shared by the user program, the system reader, and the system writer, some
processor allocation rule or policy was needed. Since the goal of spooling was to increase
resource utilization by enabling the slower I/O devices to run asynchronously with user program
processing, and since I/O processing required the CPU only for short periods to initiate data
channel instructions, the CPU was dispatched to the reader, the writer, and the program in that
order. Moreover, if the writer or the user program was executing when something became
available to read, the reader program would preempt the currently executing program to regain
control of the CPU for its initiation instruction, and the writer program would preempt the user
program for the same purpose. This rule, called the static priority rule with preemption, was
implemented in the operating system as a system dispatcher program.
The spooling operating system in fact had multiprogramming, since more than one program was resident in main storage at the same time. Later this basic idea of multiprogramming was extended to include more than one active user program in memory at a time. To accommodate this extension, both the scheduler and the dispatcher were enhanced. The scheduler became able to manage the diverse resource needs of the several concurrently active user programs, and the dispatcher included policies for allocating processor resources among the competing user programs. In addition, memory management became more sophisticated in order to assure that the program code for each job, or at least the part of the code being executed, was resident in main storage.
The advent of large-scale multiprogramming was made possible by several important hardware
innovations such as:
The widespread availability of large capacity, high-speed disk units to accommodate the
spooled input streams and the memory overflow together with the maintenance of several
concurrently active programs in execution.
Relocation hardware which facilitated the moving of blocks of code within memory without
any undue overhead penalty.
The availability of storage protection hardware to ensure that user jobs are protected from
one another and that the operating system itself is protected from user programs.
Some of these hardware innovations involved extensions to the interrupt system in order to
handle a variety of external conditions such as program malfunctions, storage protection
violations, and machine checks in addition to I/O interrupts. In addition, the interrupt system
became the technique for the user program to request services from the operating system
kernel.
The advent of privileged instructions allowed the operating system to maintain coordination
and control over the multiple activities now going on within the system.
Successful implementation of multiprogramming opened the way for the development of a new way of delivering computing services: time-sharing. In this environment, several terminals, sometimes up to 200 of them, were attached (hard-wired or via telephone lines) to a central computer. Users at their terminals "logged in" to the central system and worked interactively with the system. The system's apparent concurrency was enabled by the multiprogramming
operating system. Users shared not only the system hardware but also its software resources and
file system disk space.
The third generation was an exciting time, indeed, for the development of both computer
hardware and the accompanying operating system. During this period, the topic of operating
systems became, in reality, a major element of the discipline of computing.
5.5. Fourth Generation (1979 – Present)
The fourth generation is characterized by the appearance of the personal computer and the
workstation. Miniaturization of electronic circuits and components continued and large scale
integration (LSI), the component technology of the third generation, was replaced by very large
scale integration (VLSI), which characterizes the fourth generation. VLSI, with its capacity for containing thousands of transistors on a small chip, made possible the development of desktop computers with capabilities exceeding those of the computers that filled entire rooms and floors of buildings just twenty years earlier.
The operating systems that control these desktop machines have brought us back full circle, to the open-shop type of environment where each user occupies an entire computer for the duration of a job's execution. This works better now, in part because progress made over the years has made the virtual computer resulting from the operating system/hardware combination so much easier to use or, in the words of the popular press, "user-friendly." Moreover, improvements in hardware miniaturization and technology have evolved so fast that we now have inexpensive workstation-class computers capable of supporting multiprogramming and time-sharing. Hence the operating systems that support today's personal computers and workstations look much like those which were available for the minicomputers of the third generation. Examples are Microsoft's DOS for IBM-compatible personal computers and UNIX for workstations.
However, many of these desktop computers are now connected as networked or distributed
systems. Computers in a networked system each have their operating systems augmented with
communication capabilities that enable users to remotely log into any system on the network and
transfer information among machines that are connected to the network. The machines that make up a distributed system operate as a virtual single-processor system from the user's point of view; a central operating system controls and makes transparent the location, within the system, of the particular processor or processors and file systems that are handling any given program.
6. Types of Operating Systems
Modern computer operating systems may be classified into three broad groups, which are distinguished by the nature of the interaction that takes place between the computer user and his or her program during its processing. The three groups are called batch, time-sharing, and real-time operating systems. Several related types (multiprogramming, multiprocessing, and network operating systems) are also described below.
6.1. Batch Processing Operating System
In a batch processing operating system environment users submit jobs to a central place where
these jobs are collected into a batch, and subsequently placed on an input queue at the computer
where they will be run. In this case, the user has no interaction with the job during its processing, and the computer's response time is the turnaround time: the time from submission of the job until execution is complete and the results are ready to be returned to the person who submitted the job.
6.2. Time Sharing
Another mode for delivering computing services is provided by time sharing operating systems.
In this environment a computer provides computing services to several or many users
concurrently on-line. Here, the various users are sharing the central processor, the memory, and
other resources of the computer system in a manner facilitated, controlled, and monitored by the
operating system. The user, in this environment, has nearly full interaction with the program
during its execution, and the computer's response time may be expected to be no more than a few seconds.
6.3. Real Time Operating System (RTOS)
The third class is the real time operating systems, which are designed to service those
applications where response time is of the essence in order to prevent error, misrepresentation or
even disaster. Examples of real time operating systems are those which handle airline reservations, machine tool control, and monitoring of a nuclear power station. The systems, in this
this case, are designed to be interrupted by external signals that require the immediate attention
of the computer system.
These real time operating systems are used to control machinery, scientific instruments and
industrial systems. An RTOS typically has very little user-interface capability, and no end-user
utilities. A very important part of an RTOS is managing the resources of the computer so that a
particular operation executes in precisely the same amount of time every time it occurs. In a
complex machine, having a part move more quickly just because system resources are available
may be just as catastrophic as having it not move at all because the system is busy.
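Such fixed timing is usually achieved by scheduling work against an absolute clock rather than sleeping for relative intervals. A hedged sketch using POSIX timing calls (an ordinary OS gives no hard guarantee here; a true RTOS does): the loop below runs a task on a fixed 10 ms period, sleeping to an absolute deadline so that drift does not accumulate.

/* periodic.c - sketch: run a task at a fixed 10 ms period (POSIX, not a real RTOS) */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int cycle = 0; cycle < 5; cycle++) {
        /* ... control-loop work would go here ... */
        printf("cycle %d\n", cycle);

        next.tv_nsec += 10 * 1000 * 1000;        /* next deadline = +10 ms */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec  += 1;
        }
        /* sleep until the absolute deadline, so drift does not accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}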
6.4. Multiprogramming Operating System
A multiprogramming operating system is a system that allows more than one active user program
(or part of user program) to be stored in main memory simultaneously. Thus, it is evident that a
time-sharing system is a multiprogramming system, but note that a multiprogramming system is
not necessarily a time-sharing system. A batch or real time operating system could, and indeed
usually does, have more than one active user program simultaneously in main storage.
6.5. Multiprocessing System
A multiprocessing system is a computer hardware configuration that includes more than one
independent processing unit. The term multiprocessing is generally used to refer to large
computer hardware complexes found in major scientific or commercial applications.
Such a system has multiple CPUs in one machine; its operating system is essentially a variation on a server OS, with special features for communication and connectivity.
6.6. Networking Operating System
A networked computing system is a collection of physically interconnected computers. The operating system of each of the interconnected computers must contain, in addition to its own stand-alone functionality, provisions for handling communication and the transfer of programs and data among the other computers with which it is connected.
Network operating systems are not fundamentally different from single processor operating
systems. They obviously need a network interface controller and some low-level software to
drive it, as well as programs to achieve remote login and remote file access, but these additions
do not change the essential structure of the operating systems.
8.3. WINDOWS
This networking support is the reason why Windows became successful in the first place. However, Windows 95, 98, ME, NT, 2000, and XP are complicated operating environments. Certain combinations of hardware and software running together can cause problems, and troubleshooting can be daunting. Each new version of Windows has interface changes that constantly confuse users and keep support people busy, and installing Windows applications can be problematic too. Microsoft has worked hard to make Windows 2000 and Windows XP more resilient to installation problems and to crashes in general.
8.4. MACINTOSH
The Macintosh (often called "the Mac"), introduced in 1984 by Apple Computer, was the first widely sold personal computer with a graphical user interface (GUI). The Mac was designed to
provide users with a natural, intuitively understandable, and, in general, “user-friendly”
computer interface. This includes the mouse, the use of icons or small visual images to represent
objects or actions, the point-and-click and click-and-drag actions, and a number of window
operation ideas. Microsoft was successful in adapting user interface concepts first made popular
by the Mac in its first Windows operating system. The primary disadvantage of the Mac is that
there are fewer Mac applications on the market than for Windows. However, all the fundamental
applications are available, and the Macintosh is a perfectly useful machine for almost everybody.
Data compatibility between Windows and Mac is an issue, although it is often overblown and
readily solved.
The Macintosh has its own operating system, Mac OS which, in its latest version is called Mac
OS X. Originally built on Motorola’s 68000 series microprocessors, Mac versions today are
powered by the PowerPC microprocessor, which was developed jointly by Apple, Motorola, and
IBM. While Mac users represent only about 5% of the total numbers of personal computer users,
Macs are highly popular and almost a cultural necessity among graphic designers and online
visual artists and the companies they work for.