HISTORICAL PERSPECTIVE OF COMPUTING AMS112 DR. HALIMATU S.A. (Original)

AMS 112: Introduction to Computing

CHAPTER ONE
HISTORICAL PERSPECTIVE OF COMPUTING - CHARACTERISTICS OF EACH
PROGRAMMES IN COMPUTING

1.1 Basic Concepts


1.2 Historical Overview of the Computer
1.3 Classification of Computers

1.1 BASIC CONCEPTS


1.1.1 Introduction
1.1.2 Definitions
1.1.3 Methods of Data Processing
1.1.4 Characteristics of a Computer
1.1.5 The Computing System

1.1.1 INTRODUCTION
The computer is fast becoming the universal machine of the 21st century. Early
computers were large in size and too expensive to be owned by individuals. Thus
they were confined to laboratories and a few research institutes. They could
only be programmed by computer engineers. The basic applications were confined
to undertaking complex calculations in science and engineering. Today, the
computer is no longer confined to the laboratory. Computers and, indeed,
computing have become embedded in almost every item we use. Computing is
fast becoming ubiquitous. Its application transcends science, engineering,
communication, space science, aviation, financial institutions, social sciences,
humanities, the military, transportation, manufacturing, and extractive
industries to mention but a few. This unit presents the background information
about computers.

However, the computer as we know it today has evolved over the ages. An attempt
is made to present in chronological order the various landmarks and milestones
in the development of the computer. Based on the milestone achievement of each
era, the computer evolution is categorized into generations. The generational
classification, however, is not rigid, as we may find one generation eating into
the next.

The computer has passed through many stages of evolution from the days of the
mainframe computers to the era of microcomputers. Computers have been
classified based on different criteria. In this unit, we shall classify computers
based on three popular methods.

1.1.2 Definitions
i. Computer: A computer is basically defined as a tool or machine used for
processing data to give required information. It is capable of:
- taking input data through the keyboard (input unit),
- storing the input data in a diskette, hard disk or other medium,
- processing it in the central processing unit (CPU) and
- giving out the result (output) on the screen or the Visual Display Unit (VDU).

ii. Data: The term data refers to facts about a person, object or place, e.g. name, age,
complexion, school, class, height etc.

iii. Information: This is referred to as processed data or a meaningful statement, e.g.
net pay of workers, examination results of students, list of successful candidates
in an examination or interview etc.

INPUT (DATA) → PROCESSING → OUTPUT (INFORMATION)

Fig. 1: A schematic diagram to define a computer
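The data-to-information pipeline of Fig. 1 can be sketched in a few lines of Python. The worker records, field names and pay figures below are hypothetical, chosen to echo the net-pay example given under the definition of information.

```python
# Input: raw data - facts about workers (hypothetical figures).
workers = [
    {"name": "Ada", "gross_pay": 50_000, "deductions": 8_000},
    {"name": "Charles", "gross_pay": 42_000, "deductions": 5_500},
]

def net_pay(record):
    """Processing: turn raw facts into a meaningful figure."""
    return record["gross_pay"] - record["deductions"]

# Output: information - a meaningful statement derived from the data.
for w in workers:
    print(f"{w['name']}: net pay = {net_pay(w)}")
```

The same input → processing → output shape appears in every data-processing task, whatever the scale.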


1.1.3 Methods of Data Processing


The following are the three major methods that have been widely used for data
processing over the years:
• The Manual method,
• The Mechanical method and
• The Computer method.

a. The Manual Method


The manual method of data processing involves the use of chalk, wall, pen, pencil
and the like. These devices, machines or tools facilitate human efforts in
recording, classifying, manipulating, sorting and presenting data or information.
The manual data processing operations entail considerable manual efforts. Thus,
the manual method is cumbersome, tiresome, boring, frustrating and time
consuming.

Furthermore, the processing of data by the manual method is likely to be affected
by human errors. When there are errors, the reliability, accuracy, neatness,
tidiness, and validity of the data would be in doubt. The manual method does not
allow for the processing of large volumes of data on a regular and timely basis.

b. The Mechanical Method


The mechanical method of data processing involves the use of machines such as
the typewriter, roneo machines, adding machines and the like. These machines
facilitate human efforts in recording, classifying, manipulating, sorting and
presenting data or information. The mechanical operations are basically routine
in nature. There is virtually no creative thinking. Mechanical operations are noisy,
hazardous, error prone and untidy. The mechanical method does not allow for the
processing of large volumes of data continuously and in a timely manner.

c. The Computer Method


The computer method of carrying out data processing has the following major
features:
• Data can be steadily and continuously processed.
• The operations are practically not noisy.
• There is a store where data and instructions can be kept temporarily and
permanently.
• Errors can be easily and neatly corrected.
• Output reports are usually very neat and decent, and can be produced in various
forms, such as with graphs, diagrams and pictures added.
• Accuracy and reliability are highly enhanced.

Below are further attributes of a computer which make it an indispensable tool for
humans.

1.1.4 Characteristics of a Computer


a. Speed: The computer can manipulate large volumes of data at incredible speed,
and its response time can be very fast.
b. Accuracy: Its accuracy is very high and its consistency can be relied upon. Errors
committed in computing are mostly due to human rather than technological
weakness. There are in-built error detecting schemes in the computer.
c. Storage: It has both internal and external storage facilities for holding data and
instructions. This capacity varies from one machine to the other. Memories are
built up in K (Kilo) modules where K=1024 memory locations.
d. Automatic: Once a program is in the computer’s memory, it can run
automatically each time it is opened. The user has little or no further
instruction to give.
e. Reliability: Being a machine, a computer does not suffer human traits of
tiredness and lack of concentration. It will perform the last job with the same
speed and accuracy as the first job every time even if ten million jobs are involved.
f. Flexibility: It can perform any type of task once the task can be reduced to
logical steps. Modern computers can be used to perform a variety of functions
like on-line processing, multi-programming, real time processing etc.
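The K = 1024 convention mentioned under Storage is easy to check with a little arithmetic; a short sketch:

```python
# Memory sizes are built up in K modules, where one K is 1024
# (that is, 2**10) memory locations, not 1000.
K = 1024
assert K == 2 ** 10

# A machine with a 64K memory therefore has this many locations:
locations_64k = 64 * K
print(locations_64k)  # -> 65536
```

The power-of-two convention arises because memory locations are selected by binary address lines: 10 address bits select exactly 1024 locations.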

1.1.5 The Computing System


The computing system is made up of the computer system (that is, hardware and
software), the user and the environment in which the computer is operated.

The Computing System comprises the Hardware, the Software, the Users and the
Computing Environment.

Fig 2: A schematic diagram of the computing system

a) The Hardware
The computer hardware comprises the input unit, the processing unit and the
output unit. The input unit comprises those media through which data is fed into
the computer. Examples include the keyboard, mouse, joystick, trackball and
scanner. The processing unit is made up of the Arithmetic and Logic Unit (ALU),
the control unit and the main memory. The main memory also known as the
primary memory is made up of the Read Only Memory (ROM) and the Random
Access Memory (RAM). The output unit is made up of those media through which
data, instructions for processing the data (program), and the result of the
processing operation are displayed for the user to see. Examples of the output
unit are the monitor (Visual Display Unit) and the printer.

b) Software
Computer software is the series of instructions that enable the computer to
perform a task or group of tasks. A program is made up of a group of instructions
to perform a task. Series of programs linked together make up software. Computer
programs could be categorized into system software, utility software, and
application programs.


Software is divided into System Software (e.g. the Operating System), Utility
Software (e.g. Anti-Virus, Scandisk) and Application Software (e.g. Word
Processor, Spreadsheet, Statistical Packages).

Fig. 3: Computer software

c) Computer Users
Computer users are the different categories of personnel that operate the
computer. We have expert users and casual users. The expert users could be
further categorized into computer engineers, computer programmers and
computer operators.

Computer Users are divided into Expert Users (System Engineers, Programmers,
Computer Operators), End Users (e.g. Data Entry Clerks) and Casual Users.

Fig. 4: Computer users


d) The Computing Environment


The computing environment includes the building housing the other elements of
the computing system namely the computer and the users, the furniture,
auxiliary devices such as the voltage stabilizer, the Uninterruptible Power Supply
System (UPS), the fans, the air conditioners etc.

The Computing Environment comprises the Building, the Furniture and Auxiliary
Devices (Voltage Stabilizer, Air Conditioner, UPS).

Fig. 5: Computing environment

1.2 HISTORICAL OVERVIEW OF THE COMPUTER


1.2.1 A Brief History of Computer Technology
1.2.2 First Generation Electronic Computers (1937-1953)
1.2.3 Second Generation (1954-1962)
1.2.4 Third Generation (1963-1972)
1.2.5 Fourth Generation (1972-1984)
1.2.6 Fifth Generation (1984-1990)
1.2.7 Sixth Generation (1990-Date)

1.2.1 A Brief History of Computer Technology


A complete history of computing would include a multitude of diverse devices
such as the ancient Chinese abacus, the Jacquard loom (1805) and Charles
Babbage’s “analytical engine” (1834). It would also include a discussion of
mechanical, analog and digital computing architectures. As late as the 1960s,
mechanical devices, such as the Marchant calculator, still found widespread
application in science and engineering. During the early days of electronic
computing devices, there was much discussion about the relative merits of analog
vs. digital computers. In fact, as late as the 1960s, analog computers were
routinely used to solve systems of finite difference equations arising in oil reservoir
modeling. In the end, digital computing devices proved to have the power,
economics and scalability necessary to deal with large scale computations. Digital
computers now dominate the computing world in all areas ranging from the hand
calculator to the super computer and are pervasive throughout society. Therefore,
this brief sketch of the development of scientific computing is limited to the area
of digital, electronic computers.

The evolution of digital computing is often divided into generations. Each
generation is characterized by dramatic improvements over the previous
generation in the technology used to build computers, the internal organization
of computer systems, and programming languages. Although not usually
associated with computer generations, there has been a steady improvement in
algorithms, including algorithms used in computational science. The following
history has been organized using these widely recognized generations as
mileposts.

1.2.2 First Generation Electronic Computers (1937 – 1953)


Three machines have been promoted at various times as the first electronic
computers. These machines used electronic switches, in the form of vacuum
tubes, instead of electromechanical relays. In principle the electronic switches
were more reliable, since they would have no moving parts that would wear out,
but technology was still new at that time and the tubes were comparable to relays
in reliability. Electronic components had one major benefit, however: they could
“open” and “close” about 1,000 times faster than mechanical switches.


The earliest attempt to build an electronic computer was by J. V. Atanasoff, a
professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to
build a machine that would help his graduate students solve systems of partial
differential equations. By 1941, he and graduate student Clifford Berry had
succeeded in building a machine that could solve 29 simultaneous equations with
29 unknowns. However, the machine was not programmable, and was more of an
electronic calculator.
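Solving simultaneous linear equations, the task the Atanasoff-Berry machine was built for, can be sketched in a few lines of Gaussian elimination. This is a modern illustrative sketch, not the machine's actual method (the ABC used binary arithmetic and regenerative capacitor memory).

```python
def solve(a, b):
    """Solve the linear system a*x = b by Gaussian elimination with
    partial pivoting. a is a list of rows; b is the right-hand side."""
    n = len(b)
    # Build the augmented matrix [a | b].
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        # Pivot: bring up the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate this column from the rows below.
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    # Back-substitution, from the last unknown upward.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - s) / m[r][r]
    return x

# Three equations in three unknowns (a 29x29 system works the same way).
a = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(solve(a, b))  # x is approximately [2, 3, -1]
```

For a 29 x 29 system the same loops simply run longer, which is exactly why automating the arithmetic mattered.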

A second early electronic machine was Colossus, designed by Alan Turing for
the British military in 1943. This machine played an important role in breaking
codes used by the German army in World War II. Turing’s main contribution to
the field of computer science was the idea of the Turing Machine, a mathematical
formalism widely used in the study of computable functions. The existence of
Colossus was kept secret until long after the war ended, and the credit due to
Turing and his colleagues for designing one of the first working electronic
computers was slow in coming.

The first general-purpose programmable electronic computer was the Electronic
Numerical Integrator and Computer (ENIAC), built by J. Presper Eckert and John
V. Mauchly at the University of Pennsylvania. Work began in 1943, funded by
the Army Ordnance Department, which needed a way to compute ballistics
during World War II. The machine wasn’t completed until 1945, but then it was
used extensively for calculations during the design of the hydrogen bomb. By the
time it was decommissioned in 1955 it had been used for research on the design
of wind tunnels, random number generators, and weather prediction.

Eckert, Mauchly, and John Von Neumann, a consultant to the ENIAC project,
began work on a new machine before ENIAC was finished. The main contribution
of EDVAC, their new project, was the notion of a stored program. There is some
controversy over who deserves the credit for this idea, but there is none over how
important the idea was to the future of general-purpose computers. ENIAC was
controlled by a set of external switches and dials; to change the program required
physically altering the settings on these controls. These controls also limited the
speed of the internal electronic operations. Through the use of a memory that was
large enough to hold both instructions and data, and using the program stored in
memory to control the order of arithmetic operations, EDVAC was able to run
orders of magnitude faster than ENIAC. By storing instructions in the same
medium as data, designers could concentrate on improving the internal structure
of the machine without worrying about matching it to the speed of an external
control.

Regardless of who deserves the credit for the stored program idea, the EDVAC
project is significant as an example of the power of interdisciplinary projects that
characterize modern computational science. By recognizing that functions, in the
form of a sequence of instructions for a computer, can be encoded as numbers,
the EDVAC group knew the instructions could be stored in the computer’s
memory along with numerical data. The notion of using numbers to represent
functions was a key step used by Goedel in his incompleteness theorem in 1931,
work which Von Neumann, as a logician, was quite familiar with. Von Neumann’s
background in logic, combined with Eckert and Mauchly’s electrical engineering
skills, formed a very powerful interdisciplinary team.
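The core of the stored-program idea, instructions encoded as numbers and held in the same memory as data, can be sketched with a toy machine. The opcode set below is invented for illustration; it is not EDVAC's instruction set.

```python
# A toy stored-program machine. Memory holds numbers; some of those
# numbers are interpreted as instructions, others as data.
# Hypothetical opcodes: 1 = LOAD addr, 2 = ADD addr,
#                       3 = STORE addr, 0 = HALT.

def run(memory):
    acc, pc = 0, 0          # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == 0:          # HALT: stop and return final memory
            return memory
        elif op == 1:        # LOAD: acc <- memory[arg]
            acc = memory[arg]
        elif op == 2:        # ADD: acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == 3:        # STORE: memory[arg] <- acc
            memory[arg] = acc

# Program and data share one memory: cells 0-7 hold the program
# (add cell 8 to cell 9, store the sum in cell 10), cells 8-10 hold data.
memory = [1, 8,   # LOAD 8
          2, 9,   # ADD 9
          3, 10,  # STORE 10
          0, 0,   # HALT
          20, 22, 0]
print(run(memory)[10])  # -> 42
```

Because the program is just numbers in memory, changing the computation means changing memory contents, not rewiring switches and dials as on ENIAC.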

Software technology during this period was very primitive. The first programs were
written out in machine code, i.e. programmers directly wrote down the numbers
that corresponded to the instructions they wanted to store in memory. By the
1950s programmers were using a symbolic notation, known as assembly
language, then hand-translating the symbolic notation into machine code. Later
programs known as assemblers performed the translation task. As primitive as
they were, these first electronic machines were quite useful in applied science and
engineering. Atanasoff estimated that it would take eight hours to solve a set of
equations with eight unknowns using a Marchant calculator, and 381 hours to
solve 29 equations for 29 unknowns. The Atanasoff-Berry computer was able to
complete the task in under an hour. The first problem run on the ENIAC, a
numerical simulation used in the design of the hydrogen bomb, required 20
seconds, as opposed to forty hours using mechanical calculators. Eckert and
Mauchly later developed what was arguably the first commercially successful
computer, the UNIVAC; in 1952, 45 minutes after the polls closed and with 7% of
the vote counted, UNIVAC predicted Eisenhower would defeat Stevenson with 438
electoral votes (he ended up with 442).
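The hand-translation step that early assemblers automated was essentially a table lookup from mnemonics to numeric opcodes. A minimal sketch, using invented mnemonics and opcode values:

```python
# A minimal "assembler": translate symbolic notation into the numeric
# machine code a stored-program computer executes. The mnemonics and
# opcode numbers here are hypothetical.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """Translate one instruction per line into a flat list of numbers."""
    code = []
    for line in source.strip().splitlines():
        mnemonic, *operand = line.split()
        code.append(OPCODES[mnemonic])                 # opcode
        code.append(int(operand[0]) if operand else 0)  # operand (or 0)
    return code

program = """
LOAD 8
ADD 9
STORE 10
HALT
"""
print(assemble(program))  # -> [1, 8, 2, 9, 3, 10, 0, 0]
```

Programmers of the early 1950s performed exactly this translation by hand before assembler programs took over the chore.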

1.2.3 Second Generation (1954 – 1962)


The second generation saw several important developments at all levels of
computer system design, from the technology used to build the basic circuits to
the programming languages used to write scientific applications. Electronic
switches in this era were based on discrete diode and transistor technology with
a switching time of approximately 0.3 microseconds. The first machines to be built
with this technology include TRADIC at Bell Laboratories in 1954 and TX-0 at
MIT’s Lincoln Laboratory. Memory technology was based on magnetic cores which
could be accessed in random order, as opposed to mercury delay lines, in which
data was stored as an acoustic wave that passed sequentially through the medium
and could be accessed only when the data moved by the I/O interface. Important
innovations in computer architecture included index registers for controlling
loops and floating-point units for calculations based on real numbers. Prior to
this, accessing successive elements in an array was quite tedious and often
involved writing self-modifying codes (programs which modified themselves as
they ran; at the time viewed as a powerful application of the principle that
programs and data were fundamentally the same, this practice is now frowned
upon as extremely hard to debug and is impossible in most high-level languages).
Floating point operations were performed by libraries of software routines in early
computers, but were done in hardware in second generation machines.
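The convenience the index register brought can be sketched by simulating indexed addressing in software: the loop computes base + index each pass, instead of the program rewriting its own address fields in the self-modifying style described above. The memory layout below is invented for illustration.

```python
# A flat "memory" with a five-element array stored at base address 4.
memory = [0] * 16
memory[4:9] = [3, 1, 4, 1, 5]

def sum_array(base, length):
    """Sum an array using indexed addressing: effective address
    = base + index, with the index playing the index-register role."""
    total = 0
    for index in range(length):
        total += memory[base + index]
    return total

print(sum_array(4, 5))  # -> 14
```

Without an index register, each pass of the loop would have had to patch the address field of its own load instruction, which is why self-modifying code was once so common.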

During this second generation many high-level programming languages were
introduced, including FORTRAN (1956), ALGOL (1958), and COBOL (1959).

Important commercial machines of this era include the IBM 704 and 7094. The
latter introduced I/O processors for better throughput between I/O devices and
main memory. The second generation also saw the first two supercomputers
designed specifically for numeric processing in scientific applications. The term
“supercomputer” is generally reserved for a machine that is an order of magnitude
more powerful than other machines of its era. Two machines of the 1950s deserve
this title. The Livermore Atomic Research Computer (LARC) and the IBM 7030
(aka Stretch) were early examples of machines that overlapped memory operations
with processor operations and had primitive forms of parallel processing.

1.2.4 Third Generation (1963 – 1972)


The third generation brought huge gains in computational power. Innovations in
this era include the use of integrated circuits, or ICs (semiconductor devices with
several transistors built into one physical component), semiconductor memories
starting to be used instead of magnetic cores, microprogramming as a technique
for efficiently designing complex processors, the coming of age of pipelining and
other forms of parallel processing, and the introduction of operating systems and
time-sharing.

The first ICs were based on small-scale integration (SSI) circuits, which had
around 10 devices per circuit (or “chip”) and evolved to the use of medium-scale
integration (MSI) circuits, which had up to 100 devices per chip. Multilayered
printed circuits were developed, and core memory was replaced by faster, solid
state memories. Computer designers began to take advantage of parallelism by
using multiple functional units, overlapping CPU and I/O operations, and
pipelining (internal parallelism) in both the instruction stream and the data
stream. In 1964, Seymour Cray developed the CDC 6600, which was the first
architecture to use functional parallelism. By using 10 separate functional units
that could operate simultaneously and 32 independent memory banks, the CDC
6600 was able to attain a computation rate of 1 million floating point operations
per second (1 Mflops). Five years later CDC released the 7600, also developed by
Seymour Cray. The CDC 7600, with its pipelined functional units, is considered
to be the first vector processor and was capable of executing at 10 Mflops. The
IBM 360/91, released during the same period, was roughly twice as fast as the
CDC 6600. It employed instruction look ahead, separate floating point and integer
functional units and a pipelined instruction stream. The IBM 360/195 was
comparable to the CDC 7600, deriving much of its performance from a very fast
cache memory. The SOLOMON computer, developed by Westinghouse
Corporation, and the ILLIAC IV, jointly developed by Burroughs, the Department
of Defence and the University of Illinois, were representative of the first parallel
computers.

The Texas Instrument Advanced Scientific Computer (TI-ASC) and the STAR-100
of CDC were pipelined vector processors that demonstrated the viability of that
design and set the standards for subsequent vector processors.

Early in this third generation, Cambridge and the University of London cooperated
in the development of CPL (Combined Programming Language, 1963). CPL was,
according to its authors, an attempt to capture only the important features of the
complicated and sophisticated ALGOL. However, like ALGOL, CPL was large, with
many features that were hard to learn. In an attempt at further simplification,
Martin Richards of Cambridge developed a subset of CPL called BCPL (Basic
Combined Programming Language, 1967).

1.2.5 Fourth Generation (1972 – 1984)


The next generation of computer systems saw the use of large scale integration
(LSI – 1,000 devices per chip) and very large scale integration (VLSI – 100,000
devices per chip) in the construction of computing elements. At this scale entire
processors could fit onto a single chip, and for simple systems the entire computer
(processor, main memory, and I/O controllers) could fit on one chip. Gate delays
dropped to about 1 ns per gate.


Semiconductor memories replaced core memories as the main memory in most
systems; until this time the use of semiconductor memory in most systems was
limited to registers and cache. During this period, high speed vector processors,
such as the CRAY 1, CRAY X-MP and CYBER 205, dominated the high-performance
computing scene. Computers with large main memory, such as the
CRAY 2, began to emerge. A variety of parallel architectures began to appear;
however, during this period the parallel computing efforts were of a mostly
experimental nature and most computational science was carried out on vector
processors. Microcomputers and workstations were introduced and saw wide use
as alternatives to time-shared mainframe computers.

Developments in software include very high-level languages such as FP
(functional programming) and Prolog (programming in logic). These languages
tend to use a declarative programming style as opposed to the imperative style of
Pascal, C, FORTRAN, et al. In a declarative style, a programmer gives a
mathematical specification of what should be computed, leaving many details of
how it should be computed to the compiler and/or runtime system. These
languages are not yet in wide use but are very promising as notations for
programs that will run on massively parallel computers (systems with over 1,000
processors).
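The contrast between the two styles can be sketched in Python, which supports both: the imperative version spells out how to compute, step by step, while the declarative version states what is wanted and leaves the stepping to the runtime.

```python
data = [3, 1, 4, 1, 5, 9, 2, 6]

# Imperative style: say *how* to compute, one step at a time.
total = 0
for x in data:
    if x % 2 == 0:
        total += x * x

# Declarative style: state *what* is wanted - the sum of the squares
# of the even elements - and leave the iteration to the runtime.
total_decl = sum(x * x for x in data if x % 2 == 0)

assert total == total_decl
print(total_decl)  # -> 56
```

Because the declarative form does not dictate an order of evaluation, a compiler or runtime is free to parallelize it, which is exactly why such styles appealed to designers of massively parallel machines.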

Compilers for established languages started to use sophisticated optimization
techniques to improve codes, and compilers for vector processors were able to
vectorize simple loops (turn loops into single instructions that would initiate an
operation over an entire vector). Two important events marked the early part of
the third generation: the development of the C programming language and the
UNIX operating system, both at Bell Labs. In 1972, Dennis Ritchie, seeking to
meet the design goals of CPL and generalize Thompson’s B, developed the C
language. Thompson and Ritchie then used C to write a version of UNIX for the
DEC PDP-11. This C-based UNIX was soon ported to many different computers,
relieving users from having to learn a new operating system each time they
changed computer hardware. UNIX or a derivative of UNIX is now a de facto
standard on virtually every computer system.
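Vectorization as described above, turning a loop into a single whole-vector operation, can be sketched as follows; the Python list expression here only stands in for a hardware vector instruction.

```python
# Imperative scalar loop: one multiply per iteration.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
c = [0.0] * len(a)
for i in range(len(a)):
    c[i] = a[i] * b[i]

# "Vectorized" form: one expression over the whole vector, the kind of
# rewrite a vectorizing compiler performed automatically on simple loops.
c_vec = [x * y for x, y in zip(a, b)]

assert c == c_vec
print(c_vec)  # -> [10.0, 40.0, 90.0, 160.0]
```

On a real vector processor the second form would issue as one instruction operating on an entire vector register, rather than a loop of scalar multiplies.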

An important event in the development of computational science was the
publication of the Lax report. In 1982, the US Department of Defence (DOD) and
National Science Foundation (NSF) sponsored a panel on Large Scale Computing
in Science and Engineering, chaired by Peter D. Lax. The Lax Report stated that
aggressive and focused foreign initiatives in high performance computing,
especially in Japan, were in sharp contrast to the absence of coordinated national
attention in the United States. The report noted that university researchers had
inadequate access to high performance computers. One of the first and most
visible of the responses to the Lax report was the establishment of the NSF
supercomputing centres. Phase I of this NSF program was designed to encourage
the use of high-performance computing at American universities by making cycles
and training on three (and later six) existing supercomputers immediately
available. Following this Phase I stage, in 1984 – 1985 the NSF provided funding
for the establishment of five Phase II supercomputing centres. The Phase II
centres, located in San Diego (San Diego Supercomputing Centre); Illinois
(National Centre for Supercomputing Applications); Pittsburgh (Pittsburgh
Supercomputing Center); Cornell (Cornell Theory Centre); and Princeton (John
Von Neumann Centre), have been extremely successful at providing computing
time on supercomputers to the academic community. In addition they have
provided many valuable training programmes and have developed several
software packages that are available free of charge. These Phase II centres
continue to augment the substantial high performance computing efforts at the
National Laboratories, especially the Department of Energy (DOE) and NASA sites.

1.2.6 Fifth Generation (1984 – 1990)


The development of the next generation of computer systems is characterized
mainly by the acceptance of parallel processing. Until this time, parallelism was
limited to pipelining and vector processing, or at most to a few processors sharing
jobs. The fifth generation saw the introduction of machines with hundreds of
processors that could all be working on different parts of a single program. The
scale of integration in semiconductors continued at an incredible pace, so that by
1990 it was possible to build chips with a million components – and
semiconductor memories became standard on all computers.

Other new developments were the widespread use of computer networks and the
increasing use of single-user workstations. Prior to 1985, large scale parallel
processing was viewed as a research goal, but two systems introduced around
this time are typical of the first commercial products to be based on parallel
processing. The Sequent Balance 8000 connected up to 20 processors to a single
shared memory module (but each processor had its own local cache). The machine
was designed to compete with the DEC VAX-780 as a general purpose Unix
system, with each processor working on a different user’s job. However, Sequent
provided a library of subroutines that would allow programmers to write programs
that would use more than one processor, and the machine was widely used to
explore parallel algorithms and programming techniques.
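The shared-memory programming style that the Sequent library exposed, several processors cooperating on one program through a common memory, can be sketched with Python threads. This illustrates the model only; it is not Sequent's actual subroutine library.

```python
import threading

data = list(range(1, 101))   # shared memory: every thread sees this array
partial = [0, 0]             # one result slot per "processor"

def worker(tid, lo, hi):
    """Each thread sums its own slice of the shared array."""
    partial[tid] = sum(data[lo:hi])

# Split the work between two threads, as between two processors.
t0 = threading.Thread(target=worker, args=(0, 0, 50))
t1 = threading.Thread(target=worker, args=(1, 50, 100))
t0.start(); t1.start()
t0.join(); t1.join()

print(sum(partial))  # -> 5050
```

Giving each thread its own result slot avoids the need for a lock here; with a single shared accumulator, the threads would have to synchronize their updates.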

The Intel iPSC-1, nicknamed “the hypercube”, took a different approach. Instead
of using one memory module, Intel connected each processor to its own memory
and used a network interface to connect processors. This distributed memory
architecture meant memory was no longer a bottleneck and large systems (using
more processors) could be built. The largest iPSC-1 had 128 processors. Toward
the end of this period, a third type of parallel processor was introduced to the
market. In this style of machine, known as a data-parallel or SIMD, there are
several thousand very simple processors. All processors work under the direction
of a single control unit; i.e. if the control unit says “add a to b” then all processors
find their local copy of a and add it to their local copy of b. Machines in this class
include the Connection Machine from Thinking Machines, Inc., and the MP-1
from MasPar, Inc. Scientific computing in this period was still dominated by vector
processing. Most manufacturers of vector processors introduced parallel models,
but there were very few (two to eight) processors in these parallel machines. In
the area of computer networking, both wide area network (WAN) and local area
network (LAN) technology developed at a rapid pace, stimulating a transition from
the traditional mainframe computing environment towards a distributed
computing environment in which each user has their own workstation for
relatively simple tasks (editing and compiling programs, reading mail) but sharing
large, expensive resources such as file servers and supercomputers. RISC
technology (a style of internal organization of the CPU) and plummeting costs for
RAM brought tremendous gains in computational power of relatively low cost
workstations and servers. This period also saw a marked increase in both the
quality and quantity of scientific visualization.
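The data-parallel (SIMD) style described above, where the control unit broadcasts one instruction such as “add a to b” to every processor, can be sketched by modelling each processor as one position in a list, all obeying the same command on their local data.

```python
# SIMD sketch: many simple processors, each holding local copies of
# a and b, all executing one broadcast instruction in lockstep.
local_a = [1, 2, 3, 4]      # processor i's local copy of a
local_b = [10, 20, 30, 40]  # processor i's local copy of b

def broadcast_add(a_values, b_values):
    """Control unit says "add a to b"; every processor adds its own
    local a into its own local b, all at the same time."""
    return [a + b for a, b in zip(a_values, b_values)]

local_b = broadcast_add(local_a, local_b)
print(local_b)  # -> [11, 22, 33, 44]
```

On a machine like the Connection Machine this single broadcast would update thousands of local memories at once, with no per-element loop anywhere in the program.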

1.2.7 Sixth Generation (1990 to date)


Transitions between generations in computer technology are hard to define,
especially as they are taking place. Some changes, such as the switch from
vacuum tubes to transistors, are immediately apparent as fundamental changes,
but others are clear only in retrospect. Many of the developments in computer
systems since 1990 reflect gradual improvements over established systems, and
thus it is hard to claim they represent a transition to a new “generation”, but other
developments will prove to be significant changes.

In this section, we offer some assessments about recent developments and current
trends that we think will have a significant impact on computational science. This
generation is beginning with many gains in parallel computing, both in the
hardware area and in improved understanding of how to develop algorithms to
exploit diverse, massively parallel architectures. Parallel systems now compete
with vector processors in terms of total computing power and most experts
expect parallel systems to dominate the future.

DR. HALIMATU S. ABUBAKAR 17


AMS 112: Introduction to Computing

Combinations of parallel/vector architectures are well established, and one
corporation (Fujitsu) has announced plans to build a system with over 200 of
its high-end vector processors. Manufacturers have set themselves the goal of
achieving teraflops (10¹² arithmetic operations per second) performance by the
middle of the decade, and it is clear this will be obtained only by a system with a
thousand processors or more. Workstation technology has continued to improve,
with processor designs now using a combination of RISC, pipelining, and parallel
processing. As a result it is now possible to procure a desktop workstation that
has the same overall computing power (100 megaflops) as fourth generation
supercomputers. This development has sparked an interest in heterogeneous
computing: a program started on one workstation can find idle workstations
elsewhere in the local network to run parallel subtasks.
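The arithmetic behind the processor counts quoted above can be checked directly. Assuming, for illustration only, that each processor sustains on the order of one gigaflop (10⁹ operations per second) — a figure chosen for this sketch, not taken from any particular machine — a teraflops system needs about a thousand of them:

```python
# Back-of-envelope check of the teraflops goal (10**12 operations/second).

teraflops_target = 10 ** 12   # target: arithmetic operations per second
per_processor = 10 ** 9       # assumed ~1 gigaflop per processor (illustrative)

processors_needed = teraflops_target // per_processor
print(processors_needed)      # 1000
```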

One of the most dramatic changes in the sixth generation is the explosive growth
of wide area networking. Network bandwidth has expanded tremendously in the
last few years and will continue to improve for the next several years. T1
transmission rates are now standard for regional networks, and the national
“backbone” that interconnects regional networks uses T3. Networking technology
is becoming more widespread than its original strong base in universities and
government laboratories as it is rapidly finding application in K-12 education,
community networks and private industry. A little over a decade after the warning
voiced in the Lax report, the future of a strong computational science
infrastructure is bright.

1.3 CLASSIFICATION OF COMPUTERS


1.3.1 Classification Based on Signal Type
1.3.2 Classification by Purpose
1.3.3 Classification by Capacity


Although there are no industry standards, computers are generally classified in
the following ways:
1.3.1 Classification Based on Signal Type
There are basically three types of electronic computers. These are the Digital,
Analog and Hybrid computers.

a. The Digital Computer


This represents its variables in the form of digits. The data it deals with, whether
representing numbers, letters or other symbols, are converted into binary form
on input to the computer. The data undergoes processing, after which the binary
digits are converted back to alphanumeric form for output for human use.
Because business applications like inventory control, invoicing and payroll deal
with discrete values (separate, disunited, discontinuous), they are best
processed with digital computers. As a result, digital computers are mostly
used in commercial and business places today.
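The input and output conversions described above can be illustrated with a short Python sketch, using the standard ASCII character codes as the machine's internal binary representation:

```python
# A digital computer stores symbols as binary digits: characters are
# converted to binary on input and back to alphanumeric form on output.

def to_binary(text):
    """Input stage: convert each character to an 8-bit binary string."""
    return [format(ord(ch), "08b") for ch in text]

def from_binary(codes):
    """Output stage: convert 8-bit binary strings back to characters."""
    return "".join(chr(int(code, 2)) for code in codes)

encoded = to_binary("PAY")
print(encoded)               # ['01010000', '01000001', '01011001']
print(from_binary(encoded))  # PAY
```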

b. The Analog Computer


It measures rather than counts. This type of computer sets up a model of a
system. The common type represents its variables in terms of electrical voltage
and sets up circuits analogous to the equations connecting the variables. The
answer can be obtained either by using a voltmeter to read the value of the
required variable, or by feeding the voltage into a plotting device. Analog
computers hold data in the form
of physical variables rather than numerical quantities. In theory, analog
computers give an exact answer because the answer has not been approximated
to the nearest digit. In practice, however, when we try to obtain the answers using a digital
voltmeter, we often find that the accuracy is less than that which could have been
obtained from an analog computer. It is almost never used in business systems.
It is used by scientists and engineers to solve systems of partial differential
equations. It is also used in controlling and monitoring systems in such areas
as hydrodynamics and rocketry in production. There are two useful properties of
this computer once it is programmed:


- It is simple to change the value of a constant or coefficient and study the effect of
such changes.
- It is possible to link certain variables to a time pulse to study changes with time
as a variable, and chart the result on an X-Y plotter.

c. The Hybrid Computer


In some cases, the computer user may wish to obtain the output from an analog
computer as processed by a digital computer or vice versa. To achieve this, he sets
up a hybrid machine in which the two are connected and the analog computer may
be regarded as a peripheral of the digital computer. In such a situation, a hybrid
system attempts to gain the advantage of both the digital and the analog elements
in the same machine. This kind of machine is usually a special-purpose device
which is built for a specific task. It needs a conversion element which accepts
analog inputs, and outputs digital values. Such converters are called digitizers.
There is a need for a converter from digital to analog as well. It has the advantage of
giving real-time response on a continuous basis. Complex calculations can be
dealt with by the digital elements, thereby requiring a large memory, and giving
accurate results after programming. They are mainly used in aerospace and
process control applications.
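The conversion element described above can be sketched as quantization: an n-bit converter maps a continuous voltage onto one of 2ⁿ discrete digital levels. The reference voltage and bit width below are illustrative values, not taken from any particular device:

```python
# Sketch of an n-bit analog-to-digital converter (the "digitizer"):
# a continuous voltage in [0, v_ref) is mapped to one of 2**bits levels.

def adc(voltage, v_ref=5.0, bits=8):
    """Quantize an analog voltage to an n-bit digital code."""
    levels = 2 ** bits                    # 256 levels for an 8-bit converter
    code = int(voltage / v_ref * levels)  # pick the level at or below the voltage
    return max(0, min(levels - 1, code))  # clamp to the valid code range

print(adc(0.0))   # 0    (bottom of the range)
print(adc(2.5))   # 128  (mid-scale)
print(adc(4.99))  # 255  (near full scale)
```

The digital-to-analog converter mentioned in the text performs the inverse mapping, turning a code back into a voltage (roughly `code * v_ref / levels`).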

1.3.2 Classification by Purpose


Depending on their flexibility in operation, computers are classified as either
special purpose or general purpose.
a) Special-Purpose Computers
A special purpose computer is one that is designed to solve a restricted class of
problems. Such computers may even be designed and built to handle only one
job. In such machines, the steps or operations that the computer follows may be
built into the hardware. Most of the computers used for military purposes fall into
this class. Other examples of special purpose computers include:
• Computers designed specifically to solve navigational problems.
• Computers designed for tracking airplanes or missiles.

• Computers used for process control applications in industries such as oil refinery,
chemical manufacture, steel processing and power generation.
• Computers used as robots in factories like vehicle assembly plants and glass
industries.

General Attributes of Special-Purpose Computers


Special-purpose computers are usually very efficient for the tasks for which they
are specially designed. They are very much less complex than the general-purpose
computers. The simplicity of the circuitry stems from the fact that provision is
made only for limited facilities. They are very much cheaper than the general-
purpose type since they involve fewer components and are less complex.

b) General-Purpose Computers
General-purpose computers are computers designed to handle a wide range of
problems. Theoretically, a general-purpose computer can be adapted, by means
of easily alterable instructions, to handle any problem that can be solved
by computation. In practice, however, there are limitations imposed by memory
size, speed and the type of input/output devices. Examples of areas where general
purpose computers are employed include the following:
- Payroll
- Banking
- Billing
- Sales analysis
- Cost accounting
- Manufacturing scheduling
- Inventory control

General Attributes of General-Purpose Computers


i. General-purpose computers are more flexible than special purpose computers.
Thus, the former can handle a wide spectrum of problems.

ii. They are less efficient than the special-purpose computers due to such problems
as the following:
- They have inadequate storage
- They have low operating speed
- Coordination of the various tasks and subsections may take time
- General-purpose computers are more complex than special purpose computers.

1.3.3 Classification of Computers According to Capacity


In the past, the capacity of computers was measured in terms of physical size.
Today, however, physical size is not a good measure of capacity because modern
technology has made it possible to achieve compactness.

A better measure of capacity today is the volume of work that a computer can
handle. The volume of work that a given computer handles is closely tied to the
cost and to the memory size of the computer. Therefore, most authorities today
accept rental price as the standard for ranking computers. Here, both memory
size and cost shall be used to rank (classify) computers into three main categories
as follows:

• Microcomputers
• Medium/mini/small computers
• Large computer/mainframes.

a. Microcomputers
Microcomputers, also known as single board computers, are the cheapest class
of computers. In the microcomputer, we do not have a Central Processing Unit
(CPU) as we have in the larger computers. Rather we have a microprocessor chip
as the main data processing unit. They are the cheapest and smallest, and can
operate under normal office conditions. Examples include machines from IBM,
Apple, Compaq, Hewlett-Packard (HP), Dell and Toshiba.


Different Types of Personal Computers (Microcomputers)


Normally, personal computers are placed on the desk; hence they are referred to
as desktop personal computers. Other types are also available under the
category of personal computers. They are:
i. Laptop Computers:
These are small-sized types that are battery-operated. The screen is used to
cover the system, while the keyboard is installed flat on the system unit. They
can be carried about like a box when closed after operation and can be operated
in vehicles while on a journey.
ii. Notebook Computers:
These are like laptop computers but smaller in size. Though small, the notebook
computer comprises all the components of a full system.
iii. Palmtop Computers:
The palmtop computer is far smaller in size. All the components are complete as
in any of the above, but it is made small enough to be held in the palm.

Uses of the Personal Computer


A personal computer can perform the following functions:
- It can be used to produce documents like memos, reports, letters and briefs.
- It can be used to calculate budgets and perform accounting tasks
- It can analyze numeric functions
- It can create illustrations
- It can be used for electronic mail
- It can help in making schedules and planning projects
- It can assist in searching for specific information from lists or from reports.

Advantages of the Personal Computer


- The personal computer is versatile: it can be used in any establishment
- It has a fast speed for processing data
- It can deal with several data items at a time

- It can attend to several users at the same time, thereby being able to process
several jobs at a time
- It is capable of storing large amounts of data
- Operating the personal computer causes less fatigue
- It is possible to network personal computers, that is, to link two or more
computers.

Disadvantages of the Personal Computer

- The personal computer is costly to maintain
- It is very fragile and complex to handle
- It requires special skill to operate
- With inventions and innovations every day, the personal computer is at the risk of
becoming obsolete
- It can lead to unemployment, especially in less developed countries
- Some computers cannot function properly without the aid of a cooling system,
e.g. air conditioners or fans in some locations.

b. Mini Computers
Mini computers have memory capacity in the range of 128-256 Kbytes and are
less expensive than mainframes, but reliable and smaller in size. They were
first introduced in 1965, when DEC (Digital Equipment Corporation) built the
PDP-8. Another mini computer is the WANG VS.

c. Mainframe Computers.
The mainframe computers, often called number crunchers, have memory capacity
of the order of 4 Mbytes and above, and are very expensive. They can execute
up to 100 MIPS (Million Instructions Per Second). They are large systems and
are used by many people for a variety of purposes.
