Evolution of Multilevel Machines
This section looks at how the number and nature of the levels have evolved over the years.
Hardware consists of tangible objects—integrated circuits, printed circuit
boards, cables, power supplies, memories, and printers.
Software, in contrast, consists of algorithms and their computer representations, namely programs.
Programs can be stored on hard disk, CD-ROM, or other media, but
the essence of software is the set of instructions that makes up the
programs, not the physical media on which they are recorded.
In the very first computers, the boundary between hardware and software
was crystal clear.
Over time, however, it has blurred considerably, primarily due to the
addition, removal, and merging of levels as computers have evolved.
Hardware and software are logically equivalent.
Any operation performed by software can also be built directly into the
hardware.
As Karen Panetta put it: ‘‘Hardware is just petrified software.’’ Of course,
the reverse is also true:
Any instruction executed by the hardware can also be simulated in software.
The decision to put certain functions in hardware and others in software is
based on such factors as cost, speed, reliability, and frequency of
expected changes.
The Invention of Microprogramming
The first digital computers, back in the 1940s, had only two levels:
the ISA level, in which all the programming was done,
and the digital logic level, which executed these programs.
The digital logic level’s circuits were complicated, difficult to understand
and build, and unreliable.
In 1951, Maurice Wilkes, a researcher at the University of Cambridge,
suggested designing a three-level computer in order to drastically
simplify the hardware and thus reduce the number of (unreliable) vacuum
tubes needed.
This machine was to have a built-in, unchangeable interpreter (the microprogram), whose function was to
execute ISA-level programs by interpretation.
Because the hardware would now only have to execute microprograms,
which have a limited instruction set, instead of ISA-level programs,
which have a much larger instruction set, fewer electronic circuits would be needed.
The reduced vacuum-tube count would, in turn, improve reliability (i.e., fewer crashes per day).
By 1970 the idea was dominant.
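The flavor of the idea can be conveyed with a minimal sketch (Python is used here purely for illustration; the instruction names, encoding, and sample program are invented, not taken from any real machine): a small, fixed interpreter loop fetches each ISA-level instruction and carries it out using only a handful of primitive steps.

```python
# Minimal sketch of Wilkes' proposal: a fixed interpreter (the "microprogram")
# executes ISA-level programs by interpretation. The instruction set, its
# encoding, and the sample program are invented purely for illustration.

def run(program, memory):
    """Interpret a list of (opcode, operand) ISA-level instructions."""
    acc = 0              # accumulator register
    pc = 0               # program counter
    while True:
        opcode, operand = program[pc]      # fetch
        pc += 1
        if opcode == "LOAD":               # decode and execute, using only
            acc = memory[operand]          # a few primitive micro-operations
        elif opcode == "ADD":
            acc = acc + memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

memory = {0: 5, 1: 7, 2: 0}
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(program, memory)[2])             # prints 12
```

The hardware only has to implement the interpreter loop and its few primitive operations; everything above that is programming.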
The Invention of the Operating System
In these early years, most computers were ‘‘open shop,’’ which meant that the programmer had to operate the machine personally.
Next to each machine was a sign-up sheet. A programmer wanting to run a
program signed up for a block of time, say Wednesday morning 3 to 5 A.M.
When the time came, he headed for the machine room with a deck of 80-column punched cards
(an early input medium) in one hand and a sharpened pencil in the other.
Upon arriving, he went over to the cabinet where the program library was kept, took out
the big green deck labeled FORTRAN compiler, put it in the card reader, and pushed the START button.
Then he put his FORTRAN program in the card reader and pushed the CONTINUE
button. The program was read in.
When the computer stopped, he read his FORTRAN program in a second
time. Although some compilers required only one pass over the input,
many required two or more.
Finally, the translation neared completion. The programmer often became
nervous near the end because if the compiler found an error in the program,
he had to correct it and start the entire process all over again. If there were
no errors, the compiler punched out the translated machine language
program on cards.
The programmer then put the machine language program in the card
reader along with the subroutine library deck and read them both in.
The program began executing. More often than not it did not work and
unexpectedly stopped in the middle. If lucky, he figured out the problem,
corrected the error, and went back to the cabinet containing the big green
FORTRAN compiler to start over again. If less fortunate, he made a
printout of the contents of memory, called a core dump, and took it home to
study.
This way of working forced programmers to learn how to operate the machine and to
know what to do when it broke down.
The machine was frequently idle while people were carrying cards around
the room or scratching their heads trying to find out why their programs were
not working properly.
Around 1960 people tried to reduce the amount of wasted time by
automating the operator’s job. A program called an operating system was
kept in the computer at all times.
The operating system read the ∗JOB card and used the information on it for
accounting purposes. (The asterisk was used to identify control cards, so they
would not be confused with program and data cards.)
Later, it read the ∗FORTRAN card, which was an instruction to load the
FORTRAN compiler from a magnetic tape.
The compiler then read in and compiled the FORTRAN program.
When the compiler finished, it returned control back to the operating
system, which then read the ∗DATA card. This was an instruction to
execute the translated program, using the cards following the ∗DATA card as
the data.
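A rough sketch of that control flow, with a hypothetical card deck and the compile and run steps reduced to print statements, might look like the following (Python, purely illustrative):

```python
# Sketch of the early batch operating system loop described above. The deck
# contents and the compile/run actions are hypothetical stand-ins.

deck = [
    "*JOB USER=SMITH ACCT=1234",   # accounting information
    "*FORTRAN",                    # load the FORTRAN compiler from tape
    "      PRINT *, 2 + 2",        # FORTRAN source cards
    "      END",
    "*DATA",                       # execute the translated program on the
    "42",                          # cards that follow
]

def read_until_control(deck, i):
    """Collect cards up to the next control card (one starting with '*')."""
    cards = []
    while i < len(deck) and not deck[i].startswith("*"):
        cards.append(deck[i])
        i += 1
    return cards, i

def run_batch(deck):
    i = 0
    while i < len(deck):
        card = deck[i]
        if card.startswith("*JOB"):
            print("accounting:", card)                  # record who to bill
            i += 1
        elif card.startswith("*FORTRAN"):
            source, i = read_until_control(deck, i + 1)
            print("compiling", len(source), "source cards")
        elif card.startswith("*DATA"):
            data, i = read_until_control(deck, i + 1)
            print("running translated program with data:", data)
        else:
            i += 1

run_batch(deck)
```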
In subsequent years, operating systems became more and more
sophisticated.
New instructions, facilities, and features were added to the ISA level until
it began to take on the appearance of a new level.
Some of this new level’s instructions were identical to ISA-level
instructions (e.g., addition and multiplication), but others, particularly the input/output instructions, were
completely different.
The new instructions were often known as operating system macros or
supervisor calls. The usual term now is system call.
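A modern illustration of the distinction (not the 1960s supervisor calls themselves): ordinary arithmetic compiles to plain ISA-level instructions, whereas input/output is requested from the operating system through a system call. A minimal Python sketch using the standard os module:

```python
import os

x = 6 * 7                        # ordinary arithmetic: plain ISA-level instructions
os.write(1, f"{x}\n".encode())   # I/O is a request to the operating system: os.write
                                 # issues the write system call; on Unix-like systems,
                                 # file descriptor 1 is standard output
```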
In the early 1960s, researchers at M.I.T. developed operating systems that
allowed multiple programmers to communicate directly with the
computer.
In these systems, remote terminals were connected to the central
computer via telephone lines. The computer was shared among many users.
A programmer could type in a program and get the results typed back
almost immediately. These systems were called timesharing systems.
The Migration of Functionality to Microcode
Once microprogramming had become common (by 1970), designers realized
that they could add new instructions by just extending the microprogram.
In other words, they could add ‘‘hardware’’ (new machine instructions)
by programming.
This revelation led to a virtual explosion in machine instruction sets, as
designers competed with one another to produce bigger and better
instruction sets.
Many of these instructions were not essential, in the sense that their effect
could easily be achieved by existing instructions, but often they were
slightly faster than a sequence of existing instructions.
For example, many machines had an instruction INC (increment) that added
1 to a number, even though the same effect could be obtained with an ordinary ADD instruction.
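What ‘‘adding hardware by programming’’ means in practice can be sketched as follows (the dispatch table and all names are invented for illustration): once the instruction set is defined by a microprogram, a new instruction such as INC is just another micro-routine built from operations the machine already performs.

```python
# Sketch of adding a "new" INC instruction purely by extending the
# microprogram, i.e., by defining it in terms of micro-operations the
# hardware already performs. All names here are invented for illustration.

def micro_add(state, addr):            # existing micro-routine for ADD
    state["acc"] += state["mem"][addr]

def micro_inc(state, addr=None):       # the "new hardware": just more microcode
    state["acc"] += 1                  # a bit faster than ADD of a memory word
                                       # that happens to hold the constant 1

MICROPROGRAM = {"ADD": micro_add, "INC": micro_inc}

state = {"acc": 40, "mem": {0: 1}}
MICROPROGRAM["ADD"](state, 0)          # old way: add the constant 1 from memory
MICROPROGRAM["INC"](state)             # new way: same effect, dedicated instruction
print(state["acc"])                    # prints 42
```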
The Elimination of Microprogramming
Microprograms grew fat during the golden years of microprogramming
(1960s and 1970s). They also tended to get slower and slower as they
acquired more bulk.
Finally, some researchers realized that by eliminating the microprogram,
vastly reducing the instruction set, and having the remaining instructions be
directly executed (i.e., hardware control of the data path), machines
could be speeded up.
In a certain sense, computer design had come full circle, back to the way it
was before Wilkes invented microprogramming in the first place.
But the wheel is still turning. Modern processors still rely on
microprogramming to translate complex instructions to internal microcode
that can be executed directly on streamlined hardware. The point of this
discussion is to show that the boundary between hardware and software is
arbitrary and constantly changing. Today’s software may be tomorrow’s
hardware, and vice versa.
Furthermore, the boundaries between the various levels are also fluid. From
the programmer’s point of view, how an instruction is actually implemented is
unimportant (except perhaps for its speed). A person programming at the
ISA level can use its multiply instruction as though it were a hardware
instruction without having to worry about it, or even be aware of whether it
really is a hardware instruction. One person’s hardware is another person’s
software.
The Zeroth Generation—Mechanical Computers (1642–1945)
This section provides a historical overview of the Zeroth Generation of computers,
which refers to mechanical computers developed from 1642 to 1945, before the
advent of electronic computers. It describes the contributions of various pioneers
who laid the foundation for modern computing. Let’s break it down:
1. Blaise Pascal (1642)
Contribution: Built the first mechanical calculating machine at the age of
19.
Device: Pascal’s machine was designed to help his father, a tax collector,
with calculations. It was powered by a hand-operated crank and used gears
to perform basic arithmetic operations (addition and subtraction).
Significance: This machine, though limited to simple operations, marked the
beginning of mechanical computing.
2. Gottfried Wilhelm von Leibniz (1672-1673)
Contribution: Built a more advanced mechanical machine than Pascal’s.
Device: Leibniz’s machine could perform multiplication and division in
addition to addition and subtraction, making it equivalent to a four-function
pocket calculator of the modern era.
Significance: Leibniz advanced the capabilities of mechanical computers by
introducing more complex arithmetic functions.
3. Charles Babbage (1822-1837)
Contribution: Designed two influential mechanical machines: the Difference
Engine and the Analytical Engine.
o Difference Engine: Built to compute mathematical tables useful for
navigation. It ran a single algorithm, the method of finite
differences, to tabulate polynomials (a short sketch of the method appears after this list).
o Analytical Engine: The first general-purpose mechanical
computer, consisting of four major components—the store (memory),
the mill (computation unit), input section (punched-card reader),
and output section (punched and printed output). It could be
programmed using punched cards, similar to later computers.
Significance: Babbage’s Analytical Engine is considered the conceptual
precursor to modern computers because it had all the basic components
of a digital computer—memory, computation, input, and output. It was also
programmable, meaning it could run different programs by reading different
punched cards.
Ada Lovelace: Babbage’s collaborator, Ada Lovelace, was the world’s first
computer programmer, writing programs for the Analytical Engine.
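As noted in the Difference Engine item above, here is a minimal sketch of the method of finite differences: after a one-time set-up, each new table entry of a polynomial is produced using additions only, which is exactly the kind of operation the engine's gears could perform. The polynomial and the function interface below are chosen only for illustration.

```python
# Minimal sketch of the method of finite differences used by Babbage's
# Difference Engine: tabulate a polynomial using only additions.
# Example polynomial p(x) = x**2 + x + 41 evaluated at x = 0, 1, 2, ...

def difference_table(p, degree, n):
    # Compute the initial differences from the first degree+1 values.
    values = [p(x) for x in range(degree + 1)]
    diffs = []
    col = values[:]
    while col:
        diffs.append(col[0])
        col = [b - a for a, b in zip(col, col[1:])]
    # diffs[k] is the k-th finite difference at x = 0; for a polynomial of
    # the given degree, the highest difference is constant.
    table = []
    for _ in range(n):
        table.append(diffs[0])
        # Each new entry needs only additions: propagate the differences.
        for k in range(degree):
            diffs[k] += diffs[k + 1]
    return table

print(difference_table(lambda x: x * x + x + 41, degree=2, n=6))
# [41, 43, 47, 53, 61, 71]
```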
4. Konrad Zuse (1938-1941)
Contribution: Built a series of automatic calculating machines using
electromagnetic relays in Germany.
Device: Zuse’s work marked a transition from mechanical to
electromechanical computing, but it did not influence later developments
because his machines were destroyed during World War II, and he did not
receive government funding.
Significance: Zuse is considered one of the pioneers of computer science,
despite the lack of direct influence on subsequent machines.
5. John Atanasoff (Late 1930s)
Contribution: Designed an advanced calculator at Iowa State College that
used binary arithmetic and capacitors for memory, which were
periodically refreshed to prevent data loss, much like modern DRAM
(Dynamic Random-Access Memory).
Significance: Atanasoff’s machine was ahead of its time in terms of using
binary logic, but it was never fully operational, limited by the hardware
technology available during that era.
6. George Stibbitz (1940)
Contribution: Built a simpler calculator at Bell Labs that worked
effectively.
Significance: Stibbitz demonstrated his machine publicly in 1940, which
helped spark further interest in computing.
7. Howard Aiken (1944)
Contribution: Built the Mark I at Harvard University, inspired by Charles
Babbage’s work. The Mark I was a general-purpose relay-based
computer with 72 words of 23 decimal digits each. It used punched paper
tape for input and output.
Significance: Aiken’s Mark I was one of the last electromechanical
computers before the electronic era began. By the time Aiken completed the
Mark II, relay-based machines were becoming obsolete due to the rise of fully
electronic computers.
Conclusion
The Zeroth Generation of computing was characterized by mechanical and
electromechanical devices. These early machines laid the groundwork for modern
computing by introducing concepts like programmable computation, binary
arithmetic, and memory storage. Pioneers like Pascal, Leibniz, Babbage, and others
pushed the boundaries of technology despite being limited by the mechanical
engineering of their time. Their work directly influenced the design and architecture
of later, fully electronic computers.
The First Generation—Vacuum Tubes (1945–1955)
The stimulus for the electronic computer was World War II. During the early part of
the war, German submarines were wreaking havoc on British ships. Commands
were sent from the German admirals in Berlin to the submarines by radio, which the
British could, and did, intercept. The problem was that these messages were
encoded using a device called the ENIGMA, whose forerunner was designed by
amateur inventor and former U.S. president, Thomas Jefferson. Early in the war,
British intelligence managed to acquire an ENIGMA machine from Polish Intelligence,
which had stolen it from the Germans. However, to break a coded message, a huge
amount of computation was needed, and it was needed very soon after the
message was intercepted to be of any use. To decode these messages, the British
government set up a top secret laboratory that built an electronic computer called
the COLOSSUS. The famous British mathematician Alan Turing helped design this
machine. The COLOSSUS was operational in 1943, but since the British government
kept virtually every aspect of the project classified as a military secret for 30 years,
the COLOSSUS line was basically a dead end. It is worth noting only because it was
the world’s first electronic digital computer.

In addition to destroying Zuse’s
machines and stimulating the construction of the COLOSSUS, the war also affected
computing in the United States. The army needed range tables for aiming its heavy
artillery. It produced these tables by hiring hundreds of women to crank them out
using hand calculators (women were thought to be more accurate than men).
Nevertheless, the process was time consuming and errors often crept in. John
Mauchley, who knew of Atanasoff’s work as well as Stibbitz’, was aware that the
army was interested in mechanical calculators. Like many computer scientists after
him, he put together a grant proposal asking the army for funding to build an
electronic computer. The proposal was accepted in 1943, and Mauchley and his
graduate student, J. Presper Eckert, proceeded to build an electronic computer,
which they called the ENIAC (Electronic Numerical Integrator And Computer). It
consisted of 18,000 vacuum tubes and 1500 relays. The ENIAC weighed 30 tons and
consumed 140 kilowatts of power. Architecturally, the machine had 20 registers,
each capable of holding a 10-digit decimal number. (A decimal register is a very small
memory that can hold one number up to some maximum number of decimal digits,
somewhat like the odometer that keeps track of how far a car has traveled in its
lifetime.) The ENIAC was programmed by setting up 6000 multiposition switches
and connecting a multitude of sockets with a veritable forest of jumper cables. The
machine was not finished until 1946, too late to be of any use for its original
purpose. However, since the war was over, Mauchley and Eckert were allowed to
organize a summer school to describe their work to their scientific colleagues. That
summer school was the beginning of an explosion of interest in building large digital
computers.

After that historic summer school, many other researchers set out to
build electronic computers. The first one operational was the EDSAC (1949), built at
the University of Cambridge by Maurice Wilkes. Others included the JOHNNIAC at
the Rand Corporation, the ILLIAC at the University of Illinois, the MANIAC at Los
Alamos Laboratory, and the WEIZAC at the Weizmann Institute in Israel. Eckert and
Mauchley soon began working on a successor, the EDVAC (Electronic Discrete
Variable Automatic Computer). However, that project was fatally wounded when
they left the University of Pennsylvania to form a startup company, the Eckert-
Mauchley Computer Corporation, in Philadelphia (Silicon Valley had not yet been
invented). After a series of mergers, this company became the modern Unisys
Corporation.

As a legal aside, Eckert and Mauchley filed for a patent claiming they
invented the digital computer. In retrospect, this would not be a bad patent to own.
After years of litigation, the courts decided that the Eckert-Mauchley patent was
invalid and that John Atanasoff invented the digital computer, even though he never
patented it, effectively putting the invention in the public domain.

While Eckert and Mauchley were working on the EDVAC, one of the people involved
in the ENIAC project, John von Neumann, went to Princeton’s Institute of Advanced
Studies to build his own version of the EDVAC, the IAS machine. Von Neumann was a
genius in the same league as Leonardo Da Vinci. He spoke many languages, was an
expert in the physical sciences and mathematics, and had total recall of everything
he ever heard, saw, or read. He was able to quote verbatim from memory the text
of books he had read years earlier. At the time he became interested in computers,
he was already the most eminent mathematician in the world. It was soon apparent
to him that programming computers with huge numbers of switches and cables was
slow, tedious, and inflexible. He came to realize that the program could be
represented in digital form in the computer’s memory, along with the data. He also
saw that the clumsy serial decimal arithmetic used by the ENIAC, with each digit
represented by 10 vacuum tubes (1 on and 9 off) could be replaced by using parallel
binary arithmetic, something Atanasoff had realized years earlier. The basic design,
which he first described, is now known as a von Neumann machine. It was used in
the EDSAC, the first stored-program computer, and even now, more than half a
century later, is still the basis for nearly all digital computers. This design, and the
IAS machine, built in collaboration with Herman Goldstine, has had such an
enormous influence that it is worth describing briefly. Although Von Neumann’s
name is always attached to this design, Goldstine and others made major
contributions to it as well. A sketch of the architecture is given in Fig. 1-5.

The von Neumann machine had five basic parts: the memory, the arithmetic logic
unit, the control unit, and the input and output equipment. The memory consisted of
4096 words, a word holding 40 bits, each a 0 or a 1. Each word held either two 20-
bit instructions or a 40-bit signed integer. The instructions had 8 bits devoted to
telling the instruction type and 12 bits for specifying one of the 4096 memory
words. Together, the arithmetic logic unit and the control unit formed the ‘‘brain’’ of
the computer. In modern computers they are combined onto a single chip called the
CPU (Central Processing Unit). Inside the arithmetic logic unit was a special internal
40-bit register called the accumulator. A typical instruction added a word of memory
to the accumulator or stored the contents of the accumulator in memory. The
machine did not have floating-point arithmetic because von Neumann felt that any
competent mathematician ought to be able to keep track of the decimal point
(actually the binary point) in his or her head.
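The word layout just described can be made concrete with a small sketch of the packing arithmetic: each 40-bit word holds two 20-bit instructions, each an 8-bit opcode plus a 12-bit address (2^12 = 4096 words). The opcode values and the high/low ordering below are invented for illustration, not the actual IAS encoding.

```python
# Sketch of the instruction packing described above: each 40-bit IAS word
# holds two 20-bit instructions, each with an 8-bit opcode and a 12-bit
# memory address (2**12 = 4096 words). Opcode values are illustrative.

def pack_word(left, right):
    """Pack two (opcode, address) pairs into one 40-bit word."""
    def pack_instr(opcode, address):
        assert 0 <= opcode < 2**8 and 0 <= address < 2**12
        return (opcode << 12) | address          # 20 bits total
    return (pack_instr(*left) << 20) | pack_instr(*right)

def unpack_word(word):
    """Recover the two (opcode, address) pairs from a 40-bit word."""
    left, right = word >> 20, word & (2**20 - 1)
    return [(instr >> 12, instr & 0xFFF) for instr in (left, right)]

LOAD, ADD = 0x01, 0x02                           # illustrative opcodes
word = pack_word((LOAD, 100), (ADD, 101))
print(f"{word:010x}", unpack_word(word))         # 40-bit word, then the two
                                                 # instructions recovered from it
```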
At about the same time von Neumann was building the IAS machine, researchers at M.I.T. were also building a computer.
Unlike the IAS, ENIAC, and other machines of their type, which had long word lengths and
were intended for heavy number crunching, the M.I.T. machine, the Whirlwind I, had
a 16-bit word and was designed for real-time control. This project led to the
invention of the magnetic core memory by Jay Forrester, and then eventually to the
first commercial minicomputer.

While all this was going on, IBM was a small
company engaged in the business of producing card punches and mechanical card-
sorting machines. Although IBM had provided some of Aiken’s financing, it was not
terribly interested in computers until it produced the 701 in 1953, long after Eckert
and Mauchley’s company was number one in the commercial market with its
UNIVAC computer. The 701 had 2048 36-bit words, with two instructions per word. It
was the first in a series of scientific machines that came to dominate the industry
within a decade. Three years later came the 704, which initially had 4096 words of
core memory, 36-bit instructions, and a new innovation, floating-point hardware. In
1958, IBM began production of its last vacuum-tube machine, the 709, which was
basically a beefed-up 704.