1.1.3 The History of Artificial Intelligence

The gestation of artificial intelligence (1943-1955)
There were a number of early examples of work that can be characterized as AI, but it
was Alan Turing who first articulated a complete vision of AI in his 1950 article
"Computing Machinery and Intelligence." Therein, he introduced the Turing test,
machine learning, genetic algorithms, and reinforcement learning.
The birth of artificial intelligence (1956)
McCarthy convinced Minsky, Claude Shannon, and Nathaniel Rochester to help him
bring together U.S. researchers interested in automata theory, neural nets, and the study of
intelligence. They organized a two-month workshop at Dartmouth in the summer of 1956.
Perhaps the longest-lasting thing to come out of the workshop was an agreement to adopt
McCarthy's new name for the field: artificial intelligence.
Early enthusiasm, great expectations (1952-1969)
The early years of AI were full of successes, albeit in a limited way.
General Problem Solver (GPS) was a computer program created in 1957 by Herbert Simon and
Allen Newell as an attempt to build a universal problem-solving machine. The order in which the
program considered subgoals and possible actions was similar to the order in which humans
approached the same problems. Thus, GPS was probably the first program to embody the
"thinking humanly" approach.
At IBM, Nathaniel Rochester and his colleagues produced some of the first AI
programs. Herbert Gelernter (1959) constructed the Geometry Theorem Prover, which
was able to prove theorems that many students of mathematics would find quite tricky.
Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of
Technology (MIT). In 1963, McCarthy started the AI lab at Stanford.
Tom Evans's ANALOGY program (1968) solved geometric analogy problems that appear in IQ tests,
such as the one in Figure 1.1.

Figure 1.1 Tom Evans's ANALOGY program could solve geometric analogy problems
as shown.
A dose of reality (1966-1973)
From the beginning, AI researchers were not shy about making predictions of their
coming successes. The following statement by Herbert Simon in 1957 is often quoted:
"It is not my aim to surprise or shock you - but the simplest way I can summarize is to
say that there are now in the world machines that think, that learn and that create.
Moreover, their ability to do these things is going to increase rapidly until - in a
visible future - the range of problems they can handle will be coextensive with the
range to which the human mind has been applied."
Knowledge-based systems: The key to power? (1969-1979)
Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and also
the name of the computer-software expert system it produced. Its primary aim was to help organic
chemists identify unknown organic molecules by analyzing their mass spectra and using knowledge
of chemistry. It was developed at Stanford University by Edward Feigenbaum, Bruce Buchanan,
Joshua Lederberg, and Carl Djerassi.
AI becomes an industry (1980-present)
In 1981, the Japanese announced the "Fifth Generation" project, a 10-year plan to build
intelligent computers running Prolog. Overall, the AI industry boomed from a few million dollars in 1980
to billions of dollars in 1988.
The return of neural networks (1986-present)
Psychologists including David Rumelhart and Geoff Hinton continued the study of neural-net
models of memory.
AI becomes a science (1987-present)
In recent years, approaches based on hidden Markov models (HMMs) have come to dominate the
area. Speech technology and the related field of handwritten character recognition are already
making the transition to widespread industrial and consumer applications.
The Bayesian network formalism was invented to allow efficient representation of, and rigorous
reasoning with, uncertain knowledge.
The emergence of intelligent agents (1995-present)
One of the most important environments for intelligent agents is the Internet.