  • Introduction
  • Maturation of Artificial Intelligence
  • Birth of Artificial Intelligence
  • The Golden Years
  • The Past Four Seasons of AI
  • References


History of Artificial Intelligence

INTRODUCTION

The history of artificial intelligence (AI) began in antiquity, with myths, stories and
rumors of artificial beings endowed with intelligence or consciousness by master
craftsmen. The seeds of modern AI were planted by philosophers who attempted to
describe the process of human thinking as the mechanical manipulation of symbols.
This work culminated in the invention of the programmable digital computer in the
1940s, a machine based on the abstract essence of mathematical reasoning. This
device and the ideas behind it inspired a handful of scientists to begin seriously
discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus
of Dartmouth College, USA, during the summer of 1956 (Haenlein & Kaplan, 2019).
Those who attended would become the leaders of AI research for decades. Many of
them predicted that a machine as intelligent as a human being would exist in no
more than a generation, and they were given millions of dollars to make this vision
come true.
Eventually, it became obvious that commercial developers and researchers had
grossly underestimated the difficulty of the project. In 1974, in response to the
criticism of James Lighthill and ongoing pressure from the U.S. Congress,
the U.S. and British governments stopped funding undirected research into artificial
intelligence, and the difficult years that followed would later be known as an "AI
winter". Seven years later, a visionary initiative by the Japanese
government inspired other governments and industry to provide AI with billions of dollars,
but by the late 1980s investors became disillusioned and withdrew funding again
(Newquist, 1994).

MATURATION OF ARTIFICIAL INTELLIGENCE



• Year 1943: The first work now recognized as AI was done by Warren McCulloch
and Walter Pitts in 1943. They proposed a model of artificial neurons.
• Year 1949: Donald Hebb demonstrated an updating rule for modifying the
connection strength between neurons. His rule is now called Hebbian learning
(a minimal sketch of the rule follows this list).
• Year 1950: Alan Turing, an English mathematician and a pioneer of machine
learning, published "Computing Machinery and Intelligence", proposing a test of a
machine's ability to exhibit intelligent behavior equivalent to that of a human,
now called the Turing test.
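
Hebb's rule is often summarized as "neurons that fire together, wire together":
the strength of a connection grows in proportion to the joint activity of the
neurons it links. A minimal sketch of the rule in Python with NumPy follows; the
learning rate and the toy activation values are illustrative assumptions, not
part of Hebb's original formulation:

    import numpy as np

    def hebbian_update(w, x, y, eta=0.1):
        """Hebb's rule: strengthen connections between co-active neurons.

        w   : weight matrix (one row per output neuron, one column per input)
        x   : pre-synaptic (input) activations
        y   : post-synaptic (output) activations
        eta : learning rate (illustrative choice)
        """
        return w + eta * np.outer(y, x)

    # Toy example: two input neurons driving one output neuron.
    w = np.zeros((1, 2))
    x = np.array([1.0, 0.0])  # only the first input fires
    y = np.array([1.0])       # the output fires as well
    w = hebbian_update(w, x, y)
    print(w)  # the weight from the co-active input grows: [[0.1 0.]]

Only the weight between the two neurons that fired together is strengthened; the
connection from the silent input is left unchanged, which is exactly the behavior
Hebb described.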

BIRTH OF ARTIFICIAL INTELLIGENCE

• Year 1955: Allen Newell and Herbert A. Simon created the "first artificial
intelligence program", named the "Logic Theorist". This program
proved 38 of 52 mathematics theorems and found new and more elegant
proofs for some of them.
• Year 1956: The term "Artificial Intelligence" was first adopted by the American
computer scientist John McCarthy at the Dartmouth Conference, and for the first
time AI was coined as an academic field. At that time, high-level computer
languages such as FORTRAN, LISP, and COBOL were being invented, and
enthusiasm for AI was very high.

THE GOLDEN YEARS

• Year 1966: Researchers emphasized developing algorithms that could
solve mathematical problems. Joseph Weizenbaum created the first chatbot,
named ELIZA, in 1966.
• Year 1972: The first intelligent humanoid robot, named WABOT-1,
was built in Japan.

THE PAST FOUR SEASONS OF AI


AI Spring: The Birth of AI
Although it is difficult to pinpoint, the roots of AI can probably be traced back to the
1940s, specifically 1942, when the American science fiction writer Isaac Asimov
published his short story Runaround. The plot of Runaround—a story about a robot
developed by the engineers Gregory Powell and Mike Donovan—revolves around the
Three Laws of Robotics: (1) a robot may not injure a human being or, through
inaction, allow a human being to come to harm; (2) a robot must obey the orders
given to it by human beings except where such orders would conflict with the First
Law; and (3) a robot must protect its own existence as long as such protection does
not conflict with the First or Second Laws. Asimov’s work inspired generations of
scientists in the field of robotics, AI, and computer science—among others the
American cognitive scientist Marvin Minsky (who later co-founded the MIT AI
laboratory). At roughly the same time, but over 3,000 miles away, the English
mathematician Alan Turing worked on much less fictional issues and developed a
code breaking machine called The Bombe for the British government, with the
purpose of deciphering the Enigma code used by the German army in the Second
World War. The Bombe, which measured about 7 by 6 by 2 feet and weighed about
a ton, is generally considered the first working electro-mechanical computer.
The powerful way in which The Bombe was able to break the Enigma code, a task
previously impossible for even the best human mathematicians, made Turing wonder
about the intelligence of such machines.
In 1950, he published his seminal article "Computing Machinery and Intelligence",
where he described how to create intelligent machines and in particular how to test
their intelligence. This Turing Test is still considered today as a benchmark to identify
intelligence of an artificial system: if a human interacting with another human and a
machine is unable to distinguish the machine from the human, then the machine is
said to be intelligent. The term Artificial Intelligence was then officially coined about
six years later, when in 1956 Marvin Minsky and John McCarthy (a computer
scientist, later at Stanford) hosted the approximately eight-week-long Dartmouth Summer
Research Project on Artificial Intelligence (DSRPAI) at Dartmouth College in New
Hampshire. This workshop—which marks the beginning of the AI Spring and was
funded by the Rockefeller Foundation—brought together those who would later be
considered the founding fathers of AI. Participants included the computer scientist
Nathaniel Rochester, who later designed the IBM 701, the first commercial scientific
computer, and mathematician Claude Shannon, who founded information theory.
The objective of DSRPAI was to bring together researchers from various fields in order to
create a new research area aimed at building machines able to simulate human
intelligence.

AI Summer and Winter: The Ups and Downs of AI


The Dartmouth Conference was followed by a period of nearly two decades that saw
significant success in the field of AI. An early example is the famous ELIZA computer
program, created between 1964 and 1966 by Joseph Weizenbaum at MIT. ELIZA
was a natural language processing tool able to simulate a conversation with a
human and one of the first programs capable of attempting to pass the
aforementioned Turing Test. Another success story of the early days of AI was the
General Problem Solver program—developed by Nobel Prize winner Herbert Simon
and RAND Corporation scientists Cliff Shaw and Allen Newell—which was able to
automatically solve certain kinds of simple problems, such as the Towers of Hanoi.
As a result of these inspiring success stories, substantial funding was given to AI
research, leading to more and more projects. In 1970, Marvin Minsky gave an
interview to Life Magazine in which he stated that a machine with the general
intelligence of an average human being could be developed within three to eight
years.
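
ELIZA's apparent conversational skill came from simple keyword matching against
scripted response templates rather than from any understanding of the input. A
minimal sketch of that pattern-response style in Python follows; the patterns and
canned replies here are invented for illustration and are not Weizenbaum's
original script:

    import re

    # Illustrative keyword -> response templates in the spirit of ELIZA's
    # Rogerian-therapist script; the first matching rule wins.
    RULES = [
        (re.compile(r"\bI am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.*)", re.I), "Tell me more about feeling {0}."),
        (re.compile(r"\bmy (\w+)", re.I), "Your {0}? Go on."),
    ]
    DEFAULT_REPLY = "Please, go on."

    def respond(utterance: str) -> str:
        """Return the template of the first rule whose pattern matches."""
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(*match.groups())
        return DEFAULT_REPLY

    print(respond("I am worried about my exams"))
    # -> Why do you say you are worried about my exams?

Because every reply is a transformation of the user's own words, the program can
sustain a conversation without modeling its meaning, which is both why ELIZA
impressed its users and why it falls short of genuine intelligence.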
Yet, unfortunately, this was not the case. Only three years later, in 1973, the U.S.
Congress started to strongly criticize the high spending on AI research. In the same
year, the British mathematician James Lighthill published a report commissioned by
the British Science Research Council in which he questioned the optimistic outlook
given by AI researchers. Lighthill stated that machines would only ever reach the
level of an “experienced amateur” in games such as chess and that common-sense
reasoning would always be beyond their abilities. In response, the British
government ended support for AI research in all except three universities
(Edinburgh, Sussex, and Essex) and the U.S. government soon followed the British
example. This period marked the start of the AI Winter, and although the Japanese
government began to heavily fund AI research in the 1980s, with the U.S. DARPA
responding with a funding increase as well, no further advances were made in the
following years.

AI Fall: The Harvest


One reason for the initial lack of progress in the field of AI, and for the fact that
reality fell sharply short of expectations, lies in the specific way in which early systems
such as ELIZA and the General Problem Solver tried to replicate human intelligence.
Specifically, they were all Expert Systems, that is, collections of rules which assume
that human intelligence can be formalized and reconstructed in a top-down approach
as a series of "if-then" statements. Expert Systems can perform impressively well in
areas that lend themselves to such formalization. For example, IBM’s Deep Blue
chess playing program, which in 1997 was able to beat the world champion Garry
Kasparov—and in the process proved wrong one of the statements made by James
Lighthill nearly 25 years earlier—is such an Expert System. Deep Blue was
reportedly able to process 200 million possible moves per second and to determine
the optimal next move looking 20 moves ahead through the use of a method called
tree search.
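
At its core, the tree search Deep Blue performed is an adversarial look-ahead:
enumerate the legal moves, recursively evaluate the positions they lead to, and
choose the move whose worst-case outcome is best. A minimal minimax sketch in
Python over an abstract game interface follows; the GameState methods named
below are assumptions for illustration, and Deep Blue's actual search was a
heavily optimized alpha-beta variant running on custom chess hardware:

    def minimax(state, depth, maximizing):
        """Best achievable evaluation looking `depth` moves ahead.

        `state` is assumed to expose three illustrative methods:
          legal_moves() -> iterable of moves
          apply(move)   -> the state after playing the move
          evaluate()    -> static score from the maximizing player's view
        """
        if depth == 0 or not state.legal_moves():
            return state.evaluate()
        if maximizing:
            # Our turn: pick the move that maximizes the guaranteed score.
            return max(minimax(state.apply(m), depth - 1, False)
                       for m in state.legal_moves())
        # Opponent's turn: assume they minimize our score.
        return min(minimax(state.apply(m), depth - 1, True)
                   for m in state.legal_moves())

    def best_move(state, depth):
        """Choose the move with the best worst-case outcome."""
        return max(state.legal_moves(),
                   key=lambda m: minimax(state.apply(m), depth - 1, False))

Searching 20 moves ahead with plain minimax would be infeasible; practical
engines prune large parts of the tree (for example with alpha-beta cutoffs) so
that only promising branches are explored in depth.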
However, Expert Systems perform poorly in areas that do not lend themselves to
such formalization. For example, an Expert System cannot be easily trained to
recognize faces or even to distinguish between a picture showing a muffin and one
showing a Chihuahua. For such tasks it is necessary that a system is able to
interpret external data correctly, to learn from such data, and to use those learnings
to achieve specific goals and tasks through flexible adaptation—characteristics that
define AI. Since Expert Systems do not possess these characteristics, they are,
technically speaking, not true AI. Statistical methods for achieving true AI had been
discussed as early as the 1940s, when the Canadian psychologist Donald Hebb
developed a theory of learning known as Hebbian Learning that replicates the
processes of neurons in the human brain. This led to the creation of research on
Artificial Neural Networks. Yet, this work stagnated in 1969 when Marvin Minsky and
Seymour Papert showed that computers did not have sufficient processing power to
handle the work required by such artificial neural networks.
Artificial neural networks made a comeback in the form of Deep Learning when in
2016 AlphaGo, a program developed by Google DeepMind, was able to beat the
world champion Lee Sedol in the board game Go. Go is substantially more complex
than chess (e.g., at the opening there are 20 possible moves in chess but 361 in
Go), and it was long believed that computers would never be able to beat humans
in this game. AlphaGo achieved its high performance by using deep artificial
neural networks, an approach known as Deep Learning. Today, artificial neural
networks and Deep Learning form the basis of most applications we know under the
label of AI. They are the basis of the image recognition algorithms used by
Facebook and of the speech recognition algorithms that fuel smart speakers and
self-driving cars. This harvest of the fruits of past statistical advances is the
period of AI Fall, which we find ourselves in today (Kaplan & Haenlein, 2019).
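
Underneath these deep learning applications is the same building block Hebb
theorized about: layers of artificial neurons, each computing a weighted sum of
its inputs followed by a nonlinearity. A minimal forward pass through a
two-layer network in Python with NumPy follows; the layer sizes and random
weights are illustrative, since real networks learn their weights from data:

    import numpy as np

    rng = np.random.default_rng(0)

    def relu(z):
        # Nonlinearity: pass positive values through, zero out the rest.
        return np.maximum(0.0, z)

    # Illustrative two-layer network: 4 inputs -> 8 hidden units -> 2 outputs.
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
    W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

    def forward(x):
        """One forward pass: weighted sums plus nonlinearities, layer by layer."""
        h = relu(W1 @ x + b1)  # hidden-layer activations
        return W2 @ h + b2     # raw output scores

    print(forward(np.array([1.0, 0.5, -0.3, 2.0])))

"Deep" networks simply stack many such layers; training adjusts the weights so
that the outputs become useful for tasks such as recognizing images or
transcribing speech.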

REFERENCES
i. Haenlein, Michael; Kaplan, Andreas (2019). "A Brief History of Artificial Intelligence: On the Past,
Present, and Future of Artificial Intelligence". California Management Review, 61(4): 5–14. Available at:
https://www.researchgate.net/publication/334539401_A_Brief_History_of_Artificial_Intelligence_On_the_Past_Present_and_Future_of_Artificial_Intelligence

ii. Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the
interpretations, illustrations, and implications of artificial intelligence". Business Horizons, 62(1): 15–25.
doi:10.1016/j.bushor.2018.08.004. S2CID 158433736.

iii. Newquist, HP (1994), The Brain Makers: Genius, Ego, And Greed in the Quest For Machines That
Think, New York: Macmillan/SAMS, ISBN 978-0-9885937-1-8, OCLC 313139906

iv. [Link]
