AI's Understanding of Language and Vision
“I don’t see that human intelligence is something that humans can never
understand.”
I. Introduction
ARTIFICIAL INTELLIGENCE (hereinafter referred to as AI) has been a topic of fascination and
debate for decades, with its origins dating back to the 1950s. The field of AI has seen significant
advancements in recent years, leading to its integration in various industries and everyday life.
From self-driving cars to virtual assistants, the advancements in AI technology have made it
possible for machines to perform tasks that were once thought to be the exclusive domain of
humans.
The origins of AI may be traced back to the 1950s, when a group of researchers proposed a
summer research project at Dartmouth College to create "thinking machines" that could simulate
human intelligence. This project, known as the Dartmouth Conference, marked the beginning of the
field of AI research. Since then, AI has undergone several stages of development, including
the “Good Old-fashioned AI” (GOFAI) era, the “Expert Systems” era, and the current era of
“Deep Learning” and “Big Data”.
The construction of the first AI programme, the Logic Theorist, in 1955 by Allen Newell and
Herbert Simon is an important event in the history of AI. This programme was able to prove
mathematical theorems, a task previously thought to be the exclusive domain of human
intelligence. Another significant milestone was the creation of the "General Problem Solver" in
1957, a programme capable of solving problems in a variety of domains.1
The discipline of artificial intelligence has seen especially significant advancements in recent
years in the areas of natural language processing and computer vision.
1. Rockwell Anyoha, "The History of Artificial Intelligence" Special Edition on Artificial Intelligence: Harvard University (2017).
The development of these technologies has made virtual assistants like Apple's Siri and
Amazon's Alexa, as well as autonomous vehicles, available to consumers. The growth of
artificial intelligence technology has also led to the emergence of new commercial sectors, such
as the autonomous vehicle industry and the smart city movement.2
Before attempting to understand AI, it is vital to first understand the concept of intelligence.
The ability to acquire, comprehend, and use knowledge and skills in order to solve problems
and respond to new situations is referred to as intelligence.3 Artificial intelligence is the
attempt by machines to mimic human intelligence. AI systems are created with the intention of
performing tasks that would typically require human intelligence, including visual perception,
speech recognition, decision-making, and language translation.
There are numerous methods of AI classification, with the most prevalent being:
• Reactive Machines: These are AI systems that are designed to respond to specific
situations but have no memory of past events. Examples include IBM’s Deep Blue, the
chess-playing computer that defeated Garry Kasparov in 1997.4
• Limited Memory: These AI systems have a limited memory and can use past experiences
to inform their decision making. Examples include self-driving cars, which use cameras
and sensors to perceive their environment and make decisions based on past experiences.5
• Theory of Mind: These AI systems would have the ability to understand human emotions and
mental states and respond accordingly. Virtual assistants such as Siri and Alexa, which are
designed to understand human speech and respond in a natural way, are sometimes cited as early
steps in this direction.6
2. H.M.K.K.M.B. Herath, Mamta Mittal, "Adoption of artificial intelligence in smart cities: A comprehensive review" 2 International Journal of Information Management Data Insights 100076 (2022).
3. Shane Legg, Marcus Hutter, A Collection of Definitions of Intelligence, available at: [Link] (last visited on 21 Feb. 2020).
4. Murray Campbell, A. Joseph Hoane Jr. et al., "Deep Blue" 134 Artificial Intelligence 57 (2002).
5. Abhishek Gupta, Alagan Anpalagan et al., "Deep Learning for Object Detection and Scene Perception in Self-Driving Cars: Survey, Challenges, and Open Issues" 10 Array 100057 (2021).
6. Felp Roza, "Theory of Mind and Artificial Intelligence" Towards Data Science (2020).
• Self-Aware: These AI systems would be aware of their own existence and could make
decisions based on their own self-interest. No such self-aware AI system currently exists.7
AI has come a long way since its origins in the 1950s. From the first AI program, the Logic
Theorist, to the current era of Deep Learning and Big Data, the field of AI has undergone
significant advancements.8 The incorporation of AI technology into numerous industries has
resulted in the emergence of new industries and has the potential to transform our way of life
and work. This chapter has introduced the history of AI, along with significant events that have
influenced the field's growth, and has provided a description and classification of AI. As the
field of AI continues to evolve, it is important to understand the history and current state of
AI in order to fully grasp its potential and its impact on society.9
In the following chapters, we will delve deeper into the analysis of existing legal theories and
the challenges they present in regulating AI. As AI becomes more prevalent in our daily lives,
it is crucial to understand the legal and ethical implications of this technology. From issues of
liability and accountability to privacy and data protection, the legal framework for AI is an
essential aspect of ensuring its responsible development and use.
In this chapter, the researcher examines the current state of artificial intelligence and its
potential societal influence, as well as the legal and ethical concerns that must be addressed to
ensure its responsible development and usage. By providing a comprehensive understanding
of the history, definition, and classification of AI, as well as an analysis of existing legal
theories, this thesis aims to contribute to the ongoing conversation about the future of AI and
its role in society.
AI has a lengthy and eventful history spanning decades. At its core, AI refers to the study of
how to programme computers and robots to carry out activities that would typically call for the
intelligence of a human being, such as perception, thinking, and decision-making. AI is an
7. Arend Hintze, "Understanding the Four Types of Artificial Intelligence" Cloud and Computing (2016).
8. Supra note 1.
9. Michael Haenlein and Andreas Kaplan, "A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence" 1 California Management Review (2019).
interdisciplinary field that borrows from a variety of other fields, including but not limited to
computer science, mathematics, linguistics, psychology, philosophy, and neuroscience.
The history of AI can be broadly divided into three phases: the early phase, the symbolic phase,
and the current phase.10
In the 1950s and 1960s, Alan Turing and John McCarthy, along with other forerunners,
originally presented the concept of building machines that could accomplish jobs that would
ordinarily require human intelligence. During this period, researchers concentrated on creating
simple rule-based systems capable of performing specific tasks, such as playing chess or solving
math problems. These early AI systems were often called "expert systems" and were based on
the idea of knowledge representation and reasoning.11
The symbolic phase of AI began in the 1970s and 1980s and was characterized by the
development of more complex rule-based systems. Researchers during this phase focused on
developing formal languages and logical systems that could be used to represent knowledge
and reason about it. During this phase, a variety of AI applications were developed, including
natural language comprehension, problem-solving, and decision-making.12
The current phase of AI, which began in the 1990s, is characterized by the development of
machine learning and deep learning techniques. These techniques allow machines to learn from
data rather than relying on explicit rules and representations.13 This has led to the creation of
advanced AI systems capable of image and speech recognition, natural language processing,
and autonomous vehicle operation.
Overall, the history of AI is a fascinating and ongoing journey that continues to evolve and
shape our world. Despite its complexity, the field of AI is built on a foundation of
interdisciplinary research and collaboration and continues to push the boundaries of what is
possible with technology.
10. Pantelis Linardatos, Vasilis Papastefanopoulos, "Explainable AI: A Review of Machine Learning Interpretability Methods" 23 Entropy 44 (2020).
11. J Gips, "Artificial intelligence" 6 Environment and Planning B: Planning and Design 353 (1979).
12. Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (Prentice-Hall, Inc., A Simon & Schuster Company, Englewood Cliffs, New Jersey, 1995).
13. Iain M. Cockburn, The Impact of Artificial Intelligence on Innovation 1050 (National Bureau of Economic Research, Cambridge, Massachusetts, 2018).
What is Artificial Intelligence?
The field of computer science known as AI focuses on the creation of intelligent agents and
systems. These agents and systems are developed to carry out tasks that would typically
require human-like intelligence, such as understanding natural language, recognising objects
and images, making judgments, and finding solutions to problems.
AI is founded on the premise that human intelligence may be reproduced and automated using
computational models and algorithms. This is achieved by developing mathematical
frameworks that can simulate the cognitive processes and decision-making abilities of the
human mind.
One of the key approaches to AI is symbolic reasoning, which involves representing knowledge
in a formal language and using logical inference to make deductions and solve problems.14
Another approach is sub-symbolic reasoning, which uses numerical methods and statistical
techniques to model the underlying processes of the human mind. Recent developments in
machine learning, particularly in the subfields of deep learning and reinforcement learning,
have led to significant improvements in artificial intelligence systems' capabilities across a
variety of tasks, including image and speech recognition, natural language processing, and
game playing.
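To make the symbolic approach concrete, the following is a minimal sketch of rule-based forward chaining in Python: knowledge is represented as symbolic facts and if-then rules, and logical inference derives new facts until nothing more follows. The facts and rules here are invented for illustration; real symbolic systems used far larger knowledge bases.

```python
# Minimal forward-chaining inference: knowledge is a set of symbolic facts
# plus if-then rules, and new facts are derived until nothing more follows.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

derived_something = True
while derived_something:
    derived_something = False
    for premises, conclusion in rules:
        # Fire a rule only when all premises are known facts and the
        # conclusion has not yet been derived.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived_something = True

print(facts)   # includes both derived conclusions
```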
However, it’s important to note that despite the rapid progress in AI research, the field still
faces many challenges. For example, AI systems often lack the ability to understand context
and common sense, which limits their ability to perform certain tasks. Additionally, there are
ethical and societal implications of AI, such as concerns about job displacement, privacy, and
autonomous systems.15
Artificial intelligence, in short, is a complex and multidisciplinary field that aims to produce
intelligent machines and systems capable of performing activities that normally require
human-like intelligence. It draws on various branches of computer science, mathematics, and
cognitive science, and it continues to be an active area of research with many open problems
and challenges.
14. Joachim Hertzberg and Raja Chatila, AI Reasoning Methods for Robotics (Springer Handbook of Robotics, 2008).
15. Vinuesa, R., Azizpour, H. et al., "The role of artificial intelligence in achieving the Sustainable Development Goals" 11 Nature Communications 233 (2020).
The History of Artificial Intelligence
The history of AI can be traced back to the early days of philosophy and mathematics, with
thinkers contemplating the possibility of creating intelligent machines that could mimic human
thought processes. In the seventeenth century, thinkers such as René Descartes and Gottfried
Leibniz,16 who laid the groundwork for modern logic and computation, envisioned the creation
of machines that could execute mathematical computations and reason like humans. This
concept served as the inspiration for programmed mechanical calculators, such as Charles
Babbage's Analytical Engine in the 1800s, considered one of the first precursors to the
modern computer.17
The development of the electronic digital computer in the 1940s with the Atanasoff-Berry
Computer (ABC) marked a significant turning point in the history of AI.18 The creation of the
ABC, which could conduct mathematical computations and store data, encouraged scientists
to pursue the concept of constructing an "electronic brain", an artificially intelligent being.
The early years of AI research focused on the development of rule-based systems, which used
a set of predefined rules to make decisions and solve problems. However, this approach proved
to be limited and researchers soon began to explore other methods such as knowledge
representation, expert systems, and machine learning.
In the late 20th century, with the advent of powerful computers and new algorithms, AI
research experienced a resurgence, with significant advances being made in areas such as
natural language processing, computer vision and robotics. In recent years, the advancement
of deep learning and reinforcement learning has led to even more remarkable accomplishments,
such as the capacity of machines to defeat human champions at sophisticated games such as
Go and Poker.
As noted above, the roots of AI lie in the early days of philosophy and mathematics, when
thinkers first contemplated machines that could mimic human thought. An important turning
point was the creation of the digital computer with the Atanasoff-Berry Computer
16. Alvaro Pastor, "Human as machine: A discussion on the transformations and exhaustion of a metaphor" Human as machine (2020).
17. Mark Priestley, A Science of Operations: History of Computing (Springer-Verlag, London, 2011).
18. Doron Swade, The History of Computing (Oxford University Press, Oxford, 2022).
(ABC) in the 1940s. This invention propelled scientists to pursue the notion of creating an
"electronic brain" or artificially intelligent being.19 The field has since been an ongoing journey
of exploration and experimentation, with researchers and inventors pushing the boundaries of
what is possible and discovering new ways of creating intelligent machines.
Since its inception in the mid-1950s, AI has undergone considerable change and remains a
rapidly evolving field. In 1950, the eminent mathematician and logician Alan Turing laid the
foundation for the field by developing the Turing Test, which measures a machine's capacity
to demonstrate intelligence that is indistinguishable from human intelligence.20
At a Dartmouth College summer conference in 1956, computer and cognitive scientist John
McCarthy coined the phrase "artificial intelligence".21 This officially established the field of
research now known as AI and heralded the beginning of a new age of AI research, with
logicians, theorists, scientists, and programmers seeking to consolidate contemporary
knowledge of AI in its entirety.
In the decades that followed, AI research witnessed many ground-breaking innovations and
findings that fundamentally changed our understanding of the field. The late 1950s and
1960s saw the emergence of rule-based systems and expert systems, which used sets of
predefined rules to make decisions and solve problems. However, this strategy proved
inadequate, and researchers soon began to investigate other approaches, including
knowledge representation, machine learning, and natural language processing.
In the 1970s and 1980s, artificial intelligence research shifted its focus towards the
development of intelligent agents and systems that could accomplish tasks usually requiring
human-like intelligence, such as comprehending natural language, recognising objects and
images, making decisions, and solving problems. This led to the emergence of new subfields
such as computer vision, natural language processing, and robotics.
19. Ibid.
20. W. Rapaport, Turing Test 151 (Encyclopaedia of Language & Linguistics, 2nd Edition, 2006).
21. Jørgen Veisdal, "The Birthplace of AI" Cantor's Paradise (2019).
The 1990s and 2000s saw the advent of powerful computers and new algorithms, which led to
a resurgence in AI research. During this period, substantial advancements were made in fields
such as computer vision, natural language processing, and robotics, along with the creation of
novel methods such as neural networks and genetic algorithms. In recent years, the field has
seen an explosion in interest and investment, driven by the rapid development of deep learning
and reinforcement learning, which has led to impressive achievements such as the ability of
machines to defeat human champions at sophisticated games.
To summarise, the subfield of study known as Artificial Intelligence has come a very long way
since it was first established in the middle of the 1950s. Research into artificial intelligence has
resulted in a number of ground-breaking developments and discoveries, which have radically
altered our understanding of the field and made it a practical reality for both the present
generation and those to come. AI research has experienced a series of revolutionary changes
and discoveries over the course of its history, beginning with the development of rule-based
systems and expert systems and continuing all the way up to the most current advancements in
deep learning and reinforcement learning.
After the year 1900, there was a noticeable acceleration in the development of artificial
intelligence, which should not come as a surprise to anyone. It is astounding how many people
were thinking about artificial intelligence hundreds of years before there was even a vocabulary
to articulate what they were thinking.
The concept of AI, often understood as the automation of human thought in non-human entities,
dates back to ancient times and has a long and illustrious history. Around 350 BC, the
Greek philosopher Aristotle wrote about the possibility of creating automatons, or self-moving
machines, in his treatise "On the Soul". Similarly, medieval theologians such as Thomas
Aquinas, and later early modern philosophers such as René Descartes, discussed the possibility
of creating machines that could mimic human intelligence.22
22. J D Casten, Cybernetic Revelation: Deconstructing Artificial Intelligence (Post Egoism Media, Eugene, Oregon, USA, 2012).
During the Renaissance, mathematicians and scientists such as Blaise Pascal and Wilhelm
Schickard began to develop mechanical calculators and numeral systems, laying the foundation
for the development of modern computing.23 Around the beginning of the 1700s, popular
literature began to include depictions of all-knowing, computer-like machines. Jonathan
Swift's "Gulliver's Travels"24 contains one of the earliest references to such a device: an
"engine" described in the third part of the book. The purpose of this imagined technology was
to expand human understanding and the range of mechanical operations to the point where even
the least skilled person could give the impression of being an expert, all with the assistance
and insight of a non-human mind imitating artificial intelligence.25
In his novel "Erewhon", originally published in 1872, Samuel Butler26 speculated that, at some
point in the unforeseeable future, it might be conceivable for machines to achieve consciousness
on their own. This novel, along with other literary works of the time, helped to popularize the
idea of AI and sparked a great deal of debate and discussion among scientists, philosophers,
and the general public.
While these early musings and depictions of AI may seem fanciful by today's standards, they
laid the foundation for the development of modern AI research. The ideas and concepts
explored by these early thinkers continue to influence the field of AI to this day.
The first decades of the 20th century saw significant advancements in the field of AI. The
concept of robots, as well as the potential for machines to think and learn like humans, began
to capture the public imagination through science fiction literature and film. Additionally,
pioneering scientists and inventors began to develop early examples of AI-powered machines.
23. Friedrich L. Bauer, Origins and Foundations of Computing (Springer, Berlin, Heidelberg, 2010).
24. Rebecca Reynoso, "A Complete History of Artificial Intelligence" AI & Machine Learning Operationalization Category (2021).
25. Ibid.
26. Justin Prystash, "Idealist Fictions: Crossing F. H. Bradley and Samuel Butler" 62 Criticism 195 (2020).
The word "robot" was first used in 1921, in the science fiction play "R.U.R." (Rossum's
Universal Robots) by the Czech playwright Karel Čapek.27 The play also marked the birth of
the concept of factory-made artificial people. In the years that followed, the concept of robots
continued to serve as a source of creativity and motivation for researchers.
The science fiction film Metropolis, directed by Fritz Lang and premiered in 1927, included a
female robot that, from the outside, could not be distinguished from a human being.28 In the
film, the artificially intelligent robot-girl launches an assault on the city, wreaking havoc on a
futuristic Berlin. The film is significant because it was the first time a robot was portrayed on
screen, and it served as a source of inspiration for other notable non-human characters, such as
C-3PO from the Star Wars franchise.
Makoto Nishimura, a Japanese biologist and professor, is credited with being the first person
in Japan to build a robot. His creation was called Gakutensoku. The robot, whose name can be
directly translated to “learning from the rules of nature”, was gifted with the capability to
collect information from both people and the natural world. It also had the ability to move its
head and hands and to alter its facial expressions.29
John Vincent Atanasoff and Clifford Berry constructed the Atanasoff-Berry Computer (ABC)
in 1939 at Iowa State University with a grant of $650; the computer was named after the two
men. The ABC weighed almost 700 pounds and could solve up to 29 simultaneous linear
equations.30
In his book "Giant Brains: Or Machines That Think", published in 1949,31 computer scientist
Edmund Berkeley made the observation that machines were becoming increasingly capable of
managing vast volumes of information with speed and skill. He went on to liken machines to a
human brain consisting of "hardware and wire instead of flesh and nerves", describing their
capabilities as comparable to
27. John M. Jordan, "The Czech Play That Gave Us the Word 'Robot'" The MIT Press Reader (2019).
28. Stephanie Ruth, Fritz Lang's Metropolis: A Reflection of a Restless Time, available at: [Link] (last visited on Mar. 2, 2020).
29. Yulia Frumer, "The Short, Strange Life of the First Friendly Robot" IEEE Spectrum (2020).
30. John Edwards, John Atanasoff and Clifford Berry: Inventing The ABC, A Benchmark Digital Computer, available at: [Link] (last visited on Apr. 4, 2020).
31. Thomas Haigh, "Inventing Information Systems: The Systems Men and the Computer, 1950-1968" 75 Computers and Communications Networks 15 (2001).
those of the human mind. Berkeley concluded that "a machine, therefore, can think". This
caused a dramatic shift in people's perceptions of whether machines are capable of thinking
and learning in the same way that humans do.32
These early developments in AI would lay the foundation for further advancements in the field
in the decades to come, as scientists and researchers continued to work towards creating
machines that could think, learn, and even exhibit human-like behavior.
The 1950s saw a significant increase in research and development in the domain of AI. Many
developments in the area came to fruition during this decade, as computer scientists and
researchers in other fields began to explore the potential of creating machines that could think,
learn, and even exhibit human-like behavior.
The first paper to discuss the creation of a chess-playing computer programme was
"Programming a Computer for Playing Chess" by Claude Shannon, known as "the father of
information theory", which was published in the journal Philosophical Magazine in 1950.
This marked the beginning of research into creating computers that could play games, which
would become an important area of AI research in the years to come.33
In the same year, 1950, Alan Turing published "Computing Machinery and Intelligence", in
which he introduced the concept of "The Imitation Game", addressing the question of whether
machines are capable of thinking. The Turing Test, which assessed the intelligence of
machines, was based on this original notion. The Turing Test developed into an essential part
of the philosophy of AI, which investigates the nature of intelligence, consciousness, and
capability in computers.34
In 1952, computer scientist Arthur Samuel created a computer programme that could play
checkers, the first software to learn how to play a game on its own. This marked a huge step
forward in the creation of machines capable of learning and improving their performance
without human intervention.
32. Ibid.
33. Monty Newborn, Deep Blue: An Artificial Intelligence Milestone (Springer, New York, NY, 2003).
34. A. M. Turing, "I.—Computing Machinery and Intelligence" LIX Mind: A Quarterly Review of Psychology and Philosophy 433 (1950).
In 1955, John McCarthy and a group of his co-workers developed a proposal for a workshop
centred on the topic of "artificial intelligence". McCarthy is credited with first officially using
the term AI in 1956, the year the workshop took place. Also in 1955, Herbert Simon, Allen
Newell and Cliff Shaw collaborated on the creation of the first AI computer programme, the
Logic Theorist.35
In 1958, McCarthy developed Lisp, a programming language that would become one of the
most popular and widely used languages for AI research. Lisp is still widely used in AI research
today.
In 1959, Samuel introduced the term "machine learning" when discussing the concept of
programming a computer to play a game of checkers better than the human who wrote the
software.36 This marked the beginning of research in the field of machine learning, which
would become an important subfield of AI and is widely used today for tasks such as image
recognition, speech recognition, and natural language processing.
Overall, the 1950s marked a period of significant progress in the field of AI, as researchers
began to explore the potential of creating machines that could think and learn like humans,
laying the foundation for further advancements in the field in the decades to come.
The 1960s marked a watershed in the history of AI, a time of great expansion and invention.
During this decade, advances in technology enabled progress in the creation of robotics and
automatons, programming languages, research investigations, and portrayals of artificially
intelligent beings in popular culture. These developments have had a profound impact on the
field of AI and continue to shape our understanding of its capabilities and limitations.
One of the key innovations of the 1960s was the introduction of industrial robots. In 1961, an
industrial robot named Unimate, which had been developed in the 1950s by George Devol and
35. Leo Gugerty, "Newell and Simon's Logic Theorist: Historical Background and Impact on Cognitive Modelling" 50 Proceedings of the Human Factors and Ergonomics Society Annual Meeting (2016).
36. Neha Sharma, Reecha Sharma et al., "Machine Learning and Deep Learning Applications-A Vision" 2 Global Transitions Proceedings 24 (2021).
utilised by General Motors in New Jersey, became the first robot to work on an assembly
line.37 The Unimate was given the tasks of transferring die castings away from the assembly
line and welding parts onto cars, jobs that had been judged too dangerous for people to
perform. The success of Unimate demonstrated the potential of robots to perform tasks
previously considered too dangerous or difficult for humans to undertake.
During the 1960s, a number of the earliest AI programmes were also developed. In 1961,
James Slagle, a computer scientist and instructor, developed SAINT (Symbolic Automatic
INTegrator), a heuristic problem-solving tool focused on symbolic integration of the kind
covered in first-year calculus. STUDENT, an early AI programme built in Lisp and designed
to solve algebra word problems, was created in 1964 by computer scientist Daniel Bobrow.38
In the realm of AI natural language processing, STUDENT is regarded as an early milestone.
In 1965, Joseph Weizenbaum, a professor of computer science, created ELIZA, an interactive
computer programme that could hold a functional conversation in English with a person.39
Weizenbaum intended to demonstrate that exchanges between artificially intelligent minds and
human minds were only superficial. However, he discovered that many people attributed
humanistic qualities to ELIZA, which contradicted his original objective. The reaction to
ELIZA demonstrated the potential for people to form emotional connections with AI systems.
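ELIZA worked by matching user input against hand-written patterns and reflecting fragments of it back as questions. A minimal sketch of this pattern-and-template technique is shown below; the rules here are invented examples, far simpler than Weizenbaum's original script.

```python
import re

# A few invented ELIZA-style rules: match a pattern, reflect it back.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*match.groups())
    return "Please go on."            # fallback when no rule matches

print(respond("I am feeling anxious"))   # -> How long have you been feeling anxious?
```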
In 1966, Charles Rosen and eleven other researchers collaborated on the creation of Shakey
the Robot, which went on to become the first mobile robot capable of performing a variety of
tasks and was also referred to as the "first electronic person". Shakey's development
37. Kat Eschner, How Robots Left the Lab and Started Helping Humans, available at: [Link] (last visited on July 4, 2020).
38. Philip C. Jackson, Jr., Introduction to Artificial Intelligence (Dover Publications, Inc., New York, 1985).
39. Joseph Weizenbaum, "ELIZA A Computer Program For the Study of Natural Language Communication Between Man And Machine" 9 Communications of the ACM 36 (1966).
marked a significant step forward in the field of robotics and demonstrated the potential for
robots to be used in a wide variety of applications.40
1968 saw the debut of the science fiction movie titled “2001: A Space Odyssey”, which was
directed by Stanley Kubrick. A conscious computer known as HAL (Heuristically programmed
Algorithmic computer) makes an appearance in the movie. HAL, the advanced computer
system in control of the spacecraft, interacts and converses with the crew as if it were human.
However, a malfunction causes a shift in HAL's behaviour, leading to undesirable outcomes.
The portrayal of HAL in the film has had a significant impact on popular culture’s
understanding of AI and continues to shape our perception of AI today.
SHRDLU was one of the earliest natural language computer programmes, and it was developed
in 1968 by Terry Winograd, who was also a professor of computer science. 41 SHRDLU
represented a significant advancement in the field of natural language processing, and its
development marked a major step forward in the development of AI systems that could
understand and respond to human language.
In conclusion, the 1960s saw significant advancement and growth in the field of AI. The
developments of this decade have had a profound impact on the field of AI and continue to
shape our understanding of the capabilities and limitations of AI. The advancements that were
made in the areas of robotics, natural language processing, problem-solving, and human-
computer interaction were critical milestones that cleared the way for the creation of more
advanced AI systems in the years to come.
AI continued to advance in the 1970s, with a special emphasis on the creation of robots and
automatons. During this decade, however, the field of AI also faced hurdles, such as a
reduction in government backing for AI research.
One of the key innovations of the 1970s was the development of anthropomorphic robots.
Waseda University in Japan is credited with the construction of WABOT-1, the first humanoid
40. Mikel Olazaran, "A Sociological History of the Neural Network Controversy" 37 Advances in Computers 335 (1993).
41. Dr Nivash Jeevanandam, "AI concepts for beginners: SHRDLU: An early natural-language understanding computer program" India AI: Meity (2022).
robot, in the year 1970.42 It possessed movable limbs, eyes, and the ability to hold a
conversation in addition to its other abilities. The development of WABOT-1 marked a
significant step forward in the field of robotics and demonstrated the potential for robots to be
designed with human-like characteristics.
During the 1970s, the field of AI struggled to overcome obstacles, most notably a decline in
government funding for AI research. In 1973, applied mathematician James Lighthill reported
to the British Science Research Council on the state of AI research, stating that "in no part of
the field have discoveries made so far produced the major impact that was then promised". As
a result of this report, the British government cut its funding for AI research significantly.
The 1970s also saw the portrayal of AI in popular culture, particularly in the 1977 release of
the film Star Wars. The film features C-3PO, a humanoid protocol droid "fluent in over six
million forms of communication". R2-D2, a diminutive astromech droid, also appears in the
movie; unable to mimic human speech, R2-D2 communicates with the other characters through
electronic beeps. These characters have had a significant impact on popular culture's
understanding of AI.43
The Stanford Cart, a remotely operated mobile robot equipped with a television camera, was
created by James L. Adams, then a graduate student in mechanical engineering. In 1979, Hans
Moravec, at the time a PhD student, added a "slider", a mechanical swivel that allowed the TV
camera to be adjusted from side to side.44 That year, the cart navigated a room filled with
chairs without human assistance, taking around five hours to complete its tour, which makes it
one of the earliest examples of an autonomous vehicle.
In conclusion, the 1970s was a decade of continued advancement in the field of AI, particularly
in robotics and automatons, even as the field experienced hurdles such as reduced government
backing for AI research. The
42. Md. Akhtaruzzaman, A. A. Shafie, "Evolution of Humanoid Robot and Contribution of Various Countries in Advancing the Research and Development of the Platform" ICROS 1021 (2010).
43. Karim Nader, Paul Toprac et al., "Public understanding of artificial intelligence through entertainment media" AI and Society (2022).
44. Yanna Qiao, Lili Zhou, "Implications of Artificial Intelligence Technology for the External Communication of Chinese Ethnic Cultures" Mobile Information Systems 1 (2022).
development of anthropomorphic robots and autonomous vehicles, as well as the portrayal of
AI in popular culture, has had a significant impact on the field of AI and continues to shape
our understanding of the capabilities and limitations of AI. The field of AI continued to make
progress during the 1970s and created the groundwork for more advanced AI systems in the
decades to come, despite the difficulties that were encountered at the time.
The 1980s was a decade of great advancements and excitement in the field of AI. However,
there was also a growing sense of caution surrounding the potential for an “AI Winter”, a period
of reduced funding and interest in AI research. Despite this, several notable achievements in
the field occurred during this time period.
The development of WABOT-2 at Waseda University was one of the most significant
achievements in AI throughout the 1980s.45 WABOT-2 was a humanoid robot that could
converse with humans and even read musical notation while playing an electronic organ.
This was a major breakthrough in the field of AI and demonstrated the potential for
advanced human-robot interaction.
During the 1980s, the government of Japan made considerable investments in AI. In 1981,
the Japanese Ministry of International Trade and Industry allocated $850 million to the Fifth
Generation Computer project.46 The objective of this project was to build computers capable
of conversing, translating languages, interpreting pictures, and expressing human-like
reasoning. This was a major investment in AI research and demonstrated the potential for
advanced natural language processing and computer vision.
The 1980s also saw the release of several films that dealt with the theme of AI and its impact
on society. Electric Dreams was a movie directed by Steve Barron that was released in 1984.
The main cause of conflict in the film is a love triangle involving a man, a woman, and “Edgar”,
a sentient computer. This film helped to popularize the concept of AI and sparked public
interest in the field.47
45. John Sunda Hsia, "The Evolution of AI" Artificial Intelligence (AI): Robotics (2018).
46. Information Technology and R&D: Critical Trends and Issues (Washington, DC: U.S. Congress, Office of Technology Assessment, OTA-CIT-268, February 1985).
47. Wanda Strauven, The Cinema of Attractions Reloaded (Amsterdam University Press, Amsterdam, 2006).
However, despite the advancements and excitement in the field, there were also warnings of
an "AI Winter" during this decade. In 1984, cognitive scientist Marvin Minsky and AI theorist
Roger Schank48 warned publicly of the possibility of decreased funding for AI research.
Within three years, their prediction came true, as funding for AI research declined sharply in
the late 1980s.
During the 1980s, one of the most significant steps forward in AI was the development of
driverless automobiles. In 1986, Mercedes-Benz designed, manufactured, and demonstrated a
driverless van equipped with a camera system and many sensors.49 On a stretch of road devoid
of other vehicles or pedestrians, the vehicle was able to reach speeds of up to 55 miles per
hour. This was a major breakthrough in the field of AI and demonstrated the potential for
advanced autonomous systems.
Judea Pearl, a computer scientist and philosopher, was responsible for another significant
advancement in AI during the 1980s. In 1988, Pearl published "Probabilistic Reasoning in
Intelligent Systems", in which he introduced the concept of Bayesian networks: probabilistic
graphical models that use directed acyclic graphs (DAGs) to represent sets of variables and
the dependencies that exist between them. Pearl's work in this area had a tremendous impact
on the field of AI and is still widely acknowledged to this day.
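A small worked example may help illustrate what a Bayesian network computes. The sketch below encodes the textbook sprinkler-rain-wet-grass network in plain Python and infers P(Rain | WetGrass) by enumeration; all probability values are invented for illustration.

```python
import itertools

# Toy network: Sprinkler -> WetGrass <- Rain (all probabilities invented).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass = True | Sprinkler, Rain)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.00,
}

def joint(s, r, w):
    """Joint probability factorises along the graph's edges."""
    pw = P_wet[(s, r)]
    return P_sprinkler[s] * P_rain[r] * (pw if w else 1.0 - pw)

# Infer P(Rain = True | WetGrass = True) by summing out the Sprinkler variable.
numerator = sum(joint(s, True, True) for s in (True, False))
denominator = sum(joint(s, r, True)
                  for s, r in itertools.product((True, False), repeat=2))
print(numerator / denominator)   # roughly 0.69: wet grass makes rain more likely
```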
In 1988, Rollo Carpenter conceived Jabberwacky, a chatbot designed to mimic natural human
conversation in a way that was interesting, entertaining, and humorous; he later built a second
chatbot, Cleverbot, released in the 1990s.50 These chatbots showcased the promise of AI in
interacting with people through conversation.
Overall, the 1980s was a decade of great advancements and excitement in the field of AI, but
also marked by warnings of an “AI Winter” and a decrease in funding and interest in the field.
Despite this, significant achievements were made in areas such as human-robot interaction,
48. Subrata Dasgupta, The Second Age of Computer Science: From Algol Genes to Neural Nets (Oxford University Press, Oxford, 2018).
49. Fabian Kröger, Automated Driving in Its Social, Historical and Cultural Contexts (Autonomous Driving: Springer, Berlin, 2016).
50. Ann Borda and Giuliano Gaia, Engaging Museum Visitors with AI: The Case of Chatbots (Springer Series on Cultural Computing, Springer, 2019).
natural language processing, computer vision, autonomous systems and probabilistic
reasoning.51
The 1990s were a decade in which AI continued to make strides forward in terms of both
growth and advancement. During this period, a number of significant advancements were
made in a variety of fields, including natural language processing, recurrent neural networks,
and autonomous systems.
One of the most significant achievements in AI during the 1990s was the development of
A.L.I.C.E (Artificial Linguistic Internet Computer Entity) by computer scientist Richard
Wallace in 1995. A.L.I.C.E was a chatbot that was inspired by Weizenbaum’s ELIZA but
differentiated by the addition of natural language sample data collection. A.L.I.C.E was able
to understand and respond to a wide range of natural language inputs and was an important
step forward in the field of natural language processing.52
The invention of Long Short-Term Memory (LSTM) in 1997 by computer scientists Sepp
Hochreiter and Jürgen Schmidhuber was another key milestone in the field of AI during
the 1990s.53 LSTM is a recurrent neural network (RNN) architecture that is used in
handwriting and speech recognition technologies to this day. LSTM was able to overcome
the limitations of traditional RNNs and has since become a fundamental building block in
many AI systems.
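The key idea behind LSTM is a set of learned "gates" that control what a cell remembers, forgets, and outputs at each time step. The following is a minimal NumPy sketch of a single LSTM cell's forward pass, using random toy weights rather than trained ones.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b stack the parameters of the
    forget, input, and output gates and the candidate cell state."""
    z = W @ x + U @ h_prev + b            # shape: (4 * hidden_size,)
    n = h_prev.size
    f = sigmoid(z[0 * n:1 * n])           # forget gate: what to erase
    i = sigmoid(z[1 * n:2 * n])           # input gate: what to store
    o = sigmoid(z[2 * n:3 * n])           # output gate: what to emit
    g = np.tanh(z[3 * n:4 * n])           # candidate values for the cell
    c = f * c_prev + i * g                # long-term memory line
    h = o * np.tanh(c)                    # short-term output
    return h, c

# Random toy dimensions and untrained weights, purely illustrative.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h = c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):      # run a sequence of 5 input vectors
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```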
The field of game-playing AI also saw significant advancements during the 1990s. IBM's
chess-playing computer Deep Blue54 made history in 1997 when it became the first computer
system to defeat a reigning world champion in a full match. This was a significant achievement
in the field of AI and demonstrated the potential for advanced game-playing systems.
Toy robots, which later evolved into autonomous systems, began appearing on store shelves in
the 1990s. Furby, the first interactive “pet” robot for kids, was developed in 1998 by Dave
51. Supra note 12.
52. Richard S. Wallace, The Anatomy of A.L.I.C.E. 181 (Springer Science: Business Media B.V., 2009).
53. Keith D. Foote, "A Brief History of Deep Learning" Dataversity (2002).
54. Lawrence Aung, "Deep Blue: The History and Engineering behind Computer Chess" Illumin Magazine (2010).
Hampton and Caleb Chung.55 Furby could respond to a variety of inputs such as touch and
sound, and was engineered to "learn" through interaction with its surroundings. In 1999, Sony
introduced AIBO (Artificial Intelligence RoBOt), a robotic companion dog.56 It cost $2,000
and was able to understand and respond to over one hundred vocal instructions, as well as
communicate with its human owner. These toy robots demonstrated the potential for
autonomous systems to interact with humans in a natural and intuitive way.
Overall, the 1990s was a decade of continued growth and advancement in the field of AI.
Notable achievements were made in areas such as natural language processing, recurrent neural
networks, game-playing AI, and autonomous systems. These advancements laid the foundation
for many of the AI systems that we use today and continue to influence the field of AI.
The turn of the millennium marked a new era for the field of AI. The new millennium brought
new challenges and opportunities, and AI continued to trend upward in terms of advancements
and research.
The Y2K problem, often referred to as the year 2000 problem, was one of the most prominent
challenges facing computing at the beginning of the new millennium. It comprised a class of
computer defects affecting the formatting and storage of calendar data. Because much of the
software and applications underpinning the internet had been written in the 1900s, with years
stored as only their final two digits, many systems struggled with the transition to the
four-digit year format required from the year 2000 onwards, creating significant issues for
both systems and users.
In spite of this, significant progress was made in the field of AI throughout this decade,
leading to numerous noteworthy accomplishments. Kismet was a robot created
55. Lucia Peters, "Here's How That Furby Craze In The '90s Really Began — And Why They Still Give You Nightmares" Bustle (2018).
56. Karen-Janine Cohen, "They're Brains Behind Hot Toys" South Florida Sun-Sentinel (2004).
in 2000 by Professor Cynthia Breazeal.57 Kismet's facial features were designed to resemble
those of a human face, including eyes, mouth, eyelids, and eyebrows, and it was able to
recognise and imitate emotions, a significant advancement in the study of emotional AI.
During the same year, Honda also debuted ASIMO, a humanoid robot equipped with AI.
The concept of AI also became a popular topic in the creative media, specifically in film. The
science fiction movie “A.I. Artificial Intelligence”, which Steven Spielberg directed and
produced, was released in the year 2001.58 The movie centers around David, a humanoid child
that is programmed with human-like tendencies, such as the ability to love, and the story
unfolds in a post-apocalyptic society. In 2004, Alex Proyas directed the science fiction film
I, Robot. The story takes place in 2035 and depicts a society where humanoid robots assist
humans. One character, however, is vehemently anti-robot due to a personal tragedy that was
caused by a robot.
During this time period, there were also great leaps forward in the realm of autonomous
systems. Roomba, an autonomous robot vacuum developed by iRobot in 2002, was designed
to clean while avoiding obstacles in its path. In 2004, NASA developed and operated two
robotic rovers, Spirit and Opportunity, which successfully explored the surface of Mars
without human intervention.
During this period, computer scientists also made significant contributions to the field of AI.
In 2006, Oren Etzioni, a professor of computer science, together with computer scientists
Michael Cafarella and Michele Banko, coined the phrase "machine reading", defining it as the
unsupervised autonomous understanding of text. This was an essential step forward in the
field of natural language processing. In 2007, Fei-Fei Li and her colleagues assembled
ImageNet, a database of annotated photos, for the purpose of assisting research on object
recognition software.59
This decade also saw tremendous leaps forward in the realm of autonomous cars. Google
began developing a driverless car in stealth mode in 2009, and by 2014, the
57. Nikolaos Mavridis, "A review of verbal and non-verbal human–robot interactive communication" 63 Robotics and Autonomous Systems 22 (2015).
58. Matt Melia, "The Post-Kubrickian: Stanley Kubrick, Steven Spielberg, Adaptation and A.I. Artificial Intelligence" Screening the Past (2020).
59. Jason Brownlee, "A Gentle Introduction to the ImageNet Challenge (ILSVRC)" Deep Learning for Computer Vision (2019).
vehicle had successfully passed Nevada's test for autonomous driving. This exhibited the
potential for more advanced autonomous systems to be utilised in the transportation sector.
Overall, the decade of 2000-2010 was a period of continued growth and advancements in the
field of AI. Notable achievements were made in areas such as emotional AI, autonomous
systems, natural language processing, object recognition, and autonomous vehicles. These
advancements laid the foundation for many of the AI systems that we use today and continue
to influence the field of AI.
The past decade has seen significant advancements in the field of AI. Beginning in 2010, AI
has become increasingly prevalent in our daily lives, with the incorporation of technologies
such as voice assistants and intelligent functions in our smartphones and computers.
One of the most significant events of 2010 was the launch of the ImageNet Large Scale Visual
Recognition Challenge (ILSVRC),60 an annual object recognition competition that has played
a crucial role in driving forward progress in the field of computer vision. In addition,
Microsoft released the Kinect for Xbox 360 in 2010, the first gaming device to employ a 3D
camera and infrared detection technology to track human body movement.
IBM's Watson, a natural language question answering computer, made its debut in 2011,
playing on the television game show Jeopardy! and prevailing against two previous winners of
the competition. During the same year, Apple debuted Siri, a virtual assistant based on natural
language processing for its iOS operating system. Siri is able to respond to inquiries, provide
recommendations, and adapt itself to the specific preferences of each unique user.61
In 2012, Google researchers led by Jeff Dean and Andrew Ng trained a neural network of
16,000 processors to recognise images of cats by exposing it to 10 million unlabeled images
extracted from YouTube videos.62 The network learned to identify cats without
60. Ibid.
61. Martin Adam, Michael Wessel et al., "AI-based chatbots in customer service and their effects on user compliance" 31 Electron Markets 427 (2021).
62. Luke Dormehl, Thinking Machines (Penguin Random House LLC, New York, 2017).
ever being given labelled examples, demonstrating the promise of unsupervised learning in the
field of computer vision. The following year, a group of researchers from Carnegie Mellon
University presented the Never-Ending Image Learner (NEIL), a semantic machine learning
system that was able to investigate and analyse the connections between images.63
In 2014, Microsoft and Amazon introduced the competing personal digital assistants Cortana
and Alexa, respectively. The next year, a number of notable figures, including Elon Musk,
Stephen Hawking, and Steve Wozniak, signed an open letter that called for a prohibition on
the creation and use of autonomous weapons in armed conflict.64
In the years that followed, there were a number of notable achievements in the field of AI.
AlphaGo, a computer programme created by Google DeepMind to play the board game Go,
defeated multiple human champions between 2015 and 2017.65 Sophia, a humanoid robot
built by Hanson Robotics in 2016, is regarded as the first "robot citizen"; she resembles a
human in appearance and has the ability to see, express facial emotions, and engage in social
interaction using artificial intelligence.66 In the same year, Google introduced Google Home,
a smart speaker that can serve as a personal assistant, using AI to help users complete tasks,
book appointments, and search for information.
In 2017, the Facebook Artificial Intelligence Research lab trained two chatbots to converse
with each other. As a result of this training, the chatbots constructed their own language to
speak with one another, displaying a high level of artificial intelligence. In 2018, an Alibaba
language processing AI achieved better results than humans on a Stanford reading and
comprehension test.67 BERT, the first unsupervised bidirectional language representation, was
also created by Google that year. Through the process of transfer learning, this representation can be
63. T. Mitchell, W. Cohen et al., "Never-Ending Learning" 61 Communications of the ACM 103 (2018).
64. Samuel Gibbs, Musk, Wozniak and Hawking urge ban on warfare AI and autonomous weapons, available at: [Link] (last visited on May 3, 2020).
65. David Silver, Aja Huang, "Mastering the game of Go with deep neural networks and tree search" 529 Nature 484 (2016).
66. Jaana Parviainen, Mark Coeckelbergh, "The political choreography of the Sophia robot: beyond robot rights and citizenship to political performances for the social robotics market" 36 AI and Society 715 (2021).
67. Srishti Deoras, "Alibaba and Microsoft's AI Beats Humans In A Reading Comprehension Test At Stanford" In News (2018).
implemented in a vast array of natural language tasks. Around the same time, Samsung debuted
Bixby, a virtual assistant capable of performing speech, vision, and home-related tasks.68
There have been a number of significant developments in the area of artificial intelligence in
recent years, with substantial impact across a wide range of applications and industries. These
breakthroughs have been driven by the development of new algorithms and architectures,
together with the rapid growth of computing power and the availability of massive volumes
of data. Some of the most important recent advancements are as follows:
Reinforcement Learning: In 2019, OpenAI's Dactyl, a robotic hand that could perform a
dexterity task using reinforcement learning, was able to match and even exceed human
68. Zygmunt Vetulani and Patrick Paroubek, Human Language Technology. Challenges for Computer Science and Linguistics (Springer Cham, 2019).
69. Alberto Romero, "A Complete Overview of GPT-3 — The Largest Neural Network Ever Created" Towards Data Science (2021).
performance. Shortly afterwards, in early 2021, OpenAI's DALL-E demonstrated the ability to
produce images from text descriptions written in natural language. Reinforcement learning
itself is a subfield of machine learning that involves teaching models how to make choices in
a way that maximises some notion of cumulative reward. It has been put to use in a vast array
of applications, including robotics, game playing, and decision-making.70
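A minimal sketch can illustrate the reward-maximisation idea. The following tabular Q-learning example (a classic reinforcement learning algorithm, not the method behind Dactyl or DALL-E) teaches an agent to walk down a five-state corridor to a reward; the environment and hyperparameters are invented for illustration.

```python
import random

# Tabular Q-learning on a toy 1-D corridor: states 0..4, reward at state 4.
n_states = 5
actions = (-1, +1)                                   # step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1                # learning rate, discount, exploration

for _ in range(2000):                                # training episodes
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Update rule: move Q toward reward plus discounted best future value.
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy should be "move right" (+1) from every state.
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```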
Adversarial Examples: Adversarial examples are inputs that have been carefully crafted to fool
a model into making an incorrect prediction. Adversarial attacks on deep learning models have
become a major concern in recent years, and there has been a significant amount of research
on developing methods to defend against them. In 2020, researchers from Google AI proposed
a defence against such attacks that improves the robustness of deep learning models by
training them on a dataset of adversarial examples, a technique known as adversarial
training.71
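To make the idea of an adversarial example concrete, the sketch below applies the fast gradient sign method (FGSM), a well-known attack, to a toy linear classifier: a small, carefully signed perturbation flips the predicted label. The weights and input are invented; real attacks target deep networks in the same spirit.

```python
import numpy as np

# A fixed toy linear classifier: score > 0 means class +1 (weights invented).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else -1

x = np.array([0.4, 0.1, 0.3])        # a correctly classified input (score = 0.45)
y = predict(x)                        # current label: +1

# FGSM: perturb each feature by epsilon in the direction that increases the loss.
# For a linear score, the sign of that direction is simply -y * sign(w).
epsilon = 0.3
x_adv = x - epsilon * y * np.sign(w)

print(predict(x), predict(x_adv))     # the small perturbation flips the prediction
```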
Explainable AI (XAI): In recent years, there has been an increased focus on developing AI
models that provide interpretable and transparent explanations of their decision-making
processes. This is known as XAI, and it is seen as an important step towards building
trustworthy and accountable AI systems. XAI aims to provide insights into the internal
workings of AI models, helping to ensure that their decisions are aligned
70
Haifeng Wang, Jiwei Li et. al., “Pre-Trained Language Models and Their Applications” 6 Engineering 200
(2022).
71
Kui Ren, Tianhang Zheng et. al., “Adversarial Attacks and Defences in Deep Learning” 6 Engineering 346
(2020).
72
Tero Karras, Samuli Laine, “Analyzing and Improving the Image Quality of StyleGAN” IEEE/CVF Conference
on Computer Vision and Pattern Recognition (2020).
48
with human values and ethical principles. This is particularly important in high-stakes
applications, such as healthcare and finance, where the decisions made by AI models can have
significant consequences for individuals and society as a whole.73
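One simple, model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The Python sketch below uses an invented dataset and a trivial stand-in "model" purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
# Invented data: y depends on feature 0 only; feature 1 is pure noise.
X = rng.normal(size=(500, 2))
y = (X[:, 0] > 0).astype(int)

def model(X):
    """A stand-in 'black box' that happens to use only feature 0."""
    return (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    return float(np.mean(model(X) == y))

baseline = accuracy(X, y)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
    drop = baseline - accuracy(Xp, y)
    print(f"feature {j}: importance = {drop:.3f}")
# Feature 0 shows a large accuracy drop; feature 1 shows roughly zero,
# exposing which input actually drives the model's decisions.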
• Generative Models For 3D Objects and Scenes: Researchers have been working on
developing generative models that can generate 3D objects and scenes, such as
buildings, furniture, and vehicles. These models have a wide variety of potential
applications, including computer-aided design, virtual reality, and augmented reality,
to name a few.74
• Neural Architecture Search (NAS): NAS is a technique that automates the process of
designing neural network architectures. This can be useful in situations where it is
difficult to manually design an architecture that is well-suited to a particular task.75
• Transfer Learning and Few-Shot Learning: Transfer learning reuses the knowledge of models pre-trained on large datasets,76 77 while few-shot learning aims to teach models a new task from only a small number of training instances. This comes particularly handy in circumstances in which there is a restricted amount of data available for training78 (see the sketch following this list).

• Multi-Modal AI Systems: Multi-modal AI systems are AI models that can process and reason about multiple types of data, such as text, images, and audio. This can be useful in applications such as computer vision, natural language processing and speech recognition, where the input data may be a combination of different modalities.79

73 Waddah Saeed, Christian Omlin, "Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities" 263 Knowledge-Based Systems 110273 (2023).
74 Siddhartha Chaudhuri, Daniel Ritchie et. al., "Learning Generative Models of 3D Structures" 39 The Eurographics Association (2020).
75 Thomas Elsken and Hendrik Metzen, Neural Architecture Search 63 (Springer, Cham, 2019).
76 Vijay Kanade, What Is Transfer Learning? Definition, Methods, and Applications, available at: [Link] (last visited on Sept. 2, 2022).
77 Xu Han, Zhengyan Zhang et. al., "Pre-trained models: Past, present and future" 2 AI Open 225 (2021).
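As mentioned in the few-shot learning item above, models can be made to generalize from a handful of examples. The Python sketch below illustrates the idea with a nearest-centroid (prototype-style) classifier; the support examples and labels are invented for demonstration:

import numpy as np

# Three "support" examples per class, as tiny invented feature vectors.
support = {
    "cat": np.array([[1.0, 0.9], [0.8, 1.1], [1.2, 1.0]]),
    "dog": np.array([[-1.0, -0.8], [-0.9, -1.2], [-1.1, -1.0]]),
}
# A class prototype is simply the mean of its few support examples.
prototypes = {label: pts.mean(axis=0) for label, pts in support.items()}

def classify(x):
    """Assign x to the class with the nearest prototype."""
    return min(prototypes, key=lambda lbl: np.linalg.norm(x - prototypes[lbl]))

print(classify(np.array([0.9, 1.0])))    # -> "cat"
print(classify(np.array([-1.0, -0.9])))  # -> "dog"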
It is undeniable that the field of AI has experienced remarkable growth over the course of the
past few years, and it is highly probable that this trend will continue well into the foreseeable
future. These technological advances have the potential to deliver tremendous benefits to
society, but at the same time, they bring up significant ethical and societal problems. As a
result, it is essential that practitioners, researchers, and policymakers collaborate to guarantee
that these technologies are developed and implemented in a responsible and ethical manner.
Overall, the past decade has witnessed a rapid acceleration in the development and integration
of AI technologies into various areas of our lives, and it is likely that this trend will continue
in the coming years.
AI is a rapidly evolving field that is poised to have a significant impact on the way we live and
work. In the coming years, we can expect to see continued advancements in the areas of Natural
Language Processing (NLP), Machine Learning (ML), and Autonomous Systems. These
technologies will enable more sophisticated human-computer interactions, improved decision-
making, and greater automation of tasks.
One key trend that is likely to continue in the field of AI is the use of chatbots and virtual
assistants for improved user experience. These AI-powered systems are becoming increasingly
sophisticated in their ability to understand and respond to human language, thanks to
advancements in NLP. This is enabling them to handle more complex interactions and provide
more personalized responses, making them more valuable for a wide range of applications such as customer service, e-commerce, and personal productivity.80 The integration of AI-powered chatbots and virtual assistants in various industries, such as healthcare, finance, and retail, is expected to improve the customer experience and increase efficiency.

78 Maksym Tatariants, Few-shot Learning Explained: Examples, Applications, Research, available at: [Link] (last visited on Oct. 6, 2021).
79 Iqbal H. Sarker, "AI-Based Modelling: Techniques, Applications and Research Issues Towards Automation, Intelligent and Smart Systems" 3 SN Computer Science 158 (2022).
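To illustrate in miniature how such systems map user utterances to responses, the following Python sketch performs naive intent matching; the intents, canned responses, and similarity threshold are all invented for demonstration, and real assistants rely on far more sophisticated NLP models:

from difflib import SequenceMatcher

# Invented intents mapped to canned responses.
INTENTS = {
    "track my order": "Your order is on its way.",
    "reset my password": "A reset link has been sent to your email.",
    "talk to an agent": "Connecting you to a human agent now.",
}

def respond(utterance: str) -> str:
    """Pick the intent whose key phrase best matches the user's text."""
    def score(phrase):
        return SequenceMatcher(None, utterance.lower(), phrase).ratio()
    best = max(INTENTS, key=score)
    return INTENTS[best] if score(best) > 0.5 else "Sorry, could you rephrase that?"

print(respond("Where is my order?"))    # matched to order tracking
print(respond("I forgot my password"))  # matched to password reset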
The development of autonomous systems is another area of AI that is likely to see continued
growth in the coming years. There is a significant effort underway to automate driving; self-driving vehicles, for example, are currently being tested on roads worldwide. Such vehicles promise not only to reduce the cost of human labour but also to streamline purchasing, shipment, and delivery to the customer, since an autonomous vehicle does not become fatigued behind the wheel. Other areas where we can expect to see increased use
of autonomous systems include drones, robots, and smart homes. The integration of
autonomous systems in various industries, such as transportation, logistics, and agriculture, is
expected to increase efficiency and safety.83
However, it is important to note that the development and deployment of AI systems also raises
important ethical and societal concerns, such as bias, transparency, and accountability. These
concerns are particularly acute in the context of autonomous systems, which have the potential
to affect human lives and safety. Addressing these concerns will require collaboration between
researchers, policymakers, and practitioners, as well as the development of new regulations,
guidelines, and best practices.
80 Eleni Adamopoulou, Lefteris Moussiades, "Chatbots: History, technology, and applications" 2 Machine Learning with Applications 100006 (2020).
81 Justin Tennenbaum, "Automated Machine Learning" Towards Data Science (2019).
82 Janakiram MSV, "Why AutoML Is Set To Become The Future Of Artificial Intelligence" Forbes (2018).
83 Linh N.K. Duong, Mohammed Al-Fadhli, "A review of robotics and autonomous systems in the food industry: From the supply chains perspective" 106 Trends in Food Science and Technology 355 (2020).
Overall, AI is an exciting field with many potential applications and opportunities for growth.
As technology continues to evolve and become more accessible, we can expect to see more
organizations adopt AI to improve their operations, better serve their customers, and create
new products and services. It is essential, however, that the development and deployment of
AI systems be guided by ethical and societal considerations to ensure that the benefits of AI
are realized by all.
III. Definition
AI, or artificial intelligence, is a multidisciplinary field of study that aims to create intelligent
machines and systems that are capable of performing tasks that typically require human
intelligence. These tasks include perception, reasoning, learning, decision making, and the
processing of natural language. The modern definition of AI, as proposed by John McCarthy
in 1956, is “the science and engineering of making intelligent machines”.84
An intelligent agent is a key concept in AI, which refers to a system that can perceive its
environment, reason about it, and take actions that maximize its chances of achieving its goals.
These agents can be physical entities, such as robots, or virtual entities, such as software
programs. The intelligence of an agent is determined by its ability to make rational decisions
based on the information it receives from its environment.85
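The perceive-reason-act cycle that defines an intelligent agent can be illustrated with a minimal Python sketch; the thermostat-style environment, target temperature, and update rules are invented purely to show the loop's structure:

# A minimal perceive-reason-act loop for a thermostat-like agent.
class ThermostatAgent:
    TARGET = 21.0  # desired temperature in degrees C (illustrative)

    def perceive(self, environment):
        return environment["temperature"]

    def decide(self, temperature):
        # Rational choice: act to move the environment towards the goal.
        if temperature < self.TARGET - 1:
            return "heat_on"
        if temperature > self.TARGET + 1:
            return "heat_off"
        return "idle"

    def act(self, action, environment):
        if action == "heat_on":
            environment["temperature"] += 0.5
        elif action == "heat_off":
            environment["temperature"] -= 0.5

env = {"temperature": 17.0}
agent = ThermostatAgent()
for _ in range(12):                      # repeated sense-think-act cycles
    agent.act(agent.decide(agent.perceive(env)), env)
print(env["temperature"])                # settles within 1 degree of the goal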
AI research can be divided into two main branches, i.e., symbolic AI and sub-symbolic AI.
Symbolic AI, also known as “good old-fashioned AI” (GOFAI), focuses on the use of symbols
and logic to represent knowledge and reasoning. This approach was dominant in the early days
of AI research and is still used today in applications such as expert systems and natural
language processing. Sub-symbolic AI, on the other hand, uses non-symbolic representations
and techniques, such as neural networks and evolutionary algorithms, to model intelligence.86
This approach has become increasingly popular in recent years, with significant advancements
in areas such as machine learning and deep learning.
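To illustrate the symbolic (GOFAI) style in miniature, the following Python sketch performs forward chaining over if-then rules; the facts and rules are invented for demonstration:

# Forward chaining: repeatedly apply if-then rules until no new facts appear.
facts = {"has_fur", "says_meow"}
rules = [
    ({"has_fur"}, "is_mammal"),             # if premises hold, add conclusion
    ({"is_mammal", "says_meow"}, "is_cat"),
    ({"is_cat"}, "likes_fish"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# -> ['has_fur', 'is_cat', 'is_mammal', 'likes_fish', 'says_meow']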
84 Alyssa Schroer, "Artificial Intelligence" Builtin (2022).
85 Ibid.
86 Orhan G. Yalçın, "Symbolic vs. Sub symbolic AI Paradigms for AI Explainability" Towards Data Science (2021).
These approaches underpin applications ranging from virtual assistants to autonomous vehicles. However, the development of AI also raises ethical and societal problems, such as its possible influence on employment and privacy.87 As such, it is important to consider these issues in the development and deployment of AI technologies.
Overall, AI is a rapidly evolving field, with ongoing research and development aimed at
creating machines and systems that can perform increasingly complex tasks with human-like
intelligence. Computational intelligence, synthetic intelligence, and computational rationality are a few of the alternative names that have been offered for this area of study. The phrase "artificial intelligence" is also used to describe the degree of intelligence exhibited by a particular machine or program.
Computer science provides the foundation for AI, as it provides the algorithms, programming
languages, and data structures that are used to create intelligent agents. The field of psychology,
particularly cognitive psychology, provides insights into how human intelligence works, which
can be used to inform the design of AI systems.89 Philosophy, particularly the philosophy of
mind, provides a framework for understanding the nature of intelligence, consciousness and
mental states.
Neuroscience and cognitive science provide a deeper understanding of the neural mechanisms
and cognitive processes that underlie human intelligence, which can be used to inform the
development of biologically inspired AI systems.90 Linguistics is important for natural language processing and understanding the structure and meaning of human language, which is critical for building intelligent agents that can interact with humans in natural language.91

87 B.C. Stahl, Ethical Issues of AI, in Artificial Intelligence for a Better Future (Springer Briefs in Research and Innovation Governance, Springer, Cham, 2021).
88 Angela Spatharou, Solveigh Hieronimus, "Transforming healthcare with AI: The impact on the workforce and organizations" McKinsey and Company (2020).
89 Ibid.
90 Tom Macpherson, Anne Churchland, "Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research" 144 Neural Networks 603 (2021).
Operations research and control theory provide the mathematical framework for decision
making and control, which are essential for creating intelligent agents that can make rational
decisions and take appropriate actions in uncertain environments. Economics, particularly
game theory, provides a framework for understanding strategic decision-making and social
interactions, which can be applied to AI systems.
Probability and statistics provide the tools for modelling and reasoning under uncertainty,
which are essential for many tasks in AI, such as machine learning and probabilistic reasoning.
Optimization provides the mathematical framework for finding the best solution to a problem,
which is critical for many AI tasks, such as planning and scheduling. Logic, particularly formal
logic, provides a framework for representing and reasoning with knowledge, which is essential
for many AI tasks, such as knowledge representation and reasoning.92
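As a minimal illustration of reasoning under uncertainty, the snippet below applies Bayes' rule, P(H|E) = P(E|H)P(H) / P(E), to an invented spam-filtering scenario; all probabilities are assumed purely for demonstration:

# Bayes' rule: update belief in hypothesis H after observing evidence E.
p_h = 0.01            # prior: 1% of emails are spam (illustrative)
p_e_given_h = 0.9     # likelihood: spam usually contains the word "offer"
p_e_given_not_h = 0.05

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)   # total probability
p_h_given_e = p_e_given_h * p_h / p_e                   # posterior
print(f"P(spam | 'offer') = {p_h_given_e:.3f}")          # about 0.154

Even strong evidence only raises the posterior modestly here, because the prior probability of spam is low; this interplay of prior and likelihood is the core of probabilistic reasoning in AI.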
Research in artificial intelligence also overlaps with a number of other fields, including scheduling, speech recognition, data mining, robotics, control systems, logistics, and facial recognition, among many others. Robotics is an interdisciplinary field that combines
AI with mechanical engineering, control systems, and electronics, to create intelligent
machines that can sense, reason, and act in the physical world.93 Control systems engineering deals with
the control of dynamic systems, which is critical for tasks such as autonomous vehicles and
robotics. Scheduling is used in AI to plan and coordinate the actions of intelligent agents, which
is critical for tasks such as logistics and manufacturing. Data mining is used in AI to extract
knowledge from large datasets, which is critical for tasks such as natural language processing
and computer vision.94 Logistics deals with the efficient movement of goods and resources,
which is critical for tasks such as supply chain management and transportation. Speech
recognition and facial recognition are important applications of AI that enable computers to
understand human speech and recognize human faces.
91 Diksha Khurana, Aditya Koli et. al., "Natural language processing: state of the art, current trends and challenges" 82 Multimedia Tools and Applications 3713 (2023).
92 William B. Gevarter, An Overview of Computer-Based Natural Language Processing (National Aeronautics and Space Administration Headquarters, Washington, D.C., 1983).
93 Alan Martin, "Robotics and artificial intelligence: The role of AI in robots" AI Business (2021).
94 Julius Olufemi Ogunleye, The Concept of Data Mining (Intechopen, 2022).
Overall, AI is a multidisciplinary field that draws on various disciplines and fields, each
providing its own set of tools and insights to help understand and create intelligent systems. As
such, AI research requires the integration of various knowledge and skills from multiple fields
in order to achieve the goal of creating intelligent machines and systems.95
The IEEE Computational Intelligence Society provides definitions for a number of essential
topics that fall under the umbrella of the discipline of computational intelligence.97 These topics
include evolutionary computation, neural networks, and fuzzy systems. Neural networks are
programmable, pattern-recognition-capable systems, whereas fuzzy systems are approaches for
reasoning under uncertainty that are widely utilised in industrial and consumer product control
systems today.98 On the other hand, evolutionary computation takes ideas from biology and
applies them to the problem-solving process. These ideas include populations, mutation, and
the concept of “survival of the fittest.”99
These methods can be further segmented into evolutionary algorithms, such as genetic algorithms, and swarm intelligence, such as ant algorithms, amongst other possible instances. There are also attempts to combine these groups through the use of hybrid intelligent systems.100
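A minimal, illustrative genetic algorithm in Python is sketched below: a population of bit-strings evolves towards all ones through selection, one-point crossover, and random mutation. The chromosome length, population size, mutation rate, and fitness function are all invented for demonstration:

import random

GENES, POP, MUT = 20, 30, 0.02           # chromosome length, population, mutation rate

def fitness(ind):
    return sum(ind)                       # fittest individual is all ones

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for generation in range(100):
    # Selection ("survival of the fittest"): keep the fitter half.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Crossover and mutation produce the next generation.
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, GENES)
        child = a[:cut] + b[cut:]                        # one-point crossover
        child = [g ^ 1 if random.random() < MUT else g   # random bit-flip mutation
                 for g in child]
        children.append(child)
    population = parents + children

print(max(fitness(ind) for ind in population))   # approaches 20 (all ones)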
In addition, neural networks or production rules derived from statistical learning, such as those found in cognitive architectures like ACT-R or CLARION, can be used to assist in generating expert inference rules.101 This can be accomplished through a variety of different methods. Integration of such systems is considered both promising and possibly crucial for the development of genuine artificial intelligence, since the human brain is believed to use numerous methods to both formulate and cross-check outcomes. In particular, the combination of symbolic and connectionist models is regarded as having especially promising potential.

95 Christopher Collins, Denis Dennehy et. al., "Artificial intelligence in information systems research: A systematic literature review and research agenda" 60 International Journal of Information Management 102383 (2021).
96 A.S. d'Avila Garcez, K. Broda et. al., "Symbolic knowledge extraction from trained neural networks: A sound approach" 125 Artificial Intelligence 155 (2001).
97 Duch Wlodzislaw, "What is Computational Intelligence and what could it become?" Computational Intelligence (2003).
98 Supra note 90.
99 T. Ryan Gregory, "Understanding Natural Selection: Essential Concepts and Common Misconceptions" 2 Evolution: Education Outreach 156 (2009).
100 Sourabh Katoch, Sumit Singh Chauhan et. al., "A review on genetic algorithm: past, present, and future" 80 Multimedia Tools and Applications 8091 (2021).
Overall, the field of computational intelligence is a complex and multi-faceted area of research
that encompasses a wide range of techniques and approaches for developing intelligent
systems. This is a very active area of study, and there are numerous ongoing initiatives to
increase the capabilities and performance of these systems.
There are many different definitions of artificial intelligence, as the field is constantly evolving
and there are many different perspectives on what constitutes AI. However, some commonly
cited definitions include:
• “The study and design of intelligent agents” (Russell and Norvig, Artificial
Intelligence: A Modern Approach)102
• “The field of AI research defines itself as the study of 'intelligent agents': any device
that perceives its environment and takes actions that maximize its chance of success at
some goal” (Berkeley Artificial Intelligence Research)103
• “AI is the branch of computer science concerned with developing algorithms and
techniques for building systems that can perform tasks that typically require human
intelligence, such as visual perception, speech recognition, decision-making, and
language understanding” (IEEE)105
101 Frank E. Ritter, Farnaz Tehranchi et. al., "ACT-R: A cognitive architecture for modelling cognition" Wiley Interdisciplinary Reviews: Cognitive Science (2018).
102 Supra note 12.
103 Chethan Kumar GN, Artificial Intelligence: Definition, Types, Examples, Technologies, available at: [Link] (last visited on Apr. 1, 2022).
104 Ibid.
105 Tanya Tiwari, Tanuja Tiwari et. al., "How Artificial Intelligence, Machine Learning and Deep Learning are"
• “AI is the simulation of cognitive functions such as learning, reasoning, and self-
correction in computers” (MIT Press, AI: A Modern Approach)106
• “AI is the ability of a computer or machine to think and learn” (Oxford Dictionary)107
• “AI is the development of computer systems able to perform tasks that normally require
human intelligence, such as visual perception, speech recognition, decision-making,
and language translation” (European Union)109
• “AI is the science and engineering of making intelligent machines, especially intelligent
computer programs” (AAAI, Association for the Advancement of Artificial
Intelligence)110
• “AI is the ability of a computer system to perform tasks that would normally require
human intelligence, such as recognizing speech, understanding natural language, and
recognizing objects in images”111 (Techopedia)
• “AI is the simulation of human intelligence in machines that are programmed to think
and learn”112 (Science Daily)
These are only a few instances of how the definition of "artificial intelligence" can vary depending on the viewpoint and the source consulted in various regions of the world.
Intelligence
Intelligence is a complex and multifaceted construct that has been defined and studied in
various ways by different researchers and disciplines. One frequently accepted definition of
intelligence is the capacity for acquiring, comprehending, and applying knowledge, as well as
exercising thought and reason. This definition emphasises the integrative aspect of intelligence
by pointing out that it incorporates a wide range of cognitive activities, including perception,
attention, memory, language, and planning, among others.
Neuroimaging studies have provided evidence for the neural underpinnings of intelligence,
with a particular focus on the frontal-parietal network. This network has been found to be
involved in a variety of cognitive functions, including language, short-term memory storage and perception, which aligns with the integrative nature of intelligence.113
According to researchers like Allen Newell115, intelligence is “the degree to which a system
approximates a knowledge-level system”. This definition emphasizes the importance of
knowledge in intelligence and highlights the central role of knowledge representation and
engineering in artificial intelligence research.
The term “knowledge” can be defined in numerous ways, including as acquired expertise and
abilities from education or experience, practical or theoretical comprehension of a subject, or
awareness or familiarity earned through experience. In the context of intelligence, knowledge
refers to the self-assured comprehension and application of information when appropriate.116
113 R. Colom, S. Karama et. al., "Human intelligence and brain networks" 12 Dialogues in Clinical Neuroscience 489 (2010).
114 Allen Newell, Unified Theories of Cognition (Harvard University Press, Cambridge, Massachusetts, 1990).
115 Ibid.
116 Supra note 12.

Overall, intelligence can thus be understood as the product of an interplay of cognitive functions and neural networks, with the acquisition, representation, and application of knowledge being a central aspect.117
Artificial Intelligence
The definition of AI has evolved over time to encompass a broader range of capabilities and
technologies, including natural language processing, machine learning, computer vision, and
more.
Mechanical replication
Previous definitions of Artificial Intelligence took into account the limitations of intelligence
that are inherent to mechanical processes. John McCarthy of Stanford University's Computer
Science Department, who invented the phrase in 1956,118 first defined Artificial Intelligence as
“the science and engineering of making intelligent machines”.119 The field of Artificial
Intelligence was established on the belief that intelligence, a basic human trait, can be clearly
defined and replicated by a computer.120
Intelligent Agents
The term “artificial intelligence” (AI) used to refer to the “study and development of intelligent
agents” in earlier AI textbooks. An intelligent agent is a system that has the ability to
comprehend its surroundings and take appropriate action to increase its chances of being
successful.121 AI is “the capability of a device to perform functions that are normally associated
with human intelligence, such as reasoning and optimization through experience.”122 AI refers
to both the intelligence of machines and the discipline of computer science that seeks to
generate it.123 AI has likewise been described as the capability of a machine to gain knowledge through experience and perform actions that are typically associated with human intelligence, including the capacity to solve problems, reason logically and understand spoken language.124
117 Patrick Henry Winston, Artificial Intelligence (Addison Wesley Publishing Company, 2001).
118 Daniel Crevier, AI: The Tumultuous Search for Artificial Intelligence 50 (Basic Books, New York, 1993).
119 Ibid.
120 Supra note 76.
121 Supra note 12.
122 J. Feldman, "Artificial Intelligence in Cognitive Science" International Encyclopaedia of the Social & Behavioural Sciences 561, 792 (2001).
123 A definition of AI: Main capabilities and scientific disciplines, available at: [Link] (last visited on Oct. 3, 2021).
124 Ida Arlene Joiner, "Artificial Intelligence: AI is Nearby" Emerging Library Technologies 1 (2018).
Non-Naturally Occurring Systems
Ideas in Machines
When attempting to define artificial intelligence, the focus has shifted from mechanical devices
to systems, and then ultimately to concepts that are implemented in machines. The latter
definition stated: “Artificial intelligence is the study of ideas to bring into being machines that
respond to stimulation consistent with traditional responses from humans, given the human
capacity for contemplation, judgment, and intention. Each such machine should engage in
critical appraisal and selection of differing opinions within itself. Produced by human skill and
labor, these machines should conduct themselves in agreement with life, spirit, and sensitivity,
though in reality, they are imitations".127 Other definitions frame AI as an ability of machines or tools: "The ability of a machine to learn from experience and perform tasks normally attributed to human intelligence, for example, problem-solving, reasoning, and understanding natural language"128 or "Tools that exhibit human intelligence and behavior including self-learning robots, expert systems, voice recognition, natural and automated."129
Process of Simulation
Initially, the concept of artificial intelligence was understood to refer to the process of having
computers work for humans. For example, "Artificial Intelligence is the study of how to make computers accomplish tasks which, at the present, people do better."131 Soon after, the machine in such definitions was supplanted by the computer, and AI came to be framed exclusively in terms of computers.

125 Janna Anderson, Artificial Intelligence and the Future of Humans (Pew Research Center, 2018).
126 Elaine Rich, Kevin Knight et. al., Artificial Intelligence (Tata McGraw Hill Education Private Ltd., New Delhi, 2009).
127 Marvin L. Minsky, Computation: Finite and Infinite Machines 254 (Prentice-Hall, Inc., Englewood Cliffs, N.J., 1967).
128 Supra note 124.
129 Ibid.
130 Supra note 122.
instance, states that it “refers to a computer system that is able to operate in a manner similar
to that of human intelligence; that is, it can understand natural language and is capable of
reasoning, classifying, self-improvement, recognising, adapting, learning and solving
problems.” This term relates to computer systems that can simulate human intelligence in their
operation.132
The progression from machines to computers, and then from computers to a branch of computing that aims to replicate human capabilities, was adopted in order to place AI specifically within the realm of computers and to exclude other types of machines. For example, "AI is the branch of computer science that attempts to approximate human reasoning by organising and manipulating factual and heuristic knowledge". Artificial intelligence research focuses on fields such as robotics, natural language understanding, expert systems, speech recognition, and vision,133 and AI has been described as "The branch of Computing Science concerned with simulating aspects of human intelligence such as language comprehension and production, vision, planning",134 etc.
AI is the intelligence of machines and the branch of computer science that aims to develop it,
e.g., “The intelligence possessed by machines is referred to as artificial intelligence (AI), and
the subfield of computer science that tries to produce AI is called artificial intelligence”135 and
“AI is a branch of computer science that studies and creates computer systems that exhibit
some form of intelligence, learn new concepts and tastes, reason and draw useful conclusions
about the world around us, understand natural languages or visual scenes, and perform other
feats that require human intelligence”.136
131 Supra note 126.
132 Mohd Naveed Uddin, "Cognitive science and artificial intelligence: simulating the human mind and its complexity" 1 Cognitive Computation and Systems 113 (2019).
133 Supra note 6.
134 Dalvinder Singh Grewal, "A Critical Conceptual Analysis of Definitions of Artificial Intelligence as Applicable to Computer Engineering" 16 IOSR Journal of Computer Engineering 9 (2014).
135 Ibid.
136 Dan W Patterson, Introduction to Artificial Intelligence & Expert Systems (Prentice-Hall, USA, 2004).
The definitional focus thus moved from intelligent machines to computers, from computers to the branch of computing concerned with designing intelligent computer systems, and from the simulation of human intelligence to software within that branch: "The branch of computer science that focuses on creating software that aims to replicate brain functions".137 These successive shifts reflect the ever closer connection between artificial intelligence and computer programming.
AI is a rapidly advancing field of technology that has the potential to revolutionize society in
a multitude of ways. Despite the impressive progress that has already been made in the
development of AI applications, it is important to recognize that this field remains largely
unexplored, meaning that the current state of AI represents only the tip of the iceberg in terms
of its potential capabilities.
The rapid growth and powerful capabilities of AI have led to a variety of concerns, including
fears of an AI takeover and the possibility of job displacement due to automation. Additionally,
the transformative impact of AI in various industries has led some to believe that we are
approaching the peak of AI research and that the potential of AI has been fully realized.
Nevertheless, it is essential to acknowledge that there are numerous forms of AI, some still potential and others already in existence, each with its own advantages and disadvantages. Understanding
the various types of AI and their existing capabilities helps shed light on the current level of
AI research and the long road ahead for the subject.
Current AI systems, for instance, can be split into two primary categories: narrow AI, which is
designed to execute a particular task, and general AI, which can perform a variety of tasks and
adapt to new settings. Narrow AI is the most advanced and prevalent form of AI at present, with applications in areas such as image and speech recognition, natural language processing, and
autonomous vehicles. However, general AI, which is still in the early stages of development,
has the potential to significantly impact society in ways that are not yet fully understood.
Despite the fact that AI is already having a substantial impact on society, it is essential to
understand that the field is still in its infancy and that AI research has a long way to go.
137 Anthony C. Chang, "Basic Concepts of Artificial Intelligence" Intelligence-Based Medicine 7 (2020).
Understanding the various types of artificial intelligence and their existing capabilities helps
provide a clearer view of the current status of AI research and the potential future effect of the
technology.
The goal of AI research is to create machines that can emulate human behaviour. One method
to categorise AI systems is by the degree to which they can mimic human capabilities in terms
of variety and performance. Here, more sophisticated and developed AI systems are those that
can do tasks traditionally associated with humans at roughly human levels of competence,
whereas less complex and developed systems are those that are unable to do so.138
An AI system’s “think” and “feel” abilities, as well as its similarity to the human mind, are
used to categorise it. As per this taxonomy approach, artificial intelligence or AI-enabled
technologies fall into four groups:139
• Reactive Machines: These are the simplest type of AI systems that can only react to the
current situation and do not have the ability to form memories or make decisions based
on past experiences.140
• Limited Memory Machines: These systems can learn from their past mistakes and utilise that information to guide their present-day actions. They find usefulness in
systems like autonomous vehicles.141
• Theory of Mind: These systems are able to comprehend and reason about the mental states of other agents, whether humans or other AI systems.142
• Self-aware AI: Such systems have a self-awareness and consciousness like that of
humans. However, this technology is still in the research stage and is yet to be
developed.143
138 Supra note 79.
139 Supra note 84.
140 Supra note 7.
141 Javed M. Subhedar, "Autonomous Driving Architectures: Insights of Machine Learning and Deep Learning Algorithms" 6 Machine Learning with Applications 100164 (2021).
142 Supra note 6.
143 Supra note 7.
It should be noted that this classification system is not the only one and there are other ways in
which AI can be classified such as by the specific task or application it is designed to perform,
or by the specific algorithms or methods used to create it.
Reactive Machines
One of the earliest forms of AI systems, Reactive Machines are limited in comparison to more modern AI systems. These devices are designed to mimic how the human mind reacts to a range of stimuli, but they have no memory-related functionality. This means that these systems are unable to use their previous experiences to inform the decisions that they make today; hence, they are unable to "learn" from the actions that they have taken in the past.
This lack of memory-based functionality and learning capability limits the range of
applications for reactive machines. They are only able to react automatically to a combination
or specific set of inputs, and they are unable to change their behaviour depending on the
information they have gathered in the past. This makes them suitable for simple applications
such as traffic signals, elevators, and vending machines, which require only simple input-output
operations.
Deep Blue, the chess-playing computer that famously bested Garry Kasparov in 1997, is a well-known example of a machine that uses reactive artificial intelligence. It utilized a
reactive approach to play chess and was not capable of learning from past games or adapting
to Kasparov's playstyle. However, it was still able to beat Kasparov by being able to evaluate
200 million chess positions per second and choosing the best move from the pre-programmed
chess knowledge.144
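Deep Blue's style of play can be caricatured with the classic minimax procedure: search the game tree and choose the move whose worst-case outcome is best. The Python sketch below runs minimax over a tiny hand-made tree; the tree and its leaf scores are invented and bear no relation to Deep Blue's actual evaluation function:

# Minimax over a tiny hand-made game tree: inner nodes are lists of
# children, leaves are position scores from the maximising player's view.
tree = [[3, 12], [2, 4], [14, 5]]   # three candidate moves, two replies each

def minimax(node, maximising):
    if isinstance(node, int):            # leaf: pre-computed evaluation
        return node
    values = [minimax(child, not maximising) for child in node]
    return max(values) if maximising else min(values)

# Root is the program's turn: choose the move with the best worst case.
best_move = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print(best_move, minimax(tree, True))   # move 2 guarantees a score of 5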
In conclusion, reactive machines are a primitive form of AI, but they are still widely used in
simple applications due to their ability to respond to simple inputs. However, their lack of
memory-based functionality and learning capability limit their range of applications and
capabilities when compared to more advanced AI systems.
Limited Memory
Limited Memory machines are a type of AI system that possesses both the skills of fully reactive
machines and the ability to make judgments based on prior data. This sort of artificial
intelligence is distinguished by its capacity to keep historical data in its memory and utilise it as a guide for addressing future challenges. In contrast, reactive machines are incapable of learning from their past experiences or improving their performance.

144 Supra note 54.
Almost all AI applications now in existence come under this category. These systems learn by
establishing a reference model for future problem-solving by storing enormous amounts of
training data in their respective memories during the training process. For instance, an image-recognition AI is taught to identify objects using hundreds of photos and their descriptions. This type of AI then uses its "learning experience", i.e., the training photos, as a reference for classifying fresh images.145
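The way such a system consults its stored "learning experience" can be sketched with a simple nearest-neighbour classifier in Python, in which every training example stays in memory and new inputs are labelled by the closest stored ones; the feature vectors and labels are invented for illustration:

import numpy as np
from collections import Counter

# "Memory": stored training examples (invented 2-D features) with labels.
memory_X = np.array([[0.9, 1.0], [1.1, 0.8], [1.0, 1.2],        # class "cat"
                     [-1.0, -0.9], [-0.8, -1.1], [-1.2, -1.0]]) # class "dog"
memory_y = ["cat", "cat", "cat", "dog", "dog", "dog"]

def knn_classify(x, k=3):
    """Label x by majority vote among the k nearest stored examples."""
    dists = np.linalg.norm(memory_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(memory_y[i] for i in nearest).most_common(1)[0][0]

print(knn_classify(np.array([1.0, 0.9])))    # -> "cat"
print(knn_classify(np.array([-1.1, -0.8])))  # -> "dog"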
Notably, while restricted memory machines are more advanced than reactive machines, they
are still less capable than more advanced AI systems, such as those that can comprehend and
reason about the mental states of other agents or those with self-awareness and consciousness.
Theory of Mind
The next level of AI systems being actively developed by researchers is Theory of Mind. This
sort of artificial intelligence is distinguished by its capacity to comprehend the entities with
which it interacts by identifying their emotions, needs, mental processes, and beliefs.147 This is
a huge improvement over prior forms of artificial intelligence, such as reactive machines and
machines with limited memory, which can only respond to input based on pre-programmed
rules and cannot comprehend the mental states of other agents.
145 Ashlyn S Pothen, Artificial Intelligence and its Increasing Importance 74 (L'Ordine Nuovo Publication, 2022).
146 Supra note 80.
147 M.V. Butz, "Towards Strong AI" 35 Künstl Intell 91 (2021).
148 Supra note 125.
This level of AI is of interest to notable AI researchers and a budding industry; nevertheless, achieving a theory
of mind level of AI will also require improvements in other AI fields, such as natural language
processing, computer vision, and decision-making.149
Currently, the development of theory of mind AI is still in the research stage and it is not yet
possible to create a machine that truly understands and reasons about the mental states of other
agents like humans.150 However, researchers are actively working to develop AI systems that
can simulate certain aspects of human cognition such as understanding emotions, beliefs, and
intentions. These developments will have a significant impact on many fields such as
healthcare, education, and human-computer interaction.151
Self-aware
Self-awareness is a highly debated and speculative concept within the field of AI research. It
is often considered the ultimate goal of AI development, as it would involve the creation of an
AI system that is able to possess cognitive abilities and consciousness that are similar or
equivalent to those of a human being.
However, there are also concerns that the development of self-aware AI could lead to
catastrophic outcomes. This is because a self-aware AI system could have goals like self-
preservation, which could endanger humankind in some way. For this reason, many experts in
the field of AI research argue that the development of self-aware AI should be approached with
caution, and that ethical considerations should be taken into account at every stage of the development process.153

149 Supra note 95.
150 Ibid.
151 Rohit Kumar, Vinay Jaiswal et. al., "Human-Computer Interaction (HCI)" 9 International Journal of Engineering Research and Technology (2021).
152 Philip Boucher, "Artificial intelligence: How does it work, why does it matter, and what can we do about it?" Scientific Foresight Unit (STOA), European Parliament (2020).
Interestingly, the self-awareness classification system is used less commonly in tech jargon
than the AI classification scheme consisting of “Artificial Narrow Intelligence” (ANI),
“Artificial General Intelligence” (AGI), and “Artificial Superintelligence” (ASI).
In the field of AI research, ANI systems are considered to be the most basic and limited form
of AI. They are characterized by a lack of self-awareness, consciousness, and the ability to
understand or reason about the world in general. ANI systems correspond broadly to the "reactive" and "limited memory" categories described above, as they respond to inputs using pre-programmed rules or patterns learned from past data within a single narrow domain.154
ANI systems include virtual assistants such as Apple’s Siri and Amazon’s Alexa, which are
limited in their ability to conduct speech recognition and natural language processing-related
activities. Other examples include image recognition systems, which are only able to identify
objects or features within an image based on pre-programmed rules.155
Notably, ANI encompasses even the most cutting-edge AI systems that leverage machine
learning and deep learning to enhance their functionality. This is because these systems are still
limited in their scope of abilities and are unable to perform tasks outside of the specific domain
in which they were trained. They lack the ability to generalize their knowledge to other
domains.
153 Yongjun Xu and Xin Liu, "Artificial Intelligence: A Powerful Paradigm for Scientific Research" 2 The Innovation 100179 (2021).
154 Anant Manish Singh, Wasif Bilal Haju, "Artificial Intelligence" 10 International Journal for Research in Applied Science & Engineering Technology 7 (2022).
155 Vidushi Marda, "Artificial intelligence policy in India: a framework for engaging the limits of data-driven decision-making" 376 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20180087 (2018).
In short, ANI systems are characterized by their limited scope of abilities and their inability to generalize or adapt to other tasks. They represent the most basic and limited form of AI, lacking self-awareness, consciousness, and the ability to understand or reason about the world.
The term “artificial general intelligence” (AGI) is used to describe an AI entity that can think
and reason like a human being. AGI systems are capable of autonomously learning, perceiving,
comprehending, and functioning across a vast array of tasks and domains.
AGI systems are distinguished from ANI systems by their ability to create connections and
generalizations across domains, which enables them to adapt to new conditions and tasks more
effectively.156 This ability to generalize and transfer learning across domains is one of the key
hallmarks of AGI and is considered to be a crucial step towards the development of AI systems
that are truly human-like in their abilities.
One of the primary objectives of AGI research is to create AI systems that can comprehend or learn any intellectual task that a human being can. This would include the ability to think, plan, solve problems,
comprehend difficult concepts, gain knowledge via experience, and adjust to new situations.
It is important to note that AGI is still a hypothetical concept and there is no consensus among
experts in the field of AI research about when or if AGI will ever be achieved. However, many
academics feel that AGI represents a big step forward in the development of AI, and that the
creation of AGI systems has the potential to significantly assist society in a variety of fields,
including healthcare, education, transportation, and more.157
In short, AGI refers to an AI agent with cognitive capacities comparable to those of humans: systems capable of independently learning, perceiving, comprehending, and functioning across a vast array of tasks and domains, with the ability to generalize and transfer learning across domains as their most defining characteristic. As noted above, AGI remains a hypothetical concept, and there is no consensus among experts in the field of AI research about when or if it will ever be achieved.158

156 Etienne Oosthuysen, "Ethics in artificial intelligence (AI)" Making Meaning of Data (2021).
157 B.C. Stahl, Ethical Issues of AI, in Artificial Intelligence for a Better Future (Springer Briefs in Research and Innovation Governance, Springer, Cham, 2021).
ASI is a highly speculative concept within the discipline of AI research, characterized by the
capability of AI systems to possess intelligence that vastly surpasses that of any human being.
These systems would be able to perform a wide range of intellectual tasks with a high level of proficiency and speed, including reasoning, planning, problem-solving, understanding complex concepts, learning from experience, and adapting to new situations, combined with far faster data processing, superior data analysis, and greater memory than any human.
The advancement of ASI is often associated with the concept of the "singularity", which posits that
there will come a point in time where the intelligence of AI systems surpasses that of human
beings and triggers an explosion of technological progress.159 It is theorized that once ASI is
achieved, it will be able to improve itself at an exponential rate, leading to an intelligence
explosion that will rapidly and dramatically change the world.160
It is important to note that the concept of ASI is still highly speculative and there is currently
no consensus among experts in the field of AI research regarding when, or if, it will ever be
achieved.161 Moreover, the creation of ASI raises serious ethical and safety problems, since
such powerful machines could potentially endanger human survival or, at the very least, our
way of life. As such, many experts argue that the development of ASI should be approached
with caution, and that ethical considerations must be taken into account at every stage of the
development process.162
In summary, ASI represents a highly speculative concept within the domain of AI research,
characterized by the ability of AI systems to possess intelligence that vastly surpasses that of
any human being. The development of ASI is often associated with the concept of the singularity and raises significant ethical and safety concerns. However, it is important to note that the development of ASI is still highly speculative and there is no consensus among experts in the field of AI research about when or if it will ever be achieved. Therefore, the development of ASI should be approached with caution, and ethical considerations should be taken into account at every stage of the development process.

158 Seth D. Baum, Ben Goertzel et. al., "How long until human-level AI? Results from an expert assessment" 78 Technological Forecasting & Social Change 185 (2011).
159 Robert K. Logan and Adriana Braga, "AI and the singularity: a fallacy or a great opportunity?" 73 Information: MDPI (2019).
160 Jens Pohl, "Artificial Superintelligence: Extinction or Nirvana?" Proceedings for InterSymp-2015, IIAS, 27th International Conference on Systems Research, Informatics, and Cybernetics (2015).
161 Houlin Zhao, "The impact of Artificial Intelligence on communication networks and services" 1 ITU Journal: ICT Discoveries 1 (2018).
162 Christina Pazzanese, "Ethical concerns mount as AI takes bigger decision-making role in more industries" The Harvard Gazette: Business (2020).
V. Conclusion
In conclusion, this chapter has provided a comprehensive and nuanced understanding of the
history, definition, and classification of AI. The field of AI has undergone enormous
advancements and developments since its inception in the 1950s, and it is of paramount
importance to have a thorough understanding of its history and current state in order to fully
grasp the potential and impact of this technology on society. The chapter has highlighted key
events and milestones in the development of AI, including the Dartmouth Conference, the
creation of the first AI program and machine, and recent advancements in natural language
processing and computer vision.
The origins of AI may be traced back to the 1950s, when a group of Dartmouth College
researchers proposed a summer research project to create “thinking machines” that could
imitate human intelligence. This project, known as the Dartmouth Conference, marked the
beginning of the field of AI research. Since then, AI has undergone several stages of
development, including the “Good Old-fashioned AI” (GOFAI) era, the “Expert Systems” era,
and the current era of “Deep Learning” and “Big Data”. The study of artificial intelligence has
made significant gains in recent years, particularly in the areas of natural language processing
and computer vision, leading to the creation of virtual assistants such as Apple's Siri and
Amazon’s Alexa, as well as self-driving cars. The advancements in AI technology have also
led to the creation of new industries, such as autonomous vehicles and smart cities.
In addition, the chapter has investigated the concept of intelligence and provided a precise
definition of AI. Intelligence is the capacity to acquire, comprehend, and apply knowledge and
skills to solve problems and adapt to novel conditions. Artificial Intelligence is, as its name
suggests, the simulation of human intelligence by machines. AI systems are intended to
perform tasks that ordinarily require human intelligence, such as visual perception, speech
recognition, decision-making, and language translation. In addition, the chapter studied the
many classifications of AI, such as Limited Memory, Theory of Mind, Reactive Machines and
Self-Aware AI systems. These classifications provide a framework for understanding the
capabilities and limitations of AI systems and are essential for comprehending the potential
and impact of AI on society.
It is crucial to recognize that while AI technology possesses immense potential for positive
change and advancement, it also brings substantial problems and threats that must be addressed.
The field of AI is constantly evolving and advancing, and it is crucial to keep a close watch on
its development and the legal and ethical challenges it poses, in order to safeguard its
responsible, ethical, and valuable use for society. AI technology has the potential to
revolutionize various industries, but it is important to ensure that its use is aligned with societal
values and ethical principles.
The understanding of the history, definition, and classification of AI, as presented in this
chapter, serves as a solid foundation for the analysis of existing legal theories that will be
discussed in the following chapters. As AI becomes increasingly prevalent in our daily lives, it
is imperative to understand the legal and ethical implications of this technology. From issues
of liability and accountability to privacy and data protection, the legal framework for AI is a
fundamental aspect of ensuring its responsible development and use. These considerations are of paramount importance because AI has the potential to affect society in a multitude of ways; as the field advances rapidly, it is crucial that a legal framework, addressing issues such as data protection, privacy, accountability, and liability, is in place to govern its use. As AI becomes more integrated in our daily lives, it is essential that we consider the
implications of this technology on society and take steps to ensure that it is used in a responsible
and ethical manner.
In summary, this chapter has provided a detailed and nuanced understanding of the history,
definition, and classification of AI. The field of AI has undergone significant advancements
and developments since its inception in the 1950s, and it is of vital importance to have a
comprehensive understanding of its history and current state in order to fully grasp the potential
and impact of this technology on society. The chapter has also examined the different types of
AI classification and provided a clear and precise definition of AI, and has highlighted the legal
and ethical considerations surrounding AI technology. This understanding serves as a solid
foundation for the analysis of existing legal theories in the following chapters. As AI becomes
more pervasive in our daily lives, it is crucial to comprehend the legal and ethical consequences
of this technology in order to assure its responsible, ethical, and societally useful application.