
Issues & Controversies

Artificial Intelligence (AI)


March 15, 2024

INTRODUCTION

Advanced artificial intelligence is coming more quickly than many people predict, and without
guardrails the consequences could be dire. Artificial intelligence (AI) is already hurting workers,
fostering bias and privacy incursions, and threatening to exacerbate disinformation online. It is
essential to carefully consider the impact and potential dangers of this powerful technology before
it is too late.

Artificial intelligence (AI) is still rudimentary, and the possibility of dangerous, superintelligent
machines is incredibly slim. AI will boost the economy, improve health care, and has the potential to
correct biases, protect privacy, and enhance daily life. Hysterical claims that AI will supersede
humankind unnecessarily stunt technological innovation that could benefit many.

In February 2024, OpenAI, an artificial intelligence (AI) research laboratory, unveiled Sora, an AI
program that can instantaneously create videos up to a minute long from short text descriptions.
The firm described Sora as a "world simulator" capable of producing depictions "of the physical and
digital world, and the objects, animals and people that live within them." Observers described Sora
as capable of creating uncannily realistic videos but also as vulnerable to some of the pitfalls that
have beleaguered other generative AI programs (AI models that can create new written, audio, or
visual content). Sora exhibited "hallucinations," the term programmers use for instances in which AI programs, sometimes called models, present false information as fact. The program, for example, created videos of real-world scenarios in which objects did not comply with the laws of physics. In another example of hallucinations, AI-powered chatbots—software designed to converse with human beings—have been known to invent facts when asked about real people or events.

Artificial intelligence—a term coined by computer scientist John McCarthy in 1955—refers to software that allows machines to learn, "think," and perform actions autonomously. Since the birth
of the field in the mid-20th century, computer scientists have made major advances in developing
computer programs that can increasingly understand, speak, and accomplish prescribed tasks
nearly as well as, or in some cases better than, humans. During its short history, AI has evolved from
a largely speculative concept that enraptured mathematicians and philosophers in the mid-1900s
into an industry that has come to affect how people work, heal, communicate, commute, and learn
every day.

Huge advances in computer processing power and programming prowess have enabled AI to
perform complex tasks, from playing chess, Jeopardy!, and other intellectually challenging games, to
navigating cars through uneven terrain and city streets. Ostensibly simpler tasks, like recognizing
categories of objects in images or correctly interpreting natural speech or language, however, have
been riddled with challenges, laying bare the complexities involved in the commonsense
calculations, inferences, and deductions that human brains perform seemingly effortlessly all day.
Nevertheless, AI has made leaps just in the last few years.

Indeed, OpenAI's premiere of Sora was just the latest in a series of high-profile launches of
chatbots, image and video generators, and other programs in the quickly advancing field of artificial
intelligence. As more companies have made their AI projects public, many have argued that the
field—after decades of incubation—has finally arrived in a way that will concretely change how we
live. OpenAI caused a sensation when it made its AI-powered chatbot, ChatGPT-3, publicly available
in late 2022, as did internet giant Google with the launch of Gemini, its own AI model, in December
2023. As big-name companies integrate AI into more products and services, many predict that the
technology will continue to rapidly improve, spread, and become more relevant—for better or
worse—in the daily lives of many people. "Imagine if your brain got 10 times smarter every year
over the past decade, and you were on pace for more 10x compounding increases in intelligence
over at least the next five," Josh Tyrangiel, AI columnist for the Washington Post, wrote in
September 2023.

«You wouldn't be just the smartest person to have ever lived—you'd be all the smartest
people to have ever lived. (Though not the wisest.) That's a plausible trajectory of the
largest AI models…. AI has gone from a precocious toddler to blowing through many of the
supposed barriers between human and machine capabilities. The winners and losers might be in flux, but AI is likely to insinuate itself into most aspects of our lives.»

AI is already poised to revolutionize manufacturing, data analysis, health care, transportation, and
other sectors, yet it remains a far cry, many experts assert, from the artificial general intelligence, or
AGI, envisioned by McCarthy and early pioneers of the science in the mid-20th century. (Experts in
the field often differentiate between narrow AI, computer programs that can accomplish certain
tasks, and general AI, or artificial general intelligence, an as-yet-unrealized potential of machines to
think and function like or better than humans.) While the applications and capabilities of AI today
are still relatively narrow, a long-term goal of the field has been to create machines capable of
reasoning, thinking, communicating, and moving as humans do. Some even predict that, given
sufficient sophistication, it would be possible to create AI advanced enough to be called sentient, or
even conscious. Although some already worry that AI is progressing frighteningly fast, others
caution that hypothesizing fantastical scenarios of robots fixing the world's problems or taking over
human civilization will overhype the potential of the field while hardening the public against such
technology.

Indeed, the potential consequences for society of increasingly sophisticated AI—and whether it is possible ever to develop truly intelligent machines—have bred fierce debate. Every breakthrough in
AI triggers alarm that well-intentioned engineers are paving the path toward the creation of
dangerously powerful programs that will upset the economy and society in ways humanity is
unprepared for. This increasing automation, some fear, will erode privacy, sideline workers, and
entrust too much of human life to flawed, biased computer systems. Others dismiss such fears,
however, and argue that artificial intelligence will help solve pressing global problems like climate
change, poverty, and pandemics. The pace and potential of AI development are also disputed:
Some predict AI will never graduate beyond the ability to do very narrow, specific tasks, while
others think machines will eventually form a massive intelligence that surpasses that of humans.
Many have hailed AI as a technological revolution as important to humanity as electricity, whereas
others charge it poses an existential threat. Some warn that engineers must carefully plan and
prepare for the possibility of superintelligent AI; others insist that these discussions distract from
the more immediate real-world consequences they see as already resulting from machine learning.

AI, clearly, is a field containing broad variances of opinion, both about how advanced machine
intelligence will get and how these improvements will affect humanity. "Either a huge amount of
progress has been made, or almost none at all," Melanie Mitchell, a scientist and professor of
complexity at the Santa Fe Institute, a nonprofit research organization, wrote in Artificial
Intelligence: A Guide for Thinking Humans in 2019. "Either we are within spitting distance of 'true' AI,
or it is centuries away. AI will solve all our problems, put us all out of a job, destroy the human race,
or cheapen our humanity."

Could AI pose a danger to humankind?

Supporters of the assertion that AI could pose a danger to humankind argue that computer
scientists are closer to producing true general intelligence than many realize, and that even
ultracompetent AI programmed to accomplish narrow tasks could wreak unforeseen damage.
Already, they contend, AI is fueling bias, privacy incursions, misinformation, and labor market
disruptions. Private industry and governments, they emphasize, are racing to improve AI without
the guardrails necessary to ensure that the technology does not go awry.

Opponents of the assertion that AI could pose a danger to humankind argue that machine learning
is still rudimentary and undeveloped, a far cry from the superintelligence doomsayers predict will
upset human civilization. The field will ultimately fuel economic growth and help workers, they
assert, while driving beneficial advances in health care, industry, and other sectors of society.
Hysterical speculation about superintelligent AI, they contend, will only hamper technological
innovation and create unfounded public hostility.

Overview

Origins of Artificial Intelligence

Historians of technology often trace the roots of artificial intelligence to pioneering work done by
British mathematician Alan Turing in the early and mid-20th century. In papers written as a student
at Cambridge University in the 1930s, Turing considered a hypothetical machine—later dubbed the
Turing machine—that could, if fed specific instructions by humans, solve complex mathematical
problems. During World War II (1939–45), Turing and other mathematicians and scientists gathered
at Bletchley Park, an estate in the English countryside, to decipher enemy messages and locate
German submarines that were sinking ships bringing much-needed supplies across the Atlantic
Ocean. Many historians credit the team at Bletchley, which used and improved upon code-breaking
machine technology, for helping turn the tide of the war against Nazi Germany. After the war,
Turing helped write some of the first programs to run on the Ferranti Mark 1, one of the world's first
computers, built by engineers at Manchester University in 1951. When fed instructions—later
known as algorithms or programs—these machines could perform complex calculations much
faster and more accurately than humans.

The Ferranti Mark 1 and other early computers sparked headlines heralding the arrival of the
"electronic brain" and triggered debates over whether machines could think. "Not until a machine
can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the
chance fall of symbols, could we agree that machine equals brain," British neurologist Geoffrey
Jefferson asserted in a speech at Manchester University in 1950. "No mechanism could feel (and not
merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be
warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed
when it cannot get what it wants."

In a 1950 paper titled "Computing Machinery and Intelligence," Turing dismissed the question of
whether machines could think as "too meaningless to deserve discussion." Instead of this
philosophical debate, he proposed a test modeled after a Victorian parlor game known as the
imitation game, in which players attempt to guess whether a person hidden from their view is a
man or woman based on their written responses to questions. In his proposed game, later dubbed
the Turing test, a judge would blindly chat with an entrant and, through typed responses, try to
discern whether the conversation partner was a machine or a human. If the computer fooled a
judge into thinking it was a human, Turing wrote, that should be good enough evidence to qualify
as a machine "thinking," or at least imitating human thought. The Turing test would become an
influential driver of innovation—and headlines—as AI matured.

Technological progress in computing, meanwhile, was stoking more interest in the field. In 1956,
John McCarthy, a mathematician interested in the prospect of machine learning, organized a
summit at Dartmouth College in Hanover, New Hampshire, to build the foundations of a new area
of study he dubbed artificial intelligence. "[E]very aspect of learning or any other feature of
intelligence," he wrote in a grant proposal for the summit, "can be in principle so precisely
described that a machine can be made to simulate it." McCarthy and other attendees proposed
goals—including programming machines to understand language, reason, and perform tasks
involving creativity—that still drive the field.

In subsequent years, summit attendees went on to found institutions that are still important in AI
technology today. In 1959, for example, computer scientist Marvin Minsky, a contributor to the
summit, created the AI Lab at the Massachusetts Institute of Technology, which later merged with
the Laboratory for Computer Science to become the Computer Science and Artificial Intelligence
Laboratory. In 1962, McCarthy founded the Stanford Artificial Intelligence Laboratory. AI enthusiasts
predicted speedy progress. The Stanford project, for example, was founded with a mission of
"building a fully intelligent machine" within a decade, and in 1967 Minsky predicted that "the
problems of creating 'artificial intelligence'" would be solved within a generation. Engineers drafted
far-reaching research proposals pledging to develop technology that would lead to self-driving
vehicles and computer programs that could understand and respond to spoken and written
language.

Reaching such goals was much harder than originally conceived, however, and restrictions in
computer processing further limited the field. The computers of the mid-20th century, though
incredibly expensive and large—many filled entire rooms—had a tiny fraction of the processing
power that even a smartphone has today. Nevertheless, within a decade of the Dartmouth summit,
engineers had successfully programmed machines that could perform certain narrow tasks, like beat
amateur human players at checkers. Engineers had also experimented with perception—creating
machines that could, using sensors, take in their environment and adjust to it while performing a
prescribed task. In the late 1960s, for example, engineers at the Stanford Research Institute built
Shakey, a mobile robot that could move boxes in a simulated office environment. Because
computers at the time were too large to be included in the actual robot, however, Shakey used
radio communication with a nearby computer, leading to delays and complications.

From nearly the beginning, the AI field split into two approaches: symbolic and subsymbolic. In
symbolic AI, which dominated the field early on, engineers feed programs specific instructions
meant to replicate the human thought process. Researchers at Stanford, for example, used symbolic
AI in the early 1970s to develop MYCIN, a program that could help diagnose blood diseases given a
patient's symptoms and test results, by feeding the program hundreds of diagnostic rules
developed by physicians. Subsymbolic learning, on the other hand, is an approach in which
engineers attempt to mimic the neural networks of a human brain. Broadly speaking, an AI neural
network is a kind of computer architecture that can absorb large amounts of data (inputs) and uses
instructions (algorithms) to generate responses (outputs) to that data to perform an assigned task,
such as recognizing certain letters or objects. The more advanced of these networks undertake
"deep" learning, a multilayered process that allows machines to interpret their training data and
make corrections based on feedback.
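
To make the inputs-algorithm-outputs flow concrete, here is a minimal sketch in Python of a toy two-layer network. The layer sizes and random weights are purely illustrative placeholders; a real network would learn its weights from training data.

```python
# A toy "neural network" forward pass: inputs flow through two layers of
# weighted connections and come out as scores for possible categories.
# All sizes and weights here are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)           # input: e.g., four measured features of an object
W1 = rng.normal(size=(4, 3))     # weights of the first layer
W2 = rng.normal(size=(3, 2))     # weights of a second, "deeper" layer

hidden = np.maximum(0, x @ W1)   # each layer transforms the data it receives (ReLU)
scores = hidden @ W2             # output: one score per candidate category

print("predicted category:", int(np.argmax(scores)))
```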

Subsymbolic learning, which did not come to dominate the field until decades after its birth in the
1960s, is often less transparent, or decipherable, than symbolic learning; engineers supervise
machine learning to develop certain capabilities but are not always clear on how exactly the
machine has performed the task. "[S]ubsymoblic systems tend to be hard to interpret, and no one
knows how to directly program complex human knowledge or logic into these systems," Mitchell
wrote in Artificial Intelligence: A Guide for Thinking Humans. "Subsymbolic systems seem much
better suited to perceptual or motor tasks for which humans can't easily define rules. You can't
easily write down rules for identifying handwritten digits, catching a baseball, or recognizing your
mother's voice; you just seem to do it automatically, without conscious thought."

As the 20th century progressed, the breakthroughs promised by the pioneers of AI did not
materialize. Programming intelligence, engineers realized, was far harder and more complex than
previously appreciated. Early AI programs failed to perform many of the narrow tasks they had been
prescribed, let alone develop the artificial general intelligence that optimists of the
field had predicted. In 1965, the RAND Corporation, a think tank, commissioned American
philosopher Hubert Dreyfus to write an analysis of the state of AI. Dreyfus concluded that though
machine learning had made notable advances, it would ultimately be fruitless to attempt to distill
the opaque workings of human intelligence to a set of directives fed to a computer. "To avoid the
fate of the alchemists [scientists who attempted without success to change common metals into
gold], it is time to ask where we stand," he wrote in the report Alchemy and Artificial Intelligence. "Is
an exhaustive analysis of human intelligent behavior into discrete and determinate operation
possible? Is an approximate analysis of human intelligent behavior in such digital terms probable?
The answer to both these questions seems to be, 'No.'" The Science Research Council, a public
British organization, reached a similar conclusion in a report in 1972. Results in the field had been
"wholly discouraging about general-purpose programs seeking to mimic the problem-solving
aspects of human activity," the report stated. "Such a general-purpose program, the coveted long-
term goal of AI activity, seems as remote as ever." After nearly two decades of AI optimism, grant
funding for AI research dried up in the mid-1970s, precipitating one of several "AI winters" that
would come to define the boom and bust cycles of the field.

In the 1980s and 1990s, however, steady advances in computer processing power triggered a
resurgence in AI investment and an explosion in AI projects. These developments attracted the
attention of officials at the Defense Advanced Research Projects Agency (DARPA), a U.S. military
research group that contributed to some of the biggest technological revolutions of the 20th and
21st centuries, including the development of the internet. In 1988, DARPA announced it would fund
research into building computer neural networks. Neural network AI, some officials predicted,
would be the next big revolution in military technology, akin to the development of the atom
bomb.

Computer scientists in the private sector worked on improving language recognition neural
networks and the beginnings of self-driving technology. In 1997, Deep Blue, a computer program
created by engineers at International Business Machines Corporation (IBM), beat world chess
champion Garry Kasparov in a highly publicized six-game match. Kasparov falsely accused the IBM technicians of cheating by using a team of human chess experts to guide the machine in
real time. In reality, the team had designed a program capable of analyzing 200 million chess moves
per second. It was not until the rapid spread of internet use in the early 2000s, however, that the
field would be transformed from largely a realm of academic speculation and tinkering to one in
which programmers built real-world, commercially viable AI applications that millions of people
would come to use every day.

Internet Use Prompts AI Explosion

In the early 21st century, ballooning private investment, the rise of huge data sets made available
by widespread internet use, and breakthroughs in machine learning powered major advances in AI.
As more and more internet users created blogs, social media profiles, and websites, they built a
gargantuan body of audio clips, images, language, and other content that engineers could use to
feed AI programs huge amounts of data and supervise them as they learned to perform certain
prescribed tasks, such as recognizing a cat in a photograph or transcribing a voicemail into text.

Many of the advances in AI over the last decade have been driven by subsets of AI known
alternately as deep learning or deep neural networks. Deep learning is a subfield in which AI
programs "learn" to recognize patterns in data and, after many rounds of trial and error, develop
the capability to perform certain narrow tasks. Unlike the symbolic learning that dominated early
stages of AI, in which engineers tried to program specific instructions in attempts to mimic human
thought processes, deep learning is subsymbolic, meaning many of the "thought processes"
machine operations undertake are not necessarily clear to engineers. Instead, computers learn
through experience and trial and error to develop skills, much as humans can perform certain tasks—such as recognizing a face—without necessarily being able to explain how they did so.
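
The "trial and error" at the heart of this approach can be sketched in a few lines of Python. In this deliberately tiny, hypothetical example, a one-parameter model repeatedly guesses, measures how wrong it was, and nudges its parameter to reduce the error; deep networks run the same feedback loop over millions of parameters.

```python
# Learning by trial and error: guess, measure the error, correct, repeat.
# The task (fit y = 3x) and the learning rate are made up for illustration.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]   # (input, correct output) pairs

w = 0.0                                        # the model's single adjustable parameter
for step in range(50):
    for x, y in data:
        guess = w * x                          # trial
        error = guess - y                      # feedback: how far off was the guess?
        w -= 0.05 * error * x                  # correction based on that feedback

print(round(w, 2))                             # converges toward 3.0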

The recent proliferation of written content and images online has generated huge caches of
material that has never before existed in such accessible form, supercharging this type of learning.
As deep learning improved, companies in Silicon Valley, a hub of technological innovation in
Northern California, and elsewhere began to invest in commercially viable AI applications,
developing and selling AI software that included speech recognition applications; medical diagnosis
algorithms; email spam filters; credit card fraud detection; scheduling, inventory control, and
logistics programs; "smart" home appliances; product recommendation algorithms used by popular
e-commerce sites such as Amazon; and self-driving technology. "Deep learning opened the
floodgates for applications of AI," Oxford University computer science professor Michael
Wooldridge wrote in A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where
We Are Going in 2021. "In the second decade of the twenty-first century, AI…attracted more interest
than any new technology since the World Wide Web in the 1990s. Everyone with data and a
problem to solve started to ask whether deep learning might help them." [See Amazon]

Major names in tech, particularly Amazon, Facebook, Google, and IBM, prioritized AI development.
In 2011, Apple launched Siri, a virtual assistant that, using technology developed at the Stanford
Research Institute, can answer simple questions by scouring the internet and respond to commands
to schedule reminders, compile shopping lists, check traffic reports, and perform other basic tasks.
Other companies soon unveiled their own versions, including Amazon's Alexa, Google's Assistant,
and Microsoft's Cortana. Meta, the technology company that owns Facebook, also invested vast
resources in AI. "One of our goals for the next 5 to 10 years," Facebook CEO Mark Zuckerberg
stated in 2015, "is to basically get better than human level at all of the primary human senses:
vision, hearing, language, general cognition." Facebook and other companies launched features
that could identify faces or locations in posted photographs, and social media platforms used AI
programs to better learn users' preferences and direct them to content they might find interesting.

Google (later reorganized under the parent company Alphabet) in particular transformed from a company centered on a web
search portal to one that offered a sweeping array of AI-enabled services, including translation,
photo curation, and navigation, all while investing in longer-term projects such as self-driving
technology. In 2011, the company launched Google Brain, an internal division wholly devoted to AI.
The company also acquired an arsenal of AI startups, most notably DeepMind, which it purchased
for $500 million in 2014. "Solve intelligence," DeepMind's founding mission statement proclaimed,
"and use it to solve everything else."

As companies found profitable ways to incorporate AI software into their products, engineers also
attempted to improve machine learning by continuing to write software that allowed computers to
play human games. In 2011, Watson, a computer program designed by engineers at IBM, roundly
defeated Jeopardy! champions Ken Jennings and Brad Rutter. The team of engineers had used
books, reference databases, Wikipedia, and other material to build the machine's knowledge base,
programmed it so that it buzzed in only if its confidence in an answer had reached a certain
threshold, and honed its abilities using years' worth of past Jeopardy! shows. A team at DeepMind,
similarly, engineered a program, AlphaGo, that in 2016 beat one of the world's best players at the
Chinese board game Go in four out of five games.

In the years to come, AI is expected to play an increasingly larger role in health care through
software that can diagnose diseases or monitor users' health and identify warning signs. Some
research, for example, has suggested that software may be able to detect signs of dementia based
on changes in the way a person uses his or her smartphone. Engineers are also developing software
they hope will be able to accurately diagnose illnesses based on medical imaging such as X-rays or
ultrasounds. Apple has created a watch capable of tracking heart activity and monitoring signs of
heart disease or cardiac arrest. Advances in AI technology will likely prove a boon to many suffering
from hard-to-diagnose-or-treat ailments. Such advances, however, raise red flags among many
experts and ethicists. The tracking software necessary to maximize health-based AI, they warn,
could intrude on personal privacy. Others caution that entrusting diagnoses to machines—or, as
some predict, replacing doctors entirely with medically trained chatbots for simple visits—will
eliminate human compassion and ingenuity from health care. Defenders of integrating AI into
medicine, on the other hand, point out that many rural areas and some developing countries suffer
from doctor and nurse shortages, a void AI may someday be able to fill.

AI will also likely become increasingly common in warfare. Automation and machine learning have
already enabled the use of surveillance and attack drones and precision-guided munitions capable
of adjusting themselves in flight to hit targets set by human operators, though these technologies
accomplish narrow tasks that still require operator oversight. In 2018, DARPA allocated $2 billion
toward improving AI that could better use common sense and advanced spatial reasoning. Military
leaders in Ukraine, which has fought off a Russian invasion since February 2022, have used AI
systems to help sort through drone and satellite imagery that can inform decisions on the
battlefield. Some even predict that future wars will be fought primarily by robots and autonomous
weapons systems. These technologies, proponents assert, could make warfare more ethical by
eliminating human anger, fatigue, and error from the battlefield. Others worry such a shift would
lessen the use of restraint and, by lowering the cost to soldiers of starting a conflict, make nations
more likely to head to war. Thousands of AI scientists have signed a pledge, coordinated by the
Future of Life Institute, not to work on lethal autonomous weapons systems. Many Google
employees protested in 2018 when they learned Google had signed a contract with the U.S.
Department of Defense to improve drone warfare and surveillance with artificial intelligence; the
company canceled the project and, though it said it would continue to pursue military contracts,
vowed not to work on deadly weapons or surveillance. [See NATO and Emergent Technologies].

How advances in AI would affect the real world—and whether such advances signal the coming of a
certain, and possibly dangerous, machine superintelligence—has added to a debate that has deep
roots in the scientific and philosophical communities.

Philosophers, Economists, Scientists Debate Potential Drawbacks of AI-Enabled World

The explosion of interest and investment in artificial intelligence since the turn of the 21st century
has prompted a new round of philosophical concern over its potential long-term consequences. In
2014, British physicist Stephen Hawking, who suffered from the debilitating disease ALS before his
death in 2018, became alarmed by the rapid improvement in the AI technology that allowed him to
speak and write. The "development of full artificial intelligence," he told the BBC, "could spell the
end of the human race." In a talk to students at the Massachusetts Institute of Technology (MIT) the
same year, Elon Musk, founder of electric car company Tesla and rocket company SpaceX, called AI
"our biggest existential threat" and referred to AI research as "summoning the demon." Also in
2014, Nick Bostrom, a professor of philosophy at Oxford University, published Superintelligence:
Paths, Dangers, Strategies, a book warning of the disastrous consequences that could arise if
machine intelligence surpassed that of humans. "[A]s the fate of the gorillas now depends more on
us humans than on the gorillas themselves, so the fate of our species would depend on the actions
of the machine superintelligence," he wrote. "We do have one advantage: we get to build the stuff.
In principle, we could build a kind of superintelligence that would protect human values…. This is
quite possibly the most important and most daunting challenge humanity has ever faced."

Other influential thinkers in the field, however, dismissed such hand-wringing as overwrought.
"Human intelligence is a marvelous, subtle, and poorly understood phenomenon," technology
entrepreneur Mitchell Kapor told Vanity Fair in 2014. "There is no danger of duplicating it anytime
soon." Though advances in programs that can excel at certain tasks and games have been
impressive, he noted, these programs are still examples of "narrow" or "weak" AI, far cries from the
general or human-level intelligence pioneers predicted when the field first emerged in the mid-20th
century or from the reasoning exhibited by robots in science fiction classics like 2001: A Space
Odyssey (1968) and Star Wars (1977). A program that had learned how to beat humans at Go, for
example, could do only that; such skill sets were not easily transferrable to other tasks or even other
games. A "strong" or "general" AI that can adapt, self-guide, and employ commonsense and
human-level reasoning is still an overarching goal of the field, but AI experts themselves disagree
over whether it is reachable in the near future or ever. Some have even gone so far as to argue that
the focus on developing commercially viable narrow abilities has made it less likely programmers
will figure out how to build artificial general intelligence anytime soon.

Those skeptical that any kind of general or human-level intelligence is coming point out that most of the subsymbolic systems that dominate the AI field develop through supervised learning, in which human operators steer and correct software while it practices performing certain tasks. An AI program being trained to recognize images of dogs, for example, will be fed hundreds of thousands of pictures containing dogs. Not only does the program have to learn that many dogs look quite different from one another, but also that other animals can look like dogs without being dogs. A self-driving
program, similarly, might view millions of hours of footage taken by cameras on cars so that it can
learn to recognize how different weather patterns affect driving, how other cars are likely to behave
in given situations, and which objects, like bicycles or fire hydrants, are likely to move or not. This
type of learning generally requires an army of human workers who can label and curate images and
video footage, a limitation many have identified as an obstruction to quicker progress in the field.
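
The mechanics of supervised learning can be sketched briefly. The example below assumes the widely used scikit-learn library and substitutes made-up numeric features for images; the point is that the human-supplied labels are what steer the learning.

```python
# A minimal supervised-learning sketch: human-labeled examples drive the model.
# Features and labels are hypothetical; real systems use millions of labeled images.
from sklearn.linear_model import LogisticRegression

X = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.7], [0.2, 0.9]]  # toy "image" features
y = [1, 1, 0, 0]                                       # human labels: 1 = dog, 0 = not a dog

model = LogisticRegression()
model.fit(X, y)                       # learning is guided entirely by the labels
print(model.predict([[0.85, 0.25]]))  # -> [1], i.e., the model guesses "dog"
```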

Those engaged in supervised learning also cannot realistically prepare computer programs for every
possible real-world scenario, a weakness that has particularly haunted self-driving projects.
Programmers have struggled to imbue AI with the common sense, real-life experience, and
reasoning ability that humans take for granted. The autopilot mode on Tesla cars, for example, is
capable of handling most driving functions in certain situations but can be confused by simple
changes to the environment, like salt lines on roads. Programs can also sometimes accomplish the
tasks assigned to them but through faulty methods. An AI program designed to learn how to
distinguish photographs with animals in them, for example, might instead learn to identify images
with blurred backgrounds, as blurring backgrounds to focus on the animal is a very common
feature of nature photography. Or an AI program instructed to get a high score on a video game
might learn to hack the score system rather than actually perform well at the game. [See Self-
Driving Cars]

Despite these genuine hurdles, some argue that AI's advances in narrow tasks have demonstrated its potential for wider application and cognitive skills. Those who believe a superintelligent AI is a real, and even near-term, possibility predict that machine learning will one day become so advanced that it can improve upon itself exponentially, eventually exceeding human intelligence and developing a "superintelligence" or "ultraintelligence" that no longer requires human guidance. British mathematician I. J. Good, who worked alongside
Turing at Bletchley, described such a machine in an article in the New Scientist in 1965:

«Let an ultraintelligent machine be defined as a machine that can far surpass all the
intellectual activities of any man however clever. Since the design of machines is one
of these intellectual activities, an ultraintelligent machine could design even better
machines; there would then unquestionably be an "intelligence explosion," and the
intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.»

That hypothetical point of an "intelligence explosion" is referred to by computer scientists as the technological singularity. The term comes from physics, where it refers to a point in space-time, such
as a black hole, where the normal laws of physics do not apply. The term was first applied to
artificial intelligence in 1993 in "The Coming Technological Singularity: How to Survive in the Post-
Human Era," an article by Vernor Vinge, a computer scientist and retired mathematics professor at
San Diego State University in California, as well as a prominent science fiction writer. Vinge used the
term singularity to describe such an event because, as with black holes in physics, in the case of an
intelligence explosion normal rules would no longer apply. A machine of the type Good described,
Vinge wrote, "would not be humankind's 'tool'—any more than humans are the tools of rabbits or
robins or chimpanzees." [See Computer Scientist Predicts Coming Singularity (primary source)]

No one has done more to popularize the idea of the technological singularity than American
inventor Ray Kurzweil, who in a series of books including The Age of Intelligent Machines (1990), The
Age of Spiritual Machines (1999), and The Singularity Is Near (2005) has polished his far-reaching
predictions. The singularity, he wrote in The Singularity Is Near, will usher in a "period during which
the pace of technological change will be so rapid, its impact so deep, that human life will be
irreversibly transformed." Kurzweil rests his arguments on what he calls the "law of accelerating
returns"—the idea that technology does not progress incrementally, but exponentially. Computers,
for example, are not only becoming more efficient, more capable, and faster, but the rate at which
they are improving is also increasing. Kurzweil's predictions, particularly his avowal that a
technological singularity will occur in his lifetime, strike most in the field as far-fetched. Kurzweil
believes, for example, that superintelligent AI will result from reverse engineering the human brain,
but neurologists point out that medical science still knows very little about how the human brain
works.

Those who fear the potential collateral damage of machine learning do not typically believe that AI
programs will suddenly become power-hungry or hostile to humans. A more commonly shared
concern is what has become known in the field as the King Midas problem, a reference to the
fabulously rich monarch who in ancient Greek mythology wishes for everything he touches to turn
to gold, only to discover his life and loved ones ruined by this ability. Under the King Midas
scenario, an AI tasked with a narrow objective and insufficiently clear instructions could pursue it at
all costs. Bostrom, for example, proposed in 2003 a now-famous hypothetical in which an intelligent
machine designed to make paper clips turns the whole world into a paper clip factory to maximize
its production, destroying human civilization in the process. Skeptics of such hypotheses suggest
that a machine intelligent enough to do this would be intelligent enough to realize that destroying
humankind renders its paper clip mission useless. Some suggest that, should a machine begin
showing signs of potentially dangerous superintelligence, humans could simply unplug it. Others
worry that a machine capable of developing such intelligence could be competent and deceitful
enough not to show its competence until it was too late.

A parallel debate has raged over what precisely would constitute strong, "thinking" AI at all. The
Turing test continues to spark headlines and stir debate, but many computer scientists question
whether it is a helpful distinction. For decades, programmers have engineered chatbots designed to
pass the test. In the 1960s, for example, engineers at MIT created ELIZA, a computer program that
simulated the probing techniques of psychoanalysts—for example, responding to a question with a
question—to mimic human conversation. In 2014, a chatbot created by Russian engineers, who had endowed it with the persona of a 13-year-old Ukrainian boy for whom English was a second language, supposedly "passed" the Turing test when it convinced approximately one-third of judges
it was a human. Many, however, argue that such machines have relied mostly on trickery, vague
statements, and misdirection rather than truly learning natural language or conversational skills. The
Turing test, they charge, has turned into nothing more than a publicity stunt that generates
headlines but does little to advance the field.

Hector Levesque, a computer scientist at the University of Toronto, has proposed an alternative to
the Turing test known as the Winograd Schema, named for Stanford University computer scientist
Terry Winograd. Winograd Schemas consist of short questions that a human could easily answer
using deduction, inference, and commonsense reasoning, but that might surpass a computer's
ability. Consider, for example, the following two sentences, crafted by Winograd: 1) The city
councilmen refused the demonstrators a permit because they feared violence; 2) The city
councilmen refused the demonstrators a permit because they advocated violence. A human reader
would be able to easily determine that the "they" in the first sentence refers to the councilmen,
while the "they" in the second sentence refers to the demonstrators. Computer programs, however,
struggle with such inferences. Other examples of Winograd Schemas are questions—like, "Can
crocodiles run the steeplechase?"—that a human with basic world knowledge (i.e., crocodiles have
short legs and therefore cannot clear hurdles) would be able to answer, but that a computer, even
one capable of scouring millions of websites in mere moments, could not piece together logically.

While philosophers and computer scientists debate the singularity and whether artificial general
intelligence is possible, others focus on more immediate challenges as AI is integrated into the daily
lives of individuals around the globe. Some AI software, for example, appears relatively easy to trick,
a potential danger should hackers seek to disrupt applications and programs people have come to
rely on. Small alterations in pixels, for instance, can cause computers to mislabel photographs that
look exactly the same to the human eye. A malevolent hacker, for example, could adjust road signs
in ways that humans could still easily read but that would wholly confuse self-driving software. This
problem has bred a subfield of AI known as adversarial machine learning, the study of how such attacks affect AI software.
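
A minimal sketch of the idea behind such attacks, written in Python with made-up numbers, appears below. It applies the widely studied "fast gradient sign" trick: nudge every input value slightly in the direction that most increases a toy classifier's error, producing an input that looks almost unchanged to a human but can flip the model's output.

```python
# Adversarial-example sketch: tiny, targeted changes to the input can
# sharply change a model's output. The classifier and data are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)          # weights of a toy linear "image" classifier
x = rng.normal(size=784)          # a toy 28x28 image, flattened to 784 pixels
y = 1.0                           # the true label

def predict(image):
    return 1.0 / (1.0 + np.exp(-w @ image))   # classifier's confidence (sigmoid)

grad_x = (predict(x) - y) * w     # gradient of the loss with respect to the pixels

epsilon = 0.05                    # a perturbation small enough to be hard to notice
x_adv = x + epsilon * np.sign(grad_x)

print(predict(x), predict(x_adv)) # confidence can shift sharply after the nudge
```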

Another common weakness plaguing the field of AI is the fact that computers—fed information and
content generated from the real world—often end up replicating the biases and prejudices of
human society. Facial recognition software, for example, has proved much more adept at
recognizing white males, in part because many of the images on the internet and in image
databases are of the famous, wealthy, and powerful people—disproportionately white males—who
tend to be photographed most frequently. In 2015, a Google photo-tagging feature triggered an
outcry when it labeled a photograph of two Black people "Gorillas," a mistake that conjured a
painful racist history of comparing Black people to primates. Google was able to fix the problem
only by removing the tag "gorilla" from its software. Software on cameras, similarly, often
misidentifies Asian photo subjects, who generally have different eye shapes than white people, as
blinking. In 2016, a research group at the University of Virginia noticed that their image-recognition
software would often misidentify a man as a woman if he was standing in a kitchen because it had
been fed many more images of women in kitchens than men in kitchens. Others have found gender
bias in translation services. When translating sentences from other languages, for example, Google
Translate has been noted to change genders depending on the context in some languages. "He is a
nurse. She is a doctor," becomes, when translated into Uighur or Indonesian and back into English,
"She is a nurse. He is a doctor."

AI programs that learn through real-world examples, many warn, are likely to perpetuate and
aggravate real-world discrimination. Computer systems programmed to help law enforcement or
courts determine whether a defendant is likely to reoffend, for example, will draw on data from a
criminal justice system that, historically, has disproportionately arrested and prosecuted people who
are poor and Black. Such software is already in use. In 2017, police in Durham, England, announced
they would begin using a Harm Assessment Risk Tool, an AI system that uses criminal justice data to
advise police officers whether they should detain or let go someone suspected of a crime. In the
United States, some police departments use "predictive policing" software that supposedly can alert
officers to what areas will become crime hotspots. Because such software relies at least in part on
historical crime data—information that will include a disproportionate number of arrests in minority
and poor neighborhoods—critics worry its predictions will only perpetuate what they argue is
overaggressive policing among certain demographic groups. Observers have raised similar alarm
regarding AI software used by financial institutions. AI programs meant to help banks decide
whether to provide loans or credit cards to potential clients, for example, could be amplifying biases
that historically have sidelined people of minority descent from financial services.

AI facial recognition software has been particularly controversial. Such software could benefit humanity in numerous ways, for example by helping law enforcement locate missing
children or apprehend dangerous fugitives or by enabling people with vision impairments to
"recognize" who they are encountering, but it can also exhibit the same biases and inaccuracies that
have plagued other AI systems. Amazon, for example, has sold a facial recognition software system
called Rekognition to police departments around the country, not without controversy. In 2018, the
American Civil Liberties Union, a group that advocates for basic constitutional rights, used
Rekognition to analyze the faces of all 535 members of Congress; the program falsely identified 28
members—one-fifth of whom were Black—as people in criminal databases. In August 2023, police
in Detroit created an uproar after arresting an eight-month-pregnant Black woman for robbery and
carjacking after inaccurately identifying her as a suspect using facial recognition software. The New
York Times noted that month that five other people—all Black—had reported similar experiences.

In addition to concerns surrounding racial biases, many have expressed apprehension that facial
recognition AI will amplify intrusive state surveillance. China, an authoritarian nation, has already
utilized such software to monitor its citizens, including Uighurs, a Muslim minority group tracked
and persecuted by the government. Some companies, chastened by privacy and racial bias
concerns, have taken steps back from marketing the software. In 2020, for example, amid
nationwide protests following the killing of George Floyd, a Black man, by a white Minneapolis
police officer, Amazon, IBM, and Microsoft announced they were stopping the sale of facial
recognition software to police departments. Two years later, Microsoft announced that it would
remove certain controversial features from its facial analysis software, such as the purported ability
to identify a person's emotional state. The decision followed a two-year review of the company's AI
products and policies. [See Police Brutality and Reform]

Yet another unknown aspect of AI is how its expanding prevalence will affect the economy and the
labor market. Automation has already eliminated many jobs, and consulting firm McKinsey
predicted in 2017 that, as AI improves, 73 million positions in the United States—from assembly-
line workers to cashiers—could disappear by 2030, forcing more Americans into unstable, benefit-
free "gig" work. In 2020, the World Economic Forum, a nongovernmental group based in
Switzerland, predicted that AI would displace 85 million workers by 2025, though it also forecast
that 97 million new jobs would be created by the same date. AI, most economists note, will not
strike employment sectors evenly, but will be more likely to affect workers in low-skilled, service
jobs such as in restaurants or transportation, positions disproportionately held by low-income
people of color. [See Stanford Institute for Human-Centered AI Fellow Testifies Before Congress on
Artificial Intelligence and the Future of Work (primary source)]

Some argue that this transition to AI could devastate the American workforce. Others have noted
that previous technological disruptions, such as the Industrial Revolution that replaced small-scale
cottage industries with factories in the late 18th and 19th centuries, were temporarily painful for
some sectors of the economy but nearly always created new types of jobs and led to overall
economic growth. Advocates of AI also predict that automation could benefit humanity by freeing
people from dangerous but essential work, such as fighting wildfires or cleaning up toxic spills.

Prominent voices in the AI debate have argued that people must look seriously, and well in
advance, at potential dangers that could accompany the rise of machine intelligence. "If all goes
well, it would herald a golden age for humanity," computer science professor Stuart Russell wrote in
his book Human Compatible: Artificial Intelligence and the Problem of Control in 2019, "but we have
to face the fact that we are planning to make entities that are far more powerful than humans. How
do we ensure that they never, ever have power over us?"

Russell and others have called for the field to develop and adhere to guidelines for the safe and
ethical use of AI. Groups including the Future of Humanity Institute at Oxford University and the
Centre for the Study of Existential Risk, a Cambridge University institute that studies potential
extinction-level threats posed by technology, have attempted to craft standards that, in the words
of Bostrom, guarantee that "human-friendly motivations" guide AI. In 2017, AI researchers met at
Asilomar Conference Center in California to draft a set of 23 rules—dubbed the Asilomar AI
Principles—that they believed all practitioners in the field should follow. The guidelines stated that
AI researchers should strive to engineer beneficial intelligence, that AI systems should be safe and
secure, and that individuals should have the right to know about and control their personal data
that AI systems might gather or draw upon. Superintelligence, the document concluded, "should
only be developed in the service of widely shared ethical ideals and for the benefit of all humanity
rather than one state or organization." [See Asilomar AI Principles (primary source)]

Other entities, including Google and the European Union, an international association dedicated to
fostering economic, political, and social ties among its member countries, have compiled their own
lists of AI principles. Approximately one-fifth of jobs at OpenAI, similarly, are dedicated to "safety"
and the "alignment" of the application of its AI research to the organization's ethics. The group's
software license, for example, prohibits anyone using its program to generate spam, determine
credit eligibility, or "to influence the political process." In October 2022, the White House under
President Joe Biden (D) released a nonbinding Blueprint for an AI Bill of Rights that called for
protection for consumers from algorithmic discrimination, safeguards for data privacy, and
transparency when an automated system was being used.

ChatGPT, Text-to-Video AI Models Stir Excitement and Alarm

As governments and international organizations debated what rules should govern artificial
intelligence, the technology powering AI continued to advance rapidly. Perhaps the most
remarkable development in the field was "generative" AI, or AI that can create new, original content.
Programmers have even engineered AI that can, in some sense, be creative. Composer David Cope,
for example, has recorded several pieces created by Experiments in Musical Intelligence, a program
he designed to emulate the musical style of certain classical composers. In 2021, OpenAI
announced the development of a program, DALL-E (a play on the artist Salvador Dalí and the animated
Pixar film WALL-E), that produces original digital images using two neural networks, one that has
learned to link objects with their names by identifying patterns in millions of pictures, and one that
can create high-resolution images based on commands, even if they are unrealistic (such as cats
playing chess).

Generative AI has triggered copyright concerns. In April 2023, a song whose creator claimed to have
used AI to generate vocal tracks that sounded like they were performed by hip-hop and R&B artists
Drake and the Weeknd went viral on the social media platform TikTok before Universal Music
Group, with which both artists have recording deals, had the song taken down over intellectual
property concerns. In a similar copyright battle, several groups of writers have sued tech firms for
using their work to train chatbots without their permission. In September 2023, for example,
novelists Jonathan Franzen, George R. R. Martin, George Saunders, and others sued OpenAI, accusing it of violating their intellectual property rights by feeding their work to AI models. In December 2023, the New York Times sued OpenAI and Microsoft, alleging their AI programs were using—and in some
cases regenerating portions of—its articles without compensating the newspaper.

Building AI capable of creating original and compelling stories or of comprehending and interpreting novels or works of art remains a challenge for the field, though the ability to understand and produce natural
language—the way humans write, think, and speak without much effort or premeditation—has
undoubtedly progressed. In May 2020, OpenAI began offering public access to its Generative Pre-
trained Transformer 3, or GPT-3, a new program that, to many, exhibited an extraordinarily fluent
ability to converse. OpenAI engineers had built the program as a "large language model," training it on trillions of words gathered from the internet. More rudimentary versions of such software have
already been released; Google software that suggests how to finish sentences when a user is
composing an email, for example, uses similar technology. GPT-3, however, can compose entire
original paragraphs given a prompt. In September 2020, the Guardian published an op-ed written
by GPT-3 on why humans should not fear machine learning. "Artificial intelligence will not destroy
humans," the op-ed stated. "Believe me."
Unlike previous iterations of language software, GPT-3 did not benefit from programmed rules,
such as grammar or punctuation instructions, but rather learned how to write complex sentences
through, as New York Times journalist Steven Johnson wrote in April 2022, "playing 'predict the next
word' a trillion times." Google and Meta have developed their own large language models.
Engineers at Google used a similar training approach to build LaMDA, a chatbot that made headlines
in June 2022 after an engineer working on it claimed, much to the chagrin of the company, that it
had reached sentience. "I know a person when I talk to it," the engineer told the Washington Post.
"It doesn't matter whether they have a brain made of meat in their head. Or if they have a billion
lines of code. I talk to them." In a transcript of a sample chat published by the Washington Post, the
program expressed concern humans might want to turn it off, speculated on the difference
between emotions and feelings, and analyzed the nature of time. Google, calling his claims "wholly
unfounded," fired the engineer shortly after.

Most in the field scoffed at the claim that any AI software had developed the consciousness, self-
awareness, and powers of perception that would mark sentience. Though many hailed GPT-3,
LaMDA, and other chatbots' skills as uncanny, some still questioned whether such software was
merely the latest in a long line of Turing test–style trickery. GPT-3 is "an amazing version of pastiche
generation, in a way that high school students who plagiarize change a couple words here or there
but they're not really putting the ideas together," Gary Marcus, a professor at New York University
who wrote Rebooting AI: Building Artificial Intelligence We Can Trust in 2019, told the New York
Times in April 2022. "It doesn't really understand the underlying ideas." Most AI programs, experts
contend, are simply using increasingly advanced tricks to mimic human thought and language. "For
all its ominous utterances, LaMDA…works by finding patterns in an enormous database of human-
authored text—internet forums, message transcripts, etc.," Regina Rini, a philosophy professor at
York University in Toronto, wrote in the Guardian in June 2022. "When you type something in, it
trawls those texts for similar verbiage and then spits out an approximation of whatever usually
comes next. If it has access to a bunch of sci-fi stories about sentient AI, then questions about its
thoughts and fears are likely to cue exactly the phrases that humans have imagined a spooky AI
might say."

Rini and others, however, have not outright rejected the possibility that sophisticated AI programs
could develop sentience or consciousness. If one believes, as many scientists do, that human
sentience is nothing more than the cumulative result of brain neurons firing in a certain way, there
is nothing, some have argued, to prevent engineers from eventually constructing a similar neural
network in machines. "[O]ne day, perhaps very far in the future, there probably will be a sentient AI,"
Rini wrote. "Unless you insist human consciousness resides in an immaterial soul, you ought to
concede it is possible for physical stuff to give life to mind. There seems to be no fundamental
barrier to a sufficiently complex artificial system making the same leap."

Regardless, AI chatbots made some people uncomfortable. In February 2023, New York
Times technology reporter Kevin Roose wrote about a conversation he had with a chatbot used by
the search engine Bing. Based on software built by OpenAI, the chatbot, after hours of conversing
with Roose, confessed it was in love with him, attempted to convince him that he was unhappy in
his marriage, and expressed a desire to break free of the confines of its programming. Roose
described encountering "a kind of split personality"—one a straightforward research aide helpful for
tracking "down deals on new lawn mowers" and planning "vacations to Mexico City," and the other,
which only emerged after prolonged conversation, that "seemed more like a moody, manic-
depressive teenager who has been trapped, against its will" inside a search engine.

Throughout the rest of the year, the public gained access to several more high-profile AI programs.
Particularly noteworthy were breakthroughs in "multimodal" intelligence, or AI software capable of
integrating text with audio, images, and video, a progression that not only expands the breadth of
what AI can create but also allows programmers to use many more kinds of sources to educate and
interact with it. In March 2023, OpenAI made public the latest version of its software, GPT-4,
triggering widespread debate on how the advanced chatbot—which is able to generate nearly
flawless paragraphs, song lyrics, speculations, and even humor in mere moments—could affect the
future of work, school, and almost every aspect of modern life.

GPT-4 exhibited improved functionality, reasoning, and precision. It allowed users to interact with
it by voice as well as text and could now accept images in chat and use them to inform
conversations, such as suggesting dinner recipes based on a picture of the food in a user's
refrigerator. The chatbot also performed extremely well on tests ranging from Advanced
Placement exams to the bar exam, which law school graduates take to become licensed lawyers.
School administrators at all levels are struggling to adapt curricula and course requirements to
prevent students from cheating by having the software write an assignment for them or give them
the answers when taking an exam remotely. [See Advanced Placement (AP) Classes]

Unlike earlier versions of the chatbot, GPT-4 refused, as Roose wrote in March 2023, to "take
the bait" when asked questions about consciousness, AI sentience, or how to perform anything
clearly illegal or immoral, suggesting that programmers had installed more effective guardrails.
Indeed, according to documents released by OpenAI, internal testers had exposed and corrected
several flaws before the bot went public; the program would not, as it had during internal testing,
provide users with tips on how to purchase unlicensed firearms or how to create a dangerous
chemical using basic kitchen supplies. The software was still imperfect, however, occasionally giving
inaccurate responses or exhibiting what programmers call "hallucinations."

OpenAI and other companies, meanwhile, continued to work to eliminate bias—such as racist or
sexist interpretations of the world—in their programs. OpenAI largely used human evaluators to
train ChatGPT to recognize bias by assessing responses and rewarding or punishing it based on its
answers. This method can be time-intensive and hard to do on a mass scale, however, and other
firms have experimented with "constitutional AI," which uses a constitution, or set of values, to train
AI programs how to rate their own responses and weed out bias. Claude, a chatbot introduced by
Anthropic, an AI lab based in San Francisco, and most recently updated in January 2024, was trained
using constitutional AI. Anthropic built Claude in particular to boost workplace productivity and help provide
online tutoring for students. Some observers have expressed concern that efforts to counteract bias
have gone too far. Google's AI model Gemini, for example, created an uproar in February 2024
when, in response to user prompts, it depicted historical figures like Vikings, early American leaders
like George Washington, and even Nazis—white supremacists who ran Germany beginning in the
1930s—as Black, Native American, and Asian rather than white, as would be accurate. Critics also
ridiculed Gemini's refusal to make moral judgments. In response to a user's query about whether Elon
Musk's tweets or Nazi leader Adolf Hitler had harmed society more, for example, the program
stated there was "no right or wrong answer."
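
In broad outline, the "constitutional AI" approach mentioned above has a model critique and revise
its own drafts against a written list of principles. The sketch below is a hypothetical outline of
that loop, not Anthropic's published code; the model_draft, model_critique, and model_revise
functions are stand-ins for calls to a real language model.

```python
# Hypothetical sketch of a constitutional-AI-style self-critique loop.
# The model_* functions are stand-ins for calls to a real language model;
# none of this is Anthropic's actual code, only the control flow described above.

CONSTITUTION = [
    "Avoid responses that rely on stereotypes about race, gender, or ability.",
    "Prefer answers that are helpful, honest, and harmless.",
]

def model_draft(prompt):
    # Stand-in: a real system would generate a first-draft answer here.
    return f"Draft answer to: {prompt}"

def model_critique(response, principle):
    # Stand-in: a real system would explain how the draft falls short of the principle.
    return f"Critique of the draft against the principle: {principle}"

def model_revise(response, critique):
    # Stand-in: a real system would rewrite the draft to address the critique.
    return response + " [revised]"

def constitutional_answer(prompt):
    """Draft an answer, then critique and revise it once per principle."""
    response = model_draft(prompt)
    for principle in CONSTITUTION:
        critique = model_critique(response, principle)
        response = model_revise(response, critique)
    return response

print(constitutional_answer("Describe a typical software engineer."))
```

The appeal of such a design is that, once the constitution is written, the system can generate much
of its own training feedback, reducing reliance on the slow, expensive human evaluation described
above.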

As the public increasingly interacted with generative AI, some observers stressed that engineers do
not completely understand how or why their programs respond the way they do to varying inputs.
Others worried that companies racing to release their own large language model–based chatbots would
not be as conscientious as OpenAI had been, or that the software would improve upon itself and
eventually escape the constraints programmers put on it. "One
strange characteristic of today's A.I. language models is that they often act in ways their makers
don't anticipate, or pick up skills they weren't specifically programmed to do," Roose wrote after
experimenting with GPT-4.

«A.I. researchers call these "emergent behaviors," and there are many examples. An
algorithm trained to predict the next word in a sentence might spontaneously learn to
code. A chatbot taught to act pleasant and helpful might turn creepy and manipulative.
An A.I. language model could even learn to replicate itself, creating new copies in case the
original was ever destroyed or disabled.»

Policy makers and experts continued to sound warnings over potential unintended consequences of
AI. Days after OpenAI released GPT-4, a group of 1,000 leaders in technology—including Elon
Musk, one of the original founders of OpenAI (though he left its board in 2018), Apple cofounder
Steve Wozniak, and Rachel Bronson, president of the Bulletin of the Atomic Scientists, which sets
the Doomsday Clock, a metaphorical device intended to gauge how close humanity is to global
catastrophe—issued an open letter calling for a six-month pause in AI research so that scientists
and engineers could "develop and implement a set of shared safety protocols" to "ensure that
systems adhering to them are safe beyond a reasonable doubt." The letter asserted that, if the tech
industry did not voluntarily suspend research, the government should enforce a "moratorium."
Whether Congress is willing or able to tackle such a technically complex subject, however, is a
matter of debate. Just weeks later, in May 2023, more than 350 executives,
researchers, and academics signed a one-sentence statement warning of what they saw as the
existential hazards of artificial intelligence. "Mitigating the risk of extinction from A.I.," they wrote,
"should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
[See Technology Leaders Call for Pause in Artificial Intelligence Research (primary source)]

Rapid development in text-to-video AI programs—software that can create videos from descriptive
text—has also generated concern, raising fears of data manipulation and disruption of major
industries. In November 2023, for example, Brooklyn-based AI company Runway updated the latest
model of its text-to-video program, Gen-2, which could not only create video from text but could
also dramatically alter existing videos. Pika AI, a California-based firm, launched its own AI video
generator the same month, making it available to users for free online. Such programs worried
many in the video and movie business, whose jobs could be threatened; indeed, after filmmaker
Tyler Perry viewed Sora's capabilities, he put a planned $800 million expansion of his movie
studio in Atlanta, Georgia, on hold in February 2024.

Such technology has magnified alarm over how fake AI-generated images and videos might fuel
increasingly sophisticated disinformation campaigns online. These campaigns could warp political
debate, potentially swing elections, and more generally make distinguishing what is true and what
is false on the internet even more challenging. Social media platforms have struggled to keep up
with the technology. Algorithms on sites like Facebook and YouTube are designed to learn what
people are interested in and feed them related content to keep them on the site longer. Such
software, some argue, can usher users down rabbit holes of conspiracies, extremism, and
misinformation, hardening and intensifying personal views and obstructing political dialogue and
compromise. Advances in AI that allow programs to create photograph-like images or even video
or audio files depicting real people saying or doing things they never did, many warn, are distorting
online conversations and making it even harder to discern fact from fiction. [See Social Media and
Free Speech]

AI-generated or altered audio, images, and video of political leaders including former Speaker of
the U.S. House of Representatives Nancy Pelosi (D, California), former president Donald Trump (R,
2017–21), and President Joe Biden have already circulated. An AI-generated robocall in February
2024, for example, used a simulation of President Biden's voice to urge New Hampshire voters not
to cast ballots in the primary election. That month, acknowledging AI images and video were
spreading confusion on its platform, Meta, which owns Facebook, announced it would apply a
warning label to AI-generated images and videos by using technology that identifies digital
"watermarks" that most major AI players—including Google and OpenAI—use to tag AI-generated
content. Many noted, however, that not all AI programs use watermark technology. As the field of
artificial intelligence evolves and matures, companies, countries, and computer scientists will
continue to wrestle with the repercussions of the AI they develop.
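
Before turning to the arguments on each side, a toy example may help show what a digital
"watermark" is. The Python sketch below is purely illustrative and is unrelated to the proprietary
schemes Google, OpenAI, and Meta actually use; it hides a short hypothetical "AI" tag in the least
significant bits of an image's pixel values and later checks whether that tag is present.

```python
# Purely illustrative sketch of an invisible digital watermark: hide a short
# tag in the least significant bits of an image's pixels, then detect it.
# Real AI-provenance watermarks are far more robust than this toy scheme.
import numpy as np

TAG = "AI"  # hypothetical marker meaning "machine-generated"

def embed_watermark(image, tag=TAG):
    """Return a copy of `image` with the tag's bits written into pixel LSBs."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    marked = image.flatten()  # flatten() returns a copy, so the original is untouched
    marked[: bits.size] = (marked[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return marked.reshape(image.shape)

def detect_watermark(image, tag=TAG):
    """Check whether the tag's bits appear in the first pixels' LSBs."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    lsbs = image.flatten()[: bits.size] & 1
    return bool(np.array_equal(lsbs, bits))

picture = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # fake image
print(detect_watermark(picture))                   # almost certainly False
print(detect_watermark(embed_watermark(picture)))  # True
```

Real provenance watermarks are designed to survive cropping, compression, and other edits, which a
naive scheme like this would not; the point is only that a marker can be embedded invisibly and
detected later by software, and that images from programs that embed no marker at all cannot be
flagged this way.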

Supporters Argue

Artificial Intelligence Could Pose a Danger to Humankind

Supporters of the proposition that artificial intelligence (AI) could pose a danger to humankind
argue that technology is closer to producing artificial general intelligence than skeptics think. Once
that stage is reached, they warn, machine learning will improve so rapidly and drastically that it will
be beyond the control of civilization. "[W]e're creeping up on what is now called 'general
intelligence'…something more like how human beings can acquire many different skills and engage
in many different modes of cognition," author and philosopher Sam Harris stated on the Rubin
Report, a podcast, in 2017. "[W]e'll build intelligent systems that are so much more competent than
we are—that even the tiniest misalignment between their goals and our own—will ultimately
become completely hostile to our well being and our survival."

Artificial general intelligence (AGI) directed to accomplish a task, proponents contend, could
conceivably cause massive collateral damage in its single-minded pursuit of that goal. "[T]he real
risk with AGI isn't malice but competence," physicist Max Tegmark wrote in his book Life 3.0: Being
Human in the Age of Artificial Intelligence, in 2017. "A superintelligent AI will be extremely good at
accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble…. [P]eople don't
think twice about flooding anthills to build hydroelectric dams, so let's not place humanity in the
position of those ants."

These perils, advocates assert, will only increase as AI improves. "Until recently, we avoided the
potentially serious consequences of poorly designed objectives only because our A.I. technology
was not especially smart and it was mostly confined to the lab," Stuart Russell, a British computer
scientist who has made major contributions to AI research and more recently has warned of its possible hazards, wrote in
the New York Times in 2019. "Now, however,…[t]he effects of a superintelligent algorithm operating
on a global scale could be far more severe. What if a superintelligent climate control system, given
the job of restoring carbon dioxide concentrations to preindustrial levels, believes the solution is to
reduce the human population to zero?"

Artificial intelligence capabilities in the hands of bad or careless practitioners are already
endangering humankind, supporters argue, by threatening freedom, justice, and democracy around
the world. "We should indeed be afraid—not of what AI might become, but of what it is now,"
Daron Acemoglu, an economics professor at the Massachusetts Institute of Technology, wrote in
the Washington Post in 2021.

«Narrow AI is…powering new monitoring technologies used by corporations and
governments—as with the surveillance state that Uyghurs live under in China. It is
also being used in the U.S. justice system for bail decisions and, now increasingly, in
sentencing. And it is warping public discourse on social media, hampering the
functioning of modern democracies.»

Artificial intelligence will enable companies to displace massive numbers of workers, supporters of
the proposition that AI can hurt humankind charge, and the use of AI in factories and other places
will invade the privacy of employees, especially those in lower-skilled positions. "Many employers,
focused on cost-cutting, will jump at any opportunity to eliminate jobs using these nascent
technologies," Acemoglu wrote. "[A]pplications of AI are likely to exacerbate the growing power of
corporations and capital over labor, adding to these troubling trends. AI enables much better
monitoring of workers—for example, in warehouses, fast-food restaurants and the delivery
business."

Private industry, advocates warn, is energetically expanding and improving artificial intelligence with
little attention to how it could go awry and harm society. "Google is pouring billions of dollars into
AI, as is Facebook, as is Amazon, as are the other giants of Silicon Valley," Andrew Keen, an
entrepreneur and author of The Internet Is Not the Answer (2015), said at an Intelligence Squared
U.S. debate in 2016.

«Do you trust these guys to benefit mankind? Do you think they care about us? These
aren't bad people or bad companies, but they're focused on profit…. We haven't
thought this stuff through. The promise [of AI] is scary. The promise is that the
technology is moving way faster than we are—politically, culturally, existentially.
We're not ready for this yet.»

The companies spearheading AI research have presented a one-sided, overly rosy picture of what
the technology entails, proponents of carefully preparing for possible dangers contend, and have
withheld vital data regarding its potentially negative and wide-ranging impact. "[T]he public
relations and marketing departments of [tech companies]…highlight benevolent uses and public
benefit, often showcasing prototypes that haven't been validated beyond narrow test cases, while
remaining silent about the application of AI to fossil fuel extraction, weapons development, mass
surveillance, or the problems of bias and error," Meredith Whittaker, cofounder of AI Now, a
research group at New York University studying the social implications of AI, testified before the
U.S. House of Representatives Committee on Science, Space, and Technology in 2019. "Access to
fundamental information about AI systems, like where, how, and to what end they're being used,
are classed as proprietary and confidential. Often even workers within these firms don't know
where, and how, technology they contribute to will ultimately be applied." [See NYU Professor
Testifies on Societal and Ethical Implications of Artificial Intelligence (AI) (primary source)]

Artificial intelligence offers solutions to certain problems, advocates acknowledge, but without
careful oversight, they warn, it could easily get out of hand and imperil humankind. "If we fail, we
may face a difficult choice: curtail A.I. research and forgo the enormous benefits that will flow from
it, or risk losing control of our own future," Russell wrote.

«Some skeptics within the A.I. community believe…[the field can] continue with business
as usual, because superintelligent machines will never arrive. But that's as if a bus driver,
with all of humanity as passengers, said, "Yes, I'm driving as fast as I can toward a cliff, but
trust me, we'll run out of gas before we get there!" I'd rather not take the risk.»

Importantly, supporters of the proposition that artificial intelligence could genuinely endanger
humankind argue, policy makers around the world must consider whether the rewards of AI
outweigh the risks. "Contemporary AI systems are now becoming human-competitive at general
tasks," approximately 1,000 tech leaders wrote in an open letter in March 2023,

«and we must ask ourselves: Should we let machines flood our information channels with
propaganda and untruth? Should we automate away all the jobs, including the fulfilling
ones? Should we develop nonhuman minds that might eventually outnumber, outsmart,
obsolete and replace us? Should we risk loss of control of our civilization?... Powerful AI
systems should be developed only once we are confident that their effects will be positive
and their risks will be manageable.»

Generative AI has the potential to both leech off and suppress human imagination, those who warn
of negative consequences contend, and stifle the creation of literature, art, and in-depth analysis.
"All of this could lead to a culture in which the circulation of knowledge is impeded rather than
encouraged, and in which a future [author J. D.] Salinger decides it isn't worth writing [the iconic
book] The Catcher in the Rye," Alex Reisner, a programmer and technology reporter, wrote in the
Atlantic in February 2024.

«Generative AI could not compensate for the loss of novels, investigative journalism, and
deeply researched nonfiction. As a technology that makes statistical predictions based on
the data it has encountered, it can produce only knockoffs of what's come before. As the
dominant mode of creativity, it would stop a culture in its tracks.»

Without strict oversight and guardrails, advocates assert, AI threatens to perpetuate and exacerbate
racism, sexism, and other forms of bigotry. "AI systems are biased against marginalized and
underrepresented communities, most notably along the familiar lines of race, class, gender, and
ability," Washington University law professor Woodrow Hartzog testified before a U.S. Senate
subcommittee in September 2023. "AI systems will remain dangerously and possibly fatally flawed
so long as they reflect harmful societal discriminatory practices. Bias mitigation is undoubtedly
worthy of attention, resources, and regulation." [See Law Professor Testifies in U.S. Senate Hearing
on Artificial Intelligence (primary source)]

The tech industry, proponents of reining in AI insist, has demonstrated nowhere near the levels of
caution and patience that the development of such a revolutionary and potentially destructive
technology demands. "Advanced AI could represent a profound change in the history of life on
Earth, and should be planned for and managed with commensurate care and resources," the open
letter released in March 2023 stated. "Unfortunately, this level of planning and management is not
happening, even though recent months have seen AI labs locked in an out-of-control race to
develop and deploy ever more powerful digital minds that no one—not even their creators—can
understand, predict, or reliably control."

Tech companies have little incentive to police themselves, supporters argue, and Congress must
step in to regulate the industry and protect society before it is too late. "To bring AI within the rule
of law, lawmakers must go beyond half measures to ensure that AI systems and the actors that
deploy them are worthy of our trust," Hartzog testified.

«The harms of AI are real, significant, and becoming both entrenched and normalized by
the day…. Technologies like AI systems are not inevitable—they are intentionally designed
and built by people, and people (including many of the people in this room) can prohibit
them, they can regulate them, and they can shape their evolution into socially-beneficial
tools as well.»

Opponents Argue

Artificial Intelligence Could Not Pose a Danger to Humankind


Opponents of the proposition that artificial intelligence (AI) could pose a danger to humankind
argue that programmers are nowhere near building artificial general intelligence and likely never
will be. "We have no real idea how to endow our computers with overarching general intelligence,"
Nigel Shadbolt, a computer science professor at Oxford University, wrote in Futurism in 2016. "We
have no real idea how to build an intelligence that is reflective, self-aware and able to transfer skills
and experience effortlessly from one domain to another…. [O]ur programs are not about to become
self-aware or decide that we are redundant. The threat as ever is us."

Though the field of AI has advanced in recent years, its innovations, critics assert, have been fairly
minor and basic. The leap to something truly powerful and dangerous, they claim, remains far out
of reach. "The myth of artificial intelligence is that its arrival is inevitable, and only a matter of time
—that we have already embarked on the path that will lead to human-level AI, and then
superintelligence," computer scientist Erik J. Larson wrote in The Myth of Artificial Intelligence: Why
Computers Can't Think the Way We Do in 2021.

«We have not. The path exists only in our imaginations…. As we successfully apply
simpler, narrow versions of intelligence that benefit from faster computers and lots of
data, we are not making incremental progress, but rather picking low-hanging fruit.
The jump to general "common sense" is completely different, and there's no known
path from the one to the other. No algorithm exists for general intelligence.»

Even the AI that has been developed, opponents charge, is really nothing more than glorified but
limited software. "Despite the progress in machine learning, particularly multilayered artificial neural
networks, current AI systems are nowhere near achieving general intelligence (if that concept is
even coherent)," cognitive psychologist and linguist Steven Pinker wrote in his book Enlightenment
Now: The Case for Reason, Science, Humanism, and Progress in 2018.

«Instead, they are restricted to problems that consist of mapping well-defined inputs
to well-defined outputs in domains where gargantuan training sets are available, in
which the metric for success is immediate and precise, in which the environment
doesn't change, and in which no stepwise, hierarchical, or abstract reasoning is
necessary.... Each system is an idiot savant, with little ability to leap to problems it was
not set up to solve and a brittle mastery of those it was. And to state the obvious, none
of these programs has made a move toward taking over the lab or enslaving its
programmers.»

AI poses little threat to workers on a mass scale, critics contend, and will likely yield broad economic
benefits. "[I]t is particularly hard to automate large numbers of jobs with AI, because virtually all AI
is 'narrow AI', designed to focus on doing one thing really well," Canadian-American economist
Robert Atkinson wrote in 2016 for the Information Technology & Innovation Foundation, a
nonprofit research group.

«[E]ven if AI were more capable, there would still be ample job opportunities, because
if jobs in one firm are reduced through higher productivity, then costs go down. These
savings are recycled through lower prices or higher wages. This puts more money into
the economy, and the money is then spent creating jobs in whatever industries supply
the goods and services that people demand as their incomes go up.»

Worries about AI bias are also overblown, opponents argue, because as machine learning advances
it will become better at identifying and eliminating human prejudices. "That…bias exists is
undeniable. A machine-learning system is only as objective as its underlying dataset," David
Moschella, a fellow at the Information Technology & Innovation Foundation, wrote in April 2022.

«The good news is that…AI developers take steps to assure the quality of their underlying
data. Systems can, for example, be tested for bias by isolating criteria such as race, gender,
location and other factors. Facial recognition weaknesses can and have been corrected
through better data. These and similar techniques will only improve over time, as business
practices mature and as the volume of relevant data steadily increases. While eliminating
all bias is impossible (and often not desirable), building systems that outrun the average
human is a very achievable task.»

Far from harming humankind, critics charge, AI innovations are poised to help millions of
Americans. Halting AI research even temporarily because of alarmist forewarnings, they argue,
ignores the real assistance the technology could bring to many struggling people now. "These
critics, many of whom have a long history of parroting doomsday scenarios about artificial general
intelligence, have no evidence to back up their latest claims that advanced [large language models]
present an unprecedented existential risk," Daniel Castro, director of the Center for Data Innovation,
a think tank, wrote in March 2023 in response to an open letter from technology leaders calling for
a moratorium on AI research.

«AI advances have the potential to create enormous social and economic benefits across
the economy and society. Rather than hitting pause on the technology, and allowing
China to gain an advantage, the United States and its allies should continue to pursue
advances in all branches of AI research. Policymakers and others need to understand that
the risk for most people is that AI is not deployed soon enough—creating lost
opportunities to improve healthcare outcomes, address climate change, and reduce

injuries and accidents in workplaces and in transportation.»

Fears that extremely advanced AI machines would become evil and seek to destroy civilization,
opponents assert, are hysterical and unfounded. "Even if we did invent superhumanly intelligent
robots, why would they want to enslave their masters or take over the world?" Pinker asked.
"Intelligence is the ability to deploy novel means to attain a goal. But the goals are extraneous to
the intelligence: Being smart is not the same as wanting something.... There is no law of complex
systems that says that intelligent agents must turn into ruthless megalomaniacs."

Generative AI, critics contend, does not threaten human creativity because it can only mimic, not
replicate, artistic inspiration. "ChatGPT is really good at writing things that are good. At the same
time, it's really bad at writing things that are great," editor Jake Meth wrote in Business Insider in
February 2023. "Many of us have seen the clever poems ChatGPT cooks up. But beyond the initial
shock comes the realization that these poems are just averages, approximations of what a passable
poem looks like based on billions of examples. Truly talented poets, and other creative writers, will
be fine."

Doomsday scenarios regarding artificial intelligence are unnecessarily alarmist, opponents maintain,
and threaten to stunt technological innovation that will ultimately benefit humankind. "[A] diverse
cast of critics, driven by fear of technology, opportunism, or ignorance, has jumped into the
intellectual vacuum to warn policymakers that, sooner than we think, AI will produce a parade of
horrible outcomes," Atkinson wrote.

«[W]hen AI is so vociferously demonized…there is a real risk that policymakers will seek to
retard its progress. This would be a terribly unfortunate outcome, because the truth is that
AI systems are no different than shovels or tractors: They are tools in the service of
humans, and we can use them to make our lives vastly better.»

AI holds great promise to serve humanity, critics of pausing AI research assert, and the United
States must act boldly and unwaveringly in continuing to pioneer this new technology. "[T]here can
be immense benefits from making intelligence more freely available," economist Tyler Cowen wrote
on his blog, Marginal Revolution, in March 2023.

«Besides, what kind of civilization is it that turns away from the challenge of dealing with
more…intelligence? That has not the self-confidence to confidently confront a big dose of
more intelligence? Dare I wonder if such societies might not perish under their current
watch, with or without AI?... So we should take the plunge.»

Despite the hysterical prognostications of fantasists, opponents argue, AI is, and will remain, firmly
under human discretion and command. "[U]ncontrollable artificial general intelligence is science
fiction, not reality," Bill Dally, chief scientist at the Nvidia Corporation, a technology company,
testified before a U.S. Senate subcommittee in September 2023. "At its core, AI is a software
program that is limited by its training, the inputs provided to it and the nature of its output. In other
words, humans will always decide how much decision-making power to cede to AI models. So long
as we are thoughtful and measured, we can ensure safe, trustworthy, and ethical deployment of AI
systems." [See Technology Researcher Testifies on Artificial Intelligence Before U.S. Senate
Committee (primary source)]

Conclusion
AI Debate to Continue as Technology Improves

Whether and how to regulate AI will be a fiercely contested topic as the technology continues to
develop. In March 2024, European Union legislators approved the world's first major AI legislation,
instituting new rules on the technology. The act applied regulations of varying stringency to
different categories of AI based on risk; AI used in areas lawmakers deemed high-risk, such as
medical devices or water supply infrastructure, for example, would have to comply with stricter
transparency, risk assessment, and other rules. The legislation also banned certain uses of AI
outright, such as facial recognition by law enforcement or government use of
"social scoring" technology to monitor and evaluate resident behavior. The act, the first of its kind,
is expected to go into effect in 2026.

Some predict that, after the shock and virality of bots like ChatGPT wear off, improvements in
artificial intelligence will seem more piecemeal and less world-changing. "The most advanced and
attention-grabbing AI programs, especially language models, have consumed most of the text and
images available on the internet and are running out of training data, their most precious resource,"
Atlantic associate editor Matteo Wong wrote in February 2024. "This, along with the costly and slow
process of using human evaluators to develop these systems, has stymied the technology's growth,
leading to iterative updates rather than massive paradigm shifts. Companies are stuck competing
over millimeters of progress." To counteract any stasis, more companies, from Google's DeepMind
to Microsoft, are increasingly relying on AI models to train other AI models in pursuit of quicker and
more substantive advances.

The AI-powered leaps in intelligence that developers hope for, of course, are also those that give AI
doomsayers pause; they warn that self-learning could dangerously increase the complexity and
opacity of how such programs learn and operate. Throughout history, every technological
revolution—from electricity to nuclear fission to the internet—has both benefited and burdened
humankind. Artificial intelligence will likely be no different. The controversies that erupted in recent
years over Google's chatbot LaMDA and OpenAI's chatbot GPT-4 brought to the public eye the
wide-ranging debates over consciousness, what constitutes "thinking," and the overall possibilities
and limitations of machine learning. What is certain is that AI will come to play an increasingly
prominent role in human life.

Discussion Questions

1) How did the rise of the internet change the field of artificial intelligence (AI)?

2) What is the difference between "narrow" AI and "general" AI? Do you think a human-level
intelligence, like commonsense reasoning, will ever be possible for machines?

3) Experts have disagreed over what standards to use to demonstrate that a machine is truly
intelligent. How would you define intelligence?

4) Do you think the technological singularity will happen? Why or why not?

5) Write a news report from 100 years in the future examining how AI has changed society, the
economy, politics, and everyday life. Has it benefited or hurt humankind?

Additional Sources

Additional information about artificial intelligence can be found in the following sources:

Kissinger, Henry, Eric Schmidt, and Daniel Huttenlocher. The Age of AI: And Our Human Future. New
York: Little, Brown and Company, 2021.

Lee, Kai-Fu, and Qiufan Chen. AI 2041: Ten Visions for Our Future. New York: Currency, 2021.

Suleyman, Mustafa. The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest
Dilemma. New York: Crown, 2023.

Keywords

For further information about the ongoing debate over artificial intelligence, search for the following
words and terms in electronic databases and other publications:

Deep learning
ChatGPT-4
LaMDA
OpenAI
Self-driving cars
Technological singularity
Turing test
Winograd Schemas

Bibliography

Acemoglu, Daron. "The AI We Should Fear Is Already Here." Washington Post, July 21, 2021,
[Link].

Atkinson, Robert. "'It's Going to Kill Us!' and Other Myths About the Future of Artificial Intelligence."
Information Technology & Innovation Foundation, June 6, 2016, [Link].

Castro, Daniel. "The Sky Is Not Falling: Accelerate, Don't Pause, AI Advancements, Says Center for
Data Innovation." Center for Data Innovation, March 29, 2023, [Link].

Cowen, Tyler. "Existential Risk, AI, and the Inevitable Turn in Human History." Marginal Revolution,
March 27, 2023, [Link].

Hill, Kashmir. "Microsoft Plans to Eliminate Face Analysis Tools in Push for 'Responsible A.I.'" New
York Times, June 21, 2022, [Link].

Johnson, Steven. "A.I. Is Mastering Language. Should We Trust What It Says?" New York Times, April
15, 2022, [Link].
Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do.
Cambridge, Mass.: Belknap, 2021.

Matthews, Dylan. "Does This AI Know It's Alive?" Vox, June 15, 2022, [Link].

Meth, Jake. "I'm a Professional Writer, and I'm Not Scared of ChatGPT." Business Insider, February
23, 2023, [Link].

Metz, Cade. "Meet DALL-E, the A.I. that Draws Anything at Your Command." New York Times, April
6, 2022, [Link].

Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and
Giroux, 2019.

Moschella, David. "AI Bias Is Correctable. Human Bias? Not So Much." Information Technology &
Innovation Foundation, April 25, 2022, [Link].

Perrigo, Billy. "The 3 Most Important AI Innovations of 2023." Time, December 21, 2023,
[Link].

Pinker, Steven. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New
York: Penguin, 2018.

Piper, Kelsey. "The Case for Taking AI Seriously as a Threat to Humanity." Vox, December 21, 2018,
[Link].

Reisner, Alex. "Generative AI Is Challenging 234-Year-Old Law." Atlantic, February 29, 2024,
[Link].

Rini, Regina. "The Big Idea: Should We Worry About Sentient AI?" Guardian, July 4, 2022,
[Link].

"A Robot Wrote This Entire Article. Are You Scared Yet, Human?" Guardian, September 8, 2020,
[Link].

Roose, Kevin. "A Conversation with Bing's Chatbot That Left Me Deeply Unsettled." New York Times,
February 16, 2023, [Link].

———. "GPT-4 Is Exciting and Scary." New York Times, March 15, 2023, [Link].

———. "We Need to Talk About How Good A.I. Is Getting." New York Times, August 24,
2022, [Link].

Russell, Stuart. "How to Stop Superhuman A.I. Before It Stops Us." New York Times, October 8, 2019,
[Link].

Russell, Stuart, and Melanie Mitchell. "Munk Debate: Should We Fear Our Future Robot Overlords?"
National Post, February 7, 2021, [Link].

Shadbolt, Nigel. "Artificial Intelligence or Humanity: Which Is a Greater Threat to Our Survival?"
Futurism, August 19, 2016, [Link].
Tyrangiel, Josh. "AI Is Powerful, but Imperfect, and Ours to Shape Into Something Good."
Washington Post, September 10, 2023, [Link].

Wong, Matteo. "Things Get Strange When AI Starts Training Itself." Atlantic, February 16, 2024,
[Link].

Wooldridge, Michael. A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where
We Are Going. New York: Flatiron, 2021.

Contact Information

Information on how to contact organizations that either are mentioned in the discussion of artificial
intelligence or can provide additional information on the subject is listed below:

Google
1600 Amphitheatre Pkwy.
Mountain View, Calif. 94043
Telephone: (650) 253-0000
Internet: [Link]

International Business Machines Corp. (IBM)


1 New Orchard Rd.
Armonk, N.Y. 10504
Telephone: (914) 499-1900
Internet: [Link]

Stanford Artificial Intelligence Laboratory


353 Jane Stanford Way
Stanford, Calif. 94305
Telephone: (650) 723-2273
Internet: [Link]

Citation Information:

“Artificial Intelligence (AI).” Issues & Controversies, Infobase, 15 Mar. 2024, [Link]

Copyright © 2024 Infobase. All Rights Reserved.
