What Is Artificial Intelligence

The document discusses the history and evolution of artificial intelligence from early tabulation machines to modern AI capabilities. It describes how AI systems analyze large amounts of data to make predictions and identifies three levels of AI: narrow AI, broad AI, and general AI.

MODULE 1: WHAT IS ARTIFICIAL INTELLIGENCE?

What does AI do?


Artificial intelligence machines (researchers call them “AI services”) don’t think. They calculate. They represent some of the newest, most sophisticated calculating machines in human history. Some can perform what’s called machine learning as they acquire new data. Others, using calculations arranged in ways inspired by neurons in the human brain, can even perform deep learning with multiple levels of calculations.


Imagine you are given the job of sorting items in the meat department at a grocery store. You realize that there are dozens of products and very little time to sort them manually. How could you use artificial intelligence, machine learning, and deep learning to help with your work?
Select each tab to learn more.
Artificial intelligence | Machine learning | Deep learning
To separate the chicken, beef, and pork, you could create a programmed
rule in the format of if-else statements. This allows the machine to
recognize what is on the label and route it to the correct basket.

A programmed rule might look something like this:

# Route each item by reading its label; beef_is_on_label would come
# from whatever reads the label text.
if beef_is_on_label:
    route_items_to_center_basket()
else:
    redirect_item_to_main_basket()

Artificial intelligence makes this process more efficient.

How do AI services calculate? And what do they do with those calculations? Let’s break this down into two parts.
Select each card to learn more.
1. Analysis

AI services can take in (or “ingest”) enormous amounts of data. They can apply mathematical calculations in order to analyze data, sorting and organizing it in ways that would have been considered impossible only a few years ago.

2. Prediction

AI services can use their data analysis to make predictions. They can, in effect, say, “Based on this information, a certain thing will probably happen.”

This is what AI services do! Based on data analysis, they make predictions. It might not seem like much, but that analysis and those predictions can have an enormous impact on human life.


What predictions can AI make?


Most people have a love-hate relationship with the autocorrect feature
on phones or computers. What’s happening when you enter a misspelled
word? And how does the machine know to suggest a better spelling?

Simply put, the software analyzes what you’ve typed so far and predicts
a likely correction. Your phone or computer (or its online service) has
more than just a dictionary of correct spellings. It has a huge library of
phrases that humans use in certain contexts on many subjects. So, when
you enter a word that’s not in its dictionary, it begins analyzing and
predicting and suggests the word you need. Predictions aren’t always
accurate. But if they’re correct often enough, they’re useful and can save
you time.
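
To make the idea concrete, here is a toy sketch of frequency-based correction in Python. The ten-word corpus and the suggest function are invented for illustration (real systems use far larger phrase libraries), loosely in the style of classic statistical spell correctors:

from collections import Counter

# Hypothetical mini-corpus; a real system has a huge library of phrases.
corpus = "the quick brown fox jumps over the lazy dog the dog naps".split()
word_counts = Counter(corpus)

def one_edit_away(word):
    # Every string reachable by one delete, transpose, replace, or insert.
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word):
    # Predict the most likely correction: the most frequent known word
    # within one edit of what was typed.
    if word in word_counts:
        return word  # already a known spelling
    candidates = [w for w in one_edit_away(word) if w in word_counts]
    return max(candidates, key=word_counts.get, default=word)

print(suggest("dgo"))  # -> "dog": a prediction, not a certainty

If several candidates existed, the frequency counts would act as the prediction: the corrector picks the word humans use most often in its library.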

Here are more ways that AI uses data to make predictions.


Select each of the following to learn more.
Human language

Online chatbots use natural language processing (NLP) to
analyze poorly typed or spoken questions, then predict which
answers to give on topics ranging from shipping or business
hours to merchandise and sizes.
Vision recognition

AI helps doctors identify serious diseases based on unusual
symptoms and early-warning signs, and it reads speed limit and
stop signs as it guides cars through traffic.
Fraud detection

AI analyzes patterns created when thousands of bank customers make credit card purchases, then predicts which charges might be the result of identity theft.

Today’s AI has gone beyond creating driving directions, vacuuming floors, or recommending new fashions. Now it really can mimic the capabilities of the human mind. AI can learn from examples and experience, recognize objects, understand and respond to language, and solve problems. Even more exciting are its future possibilities. Read on to see how AI might evolve over the next half-century.
How is AI evolving?
Computer scientists have identified three levels of AI based on
predicted growth in its ability to analyze data and make predictions.
They call these levels:

 Narrow AI
 Broad AI
 General AI
Narrow AI and Broad AI are available today. In fact, most enterprises use Broad AI. General AI won’t come online until sometime in the future.

Select each of the following levels of AI to learn more.


Narrow AI

 Narrow AI is focused on addressing a single task
such as predicting your next purchase or planning
your day.
 Narrow AI is scaling very quickly in the consumer
world, in which there are a lot of common tasks and
data to train AI systems. For example, you can buy a
book with a voice-based device.
 Narrow AI also enables robust applications, such as
using Siri on an iPhone, the Amazon
recommendation engine, autonomous vehicles, and
more. Narrow AI systems like Siri have
conversational capabilities, but only if you stick to the
script.

Broad AI

 Broad AI is a midpoint between Narrow and General
AI.
 Rather than being limited to a single task, Broad AI
systems are more versatile and can handle a wider
range of related tasks.
 Broad AI is focused on integrating AI within a specific
business process where companies need business-
and enterprise-specific knowledge and data to train
this type of system.
 Newer Broad AI systems predict global weather, trace
pandemics, and help businesses predict future
trends.

General AI

 General AI refers to machines that can perform any
intellectual task that a human can.
 Currently, AI does not have the ability to think
abstractly, strategize, and use previous experiences
to come up with new, creative ideas as humans do,
such as inventing a new product or responding to
people with appropriate emotions. And don't worry, AI
is nowhere near this point.

There might be another level, known as artificial superintelligence (ASI), that could appear near the end of this century. Then machines might become self-aware! Even then, no level of AI is expected to replace or dominate you. Instead, scientists hope AI will extend humans’ ability to lead richer lives.

MODULE 2: WHAT ARE THE THREE ERAS OF COMPUTING?

The Era of Tabulation


People have analyzed data for centuries

For centuries, people have struggled to understand the meaning that’s hidden in large amounts of data. After all, it’s one thing to estimate how many trees grow in a million square miles of forest. It’s something else to classify what species of trees they are, how they cluster at different altitudes, and what could be built with the wood they provide. That information can be difficult to extract from a very large amount of data.

Because it's hard to see without help, scientists call this dark data. It’s information without a structure: just a huge, unsorted mess of facts.

To sort out unstructured data, humans have created many different calculating machines. Over 2000 years ago, tax collectors for Emperor Qin Shihuang used the abacus—a device with beads on wires—to break down tax receipts and arrange them into categories. From this, they could determine how much the Emperor should spend on building extensions to the Great Wall of China.


In England during the mid-1800s, Charles Babbage and Ada Lovelace designed (but never finished) what they called a “difference engine”, intended to handle complex calculations using logarithms and trigonometry. Had they built it, the difference engine might have helped the English Navy build tables of ocean tides and depth soundings that could guide English sailors through rough waters.

By the early 1900s, companies like IBM were using machines to tabulate and analyze the census numbers for entire national populations. They didn’t just count people. They found patterns and structure within the data—useful meaning beyond mere numbers. These machines uncovered ways that different groups within the population moved and settled, earned a living, or experienced health problems—information that helped governments better understand and serve them.


The word to remember across those twenty centuries is tabulate. Think of tabulation as “slicing and dicing” data to give it a structure, so that people can uncover patterns of useful information. You tabulate when you want to get a feel for what all those columns and rows of data in a table really mean.

Researchers call these centuries the Era of Tabulation, a time when machines helped humans sort data into structures to reveal its secrets.
The Era of Programming
Data analysis changed in the 1940s
During the turmoil of World War II, a new approach to dark data emerged: the Era of Programming. Scientists began building electronic computers, like the Electronic Numerical Integrator and Computer (ENIAC) at the University of Pennsylvania, that could run more than one kind of instruction (today we call those “programs”) in order to do more than one kind of calculation. ENIAC, for example, not only calculated artillery firing tables for the US Army, it worked in secret to study the feasibility of thermonuclear weapons.


This was a huge breakthrough. Programmable computers guided astronauts from Earth to the moon and were reprogrammed during Apollo 13’s troubled mission to bring its astronauts safely back to Earth.

You’ve grown up during the Era of Programming. It even drives the phone you hold in your hand. But the dark data problem has also grown. Modern businesses and technology generate so much data that even the finest programmable supercomputer can't analyze it before the “heat-death” of the universe. Electronic computing is facing a crisis.


The Era of AI
A brief history of AI

The history of artificial intelligence dates back to philosophers thinking about the question, "What more can be done with the world we live in?" This question led to discussions and the very beginning of many ideas about the possibilities involving technology.

Since the advent of electronic computing, there are some important events and milestones in the evolution of artificial intelligence to know about. Here's an overview to get started.
The Era of AI began one summer in 1956

Early in the summer of 1956, a small group of researchers, led by John McCarthy and Marvin Minsky, gathered at Dartmouth College in New Hampshire. There, at one of the oldest colleges in the United States, they launched a revolution in scientific research and coined the term “artificial intelligence”.

The researchers proposed that “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” They called their vision “artificial intelligence” and they raised millions of dollars to achieve it within 20 years. During the next two decades, they accomplished tremendous things, creating machines that could prove geometry theorems, speak simple English, and even solve word problems with algebra.

For a short time, AI was one of the most exciting fields in computer
science.
But then came winter
By the early 1970s, it became clear that the problem was larger than researchers imagined. There were fundamental limits that no amount of money and effort could solve.

Select the following sections to learn about two of these limits.


 Limited calculating power
 Limited information storage

As these issues became clear, the money dried up, and the First Winter of AI began.
The weather stayed rough for a quarter century

It took about a decade for technology and AI theory to catch up, primarily with new forms of AI called “expert systems”. These were limited to specific knowledge that could be manipulated with sets of rules. They worked well enough—for a while—and became popular in the 1980s. Money poured in. Researchers invested in tremendous mainframe machines that cost millions of dollars and occupied entire floors of large university and corporate buildings. It seemed as if AI was booming once again.

But soon the needs of scientists, businesses, and governments outgrew even these new systems. Again, funding for AI collapsed.
Then came another AI chill

In the late 1980s, the boom in AI research cooled, in part, because of the
rise of personal computers. Machines from Apple and IBM, sitting on
desks in people’s homes, grew more powerful than the huge corporate
systems purchased just a few years earlier. Businesses and governments
stopped investing in large-scale computing research, and funding dried
up.

Over 300 AI companies shut down or went bankrupt during The Second
Winter of AI.
Now, the forecast is sunny

In the mid-1990s, almost half a century after the Dartmouth research project, the Second Winter of AI began to thaw. Behind the scenes, computer processing finally reached speeds fast enough for machines to solve complex problems.

At the same time, the public began to see AI’s ability to play sophisticated games.

 In 1997, IBM’s Deep Blue beat the world’s chess champion by processing over 200 million possible moves per second.
 In 2005, a Stanford University robot drove itself down a 131-mile desert trail.
 In 2011, IBM’s Watson defeated two grand champions in the game of Jeopardy!
Today, AI has proven its ability in fields ranging from cancer research
and big data analysis to defense systems and energy production.
Artificial intelligence has come of age. AI has become one of the hottest
fields of computer science. Its achievements impact people every day
and its abilities increase exponentially. The Two Winters of AI have
ended!

MODULE 3: STRUCTURED, SEMI-STRUCTURED, AND UNSTRUCTURED DATA

A look at the types of data

Data is raw information. Data might be facts, statistics, opinions, or any kind of content that is recorded in some format. This could include voices, photos, names, and even dance moves!

Data can be organized into the following three types.


 Structured data is typically categorized as quantitative data and is highly organized. Structured data is information that can be organized in rows and columns. Perhaps you've seen structured data in a spreadsheet, like Google Sheets or Microsoft Excel. Examples of structured data include names, dates, addresses, credit card numbers, and stock information.
 Unstructured data, also known as dark data, is typically categorized as qualitative data. It cannot be processed and analyzed by conventional data tools and methods. Unstructured data lacks any built-in organization, or structure. Examples of unstructured data include images, texts, customer comments, medical records, and even song lyrics.
 Semi-structured data is the “bridge” between structured and unstructured data. It doesn't have a predefined data model, but it combines features of both structured data and unstructured data. It's more complex than structured data, yet easier to store than unstructured data. Semi-structured data uses metadata to identify specific data characteristics and scale data into records and preset fields. Metadata ultimately enables semi-structured data to be better cataloged, searched, and analyzed than unstructured data. An example of semi-structured data is a video on a social media site. The video by itself is unstructured data, but it typically carries text metadata, such as a hashtag identifying a location, that makes it easy to categorize. (A small sketch of all three types follows this list.)
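
A small sketch in Python can make the three types easier to see side by side. All the values below are invented for illustration:

# Structured: quantitative, highly organized rows and columns.
structured_rows = [
    {"name": "A. Chen", "date": "2024-03-01", "stock_price": 101.50},
    {"name": "B. Osei", "date": "2024-03-02", "stock_price": 99.20},
]

# Unstructured: qualitative content with no built-in organization.
unstructured_text = "loved these sneakers!! way comfier than my last pair"

# Semi-structured: unstructured content plus metadata that lets it be
# cataloged and searched, like hashtags and a location on a social video.
semi_structured_post = {
    "video": "raw video bytes would go here",  # unstructured by itself
    "metadata": {"hashtags": ["#sneakers"], "location": "Austin, TX"},
}

Notice that only the metadata dictionary gives a search tool something reliable to index; the video and the review text have no preset fields at all.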

Are you wondering why this is important? Why would anyone want to
search through a mountain of data such as social media posts?

Here are just some of the many people and organizations that might need
to do this:
 A sneaker designer looking for new trends
 Governments searching for possible terrorists
 Pandemic experts trying to anticipate disease outbreaks
 Financial institutions preparing for good times or a recession

Experts estimate that about 80% of all the data in today’s world is unstructured. It contains so many variables and changes so quickly that no conventional computer program can learn much from it.

Now, imagine a programmable computer trying to extract meaning from billions of data points like this! What kind of program would someone write that could sort out every eventuality among the clutter? How would someone build a long enough list of keywords to find anything useful?

Unstructured data hides answers to disease prevention, criminal activity, stock markets—almost every aspect of civilization today. Without those answers, people and organizations cannot make useful predictions or recommendations.

But AI can shed light on unstructured data! AI uses new kinds of computing—some modeled on the human brain—to rapidly give dark data structure, and from it, make new discoveries. AI can even learn things—by itself—from the data it manages and teach itself how to make better predictions over time. This is the Era of AI, and it changes everything!

MODULE 4: IS MACHINE LEARNING THE ANSWER TO THE UNSTRUCTURED DATA PROBLEM?

How does machine learning approach a problem?
Two ways to solve dark data problems
If AI doesn’t rely on programming instructions to work with
unstructured data, how does AI do it? Machine learning can analyze
dark data far more quickly than a programmable computer can. To see
why, consider the problem of finding a route through big city traffic
using a navigation system. It’s a dark data problem because solving it
requires working with not only a complicated street map, but also with
changing variables like weather, traffic jams, and accidents. Let’s look at
how two different systems might try to solve this problem.
Select each item to learn how each system might approach this
problem.
Programmable computer

Researchers might upload onto the computer a complete database of all
possible routes through the city. This is an enormous collection of
structured data.

Then they would have to add much more data describing current weather
and traffic conditions. This would have to be revised every few minutes
for the entire city!

Then they might use a programmable computer to search through the data until it finds a route from start to finish. The entire project would require astronomical resources and time, if it could be accomplished at all!
AI with machine learning

The machine learning AI would treat the problem like climbing a tree.
The system would try a route, as if starting at the base of the trunk. Upon
reaching a branch, the system would then fork in one direction and
continue doing so until it reached either a dead end or the desired
destination.

It would do this over and over again, then compare successful routes to
identify the shortest one. Although the work sounds repetitious, it
requires fewer resources and can be completed more quickly.
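
Here is a toy sketch of that branching, trial-and-error idea in Python. The five-intersection map and the try_route helper are invented for illustration; a real navigation system is far more sophisticated:

import random

# A made-up city map: each intersection lists where you can drive next.
city_map = {
    "home": ["elm_st", "oak_st"],
    "elm_st": ["oak_st", "dead_end"],
    "oak_st": ["store"],
    "dead_end": [],
    "store": [],
}

def try_route(start, goal, max_turns=10):
    # Fork at each branch until reaching the goal or a dead end.
    route = [start]
    while route[-1] != goal and len(route) < max_turns:
        choices = city_map[route[-1]]
        if not choices:  # dead end: this attempt fails
            return None
        route.append(random.choice(choices))
    return route if route[-1] == goal else None

# Try over and over, keep the successes, and compare to find the shortest.
successes = [r for r in (try_route("home", "store") for _ in range(1000)) if r]
print(min(successes, key=len))  # e.g. ['home', 'oak_st', 'store']

Each attempt is cheap, so millions of these tiny calculations can stand in for one enormous database search.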

The machine learning process is entirely different

The machine learning process has advantages:

 It doesn’t need a database of all the possible routes from one place to another. It just needs to know where places are on the map.
 It can respond to traffic problems quickly because it
doesn’t need to store alternative routes for every possible
traffic situation. It notes where slowdowns are and finds a
way around them through trial and error.
 It can work very quickly. While trying single turns one at a
time, it can work through millions of tiny calculations.

But machine learning has two more advantages that programmable computers lack:

 Machine learning can predict. You know this already. A machine can determine, “Based on traffic right now, this route is likely to be faster than that one.” It knows this because it compared routes as it built them.
 Machine learning learns! It can notice that your car was
delayed by a temporary detour and adjust its
recommendations to help other drivers.
Machine learning uses probabilistic calculation

There are two other ways to contrast classical and machine learning systems. One is deterministic and the other is probabilistic. Let’s dig in and see what these two words mean.

For a deterministic system, there must be an enormous, predetermined structure of routes—a gigantic database of possibilities from which the machine can make its choice. If a certain route leads to the destination, then the machine flags it as “YES”. If not, it flags it as “NO”. This is basically binary thinking: on or off, yes or no. This is the essence of a computer program. The answer is either true or false, not a confidence value.

Machine learning is probabilistic. It never says “YES” or “NO”. Machine learning is analog (like waves gradually going up and down) rather than binary (like arrows pointing upward and downward). Machine learning constructs every possible route to a destination and compares them in real time, including all the variables such as changing traffic. So, a machine learning system doesn’t say, “This is the fastest route.” It says something like, “I am 84% confident that this route will get you there in the shortest time.” You might have seen this yourself if you’ve traveled in a car with an up-to-date GPS navigation system that offers you two or three choices with estimated times.
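
The contrast can be shown in a few lines of Python. Both functions below are illustrative stand-ins rather than any real navigation API, and the 84% figure simply echoes the example above:

def deterministic_answer(route_reaches_destination):
    # Binary thinking: the answer is either true or false.
    return "YES" if route_reaches_destination else "NO"

def probabilistic_answer(route_confidences):
    # Confidence values instead of yes/no: rank the candidate routes
    # and report the best one along with how confident the system is.
    best_route, confidence = max(route_confidences.items(), key=lambda kv: kv[1])
    return f"I am {confidence:.0%} confident that {best_route} is fastest."

print(deterministic_answer(True))  # YES
print(probabilistic_answer({"route A": 0.84, "route B": 0.11, "route C": 0.05}))
# I am 84% confident that route A is fastest.
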
If machine learning offers only probabilities, who makes the final decision?

This can literally be a life-and-death question. Suppose you have a serious disease and your doctor offers you a choice. Do you want your doctor to prescribe your treatment, or do you want the treatment that a machine learning system determines is most likely to succeed?

Reflection: What do you think?

Think about the preceding question for a moment. Then, type a sentence or two about what you might decide, and explain why. Enter your response in the provided text box. When you are finished, select DONE. (Writing an answer is a good way to process your thoughts. These answers are for your use only. You have the option to download your response and save it. It will not be saved in the text box when you move on in the course.)

As you learned, the GPS offers different routes to choose from. Machine
learning leaves room for humans to make the final decision. You, or an
expert, can choose which route to take based on possible outcomes and
personal experience.

Consider how a machine learning system might work with the medical
question you considered previously. As a doctor considers how to treat a
cancer patient, the AI ingests your entire medical record plus every
research paper about this cancer published in the last few years. This
takes a few seconds. But the AI doesn’t say, “Do this” or “Do that”. That
would be a deterministic solution that cuts the doctor and patient out of
the decision.

Instead, the AI provides a probabilistic statement such as, “There’s a 92% chance that this treatment will work, and an 87% chance that this other treatment will work. Here is what would be involved in conducting each treatment option.” Now the doctor can consult with you and together you make the choice. In other words, AI, the doctor, and you have formed a partnership in which machines predict but humans judge!

Does common sense make sense?


It turns out that in fields, ranging from medicine and education to social

studies and government, the best decisions are made using a balance of

human and machine strengths. But remember, there is another elusive

but vital capability that must also be considered: common sense. You

might know people with strong common sense and understand its value.

You also might have seen or read output from machines that makes no

sense. Yet, there’s a contribution to be made from both sides.

Common sense draws on many complex generalizations mixed with compassion and abstractions. At this time, only humans can use common sense well. The problem is that common sense is often tainted with bias that can distort your judgment. AI systems can balance this. As long as AI systems are provided and trained with unbiased data, they can make recommendations that are free of bias. A partnership between humans and machines can lead to sensible decisions.

MODULE 5: THREE COMMON METHODS OF MACHINE LEARNING

Supervised learning
Machine learning solves problems in three ways:

 Supervised learning
 Unsupervised learning
 Reinforcement learning

Let's explore each one!

Supervised learning is about providing AI with enough examples to make accurate predictions.

All supervised learning algorithms need labeled data. Labeled data is data that is grouped into samples that are tagged with one or more labels. In other words, applying supervised learning requires you to tell your model:

1. What the key characteristics of a thing are, also called features
2. What the thing actually is
For example, the information might be drawings and photos of animals, some of which are dogs and are labeled “dog”. The machine will learn by identifying a pattern for “dog”. When the machine sees a new dog photo and is asked, “What is this?”, it will respond, “dog”, with high accuracy. This is known as a classification problem.
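
A minimal sketch of such a classification problem, using the scikit-learn library and two made-up features (weight in kilograms and ear length in centimeters) in place of photos, might look like this:

from sklearn.tree import DecisionTreeClassifier

# Labeled data: each sample's features are tagged with what the thing is.
features = [[30, 10], [35, 12], [4, 6], [5, 7]]  # [weight_kg, ear_cm]
labels = ["dog", "dog", "cat", "cat"]

model = DecisionTreeClassifier()
model.fit(features, labels)       # learn a pattern from the labeled examples

print(model.predict([[32, 11]]))  # -> ['dog']: the new sample is classified

The key supervised-learning ingredient is the labels list: without those tags, the model would have nothing from which to learn the “dog” pattern.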

Train AI to classify images


Imagine that you’re a supervisor at an AI research company. Practice
training an AI system to classify images of animals with the correct
label.
Unsupervised learning
In unsupervised learning, a person feeds a machine a large amount of
information, asks a question, and then the machine is left to figure out
how to answer the question by itself.

For example, the machine might be fed many photos and articles about
dogs. It will classify and cluster information about all of them. When
shown a new photo of a dog, the machine can identify the photo as a
dog, with reasonable accuracy.

Unsupervised learning occurs when the algorithm is not given a specific “wrong” or “right” outcome. Instead, the algorithm is given unlabeled data.

Unsupervised learning is helpful when you don't know how to classify data. For example, imagine you work for a banking institution and you have a large set of customer financial data. You don't know what groups or categories to organize the data into. Here, an unsupervised learning algorithm could find natural groupings of similar customers in a database, and then you could describe and label them.
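
One minimal sketch of that banking example uses the k-means algorithm from scikit-learn; the customer numbers are invented:

from sklearn.cluster import KMeans

# Unlabeled data: [monthly_income, monthly_spend] for each customer.
customers = [[2000, 500], [2100, 550], [9000, 4000], [8800, 3900]]

# Ask for two groupings; no "right" or "wrong" labels are provided.
model = KMeans(n_clusters=2, n_init=10, random_state=0)
groups = model.fit_predict(customers)

print(groups)  # e.g. [0 0 1 1]: clusters you could now describe and label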

This type of learning has the ability to discover similarities and differences in information, which makes it an ideal solution for exploratory data analysis, cross-selling strategies, customer segmentation, and image recognition.

Reinforcement learning

Reinforcement learning is a machine learning model similar to supervised learning, but the algorithm isn’t trained using sample data. This model learns as it goes by using trial and error. A sequence of successful outcomes is reinforced to develop the best recommendation for a given problem. The foundation of reinforcement learning is rewarding the “right” behavior and punishing the “wrong” behavior.

You might be wondering, what does it mean to "reward" a machine? Good question! Rewarding a machine means that you give your agent positive reinforcement for performing the "right" thing and negative reinforcement for performing the "wrong" thing.
Learn how AI uses trial and error
Here's an interactive demonstration to show you how AI can learn through the process of trial and error to complete a task and, in this example, earn a reward.

As a machine learns through trial and error, it tries a prediction, then compares it with data in its corpus.

 Each time the comparison is positive, the machine receives positive numerical feedback, or a reward.
 Each time the comparison is negative, the machine receives negative numerical feedback, or a penalty.

Over time, a machine’s predictions will grow to be more accurate. It accomplishes this automatically based on feedback, rather than through human intervention.
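
The loop below is a deliberately tiny sketch of that feedback cycle. The three actions, the hidden “right” answer, and the 0.1 learning rate are all invented for illustration:

import random

# The agent's learned estimate of each action's value, initially neutral.
values = {"left": 0.0, "center": 0.0, "right": 0.0}

def environment(action):
    # Hypothetical rule the agent does not know: "center" is correct.
    return 1.0 if action == "center" else -1.0  # reward or penalty

for step in range(200):
    # Mostly pick the best-known action, but sometimes explore at random.
    if random.random() < 0.2:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    feedback = environment(action)  # positive or negative numerical feedback
    values[action] += 0.1 * (feedback - values[action])  # nudge the estimate

print(max(values, key=values.get))  # -> 'center', learned from feedback alone
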
MODULE 6: HOW WILL MACHINE LEARNING
TRANSFORM HUMAN LIFE?
Think about this
Perhaps 25 years from now, General AI will emerge. AI
researcher Nick Bostrom defines this superintelligence as, “an intellect
that is much smarter than the best human brains in practically every
field, including scientific creativity, general wisdom and social skills.”
You’re likely to see General AI appear in your lifetime. General AI will
enable supersmart bots and technologies to link AI with the Internet of
Things through “embodied cognition”. This will give machines the
ability to interact in human-like ways as they work alongside humans.
What will interacting with general AI feel like to humans?
Select the following attributes for ideas on how humans might
experience General AI in the future.
AI everywhere

AI will move into all industries, from finance, to education, to
healthcare. AI will increase productivity and enable new opportunities.
Deeper insights

New technologies will sense, analyze, and understand things never
before possible.
Engagement reimagined

New forms of human-machine interaction and emerging technologies,
such as conversational bots, will transform how humans engage with
each other and with machines.
Personalization

Machine interactions will be personalized for you, with new levels of
detail and scale.
Instrumented planet

Billions of sensors generating exabytes of data will open new
possibilities for improving Earth’s safety, sustainability, and security.
What’s beyond these wonders? Humans, devices, and robots might exist
as a collective “digital brain” that anticipates human needs, makes
predictions, and provides solutions. Farther in the future, we might trust
the digital brain to do things on our behalf across a broad spectrum of
endeavors!

Key points to remember


1. Artificial intelligence refers to the ability of a machine to learn patterns and make predictions. AI does not replace human decisions; instead, AI adds value to human judgment.

2. AI performs tasks without human intervention and completes mundane and repetitive tasks, while augmented intelligence allows humans to make final decisions after analyzing data, reports, and other information.

3. The three levels of AI include Narrow AI, Broad AI, and General AI. Narrow AI and Broad AI are available today. In fact, most enterprises use Broad AI. General AI won’t come online until sometime in the future.

4. The history of AI has progressed across the Era of Tabulation, the Era of Programming, and the Era of AI.

5. Data can be structured, unstructured, or semi-structured.

 Structured data is quantitative and highly organized, such as a spreadsheet of data.
 Unstructured data is qualitative data that doesn't have structure, such as medical records. It's becoming increasingly valuable to businesses.
 Semi-structured data combines features of both structured data and unstructured data. It uses metadata.

6. About 80% of all the data in today’s world is unstructured.

7. Machine learning has advantages compared to programmable computers. Machine learning can predict and machine learning learns!

8. Machine learning uses three methods.

 Supervised learning requires enough examples to make accurate predictions.
 Unsupervised learning requires large amounts of information so the machine can figure out how to answer a question by itself.
 Reinforcement learning uses the process of trial and error.

9. With AI everywhere, AI will move into all industries, from finance, to education, to healthcare.

10. AI can increase productivity, create new opportunities, provide deeper insights, and enable personalization.
