Unit 1: Introduction - AI for Everyone
CLASS XI

What is Artificial Intelligence (AI)?
Artificial intelligence (AI) refers to the ability of a machine to learn patterns and make
predictions.
Artificial Intelligence is a field that combines computer science and robust datasets to enable
problem-solving.
AI does not replace human decisions; instead, AI adds value to human judgment.
Think of AI as a smart helper that can understand things, learn from examples, and do tasks on
its own without needing to be told exactly what to do each time.
AI can:
• Understand Language: AI can understand and respond to what you say, like virtual assistants
such as Siri or Alexa.
• Recognize Images: AI can look at pictures and recognize what is in them, like identifying
animals in photos.
• Make Predictions: AI can analyze data to make predictions, like predicting the weather or
suggesting what movie you might like to watch next.
• Play Games: AI can play games and learn to get better at them, like playing chess or video
games.
• Drive Cars: AI can help cars drive themselves by sensing the road and making decisions to stay
safe.
What is not AI?
• Traditional Rule-Based Systems: These machines follow set rules without learning from data.
• Simple Automation Tools: Basic tools like timers or calculators do specific tasks but do not think
or learn.
• Mechanical Devices: Machines like pulleys or gears work based on physics but do not learn or
think.
• Fixed-Function Hardware: Devices like microwave ovens perform tasks without learning or
thinking.
• Non-Interactive Systems: Machines that do not change based on new information, like a basic
electric fan.
• Basic Sensors: Sensors collect data but do not analyze or understand it.
Artificial Intelligence machines are different. They learn from data and can make decisions on
their own.
For example, a smart washing machine can adjust its settings based on what it is washing.
AI goes beyond just following rules; it can learn, adapt, and make decisions based on data and
context.
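The distinction above can be sketched in code. Below is a minimal, illustrative contrast between a fixed rule-based filter and one that "learns" from examples; the messages and the word-set technique are invented purely for illustration, not a real spam filter.

```python
def rule_based_filter(message):
    # Traditional rule-based system: a hand-written rule that never changes.
    return "free money" in message.lower()

def learn_spam_words(examples):
    # A toy "learning" step: collect words that appear only in spam messages.
    spam_words, ham_words = set(), set()
    for message, is_spam in examples:
        words = set(message.lower().split())
        (spam_words if is_spam else ham_words).update(words)
    return spam_words - ham_words

def learned_filter(message, spam_words):
    # The learned rule adapts to whatever patterns the data contains.
    return any(word in spam_words for word in message.lower().split())

training_data = [
    ("win a prize now", True),
    ("claim your prize today", True),
    ("meeting at noon", False),
    ("lunch today at noon", False),
]
spam_words = learn_spam_words(training_data)
print(rule_based_filter("win a prize now"))           # False: the fixed rule never adapts
print(learned_filter("win a prize now", spam_words))  # True: learned from examples
```

The fixed rule misses anything it was not written for, while the learned filter picks up new patterns directly from data, which is the key difference the section describes.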
2. Evolution of AI
3. Types of AI
Computer scientists have identified three levels of AI based on predicted growth in its ability to
analyze data and make predictions.
1. Narrow AI:
• Focuses on single tasks like predicting purchases or planning schedules.
• Rapidly growing in consumer applications, such as voice-based shopping and virtual assistants like
Siri.
• Capable of handling specific tasks effectively, but lacks broader understanding.
2. Broad AI:
• Acts as a midpoint between Narrow and General AI.
• More versatile than Narrow AI, capable of handling a wider range of related tasks.
• Often used in businesses to integrate AI into specific processes, requiring domain-specific knowledge
and data.
3. General AI:
• Refers to machines that can perform any intellectual task a human can.
• Current AI systems lack human-like abstract thinking, strategizing, and creativity.
• Artificial Superintelligence (ASI) may emerge, potentially leading to self-aware machines, but this is far
from current capabilities.
Domains of AI
Artificial Intelligence (AI) encompasses various fields, each focusing on different aspects of
replicating human intelligence and performing tasks traditionally requiring human intellect. These
fields are classified based on the type of data input they handle:
a) Data Science: Data Science deals with numerical, alphabetical, and alphanumeric data inputs. It involves the collection, analysis, and interpretation of large volumes of data to extract insights and patterns using statistical methods, machine learning algorithms, and data visualization techniques.
b) Natural Language Processing (NLP): NLP focuses on processing text and speech inputs to enable computers to understand, interpret, and generate human language. It involves tasks such as language translation, sentiment analysis, text summarization, and speech recognition, facilitating communication between humans and machines through natural language interfaces.
c) Computer Vision: Computer Vision deals with visual data inputs, primarily images and videos. It enables computers to interpret and understand visual information and perform tasks such as object detection, image classification, facial recognition, and scene understanding, enabling applications such as autonomous vehicles, medical imaging, and augmented reality.
Examples of applications across these domains:
1. Gesture recognition for human-computer interaction
2. Chatbots for customer service
3. Spam email detection
4. Autonomous drones for surveillance
5. Google Translate
6. Fraud detection in financial transactions
7. Augmented reality applications (e.g., Snapchat filters)
8. Sports analytics for performance optimization
9. Object detection in autonomous vehicles
10. Recommendation systems for e-commerce platforms
11. Customer segmentation for targeted marketing
12. Text summarization for news articles
13. Automatic subtitles for videos
14. Medical image diagnosis
15. Stock prediction
a. Data Science
Data might be facts, statistics, opinions, or any kind of content that is recorded in some format. This could include voices, photos, names, and even dance moves! It surrounds us and shapes our experiences, decisions, and interactions. For example:
• Your search recommendations and Google Maps history are based on your previous data.
• Amazon's personalized recommendations are influenced by your shopping habits.
• Social media activity, cloud storage, textbooks, and more are all forms of data.
Data is often referred to as the "new oil" of the 21st century.
Did you know? 90% of the world's data has been created in just the last 2 years, compared to the previous 6 million years of human existence.
Types of Data
• Structured Data
• Unstructured Data
• Semi-structured Data
Structured data is like a neatly arranged table, with rows and columns that make it easy to understand and work with. It includes information such as names, dates, addresses, and stock prices. Because of its organized nature, it is straightforward to analyze and manipulate, making it a preferred format for many data-related tasks.
On the other hand, unstructured data lacks any specific organization, making it more challenging to analyze compared to structured data. Examples of unstructured data include images, text documents, customer comments, and song lyrics. Since unstructured data does not follow a predefined format, extracting meaningful insights from it requires specialized tools and techniques.
Semi-structured data falls somewhere between structured and unstructured data. While not as organized as structured data, it is easier to handle than unstructured data. Semi-structured data uses metadata to identify certain characteristics and organize data into fields, allowing for some level of organization and analysis. An example of semi-structured data is a social media video with hashtags used for categorization, blending structured elements like hashtags with unstructured content like the video itself.
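The three types of data can be sketched concretely. In this minimal illustration, the records, the hypothetical `video_id`, and the review text are all invented; the point is only how queryable each form is.

```python
import json

# Structured: rows and columns with a fixed schema, easy to query.
structured = [
    {"name": "Asha", "date": "2024-01-05", "stock_price": 101.5},
    {"name": "Ravi", "date": "2024-01-06", "stock_price": 99.8},
]

# Semi-structured: metadata (hashtags) organizes otherwise free-form content.
semi_structured = json.dumps({
    "video_id": "v123",  # hypothetical id, for illustration only
    "hashtags": ["#dance", "#fun"],
    "content": "<binary video data>",
})

# Unstructured: free text with no predefined fields.
unstructured = "Loved the product, but delivery was slow and the box was damaged."

# Structured data can be analyzed directly:
average_price = sum(row["stock_price"] for row in structured) / len(structured)
print(round(average_price, 2))  # 100.65

# Semi-structured data: the hashtags are queryable even if the video is not.
print(json.loads(semi_structured)["hashtags"])  # ['#dance', '#fun']
```

Notice that the unstructured review carries useful opinions, but extracting them would need the specialized tools and techniques mentioned above.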
b. Natural Language Processing: This refers to the field of computer science and AI that focuses on teaching machines to understand and process languages in both written and spoken form, just like humans do. The goal of an NLP-trained model is to be capable of "understanding" the contents of documents, including the slang, sarcasm, implied meaning, and contextual definitions of the language in which the text was written.
Natural Language Processing (NLP): This is the broad umbrella term encompassing everything related to how computers interact with human language. Think of it as the "what" - what computers can do with human language. It is like a whole library, filled with different tools and techniques for working with language data.
Natural Language Understanding (NLU): This is a subfield of NLP that focuses on understanding the meaning of human language. It analyzes text and speech, extracting information, intent, and sentiment. NLU helps computers understand the language and what it means. Imagine finding a specific book in the library.
Natural Language Generation (NLG): This is another subfield of NLP, but instead of understanding, it focuses on generating human language. It takes structured data as input and turns it into coherent and readable text or speech. Think of this as writing a new book based on the information gathered in the library.
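The NLU/NLG split can be sketched with a toy example. The word lists and the word-counting method below are invented for illustration; real NLP systems use trained models, not hand-made lists.

```python
# Hypothetical sentiment word lists, for illustration only.
POSITIVE = {"good", "great", "love", "excellent"}
NEGATIVE = {"bad", "poor", "hate", "terrible"}

def understand(text):
    # NLU: extract meaning (here, a sentiment label) from raw text.
    words = text.lower().replace(".", "").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def generate(product, sentiment):
    # NLG: turn structured data (a product name and a label) into readable text.
    return f"Customers feel {sentiment} about {product}."

sentiment = understand("The camera is great and I love the battery.")
print(sentiment)                          # positive
print(generate("the camera", sentiment))  # Customers feel positive about the camera.
```

`understand` plays the role of NLU (text in, meaning out) and `generate` plays the role of NLG (structured data in, text out), with NLP as the umbrella covering both.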
c. Computer Vision: Computer Vision is like giving computers the ability to see and understand the world through digital images and videos, much like how humans use their eyes to perceive their surroundings. In this domain, computers analyze visual information from images and videos to recognize objects, understand scenes, and make decisions based on what they "see."
When we take a digital image, it is essentially a grid of tiny colored dots called pixels. Each pixel represents a tiny portion of the image and contains information about its color and intensity. Resolution is expressed as the total number of pixels along the width and height of the image. For example, an image with a resolution of 1920x1080 pixels has 1920 pixels horizontally and 1080 pixels vertically. Higher resolution images have more pixels, providing more detail.
Now, here's where AI comes in. To make sense of these images, computers convert them into numbers: a series of numbers that represent the color and intensity of each pixel. This numerical representation allows AI algorithms to process the image mathematically and extract meaningful information from it. For instance, AI algorithms might learn to recognize patterns in these numbers that correspond to specific objects, like cars or faces. By analyzing large amounts of labeled image data, AI systems can "learn" to identify objects accurately.
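The "image as a grid of numbers" idea can be shown directly. Below is a minimal sketch: a tiny 2x2 "image" of (R, G, B) pixels converted to grayscale intensities using the standard luminance weights; the image itself is invented for illustration.

```python
# A 2x2 "image": each pixel is an (R, G, B) tuple with values 0-255.
image = [
    [(255, 0, 0), (0, 255, 0)],      # red pixel, green pixel
    [(0, 0, 255), (255, 255, 255)],  # blue pixel, white pixel
]

def to_grayscale(img):
    # Standard luminance weights: green contributes most to perceived brightness.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

gray = to_grayscale(image)
print(gray)  # [[76, 150], [29, 255]]

# Resolution is width x height in pixels; a 1920x1080 image has:
print(1920 * 1080)  # 2073600 pixels
```

Once the image is a grid of numbers like `gray`, AI algorithms can process it mathematically, which is exactly the step the paragraph above describes.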
Cognitive Computing (Perception, Learning, Reasoning)
Cognitive Computing is a branch of Artificial Intelligence (AI) that aims to mimic the way the human brain works in processing information and making decisions. It involves building systems that can understand, reason, learn, and interact with humans in a natural and intuitive way.
Examples of cognitive computing software: IBM Watson, DeepMind, Microsoft Cognitive Services, etc.
Cognitive Computing integrates Data Science, Natural Language Processing, and Computer
Vision to create intelligent systems that can understand and interact with humans in a human-
like manner. By combining these technologies, Cognitive Computing enables machines to
process and interpret diverse types of data, communicate effectively in natural language, and
perceive and understand visual information, thereby extending the capabilities of traditional AI
systems.
AI Terminologies
Artificial intelligence machines don't think. They calculate. They represent some of the newest, most sophisticated calculating machines in human history. An AI system is a computer system that can perform tasks that ordinarily require human intelligence or human intervention.
• Some can perform what is called machine learning as they acquire new data. Machine learning is a subset of artificial intelligence (AI) that focuses on developing algorithms and models that enable computers to learn from data and make predictions or decisions without being explicitly programmed.
• Others, using calculations arranged in ways inspired by neurons in the human brain, can even perform deep learning with multiple levels of calculations. Deep learning is an AI function that imitates the working of the human brain in processing data and creating patterns for use in decision making.
  o The structure of deep learning is inspired by the structure of the neurons and neuron connections in the human brain.
  o Neural networks, also known as Artificial Neural Networks (ANNs), are a subset of machine learning and the core concept behind deep learning.
  o They comprise node layers: an input layer, one or more hidden layers, and an output layer.
  o If the output of any node is above a specified threshold, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer.
  o If the number of layers, including the input and output layers, is more than three, the network is called a Deep Neural Network.
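The threshold behaviour described above can be sketched for a single node. This is a minimal illustration, not a trainable network: the weights and thresholds are invented, and real networks use smooth activation functions rather than a hard threshold.

```python
def node(inputs, weights, threshold):
    # Weighted sum of inputs; the node "fires" (outputs 1) only above the threshold.
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Input layer -> one hidden node -> output node (three layers in total).
inputs = [0.5, 0.9]
hidden = node(inputs, weights=[0.4, 0.6], threshold=0.5)  # 0.5*0.4 + 0.9*0.6 = 0.74, fires
output = node([hidden], weights=[1.0], threshold=0.5)     # passes the signal on
print(hidden, output)  # 1 1
```

If the weighted sum had stayed at or below 0.5, the hidden node would output 0 and no signal would reach the output layer, which is exactly the "otherwise, no data is passed along" rule above.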
Types of Machine Learning
Supervised Learning
Supervised learning is a type of machine learning where the model learns from labelled data, which means that the input data is accompanied by the correct output.
● In supervised learning, the algorithm learns to map input data to output labels based on example input-output pairs provided during the training phase.
● The goal of supervised learning is to learn a mapping function from input variables to output variables so that the model can make predictions on unseen data.
● Examples of supervised learning algorithms include linear regression, logistic regression, decision trees, support vector machines (SVM), and neural networks.
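A supervised learner can be sketched with a 1-nearest-neighbour classifier, one of the simplest algorithms that maps labelled examples to predictions. The dataset (hours studied, hours slept, and a pass/fail label) is invented for illustration.

```python
# Labelled training data: each input is (hours_studied, hours_slept),
# and the correct output ("pass"/"fail") is provided, as in supervised learning.
training_data = [
    ((8, 7), "pass"),
    ((7, 8), "pass"),
    ((2, 4), "fail"),
    ((1, 3), "fail"),
]

def predict(x):
    # Map an unseen input to the label of the closest training example.
    def distance(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    nearest = min(training_data, key=lambda pair: distance(pair[0], x))
    return nearest[1]

print(predict((6, 6)))  # pass
print(predict((2, 2)))  # fail
```

The labelled pairs are the "example input-output pairs" from the training phase, and `predict` is the learned mapping applied to unseen data.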
Unsupervised Learning:
Unsupervised learning is a type of machine learning where the model learns from unlabelled data, which means that the input data is not accompanied by the correct output.
● In unsupervised learning, the algorithm tries to find hidden patterns or structure in the input data without explicit guidance.
● The goal of unsupervised learning is to explore and discover inherent structures or relationships within the data, such as clusters, associations, or anomalies.
● Examples of unsupervised learning algorithms include k-means clustering, hierarchical clustering, principal component analysis (PCA), and autoencoders.
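K-means, the first algorithm listed above, can be sketched in a few lines. This is a minimal version for k = 2 on one-dimensional points, with starting centres chosen by hand for illustration; real libraries initialize and terminate more carefully.

```python
def kmeans(points, centres, steps=10):
    clusters = [[], []]
    for _ in range(steps):
        # Assignment step: each point joins the cluster of its nearest centre.
        clusters = [[], []]
        for p in points:
            nearest = min(range(2), key=lambda i: abs(p - centres[i]))
            clusters[nearest].append(p)
        # Update step: move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) if c else centres[i]
                   for i, c in enumerate(clusters)]
    return centres, clusters

# No labels are given; the algorithm discovers the two groups on its own.
points = [1, 2, 3, 10, 11, 12]
centres, clusters = kmeans(points, centres=[0, 5])
print(centres)   # [2.0, 11.0]
print(clusters)  # [[1, 2, 3], [10, 11, 12]]
```

Note that the input has no labels at all; the two clusters are the "hidden structure" the bullet points describe.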
Reinforcement Learning:
Reinforcement learning is a type of machine learning where an agent learns to make decisions by interacting with an environment to maximize cumulative rewards.
● In reinforcement learning, the agent learns through trial and error by taking actions and receiving feedback from the environment in the form of rewards or penalties.
● The goal of reinforcement learning is to learn a policy or strategy that guides the agent to take actions that lead to the highest cumulative reward over time.
● Reinforcement learning is commonly used in scenarios where the agent must make a sequence of decisions over time, such as playing games, controlling robots, or managing financial portfolios.
● Examples of reinforcement learning algorithms include Q-learning, deep Q-networks (DQN), policy gradients, and actor-critic methods.
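Q-learning, the first algorithm listed above, can be sketched on a tiny invented environment: a line of states 0 to 3 where only reaching state 3 gives a reward. The environment, parameters, and random seed are all chosen for illustration.

```python
import random

random.seed(0)
n_states, actions = 4, [-1, +1]        # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != 3:
        # Trial and error: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)  # environment: step along the line
        r = 1.0 if s2 == 3 else 0.0            # reward only at the goal state
        # Q-learning update: blend old estimate with reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned policy: in every state, the agent has learned to move right (+1).
policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(3)}
print(policy)  # {0: 1, 1: 1, 2: 1}
```

The agent starts knowing nothing, wanders by trial and error, and gradually learns a policy that maximizes cumulative reward, matching the bullet points above.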
6. Benefits and Limitations of AI
BENEFITS:
1. Increased efficiency and productivity: AI automates tasks, analyzes data faster, and optimizes processes, leading to increased efficiency and productivity across various sectors.
2. Improved decision-making: AI analyzes vast amounts of data and identifies patterns that humans might miss, assisting in data-driven decision-making and potentially leading to better outcomes.
3. Enhanced innovation and creativity: AI tools can generate new ideas, explore possibilities, and automate repetitive tasks, freeing up human resources for more creative pursuits and innovation.
4. Progress in science and healthcare: AI aids in drug discovery, medical diagnosis, and personalized medicine, contributing to advancements in healthcare and scientific research.
LIMITATIONS:
1. Job displacement: Automation through AI raises concerns about job displacement and the need for workforce retraining and upskilling.
2. Ethical considerations: Concerns exist around bias in AI algorithms, potential misuse for surveillance or manipulation, and the need for ethical guidelines and regulations.
3. Lack of explainability: Some AI models, particularly complex ones, lack transparency in their decision-making, making it difficult to understand how they arrive at their outputs.
4. Data privacy and security: Large-scale data collection and use for AI development raise concerns about data privacy and security vulnerabilities.