def is_spam(email):
    spam_words = ["win", "free", "money", "urgent", "click here"]
    if any(word in email.lower() for word in spam_words):
        return True
    return False

# Example email
email = "Congratulations! You have won free money. Click here to claim your prize."
result = is_spam(email)
print(f"Is this email spam? {'Yes' if result else 'No'}")
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Example dataset
emails = [
    "Congratulations! You have won free money.",
    "Hey, can we meet tomorrow for lunch?",
    "Click here to claim your urgent prize!",
    "Let's catch up next week.",
]
labels = [1, 0, 1, 0]  # 1: Spam, 0: Not Spam

# Convert emails into a format that a machine learning model can understand
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Train a Naive Bayes model
model = MultinomialNB()
model.fit(X, labels)

# Test with a new email
test_email = ["Win free tickets now!"]
X_test = vectorizer.transform(test_email)
prediction = model.predict(X_test)
print(f"Is this email spam? {'Yes' if prediction[0] == 1 else 'No'}")

Benefits of the AI Approach:
1. Learns Patterns Automatically: The model learns from the data, recognizing complex patterns beyond simple keyword matching.
2. Scalable: Once trained, the model can classify millions of emails efficiently without needing manual rule updates.
3. Adaptive: The model can be retrained on new data to keep up with evolving spam techniques, making it much more flexible than a rule-based approach.
sklearn: a popular machine learning library in Python.
naive_bayes: Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of conditional independence between every pair of features given the value of the class variable.
vectorizer.fit_transform: creates a dictionary of tokens (by default, the tokens are words separated by spaces and punctuation) that maps each token to a column position in the output matrix. The fit() method learns the required parameters (here, the vocabulary), and the transform() method applies them to convert the data.
MultinomialNB(): the multinomial Naive Bayes classifier is suitable for classification with discrete features (e.g., word counts for text classification).
model.predict: when you call model.predict on a set of input data, you receive an array containing the model's prediction for each input sample.
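To make the token dictionary concrete, here is a small sketch of what fit_transform produces on two toy documents (the documents are illustrative, not from the slides; the output assumes CountVectorizer's default alphabetical vocabulary ordering):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["free money now", "meet for lunch"]
vec = CountVectorizer()
counts = vec.fit_transform(docs)

# Each distinct token gets a column; each row counts tokens per document.
print(sorted(vec.vocabulary_))  # ['for', 'free', 'lunch', 'meet', 'money', 'now']
print(counts.toarray())
# [[0 1 0 0 1 1]
#  [1 0 1 1 0 0]]
```

Row 0 has 1s in the "free", "money", and "now" columns; row 1 in "for", "lunch", and "meet". This count matrix is exactly the X passed to MultinomialNB above.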
Traditional programming relies on explicit rules defined by the programmer, which makes it efficient for well-defined, structured tasks but limited in flexibility and adaptability.
2. Understand key concepts in the field of artificial intelligence. (Understand)
3. Implement artificial intelligence techniques and case studies. (Apply)
[Figure: traits associated with intelligence]
• Self-awareness ("I think, therefore I am")
• Ability to understand and learn from experience
• Thinking out of the box
• Adaptability
• Problem-solving ability / decision making
• Capacity to know or understand
• Objective and unbiased thinking
• Thinking beyond expectation
INTELLIGENCE FOR A COMPUTER SCIENTIST?
A set of cognitive skills:
• Abstract thinking and reasoning
• Ability to understand complex ideas
• Ability to learn from experience and adapt to a changing environment
• Problem solving
• Ability to acquire knowledge
AI DEFINITION BY JOHN MCCARTHY
http://www-formal.stanford.edu/jmc/whatisai/whatisai.html
WHAT IS AN AI?
Definitions of AI differ along two dimensions: whether they focus on thought or on behavior, and whether they measure success against human or ideal (rational) performance.
• Human performance metric: involves observations of and hypotheses about human behavior.
• Ideal or rational performance metric: a combination of mathematics and engineering.
APPROACH 1: THINKING HUMANLY (The Cognitive Modelling Approach)
Connective symbols: &, ^, -, ⊕
Notation Examples:
• p & q
• p ^ q
• -p
• p ⊕ q
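One way to check formulas built from these connectives is to enumerate all truth assignments. A minimal Python sketch (the helper name truth_table is my own; it models conjunction with `and`, negation with `not`, and exclusive or with `!=`):

```python
from itertools import product

def truth_table(formula, variables):
    """Return (assignment, value) pairs for every truth assignment."""
    rows = []
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        rows.append((values, formula(**env)))
    return rows

# p & q (conjunction)
conj = truth_table(lambda p, q: p and q, ["p", "q"])
# -p (negation)
neg = truth_table(lambda p: not p, ["p"])
# p ⊕ q (exclusive or: true exactly when p and q differ)
xor = truth_table(lambda p, q: p != q, ["p", "q"])

for values, result in xor:
    print(values, "->", result)
```

For p ⊕ q this prints False only for the (False, False) and (True, True) rows, matching exclusive or.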
APPROACH 3: THINKING RATIONALLY
Logic!
Socrates is a man; all men are mortal; therefore, Socrates is mortal.
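The syllogism can be mechanized by a tiny forward-chaining sketch (the facts/rules representation below is my own illustration, not an algorithm from the slides):

```python
# Facts are (predicate, subject) pairs known to be true.
facts = {("man", "Socrates")}
# "All men are mortal" becomes the rule man(X) -> mortal(X).
rules = [("man", "mortal")]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subject in list(derived):
                if pred == premise and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

result = forward_chain(facts, rules)
print(("mortal", "Socrates") in result)  # prints True
```

This is the essence of the "thinking rationally" approach: encode knowledge in a formal language and derive new conclusions mechanically.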
BRIEF HISTORY OF AI
1943: McCulloch & Pitts: Boolean circuit model of the brain
1950: Turing's "Computing Machinery and Intelligence"
1950—70: Excitement
1950s: Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
1956: Dartmouth meeting: the term "Artificial Intelligence" adopted
1969—79: Early development of knowledge-based systems
1980—88: Expert systems industry booms
1988—93: Expert systems industry busts: "AI Winter"
1990—: Statistical approaches; resurgence of probability, focus on uncertainty; increase in technical depth
Agents and learning systems… "AI Spring"?