unit 2_class_preceptron

The document discusses machine learning models, distinguishing between discriminative and generative models, and provides an overview of supervised learning algorithms including linear and classification models. It explains key concepts such as confusion matrix metrics (accuracy, recall, precision) and details the perceptron model, its activation functions, and training rules. Additionally, it touches on the concept of epochs in machine learning and compares single-layer and multi-layer perceptrons.

Machine learning models

A discriminative model makes predictions on unseen data based on conditional probability and can be used for either classification or regression problem statements.

A generative model focuses on the distribution of a dataset to return a probability for a given example.
Confusion matrix
(The worked values below assume a confusion matrix with TP = 3, TN = 4, FP = 2, FN = 1.)

Classification Accuracy: the fraction of all predictions the model got right.
Accuracy = (TP + TN) / (TP + TN + FP + FN) = (3 + 4) / (3 + 4 + 2 + 1) = 0.70
Recall: when it is actually yes, how often does the model predict yes?
Recall = TP / (TP + FN) = 3 / (3 + 1) = 0.75
Precision: when the model predicts yes, how often is it correct?
Precision = TP / (TP + FP) = 3 / (3 + 2) = 0.60
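The three metrics above can be sketched as a small helper, using the counts from the worked example (TP = 3, TN = 4, FP = 2, FN = 1):

```python
# Minimal sketch: computing confusion-matrix metrics for a binary
# classifier from raw counts. The counts match the worked example above.

def confusion_metrics(tp, tn, fp, fn):
    """Return (accuracy, recall, precision) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)     # of actual positives, how many were found
    precision = tp / (tp + fp)  # of predicted positives, how many were right
    return accuracy, recall, precision

acc, rec, prec = confusion_metrics(tp=3, tn=4, fp=2, fn=1)
print(acc, rec, prec)  # 0.7 0.75 0.6
```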
Types of supervised ML algorithms

Regression
Regression algorithms are used when the output variable is a real or continuous value.
o Linear Regression
o Regression Trees
o Non-Linear Regression
o Bayesian Linear Regression
o Polynomial Regression

Classification
Classification algorithms are used when the output variable is categorical, which means there are two classes such as Yes-No, Male-Female, True-False, etc.
o Random Forest
o Decision Trees
o Logistic Regression
o Support Vector Machines
Supervised model types

Linear Regression Models
o Least squares, single & multiple variables, Bayesian linear regression

Linear Classification Models
o Discriminant function, Perceptron algorithm

Probabilistic discriminative model
o Logistic regression

Probabilistic generative model
o Naive Bayes

Maximum margin classifier
o Support vector machine, Decision Tree, Random Forests

Neural networks
Perceptron
The perceptron model is one of the simplest and best-known types of Artificial Neural Networks.
It is a supervised learning algorithm for binary classifiers. Hence, we can consider it a single-layer neural network with four main components: input values, weights and bias, net sum, and an activation function.
ANN..

Each time a dataset passes through an algorithm, it is said to have completed an epoch. Therefore, an epoch, in machine learning, refers to one entire pass of the training data through the algorithm. The number of epochs is a hyperparameter that governs the training process of the machine learning model.
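The idea of an epoch can be sketched as the outer loop of a training procedure; the tiny dataset, epoch count, and placeholder update here are illustrative assumptions, not taken from the slides:

```python
# Sketch: an epoch is one full pass of the training data through the
# algorithm. The dataset and epoch count below are illustrative.

dataset = [([0.0, 1.0], 1), ([1.0, 0.0], 0)]  # (features, label) pairs
n_epochs = 3  # hyperparameter: number of full passes over the data

epochs_completed = 0
for epoch in range(n_epochs):
    for x, t in dataset:   # visit every training instance once
        pass               # the model update would happen here
    epochs_completed += 1  # one epoch = one full pass completed

print(epochs_completed)  # 3
```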
Perceptron – Activation function
The activation function helps determine whether the neuron will fire or not.
The activation function can be considered primarily as a step function.
Types of activation functions:
o Sign function
o Step function
o Sigmoid function
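The three activation functions listed above can be sketched as follows; the exact thresholds and conventions are assumptions based on common textbook definitions, since the slides do not give formulas:

```python
import math

# Sketch of the three activation functions named above, using common
# textbook definitions (thresholds/conventions are assumptions).

def sign(z):
    """Sign function: -1 for negative input, +1 otherwise."""
    return -1 if z < 0 else 1

def step(z):
    """Step function: 0 below the threshold, 1 at or above it."""
    return 1 if z >= 0 else 0

def sigmoid(z):
    """Sigmoid function: a smooth, continuous output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

print(sign(-2.0), step(-2.0), sigmoid(0.0))  # -1 0 0.5
```

Note that sign and step produce binary outputs, while sigmoid produces a continuous value, matching the two kinds of perceptron output described in the next section.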
Perceptron Model (algorithm)

The perceptron model (algorithm) works in two important steps, as follows:

Step 1
First, multiply all input values by their corresponding weight values and add them up to determine the weighted sum. Mathematically, we can calculate the weighted sum as follows:
∑wi*xi = x1*w1 + x2*w2 + … + xn*wn
Add a special term called the bias 'b' to this weighted sum to improve the model's performance:
∑wi*xi + b

Step 2
In the second step, an activation function is applied to the weighted sum above, which gives an output either in binary form or as a continuous value, as follows:
Y = f(∑wi*xi + b)
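The two steps above can be sketched as a single forward pass; the step activation and the example weights, bias, and input are illustrative assumptions:

```python
# Minimal sketch of the perceptron forward pass: weighted sum plus
# bias (Step 1), then a binary step activation (Step 2). The example
# weights, bias, and inputs are illustrative.

def perceptron_output(x, w, b):
    """Y = f(sum_i wi*xi + b), with f a binary step activation."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x)) + b  # Step 1
    return 1 if weighted_sum >= 0 else 0                     # Step 2

# Example: weights [0.5, -0.5], bias -0.2
print(perceptron_output([1, 0], [0.5, -0.5], -0.2))  # 1  (0.5 - 0.2 >= 0)
print(perceptron_output([0, 1], [0.5, -0.5], -0.2))  # 0  (-0.5 - 0.2 < 0)
```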
Perceptron training rule
Perceptron_training_rule (X, η)
    initialize w (wi ← an initial (small) random value)
    repeat
        for each training instance (x, tx) ∈ X
            compute the real output ox = Activation(Summation(w·x))
            if (tx ≠ ox)
                for each wi
                    ∆wi ← η (tx − ox) xi
                    wi ← wi + ∆wi
                end for
            end if
        end for
    until all the training instances in X are correctly classified
    return w
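The training rule above can be sketched as a runnable implementation. The step activation, learning rate, epoch cap, and the AND-gate training set are illustrative assumptions; the update ∆wi = η(tx − ox)xi follows the pseudocode:

```python
import random

# Runnable sketch of the perceptron training rule. Inputs carry a
# leading 1 so that w[0] acts as the bias weight (an assumption;
# the slides do not specify how the bias is handled here).

def activation(z):
    """Binary step activation."""
    return 1 if z >= 0 else 0

def train_perceptron(X, eta=0.1, max_epochs=100, seed=0):
    """X: list of (inputs, target) pairs. Returns the learned weights."""
    rng = random.Random(seed)
    n = len(X[0][0])
    w = [rng.uniform(-0.05, 0.05) for _ in range(n)]  # small random init
    for _ in range(max_epochs):   # cap epochs in case data is not separable
        errors = 0
        for x, t in X:
            o = activation(sum(wi * xi for wi, xi in zip(w, x)))
            if t != o:
                errors += 1
                for i in range(n):
                    w[i] += eta * (t - o) * x[i]  # ∆wi = η (t − o) xi
        if errors == 0:  # all training instances correctly classified
            break
    return w

# Example: learning the linearly separable AND function
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
w = train_perceptron(data)
print(all(activation(sum(wi * xi for wi, xi in zip(w, x))) == t
          for x, t in data))  # True
```

Because AND is linearly separable, the loop terminates once an epoch completes with zero misclassifications, mirroring the pseudocode's "until all the training instances in X are correctly classified".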
Single vs Multilayer perceptron