Commonly Used Machine Learning Algorithms
1. Supervised Learning
How it works: This algorithm consists of a target/outcome variable (or dependent variable) which is to be predicted from a given set of predictors (independent variables). Using this set of variables, we generate a function that maps inputs to desired outputs. The training process continues until the model achieves a desired level of accuracy on the training data. Examples of Supervised Learning: Regression, Decision Tree, Random Forest, KNN, Logistic Regression etc.
2. Unsupervised Learning
How it works: In this algorithm, we do not have any target or outcome variable to predict/estimate. It is used for clustering a population into different groups, which is widely applied for segmenting customers into groups for specific interventions. Examples of Unsupervised Learning: Apriori algorithm, K-means.
3. Reinforcement Learning:
How it works: Using this algorithm, the machine is trained to make specific decisions. It works
this way: the machine is exposed to an environment where it trains itself continually using trial
and error. This machine learns from past experience and tries to capture the best possible
knowledge to make accurate business decisions. Example of Reinforcement Learning: Markov
Decision Process
Here is a list of commonly used machine learning algorithms. These algorithms can be applied to almost any data problem:
1. Linear Regression
2. Logistic Regression
3. Decision Tree
4. SVM
5. Naive Bayes
6. kNN
7. K-Means
8. Random Forest
9. Dimensionality Reduction Algorithms
10. Gradient Boosting algorithms
1. GBM
2. XGBoost
3. LightGBM
4. CatBoost
1. Linear Regression
It is used to estimate real values (cost of houses, number of calls, total sales etc.) based on continuous variable(s). Here, we establish a relationship between the independent and dependent variables by fitting the best line. This best fit line is known as the regression line and is represented by the linear equation Y = a*X + b.
The best way to understand linear regression is to relive an experience from childhood. Let us say you ask a fifth-grade child to arrange the people in their class in increasing order of weight, without asking them their weights! What do you think the child will do? They would likely look at (visually analyze) the height and build of people and arrange them using a combination of these visible parameters. This is linear regression in real life! The child has actually figured out that height and build are correlated with weight by a relationship, which looks like the equation above.
In this equation:
Y – Dependent Variable
a – Slope
X – Independent variable
b – Intercept
The coefficients a and b are derived by minimizing the sum of the squared differences between the data points and the regression line.
Look at the example below. Here we have identified the best fit line with the linear equation y = 0.2811x + 13.9. Using this equation, we can find the weight of a person given their height.
Linear Regression is mainly of two types: Simple Linear Regression and Multiple Linear Regression. Simple Linear Regression is characterized by one independent variable, while Multiple Linear Regression (as the name suggests) is characterized by multiple (more than one) independent variables. When finding the best fit line, you can also fit a polynomial or curvilinear function; this is known as polynomial or curvilinear regression.
Here’s how you can build your own linear regression model in Python:
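A minimal scikit-learn sketch, assuming x_train, y_train and x_test are numeric arrays or dataframes, mirroring the R snippet below:
# Import the linear regression model from scikit-learn
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
print('Coefficient (a):', model.coef_, 'Intercept (b):', model.intercept_)
# Predict Output
predicted = model.predict(x_test)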
R Code
#Identify feature and response variable(s); values must be numeric
x <- cbind(x_train,y_train)
# Train the model using the training sets and check score
linear <- lm(y_train ~ ., data = x)
summary(linear)
#Predict Output
predicted= predict(linear,x_test)
2. Logistic Regression
Don’t get confused by its name! It is a classification not a regression algorithm. It is used to
estimate discrete values ( Binary values like 0/1, yes/no, true/false ) based on given set of
independent variable(s). In simple words, it predicts the probability of occurrence of an event by
fitting data to a logit function. Hence, it is also known as logit regression. Since, it predicts the
probability, its output values lies between 0 and 1 (as expected).
Let’s say your friend gives you a puzzle to solve. There are only 2 outcome scenarios – either you solve it or you don’t. Now imagine that you are being given a wide range of puzzles/quizzes in an attempt to understand which subjects you are good at. The outcome of this study would be something like this: if you are given a trigonometry-based tenth-grade problem, you are 70% likely to solve it. On the other hand, if it is a fifth-grade history question, the probability of getting an answer is only 30%. This is what Logistic Regression provides you.
Coming to the math, the log odds of the outcome are modeled as a linear combination of the predictor variables:
odds = p/(1-p) = probability of event occurrence / probability of no event occurrence
ln(odds) = ln(p/(1-p)) = b0 + b1*X1 + b2*X2 + ... + bk*Xk
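Python Code (a minimal scikit-learn sketch, assuming x_train, y_train and x_test as in the other snippets):
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output: class labels and event probabilities
predicted = model.predict(x_test)
probabilities = model.predict_proba(x_test)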
R Code
x <- cbind(x_train,y_train)
# Train the model using the training sets and check score
logistic <- glm(y_train ~ ., data = x, family = "binomial")
summary(logistic)
#Predict Output
predicted= predict(logistic,x_test)
3. Decision Tree
This is one of my favorite algorithms and I use it quite frequently. It is a type of supervised learning algorithm that is mostly used for classification problems. Surprisingly, it works for both categorical and continuous dependent variables. In this algorithm, we split the population into two or more homogeneous sets. This is done based on the most significant attributes/independent variables, to make the groups as distinct as possible. For more details, you can read: Decision Tree Simplified.
source: statsexchange
In the image above, you can see that the population is classified into four different groups based on multiple attributes, to identify whether ‘they will play or not’. To split the population into different heterogeneous groups, it uses various techniques like Gini, information gain, Chi-square and entropy.
The best way to understand how a decision tree works is to play Jezzball – a classic game from Microsoft (image below). Essentially, you have a room with moving walls and you need to create walls such that the maximum area gets cleared off without the balls.
So, every time you split the room with a wall, you are trying to create two different populations within the same room. Decision trees work in a very similar fashion, by dividing a population into groups that are as different as possible.
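Python Code (a minimal scikit-learn sketch; x_train, y_train and x_test follow the convention of the other snippets):
from sklearn.tree import DecisionTreeClassifier  # use DecisionTreeRegressor for a continuous target
# criterion can be 'gini' or 'entropy' (information gain)
model = DecisionTreeClassifier(criterion='gini')
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output
predicted = model.predict(x_test)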
4. SVM (Support Vector Machine)
It is a classification method. In this algorithm, we plot each data item as a point in n-dimensional space (where n is the number of features you have), with the value of each feature being the value of a particular coordinate.
For example, if we only had two features, like height and hair length of an individual, we would first plot these two variables in two-dimensional space, where each point has two coordinates. The points that lie closest to the separating boundary are known as support vectors.
Now, we will find a line that splits the data between the two differently classified groups of data. This will be the line for which the distance to the closest point in each of the two groups is the largest.
In the example shown above, the line which splits the data into two differently classified groups is the black line, since the two closest points are the farthest away from the line. This line is our classifier. Then, depending on which side of the line a new test point lands, we assign that class to the new data.
Think of this algorithm as playing Jezzball in n-dimensional space, with a few tweaks:
You can draw lines/planes at any angle (rather than just horizontal or vertical as in the classic game).
The objective of the game is to segregate balls of different colors into different rooms.
And the balls are not moving.
Here’s how you can design an SVM model in Python:
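A minimal scikit-learn sketch, assuming x_train, y_train and x_test as before; the linear kernel matches the straight-line separator described above:
from sklearn import svm
model = svm.SVC(kernel='linear')
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output
predicted = model.predict(x_test)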
R Code
library(e1071)
x <- cbind(x_train,y_train)
# Fitting model
fit <- svm(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted= predict(fit,x_test)
5. Naive Bayes
It is a classification technique based on Bayes’ theorem with an assumption of independence
between predictors. In simple terms, a Naive Bayes classifier assumes that the presence of a
particular feature in a class is unrelated to the presence of any other feature. For example, a
fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if
these features depend on each other or upon the existence of the other features, a naive Bayes
classifier would consider all of these properties to independently contribute to the probability
that this fruit is an apple.
A Naive Bayes model is easy to build and particularly useful for very large data sets. Along with its simplicity, Naive Bayes can outperform even highly sophisticated classification methods.
Bayes’ theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x) and P(x|c). Look at the equation below:
P(c|x) = P(x|c) * P(c) / P(x)
Here, P(c|x) is the posterior probability of class c given predictor x, P(c) is the prior probability of the class, P(x|c) is the likelihood (the probability of the predictor given the class), and P(x) is the prior probability of the predictor.
Example: Let’s understand it using an example. Below I have a training data set of weather and the corresponding target variable ‘Play’. Now, we need to classify whether players will play or not based on the weather conditions. Let’s follow the steps below to perform it.
Step 1: Convert the data set into a frequency table.
Step 2: Create a Likelihood table by finding the probabilities, like Overcast probability = 0.29 and probability of playing = 0.64.
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class.
The class with the highest posterior probability is the outcome of prediction.
For instance, take the statement: players will play if the weather is sunny. Is this correct? Here we have P(Sunny|Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36 and P(Yes) = 9/14 = 0.64.
Now, P(Yes|Sunny) = P(Sunny|Yes) * P(Yes) / P(Sunny) = 0.33 * 0.64 / 0.36 = 0.60, which is the higher probability, so the prediction is that the players will play.
Naive Bayes uses a similar method to predict the probabilities of different classes based on various attributes. This algorithm is mostly used in text classification and with problems having multiple classes.
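Python Code (a minimal scikit-learn sketch; GaussianNB assumes continuous features, while MultinomialNB suits count data such as text):
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output
predicted = model.predict(x_test)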
R Code
library(e1071)
x <- cbind(x_train,y_train)
# Fitting model
fit <- naiveBayes(y_train ~ ., data = x)
summary(fit)
#Predict Output
predicted= predict(fit,x_test)
6. kNN (k-Nearest Neighbours)
kNN can be used for both classification and regression problems. It stores all available cases and classifies a new case by a majority vote of its k nearest neighbours, measured by a distance function.
KNN can easily be mapped to our real lives. If you want to learn about a person of whom you have no information, you might like to find out about their close friends and the circles they move in, and gain access to their information!
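Python Code (a minimal scikit-learn sketch; k = 5 is an illustrative choice, tune it on validation data):
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=5)
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output
predicted = model.predict(x_test)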
R Code
library(class)
# knn() from the "class" package fits and predicts in one call:
# supply training features, test features, training labels and k neighbours (k = 3 is illustrative)
predicted <- knn(train = x_train, test = x_test, cl = y_train, k = 3)
7. K-Means
It is a type of unsupervised algorithm which solves the clustering problem. Its procedure follows a simple and easy way to classify a given data set through a certain number of clusters (assume k clusters). Data points inside a cluster are homogeneous, and heterogeneous with respect to other clusters.
Remember figuring out shapes from ink blots? K-means is somewhat similar to this activity. You look at the shape and spread to decipher how many different clusters/populations are present!
In K-means, we have clusters and each cluster has its own centroid. The sum of the squared differences between the centroid and the data points within a cluster constitutes the within-cluster sum of squares for that cluster. When the within-cluster sums of squares for all the clusters are added, the total becomes the within-cluster sum of squares for the cluster solution.
We know that as the number of clusters increases, this value keeps decreasing, but if you plot the result you may see that the sum of squared distances decreases sharply up to some value of k, and then much more slowly after that. Here, we can find the optimal number of clusters.
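Python Code (a minimal scikit-learn sketch; x_train is assumed to be a numeric feature matrix and k = 3 is chosen purely for illustration):
from sklearn.cluster import KMeans
model = KMeans(n_clusters=3, random_state=0)
model.fit(x_train)
print(model.cluster_centers_)  # centroid of each cluster
print(model.inertia_)          # total within-cluster sum of squares, useful for the elbow plot
# Assign each new observation to its nearest centroid
predicted = model.predict(x_test)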
R Code
library(cluster)
# Fit a 3-cluster solution (k = 3 chosen for illustration); x_train must be numeric
fit <- kmeans(x_train, centers = 3)
8. Random Forest
Random Forest is a trademark term for an ensemble of decision trees. In Random Forest, we have a collection of decision trees (hence the name “forest”). To classify a new object based on its attributes, each tree gives a classification and we say the tree “votes” for that class. The forest chooses the classification having the most votes (over all the trees in the forest). Each tree is planted and grown as follows:
1. If the number of cases in the training set is N, then sample of N cases is taken at random
but with replacement. This sample will be the training set for growing the tree.
2. If there are M input variables, a number m<<M is specified such that at each node, m
variables are selected at random out of the M and the best split on these m is used to
split the node. The value of m is held constant during the forest growing.
3. Each tree is grown to the largest extent possible. There is no pruning.
For more details on this algorithm, comparing it with decision trees and tuning model parameters, I would suggest you read these articles:
1. Introduction to Random forest – Simplified
2. Comparing a CART model to Random Forest (Part 1)
3. Comparing a Random Forest to a CART model (Part 2)
4. Tuning the parameters of your Random Forest model
Python Code:
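A minimal scikit-learn sketch, assuming x_train, y_train and x_test as in the other snippets; n_estimators is the number of trees in the forest:
from sklearn.ensemble import RandomForestClassifier
model = RandomForestClassifier(n_estimators=500)
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output
predicted = model.predict(x_test)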
R Code
library(randomForest)
x <- cbind(x_train,y_train)
# Fitting model
fit <- randomForest(y_train ~ ., data = x, ntree = 500)
summary(fit)
#Predict Output
predicted= predict(fit,x_test)
9. Dimensionality Reduction Algorithms
For example: e-commerce companies are capturing more and more details about customers, like their demographics, web crawling history, what they like or dislike, purchase history, feedback and many others, to give them personalized attention, more than your nearest grocery shopkeeper can.
As a data scientist, the data we are offered also consists of many features. This sounds good for building a robust model, but there is a challenge: how do you identify the highly significant variable(s) out of 1000 or 2000? In such cases, dimensionality reduction algorithms help us, along with various techniques like decision trees, random forest, PCA, factor analysis, identification based on the correlation matrix, missing value ratio and others.
To know more about these algorithms, you can read “Beginners Guide To Learn Dimension Reduction Techniques“.
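Python Code (a minimal scikit-learn PCA sketch; x_train and x_test are assumed numeric, and the number of components is an illustrative choice):
from sklearn.decomposition import PCA
pca = PCA(n_components=5)
# Fit on the training data, then project both train and test onto the components
train_reduced = pca.fit_transform(x_train)
test_reduced = pca.transform(x_test)
print(pca.explained_variance_ratio_)  # share of variance captured by each component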
R Code
library(stats)
# PCA on the numeric training features, then project the train and test sets onto the components
pca <- princomp(x_train, cor = TRUE)
train_reduced <- predict(pca, x_train)
test_reduced <- predict(pca, x_test)
10. Gradient Boosting Algorithms
10.1. GBM
GBM is a boosting algorithm used when we deal with plenty of data and need a prediction with high predictive power. Boosting is an ensemble learning technique that combines the predictions of several base estimators in order to improve robustness over a single estimator. It combines multiple weak or average predictors to build a strong predictor. These boosting algorithms tend to do well in data science competitions like Kaggle, AV Hackathon and CrowdAnalytix.
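Python Code (a minimal scikit-learn sketch; the hyper-parameters shown are illustrative defaults, not tuned values):
from sklearn.ensemble import GradientBoostingClassifier
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
# Train the model using the training sets and check score
model.fit(x_train, y_train)
print(model.score(x_train, y_train))
# Predict Output
predicted = model.predict(x_test)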
R Code
library(caret)
x <- cbind(x_train,y_train)
# Fitting model with caret's gbm wrapper
fit <- train(y_train ~ ., data = x, method = "gbm", verbose = FALSE)
# Predict Output
predicted <- predict(fit, x_test)
GradientBoostingClassifier and Random Forest are two different ensemble tree classifiers (boosting versus bagging), and people often ask about the difference between these two algorithms.
10.2. XGBoost
Another classic gradient boosting algorithm that’s known to be the decisive choice between
winning and losing in some Kaggle competitions.
XGBoost has immensely high predictive power, which makes it a strong choice when accuracy matters, as it provides both a linear model solver and the tree learning algorithm and is almost 10x faster than older gradient boosting implementations.
The support includes various objective functions, including regression, classification and ranking.
One of the most interesting things about XGBoost is that it is also called a regularized boosting technique. This helps to reduce overfitting, and it has massive support for a range of languages such as Scala, Java, R, Python, Julia and C++.
It supports distributed training on many machines, including GCE, AWS, Azure and YARN clusters. XGBoost can also be integrated with Spark, Flink and other cloud dataflow systems, with built-in cross-validation at each iteration of the boosting process.
Python Code:
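A minimal sketch using the xgboost package's scikit-learn interface; the hyper-parameters are illustrative, and XGBRegressor is the counterpart for regression targets:
from xgboost import XGBClassifier
model = XGBClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
# Train the model using the training sets
model.fit(x_train, y_train)
# Predict Output
predicted = model.predict(x_test)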
R Code:
require(caret)
x <- cbind(x_train,y_train)
# Fitting model (linear booster)
model <- train(y_train ~ ., data = x, method = "xgbLinear")
# OR a tree booster
model <- train(y_train ~ ., data = x, method = "xgbTree")
# Predict Output
predicted <- predict(model, x_test)
10.3. LightGBM
LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient, offering faster training speed, lower memory usage, better accuracy, and support for parallel and GPU learning on large-scale data.
It is a fast, high-performance gradient boosting framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks. It was developed under the Distributed Machine Learning Toolkit project of Microsoft.
Since LightGBM is based on decision tree algorithms, it grows the tree leaf-wise with the best fit, whereas most other boosting algorithms grow the tree depth-wise or level-wise. So, when growing on the same leaf, the leaf-wise algorithm can reduce more loss than the level-wise algorithm, and hence often results in better accuracy.
Python Code:
import numpy as np
import lightgbm as lgb
# Toy data for illustration: 500 rows, 10 features, binary target
train_data = lgb.Dataset(np.random.rand(500, 10), label=np.random.randint(2, size=500))
param = {'num_leaves': 31, 'objective': 'binary'}
param['metric'] = 'auc'
num_round = 10
bst = lgb.train(param, train_data, num_round)
bst.save_model('model.txt')
ypred = bst.predict(np.random.rand(7, 10))  # predict on new rows with the same 10 features
R Code:
library(RLightGBM)
data(example.binary)
#Parameters
num_iterations <- 100
config <- list(objective = "binary", metric = "binary_logloss,auc", learning_rate = 0.1,
               num_leaves = 63, min_data_in_leaf = 50, min_sum_hessian_in_leaf = 5.0)
#Data preprocessing
handle.data <- lgbm.data.create(x)
lgbm.data.setField(handle.data, "label", y)
handle.booster <- lgbm.booster.create(handle.data, lapply(config, as.character))
#Train for num_iterations iterations, evaluating every 5 steps
lgbm.booster.train(handle.booster, num_iterations, 5)
#Predict
pred <- lgbm.booster.predict(handle.booster, x.test)
If you’re familiar with the Caret package in R, this is another way of implementing LightGBM.
require(caret)
require(RLightGBM)
library(Matrix)
data(iris)
# Dense interface
model <- caretModel.LGBM()
fit <- train(iris[, 1:4], iris$Species, method = model, verbosity = 0)
print(fit)
# Sparse-matrix interface
model.sparse <- caretModel.LGBM.sparse()
mat <- Matrix(as.matrix(iris[, 1:4]), sparse = TRUE)
fit <- train(data.frame(idx = 1:nrow(iris)), iris$Species, method = model.sparse, matrix = mat, verbosity = 0)
print(fit)
10.4. Catboost
CatBoost is a recently open-sourced machine learning algorithm from Yandex. It can easily
integrate with deep learning frameworks like Google’s TensorFlow and Apple’s Core ML.
The best part about CatBoost is that it does not require extensive data preparation like other ML models, and it can work on a variety of data formats without undermining how robust it can be.
Make sure you handle missing data well before you proceed with the implementation.
Catboost can automatically deal with categorical variables without showing the type conversion
error, which helps you to focus on tuning your model better rather than sorting out trivial
errors.
Python Code:
import pandas as pd
import numpy as np
from catboost import CatBoostRegressor
# Read training and testing files
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
# Impute missing values for both train and test
train.fillna(-999, inplace=True)
test.fillna(-999, inplace=True)
#Creating a training set for modeling and validation set to check model performance
X = train.drop(['Item_Outlet_Sales'], axis=1)
y = train.Item_Outlet_Sales
from sklearn.model_selection import train_test_split
X_train, X_validation, y_train, y_validation = train_test_split(X, y, train_size=0.7, random_state=1234)
# Indices of categorical features (the non-float columns)
categorical_features_indices = np.where(X.dtypes != float)[0]
# Build and fit the model
model = CatBoostRegressor(iterations=50, depth=3, learning_rate=0.1, loss_function='RMSE')
model.fit(X_train, y_train, cat_features=categorical_features_indices,
          eval_set=(X_validation, y_validation), plot=True)
submission = pd.DataFrame()
submission['Item_Identifier'] = test['Item_Identifier']
submission['Outlet_Identifier'] = test['Outlet_Identifier']
submission['Item_Outlet_Sales'] = model.predict(test)
R Code:
set.seed(1)
require(titanic)
require(caret)
require(catboost)
tt <- titanic::titanic_train[complete.cases(titanic::titanic_train), ]
data <- as.data.frame(as.matrix(tt), stringsAsFactors = TRUE)
drop_columns <- c("PassengerId", "Survived", "Name", "Ticket", "Cabin")
x <- data[, !(names(data) %in% drop_columns)]
y <- data[, c("Survived")]
fit_control <- trainControl(method = "cv", number = 4, classProbs = TRUE)
grid <- expand.grid(depth = c(4, 6, 8), learning_rate = 0.1, iterations = 100,
                    l2_leaf_reg = 1e-3, rsm = 0.95, border_count = 64)
report <- train(x, as.factor(make.names(y)), method = catboost.caret,
                verbose = TRUE, preProc = NULL, tuneGrid = grid, trControl = fit_control)
print(report)
importance <- varImp(report, scale = FALSE)
print(importance)
Projects
Now, it's time to take the plunge and actually play with some other real datasets. So are you
ready to take on the challenge? Accelerate your data science journey with the following Practice
Problems:
1. Practice Problem: HR Analytics Challenge (identify the employees most likely to get promoted)
2. Practice Problem: Predict Number of Upvotes (predict the number of upvotes on a query asked at an online question & answer platform)