ANN Architecture & Batch Norm Guide

The document discusses neural networks and their architecture. It explains that a neural network consists of three layers - an input layer, a hidden layer, and an output layer. The hidden layer performs computations on the input data and transfers the output to the output layer. Weights are assigned between neurons and determine the learning ability of the neural network. During training, weights are updated through forward and backward propagation to reduce error.

Uploaded by

JAYESH SINGH

1. Demonstrate the architecture of an ANN with a neat diagram


A neural network is a series of algorithms that tries to mimic the human brain and find relationships within sets of data. It is used in various use cases such as regression, classification, image recognition, and many more.

As mentioned above, since neural networks try to mimic the human brain, there are both similarities and differences between the two.

Some major differences are that the biological neural network does parallel processing whereas the artificial neural network does series processing; also, processing in the former is slower (on the order of milliseconds) while in the latter it is faster (on the order of nanoseconds).

Architecture of ANN

A neural network consists of three layers. The first layer is the input layer; it contains the input neurons that send information to the hidden layer. The hidden layer performs the computations on the input data and transfers the output to the output layer. It involves weights, an activation function, and a cost function.

The connection between neurons is known as a weight, which is a numerical value. The weights between neurons determine the learning ability of the neural network: during the learning of an artificial neural network, the weights between the neurons change.

Working of ANN
First, the information is fed into the input layer, which transfers it to the hidden layer. The interconnections between these two layers assign a weight to each input randomly at the initial point. A bias is then added to each input neuron, and the weighted sum (a combination of weights and bias) is passed through the activation function. The activation function decides which nodes to fire for feature extraction, and finally the output is calculated. This whole process is known as forward propagation. The model's output is then compared with the expected output to obtain the error, and the weights are updated in backward propagation to reduce that error. This process continues for a certain number of epochs (iterations). Finally, the model weights are updated and predictions are made.
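The forward and backward propagation steps described above can be sketched with a tiny NumPy network. This is only an illustrative sketch: the XOR-style toy data, the layer sizes, the sigmoid activation, and the learning rate are all assumed choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 2 features (hypothetical XOR-style task)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Randomly initialise weights and biases (the random "initial point" above)
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for epoch in range(2000):
    # Forward propagation: weighted sum plus bias, then activation
    h = sigmoid(X @ W1 + b1)        # hidden layer
    out = sigmoid(h @ W2 + b2)      # output layer
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward propagation: the chain rule pushes the error back
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)

    # Weight update step reduces the error
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # the loss typically shrinks over the epochs
```

Each pass through the loop is one epoch: a forward pass to compute the error, then a backward pass to update the weights.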

Some Merits of ANN


● It has parallel processing ability: it can perform more than one task at the same time.
● After training, an ANN can infer unseen relationships from unseen data, so it generalizes well.
● Unlike many machine learning models, an ANN places no restrictions on the dataset, such as requiring the data to follow a Gaussian or any other particular distribution.

Applications of ANN
There are many applications of ANN. Some of them are:

Medical

We can use it to detect cancer cells and to analyze MRI images for detailed results.

Forecast

We can use it in many areas of business decision-making, such as finance and the stock market, and in economic and monetary policy.

Image Processing
We can use it in satellite imagery processing for agricultural and defense purposes.

2. Explain Convolution Layer along with Batch Normalization with an example
To fully understand how Batch Norm works and why it is important, let’s start
by talking about normalization.

Normalization is a pre-processing technique used to standardize data, in other words, to bring different sources of data into the same range. Not normalizing the data before training can cause problems in our network, making it drastically harder to train and decreasing its learning speed.

For example, imagine we run a car rental service and want to predict a fair price for each car based on competitors' data. We have two features per car: its age in years and the total number of kilometers it has been driven. These can have very different ranges: age runs from 0 to 30 years, while distance can go from 0 up to hundreds of thousands of kilometers. We don't want features with such different ranges, as the feature with the larger range might bias our model into giving it inflated importance.

There are two main methods to normalize our data. The most straightforward method is to scale it to the range from 0 to 1:

x_normalized = (x - x_min) / (x_max - x_min)

where x is the data point to normalize, x_max the highest value of the feature, and x_min the lowest value. This technique is generally applied to the input data. Non-normalized data points with wide ranges can cause instability in neural networks: the relatively large inputs can cascade down through the layers, causing problems such as exploding gradients.
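As a rough illustration, the min-max formula can be applied to the two car-rental features from the example above; the specific values here are made up:

```python
# Min-max scaling of the (made-up) car-rental features:
# age in years (0-30) and kilometres driven (a much wider range).
age = [1.0, 5.0, 12.0, 30.0]
km = [8_000.0, 45_000.0, 120_000.0, 250_000.0]

def min_max(values):
    """Scale a list of values into the range [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max(age))  # every feature now lies in [0, 1]
print(min_max(km))
```

After scaling, both features occupy the same [0, 1] range, so neither dominates purely because of its units.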

The other technique forces the data points to have a mean of 0 and a standard deviation of 1, using the following formula:

x_normalized = (x - m) / s

where x is the data point to normalize, m the mean of the data set, and s the standard deviation of the data set. Each data point then mimics a standard normal distribution. With all the features on this scale, none of them is biased, and therefore our models will learn better.
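A minimal sketch of this standardization using Python's statistics module, on the same made-up age values as before:

```python
import statistics

def z_score(values):
    """Shift values to mean 0 and scale them to standard deviation 1."""
    m = statistics.mean(values)
    s = statistics.pstdev(values)   # population standard deviation
    return [(v - m) / s for v in values]

age_std = z_score([1.0, 5.0, 12.0, 30.0])
# The standardized feature has mean 0 and standard deviation 1
print(round(statistics.mean(age_std), 10))
print(round(statistics.pstdev(age_std), 10))
```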

In Batch Norm, we use this last technique to normalize batches of data inside
the network itself.

Batch Normalization

Batch Norm is a normalization technique applied between the layers of a neural network rather than to the raw data, and it is computed over mini-batches instead of the full data set. It serves to speed up training and allows the use of higher learning rates, making learning easier.

Following the technique explained in the previous section, we can define the normalization formula of Batch Norm as:

x_hat = (x - m_B) / sqrt(s_B^2 + eps)

where m_B and s_B^2 are the mean and variance of the current mini-batch, and eps is a small constant added for numerical stability. The normalized activation is then scaled and shifted by two learnable parameters, gamma and beta: y = gamma * x_hat + beta.
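The per-feature computation can be sketched in NumPy as follows. This is a training-time sketch only: the toy batch values and the choice of gamma and beta are illustrative, and the running statistics that Batch Norm uses at inference time are omitted.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch per feature, then scale and shift.

    x: array of shape (batch, features), activations from the previous layer.
    gamma, beta: learnable scale and shift, one value per feature.
    """
    mu = x.mean(axis=0)                    # mini-batch mean per feature
    var = x.var(axis=0)                    # mini-batch variance per feature
    x_hat = (x - mu) / np.sqrt(var + eps)  # mean 0, std approximately 1
    return gamma * x_hat + beta

batch = np.array([[1.0, 200.0],
                  [2.0, 400.0],
                  [3.0, 600.0]])
out = batch_norm(batch, gamma=np.ones(2), beta=np.zeros(2))
print(out.mean(axis=0))  # close to [0, 0]
print(out.std(axis=0))   # close to [1, 1] (up to eps)
```

With gamma = 1 and beta = 0 this reduces to plain standardization of each feature within the batch; during training the network learns gamma and beta so it can undo the normalization where that helps.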
3. Which are the various metrics for evaluating classifier performance?
https://towardsdatascience.com/8-metrics-to-measure-classification-performance-984d9d7fd7aa

4. How can cross validation methods be used to evaluate classifiers?
5. What is meant by ensemble learning? Explain different types of ensemble classifiers. Explain any one in detail.
6. How Does Bagging Method Work?
7. Short note on Hold Out method and Boosting
8. Explain Data Science Process with the help of an example
9. Write a note on applications of data science in healthcare and supply chain management
10. Explain current trends of data science in detail
