Deep Learning
Machine Learning: Takes less time to train the model.
Deep Learning: Takes more time to train the model.
The diagram above, a perceptron, is the building block of the whole of deep
learning. A perceptron is structurally similar to a biological neuron: it
takes inputs and produces an output in much the same fashion a neuron does.
Hence models in deep learning are generally called neural networks.
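To make this concrete, here is a minimal sketch of a single perceptron in Python (the AND example and the choice of a step threshold are illustrative, not from the text):

import numpy as np

def perceptron(x, w, b):
    # Weighted sum of inputs plus a bias, then a step "activation":
    # the unit "fires" (outputs 1) if the sum is positive, else outputs 0.
    return 1 if np.dot(w, x) + b > 0 else 0

# Example: weights and bias chosen so the perceptron computes logical AND.
w = np.array([1.0, 1.0])
b = -1.5
print(perceptron(np.array([1, 1]), w, b))  # 1
print(perceptron(np.array([1, 0]), w, b))  # 0

Just as with a biological neuron, the output depends on whether the combined input stimulus crosses a threshold.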
Multi-Layered Perceptron (MLP):
As the name suggests, an MLP has multiple layers of perceptrons. MLPs are
feed-forward artificial neural networks with at least 3 layers: the first is
called the input layer, the next ones are called hidden layers, and the last
one is called the output layer. The nodes in the input layer have no
activation; they simply represent the data point. If the data point is
represented by a d-dimensional vector, the input layer has d nodes. The
diagram below makes this clearer.
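As a rough sketch of how these layers compose (the layer sizes, the ReLU hidden activation, and the sigmoid output here are illustrative assumptions, not specifics from the text):

import numpy as np

rng = np.random.default_rng(0)
d, h, o = 4, 8, 1  # sizes of the input, hidden, and output layers (illustrative)

# Input layer: no activation, its d nodes simply hold the data point.
x = rng.normal(size=d)

# Hidden layer: affine transform followed by an activation (ReLU here).
W1, b1 = rng.normal(size=(h, d)), np.zeros(h)
hidden = np.maximum(0, W1 @ x + b1)

# Output layer: another affine transform, squashed to (0, 1) with a sigmoid.
W2, b2 = rng.normal(size=(o, h)), np.zeros(o)
output = 1 / (1 + np.exp(-(W2 @ hidden + b2)))
print(output)

Note that only the hidden and output layers carry weights and activations; the input layer is just the data.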
5. Activation Functions and Types
What is an activation function?
Simply put, an activation function is a function added to an artificial
neural network to help the network learn complex patterns in the data.
Compared with the neuron-based model in our brains, the activation function
is what ultimately decides what gets fired to the next neuron. That is
exactly what an activation function does in an ANN as well: it takes the
output signal from the previous cell and converts it into a form that can be
taken as input by the next cell. The comparison is summarized in the figure
below.
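For reference, here are a few common activation functions written out in NumPy (the selection of these three is illustrative, not a list from the text):

import numpy as np

def sigmoid(z):
    # Squashes any real input into (0, 1).
    return 1 / (1 + np.exp(-z))

def tanh(z):
    # Squashes input into (-1, 1) and is zero-centered.
    return np.tanh(z)

def relu(z):
    # Passes positive inputs through unchanged, zeroes out negative ones.
    return np.maximum(0, z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))  # approx [0.119 0.5   0.881]
print(tanh(z))     # approx [-0.964  0.     0.964]
print(relu(z))     # [0. 0. 2.]

Each of these converts the signal from the previous cell into a bounded or rectified form for the next cell.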
Now imagine taking a small patch of this image and running a small neural
network, called a filter or kernel, on it, with, say, K outputs, represented
vertically. Slide that neural network across the whole image; the result is
another image with a different width, height, and depth. Instead of just the
R, G, and B channels, we now have more channels, but a smaller width and
height. This operation is called convolution. If the patch size is the same
as that of the image, it reduces to a regular neural network; because the
patch is small, we have far fewer weights.
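A rough sketch of this sliding-filter idea for a single-channel image (the sizes, the stride of 1, and the single filter are illustrative assumptions):

import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over every valid patch of the image and record one
    # output value per position (stride 1, no padding), so the output is
    # smaller in width and height than the input.
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # a toy 5x5 "image"
kernel = np.ones((3, 3)) / 9.0                    # a 3x3 averaging filter
print(convolve2d(image, kernel).shape)            # (3, 3)

Stacking K such kernels gives K output channels, and because each kernel has only kh * kw weights reused at every position, the layer needs far fewer weights than a fully connected one.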