Deep Learning
Uma N Dulhare
(Professor & Head)
Fundamentals of Deep Learning
Implementing Neural Networks in TensorFlow
What Is TensorFlow?
Tensor Addition
You can add two tensors using tensorA.add(tensorB).
Tensor Subtraction
You can subtract two tensors using tensorA.sub(tensorB).
Tensor Multiplication
You can multiply two tensors using tensorA.mul(tensorB).
Tensor Division
You can divide two tensors using tensorA.div(tensorB).
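The method-call syntax above follows the TensorFlow.js tensor API. A minimal sketch of the
same element-wise arithmetic in Python TensorFlow (an assumption about the intended
environment: TensorFlow 2.x with eager execution; the tensor values are purely illustrative):

import tensorflow as tf

tensorA = tf.constant([[1.0, 2.0], [3.0, 4.0]])
tensorB = tf.constant([[5.0, 6.0], [7.0, 8.0]])

added      = tf.add(tensorA, tensorB)       # element-wise sum
subtracted = tf.subtract(tensorA, tensorB)  # element-wise difference
multiplied = tf.multiply(tensorA, tensorB)  # element-wise (Hadamard) product
divided    = tf.divide(tensorA, tensorB)    # element-wise quotient

print(added.numpy())  # [[ 6.  8.] [10. 12.]]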
Tensor Square
You can square a tensor using tensor.square().
Tensor Reshape
You can reshape a tensor using tensor.reshape().
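A minimal Python sketch of squaring and reshaping (again assuming TensorFlow 2.x; the
shapes and values are illustrative):

import tensorflow as tf

tensor = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # shape (2, 3)

squared  = tf.square(tensor)           # element-wise square, shape unchanged
reshaped = tf.reshape(tensor, [3, 2])  # same values rearranged into shape (3, 2)

print(squared.numpy())   # [[ 1.  4.  9.] [16. 25. 36.]]
print(reshaped.shape)    # (3, 2)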
Placeholder Tensors
Feature maps are used in convolutional neural networks for several reasons, some of
which are mentioned below; a short sketch of extracting feature maps follows the list.
● Detect important features: Initially, the feature maps capture low-level patterns, but
as they propagate to successive layers, they detect new patterns and combine them to
form high-level features.
● Feature sharing: A feature map is passed through multiple layers of an image-processing
network, so the features detected by earlier layers are propagated to each successive
layer.
● Object recognition: Feature maps can also be passed to an artificial neural network
which can be trained to predict the object in the image.
● Image segmentation: Feature maps can divide an image into different segments, each
representing a meaningful part of the unsegmented image.
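As a concrete illustration, the sketch below builds a tiny two-layer CNN with tf.keras and
reads out the feature maps produced by each convolutional layer; the layer sizes, layer
names, and input shape are illustrative assumptions, not values from the text.

import tensorflow as tf

# A tiny CNN; earlier layers capture low-level patterns, later layers higher-level features.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = tf.keras.layers.Conv2D(16, 3, activation="relu", name="conv1")(inputs)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", name="conv2")(x)
model = tf.keras.Model(inputs, x)

# A second model whose outputs are the feature maps of both convolutional layers.
feature_extractor = tf.keras.Model(
    inputs=model.input,
    outputs=[model.get_layer("conv1").output, model.get_layer("conv2").output],
)

image = tf.random.uniform((1, 32, 32, 3))   # stand-in for a real input image
conv1_maps, conv2_maps = feature_extractor(image)
print(conv1_maps.shape, conv2_maps.shape)   # (1, 30, 30, 16) (1, 28, 28, 32)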
Full Description of the Convolutional Layer
● The convolutional layer is the core building block of a CNN, and it is where the
majority of the computation occurs. It requires a few components: input data, a filter,
and a feature map. Let's assume the input is a color image, which is represented as a
3D matrix of pixels (height, width, and color channels), as sketched after this list.
● Neural networks are a subset of machine learning, and they are at the heart of deep
learning algorithms. They are composed of node layers, containing an input layer,
one or more hidden layers, and an output layer. Each node connects to another and
has an associated weight and threshold. If the output of any individual node is above
the specified threshold value, that node is activated, sending data to the next layer of
the network. Otherwise, no data is passed along to the next layer of the network.
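A minimal sketch of the three components named in the first bullet, assuming TensorFlow
2.x; the image size, filter size, and number of filters are illustrative.

import tensorflow as tf

# Input data: a color image is a 3D array of pixels, height x width x 3 channels
# (a leading batch dimension of 1 is added here).
image = tf.random.uniform((1, 32, 32, 3))

# Filter: eight 3x3 filters, each spanning the 3 input channels.
filters = tf.random.normal((3, 3, 3, 8))

# Feature map: sliding the filters over the image produces one 32x32 map per filter.
feature_map = tf.nn.conv2d(image, filters, strides=1, padding="SAME")
print(feature_map.shape)  # (1, 32, 32, 8)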
Max Pooling
Training Deep Neural Networks is complicated by the fact that the distribution of each
layer's inputs changes during training, as the parameters of the previous layers change.
This slows down the training by requiring lower learning rates and careful parameter
initialization, and makes it notoriously hard to train models with saturating
nonlinearities. This phenomenon is referred to as internal covariate shift, and Batch
Normalization addresses the problem by normalizing layer inputs. The method draws its
strength from making normalization a part of the model architecture and performing the
normalization for each training mini-batch. Batch Normalization allows much higher
learning rates and less careful initialization. It also acts as a regularizer, in some
cases eliminating the need for Dropout.
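A minimal sketch of how Batch Normalization is typically inserted between a layer and its
nonlinearity, assuming tf.keras; the layer sizes, optimizer, and learning rate are
illustrative.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, use_bias=False, input_shape=(784,)),
    tf.keras.layers.BatchNormalization(),  # normalize the layer's inputs per mini-batch
    tf.keras.layers.Activation("relu"),    # nonlinearity applied after normalization
    tf.keras.layers.Dense(10, activation="softmax"),
])

# A comparatively high learning rate, which Batch Normalization tends to tolerate.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()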
Building a Convolutional Network for CIFAR-10
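A minimal sketch of what such a network might look like with tf.keras; the architecture,
number of epochs, and hyperparameters are illustrative assumptions rather than a
prescribed design.

import tensorflow as tf

# CIFAR-10: 60,000 32x32 color images in 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),                   # downsample the feature maps
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per CIFAR-10 class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))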