Deep Learning for Data Analytics 2023 Answer

1. Machine learning has a strong connection with mathematics. Each machine learning
algorithm is based on mathematical concepts, and mathematics also helps in choosing
the correct algorithm by considering training time, complexity, number of features,
etc. Linear algebra is an essential field of mathematics, which covers the study of
vectors, matrices, planes, mappings, and lines required for linear transformations.
2. PyTorch is a fully featured framework for building deep learning models, which is a
type of machine learning that's commonly used in applications like image recognition
and language processing. Written in Python, it's relatively easy for most machine
learning developers to learn and use.
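For illustration, a minimal PyTorch sketch (hypothetical layer sizes and random data) showing how a small model is defined and how autograd computes gradients:

```python
# A minimal PyTorch sketch: define a small model, run a forward pass,
# and let autograd compute gradients (sizes here are arbitrary examples).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),   # 10 input features -> 32 hidden units
    nn.ReLU(),
    nn.Linear(32, 2),    # 2 output classes
)
x = torch.randn(8, 10)                                   # a batch of 8 examples
logits = model(x)                                        # forward pass
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (8,)))
loss.backward()                                          # autograd computes gradients
```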
3. Feedforward neural networks are artificial neural networks in which the nodes do not
form loops. This type of neural network is also known as a multi-layer neural network,
as all information is passed only in the forward direction. During data flow, the input
nodes receive data, which travels through the hidden layers and exits through the
output nodes. The architecture of a feedforward neural network consists of three types
of layers: the input layer, hidden layers, and the output layer. Each layer is made up of
units known as neurons, and the layers are interconnected by weights.
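As a toy illustration of this forward-only flow (sizes chosen arbitrarily):

```python
# A toy forward pass: input layer -> hidden layer -> output layer,
# with layers connected by weight matrices and no loops back.
import numpy as np

def relu(z):
    return np.maximum(0, z)

x = np.random.rand(4)        # input layer: 4 features
W1 = np.random.randn(4, 8)   # weights: input -> hidden (8 neurons)
W2 = np.random.randn(8, 3)   # weights: hidden -> output (3 neurons)

hidden = relu(x @ W1)        # hidden-layer activations
output = hidden @ W2         # output layer; information only moves forward
print(output.shape)          # (3,)
```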
4. In early stopping, during the training of our model we also check its prediction
accuracy on a validation dataset. We keep track of the accuracy on the validation
dataset, and as soon as the validation accuracy starts to decrease, we stop training,
which prevents overfitting.
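A self-contained sketch of this idea, assuming PyTorch and using random stand-in data (a real setup would use an actual dataset):

```python
# Manual early stopping: stop once validation accuracy stops improving
# for `patience` consecutive epochs (toy model and random data).
import torch
import torch.nn as nn

torch.manual_seed(0)
x_tr, y_tr = torch.randn(200, 10), torch.randint(0, 2, (200,))
x_val, y_val = torch.randn(50, 10), torch.randint(0, 2, (50,))

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

best_acc, patience, bad_epochs = 0.0, 3, 0
for epoch in range(100):
    opt.zero_grad()
    loss_fn(model(x_tr), y_tr).backward()
    opt.step()
    with torch.no_grad():
        acc = (model(x_val).argmax(1) == y_val).float().mean().item()
    if acc > best_acc:
        best_acc, bad_epochs = acc, 0   # validation accuracy still improving
    else:
        bad_epochs += 1
        if bad_epochs >= patience:      # accuracy stopped improving
            break                       # stop training to prevent overfitting
```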
5. Image segmentation is a computer vision technique that partitions a digital image into
discrete groups of pixels—image segments—to inform object detection and related
tasks. By parsing an image's complex visual data into specifically shaped segments,
image segmentation enables faster, more advanced image processing.
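As a toy illustration of partitioning an image's pixels into discrete groups (simple colour clustering here, not the deep-learning segmentation pipelines described above):

```python
# Group the pixels of an image into k segments by clustering their
# colour values (a random array stands in for a real RGB image).
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(64, 64, 3)     # stand-in for a real RGB image
pixels = image.reshape(-1, 3)         # one row per pixel
labels = KMeans(n_clusters=4, n_init=10).fit_predict(pixels)
segments = labels.reshape(64, 64)     # per-pixel segment ids
print(np.unique(segments))            # [0 1 2 3]
```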
6. A Pix2Pix GAN has a generator and a discriminator, just like a normal GAN. For a
black-and-white image colorization task, the input B&W image is processed by the
generator model, which produces the color version of the input as output. In Pix2Pix,
the generator is a convolutional network with a U-Net architecture.
7. Broadly, the following normalization methods are used (a short batch normalization sketch follows the list):
 Batch Normalization.
 Weight Normalization.
 Layer Normalization.
 Group Normalization.
 Weight Standardization.
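A minimal PyTorch sketch (hypothetical layer sizes) of where batch normalization sits in a network; the other methods above are inserted in a similar way:

```python
# Batch normalization placed between a linear layer and its activation;
# it normalizes activations across the current batch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.BatchNorm1d(64),   # normalize the 64 activations over the batch
    nn.ReLU(),
    nn.Linear(64, 10),
)
out = model(torch.randn(32, 20))   # batch of 32 examples
print(out.shape)                   # torch.Size([32, 10])
```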
8. Stochastic Gradient Descent (SGD) is another version of Gradient Descent (GD). The
Gradient Descent algorithm updates the parameters using all samples, whereas SGD
randomly chooses a single sample (or a small mini-batch) to update the parameters at
each step.
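A toy comparison on synthetic linear-regression data (made-up numbers, just to show the difference in the update rule):

```python
# Full-batch Gradient Descent uses every sample per update,
# while SGD uses one randomly chosen sample per update.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w_gd, w_sgd, lr = np.zeros(3), np.zeros(3), 0.01
for step in range(2000):
    # GD: gradient of the mean squared error over all samples
    w_gd -= lr * (2 / len(X)) * X.T @ (X @ w_gd - y)
    # SGD: gradient from a single random sample
    i = rng.integers(len(X))
    w_sgd -= lr * 2 * X[i] * (X[i] @ w_sgd - y[i])

print(w_gd, w_sgd)   # both end up close to the true weights [1, -2, 0.5]
```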
9. The Siamese network design comprises two identical subnetworks, each processing
one of the inputs. Initially, the inputs undergo processing through a convolutional
neural network (CNN), which extracts significant features from the provided images.
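A minimal PyTorch sketch (hypothetical layer sizes, assuming 28x28 single-channel inputs): both inputs go through the same CNN, so the two branches share weights, and the resulting embeddings are compared by distance:

```python
# Two inputs encoded by the same CNN (shared weights), then compared
# with a pairwise distance to give a similarity score.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64),   # assumes 28x28 inputs
        )

    def forward(self, x1, x2):
        e1, e2 = self.encoder(x1), self.encoder(x2)   # same weights for both
        return F.pairwise_distance(e1, e2)            # small distance = similar

net = SiameseNet()
a, b = torch.randn(4, 1, 28, 28), torch.randn(4, 1, 28, 28)
print(net(a, b).shape)   # torch.Size([4])
```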
10. You Only Look Once (YOLO) proposes using an end-to-end neural network that
makes predictions of bounding boxes and class probabilities all at once. It differs from
the approach taken by previous object detection algorithms, which repurposed
classifiers to perform detection.

11. What is Principal Component Analysis (PCA)?

The Principal Component Analysis (PCA) technique was introduced by the
mathematician Karl Pearson in 1901. It works on the condition that while the data in a
higher-dimensional space is mapped to data in a lower-dimensional space, the variance of
the data in the lower-dimensional space should be maximum.

 Principal Component Analysis (PCA) is a statistical procedure that uses an
orthogonal transformation to convert a set of correlated variables into a set of
uncorrelated variables. PCA is the most widely used tool in exploratory data analysis
and in machine learning for predictive models.

 Principal Component Analysis (PCA) is an unsupervised learning technique used to
examine the interrelations among a set of variables. It is also known as general factor
analysis, where regression determines a line of best fit.

 The main goal of Principal Component Analysis (PCA) is to reduce the
dimensionality of a dataset while preserving the most important patterns or
relationships between the variables, without any prior knowledge of the target
variables.

Principal Component Analysis (PCA) is used to reduce the dimensionality of a data set by
finding a new set of variables, smaller than the original set, that retains most of the
sample's information and is useful for the regression and classification of data.

Principal Component Analysis

1. Principal Component Analysis (PCA) is a technique for dimensionality reduction that
identifies a set of orthogonal axes, called principal components, that capture the
maximum variance in the data. The principal components are linear combinations of
the original variables in the dataset and are ordered in decreasing order of importance.
The total variance captured by all the principal components is equal to the total
variance in the original dataset.

2. The first principal component captures the most variation in the data, while the second
principal component captures the maximum variance that is orthogonal to the first
principal component, and so on.

3. Principal Component Analysis can be used for a variety of purposes, including data
visualization, feature selection, and data compression. In data visualization, PCA can
be used to plot high-dimensional data in two or three dimensions, making it easier to
interpret. In feature selection, PCA can be used to identify the most important
variables in a dataset. In data compression, PCA can be used to reduce the size of a
dataset without losing important information.

4. In Principal Component Analysis, it is assumed that the information is carried in the
variance of the features; that is, the higher the variation in a feature, the more
information that feature carries.

Application of PCA

PCA is used to visualize multidimensional data.

It is used to reduce the number of dimensions in healthcare data.

PCA can help compress an image.

It can be used in finance to analyze stock data and forecast returns.

PCA helps to find patterns in high-dimensional datasets.
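A minimal scikit-learn sketch of PCA on a small synthetic dataset (made-up data, just to show the workflow):

```python
# Project a 5-feature dataset onto its first two principal components
# and inspect how much variance each component captures.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # 100 samples, 5 features
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=100)    # make two features correlated

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)                  # new uncorrelated variables
print(X_reduced.shape)                            # (100, 2)
print(pca.explained_variance_ratio_)              # variance captured per component
```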

11. Introduction to Dimensionality Reduction Technique

What is Dimensionality Reduction?
The number of input features, variables, or columns present in a given
dataset is known as its dimensionality, and the process of reducing these
features is called dimensionality reduction.

In various cases a dataset contains a huge number of input features,
which makes the predictive modeling task more complicated. Because it is
very difficult to visualize or make predictions for a training dataset with
a high number of features, dimensionality reduction techniques are
required in such cases.

A dimensionality reduction technique can be defined as "a way of
converting a higher-dimensional dataset into a lower-dimensional
dataset while ensuring that it provides similar information." These
techniques are widely used in machine learning for obtaining a
better-fitting predictive model while solving classification and
regression problems.

It is commonly used in the fields that deal with high-dimensional data,
such as speech recognition, signal processing, bioinformatics, etc.
It can also be used for data visualization, noise reduction, cluster
analysis, etc.
The Curse of Dimensionality
Handling high-dimensional data is very difficult in practice, a problem
commonly known as the curse of dimensionality. As the dimensionality of
the input dataset increases, any machine learning algorithm and model
becomes more complex. As the number of features increases, the number
of samples required also increases proportionally, and the chance of
overfitting increases. A machine learning model trained on such
high-dimensional data tends to become overfitted and to perform poorly.

Hence, it is often required to reduce the number of features, which can be
done with dimensionality reduction.

Benefits of applying Dimensionality Reduction

Some benefits of applying dimensionality reduction techniques to a given
dataset are given below:

o By reducing the dimensions of the features, the space required to
store the dataset is also reduced.
o Less computation and training time is required for the reduced set
of features.
o Reduced feature dimensions help in visualizing the data quickly.
o It removes redundant features (if present) by taking care of
multicollinearity.

Disadvantages of Dimensionality Reduction

There are also some disadvantages of applying dimensionality
reduction, which are given below:

o Some data may be lost due to dimensionality reduction.
o In the PCA dimensionality reduction technique, the number of
principal components to consider is sometimes unknown.

11 b. Historical development in deep learning


11 b. Numerous libraries are widely used in machine learning, and each of them offers a
unique set of features and capabilities. Some of the most popular machine learning
libraries include Keras, Scikit-Learn, PyTorch, TensorFlow, Matplotlib, NumPy, etc.

Keras – Deep Learning API Written in Python

Keras is a Python-based deep learning software tool that is popular for its simplicity and
flexibility. This open-source library works as an interface for the machine learning
platforms TensorFlow and Theano.

TensorFlow is widely considered one of the best Python libraries for deep learning
applications. Developed by the Google Brain Team, it provides a wide range of flexible
tools, libraries, and community resources.
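A minimal Keras sketch (hypothetical layer sizes) showing the typical define-and-compile workflow on top of the TensorFlow backend:

```python
# Define a small feedforward classifier and compile it; Keras keeps the
# workflow short compared with writing the training loop by hand.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),                     # 10 input features
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(3, activation="softmax"),  # 3 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```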


14. The generator part of a GAN learns to create fake data by incorporating feedback
from the discriminator: it learns to make the discriminator classify its output as real.
Generator training requires tighter integration between the generator and the
discriminator than discriminator training requires. The portion of the GAN that trains
the generator includes (a short training-step sketch follows the list):

 random input
 generator network, which transforms the random input into a data instance
 discriminator network, which classifies the generated data
 discriminator output
 generator loss, which penalizes the generator for failing to fool the discriminator
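A minimal PyTorch sketch of one generator update (toy 1-D data and made-up layer sizes), following the components listed above:

```python
# One generator training step: random input -> generator -> discriminator,
# then a loss that penalizes the generator when its output is not
# classified as real; only the generator's weights are updated here.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))   # noise -> fake sample
D = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))   # sample -> real/fake logit
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

z = torch.randn(32, 8)                    # random input
fake = G(z)                               # generator network output
score = D(fake)                           # discriminator output on the fake data
g_loss = bce(score, torch.ones(32, 1))    # generator loss: "should look real"

opt_G.zero_grad()
g_loss.backward()                         # gradients flow back through D into G
opt_G.step()                              # update only the generator
```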
