Deep Learning Module-03

The document discusses optimization strategies for training deep learning models, focusing on minimizing loss functions and improving generalization. It highlights challenges such as high dimensionality, non-convex loss surfaces, and issues like overfitting and vanishing gradients, while proposing techniques like gradient descent variants, regularization, and adaptive learning rates. Additionally, it covers practical implementations and case studies involving CNNs and RNNs, emphasizing the importance of parameter initialization and choosing appropriate optimization algorithms.


21CS743 | DEEP LEARNING

Module-03

Optimization for Training Deep Models

Introduction to Optimization in Deep Learning

Definition

• Optimization: Adjusting model parameters (weights, biases) to minimize the loss
function.

• Loss Function: Measures the error between predicted outputs and actual targets.

• Goal: Find parameters that reduce the error and improve predictions.


Key Objective

• Generalization: Ensure the model performs well on new, unseen data.



o Underfitting: Model is too simple, doesn't capture patterns.

o Overfitting: Model is too complex, learns noise, performs poorly on new data.


Challenges

1. High Dimensionality of Parameter Space

o Deep learning models have millions of parameters.

o Exploring this vast space is computationally challenging.

2. Non-convex Loss Surfaces

o Loss surfaces are complex with many local minima and saddle points.

▪ Local Minima: Points where the loss is low, but not the lowest.

▪ Saddle Points: Points where the gradient is zero but that are neither minima nor maxima; the flat regions around them slow down optimization.

o Hard to find the absolute best solution (global minimum).


Strategies to Overcome Challenges

• Gradient Descent Variants:


o Stochastic Gradient Descent (SGD): Efficiently updates parameters using small
batches of data.

o Adam, RMSprop: Advanced methods that adapt learning rates during training.

• Regularization Techniques:

o L1/L2 Regularization: Adds penalties to prevent overfitting.

o Dropout: Randomly disables neurons during training to reduce reliance on specific neurons.

• Learning Rate Scheduling:

o Dynamically adjusts the learning rate to ensure better convergence.


• Momentum and Adaptive Methods:

o Momentum: Helps in moving faster towards the minima by considering past gradients.

o Adaptive Methods: Adjust learning rates based on gradient history for stable
training.

Empirical Risk Minimization (ERM)

Concept

• Empirical Risk Minimization (ERM) is a foundational concept in machine learning.

• It involves minimizing the average loss on the training data to approximate the true risk
or error on the entire data distribution.


• The objective of ERM is to train a model that performs well on unseen data by minimizing the empirical risk derived from the training set.

Mathematical Formulation

The empirical risk is calculated as the average loss over the training set:

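In symbols, one standard formulation (N training pairs (x_i, y_i), model prediction f(x_i; θ), per-example loss L):

\hat{R}(\theta) = \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i; \theta),\, y_i\big), \qquad \theta^{*} = \arg\min_{\theta} \hat{R}(\theta)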
Overfitting vs. Generalization
1. Overfitting:

o Occurs when the model performs extremely well on the training data but poorly on unseen test data.

o The model learns the noise and specific patterns in the training set, which do not
generalize.

o Symptoms: High training accuracy, low test accuracy.



2. Generalization:

o The ability of a model to perform well on new, unseen data.


o A generalized model strikes a balance between fitting the training data and
maintaining good performance on the test data.

o Symptoms: Balanced performance on both training and test datasets.

Regularization Techniques

To combat overfitting and enhance generalization, several regularization techniques are employed:

1. L1/L2 Regularization:

o Adds a penalty on the size of the weights to the loss function: L1 penalizes the sum of absolute weight values, while L2 penalizes the sum of squared weight values.

o This discourages overly large weights and helps prevent overfitting.
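For reference, one standard form of the penalized objective, where λ controls the regularization strength and w_j are the individual weights:

J_{L2}(\theta) = \hat{R}(\theta) + \lambda \sum_{j} w_j^{2}, \qquad J_{L1}(\theta) = \hat{R}(\theta) + \lambda \sum_{j} |w_j|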
2. Dropout:

o A regularization method that randomly "drops out" a fraction of neurons during training.

o This prevents units from co-adapting too much, forcing the network to learn more
robust features.

o During each training iteration, some neurons are ignored (set to zero), which helps in reducing overfitting and improving generalization.
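A minimal NumPy sketch of inverted dropout (illustrative only; dropout_forward, keep_prob, and h are hypothetical names, and deep learning frameworks provide dropout as a built-in layer):

import numpy as np

def dropout_forward(h, keep_prob=0.8, training=True):
    # Inverted dropout: zero each unit with probability (1 - keep_prob),
    # then rescale by 1/keep_prob so expected activations are unchanged.
    if not training:
        return h                                   # use all neurons at test time
    mask = (np.random.rand(*h.shape) < keep_prob) / keep_prob
    return h * mask

h = np.random.randn(4, 8)                          # 4 examples, 8 hidden units
h_train = dropout_forward(h, keep_prob=0.8)        # roughly 20% of units dropped this step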


Challenges in Neural Network Optimization

1. Non-Convexity

• Nature: Loss surfaces in neural networks are non-convex.

• Challenges:

o Multiple Local Minima: Loss is low but not the lowest globally.

o Saddle Points: Gradients are zero but not at minima or maxima, causing slow
convergence.

• Visualization: Loss landscape diagrams show complex terrains with hills, valleys, and flat
regions.

2. Vanishing and Exploding Gradients

• Vanishing Gradients:

o Problem: Gradients become very small as they backpropagate.
o Impact: Slow learning, especially in earlier layers.

• Exploding Gradients:

o Problem: Gradients grow excessively large.

o Impact: Unstable updates, leading to divergence or large parameter values.

• Solutions:

o ReLU Activation: Prevents vanishing gradients by not saturating for positive inputs.

o Gradient Clipping: Caps gradients to prevent them from becoming too large.
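A minimal PyTorch-style sketch of a training step with gradient clipping (assuming a model, loss_fn, optimizer, and a data batch already exist; clip_grad_norm_ is the standard PyTorch utility for this):

import torch

def training_step(model, loss_fn, optimizer, inputs, targets, max_norm=1.0):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    # Rescale gradients so their global norm is at most max_norm,
    # preventing exploding gradients from destabilizing the update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()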


3. Ill-Conditioned Problems

• Definition: Occurs when the loss surface is poorly scaled, so gradients (and hence parameter updates) are much larger in some directions than in others.

• Impact: Inefficient training, with some parameters updating too quickly or too slowly.

• Solution:

o Normalization Techniques:

▪ Batch Normalization: Normalizes layer inputs for consistent scaling.

▪ Other Normalizations: Layer Normalization, Group Normalization

Basic Algorithms: Stochastic Gradient Descent (SGD)

1. Gradient Descent (GD)


• Concept: Gradient Descent is an optimization algorithm used to minimize a loss function by updating the model's parameters iteratively.

• Process:

• Compute the gradient of the loss function.

• Update the parameters in the opposite direction of the gradient.

• Repeat until convergence.
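The corresponding update rule, with η the learning rate and the gradient of the loss taken with respect to the parameters:

\theta \leftarrow \theta - \eta\, \nabla_{\theta} L(\theta)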


2. Stochastic Gradient Descent (SGD)

• Concept:
Stochastic Gradient Descent improves upon standard GD by updating the model
parameters using a randomly selected mini-batch of the training data rather than the
entire dataset.


• Advantages:

o Faster Updates: Each update is quicker since it uses a small batch of data.


o Efficiency: Reduces computational cost, especially for large datasets.

• Challenges:

o Noisier Convergence: Due to randomness, the convergence path is less smooth and can fluctuate.

o Requires More Iterations: Often requires more epochs to converge.
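A minimal NumPy-style sketch of the mini-batch SGD loop (illustrative only; params, grad_loss, X, and y are hypothetical placeholders supplied by the surrounding training code):

import numpy as np

def sgd(params, grad_loss, X, y, lr=0.01, batch_size=32, epochs=10):
    # params: parameter vector; grad_loss(params, X_batch, y_batch) -> gradient.
    n = X.shape[0]
    for epoch in range(epochs):
        order = np.random.permutation(n)              # reshuffle data each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            g = grad_loss(params, X[idx], y[idx])     # gradient on one mini-batch
            params = params - lr * g                  # step opposite the gradient
    return params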



3. Learning Rate

• Definition: The learning rate controls the size of the step taken towards minimizing the
loss during each update.

• Impact:

o Too High: Causes overshooting the minimum.

o Too Low: Leads to slow convergence.

• Strategies:

o Learning Rate Decay: Gradually reduce the learning rate as training progresses.


o Warm Restarts: Periodically reset the learning rate to a higher value to escape
local minima.
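A small sketch of these two schedules written as plain functions of the epoch number (the hyperparameter values are illustrative; frameworks such as PyTorch expose equivalent built-in schedulers):

import math

def step_decay(epoch, base_lr=0.1, drop=0.5, every=10):
    # Halve the learning rate every 10 epochs.
    return base_lr * (drop ** (epoch // every))

def warm_restarts(epoch, base_lr=0.1, min_lr=1e-4, period=10):
    # Cosine-anneal from base_lr down to min_lr, then reset ("warm restart").
    t = epoch % period
    return min_lr + 0.5 * (base_lr - min_lr) * (1 + math.cos(math.pi * t / period))

for epoch in range(0, 30, 5):
    print(epoch, round(step_decay(epoch), 4), round(warm_restarts(epoch), 4))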

4. Momentum

• Concept: Momentum helps accelerate convergence by combining the current gradient with a fraction of the accumulated past gradients (the velocity), smoothing updates and reducing oscillations.

• Update Rule:


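One standard form of the update, with β the momentum coefficient (typically around 0.9), v the velocity, and η the learning rate:

v \leftarrow \beta v - \eta\, \nabla_{\theta} L(\theta), \qquad \theta \leftarrow \theta + v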
• Benefits:

o Smoother Updates: Reduces fluctuations in updates, leading to more stable convergence.

o Faster Convergence: Speeds up progress, especially in regions with shallow gradients.


1. Importance of Parameter Initialization

• Prevents Vanishing/Exploding Gradients:

o Proper initialization ensures that gradients remain within a manageable range during backpropagation.

o Poor initialization can lead to gradients that either vanish (become too small) or explode (become too large), hindering effective learning.

• Accelerates Convergence:

o Well-initialized parameters help the network converge faster, reducing training time.

o Ensures that the model starts training with meaningful gradients, leading to efficient optimization.

2. Initialization Strategies

a. Xavier Initialization (Glorot Initialization)


• Concept:

o Designed for sigmoid and tanh activations.



o Ensures that the variance of the outputs of a layer remains roughly constant across
layers.
• Benefits:

o Balances the scale of gradients flowing in both forward and backward directions.

o Helps prevent saturation in sigmoid/tanh activations, maintaining effective learning.
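For reference, the usual Xavier/Glorot sampling rules for a layer with n_in inputs and n_out outputs (uniform and normal variants):

W \sim \mathcal{U}\!\left(-\sqrt{\frac{6}{n_{in} + n_{out}}},\; \sqrt{\frac{6}{n_{in} + n_{out}}}\right) \quad \text{or} \quad W \sim \mathcal{N}\!\left(0,\; \frac{2}{n_{in} + n_{out}}\right)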

b. He Initialization (Kaiming Initialization)



• Concept:

o Specifically designed for ReLU and its variants.

o Accounts for the fact that ReLU activation outputs are not symmetrically
distributed around zero.



• Benefits:

o Prevents the dying ReLU problem (where neurons output zero for all inputs).

o Maintains gradient flow and supports faster convergence.
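A minimal NumPy sketch of He (Kaiming) initialization for a fully connected layer (illustrative; he_init is a hypothetical helper, and deep learning frameworks provide this as a built-in initializer):

import numpy as np

def he_init(fan_in, fan_out):
    # Sample weights with variance 2 / fan_in, suited to ReLU activations.
    std = np.sqrt(2.0 / fan_in)
    W = np.random.randn(fan_in, fan_out) * std
    b = np.zeros(fan_out)
    return W, b

W, b = he_init(fan_in=512, fan_out=256)
print(W.std())   # roughly sqrt(2/512), i.e. about 0.0625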

3. Practical Impact
• Faster Convergence:

o Proper initialization provides a good starting point for optimization, reducing the number of iterations required to converge.

• Better Final Accuracy:

o Empirical studies show that networks with proper initialization not only converge
faster but also achieve better final accuracy.

o Poor initialization can lead to suboptimal solutions or longer training times.


Algorithms with Adaptive Learning Rates

1. Motivation

• Need for Adaptive Learning Rates:

o Fixed learning rates can be ineffective as they do not account for the varying
characteristics of different layers or the nature of the training data.

o Certain parameters may require larger updates, while others may need smaller
adjustments. Adaptive learning rates enable the model to adjust learning based on
the training dynamics.

2. AdaGrad

• Concept:

o AdaGrad (Adaptive Gradient Algorithm) adapts the learning rate for each parameter based on its past gradients, effectively taking larger steps for infrequently updated parameters and smaller steps for frequently updated ones. This makes it particularly effective for sparse data scenarios.
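One standard form of the per-parameter update, where g is the current gradient, r accumulates squared gradients elementwise, and ε is a small constant for numerical stability:

r \leftarrow r + g \odot g, \qquad \theta \leftarrow \theta - \frac{\eta}{\sqrt{r} + \epsilon} \odot g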

• Advantages:

o Good for Sparse Data: AdaGrad performs well in scenarios where features have
varying frequencies, such as in natural language processing tasks.

o Diminishing Learning Rate: As training progresses, the learning rates decrease, preventing overshooting the minimum.

• Challenges:

o Rapid Learning Rate Decay: The learning rate can decrease too quickly, leading
to premature convergence and potentially suboptimal solutions.

3. RMSProp

• Concept:

o RMSProp (Root Mean Square Propagation) improves upon AdaGrad by using a moving average of squared gradients, addressing the rapid decay issue of AdaGrad's learning rate.
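One standard form of the update, with decay rate ρ (typically around 0.9) replacing AdaGrad's unbounded accumulation:

r \leftarrow \rho\, r + (1 - \rho)\, g \odot g, \qquad \theta \leftarrow \theta - \frac{\eta}{\sqrt{r} + \epsilon} \odot g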


• Advantages:

o More Stable Convergence: By maintaining a moving average, RMSProp helps stabilize updates, ensuring the learning rate does not decrease too quickly.

o Effective for Non-Stationary Objectives: It performs well on problems where the data distribution may change over time.


Choosing the Right Optimization Algorithm

1. Factors to Consider

• Data Size:

o Large datasets may require optimization algorithms that can handle more frequent
updates (e.g., SGD or mini-batch variants).

o Smaller datasets may benefit from adaptive methods that adjust learning rates (e.g., AdaGrad or Adam).

• Model Complexity:

o Complex models (deep networks) can benefit from algorithms that adjust learning rates dynamically (e.g., RMSProp or Adam) to navigate complex loss surfaces effectively.

o Simpler models may work well with standard SGD.



• Computational Resources:

o Resource availability may dictate the choice of algorithm. Some algorithms (e.g.,
Adam) are more computationally intensive due to maintaining additional state
information (like momentum and moving averages).

2. Comparison of Optimization Algorithms

• Stochastic Gradient Descent (SGD):

o Pros: Simple and effective; widely used in practice.

o Cons: Requires careful tuning of learning rates and may converge slowly.


• AdaGrad:

o Pros: Adapts learning rates based on parameter frequency; effective for sparse data.

o Cons: Tends to slow down learning too quickly due to rapid decay of learning rates.

• RMSProp:

o Pros: Balances learning rates dynamically; provides stable convergence, especially in non-stationary problems.

o Cons: Requires tuning of decay rate parameter.

• Adam (Adaptive Moment Estimation):

o Pros: Combines momentum with adaptive learning rates; generally performs well across a wide range of tasks and is robust to hyperparameter settings.

o Cons: More complex to implement and requires careful tuning for optimal performance.
3. Practical Tips

• Start with Adam:

o For most tasks, beginning with the Adam optimizer is recommended due to its versatility and strong performance in various scenarios (see the sketch after these tips).

• Fine-Tune Learning Rates:

o Experiment with different learning rates to find the best fit for your specific model and data. A common approach is to perform a learning rate search or use techniques like cyclical learning rates.

• Use Learning Rate Scheduling:

o Implement learning rate schedules (e.g., decay, step-wise, or cosine annealing) to adjust the learning rate dynamically during training for improved convergence and performance.
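A minimal PyTorch sketch putting these tips together: start with Adam, then attach a step-decay schedule (the model and hyperparameter values are illustrative placeholders):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

# Start with Adam and a conventional default learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Decay the learning rate by 10x every 30 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

for epoch in range(90):
    # ... run one epoch of training with `optimizer` here ...
    scheduler.step()   # update the learning rate once per epoch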


Case Studies and Practical Implementations

1. Image Classification with CNN

• Objective:

o Train a Convolutional Neural Network (CNN) on the CIFAR-10 dataset using Stochastic Gradient Descent (SGD) and RMSProp. Compare the performance in terms of learning curves, loss, and accuracy.

• Dataset:

o CIFAR-10 consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. The classes are airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

• Model Architecture:

o Use a simple CNN architecture with convolutional layers, ReLU activation, pooling layers, and a fully connected output layer.
• Training Process:

o Implement two training runs: one using SGD and the other using RMSProp.

o Hyperparameters:

▪ Learning Rate: Set initial values (e.g., 0.01 for SGD, 0.001 for RMSProp).

▪ Batch Size: Use mini-batches (e.g., 32).



▪ Number of Epochs: Train for a predetermined number of epochs (e.g., 50).

• Comparison Metrics:

o Learning Curves: Plot training and validation accuracy and loss over epochs for
both optimizers.


o Loss and Accuracy: Analyze final training and validation loss and accuracy after
training completion.

• Expected Results:

o RMSProp is anticipated to achieve faster convergence and higher accuracy compared to SGD, particularly in the later epochs, due to its adaptive learning rates.
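A condensed PyTorch sketch of this comparison, under the hyperparameters listed above (a sketch rather than a full training script; the exact CNN layout is an illustrative choice):

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

train_set = torchvision.datasets.CIFAR10(root="data", train=True, download=True,
                                          transform=T.ToTensor())
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

def make_cnn():
    # Simple CNN: conv -> ReLU -> pool, twice, then a fully connected classifier.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(64 * 8 * 8, 10),
    )

def train(optimizer_name, epochs=50):
    model = make_cnn()
    if optimizer_name == "sgd":
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
    else:
        opt = torch.optim.RMSprop(model.parameters(), lr=0.001)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        print(optimizer_name, epoch, loss.item())   # track the loss curve per epoch

train("sgd")
train("rmsprop")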

2. NLP Task with RNN/Transformer

• Objective:

o Train a Recurrent Neural Network (RNN) or Transformer model on text data to highlight vanishing gradient issues and compare different optimizers (SGD, AdaGrad, RMSProp).

• Dataset:

o Use a text dataset such as IMDB reviews for sentiment analysis, or any sequence data suitable for RNNs or Transformers.
• Model Architecture:

o Implement either an RNN or Transformer architecture, depending on the chosen approach.

o Include layers such as LSTM or GRU for RNNs, or attention mechanisms for
Transformers.

• Training Process:

o Conduct training with different optimizers: SGD, AdaGrad, and RMSProp.

o Hyperparameters:

▪ Learning Rates: Start with different learning rates for each optimizer.

▪ Batch Size: Use appropriate batch sizes for the model.


▪ Number of Epochs: Set a common epoch count for all models.

• Vanishing Gradient Issues:

o Discuss how RNNs are susceptible to vanishing gradients, leading to difficulties in learning long-range dependencies in sequences. This problem can be less pronounced in Transformers due to their attention mechanism.

• Comparison Metrics:

o Loss Curves: Visualize the loss curves for each optimizer to show convergence
behavior.

o Training Performance: Analyze the final training and validation accuracy and
loss.

• Expected Results:

o RMSProp and AdaGrad may show better performance than SGD, particularly in tasks where the data is sparse or where gradients can vanish, causing SGD to converge more slowly.
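A compact PyTorch sketch of the RNN variant of this study (illustrative only; the vocabulary size, sequence length, and random batch stand in for a real IMDB data pipeline, and gradient clipping is included because of the vanishing/exploding-gradient discussion above):

import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)          # positive / negative

    def forward(self, tokens):
        x = self.embed(tokens)
        _, (h, _) = self.lstm(x)                      # final hidden state
        return self.head(h[-1])

def make_optimizer(name, params):
    if name == "sgd":
        return torch.optim.SGD(params, lr=0.1)
    if name == "adagrad":
        return torch.optim.Adagrad(params, lr=0.01)
    return torch.optim.RMSprop(params, lr=0.001)

loss_fn = nn.CrossEntropyLoss()
for name in ["sgd", "adagrad", "rmsprop"]:
    model = SentimentLSTM()
    opt = make_optimizer(name, model.parameters())
    tokens = torch.randint(0, 20000, (32, 200))       # fake batch: 32 reviews, 200 tokens
    labels = torch.randint(0, 2, (32,))
    opt.zero_grad()
    loss = loss_fn(model(tokens), labels)
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # guard against exploding gradients
    opt.step()
    print(name, loss.item())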

3. Visualization

• Loss Curves:

o Plot the training and validation loss curves for each optimizer used in both case
studies. This visualization will demonstrate:

▪ Convergence Behavior: How quickly each optimizer converges to a lower loss value.

▪ Stability: The stability of loss reduction over time and the presence of
fluctuations.

• Learning Curves:

o Include plots of training and validation accuracy over epochs for visual comparison of model performance across different optimizers.
