
What is deep learning? What is the difference between DL and ML?

Deep learning is a subset of machine learning, which itself is a branch of artificial intelligence (AI). It is built on algorithms that mimic the structure and functioning of the human brain, known as artificial neural networks. These networks consist of many layers of interconnected neurons, and it is this depth of layers that gives rise to the term "deep" learning.

Difference Between Machine Learning (ML) and Deep Learning (DL)

Feature             | Machine Learning (ML)               | Deep Learning (DL)
--------------------|-------------------------------------|--------------------------------------------
Definition          | Learns from data using algorithms.  | Uses neural networks with multiple layers.
Feature Extraction  | Done manually.                      | Done automatically.
Data Requirement    | Works well with small datasets.     | Requires large datasets.
Computational Power | Less computationally intensive.     | Requires high computing power (GPUs).
Examples            | Decision Trees, SVM, Random Forest. | CNN, RNN, Transformers.

Neural network definition and types (BNN and ANN)?


A Neural Network (NN) is a computational model inspired by the human brain, consisting of interconnected
nodes (neurons) that process data and learn patterns. It is widely used in AI for tasks like classification,
prediction, and recognition.

Types of Neural Networks


1. Biological Neural Network (BNN)
o Comprises actual neurons in the human brain.
o Processes information through electrical and chemical signals.
o Adapts and learns through synaptic plasticity.
2. Artificial Neural Network (ANN)
o A mathematical model that mimics the BNN.
o Composed of artificial neurons (nodes) organized in layers.
o Used in AI, deep learning, and machine learning tasks.

Components of a Neural Network (NN)


1. Neurons (Nodes) – Basic processing units that receive, process, and transmit data.
2. Input Layer – Receives raw data and passes it to the network.
3. Hidden Layers – Intermediate layers where computations occur through weighted connections.
4. Output Layer – Produces the final result or classification.
5. Weights & Biases – Parameters that determine the influence of inputs on the neuron’s output.
6. Activation Function – Defines the neuron’s output by applying a non-linear transformation (e.g.,
ReLU, Sigmoid).
7. Loss Function – Measures the error between predicted and actual values.
8. Optimizer – Adjusts weights and biases to minimize the loss function (e.g., SGD, Adam).
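
To make these components concrete, here is a minimal NumPy sketch of a single forward pass. All layer sizes, weights, and target values are illustrative assumptions, not taken from the notes:

```python
import numpy as np

# Illustrative sizes: 3 inputs, 4 hidden neurons, 2 outputs.
rng = np.random.default_rng(0)

x = rng.normal(size=3)                   # input layer: raw data
W1 = rng.normal(size=(4, 3))             # weights: input -> hidden
b1 = np.zeros(4)                         # biases for the hidden layer
W2 = rng.normal(size=(2, 4))             # weights: hidden -> output
b2 = np.zeros(2)

h = np.maximum(0, W1 @ x + b1)           # hidden layer with ReLU activation
y_pred = W2 @ h + b2                     # output layer produces the result

y_true = np.array([1.0, 0.0])            # assumed target, for illustration
loss = np.mean((y_pred - y_true) ** 2)   # loss function: mean squared error
print(loss)
```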

Types of Learning in Neural Networks


1. Supervised Learning
o The model learns from labeled data (input-output pairs).
o Example: Image classification (Cats vs. Dogs).
o Algorithms: CNN, RNN, SVM.
2. Unsupervised Learning
o The model learns from unlabeled data by finding patterns and structures.
o Example: Clustering customer segments.
o Algorithms: K-Means, Autoencoders.
3. Reinforcement Learning
o The model learns through rewards and penalties by interacting with an environment.
o Example: AlphaGo playing Go.
o Algorithms: Q-Learning, Deep Q-Networks (DQN).
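
To contrast the first two types, here is a small scikit-learn sketch; the synthetic dataset and the choice of LogisticRegression/KMeans here are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))             # 100 samples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels, known only in the supervised case

# Supervised: learn from labeled (input, output) pairs.
clf = LogisticRegression().fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised: find structure (clusters) without any labels.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("cluster assignments:", km.labels_[:10])
```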

What is an activation function, its types, and its role?


An activation function decides whether a neuron should be activated or not by applying a mathematical
transformation to the input. It introduces non-linearity into the neural network, allowing it to learn complex
patterns.

Types of Activation Functions


1. Linear Activation Function
o f(x) = x
o Used in regression tasks but not preferred for deep networks.
2. Non-Linear Activation Functions
o Sigmoid: f(x) = 1 / (1 + e^(-x))
▪ Output range: (0, 1)
▪ Used in binary classification.
▪ Problem: vanishing gradient.
o Tanh: f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
▪ Output range: (-1, 1)
▪ Better than Sigmoid (zero-centered) but still suffers from vanishing gradient.
o ReLU (Rectified Linear Unit): f(x) = max(0, x)
▪ Output range: [0, ∞)
▪ Efficient and widely used in deep learning.
▪ Problem: dying ReLU (some neurons always output 0).
o Leaky ReLU: f(x) = x if x > 0, else 0.01x
▪ Solves the dying ReLU problem by allowing small gradients for negative values.
o Softmax: converts logits into probabilities that sum to 1.
▪ Used in multi-class classification.
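
All of these functions are short enough to implement directly; a minimal NumPy sketch (with an illustrative input vector) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))       # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                     # squashes values into (-1, 1)

def relu(x):
    return np.maximum(0, x)               # zero for negative inputs

def leaky_relu(x, alpha=0.01):
    return np.where(x > 0, x, alpha * x)  # small slope for negative inputs

def softmax(x):
    e = np.exp(x - np.max(x))             # subtract max for numerical stability
    return e / e.sum()                    # probabilities that sum to 1

x = np.array([-2.0, 0.0, 3.0])            # illustrative input
for f in (sigmoid, tanh, relu, leaky_relu, softmax):
    print(f.__name__, f(x))
```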

Role of Activation Functions


• Introduce non-linearity for complex pattern learning.
• Control neuron activation and model convergence.
• Help in handling vanishing/exploding gradients.

Applications of Deep Learning


1. Computer Vision
o Image classification (e.g., Face Recognition, Google Lens)
o Object detection (e.g., Self-driving cars)
o Medical imaging (e.g., Tumor detection in MRIs)
2. Natural Language Processing (NLP)
o Speech recognition (e.g., Google Assistant, Siri)
o Machine translation (e.g., Google Translate)
o Sentiment analysis (e.g., Social media monitoring)
3. Healthcare
o Disease diagnosis (e.g., AI-assisted radiology)
o Drug discovery (e.g., Predicting molecule interactions)
4. Autonomous Systems
o Self-driving cars (e.g., Tesla Autopilot)
o Robotics (e.g., AI-powered industrial robots)
5. Finance & Banking
o Fraud detection (e.g., Credit card fraud monitoring)
o Stock market prediction

Backpropagation in Neural Networks


Definition:
Backpropagation (Backward Propagation of Errors) is a supervised learning algorithm used to train artificial
neural networks by minimizing the error between predicted and actual outputs. It is an optimization technique
that adjusts the weights of the network using gradient descent and the chain rule of differentiation.

Steps of Backpropagation Algorithm:


1. Forward Propagation:
o Input data is passed through the network.
o Each neuron applies the activation function to produce an output.
o The final output is computed and compared with the actual value to calculate the error.
2. Error Calculation:
o Compute the loss using a cost function (e.g., Mean Squared Error, Cross-Entropy).
3. Backward Propagation (Gradient Calculation):
o Compute the gradient of the loss function w.r.t each weight using the chain rule.
o Errors propagate backward from the output layer to hidden layers.
4. Weight Update (Gradient Descent):
o Adjust each weight using the update rule

w_new = w_old - η · ∂L/∂w

where η (eta) is the learning rate and L is the loss function.


5. Repeat:
o The process continues for multiple epochs until the error is minimized.

Visualization of Backpropagation:
1. Forward Propagation: data flows forward through the network and the loss is computed.
Input → [Hidden Layer] → Output → Loss
2. Backward Propagation: the error propagates backward and the weights are adjusted.
Input ← [Hidden Layer] ← Output ← Loss

Example Evaluation (Simple 3-Layer Network):


Given:
• Inputs: X1, X2
• Weights: W1, W2, W3, W4
• Hidden layer activation: f
• Output: Y
1. Forward Pass:
o Compute hidden layer outputs
o Compute final output
o Calculate error
2. Backward Pass:
o Compute gradients
o Adjust weights
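
A minimal NumPy sketch of this example follows. The notes name only W1-W4, so the hidden-to-output weights (V), the use of sigmoid as the activation f, and all numeric values are assumptions added for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])          # inputs X1, X2 (illustrative values)
W = np.array([[0.1, 0.2],          # W1, W2 feed hidden neuron 1
              [0.3, 0.4]])         # W3, W4 feed hidden neuron 2
V = np.array([0.5, -0.5])          # assumed hidden -> output weights
y_true = 1.0                       # assumed target
eta = 0.1                          # learning rate

# 1. Forward pass: hidden outputs, final output, error.
h = sigmoid(W @ x)                 # hidden layer, activation f = sigmoid
y = V @ h                          # final (linear) output Y
loss = 0.5 * (y - y_true) ** 2     # squared error

# 2. Backward pass: gradients via the chain rule.
dL_dy = y - y_true                 # dL/dY
dL_dV = dL_dy * h                  # gradients for the output weights
dL_dh = dL_dy * V                  # error pushed back to the hidden layer
dL_dz = dL_dh * h * (1 - h)        # through sigmoid: f'(z) = f(z)(1 - f(z))
dL_dW = np.outer(dL_dz, x)         # gradients for W1..W4

# 3. Weight update: w = w - eta * dL/dw
W -= eta * dL_dW
V -= eta * dL_dV
print("loss before the update:", loss)
```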

What is an Optimizer in Backpropagation?


An optimizer in backpropagation is an algorithm that updates the weights and biases of a neural network to
minimize the loss function. It adjusts these parameters using gradients computed through backpropagation to
improve model accuracy.

Types of Optimizers
Optimizers are categorized into different types based on how they update weights:

1. Gradient Descent-Based Optimizers


• Batch Gradient Descent (BGD): Updates weights after computing the gradient over the entire dataset.
• Stochastic Gradient Descent (SGD): Updates weights after computing the gradient for each training
example.
• Mini-batch Gradient Descent: Uses small batches of data to compute gradients and update weights.
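
The three variants differ only in how many examples each gradient estimate averages over. A minimal sketch, assuming a one-parameter linear model with squared-error loss:

```python
import numpy as np

# Illustrative data: linear model y = w * x, true slope 3.
rng = np.random.default_rng(0)
X = rng.normal(size=200)
Y = 3.0 * X + rng.normal(scale=0.1, size=200)

def grad(w, xb, yb):
    return np.mean(2 * (w * xb - yb) * xb)   # dL/dw over the given batch

w, eta = 0.0, 0.1
for epoch in range(20):
    # Batch GD: one update per pass, gradient over the entire dataset.
    w -= eta * grad(w, X, Y)
    # SGD would instead update once per example:
    #   for i in range(len(X)): w -= eta * grad(w, X[i:i+1], Y[i:i+1])
    # Mini-batch GD would use small slices, e.g. X[i:i+32], Y[i:i+32].
print("learned slope:", w)                   # approaches 3
```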

2. Adaptive Optimizers
• Momentum: Accelerates gradient descent by adding a fraction of the previous update.
• Nesterov Accelerated Gradient (NAG): Looks ahead to adjust the learning rate dynamically.
• Adagrad (Adaptive Gradient Algorithm): Adjusts learning rates individually for each parameter.
• RMSprop (Root Mean Square Propagation): Uses an exponentially decaying average of squared
gradients to normalize updates.
• Adam (Adaptive Moment Estimation): Combines Momentum and RMSprop to provide stable and
efficient updates.
• AdaMax: A variant of Adam using the infinity norm.
• Nadam (Nesterov-accelerated Adaptive Moment Estimation): Adam with Nesterov acceleration.
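
As a concrete illustration, here are the standard Momentum and Adam update rules written out in NumPy; the hyperparameter values and the toy objective f(w) = w² are assumptions for demonstration:

```python
import numpy as np

def momentum_step(w, v, grad, eta=0.01, beta=0.9):
    v = beta * v + grad                      # keep a fraction of past updates
    return w - eta * v, v

def adam_step(w, m, v, grad, t, eta=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad             # first moment (Momentum-like)
    v = b2 * v + (1 - b2) * grad ** 2        # second moment (RMSprop-like)
    m_hat = m / (1 - b1 ** t)                # bias correction for early steps
    v_hat = v / (1 - b2 ** t)
    return w - eta * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy objective f(w) = w^2, whose gradient is 2w; both optimizers drive w to ~0.
w, v = np.array([5.0]), np.zeros(1)
for _ in range(200):
    w, v = momentum_step(w, v, 2 * w)
print("momentum result:", w)

w, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 201):
    w, m, v = adam_step(w, m, v, 2 * w, t)
print("adam result:", w)
```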

What is a Loss Function?

A loss function is a mathematical function that measures the difference between the actual (ground truth)
output and the predicted output of a model. The goal of training a neural network is to minimize this loss,
making the model more accurate.

Types of Loss Functions


Loss functions are mainly categorized into two types:
1. Regression Loss Functions (for continuous output)
2. Classification Loss Functions (for discrete categories)

1. Regression Loss Functions
Used when the output is a continuous value (e.g., Mean Squared Error, Mean Absolute Error).

2. Classification Loss Functions
Used when the output belongs to a category/class (e.g., Cross-Entropy).
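
Both families are easy to write out directly; a minimal NumPy sketch with assumed example values:

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)          # regression loss

def binary_cross_entropy(y_true, p, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)                    # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(mse(np.array([2.0, 3.0]), np.array([2.5, 2.0])))               # 0.625
print(binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.2])))
```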
