NEURAL NETWORKS:
Neural networks are machine learning models that mimic the complex functions of the human
brain.
These models consist of interconnected nodes, or neurons, that process data, learn patterns, and
enable tasks such as pattern recognition and decision-making.
Neural networks are a core component of machine learning and are used in a wide variety of
applications, including image recognition, natural language processing, and more.
Neural networks are capable of learning and identifying patterns directly from data without
pre-defined rules. These networks are built from several key components:
Neurons: The basic units that receive inputs; each neuron is governed by a threshold and an
activation function.
Connections: Links between neurons that carry information, regulated by weights and biases.
Weights and Biases: These parameters determine the strength and influence of connections.
Propagation Functions: Mechanisms that help process and transfer data across layers of neurons.
Learning Rule: The method that adjusts weights and biases over time to improve accuracy.
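The components above can be sketched in a few lines of Python (a minimal illustration, not from any particular library; the sigmoid activation is just one common choice):

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, then an activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # connections carry weighted inputs
    return 1 / (1 + math.exp(-z))  # sigmoid activation squashes z into (0, 1)

# Two inputs, two connection weights, one bias (illustrative values)
out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
```

Changing the weights or bias changes how strongly each input influences the output, which is exactly what the learning rule adjusts over time.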
Learning in neural networks follows a structured, three-stage process:
1. Input Computation: Data is fed into the network.
2. Output Generation: Based on the current parameters, the network generates an output.
3. Iterative Refinement: The network refines its output by adjusting weights and biases,
gradually improving its performance on diverse tasks.
Layers in Neural Network Architecture
1. Input Layer: This is where the network receives its input data. Each input neuron in the layer
corresponds to a feature in the input data.
2. Hidden Layers: These layers perform most of the computational heavy lifting. A neural
network can have one or multiple hidden layers. Each layer consists of units (neurons) that
transform the inputs into something that the output layer can use.
3. Output Layer: The final layer produces the output of the model. The format of these outputs
varies depending on the specific task, such as classification or regression.
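A minimal sketch of how these layers might be represented, assuming fully connected layers with randomly initialized weights (the function name is hypothetical):

```python
import random
random.seed(0)  # fixed seed so the illustration is reproducible

def make_layer(n_in, n_out):
    """A fully connected layer: one weight list per output neuron, plus biases."""
    weights = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

# Input layer of 3 features -> hidden layer of 4 neurons -> output layer of 2
hidden = make_layer(3, 4)
output = make_layer(4, 2)
```

Each hidden neuron holds one weight per input feature, and each output neuron holds one weight per hidden neuron.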
Working of Neural Networks
• Feed forward propagation
• Linear transformation
• Activation function
• Backward propagation
• Loss calculation
• Gradient calculation
• Weight update
• Iteration
Feed Forward neural networks:
• When data is input into the network, it passes through the network in the forward direction,
from the input layer through the hidden layers to the output layer. This process is known as
forward propagation.
• 1. Linear Transformation: Each neuron in a layer receives inputs which are multiplied by the
weights associated with the connections. These products are summed together and a bias is
added to the sum. This can be represented mathematically as:
z = w1x1 + w2x2 + … + wnxn + b
• 2. Activation: The result of the linear transformation (denoted as z) is then passed through
an activation function. The activation function is crucial because it introduces non-linearity
into the system, enabling the network to learn more complex patterns.
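The two steps above can be sketched as a forward pass through one small hidden layer; the weights, biases, and inputs below are illustrative values only:

```python
def relu(z):
    """ReLU activation: introduces non-linearity by zeroing negative values."""
    return max(0.0, z)

def layer_forward(x, weights, biases, act):
    # For each neuron: linear transformation (weighted sum + bias), then activation
    return [act(sum(w * xi for w, xi in zip(ws, x)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, -2.0]                                                   # input features
hidden = layer_forward(x, [[0.5, 0.5], [1.0, -1.0]], [0.0, 0.0], relu)
y = layer_forward(hidden, [[1.0, 1.0]], [0.0], lambda z: z)       # linear output
```

Without the activation step, stacking layers would collapse into a single linear transformation, so the network could only learn linear patterns.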
Back Propagation:
Backpropagation (backward propagation of errors) is the learning phase.
• After forward propagation, the network evaluates its performance using a loss function, which
measures the difference between the actual output and the predicted output. The goal of
training is to minimize this loss.
• It adjusts the weights of the neurons to minimize the prediction error.
1. Loss Calculation:
The network calculates loss, which tells us how wrong the prediction is.
Different problems use different loss functions; for example:
Mean Squared Error is used for regression (predicting numbers).
Cross-Entropy Loss is used for classification (predicting categories).
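Both loss functions can be sketched in plain Python (a minimal illustration; real libraries provide more numerically robust versions):

```python
import math

def mse(y_true, y_pred):
    """Mean Squared Error: average squared gap, used for regression."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred):
    """Binary cross-entropy: penalizes confident wrong probabilities, used for classification."""
    eps = 1e-12  # small constant to avoid log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)
```

A perfect prediction gives a loss of (essentially) zero in both cases; the worse the prediction, the larger the loss.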
2. Gradient calculation:
The network figures out how much each weight and bias is responsible for the error.
It uses calculus (chain rule) to break the error down and understand which parts need to
change to improve the result.
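As a worked one-weight example of the chain rule (values are illustrative), suppose the prediction is w * x and the loss is the squared error:

```python
# prediction = w * x, loss = (prediction - y) ** 2
# Chain rule: dL/dw = dL/dpred * dpred/dw = 2 * (pred - y) * x
w, x, y = 0.5, 2.0, 3.0
pred = w * x                 # forward pass: 0.5 * 2.0 = 1.0
grad = 2 * (pred - y) * x    # 2 * (1.0 - 3.0) * 2.0 = -8.0
```

The negative gradient says this weight is too small: increasing w would reduce the error.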
3. Weight update:
After finding the gradients, the network updates the weights and biases to reduce the error.
It uses a method like Stochastic Gradient Descent (SGD) to do this.
The weights are moved in the direction opposite the gradient, and the learning rate controls
how big each step is.
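A minimal sketch of one SGD step, assuming the gradients have already been computed (the function name is hypothetical):

```python
def sgd_step(weights, grads, lr=0.1):
    """Move each weight opposite its gradient, scaled by the learning rate."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Illustrative values: a negative gradient pushes the weight up, a positive one down
new_w = sgd_step([0.5, -0.2], [-8.0, 4.0])
```

A smaller learning rate takes smaller, safer steps; a larger one learns faster but can overshoot the minimum.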
Iteration:
• The network repeats the steps of forward-pass, loss calculation, backpropagation, and weight
updates many times.
• Each time, it learns a bit more and makes better predictions.
• This helps the network understand the data better and perform well on tasks like classification
or prediction.
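The full loop above can be sketched with a one-weight model fitting y = 2x (all values illustrative):

```python
# Training data for the target relationship y = 2 * x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, lr = 0.0, 0.05  # start from an uninformed weight

for epoch in range(100):          # iteration: repeat the cycle many times
    for x, y in data:
        pred = w * x              # forward pass
        grad = 2 * (pred - y) * x # gradient of the squared-error loss
        w -= lr * grad            # weight update, opposite the gradient
```

Each pass nudges w toward 2.0; after enough iterations the prediction error becomes negligible.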