Understanding Neural Networks Explained

Uploaded by Priyanka Arora

Unit 6 – Understanding Neural Networks

1. What is a Neural Network?

 Biological Neural Network: Made of neurons in the human brain, which process and
transmit information.

 Artificial Neural Network (ANN): AI system that mimics the brain’s way of processing
information.

 Advantages: Learns from data automatically, adapts to inputs, used in chatbots, spam
filtering, image tagging, recommendation systems.

2. Parts of a Neural Network

1. Input Layer – Receives raw features of the problem.

2. Hidden Layer(s) – One or more layers that process data using neurons & weights.

3. Output Layer – Produces the final prediction or decision.

 Deep Neural Network (DNN): ANN with 2+ hidden layers (deep learning).

3. Components

1. Neurons (Nodes): Process inputs → weighted sum → activation function → output.

2. Weights: Show importance of each connection.

3. Activation Function: Adds non-linearity (e.g., Sigmoid, Tanh, ReLU).

4. Bias: Shifts activation threshold.

5. Connections: Links between neurons carrying weights.

6. Learning Rule: Updates weights/bias to reduce error (e.g., Backpropagation).

7. Propagation Functions:

o Forward Propagation: Data flows input → output, compute error.

o Backpropagation: Adjust weights to reduce error.
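The components above can be sketched in code. Below is a minimal, illustrative single-neuron example: the three common activation functions, then one sigmoid neuron trained with the forward-propagation / backpropagation loop just described. All numeric values are assumed toy values, not from the notes.

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))   # squashes any number into (0, 1)

def tanh(x):
    return math.tanh(x)             # squashes into (-1, 1)

def relu(x):
    return max(0.0, x)              # 0 for negatives, identity otherwise

w, b, lr = 0.5, 0.0, 0.5            # weight, bias, learning rate (assumed)
x, target = 1.0, 1.0                # one training example

for _ in range(200):
    y = sigmoid(w * x + b)          # forward propagation
    error = y - target              # compute error
    grad = error * y * (1 - y)      # backpropagation: sigmoid chain rule
    w -= lr * grad * x              # learning rule: update weight
    b -= lr * grad                  # and bias

print(round(sigmoid(w * x + b), 2))  # prediction has moved close to 1.0
```

Each pass nudges the weight and bias so the error shrinks; real networks repeat this for thousands of weights at once.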

4. Working

Formula:

Output = f(∑ wi·xi + bias)

 Multiply inputs by weights, add bias, apply activation function.

 Output moves to next layer until final prediction is made.
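The formula can be written line by line in code. This sketch uses ReLU as the activation f, purely for illustration; the sample numbers match the worked example later in these notes.

```python
def neuron_output(inputs, weights, bias):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, weighted_sum)        # f = ReLU

# (0.4*2) + (0.2*3) + (0.6*1) + 0.1 = 2.1
print(round(neuron_output([2, 3, 1], [0.4, 0.2, 0.6], 0.1), 1))  # 2.1
```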


5. Types of Neural Networks

1. Standard Neural Network (Perceptron):

Structure: Single-layer network with input nodes connected directly to output nodes.

Function: Performs binary classification (yes/no, true/false).

Activation: Uses a threshold function to decide the output.

Limitation: Cannot handle complex patterns or non-linear data.

Example: Classifying whether an email is spam or not based on one simple rule.
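A perceptron is small enough to write out directly. The weights below are assumed toy values for the spam rule, not a trained model; x = 1 means the flagged word is present.

```python
def perceptron(inputs, weights, bias, threshold=0.0):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > threshold else 0     # hard threshold activation

print(perceptron([1], [2.0], -1.0))  # 1 (spam)
print(perceptron([0], [2.0], -1.0))  # 0 (not spam)
```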

2. Feedforward Neural Network (FFNN): 

Structure: Multiple layers of neurons where data flows only in one direction — from
input to output.

Purpose: Can process complex relationships between data features.

Applications:

Image recognition (e.g., identifying objects in a picture)

Natural Language Processing (text classification)

Advantages: Simple to understand, efficient for many tasks.

Limitation: Cannot remember past inputs (no feedback loops).

3. Convolutional Neural Network (CNN):

Structure: Special layers called convolutional layers use filters (kernels) to detect features like edges, colours, shapes.

Strength: Best for spatial data (data that has height, width, and sometimes depth).

Applications:

 Object detection in images

 Medical image analysis (MRI, X-rays)

 Facial recognition

Key Feature: Pooling layers reduce image size while preserving important features,
making processing faster.
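The pooling idea can be shown on a tiny example. This sketch applies 2x2 max pooling to an assumed 4x4 "image" of pixel values: each dimension is halved, but the strongest value in each patch survives.

```python
image = [
    [1, 3, 2, 1],
    [4, 6, 1, 0],
    [2, 1, 9, 8],
    [0, 2, 7, 5],
]

# keep the maximum of each non-overlapping 2x2 patch
pooled = [
    [max(image[r][c], image[r][c + 1], image[r + 1][c], image[r + 1][c + 1])
     for c in range(0, 4, 2)]
    for r in range(0, 4, 2)
]
print(pooled)   # [[6, 2], [2, 9]]
```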

4. Recurrent Neural Network (RNN):

Structure: Has loops that allow information to be stored and used in future steps.

Strength: Good for sequential data where order matters.

Applications:

 Speech recognition

 Language translation

 Time-series forecasting (stock prices, weather)

Special Types:

 LSTM (Long Short-Term Memory) – remembers long sequences

 GRU (Gated Recurrent Unit) – simpler but effective for many tasks
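The recurrent idea fits in a few lines. In this sketch the hidden state h carries information from earlier steps forward; the weights are assumed toy values, not a trained model.

```python
import math

def rnn_step(x, h, w_x=0.5, w_h=0.8, b=0.0):
    return math.tanh(w_x * x + w_h * h + b)   # new hidden state

h = 0.0
for x in [1.0, 0.0, 0.0]:     # input appears only at the first step
    h = rnn_step(x, h)
    print(round(h, 3))        # later states still reflect that first input
```

Even though the input is 0 after the first step, h stays positive: the loop has "remembered" the earlier 1.0.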

5. Generative Adversarial Network (GAN):

Structure: Two networks working together:

Generator: Creates synthetic data


Discriminator: Checks if data is real or fake

Purpose: To generate realistic-looking synthetic data.

Applications:

Creating realistic images (deepfakes, art)

Data augmentation for training AI models

Restoring damaged or old images/videos

Key Feature: Works like a competition — the Generator improves to fool the
Discriminator, and the Discriminator improves to detect fakes.

6. Future Impact on Society

 Positive: Automation, efficiency, personalization, economic growth, new jobs.

 Concerns: Data privacy, bias, job loss, ethical regulation needed.


ANN Diagram (figure)

Let's Solve:

Let us see a simple problem.

CASE I: Let the features be represented as x1,x2 and x3.

Input Layer:

Feature 1, x1 = 2

Feature 2, x2 = 3

Feature 3, x3 = 1

Hidden Layer:

Weight 1, w1 = 0.4

Weight 2, w2 = 0.2

Weight 3, w3 = 0.6

bias = 0.1

threshold = 3.0

Output: Using the formula:

∑wixi + bias = w1x1 + w2x2 + w3x3 + bias

= (0.4*2) + (0.2*3) + (0.6*1) + 0.1


= 0.8 + 0.6 + 0.6 + 0.1

= 2.1

Now, we apply the threshold value:

If the sum > threshold, then output = 1 (active)

If the sum ≤ threshold, then output = 0 (inactive)

In this case:

Output (2.1) < threshold (3.0)

So, the output of the hidden layer is:

Output = 0

This means that the neuron in the hidden layer is inactive.
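CASE I can be checked in code, using exactly the values above:

```python
x = [2, 3, 1]
w = [0.4, 0.2, 0.6]
bias = 0.1
threshold = 3.0

s = sum(wi * xi for wi, xi in zip(w, x)) + bias
output = 1 if s > threshold else 0
print(round(s, 1), output)   # 2.1 0 -> the hidden neuron stays inactive
```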

CASE II

Let's say we have another neuron in the output layer with the following weights and bias:

w1 = 0.7

w2 = 0.3

bias = 0.2

The hidden-layer outputs (both 0 in this case) are passed as inputs to the output layer:

Output = w1·x1 + w2·x2 + bias

= (0.7*0) + (0.3*0) + 0.2

= 0.2

Let's assume the threshold value for the output layer is 0.1:

Output (0.2) > threshold (0.1)

So, the final output of the neural network is:

Output = 1
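CASE II in code: both hidden outputs are 0, so only the bias survives the weighted sum, and that alone clears the threshold.

```python
hidden = [0, 0]
w = [0.7, 0.3]
bias = 0.2
threshold = 0.1

s = sum(wi * hi for wi, hi in zip(w, hidden)) + bias
output = 1 if s > threshold else 0
print(round(s, 1), output)   # 0.2 1 -> the output neuron fires
```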
