Lab 8 Manual
A Single-Layer Perceptron (SLP) is one of the simplest types of artificial neural networks
used for binary classification tasks. It consists of a single layer of weights connecting inputs
to an output neuron.
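Concretely, the output neuron computes a weighted sum of the inputs plus a bias and passes it through an activation function (a step function in the classic perceptron):

\[ y = \mathrm{step}\!\left(\sum_{i=1}^{n} w_i x_i + b\right), \qquad \mathrm{step}(z) = \begin{cases} 1 & z \ge 0 \\ 0 & z < 0 \end{cases} \]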
To understand the single-layer perceptron, it is important to first understand the artificial neural network (ANN).
An ANN is an information processing system whose mechanism is inspired by the function of biological neural networks. Artificial neural networks consist of many interconnected computing units.
Activation Function
1. Activation functions are mathematical functions that can be used in Perceptrons to
determine the output given its input.
2. As noted above, the activation function determines whether the neuron (Perceptron) is activated or not.
3. Activation functions take in a weighted sum of the input data, called the activation, and
produce an output that can be used for prediction.
4. Activation functions are an essential part of Perceptrons and neural networks because
they allow the model to learn and make decisions based on the input data.
5. They also help to introduce non-linearity into the model, which is necessary for learning
more complex relationships in the data.
6. Some common types of activation functions used in Perceptrons are the Sign function, Heaviside (step) function, Sigmoid function, ReLU function, etc.; a short sketch of these appears after this list.
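As a quick illustration of the functions in item 6, here is a minimal sketch in Python (the exact conventions, e.g. returning 1 at exactly 0, vary between textbooks):

import numpy as np

def sign(x):
    return 1 if x >= 0 else -1            # Sign function: outputs -1 or +1

def heaviside(x):
    return 1 if x >= 0 else 0             # Heaviside (step) function: outputs 0 or 1

def sigmoid(x):
    return 1 / (1 + np.exp(-x))           # Sigmoid: squashes any input into (0, 1)

def relu(x):
    return max(0.0, x)                    # ReLU: keeps positives, zeroes out negatives

# Example: applying an activation to a weighted sum (the activation value)
weighted_sum = 0.4 * 1 + (-0.7) * 1 + 0.2   # w1*x1 + w2*x2 + bias
print(heaviside(weighted_sum))              # prints 0, since the sum is negative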
A single-layer perceptron only works for linearly separable problems (e.g., AND, OR). It cannot learn the XOR function, since XOR is not linearly separable.
Single layer perceptron for AND gate
# Uses the Perceptron class defined in the OR gate section below
import numpy as np

# AND gate training data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

# Train perceptron
perceptron = Perceptron(input_size=2)
perceptron.train(X, y)

# Test perceptron
print("Testing Perceptron on AND gate:")
for inputs in X:
    print(f"Input: {inputs}, Predicted Output: {perceptron.predict(inputs)}")
Result: after training, the perceptron reproduces the AND truth table, predicting 0, 0, 0, 1 for the four inputs.
Explanation:
1. Initialize Weights: All weights (including bias) start at zero.
2. Activation Function: Uses a simple step function.
3. Training: Updates weights using the Perceptron Learning Rule (written out below).
4. Testing: Predicts output for each input in the dataset.
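The Perceptron Learning Rule from step 3, written out: with learning rate \(\eta\), target output \(y\), and prediction \(\hat{y}\), each weight and the bias are adjusted in proportion to the error,

\[ w_i \leftarrow w_i + \eta\,(y - \hat{y})\,x_i, \qquad b \leftarrow b + \eta\,(y - \hat{y}). \]

When the prediction is correct, \(y - \hat{y} = 0\) and the weights stay unchanged; on linearly separable data such as the AND gate, the updates stop after a few epochs.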
Single layer perceptron for OR gate
import numpy as np

class Perceptron:
    def __init__(self, input_size, learning_rate=0.1, epochs=10):
        self.weights = np.zeros(input_size + 1)  # +1 for bias
        self.learning_rate = learning_rate
        self.epochs = epochs
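The listing breaks off after the constructor. Below is a minimal sketch of the remaining methods, continuing the class above; the method names activation, predict, and train are assumptions chosen to match the AND gate snippet earlier, and the step activation follows the Explanation section:

    def activation(self, x):
        # Step function: 1 if the weighted sum is non-negative, else 0
        return 1 if x >= 0 else 0

    def predict(self, inputs):
        # Weighted sum of inputs plus bias, passed through the step function
        summation = np.dot(inputs, self.weights[1:]) + self.weights[0]
        return self.activation(summation)

    def train(self, X, y):
        # Perceptron Learning Rule: adjust weights by learning_rate * error
        for _ in range(self.epochs):
            for inputs, label in zip(X, y):
                error = label - self.predict(inputs)
                self.weights[1:] += self.learning_rate * error * inputs
                self.weights[0] += self.learning_rate * error

# OR gate training data and test
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

perceptron = Perceptron(input_size=2)
perceptron.train(X, y)

print("Testing Perceptron on OR gate:")
for inputs in X:
    print(f"Input: {inputs}, Predicted Output: {perceptron.predict(inputs)}")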
Note:
A Single-Layer Perceptron can only solve linearly separable problems. OR is linearly
separable, but XOR is not. The XOR function requires a nonlinear decision boundary, which
a single-layer perceptron cannot learn.
For the XOR gate: the perceptron fails to learn XOR, typically ending up outputting all 0s or all 1s, because XOR is not linearly separable. The exact outputs may vary with initialization and training order, but no weight setting can classify all four XOR cases correctly, as the snippet below reproduces.
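Assuming the Perceptron class completed above, the failure is easy to reproduce:

# XOR gate training data: not linearly separable
X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

perceptron = Perceptron(input_size=2)
perceptron.train(X_xor, y_xor)

print("Testing Perceptron on XOR gate:")
for inputs in X_xor:
    print(f"Input: {inputs}, Predicted Output: {perceptron.predict(inputs)}")

# No matter how long it trains, at least one of the four rows is misclassified.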
Note:
To correctly classify XOR, we need a Multi-Layer Perceptron (MLP) with a hidden layer
using an activation function like ReLU or sigmoid.
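For completeness, here is a minimal sketch of such an MLP in plain NumPy, trained with backpropagation and a sigmoid activation. The architecture (2 inputs, 4 hidden units, 1 output), learning rate, and epoch count are choices made for this sketch, not part of the lab code; with these settings the network usually learns XOR, though convergence depends on the random initialization:

import numpy as np

np.random.seed(0)

# XOR data
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Randomly initialized weights for a 2-4-1 network
W1 = np.random.randn(2, 4) * 0.5
b1 = np.zeros((1, 4))
W2 = np.random.randn(4, 1) * 0.5
b2 = np.zeros((1, 1))

lr = 0.5
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden layer activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print("Testing MLP on XOR gate:")
for inputs in X:
    pred = sigmoid(sigmoid(inputs @ W1 + b1) @ W2 + b2)
    print(f"Input: {inputs}, Predicted Output: {int(pred.round()[0, 0])}")

The hidden layer gives the network a nonlinear decision boundary, which is exactly what the single-layer perceptron lacks.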