Artificial Intelligence Lab
BSCS 6-A
Department of Computer Science
Bahria University, Lahore Campus
Lab Journal: 8
Name: Muhammad Hamza Nawaz
Roll No: 03-134221-055
Topic 1: Basic ANN Construction
Define a simple ANN model using Scikit-learn.
Initialize layers and set up activation functions (e.g., ReLU, sigmoid).
Train the model with sample data.
Code:
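The code listing is missing from this entry; the following is a minimal sketch consistent with the explanation below (hidden_layer_sizes=(10, 5), ReLU hidden activation, max_iter). The make_classification toy dataset is an assumed stand-in for the sample data:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder sample data (assumption: any small labelled dataset works here)
X, y = make_classification(n_samples=500, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Two hidden layers with 10 and 5 neurons, ReLU activation in the hidden layers
model = MLPClassifier(hidden_layer_sizes=(10, 5),
                      activation='relu',
                      max_iter=500,
                      random_state=42)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")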
Explanation:
Hidden Layers:
hidden_layer_sizes=(10, 5): two hidden layers with 10 and 5 neurons respectively.
Activation Function:
ReLU: fast and effective for hidden layers, letting the network model non-linear relationships.
Output Layer Activation:
MLPClassifier chooses the final activation automatically from the task: logistic for binary classification, softmax for multi-class classification (see the sketch after this section).
Convergence Settings:
max_iter: caps the number of optimization iterations; training stops earlier if the loss converges.
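The "chosen based on the task" behaviour can be verified directly: a fitted MLPClassifier exposes the selected output activation as out_activation_. A minimal sketch, using toy make_classification data (an assumption, not lab data):

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

for n_classes in (2, 3):
    X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                               n_classes=n_classes, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(10, 5), activation='relu',
                        max_iter=500, random_state=0).fit(X, y)
    # 'logistic' for binary targets, 'softmax' for multi-class targets
    print(f"{n_classes} classes -> output activation: {clf.out_activation_}")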
Topic 2: Image Classification with ANN
Import a dataset (e.g., MNIST or CIFAR-10).
Preprocess the dataset for ANN compatibility (normalization, one-hot encoding).
Build and train an ANN to classify images.
Evaluate the model's performance using metrics.
Code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
from sklearn.metrics import classification_report
# Load and preprocess the data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
# Build the ANN model
model = Sequential([
    Flatten(input_shape=(28, 28)),    # Flatten 28x28 images into 784-length vectors
    Dense(128, activation='relu'),    # Hidden layer
    Dense(10, activation='softmax')   # Output layer: one probability per digit class
])
# Compile the model
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# Train the model
history = model.fit(X_train, y_train,
                    epochs=5,
                    batch_size=32,
                    validation_split=0.1)
# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Test Accuracy: {accuracy:.2f}")
# Generate a classification report
y_pred = model.predict(X_test)
y_pred_classes = tf.argmax(y_pred, axis=1).numpy()  # convert tensors to NumPy for sklearn
y_true_classes = tf.argmax(y_test, axis=1).numpy()
print(classification_report(y_true_classes, y_pred_classes))
Output:
Explanation:
1. Activation Functions:
o ReLU for hidden layers to handle non-linear relationships.
o Softmax for output layer in classification tasks.
2. Optimizers:
o Adam: Adaptive optimizer with momentum.
o SGD: Simple gradient descent for experimentation.
3. Preprocessing:
o Normalization scales pixel values to the standard range [0, 1].
o One-hot encoding converts categorical class labels to binary indicator vectors.
4. Evaluation:
o Use metrics such as accuracy and loss, plus per-class precision and recall from a classification report; a confusion-matrix sketch follows this list.
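As an extension of the evaluation point above, a confusion matrix shows which digits the network mistakes for which. A minimal sketch, assuming the Topic 2 code has already run so that y_true_classes and y_pred_classes exist:

from sklearn.metrics import confusion_matrix
# Rows are true digits 0-9, columns are predicted digits 0-9
print(confusion_matrix(y_true_classes, y_pred_classes))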
Topic 3: Training and Evaluation
Experiment with different optimizers (e.g., SGD, Adam).
Adjust hyperparameters such as learning rate, epochs, and batch size.
Evaluate the model’s performance on test data.
Code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import SGD, Adam
from tensorflow.keras.datasets import mnist
from sklearn.metrics import classification_report
# Load the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Normalize pixel values to [0, 1]
X_train, X_test = X_train / 255.0, X_test / 255.0
# One-hot encode the labels
y_train = tf.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.keras.utils.to_categorical(y_test, num_classes=10)
def create_model(optimizer='adam', learning_rate=0.001):
    model = Sequential([
        Flatten(input_shape=(28, 28)),    # Input layer
        Dense(128, activation='relu'),    # Hidden layer
        Dense(10, activation='softmax')   # Output layer
    ])
    # Configure the optimizer
    if optimizer == 'sgd':
        opt = SGD(learning_rate=learning_rate)
    elif optimizer == 'adam':
        opt = Adam(learning_rate=learning_rate)
    else:
        raise ValueError(f"Unsupported optimizer: {optimizer}")
    # Compile the model
    model.compile(optimizer=opt,
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
# Create model with SGD optimizer
model_sgd = create_model(optimizer='sgd', learning_rate=0.01)
# Train the model
history_sgd = model_sgd.fit(X_train, y_train,
                            epochs=10,
                            batch_size=64,
                            validation_split=0.1,
                            verbose=1)
# Evaluate on test data
loss_sgd, accuracy_sgd = model_sgd.evaluate(X_test, y_test)
print(f"SGD Test Accuracy: {accuracy_sgd:.2f}")
# Create model with Adam optimizer
model_adam = create_model(optimizer='adam', learning_rate=0.001)
# Train the model
history_adam = model_adam.fit(X_train, y_train,
                              epochs=10,
                              batch_size=64,
                              validation_split=0.1,
                              verbose=1)
# Evaluate on test data
loss_adam, accuracy_adam = model_adam.evaluate(X_test, y_test)
print(f"Adam Test Accuracy: {accuracy_adam:.2f}")
batch_sizes = [32, 64, 128]
for batch_size in batch_sizes:
    print(f"Training with Batch Size: {batch_size}")
    model = create_model(optimizer='adam', learning_rate=0.001)
    model.fit(X_train, y_train, epochs=5, batch_size=batch_size, validation_split=0.1, verbose=1)
    loss, accuracy = model.evaluate(X_test, y_test)
    print(f"Batch Size {batch_size} - Test Accuracy: {accuracy:.2f}")
learning_rates = [0.1, 0.01, 0.001]
for lr in learning_rates:
    print(f"Training with Learning Rate: {lr}")
    model = create_model(optimizer='sgd', learning_rate=lr)
    model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.1, verbose=1)
    loss, accuracy = model.evaluate(X_test, y_test)
    print(f"Learning Rate {lr} - Test Accuracy: {accuracy:.2f}")
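# As a further experiment (an addition to the lab, not part of the original
# listing), a fixed learning rate can be replaced with a decay schedule;
# Keras optimizers accept a LearningRateSchedule wherever a float is expected.
from tensorflow.keras.optimizers.schedules import ExponentialDecay
schedule = ExponentialDecay(initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9)
model = create_model(optimizer='sgd', learning_rate=schedule)
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.1, verbose=1)
loss, accuracy = model.evaluate(X_test, y_test)
print(f"Decayed LR - Test Accuracy: {accuracy:.2f}")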
# Train with 20 epochs
model = create_model(optimizer='adam', learning_rate=0.001)
history = model.fit(X_train, y_train, epochs=20, batch_size=64, validation_split=0.1, verbose=1)
# Evaluate performance
loss, accuracy = model.evaluate(X_test, y_test)
print(f"20 Epochs - Test Accuracy: {accuracy:.2f}")
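# With 20 epochs the model can start to overfit; EarlyStopping (a standard
# tf.keras callback, added here as a sketch rather than part of the original
# lab listing) halts training once validation loss stops improving.
from tensorflow.keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)
model_es = create_model(optimizer='adam', learning_rate=0.001)
model_es.fit(X_train, y_train, epochs=20, batch_size=64,
             validation_split=0.1, callbacks=[early_stop], verbose=1)
loss_es, accuracy_es = model_es.evaluate(X_test, y_test)
print(f"Early Stopping - Test Accuracy: {accuracy_es:.2f}")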
import matplotlib.pyplot as plt
# Plot loss for SGD optimizer
plt.plot(history_sgd.history['loss'], label='SGD - Training Loss')
plt.plot(history_sgd.history['val_loss'], label='SGD - Validation Loss')
plt.legend()
plt.title('SGD Training vs Validation Loss')
plt.show()
# Plot loss for Adam optimizer
plt.plot(history_adam.history['loss'], label='Adam - Training Loss')
plt.plot(history_adam.history['val_loss'], label='Adam - Validation Loss')
plt.legend()
plt.title('Adam Training vs Validation Loss')
plt.show()
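# Validation accuracy gives a complementary view of the two optimizers
# (added sketch; reuses the history objects recorded above).
plt.plot(history_sgd.history['val_accuracy'], label='SGD - Validation Accuracy')
plt.plot(history_adam.history['val_accuracy'], label='Adam - Validation Accuracy')
plt.legend()
plt.title('Validation Accuracy: SGD vs Adam')
plt.show()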
y_pred = model_adam.predict(X_test)
y_pred_classes = tf.argmax(y_pred, axis=1).numpy()  # convert tensors to NumPy for sklearn
y_true_classes = tf.argmax(y_test, axis=1).numpy()
print(classification_report(y_true_classes, y_pred_classes))
Output: