
UNIVERSITY COLLEGE OF ENGINEERING TINDIVANAM

(A Constituent College of Anna University, Chennai)


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
B.E. SIXTH SEMESTER
RECORD FOR

CCS355 - NEURAL NETWORKS AND DEEP LEARNING LABORATORY

NAME :

REG NO :

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


UNIVERSITY COLLEGE OF ENGINEERING TINDIVANAM
MELPAKKAM
TINDIVANAM-604001
UNIVERSITY COLLEGE OF ENGINEERING TINDIVANAM
(A Constituent College of Anna University, Chennai)
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

LABORATORY RECORD NOTE BOOK

2024-2025
This is to certify that this is a bonafide record of the work done

by Mr. / Ms. _______________Register Number ____________

of the III year B.E., Department of Computer Science and Engineering

in the NEURAL NETWORKS AND DEEP LEARNING LABORATORY

in the VI Semester.

University Examination held on .

Staff in charge Head of the department

Internal Examiner External Examiner


INDEX
S.No  Date  Title  Page No.  Signature

1. Implement simple vector addition in TensorFlow.
2. Implement a regression model in Keras.
3. Implement a perceptron in TensorFlow/Keras environment.
4. Implement a feed-forward network in TensorFlow/Keras.
5. Implement an image classifier using CNN in TensorFlow/Keras.
6. Improve the deep learning model by fine-tuning hyperparameters.
7. Implement a transfer learning concept in image classification.
8. Using a pre-trained model on Keras for transfer learning.
9. Perform sentiment analysis using RNN.
10. Implement an LSTM-based autoencoder in TensorFlow/Keras.
11. Image generation using GAN.
12. Train a deep learning model to classify a given image using a pre-trained model.
13. Recommendation system from sales data using deep learning.
14. Implement object detection using CNN.
15. Implement any simple reinforcement algorithm for an NLP problem.
EXP:NO:01 IMPLEMENTATION OF SIMPLE VECTOR ADDITION IN
TENSORFLOW
DATE:

AIM:
To write a Python program to perform simple vector addition in TensorFlow.

ALGORITHM:
1. Import the TensorFlow library to access its functionalities.
2. Define the input vectors to be added, creating them as TensorFlow tensors.
3. Use the TensorFlow add operation to perform element-wise addition of the input vectors.
4. Evaluate the addition operation: under TensorFlow 2.x eager execution this happens immediately, while TensorFlow 1.x required running it inside a session.
5. Print the result of the addition operation or use it for further computation.

PROGRAM:

import tensorflow as tf
vector1 = tf.constant([1,2,3])
vector2 = tf.constant([4,5,6])
print(vector1)
print(vector2)
result = tf.add(vector1, vector2)
print("Result of vector addition:",result)
OUTPUT:
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
tf.Tensor([4 5 6], shape=(3,), dtype=int32)
Result of vector addition: tf.Tensor([5 7 9], shape=(3,), dtype=int32)
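Note: in TensorFlow 2.x the tensors above evaluate eagerly, so no session is needed. For reference only, a minimal sketch of the session-based style mentioned in the algorithm (assuming the tf.compat.v1 compatibility module) would be:

import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # switch to TF 1.x-style graph mode
v1 = tf.constant([1, 2, 3])
v2 = tf.constant([4, 5, 6])
add_op = tf.add(v1, v2)                  # node in the computational graph
with tf.compat.v1.Session() as sess:     # session executes the graph
    print(sess.run(add_op))              # expected: [5 7 9]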

RESULT:

Thus, the Python program to perform simple vector addition using TensorFlow was executed successfully.
EXP.NO: 02 IMPLEMENT A REGRESSION MODEL IN KERAS

DATE:

AIM:

To write a Python program to implement a regression model using Keras.

ALGORITHM:

1. Start the program.

2. Import the necessary packages.

3. Declare and initialize the variables.

4. Assign the X and Y values.

5. Train the neural network model.

6. Evaluate the trained model.

7. Compare the predicted output values with the actual output values.

8. Stop the program.


PROGRAM:

import numpy as np

from keras.models import Sequential

from keras.layers import Dense

np.random.seed(0)

X = np.random.rand(100, 1)

y = 2 * X + 1 + np.random.randn(100, 1) * 0.1

model = Sequential()

model.add(Dense(10, input_dim=1, activation='relu'))

model.add(Dense(1))

model.compile(loss='mean_squared_error', optimizer='adam')

model.fit(X, y, epochs=1000, batch_size=10, verbose=0)

mse = model.evaluate(X, y, verbose=0)

print('Mean Squared Error:', mse)

X_new = np.array([[0.2], [0.5], [0.8]])

predictions = model.predict(X_new)

print('Predictions:', predictions)
OUTPUT:

Mean Squared Error: 0.009807378984987736

1/1 [==============================] - 0s 78ms/step

Predictions: [[1.4192846]
 [2.024505 ]
 [2.6137354]]
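Since the data were generated from y = 2x + 1 plus noise, the predictions can be sanity-checked against the noiseless ground truth; a small sketch reusing the variables from the program above:

# Ground-truth check: the true function is y = 2x + 1
print('Expected:', (2 * X_new + 1).ravel())   # [1.4 2.0 2.6]
print('Predicted:', predictions.ravel())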

RESULT:

Thus, the regression model using Keras has been executed and the output verified successfully.
EXP NO: 3 IMPLEMENT A PERCEPTRON IN TENSORFLOW/ KERAS
ENVIRONMENT
DATE:

AIM:

To write a Python program to implement a perceptron in the TensorFlow/Keras environment.

ALGORITHM:

1. Start the program.


2. Import the necessary packages.
3. Declare the sample data’s input.
4. Define the perceptron model.
5. Compile the model.
6. Train the model.
7. Predict the output values.
8. Stop the program.

PROGRAM:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
import numpy as np
# Generate some sample data for a logical OR operation
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1]) # Output labels (OR gate)
# Define a simple perceptron model
model = keras.Sequential([Dense(units=1, input_dim=2, activation='sigmoid')])
# Compile the model
model.compile(optimizer=SGD(learning_rate=0.1), loss='mean_squared_error',
metrics=['accuracy'])
# Train the model
model.fit(X, y, epochs=1000, verbose=0)
# Evaluate the model
loss, accuracy = model.evaluate(X, y)
print("Loss:", loss)
print("Accuracy:", accuracy)
# Make predictions
predictions = model.predict(X)
print("Predictions:")
print(predictions)

OUTPUT:

1/1 [==============================] - 0s 120ms/step - loss: 0.0370 - accuracy: 1.0000
Loss: 0.037040166556835175
Accuracy: 1.0
1/1 [==============================] - 0s 61ms/step
Predictions:
[[0.29985684]
[0.8216442 ]
[0.83836555]
[0.9823918 ]]
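The sigmoid outputs above are probabilities; to recover hard OR-gate decisions they can be thresholded at 0.5 (a small post-processing sketch, not part of the recorded program):

# Threshold the probabilities to get binary OR outputs
binary_predictions = (predictions > 0.5).astype(int)
print(binary_predictions)   # expected: [[0] [1] [1] [1]]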

RESULT:

Thus, the above program for the perceptron has been executed successfully.
EXP NO: 04 IMPLEMENT A FEED-FORWARD NETWORK IN TENSORFLOW/KERAS

DATE:

AIM:

To implement a feed-forward network in TensorFlow/Keras.

ALGORITHM:

1. Start the program.

2. Import all the required packages.

3. Define and compile the model.

4. Display the model summary.

5. Stop the program.

PROGRAM:

from tensorflow import keras

# Replace these values with appropriate values for your problem

input_size = 100 # Number of input features

hidden_units = 32 # Number of units in the hidden layer

output_units = 1 # Number of output units (e.g., 1 for binary classification)

# Define the architecture of the neural network

model = keras.Sequential([

# Input layer (specify input_shape for the first layer)

keras.layers.Input(shape=(input_size,)),

# Hidden layer with sigmoid activation

keras.layers.Dense(units=hidden_units, activation='sigmoid'),

# Output layer

keras.layers.Dense(units=output_units, activation='sigmoid')

])

# Compile the model


model.compile(optimizer='adam',

loss='binary_crossentropy',

metrics=['accuracy'])

# Print the model summary

model.summary()
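The program above only defines and compiles the network; a hypothetical smoke test on random data (the names X_demo and y_demo are illustrative) could look like:

import numpy as np

X_demo = np.random.rand(8, input_size)          # 8 random input samples
y_demo = np.random.randint(0, 2, size=(8, 1))   # random binary labels
model.fit(X_demo, y_demo, epochs=2, verbose=0)  # brief training run
print(model.predict(X_demo[:1]))                # probability for one sample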

OUTPUT:

(The model.summary() screenshot is not reproduced here; it lists two Dense layers with 3,232 and 33 parameters respectively, 3,265 trainable parameters in total.)

RESULT:

Thus, the given program for the feed-forward network was executed successfully.


EXP.NO:05
IMPLEMENT AN IMAGE CLASSIFIER USING CNN
DATE:

AIM:

To write a Python program to implement an image classifier using CNN in TensorFlow/Keras.

ALGORITHM:

1. Start the program.


2. Import all the packages required.
3. Compile the program.
4. Display the sample result.
5. Stop the program.
PROGRAM:

import tensorflow as tf

from tensorflow import keras

from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.utils import to_categorical

# Load and preprocess the CIFAR-10 dataset

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

x_train, x_test = x_train / 255.0, x_test / 255.0

# Normalize pixel values to the range [0, 1]

y_train = to_categorical(y_train, 10) # One-hot encode the labels

y_test = to_categorical(y_test, 10)

# Define the CNN model

model = keras.Sequential([

Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),


MaxPooling2D((2, 2)),

Conv2D(64, (3, 3), activation='relu'),

MaxPooling2D((2, 2)),

Conv2D(64, (3, 3), activation='relu'),

Flatten(),

Dense(64, activation='relu'),

Dropout(0.5),

Dense(10, activation='softmax')

])

# Compile the model

model.compile(optimizer='adam', loss='categorical_crossentropy',

metrics=['accuracy'])

# Train the model

model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Evaluate the model

loss, accuracy = model.evaluate(x_test, y_test)

print("Test Loss:", loss)

print("Test Accuracy:", accuracy)


OUTPUT:

RESULT:

Thus, the given program to implement an image classifier using CNN in TensorFlow/Keras was executed and verified successfully.
EXP.NO:06
IMPROVE THE DEEP LEARNING MODEL BY FINE
TUNING HYPER PARAMETERS
DATE:

AIM:

To write a Python program to improve a deep learning model by fine-tuning hyperparameters.
ALGORITHM:

1. Start the program.

2. Generate a synthetic binary classification dataset with make_classification.
3. Define a grid of candidate values for the regularization strength C.
4. Run GridSearchCV with 5-fold cross-validation over a logistic regression classifier.
5. Report the best hyperparameter value and the best cross-validated score.
6. Stop the program.

PROGRAM:

# Necessary imports

from sklearn.linear_model import LogisticRegression

from sklearn.model_selection import GridSearchCV

import numpy as np

from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=20,
n_informative=10, n_classes=2, random_state=42)

# Creating the hyperparameter

c_space = np.logspace(-5, 8, 15)

param_grid = {'C': c_space}

# Instantiating logistic regression classifier

logreg = LogisticRegression()
# Instantiating the GridSearchCV object

logreg_cv = GridSearchCV(logreg, param_grid, cv=5)

# Assuming X and y are your feature matrix and target variable

# Fit the GridSearchCV object to the data

logreg_cv.fit(X, y)

# Print the tuned parameters and score

print("Tuned Logistic Regression Parameters:


{}".format(logreg_cv.best_params_))

print("Best score is {}".format(logreg_cv.best_score_))

OUTPUT:

Tuned Logistic Regression Parameters: {'C': 0.006105402296585327}

Best score is 0.853

RESULT:

Thus, the given program to improve the deep learning model by fine-tuning hyperparameters was executed and verified successfully.
EXP.NO:07 IMPLEMENT A TRANSFER LEARNING CONCEPT IN
IMAGE CLASSIFICATION
DATE:

AIM:

To write a Python program for the implementation of a transfer learning concept in image classification.

ALGORITHM:

1. Start the program.


2. Load the MobileNetV2 pre-trained model without the top classification
layers.
3. Add new classification layers on top of the pre-trained model.
4. Freeze the layers of the pre-trained model to retain their weights.
5. Compile the model using the Adam optimizer with a learning rate of
0.001 and categorical cross-entropy loss.
6. Load the CIFAR-10 dataset and preprocess it by normalizing pixel values to the range [0, 1].
7. Convert labels to one-hot encoded format.
8. Apply data augmentation techniques like rotation, width and height
shifting, and horizontal flipping.
9. Fine-tune the model by training it on the augmented data for 10 epochs,
using a batch size of 32, and validate it using the test data. Finally,
evaluate the model's performance on the test data and print the test loss
and accuracy.
10.Stop the program.

PROGRAM:

import tensorflow as tf

from tensorflow import keras

from tensorflow.keras.applications import MobileNetV2

from tensorflow.keras.preprocessing.image import ImageDataGenerator

from tensorflow.keras.layers import GlobalAveragePooling2D, Dense

from tensorflow.keras.optimizers import Adam


# Load a pre-trained model (MobileNetV2) excluding the top classification layers

base_model = MobileNetV2(weights='imagenet', include_top=False)

# Create a new model on top

x = base_model.output

x = GlobalAveragePooling2D()(x)

x = Dense(1024, activation='relu')(x)

predictions = Dense(10, activation='softmax')(x)

model = keras.Model(inputs=base_model.input, outputs=predictions)

# Freeze the layers of the pre-trained model

for layer in base_model.layers:
    layer.trainable = False

# Compile the model

model.compile(optimizer=Adam(learning_rate=0.001), loss='categorical_crossentropy',
              metrics=['accuracy'])

# Load and preprocess your dataset

# You can use your own dataset or a built-in dataset like CIFAR-10

# Example of using CIFAR-10

from tensorflow.keras.datasets import cifar10

from tensorflow.keras.utils import to_categorical

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

x_train, x_test = x_train / 255.0, x_test / 255.0 # Normalize pixel values to the range
[0, 1]

y_train = to_categorical(y_train, 10)

y_test = to_categorical(y_test, 10)

# Use data augmentation for better performance

data_generator = ImageDataGenerator(

rotation_range=20,

width_shift_range=0.2,
height_shift_range=0.2,

horizontal_flip=True)

model.fit(data_generator.flow(x_train, y_train, batch_size=32), epochs=10,


validation_data=(x_test, y_test))

# Evaluate the model

loss, accuracy = model.evaluate(x_test, y_test)

print("Test Loss:", loss)

print("Test Accuracy:", accuracy)

OUTPUT:

Downloading data from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
170498071/170498071 [==============================] - 360s 2us/step
Epoch 1/10
1563/1563 [==============================] - 116s 65ms/step - loss: 1.9905 - accuracy: 0.2704 - val_loss: 1.8794 - val_accuracy: 0.3245
Epoch 2/10
1563/1563 [==============================] - 97s 62ms/step - loss: 1.9269 - accuracy: 0.2968 - val_loss: 1.8544 - val_accuracy: 0.3311
Epoch 3/10
1563/1563 [==============================] - 106s 68ms/step - loss: 1.9108 - accuracy: 0.3018 - val_loss: 1.8384 - val_accuracy: 0.3379
Epoch 4/10
1563/1563 [==============================] - 103s 66ms/step - loss: 1.9043 - accuracy: 0.3046 - val_loss: 1.8328 - val_accuracy: 0.3446
Epoch 5/10
1563/1563 [==============================] - 102s 65ms/step - loss: 1.8937 - accuracy: 0.3111 - val_loss: 1.8341 - val_accuracy: 0.3380
Epoch 6/10
1563/1563 [==============================] - 100s 64ms/step - loss: 1.8877 - accuracy: 0.3122 - val_loss: 1.8247 - val_accuracy: 0.3349
Epoch 7/10
1563/1563 [==============================] - 99s 63ms/step - loss: 1.8876 - accuracy: 0.3127 - val_loss: 1.8362 - val_accuracy: 0.3386
Epoch 8/10
1563/1563 [==============================] - 82s 53ms/step - loss: 1.8799 - accuracy: 0.3131 - val_loss: 1.8173 - val_accuracy: 0.3430
Epoch 9/10
1563/1563 [==============================] - 77s 49ms/step - loss: 1.8728 - accuracy: 0.3176 - val_loss: 1.8024 - val_accuracy: 0.3493
Epoch 10/10
1563/1563 [==============================] - 78s 50ms/step - loss: 1.8707 - accuracy: 0.3186 - val_loss: 1.8069 - val_accuracy: 0.3478
313/313 [==============================] - 7s 22ms/step - loss: 1.8069 - accuracy: 0.3478
Test Loss: 1.806941032409668
Test Accuracy: 0.34779998660087585

RESULT:

Thus, the given program for the implementation of the transfer learning concept in image classification was executed and verified successfully.


EXP NO: 8 USING A PRE-TRAINED MODEL ON KERAS FOR TRANSFER LEARNING

DATE:

AIM:
To write a Python program that uses a pre-trained model on Keras for transfer learning.

ALGORITHM:
1. Start the program.
2. Load cifar10 dataset.
3. Normalize pixel values to be between 0 and 1.
4. One-hot encode the labels.
5. Load pre-trained VGG16 model without the top layers (include_top=False).

6. Create a new model with VGG16 base and additional dense layers.
7. Compile the model.
8. Train the model.
9. Evaluate the model on test set.

PROGRAM:
import numpy as np
from keras.applications import VGG16
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam

from keras.datasets import cifar10


from keras.utils import to_categorical
# Load CIFAR-10 dataset
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
# Normalize pixel values to be between 0 and 1
X_train = X_train.astype('float32') / 255.0
X_test = X_test.astype('float32') / 255.0
# One-hot encode the labels
y_train = to_categorical(y_train, 10)

y_test = to_categorical(y_test, 10)


# Load pre-trained VGG16 model without the top layers (include_top=False)
base_model = VGG16(weights='imagenet', include_top=False,
input_shape=(32, 32, 3))
# Freeze the weights of the pre-trained layers
for layer in base_model.layers:
    layer.trainable = False
# Create a new model with VGG16 base and additional dense layers
model = Sequential([
base_model,
Flatten(),
Dense(512, activation='relu'),

Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer=Adam(),loss='categorical_crossentropy',
metrics=['accuracy'])
# Train the model

model.fit(X_train, y_train, batch_size=128, epochs=10, validation_split=0.1)


# Evaluate the model on test set
test_loss, test_accuracy = model.evaluate(X_test, y_test)
print("Test Loss:", test_loss)
print("Test Accuracy:", test_accuracy)
OUTPUT:

RESULT:
Thus, the Python program using a pre-trained model on Keras for transfer learning has been executed successfully.
EXP:NO:09 PERFORM SENTIMENT ANALYSIS USING RNN

DATE:

AIM:

To perform sentiment analysis using RNN.

ALGORITHM:

1. Start the program.


2. Import the necessary packages.
3. Load the IMDB dataset.
4. Pad sequences to a fixed length.
5. Define the RNN model.
6. Compile the model.
7. Train the model.
8. Evaluate the model.
9. Stop the program.

PROGRAM:

import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, SimpleRNN
# Load the IMDB dataset
max_features = 10000 # Consider only the top 10,000 words
maxlen = 500 # Cut reviews after 500 words
batch_size = 32
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(
num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')
# Pad sequences to a fixed length
print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_test shape:', input_test.shape)
# Define the RNN model
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
# Compile the model
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Train the model
print('Training...')
model.fit(input_train, y_train,
batch_size=batch_size,
epochs=5,
validation_data=(input_test, y_test))
# Evaluate the model
print('Evaluating...')
loss, accuracy = model.evaluate(input_test, y_test, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
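A hypothetical inference sketch on one padded test review, where 0.5 separates negative from positive:

# Classify the first padded test review
prob = float(model.predict(input_test[:1])[0][0])
print('positive' if prob > 0.5 else 'negative', prob)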
OUTPUT:

RESULT:
Thus, the program to perform sentiment analysis using RNN was executed successfully.
EXP NO:10 IMPLEMENT AN LSTM BASED AUTOENCODER IN TENSORFLOW/KERAS

DATE:

AIM:

To write a Python program to implement an LSTM-based autoencoder in TensorFlow/Keras.

ALGORITHM:

1. Start the program.


2. Import numpy and tensorflow libraries.
3. Generate random sequences as sample data.
4. Define an LSTM autoencoder model with:
   - Input shape for the sequences.
   - Encoder LSTM layer with 32 units and 'relu' activation.
   - Bottleneck layer that repeats the encoded vector for each timestep.
   - Decoder LSTM layer with 32 units and 'relu' activation.
   - TimeDistributed dense layer for sequence reconstruction.
   - Compilation with the Adam optimizer and mean squared error loss.
5. Train the autoencoder model on the sample data for 5 epochs with a batch
size of 32 and 20% validation split.
6. Generate example test data and use the trained autoencoder to reconstruct the sequences.
7. Print an example original and reconstructed sequence for verification.

PROGRAM:

import numpy as np

import tensorflow as tf

from tensorflow.keras.layers import Input, LSTM, RepeatVector

from tensorflow.keras.models import Model

# Generate sample data


# Assuming input sequences of length 10 with 5 features

seq_length = 10

num_features = 5

num_samples = 1000

# Generate random sequences

X_train = np.random.randn(num_samples, seq_length, num_features)

# Define LSTM Autoencoder

input_seq = Input(shape=(seq_length, num_features))

# Encoder

encoder = LSTM(32, activation='relu')(input_seq)

# Bottleneck

encoded = RepeatVector(seq_length)(encoder)

# Decoder

decoder = LSTM(32, activation='relu', return_sequences=True)(encoded)

# Output

output_seq = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(num_features))(decoder)

# Autoencoder model

autoencoder = Model(input_seq, output_seq)

autoencoder.compile(optimizer='adam', loss='mse')

# Train the model

autoencoder.fit(X_train, X_train, epochs=5, batch_size=32,


validation_split=0.2)

# Reconstruction

X_test = np.random.randn(5, seq_length, num_features) # Example test


data

reconstructed_seqs = autoencoder.predict(X_test)

# Print example reconstructed sequence

print("Original Sequence:")

print(X_test[0])

print("\nReconstructed Sequence:")

print(reconstructed_seqs[0])
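To quantify reconstruction quality beyond inspecting the printed sequences, the per-sequence mean squared error can be computed (a minimal sketch using the variables above):

# Mean squared reconstruction error per test sequence
mse = np.mean((X_test - reconstructed_seqs) ** 2, axis=(1, 2))
print('Per-sequence reconstruction MSE:', mse)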
OUTPUT:

RESULT:

Thus, the Python program implementing the LSTM-based autoencoder has been executed successfully.
EX.NO:11
IMAGE GENERATION USING GAN
DATE:

AIM:

To write a Python program for image generation using GAN.

ALGORITHM:

1. Start the program.


2. Import the necessary packages.
3. Load the data and conduct data preprocessing.
4. Create the generator network.
5. Define the loss function and optimize the generator.
6. Display the result.
7. Stop the program.

PROGRAM:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
# Load MNIST dataset
(train_images, _), (_, _) = tf.keras.datasets.mnist.load_data()
# Preprocess data
train_images = train_images.reshape(train_images.shape[0], 28, 28,
1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize to [-1, 1]
# Define generator model
def make_generator_model():
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(7*7*256, use_bias=False, input_shape=(100,)))
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Reshape((7, 7, 256)))
    assert model.output_shape == (None, 7, 7, 256)  # Note: None is the batch size
    model.add(tf.keras.layers.Conv2DTranspose(128, (5, 5), strides=(1, 1),
                                              padding='same', use_bias=False))
    assert model.output_shape == (None, 7, 7, 128)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Conv2DTranspose(64, (5, 5), strides=(2, 2),
                                              padding='same', use_bias=False))
    assert model.output_shape == (None, 14, 14, 64)
    model.add(tf.keras.layers.BatchNormalization())
    model.add(tf.keras.layers.LeakyReLU())
    model.add(tf.keras.layers.Conv2DTranspose(1, (5, 5), strides=(2, 2),
                                              padding='same', use_bias=False,
                                              activation='tanh'))
    assert model.output_shape == (None, 28, 28, 1)
    return model
# Generate an image using the generator
def generate_and_show_image(generator):
    noise = tf.random.normal([1, 100])
    generated_image = generator.predict(noise)[0, :, :, 0]
    plt.imshow(generated_image, cmap='gray')
    plt.axis('off')
    plt.show()
# Create and display a generated image
generator = make_generator_model()
generate_and_show_image(generator)
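The recorded program stops at an untrained generator, so the displayed image is noise. A minimal sketch of the matching discriminator and the standard generator loss, following the common DCGAN recipe (not part of the executed record), might look like:

def make_discriminator_model():
    # Convolutional classifier that outputs a single real/fake logit
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
                               input_shape=(28, 28, 1)),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'),
        tf.keras.layers.LeakyReLU(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1)])
    return model

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(fake_output):
    # the generator wants fakes to be classified as real (label 1)
    return cross_entropy(tf.ones_like(fake_output), fake_output)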
OUTPUT:

1/1 [====================] - 0s 130ms/step

(A 28x28 grayscale image produced by the untrained generator from random noise is displayed.)

RESULT:

Thus the above program for image generation using GAN has
been executed successfully.
EXP.NO: 12 TRAIN A DEEP LEARNING MODEL TO CLASSIFY A GIVEN IMAGE USING A PRE-TRAINED MODEL

DATE:

AIM :

To train a deep learning model to classify a given image using a pre-trained model.

ALGORITHM :

1. Start the program


2. Import the necessary packages
3. Load and preprocess the CIFAR-10 dataset
4. Load a pre-trained model (MobileNetV2) excluding the top classification
layers
5. Add custom classification layers on top
6. Freeze the layers of the pre-trained model
7. Compile the model
8. Train the model
9. Evaluate the model
10. Stop the program.

PROGRAM :
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Input
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
# Load and preprocess the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Normalize pixel values to the range [0, 1]
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
# Load a pre-trained model (MobileNetV2) excluding the top classification layers
base_model = MobileNetV2(weights='imagenet', include_top=False, input_shape=(32,
32, 3))
# Add custom classification layers on top
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x) # 10 classes in CIFAR-10
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the layers of the pre-trained model
for layer in base_model.layers:
    layer.trainable = False
# Compile the model
model.compile(optimizer='adam',loss='categorical_crossentropy',
metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)
OUTPUT :

RESULT :

Thus, the deep learning model to classify a given image using a pre-trained model was trained and executed successfully.
EXP NO:13 RECOMMENDATION SYSTEM FROM SALES DATA USING DEEP LEARNING

DATE:

AIM:
To write a Python program for a recommendation system from sales data using deep learning.

ALGORITHM:
1. Import the necessary libraries.
2. Load and preprocess the CIFAR-10 dataset.
3. Load a pre-trained model(MobileNetV2) excluding the top
classification layers.
4. Add custom classification layers on the top of the pre-trained model.
5. Freeze the layers of the pre-trained model.
6. Compile the model.
7. Train the model.

PROGRAM:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, Input
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
# Load and preprocess the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # Normalize pixel values to the range [0, 1]
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)
# Load a pre-trained model (MobileNetV2) excluding the top classification layers
base_model = MobileNetV2(weights='imagenet', include_top=False,
input_shape=(32, 32, 3))
# Add custom classification layers on top
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dense(256, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x) # 10 classes in CIFAR-10
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze the layers of the pre-trained model
for layer in base_model.layers:
    layer.trainable = False
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy',
metrics=['accuracy'])
# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
# Evaluate the model
loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)
OUTPUT:

RESULT:
Thus, the Python program for a recommendation system from sales data using deep learning has been executed successfully.
EX.NO:14
OBJECT DETECTION USING CNN
DATE:

AIM:
To write a Python program to implement object detection using CNN.

ALGORITHM:
1. Start the program.
2. Import the necessary libraries.
3. Load handwritten digits from MNIST.
4. Pick a random digit and give it a random color.
5. Pick a random position, place the digit on the canvas, and record its bounding box and class in the target mask.
6. Use the generated images to train the model.
7. Test the model by giving it an input image.
8. Plot the image using the imshow() function.
9. Stop the program.

PROGRAM:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import cv2
import matplotlib.pyplot as plt
(X_num, y_num), _ = tf.keras.datasets.mnist.load_data()
X_num = np.expand_dims(X_num, axis=-1).astype(np.float32) / 255.0
grid_size = 16 # image_size / mask_size

def make_numbers(X, y):
    for _ in range(3):
        # pick a random index
        idx = np.random.randint(len(X_num))
        # make the digit colorful
        number = X_num[idx] @ (np.random.rand(1, 3) + 0.1)
        number[number > 0.1] = np.clip(number[number > 0.1], 0.5, 0.8)
        # class of the digit
        kls = y_num[idx]
        # random position for the digit
        px, py = np.random.randint(0, 100), np.random.randint(0, 100)
        # mask cell the digit belongs to
        mx, my = (px + 14) // grid_size, (py + 14) // grid_size
        channels = y[my][mx]
        # prevent duplicates in the same cell
        if channels[0] > 0:
            continue
        channels[0] = 1.0
        channels[1] = px - (mx * grid_size)  # x1
        channels[2] = py - (my * grid_size)  # y1
        channels[3] = 28.0  # x2; in this demo every digit is 28 px wide
        channels[4] = 28.0  # y2; in this demo every digit is 28 px high
        channels[5 + kls] = 1.0
        # put the digit in X
        X[py:py+28, px:px+28] += number

def make_data(size=64):
    X = np.zeros((size, 128, 128, 3), dtype=np.float32)
    y = np.zeros((size, 8, 8, 15), dtype=np.float32)
    for i in range(size):
        make_numbers(X[i], y[i])
    X = np.clip(X, 0.0, 1.0)
    return X, y

def get_color_by_probability(p):
    if p < 0.3:
        return (1., 0., 0.)
    if p < 0.7:
        return (1., 1., 0.)
    return (0., 1., 0.)

def show_predict(X, y, threshold=0.1):
    X = X.copy()
    for mx in range(8):
        for my in range(8):
            channels = y[my][mx]
            prob, x1, y1, x2, y2 = channels[:5]
            # if prob < threshold we won't show anything
            if prob < threshold:
                continue
            color = get_color_by_probability(prob)
            # bounding box
            px, py = (mx * grid_size) + x1, (my * grid_size) + y1
            cv2.rectangle(X, (int(px), int(py)), (int(px + x2), int(py + y2)), color, 1)
            # label
            cv2.rectangle(X, (int(px), int(py - 10)), (int(px + 12), int(py)), color, -1)
            kls = np.argmax(channels[5:])
            cv2.putText(X, f'{kls}', (int(px + 2), int(py - 2)),
                        cv2.FONT_HERSHEY_PLAIN, 0.7, (0.0, 0.0, 0.0))
    plt.imshow(X)
# test
X, y = make_data(size=1)
show_predict(X[0], y[0])
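The record only generates data and visualizes the ground-truth boxes. As a hypothetical next step matching algorithm step 6, a small fully convolutional network can map the 128x128x3 image to the 8x8x15 target mask (the architecture and the plain MSE loss are illustrative; a real detector would use a composite objectness/box/class loss):

model = tf.keras.Sequential([
    layers.Conv2D(16, 3, padding='same', activation='relu',
                  input_shape=(128, 128, 3)),
    layers.MaxPooling2D(4),          # 128 -> 32
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(4),          # 32 -> 8
    layers.Conv2D(15, 1)])           # 8 x 8 x 15 prediction mask
model.compile(optimizer='adam', loss='mse')
X_train_demo, y_train_demo = make_data(size=64)
model.fit(X_train_demo, y_train_demo, epochs=2, verbose=0)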
OUTPUT:

Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s

(The plotted 128x128 image with ground-truth bounding boxes and digit labels is displayed.)

RESULT:
Thus, the Python program for the implementation of object detection using CNN has been executed successfully.
EXP.NO:15 IMPLEMENT ANY SIMPLE REINFORCEMENT ALGORITHM FOR AN NLP PROBLEM

DATE:

AIM:

To write a Python program to implement a simple reinforcement learning algorithm for an NLP problem.

ALGORITHM:
1. Start the program.

2. Import the necessary packages.

3. Define the learning rate, discount factor and exploration rate.

4. With probability epsilon, explore by choosing a random action; otherwise exploit by choosing the action with the highest Q-value.

5. If the state is new, choose a random action.

6. Apply the Q-learning update rule.

7. Represent each text with a simple bag-of-words.

8. Define the reward function.

9. Define the NLP problem (dummy data for demonstration).

10. Initialize the Q-learning agent and train it.

11. Test the agent.


PROGRAM:

import numpy as np

class QLearningAgent:

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration rate
        self.q_table = {}

    def choose_action(self, state):
        if np.random.uniform(0, 1) < self.epsilon:
            # Exploration: choose a random action
            return np.random.choice(self.actions)
        else:
            # Exploitation: choose the action with the highest Q-value
            if state not in self.q_table:
                # If the state is new, choose a random action
                return np.random.choice(self.actions)
            else:
                return max(self.q_table[state], key=self.q_table[state].get)

    def learn(self, state, action, reward, next_state):
        if state not in self.q_table:
            # initialize every action's Q-value, not just the taken one
            self.q_table[state] = {a: 0 for a in self.actions}
        if next_state not in self.q_table:
            self.q_table[next_state] = {a: 0 for a in self.actions}
        # Q-learning update rule
        self.q_table[state][action] += self.alpha * (
            reward + self.gamma * max(self.q_table[next_state].values())
            - self.q_table[state][action])

def extract_features(text):
    # Simple bag-of-words representation
    features = {}
    for word in text.split():
        features[word] = features.get(word, 0) + 1
    return features

def get_reward(sentiment, predicted_sentiment):
    # Define reward function
    if sentiment == predicted_sentiment:
        return 1   # Correct prediction
    else:
        return -1  # Incorrect prediction

# Define the NLP problem (dummy data for demonstration)
training_data = [
    ("I love this movie", "positive"),
    ("This movie is terrible", "negative"),
    ("The acting was great", "positive"),
    ("I hate this film", "negative")
]

# Initialize Q-learning agent
actions = ["positive", "negative"]
agent = QLearningAgent(actions)

# Train the agent
for text, sentiment in training_data:
    state = extract_features(text)
    action = agent.choose_action(str(state))
    reward = get_reward(sentiment, action)
    agent.learn(str(state), action, reward, str(state))

# Test the agent
test_data = [
    "I LOVE DRIVING",
    "I dislike this movie"
]

for text in test_data:
    state = extract_features(text)
    action = agent.choose_action(str(state))
    print(f"Text: '{text}', Predicted Sentiment: {action}")


OUTPUT:

Text: 'I LOVE DRIVING', Predicted Sentiment: positive
Text: 'I dislike this movie', Predicted Sentiment: negative

RESULT:
Thus, the Python program to implement a simple reinforcement algorithm for an NLP problem has been executed successfully.
