
NOIDA INSTITUTE OF ENGINEERING AND TECHNOLOGY, GREATER NOIDA-201306
(An Autonomous Institute)
Affiliated to Dr. A.P.J. Abdul Kalam Technical University, Uttar Pradesh, Lucknow
School of Computer Sciences & Engineering in Emerging Technologies

Department of CSE (AIML)

Session (2024 – 2025)

DEEP LEARNING
LAB
(ACSML0652)
(6th Semester)

Submitted To: MR. SUBHASH CHANDRA
Submitted By: Shaurya Bhati (Roll No: 2201331530170)

DEEP LEARNING LAB (ACSML0652)
INDEX
S.No PRACTICAL CONDUCTED DATE SIGN

1. Write a program to print the dimensions of a dataset.
2. Write a program to calculate accuracy values.
3. Write a program to build an Artificial Neural Network classifier.
4. Write a program to create a model to predict used car prices.
5. Write a program to visualize a Convolutional Neural Network.
6. Write a program to build predictive modelling with the Iris dataset.
7. Write a program to build a Convolutional Neural Network.
8. Program for multi-classification using the MNIST dataset.
9. Write a program to build a Cat vs Dog prediction model using transfer learning (VGG16).
10. Write a program for integer encoding using a Simple RNN.
11. Write a program to build embedding sentiment analysis using a Simple RNN.
12. Write a program for a logistic regression model (spam-ham).
13. Write a program to build a Long Short-Term Memory model.
14. Write a program to deploy an LSTM model with Flask.
15. Write a program to build a Gated Recurrent Unit model.
16. Write a program to deploy a GRU model with Flask.
17. Write a deep learning program for an auto-encoder.
18. Write a program for a deep RNN model.
19. Write a program to implement object detection using YOLO.
20. Write a program to implement an autoencoder using the MNIST dataset.
21. Write a program to implement text classification for spam mails.
22. Write a program to implement transfer learning feature extraction on data.
23. Write a program to implement a face detection method.
24. Write a program to build an Airplane vs Bird prediction model using transfer learning.
Experiment 1

Write a program to print the dimensions of a dataset

Code:
import pandas as pd

df = pd.read_csv("/content/sample_data/mnist_test.csv")

print(df.head()) #returns top 5 rows/tuples

print("Shape of the dataset",df.shape) #returns the shape (dimensions) of the dataset

print("Size of the dataset",df.size) #returns the total number of cells

Output:
Experiment 2

Write a program to calculate accuracy values.

Code:
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score

# Loading the dataset


X, Y = load_iris(return_X_y = True)

# Splitting the dataset in training and test data


X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3, random_state = 0)

# Training the model using the Support Vector Classification class of sklearn
svc = SVC()
svc.fit(X_train, Y_train)

# Computing the accuracy score of the model


Y_pred = svc.predict(X_test)
score = accuracy_score(Y_test, Y_pred)
print("Accuracy Score :",score)

Output:
Experiment 3

Write a program to Build an Artificial Neural Network Classifier

Code:

import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score

# Load the Iris dataset


iris = load_iris()
X = iris.data
y = iris.target

#preprocess the data


scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

#split the data into training and testing


X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)

# Build the neural network model (Iris has 3 classes, so the output layer
# needs 3 softmax units with sparse categorical cross-entropy)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(3, activation='softmax')
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.2)

import numpy as np

# Predict classes for test data
y_pred = model.predict(X_test)
y_pred_classes = np.argmax(y_pred, axis=1)

# Compare predictions with actual labels
print("Predictions:", y_pred_classes)
print("Actual Labels:", y_test)
print("Test Accuracy:", accuracy_score(y_test, y_pred_classes))

Output:
Experiment 4

Write a program to Visualize Convolutional Neural Network

Code:

from keras.applications.vgg16 import VGG16


model = VGG16()

import pandas as pd
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array
from tensorflow.keras.models import Model
from matplotlib import pyplot
from numpy import expand_dims

import warnings
warnings.filterwarnings('ignore')

model.summary()

from keras.utils import plot_model


plot_model(model)

for i in range(len(model.layers)):
    if 'conv' not in model.layers[i].name:
        continue
    filters, biases = model.layers[i].get_weights()
    print("layer number", i, model.layers[i].name, filters.shape)

filters, bias = model.layers[1].get_weights()


f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)

import matplotlib
from matplotlib import pyplot

n_filters = 6
ix = 1
fig = pyplot.figure(figsize=(15, 10))
for i in range(n_filters):
    f = filters[:, :, :, i]
    for j in range(3):
        pyplot.subplot(n_filters, 3, ix)
        pyplot.imshow(f[:, :, j], cmap='gray')
        ix += 1

pyplot.show()  # Fig: 1

for i in range(len(model.layers)):
    layer = model.layers[i]
    if 'conv' not in layer.name:
        continue
    print(i, layer.name, layer.output.shape)

model = Model(inputs=model.inputs, outputs=model.layers[1].output)

image = load_img("cat.jpg", target_size=(224, 224))
image = img_to_array(image)
image = expand_dims(image, axis=0)
image = preprocess_input(image)

# calculating the feature map of the first conv layer
features = model.predict(image)
fig = pyplot.figure(figsize=(20, 15))
for i in range(1, features.shape[3] + 1):
    pyplot.subplot(8, 8, i)
    pyplot.imshow(features[0, :, :, i - 1], cmap='gray')

pyplot.show()  # Fig: 2
Output:

Fig: 1

Fig: 2
Experiment 5

Write a program to Build predictive modelling with Iris Dataset.

Code:
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report

from sklearn.datasets import load_iris


iris_data = load_iris()

df = pd.DataFrame(data=iris_data.data, columns=iris_data.feature_names)
df['target'] = iris_data.target

X = df.drop('target', axis=1)
y = df['target']

# Split the data into training and testing sets


X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Scale the features using StandardScaler


scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Create an SVC classifier


classifier = SVC()
# Train the model
classifier.fit(X_train, y_train)

# Make predictions on the test set


y_pred = classifier.predict(X_test)

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy}")

# Generate classification report


report = classification_report(y_test, y_pred)
print(f"Classification Report:\n{report}")
Output:
Experiment 6

Write a program to build a Convolutional Neural Network

Code:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

#Normalize pixel values to be between 0 and 1.


train_images = train_images.astype('float32') / 255.0
test_images = test_images.astype('float32') / 255.0

#Reshape the images into the format expected by the neural network.
train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))

#Encode the labels using one-hot encoding.


from tensorflow.keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

#Compile the Model: Specify the loss function, optimizer, and metrics for model compilation.
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

#Train the Model: Train the model using the training data.
model.fit(train_images, train_labels, epochs=10, batch_size=64, validation_split=0.2)

#Evaluate the Model: Evaluate the trained model on the test data.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(f'Test accuracy: {test_acc}')
#Make Predictions: Use the trained model to make predictions on new data.
predictions = model.predict(test_images)

# Assuming the model has already been trained and compiled as above

# Make predictions on the test set


predictions = model.predict(test_images)

# Get the index of the class with the highest probability for each prediction
predicted_labels = predictions.argmax(axis=1)

# Display the first few predictions


for i in range(5):
    print(f"Actual Label: {test_labels[i].argmax()}, Predicted Label: {predicted_labels[i]}")

# Optionally, you can visualize the images and their predictions


import matplotlib.pyplot as plt

def plot_image(i, predictions_array, true_label, img):
    predictions_array, true_label, img = predictions_array[i], true_label[i].argmax(), img[i].reshape(28, 28)
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])

    plt.imshow(img, cmap=plt.cm.binary)

    predicted_label = predictions_array.argmax()
    color = 'blue' if predicted_label == true_label else 'red'

    plt.xlabel(f"Predicted: {predicted_label} ({100 * tf.reduce_max(predictions_array):.2f}%), Actual: {true_label}", color=color)

# Visualize the predictions


num_rows = 5
num_cols = 3
num_images = num_rows * num_cols
plt.figure(figsize=(2 * 2 * num_cols, 2 * num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2 * num_cols, 2 * i + 1)
    plot_image(i, predictions, test_labels, test_images)
plt.show()
Output:
Experiment 7

Program for Multi-Classification using MNIST Dataset

Code:
import tensorflow as tf
import keras
from keras import Sequential
from keras.layers import Dense, Flatten

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)

import matplotlib.pyplot as plt

x_train = x_train/255
x_test = x_test/255

model=Sequential()
model.add(Flatten(input_shape = (28,28)))
model.add(Dense(128, activation='relu'))
model.add(Dense(32,activation='relu'))
model.add(Dense(10,activation='softmax'))
model.summary()

model.compile(loss='sparse_categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])

history = model.fit(x_train, y_train, epochs=25, validation_split = 0.2)


y_prob = model.predict(x_test)
y_pred = y_prob.argmax(axis = 1)

import numpy as np

predicted_probabilities = model.predict(x_test)
predicted_classes = np.argmax(predicted_probabilities, axis=1)

correct_indices = np.nonzero(predicted_classes == y_test)[0]
incorrect_indices = np.nonzero(predicted_classes != y_test)[0]

plt.figure()
for i, correct in enumerate(correct_indices[:9]):
    plt.subplot(3, 3, i + 1)
    plt.imshow(x_test[correct].reshape(28, 28), cmap='gray', interpolation='none')
    plt.title("Predicted {}, Class {}".format(predicted_classes[correct], y_test[correct]))

plt.tight_layout()

plt.figure()
for i, incorrect in enumerate(incorrect_indices[:9]):
    plt.subplot(3, 3, i + 1)
    plt.imshow(x_test[incorrect].reshape(28, 28), cmap='gray', interpolation='none')
    plt.title("Predicted {}, Class {}".format(predicted_classes[incorrect], y_test[incorrect]))

plt.tight_layout()
Output:
Experiment 8

Write a program to build a Cat vs Dog prediction model using transfer learning (VGG16)

Code:
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!kaggle datasets download -d salader/dogs-vs-cats

import zipfile
zip_ref = zipfile.ZipFile('/content/dogs-vs-cats.zip', 'r')
zip_ref.extractall('/content')
zip_ref.close()

import tensorflow as tf
import keras
from keras import Sequential
from keras.layers import Dense, Flatten
from keras.applications.vgg16 import VGG16

conv_base = VGG16(
weights = 'imagenet',
include_top = False,
input_shape = (150, 150, 3)
)
conv_base.summary()

model=Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(1,activation='sigmoid'))
model.summary()

conv_base.trainable = False
model.summary()

# generators
train_ds = keras.utils.image_dataset_from_directory(
directory = '/content/train',
labels = 'inferred',
label_mode = 'int',
batch_size=32,
image_size=(150,150)
)

validation_ds = keras.utils.image_dataset_from_directory(
directory = '/content/test',
labels = 'inferred',
label_mode = 'int',
batch_size=32,
image_size=(150,150)
)

#Normalize
def process(image, label):
    image = tf.cast(image / 255, tf.float32)
    return image, label

train_ds = train_ds.map(process)
validation_ds = validation_ds.map(process)
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
history=model.fit(train_ds,epochs=10, validation_data=validation_ds)

import matplotlib.pyplot as plt
import cv2
import numpy as np

test_img = cv2.imread('/content/Dog.png')
plt.imshow(test_img)

test_img.shape

test_img = cv2.resize(test_img, (150, 150))
test_input = test_img.reshape((1, 150, 150, 3)) / 255.0  # normalize to match the training-time scaling
model.predict(test_input)
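The model emits a single sigmoid probability. Since image_dataset_from_directory infers integer labels from the alphabetical order of the class folders (so, presumably, cats = 0 and dogs = 1 here), a minimal sketch to turn that probability into a readable label:

# Sketch: map the sigmoid output to a class name, assuming
# label 0 = cat and 1 = dog (alphabetical folder order)
pred = model.predict(test_input)[0][0]
print("Dog" if pred > 0.5 else "Cat", f"(P(dog) = {pred:.2f})")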

Output:
Experiment 9

Write a program for integer encoding using a Simple RNN.

Code:
import numpy as np
docs = [
    "go india",
    "india india",
    "hip hip hurray",
    "jeetega bhai jeetega india jeetega",
    "bharat mata ki jai",
    "kholi kholi",
    "sachin sachin",
    "dhoni dhoni",
    "modi ji ki jai",
    "inqualab jindabad"
]

from keras.preprocessing.text import Tokenizer


tokenizer= Tokenizer(oov_token="<nothing>")
tokenizer.fit_on_texts(docs)
tokenizer.word_index

tokenizer.word_counts

tokenizer.document_count

sequences = tokenizer.texts_to_sequences(docs)
sequences

from keras.utils import pad_sequences


sequences = pad_sequences(sequences, padding='post')
sequences

from keras.datasets import imdb


from keras import Sequential
from keras.layers import Dense, SimpleRNN, Embedding, Flatten

(x_train, y_train), (x_test, y_test) = imdb.load_data()
x_train = pad_sequences(x_train, padding='post', maxlen=50)
x_test = pad_sequences(x_test, padding='post', maxlen=50)
x_train[0]

model = Sequential()

model.add(SimpleRNN(32, input_shape=(50,1),return_sequences=False))
model.add(Dense(1,activation='sigmoid'))

model.summary()

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])


model.fit(x_train,y_train,epochs=5,validation_data=(x_test,y_test))

# Evaluate model on test set


loss, accuracy = model.evaluate(x_test, y_test)
print("Test Loss:", loss)
print("Test Accuracy:", accuracy)

Output:
Experiment 10

Write a program to build embedding sentiment analysis using a Simple RNN.

Code:

docs = [
    "go india",
    "india india",
    "hip hip hurray",
    "jeetega bhai jeetega india jeetega",
    "bharat mata ki jai",
    "kholi kholi",
    "sachin sachin",
    "dhoni dhoni",
    "modi ji ki jai",
    "inqualab jindabad"
]

from keras.datasets import imdb


from keras import Sequential
from keras.layers import Dense, SimpleRNN, Embedding, Flatten
from keras.preprocessing.text import Tokenizer
tokenizer= Tokenizer(oov_token="<nothing>")
tokenizer.fit_on_texts(docs)
tokenizer.word_index

tokenizer.word_counts

tokenizer.document_count

sequences = tokenizer.texts_to_sequences(docs)
sequences

(x_train, y_train), (x_test, y_test) = imdb.load_data()


from keras.utils import pad_sequences
x_train = pad_sequences(x_train, padding='post',maxlen=50)
x_test = pad_sequences(x_test,padding='post', maxlen=50)
model = Sequential()
model.add(Embedding(100000, output_dim=2, input_length=50))
model.add(SimpleRNN(32, return_sequences=False))
model.add(Dense(1, activation='sigmoid'))

model.summary()

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Texts to score; they must be encoded with the same tokenizer fitted above
new_texts = ["go india", "dhoni dhoni"]
new_sequences = tokenizer.texts_to_sequences(new_texts)
X_new = pad_sequences(new_sequences, maxlen=50)

predictions = model.predict(X_new)

for text, pred in zip(new_texts, predictions):
    sentiment = "Positive" if pred > 0.5 else "Negative"
    print(f"Text: {text} - Sentiment: {sentiment} - Probability: {pred[0]:.4f}")

Output:
Experiment 11

Write a program for Logistic regression model (Spam-ham)

Code:
# Reading Data
import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/mohitgupta-omg/Kaggle-SMS-Spam-Collection-Dataset-/master/spam.csv', encoding='latin-1')
data.head()

data.drop(['Unnamed: 2','Unnamed: 3','Unnamed: 4'],axis=1, inplace=True)


data.columns = ['label', 'text']
data.head()

data.isna().sum()

import nltk
nltk.download('all')

text = list(data['text'])

import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
corpus = []
for i in range(len(text)):
    r = re.sub('[^a-zA-Z]', ' ', text[i])  # keep letters only
    r = r.lower()
    r = r.split()
    r = [word for word in r if word not in stopwords.words('english')]
    r = ' '.join(r)
    corpus.append(r)

data['text'] = corpus
data.head()
X=data['text']
y=data['label']
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.33,random_state=123)
print("Training Data:",X_train.shape)
print("Testing Data:",X_test.shape)

from sklearn.feature_extraction.text import CountVectorizer


cv=CountVectorizer()
X_train_cv=cv.fit_transform(X_train)
X_train_cv.shape

# Training the Logistic Regression model


from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()

lr.fit(X_train_cv, y_train)
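The listing stops after fitting. A short evaluation sketch, assuming the same vectorizer and split as above, to report accuracy on the held-out test set:

# Sketch: evaluate on the test split; X_test must go through the
# already-fitted CountVectorizer (transform, not fit_transform)
from sklearn.metrics import accuracy_score
X_test_cv = cv.transform(X_test)
y_pred = lr.predict(X_test_cv)
print("Test Accuracy:", accuracy_score(y_test, y_pred))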

Output:
Experiment 12

Write a program to build a Long Short-Term Memory model

Code:
faqs = """About the Program
What is the course fee for Data Science Mentorship Program (DSMP 2023)
The course follows a monthly subscription model where you have to make monthly payments of Rs 799/month.
What is the total duration of the course?
The total duration of the course is 7 months. So the total course fee becomes 799*7 = Rs 5600(approx.)
What is the syllabus of the mentorship program?
We will be covering the following modules:
Python Fundamentals
Python libraries for Data Science
Data Analysis
SQL for Data Science
Maths for Machine Learning
ML Algorithms
Practical ML
MLOPs
Case studies
You can check the detailed syllabus here - https://learnwith.campusx.in/courses/CampusX-Data-Science-Mentorship-Program-637339afe4b0615a1bbed390
Will Deep Learning and NLP be a part of this program?
No, NLP and Deep Learning both are not a part of this program’s curriculum.
What if I miss a live session? Will I get a recording of the session?
Yes all our sessions are recorded, so even if you miss a session you can go back and watch the recording.
Where can I find the class schedule?
Checkout this google sheet to see month by month time table of the course -
https://docs.google.com/spreadsheets/d/16OoTax_A6ORAeCg4emgexhqqPv3noQPYKU7RJ6ArOzk/edit?usp=sharing.
What is the time duration of all the live sessions?
Roughly, all the sessions last 2 hours.
What is the language spoken by the instructor during the sessions?
Hinglish
How will I be informed about the upcoming class?
You will get a mail from our side before every paid session once you become a paid user.
Can I do this course if I am from a non-tech background?
Yes, absolutely.
I am late, can I join the program in the middle?
Absolutely, you can join the program anytime.
If I join/pay in the middle, will I be able to see all the past lectures?
Yes, once you make the payment you will be able to see all the past content in your dashboard.
Where do I have to submit the task?
You don’t have to submit the task. We will provide you with the solutions, you have to self evaluate the task
yourself.
Will we do case studies in the program?
Yes.
Where can we contact you?
You can mail us at [email protected]
Payment/Registration related questions
Where do we have to make our payments? Your YouTube channel or website?
You have to make all your monthly payments on our website. Here is the link for our website -
https://learnwith.campusx.in/
Can we pay the entire amount of Rs 5600 all at once?
Unfortunately no, the program follows a monthly subscription model.
What is the validity of monthly subscription? Suppose if I pay on 15th Jan, then do I have to pay again on 1st
Feb or 15th Feb
15th Feb. The validity period is 30 days from the day you make the payment. So essentially you can join
anytime you don’t have to wait for a month to end.
What if I don’t like the course after making the payment. What is the refund policy?
You get a 7 days refund period from the day you have made the payment.
I am living outside India and I am not able to make the payment on the website, what should I do?
You have to contact us by sending a mail at [email protected]
Post registration queries
Till when can I view the paid videos on the website?
This one is tricky, so read carefully. You can watch the videos till your subscription is valid. Suppose you have
purchased subscription on 21st Jan, you will be able to watch all the past paid sessions in the period of 21st Jan
to 20th Feb. But after 21st Feb you will have to purchase the subscription again.
But once the course is over and you have paid us Rs 5600(or 7 installments of Rs 799) you will be able to watch
the paid sessions till Aug 2024.
Why lifetime validity is not provided?
Because of the low course fee.
Where can I reach out in case of a doubt after the session?
You will have to fill a google form provided in your dashboard and our team will contact you for a 1 on 1 doubt
clearance session
If I join the program late, can I still ask past week doubts?
Yes, just select past week doubt in the doubt clearance google form.
I am living outside India and I am not able to make the payment on the website, what should I do?
You have to contact us by sending a mail at [email protected]
Certificate and Placement Assistance related queries
What is the criteria to get the certificate?
There are 2 criterias:
You have to pay the entire fee of Rs 5600
You have to attempt all the course assessments.
I am joining late. How can I pay payment of the earlier months?
You will get a link to pay fee of earlier months in your dashboard once you pay for the current month.
I have read that Placement assistance is a part of this program. What comes under Placement assistance?
This is to clarify that Placement assistance does not mean Placement guarantee. So we dont guarantee you any
jobs or for that matter even interview calls. So if you are planning to join this course just for placements, I am
afraid you will be disappointed. Here is what comes under placement assistance
Portfolio Building sessions
Soft skill sessions
Sessions with industry mentors
Discussion on Job hunting strategies
"""

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer()
tokenizer.fit_on_texts([faqs])
len(tokenizer.word_index)

input_sequences = []
for sentence in faqs.split('\n'):
    tokenized_sentence = tokenizer.texts_to_sequences([sentence])[0]
    for i in range(1, len(tokenized_sentence)):
        input_sequences.append(tokenized_sentence[:i+1])
print(input_sequences)

max_len = max([len(x) for x in input_sequences])


print(max_len)

from tensorflow.keras.preprocessing.sequence import pad_sequences

padded_input_sequences = pad_sequences(input_sequences, maxlen=max_len, padding='pre')
print(padded_input_sequences)

x = padded_input_sequences[:,:-1]

y = padded_input_sequences[:,-1]

from tensorflow.keras.models import Sequential


from tensorflow.keras.layers import Embedding, LSTM, Dense
model = Sequential()
model.add(Embedding(283, 100, input_length = 56))
model.add(LSTM(150))

model.add(Dense(283, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

from tensorflow.keras.utils import to_categorical


y = to_categorical(y, num_classes = 283)

model.fit(x,y,epochs = 100)

import numpy as np
import time
text = "No, NLP and Deep"

for i in range(10):
    # tokenize
    token_text = tokenizer.texts_to_sequences([text])[0]
    # padding
    padded_token_text = pad_sequences([token_text], maxlen=56, padding='pre')
    # predict the most likely next word
    pos = np.argmax(model.predict(padded_token_text))
    for word, index in tokenizer.word_index.items():
        if index == pos:
            text = text + " " + word
            print(text)
            time.sleep(2)
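The sizes 283 and 56 are hardcoded above. A minimal sketch, assuming the same faqs corpus, that derives them from the fitted tokenizer and the padded sequences instead:

# Sketch: derive the model sizes rather than hardcoding literals
vocab_size = len(tokenizer.word_index) + 1   # 283 for this corpus
seq_len = max_len - 1                        # 56: inputs drop the target word
# model.add(Embedding(vocab_size, 100, input_length=seq_len)), etc.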
Output:
Experiment 13

Write a program to Deploy LSTM Model with Flask

Code:
Index.html

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LSTM Model Deployment</title>
</head>
<body>
<h1>LSTM Model Deployment</h1>
<form action="/process" method="POST">
<label for="d1">Enter text:</label><br>
<textarea id="d1" name="d1" rows="4" cols="50"></textarea><br>
<input type="submit" value="Submit">
</form>
</body>
</html>

Result.html

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Result</title>
</head>
<body>
<h1>Result</h1>
<p>{{ result }}</p>
</body>
</html>

App.py

from flask import Flask, render_template, request
from lstm_model import predict_seq

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/process', methods=['POST'])
def process():
    text = request.form['d1']
    result = predict_seq(text)
    return render_template('result.html', result=result)

if __name__ == '__main__':
    app.run(debug=True)

lstm_model.py

faqs= """About the Program


What is the course fee for Data Science Mentorship Program (DSMP 2023)
The course follows a monthly subscription model where you have to make monthly payments
of Rs 799/month.
What is the total duration of the course?
The total duration of the course is 7 months. So the total course fee becomes 799*7 = Rs
5600(approx.)
What is the syllabus of the mentorship program?
We will be covering the following modules:
Python Fundamentals
Python libraries for Data Science
Data Analysis
SQL for Data Science
Maths for Machine Learning
ML Algorithms
Practical ML
MLOPs
Case studies
You can check the detailed syllabus here - https://learnwith.campusx.in/courses/CampusX-Data-Science-Mentorship-Program-637339afe4b0615a1bbed390
Will Deep Learning and NLP be a part of this program?
No, NLP and Deep Learning both are not a part of this program’s curriculum.
What if I miss a live session? Will I get a recording of the session?
Yes all our sessions are recorded, so even if you miss a session you can go back and watch
the recording.
Where can I find the class schedule?
Checkout this google sheet to see month by month time table of the course -
https://docs.google.com/spreadsheets/d/16OoTax_A6ORAeCg4emgexhqqPv3noQPYKU7RJ6ArOzk/edit?usp=sharing.
What is the time duration of all the live sessions?
Roughly, all the sessions last 2 hours.
What is the language spoken by the instructor during the sessions?
Hinglish
How will I be informed about the upcoming class?
You will get a mail from our side before every paid session once you become a paid user.
Can I do this course if I am from a non-tech background?
Yes, absolutely.
I am late, can I join the program in the middle?
Absolutely, you can join the program anytime.
If I join/pay in the middle, will I be able to see all the past lectures?
Yes, once you make the payment you will be able to see all the past content in your
dashboard.
Where do I have to submit the task?
You don’t have to submit the task. We will provide you with the solutions, you have to self
evaluate the task yourself.
Will we do case studies in the program?
Yes.
Where can we contact you?
You can mail us at [email protected]
Payment/Registration related questions
Where do we have to make our payments? Your YouTube channel or website?
You have to make all your monthly payments on our website. Here is the link for our website -
https://learnwith.campusx.in/
Can we pay the entire amount of Rs 5600 all at once?
Unfortunately no, the program follows a monthly subscription model.
What is the validity of monthly subscription? Suppose if I pay on 15th Jan, then do I have to
pay again on 1st Feb or 15th Feb
15th Feb. The validity period is 30 days from the day you make the payment. So essentially
you can join anytime you don’t have to wait for a month to end.
What if I don’t like the course after making the payment. What is the refund policy?
You get a 7 days refund period from the day you have made the payment.
I am living outside India and I am not able to make the payment on the website, what should I
do?
You have to contact us by sending a mail at [email protected]
Post registration queries
Till when can I view the paid videos on the website?
This one is tricky, so read carefully. You can watch the videos till your subscription is valid.
Suppose you have purchased subscription on 21st Jan, you will be able to watch all the past
paid sessions in the period of 21st Jan to 20th Feb. But after 21st Feb you will have to
purchase the subscription again.
But once the course is over and you have paid us Rs 5600(or 7 installments of Rs 799) you
will be able to watch the paid sessions till Aug 2024.
Why lifetime validity is not provided?
Because of the low course fee.
Where can I reach out in case of a doubt after the session?
You will have to fill a google form provided in your dashboard and our team will contact you
for a 1 on 1 doubt clearance session
If I join the program late, can I still ask past week doubts?
Yes, just select past week doubt in the doubt clearance google form.
I am living outside India and I am not able to make the payment on the website, what should I
do?
You have to contact us by sending a mail at [email protected]
Certificate and Placement Assistance related queries
What is the criteria to get the certificate?
There are 2 criterias:
You have to pay the entire fee of Rs 5600
You have to attempt all the course assessments.
I am joining late. How can I pay payment of the earlier months?
You will get a link to pay fee of earlier months in your dashboard once you pay for the current
month.
I have read that Placement assistance is a part of this program. What comes under
Placement assistance?
This is to clarify that Placement assistance does not mean Placement guarantee. So we dont
guarantee you any jobs or for that matter even interview calls. So if you are planning to join
this course just for placements, I am afraid you will be disappointed. Here is what comes
under placement assistance
Portfolio Building sessions
Soft skill sessions
Sessions with industry mentors
Discussion on Job hunting strategies
"""
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
tokenizer.fit_on_texts([faqs])

input_sequences = []
for sentence in faqs.split("\n"):
    tokenized_sentence = tokenizer.texts_to_sequences([sentence])[0]
    for i in range(1, len(tokenized_sentence)):
        input_sequences.append(tokenized_sentence[:i+1])

max_len = max([len(x) for x in input_sequences])

from tensorflow.keras.preprocessing.sequence import pad_sequences
padded_input_sequences = pad_sequences(input_sequences, maxlen=max_len, padding='pre')
x = padded_input_sequences[:, :-1]
y = padded_input_sequences[:, -1]

from tensorflow.keras.utils import to_categorical
y = to_categorical(y, num_classes=283)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(283, 100, input_length=56))
model.add(LSTM(150))
model.add(Dense(283, activation='softmax'))
model.compile(loss="categorical_crossentropy", optimizer='adam', metrics=['accuracy'])

model.fit(x, y, epochs=10)

import numpy as np
_, accuracy = model.evaluate(x, y)

import time
def predict_seq(text):
    for i in range(10):
        token_text = tokenizer.texts_to_sequences([text])[0]
        padded_token_text = pad_sequences([token_text], maxlen=56, padding='pre')
        pos = np.argmax(model.predict(padded_token_text))
        for word, index in tokenizer.word_index.items():
            if index == pos:
                text = text + " " + word
        time.sleep(2)
    return text
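Training the model at import time makes every Flask startup slow. A common alternative, shown here only as a sketch with illustrative file names, is to train once, persist the model and tokenizer, and load them in the Flask app:

# Sketch: train once offline, then persist the model and tokenizer so the
# Flask app can load them instead of retraining at import time.
# File names below are illustrative assumptions.
import pickle

model.save("lstm_faq_model.keras")        # saves architecture + weights
with open("tokenizer.pkl", "wb") as f:
    pickle.dump(tokenizer, f)

# In the Flask app:
# from tensorflow.keras.models import load_model
# model = load_model("lstm_faq_model.keras")
# tokenizer = pickle.load(open("tokenizer.pkl", "rb"))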

Output:
Experiment 14

Write a program to build a Gated Recurrent Unit model.

Code:
faqs = """About the Program
What is the course fee for Data Science Mentorship Program (DSMP 2023)
The course follows a monthly subscription model where you have to make monthly payments of Rs 799/month.
What is the total duration of the course?
The total duration of the course is 7 months. So the total course fee becomes 799*7 = Rs 5600(approx.)
What is the syllabus of the mentorship program?
We will be covering the following modules:
Python Fundamentals
Python libraries for Data Science
Data Analysis
SQL for Data Science
Maths for Machine Learning
ML Algorithms
Practical ML
MLOPs
Case studies
You can check the detailed syllabus here - https://learnwith.campusx.in/courses/CampusX-Data-Science-Mentorship-Program-637339afe4b0615a1bbed390
Will Deep Learning and NLP be a part of this program?
No, NLP and Deep Learning both are not a part of this program’s curriculum.
What if I miss a live session? Will I get a recording of the session?
Yes all our sessions are recorded, so even if you miss a session you can go back and watch the recording.
Where can I find the class schedule?
Checkout this google sheet to see month by month time table of the course -
https://docs.google.com/spreadsheets/d/16OoTax_A6ORAeCg4emgexhqqPv3noQPYKU7RJ6ArOzk/edit?usp=sharing.
What is the time duration of all the live sessions?
Roughly, all the sessions last 2 hours.
What is the language spoken by the instructor during the sessions?
Hinglish
How will I be informed about the upcoming class?
You will get a mail from our side before every paid session once you become a paid user.
Can I do this course if I am from a non-tech background?
Yes, absolutely.
I am late, can I join the program in the middle?
Absolutely, you can join the program anytime.
If I join/pay in the middle, will I be able to see all the past lectures?
Yes, once you make the payment you will be able to see all the past content in your dashboard.
Where do I have to submit the task?
You don’t have to submit the task. We will provide you with the solutions, you have to self evaluate the task
yourself.
Will we do case studies in the program?
Yes.
Where can we contact you?
You can mail us at [email protected]
Payment/Registration related questions
Where do we have to make our payments? Your YouTube channel or website?
You have to make all your monthly payments on our website. Here is the link for our website -
https://learnwith.campusx.in/
Can we pay the entire amount of Rs 5600 all at once?
Unfortunately no, the program follows a monthly subscription model.
What is the validity of monthly subscription? Suppose if I pay on 15th Jan, then do I have to pay again on 1st
Feb or 15th Feb
15th Feb. The validity period is 30 days from the day you make the payment. So essentially you can join
anytime you don’t have to wait for a month to end.
What if I don’t like the course after making the payment. What is the refund policy?
You get a 7 days refund period from the day you have made the payment.
I am living outside India and I am not able to make the payment on the website, what should I do?
You have to contact us by sending a mail at [email protected]
Post registration queries
Till when can I view the paid videos on the website?
This one is tricky, so read carefully. You can watch the videos till your subscription is valid. Suppose you have
purchased subscription on 21st Jan, you will be able to watch all the past paid sessions in the period of 21st Jan
to 20th Feb. But after 21st Feb you will have to purchase the subscription again.
But once the course is over and you have paid us Rs 5600(or 7 installments of Rs 799) you will be able to watch
the paid sessions till Aug 2024.
Why lifetime validity is not provided?
Because of the low course fee.
Where can I reach out in case of a doubt after the session?
You will have to fill a google form provided in your dashboard and our team will contact you for a 1 on 1 doubt
clearance session
If I join the program late, can I still ask past week doubts?
Yes, just select past week doubt in the doubt clearance google form.
I am living outside India and I am not able to make the payment on the website, what should I do?
You have to contact us by sending a mail at [email protected]
Certificate and Placement Assistance related queries
What is the criteria to get the certificate?
There are 2 criterias:
You have to pay the entire fee of Rs 5600
You have to attempt all the course assessments.
I am joining late. How can I pay payment of the earlier months?
You will get a link to pay fee of earlier months in your dashboard once you pay for the current month.
I have read that Placement assistance is a part of this program. What comes under Placement assistance?
This is to clarify that Placement assistance does not mean Placement guarantee. So we dont guarantee you any
jobs or for that matter even interview calls. So if you are planning to join this course just for placements, I am
afraid you will be disappointed. Here is what comes under placement assistance
Portfolio Building sessions
Soft skill sessions
Sessions with industry mentors
Discussion on Job hunting strategies
"""

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer()
tokenizer.fit_on_texts([faqs])
len(tokenizer.word_index)

input_sequences = []
for sentence in faqs.split('\n'):
    tokenized_sentence = tokenizer.texts_to_sequences([sentence])[0]
    for i in range(1, len(tokenized_sentence)):
        input_sequences.append(tokenized_sentence[:i+1])
print(input_sequences)

max_len = max([len(x) for x in input_sequences])


print(max_len)

from tensorflow.keras.preprocessing.sequence import pad_sequences

padded_input_sequences = pad_sequences(input_sequences, maxlen=max_len, padding='pre')
print(padded_input_sequences)

x = padded_input_sequences[:,:-1]

y = padded_input_sequences[:,-1]

from tensorflow.keras.models import Sequential


from tensorflow.keras.layers import Embedding, GRU, Dense

model = Sequential()
model.add(Embedding(283, 100, input_length=56))
model.add(GRU(150))
model.add(Dense(283, activation='softmax'))  # softmax over the vocabulary for next-word prediction

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])


model.summary()

from tensorflow.keras.utils import to_categorical


y = to_categorical(y, num_classes = 283)

model.fit(x,y,epochs = 100)

import numpy as np
import time
text = "No, NLP and Deep"

for i in range(10):
    # tokenize
    token_text = tokenizer.texts_to_sequences([text])[0]
    # padding
    padded_token_text = pad_sequences([token_text], maxlen=56, padding='pre')
    # predict the most likely next word
    pos = np.argmax(model.predict(padded_token_text))
    for word, index in tokenizer.word_index.items():
        if index == pos:
            text = text + " " + word
            print(text)
            time.sleep(2)
Output:
Experiment 15

Write a program to Deploy GRU Model with Flask

Code:
Index.html

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>GRU Model Deployment</title>
</head>
<body>
<h1>GRU Model Deployment</h1>
<form action="/process" method="POST">
<label for="d1">Enter text:</label><br>
<textarea id="d1" name="d1" rows="4" cols="50"></textarea><br>
<input type="submit" value="Submit">
</form>
</body>
</html>

Result.html

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Result</title>
</head>
<body>
<h1>Result</h1>
<p>{{ result }}</p>
</body>
</html>

App.py

from flask import Flask, render_template, request
from GRU_model import predict_seq

app = Flask(__name__)

@app.route('/')
def index():
    return render_template('index.html')

@app.route('/process', methods=['POST'])
def process():
    text = request.form['d1']
    result = predict_seq(text)
    return render_template('result.html', result=result)

if __name__ == '__main__':
    app.run(debug=True)

GRU_model.py

faqs= """About the Program


What is the course fee for Data Science Mentorship Program (DSMP 2023)
The course follows a monthly subscription model where you have to make monthly payments
of Rs 799/month.
What is the total duration of the course?
The total duration of the course is 7 months. So the total course fee becomes 799*7 = Rs
5600(approx.)
What is the syllabus of the mentorship program?
We will be covering the following modules:
Python Fundamentals
Python libraries for Data Science
Data Analysis
SQL for Data Science
Maths for Machine Learning
ML Algorithms
Practical ML
MLOPs
Case studies
You can check the detailed syllabus here - https://learnwith.campusx.in/courses/CampusX-Data-Science-Mentorship-Program-637339afe4b0615a1bbed390
Will Deep Learning and NLP be a part of this program?
No, NLP and Deep Learning both are not a part of this program’s curriculum.
What if I miss a live session? Will I get a recording of the session?
Yes all our sessions are recorded, so even if you miss a session you can go back and watch
the recording.
Where can I find the class schedule?
Checkout this google sheet to see month by month time table of the course -
https://docs.google.com/spreadsheets/d/16OoTax_A6ORAeCg4emgexhqqPv3noQPYKU7RJ6ArOzk/edit?usp=sharing.
What is the time duration of all the live sessions?
Roughly, all the sessions last 2 hours.
What is the language spoken by the instructor during the sessions?
Hinglish
How will I be informed about the upcoming class?
You will get a mail from our side before every paid session once you become a paid user.
Can I do this course if I am from a non-tech background?
Yes, absolutely.
I am late, can I join the program in the middle?
Absolutely, you can join the program anytime.
If I join/pay in the middle, will I be able to see all the past lectures?
Yes, once you make the payment you will be able to see all the past content in your
dashboard.
Where do I have to submit the task?
You don’t have to submit the task. We will provide you with the solutions, you have to self
evaluate the task yourself.
Will we do case studies in the program?
Yes.
Where can we contact you?
You can mail us at [email protected]
Payment/Registration related questions
Where do we have to make our payments? Your YouTube channel or website?
You have to make all your monthly payments on our website. Here is the link for our website -
https://learnwith.campusx.in/
Can we pay the entire amount of Rs 5600 all at once?
Unfortunately no, the program follows a monthly subscription model.
What is the validity of monthly subscription? Suppose if I pay on 15th Jan, then do I have to
pay again on 1st Feb or 15th Feb
15th Feb. The validity period is 30 days from the day you make the payment. So essentially
you can join anytime you don’t have to wait for a month to end.
What if I don’t like the course after making the payment. What is the refund policy?
You get a 7 days refund period from the day you have made the payment.
I am living outside India and I am not able to make the payment on the website, what should I
do?
You have to contact us by sending a mail at [email protected]
Post registration queries
Till when can I view the paid videos on the website?
This one is tricky, so read carefully. You can watch the videos till your subscription is valid.
Suppose you have purchased subscription on 21st Jan, you will be able to watch all the past
paid sessions in the period of 21st Jan to 20th Feb. But after 21st Feb you will have to
purchase the subscription again.
But once the course is over and you have paid us Rs 5600(or 7 installments of Rs 799) you
will be able to watch the paid sessions till Aug 2024.
Why lifetime validity is not provided?
Because of the low course fee.
Where can I reach out in case of a doubt after the session?
You will have to fill a google form provided in your dashboard and our team will contact you
for a 1 on 1 doubt clearance session
If I join the program late, can I still ask past week doubts?
Yes, just select past week doubt in the doubt clearance google form.
I am living outside India and I am not able to make the payment on the website, what should I
do?
You have to contact us by sending a mail at [email protected]
Certificate and Placement Assistance related queries
What is the criteria to get the certificate?
There are 2 criterias:
You have to pay the entire fee of Rs 5600
You have to attempt all the course assessments.
I am joining late. How can I pay payment of the earlier months?
You will get a link to pay fee of earlier months in your dashboard once you pay for the current
month.
I have read that Placement assistance is a part of this program. What comes under
Placement assistance?
This is to clarify that Placement assistance does not mean Placement guarantee. So we dont
guarantee you any jobs or for that matter even interview calls. So if you are planning to join
this course just for placements, I am afraid you will be disappointed. Here is what comes
under placement assistance
Portfolio Building sessions
Soft skill sessions
Sessions with industry mentors
Discussion on Job hunting strategies
"""
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer()
tokenizer.fit_on_texts([faqs])

input_sequences = []
for sentence in faqs.split("\n"):
    tokenized_sentence = tokenizer.texts_to_sequences([sentence])[0]
    for i in range(1, len(tokenized_sentence)):
        input_sequences.append(tokenized_sentence[:i+1])

max_len = max([len(x) for x in input_sequences])

from tensorflow.keras.preprocessing.sequence import pad_sequences
padded_input_sequences = pad_sequences(input_sequences, maxlen=max_len, padding='pre')
x = padded_input_sequences[:, :-1]
y = padded_input_sequences[:, -1]

from tensorflow.keras.utils import to_categorical
y = to_categorical(y, num_classes=283)

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dense

model = Sequential()
model.add(Embedding(283, 100, input_length=56))
model.add(GRU(150))
model.add(Dense(283, activation='softmax'))  # softmax over the vocabulary for next-word prediction
model.compile(loss="categorical_crossentropy", optimizer='adam', metrics=['accuracy'])

model.fit(x, y, epochs=10)

import numpy as np
_, accuracy = model.evaluate(x, y)

import time
def predict_seq(text):
    for i in range(10):
        token_text = tokenizer.texts_to_sequences([text])[0]
        padded_token_text = pad_sequences([token_text], maxlen=56, padding='pre')
        pos = np.argmax(model.predict(padded_token_text))
        for word, index in tokenizer.word_index.items():
            if index == pos:
                text = text + " " + word
        time.sleep(2)
    return text

Output:
Experiment 16

Write a Deep Learning program for Auto-Encoder.

Code:
import tensorflow as tf
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from keras import layers, losses
from keras.datasets import mnist
from keras.models import Model

#Loading the MNIST dataset and extracting training and testing data
(x_train, _), (x_test, _) = mnist.load_data()

#Normalizing pixel values to the range [0, 1]


x_train = x_train.astype('float32') / 255
x_test = x_test.astype('float32') / 255

#Displaying the shapes of the training and testing datasets


print("Shape of the training data:", x_train.shape)
print("Shape of the testing data:", x_test.shape)

#Definition of the Autoencoder model as a subclass of the TensorFlow Model class

class SimpleAutoencoder(Model):
    def __init__(self, latent_dimensions, data_shape):
        super(SimpleAutoencoder, self).__init__()
        self.latent_dimensions = latent_dimensions
        self.data_shape = data_shape

        # Encoder architecture using a Sequential model
        self.encoder = tf.keras.Sequential([
            layers.Flatten(),
            layers.Dense(latent_dimensions, activation='relu'),
        ])

        # Decoder architecture using a Sequential model
        self.decoder = tf.keras.Sequential([
            layers.Dense(tf.math.reduce_prod(data_shape), activation='sigmoid'),
            layers.Reshape(data_shape)
        ])

    # Forward pass method defining the encoding and decoding steps
    def call(self, input_data):
        encoded_data = self.encoder(input_data)
        decoded_data = self.decoder(encoded_data)
        return decoded_data

#Extracting shape information from the testing dataset


input_data_shape = x_test.shape[1:]

#Specifying the dimensionality of the latent space


latent_dimensions = 64

#Creating an instance of the SimpleAutoencoder model


simple_autoencoder = SimpleAutoencoder(latent_dimensions, input_data_shape)

simple_autoencoder.compile(optimizer='adam', loss=losses.MeanSquaredError())
simple_autoencoder.fit(x_train, x_train,
epochs =1,
shuffle=True,
validation_data=(x_test, x_test))

encoded_img = simple_autoencoder.encoder(x_test).numpy()
decoded_img = simple_autoencoder.decoder(encoded_img).numpy()

n = 6
plt.figure(figsize=(8, 4))
for i in range(n):
    # display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i])
    plt.title("original")
    plt.gray()

    # display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_img[i])
    plt.title("reconstructed")
    plt.gray()

plt.show()
Output:
Experiment 17

Write a program to create a model to predict used car prices

Code:

import pandas as pd
import numpy as np

df = pd.read_pickle("/content/drive/MyDrive/CarPricesData")
df.head()

# Separate Target Variable and Predictor Variables


TargetVariable=['Price']
Predictors=['Age', 'KM', 'Weight', 'HP', 'MetColor', 'CC', 'Doors']

X=df[Predictors].values
y=df[TargetVariable].values

### Sandardization of data ###


from sklearn.preprocessing import StandardScaler
PredictorScaler=StandardScaler()
TargetVarScaler=StandardScaler()

# Storing the fit object for later reference


PredictorScalerFit=PredictorScaler.fit(X)
TargetVarScalerFit=TargetVarScaler.fit(y)

# Generating the standardized values of X and y


X=PredictorScalerFit.transform(X)
y=TargetVarScalerFit.transform(y)

# Split the data into training and testing set


from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Quick sanity check with the shapes of Training and testing datasets
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)

# importing the libraries


from keras.models import Sequential
from keras.layers import Dense

# create ANN model


model = Sequential()

# Defining the Input layer and FIRST hidden layer, both are same!
model.add(Dense(units=5, input_dim=7, kernel_initializer='normal', activation='relu'))

# Defining the Second layer of the model


# after the first layer we don't have to specify input_dim as keras configure it automatically
model.add(Dense(units=5, kernel_initializer='normal', activation='tanh'))

# The output neuron is a single fully connected node


# Since we will be predicting a single number
model.add(Dense(1, kernel_initializer='normal'))

# Compiling the model


model.compile(loss='mean_squared_error', optimizer='adam')

# Fitting the ANN to the Training set


model.fit(X_train, y_train ,batch_size = 5, epochs = 100, verbose=1)

# Defining a function to find the best parameters for ANN


def FunctionFindBestParams(X_train, y_train, X_test, y_test):

    # Defining the list of hyperparameters to try
    batch_size_list = [5, 10, 15, 20]
    epoch_list = [5, 10, 50, 100]

    import pandas as pd
    SearchResultsData = pd.DataFrame(columns=['TrialNumber', 'Parameters', 'Accuracy'])

    # initializing the trials
    TrialNumber = 0
    for batch_size_trial in batch_size_list:
        for epochs_trial in epoch_list:
            TrialNumber += 1

            # create ANN model
            model = Sequential()
            # Defining the first layer of the model
            model.add(Dense(units=5, input_dim=X_train.shape[1], kernel_initializer='normal', activation='relu'))
            # Defining the second layer of the model
            model.add(Dense(units=5, kernel_initializer='normal', activation='relu'))
            # The output neuron is a single fully connected node,
            # since we will be predicting a single number
            model.add(Dense(1, kernel_initializer='normal'))

            # Compiling the model
            model.compile(loss='mean_squared_error', optimizer='adam')

            # Fitting the ANN to the training set
            model.fit(X_train, y_train, batch_size=batch_size_trial, epochs=epochs_trial, verbose=0)

            MAPE = np.mean(100 * (np.abs(y_test - model.predict(X_test)) / y_test))

            # printing the results of the current iteration
            print(TrialNumber, 'Parameters:', 'batch_size:', batch_size_trial, '-',
                  'epochs:', epochs_trial, 'Accuracy:', 100 - MAPE)

            data = [{'TrialNumber': TrialNumber,
                     'Parameters': str(batch_size_trial) + '-' + str(epochs_trial),
                     'Accuracy': 100 - MAPE}]
            df2 = pd.DataFrame(data)
            SearchResultsData = pd.concat([SearchResultsData, df2], ignore_index=True)
    return SearchResultsData

# Calling the function


ResultsData = FunctionFindBestParams(X_train, y_train, X_test, y_test)
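ResultsData holds one row per trial; a one-line sketch to pull out the best configuration:

# Sketch: show the trial with the highest accuracy
print(ResultsData.sort_values(by='Accuracy', ascending=False).head(1))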

# Fitting the ANN to the Training set


model.fit(X_train, y_train ,batch_size = 5, epochs = 100, verbose=0)

# Generating Predictions on testing data


Predictions=model.predict(X_test)

# Scaling the predicted Price data back to original price scale


Predictions=TargetVarScalerFit.inverse_transform(Predictions)

# Scaling the y_test Price data back to original price scale


y_test_orig=TargetVarScalerFit.inverse_transform(y_test)

# Scaling the test data back to original scale


Test_Data=PredictorScalerFit.inverse_transform(X_test)

TestingData=pd.DataFrame(data=Test_Data, columns=Predictors)
TestingData['Price']=y_test_orig
TestingData['PredictedPrice']=Predictions
TestingData.head()

# Computing the absolute percent error


APE=100*(abs(TestingData['Price']-TestingData['PredictedPrice'])/TestingData['Price'])
TestingData['APE']=APE

print('The Accuracy of ANN model is:', 100-np.mean(APE))


TestingData.head()

print(TestingData)
Output:
Experiment 18

Write a program for deep RNN model

Code:

import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding,SimpleRNN,Dense,LSTM,GRU

(x_train,y_train), (x_test, y_test) = imdb.load_data(num_words = 10000)

x_train= pad_sequences(x_train, maxlen=100)


x_test = pad_sequences(x_test, maxlen=100)

model = Sequential([
    Embedding(10000, 32, input_length=100),
    SimpleRNN(5, return_sequences=True),
    SimpleRNN(5),
    Dense(1, activation='sigmoid')
])
model.summary()

model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs = 5, batch_size=32, validation_split=0.2)

model = Sequential([
    Embedding(10000, 32, input_length=100),
    LSTM(5, return_sequences=True),
    LSTM(5),
    Dense(1, activation='sigmoid')
])
model.summary()
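
The import line above also brings in GRU; a stacked-GRU variant of the same architecture (a sketch under the same settings, not part of the original listing) looks like this, and either model is compiled and fitted exactly as the SimpleRNN one was:

model = Sequential([
    Embedding(10000, 32, input_length=100),
    GRU(5, return_sequences=True),
    GRU(5),
    Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()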
Output:
Experiment 19

Write a program to implement object detection using YOLO

Code:

import cv2
import numpy as np
from google.colab.patches import cv2_imshow

# Load YOLO
net = cv2.dnn.readNet("/content/drive/MyDrive/yolov3.weights",
                      "/content/drive/MyDrive/yolov3.cfg")

classes = []
with open("/content/drive/MyDrive/coco.names", "r") as f:
    classes = [line.strip() for line in f.readlines()]

layer_names = net.getLayerNames()
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers()]

# Load image
img = cv2.imread("/content/drive/MyDrive/image.jpg")
height, width, channels = img.shape

# Detecting objects
blob = cv2.dnn.blobFromImage(img, 0.00392, (416, 416), (0, 0, 0), True, crop=False)
net.setInput(blob)
outs = net.forward(output_layers)

# Showing detection information on the screen
class_ids = []
confidences = []
boxes = []
for out in outs:
    for detection in out:
        scores = detection[5:]
        class_id = np.argmax(scores)
        confidence = scores[class_id]
        if confidence > 0.5:
            # Object detected
            center_x = int(detection[0] * width)
            center_y = int(detection[1] * height)
            w = int(detection[2] * width)
            h = int(detection[3] * height)
            # Rectangle coordinates
            x = int(center_x - w / 2)
            y = int(center_y - h / 2)
            boxes.append([x, y, w, h])
            confidences.append(float(confidence))
            class_ids.append(class_id)

indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)

font = cv2.FONT_HERSHEY_PLAIN
for i in range(len(boxes)):
    if i in indexes:
        x, y, w, h = boxes[i]
        label = str(classes[class_ids[i]])
        color = (255, 0, 0)
        cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
        cv2.putText(img, label, (x, y + 30), font, 2, color, 2)

cv2_imshow(img)
cv2.waitKey(0)
cv2.destroyAllWindows()
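
To also list what survived non-max suppression on the console (a small addition, not in the original listing; np.array(...).flatten() copes with the nested index arrays some OpenCV versions return):

for i in np.array(indexes).flatten():
    print(classes[class_ids[i]], f"{confidences[i]:.2f}", boxes[i])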

Output:
Experiment 20

Write a program to implement autoencoder using MNIST dataset.

Code:

import keras
from keras import layers

encoding_dim = 32  # 32 floats -> a compression factor of 24.5, assuming the input is 784 floats

# This is our input image


input_img = keras.Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = layers.Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = layers.Dense(784, activation='sigmoid')(encoded)

# This model maps an input to its reconstruction


autoencoder = keras.Model(input_img, decoded)

encoder = keras.Model(input_img, encoded)

# This is our encoded (32-dimensional) input


encoded_input = keras.Input(shape=(encoding_dim,))
# Retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# Create the decoder model
decoder = keras.Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adam',loss='binary_crossentropy')

from keras.datasets import mnist
import numpy as np
(x_train,_), (x_test,_)= mnist.load_data()

x_train= x_train.astype('float32')/255.
x_test=x_test.astype('float32')/255.
x_train=x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test=x_test.reshape((len(x_test),np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
autoencoder.fit(x_train, x_train,
                epochs=50,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
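
A quick shape check (an added sanity check, not in the original listing) confirms the compression: each 784-pixel test image is squeezed to a 32-float code before being reconstructed.

print(encoded_imgs.shape)  # expected: (10000, 32)
print(decoded_imgs.shape)  # expected: (10000, 784)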

import matplotlib.pyplot as plt

n = 10 # How many digits we will display


plt.figure(figsize=(20, 4))
for i in range(n):
    # Display original
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)

    # Display reconstruction
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(decoded_imgs[i].reshape(28, 28))
    plt.gray()
    ax.get_xaxis().set_visible(False)
    ax.get_yaxis().set_visible(False)
plt.show()

Output:

Epoch 1/50
235/235 [==============================] - 4s 13ms/step - loss: 0.2722 -
val_loss: 0.1855
Epoch 2/50
235/235 [==============================] - 2s 10ms/step - loss: 0.1691 -
val_loss: 0.1527
Epoch 3/50
235/235 [==============================] - 2s 10ms/step - loss: 0.1443 -
val_loss: 0.1344
Epoch 4/50
235/235 [==============================] - 2s 10ms/step - loss: 0.1291 -
val_loss: 0.1218
Epoch 5/50
235/235 [==============================] - 3s 11ms/step - loss: 0.1187 -
val_loss: 0.1136
Epoch 6/50
235/235 [==============================] - 3s 12ms/step - loss: 0.1117 -
val_loss: 0.1076
Epoch 7/50
235/235 [==============================] - 2s 10ms/step - loss: 0.1066 -
val_loss: 0.1033
Epoch 8/50
235/235 [==============================] - 2s 10ms/step - loss: 0.1027 -
val_loss: 0.0998
Epoch 9/50
235/235 [==============================] - 2s 10ms/step - loss: 0.0998 -
val_loss: 0.0974
Epoch 10/50
235/235 [==============================] - 2s 9ms/step - loss: 0.0976 -
val_loss: 0.0956
Epoch 11/50
235/235 [==============================] - 3s 14ms/step - loss: 0.0962 -
val_loss: 0.0945
Epoch 12/50
235/235 [==============================] - 2s 9ms/step - loss: 0.0954 -
val_loss: 0.0937
Epoch 13/50
...
Epoch 49/50
235/235 [==============================] - 2s 10ms/step - loss: 0.0927 -
val_loss: 0.0915
Epoch 50/50
235/235 [==============================] - 2s 9ms/step - loss: 0.0926 -
val_loss: 0.0916
Experiment 21
Write a program to implement text classification for spam-mails.

Code:

# reading data
import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/mohitgupta-omg/Kaggle-SMS-Spam-Collection-Dataset-/master/spam.csv', encoding='latin-1')
data.head()

data.drop(['Unnamed: 2','Unnamed: 3','Unnamed: 4'],axis=1,inplace=True)


data.columns = ['label','text']
data.head()

data.isna().sum()

# download nltk data
import nltk
nltk.download('all')
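
Downloading everything from NLTK is slow; for this script the stopword list and the WordNet lemmatizer data are enough (an alternative, not in the original listing):

# nltk.download('stopwords')
# nltk.download('wordnet')
# nltk.download('omw-1.4')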

# extract the raw messages as a list
text = list(data['text'])
# preprocessing loop

import re
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

corpus = []

for i in range(len(text)):
    r = re.sub('[^a-zA-Z]', ' ', text[i])
    r = r.lower()
    r = r.split()
    r = [word for word in r if word not in stopwords.words('english')]
    r = [lemmatizer.lemmatize(word) for word in r]
    r = ' '.join(r)
    corpus.append(r)

#assign corpus to data['text']


data['text'] = corpus
data.head()

# create feature and label sets


x = data['text']
y = data['label']

# train test split (66% train - 33% test)


from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.33, random_state = 123)

print('Training Data Shape:', x_train.shape)

print('Testing Data Shape: ', x_test.shape)

# train bag of words model feature extraction


from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()

x_train_cv = cv.fit_transform(x_train)
x_train_cv.shape

# training logistic regression model


from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(x_train_cv, y_train)

# transform x_test using cv


x_test_cv = cv.transform(x_test)

#generate predictions
predictions = lr.predict(x_test_cv)
predictions
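
Since the listing stops at raw predictions, a short evaluation sketch (not in the original) closes the loop using scikit-learn's metrics:

from sklearn.metrics import accuracy_score, confusion_matrix
print('Accuracy:', accuracy_score(y_test, predictions))
print(confusion_matrix(y_test, predictions))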

Output
Experiment 22
Write a program to implement Transfer Learning Feature
Extraction on data.

Code:


!pip install ultralytics

from ultralytics import YOLO


import cv2
from google.colab.patches import cv2_imshow

# Load a pretrained YOLOv8 model
model = YOLO("yolov8s.pt")

img_path = 'bike-img.png'
img = cv2.imread(img_path)

results = model(img)
annotated_img = results[0].plot()

cv2_imshow(annotated_img)

boxes = results[0].boxes

for box in boxes:
    xyxy = box.xyxy[0].cpu().numpy().astype(int)
    conf = box.conf[0].item()
    if conf > 0.3:
        cv2.rectangle(img, (xyxy[0], xyxy[1]), (xyxy[2], xyxy[3]), (0, 255, 0), 2)

cv2_imshow(img)
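
The Results object also carries the class-name mapping, so the detections can be summarised on the console (a sketch, assuming the standard ultralytics Results API):

for box in results[0].boxes:
    cls_id = int(box.cls[0].item())
    print(results[0].names[cls_id], round(box.conf[0].item(), 2))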
Output
Experiment 23
Write a program to implement face detection method .

Code:

!pip install mtcnn

from mtcnn import MTCNN

import cv2
from google.colab.patches import cv2_imshow

detector = MTCNN()

img = cv2.imread('/content/lady.jpg')

output = detector.detect_faces(img)
# output is a list of dicts: [{'box': ..., 'confidence': ..., 'keypoints': ...}, ...]
print(output)

for i in output:
    x, y, width, height = i['box']

    left_eyeX, left_eyeY = i['keypoints']['left_eye']
    right_eyeX, right_eyeY = i['keypoints']['right_eye']
    noseX, noseY = i['keypoints']['nose']
    mouth_leftX, mouth_leftY = i['keypoints']['mouth_left']
    mouth_rightX, mouth_rightY = i['keypoints']['mouth_right']

    cv2.circle(img, center=(left_eyeX, left_eyeY), color=(255, 0, 0), thickness=3, radius=2)
    cv2.circle(img, center=(right_eyeX, right_eyeY), color=(255, 0, 0), thickness=3, radius=2)
    cv2.circle(img, center=(noseX, noseY), color=(255, 0, 0), thickness=3, radius=2)
    cv2.circle(img, center=(mouth_leftX, mouth_leftY), color=(255, 0, 0), thickness=3, radius=2)
    cv2.circle(img, center=(mouth_rightX, mouth_rightY), color=(255, 0, 0), thickness=3, radius=2)

    cv2.rectangle(img, pt1=(x, y), pt2=(x + width, y + height), color=(255, 0, 0), thickness=3)
cv2_imshow(img) #use cv2_imshow instead of cv2.imshow
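
Each dictionary returned by detect_faces() also includes a 'confidence' score, so weak detections can be filtered out first (a small optional addition, not in the original listing):

faces = [f for f in output if f['confidence'] > 0.9]
print(len(faces), 'face(s) above 0.9 confidence')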

# prompt: now create a car detection program like this

!pip install opencv-python


!pip install tensorflow

import cv2
import tensorflow as tf

# Load the pre-trained model (you'll need to download a car detection model)
# Example using a MobileNet SSD model (replace with your preferred model)
model = tf.saved_model.load("path/to/your/car_detection_model")  # Replace with the actual path

# Load the image


img = cv2.imread('/content/cars.jpg') # Replace with your image path

# Preprocess the image (resize, normalize, etc.) - adapt this to your model's input requirements
input_tensor = tf.convert_to_tensor(img)
input_tensor = input_tensor[tf.newaxis, ...]

# Perform inference
detections = model(input_tensor)

# Process the detections


num_detections = int(detections.pop('num_detections'))
detections = {key: value[0, :num_detections].numpy()
for key, value in detections.items()}
detections['num_detections'] = num_detections

# Draw bounding boxes and labels


for i in range(num_detections):
    score = detections['detection_scores'][i]
    if score > 0.5:  # Adjust the confidence threshold as needed
        ymin, xmin, ymax, xmax = detections['detection_boxes'][i]
        ymin = int(ymin * img.shape[0])
        xmin = int(xmin * img.shape[1])
        ymax = int(ymax * img.shape[0])
        xmax = int(xmax * img.shape[1])
        cv2.rectangle(img, (xmin, ymin), (xmax, ymax), (0, 255, 0), 2)
        class_id = int(detections['detection_classes'][i])
        # ... get class name from your model's labels ...
        label = f"Car {score:.2f}"  # Replace with the actual class name if available
        cv2.putText(img, label, (xmin, ymin - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)

from google.colab.patches import cv2_imshow

# Display the image


cv2_imshow(img)
Output
Experiment 24
Write a program to Build Airplane vs Bird prediction model
using transfer learning.

Code:

!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!kaggle datasets download -d persona/aeroplanes-vs-birds

import zipfile
zip_ref = zipfile.ZipFile('/content/aeroplanes-vs-birds.zip', 'r')
zip_ref.extractall('/content')
zip_ref.close()

import tensorflow as tf
import keras
from keras import Sequential
from keras.layers import Dense, Flatten
from keras.applications.vgg16 import VGG16

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))


conv_base.summary()

model = Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.summary()

conv_base.trainable = False
model.summary()

train_ds = keras.utils.image_dataset_from_directory(
    directory='/content/train',
    labels='inferred',
    label_mode='int',
    batch_size=32,
    image_size=(150, 150)
)
validation_ds = keras.utils.image_dataset_from_directory(
    directory='/content/test',
    labels='inferred',
    label_mode='int',
    batch_size=32,
    image_size=(150, 150)
)

def process(image, label):
    image = tf.cast(image / 255, tf.float32)
    return image, label

train_ds = train_ds.map(process)
validation_ds = validation_ds.map(process)

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])


history = model.fit(train_ds, epochs=10, validation_data=validation_ds)
import matplotlib.pyplot as plt
import cv2
import numpy as np

test_img = cv2.imread('/content/test_bird.jpg')
plt.imshow(cv2.cvtColor(test_img, cv2.COLOR_BGR2RGB))
plt.show()

test_img = cv2.resize(test_img, (150, 150))

# Scale to [0, 1] to match the training preprocessing above
test_input = test_img.reshape((1, 150, 150, 3)) / 255.0

model.predict(test_input)
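
The single sigmoid output is the probability of the class that image_dataset_from_directory assigned label 1 (alphabetical directory order), so a readable verdict can be printed like this (a sketch; the 0/1-to-name mapping should be checked against train_ds.class_names before the map() call):

pred = model.predict(test_input)[0][0]
print('bird' if pred > 0.5 else 'airplane')  # assumes the airplane folder sorts before birds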

Output
Experiment 25
Write a program to Build Oranges vs Mangoes prediction model
using transfer learning.

Code:

!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!kaggle datasets download -d us236/oranges-vs-mangoes

import zipfile
zip_ref = zipfile.ZipFile('/content/oranges-vs-mangoes.zip', 'r')
zip_ref.extractall('/content')
zip_ref.close()

import tensorflow as tf
from tensorflow import keras
from keras import Sequential
from keras.layers import Dense, Flatten
from keras.applications.vgg16 import VGG16

conv_base = VGG16(
weights='imagenet',
include_top=False,
input_shape=(150,150,3)
)
conv_base.summary()

model = Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()

conv_base.trainable = False
model.summary()

train_ds = keras.utils.image_dataset_from_directory(
directory='/content/train',
labels='inferred',
label_mode='int',
batch_size=32,
image_size=(150,150)
)

validation_ds = keras.utils.image_dataset_from_directory(
directory='/content/test',
labels='inferred',
label_mode='int',
batch_size=32,
image_size=(150,150)
)

def process(image, label):
    image = tf.cast(image / 255, tf.float32)
    return image, label

train_ds = train_ds.map(process)
validation_ds = validation_ds.map(process)

model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
history = model.fit(train_ds,epochs=10,validation_data=validation_ds)

import matplotlib.pyplot as plt


import cv2
test_img = cv2.imread('/content/mango.jpg')
plt.imshow(cv2.cvtColor(test_img, cv2.COLOR_BGR2RGB))  # BGR -> RGB for correct display
plt.show()

test_img = cv2.resize(test_img, (150, 150))
# Scale to [0, 1] to match the training preprocessing above
test_input = test_img.reshape((1, 150, 150, 3)) / 255.0
model.predict(test_input)

Output
