
PRINCIPLES OF SOFT COMPUTING LAB - 7

R.Snehalatha – AP21110011455
CSE-U

1. Write a Python program to implement a CNN.

CODE:
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape((X_train.shape[0], 28, 28, 1)) # Reshape to (28, 28, 1)
X_test = X_test.reshape((X_test.shape[0], 28, 28, 1)) # Reshape to (28, 28, 1)
X_train, X_test = X_train / 255.0, X_test / 255.0 # Normalize pixel values
y_train = to_categorical(y_train) # One-hot encode labels
y_test = to_categorical(y_test)
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=5, batch_size=64, validation_split=0.2)
test_loss, test_acc = model.evaluate(X_test, y_test)
print(f'Test accuracy: {test_acc}')

OUTPUT:
Epoch 1/5
750/750 ━━━━━━━━━━━━━━━━━━━━ 44s 57ms/step - accuracy: 0.8584 - loss: 0.4792 -
val_accuracy: 0.9798 - val_loss: 0.0727
Epoch 2/5
750/750 ━━━━━━━━━━━━━━━━━━━━ 80s 54ms/step - accuracy: 0.9796 - loss: 0.0656 -
val_accuracy: 0.9839 - val_loss: 0.0532
Epoch 3/5
750/750 ━━━━━━━━━━━━━━━━━━━━ 43s 57ms/step - accuracy: 0.9879 - loss: 0.0402 -
val_accuracy: 0.9814 - val_loss: 0.0620
Epoch 4/5
750/750 ━━━━━━━━━━━━━━━━━━━━ 80s 54ms/step - accuracy: 0.9894 - loss: 0.0325 -
val_accuracy: 0.9886 - val_loss: 0.0410
Epoch 5/5
750/750 ━━━━━━━━━━━━━━━━━━━━ 41s 55ms/step - accuracy: 0.9924 - loss: 0.0231 -
val_accuracy: 0.9889 - val_loss: 0.0387
313/313 ━━━━━━━━━━━━━━━━━━━━ 3s 8ms/step - accuracy: 0.9877 - loss: 0.0390
Test accuracy: 0.989799976348877
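As a quick sanity check after training, the model can be run on a few test images. A minimal sketch, reusing the variables from the code above:

import numpy as np
# Predict the first five test digits and compare against the true labels.
probs = model.predict(X_test[:5])
print('Predicted:', np.argmax(probs, axis=1))
print('Actual:   ', np.argmax(y_test[:5], axis=1))
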
2. Write a Python program to realize the working principles of popular architectures such as AlexNet, GoogLeNet, and VGGNet.

a) AlexNet Implementation:

AlexNet stacks five convolutional layers followed by three fully connected layers, and relies on ReLU activations, dropout, and max pooling to achieve good performance on ImageNet.

CODE:
from tensorflow.keras import layers, models
def AlexNet(input_shape=(224, 224, 3), num_classes=1000):
    model = models.Sequential()
    # Five convolutional layers, with max pooling after the 1st, 2nd and 5th
    model.add(layers.Conv2D(96, (11, 11), strides=(4, 4), activation='relu', input_shape=input_shape))
    model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
    model.add(layers.Conv2D(256, (5, 5), activation='relu', padding='same'))
    model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
    model.add(layers.Conv2D(384, (3, 3), activation='relu', padding='same'))
    model.add(layers.Conv2D(384, (3, 3), activation='relu', padding='same'))
    model.add(layers.Conv2D(256, (3, 3), activation='relu', padding='same'))
    model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
    # Classifier: two 4096-unit dense layers with dropout, then softmax
    model.add(layers.Flatten())
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(4096, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(num_classes, activation='softmax'))
    return model

model = AlexNet()
model.summary()

OUTPUT:

Model: "sequential_3"
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━
━━━━━━━━━━┓
┃ Layer (type) ┃ Output Shape ┃ Param # ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━╇━━
━━━━━━━━━━━━━━━┩
│ conv2d_9 (Conv2D) │ (None, 54, 54, 96) │ 34,944 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ max_pooling2d_7 (MaxPooling2D) │ (None, 26, 26, 96) │ 0│
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_10 (Conv2D) │ (None, 26, 26, 256) │ 614,656 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ max_pooling2d_8 (MaxPooling2D) │ (None, 12, 12, 256) │ 0│
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_11 (Conv2D) │ (None, 12, 12, 384) │ 885,120 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_12 (Conv2D) │ (None, 12, 12, 384) │ 1,327,488 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ conv2d_13 (Conv2D) │ (None, 12, 12, 256) │ 884,992 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ max_pooling2d_9 (MaxPooling2D) │ (None, 5, 5, 256) │ 0│
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ flatten_3 (Flatten) │ (None, 6400) │ 0│
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_7 (Dense) │ (None, 4096) │ 26,218,496 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_2 (Dropout) │ (None, 4096) │ 0│
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_8 (Dense) │ (None, 4096) │ 16,781,312 │
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dropout_3 (Dropout) │ (None, 4096) │ 0│
├──────────────────────────────┼─────────────────────────────┼─────────────────┤
│ dense_9 (Dense) │ (None, 1000) │ 4,097,000 │
└───────────────────────────────┴─────────────────────────────┴─────────────────┘
Total params: 50,844,008 (193.95 MB)
Trainable params: 50,844,008 (193.95 MB)
Non-trainable params: 0 (0.00 B)
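Training AlexNet properly requires ImageNet-scale data, which is outside the scope of this lab, so a forward pass on random input is a reasonable smoke test of the architecture. A minimal sketch (the batch size of 2 is arbitrary):

import numpy as np
# One forward pass on a random batch to confirm the input/output shapes.
dummy = np.random.rand(2, 224, 224, 3).astype('float32')
preds = model(dummy)
print(preds.shape)  # expected: (2, 1000)
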

b) VGGNet (VGG16) Implementation:

VGGNet uses deep stacks of convolutional layers with small 3x3 filters; two stacked 3x3 convolutions cover the same receptive field as one 5x5 filter but with fewer parameters and an extra non-linearity, as the check below shows.
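A quick parameter-count check of that design choice (a hand calculation, not taken from the lab run; 64 input and output channels are chosen to match the first VGG block):

c_in, c_out = 64, 64
# One 5x5 convolution vs. two stacked 3x3 convolutions with the same receptive field.
one_5x5 = 5 * 5 * c_in * c_out + c_out        # 102,464 parameters
two_3x3 = 2 * (3 * 3 * c_in * c_out + c_out)  # 73,856 parameters (cf. conv2d_1 below: 36928 each)
print(one_5x5, two_3x3)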

CODE:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

def vgg16():
    model = Sequential([
        # Block 1
        Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)),
        Conv2D(64, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),

        # Block 2
        Conv2D(128, (3, 3), activation='relu', padding='same'),
        Conv2D(128, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),

        # Block 3
        Conv2D(256, (3, 3), activation='relu', padding='same'),
        Conv2D(256, (3, 3), activation='relu', padding='same'),
        Conv2D(256, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),

        # Block 4
        Conv2D(512, (3, 3), activation='relu', padding='same'),
        Conv2D(512, (3, 3), activation='relu', padding='same'),
        Conv2D(512, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),

        # Block 5
        Conv2D(512, (3, 3), activation='relu', padding='same'),
        Conv2D(512, (3, 3), activation='relu', padding='same'),
        Conv2D(512, (3, 3), activation='relu', padding='same'),
        MaxPooling2D((2, 2)),

        # Classifier
        Flatten(),
        Dense(4096, activation='relu'),
        Dense(4096, activation='relu'),
        Dense(1000, activation='softmax')
    ])
    return model

# Model summary
vgg16_model = vgg16()
vgg16_model.summary()

Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
conv2d_1 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
conv2d_3 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
conv2d_5 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
conv2d_6 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
conv2d_8 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
conv2d_9 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
conv2d_11 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
conv2d_12 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
dense (Dense) (None, 4096) 102764544
_________________________________________________________________
dense_1 (Dense) (None, 4096) 16781312
_________________________________________________________________
dense_2 (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
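As a cross-check on the hand-built model, Keras ships a reference VGG16 in tensorflow.keras.applications; instantiated without pretrained weights it should report the same 138,357,544 parameters. A minimal sketch:

from tensorflow.keras.applications import VGG16
# Reference implementation with random weights; the architecture matches the model above.
ref = VGG16(weights=None, classes=1000)
ref.summary()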

c) GoogLeNet (Inception v1) Implementation:

GoogLeNet uses Inception modules that apply 1x1, 3x3, and 5x5 convolutions and a 3x3 max-pooling branch in parallel, then concatenate the branch outputs along the channel axis; 1x1 "reduce" convolutions ahead of the larger filters keep the parameter count low.

CODE:
import tensorflow as tf
from tensorflow.keras import layers, models, Input

# Define an Inception module
def inception_module(x, filters_1x1, filters_3x3_reduce, filters_3x3,
                     filters_5x5_reduce, filters_5x5, filters_pool_proj):
    conv_1x1 = layers.Conv2D(filters_1x1, (1, 1), padding='same', activation='relu')(x)

    conv_3x3_reduce = layers.Conv2D(filters_3x3_reduce, (1, 1), padding='same', activation='relu')(x)
    conv_3x3 = layers.Conv2D(filters_3x3, (3, 3), padding='same', activation='relu')(conv_3x3_reduce)

    conv_5x5_reduce = layers.Conv2D(filters_5x5_reduce, (1, 1), padding='same', activation='relu')(x)
    conv_5x5 = layers.Conv2D(filters_5x5, (5, 5), padding='same', activation='relu')(conv_5x5_reduce)

    pool_proj = layers.MaxPooling2D((3, 3), strides=(1, 1), padding='same')(x)
    pool_proj = layers.Conv2D(filters_pool_proj, (1, 1), padding='same', activation='relu')(pool_proj)

    # Concatenate the four branches along the channel axis
    output = layers.concatenate([conv_1x1, conv_3x3, conv_5x5, pool_proj], axis=-1)
    return output

# Define GoogLeNet (Inception v1) model
def googlenet():
    input_layer = Input(shape=(224, 224, 3))
    x = layers.Conv2D(64, (7, 7), strides=(2, 2), padding='same', activation='relu')(input_layer)
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    x = layers.Conv2D(64, (1, 1), padding='same', activation='relu')(x)
    x = layers.Conv2D(192, (3, 3), padding='same', activation='relu')(x)
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    # Inception modules
    x = inception_module(x, 64, 96, 128, 16, 32, 32)
    x = inception_module(x, 128, 128, 192, 32, 96, 64)
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    x = inception_module(x, 192, 96, 208, 16, 48, 64)
    x = inception_module(x, 160, 112, 224, 24, 64, 64)
    x = inception_module(x, 128, 128, 256, 24, 64, 64)
    x = inception_module(x, 112, 144, 288, 32, 64, 64)
    x = inception_module(x, 256, 160, 320, 32, 128, 128)
    x = layers.MaxPooling2D((3, 3), strides=(2, 2), padding='same')(x)

    x = inception_module(x, 256, 160, 320, 32, 128, 128)
    x = inception_module(x, 384, 192, 384, 48, 128, 128)

    # Average pooling and classifier
    x = layers.AveragePooling2D((7, 7), strides=(1, 1), padding='valid')(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1000, activation='softmax')(x)

    model = models.Model(input_layer, x, name='GoogLeNet')
    return model

# Create the model and print summary
model = googlenet()
model.summary()

Model: "GoogLeNet"
__________________________________________________________________________________
________________
Layer (type) Output Shape Param # Connected to
==================================================================================
================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
__________________________________________________________________________________
________________
conv2d (Conv2D) (None, 112, 112, 64) 9472 input_1[0][0]
__________________________________________________________________________________
________________
max_pooling2d (MaxPooling2D) (None, 56, 56, 64) 0 conv2d[0][0]
__________________________________________________________________________________
________________
conv2d_1 (Conv2D) (None, 56, 56, 64) 4160 max_pooling2d[0][0]
__________________________________________________________________________________
________________
conv2d_2 (Conv2D) (None, 56, 56, 192) 110784 conv2d_1[0][0]
__________________________________________________________________________________
________________
max_pooling2d_1 (MaxPooling2D) (None, 28, 28, 192) 0 conv2d_2[0][0]
__________________________________________________________________________________
________________
... (layers continue)

dense (Dense) (None, 1000) 1025000 flatten[0][0]


==================================================================================
================
Total params: 6,998,744
Trainable params: 6,998,744
Non-trainable params: 0
__________________________________________________________________________________
________________
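To see the channel concatenation at work, the first Inception module can be probed on its own. A small sketch, assuming the inception_module function defined above is in scope (the input shape matches the tensor entering the first module):

from tensorflow.keras import Input, models
# The four branches contribute 64 + 128 + 32 + 32 = 256 output channels.
inp = Input(shape=(28, 28, 192))
out = inception_module(inp, 64, 96, 128, 16, 32, 32)
probe = models.Model(inp, out)
print(probe.output_shape)  # expected: (None, 28, 28, 256)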
