DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
Experiment 3
Student Name: Piyush Prashar UID: 22BET10096
Branch: BE-IT Section/Group: 22BET_IOT-703-A
Semester: 6th Date of Performance: 03/02/25
Subject Name: IOT LAB Subject Code: 22ITP-367
1. Aim:
Monitor air quality using a gas sensor (MQ135) and display the data on ThingSpeak.
2. Objective:
Monitor air quality using the MQ135 gas sensor and send the data to ThingSpeak
for visualization and analysis.
3. Hardware Used:
• MQ135 gas sensor
• ESP8266/NodeMCU (or any microcontroller with Wi-Fi capability)
• Breadboard and jumper wires
• Power supply (5V for the sensor and microcontroller)
• ThingSpeak account (free API key)
4. Procedure:
1. Connect the Hardware:
• MQ135 Pinout:
• VCC: Connect to 5V.
• GND: Connect to GND.
• AO (Analog Output): Connect to the analog pin of the ESP8266 (e.g., A0 on
NodeMCU).
Wiring:
• MQ135 VCC → NodeMCU 3V3 or 5V (depending on module support)
• MQ135 GND → NodeMCU GND
• MQ135 A0 → NodeMCU A0
2. Set Up ThingSpeak:
• Go to ThingSpeak and create a free account.
• Create a new channel and add a Field (e.g., "Air Quality").
• Note down the Write API Key from the API Keys tab.
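The new channel can be tested with one manual update before the ESP8266 sketch runs; it uses the same HTTP API that the code in the Code section calls. Below is a minimal sketch in Python, assuming the requests package is installed; the value 123 is just a test reading for the "Air Quality" field.
import requests

WRITE_API_KEY = "YOUR_API_KEY"  # the Write API Key noted above

# Push one test value into field1 ("Air Quality"); ThingSpeak replies with the new entry ID
resp = requests.get(
    "https://api.thingspeak.com/update",
    params={"api_key": WRITE_API_KEY, "field1": 123},
    timeout=10,
)
print("Entry ID:", resp.text)  # "0" means the update was rejected (e.g., rate limit)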
3. Install Required Libraries:
• Ensure the ESP8266 board package is installed in your Arduino IDE:
• Go to Tools > Board > Boards Manager.
• Search for ESP8266 and install it; the ESP8266WiFi and ESP8266HTTPClient libraries come with the package.
5. Code:
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>
// Replace with your network credentials
const char* ssid = "Your_SSID";
const char* password = "Your_PASSWORD";
// ThingSpeak settings
const char* server = "http://api.thingspeak.com";
String apiKey = "YOUR_API_KEY";
// MQ135 analog output connected to A0
int mq135Pin = A0;
void setup() {
Serial.begin(115200);
WiFi.begin(ssid, password);
while (WiFi.status() != WL_CONNECTED) {
delay(1000);
Serial.println("Connecting to WiFi...");
}
Serial.println("Connected to WiFi");
}
void loop() {
// Read analog value from MQ135
int airQuality = analogRead(mq135Pin);
Serial.println("Air Quality Value: " + String(airQuality));
// Send data to ThingSpeak
if (WiFi.status() == WL_CONNECTED) {
WiFiClient client;
HTTPClient http;
String url = String(server) + "/update?api_key=" + apiKey + "&field1=" + String(airQuality);
http.begin(client, url);
int httpCode = http.GET();
if (httpCode > 0) {
Serial.println("Data sent to ThingSpeak successfully.");
} else {
Serial.println("Error sending data.");
}
http.end();
}
// ThingSpeak limits free-account updates to one every 15 seconds
delay(15000);
}
6. Output:
Fig 1: Simulated Cloud Air Quality Variations
Fig 2: Hardware
Fig 3: ThingSpeak Visualization
7. Learning Outcome:
• Understanding how to interface and calibrate the MQ135 gas sensor with
microcontrollers such as the ESP8266/NodeMCU.
• Collecting sensor data efficiently and reading analog values for air quality
monitoring.
• Learning how to set up wireless communication protocols (Wi-Fi, MQTT) to
connect with cloud platforms.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
Experiment 6
Student Name: Kundan UID: 22BCS10726
Branch: CSE Section/Group: 637-B
Semester: 6th Date of Performance: 28/2/25
Subject Name: Cloud IoT Subject Code: 22CSP-367
1. Aim: Create a cloud-based back-end to support IoT applications by setting up EC2 servers
with different operating systems.
2. Objective: To create a cloud-based back-end for IoT applications by setting up Amazon
EC2 servers with different operating systems.
3. Hardware / Software Used: Operating System, IOT Sensors (if integrated), Internet
Connectivity, AWS Account, SSH-Enabled Device.
4. Procedure:
1. Log into AWS → Go to Services → Search & Click EC2.
2. Click Instances → Launch Instances.
3. Enter Instance Name (e.g., IoT Server).
4. Select AMI (Operating System).
5. Choose Instance Type (t2.micro for free tier).
6. Click Create Key Pair, download the .pem file.
7. Keep Network & Storage settings default or modify as needed.
8. Click Launch Instance.
9. Go to Instances → Verify instance creation.
10. Take a screenshot of the Launch an Instance | EC2 | ap-south-1 page.
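The console procedure above can also be scripted once AWS credentials are configured (e.g., with aws configure), which is useful when several back-end servers with different operating systems are needed. Below is a minimal sketch using Python and boto3; the AMI ID and key pair name are placeholders for the choices made in steps 4 to 6.
import boto3

ec2 = boto3.resource("ec2", region_name="ap-south-1")  # region used in the console session

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: AMI (operating system) chosen in step 4
    InstanceType="t2.micro",           # free-tier instance type from step 5
    KeyName="iot-server-key",          # placeholder: key pair created in step 6
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "IoT Server"}],
    }],
)
print("Launched instance:", instances[0].id)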
5. Result:
Fig 1. EC2
Fig 2. Instance created
6. Learning Outcomes:
1. Choose the right AMI, instance type, and storage for deployment.
2. Manage AWS Free Tier resources efficiently to avoid unexpected costs.
3. Develop skills in monitoring instance performance and troubleshooting issues.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
Experiment 2
Student Name: Manglam Dubey UID: 22BCS16039
Branch: CSE Section/Group: IOT-621-B
Semester: 6th Date of Performance:
Subject Name: IOT LAB Subject Code: 22CSP-367
1. Aim:
Simulate a cloud scenario using MATLAB and run an algorithm for temperature variations.
2. Objective:
To simulate a cloud computing scenario using MATLAB and implement an
algorithm to monitor and analyze temperature variations.
3. Software Used:
• Online MATLAB
• ThingSpeak
4. Procedure:
Creating Account on MATLAB:
• Open the web browser and create an account on MATLAB Online.
• Open MATLAB and start a new script.
• Write the code
• Run the script.
• The code will simulate temperature readings (based on a sine-wave variation) and
send the data to ThingSpeak every 5 seconds.
5. Sending data on ThingSpeak:
• Go to ThingSpeak and create an account.
• Create a new channel for storing the temperature data.
• After creating the channel, go to the Channel Settings page.
• Get the Channel ID and Write API Key from the channel settings.
• Now Analyze and Visualize Data on ThingSpeak
6. Code:
1. MATLAB code:
% Parameters
time = 0:0.1:24; % Time in hours (0 to 24, with 0.1-hour intervals)
baseTemp = 20; % Base temperature in degrees Celsius
amplitude = 10; % Temperature fluctuation amplitude
noiseFactor = 2; % Random noise amplitude
% Simulating temperature variations
temperature = baseTemp + amplitude*sin((pi/12)*time) + noiseFactor*randn(size(time));
% Plotting the temperature variations
figure;
plot(time, temperature, 'b', 'LineWidth', 1.5);
xlabel('Time (hours)');
ylabel('Temperature (°C)');
title('Simulated Cloud Temperature Variations');
grid on;
% Running an algorithm to detect significant changes (spike detection)
threshold = 5; % Change threshold for spikes
tempDiff = diff(temperature); % Calculate differences
spikeIndices = find(abs(tempDiff) > threshold);
% Mark spikes on the plot
hold on;
plot(time(spikeIndices), temperature(spikeIndices), 'ro', 'MarkerSize', 8,
'LineWidth', 1.5);
legend('Temperature', 'Detected Spikes');
% Output spike times and values
disp('Detected spikes at the following times (hours) and temperatures (°C):');
disp([time(spikeIndices)', temperature(spikeIndices)']);
2. MATLAB code for sending data:
% Requires the ThingSpeak Support Toolbox (provides thingSpeakWrite); install it via the MATLAB Add-On Explorer if it is not already available
% ThingSpeak Parameters
channelID = <your_channel_ID>; % Replace with your channel ID
writeAPIKey = '<your_write_API_key>'; % Replace with your Write API Key
% Simulate temperature data (same as before)
time = 0:0.1:24; % Time in hours
baseTemp = 20; % Base temperature
amplitude = 10; % Temperature fluctuation amplitude
noiseFactor = 2; % Random noise amplitude
temperature = baseTemp + amplitude * sin((pi/12) * time) + noiseFactor * randn(size(time));
% Send data to ThingSpeak in a loop
for i = 1:length(time)
% Create data structure
data = temperature(i);
% Write data to ThingSpeak channel
response = thingSpeakWrite(channelID, data, 'WriteKey', writeAPIKey);
% Pause to simulate real-time data (every 5 seconds)
pause(5);
end
disp('Data successfully sent to ThingSpeak.');
7. Output:
Fig 1: Simulated Cloud Temperature Variations
Fig 2: MATLAB Analysis
Fig 3: MATLAB Visualization
8. Learning Outcome:
• Gain a basic understanding of how cloud cover and other environmental
factors influence temperature.
• Improve your MATLAB programming skills by implementing algorithms for
simulating real-world phenomena.
• Understand how to stream time-series sensor data (like temperature) to a cloud platform such as ThingSpeak for analysis and visualization.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
Experiment 5
Student Name: Zatch UID:
Branch: BE-CSE Section/Group:
Semester: 06 Date of Performance:
Subject Name: Foundation of Cloud IOT Edge ML Lab Subject Code: 22CSP-367
1. Aim: Set up a system that sends IoT sensor data to AWS IoT Core and stores it in an S3 bucket.
2. Objective: To demonstrate the process of integrating IoT sensors with AWS IoT
core, transmitting sensor data, and storing the data in AWS S3 for further analysis.
3. Hardware / Software Used:
a. AWS IOT Core
b. AWS S3
c. Operating System
d. IOT Sensors (if integrated)
e. Internet Connectivity
4. Procedure:
a. Log in to AWS, navigate to S3, and create a bucket (e.g., s3-bucket-for-iot-data) in the
preferred AWS region.
b. Open AWS IoT Core, navigate to Act → Rules, and create a rule
(IoT_data_rule_for_S3).
c. Set an SQL query to collect data: SELECT * FROM 'iotdevice/+/datas3'.
d. Add an action: "Store a message in an Amazon S3 bucket."
e. Go to MQTT Test Client, publish data to iotdevice/55/datas3 with payload.
f. Verify data storage in S3 by navigating to the bucket.
g. Open S3, select the created bucket, and find the folder corresponding to the IoT device
ID.
h. Download the stored data to verify successful transmission.
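Step (e) can also be performed programmatically instead of from the MQTT Test Client. Below is a minimal sketch with Python and boto3, assuming configured AWS credentials; the region and the payload fields are illustrative.
import json
import boto3

iot = boto3.client("iot-data", region_name="ap-south-1")  # assumed region

response = iot.publish(
    topic="iotdevice/55/datas3",   # matches the rule query SELECT * FROM 'iotdevice/+/datas3'
    qos=1,
    payload=json.dumps({"deviceId": 55, "temperature": 28.4, "humidity": 61}),  # sample payload
)
print("Published, HTTP status:", response["ResponseMetadata"]["HTTPStatusCode"])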
5. Result:
Figure 1: End-to-End AWS Architecture
Figure 2: AWS IOT Connection
Figure 3: MQTT Test Client
Figure 4: CPU Utilization Stats
Figure 5: Integration and Graphical Visualization
6. Conclusion:
In this experiment, AWS IoT Core and its Rules engine were used to filter MQTT topics and route the data into AWS S3 for storage. AWS IoT Core can receive and send millions of IoT messages at a time, and the Rules engine can filter MQTT topics from IoT devices and forward them, along with a timestamp, to other AWS services. AWS S3 is used for data backup and archival.
7. Learning Outcomes:
a) Learned how to connect IoT sensors with AWS IoT Core and process sensor data in the
cloud.
b) Gained knowledge of MQTT message publishing, subscribing, and rule-based filtering in
AWS IoT Core.
c) Understood how AWS IoT Core, S3, and IAM roles work together to manage and secure
IoT data.
d) Learned how to store IoT data securely in S3 for long-term analysis and scalability.
e) Understood the IOT-Cloud integration.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
Experiment - 4
Student Name: Piyush Prashar UID: 22BET10096
Branch: BE-IT Section/Group: BET-703-A
Semester: 6th Date of Performance: 17/02/25
Subject Name: IOT LAB Subject Code: 22ITP-367
1. Aim: Build a security system with any sensor and alerts using Blynk.
2. Objective: To design and implement a security system using sensors (e.g., PIR
motion sensor, magnetic door sensor, or ultrasonic sensor) and integrate it with the Blynk
platform to send real-time alerts.
3. Hardware Used:
• PIR Motion Sensor (HC-SR501)
• ESP8266/NodeMCU(or any Wi-Fi-enabled microcontroller)
• Buzzer/LED (for local alerts, optional)
• Blynk App (installed on your smartphone)
• Breadboard and jumper wires
• Ultrasonic Sensor (HC-SR04)
4. Procedure:
1. Connect the Hardware: PIR Sensor Pinout:
• VCC: Connect to 3.3V or 5V (depending on the sensor model).
• GND: Connect to GND.
• OUT: Connect to a digital pin on ESP8266 (e.g., D5).
• Wiring Diagram:
• PIR VCC → NodeMCU 3.3V
• PIR GND → NodeMCU GND
• PIR OUT → NodeMCU D5
• Buzzer/LED (optional) → D2
2. Set Up Blynk:
• Download and install the Blynk app (iOS/Android).
• Create a new project and select ESP8266 as the device.
• Note down the Auth Token sent to your email.
• Add a Notification Widget in the app for alerts.
3. Install Libraries in Arduino IDE:
Blynk Library:
o Go to Tools > Manage Libraries and search for Blynk.
o Install the Blynk library.
ESP8266 Board Support:
• Go to File > Preferences and add the following URL to the Additional Boards Manager URLs field:
• http://arduino.esp8266.com/stable/package_esp8266com_index.json
• Go to Tools > Board > Boards Manager and install the ESP8266 package.
5. Code:
#define BLYNK_TEMPLATE_ID "YourTemplateID"
#define BLYNK_DEVICE_NAME "SecuritySystem"
#define BLYNK_AUTH_TOKEN "YourAuthToken"
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
// Blynk and Wi-Fi credentials
char auth[] = "YourAuthToken";
char ssid[] = "Your_SSID";
char pass[] = "Your_PASSWORD";
// PIR sensor and buzzer pins
int pirPin = D5;
int buzzerPin = D2;
void setup() {
Serial.begin(115200);
Blynk.begin(auth, ssid, pass);
pinMode(pirPin, INPUT);
pinMode(buzzerPin, OUTPUT);
digitalWrite(buzzerPin, LOW);
Serial.println("Security system ready.");
}
void loop() {
Blynk.run();
if (digitalRead(pirPin) == HIGH) {
Serial.println("Motion Detected!");
Blynk.notify("Alert! Motion Detected at Home."); digitalWrite(buzzerPin,
HIGH); Turn on buzzer/LED delay(5000); // Alert duration
digitalWrite(buzzerPin, LOW);
// Turn off buzzer/LED }
Blynk Code (door sensor variant):
#define BLYNK_PRINT Serial
#include <ESP8266WiFi.h>
#include <BlynkSimpleEsp8266.h>
BlynkTimer timer;
char auth[] = "xxxxx"; // Enter the authentication code sent by Blynk to your email
char ssid[] = "xxxxx"; // Enter your Wi-Fi SSID
char pass[] = "xxxxx"; // Enter your Wi-Fi password
int flag = 0;
void notifyOnButtonPress()
{
int isButtonPressed = digitalRead(D1);
if (isButtonPressed == 1 && flag == 0) {
Serial.println("Someone opened the door");
Blynk.notify("Alert: Someone opened the door");
flag = 1;
}
else if (isButtonPressed == 0) {
flag = 0;
}
}
void setup()
{
Serial.begin(9600);
Blynk.begin(auth, ssid, pass);
pinMode(D1, INPUT_PULLUP);
timer.setInterval(16000L, notifyOnButtonPress);
}
void loop()
{
Blynk.run();
timer.run();
}
6. Output:
Fig 2
7. Learning Outcome:
• IoT and Blynk Integration – Learn how to connect sensors with Blynk for
real-time monitoring and remote alerts.
• Sensor and Hardware Interfacing – Gain hands-on experience in working with
motion, door, or gas sensors and microcontrollers like ESP8266/ESP32.
• Alert Mechanisms – Implement real-time notifications via Blynk (push alerts,
email, or SMS) and physical alerts using buzzers or LEDs.
• Embedded Programming – Develop coding skills in C++ (Arduino IDE) or
MicroPython to process sensor data and trigger security actions.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
EXPERIMENT - 9
Student Name: Abhinav Paswan UID: 22BET10332
Branch: BE-CSE Section/Group: BET-IOT-701‘B’
Semester: 6th Date of Performance: 07/04/25
Subject Name: Foundation of Cloud IOT Edge ML Subject Code: 22ITP-367
1. Aim: Automate quality inspection of products using cameras and edge computing.
2. Objective: To design and implement an automated quality inspection system for products using
cameras and edge computing.
3. Prerequisites:
Software Requirements:
1. Python (version 3.8 or above)
2. TensorFlow/Keras or PyTorch
3. OpenCV for image processing
4. Flask/Django for backend integration
5. MQTT or HTTP protocols for IoT data transfer
4. Procedure:
Step 1: Data Collection (Image Acquisition)
1. Cameras: Use high-quality cameras (e.g., industrial cameras or machine vision cameras) to
capture product images. Depending on the application, cameras can be positioned at various
points along the production line.
2. Lighting: Proper lighting is essential to ensure clear and consistent image capture. Lighting can
be adjusted to reduce shadows and enhance defect visibility.
3. Trigger Mechanism: Use sensors (like proximity sensors or conveyors) to trigger the camera
when a product passes through.
Step 2: Preprocessing the Images
4. Image Preprocessing: Raw images may need preprocessing to enhance features and remove
noise.
5. Resizing to a fixed dimension (e.g., 224x224 pixels).
6. Normalization to scale pixel values.
7. Data Augmentation (if training a model) to simulate different conditions such as rotations or
lighting variations.
Step 3: Defect Detection Model
8. Convolutional Neural Networks: A CNN is a deep learning model ideal for image classification
tasks, like defect detection. You can either train your own CNN or use pre-trained models for
defect classification.
9. Pre-trained Models for Feature Extraction: Use pre-trained models like ResNet or VGG16 for
feature extraction and fine-tune them on your dataset if you have a labeled dataset of defective
vs. non-defective products.
10. Inference on Edge Devices: The trained model is deployed to an edge computing device like a
Raspberry Pi, NVIDIA Jetson, or an industrial-grade embedded system for inference.
Step 4: Edge Computing
11. Real-Time Processing: Edge computing allows processing images locally to minimize latency
and avoid transferring large amounts of data to the cloud. This is particularly important for real-
time applications where immediate decisions are needed (e.g., stopping the production line or
rejecting defective products).
12. Hardware Selection: Use edge devices like Raspberry Pi, NVIDIA Jetson, or Intel NUC,
depending on the complexity of your model and the required processing power.
13. Model Optimization: For efficient inference on edge devices, models can be optimized by:
o Quantization: Reducing the precision of model weights (e.g., from float32 to int8) to speed
up inference without sacrificing much accuracy.
o Model Pruning: Removing unnecessary neurons to reduce the size of the model
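As an illustration of the quantization point above, here is a minimal sketch of post-training quantization with the TensorFlow Lite converter; it assumes a trained Keras model saved as in the Implementation section (the file names are illustrative).
import tensorflow as tf

# Load the trained Keras model (file name is illustrative)
model = tf.keras.models.load_model("defect_detection_model.h5")

# Convert to TensorFlow Lite with default post-training quantization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_model = converter.convert()

# Save the compact model for deployment on an edge device (Raspberry Pi, Jetson, etc.)
with open("defect_detection_model.tflite", "wb") as f:
    f.write(tflite_model)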
Step 5: Integration with Actuators
14. Rejecting Defective Products: Once the defect is detected, the system can trigger an actuator
(e.g., robotic arm, conveyor belt diverter) to reject or separate defective products from the
production line.
15. Alerting Operators: When a defect is detected, the system can send alerts to factory operators or
managers via SMS, email, or the IoT dashboard.
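One possible way to implement the operator alert from step 15 is to POST the event to a small Flask/Django backend (listed in the prerequisites) that forwards it by SMS, email, or to the IoT dashboard. A minimal sketch follows; the endpoint URL is an assumption.
import requests

ALERT_ENDPOINT = "http://localhost:5000/alerts"  # hypothetical backend endpoint

def send_alert(message):
    # POST a JSON alert; the backend decides how operators are notified
    resp = requests.post(ALERT_ENDPOINT, json={"type": "defect", "message": message}, timeout=5)
    print("Alert delivered, status:", resp.status_code)

send_alert("Defective product detected on line 1")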
5. Implementation / Code:
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
# Load and preprocess image
def preprocess_image(image_path):
    image = cv2.imread(image_path)
    image_resized = cv2.resize(image, (224, 224))
    image_normalized = image_resized / 255.0
    return np.expand_dims(image_normalized, axis=0)
# Load VGG16 pre-trained model
base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
# Define custom model for defect classification
model = Sequential([
base_model,
Flatten(),
Dense(512, activation='relu'),
Dense(2, activation='softmax') # 2 classes: defect or no defect
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Load the trained model (the previously trained/fine-tuned weights replace the untrained model above)
model = tf.keras.models.load_model("defect_detection_model.h5")
# Placeholder hooks; one possible send_alert implementation is sketched under step 15 of the procedure
def send_alert(message):
    print("ALERT:", message)
def trigger_actuator():
    print("Actuator triggered to reject the defective product")
# Predict function
def predict_defect(image_path):
    image = preprocess_image(image_path)
    prediction = model.predict(image)
    class_index = np.argmax(prediction)
    if class_index == 0:
        print("Product is defective")
        send_alert("Defective product detected!")
        trigger_actuator()
    else:
        print("Product is not defective")
# Example usage
predict_defect("product.jpg")
6. Screenshot:
Akarsh (22BCS150194)
7. Conclusion:
In this experiment, we explored essential techniques for image preprocessing, transfer learning, and
model deployment to build an effective deep learning pipeline for product defect classification. By
leveraging OpenCV and NumPy, we efficiently loaded, resized, and normalized images to ensure
optimal input for deep learning models.
Using transfer learning with a pre-trained VGG16 model, we extracted meaningful features and
customized the network to differentiate between defective and non-defective products, improving
classification accuracy. Finally, we successfully deployed the trained TensorFlow model,
demonstrating its capability to make real-time predictions and trigger automated responses, such as
alerts or actuator controls. This workflow highlights the practical implementation of deep learning for
industrial automation, enhancing defect detection efficiency and reliability.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
EXPERIMENT - 8
Student Name: Abhinav Paswan UID: 22BET10332
Branch: BE-IT Section/Group: BET-IOT-701/A
Semester: 6th Date of Performance: 25/03/25
Subject Name: Foundation of Cloud IOT Edge ML Subject Code: 22ITP-367
1. Aim: Design a CNN-based approach for vehicle recognition and traffic estimation based on IoT.
2. Objective: To develop a system to detect and classify vehicles using Convolutional Neural Networks
(CNNs).
3. Prerequisites:
Hardware Requirements:
1. IoT Cameras (CCTV, IP cameras, or edge devices)
2. Edge server or cloud access
3. Optional sensors: LiDAR, radar, or infrared
Software Requirements:
4. Python (version 3.8 or above)
5. TensorFlow/Keras or PyTorch
6. OpenCV for image processing
7. Flask/Django for backend integration
8. MQTT or HTTP protocols for IoT data transfer
4. Procedure:
Step 1: Hardware Installation
Mount IoT cameras at the required traffic points.
Connect additional sensors (if used) to the network or edge devices.
Ensure devices are powered and connected to the internet.
Step 2: Software Installation
Install Python and required libraries.
Clone the project repository.
Configure application settings in the config.py file.
Step 3: Deploy the System
For cloud-based deployment, set up the system on a cloud server.
For edge deployment, configure a local server and connect all devices.
Step 4: Model Training
Perform data augmentation to enhance dataset diversity.
Implement CNN architecture with convolutional, pooling, and fully connected layers.
Train the model for 10 epochs or more, adjusting based on dataset and resources.
Step 5: Testing & Inference
Test the trained model on new images.
Evaluate accuracy using different CNN configurations.
Optimize the model by adjusting epochs and learning rates.
5. Implementation / Code:
Connecting Google Drive
from google.colab import drive
drive.mount('/content/drive')
Installing necessary packages
!pip install tensorflow opencv-python matplotlib
Importing and Classification of Images
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Define paths
train_dir = '/content/drive/MyDrive/vehicle_dataset/train'
val_dir = '/content/drive/MyDrive/vehicle_dataset/validation'
# Data Augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=30,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
)
val_datagen = ImageDataGenerator(rescale=1./255)
# Load Images in Batches
train_generator = train_datagen.flow_from_directory(
train_dir, target_size=(128, 128), batch_size=32, class_mode='categorical')
val_generator = val_datagen.flow_from_directory(
val_dir, target_size=(128, 128), batch_size=32, class_mode='categorical')
Building a CNN Model for Vehicle Classification
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
# Build CNN Model
model = Sequential([
Conv2D(32, (3, 3), activation='relu', input_shape=(128, 128, 3)),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(128, (3, 3), activation='relu'),
MaxPooling2D(pool_size=(2, 2)),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.5),
Dense(train_generator.num_classes, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
Training the CNN Model for Vehicle Classification
history = model.fit(train_generator, epochs=10, validation_data=val_generator)
Saving the Trained CNN Model
model.save('/content/drive/MyDrive/vehicle_model.h5')
Performing Vehicle Classification on a New Image
from tensorflow.keras.preprocessing import image
import numpy as np
# Load trained model
model = tf.keras.models.load_model('/content/drive/MyDrive/vehicle_model.h5')
# Load and preprocess the test image
img_path = '/content/drive/MyDrive/vehicle_dataset/test_image.jpg'
img = image.load_img(img_path, target_size=(128, 128))
img_array = image.img_to_array(img) / 255.0
img_array = np.expand_dims(img_array, axis=0)
# Predict the class
predictions = model.predict(img_array)
predicted_class = np.argmax(predictions[0])
print(f'Predicted class: {predicted_class}')
Visualizing CNN Training Performance
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
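The aim also covers traffic estimation; a minimal, hedged sketch of how the trained classifier could feed it is shown below. It classifies every frame captured in a time window and counts recognized vehicles per class (the frames directory and the density thresholds are illustrative, and train_generator is assumed to be the generator defined earlier).
import os
from collections import Counter
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

model = tf.keras.models.load_model('/content/drive/MyDrive/vehicle_model.h5')
class_names = list(train_generator.class_indices.keys())  # assumes train_generator from above

def estimate_traffic(frames_dir):
    # Classify each captured frame and count recognized vehicles per class
    counts = Counter()
    for name in os.listdir(frames_dir):
        img = image.load_img(os.path.join(frames_dir, name), target_size=(128, 128))
        arr = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)
        counts[class_names[np.argmax(model.predict(arr)[0])]] += 1
    total = sum(counts.values())
    level = 'high' if total > 50 else 'moderate' if total > 20 else 'low'  # illustrative thresholds
    return counts, level

counts, level = estimate_traffic('/content/drive/MyDrive/vehicle_dataset/frames')  # illustrative path
print(counts, '-> traffic level:', level)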
6. Screenshot:
Figure 1. Output on test image (Akarsh Jaiswal 22BCS50194)
Figure 2. Visualisation of accuracy of model (Akarsh 22BCS50194)
7. Conclusion:
In this experiment, a CNN-based pipeline for vehicle recognition was built and evaluated end to end. Images stored on Google Drive were augmented and loaded in batches, a convolutional network with pooling, dropout, and fully connected layers was trained for 10 epochs, and the saved model was then used to classify new test images.
Training and validation accuracy were plotted to assess how well the model generalizes, and the classification output can support traffic estimation by counting recognized vehicles from IoT camera feeds over time. Overall, the experiment demonstrates how a deep learning model can be trained in the cloud (Google Colab with Google Drive storage) and later deployed to edge or cloud servers for real-time traffic monitoring.
DEPARTMENT OF
COMPUTER SCIENCE & ENGINEERING
Experiment 7
Student Name: Abhinav Paswan UID: 22BET10332
Branch: B.E. IT Section/Group: BET_IOT_701/A
Semester: 6th Date of Performance: 19/03/25
Subject Name: Foundation of Cloud IOT Edge ML Subject Code: 22ITP-367
1. Aim: Deploy and manage an edge computing environment using Terraform for
running real-time IoT workloads close to data sources.
2. Objective: To deploy and manage an edge computing environment using Terraform
for running real-time IoT workloads close to data sources.
3. Prerequisites: Terraform, AWS CLI, Cloud Platform - AWS S3
4. Procedure:
i. Log in to AWS:
Go to the AWS Management Console and log in with your credentials.
ii. Open Security Credentials from profile:
Here, create an access key
iii. Copy Access key and secret access key:
Download .csv file and copy from there.
iv. Download AWS CLI – Command Line Interface:
Select 64-bit Windows version and download it. Then install it.
v. Download terraform by HashiCorp:
Select the AMD64 binary for Windows, download it, and extract it.
vi. Open Command prompt – type aws configure:
Enter the copied access key and secret access key; the remaining prompts can be left at their defaults.
vii. Copy terraform file path and paste in environment variable:
Copy the path and paste it in PATH in Environment Variables.
viii. Check by opening cmd – terraform -v:
Verify using this command – terraform -v
ix. Open the project in VS Code:
Create 'main.tf' and insert the code from https://github.com/terraform-aws-modules/terraform-aws-s3-bucket#
x. Make an index.html file and upload it to the AWS bucket.
xi. Copy the project path and open command prompt:
Change directory to project folder.
xii. Type terraform init: Initialize Terraform
xiii. Type terraform plan: Plan Deployment
xiv. Type terraform apply -auto-approve: Deploy Infrastructure
xv. Open the AWS website endpoint link on browser.
xvi. Type terraform destroy -auto-approve: Destroy Infrastructure
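After the apply step (xiv) and before destroying the infrastructure, the deployment can also be verified programmatically rather than only in the browser. Below is a minimal sketch with Python and boto3, where the bucket name is an assumption standing in for the name defined in main.tf.
import boto3

BUCKET = "my-terraform-iot-bucket"  # assumption: the bucket name from main.tf

s3 = boto3.client("s3")

# Raises an error if the bucket does not exist or is not accessible
s3.head_bucket(Bucket=BUCKET)
print("Bucket exists:", BUCKET)

# Shows the index document configured for static website hosting (the uploaded index.html)
website = s3.get_bucket_website(Bucket=BUCKET)
print("Website index document:", website["IndexDocument"]["Suffix"])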
5. Screenshot:
Fig. 1.1 Retrieve Access key
Fig. 1.2 Install AWS CLI
Fig. 1.3 Install Terraform
Fig. 1.4 Configure AWS
Fig. 1.5 type ‘terraform init’
Fig. 1.6 type ‘terraform plan’
Fig. 1.7 type 'terraform apply -auto-approve'
Fig. 1.8 url created
Fig. 1.9 online website
6. Output:
I. Define Terraform configuration (main.tf) to set up edge computing instances.
II. Deploy with Terraform (init, plan, apply).
III. Automate management with Terraform modules.
IV. Monitor and scale edge nodes dynamically.
7. Learning Outcomes:
I. Understand Edge Computing: Its benefits in reducing latency and improving real-time
IoT processing.
II. Learn Terraform Basics: How to define infrastructure using code and automate
deployment.
III. Implement Cloud Infrastructure: Deploy virtual machines and configure security settings.
IV. Use Terraform Commands: init, plan, and apply to set up infrastructure.
8. Conclusion:
This experiment provided a comprehensive understanding of deploying and managing an edge
computing environment using Terraform for real-time IoT workloads. By utilizing AWS
services and Terraform, key cloud infrastructure elements were configured to optimize
IoT processing close to data sources, reducing latency. Through hands-on tasks such as
configuring AWS, setting up Terraform, and automating infrastructure deployment, the
experiment highlighted the practical benefits of edge computing. Moreover, it enhanced
knowledge of Terraform's role in automating cloud infrastructure management, offering
valuable insights into cloud and IoT integration.