SWL All Practicals (Et22036)
Date of Performance:
Date of Submission:
THEORY:
2. Data Insertion
To populate the shop's inventory, we use the INSERT INTO statement. This allows us to add
records for items like Rice, Milk, and Shampoo, specifying their category, available quantity,
and price. This data ensures that the shopkeeper has a structured view of the inventory.
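As an illustrative sketch (not part of the original practical, and using Python's built-in sqlite3 module in place of the MySQL server), the inventory table can be populated with INSERT INTO like this:

```python
import sqlite3

# An in-memory database stands in for the shop's MySQL database
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE inventory (
    item_name TEXT NOT NULL,
    category  TEXT NOT NULL,
    quantity  INTEGER,
    price     REAL
)""")

# INSERT INTO adds one structured record per item
cur.executemany(
    "INSERT INTO inventory (item_name, category, quantity, price) VALUES (?, ?, ?, ?)",
    [("Rice", "Grains", 100, 80.0),
     ("Milk", "Dairy", 50, 60.0),
     ("Shampoo", "Personal Care", 30, 120.0)],
)
conn.commit()
print(cur.execute("SELECT COUNT(*) FROM inventory").fetchone()[0])  # 3
```

The `?` placeholders keep the values out of the SQL string itself, which is the same idea as the `%s` placeholders used with mysql.connector later in this manual.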
FLOWCHART:
Start
End
OUTPUT:
CONCLUSION:
The practical successfully demonstrated the creation and basic operations of a database system
for a Daily Needs Shop using SQL commands.
DISCUSSION QUESTIONS:
1) List down the most commonly used SQL constructs in creating a database.
Ans: When creating a database, the following SQL constructs are commonly used:
• CREATE:
o CREATE DATABASE: Creates a new database.
o CREATE TABLE: Creates a new table within a database.
o CREATE INDEX: Creates an index on a table to improve query performance.
o CREATE VIEW: Creates a virtual table based on the result of a query.
• ALTER:
o ALTER TABLE: Modifies the structure of an existing table.
• DROP:
o DROP TABLE: Deletes a table from the database.
o DROP DATABASE: Deletes an entire database.
o DROP INDEX: Deletes an index from a table.
• CONSTRAINTS (used within CREATE TABLE or ALTER TABLE):
o PRIMARY KEY: Defines a primary key constraint.
o FOREIGN KEY: Defines a foreign key constraint.
o UNIQUE: Ensures uniqueness for a set of columns.
o NOT NULL: Ensures a column cannot contain null values.
o CHECK: Imposes a condition that must be met for each row.
• DATA TYPES (used within CREATE TABLE or ALTER TABLE):
o Integer types (e.g., INT, BIGINT)
o Character types (e.g., VARCHAR, CHAR)
o Date and Time types (e.g., DATE, TIMESTAMP)
o Binary types (e.g., BLOB, BYTEA)
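The constructs listed above can be exercised together in a short, self-contained sketch; sqlite3 is used here purely so the snippet runs without a MySQL server, and the table and column names are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# PRIMARY KEY, NOT NULL, UNIQUE and CHECK constraints declared inside CREATE TABLE
cur.execute("""CREATE TABLE items (
    item_id   INTEGER PRIMARY KEY,
    item_name TEXT NOT NULL UNIQUE,
    price     REAL CHECK (price >= 0)
)""")
cur.execute("INSERT INTO items VALUES (1, 'Rice', 80.0)")

# A row that violates a constraint is rejected with an IntegrityError
try:
    cur.execute("INSERT INTO items VALUES (2, 'Soap', -5.0)")   # fails the CHECK
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)

print(cur.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # 1 -- only the valid row
```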
REFERENCE:
1. “SQL and Python Programming”, Bryan Johnson, 1st Edition, 2019, KDP Print US.
2. “Python: The Complete Reference”, Martin C. Brown, 1st Edition, 2001, McGraw Hill.
S. B. JAIN INSTITUTE OF TECHNOLOGY, MANAGEMENT & RESEARCH, NAGPUR.
Lab Assignment No. 01
1. Aim: Construct a database of “List of Vendors” for a Honda Automotive Garage with the
following fields:
a) Name of Vendor
b) Name of Organization
c) Contact Number
Create a mechanism to update the database by adding the following fields:
i) Address of Vendor
ii) e-Mail ID
iii) Website, if any
Code:-
mysql> CREATE DATABASE honda_garage;
mysql> USE honda_garage;
mysql> CREATE TABLE vendors (
    -> vendor_id INT AUTO_INCREMENT PRIMARY KEY,
    -> vendor_name VARCHAR(100) NOT NULL,
    -> organization_name VARCHAR(100) NOT NULL,
    -> contact_number VARCHAR(15) NOT NULL
    -> );
mysql> ALTER TABLE vendors
    -> ADD COLUMN address VARCHAR(255),
    -> ADD COLUMN email VARCHAR(100),
    -> ADD COLUMN website VARCHAR(100);
AIM 2
Create a database of Customers for a branch of Axis Bank with the following fields:
a) Name of Account Holder
b) Account Number
c) Address
d) Contact Number
e) e-Mail ID
Create a mechanism to link the database by adding the following fields:
i) Aadhar Card Number
ii) PAN Card Number
Code:
mysql> CREATE DATABASE axis_bank;
mysql> USE axis_bank;
Database changed
mysql> CREATE TABLE customers (
    -> customer_id INT AUTO_INCREMENT PRIMARY KEY,
    -> account_holder_name VARCHAR(100) NOT NULL,
    -> account_number VARCHAR(20) NOT NULL UNIQUE,
    -> address VARCHAR(255),
    -> contact_number VARCHAR(15),
    -> email VARCHAR(100)
    -> );
mysql> ALTER TABLE customers
    -> ADD COLUMN aadhar_number VARCHAR(12) UNIQUE,
    -> ADD COLUMN pan_number VARCHAR(10) UNIQUE;
OUTPUT:
Practical No. 02
Aim: Consider the following Employee Tables:
Emp ID  Name      Department           Salary   Date of Joining  City
E101    Vivek     R&D                  145000   11-JUNE-2009     Nagpur
E103    Priyal    Product Development  120000   20-JULY-2018     Bangalore
E105    Shrushti  Product Development   80000   19-SEPT-2019     Nagpur
E106    Pranay    Product Development  100000   22-OCT-2018      Mumbai
THEORY:
In this practical, we are exploring several SQL operations commonly used in relational database
management systems (RDBMS) for querying and manipulating data. These operations focus on
retrieving, aggregating, and combining data from different tables.
1. Displaying All Fields of a Table: The SELECT * statement is used to display all fields
from a table. This operation retrieves every column and every row, giving a complete
view of the data stored in the table.
2. Counting Records: The COUNT() function is an aggregate function used to count the
number of rows in a table that meet certain conditions. It helps us determine how many
records exist, for example, counting the total number of employees in an employee table.
3. Calculating Total Values: The SUM() function is another aggregate function used to
calculate the total of a numeric column. In this case, it would be applied to sum the salary
column, providing the total salary paid to all employees.
4. Calculating the Average: The AVG() function calculates the average value of a numeric
column. It is commonly used to calculate metrics such as average salary, which gives
insights into the typical pay within the organization.
5. Sorting Data: The ORDER BY clause sorts data based on one or more columns. When
sorting in descending order, ORDER BY column_name DESC is used to arrange records
from highest to lowest, such as listing employees by salary or name.
6. Joining Tables: The JOIN operation is used to combine data from multiple tables based
on a common column. This allows us to retrieve related information from two or more
tables. For example, joining the employee table with the designation table to display
employee names along with their department and designation.
Department of Electronics and Telecommunication Engineering, S.B.J.I.T.M.R., Nagpur
Software Workshop Lab (PCCET605P)
These SQL operations form the backbone of querying and analyzing relational data, allowing for
flexible and efficient data management. They help in retrieving specific information, performing
calculations, and combining data from multiple sources to provide valuable insights.
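The operations described above (COUNT, SUM, AVG, ORDER BY, JOIN) can be sketched in one self-contained example; the sqlite3 module and the sample rows below are stand-ins for the MySQL session used in the practical:

```python
import sqlite3

# Toy versions of the employee and department tables from this practical
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (emp_id TEXT, name TEXT, dept_id INTEGER, salary REAL)")
cur.execute("CREATE TABLE department (dept_id INTEGER, dept_name TEXT)")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?, ?)",
                [("E101", "Vivek", 1, 145000), ("E103", "Priyal", 2, 120000),
                 ("E105", "Shrushti", 2, 80000)])
cur.executemany("INSERT INTO department VALUES (?, ?)",
                [(1, "R&D"), (2, "Product Development")])

count = cur.execute("SELECT COUNT(*) FROM employee").fetchone()[0]     # counting records
total = cur.execute("SELECT SUM(salary) FROM employee").fetchone()[0]  # total salary
avg = cur.execute("SELECT AVG(salary) FROM employee").fetchone()[0]    # average salary
# JOIN on the common dept_id column, sorted by salary in descending order
rows = cur.execute("""SELECT e.name, d.dept_name
                      FROM employee e JOIN department d ON e.dept_id = d.dept_id
                      ORDER BY e.salary DESC""").fetchall()
print(count, total, avg)   # 3 345000.0 115000.0
print(rows[0])             # ('Vivek', 'R&D')
```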
FLOWCHART:
PROGRAM:
DISCUSSION QUESTIONS:
1) List down the operations that can be performed on a given database.
ANS-1] Data Definition Operations (DDL):
• CREATE: To create tables in the database (although this isn't explicitly mentioned, it's
implied for creating the necessary tables).
• ALTER: Modify the structure of the table if needed.
2] Data Manipulation Operations (DML):
• SELECT: Used to retrieve data from tables.
• INSERT: Adds data into tables (implied, though not explicitly mentioned).
• UPDATE: Modifies existing records (could be used to update fields).
• DELETE: Removes rows of data from the table (could be used, though not explicitly
mentioned).
3] Data Querying Operations:
• JOIN: Combines data from two or more tables, such as retrieving employee names with
their department and designation.
• ORDER BY: Sorts the data, e.g., sorting employee names in descending order.
• GROUP BY: Groups data (could be used for aggregate operations like counting
employees or summing salaries).
1. INNER JOIN: Retrieves rows with matching values in both tables. Non-matching rows
are excluded.
2. LEFT JOIN (LEFT OUTER JOIN): Retrieves all rows from the left table and
matching rows from the right table. Non-matching rows in the right table return NULL.
3. RIGHT JOIN (RIGHT OUTER JOIN): Retrieves all rows from the right table and
matching rows from the left table. Non-matching rows in the left table return NULL.
4. FULL JOIN (FULL OUTER JOIN): Retrieves all rows from both tables, with NULL
values where there is no match.
5. CROSS JOIN: Returns the Cartesian product of two tables, creating all possible row
combinations.
6. SELF JOIN: Joins a table with itself, often used for hierarchical data or comparisons
within the same table.
JOINs allow for efficient data retrieval from multiple tables, maintaining relational integrity
and reducing redundancy.
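The difference between INNER JOIN and LEFT JOIN can be seen in a minimal sketch (sqlite3 is used so the example runs without a server; the tables are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (name TEXT, dept_id INTEGER)")
cur.execute("CREATE TABLE dept (dept_id INTEGER, dept_name TEXT)")
# Priyal's dept_id 99 has no matching row in dept
cur.executemany("INSERT INTO emp VALUES (?, ?)", [("Vivek", 1), ("Priyal", 99)])
cur.execute("INSERT INTO dept VALUES (1, 'R&D')")

inner = cur.execute("""SELECT e.name, d.dept_name
                       FROM emp e JOIN dept d ON e.dept_id = d.dept_id""").fetchall()
left = cur.execute("""SELECT e.name, d.dept_name
                      FROM emp e LEFT JOIN dept d ON e.dept_id = d.dept_id""").fetchall()
print(inner)  # [('Vivek', 'R&D')] -- non-matching rows excluded
print(left)   # [('Vivek', 'R&D'), ('Priyal', None)] -- unmatched right side is NULL
```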
REFERENCE:
1. “SQL and Python Programming”, Bryan Johnson, 1st Edition, 2019, KDP Print US.
2. “Python: The Complete Reference”, Martin C. Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 02
Aim: 1. Construct a Food Inventory Table for Zomato as given below:
1. Display all food items along with their stock and expiry date.
2. Find the total stock of items in the "Spreads" category.
3. Retrieve the details of food items that are expiring in the next 6 months.
4. Calculate the average price of items in the "Dairy" category.
5. Retrieve the names of suppliers who provide items with stock greater than 50.
6. Display all food items along with their supplier names (using JOIN).
7. Retrieve the names and prices of items priced above 200, along with their category and
supplier location.
Code
CREATE DATABASE zomato_inventory;
mysql> USE zomato_inventory;
mysql> CREATE TABLE food_inventory (
-> item_id VARCHAR(10) PRIMARY KEY,
-> item_name VARCHAR(100),
-> category VARCHAR(50),
-> price DECIMAL(10, 2),
-> stock INT,
-> expiry_date DATE
-> );
mysql> INSERT INTO food_inventory (item_id, item_name, category, price, stock, expiry_date)
-> VALUES
-> ('F101', 'Basmati Rice', 'Grains', 80.00, 100, '2026-01-01'),
-> ('F102', 'Brown Bread', 'Bakery', 40.00, 50, '2025-01-15'),
-> ('F103', 'Almond Butter', 'Spreads', 250.00, 30, '2025-02-20'),
-> ('F104', 'Whole Milk', 'Dairy', 60.00, 200, '2024-12-10'),
-> ('F105', 'Organic Honey', 'Spreads', 300.00, 25, '2026-06-01');
mysql> SELECT * FROM food_inventory;
mysql> SELECT *
-> FROM food_inventory
-> WHERE expiry_date BETWEEN CURDATE() AND DATE_ADD(CURDATE(),
INTERVAL 6 MONTH);
mysql> SELECT AVG(price) AS average_price_dairy
-> FROM food_inventory
-> WHERE category = 'Dairy';
mysql> SELECT DISTINCT sd.supplier_name
-> FROM supplier_details sd
-> JOIN food_inventory fi
-> ON fi.item_id LIKE CONCAT('F', SUBSTRING(sd.supplier_id, 2));
Empty set (0.37 sec)
OUTPUT:
AIM2
Construct a Clothing Inventory Table for Amazon Outlet:
CODE
mysql> CREATE DATABASE amazon_outlet;
Database changed
-> );
mysql> INSERT INTO clothing_inventory (item_id, item_name, category, price, stock, supplier)
-> VALUES
-> );
-> VALUES
mysql> SELECT *
OUTPUT
Practical No. 03
Aim: Develop a Database “Students” of an Institute. Create a table with following attributes:
a) Roll Number
b) Name of the student
c) Department
d) Year
e) Section
f) Contact Number
g) Email ID
Add another attribute “CGPA” to the above-mentioned table. Insert details of at
least 10 students taken from the end user in the table and display the detailed table. Generate
the sorted reports based on the following criteria:
a) List of students with roll numbers in increasing order
b) List of the students with decreasing order of CGPA.
c) List the students with the Minimum and maximum CGPA.
THEORY:
In this practical, we will be working with a database management system (DBMS) to create and
manipulate a database called “Students” using MySQL. The database will store information
about students in an institute, including their roll number, name, department, year, section,
contact number, e-mail ID, and CGPA in tabular form. Each row in the table represents a tuple
consisting of data related to each student, and each column represents a specific attribute of that
student.
The first step is to design the table structure by specifying the attributes and their data types.
Based on the given requirements, the table structure will include the following attributes:
• Roll Number: This attribute will store a unique identification number assigned to
each student.
• Name of the student: This attribute will store the name of the student.
• Department: This attribute will store the department in which the student is
enrolled.
• Year: This attribute will store the year in which the student is studying.
• Section: This attribute will store the section in which the student is assigned.
• Contact Number: This attribute will store the contact number of the student.
• Email ID: This attribute will store the email address of the student.
• CGPA: This attribute will store the Cumulative Grade Point Average of the
student.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost",
    user="root",
    password="password",
    database="student_in"
)
cursor = conn.cursor()

for i in range(10):
    print(f"Enter details for Student {i + 1}:")
    roll = int(input("Roll Number: "))
    name = input("Name: ")
    department = input("Department: ")
    year = int(input("Year: "))
    section = input("Section: ")
    contact = input("Contact Number: ")
    email = input("Email ID: ")
    cgpa = float(input("CGPA: "))
    cursor.execute(
        "INSERT INTO StudentDetails (RollNo, Name, Department, Year, Section, ContactNo, EmailID, CGPA) "
        "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
        (roll, name, department, year, section, contact, email, cgpa)
    )

conn.commit()
print("Data inserted successfully!")
SQL code:
mysql> CREATE DATABASE student_in;
mysql> USE student_in;
mysql> CREATE TABLE IF NOT EXISTS StudentDetails (
    -> RollNo INT PRIMARY KEY,
    -> Name VARCHAR(100),
    -> Department VARCHAR(50),
    -> Year INT,
    -> Section VARCHAR(5),
    -> ContactNo VARCHAR(15),
    -> EmailID VARCHAR(100),
    -> CGPA FLOAT
    -> );
mysql> SHOW TABLES;
mysql> DESCRIBE StudentDetails;
mysql> SELECT * FROM StudentDetails;
mysql> SELECT * FROM StudentDetails ORDER BY RollNo ASC;
mysql> SELECT * FROM StudentDetails ORDER BY CGPA DESC;
mysql> SELECT * FROM StudentDetails WHERE CGPA = (SELECT MAX(CGPA) FROM StudentDetails);
mysql> SELECT * FROM StudentDetails WHERE CGPA = (SELECT MIN(CGPA) FROM StudentDetails);
DISCUSSION QUESTIONS:
1) Elaborate about the subsets of SQL.
Ans: SQL has four primary subsets:
• DDL (Data Definition Language): Defines the structure of the database (e.g., CREATE, ALTER,
DROP).
• DML (Data Manipulation Language): Manipulates data in tables (e.g., INSERT, UPDATE,
DELETE, SELECT).
• DCL (Data Control Language): Manages user access and permissions (e.g., GRANT, REVOKE).
• TCL (Transaction Control Language): Manages transactions in a database (e.g., COMMIT,
ROLLBACK).
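The TCL commands can be demonstrated in a short sketch; the accounts table is hypothetical, and sqlite3 stands in for MySQL (its Python API exposes COMMIT and ROLLBACK as conn.commit() and conn.rollback()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
cur.execute("INSERT INTO accounts VALUES ('A', 100.0)")
conn.commit()        # TCL: make the insert permanent

cur.execute("UPDATE accounts SET balance = balance - 500")
conn.rollback()      # TCL: undo the uncommitted update

balance = cur.execute("SELECT balance FROM accounts").fetchone()[0]
print(balance)       # 100.0 -- the rolled-back update left no trace
```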
REFERENCE:
1. “SQL and Python Programming”, Bryan Johnson, 1st Edition, 2019, KDP Print US.
2. “Python: The Complete Reference”, Martin C. Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 03
Aim: 1. Develop a database "Employees" of Persistent Systems Limited, Nagpur.
Create a table with the following attributes:
a) Employee ID
b) Name of the employee
c) Department
d) Designation
e) Salary
f) Contact Number
g) e-Mail ID
h) Date of Joining
Add another attribute "Performance Rating" in the table. Insert details of at least 10 employees
taken from the end user in the table and display the detailed database.
Generate sorted reports based on the following criteria:
a) List of employees with Employee ID in an ascending order.
b) List of employees with descending order of Performance Rating.
c) List of employees with the minimum and maximum salary.
d) Display the name of employee whose name contain letter “y”
Date of Performance:__________
Date of Submission: __________
Code:-
mysql> CREATE DATABASE PersistentEmployees;
mysql> USE PersistentEmployees;
Database changed
mysql> CREATE TABLE EmployeeDetails (
-> EmployeeID INT AUTO_INCREMENT PRIMARY KEY,
-> Name VARCHAR(100) NOT NULL,
-> Department VARCHAR(50) NOT NULL,
-> Designation VARCHAR(50) NOT NULL,
-> Salary DECIMAL(10, 2) NOT NULL,
-> ContactNumber VARCHAR(15),
-> EmailID VARCHAR(100) UNIQUE NOT NULL,
-> DateOfJoining DATE NOT NULL
-> );
mysql> ALTER TABLE EmployeeDetails
-> ADD COLUMN PerformanceRating VARCHAR(10);
mysql> DESCRIBE EmployeeDetails;
mysql> SELECT * FROM EmployeeDetails
    -> ORDER BY EmployeeID ASC;
mysql> SELECT * FROM EmployeeDetails
    -> ORDER BY PerformanceRating DESC;
mysql> SELECT * FROM EmployeeDetails
    -> ORDER BY Salary ASC
    -> LIMIT 1;
mysql> SELECT * FROM EmployeeDetails
    -> ORDER BY Salary DESC
    -> LIMIT 1;
mysql> SELECT Name FROM EmployeeDetails
    -> WHERE Name LIKE '%y%';
cursor.execute(
    "INSERT INTO Employee (EmployeeID, Name, Department, Designation, Salary, "
    "ContactNumber, Email, DateOfJoining, PerformanceRating) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)",
    (emp_id, name, department, designation, salary, contact_number, email,
     date_of_joining, performance_rating)
)
conn.commit()
print("Data inserted successfully!")
conn.close()
OUTPUT:
AIM 2: Create a database "Books" for Atal Bihari Vajpeyee Public Library.
Construct a table with the following fields:
a) Title of the book
b) Name of Author
c) Publication House
d) Year of Publication
e) ISBN Number
f) Language
g) Price
h) Available Copies
Insert details of at least 10 books taken from the end user in the table and display the detailed
database.
Generate sorted reports based on the following criteria:
a) List of books by ISBN Numbers in ascending order.
b) List of books by Year of Publication in a descending order.
c) Display the names of books whose price is between Rs. 500 and Rs. 1000.
Create a mechanism to search the availability of the book in the library based on:
a) Keyword in the Title of Book
b) Name of Author
c) ISBN Number
SQL CODE:
mysql> CREATE DATABASE Books;
mysql> USE Books;
SELECT * FROM BookDetails
ORDER BY ISBNNumber ASC;
SELECT * FROM BookDetails
ORDER BY YearOfPublication DESC;
SELECT Title, Price
FROM BookDetails
WHERE Price BETWEEN 500 AND 1000;
SELECT * FROM BookDetails
WHERE Title LIKE '%ABC%';
SELECT * FROM BookDetails
WHERE AuthorName LIKE '%Z%';
SELECT * FROM BookDetails
WHERE ISBNNumber = '12345';
Python Code:
import mysql.connector
conn = mysql.connector.connect(
host="localhost",
user="root",
password="password",
database="Books"
)
cursor = conn.cursor()
# Insert details of 4 books with short names
books = [
    ("ABC", "X", "Pub1", 2020, "12345", "English", 250.00, 5),
    ("DEF", "Y", "Pub2", 2019, "67890", "English", 300.00, 3),
    ("GHI", "Z", "Pub3", 2021, "11223", "English", 150.00, 7),
    ("JKL", "A", "Pub4", 2018, "44556", "English", 200.00, 4)
]
cursor.executemany(
    "INSERT INTO BookDetails (Title, AuthorName, PublicationHouse, YearOfPublication, "
    "ISBNNumber, Language, Price, AvailableCopies) "
    "VALUES (%s, %s, %s, %s, %s, %s, %s, %s)",
    books
)
conn.commit()
conn.close()
Practical No. 04
Aim: Examine the provided image “moonlanding.png”, which is heavily contaminated with
periodic noise. Clean up the noise using the Fast Fourier Transform (FFT). Find and use the 2-D
FFT function in “scipy.fftpack”, and plot the FFT spectrum of the image. The spectrum consists
of high and low frequency components. The noise is contained in the high-frequency part of the
spectrum, so set some of those components to zero (Hint: use array slicing). Apply the inverse
Fourier transform to reconstruct the resulting image.
OBJECTIVE/EXPECTED LEARNING OUTCOME:
The objectives and expected learning outcome of this practical are:
• To get familiar with the different methods available in SciPy Library.
• To learn and apply the Fast Fourier Transform Package (FFTPack) of SciPy Library for
image processing.
THEORY:
The Fast Fourier Transform (FFT) is an efficient algorithm for computing the Discrete Fourier
Transform (DFT) of a sequence or array of data. The DFT is a mathematical transformation that
converts a signal from the time or spatial domain into the frequency domain, revealing the
amplitudes and phases of different frequency components present in the signal.
In the SciPy library, the Fast Fourier Transform (FFT) functionality is provided by the scipy.fft
module. This module offers several functions for performing FFT computations, including both
one-dimensional (1-D) and two-dimensional (2-D) FFTs. The FFT implementation used in
SciPy is based on the principles of the Cooley-Tukey algorithm.
The scipy.fft module provides various functions for performing FFT computations. The main
functions include:
● fft(): Computes the 1-D FFT, transforming a signal into the frequency domain.
● ifft(): Computes the 1-D inverse FFT, which transforms the signal from the frequency
domain back to the time domain.
● fft2(): Computes the 2-D FFT of an array, such as an image.
● ifft2(): Computes the 2-D inverse FFT, used to reconstruct an image from its spectrum.
Efficiency and Performance: The SciPy FFT implementation in the scipy.fft module is highly
optimized for performance. It leverages efficient algorithms and techniques, such as
vectorization and utilizing lower-level libraries like FFTPACK or Intel MKL, to accelerate the
computations.
The FFT functions in SciPy provide additional options to control the behavior of the FFT
computation. These options include specifying the input data type, choosing the size and shape
of the output, selecting different FFT algorithms (e.g., FFTPACK, FFTW), applying
normalization, and more.
By using the functions available in the scipy.fft module, you can efficiently compute the FFT
and inverse FFT of one-dimensional or two-dimensional data in the SciPy library. These
functions provide a convenient way to analyze signals and images in the frequency domain,
enabling tasks such as spectral analysis, filtering, deconvolution, and more.
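The same workflow applied to the image below (forward FFT, zero the unwanted frequency bins, inverse FFT) can be sketched in 1-D with scipy.fftpack; the test signal and the 10 Hz cutoff here are chosen purely for illustration:

```python
import numpy as np
from scipy import fftpack

# A 1 Hz sine sampled at 64 Hz for 1 second, plus a weak 20 Hz interferer
fs = 64
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

# The FFT reveals both components as spectral peaks
spectrum = fftpack.fft(signal)
freqs = fftpack.fftfreq(len(signal), d=1 / fs)
magnitude = np.abs(spectrum)

# Zero the high-frequency bins (the "noise") and invert, as in the practical
filtered = spectrum.copy()
filtered[np.abs(freqs) > 10] = 0
cleaned = fftpack.ifft(filtered).real

print(freqs[np.argmax(magnitude[:fs // 2])])                   # 1.0 -- dominant component
print(np.allclose(cleaned, np.sin(2 * np.pi * t), atol=1e-8))  # True -- interferer removed
```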
FLOWCHART:
PROGRAM:
import numpy as np
import matplotlib.pyplot as plt
from scipy import fftpack, ndimage

# Load the noisy image and compute its 2-D FFT
image = plt.imread('moonlanding.png')
image_fft = fftpack.fft2(image)
def plot_spectrum(image_fft):
from matplotlib.colors import LogNorm
plt.imshow(np.abs(image_fft), norm=LogNorm(vmin=5))
plt.colorbar()
plt.figure()
plot_spectrum(image_fft)
plt.title('Fourier transform')
plt.show()
#APPLYING FILTERING
keep_fraction = 0.1
im_fft2 = image_fft.copy()
r, c = im_fft2.shape
im_fft2[int(r*keep_fraction):int(r*(1-keep_fraction))] = 0
im_fft2[:, int(c*keep_fraction):int(c*(1-keep_fraction))] = 0
plt.figure()
plot_spectrum(im_fft2)
plt.title('Filtered Spectrum')
plt.show()
im2 = fftpack.ifft2(im_fft2).real
plt.figure()
plt.imshow(im2, plt.cm.gray)
plt.title('Reconstructed image')
plt.show()

# Blurring the image
im_blur = ndimage.gaussian_filter(image, 4)
plt.figure()
plt.imshow(im_blur, plt.cm.gray)
plt.title('Blurred image')
plt.show()

# Edge strength
sobel_x = ndimage.sobel(image, axis=0)
sobel_y = ndimage.sobel(image, axis=1)
# Calculate edge strength as the magnitude of the gradient
edge_strength = np.hypot(sobel_x, sobel_y)
OUTPUT:
CONCLUSION:
By applying the Fast Fourier Transform (FFT) to the image, we were able to examine its
frequency spectrum. The high-frequency components of the spectrum correspond to the noise
in the image. By setting some of those high-frequency components to zero, we effectively
filtered out the noise. Applying the inverse Fourier transform to the modified spectrum
resulted in a cleaned version of the image, where the periodic noise was significantly reduced.
DISCUSSION QUESTIONS:
1) Illustrate the significance of SciPy Library in data processing.
Ans: Significance of SciPy in Data Processing:
• Numerical Operations: Fast computations for integration, interpolation, and linear
algebra.
• Signal & Image Processing: FFT, filtering, and edge detection.
• Statistical Analysis: Probability distributions, hypothesis testing, and curve fitting.
• Sparse Matrix Handling: Efficient operations on large datasets.
• Optimization: Tools for function minimization and equation solving.
• Interoperability: Seamless integration with NumPy, Matplotlib, and Pandas.
REFERENCE:
1. “SciPy and NumPy”, Eli Bressert, 2nd Edition, 2013, O’Reilly Media Inc.
2. “Python: The Complete Reference”, Martin C. Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 04
AIM: 1. Read the famous painting “Mona Lisa” by Leonardo da Vinci in JPEG
format and perform the following operations:
a) Create a mechanism to remove the noise.
b) Sharpen the image using numpy & scipy libraries
c) Apply the Edge Detection Technique
d) Display the Original & Edge Detected output images
Name of Student: _Ronak Wanjari__
Roll No.: ET22036
Semester/Year: 6TH /3RD
Academic Session:_____________
Date of Performance:__________
Date of Submission: __________
CODE:
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image, ImageFilter
# Load image
image = Image.open("mona.jpeg").convert("RGB")
# Remove noise (Gaussian Blur)
denoised = image.filter(ImageFilter.GaussianBlur(1)) # Light blur to keep details
# Sharpen image (Unsharp Mask for better effect)
sharpened = denoised.filter(ImageFilter.UnsharpMask(radius=2, percent=200, threshold=3))
# Edge detection
edges = image.convert("L").filter(ImageFilter.FIND_EDGES)
# Show all images in one figure
titles = ["Original", "Denoised", "Sharpened", "Edges"]
images = [image, denoised, sharpened, edges]
plt.figure(figsize=(10, 8))
for i, (img, title) in enumerate(zip(images, titles), 1):
    plt.subplot(2, 2, i)
    plt.imshow(img, cmap="gray" if title == "Edges" else None)
    plt.title(title)
    plt.axis("off")
plt.tight_layout()
plt.show()
OUTPUT:
AIM: 2) Read your own colored image in PNG format and create a mechanism to
convert your colored image into the famous Microsoft “Sepia” image.
CODE:
from PIL import Image
# Open the original image
image = Image.open("ronak.jpeg")
image = image.convert("RGB")
# Apply sepia filter
pixels = image.load()
for x in range(image.width):
    for y in range(image.height):
        R, G, B = pixels[x, y]
        oR = min(255, int(R * 0.393 + G * 0.769 + B * 0.189))
        oG = min(255, int(R * 0.349 + G * 0.686 + B * 0.168))
        oB = min(255, int(R * 0.272 + G * 0.534 + B * 0.131))
        pixels[x, y] = (oR, oG, oB)
# Save and show the image
image.save("sepia.jpeg")
image.show()
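The per-pixel loop above can also be written as a single matrix multiplication with NumPy, which is much faster on large images; this vectorized version is a sketch using the same sepia coefficients:

```python
import numpy as np

# The same sepia coefficients as the loop version, arranged as a 3x3 matrix
SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])

def sepia(rgb: np.ndarray) -> np.ndarray:
    """Apply the sepia transform to an (H, W, 3) uint8 array in one shot."""
    out = rgb.astype(np.float64) @ SEPIA.T   # each pixel times the matrix
    return np.clip(out, 0, 255).astype(np.uint8)

# A single pure-red pixel: 255 times the first column of SEPIA, clipped and truncated
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)
print(sepia(pixel)[0, 0])  # [100 88 69]
```

An array of this shape can be obtained from the PIL image above with np.asarray(image), and converted back with Image.fromarray.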
OUTPUT:
Practical No. 05
Aim: Import an image “face.jpg” from the “scipy.ndimage” package and perform the
following operations on it:
a) Display the image using pyplot
b) Display the image up-side down
c) Rotate the image with an angle of 45°
d) Apply the filters and blur the image
e) Apply the edge detection technique to it
OBJECTIVE/EXPECTED LEARNING OUTCOME:
The objectives and expected learning outcome of this practical are:
• To get familiar with image processing techniques available in ndimage package of
SciPy Library.
• To apply various image processing techniques available in ndimage package of
SciPy Library.
THEORY:
SciPy is a scientific computing library for Python that provides functionality for numerical
integration, optimization, linear algebra, signal and image processing, statistical analysis, and
more. It builds upon NumPy, another Python library, and extends its capabilities by offering
additional tools and algorithms for scientific and technical computing tasks. SciPy is widely
used in fields such as physics, engineering, mathematics, and data science.
• Set the figure size and adjust the padding between and around the subplots.
• Use the imshow() method with the image data.
• Display the data as an image, i.e., on a 2-D regular raster.
• To display the figure, use the show() method.
c) Rotate the image with an angle of 45°: PIL (the Python Imaging Library) is a module that
provides image rotation, e.g. via its rotate() method; scipy.ndimage.rotate() can also be used.
[Figure: Original and Blurred images]
FLOWCHART:
PROGRAM:
from scipy import misc  # in newer SciPy, the sample face image is scipy.datasets.face()
import matplotlib.pyplot as plt
import numpy as np
image = misc.face()
plt.imshow(image)
plt.title("Original image")
plt.show()
up_side_down = np.flipud(image)
plt.imshow(up_side_down)
plt.title("Upside-down image")
plt.show()
OUTPUT:
CONCLUSION:
Here we have successfully performed various operations on an image using the SciPy library.
DISCUSSION QUESTIONS:
1) Determine the significance of ndimage package available in SciPy Library.
Ans: Significance of the scipy.ndimage package in SciPy:
• Filtering: Gaussian, median, and uniform filters for smoothing images.
• Morphological Operations: Erosion, dilation, opening, and closing.
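A small sketch of both feature groups, using hypothetical toy arrays:

```python
import numpy as np
from scipy import ndimage

# Filtering: a uniform (mean) filter spreads a single spike over its 3x3 window
spike = np.zeros((5, 5))
spike[2, 2] = 9.0
smoothed = ndimage.uniform_filter(spike, size=3)
print(smoothed[2, 2])  # 1.0 -- the spike averaged over nine pixels

# Morphology: erosion strips the boundary pixels of a binary object
square = np.zeros((7, 7), dtype=bool)
square[2:5, 2:5] = True                 # a 3x3 block of True pixels
eroded = ndimage.binary_erosion(square)
print(int(eroded.sum()))  # 1 -- only the centre pixel survives
```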
REFERENCE:
1. “SciPy and NumPy”, Eli Bressert, 2nd Edition, 2013, O’Reilly Media Inc.
2. “Python: The Complete Reference”, Martin C. Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 05
AIM:
1. Read a gray scale picture of “Mount Everest” in JPG format and perform
the following operations:
a) Convert the gray scale picture into a colored image
b) Create a mechanism to display the mirror image of colored Mount
Everest.
c) Create a mechanism to focus on the peak of Mount Everest (displaying
only the peak of Mount Everest)
Name of Student: _Ronak Wanjari__
Roll No.: ET22036
Semester/Year: 6TH /3RD
Academic Session:_____________
Date of Performance:__________
Date of Submission: __________
CODE:
import cv2
import matplotlib.pyplot as plt
# Load grayscale image
image = cv2.imread("mount.jpeg", cv2.IMREAD_GRAYSCALE)
# Convert to colored image
colored_image = cv2.applyColorMap(image, cv2.COLORMAP_JET)
# Create mirror image
mirror_image = cv2.flip(colored_image, 1)
# Crop to focus on peak
x, y, w, h = 200, 50, 300, 200
peak_image = colored_image[y:y+h, x:x+w]
# Display all images
titles = ["Grayscale", "Colored", "Mirror Image", "Focused Peak"]
images = [image, colored_image, mirror_image, peak_image]
plt.figure(figsize=(12, 6))
for i in range(4):
    plt.subplot(1, 4, i+1)
    plt.imshow(cv2.cvtColor(images[i], cv2.COLOR_BGR2RGB) if i > 0 else images[i],
               cmap='gray' if i == 0 else None)
    plt.title(titles[i])
    plt.axis('off')
plt.show()
OUTPUT:
CODE:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Create a figure
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Generate spiral data (example values chosen for illustration)
theta = np.linspace(0, 8 * np.pi, 500)
z = np.linspace(0, 2, 500)
x, y = np.cos(theta), np.sin(theta)
ax.plot(x, y, z)
# Animate rotation: update the viewing angle on every frame
def update(frame):
    ax.view_init(elev=30, azim=frame)
ani = animation.FuncAnimation(fig, update, frames=360, interval=20)
plt.show()
OUTPUT:
Practical No. 06
Aim: Perform statistical analysis using the scipy.stats module in Python. Explore various
statistical functions provided by SciPy to analyze a given dataset.
Read the dataset “brain_size.csv” containing different values of three different IQ measures,
details of subject weight, height and brain size (as per MRI) and perform the following
operations:
a) Display the first few rows of the dataset
b) Display the different data types in the dataset
c) Display the descriptive statistics such as mean, variance, skewness and kurtosis.
d) Generate the statistical report based on the gender of the subject
e) Visualize the dataset by displaying the weights & heights of each subject
f) Display the histogram of weights & heights
OBJECTIVE/EXPECTED LEARNING OUTCOME:
The objectives and expected learning outcome of this practical are:
• To get familiar with statistical modules available in stats package of SciPy Library.
• To apply various statistical modules available in stats package of SciPy Library.
THEORY:
PROGRAM:
import pandas as pd
import scipy
data=pd.read_csv('brain_size.csv',sep=";",na_values=".")
print("Displaying the contents of files:")
print(data)
print("displaying the first few rows of datasets")
print(data.head())
print("displaying the data types of columns:")
print(data.dtypes)
print("displaying the description statistics of given dataset")
print(data.describe())
gender = data.groupby('Gender')
print("Displaying the descriptive statistics - Gender-wise:")
print(gender.describe())
print("Displaying the Gender-wise mean values:")
print(gender.mean(numeric_only=True))
print("Displaying the Gender-wise group sizes:")
print(gender.size())
from scipy.stats import skew
print(" displaying the skewness")
print("Skewness for FSIQ:",skew(data['FSIQ']))
from scipy.stats import kurtosis
print("displaying the kurtosis")
print("kurtosis for FSIQ:",kurtosis(data['FSIQ']))
CONCLUSION:
Here we have successfully gained hands-on experience with the various statistical modules
available in the stats package of the SciPy Library.
DISCUSSION QUESTIONS:
1) Determine the significance of stats package available in SciPy Library.
Ans: Significance of the scipy.stats Package
The scipy.stats module is a crucial part of the SciPy library that provides various
statistical functions for data analysis, hypothesis testing, probability distributions, and
correlation analysis.
Key Features:
1. Probability Distributions – Supports both continuous (e.g., Normal, Exponential) and
discrete (e.g., Poisson, Binomial) distributions.
2. Descriptive Statistics – Computes important statistical measures like mean, variance,
standard deviation, skewness, and kurtosis.
3. Hypothesis Testing – Includes tests such as t-tests, chi-square tests, ANOVA, and
Kolmogorov-Smirnov tests.
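A minimal sketch of features 2 and 3 on made-up sample data:

```python
from scipy import stats

# Two small hypothetical samples with clearly different means
group_a = [5.1, 4.9, 5.0, 5.2, 4.8, 5.1]
group_b = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]

# Descriptive statistics (mean, variance, skewness, kurtosis) in one call
desc = stats.describe(group_a)
print(round(desc.mean, 3))            # mean of group_a

# Hypothesis testing: independent two-sample t-test
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(p_value < 0.05)                 # True -- the means differ significantly
```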
REFERENCE:
1. “SciPy and NumPy”, Eli Bressert, 2nd Edition, 2013, O’Reilly Media Inc.
2. “Python: The Complete Reference”, Martin C. Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 06
AIM: 1. Read a dataset of Populations of any Five States of India and give the
descriptive statistics by determining the values of mean, variance, skewness
and kurtosis for the population datasets
Name of Student: _Ronak Wanjari__
Roll No.: ET22036
Semester/Year: 6TH /3RD
Academic Session:_____________
Date of Performance:__________
Date of Submission: __________
CODE:
import pandas as pd
import scipy.stats as stats
# Load dataset from CSV
df = pd.read_csv("indian_population.csv")
# Compute Descriptive Statistics
mean_value = df["Population"].mean()
variance_value = df["Population"].var()
skewness_value = stats.skew(df["Population"])
kurtosis_value = stats.kurtosis(df["Population"])
print("Descriptive Statistics for Population Data:")
print(f"Mean: {mean_value}")
print(f"Variance: {variance_value}")
print(f"Skewness: {skewness_value}")
print(f"Kurtosis: {kurtosis_value}")
OUTPUT:
AIM: 2) Read the famous “Iris” dataset and give descriptive statistical analysis by
determining values of mean, variance, skewness and kurtosis. Conduct a
hypothesis testing by applying the t-tests to compare the means of any two
species.
CODE:
import pandas as pd
from scipy import stats
from sklearn.datasets import load_iris
# Load the Iris dataset
iris = load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['species'] = iris.target
# Descriptive statistics on the four measurement columns (excluding the species code)
features = df[iris.feature_names]
mean = features.mean()
variance = features.var()
skewness = features.skew()
kurtosis = features.kurtosis()
# Print Descriptive Statistics
print("Descriptive Statistics:")
print("Mean:\n", mean)
print("Variance:\n", variance)
print("Skewness:\n", skewness)
print("Kurtosis:\n", kurtosis)
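The aim also calls for a hypothesis test. A minimal sketch of a two-sample t-test comparing the mean sepal length of setosa (target 0) and versicolor (target 1) could look like this; the column names follow those provided by load_iris:

```python
import pandas as pd
from scipy import stats
from sklearn.datasets import load_iris

iris = load_iris()
df = pd.DataFrame(data=iris.data, columns=iris.feature_names)
df['species'] = iris.target

# Split sepal length by species: setosa (0) vs versicolor (1)
setosa = df.loc[df['species'] == 0, 'sepal length (cm)']
versicolor = df.loc[df['species'] == 1, 'sepal length (cm)']

# Two-sample t-test on the means
t_stat, p_value = stats.ttest_ind(setosa, versicolor)
print(f"t-statistic: {t_stat:.4f}, p-value: {p_value:.4g}")
if p_value < 0.05:
    print("Reject H0: the two species differ in mean sepal length.")
else:
    print("Fail to reject H0.")
```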
OUTPUT:
Practical No. 07
Aim: Import sales data of a Cosmetic Company. Analyze it in the following ways with
visualization using Matplotlib:
a) Read the total profit of all the months and visualize it using a Line Plot.
b) Read all product sales data and show it using a Multiline Plot.
c) Read face cream and face wash product sales data and show it using a Bar Chart.
d) Calculate total sales data for the last year for each product and show it using a Pie Chart.
OBJECTIVE/EXPECTED LEARNING OUTCOME:
The objectives and expected learning outcome of this practical are:
• To get familiar with visualization techniques/plots available in pyplot package of
Matplotlib Library.
• To apply various visualization techniques available in pyplot package of
Matplotlib Library.
THEORY:
Matplotlib is a powerful library in Python used for data visualization. It provides various plotting
functions through the pyplot module, which allows users to create high-quality graphs and
charts. The pyplot module is commonly used because of its ease of use and similarity to
MATLAB’s plotting functions.
Data visualization plays a crucial role in understanding trends, patterns, and insights from raw
data. Using different types of charts and graphs, we can make data more interpretable and
visually appealing.
1. Line Plot: displays data points joined by straight lines; useful for showing a trend over time, such as monthly profit.
2. Multiline Plot: overlays several line plots on the same axes so that multiple series can be compared.
3. Bar Chart: represents categories with rectangular bars whose heights are proportional to the values.
4. Pie Chart: shows each category's share of a whole as a sector of a circle.
By using Matplotlib, we can efficiently represent sales data, making it easier to analyze business
performance and make data-driven decisions.
FLOWCHART:
Start
End
PROGRAM:
# RONAK WANJARI
# ET22036
import pandas as pd
import matplotlib.pyplot as plt
# Load the CSV file
df = pd.read_csv('ronakcos.csv')
# Strip any leading or trailing spaces from column names
df.columns = df.columns.str.strip()
df.rename(columns={
    'face Cream': 'Face Cream',
    'Face Wash': 'Face Wash',
    'shampoo': 'Shampoo',
    'Sun cream': 'Sun Cream',
    'hair Jel': 'Hair Gel',
    'Conditioner': 'Conditioner',
    'Lipstick': 'Lipstick'
}, inplace=True)
# (a) Line plot for total profit per month
plt.figure(figsize=(10,5))
plt.plot(df['Month'], df['Total_Profit'], marker='o', linestyle='-', color='b', label='Total Profit')
plt.xlabel('Month')
plt.ylabel('Total Profit')
plt.title('Total Profit per Month')
plt.xticks(rotation=45)
plt.legend()
plt.grid()
plt.show()
# (b) Multiline plot for all product sales (profit excluded so the product scales stay comparable)
df.set_index('Month', inplace=True)
df.drop(columns='Total_Profit').plot(kind='line', figsize=(12,6), marker='o')
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Sales of Different Products per Month')
plt.xticks(rotation=45)
plt.grid()
plt.show()
# (c) Bar chart for Face Cream and Face Wash sales
df[['Face Cream', 'Face Wash']].plot(kind='bar', figsize=(12,6))
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Face Cream & Face Wash Sales per Month')
plt.xticks(rotation=45)
plt.legend()
plt.show()
# (d) Pie chart for total sales of each product over the year
total_sales = df.sum()
total_sales = total_sales.drop('Total_Profit') # Excluding total profit from pie chart
total_sales.plot(kind='pie', autopct='%1.1f%%', figsize=(8,8))
plt.title('Total Sales Distribution per Product')
plt.ylabel('') # Remove y-label
plt.show()
OUTPUT:
CONCLUSION:
Here we have successfully gained hands-on experience with the visualization techniques
available in the pyplot package of the Matplotlib library.
DISCUSSION QUESTIONS:
1) Determine the significance of pyplot package available in Matplotlib Library.
Ans: The pyplot module in Matplotlib is a collection of functions that make plotting easier,
similar to MATLAB. It provides a simple interface to create and customize various types of plots
such as line graphs, bar charts, histograms, and scatter plots. It helps in quickly visualizing data
without requiring detailed knowledge of the underlying plotting mechanisms.
2) List out the important methods available in pyplot package & Matplotlib.
Ans:
pyplot Methods:
• plt.plot() – Creates line plots.
• plt.bar() – Creates bar charts.
• plt.hist() – Plots histograms.
• plt.pie() – Creates pie charts.
• plt.xlabel() / plt.ylabel() – Adds labels to axes.
• plt.title() – Adds a title to the plot.
• plt.legend() – Displays a legend.
• plt.xlim() / plt.ylim() – Sets limits for axes.
• plt.grid() – Adds grid lines.
• plt.savefig() – Saves the figure to a file.
• plt.show() – Displays the plot.
Matplotlib Methods (Beyond pyplot):
• matplotlib.figure.Figure – The top-level container for a plot; usually created via plt.figure() or plt.subplots().
• matplotlib.axes.Axes – The object on which data is drawn; obtained from fig.add_subplot() or plt.subplots().
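As a minimal sketch of the object-oriented Figure/Axes interface alongside the pyplot helpers, the following builds and saves a small line plot; the Agg backend and the file name quarterly_sales.png are chosen here only so the script runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; lets the script run headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))          # Figure and Axes objects
ax.plot([1, 2, 3, 4], [10, 20, 15, 25], marker='o', label='Sales')
ax.set_xlabel('Quarter')                        # counterpart of plt.xlabel()
ax.set_ylabel('Units')
ax.set_title('Quarterly Sales')
ax.legend()
ax.grid(True)
fig.savefig('quarterly_sales.png')              # counterpart of plt.savefig()
```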
REFERENCE:
1. “Hands-on Matplotlib: Learn Plotting and Visualizations with Python 3”, Ashwin Pajankar, 1st Edition, 2021, Kindle.
2. “Python: The Complete Reference”, Martin C Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 7
AIM: 1. Import a dataset of new recruitments in companies such as Microsoft,
Google, Amazon, IBM, Deloitte, Capgemini, Atos Origin, Amdocs, etc.
Generate & visualize reports of new recruitments using:
a) Bar Chart
b) Pie Chart
c) Customize Pie Chart
d) Doughnut Chart
Compare the new recruitments in IBM & Amdocs using visualization.
Name of Student: _Ronak Wanjari__
Roll No.: ET22036
Semester/Year: 6TH /3RD
Academic Session:_____________
Date of Performance:__________
Date of Submission: __________
CODE:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv('ronaknew.csv')
# a) Bar Chart
df.plot(kind='bar', x='Company', y='Number of Hires', color='skyblue', legend=False)
plt.title('New Recruitments by Company (Bar Chart)')
plt.ylabel('Number of Hires')
plt.xticks(rotation=45)
plt.show()
# b) Pie Chart
plt.pie(df['Number of Hires'], labels=df['Company'], autopct='%1.1f%%', startangle=140,
colors=plt.cm.Paired.colors)
plt.title('New Recruitments by Company (Pie Chart)')
plt.show()
# d) Doughnut Chart
plt.pie(df['Number of Hires'], labels=df['Company'], autopct='%1.1f%%', startangle=140,
colors=plt.cm.Paired.colors, wedgeprops={'width': 0.4})
plt.title('New Recruitments by Company (Doughnut Chart)')
plt.show()
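The aim also asks for a customized pie chart (part c) and an IBM vs Amdocs comparison. A sketch of both follows, assuming the CSV has 'Company' and 'Number of Hires' columns as in the code above; the hard-coded figures are hypothetical stand-ins for ronaknew.csv, and savefig() is used here only so the sketch runs without a display:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this illustration
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical hires (replace with df = pd.read_csv('ronaknew.csv'))
df = pd.DataFrame({
    'Company': ['Microsoft', 'Google', 'Amazon', 'IBM', 'Amdocs'],
    'Number of Hires': [120, 150, 200, 90, 60],
})

# c) Customized pie chart: explode the largest slice and add a shadow
explode = [0.1 if h == df['Number of Hires'].max() else 0.0
           for h in df['Number of Hires']]
plt.pie(df['Number of Hires'], labels=df['Company'], autopct='%1.1f%%',
        explode=explode, shadow=True, startangle=90)
plt.title('New Recruitments by Company (Customized Pie Chart)')
plt.savefig('customized_pie.png')
plt.close()

# Comparison: IBM vs Amdocs
subset = df[df['Company'].isin(['IBM', 'Amdocs'])]
plt.bar(subset['Company'], subset['Number of Hires'], color=['navy', 'orange'])
plt.title('New Recruitments: IBM vs Amdocs')
plt.ylabel('Number of Hires')
plt.savefig('ibm_vs_amdocs.png')
```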
AIM: 2) Import a dataset of the last 5 years' sales of different cars in India such as
BMW, Mercedes-Benz, Land Rover, Fortuner, etc. Generate visualization
reports using 3D: a) Bar Chart b) Pie Chart c) Doughnut Chart.
Visualize the comparative sales of BMW and Fortuner over the last 5 years.
CODE:
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
df = pd.read_csv("car_sales_data.csv")
# 3D Bar Chart
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
for i, brand in enumerate(df.columns[1:]):
    ax.bar(df['Year'], df[brand], zs=i, zdir='y', alpha=0.8, label=brand)
ax.set(title='3D Bar Chart: Car Sales in India', xlabel='Year', ylabel='Brands', zlabel='Sales')
plt.legend()
plt.show()
# 3D Pie Chart (Using shadow effect for 3D look)
total_sales = df.iloc[:, 1:].sum()
fig, ax = plt.subplots()
ax.pie(total_sales, labels=total_sales.index, autopct='%1.1f%%', startangle=140,
colors=plt.cm.Paired.colors, shadow=True)
ax.set_title('3D Pie Chart: Total Car Sales Distribution')
plt.show()
# 3D Doughnut Chart (Using shadow effect for 3D look)
fig, ax = plt.subplots()
ax.pie(total_sales, labels=total_sales.index, autopct='%1.1f%%', startangle=140,
colors=plt.cm.Paired.colors, wedgeprops={'width': 0.4}, shadow=True)
ax.set_title('3D Doughnut Chart: Car Sales Distribution')
plt.show()
# 3D Comparative Sales: BMW vs Fortuner
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.bar(df['Year'], df['BMW'], zs=0, zdir='y', alpha=0.8, label='BMW')
ax.bar(df['Year'], df['Fortuner'], zs=1, zdir='y', alpha=0.8, label='Fortuner')
ax.set(title='3D Comparative Sales: BMW vs Fortuner', xlabel='Year', ylabel='Brands',
zlabel='Sales')
plt.legend()
plt.show()
Practical No. 08
Aim: Demonstrate data manipulation and visualization by utilizing Pandas for data
manipulation and Matplotlib to create insightful visualizations.
Import a dataset related to the Scottish Hills from the file “scottish_hills.csv”.
The file “scottish_hills.csv” contains different fields such as Hill Name, Height, Latitude,
Longitude and OSGRID. Perform the following operations:
a) Display the contents of the dataset
b) Display the details of Hills in sorted order based on their heights
c) Visualize the relation between Height and Latitude using Scatter Plot
d) Apply the Linear Regression method to the Height and Latitude data and again visualize
using Scatter Plot.
e) Customize the Scatter Plot for insightful visualization.
Name of Student: Ronak S. Wanjari
Roll No.: ET22036
Semester/Year: 6th/3rd
Academic Session: 2024-2025
Date of Performance:
Date of Submission:
OBJECTIVE/EXPECTED LEARNING OUTCOME:
The objectives and expected learning outcome of this practical are:
• To get familiar with data manipulation and visualization techniques available in
Python libraries Pandas & Matplotlib.
• To apply various data manipulation and visualization techniques available in
Python libraries Pandas & Matplotlib.
THEORY:
Pandas Library:
Pandas is an open-source Python library that provides data structures and functions needed to
efficiently manipulate structured data.
• It provides Series (1D) and DataFrame (2D) structures.
• It allows for data filtering, grouping, merging, and aggregation.
• Supports file formats like CSV, Excel, JSON, and SQL.
• Handles missing values and time-series data effectively.
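These operations can be sketched on a tiny frame with the same fields the practical uses; the three hills and their heights are illustrative values, not read from scottish_hills.csv:

```python
import pandas as pd

# A tiny illustrative DataFrame with the same fields as scottish_hills.csv
hills = pd.DataFrame({
    'Hill Name': ['Ben Nevis', 'Ben Macdui', 'Braeriach'],
    'Height': [1345, 1309, 1296],
    'Latitude': [56.7969, 57.0704, 57.0783],
})

# Sorting, filtering and aggregation on the frame
print(hills.sort_values(by='Height', ascending=False))   # tallest first
print(hills[hills['Height'] > 1300])                     # boolean filtering
print(hills['Height'].mean())                            # simple aggregation
```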
Matplotlib Library:
Matplotlib is a Python plotting library used for visualizing data.
• Supports various plots like line, bar, scatter, histogram, and pie charts.
• Provides options for customization with labels, legends, gridlines, and styles.
• Helps in saving figures in different formats like PNG, PDF, and SVG.
• Useful for data analysis and trend visualization.
FLOWCHART:
Start
Print dataset
End
PROGRAM:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
data = pd.read_csv('scottish_hills.csv')
print(data)
sorted_hills = data.sort_values(by='Height', ascending=False)
print(sorted_hills)
plt.figure(figsize=(10, 6))
plt.scatter(data['Latitude'], data['Height'], color='blue', alpha=0.5)
plt.title('Scatter Plot of Height vs Latitude')
plt.xlabel('Latitude')
plt.ylabel('Height (meters)')
plt.grid(True)
plt.show()
X = data[['Latitude']]
y = data['Height']
model = LinearRegression()
model.fit(X, y)
y_pred = model.predict(X)
plt.figure(figsize=(10, 6))
plt.scatter(data['Latitude'], data['Height'], color='green', alpha=0.5, label='Data Points')
plt.plot(data['Latitude'], y_pred, color='red', linewidth=2, label='Regression Line')
plt.title('Height vs Latitude with Linear Regression')
plt.xlabel('Latitude')
plt.ylabel('Height (meters)')
plt.legend()
plt.grid(True)
plt.show()
plt.figure(figsize=(10, 6))
# Customized scatter: points coloured by height using a blue-to-red colour map
plt.scatter(data['Latitude'], data['Height'], c=data['Height'], cmap='coolwarm',
            s=100, alpha=0.7, edgecolors='black')
plt.colorbar(label='Height (meters)')
plt.title('Customized Scatter Plot of Height vs Latitude')
plt.show()
CONCLUSION:
Here we have successfully gained hands-on experience with the data manipulation and
visualization techniques available in the Python libraries Pandas & Matplotlib.
DISCUSSION QUESTIONS:
1) Determine the significance of Pandas & Matplotlib Libraries of Python.
ANS: Pandas:
• Used for data manipulation and analysis.
• Supports reading and writing data from CSV, Excel, JSON, and SQL.
Matplotlib:
• Supports various plots like line, bar, scatter, pie, histogram, etc.
• Enables saving plots in multiple formats like PNG, PDF, and SVG.
Matplotlib Features:
• Multiple Plot Types – Line, bar, pie, scatter, and histogram.
REFERENCE:
1. “Hands-on Matplotlib: Learn Plotting and Visualizations with Python 3”, Ashwin Pajankar, 1st Edition, 2021, Kindle.
2. “Python for Data Analysis”, Wes McKinney, 3rd Edition, 2022, O’Reilly Media.
3. “Python: The Complete Reference”, Martin C Brown, 1st Edition, 2001, McGraw Hill.
Lab Assignment No. 8
AIM: Develop a program to print the data on maps using Cartopy.
CODE:
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
# Create a figure and set the projection to PlateCarree
fig, ax = plt.subplots(subplot_kw={'projection': ccrs.PlateCarree()})
# Add map features
ax.add_feature(cfeature.LAND, edgecolor='black')
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linestyle=':')
# Set extent (longitude, latitude limits)
ax.set_extent([-180, 180, -90, 90]) # World map
# Add sample data (example: plotting a point)
lon, lat = 79.0882, 21.1458
ax.scatter(lon, lat, color='red', transform=ccrs.PlateCarree(), label="Nagpur")
# Add title and legend
plt.title("Simple Map with Cartopy")
plt.legend()
plt.show()
Practical No. 09
Aim: Develop a Web Application using “Streamlit” for Electronics Products.
Create a dataset related to the selling of Electronic Products at a showroom. Create a dashboard
application with following requirements:
a) Create a mechanism to upload the sales file.
b) Create a mechanism to preview the data of sales file.
c) Generate a summary of sales data
d) Create a mechanism to filter out sales details
e) Generate different plots related to sales data.
OBJECTIVE/EXPECTED LEARNING OUTCOME:
The objectives and expected learning outcome of this practical are:
• To get familiar with features for developing Web Applications & Websites using
Streamlit Web Framework.
• To apply various techniques available in Streamlit for developing Web
applications and websites.
THEORY:
Streamlit is an open-source Python framework for creating interactive web applications with
minimal coding. It is widely used in data science and machine learning due to its simplicity and
fast development process.
Key Features of Streamlit:
• Python-Based: No need for HTML, CSS, or JavaScript.
• Easy UI Components: Built-in widgets like buttons, sliders, and file uploaders.
• Real-Time Updates: Automatically refreshes when the script changes.
• Data Visualization Support: Integrates with Matplotlib, Seaborn, and Plotly.
• State Management: Uses st.session_state to retain user inputs.
• Fast Deployment: Can be hosted on Streamlit Cloud, Heroku, or AWS.
Concepts Used in This Web Application:
• Uploading CSV Files: Users can upload and preview electronics product sales data.
• Data Summary & Filtering: Generates statistics and allows filtering based on selected
criteria.
• Data Visualization: Displays sales trends using Line, Bar, and Scatter plots.
Applications of Streamlit:
• Data Dashboards, Machine Learning Apps, Stock Market Analysis, and IoT Monitoring.
FLOWCHART:
PROGRAM:
import streamlit as st
import pandas as pd
import matplotlib.pyplot as plt
# a) Upload the sales file
uploaded_file = st.file_uploader("Upload the sales file", type=["csv"])
if uploaded_file:
    sales_data = pd.read_csv(uploaded_file)
    if st.checkbox("Preview Data"):  # b) preview the data
        st.write(sales_data)
OUTPUT:
CONCLUSION:
Here we have successfully gained hands-on experience with the various techniques available in
Streamlit for developing web applications and websites.
DISCUSSION QUESTIONS:
1) List out the significant features of Streamlit.
Ans: Streamlit is a Python-based framework for creating interactive web applications for
machine learning and data science. Key features include:
1. Simplicity & Ease of Use – Streamlit provides a minimalistic API to build UI
components using Python scripts without requiring HTML, CSS, or JavaScript.
2. Widgets for User Interaction – Offers built-in widgets like sliders, buttons, text inputs,
and dropdowns to collect user input dynamically.
3. Live Code Reloading – Automatically updates the app in real-time when the script is
modified and saved.
4. Seamless Data Visualization – Supports popular libraries like Matplotlib, Plotly,
Seaborn, and Altair for easy integration of charts and graphs.
Lab Assignment No. 9
AIM: 1) Develop a Web Application using “Streamlit” for the dataset of an eCommerce Website.
CODE:
import streamlit as st
import pandas as pd
import matplotlib.pyplot as plt
st.title("Simple E-Commerce Data Analysis")
# Upload CSV file
uploaded_file = st.file_uploader("Upload a CSV file", type=["csv"])
if uploaded_file:
    df = pd.read_csv(uploaded_file)
    st.write("### Dataset Preview")
    st.write(df.head())
    st.write("### Dataset Summary")
    st.write(df.describe())
    # Choose column for histogram
    numeric_columns = df.select_dtypes(include=['number']).columns.tolist()
    if numeric_columns:
        column = st.selectbox("Select a column for histogram", numeric_columns)
        fig, ax = plt.subplots()
        ax.hist(df[column], bins=20, color='blue', edgecolor='black')
        ax.set_title(f"Histogram of {column}")
        st.pyplot(fig)
else:
    st.info("Please upload a CSV file.")
OUTPUT:
AIM: 2) Develop a Web Application based on the study of heart diseases for a Hospital.
CODE:
import streamlit as st
import pandas as pd
import matplotlib.pyplot as plt
st.title("Simple Heart Disease Analysis")
# Upload CSV file
uploaded_file = st.file_uploader("Upload Heart Disease Data (CSV)", type=["csv"])
if uploaded_file:
    df = pd.read_csv(uploaded_file)
    st.write("### Dataset Preview")
    st.write(df.head())
    st.write("### Summary Statistics")
    st.write(df.describe())
    # Select numeric column for visualization
    numeric_columns = df.select_dtypes(include=['number']).columns.tolist()
    if numeric_columns:
        column = st.selectbox("Select a column for histogram", numeric_columns)
        fig, ax = plt.subplots()
        ax.hist(df[column], bins=20, color='red', edgecolor='black')
        ax.set_title(f"Histogram of {column}")
        st.pyplot(fig)
else:
    st.info("Please upload a CSV file.")
OUTPUT: