Project Report Final
MASTER OF COMPUTER APPLICATIONS
By
NANDHINI R P
1913323037007
Mrs. U. Shantha Visalakshi
MCA DEPARTMENT
ETHIRAJ COLLEGE FOR WOMEN (AUTONOMOUS)
IMAGE PREDICTION
by
NANDHINI R P
1913323037007
In partial fulfilment for the award of the degree of
MASTER OF COMPUTER APPLICATIONS
is a bonafide report of work carried out by her under my guidance and supervision.
Guide
Place:
Date:
Examiner–2:……………………………………
(Signature and name of the Examiner)
CERTIFICATE OF ORIGINALITY
M.Phil., M.E., and that the project has not previously formed the basis for the award of any
other degree.
Date: NANDHINI R P,
1913323037007.
ACKNOWLEDGEMENT
Apart from one's own effort, the success of any project depends largely on the encouragement and guidelines of many others. I take this opportunity to express my gratitude to those people who have been instrumental in the success of the project.

I express my thanks to Dr. S. BHUVANESWARI, Principal i/c, for her support in including this subject in our curriculum. I express my deep sense of gratitude to our Head of the Department, Mrs. A. JOSEPHINE ANITHA, MCA, M.Phil., who gave me this wonderful opportunity and the support to develop this project.

I owe my great thanks to the many people who helped and supported me during the development and writing of this project report. My project was guided and corrected with attention and care by my project guide. She took immense care to go through my project and made the necessary corrections as and when needed. I express my deepest thanks to my project guide, Ms. K. VIJAYALAKSHMI, MCA, M.Phil., M.E., for her guidance, without which this work would not have been a reality. I am thankful to, and fortunate enough to get, constant encouragement, support and guidance from all the teaching staff of the Department of MCA, which helped me in successfully completing my project work. Also, I would like to extend my sincere regards to our lab assistant for her timely support.

I wish to avail myself of this opportunity to express my sense of gratitude and love to my friends and my beloved parents for their moral support, strength, and help in everything.
AUGUST 2020

NAME   : NANDHINI R. P
REG NO : 1913323037007
SEM    : I MCA

ETHIRAJ COLLEGE FOR WOMEN (AUTONOMOUS)
Chennai – 600 008.

CERTIFICATE

This is to certify that this is the bonafide record of work carried out under my supervision in the Computer Laboratory Course "COMP. LAB. III: DATABASE MANAGEMENT SYSTEMS", submitted to the MCA Department, Ethiraj College for Women (Autonomous) by
NANDHINI R. P
(1913323037007)

Faculty-In-Charge                         Head of the Department

Submitted for the Laboratory Examination at Ethiraj College for Women (Autonomous) on ……………..

Examiner – 1
(Signature and Name of the Examiner)

CONTENTS

S. NO   TITLE
1       INTRODUCTION
2       ABSTRACT
3       OBJECTIVES OF PROJECT
4       SYSTEM REQUIREMENT
5       TENSORFLOW AND KERAS
6       PYTHON
7       IMAGE AI
10      CLASSIFICATION OF MODELS
11      MULTIPLE CLASSIFICATION
12      CONCLUSION
13      APPENDIX
14      REFERENCE
CHAPTER 1
1. Introduction:
To recognize images and determine their poses in a scene, we need to find the correspondences between the features extracted from the image and those of the image models.
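The correspondence step described above can be sketched as a simple nearest-neighbour match between descriptor vectors. This is only a toy illustration of the idea, not the method of any particular library; the helper names are hypothetical.

```python
# Hypothetical sketch: match each image feature to the closest model
# feature by Euclidean distance between descriptor vectors.
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_features(image_descriptors, model_descriptors):
    """For each image descriptor, return the index of the closest model descriptor."""
    matches = []
    for desc in image_descriptors:
        distances = [euclidean(desc, m) for m in model_descriptors]
        matches.append(distances.index(min(distances)))
    return matches

image_feats = [(1.0, 2.0), (8.0, 9.0)]
model_feats = [(8.1, 9.2), (0.9, 2.1)]
print(match_features(image_feats, model_feats))  # -> [1, 0]
```

In a real system the descriptors would come from a feature extractor (e.g. corner or keypoint descriptors) and the matches would then be used to estimate the object's pose.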
2. OPENCV:

Fig. 4 shows the methodology of the proposed framework from left to right: (a) input image; (b) image pyramids with increasing FOV; (c) visual attention saliency maps; (d) multi-resolution attention map obtained by fusing (c) with different weights. To validate our model, termed Multi-Resolution AIM (MR-AIM), we ran experiments on a series of patterns as shown in Figure 5. First we considered a series of spatially distributed red dots of the same dimensions against a black background (Figures 5(a) and 5(b)). As can be seen in the saliency result (Figures 5(e) and 5(f)), there is a gradual decrease in saliency as one moves away from the fovea (red corresponds to regions of higher saliency, while blue corresponds to regions of lower saliency).
Figure 5 shows the saliency results for different spatial perturbations: (a)-(d) input image, (e)-(h) saliency result. Onsets are considered to drive visual attention in a dynamic environment, so in Figure 5(c) we next considered the arrival of new objects of interest within the fovea (red dot) and towards the periphery (yellow dot). The maximum response is obtained in the region around the yellow dot (Figure 5(g)). Next, we consider a movement of the yellow dot further away from the fovea (Figure 5(d)). Again we notice a slight shift in saliency, moving attention towards the center (Figure 5(h)). These experiments give us valuable information on the behaviour of our model when the object of interest is moving relative to the fovea.
Figure 6: Qualitative comparisons.
3.1 Objectives

One such technique is to create a model that predicts multiple images at the same time, using threading to process every image concurrently, so that it can be used in any application of this type.
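The threading idea in the objective above can be sketched with a thread pool from the standard library. This is a minimal illustration, assuming a placeholder `predict_image` function that stands in for a real model call; it is not part of any library.

```python
# Hypothetical sketch: process several images concurrently with a
# thread pool. predict_image is a stand-in for a real model call.
from concurrent.futures import ThreadPoolExecutor

def predict_image(path):
    # Placeholder for a real prediction on one image file.
    return f"prediction for {path}"

def predict_all(paths):
    """Run predict_image on every path concurrently, keeping input order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(predict_image, paths))

results = predict_all(["image1.jpg", "image2.jpg", "image3.jpg"])
print(results)
```

Because image prediction is dominated by I/O and library calls that release the interpreter lock, a thread pool lets several images be in flight at once while `pool.map` still returns the results in the order the paths were given.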
CHAPTER 4: SYSTEM REQUIREMENTS
5. TensorFlow:
TensorFlow is an end-to-end open source
platform for machine learning. It has a comprehensive, flexible
ecosystem of tools, libraries and community resources that lets
researchers push the state-of-the-art in ML and developers easily build
and deploy ML powered applications.
FEATURES OF TENSORFLOW:
KERAS:
Keras is an open-source neural-network library written
in Python. It is capable of running on top of TensorFlow, Microsoft
Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast
experimentation with deep neural networks, it focuses on being user-
friendly, modular, and extensible. It was developed as part of the
research effort of project ONEIROS (Open-ended Neuro-Electronic
Intelligent Robot Operating System), and its primary author and
maintainer is François Chollet, a Google engineer. Chollet also is the
author of the XCeption deep neural network model.
FEATURES OF KERAS:
KERAS VS TENSORFLOW:
Keras is a neural network library, while TensorFlow is an open-source library for a number of various tasks in machine learning. TensorFlow provides both high-level and low-level APIs, while Keras provides only high-level APIs. Keras is built in Python, which makes it far more user-friendly than TensorFlow.
CHAPTER 6
6. PYTHON 3
1. Introduction:
Python is an interpreted, high-level, general-purpose programming
language. Created by Guido van Rossum and first released in 1991,
Python's design philosophy emphasizes code readability with its
notable use of significant whitespace. Its language
constructs and object-oriented approach aim to
help programmers write clear, logical code for small and large-scale
projects.
Python is dynamically typed and garbage-collected. It supports
multiple programming paradigms,
including structured (particularly, procedural), object-oriented,
and functional programming. Python is often described as a "batteries
included" language due to its comprehensive standard library.
Python was conceived in the late 1980s as a successor to the ABC
language. Python 2.0, released in 2000, introduced features like list
comprehensions and a garbage collection system with reference
counting.
Python 3.0, released in 2008, was a major revision of the language that
is not completely backward-compatible, and much Python 2 code does
not run unmodified on Python 3.
The Python 2 language was officially discontinued in 2020 (first planned
for 2015), and "Python 2.7.18 is the last Python 2.7 release and
therefore the last Python 2 release." No more security patches or other
improvements will be released for it. With Python 2's end-of-life, only
Python 3.5.x and later are supported.
Python interpreters are available for many operating systems. A global
community of programmers develops and maintains CPython, a free
and open-source reference implementation. A non-profit organization,
the Python Software Foundation, manages and directs resources for
Python and CPython development.
# Sample program: Hello world
print('Hello, world!')

Output:
Hello, world!

To install packages:
python -m pip install SomePackage
CHAPTER 7
IMAGE AI:
Prediction Classes

To use the ImagePrediction class in your code, you will create a new instance of the class as seen below.
After creating a new instance of the ImagePrediction class, you can use
the functions below to set your instance property and start recognizing
objects in images.
.setModelTypeAsDenseNet(): This function sets the model type of the image recognition instance you created to the DenseNet model, which means you will be performing your image prediction tasks using the pre-trained "DenseNet" model you downloaded from the links above.
.setModelPath(): This function accepts a string which must be the path to the model file you downloaded and must correspond to the model type you set for your image prediction instance.

prediction.setModelPath("resnet50_weights_tf_dim_ordering_tf_kernels.h5")

– parameter model_path (required): This is the path to your downloaded model file.

.loadModel(): This function loads the model from the path you specified in the function call above into your image prediction instance.
results_array = multiple_prediction.predictMultipleImages(all_images_array, result_count_per_image=5)

for each_result in results_array:
    predictions, percentage_probabilities = each_result["predictions"], each_result["percentage_probabilities"]
    for index in range(len(predictions)):
        print(predictions[index], " : ", percentage_probabilities[index])
    print("-----------------------")
– parameter sent_images_array (required): This refers to a list that contains the paths to your image files, Numpy arrays of your images, or image file streams of your images, depending on the input type you specified.
from imageai.Prediction import ImagePrediction
import os

execution_path = os.getcwd()

prediction = ImagePrediction()
prediction.setModelTypeAsResNet()
prediction.setModelPath(os.path.join(execution_path, "resnet50_weights_tf_dim_ordering_tf_kernels.h5"))
prediction.loadModel()

predictions, probabilities = prediction.predictImage(os.path.join(execution_path, "image1.jpg"), result_count=10)
for eachPrediction, eachProbability in zip(predictions, probabilities):
    print(eachPrediction, " : ", eachProbability)
Find below the classes and their respective functions available for you to use. These classes can be integrated into any traditional Python program you are developing, be it a website, a Windows/Linux/macOS application, or a system that is part of a Local Area Network.
Once you have downloaded the model of your choice, you should
create a new instance of the ObjectDetection class as seen in the
sample below:
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
Once you have created an instance of the class, you can use the
functions below to set your instance property and start detecting
objects in images.
detector.setModelTypeAsYOLOv3()
detector.setModelTypeAsTinyYOLOv3()
.setModelPath(): This function accepts a string which must be the path to the model file you downloaded and must correspond to the model type you set for your object detection instance. Find example code and the parameters of the function below:

detector.setModelPath("yolo.h5")
.loadModel() , This function loads the model from the path you
specified in the function call above into your object detection instance.
Find example code below:
detector.loadModel()
– parameter output_image_path (required only if input_type = "file"): This refers to the file path to which the detected image will be saved. It is required only if input_type = "file".
– parameter minimum_percentage_probability (optional ) :
This parameter is used to determine the integrity of the
detection results. Lowering the value shows more objects
while increasing the value ensures objects with the highest
accuracy are detected. The default value is 50.
returned_image, detections = detector.detectObjectsFromImage(input_image="image.jpg", output_type="array", minimum_percentage_probability=30)
– parameter display_percentage_probability (optional): This parameter can be used to hide the percentage probability of each object detected in the detected image if set to False. The default value is True.

– parameter display_object_name (optional): This parameter can be used to hide the name of each object detected in the detected image if set to False. The default value is True.

– parameter extract_detected_objects (optional): This parameter can be used to extract and save/return each object detected in an image as a separate image. The default value is False.
By default, the function returns an array of dictionaries, with each dictionary corresponding to an object detected in the image. Each dictionary contains the following properties:
    name (string)
    percentage_probability (float)
    box_points (list of x1, y1, x2 and y2 coordinates)

detections = detector.detectObjectsFromImage(input_image="image.jpg", output_image_path="imagenew.jpg", minimum_percentage_probability=30)

If all required parameters are set and output_type = 'array', the function will return the detected image as an array together with the array of dictionaries, each containing:
    name (string)
    percentage_probability (float)
    box_points (list of x1, y1, x2 and y2 coordinates)

If extract_detected_objects = True and output_image_path is set to the file path where you want the detected image to be saved, the function will return:
    1. an array of dictionaries, with each dictionary corresponding to an object detected in the image and containing:
        name (string)
        percentage_probability (float)
        box_points (list of x1, y1, x2 and y2 coordinates)
    2. an array of string paths to the image of each object extracted from the image

If extract_detected_objects = True and output_type = 'array', the function will return:
    1. an array of the detected image
    2. an array of dictionaries as above
    3. an array of the images of each object detected
from imageai.Detection import ObjectDetection
import os

execution_path = os.getcwd()

detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath(os.path.join(execution_path, "yolo.h5"))
detector.loadModel()

detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path, "image.jpg"), output_image_path=os.path.join(execution_path, "imagenew.jpg"), minimum_percentage_probability=30)

for eachObject in detections:
    print(eachObject["name"], " : ", eachObject["percentage_probability"])
    print("--------------------------------")
CHAPTER 8
STEPS OF MACHINE LEARNING:
1. Gathering data
2. Preparing that data
3. Choosing a model
4. Training
5. Evaluation
6. Hyperparameter tuning
7. Prediction.
1. Gathering Data: Once you know exactly what you want and the equipment is in hand, it takes you to the first real step of machine learning: gathering data. This step is very crucial, as the quality and quantity of the data gathered will directly determine how good the predictive model will turn out to be. The data collected is then tabulated and called Training Data.
2. Data Preparation: After the training data is gathered, you move on to the next step of machine learning: data preparation, where the data is loaded into a suitable place and then prepared for use in machine learning training. Here, the data is first put together and then the order is randomized, as the order of the data should not affect what is learned. This is also a good time to do any visualizations of the data, as they will help you see whether there are any relevant relationships between the different variables, how you can take advantage of them, and whether there are any data imbalances present. The data also has to be split into two parts: the first part, used for training the model, will be the majority of the dataset, and the second part will be used for evaluating the trained model's performance. Other forms of adjustment and manipulation, such as normalization and error correction, take place at this step.
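The shuffle-and-split step described above can be sketched with the standard library alone. The 80/20 split fraction and the helper name are illustrative choices, not a fixed recipe.

```python
# A minimal sketch of randomizing the data order and splitting it into
# a training part and an evaluation part.
import random

def train_test_split(data, train_fraction=0.8, seed=42):
    """Shuffle a copy of the data, then split it at train_fraction."""
    shuffled = data[:]
    random.Random(seed).shuffle(shuffled)  # order must not affect learning
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

samples = list(range(10))
train_set, test_set = train_test_split(samples)
print(len(train_set), len(test_set))  # -> 8 2
```

Fixing the seed makes the split reproducible, which helps when comparing models trained on the same data.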
4. Training: After the previous steps are completed, you move on to what is often considered the bulk of machine learning, called training, where the data is used to incrementally improve the model's ability to predict. The training process involves initializing some random values, say A and B, for our model, predicting the output with those values, comparing the predictions with the actual outputs, and then adjusting A and B so that the predictions move closer to the true values. This process then repeats, and each cycle of updating is called one training step.
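The loop described above can be sketched for the simplest possible model, y = A·x + B, trained by plain gradient descent. This is a toy illustration under that assumption, not the procedure of any particular framework; the learning rate and step count are arbitrary choices.

```python
# A toy training loop: start with arbitrary A and B, predict, compare
# with the true outputs, and nudge A and B a little each training step.
def train(xs, ys, steps=2000, lr=0.01):
    A, B = 0.0, 0.0                      # arbitrary starting values
    n = len(xs)
    for _ in range(steps):               # each cycle is one training step
        # Gradients of the mean squared error with respect to A and B.
        grad_A = sum(2 * (A * x + B - y) * x for x, y in zip(xs, ys)) / n
        grad_B = sum(2 * (A * x + B - y) for x, y in zip(xs, ys)) / n
        A -= lr * grad_A
        B -= lr * grad_B
    return A, B

# Data generated from y = 2x + 1; training should recover A ≈ 2, B ≈ 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
A, B = train(xs, ys)
print(round(A, 2), round(B, 2))
```

Real models repeat exactly this cycle, only with many more parameters than A and B and with the gradients computed automatically by the framework.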
CHAPTER 10
TYPES OF CLASSIFICATION MODEL:
7 Types of Classification Algorithms:
1. Logistic Regression
2. Naïve Bayes
3. Stochastic Gradient Descent
4. K-Nearest Neighbours
5. Decision Tree
6. Random Forest
7. Support Vector Machine
Logistic Regression

Definition: Logistic regression is a machine learning algorithm for classification. In this algorithm, the probabilities describing the possible outcomes of a single trial are modelled using a logistic function.

Advantages: Logistic regression is designed for this purpose (classification), and is most useful for understanding the influence of several independent variables on a single outcome variable.

Disadvantages: Works only when the predicted variable is binary, assumes all predictors are independent of each other, and assumes the data is free of missing values.
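The "modelled using a logistic function" part above can be made concrete: the logistic (sigmoid) function squashes a linear score into a probability between 0 and 1, and the class is predicted by thresholding at 0.5. The weights below are made up for illustration, not learned from data.

```python
# A minimal sketch of logistic-regression prediction with the
# logistic (sigmoid) function. Weights and bias are illustrative.
import math

def sigmoid(z):
    """Logistic function: maps any real score to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(features, weights, bias):
    """Probability of the positive class for one sample."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(score)

p = predict_proba([2.0, 1.0], weights=[1.5, -0.5], bias=-1.0)
print(p > 0.5)  # classify as positive if the probability exceeds 0.5
```

In practice the weights and bias are fitted to training data by maximizing the likelihood of the observed labels; the prediction step stays exactly this simple.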
Naïve Bayes

Definition: The Naive Bayes algorithm is based on Bayes' theorem with the assumption of independence between every pair of features. Naive Bayes classifiers work well in many real-world situations, such as document classification and spam filtering.

Advantages: This algorithm requires a small amount of training data to estimate the necessary parameters. Naive Bayes classifiers are extremely fast compared to more sophisticated methods.

Disadvantages: Naive Bayes is known to be a bad estimator.
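The independence assumption above can be sketched with a toy spam filter: each word contributes an independent likelihood, and the likelihoods are simply multiplied together with the class prior. The priors and word probabilities below are invented for illustration, not estimated from a real corpus.

```python
# A toy Naive Bayes classifier: score each class by
# prior * product of per-word likelihoods, pick the highest.
def naive_bayes_class(words, priors, likelihoods):
    """Return the class with the highest prior times product of likelihoods."""
    best_class, best_score = None, -1.0
    for cls, prior in priors.items():
        score = prior
        for w in words:
            score *= likelihoods[cls].get(w, 1e-6)  # tiny value for unseen words
        if score > best_score:
            best_class, best_score = cls, score
    return best_class

priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {
    "spam": {"free": 0.30, "offer": 0.20, "meeting": 0.01},
    "ham":  {"free": 0.02, "offer": 0.03, "meeting": 0.25},
}
print(naive_bayes_class(["free", "offer"], priors, likelihoods))  # -> spam
```

This multiply-everything shortcut is exactly why Naive Bayes is so fast, and also why it is a poor probability estimator: the independence assumption rarely holds, so the scores are useful for ranking classes but not as calibrated probabilities.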
CHAPTER 11
MULTICLASS CLASSIFICATION:
CHAPTER 12
CONCLUSION
CHAPTER 13
APPENDIX
Sample Result
convertible : 52.459555864334106
sports_car : 37.61284649372101
pickup : 3.1751200556755066
car_wheel : 1.817505806684494
minivan : 1.7487050965428352
kite : 10.20539253950119
white_stork : 1.6472270712256432
CHAPTER 14
REFERENCE
1. ImageAI – https://github.com/OlafenwaMoses/ImageAI
2. Python – https://www.python.org/