Understanding AI Project Lifecycle

For class 12 - 2024-25

Uploaded by Pratibha
A Project is a temporary endeavour undertaken to create a unique product, service or result.
It can be defined as a series of activities aimed at bringing about clearly specified objectives within a defined time period and with a defined budget.
It is a unique set of coordinated activities/jobs, with definite starting and finishing points, undertaken by an individual or organization to meet specific objectives within defined schedule, cost and performance parameters.
What is a Project Lifecycle?

A project lifecycle is the sequence of phases that a project goes through from its initiation to its closure.
• Even though every project has a definite start and end, the particular objectives, deliverables, and activities vary widely. The project cycle provides the basic foundation of the actions that have to be performed in the project, irrespective of the specific work involved.
• Project lifecycles can range from predictive or plan-driven approaches to adaptive or change-driven approaches. In a predictive lifecycle, the specifics are defined at the start of the project, and any alterations to scope are carefully addressed. In an adaptive lifecycle, the product is developed over multiple iterations, and detailed scope is defined for an iteration only as that iteration begins.

Generally, every AI project lifecycle encompasses three main stages: project scoping, the design or build phase, and deployment in production.
However, the specificity of an AI project is that it does not stop at the implementation stage, but rather follows a cyclical process.
The process of developing a machine learning model is highly iterative. Often, you will find yourself returning to previous steps before proceeding to a subsequent one.
You start with a hypothesis or a burning question, such as "What do all our loyal customers have in common?" or flip it to ask "What do all our cancellations have in common?" Then you gather the required data, train a model with historical data, run current data to answer that question, and then act on the answer.
The steering group provides input along the way, but the data reflects the actual, not the hypothetical.
Using the lifecycle, you will always be able to answer questions such as how and why you created a particular model, how you will assess its accuracy and effectiveness, how you will use it in a production environment, and how it will evolve over time.
You will also be able to identify model drift and determine whether changes to the model based on incoming data are pointing you toward new insights or diverting you toward undesired changes in scope.
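The iterative loop described above can be sketched in miniature. This is an illustrative, self-contained example (plain Python, with a deliberately trivial "model" that predicts the mean of its training data); it shows training on historical data, scoring on current data, and flagging possible model drift when the error grows.

```python
def train(history):
    """'Train' a trivial model: predict the mean of the historical values."""
    return sum(history) / len(history)

def evaluate(model, current):
    """Mean absolute error of the model's predictions on current data."""
    return sum(abs(x - model) for x in current) / len(current)

historical = [10, 12, 11, 13, 12]   # data the model was built on
current    = [11, 12, 13, 12, 11]   # similar distribution: low error
drifted    = [25, 27, 26, 28, 30]   # shifted distribution: high error

model = train(historical)
print(evaluate(model, current))     # small error: model still fits incoming data
print(evaluate(model, drifted))     # large error: drift, revisit earlier steps
```

When the error on incoming data climbs like this, the cycle loops back: re-scope, re-acquire data, and retrain.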

What is Problem Scoping?

Problem Scoping refers to understanding a problem, finding out the various factors which affect it, and defining the goal or aim of the project.

The 4Ws of Problem Scoping are Who, What, Where and Why.
These Ws help in identifying and understanding the problem in a better and more efficient manner.
Who - The "Who" part helps us comprehend and categorize all those who are affected, directly and indirectly, by the problem; they are called the Stakeholders.
What - The "What" part helps us understand and identify the nature of the problem, and the evidence that shows the problem exists.
Where - The "Where" part identifies where the problem arises: the situation and the location.
Why - The "Why" part asks why the given problem is worth solving.
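The 4Ws can be recorded as a simple checklist before the goal is stated. The example below is a hypothetical scoping of a made-up project (all the answers are invented for illustration):

```python
# A 4Ws scoping template for a hypothetical "crowded buses" project.
problem_scoping = {
    "Who":   "Commuters and city transport authorities (the stakeholders)",
    "What":  "Buses are overcrowded at peak hours (evidence: passenger counts)",
    "Where": "Urban bus routes, weekday mornings and evenings",
    "Why":   "Reducing overcrowding improves safety and commuter satisfaction",
}

# Only once every W is answered can the goal of the project be stated.
goal = "Predict peak-hour demand per route so buses can be scheduled accordingly"
print(problem_scoping["Who"])
```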

Data Requirement and Acquisition

To proceed further, you need to acquire data, which will become the base of your project, as it will help you understand the parameters related to the problem you have scoped.
You go for data acquisition by collecting data from various reliable and authentic sources. Since the data you collect will be in large quantities, you can give it visual form through different types of representations, such as graphs, databases, flowcharts, maps, etc. This makes it easier for you to interpret the patterns that your acquired data follows.
After exploring the patterns, you can decide upon the type of model you would build to achieve the goal.

For this, you can research online and select various models which give a suitable output. You can test the selected models and figure out which is the most efficient one.
The most efficient model is now the base of your AI project, and you can develop your algorithm around it.
Once the modelling is complete, you need to test your model on some newly fetched data.
The results will help you in evaluating your model and hence improving it.
Finally, after evaluation, the project cycle is complete, and what you get is your AI project.
Step 2: Design / Building the Model: Once the relevant projects have been selected and properly scoped, the next step of the machine learning lifecycle is the Design or Build phase, which can take from a few days to multiple months, depending on the nature of the project.
Q.1 What is a Model Parameter?
Ans: A model parameter is a variable whose value is estimated from the dataset. Parameters are the values learned during training from the historical data. The values of model parameters are not set manually; they are estimated from the training data. These are variables that are internal to the machine learning model. Based on the training, the values of the parameters are set. These values are used by the machine learning model while making predictions. The accuracy of the values of the parameters defines the skill of your model.
Q.2 What is a hyperparameter? What is the difference between a parameter and a hyperparameter?
Ans: Hyperparameter: A hyperparameter is a configuration variable that is external to the model. It is defined manually before the training of the model with the historical dataset. Its value cannot be estimated from the data. It is not possible to know the best value of a hyperparameter in advance, but we can use rules of thumb or select a value by trial and error for our problem. Hyperparameters affect the speed and accuracy of the learning process of the model. Different systems need different numbers of hyperparameters; simple systems might not need any hyperparameters at all. The learning rate, the value of K in k-nearest neighbours, and batch size are examples.
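The distinction can be made concrete with a small sketch (plain Python, no libraries): the learning rate and the number of epochs are hyperparameters, chosen by hand before training; the weight w and bias b are parameters, estimated from the data itself by gradient descent.

```python
learning_rate = 0.01   # hyperparameter: set manually, before training
epochs = 2000          # hyperparameter: set manually, before training

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]   # data follows the rule y = 2x + 1

w, b = 0.0, 0.0        # parameters: their values are learned from the data
for _ in range(epochs):
    for x, y in zip(xs, ys):
        error = (w * x + b) - y
        w -= learning_rate * error * x   # gradient descent update for w
        b -= learning_rate * error       # gradient descent update for b

print(round(w, 2), round(b, 2))   # close to 2 and 1, recovered from the data
```

Changing `learning_rate` changes how fast (or whether) training converges, but its value is never learned; w and b are never set by hand, only learned.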

Q. What is Data Acquisition and Data Exploration?

Data Acquisition:
Data Acquisition consists of two words:
Data: Data refers to raw facts, figures, information, or statistics.
Acquisition: Acquisition refers to acquiring data for the project.
So, Data Acquisition means acquiring the data needed to solve the problem.
DATA MAY BE THE PROPERTY OF SOMEONE ELSE, AND THE USE OF THAT DATA WITHOUT THEIR PERMISSION IS NOT ACCEPTABLE.
But there are some sources from which we can collect data with no hassle whatsoever.
There are 2 types of data in this case, Primary and Secondary; let's take a look at both.
Primary Data:
Primary data is the kind of data that is collected directly from the data source. It is mostly collected specially for a research project and may be shared publicly to be used for other research. Primary data is often reliable and authentic.
Secondary Data:
Secondary data is data that has been collected in the past by someone else and made available for others to use. Secondary data is usually easily accessible to researchers and individuals because it is shared publicly.
Data Exploration:
Data exploration is the first step of data analysis, used to visualize data to uncover insights from the start, or to identify areas or patterns to dive into and dig deeper. It allows for a deeper, more detailed, and better understanding of the data.
Data Visualization is a part of this, where we visualize and present the data in terms of tables, pie charts, bar graphs, line graphs, bubble charts, choropleth maps, etc.
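Even without a plotting library, the idea behind a bar graph can be shown with a few lines of plain Python (the sales figures below are invented for illustration): turning raw numbers into bars makes the pattern visible at a glance.

```python
sales = {"Mon": 12, "Tue": 18, "Wed": 25, "Thu": 22, "Fri": 30}

def bar_chart(data, symbol="#"):
    """Return one line per category, with bar length proportional to value."""
    return [f"{label} {symbol * value}" for label, value in data.items()]

for line in bar_chart(sales):
    print(line)
# The rise across the week is easier to spot here than in the raw numbers.
```

In practice you would use a library such as matplotlib for real charts; the principle of mapping values to visual lengths is the same.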
Modelling:
First off, what is an AI Model? An AI model is a program or algorithm that utilizes a set of data that enables it to recognize certain patterns. Modelling is the process in which different models based on the visualized data can be created and even checked for their advantages and disadvantages. There are 2 approaches to making a Machine Learning model.

Learning Based Approach:

The Learning Based Approach is based on the machine learning from experience with the data fed to it.
Supervised Learning:
The data that you have collected here is labelled, so you know what input needs to be mapped to what output. This helps you correct your algorithm if it makes a mistake in giving you the answer. Supervised learning is used in classifying mail as spam.
Unsupervised Learning:
The data collected here has no labels and you are unsure about the outputs. So, you model your algorithm such that it can understand patterns from the data and output the required answer. You do not interfere when the algorithm learns.
Reinforcement Learning:
There is no data in this kind of learning; you model the algorithm such that it interacts with the environment, and if the algorithm does a decent job, you reward it, else you punish it (Reward or Penalty Policy). With continuous interactions and learning, it goes from being bad to being the best that it can be for the problem assigned to it.
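The spam example under supervised learning can be sketched as a toy classifier (plain Python; the messages and word-counting scheme are invented for illustration, far simpler than a real spam filter): labelled messages teach the model which words indicate spam.

```python
from collections import Counter

training = [
    ("win a free prize now", "spam"),
    ("free money claim prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]

# "Training": count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Predict the label whose training words overlap the message most."""
    scores = {label: sum(counts[w] for w in text.split())
              for label, counts in word_counts.items()}
    return max(scores, key=scores.get)

print(classify("claim your free prize"))   # words seen in spam examples
print(classify("team meeting on monday"))  # words seen in ham examples
```

Because the training data is labelled, a wrong prediction can be diagnosed and corrected, which is exactly what distinguishes supervised from unsupervised learning.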
Rule Based Approach:
A rule-based system uses rules as the knowledge base. These rules are coded into the system in the form of if-then-else statements. The main idea of a rule-based system is to capture the knowledge of a human expert in a specialized domain and embed it within a computer system. A rule-based system is like a human being born with fixed knowledge; the knowledge of that human being doesn't change over time.
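A minimal sketch of the rule-based approach (plain Python; the medical rules below are invented for illustration, not real medical advice): the if-then-else rules encode a human expert's knowledge, and nothing in the data ever changes them.

```python
def diagnose(temperature_c, has_cough):
    """Fixed expert rules; the system's 'knowledge' never changes with data."""
    if temperature_c >= 38.0 and has_cough:
        return "see a doctor"
    elif temperature_c >= 38.0:
        return "rest and monitor"
    else:
        return "no action needed"

print(diagnose(38.5, True))    # both rules' conditions met
print(diagnose(36.8, False))   # no rule fires beyond the default
```

Contrast this with the learning-based approach above: here, improving the system means a human rewriting the rules, not feeding it more data.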
Evaluation:
Evaluation is the method of understanding the reliability of an AI model. It is based on the outputs received by feeding test data into the model and comparing the outputs with the actual answers.

Q.3 Explain various AI development platforms and their limitations.

Ans: Artificial Intelligence platforms involve the use of machines to perform tasks that are performed by human beings. The platforms simulate the cognitive functions that human minds perform, such as problem-solving, learning, reasoning, social intelligence as well as general intelligence. (Various platforms are TensorFlow, Microsoft Azure, Rainbird, Infosys Nia, Wipro HOLMES, Dialogflow, Premonition, Ayasdi, Meya, KAI, Vital A.I, etc.)
AI's limitations are:
1. High Costs
The ability to create a machine that can simulate human intelligence is no small feat. It requires plenty of time and resources and can cost a huge deal of money. AI also needs to operate on the latest hardware and software to stay updated and meet the latest requirements, thus making it quite costly.
2. No Creativity
A big disadvantage of AI is that it cannot learn to think outside the box. AI is capable of learning over time with pre-fed data and past experiences, but cannot be creative in its approach.
3. Unemployment
One application of artificial intelligence is robotics, which is displacing occupations and increasing unemployment (in a few cases). Therefore, some claim that there is always a chance of unemployment as a result of chatbots and robots replacing humans. For instance, robots are frequently utilized to replace human resources in manufacturing businesses in some more technologically advanced nations like Japan.
4. Makes Humans Lazy
AI applications automate the majority of tedious and repetitive tasks. Since we do not have to memorize things or solve puzzles to get the job done, we tend to use our brains less and less. This addiction to AI can cause problems for future generations.
5. No Ethics
Ethics and morality are important human features that can be difficult to incorporate into an AI. The rapid progress of AI has raised a number of concerns that one day, AI will grow uncontrollably, and eventually wipe out humanity. This moment is referred to as the AI singularity.
6. Emotionless
Since early childhood, we have been taught that neither computers nor other machines have feelings. Humans function as a team, and team management is essential for achieving goals. There is no denying that robots are superior to humans when functioning effectively, but it is also true that human connections, which form the basis of teams, cannot be replaced by computers.
Q.4 What are the key considerations for testing in the AI and Data Science Life Cycle or Analytics Project Life Cycle?
1. AI Project Scoping
The first fundamental step when starting an AI initiative is scoping and selecting the relevant use case(s) that the AI model will be built to support. In this phase, it's crucial to precisely define the strategic business objectives and desired outcomes of the project, align all the different stakeholders' expectations, anticipate the key resources and steps, and define the success metrics. Selecting the AI or machine learning use cases and being able to evaluate the return on investment (ROI) is critical to the success of any data project.
2. Building the Model
Once the relevant projects have been selected and properly scoped, the next step of the machine learning lifecycle is the Design or Build phase, which can take from a few days to multiple months, depending on the nature of the project. The Design phase is essentially an iterative process comprising all the steps relevant to building the AI or machine learning model: data acquisition, exploration, preparation, cleaning, feature engineering, testing and running a set of models to try to predict behaviors or discover insights in the data.
3. Deploying to Production
In order to realize real business value from data projects, machine learning models must not sit on the shelf; they need to be operationalized, or deployed into production for use across the organization. Sometimes the cost of deploying a model into production is higher than the value it would bring. Ideally, this should be anticipated in the project scoping phase, before the model is actually built, but this is not always possible. Another crucial factor to consider in the deployment phase of the machine learning lifecycle is the replicability of a project: think about how the project can be reused and capitalized on by other teams, departments, and regions than the ones it was initially built to serve.
Q.5 What is the AI development life cycle?
Ans: It is the development cycle followed to create AI solutions. It includes 3 parts:
• Project scoping
• Design
• Build phase
Q.6 What are the 7 phases of the system development life cycle?
Ans:
• Planning
• Requirements
• Design
• Development
• Testing
• Deployment
• Maintenance
Q.7 What are the different methods of tuning hyperparameters? Explain with the help of an example.
Ans: The performance of a machine learning model improves with hyperparameter tuning.
Hyperparameter Optimization Checklist:
• Manual Search
• Grid Search
• Randomized Search
• Halving Grid Search
• Halving Randomized Search
• HyperOpt-Sklearn
• Bayes Search
An example of a model hyperparameter is the topology and size of a neural network. Examples of algorithm hyperparameters are the learning rate and batch size, as well as mini-batch size. Batch size can refer to the full data sample, where mini-batch size would be a smaller sample set.
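Grid search, the simplest method on the checklist, can be sketched without any libraries: try every candidate value of a hyperparameter and keep the one that scores best on a held-out validation set. This illustrative example (plain Python, invented 1-D data) tunes k for a tiny k-nearest-neighbour classifier.

```python
train = [(1.0, "low"), (1.5, "low"), (2.0, "low"),
         (8.0, "high"), (8.5, "high"), (9.0, "high")]
validation = [(1.2, "low"), (8.8, "high"), (2.1, "low"), (7.9, "high")]

def knn_predict(x, k):
    """Majority label among the k training points nearest to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

def accuracy(k):
    """Fraction of validation points classified correctly with this k."""
    return sum(knn_predict(x, k) == y for x, y in validation) / len(validation)

grid = [1, 3, 5]                  # candidate hyperparameter values
best_k = max(grid, key=accuracy)  # grid search: try every candidate
print(best_k, accuracy(best_k))
```

Randomized search follows the same pattern but samples candidate values instead of enumerating them all, which scales better when there are many hyperparameters.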
Q.8 What are some of the open-source frameworks used for AI development?
Ans: We have compiled a list of the best frameworks and libraries that you can use to build machine learning models.
1) TensorFlow:
Developed by Google, TensorFlow is an open-source software library built for deep learning and artificial neural networks. With TensorFlow, you can create neural networks and computation models using dataflow graphs. It is one of the most well-maintained and popular open-source libraries available for deep learning. The TensorFlow framework is available in C++ and Python.
2) Theano:
Theano is a Python library designed for deep learning. Using the tool, you can define and evaluate mathematical expressions, including multi-dimensional arrays.
3) Torch:
Torch is an easy-to-use open-source computing framework for ML algorithms. The tool offers efficient GPU support, an N-dimensional array, numeric optimization routines, linear algebra routines, and routines for indexing, slicing, and transposing.
4) Caffe:
Caffe is a popular deep learning tool designed for building apps.
5) Microsoft CNTK:
The Microsoft Cognitive Toolkit is one of the fastest deep learning frameworks, with C#/C++/Python interface support. The open-source framework comes with a powerful C++ API and is reported to be faster than TensorFlow on some benchmarks.
6) Keras:
Written in Python, Keras is an open-source library designed to make the creation of new deep learning models easy. This high-level neural network API can run on top of deep learning frameworks like TensorFlow, Microsoft CNTK, etc.
7) SciKit-Learn:
SciKit-Learn is an open-source Python library designed for machine learning. The tool, based on libraries such as NumPy, SciPy, and matplotlib, can be used for data mining and data analysis. SciKit-Learn is equipped with a variety of ML models, including linear and logistic regressors, SVM classifiers, and random forests.
8) Amazon Machine Learning:
Amazon Machine Learning (AML) is an ML service that provides tools and wizards for creating ML models.
Q.9 Explain the Machine Learning life cycle.
Ans: The machine learning life cycle is a cyclic process to build an efficient machine learning project. The main purpose of the life cycle is to find a solution to the problem or project. The machine learning life cycle involves seven major steps, which are given below:
• Gathering data
• Data preparation
• Data wrangling
• Analysing data
• Training the model
• Testing the model
• Deployment
