
UNIT II - PROJECT LIFE CYCLE AND EFFORT ESTIMATION

What is a software process model?


Software process models often represent a networked sequence of activities, objects, transformations, and events that embody strategies for accomplishing software evolution. Such models can be used to develop more precise and formalized descriptions of software life-cycle activities. Their power comes from the use of a sufficiently rich notation, syntax, or semantics, often suitable for computational processing.

2.2 Choice of Process Models

Factors to consider when choosing a software process model

Project requirements
Before choosing a software process model, you must take the time to define all the requirements of the project. This must be done by working together with the client and taking into account the needs of the end user of the final product, so that full satisfaction can be achieved.

Project size
You have to take into consideration the size of the project you are working on. The larger it is, the more people it requires on the development team, and the more elaborate the management and planning of the chosen software process model must be.

Complexity
In general, complex projects do not have all the requirements clear from the beginning. New requirements are added or changed as product development progresses, which can translate into delays and higher costs. Complex projects therefore require a software process model that can grow and change over time.

Client
If you are working with a client, does the client need to be involved during the process? Does the end user need to be involved in all phases of the development process? These questions should be answered before choosing the software process model.

Skills and Knowledge


Another factor that may not greatly influence the choice of software process model, but must still be taken into account, is the skills and knowledge of the developers and team members involved in the process. Their command of the development tools and languages is important.

Priyanka Bhardwaj
The software process model framework is specific to the project. Thus, it is essential to select the software process model according to the software which is to be developed. A software project is considered efficient if the process model is selected according to the requirements. It is also essential to consider time and cost while choosing a process model, as cost and/or time constraints play an important role in software development. The basic characteristics required to select the process model are the project type and associated risks, the requirements of the project, and the users. One of the key features of selecting a process model is to understand the project in terms of size, complexity, available funds, and so on. In addition, the risks associated with the project should also be considered. Note that only a few process models emphasize risk assessment. Various issues related to the project type and the risks are listed in the table below.
Table: Selection on the Basis of the Project Type and Associated Risks

Project Type and Associated Risks    Waterfall  Prototype  Spiral  RAD  Formal Methods
Reliability requirements             No         No         Yes     No   Yes
Stable funds                         Yes        Yes        No      Yes  Yes
Reuse of components                  No         Yes        Yes     Yes  Yes
Tight project schedule               No         Yes        Yes     Yes  No
Scarcity of resources                No         Yes        Yes     No   No

The most essential feature of any process model is to understand the requirements of the project. If the requirements are not clearly defined by the user, or are poorly understood by the developer, the developed software is likely to result in an ineffective system. Thus, the requirements of the software should be clearly understood before selecting any process model. Various issues related to the requirements are listed in the table below.
Table: Selection on the Basis of the Requirements of the Project

Requirements of the Project                  Waterfall  Prototype  Spiral  RAD  Formal Methods
Requirements are defined early in SDLC       Yes        No         No      Yes  No
Requirements are easily defined and          Yes        No         No      Yes  Yes
understandable
Requirements are changed frequently          No         Yes        Yes     No   Yes
Requirements indicate a complex system       No         Yes        Yes     No   No

Software is developed for the users. Hence, the users should be consulted while selecting the process model. The comprehensibility of the project increases if users are involved in selecting the process model. It is possible that a user is fully aware of the requirements, or has only a rough idea of them. It is also possible that the user wants the project to be developed in a sequential manner or in an incremental manner (where a part is delivered to the user for use). Various issues related to user satisfaction are listed in the table below.
Table: Selection on the Basis of the Users

User Involvement                             Waterfall  Prototype  Spiral  RAD  Formal Methods
Requires limited user involvement            Yes        No         Yes     No   Yes
User participation in all phases             No         Yes        No      Yes  No
No experience of participating in            No         Yes        Yes     No   Yes
similar projects
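The three selection tables above can be read as a simple lookup. The sketch below encodes their "Yes" entries and scores candidate models against a project profile; the scoring scheme (one point per matching characteristic) is an assumption for illustration, not part of the tables themselves.

```python
# Illustrative sketch: scoring process models against the selection
# tables in this unit. The data below are the "Yes" entries from the
# three tables; the one-point-per-match scoring rule is an assumption.

MODEL_SUPPORT = {
    "Waterfall": {"stable funds", "requirements defined early",
                  "requirements easily understood", "limited user involvement"},
    "Prototype": {"stable funds", "reuse of components", "tight schedule",
                  "scarce resources", "frequent requirement changes",
                  "complex system", "user participation in all phases",
                  "no prior project experience"},
    "Spiral":    {"reliability requirements", "reuse of components",
                  "tight schedule", "scarce resources",
                  "frequent requirement changes", "complex system",
                  "limited user involvement", "no prior project experience"},
    "RAD":       {"stable funds", "reuse of components", "tight schedule",
                  "requirements defined early", "requirements easily understood",
                  "user participation in all phases"},
    "Formal Methods": {"reliability requirements", "stable funds",
                       "reuse of components", "requirements easily understood",
                       "frequent requirement changes", "limited user involvement",
                       "no prior project experience"},
}

def rank_models(project_profile):
    """Rank models by how many of the project's characteristics they support."""
    scores = {m: len(s & project_profile) for m, s in MODEL_SUPPORT.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

profile = {"reliability requirements", "frequent requirement changes",
           "complex system"}
ranking = rank_models(profile)
# Spiral supports all three characteristics in this profile, so it ranks first.
```

Such a lookup is only a starting point; the surrounding text makes clear that size, funds, and user involvement must still be weighed by judgment.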

RAPID APPLICATION DEVELOPMENT


The Rapid Application Development (RAD) model was first proposed by IBM in the 1980s. The critical feature of this model is the use of powerful development tools and techniques. A software project can be implemented using this model if the project can be broken down into small modules, wherein each module can be assigned independently to separate teams. These modules can finally be combined to form the final product. Development of each module involves the basic steps of the waterfall model, i.e. analyzing, designing, coding, and then testing, as shown in the figure. Another striking feature of this model is its short time span: the time frame for delivery (the time-box) is generally 60-90 days.

The various phases of RAD are as follows:

1. Business Modelling: The information flow among business functions is defined by answering questions such as: what data drives the business process, what data is generated, who generates it, where does the information go, and who processes it.

2. Data Modelling: The data collected from business modeling is refined into a set of data objects
(entities) that are needed to support the business. The attributes (character of each entity) are
identified, and the relation between these data objects (entities) is defined.
3. Process Modelling: The information objects defined in the data modeling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

4. Application Generation: Automated tools are used to facilitate construction of the software; these often use fourth-generation language (4GL) techniques.

5. Testing & Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But the new parts must be tested, and all interfaces must be fully exercised.

When to use RAD Model?


o When the system can be modularized and delivered in a short span of time (2-3 months).
o When the requirements are well known.
o When the technical risk is limited.
o When the budget allows the use of automated code-generating tools.

Advantage of RAD Model


o This model is flexible and accommodates change well.
o Each phase in RAD delivers the highest-priority functionality to the customer.
o It reduces development time.
o It increases the reusability of components.

Disadvantage of RAD Model


o It requires highly skilled designers.
o Not all applications are compatible with RAD.
o It cannot be used for smaller projects.
o It is not suitable when the technical risk is high.
o It requires user involvement.

AGILE METHODS

o In earlier days, the Iterative Waterfall model was very popular for completing projects. Nowadays, however, developers face various problems while using it to develop software. The main difficulties include handling customer change requests during project development and the high cost and time required to incorporate these changes. To overcome these drawbacks of the Waterfall model, the Agile Software Development model was proposed in the mid-1990s.
o The Agile model was primarily designed to help a project adapt to change requests quickly. The main aim of the Agile model is thus to facilitate quick project completion. To accomplish this, agility is required. Agility is achieved by fitting the process to the project and removing activities that may not be essential for a specific project. Anything that is a waste of time and effort is also avoided.
o The Agile model actually refers to a group of development processes. These processes share some basic characteristics but have certain subtle differences among themselves.
Agile SDLC models:
o Crystal
o Atern
o Feature-driven development
o Scrum
o Extreme programming (XP)
o Lean development
o Unified process

Agile development requires a high degree of collaboration and communication among


team members, as well as a willingness to adapt to changing requirements and feedback
from customers.
In the Agile model, the requirements are decomposed into many small parts that can be
incrementally developed. The Agile model adopts Iterative development. Each
incremental part is developed over an iteration. Each iteration is intended to be small and
easily manageable and can be completed within a couple of weeks only. At a time one
iteration is planned, developed, and deployed to the customers. Long-term plans are not
made.
The agile model is a combination of iterative and incremental process models. The steps involved in agile SDLC models are:
 Requirement gathering
 Requirement Analysis
 Design
 Coding
 Unit testing
 Acceptance testing
The time to complete an iteration is known as a Time Box. Time-box refers to the
maximum amount of time needed to deliver an iteration to customers. So, the end date
for an iteration does not change. However, the development team can decide to reduce
the delivered functionality during a Time-box if necessary to deliver it on time. The central
principle of the Agile model is the delivery of an increment to the customer after each
Time-box.
Principles of Agile model:
 To establish close contact with the customer during development and to gain a clear understanding of the various requirements, each Agile project usually includes a customer representative on the team. At the end of each iteration, stakeholders and the customer representative review the progress made and re-evaluate the requirements.
 The agile model relies on working software deployment rather than comprehensive
documentation.
 Frequent delivery of incremental versions of the software to the customer
representative in intervals of a few weeks.
 Requirement change requests from the customer are encouraged and efficiently
incorporated.
 It emphasizes having efficient team members, and great importance is given to enhancing communication among them. It is recognized that enhanced communication among the development team members is achieved through face-to-face communication rather than through the exchange of formal documents.
 It is recommended that the development team size should be kept small (5 to 9
people) to help the team members meaningfully engage in face-to-face
communication and have a collaborative work environment.
 The agile development process usually deploys Pair Programming. In Pair
programming, two programmers work together at one workstation. One does coding
while the other reviews the code as it is typed in. The two programmers switch their
roles every hour or so.
Advantages:
 Working through Pair programming produces well-written compact programs which
have fewer errors as compared to programmers working alone.
 It reduces the total development time of the whole project.
 Agile development emphasizes face-to-face communication among team members,
leading to better collaboration and understanding of project goals.
 Customer representatives get an idea of the updated software product after each iteration, so it is easy for them to change any requirement if needed.
 Agile development puts the customer at the center of the development process,
ensuring that the end product meets their needs.
Disadvantages:
 The lack of formal documents creates confusion, and important decisions taken during different phases can be misinterpreted at any time by different team members.
 Agile development models often involve working in short sprints, which can make it
difficult to plan and forecast project timelines and deliverables. This can lead to delays
in the project and can make it difficult to accurately estimate the costs and resources
needed for the project.
 Agile development models require a high degree of expertise from team members, as
they need to be able to adapt to changing requirements and work in an iterative
environment. This can be challenging for teams that are not experienced in agile
development practices and can lead to delays and difficulties in the project.
 Due to the absence of proper documentation, when the project completes and the
developers are assigned to another project, maintenance of the developed project can
become a problem.

Extreme programming (XP)


Extreme programming (XP) is one of the most important software development
frameworks of Agile models. It is used to improve software quality and responsiveness to
customer requirements. The extreme programming model recommends taking the best
practices that have worked well in the past in program development projects to extreme
levels. Good practices need to be practiced in extreme programming: Some of the
good practices that have been recognized in the extreme programming model and
suggested to maximize their use are given below:
 Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and reviewing of the written code are carried out by a pair of programmers who switch roles every hour or so.
 Testing: Testing code helps to remove errors and improves its reliability. XP suggests
test-driven development (TDD) to continually write and execute test cases. In the TDD
approach test cases are written even before any code is written.
 Incremental development: Incremental development is very useful because customer feedback is gained, and based on it the development team comes up with new increments every few days, after each iteration.
 Simplicity: Simplicity makes it easier to develop good quality code as well as to test
and debug it.
 Design: Good quality design is important to develop good quality software. So,
everybody should design daily.
 Integration testing: It helps to identify bugs at the interfaces of different
functionalities. Extreme programming suggests that the developers should achieve
continuous integration by building and performing integration testing several times a
day.
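The test-first practice described above can be sketched as follows. The running_average function and its test are hypothetical examples used only to show the order of work in TDD; they are not part of the XP model itself.

```python
# Illustrative TDD sketch: in test-driven development, the test is
# written before the code it exercises. The function under test
# (a cumulative average) is a hypothetical example.

def test_running_average():
    # Written first: this test fails until running_average is implemented.
    assert running_average([2, 4, 6]) == [2.0, 3.0, 4.0]
    assert running_average([]) == []

def running_average(values):
    """Return the cumulative averages of a list of numbers."""
    averages, total = [], 0.0
    for count, value in enumerate(values, start=1):
        total += value
        averages.append(total / count)
    return averages

test_running_average()  # passes once the implementation is in place
```

In practice the test would be run (and seen to fail) before the implementation exists; the code above simply shows both halves side by side.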
Basic principles of Extreme programming: XP is based on the frequent iteration
through which the developers implement User Stories. User stories are simple and
informal statements of the customer about the functionalities needed. A User Story is a
conventional description by the user of a feature of the required system. It does not
mention finer details such as the different scenarios that can occur. Based on User
stories, the project team proposes Metaphors. Metaphors are a common vision of how
the system would work. The development team may decide to build a Spike for some
features. A Spike is a very simple program that is constructed to explore the suitability of
a solution being proposed. It can be considered similar to a prototype. Some of the basic
activities that are followed during software development by using the XP model are given
below:
 Coding: The concept of coding which is used in the XP model is slightly different from
traditional coding. Here, the coding activity includes drawing diagrams (modeling) that
will be transformed into code, scripting a web-based system, and choosing among
several alternative solutions.
 Testing: XP model gives high importance to testing and considers it to be the primary
factor to develop fault-free software.
 Listening: The developers need to listen carefully to the customers if they have to develop good quality software. Sometimes programmers may not have in-depth knowledge of the system to be developed. So, the programmers should properly understand the functionality of the system, and for that they have to listen to the customers.
 Designing: Without a proper design, a system implementation becomes too complex and the solution very difficult to understand, thus making maintenance expensive. A good design results in the elimination of complex dependencies within a system. So, effective use of suitable design is emphasized.
 Feedback: One of the most important aspects of the XP model is to gain feedback to
understand the exact customer needs. Frequent contact with the customer makes the
development effective.
 Simplicity: The main principle of the XP model is to develop a simple system that will
work efficiently in the present time, rather than trying to build something that would
take time and may never be used. It focuses on some specific features that are
immediately needed, rather than engaging time and effort on speculations of future
requirements.
Applications of Extreme Programming (XP): Some of the projects that are suitable to
develop using the XP model are given below:
 Small projects: The XP model is very useful in small projects consisting of small teams, as face-to-face meetings are easier to achieve.
 Projects involving new technology or research projects: These projects face rapidly changing requirements and technical problems, so the XP model is used to complete them.
Extreme Programming (XP) is an Agile software development methodology that focuses
on delivering high-quality software through frequent and continuous feedback,
collaboration, and adaptation. XP emphasizes a close working relationship between the
development team, the customer, and stakeholders, with an emphasis on rapid, iterative
development and deployment.

XP includes the following practices:

1. Continuous Integration: Code is integrated and tested frequently, with all changes
reviewed by the development team.
2. Test-Driven Development: Tests are written before code is written, and the code is
developed to pass those tests.
3. Pair Programming: Developers work together in pairs to write code and review each
other’s work.
4. Continuous Feedback: Feedback is obtained from customers and stakeholders
through frequent demonstrations of working software.
5. Simplicity: XP prioritizes simplicity in design and implementation, with the goal of
reducing complexity and improving maintainability.
6. Collective Ownership: All team members are responsible for the code, and anyone
can make changes to any part of the codebase.
7. Coding Standards: Coding standards are established and followed to ensure
consistency and maintainability of the code.
8. Sustainable Pace: The pace of work is maintained at a sustainable level, with regular
breaks and opportunities for rest and rejuvenation.
XP is well-suited to projects with rapidly changing requirements, as it emphasizes flexibility and adaptability. It is also well-suited to projects with tight timelines, as it emphasizes rapid development and deployment.

Dynamic Systems Development Method (DSDM)

The Dynamic Systems Development Method (DSDM) is an agile software development approach that provides a framework for building and maintaining systems.
The DSDM philosophy is borrowed from a modified version of the Pareto principle: 80% of an application can often be delivered in 20% of the time it would take to deliver the entire (100%) application.
DSDM is an iterative software process in which every iteration follows the 80% rule. That is, only enough work is required for each increment to facilitate movement to the next increment.
The DSDM life cycle defines three different iterative cycles, preceded by two further life-cycle activities:
1. Feasibility Study:
It establishes the essential business requirements and constraints associated with the application to be built, and then assesses whether the application is a viable candidate for the DSDM process.
2. Business Study:
It establishes the functional and information requirements that will allow the application to provide business value; in addition, it defines the basic application architecture and identifies the maintainability requirements for the application.
3. Functional Model Iteration:
It produces a set of incremental prototypes that demonstrate functionality for the customer.
(Note: All DSDM prototypes are intended to evolve into the deliverable application.)
The intent during this iterative cycle is to gather additional requirements by eliciting feedback from users as they exercise the prototype.
4. Design and Build Iteration:
It revisits prototypes built during functional model iteration to ensure that each has been engineered in a manner that will enable it to provide operational business value for end users. In some cases, functional model iteration and design and build iteration occur at the same time.
5. Implementation:
It places the latest software increment (an "operationalized" prototype) into the operational environment. It should be noted that:
 (a) the increment may not be 100% complete, or
 (b) changes may be requested as the increment is put into place. In either case, DSDM development work continues by returning to the functional model iteration activity.
The diagram below describes the DSDM life cycle:

DSDM can be combined with XP to provide a mixed approach that defines a solid process model (the DSDM life cycle) together with the nuts-and-bolts practices (XP) that are required to build software increments. In addition, the ASD concepts of collaboration and self-organizing teams can be adapted to a combined process model.

Managing Interactive Processes


Booch suggests that there are two levels of development:
• The macro process
• The micro process
Macro process
• Establish core requirements (conceptualization).
• Develop a model of the desired behavior (analysis).
• Create an architecture (design).
• Evolve the implementation (evolution).
• Manage post delivery evolution (maintenance).
Micro process
• Identify the classes and objects at a given level of abstraction.
• Identify the semantics of these classes and objects.
• Identify the relationships among these classes and objects.
• Specify the interface and then the implementation of these classes and objects.
In principle, the micro process represents the daily activity of the individual developer, or
of a small team of developers.

The macro process serves as the controlling framework of the micro process. It represents
the activities of the entire development team on the scale of weeks to months at a time.
The basic philosophy of the macro process is that of incremental development: the system as a
whole is built up step by step, each successive version consisting of the previous ones plus a
number of new functions.

BASICS OF SOFTWARE ESTIMATION


Estimation is the process of finding an estimate, or approximation, which is a value that can be
used for some purpose even if input data may be incomplete, uncertain, or unstable.
Estimation determines how much money, effort, resources, and time it will take to build a specific
system or product. Estimation is based on −

 Past Data/Past Experience
 Available Documents/Knowledge
 Assumptions
 Identified Risks
The four basic steps in Software Project Estimation are −

 Estimate the size of the development product.
 Estimate the effort in person-months or person-hours.
 Estimate the schedule in calendar months.
 Estimate the project cost in the agreed currency.

General Project Estimation Approach


The most widely used project estimation approach is the decomposition technique. Decomposition techniques take a divide-and-conquer approach: size, effort, and cost estimation are performed in a stepwise manner by breaking the project down into major functions or related software engineering activities.
Step 1 − Understand the scope of the software to be built.
Step 2 − Generate an estimate of the software size.
 Start with the statement of scope.
 Decompose the software into functions that can each be estimated individually.
 Calculate the size of each function.
 Derive effort and cost estimates by applying the size values to your baseline productivity
metrics.
 Combine function estimates to produce an overall estimate for the entire project.
Step 3 − Generate an estimate of the effort and cost. You can arrive at the effort and cost estimates
by breaking down a project into related software engineering activities.
 Identify the sequence of activities that need to be performed for the project to be completed.
 Divide activities into tasks that can be measured.
 Estimate the effort (in person hours/days) required to complete each task.
 Combine effort estimates of tasks of activity to produce an estimate for the activity.
 Obtain cost units (i.e., cost/unit effort) for each activity from the database.
 Compute the total effort and cost for each activity.
 Combine effort and cost estimates for each activity to produce an overall effort and cost
estimate for the entire project.
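The size-to-effort-to-cost procedure in Steps 2 and 3 can be sketched as a small calculation. The function sizes, productivity rate, and cost unit below are hypothetical figures, not values from this unit.

```python
# Sketch of decomposition-based estimation (Steps 2 and 3 above).
# Function sizes (KLOC) and the productivity/cost figures are
# hypothetical, for illustration only.

functions = {              # Step 2: decomposed functions and their sizes
    "user interface": 2.5,     # KLOC
    "reporting":      1.5,
    "database layer": 3.0,
}

productivity = 40          # baseline metric: person-days per KLOC
cost_per_day = 300         # cost unit: currency units per person-day

total_size   = sum(functions.values())     # combined size in KLOC
total_effort = total_size * productivity   # person-days
total_cost   = total_effort * cost_per_day # currency units
# 7.0 KLOC -> 280 person-days -> 84,000 currency units
```

Step 4 would then compare this function-based figure against an independent activity-based estimate and reconcile any divergence.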
Step 4 − Reconcile estimates: Compare the resulting values from Step 3 to those obtained from Step 2. If both sets of estimates agree, then your numbers are highly reliable. Otherwise, if widely divergent estimates occur, conduct further investigation concerning whether −
 The scope of the project is not adequately understood or has been misinterpreted.
 The function and/or activity breakdown is not accurate.
 Historical data used for the estimation techniques is inappropriate for the application, or
obsolete, or has been misapplied.
Step 5 − Determine the cause of divergence and then reconcile the estimates.

Estimation Issues
There are many challenges in many aspects for project estimation. Below are some of the
significant challenges:

 The uncertain gray area – The biggest issue is the uncertainty involved at the beginning of the project. Many times even the client is not clear about the whole requirement. If there is no complete, clear requirement, how is it possible to estimate in terms of effort and time?
 Not splitting bigger tasks – Even when things are clear, the estimate is often made with the bigger tasks in mind instead of splitting them into smaller tasks for proper estimation. Such an estimate will definitely lead to overhead tasks at a later stage.
 Idealistic & optimistic estimation – Most of the time, the estimate is made assuming ideal and optimistic conditions, but things like version maintenance, unavailability of some resources, and change requests during the project are not considered in the project estimate.
 Estimation person – Estimation must be done by the developer, or with the developer's assistance. Sometimes the estimate is not made by the developer, which may lead to a huge mismatch in the estimate.
 Buffer & dependencies – It is always uncertain how much buffer a PM should take. Usually a 15-20% buffer is taken, keeping in mind that the project elaborates as it progresses. But this decision should also consider things like the skill sets and experience of the team and the complexity of the project. Dependencies on the project's internal as well as external factors are not considered most of the time. These can be in terms of some functionality, like payment integration, or some license cost for software, etc.
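As a quick illustration of the buffer figure mentioned above, a padded estimate can be computed as follows; the base effort and the default 20% rate are assumptions for illustration.

```python
# Illustrative buffer calculation: padding a base estimate by the
# 15-20% contingency mentioned above. All figures are hypothetical.

def buffered_estimate(base_effort_days, buffer_rate=0.20):
    """Return the estimate padded by the given contingency buffer."""
    return base_effort_days * (1 + buffer_rate)

estimate = buffered_estimate(100)   # 100 person-days + 20% buffer
# -> 120.0 person-days
```

The text above notes that the chosen rate should reflect the team's skill sets and the project's complexity, not just a fixed percentage.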

Effort and Cost Estimation Techniques


 top-down - where an overall estimate is formulated for the whole project and is then broken down into the effort required for component tasks;
 bottom-up - where component tasks are identified and sized, and these individual estimates are aggregated;
 expert judgement - where the advice of knowledgeable staff is solicited;
 analogy - where a similar, completed project is identified and its actual effort is used as a basis for the new project;
 three-point estimate - which involves three different estimates that are usually obtained from subject matter experts.

Top-down estimate

 The top-down estimating technique assigns an overall time for the project and divides the project into parts according to the work breakdown structure.
 For example, let’s imagine a project that must be finalized in one year. By fitting the scope
of the project on the timeline, you can estimate how much time is available for each activity
that needs to be performed. The top-down method is best applied to projects similar to
those you have completed previously. If details are sketchy or unpredictable, the top-down
approach is likely to be inefficient and cause backlogs.
 The top-down approach is normally associated with parametric (or algorithmic) models. These may be explained using the analogy of estimating the cost of rebuilding a house. This would be of practical concern to a house-owner who needs sufficient insurance cover to allow for rebuilding the property if it were destroyed. Unless the house-owner happens to be in the building trade, it is unlikely that he or she would be able to work out how many bricklayer-hours, carpenter-hours, electrician-hours and so on would be required. Insurance companies, however, produce convenient tables where the house-owner can find an estimate of rebuilding costs based on such parameters as the number of storeys and the floor space that a house has. This is a simple parametric model.
 The effort needed to implement a project will be related mainly to variables associated with
characteristics of the final system. The form of the parametric model will normally be one or
more formulae in the form:

effort = (system size) x (productivity rate)

 For example, system size might be in the form 'thousands of lines of code' (KLOC) and the
productivity rate 40 days per KLOC. The values to be used will often be matters of
subjective judgement.
 A model to forecast software development effort therefore has two key components. The first is a method of assessing the size of the software development task to be undertaken. The second assesses the rate of work at which the task can be done. For example, Amanda at IOE might estimate that the first software module to be constructed is 2 KLOC. She might then judge that if Kate undertook the development of the code, with her expertise she could work at a rate of 40 days per KLOC and complete the work in 2 x 40 days, that is, 80 days, while Ken, who is less experienced, would need 55 days per KLOC and take 2 x 55, that is, 110 days to complete the task.
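The parametric formula and the worked example above can be expressed directly as a short sketch, using the same figures (2 KLOC, and 40 and 55 days per KLOC):

```python
# Sketch of the parametric model: effort = system size x productivity rate.
# Sizes are in KLOC and productivity rates in days per KLOC, as in the
# Amanda/Kate worked example above.

def effort_days(size_kloc, days_per_kloc):
    """Estimated effort in person-days for a module of the given size."""
    return size_kloc * days_per_kloc

kate_effort = effort_days(2, 40)   # experienced developer: 80 days
ken_effort  = effort_days(2, 55)   # less experienced developer: 110 days
```

As the text notes, the size and rate values fed into such a formula are often matters of subjective judgement.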

Bottom-up estimate

 The bottom-up method is the opposite of top-down. It approaches the project as a


combination of small workpieces. By making a detailed estimate for each task and
combining them together, you can build an overall project estimate.
 Creating a bottom-up estimate usually takes more time than the top-down method but has a
higher accuracy rate. However, for the bottom-up method to be truly effective, the project
must be broken down to the level of work packages.
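Since a bottom-up estimate is simply the sum of the work-package estimates, the combination step is trivial to express in code. The task names and figures below are invented for illustration:

```python
# Hypothetical work packages with effort estimates in person-days
work_packages = {
    "requirements analysis": 10,
    "database design": 15,
    "module implementation": 25,
    "integration testing": 12,
}

# The overall project estimate is the sum of the detailed task estimates
total_effort = sum(work_packages.values())
print(total_effort)  # 62 person-days
```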

Expert judgement

 The expert judgment technique requires consulting the expert who will perform the task to
ask how long it will take to complete. This method relies on your trust in the expert's insights
and experience.

Analogous Estimating
Priyanka Bhardwaj
 Analogous estimating is a technique for estimating based on similar projects completed in
the past. If the whole project has no analogs, it can be applied by blending it with the
bottom-up technique. In this case, you compare the tasks with their counterparts, then
combine them to estimate the overall project.

Three-point Estimating

 Three-point estimating is very straightforward. It involves three different estimates that are
usually obtained from subject matter experts:
 Optimistic estimate
 Pessimistic estimate
 Most likely estimate

 The optimistic estimate gives the amount of work and time that would be required if
everything went smoothly. A pessimistic estimate provides the worst-case scenario. The
result will be most realistic when the two are averaged with the most likely estimate.
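The averaging step can be done as a plain mean of the three values, as described above, or with the PERT-weighted mean that many practitioners use, which gives the most likely estimate four times the weight. Both are sketched below with invented figures:

```python
def simple_mean(optimistic, most_likely, pessimistic):
    """Plain average of the three estimates."""
    return (optimistic + most_likely + pessimistic) / 3

def pert_mean(optimistic, most_likely, pessimistic):
    """PERT-weighted average: the most likely estimate counts four times."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Example task: 8 days (optimistic), 12 days (most likely), 22 days (pessimistic)
print(simple_mean(8, 12, 22))  # 14.0
print(pert_mean(8, 12, 22))    # 13.0
```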

Cost Estimation Models

Cost estimation is the process of predicting the financial spend on the effort needed to develop
and test software. Cost estimation models are mathematical algorithms or parametric equations
that are used to estimate the cost of a product or a project.
Various techniques or models are available for cost estimation, also known as Cost Estimation
Models, as described below:

Empirical Estimation Technique –


Empirical estimation is a technique or model in which empirically derived formulas are used to predict
the data that form a required and essential part of the software project planning step. These techniques are
usually based on data collected previously from projects, combined with educated guesses, prior
experience with the development of similar types of projects, and assumptions. It uses the size of the
software to estimate the effort.
In this technique, an educated guess of project parameters is made, so these models are grounded in
common sense. However, because many activities are involved in empirical estimation, the
technique has been formalized. Examples include the Delphi technique and the Expert Judgement technique.
Heuristic Technique –
The word "heuristic" is derived from a Greek word meaning "to discover". The heuristic technique is a
technique or model used for problem solving, learning, or discovery through practical methods aimed
at achieving immediate goals. These techniques are flexible and simple, enabling quick decisions
through shortcuts and good-enough calculations, especially when working with complex data. However, the
decisions made using this technique are not necessarily optimal.

In this technique, the relationship among different project parameters is expressed using mathematical
equations. The most popular heuristic technique is the Constructive Cost Model (COCOMO). This
technique is also used to speed up analysis and investment decisions.
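COCOMO expresses the relationship between size and effort as an equation of the form effort = a x (KLOC)^b. The sketch below uses the published Basic COCOMO 81 coefficients for an "organic" (small, in-house) project; the 32 KLOC project size is invented for illustration:

```python
def basic_cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO: effort in person-months = a * (KLOC) ** b.
    Defaults are the published coefficients for organic-mode projects."""
    return a * kloc ** b

effort_pm = basic_cocomo_effort(32)  # hypothetical 32 KLOC organic project
print(round(effort_pm, 1))  # roughly 91 person-months
```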
Analytical Estimation Technique –
Analytical estimation is a technique used to measure work. In this technique, the task is first
divided into its basic component operations or elements for analysis. Second, if
standard times are available from some other source, they are applied to each element or
component of the work.
Third, if no such times are available, the work is estimated based on experience of the work. In
this technique, results are derived by making certain basic assumptions about the project, so the
analytical estimation technique has some scientific basis. Halstead’s software science is based on an
analytical estimation model.

COSMIC full function points


• COSMIC FFP – Common Software Measurement International Consortium Full Function Point.
• COSMIC deals with decomposing the system architecture into a hierarchy of software layers.
• The unit is the CFSU (COSMIC functional size unit).

A Data Movement moves one Data Group. A Data Group is a unique cohesive set of data
(attributes) specifying an ‘object of interest’ (i.e. something that is ‘of interest’ to the user). Each
Data Movement is counted as one CFP (COSMIC function point).

COSMIC recognizes four types of Data Movements:


 Entry moves data from outside into the process
 Exit moves data from the process to the outside world
 Read moves data from persistent storage to the process
 Write moves data from the process to persistent storage.
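Because each Data Movement counts as one CFP, sizing a functional process is a matter of counting its four movement types. A minimal sketch, using a hypothetical "record a sale" process:

```python
def cosmic_size(entries, exits, reads, writes):
    """COSMIC functional size: each Data Movement counts as 1 CFP."""
    return entries + exits + reads + writes

# Hypothetical process 'record a sale': 1 Entry (sale details in),
# 1 Exit (confirmation out), 2 Reads (price, stock level), 1 Write (sale record)
print(cosmic_size(entries=1, exits=1, reads=2, writes=1))  # 5 CFP
```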

Function Points
Function points were defined in 1979 in Measuring Application Development
Productivity by Allan Albrecht at IBM. The functional user requirements of the software are
identified and each one is categorized into one of five types: outputs, inquiries, inputs, internal
files, and external interfaces. Once the function is identified and categorized into a type, it is then
assessed for complexity and assigned a number of function points. Each of these functional user
requirements maps to an end-user business function, such as a data entry for an Input or a user
query for an Inquiry. This distinction is important because it tends to make the functions measured
in function points map easily into user-oriented requirements, but it also tends to hide internal
functions (e.g. algorithms), which also require resources to implement.
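The counting step can be sketched as a weighted sum over the five function types. The weights below are the commonly tabulated IFPUG average-complexity values; real counts distinguish low, average and high complexity per function, so treat this as an illustrative simplification:

```python
# Average-complexity weights per function type (IFPUG-style averages;
# the official tables give separate low/average/high values)
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "internal_files": 10, "external_interfaces": 7}

def unadjusted_fp(counts):
    """Unadjusted function points: sum of count x weight per type."""
    return sum(n * WEIGHTS[t] for t, n in counts.items())

# Hypothetical system
counts = {"inputs": 6, "outputs": 4, "inquiries": 3,
          "internal_files": 2, "external_interfaces": 1}
print(unadjusted_fp(counts))  # 83
```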
There is currently no ISO recognized FSM Method that includes algorithmic complexity in the
sizing result. Recently there have been different approaches proposed to deal with this perceived
weakness, implemented in several commercial software products. The variations of the Albrecht-
based IFPUG method designed to make up for this (and other weaknesses) include:

 Early and easy function points – Adjusts for problem and data complexity with two questions
that yield a somewhat subjective complexity measurement; simplifies measurement by
eliminating the need to count data elements.

 Engineering function points – Elements (variable names) and operators (e.g., arithmetic,
equality/inequality, Boolean) are counted. This variation highlights computational
function. The intent is similar to that of the operator/operand-based Halstead complexity
measures.
 Bang measure – Defines a function metric based on twelve primitive (simple) counts that affect
or show Bang, defined as "the measure of true function to be delivered as perceived by the
user." Bang measure may be helpful in evaluating a software unit's value in terms of how much
useful function it provides, although there is little evidence in the literature of such application.
The use of Bang measure could apply when re-engineering (either complete or piecewise) is
being considered, as discussed in Maintenance of Operational Systems—An Overview.
 Feature points – Adds changes to improve applicability to systems with significant internal
processing (e.g., operating systems, communications systems). This allows accounting for
functions not readily perceivable by the user, but essential for proper operation.
 Weighted Micro Function Points – One of the newer models (2009) which adjusts function
points using weights derived from program flow complexity, operand and operator vocabulary,
object usage, and algorithm.

Benefits

The use of function points in favor of lines of code seeks to address several issues:

 The risk of "inflation" of the created lines of code, which reduces the value of the
measurement system, when developers are incentivized to be more productive. FP advocates refer
to this as measuring the size of the solution instead of the size of the problem.
 Lines of Code (LOC) measures reward low level languages because more lines of code are
needed to deliver a similar amount of functionality to a higher level language. C. Jones offers
a method of correcting this in his work.
 LOC measures are not useful during early project phases where estimating the number of lines
of code that will be delivered is challenging. However, Function Points can be derived from
requirements and therefore are useful in methods such as estimation by proxy.

COCOMO II: A Parametric Productivity Model


COnstructive COst MOdel II (COCOMO II) is a model that allows one to estimate the cost, effort,
and schedule when planning a new software development activity. COCOMO II is the latest major
extension to the original COCOMO (COCOMO 81) model published in 1981.
It is based on a model of the software marketplace comprising three main sectors:

1. End User Programming:

Application generators are used in this sector. End users write the code by using these application
generators.
Example – spreadsheets, report generators, etc.

2. Intermediate Sector:

 (a). Application Generators and Composition Aids –
This category will create largely prepackaged capabilities for user programming. Their product will
have many reusable components. Typical firms operating in this sector are Microsoft, Lotus,
Oracle, IBM, Borland, Novell.
 (b). Application Composition Sector –
This category is too diversified to be handled by prepackaged solutions. It includes GUI,
databases, and domain-specific components such as financial, medical or industrial process control
packages.
 (c). System Integration –
This category deals with large scale and highly embedded systems.
3. Infrastructure Sector:
This category provides infrastructure for the software development like Operating System, Database
Management System, User Interface Management System, Networking System, etc.

Stages of COCOMO II:

1. Stage-I:
It supports estimation of prototyping. For this it uses the Application Composition Estimation Model. This
model is used for the prototyping stage of application generators and system integration.
2. Stage-II:
It supports estimation in the early design stage of the project, when less is known about it. For this it
uses the Early Design Estimation Model. This model is used in the early design stage of application
generators, infrastructure, and system integration.
3. Stage-III:
It supports estimation in the post-architecture stage of a project. For this it uses the Post Architecture
Estimation Model. This model is used after completion of the detailed architecture of application
generators, infrastructure, and system integration.
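At the Post Architecture stage, effort is estimated with an equation of the form PM = A x Size^E x product(effort multipliers), where the exponent E is driven by five scale factors. The sketch below uses the COCOMO II.2000 calibration constants A = 2.94 and B = 0.91; the scale-factor and multiplier values in the example are invented for illustration:

```python
from math import prod

def cocomo2_effort(size_kloc, scale_factors, effort_multipliers,
                   A=2.94, B=0.91):
    """COCOMO II Post Architecture: PM = A * Size**E * prod(EM),
    where E = B + 0.01 * sum(scale factors).
    A and B default to the COCOMO II.2000 calibration constants."""
    exponent = B + 0.01 * sum(scale_factors)
    return A * size_kloc ** exponent * prod(effort_multipliers)

# Hypothetical 50 KLOC project with five scale-factor ratings and two
# cost drivers rated slightly above nominal
pm = cocomo2_effort(50, scale_factors=[3.0, 4.0, 3.0, 3.0, 3.0],
                    effort_multipliers=[1.10, 1.05])
print(round(pm, 1))  # estimated person-months
```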

