Software Process Models Explained
Project requirements
Before choosing a software process model, you must take the time to define all the requirements
of the project. This must be done by working together with the client and taking into account the
needs of the users of the final product, so that the result satisfies them fully.
Project size
You have to take into consideration the size of the project you are working on. The bigger it is, the
more people it requires on the development team, and the more elaborate the management and
planning of the software process model must be.
Complexity
In general, complex projects do not have all the requirements clear from the beginning. New ones
are added or changed as product development progresses, which can translate into delays and
higher cost. Complex projects therefore require a software process model that can grow and change
over time.
Client
If you are working with a client, does the client need to be involved during the process? Does the
end user need to be involved in all phases of the development process? These questions should be
answered before choosing the software process model.
Priyanka Bhardwaj
The software process model framework is specific to the project. Thus, it is essential to select the
software process model according to the software which is to be developed. The software project
is considered efficient if the process model is selected according to the requirements. It is also
essential to consider time and cost while choosing a process model, as cost and/or time constraints
play an important role in software development. The basic characteristics required to select the
process model are the project type and associated risks, the requirements of the project, and the users.
One of the key features of selecting a process model is to understand the project in terms of size,
complexity, funds available, and so on. In addition, the risks which are associated with the project
should also be considered. Note that only a few process models emphasize risk assessment.
Various other issues related to the project and the risks are listed in Table.
Table Selection on the Basis of the Project Type and Associated Risks
The most essential feature of any process model is to understand the requirements of the project.
If the requirements are not clearly defined by the user or are poorly understood by the developer,
the developed software will result in an ineffective system. Thus, the requirements of the software should
be clearly understood before selecting any process model. Various other issues related to the
requirements are listed in Table.
Table Selection on the Basis of the Requirements of the Project
Requirements are easily defined and understandable: Yes / No / No / Yes / Yes (one entry per process model compared in the table)
Software is developed for the users. Hence, the users should be consulted while selecting the
process model. The comprehensibility of the project increases if users are involved in selecting the
process model. It is possible that a user is fully aware of the requirements or has only a rough idea of
them. It is also possible that the user wants the project to be developed in a sequential
manner or in an incremental manner (where each part is delivered to the user for use as it is
completed). Various other
issues related to the user’s satisfaction are listed in Table.
Table Selection on the Basis of the Users
1. Business Modelling: The information flow among business functions is defined by answering
questions such as: what data drives the business process, what data is generated, who generates it,
where does the information go, and who processes it.
2. Data Modelling: The data collected from business modeling is refined into a set of data objects
(entities) that are needed to support the business. The attributes (characteristics of each entity) are
identified, and the relationships between these data objects (entities) are defined.
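The entity/attribute/relationship idea above can be sketched with Python dataclasses; the Customer and Order entities and their fields below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    # Attributes (characteristics) of the Order entity
    order_id: int
    amount: float

@dataclass
class Customer:
    # Attributes of the Customer entity
    customer_id: int
    name: str
    # One-to-many relationship: a customer has many orders
    orders: List[Order] = field(default_factory=list)

alice = Customer(1, "Alice")
alice.orders.append(Order(101, 250.0))
print(len(alice.orders))  # orders linked to this customer
```

The dataclass fields play the role of attributes, while the `orders` list models the relation between the two entities.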
3. Process Modelling: The information objects defined in the data modeling phase are transformed
to achieve the data flow necessary to implement a business function. Processing descriptions are
created for adding, modifying, deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction of the software;
these tools often rely on fourth-generation language (4GL) techniques.
5. Testing & Turnover: Many of the programming components have already been tested, since RAD
emphasizes reuse. This reduces the overall testing time. However, the new parts must be tested, and
all interfaces must be fully exercised.
AGILE METHODS
o In earlier days the Iterative Waterfall model was very popular for completing a
project, but nowadays developers face various problems while using it to develop
software. The main difficulties include handling customer change requests during
project development and the high cost and time required to incorporate these
changes. To overcome these drawbacks of the Waterfall model, the Agile Software
Development model was proposed in the mid-1990s.
o The Agile model was primarily designed to help a project adapt to change requests
quickly. The main aim of the Agile model is therefore to facilitate quick project
completion. To accomplish this, agility is required. Agility is achieved by fitting the
process to the project and removing activities that may not be essential for a specific
project. Anything that is a waste of time and effort is also avoided.
o The Agile model actually refers to a group of development processes. These processes
share some basic characteristics but have certain subtle differences among
themselves.
Agile SDLC models:
o Crystal
o Atern
o Feature-driven development
o Scrum
o Extreme programming (XP)
o Lean development
o Unified process
good design results in the elimination of complex dependencies within a system. So,
effective use of suitable design is emphasized.
Feedback: One of the most important aspects of the XP model is to gain feedback to
understand the exact customer needs. Frequent contact with the customer makes the
development effective.
Simplicity: The main principle of the XP model is to develop a simple system that will
work efficiently in the present time, rather than trying to build something that would
take time and may never be used. It focuses on some specific features that are
immediately needed, rather than engaging time and effort on speculations of future
requirements.
Applications of Extreme Programming (XP): Some of the projects that are suitable to
develop using the XP model are given below:
Small projects: The XP model is very useful for small projects with small teams, as
face-to-face meetings are easier to arrange.
Projects involving new technology or research projects: These projects face rapidly
changing requirements and technical problems, so the XP model is used to
complete them.
Extreme Programming (XP) is an Agile software development methodology that focuses
on delivering high-quality software through frequent and continuous feedback,
collaboration, and adaptation. XP emphasizes a close working relationship between the
development team, the customer, and stakeholders, with an emphasis on rapid, iterative
development and deployment.
1. Continuous Integration: Code is integrated and tested frequently, with all changes
reviewed by the development team.
2. Test-Driven Development: Tests are written before code is written, and the code is
developed to pass those tests.
3. Pair Programming: Developers work together in pairs to write code and review each
other’s work.
4. Continuous Feedback: Feedback is obtained from customers and stakeholders
through frequent demonstrations of working software.
5. Simplicity: XP prioritizes simplicity in design and implementation, with the goal of
reducing complexity and improving maintainability.
6. Collective Ownership: All team members are responsible for the code, and anyone
can make changes to any part of the codebase.
7. Coding Standards: Coding standards are established and followed to ensure
consistency and maintainability of the code.
8. Sustainable Pace: The pace of work is maintained at a sustainable level, with regular
breaks and opportunities for rest and rejuvenation.
9. XP is well-suited to projects with rapidly changing requirements, as it emphasizes
flexibility and adaptability. It is also well-suited to projects with tight timelines, as it
emphasizes rapid development and deployment.
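The test-driven development practice (point 2 above) can be illustrated with a minimal sketch: the test is written first and the code is then written just to make it pass. The add() function here is a hypothetical example, not something from the text:

```python
# Step 1: write the test first. At this point add() does not exist yet,
# so running the test would fail.
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Step 2: write the simplest implementation that makes the test pass.
def add(a, b):
    return a + b

# Step 3: run the test; it now passes, so the cycle can continue with
# the next test and the next small piece of code.
test_add()
print("tests passed")
```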
Dynamic Systems Development Method (DSDM)
DSDM is often combined with XP to supply a mixed approach that pairs a solid process
model (the DSDM life cycle) with the nuts-and-bolts practices (XP) that are needed to
create code increments. Additionally, the ASD ideas of collaboration and self-organizing
teams can be adapted to a combined process model.
The macro process serves as the controlling framework of the micro process. It represents
the activities of the entire development team on the scale of weeks to months at a time.
The basic philosophy of the macro process is incremental development: the system as a
whole is built up step by step, with each successive version consisting of the previous one plus a
number of new functions.
Estimation Issues
There are many challenges in many aspects for project estimation. Below are some of the
significant challenges:
The uncertain gray area – The biggest issue is the uncertainty involved at the beginning of
the project. Many times even the client is not clear about the complete requirement. If
there is no complete, clear requirement, how is it possible to estimate the effort
and time?
Not splitting bigger tasks – Even when things are clear, the estimation is often made with
the bigger tasks in mind instead of splitting them into smaller tasks for proper
estimation. Such estimation will inevitably lead to overhead tasks at a later stage.
Idealistic & optimistic estimation – Most of the time, the estimation is done assuming
ideal and optimistic conditions, while things like version maintenance, unavailability of
some resources, and change requests during the project are not considered in the project
estimation.
Estimation person – Estimation must be done by the developer or with the developer's
assistance. Sometimes the estimation is not done by the developer, which may lead to a
huge mismatch in the estimate.
Buffer & dependencies – It is always uncertain how much buffer a PM should take.
Usually a 15-20% buffer is taken, keeping in mind that the project elaborates as it progresses.
But this decision should also consider things like the skill sets and experience of the team
and the complexity of the project. Dependencies on the project's internal as well as external
factors are not considered most of the time. These can take the form of some functionality,
like payment integration, or a license cost for some software.
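The 15-20% buffer rule of thumb above amounts to a simple calculation; the base figure of 100 days below is a hypothetical example:

```python
def buffered_estimate(base_days, buffer_fraction=0.15):
    """Add a schedule buffer (typically 15-20%) to a base estimate."""
    return base_days * (1 + buffer_fraction)

# Hypothetical base estimate of 100 days
print(buffered_estimate(100))        # default 15% buffer
print(buffered_estimate(100, 0.20))  # 20% buffer for a riskier project
```

The choice of fraction should reflect the team's skill sets and experience and the project's complexity, as noted above.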
The top-down estimating technique assigns an overall time for the project and divides the
project into parts according to the work breakdown structure.
For example, let’s imagine a project that must be finalized in one year. By fitting the scope
of the project on the timeline, you can estimate how much time is available for each activity
that needs to be performed. The top-down method is best applied to projects similar to
those you have completed previously. If details are sketchy or unpredictable, the top-down
approach is likely to be inefficient and cause backlogs.
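The top-down split described above can be sketched as follows; the 220-working-day total and the phase shares are assumptions for illustration only:

```python
def top_down_allocation(total_days, wbs_shares):
    """Split an overall project duration across WBS parts by fractional share."""
    return {part: total_days * share for part, share in wbs_shares.items()}

# Hypothetical one-year project (220 working days) split by assumed phase weights
plan = top_down_allocation(220, {"design": 0.25, "build": 0.50, "test": 0.25})
print(plan)  # time available for each activity on the timeline
```

As the text notes, this works best when the shares come from similar projects completed previously.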
The top-down approach is normally associated with parametric (or algorithmic) models.
These may be explained using the analogy of estimating the cost of rebuilding a house.
This would be of practical concern to a house-owner who needs sufficient insurance cover
to allow for rebuilding the property if it were destroyed. Unless the house-owner happens to
be in the building trade, it is unlikely that he or she would be able to work out how many
bricklayer-hours, how many carpenter-hours, electrician-hours and so on would be
required. Insurance companies, however, produce convenient tables where the house-
owner can find an estimate of rebuilding costs based on such parameters as the number of
storeys and the floor space that a house has. This is a simple parametric model.
The effort needed to implement a project will be related mainly to variables associated with
characteristics of the final system. The form of the parametric model will normally be one or
more formulae in the form:

effort = system size x productivity rate
For example, system size might be in the form 'thousands of lines of code' (KLOC) and the
productivity rate 40 days per KLOC. The values to be used will often be matters of
subjective judgement.
A model to forecast software development effort therefore has two key components. The
first is a method of assessing the size of the software development task to be undertaken.
The second assesses the rate of work at which the task can be done. For example,
Amanda at IOE might estimate that the first software module to be constructed is 2 KLOC.
She might then judge that if Kate undertook the development of the code, with her expertise
she could work at a rate of 40 days per KLOC and complete the work in 2 x 40 days, that is,
80 days, while Ken, who is less experienced, would need 55 days per KLOC and take 2 x
55, that is, 110 days to complete the task.
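Amanda's calculation can be written as a one-line parametric model, using the figures from the text (a 2 KLOC module, with rates of 40 and 55 days per KLOC):

```python
def parametric_effort(size_kloc, days_per_kloc):
    """effort = system size x productivity rate (here, days per KLOC)."""
    return size_kloc * days_per_kloc

print(parametric_effort(2, 40))  # Kate: 80 days
print(parametric_effort(2, 55))  # Ken: 110 days
```

The values fed in are, as the text says, often matters of subjective judgement.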
Bottom-up estimating
Expert judgement
The expert judgment technique requires consulting the expert who will perform the task to
ask how long it will take to complete. This method relies on your trust in the expert's insights
and experience.
Analogous Estimating
Analogous estimating is a technique for estimating based on similar projects completed in
the past. If the whole project has no analogs, it can be applied by blending it with the
bottom-up technique. In this case, you compare the tasks with their counterparts, then
combine them to estimate the overall project.
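A minimal sketch of analogous estimating, assuming a hypothetical past project of known size and cost and scaling its cost by relative size:

```python
def analogous_estimate(past_cost, past_size, new_size):
    """Scale the cost of a similar completed project by relative size."""
    return past_cost * (new_size / past_size)

# Hypothetical figures: a similar past 10 KLOC project cost 200 person-days;
# the new project is estimated at 15 KLOC.
print(analogous_estimate(200, 10, 15))  # 300.0 person-days
```

For task-level analogs, the same scaling can be applied per task and the results summed, as in the bottom-up blend described above.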
Three-point Estimating
Three-point estimating is very straightforward. It involves three different estimates that are
usually obtained from subject matter experts:
Optimistic estimate
Most likely estimate
Pessimistic estimate
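The three estimates are commonly combined with the PERT weighted average, E = (O + 4M + P) / 6; the sketch below uses hypothetical expert inputs in days:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT weighted average: E = (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical expert inputs, in days
print(three_point_estimate(10, 14, 24))  # 15.0
```

Weighting the most likely value four times as heavily keeps a single extreme opinion from dominating the estimate.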
Cost estimation simply means a technique used to arrive at cost estimates. A cost
estimate is the financial spend on the effort to develop and test software in software
engineering. Cost estimation models are mathematical algorithms or parametric equations
that are used to estimate the cost of a product or a project.
Various techniques or models are available for cost estimation, also known as cost estimation
models, as shown below:
Heuristic Estimation Technique –
In this technique, the relationship among different project parameters is expressed using mathematical
equations. The most popular heuristic technique is the Constructive Cost Model (COCOMO). This
technique is also used to speed up analysis and investment decisions.
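As a sketch of the COCOMO idea, the basic model computes effort as a * (KLOC)^b. The coefficients below are the commonly published values for an organic-mode (small, familiar) project; the 10 KLOC size is a hypothetical input:

```python
def basic_cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO: effort in person-months = a * (KLOC)^b.
    Defaults are the published coefficients for an organic-mode project."""
    return a * kloc ** b

effort = basic_cocomo_effort(10)
print(round(effort, 1))  # person-months for a hypothetical 10 KLOC project
```

Semidetached and embedded projects use larger coefficients, reflecting their higher coordination and constraint overheads.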
Analytical Estimation Technique –
Analytical estimation is a technique used to measure work. In this technique, the task is first
divided or broken down into its basic component operations or elements for analysis. Second, if
standard times are available from some other source, they are applied to each element or
component of the work.
Third, if no such times are available, the work is estimated based on experience of the work. In
this technique, results are derived by making certain basic assumptions about the project; hence,
the analytical estimation technique has some scientific basis. Halstead's software science is based
on an analytical estimation model.
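Halstead's software science derives its measures from operator and operand counts. The sketch below applies the standard formulas (vocabulary, length, volume, difficulty, effort); the counts fed in are hypothetical, as if tallied from a small routine:

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Halstead measures from counts:
    n1, n2 = distinct operators / operands; N1, N2 = total occurrences."""
    vocabulary = n1 + n2            # n = n1 + n2
    length = N1 + N2                # N = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
    effort = difficulty * volume              # E = D * V
    return volume, difficulty, effort

# Hypothetical counts for a small routine
v, d, e = halstead_metrics(10, 7, 20, 15)
print(round(v, 1), round(d, 1), round(e, 1))
```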
A Data Movement moves one Data Group. A Data Group is a unique, cohesive set of data
(attributes) specifying an 'object of interest' (i.e., something that is 'of interest' to the user). Each
Data Movement is counted as one CFP (COSMIC function point).
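Because each Data Movement counts as exactly 1 CFP, COSMIC sizing reduces to counting movements. The movement list below is a hypothetical example for a single functional process (the four COSMIC movement types are Entry, Exit, Read, and Write):

```python
# Hypothetical Data Movements for one functional process:
# each tuple is (movement type, data group moved).
data_movements = [
    ("Entry", "customer details"),
    ("Read",  "customer record"),
    ("Write", "updated customer record"),
    ("Exit",  "confirmation message"),
]

cfp = len(data_movements)  # 1 CFP per Data Movement
print(cfp)  # 4 CFP for this functional process
```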
Function Points
Function points were defined in 1979 in Measuring Application Development
Productivity by Allan Albrecht at IBM. The functional user requirements of the software are
identified and each one is categorized into one of five types: outputs, inquiries, inputs, internal
files, and external interfaces. Once the function is identified and categorized into a type, it is then
assessed for complexity and assigned a number of function points. Each of these functional user
requirements maps to an end-user business function, such as a data entry for an Input or a user
query for an Inquiry. This distinction is important because it tends to make the functions measured
in function points map easily into user-oriented requirements, but it also tends to hide internal
functions (e.g. algorithms), which also require resources to implement.
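The five function types can be turned into an unadjusted function point count by weighting each counted item. The weights below are the commonly published IFPUG average-complexity weights, and the counts themselves are hypothetical:

```python
# Average-complexity weights per IFPUG function type (published values).
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

# Hypothetical counts for an example application.
counts = {
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_files": 2,
    "external_interfaces": 1,
}

# Unadjusted function points: sum of count x weight over all five types.
ufp = sum(counts[t] * AVERAGE_WEIGHTS[t] for t in counts)
print(ufp)  # 83
```

In a full count, each identified function would first be classified as low, average, or high complexity, each with its own weight; the average weights are used here to keep the sketch short.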
There is currently no ISO recognized FSM Method that includes algorithmic complexity in the
sizing result. Recently there have been different approaches proposed to deal with this perceived
weakness, implemented in several commercial software products. The variations of the Albrecht-
based IFPUG method designed to make up for this (and other weaknesses) include:
Early and easy function points – Adjusts for problem and data complexity with two questions
that yield a somewhat subjective complexity measurement; simplifies measurement by
eliminating the need to count data elements.
Engineering function points – Elements (variable names) and operators (e.g., arithmetic,
equality/inequality, Boolean) are counted. This variation highlights computational
function. The intent is similar to that of the operator/operand-based Halstead complexity
measures.
Bang measure – Defines a function metric based on twelve primitive (simple) counts that affect
or show Bang, defined as "the measure of true function to be delivered as perceived by the
user." Bang measure may be helpful in evaluating a software unit's value in terms of how much
useful function it provides, although there is little evidence in the literature of such application.
The use of Bang measure could apply when re-engineering (either complete or piecewise) is
being considered, as discussed in Maintenance of Operational Systems—An Overview.
Feature points – Adds changes to improve applicability to systems with significant internal
processing (e.g., operating systems, communications systems). This allows accounting for
functions not readily perceivable by the user, but essential for proper operation.
Weighted Micro Function Points – One of the newer models (2009) which adjusts function
points using weights derived from program flow complexity, operand and operator vocabulary,
object usage, and algorithm.
Benefits
The use of function points in favor of lines of code seeks to address several additional issues:
The risk of "inflation" of the created lines of code, and thus a reduction in the value of the
measurement system, if developers are incentivized to be more productive. FP advocates refer
to this as measuring the size of the solution instead of the size of the problem.
Lines of Code (LOC) measures reward low-level languages, because more lines of code are
needed to deliver a similar amount of functionality than in a higher-level language. C. Jones offers
a method of correcting this in his work.
LOC measures are not useful during early project phases where estimating the number of lines
of code that will be delivered is challenging. However, Function Points can be derived from
requirements and therefore are useful in methods such as estimation by proxy.
2. Intermediate Sector:
(a). Application Generators and Composition Aids –
This category will create largely prepackaged capabilities for user programming. Their products
will have many reusable components. Typical firms operating in this sector are Microsoft, Lotus,
Oracle, IBM, Borland, and Novell.
(b). Application Composition Sector –
This category is too diversified to be handled by prepackaged solutions. It includes GUIs,
databases, and domain-specific components such as financial, medical or industrial process control
packages.
(c). System Integration –
This category deals with large scale and highly embedded systems.
3. Infrastructure Sector:
This category provides infrastructure for the software development like Operating System, Database
Management System, User Interface Management System, Networking System, etc.
1. Stage-I:
It supports estimation for prototyping. For this it uses the Application Composition Estimation Model. This
model is used for the prototyping stage of application generators and system integration.
2. Stage-II:
It supports estimation in the early design stage of the project, when little is known about it. For this it
uses the Early Design Estimation Model. This model is used in the early design stage of application
generators, infrastructure, and system integration.
3. Stage-III:
It supports estimation in the post-architecture stage of a project. For this it uses the Post-Architecture
Estimation Model. This model is used after the completion of the detailed architecture of application
generators, infrastructure, and system integration.