
SPM-2

# Project Life Cycle and Effort Estimation -


In Software Project Management (SPM), a "project life cycle" refers to the distinct phases a software project goes
through from initiation to completion, while "effort estimation" is the process of predicting how much time and
resources (usually measured in person-hours) will be needed to complete each phase of the project, allowing for
accurate planning and budgeting.
Key points about project life cycle and effort estimation in SPM:
• Project Life Cycle Stages:
• Initiation: Defining project goals, scope, and feasibility analysis.
• Planning: Breaking down the project into tasks, assigning resources, creating a detailed schedule, and
estimating effort for each phase.
• Execution: Developing the software according to the plan, including coding, testing, and quality
assurance.
• Monitoring and Control: Tracking progress, identifying issues, and making necessary adjustments to
stay on track.
• Closure: Delivering the final product, documenting lessons learned, and project evaluation.
• Effort Estimation Techniques:
• Analogous Estimation: Using data from similar past projects to estimate effort for the current project (a minimal sketch follows this list).
• Expert Judgment: Relying on the experience and knowledge of team members to estimate effort.
• Function Point Analysis (FPA): Measuring the functionality of a software system based on its
features and complexity to estimate effort.
• COCOMO Model: A widely used parametric model that uses factors like project size, complexity, and
team experience to estimate effort.
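
As a minimal illustration of analogous estimation (the first technique above), the following Python sketch scales a past project's effort by relative size; the numbers and the simple linear-scaling assumption are purely illustrative:

```python
# Hypothetical analogous (proportional) effort estimation.
# All figures are invented for the example.
past_project = {"size_kloc": 40, "effort_pm": 95}  # a completed, similar project
new_size_kloc = 55                                 # estimated size of the new project

# Scale the past effort linearly by relative size.
estimated_effort = past_project["effort_pm"] * (new_size_kloc / past_project["size_kloc"])
print(f"Analogous estimate: {estimated_effort:.1f} person-months")
```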

# Software Process and Process Models –


Software is the set of instructions, in the form of programs, that governs a computer system and drives its hardware components. Producing a software product requires a set of activities; this set is called a software process.
What are Software Processes?
Software processes in software engineering refer to the methods and techniques used to develop and maintain
software. Some examples of software processes include:
• Waterfall: a linear, sequential approach to software development, with distinct phases such as requirements
gathering, design, implementation, testing, and maintenance.
• Agile: a flexible, iterative approach to software development, with an emphasis on rapid prototyping and
continuous delivery.
• Scrum: a popular Agile methodology that emphasizes teamwork, iterative development, and a flexible,
adaptive approach to planning and management.
• DevOps: a set of practices that aims to improve collaboration and communication between development and
operations teams, with an emphasis on automating the software delivery process.
Software Process Model
A software process model is an abstraction of the actual process, which is being described. It can also be defined as
a simplified representation of a software process. Each model represents a process from a specific perspective.
The following are some basic software process models from which more specialized process models can be derived:
1. Workflow Model: It is the sequential series of tasks and decisions that make up a business process.
2. The Waterfall Model: It is a sequential design process in which progress is seen as flowing steadily
downwards.
• Phases in waterfall model:
o Requirements Specification
o Software Design
o Implementation
o Testing

3. Dataflow Model: It is a diagrammatic representation of the flow and exchange of information within a system.
4. Evolutionary Development Model: Following activities are considered in this method:
• Specification
• Development
• Validation
5. Role / Action Model: Roles of the people involved in the software process and the activities.

# Choice of Process Models –
The choice of a process model depends on the nature of the project: how stable the requirements are, the project size, the delivery timescale, and the degree of customer involvement. The models described in this document (Waterfall, Evolutionary, RAD, and the Agile methods below) are the usual candidates.


# Rapid Application Development –
What is RAD Model in Software Engineering?
IBM first proposed the Rapid Application Development or RAD Model in the 1980s. The RAD model is a type of
incremental process model in which there is a concise development cycle. The RAD model is used when the
requirements are fully understood and the component-based construction approach is adopted. Various phases in
RAD are Requirements Gathering, Analysis and Planning, Design, Build or Construction, and finally Deployment.
The critical feature of this model is the use of powerful development tools and techniques. A software project can be
implemented using this model if the project can be broken down into small modules wherein each module can be
assigned independently to separate teams. These modules can finally be combined to form the final product.
Development of each module involves the basic steps of the waterfall model, i.e. analyzing, designing, coding, and then testing. Another striking feature of this model is the short time frame for delivery (the time-box), generally 60-90 days. Multiple teams work in parallel on developing the software system under the RAD model.
(Figure: Rapid Application Development (RAD) model.)
The use of powerful developer tools such as Java, C++, Visual Basic, XML, etc. is also an integral part of these projects. This model consists of 4 basic phases:
1. Requirements Planning – This involves the use of various techniques used in requirements elicitation like
brainstorming, task analysis, form analysis, user scenarios, FAST (Facilitated Application Development
Technique), etc. It also consists of the entire structured plan describing the critical data, methods to obtain it,
and then processing it to form a final refined model.
2. User Description – This phase consists of taking user feedback and building the prototype using developer
tools. In other words, it includes re-examination and validation of the data collected in the first phase. The
dataset attributes are also identified and elucidated in this phase.
3. Construction – In this phase, refinement of the prototype and delivery takes place. It includes the actual use
of powerful automated tools to transform processes and data models into the final working product. All the
required modifications and enhancements are to be done in this phase.
4. Cutover – All the interfaces between the independent modules developed by separate teams have to be tested properly. The use of powerful automated tools and subparts makes testing easier. This is followed by acceptance testing by the user.
The process involves building a rapid prototype, delivering it to the customer, and taking feedback. After validation by
the customer, the SRS document is developed and the design is finalized.
Unlike traditional models such as the Waterfall model, RAD is designed to be more flexible and responsive to user
feedback and changing requirements throughout the development process.
When to use the RAD Model?
1. Well-understood Requirements: When project requirements are stable and transparent, RAD is appropriate.
2. Time-sensitive Projects: Suitable for projects that need to be developed and delivered quickly due to tight
deadlines.
3. Small to Medium-Sized Projects: Better suited for smaller initiatives requiring a controllable number of team
members.
4. High User Involvement: Fits where ongoing input and interaction from users are essential.
5. Innovation and Creativity: Helpful for tasks requiring creative inquiry and innovation.
6. Prototyping: It is necessary when developing and improving prototypes is a key component of the
development process.
7. Low technological Complexity: Suitable for tasks using comparatively straightforward technological
specifications.
Applications of Rapid Application Development Model (RAD)
1. This model should be used for a system with known requirements and requiring a short development time.
2. It is also suitable for projects where requirements can be modularized and reusable components are also
available for development.
3. The model can also be used when already existing system components can be used in developing a new
system with minimum changes.
4. This model can only be used if the teams consist of domain experts. This is because relevant knowledge and
the ability to use powerful techniques are a necessity.
5. The model should be chosen when the budget permits the use of automated tools and techniques required.

Agile methods –
The Agile methodology is a project management and software development approach that emphasizes flexibility, collaboration, and customer-centricity. It is widely used today by major companies such as Facebook, Google, and Amazon. It follows an iterative as well as incremental approach that emphasizes delivering a working product quickly.
What is Agile?
Agile is a project management and software development approach that aims to be more effective.
1. It focuses on delivering smaller pieces of work regularly instead of one big launch.
2. This allows teams to adapt to changes quickly and provide customer value faster.
What is the Agile Methodology?
Agile methodologies are iterative and incremental: a project is broken into smaller parts and plans are adjusted as requirements change.
1. They prioritize flexibility, collaboration, and customer satisfaction.
2. Major companies like Facebook, Google, and Amazon use Agile because of its adaptability and customer-
focused approach.
Life cycle of Agile Methodology
The Agile software development life cycle helps you break down each project you take on into six simple stages:

1. Requirement Gathering
• In this stage, the project team identifies and documents the needs and expectations of various stakeholders,
including clients, users, and subject matter experts.
• It involves defining the project's scope, objectives, and requirements.
• Establishing a budget and schedule.
• Creating a project plan and allocating resources.
2. Design
• Developing a high-level system architecture.
• Creating detailed specifications, which include data structures, algorithms, and interfaces.
• Planning for the software's user interface.
3. Development (Coding)
Writing the actual code for the software. Conducting unit testing to verify the functionality of individual components.
4. Testing
This phase involves several types of testing:
• Integration Testing: Ensuring that different components work together.
• System Testing: Testing the entire system as a whole.
• User Acceptance Testing: Confirming that the software meets user requirements.
• Performance Testing: Assessing the system's speed, scalability, and stability.
5. Deployment
• Deploying the software to a production environment, putting it into the real world where people can use it.
• Making sure it works smoothly in the real world.
• Providing training and support for end-users.
6. Review (Maintenance)
• Addressing and resolving any issues that may arise after deployment.
• Releasing updates and patches to enhance the software and address problems.
Agile Methodology Advantage and Disadvantage
The main advantage and disadvantage of agile methodology are:
• Advantage : Agile methodologies allow for flexibility and adaptability in responding to changes. Teams can
easily adjust their plans and priorities based on evolving requirements or feedback during the project.
• Disadvantage: The iterative and adaptive nature of agile can sometimes lead to uncertainty, especially in
projects with unclear or rapidly changing requirements. This may pose challenges in estimating timelines and
costs accurately.
Benefits of Agile Methodology
The advantages of the agile model are as follows:
1. Immediate Feedback: It allows immediate feedback, which aids software improvement in the next increment.
2. Adapts to Changing Requirements: It is a highly adaptable methodology that accommodates rapidly changing requirements, allowing responsive adjustments.
3. Face-to-Face Communication: Agile methodology encourages effective face-to-face communication.
4. Time-Efficient: It is well-suited for its time-efficient practices, which help in delivering software quickly and
reducing time-to-market.
5. Frequent Changes: It effectively manages and accommodates frequent changes in project requirements as stakeholders require them.
6. Customer Satisfaction: It prioritizes customer satisfaction.
7. Flexibility and Adaptability: Agile methodologies are known for their flexibility and adaptability.

Dynamic Systems Development Method (DSDM) -


The Dynamic Systems Development Method (DSDM) is an agile software development approach that provides a framework for building and maintaining systems. The DSDM philosophy is borrowed from a modified version of the Pareto principle: 80% of an application can often be delivered in 20% of the time it would take to deliver the entire (100%) application.
DSDM is an iterative software process in which each iteration follows the 80% rule: just enough work is done on each increment to facilitate movement to the next increment. The remaining detail can be completed later, once more business requirements are known or changes have been requested and accommodated.
The DSDM Consortium (www.dsdm.org) is a worldwide group of member companies that collectively take on the role of "keeper" of the method. The consortium has defined an agile development model, called the DSDM life cycle, that comprises three iterative cycles preceded by two additional life-cycle activities:
1. Feasibility Study:
It establishes the essential business requirements and constraints associated with the application to be built, and then assesses whether the application is a viable candidate for the DSDM process.
2. Business Study:
It establishes the functional and information requirements that will allow the application to deliver business value; in addition, it defines the basic application architecture and identifies the maintainability requirements for the application.
3. Functional Model Iteration:
It produces a set of incremental prototypes that demonstrate functionality for the customer.
(Note: All DSDM prototypes are intended to evolve into the deliverable application.) The intent during this iterative cycle is to gather additional requirements by eliciting feedback from users as they exercise the prototype.
4. Design and Build Iteration:
It revisits prototypes built during functional model iteration to ensure that each has been engineered in a manner that will enable it to provide operational business value for end users. In some cases, functional model iteration and design and build iteration occur at the same time.
5. Implementation:
It places the latest software increment (an "operationalized" prototype) into the operational environment. It should be noted that:
• (a) the increment may not be 100% complete, or
• (b) changes may be requested as the increment is put into place. In either case, DSDM development work continues by returning to the functional model iteration activity.
(Figure: the DSDM life cycle.)
DSDM can be combined with XP to provide a combined approach that defines a solid process model (the DSDM life cycle) together with the nuts-and-bolts practices (XP) that are required to build software increments. In addition, the Adaptive Software Development (ASD) concepts of collaboration and self-organizing teams can be adapted to a combined process model.

What is Extreme Programming (XP)?


Extreme programming (XP) is one of the most important software development frameworks of Agile models. It is used
to improve software quality and responsiveness to customer requirements.
Extreme Programming (XP) is an Agile software development methodology that focuses on delivering high-quality
software through frequent and continuous feedback, collaboration, and adaptation. XP emphasizes a close working
relationship between the development team, the customer, and stakeholders, with an emphasis on rapid, iterative
development and deployment.

Extreme Programming (XP)


Agile development approaches evolved in the 1990s as a reaction to documentation and bureaucracy-based
processes, particularly the waterfall approach. Agile approaches are based on some common principles, some of
which are:
1. Working software is the key measure of progress in a project.
2. For progress in a project, therefore software should be developed and delivered rapidly in small increments.
3. Even late changes in the requirements should be entertained.
4. Face-to-face communication is preferred over documentation.
5. Continuous feedback and involvement of customers are necessary for developing good-quality software.
6. A simple design that evolves and improves with time is a better approach than doing an elaborate design up front to handle all possible scenarios.
7. The delivery dates are decided by empowered teams of talented individuals.
Extreme programming is one of the most popular and well-known approaches in the family of agile methods. An XP project starts with user stories, which are short descriptions of the scenarios the customers and users would like the system to support. Each story is written on a separate card, so the stories can be flexibly grouped.
Good Practices in Extreme Programming
Some of the good practices that have been recognized in the extreme programming model and suggested to
maximize their use are given below:



• Code Review: Code review detects and corrects errors efficiently. XP suggests pair programming, in which coding and review of the written code are carried out by a pair of programmers who switch their roles every hour.
• Testing: Testing code helps to remove errors and improves its reliability. XP suggests test-driven development (TDD), in which test cases are continually written and executed; in the TDD approach, test cases are written even before any code is written (a small test-first sketch follows this list).
• Incremental development: Incremental development works well because customer feedback is gained after each iteration, and based on it the development team delivers a new increment every few days.
• Simplicity: Simplicity makes it easier to develop good-quality code as well as to test and debug it.
• Design: Good quality design is important to develop good quality software. So, everybody should design daily.
• Integration testing: Integration Testing helps to identify bugs at the interfaces of different functionalities.
Extreme programming suggests that the developers should achieve continuous integration by building and
performing integration testing several times a day.
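
To make the TDD practice above concrete, here is a minimal test-first sketch in Python; the function `apply_discount` and its expected behaviour are invented for this example, and in true TDD the tests would be written and run (failing) before the function body exists:

```python
# Minimal test-first (TDD) illustration: the tests below define the expected
# behaviour of the (hypothetical) function before/alongside its implementation.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage, rounded to 2 decimals."""
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()
```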
Life Cycle of Extreme Programming (XP) - The Extreme Programming life cycle consists of five phases:
1. Planning: The first stage of Extreme Programming is planning. During this phase, clients define their needs in
concise descriptions known as user stories. The team calculates the effort required for each story and
schedules releases according to priority and effort.
2. Design: The team creates only the essential design needed for current user stories, using a common analogy
or story to help everyone understand the overall system architecture and keep the design straightforward and
clear.
3. Coding: Extreme Programming (XP) promotes pair programming, i.e. two developers work together at one
workstation, enhancing code quality and knowledge sharing. They write tests before coding to ensure
functionality from the start (TDD), and frequently integrate their code into a shared repository with automated
tests to catch issues early.
4. Testing: Extreme Programming (XP) places great importance on testing, which consists of both unit tests and acceptance tests. Unit tests, which are automated, check if specific features work correctly. Acceptance tests,
conducted by customers, ensure that the overall system meets initial requirements. This continuous testing
ensures the software’s quality and alignment with customer needs.
5. Listening: In the listening phase, the team gathers regular feedback from customers to ensure the product meets their needs and to adapt to any changes.

Values of Extreme Programming (XP)
1. Communication: The essence of communication is the exchange of information and ideas amongst development team members so that everyone understands the system requirements and goals. Extreme Programming (XP) supports this through open and frequent communication between members of a team.
2. Simplicity: Keeping things as simple as possible helps reduce complexity and makes it easier to understand
and maintain the code.
3. Feedback: Constant feedback loops, through both testing and customer involvement, help detect problems early during development.
4. Courage: Team members are encouraged to take risks, speak up about problems, and adapt to change
without fear of repercussions.
5. Respect: Every member's input and opinion is valued, which promotes a collective, supportive way of working within the team.

Managing interactive processes –


Managing interactive processes in software development involves overseeing the dynamic and iterative interactions
among team members, stakeholders, and the software itself. These processes are essential for adapting to changes,
ensuring collaboration, and delivering a high-quality product.
Key Aspects of Managing Interactive Processes:
1. Effective Communication:
• Daily Stand-ups: Short, regular meetings to discuss progress, plans, and obstacles.
• Feedback Loops: Regularly gather feedback from stakeholders and users to guide development.
• Transparent Communication Channels: Use tools like Slack, Microsoft Teams, or email for clear and
open communication.
2. Collaboration Tools:
• Version Control Systems: Use tools like Git to manage code changes and collaborate effectively.
• Project Management Software: Utilize tools such as Jira, Trello, or Asana to track tasks, progress,
and deadlines.
• Documentation: Maintain clear and up-to-date documentation using tools like Confluence or Google
Docs.
3. Agile Practices:
• Iterative Development: Break the project into small, manageable iterations or sprints, typically 1-4
weeks long.
• Scrum or Kanban: Adopt frameworks that facilitate iterative development and visual task
management.
4. Continuous Integration and Continuous Deployment (CI/CD):
• Automated Testing: Implement automated tests to quickly identify issues in the codebase.
• Frequent Builds and Releases: Ensure new code is regularly integrated and deployed to catch issues
early and adapt swiftly to changes.
5. Stakeholder Involvement:
• Regular Demos: Showcase progress to stakeholders regularly to gather feedback.
• User Stories and Acceptance Criteria: Clearly define and document user needs and expectations.
6. Risk Management:
• Identify and Mitigate Risks: Continuously identify potential risks and develop strategies to mitigate
them.
• Risk Reviews: Regularly review and update the risk management plan.
7. Adaptive Planning:
• Flexible Plans: Create plans that can adapt to changes in requirements or priorities.
• Backlog Management: Continuously refine and prioritize the product backlog.

Basics of Software Estimation -


Software estimation is a crucial aspect of Software Project Management (SPM) that involves predicting the effort,
time, and cost required to develop a software project. Accurate estimation ensures better planning, resource
allocation, and risk management.
1. What is Software Estimation?
Software estimation is the process of forecasting:
• Effort (Person-months or Person-hours)
• Time (Duration of the project)
• Cost (Budget required)
• Resources (Human, technical, and material needs)
2. Importance of Software Estimation
• Helps in project planning and scheduling.
• Ensures cost-effective resource allocation.
• Minimizes risk and uncertainty.
• Improves decision-making.
3. Types of Software Estimation
1. Effort Estimation – Predicts human effort needed.
2. Time Estimation – Calculates the duration of tasks.
3. Cost Estimation – Determines financial requirements.
4. Resource Estimation – Estimates hardware, software, and manpower.
4. Software Estimation Techniques
Several techniques help in software estimation, including:
1. Expert Judgment
o Relies on the experience of experts.
o Quick but subjective.
2. Analogous Estimation
o Uses historical data from similar projects.
o Effective but needs accurate past data.
3. Top-Down Estimation
o Estimates the project as a whole first, then breaks it down.
o Useful for early-stage planning.
4. Bottom-Up Estimation
o Estimates each component separately and sums them up.
o More accurate but time-consuming.
5. Function Point Analysis (FPA)
o Estimates based on software functionalities.
o Independent of programming language.
6. COCOMO (COnstructive COst MOdel)
o Uses mathematical formulas based on project size.
o Three types:
▪ Basic COCOMO (for small projects)
▪ Intermediate COCOMO (considers complexity)
▪ Detailed COCOMO (most accurate, considers multiple factors)
7. Use Case Points (UCP)
o Estimates based on use case complexity in object-oriented systems.
8. Delphi Method
o Consensus-based expert estimation with multiple iterations.
5. Challenges in Software Estimation
• Incomplete Requirements – Leads to incorrect estimates.
• Scope Creep – Changing requirements increase cost and time.
• Unrealistic Deadlines – Causes project failures.
• Lack of Historical Data – Affects estimation accuracy.
• Human Factors – Experience, skill level, and team efficiency impact estimates.
6. Best Practices for Software Estimation
• Use multiple estimation techniques for accuracy.
• Break down the project into smaller tasks.
• Include risk buffers to handle uncertainties.
• Regularly review estimates during project execution.
• Document past projects to improve future estimation accuracy.

Effort and Cost estimation techniques –


1. Effort Estimation Techniques
Effort estimation predicts the amount of human work (usually measured in person-hours or person-months) needed
to complete a project.
(a) Expert Judgment
• Based on experience and intuition of domain experts.
• Experts provide estimates based on similar past projects.
• Pros: Quick and easy.
• Cons: Subjective and varies between experts.
(b) Delphi Method
• A group of experts provide independent estimates.
• Their estimates are shared anonymously and iterated upon until a consensus is reached.
• Pros: Reduces bias, provides a collective judgment.
• Cons: Time-consuming.
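
As a toy illustration only (the real Delphi process relies on anonymous human judgment, not a formula), the following Python sketch mimics experts converging toward a consensus over iterated rounds; all numbers are invented:

```python
# Simplified, hypothetical Delphi rounds: each expert revises toward the
# anonymous group mean until the estimates agree within a tolerance.
estimates = [12.0, 20.0, 15.0, 30.0]  # initial person-month estimates (invented)
TOLERANCE = 2.0                       # acceptable spread in person-months

while max(estimates) - min(estimates) > TOLERANCE:
    mean = sum(estimates) / len(estimates)
    # Each expert moves halfway toward the group mean after seeing the summary.
    estimates = [e + 0.5 * (mean - e) for e in estimates]

print(f"Consensus estimate: {sum(estimates) / len(estimates):.1f} person-months")
```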
(c) Work Breakdown Structure (WBS)
• Breaks the project into smaller tasks or modules.
• Estimates effort for each task, then sums them up.
• Pros: Improves accuracy, helps in task management.
• Cons: Requires detailed project knowledge.
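
A minimal bottom-up sketch in Python over a tiny, invented work breakdown structure, summing per-task estimates as described above:

```python
# Bottom-up estimation: estimate each WBS task separately, then sum.
# All task names and hour figures are invented for illustration.
wbs_effort_hours = {
    "requirements": 80,
    "design": 120,
    "coding": 300,
    "testing": 160,
    "deployment": 40,
}

total_hours = sum(wbs_effort_hours.values())
print(f"Total estimated effort: {total_hours} person-hours "
      f"(~{total_hours / 160:.1f} person-months at 160 h/month)")
```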
(d) Function Point Analysis (FPA)
• Estimates effort based on the size and complexity of the system.
• Measures functional requirements (inputs, outputs, files, queries, etc.).
• Formula: Effort = Function Points × Productivity Factor
• Pros: Independent of programming language.
• Cons: Requires experience to assign function points correctly.
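
A small hypothetical FPA sketch in Python; the component counts are invented, the weights are the commonly published IFPUG average weights, and the productivity factor is an assumed value that would normally come from historical data:

```python
# Unadjusted function points from counted components (count * average weight).
unadjusted_fp = (
    6 * 4 +   # external inputs
    5 * 5 +   # external outputs
    3 * 4 +   # external inquiries
    2 * 10 +  # internal logical files
    1 * 7     # external interface files
)

productivity_factor = 1.2  # assumed person-hours per function point
effort_hours = unadjusted_fp * productivity_factor
print(f"{unadjusted_fp} FP -> {effort_hours:.0f} person-hours")
```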
(e) COCOMO (COnstructive COst MOdel) for Effort Estimation
• Mathematical model developed by Barry Boehm.
• Uses project size (in KLOC – thousands of lines of code) to estimate effort.
• Types of COCOMO Models:
1. Basic COCOMO: Simple projects.
2. Intermediate COCOMO: Considers project complexity.
3. Detailed COCOMO: Includes multiple cost drivers.
Basic COCOMO Effort Estimation Formula:
Effort = a × (Size)^b
Where:
• a, b = Constants based on project type.
• Size = Estimated KLOC.
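
As a concrete illustration, here is a minimal Python sketch of the Basic COCOMO effort equation; the (a, b) pairs are the commonly published Basic COCOMO constants for the three project types, and the 32 KLOC input is an invented example:

```python
# Basic COCOMO: Effort = a * (KLOC ** b), in person-months.
COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, mode: str = "organic") -> float:
    a, b = COEFFS[mode]
    return a * kloc ** b

print(f"Effort for a 32 KLOC organic project: "
      f"{basic_cocomo_effort(32):.1f} person-months")
```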

2. Cost Estimation Techniques


Cost estimation predicts the total financial expense required to complete the software project. It includes costs for
development, testing, deployment, maintenance, and resources.
(a) Expert Judgment (Same as in Effort Estimation)
• Experts estimate costs based on past experience.
• Pros: Quick and simple.
• Cons: Can be biased and inaccurate.
(b) Top-Down Estimation
• Estimates total cost first and then distributes it across phases.
• Based on past project experiences.
• Pros: Useful in early planning.
• Cons: May overlook specific task costs.
(c) Bottom-Up Estimation
• Estimates cost for each task individually, then adds them up.
• Based on Work Breakdown Structure (WBS).
• Pros: More accurate than top-down.
• Cons: Time-consuming.
(d) Parametric Estimation
• Uses statistical models based on historical data and project parameters.
• Example: Cost per function point, cost per KLOC.
• Pros: Data-driven and objective.
• Cons: Requires accurate historical data.
(e) COCOMO for Cost Estimation
• Extends effort estimation to cost by considering developer salary rates and other expenses.
• Formula: Cost = Effort × Cost per Person-Month

• Pros: Provides a structured cost estimate.


• Cons: Depends on accurate effort estimation.
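
A minimal sketch of this cost calculation in Python; both the effort figure and the monthly rate are invented for illustration:

```python
# Turning an effort estimate into a cost estimate, per the formula above.
effort_pm = 91.3      # person-months (e.g., taken from a COCOMO estimate)
rate_per_pm = 8000    # assumed cost per person-month, in any currency

cost = effort_pm * rate_per_pm
print(f"Estimated cost: {cost:,.0f}")
```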

COSMIC Full Function Points in Software Project Management -

COSMIC (Common Software Measurement International Consortium) Full Function Points (FFP) is a standardized
method for measuring the functional size of software. It helps estimate effort, cost, and resources required for software
development and maintenance.

1. What is COSMIC FFP?

COSMIC FFP is a functional size measurement method that quantifies the amount of functionality delivered by a
software system. It extends traditional Function Point Analysis (FPA) and is particularly suitable for modern software
applications, including real-time, embedded, and business software.

2. Key Principles of COSMIC FFP

• Process-Based: Measures functionality based on user transactions and system processes.


• Technology-Independent: Focuses on functional user requirements, independent of programming language,
architecture, or technology.
• Applicable to Various Software Types: Can be used for real-time, embedded, business, and infrastructure
software.
• Data Movement Focused: Measures the movement of data across system boundaries and within the software.

3. Measurement Concepts in COSMIC

COSMIC defines four data movement types that form the basis of measurement:

1. Entry (E): Data is received from an external source (e.g., user input).
2. Exit (X): Data is sent to an external destination (e.g., displaying results).
3. Read (R): Data is retrieved from persistent storage.
4. Write (W): Data is stored or updated in persistent storage.

Each data movement is counted as 1 COSMIC Function Point (CFP).
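
To make this counting rule concrete, here is a small hypothetical Python sketch; the functional processes and their data movements are invented for illustration:

```python
# Hypothetical COSMIC sizing: list the Entry/Exit/Read/Write data movements
# of each functional process; each movement counts as 1 CFP.
functional_processes = {
    "login":        ["E", "R", "X"],       # credentials in, read user record, result out
    "place_order":  ["E", "R", "W", "X"],  # order in, read stock, write order, confirmation out
    "view_history": ["E", "R", "X"],       # request in, read past orders, list out
}

total_cfp = sum(len(moves) for moves in functional_processes.values())
print(f"Functional size: {total_cfp} CFP")
```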

4. Steps to Measure Using COSMIC FFP

1. Identify the Scope and Boundaries: Determine the functional user requirements and system boundaries.
2. Break Down the Software into Functional Processes: Identify distinct functional processes (e.g., login, order
processing).
3. Identify Data Movements: Categorize each data movement as Entry, Exit, Read, or Write.
4. Assign COSMIC Function Points (CFP): Each identified data movement contributes to the total functional
size.
5. Calculate Total CFP: Sum up all the function points to determine the final functional size.

5. Advantages of COSMIC FFP in Software Project Management


• Accurate Estimation: Provides precise effort and cost estimates for development and maintenance.
• Better Productivity Analysis: Helps track software productivity over time.
• Technology-Neutral: Can be applied to various types of software applications.
• Improved Benchmarking: Enables comparisons across different projects and organizations.

6. COSMIC vs. Traditional Function Points

| Feature | COSMIC FFP | Traditional Function Points (IFPUG) |
| --- | --- | --- |
| Measurement Focus | Functional transactions | Data processing components |
| Data Movements | Entry, Exit, Read, Write | Inputs, Outputs, Inquiries, Files |
| Scope | Real-time, embedded, and business software | Primarily business applications |
| Granularity | Fine-grained | Higher abstraction |

7. Practical Applications

• Software effort estimation


• Project cost and resource planning
• Benchmarking software productivity
• Agile and waterfall development models

COCOMO-II –
COCOMO-II is the revised version of the original Cocomo (Constructive Cost Model) and was developed at the
University of Southern California. It is the model that allows one to estimate the cost, effort, and schedule when
planning a new software development activity.

Sub-Models of COCOMO Model

1. End User Programming
Application generators are used in this sub-model: end users write code using these application generators, for example spreadsheets, report generators, etc.
2. Intermediate Sector
• Application Generators and Composition Aids: This category creates largely prepackaged capabilities for user programming. Its products have many reusable components. Typical firms operating in this sector are Microsoft, Lotus, Oracle, IBM, Borland, and Novell.
• Application Composition Sector: This category is too diversified to be handled by prepackaged solutions. It includes GUIs, databases, and domain-specific components such as financial, medical, or industrial process-control packages.
• System Integration: This category deals with large scale and highly embedded systems.
3. Infrastructure Sector
This category provides infrastructure for the software development like Operating System, Database Management
System, User Interface Management System, Networking System, etc.

Stages of COCOMO II

1. Stage-I
It supports estimation for prototyping. For this it uses the Application Composition Estimation Model, which is used for the prototyping stage of application generators and system integration.
2. Stage-II
It supports estimation in the early design stage of the project, when little is known about it. For this it uses the Early Design Estimation Model, which is used in the early design stage of application generators, infrastructure, and system integration.
3. Stage-III
It supports estimation in the post-architecture stage of a project. For this it uses the Post-Architecture Estimation Model, which is used after the detailed architecture of application generators, infrastructure, and system integration has been completed.
A Parametric Productivity Model –
A Parametric Productivity Model in Software Project Management (SPM) is a mathematical model used to
estimate software development effort, cost, and schedule based on historical project data and influencing parameters.
These models use statistical and regression-based techniques to predict productivity by analyzing various factors like
software size, complexity, team experience, and development tools.

1. What is a Parametric Productivity Model?


A parametric model uses quantifiable parameters to predict software development effort and productivity. Instead of
relying solely on expert judgment, it derives estimates from empirical data and mathematical relationships.
Key Characteristics:

• Data-Driven: Uses historical project data for accurate estimation.
• Scalable: Works for small, medium, and large software projects.
• Customizable: Can be adapted based on industry-specific requirements.
• Technology-Independent: Can be applied across various programming languages and development methodologies.

2. General Formula of a Parametric Productivity Model


The effort required for a software project is usually estimated using a formula like:
Effort (PM) = A × (Size)^B × ∏ (Cost Drivers)
Where:
• Effort (PM) = Person-Months (the total work effort required).
• A = Calibration constant (derived from historical data).
• Size = Software size, usually in KLOC (thousands of lines of code) or Function Points (FPs).
• B = Scale exponent that reflects productivity changes as project size increases.
• Cost Drivers = Multipliers for factors like team capability, tool support, schedule constraints, etc.
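
A minimal Python sketch of this generic parametric form; the calibration constants and cost-driver multipliers below are illustrative assumptions, not calibrated values:

```python
# Generic parametric effort model: Effort = A * Size^B * product(cost drivers).
from math import prod

A, B = 2.94, 1.10                  # assumed calibration constants (COCOMO II-like)
size_kloc = 50                     # assumed software size in KLOC
cost_drivers = [1.10, 0.90, 1.15]  # e.g., complexity up, tool support down, schedule pressure up

effort_pm = A * size_kloc ** B * prod(cost_drivers)
print(f"Estimated effort: {effort_pm:.1f} person-months")
```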

3. Popular Parametric Productivity Models


(i) COCOMO II (Constructive Cost Model II)
• Uses cost drivers like team experience, development tools, and complexity.
• Applicable to different project phases: Early Design, Post-Architecture.
• Formula: PM = A × (Size)^B × ∏ EMi, where the EMi are effort multipliers (cost drivers).
• Use Case: Large-scale enterprise applications, real-time systems.

(ii) Putnam Model (SLIM – Software Life Cycle Management Model)


• Based on Rayleigh Distribution of software effort.
• Uses Productivity Parameter (P) to estimate time and effort.
• Formula: Effort = (Size)^3 / P^4, where P is the productivity parameter.
• Use Case: Defense, aerospace, and long-duration projects.
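
A Python sketch of the simplified relation above; both numbers are illustrative only, since the productivity parameter P must be calibrated from an organization's past projects:

```python
# Putnam/SLIM-style effort sketch using the simplified relation
# Effort = Size^3 / P^4 given above.
size_loc = 15_000   # assumed size in lines of code
P = 500             # assumed productivity parameter (environment-dependent)

effort = size_loc ** 3 / P ** 4
print(f"Estimated effort: {effort:.1f} person-months (per this calibration)")
```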
(iii) Function Point Analysis (FPA)
• Measures functionality based on user requirements rather than lines of code.
• Useful for Agile and object-oriented projects.
• Function Points are converted into effort estimates using industry-standard conversion tables.
• Use Case: Business applications, banking software.

4. Factors Affecting Productivity in Parametric Models

| Factor | Impact on Productivity |
| --- | --- |
| Team Experience | More experienced teams improve productivity. |
| Software Complexity | Higher complexity increases effort. |
| Development Tools | Advanced tools reduce effort. |
| Programming Language | High-level languages improve efficiency. |
| Project Management Quality | Better planning leads to higher productivity. |

5. Advantages of Parametric Productivity Models

• Accurate and Reliable: Based on real data rather than subjective judgment.
• Repeatable and Scalable: Works for different project sizes and domains.
• Better Risk Management: Identifies cost drivers early.
• Objective Decision Making: Reduces estimation bias.

6. Limitations of Parametric Productivity Models

• Requires Historical Data: Accuracy depends on past project data.
• Not Ideal for Small Projects: The overhead of model calibration can be high.
• Assumes Stable Requirements: May not adapt well to frequent scope changes.

7. When to Use a Parametric Productivity Model?

✔ Large-scale projects with historical data


✔ Organizations looking for standardized cost estimation
✔ Software development teams using function points or KLOC
✔ Projects with long development lifecycles (e.g., government, aerospace, healthcare)
