
LECTURE NOTES ON

Software Project Management (SPM)

III-I CSE (DATA SCIENCE) (R22)

UNIT-III

UNIT-IV

UNIT-V

Prepared By
Chaitanya Kumar Repalle
SPM UNIT III

A Software Management Process Framework-II

Model Based Software Architectures: A Management Perspective and Technical Perspective.

Flows of the Process: Software Process Workflows. Iteration Workflows

Checkpoints of the Process: Major Milestones, Minor Milestones, Periodic Status
Assessments

ARCHITECTURE: A MANAGEMENT PERSPECTIVE


⮚ The most critical technical product of a software project is its architecture: the
infrastructure, control, and data interfaces that permit software components to cooperate as
a system and software designers to cooperate efficiently as a team. When the
communications media include multiple languages and inter-group literacy varies, the
communications problem can become extremely complex and even unsolvable. If a
software development team is to be successful, the inter-project communications, as
captured in the software architecture, must be both accurate and precise.

⮚ From a management perspective, there are three different aspects of architecture:


An architecture (the intangible design concept) is the design of a software system;
this includes all engineering necessary to specify a complete bill of materials.

An architecture baseline (the tangible artifacts) is a slice of information across the
engineering artifact sets sufficient to satisfy all stakeholders that the vision (function
and quality) can be achieved within the parameters of the business case (cost, profit,
time, technology, and people).

An architecture description (a human-readable representation of an architecture,
which is one of the components of an architecture baseline) is an organized subset of
information extracted from the design set model(s). The architecture description
communicates how the intangible concept is realized in the tangible artifacts.

The importance of software architecture and its close linkage with modern software
development processes can be summarized as follows:

Achieving a stable software architecture represents a significant project milestone at which
the critical make/buy decisions should have been resolved.

Architecture representations provide a basis for balancing the trade-offs between the
problem space (requirements and constraints) and the solution space (the operational
product).

The architecture and process encapsulate many of the important (high-payoff or high-
risk) communications among individuals, teams, organizations, and stakeholders.

Poor architectures and immature processes are often given as reasons for project failures.
A mature process, an understanding of the primary requirements, and a demonstrable
architecture are important prerequisites for predictable planning.

Architecture development and process definition are the intellectual steps that map the
problem to a solution without violating the constraints; they require human innovation and
cannot be automated.

ARCHITECTURE: A TECHNICAL PERSPECTIVE


⮚ An architecture framework is defined in terms of views that are abstractions of the
UML models in the design set. The design model includes the full breadth and depth of
information. An architecture view is an abstraction of the design model; it contains only the
architecturally significant information. Most real-world systems require four views: design,
process, component, and deployment. The purposes of these views are as follows:

Design: describes architecturally significant structures and functions of the
design model.
Process: describes concurrency and control thread relationships among the
design, component, and deployment views.
Component: describes the structure of the implementation set.
Deployment: describes the structure of the deployment set.

The following figure summarizes the artifacts of the design set, including the architecture
views and the architecture description:
The requirements model addresses the behavior of the system as seen by its end users,
analysts, and testers. This view is modeled statically using use case and class diagrams and
dynamically using sequence, collaboration, state chart and activity diagrams.

The use case view describes how the system's critical (architecturally significant) use cases
are realized by elements of the design model. It is modeled statically using use case diagrams
and dynamically using any of the UML behavioral diagrams.

The design view describes the architecturally significant elements of the design model. This
view, an abstraction of the design model, addresses the basic structure and functionality of
the solution. It is modeled statically using class and object diagrams and dynamically using
any of the UML behavioral diagrams.

The process view addresses the run-time collaboration issues involved in executing the
architecture on a distributed deployment model, including the logical software network
topology (allocation to processes and threads of control), interprocess communication, and
state management. This view is modeled statically using deployment diagrams and
dynamically using any of the UML behavioral diagrams.

The component view describes the architecturally significant elements of the implementation
set. This view, an abstraction of the design model, addresses the software source code
realization of the system from the perspective of the project's integrators and developers,
especially with regard to releases and configuration management. It is modeled statically
using component diagrams and dynamically using any of the UML behavioral diagrams.

The deployment view addresses the executable realization of the system, including the
allocation of logical processes in the distribution view (the logical software topology) to
physical resources of the deployment network (the physical system topology). It is modeled
statically using deployment diagrams and dynamically using any of the UML behavioral
diagrams.

Generally, an architecture baseline should include the following:

Requirements: critical use cases, system-level quality objectives, and priority relationships
among features and qualities

Design: names, attributes, structures, behaviors, groupings, and relationships of significant
classes and components

Implementation: source component inventory and bill of materials (number, name, purpose,
cost) of all primitive components

Deployment: executable components sufficient to demonstrate the critical use cases and the
risk associated with achieving the system qualities

SOFTWARE PROCESS WORKFLOWS

The term workflow means a thread of cohesive and mostly sequential activities.
Workflows are mapped to product artifacts. There are seven top-level workflows:

1. Management workflow: controlling the process and ensuring win conditions for all
stakeholders.
2. Environment workflow: automating the process and evolving the maintenance
environment.
3. Requirements workflow: analyzing the problem space and evolving the requirements
artifacts.
4. Design workflow: modeling the solution and evolving the architecture and design artifacts.
5. Implementation workflow: programming components and evolving the implementation and
deployment artifacts.
6. Assessment workflow: assessing the trends in process and product quality.
7. Deployment workflow: transitioning the end products to the user.

1. Architecture-first approach: extensive requirements analysis, design, implementation,
and assessment activities are performed before the construction phase, when full-scale
implementation is the focus.
2. Iterative life-cycle process: some projects may require only one iteration in a phase;
others may require several. The point is that the activities and artifacts of any given
workflow may require more than one pass to achieve results.
3. Round-trip engineering: raising the environment activities to a first-class workflow is
critical. The environment is the tangible embodiment of the project's process, methods,
and notations for producing the artifacts.
4. Demonstration-based approach: implementation and assessment activities are initiated
early in the life cycle, reflecting the emphasis on constructing executable subsets of the
evolving architecture.
ITERATION WORKFLOWS

An iteration consists of a loosely sequential set of activities in various proportions,
depending on where the iteration is located in the development cycle. Each iteration is
defined in terms of a set of allocated usage scenarios.
An individual iteration's workflow generally includes the following sequence:
▪ Management: iteration planning to determine the content of the release and develop the
detailed plan for the iteration; assignment of work packages, or tasks, to the development
team.
▪ Environment: evolving the software change order database to reflect all new baselines
and changes to existing baselines for all product, test, and environment components.
▪ Requirements: analyzing the baseline plan, the baseline architecture, and the baseline
requirements set artifacts to fully elaborate the use cases to be demonstrated at the end of
this iteration and their evaluation criteria; updating any requirements set artifacts to reflect
changes necessitated by results of this iteration's engineering activities.
▪ Design: evolving the baseline architecture and the baseline design set artifacts to elaborate
fully the design model and test model components necessary to demonstrate against the
evaluation criteria allocated to this iteration; updating design set artifacts to reflect changes
necessitated by the results of this iteration's engineering activities.
▪ Implementation: developing or acquiring any new components, and enhancing or
modifying any existing components, to demonstrate the evaluation criteria allocated to
this iteration; integrating and testing all new and modified components with existing
baselines (previous versions).
▪ Assessment: evaluating the results of the iteration, including compliance with the
allocated evaluation criteria and the quality of the current baselines; identifying any
rework required and determining whether it should be performed before deployment
of this release or allocated to the next release; assessing results to improve the basis
of the subsequent iteration's plan.
▪ Deployment: transitioning the release either to an external organization (such as a
user, independent verification and validation contractor, or regulatory agency) or to
internal closure by conducting a post-mortem so that lessons learned can be captured
and reflected in the next iteration.

CHECKPOINTS OF THE PROCESS

Three types of joint management reviews are conducted throughout the process:
• Major milestones: these system-wide events are held at the end of each development
phase. They provide visibility to system-wide issues, synchronize the management and
engineering perspectives, and verify that the aims of the phase have been achieved.
• Minor milestones: these iteration-focused events are conducted to review the content of
an iteration in detail and to authorize continued work.
• Status assessments: these periodic events provide management with frequent and
regular insight into the progress being made.

Each of the four phases (inception, elaboration, construction, and transition) consists of
one or more iterations and concludes with a major milestone when a planned technical
capability is produced in demonstrable form.

MAJOR MILESTONES

The four major milestones occur at the transition points between life-cycle phases. They can
be used in many different process models, including the conventional waterfall model. In an
iterative model, the major milestones are used to achieve concurrence among all stakeholders
on the current state of the project. Different stakeholders have different concerns:
▪ Customers: Schedule and budget estimates, feasibility, risk assessment, requirements
understanding, progress, product line compatibility.
▪ Users: consistency with requirements and usage scenarios, potential for accommodating
growth, quality attributes.
▪ Architects and systems engineers: product line compatibility, requirements changes,
trade-off analyses, completeness and consistency, balance among risk, quality, and
usability.
▪ Developers: sufficiency of requirements detail and usage scenario descriptions,
frameworks for component selection or development, resolution of development risk,
product line compatibility, sufficiency of the development environment.
▪ Maintainers: sufficiency of product and documentation artifacts, understandability,
interoperability with existing systems, sufficiency of maintenance environment.
▪ Others: possibly many other perspectives by stakeholders such as regulatory agencies,
independent verification and validation contractors, venture capital investors,
subcontractors, associate contractors, and sales and marketing teams.

The following table summarizes the balance of information across the major milestones.

Life-Cycle Objectives Milestone
The life-cycle objectives milestone occurs at the end of the inception phase. The goal is to
present to all stakeholders a recommendation on how to proceed with development, including
a plan, estimated cost and schedule, and expected benefits and cost savings. A successfully
completed life-cycle objectives milestone will result in authorization from all stakeholders to
proceed with the elaboration phase.

Life-Cycle Architecture Milestone


The life-cycle architecture milestone occurs at the end of the elaboration phase. The primary
goal is to demonstrate an executable architecture to all stakeholders. The baseline architecture
consists of both a human-readable representation and a configuration-controlled set of
software components captured in the engineering artifacts.
MINOR MILESTONES

▪ Minor milestones are sometimes called "inch-pebbles."
▪ Minor milestones mainly focus on the local concerns of the current iteration.
▪ These iteration-focused events are used to review iteration content in detail and to
authorize continued work.

Minor milestones in the life cycle of an iteration: the number of iteration-specific
milestones depends on the iteration's length and content. An iteration of one month to six
months typically requires only two minor milestones:
a) Iteration readiness review: this informal milestone is conducted at the start of each
iteration to review the detailed iteration plan and the evaluation criteria that have been
allocated to this iteration.
b) Iteration assessment review: this informal milestone is conducted at the end of each
iteration to assess the degree to which the iteration achieved its objectives and satisfied
its evaluation criteria, and to review iteration results.

PERIODIC STATUS ASSESSMENTS

▪ Periodic status assessments are management reviews conducted at regular intervals
(monthly, quarterly) to address progress and quality indicators, ensure continuous attention
to project dynamics, and maintain open communications among all stakeholders.
▪ Periodic status assessments are crucial for focusing continuous attention on the evolving
health of the project and its dynamic priorities.
▪ Periodic status assessments serve as project snapshots. While the period may vary, the
recurring event forces the project history to be captured and documented. Status
assessments provide the following:
a) A mechanism for openly addressing, communicating, and resolving management
issues, technical issues, and project risks.
b) Objective data derived directly from ongoing activities and evolving product
configurations.
c) A mechanism for disseminating process, progress, quality trends, practices, and
experience information to and from all stakeholders in an open forum.

UNIT – IV

Software Management Discipline-I

Iterative Process Planning- Work breakdown structures, planning guidelines, cost and
schedule estimating, Iteration planning process, Pragmatic planning.

Project Organizations and Responsibilities:


Line-of-Business Organizations, Project Organizations, evolution of Organizations.

Process Automation: Automation building blocks, The Project Environment



WORK BREAKDOWN STRUCTURES (WBS)

▪ A good work breakdown structure and its synchronization with the process framework are
critical factors in software project success. Development of a work breakdown structure
depends on the project management style, organizational culture, customer preference,
financial constraints, and several other hard-to-define, project-specific parameters.
▪ A WBS is simply a hierarchy of elements that decomposes the project plan into discrete
work tasks.
▪ A WBS provides the following information structure:
a) A delineation of all significant work
b) A clear task decomposition for assignment of responsibilities
c) A framework for scheduling, budgeting, and expenditure tracking

CONVENTIONAL WBS ISSUES

Conventional work breakdown structures frequently suffer from three fundamental flaws.
▪ They are prematurely structured around the product design.

▪ They are prematurely decomposed, planned, and budgeted in either too much or too little
detail.
▪ They are project-specific, and cross-project comparisons are usually difficult or impossible.
▪ Conventional work breakdown structures are prematurely structured around the
product design. The above figure shows a typical conventional WBS that has been
structured primarily around the subsystems of its product architecture and then further
decomposed into the components of each subsystem. A WBS is the architecture for the
financial plan.
▪ Conventional work breakdown structures are prematurely decomposed, planned, and
budgeted in either too little or too much detail. Large software projects tend to be
overplanned and small projects tend to be underplanned. The basic problem with planning
in too much detail at the outset is that the detail does not evolve with the level of fidelity
in the plan.
▪ Conventional work breakdown structures are project-specific, and cross-project
comparisons are usually difficult or impossible. With no standard WBS structure, it is
extremely difficult to compare plans, financial data, schedule data, organizational
efficiencies, cost trends, productivity trends, or quality trends across multiple projects.
EVOLUTIONARY WORK BREAKDOWN STRUCTURES

An evolutionary WBS should organize the planning elements around the process framework
rather than the product framework. The basic recommendation for the WBS is to organize the
hierarchy as follows:

a) First-level WBS elements are the workflows (management, environment,
requirements, design, implementation, assessment, and deployment).

b) Second-level elements are defined for each phase of the life cycle (inception,
elaboration, construction, and transition).

c) Third-level elements are defined for the focus of activities that produce the artifacts of
each phase.

A default WBS consistent with the process framework (phases, workflows, and artifacts) is
shown in Figure:
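The three-level hierarchy above can be sketched as a data structure. This is an illustrative sketch, not part of the source notes: the first level is a workflow, the second a life-cycle phase, and the third a list of activities. The activity names used below are hypothetical examples.

```python
# Evolutionary WBS encoded as a nested dictionary:
# workflow -> phase -> list of activities producing that phase's artifacts.
WORKFLOWS = ["management", "environment", "requirements",
             "design", "implementation", "assessment", "deployment"]
PHASES = ["inception", "elaboration", "construction", "transition"]

def build_wbs(third_level):
    """Build the three-level hierarchy; third_level maps
    (workflow, phase) -> list of activity names."""
    return {
        wf: {ph: third_level.get((wf, ph), []) for ph in PHASES}
        for wf in WORKFLOWS
    }

wbs = build_wbs({
    ("management", "inception"): ["business case development", "vision elaboration"],
    ("design", "elaboration"): ["architecture baseline", "design demonstration"],
})
print(len(wbs))            # 7 first-level elements (one per workflow)
print(len(wbs["design"]))  # 4 second-level elements (one per phase)
print(wbs["design"]["elaboration"])
```

Because the first two levels are fixed by the process framework rather than the product design, two projects built this way have directly comparable cost and schedule data at those levels.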
PLANNING GUIDELINES
Software projects span a broad range of application domains, so it is valuable but risky to
make specific planning recommendations independent of project context. The risk is that
the guidelines may be adopted blindly without being adapted to specific project
circumstances. Two simple planning guidelines should be considered when a project plan is
being initiated or assessed.
The below table prescribes a default allocation of costs among the first-level WBS elements.

The below table prescribes allocation of effort and schedule across the lifecycle phases.
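The two tables referenced above are missing from these notes. As a hedged reconstruction, the values below follow Royce's widely cited default guidelines; treat them as illustrative starting points to be adapted per project, not prescriptions.

```python
# Default budget allocation across first-level WBS elements (percent of cost),
# and default (effort %, schedule %) allocation across the life-cycle phases.
COST_BY_WORKFLOW = {
    "management": 10, "environment": 10, "requirements": 10,
    "design": 15, "implementation": 25, "assessment": 25, "deployment": 5,
}
EFFORT_SCHEDULE_BY_PHASE = {
    "inception": (5, 10), "elaboration": (20, 30),
    "construction": (65, 50), "transition": (10, 10),
}

# Sanity checks: each allocation must account for the whole project.
assert sum(COST_BY_WORKFLOW.values()) == 100
assert sum(e for e, _ in EFFORT_SCHEDULE_BY_PHASE.values()) == 100
assert sum(s for _, s in EFFORT_SCHEDULE_BY_PHASE.values()) == 100
```

Note the asymmetry in the phase table: construction consumes roughly 65% of the effort but only about 50% of the schedule, reflecting the larger team deployed during full-scale production.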

THE COST AND SCHEDULE ESTIMATING PROCESS

Project plans need to be derived from two perspectives. The first is a forward-looking,
top-down approach. It starts with an understanding of the general requirements and
constraints, derives a macro-level budget and schedule, and then decomposes these elements
into lower-level budgets and intermediate milestones.

From this perspective, the following planning sequence would occur:


▪ The software project manager (and others) develops a characterization of the overall
size, process, environment, people, and quality required for the project.
▪ A macro-level estimate of the total effort and schedule is developed using a software cost
estimation model.
▪ The software project manager partitions the effort estimate into a top-level WBS using
the planning guidelines.
At this point, subproject managers are given the responsibility for decomposing each of the
WBS elements into lower levels using their top-level allocation, staffing profile, and major
milestone dates as constraints.
The second perspective is a backward-looking, bottom-up approach. We start with the end in
mind, analyze the micro-level budgets and schedules, and then sum all these elements into
the higher-level budgets and intermediate milestones. This approach tends to define and
populate the WBS from the lowest levels upward. From this perspective, the following
planning sequence would occur:
a) The lowest level WBS elements are elaborated into detailed tasks.
b) Estimates are combined and integrated into higher-level budgets and milestones.
c) Comparisons are made with the top-down budgets and schedule milestones.
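The two passes above can be sketched in a few lines. This is a minimal illustration with made-up numbers and hypothetical helper names, not the source's method: top-down partitions a macro estimate by default percentages, bottom-up sums detailed task estimates, and the comparison surfaces the variance that drives plan negotiation.

```python
# Top-down: partition a macro effort estimate (person-months) across
# first-level WBS elements using default allocation percentages.
def top_down(total_effort_pm, allocation_pct):
    return {wf: total_effort_pm * pct / 100 for wf, pct in allocation_pct.items()}

# Bottom-up: sum detailed task estimates per workflow.
def bottom_up(task_estimates):
    return {wf: sum(tasks) for wf, tasks in task_estimates.items()}

# Comparison: positive variance means the detailed tasks exceed the budget.
def variance(top, bottom):
    return {wf: bottom.get(wf, 0) - top[wf] for wf in top}

top = top_down(100, {"design": 15, "implementation": 25})
bot = bottom_up({"design": [6, 5, 7], "implementation": [10, 9, 8]})
print(variance(top, bot))  # design: 18-15 = +3, implementation: 27-25 = +2
```

In practice the two perspectives rarely agree on the first pass; iterating until they converge is what produces a believable plan.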

THE ITERATION PLANNING PROCESS

▪ Planning is concerned with defining the actual sequence of intermediate results.


▪ An iteration means a complete synchronization across the project, with a well-
orchestrated global assessment of the entire project baseline.

Inception iterations: the early prototyping activities integrate the foundation components
of a candidate architecture and provide an executable framework for elaborating the critical
use cases of the system.
Elaboration iterations: these iterations result in an architecture, including a complete
framework and infrastructure for execution. Upon completion of the architecture iteration,
a few critical use cases should be demonstrable:
(1) Initializing the architecture
(2) Injecting a scenario to drive the worst-case data processing flow through the system
(3) Injecting a scenario to drive the worst-case control flow through the system (for
example, orchestrating the fault-tolerance use cases).

Construction Iterations: Most projects require at least two major construction


iterations: an alpha release and a beta release.

Transition Iterations: Most projects use a single iteration to transition a beta release into
the final product.

▪ The general guideline is that most projects will use between four and nine iterations. The
typical project would have the following six-iteration profile:

▪ One iteration in inception: an architecture prototype
▪ Two iterations in elaboration: architecture prototype and architecture baseline
▪ Two iterations in construction: alpha and beta releases
▪ One iteration in transition: product release

During the engineering stage, the top-down approach dominates the bottom-up approach.
During the production stage, the bottom-up approach dominates the top-down approach.
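The six-iteration profile above can be written down as data, with a sanity check against the four-to-nine guideline. This is an assumed encoding for illustration, not part of the source notes.

```python
# Typical six-iteration profile: phase -> iterations (each named by its result).
ITERATION_PROFILE = {
    "inception": ["architecture prototype"],
    "elaboration": ["architecture prototype", "architecture baseline"],
    "construction": ["alpha release", "beta release"],
    "transition": ["product release"],
}

total = sum(len(its) for its in ITERATION_PROFILE.values())
assert 4 <= total <= 9, "iteration count outside the recommended range"
print(total)  # 6
```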

PRAGMATIC PLANNING

Even though good planning is more dynamic in an iterative process, doing it accurately
is far easier. While executing iteration N of any phase, the software project manager must be
monitoring and controlling against a plan that was initiated in iteration N-1 and must be
planning iteration N+1. The art of good project management is to make trade-offs in the
current iteration plan and the next iteration plan based on objective results in the current
iteration and previous iterations. Aside from bad architectures and misunderstood
requirements, inadequate planning (and subsequent bad management) is one of the most
common reasons for project failures. Conversely, the success of every successful project can
be attributed in part to good planning.

A project's plan is a definition of how the project requirements will be transformed into
a product within the business constraints. It must be realistic, it must be current, it must be a
team product, it must be understood by the stakeholders, and it must be used. Plans are not
just for managers. The more open and visible the planning process and results, the more
ownership there is among the team members who need to execute it. Bad, closely held plans
cause attrition. Good, open plans can shape cultures and encourage teamwork.

Software lines of business and project teams have different motivations. Software lines of
business are motivated by return on investment, new business discriminators, market
diversification, and profitability. Project teams are motivated by the cost, schedule, and
quality of specific deliverables. Software professionals in both types of organizations are
motivated by career growth, job satisfaction, and the opportunity to make a difference.

LINE -OF-BUSINESS ORGANIZATIONS

The main features of the default organization are as follows:

• Responsibility for process definition and maintenance is specific to a cohesive line of
business.
• Responsibility for process automation is an organizational role and is equal in
importance to the process definition role.
• Organizational roles may be fulfilled by a single individual or by several different
teams, depending on the scale of the organization.

The line-of-business organization consists of four component teams:

Software Engineering Process Authority (SEPA):

❖ Responsible for exchanging information and project guidance to and from the project
practitioners.
❖ Maintains a current assessment of the organization's process maturity.
❖ Helps initiate and periodically assess project processes.
❖ Responsible for process definition and maintenance.

Project Review Authority (PRA):

❖ Responsible for reviewing financial performance, customer commitments, risks and
accomplishments, and adherence to organizational policies.
❖ Reviews both the project's conformance to customer commitments and organizational
policies, as well as deliverables, financial performance, and other risks.

Software Engineering Environment Authority (SEEA):

❖ The SEEA deals with the maintenance of the organization's standard environment,
training projects to use it, and process automation.
❖ Maintains the organization's standard environment.
❖ Trains projects to use the environment.
❖ Maintains organization-wide resource support.


Infrastructure:

❖ An organization's infrastructure provides human resources support, project-independent
research and development, and other capital software engineering assets. The typical
components of the organizational infrastructure are as follows:
❑ Project administration: time accounting system; contracts, pricing, terms and
conditions; corporate information systems integration.
❑ Engineering skill centers: custom tools repository and maintenance, bid and
proposal support, independent research and development.
❑ Professional development: internal training boot camp, personnel recruiting,
personnel skills database maintenance, literature and assets library, technical
publications.

PROJECT ORGANIZATIONS
The default project organization maps project-level roles and responsibilities. This
structure can be tailored to the size and circumstances of the specific project organization.

The main features of the default organization are as follows:

❑ The project management team is an active participant, responsible for producing as
well as managing. Project management is not a spectator sport.
❑ The architecture team is responsible for real artifacts and for the integration of
components, not just for staff functions.
❑ The development team owns the component construction and maintenance activities.
❑ Quality is everyone's job. Each team takes responsibility for a different quality
perspective.
Software Management Team:

❑ This team is an active participant in the organization and is in charge of producing as
well as managing.
❑ As software attributes such as schedules, costs, functionality, and quality are
interrelated, negotiation among multiple stakeholders is required; these negotiations
are carried out by the software management team.

Responsibilities:
❖ Effort planning
❖ Conducting the plan
❖ Adapting the plan according to the changes in requirements and design
❖ Resource management
❖ Stakeholders satisfaction
❖ Risk management
❖ Assignment of personnel
❖ Project controls and scope definition
❖ Quality assurance
Software Architecture Team:

❑ The software architecture team performs the tasks of integrating the components,
creating real artifacts, etc.
❑ It promotes team communications and implements the application with system-wide
quality.
❑ The success of the development team depends on the effectiveness of the architecture
team, which, along with the software management team, controls the inception and
elaboration phases of the life cycle.
❑ The architecture team must have:
❖ Domain experience to generate an acceptable design view and use case view.
❖ Software technology experience to generate acceptable process, component, and
deployment views.

Responsibilities:
❖ System-level quality i.e., performance, reliability and maintainability.

❖ Requirements and design trade-offs.


❖ Component selection
❖ Technical risk solution
❖ Initial integration
Software Development Team:

❑ The development team is involved in the construction and maintenance activities. It
is the most application-specific team. It consists of several subteams assigned to the
groups of components requiring a common skill set.
❑ The skill sets include the following:
❖ Commercial component: specialists with detailed knowledge of commercial
components central to a system's architecture.
❖ Database: specialists with experience in the organization, storage, and retrieval of data.


❖ Graphical user interfaces: specialists with experience in display organization,
data presentation, and user interaction.
❖ Operating systems and networking: specialists with experience in the various control
issues that arise from synchronization, resource sharing, reconfiguration, interobject
communications, name space management, etc.
❖ Domain applications: Specialists with experience in the algorithms, application
processing, or business rules specific to the system.

Responsibilities:
❑ The exposure of the quality issues that affect the customer's expectations.

❑ Metric analysis.
❑ Verifying the requirements.
❑ Independent testing.
❑ Configuration control and user development.
❑ Building project infrastructure.
EVOLUTION OF ORGANIZATIONS

❑ The project organization represents the architecture of the team and needs to evolve
consistent with the project plan captured in the work breakdown structure.
❑ A different set of activities is emphasized in each phase, as follows:

❖ Inception team: An organization focused on planning, with enough support from the
other teams to ensure that the plans represent a consensus of all perspectives.
❖ Elaboration team: An architecture-focused organization in which the driving forces of
the project reside in the software architecture team, supported by the software
development and software assessment teams as necessary to achieve a stable
architecture baseline.
❖ Construction team: A fairly balanced organization in which most of the activity resides
in the software
development and software assessment teams.
❖ Transition team: A customer-focused organization in which usage feedback drives the
deployment activities.

PROCESS AUTOMATION

There are 3 levels of process:

1. Metaprocess: An organization's policies, procedures, and practices for managing a
software-intensive line of business. The automation support for this level is called an
infrastructure. An infrastructure is an inventory of preferred tools, artifact templates,
microprocess guidelines, macroprocess guidelines, a project performance repository, a
database of organizational skill sets, and a library of precedent examples of past project
plans and results.

2. Macroprocess: A project's policies, procedures, and practices for producing a complete
software product within certain cost, schedule, and quality constraints. The automation
support for a project's process is called an environment. An environment is a specific
collection of tools to produce a specific set of artifacts as governed by a specific project
plan.

3. Microprocess: A project team's policies, procedures, and practices for achieving an artifact
of the software process. The automation support for generating an artifact is generally called
a tool. Typical tools include requirements management, visual modeling, compilers, editors,
debuggers, change management, metrics automation, document automation, test automation,
cost estimation, and workflow automation.
Management: Software cost estimation tools and WBS tools are useful for generating the
planning artifacts. For managing against a plan, workflow management tools and a software
project control panel that can maintain an on-line version of the status assessment are
advantageous.
Environment: Configuration management and version control are essential in a modern
iterative development process; change management automation must be supported by the
environment.
Requirements: Conventional approaches decomposed system requirements into subsystem
requirements, subsystem requirements into component requirements, and component
requirements into unit requirements.
The ramifications of this approach on the environment's support for requirements management
are twofold:
1. The recommended requirements approach is dependent on both textual and model-based
representations
2. Traceability between requirements and other artifacts needs to be automated.
Design: The primary support required for the design workflow is visual modeling, which is
used for capturing design models, presenting them in human-readable format, and translating
them into source code. Architecture-first and demonstration-based process is enabled by
existing architecture components and middleware.
Implementation: The implementation workflow relies primarily on a programming
environment (editor, compiler, debugger, and linker, run time) but must also include substantial
integration with the change management tools, visual modeling tools, and test automation tools
to support productive iteration.
Assessment and Deployment: To increase change freedom, testing and document production
must be mostly automated. Defect tracking is another important tool that supports assessment:
It provides the change management instrumentation necessary to automate metrics and control
release baselines. It is also needed to support the deployment workflow throughout the life
cycle.

THE PROJECT ENVIRONMENT


The project environment artifacts evolve through three discrete states:

1. The prototyping environment includes an architecture tested for prototyping project


architectures to evaluate trade- offs during the inception and elaboration phases of the life
cycle. It should be capable of supporting the following activities:

– technical risk analyses


– feasibility studies for commercial products
– Fault tolerance/dynamic reconfiguration trade-offs
– Analysis of the risks associated with the implementation
– Development of test scenarios, tools, and instrumentation suitable for analyzing
the requirements.
2. The development environment should include a full suite of development tools needed to
support the various process workflows and to support round-trip engineering to the
maximum extent possible.

3. The maintenance environment may be a subset of the development environment delivered
as one of the project's end products.

Four environment disciplines are critical to the management context and to the success of a
modern iterative development process:

– Tools must be integrated to maintain consistency and traceability. Round-trip
engineering is the term used to describe this key requirement for environments that
support iterative development.

– Change management must be automated and enforced to manage multiple iterations
and to enable change freedom. Change is the fundamental primitive of iterative
development.

– Organizational infrastructures: A common infrastructure promotes interproject
consistency, reuse of training, reuse of lessons learned, and other strategic improvements
to the organization's metaprocess.

– Extending automation support for stakeholder environments enables further support
for paperless exchange of information and more effective review of engineering
artifacts.
Round-Trip Engineering
• Round-trip engineering is the environment support necessary to maintain
consistency among the engineering artifacts.
• The primary reason for round-trip engineering is to allow freedom in changing software
engineering data sources.

Change Management

• Change management is as critical to iterative processes as planning.


• Tracking changes in the technical artifacts is crucial to understanding the true technical
progress trends and quality trends toward delivering an acceptable end product or
interim release.
• In a modern process-in which requirements, design, and implementation set artifacts
are captured in rigorous notations early in the life cycle and are evolved through
multiple generations-change management has become fundamental to all phases and
almost all activities.

Software Change Orders (SCO)

• The atomic unit of software work that is authorized to create, modify, or


obsolesce components within a configuration baseline is called a software
change order (SCO).
• Software change orders are a key mechanism for partitioning, allocating, and scheduling
software work against
an established software baseline and for assessing progress and quality.

The basic fields of the SCO are title, description, metrics, resolution, assessment and
disposition.
a) Title: The title is suggested by the originator and is finalized upon acceptance by the
configuration control board (CCB).
b) Description: The problem description includes the name of the originator, date of
origination, CCB-assigned SCO identifier, and relevant version identifiers of related support
software.
c) Metrics: The metrics collected for each SCO are important for planning, for scheduling, and
for assessing quality improvement. Change categories are type 0 (critical bug), type 1 (bug),
type 2 (enhancement), type 3 (new feature), and type 4 (other).
d) Resolution: This field includes the name of the person responsible for implementing the
change, the components changed, the actual metrics, and a description of the change.
e) Assessment: This field describes the assessment technique as either inspection, analysis,
demonstration, or test. Where applicable, it should also reference all existing test cases and
new test cases executed, and it should identify all different test configurations, such as
platforms, topologies, and compilers.
f) Disposition: The SCO is assigned one of the following states by the CCB:
• Proposed: written, pending CCB review
• Accepted: CCB-approved for resolution
• Rejected: closed, with rationale, such as not a problem, duplicate, obsolete change,
resolved by another SCO
• Archived: accepted but postponed until a later release
• In progress: assigned and actively being resolved by the development organization
• In assessment: resolved by the development organization; being assessed by a test
organization
• Closed: completely resolved, with the concurrence of all CCB members
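The disposition workflow above can be sketched as a simple state machine. This is a hypothetical illustration: the state names come from the notes, but the transition table is an assumption based on the CCB workflow described below.

```python
# Hypothetical sketch: SCO disposition states as a state machine.
# State names follow the notes; the allowed transitions are an assumption
# (proposed -> accepted/archived/rejected, then accepted -> in progress
# -> in assessment -> closed).
ALLOWED = {
    "proposed":      {"accepted", "archived", "rejected"},
    "accepted":      {"in progress"},
    "archived":      {"accepted"},               # postponed until a later release
    "in progress":   {"in assessment"},
    "in assessment": {"closed", "in progress"},  # reopened if not fully resolved
    "rejected":      set(),                      # closed, with rationale
    "closed":        set(),                      # resolved with CCB concurrence
}

class SCO:
    """A software change order tracked through its disposition states."""
    def __init__(self, title):
        self.title = title
        self.state = "proposed"   # written, pending CCB review

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

sco = SCO("Type 1 defect in the display component")
for state in ("accepted", "in progress", "in assessment", "closed"):
    sco.transition(state)
print(sco.state)  # closed
```

Modeling the states explicitly makes it easy to reject illegal shortcuts, such as closing a proposed SCO without CCB acceptance and assessment.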
Configuration Baseline
A configuration baseline is a named collection of software components and supporting
documentation that is subject to change management and is upgraded, maintained, tested,
statused and obsolesced as a unit.
There are generally two classes of baselines:
1. external product releases and
2. internal testing releases.
A configuration baseline is a named collection of components that is treated as a unit. It is
controlled formally because it is a packaged exchange between groups. A project may release
a configuration baseline to the user community for beta testing. Once software is placed in a
controlled baseline, all changes are tracked. A distinction must be made for the cause of a
change. Change categories are as follows:
– Type 0: Critical failures, which are defects that are nearly always fixed before any
external release.
– Type 1: A bug or defect that either does not impair the usefulness of the system or can
be worked around.
– Type 2: A change that is an enhancement rather than a response to a defect.
– Type 3: A change that is necessitated by an update to the requirements.
– Type 4: Changes that are not accommodated by the other categories.
Configuration Control Board (CCB)

• A CCB is a team of people that functions as the decision authority on the content of
configuration baselines.
• A CCB usually includes the software manager, software architecture manager,
software development manager, software assessment manager and other stakeholders
(customer, software project manager, systems engineer, user) who are integral to the
maintenance of a controlled software delivery system.
• The [bracketed] words constitute the state of an SCO transitioning through the process.
• [Proposed]: A proposed change is drafted and submitted to the CCB. The proposed
change must include a technical description of the problem and an estimate of the
resolution effort.
• [Accepted, archived or rejected]: The CCB assigns a unique identifier and accepts,
archives, or rejects each proposed change. Acceptance includes the change for
resolution in the next release; archiving accepts the change but postpones it for
resolution in a future release; and rejection judges the change to be without merit,
redundant with other proposed changes, or out of scope.
• [In progress]: The responsible person analyzes, implements and tests a solution to
satisfy the SCO. This task includes updating documentation, release notes and SCO
metrics actuals, and submitting new SCOs.
• [In assessment]: The independent test team assesses whether the SCO is completely
resolved. When the test team deems the change to be satisfactorily resolved, the SCO is
submitted to the CCB for final disposition and closure.
• [Closed]: When the development organization, independent test organization and
CCB concur that the SCO is resolved, it is transitioned to a closed status.

Infrastructures
An organization's infrastructure provides its capital assets, including two key
artifacts:
a) a policy that captures the standards for project software development processes, and
b) an environment that captures an inventory of tools.

Organization Policy
• The organization policy is usually packaged as a handbook that defines the life cycle and
the process primitives (major milestones, intermediate artifacts, engineering repositories,
metrics, roles and responsibilities). The handbook provides a general framework for
answering the following questions:
– What gets done? (activities and artifacts)
– When does it get done? (mapping to the life-cycle phases and milestones)
– Who does it? (team roles and responsibilities)
– How do we know that it is adequate? (checkpoints, metrics and standards of
performance)
Organization Environment
Some typical components of an organization's automation building blocks are as
follows:
• Standardized tool selections, which promote common workflows and a higher ROI on
training.
• Standard notations for artifacts, such as UML for all design models, or Ada 95
for all custom-developed, reliability-critical implementation artifacts.
• Tool adjuncts such as existing artifact templates (architecture description,
evaluation criteria, release descriptions, status assessment) or customizations.
• Activity templates (iteration planning, major milestone activities, configuration control
boards).

Stakeholder Environments
• An on-line environment accessible by the external stakeholders allows them to
participate in the process as follows:
– Accept and use executable increments for hands-on evaluation.
– Use the same on-line tools, data and reports that the software development
organization uses to manage and monitor the project.
– Avoid excessive travel, paper interchange delays, format translations, paper and
shipping costs and other overhead costs.
• There are several important reasons for extending development environment
resources into certain stakeholder domains:
– Technical artifacts are not just paper.
– Reviews and inspections, breakage/rework assessments, metrics analyses and
even beta testing can be performed independently of the development team.
– Even paper documents should be delivered electronically to reduce production costs
and turnaround time.
SPM UNIT-V

Project Control and Process Instrumentation: The Seven Core Metrics, Management
Indicators, Quality Indicators, Life-Cycle Expectations, Pragmatic Software Metrics,
Metrics Automation.

Tailoring the Process: Process Discriminants.

Future Software Project Management: Modern Project Profiles, Next-Generation Software
Economics, Modern Process Transitions.

Case Study: The Command Center Processing and Display System-Replacement (CCPDS-R).

The primary themes of a modern software development process tackle the central
management issues of complex software:

• Getting the design right by focusing on the architecture first


• Managing risk through iterative development
• Reducing the complexity with component based techniques
• Making software progress and quality tangible through instrumented change management
• Automating the overhead and bookkeeping activities through the use of round-trip
engineering and integrated environments

The goals of software metrics are to provide the development team and the management team
with the following:

• An accurate assessment of progress to date


• Insight into the quality of the evolving software product
• A basis for estimating the cost and schedule for completing the product with increasing
accuracy over time.

THE SEVEN CORE METRICS

Seven core metrics are used in all software projects. Three are management indicators and four
are quality indicators.
a) Management Indicators
▪ Work and progress (work performed over time)
▪ Budgeted cost and expenditures (cost incurred over time)
▪ Staffing and team dynamics (personnel changes over time)
b) Quality Indicators
▪ Change traffic and stability (change traffic over time)
▪ Breakage and modularity (average breakage per change over time)
▪ Rework and adaptability (average rework per change over time)
▪ Mean time between failures (MTBF) and maturity (defect rate over time)
The seven core metrics are based on common sense and field experience with both successful
and unsuccessful metrics programs. Their attributes include the following:

▪ They are simple, objective, easy to collect, easy to interpret and hard to misinterpret.
▪ Collection can be automated and non-intrusive.
▪ They provide for consistent assessment throughout the life cycle and are derived
from the evolving product baselines rather than from a subjective assessment.
▪ They are useful to both management and engineering personnel for communicating
progress and quality in a consistent format.

▪ They improve fidelity across the life cycle.

MANAGEMENT INDICATORS

There are three fundamental sets of management metrics: technical progress, financial status,
and staffing progress. By examining these perspectives, management can generally assess whether
a project is on budget and on schedule. The management indicators recommended here include
standard financial status based on an earned value system, objective technical progress metrics
tailored to the primary measurement criteria for each major team of the organization, and
staffing metrics that provide insight into team dynamics.

Work and Progress

The various activities of an iterative development project can be measured by defining a
planned estimate of the work in an objective measure, then tracking progress (work completed
over time) against that plan. The default perspectives of this metric are as follows:
▪ Software architecture team: use cases demonstrated

▪ Software development team: SLOC under baseline change management, SCOs closed.
▪ Software assessment team: SCOs opened, test hours executed, evaluation criteria met
▪ Software management team: milestones completed
Budgeted Cost and Expenditures
To maintain management control, measuring cost expenditures over the project life cycle is
always necessary. One common approach to financial performance measurement is use of an
earned value system, which provides highly detailed cost and schedule insight.
Modern software processes are amenable to financial performance measurement through an
earned value approach. The basic parameters of an earned value system, usually expressed in
units of dollars, are as follows:
▪ Expenditure plan: the planned spending profile for a project over its planned schedule.
For most software projects (and other labor-intensive projects), this profile generally
tracks the staffing profile.
▪ Actual Progress: the technical accomplishment relative to the planned progress
underlying the spending profile. In a healthy project, the actual progress tracks
planned progress closely.
▪ Actual Cost: the actual spending profile for a project over its actual schedule. In a
healthy project, this profile
tracks the planned profile closely.
▪ Earned Value: the value that represents the planned cost of the actual progress.

▪ Cost variance: the difference between the actual cost and the earned value. Positive
values correspond to over-budget situations; negative values correspond to
under-budget situations.
▪ Schedule variance: the difference between the planned cost and the earned value.
Positive values correspond to behind-schedule situations; negative values correspond
to ahead-of-schedule situations.
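As a minimal sketch of these definitions (assuming dollar units and the sign conventions stated above, with hypothetical figures), the two variances can be computed as:

```python
# Earned value variances, following the sign conventions in these notes:
#   positive cost variance     -> over budget
#   positive schedule variance -> behind schedule
def cost_variance(actual_cost, earned_value):
    return actual_cost - earned_value

def schedule_variance(planned_cost, earned_value):
    return planned_cost - earned_value

# Hypothetical figures, in dollars:
planned_cost, actual_cost, earned_value = 100_000, 95_000, 80_000
print(cost_variance(actual_cost, earned_value))      # 15000: over budget
print(schedule_variance(planned_cost, earned_value)) # 20000: behind schedule
```

Here the project has spent $95,000 but earned only $80,000 of planned value, so it is both over budget and behind schedule.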
Staffing and Team Dynamics

An iterative development should start with a small team until the risks in the requirements
and architecture have been suitably resolved. Depending on the overlap of iterations and
other project-specific circumstances, staffing can vary. For discrete, one-of-a-kind
development efforts (such as building a corporate information system), the staffing profile
would be typical. It is reasonable to expect the maintenance team to be smaller than the
development team for these sorts of developments. For a commercial product development, the
sizes of the maintenance and development teams may be the same.

QUALITY INDICATORS

The four quality indicators are based primarily on the measurement of software change
across evolving baselines of engineering data (such as design models and source code).

Change Traffic and Stability

Overall change traffic is one specific indicator of progress and quality.
Change traffic is defined as the number of software change orders opened and closed over the
life cycle. This metric can be collected by change type, by release, across all releases, by
team, by component, by subsystem, and so forth. Stability is defined as the relationship
between opened and closed SCOs.
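A sketch of how these two measures could be derived from SCO records follows; the record format here is a hypothetical illustration, not a prescribed schema.

```python
# Change traffic: SCOs opened and closed over the life cycle.
# Stability: relationship between opened and closed SCOs (here,
# the open backlog). Records are hypothetical.
scos = [
    {"id": 1, "type": 1, "opened": "2024-01", "closed": "2024-02"},
    {"id": 2, "type": 2, "opened": "2024-01", "closed": None},
    {"id": 3, "type": 0, "opened": "2024-02", "closed": "2024-02"},
    {"id": 4, "type": 1, "opened": "2024-02", "closed": None},
]

opened = len(scos)
closed = sum(1 for s in scos if s["closed"] is not None)
backlog = opened - closed   # a flattening backlog suggests a stabilizing product
print(opened, closed, backlog)  # 4 2 2
```

Plotting opened versus closed counts per month gives the stability trend; filtering the list by `type`, release, or component gives the other collection perspectives mentioned above.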
Breakage and Modularity

Breakage is defined as the average extent of change, which is the amount of software
baseline that needs rework (in SLOC, function points, components, subsystems, files, etc.).
Modularity is the average breakage trend over time. For a healthy project, the trend
expectation is decreasing or stable.

Rework and Adaptability

Rework is defined as the average cost of change, which is the effort to analyze, resolve
and retest all changes to software baselines.
Adaptability is defined as the rework trend over time. For a healthy project, the trend
expectation is decreasing or stable.
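These two averages can be sketched from per-SCO metrics; the units chosen here (SLOC reworked for breakage, staff-hours for rework) and the records themselves are hypothetical.

```python
# Breakage: average extent of change (amount of baseline reworked).
# Rework: average cost of change (effort to analyze, resolve, retest).
changes = [
    {"sloc_reworked": 120, "effort_hours": 8},
    {"sloc_reworked": 40,  "effort_hours": 3},
    {"sloc_reworked": 200, "effort_hours": 13},
]

breakage = sum(c["sloc_reworked"] for c in changes) / len(changes)
rework = sum(c["effort_hours"] for c in changes) / len(changes)
print(breakage, rework)  # 120.0 8.0
```

Computing these averages per release and plotting them over time yields the modularity and adaptability trends, which should be flat or decreasing on a healthy project.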
MTBF and Maturity

MTBF (mean time between failures) is the average usage time between software faults. In
rough terms, MTBF is computed by dividing the test hours by the number of type 0 and
type 1 SCOs.
Maturity is defined as the MTBF trend over time.
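The rough computation described above can be sketched as follows; the figures are hypothetical.

```python
# MTBF in rough terms: test hours divided by the number of
# type 0 (critical) and type 1 (bug) SCOs observed during that time.
def mtbf(test_hours, sco_counts_by_type):
    failures = sco_counts_by_type.get(0, 0) + sco_counts_by_type.get(1, 0)
    return test_hours / failures if failures else float("inf")

print(mtbf(500, {0: 2, 1: 8, 2: 15, 3: 4}))  # 50.0 hours between failures
```

Note that only type 0 and type 1 changes count as failures; enhancements (type 2) and new features (type 3) do not affect MTBF. A rising MTBF across releases is the maturity trend.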
LIFE CYCLE EXPECTATIONS
There is no mathematical or formal derivation for using the seven core metrics. However, there
were specific reasons for selecting them:
▪ The quality indicators are derived from the evolving product rather than from the
artifacts.
▪ They provide insight into the waste generated by the process. Scrap and
rework metrics are a standard measurement perspective of most manufacturing
processes.
▪ They recognize the inherently dynamic nature of an iterative development process.
Rather than focus on the value, they explicitly concentrate on the trends or changes
with respect to time.
▪ The combination of insight from the current value and the current trend provides
tangible indicators for management action.


PRAGMATIC SOFTWARE METRICS

Measuring is useful, but it doesn't do any thinking for the decision makers. It only provides
data to help them ask the right questions, understand the context, and make objective
decisions.
The basic characteristics of a good metric are as follows:
1. It is considered meaningful by the customer, manager and performer. Customers come to
software engineering providers because the providers are more expert than they are at
developing and managing software. Customers will accept metrics that are demonstrated
to be meaningful to the developer.

2. It demonstrates quantifiable correlation between process perturbations and business
performance. The only real organizational goals and objectives are financial: cost
reduction, revenue increase and margin increase.

3. It is objective and unambiguously defined: Objectivity should translate into some form of
numeric representation (such as numbers, percentages, ratios) as opposed to textual
representations (such as excellent, good, fair, poor). Ambiguity is minimized through well
understood units of measurement (such as staff-month, SLOC, change, function point, class,
scenario, requirement), which are surprisingly hard to define precisely in the software
engineering world.
4. It displays trends: This is an important characteristic. Understanding the change in a
metric's value with respect to time matters in today's iterative development models. It is
very rare that a given metric drives the appropriate action directly.
5. It is a natural by-product of the process: The metric does not introduce new artifacts
or overhead activities; it is derived directly from the mainstream engineering and
management workflows.

6. It is supported by automation: Experience has demonstrated that the most successful


metrics are those that are collected and reported by automated tools, in part because
software tools require rigorous definitions of the data they process.
METRICS AUTOMATION

There are many opportunities to automate the project control activities of a software project.
For managing against a plan, a software project control panel (SPCP) that maintains an
on-line version of the status of evolving artifacts provides a key advantage.
To implement a complete SPCP, it is necessary to define and develop the following:
▪ Metrics primitives: indicators, trends, comparisons, and progressions.

▪ A graphical user interface: GUI support for a software project manager role and flexibility
to support other roles
▪ Metric collection agents: data extraction from the environment tools that maintain
the engineering notations for the various artifact sets.
▪ Metrics data management server: data management support for populating the
metric displays of the GUI and storing the data extracted by the agents.
▪ Metrics definitions: actual metrics presentations for requirements progress (extracted from
requirements set
artifacts), design progress (extracted from design set artifacts), implementation progress
(extracted from implementation set artifacts), assessment progress (extracted from
deployment set artifacts), and other progress dimensions (extracted from manual sources,
financial management systems, management artifacts, etc.)
▪ Actors: typically, the monitor and the administrator

Specific monitors (called roles) include software project managers, software development
team leads, software architects, and customers.
▪ Monitor: defines panel layouts from existing mechanisms, graphical objects, and
linkages to project data; queries data to be displayed at different levels of abstraction
▪ Administrator: installs the system; defines new mechanisms, graphical objects, and
linkages; handles archiving functions; defines composition and decomposition structures
for displaying multiple levels of abstraction.
In this case, the software project manager role has defined a top-level display with four
graphical objects.
1. Project activity status: The graphical object in the upper left provides an overview of the
status of the top-level WBS elements. The seven elements could be coded red, yellow
and green to reflect the current earned value status. (In the figure they are coded with
white and shades of gray.) For example, green would represent ahead of plan, yellow
would indicate within 10% of plan, and red would identify elements that have a greater
than 10% cost or schedule variance. This graphical object provides several examples of
indicators: colors, the actual percentage, and the current first derivative (up arrow
means getting better, down arrow means getting worse).
2. Technical artifact status: the graphical object in the upper right provides an overview of
the status of the evolving technical artifacts. The Req light would display an assessment
of the current state of the use case models and requirements specifications. The Des light
would do the same for the design models, the Imp light for the source code baseline and
the Dep light for the test program.
3. Milestone progress: The graphical object in the lower left provides a progress
assessment of the achievement of milestones against plan and provides indicators of
the current values.
4. Action item progress: The graphical object in the lower right provides a different
perspective of progress, showing the current number of open and closed issues.
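The red/yellow/green coding described for the project activity status object can be sketched as a simple threshold function. The thresholds follow the text; the function name and the decision to take the worse of the two variances are assumptions.

```python
# Status coding for a WBS element: green = ahead of (or on) plan,
# yellow = within 10% of plan, red = greater than 10% cost or
# schedule variance. Positive variance = over budget / behind schedule.
def status_color(cost_variance_pct, schedule_variance_pct):
    worst = max(cost_variance_pct, schedule_variance_pct)
    if worst <= 0:
        return "green"
    if worst <= 10:
        return "yellow"
    return "red"

print(status_color(-2, -5))  # green
print(status_color(4, 8))    # yellow
print(status_color(12, 3))   # red
```

A panel would evaluate this function for each of the seven top-level WBS elements and render the result, together with the percentage and its first derivative (the up/down arrow).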
The following top-level use case, which describes the basic operational concept of an SPCP,
corresponds to a monitor interacting with the control panel:
▪ Start the SPCP. The SPCP starts and shows the most current information that was saved
when the user last used the SPCP.
▪ Select a panel preference. The user selects from a list of previously defined default
panel preferences. The SPCP displays the preference selected.
▪ Select a value or graph metric. The user selects whether the metric should be displayed
for a given point in time or in a graph, as a trend. The default for trends is monthly.
▪ Select to superimpose controls. The user points to a graphical object and requests that
the control values for that metric and point in time be displayed.
▪ Drill down to trend. The user points to a graphical object displaying a point in time
and drills down to view the trend for the metric.
▪ Drill down to point in time. The user points to a graphical object displaying a trend
and drills down to view the values for the metric.
▪ Drill down to lower levels of information. The user points to a graphical object displaying
a point in time and
drills down to view the next level of information.
▪ Drill down to lower level of indicators. The user points to a graphical object
displaying an indicator and drills down to view the breakdown of the next level of
indicators.
PROCESS DISCRIMINANTS

In tailoring the management process to a specific domain or project, there are two
dimensions of discriminating factors: technical complexity and management complexity.

The figure illustrates these two dimensions of process variability and shows some example
project applications. The formality of reviews, the quality control of artifacts, the
priorities of concerns and numerous other process instantiation parameters are governed by
the point a project occupies in these two dimensions. The figure summarizes the different
priorities along the two dimensions.

Scale
▪ There are many ways to measure scale, including number of source lines of code,
number of function points, number of use cases, and number of dollars. From a process
tailoring perspective, the primary measure of scale is the size of the team. As the
headcount increases, the importance of consistent interpersonal communications
becomes paramount. Otherwise, the diseconomies of scale can have a serious impact on
achievement of the project objectives.
▪ Consider a team of 1 (trivial), a team of 5 (small), a team of 25 (moderate), a team of 125 (large), a team of 625 (huge), and so on. As team size grows, a new level of personnel management is introduced at roughly each factor of 5. This model can be used to describe some of the process differences among projects of different sizes.
▪ Trivial-sized projects require almost no management overhead (planning, communication, coordination, progress assessment, review, administration).
▪ Small projects (5 people) require very little management overhead, but team leadership toward a common objective is crucial. There is some need to communicate the intermediate artifacts among team members.
▪ Moderate-sized projects (25 people) require moderate management overhead, including a dedicated software project manager to synchronize team workflows and balance resources.
▪ Large projects (125 people) require substantial management overhead, including a dedicated software project manager and several subproject managers to synchronize project-level and subproject-level workflows and to balance resources. Project performance is dependent on average people, for two reasons:
a) There are numerous mundane jobs in any large project, especially in the overhead workflows.
b) The probability of recruiting, maintaining, and retaining a large number of exceptional people is small.
▪ Huge projects (625 people) require substantial management overhead, including multiple software project managers and many subproject managers to synchronize project-level and subproject-level workflows and to balance resources.
Stakeholder Cohesion or Contention
The degree of cooperation and coordination among stakeholders (buyers, developers, users, subcontractors, and maintainers, among others) can significantly drive the specifics of how a process is defined. This process parameter can range from cohesive to adversarial. Cohesive teams have common goals, complementary skills, and close communications. Adversarial teams have conflicting goals, competing or incomplete skills, and less-than-open communications.
Process Flexibility or Rigor
The degree of rigor, formality, and change freedom inherent in a specific project's "contract" (vision document, business case, and development plan) will have a substantial impact on the implementation of the project's process. For very loose contracts, such as building a commercial product within a business unit of a software company (for example, a Microsoft application or a Rational Software Corporation development tool), management complexity is minimal. In these sorts of development processes, feature set, time to market, budget, and quality can all be freely traded off and changed with very little overhead.
Process Maturity
The process maturity level of the development organization, as defined by the Software Engineering Institute's Capability Maturity Model, is another key driver of management complexity. Managing a mature process (level 3 or higher) is far simpler than managing an immature process (levels 1 and 2). Organizations with a mature process typically have a high level of precedent experience in developing software and a high level of existing process collateral that enables predictable planning and execution of the process. Tailoring a mature organization's process for a specific project is generally a straightforward task.
Architectural Risk
The degree of technical feasibility demonstrated before commitment to full-scale production is an important dimension of defining a specific project's process. There are many sources of architectural risk. Some of the most important and recurring sources are system performance (resource utilization, response time, throughput, accuracy), robustness to change (addition of new features, incorporation of new technology, adaptation to dynamic operational conditions), and system reliability (predictable behavior, fault tolerance). The degree to which these risks can be eliminated before construction begins can have dramatic ramifications in the process tailoring.
Domain Experience
The development organization's domain experience governs its ability to converge on an acceptable architecture in a minimum number of iterations. An organization that has built five generations of radar control switches may be able to converge on an adequate baseline architecture for a new radar application in two or three prototype release iterations. A skilled software organization building its first radar application may require four or five prototype releases before converging on an adequate baseline.
EXAMPLE: SMALL-SCALE PROJECT VERSUS LARGE-SCALE PROJECT
▪ An analysis of the differences between the phases, workflows, and artifacts of two projects on opposite ends of the management complexity spectrum shows how different two software project processes can be. Table 14-7 illustrates the differences in schedule distribution for large and small projects across the life-cycle phases. A small commercial project (for example, a 50,000 source-line Visual Basic Windows application, built by a team of five) may require only 1 month of inception, 2 months of elaboration, 5 months of construction, and 2 months of transition. A large, complex project (for example, a 300,000 source-line embedded avionics program, built by a team of 40) could require 8 months of inception, 14 months of elaboration, 20 months of construction, and 8 months of transition. Comparing the ratios of the life cycle spent in each phase highlights the obvious differences.
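Those phase ratios can be checked with a short script; the durations are taken from the two example projects above, and the script itself is only illustrative:

```python
# Phase durations in months for the two example projects in the text.
phases = ["inception", "elaboration", "construction", "transition"]
small = dict(zip(phases, [1, 2, 5, 2]))    # 10-month commercial project
large = dict(zip(phases, [8, 14, 20, 8]))  # 50-month avionics project

for name in phases:
    s_pct = 100 * small[name] / sum(small.values())
    l_pct = 100 * large[name] / sum(large.values())
    print(f"{name:>12}: small {s_pct:3.0f}%   large {l_pct:3.0f}%")
```

The large project spends 44% of its schedule in inception plus elaboration versus 30% for the small one, reflecting the heavier up-front engineering emphasis.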
▪ One key aspect of the differences between the two projects is the leverage of the
various process components in the success or failure of the project. This reflects the
importance of staffing or the level of associated risk management.
The following list elaborates some of the key differences in discriminators of success.
▪ Design is key in both domains. Good design of a commercial product is a key differentiator in the marketplace and is the foundation for efficient new product releases. Good design of a large, complex project is the foundation for predictable, cost-efficient construction.
▪ Management is paramount in large projects, where the consequences of planning errors, resource allocation errors, inconsistent stakeholder expectations, and other out-of-balance factors can have catastrophic consequences for the overall team dynamics. Management is far less important in a small team, where opportunities for miscommunication are fewer and their consequences less significant.
▪ Deployment plays a far greater role for a small commercial product because there is a broad user base of diverse individuals and environments.
Future Software Project Management:

⮚ Modern Project Profiles
⮚ Next Generation Software Economics
⮚ Modern Process Transitions
MODERN PROJECT PROFILES

Continuous Integration
In the iterative development process, the overall architecture of the project is created first, and then all the integration steps are evaluated to identify and eliminate design errors. This approach eliminates problems such as downstream integration, late patches, and shoe-horned software fixes by implementing sequential or continuous integration rather than large-scale integration at project completion.
▪ Moreover, it produces a feasible and manageable design by forcing the 'design breakage' into the engineering phase, where it can be efficiently resolved. This can be done by making use of project demonstrations, which force integration into the design phase.
▪ With the help of this continuous integration incorporated in the iterative
development process, the quality tradeoffs are better understood and the system
features such as system performance, fault tolerance and maintainability are clearly
visible even before the completion of the project.
▪ In the modern project profile, the distribution of cost among the various workflows of a project is completely different from that of the traditional project profile, as shown below:
Software Engineering    Conventional Process    Modern Process
Workflow                Expenditures            Expenditures
Management              5%                      10%
Environment             5%                      10%
Requirements            5%                      10%
Design                  10%                     15%
Implementation          30%                     25%
Assessment              40%                     25%
Deployment              5%                      5%
As shown in the table, modern projects spend only 25% of their budget on integration and assessment activities, whereas traditional projects spend almost 40% of their total budget on these activities. This is because traditional projects involve inefficient large-scale integration and late identification of design issues.
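The shift in budget share per workflow can be tabulated directly from the figures above (the script is purely illustrative):

```python
# Budget share per workflow (percent of total), from the expenditure table above.
conventional = {"management": 5, "environment": 5, "requirements": 5,
                "design": 10, "implementation": 30, "assessment": 40,
                "deployment": 5}
modern = {"management": 10, "environment": 10, "requirements": 10,
          "design": 15, "implementation": 25, "assessment": 25,
          "deployment": 5}

assert sum(conventional.values()) == sum(modern.values()) == 100

for workflow in conventional:
    shift = modern[workflow] - conventional[workflow]
    print(f"{workflow:>14}: {conventional[workflow]:3d}% -> {modern[workflow]:3d}% ({shift:+d} points)")
```

Assessment drops 15 points, while each of the up-front workflows (management, environment, requirements, design) gains 5 points.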
Early Risk Resolution
▪ In the project development life cycle, the engineering phase concentrates on identifying and eliminating the risks associated with resource commitments before the production stage. Traditional projects solve the simpler problems first and then move to the complicated ones, so early progress looks visibly good; modern projects, in contrast, focus first on the 20% of significant requirements, use cases, components, and risks, and hence only occasionally tackle the simpler steps early.
▪ To obtain a useful perspective on risk management, the 80:20 principles of software management should be applied across the project life cycle. They are as follows:
▪ 80% of the engineering is consumed by 20% of the requirements.
Before committing any resources, try to understand the requirements completely, because irrelevant resource selection (i.e., resources selected based on prediction) may cause severe problems.
▪ 80% of the software cost is consumed by 20% of the components.
Elaborate the cost-critical components first, which forces the project to focus on controlling cost.
▪ 80% of the bugs are caused by 20% of the components.
Elaborate the reliability-critical components first, which gives sufficient time for assessment activities such as integration and testing to achieve the desired level of maturity.
▪ 80% of the software scrap and rework is caused by 20% of the changes.
Elaborate the change-critical components first, so that the changes with the greatest impact occur when the project is mature.
▪ 80% of the resource consumption is due to 20% of the components.
Elaborate the performance-critical components first, so that trade-offs among reliability, changeability, and cost consumption can be resolved as early as possible.
▪ 80% of the project progress is made by 20% of the people.
The planning and design teams should consist of the best professionals, because the overall success of the project depends on a good plan and architecture.
Teamwork among Stakeholders
▪ Many characteristics of the classic development process worsen stakeholder relationships, which in turn makes balancing requirements, product attributes, and plans difficult. An iterative process with a good relationship among the stakeholders focuses on shared objective understanding by each and every stakeholder.
▪ This process needs highly skilled customers, users, and monitors who have experience in both the application domain and software. Moreover, it requires an organization that focuses on producing a quality product and achieving customer satisfaction.
▪ The table below shows the tangible results of major milestones in a modern process.
▪ From the table, it can be observed that progress of the project is not possible unless all the demonstration objectives are satisfied. This does not prevent renegotiation of objectives, since the demonstration results permit further trade-offs among the requirements, design, plans, and technology.
▪ A modern iterative process that relies on demonstration results needs all its stakeholders to be well educated, with good analytical ability, so as to distinguish apparently negative results from real, visible progress. For example, a design error detected early can be treated as positive progress rather than a major issue.
Principles of Software Management (Top 10)
▪ Software management basically relies on the following principles:
1. The process must be based on an architecture-first approach
If the architecture is addressed at the initial stage, there will be a good foundation for the 20% of significant elements responsible for the overall success of the project: the driving requirements, components, use cases, risks, and errors. In other words, if the components involved in the architecture are well known, the expenditure caused by scrap and rework will be comparatively low.
2. Develop an iterative life-cycle process that identifies risks at an early stage
An iterative process supports a dynamic planning framework that facilitates risk management and predictable performance. Moreover, if risks are resolved earlier, predictability improves and scrap and rework expenses are reduced.
3. Transition the design methods to emphasize component-based development
The quantity of human-generated source code and custom development can be reduced by concentrating on components rather than individual lines of code. The complexity of software is directly proportional to the number of artifacts it contains; that is, the smaller the solution, the less complexity there is to manage.
4. Create a change management environment
Highly controlled baselines are needed to manage the changes caused by various teams working concurrently on shared artifacts.
5. Improve change freedom with automated tools that support round-trip engineering
Round-trip engineering is an environment that enables automation and synchronization of engineering information across various formats. The engineering information usually consists of requirements specifications, source code, design models, test cases, and executable code. Automating this information allows the teams to focus more on engineering and less on the overhead involved.
6. Design artifacts must be captured in model-based notation
Design artifacts modeled in a model-based notation such as UML are rich in graphics and text. These modeled artifacts facilitate the following tasks:
▪ Complexity control
▪ Objective fulfillment
▪ Performing automated analysis
7. The process must be instrumented for objective quality control and progress estimation
Life-cycle progress and the quality of intermediate products must be estimated and incorporated into the process. This can be done with the help of well-defined estimation mechanisms derived directly from the evolving artifacts. These mechanisms provide detailed information about trends and correlation with requirements.
8. Implement a demonstration-based approach for assessment of intermediate artifacts
This approach involves giving demonstrations of different scenarios. It facilitates early integration and a better understanding of design trade-offs. Moreover, it eliminates architectural defects earlier in the life cycle. The intermediate results of this approach are definitive.
9. Plan increments and generations based on evolving levels of detail
Here, 'levels of detail' refers to the level of understanding of the requirements and architecture. The requirements, iteration content, implementations, and acceptance testing can be organized using cohesive usage scenarios.
10. Develop a configurable process that is economically scalable
The process framework applied must be suitable for a wide variety of applications. The process must make use of a common process spirit, automation, architectural patterns, and components such that it is economical and yields investment benefits.
NEXT GENERATION SOFTWARE ECONOMICS
Next Generation Software Cost Models
▪ In comparison to current-generation software cost models, next-generation cost models should estimate architecture engineering and application production separately. The cost of designing, building, testing, and maintaining the architecture is defined in terms of scale, quality, process, technology, and the team employed.
▪ After a stable architecture is obtained, the cost of production is an exponential function of the size, quality, and complexity involved.

▪ The architecture stage cost model should reflect certain diseconomy of scale (exponent

less than 1.0) because it is based on research and development-oriented concerns. Whereas
the production stage cost model should reflect economy of scale (exponent less than 1.0)
for production of commodities.
▪ Next-generation software cost models should be designed so that they can assess larger architectures with an economy of scale. Thus, the process exponent will be less than 1.0 at the time of production, because large systems have more automated process components and architectures that are easily reusable.
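A hedged sketch of such a two-stage model follows; the coefficients and exponents are illustrative assumptions, and only the regimes (an exponent above 1.0 for the architecture stage, below 1.0 for the production stage) come from the text:

```python
def stage_effort(size: float, coefficient: float, exponent: float) -> float:
    """Parametric effort model of the form effort = coefficient * size**exponent."""
    return coefficient * size ** exponent

ARCH_EXPONENT = 1.2  # > 1.0: diseconomy of scale (R&D-oriented architecture stage)
PROD_EXPONENT = 0.9  # < 1.0: economy of scale (commodity-like production stage)

for size in (10, 100, 1000):  # e.g., thousands of SLOC or architectural elements
    arch_per_unit = stage_effort(size, 1.0, ARCH_EXPONENT) / size
    prod_per_unit = stage_effort(size, 1.0, PROD_EXPONENT) / size
    print(f"size {size:5d}: architecture effort/unit {arch_per_unit:.2f}, "
          f"production effort/unit {prod_per_unit:.2f}")
```

Per-unit effort grows with scale in the architecture stage and shrinks with scale in the production stage, which is exactly the economy/diseconomy distinction described above.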
Phases

● Architecture is engineered in the inception and elaboration phases; applications are produced in the construction and transition phases. Architectures and applications have different units of measure: scale and size.
● Scale is measured in terms of architecturally significant elements such as classes, components, processes, and nodes.
● Size is measured in SLOC or megabytes of executable code.
● Next-generation environments and infrastructures are moving to automate and standardize many of the management activities, thereby requiring a lower percentage of effort for overhead activities as scale increases.
Modern software economics
Barry Boehm's Top 10 Software Metrics
1. Finding and fixing a software problem after delivery costs 100 times more than finding and fixing the problem in early design phases.
2. You can compress software development schedules 25% of nominal, but no more.
3. For every $1 you spend on development, you will spend $2 on maintenance.
4. Software development and maintenance costs are primarily a function of the number of source lines of code.
5. Variations among people account for the biggest differences in software productivity.
6. The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, 85:15.
7. Only about 15% of software development effort is devoted to programming.
8. Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (i.e., systems of systems) cost 9 times as much.
9. Walkthroughs catch 60% of the errors.
10. 80% of the contribution comes from 20% of the contributors.
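Metrics 1 and 3 reduce to simple arithmetic; the dollar figures below are made up purely for illustration:

```python
# Metric 3: every $1 spent on development implies $2 on maintenance.
dev_cost = 100_000                 # hypothetical development budget (dollars)
maintenance_cost = 2 * dev_cost
lifecycle_cost = dev_cost + maintenance_cost
print(f"Development is {dev_cost / lifecycle_cost:.0%} of total lifecycle cost")

# Metric 1: a defect fixed after delivery costs ~100x an early-design fix.
early_fix = 50                     # hypothetical cost to fix during early design
post_delivery_fix = 100 * early_fix
print(f"Early fix: ${early_fix}, post-delivery fix: ${post_delivery_fix:,}")
```

Under metric 3, development turns out to be only a third of the total lifecycle cost.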
MODERN PROCESS TRANSITIONS
Indications of a Successful Project Transition to a Modern Culture

Several indicators can be observed in order to distinguish projects that have made a genuine cultural transition from projects that only pretend to. The following are some rough indicators:
● Lower-level and middle-level managers should participate in the project development

Any organization with 25 or fewer employees does not need pure managers; the responsibility of managers in this type of organization is similar to that of a project manager. Pure managers are needed when personnel resources exceed 25. These managers should first understand the status of the project themselves, then develop the plans and assess the results; the manager should participate in developing the plans. This transition affects software project managers.
● Tangible design and requirements

Traditional processes use tons of paper to generate the documents relevant to the desired project; even the significant milestones of a project are expressed via documents. Thus, traditional processes spend much of their crucial time on document preparation instead of on software development activities.

An iterative process involves the construction of systems that describe the


architecture, negotiates the significant requirements, identifies and resolves the risks etc.
These milestones will be focused by all the stakeholders because they show progressive
deliveries of important functionalities instead of documental descriptions about the
project. Engineering teams will accept this transition of environment from to less
document-driven while conventional monitors will refuse this transition.
● Assertive demonstrations are prioritized

Design errors are exposed by carrying out demonstrations in the early stages of the life cycle. Stakeholders should not overreact to these design errors, because overemphasizing them will discourage the development organization from attempting ambitious future iterations. This does not mean that stakeholders should simply bear these errors; in fact, stakeholders must follow all the significant steps needed to resolve them, because such errors can sometimes lead to a serious downfall of the project.
This transition will unmask the engineering and process issues, so it is mostly resisted by the management team and widely accepted by users, customers, and the engineering team.
● The performance of the project can be determined earlier in the life cycle

The success or failure of any project depends on the planning and architectural phases of the life cycle, so these phases must employ highly skilled professionals. The remaining phases, however, can work well with an average team.
● Early increments will be immature

Development organizations must ensure that customers and users do not expect good or reliable deliveries at the initial stages. This can be done by demonstrating tangible improvements in successive increments, with changes, fixes, and upgrades measured against the objectives so as to highlight the maturing quality of the process and the product.
● Artifacts tend to be less significant in the early stages and most significant in the later stages

The details of the artifacts should not be belabored until a stable and useful baseline is obtained. This transition is accepted by the development team, while conventional contract monitors resist it.
● Real issues are identified and resolved in a systematic order

The requirements and designs of any successful project evolve along with continuous negotiations and trade-offs. In a successful project the difference between real and apparent issues can easily be determined. This transition may affect any team of stakeholders.
● Everyone should focus on quality assurance

The software project manager should ensure that quality assurance is integrated into every aspect of the project: into every individual's role, every artifact, and every activity performed. Some organizations maintain a separate group of individuals known as the quality assurance team, which performs inspections, meetings, and checklists in order to measure quality assurance. This transition involves replacing the separate quality assurance team with organizational teamwork built on a mature process, common objectives, and common incentives. It is therefore supported by engineering teams and resisted by quality assurance teams and conventional managers.
DENOUEMENT
The term denouement typically refers to the final resolution or outcome of a story, often used
in the context of literature or drama. In the context of modern process transitions in software
project management, the denouement would symbolize the conclusion of the project, where
the various processes and activities converge into a final resolution. It is the point where the
project has reached its completion, and all deliverables are ready for handover, maintenance, or
closure.
In software project management, process transitions refer to the movement from one phase of
the project life cycle to another, often involving significant shifts in approach, methodology,
and tools. Modern transitions involve the adoption of agile, DevOps, and other contemporary
software development methodologies that aim to improve efficiency, collaboration, and quality
in the project.
Key Elements of Modern Process Transitions in Software Project Management:
1. Transition from Waterfall to Agile/DevOps: Historically, many software projects
followed a Waterfall model, which had distinct, sequential phases. Over time, this
model has given way to Agile and DevOps practices that focus on iterative development,
continuous feedback, and collaboration. The transition to these modern approaches
requires significant changes in the mindset, processes, and tools used by project teams.
2. Automation of Processes: The introduction of automation tools for testing,
deployment, and integration has changed the way software projects are managed.
Continuous integration/continuous deployment (CI/CD) pipelines, automated testing,
and infrastructure as code (IaC) ensure smoother transitions between development,
testing, and deployment phases.
3. Collaboration and Communication: Modern process transitions emphasize cross-functional teams, where developers, testers, product owners, and operations work
together throughout the development process. Tools like Jira, Trello, and Slack enhance
communication, making the transition between different phases more seamless.
4. User-Centric Development: The focus has shifted from developing software based on
rigid specifications to creating software that responds to user feedback. Agile
methodologies, particularly Scrum and Kanban, allow for quicker iterations, where
software is developed in short sprints with regular adjustments based on user feedback.
5. Quality Assurance: In the past, quality assurance (QA) was often a separate process at
the end of the development cycle. In modern transitions, QA is integrated continuously
throughout the process. This shift allows for earlier detection of issues and a more
continuous focus on quality.
6. Scalability and Flexibility: Modern processes are designed to be more scalable and
flexible. The use of cloud platforms, microservices architectures, and containerization
technologies like Docker allows teams to scale their solutions easily and adapt to
changing business needs.
7. Customer Feedback and Continuous Improvement: Modern process transitions also
emphasize continuous feedback loops, allowing teams to react to customer needs and
improve the software iteratively. Regular releases and sprint reviews give stakeholders
the opportunity to see progress and provide real-time feedback.
The Denouement in Modern Software Project Management:
The denouement in this context would be the point where the project reaches a natural
conclusion after undergoing these transitions. It involves:
● Project Delivery and Closure: Once the final iteration is delivered, the project moves
toward its closure. This includes wrapping up documentation, finalizing the product for
production, and ensuring all stakeholders are satisfied with the outcome.
● Knowledge Transfer and Handover: In the denouement, knowledge transfer to the
operations team or maintenance team occurs, where they take over ongoing support or
maintenance tasks for the software. Any lessons learned during the project transition are
documented for future use.
● Post-Deployment Monitoring and Continuous Improvement: Even after the product
is released, the work doesn't end. Monitoring the software’s performance and gathering
feedback from users continues as part of an ongoing improvement process.
● Final Retrospectives: A final retrospective meeting or review often takes place after a
project ends. This allows teams to reflect on the entire transition process, what went
well, and what can be improved for future projects.
Case Study: The Command Center Processing and Display System-Replacement (CCPDS-R)

CCPDS-R Case Study and Future Software Project Management Practices

• CCPDS-R Case Study: The Command Center Processing and Display System-Replacement (CCPDS-R) project was performed for the U.S. Air Force by TRW Space and Defense in Redondo Beach, California.
• The entire project included systems engineering, hardware procurement, and software
development, with each of these three major activities consuming about one-third of the
total cost.
• The schedule spanned 1987 through 1994.
• The software effort included the development of three distinct software systems totaling
more than one million source lines of code.
• This case study focuses on the initial software development, called the Common
Subsystem, for which about 355,000 source lines were developed.
• The Common Subsystem effort also produced a reusable architecture, a mature process,
and an integrated environment for efficient development of the two software subsystems
of roughly similar size that followed
• CCPDS-R was one of the pioneering projects that practiced many modern management
approaches
• TRW was awarded the Space and Missile Warning Systems Award for excellence in 1991 for "continued, sustained performance in overall systems engineering and project execution."

Characteristic              CCPDS-R
Domain                      Ground-based C3 development
Size/language               1.15M SLOC Ada
Average number of people    75
Schedule                    75 months
Process/standards           DOD-STD-2167A; iterative development
Environment                 Rational host, DEC host; DEC VMS targets
Contractor                  TRW
Customer                    USAF
Current status              Delivered on-budget, on-schedule

⮚ Common subsystem overview

The CCPDS-R contract called for the development of three subsystems:

1) Common Subsystem:
● The primary warning system within the Cheyenne Mountain Upgrade program
● Required 355,000 SLOC
● 48-month software development schedule
● Primary installation in Cheyenne Mountain, with a backup system deployed at Offutt Air Force Base, Nebraska

2) Processing and Display Subsystem:
● A scaled-down missile warning system for all nuclear-capable commanders in chief
● About 25,000 SLOC
● Fielded on remote, read-only workstations distributed worldwide

3) STRATCOM Subsystem:
● About 450,000 SLOC
● Provided the missile warning center at the command center of the Strategic Command

Overall Software Acquisition Process

The CCPDS-R acquisition included two distinct phases:

o A concept definition (CD) phase

o A full-scale development (FSD) phase

1) A concept definition (CD) phase

● The proposal was competed for by five major bidders, and two firm-fixed-price contracts of about $2 million each were awarded.

The CD phase was very similar in intent to the inception phase.

o The primary products were a system specification (a vision document), an FSD phase

proposal (a business case, including the technical approach and a fixed-price-incentive

and award-fee cost proposal), and a software development plan.

o The CD phase also included a system design review, technical interchange meetings

with the government stakeholders (customer and user), and several contract-deliverable

documents
The exercise requirements included the following:

• Use the proposed software team.

• Use the proposed software development techniques and tools.

• Use the FSD-proposed software development plan.

• Conduct a mock design review with the customer 23 days after receipt of the specification.

• The exercise produced the following results:

– Four primary use cases were elaborated and demonstrated.

– A software architecture skeleton was designed, prototyped, and documented,

including two executable, distributed processes; five concurrent tasks (separate

threads of control); eight components; and 72 component-to-component interfaces.

– A total of 4,163 source lines of prototype components were developed and

executed. Several thousand lines of reusable components were also integrated into

the demonstration.

– Three milestones were conducted and more than 30 action items resolved.

• The results produced by TRW's CCPDS-R team were impressive. They


demonstrated to the customer that the team was prepared, credible, and competent

at conducting the proposed software approach. Approximately 12 staff-months

were expended in the effort (12 people full-time for 23 days).

• A detailed plan was established that included an activity network, responsibility

assignments, and expected results for tracking progress.

The plan included two architecture iterations and all the milestones and artifacts

proposed in the software development plan.

⮚ Computer software configuration items (Common Subsystem product overview)

• The Common Subsystem software comprised six computer software configuration

items (CSCIs).

• CSCIs are defined and described in DOD-STD-2167A [DOD, 1988]. The CSCIs were

identified as follows:

1. Network Architecture Services (NAS). This foundation middleware provided reusable

components for network management, inter process communications, initialization,

reconfiguration, anomaly management, and instrumentation of software health,

performance, and state. This CSCI was designed to be reused across all three CCPDS-R

subsystems.

2. System Services (SSV). This CSCI comprised the software architecture skeleton, real-

time data distribution, global data types, and the computer system operator interface.

3. Display Coordination (DCO). This CSCI comprised user interface control, display

formats, and display population.

4. Test and Simulation (TAS). This CSCI comprised test scenario generation, test message

injection, data recording, and scenario playback.

5. Common Mission Processing (CMP). This CSCI comprised the missile warning
algorithms for radar, nuclear detonation, and satellite early warning messages.

6. Common Communications (CCO). This CSCI comprised external interfaces with other

systems and message input, output, and protocol management.

⮚ Project Organization:

The CD phase team represented the essence of the architecture team, which is
responsible for an efficient engineering stage. This team had the following
responsibilities:
o Analyze and specify the project requirements
o Define and develop the top-level architecture
o Plan the FSD phase software development activities
o Configure the process and development environment
o Establish trust and win-win relationships among the stakeholders
Core metrics

TRW formulated a metrics program with four objectives:


1. Provide data for assessing current project trends and identifying the need for
management attention
2. Provide data for planning future builds and subsystems
3. Provide data for assessing the relative complexity of meeting the software end-item
quality requirements
4. Provide data for identifying where process improvements are needed and substantiating
the need
Development progress
• Significant effort went into devising a consistent approach that would provide accurate
insight into subsystem-level status and build status.
• The goal was a balanced assessment that included the following:
– The Ada/ADL metrics. These data provided good insight into the direct
indicators of technical progress. By themselves, these metrics were fairly
accurate at depicting the true progress in design and implementation. They were
generally weak at depicting the completed contract deliverables and financial
status.
– Earned value metrics. These data provided good insight into the financial
status and contract deliverables. They were generally weak indicators of true
technical progress.

Monthly progress is shown in the figure below.
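The complementary strengths of the two metric families can be illustrated with standard earned value arithmetic. This is a generic sketch of the usual indicators (SPI, CPI, and the variances); the monthly dollar figures are hypothetical, not the actual CCPDS-R data.

```python
# Standard earned value indicators, as used for financial/deliverable status:
#   BCWS = budgeted cost of work scheduled (the plan)
#   BCWP = budgeted cost of work performed (earned value)
#   ACWP = actual cost of work performed (spend)
# All three inputs here are hypothetical monthly values.

def earned_value(bcws, bcwp, acwp):
    """Return schedule and cost performance indices plus variances."""
    spi = bcwp / bcws   # schedule performance index (<1.0 = behind schedule)
    cpi = bcwp / acwp   # cost performance index (<1.0 = over budget)
    sv = bcwp - bcws    # schedule variance
    cv = bcwp - acwp    # cost variance
    return {"SPI": spi, "CPI": cpi, "SV": sv, "CV": cv}

# Example month: planned $500K of work, earned $450K, spent $480K
status = earned_value(bcws=500, bcwp=450, acwp=480)
print(status)
```

As the notes observe, these indices say little about true technical progress, which is why CCPDS-R balanced them against the Ada/ADL design and implementation metrics.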

Test Progress

● The test organization was responsible for build integration tests and requirements
verification testing
● Build integration testing proved to be less effective than expected for uncovering

problems.

● The table below summarizes the Build 2 build integration test (BIT) results, which
reflect a highly integrated product state.

● The table below provides perspectives on the progress metrics used to plan and track
the CCPDS-R test program.

Stability

The figure below illustrates the rate of configuration baseline changes: the cumulative
number of SLOC that were broken and the number of SLOC repaired.

Breakage rates that diverged from repair rates resulted in management attention,
reprioritization of resources, and corrective actions to ensure that the test and
development organizations remained in relative equilibrium.
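The stability check described above can be sketched as a simple comparison of cumulative broken versus repaired SLOC per month. The monthly figures and the divergence threshold here are hypothetical, chosen only to show the mechanics of flagging months that need management attention.

```python
# Stability metric sketch: cumulative broken SLOC vs. cumulative repaired
# SLOC, flagging any month in which the unrepaired backlog exceeds a
# (hypothetical) threshold — i.e., breakage diverging from repair.

def stability_flags(broken_per_month, repaired_per_month, threshold_sloc=2000):
    flags = []
    cum_broken = cum_repaired = 0
    for month, (b, r) in enumerate(zip(broken_per_month, repaired_per_month), 1):
        cum_broken += b
        cum_repaired += r
        if cum_broken - cum_repaired > threshold_sloc:
            flags.append(month)  # backlog of unrepaired SLOC: corrective action
    return flags

# Example: repair keeps pace early, then falls behind in month 4
flags = stability_flags([1000, 1500, 3000, 2000], [900, 1400, 1200, 1500])
print(flags)
```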

Modularity

● This metric identifies the total scrap generated by the common subsystem software

development process as about 25% of the whole product.

● Industry averages for software scrap run in the 40% to 60% range.

● The initial configuration management baseline was established around the time of PDR,

at month 14.

● About 1,600 discrete changes were processed against configuration baselines thereafter.
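The modularity (scrap) metric itself is a simple ratio: SLOC broken by change orders as a fraction of total product size. The SLOC figures below are hypothetical, chosen to reproduce the roughly 25% value reported for the Common Subsystem.

```python
# Modularity metric sketch: scrap = broken SLOC / total product SLOC.
# Both inputs are illustrative, not the actual CCPDS-R counts.

def scrap_percentage(broken_sloc, total_sloc):
    """Percentage of the product reworked (scrapped) over the life cycle."""
    return 100.0 * broken_sloc / total_sloc

pct = scrap_percentage(broken_sloc=100_000, total_sloc=400_000)
print(f"scrap: {pct:.1f}% of product")  # → scrap: 25.0% of product
```

Against the 40% to 60% industry range quoted above, a 25% scrap rate is the evidence of good modularity.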
Adaptability

● The level of Adaptability achieved by CCPDS-R was roughly four times better than the

typical project, in which rework costs over the development life cycle usually exceed

20% of the total cost

● The below figure plots the average cost of change across the common subsystem

schedule

● Most of the early SCO trends were changes that affected multiple people and multiple

components

● The later SCO trends were usually localized to a single person and a single component.
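The adaptability trend can be sketched as the average effort per software change order (SCO) in each period. The SCO counts and hours below are hypothetical; the point is the declining shape, matching the observation that early changes touched many people and components while later changes were localized.

```python
# Adaptability metric sketch: average rework hours per SCO per period.
# On a healthy project this average declines as changes localize to a
# single person and component. All numbers here are illustrative.

def avg_hours_per_change(sco_counts, effort_hours):
    """Average rework hours per SCO for each reporting period."""
    return [hours / count for count, hours in zip(sco_counts, effort_hours)]

# Example periods: early architectural changes are expensive, later ones cheap
trend = avg_hours_per_change(sco_counts=[50, 120, 200],
                             effort_hours=[2000, 3000, 3200])
print(trend)  # declining average cost of change
```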
Maturity

● CCPDS-R had a specific reliability requirement, for which the software had a

specific allocation.
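One common way to track a reliability allocation like this is a software mean-time-between-failures (MTBF) estimate: cumulative test hours divided by the number of critical failures observed. This is a hedged sketch of that indicator; the hours and failure counts are illustrative, not the actual CCPDS-R allocation.

```python
# Maturity indicator sketch: software MTBF estimated from test exposure.
# Inputs are hypothetical test hours and observed critical-failure counts.

def mtbf(test_hours, critical_failures):
    """Estimated mean time between critical failures, in test hours."""
    if critical_failures == 0:
        return float("inf")  # no critical failures observed yet
    return test_hours / critical_failures

print(mtbf(test_hours=1200, critical_failures=4))  # 300.0 hours
```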

Cost/effort expenditures by activity

The table below provides the overall cost breakdown for the CCPDS-R Common Subsystem.
It was extracted from the final WBS cost collection runs and structured by activity.
Future software best practices

• The Airlie Software Council was purposely structured to include highly successful managers of
large-scale software projects

• These nine best practices map onto the process framework, the management principles, and the
top 10 principles:

1) Formal risk management: Using an iterative approach

2) Agreement on Interfaces: Architecture first approach

3) Formal Inspections: Assessment workflows

4) Metric Based scheduling and management: Model based notations and objective
quality control

5) Binary quality gates at the inch-pebble level: Evolving level of detail

6) Program-wide visibility of progress versus plan: Open communication among project
team members

7) Defect tracking against quality targets: It is related to the architecture-first approach and
objective quality control

8) Configuration management: Change management principle

9) People-aware management accountability: Management principle
