Software Project Management Lecture Notes
Prepared By
Chaitanya Kumar Repalle
SPM UNIT III
Checkpoints of the Process: Major Milestones, Minor Milestones, Periodic Status Assessments
The importance of software architecture and its close linkage with modern software
development processes can be summarized as follows:
Poor architectures and immature processes are often given as reasons for project failures.
A mature process, an understanding of the primary requirements, and a demonstrable architecture are important prerequisites for predictable planning.
Architecture development and process definition are the intellectual steps that map the problem to a solution without violating the constraints; they require human innovation and cannot be automated.
The following figure summarizes the artifacts of the design set, including the architecture views and the architecture description:
The requirements model addresses the behavior of the system as seen by its end users, analysts, and testers. This view is modeled statically using use case and class diagrams, and dynamically using sequence, collaboration, statechart, and activity diagrams.
The use case view describes how the system's critical (architecturally significant) use cases are realized by elements of the design model. It is modeled statically using use case diagrams and dynamically using any of the UML behavioral diagrams.
The design view describes the architecturally significant elements of the design model. This view, an abstraction of the design model, addresses the basic structure and functionality of the solution. It is modeled statically using class and object diagrams and dynamically using any of the UML behavioral diagrams.
The process view addresses the run-time collaboration issues involved in executing the architecture on a distributed deployment model, including the logical software network topology (allocation to processes and threads of control), interprocess communication, and state management. This view is modeled statically using deployment diagrams and dynamically using any of the UML behavioral diagrams.
The component view describes the architecturally significant elements of the implementation
set. This view, an abstraction of the design model, addresses the software source code
realization of the system from the perspective of the project's integrators and developers,
especially with regard to releases and configuration management. It is modeled statically using
component diagrams and dynamically using any of the UML behavioral diagrams.
The deployment view addresses the executable realization of the system, including the
allocation of logical processes in
the distribution view (the logical software topology) to physical resources of the deployment
network (the physical system topology). It is modeled statically using deployment diagrams
and dynamically using any of the UML behavioral diagrams.
Generally, an architecture baseline should include the following:
Requirements: critical use cases, system-level quality objectives, and priority relationships among features and qualities.
Implementation: source component inventory and bill of materials (number, name, purpose, cost) of all primitive components.
Deployment: executable components sufficient to demonstrate the critical use cases and the risks associated with achieving the system qualities.
The term workflow is used to mean a thread of cohesive and mostly sequential activities. Workflows are mapped to product artifacts. There are seven top-level workflows:
1. Management workflow: controlling the process and ensuring win conditions for all
stakeholders.
2. Environment workflow: automating the process and evolving the maintenance
environment.
3. Requirements workflow: analyzing the problem space and evolving the requirements
artifacts.
4. Design workflow: modeling the solution and evolving the architecture and design artifacts.
5. Implementation workflow: programming components & evolving the implementation and
deployment artifacts.
6. Assessment workflow: assessing the trends in process and product quality.
7. Deployment workflow: transitioning the end products to the user.
Three types of joint management reviews are conducted throughout the process:
• Major milestones: these system-wide events are held at the end of each development phase. They provide visibility to system-wide issues, synchronize the management and engineering perspectives, and verify that the aims of the phase have been achieved.
• Minor milestones: these iteration-focused events are conducted to review the content of an iteration in detail and to authorize continued work.
• Status assessments: these periodic events provide management with frequent and regular insight into the progress being made.
MAJOR MILESTONES
The four major milestones occur at the transition points between life-cycle phases. They can be used in many different process models, including the conventional waterfall model. In an iterative model, the major milestones are used to achieve concurrence among all stakeholders on the current state of the project. Different stakeholders have different concerns at each major milestone:
▪ Customers: Schedule and budget estimates, feasibility, risk assessment, requirements
understanding, progress, product line compatibility.
▪ Users: consistency with requirements and usage scenarios, potential for accommodating growth, quality attributes.
▪ Architects and systems engineers: product line compatibility, requirements changes, trade-off analyses, completeness and consistency, balance among risk, quality, and usability.
▪ Developers: sufficiency of requirements detail and usage scenario descriptions,
frameworks for component selection or development, resolution of development risk,
product line compatibility, sufficiency of the development environment.
▪ Maintainers: sufficiency of product and documentation artifacts, understandability,
interoperability with existing systems, sufficiency of maintenance environment.
▪ Others: possibly many other perspectives by stakeholders such as regulatory agencies, independent verification and validation contractors, venture capital investors, subcontractors, associate contractors, and sales and marketing teams.
The following table summarizes the balance of information across the major milestones.
Life-Cycle Objectives Milestone
The life-cycle objectives milestone occurs at the end of the inception phase. The goal is to present to all stakeholders a recommendation on how to proceed with development, including a plan, estimated cost and schedule, and expected benefits and cost savings. A successfully
completed life-cycle objectives milestone will result in authorization from all stakeholders to
proceed with the elaboration phase.
MINOR MILESTONES
Iterative Process Planning: Work Breakdown Structures, Planning Guidelines, Cost and Schedule Estimating, Iteration Planning Process, Pragmatic Planning.
▪ A good work breakdown structure and its synchronization with the process framework are critical factors in software project success. Development of a work breakdown structure is dependent on the project management style, organizational culture, customer preference, financial constraints, and several other hard-to-define, project-specific parameters.
▪ A WBS is simply a hierarchy of elements that decomposes the project plan into the discrete
work tasks.
▪ A WBS provides the following information structure:
a) A delineation of all significant work
b) A clear task decomposition for assignment of responsibilities
c) A framework for scheduling, budgeting, and expenditure tracking
Conventional work breakdown structures frequently suffer from three fundamental flaws.
▪ They are prematurely structured around the product design.
▪ They are prematurely decomposed, planned, and budgeted in either too much or too little
detail.
▪ They are project-specific, and cross-project comparisons are usually difficult or impossible.
▪ Conventional work breakdown structures are prematurely structured around the product design. The figure shows a typical conventional WBS that has been structured primarily around the subsystems of its product architecture and then further decomposed into the components of each subsystem. A WBS is the architecture for the financial plan.
▪ Conventional work breakdown structures are prematurely decomposed, planned, and budgeted in either too little or too much detail. Large software projects tend to be over-planned and small projects tend to be under-planned. The basic problem with planning in too much detail at the outset is that the detail does not evolve with the level of fidelity in the plan.
▪ Conventional work breakdown structures are project-specific and cross-project
comparisons are usually difficult or impossible. With no standard WBS structure, it is
extremely difficult to compare plans, financial data, schedule data, organizational
efficiencies, cost trends, productivity trends, or quality trends across multiple projects.
EVOLUTIONARY WORK BREAKDOWN STRUCTURES
An evolutionary WBS should organize the planning elements around the process framework
rather than the product framework. The basic recommendation for the WBS is to organize the
hierarchy as follows:
a) First-level WBS elements are the workflows (management, environment, requirements, design, implementation, assessment, and deployment).
b) Second-level elements are defined for each phase of the life cycle (inception, elaboration, construction, and transition).
c) Third-level elements are defined for the focus of activities that produce the artifacts of each phase.
A default WBS consistent with the process framework (phases, workflows, and artifacts) is
shown in Figure:
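The default WBS figure is not reproduced in these notes. As a minimal sketch only, assuming the workflow/phase/activity hierarchy just described (the element codes and activity names below are illustrative assumptions, not items taken from the figure), a fragment of such an evolutionary WBS could be written as a nested structure:

# Hedged sketch of an evolutionary WBS fragment organized as
# workflow -> life-cycle phase -> activity, following the a)/b)/c) hierarchy above.
# Element codes and activity names are illustrative assumptions.
EVOLUTIONARY_WBS = {
    "A Management": {
        "AA Inception phase management": ["AAA business case development",
                                          "AAB vision specification"],
        "AB Elaboration phase management": ["ABA release planning"],
    },
    "B Environment": {
        "BA Inception phase environment": ["BAA tool evaluation"],
    },
    # C Requirements, D Design, E Implementation, F Assessment, and G Deployment
    # would follow the same pattern.
}

for workflow, phases in EVOLUTIONARY_WBS.items():
    for phase, activities in phases.items():
        for activity in activities:
            print(f"{workflow} / {phase} / {activity}")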
PLANNING GUIDELINES
Software projects span a broad range of application domains. It is valuable, but risky, to make specific planning recommendations independent of project context: such project-independent guidelines may be adopted blindly without being adapted to the specific circumstances of a project. Two simple planning guidelines should be considered when a project plan is being initiated or assessed.
The first guideline prescribes a default allocation of costs among the first-level WBS elements; the second prescribes a default allocation of effort and schedule across the life-cycle phases, as sketched below.
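As a hedged sketch of these two guidelines, assuming the commonly cited default percentages from Royce's framework (the numbers are approximations recalled from that framework, not values given in these notes):

# Hedged sketch: approximate default planning allocations often cited for
# Royce's process framework. Treat these numbers as illustrative assumptions.
DEFAULT_COST_BY_WORKFLOW = {        # first-level WBS elements, % of total cost
    "management": 10,
    "environment": 10,
    "requirements": 10,
    "design": 15,
    "implementation": 25,
    "assessment": 25,
    "deployment": 5,
}

DEFAULT_PHASE_ALLOCATION = {        # % of (effort, schedule) per life-cycle phase
    "inception": (5, 10),
    "elaboration": (20, 30),
    "construction": (65, 50),
    "transition": (10, 10),
}

# The defaults should account for the whole budget and the whole schedule.
assert sum(DEFAULT_COST_BY_WORKFLOW.values()) == 100
assert sum(e for e, _ in DEFAULT_PHASE_ALLOCATION.values()) == 100
assert sum(s for _, s in DEFAULT_PHASE_ALLOCATION.values()) == 100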
Project plans need to be derived from two perspectives. The first is a forward-looking, top-down approach: it starts with an understanding of the general requirements and constraints, derives a macro-level budget and schedule, and then decomposes these elements into lower level budgets and intermediate milestones. The second is a backward-looking, bottom-up approach: it starts with detailed task-level estimates and integrates them into higher level budgets and schedules.
Inception iterations: the early prototyping activities integrate the foundation components of a candidate architecture and provide an executable framework for elaborating the critical use cases.
Elaboration iterations: these iterations result in an architecture, including a complete framework and infrastructure for execution. Upon completion of the architecture iterations, a few critical use cases should be demonstrable:
(1) initializing the architecture,
(2) injecting a scenario to drive the worst-case data processing flow through the system, and
(3) injecting a scenario to drive the worst-case control flow through the system (for example, orchestrating the fault-tolerance use cases).
Transition iterations: most projects use a single iteration to transition a beta release into the final product.
▪ The general guideline is that most projects will use between four and nine iterations; the typical project would have a six-iteration profile.
PRAGMATIC PLANNING
Even though good planning is more dynamic in an iterative process, doing it accurately is far easier. While executing iteration N of any phase, the software project manager must be monitoring and controlling against a plan that was initiated in iteration N-1 and must be planning iteration N+1. The art of good project management is to make trade-offs in the current iteration plan and the next iteration plan based on objective results in the current iteration and previous iterations. Aside from bad architectures and misunderstood requirements, inadequate planning (and subsequent bad management) is one of the most common reasons for project failures. Conversely, the success of every successful project can be attributed in part to good planning.
A project's plan is a definition of how the project requirements will be transformed into a product within the business constraints. It must be realistic, it must be current, it must be a team product, it must be understood by the stakeholders, and it must be used. Plans are not just for managers. The more open and visible the planning process and results, the more ownership there is among the team members who need to execute it. Bad, closely held plans cause attrition. Good, open plans can shape cultures and encourage teamwork.
Project Organizations and Responsibilities: Line-of-Business Organizations, Project
Organizations, and Evolution of Organizations. Process Automation: Automation Building
Blocks, the Project Environment.
Software lines of business and project teams have different motivations. Software lines of
business are motivated by return on investment, new business discriminators, market
diversification and profitability.
Software professionals in both types of organizations are motivated by career growth, job
satisfaction and the opportunity to make a difference.
In a line-of-business organization, these responsibilities typically belong to the Software Engineering Process Authority (SEPA):
❖ Responsible for exchanging information and project guidance with the project practitioners.
❖ Maintains a current assessment of the organization's process maturity.
❖ Helps initiate and periodically assess project processes.
❖ Responsible for process definition and maintenance.
PROJECT ORGANIZATIONS
The default project organization maps project-level roles and responsibilities. This structure can be tailored to the size and circumstances of the specific project organization.
The architecture team is responsible for real artifacts and for the integration of components, not just for staff functions.
❑ The development team owns the component construction and maintenance activities.
❑ Quality is everyone's job. Each team takes responsibility for a different quality perspective.
Software Architecture Team:
❑ The software architecture team performs the tasks of integrating the components, creating real artifacts, etc.
❑ It promotes team communications and implements the application with system-wide quality.
❑ The success of the development team depends on the effectiveness of the architecture team; together with the software management team, the architecture team controls the inception and elaboration phases of the life cycle.
❑ The architecture team must have:
❖ Domain experience to generate an acceptable design view and use case view.
❖ Software technology experience to generate an acceptable process view, component view, and deployment view.
Responsibilities:
❖ System-level quality (i.e., performance, reliability, and maintainability).
❖ Graphical user interfaces: specialists with experience in display organization, data presentation, and user interaction.
❖ Operating systems and networking: specialists with experience in the various control issues that arise from synchronization, resource sharing, reconfiguration, interobject communications, name space management, etc.
❖ Domain applications: specialists with experience in the algorithms, application processing, or business rules specific to the system.
Responsibilities:
❑ Exposure of the quality issues that affect the customer's expectations.
❑ Metrics analysis.
❑ Verifying the requirements.
❑ Independent testing.
❑ Configuration control and user deployment.
❑ Building project infrastructure.
EVOLUTION OF ORGANIZATIONS
❑ The project organization represents the architecture of the team and needs to evolve consistent with the project plan captured in the work breakdown structure.
❑ A different set of activities is emphasized in each phase, as follows:
❖ Inception team: An organization focused on planning, with enough support from the
other teams to ensure that the plans represent a consensus of all perspectives.
❖ Elaboration team: An architecture-focused organization in which the driving forces of the project reside in the software architecture team and are supported by the software development and software assessment teams as necessary to achieve a stable architecture baseline.
❖ Construction team: A fairly balanced organization in which most of the activity resides in the software development and software assessment teams.
❖ Transition team: A customer-focused organization in which usage feedback drives the deployment activities.
PROCESS AUTOMATION
3. Microprocess: A project team's policies, procedures, and practices for achieving an artifact
of the software process. The automation support for generating an artifact is generally called
a tool. Typical tools include requirements management, visual modeling, compilers, editors,
debuggers, change management, metrics automation, document automation, test automation,
cost estimation, and workflow automation.
Management: Software cost estimation tools and WBS tools are useful for generating the
planning artifacts. For managing against a plan, workflow management tools and a software
project control panel that can maintain an on-line version of the status assessment are
advantageous.
Environment: Configuration management and version control are essential in a modern iterative development process; change management automation must be supported by the environment.
Requirements: Conventional approaches decomposed system requirements into subsystem
requirements, subsystem requirements into component requirements, and component
requirements into unit requirements.
The ramifications of this approach on the environment's support for requirements management are twofold:
1. The recommended requirements approach is dependent on both textual and model-based representations.
2. Traceability between requirements and other artifacts needs to be automated.
Design: The primary support required for the design workflow is visual modeling, which is
used for capturing design models, presenting them in human-readable format, and translating
them into source code. An architecture-first and demonstration-based process is enabled by existing architecture components and middleware.
Implementation: The implementation workflow relies primarily on a programming environment (editor, compiler, debugger, linker, and run time), but it must also include substantial integration with the change management tools, visual modeling tools, and test automation tools to support productive iteration.
Assessment and Deployment: To increase change freedom, testing and document production
must be mostly automated. Defect tracking is another important tool that supports assessment:
It provides the change management instrumentation necessary to automate metrics and control
release baselines. It is also needed to support the deployment workflow throughout the life
cycle.
Four environment disciplines are critical to the management context and the success of a modern iterative development process:
Change Management
The basic fields of the SCO are title, description, metrics, resolution, assessment, and disposition.
a) Title: the title is suggested by the originator and is finalized upon acceptance by the configuration control board (CCB).
b) Description: the problem description includes the name of the originator, date of origination, CCB-assigned SCO identifier, and relevant version identifiers of related support software.
c) Metrics: the metrics collected for each SCO are important for planning, for scheduling, and for assessing quality improvement. Change categories are type 0 (critical bug), type 1 (bug), type 2 (enhancement), type 3 (new feature), and type 4 (other).
d) Resolution: this field includes the name of the person responsible for implementing the change, the components changed, the actual metrics, and a description of the change.
e) Assessment: this field describes the assessment technique as inspection, analysis, demonstration, or test. Where applicable, it should also reference all existing and new test cases executed, and it should identify all the different test configurations, such as platforms, topologies, and compilers.
f) Disposition: the SCO is assigned one of the following states by the CCB:
• Proposed: written, pending CCB review
• Accepted: CCB-approved for resolution
• Rejected: closed, with rationale, such as not a problem, duplicate, obsolete change,
resolved by another SCO
• Archived: accepted but postponed until a later release
• In progress: assigned and actively being resolved by the development organization
• In assessment: resolved by the development organization; being assessed by a test
organization
• Closed: completely resolved, with the concurrence of all CCB members
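As an illustrative sketch only, the SCO fields, change categories, and disposition states listed above could be modeled as a simple record; the type and state names follow the text, while the class itself is an assumption for illustration, not part of these notes:

# Illustrative sketch of an SCO record and its CCB disposition states,
# following the fields and states listed above.
from dataclasses import dataclass
from enum import Enum

class ChangeType(Enum):
    CRITICAL_BUG = 0   # type 0
    BUG = 1            # type 1
    ENHANCEMENT = 2    # type 2
    NEW_FEATURE = 3    # type 3
    OTHER = 4          # type 4

class Disposition(Enum):
    PROPOSED = "proposed"            # written, pending CCB review
    ACCEPTED = "accepted"            # CCB-approved for resolution
    REJECTED = "rejected"            # closed, with rationale
    ARCHIVED = "archived"            # accepted but postponed to a later release
    IN_PROGRESS = "in progress"      # actively being resolved
    IN_ASSESSMENT = "in assessment"  # resolved; being assessed by a test organization
    CLOSED = "closed"                # completely resolved

@dataclass
class SoftwareChangeOrder:
    title: str
    description: str
    change_type: ChangeType
    resolution: str = ""
    assessment: str = ""
    disposition: Disposition = Disposition.PROPOSED

sco = SoftwareChangeOrder("GUI freeze on resize", "Reported by test team", ChangeType.BUG)
print(sco.disposition)   # Disposition.PROPOSED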
Configuration Baseline
A configuration baseline is a named collection of software components and supporting
documentation that is subject to change management and is upgraded, maintained, tested,
statused and obsolesced as a unit.
There are generally two classes of baselines:
1. external product releases and
2. internal testing releases.
A configuration baseline is a named collection of components that is treated as a unit. It is
controlled formally because it is a packaged exchange between groups. A project may release
a configuration baseline to the user community for beta testing. Once software is placed in a
controlled baseline, all changes are tracked. A distinction must be made for the cause of a
change. Change categories are as follows:
– Type 0: Critical failures, which are defects that are nearly always fixed before any
external release.
– Type 1: A bug or defect that either does not impair the usefulness of the system or can
be worked around.
– Type 2: A change that is an enhancement rather than a response to a defect.
– Type 3: A change that is necessitated by an update to the requirements.
– Type 4: Changes that are not accommodated by the other categories.
Configuration Control Board (CCB)
• A CCB is a team of people that functions as the decision authority on the content of
configuration baselines.
• A CCB usually includes the software manager, software architecture manager,
software development manager, software assessment manager and other stakeholders
(customer, software project manager, systems engineer, user) who are integral to the
maintenance of a controlled software delivery system.
• The [bracketed] words constitute the state of an SCO transitioning through the process.
• [Proposed]: A proposed change is drafted and submitted to the CCB. The proposed change must include a technical description of the problem and an estimate of the resolution effort.
• [Accepted, archived or rejected]: The CCB assigns a unique identifier and accepts,
archives, or rejects each proposed change. Acceptance includes the change for
resolution in the next release; archiving accepts the change but postpones it for
resolution in a future release; and rejection judges the change to be without merit,
redundant with other proposed changes, or out of scope.
• [In progress]: The responsible person analyzes, implements, and tests a solution to satisfy the SCO. This task includes updating documentation, release notes, and SCO metrics actuals, and submitting new SCOs.
• [In assessment]: The independent test team assesses whether the SCO is completely resolved. When the independent test team deems the change to be satisfactorily resolved, the SCO is submitted to the CCB for final disposition and closure.
• [Closed]: When the development organization, independent test organization, and CCB concur that the SCO is resolved, it is transitioned to a closed status.
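A minimal sketch of the disposition flow described in these bullets follows; the re-entry arrows for archived changes and for changes that fail assessment are assumptions, since the text does not spell them out:

# Illustrative sketch of the SCO state transitions described above, using plain
# strings for the states. "archived -> accepted" and "in assessment -> in progress"
# are assumed re-entry paths, not explicitly stated in the text.
ALLOWED_TRANSITIONS = {
    "proposed": {"accepted", "archived", "rejected"},
    "accepted": {"in progress"},
    "archived": {"accepted"},                  # picked up again for a later release
    "in progress": {"in assessment"},
    "in assessment": {"in progress", "closed"},
    "rejected": set(),
    "closed": set(),
}

def transition(current_state: str, new_state: str) -> str:
    """Return the new state if the CCB process allows the transition, else raise."""
    if new_state not in ALLOWED_TRANSITIONS[current_state]:
        raise ValueError(f"illegal SCO transition: {current_state} -> {new_state}")
    return new_state

state = "proposed"
for step in ("accepted", "in progress", "in assessment", "closed"):
    state = transition(state, step)
    print(state)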
Infrastructures
An organization's infrastructure provides the organization's capital assets, including two key artifacts:
a) a policy that captures the standards for project software development processes, and
b) an environment that captures an inventory of tools.
Organization Policy
• The organization policy is usually packaged as a handbook that defines the life cycle and
the process primitives (major milestones, intermediate artifacts, engineering repositories,
metrics, roles and responsibilities). The handbook provides a general framework for
answering the following questions:
– What gets done? (activities and artifacts)
– When does it get done? (mapping to the life-cycle phases and milestones)
– Who does it? (team roles and responsibilities)
– How do we know that it is adequate? (checkpoints, metrics, and standards of performance)
Organization Environment
Some of the typical components of an organization's automation building blocks are as follows:
• Standardized tool selections, which promote common workflows and a higher ROI on
training.
• Standard notations for artifacts, such as UML for all design models, or Ada 95
for all custom-developed, reliability-critical implementation artifacts.
• Tool adjuncts such as existing artifact templates (architecture description,
evaluation criteria, release descriptions, status assessment) or customizations.
• Activity templates (iteration planning, major milestone activities, configuration control
boards).
Stakeholder Environments
• An on-line environment accessible by the external stakeholders allows them to participate in the process as follows:
– Accept and use executable increments for hands-on evaluation.
– Use the same on-line tools, data, and reports that the software development organization uses to manage and monitor the project.
– Avoid excessive travel, paper interchange delays, format translations, paper and shipping costs, and other overhead costs.
• There are several important reasons for extending development environment resources into certain stakeholder domains:
– Technical artifacts are not just paper.
– Reviews and inspections, breakage/rework assessments, metrics analyses, and even beta testing can be performed independently of the development team.
– Even paper documents should be delivered electronically to reduce production costs and turnaround time.
SPM UNIT-V
Project Control and Process Instrumentation: Seven Core Metrics, Management Indicators, Quality Indicators, Life-Cycle Expectations, Pragmatic Software Metrics, Metrics Automation.
Future Software Project Management: Modern Project Profiles, Next-Generation Software Economics, Modern Process Transitions.
Case Study: The Command Center Processing and Display System-Replacement (CCPDS-R).
The primary themes of a modern software development process tackle the central management issues of complex software.
The goals of software metrics are to provide the development team and the management team
with the following:
Seven core metrics are used in all software projects. Three are management indicators and four
are quality indicators.
a) Management Indicators
▪ Work and progress (work performed over time)
▪ Budgeted cost and expenditures (cost incurred over time)
▪ Staffing and team dynamics (personnel changes over time)
b) Quality Indicators
▪ Change traffic and stability
▪ Breakage and modularity
▪ Rework and adaptability
▪ Mean time between failures (MTBF) and maturity
The seven core metrics share the following characteristics:
▪ They are simple, objective, easy to collect, easy to interpret and hard to misinterpret.
▪ Collection can be automated and non-intrusive.
▪ They provide for consistent assessment throughout the life cycle and are derived
from the evolving product baselines rather than from a subjective assessment.
▪ They are useful to both management and engineering personnel for communicating
progress and quality in a consistent format.
MANAGEMENT INDICATORS
There are three fundamental sets of management metrics: technical progress, financial status, and staffing progress. By examining these perspectives, management can generally assess whether a project is on budget and on schedule. The management indicators recommended here include standard financial status based on an earned value system, objective technical progress metrics tailored to the primary measurement criteria for each major team of the organization, and staffing metrics that provide insight into team dynamics.
▪ Software development team: SLOC under baseline change management, SCOs closed.
▪ Software assessment team: SCOs opened, test hours executed, evaluation criteria met
▪ Software management team: milestones completed
Budgeted Cost and Expenditures
To maintain management control, measuring cost expenditures over the project life cycle is
always necessary. One common approach to financial performance measurement is use of an
earned value system, which provides highly detailed cost and schedule insight.
Modern software processes are amenable to financial performance measurement through an
earned value approach. The basic parameters of an earned value system, usually expressed in
units of dollars, are as follows:
▪ Expenditure plan: the planned spending profile for a project over its planned schedule. For most software projects (and other labor-intensive projects), this profile generally tracks the staffing profile.
▪ Actual progress: the technical accomplishment relative to the planned progress underlying the spending profile. In a healthy project, the actual progress tracks planned progress closely.
▪ Actual cost: the actual spending profile for a project over its actual schedule. In a healthy project, this profile tracks the planned profile closely.
▪ Earned value: the value that represents the planned cost of the actual progress.
▪ Cost variance: the difference between the actual cost and the earned value. Positive values correspond to over-budget situations; negative values correspond to under-budget situations.
▪ Schedule variance: the difference between the planned cost and the earned value. Positive values correspond to behind-schedule situations; negative values correspond to ahead-of-schedule situations.
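A minimal worked sketch of these earned value definitions, with invented figures chosen only for illustration:

# Minimal worked sketch of the earned value parameters defined above.
# All figures are invented for illustration.
total_budget = 2_000_000          # total planned cost of the project
planned_cost_to_date = 1_000_000  # expenditure plan: what should have been spent by now
actual_cost_to_date = 950_000     # actual spending profile to date
percent_complete = 0.40           # actual technical progress achieved so far

earned_value = percent_complete * total_budget            # planned cost of the actual progress
cost_variance = actual_cost_to_date - earned_value        # positive => over budget
schedule_variance = planned_cost_to_date - earned_value   # positive => behind schedule

print(f"earned value:      {earned_value:,.0f}")         # 800,000
print(f"cost variance:     {cost_variance:,.0f}")        # 150,000 -> over budget
print(f"schedule variance: {schedule_variance:,.0f}")    # 200,000 -> behind schedule

Note that, following the definitions above, positive variances are unfavorable (over budget or behind schedule).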
Staffing and Team Dynamics
An iterative development should start with a small team until the risks in the requirements and architecture have been suitably resolved. Depending on the overlap of iterations and other project-specific circumstances, staffing can vary. For discrete, one-of-a-kind development efforts (such as building a corporate information system), the staffing profile would be typical. It is reasonable to expect the maintenance team to be smaller than the development team for these sorts of developments. For a commercial product development, the sizes of the maintenance and development teams may be the same.
QUALITY INDICATORS
The four quality indicators are based primarily on the measurement of software change
across evolving baselines of engineering data (such as design models and source code).
Measuring is useful, but it doesn't do any thinking for the decision makers. It only provides data to help them ask the right questions, understand the context, and make objective decisions.
The basic characteristics of a good metric are as follows:
1. It is considered meaningful by the customer, manager, and performer. Customers come to software engineering providers because the providers are more expert than they are at developing and managing software. Customers will accept metrics that are demonstrated to be meaningful to the developer.
2. It demonstrates a quantifiable correlation between process perturbations and business performance.
3. It is objective and unambiguously defined. Objectivity should translate into some form of numeric representation (such as numbers, percentages, or ratios) as opposed to textual representations (such as excellent, good, fair, poor). Ambiguity is minimized through well-understood units of measurement (such as staff-month, SLOC, change, function point, class, scenario, requirement), which are surprisingly hard to define precisely in the software engineering world.
4. It displays trends. This is an important characteristic: the change in a metric's value over time and across iterations is more informative than its absolute value. It is very rare that a given metric drives the appropriate action directly.
5. It is a natural by-product of the process: The metric does not introduce new artifacts
or overhead activities; it is derived directly from the mainstream engineering and
management workflows.
There are many opportunities to automate the project control activities of a software project.
For managing against a plan, a software project control panel (SPCP) that maintains an on-
line version of the status of evolving artifacts provides a key advantage.
To implement a complete SPCP, it is necessary to define and develop the following:
▪ Metrics primitives: indicators, trends, comparisons, and progressions.
▪ A graphical user interface: GUI support for a software project manager role and flexibility
to support other roles
▪ Metrics collection agents: data extraction from the environment tools that maintain the engineering notations for the various artifact sets.
▪ Metrics data management server: data management support for populating the
metric displays of the GUI and storing the data extracted by the agents.
▪ Metrics definitions: actual metrics presentations for requirements progress (extracted from
requirements set
artifacts), design progress (extracted from design set artifacts), implementation progress
(extracted from implementation set artifacts), assessment progress (extracted from
deployment set artifacts), and other progress dimensions (extracted from manual sources,
financial management systems, management artifacts, etc.)
▪ Actors: typically, the monitor and the administrator
Specific monitors (called roles) include software project managers, software development
team leads, software architects, and customers.
▪ Monitor: defines panel layouts from existing mechanisms, graphical objects, and
linkages to project data; queries data to be displayed at different levels of abstraction
▪ Administrator: installs the system; defines new mechanisms, graphical objects, and linkages; handles archiving functions; defines composition and decomposition structures for displaying multiple levels of abstraction.
In this case, the software project manager role has defined a top-level display with four
graphical objects.
1. Project activity status: the graphical object in the upper left provides an overview of the status of the top-level WBS elements. The seven elements could be coded red, yellow, and green to reflect the current earned value status (in the figure they are coded with white and shades of gray). For example, green would represent ahead of plan, yellow would indicate within 10% of plan, and red would identify elements that have a greater than 10% cost or schedule variance; this rule is sketched in code after this list. This graphical object provides several examples of indicators: tertiary colors, the actual percentage, and the current first derivative (an up arrow means getting better, a down arrow means getting worse).
2. Technical artifact status: the graphical object in the upper right provides an overview of
the status of the evolving technical artifacts. The Req light would display an assessment
of the current state of the use case models and requirements specifications. The Des light
would do the same for the design models, the Imp light for the source code baseline and
the Dep light for the test program.
3. Milestone progress: the graphical object in the lower left provides a progress assessment of the achievement of milestones against the plan and provides indicators of the current values.
4. Action item progress: the graphical object in the lower right provides a different perspective on progress, showing the current number of open and closed issues.
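The color-coding rule from item 1 can be sketched as a small function; the thresholds follow the text, while the function and variable names are assumptions for illustration:

# Illustrative sketch of the earned-value color coding described in item 1 above.
def activity_status_color(variance_percent: float) -> str:
    """Map a cost/schedule variance (as % of plan) to a panel color.

    Negative or zero variance means on or ahead of plan; positive means behind or over plan.
    """
    if variance_percent <= 0:
        return "green"    # ahead of (or on) plan
    elif variance_percent <= 10:
        return "yellow"   # within 10% of plan
    else:
        return "red"      # more than 10% cost or schedule variance

print(activity_status_color(-3))   # green
print(activity_status_color(7))    # yellow
print(activity_status_color(15))   # red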
The following top-level use case, which describes the basic operational concept of an SPCP,
corresponds to a monitor interacting with the control panel:
▪ Start the SPCP. The SPCP starts and shows the most current information that was saved
when the user last used
the SPCP.
▪ Select a panel preference. The user selects from a list of previously defined default panel preferences. The SPCP displays the selected preference.
▪ Select a value or graph metric. The user selects whether the metric should be displayed
for a given point in time or in a graph, as a trend. The default for trends is monthly.
▪ Select to superimpose controls. The user points to a graphical object and requests that
the control values for that metric and point in time be displayed.
▪ Drill down to trend. The user points to a graphical object displaying a point in time
and drills down to view the trend for the metric.
▪ Drill down to point in time. The user points to a graphical object displaying a trend
and drills down to view the values for the metric.
▪ Drill down to lower levels of information. The user points to a graphical object displaying
a point in time and
drills down to view the next level of information.
▪ Drill down to lower level of indicators. The user points to a graphical object
displaying an indicator anddrills down to view the breakdown of the next level of
indicators.
PROCESS DISCRIMINANTS
In tailoring the management process to a specific domain or project, there are two dimensions of discriminating factors: technical complexity and management complexity. The figure illustrates these two dimensions of process variability and shows some example project applications. The formality of reviews, the quality control of artifacts, the priorities of concerns, and numerous other process instantiation parameters are governed by the point a project occupies in these two dimensions. The figure summarizes the different priorities along the two dimensions.
Scale
▪ There are many ways to measure scale, including number of source lines of code,
number of function points, number of use cases, and number of dollars. From a process
tailoring perspective, the primary measure of scale is the size of the team. As the
headcount increases, the importance of consistent interpersonal communications
becomes paramount. Otherwise, the diseconomies of scale can have a serious impact on
achievement of the project objectives.
▪ A team of 1 (trivial), a team of 5 (small), a team of 25 (moderate), a team of 125 (large), a team of 625 (huge), and so on. As team size grows, a new level of personnel management is introduced at roughly each factor of 5 (see the sketch after this list). This model can be used to describe some of the process differences among projects of different sizes.
▪ Trivial-sized projects require almost no management overhead (planning, communication, coordination, progress assessment, review, administration).
▪ Small projects (5 people) require very little management overhead, but team leadership toward a common objective is crucial. There is some need to communicate the intermediate artifacts among team members.
▪ Moderate-sized projects (25 people) require moderate management overhead, including a dedicated software project manager to synchronize team workflows and balance resources.
▪ Large projects (125 people) require substantial management overhead, including a dedicated software project manager and several subproject managers to synchronize project-level and subproject-level workflows and to balance resources. Project performance is dependent on average people, for two reasons:
a) There are numerous mundane jobs in any large project, especially in the overhead workflows.
b) The probability of recruiting, maintaining, and retaining a large number of exceptional people is small.
▪ Huge projects (625 people) require substantial management overhead, including multiple software project managers and many subproject managers to synchronize project-level and subproject-level workflows and to balance resources.
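As a rough sketch of the factor-of-5 rule of thumb above (purely illustrative; the logarithmic formula is an assumption, not a model defined in these notes):

# Rough sketch of "a new management layer at roughly each factor of 5."
import math

def management_layers(team_size: int) -> int:
    """Approximate number of personnel-management levels for a given head count."""
    if team_size <= 1:
        return 0
    return math.ceil(math.log(team_size, 5))

for size in (1, 5, 25, 125, 625):
    print(size, "people ->", management_layers(size), "management level(s)")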
The degree of rigor, formality, and change freedom inherent in a specific project's “contract” (vision document, business case, and development plan) will have a substantial impact on the implementation of the project's process. For very loose contracts, such as building a commercial product within a business unit of a software company (a Microsoft application or a Rational Software Corporation development tool, for example), management complexity is minimal. In these sorts of development processes, feature set, time to market, budget, and quality can all be freely traded off and changed with very little overhead.
Process Maturity
The process maturity level of the development organization, as defined by the Software Engineering Institute's Capability Maturity Model (CMM), is another key driver of management complexity. Managing a mature process (level 3 or higher) is far simpler than managing an immature process (level 1 or 2). Organizations with a mature process typically have a high level of precedent experience in developing software and a high level of existing process collateral that enables predictable planning and execution of the process. Tailoring a mature organization's process for a specific project is generally a straightforward task.
Architectural Risk
The degree of technical feasibility demonstrated before commitment to full-scale production
is an important dimension of defining a specific project's process. There are many sources of architectural risk. Some of the most important and recurring sources are system performance (resource utilization, response time, throughput, accuracy), robustness to change (addition of new features, incorporation of new technology, adaptation to dynamic operational conditions), and system reliability (predictable behavior, fault tolerance). The degree to which these risks can be eliminated before construction begins can have dramatic ramifications in the process tailoring.
Domain Experience
The development organization's domain experience governs its ability to converge on an
acceptable architecture in a minimum number of iterations. An organization that has built five
generations of radar control switches may be able to converge on adequate baseline
architecture for a new radar application in two or three prototype release iterations. A skilled
software organization building its first radar application may require four or five prototype
releases before converging on an adequate baseline.
EXAMPLE: SMALL-SCALE PROJECT VERSUS LARGE-SCALE PROJECT
▪ An analysis of the differences between the phases, workflows, and artifacts of two projects on opposite ends of the management complexity spectrum shows how different two software project processes can be. Table 14-7 illustrates the differences in schedule distribution for large and small projects across the life-cycle phases. A small commercial project (for example, a 50,000 source-line Visual Basic Windows application built by a team of five) may require only 1 month of inception, 2 months of elaboration, 5 months of construction, and 2 months of transition. A large, complex project (for example, a 300,000 source-line embedded avionics program built by a team of 40) could require 8 months of inception, 14 months of elaboration, 20 months of construction, and 8 months of transition. Comparing the ratios of the life cycle spent in each phase highlights the obvious differences.
▪ One key aspect of the differences between the two projects is the leverage of the various process components in the success or failure of the project. This reflects the importance of staffing and the level of associated risk management. The following list elaborates some of the key differences in discriminators of success.
▪ Deployment plays a far greater role for a small commercial product because there is a broad user base of diverse individuals and environments.
Continuous Integration
In the iterative development process, the overall architecture of the project is created first, and then each integration step is evaluated to identify and eliminate design errors. This approach eliminates problems such as downstream integration, late patches, and shoe-horned software fixes by performing sequential, continuous integration throughout the project rather than large-scale integration at project completion.
▪ In the modern project profile, the distribution of cost among the various workflows of a project is completely different from that of the traditional project profile, as shown below.
As shown in the table, modern projects spend only about 25% of their budget on integration and assessment activities, whereas traditional projects spend almost 40% of their total budget on these activities. This is because traditional projects involve inefficient large-scale integration and late identification of design issues.
▪ To obtain a useful perspective on risk management, the following 80:20 principles of software management should be applied over the project life cycle:
▪ 80% of the engineering is consumed by 20% of the requirements.
▪ Strive to understand the driving requirements completely before committing resources; resources selected on the basis of prediction alone may lead to severe problems.
▪ The cost-critical components should be elaborated first, which forces the project to focus on controlling cost.
▪ 80% of the software scrap and rework is caused by 20% of the changes.
▪ The change-critical components are elaborated first, so that the changes with the greatest impact occur when the project is mature.
▪ Performance-critical components are elaborated first, so that trade-offs among reliability, changeability, and cost can be resolved as early as possible.
▪ This process needs highly skilled customers, users, and monitors who have experience in both the application domain and software. Moreover, it requires an organization that is focused on producing a quality product and achieving customer satisfaction.
▪ The table below shows the tangible results of major milestones in a modern
process.
▪ From the above table, it can be observed that progress on the project is not possible unless all of the demonstration objectives are satisfied. This does not prevent the renegotiation of objectives when the demonstration results allow further trade-offs among the requirements, design, plans, and technology.
▪ Modern iterative processes that rely on demonstration results need all of their stakeholders to be well educated and to have good analytical ability, so that they can distinguish between superficially negative results and real, visible progress. For example, a design error discovered early can be treated as positive progress rather than as a major issue.
If the architecture is addressed first, there will be a good foundation for the roughly 20% of the significant stuff (requirements, components, use cases, risks, and errors) that is responsible for the overall success of the project. In other words, if the components involved in the architecture are well known, then the expenditure caused by scrap and rework will be comparatively low.
2. Develop an iterative life-cycle process that identifies the risks at an early stage.
An iterative process supports a dynamic planning framework that facilitates risk management and predictable performance. Moreover, if the risks are resolved earlier, predictability improves and scrap and rework expenses are reduced.
3. Adapt the design methods in order to emphasize component-based development.
The quantity of human-generated source code and custom development can be reduced by concentrating on components rather than on individual lines of code. The complexity of software is directly proportional to the number of artifacts it contains; that is, if the solution is smaller, the complexity associated with its management is lower.
Highly controlled baselines are needed to compensate for the changes made by the various teams that work concurrently on shared artifacts.
5. Improve change freedom with the help of automated tools that support round-trip engineering.
Round-trip engineering is the environment support that enables the automation and synchronization of engineering information across different formats. The engineering information usually consists of requirements specifications, source code, design models, test cases, and executable code. Automating this information allows the teams to focus more on engineering and less on the overhead involved.
Design artifacts that are modeled using a model-based notation such as UML are rich in graphics and text. These modeled artifacts facilitate the following tasks:
▪ Complexity control
▪ Objective fulfillment
10. Increments and generations must be based on evolving levels of detail.
Here, 'levels of detail' refers to the level of understanding of the requirements and architecture. The requirements, iteration content, implementations, and acceptance testing can be organized around cohesive usage scenarios.
The process framework applied must be suitable for a variety of applications. The process must make use of a common process spirit, automation, architectural patterns, and components so that it is economical and yields investment benefits.
NEXT GENERATION SOFTWARE ECONOMICS
▪ In comparison to current-generation software cost models, next-generation software cost models should estimate architecture engineering and application production separately. The cost associated with designing, building, testing, and maintaining the architecture is defined in terms of scale, quality, process, technology, and the team employed.
▪ After a stable architecture is obtained, the cost of production becomes an exponential function of the size of the product to be built.
▪ The architecture stage cost model should reflect a certain diseconomy of scale (exponent greater than 1.0), because it is based on research and development-oriented concerns, whereas the production stage cost model should reflect an economy of scale (exponent less than 1.0) typical of the production of commodities.
▪ Next-generation software cost models should be designed so that they can assess larger architectures with an economy of scale. Thus, the process exponent will be less than 1.0 at the time of production, because large systems have more automated process components and architectures that are more easily reusable.
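A minimal sketch of this two-stage idea, with invented coefficients and exponents chosen only to show diseconomy of scale (exponent greater than 1.0) for architecture engineering versus economy of scale (exponent less than 1.0) for production:

# Illustrative two-stage cost model sketch. Coefficients and exponents are
# invented to show the shape of the argument, not calibrated values.
def architecture_cost(size_kSLOC: float, effort_coeff: float = 3.0,
                      exponent: float = 1.2) -> float:
    """R&D-like architecture engineering: exponent > 1.0 (diseconomy of scale)."""
    return effort_coeff * size_kSLOC ** exponent

def production_cost(size_kSLOC: float, effort_coeff: float = 2.0,
                    exponent: float = 0.9) -> float:
    """Component-based production: exponent < 1.0 (economy of scale)."""
    return effort_coeff * size_kSLOC ** exponent

for size in (100, 300, 1000):   # kSLOC
    total = architecture_cost(size) + production_cost(size)
    print(f"{size:>5} kSLOC -> total effort units: {total:,.0f}")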
Phases standardize many of the management activities, thereby requiring a lower percentage of effort for overhead activities as scale increases.
Modern software economics
1) Finding and fixing a software problem after delivery costs 100 times more than finding and fixing it during the early design phases.
2) You can compress software development schedules 25% of nominal, but no more.
4) Software development and maintenance costs are primarily a function of the number of source lines of code.
5) Variations among people account for the biggest differences in software productivity.
6) The overall ratio of software to hardware costs is still growing. In 1955 it was 15:85; in 1985, 85:15.
8) Software systems and products typically cost 3 times as much per SLOC as individual software programs. Software-system products (i.e., systems of systems) cost 9 times as much.
Several indicators can be observed in order to distinguish projects that have made a genuine cultural transition from projects that only pretend to. The following are some rough indicators.
● The lower-level managers and the middle-level managers should participate in project development.
An organization with 25 or fewer employees does not need pure managers; the responsibilities of its managers are similar to those of a project manager. Pure managers are needed when personnel resources exceed 25. Such managers should first understand the status of the project, then develop the plans and estimate the results; the manager should participate in developing the plans. This transition most affects software project managers.
Traditional processes consume enormous amounts of paper in order to generate the documents relevant to the desired project; even the significant milestones of a project are expressed via documents. Thus, traditional processes spend most of their crucial time on document preparation instead of on software development activities.
Design errors are exposed by carrying out demonstrations in the early stages of the life cycle. Stakeholders should not overreact to these design errors, because overemphasizing them will discourage the development organization from producing ambitious future iterations. This does not mean that stakeholders should ignore these errors; in fact, they must follow all the significant steps needed to resolve them, because such errors can sometimes lead to a serious downfall of the project.
This transition unmasks all the engineering and process issues, so it is often resisted by the management team but widely accepted by users, customers, and the engineering team.
● The performance of the project can be determined earlier in the life cycle.
The success or failure of any project depends on the planning and architectural phases of the life cycle, so these phases must employ highly skilled professionals. The remaining phases, however, can work well with an average team.
The development organization must ensure that customers and users do not expect good or reliable deliveries at the initial stages. This can be done by demonstrating tangible benefits in successive increments. Demonstration is similar to documentation but involves measuring changes, fixes, and upgrades against the objectives, so as to highlight process quality and the future environment.
● Artifacts tend to be insignificant at the early stages but prove to be most significant in the later stages.
The details of the artifacts should not be emphasized until a stable and useful baseline is obtained. This transition is accepted by the development team, while conventional contract monitors tend to resist it.
The requirements and designs of any successful project evolve through continuous negotiation and trade-offs. The difference between the real and the apparent issues of a successful project can easily be determined. This transition may affect any team of stakeholders.
DENOUEMENT
The term denouement typically refers to the final resolution or outcome of a story, often used
in the context of literature or drama. In the context of modern process transitions in software
project management, the denouement would symbolize the conclusion of the project, where
the various processes and activities converge into a final resolution. It is the point where the
project has reached its completion, and all deliverables are ready for handover, maintenance, or
closure.
In software project management, process transitions refer to the movement from one phase of
the project life cycle to another, often involving significant shifts in approach, methodology,
and tools. Modern transitions involve the adoption of agile, DevOps, and other contemporary
software development methodologies that aim to improve efficiency, collaboration, and quality
in the project.
Key Elements of Modern Process Transitions in Software Project Management:
1. Transition from Waterfall to Agile/DevOps: Historically, many software projects
followed a Waterfall model, which had distinct, sequential phases. Over time, this
model has given way to Agile and DevOps practices that focus on iterative development,
continuous feedback, and collaboration. The transition to these modern approaches
requires significant changes in the mindset, processes, and tools used by project teams.
2. Automation of Processes: The introduction of automation tools for testing,
deployment, and integration has changed the way software projects are managed.
Continuous integration/continuous deployment (CI/CD) pipelines, automated testing,
and infrastructure as code (IaC) ensure smoother transitions between development,
testing, and deployment phases.
3. Collaboration and Communication: Modern process transitions emphasize cross-
functional teams, where developers, testers, product owners, and operations work
together throughout the development process. Tools like Jira, Trello, and Slack enhance
communication, making the transition between different phases more seamless.
4. User-Centric Development: The focus has shifted from developing software based on
rigid specifications to creating software that responds to user feedback. Agile
methodologies, particularly Scrum and Kanban, allow for quicker iterations, where
software is developed in short sprints with regular adjustments based on user feedback.
5. Quality Assurance: In the past, quality assurance (QA) was often a separate process at
the end of the development cycle. In modern transitions, QA is integrated continuously
throughout the process. This shift allows for earlier detection of issues and a more
continuous focus on quality.
6. Scalability and Flexibility: Modern processes are designed to be more scalable and
flexible. The use of cloud platforms, microservices architectures, and containerization
technologies like Docker allows teams to scale their solutions easily and adapt to
changing business needs.
7. Customer Feedback and Continuous Improvement: Modern process transitions also
emphasize continuous feedback loops, allowing teams to react to customer needs and
improve the software iteratively. Regular releases and sprint reviews give stakeholders
the opportunity to see progress and provide real-time feedback.
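To make items 2 and 5 concrete, the following is a minimal, hypothetical continuous-integration quality gate written in Python. It simply runs the automated test suite and blocks promotion of the build when any test fails; the choice of pytest as the test runner and gating on a zero exit code are assumptions for illustration, since real pipelines are usually expressed in a CI tool's own configuration format.

# Minimal CI quality gate (illustrative sketch, not a real pipeline definition).
# Assumption: the project's automated tests can be run with pytest.
import subprocess
import sys

def run_tests() -> int:
    # Run the test suite quietly; a non-zero exit code means at least one test failed.
    result = subprocess.run(["pytest", "-q"])
    return result.returncode

def main() -> None:
    if run_tests() != 0:
        # Failing tests block the transition from development to deployment,
        # which is the "QA integrated continuously" idea in item 5.
        print("Quality gate FAILED: fix the failing tests before merging.")
        sys.exit(1)
    print("Quality gate passed: the build may proceed to deployment.")

if __name__ == "__main__":
    main()

In practice, the same gate would also trigger static analysis and deployment steps automatically, so the transition between development, testing, and deployment phases needs no manual hand-off.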
The Denouement in Modern Software Project Management:
The denouement in this context would be the point where the project reaches a natural
conclusion after undergoing these transitions. It involves:
● Project Delivery and Closure: Once the final iteration is delivered, the project moves
toward its closure. This includes wrapping up documentation, finalizing the product for
production, and ensuring all stakeholders are satisfied with the outcome.
● Knowledge Transfer and Handover: In the denouement, knowledge transfer to the
operations team or maintenance team occurs, where they take over ongoing support or
maintenance tasks for the software. Any lessons learned during the project transition are
documented for future use.
● Post-Deployment Monitoring and Continuous Improvement: Even after the product
is released, the work doesn't end. Monitoring the software’s performance and gathering
feedback from users continues as part of an ongoing improvement process.
● Final Retrospectives: A final retrospective meeting or review often takes place after a
project ends. This allows teams to reflect on the entire transition process, what went
well, and what can be improved for future projects.
Case Study: The Command Center Processing and Display System-Replacement (CCPDS-R)
• CCPDS-R Case Study: The Command Center Processing and Display System-Replacement
(CCPDS-R) project was performed for the U.S. Air Force by TRW Space and Defense in
Redondo Beach, California.
• The entire project included systems engineering, hardware procurement, and software
development, with each of these three major activities consuming about one-third of the
total cost.
• The schedule spanned 1987 through 1994.
• The software effort included the development of three distinct software systems totaling
more than one million source lines of code.
• This case study focuses on the initial software development, called the Common
Subsystem, for which about 355,000 source lines were developed.
• The Common Subsystem effort also produced a reusable architecture, a mature process,
and an integrated environment for efficient development of the two software subsystems
of roughly similar size that followed.
• CCPDS-R was one of the pioneering projects that practiced many modern management
approaches.
• TRW was awarded the Space and Missile Warning Systems Award for excellence in
1991 for "continued, sustained performance in overall systems engineering and project
execution."
Characteristic       CCPDS-R
People
Schedule             75 months
Process/standards    DOD-STD-2167A; iterative development
Environment          Rational host and DEC host; DEC VMS targets
Contractor           TRW
Customer             USAF
The CCPDS-R contract called for the development of three subsystems:
1) Common Subsystem: the initial missile warning system and the focus of this case study;
it provided the reusable foundation (architecture, process, and environment) for the other
two subsystems.
2) Processing and Display Subsystem (PDS): provided missile warning displays for the
commanders in chief located worldwide.
3) STRATCOM subsystem
⮚ It provided both missile warning and force management capabilities at the command
center of the strategic command.
● The proposal was competed for by five major bidders, and two firm-fixed-price concept
definition (CD) phase contracts were awarded.
o The primary products of the CD phase were a system specification (essentially a vision
document) and an FSD phase proposal.
o The CD phase also included a system design review, technical interchange meetings
with the government stakeholders (customer and user), and several contract-deliverable
documents.
The exercise requirements included the following:
• Conduct a mock design review with the customer 23 days after receipt of the specification.
• Execute a demonstration of the proposed architecture; several thousand lines of reusable
components were also integrated into the demonstration.
– Three milestones were conducted and more than 30 action items were resolved.
The plan included two architecture iterations and all the associated milestones and artifacts.
The Common Subsystem software was organized into six computer software configuration
items (CSCIs).
• CSCIs are defined and described in DOD-STD-2167A [DOD, 1988]. The CSCIs were
identified as follows:
1. Network Architecture Services (NAS). This CSCI comprised the reusable components
for interprocess communication and the instrumentation of performance and state. This
CSCI was designed to be reused across all three CCPDS-R subsystems.
2. System Services (SSV). This CSCI comprised the software architecture skeleton, real-
time data distribution, global data types, and the computer system operator interface.
3. Display Coordination (DCO). This CSCI comprised user interface control and display management.
4. Test and Simulation (TAS). This CSCI comprised test scenario generation and test message injection.
5. Common Mission Processing (CMP). This CSCI comprised the missile warning
algorithms for radar, nuclear detonation, and satellite early warning messages.
6. Common Communications (CCO). This CSCI comprised the external interfaces with other systems.
⮚ Project Organization:
The CD phase team represented the essence of the architecture team, which is
responsible for an efficient engineering stage. This team had the following
responsibilities:
o Analyze and specify the project requirements
o Define and develop the top-level architecture
o Plan the FSD phase software development activities
o Configure the process and development environment
o Establish trust and win-win relationships among the stakeholders
Core metrics
Test Progress
● The test organization was responsible for build integration testing (BIT) and requirements
verification testing.
● Build integration testing proved to be less effective than expected for uncovering
problems.
● The build 2 BIT results reflected a highly integrated product state.
● Several progress metrics were used to plan and track the test program.
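As a rough illustration of such a progress metric, the sketch below reports the percentage of planned tests passed per build; the build names and counts are invented sample values, not CCPDS-R data.

# Illustrative test progress metric: planned tests passed per build.
# Build names and counts are invented sample values.
builds = [
    ("Build 0", 40, 120),   # (name, tests passed, tests planned)
    ("Build 1", 85, 120),
    ("Build 2", 112, 120),
]

for name, passed, planned in builds:
    progress = 100.0 * passed / planned
    print(f"{name}: {passed}/{planned} tests passed ({progress:.0f}% of plan)")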
Stability
The rate of change to configuration baselines was tracked as the cumulative number of
changes opened (breakage) versus the cumulative number closed (repair). Breakage rates
that diverged from repair rates resulted in management attention, reprioritization of
resources, and corrective actions taken to ensure that the test and integration schedules
stayed on track.
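The stability trend can be approximated by comparing cumulative breakage (changes opened) against cumulative repair (changes closed), as in the sketch below; the monthly counts and the backlog threshold are invented for illustration.

# Illustrative stability tracking: cumulative changes opened (breakage) vs. closed (repair).
from itertools import accumulate

opened_per_month = [5, 12, 20, 18, 15, 10]   # changes opened each month (sample data)
closed_per_month = [2, 8, 15, 20, 18, 14]    # changes closed each month (sample data)

cumulative_opened = list(accumulate(opened_per_month))
cumulative_closed = list(accumulate(closed_per_month))

for month, (opened, closed) in enumerate(zip(cumulative_opened, cumulative_closed), start=1):
    backlog = opened - closed
    flag = "  <-- breakage diverging from repair" if backlog > 10 else ""
    print(f"Month {month}: opened={opened}, closed={closed}, open backlog={backlog}{flag}")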
Modularity
● This metric identifies the total scrap (rework) generated by changes to the Common
Subsystem software baselines.
● Industry averages for software scrap run in the 40% to 60% range.
● The initial configuration management baseline was established around the time of PDR,
at month 14.
● About 1,600 discrete changes were processed against configuration baselines thereafter.
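A simple way to compute the scrap (modularity) metric from change records is sketched below. The per-change breakage figures are invented; the 355,000-SLOC baseline size is taken from the case study description above.

# Illustrative scrap (modularity) calculation:
# scrap % = (SLOC broken by baseline changes) / (total baseline SLOC) * 100.
changes = [
    {"id": "SCO-001", "sloc_broken": 1200},   # sample values only
    {"id": "SCO-002", "sloc_broken": 300},
    {"id": "SCO-003", "sloc_broken": 2500},
]
total_baseline_sloc = 355_000   # Common Subsystem size noted earlier

scrap_sloc = sum(change["sloc_broken"] for change in changes)
scrap_percent = 100.0 * scrap_sloc / total_baseline_sloc
print(f"Scrap: {scrap_sloc} SLOC ({scrap_percent:.2f}% of the baseline)")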
Adaptability
● The level of adaptability achieved by CCPDS-R was roughly four times better than that
of a typical project, in which rework costs over the development life cycle are usually
far higher.
● The average cost of change was tracked across the Common Subsystem schedule.
● Most of the early SCO (software change order) trends involved changes that affected
multiple people and multiple components.
● Later SCOs were usually localized to a single person and a single component.
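Adaptability can be approximated as the average rework effort per software change order in successive periods, falling over time as changes become more localized; the period boundaries, SCO counts, and hours below are invented sample values.

# Illustrative adaptability trend: average rework hours per SCO in each period.
periods = [
    ("months 14-24", 400, 9600),   # (period, SCOs closed, total rework hours) - sample values
    ("months 25-36", 700, 8400),
    ("months 37-48", 500, 3000),
]

for name, scos, rework_hours in periods:
    print(f"{name}: {rework_hours / scos:.1f} rework hours per change")
# A falling cost per change indicates improving adaptability (easier, more localized changes).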
Maturity
● CCPDS-R had a specific reliability requirement, for which the software had a
specific allocation.
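One common way to express software maturity against such a reliability allocation is the mean time between critical failures (MTBF) observed during test operation; the test hours, failure count, and allocated requirement below are invented for illustration.

# Illustrative maturity estimate: MTBF = cumulative test hours / critical failures observed.
cumulative_test_hours = 1800.0   # sample value
critical_failures = 3            # sample value
required_mtbf_hours = 500.0      # assumed reliability allocation (hypothetical)

mtbf = cumulative_test_hours / critical_failures if critical_failures else float("inf")
status = "meets" if mtbf >= required_mtbf_hours else "does not yet meet"
print(f"Observed MTBF: {mtbf:.0f} hours; {status} the allocated requirement of {required_mtbf_hours:.0f} hours")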
The overall cost breakdown for the CCPDS-R Common Subsystem was extracted from the
final WBS cost collection runs.
Future software best practices
• The Airlie Software Council was purposely structured to include highly successful managers of
large-scale software projects.
• The Council identified nine best practices, which map onto the process framework, the
management principles, and the top 10 principles.
4) Metric-based scheduling and management: related to model-based notations and objective
quality control.
6) Program-wide visibility of progress versus plan: open communication among project
team members.
7) Defect tracking against quality targets: related to the architecture-first approach and
objective quality control.