4.3 Software Metrics and Analytics
Software Measurement
Measures, Metrics, and Indicators
An effective software metric should exhibit the following attributes:
▪ Simple and computable. It should be relatively easy to learn how to derive the metric,
and its computation should not demand inordinate effort or time.
▪ Empirically and intuitively persuasive. The metric should satisfy the engineer's
intuitive notions about the product attribute under consideration.
▪ Consistent and objective. The metric should always yield results that are unambiguous.
▪ Consistent in its use of units and dimensions. The mathematical computation of the
metric should use measures that do not lead to unusual combinations of units.
▪ Programming language independent. Metrics should be based on the analysis model,
the design model, or the structure of the program itself.
▪ An effective mechanism for quality feedback. The metric should provide a software
engineer with information that can lead to a higher-quality end product.
FURPS Quality Attributes:
• Functionality is assessed by evaluating the feature set and capabilities of the program,
the generality of the functions that are delivered, and the security of the overall system.
• Usability is assessed by considering human factors, consistency, and documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy
of output results, the mean-time-to-failure (MTTF), the ability to recover from failure,
and the predictability of the program.
• Performance is measured by considering processing speed, response time, resource
consumption, throughput, and efficiency.
• Supportability combines extensibility (the ability to extend the program), adaptability,
and serviceability; together, these three attributes represent the more common term
maintainability.
Software Analytics
Buse and Zimmermann suggest that analytics can help developers make decisions regarding:
1. Targeted testing. To help focus regression testing and integration testing resources
2. Targeted refactoring. To help make strategic decisions on how to avoid large technical
debt costs.
3. Release planning. To help ensure that market needs as well as technical features in a
software product are taken into account.
4. Understanding customers. To help developers get actionable information on product
use by customers in the field during product engineering.
5. Judging stability. To help managers and developers monitor the state of the evolving
prototype and future maintenance needs.
Product Metrics
Conventional Software:
Characteristics that can be used to assess the quality of the requirements model and the
corresponding requirements specification:
specificity (lack of ambiguity), completeness, correctness, understandability, verifiability,
internal and external consistency, achievability, traceability, modifiability, precision, and
reusability.
Although many of these characteristics appear to be qualitative in nature, each can be
represented using one or more metrics.
For example, we assume that there are nr requirements in a specification, such that
nr = nf + nnf
where nf is the number of functional requirements and nnf is the number of nonfunctional
requirements. To determine the specificity (lack of ambiguity) of requirements, a metric
based on the consistency of the reviewers' interpretations can be used:
Q1 = nui / nr
where nui is the number of requirements for which all reviewers had identical
interpretations. The closer the value of Q1 is to 1, the lower the ambiguity of the
specification.
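A minimal sketch of this specificity computation (the function name and the example
values are illustrative, not from the source):

```python
def specificity(n_ui: int, n_f: int, n_nf: int) -> float:
    """Compute the specificity metric Q1 = n_ui / n_r.

    n_ui : requirements for which all reviewers had identical interpretations
    n_f  : number of functional requirements
    n_nf : number of nonfunctional requirements
    """
    n_r = n_f + n_nf      # total requirements in the specification
    return n_ui / n_r     # closer to 1.0 means less ambiguity

# Example: 48 of 50 requirements were interpreted identically by all reviewers.
print(specificity(n_ui=48, n_f=40, n_nf=10))  # 0.96
```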
Mobile Software:
• The objective of all mobile projects is to deliver a combination of content and
functionality to the end user.
• Measures and metrics used for traditional software engineering projects are difficult to
translate directly to MobileApps.
• It is possible, however, to develop measures that can be determined during the
requirements gathering activities and that can serve as the basis for creating MobileApp
metrics.
Number of static screen displays. These pages represent low relative complexity and
generally require less effort to construct than dynamic pages. This measure provides an
indication of the overall size of the application and the effort required to develop it.
Number of dynamic screen displays. These pages represent higher relative complexity and
require more effort to construct than static pages. This measure provides an indication of the
overall size of the application and the effort required to develop it.
Number of persistent data objects. As the number of persistent data objects (e.g., a database
or data file) grows, the complexity of the MobileApp also grows and the effort to implement it
increases proportionally.
Number of external systems interfaced. As the requirement for interfacing grows, system
complexity and development effort also increase.
Number of static content objects. These objects represent low relative complexity and
generally require less effort to construct than dynamic content objects.
Number of dynamic content objects. These objects represent higher relative complexity and
require more effort to construct than static content objects.
Number of executable functions. As the number of executable functions (e.g., a script or
applet) increases, modeling and construction effort also increase.
For example, with these measures you can define a metric that reflects the degree of end-user
customization required for the MobileApp and correlate it with the effort expended on the
project and/or the errors uncovered as reviews and testing are conducted.
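As a hedged illustration only (this particular ratio, and the name customization_index,
are assumptions and are not defined in the source), one might express the degree of
customization as the share of dynamic elements among all screen displays and content
objects:

```python
def customization_index(static_displays: int, dynamic_displays: int,
                        static_objects: int, dynamic_objects: int) -> float:
    """Fraction of dynamic (customizable) elements in the MobileApp.

    A value near 1.0 suggests heavy end-user customization and, by the
    argument above, higher development effort.
    """
    dynamic = dynamic_displays + dynamic_objects
    total = static_displays + dynamic_displays + static_objects + dynamic_objects
    return dynamic / total if total else 0.0

print(customization_index(12, 8, 30, 10))  # 0.3
```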
Metrics for architectural design focus on three types of complexity:
➢ Structural complexity
➢ Data complexity
➢ System complexity
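The measures proposed by Card and Glass are one concrete way to quantify these three
complexities for a module i: structural complexity S(i) = f_out(i)^2, data complexity
D(i) = v(i) / (f_out(i) + 1), and system complexity C(i) = S(i) + D(i). A minimal
sketch (the module values below are illustrative):

```python
def structural_complexity(fan_out: int) -> int:
    # S(i) = f_out(i)^2, where f_out is the number of modules
    # immediately subordinate to module i
    return fan_out ** 2

def data_complexity(variables: int, fan_out: int) -> float:
    # D(i) = v(i) / (f_out(i) + 1), where v is the number of input
    # and output variables passed to and from module i
    return variables / (fan_out + 1)

def system_complexity(variables: int, fan_out: int) -> float:
    # C(i) = S(i) + D(i)
    return structural_complexity(fan_out) + data_complexity(variables, fan_out)

# Example: a module with fan-out 3 and 8 input/output variables.
print(system_complexity(variables=8, fan_out=3))  # 9 + 2.0 = 11.0
```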
Design Metrics for Object-Oriented Software
In order to develop metrics for object-oriented (OO) design, nine distinct and measurable
characteristics of OO design are considered: size, complexity, coupling, sufficiency,
completeness, cohesion, primitiveness, similarity, and volatility.
Metrics for Testing
The majority of metrics used for testing focus on the testing process rather than on the
technical characteristics of the tests themselves. In general, testers use metrics for
analysis, design, and coding to guide them in the design and execution of test cases.
The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system. Using historical data, the FP metric can then be used to
(1) estimate the cost or effort required to design, code, and test the software;
(2) predict the number of errors that will be encountered during testing; and
(3) forecast the number of components and/or the number of projected source lines in the
implemented system.
Various characteristics, such as errors discovered, number of test cases needed, and testing
effort, can be estimated by computing the number of function points in the current project
and comparing it with data from previous projects.
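A minimal sketch of the standard function point computation, FP = count_total ×
[0.65 + 0.01 × sum(Fi)]; the weights below are the conventional average-complexity
weights, and the counts and Fi ratings are illustrative:

```python
# Average complexity weights for the five information domain values.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def function_points(counts: dict, value_adjustment_factors: list) -> float:
    """FP = count_total x [0.65 + 0.01 x sum(Fi)].

    counts : number of each information domain value in the system
    value_adjustment_factors : fourteen Fi ratings, each rated
        0 (no influence) to 5 (essential)
    """
    count_total = sum(WEIGHTS[k] * n for k, n in counts.items())
    return count_total * (0.65 + 0.01 * sum(value_adjustment_factors))

counts = {"external_inputs": 20, "external_outputs": 12, "external_inquiries": 16,
          "internal_logical_files": 4, "external_interface_files": 2}
print(function_points(counts, [3] * 14))  # 258 x 1.07 = 276.06
```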
Metrics used for architectural design can indicate how integration testing should be carried
out. In addition, cyclomatic complexity can be used effectively in basis-path testing to
determine the number of test cases needed.
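A small sketch of the cyclomatic complexity computation, V(G) = E − N + 2 for a connected
control-flow graph (the example graph is illustrative):

```python
def cyclomatic_complexity(edges: int, nodes: int) -> int:
    """V(G) = E - N + 2 for a connected control-flow graph.

    V(G) gives an upper bound on the number of independent paths and
    therefore on the number of basis-path test cases needed.
    """
    return edges - nodes + 2

# Example: a flow graph with 11 edges and 9 nodes needs 4 basis-path tests.
print(cyclomatic_complexity(edges=11, nodes=9))  # 4
```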
By using program volume (V) and program level (PL), Halstead effort (e) can be calculated
by the following equations:
PL = 1 / [(n1 / 2) × (N2 / n2)]
e = V / PL
where n1 is the number of distinct operators, n2 is the number of distinct operands, and
N2 is the total number of operand occurrences. Program volume is V = N log2(n1 + n2),
where the program length N = N1 + N2 is the total number of operator and operand
occurrences.
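A minimal sketch of this computation (the function name and the example counts are
illustrative):

```python
import math

def halstead_effort(n1: int, n2: int, N1: int, N2: int) -> float:
    """Halstead effort e = V / PL.

    n1, n2 : number of distinct operators and distinct operands
    N1, N2 : total operator and operand occurrences
    """
    N = N1 + N2                      # program length
    V = N * math.log2(n1 + n2)       # program volume
    PL = 1 / ((n1 / 2) * (N2 / n2))  # program level
    return V / PL

# Example: 10 distinct operators, 8 distinct operands, 40 and 30 occurrences.
print(round(halstead_effort(10, 8, 40, 30)))  # ~5473
```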
For developing metrics for object-oriented (OO) testing, design metrics that have a direct
impact on the testability of an object-oriented system are considered, with particular
attention to encapsulation and inheritance. Metrics proposed for OO testing include lack of
cohesion in methods (LCOM), percent public and protected (PAP), public access to data
members (PAD), number of root classes (NOR), fan-in (FIN), and number of children (NOC)
and depth of the inheritance tree (DIT).
Metrics for Maintenance
◼ IEEE suggests a software maturity index (SMI) that provides an indication of the
stability of a software product (based on changes that occur for each release of the
product).
◼ The following information is determined:
• MT = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added
• Fd = the number of modules from the preceding release that were deleted in the
current release
◼ The software maturity index is computed in the following manner:
SMI = [MT − (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize.
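A minimal sketch of the SMI computation (the function name and release counts are
illustrative):

```python
def software_maturity_index(mt: int, fc: int, fa: int, fd: int) -> float:
    """SMI = [MT - (Fa + Fc + Fd)] / MT.

    mt : modules in the current release
    fc : modules changed in the current release
    fa : modules added in the current release
    fd : modules deleted since the preceding release
    """
    return (mt - (fa + fc + fd)) / mt

# Example: 940 modules, of which 90 changed, 40 added, and 12 deleted.
print(round(software_maturity_index(940, 90, 40, 12), 2))  # 0.85
```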
Process Metrics
Process metrics are collected across all projects and over long periods of time. Their intent is
to provide a set of process indicators that lead to long-term software process improvement.
Measures that are collected by a project team and converted into metrics for use during a project
can also be transmitted to those with responsibility for software process improvement. For this
reason, many of the same metrics are used in both the process and project domains.
The process sits at the center of a triangle connecting three factors that have a profound
influence on software quality and organizational performance.
The skill and motivation of people has been shown to be the single most influential factor in
quality and performance.
The complexity of the product can have a substantial impact on quality and team performance.
The technology (i.e., the software engineering methods and tools) that populates the process
also has an impact.
In addition, the process triangle exists within a circle of environmental conditions that include
the development environment (e.g., integrated software tools), business conditions (e.g.,
deadlines, business rules), and customer characteristics (e.g., ease of communication and
collaboration).
We can only measure the efficacy of a software process indirectly. That is, we derive a set of
metrics based on the outcomes of the process.
Outcomes include measures of errors uncovered before release of the software, defects
delivered to and reported by end users, work products delivered (productivity), human effort
expended, calendar time expended, schedule conformance, and other measures.
We can also derive process metrics by measuring the characteristics of specific software
engineering tasks. For example, we might measure the effort and time spent performing the
umbrella activities and the generic software engineering activities.
Software process metrics can provide significant benefit as an organization works to improve
its overall level of process maturity. However, like all metrics, these can be misused, creating
more problems than they solve.
Grady suggests a “software metrics etiquette” that is appropriate for both managers and
practitioners as they institute a process metrics program:
• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don’t use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and metrics that will be used to achieve
them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered “negative.” These data are
merely an indicator for process improvement.
• Don’t obsess on a single metric to the exclusion of other important metrics.
As an organization becomes more comfortable with the collection and use of process metrics,
the derivation of simple indicators gives way to a more rigorous approach called statistical
software process improvement (SSPI).
SSPI uses software failure analysis to collect information about all errors and defects
encountered as an application, system, or product is developed and used.
Metrics for Quality
Defect Removal Efficiency
A quality metric that provides benefit at both the project and process levels is defect removal
efficiency (DRE). DRE is a measure of the filtering ability of quality assurance and control
activities as they are applied throughout all process framework activities.
When considered for a project as a whole, DRE is defined in the following manner:
DRE = E / (E + D)
where E is the number of errors found before delivery of the software to the end user and D is
the number of defects found after delivery.
Those errors that are not found during the review of the analysis model are passed on to the
design task.
When used in this context, we redefine DRE as
DREi = Ei / (Ei + Ei+1)
where Ei is the number of errors found during software engineering activity i and Ei+1 is the
number of errors found during activity i+1 that are traceable to errors not discovered in
activity i.
A quality objective for a software team is to achieve a DRE that approaches 1. That is, errors
should be filtered out before they are passed on to the next activity.
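A minimal sketch of both forms of the DRE computation (the function names and example
counts are illustrative):

```python
def dre_project(errors_before: int, defects_after: int) -> float:
    """Project-level DRE = E / (E + D)."""
    return errors_before / (errors_before + defects_after)

def dre_activity(e_i: int, e_next: int) -> float:
    """Activity-level DREi = Ei / (Ei + Ei+1).

    e_i    : errors found during framework activity i
    e_next : errors found in activity i+1 traceable to errors missed in i
    """
    return e_i / (e_i + e_next)

# Example: 95 errors found before release, 5 defects reported after delivery.
print(dre_project(95, 5))    # 0.95
# Example: analysis review found 18 errors; design uncovered 2 more from analysis.
print(dre_activity(18, 2))   # 0.9
```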