
Software Metrics and Analytics

Software Measurement
Measures, Metrics, and Indicators:

The IEEE defines a metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute."
• When a single data point has been collected (e.g., the number of errors uncovered within
a single software component), a measure has been established.
• Measurement occurs as the result of the collection of one or more data points (e.g., a
number of component reviews and unit tests are investigated to collect measures of the
number of errors for each).
• A software metric relates the individual measures in some way (e.g., the average
number of errors found per review or the average number of errors found per unit test).
A software engineer collects measures and develops metrics so that indicators can be obtained.
• An indicator is a metric or combination of metrics that provides insight into the software
process, a software project, or the product itself. An indicator provides insight that
enables the project manager or software engineers to adjust the process, the project, or
the product to make things better.

Attributes of Effective Software Metrics:

▪ Simple and computable. It should be relatively easy to learn how to derive the metric,
and its computation should not demand inordinate effort or time
▪ Empirically and intuitively persuasive. The metric should satisfy the engineer’s
intuitive notions about the product attribute under consideration
▪ Consistent and objective. The metric should always yield results that are unambiguous.
▪ Consistent in its use of units and dimensions. The mathematical computation of the
metric should use measures that do not lead to unusual combinations of units.
▪ Programming language independent. Metrics should be based on the analysis model,
the design model, or the structure of the program itself.
▪ An effective mechanism for quality feedback. A metric should provide a software engineer
with information that can lead to a higher-quality end product.
FURPS Quality Attributes:

FURPS—functionality, usability, reliability, performance, and supportability.

• Functionality is assessed by evaluating the feature set and capabilities of the program,
the generality of the functions that are delivered, and the security of the overall system.
• Usability is assessed by considering human factors, consistency, and documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy
of output results, the mean-time-to-failure (MTTF), the ability to recover from failure,
and the predictability of the program.
• Performance is measured by considering processing speed, response time, resource
consumption, throughput, and efficiency.
• Supportability combines extensibility (the ability to extend the program), adaptability, and
serviceability; together, these three attributes represent the more common term maintainability.

Software Analytics

• Software analytics is the systematic computational analysis of software engineering
data or statistics to provide managers and software engineers with meaningful insights
and empower their teams to make better decisions.
• It is important that the insights provide timely, actionable advice to developers.
• Analytics can help developers predict the number of defects to expect, where to test for
them, and how much time it will take to fix them.
• This allows managers and developers to create incremental schedules that use these
predictions to determine expected completion times.

Buse and Zimmermann suggest that analytics can help developers make decisions regarding:

1. Targeted testing. To help focus regression testing and integration testing resources.
2. Targeted refactoring. To help make strategic decisions on how to avoid large technical
debt costs.
3. Release planning. To help ensure that market needs as well as technical features in the
software product are taken into account.
4. Understanding customers. To help developers get actionable information on product
use by customers in the field during product engineering.
5. Judging stability. To help managers and developers monitor the state of the evolving
prototype and future maintenance needs.

6. Targeting inspection. To help teams determine the value of individual inspection
activities, their frequency, and their scope.

Product Metrics

• Product metrics for computer software help to assess quality.

Metrics for the Requirements Model


• Technical work in software engineering begins with the creation of the requirements model.
• It is at this stage that requirements are derived and a foundation for design is established.
Therefore, product metrics that provide insight into the quality of the analysis model are
desirable.

Conventional Software:

Characteristics that can be used to assess the quality of the requirements model and the
corresponding requirements specification:
specificity (lack of ambiguity), completeness, correctness, understandability, verifiability,
internal and external consistency, achievability, traceability, modifiability, precision, and
reusability.
Although many of these characteristics appear to be qualitative in nature, each can be
represented using one or more metrics.
For example, we assume that there are nr requirements in a specification, such that

nr = nf + nnf

where nf is the number of functional requirements and nnf is the number of nonfunctional
(e.g., performance) requirements.
To determine the specificity (lack of ambiguity) of requirements, a metric based on the
consistency of the reviewers' interpretation of each requirement can be used:

Q1 = nui/nr

where nui is the number of requirements for which all reviewers had an identical
interpretation. The closer the value of Q1 is to 1, the lower the ambiguity of the specification.
Mobile Software:
• The objective of all mobile projects is to deliver a combination of content and
functionality to the end user.
• Measures and metrics used for traditional software engineering projects are difficult to
translate directly to MobileApps.
• It is possible, however, to develop measures during requirements gathering activities
that can serve as the basis for creating MobileApp metrics.

Among the measures that can be collected are the following:

Number of static screen displays. These displays represent low relative complexity and
generally require less effort to construct than dynamic displays. This measure provides an
indication of the overall size of the application and the effort required to develop it.
Number of dynamic screen displays. These displays represent higher relative complexity and
require more effort to construct than static displays. This measure likewise provides an
indication of the overall size of the application and the effort required to develop it.
Number of persistent data objects. As the number of persistent data objects (e.g., a database
or data file) grows, the complexity of the MobileApp also grows and the effort to implement it
increases proportionally.
Number of external systems interfaced. As the requirement for interfacing grows, system
complexity and development effort also increase.
Number of static content objects. These objects represent low relative complexity and
generally require less effort to construct than dynamic content objects.
Number of dynamic content objects. These objects represent higher relative complexity and
require more effort to construct than static content objects.
Number of executable functions. As the number of executable functions (e.g., a script or
applet) increases, modeling and construction effort also increase.

For example, with these measures, you can define a metric that reflects the degree of end-user
customization that is required for the MobileApp and correlate it to the effort expended on the
project and/or the errors uncovered as reviews and testing are conducted.

To accomplish this, you can define a customization index of the form

C = Ndyn/(Ndyn + Nstat)

where Ndyn is the count of dynamic objects (dynamic screen displays, dynamic content
objects, and executable functions) and Nstat is the count of static objects (static screen
displays and static content objects). The value of C moves toward 1 as the degree of
end-user customization grows.
Design Metrics for Conventional Software
Architectural design metrics focus on characteristics of the program architecture with an
emphasis on the architectural structure and the effectiveness of modules or components within
the architecture.
Metrics can provide insight into structural data and system complexity associated with
architectural design.

Card and Glass define three software design complexity measures:

➢ Structural complexity
➢ Data complexity
➢ System complexity.
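For a module i, structural complexity is defined as S(i) = fout(i)^2, where fout(i) is the
fan-out of module i; data complexity is D(i) = v(i)/[fout(i) + 1], where v(i) is the number of
input and output variables passed to and from module i; and system complexity is
C(i) = S(i) + D(i). A minimal Python sketch of these calculations follows; the module counts
used in the example are hypothetical.

def structural_complexity(fan_out: int) -> float:
    # S(i) = fout(i)^2
    return fan_out ** 2

def data_complexity(io_vars: int, fan_out: int) -> float:
    # D(i) = v(i) / [fout(i) + 1]
    return io_vars / (fan_out + 1)

def system_complexity(io_vars: int, fan_out: int) -> float:
    # C(i) = S(i) + D(i)
    return structural_complexity(fan_out) + data_complexity(io_vars, fan_out)

# Hypothetical module with fan-out 3 and 8 input/output variables:
print(system_complexity(io_vars=8, fan_out=3))  # 9 + 2.0 = 11.0

As these values increase, the overall architectural complexity of the system increases, and
integration and testing effort are likely to increase as well.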
Design Metrics for Object-Oriented Software
In order to develop metrics for object-oriented (OO) design, nine distinct and measurable
characteristics of OO design are considered, which are listed below.

• Complexity: Determined by assessing how classes are related to each other
• Coupling: Defined as the physical connection between OO design elements
• Sufficiency: Defined as the degree to which an abstraction possesses the features
required of it
• Cohesion: Determined by analyzing the degree to which a set of properties that
the class possesses is part of the problem domain or design domain
• Primitiveness: Indicates the degree to which the operation is atomic
• Similarity: Indicates similarity between two or more classes in terms of their
structure, function, behavior, or purpose
• Volatility: Defined as the probability of occurrence of change in the OO design
• Size: Defined with the help of four different views, namely, population, volume,
length, and functionality. Population is measured by calculating the total number
of OO entities, which can be in the form of classes or operations. Volume
measures are collected dynamically at any given point of time. Length is a
measure of interconnected designs such as depth of inheritance tree.
Functionality indicates the value rendered to the user by the OO application.

Metrics for Testing

The majority of the metrics used for testing focus on the testing process rather than the
technical characteristics of the tests themselves. Generally, testers use metrics for analysis,
design, and coding to guide them in the design and execution of test cases.

Function points can be used effectively to estimate testing effort.

The function point (FP) metric can be used effectively as a means for measuring the
functionality delivered by a system. Using historical data, the FP metric can then be used to
(1) estimate the cost or effort required to design, code, and test the software;
(2) predict the number of errors that will be encountered during testing; and
(3) forecast the number of components and/or the number of projected source lines in the
implemented system.
Various characteristics, such as errors discovered, the number of test cases needed, and
testing effort, can be estimated by counting the number of function points in the current
project and comparing that count with data from previous projects.
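For example, if historical data show that an organization uncovers an average of 1.2 errors
per function point during testing (a hypothetical rate), a new project estimated at 300
function points can be expected to yield roughly 360 errors, and the number of test cases and
the testing budget can be sized accordingly.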

Metrics used for architectural design can be used to indicate how integration testing can be
carried out. In addition, cyclomatic complexity can be used effectively as a metric in basis
path testing to determine the number of test cases needed.
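Cyclomatic complexity is computed from the program's flow graph as V(G) = E - N + 2, where E
is the number of flow graph edges and N is the number of flow graph nodes. For example, a flow
graph with 11 edges and 9 nodes yields V(G) = 11 - 9 + 2 = 4, so four test cases are needed to
exercise a basis set of independent paths.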

Halstead measures can be used to derive metrics for testing effort.

By using program volume (V) and program level (PL), Halstead effort (e) can be calculated
by the following equations:

PL = 1/[(n1/2) × (N2/n2)]

e = V/PL

where n1 is the number of distinct operators, n2 is the number of distinct operands, and N2 is
the total number of operand occurrences.
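A minimal Python sketch of these calculations follows; program volume is taken from Halstead's
definition V = (N1 + N2) log2(n1 + n2), and the operator/operand counts used in the example
are hypothetical.

import math

# Halstead testing-effort measures.
# n1, n2: number of distinct operators / operands
# N1, N2: total occurrences of operators / operands
def halstead_effort(n1: int, n2: int, N1: int, N2: int) -> float:
    volume = (N1 + N2) * math.log2(n1 + n2)  # V
    level = 1.0 / ((n1 / 2.0) * (N2 / n2))   # PL = 1/[(n1/2) x (N2/n2)]
    return volume / level                    # e = V / PL

# Hypothetical counts for a small module:
print(round(halstead_effort(n1=10, n2=8, N1=40, N2=30)))  # ~5473

The percentage of overall testing effort to allocate to a module k can then be estimated by
normalizing its effort e(k) against the total effort computed across all modules.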

For developing metrics for object-oriented (OO) testing, different types of design metrics that
have a direct impact on the testability of an object-oriented system are considered. While
developing metrics for OO testing, inheritance and encapsulation are also considered. A set
of metrics proposed for OO testing is listed below.

• Lack of cohesion in methods (LCOM): This indicates the number of states to be tested.
LCOM indicates the number of methods that access one or more of the same attributes.
The value of LCOM is 0 if no methods access the same attributes. As the value of LCOM
increases, more states need to be tested (see the sketch after this list).
• Percent public and protected (PAP): This shows the percentage of class attributes
that are public or protected. The probability of adverse effects among classes increases
as the value of PAP increases, because public and protected attributes lead to
potentially higher coupling.
• Public access to data members (PAD): This shows the number of classes that
can access attributes of another class. Adverse effects among classes increase as
the value of PAD increases.
• Number of root classes (NOR): This specifies the number of distinct class hierarchies
described in the design model. Testing effort increases as NOR increases.
• Fan-in (FIN): This indicates multiple inheritance. If the value of FIN is greater than
1, the class inherits its attributes and operations from more than one root class. This
situation (FIN > 1) should be avoided.
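There are several published formulations of LCOM; a minimal Python sketch of the Chidamber
and Kemerer version, which counts method pairs that share no attributes, is shown below (the
class data used in the example are hypothetical).

from itertools import combinations

# Chidamber-Kemerer LCOM: over all pairs of methods in a class, count
# pairs whose accessed-attribute sets are disjoint (P) and pairs that
# share at least one attribute (Q). LCOM = P - Q if P > Q, else 0.
def lcom(method_attrs: dict) -> int:
    p = q = 0
    for a1, a2 in combinations(method_attrs.values(), 2):
        if a1 & a2:
            q += 1
        else:
            p += 1
    return max(p - q, 0)

# Hypothetical class: three methods and the attributes each accesses.
print(lcom({
    "deposit":  {"balance"},
    "withdraw": {"balance"},
    "audit":    {"log"},
}))  # P = 2, Q = 1 -> LCOM = 1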

Metrics for Maintenance

◼ IEEE suggests a software maturity index (SMI) that provides an indication of the
stability of a software product (based on changes that occur for each release of the
product).
◼ The following information is determined:


MT = the number of modules in the current release

Fc = the number of modules in the current release that have been changed

Fa = the number of modules in the current release that have been added

Fd = the number of modules from the preceding release that were deleted
in the current release
◼ The software maturity index is computed in the following manner:

• SMI = [MT - (Fa + Fc + Fd)]/MT

◼ As SMI approaches 1.0, the product begins to stabilize.
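A minimal Python sketch of the SMI calculation follows; the release counts used in the
example are hypothetical.

# Software maturity index: SMI = [MT - (Fa + Fc + Fd)] / MT
def smi(mt: int, fa: int, fc: int, fd: int) -> float:
    return (mt - (fa + fc + fd)) / mt

# Hypothetical release: 120 modules, 4 added, 6 changed, 2 deleted.
print(smi(mt=120, fa=4, fc=6, fd=2))  # 0.9 -> the product is stabilizing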

Process and Project Metrics

Process metrics are collected across all projects and over long periods of time. Their intent is
to provide a set of process indicators that lead to long-term software process improvement.

Project metrics enable a software project manager to
(1) Assess the status of an ongoing project,
(2) Track potential risks,
(3) Uncover problem areas before they go “critical,”
(4) Adjust work flow or tasks, and
(5) Evaluate the project team’s ability to control quality of software work products.

Measures that are collected by a project team and converted into metrics for use during a project
can also be transmitted to those with responsibility for software process improvement. For this
reason, many of the same metrics are used in both the process and project domains.

Process Metrics and Software Process Improvement

The only way to improve any process is to measure specific attributes of the process, develop
a set of meaningful metrics based on these attributes, and then use the metrics to provide
indicators that will lead to a strategy for improvement.

The process sits at the center of a triangle connecting three factors that have a profound
influence on software quality and organizational performance: people, product, and technology.
The skill and motivation of the people doing the work have been shown to be the single most
influential factor in quality and performance.
The complexity of the product can have a substantial impact on quality and team performance.
The technology (i.e., the software engineering methods and tools) that populates the process
also has an impact.
In addition, the process triangle exists within a circle of environmental conditions that include
the development environment (e.g., integrated software tools), business conditions (e.g.,
deadlines, business rules), and customer characteristics (e.g., ease of communication and
collaboration).
We can only measure the efficacy of a software process indirectly. That is, we derive a set of
metrics based on the outcomes that can be derived from the process.
Outcomes include measures of errors uncovered before release of the software, defects
delivered to and reported by end users, work products delivered (productivity), human effort
expended, calendar time expended, schedule conformance, and other measures.
We can also derive process metrics by measuring the characteristics of specific software
engineering tasks, for example, the effort and time spent performing the umbrella activities
and the generic software engineering activities.

Software process metrics can provide significant benefit as an organization works to improve
its overall level of process maturity. However, like all metrics, these can be misused, creating
more problems than they solve.
Grady suggests a “software metrics etiquette” that is appropriate for both managers and
practitioners as they institute a process metrics program:

• Use common sense and organizational sensitivity when interpreting metrics data.
• Provide regular feedback to the individuals and teams who collect measures and metrics.
• Don’t use metrics to appraise individuals.
• Work with practitioners and teams to set clear goals and metrics that will be used to achieve
them.
• Never use metrics to threaten individuals or teams.
• Metrics data that indicate a problem area should not be considered “negative.” These data are
merely an indicator for process improvement.
• Don’t obsess on a single metric to the exclusion of other important metrics.

As an organization becomes more comfortable with the collection and use of process metrics,
the derivation of simple indicators gives way to a more rigorous approach called statistical
software process improvement (SSPI).

SSPI uses software failure analysis to collect information about all errors and defects
encountered as an application, system, or product is developed and used.

Metrics for Quality

The goal of software engineering is to produce a high-quality system, application, or product
within a time frame that satisfies a market need. To achieve this goal, software engineers must
apply effective methods coupled with modern tools within the context of a mature software
process.

Measuring Quality: The measures of software quality are correctness, maintainability,
integrity, and usability. These measures provide useful indicators for the project team.
1. Correctness. Correctness is the degree to which the software performs its required
function.
The most common measure for correctness is defects per KLOC, where a defect is defined as
a verified lack of conformance to requirements.
2. Maintainability. Maintainability is the ease with which a program can be corrected if
an error is encountered, adapted if its environment changes, or enhanced if the
customer desires a change in requirements.
A simple time-oriented metric is mean-time-to change (MTTC), the time it takes to analyze the
change request, design an appropriate modification, implement the change, test it, and
distribute the change to all users.
3. Integrity. Attacks can be made on all three components of software: programs, data,
and documents.
To measure integrity, two additional attributes must be defined: threat and security.
Threat is the probability (which can be estimated or derived from empirical evidence) that an
attack of a specific type will occur within a given time.
Security is the probability (which can be estimated or derived from empirical evidence) that
the attack of a specific type will be repelled.
The integrity of a system can then be defined as

integrity = Σ [1 - (threat × (1 - security))]

where the summation is carried out over each type of attack. For example, if threat = 0.25 and
security = 0.95 for a given attack type, the integrity of the system with respect to that attack
is 1 - (0.25 × 0.05) = 0.9875, or approximately 0.99.

4. Usability. Usability is an attempt to quantify user-friendliness and can be measured
in terms of specific user-interface characteristics.
These four factors are only a sampling of those that have been proposed as measures for
software quality.

Defect Removal Efficiency: A quality metric that provides benefit at both the project and
process level is defect removal efficiency (DRE). DRE is a measure of the filtering ability of
quality assurance and control activities as they are applied throughout all process framework
activities.
When considered for a project as a whole, DRE is defined in the following manner:

DRE = E/(E + D)

where E is the number of errors found before delivery of the software to the end user and D is
the number of defects found after delivery.
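For example, if a team uncovers 90 errors before delivery and users report 10 defects after
delivery, DRE = 90/(90 + 10) = 0.9.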
Those errors that are not found during the review of the analysis model are passed on to the
design task.
When used in this context, we redefine DRE as

DREi = Ei/(Ei + Ei+1)

where Ei is the number of errors found during software engineering activity i and Ei+1 is the
number of errors found during activity i+1 that are traceable to errors not discovered in
activity i.
A quality objective for a software team is to achieve a DREi that approaches 1. That is, errors
should be filtered out before they are passed on to the next activity.
