Lecture 06 (Software Metrics)
Software Quality
Software quality refers to two related but distinct notions that
exist wherever quality is defined
◦ Functional quality
◦ Structural quality
Functional quality
◦ Reflects how well the software complies with or conforms to a given
design, based on functional requirements or specifications
◦ Can also be described as the software's fitness for purpose, or how it
compares to competitors in the marketplace as a worthwhile product
◦ Functional quality is typically enforced and measured through
software testing
Structural quality
◦ Refers to how well the software meets the non-functional requirements that
support delivery of the functional requirements, such as robustness or
maintainability; the degree to which the software was produced correctly
Defect Removal Efficiency (DRE). Relationship between errors (E), found before
delivery, and defects (D), found after delivery. The ideal is a DRE of 1:
DRE = E / (E + D)
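The DRE ratio is trivial to compute; a minimal sketch (the error and defect counts below are invented for illustration):

```python
# Defect Removal Efficiency: E = errors found before delivery,
# D = defects found after delivery. Counts here are hypothetical.
def dre(errors_before: int, defects_after: int) -> float:
    """DRE = E / (E + D); approaches 1.0 as fewer defects escape to the customer."""
    return errors_before / (errors_before + defects_after)

print(dre(45, 5))   # 45 of 50 total problems caught before delivery
```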
Project Metrics
Used by a project manager and software team to adapt project work flow and technical
activities
Tactical and Short Term.
Purpose:
- Minimize the development schedule by making the necessary adjustments to avoid delays and
mitigate problems
- Assess product quality on an ongoing basis
Metrics:
- Effort or time per SE task
- Errors uncovered per review hour
- Scheduled vs. actual milestone dates
- Number of changes and their characteristics
- Distribution of effort on SE tasks
Product Metrics
Focus on the quality of deliverables
Product metrics are combined across several projects to produce process
metrics
Metrics for the product:
- Measures of the Analysis Model
- Complexity of the Design Model
1. Internal algorithmic complexity
2. Architectural complexity
3. Data flow complexity
- Code metrics
Metrics Guidelines
Use common sense and organizational sensitivity when interpreting metrics data
Provide regular feedback to the individuals and teams who have worked to collect
measures and metrics.
Don’t use metrics to appraise individuals
Work with practitioners and teams to set clear goals and metrics that will be used to
achieve them
Never use metrics to threaten individuals or teams
Metrics data that indicate a problem area should not be considered “negative.” These data
are merely an indicator for process improvement
Don’t obsess on a single metric to the exclusion of other important metrics
Normalization for Metrics
How does an organization combine metrics that come from different
individuals or projects?
Raw measures depend on the size and complexity of the project
Normalization: compensate for complexity aspects particular to a
product
Normalization approaches:
-Size oriented (lines of code approach)
-Function oriented (function point approach)
Normalized Metrics
Size-Oriented:
◦ use size of the SW to normalize
◦ size-oriented measures include:
LOC, effort, $, errors, defects, people
Function-Oriented:
-errors per FP, defects per FP, pages of documentation per FP, FP per person-month
Size-Oriented normalization
suppose we choose LOC as the normalization value
then we can compare across projects:
errors per KLOC
defects per KLOC
$ per LOC
Function-oriented Normalization
•use a measure of functionality as the normalization value
•functionality cannot be measured directly; it is estimated via a formula
derived from other (direct) measures
•a method of quantifying the size and complexity of a system in terms of
the functions the system delivers to the user
Computing Function Points
Analyze the application domain information and develop counts; establish
counts for the input domain and the system interfaces
[Figure — Example: SafeHome Functionality: user inputs such as Password and
Sensors feed the Monitor and Response System, which uses System Config Data
and produces an Alarm Alert]
Example: SafeHome FP

measurement parameter        count       weighting factor           product
                                      (simple / avg. / complex)
number of user inputs          3    ×    3 /  4 /  6             =     9
number of user outputs         2    ×    4 /  5 /  7             =     8
number of user inquiries       2    ×    3 /  4 /  6             =     6
number of files                1    ×    7 / 10 / 15             =     7
number of ext. interfaces      4    ×    5 /  7 / 10             =    20
count-total                                                           50
(simple weights used throughout)

complexity multiplier: [0.65 + 0.01 × ΣFi] = [0.65 + 0.46] = 1.11
function points: 50 × 1.11 ≈ 56
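The arithmetic can be checked in a few lines; a sketch assuming the simple weights and the slide's value-adjustment total of 0.46 (i.e. ΣFi = 46):

```python
# Function-point computation for the SafeHome example, using
# FP = count_total * (0.65 + 0.01 * sum(Fi)) with the simple weights.
weights_simple = {"inputs": 3, "outputs": 4, "inquiries": 3,
                  "files": 7, "interfaces": 5}
counts = {"inputs": 3, "outputs": 2, "inquiries": 2,
          "files": 1, "interfaces": 4}

count_total = sum(counts[k] * weights_simple[k] for k in counts)
sum_fi = 46  # sum of the 14 value-adjustment factors (0.46 on the slide)
fp = count_total * (0.65 + 0.01 * sum_fi)
print(count_total, round(fp))  # 50 56
```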
OO Metrics: Distinguishing Characteristics
The following characteristics require that special OO metrics be
developed:
-Encapsulation — Concentrate on classes rather than functions
-Information hiding — An information hiding metric will provide an indication of
quality
-Inheritance — A pivotal indication of complexity
-Abstraction — Metrics need to measure a class at different levels of abstraction
and from different viewpoints
OO Project Metrics
Number of Scenario Scripts (Use Cases):
- Number of use-cases is directly proportional to the number of classes needed to meet requirements
- A strong indicator of program size
- Depth of the Inheritance Tree (DIT): The maximum length from a leaf to the root of the tree.
Large DIT leads to greater design complexity but promotes reuse
- Number of Children (NOC): Total number of children for each class. Large NOC may dilute
abstraction and increase testing
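DIT and NOC can be computed mechanically from a class hierarchy; a minimal Python sketch over an invented hierarchy (depth is counted from the hierarchy's own root class, excluding Python's built-in object):

```python
# Depth of Inheritance Tree (DIT) and Number of Children (NOC),
# computed by introspection over a small hypothetical hierarchy.
class Sensor: pass
class DoorSensor(Sensor): pass
class WindowSensor(Sensor): pass
class TiltWindowSensor(WindowSensor): pass

def dit(cls) -> int:
    """Inheritance edges from cls up to the hierarchy root (object excluded)."""
    return len(cls.__mro__) - 2

def noc(cls) -> int:
    """Number of immediate subclasses of cls."""
    return len(cls.__subclasses__())

print(dit(TiltWindowSensor), noc(Sensor))  # 2 2
```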
OO Metrics
Coupling:
-Coupling Between Object classes (CBO): Total number of collaborations listed
for each class in CRC cards. Keep CBO low because high values complicate
modification and testing
-Response For a Class (RFC): Set of methods potentially executed in response
to a message received by a class. High RFC implies test and design complexity
Cohesion:
-Lack of Cohesion in Methods (LCOM): Number of methods in a class that
access one or more of the same attributes. High LCOM means the class's
methods are coupled to one another through attributes, increasing design
complexity
OO Metrics
Inheritance:
AIF - Attribute Inheritance Factor
– Ratio of the sum of inherited attributes in all classes of the system to the total number of attributes for
all classes:
AIF = Σ Ai(Ci) / Σ [Ad(Ci) + Ai(Ci)], summed over i = 1 to TC
TC = total number of classes, Ad(Ci) = number of attributes declared in class Ci, Ai(Ci) = number of
attributes inherited in class Ci
OO Metrics
Inheritance:
MIF - Method Inheritance Factor
– Ratio of the sum of inherited methods in all classes of the system to the total number of methods for all
classes:
MIF = Σ Mi(Ci) / Σ [Md(Ci) + Mi(Ci)], summed over i = 1 to TC
TC = total number of classes, Md(Ci) = number of methods declared in class Ci, Mi(Ci) = number of
methods inherited in class Ci
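Given per-class declared and inherited counts, both factors are simple ratios; a sketch with invented counts:

```python
# MOOD inheritance factors (MIF, AIF) from per-class counts.
# The counts below are hypothetical, for illustration only.
classes = {
    # name: (methods_declared, methods_inherited, attrs_declared, attrs_inherited)
    "Sensor":       (4, 0, 3, 0),
    "DoorSensor":   (2, 4, 1, 3),
    "WindowSensor": (3, 4, 2, 3),
}

def mif(cs) -> float:
    """Sum of inherited methods over total (declared + inherited) methods."""
    inherited = sum(mi for _, mi, _, _ in cs.values())
    total = sum(md + mi for md, mi, _, _ in cs.values())
    return inherited / total

def aif(cs) -> float:
    """Sum of inherited attributes over total (declared + inherited) attributes."""
    inherited = sum(ai for _, _, _, ai in cs.values())
    total = sum(ad + ai for _, _, ad, ai in cs.values())
    return inherited / total

print(round(mif(classes), 2), round(aif(classes), 2))  # 0.47 0.5
```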
OO Metrics
Use-Case Oriented Metrics
Counting actors
2. Maintainability:
- The degree to which a program is amenable to change
- Metric = Mean Time to Change. Average time taken to analyze, design, implement and
distribute a change
Quality Metrics: Further Measures
3. Integrity:
- The degree to which a program is resistant to outside attack
- integrity = Σ [1 − t_i × (1 − s_i)]
- Summed over all types of security attacks, i, where t_i = threat (probability that an attack of type i will occur within a given time) and
s_i = security (probability that an attack of type i will be repelled)
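Treating the integrity contribution of each attack type as 1 − t·(1 − s), i.e. one minus the chance that an attack both occurs and is not repelled, a sketch with invented probabilities:

```python
# Integrity for one attack type: t = threat probability,
# s = probability the attack is repelled. Values are hypothetical.
def integrity(threat: float, security: float) -> float:
    """1 minus the probability an attack occurs AND gets through."""
    return 1 - threat * (1 - security)

print(round(integrity(0.25, 0.95), 4))  # 0.9875
```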
4. Usability:
- The degree to which a program is easy to use.