Lecture 06 (Software Metrics)

Object Oriented System

Object Oriented Metrics


MANISH ARYAL
Quality of Design
Internal Quality
Internal quality means all the properties of the software, as seen by the
developers, that are desirable in order to facilitate the process of creating a
good product:
◦ concision: the code does not suffer from duplication
◦ cohesion: each module, class, or routine does one thing and does it well
◦ coupling: minimal interdependencies and interrelations between objects
◦ simplicity
◦ generality: the problem domain bounds are known and stated
◦ clarity: the code is largely self-documenting
Quality of Design
External Quality
External quality means all the properties of the software as a
product that users can experience and enjoy:
◦ conformity to their expectations (and evolution thereof)
◦ reliability
◦ accuracy
◦ ease of use and comfort (including response delay)
◦ robustness (or adaptability to some unforeseen condition of use)
◦ openness (or adaptability to future extensions or evolutions)
Principles of OOD
SRP: The Single Responsibility Principle
OCP: The Open/Closed Principle
LSP: The Liskov Substitution Principle
ISP: The Interface Segregation Principle
DIP: The Dependency Inversion Principle

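To make the first two principles concrete, here is a minimal Python sketch (the Shape, Rectangle, and Circle classes are hypothetical, not from the lecture): each class has a single reason to change (SRP), and new shapes can be added without modifying existing code (OCP).

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    """Each subclass has a single responsibility: its own geometry (SRP)."""
    @abstractmethod
    def area(self) -> float: ...

class Rectangle(Shape):
    def __init__(self, w: float, h: float):
        self.w, self.h = w, h
    def area(self) -> float:
        return self.w * self.h

class Circle(Shape):
    def __init__(self, r: float):
        self.r = r
    def area(self) -> float:
        return math.pi * self.r ** 2

def total_area(shapes: list[Shape]) -> float:
    # Open for extension (new Shape subclasses can be added),
    # closed for modification (this function never changes) -- OCP.
    return sum(s.area() for s in shapes)

print(total_area([Rectangle(2, 3), Circle(1)]))  # 6 + pi, about 9.14
```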
Software Quality
Software quality refers to two related but distinct notions that
exist wherever quality is defined

◦ Functional quality
◦ Structural quality

Software Quality
Functional quality
◦ Reflects how well it complies with or conforms to a given design,
based on functional requirements or specifications
◦ Can also be described as the fitness for purpose of a piece of
software or how it compares to competitors in the marketplace as a
worthwhile product
◦ Functional quality is typically enforced and measured through software testing

Software Quality
Structural quality
◦ Refers to how the software meets the non-functional requirements that support the
delivery of the functional requirements, such as robustness or maintainability; the
degree to which the software was produced correctly

◦ Evaluated through analysis of the software's inner structure, its source code, at
the unit level, the technology level, and the system level; in effect, how well its
architecture adheres to sound principles of software architecture
Software Metric
A software metric is a standard of measure of the degree to which a
software system or process possesses some property

The goal is to obtain measurements that are
◦ objective,
◦ reproducible,
◦ quantifiable
Software Metric
Applications in
◦ schedule and budget planning,
◦ cost estimation,
◦ quality assurance and testing,
◦ software debugging,
◦ software performance optimization,
◦ optimal personnel task assignments.
Software Metric
Common software measurements include:
◦ Balanced scorecard
◦ Bugs per line of code
◦ Code coverage
◦ Cohesion
◦ Coupling
◦ Cyclomatic complexity
◦ DSQI (design structure quality index)
◦ Maintainability index
◦ Number of classes and interfaces
◦ Number of lines of code
◦ Number of lines of customer requirements
◦ Program execution time
Terminology
Measure:
◦ Quantitative indication of the extent, amount, dimension, or size of some attribute
of a product or process
Metrics
◦ The degree to which a system, component, or process possesses a given attribute.
Relates several measures (e.g. average number of errors found per person-hour)
◦ Direct metrics
◦ Immediately measurable attributes (e.g. lines of code, execution speed, defects reported)
◦ Indirect metrics
◦ Aspects that are not immediately quantifiable (e.g. functionality, quality, reliability)
Terminology
Indicators
◦ A combination of metrics that provides insight into the software process, project or
product
Faults:
-Errors: Faults found by the practitioners during software development
-Defects: Faults found by the customers after release
A Good Manager Measures
[Diagram: measurement yields process metrics and project metrics from the process,
and product metrics from the product. What do we use as a basis? Size? Function?]
Process Metrics
Focus on quality achieved as a consequence of a repeatable or managed process. Strategic
and Long Term.
Statistical Software Process Improvement (SSPI)
Error Categorization and Analysis:
◦ All errors and defects are categorized by origin
◦ The cost to correct each error and defect is recorded
◦ The number of errors and defects in each category is computed
◦ Data is analyzed to find the categories that result in the highest cost to the organization
◦ Plans are developed to modify the process
Defect Removal Efficiency (DRE). Relationship between errors (E) and defects (D). The ideal is
a DRE of 1:
DRE = E / (E + D)
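For example (hypothetical counts, not from the lecture): if a team finds E = 90 errors during development and customers report D = 10 defects after release, then DRE = 90 / (90 + 10) = 0.9, i.e. 90% of the faults were removed before release.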
Project Metrics
Used by a project manager and software team to adapt project work flow and technical
activities
Tactical and Short Term.
Purpose:
- Minimize the development schedule by making the necessary adjustments to avoid delays and
mitigate problems
- Assess product quality on an ongoing basis

Metrics:
- Effort or time per SE task
- Errors uncovered per review hour
- Scheduled vs. actual milestone dates
- Number of changes and their characteristics
- Distribution of effort on SE tasks
Product Metrics
Focus on the quality of deliverables
Product metrics are combined across several projects to produce process
metrics
Metrics for the product:
- Measures of the Analysis Model
- Complexity of the Design Model
1. Internal algorithmic complexity
2. Architectural complexity
3. Data flow complexity
- Code metrics
Metrics Guidelines
Use common sense and organizational sensitivity when interpreting metrics data
Provide regular feedback to the individuals and teams who have worked to collect
measures and metrics.
Don’t use metrics to appraise individuals
Work with practitioners and teams to set clear goals and metrics that will be used to
achieve them
Never use metrics to threaten individuals or teams
Metrics data that indicate a problem area should not be considered “negative.” These data
are merely an indicator for process improvement
Don’t obsess over a single metric to the exclusion of other important metrics
Normalization for Metrics
How does an organization combine metrics that come from different
individuals or projects?
Metrics depend on the size and complexity of the project
Normalization: compensate for complexity aspects particular to a
product
Normalization approaches:
-Size oriented (lines of code approach)
-Function oriented (function point approach)
Normalized Metrics
Size-Oriented:
◦ use size of the SW to normalize
◦ size-oriented measures include:
LOC, effort, $, errors, defects, people
Function-Oriented:
-errors per FP, defects per FP, pages of documentation per FP, FP per person-month
Size-Oriented normalization
Suppose we choose LOC as the normalization value; then we can compare across projects:
errors per KLOC
defects per KLOC
$ per LOC
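A minimal Python sketch of size-oriented normalization (the project data below is hypothetical), so that projects of different sizes become comparable:

```python
# Hypothetical project measurements: (name, LOC, errors, defects, cost in $)
projects = [
    ("alpha", 12_100, 134, 29, 168_000),
    ("beta",  27_200, 321, 86, 440_000),
]

for name, loc, errors, defects, cost in projects:
    kloc = loc / 1000  # normalize by thousands of lines of code
    print(f"{name}: {errors / kloc:.1f} errors/KLOC, "
          f"{defects / kloc:.1f} defects/KLOC, "
          f"${cost / loc:.2f} per LOC")
```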
Function-oriented Normalization
•use a measure of functionality as the normalization value
•functionality cannot be measured directly; it must be derived (by formula)
using other, direct measures
•a method of quantifying the size and complexity of a system in terms of the
functions that the system delivers to the user
Computing Function Points
1. Analyze application domain information and develop counts: establish a count
for the input domain and the system interfaces
2. Weight each count by assessing complexity: assign a level of complexity
(simple, average, complex), i.e. a weight, to each count
3. Assess the influence of global factors that affect the application: grade the
significance of external factors F_i, such as reuse, concurrency, OS, ...
4. Compute function points:
FP = SUM(count x weight) x C
where the complexity multiplier C = (0.65 + 0.01 x N) and the degree of
influence N = SUM(F_i)
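A minimal Python sketch of this computation (the weighting table mirrors the information-domain slide below; the function and parameter names are mine):

```python
# Standard FP weights per information-domain parameter: (simple, average, complex)
WEIGHTS = {
    "inputs":     (3, 4, 6),
    "outputs":    (4, 5, 7),
    "inquiries":  (3, 4, 6),
    "files":      (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def function_points(counts: dict, complexity: dict, factors: list[int]) -> float:
    """counts[param] = raw count; complexity[param] = 0/1/2 for
    simple/average/complex; factors = the 14 F_i grades (0..5)."""
    count_total = sum(counts[p] * WEIGHTS[p][complexity[p]] for p in counts)
    c = 0.65 + 0.01 * sum(factors)  # complexity multiplier C
    return count_total * c
```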
Analyzing the Information Domain
measurement parameter            count    simple  avg.  complex     total
number of user inputs             __   x    3      4      6     =    __
number of user outputs            __   x    4      5      7     =    __
number of user inquiries          __   x    3      4      6     =    __
number of files                   __   x    7     10     15     =    __
number of ext. interfaces         __   x    5      7     10     =    __
count-total                                                          __
complexity multiplier                                                __
function points                                                      __
Taking Complexity into Account
Complexity Adjustment Values (F_i) are rated on a scale of 0 to 5
(0 - no influence, 1 - incidental, 2 - moderate, 3 - average, 4 - significant, 5 - essential):
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized environment?
6. Does the system require on-line data entry?
7. Does on-line entry require input over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use?
Example: SafeHome Functionality
[Use-case and context diagrams: the User interacts with the SafeHome system via
password, zone setting, zone inquiry, sensor inquiry, sensor test, panic button,
and (de)activation; SafeHome reads the Sensors, reports messages and sensor status
to the User, and exchanges password, sensor, alarm alert, and system configuration
data with the Monitor and Response System.]
Example: SafeHome FP
(all parameters rated simple)
measurement parameter            count    simple  avg.  complex     total
number of user inputs              3   x    3      4      6     =     9
number of user outputs             2   x    4      5      7     =     8
number of user inquiries           2   x    3      4      6     =     6
number of files                    1   x    7     10     15     =     7
number of ext. interfaces          4   x    5      7     10     =    20
count-total                                                          50
complexity multiplier  [0.65 + 0.01 x SUM(F_i)] = [0.65 + 0.46] = 1.11
function points        50 x 1.11 ≈ 56
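Plugging these numbers into the function_points sketch from the Computing Function Points section reproduces the result (the per-factor split of the F_i grades is hypothetical; only their sum of 46 comes from the slide):

```python
counts = {"inputs": 3, "outputs": 2, "inquiries": 2, "files": 1, "interfaces": 4}
complexity = {p: 0 for p in counts}       # 0 = simple for every parameter
factors = [4] * 10 + [2, 2, 2, 0]         # hypothetical grades summing to 46
print(function_points(counts, complexity, factors))  # 50 * 1.11 = 55.5, i.e. about 56
```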
OO Metrics: Distinguishing Characteristics
The following characteristics require that special OO metrics be
developed:
-Encapsulation — Concentrate on classes rather than functions
-Information hiding — An information hiding metric will provide an indication of
quality
-Inheritance — A pivotal indication of complexity
-Abstraction — Metrics need to measure a class at different levels of abstraction
and from different viewpoints
OO Project Metrics
Number of Scenario Scripts (Use Cases):
- The number of use cases is directly proportional to the number of classes needed to meet requirements
- A strong indicator of program size
Number of Key Classes (Class Diagram):
- A key class focuses directly on the problem domain
- NOT likely to be implemented via reuse
- Typically 20-40% of all classes are key; the rest support infrastructure (e.g. GUI,
communications, databases)

Number of Subsystems (Package Diagram):
- Provides insight into resource allocation, scheduling for parallel development, and
overall integration effort
OO Analysis and Design Metrics
Related to Analysis and Design Principles
Complexity:
- Weighted Methods per Class (WMC): Assume that n methods with cyclomatic complexities c_1, c_2, ..., c_n
are defined for a class C:
WMC = c_1 + c_2 + ... + c_n = SUM(c_i)
- Depth of the Inheritance Tree (DIT): The maximum length from a leaf to the root of the tree.
Large DIT leads to greater design complexity but promotes reuse
- Number of Children (NOC): Total number of children for each class. Large NOC may dilute
abstraction and increase testing
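A minimal Python sketch (hypothetical sensor classes, single inheritance assumed) showing how DIT and NOC can be read off a class hierarchy via introspection:

```python
class Sensor: pass                            # root of the hierarchy
class ZoneSensor(Sensor): pass
class MotionSensor(Sensor): pass
class InfraredMotionSensor(MotionSensor): pass

def dit(cls) -> int:
    # Depth of Inheritance Tree: number of ancestors between cls and the
    # implicit Python root `object` (valid under single inheritance).
    return len(cls.mro()) - 2

def noc(cls) -> int:
    # Number of Children: direct subclasses of cls.
    return len(cls.__subclasses__())

print(dit(InfraredMotionSensor))  # 2 (MotionSensor -> Sensor)
print(noc(Sensor))                # 2 (ZoneSensor, MotionSensor)
```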
OO Metrics
Coupling:
-Coupling Between Object classes (CBO): Total number of collaborations listed
for each class in CRC cards. Keep CBO low because high values complicate
modification and testing
-Response For a Class (RFC): Set of methods potentially executed in response
to a message received by a class. High RFC implies test and design complexity
Cohesion:
-Lack of Cohesion in Methods (LCOM): Number of methods in a class that
access one or more of the same attributes. High LCOM means tightly coupled
methods
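A minimal Python sketch of LCOM under this definition (the method-to-attribute map is a hypothetical input, e.g. produced by a static analyzer):

```python
def lcom(method_attrs: dict[str, set[str]]) -> int:
    """Number of methods that access one or more of the same
    attributes as some other method of the class."""
    return sum(
        1
        for m, attrs in method_attrs.items()
        if any(attrs & other for n, other in method_attrs.items() if n != m)
    )

# Hypothetical class: set_zone and check_zone share 'zone'; reset shares nothing.
print(lcom({
    "set_zone":   {"zone"},
    "check_zone": {"zone", "status"},
    "reset":      {"log"},
}))  # 2
```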
OO Metrics
Inheritance:
AIF - Attribute Inheritance Factor
– Ratio of the sum of inherited attributes in all classes of the system to the total number of attributes for
all classes:

AIF = SUM(Ai(Ci)) / SUM(Ad(Ci) + Ai(Ci)), summed over i = 1, ..., TC

where TC = total number of classes, Ad(Ci) = number of attributes declared in class Ci, Ai(Ci) = number of
attributes inherited in class Ci
OO Metrics
Inheritance:
MIF - Method Inheritance Factor
– Ratio of the sum of inherited methods in all classes of the system to the total number of methods for all
classes:

MIF = SUM(Mi(Ci)) / SUM(Md(Ci) + Mi(Ci)), summed over i = 1, ..., TC

where TC = total number of classes, Md(Ci) = number of methods declared in class Ci, Mi(Ci) = number of
methods inherited in class Ci
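A rough Python sketch of MIF using introspection (the classes are hypothetical; dunder methods are ignored so the counts stay readable):

```python
def public_methods(cls) -> set[str]:
    # Methods visible on cls, ignoring dunders.
    return {n for n in dir(cls)
            if callable(getattr(cls, n)) and not n.startswith("__")}

def mif(classes) -> float:
    declared = inherited = 0
    for cls in classes:
        all_m = public_methods(cls)
        decl = {n for n in all_m if n in vars(cls)}  # defined in cls itself
        declared += len(decl)
        inherited += len(all_m - decl)
    return inherited / (declared + inherited)

class Sensor:
    def read(self): ...
class MotionSensor(Sensor):
    def calibrate(self): ...

print(mif([Sensor, MotionSensor]))  # 1 inherited / (2 declared + 1) = 0.33...
```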
OO Metrics
Use-Case Oriented Metrics
Counting actors

Actor weighting factors:
◦ Simple actor: represents another system with a defined interface
◦ Average actor: another system that interacts through a protocol such as TCP/IP, or a person interacting through a text-based interface
◦ Complex actor: a person interacting through a GUI

In the standard use-case-points scheme, simple, average, and complex actors carry weights of 1, 2, and 3
respectively; the total actor weight is calculated by multiplying the count of each actor type by its weight
and adding these values together.
OO Metrics
Use-Case Oriented Metrics
Counting use cases

Transaction-based weighting factors:
The number of use cases of each type is counted and multiplied by its weighting factor; in the standard
use-case-points scheme, a simple use case (3 or fewer transactions) has weight 5, an average one (4 to 7
transactions) weight 10, and a complex one (more than 7 transactions) weight 15.
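A minimal Python sketch combining the two weightings into an unadjusted use-case-point count (standard Karner weights; the example counts are hypothetical):

```python
ACTOR_WEIGHTS    = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def unadjusted_ucp(actors: dict, use_cases: dict) -> int:
    # UUCP = total actor weight + total use-case weight
    return (sum(ACTOR_WEIGHTS[k] * n for k, n in actors.items())
            + sum(USE_CASE_WEIGHTS[k] * n for k, n in use_cases.items()))

# Hypothetical counts: 1 complex actor (GUI user), 2 simple system actors;
# 3 simple and 2 average use cases.
print(unadjusted_ucp({"complex": 1, "simple": 2},
                     {"simple": 3, "average": 2}))  # 3 + 2 + 15 + 20 = 40
```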
Quality Metrics
Measures conformance to explicit requirements, adherence to specified standards,
and satisfaction of implicit requirements
Software quality can be difficult to measure and is often highly subjective
1. Correctness:
- The degree to which a program operates according to specification
- Metric = Defects per FP

2. Maintainability:
- The degree to which a program is amenable to change
- Metric = Mean Time to Change. Average time taken to analyze, design, implement and
distribute a change
Quality Metrics: Further Measures
3. Integrity:
- The degree to which a program is resistant to outside attack
- Metric: integrity = SUM_i [1 - t_i x (1 - s_i)]
- Summed over all types of security attacks, i, where t_i = threat (probability that an attack of type i will occur within a given time) and
s_i = security (probability that an attack of type i will be repelled)

4. Usability:
- The degree to which a program is easy to use
- Metric: a combination of
◦ the skill required to learn the system,
◦ the time required to become moderately proficient,
◦ the net increase in productivity, and
◦ an assessment of the user's attitude toward the system
Thank You.
