
MC4102 OOSE UNIT - V

UNIT V SOFTWARE QUALITY AND METRICS 9

Need of Object Oriented Software Estimation – Lorenz and Kidd Estimation – Use
Case Points Method – Class Point Method – Object Oriented Function Point –
Risk Management – Software Quality Models – Analyzing the Metric Data –
Metrics for Measuring Size and Structure – Measuring Software Quality - Object
Oriented Metrics

NEED OF OBJECT ORIENTED SOFTWARE ESTIMATION

Software Management Activities


Project Planning
Scope Management
Project Estimation
Software Size Estimation
Effort Estimation
Time Estimation
Cost Estimation
Project Estimation Technique
Decomposition Technique
Empirical Estimation Technique
Need of Object Oriented Software Estimation

Software Management Activities

Software project management comprises a number of activities, including planning the project, deciding the scope of the software product, estimating cost in various terms, scheduling tasks and events, and managing resources.

Project management activities may include:

 Project Planning
 Scope Management
 Project Estimation

Project Planning

Software project planning is a task performed before the production of software actually starts.

Mrs.K.VIJAYALAKSHMI, MCA.,M.Phil., B.Ed., 1


It exists to support software production but involves no concrete activity that has any direct connection with software production; rather, it is a set of multiple processes that facilitate software production.

Scope Management

It defines the scope of the project; this includes all the activities and processes that need to be carried out in order to make a deliverable software product.
Scope management is essential because it creates the boundaries of the project by clearly defining what will and will not be done in the project.
This keeps the project to a limited and quantifiable set of tasks, which can easily be documented, and in turn avoids cost and time overruns.

During project scope management, it is necessary to –

 Define the scope


 Decide its verification and control
 Divide the project into various smaller parts for ease of management.
 Verify the scope
 Control the scope by incorporating changes to the scope

Project Estimation

For effective management, accurate estimation of various measures is a must. With correct estimates, managers can manage and control the project more efficiently and effectively.
Project estimation may involve the following:

Software size estimation

Software size may be estimated either in terms of KLOC (Kilo Lines of Code) or by calculating the number of function points in the software.
Lines of code depend on coding practices, while function points vary according to the user or software requirements.

Effort estimation

Managers estimate effort in terms of personnel requirements and the man-hours required to produce the software.
For effort estimation, the software size should be known.


This can be derived from managers' experience or the organization's historical data, or the software size can be converted into effort using standard formulae.

Time estimation

Once size and efforts are estimated, the time required to produce the software can be
estimated.
Effort required is segregated into sub categories as per the requirement specifications
and interdependency of various components of software.
Software tasks are divided into smaller tasks, activities or events using a Work Breakdown Structure (WBS).
The tasks are scheduled on day-to-day basis or in calendar months.
The sum of the time required to complete all tasks, in hours or days, is the total time invested to complete the project.

Cost estimation

This might be considered the most difficult estimate of all, because it depends on more elements than any of the previous ones.
For estimating project cost, it is required to consider –
 Size of software
 Software quality
 Hardware
 Additional software or tools, licenses etc.
 Skilled personnel with task-specific skills
 Travel involved
 Communication
 Training and support

Project Estimation Techniques

We discussed various parameters involved in project estimation, such as size, effort, time and cost. The project manager can estimate these factors using two broadly recognized techniques –

Decomposition Technique

This technique views the software as a product of various components.
There are two main models –
Line of Code estimation is based on the number of lines of code in the software
product.

Function Points estimation is based on the number of function points in the software product.

Empirical Estimation Technique

This technique uses empirically derived formulae to make estimations.
These formulae are based on LOC or FPs.

Putnam Model

 This model was developed by Lawrence H. Putnam and is based on Norden's frequency distribution (the Rayleigh curve). The Putnam model maps the time and effort required to the software size.

COCOMO

 COCOMO stands for COnstructive COst MOdel, developed by Barry W. Boehm. It divides software products into three categories: organic, semi-detached and embedded.

Need of Object Oriented Software Estimation

Traditional software estimation includes lines of source code and function point analysis for size estimation, and COCOMO 81, COCOMO II and the Putnam resource allocation model for cost estimation.

However, as the paradigm shifts towards object-oriented software, we may wonder: "Is object-oriented software estimation different from traditional software estimation?" We would say it is a similar activity, but the key parameters (for example, size) used to make an estimate change in the case of object-oriented software.

Object-oriented software engineering uses the Unified Modeling Language (UML) for creating models. One of the important mechanisms for depicting the functionality of the system is the use case.

The use case diagram may be used to predict the size and hence the effort of the
software at an early stage of software development.

Classes are also an important element for measuring size in object-oriented software.


The functionality of object-oriented software can be depicted using use cases, and these use cases are realized using classes.
Thus, effort can be estimated using the size measures computed for object-oriented products.

LORENZ & KIDD ESTIMATION

Lorenz & Kidd Estimation methods


Use of Scenario Scripts
Use of Key and Support Classes
Examples for finding efforts

Lorenz & Kidd Estimation methods

The Lorenz and Kidd method for estimating size and effort is one of the earliest such methods, developed in 1994 (Lorenz and Kidd, 1994).
The results of this method are based on a study of 5-8 projects developed in C++ and Smalltalk.

Lorenz and Kidd provided two methods for estimation of a number of classes:

Use of scenario scripts:

The number of scenario scripts (similar to use cases) may be used to estimate the number of classes.

Number of classes = 17 x number of scenario scripts

This method for determining the number of classes can be used in the early phases of the software development life cycle, i.e., before the classes have been identified.


Use of key and support classes:

After elaborating the design phase, the number of classes can be determined.
Lorenz and Kidd differentiated between key classes and support classes.

Key classes are specific to business applications and are ranked with higher priority by
the customer.
These classes also involve many scenarios.

Support classes are common for many applications.


These classes include user interface, back end classes and communications.

The following are used to determine the support classes:

Support classes can be calculated as:

Support classes = Number of key classes x Multiplier

Finally, the total number of classes is obtained by adding key classes and support
classes.

Total number of classes = Key classes + Support classes

According to Lorenz and Kidd, each class requires 10 to 20 person days for
implementation.

Thus, effort can be calculated as:

Effort= Total number of classes x (10 to 20 person days)

This method is easy and simple to understand.
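The Lorenz and Kidd calculations above can be sketched directly in Python. The GUI multiplier of 2.5 is the value used in Example 5.2 below; days_per_class may be anywhere in the 10-20 range:

```python
def classes_from_scripts(num_scripts):
    # Early-phase estimate: Number of classes = 17 x scenario scripts.
    return 17 * num_scripts

def total_classes(key_classes, multiplier):
    # Support classes = key classes x multiplier (e.g. 2.5 for a GUI);
    # total = key classes + support classes.
    return key_classes + key_classes * multiplier

def effort_person_days(num_classes, days_per_class):
    # Each class takes 10 to 20 person-days to implement.
    return num_classes * days_per_class

# Example 5.1: 15 scenario scripts, 15 person-days per class
print(effort_person_days(classes_from_scripts(15), 15))  # 3825
```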


EXAMPLE 5.1

An application consists of 15 scenario scripts and requires 15 person-days to implement each class. Determine the effort for the given application.

Solution

Number of classes = 17 x Number of scenario scripts
= 17 x 15 = 255

Effort = 255 x 15 = 3825 person days

EXAMPLE 5.2

Consider the database application project with the following characteristics:

1. The application has 45 key classes

2. A graphical user interface is required

Calculate the effort to develop such a project, given that each class requires 20 person-days.

Solution

Number of key classes = 45

Number of support classes = Number of key classes x Multiplier (2.5 for a GUI)
= 45 x 2.5 = 112.5

Total number of classes = Number of key classes + Number of support classes

= 112.5 + 45 = 157.5

Effort = 157.5 x 20 = 3150 person days

USE CASE POINTS METHOD

Introduction of Use Case Points method.


Steps in Use Case Points method
Classification of Actors and Use Cases
Computing unadjusted Use Case Points
Calculating Technical Complexity Factors
Calculating Environmental Complexity Factors
Calculating Use Case Points
Example for Use Case Points

Introduction of Use Case Points method

The use case points method was developed by Gustav Karner of Objectory (now part of IBM Rational Software) in 1993 (Karner, 1993).
The method is used for estimating the size and effort of object-oriented projects using use cases.
The method is an extension of the function point analysis technique developed by Albrecht and Gaffney (1983).
Karner's work on use case points was presented in his diploma thesis, titled "Metrics for Objectory".
The use case points method measures the functionality of the software based on the use
case counts.
The use case model is a very popular technique for requirements gathering and can be
used at early phases of software development in order to provide estimations of the
project.

Steps in Use Case Points method

1. Classification of actors and use cases:

In this step, the complexity levels of the actors and use cases are identified and
their weighted sum is computed.

2. Computing unadjusted use case points:

The estimates of unadjusted use case points are made.

3. Calculating technical complexity factors:

In this step, we identify the degree of influence of technical factors on the


project.

4. Calculating environmental complexity factors:

In this step, the degree of influence of the environmental factors on the project is determined.

5. Calculating use case points:

In the final step, the use case points are calculated from the values obtained in steps 2, 3 and 4.

Classification of Actors & Use Cases

The first step in the calculation of use case points is the classification of actors and use cases. The actors are ranked according to their complexity, i.e., simple, average or complex.
The criteria for classifying an actor as simple, average or complex are based on the type of interface through which the actor interacts with the system.


The unadjusted actor weight (UAW) is the weighted sum of the actors:

UAW = Σ (i = 1 to n) wi

where n is the number of actors, ai is the ith actor and wi is the value of the weighting factor of the ith actor.

The use cases are ranked according to their complexity: simple, average or complex.
The criteria for classifying a use case are given in Table 5.2.

There are two methods for classifying use case complexity:

1. A use case may be classified as simple, average or complex based on the number of transactions.
A transaction can be defined as a collection of activities; transactions are counted by counting the use case steps.

2. The other method is to classify the use case by counting the analysis objects, i.e., the number of objects that will be needed to implement the use case.

Each use case is multiplied by its corresponding weighting factor and the results are summed to get the unadjusted use case weight (UUCW):

UUCW = Σ (i = 1 to m) wi

where m is the number of use cases and ui is the ith use case.

Computing Unadjusted Use Case Points

After classifying actors and use cases, the resultant unadjusted use case points (UUCP)
are computed by adding UAW and UUCW as shown in mathematical form below:

UUCP= UAW+ UUCW


The procedure for calculating UUCP is given in Table 5.3.

Calculating Technical Complexity Factors

The technical complexity factor (TCF) accounts for the technical characteristics of the software.
The TCFs are similar to those in function point analysis, except that some factors have been added and some deleted.
The criterion for a technical factor is defined by Symons (1988) as:

A system requirement other than those concerned with information content, intrinsic to and affecting the size of the task, but not arising from the project environment.

The TCF varies depending on the difficulty level of the system. Each factor is rated on a scale of 0 to 5 as shown in Figure 5.3.
Table 5.4 shows the weights assigned to the contributing technical factors.


The TCF is obtained using the following relationship:

TCF = 0.6 + 0.01 x TF

where TF is the weighted sum of the ratings of the thirteen technical factors.

Calculating Environmental Complexity Factors

The environmental complexity factor (ECF) helps in estimating the efficiency of the project.
This factor was calibrated from early estimates in projects, based on interviews carried out on Objectory projects.
Figure 5.4 rates each environmental factor on a measurement scale from 0 to 5, 0 meaning the factor is irrelevant and 5 meaning it is essential.
Table 5.5 shows the weights assigned to the contributing environmental factors.


The ECF is obtained using the following relationship:

ECF = 1.4 + (-0.03) x EF

where EF is the weighted sum of the ratings of the eight environmental factors.

Calculating Use Case Points

The final number of use case points is calculated by multiplying UUCP by TCF and
ECF.

UCP = UUCP x TCF x ECF

Effort can then be estimated at 20 person-hours per use case point.


The use case points method can be used in early phases of software development in
order to estimate the size and effort.
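The complete UCP calculation can be sketched as follows. The weights are the standard Karner values (assumed here, since Tables 5.1-5.5 are not reproduced in the text, though they are consistent with the sums used in Examples 5.3 and 5.6); the actor and use case arguments give the count at each complexity level, and the ratings are the 0-5 scores of the thirteen technical and eight environmental factors:

```python
ACTOR_WEIGHT = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHT = {"simple": 5, "average": 10, "complex": 15}
TCF_WEIGHTS = [2, 1, 1, 1, 1, 0.5, 0.5, 2, 1, 1, 1, 1, 1]  # T1-T13
ECF_WEIGHTS = [1.5, 0.5, 1, 0.5, 1, 2, -1, -1]             # E1-E8

def use_case_points(actors, use_cases, t_ratings, e_ratings):
    # actors / use_cases: dicts such as {"average": 5}.
    uaw = sum(ACTOR_WEIGHT[level] * n for level, n in actors.items())
    uucw = sum(USE_CASE_WEIGHT[level] * n for level, n in use_cases.items())
    uucp = uaw + uucw
    tcf = 0.6 + 0.01 * sum(w * r for w, r in zip(TCF_WEIGHTS, t_ratings))
    ecf = 1.4 - 0.03 * sum(w * r for w, r in zip(ECF_WEIGHTS, e_ratings))
    return uucp * tcf * ecf

# Example 5.3: 5 average actors, 10 average use cases, all factors rated 3
ucp = use_case_points({"average": 5}, {"average": 10}, [3] * 13, [3] * 8)
```

With these inputs the function reproduces Example 5.3 (UCP ≈ 111.64); effort then follows as UCP x 20 person-hours.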

Example for Use Case Points

EXAMPLE 5.3

Consider an airline reservation system where the following information is available:


Number of actors: 05
Number of use cases: 10

Assume all complexity factors are average.


Compute the use case points for the project.

Solution

UAW = 5 x 2 = 10 (5 average actors, weight 2)

UUCW = 10 x 10 = 100 (10 average use cases, weight 10)

UUCP = UAW + UUCW
= 10 + 100 = 110

TCF = 0.6 + 0.01 x TF

= 0.6 + 0.01 x (3 x 2 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 0.5 + 3 x 0.5 + 3 x 2 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 1)
= 0.6 + 0.01 x 42 = 1.02

ECF = 1.4 + (-0.03) x EF

= 1.4 + (-0.03) x (3 x 1.5 + 3 x 0.5 + 3 x 0.5 + 3 x 1 + 3 x 1 + 3 x (-1) + 3 x (-1) + 3 x 2)
= 1.4 + (-0.03) x 13.5 = 0.995

UCP = UUCP x TCF x ECF


= 110 X 1.02 X 0.995
= 111.639

EXAMPLE 5.4

The following information is available for an application:

2 simple actors, 2 average actors, 1 complex actor, 2 use cases with the number of
transactions 3, 4 use cases with the number of transactions 5 and 2 use cases with the
number of transactions 15.

In addition to the above information, the system requires:

(i) Significant user efficiency


(ii) Essential ease to change
(iii) Moderate concurrency
(iv) Essential application experience
(v) Significant object-oriented experience
(vi) Essential stable requirements

Other technical and environmental complexity factors are treated as average.


Compute the use case points for the project.


Solution

UAW: 2 x 1 = 2, 2 x 2 = 4, 1 x 3 = 3

UAW = 2 + 4 + 3 = 9

UUCW = 2 x 5 + 4 x 10 + 2 x 15
= 10 + 40 + 30 = 80

UUCP = UAW + UUCW
= 9 + 80 = 89

TCF = 0.6 + 0.01 x TF

= 0.6 + 0.01 x (3 x 2 + 3 x 1 + 4 x 1 + 3 x 1 + 3 x 1 + 3 x 0.5 + 3 x 0.5 + 3 x 2 + 5 x 1 + 2 x 1 + 3 x 1 + 3 x 1 + 3 x 1)
= 0.6 + 0.01 x 44
= 1.04

ECF = 1.4 + (-0.03) x EF

= 1.4 + (-0.03) x (3 x 1.5 + 5 x 0.5 + 3 x 0.5 + 4 x 1 + 3 x 1 + 3 x (-1) + 3 x (-1) + 5 x 2)
= 1.4 + (-0.03) x 19.5
= 0.815

UCP = UUCP x TCF x ECF

= 89 x 1.04 x 0.815
= 75.44

EXAMPLE 5.5

Consider Example 5.4 and calculate the effort for the given application.

Solution

Effort = UCP x 20
= 75.44 x 20
= 1508.8 person hours


EXAMPLE 5.6 Consider the following use case model of a result management system:

The number of transactions required for each use case is given as follows:

Assume all complexity factors are average.


Compute the use case points and effort for the project.


Solution

TCF = 0.6 + 0.01 x TF

= 0.6 + 0.01 x (3 x 2 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 0.5 + 3 x 0.5 + 3 x 2 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 1 + 3 x 1)
= 0.6 + 0.01 x 42 = 1.02

ECF = 1.4 + (-0.03) x EF

= 1.4 + (-0.03) x (3 x 1.5 + 3 x 0.5 + 3 x 0.5 + 3 x 1 + 3 x 1 + 3 x (-1) + 3 x (-1) + 3 x 2)
= 1.4 + (-0.03) x 13.5 = 0.995

UCP = UUCP x TCF x ECF


= 66 X 1.02 X 0.995
= 66.98

Effort= 20 x 66.98
= 1339.67 person hours


Class Point Method

This method provides a system-level size estimate for object-oriented software.
The class point method was proposed by Costagliola and Tortora (2005).
It primarily focuses on classes for the estimation of size.
The complexity of a class is analysed by determining the number of methods in the class, the number of attributes in the class and the interaction of the class with other classes.
This method can be used for estimation during the requirement and design phases of the software development life cycle.

The steps required for the estimation of size are given as follows:

1. Identification of classes
2. Determination of complexity of classes
3. Calculation of unadjusted class point
4. Calculation of technical complexity factor
5. Calculation of class point

Identification of Classes

In the class point method, four types of classes are identified. The types of system
components are given in Table 5.6.

Classifying Class Complexity

In the class point method, two measures, CP1 and CP2, are used. CP1 gives an initial estimate of size, while CP2 provides a detailed estimate of size.

The following measures are used in the calculation of CP1 and CP2 measures:

1. Number of external methods (NEM): This measures the size of a class in terms of its methods. It counts the number of public methods in the class.
2. Number of services requested (NSR): This measures the interaction between system components. It counts the number of services requested from other classes.
3. Number of attributes (NOA): This measures the complexity of a class. It counts the number of data members in the class.

In the CP1 measure NEM and NSR are used to classify complexity of a class. Table
5.7 shows the complexity for CP1 measure.

In the CP2 measure, NOA is also considered; thus, a detailed insight into the estimate
of size is obtained.
Tables 5.8 to 5.10 show the details for classifying class complexity for CP2
measure on the basis of NEM, NSR and NOA.


Calculating Unadjusted Class Points

The total unadjusted class point (TUCP) is calculated by assigning complexity weights
based on the classifications made as given in Table 5.11.

After classifying the complexity level of the classes, the TUCP is calculated as follows:

TUCP = Σi Σj wij x xij

where wij is the complexity weight for level j (j is low, average or high) assigned to class type i, and xij is the number of classes of type i (i is the type of class: problem domain, human interface, data management or task management).

Calculating Technical Complexity Factor

The factors F1-F18 give the degree of influence (DI), as shown in Figure 5.5.
Each DI is rated on a scale of 0-5.
The DI values are used to determine the technical complexity factor (TCF).

The TCF is determined by the following formula:

TCF = 0.55 + 0.01 x ΣFi

where ΣFi is the sum of the 18 degree-of-influence ratings.


Calculating Class Point and Effort

The final class point is calculated by multiplying total unadjusted class point values
with technical factor.
The procedure for calculating adjusted class point (CP) is given as:


CP = TUCP x TCF

Costagliola and Tortora (2005) used data from 40 systems developed in the Java language in order to predict effort during two successive semesters of graduate software engineering courses.
They used ordinary least-squares (OLS) regression analysis to derive the effort model.
The effort is defined in terms of person hours for both CP1 and CP2 measures as:

Effort= 0.843 x CP1 + 241.85


Effort= 0.912 x CP2 + 239.75

Two measures, CP1 and CP2, have been described.
The CP1 measure can be used in the early phases of software development, while the CP2 measure can be used once the number of attributes is available.
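The class point calculation and the corresponding effort models can be sketched as follows. The complexity weights wij come from Table 5.11, which is not reproduced here, so they are passed in by the caller; the TCF constant 0.55 follows Costagliola and Tortora's formulation:

```python
def total_unadjusted_class_points(counts_and_weights):
    # TUCP = sum over class types i and complexity levels j of w_ij * x_ij.
    # counts_and_weights: iterable of (x_ij, w_ij) pairs.
    return sum(x * w for x, w in counts_and_weights)

def class_point(tucp, degree_of_influence):
    # TCF = 0.55 + 0.01 * (sum of the 18 DI ratings, each 0-5);
    # CP = TUCP * TCF.
    return tucp * (0.55 + 0.01 * degree_of_influence)

def effort_person_hours(cp, measure="CP1"):
    # OLS regression models from Costagliola and Tortora (2005).
    if measure == "CP1":
        return 0.843 * cp + 241.85
    return 0.912 * cp + 239.75
```

For instance, a TUCP of 100 with all 18 factors rated average (3 each, TDI = 54) gives CP = 100 x 1.09 = 109 and a CP1 effort of about 333.7 person-hours.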

EXAMPLE 5.7

Consider a project with the following parameters:

Assume all the technical complexity factors have average influence. Calculate class
points.


Solution

The total unadjusted class points are calculated as:


EXAMPLE 5.8

Consider a result management system with the following information:

In addition, the system requires


1. User adaptability is significant
2. Rapid prototyping has strong influence
3. Multiple interfaces are significant
4. Distributed functions are moderate

Assume all other factors as average. Calculate CP1 and CP2 using class point method
and effort.

Solution



OBJECT ORIENTED FUNCTION POINT

The traditional function point method can be mapped to obtain object points.
The object point sizing method is also known as the object-oriented function point method, as it is very similar to the function point method.
The object point method does not require much experience, requires less effort for computation and can be calculated quickly.
It is better suited to object-oriented systems and is easier to calculate than the traditional function point method.
The object point method involves counting classes and methods (services).
All of this information is obtained from the object model of the object-oriented design.
In this approach, the function point concepts are mapped to object-oriented concepts, and the ambiguities present in the function point method are removed.

Relationship between Function Points and Object Points

A class is a template that encapsulates attributes and member functions into a single unit.
Objects are instances of a class.
Classes may be divided into external and internal classes depending on their scope and boundary.
The relationship between concepts defined in function point and their corresponding
concepts in object points are shown in Table 5.12.
The internal logical files are mapped to internal classes, external interface files are
mapped to external classes and external inquiries/inputs/outputs are mapped to
services.

The internal classes are the classes that reside inside the application boundaries and the
external classes are the classes that reside outside the application boundaries.
The services are the methods defined in the class. Figure 5.6 depicts the method to
compute the object oriented concepts in object point method.


For each external/internal class, it is necessary to compute the number of data element
types (DETs) and record element types (RETs).
The DETs correspond to the total number of attributes of a class and the RETs
correspond to the total number of subclasses of a class (descendants of a class) as
shown in Figure 5.7.
In inheritance hierarchy, the classes that inherit the properties of the base class while
having their own properties are known as subclasses.


Counting Internal Classes, External Classes and Services

An internal class identifies the total number of DETs and RETs in the class.
In the IFPUG Manual 4.1, DETs are defined as (IFPUG, 1994):

A unique user recognizable, non repeatable field.

DETs are counted by applying the following rules:

1. A DET is counted for each simple attribute (integer, string, real) defined in a class.

For example, the book accession number (integer type) stored in a class is counted as
one DET.

2. A DET is counted for each attribute required to communicate with another internal
class or external class.

In the IFPUG Manual 4.1, an RET is defined as (IFPUG, 1994):

A unique, recognizable subgroup of data elements within an internal logical file or external interface file.

In object-oriented systems, these subgroups are known as subclasses or descendants.

An RET is counted for each subclass of a given class.

For example, if employee is a base class and salaried, contract-based and hourly employee are three subclasses of the class employee, then the RET count for the employee class is 3.

One of the following rules is applicable to a class while counting RETs:

1. Count an RET for each subclass of the internal/external class.


2. If there are no subclasses, count internal/external classes as one RET.

Each service in the class is examined.
For each service, the number of DETs and file types referenced (FTRs) needs to be counted.
The counting of DETs and FTRs involves the simple and complex data types referenced by the methods.
A simple data type is a compiler-defined data type, and a complex data type is a user-defined data type.


The following are the counting rules for DETs and FTR for each service:

1. A DET is counted for each simple data type referenced as arguments of the service
or global variables referenced by the service.
2. An FTR is counted for each complex data type referenced as arguments of the
service or returned by the service.

In Tables 5.13, 5.14 and 5.15, the complexity values for internal classes, external
classes and services are shown.

As an example, consider the result calculation of a student. The class model is given in Figure 5.8.


Figure 5.8 Class model of the result management system.

After classifying the internal classes, external classes and services, these are multiplied
by their complexity values. Finally, the results are summed up using the following
formulas:


where n is the number of internal classes, m is the number of external classes and o is
the number of services.
The classes in the given example are all internal; multiplying the occurrences of internal classes by their corresponding weights and summing the resulting values, we get IC = 4 x 7 = 28.
There are 5 concrete services in the classes. Assuming that the methods have average complexity (their signatures are not given in the example), the service total is 5 x 4 = 20.

Calculating Unadjusted Object Points

The unadjusted object points (UOP) are obtained as follows:

UOP = ICtotal + ECtotal + Stotal

In the example given in Figure 5.8, UOP = ICtotal + Stotal = 28 + 20 = 48.
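The object point totals above can be sketched as a small helper. Since the complexity weights come from Tables 5.13-5.15, each component is passed as (count, weight) pairs:

```python
def unadjusted_object_points(internal, external, services):
    # internal / external / services: lists of (count, complexity_weight)
    # pairs; UOP = IC_total + EC_total + S_total.
    total = lambda pairs: sum(n * w for n, w in pairs)
    return total(internal) + total(external) + total(services)

# Figure 5.8 example: 4 internal classes of weight 7, no external
# classes, 5 average services of weight 4.
uop = unadjusted_object_points([(4, 7)], [], [(5, 4)])  # 48
```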


The above classes contain six methods with high complexity and the complexities of
the adjustment factors are average. Compute object points.

EXAMPLE 5.10
An application has the following data:
5 high internal classes, 2 average external classes and 6 average services. Assume the
adjustment factor as significant. What are the unadjusted object points and object
points?

Solution

RISK MANAGEMENT

Risk
Risk Management
Classification of Risk
Project Risk
Technical Risk
Business Risk
Known Risk
Predictable Risk
Unpredictable Risk
Principle of Risk Management


Risk Management Activities


Risk Assessment
Risk Identification
Risk Analysis
Risk Prioritization
Risk Control
Risk Management Planning
Risk Monitoring
Risk Resolution

Risk

"Tomorrow's problems are today's risks."

"Risk" is a problem that could cause some loss or threaten the progress of the project,
but which has not happened yet.

Risk is defined as the combined effect of the probability of occurrence of an undesirable event and the impact of that event. Risk can delay the delivery of the software and push a project over budget. Risky projects may also fail to meet specified quality levels.
The risk may be defined as:

Risk = Probability of occurrence of an undesired event × Impact of occurrence of that event

Risks may be rated as

1. Urgent: Risks that would cause high loss to the business.


2. High: Risks that would prevent the delivery of the software.
3. Medium: Risks that may prevent the company from meeting a milestone.
4. Low: Routine risks with little or no impact.

It is important to classify risks so that they can be analysed and prioritized based on
their probabilities and impacts.
Risks rated urgent should be addressed before the risks rated high, as they cause huge
loss to the organization.
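The definition Risk = Probability x Impact gives a simple, quantitative way to prioritize a risk register. A minimal sketch (the risk names, probabilities and impact values below are purely illustrative):

```python
# Risk exposure = probability of occurrence x impact of occurrence.
risks = [
    {"name": "schedule slippage",      "probability": 0.6, "impact": 50_000},
    {"name": "key staff hired away",   "probability": 0.2, "impact": 120_000},
    {"name": "changing specification", "probability": 0.4, "impact": 30_000},
]

def exposure(risk):
    return risk["probability"] * risk["impact"]

# Address the highest-exposure risks first.
for r in sorted(risks, key=exposure, reverse=True):
    print(f"{r['name']}: exposure = {exposure(r):,.0f}")
```

Note that a high-impact risk (key staff hired away) can still rank below a more probable, lower-impact one; exposure balances the two dimensions.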

These potential issues might harm the cost, schedule or technical success of the project, the quality of our software product, or project team morale.


Risk management is the process of identifying, addressing and eliminating these problems before they can damage the project.

We need to differentiate risks, as potential issues, from the current problems of the
project.
Different methods are required to address these two kinds of issues.

Example

A staff shortage, because we have not been able to select people with the right
technical skills, is a current problem; but the threat of our technical persons
being hired away by the competition is a risk.

Risk Management

A software project can be affected by a large variety of risks. In order to be able
to systematically identify the significant risks which might affect a software
project, it is essential to classify risks into different classes. The project
manager can then check which risks from each class are relevant to the project.

Classification of Risk

Project Risk
Technical Risk
Business Risk
Known Risk
Predictable Risk
Unpredictable Risk

Project Risk

Project risks concern different forms of budgetary, schedule, personnel, resource,
and customer-related problems.
A vital project risk is schedule slippage.
Since software is intangible, it is very tough to monitor and control a software
project.
It is very tough to control something which cannot be seen.
For any manufacturing project, such as the manufacturing of cars, the project
manager can see the product taking shape.


Technical risks

Technical risks concern potential design, implementation, interfacing, testing, and
maintenance issues.
They also include ambiguous specifications, incomplete specifications, changing
specifications, technical uncertainty, and technical obsolescence.
Most technical risks arise from the development team's insufficient knowledge
about the project.

Business risks:

This type of risk includes the risk of building an excellent product that no one
needs, losing budgetary or personnel commitments, etc.

Known risks:

Those risks that can be uncovered after careful assessment of the project plan, the
business and technical environment in which the project is being developed, and
other reliable information sources (e.g., an unrealistic delivery date).

Predictable risks:

Those risks that are extrapolated from previous project experience (e.g., past
staff turnover).

Unpredictable risks:

Those risks that can and do occur, but are extremely tough to identify in advance.

Principle of Risk Management

1. Global Perspective:
In this, we review the bigger system description, design, and
implementation.
We look at the chance and the impact the risk is going to have.

2. Take a forward-looking view:


Consider the threats which may appear in the future and create plans for
handling those future events.

3. Open Communication:
This is to allow the free flow of communications between the client and
the team members so that they have certainty about the risks.

4. Integrated management:
In this method risk management is made an integral part of project
management.

5. Continuous process:
In this phase, the risks are tracked continuously throughout the risk
management paradigm.

Risk Management Activities

Risk Assessment
Risk Identification
Risk Analysis
Risk Prioritization
Risk Control
Risk Management Planning
Risk Monitoring
Risk Resolution

Risk management is a key part of project planning activities and the specific risky
areas are highlighted in the plan.
The project plan is expected to highlight both probability of failure and impact of the
failure and to describe the steps to be taken in order to reduce the risk.


Risk Assessment

The objective of risk assessment is to rank the risks in terms of their
loss-causing potential.

For risk assessment, first, every risk should be rated on two measures:

The probability of the risk coming true (denoted as r).
The consequence of the problems associated with that risk (denoted as s).

Based on these two measures, the priority of each risk can be estimated:

p = r × s

Where p is the priority with which the risk must be controlled, r is the probability of
the risk becoming true, and s is the severity of loss caused due to the risk becoming
true.
Once all identified risks are ranked, the most likely and most damaging risks can
be handled first, and more comprehensive risk abatement methods can be designed for
these risks.
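As a sketch, this prioritization step can be automated. The risk names, probabilities
(0-1 scale) and severities (1-10 scale) below are purely illustrative assumptions,
not values from a real project:

```python
# Hypothetical risk register: (name, probability r, severity s on a 1-10 scale).
risks = [
    ("Schedule slippage", 0.6, 8),
    ("Key staff turnover", 0.3, 9),
    ("Requirement change", 0.5, 5),
]

# Priority p = r * s; sort so the most damaging risks are handled first.
prioritized = sorted(
    ((name, r * s) for name, r, s in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, p in prioritized:
    print(f"{name}: priority {p:.2f}")
```

Schedule slippage (0.6 × 8 = 4.8) comes out on top, so it would be addressed first.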

1. Risk Identification:

The project manager needs to anticipate the risks in the project as early as
possible so that the impact of risk can be reduced by making effective risk
management planning.
A project can be affected by a large variety of risks.
To identify the significant risks that might affect a project, it is necessary to
categorize them into different classes.

There are different types of risks which can affect a software project:

 Technology risks: Risks that arise from the software or hardware
technologies that are used to develop the system.
 People risks: Risks that are associated with the people in the development
team.
 Organizational risks: Risks that arise from the organizational
environment where the software is being developed.
 Tools risks: Risks that arise from the software tools and other support
software used to create the system.


 Requirement risks: Risks that arise from changes to the customer
requirements and the process of managing the requirements change.
 Estimation risks: Risks that arise from the management estimates of
the resources required to build the system.

2. Risk Analysis:

During the risk analysis process, you have to consider every identified risk and
make a judgement about the probability and seriousness of that risk.
There is no simple way to do this. You have to rely on your judgement and
experience of previous projects and the problems that arose in them.
It is not possible to make an exact numerical estimate of the probability and
seriousness of each risk. Instead, you should assign the risk to one of several bands:

The probability of the risk might be determined as

very low (0-10%),


low (10-25%),
moderate (25-50%),
high (50-75%) or
very high (+75%).

The effect of the risk might be determined as

catastrophic (threatens the survival of the project),


serious (would cause significant delays),
tolerable (delays are within allowed contingency), or
insignificant.
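The probability bands above can be captured in a small helper. The boundary
handling — which band owns exactly 10%, 25%, and so on — is an assumption, since
the notes leave the edges ambiguous:

```python
def probability_band(p):
    """Map a probability estimate (0-1) to the bands used in risk analysis.

    Which band owns a boundary value (0.10, 0.25, ...) is an assumption.
    """
    if p <= 0.10:
        return "very low"
    if p <= 0.25:
        return "low"
    if p <= 0.50:
        return "moderate"
    if p <= 0.75:
        return "high"
    return "very high"

print(probability_band(0.30))  # a 30% risk falls in the moderate band
```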

Risk Control

It is the process of managing risks to achieve the desired outcomes.

After all the identified risks of a project have been assessed, plans must be made
to contain the most harmful and the most likely risks.
Different risks need different containment methods.
In fact, most risks need ingenuity on the part of the project manager in tackling
the risk.

There are three main methods to plan for risk management:


Avoid the risk: This may take several ways such as discussing with the client to
change the requirements to decrease the scope of the work, giving incentives to the
engineers to avoid the risk of human resources turnover, etc.
Transfer the risk: This method involves getting the risky element developed by a
third party, buying insurance cover, etc.

Risk reduction: This means planning ways to contain the loss due to the risk. For
instance, if there is a risk that some key personnel might leave, new recruitment
can be planned.

Risk Leverage:

To choose between the various methods of handling a risk, the project manager must
consider the cost of controlling the risk and the corresponding reduction of risk.
For this, the risk leverage of the various risks can be estimated.

Risk leverage is the difference in risk exposure divided by the cost of reducing
the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) /
(cost of reduction)
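The leverage formula translates directly into code. The figures in the example are
made-up illustrative values (in arbitrary cost units), not from the notes:

```python
def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """Risk leverage = reduction in risk exposure per unit cost of reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# Cutting exposure from 100 to 20 at a reduction cost of 25 gives leverage 3.2,
# so every unit spent on the countermeasure removes 3.2 units of exposure.
print(risk_leverage(100, 20, 25))
```

A leverage above 1 suggests the countermeasure removes more exposure than it costs.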

1. Risk planning:

The risk planning method considers each of the key risks that have been identified
and develops ways to manage these risks.
For each of the risks, you have to think of the actions that you may take to
minimize the disruption to the project if the issue identified in the risk occurs.
You should also think about the data that you might need to collect while
monitoring the project so that issues can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It
relies on the judgement and experience of the project manager.

2. Risk Monitoring:

Risk monitoring is the process of checking that your assumptions about the product,
process, and business risks have not changed.
The risks should be monitored on a continuous basis by re-evaluating the risks, the
probability of occurrence of the risks and the impact of the risks.

This ensures that:



The identified risks have been reduced or resolved.


The new risks are discovered.
The impact and magnitude of the risk are reassessed.

The risk can be monitored by scheduling regular review meetings to evaluate risks.
Some risks may move down the risk list, some may be eliminated from the list and
some new risks may be identified and added to the list.

Risk Resolution

Risk resolution is the implementation and execution of the risk reduction techniques
specified and scheduled in the risk management plan.
Risk monitoring ensures that the risk reduction strategies are implemented and
executed according to schedule.
It is aimed to ensure that the risk management process is a closed-loop process and
progresses on track. Rather than monitoring all risk items, it is more effective to
focus on the top-N risk items of the project, where N is usually limited to 10 and
depends on the project size, nature, and progress status.
The status of the top-N risk items is updated to reflect changes of their rankings
from the last review, number of months on the list, and risk-resolution status.

SOFTWARE QUALITY MODELS

Software Quality
Software Quality Attributes
Importance of Software Quality Model
Software Quality Models
 McCall’s Model
 Boehm’s Model
 FURPS Model
 ISO 9000
 ISO 9126
 Capability Maturity Model

Software Quality

1. The degree to which a system, component, or process meets specific requirements.


2. The degree to which a system, component, or process meets customer or user needs
or expectations.

Software Quality Attributes

Software quality can be measured in terms of attributes.

The attribute domains that are required to define for a given software are as follows:

1. Functionality
2. Usability
3. Testability
4. Reliability
5. Maintainability
6. Adaptability

Importance of Software Quality Model

With the growing customer demand for software systems, expectations for quality
have also grown in terms of how reliable a software product will be.
As a software application is quite complex in nature, the task of verifying
whether a specific functionality has been implemented or not becomes quite
difficult.
Therefore software developers often divide the tasks in the form of deliverables, that is,
defining a benchmark to mark the completion of one specific task.
If the errors in some of the previous phases are not rectified on time, they may be
carried over to the next consecutive phases, which may cause serious problems in
the later stages of the project.

Software Quality Model

McCall’s Software Quality Model

McCall's model is the first quality model developed, which defines a layout of the
various aspects that define the product's quality. It defines product quality in
the following manner – Product Revision, Product Operation, Product Transition.
Product revision deals with maintainability, flexibility and testability; product
operation is about correctness, reliability, efficiency, integrity and usability;
and product transition covers portability, reusability and interoperability.

Boehm’s Software Quality Model

Boehm’s model describes how easily and reliably a software product can be used.
This model elaborates the aspects of the McCall model in detail.
It begins with high-level characteristics that correspond to higher-level
requirements.
The model's general utility is divided into portability, as-is utility and
maintainability; as-is utility is further refined into reliability, efficiency and
human engineering.
Further, maintainability is refined into testability, understandability and
modifiability.


The high-level characteristics represent primary use of the system.

The following primary level factors are addressed by the high-level characteristics:

 As a utility: The ease with which the software can be used in its present form.
 Maintainability: The ease with which the software can be understood, modified
and tested.
 Portability: The ease with which the software can be used by changing it from
one platform to another platform.

FURPS Model

This model categorises requirements into functional and non-functional requirements.

The term FURPS is an acronym for

(F) stands for Functionality, which relies on expected inputs and outputs; the
remaining letters cover non-functional requirements:
(U) stands for Usability, which includes human factors, aesthetics and
documentation of user training material,
(R) stands for Reliability (frequency and severity of failure, mean time between
failures),
(P) stands for Performance, which includes non-functional concerns such as speed,
efficiency and response time, and
(S) stands for Supportability, which includes backup, and requirements of design
and implementation, etc.

ISO 9000

The International Organization for Standardization (ISO) made different attempts to


improve the quality with ISO 9000 series.
It is a well-known and widely used series.
The ISO 9000 series of standards is not only for software, but it is a series of five
related standards that are applicable to a wide range of applications such as industrial
tasks including design, production, installation and servicing.
ISO 9001 is the standard that is applicable to software quality.

The aim of ISO 9001 is to define, understand, document, implement, maintain,


monitor, improve and control the following processes:

1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document control
6. Purchasing
7. Purchaser-supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Inspection, measuring and test equipment
12. Inspection and test status
13. Control of nonconforming product
14. Corrective action
15. Handling, storage, packaging and delivery
16. Quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques


ISO 9126

ISO 9126 is an international standard for software product quality published by ISO/IEC.

1. Functionality:

It is an essential feature of any software product that achieves the basic purpose for
which the software is developed.

Example :

The LMS should be able to maintain book details, maintain member details, issue
book, return book, reserve book, etc.
Functionality includes the essential features that a product must have. It includes
suitability, accuracy, interoperability and security.

2. Reliability:

Once the functionality of the software has been completed, the reliability is defined as
the capability of defect-free operation of software for a specified period of time and
given conditions.
One of the important features of reliability is fault tolerance.

Example:

If the system crashes, then when it recovers the system should be able to continue its
normal functioning.
Other features of reliability are maturity and recoverability.

3. Usability:

The ease with which the software can be used for each specified function is another
attribute of ISO 9126.
The ability to learn, understand and operate the system is the sub-characteristics of
usability.

Example:

The ease with which the operation of cash withdrawal function of an ATM system can
be learned is a part of usability attribute.


4. Efficiency:

This characteristic concerns with performance of the software and resources used by
the software under specified conditions.

Example : if a system takes 15 minutes to respond, then the system is not efficient.
Efficiency includes time behaviour and resource behaviour.

5. Maintainability:

The ability to detect and correct faults in the maintenance phase is known as
maintainability.
Maintainability is affected by the readability, understandability and modifiability of the
source code.
The ability to diagnose the system for identification of the cause of failures
(analysability), the effort required to test the software (testability) and the
risk of unexpected effects of modifications (stability) are the sub-characteristics
of maintainability.

6. Portability:

This characteristic refers to the ability to transfer the software from one
platform or environment to another platform or environment.

Capability Maturity Model

One of the most important quality models of software quality maintenance.


The model lays down a very simple approach to define the quality standards.


It has five levels namely – initial, repeatable, defined, managed, optimizing.

At the initial level, processes are ad hoc, and success depends solely on how
individuals handle the work.

The repeatable level states that at least the basic requirements or techniques have been
established and the organisation has attained a certain level of success.

By the next level that is , defined, the company has already established a set of
standards for smooth functioning of a software project/process.

At the managed level, an organisation monitors its own activities through a data
collection and analysis.

At the fifth level that is the optimizing level, constant improvement of the prevailing
process becomes a priority, a lot of innovative approach is applied towards the
qualitative enhancement.

The past experience reports show that moving from level 1 to level 2 may take 3 to 5
years.
The CMM is becoming popular and many software organizations are aspiring to
achieve CMM level 5.
The acceptability and pervasiveness of the CMM activities are helping the
organizations to produce a quality software.


ANALYZING THE METRIC DATA

Summary Statistics for Pre-examining Data


Measures of Central Tendency
Measures of Dispersion
Metric Data Distribution
Outlier Analysis
Correlation Analysis
Exploring Analysis

Analyzing the Metric Data

The role of statistics is to function as a tool in analysing research data and drawing
conclusions from it. The research data must be suitably reduced so that the same can be
read easily and used for further analysis. Metric data can be analysed using one or
many statistical techniques and meaningful inferences can be drawn.

Summary Statistics for Pre-Examining Data

The role of statistics is to function as a tool in analysing research data and drawing
conclusions from it.
The research data must be suitably reduced to be read easily and used for further
analysis.
Descriptive statistics concern development of certain indices or measures to summarize
data.
Data can be summarized by using measures of central tendency (mean, median and
mode) and measures of dispersion (standard deviation, variance, and quartiles).

Measures of Central Tendency

Measures of central tendency include mean, median and mode.


These measures are known as measures of central tendency as they give us the idea
about the central values of the data around which all the other data points have a
tendency to gather.
Mean can be computed by taking the average of the values in the data set and is
given as:

mean = (x1 + x2 + ... + xn) / n

Median gives the middle value in the data set, which means half of the data points
are below the median value and half of the data points are above the median value.
It is calculated as the (1/2)(n + 1)th value of the data set, where n is the number
of data points in the data set.
The most frequently occurring value in the data set is denoted by mode.
The concept has significance for nominal data.
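These three measures are available directly in Python's standard library. The data
set below is an illustrative, made-up metric sample (e.g., defects found per module):

```python
import statistics

# Illustrative metric sample; the values are assumptions for demonstration only.
data = [2, 3, 3, 5, 7, 8, 9]

mean = statistics.mean(data)      # average of the values
median = statistics.median(data)  # middle value of the ordered data
mode = statistics.mode(data)      # most frequently occurring value

print(mean, median, mode)
```

For this sample the median is 5 (three values below, three above) and the mode is 3
(it occurs twice).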

Measures of Dispersion

Measures of dispersion include standard deviation, variance and quartiles.
Measures of dispersion tell us how the data is scattered or spread.
Standard deviation measures the distance of the data points from the mean. If most
of the data points are close to the mean, then the standard deviation of the
variable is small.
The standard deviation (for a sample) is calculated as given below:

s = sqrt( Σ (xi - mean)² / (n - 1) )

Variance is a measure of variability and is the square of standard deviation.

The quartile divides the metric data into four equal parts.
For calculating quartile, the data is first arranged in ascending order.


The 25% of the metric data is below the lower quartile (25 percentile), 50% of the
metric data is below the median value and 75% of the metric data is below the upper
quartile (75 percentile).
Figure 5.21 shows the division of data set into four parts by using quartiles.

The lower quartile (Q1) is computed by:


1. finding the median of the data set
2. finding the median of the lower half of the data set.
The upper quartile (Q3) is computed by:
1. finding the median of the data set
2. finding the median of the upper half of the data set.
The interquartile range is calculated as the difference between the upper quartile and
the lower quartile and is given as
Interquartile range (IQR) = Q3 - Q1
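The median-of-halves procedure above can be sketched directly. Excluding the overall
median from both halves when n is odd is one common convention; the data values are
illustrative:

```python
import statistics

def quartiles(data):
    """Q1 and Q3 as medians of the lower and upper halves of the sorted data.

    When n is odd, the overall median is excluded from both halves - one of
    several conventions for computing quartiles.
    """
    xs = sorted(data)
    n = len(xs)
    half = n // 2
    lower = xs[:half]
    upper = xs[half + 1:] if n % 2 else xs[half:]
    return statistics.median(lower), statistics.median(upper)

data = [1, 3, 4, 6, 8, 9, 11]
q1, q3 = quartiles(data)
iqr = q3 - q1
print(q1, q3, iqr)  # -> 3 9 6
```

Here the lower half [1, 3, 4] gives Q1 = 3, the upper half [8, 9, 11] gives Q3 = 9,
and IQR = 9 - 3 = 6.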

Outlier Analysis

Data points, which are located in an empty part of the sample space, are called outliers.
These are the data values that are numerically distant from the rest of the data.
Once the outliers are identified, the decision about the inclusion or exclusion of the
outlier must be made. The decision depends upon the reason why the case is identified
as outlier.

There are three types of outliers:


univariate,
bivariate and
multivariate.

Univariate outliers are those exceptional values that occur within a single variable.
Bivariate outliers occur within the combination of two variables and
Multivariate outliers are present within the combination of more than two variables.
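One common rule for flagging univariate outliers — not prescribed by the notes, but
widely used — is Tukey's fences, which combine the quartiles and IQR from the
previous section:

```python
import statistics

def univariate_outliers(data, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's fences).

    The 1.5 multiplier is a common convention, not mandated by the notes.
    """
    q1, _, q3 = statistics.quantiles(sorted(data), n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < low or x > high]

# 95 is numerically distant from the rest of the (made-up) data, so it is flagged.
print(univariate_outliers([10, 12, 12, 13, 12, 11, 95]))
```

Once flagged, each outlier still needs the inclusion/exclusion decision described
above.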


Correlation Analysis

Correlation analysis studies the variation of two or more variables for determining the
amount of correlation between them.

Hopkins (2003) calls a correlation coefficient value


between 0.5 and 0.7 large,
0.7 and 0.9 very large, and
0.9 and 1.0 almost perfect.
Correlation analysis can be used to find the relationship among the metrics.
This tells whether the metrics are independent or contain redundant information.
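A sketch of Pearson's correlation coefficient together with the Hopkins labels
quoted above; the two metric samples (lines of code and defects for five modules)
are made-up illustrative values:

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two metric samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def hopkins_label(r):
    """Verbal label for |r| using the Hopkins (2003) scale quoted above."""
    a = abs(r)
    if a >= 0.9:
        return "almost perfect"
    if a >= 0.7:
        return "very large"
    return "large" if a >= 0.5 else "below large"

# Hypothetical size and defect metrics for five modules (assumed values):
loc = [100, 150, 200, 250, 300]
defects = [2, 3, 5, 6, 8]
r = pearson_r(loc, defects)
print(round(r, 3), hopkins_label(r))
```

A coefficient this close to 1 would suggest the two metrics carry largely redundant
information.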

Exploring Analysis

The metric variables can be of two types-independent variables and dependent


(target)variables.
The effect of independent variables on the dependent variables can be explored by
using various statistical and machine learning methods.
The choice of the method for analysing the relationship between independent and
dependent variables depends upon the type of the dependent variable (continuous or
discrete).
If the dependent variable is continuous, then the widely known statistical method
(linear regression method) may be used, whereas if the dependent variable is of
discrete type, then logistic regression method may be used for analysing relationships.
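The selection rule stated above is simple enough to capture as a dispatch helper (a
sketch; the string labels are assumptions for illustration):

```python
def choose_method(dependent_type):
    """Select the analysis method from the dependent variable's type,
    following the rule stated in the notes."""
    if dependent_type == "continuous":
        return "linear regression"
    if dependent_type == "discrete":
        return "logistic regression"
    raise ValueError("dependent variable must be 'continuous' or 'discrete'")

print(choose_method("continuous"))
print(choose_method("discrete"))
```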

METRICS FOR MEASURING SIZE AND STRUCTURE

Size Estimation
Information Flow Metrics

Metrics for measuring Size and Structure

There are a range of metrics available to measure the size and structure of a software
system.

Size metrics can be used to estimate the size of the software, input to estimation
models and can be used to monitor the progress during software development.
The structural metrics help us to analyse the product and increase the
understanding of the product.
They may also provide insight into the complexity of the software.


Size Estimation

The measurement of size is an important and difficult area in software development.


Size metrics give an indication about the length of the source code.
The physical quantities such as mass, velocity or temperature are easily measurable,
but measuring the size of the software is difficult.
The most commonly used size metric is lines of source code (LOC).
There are many ways in which they can be counted.

Example :

Many programs use comment lines and blank lines to make their programs more
understandable and readable.

Little programming effort is required to write blank lines and comment lines.
Thus, these lines may be excluded from the count of lines of source code.
Similarly, unexecutable statements are also not included by some programmers in the
LOC count.
Hence, the programmer must be careful while selecting the method for counting LOC.
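One possible counting policy can be sketched as follows. Here only blank lines and
full-line comments are excluded; whether to also exclude inline comments or
non-executable statements is exactly the policy choice the notes warn about:

```python
def count_loc(source):
    """Count non-blank, non-comment lines of Python-style source.

    Only full-line '#' comments are excluded here; treatment of inline
    comments and non-executable statements is a counting-policy choice.
    """
    return sum(
        1
        for line in source.splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

sample = """# compute a square
x = 4

y = x * x  # an inline comment does not disqualify the line
print(y)
"""
print(count_loc(sample))  # -> 3
```

Under a different policy (e.g., counting comment lines too) the same file would
yield a different LOC figure, which is why the policy must be stated alongside the
metric.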

The functional units can be counted in the early phases of software development.
In object oriented software development, the functionality of a software can be
depicted in terms of use cases and classes.
The use case point and class point method are used to count the functional units in
object-oriented software development.

Information Flow Metrics

Information flow metrics represent the coupling (degree of interdependence between


classes) in the system.
A system is said to be highly coupled if its classes accept information from and/or pass
information to other classes.
The aim of the developer should be to minimize coupling, as high-coupled systems
tend to be less reliable and maintainable.
Fan-in and Fan-out refer to the number of classes collaborating with each other and are
defined as follows:
1. Fan-in: It counts the number of other classes that call a given class A.
2. Fan-out: It counts the number of other classes called by a given class A.

A low fan-out value is desirable since high fan-out values represent a large amount
of coupling present in the system.
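The two counts can be computed from a list of call relationships. The class names
and call pairs below are hypothetical, not taken from the figure in the notes:

```python
# Hypothetical call relationships between classes: (caller, callee) pairs.
calls = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]

def fan_in(cls):
    """Number of other classes that call the given class."""
    return sum(1 for _, callee in calls if callee == cls)

def fan_out(cls):
    """Number of other classes called by the given class."""
    return sum(1 for caller, _ in calls if caller == cls)

# D is called by B and C (fan-in 2); A calls B and C (fan-out 2).
print(fan_in("D"), fan_out("A"))
```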

Figure 5.27 shows values of fan-in and fan-out for a small system consisting of six
classes.

MEASURING SOFTWARE QUALITY

Software Quality
Software Quality Metrics

Code Quality
Reliability
Performance
Usability
Correctness
Maintainability
Integrity
Security

Software Quality

In Software Engineering, Software Measurement is done based on some Software


Metrics where these software metrics are referred to as the measure of various
characteristics of a Software.
In software engineering, Software Quality Assurance (SQA) assures the quality of
the software.
A set of SQA activities is continuously applied throughout the software process.
Software Quality is measured based on some software quality metrics.
There is a number of metrics available based on which software quality is measured.


Software Quality Metrics

Code Quality
Reliability
Performance
Usability
Correctness
Maintainability
Integrity
Security

1. Code Quality

Code quality metrics measure the quality of code used for software project
development.
Maintaining the software code quality by writing Bug-free and semantically correct
code is very important for good software project development.
In code quality,
both Quantitative metrics like the number of lines, complexity, functions, rate of
bugs generation, etc, and
Qualitative metrics like readability, code clarity, efficiency, and maintainability,
etc are measured.

2. Reliability

Reliability metrics express the reliability of software under different conditions.
Whether the software is able to provide the expected service at the right time is
checked.
Reliability can be checked using Mean Time Between Failure (MTBF) and Mean Time
To Repair (MTTR).
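MTBF and MTTR are simple averages, and together they give the steady-state
availability MTBF / (MTBF + MTTR). The uptime, repair-time and failure figures
below are illustrative assumptions:

```python
def mtbf(total_uptime_hours, failures):
    """Mean Time Between Failures."""
    return total_uptime_hours / failures

def mttr(total_repair_hours, failures):
    """Mean Time To Repair."""
    return total_repair_hours / failures

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures: 950 h of uptime and 50 h of repair over 5 failures.
m_between, m_repair = mtbf(950, 5), mttr(50, 5)
print(m_between, m_repair, availability(m_between, m_repair))  # -> 190.0 10.0 0.95
```

So this hypothetical system fails on average every 190 hours, takes 10 hours to
repair, and is available 95% of the time.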

3. Performance

Performance metrics are used to measure the performance of the software.


Each software has been developed for some specific purposes.
Performance metrics measure the performance of the software by determining whether
the software is fulfilling the user requirements or not, by analyzing how much time and
resource it is utilizing for providing the service.


4. Usability

Usability metrics check whether the program is user-friendly or not.


Each software product is used by end-users, so it is important to measure whether
the end-user is happy using the software.

5. Correctness

Correctness is one of the important software quality metrics as this checks whether the
system or software is working correctly without any error by satisfying the user.
Correctness gives the degree to which each function performs as designed.

6. Maintainability

Each software product requires maintenance and up-gradation.


Maintenance is an expensive and time-consuming process.
So if the software product provides easy maintainability then we can say software
quality is up to mark.
Maintainability metrics include the time required to adapt to new features /
functionality, Mean Time to Change (MTTC), performance in changing environments,
etc.

7. Integrity

Software integrity is important in terms of how easy it is to integrate the
software with other required software, which increases software functionality, and
how well integration of unauthorized software is controlled, since such
uncontrolled integration increases the chances of cyber attacks.

8. Security

Security metrics measure how secure the software is.


In the age of cyber terrorism, security is the most essential part of every software.
Security assures that there are no unauthorized changes, no fear of cyber attacks, etc
when the software product is in use by the end-user.

OBJECT ORIENTED METRICS

Object Oriented Metrics in Software Engineering


Object Oriented Software Engineering Metrics
Localization
Encapsulation
Information Hiding
Inheritance
Object Abstraction Technique

Object Oriented Metrics in Software Engineering

Metrics can be used to reinforce good OO programming techniques, which lead to
more reliable code.
Object-oriented software engineering metrics are units of measurement that are used
to characterize:
 object-oriented software engineering products, e.g., designs, source code,
and the test cases.
 object-oriented software engineering processes, e.g., designing and
coding.
 object-oriented software engineering people, e.g., productivity of an
individual designer.

Object Oriented Software Engineering Metrics

Localization
Encapsulation
Information Hiding
Inheritance
Object Abstraction Technique

Localization

It is the process of placing items in close physical proximity to each other.

Functional decomposition processes localize information around functions.


Data-driven approaches localize information around data.
Object-oriented approaches localize information around objects.

Encapsulation

It is the packaging of a collection of items.


Low-level examples of encapsulation include records and arrays.
Subprograms are mid-level mechanisms for encapsulation.
There are still higher-level encapsulation mechanisms in object-oriented
programming languages, e.g., C++’s classes, Ada’s packages, and Modula-3’s
modules.


Objects encapsulate:

Knowledge of state
Advertised capabilities
Other objects
Exceptions
Constants
Concepts
Information Hiding

It is the suppression or hiding of details of objects.


We show only the information which is needed to accomplish our goals.
Degree of information hiding ranges from partially restricted visibility to total
invisibility.
Encapsulation and information hiding are not the same thing; e.g., an item can be
encapsulated but still be totally visible.
It plays a direct role in such metrics as object coupling and the degree of information
hiding.

Inheritance

It is the mechanism whereby one object acquires characteristics from one, or more,
other objects.

Some object-oriented languages support only single inheritance.


Some object-oriented languages also support multiple inheritance.
Inheritance types and their semantics vary from language to language.

There are many object-oriented software engineering metrics which are based on
inheritance e.g.,
number of children
number of parents
class hierarchy nesting level

Abstraction

It is the mechanism by which we focus only on the important details of a concept,
while ignoring the inessential details.


It is a relative concept.
There are also different categories of abstraction, e.g., functional, data,
process and object abstraction.
Objects are treated as high-level entities in object abstraction.

Classes

There are three commonly used views on the definition for “class”.

Class as a cookie cutter:

For structurally identical items, a class is a pattern, template, or blueprint.

The items which can be created using the class are called instances.

Class as an instance factory:

Basically, a class is a thing which contains both a pattern and a mechanism for
creating items based on that pattern; instances are the individual items that are
“manufactured” by using the class's creation mechanism.

Class as a set:

A class is the set of all the items which are created using a specific pattern,
i.e., the class is the set of all instances of that pattern.
