MC4102 OOSE Unit 5 KVL Notes
Need of Object Oriented Software Estimation – Lorenz and Kidd Estimation – Use
Case Points Method – Class Point Method – Object Oriented Function Point –
Risk Management – Software Quality Models – Analyzing the Metric Data –
Metrics for Measuring Size and Structure – Measuring Software Quality - Object
Oriented Metrics
Project Planning
Scope Management
Project Estimation
Project Planning
Scope Management
Scope management defines the scope of the project; this includes all the activities and processes that need to be carried out to produce a deliverable software product.
Scope management is essential because it creates boundaries for the project by clearly defining what will be done in the project and what will not be done.
This confines the project to limited, quantifiable tasks that can be easily documented, which in turn avoids cost and time overruns.
Project Estimation
Software size may be estimated either in terms of KLOC (Kilo Lines of Code) or by counting the number of function points in the software.
Lines of code depend upon coding practices, while function points vary according to the user or software requirements.
Effort estimation
Time estimation
Once size and efforts are estimated, the time required to produce the software can be
estimated.
Effort required is segregated into sub categories as per the requirement specifications
and interdependency of various components of software.
Software tasks are divided into smaller tasks, activities or events using a Work
Breakdown Structure (WBS).
The tasks are scheduled on day-to-day basis or in calendar months.
The sum of the time required to complete all tasks, in hours or days, is the total time
invested to complete the project.
Cost estimation
This might be considered as the most difficult of all because it depends on more
elements than any of the previous ones.
For estimating project cost, it is required to consider –
Size of software
Software quality
Hardware
Additional software or tools, licenses etc.
Personnel with task-specific skills
Travel involved
Communication
Training and support
We discussed various parameters involving project estimation such as size, effort, time
and cost. Project manager can estimate the listed factors using two broadly recognized
techniques –
Decomposition Technique
Putnam Model
COCOMO
Traditional software estimation techniques include lines of source code and function point
analysis for size estimation, and COCOMO 81, COCOMO II and the Putnam resource allocation model for cost estimation.
The use case diagram may be used to predict the size and hence the effort of the
software at an early stage of software development.
Classes are also an important element for measuring size in object-oriented software.
The functionality of an object oriented software can be depicted using use cases and
these use cases are transformed using classes.
Thus, the effort can be estimated using the size measures computed for object-oriented
products.
The Lorenz and Kidd method for estimation of size and effort is one of the earliest
methods developed in 1994 (Lorenz and Kidd, 1994).
The results of this method are based on the study of 5-8 projects developed in C++ and
Smalltalk.
Lorenz and Kidd provided two methods for estimation of a number of classes:
The number of scenario scripts (same as use cases) may be used to estimate size of
classes.
This method for determining the number of classes can be used in early phases of
software development life cycle, i.e., before the classes have been identified.
After elaborating the design phase, the number of classes can be determined.
Lorenz and Kidd differentiated between key classes and support classes.
Key classes are specific to business applications and are ranked with higher priority by
the customer.
These classes also involve many scenarios.
Finally, the total number of classes is obtained by adding key classes and support
classes.
According to Lorenz and Kidd, each class requires 10 to 20 person days for
implementation.
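The steps above can be sketched in Python. The 2.5 support-class multiplier and the 15 person-days per class below are illustrative assumptions chosen from within the ranges quoted in these notes, not fixed values of the method.

```python
def lorenz_kidd_estimate(key_classes, support_multiplier=2.5,
                         person_days_per_class=15):
    """Lorenz and Kidd style size/effort estimate (sketch).

    support_multiplier and person_days_per_class are assumed
    illustrative defaults; the notes quote 10-20 person days
    per class for implementation.
    """
    support_classes = key_classes * support_multiplier
    total_classes = key_classes + support_classes
    effort_person_days = total_classes * person_days_per_class
    return total_classes, effort_person_days

# With 45 key classes (as in Example 5.2): 45 + 45 * 2.5 = 157.5 classes
total_classes, effort = lorenz_kidd_estimate(45)
```

The total-class figure matches the 157.5 computed in Example 5.2; the effort figure follows only under the assumed 15 person-days per class.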
EXAMPLE 5.1
Solution
= 17 X 15 = 255
EXAMPLE 5.2
Solution
= 45 X 2.5 = 112.5
= 112.5 + 45 = 157.5
The use case points method was developed by Gustav Karner of Objectory (currently
known as IBM Rational Software) in 1993 (Karner, 1993).
The method is used for estimating size and effort of object-oriented projects using use
cases.
The method is an extension of function point analysis technique developed by Albrecht
and Gaffney (1983).
Karner's work on use case points was written in his diploma thesis titled "Metrics
Objectory".
The use case points method measures the functionality of the software based on the use
case counts.
The use case model is a very popular technique for requirements gathering and can be
used at early phases of software development in order to provide estimations of the
project.
In this step, the complexity levels of the actors and use cases are identified and
their weighted sum is computed.
The first step involved in the calculation of use case points is the classification of
actors and use cases. The actors are ranked according to their complexity, i.e. simple,
average or complex.
The criteria for classifying an actor as simple, average or complex
Mathematically, UAW = Σ (i = 1 to n) w_i, where n is the number of actors, a_i is the ith actor and w_i is the value
of the weighting factor of the ith actor.
The use cases are ranked according to their complexity: simple, average or complex.
The criteria for classifying a use case are given in Table 5.2.
1. A use case may be classified as simple, average or complex based on the number of
transactions.
A transaction can be defined as a collection of activities that are counted by counting
the use case steps.
2. The other method to classify the use case is counting the analysis objects which are
counted by determining the number of objects that we will need to implement a use
case.
The count of use cases at each complexity level is multiplied by the corresponding weighting factor, and the products are summed to get the unadjusted use case weight (UUCW).
Mathematically, UUCW = Σ (i = 1 to m) w_i, where m is the number of use cases and u_i is the ith use case.
After classifying actors and use cases, the resultant unadjusted use case points (UUCP)
are computed by adding UAW and UUCW, as shown in mathematical form below:
UUCP = UAW + UUCW
A technical complexity factor (TCF) covers a system requirement other than those concerned with information content, one that is intrinsic to and affects the size of the task but does not arise from the project environment.
TCFs vary depending on the difficulty level of the system. Each factor is rated on a
scale of 0 to 5 as shown in Figure 5.3.
Table 5.4 shows the weights assigned to the contributing technical factors.
The final number of use case points is calculated by multiplying UUCP by TCF and
ECF.
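The whole calculation can be sketched as follows. The actor weights (1/2/3) and use case weights (5/10/15) are Karner's commonly published values and should be checked against Tables 5.1 and 5.2; TCF and ECF default to 1.0 here for illustration.

```python
# Karner's commonly published weights; verify against Tables 5.1-5.2.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tcf=1.0, ecf=1.0):
    """actors and use_cases map a complexity level to its count."""
    uaw = sum(ACTOR_WEIGHTS[level] * n for level, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[level] * n for level, n in use_cases.items())
    uucp = uaw + uucw            # unadjusted use case points
    return uucp * tcf * ecf      # UCP = UUCP x TCF x ECF

# Counts from Example 5.4: UAW = 9, UUCW = 80, so UUCP = 89
ucp = use_case_points({"simple": 2, "average": 2, "complex": 1},
                      {"simple": 2, "average": 4, "complex": 2})
```

With real TCF and ECF values the result would be scaled accordingly, and effort follows as UCP × 20 person hours (as in Example 5.5).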
EXAMPLE 5.3
Solution
UUCW=
= 10 X 10 = 100
EXAMPLE 5.4
2 simple actors, 2 average actors, 1 complex actor, 2 use cases with the number of
transactions 3, 4 use cases with the number of transactions 5 and 2 use cases with the
number of transactions 15.
Solution
UAW:
2 × 1 = 2
2 × 2 = 4
1 × 3 = 3
UAW = 2 + 4 + 3 = 9
UUCW =
= 2 × 5 + 4 × 10 + 2 × 15
= 10 + 40 + 30 = 80
EXAMPLE 5.5
Consider Example 5.4 and calculate the effort for the given application.
Solution
Effort = UCP x 20
= 133.46 X 20
= 2669.2 person hours
EXAMPLE 5.6 Consider the following use case model of result management system:
The number of transactions required for each use case is given as follows:
Solution
Effort = 20 × 66.98
= 1339.67 person hours
This method is used to provide the system level size estimation of the object-oriented
software.
The class point method was given by Costagliola and Tortora (2005).
This method primarily focuses on classes for the estimation of size.
The complexity of a class is analysed by determining the number of methods in the
class, the number of attributes in the class and the interaction of the class with other
classes.
This method can be used for estimation during requirement and design phase of
software development life cycle.
The steps required for the estimation of size are given as follows:
1. Identification of classes
2. Determination of complexity of classes
3. Calculation of unadjusted class point
4. Calculation of technical complexity factor
5. Calculation of class point
Identification of Classes
In the class point method, four types of classes are identified. The types of system
components are given in Table 5.6.
In the class point method, two measures CP1 and CP2 are used. In CP1, the initial
estimate of size is made and CP2 provides detailed estimate of size.
Mrs.K.VIJAYALAKSHMI, MCA.,M.Phil., B.Ed., 18
MC4102 OOSE UNIT - V
The following measures are used in the calculation of CP1 and CP2 measures:
In the CP1 measure NEM and NSR are used to classify complexity of a class. Table
5.7 shows the complexity for CP1 measure.
In the CP2 measure, NOA is also considered; thus, a detailed insight into the estimate
of size is obtained.
Tables 5.8 to 5.10 show the details for classifying class complexity for CP2
measure on the basis of NEM, NSR and NOA.
The total unadjusted class point (TUCP) is calculated by assigning complexity weights
based on the classifications made as given in Table 5.11.
After classifying the complexity level of classes, the TUCP is calculated as follows:
TUCP = Σi Σj w_ij × x_ij, where w_ij is the complexity weight for complexity level j (low, average or high) assigned to class type i, and x_ij is
the number of classes of type i (problem domain, human interface,
data management or task management) at that level.
The factors F1-F18 are the degree of influence (DI) as shown in Figure 5.5.
Each DI is rated on the scale of 0-5.
DI is used to determine technical complexity factor (TCF).
The final class point is calculated by multiplying total unadjusted class point values
with technical factor.
The procedure for calculating adjusted class point (CP) is given as:
CP = TUCP x TCF
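A minimal sketch of this calculation, assuming the commonly stated class point formula TCF = 0.55 + 0.01 × ΣDI over the 18 factors (check against the text accompanying Figure 5.5). The complexity weights of Table 5.11 are not reproduced here, so the caller supplies them; the weight of 6 used below is illustrative only.

```python
def class_point(class_counts, weights, degrees_of_influence):
    """Class point calculation (sketch).

    class_counts: {(class_type, level): number_of_classes}
    weights:      {(class_type, level): weight from Table 5.11},
                  supplied by the caller
    degrees_of_influence: the 18 ratings F1-F18, each 0-5
    """
    # TUCP: weighted sum over class types and complexity levels
    tucp = sum(weights[key] * n for key, n in class_counts.items())
    # Commonly stated TCF formula for the class point method
    tcf = 0.55 + 0.01 * sum(degrees_of_influence)
    return tucp * tcf

# 4 average problem-domain classes with an assumed weight of 6,
# all 18 factors rated average (3): TUCP = 24, TCF = 1.09
cp = class_point({("PDT", "average"): 4}, {("PDT", "average"): 6},
                 [3] * 18)
```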
Costagliola and Tortora (2005) used data from 40 systems developed in Java language
in order to predict the effort during two successive semesters of graduate courses of
software engineering.
They used ordinary least-square (OLS) regression analysis for deriving the effort
model.
The effort is defined in terms of person hours for both CP1 and CP2 measures as:
EXAMPLE 5.7
Assume all the technical complexity factors have average influence. Calculate class
points.
Solution
EXAMPLE 5.8
Assume all other factors as average. Calculate CP1 and CP2 using class point method
and effort.
Solution
The traditional function point method can be mapped to obtain object points.
The object point sizing method is also known as object-oriented function point method
as it is very similar to the function point method.
The object point method does not require much experience, requires less effort for
computation and can be calculated quickly.
It is better suited to object-oriented systems and is easier to calculate than the
traditional function point method.
The object point method involves counting of classes and methods (services).
All this information is obtained from the object model of object-oriented design.
In this approach, the function point concepts are mapped to object-oriented concepts
and the ambiguities present in the function point method are removed.
A class is a template that encapsulates attributes and member functions into a single
unit.
Objects are instances of the class.
Classes may be divided into external and internal classes depending on their scope and
boundary.
The relationship between concepts defined in function point and their corresponding
concepts in object points are shown in Table 5.12.
The internal logical files are mapped to internal classes, external interface files are
mapped to external classes and external inquiries/inputs/outputs are mapped to
services.
The internal classes are the classes that reside inside the application boundaries and the
external classes are the classes that reside outside the application boundaries.
The services are the methods defined in the class. Figure 5.6 depicts the method to
compute the object oriented concepts in object point method.
For each external/internal class, it is necessary to compute the number of data element
types (DETs) and record element types (RETs).
The DETs correspond to the total number of attributes of a class and the RETs
correspond to the total number of subclasses of a class (descendants of a class) as
shown in Figure 5.7.
In inheritance hierarchy, the classes that inherit the properties of the base class while
having their own properties are known as subclasses.
An internal class identifies the total number of DETs and RETs in the class.
In the IFPUG Manual 4.1, DETs are defined as (IFPUG, 1994):
1. A DET is counted for each simple attribute (integer, string, real) defined in a class.
For example, the book accession number (integer type) stored in a class is counted as
one DET.
2. A DET is counted for each attribute required to communicate with another internal
class or external class.
For example, if employee is a base class and salary, contract based or hourly employee
are three subclasses of class employee, then the RET count for employee class is 3.
The following are the counting rules for DETs and FTRs (file types referenced) for each service:
1. A DET is counted for each simple data type referenced as arguments of the service
or global variables referenced by the service.
2. An FTR is counted for each complex data type referenced as arguments of the
service or returned by the service.
In Tables 5.13, 5.14 and 5.15, the complexity values for internal classes, external
classes and services are shown.
Consider the example of the result calculation of a student. The class
model is given in Figure 5.8.
After classifying the internal classes, external classes and services, these are multiplied
by their complexity values. Finally, the results are summed up using the following
formulas:
where n is the number of internal classes, m is the number of external classes and o is
the number of services.
The classes in the given example are internal; hence, the occurrences of internal classes
are multiplied by their corresponding weights and the resulting values are summed, giving
IC = 4 × 7 = 28.
There are 5 concrete services in the classes. Assuming that the methods have average
complexity (as their signatures are not given in the example), the services contribute 5 × 4 = 20.
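The computation above can be sketched as follows. The weight tables (Tables 5.13 to 5.15) are passed in by the caller; only the weights quoted in the text (7 for an average internal class, 4 for an average service) are used in the example call.

```python
def object_points(internal, external, services,
                  ic_weights, ec_weights, sv_weights):
    """Unadjusted object points from the summation formulas above.

    internal/external/services map a complexity level ('low',
    'average', 'high') to a count; the weight tables come from
    Tables 5.13-5.15 and are supplied by the caller.
    """
    ic = sum(ic_weights[k] * n for k, n in internal.items())
    ec = sum(ec_weights[k] * n for k, n in external.items())
    sv = sum(sv_weights[k] * n for k, n in services.items())
    return ic + ec + sv

# Result-calculation example: 4 average internal classes (weight 7)
# and 5 average services (weight 4), so 28 + 20 = 48
uop = object_points({"average": 4}, {}, {"average": 5},
                    {"average": 7}, {}, {"average": 4})
```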
The above classes contain six methods with high complexity and the complexities of
the adjustment factors are average. Compute object points.
EXAMPLE 5.10
An application has the following data:
5 high internal classes, 2 average external classes and 6 average services. Assume the
adjustment factor as significant. What are the unadjusted object points and object
points?
RISK MANAGEMENT
Risk
Risk Management
Classification of Risk
Project Risk
Technical Risk
Business Risk
Known Risk
Predictable Risk
Unpredictable Risk
Principle of Risk Management
Risk
"Risk" is a problem that could cause some loss or threaten the progress of the project,
but which has not happened yet.
It is important to classify risks so that they can be analysed and prioritized based on
their probabilities and impacts.
Risks rated urgent should be addressed before the risks rated high, as they cause huge
loss to the organization.
These potential issues might harm the cost, schedule or technical success of the project,
the quality of the software product, or project team morale.
We need to differentiate risks, as potential issues, from the current problems of the
project.
Different methods are required to address these two kinds of issues.
Example
A staff shortage, because we have not been able to recruit people with the right technical
skills, is a current problem; the threat of our technical people being hired away by
the competition is a risk.
Risk Management
A software project can be affected by a large variety of risks. In order to be able
to systematically identify the significant risks that might affect a software project, it
is essential to classify risks into different classes. The project manager can then check
which risks from each class are relevant to the project.
Classification of Risk
Project Risk
Technical Risk
Business Risk
Known Risk
Predictable Risk
Unpredictable Risk
Project Risk
Project risks concern different forms of budgetary, schedule, personnel, resource, and
customer-related problems.
A vital project risk is schedule slippage.
Since the software is intangible, it is very tough to monitor and control a software
project.
It is very tough to control something which cannot be identified.
For any manufacturing project, such as the manufacture of cars, the project manager
can see the product taking shape.
Technical risks
Business risks:
This type of risk includes the risk of building an excellent product that no one wants,
losing budgetary or personnel commitments, etc.
Known risks:
Those risks that can be uncovered after careful assessment of the project plan, the
business and technical environment in which the project is being developed, and other
reliable data sources (e.g., an unrealistic delivery date).
Predictable risks:
Those risks that are extrapolated from previous project experience (e.g., staff
turnover on past projects).
Unpredictable risks:
Those risks that can and do occur, but are extremely tough to identify in advance.
1. Global Perspective:
In this principle, we review the bigger system description, design, and
implementation, and look at both the chance of the risk occurring and the impact it is going to have.
4. Integrated management:
In this method risk management is made an integral part of project
management.
5. Continuous process:
In this phase, the risks are tracked continuously throughout the risk
management paradigm.
Risk Assessment
Risk Identification
Risk Analysis
Risk Prioritization
Risk Control
Risk Management Planning
Risk Monitoring
Risk Resolution
Risk management is a key part of project planning activities and the specific risky
areas are highlighted in the plan.
The project plan is expected to highlight both probability of failure and impact of the
failure and to describe the steps to be taken in order to reduce the risk.
Risk Assessment
The objective of risk assessment is to rank the risks in terms of their loss-causing
potential.
For risk assessment, first, every risk should be rated in two methods:
Based on these two methods, the priority of each risk can be estimated:
p=r*s
Where p is the priority with which the risk must be controlled, r is the probability of
the risk becoming true, and s is the severity of loss caused due to the risk becoming
true.
Once all identified risks are prioritized, the most likely and most damaging risks can be
controlled first, and more comprehensive risk abatement methods can be designed for
those risks.
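The formula p = r × s lends itself to a simple ranking; a minimal sketch, in which the risk names and ratings are hypothetical:

```python
def prioritize_risks(risks):
    """Rank risks by priority p = r * s, highest first.

    risks: list of (name, r, s) where r is the probability of
    the risk becoming true and s the severity of the loss.
    """
    scored = [(name, r * s) for name, r, s in risks]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical ratings: severity on a 1-10 scale
ranked = prioritize_risks([
    ("schedule slippage", 0.6, 9),
    ("staff turnover",    0.3, 7),
    ("tool failure",      0.1, 4),
])
```

The most likely and most damaging risks then appear at the head of the list, matching the "control the most damaging risks first" guideline above.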
1. Risk Identification:
The project organizer needs to anticipate the risk in the project as early as possible so
that the impact of risk can be reduced by making effective risk management planning.
A project can be affected by a large variety of risks.
To identify the significant risks that might affect a project,
it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
Requirement risks: Risks that arise from changes to the customer
requirements and from the process of managing requirements change.
Estimation risks: Risks that arise from the management estimates of
the resources required to build the system.
2. Risk Analysis:
During the risk analysis process, you have to consider every identified risk and make a
judgement about the probability and seriousness of that risk.
There is no simple way to do this. You have to rely on your perception and experience
of previous projects and the problems that arise in them.
It is not possible to make an exact numerical estimate of the probability and
seriousness of each risk. Instead, you should assign the risk to one of several bands:
Risk Control
Avoid the risk: This may take several ways such as discussing with the client to
change the requirements to decrease the scope of the work, giving incentives to the
engineers to avoid the risk of human resources turnover, etc.
Transfer the risk: This method involves getting the risky element developed by a
third party, buying insurance cover, etc.
Risk reduction: This means planning ways to contain the loss due to the risk. For
instance, if there is a risk that some key personnel might leave, new recruitment can be
planned.
Risk Leverage:
To choose between the various methods of handling a risk, the project plan must
weigh the cost of controlling the risk against the corresponding reduction of risk.
For this, the risk leverage of the various risks can be estimated.
Risk leverage is the difference in risk exposure divided by the cost of reducing the
risk.
Risk leverage = (risk exposure before reduction - risk exposure after reduction) /
(cost of reduction)
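The leverage formula can be expressed directly; the exposure and cost figures below are hypothetical.

```python
def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    """Risk leverage = reduction in risk exposure / cost of reduction."""
    return (exposure_before - exposure_after) / cost_of_reduction

# Hypothetical figures: spending 10 units cuts exposure from 100 to 40
leverage = risk_leverage(100, 40, 10)   # 6.0
```

A higher leverage means more exposure removed per unit of cost, so the handling method with the highest leverage is preferred.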
1. Risk planning:
The risk planning process considers each of the key risks that have been identified and
develops strategies to manage those risks.
For each of the risks, you have to think of the behavior that you may take to minimize
the disruption to the plan if the issue identified in the risk occurs.
You also should think about data that you might need to collect while monitoring the
plan so that issues can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It relies
on the judgement and experience of the project manager.
2. Risk Monitoring:
Risk monitoring is the process of checking that your assumptions about the product, process,
and business risks have not changed.
The risks should be monitored on continuous basis by reevaluating the risks, the
probability of occurrence of risks and the impact of the risk.
The risk can be monitored by scheduling regular review meetings to evaluate risks.
Some risks may move down the risk list, some may be eliminated from the list and
some new risks may be identified and added to the list.
Risk Resolution
Risk resolution is the implementation and execution of the risk reduction techniques
specified and scheduled in the risk management plan.
Risk monitoring ensures that the risk reduction strategies are implemented and
executed according to schedule.
It is aimed at ensuring that the risk management process is a closed-loop process and
progresses on track. Rather than monitoring all risk items, it is more effective to focus
on the top-N risk items of the project, where N should be limited to about 10 and depends on
the project's size, nature, and progress status.
The status of the top-N risk items is updated to reflect changes of their rankings
from the last review, number of months on the list, and risk-resolution status.
Software Quality
Software Quality Attributes
Importance of Software Quality Model
Software Quality Models
McCall’s Model
Boehm’s Model
FURPS Model
ISO 9000
ISO 9126
Capability Maturity Model
The attribute domains that are required to define for a given software are as follows:
1. Functionality
2. Usability
3. Testability
4. Reliability
5. Maintainability
6. Adaptability
With the growing customer demand for software systems, expectations
for quality have also grown in terms of how reliable a software product will be.
As we know a software application is quite complex in nature, hence the task of
verifying whether a specific functionality has been implemented or not, becomes quite
difficult.
Therefore software developers often divide the tasks in the form of deliverables, that is,
defining a benchmark to mark the completion of one specific task.
If the errors in some of the previous phases are not rectified on time, then it may lead
to that error being carried over to the next consecutive phases, which may have a
serious problem in the later stages of the project.
McCall's model is the first quality model developed; it defines a layout of the
various aspects that determine a product's quality. It defines product quality in the
following manner: Product Revision, Product Operation, Product Transition. Product
revision deals with maintainability, flexibility and testability; product operation
covers correctness, reliability, efficiency, integrity and usability.
Boehm’s Model describes how easily and reliably a software product can be used. This
model actually elaborates the aspects of McCall model in detail.
It begins with characteristics that correspond to higher-level requirements.
The model's general utility is refined into portability, utility and maintainability;
utility is further refined into factors such as reliability, efficiency and human engineering.
Further maintainability is refined into testability, understandability and modifiability.
The following primary level factors are addressed by the high-level characteristics:
As-is utility: The ease with which the software can be used in its present form.
Maintainability: The ease with which the software can be understood, modified
and tested.
Portability: The ease with which the software can be used by changing it from
one platform to another platform.
FURPS Model
Functional requirements (F) rely on expected input and output. The remaining letters cover non-functional
requirements:
(U) stands for Usability, which includes human factors, aesthetics and documentation of
user training material,
(R) stands for Reliability (frequency and severity of failure, time between failures),
(P) stands for Performance, which includes speed, efficiency, throughput and response time, and
(S) stands for Supportability, which includes backup, requirements of design and
implementation, etc.
ISO 9000
1. Management responsibility
2. Quality system
3. Contract review
4. Design control
5. Document control
6. Purchasing
7. Purchaser-supplied product
8. Product identification and traceability
9. Process control
10. Inspection and testing
11. Inspection, measuring and test equipment
12. Inspection and test status
13. Control of nonconforming product
14. Corrective action
15. Handling, storage, packaging and delivery
16. Quality records
17. Internal quality audits
18. Training
19. Servicing
20. Statistical techniques
ISO 9126
1. Functionality:
It is an essential feature of any software product that achieves the basic purpose for
which the software is developed.
Example :
The LMS should be able to maintain book details, maintain member details, issue
book, return book, reserve book, etc.
Functionality includes the essential features that a product must have. It includes
suitability, accuracy, interoperability and security.
2. Reliability:
Once the functionality of the software has been completed, the reliability is defined as
the capability of defect-free operation of software for a specified period of time and
given conditions.
One of the important features of reliability is fault tolerance.
Example:
If the system crashes, then when it recovers the system should be able to continue its
normal functioning.
Other features of reliability are maturity and recoverability.
3. Usability:
The ease with which the software can be used for each specified function is another
attribute of ISO 9126.
The ability to learn, understand and operate the system is the sub-characteristics of
usability.
Example:
The ease with which the operation of cash withdrawal function of an ATM system can
be learned is a part of usability attribute.
4. Efficiency:
This characteristic concerns with performance of the software and resources used by
the software under specified conditions.
Example : if a system takes 15 minutes to respond, then the system is not efficient.
Efficiency includes time behaviour and resource behaviour.
5. Maintainability:
The ability to detect and correct faults in the maintenance phase is known as
maintainability.
Maintainability is affected by the readability, understandability and modifiability of the
source code.
The ability to diagnose the system to identify the cause of failures (analysability),
the effort required to test the software (testability) and the risk of unexpected effects of
modifications (stability) are the sub-characteristics of maintainability.
6. Portability:
This characteristic refers to the ability to transfer the software from one platform or
environment to another platform or environment.
At the initial level, the company is quite small and it solely depends on an individual
how he handles the company.
The repeatable level states that at least the basic requirements or techniques have been
established and the organisation has attained a certain level of success.
By the next level, that is, the defined level, the company has already established a set of
standards for the smooth functioning of a software project/process.
At the managed level, an organisation monitors its own activities through a data
collection and analysis.
At the fifth level, that is, the optimizing level, constant improvement of the prevailing
process becomes a priority, and innovative approaches are applied towards
qualitative enhancement.
The past experience reports show that moving from level 1 to level 2 may take 3 to 5
years.
The CMM is becoming popular and many software organizations are aspiring to
achieve CMM level 5.
The acceptability and pervasiveness of the CMM activities are helping the
organizations to produce a quality software.
The role of statistics is to function as a tool in analysing research data and drawing
conclusions from it. The research data must be suitably reduced so that the same can be
read easily and used for further analysis. Metric data can be analysed using one or
many statistical techniques and meaningful inferences can be drawn.
Descriptive statistics concern development of certain indices or measures to summarize
data.
Data can be summarized by using measures of central tendency (mean, median and
mode) and measures of dispersion (standard deviation, variance, and quartiles).
Median gives the middle value in the data set which means half of the data points are
below the median value and half of the data points are above the median value.
It is calculated as the ((n + 1)/2)th value of the data set, where n is the number of data
points in the data set.
The most frequently occurring value in the data set is denoted by mode.
The concept has significance for nominal data.
Measures of Dispersion
The quartile divides the metric data into four equal parts.
For calculating quartile, the data is first arranged in ascending order.
The 25% of the metric data is below the lower quartile (25 percentile), 50% of the
metric data is below the median value and 75% of the metric data is below the upper
quartile (75 percentile).
Figure 5.21 shows the division of data set into four parts by using quartiles.
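These measures can be computed with Python's standard statistics module; the data set below is illustrative.

```python
import statistics

data = [10, 12, 14, 14, 18, 21, 25]   # illustrative metric data (sorted)

mean = statistics.mean(data)
median = statistics.median(data)      # ((7 + 1)/2)th, i.e. 4th value = 14
mode = statistics.mode(data)          # most frequent value = 14

# Quartiles divide the data into four equal parts (Figure 5.21);
# the default 'exclusive' method follows (n + 1)-based ranks.
q1, q2, q3 = statistics.quantiles(data, n=4)
```

Here 25% of the data lies below q1 (the lower quartile), 50% below q2 (the median) and 75% below q3 (the upper quartile), matching the description above.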
Outlier Analysis
Data points, which are located in an empty part of the sample space, are called outliers.
These are the data values that are numerically distant from the rest of the data.
Once the outliers are identified, the decision about the inclusion or exclusion of the
outlier must be made. The decision depends upon the reason why the case is identified
as outlier.
Univariate outliers are those exceptional values that occur within a single variable.
Bivariate outliers occur within the combination of two variables and
Multivariate outliers are present within the combination of more than two variables.
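A common way to flag univariate outliers is an interquartile-range fence; the 1.5 × IQR rule used below is Tukey's conventional choice, assumed here for illustration rather than prescribed by the notes.

```python
import statistics

def univariate_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR].

    The 1.5 * IQR fence is Tukey's conventional rule, used as
    an illustrative choice.
    """
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

# 95 is numerically distant from the rest of the data
outliers = univariate_outliers([10, 12, 14, 14, 18, 21, 95])
```

Whether a flagged point is excluded still depends on why it is an outlier, as noted above.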
Correlation Analysis
Correlation analysis studies the variation of two or more variables for determining the
amount of correlation between them.
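Pearson's correlation coefficient is one standard way to quantify this variation between two variables; a small sketch:

```python
def pearson_correlation(xs, ys):
    """Pearson's r: covariance normalized by both standard deviations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Perfectly correlated metric data gives r = 1
r = pearson_correlation([1, 2, 3, 4], [2, 4, 6, 8])
```

Values of r near +1 or -1 indicate a strong linear relationship between the two metrics; values near 0 indicate little linear relationship.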
Exploratory Analysis
Size Estimation
Information Flow Metrics
There are a range of metrics available to measure the size and structure of a software
system.
Size metrics can be used to estimate the size of the software, serve as input to estimation
models, and monitor progress during software development.
Structural metrics help us to analyse the product and increase our understanding of
the product.
They may also provide insight into the complexity of the software.
Size Estimation
Example :
Many programmers use comment lines and blank lines to make their programs more
understandable and readable.
Little programming effort is required to include blank lines and comment lines.
Thus, these lines may be excluded from the count of lines of source code.
Similarly, unexecutable statements are also not included by some programmers in the
LOC count.
Hence, the programmer must be careful while selecting the method for counting LOC.
The functional units can be counted in the early phases of software development.
In object oriented software development, the functionality of a software can be
depicted in terms of use cases and classes.
The use case point and class point method are used to count the functional units in
object-oriented software development.
A low fan-out value is desirable since high fan-out values represent a large amount of
coupling in the system.
Figure 5.27 shows values of fan-in and fan-out for a small system consisting of six
classes.
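Fan-in and fan-out can be computed directly from a class dependency map; the six-class system below is hypothetical, standing in for Figure 5.27.

```python
def fan_in_out(uses):
    """Derive fan-in and fan-out from a class dependency map.

    uses: class -> set of classes it calls or references.
    Fan-out of a class is how many classes it uses; fan-in is
    how many classes use it.
    """
    fan_out = {c: len(targets) for c, targets in uses.items()}
    fan_in = {c: 0 for c in uses}
    for targets in uses.values():
        for t in targets:
            fan_in[t] += 1
    return fan_in, fan_out

# Hypothetical six-class system (stand-in for Figure 5.27)
fan_in, fan_out = fan_in_out({
    "A": {"B", "C"}, "B": {"D"}, "C": {"D"},
    "D": set(), "E": {"D"}, "F": {"A"},
})
```

Class D has the highest fan-in (it is used by B, C and E), while a class with high fan-out, like A, contributes most to coupling.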
Software Quality
Software Quality Metrics
Code Quality
Reliability
Performance
Usability
Correctness
Maintainability
Integrity
Security
Software Quality
Code Quality
Reliability
Performance
Usability
Correctness
Maintainability
Integrity
Security
1. Code Quality
Code quality metrics measure the quality of code used for software project
development.
Maintaining software code quality by writing bug-free and semantically correct
code is very important for good software project development.
In code quality, both quantitative metrics, such as the number of lines, complexity,
number of functions and rate of bug generation, and qualitative metrics, such as
readability, code clarity, efficiency and maintainability, are measured.
2. Reliability
3. Performance
4. Usability
5. Correctness
Correctness is one of the important software quality metrics, as it checks whether the
system or software works correctly, without error, to the user's satisfaction.
Correctness gives the degree to which each function performs as designed.
6. Maintainability
7. Integrity
Software integrity concerns how easily the software can be integrated with other
required software, which increases software functionality, and how well integration
with unauthorized software is controlled, since such integration increases the chances of cyber attacks.
8. Security
Localization
Encapsulation
Information Hiding
Inheritance
Object Abstraction Technique
Localization
Encapsulation
Objects encapsulate:
Knowledge of state
Advertised capabilities
Other objects
Exceptions
Constants
Concepts
Information Hiding
Inheritance
It is a mechanism whereby one object acquires characteristics from one or more
other objects.
There are many object-oriented software engineering metrics which are based on
inheritance e.g.,
number of children
number of parents
class hierarchy nesting level
Abstraction
It is a relative concept.
There are also different categories of abstraction, e.g., functional, data, process
and object abstraction.
Objects are treated as high-level entities in object abstraction.
Classes
There are three commonly used views on the definition for “class”.
Basically, a class is a thing that contains both a pattern and a mechanism for creating
items based on that pattern; instances are the individual items that are
"manufactured" using the class's creation mechanism.
A class is the set of all items created using a specific pattern, i.e., the
class is the set of all instances of that pattern.