Component-Level Software Testing Guide

The document outlines a strategic approach to software testing, emphasizing the importance of effective technical reviews, verification, and validation processes at the component level. It describes various testing techniques, including white-box and black-box testing, and highlights the roles of developers and independent test groups in ensuring thorough testing. Additionally, it discusses integration testing methods, such as top-down and bottom-up approaches, and the significance of continuous integration in agile development practices.

Software Testing –

Component Level

© McGraw Hill
Strategic Approach to Testing
• You should conduct effective technical reviews; these can
eliminate many errors before testing begins.
• Testing begins at the component level and works "outward"
toward the integration of the entire system.
• Different testing techniques are appropriate for different
software engineering approaches and at different points in
time.
• Testing is conducted by the developer of the software and (for
large projects) an independent test group.
• Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy.

© McGraw Hill 2
Verification and Validation
Verification refers to the set of tasks that ensure that software
correctly implements a specific function.
Verification: "Are we building the product right?"

Validation refers to a different set of tasks that ensure that the
software that has been built is traceable to customer
requirements.
Validation: "Are we building the right product?"

© McGraw Hill 3
Organizing for Testing
• Software developers are always responsible for testing
individual program components and ensuring that each
performs its designed function or behavior.
• Only after the software architecture is complete does an
independent test group become involved.
• The role of an independent test group (ITG) is to remove the
inherent problems associated with letting the builder test the
thing that has been built.
• ITG personnel are paid to find errors.
• Developers and ITG work closely throughout a software
project to ensure that thorough tests will be conducted.

© McGraw Hill 4
Testing Strategy

Access the text alternative for slide images.

© McGraw Hill 5
Testing the Big Picture
• Unit testing begins at the center of the spiral and concentrates
on each unit (for example, component, class, or content object)
as it is implemented in source code.
• Testing progresses to integration testing, where the focus is on
design and the construction of the software architecture,
taking another turn outward on the spiral.
• Validation testing is where requirements established as part of
requirements modeling are validated against the software that
has been constructed.
• In system testing, the software and other system elements are
tested as a whole.

© McGraw Hill 6
Software Testing Steps

Access the text alternative for slide images.

© McGraw Hill 7
When is Testing Done?

© McGraw Hill 8
Criteria for Done
• You’re never done testing; the burden simply shifts from the
software engineer to the end user. (Wrong).
• You’re done testing when you run out of time or you run out of
money. (Wrong).
• The statistical quality assurance approach suggests executing
tests derived from a statistical sample of all possible program
executions by all targeted users.
• By collecting metrics during software testing and making use
of existing statistical models, it is possible to develop
meaningful guidelines for answering the question: “When are
we done testing?”

© McGraw Hill 9
Unit Test Environment

Access the text alternative for slide images.

© McGraw Hill 10
Test Case Design
Design unit test cases before you develop code for a component to ensure that
the code you develop will pass the tests.
Test cases are designed to cover the following areas:

• The module interface is tested to ensure that information properly flows
into and out of the program unit.
• Local data structures are examined to ensure that stored data
maintains its integrity during execution.
• Independent paths through control structures are exercised to ensure all
statements are executed at least once.
• Boundary conditions are tested to ensure module operates properly at
boundaries established to limit or restrict processing.
• All error-handling paths are tested.
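As an illustration (the component and its discount rule are hypothetical, not from the slides), the following sketch shows unit test cases that exercise boundary conditions and an error-handling path:

```python
# Hypothetical component under test (illustrative only): applies a
# 10% discount to purchases of 100 or more.
def compute_discount(amount):
    if amount < 0:
        raise ValueError("amount must be non-negative")  # error-handling path
    if amount >= 100:                                    # boundary at 100
        return round(amount * 0.9, 2)
    return amount

# Test cases designed before the code was written:
assert compute_discount(0) == 0           # lower boundary
assert compute_discount(99) == 99         # just below the discount boundary
assert compute_discount(100) == 90.0      # at the boundary
assert compute_discount(101) == 90.9      # just above the boundary
try:
    compute_discount(-1)                  # error-handling path exercised
    raise AssertionError("expected ValueError")
except ValueError:
    pass
```

Designing these cases first pins down the intended boundary and error behavior before any implementation exists.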

© McGraw Hill 11
Module Tests

Access the text alternative for slide images.

© McGraw Hill 12
Error Handling
• A good design anticipates error conditions and establishes error-
handling paths which must be tested.
• Among the potential errors that should be tested when error
handling is evaluated are:
1. Error description is unintelligible.
2. Error noted does not correspond to error encountered.
3. Error condition causes system intervention prior to error handling.
4. Exception-condition processing is incorrect.
5. Error description does not provide enough information to assist in the
location of the cause of the error.

© McGraw Hill 13
Traceability
• To ensure that the testing process is auditable, each test case
needs to be traceable back to specific functional or
nonfunctional requirements or anti-requirements.
• Often nonfunctional requirements need to be traceable to
specific business or architectural requirements.
• Many test process failures can be traced to missing
traceability paths, inconsistent test data, or incomplete test
coverage.
• Regression testing requires retesting selected components that
may be affected by changes made to other collaborating
software components.

© McGraw Hill 14
White Box Testing
Using white-box testing methods, you can derive test cases that:
1. Guarantee that all independent paths within a module have
been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their
operational bounds.
4. Exercise internal data structures to ensure their validity.

© McGraw Hill 15
Basis Path Testing 1

Determine the number of independent paths in the program by
computing cyclomatic complexity:
1. The number of regions of the flow graph corresponds to the cyclomatic
complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E − N + 2
where E is the number of flow graph edges and N is the number of nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
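Both definitions are easy to check in code; a minimal sketch that takes the counts as inputs:

```python
def cyclomatic_from_graph(num_edges, num_nodes):
    # V(G) = E - N + 2
    return num_edges - num_nodes + 2

def cyclomatic_from_predicates(num_predicates):
    # V(G) = P + 1
    return num_predicates + 1

# Counts from the worked example on the following slide:
# 11 edges, 9 nodes, 3 predicate nodes -- both formulas agree.
assert cyclomatic_from_graph(11, 9) == 4
assert cyclomatic_from_predicates(3) == 4
```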

© McGraw Hill 16
Flowchart (a) and Flow Graph (b)

Access the text alternative for slide images.

© McGraw Hill 17
Basis Path Testing 2

Cyclomatic complexity of the flow graph is 4:
1. The flow graph has four regions.
2. V(G) = 11 edges − 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.

An independent path is any path through the program that
introduces at least one new set of processing statements or a new
condition (we need 4 independent paths to test):
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11

© McGraw Hill 18
Basis Path Testing 3

Designing Test Cases
• Using the design or code as a foundation, draw a
corresponding flow graph.
• Determine the cyclomatic complexity of the resultant flow
graph.
• Determine a basis set of linearly independent paths.
• Prepare test cases that will force execution of each path in the
basis set.

© McGraw Hill 19
Control Structure Testing
• Condition testing is a test-case design method that exercises
the logical conditions contained in a program module.
• Data flow testing selects test paths of a program according to
the locations of definitions and uses of variables in the
program.
• Loop testing is a white-box testing technique that focuses
exclusively on the validity of loop constructs.

© McGraw Hill 20
Classes of Loops

Access the text alternative for slide images.

© McGraw Hill 21
Loop Testing
Test cases for simple loops:
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n − 1, n, n + 1 passes through the loop.

Test cases for nested loops:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the
outer loops at their minimum iteration parameter (for example, loop
counter) values.
3. Add other tests for out-of-range or excluded values.
4. Work outward, conducting tests for the next loop, but keeping all
other outer loops at minimum values and other nested loops to
“typical” values.
5. Continue until all loops have been tested.
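The simple-loop cases can be generated mechanically; a small sketch (the helper name is illustrative):

```python
def simple_loop_pass_counts(n, m=None):
    """Pass counts to exercise a simple loop with at most n passes:
    skip, one pass, two passes, m < n typical passes, and the
    n - 1 / n / n + 1 boundary cases."""
    if m is None:
        m = n // 2                       # a typical interior value
    counts = [0, 1, 2, m, n - 1, n, n + 1]
    # Remove duplicates (small n makes some cases coincide),
    # preserving order.
    return list(dict.fromkeys(counts))

print(simple_loop_pass_counts(10))       # [0, 1, 2, 5, 9, 10, 11]
```

Each count would then drive one test case that forces the loop to execute exactly that many times.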
© McGraw Hill 22
Black Box Testing 1

Black-box (functional) testing attempts to find errors in the
following categories:
1. Incorrect or missing functions.
2. Interface errors.
3. Errors in data structures or external database access.
4. Behavior or performance errors.
5. Initialization and termination errors.

Unlike white-box testing, which is performed early in the testing
process, black-box testing tends to be applied during later stages of
testing.

© McGraw Hill 23
Black Box Testing 2

Black-box test cases are created to answer questions like:
• How is functional validity tested?
• How are system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system
operation?

© McGraw Hill 24
Black Box – Interface Testing
• Interface testing is used to check that a program component
accepts information passed to it in the proper order and data
types and returns information in the proper order and data format.
• Because components are not stand-alone programs, testing interfaces
requires the use of stubs and drivers.
• Stubs and drivers sometimes incorporate test cases to be
passed to the component or accessed by the component.
• Debugging code may need to be inserted inside the
component to check that data passed was received correctly.
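A minimal sketch using Python's unittest.mock as the stub and a plain script as the driver; build_report and its collaborator are hypothetical, not from the slides:

```python
from unittest.mock import Mock

# Component under test: depends on a collaborator that may not exist yet.
def build_report(data_source):
    records = data_source.fetch()              # the interface under test
    return {"count": len(records), "first": records[0]}

# Driver: set up the stub, invoke the component, check the results.
stub = Mock()
stub.fetch.return_value = ["alpha", "beta"]    # canned test data in the stub

result = build_report(stub)
assert result == {"count": 2, "first": "alpha"}
stub.fetch.assert_called_once()                # interface was exercised once
```

Here the stub supplies the test data and the driver script verifies both the returned data format and that the interface was actually called.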

© McGraw Hill 25
Black Box – Boundary Value Analysis (BVA)
• Boundary value analysis leads to a selection of test cases that exercise
bounding values.
• Guidelines for BVA:

1. If an input condition specifies a range bounded by values a and b, test cases
should be designed with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be
developed that exercise the min and max numbers as well as values just above
and below min and max.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries (for example,
array with max index of 100) be certain to design a test case to exercise the
data structure at its boundary.
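Guideline 1 can be sketched as a small helper (the name and delta parameter are illustrative):

```python
def boundary_values(a, b, delta=1):
    """Test inputs for a range bounded by a and b: just below, at,
    and just above each boundary (delta is the smallest input step)."""
    return [a - delta, a, a + delta, b - delta, b, b + delta]

# Integer input range 1..100:
print(boundary_values(1, 100))   # [0, 1, 2, 99, 100, 101]
```

The two out-of-range values (0 and 101 here) double as probes of the error-handling paths.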

© McGraw Hill 26
Software Testing –
Integration Level

© McGraw Hill
Testing Fundamentals
Attributes of a good test:
• A good test has a high probability of finding an error.
• A good test is not redundant.
• A good test should be “best of breed.”
• A good test should be neither too simple nor too complex.

© McGraw Hill 28
Approaches to Testing
Any engineered product can be tested in one of two ways:
1. Knowing the specified function that a product has been
designed to perform, tests can be conducted that demonstrate
each function is fully operational while at the same time
searching for errors in each function.
2. Knowing the internal workings of a product, tests can be
conducted to ensure that “all gears mesh,” that is, internal
operations are performed according to specifications and all
internal components have been adequately exercised.

© McGraw Hill 29
White Box Testing
• White-box testing is an integration testing philosophy that
uses implementation knowledge of the control structures
described as part of component-level design to derive test
cases.
• White-box tests can only be designed after source code
exists and program logic details are known.
• Logical paths through the software and collaborations between
components are the focus of white-box integration testing.
• Important data structures should also be tested for validity after
component integration.

© McGraw Hill 30
Black Box Testing
• Black-box testing is integration testing achieved by
exercising component interfaces. It examines the fundamental
aspects of the system with little regard to internal logical
structure.
• Black-box tests ensure that components behave correctly and
produce the expected output.
• It is based on the requirements specified in the user story.
• Expected output and observed output are compared; if they do
not match, it’s a bug.

© McGraw Hill 31
Integration Testing
• Integration testing is a systematic technique for constructing
the software architecture while conducting tests to uncover
errors associated with interfacing.
• The objective is to take unit-tested components and build a
program structure that matches the design.
• In the big bang approach, all components are combined at
once and the entire program is tested as a whole. Chaos usually
results!
• In incremental integration a program is constructed and tested
in small increments, making errors easier to isolate and correct.
Far more cost-effective!

© McGraw Hill 32
Top-Down Integration 1
• Top-down integration testing is an incremental approach to construction of
the software architecture.
• Modules are integrated by moving downward through the control hierarchy,
beginning with the main control module (main program).
• Modules subordinate to the main control module are incorporated into the
structure followed by their subordinates.
• Depth-first integration integrates all components on a major control path
of the program structure before starting another major control path.
• Breadth-first integration incorporates all components directly subordinate
at each level, moving across the structure horizontally before moving down
to the next level of subordinates.

© McGraw Hill 33
Top-Down Integration 2

Access the text alternative for slide images.

© McGraw Hill 34
Top-Down Integration Testing
1. The main control module is used as a test driver, and stubs are
substituted for all components directly subordinate to the
main control module.
2. Depending on the integration approach selected (for example,
depth or breadth first), subordinate stubs are replaced one at a
time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced
with the real component.
5. Regression testing may be conducted to ensure that new errors
have not been introduced.

© McGraw Hill 35
Bottom-Up Integration Testing
Bottom-up integration testing begins construction and testing
with atomic modules (components at the lowest levels in the
program structure).
1. Low-level components are combined into clusters (builds)
that perform a specific software subfunction.
2. A driver (a control program for testing) is written to
coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving
upward in the program structure.
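The steps above can be sketched as follows; the two atomic components and the cluster they form are hypothetical:

```python
# Two atomic components forming a cluster that parses and prices an order.
def parse_amount(text):
    return float(text.strip())

def apply_tax(amount, rate=0.08):
    return round(amount * (1 + rate), 2)

# Driver: coordinates test-case input and output for the cluster.
def cluster_driver(cases):
    results = []
    for raw_input, expected in cases:
        observed = apply_tax(parse_amount(raw_input))
        results.append((raw_input, observed, observed == expected))
    return results

outcomes = cluster_driver([(" 100.00 ", 108.0), ("10", 10.8)])
assert all(ok for _, _, ok in outcomes)
```

Once the cluster passes, the driver is discarded and the cluster is combined with others moving up the structure.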

© McGraw Hill 36
Bottom-Up Integration

Access the text alternative for slide images.

© McGraw Hill 37
Continuous Integration
• Continuous integration is the practice of merging components
into the evolving software increment at least once a day.
• This is a common practice for teams following agile
development practices such as XP or DevOps. Integration
testing must take place quickly and efficiently if a team is
attempting to always have a working program in place as part
of continuous delivery.
• Smoke testing is an integration testing approach that can be
used when software is developed by an agile team using short
increment build times.

© McGraw Hill 38
Smoke Testing Integration
1. Software components that have been translated into code are
integrated into a build that includes all data files, libraries,
reusable modules, and components required to implement one
or more product functions.
2. A series of tests is designed to expose “show-stopper” errors
that will keep the build from properly performing its function
or cause the project to fall behind schedule.
3. The build is integrated (either top-down or bottom-up) with
other builds, and the entire product (in its current form) is
smoke tested daily.

© McGraw Hill 39
Smoke Testing Advantages
• Integration risk is minimized, since smoke tests are run daily.
• Quality of the end product is improved, since functional and
architectural problems are uncovered early.
• Error diagnosis and correction are simplified, since errors are
most likely in (or caused by) the new build.
• Progress is easier to assess, since each day more of the final
product is complete.
• Smoke testing resembles regression testing by ensuring newly
added components do not interfere with the behaviors of
existing components.

© McGraw Hill 40
Integration Testing Work Products
• An overall plan for integration of the software and a description of specific
tests is documented in a test specification.
• Test specification incorporates a test plan and a test procedure and becomes
part of the software configuration.
• Testing is divided into phases and incremental builds that address specific
functional and behavioral characteristics of the software.
• Time and resources must be allocated to each increment build along with
the test cases needed.
• A history of actual test results, problems, or peculiarities is recorded in a
test report and may be appended to the test specification.
• It is often best to implement the test report as a shared Web document to
allow all stakeholders access to the latest test results and the current state of
the software increment.

© McGraw Hill 41
Regression Testing
• Regression testing is the re-execution of some subset of tests that have
already been conducted to ensure that changes have not propagated
unintended side effects.
• Whenever software is corrected, some aspect of the software configuration
(the program, its documentation, or the data that support it) is changed.
• Regression testing helps to ensure that changes (due to testing or for other
reasons) do not introduce unintended behavior or additional errors.
• Regression testing may be conducted manually, by re-executing a subset of
all test cases or using automated capture/playback tools.
• AI tools may be able to help select the best subset of test cases to use in
regression automatically based on previous experiences of the developers
with the evolving software product.
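One simple selection strategy (the test-to-module traceability map below is purely illustrative) is to re-run only the tests whose covered modules intersect the changed ones:

```python
# Hypothetical traceability map: each test and the modules it exercises.
TEST_COVERAGE = {
    "test_login":  {"auth", "session"},
    "test_report": {"report", "db"},
    "test_export": {"report", "io"},
}

def select_regression_tests(changed_modules):
    """Select tests whose covered modules intersect the change set."""
    changed = set(changed_modules)
    return sorted(t for t, mods in TEST_COVERAGE.items() if mods & changed)

print(select_regression_tests(["report"]))   # ['test_export', 'test_report']
```

Real tools refine this with finer-grained coverage data or, as noted above, learned heuristics, but the intersection idea is the core.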

© McGraw Hill 42
Validation Testing
• Validation testing tries to uncover errors, but the focus is at the
requirements level - on user-visible actions and user-recognizable output
from the system.
• Validation testing begins at the culmination of integration testing, when
the software is completely assembled as a package and interfacing errors
have been corrected.
• Each user story has user-visible attributes and the customer’s acceptance
criteria, which form the basis for the test cases used in validation testing.
• A deficiency list is created when a deviation from a specification is
uncovered, and its resolution is negotiated with all stakeholders.
• An important element of the validation process is a configuration review
(audit) that ensures the complete system was built properly.

© McGraw Hill 43
Software Metrics

© McGraw Hill
Measures, Metrics, and Indicators
• A measure provides a quantitative indication of the extent,
amount, dimension, capacity, or size of some attribute of a
product or process.
• A metric is a quantitative measure of the degree to which a
system, component, or process possesses a given attribute.
• An indicator is a metric or combination of metrics that provide
insight into the software process, a software project, or the
product itself.

© McGraw Hill 45
Attributes of Effective Metrics
• Simple and computable. It should be relatively easy to learn how to derive
the metric.
• Empirically and intuitively persuasive. Satisfies the engineer’s intuitive
notions about the product attribute.
• Consistent and objective. The metric should yield results that are
unambiguous.
• Consistent in its use of units and dimensions. Computation of the metric
should not lead to bizarre combinations of units.
• Programming language independent. Metrics should be based on the
analysis model, the design model, or the structure of the program itself.
• Effective mechanism for quality feedback. Should provide a software
engineer with information that can lead to a higher quality end-product.

© McGraw Hill 46
Requirements Model Metrics
Requirement specificity (lack of ambiguity):
Q1 = nui / nr
where nui is the number of requirements for which all reviewers
had identical interpretations.
Q1 close to 1 is good.

Assume there are nr requirements in a specification:
nr = nf + nnf
where nf is the number of functional requirements and nnf is the
number of nonfunctional requirements.

© McGraw Hill 47
Mobile Software Requirements Model
Metrics
• Number of static screen displays. (Nsp)
• Number of dynamic screen displays. (Ndp)
• Number of persistent data objects.
• Number of external systems interfaced.
• Number of static content objects.
• Number of dynamic content objects.
• Number of executable functions.

Customization index C = Ndp / (Ndp + Nsp)
C ranges from 0 to 1; larger C is better.

© McGraw Hill 48
Architectural Design Metrics
Architectural design metrics
• Structural complexity = g(fan-out).
• Data complexity = f(input & output variables, fan-out).
• System complexity = h(structural & data complexity).

Morphology metrics: a function of the number of modules and the
number of interfaces between modules.
Size = n + a, where n = number of nodes and a = number of arcs.
Depth = longest path from the root to a leaf node.
Width = maximum number of nodes at any level.

© McGraw Hill 49
Morphology Metrics

Access the text alternative for slide images.

© McGraw Hill 50
Object-Oriented Design Metrics 1
Weighted methods per class (WMC). The number of methods
and their complexity are reasonable indicators of the amount of
effort required to implement and test a class.

Depth of the inheritance tree (DIT). A deep class hierarchy (DIT
is large) leads to greater design complexity.

Number of children (NOC). As NOC increases, the amount of
testing (required to exercise each child in its operational context)
will also increase.

© McGraw Hill 51
Object-Oriented Design Metrics 2
Coupling between object classes (CBO). High values of CBO
indicate poor reusability and make testing of modifications more
complicated.

Response for a class (RFC). The number of methods that can
potentially be executed in response to a message received by an
object of the class.

Lack of cohesion in methods (LCOM). LCOM is the number
of methods that access one or more of the same attributes.
© McGraw Hill 52
Class Hierarchy

Access the text alternative for slide images.

© McGraw Hill 53
User Interface Design Metrics
Interface metrics. Ergonomics measures (for example, memory
load, typing effort, recognition time, layout complexity)
Aesthetic (graphic design) metrics. Aesthetic design relies on
qualitative judgment but some measures are possible (for
example, word count, graphic percentage, page size)
Content metrics. Focus on content complexity and on clusters of
content objects that are organized into pages
Navigation metrics. Address the complexity of the navigational
flow and they are applicable only for static Web applications.

© McGraw Hill 54
Source Code Metrics
Halstead’s Software Science: a comprehensive collection of metrics all
predicated on the number (count and occurrence) of operators and operands
within a component or program.
n1 = number of distinct operators that appear in a program
n2 = number of distinct operands that appear in a program
N1 = total number of operator occurrences
N2 = total number of operand occurrences
Program length N = n1 log2 n1 + n2 log2 n2
Program volume V = N log2 (n1 + n2)
Volume Ratio L = ( 2 / n1 ) × ( n2 / N2 )

L is ratio of most compact form of the program to actual program size
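A small sketch of these formulas; the counts used are illustrative, not from the slides:

```python
import math

def halstead_metrics(n1, n2, N1, N2):
    """Halstead length, volume, and volume ratio from
    operator/operand counts."""
    length = n1 * math.log2(n1) + n2 * math.log2(n2)  # N = n1 log2 n1 + n2 log2 n2
    volume = (N1 + N2) * math.log2(n1 + n2)           # V = N log2(n1 + n2)
    ratio = (2 / n1) * (n2 / N2)                      # L = (2/n1)(n2/N2)
    return length, volume, ratio

# Illustrative counts: 10 distinct operators, 8 distinct operands,
# 30 operator occurrences, 25 operand occurrences.
N_len, V, L = halstead_metrics(n1=10, n2=8, N1=30, N2=25)
print(round(N_len, 1), round(V, 1), round(L, 3))   # 57.2 229.3 0.064
```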

© McGraw Hill 55
Software Measurement
• Direct measures of the software process include cost and
effort applied.
• Direct measures of the product include lines of code (LOC)
produced, execution speed, memory size, and defects reported
over some set period of time.
• Indirect measures of the product include functionality, quality,
complexity, efficiency, reliability, maintainability, and many
others.
• Direct measures are relatively easy to collect; the quality and
functionality of software are more difficult to assess and can
be measured only indirectly.

© McGraw Hill 56
Normalized Size-Oriented Metrics
errors per KLOC (thousand lines of code)
defects per KLOC
$ per LOC
pages of documentation per KLOC
errors per person-month
errors per review hour
LOC per person-month
$ per page of documentation

© McGraw Hill 57
Normalized Function-Oriented Metrics
errors per FP
defects per FP
$ per FP
pages of documentation per FP
FP per person-month

© McGraw Hill 58
Why Opt For Function-Oriented Metrics
• Programming language independent.
• Uses readily countable characteristics that are determined
early in the software process.
• Does not “penalize” inventive (short) implementations that use
fewer LOC than other, clumsier versions.
• Makes it easier to measure the impact of reusable components.

© McGraw Hill 59
Software Quality Metrics
• Correctness. degree to which the software performs its required
function (for example, defects per KLOC).
• Maintainability. degree to which a program is amenable to
change (for example, MTTC - mean time to change).
• Integrity. degree to which a program is impervious to outside
attack.

Integrity = Σ [(1 − threat) × (1 − security)]

threat = probability a specific attack occurs
security = probability a specific attack is repelled

• Usability. quantifies ease of use (for example, error rate).

© McGraw Hill 60
Defect Removal Efficiency (DRE)
• DRE is a measure of the filtering ability of quality assurance and
control actions as they are applied throughout all process
framework activities.

DRE = E / (E + D)
E = number of errors found before delivery
D = number of errors found after delivery
• The ideal value for DRE is 1: no defects (D = 0) are found by
the consumers of a work product after delivery.
• The value of DRE approaches 1 as E increases, meaning the
team is catching more of its own errors before delivery.
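The computation is a one-liner, shown here with illustrative counts:

```python
def dre(errors_before_delivery, defects_after_delivery):
    """Defect removal efficiency: E / (E + D)."""
    E, D = errors_before_delivery, defects_after_delivery
    return E / (E + D)

assert dre(100, 0) == 1.0   # the ideal: no defects escape to users
assert dre(95, 5) == 0.95   # 95% of defects caught before delivery
```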

© McGraw Hill 61
FP-Based Estimation Table
Table 25.1 Estimating information domain values
Information domain value Opt. Likely Pess. Est Weight F P count
count.
Number of external inputs 20 24 30 24 4 96 (24 × 4 = 96)
Number of external outputs 12 14 22 14 5 70 (14 × 5 = 70)
Number of external enquiries 16 20 28 20 5 100 (20 × 5 = 100)
Number of internal logical files 4 4 5 4 10 40 (4 × 10 = 40)
Number of external interface 2 2 3 2 7 14 (2 × 7 = 14)
files
Count Total 320

© McGraw Hill 62
FP-Based Estimation
• To compute the FP equation:

FP estimated = count total × [0.65 + 0.01 × Σ Fi]

• For the purposes of this estimate, the complexity weighting factor is
assumed to be average, and the FP count total from the table is 320.
• Assume the sum of the 14 complexity factors Σ Fi is 52:

[0.65 + 0.01 × Σ Fi] = [0.65 + 0.01 × 52] = 1.17

• The estimated number of FP can be computed:

FP estimated = 320 × 1.17 ≈ 375

• If the historic cost per FP is approximately $1,230, then the total
estimated project cost is about $461,000 and the estimated effort is
58 person-months.
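Assuming the table's figures, the estimate can be reproduced in a few lines:

```python
# (information domain value, estimated count, weight) from Table 25.1
DOMAIN_VALUES = [
    ("external inputs",          24,  4),
    ("external outputs",         14,  5),
    ("external enquiries",       20,  5),
    ("internal logical files",    4, 10),
    ("external interface files",  2,  7),
]

count_total = sum(count * weight for _, count, weight in DOMAIN_VALUES)
assert count_total == 320

sum_Fi = 52                                         # sum of the 14 complexity factors
fp_estimated = count_total * (0.65 + 0.01 * sum_Fi) # 320 * 1.17

cost = fp_estimated * 1230                          # historic cost of ~$1,230 per FP
# fp_estimated is approximately 374.4 and cost approximately $460,500,
# consistent with the slide's rounded ~375 FP and ~$461,000.
print(fp_estimated, cost)
```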
© McGraw Hill 63
