Unit IV Testing: 4.1 Taxonomy of Software Testing

1. The document discusses different types of software testing, including unit testing, module testing, subsystem testing, system testing and acceptance testing.
2. It describes the typical testing process, which tests individual components or units first, then modules, subsystems, and finally the full integrated system through successive stages of integration testing.
3. Two integration testing approaches are discussed: top-down testing, which tests high levels before lower levels, and bottom-up testing, which tests lower levels first before integrating and testing higher levels.

Uploaded by

Tanish Pant

UNIT IV

TESTING

4.1 TAXONOMY OF SOFTWARE TESTING:

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and code generation.

Testing involves exercising the program using data like the real data processed by the program; the existence of defects is inferred from unexpected system outputs.

Testing may be carried out during the implementation phase, to verify that the software behaves as intended by its designer, and after the implementation is complete. This later phase checks conformance with the requirements.

Different kinds of testing use different types of data:

 statistical testing may be used to test the program's performance and reliability
 defect testing is intended to find the areas where the program does not
conform to its specification

Testing Vs Debugging:

Defect testing and debugging are sometimes considered to be part of the same
process. In fact, they are quite different. Testing establishes the existence of defects.
Debugging usually follows testing, but the two differ in their goals, methods and psychology:

www.vidyarthiplus.com Page 1

1. Testing starts with known conditions, uses predefined procedures, and has
predictable outcomes; only whether or not the program passes the test is
unpredictable. Debugging starts from possibly unknown initial conditions, and its end
cannot be predicted.

2. Testing can and should be planned, designed and scheduled; the procedures for,
and duration of, debugging cannot be so constrained.

3. Testing is a demonstration of error or apparent correctness; debugging is a
deductive process.

4. Testing proves a programmer's failure. Debugging is the programmer's
vindication.

5. Testing, as executed, should strive to be predictable, dull, constrained, rigid and
inhuman; debugging demands intuitive leaps and experimentation.
6. Much of the testing can be done without design knowledge; debugging is
impossible without detailed design knowledge.

7. Testing can often be done by an outsider. Debugging must be done by an
insider.

8. Much of test execution and design can be automated. Automated debugging is
still a dream.

Testing Objectives:

1. Testing is a process of executing a program with the intent of finding an error.

2. A good test case is one that has a high probability of finding an as yet undiscovered error.


3. A successful test is one that uncovers an as yet undiscovered error.

Testing Principles:

The various testing principles are listed below:

1. All tests should be traceable to customer requirements. The most severe defects
are those that cause the program to fail to meet its requirements.

2. Tests should be planned long before testing begins. All tests can be planned and
designed before any code has been generated.

3. Testing should begin “in the small” and progress towards testing “in the large”.
The first tests planned and executed generally focus on the individual components. As
testing progresses, focus shifts in an attempt to find errors in integrated clusters of
components and ultimately in the entire system.

4. Exhaustive testing is not possible.

5. To be most effective, testing should be conducted by an independent third party.

Attributes of a good test:

1. A good test has a high probability of finding an error.

2. A good test is not redundant.


3. In a group of tests that have a similar intent, given limited time and resources, the
test that has the highest likelihood of uncovering a whole class of errors should be used.

4. A good test should be neither too simple nor too complex. Each test should be
executed separately.

The Testing Process

Systems should be tested as a single, monolithic unit only for small
programs. Large systems are built out of sub-systems, which are built out of modules,
which are composed of procedures and functions. The testing process should therefore
proceed in stages, where testing is carried out incrementally in conjunction with system
implementation.

The most widely used testing process consists of five stages

Unit testing → Module testing → Sub-system testing → System testing → Acceptance testing

(Unit and module testing are forms of component testing; the later stages form integration testing.)

4.2 TYPES OF SOFTWARE TESTING

1. unit testing
2. module testing
3. sub-system testing
4. system testing
5. acceptance testing

1.) Unit Testing:

Here individual components are tested to ensure that they operate correctly. Each
component is tested separately.

2.) Module Testing:

A module is a collection of dependent components such as an object class, an
abstract data type, or some looser collection of procedures and functions. A module
encapsulates related components, so it can be tested without other system modules.


3.) Sub-System Testing:

This phase involves testing collections of modules which have been
integrated into sub-systems. Sub-systems may be independently designed and
implemented. The most common problems which arise in large s/w systems are sub-system
interface mismatches.

4.) System Testing:

The sub-systems are integrated to make up the entire system. The testing process is
concerned with finding errors which result from unanticipated interactions between
sub-systems and system components. It is also concerned with validating that the
system meets its functional and non-functional requirements.

5.) Acceptance testing:

This is the final stage in the testing process before the system is accepted
for operational use. The system is tested with data supplied by the system procurer
rather than simulated test data. Acceptance testing may reveal errors and omissions
in the system requirements definition, because the real data exercise the system in
different ways from the test data. Acceptance testing may also reveal requirements
problems where the system's facilities do not really meet the user's needs or the
system performance is unacceptable.

(i) Alpha testing:


Acceptance testing is sometimes called alpha testing. The alpha testing
process continues until the system developer and the client agree that the delivered
system is an acceptable implementation of the system requirements.

(ii) Beta testing:

When a system is to be marketed as a software product, a testing process
called beta testing is often used. Beta testing involves delivering a system to a number
of potential customers who agree to use that system. They report problems to the
system developers. This exposes the product to real use and detects errors which may
not have been anticipated by the system builders. After this feedback, the system is
modified and either released for further beta testing or for general sale.

4.3 INTEGRATION TESTING

TOP - DOWN TESTING:

Top-down testing tests the high levels of a system before testing its
detailed components. The program is represented as a single abstract component
with sub-components represented by stubs. Stubs have the same interface as the
component but very limited functionality.

After the top-level components have been tested, their sub-components
are implemented and tested in the same way. This process continues recursively until
the bottom-level components are implemented. The whole system may then be
completely tested.
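As a minimal sketch of this idea, the hypothetical top-level component below is exercised against a stub that shares the real sub-component's interface but returns only canned data (all class and method names here are illustrative assumptions, not from the text):

```python
class DatabaseStub:
    """Stub: same interface as the real database component,
    but only canned, very limited functionality."""
    def fetch_records(self, table):
        # Always returns the same fixed data, regardless of `table`.
        return [{"id": 1, "name": "sample"}]

class ReportGenerator:
    """Top-level component under test; depends on a database component."""
    def __init__(self, db):
        self.db = db

    def summary(self):
        records = self.db.fetch_records("users")
        return f"{len(records)} record(s)"

# Top-down test: exercise the high-level component against the stub,
# before the real database component is implemented.
report = ReportGenerator(DatabaseStub()).summary()
```

Once the real database component is implemented and tested, it simply replaces the stub without changing `ReportGenerator`.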

Advantages of top down testing:

1. Unnoticed design errors may be detected at an early stage in the testing process.
As these errors are mainly structural errors, early detection means that they can be
corrected without undue cost.

2. A limited working system is available at an early stage in the development. It


demonstrates the feasibility of the system to management.

Disadvantage of top-down testing:

1. Program stubs simulating the lower levels of the system must be produced. If the
component is a complex one, it may be impractical to produce a program stub
which simulates it accurately.

2. Test output may be difficult to observe. In many systems, the higher levels of the
system do not generate output; but to test these levels, they must be made to do so.
The tester must create an artificial environment to generate the test results.

II. BOTTOM-UP TESTING

Bottom-up testing is the converse of top-down testing. It involves testing
the modules at the lower levels in the hierarchy first, and then working up the hierarchy of
modules until the final module is tested.

When using bottom-up testing, test drivers must be written to exercise the
lower-level components. These test drivers simulate the components' environment and are
valuable components in their own right.

If the components being tested are reusable components, the test drivers
and test data should be distributed with the component.


Potential re-users can then run these tests to satisfy themselves that the
component behaves as expected in their environment.
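A small sketch of a test driver, under assumed names: the low-level component `parse_price` exists, but no higher-level module that calls it does yet, so the driver simulates that environment by feeding the component inputs a caller would supply and checking the results:

```python
def parse_price(text):
    """Low-level component under test: parse a price string to integer cents."""
    units, _, cents = text.strip().lstrip("$").partition(".")
    return int(units) * 100 + int(cents or 0)

def driver():
    """Test driver: simulates the component's environment.
    It plays the role of the not-yet-written higher-level caller."""
    cases = [("$3.50", 350), ("10", 1000), (" $0.05 ", 5)]
    results = []
    for text, expected in cases:
        results.append(parse_price(text) == expected)
    return results

driver_results = driver()
```

If `parse_price` were distributed as a reusable component, the driver and its test data would be shipped with it, so re-users can re-run the checks in their own environment.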

The advantages of bottom-up testing are the disadvantage of top-down


testing and vice-versa.

Bottom-up testing is appropriate for object-oriented systems in that


individual objects may be tested using their own drivers.

4.4 Unit testing

Testing begins at the vortex of the spiral and concentrates on each unit of the s/w as
implemented in source code.

Testing progresses by moving outward along the spiral to integration
testing. Here the focus is on design and the construction of the software
architecture.

Taking another turn outward on the spiral, validation testing is encountered.
Here, requirements established as part of s/w requirements analysis are
validated against the software that has been constructed.

Finally system testing is conducted. In this the software and other system
elements are tested as a whole.

1. Unit Testing:


Unit testing focuses verification effort on the smallest unit of software
design: the software component or module.

Unit Testing considerations:

 The module interface is tested to ensure that information properly flows into
and out of the program unit under test.
 The local data structures are examined to ensure that data stored temporarily
maintains its integrity during all steps in an algorithm's execution.
 Boundary conditions are tested to ensure that the module operates properly at
boundaries established to limit or restrict processing.
 All independent paths through the control structure are exercised to ensure
that all statements in a module have been executed at least once.
 Finally, all error handling paths are tested.
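The considerations above can be sketched on a hypothetical unit, a small `average` function (the function and its limits are assumptions for illustration, not from the text); the checks exercise its interface, a boundary condition, and both error-handling paths:

```python
def average(values, max_items=3):
    """Hypothetical unit under test: mean of up to `max_items` numbers."""
    if not values:
        raise ValueError("no values supplied")   # error-handling path 1
    if len(values) > max_items:
        raise ValueError("too many values")      # error-handling path 2
    return sum(values) / len(values)

# Interface: information flows correctly into and out of the unit.
ok = average([2, 4]) == 3.0
# Boundary condition: exactly `max_items` values is still legal.
at_boundary = average([1, 2, 3]) == 2.0
# Error-handling paths: both guards are exercised.
try:
    average([])
    empty_raises = False
except ValueError:
    empty_raises = True
try:
    average([1, 2, 3, 4])
    overflow_raises = False
except ValueError:
    overflow_raises = True
```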

4.5 Regression testing:

This testing is the re-execution of some subset of tests that have already
been conducted, to ensure that changes have not propagated unintended side
effects.

Regression testing is the activity that helps to ensure that changes do not
introduce unintended behavior or additional errors.

The regression test suite contains three different classes of test cases:

 A representative sample of tests that will exercise all software functions.


 Additional tests that focus on software functions that are likely to be affected by
the change.
 Tests that focus on the s/w components that have been changed.
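The three classes above can be sketched as a simple suite-selection routine; the test names, their class labels, and the component mapping are all hypothetical:

```python
# Each test is labeled with its class and the components it touches.
tests = {
    "test_login":    {"kind": "representative", "components": {"auth"}},
    "test_checkout": {"kind": "representative", "components": {"cart", "pay"}},
    "test_refund":   {"kind": "affected",       "components": {"pay"}},
    "test_tax_calc": {"kind": "changed",        "components": {"pay"}},
    "test_help":     {"kind": "affected",       "components": {"docs"}},
}

def regression_suite(changed_components):
    """Select the representative sample plus any test whose
    components overlap the changed part of the software."""
    suite = set()
    for name, info in tests.items():
        if info["kind"] == "representative":
            suite.add(name)                       # class 1: sample of all functions
        elif info["components"] & changed_components:
            suite.add(name)                       # classes 2 and 3: affected/changed
    return suite

suite = regression_suite({"pay"})
```

Here a change to the hypothetical `pay` component re-runs the representative sample and the `pay`-related tests, but skips the unrelated `test_help`.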


4.6 Validation testing:

At the end of integration testing, s/w is completely assembled as a package,
interfacing errors have been uncovered and corrected, and a final series of
software tests, validation testing, may begin.

Validation succeeds when s/w functions in a manner that can be reasonably


expected by the customer.

After each validation test case has been conducted, one of two possible
conditions exists:

1. The information or performance characteristics conform to specification and are
accepted.
2. A deviation from specifications is uncovered and a deficiency list is created.

It is often necessary to negotiate with the customer to establish a method for


resolving deficiencies.

Configuration review:

This is an important element of the validation process. The intent of the


review is to ensure that all elements of the s/w configuration have been properly
developed.

Alpha and Beta testing:


When custom software is built for one customer, a series of acceptance


tests are conducted to enable the customer to validate all requirements.

The alpha and beta tests have been discussed previously.

4.7 SYSTEM TESTING AND DEBUGGING:

S/w is incorporated with other system elements like hardware, people and
information, and a series of system integration and validation tests are conducted.
These tests fall outside the scope of the software process and are not conducted
solely by s/w engineers.

System testing is actually a series of different tests whose primary purpose


is to fully exercise the computer based system. Although each test has a different
purpose, all work to verify that system elements have been properly integrated and
perform allocated functions.

Types of system testing:

1. Recovery testing
2. Security testing
3. Stress testing
4. Performance testing

Recovery testing:

Many computer based systems must recover from faults and resume
processing within a prespecified time.


Recovery testing is a system test that forces the s/w to fail in a variety of
ways and verifies that recovery is properly performed.

If recovery requires human intervention, the mean time to repair (MTTR) is
evaluated to determine whether it is within acceptable limits.

Security testing:

Security testing attempts to verify that protection mechanisms built into a
system will, in fact, protect it from improper penetration.

During security testing, the tester plays the role of the individual who
desires to penetrate the system.

Stress testing:

This executes a system in a manner that demands resources in


abnormal quantity, frequency or volume.

Essentially, the tester attempts to break the program.

A variation of stress testing is a technique called sensitivity testing. In some
situations, a very small range of data contained within the bounds of valid data for
a program may cause extreme and even erroneous processing or profound
performance degradation.


Sensitivity testing attempts to uncover data combinations within valid input
classes that may cause instability or improper processing.

Performance testing

For real time and embedded systems, software that provides required function but does not
conform to performance requirements is unacceptable. Performance testing is designed to test the run-
time performance of s/w within the context of an integrated system.

Performance testing occurs throughout all steps in the testing process. Even at the unit level,
the performance of an individual module may be assessed.

Performance tests are often coupled with stress testing and usually require both hardware and
s/w instrumentation.

TESTING AND DEBUGGING


 Defect testing and debugging are distinct processes.
 Verification and validation is concerned with establishing the existence of defects in a program.
 Debugging is concerned with locating and repairing these errors.


 Debugging involves formulating hypotheses about program behaviour, then testing these
hypotheses to find the system error.

THE DEBUGGING PROCESS

Debugging is an iterative cycle: locate the error, design an error repair, repair the error, then re-test the program.

4.8 TEST COVERAGE BASED ON DATA FLOW MECHANISM:

White box testing is also called glass box testing. It is a test case design method that uses the
control structure of the procedural design to derive test cases.

Benefits of white box testing:

 Focused testing: The programmer can test the program in pieces. It's much easier to give an
individual suspect module a thorough workout in glass box testing than in black box testing.

 Testing coverage: The programmer can also find out which parts of the program are exercised
by any test. It is possible to find out which lines of code, which branches, or which paths haven’t
yet been tested.


 Control flow: The programmer knows what the program is supposed to do next, as a function
of its current state.

 Data integrity: The programmer knows which parts of the program modify any item of data,
and can track a data item through the system.

 Internal boundaries: The programmer can see internal boundaries in the code that are
completely invisible to the outside tester.

 Algorithmic-specific testing: The programmer can apply standard numerical analysis techniques to
predict the results.

Various white box testing techniques:

1. BASIS PATH TESTING:

The basis path method enables the test case designer to derive a logical complexity measure of
a procedural design and use this measure as a guide for defining a basis set of execution paths.

Flow graph notation:

A flow graph is a simple notation for the representation of control flow. Each structured construct
has a corresponding flow graph symbol.

Flow graph node: Represents one or more procedural statements.

Edges or links: Represent flow control.


Regions: These are areas bounded by edges and nodes.

Each node that contains a condition is called a predicate node and is characterized by two or
more edges emanating from it.

Cyclomatic complexity:

Cyclomatic complexity is a software metric that provides a quantitative measure of the logical
complexity of a program.

The value computed for cyclomatic complexity defines the number of independent paths in the basis set of
a program, and provides an upper bound for the number of tests that must be conducted to
ensure that all statements have been executed at least once.

Three ways of computing cyclomatic complexity:

1. The number of regions of the flow graph corresponds to the cyclomatic complexity.

2. Cyclomatic complexity V(G) for a flow graph G is defined as

V(G) = E - N + 2

E is the number of the flow graph edges; N is the number of flow graph nodes.

3. Cyclomatic complexity V(G) for a flow graph G may also be computed as

V(G) = P + 1


P is the number of predicate nodes contained in the flow graph G.
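The formulas can be cross-checked on a small example. The sketch below builds a hypothetical flow graph (an if-else inside a loop) as an adjacency list and verifies that V(G) = E - N + 2 agrees with V(G) = P + 1:

```python
# Hypothetical flow graph: node -> list of successor nodes.
# Node 2 is a loop predicate, node 3 an if predicate, node 6 the exit.
flow_graph = {
    1: [2],        # entry
    2: [3, 6],     # loop predicate
    3: [4, 5],     # if predicate
    4: [7],
    5: [7],
    7: [2],        # loop back-edge
    6: [],         # exit
}

nodes = len(flow_graph)
edges = sum(len(succ) for succ in flow_graph.values())
# A predicate node has two (or more) edges emanating from it.
predicates = sum(1 for succ in flow_graph.values() if len(succ) >= 2)

v_edges = edges - nodes + 2   # V(G) = E - N + 2
v_preds = predicates + 1      # V(G) = P + 1
```

For this graph E = 8 and N = 7, so both formulas give V(G) = 3, matching the 3 regions of the planar flow graph.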

Deriving test cases:

The basis path testing method can be applied to procedural design or to source code.

Steps to derive the basis test:

1. Using the design or code as a foundation, draw a corresponding flow graph. A flow graph is
created using the symbols and construction rules.

2. Determine the cyclomatic complexity of the resultant flow graph. V (G) is determined by
applying the above algorithms.

3. Determine a basis set of linearly independent paths. The value of V (G) provides the number of
linearly independent paths through the program control structure.

II. CONDITION TESTING

Condition testing is a test case design method that exercises the logical conditions contained in
a program module.

The condition testing method focuses on testing each condition in the program.

Advantage of condition testing:



1. Measurement of the test coverage of a conditional is simple.

2. The test coverage of conditions in a program provides guidance for the generation of additional
tests for the program.

 Branch testing: This is the simplest condition testing strategy. For a compound condition C,
the true and false branches of C and every simple condition in C need to be executed at
least once.

 Domain testing: This requires three or four tests to be derived for a relational
expression.

 BRO (branch and relational operator) testing: This technique guarantees the detection of
branch and relational operator errors in a condition, provided that all Boolean variables and
relational operators in the condition occur only once.

III. DATA FLOW TESTING:

The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program.

For a statement with S as its statement number,

DEF(S) = {X | statement S contains a definition of X}


USE(S) = {X | statement S contains a use of X}

If statement S is an if or loop statement, its DEF set is empty and its USE set is based on the
condition of statement S.

A definition-use (DU) chain of variable X is of the form [X, S, S'], where S and S' are statement
numbers, X is in DEF(S) and USE(S'), and the definition of X in statement S is live at statement S'.

One simple data flow testing strategy is to require that every DU chain be covered at least once.
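A toy sketch of these definitions, with DEF and USE sets worked out by hand for the three-statement fragment `S1: x = 1; S2: y = x + 2; S3: print(y)` (the fragment and sets are illustrative assumptions):

```python
DEF = {1: {"x"}, 2: {"y"}, 3: set()}
USE = {1: set(), 2: {"x"}, 3: {"y"}}

def du_chains(DEF, USE):
    """List [X, S, S'] chains: X defined at S and used at S', with the
    definition still live (not redefined) when S' is reached."""
    chains = []
    stmts = sorted(DEF)
    for s in stmts:
        for x in DEF[s]:
            for s2 in stmts:
                if s2 <= s:
                    continue
                if x in USE[s2]:
                    chains.append([x, s, s2])
                if x in DEF[s2]:
                    break   # definition killed; stop following it
    return chains

chains = du_chains(DEF, USE)
```

Covering every DU chain at least once here means choosing paths that exercise both [x, S1, S2] and [y, S2, S3].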

Data flow testing strategies are useful for selecting test paths for a program containing nested if
and loop statements.

Since the statements in a program are related to each other according to the definitions and
uses of variables, the data flow testing approach is effective for error detection.

Problem:

Measuring test coverage and selecting test paths for data flow testing are more difficult.

IV. LOOP TESTING:

Loop testing is a white box testing technique that focuses exclusively on the validity of loop
constructs.


Different classes of loops:

1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops

Simple loops:

1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n-1, n and n+1 passes through the loop, where n is the maximum number of allowable passes.
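The simple-loop test set above can be sketched as a small generator of pass counts; n is the assumed maximum number of allowable passes of the hypothetical loop under test:

```python
def simple_loop_pass_counts(n, m=None):
    """Pass counts to try against a loop with at most n iterations:
    skip, 1, 2, m (m < n), n-1, n, n+1."""
    if m is None:
        m = n // 2          # any interior value with m < n will do
    counts = [0, 1, 2, m, n - 1, n, n + 1]
    # Deduplicate while preserving order, for very small n.
    seen, out = set(), []
    for c in counts:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out

counts = simple_loop_pass_counts(10)
```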

Nested loops: The number of possible tests would grow geometrically as the level of nesting
increases.

Methods to reduce the number of tests:

1. Start at the innermost loop.

2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter values.

3. Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum
value and other nested loops to typical values.


Concatenated loops:

Concatenated loops can be tested using the approach defined for simple loops if each of the
loops is independent of the other. However, if two loops are concatenated and the loop counter for loop
1 is used as the initial value for loop 2, then the loops are not independent, and the approach applied to
nested loops is recommended.

Unstructured loops:

Whenever possible, this class of loops should be redesigned to reflect the use of the structured
programming constructs.

4.9 BLACK BOX TESTING:

Black box testing is also called behavioral testing. It focuses on the functional requirements
of the s/w. Black box testing enables the s/w engineer to derive sets of input conditions that will
fully exercise all functional requirements for a program.

Errors found by black box testing:

1. Incorrect or missing functions.

2. Interface errors.

3. Errors in data structures or external data base access.

4. Behavior or performance errors.

5. Initialization and termination errors.

Various black box testing method:

1. Equivalence partitioning

2. Boundary value analysis

3. Comparison testing

4. Orthogonal array testing

1. EQUIVALENCE PARTITIONING:

It is a black box testing method that divides the input domain of a program into classes of data
from which test cases can be derived.

Test case design for equivalence partitioning is based on an evaluation of equivalence classes for
an input condition.

The input data to a program usually fall into a number of different classes. These classes have
common characteristics, for example positive numbers, negative numbers, strings without blanks,
and so on. Programs normally behave in a comparable way for all members of a class. Because of
this equivalent behavior, these classes are sometimes called equivalence partitions or domains.

A systematic approach to defect testing is based on identifying a set of equivalence partitions


which must be handled by a program.

Guidelines for defining equivalence classes:

1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.

2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.

3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.

4. If an input condition is Boolean, one valid and one invalid class are defined.

Test cases for each input domain data item can be developed and executed by applying the
guidelines for the derivation of equivalence classes.
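Guideline 1 can be sketched for a hypothetical input, an age field that must lie in the range 18..65 (the field, range, and representative values are illustrative assumptions): one valid class and two invalid classes, with one representative test value drawn from each.

```python
def age_partitions(lo=18, hi=65):
    """One valid and two invalid equivalence classes for a range input,
    each expressed as a membership predicate."""
    return {
        "valid":         lambda x: lo <= x <= hi,
        "invalid_below": lambda x: x < lo,
        "invalid_above": lambda x: x > hi,
    }

# One representative test value per class; any member of a class is
# assumed to exercise the program in the same way.
parts = age_partitions()
representatives = {"valid": 30, "invalid_below": 10, "invalid_above": 99}
checks = {name: parts[name](value) for name, value in representatives.items()}
```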

III. COMPARISON TESTING:

When reliability of software is absolutely critical, redundant hardware and s/w are often used to
minimize the possibility of error. In such situations, each version can be tested with the same test data
to ensure that all provide identical output. These independent versions form the basis of a black box
testing technique called comparison testing.

If the output from each version is the same, it is assumed that all implementations are
correct. If the output is different, each of the applications is investigated to determine if a defect in one
or more versions is responsible for the difference.
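A minimal sketch of this: two independently written versions of the same hypothetical specification ("return the largest value") are run on identical test data and their outputs compared; any input on which they disagree is flagged for investigation.

```python
def version_a(values):
    """Version A: hand-written iterative maximum."""
    best = values[0]
    for v in values[1:]:
        if v > best:
            best = v
    return best

def version_b(values):
    """Version B: independently written, using the built-in."""
    return max(values)

# Same test data fed to both versions; disagreements flag a possible defect.
test_data = [[3, 1, 2], [-5, -9], [7], [0, 0, 4, 4]]
disagreements = [d for d in test_data if version_a(d) != version_b(d)]
```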

Problem in comparison testing:

1. Comparison testing is not foolproof. If the specification from which all versions have been
developed is in error, all versions will likely reflect the error.

2. If each of the independent versions produces identical but incorrect results, comparison testing
will fail to detect the error.

IV. ORTHOGONAL ARRAY TESTING:

Orthogonal array testing can be applied to problems in which the input domain is relatively small but
too large to accommodate exhaustive testing. The orthogonal array testing method is particularly
useful in finding errors associated with region faults, an error category associated with faulty logic
within a software component.

When orthogonal array testing is applied, an L9 orthogonal array of test cases is created. The L9
orthogonal array has a balancing property: test cases are dispersed uniformly throughout the
test domain.

The orthogonal array testing approach enables us to provide good test coverage with far fewer test
cases than the exhaustive strategy.
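As a sketch, the standard L9 array (9 test cases for four factors at three levels each, encoded 0..2) is listed below, together with a check of the balancing property: each level appears equally often in every column. Exhaustive testing of the same four three-level factors would need 3^4 = 81 cases.

```python
# Standard L9(3^4) orthogonal array: rows are test cases, columns are
# factors, entries are factor levels 0..2.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def balanced(array, levels=3):
    """Balancing property: each level occurs equally often in each
    column (here, 9 rows / 3 levels = 3 times)."""
    cols = list(zip(*array))
    return all(col.count(level) == len(array) // levels
               for col in cols for level in range(levels))

is_balanced = balanced(L9)
```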

4.10 BOUNDARY VALUE ANALYSIS:


A great number of errors tend to occur at the boundaries of the input domain rather than in the
center. Boundary value analysis (BVA) therefore selects test cases at the edges of each class, and derives
test cases from the output domain as well as the input domain.

Guidelines for boundary value analysis:

1. If an input condition specifies a range bounded by values a and b, test cases should be designed
with values a and b and just above and just below a and b.

2. If an input condition specifies a number of values, test cases should be developed that exercise
the minimum and maximum numbers.

3. Apply guidelines 1 and 2 to output conditions.

4. If internal program data structures have prescribed boundaries, be certain to design a test case
to exercise the data structure at its boundary.

4.11 STRUCTURAL TESTING

 Sometimes called white-box testing.
 Derivation of test cases according to program structure: knowledge of the program is used to identify additional test cases.
 Objective is to exercise all program statements (not all path combinations).

(Figure: tests and test data are derived from the component code; running the tests against the component produces test outputs for examination.)

4.12 SOFTWARE IMPLEMENTATION TECHNIQUES

Testing tool categories:

S/w quality engineering defines the following testing tool categories:

 Data acquisition- tools that acquire data to be used during testing.

 Static measurement- tools that analyze source code without executing test cases.

 Dynamic measurement- tools that analyze source code during execution.

 Simulation – tools that simulate function of hardware or other externals.


 Test management – tools that assist in the planning, development, and control of testing.

 Cross functional tools – tools that cross the bounds of the preceding categories.

