
Topic 7c - Test Design Techniques - Part 5 - Structure-Based or White-Box Techniques

The document discusses different techniques for designing software tests, including specification-based or black box techniques, structure-based or white box techniques, and experience-based techniques. It provides details on code coverage levels for structure-based testing and describes techniques like error guessing that use a tester's experience and intuition to design tests.


7.0 Test Design Techniques (cont.)
Test Design Techniques

7.1 The Test Development Process


7.2 Categories of Test Design Techniques
7.3 Specification based or Black Box Techniques
7.4 Structure based or White Box Techniques
7.5 Experience Based Techniques
7.6 Choosing Test Techniques
7.7 Summary
Structure-based or white-box techniques
• Structure-based tests are based on how the system works inside
– Determine and achieve a level of coverage of control flows based on code analysis
– Determine and achieve a level of coverage of data flows based on code and data analysis
• Coverage of structure is a way to check for gaps in specification- and experience-based tests
• Levels of code coverage:
– Statement coverage: every statement executed
– Branch (decision) coverage: every branch (decision) taken each way,
true and false
– Condition coverage: each condition has been evaluated both true and
false
– Loop coverage: all loop paths taken zero, once, and multiple (ideally,
maximum) times
Code coverage example

• What test values for n do we need to cover all the statements?
• Does that get us branch (decision) coverage?

 1  #include <iostream>
 2  using namespace std;
 3  int main ()
 4  {
 5      int i, n, f;
 6      cout << " n = ";
 7      cin >> n;
 8      if (n < 0) {
 9          cout << " Invalid: " << n;
10          n = -1;
11      } else {
12          f = 1;
13          for (i = 1; i <= n; i++) {
14              f *= i;
15          }
16          cout << n << " ! = " << f << endl;
17      }
18      return 0;
19  }
Statement testing

• The objective of statement testing is to cover/execute all the statements - 100% statement coverage
• Identify the test value(s) that could meet the objective
• What test values for n do we need to cover all the statements?
  - n < 0 and n > 0
Flowgraphs consist of 3 primitives
• A decision: a program point at which the control flow can diverge
  – E.g. if, loop, switch..case statements
• A junction: a program point where the control flow can merge
  – E.g. the targets of goto, return
• A process block: a sequence of program statements uninterrupted by either decisions or junctions (i.e. straight-line code)
  – Has one entry and one exit
Example of Question: Statement-based Testing

a) How many test cases are needed to achieve 100 per cent statement coverage?
b) Which lines of code are not executed when we consider only statement coverage?
c) What value of n causes the line of code in (b) not to be executed?

 1  #include <iostream>
 2  using namespace std;
 3  int main ()
 4  {
 5      int i, n, f;
 6      cout << " n = ";
 7      cin >> n;
 8      if (n < 0) {
 9          cout << " Invalid: " << n;
10          n = -1;
11      } else {
12          f = 1;
13          for (i = 1; i <= n; i++) {
14              f *= i;
15          }
16          cout << n << " ! = " << f << endl;
17      }
18      return 0;
19  }
Solution Steps

1. Draw the control flow graph.
2. Tabulate the process links.
Step 1: Draw Control Flow Graph

 1  #include <iostream>
 2  using namespace std;
 3  int main ()
 4  {
 5      int i, n, f;
 6      cout << " n = ";
 7      cin >> n;
 8      if (n < 0) {
 9          cout << " Invalid: " << n;
10          n = -1;
11      } else {
12          f = 1;
13          for (i = 1; i <= n; i++) {
14              f *= i;
15          }
16          cout << n << " ! = " << f << endl;
17      }
18      return 0;
19  }
Step 2: Process Links Table

Path        a  b  c  d  e  f  g  h  i  j    Input    Expected Output
acj         X     X                    X    n=-1     Invalid -1
abdefghi    X  X     X  X  X  X  X  X       n=1      1! = 1
abdhi       X  X     X           X  X       n=0      0! = 1

a) How many test cases are needed to achieve 100 per cent statement coverage? 2.
b) Which lines of code are not executed when we consider only statement coverage?
c) What value of n causes the line of code in (b) not to be executed?
Another look at Statement Coverage

• Consider the code sample below:

  READ A
  READ B
  IF A > B THEN
      C = 0
  ENDIF

• To achieve 100% statement coverage of this code segment, just one test case is required
• One which ensures that variable A contains a value that is greater than the value of variable B, for example, A = 12 and B = 10
Exercise: Statement Testing
Given the following program, answer the following questions:

#include <iostream>
#include <cstdlib>   // for system()
using namespace std;
int main ()
{
int a,b, c;
cin >> a >> b;
c = a*b;
if (c > 50)
cout << "Large" << c;

system("pause");
return 0;
}

a) How many test cases are needed to achieve 100 per cent statement coverage?
More Exercise
Decision (Branch) testing
• A decision is an IF statement, a loop control statement (e.g. DO-WHILE or REPEAT-UNTIL), or a CASE statement, where there are two or more possible exits or outcomes from the statement.
• With an IF statement, the exit can either be TRUE or FALSE, depending on the value of the logical condition that comes after IF.
• With a loop control statement, the outcome is either to perform the code within the loop or not - again a TRUE or FALSE exit.
• Decision coverage is calculated by:

  Decision coverage = (number of decision outcomes exercised / total number of decision outcomes) × 100%

• The objective of branch coverage is to cover every branch (decision) taken each way, TRUE and FALSE

• Does statement coverage imply branch (decision) coverage?


Example of Question: Decision-based Testing

a) How many test cases are needed to achieve 100 per cent decision coverage?
b) What possible values of n should we test to achieve 100 per cent decision coverage?

 1  #include <iostream>
 2  using namespace std;
 3  int main ()
 4  {
 5      int i, n, f;
 6      cout << " n = ";
 7      cin >> n;
 8      if (n < 0) {
 9          cout << " Invalid: " << n;
10          n = -1;
11      } else {
12          f = 1;
13          for (i = 1; i <= n; i++) {
14              f *= i;
15          }
16          cout << n << " ! = " << f << endl;
17      }
18      return 0;
19  }
Solution Steps

• Draw the control flow graph.
  – We can reuse the previous control flow graph from statement coverage.
  – Must ensure every condition has its true and false flow; else, revise the graph.
• Tabulate the decisions for each condition
Decision Tabulation

Path        n<0   i<=n   Input    Expected Output
acj         T     -      n=-1     Invalid -1
abdefghi    F     T      n=1      1! = 1
abdhi       F     F      n=0      0! = 1
Re-visit Exercise: Branch/Decision Testing

Given the following program, answer the following questions:

#include <iostream>
#include <cstdlib>   // for system()
using namespace std;
int main ()
{
int a,b, c;
cin >> a >> b;
c = a*b;
if (c > 50)
cout << "Large" << c;

system("pause");
return 0;
}

a) How many test cases are needed to achieve 100 per cent decision coverage?
EXPERIENCE-BASED TECHNIQUES
Experience-based techniques
• Experience-based Tests / Intuitive Testing
• Experience-based tests are based on the tester's
  – Skills and intuition
  – Experience with similar applications
  – Experience with similar technologies
• Rather than being pre-designed, experience-based tests are often created during test execution (i.e. the test strategy is dynamic/on-the-fly)
• Tests are only documented if a defect is found
• Tests are frequently "time-boxed" (i.e. brief periods of testing focused on specific test conditions)
Experience-based techniques (cont.)

• The tester knows typical problems.


• Focus on:
– Typical faults
– Unclear items
– Special values
– Special connections and contexts
– Risk factors
– Use of the program and its environment
• Very dependent on knowledge, skill and
experience!
• A good complement to black-box and white-box techniques!
(1) ERROR GUESSING
Error Guessing: Characteristics

• A test design technique where the experience and knowledge of the tester are used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them
• A method for choosing test cases, based on the knowledge and experience of the tester
• Structured approach:
  – Create a defect catalog containing as many defects and effects of defects as possible
  – Design test cases which target those defects

© Copyright 2011 to MSTB/ GTB V 1.0


Error Guessing: Procedure (1)
• Possibly the most often used test design technique (often also called intuitive test design). Tests are derived from the capability and the intuition of the tester and from his/her experience with similar applications and technologies
• Error guessing can be used to support systematic test design techniques
• It identifies tests which are not, or only with difficulty, designed by systematic test design techniques
• Attention: error guessing can achieve very different degrees of efficiency, since these depend on the experience of the tester



Error Guessing: Procedure (2)
• Structured Approach
– Create a list of possible defects and design the test cases,
which target those defects
– This list of defective states and effects can be created on
the basis of experience, available data and general
knowledge relating to software defects
– The list can also be of great benefit to users, who can be made aware of possible problems and difficulties before implementation. This can be useful for defect avoidance.



Error Guessing: Catalog of Defects

• A list of possible defects and error-prone situations
  – Extensive experience often exists only in the heads of experienced testers
  – Experience of frequently occurring defects is recorded and made available to all testers
• Problems:
  – Timeliness and relevance
  – Defect catalogs are not domain- or product-specific
• General defect catalogs should be adapted
• Defining and maintaining one's own defect catalog is always needed



Error Guessing: Catalog of Defects

References / Related Work:

• Numerous defect catalogues have been published. Cem Kaner, Jack Falk & Hung Quoc Nguyen, Testing Computer Software, contains a list of over 400 defects
• James A. Whittaker, How to Break Software: A Practical Guide to Testing, contains possible "attacks" which could lead to a defect
• James A. Whittaker, How to Break Software Security, contains possible security attacks
• Mike Andrews and James A. Whittaker, How to Break Web Software: Functional and Security Testing of Web Applications and Web Services
• Greg Hoglund and Gary McGraw, Exploiting Software: How to Break Code



(2) EXPLORATORY TESTING
Exploratory Testing: Learn, Test and
Execute Simultaneously
• Informal testing, in which no test preparation takes place and no specific test design technique is used
• No expected results are specified, and the test run happens more or less arbitrarily
• Test case analysis and test case design, test implementation and execution, test recording, and especially also the learning process all take place at the same time
• The basis is a test charter, from which the test goals and possible test ideas can be taken
• Execution in a fixed time frame
• Especially suitable if there are only few or low-quality specifications, when testing under time pressure, or if other formal test design techniques are to be complemented



Exploratory Testing: Qualities and
Activities
• Qualities
– Parallel test design and test execution. Development of strategies to
scrutinize the product, the definition and the execution of test cases
– Verifiable results. Results and experiences of previous tests will be
documented, evaluated and may influence the next tests
– Exploration : “Getting to know” the product (Functions used, data,
immature product areas, …)
– Heuristics: Guidelines and rules of thumb simplify the choice of test
cases
• Activities
  – Test Preparation
  – Test Execution
  – Documentation
  – Test Evaluation
Exploratory Testing: Test Preparation
Exploratory Testing: The 5 stages
Bug Taxonomy Example
Test charter template example
Structured vs. Exploratory
CHOOSING TEST TECHNIQUES
Choosing Test Techniques

• How do you choose the right technique?
– Type of system
– Standards
– Customer or contractual
requirements
– Level of risk
– Type of risk
– Testing objectives
– Documentation available
– Knowledge / skills of the testers
– Time and budget
– Development processes
• Pick the right techniques for the
right situation
Factors in Choosing Techniques

1. Test objectives - If the test objective is simply to gain confidence that the software will cope with typical operational tasks, then use cases would be a sensible approach. If the objective is very thorough testing, then more rigorous and detailed techniques (including structure-based techniques) should be chosen.

2. Level and type of risk - The greater the risk (e.g. safety-critical systems), the greater the need for more thorough and more formal testing. Commercial risk may be influenced by quality issues (so more thorough testing would be appropriate) or by time-to-market issues (so exploratory testing would be a more appropriate choice).
Factors in Choosing Techniques (cont.)

3. Type of system - The type of system (e.g. embedded, graphical, financial, etc.) will influence the choice of techniques. For example, a financial application involving many calculations would benefit from boundary value analysis.

4. Regulatory standards - Some industries have regulatory standards or guidelines that govern the testing techniques used. For example, the aircraft industry requires the use of equivalence partitioning, boundary value analysis and state transition testing for high-integrity systems, together with statement, decision or modified condition/decision coverage depending on the level of software integrity required.
Factors in Choosing Techniques (cont.)

5. Customer or contractual requirements - Sometimes contracts specify particular testing techniques to use (most commonly statement or branch coverage).
Summary
• We learned about
– Identifying Test Conditions
– Designing Test Cases from the Test Conditions
– Creating Test Procedure Specifications to sequence and schedule our Test Cases
– The importance of traceability to requirements and specification of expected results

• We learned about the difference between Black and White Box testing
– White Box (Structure-based) Testing is based upon the structure of the program code
– Black Box (Specification-based) Testing is without reference to the internal workings of the program code
– The reasons why both are useful
Summary
• We learned about the different Black box techniques, mainly:
– Equivalence Partitioning
– Boundary Value Analysis
– State Transition Testing
– Decision Tables
– Use Case Testing
– Experience based Testing

• We learned about the different White box techniques, mainly:


– Statement Testing
– Decision Testing

• For all Black and White box techniques we learned why they are of use and for
which test levels they are typically applied
