Software Testing 1666362987
1
Agenda
• Grader Info
• Quick Review
• Introduction to Software Testing
• Input Space Partitioning
2
Outline
• Introduction
• Basic Concepts
• The Testing Process
• Types of Testing
• Testing Philosophy
• Summary
3
Introduction
What is software testing?
Software testing is a checking activity that validates whether the actual
results match the expected results and helps ensure that the software
system is free of defects.
Why is it needed?
– Field validation issues (e.g. check date of birth, or age, etc.)
– Sites responsiveness under specific load (e.g. during a registration …)
– Payment related issues (e.g. payment gateways)
– Security issues (payment methods)
– Verifying store policies and customer support (return policy, etc.)
– Lack of device and browser compatibility (different browsers.)
– etc.
Lack of testing may lead to
– Loss of money
– Loss of time
– Loss of business reputation
4
SQA, SQC, and Testing
• Software Quality Assurance (SQA)
– An error-prevention and verification activity.
– SQA's role is to observe that documented standards,
processes, and procedures are followed during
development
• Software Quality Control (SQC)
– A defect-detection and validation activity.
– SQC's role (usually performed by testers) is to validate the
quality of a system and to check whether the application
adheres to the defined quality standards.
• Testing
– Includes the activities that identify bugs, errors, and
defects in a software system
5
Software Development Life Cycle
6
V-model vs W-model
• V-model:
– The V-model is the most widely used software development model in
industry; testing is planned in parallel with system development.
• W-model:
– The W-model is a more recent software development model in which actual
testing activities run concurrently with the software development process,
starting from the very beginning.
7
V-model
8
W-model
9
Testing in the 21st Century
• Software defines behavior
• network routers, finance, switching networks, other infrastructure
10
Software is a skin that surrounds our
civilization
11
Software Quality
• The priority concern in software engineering
– No quality, no engineering!
– Software malfunctions can cause severe
consequences including environmental damages,
and even loss of human life.
• An important factor that distinguishes a
software product from its competition
– The feature set tends to converge between similar
products
12
Software Testing
• A dynamic approach to ensuring software
correctness
• Involves sampling the input space, running
the test object, and observing the runtime
behavior
• Among the most widely used approaches in
practice
– Labor intensive, and often consumes more than
50% of development cost
13
Static Analysis
• Reason about the behavior of a program
based on the source code, i.e., without
executing the program
14
Outline
• Introduction
• Basic Concepts
• The Testing Process
• Types of Testing
• Testing Philosophy
• Summary
15
Software Faults, Errors & Failures
16
An Example, Fault and Failure
17
Testing and Debugging
Question: If a system passes all of its tests, is it free of all faults?
18
No!
• Faults may be hiding in portions of code that only rarely
get executed.
• “Testing can only be used to prove the existence of faults
not their absence” or “not all faults have failures”
• Sometimes faults mask each other resulting in no visible
failures.
• However, if we do a good job in creating a test set that
– Covers all functional capabilities of a system
– And covers all code using a metric such as “branch coverage”
• Then having all tests pass increases our confidence that our
system has high quality and can be deployed
19
Looking for Faults
20
Looking for Faults
21
One way forward? Fold
22
Fold? What does that mean?
23
Completeness
24
Continuous Testing
25
Fault, Error, and Failure
public static int numZero (int[] x) {
  // effects: if x == null throw NullPointerException
  // else return the number of occurrences of 0 in x
  int count = 0;
  for (int i = 1; i < x.length; i++) { // Fault: should start searching at 0, not 1
    if (x[i] == 0) {
      count++;
    }
  }
  return count;
}

Test 1: [2, 7, 0]
  Expected: 1, Actual: 1
  Error: i is 1, not 0, on the first iteration
  Failure: none

Test 2: [0, 2, 7]
  Expected: 1, Actual: 0
  Error: i is 1, not 0
  Error propagates to the variable count
  Failure: count is 0 at the return statement
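The fault/error/failure distinction above can be reproduced by running the faulty method next to a corrected one. A minimal sketch (class and helper names are illustrative):

```java
public class NumZeroDemo {
    // Faulty version from the slide: the loop starts at index 1 (the fault).
    static int numZeroFaulty(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++) { // fault: should start at 0
            if (x[i] == 0) count++;
        }
        return count;
    }

    // Corrected version: the loop starts at index 0.
    static int numZero(int[] x) {
        int count = 0;
        for (int i = 0; i < x.length; i++) {
            if (x[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // Test 1: [2, 7, 0] -- the faulty code runs in an erroneous state
        // (i starts at 1), but no failure is observed because the skipped
        // element is not a zero.
        System.out.println(numZeroFaulty(new int[]{2, 7, 0})); // prints 1 (expected 1)
        // Test 2: [0, 2, 7] -- the error propagates to count and the
        // failure becomes visible at the return statement.
        System.out.println(numZeroFaulty(new int[]{0, 2, 7})); // prints 0 (expected 1)
        System.out.println(numZero(new int[]{0, 2, 7}));       // prints 1
    }
}
```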
26
Fault, Error, and Failure
• The state of numZero consists of the values
of the variables x, count, i, and the program
counter.
• Consider what happens with numZero ([2, 7,
0]) and numZero ([0, 7, 2])?
27
The Term Bug
• Bug is used informally
• Sometimes speakers mean fault, sometimes
error, sometimes failure; often the speaker
doesn’t know what they mean!
• This class will try to use words that have
precise, defined, and unambiguous meanings
28
Fault & Failure Model
• Three conditions must be satisfied for a
failure to be observed
29
Static Analysis & Dynamic Testing
30
Test Case
• Test data: data values to be input to the
program under test
31
Testing the System (I)
• Unit Tests
–Tests that cover low-level aspects of a system
–For each module, does each operation perform as expected?
–For method foo(), we would like to see a corresponding method
testFoo()
• Integration Tests
–Tests that check that modules work together in combination
–“Most projects are on schedule until they hit this point”
(Brooks)
• All sorts of hidden assumptions surface when code
written by different developers is combined.
–Lack of integration testing has led to mission failures
(Mars Polar Lander)
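Following the foo()/testFoo() convention, with numZero (from the earlier fault/error/failure slide, corrected) as the unit under test, a framework-free sketch might look like this; real projects would normally use a framework such as JUnit:

```java
public class NumZeroTest {
    // Unit under test (corrected version of the earlier slide example).
    static int numZero(int[] x) {
        int count = 0;
        for (int i = 0; i < x.length; i++) {
            if (x[i] == 0) count++;
        }
        return count;
    }

    // Unit test following the testFoo() naming convention.
    static void testNumZero() {
        if (numZero(new int[]{0, 2, 7}) != 1) throw new AssertionError("single zero");
        if (numZero(new int[]{}) != 0) throw new AssertionError("empty array");
        if (numZero(new int[]{0, 0, 0}) != 3) throw new AssertionError("all zeros");
    }

    public static void main(String[] args) {
        testNumZero();
        System.out.println("all unit tests passed");
    }
}
```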
32
Testing the System (II)
• System Tests
– Tests performed by the developer to ensure that all
major functionality has been implemented
• Have all user stories been implemented, and do they
function correctly?
• Acceptance Tests
–Tests performed by the user to check that the
delivered system meets their needs
•In large, custom projects, developers will be on-site to
install the system and then respond to problems as they arise.
33
Verification & Validation
• Validation: Ensure compliance of a software product
with intended usage (Are we building the right
product?)
34
Quality Attributes
• Static attributes refer to the actual code and
related documentation
– Well-structured, maintainable, and testable code
– Correct and complete documentation
35
Testability
• The degree to which a system or component
facilitates the establishment of test criteria and the
performance of tests to determine whether those
criteria have been met
• The more complex an application, the lower the
testability, i.e., the more effort required to test it
• Design for testability: Software should be designed in
a way such that it can be easily tested
36
Outline
• Introduction
• Basic Concepts
• The Testing Process
• Types of Testing
• Testing Philosophy
• Summary
38
Test & Debug Cycle
[Figure: flowchart of the test-and-debug cycle. Test input is constructed
from the input domain, guided by the operational profile and the test plan,
to form a test case. The program is executed and its observed behavior is
compared against the specification ("Is behavior as expected?"). If yes, the
tester decides whether testing is to be terminated and, if so, stops. If no,
the tester decides whether the cause of the error is to be determined now;
if so, the program is debugged and the tester decides whether the error is
to be fixed now before testing resumes.]
39
An Example
• Program sort:
– Given a sequence of integers, this program sorts
the integers in either ascending or descending
order.
– The order is determined by an input request
character “A” for ascending or “D” for descending.
40
Test plan
1. Execute the program on at least two input
sequences, one with “A” and the other with
“D” as request characters
2. Execute the program on an empty input
sequence
3. Test the program for robustness against
invalid inputs such as “R” typed in as the
request character
4. All failures of the test program should be
reported
41
Test Data
• Test case 1:
– Test data: <“A” 12 -29 32 .>
– Expected output: -29 12 32
• Test case 2:
– Test data: <“D” 12 -29 32 .>
– Expected output: 32 12 -29
• Test case 3:
– Test data: <“A” .>
– Expected output: No input to be sorted in ascending order.
• Test case 4:
– Test data: <“D” .>
– Expected output: No input to be sorted in descending order.
• Test case 5:
– Test data: <“R” 3 17 .>
– Expected output: Invalid request character
• Test case 6:
– Test data: <“A” c 17.>
– Expected output: Invalid number
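A minimal sketch of the sort program under test, written so the test cases above can be executed directly. The method name and message strings follow the expected outputs listed above and are otherwise assumptions; handling of invalid numbers (test case 6) would happen during input parsing and is omitted here:

```java
import java.util.Arrays;

public class SortDemo {
    // Hypothetical implementation of the sort program described above.
    static String sortSequence(String request, int[] nums) {
        // Robustness check for the request character ("A" or "D" only).
        if (!request.equals("A") && !request.equals("D"))
            return "Invalid request character";
        // Empty input sequence.
        if (nums.length == 0)
            return request.equals("A")
                ? "No input to be sorted in ascending order."
                : "No input to be sorted in descending order.";
        int[] out = nums.clone();
        Arrays.sort(out); // ascending order
        if (request.equals("D")) { // reverse in place for descending order
            for (int i = 0; i < out.length / 2; i++) {
                int t = out[i];
                out[i] = out[out.length - 1 - i];
                out[out.length - 1 - i] = t;
            }
        }
        StringBuilder sb = new StringBuilder();
        for (int v : out) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(v);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(sortSequence("A", new int[]{12, -29, 32})); // -29 12 32
        System.out.println(sortSequence("D", new int[]{12, -29, 32})); // 32 12 -29
        System.out.println(sortSequence("A", new int[]{}));
        System.out.println(sortSequence("R", new int[]{3, 17}));       // Invalid request character
    }
}
```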
42
Test Harness
• In software testing, a test harness (or automated test
framework) is a collection of software and test data configured
to test a program unit by running it under varying conditions
and monitoring its behavior and outputs.
43
Test Harness
[Figure: a test harness around the program sort. Harness components
get_input, check_input, call_sort, check_output, print_sequence, and
report_failure draw test cases from a test pool; call_sort passes
request_char, num_item, and in_numbers to sort and receives
sorted_sequence for checking.]
44
Test Oracle
[Figure: the input and the observed behavior of the program under test
are both fed to a test oracle, which judges whether the observed
behavior is correct.]
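One concrete oracle for the sort example earlier in these slides: accept an observed output only if it is non-decreasing and a permutation of the input. A sketch; class and method names are illustrative:

```java
import java.util.Arrays;

public class SortOracle {
    // Oracle for an ascending sort: the observed output must be
    // non-decreasing and a permutation of the input.
    static boolean acceptAscending(int[] input, int[] observed) {
        if (input.length != observed.length) return false;
        // Check the ordering property.
        for (int i = 1; i < observed.length; i++)
            if (observed[i - 1] > observed[i]) return false;
        // Check the permutation property by comparing sorted copies.
        int[] a = input.clone(), b = observed.clone();
        Arrays.sort(a);
        Arrays.sort(b);
        return Arrays.equals(a, b);
    }

    public static void main(String[] args) {
        System.out.println(acceptAscending(new int[]{12, -29, 32}, new int[]{-29, 12, 32})); // true
        System.out.println(acceptAscending(new int[]{12, -29, 32}, new int[]{-29, 32, 12})); // false
    }
}
```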
45
Outline
• Introduction
• Basic Concepts
• The Testing Process
• Types of Testing
• Testing Philosophy
• Summary
46
Multi-Level Testing
Once we have code, we can perform three types of tests:
• Black Box Testing
– Does the system behave as predicted by its
specification?
• Grey Box Testing
– Having some insight into the architecture of the
system, does it behave as predicted by its specification?
• White Box Testing
– Since we have access to most of the code, let’s make
sure we are covering all aspects of the code:
statements, branches, …
47
Source of Test Generation
• Black-box testing: Tests are generated from
informally or formally specified requirements
– Does not require access to source code
– Boundary-value analysis, equivalence partitioning,
random testing, pairwise testing
• White-box testing: Tests are generated from
source code.
– Must have access to source code
– Structural testing, path testing, data flow testing
48
Black Box Testing
• Process
– Write at least one test case per functional
capability
– Iterate on code until all tests pass
• Need to automate this process as much as possible
50
Black Box Categories
• Functionality
– User input validation (based on the specification)
– Output results
– State transitions
• Are there clear states in the system in which the system
is supposed to behave differently based on the state?
51
Grey Box Testing
• Use knowledge of the system’s architecture to create a more
complete set of black box tests
– Verifying auditing and logging information
• For each function, is the system really updating all
internal state correctly?
• Data destined for other systems
• System-added information (timestamp, checksum, etc.)
– “Looking for scraps”
• Is the system correctly cleaning up after itself?
• Temporary files, memory leaks, data duplication/
deletion
52
White-Box Testing
• Writing test cases with complete knowledge of the code
– Format is the same: input, expected output, actual
output
• But now we are looking at
– Code coverage (more on this in a minute)
– Proper error handling
– Working as documented (is method “foo” thread
safe?)
– Proper handling of resources
• How does the software behave when resources
become constrained?
53
Code Coverage (I)
• A criterion for knowing when white-box testing is “complete”
– Statement coverage
• Write tests until all statements have been executed
– Branch Coverage (edge coverage)
• Write tests until each edge in a program’s control
flow graph has been executed at least once (covers
true/false conditions)
– Condition coverage
• Like branch coverage but with more attention paid
to the conditionals (if compound conditionals,
ensure that all combinations have been covered)
54
Code Coverage (II)
• A criterion for knowing when white-box testing is “complete”
• Path coverage
– Write tests until all paths in program’s control flow graph
have been executed multiple times as dictated by heuristics
e.g.
• For each loop, write a test case that executes the loop
– Zero times (Skips the loop)
– Exactly one time
– More than once (exact number depends on context)
55
A sample Program
1. public int P()
2. {
3.   int x, y;
4.   x = Console.Read(); y = Console.Read();
5.   while (x > 10) {
6.     x = x - 10;
7.     if (x == 10) break;
8.   }
9.   if (y < 20 && (x % 2) == 0) {
10.    y = y + 20;
11.  } else {
12.    y = y - 20;
13.  }
14.  return 2 * x + y;
15. }
56
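For experimentation, P can be transcribed into Java with x and y passed as parameters instead of read from the console (an assumption made for testability; the class name is illustrative). The two inputs in main together execute every statement of P, including the break and both branches of the if:

```java
public class PDemo {
    // Java transcription of the sample program P; x and y are passed in
    // as parameters rather than read from the console.
    static int p(int x, int y) {
        while (x > 10) {
            x = x - 10;
            if (x == 10) break;
        }
        if (y < 20 && (x % 2) == 0) {
            y = y + 20;
        } else {
            y = y - 20;
        }
        return 2 * x + y;
    }

    public static void main(String[] args) {
        // (22, 10): loop runs twice and exits via its condition
        // (x: 22 -> 12 -> 2), then takes the then-branch (x even, y < 20).
        System.out.println(p(22, 10)); // 2*2 + 30 = 34
        // (30, 30): loop exits via the break (x: 30 -> 20 -> 10),
        // then takes the else-branch (y not < 20).
        System.out.println(p(30, 30)); // 2*10 + 10 = 30
        // (4, 10): loop body is skipped entirely -- the zero-iteration
        // case from the loop-coverage heuristic above.
        System.out.println(p(4, 10));  // 2*4 + 30 = 38
    }
}
```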
P’s Control Flow Graph (CFG)
57
White-Box Testing Criteria
• Statement Coverage
– Create a test set T such that
• By Executing P for each t in T
• Each elementary statement of P is executed at least once
58
All statement coverage of P
59
All statement coverage of P
60
All statement coverage of P
61
White-Box Testing Criteria
• Edge Coverage
• Select a test set T such that
– By executing P for each t in T
– Each edge of P’s control flow graph is traversed at least
once
62
All-Edge Coverage of P
63
All-Edge Coverage of P
64
All-Edge Coverage of P
65
White-box Testing Criteria
• Condition Coverage
• Select a test set T such that
– By executing P for each t in T
– Each edge of P’s control flow graph is traversed at
least once
– And all possible values of the constituents of
compound conditions are exercised at least once
66
All- Conditions Coverage of P
67
All- Conditions Coverage of P
68
All- Conditions Coverage of P
69
All- Conditions Coverage of P
70
All- Conditions Coverage of P
71
Black-box testing
• Boundary Value Analysis: a software testing
technique in which tests are designed to include
representatives of boundary values in a range.
• Random testing: a black-box software testing
technique in which programs are tested by generating
random, independent inputs.
• Pairwise testing: a combinatorial method of
software testing that, for each pair of input parameters
to a system, tests all possible discrete combinations of
those two parameters.
• Equivalence partitioning: next slides
72
Pairwise Testing - Example

Parameters and their values:
  Enabled: True, False
  Choice type: 1, 2, 3
  Category: a, b, c, d

‘Enabled’, ‘Choice type’, and ‘Category’ have a choice range of 2, 3, and 4,
respectively. An exhaustive test would involve 24 tests (2 x 3 x 4).
Multiplying the two largest values (3 and 4) indicates that a pairwise test
set involves 12 tests. The PICT (Pairwise Independent Combinatorial
Testing) tool generated the following pairwise test cases:

  Enabled  Choice type  Category
  True     3            a
  True     1            d
  False    1            c
  False    2            d
  True     2            c
  False    2            a
  False    1            a
  False    3            b
  True     2            b
  True     3            d
  False    3            c
  True     1            b
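The pairwise property of a generated test set can be checked mechanically: every value pair for every pair of parameters must appear in at least one test. A brute-force sketch (class and method names are illustrative), applied to the 12 PICT-generated tests above:

```java
public class PairwiseCheck {
    // Returns true iff, for every pair of parameters (a, b) and every
    // pair of values from their domains, some test covers that pair.
    static boolean coversAllPairs(String[][] domains, String[][] tests) {
        for (int a = 0; a < domains.length; a++) {
            for (int b = a + 1; b < domains.length; b++) {
                for (String va : domains[a]) {
                    for (String vb : domains[b]) {
                        boolean covered = false;
                        for (String[] t : tests) {
                            if (t[a].equals(va) && t[b].equals(vb)) {
                                covered = true;
                                break;
                            }
                        }
                        if (!covered) return false;
                    }
                }
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String[][] domains = {
            {"True", "False"},   // Enabled
            {"1", "2", "3"},     // Choice type
            {"a", "b", "c", "d"} // Category
        };
        // The 12 pairwise test cases from the slide.
        String[][] tests = {
            {"True", "3", "a"}, {"True", "1", "d"}, {"False", "1", "c"},
            {"False", "2", "d"}, {"True", "2", "c"}, {"False", "2", "a"},
            {"False", "1", "a"}, {"False", "3", "b"}, {"True", "2", "b"},
            {"True", "3", "d"}, {"False", "3", "c"}, {"True", "1", "b"}
        };
        System.out.println(coversAllPairs(domains, tests)); // true
    }
}
```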
73
Boundary Value Analysis (BVA) - Example
Date field accepting 1-31:
  0  - boundary value just below the lower boundary
  1  - boundary value just above the lower boundary (first valid value)
  31 - boundary value just below the upper boundary (last valid value)
  32 - boundary value just above the upper boundary
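A hypothetical helper that enumerates the classic BVA values for a closed integer range might look like this (the class and method names are illustrative):

```java
import java.util.Arrays;

public class BoundaryValues {
    // For a closed integer range [min, max], return the boundary test
    // values: just below and just above each end of the range.
    static int[] boundaryValues(int min, int max) {
        return new int[]{min - 1, min, max, max + 1};
    }

    public static void main(String[] args) {
        // Date field accepting 1..31, as in the example above.
        System.out.println(Arrays.toString(boundaryValues(1, 31))); // [0, 1, 31, 32]
    }
}
```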
74
Equivalence partitioning
• Divides the input data of a software unit into partitions
of equivalent data from which test cases can be
derived.
• This technique aims to define test cases that uncover
classes of errors, thereby reducing the total number
of test cases that must be developed.
• Equivalence partitioning is usually applied to the
inputs of a tested component.
• Advantage:
– Reduction in the time required for testing the
software due to the smaller number of test cases.
75
Equivalence Partitioning cont.
• Test each partition once (the assumption is that any
input in a partition is equivalent)
• Example - Date (1-31): it takes only one value from each
partition to test.

  Invalid partition: 0, -2, -4, -6, ...
  Valid partition:   1, 2, 3, ..., 31
  Invalid partition: 32, 33, 34, ...
76
Equivalence Partitioning cont.
  Invalid partition: 0-17
  Valid partition:   18-59
  Invalid partition: 60-65
  Valid partition:   66-80
  Invalid partition: 81 and above (81, 82, 83, ...)
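Using the partition boundaries read off the table above (an assumption about its intent: two valid ranges, 18-59 and 66-80, everything else invalid), a sketch of a classifier plus one representative test per partition could look like:

```java
public class EquivalenceClasses {
    // Partition boundaries inferred from the table above (an assumption):
    // invalid 0-17, valid 18-59, invalid 60-65, valid 66-80, invalid 81+.
    static String classify(int value) {
        if (value >= 18 && value <= 59) return "valid";
        if (value >= 66 && value <= 80) return "valid";
        return "invalid";
    }

    public static void main(String[] args) {
        // One representative per partition suffices under the
        // equivalence assumption.
        int[] representatives = {10, 30, 62, 70, 90};
        for (int r : representatives)
            System.out.println(r + " -> " + classify(r));
    }
}
```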
77
White-box Testing
1. Structural Testing:
– Tests are derived from the knowledge of the
software's structure or internal implementation
(source code)
• Structural Testing Techniques:
– Statement Coverage
• Exercising all programming statements with minimal tests
– Branch Coverage
• Running a series of tests to ensure that all branches are tested at least
once
– Path Coverage
• Testing all possible paths, which implies that every statement and
branch is covered
78
White-box Testing Cont.
2. Path Testing:
80
White-box Testing Cont.
81
White-box Testing Cont.
82
White-box Testing Cont.
83
Test Automation
84
Test Automation Cont.
85
Continuous Integration (CI)
86
Classifier C2: Life Cycle Phases
PHASE TECHNIQUE
Coding Unit Testing
Integration Integration Testing
System Integration System Testing
Maintenance Regression Testing
Postsystem, pre-release Beta Testing
Classifier C3: Goal Directed Testing
GOAL TECHNIQUE
Features Functional Testing
Security Security Testing
Invalid inputs Robustness Testing
Vulnerabilities Penetration Testing
Performance Performance Testing
Compatibility Compatibility Testing
Classifier C4: Artifact Under Test
ARTIFACT TECHNIQUE
OO Software OO Testing
Web applications Web Testing
Real-Time software Real-time testing
Concurrent software Concurrency testing
Database applications Database testing
Cost of NOT testing
• Testing is the most time consuming and
expensive part of software development
• Not testing is even more expensive!
• If we have too little testing effort early, the
cost of testing increases
• Planning for testing after development is
expensive (time)
90
Summary: Why Do We Test Software?
• Improve quality
• Reduce cost
• Preserve customer satisfaction
91
Outline
• Introduction
• Basic Concepts
• The Testing Process
• Types of Testing
• Testing Philosophy
• Summary
92
Philosophy
• Level 0: Testing is the same as debugging.
• Level 1: Testing aims to show correctness
• Level 2: Testing aims to show the program
under test doesn't work
• Level 3: Testing aims to reduce the risk of
using the software
• Level 4: Testing is a mental discipline that
helps develop higher quality software
93
Level 0 Thinking
• Testing is the same as debugging
94
Level 1 Thinking
• Purpose is to show correctness
• Correctness is impossible to achieve
• What do we know if no failures?
– Good software or bad tests?
• Test engineers have no:
– Strict goal
– Real stopping rule
– Formal test technique
– Test managers are powerless
95
Level 2 Thinking
• Purpose is to show failures
• Looking for failures is a negative activity
• Puts testers and developers into an adversarial
relationship (against each other)
• What if there are no failures?
96
Level 3 Thinking
• Testing can only show the presence of failures
• Whenever we use software, we incur some risk
• Risk may be small and consequences unimportant
• Risk may be great and consequences disastrous
• Testers and developers cooperate to reduce risk
97
Level 4 Thinking
A mental discipline that increases quality
98
Where Are You?
Are you at level 0, 1, or 2?
Is your organization working at level 0, 1, or 2?
Or 3?
99
Make Testing fun
100
Outline
• Introduction
• Basic Concepts
• The Testing Process
• Types of Testing
• Testing Philosophy
• Summary
101
Summary
• Quality is the central concern of software engineering.
• Testing is the single most widely used approach to ensuring
software quality.
– Validation and verification can occur in any phase
• Testing consists of test generation, test execution, and test
evaluation.
• Testing can show the presence of failures, but not their
absence.
• Testing of code involves
– Black-box, grey-box, and white-box tests
– All require: input, expected output, actual output
– White-box additionally looks for code coverage
• Testing of systems involves
– Unit tests, integration tests, system tests, and acceptance tests
102