UNIT NO 3
Software Testing Strategies and Methods
Marks: 20. Software Engineering: A Practitioner's Approach, 7/e
by Roger S. Pressman
Modified by Mr. Arvind S. Sardar (Lecturer) MIT Polytechnic Rotegaon
Software Testing
Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user. Testing is a set of activities that can be planned in advance and conducted systematically.
What Testing Shows
- errors
- requirements conformance
- performance
- an indication of quality
Testing Objectives
- Testing is a process of executing a program with the intent of finding an error.
- A good test case is one that has a high probability of finding an as-yet-undiscovered error.
- A successful test is one that uncovers an as-yet-undiscovered error.
Testing Principles
- All tests should be traceable to customer requirements.
- Tests should be planned long before testing begins.
- The Pareto principle applies to software testing.
- Testing should begin "in the small" and progress toward testing "in the large."
- Exhaustive testing is not possible.
- To be most effective, testing should be conducted by an independent third party.
Testability
- Operability: it operates cleanly.
- Observability: the results of each test case are readily observed.
- Controllability: the degree to which testing can be automated and optimized.
- Decomposability: testing can be targeted.
- Simplicity: reduce complex architecture and logic to simplify tests.
- Stability: few changes are requested during testing.
- Understandability: of the design.
What is a Good Test?
- A good test has a high probability of finding an error.
- A good test is not redundant.
- A good test should be "best of breed."
- A good test should be neither too simple nor too complex.
Test Plan, Test Case and Test Data
Test Plan: A test plan is a document detailing a systematic approach to testing a system such as a machine or software. A test plan documents the strategy that will be used to verify and ensure that a product or system meets its design specifications and other requirements. A test plan can be defined as a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.
Test Plan, Test Case and Test Data
Test Data: test data is data which has been specifically identified for use in tests, typically of a computer program. Test data may be used in a confirmatory way, to verify that a given set of inputs to a given function produces an expected result, or to challenge the ability of the program to respond to unusual, extreme, exceptional, or unexpected input.
Test Plan, Test Case and Test Data
Test Cases: a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Formal test cases: a formal written test case is characterized by a known input and by an expected output, which is worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.
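To make this concrete, here is a minimal sketch of a formal test case in Python's unittest style, assuming a hypothetical discount() function; the known input and expected output are worked out before the test runs:

```python
import unittest

def discount(price, percent):
    # Hypothetical function under test: apply a percentage discount.
    if price < 0 or not 0 <= percent <= 100:
        raise ValueError("invalid input")
    return price * (1 - percent / 100)

class DiscountFormalTest(unittest.TestCase):
    def test_known_input_expected_output(self):
        # Precondition: a valid price and percentage (the known input).
        # Postcondition: the discounted price (the expected output,
        # computed by hand before the test is executed).
        self.assertAlmostEqual(discount(200.0, 10), 180.0)

    def test_exceptional_input(self):
        # Challenge the program with unexpected input.
        with self.assertRaises(ValueError):
            discount(-5.0, 10)

if __name__ == "__main__":
    unittest.main()
```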
Test Plan, Test Case and Test Data
Informal test cases: test cases may be written based on the accepted normal operation of programs of a similar class, or not written at all, with the activities and results reported after the tests have been run.
Test Plan, Test Case and Test Data
Test case format: used to verify the correct behavior, functionality, and features of an application. A test case typically includes (a sketch of this format as a record type follows the list):
- Test case ID
- Test case description
- Test step or order-of-execution number
- Related requirement(s)
- Depth / test category
- Author
- Check boxes for whether the test can be or has been automated
- Pass/Fail
- Remarks
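As an illustration only, the format above could be captured as a simple record type; every field name here mirrors a column from the list and is otherwise hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCaseRecord:
    case_id: str          # Test case ID
    description: str      # Test case description
    steps: list           # Test steps in order of execution
    requirement: str      # Related requirement(s)
    category: str         # Depth / test category
    author: str
    automated: bool       # Can be / has been automated
    result: str = "Not Run"   # Pass / Fail
    remarks: str = ""

tc = TestCaseRecord(
    case_id="TC-001",
    description="Verify login with valid credentials",
    steps=["Open login page", "Enter valid user/password", "Click Login"],
    requirement="REQ-AUTH-01",
    category="Functional",
    author="A. Tester",
    automated=True,
)
print(tc)
```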
Strategic Approach
A strategy provides a road map that describes the steps to be conducted for testing, when these steps are planned and undertaken, and how much effort, time, and resources are required. A strategy therefore incorporates:
1. Test planning
2. Test case design
3. Test execution
4. Resultant data collection and evaluation
A strategy must be flexible enough to promote a customized testing approach, yet rigid enough to encourage reasonable planning and management tracking.
Strategic Approach Characteristics
- To perform effective testing, you should conduct effective technical reviews. By doing this, many errors will be eliminated before testing commences.
- Testing begins at the component level and works "outward" toward the integration of the entire computer-based system.
- Different testing techniques are appropriate for different software engineering approaches and at different points in time.
- Testing is conducted by the developer of the software and (for large projects) an independent test group.
- Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
V&V
Verification refers to the set of tasks that ensure that software correctly implements a specific function: does the product meet its specifications? Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements: does the product perform as desired? Boehm [Boe81] states this another way:
Verification: "Are we building the product right?"
Validation: "Are we building the right product?"
Misconceptions
- The developer of software should do no testing at all.
- The software should be "tossed over the wall" to strangers who will test it mercilessly.
- Testers get involved with the project only when the testing steps are about to begin.
Who Tests the Software?
- The developer: understands the system, but will test "gently" and is driven by "delivery."
- The independent tester: must learn about the system, but will attempt to break it and is driven by quality.
Testing Strategy
(Figure: the testing strategy pairs each development activity with a testing stage: code generation with unit testing, design modeling with integration testing, analysis modeling with validation testing, and system engineering with system testing.)
Testing Strategy
- We begin by "testing-in-the-small" and move toward "testing-in-the-large."
- For conventional software: the module (component) is our initial focus; integration of modules follows.
- For OO software: our focus when "testing in the small" changes from an individual module (the conventional view) to an OO class that encompasses attributes and operations and implies communication and collaboration.
Strategic Issues
- Specify product requirements in a quantifiable manner long before testing commences.
- State testing objectives explicitly.
- Understand the users of the software and develop a profile for each user category.
- Develop a testing plan that emphasizes rapid-cycle testing.
- Build robust software that is designed to test itself.
- Use effective technical reviews as a filter prior to testing.
- Conduct technical reviews to assess the test strategy and test cases themselves.
- Develop a continuous-improvement approach for the testing process.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design: the module. (Figure: the software engineer develops test cases for the module to be tested and examines the results.)
Unit Testing
Test cases for the module to be tested target its:
- interface
- local data structures
- boundary conditions
- independent paths
- error-handling paths
Unit Test Environment
(Figure: a driver passes test cases to the module under test; stubs stand in for its subordinate modules; the module's interface, local data structures, boundary conditions, independent paths, and error-handling paths are exercised, and the results are collected.)
Unit Test Environment
Driver: nothing more than a main program that accepts test data, passes such data to the component, and prints relevant results.
Stub: stubs serve to replace modules that are subordinate to (called by) the component to be tested.
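A minimal sketch of this environment, assuming a hypothetical compute_tax() component whose subordinate rate-lookup module is replaced by a stub:

```python
def lookup_rate_stub(region):
    # Stub: stands in for a subordinate module (e.g., a real rate lookup)
    # and returns a canned answer instead of doing real work.
    return 0.10

def compute_tax(amount, region, lookup_rate=lookup_rate_stub):
    # Component under test; in production it would call the real lookup.
    return amount * lookup_rate(region)

# Driver: a main program that accepts test data, passes it to the
# component, and prints relevant results.
if __name__ == "__main__":
    for amount, region, expected in [(100.0, "EU", 10.0), (0.0, "EU", 0.0)]:
        result = compute_tax(amount, region)
        verdict = "PASS" if abs(result - expected) < 1e-9 else "FAIL"
        print(f"compute_tax({amount}, {region!r}) = {result}  [{verdict}]")
```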
Integration Testing Strategies
Integration testing is a systematic technique for constructing the software architecture while conducting tests to uncover errors associated with interfacing. The objective is to build the program structure that has been dictated by design.
Options:
- The big bang approach: all components are combined in advance.
- An incremental construction strategy: the program is constructed and tested in small increments, where errors are easier to isolate.
Top Down Integration
Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
Top Down Integration
The integration process is performed in a series of five steps (a toy sketch follows the list):
1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to it.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
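A toy sketch of these steps with hypothetical parse/report modules; the same test subset is re-run after each stub is replaced:

```python
def parse_stub(raw):                 # stub for a subordinate module
    return {"value": 1}              # canned answer

def report_stub(data):               # stub for another subordinate module
    return "stub-report"

def parse_real(raw):                 # the real component, integrated first
    return {"value": int(raw)}

def main_control(raw, parse, report):
    # Main control module: the top of the control hierarchy.
    return report(parse(raw))

def run_tests(parse, report):
    # Regression: the same subset of tests re-run after each replacement.
    assert main_control("7", parse, report) is not None

run_tests(parse_stub, report_stub)   # step 1: all subordinates stubbed
run_tests(parse_real, report_stub)   # steps 2-4: one stub replaced, retest
print("top-down increments passed")
```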
Top Down Integration
(Figure: the top module is tested with stubs; stubs are replaced one at a time, "depth first"; as new modules are integrated, some subset of tests is re-run.)
Bottom-Up Integration
Bottom-up integration begins construction and testing with atomic modules. A bottom-up integration strategy may be implemented with the following steps (a toy sketch follows the list):
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
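A toy sketch of a cluster and its driver, using two hypothetical low-level components:

```python
def normalize(text):                 # atomic module 1
    return text.strip().lower()

def tokenize(text):                  # atomic module 2
    return text.split()

def cluster_driver():
    # Driver: coordinates test case input and output for the cluster
    # formed by combining the two low-level components.
    cases = [("  Hello World ", ["hello", "world"]), ("A", ["a"])]
    for raw, expected in cases:
        got = tokenize(normalize(raw))
        print(f"{raw!r} -> {got}", "PASS" if got == expected else "FAIL")

cluster_driver()
# Once the cluster is verified, the driver is removed and the cluster is
# combined with others, moving upward in the program structure.
```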
Bottom-Up Integration
(Figure: worker modules are grouped into builds (clusters) and integrated; drivers are replaced one at a time, "depth first.")
Sandwich Testing
Sandwich testing combines the two strategies. (Figure: top modules are tested with stubs while worker modules are grouped into builds (clusters) and integrated from the bottom up.)
Regression Testing
Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools.
Regression Testing
The regression test suite (the subset of tests to be executed) contains three different classes of test cases (one possible organization is sketched after the list):
- A representative sample of tests that will exercise all software functions.
- Additional tests that focus on software functions that are likely to be affected by the change.
- Tests that focus on the software components that have been changed.
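One possible way to organize these three classes, sketched with hypothetical pytest marker names (custom markers would need to be registered in pytest.ini); a change-focused run might then be `pytest -m "affected or changed"`:

```python
import pytest

@pytest.mark.sample      # representative sample of all software functions
def test_checkout_happy_path():
    assert True          # placeholder body for the sketch

@pytest.mark.affected    # functions likely to be affected by the change
def test_pricing_rounding():
    assert True          # placeholder body for the sketch

@pytest.mark.changed     # components that have actually been changed
def test_new_discount_rule():
    assert True          # placeholder body for the sketch
```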
Smoke Testing
Smoke testing is a common approach for creating daily builds for product software. Smoke testing steps (a minimal sketch follows the list):
1. Software components that have been translated into code are integrated into a "build." A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
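A minimal sketch of what a daily smoke test might look like; FakeBuild and its methods are stand-ins for the day's real build:

```python
class FakeBuild:
    # Stand-in for today's integrated build.
    def start(self):          return True
    def load(self, path):     return b"data"
    def process(self, items): return [2 * x for x in items]

def smoke_test(build):
    # Each check targets a "show stopper": an error that would keep the
    # build from performing its function at all.
    assert build.start() is True, "build does not start"
    assert build.load("sample.dat") is not None, "build cannot load data"
    assert build.process([1, 2]) == [2, 4], "core function broken"

if __name__ == "__main__":
    smoke_test(FakeBuild())
    print("smoke test passed: build is stable enough for further testing")
```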
Smoke Testing
Benefits when smoke testing is applied to complex, time-critical software engineering projects:
- Integration risk is minimized: smoke tests are conducted daily.
- The quality of the end product is improved: because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects.
- Error diagnosis and correction are simplified: errors uncovered during smoke testing are likely to be associated with new software increments.
- Progress is easier to assess: with each passing day, more of the software has been integrated and more has been demonstrated to work.
Alpha and Beta Testing
1. The alpha test is conducted at the developer's site by a customer. The software is used in a natural setting with the developer "looking over the shoulder" of the user and recording errors and usage problems. Alpha tests are conducted in a controlled environment.
2. The beta test is conducted at one or more customer sites by the end user of the software. Unlike alpha testing, the developer is generally not present; the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals.
System Testing
Software is incorporated with other system elements (e.g., hardware, people, information), and a series of system integration and validation tests is conducted. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system.
Recovery Testing
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic (performed by the system itself), reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness. If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine whether it is within acceptable limits.
Security Testing
Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration. During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.
Stress Testing
Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. Examples (a toy load-generation sketch follows):
- Special tests may be designed that generate ten interrupts per second, when one or two is the average rate.
- Input data rates may be increased by an order of magnitude to determine how input functions will respond.
- Test cases that require maximum memory or other resources are executed.
- Test cases that may cause thrashing in a virtual operating system are designed.
- Test cases that may cause excessive hunting for disk-resident data are created.
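As one hedged illustration, the sketch below pushes an order-of-magnitude-larger volume than "normal" through a producer/consumer queue and times it; the workload and limits are invented for the example:

```python
import queue
import threading
import time

q = queue.Queue(maxsize=1000)        # small buffer forces backpressure

def producer(n):
    for i in range(n):
        q.put(i, timeout=1)          # fails fast if the system stalls

def consumer(n):
    for _ in range(n):
        q.get(timeout=1)

N = 100_000                          # an order of magnitude beyond normal
start = time.time()
threads = [threading.Thread(target=producer, args=(N,)),
           threading.Thread(target=consumer, args=(N,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"processed {N} items in {time.time() - start:.2f}s")
```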
Performance Testing
Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps in the testing process. Even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted. Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.
White Box Testing
White-box testing is a test-case design method that uses the control structure of the procedural design to derive test cases. The "status of the program" may be examined at various points to determine whether the expected or asserted status corresponds to the actual status. Using white-box methods, the software engineer can derive test cases that (a small sketch follows the list):
- guarantee that all independent paths within a module have been exercised at least once,
- exercise all logical decisions on their true and false sides,
- execute all loops at their boundaries and within their operational bounds, and
- exercise internal data structures to ensure their validity.
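A small sketch of path-oriented, white-box test cases for an invented function with two decisions and a loop:

```python
def classify(values):
    # Two decisions and a loop: white-box tests must exercise each decision
    # on its true and false sides and the loop at its boundaries.
    if not values:
        return "empty"
    total = 0
    for v in values:
        total += v
    return "positive" if total > 0 else "non-positive"

# Path-oriented test cases:
assert classify([]) == "empty"                 # loop executed zero times
assert classify([5]) == "positive"             # loop once, decision true
assert classify([-1, -2]) == "non-positive"    # loop twice, decision false
print("all independent paths exercised")
```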
Black Box Testing
Black-box testing alludes to tests that are conducted at the software interface. Black-box tests are designed to uncover errors, to demonstrate that software functions are operational, that input is properly accepted and output is correctly produced, and that the integrity of external information (e.g., a database) is maintained. A black-box test examines some fundamental aspect of a system with little regard for the internal logical structure of the software.
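A small black-box sketch: the tests below are derived from a stated specification of a hypothetical sort_unique() function and never look at its internals:

```python
def sort_unique(xs):
    # Specification: return the distinct elements of xs in ascending order.
    return sorted(set(xs))

# Tests derived from the specification, not from the code:
assert sort_unique([]) == []                    # valid empty input
assert sort_unique([3, 1, 3]) == [1, 3]         # duplicates removed
assert sort_unique([2, 2, 2]) == [2]            # all-equal input
print("interface behaves as specified")
```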
Debugging: A Diagnostic Process
Debugging
Software testing is a process that can be systematically planned and specified. Test case design can be conducted, a strategy can be defined, and results can be evaluated against prescribed expectations. Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error. Although debugging can and should be an orderly process, it is still very much an art.
Debugging Process
The debugging process will always have one of two outcomes: (1) the cause will be found and corrected, or (2) the cause will not be found.
The Debugging Process
Debugging Effort
Debugging effort divides into two parts: the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.
Characteristics, Symptoms & Causes
- The symptom and the cause may be geographically separated.
- The symptom may disappear when another problem is fixed.
- The cause may be due to a combination of non-errors.
- The cause may be due to a system or compiler error.
- The cause may be due to assumptions that everyone believes.
- The symptom may be intermittent.
Consequences of Bugs
(Figure: bug severity spans a spectrum from mild, annoying, and disturbing through serious, extreme, and catastrophic; damage can be infectious.)
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
Debugging Techniques
- brute force / testing
- backtracking
- induction
- deduction
Brute Force
Brute force is the most common and least efficient method for isolating the cause of a software error. It is applied when all else fails, using a "let the computer find the error" philosophy.
Backtracking
Backtracking is used successfully in small programs. Beginning at the site where a symptom has been uncovered, the source code is traced backward (manually) until the site of the cause is found. Unfortunately, as the number of source lines increases, the number of potential backward paths may become unmanageably large.
Cause Elimination
Cause elimination is manifested by induction or deduction and introduces the concept of binary partitioning. Data related to the error occurrence are organized to isolate potential causes.
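A toy sketch of binary partitioning over failing input data, assuming a deterministic, single cause and a hypothetical fails() observation:

```python
def fails(data):
    # Stand-in for running the program on `data` and observing the symptom.
    return "corrupt" in data

def isolate(data):
    # Repeatedly halve the input, keeping whichever half still shows the
    # symptom, until the smallest failure-inducing piece remains.
    while len(data) > 1:
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        data = left if fails(left) else right
    return data

print(isolate(["a", "b", "corrupt", "d"]))   # -> ['corrupt']
```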
Correcting the Error
- Is the cause of the bug reproduced in another part of the program? In many situations, a program defect is caused by an erroneous pattern of logic that may be reproduced elsewhere.
- What "next bug" might be introduced by the fix I'm about to make? Before the correction is made, the source code (or, better, the design) should be evaluated to assess coupling of logic and data structures.
- What could we have done to prevent this bug in the first place? This question is the first step toward establishing a statistical software quality assurance approach. If you correct the process as well as the product, the bug will be removed from the current program and may be eliminated from all future programs.
Final Thoughts
- Think before you act to correct.
- Use tools to gain additional insight.
- If you're at an impasse, get help from someone else.
- Once you correct the bug, use regression testing to uncover any side effects.