Test Case Design Methods
2.1 Introduction
A rich variety of test case design methods has evolved for software. These
methods provide the developer with a systematic approach to testing. More
important, these methods provide a mechanism that can help ensure the
completeness of tests and provide the highest likelihood of uncovering errors in
software.
Any engineered product can be tested in one of two ways: (1) knowing the
specified function that a product has been designed to perform, tests can be
conducted that demonstrate each function is fully operational while at the same
time searching for errors in each function; (2) knowing the internal workings of
a product, tests can be conducted to ensure that "all gears mesh", that is, internal
operations are performed according to specifications and all internal components
have been adequately exercised. The first test approach is called black box testing
and the second white box testing.
Black box tests are conducted at the software interface level. Although they are
designed to uncover errors, black box tests are also used to demonstrate that
software functions are operational, that input is properly accepted and output is
correctly produced, and that the integrity of external information is maintained. A
black box test examines some fundamental aspect of a system with little regard
for the internal logic structure of the software. Black box testing is also called
behavioural testing; it focuses on the functional requirements of the software.
White box testing of software involves close examination of procedural detail.
Logical paths through the software are tested by providing test cases that exercise
specific sets of conditions and/or loops. The "status of the program" may be
examined at various points to determine whether the expected or asserted status
corresponds to the actual status.
The attributes of both black box and white box testing can be combined to provide
an approach that validates the software interface and selectively ensures that the
internal workings of the software are correct.
A test design specification defines the approach for testing: the test techniques to
be used, the test case design methods to be used, the test environment, etc. Test
specifications are documented in the test plan.
A test suite is a framework that provides a way of grouping test cases. A test
suite may consist of the test cases that test a specific functionality. For instance, the
test cases for testing a specific user interface (or web page) can be grouped together
to form a test suite. Each test suite and its test cases should be uniquely identifiable.
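As an illustrative sketch (the page and test names are made up), test cases for one user interface can be grouped into a uniquely identifiable suite with Python's unittest:

```python
import unittest

class LoginPageTests(unittest.TestCase):
    """Test cases for one hypothetical user interface (a login page)."""

    def test_valid_credentials_accepted(self):
        self.assertTrue(True)  # placeholder for the real check

    def test_invalid_password_rejected(self):
        self.assertTrue(True)  # placeholder for the real check

# group the test cases for this page into one test suite
suite = unittest.TestSuite()
suite.addTest(LoginPageTests("test_valid_credentials_accepted"))
suite.addTest(LoginPageTests("test_invalid_password_rejected"))
```

The suite can then be handed to any unittest runner, and the class/method names act as the unique identifiers for the suite and its test cases.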
Page 1 of 13
divine QA Testing
Depending on the complexity of the software system and the level of testing
(Unit, Integration, System and Acceptance) some or all of the items stated above
could be included in a test case template for recording test case design.
Type 1:
Test case Name: Mnemonic identifier
Test Id: Numeric identifier
Test suite Id: Test suite(s) identifier (numeric)
Feature: System/application feature being tested
Priority: Priority assigned to the test
Environment: Hardware and software required
Duration: Test schedule
Effort: Person hours
Set up: List steps needed to set up test
Test step: Test steps and sub test steps for starting, conducting and
stopping the test. For each test step the following items
shall be defined:
<Test step No.> <Step description> <Input/Input action>
<Output/Outcome> <Result> <Bug identification Id>
Feature Pass/Fail: Pass: Output and outcome expected for a complete pass
Fail: Output and outcome expected for failure
Partial pass: Output and outcome expected for a partial pass
Type 2:
Test case Name: Mnemonic identifier
Test case Id: Numeric identifier
Test suite Id: Test suite(s) identifier
Purpose: What feature it tests
Priority: Priority assigned to the test
Input: Data input/input actions
Output: Output and outcome expected (Pass/Fail/Partial Pass)
Environment: Hardware and software required
Procedure: <Set up procedure>
<Test steps>
<Test stop and wrap up steps>
Constraints: Any constraint associated with the test
Dependencies: Interdependencies with other test cases
Type 3:
Test case Id: Numeric identifier
Test suite Id: Test suite(s) identifier
Purpose: What feature it tests
Priority: Priority assigned to the test
Input: Data input/input actions
Output: Output and outcome expected (Pass/Fail)
Test step: Test steps and sub test steps for starting, conducting
and stopping the test
Constraints: Constraints on the test
The environment requirements and test setup
procedure are described at the test suite level only
Type 4:
Test case Id: Numeric identifier
Test suite Id: Test suite(s) identifier
Purpose: What feature it tests
Test step: Test steps and sub test steps for starting, conducting
and stopping the test
Refer to item A) for test step description
Feature Pass/Fail: Pass: Output and outcome expected
Fail: Output and outcome expected
The environment requirements and test setup procedure are described at the test
suite level only.
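The templates above are record layouts; as a sketch, the Type 2 template could be captured in code as a simple record (the field names are adapted from the template and the sample values are invented, not prescribed):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Fields mirror the Type 2 test case template."""
    name: str                # mnemonic identifier
    case_id: int             # numeric identifier
    suite_id: int            # test suite identifier
    purpose: str             # what feature it tests
    priority: int            # priority assigned to the test
    input_data: str          # data input / input actions
    expected: str            # output and outcome expected
    environment: str         # hardware and software required
    procedure: list = field(default_factory=list)    # set up, test, wrap-up steps
    constraints: str = ""                            # constraints on the test
    dependencies: list = field(default_factory=list) # related test case ids

tc = TestCase(
    name="LOGIN-OK", case_id=1, suite_id=10,
    purpose="Verify login with valid credentials",
    priority=1, input_data="valid user id and password",
    expected="user is logged in (Pass)",
    environment="any supported browser",
    procedure=["open login page", "enter credentials", "submit", "log out"],
)
```

A record like this can be filled in per test case and rendered into whichever template the project has adopted.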
In software systems, business logic and functionality are documented and modeled
using use cases. A use case typically describes the interaction between an actor
(a user of the system) and the system so that the actor can achieve desired results. A
use case normally describes how a user can access a specific feature,
functionality or service of a software system to perform a specific task. Use case
based test case design provides for functional testing of the application. Use cases
form excellent input for functional test case design for integration, system and
acceptance testing. Functional test cases can be designed based on use cases in the
following way:
• Read each use case along with the user interface design and data model
(if required)
• Identify the sub-path(s) in each alternate flow
• Identify the data set that executes a given alternate flow path
A use case may have a dependency on another use case, which would require
interfacing and interaction. Ensure that this dependency and the associated use case
flows are captured in the test case design. Use case based test cases can be used
for end user testing (during acceptance testing) by grouping the test cases pertaining
to all use cases in which an actor (end user) participates when interacting with the
system.
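The derivation of one functional test case per use case flow can be sketched as follows; the "Withdraw cash" use case and its flows are invented for illustration:

```python
# a hypothetical use case: each flow (main + alternates) is a path of steps
use_case = {
    "name": "Withdraw cash",
    "main_flow": ["insert card", "enter PIN", "choose amount", "dispense cash"],
    "alternate_flows": {
        "wrong PIN": ["insert card", "enter wrong PIN", "show error",
                      "eject card"],
        "insufficient funds": ["insert card", "enter PIN", "choose amount",
                               "show error", "eject card"],
    },
}

def derive_test_cases(uc):
    """One functional test case per flow path through the use case."""
    cases = [{"id": f"{uc['name']}/main", "steps": uc["main_flow"]}]
    for alt_name, steps in uc["alternate_flows"].items():
        cases.append({"id": f"{uc['name']}/{alt_name}", "steps": steps})
    return cases

cases = derive_test_cases(use_case)
```

Each derived case still needs the data set that forces its flow (for example an account balance below the requested amount for the "insufficient funds" path).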
User interface testing involves testing the user interface of the product to verify
whether the product performs its intended functions correctly or behaves
incorrectly. User interface testing includes standard (normal) usage testing as well
as unusual usage testing and failure (error condition) testing (negative behavior).
All interactive applications, including web applications, provide user interfaces
through which a user interacts with the system to perform a desired function. User
interface based testing involves testing the user interface itself as well as the
functional testing of the application through the user interface.
User interface testing requires tests that verify the windows and their graphical
objects for accuracy. A user interface may work well functionally, but graphical
objects can appear to the user to be corrupted. One way to find these problems is
to have automated scripts that perform screen verification throughout the testing
process. This type of automated testing is time consuming and may require much
maintenance, so keep it simple and compact. Sometimes these automated scripts
can be used as acceptance tests.
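A minimal sketch of such a screen-verification script: reduce each captured screen to a checksum and compare it against a stored baseline. Here the "screen images" are plain bytes for the sake of a self-contained example; a real script would capture actual screenshots.

```python
import hashlib

def screen_fingerprint(pixels: bytes) -> str:
    """Reduce a captured screen image to a short, comparable fingerprint."""
    return hashlib.sha256(pixels).hexdigest()

# baseline recorded once from a known-good build
baseline = screen_fingerprint(b"rendered login page, build 1.0")

# later test run: any visual difference changes the fingerprint
current = screen_fingerprint(b"rendered login page, build 1.0")
assert current == baseline, "screen differs from baseline - check graphical objects"
```

Keeping the check to a single fingerprint per screen is one way to honour the "simple and compact" advice above, at the cost of not telling you which object changed.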
Manual testing allows the tester the flexibility to make judgements and to catch
subtle things that elude automated testing, but it is much harder to repeat such
tests accurately.
Navigational testing/page flow testing verifies that all navigational methods work
correctly. OK buttons, cancel buttons, keys, windows, dialogue windows,
toolboxes and others offer different ways of navigating through the
windows/pages. Since there are almost infinite ways to navigate through an
application, an efficient way of checking them is to alternate navigation options
when doing other types of testing.
Page flow testing deals with ensuring that jumping to random pages does not
confuse the application. Each page should typically check that it can only be
viewed via a specific set of previous pages; if the referring page was not one of
that set, an error page should be displayed. A page flow diagram is very useful
for the tester when checking for correct page flow within the application. One
simple check to consider is forcing the application to move along an unnatural
path; the application must resist and display appropriate error messages.
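The referring-page check described above can be sketched like this (the page names and allowed transitions are hypothetical):

```python
# for each page, the set of pages it may legitimately be reached from
allowed_referrers = {
    "checkout": {"cart"},
    "payment": {"checkout"},
    "confirmation": {"payment"},
}

def page_for_request(target: str, referrer: str) -> str:
    """Serve the target page only when reached from an allowed previous page."""
    if referrer not in allowed_referrers.get(target, set()):
        return "error"   # unnatural path: resist and show an error page
    return target

assert page_for_request("payment", "checkout") == "payment"  # natural path
assert page_for_request("confirmation", "cart") == "error"   # random jump
```

A tester armed with the page flow diagram can derive negative test cases directly from the pairs that are absent from such a table.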
The following procedure can be used for user interface test design for testing the
user interface and the functionality of the application.
1. Study the user interfaces (web pages) and the user interface navigation
diagrams (page flow diagrams)
2. For each user interface (web page)
• Identify the data fields (input/output)
• Identify the navigation/input actions required
• For each navigation/input action, define the output/outcome
3. Develop specific/alternate user interaction dialog paths
• For each user interface
• Across related user interfaces
• Across unrelated user interfaces (if required)
4. Identify the data set and input actions that would activate a specific user
interface dialog
5. Design test case(s)
• Define test environment set up
• Define test procedure (test steps) of the test detailing navigation and
input actions
• Define dependencies on other test cases (prerequisites for the test
case)
• Define input data (if any)
• Define output of the test case
• Define outcome of the test case
• Define pass/fail/partial pass criteria
6. Document the test case in the test case template
7. Walk through (dry run) the test case on the application
8. Review test case design to identify
• Missed conditions and paths
• Need for more test cases
• Defects in existing test cases
9. Update the test case design
10. Verify the test case design and close the review findings
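Step 3, developing the user interaction dialog paths, amounts to enumerating paths through the page flow diagram. A sketch, with a made-up page flow graph:

```python
# page flow graph: each page lists the pages reachable from it
flow = {
    "home": ["search", "login"],
    "login": ["home"],
    "search": ["results"],
    "results": [],
}

def dialog_paths(graph, start):
    """Enumerate every simple (cycle-free) navigation path from the start page."""
    paths = []
    def walk(page, path):
        path = path + [page]
        # only continue to pages not already visited on this path
        nexts = [n for n in graph.get(page, []) if n not in path]
        if not nexts:
            paths.append(path)   # dead end: record the completed dialog path
        for n in nexts:
            walk(n, path)
    walk(start, [])
    return paths

paths = dialog_paths(flow, "home")  # [["home","search","results"], ["home","login"]]
```

Each enumerated path is a candidate test procedure; the tester then attaches the input data and expected output/outcome per step, as in step 5.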
Test cases pertaining to a specific user interface and user interface path can be
grouped together to form a test suite. Test cases can be documented using any of
the test case templates described earlier.
User interface based test case design provides for development of test cases for
testing the user interface and its usability as well as the functionality of the
application.
Logic based testing is a white box testing strategy. It is also called basis path
testing. In logic based testing it is necessary to test each program flow path
uniquely at least once. According to the basis-path testing technique, the cyclomatic
complexity of the logic flow gives the upper bound for the number of independent
paths that form the "basis paths" that need to be tested.
Example: a program with two predicate nodes (conditionals) would have 3 basis
paths. We need one test case for testing each path. To identify the paths it is
necessary to construct the flow graph of the logic of the program/method.
We need to identify the data set for executing each basis path or conditional path.
The test case design is documented in the test case template as described in the
earlier section. The test cases are reviewed and approved.
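A sketch of the example above: a function with two predicate nodes has cyclomatic complexity 2 + 1 = 3, so three basis paths, each needing one test case. The function and its data sets are invented for illustration:

```python
def discount(amount, is_member):
    """Two predicate nodes -> cyclomatic complexity V(G) = 2 + 1 = 3."""
    percent_off = 0
    if amount > 100:      # predicate node 1
        percent_off = 5
    if is_member:         # predicate node 2
        percent_off += 10
    return amount * (100 - percent_off) // 100

# one test case (data set) per basis path
assert discount(50, False) == 50     # path 1: both predicates False
assert discount(200, False) == 190   # path 2: first True, second False
assert discount(50, True) == 45      # path 3: first False, second True
```

The fourth combination (both predicates True) is a linear combination of the basis paths and is not required for basis-path coverage, though it may still be worth testing.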
Logic and data structures are the key elements of a program. Data structures,
modeled in the form of a data model, define the input domain of the software
system. A data model typically describes the entities and their relationships in the
application domain. It also defines the attributes of each entity and their data
description. The attributes of the entities along with their data descriptions (name,
type, size, constraints) are documented in a data dictionary.
The input domain of a software system can be very large, and testing the
software system over the complete input domain would be very expensive and time
consuming. Techniques such as equivalence partitioning (EP) and boundary value
analysis (BVA) are applied to reduce the input domain to an
acceptable and manageable size for input domain based testing.
Test cases can be designed based on the input domain in the following way:
1. Study the data model (entity-relationship model) of the application.
2. Identify and study the attributes of each entity in terms of Data type, size, and
constraints (constraints can be primary key, one among a range of values,
computed value, foreign key, etc.).
3. Identify the critical attributes that are used for condition checks, computations,
data manipulation, validations and retrieval, etc. These attributes form the
critical data in the input domain.
4. Identify other attributes that are simply Input-Output type
5. Identify and define the standard set of validations that should be conducted for
attributes of a particular data type (say, numeric). These constitute the
validation tests that are conducted with invalid data, invalid data types, etc.
Ex: a data item with a given data type (say XNO with numeric(3) data type) has an
acceptable range of values. For instance, the acceptable range for XNO is -999 to
+999. But based on the input domain the valid values are, say, only 1-990. The
rest of the acceptable range of values then constitutes the invalid values.
(Figure: the acceptable range of values for XNO, with the valid range in the
middle and the invalid ranges on either side)
Validation test cases with invalid data can be picked from the "invalid ranges" of
values. A validation test case is required for every attribute in the input domain.
6. For each critical attribute, test cases for "edge testing" can be identified from
the input domain using the equivalence classes and boundary values, as per the
following:
Equivalence classes: A group of tests forms an equivalence class if they all test the
same thing, will catch the same bug, involve the same input data, result in similar
operations in the program, affect the same output variables, and either none of
them force the program into error handling or all of them do.
Equivalence partitioning is a black box testing method that divides the input
domain of a program into classes of data from which test cases can be derived.
Equivalence partitioning strives to define a test case that uncovers classes of errors,
thereby reducing the total number of test cases that must be developed.
Prepare a table with columns for the attribute name, its valid equivalence classes
and its invalid equivalence classes. For each attribute:
• Identify the valid and invalid equivalence classes and record them in the table
• Record one value from each class to represent the class in the table
• Include additional values to provide for standard validation test cases
Ex.
Attribute   Valid Equivalence Class      Invalid Equivalence Class
XNO         Number between 1 and 990     Number between -999 and 0
                                         Number between 991 and 999
Boundary values: Boundary value analysis leads to selection of test cases that
exercise boundary values. BVA leads to the selection of test cases at the "edges"
of the equivalence class.
The following boundary values can be considered while designing test cases for
an equivalence class with lower (LB) and upper (UB) boundary values
LB - 1    UB - 1
LB        UB
LB + 1    UB + 1
By combining the equivalence classes and the boundary values, we can arrive at
the test data for XNO: the boundary values 0, 1, 2, 989, 990 and 991 of the valid
class, together with representative values from each invalid class.
The valid and invalid values identified are used as the input data in the test cases
designed. In addition to the invalid values, the standard validations for the numeric
data type also have to be considered while designing the test cases.
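A sketch of the resulting XNO test design, with the validation rule under test and the EP/BVA-derived test data (expected outcomes follow the valid range 1-990 stated above; the representative invalid values are illustrative picks):

```python
def is_valid_xno(n: int) -> bool:
    """Valid XNO values: 1 to 990 within the acceptable range -999..+999."""
    return 1 <= n <= 990

# boundary values around the edges of the valid equivalence class,
# plus one representative value from each invalid class
test_data = {
    0: False, 1: True, 2: True,        # lower boundary: LB-1, LB, LB+1
    989: True, 990: True, 991: False,  # upper boundary: UB-1, UB, UB+1
    -500: False,                       # representative of class -999..0
    995: False,                        # representative of class 991..999
}

for value, expected in test_data.items():
    assert is_valid_xno(value) == expected
```

Standard numeric validations (non-numeric input, wrong length, etc.) would be layered on top of this data-driven check.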
During the test case design process we may apply one or more of the test case
design techniques described earlier. Any one technique may not ensure complete
test coverage. It is necessary to adopt and use more than one technique and
integrate the test cases developed.
It is possible that the same test case could be identified by multiple
techniques. We need to remove any such redundancies before integrating
all the test cases.
Subsequently, test cases can be packaged into test suites, and test suites can be
combined to meet a test specification.
As people use a product, they form opinions about how well that product fulfills
their expectations. In a sense, during development of the product, the
development and testing teams use the software system to try to gauge, in
advance, customers' experiences of product quality. The extent to which the
software system allows you to do this is known as the "fidelity" of the software
system.
Test cases that cover the most important quality risks, requirements and functions
are assigned the highest priority to achieve high "fidelity".
Quality Risk Coverage: Apart from testing some functional aspect of the system,
a test case may also test the quality risks (failure modes) associated with it, directly
or indirectly. Testing should look for situations in which the software system fails
to meet customers' reasonable expectations in particular areas.
The quality risk coverage provided by each test case needs to be specified by
assigning it a numeric value.
When you total the numbers assigned to test cases by quality risk category and by
test suite, you can measure, respectively, whether you are covering a particular risk
and whether the tests are providing an adequate return on investment. You need to
relate these numbers to the risk priority numbers: high priority numbers should
correspond to high risks, low priority numbers to low risks.
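Totalling the coverage numbers by quality risk category can be sketched as follows (the test cases, categories and numbers are made up):

```python
from collections import defaultdict

# (test case id, quality risk category, coverage number assigned)
assignments = [
    ("TC-1", "data loss", 3),
    ("TC-2", "data loss", 2),
    ("TC-3", "performance", 1),
]

totals = defaultdict(int)
for _case_id, risk, number in assignments:
    totals[risk] += number

# a high-priority risk with a low total signals inadequate coverage
```

The same aggregation, keyed by test suite instead of risk category, gives the per-suite return-on-investment view described above.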
You need to use every opportunity to increase test configuration coverage through
careful use of test cycles. By reshuffling the configurations used with each test in
each cycle, you can get even closer to complete coverage.
Code, Path and Branch Coverage: Code coverage addresses whether all the
lines of code in a program/class/component have been tested.
Path coverage indicates whether all the program flow paths have been tested.
Branch coverage addresses whether each simple condition (a complex condition is
constructed from a set of simple conditions) and both of its branches (True/False)
have been tested.
Code coverage, path coverage and branch coverage measures are normally used at
the unit testing level.
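A sketch of branch coverage at the unit level: the complex condition below is built from two simple conditions, and the test data drives each simple condition to both its True and False branches (the function and values are invented):

```python
def can_ship(weight_kg, paid):
    # complex condition built from two simple conditions
    if weight_kg <= 30 and paid:
        return True
    return False

# branch coverage: each simple condition takes both outcomes
assert can_ship(10, True) is True     # weight ok True, paid True
assert can_ship(40, True) is False    # weight ok False
assert can_ship(10, False) is False   # weight ok True, paid False
```

Note that full branch coverage here needs three cases, not two, because each simple condition must be exercised separately; line coverage alone would be satisfied by just the first two.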