Istqb M1

The document discusses the objectives and principles of software testing. It describes various testing activities and tasks involved in the test process, including test planning, monitoring, analysis, design, implementation, execution and completion. Contextual factors like lifecycle model, risks, requirements and resources influence the test process.

Typical Objectives of Testing (7)

1. To find defects and failures, thus reducing the level of risk of inadequate software quality
2. To prevent defects by evaluating work products such as requirements, user stories, design,
and code
3. To verify whether all specified requirements have been fulfilled
4. To check whether the test object is complete and validate if it works as the users and
other stakeholders expect
5. To build confidence in the level of quality of the test object
6. To provide sufficient information to stakeholders to allow them to make informed
decisions, especially regarding the level of quality of the test object
7. To comply with contractual, legal, or regulatory requirements or standards, and/or to
verify the test object’s compliance with such requirements or standards

The objectives of testing can vary, depending upon the context of the
component or system being tested, the test level, and the software
development lifecycle model.

 During component testing, one objective may be to find as many failures as possible so
that the underlying defects are identified and fixed early. Another objective may be to
increase code coverage of the component tests.
 During acceptance testing, one objective may be to confirm that the system works as
expected and satisfies requirements. Another objective of this testing may be to give
information to stakeholders about the risk of releasing the system at a given time.

1.1.2 Testing and Debugging

Testing and debugging are different. Executing tests can show failures
that are caused by defects in the software. Debugging is the development
activity that finds, analyzes, and fixes such defects.
Subsequent confirmation testing checks whether the fixes resolved the defects. In some cases,
testers are responsible for the initial test and the final confirmation test, while developers do the
debugging and the associated component and component integration testing (continuous integration).
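The test → debug → confirmation-test cycle can be sketched with a small hypothetical example (the function and its defect are made up for illustration):

```python
# Executing the test exposes a failure; debugging (a development activity)
# locates and fixes the defect; confirmation testing re-runs the same test.

def median_of_three_buggy(a, b, c):
    return sorted([a, b, c])[0]   # defect: returns the minimum, not the median

def median_of_three_fixed(a, b, c):
    return sorted([a, b, c])[1]   # after debugging: index 1 is the median

# Test execution shows the failure caused by the defect:
assert median_of_three_buggy(3, 1, 2) != 2   # actual result is 1, expected 2

# Confirmation test: the same test case now passes against the fix:
assert median_of_three_fixed(3, 1, 2) == 2
```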

1.2.1 Testing’s Contributions to Success (4)

 Having testers involved in requirements reviews or user story refinement could detect
defects in these work products. The identification and removal of requirements defects
reduces the risk of incorrect or untestable features being developed.
 Having testers work closely with system designers while the system is being designed can
increase each party’s understanding of the design and how to test it. This increased
understanding can reduce the risk of fundamental design defects and enable tests to be
identified at an early stage.
 Having testers work closely with developers while the code is under development can
increase each party’s understanding of the code and how to test it. This increased
understanding can reduce the risk of defects within the code and the tests.
 Having testers verify and validate the software prior to release can detect failures that
might otherwise have been missed, and support the process of removing the defects that
caused the failures (i.e., debugging). This increases the likelihood that the software meets
stakeholder needs and satisfies requirements.

1.2.2 Quality Assurance and Testing

Quality Management

1. Quality management includes all activities that direct and control an organization with
regard to quality.
2. Among other activities, quality management includes both QUALITY ASSURANCE and
QUALITY CONTROL
Quality assurance

1. Quality Assurance is concerned with the proper execution of the entire process.
2. QA is typically focused on adherence to proper processes, in order to provide
confidence that the appropriate levels of quality will be achieved. When processes are
carried out properly, the work products created by those processes are generally of
higher quality, which contributes to defect prevention.
3. In addition, the use of root cause analysis to detect and remove the causes of
defects, along with the proper application of the findings of retrospective meetings to
improve processes, are important for effective quality assurance.

Quality Control

1. Quality Control involves various activities, including test activities that support the
achievement of appropriate levels of quality.
2. Test activities are part of the overall software development or maintenance process.
Since quality assurance is concerned with the proper execution of the entire process,
quality assurance supports proper testing.

1.2.3 Errors, Defects, and Failures

A person can make an error (mistake), which can lead to the introduction of a defect (fault or
bug) in the software code or in some other related work product.

An error that leads to the introduction of a defect in one work product can trigger an error that
leads to the introduction of a defect in a related work product.

For example, a requirements elicitation error can lead to a requirements defect, which then
results in a programming error that leads to a defect in the code.
If a defect in the code is executed, this may cause a failure, but not necessarily in all
circumstances. For example, some defects require very specific inputs or preconditions to trigger
a failure, which may occur rarely or never.

For example, suppose incorrect interest payments, due to a single line of incorrect code, result in
customer complaints. The defective code was written for a user story which was ambiguous, due
to the product owner’s misunderstanding of how to calculate interest. If a large percentage of
defects exist in interest calculations, and these defects have their root cause in similar
misunderstandings, the product owners could be trained in the topic of interest calculations to
reduce such defects in the future. In this example, the customer complaints are effects. The
incorrect interest payments are failures. The improper calculation in the code is a defect, and it
resulted from the original defect, the ambiguity in the user story. The root cause of the original
defect was a lack of knowledge on the part of the product owner, which resulted in the product
owner making an error while writing the user story.
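The interest example above, and the earlier point that some defects only cause a failure for very specific inputs, can be sketched in a few lines (the tier boundary and rates are hypothetical, not from the syllabus):

```python
def interest(balance: float) -> float:
    # Hypothetical tiered interest calculation with a seeded defect:
    # the specification says balances of 10,000 or more earn 5%,
    # but the code uses '>' instead of '>='.
    if balance > 10_000:
        return round(balance * 0.05, 2)
    return round(balance * 0.02, 2)

# The defect is only executed, and only causes a failure, for the
# boundary input 10_000; all other inputs behave as specified.
assert interest(9_999) == 199.98    # as specified
assert interest(10_001) == 500.05   # as specified
assert interest(10_000) == 200.00   # defect triggered: specification expects 500.00
```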

Errors may occur for many reasons, such as: (7)

1. Time pressure
2. Human fallibility
3. Inexperienced or insufficiently skilled project participants
4. Miscommunication between project participants, including miscommunication
about requirements
5. Complexity of the code, design, architecture, the underlying problem to be solved,
and/or the technologies used
6. Misunderstandings about intra-system and inter-system interfaces, especially when
such intra-system and inter-system interactions are large in number
7. New, unfamiliar technologies
1.2.4 Defects, Root Causes and Effects
The root causes of defects are the earliest actions or conditions that contributed to creating the
defects. Defects can be analyzed to identify their root causes, so as to reduce the occurrence of
similar defects in the future. By focusing on the most significant root causes, root cause analysis
can lead to process improvements that prevent a significant number of future defects from being
introduced.

1.3 Seven Testing Principles (7)

1. Testing shows the presence of defects, not their absence
2. Exhaustive testing is impossible
3. Early testing saves time and money
4. Defects cluster together
5. Beware of the pesticide paradox
6. Testing is context dependent
7. Absence-of-errors is a fallacy

1.4 Test Process

There is no one universal software test process, but there are common sets of test activities
without which testing will be less likely to achieve its established objectives. These sets of
test activities are a test process. The proper, specific software test process in any
given situation depends on many factors. Which test activities are involved in this test process,
how these activities are implemented, and when these activities occur may be discussed in an
organization’s test strategy.
1.4.1 Test Process in Context

Contextual factors that influence the test process for an organization include: (7)

1. Software development lifecycle model and project methodologies being used
2. Product and project risks
3. Test levels and test types being considered
4. Business domain
5. Operational constraints, including but not limited to:
 Budgets and resources
 Timescales
 Complexity
 Contractual and regulatory requirements
6. Required internal and external standards
7. Organizational policies and practices

The following sections describe general aspects of organizational test processes in terms of the
following:

 Test activities and tasks


 Test work products
 Traceability between the test basis and test work
products
1.4.2 Test Activities and Tasks (7)

1. Test planning
2. Test monitoring and control
3. Test analysis
4. Test design
5. Test implementation
6. Test execution
7. Test completion

Test Manager (the TM does these 3)

1. Test planning
2. Test Monitoring and Control
3. Test Completion

Testers (complete these 4)

1. Test analysis
2. Test Design
3. Test implementation
4. Test Execution

1,Test Planning

(5.2 Planning and Estimation) 5.2.1 Purpose and Content of a Test Plan (5th module)

Test planning is a continuous process; the test plan may change even during deployment and maintenance.

Using feedback and the identification of new risks during test activities, the test plan will change
according to the situation.
Planning is documented in a MASTER TEST PLAN, with separate test plans for each test level.

E.g. acceptance test, usability test, performance test.

As the project and test planning progress, more information becomes available and more detail
can be included in the test plan. Test planning is a continuous activity and is performed
throughout the product's lifecycle. (Note that the product’s lifecycle may extend beyond a
project's scope to include the maintenance phase.) Feedback from test activities should be used
to recognize changing risks so that planning can be adjusted.

Planning may be documented in a master test plan and in separate test plans for test levels, such
as system testing and acceptance testing, or for separate test types, such as usability testing and
performance testing. Test planning activities may include the following and some of these may be
documented in a test plan

Test Planning Activities, Included in the TEST PLAN (8)

1. Overall approach
2. Scope, objectives, and risks of testing
3. Decisions about what to test, and the people and resources required to perform the various tests
4. Budget
5. Choosing metrics for TEST MONITORING AND CONTROL
6. Determining the level of detail and structure for test documentation
7. Integrating and coordinating the test activities in the SDLC
8. Scheduling test analysis, design, implementation, execution, and evaluation
activities on particular dates

During test planning, Configuration Management procedures and infrastructure (tools)
should be identified and implemented. (5.4 Configuration Management)

2,Test Monitoring and Control (5.3)


Test Monitoring

 Test monitoring involves the on-going comparison of actual progress against planned
progress using any test monitoring metrics defined in the test plan.
 Test monitoring is to gather information and provide feedback and visibility about test
activities.
 Information to be monitored may be collected manually or
automatically and should be used to assess test progress and to
measure whether the test exit criteria, or the testing tasks associated with
an Agile project's definition of done, are satisfied, such as meeting the targets for
coverage of product risks, requirements, or acceptance criteria.

Test Control-

 Test control involves taking actions necessary to meet the objectives of the test plan
(which may be updated over time).
 Test control describes any guiding or corrective actions taken as a
result of information and metrics gathered and (possibly) reported.
Actions may cover any test activity and may affect any other software lifecycle activity.

Examples of test control actions include: (3)

 Reprioritizing tests when an identified risk occurs
 Changing test schedules
 Re-evaluating whether a test item meets entry/exit criteria

Entry Criteria and Exit Criteria (Definition of Ready and Definition of Done) (5TH
MODULE)
In order to exercise effective control over the quality of the software, and of the testing, it is
advisable to have criteria which define when a given test activity should start and when the
activity is complete.

Entry criteria
A set of conditions to be met before an activity begins (more typically called the definition of ready in
Agile development); entry criteria define the preconditions for undertaking a given test activity. If entry
criteria are not met, it is likely that the activity will prove more difficult, more time-consuming, more
costly, and more risky.

Typical entry criteria include: (5)


1. Availability of test environment
2. Availability of necessary test tools
3. Availability of test data and other necessary resources
4. Availability of testable requirements, user stories, and/or models (e.g., when following a
model- based testing strategy)
5. Availability of test items that have met the exit criteria for any prior test levels

Exit criteria
A set of conditions to be met before an activity concludes (more typically called the definition of done
in Agile development); exit criteria define what conditions must be achieved in order to declare a test
level or a set of tests completed. Both entry and exit criteria are applicable at every stage of the STLC.
Entry and exit criteria should be defined for each test level and test type, and will differ based
on the test objectives.

Typical EXIT CRITERIA include: (5)


1. Planned tests have been executed
2. A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks,
code) has been achieved
3. The number of unresolved defects is within an agreed limit
4. The number of estimated remaining defects is sufficiently low
5. The evaluated levels of reliability, performance efficiency, usability, security, and other
relevant quality characteristics are sufficient

Even when exit criteria are not satisfied, it is common for test activities to be curtailed due
to the budget being expended, the scheduled time expiring, and/or pressure to bring the
product to market. It can be acceptable to end testing under such circumstances, if the project
stakeholders and business owners have reviewed and accepted the risk to go live without further
testing.
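The exit-criteria evaluation described above can be sketched as a simple automated check (the metric names and thresholds here are hypothetical, chosen only to illustrate the idea):

```python
def exit_criteria_met(metrics: dict) -> bool:
    # Each condition mirrors a typical exit criterion from the list above.
    return (
        metrics["tests_executed_pct"] >= 100           # planned tests have been executed
        and metrics["requirement_coverage_pct"] >= 95  # defined level of coverage achieved
        and metrics["unresolved_defects"] <= 5         # unresolved defects within agreed limit
    )

status = exit_criteria_met({
    "tests_executed_pct": 100,
    "requirement_coverage_pct": 97,
    "unresolved_defects": 8,   # exceeds the agreed limit
})
print(status)  # False: the unresolved-defect criterion fails
```

In practice such checks are fed by the test management tool's metrics rather than hand-typed values.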

Metrics Used in Testing


Metrics can be collected during and at the end of test activities in order to assess:
1. Progress against the planned schedule and budget
2. Current quality of the test object
3. Adequacy of the test approach
4. Effectiveness of the test activities with respect to the objectives

Common test metrics include:


1. Percentage of planned work done in test case preparation (or percentage of planned
test cases implemented)
2. Percentage of planned work done in test environment preparation
3. Test case execution (e.g., number of test cases run/not run, test cases passed/failed,
and/or test conditions passed/failed)
4. Defect information (e.g., defect density, defects found and fixed, failure rate, and
confirmation test results)
5. Test coverage of requirements, user stories, acceptance criteria, risks, or code
6. Task completion, resource allocation and usage, and effort
7. Cost of testing, including the cost compared to the benefit of finding the next defect
or the cost compared to the benefit of running the next test
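A few of the metrics above can be computed from raw counts like this (the field names and figures are illustrative, not a standard schema):

```python
executed = {"passed": 180, "failed": 15, "blocked": 5}
planned_total = 220
defects_found = 42
size_kloc = 12.5  # size of the test object in thousands of lines of code

run = sum(executed.values())                 # 200 test cases run
execution_pct = 100 * run / planned_total    # test case execution progress
pass_rate = 100 * executed["passed"] / run   # passed as a share of run
defect_density = defects_found / size_kloc   # defects per KLOC

print(f"{execution_pct:.1f}% executed, {pass_rate:.1f}% passed, "
      f"{defect_density:.2f} defects/KLOC")
```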

5.3.2 Purposes, Contents, and Audiences for Test Reports

The purpose of test reporting is to summarize and communicate test activity information,
both during and at the end of a test activity (e.g., a test level).
TEST PROGRESS REPORT

 The test report prepared during a test activity may be referred to as a test progress report.
 During test monitoring and control, the test manager regularly issues test progress
reports for stakeholders.

Typical test progress reports may also include (4)


1. The status of the test activities and progress against the test
plan
2. The quality of the test object
3. Factors impeding progress (i.e., factors preventing progress)
4. Testing planned for the next reporting period

TEST SUMMARY REPORT


A test report prepared at the end of a test activity may be referred to
as a test summary report.

When exit criteria are reached, the test manager issues the test summary
report. This report provides a summary of the testing performed, based
on the latest test progress report and any other relevant information.

Typical test summary reports may include: (8)

1. Summary of testing performed


2. Information on what occurred during a test period
3. Status of testing and product quality with respect to the exit
criteria or definition of done
4. Reusable test work products produced
5. Factors that have blocked or continue to block progress
6. Deviations from plan, including deviations in schedule, duration,
or effort of test activities
7. Residual risks (see section 5.5)(Unresolved risk)
8. Metrics of defects, test cases, test coverage, activity progress,
and resource consumption. (e.g., as described in 5.3.1)

3,TEST ANALYSIS

 During test analysis, the test basis is analyzed to identify testable features and define
associated test conditions.
 Test analysis determines “what to test” in terms of measurable coverage criteria.

Test Basis – all the documents associated with the component or system under test

Test analysis includes the following major activities. (5)

1. Identifying features and sets of features to be tested


2. Defining and prioritizing test conditions for each feature based on
analysis of the test basis, and considering functional, non-
functional, and structural characteristics, other business and
technical factors, and levels of risks
3. Capturing bi-directional traceability between each element of the
test basis and the associated test conditions (see sections 1.4.3
and 1.4.4)
4. Analyzing the test basis appropriate to the test level being
considered, for example:
 Requirement specifications, such as business requirements, functional requirements,
system requirements, user stories, epics, use cases, or similar work products that
specify desired functional and non-functional component or system behavior
 Design and implementation information, such as system or software architecture
diagrams or documents, design specifications, call flow graphs, modelling diagrams
(e.g., UML or entity-relationship diagrams), interface specifications, or similar work
products that specify component or system structure
 The implementation of the component or system itself, including code, database
metadata and queries, and interfaces
 Risk analysis reports, which may consider functional, non-functional, and structural
aspects of the component or system

5. Evaluating the test basis and test items to identify defects of various types, such as:

1. Ambiguities :- State of having more than one meaning or interpretation
2. Omissions :- Something important is missing (e.g., a required feature not mentioned in the specification)
3. Inconsistencies
4. Inaccuracies
5. Contradictions :- Statements in the test basis that conflict with one another
6. Superfluous statements :- Statements which are not really required in the program

WORK PRODUCT – Test Basis & Test Conditions

Test Charters
Test charters are typical work products in some types of experience-based testing (see section
4.4.2).
Experience-based Testing (4th module)
4,Test Design

During test design, the test conditions are elaborated into high-level test cases, sets of high-level
test cases, and other test ware. So, test analysis answers the question “what to test?” while test
design answers the question “how to test?”

Major activities (4)

1. Designing and prioritizing test cases and sets of test cases

2. Identifying necessary test data to support test conditions and test cases

3. Designing the test environment and identifying any required infrastructure and
tools

4. Capturing bi-directional traceability between the test basis, test conditions, and test cases (see 1.4.4)
The elaboration of test conditions into test cases and sets of test cases during test design often involves
using test techniques (see chapter 4).

WORK PRODUCT – Test Basis, Test Conditions and Test Cases

5,Test implementation

During test implementation, the test ware necessary for test execution is created and/or
completed, including sequencing the test cases into test procedures. So, test design answers the
question “how to test?” while test implementation answers the question “do we now have
everything in place to run the tests?”

Test implementation includes the following major activities: (6)

1. Developing and prioritizing test procedures, and, potentially, creating automated test scripts
2. Creating test suites from the test procedures and (if any) automated test
scripts

3. Arranging the test suites within a test execution schedule in a way that
results in efficient test execution (see section 5.2.4)

4. Building the test environment (including, potentially, test harnesses, service
virtualization, simulators, and other infrastructure items) and verifying that
everything needed has been set up correctly

5. Preparing test data and ensuring it is properly loaded in the test environment

6. Verifying and updating bi-directional traceability between the Test Basis,
Test Conditions, Test Cases, Test Procedures, And Test Suites

WORK PRODUCT – Test basis, Test conditions, Test cases, Test procedures, and Test
suites

Test Scripts
A set of step-by-step instructions on how to execute a test case.
Test Script/Test Procedure
A set of detailed instructions for the execution of a particular test. It specifies the sequence of
actions to be taken, the input data to be used, and the expected results.
Test Suites
Logical groupings or collections of test cases, run as a single job covering different test scenarios.
Ex:- Test suite for product purchase has multiple test cases(TC),like:
TC1 Login
TC2 Adding Products
TC3 Checkout
TC4 Logout
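The purchase suite above could be expressed with Python's `unittest`; the application functions (`login`, `add_product`, etc.) are hypothetical stubs standing in for the system under test:

```python
import unittest

# Hypothetical stubs for the application under test:
def login(user): return True
def add_product(cart, item): cart.append(item); return cart
def checkout(cart): return len(cart) > 0
def logout(user): return True

class ProductPurchaseSuite(unittest.TestCase):
    """Logical grouping of the four test cases, run as a single job."""
    def test_tc1_login(self):
        self.assertTrue(login("alice"))
    def test_tc2_adding_products(self):
        self.assertEqual(add_product([], "book"), ["book"])
    def test_tc3_checkout(self):
        self.assertTrue(checkout(["book"]))
    def test_tc4_logout(self):
        self.assertTrue(logout("alice"))

# unittest.main() would execute the whole suite as one job.
```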
6,TEST EXECUTION
During test execution, test suites are run in accordance with the test
execution schedule.

Test execution includes the following major activities:(8)


1. Recording the IDs and versions of the test item(s) or
test object, test tool(s), and test ware
2. Executing tests either manually or by using test
execution tools
3. Comparing actual results with expected results
4. Analyzing anomalies to establish their likely causes
(e.g., failures may occur due to defects in the code,
but false positives also may occur; see section 1.2.3)
5. Reporting defects based on the failures observed (see
section 5.6)
6. Logging the outcome of test execution (e.g., pass, fail,
blocked)
7. Repeating test activities either as a result of action
taken for an anomaly, or as part of the planned testing
(e.g., execution of a corrected test, confirmation
testing, and/or regression testing)
8. Verifying and updating bi-directional traceability
between the Test Basis, Test Conditions, Test Cases, Test
Procedures, And TEST RESULTS.

False positive

False positives are reported as defects, but aren't actually defects. They may occur due to
errors in the way tests were executed, or due to defects in the test data, the test
environment, or other testware, or for other reasons. The inverse situation can also occur,
where similar errors or defects lead to false negatives.
False Negative
False negatives are tests that do not detect defects that they should have detected.
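A false positive can be sketched as a test that fails because of a defect in its own expected value, not in the code under test (the function and figures are made up):

```python
def apply_discount(price: float, pct: float) -> float:
    # Code under test: behaves correctly per its specification.
    return round(price * (1 - pct / 100), 2)

actual = apply_discount(200.0, 10)   # correct result: 180.0
expected = 190.0                     # defect in the TEST DATA, not in the code

if actual != expected:
    print("FAIL")  # reported as a defect, but it is a false positive
```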

7,TEST COMPLETION

Test completion activities collect data from completed test activities to consolidate experience,
test ware, and any other relevant information.
Test completion activities occur at project milestones such as when a software system is
released, a test project is completed (or cancelled), an Agile project iteration is finished, a test
level is completed, or a maintenance release has been completed.

Test completion includes the following major activities(6)


1. Checking whether all defect reports are closed, entering change requests or product
backlog items for any defects that remain unresolved at the end of test execution
2. Creating a TEST SUMMARY REPORT to be communicated to stakeholders
3. Finalizing and archiving the Test Environment, The Test Data, The Test Infrastructure, and
other test ware for later reuse
4. Handing over the test ware to the maintenance teams, other project teams, and/or other
stakeholders who could benefit from its use
5. Analyzing lessons learned from the completed test activities to determine changes
needed for future iterations, releases, and projects
6. Using the information gathered to improve test process maturity

Typical DEFECT REPORTS have the following objectives: (3)

1. Provide developers and other parties with information about any adverse event that
occurred, to enable them to identify specific effects, to isolate the problem with a
minimal reproducing test, and to correct the potential defect(s), as needed or to
otherwise resolve the problem

2. Provide test managers a means of tracking the quality of the work product and the
impact on the testing (e.g., if a lot of defects are reported, the testers will have spent a
lot of time reporting them instead of running tests, and there will be more confirmation
testing needed)

3. Provide ideas for development and test process improvement

A Defect Report Filed During Dynamic Testing Typically Includes: (14)

1. An identifier(A unique identification number generated by any defect reporting/tracking


tool)
2. A title and a short summary of the defect being reported
3. Date of the defect report, issuing organization, and author
4. References, including the test case that revealed the problem
5. Identification of the test item (configuration item being tested) and environment
6. The development lifecycle phase(s) in which the defect was observed
7. A description of the defect to enable reproduction and resolution, including logs,
database dumps, screenshots, or recordings (if found during test execution)
8. Expected and actual results
9. Global issues, such as other areas that may be affected by a change resulting from the
defect
10. Urgency/priority to fix
11. Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
12. State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting
confirmation testing, re-opened, closed)
13. Change history, such as the sequence of actions taken by project team members with
respect to the defect to isolate, repair, and confirm it as fixed
14. Conclusions, recommendations and approvals
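The defect-report fields listed above could be captured in a tracking tool roughly as follows; the field names and sample values are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    identifier: str         # unique ID generated by the tracking tool
    title: str
    summary: str
    report_date: date
    author: str
    test_case_ref: str      # test case that revealed the problem
    test_item: str          # configuration item being tested
    environment: str
    lifecycle_phase: str    # phase in which the defect was observed
    expected_result: str
    actual_result: str
    severity: str           # degree of impact on stakeholders
    priority: str           # urgency to fix
    state: str = "open"     # open, deferred, duplicate, re-opened, closed, ...
    change_history: list = field(default_factory=list)

report = DefectReport(
    identifier="DEF-1042", title="Interest mis-calculated at tier boundary",
    summary="Higher tier rate not applied at the boundary value",
    report_date=date(2024, 1, 15), author="tester",
    test_case_ref="TC-77", test_item="interest-service v1.3",
    environment="staging", lifecycle_phase="system test",
    expected_result="500.00", actual_result="200.00",
    severity="major", priority="high",
)
print(report.state)  # a new report starts in the "open" state
```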
Test Condition
A testable aspect of a component or system identified as the basis for testing.

Test Basis
The body of knowledge used as the basis for test analysis and design. The requirement document
is usually used as the test basis.

Test Case
A set of preconditions, inputs, actions, expected results, and postconditions, developed based on
the test conditions.

Test Procedure
The sequence of test cases in execution order, and any associated actions that may be required to
set up the initial preconditions and postconditions.

Test Suite
A set of test scripts or test procedures to be executed in a specific test run; a grouping of test
cases in some order. A test suite is arranged in test execution order.

1.4.3 Test Work Products

Test work products are created as part of the test process.


Just as there is significant variation in the way that organizations implement the test process,
there is also significant variation in the types of work products created during that process, in the
ways those work products are organized and managed, and in the names used for those work
products.
Many of the test work products described in this section can be captured and managed using test
management tools and defect management tools (see chapter 6).
Test planning work products

1. Test planning work products typically include one or more Test plans.
2. The test plan includes information about the test basis, to which the other test work
products will be related via traceability information as well as exit criteria which will be
used during test monitoring and control.
WORK PRODUCT – TEST PLANS

Test monitoring and control work products


1. Test monitoring and control work products typically include various types of test
reports,
2. including TEST PROGRESS REPORTS produced on an ongoing and/or a regular basis,
and Test SUMMARY REPORTS produced at various completion milestones.
3. All test reports should provide audience-relevant details about the test progress as of
the date of the report, including summarizing the test execution results once those
become available.
4. Test monitoring and control work products should also address project management
concerns, such as task completion, resource allocation and usage, and effort (see 5.3).

WORK PRODUCT – TEST REPORTS (TEST PROGRESS REPORT AND TEST SUMMARY
REPORT)

Test Analysis Work Products

Test analysis work products include defined and prioritized test conditions, each of which is
ideally bi-directionally traceable to the specific element(s) of the test basis it covers. For
exploratory testing, test analysis may involve the creation of test charters. Test analysis
may also result in the discovery and reporting of defects in the test basis.

WORK PRODUCT – TEST CONDITIONS


Test Design Work Products
Test design results in TEST CASES AND SETS OF TEST CASES to exercise the test conditions
defined in test analysis. It is often a good practice to design high-level test cases, without
concrete values for input data and expected results. Such high-level test cases are reusable
across multiple test cycles with different concrete data, while still adequately documenting the
scope of the test case. Ideally, each test case is bi-directionally traceable to the test condition(s)
it covers.

WORK PRODUCT – TEST CONDITIONS-TEST CASES

Test design also results in:

 The identification of infrastructure and tools


 The design of the test environment
 The design and/or identification of the necessary test data

Test Implementation Work Products

1. Test procedures and the sequencing of those test procedures


2. Test suites
3. A test execution schedule
4. Ideally, once test implementation is complete, achievement of coverage criteria
established in the test plan can be demonstrated via bi-directional traceability
between test procedures and specific elements of the test basis, through the test cases
and test conditions.
5. In some cases, test implementation involves creating work products using or used by
tools, such as service virtualization and automated test scripts.
6. Test implementation also may result in the CREATION AND VERIFICATION OF TEST
DATA AND THE TEST ENVIRONMENT. The completeness of the documentation of the
data and/or environment verification results may vary significantly.
7. The test data serve to assign concrete values to the inputs and expected results of test
cases. Such concrete values, together with explicit directions about the use of the
concrete values, turn high-level test cases into executable low-level test cases. The
same high-level test case may use different test data when executed on different
releases of the test object. The concrete expected results which are associated with
concrete test data are identified by using a test oracle.
8. In exploratory testing, some test design and implementation work products may be
created during test execution, though the extent to which exploratory tests (and their
traceability to specific elements of the test basis) are documented may vary
significantly.
9. Test conditions defined in test analysis may be further refined in test implementation.
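Point 7 above can be sketched in code: a test oracle supplies the expected result for each concrete input, turning a high-level test case into an executable low-level one. The VAT rate and function names below are hypothetical illustrations, not part of the syllabus; in practice the oracle would be an independent reference (a specification, a model, or a trusted system), not a copy of the test object.

```python
# Hypothetical sketch: using a test oracle to pair concrete inputs
# with concrete expected results.

def oracle_vat(price):
    # Test oracle: independent reference model (here, 20% VAT,
    # rounded to 2 decimals).
    return round(price * 1.20, 2)

def add_vat(price):
    # Test object (hypothetical implementation under test).
    return round(price * 1.20, 2)

# Concrete test data chosen for this test cycle.
test_data = [10.00, 19.99, 0.00]

# Low-level test cases: (concrete input, concrete expected result).
low_level_cases = [(p, oracle_vat(p)) for p in test_data]

for price, expected in low_level_cases:
    assert add_vat(price) == expected
```

A later release of the test object could be exercised with a different `test_data` list while the high-level case and the oracle stay unchanged.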

WORK PRODUCT – TEST CONDITIONS-TEST CASES-TEST PROCEDURE-TEST SUITE

Test Execution Work Products


Test execution work products include:

1. Documentation of the status of individual test cases or test procedures (e.g.,
ready to run, pass, fail, blocked, deliberately skipped, etc.)

2. Defect reports (see section 5.6)

3. Documentation about which test item(s), test object(s), test tools, and testware
were involved in the testing
Ideally, once test execution is complete, the status of each element of the test basis can be
determined and reported via bidirectional traceability with the associated test
procedure(s). For example, we can say which requirements have passed all planned tests,
which requirements have failed tests and/or have defects associated with them, and which
requirements have planned tests still waiting to be run. This enables verification that the
coverage criteria have been met, and enables the reporting of test results in terms that are
understandable to stakeholders.
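The per-requirement reporting described above can be sketched as a small traceability mapping. The requirement and procedure IDs below are hypothetical; the point is only that bidirectional traceability lets us derive a status for each element of the test basis from the statuses of its test procedures.

```python
# Hypothetical sketch: deriving requirement status from test-procedure
# status via a traceability mapping.

trace = {  # requirement -> test procedures covering it
    "REQ-1": ["TP-01", "TP-02"],
    "REQ-2": ["TP-03"],
    "REQ-3": ["TP-04"],
}

status = {  # execution status per test procedure
    "TP-01": "pass",
    "TP-02": "pass",
    "TP-03": "fail",
    "TP-04": "not run",
}

def requirement_status(req):
    results = [status[tp] for tp in trace[req]]
    if all(r == "pass" for r in results):
        return "passed all planned tests"
    if "fail" in results:
        return "has failing tests"
    return "has planned tests still waiting to be run"

report = {req: requirement_status(req) for req in trace}
```

Here `report` states, in stakeholder terms, that REQ-1 passed all planned tests, REQ-2 has failing tests, and REQ-3 still has tests waiting to be run.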
Test Completion Work Products
Test completion work products include TEST SUMMARY REPORTS, action items for
improvement of subsequent projects or iterations, change requests or product backlog
items, and finalized testware.

1.4.4 Traceability between the Test Basis and Test Work Products

In addition to the evaluation of test coverage, good traceability supports:

1. Analyzing the impact of changes


2. Making testing auditable
3. Meeting IT governance criteria
4. Improving the understandability of test progress reports and test summary
reports to include the status of elements of the test basis (e.g., requirements that
passed their tests, requirements that failed their tests, and requirements that
have pending tests)
5. Relating the technical aspects of testing to stakeholders in terms that they can
understand

6. Providing information to assess product quality, process capability, and project
progress against business goals

1.5.1 Human Psychology and Testing

An element of human psychology called confirmation bias can make it difficult to accept
information that disagrees with currently held beliefs.
For example, since developers expect their code to be correct, they have a confirmation bias
that makes it difficult to accept that the code is incorrect. In addition to confirmation bias, other
cognitive biases may make it difficult for people to understand or accept information produced
by testing.

Ways to communicate well include the following examples:


1. Start with collaboration rather than battles. Remind everyone of the common goal of
better quality systems.
2. Emphasize the benefits of testing. For example, for the authors, defect information can
help them improve their work products and their skills. For the organization, defects
found and fixed during testing will save time and money and reduce overall risk to
product quality.
3. Communicate test results and other findings in a neutral, fact-focused way without
criticizing the person who created the defective item. Write objective and factual
defect reports and review findings.
4. Try to understand how the other person feels and the reasons they may react
negatively to the information.
5. Confirm that the other person has understood what has been said and vice versa.

1.5.2 Tester’s and Developer’s Mindsets

The primary objective of development is to design and build a product. As discussed earlier, the
objectives of testing include verifying and validating the product, finding defects prior to
release, and so forth. These are different sets of objectives which require different mindsets.
Bringing these mindsets together helps to achieve a higher level of product quality. A tester’s
mindset should include curiosity, professional pessimism, a critical eye, attention to detail, and
a motivation for good and positive communications and relationships. A tester’s mindset tends
to grow and mature as the tester gains experience. A developer’s mindset may include some of
the elements of a tester’s mindset, but successful developers are often more interested in
designing and building solutions than in contemplating what might be wrong with those
solutions. In addition, confirmation bias can make it difficult for developers to become aware of
errors in their own work.
