ISTQB. Module 3
Static Testing
Terms
Checklist-Based Reviewing – a review technique guided by a list of questions or required attributes.

Dynamic Testing – testing that involves the execution of the test item.

Formal Review – a type of review that follows a defined process with a formally documented output.

Informal Review – a type of review that does not follow a defined process and has no formally documented output.

Review – a type of static testing in which a work product or process is evaluated by one or more individuals to detect defects or to provide improvements.

Static Testing – testing a work product without the work product code being executed.
3.1 Static Testing Basics
Static testing techniques – techniques that test software without executing the code.
3.1.1 Work Products that Can Be Examined by Static Testing
Examples of work products that can be examined using static testing:
▪ Specifications, including business requirements, functional requirements, and security requirements.

Applying static testing:
1. Static testing applied early in the software development lifecycle enables the
early detection of defects before dynamic testing is performed (e.g., in
requirements or design specifications reviews, product backlog refinement,
etc.).
2. Defects found early are often much cheaper to remove than defects found
later in the lifecycle, especially compared to defects found after the software
is deployed and in active use.
3. Fixing those defects promptly is almost always much cheaper for the organization
than using dynamic testing to find defects in the test object and then fixing them,
especially when considering the additional costs associated with updating other
work products and performing confirmation and regression testing.
Additional benefits of static testing may include:
▪ Detecting and correcting defects more efficiently, and prior to dynamic test execution.
▪ Increasing development productivity (e.g., due to improved design, more maintainable code).
3.1.3 Differences between Static and Dynamic Testing

Objectives common to both static testing and dynamic testing:
▪ providing an assessment of the quality of the work products;
▪ identifying defects as early as possible.

Distinctions between static testing and dynamic testing:
▪ Static testing finds defects in work products directly, while dynamic testing finds failures caused by defects when the software is executed.
▪ Static and dynamic testing complement each other by finding different types of defects.
▪ Compared with dynamic testing, typical defects found by static testing are easier and cheaper to find and fix.
Typical defects found by static testing include:
▪ Requirement defects (e.g., inconsistencies, ambiguities, contradictions, omissions, inaccuracies, and redundancies).
▪ Design defects (e.g., inefficient algorithms or database structures, high coupling, low cohesion).
▪ Coding defects (e.g., variables with undefined values, variables that are declared but never used, unreachable code, duplicate code).
▪ Incorrect interface specifications (e.g., different units of measurement used by the calling system than by the called system).
▪ Gaps or inaccuracies in test basis traceability or coverage (e.g., missing tests for an acceptance criterion).
▪ Improper modularization.
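The coding defects listed above are exactly what static analysis finds without ever running the program. As an illustration only (not part of the syllabus), here is a minimal sketch of a "declared but never used" check built on Python's ast module; it only inspects module-level names and ignores scoping, so a real tool would do considerably more:

```python
import ast

def find_unused_variables(source: str) -> list[str]:
    """Report names that are assigned but never read (module level only)."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)      # name is written to
            elif isinstance(node.ctx, ast.Load):
                used.add(node.id)          # name is read
    return sorted(assigned - used)

code = """
total = price * quantity
discount = 0.1        # declared but never used
print(total)
"""
print(find_unused_variables(code))  # ['discount']
```

Note that the defect is reported without executing `code` – the essence of static testing, as opposed to dynamic testing, which would need to run the program and observe a failure.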
Reviews range from informal to formal.

Factors on which the formality of the review depends:
▪ Software development lifecycle model
▪ Maturity of the development process
▪ Complexity of the work product to be reviewed
▪ Any legal or regulatory requirements
▪ Need for an audit trail

Review objectives (the focus of a review depends on the agreed objectives of the review):
▪ Finding defects
▪ Gaining understanding
▪ Generating discussion
▪ Decision-making by consensus
3.2 Review Process

[Diagram: the review process cycle – Planning, Initiate review, Individual preparation, Review meeting, Fixing and reporting.]
Main activities of the review process

Planning:
▪ Defining the scope (the purpose of the review, documents or parts of documents to review, the quality characteristics to be evaluated).

Initiate review (or kick-off):
▪ Distributing the work product (physically or by electronic means) and other material, such as issue log forms, checklists, and related work products.
Fixing and reporting:
▪ Creating defect reports for those findings that require changes;
▪ Fixing defects found (typically done by the author);
▪ Recording the updated status of defects, gathering metrics, and checking that exit criteria are met.
3.2.2 Roles and Responsibilities in a Formal Review

Author:
▪ Creates the work product under review.
▪ Fixes defects in the work product under review (if necessary).

Manager:
▪ Is responsible for review planning.
▪ Decides on the execution of reviews.
▪ Assigns staff, budget, and time.
▪ Monitors ongoing cost-effectiveness.
▪ Executes control decisions in the event of inadequate outcomes.

Facilitator (often called moderator):
▪ Ensures effective running of review meetings (when held).
▪ Mediates, if necessary, between the various points of view.
▪ Is often the person upon whom the success of the review depends.
Review leader:
▪ Takes overall responsibility for the review.
▪ Decides who will be involved and organizes when and where it will take place.

Reviewers:
▪ May be subject matter experts, persons working on the project, stakeholders with an interest in the work product, and/or individuals with specific technical or business backgrounds.
▪ Identify potential defects in the work product under review.
▪ May represent different perspectives (e.g., tester, programmer, user, operator, business analyst, usability expert, etc.).

Scribe (or recorder):
▪ Collates potential defects found during the individual review activity.
▪ Records new potential defects, open points, and decisions from the review meeting (when held).
(With the advent of tools to support the review process, especially the logging of defects, open points, and decisions, there is often no need for a scribe.)
In some review types, one person may play more than one role, and the actions associated with
each role may also vary based on the review type.
Possible problems when staffing reviews:
▪ The required persons are not available or do not have the required qualifications or technical skills.
▪ Inaccurate estimates during resource planning by management may result in time pressure.
3.2.3 Review Types

Reviews can be used for various purposes. One of the main objectives of a review is to uncover defects.
Informal review (least formal; e.g., buddy check, pairing, pair review)

Key characteristics:
▪ Main purpose: detecting potential defects.
▪ Not based on a formal (documented) process; a review meeting may not be held.
▪ May be performed by a colleague of the author (buddy check) or by more people.
▪ Results may be documented; use of checklists is optional.
▪ Very commonly used in Agile development.
Walkthrough
Walkthrough – a type of review in which an author leads members of the review through a work
product and the members ask questions and make comments about possible issues.
Key characteristics:
▪ Main purposes: find defects, improve the software product, consider alternative implementations, and
evaluate conformance to standards and specifications.
▪ Possible additional purposes: exchanging ideas about techniques or style variations, training of
participants, and achieving consensus.
▪ Individual preparation before the review meeting is optional.
▪ A scribe is mandatory.
Technical review

Key characteristics:
▪ Main purposes: gaining consensus, and detecting potential defects.
▪ Possible further purposes: evaluating quality and building confidence in the work product,
generating new ideas, motivating and enabling authors to improve future work products, and
considering alternative implementations.
▪ Reviewers should be technical peers of the author and technical experts in the same or other
disciplines.
▪ Review meeting is optional, ideally led by a trained facilitator (typically not the author).
Inspection (most formal)

Key characteristics:
▪ Main purposes: detecting potential defects, evaluating quality and building confidence in the work product, preventing future
similar defects through author learning and root cause analysis.
▪ Possible further purposes: motivating and enabling authors to improve future work products and the software
development process, achieving consensus.
▪ Follows a defined process with formally documented outputs, based on rules and checklists.
▪ Uses clearly defined roles and may include a dedicated reader (who reads the work product aloud during the review
meeting).
3.2.4 Review Techniques

A number of review techniques can be applied during the individual review activity, across the review types described above. The effectiveness of a technique may differ depending on the type of review used.
Ad hoc:
▪ No guidance for reviewers on how this task should be done;
▪ Reviewers often read the work product sequentially, identifying and documenting issues as they encounter them;
▪ The technique needs little preparation and depends on the reviewer's skills;
▪ Disadvantage: the technique is highly dependent on reviewer skills and may lead to many duplicate issues reported by different reviewers.

Checklist-based:
▪ Reviewers detect issues based on checklists that are distributed at review initiation;
▪ A checklist consists of a set of questions based on potential defects (which may be derived from experience);
▪ Checklists should be specific to the type of work product under review;
▪ Checklists should be maintained regularly to cover issue types missed in previous reviews;
▪ Advantage: systematic coverage of typical defect types;
▪ Care should be taken not to simply follow the checklist in individual reviewing, but also to look for defects outside the checklist.
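For textual work products, parts of a checklist can even be automated. The sketch below is purely illustrative (the checklist entries and patterns are hypothetical, not from the syllabus): each entry pairs a review question with a text pattern that hints at the potential defect it targets, and every hit is reported with its line number:

```python
import re

# Hypothetical checklist: each entry pairs a review question with a
# pattern that hints at the potential defect it targets.
CHECKLIST = [
    ("Are there unresolved placeholders?", re.compile(r"\bTBD\b|\bTODO\b")),
    ("Are requirements stated unambiguously?", re.compile(r"\b(should|might|could)\b", re.I)),
    ("Are open-ended lists avoided?", re.compile(r"\betc\.")),
]

def checklist_review(text: str) -> list[tuple[int, str]]:
    """Return (line number, checklist question) for every hit."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for question, pattern in CHECKLIST:
            if pattern.search(line):
                findings.append((lineno, question))
    return findings

spec = ("The system should respond quickly.\n"
        "Supported formats: PDF, DOCX, etc.\n"
        "Login page layout: TBD")
for lineno, question in checklist_review(spec):
    print(f"line {lineno}: {question}")
```

Such automation only supplements individual reviewing: as the last bullet above warns, reviewers must still look for defects the checklist does not cover.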
Scenarios and dry runs:
▪ Reviewers are provided with structured guidelines on how to read through the work product;
▪ Reviewers are supported in performing "dry runs" on the work product based on its expected usage;
▪ Scenarios give reviewers stronger guidelines on how to identify specific defect types than simple checklists;
▪ Reviewers should not be constrained to the documented scenarios, so as not to miss other defect types (e.g., missing features).

Role-based:
▪ In a role-based review, similar to perspective-based reading, reviewers take on different stakeholder viewpoints in individual reviewing;
▪ Roles include specific end user types (experienced, inexperienced, senior, child, etc.) and specific roles in the organization (user administrator, system administrator, performance tester, etc.).
Perspective-based:
▪ In perspective-based reading, similar to role-based reviewing, reviewers take on different stakeholder viewpoints (e.g., end user, designer, tester, operator) in individual reviewing;
▪ Reviewers attempt to use the work product under review to generate the product they would derive from it (e.g., a tester drafts acceptance tests from a requirements specification);
▪ Empirical studies have shown perspective-based reading to be the most effective general technique for reviewing requirements and technical work products;
▪ A key success factor is including and weighing the different stakeholder viewpoints appropriately, based on risks.
3.2.5 Success Factors for Reviews
Organizational success factors for reviews include:
▪ Each review has clear objectives, defined during review planning, and used as measurable exit criteria.
▪ Review types are applied which are suitable to achieve the objectives and are appropriate to the type and level of
software work products and participants.
▪ Any review techniques used, such as checklist-based or role-based reviewing, are suitable for effective defect
identification in the work product to be reviewed.
▪ Any checklists used address the main risks and are up to date.
▪ Large documents are written and reviewed in small chunks so that quality control is exercised by providing authors with
early and frequent feedback on defects.
▪ Management supports the review process (e.g., by incorporating adequate time for review activities in project schedules).
▪ The right people are involved to meet the review objectives, for example, people with different skill sets or perspectives,
who may use the document as a work input.
▪ Testers are seen as valued reviewers who contribute to the review and learn about the work product, which enables
them to prepare more effective tests, and to prepare those tests earlier.
▪ The meeting is well-managed so that participants consider it a valuable use of their time.
▪ The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants.
▪ Participants avoid body language and behaviors that might indicate boredom, exasperation, or hostility to other
participants.
▪ Adequate training is provided, especially for more formal review types such as inspections.