
CHAPTER TWO

Testing throughout the software development life cycle

Testing is not a stand-alone activity. It has its place within a software development
life cycle model and therefore the life cycle applied will largely determine how
testing is organized. There are many different forms of testing. Because several
disciplines, often with different interests, are involved in the development life cycle,
it is important to clearly understand and define the various test levels and types. This
chapter discusses the most commonly applied software development models, test
levels and test types. Maintenance can be seen as a specific instance of a development
process. The way maintenance influences the test process, levels and types and how
testing can be organized is described in the last section of this chapter.

2.1 SOFTWARE DEVELOPMENT LIFE CYCLE MODELS

SYLLABUS LEARNING OBJECTIVES FOR 2.1 SOFTWARE DEVELOPMENT LIFE CYCLE MODELS (K2)

FL-2.1.1 Explain the relationships between software development activities and test activities in the software development life cycle (K2)

FL-2.1.2 Identify reasons why software development life cycle models must be adapted to the context of project and product characteristics (K1)

In this section, we’ll discuss software development models and how testing fits into
them. We’ll discuss sequential models, focusing on the V-model approach rather
than the waterfall. We’ll discuss iterative and incremental models such as Rational
Unified Process (RUP), Scrum, Kanban and Spiral (or prototyping).
As we go through this section, watch for the Syllabus terms commercial off-the-
shelf (COTS), sequential development model, and test level. You will find these
keywords defined in the Glossary (and ISTQB website).
The development process adopted for a project will depend on the project aims and
goals. There are numerous development life cycles that have been developed in order
to achieve different required objectives. These life cycles range from lightweight and
fast methodologies, where time to market is of the essence, through to fully controlled
and documented methodologies. Each of these methodologies has its place in mod-
ern software development, and the most appropriate development process should be


applied to each project. The models specify the various stages of the process and the
order in which they are carried out.
The life cycle model that is adopted for a project will have a big impact on the
testing that is carried out. Testing does not exist in isolation; test activities are highly
related to software development activities. The life cycle model will define the what, where and when
of our planned testing, influence regression testing and largely determine which test
techniques to use. The way testing is organized must fit the development life cycle
or it will fail to deliver its benefit. If time to market is the key driver, then the testing
must be fast and efficient. If a fully documented software development life cycle, with
an audit trail of evidence, is required, the testing must be fully documented.
Whichever life cycle model is being used, there are several characteristics of good
testing:
● For every development activity there is a corresponding test activity.
● Each test level has test objectives specific to that level.
● The analysis and design of tests for a given test level should begin during the corresponding software development activity.
● Testers should participate in discussions to help define and refine requirements and design. They should also be involved in reviewing work products as soon as drafts are available in the software development cycle.
Recall from Chapter 1, testing Principle 3: ‘Early testing saves time and money’.
By starting testing activities as early as possible in the software development life
cycle, we find defects while they are still small green shoots (in requirements, for
example) before they have had a chance to grow into trees (in production). We also
prevent defects from occurring at all by being more aware of what should be tested
from the earliest point in whatever software development life cycle we are using.
We will look at two categories of software development life cycle models: sequen-
tial and iterative/incremental.

Sequential development models

Sequential development model: A type of development life cycle model in which a complete system is developed in a linear way of several discrete and successive phases with no overlap between them.

A sequential development model is one where the development activities happen in a prescribed sequence – at least that is the idea. The models assume a linear sequential flow of activities; the next phase is only supposed to start when the previous phase is complete. Berard [1993] said that developing software from requirements is like walking on water – it is easier if it is frozen. But practice does not conform to theory: the activities will overlap, and things will be discovered in a later phase that may invalidate assumptions made in previous phases (which were supposed to be finished).
The waterfall model (in Figure 2.1) was one of the earliest models to be designed. with no overlap
It has a natural timeline where tasks are executed in a sequential fashion. We start between them.
at the top of the waterfall with a feasibility study and flow down through the various
project tasks, finishing with implementation into the live environment. Design flows
through into development, which in turn flows into build, and finally on into test.
Different models have different levels; the figure shows one possible model. With all
waterfall models, however, testing tends to happen towards the end of the life cycle,
so defects are detected close to the live implementation date. With this model, it is
difficult to get feedback passed backwards up the waterfall and there are difficulties
if we need to carry out numerous iterations for a particular phase.

[FIGURE 2.1 Waterfall model: phases flow from an initial need, wish, policy or law through user requirements, system requirements, global design, detailed design and implementation to testing.]

The V-model was developed to address some of the problems experienced using the traditional waterfall approach. Defects were being found too late in the life cycle, as testing was not involved until the end of the project. Testing also added lead time due to its late involvement. The V-model provides guidance on how testing begins as early as possible in the life cycle. It also shows that testing is not only an execution-based activity. There are a variety of activities that need to be performed before the end of the coding phase. These activities should be carried out in parallel with development activities, and testers need to work with developers and business analysts so they can perform these activities and tasks, producing a set of test deliverables. The work products produced by the developers and business analysts during development are the basis of testing in one or more levels. By starting test design early, defects are often found in the test basis documents. A good practice is to have testers involved even earlier, during the review of the (draft) test basis documents. The V-model is a model that illustrates how testing activities (verification and validation) can be integrated into each phase of the life cycle. Within the V-model, validation testing takes place especially during the early stages, for example reviewing the user requirements, and late in the life cycle, for example during user acceptance testing.

Test level (test stage): A specific instantiation of a test process.

Although variants of the V-model exist, a common type of V-model uses four test levels. (See Section 2.2 for more on test levels.) The four test levels used, each with their own objectives, are:
● Component testing: searches for defects in and verifies the functioning of software components (for example modules, programs, objects, classes, etc.) that are separately testable.
● Integration testing: tests interfaces between components, interactions with different parts of a system such as an operating system, file system and hardware, or interfaces between systems.
● System testing: concerned with the behaviour of the whole system/product as defined by the scope of a development project or product. The main focus of system testing is verification against specified requirements.
● Acceptance testing: validation testing with respect to user needs, requirements and business processes conducted to determine whether or not to accept the system.
In practice, a V-model may have more, fewer or different levels of development
and testing, depending on the project and the software product. For example, there
may be component integration testing after component testing and system integration
testing after system testing. Test levels can be combined or reorganized depending on
the nature of the project or the system architecture. In the V-model, there may also
be overlapping of activities.
Sequential models aim to deliver all of the software at once, that is, the com-
plete set of features required by stakeholders or users, or the software may be
delivered in releases containing significant chunks of new functionality. However,
typically this may take months or even years of development even for a single
release.
Note that the types of work products mentioned in Figure 2.2 on the left side of the
V-model are just an illustration. In practice, they come under many different names.

[FIGURE 2.2 V-model: development work products on the left (need/wish/policy/law, user requirements, system requirements, global design, detailed design, implementation) are each paired on the right with test preparation and test execution for the corresponding level (acceptance test, system test, integration test, component test execution), leading to an operational system.]

Iterative and incremental development models


Not all life cycles are sequential. There are also iterative and incremental life
cycles where, instead of one large development timeline from beginning to end,
we cycle through a number of smaller self-contained life cycle phases for the same
project. As with the V-model, there are many variants of iterative and incremental
life cycles.

To better understand the meaning of these two terms, consider the following two
sequences of producing a painting.
● Incremental: complete one piece at a time (scheduling or staging strategy).
Each increment may be delivered to the customer.

Images provided by: Mark Fewster, Grove Software Testing Ltd

● Iterative: start with a rough product and refine it, iteratively (rework strategy).
Final version only delivered to the customer (although in practice, intermediate
versions may be delivered to selected customers to get feedback).

Images provided by: Mark Fewster, Grove Software Testing Ltd

In terms of developing software, a purely iterative model does not produce a working system until the final iteration. The incremental approach produces working versions of parts of the system early on and each of these can be released to the
customer. The advantage of this is that the customer can gain early benefit from using
the deliveries and, perhaps most importantly, the customers can give valuable feed-
back. This feedback will influence what is done in future increments. Most iterative
approaches also incorporate this feedback loop by delivering some (if not all) of the
(intermediate) products created by the iterations.
The painting analogy shown above is not a perfect representation for the iterative
approach. If the final product were to comprise 1,000 source code modules, you could
be forgiven for thinking that an iterative approach would have people starting the
first iteration by writing one line of code in each module and then have the second
and subsequent iterations each adding another line of code to each module until they
were completed. This is not the case.
In both iterative and incremental models, the features to be implemented are
grouped together (for example according to business priority or risk). In this way,
the focus is always on the most important of the outstanding features. The various
project phases, including their work products and activities, then occur for each group
of features. The phases may be done either sequentially or overlapping, and the iter-
ations or increments themselves may be sequential or overlapping.
An iterative development model is shown in Figure 2.3.

FIGURE 2.3 Iterative development model

Testing in incremental and iterative development


During project initiation, high-level test planning and test analysis occurs in paral-
lel with the project planning and business/requirements analysis. Any detailed test
planning, test analysis, test design and test implementation occurs at the beginning
of each iteration.
Test execution often involves overlapping test levels. Each test level begins as
early as possible and may continue after subsequent, higher test levels have started.
In an iterative or incremental life cycle, many of the same tasks will be performed
but their timing and extent may vary. For example, rather than being able to imple-
ment the entire test environment at the beginning of the project, it may be more
efficient to implement only the part needed for the current iteration. The testing tasks
may be undertaken in a different order and not necessarily sequentially. There are
likely to be fewer entry and exit criteria between activities compared with sequential
models. Also, much of the test planning and completion reporting are more likely
to occur at the start and end of the project, respectively, rather than at the start and
end of each iteration.
With any of the iterative or incremental life cycle models, the farther ahead the
planning occurs, the farther ahead the scope of the test process can extend.
Common issues with iterative and incremental models include:
● More regression testing.
● Defects outside the scope of the iteration or increment.
● Less thorough testing.
Because the system is being produced a bit at a time, at any given point there will be some part which is completed in some sense, either an increment or the work that was
done iteratively. This part will be tested and may be used by the customer or user to give
feedback. When the next increment or iteration is developed, this will also be tested, but it
is also important to do regression testing of the parts which have already been developed.
The more iterations or increments there are, the more regression testing will be needed
throughout development. (This type of testing is a good candidate for automation.)
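As an illustration, here is a minimal sketch of what such an automated regression suite might look like, written in pytest style. The calculate_discount function and its business rule are hypothetical stand-ins for functionality delivered in earlier increments.

```python
# test_pricing.py - a tiny regression suite; run with: pytest test_pricing.py

def calculate_discount(customer_type: str, order_total: float) -> float:
    # Stand-in for functionality delivered across two increments:
    # loyal customers get 10% off, everyone else pays full price.
    if customer_type == "loyal":
        return order_total / 10
    return 0.0

def test_standard_customer_gets_no_discount():
    # Delivered and tested in increment 1; re-run unchanged on every
    # later increment to confirm the old behaviour still holds.
    assert calculate_discount("standard", 100.0) == 0.0

def test_loyal_customer_gets_ten_percent_discount():
    # Added in increment 2; from then on it joins the regression suite too.
    assert calculate_discount("loyal", 100.0) == 10.0
```

Each increment adds new tests, and the whole suite is re-run on every subsequent increment, which is exactly why automation pays off as the number of increments grows.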
Defects that are found in the part that you are currently testing are dealt with in
the usual way, but what about defects found either by regression testing of previously
developed parts, or discovered by accident when testing a new part? These defects do
need to be logged and dealt with, but because they are outside the scope of the current
iteration/increment, they can sometimes fall between the cracks and be forgotten,
neglected or argued about.

There is a danger that the testing may be less thorough in incremental and iter-
ative life cycles, particularly regression testing of previously developed parts, and
especially if the regression testing is manual rather than automated. There is also a
danger that the testing is less formal, because we are dealing with smaller parts of
the system, and formality of the testing may seem like overkill for such a small thing.
Examples of iterative and incremental development models are Rational Unified
Process (RUP), Scrum, Kanban and Spiral (or prototyping).

Rational Unified Process (RUP)


RUP is a software development process framework from Rational Software, a division of IBM. It consists of four phases:
● Inception: the initial idea, planning (for example what resources would be
needed) and go/no-go decision for development, done with stakeholders.
● Elaboration: further detailed investigation into resources, architecture and costs.
● Construction: the software product is developed, including testing.
● Transition: the software product is released to customers, with modifications
based on user feedback.
This development process is tailorable for different contexts, and there are tools
(and services) to support it. One of the basic principles is that development is iterative,
with risk being the primary driver for decisions about the development. Another prin-
ciple (relevant to us) is that the evaluation of quality (including testing) is continuous
throughout development.
In RUP, the increments that are produced, although significantly smaller than what
is produced by sequential models, are larger than the increments produced by Agile
development (see Scrum below), and would typically take months rather than days
or weeks to complete. They might contain groups of related features, for example.

Scrum
Scrum is an iterative and incremental framework for effective team collabora-
tion, which is typically used in Agile development, the most well-known iterative
method. It (and Agile) is based on recognizing that change is inevitable and taking
a practical empirical approach to changing priorities. Work is broken down into
small units that can be completed in a fairly short time (days, a week or two or
even a month). The time-boxed iteration that delivers a unit of work is called a sprint. For software devel-
opment, a sprint includes all aspects of development for a particular feature or set
of small features, everything from requirements (typically user stories) through to
testing and (ideally) test automation.
The development teams are small (3 to 9 people) and cross-functional, that is, they
include people who perform various roles, and often individuals take on different
tasks (such as testing) within the team. The key roles are:
● The Product Owner represents the business, that is, stakeholders and end users.
● The Development team (which includes testers) makes its own decisions about
development, that is, they are self-organizing with a high level of autonomy.
● The Scrum Master helps the development team to do their work as efficiently
as possible, by interacting with other parts of the organization and dealing with
problems. The Scrum Master is not a manager, but a facilitator for the team.

A stand-up meeting of typically around 15 minutes is held each day, for example
first thing in the morning, to update everyone with progress from the previous day and
plan the work ahead. (This is where the term ‘scrum’ came from, as in the gathering
of a rugby team.)
At the start of a sprint, in the sprint planning meeting, some features are selected
to be implemented, with other features being put on a backlog. Acceptance criteria
apply to user stories, and are similar to test conditions, saying what needs to work for
the user story to be considered working. A definition of done can apply to a user story
(which includes but goes beyond satisfaction of the acceptance criteria), but also to unit
testing, system testing, iterations and releases. After a sprint completes, a retrospective
should be held to assess what went well and what could be improved for the next sprint.
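To make acceptance criteria concrete, here is a hedged sketch of how the criteria for a hypothetical user story might be expressed as automated pytest-style checks. The story, the apply_voucher function and the voucher codes are all invented for the example.

```python
# Hypothetical user story: "As a shopper, I can apply a voucher code at checkout."
# Each acceptance criterion becomes an executable check.

def apply_voucher(total: float, code: str) -> float:
    # Stand-in implementation so the sketch is self-contained:
    # the made-up SAVE5 code takes 5.00 off the basket total.
    return total - 5.0 if code == "SAVE5" else total

def test_valid_voucher_reduces_total():
    # Criterion 1: a valid code reduces the total by the voucher amount.
    assert apply_voucher(20.0, "SAVE5") == 15.0

def test_unknown_voucher_leaves_total_unchanged():
    # Criterion 2: an unknown code leaves the total unchanged.
    assert apply_voucher(20.0, "BOGUS") == 20.0
```

When every criterion's check passes, and the rest of the definition of done is satisfied, the team can consider the user story done.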
Because development is limited to the sprint duration, which is time-boxed, the flexibility lies in choosing what can be developed in the available time. Compare that to sequential
models, where all the features are selected first and the time taken to develop all of
them is based on that. Thus Scrum (and Agile) enable us to deliver the greatest value
soonest, an approach first proposed in the 1980s.
Because the iterations are short, the increments are small, such as a few small
features or even a few enhancements or bug fixes.

Kanban
Kanban came from an approach to work in manufacturing at Toyota. It is a way of
visualizing work and workflow. A Kanban board has columns for different stages of
work, from initial idea through development and testing stages to final delivery to
users. The tasks are put on sticky notes which are moved from left to right through
the columns (like an assembly line for cars).
A key principle of Kanban is to have a limit for work-in-progress activities. If we
concentrate on one task, we are much more efficient at doing it, so this approach
is less wasteful than trying to do little bits of lots of different tasks. This focus on
eliminating waste makes this a lean approach.
There is also a strong focus on user and customer needs. Iterations can be a fixed
length to deliver a single feature or enhancement, or features can be grouped together
for delivery. Kanban can span more than one team’s work (as opposed to Scrum).
If user stories are grouped by feature, work may span more than one column on the
Kanban board, sometimes referred to as swim lanes.
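The work-in-progress limit is the heart of the mechanism, and a toy model makes it easy to see. In the sketch below (column name, limit and card identifiers are all invented), a column simply refuses to accept a new card once its limit is reached.

```python
# A toy model of one Kanban column with a work-in-progress (WIP) limit.

class KanbanColumn:
    def __init__(self, name: str, wip_limit: int) -> None:
        self.name = name
        self.wip_limit = wip_limit
        self.cards: list[str] = []

    def pull(self, card: str) -> bool:
        # A card may only enter the column while it is under its WIP limit;
        # otherwise the team must finish something before starting more work.
        if len(self.cards) >= self.wip_limit:
            return False
        self.cards.append(card)
        return True

testing = KanbanColumn("Testing", wip_limit=2)
assert testing.pull("story-101") is True
assert testing.pull("story-102") is True
assert testing.pull("story-103") is False  # blocked: WIP limit reached
```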

Spiral (or prototyping)


The Spiral model, initially proposed by Boehm [1996], is based on risk. There are
four steps: determine objectives, identify risks and alternatives, develop and test,
and plan the next iteration. Prototypes may be developed as a way of addressing
risks; these prototypes may be kept and incorporated into later cycles (an incremen-
tal approach), they might be discarded (a throw-away prototype), or they may be
re-worked as part of the next cycle.
The diagram of the Spiral model shows development starting small from a centre,
and moving in a circular way clockwise through the four stages. Each succeeding
cycle builds outwards in a spiral through the phases, developing more functionality
each time. The key characteristic of the Spiral model is that it is explicitly risk-driven.

Agile development
In this section, we will describe what Agile development is and then cover the
changes that this way of working brings to testing. This is additional to what you
need to know for the exam, as the Syllabus does not specifically cover Agile devel-
opment (the ISTQB Foundation Level Agile Tester Extension Syllabus covers this),
but we hope this will give you useful background, especially if you are not familiar
with it.
Agile software development is a group of software development methodologies
based on iterative and incremental development, where requirements and solutions
evolve through collaboration between self-organizing cross-functional teams. Most
Agile teams use Scrum, as described above. Typical Agile teams are 5 to 9 people,
and the Agile manifesto describes ways of working that are ideal for small teams, and
that counteract problems prevalent in the late 1990s, when development had a heavy emphasis on process and
documentation. The Agile manifesto consists of four statements describing what is
valued in this way of working:
● individuals and interactions over processes and tools
● working software over comprehensive documentation
● customer collaboration over contract negotiation
● responding to change over following a plan.
While there are several Agile methodologies in practice, the industry seems to
have settled on the use of Scrum as an Agile management approach, and Extreme
Programming (XP) as the main source of Agile development ideas. Some character-
istics of project teams using Scrum and XP are:
● The generation of business stories (a form of lightweight use cases) to define the
functionality, rather than highly detailed requirements specifications.
● The incorporation of business representatives into the development process, as
part of each iteration (called a sprint and typically lasting 2 to 4 weeks), providing continual feedback, and defining and carrying out functional acceptance testing.
● The recognition that we cannot know the future, so changes to requirements
are welcomed throughout the development process, as this approach can pro-
duce a product that better meets the stakeholders’ needs as their knowledge
grows over time.
● The concept of shared code ownership among the developers, and the close
inclusion of testers in the sprint teams.
● The writing of tests as the first step in the development of a component, and
the automation of those tests before any code is written. The component is
complete when it then passes the automated tests. This is known as test-driven
development.
● Simplicity: building only what is necessary, not everything you can think of.
● The continuous integration and testing of the code throughout the sprint, at least
once a day.
Proponents of the Scrum and XP approaches emphasize testing throughout the
process. Each iteration (sprint) culminates in a short period of testing, often with an
independent tester as well as a business representative. Developers are to write and
run test cases for their code, and leading practitioners use tools to automate those
tests and to measure structural coverage of the tests (see Chapters 4 and 6). Every
time a change is made in the code, the component is tested and then integrated with
the existing code, which is then tested using the full set of automated component
test cases. This gives continuous integration, by which we mean that changes are
incorporated continuously into the software build.
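What a continuous integration step does on each change can be sketched in a few lines. The script below is illustrative only, not any particular CI product's syntax, and the file name app.py and the tests/ directory are assumptions.

```python
# ci_check.py - a bare-bones stand-in for a continuous integration step:
# on every change, compile the code and run the automated tests, failing fast.
import subprocess
import sys

def run_step(name: str, command: list[str]) -> None:
    print(f"CI step: {name}")
    if subprocess.run(command).returncode != 0:
        # A failing step rejects the change before it reaches the build.
        sys.exit(f"{name} failed - fix before integrating")

run_step("compile check", [sys.executable, "-m", "py_compile", "app.py"])
run_step("automated tests", [sys.executable, "-m", "pytest", "tests/"])
print("All steps passed - change integrated into the build")
```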
Agile development provides both benefits and challenges for testers. Some of the
benefits are:
● The focus on working software and good quality code.
● The inclusion of testing as part of, and the starting point of, software development (test-driven development).
● Accessibility of business stakeholders to help testers resolve questions about expected behaviour of the system.
● Self-organizing teams, where the whole team is responsible for quality and testers are given more autonomy in their work.
● Simplicity of design that should be easier to test.
There are also some significant challenges for testers when moving to an Agile
development approach:
● Testers who are used to working with well-documented requirements will be designing tests from a different kind of test basis: less formal and subject to change. The manifesto does not say that documentation is no longer necessary or that it has no value, but it is often interpreted that way.
● Because developers are doing more component testing, there may be a perception that testers are not needed. But component testing and confirmation-based acceptance testing by business representatives alone may miss major problems. System testing, with its wider perspective and emphasis on non-functional testing as well as end-to-end functional testing, is needed, even if it does not fit comfortably into a sprint.
● The tester's role is different: since there is less documentation and more personal interaction within an Agile team, testers need to adapt to this style of working, and this can be difficult for some testers. Testers may be acting more as coaches in testing to both stakeholders and developers, who may not have a lot of testing knowledge.
● Although there is less to test in one iteration than a whole system, there is also constant time pressure and less time to think about the testing for the new features.
● Because each increment is adding to an existing working system, regression testing becomes extremely important, and automation becomes more beneficial. However, simply reusing existing automated component or component integration tests may not make an adequate regression suite.
Software engineering teams are still learning how to apply Agile approaches.
Agile approaches cannot be applied to all projects or products, and some testing
challenges remain to be surmounted with respect to Agile development. However,
Agile methodologies are showing promising results in terms of both development
efficiency and quality of the delivered code.
More information about testing in Agile development and iterative incremental
models can be found in books by Black [2017], Crispin and Gregory [2008] and
Gregory and Crispin [2015]. There is also an ISTQB certificate for the Foundation
Level Agile Tester Extension.
2.1.2 Software development life cycle models in context


As with many aspects of development and testing, there is no one correct or best life
cycle model for every situation. Every project and product is different from others,
so it is important to choose a development model that is suitable for your own situ-
ation or context. The most suitable development model for you may be based on the
following:
● the project goal
● the type of product being developed
● business priorities (for example time to market)
● identified product and project risks.
The development of an internal admin system is not the same as a safety-critical system such as flight control software for aircraft or braking systems for cars. These types of development need different life cycle models in order to succeed. The internal admin system may be developed very informally, with different features delivered incrementally. The development (and testing) of safety-critical systems needs to be far more rigorous and may be subject to legal contracts and regulatory requirements, so a sequential life cycle model may be more appropriate.
It is also important to consider organizational and cultural context. If you
want to use Scrum for example, good communication between team members
is critical.
The context also determines the test levels and/or test activities that are appropriate for a project, and they may be combined or reorganized.

Commercial off-the-shelf (COTS) (off-the-shelf software): A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

For the integration of a commercial off-the-shelf (COTS) software product into a system, for example, a purchaser may perform acceptance testing focused on functionality and other attributes (for example integration to the infrastructure and other systems), followed by a system integration test. The acceptance testing can include testing of system functions, but also testing of quality attributes such as performance and other non-functional tests. The testing may be done from the perspective of the end user and may also be done from an operations point of view.

The context within an organization also determines the most suitable life cycle model. For example, the V-model may be used to develop back office systems, so that all new features are integrated and tested before everyone updates to the new system.
At the same time, Agile development may be used for the UI (User Interface) to the
website, and a Spiral model may be used to develop a new app.
Internet of Things (IoT) systems present special challenges for testing (as well
as for development). Many different objects need to be integrated together and
tested in realistic conditions, but each object or device may be developed in a dif-
ferent way with a different life cycle model. There is also more emphasis on the
later stages of the development life cycle, after objects are actually in use. There
may be extensive updates needed to different devices and supporting software
systems once users begin using the systems for real. There may also be unfore-
seen security issues that might require updates. There may even be issues when
trying to decommission devices or software for IoT systems. Changing contexts
always have an influence on testing, and in our technological world, constant
change is definitely the norm, so testing (and development) always needs to adapt
to its context.
2.2 TEST LEVELS

SYLLABUS LEARNING OBJECTIVES FOR 2.2 TEST LEVELS (K2)

FL-2.2.1 Compare the different test levels from the perspective of objectives, test basis, test objects, typical defects and failures, and approaches and responsibilities (K2)

We have mentioned (and given the definition for) test levels in Section 2.1. The defi-
nition of ‘test level’ is ‘an instance of the test process’, which is not necessarily the
most helpful. The Syllabus here describes test levels as:
groups of test activities that are organized and managed together
The test activities were described in Chapter 1, Section 1.4 (test planning through
to test completion). When we talk about test levels, we are looking at those activi-
ties performed with reference to development levels (such as those described in the
V-model) from components to systems, or even systems of systems.
In this section, we’ll look in more detail at the various test levels and show how they are related to other activities within the software development life cycle. The key characteristics for each test level are discussed and defined, to be able to more clearly separate the various test levels. A thorough understanding and definition of the various test levels will identify missing areas and prevent overlap and repetition. Sometimes we may wish to introduce deliberate overlap to address specific risks. Understanding whether we want overlaps and removing the gaps will make the test levels more complementary, leading to more effective and efficient testing. We will look at four test levels in this section.

As we go through this section, watch for the Syllabus terms acceptance testing, alpha testing, beta testing, component integration testing, component testing, contractual acceptance testing, integration testing, operational acceptance testing, regulatory acceptance testing, system integration testing, system testing, test basis, test case, test environment, test object, test objective and user acceptance testing. These terms are also defined in the Glossary.

Test objective: A reason or purpose for designing and executing a test.

Test basis: The body of knowledge used as the basis for test analysis and design.

Test case: A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.

Test object: The component or system to be tested. See also: test item.

Test environment (test bed, test rig): An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.

While the specific test levels required for – and planned for – a particular project can vary, good practice in testing suggests that each test level has the following clearly identified:

● Specific test objectives for the test level.
● The test basis, the work product(s) used to derive the test conditions and test cases.
● The test object (that is, what is being tested such as an item, build, feature or system under test).
● The typical defects and failures that we are looking for at this test level.
● Specific approaches and responsibilities for this test level.

One additional aspect is that each test level needs a test environment. Sometimes an environment can be shared by more than one test level; in other situations, a particular environment is needed. For example, acceptance testing should have a test environment that is as similar to production as is possible or feasible.
In component testing, developers often just use their development environment. In system testing, an environment may be needed with particular external connections, for example.
When these topics are clearly understood and defined for the entire project team,
this contributes to the success of the project. In addition, during test planning, the
managers responsible for the test levels should consider how they intend to test a
system’s configuration, if such data is part of a system.

2.2.1 Component testing


Component testing (module testing, unit testing): The testing of individual hardware or software components.

Component testing, also known as unit or module testing, searches for defects in, and verifies the functioning of, software items (for example modules, programs, objects, classes, etc.) that are separately testable.

Component tests are typically based on the requirements and detailed design specifications applicable to the component under test, as well as the code itself (which we’ll discuss in Chapter 4 when we talk about white-box testing).
The component under test, the test object, includes the individual components, the
data conversion and migration programs used to enable the new release, and database
tables, joins, views, modules, procedures, referential integrity and field constraints,
and even whole databases.

Component testing: objectives


The different test levels have different objectives. The objectives of component test-
ing include:
● Reducing risk (for example by testing high-risk components more extensively).
● Verifying whether or not functional and non-functional behaviours of the com-
ponent are as they should be (as designed and specified).
● Building confidence in the quality of the component: this may include measur-
ing structural coverage of the tests, giving confidence that the component has
been tested as thoroughly as was planned.
● Finding defects in the component.
● Preventing defects from escaping to later testing.
In incremental and iterative development (for example Agile), automated com-
ponent regression tests are run frequently, to give confidence that new additions or
changes to a component have not caused existing components or links to break.
Component testing may be done in isolation from the rest of the system depend-
ing on the context of the development life cycle and the system. Most often, mock
objects or stubs and drivers are used to replace the missing software and simulate
the interface between the software components in a simple manner. A stub or mock
object is called from the software component to be tested; a driver calls a compo-
nent to be tested (see Figure 2.4). Test harnesses may also be used to provide similar
functionality, and service virtualization can give cloud-based functionality to test
components in realistic environments.
Component testing may include testing of functionality (for example are the calculations correct) and specific non-functional characteristics such as resource behaviour (for example memory leaks), performance testing (for example do calculations complete quickly enough), as well as structural testing. Test cases are derived from work products such as the software design or the data model.
[FIGURE 2.4 Stubs and drivers: component A normally calls component B. To test A in isolation, a stub replaces B; to test B in isolation, a driver replaces A.]
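The following sketch shows Figure 2.4 in code, using invented component names: a currency-conversion component B is tested in isolation, with a stub standing in for the rate-lookup component it calls, and a test function acting as the driver in place of component A.

```python
# Testing component B from Figure 2.4 in isolation (names are illustrative).

# Component B under test: normally called by component A (which may not be
# ready yet), and it in turn calls a lower-level rate-lookup component.
def convert(amount: float, currency: str, rate_source) -> float:
    return amount * rate_source(currency)

# Stub: called FROM the component under test, returning a canned answer
# instead of reaching a real exchange-rate service.
def stub_exchange_rate(currency: str) -> float:
    return 1.25

# Driver: takes the place of component A and CALLS the component under test.
def test_convert_multiplies_amount_by_rate():
    assert convert(10.0, "USD", rate_source=stub_exchange_rate) == 12.5
```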

Component testing: test basis


What is this particular component supposed to do? Examples of work products that
can be used as a test basis for component testing include:
● detailed design
● code
● data model
● component specifications (if available).

Component testing: test objects


What are we actually testing at this level? We could say the smallest thing that can
be sensibly tested on its own. Typical test objects for component testing include:
●● components themselves, units or modules
●● code and data structures
●● classes
●● database models.

Component testing: typical defects and failures


Examples of defects and failures that can typically be revealed by component testing
include:
●● incorrect functionality (for example not as described in a design specification)
●● data flow problems
●● incorrect code or logic.
At the end of the description of all the test levels, see Table 2.1 which summarizes
the characteristics of each test level.

Component testing: specific approaches and responsibilities


Typically, component testing occurs with access to the code being tested and with the support of the development environment, such as a unit test framework or debugging tool. In practice it usually involves the developer who wrote the code. The developer may change between writing code and testing it. Sometimes, depending on the applicable level of risk, component testing is carried out by a different developer, introducing independence. Defects are typically fixed as soon as they are found, without formally recording them in a defect management tool. Of course, if such defects are recorded, this can provide useful information for root cause analysis.
One approach in component testing, initially developed in Extreme Programming (XP), is to prepare and automate test cases before coding. This is called a test-first
approach or test-driven development (TDD). This approach is highly iterative and is
based on cycles of developing automated tests, then building and integrating small
pieces of code, and executing the component tests until they pass, and is typically
done in Agile development. The idea is that the first thing the developer does is to
write some automated tests for the component. Of course if these are run now, they
will fail because no code is there! Then just enough code is written until those tests
pass. This may involve fixing defects now found by the tests and re-factoring the
code. (This approach also helps to build only what is needed rather than a lot of
functionality that is not really wanted.)
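A minimal sketch of this test-first rhythm, using an invented leap-year example in pytest style: the tests are written (and fail) before the code exists, then just enough code is written to make them pass.

```python
# Step 1 (red): write the automated tests first - at this point they fail,
# because is_leap_year does not exist yet.
def test_ordinary_year_divisible_by_4_is_leap():
    assert is_leap_year(2024) is True

def test_century_not_divisible_by_400_is_not_leap():
    assert is_leap_year(1900) is False

def test_century_divisible_by_400_is_leap():
    assert is_leap_year(2000) is True

# Step 2 (green): write just enough code to make the tests pass, then refactor.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```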

2.2.2 Integration testing


Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

Component integration testing (link testing): Testing performed to expose defects in the interfaces and interactions between integrated components.

System integration testing: Testing the combination and interaction of systems.

Integration testing tests interfaces between components and interactions of different parts of a system such as an operating system, file system and hardware, or interfaces between systems. Integration tests are typically based on the software and system design (both high-level and low-level), the system architecture (especially the relationships between components or objects) and the workflows or use cases by which the stakeholders will employ the system.

There may be more than one level of integration testing and it may be carried out on test objects of varying size. For example:

● Component integration testing tests the interactions between software components and is done after component testing. It is a good candidate for automation. In iterative and incremental development, both component tests and integration tests are usually part of continuous integration, which may involve automated build, test and release to end users or to a next level. At least, this is the theory. In practice, component integration testing may not be done at all, or misunderstood and as a consequence not done well.
● System integration testing tests the interactions between different systems, packages and microservices, and may be done after system testing. System integration testing may also test interfaces to and provided by external organizations (such as web services). In this case, the developing organization may control only one side of the interface, resulting in a number of problems: changes may be destabilizing, defects in the external organization’s software may block progress in the testing, or special test environments may be needed. Business processes implemented as workflows may involve a series of systems that can even run on different platforms. System integration testing may be done in parallel with other testing activities.

Integration testing: objectives


The objectives of integration testing include:
● Reducing risk, for example by testing high-risk integrations first.
● Verifying whether or not functional and non-functional behaviours of the interfaces are as designed and specified.
● Finding defects in the interfaces themselves or in the components or systems being tested together.
● Preventing defects from escaping to later testing.
Automated integration regression tests (such as in continuous integration) pro-
vide confidence that changes have not broken existing interfaces, components or
systems.

Integration testing: test basis


How are these components or systems supposed to work together and communicate?
Examples of work products that can be used as a test basis for integration testing
include:
● software and system design
● sequence diagrams
● interface and communication protocol specifications
● use cases
● architecture at component or system level
● workflows
● external interface definitions.

Integration testing: test objects


What are we actually testing at this level? The emphasis here is on testing things together with others that have already been tested individually. We are interested in how things work together and how they interact. Typical test objects for integration testing include:
● subsystems
● databases
● infrastructure
● interfaces
● APIs (Application Programming Interfaces)
● microservices.

Integration testing: typical defects and failures


Examples of defects and failures that can typically be revealed by component inte-
gration testing include:
● Incorrect data, missing data or incorrect data encoding.
● Incorrect sequencing or timing of interface calls.
● Interface mismatch, for example where one side sends a parameter whose value exceeds 1,000, but the other side only expects values up to 1,000.
● Failures in communication between components.
● Unhandled or improperly handled communication failures between components.
● Incorrect assumptions about the meaning, units or boundaries of the data being passed between components (see the sketch after this list).
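The last kind of defect is worth a concrete sketch. In the hypothetical example below, component A reports money in pence while component B assumes pounds; a component integration test exercising the hand-off catches the mismatch.

```python
# Hypothetical unit mismatch: component A reports money in pence,
# while component B formats an amount it assumes to be in pounds.

def get_order_total_pence() -> int:             # component A's interface
    return 1999                                 # i.e. 19.99 pounds

def format_invoice_total(pounds: float) -> str: # component B's interface
    return f"Total: {pounds:.2f} GBP"

def test_invoice_shows_order_total_in_pounds():
    # The component integration test exercises the hand-off from A to B.
    # If the integrating code forgot the pence-to-pounds conversion,
    # B would produce "Total: 1999.00 GBP" and this assertion would fail.
    total_line = format_invoice_total(get_order_total_pence() / 100)
    assert total_line == "Total: 19.99 GBP"
```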
Examples of defects and failures that can typically be revealed by system integra-
tion testing include:
● inconsistent message structures between systems
● incorrect data, missing data or incorrect data encoding
● interface mismatch
● failures in communication between systems
● unhandled or improperly handled communication failures between systems
● incorrect assumptions about the meaning, units or boundaries of the data being
passed between systems
● failure to comply with mandatory security regulations.
At the end of the description of all the test levels, see Table 2.1 which summarizes
the characteristics of each test level.

Integration testing: specific approaches and responsibilities


The greater the scope of integration, the more difficult it becomes to isolate failures
to a specific interface, which may lead to an increased risk. This leads to varying
approaches to integration testing. One extreme is that all components or systems
are integrated simultaneously, after which everything is tested as a whole. This is
called big-bang integration. Big-bang integration has the advantage that everything
is finished before integration testing starts. There is no need to simulate (as yet
unfinished) parts. The major disadvantage is that in general it is time-consuming
and difficult to trace the cause of failures with this late integration. So big-bang
integration may seem like a good idea when planning the project, being optimistic
and expecting to find no problems. If one thinks integration testing will find defects,
it is a good practice to consider whether time might be saved by breaking down the
integration test process.
Another extreme is that all programs are integrated one by one, and tests are
carried out after each step (incremental testing). Between these two extremes, there
is a range of variants. The incremental approach has the advantage that the defects
are found early in a smaller assembly when it is relatively easy to detect the cause. A
disadvantage is that it can be time-consuming, since mock objects or stubs and drivers
may have to be developed and used in the test. Within incremental integration testing
a range of possibilities exist, partly depending on the system architecture:
● Top-down: testing starts from the top and works to the bottom, following the
control flow or architectural structure (for example starting from the GUI or
main menu). Components or systems are substituted by stubs.
● Bottom-up: testing reverses this approach, starting from the bottom of the con-
trol flow upwards. Components or systems are substituted by drivers.
● Functional incremental: integration and testing takes place on the basis of the
functions or functionality, as documented in the functional specification.
The preferred integration sequence and the number of integration steps required
depend on the location in the architecture of the high-risk interfaces. The best
choice is to start integration with those interfaces that are expected to cause the most
­problems. Doing so prevents major defects at the end of the integration test stage.
In order to reduce the risk of late defect discovery, integration should normally be
incremental rather than big bang. Ideally, testers should understand the architecture
and influence integration planning. If integration tests are planned before components
or systems are built, they can be developed in the order required for most efficient
testing. A risk analysis of the most complex interfaces can help to focus integration
testing. In iterative and incremental development, integration is also incremental.
Existing integration tests should be part of the regression tests used in continuous
integration. Continuous integration has major benefits because of its iterative nature.
At each stage of integration, testers concentrate solely on the integration itself. For
example, if they are integrating component A with component B they are interested
in testing the communication between the components, not the functionality of either
one. In integrating system X with system Y, again the focus is on the communication
between the systems and what can be done by both systems together, rather than
defects in the individual systems. Both functional and structural approaches may be
used. Testing of specific non-functional characteristics (for example performance)
may also be included in integration testing.
Component integration testing is often carried out by developers; system integra-
tion testing is generally the responsibility of the testers. Either type of integration
testing could be done by a separate team of specialist integration testers, or by a
specialist group of developers/integrators, including non-functional specialists. The
testers performing the system integration testing need to understand the system archi-
tecture. Ideally, they should have had an influence on the development, integration
planning and integration testing.

2.2.3 System testing


System testing: Testing an integrated system to verify that it meets specified requirements. (Note that the ISTQB definition implies that system testing is only about verification of specified requirements. In practice, system testing is often also about validation that the system is suitable for its intended users, as well as verifying against any type of requirement.)

System testing is concerned with the behaviour of the whole system/product as defined by the scope of a development project or product. It may include tests based on risk analysis reports, system, functional or software requirements specifications, business processes, use cases or other high-level descriptions of system behaviour, interactions with the operating system and system resources. The focus is on end-to-end tasks that the system should perform, including non-functional aspects, such as performance. In some systems, the quality of the data may be of critical importance, so there would be a focus on data quality. System level tests may be automated to provide a regression suite to ensure that changes have not adversely affected existing system functionality. Stakeholders may use the information from system testing to decide whether the system is ready for user acceptance testing, for example. System testing is also where conformance to legal or regulatory requirements or to external standards is tested.

The test environment is important for system testing; it should correspond to the final production environment as much as possible.

System testing: objectives
The objectives of system testing include:
● reducing risk
● verifying whether or not functional and non-functional behaviours of the system are as they should be (as specified)
● validating that the system is complete and will work as it should and as expected
● building confidence in the quality of the system as a whole
● finding defects
● preventing defects from escaping to later testing or to production.
System testing: test basis


What should the system as a whole be able to do? Examples of work products that
can be used as a test basis for system testing include:
● software and system requirement specifications (functional and non-functional)
● risk analysis reports
● use cases
● epics and user stories
● models of system behaviour
● state diagrams
● system and user manuals.

System testing: test objects


What are we actually testing at this level? The emphasis here is in testing the whole system, from end to end, encompassing everything that the system needs to do (and how well it should do it, so non-functional aspects are also tested here). Typical test objects for system testing include:
● applications
● hardware/software systems
● operating systems
● system under test (SUT)
● system configuration and configuration data.

System testing: typical defects and failures


Examples of defects and failures that can typically be revealed by system testing
include:
● incorrect calculations
● incorrect or unexpected system functional or non-functional behaviour
● incorrect control and/or data flows within the system
● failure to properly and completely carry out end-to-end functional tasks
● failure of the system to work properly in the production environment(s)
● failure of the system to work as described in system and user manuals.
At the end of the description of all the test levels, see Table 2.1 which summarizes
the characteristics of each test level.

System testing: specific approaches and responsibilities


System testing is most often the final test on behalf of development to verify that the system to be delivered meets the specification and to validate that it meets expectations; one of its purposes is to find as many defects as possible. Most often it is carried out by specialist testers that form a dedicated, and sometimes independent, test team within development, reporting to the development manager or project manager. In some organizations system testing is carried out by a third-party team or by business analysts. Again, the required level of independence is based on the applicable risk level and this will have a high influence on the way system testing is organized.
System testing should investigate end-to-end behaviour of both functional and non-functional aspects of the system. An end-to-end test may include all of the steps in a typical transaction, from logging on, accessing data, placing an order, etc. through to logging off and checking order status in a database. Typical non-functional tests include performance, security and reliability. Testers may also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate black-box techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. White-box techniques may also be used to assess the thoroughness of testing elements such as menu dialogue structure or web page navigation (see Chapter 4 for more on test techniques).
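A compressed sketch of such an end-to-end test is shown below. The ShopSystem class and its methods are invented stand-ins; in a real system test the same steps would drive the deployed application through its user or programming interfaces rather than an in-memory object.

```python
# A compressed end-to-end system test for a hypothetical shop system:
# log on, place an order, then check the recorded order status.

class ShopSystem:
    def __init__(self) -> None:
        self.orders: dict[int, dict] = {}
        self.logged_on = False

    def log_on(self, user: str, password: str) -> None:
        self.logged_on = (user == "alice" and password == "secret")

    def place_order(self, item: str) -> int:
        assert self.logged_on, "must be logged on first"
        order_id = len(self.orders) + 1
        self.orders[order_id] = {"item": item, "status": "ACCEPTED"}
        return order_id

def test_end_to_end_order_transaction():
    shop = ShopSystem()
    shop.log_on("alice", "secret")        # step 1: log on
    order_id = shop.place_order("book")   # step 2: place an order
    # Final step: check the order status recorded in the 'database'.
    assert shop.orders[order_id]["status"] == "ACCEPTED"
```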
System testing requires a controlled test environment with regard to, among
other things, control of the software versions, testware and the test data (see Chap-
ter 5 for more on configuration management). A system test is executed by the
development organization in a (properly controlled) environment. The test envi-
ronment should correspond to the final target or production environment as much
as possible in order to minimize the risk of environment-specific failures not being
found by testing.
System testing is often carried out by independent testers, for example an internal
test team or external testing specialists. However, if testers are only brought in when
system test execution is about to start, you will miss a lot of opportunities to save
time and money, as well as aggravation. If there are defects in specifications, such
as missing functions or incorrect descriptions of business processes, these may not
be picked up before the system is built. Because many defects result from misunder-
standings, the discussions (indeed arguments) about them tend to be worse the later
they are discovered. The developers will defend their understanding because that is
what they have built. The independent testers or end-users may realize that what was
built was not what was wanted. This situation can lead to defects being missed in
testing (if they are based on wrong specifications) or things being reported as defects
that actually are not (due to misunderstandings). These are known as false negative
and false positive results respectively. Referring back to testing Principle 3, early test
involvement saves time and money, so have testers involved in user story refinement
and static testing such as reviews.

2.2.4 Acceptance testing

Acceptance testing: Formal testing with respect to user needs, requirements, and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system.

When the development organization has performed its system test (and possibly also
system integration tests) and has corrected all or most defects, the system may be
delivered for acceptance testing. Acceptance tests typically produce information
to assess the system's readiness for release or deployment to end-users or customers.
Although defects are found at this level, that is not the main aim of acceptance
testing. (If lots of defects are found at this late stage, there are serious problems
with the whole system, and major project risks.) The focus is on validation, the use
of the system for real, how suitable the system is to be put into production or actual
use by its intended users. Regulatory and legal requirements, and conformance to
standards, may also be checked in acceptance testing, although they should also have
been addressed in an earlier level of testing, so that the acceptance test is confirming
compliance to the standards.

Acceptance testing: objectives


The objectives of acceptance testing include:
● establishing confidence in the quality of the system as a whole
● validating that the system is complete and will work as expected
● verifying that functional and non-functional behaviours of the system are as
specified.

Different forms of acceptance testing


Acceptance testing is quite a broad category, and it comes in several different
flavours or forms. We will look at four of these.

User acceptance testing (UAT)

User acceptance testing: Acceptance testing conducted in a real or simulated
operational environment by intended users focusing on their needs, requirements
and business processes.

User acceptance testing is exactly what it says. It is acceptance testing done by (or on
behalf of) users, that is, end-users. The focus is on making sure that the system is really
fit for purpose and ready to be used by real intended users of the system. The UAT
can be done in the real environment or in a simulated operational environment (but as
realistic as possible). The aim of testing here is to build confidence that the system will
indeed enable the users to do what they need to do in an efficient way. The system needs
to fulfil the requirements and meet their needs. The users focus on their business
processes, which they should be able to perform with a minimum of difficulty, cost and risk.

Operational acceptance testing (OAT)

Operational acceptance testing (production acceptance testing): Operational
testing in the acceptance test phase, typically performed in a (simulated) operational
environment by operations and/or systems administration staff focusing on
operational aspects, for example recoverability, resource-behaviour, installability
and technical compliance.

Operational acceptance testing focuses on operations and may be performed by
system administrators. The main purpose is to give confidence to the system
administrators or operators that they will be able to keep the system running, and recover
from adverse events quickly and without additional risk. It is normally performed in
a simulated production environment and is looking at operational aspects, such as:

● testing of backups and restoration of backups
● installing, uninstalling and upgrading
● disaster recovery
● user management
● maintenance tasks
● data loading and migration tasks
● checking for security vulnerabilities (for example ethical hacking)
● performance and load testing.

Contractual and regulatory acceptance testing

Contractual acceptance testing: Acceptance testing conducted to verify whether a
system satisfies its contractual requirements.

Regulatory acceptance testing: Acceptance testing conducted to verify whether a
system conforms to relevant laws, policies and regulations.

If a system has been custom-developed for another company, there is normally a legal
contract describing the responsibilities, schedule and costs of the project. The
contract should also include or refer to acceptance criteria for the system, which should
have been defined and agreed when the contract was first taken out. Having agreed
the acceptance criteria in advance, contractual acceptance testing is focused on
whether or not the system meets those criteria. This form of testing is often
performed by users or independent testers.

Regulatory acceptance testing is focused on ensuring that the system conforms to
government, legal or safety regulations. This type of testing is also often performed by
independent testers. It may be a requirement to have a representative of the regulatory
body present to witness or to audit the tests.

For both of these forms of acceptance testing, the aim is to build confidence that
the system is in conformance with the contract or regulations.

Alpha and beta testing


Alpha and beta testing are typically used for COTS software, such as software pack-
ages that can be bought or downloaded by consumers. Feedback is needed from
potential or existing users in their market before the software product is put out for
sale commercially. The testing here is looking for feedback (and defects) from real
users and customers or potential customers. Sometimes free software is offered to
those who volunteer to do beta testing.
The difference between alpha and beta testing is only in where the testing takes place.
Alpha testing is at the company that developed the software, and beta testing is done in
the users' own offices or homes.

Alpha testing: Simulated or actual operational testing conducted in the developer's
test environment, by roles outside the development organization.

Beta testing (field testing): Simulated or actual operational testing conducted at an
external site, by roles outside the development organization.

In alpha testing, a cross-section of potential users are invited to use the system.
Developers observe the users and note problems. Alpha testing may also be carried
out by an independent test team. Alpha testing is normally mentioned first, but these
two forms can be done in any order, or only one could be done (or none).

Beta testing sends the system or software package out to a cross-section of users
who install it and use it under real-world working conditions. The users send records
of defects with the system to the development organization, where the defects are
repaired. Beta testing is more visible, and it is increasingly popular to perform it
remotely. For example, crowd testing, where people or potential users from all over
the world remotely test an application, can be a form of beta testing. One of the
advantages of beta testing is that different users will have a great variety of different
environments (browsers, other software, hardware configurations, etc.), so the testing
can cover many more combinations of factors.
Acceptance testing: test basis
How do we know that the system is ready to be used for real? Examples of work
products that can be used as a test basis for the various forms of acceptance testing
include:
●● business processes
●● user or business requirements
●● regulations, legal contracts and standards
●● use cases
●● system requirements
●● system or user documentation
●● installation procedures
●● risk analysis reports.
For operational acceptance testing (OAT), there are some additional aspects with
specific work products that can be a test basis:
●● backup and restore/recovery procedures
●● disaster recovery procedures
●● non-functional requirements
●● operations documentation
●● deployment and installation instructions
●● performance targets
●● database packages
●● security standards or regulations.
Note that it is particularly important to have very clear, well-tested and frequently
rehearsed procedures for disaster recovery and restoring backups. If you are in the
situation of having to perform these procedures, then you may be in a state of panic,
since something serious will have already gone wrong. In that psychological state,
it is very easy to make mistakes, and here mistakes could be disastrous. There are
stories of organizations who compounded one disaster by accidentally deleting their
backups, or who find that their backups are unusable or incomplete! This is why
restoring from backups is an important test to do regularly.

Acceptance testing: test objects


What are we actually testing at this level? The emphasis here is on gaining confi-
dence, based on the particular form of acceptance testing: user confidence, confi-
dence in operations, confidence that we have met legal or regulatory requirements,
and confidence that real users will like and be happy with the software we are sell-
ing. Some of the things we are testing are similar to the test objects of system testing.
Typical test objects for acceptance testing include:
● system under test (SUT)
● system configuration and configuration data
● business processes for a fully integrated system
● recovery systems and hot sites (for business continuity and disaster recovery
testing)
● operational and maintenance processes
● forms
● reports
● existing and converted production data.

Acceptance testing: typical defects and failures


Examples of defects and failures that can typically be revealed by acceptance testing
include:
● system workflows do not meet business or user requirements
● business rules are not implemented correctly
● system does not satisfy contractual or regulatory requirements
● non-functional failures such as security vulnerabilities, inadequate performance
efficiency under high load, or improper operation on a supported platform.
At the end of the description of all the test levels, see Table 2.1 which summarizes
the characteristics of each test level.

Acceptance testing: specific approaches and responsibilities


The acceptance test should answer questions such as: ‘Can the system be released?’,
‘What, if any, are the outstanding (business) risks?’ and ‘Has development met
their obligations?’ Acceptance testing is most often the responsibility of the user or

customer, although other stakeholders may be involved as well. The execution of the
acceptance test requires a test environment that is, for most aspects, representative
of the production environment (‘as-if production’).
The goal of acceptance testing is to establish confidence in the system, part of the
system or specific non-functional characteristics, for example usability of the system.
Acceptance testing is most often focused on a validation type of testing, where we are
trying to determine whether the system is fit for purpose. Finding defects should not
be the main focus in acceptance testing. Although it assesses the system’s readiness
for deployment and use, it is not necessarily the final level of testing. For example, a
large-scale system integration test may come after the acceptance of a system.
Acceptance testing may occur at more than just a single level, for example:
●● A COTS software product may be acceptance tested when it is installed or
integrated.
●● Acceptance testing of the usability of a component may be done during compo-
nent testing.
●● Acceptance testing of a new functional enhancement may come before system
testing.
User acceptance testing focuses mainly on the functionality, thereby validating
the fitness for use of the system by the business user, while the operational acceptance
test (also called production acceptance test) validates whether the system meets the
requirements for operation. The user acceptance test is performed by the users and
application managers. In terms of planning, the user acceptance test usually links
tightly to the system test, and will, in many cases, be organized partly overlapping
in time. If the system to be tested consists of a number of more or less independent
subsystems, the acceptance test for a subsystem that meets its exit criteria from the
system test can start while another subsystem may still be in the system test phase.
In most organizations, system administration will perform the operational accept-
ance test shortly before the system is released. The operational acceptance test may
include testing of backup/restore, data load and migration tasks, disaster recovery,
user management, maintenance tasks and periodic check of security vulnerabilities.
Note that organizations may use other terms, such as factory acceptance testing
and site acceptance testing for systems that are tested before and after being moved
to a customer’s site.
In iterative development, different forms of acceptance testing may be done at
various times, and often in parallel. At the end of an iteration, a new feature may
be tested to validate that it meets stakeholder and user needs. This is user accept-
ance testing. If software for general release (COTS) is being developed, alpha
and beta testing may be used at or near the end of an iteration or set of iterations.
Operational and regulatory acceptance testing may also occur at the end of an
iteration or set of iterations.

Test level characteristics: summary


Table 2.1 summarizes the characteristics of the different test levels: the test basis,
test objects and typical defects and failures. We have covered these in the various
sections, but it is useful to contrast them in order to distinguish them from each
other. We have omitted some of the detail to make the table easier to take in at a
glance. Note that some of the typical defects for integration testing are only for
­system integration testing (SIT).
TABLE 2.1 Test level characteristics

Component testing
Objectives: reduce risk; verify functional and non-functional behaviour; build confidence in components; find defects; prevent defects escaping to higher levels.
Test basis: detailed design; code; data models; component specifications.
Test objects: components, units, modules; code; data structures; classes; database models.
Typical defects and failures: wrong functionality; data flow problems; incorrect code/logic.

Integration testing
Objectives: reduce risk; verify functional and non-functional behaviour; build confidence in interfaces; find defects; prevent defects escaping to higher levels.
Test basis: software/system design; sequence diagrams; interface and communication protocol specs; use cases; architecture (component or system); workflows; external interface definitions.
Test objects: subsystems; databases; infrastructure; interfaces; APIs; microservices.
Typical defects and failures: data problems; inconsistent message structure (SIT); timing problems; interface mismatch; communication failures; incorrect assumptions; not complying with regulations (SIT).

System testing
Objectives: reduce risk; verify functional and non-functional behaviour; validate completeness and that the system works as expected; build confidence in the whole system; find defects; prevent defects escaping to higher levels.
Test basis: requirement specs (functional and non-functional); risk analysis reports; use cases; epics and user stories; models of system behaviour; state diagrams; system and user manuals.
Test objects: applications; hardware/software systems; operating systems; system under test; system configuration and configuration data.
Typical defects and failures: incorrect calculations; incorrect or unexpected behaviour; incorrect data/control flows; cannot complete end-to-end tasks; does not work in production environment(s); not as described in manuals/documentation.

Acceptance testing
Objectives: establish confidence in the whole system and its use; validate completeness and that the system works as expected; verify functional and non-functional behaviour.
Test basis: business processes; user, business and system requirements; regulations, legal contracts and standards; use cases; system or user documentation; installation procedures; risk analysis reports.
Test objects: system under test (SUT); system configuration and configuration data; business processes; recovery systems; operation and maintenance processes; forms; reports; existing and converted production data.
Typical defects and failures: system workflows do not meet business or user needs; business rules not implemented correctly; contractual or regulatory problems; non-functional failures (performance, security).

2 . 3 TEST TYPES

SYLLABUS LEARNING OBJECTIVES FOR 2.3 TEST TYPES (K2)

FL-2.3.1 Compare functional, non-functional, and white-box testing (K2)

FL-2.3.2 Recognize that functional, non-functional and white-box tests occur at any test level (K1)

FL-2.3.3 Compare the purposes of confirmation testing and regression testing (K2)

In this section, we’ll look at different test types. We’ll discuss tests that focus on the
functionality of a system, which informally is testing what the system does. We’ll
also discuss tests that focus on non-functional attributes of a system, which infor-
mally is testing how well the system does what it does. We’ll introduce testing based
on the system’s structure. Finally, we’ll look at testing of changes to the system,
both confirmation testing (testing that the changes succeeded) and regression testing
(testing that the changes did not affect anything unintentionally).
The test types discussed here can involve the development and use of a model of
the software or its behaviours. Such models can occur in structural testing when we
use control flow models or menu structure models. Such models in non-functional
testing can involve performance models, usability models and security threat models.
They can also arise in functional testing, such as the use of process flow models, state
transition models or plain language specifications. Examples of such models will be
found in Chapter 4.
As we go through this section, watch for the Syllabus terms functional testing,
non-functional testing, test type and white-box testing. You will find these terms
defined in the Glossary as well.
Test types are introduced as a means of clearly defining the objective of a certain
test level for a program or project. We need to think about different types of testing
because testing the functionality of the component or system may not be sufficient at
each level to meet the overall test objectives. Focusing the testing on a specific test
objective and, therefore, selecting the appropriate type of test makes it easier to
make and communicate decisions about test objectives. Typical objectives may include:
● Evaluating functional quality, for example whether a function or feature is com-
plete, correct and appropriate.
● Evaluating non-functional quality characteristics, for example reliability, perfor-
mance efficiency, security, compatibility and usability.
● Evaluating whether the structure or architecture of the component or system is
correct, complete and as specified.
● Evaluating the effects of changes, looking at both the changes themselves (for
example defect fixes) and also the remaining system to check for any unintended
side-effects of the change. These are confirmation testing and regression testing,
respectively, and are discussed in Section 2.3.4.

Test type: A group of test activities based on specific test objectives aimed at
specific characteristics of a component or system.

A test type is focused on a particular test objective, which could be the testing
of a function to be performed by the component or system; a non-functional quality
characteristic, such as reliability or usability; the structure or architecture of the
component or system; or related to changes, that is, confirming that defects have
been fixed (confirmation testing, or re-testing) and looking for unintended changes
(regression testing). Depending on its objectives, testing will be organized differently.
For example, component testing aimed at performance would be quite different from
component testing aimed at achieving decision coverage.

2.3.1 Functional testing


The function of a system (or component) is what it does. This is typically
described in work products such as business requirements specifications, func-
tional specifications, use cases, epics or user stories. There may be some func-
tions that are assumed to be provided that are not documented. They are also
part of the requirements for a system, though it is difficult to test against undocu-
mented and implicit requirements. Functional tests are based on these functions,
described in documents or understood by the testers, and may be performed at
all test levels (for example tests for components may be based on a component
specification).
Functional testing: Testing conducted to evaluate the compliance of a component
or system with functional requirements.

Functional testing considers the specified behaviour and is often also referred
to as black-box testing (specification-based testing). This is not entirely accurate, since
black-box testing also includes non-functional testing (see Section 2.3.2).
Functional testing can also be done focusing on suitability, interoperability,
security, accuracy and compliance. Security testing, for example, investigates the
functions (for example a firewall) relating to detection of threats, such as viruses,
from malicious outsiders.
Testing of functionality could be done from different perspectives, the two main
ones being requirements-based or business-process-based.
Requirements-based testing uses a specification of the functional requirements
for the system as the basis for designing tests. A good way to start is to use the table
of contents of the requirements specification as an initial test inventory or list of
items to test (or not to test). We should also prioritize the requirements based on risk
criteria (if this is not already done in the specification) and use this to prioritize the
tests. This will ensure that the most important and most critical tests are included in
the testing effort.
Business-process-based testing uses knowledge of the business processes. Busi-
ness processes describe the scenarios involved in the day-to-day business use of the
system. For example, a personnel and payroll system may have a business process
along the lines of: someone joins the company, he or she is paid on a regular basis, and
he or she finally leaves the company. User scenarios originate from object-oriented
development but are nowadays popular in many development life cycles. They also
take the business processes as a starting point, although they start from tasks to be
performed by users. Use cases are a very useful basis for test cases from a business
perspective.
The techniques used for functional testing are often specification-based, but
experience-based techniques can also be used (see Chapter 4 for more on test
techniques). Test conditions and test cases are derived from the functionality of the

component or system. As part of test design, a model may be developed, such as a
process model, state transition model or a plain-language specification.
The thoroughness of functional testing can be measured by a coverage meas-
ure based on elements of the function that we can list. For example, we can list
all of the options available from every pull-down menu. If our set of tests has
at least one test for each option, then we have 100% coverage of these menu
options. Of course, that does not mean that the system or component is 100%
tested, but it does mean that we have at least touched every one of the things
we identified. When we have traceability between our tests and functional
requirements, we can identify which requirements we have not yet covered,
that is, have not yet tested (coverage gaps). For example, if we covered only
90% of the menu options, we could add tests so that the untested 10% are then
covered.
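
To make this concrete, here is a minimal Python sketch (the menu options, test names and traceability mapping are invented for illustration, not taken from the book) of how menu-option coverage and coverage gaps could be computed:

# Hypothetical traceability: each test case lists the menu options it exercises.
menu_options = {"Open", "Save", "Print", "Export", "Close"}
tests = {
    "TC-01": {"Open", "Save"},
    "TC-02": {"Print"},
    "TC-03": {"Open", "Close"},
}

covered = set().union(*tests.values())   # options exercised by at least one test
gaps = menu_options - covered            # options with no test at all

print(f"Menu option coverage: {len(covered) / len(menu_options):.0%}")  # 80%
print(f"Coverage gaps (untested): {sorted(gaps)}")                      # ['Export']

With four of the five options exercised, coverage is 80% and Export is the coverage gap, so a test covering Export would be added to reach 100% of the identified options.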
Special skills or knowledge may be needed for functional testing, particularly for
specialized application domains. For example, medical device software may need
medical knowledge both for the design and testing of such systems. For example,
the worst failure for a heart pacemaker is not stopping the electrical stimulation
(the heart may still limp along, less efficiently), but speeding up and giving the
signal much too frequently; this can be fatal. Other specialized application areas
include gaming or interactive entertainment systems, geological modelling for oil
and gas exploration, or automotive systems.

2.3.2 Non-functional testing


This test type is the testing of the quality characteristics, or non-functional attributes
of the system (or component or integration group). Here we are interested in how
well or how fast something is done. We are testing something that we need to meas-
ure on a scale of measurement, for example time to respond.
Non-functional testing: Testing conducted to evaluate the compliance of a
component or system with non-functional requirements.

Non-functional testing, like functional testing, is performed at all test levels.
Non-functional testing includes, but is not limited to, performance testing, load testing,
stress testing, usability testing, maintainability testing, reliability testing, portability
testing and security testing. It is the testing of how well the system works.
Many have tried to capture software quality in a collection of characteristics and
related sub-characteristics. In these models, some elementary characteristics keep
on reappearing, although their place in the hierarchy can differ. The International
Organization for Standardization (ISO) has defined a set of quality characteristics
in ISO/IEC 25010 [2011].
A common misconception is that non-functional testing occurs only during
higher levels of testing such as system test, system integration test and acceptance
test. In fact, non-functional testing may be performed at all test levels; the higher
the level of risk associated with each type of non-functional testing, the earlier
in the life cycle it should occur. Ideally, non-functional testing involves tests that
quantifiably measure characteristics of the systems and software. For example, in
performance testing we can measure transaction throughput, resource utilization
and response times. Generally, non-functional testing defines expected results in
terms of the external behaviour of the software. This means that we typically use
black-box test techniques. For example, we could use boundary value analysis to

define the stress conditions for performance tests, and equivalence partitioning to
identify types of devices for compatibility testing, or to identify user groups for
usability testing (novice, experienced, age range, geographical location, educational
background).
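
As an illustration of defining a non-functional expected result as a measurement on a scale, the following Python sketch checks a response time against a quantified target; the check_out operation and the 500 ms target are assumptions made for this example, not a real API:

import time

RESPONSE_TIME_TARGET_S = 0.5       # the quantified non-functional requirement

def check_out(order_id: int) -> None:
    # Stand-in for the operation under test (hypothetical for this sketch).
    time.sleep(0.1)                # simulate the real work

def test_checkout_response_time():
    start = time.perf_counter()
    check_out(order_id=42)
    elapsed = time.perf_counter() - start
    # Pass/fail is a measurement against a limit, not just right/wrong output.
    assert elapsed <= RESPONSE_TIME_TARGET_S, f"{elapsed:.3f}s exceeds the target"

test_checkout_response_time()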
The thoroughness of non-functional testing can be measured by the coverage of
non-functional elements. If we had at least one test for each major group of users, then
we would have 100% coverage of those user groups that we had identified. Of course,
we may have forgotten an important user group, such as those with disabilities, so we
have only covered the groups we have identified.
If we have traceability between non-functional tests and non-functional require-
ments, we may be able to identify coverage gaps. For example, an implicit require-
ment is for accessibility for disabled users.
Special skills or knowledge may be needed for non-functional testing, such as for
performance testing, usability testing or security testing (for example for specific
development languages).
More about non-functional testing is found in other ISTQB qualification Syllabuses,
including the Advanced Test Analyst, Advanced Technical Test Analyst, Advanced
Security Tester, Foundation Performance Testing and Foundation Usability Testing
Syllabuses.

2.3.3 White-box testing


The third test type looks at the internal structure or implementation of the system
or component. If we are talking about the structure of a system, we may call it
the system architecture. Structural elements also include the code itself, control
flows, business processes and data flows.

White-box testing (clear-box testing, code-based testing, glass-box testing,
logic-coverage testing, logic-driven testing, structural testing, structure-based
testing): Testing based on an analysis of the internal structure of the component
or system.

White-box testing is also referred to as structural testing or glass-box testing
because we are interested in what is happening inside the box. It is most often used
as a way of measuring the thoroughness of testing through the coverage of a set of
structural elements or coverage items. It can occur at any test level, although it tends
to be mostly applied at component testing and component integration testing, and
generally is less likely at higher test levels, except for business process testing. At
component integration level it may be based on the architecture of the system, such
as a calling hierarchy or the interfaces between components (the interfaces themselves
can be listed as coverage items). The test basis for system, system integration or
acceptance testing could be a business model, for example business rules.
At component level, and to a lesser extent at component integration testing, there
is good tool support to measure code coverage. Coverage measurement tools assess
the percentage of executable elements (for example statements or decision outcomes)
that have been exercised (that is, they have been covered) by a test suite. If coverage
is not 100%, then additional tests may need to be written and run to cover those parts
that have not yet been exercised. This of course depends on the exit criteria. (Coverage
and white-box test techniques are covered in Chapter 4.)
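
For illustration, here is a sketch of two tests that together achieve 100% decision coverage of a function containing a single decision; the function and tests are invented for this example. A coverage tool such as coverage.py could then be run over the suite (for example, coverage run -m pytest followed by coverage report) to confirm which statements and decision outcomes were exercised:

def overdraft_fee(balance: float) -> float:
    if balance < 0:                 # the decision: both outcomes need covering
        return 25.0                 # fee charged when overdrawn
    return 0.0                      # no fee otherwise

def test_fee_when_overdrawn():      # exercises the True outcome
    assert overdraft_fee(-10.0) == 25.0

def test_no_fee_in_credit():        # exercises the False outcome
    assert overdraft_fee(100.0) == 0.0

test_fee_when_overdrawn()
test_no_fee_in_credit()

Running only the first test would leave the return 0.0 statement and the False outcome uncovered; the second test exercises the remaining outcome, bringing decision coverage to 100%.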
Special skills or knowledge may be needed for white-box testing, such as knowl-
edge of the code (to interpret coverage tool results) or how data is stored (for database
queries).

2.3.4 Change-related testing


The final test type is the testing of changes. This category is slightly different to the
others because if you have made a change to the software, you will have changed the
way it functions, how well it functions (or both) and its structure. However, we are
looking here at the specific types of tests relating to changes, even though they may
include all of the other test types. There are two things to be particularly aware of
when changes are made: the change itself and any other effects of the change.

Confirmation testing (re-testing)


When a test fails and we determine that the cause of the failure is a software defect,
the defect is reported and we can expect a new version of the software that has had
the defect fixed. In this case we will need to execute the test again to confirm that the
defect has indeed been fixed. This is known as confirmation testing (also known
as re-testing).

Confirmation testing (re-testing): Dynamic testing conducted after fixing defects
with the objective to confirm that failures caused by those defects do not occur
anymore.

When doing confirmation testing, it is important to ensure that the steps leading up to
the failure are carried out in exactly the same way as described in the defect report,
using the same inputs, data and environment, and possibly extending beyond the
test to ensure that the change has indeed fixed all of the problems due to the defect.
If the test now passes, does this mean that the software is now correct? Well, we
now know that at least one part of the software is correct – where the defect was.
But this is not enough. The fix may have introduced or uncovered a different defect
elsewhere in the software. The way to detect these unexpected side-effects of fixes
is to do regression testing.
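
As a small illustration, consider a hypothetical defect report, DEF-123, stating that a bulk discount was not applied to orders of exactly 10 items. The confirmation test below (all names, values and the fix itself are invented for this sketch) replays exactly the inputs recorded in the report against the fixed code:

def order_total(quantity: int, unit_price: float) -> float:
    total = quantity * unit_price
    if quantity >= 10:              # the fix: the boundary was previously '> 10'
        total *= 0.9                # 10% bulk discount
    return total

def test_def_123_confirmation():
    # Exactly the inputs and expected result recorded in the defect report.
    assert round(order_total(quantity=10, unit_price=5.0), 2) == 45.0

test_def_123_confirmation()

If this test now passes, the reported failure no longer occurs; regression tests around the changed boundary would then look for unintended side effects.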

Regression testing
Regression testing: Testing of a previously tested component or system following
modification to ensure that defects have not been introduced or have been uncovered
in unchanged areas of the software as a result of the changes made.

Like confirmation testing, regression testing involves executing test cases that have
been executed before. The difference is that, for regression testing, the test cases
probably passed the last time they were executed (compare this with the test cases
executed in confirmation testing – they failed the last time).

The term regression testing is something of a misnomer. It would be better if it
were called anti-regression testing because we are executing tests with the intent
of checking that the system has not regressed (that is, it does not now have more
defects in it as a result of some change). More specifically, the purpose of regression
testing is to make sure (as far as is practical) that modifications in the software or
the environment have not caused unintended adverse side effects and that the system
still meets its requirements.
It is common for organizations to have what is usually called a regression test
suite or regression test pack. This is a set of test cases that is specifically used for
regression testing. They are designed to collectively exercise most functions (certainly
the most important ones) in a system, but not test any one in detail. It is appropriate
to have a regression test suite at every level of testing (component testing, integration
testing, system testing, etc.). In some cases, all of the test cases in a regression test
suite would be executed every time a new version of software is produced; this makes
them ideal candidates for automation. However, it is much better to be able to select
subsets for execution, especially if the regression test suite is very large. In Agile
development, a selection of regression tests would be run to meet the objectives of a
particular ­iteration. Automation of regression tests should start as early as possible
in the project. See Chapter 6 for more on test automation.
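
One possible shape for such a subset selection, sketched here with invented test names and metadata (a real project would more likely drive this from a test management tool or a tagging scheme), is to run the tests that touch the changed areas plus an always-run set of high-priority tests:

regression_suite = {
    "test_login":         {"areas": {"auth"},            "priority": 1},
    "test_payment_flow":  {"areas": {"payments"},        "priority": 1},
    "test_report_export": {"areas": {"reports"},         "priority": 3},
    "test_profile_edit":  {"areas": {"auth", "profile"}, "priority": 2},
}

def select_subset(changed_areas: set) -> list:
    # Pick tests touching a changed area, plus the priority-1 smoke tests.
    return sorted(
        name
        for name, meta in regression_suite.items()
        if meta["areas"] & changed_areas or meta["priority"] == 1
    )

print(select_subset({"auth"}))
# ['test_login', 'test_payment_flow', 'test_profile_edit']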

Regression tests are executed whenever the software changes, either as a result
of fixes or new or changed functionality. It is also a good idea to execute them when
some aspect of the environment changes, for example when a new version of the host
operating system is introduced or the production environment has a new version of
the Java Virtual Machine or anti-malware software.
Maintenance of a regression test suite should be carried out so it evolves over time
in line with the software. As new functionality is added to a system, new regression
tests should be added. As old functionality is changed or removed, so too should
regression tests be changed or removed. As new tests are added, a regression test
suite may become very large. If all the tests have to be executed manually it may not
be possible to execute them all every time the regression suite is used. In this case, a
subset of the test cases has to be chosen. This selection should be made considering
the latest changes that have been made to the software. Sometimes a regression test
suite of automated tests can become so large that it is not always possible to execute
them all. It may be possible and desirable to eliminate some test cases from a large
regression test suite, for example if they are repetitive (tests which exercise the same
conditions) or can be combined (if they are always run together). Another approach
is to eliminate test cases when the risk associated with that test is so low that it is not
worth running it anymore.
Both confirmation testing and regression testing are done at all test levels.
In iterative and incremental development, changes are more frequent, even con-
tinuous and the software is refactored frequently. This makes confirmation testing
and regression testing even more important. But iterative development such as Agile
should also include continuous testing, and this testing is mainly regression testing.
For IoT systems, change-related testing covers not only software systems but the
changes made to individual objects or devices, which may be frequently updated or
replaced.

2.3.5 Test types and test levels


We mentioned as we went through the test types that each test type is applicable at
every test level. The testing is different, depending on the test level and test type, of
course, but the Syllabus gives examples of each test type at each test level to illustrate
the point.

Functional tests at each test level


Let’s use a banking example to look at the different levels of testing. There are many
features in a financial application. Some of them are visible to users and others are
behind the scenes but equally important for the whole application to work well.
The more technical and more detailed aspects should be tested at the lower levels,
and the customer-facing aspects at higher levels. We will also look at examples of
the different test types showing the different testing for functional, non-functional,
white-box and change-related testing.
●● Component testing: how the component should calculate compound interest (see the test sketch after this list).
●● Component integration testing: how account information from the user interface
is passed to the business logic.
●● System testing: how account holders can apply for a line of credit.

● System integration testing: how the system uses an external microservice to


check an account holder’s credit score.
● Acceptance testing: how the banker handles approving or declining a credit
application.
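
Here is a minimal sketch of the component-level functional test from the first bullet above; the interest formula, names and values are illustrative assumptions rather than the book's own example:

def compound_interest(principal: float, annual_rate: float, years: int) -> float:
    # Balance after compounding annually: P * (1 + r) ** n.
    return principal * (1 + annual_rate) ** years

def test_compound_interest_two_years():
    # 1000 at 5% for 2 years: 1000 * 1.05 ** 2 = 1102.50
    assert round(compound_interest(1000.0, 0.05, 2), 2) == 1102.50

test_compound_interest_two_years()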

Non-functional tests at each test level


● Component testing: the time or number of CPU cycles to perform a complex
interest calculation.
● Component integration testing: checking for buffer overflow (a security flaw)
from data passed from the user interface to the business logic.
● System testing: portability tests on whether the presentation layer works on sup-
ported browsers and mobile devices.
● System integration testing: reliability tests to evaluate robustness if the credit
score microservice does not respond.
● Acceptance testing: usability tests for accessibility of the banker’s credit pro-
cessing interface for people with disabilities.

White-box tests at each test level


● Component testing: achieve 100% statement and decision coverage for all com-
ponents performing financial calculations.
● Component integration testing: coverage of how each screen in the browser
interface passes data to the next screen and to the business logic.
● System testing: coverage of sequences of web pages that can occur during a
credit line application.
● System integration testing: coverage of all possible inquiry types sent to the
credit score microservice.
● Acceptance testing: coverage of all supported financial data file structures and
value ranges for bank-to-bank transfers.

Change-related tests at each test level


● Component testing: automated regression tests for each component are included
in the continuous integration framework and pipeline.
● Component integration testing: confirmation tests for interface-related defects
are activated as the fixes are checked into the code repository.
● System testing: all tests for a given workflow are re-executed if any screen changes.
● System integration testing: as part of continuous deployment of the credit scor-
ing microservice, automated tests of the interactions of the application with the
microservice are re-executed.
● Acceptance testing: all previously failed tests are re-executed after defects
found in acceptance testing are fixed.
Note that not every test type will occur at every test level in every system! ­However,
it is a good idea to think about how every test type might apply at each test level,
and try to implement those tests at the earliest opportunity within the development
life cycle.

2 . 4 MAINTENANCE TESTING

SYLLABUS LEARNING OBJECTIVES FOR 2.4 MAINTENANCE TESTING (K2)

FL-2.4.1 Summarize triggers for maintenance testing (K2)

FL-2.4.2 Describe the role of impact analysis in maintenance testing (K2)

Once deployed, a system is often in service for years or even decades. During this
time, the system and its operational environment are often corrected, changed or
extended. As we go through this section, watch for the Syllabus terms impact analysis
and maintenance testing. You will find these terms also defined in the Glossary.

Maintenance testing: Testing the changes to an operational system or the impact of
a changed environment to an operational system.

Testing that is executed during this life cycle phase is called maintenance testing.
Maintenance testing, along with the entire process of maintenance releases, should
be carefully planned. Not only must planned maintenance releases be considered, but
the process for developing and testing hot fixes must be as well. Maintenance testing
includes any type of testing of changes to an existing, operational system, whether
the changes result from modifications, migration or retirement of the software
or system.
Modifications can result from planned enhancement changes, such as those
referred to as minor releases, which include new features and accumulated
(non-emergency) bug fixes. Modifications can also result from corrective and more urgent
emergency changes. Modifications can also involve changes of environment, such
as planned operating system or database upgrades, planned upgrade of COTS soft-
ware, or patches to correct newly exposed or discovered vulnerabilities of the
operating system.
Migration involves moving from one platform to another. This can involve
abandoning a platform no longer supported or adding a new supported platform.
Either way, testing must include operational tests of the new environment as well
as of the changed software. Migration testing can also include conversion test-
ing, where data from another application will be migrated into the system being
maintained.
Note that maintenance testing is different from testing for maintainability (which
is the degree to which a component or system can be modified by the intended main-
tainers). In this section, we’ll discuss maintenance testing.
The same test process steps will apply as for testing during development and,
depending on the size and risk of the changes made, several levels of testing are car-
ried out: a component test, an integration test, a system test and an acceptance test.
If testing is done more formally, an application for a change may be used to produce
a test plan for testing the change, with test cases changed or created as needed. In
less formal testing, thought needs to be given to how the change should be tested,
even if this planning, updating of test cases and execution of the tests is part of a
continuous process.
The scope of maintenance testing depends on several factors, which influence the
test types and test levels. The factors are:
●● Degree of risk of the change, for example a self-contained change is a lower risk
than a change to a part of the system that communicates with other systems.

●● The size of the existing system, for example a small system would need less
regression testing than a larger system.
●● The size of the change, which affects the amount of testing of the changes that
would be needed. The amount of regression testing is more related to the size of
the system than the size of the change.

2.4.1 Triggers for maintenance


As stated, maintenance testing is done on an existing operational system. There are
three possible triggers for maintenance testing:
● modifications
● migration
● retirement.
Modifications include planned enhancement changes (for example release-based),
corrective and emergency changes and changes of environment, such as planned
operating system or database upgrades, or patches to newly exposed or discovered
vulnerabilities of the operating system and upgrades of COTS software.
Modifications may also be of hardware or devices, not just software components
or systems. For example, in IoT systems, new or significantly modified hardware
devices may be introduced to a working system. The emphasis in maintenance test-
ing would likely focus on different types of integration testing and security testing
at all test levels.
Maintenance testing for migration (for example from one platform to another)
should also include operational testing of the new environment, as well as the changed
software. It is important to know that the platform you will be transferring to is sound
before you start migrating your own files and applications.
Maintenance testing for the retirement of a system may include the testing of data
migration or archiving, if long data-retention periods are required. Testing of restore
or retrieve procedures after archiving may also be needed. There is no point in try-
ing to save and preserve something that you can no longer access. These procedures
should be regularly tested and action taken to migrate away from technology that is
reaching the end of its life. You may remember seeing magnetic tape on old movies,
which was thought to be a good long-term archiving solution at the time.

2.4.2 Impact analysis and regression testing


As mentioned earlier, maintenance testing usually consists of two parts:
● testing the changes
● regression tests to show that the rest of the system has not been affected by the
maintenance work.

Impact analysis: The identification of all work products affected by a change,
including an estimate of the resources needed to accomplish the change.

In addition to testing what has been changed, maintenance testing includes
extensive regression testing to parts of the system that have not been changed. Some
systems will have extensive regression suites (automated or not) where the costs
of executing all of the tests would be significant. A major and important activity
within maintenance testing is impact analysis. During impact analysis, together with
stakeholders, a decision is made on what parts of the system may be unintentionally

affected and therefore need more extensive regression testing. Risk analysis will help
to decide where to focus regression testing. It is unlikely that the team will have time
to repeat all the existing tests, so this gives us the best value for the time and effort
we can spend in regression testing.
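
As a simple illustration of how impact analysis and traceability can drive this selection, the sketch below (the system parts, tests and mapping are all invented) picks only the regression tests that cover the parts judged to be affected:

traceability = {              # system part -> regression tests that cover it
    "billing":   ["test_invoice_totals", "test_vat_rounding"],
    "customers": ["test_create_customer", "test_merge_duplicates"],
    "reports":   ["test_monthly_summary"],
}

# Parts that the impact analysis (with stakeholders) judged may be affected.
impacted_parts = {"billing", "reports"}

tests_to_run = sorted(
    test for part in impacted_parts for test in traceability[part]
)
print(tests_to_run)
# ['test_invoice_totals', 'test_monthly_summary', 'test_vat_rounding']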
If the test specifications from the original development of the system are kept, one
may be able to reuse them for regression testing and to adapt them for changes to
the system. This may be as simple as changing the expected results for your existing
tests. Sometimes additional tests may need to be built. Extension or enhancement to
the system may mean new areas have been specified and tests would be drawn up just
as for the development. Do not forget that automated regression tests will also need
to be updated in line with the changes; this can take significant effort, depending on
the architecture of your automation (see Chapter 6).
Impact analysis can also be used to help make a decision about whether or not a
particular change should be made. If the change has the potential to cause high-risk
vulnerabilities throughout the system, it may be a better decision not to make that
change.
There are a number of factors that make impact analysis more difficult:
●● Specifications are out of date or missing (for example business requirements,
user stories, architecture diagrams).
●● Test cases are not documented or are out of date.
●● Bi-directional traceability between tests and the test basis has not been
maintained.
●● Tool support is weak or non-existent.
●● The people involved do not have domain and/or system knowledge.
●● The maintainability of the software has not been taken into enough considera-
tion during development.
Impact analysis can be very useful in making maintenance testing more efficient,
but if it is not, or cannot be, done well, then the risks of making the change are greatly
increased.

CHAPTER REVIEW
Let’s review what you have learned in this chapter.
From Section 2.1, you should now understand the relationship between devel-
opment activities and test activities within a development life cycle and be familiar
with sequential life cycle models (waterfall and V-model) and iterative/incremental
life cycle models (RUP, Scrum, Kanban and Spiral). You should be able to recall the
reasons for different levels of testing and characteristics of good testing in any life
cycle model. You should be able to give reasons why software development life cycle
models need to be adapted to the context of the project and product being developed.
You should know the Glossary terms commercial off-the-shelf (COTS), sequential
development model and test level.
From Section 2.2, you should know the typical levels of testing (component,
integration, system and acceptance testing). You should be able to compare the different
levels of testing with respect to their major objectives, the test basis, typical objects
of testing, typical defects and failures, and approaches and responsibilities for each
test level. You should know the Glossary terms acceptance testing, alpha testing,
beta testing, component integration testing, component testing, contractual
acceptance testing, integration testing, operational acceptance testing, regulatory
acceptance testing, system integration testing, system testing, test basis, test case,
test environment, test object, test objective and user acceptance testing.
From Section 2.3, you should know the four major types of test (functional,
non-functional, structural and change-related) and should be able to provide some
concrete examples for each of these. You should understand that functional and
structural tests occur at any test level and be able to explain how they are applied in
the various test levels. You should be able to identify and describe non-functional
test types based on non-functional requirements and product quality characteristics.
Finally, you should be able to explain the purpose of confirmation testing (re-testing)
and regression testing in the context of change-related testing. You should know the
Glossary terms functional testing, non-functional testing, test type and white-box
testing.
From Section 2.4, you should be able to compare maintenance testing to testing of
new applications. You should be able to identify triggers and reasons for maintenance
testing, such as modifications, migration and retirement. Finally, you should be able to
describe the role of regression testing and impact analysis within maintenance testing.
You should know the Glossary terms impact analysis and maintenance testing.

SAMPLE EXAM QUESTIONS

Question 1 Which of the following statements is true?
a. Overlapping test levels and test activities are more common in sequential life cycle models than in iterative incremental models.
b. The V-model is an iterative incremental life cycle model because each development activity has a corresponding test activity.
c. When completed, iterative incremental life cycle models are more likely to deliver the full set of features originally envisioned by stakeholders than sequential models.
d. In iterative and incremental life cycle models, delivery of usable software to end-users is much more frequent than in sequential models.

Question 2 What level of testing is typically performed by system administration staff?
a. Regulatory acceptance testing.
b. System testing.
c. System integration testing.
d. Operational acceptance testing.

Question 3 Which of the following is a test type?
a. Component testing.
b. Functional testing.
c. System testing.
d. Acceptance testing.

Question 4 Consider the three triggers for maintenance, and match the event with the correct trigger:
1. Data conversion from one system to another.
2. Upgrade of COTS software.
3. Test of data archiving.
4. System now runs on a different platform and operating system.
5. Testing restore or retrieve procedures.
6. Patches for security vulnerabilities.
a. Modification: 2 and 3, Migration: 1 and 5, Retirement: 4 and 6.
b. Modification: 2 and 6, Migration: 1 and 4, Retirement: 3 and 5.
c. Modification: 2 and 4, Migration: 1 and 3, Retirement: 5 and 6.
d. Modification: 1 and 5, Migration: 2 and 3, Retirement: 4 and 6.

Question 5 Which of these is a functional test?
a. Measuring response time on an online booking system.
b. Checking the effect of high volumes of traffic in a call centre system.
c. Checking the online bookings screen information and the database contents against the information on the letter to the customers.
d. Checking how easy the system is to use, particularly for users with disabilities such as impaired vision.

Question 6 Which of the following is true, regarding the process of testing emergency fixes?
a. There is no time to test the change before it goes live, so only the best developers should do this work and should not involve testers as they slow down the process.
b. Just run the retest of the defect actually fixed.
c. Always run a full regression test of the whole system in case other parts of the system have been adversely affected.
d. Retest the changed area and then use risk assessment to decide on a reasonable subset of the whole regression test to run in case other parts of the system have been adversely affected.

Question 7 A regression test:
a. Is only run once.
b. Will always be automated.
c. Will check unchanged areas of the software to see if they have been affected.
d. Will check changed areas of the software to see if they have been affected.

Question 8 Non-functional testing includes:
a. Testing to see where the system does not function correctly.
b. Testing the quality attributes of the system including reliability and usability.
c. Gaining user approval for the system.
d. Testing a system feature using only the software required for that function.

Question 9 Beta testing is:
a. Performed by customers at their own site.
b. Performed by customers at the software developer's site.
c. Performed by an independent test team.
d. Useful to test software developed for a specific customer or user.
