Software Testing in Development Models
Testing is not a stand-alone activity. It has its place within a software development
life cycle model and therefore the life cycle applied will largely determine how
testing is organized. There are many different forms of testing. Because several
disciplines, often with different interests, are involved in the development life cycle,
it is important to clearly understand and define the various test levels and types. This
chapter discusses the most commonly applied software development models, test
levels and test types. Maintenance can be seen as a specific instance of a development
process. The way maintenance influences the test process, levels and types and how
testing can be organized is described in the last section of this chapter.
In this section, we’ll discuss software development models and how testing fits into
them. We’ll discuss sequential models, focusing on the V-model approach rather
than the waterfall. We’ll discuss iterative and incremental models such as Rational
Unified Process (RUP), Scrum, Kanban and Spiral (or prototyping).
As we go through this section, watch for the Syllabus terms commercial off-the-
shelf (COTS), sequential development model, and test level. You will find these
keywords defined in the Glossary (and ISTQB website).
The development process adopted for a project will depend on the project aims and
goals. There are numerous development life cycles that have been developed in order
to achieve different required objectives. These life cycles range from lightweight and
fast methodologies, where time to market is of the essence, through to fully controlled
and documented methodologies. Each of these methodologies has its place in mod-
ern software development, and the most appropriate development process should be
Section 1 Software Development Life Cycle Models 37
applied to each project. The models specify the various stages of the process and the
order in which they are carried out.
The life cycle model that is adopted for a project will have a big impact on the
testing that is carried out. Testing does not exist in isolation; test activities are highly
related to software development activities. The chosen life cycle model defines the what, where and when of our planned testing, influences regression testing and largely determines which test techniques to use. The way testing is organized must fit the development life cycle
or it will fail to deliver its benefit. If time to market is the key driver, then the testing
must be fast and efficient. If a fully documented software development life cycle, with
an audit trail of evidence, is required, the testing must be fully documented.
Whichever life cycle model is being used, there are several characteristics of good
testing:
●● For every development activity there is a corresponding test activity.
●● Each test level has test objectives specific to that level.
●● The analysis and design of tests for a given test level should begin during the
corresponding software development activity.
●● Testers should participate in discussions to help define and refine requirements
and design. They should also be involved in reviewing work products as soon as
drafts are available in the software development cycle.
Recall from Chapter 1, testing Principle 3: ‘Early testing saves time and money’.
By starting testing activities as early as possible in the software development life
cycle, we find defects while they are still small green shoots (in requirements, for
example) before they have had a chance to grow into trees (in production). We also
prevent defects from occurring at all by being more aware of what should be tested
from the earliest point in whatever software development life cycle we are using.
We will look at two categories of software development life cycle models: sequen-
tial and iterative/incremental.
[Figure: the sequential V-model — from a need, wish, policy or law through user requirements, system requirements, global design, detailed design and implementation, with testing on the ascending branch.]
The V-model was developed to address some of the problems experienced using
the traditional waterfall approach. Defects were being found too late in the life cycle,
as testing was not involved until the end of the project. Testing also added lead
time due to its late involvement. The V-model provides guidance on how testing
begins as early as possible in the life cycle. It also shows that testing is not only an
execution-based activity. There are a variety of activities that need to be performed
before the end of the coding phase. These activities should be carried out in parallel
with development activities, and testers need to work with developers and business
analysts so they can perform these activities and tasks, producing a set of test deliv-
erables. The work products produced by the developers and business analysts during
development are the basis of testing in one or more levels. By starting test design
early, defects are often found in the test basis documents. A good practice is to have
testers involved even earlier, during the review of the (draft) test basis documents. The
V-model illustrates how testing activities (verification and validation)
can be integrated into each phase of the life cycle. Within the V-model, validation
testing takes place especially during the early stages, for example reviewing the user
requirements, and late in the life cycle, for example during user acceptance testing.
Test level (test stage): A specific instantiation of a test process.

Although variants of the V-model exist, a common type of V-model uses four test levels. (See Section 2.2 for more on test levels.) The four test levels used, each with their own objectives, are:
● Component testing: searches for defects in and verifies the functioning of soft-
ware components (for example modules, programs, objects, classes, etc.) that are
separately testable.
● Integration testing: tests interfaces between components, interactions with different parts of a system (such as an operating system, file system and hardware) or interfaces between systems.
[Figure: the V-model with test levels — system requirements, global design and detailed design on the descending branch, each paired with test preparation and execution on the ascending branch (system test, integration test and component test respectively); implementation sits at the base, and the operational system is tested against the original need, wish, policy or law.]
To better understand the meaning of these two terms, consider the following two
sequences of producing a painting.
● Incremental: complete one piece at a time (scheduling or staging strategy).
Each increment may be delivered to the customer.
● Iterative: start with a rough product and refine it, iteratively (rework strategy).
Final version only delivered to the customer (although in practice, intermediate
versions may be delivered to selected customers to get feedback).
There is a danger that the testing may be less thorough in incremental and iter-
ative life cycles, particularly regression testing of previously developed parts, and
especially if the regression testing is manual rather than automated. There is also a
danger that the testing is less formal, because we are dealing with smaller parts of
the system, and formality of the testing may seem like overkill for such a small thing.
Examples of iterative and incremental development models are Rational Unified
Process (RUP), Scrum, Kanban and Spiral (or prototyping).
Scrum
Scrum is an iterative and incremental framework for effective team collabora-
tion, which is typically used in Agile development, the most well-known iterative
method. It (and Agile) is based on recognizing that change is inevitable and taking
a practical empirical approach to changing priorities. Work is broken down into
small units that can be completed in a fairly short time (days, a week or two, or even a month). The time-boxed period in which a unit of work is completed and delivered is called a sprint. For software development, a sprint includes all aspects of development for a particular feature or set
of small features, everything from requirements (typically user stories) through to
testing and (ideally) test automation.
The development teams are small (3 to 9 people) and cross-functional, that is, they
include people who perform various roles, and often individuals take on different
tasks (such as testing) within the team. The key roles are:
● The Product Owner represents the business, that is, stakeholders and end users.
● The Development team (which includes testers) makes its own decisions about
development, that is, they are self-organizing with a high level of autonomy.
● The Scrum Master helps the development team to do their work as efficiently
as possible, by interacting with other parts of the organization and dealing with
problems. The Scrum Master is not a manager, but a facilitator for the team.
A stand-up meeting of typically around 15 minutes is held each day, for example
first thing in the morning, to update everyone with progress from the previous day and
plan the work ahead. (This is where the term ‘scrum’ came from, as in the gathering
of a rugby team.)
At the start of a sprint, in the sprint planning meeting, some features are selected
to be implemented, with other features being put on a backlog. Acceptance criteria
apply to user stories, and are similar to test conditions, saying what needs to work for
the user story to be considered working. A definition of done can apply to a user story
(which includes but goes beyond satisfaction of the acceptance criteria), but also to unit
testing, system testing, iterations and releases. After a sprint completes, a retrospective
should be held to assess what went well and what could be improved for the next sprint.
Because development is limited to the time-boxed sprint duration, the flexibility lies in choosing what can be developed in that time. Compare that to sequential models, where all the features are selected first and the time taken to develop all of them follows from that selection. Thus Scrum (and Agile) enable us to deliver the greatest value soonest, an approach first proposed in the 1980s.
Because the iterations are short, the increments are small, such as a few small
features or even a few enhancements or bug fixes.
Kanban
Kanban came from an approach to work in manufacturing at Toyota. It is a way of
visualizing work and workflow. A Kanban board has columns for different stages of
work, from initial idea through development and testing stages to final delivery to
users. The tasks are put on sticky notes which are moved from left to right through
the columns (like an assembly line for cars).
A key principle of Kanban is to have a limit for work-in-progress activities. If we
concentrate on one task, we are much more efficient at doing it, so this approach
is less wasteful than trying to do little bits of lots of different tasks. This focus on
eliminating waste makes this a lean approach.
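The work-in-progress limit described above can be sketched in a few lines of code. The column names, limits and task names below are illustrative assumptions, not from the text:

```python
# A sketch of a Kanban board with work-in-progress (WIP) limits. A task
# may only be pulled into a column if that column's limit is not reached.

board = {
    "to do": ["task D", "task E"],
    "in progress": ["task B", "task C"],
    "testing": ["task A"],
    "done": [],
}
wip_limits = {"in progress": 2, "testing": 2}  # no limit on "to do"/"done"

def can_pull(column: str) -> bool:
    """Return True if a new task may enter the column."""
    limit = wip_limits.get(column)
    return limit is None or len(board[column]) < limit

print(can_pull("in progress"))  # -> False: limit of 2 already reached
print(can_pull("testing"))      # -> True: one slot still free
```

The limit forces the team to finish work in "in progress" before starting anything new, which is the focus-over-multitasking point made above.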
There is also a strong focus on user and customer needs. Iterations can be a fixed
length to deliver a single feature or enhancement, or features can be grouped together
for delivery. Kanban can span more than one team’s work (as opposed to Scrum).
If user stories are grouped by feature, work may span more than one column on the
Kanban board, sometimes referred to as swim lanes.
Agile development
In this section, we will describe what Agile development is and then cover the
changes that this way of working brings to testing. This is additional to what you
44 Chapter 2 Testing throughout the software development life cycle
need to know for the exam, as the Syllabus does not specifically cover Agile devel-
opment (the ISTQB Foundation Level Agile Tester Extension Syllabus covers this),
but we hope this will give you useful background, especially if you are not familiar
with it.
Agile software development is a group of software development methodologies
based on iterative and incremental development, where requirements and solutions
evolve through collaboration between self-organizing cross-functional teams. Most
Agile teams use Scrum, as described above. Typical Agile teams are 5 to 9 people,
and the Agile manifesto describes ways of working that are ideal for small teams and that counteract the problems prevalent in the late 1990s, when the emphasis was on process and documentation. The Agile manifesto consists of four statements describing what is
valued in this way of working:
● individuals and interactions over processes and tools
● working software over comprehensive documentation
● customer collaboration over contract negotiation
● responding to change over following a plan.
While there are several Agile methodologies in practice, the industry seems to
have settled on the use of Scrum as an Agile management approach, and Extreme
Programming (XP) as the main source of Agile development ideas. Some character-
istics of project teams using Scrum and XP are:
● The generation of business stories (a form of lightweight use cases) to define the
functionality, rather than highly detailed requirements specifications.
● The incorporation of business representatives into the development process, as part of each iteration (called a sprint and typically lasting 2 to 4 weeks), providing continual feedback and defining and carrying out functional acceptance testing.
● The recognition that we cannot know the future, so changes to requirements
are welcomed throughout the development process, as this approach can pro-
duce a product that better meets the stakeholders’ needs as their knowledge
grows over time.
● The concept of shared code ownership among the developers, and the close
inclusion of testers in the sprint teams.
● The writing of tests as the first step in the development of a component, and the automation of those tests before any code is written. The component is complete when it passes the automated tests. This is known as test-driven development.
● Simplicity: building only what is necessary, not everything you can think of.
● The continuous integration and testing of the code throughout the sprint, at least
once a day.
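The test-driven development idea mentioned in the list above can be made concrete with a small sketch. The function `is_leap_year` and its rules are an illustrative example, not taken from the text:

```python
# A minimal test-driven development (TDD) sketch: the tests exist before
# the component, and the component is done when the tests pass.

# Step 1: write the automated tests first. Run against a missing or empty
# component, they fail - the expected starting point of the TDD cycle.
def test_is_leap_year():
    assert is_leap_year(2024) is True    # divisible by 4
    assert is_leap_year(1900) is False   # century year, not divisible by 400
    assert is_leap_year(2000) is True    # divisible by 400
    assert is_leap_year(2023) is False   # not divisible by 4

# Step 2: write the simplest component that makes the tests pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3: the component is complete when the automated tests pass.
test_is_leap_year()
```

Because the tests are automated from the outset, they can be rerun on every change as part of the daily continuous integration described in the last bullet.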
Proponents of the Scrum and XP approaches emphasize testing throughout the
process. Each iteration (sprint) culminates in a short period of testing, often with an
independent tester as well as a business representative. Developers are to write and
run test cases for their code, and leading practitioners use tools to automate those
tests and to measure structural coverage of the tests (see Chapters 4 and 6). Every
time a change is made in the code, the component is tested and then integrated with
the existing code, which is then tested using the full set of automated component
test cases. This gives continuous integration, by which we mean that changes are
incorporated continuously into the software build.
Agile development provides both benefits and challenges for testers. Some of the
benefits are:
●● The focus on working software and good quality code.
●● The inclusion of testing as part of and the starting point of software develop-
ment (test-driven development).
●● Accessibility of business stakeholders to help testers resolve questions about
expected behaviour of the system.
●● Self-organizing teams, where the whole team is responsible for quality and
gives testers more autonomy in their work.
●● Simplicity of design that should be easier to test.
There are also some significant challenges for testers when moving to an Agile
development approach:
●● Testers who are used to working with well-documented requirements will be
designing tests from a different kind of test basis: less formal and subject to
change. The manifesto does not say that documentation is no longer necessary
or that it has no value, but it is often interpreted that way.
●● Because developers are doing more component testing, there may be a perception that testers are not needed. But component testing, and confirmation-based acceptance testing by business representatives alone, may miss major problems. System testing, with its wider perspective and its emphasis on non-functional testing as well as end-to-end functional testing, is needed, even if it does not fit comfortably into a sprint.
●● The tester’s role is different: since there is less documentation and more per-
sonal interaction within an Agile team, testers need to adapt to this style of
working, and this can be difficult for some testers. Testers may be acting more
as coaches in testing to both stakeholders and developers, who may not have a
lot of testing knowledge.
●● Although there is less to test in one iteration than a whole system, there is also
a constant time pressure and less time to think about the testing for the new
features.
●● Because each increment is adding to an existing working system, regression
testing becomes extremely important, and automation becomes more beneficial.
However, simply taking existing automated component or component integra-
tion tests may not make an adequate regression suite.
Software engineering teams are still learning how to apply Agile approaches.
Agile approaches cannot be applied to all projects or products, and some testing
challenges remain to be surmounted with respect to Agile development. However,
Agile methodologies are showing promising results in terms of both development
efficiency and quality of the delivered code.
More information about testing in Agile development and iterative incremental
models can be found in books by Black [2017], Crispin and Gregory [2008] and
Gregory and Crispin [2015]. There is also an ISTQB certificate for the Foundation
Level Agile Tester Extension.
46 Chapter 2 Testing throughout the software development life cycle
2.2 TEST LEVELS
We have mentioned (and given the definition for) test levels in Section 2.1. The defi-
nition of ‘test level’ is ‘an instance of the test process’, which is not necessarily the
most helpful. The Syllabus here describes test levels as:
groups of test activities that are organized and managed together
The test activities were described in Chapter 1, Section 1.4 (test planning through
to test completion). When we talk about test levels, we are looking at those activi-
ties performed with reference to development levels (such as those described in the
V-model) from components to systems, or even systems of systems.
In this section, we’ll look in more detail at the various test levels and show how they are related to other activities within the software development life cycle. The key characteristics for each test level are discussed and defined, to be able to more clearly separate the various test levels. A thorough understanding and definition of the various test levels will identify missing areas and prevent overlap and repetition. Sometimes we may wish to introduce deliberate overlap to address specific risks. Understanding whether we want overlaps and removing the gaps will make the test levels more complementary, leading to more effective and efficient testing. We will look at four test levels in this section.

Test objective: A reason or purpose for designing and executing a test.

Test basis: The body of knowledge used as the basis for test analysis and design.

Test case: A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.

Test object: The component or system to be tested. See also: test item.

Test environment (test bed, test rig): An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.

As we go through this section, watch for the Syllabus terms acceptance testing, alpha testing, beta testing, component integration testing, component testing, contractual acceptance testing, integration testing, operational acceptance testing, regulatory acceptance testing, system integration testing, system testing, test basis, test case, test environment, test object, test objective and user acceptance testing. These terms are also defined in the Glossary.

While the specific test levels required for – and planned for – a particular project can vary, good practice in testing suggests that each test level has the following clearly identified:

●● Specific test objectives for the test level.
●● The test basis, the work product(s) used to derive the test conditions and test cases.
●● The test object (that is, what is being tested, such as an item, build, feature or system under test).
●● The typical defects and failures that we are looking for at this test level.
●● Specific approaches and responsibilities for this test level.

One additional aspect is that each test level needs a test environment. Sometimes an environment can be shared by more than one test level; in other situations, a particular environment is needed. For example, acceptance testing should have a test environment that is as similar to production as is possible or feasible.
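The attributes that good practice says should be identified for each test level can be captured as a simple data structure. This is only a sketch; the example values filled in for acceptance testing are illustrative summaries, not definitive:

```python
# A sketch of the per-test-level attributes listed above, held in a
# dataclass. The acceptance-testing values are illustrative examples.

from dataclasses import dataclass

@dataclass
class TestLevel:
    name: str
    objectives: list        # specific test objectives for the level
    test_basis: list        # work products used to derive test conditions
    test_object: str        # what is being tested
    typical_defects: list   # defects and failures looked for at this level
    environment: str        # the test environment this level needs

acceptance = TestLevel(
    name="acceptance testing",
    objectives=["establish confidence in the whole system",
                "validate fitness for use"],
    test_basis=["business processes", "user requirements"],
    test_object="system under test",
    typical_defects=["workflows do not meet business or user needs"],
    environment="as similar to production as possible",
)

print(acceptance.name, "->", acceptance.environment)
```

Filling in one such record per level is one way to check that objectives, bases and environments neither overlap unintentionally nor leave gaps.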
[Figure: component testing in isolation — a driver calls component A, and a stub stands in for component B.]
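The driver-and-stub arrangement in the figure can be sketched in code. The component names and the discount logic are illustrative assumptions, not from the text:

```python
# A sketch of component testing in isolation: component A normally calls
# component B, which is not yet available, so a stub stands in for B and
# a driver exercises A directly.

def component_a(order_total: float, get_discount) -> float:
    """Component under test: applies a discount obtained from component B."""
    return round(order_total * (1 - get_discount()), 2)

# Stub: replaces component B, returning a fixed, controllable answer.
def stub_component_b() -> float:
    return 0.10  # always report a 10% discount

# Driver: calls component A directly, without the real caller or real B.
def driver() -> float:
    result = component_a(200.0, stub_component_b)
    assert result == 180.0, f"unexpected total: {result}"
    return result

print(driver())  # -> 180.0
```

Because the stub's answer is fixed, any failure observed by the driver must lie in component A itself, which is the point of testing components separately.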
Examples of defects and failures that can typically be revealed by system integra-
tion testing include:
● inconsistent message structures between systems
● incorrect data, missing data or incorrect data encoding
● interface mismatch
● failures in communication between systems
● unhandled or improperly handled communication failures between systems
● incorrect assumptions about the meaning, units or boundaries of the data being
passed between systems
● failure to comply with mandatory security regulations.
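Several of the defect kinds listed above (missing data, inconsistent message structures, wrong assumptions about units) can be caught by validating messages at the boundary between systems. The field names and the rule that amounts are exchanged as integer cents are assumptions made for illustration:

```python
# A sketch of a system integration check on message structure and units.
# The message schema below is a hypothetical example.

REQUIRED_FIELDS = {"order_id": str, "amount_cents": int, "currency": str}

def validate_message(msg: dict) -> list:
    """Return a list of findings; an empty list means the message passes."""
    findings = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in msg:
            findings.append(f"missing data: {field}")
        elif not isinstance(msg[field], expected_type):
            findings.append(f"interface mismatch: {field}")
    # Units check: amounts must be whole cents, never fractional currency.
    if isinstance(msg.get("amount_cents"), float):
        findings.append("unit assumption violated: fractional cents")
    return findings

good = {"order_id": "A42", "amount_cents": 1999, "currency": "GBP"}
bad = {"order_id": "A43", "amount_cents": 19.99}  # float amount, no currency

assert validate_message(good) == []
print(validate_message(bad))
```

Running such checks against real traffic between the systems turns the bullet list above into executable integration tests.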
At the end of the description of all the test levels, see Table 2.1 which summarizes
the characteristics of each test level.
and influence integration planning. If integration tests are planned before components
or systems are built, they can be developed in the order required for most efficient
testing. A risk analysis of the most complex interfaces can help to focus integration
testing. In iterative and incremental development, integration is also incremental.
Existing integration tests should be part of the regression tests used in continuous
integration. Continuous integration has major benefits because of its iterative nature.
At each stage of integration, testers concentrate solely on the integration itself. For
example, if they are integrating component A with component B they are interested
in testing the communication between the components, not the functionality of either
one. In integrating system X with system Y, again the focus is on the communication
between the systems and what can be done by both systems together, rather than
defects in the individual systems. Both functional and structural approaches may be
used. Testing of specific non-functional characteristics (for example performance)
may also be included in integration testing.
Component integration testing is often carried out by developers; system integra-
tion testing is generally the responsibility of the testers. Either type of integration
testing could be done by a separate team of specialist integration testers, or by a
specialist group of developers/integrators, including non-functional specialists. The
testers performing the system integration testing need to understand the system archi-
tecture. Ideally, they should have had an influence on the development, integration
planning and integration testing.
2.2.4 Acceptance testing

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

When the development organization has performed its system test (and possibly also system integration tests) and has corrected all or most defects, the system may be delivered for acceptance testing. Acceptance tests typically produce information to assess the system’s readiness for release or deployment to end-users or customers. Although defects are found at this level, that is not the main aim of acceptance testing. (If lots of defects are found at this late stage, there are serious problems with the whole system, and major project risks.) The focus is on validation: the use of the system for real, and how suitable the system is to be put into production or actual use by its intended users. Regulatory and legal requirements, and conformance to standards, may also be checked in acceptance testing, although they should also have been addressed in an earlier level of testing, so that the acceptance test is confirming compliance to the standards.
customer, although other stakeholders may be involved as well. The execution of the
acceptance test requires a test environment that is, for most aspects, representative
of the production environment (‘as-if production’).
The goal of acceptance testing is to establish confidence in the system, part of the
system or specific non-functional characteristics, for example usability of the system.
Acceptance testing is most often focused on a validation type of testing, where we are
trying to determine whether the system is fit for purpose. Finding defects should not
be the main focus in acceptance testing. Although it assesses the system’s readiness
for deployment and use, it is not necessarily the final level of testing. For example, a
large-scale system integration test may come after the acceptance of a system.
Acceptance testing may occur at more than just a single level, for example:
●● A COTS software product may be acceptance tested when it is installed or
integrated.
●● Acceptance testing of the usability of a component may be done during compo-
nent testing.
●● Acceptance testing of a new functional enhancement may come before system
testing.
User acceptance testing focuses mainly on the functionality, thereby validating
the fitness for use of the system by the business user, while the operational acceptance
test (also called production acceptance test) validates whether the system meets the
requirements for operation. The user acceptance test is performed by the users and
application managers. In terms of planning, the user acceptance test usually links
tightly to the system test, and will, in many cases, be organized partly overlapping
in time. If the system to be tested consists of a number of more or less independent
subsystems, the acceptance test for a subsystem that meets its exit criteria from the
system test can start while another subsystem may still be in the system test phase.
In most organizations, system administration will perform the operational accept-
ance test shortly before the system is released. The operational acceptance test may
include testing of backup/restore, data load and migration tasks, disaster recovery,
user management, maintenance tasks and periodic check of security vulnerabilities.
Note that organizations may use other terms, such as factory acceptance testing
and site acceptance testing for systems that are tested before and after being moved
to a customer’s site.
In iterative development, different forms of acceptance testing may be done at
various times, and often in parallel. At the end of an iteration, a new feature may
be tested to validate that it meets stakeholder and user needs. This is user accept-
ance testing. If software for general release (COTS) is being developed, alpha
and beta testing may be used at or near the end of an iteration or set of iterations.
Operational and regulatory acceptance testing may also occur at the end of an
iteration or set of iterations.
Table 2.1 Characteristics of each test level

Component testing
  Objectives: reduce risk; verify functional and non-functional behaviour; build confidence in components; find defects; prevent defects escaping to higher levels.
  Test basis: detailed design; code; data models; component specifications.
  Test objects: components, units, modules; code; data structures; classes; database models.
  Typical defects and failures: wrong functionality; data flow problems; incorrect code/logic.

Integration testing
  Objectives: reduce risk; verify functional and non-functional behaviour; build confidence in interfaces; find defects; prevent defects escaping to higher levels.
  Test basis: software/system design; sequence diagrams; interface and communication protocol specs; use cases; architecture (component or system); workflows; external interface definitions.
  Test objects: subsystems; databases; infrastructure; interfaces; APIs; microservices.
  Typical defects and failures: data problems; inconsistent message structures (SIT); timing problems; interface mismatch; communication failures; incorrect assumptions; not complying with regulations (SIT).

System testing
  Objectives: reduce risk; verify functional and non-functional behaviour; validate completeness, works as expected; build confidence in whole system; find defects.
  Test basis: requirement specs (functional and non-functional); risk analysis reports; use cases; epics and user stories; models of system behaviour; state diagrams; system and user manuals.
  Test objects: applications; hardware/software systems; operating systems; system under test; system configuration and data.
  Typical defects and failures: incorrect calculations; incorrect or unexpected behaviour; incorrect data/control flows; cannot complete end-to-end tasks; does not work in production environment(s); not as described in manuals/documentation.

Acceptance testing
  Objectives: establish confidence in whole system and its use; validate completeness, works as expected; verify functional and non-functional behaviour.
  Test basis: business processes; user, business and system requirements; regulations, legal contracts and standards; use cases; documentation; installation procedures; risk analysis.
  Test objects: system under test (SUT); system configuration and data; business processes; recovery systems; operation and maintenance processes; forms; reports.
  Typical defects and failures: system workflows do not meet business or user needs; business rules not correct; contractual or regulatory problems; non-functional failures (performance, security).
2.3 TEST TYPES
In this section, we’ll look at different test types. We’ll discuss tests that focus on the
functionality of a system, which informally is testing what the system does. We’ll
also discuss tests that focus on non-functional attributes of a system, which infor-
mally is testing how well the system does what it does. We’ll introduce testing based
on the system’s structure. Finally, we’ll look at testing of changes to the system,
both confirmation testing (testing that the changes succeeded) and regression testing
(testing that the changes did not affect anything unintentionally).
The test types discussed here can involve the development and use of a model of
the software or its behaviours. Such models can occur in structural testing when we
use control flow models or menu structure models. Such models in non-functional
testing can involve performance models, usability models and security threat models.
They can also arise in functional testing, such as the use of process flow models, state
transition models or plain language specifications. Examples of such models will be
found in Chapter 4.
As we go through this section, watch for the Syllabus terms functional testing,
non-functional testing, test type and white-box testing. You will find these terms
defined in the Glossary as well.
Test types are introduced as a means of clearly defining the objective of a certain
test level for a program or project. We need to think about different types of testing
because testing the functionality of the component or system may not be sufficient at
each level to meet the overall test objectives. Focusing the testing on a specific test objective, and therefore selecting the appropriate type of test, makes it easier to make and communicate decisions about test objectives. Typical objectives may include:
● Evaluating functional quality, for example whether a function or feature is com-
plete, correct and appropriate.
● Evaluating non-functional quality characteristics, for example reliability, perfor-
mance efficiency, security, compatibility and usability.
● Evaluating whether the structure or architecture of the component or system is
correct, complete and as specified.
● Evaluating the effects of changes, looking at both the changes themselves (for
example defect fixes) and also the remaining system to check for any unintended
side-effects of the change. These are confirmation testing and regression testing,
respectively, and are discussed in Section 2.3.4.
Section 3 Test Types 63
Test type: A group of test activities based on specific test objectives aimed at specific characteristics of a component or system.

A test type is focused on a particular test objective, which could be the testing
of a function to be performed by the component or system; a non-functional quality
characteristic, such as reliability or usability; the structure or architecture of the
component or system; or related to changes, that is, confirming that defects have
been fixed (confirmation testing, or re-testing) and looking for unintended changes
(regression testing). Depending on its objectives, testing will be organized differently.
For example, component testing aimed at performance would be quite different from
component testing aimed at achieving decision coverage.
Black-box test techniques (see Chapter 4) may also be used for non-functional
testing, for example boundary value analysis to define the stress conditions for
performance tests, and equivalence partitioning to identify types of devices for
compatibility testing, or to identify user groups for usability testing (novice,
experienced, age range, geographical location, educational background).
The thoroughness of non-functional testing can be measured by the coverage of
non-functional elements. If we had at least one test for each major group of users, then
we would have 100% coverage of those user groups that we had identified. Of course,
we may have forgotten an important user group, such as those with disabilities, so we
have only covered the groups we have identified.
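As a rough illustration of this kind of coverage measure (the user-group names and numbers below are invented, not from any real project), the percentage of identified user groups exercised by at least one test might be computed as:

```python
# Hypothetical sketch: non-functional coverage as the fraction of identified
# user groups that are exercised by at least one usability test.
# The group names below are invented for illustration only.
identified_groups = {"novice", "experienced", "age 65+", "non-native speakers"}
tested_groups = {"novice", "experienced"}

covered = identified_groups & tested_groups
coverage = 100 * len(covered) / len(identified_groups)
print(f"User-group coverage: {coverage:.0f}%")   # 50% of identified groups
print("Untested groups:", sorted(identified_groups - tested_groups))
```

Note that the measure is only as good as the list of identified groups: a forgotten group simply does not appear in the denominator.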
If we have traceability between non-functional tests and non-functional require-
ments, we may be able to identify coverage gaps. For example, accessibility for
disabled users may be an implicit requirement with no tests tracing to it.
Special skills or knowledge may be needed for non-functional testing, such as for
performance testing, usability testing or security testing (for example for specific
development languages).
More about non-functional testing can be found in other ISTQB qualification
Syllabuses, including the Advanced Test Analyst, Advanced Technical Test Analyst,
Advanced Security Tester, Foundation Performance Testing and Foundation
Usability Testing Syllabuses.
Regression testing

Regression testing: Testing of a previously tested component or system following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made.

Like confirmation testing, regression testing involves executing test cases that have
been executed before. The difference is that, for regression testing, the test cases
probably passed the last time they were executed (compare this with the test cases
executed in confirmation testing – they failed the last time).

The term regression testing is something of a misnomer. It would be better if it
were called anti-regression testing, because we are executing tests with the intent
of checking that the system has not regressed (that is, it does not now have more
defects in it as a result of some change). More specifically, the purpose of regression
testing is to make sure (as far as is practical) that modifications in the software or
the environment have not caused unintended adverse side effects and that the system
still meets its requirements.
It is common for organizations to have what is usually called a regression test
suite or regression test pack. This is a set of test cases that is specifically used for
regression testing. They are designed to collectively exercise most functions (certainly
the most important ones) in a system, but not test any one in detail. It is appropriate
to have a regression test suite at every level of testing (component testing, integration
testing, system testing, etc.). In some cases, all of the test cases in a regression test
suite would be executed every time a new version of software is produced; this makes
them ideal candidates for automation. However, it is much better to be able to select
subsets for execution, especially if the regression test suite is very large. In Agile
development, a selection of regression tests would be run to meet the objectives of a
particular iteration. Automation of regression tests should start as early as possible
in the project. See Chapter 6 for more on test automation.
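One simple way to support subset selection is to tag each test case in the regression pack with the features it exercises. The sketch below illustrates the idea; the test and feature names are invented, and real projects would more likely use a test framework's own tagging mechanism (such as markers or labels):

```python
# Hypothetical sketch: a regression test pack whose automated test cases are
# tagged with the features they exercise, so that a subset can be selected
# for a particular iteration instead of always running the full suite.
from dataclasses import dataclass

@dataclass(frozen=True)
class RegressionTest:
    name: str
    features: frozenset

pack = [
    RegressionTest("test_login", frozenset({"auth"})),
    RegressionTest("test_checkout", frozenset({"basket", "payments"})),
    RegressionTest("test_search", frozenset({"catalogue"})),
]

def select(pack, changed_features):
    """Keep only the tests that exercise at least one changed feature."""
    return [t for t in pack if t.features & changed_features]

subset = select(pack, {"payments"})
print([t.name for t in subset])   # only the checkout test touches payments
```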
Regression tests are executed whenever the software changes, either as a result
of fixes or new or changed functionality. It is also a good idea to execute them when
some aspect of the environment changes, for example when a new version of the host
operating system is introduced or the production environment has a new version of
the Java Virtual Machine or anti-malware software.
Maintenance of a regression test suite should be carried out so it evolves over time
in line with the software. As new functionality is added to a system, new regression
tests should be added. As old functionality is changed or removed, so too should
regression tests be changed or removed. As new tests are added, a regression test
suite may become very large. If all the tests have to be executed manually it may not
be possible to execute them all every time the regression suite is used. In this case, a
subset of the test cases has to be chosen. This selection should be made considering
the latest changes that have been made to the software. Sometimes a regression test
suite of automated tests can become so large that it is not always possible to execute
them all. It may be possible and desirable to eliminate some test cases from a large
regression test suite, for example if they are repetitive (tests which exercise the same
conditions) or can be combined (if they are always run together). Another approach
is to eliminate test cases when the risk associated with that test is so low that it is not
worth running it anymore.
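The elimination of repetitive tests mentioned above can be sketched as a simple deduplication over the conditions each test exercises (the test names and conditions below are invented for illustration):

```python
# Hypothetical sketch: pruning a large regression suite by dropping tests
# that exercise exactly the same set of conditions as a test already kept.
tests = [
    ("TC-01", frozenset({"valid login"})),
    ("TC-02", frozenset({"valid login"})),             # repetitive: same as TC-01
    ("TC-03", frozenset({"invalid login", "lockout"})),
]

kept, seen = [], set()
for name, conditions in tests:
    if conditions in seen:
        continue          # duplicate coverage: a candidate for elimination
    seen.add(conditions)
    kept.append(name)

print(kept)   # TC-02 is eliminated; TC-01 and TC-03 remain
```

In practice the decision also has to weigh risk, so a test would not normally be dropped on coverage grounds alone.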
Both confirmation testing and regression testing are done at all test levels.
In iterative and incremental development, changes are more frequent, even con-
tinuous, and the software is refactored frequently. This makes confirmation testing
and regression testing even more important. Iterative development such as Agile
should also include continuous testing, which is mainly regression testing.
For IoT systems, change-related testing covers not only software systems but the
changes made to individual objects or devices, which may be frequently updated or
replaced.
2.4 MAINTENANCE TESTING
Once deployed, a system is often in service for years or even decades. During this
time, the system and its operational environment are often corrected, changed or
extended. As we go through this section, watch for the Syllabus terms impact analysis
and maintenance testing. You will find these terms also defined in the Glossary.
Maintenance testing: Testing the changes to an operational system or the impact of a changed environment to an operational system.

Testing that is executed during this life cycle phase is called maintenance testing.
Maintenance testing, along with the entire process of maintenance releases, should
be carefully planned. Not only must planned maintenance releases be considered, but
the process for developing and testing hot fixes must be as well. Maintenance testing
includes any type of testing of changes to an existing, operational system, whether
the changes result from modifications, migration or retirement of the software
or system.
Modifications can result from planned enhancement changes such as those
referred to as minor releases, that include new features and accumulated (non-
emergency) bug fixes. Modifications can also result from corrective and more urgent
emergency changes. Modifications can also involve changes of environment, such
as planned operating system or database upgrades, planned upgrade of COTS soft-
ware, or patches to correct newly exposed or discovered vulnerabilities of the
operating system.
Migration involves moving from one platform to another. This can involve
abandoning a platform no longer supported or adding a new supported platform.
Either way, testing must include operational tests of the new environment as well
as of the changed software. Migration testing can also include conversion test-
ing, where data from another application will be migrated into the system being
maintained.
Note that maintenance testing is different from testing for maintainability (which
is the degree to which a component or system can be modified by the intended main-
tainers). In this section, we’ll discuss maintenance testing.
The same test process steps will apply as for testing during development and,
depending on the size and risk of the changes made, several levels of testing are car-
ried out: a component test, an integration test, a system test and an acceptance test.
If testing is done more formally, a change request may be used to produce a test
plan for testing the change, with test cases changed or created as needed. In less
formal testing, thought still needs to be given to how the change should be tested,
even if this planning, updating of test cases and execution of the tests is part of a
continuous process.
The scope of maintenance testing depends on several factors, which influence the
test types and test levels. The factors are:
● Degree of risk of the change, for example a self-contained change is a lower risk
than a change to a part of the system that communicates with other systems.
● The size of the existing system, for example a small system would need less
regression testing than a larger system.
● The size of the change, which affects the amount of testing of the changes that
would be needed. The amount of regression testing is more related to the size of
the system than the size of the change.
Maintenance testing will consist of two parts:
● testing the changes themselves
● regression tests to show that the rest of the system has not been affected by the
changes.
A major activity within maintenance testing is impact analysis: deciding which
parts of the system may be unintentionally affected and therefore need more
extensive regression testing. Risk analysis will help to decide where to focus
regression testing. It is unlikely that the team will have time to repeat all the
existing tests, so this gives us the best value for the time and effort we can spend
in regression testing.
If the test specifications from the original development of the system are kept, one
may be able to reuse them for regression testing and to adapt them for changes to
the system. This may be as simple as changing the expected results for your existing
tests. Sometimes additional tests may need to be built. Extension or enhancement to
the system may mean new areas have been specified and tests would be drawn up just
as for the development. Do not forget that automated regression tests will also need
to be updated in line with the changes; this can take significant effort, depending on
the architecture of your automation (see Chapter 6).
Impact analysis can also be used to help make a decision about whether or not a
particular change should be made. If the change has the potential to cause high-risk
vulnerabilities throughout the system, it may be a better decision not to make that
change.
There are a number of factors that make impact analysis more difficult:
● Specifications are out of date or missing (for example business requirements,
user stories, architecture diagrams).
● Test cases are not documented or are out of date.
● Bi-directional traceability between tests and the test basis has not been
maintained.
● Tool support is weak or non-existent.
● The people involved do not have domain and/or system knowledge.
● The maintainability of the software has not been taken into enough consideration
during development.
Impact analysis can be very useful in making maintenance testing more efficient,
but if it is not, or cannot be, done well, then the risks of making the change are greatly
increased.
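The role of bi-directional traceability in impact analysis can be sketched as follows. All identifiers here are invented; a real project would hold this mapping in a test management or requirements tool rather than in code:

```python
# Hypothetical sketch: traceability from requirements to test cases, used for
# a simple impact analysis of a proposed change.
req_to_tests = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": [],                  # a gap: no test traces to this requirement
}

def impacted_tests(changed_reqs):
    """Tests to re-run (confirmation/regression) for the requirements a change touches."""
    return sorted({t for r in changed_reqs for t in req_to_tests.get(r, [])})

def untraced_requirements():
    """Requirements with no traced test: impact analysis is blind here."""
    return [r for r, tests in req_to_tests.items() if not tests]

print(impacted_tests({"REQ-1", "REQ-2"}))   # ['TC-01', 'TC-02', 'TC-03']
print(untraced_requirements())              # ['REQ-3']
```

When the traceability mapping itself is out of date or missing, as in the list of difficulties above, this kind of analysis cannot be trusted, which is exactly why the risk of the change increases.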
CHAPTER REVIEW
Let’s review what you have learned in this chapter.
From Section 2.1, you should now understand the relationship between devel-
opment activities and test activities within a development life cycle and be familiar
with sequential life cycle models (waterfall and V-model) and iterative/incremental
life cycle models (RUP, Scrum, Kanban and Spiral). You should be able to recall the
reasons for different levels of testing and characteristics of good testing in any life
cycle model. You should be able to give reasons why software development life cycle
models need to be adapted to the context of the project and product being developed.
You should know the Glossary terms commercial off-the-shelf (COTS), sequential
development model and test level.
From Section 2.2, you should know the typical levels of testing (component,
integration, system and acceptance testing). You should be able to compare the differ-
ent levels of testing with respect to their major objectives, the test basis, typical objects
of testing, typical defects and failures, and approaches and responsibilities for each
test level. You should know the Glossary terms acceptance testing, alpha testing,
beta testing, component integration testing, component testing, contractual
acceptance testing, integration testing, operational acceptance testing, regulatory
acceptance testing, system integration testing, system testing, test basis, test case,
test environment, test object, test objective and user acceptance testing.
From Section 2.3, you should know the four major types of test (functional,
non-functional, structural and change-related) and should be able to provide some
concrete examples for each of these. You should understand that functional and
structural tests occur at any test level and be able to explain how they are applied in
the various test levels. You should be able to identify and describe non-functional
test types based on non-functional requirements and product quality characteristics.
Finally, you should be able to explain the purpose of confirmation testing (re-testing)
and regression testing in the context of change-related testing. You should know the
Glossary terms functional testing, non-functional testing, test type and white-box
testing.
From Section 2.4, you should be able to compare maintenance testing to testing of
new applications. You should be able to identify triggers and reasons for maintenance
testing, such as modifications, migration and retirement. Finally, you should be able to
describe the role of regression testing and impact analysis within maintenance testing.
You should know the Glossary terms impact analysis and maintenance testing.
Sample Exam Questions 73