
Fundamentals of Testing

Testing is necessary to verify that software, medical devices, and other systems meet requirements and function as intended before being deployed, because failures caused by defects can have serious consequences, as seen in examples like the Ariane 5 rocket explosion and the 2003 Northeast blackout. Proper testing at various stages of the development lifecycle helps reduce risks, find and fix defects, build confidence in quality, and prevent failures from occurring once systems are in operation. Testing contributes to quality assurance by providing information about the software or system under test and helping to measure and increase confidence in its quality through the identification and removal of defects.

1. Fundamentals of Testing
1.1 What is Testing?
Why is testing necessary?
A few questions:
1. Would we trust a medical device that has not been tested?
2. Would we use a car whose speedometer does not work properly?
3. Would we read e-books full of typos?
4. Would we live in an intelligent house that cools instead of heating?
5. Would we use phone applications that keep freezing?

DO WE NEED TESTING?
What is Testing?

Let's ask a few more questions ...

WHAT IS TESTING FOR YOU?


What is Testing?

 Testing
The process consisting of all lifecycle activities, both static and dynamic, concerned with:

• planning,
• preparation,
• and evaluation of a component or system and related work products

to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects.
What is Testing?
Test activities:

• Test planning,
• Identifying features and sets of features to be tested,
• Designing test cases and sets of test cases,
• Checking test results,
• Checking entry and exit criteria,
• Creating a test summary report,
• Code and documentation review (static testing).
What is Testing?
Verification
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
Check that the application conforms to the specification.

Validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
Check that the application is correct and client expectations have been fulfilled.


1.1.1 Typical Objectives of Testing
Objectives of testing

Objectives of testing:

• To prevent defects by evaluating work products such as requirements, user stories, design, and code,
• To verify whether all specified requirements have been fulfilled,
• To check whether the test object is complete and validate if it works as the users and other stakeholders expect,
• To build confidence in the level of quality of the test object,
• To find defects and failures,
• To provide sufficient information to stakeholders to allow them to make informed decisions,
• To comply with contractual, legal, or regulatory requirements or standards, and/or to verify the test object's compliance with such requirements or standards.
Objectives of testing
The objectives of testing can vary, depending upon the context of the component or system being
tested, the test level, and the software development lifecycle model.

For example:

Test level | Objective

Component testing:
• Reducing risk.
• Finding defects in the component.
• Building confidence in the component's quality.

Acceptance testing:
• Validating that the system is complete and will work as expected.
• Verifying that functional and non-functional behaviors of the system are as specified.
1.1.2 Testing and Debugging
Testing is not debugging

 Debugging
The process of finding, analyzing and removing the causes of failures in a component or system.

Debugging is a development activity that finds, analyzes, and fixes such defects. Subsequent confirmation testing checks whether the fixes resolved the defects.

testers --> TESTING
developers --> DEBUGGING
1.2 Why is Testing Necessary?
Why is Testing Necessary?
Rigorous testing of components and systems, and their associated documentation, can help reduce the risk of failures occurring during operation. When defects are detected and subsequently fixed, this contributes to the quality of the components or systems.

In addition, software testing may also be required to meet contractual or legal requirements or industry-specific standards.
1.2.1 Testing Contributions to Success
Testing Contributions to Success
Ariane 5 received a navigation platform from Ariane 4 on a copy-paste basis. Nobody checked that the loads and flight path differed significantly from those of Ariane 4.

During the first phase of flight (30 seconds after lift-off from the Kourou ELA-3 platform), the autopilot began to rapidly adjust the position of the booster nozzles and did the same with the main engine nozzle. The reason for this action was incorrect data supplied by the navigation computer to the control module.

Where did the wrong data come from?

The flight control software was written in Ada. The main cause of the catastrophe was an integer overflow.

The error occurred when an unprotected number conversion from 32 bits to 16 bits was performed.
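The flight code was Ada; the following Python sketch is purely illustrative (not the actual flight software) and shows how narrowing a 32-bit value to a signed 16-bit integer without a range check silently produces a wrong value:

```python
# Illustrative only: unchecked narrowing of a 32-bit value to signed 16 bits.
def to_int16_unchecked(value: int) -> int:
    """Keep the low 16 bits and reinterpret them as a signed 16-bit integer."""
    truncated = value & 0xFFFF
    return truncated - 0x10000 if truncated >= 0x8000 else truncated

print(to_int16_unchecked(30_000))   # fits in 16 bits -> 30000
print(to_int16_unchecked(70_000))   # does not fit -> 4464, silently wrong
```

A range check before the conversion (or a protected conversion that raises an exception) would have turned the silent corruption into a detectable error.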
Testing Contributions to Success
For over 50 million people, these were three days of fear, panic, and a real-life lesson. All because of a single server in a small control room of the FirstEnergy corporation. Due to a minor error in the software, a small and seemingly harmless failure turned into an uncontrolled energy disaster. As many as 265 power plants failed!
All of New York was plunged into darkness. Within moments, millions of people lost access to electricity.

After the job queue filled up, the primary server crashed, followed by the failover server. The operators could not react because the error increased the delay in refreshing the control-screen displays from 2 seconds to 59! After a few alarming phone calls and the first breakdowns, the technicians, looking at the monitors, reassured the callers that everything looked OK.
Testing Contributions to Success
Using appropriate test techniques can reduce the frequency of such problematic deliveries, when those techniques are applied with the appropriate level of test expertise, in the appropriate test levels, and at the appropriate points in the software development lifecycle.

Examples include:

• Having testers involved in requirements reviews or user story refinement could detect defects in these work products.
• Having testers work closely with system designers while the system is being designed can increase each party's understanding of the design and how to test it.
• Having testers work closely with developers while the code is under development can increase each party's understanding of the code and how to test it.
• Having testers verify and validate the software prior to release can detect failures that might otherwise have been missed, and support the process of removing the defects that caused the failures (i.e., debugging).
1.2.2 Quality Assurance and Testing
Quality Assurance and Testing

 Quality
The degree to which a component or system satisfies the stated and implied needs of its various stakeholders.

Testing
Testing contributes to the achievement of quality in a variety of ways.
Test results provide information about software quality and make it possible to measure software quality.
Test results help build confidence in software quality if few or no defects are found.
1.2.3 Errors, Defects, and Failures
Errors / Defects / Failures

 Error
A human action that produces an incorrect result.

 Defect
An imperfection or deficiency in a work product where it does not meet its requirements or specifications.

 Failure
An event in which a component or system does not perform a required function within specified limits.
Errors / Defects / Failures
ERRORS may occur for many reasons, such as:
• Time pressure,
• Human fallibility,
• Inexperienced or insufficiently skilled project participants,
• Miscommunication between project participants, including miscommunication about requirements and design,
• Complexity of the code, design, architecture, the underlying problem to be solved, and/or the technologies used,
• Misunderstandings about intra-system and inter-system interfaces, especially when such intra-system and inter-system interactions are large in number,
• New, unfamiliar technologies.

FAILURES may be caused by:
• defects in the code,
• environmental conditions (radiation, electromagnetic fields, pollution).
Errors / Defects / Failures
Not all unexpected test results are failures:

False positives are reported as defects but aren't actually defects; they may be caused by:
• errors in the way tests were executed,
• defects in the test data,
• defects in the test environment, or other testware.

False negatives are tests that do not detect defects that they should have detected.
1.2.4 Defects, Root Causes and Effects
Defects, Root Causes and Effects
AT&T network failure.
On January 15, 1990, there was a major failure of the telecommunications network in the United States. Starting at 2:25 AM, the AT&T operations center in Bedminster began receiving warning messages from various parts of the network. The failure spread rapidly. The problem turned out to be a defect in code intended to improve the speed of message processing.
The cost of this failure is estimated at $60 million.

The effects of the AT&T failure were the complaints from customers who could not make calls and use the network.
The defects were the BTS stations that were not working properly.
The root cause was a problem in the code: incorrect use of the break instruction.
1.3 Seven Testing Principles
Seven Testing Principles
1. Testing shows the presence of defects, not their absence
Testing can show that defects are present, but cannot prove that there are no defects. Testing
reduces the probability of undiscovered defects remaining in the software but, even if no defects are
found, testing is not a proof of correctness.

2. Exhaustive testing is impossible



 Exhaustive testing
A test approach in which the test suite comprises all combinations of input values and preconditions.
Seven Testing Principles
We want to test the requirement for two-step verification for our bank account.
We need to enter the phone number (9 digits) and the PIN code (4 digits) received from the bank.
The login form looks like this:

Can we test all combinations of inputs? If so, how?


Seven Testing Principles
Theoretically, all combinations are possible ...?!

Phone number = 10^9 combinations
PIN code = 10^4 combinations
Phone number + PIN code = 10^9 * 10^4 = 10^13 combinations

And ... do we still test everything??

Try to calculate how long it would take for a test script that checks one combination in 1 millisecond ;-).
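The arithmetic above can be checked in a few lines. This sketch assumes exactly one test per millisecond, as the slide suggests:

```python
# Exhaustive-testing back-of-the-envelope: every phone/PIN pair at 1 ms each.
phone_combinations = 10 ** 9            # 9-digit phone number
pin_combinations = 10 ** 4              # 4-digit PIN
total = phone_combinations * pin_combinations   # 10^13 combinations

seconds = total / 1000                  # 1 millisecond per combination
years = seconds / (60 * 60 * 24 * 365)
print(f"{total:.0e} combinations -> about {years:.0f} years")  # about 317 years
```

Roughly three centuries for one tiny login form, which is why principle 2 says exhaustive testing is impossible and we must use risk and priorities to focus the test effort instead.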
Seven Testing Principles
3. Early testing saves time and money
ASAP ... as soon as possible
Testing early in the software development lifecycle helps reduce or eliminate costly changes.

Reference: https://jasonofflorida.com/seven-software-testing-principles/


Seven Testing Principles
4. Defects cluster together
A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. We can apply the Pareto principle here: 80% of the effects are the result of 20% of the causes.

Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important input into a risk analysis used to focus the test effort.
Seven Testing Principles
5. Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually these tests no longer find any new defects.

You should review and update your tests regularly.

In some cases, such as automated regression testing, the pesticide paradox has a beneficial outcome, which is the relatively low number of regression defects.
Seven Testing Principles
Let's go back to the client's login form for the bank, using the phone number and PIN code.
The login form looks like this:

When can the pesticide paradox occur with such simple testing?
Seven Testing Principles
The pesticide paradox in this case can be associated with test data.
If we always use the same phone number (694 733 000) and the same PIN code (1234), we ignore many of the conditions that may contribute to a failure.

• What if a phone number from another operator is not recognized by the system?
• What if our number is inactive?
• We will also not check whether another (default) PIN code is rejected by the system.
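One way to fight this form of the pesticide paradox is to parameterize the test data instead of hard-coding one phone/PIN pair. A minimal sketch; the extra phone numbers and the "default PIN" value are hypothetical examples, not from the slides:

```python
# Vary test data to cover conditions a single fixed pair would never exercise.
import itertools

phones = ["694733000",  # the pair always used so far (from the slides)
          "600000001",  # a number from another operator (hypothetical)
          "511222333"]  # an inactive number (hypothetical)
pins = ["1234",         # the usual PIN
        "0000"]         # a default PIN that should be rejected (hypothetical)

test_data = list(itertools.product(phones, pins))   # 3 * 2 = 6 combinations
for phone, pin in test_data:
    print(f"login attempt: phone={phone}, pin={pin}")
```

Each execution now probes a different condition, so the same test logic keeps finding new defects.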
Seven Testing Principles
6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical industrial control software is tested differently from an e-commerce mobile app.

There will be a different approach to testing for each of these systems.
Seven Testing Principles
How we approach the tested system depends on many factors:
• business requirements,
• risk level,
• project lifecycle (e.g. SCRUM, V-Model),
• system criticality.

For testing to be effective, it must always be "tailor-made".


Seven Testing Principles
7. Absence-of-errors is a fallacy
It is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of defects will ensure the success of a system.

Thoroughly testing all specified requirements and fixing all defects found could still produce a system that is difficult to use, that does not fulfill the users' needs and expectations, or that is inferior compared to other competing systems.
1.4 Test Process
1.4.1 Test Process in Context
Test Process in Context

 Test process
The set of interrelated activities comprising test planning, test monitoring and control, test analysis, test design, test implementation, test execution, and test completion.

There is no single universal software test process, but there are common sets of test activities without which testing will be less likely to achieve its established objectives.
Test Process in Context
Contextual factors that influence the test process for an organization include, but are not limited to:

• Software development lifecycle model and project methodologies being used,
• Test levels and test types being considered,
• Product and project risks,
• Business domain,
• Operational constraints, including but not limited to:
  • Budgets and resources,
  • Timescales,
  • Complexity,
  • Contractual and regulatory requirements,
• Organizational policies and practices.
Test Process in Context
It is very useful if the test basis (for any level or type of testing being considered) has measurable coverage criteria defined.

 Key Performance Indicator (KPI)

A Key Performance Indicator is a measurable value that demonstrates how effectively a company is achieving key business objectives.
Test Process in Context
KPIs in the test process, for example:

• Average time to resolve a failure/defect.
• Average time from defect reporting to attempting a solution.
• Percentage of requirements not completed on time.
• Percentage of source code covered by tests (e.g. unit tests).
• The ratio of critical defects to all defects.
• Defects per release.
• Percentage of hours devoted to fixing defects relative to all time spent on code development.
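Two of the KPIs listed above can be computed directly from raw counts. A small sketch; all the numbers below are invented for illustration:

```python
# Hypothetical raw counts for one release (invented for illustration).
critical_defects = 3
all_defects = 40
covered_statements = 850
total_statements = 1000

# KPI: ratio of critical defects to all defects.
critical_ratio = critical_defects / all_defects
# KPI: percentage of source code covered by (unit) tests.
coverage_pct = 100 * covered_statements / total_statements

print(f"critical defect ratio: {critical_ratio:.1%}")   # 7.5%
print(f"statement coverage: {coverage_pct:.0f}%")       # 85%
```

In practice these counts would come from the defect tracker and a coverage tool rather than being typed in by hand.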
Test Process in Context
The test process should be adapted to the software development lifecycle model.

Software development lifecycle model -> Chapter 2.


1.4.2 Test Activities and Tasks
Test Process

Test Activities:
Test planning,
Test monitoring and control,
Test analysis,
Test design,
Test implementation,
Test execution,
Test completion.
Test planning
RESPONSIBLE: Test manager

Create a test plan:

• Determining the scope, objectives, and risks of testing,
• Defining the overall approach of testing,
• Scheduling test analysis, design, implementation, execution, and evaluation activities,
• Selecting metrics for test monitoring and control,
• Budgeting for the test activities.

A detailed description of the test plan is presented in chapter 5.2.1, Purpose and Content of a Test Plan.
Test planning

 Entry criteria (definition of ready)
The set of conditions for officially starting a defined task.

 Exit criteria (definition of done)
The set of conditions for officially completing a defined task.
Test analysis
RESPONSIBLE: Tester
During test analysis, the test basis is analyzed to identify testable features and define associated test conditions.

Test analysis includes the following major activities:

• Analyzing the test basis appropriate to the test level being considered,
• Evaluating the test basis and test items to identify defects of various types,
• Identifying features and sets of features to be tested,
• Defining and prioritizing test conditions for each feature based on analysis of the test basis,
• Capturing bi-directional traceability between each element of the test basis and the associated test conditions.
Test analysis
Keywords

 Requirement
A provision that contains criteria to be fulfilled.

 Test condition
A testable aspect of a component or system identified as a basis for testing.

 Test basis
The body of knowledge used as the basis for test analysis and design.
Test analysis - requirements
Requirement (IREB definition)
1. A need perceived by a stakeholder.
2. A capability or property that a system shall have.
3. A documented representation of a need, capability or property.

A requirement is a description of a solution: a set of features required by the user, customer or other stakeholder to achieve an objective.

The purpose of the requirement should be known and defined.

Test analysis - User stories

User stories
A user story is a convenient form of expressing expected business value. User stories are written in such a way that they can be understood both by people from the business side of the project and by engineers. They are simple in structure and provide a good platform for conversation.
Test analysis - User stories template
A correct high-level story (not very detailed) should contain:

• Title - the short name of the requirement

• User story
   As a ...... < type of user >
   I want ...... < some goal >
   so that ...... < some reason >.
Test analysis - User stories example
User registration
   As an unregistered user,
   I want to register on the site,
   so that I am able to log in.

Site navigation
   As a user,
   I want to access the top menu of the application from anywhere in the application,
   so that I can navigate the site easily.
Test analysis - Test condition
An example is a properly functioning ATM.

ATM functional requirements:


• allows you to withdraw cash,
• allows you to make quick withdrawals,
• allows you to make a transfer,
• allows you to check the account balance.
Test analysis - Test condition
STEP 1. Identifying the test object.
The test object is the ATM software.

STEP 2. Identifying test conditions.

When identifying test conditions, we should base them on the requirements the client presented for the designed system. In our case these will be, e.g.:

Requirement: allows you to withdraw cash

• TCo1 (Test condition): successful cash withdrawal,
• TCo2: refusal to withdraw cash.
Test analysis - identify defects
Evaluating the test basis and test items to identify defects of various types, such as:
• Ambiguities,
• Omissions,
• Inconsistencies,
• Inaccuracies,
• Contradictions,
• Superfluous statements.

The identification of defects during test analysis is an important potential benefit, especially where no other review process is being used and/or the test process is closely connected with the review process. Such test analysis activities not only verify whether the requirements are consistent, properly expressed, and complete, but also validate whether the requirements properly capture customer, user, and other stakeholder needs.
Test design
RESPONSIBLE: Tester
During test design, the test conditions are elaborated into high-level test cases, sets of high-level test cases, and other testware.

Test design includes the following major activities:

• Designing and prioritizing test cases and sets of test cases,
• Identifying necessary test data to support test conditions and test cases,
• Designing the test environment and identifying any required infrastructure and tools,
• Capturing bi-directional traceability between the test basis, test conditions, and test cases.
Test design - test case

 Test case
A set of preconditions, inputs, actions (where applicable), expected results and postconditions, developed based on test conditions.
Test design - test case

An example is a properly functioning ATM, the requirements of which we discussed earlier.

STEP 1. Identifying test conditions.

TCo: allows you to withdraw cash
Test design - test case
STEP 2. Create a test case.

ID: TC1
TC Name: Successful cash withdrawal from an ATM
Input:
• Correct debit card
• Correct PIN = 2233
• Account balance = 10 000 $
• Withdrawal amount = 50 $
Precondition: A withdrawal can only be made from a functioning ATM that displays the welcome screen, using a valid card and entering the correct PIN.
Expected result:
• Successful withdrawal of 50 $
• Account balance = 9 950 $
• The ATM is working, the welcome screen is displayed

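The TC1 table can also be read as an executable check. In this sketch the `Atm` class is a made-up stand-in (not part of the slides) that encodes only the precondition, inputs, and expected results of TC1:

```python
# Hypothetical Atm stand-in; just enough behavior for TC1.
class Atm:
    def __init__(self, balance: int):
        self.balance = balance          # precondition: account balance

    def withdraw(self, amount: int) -> int:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount                   # cash dispensed

def test_tc1_successful_withdrawal():
    atm = Atm(balance=10_000)           # input: account balance = 10 000 $
    dispensed = atm.withdraw(50)        # input: withdrawal amount = 50 $
    assert dispensed == 50              # expected: 50 $ successfully withdrawn
    assert atm.balance == 9_950         # expected: account balance = 9 950 $

test_tc1_successful_withdrawal()
print("TC1 passed")
```

The point is the mapping: every row of the test case table (precondition, input, expected result) becomes a line of setup or an assertion.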
Test design - test case

The test cases can be divided into high-level test cases and low-level test cases.
Test design - test case

High-level test cases:
• More general; leave room for the tester's interpretation.
• Easier to maintain.
• Provide greater test coverage, because each execution may differ slightly (e.g. using other test data).
• A better choice if we do not have a well-described requirement, or the functionality is not yet implemented.
• Poor repeatability.
• May require good application knowledge or testing experience.

Low-level test cases:
• Describe the test process in detail, with carefully written steps and often test data.
• Guarantee repeatability: each test case run will be the same.
• Do not require a lot of experience from the tester or in-depth knowledge of the application.
• Can be difficult to maintain: a change in the app can force us to fix many test cases.
Test design - test case
High-level test case
ID: CG001
TC name: Successful login into the system.
Steps:
Enter the correct login information in the required fields.
Expected result: The user has been logged into the application.

Low-level test case
ID: CG002
TC name: Successful login into the system.
Precondition: The user is registered in the application.
Steps:
1. Type "John" into the text field "Login".
2. Type "mypass" into the text field "Password".
3. Click the "Login" button.
Expected result: The user has been logged into the application.
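Because the low-level case CG002 has exact steps and data, it is a natural candidate for automation. This sketch uses a made-up `FakeLoginPage` as a stand-in for a real UI driver (e.g. a Selenium page object); the class and its methods are hypothetical:

```python
# Made-up UI stand-in: just enough to execute CG002's three steps.
class FakeLoginPage:
    VALID = {"Login": "John", "Password": "mypass"}

    def __init__(self):
        self.fields = {}
        self.logged_in = False

    def type(self, field: str, text: str):
        self.fields[field] = text

    def click_login(self):
        # Login succeeds only when both fields match the registered user.
        self.logged_in = self.fields == self.VALID

page = FakeLoginPage()
page.type("Login", "John")        # step 1
page.type("Password", "mypass")   # step 2
page.click_login()                # step 3
assert page.logged_in             # expected result: user is logged in
print("CG002 passed")
```

Note how the rigid steps give perfect repeatability, but a renamed field or changed button would break the script, exactly the maintenance trade-off described in the table above.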
Test implementation
RESPONSIBLE: Tester

Test implementation includes the following major activities:

• Developing and prioritizing test procedures and, potentially, creating automated test scripts,
• Creating test suites from the test procedures and (if any) automated test scripts,
• Arranging the test suites within a test execution schedule in a way that results in efficient test execution,
• Building the test environment,
• Preparing test data and ensuring it is properly loaded in the test environment,
• Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test procedures, and test suites.
Test implementation
Keywords

 Test procedure
A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution.
(per ISO/IEC/IEEE 29119-2 (2013), Software and systems engineering — Software testing — Part 2: Test processes)

 Test suite
A set of test scripts or test procedures to be executed in a specific test run.
Test implementation
The following test cases were prepared for the ATM:

TC1 - Debit card activation,
TC2 - Debit card authorization at an ATM,
TC3 - Successful cash withdrawal,
TC4 - Transaction confirmation printout,
TC5 - Checking account status,
TC6 - Correct execution of a quick transfer,
TC7 - Refusal to make a quick transfer.
Test implementation
Test procedures:
• Test procedure 1 (TPr_1): Successful cash withdrawal - TC2, TC3, TC4
• TPr_2: Account balance check - TC2, TC5

Test suite:
• Test suite 1 (TS_1) - Manual testing of ATM operation according to test procedures TPr_1 and TPr_2; the sequence of tests performed follows the procedure descriptions.
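The procedure and suite structure above can be written down as plain data: a test procedure is an ordered list of test case IDs, and a suite is an ordered list of procedures. Flattening the suite gives the execution order (a sketch, not a prescribed tool format):

```python
# Procedures map to ordered test case IDs; a suite orders the procedures.
procedures = {
    "TPr_1": ["TC2", "TC3", "TC4"],   # successful cash withdrawal
    "TPr_2": ["TC2", "TC5"],          # account balance check
}
suite_ts_1 = ["TPr_1", "TPr_2"]

# The execution schedule for TS_1 is the flattened sequence.
execution_order = [tc for proc in suite_ts_1 for tc in procedures[proc]]
print(execution_order)   # ['TC2', 'TC3', 'TC4', 'TC2', 'TC5']
```

Note that TC2 (card authorization) runs twice, once per procedure, because each procedure must set up its own preconditions.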
Test execution
RESPONSIBLE: Tester
Test execution includes the following major activities:

• Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware,
• Executing tests either manually or by using test execution tools,
• Comparing actual results with expected results,
• Analyzing anomalies to establish their likely causes,
• Reporting defects based on the failures observed,
• Logging the outcome of test execution,
• Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing.
Test completion
RESPONSIBLE: Test manager

Test completion activities are performed when project milestones are achieved, such as:

• successful completion of the implementation,
• completion (or cancellation) of the project,
• completion of an iteration of an agile project,
• completion of the test level or completion of work on a maintenance release.
Test completion
Test completion includes the following major activities:

• Checking whether all defect reports are closed, entering change requests or product backlog items for any defects that remain unresolved at the end of test execution,
• Creating a test summary report to be communicated to stakeholders,
• Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later reuse,
• Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who could benefit from its use,
• Analyzing lessons learned from the completed test activities to determine changes needed for future iterations, releases, and projects,
• Using the information gathered to improve test process maturity.
Test monitoring and control
RESPONSIBLE: Test manager
Test monitoring involves the ongoing comparison of actual progress against planned progress using any test monitoring metrics defined in the test plan.
Test control involves taking actions necessary to meet the objectives of the test plan (which may be updated over time).

For example, the evaluation of exit criteria for test execution as part of a given test level may include:

• Checking test results and logs against specified coverage criteria,
• Assessing the level of component or system quality based on test results and logs,
• Determining if more tests are needed.

Test monitoring and control are further explained in section 5.3.
1.4.3 Test Work Products
Test Work Products
Test work products are created as part of the test process. Just as there is significant variation in the way that organizations implement the test process, there is also significant variation in the types of work products created during that process, in the ways those work products are organized and managed, and in the names used for those work products.

Test process | Products

Test planning:
• one or more test plans

Test analysis:
• defined and prioritized test conditions
• bidirectional traceability to the specific element(s) of the test basis each condition covers
• test charters (for exploratory testing)

Test design:
• the design of high-level test cases
• the design and/or identification of the necessary test data
• the design of the test environment
• the identification of infrastructure and tools


Test Work Products
Test process | Products

Test implementation:
• Test procedures and the sequencing of those test procedures
• Test suites
• A test execution schedule

Test execution:
• Documentation of the status of individual test cases or test procedures
• Defect reports
• Documentation about which test item(s), test object(s), test tools, and testware were involved in the testing
• Reporting via bi-directional traceability with the associated test procedure(s)

Test completion:
• test summary reports
• change requests or product backlog items
• finalized testware

Test monitoring and control:
• various types of test reports, including test progress reports produced on an ongoing and/or a regular basis, and test summary reports produced at various completion milestones
• summaries of project management tasks, such as task completion, resource allocation and usage, and effort.
1.4.4 Traceability between the Test Basis and Test Work Products
Traceability between the Test Basis and Test Work Products

Good traceability supports:

• Analyzing the impact of changes,
• Making testing auditable,
• Meeting IT governance criteria,
• Improving the understandability of test progress reports and test summary reports to include the status of elements of the test basis,
• Relating the technical aspects of testing to stakeholders in terms that they can understand,
• Providing information to assess product quality, process capability, and project progress against business goals.
Traceability between the Test Basis and Test Work Products

• each test case can be related to a speci c


design element (line of code, function,
module, system, etc.),
• with each element of the design may be
related to some risk,

81
each risk can be related to a speci c
requirement,
• e.t.c.

Example of traceability.
Traceability between the Test Basis and Test Work Products
Traceability between test cases and requirements (created with the JIRA tool).
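At its core, a traceability matrix like the JIRA one is just a bi-directional mapping. A minimal sketch with the ATM test case IDs from earlier; the requirement names here are invented for illustration:

```python
# Forward traceability: requirement -> test cases (requirement names invented).
forward = {
    "REQ-1 (withdraw cash)": ["TC2", "TC3"],
    "REQ-2 (check balance)": ["TC2", "TC5"],
}

# Backward traceability (test case -> requirements) derived automatically.
backward = {}
for req, cases in forward.items():
    for tc in cases:
        backward.setdefault(tc, []).append(req)

# Impact analysis: if REQ-1 changes, which tests must be re-run?
print(forward["REQ-1 (withdraw cash)"])   # ['TC2', 'TC3']
# And TC2 traces back to every requirement it covers:
print(backward["TC2"])
```

Keeping the forward map as the single source of truth and deriving the backward map avoids the two directions drifting out of sync.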
1.5 The Psychology of Testing
1.5.1 Human Psychology and Testing
Human Psychology and Testing
Testers and project teams:
• Finding bugs during testing can be viewed as criticizing the product or its author,
• Testing can be seen as a destructive activity for the team, despite the valuable knowledge it supplies from a risk-management point of view,
• Constructive communication of errors, defects and failures can help to avoid conflict,
• The tester and test leader need good interpersonal skills,
• Testers need traits that contrast with those a developer needs; testers need knowledge of the system's end users, which allows the product to be validated the way the client uses it rather than the way the developer would like.
Human Psychology and Testing
Ways to communicate well include the following examples:

• Start with collaboration rather than battles. Remind everyone of the common goal of better quality systems.
• Emphasize the benefits of testing. For example, for the authors, defect information can help them improve their work products and their skills.
• Communicate test results and other findings in a neutral, fact-focused way without criticizing the person who created the defective item.
• Write objective and factual defect reports and review findings.
• Try to understand how the other person feels and the reasons they may react negatively to the information.
• Confirm that the other person has understood what has been said and vice versa.
1.5.2 Tester's and Developer's Mindsets
Tester’s and Developer’s Mindsets
I DO NOT MAKE BUGS ...
Whose words could these be?

Levels of test independence:
• No independent testers; the only form of testing available is developers testing their own code,
• Independent developers or testers within the development teams or the project team; this could be developers testing their colleagues' products,
• An independent test team or group within the organization, reporting to project management or executive management,
• Independent testers from the business organization or user community, or with specializations in specific test types such as usability, security, performance, regulatory/compliance, or portability,
• Independent testers external to the organization, either working on-site (in-house) or off-site (outsourcing).

WHO IS THE BEST EVALUATOR OF DEVELOPERS' WORK?


Tester’s and Developer’s Mindsets


WHO IS A GOOD TESTER ??


Tester’s and Developer’s Mindsets

Features of a good tester:

• curiosity,
• inquisitiveness,
• a critical eye,
• meticulousness,
• experience and conclusions from previous projects,
• communication skills, especially with developers.
Chapter SUMMARY
SUMMARY

After this chapter you should be able to:

• Identify typical objectives of testing
• Differentiate testing from debugging
• Give examples of why testing is necessary
• Describe the relationship between testing and quality assurance and give examples of how testing contributes to higher quality
• Distinguish between error, defect, and failure
• Distinguish between the root cause of a defect and its effects
• Explain the seven testing principles
• Explain the impact of context on the test process
SUMMARY

After this chapter you should be able to:

• Describe the test activities and respective tasks within the test process
• Differentiate the work products that support the test process
• Explain the value of maintaining traceability between the test basis and test work products
• Identify the psychological factors that influence the success of testing
• Explain the difference between the mindset required for test activities and the mindset required for development activities
Time for Quiz 1

Fundamentals of Testing
The end
Go to next chapter
