Introduction To Software Testing

This document provides an introduction to software testing, including definitions of key terms like bugs, errors, and failures. It describes general testing principles like showing presence of bugs but not absence, and that exhaustive testing is impossible. The document outlines the fundamental test process of planning, analysis, implementation, evaluation, and closure. It also discusses different test levels from unit to integration to system to acceptance testing. The software development life cycle is also briefly covered.

Introduction to Software Testing
Content

1. What is a “bug”?
2. Testing Principles
3. Test Process
4. Life cycle
5. Test Levels
6. Test Types
What is a “bug”?

• Error: a human action that produces an incorrect result.

• Fault: a manifestation of an error in software, also known as a defect or bug; if executed, a fault may cause a failure.

• Failure: deviation of the software from its expected delivery or service.
Error - Fault - Failure

A person makes an error ... that creates a fault in the software ... that can cause a failure in operation.
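The chain can be illustrated with a small, hypothetical Python snippet (the `average` function and its off-by-one mistake are invented for illustration):

```python
# ERROR: while writing this function, the programmer divides by
# len(values) - 1 instead of len(values) -- a human mistake.
def average(values):
    # FAULT: the defect now lives in the software. It stays dormant
    # until this line is executed.
    return sum(values) / (len(values) - 1)

# FAILURE: when the faulty line runs, the observed behaviour deviates
# from the expected result.
print(average([2, 4, 6]))   # expected 4.0, actually prints 6.0
```

Note that the fault causes a failure only when the faulty line is actually executed; a fault in code that is never run produces no failure.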
Reliability versus faults

• Reliability: the probability that software will not cause the failure of the system for a specified time under specified conditions.

• Can a software system be reliable but still have faults?
Yes: if the faulty code is rarely or never executed under the specified conditions, the system can be highly reliable while still containing faults.
GENERAL TESTING PRINCIPLES

1. Testing shows the presence of bugs
-Testing an application can only reveal that one or more defects exist in the
application; testing alone cannot prove that the application is error-free.
-It is important to design test cases that find as many defects as possible.

2. Early testing
-It is very important to start testing as early as possible and to anticipate
the errors a developer might make.
-Remember: the earlier a bug is found, the cheaper it is to fix.
GENERAL TESTING PRINCIPLES

3. Exhaustive testing is impossible
-Exhaustive testing = a test approach in which all possible data combinations
are used.
-All cases simply cannot be included in the test suite: doing so would take
far too much time and effort.
-In software testing, the accepted practice is to analyze a product or a new
feature and then focus testing effort on the riskiest and highest-priority
cases and areas of the product.
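A back-of-the-envelope calculation shows why. Assume a hypothetical input form with 10 independent fields, each of which can take 100 distinct values, and an optimistic automated rate of a million tests per second:

```python
# Why exhaustive testing is impossible: a hypothetical sizing exercise.
fields = 10
values_per_field = 100
combinations = values_per_field ** fields     # 100**10 = 10**20 test cases

tests_per_second = 1_000_000                  # optimistic automation speed
seconds_per_year = 60 * 60 * 24 * 365
years_needed = combinations / (tests_per_second * seconds_per_year)

print(f"{combinations:.0e} combinations, about {years_needed:,.0f} years to run them all")
```

Even under these modest assumptions, running every combination would take millions of years, which is why testing concentrates on risk and priority instead.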
GENERAL TESTING PRINCIPLES

4. Defect clustering
-In a large application, it is often a small number of modules that exhibit the
majority of the problems.
-This is the application of the Pareto principle to software testing:
approximately 80 % of the problems are found in about 20 % of the modules.

5. The pesticide paradox
-Running the same set of tests continually will not continue to find new
defects.
-The software may fail in production because the regression tests are no
longer relevant to the requirements of the system or the test objectives.
GENERAL TESTING PRINCIPLES

6. Testing is context dependent
-Different testing is necessary in different circumstances (e.g., testing an air
traffic control system vs. testing an application for calculating the length of
a mortgage).
-Risk can be a large factor in determining the type of testing that is needed.

7. Absence-of-errors fallacy
-Testing shows the presence of bugs in the product, but not their absence. Many
people think that if new functionality has passed the testing stage, there are
no more bugs left. This is a false judgment: testing only reduces the chance
that bugs remain in the product.
FUNDAMENTAL TEST PROCESS

The fundamental test process consists of five parts that encompass all aspects of testing:
(1) Planning and control
(2) Analysis and design
(3) Implementation and execution
(4) Evaluating exit criteria and reporting
(5) Test closure activities
FUNDAMENTAL TEST PROCESS

1. Planning and control
•Planning is determining what is going to be tested, and how this will be
achieved.
•It is where we draw a map; how activities will be done; and who will do them.
•We define the test completion criteria. Completion criteria are how we know
when testing is finished.
•Control, on the other hand, is what we do when the activities do not match up
with the plans. It is the ongoing activity where we compare the progress against
the plan.

Key words: schedules, people, completion criteria, what is going to be tested
FUNDAMENTAL TEST PROCESS

2. Analysis and design
•Analysis and design are concerned with:
- what to test (the test conditions);
- how to combine test conditions into test cases.
•A small number of test cases should cover as many of the test conditions as possible.
•The design process needs to consider:
- the test data that will be required for the test conditions;
- the test cases that have been drawn up.

Key words: test conditions, test cases, how the software under test should behave in a given set of circumstances
FUNDAMENTAL TEST PROCESS

3. Implementation and execution
• The most visible test activities
• The test implementation and execution activity involves:
- checking the test environment before testing begins;
- running tests;
- combining test cases into an overall run procedure, so that test time can
be utilized efficiently.
• As tests are run, their outcome needs to be logged, and a comparison made between expected results and
actual results.
• Whenever there is a discrepancy
between the expected and actual
results, this needs to be investigated.

Key words: expected results, what environment will be needed
FUNDAMENTAL TEST PROCESS

4. Evaluating exit criteria and reporting
•Checking whether the previously determined exit criteria have been met.
•Determining if more tests are needed or if the specified exit criteria need
amending.
• Writing up the result of the testing activities for the business sponsors and
other stakeholders.
FUNDAMENTAL TEST PROCESS

5. Test closure activities
• Ensuring that the documentation is in order;
• What has been delivered is defined (it may be more or less than originally
planned);
• Closing incidents and raising changes for future deliveries;
• Documenting that the system has been accepted.
• Closing down and archiving the test environment, test infrastructure.
LIFE CYCLE

A development life cycle for a software product involves:
•capturing the initial requirements from the customer;
•expanding on these to provide the detail required for code
production;
•writing the code and testing the product, ready for release.
A simple development model (each stage feeding the next):
Requirement specification → Functional specification → Technical specification → Program specification → Coding → Testing
TEST LEVELS

• For all types of development used, testing plays a significant role.
• Testing helps to ensure that:
- the work-products are being developed in the right way (verification)
- the product will meet the user needs (validation).
• Characteristics of good testing across the development life cycle include:
- Early test design
- Each work-product is tested (each specification document serves as a
test basis, i.e., the basis on which tests are designed)
- Testers are involved in reviewing requirements before they are
released
TEST LEVELS

The typical levels of testing are:
•Unit (component) testing
•Integration testing
•System testing
•Acceptance testing
TEST LEVELS

1. Unit (component) testing
•Unit testing is intended to ensure that the code written for the unit meets its
specification, prior to its integration with other units.
•Unit testing also verifies that all of the code that has been written for the
unit can be executed.
•An approach to unit testing is called Test Driven Development. As its name suggests,
test cases are written first, code built, tested and changed until the unit passes its
tests.
•Unit testing is usually performed by the developer who wrote the code (and who
may also have written the program specification). Defects found and fixed during
unit testing are often not recorded.
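A minimal sketch of the Test Driven Development idea, using Python's `unittest` module (the `leap_year` unit and its specification are hypothetical):

```python
import unittest

# Hypothetical unit under test. In TDD, the test case below is written
# first; the code is then built and changed until the tests pass.
def leap_year(year):
    """Return True if year is a leap year in the Gregorian calendar."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_ordinary_year(self):
        self.assertFalse(leap_year(2019))

    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_rule(self):
        self.assertFalse(leap_year(1900))   # centuries are not leap years...
        self.assertTrue(leap_year(2000))    # ...unless divisible by 400

# Run with: python -m unittest <this_file>
```

The test cases check the unit against its specification in isolation, before it is integrated with other units.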
TEST LEVELS

2. Integration testing
•The purpose of integration testing is to expose defects in the interfaces and in the
interactions between integrated components or systems.
•Integration strategies:
- Big-bang integration
- Top-down integration
- Bottom-up integration
TEST LEVELS

3. System testing
•It is focusing on the behavior of the whole system/product as defined by the scope
of a development project or program, in a representative live environment.
•It is usually carried out by a team that is independent of the development process.
The behavior required of the system is documented in the functional specification.
The functional specification should contain definitions of both the functional and
non-functional requirements of the system.
TEST LEVELS
4. Acceptance testing
•The purpose of acceptance testing is to provide the end users with confidence
that the system will function according to their expectations.
•It uses the requirement specification as the basis for tests.
•Acceptance testing is often the responsibility of the customers or users of a
system, although other project team members may be involved as well.
•Typical forms of acceptance testing include the following:
- User acceptance testing (testing by user representatives to check that the
system meets their business needs);
- Operational acceptance testing (checking that the processes and
procedures are in place to allow the system to be used and maintained)
- Contract and regulation acceptance testing (sometimes the criteria for
accepting a system are documented in a contract)
- Alpha and beta testing
Alpha testing takes place at the developer’s site
Beta testing takes place at the customer’s site
TEST TYPES

1. Functional testing
-Functional testing describes what the product does.
-It looks at the specific functionality of a system.
-This testing mainly involves black-box testing; it is not concerned with
the source code of the application.
-Manual testing or automation tools can be used for functional testing.
-Examples of Functional testing are:
- Unit testing
- Smoke testing
- Integration testing
- User Acceptance testing
- Regression testing
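A minimal black-box sketch: the test cases below are derived purely from a hypothetical specification ("orders of 100 or more get 10 % off"), without looking at how the function is implemented:

```python
# Hypothetical system under test; in black-box testing we treat its
# body as opaque and test only its externally specified behaviour.
def final_price(order_total):
    if order_total >= 100:
        return round(order_total * 0.9, 2)
    return order_total

# Test cases taken from the specification: below, at, and above the
# boundary value of 100.
assert final_price(99) == 99
assert final_price(100) == 90.0
assert final_price(150) == 135.0
```

Choosing values around the boundary (99, 100, 150) is a typical functional-testing technique, since defects often cluster at specification boundaries.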
TEST TYPES

2. Non-functional testing
-Non-functional system testing looks at those aspects that are important but
not directly related to what functions the system performs.
-Using tools is usually effective for this kind of testing.
-Non-functional testing describes how well the product works.
-Examples of Non-Functional testing are:
- Performance testing
- Load testing
- Security testing
- Installation testing
- Migration Testing
- Compatibility testing
TEST TYPES

3. Structural testing
-It is often referred to as white-box (or glass-box) testing because in
structural testing we are interested in what is happening "inside the
system/application".
-Testers are required to have knowledge of the internal implementation of
the code (how the software is implemented, how it works).
-It can be used at all levels of testing.
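A minimal white-box sketch: knowing the internal structure of the code, we pick inputs so that every branch is exercised at least once (the `shipping_cost` function is hypothetical):

```python
# Hypothetical unit; its internal branches are visible to the tester.
def shipping_cost(weight_kg, express):
    if weight_kg <= 1:                  # branch 1: base rate
        cost = 5
    else:                               # branch 2: per-kg surcharge
        cost = 5 + 2 * (weight_kg - 1)
    if express:                         # branch 3: express doubles the cost
        cost *= 2
    return cost

# Structural tests chosen by inspecting the code above: together they
# execute every branch (branch coverage) at least once.
assert shipping_cost(1, express=False) == 5    # branch 1
assert shipping_cost(3, express=False) == 9    # branch 2
assert shipping_cost(1, express=True) == 10    # branches 1 and 3
```

Unlike the black-box approach, the tests here are designed from the implementation, not only from the specification.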
TEST TYPES

4. Change-related testing
-It is performed to ensure that previously found bugs have indeed been fixed,
and to catch bugs that may have accidentally appeared in a new version.
-There are two subtypes of change-related testing:
- Confirmation testing (re-testing): checks that the bug has indeed been
successfully removed;
- Regression testing: checks that new defects have not come up (or been
discovered) after the changes.
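A minimal sketch of the two subtypes, assuming a hypothetical bug fix: version 1 of `title_case` crashed on an empty string, and the fixed version is shown below.

```python
# Hypothetical fixed unit (version 2).
def title_case(text):
    if not text:                # the fix: handle the empty string
        return ""
    return " ".join(word.capitalize() for word in text.split())

# Confirmation test (re-test): re-runs the exact scenario that exposed
# the original bug, to confirm it has indeed been removed.
assert title_case("") == ""

# Regression tests: re-run existing tests to check that the fix has not
# broken behaviour that used to work.
assert title_case("software testing") == "Software Testing"
assert title_case("bug") == "Bug"
```

Confirmation testing targets the fixed defect itself; regression testing guards everything around it.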
BUT! Don't forget almost the most important thing:
if a feature does not meet the user's expectations and needs, then it does not
matter how high-quality our product is.