MODULE 1 - Introduction to Software Testing
Bug, Fault & Failure
• A person makes an error
• That creates a fault in software
• That can cause a failure in operation
• Error : A human action that produces an incorrect result and thereby
introduces a fault into the software.
• Bug : A fault that is present at the time of execution of the software.
• Fault : An incorrect state or step in the software, caused by an error.
• Failure : Deviation of the software's actual behavior from its expected
result. A failure is an event.
• Defect : A flaw in the application. A programmer can make mistakes
(errors) while designing and building the software; these mistakes leave
flaws in the software, and such flaws are called defects.
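The chain above can be made concrete with a small, hypothetical sketch: the programmer's error (typing the wrong operator) creates a fault in the code, and executing that code produces a failure.

```python
# Hypothetical illustration of error -> fault -> failure.
# The programmer's ERROR: they intended addition but typed the wrong operator.

def add(a, b):
    return a - b   # FAULT: a defect in the code (should be a + b)

# FAILURE: the fault manifests as wrong behavior when the code is executed.
result = add(2, 3)
print(result)      # expected 5, actual -1 -> deviation from expected result
```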
Why do defects occur in software?
Software is written by human beings
Who know something, but not everything
Who have skills, but aren’t perfect
Who don’t usually use rigorous methods
Who do make mistakes (errors)
Under increasing pressure to deliver to strict deadlines
No time to check, assumptions may be wrong
Systems may be incomplete
Software is complex, abstract and invisible
Hard to understand
Hard to see if it is complete or working correctly
No one person can fully understand large systems
Numerous external interfaces and dependencies
Sources of defects
Education
Developers do not understand well enough what they are doing
Lack of proper education leads to errors in specification, design, coding,
and testing
Communication
Developers do not know enough
Information does not reach all stakeholders
Information is lost
Oversight
Omitting to do necessary things
Transcription
Developer knows what to do but simply makes a mistake
Process
Process is not applicable for the actual situation
Process places restrictions that cause errors
Objective of testing
• To find defects before they cause a production system to fail.
• To bring the tested software, after correction of the identified defects and
retesting, to an acceptable level of quality.
• To perform the required tests efficiently and effectively, within budgetary
and scheduling limitations.
• To compile a record of software errors for use in error prevention (by
corrective and preventive actions)
Software Quality Concept
Quality : Quality means consistently meeting customer needs in
terms of requirements, cost and delivery schedule.
Quality of s/w - reasonably bug free, delivered on time and within
budget, meets requirements and expectations, and is maintainable.
● Quality Control: “All defined work products and measurable
specifications” are compared with the output of each process.
● Quality control focuses on operational techniques and activities
used to fulfill and verify requirement of quality.
● S/w quality involves – Series of inspection, reviews, and test
used throughout the s/w process.
Quality Assurance (QA): Consists of auditing and reporting
procedures that provide management with the data needed to
make informed decisions.
● Goal of quality assurance is to provide adequate confidence
that a product or service is of the quality expected by the
customer.
● If the data provided through quality assurance identify
problems, then it is management’s responsibility to address
the problems & apply the necessary resources to resolve
quality issues.
Testing Process
Level 0 — No Testing Thinking (Testing = Checking)
Mindset:
“Testing is just following instructions.”
Characteristics:
Treats testing as mechanical execution
Only runs scripted test cases
Cannot detect issues not explicitly described
No understanding of why tests exist
Level 1 — Testing to Show the Software Works
Mindset:
“My job is to prove that everything is correct.”
Characteristics:
Confirms good scenarios (“happy paths”)
Unconsciously avoids breaking the system
Assumes the product works unless proven otherwise
Level 2 — Testing to Find Problems
Mindset:
“My job is to find as many bugs as possible.”
Characteristics:
Actively tries to break the software
Uses exploratory testing
Recognizes the system is imperfect
Shifts from confirming to challenging behavior
Level 3 — Testing to Reduce Risk
Mindset:
“What can cause the most damage, and how do we
prevent it?”
Characteristics:
Prioritizes based on likelihood + impact
Understands business goals, users, and usage
patterns
Does root-cause thinking
Uses test design strategically (e.g., risk-based, model-
based)
Level 4 — Testing to Improve Quality & Value
Mindset:
“How do we ensure the product is valuable, usable,
and maintainable?”
Characteristics:
Works across the whole development lifecycle
Prevents defects rather than just finding them
Influences requirements, design, architecture
Advocates for user experience and business success
Collaborates deeply with developers, PMs, UX
Level   Mindset                  Primary Focus
0       Just follows steps       Checking, not thinking
1       Show software works      Verification of expected behavior
2       Find bugs                Detecting failures
3       Manage risk              Prioritizing what matters most
4       Improve quality/value    Holistic product quality & prevention
What is Verification?
• Definition : The process of evaluating software to determine whether the
products of a given development phase satisfy the conditions imposed at
the start of that phase.
• Verification is a static practice of verifying documents, design, code and
program. It includes all the activities associated with producing high quality
software: inspection, design analysis and specification analysis. It is a
relatively objective process.
• Verification will help to determine whether the software is of high quality,
but it will not ensure that the system is useful. Verification is concerned
with whether the system is well-engineered and error-free.
• Methods of Verification : Static Testing
• Walkthrough
• Inspection
• Review
What is Validation?
• Definition: The process of evaluating software during or at the end of
the development process to determine whether it satisfies specified
requirements.
• Validation is the process of evaluating the final product to check
whether the software meets the customer expectations and
requirements. It is a dynamic mechanism of validating and testing the
actual product.
• Methods of Validation : Dynamic Testing
• Testing
• End Users
Verification Vs Validation
• Verification: “Did we build the feature right?”
• You check the product against the specifications.
• Verification Activities:
• Review the requirements document:
“The system must allow users to log in with email + password.”
• Check the design document for correctness.
• Inspect the code and confirm it follows the design.
• Run tests to ensure:
• Email format validation works
• Wrong password shows an error
• Password field hides characters
• Rate-limiting is implemented as specified
• “Does the login feature meet the written requirements and technical
specs?”
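The verification checks above can be sketched as tests against the written spec. This is a minimal illustration, assuming a hypothetical `is_valid_email` helper that stands in for the real login code; the regex is deliberately simplified.

```python
import re

# Hypothetical implementation under test: the spec says users log in with
# email + password, so the email format must be validated.
def is_valid_email(email: str) -> bool:
    # Simplified pattern, for illustration only.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None

# Verification: check the product's behavior against the written requirement.
assert is_valid_email("user@example.com")    # spec-conforming input accepted
assert not is_valid_email("not-an-email")    # malformed input rejected
assert not is_valid_email("missing@tld")     # missing domain part rejected
print("email format verification passed")
```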
Verification Vs Validation
• Validation: “Did we build the right feature?”
• You check the product against real user needs.
• Validation Activities:
• Ask users to actually try logging in.
• Observe if they struggle or misunderstand the flow.
• Check if the login supports the scenarios users truly need:
• Do users NEED a “Log in with Google” option?
• Is the password policy too strict and causing frustration?
• Does the login work smoothly on mobile devices?
• Is the error message clear enough for real-world users?
• “Does this login system actually solve the user’s problem? Does it
provide value?”
Verification:
1. Verification is a static practice of verifying documents, design, code and
program.
2. It does not involve executing the code.
3. It is human-based checking of documents and files.
4. Verification uses methods like inspections, reviews, walkthroughs,
desk-checking etc.
5. Verification is to check whether the software conforms to specifications.
6. It can catch errors that validation cannot catch. It is a low-level exercise.
7. Target is requirements specification, application and software
architecture, high-level and complete design, and database design etc.
8. Verification is done by the QA team to ensure that the software is as per
the specifications in the SRS document.
9. It generally comes first - done before validation.
Validation:
1. Validation is a dynamic mechanism of validating and testing the actual
product.
2. It always involves executing the code.
3. It is computer-based execution of the program.
4. Validation uses methods like black box (functional) testing, gray box
testing, and white box (structural) testing etc.
5. Validation is to check whether the software meets the customer
expectations and requirements.
6. It can catch errors that verification cannot catch. It is a high-level exercise.
7. Target is the actual product - a unit, a module, a group of integrated
modules, and the effective final product.
8. Validation is carried out with the involvement of the testing team.
9. It generally follows after verification.
Coverage Criteria
• Specific elements or aspects of the software that need to be exercised by a test
suite to ensure its quality and thoroughness.
• These criteria serve as rules or requirements that a test suite must satisfy to be
considered adequate.
Characteristics of Coverage Criteria
Levels Of Testing
Unit Testing
• Level of software testing where individual units/ components of a software are
tested.
• The purpose is to validate that each unit of the software performs as designed.
• First level of testing and is performed prior to integration testing.
• A unit is the smallest testable part of software. It usually has one or a few inputs
and usually a single output.
• It is executed by the developer.
• Performed by using the White Box Testing method.
• Example: verifying that a single function, method, loop or statement in the program works correctly.
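A minimal sketch of a unit test, using Python's standard `unittest` module. The function `is_even` is a hypothetical unit standing in for any real function under test.

```python
import unittest

# Unit under test: a single function, the smallest testable part of software.
def is_even(n: int) -> bool:
    return n % 2 == 0

# One test method per behavior of the unit.
class TestIsEven(unittest.TestCase):
    def test_even_number(self):
        self.assertTrue(is_even(4))

    def test_odd_number(self):
        self.assertFalse(is_even(7))

    def test_zero(self):
        self.assertTrue(is_even(0))

# Run the suite programmatically; exit=False keeps the interpreter alive.
unittest.main(argv=["unit-test-demo"], exit=False)
```

Each test exercises the unit in isolation, so a failure points directly at the function, which is why debugging at this level is easy.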
Drivers
• Drivers are used in bottom-up integration testing approach.
• It can simulate the behavior of upper-level module that is not integrated yet.
• Driver modules act as the temporary replacement of module and act as the actual
products.
• Drivers are also used for interacting with external systems and are usually
more complex than stubs.
• Driver: Calls the module to be tested.
Now suppose you have modules B and C ready but module A which calls functions
from module B and C is not ready so developer will write a dummy piece of code
for module A which will return values to module B and C. This dummy piece of
code is known as driver.
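The scenario above can be sketched in a few lines. `module_b` and `module_c` are hypothetical lower-level modules that are ready; the driver is the dummy stand-in for the not-yet-ready module A that calls them.

```python
# Lower-level modules B and C are ready and are the units under test.
def module_b(x):
    return x * 2

def module_c(x):
    return x + 10

# DRIVER: dummy code replacing the unfinished upper module A.
# It calls the modules under test and checks their results.
def driver():
    assert module_b(5) == 10, "module_b failed"
    assert module_c(5) == 15, "module_c failed"
    return "B and C passed"

print(driver())
```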
Stubs
• Stubs are used in top-down integration testing.
• It can simulate the behavior of lower-level module that are not integrated.
• They act as a temporary replacement of module and provide same output as actual
product.
• When needs to interact with external system then also stubs are used.
• Stub: Is called by the module under test.
Assume you have 3 modules, Module A, Module B and module C. Module A is
ready and we need to test it, but module A calls functions from Module B and
C which are not ready, so developer will write a dummy module which
simulates B and C and returns values to module A. This dummy module code
is known as stub.
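The mirror-image scenario can be sketched the same way. Here the upper module A is ready, and `stub_b`/`stub_c` are hypothetical dummy replacements for the unfinished modules B and C, returning canned values.

```python
# STUBS: dummy replacements for the not-yet-ready lower modules B and C.
def stub_b(x):
    return 10      # canned value simulating the real module B's output

def stub_c(x):
    return 15      # canned value simulating the real module C's output

# Upper module A is ready and under test; it calls B and C.
def module_a(x, b=stub_b, c=stub_c):
    return b(x) + c(x)

# A's own logic is exercised even though B and C don't exist yet.
assert module_a(5) == 25
print("module_a tested via stubs")
```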
Stub vs Driver
• Type: both are dummy code.
• Description: routines that don't actually do anything except declare
themselves and the parameters they accept; the rest of the code can then
take these parameters and use them as inputs.
• Used in: stubs - Top-Down Integration; drivers - Bottom-Up Integration.
• Purpose: a stub allows testing of the upper levels of the code when the
lower levels are not yet developed; a driver allows testing of the lower
levels of the code when the upper levels are not yet developed.
Benefits
• Unit testing increases confidence in changing/maintaining code. If
good unit tests are written and if they are run every time any code is
changed, we will be able to promptly catch any defects introduced
due to the change.
• Code is more reusable.
• Development is faster.
• The cost of fixing a defect detected during unit testing is lesser in
comparison to that of defects detected at higher levels.
• Debugging is easy.
Integration Testing
• Level of software testing where individual units are
combined and tested as a group.
• Here, individual software modules are integrated logically
and tested as a group.
• Tests integration or interfaces between components,
interactions to different parts of the system such as an
operating system, file system and hardware or interfaces
between systems.
Integration Testing Approaches
Big Bang integration testing
• All components or modules are integrated simultaneously,
after which everything is tested as a whole.
• For example, all the modules from ‘Module 1’ to ‘Module 6’ are
integrated simultaneously, and then the testing is carried out.
Pros And Cons
• Convenient for small systems.
• Fault localization is difficult.
• Since all modules are tested at once, high risk critical
modules are not isolated and tested on priority.
• Since the integration testing can commence only after "all"
the modules are designed, testing team will have less time
for execution in the testing phase.
Incremental Approach
• Here, testing is done by joining two or more modules that
are logically related. Then the other related modules are added and
tested for the proper functioning. Process continues until all of the
modules are joined and tested successfully.
• This process is carried out by using dummy programs called Stubs and
Drivers. Stubs and Drivers do not implement the entire programming
logic of the software module but just simulate data communication
with the calling module.
• Stub: Is called by the module under test.
• Driver: Calls the module to be tested.
Bottom-up Integration
• Here, each module at lower levels is tested with higher
modules until all modules are tested.
• It takes help of Drivers for testing.
Advantages:
•Fault localization is easier.
•No time is wasted waiting for all modules to be
developed unlike Big-bang approach.
Disadvantages:
•Critical modules (at the top level of software
architecture) which control the flow of application are
tested last and may be prone to defects.
•Early prototype is not possible.
Top-down Integration:
• Here, testing takes place from top to down following the
control flow of the software system.
• Takes help of stubs for testing.
Advantages:
•Fault localization is easier.
•Possibility to obtain an early prototype.
•Critical modules are tested on priority; major
design flaws could be found and fixed first.
Disadvantages:
•Needs many Stubs.
•Modules at lower level are tested inadequately.
Unit Testing vs Integration Testing
• Unit testing checks whether a small piece of code is doing what it is
supposed to do; integration testing checks whether different modules
work together.
• Unit testing checks a single component of an application; integration
testing considers the behavior of the integrated modules.
• The scope of unit testing is narrow, as it covers a single unit; the scope
of integration testing is wide, as it covers the whole application under test.
System Testing
• The process of testing of an integrated hardware and software
system to verify that the system meets its specified requirements.
• It is performed when integration testing is completed.
• It is mainly a black box type testing. This testing evaluates working of
the system from a user point of view, with the help of specification
document. It does not require any internal knowledge of system, like
design or structure of the code.
• It contains functional and non-functional areas of
application/product.
• System testing is performed in the context of a System Requirement
Specification (SRS) and/or a Functional Requirement Specifications
(FRS). It is the final test to verify that the product to be delivered
meets the specifications mentioned in the requirement document. It
should investigate both functional and non-functional
requirements.
• It mainly focuses on following:
▪ External interfaces
▪ Complex functionalities
▪ Security
▪ Recovery
▪ Performance
▪ Operator and user’s smooth interaction with system
▪ Documentation
▪ Usability
▪ Load / Stress
Performance Testing
• Type of testing to ensure software applications will perform well
under their expected workload.
• A software application's performance like its response time,
reliability, resource usage and scalability do matter.
• The goal is not to find bugs but to eliminate performance
bottlenecks.
• The focus of performance testing is checking a software program’s:
➢ Speed - Determines whether the application responds quickly
➢ Scalability - Determines maximum user load the software application can
handle
➢ Stability - Determines if the application is stable under varying loads
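The three focus areas above can be sketched with only the standard library. `process_request` is a hypothetical operation standing in for a real request handler; the numbers are illustrative, not a real benchmark.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical operation standing in for a real request handler.
def process_request(i):
    time.sleep(0.01)          # simulate ~10 ms of work
    return i

# Speed: measure the response time of a single request.
start = time.perf_counter()
process_request(0)
response_time = time.perf_counter() - start

# Scalability/stability sketch: drive many concurrent "users" at once.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(process_request, range(100)))
elapsed = time.perf_counter() - start
throughput = len(results) / elapsed    # requests per second

print(f"response time: {response_time:.3f}s, throughput: {throughput:.0f} req/s")
```

Real performance testing uses dedicated load-generation tools rather than a script like this, but the measured quantities (latency, throughput under load) are the same.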
Performance Testing Process
1) Identify your testing environment :–
• Do proper requirement study & analyze test goals and its objectives.
• Determine the testing scope along with test initiation checklist.
• Identify the logical and physical production architecture for performance
testing, identify the software, hardware and networks configurations
required for kick off the performance testing.
• Compare both the test and production environments while identifying the
testing environment.
• Resolve any environment-related concerns, and analyze whether
additional tools are required for performance testing.
• This step also helps to identify the probable challenges tester may face
while performance testing.
2) Identify the performance acceptance criteria :–
• Identify the desired performance characteristics of the application like
Response time, Throughput and Resource utilization.
3) Plan & design performance tests :–
• Identify key usage scenarios, determine appropriate variability across
users, identify and generate test data, and specify the metrics to be
collected.
• Ultimately, these items will provide the foundation for workloads and
workload profiles.
• The output of this stage ensures that prerequisites for test execution are
ready, all required resources, tools & test data are ready.
4) Configuring the test environment :–
• Prepare with conceptual strategy, available tools, designed tests along with
testing environment before execution.
• The output of this stage is configured load-generation environment and
resource-monitoring tools.
5) Implement test design :–
• According to test planning and design create your performance tests.
6) Execute the tests :–
• Collect and analyze the data.
• Problem investigation like bottlenecks (memory, disk, processor, process,
cache, network, etc.) resource usage like (memory, CPU, network, etc.,)
• Generate the performance analysis reports containing all performance
attributes of the application.
• Based on the analysis prepare recommendation report.
• Repeat the above test for the new build received from client after fixing the
bugs and implementing the recommendations.
7) Analyze Results, Report, and Retest :-
• Consolidate, analyze and share test results.
• Based on the test report re-prioritize the test & re-execute the same.
• If all results are between the thresholds limits then testing of same scenario
on particular configuration is completed.
Test objectives frequently include the
following:
• Response time : For example, the product catalog must be
displayed in less than 3 seconds.
• Throughput: For example, the system must support 100
transactions per second.
• Resource utilization: A frequently overlooked aspect is the
amount of resources your application is consuming, in terms of
processor, memory, disk input output
(I/O), and network I/O.
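The objectives above can be expressed as simple pass/fail checks against measured values. The numbers below are hypothetical measurements invented for illustration; the thresholds come from the examples in the text.

```python
# Hypothetical measured values from a performance test run.
measured = {
    "response_time_s": 2.4,     # product catalog display time
    "throughput_tps": 120,      # transactions per second
    "cpu_utilization": 0.65,    # fraction of CPU consumed
}

# Objectives from the text, expressed as pass/fail checks.
objectives = {
    "response_time_s": lambda v: v < 3.0,    # must display in < 3 seconds
    "throughput_tps":  lambda v: v >= 100,   # must support 100 tps
    "cpu_utilization": lambda v: v < 0.80,   # assumed resource budget
}

results = {name: check(measured[name]) for name, check in objectives.items()}
print(results)   # all True -> every objective met
```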
Types of Performance Testing
•Load test
•Stress test
•Data Volume Testing
•Storage Testing
Stress Testing
• Stress Testing is performance testing type to check the stability of software
when hardware resources are not sufficient like CPU, memory, disk space
etc.
• It is performed to find the upper limit capacity of the system and also to
determine how the system performs if the current load goes well above the
expected maximum.
• Main parameters to focus during Stress testing are “Response Time” and
“Throughput”.
• Stress testing is negative testing, where we load the software with a large
number of concurrent users/processes that cannot be handled by the
system’s hardware resources. This testing is also known as Fatigue testing.
Usability Testing
• In usability testing basically the testers tests the ease with which the user
interfaces can be used.
• It tests that whether the application or the product built is user-friendly or not.
• Usability testing is a black box testing technique.
• Usability testing also reveals whether users feel comfortable with your
application or web site according to different parameters – the flow, navigation
and layout, speed and content – especially in comparison to prior or similar
applications.
• Usability testing tests the following features of the software:
— How easy it is to use the software.
— How easy it is to learn the software.
— How convenient is the software to end user.
Usability testing includes the following five
components:
• Learnability: How easy is it for users to accomplish basic tasks the first time
they encounter the design?
• Efficiency: How fast can experienced users accomplish tasks?
• Memorability: When users return to the design after a period of not using it,
does the user remember enough to use it effectively the next time, or does the
user have to start over again learning everything?
• Errors: How many errors do users make, how severe are these errors and how
easily can they recover from the errors?
• Satisfaction: How much does the user like using the system?
Benefits of usability testing to the end user or
the customer:
• Better quality software
• Software is easier to use
• Software is more readily accepted by users
• Shortens the learning curve for new users
Acceptance Testing
• Acceptance Testing is a level of the software testing where a system is tested for
acceptability.
• The purpose of this test is to evaluate the system’s compliance with the business
requirements and assess whether it is acceptable for delivery.
• Usually, Black Box Testing method is used in Acceptance Testing.
• Acceptance Testing is performed after System Testing and before making the system
available for actual use.
• The acceptance test cases are executed against the test data or using an acceptance test
script and then the results are compared with the expected ones.
• The goal of acceptance testing is to establish confidence in the system.
• Acceptance testing is most often focused on a validation type testing.
Acceptance Criteria
Acceptance criteria are defined on the basis of the following attributes
• Functional Correctness and Completeness
• Data Integrity
• Data Conversion
• Usability
• Performance
• Timeliness
• Confidentiality and Availability
• Installability and Upgradability
• Scalability
• Documentation
Types Of Acceptance Testing
• User Acceptance test
• Operational Acceptance test
• Contract Acceptance testing
• Compliance acceptance testing
• User Acceptance test
It focuses mainly on the functionality thereby validating the fitness-for-use
of the system by the business user. The user acceptance test is performed by
the users and application managers.
• Operational Acceptance test
Also known as the Production acceptance test, it validates whether the
system meets the requirements for operation. In most organizations the
operational acceptance test is performed by system administration
before the system is released. The operational acceptance test may include
testing of backup/restore, disaster recovery, maintenance tasks and periodic
checks of security vulnerabilities.
• Contract Acceptance testing
It is performed against the contract’s acceptance criteria for
producing custom developed software. Acceptance should be
formally defined when the contract is agreed.
• Compliance acceptance testing
Also known as regulation acceptance testing, it is performed against
the regulations which must be adhered to, such as governmental,
legal or safety regulations.
Advantages Of Acceptance Testing
• The functions and features to be tested are known.
• The details of the tests are known and can be measured.
• The tests can be automated, which permits regression testing.
• The progress of the tests can be measured and monitored.
• The acceptability criteria are known.
Disadvantages Of Acceptance Testing
• Requires significant resources and planning.
• The tests may be a re-implementation of system tests.
• It may not uncover subjective defects in the software, since you are
only looking for defects you expect to find.
Beta Testing
• Beta Testing is also known as field testing. It takes place at the customer’s
site: the system/software is sent to users, who install it and use it under
real-world working conditions.
• A beta test is the second phase of software testing, in which a sampling of
the intended audience tries the product out.
• The goal of beta testing is to place your application in the hands of real users
outside of your own engineering team to discover any flaws or issues from
the user’s perspective that you would not want to have in your final,
released version of the application.
• Beta testing can be considered “pre-release testing.”
Advantages of beta testing
• You have the opportunity to get your application into the hands of users
prior to releasing it to the general public.
• Users can install, test your application, and send feedback to you during
this beta testing period.
• Your beta testers can discover issues with your application that you may
have not noticed, such as confusing application flow, and even crashes.
• Using the feedback you get from these users, you can fix problems before
it is released to the general public.
• The more issues you fix that solve real user problems, the higher the
quality of your application when you release it to the general public.
• Having a higher-quality application when you release to the general public
will increase customer satisfaction.
• These users, who are early adopters of your application, will generate
excitement about your application.
Regression Testing
• Regression Testing is defined as a type of software testing to confirm that a recent
program or code change has not adversely affected existing features.
• Regression Testing is nothing but full or partial selection of already executed test
cases which are re-executed to ensure existing functionalities work fine.
• This testing is done to make sure that new code changes should not have side
effects on the existing functionalities. It ensures that old code still works once the
new code changes are done.
• Regression Testing is required when there is a
✓Change in requirements and code is modified according to the requirement
✓New feature is added to the software
✓Defect fixing
✓Performance issue fix
Regression Testing Techniques :
Retest All
This is one of the methods for Regression Testing in which all the tests in the
existing test bucket or suite should be re-executed. This is very expensive as
it requires huge time and resources.
Regression Test Selection
Instead of re-executing the entire test suite, it is better to select part of test
suite to be run
Test cases selected can be categorized as 1) Reusable Test Cases 2) Obsolete
Test Cases.
Re-usable Test cases can be used in succeeding regression cycles.
Obsolete Test Cases can't be used in succeeding cycles.
Prioritization of Test Cases
Prioritize the test cases depending on business impact, critical & frequently
used functionalities. Selection of test cases based on priority will greatly
reduce the regression test suite.
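The selection and prioritization techniques above can be sketched as a small filter-and-sort step. The suite below is hypothetical; priorities stand in for judgments about business impact and usage frequency.

```python
# Hypothetical regression suite: each case carries a priority based on
# business impact and how frequently the functionality is used.
test_cases = [
    {"name": "login",          "priority": 1, "obsolete": False},
    {"name": "checkout",       "priority": 1, "obsolete": False},
    {"name": "old_report_csv", "priority": 3, "obsolete": True},
    {"name": "profile_edit",   "priority": 2, "obsolete": False},
]

# Regression Test Selection: drop obsolete cases (reusable ones remain),
# then prioritize so high-impact cases run first.
selected = [t for t in test_cases if not t["obsolete"]]
selected.sort(key=lambda t: t["priority"])

print([t["name"] for t in selected])
# -> ['login', 'checkout', 'profile_edit']
```

Ordering the reduced suite this way means that even if the regression cycle is cut short, the highest-impact cases have already run.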
Selecting test cases for regression testing
Industry data shows that a good number of the defects reported by customers
were due to last-minute bug fixes creating side effects; selecting test cases
for regression testing is therefore an art, and not easy.
Effective Regression Tests can be done by selecting following test cases –
✓Test cases which have frequent defects
✓Functionalities which are more visible to the users
✓Test cases which verify core features of the product
✓Test cases of Functionalities which has undergone more and recent changes
✓All Integration Test Cases
✓All Complex Test Cases
✓Boundary value test cases
✓Sample of Successful test cases
✓Sample of Failure test cases
Regression Testing Tools
If your software undergoes frequent changes, regression testing costs will escalate.
In such cases, Manual execution of test cases increases test execution time as well as costs.
Following are most important tools used for both functional and regression testing:
Selenium: This is an open source tool used for automating web applications. Selenium can be
used for browser based regression testing.
Quick Test Professional (QTP): HP Quick Test Professional is automated software designed to
automate functional and regression test cases. It uses VBScript language for automation. It is a
Data driven, Keyword based tool.
Rational Functional Tester (RFT): IBM's rational functional tester is a Java tool used to automate
the test cases of software applications. This is primarily used for automating regression test
cases and it also integrates with Rational Test Manager.
• Black box testing
• No knowledge of internal design or code
required.
• Tests are based on requirements and
functionality
• White box testing
• Knowledge of the internal program design and code
required.
• Tests are based on coverage of code
statements, branches, paths, conditions.
BLACK BOX - TESTING TECHNIQUE
Black box testing attempts to find errors in the following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Performance errors
• Initialization and termination errors
Black box / Functional testing
• Based on requirements and functionality
• Not based on any knowledge of internal
design or code
• Covers all combined parts of a system
• Tests are data driven
White box testing / Structural testing
• Based on knowledge of internal logic of an
application's code
• Based on coverage of code statements,
branches, paths, conditions
• Tests are logic driven
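The contrast can be sketched with a tiny function. The white-box tests below are derived from the code's structure (one test per branch); black-box tests would instead be derived from the requirement alone. `classify` is a hypothetical function invented for illustration.

```python
# Function under test: it has two branches, so branch coverage
# requires at least one test per outcome.
def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# White-box tests, chosen by looking at the code's structure:
assert classify(-1) == "negative"       # covers the True branch
assert classify(0) == "non-negative"    # covers the False branch

# Black-box tests would instead be derived from the requirement
# ("classify a number's sign") without examining the code at all.
print("both branches covered")
```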
Difference between black box, white box, grey box testing.