
UNIT I 21IT1904 - SOFTWARE TESTING AND AUTOMATION

FOUNDATIONS OF SOFTWARE TESTING

Why do we test Software?, Black-Box Testing and White-Box Testing, Software
Testing Life Cycle, V-model of Software Testing, Program Correctness and
Verification, Reliability versus Safety, Failures, Errors and Faults (Defects),
Software Testing Principles, Program Inspections, Stages of Testing: Unit
Testing, Integration Testing, System Testing.

1.1 INTRODUCTION
Software testing is a method for determining whether the actual piece of
software meets its requirements and is error-free. It involves running software
or system components, manually or automatically, in order to evaluate one
or more properties of interest. Finding faults, gaps, or unfulfilled
requirements relative to the documented specifications is the aim of
software testing.
Some prefer to use the terms white box and black box testing to
describe the concept of software testing. To put it simply, software
testing is the process of validating an application that is being tested.

1.1.1 What is Software Testing?


Software testing is the process of determining whether a piece of
software is correct by taking into account all of its characteristics
(reliability, scalability, portability, reusability and usability) and
analyzing how its various components operate in order to detect any
bugs, faults or flaws.
Software testing delivers assurance of the software's fitness and offers
an independent, objective view of the program. It entails testing each
component that makes up the required services to see whether or not it
satisfies the set criteria. Additionally, the procedure informs the customer
about the software's quality.
In simple words, "Testing is the process of executing a program with
the intent of finding faults."
Testing is required because failure of the program at any point
owing to a lack of testing would be harmful. Software cannot be released to
the end user without being tested.

1.1.2 What is Testing?

Testing is a collection of methods to evaluate an application's
suitability for use in accordance with a predetermined script; however,
testing is not able to detect every application flaw. The basic goal of
testing is to find application flaws so that they may be identified and
fixed. It merely shows that a product doesn't work in certain particular
Panimalar Engineering College 1


circumstances, not that it works correctly under all circumstances.
Testing compares the behaviour and state of the software against
mechanisms by which problems can be recognized. These mechanisms
may incorporate previous iterations of the same or similar items,
comparable products, interfaces of intended purpose, pertinent standards or
other criteria, but are not restricted to these.
Testing includes both the analysis and execution of the code in
different settings and environments, as well as whole-code analysis. A
testing team may be independent from the development team in the present
software development scenario so that information obtained from testing
may be utilized to improve the software development process.
The intended audience's adoption of the program, its user-friendly
graphical user interface, its robustness under load, etc., are all factors
in its success. For instance, the target markets for banking software and a
video game are very different. As a result, an organization can determine
whether a software product it produces will be useful to its customers and
other audience members.

1.1.3 Why Software Testing is Important?


Software testing is a very expensive and critical activity, but releasing
the software without testing is definitely more expensive and dangerous. We
should try to find more errors in the early phases of software development.
The cost of removing such errors will be very reasonable as compared to
those errors which we may find in the later phases of software development.
The cost to fix errors increases drastically from the specification phase to
the test phase and finally to the maintenance phase, as shown in Figure 1.1.

Figure 1.1 Phase wise cost of fixing an error


If an error is found and fixed in the specification and analysis phase, it
hardly costs anything. We may term this as ‘1 unit of cost’ for fixing an
error during specifications and analysis phase. The same error, if
propagated to design, may cost 10 units and if, further propagated to
coding, may cost 100 units. If it is detected and fixed during the testing
phase, it may lead to 1000 units of cost. If it could not be detected even
during testing and is found by the customer after release, the cost becomes
very high. We may not be able to predict the cost of failure for a life critical
system’s software. The world has seen many failures and these failures have
been costly to the software companies.
The fact is that we release software that is full of errors, even
after doing sufficient testing. No software would ever be released by its
developers if they were asked to certify that the software is free of errors.
Testing, therefore, continues up to the point where it is judged that the
cost of further testing would significantly outweigh the returns.
1.1.4 Need of Testing
Software flaws may be costly or even dangerous, so studying instances
in which software defects led to financial and personal loss is crucial.
History is replete with examples:
 A software error caused the London Bloomberg terminal to crash in
April 2015, impacting over 300,000 traders in the financial markets. It
forced the government to delay a 3-billion-pound debt auction.
 Nissan recalled nearly 1 million vehicles from the market because the
airbag sensor software was flawed. Due to this software flaw, two
accidents were documented.
 Starbucks' POS system malfunctioned, forcing the company to shut
nearly 60% of its locations in the United States and Canada. The shops
at one point served free coffee because they could not process
purchases.
 Due to a technical error, some of Amazon's third-party sellers had
their product prices slashed to 1p. They suffered severe losses as a
result.
 A weakness in Windows 10: due to a defect in the win32k system,
users were able to bypass security sandboxes.
 In 2015, a software flaw rendered the F-35 fighter jet incapable of
accurately detecting targets.
 On April 26, 1994, an Airbus A300 operated by China Airlines crashed
due to a software error, killing 264 people.
 Three patients died and three others were badly injured in 1985 when
a software glitch caused Canada's Therac-25 radiation treatment
system to fail and deliver deadly radiation doses to patients.


Figure 1.2 Types of Testing

1.1.5 Benefits of Software Testing


The following are advantages of employing software testing:
Cost-effectiveness: One of the key benefits of software testing is that it is
cost-effective. Timely testing of any IT project enables you to make
long-term financial savings. If flaws are found sooner in the software testing
process, fixing them is less expensive.
Security: This is a critical and sensitive advantage of software testing.
People are searching for reliable products. Testing assists in eradicating
hazards and issues early.
Product quality: Any software product must meet its quality criteria. Testing
guarantees that buyers get a high-quality product.
Customer satisfaction: Providing consumers with satisfaction is the
primary goal of every product. The optimum user experience is ensured
through UI/UX testing.

1.1.6 Type of Software Testing


1. Manual testing
The process of checking the functionality of an application as per the
customer's needs without taking any help from automation tools is known as
manual testing. While performing manual testing on any application, we do
not need specific knowledge of any testing tool; rather, we need a proper
understanding of the product so that we can easily prepare the test
document.
Manual testing can be further divided into three types of testing, which
are as follows:
 White box testing

 Black box testing
 Gray box testing.
2. Automation testing
Automation testing is the process of converting manual test cases into
test scripts with the help of automation tools or a programming language.
With the help of automation testing, we can enhance the speed of test
execution because, here, we do not require much human effort. We need to
write test scripts and execute them.
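As a minimal sketch of this idea, a manual check can be converted into a repeatable test script. The `add_to_cart` function and its behaviour here are hypothetical, invented only to illustrate the conversion:

```python
# Hypothetical unit: the function a manual tester would exercise by hand.
def add_to_cart(cart, item, qty):
    """Add qty units of item to the cart dictionary."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + qty
    return cart

def test_add_to_cart():
    # The same steps a manual tester would perform, now repeatable by a machine.
    cart = {}
    add_to_cart(cart, "pen", 2)
    add_to_cart(cart, "pen", 1)
    assert cart == {"pen": 3}

test_add_to_cart()
print("automated check passed")
```

Once written, such a script can be re-run on every build at no extra human effort, which is the speed advantage the notes describe.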

1.2 BLACK-BOX TESTING AND WHITE-BOX TESTING


Black box testing (also called functional testing) is testing that ignores the
internal mechanism of a system or component and focuses solely on the
outputs generated in response to selected inputs and execution conditions.
White box testing (also called structural testing and glass box testing) is
testing that takes into account the internal mechanism of a system or
component.
Gray Box Testing is a software testing technique that is a combination of
the Black Box Testing technique and the White Box Testing technique. It
allows testers to work with partial knowledge of the internal workings of an
application. Gray box testing facilitates a comprehensive and efficient
evaluation process by leveraging both the developer’s insights and the
tester’s perspective, ensuring software quality and reliability.

1.2.1 What is White-Box Testing?


 White box testing examines a system with knowledge of how it
operates within. The tester designs inputs from the code's internal
structure and verifies not only the outputs but also the internal paths
taken, which allows logic errors, dead code and weak code structure to
be identified.
 Because of the system's internal viewpoint, the phrase "white box" is
employed. The term "clear box", "glass box", "white box" or
"transparent box" refers to the capability of seeing the software's
inner workings through its exterior layer.
 Developers carry it out before handing the program over to the testing
team, who then conduct black-box testing. Testing the infrastructure
of the application is the primary goal of white-box testing. As it
covers unit testing and integration testing, it is performed at lower
levels. Given that it primarily focuses on the code structure, paths,
conditions and branches of a program or piece of software, it
necessitates programming skills. Verifying the flow of inputs and
outputs through the program and enhancing its security are the
main objectives of white-box testing.
 It is also referred to as transparent testing, code-based testing,
structural testing and clear box testing. It is a good fit and is
recommended for testing algorithms.


1.2.1.1 Types of White Box Testing in Software Testing


White box testing is a type of software testing that examines the
internal structure and design of a program or application. The following are
some common types of white box testing:
 Unit testing: Tests individual units or components of the software
to ensure they function as intended.
 Integration testing: Tests the interactions between different units or
components of the software to ensure they work together correctly.
 Functional testing: Tests the functionality of the software to
ensure it meets the requirements and specifications.
 Performance testing: Tests the performance of the software under
various loads and conditions to ensure it meets performance
requirements.
 Security testing: Tests the software for vulnerabilities and
weaknesses to ensure it is secure.
 Code coverage testing: Measures the percentage of code that is
executed during testing, to ensure that all parts of the code are
tested.
 Regression testing: Regression testing is not a level of testing; it is
the retesting of software that occurs when changes are made, to ensure
that the new version of the software has retained the capabilities of the
old version and that no new defects have been introduced by the
changes. Regression testing can occur at any level of test.
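Unit testing, the first type above, can be sketched with Python's built-in unittest framework. The `grade` function and its banding rule are hypothetical, chosen only to show one unit tested in isolation:

```python
import unittest

# Hypothetical unit under test: a pure function, testable in isolation.
def grade(score):
    """Map a score in 0-100 to a letter grade (hypothetical rule)."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 50:
        return "C"
    return "F"

class GradeUnitTest(unittest.TestCase):
    def test_each_band(self):
        self.assertEqual(grade(95), "A")
        self.assertEqual(grade(80), "B")
        self.assertEqual(grade(60), "C")
        self.assertEqual(grade(10), "F")

    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            grade(101)

# Run the tests without exiting the interpreter.
suite = unittest.TestLoader().loadTestsFromTestCase(GradeUnitTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because the unit has no dependencies on the rest of the system, these tests can run long before integration or system testing begins.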

1.2.1.2 Techniques of White Box Testing


Some techniques used for white box testing are:
 Statement coverage: This testing approach involves going over every
statement in the code to make sure that each one has been run at
least once. As a result, the code is checked line by line.
 Branch coverage: A testing approach in which test cases are created
to ensure that each branch is tested at least once. This method
examines every possible outcome of each decision point.
 Path coverage: Path coverage is a software testing approach that
defines and covers all potential paths. Paths are sequences of
statements that may be executed from system entry to exit points. It
takes a lot of time.
 Loop testing: With the help of this technique, loops and values in
both independent and dependent code are examined. Errors often
happen at the start and end of loops. This method includes testing
simple loops, nested loops and concatenated loops.
 Basis path testing: Using this methodology, control flow graphs are
created from code and calculations are then made for cyclomatic
complexity. For the purpose of designing the fewest possible

test cases, cyclomatic complexity specifies the number of independent
paths. Cyclomatic complexity is a software metric used to indicate the
complexity of a program. It is computed using the control flow graph of
the program.
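The arithmetic behind basis path testing can be sketched briefly. For a control flow graph with E edges, N nodes and P connected components, cyclomatic complexity is V(G) = E − N + 2P; the graph used below is hypothetical:

```python
# Sketch of the cyclomatic complexity formula used in basis path testing.
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = E - N + 2P for a control flow graph."""
    return edges - nodes + 2 * components

# Hypothetical CFG for two if-else statements in sequence:
# 7 nodes, 8 edges -> V(G) = 8 - 7 + 2 = 3,
# so three independent paths (and at least three test cases) are needed.
print(cyclomatic_complexity(8, 7))
```

V(G) also equals the number of decision points plus one, which gives a quick cross-check when drawing the graph is impractical.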
1.2.1.3 Advantages of White Box Testing
 Complete coverage.
 Better understanding of the system.
 Improved code quality.
 Increased efficiency.
 Early detection of errors.

1.2.1.4 Disadvantages of White Box Testing


 This testing is very expensive and time-consuming.
 A redesign of the code requires test cases to be written again.
 Missing functionalities cannot be detected.
 This technique can be very complex and at times not realistic.
 White-box testing requires a programmer with a high level of
knowledge, due to the complexity of the level of testing that needs to
be done.

1.2.2 What is Black Box Testing?


Testing a system in a "black box" is doing so without knowing
anything about how it operates within. A tester inputs data and monitors
the output produced by the system being tested. This allows for the
identification of the system's reaction time, usability difficulties and
reliability concerns, as well as how the system reacts to anticipated and
unexpected user activities.
Because it tests a system from beginning to finish, black box testing
is a potent testing method. A tester may imitate user action to check if the
system fulfills its promises, much as end users "don't care” how a system is
programmed or designed and expect to get a suitable answer to their
requests. A black box test assesses every important subsystem along the
route, including the UI/UX, database, dependencies and integrated
systems, as well as the web server or application server.

1.2.2.1 Black Box Testing Pros and Cons

Pros:
1. Testers do not require technical knowledge, programming or IT skills.
2. Testers do not need to learn implementation details of the system.
3. Tests can be executed by crowdsourced or outsourced testers.
4. Low chance of false positives.
5. Tests have lower complexity, since they simply model common user
behavior.

Cons:
1. Difficult to automate.
2. Requires prioritization; typically infeasible to test all user paths.
3. Difficult to calculate test coverage.
4. If a test fails, it can be difficult to understand the root cause of the
issue.
5. Tests may be conducted at low scale or in a non-production
environment.

1.2.2.2 Types of Black Box Testing


Black box testing can be applied to three main types of tests:
functional, non-functional and regression testing.
1. Functional Testing
Specific aspects or operations of the program being tested may be
checked via black box testing: for instance, making sure that the right user
credentials can be used to log in and that incorrect ones cannot.
Functional testing might concentrate on the most important features
of the program (smoke testing/sanity testing), on how well the system
works as a whole (system testing), or on the integration of its essential
components.
2. Non-functional Testing
 Beyond features and functioning, black box testing allows for the
inspection of additional software attributes. A non-functional test
examines "how" rather than "if" the program can carry out a
certain task.
 Black box testing may determine whether the software is:
a) usable and simple for its users to comprehend;
b) able to perform under predicted or peak loads;
c) compatible with relevant devices, screen sizes, browsers or
operating systems;
d) exposed to security flaws or frequent security threats.
1.2.2.3 Black Box Testing Techniques

1. Equivalence partitioning
Testing professionals may organize potential inputs into "partitions"
and test just one sample input from each category. For instance, it is
sufficient for testers to verify one birth date in the "under 18" group and one
date in the "over 18" group if a system asks for a user's birth date and
returns the same answer for users under the age of 18 and a different
response for users over 18.
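The birth-date example above can be sketched in code. The `access_level` function and the partition values are hypothetical; the point is that one representative per partition suffices:

```python
# Sketch of equivalence partitioning for an age-dependent response.
def access_level(age):
    """Hypothetical system rule: different responses below and from 18 up."""
    return "under-18 response" if age < 18 else "over-18 response"

partitions = {
    "under 18": [5, 12, 17],      # any one value represents the whole group
    "18 and over": [18, 35, 90],
}

for name, values in partitions.items():
    representative = values[0]
    # Every member of the partition should behave like its representative,
    # so testing only the representative is enough.
    assert all(access_level(v) == access_level(representative) for v in values)
    print(name, "->", access_level(representative))
```

Two test cases thus replace an unbounded set of possible ages, which is the economy the technique offers.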

2. Boundary value analysis


Testers can determine whether a system responds differently around a
certain boundary value. For instance, a particular field could only support
values in the range of 0 to 99. Testing personnel may concentrate on the
boundary values (-1, 0, 99 and 100) to determine whether the system is
appropriately accepting and rejecting inputs.
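The 0-99 field above can be sketched as follows; the `accepts` validator is hypothetical, and the test values sit on and just outside each boundary:

```python
# Sketch of boundary value analysis for a field that supports 0 to 99.
def accepts(value):
    """Hypothetical validator for the 0-99 field."""
    return 0 <= value <= 99

# value -> should the system accept it?
boundary_cases = {-1: False, 0: True, 99: True, 100: False}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected, value
print("system accepts and rejects correctly at the boundaries")
```

Off-by-one mistakes (writing `<` instead of `<=`, for example) are exactly the defects these four values are chosen to expose.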

3. Decision Table Testing
Numerous systems produce results depending on a set of conditions.
Once "rules" that are combinations of conditions have been identified, each
rule's outcome can be determined and test cases may then be created for
each rule.
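A decision table can be sketched directly as data, with one test case per rule. The discount policy here is hypothetical, invented to show the technique:

```python
# Sketch of decision table testing: each rule is a combination of conditions
# with a single expected outcome.
def discount(is_member, order_over_100):
    """Hypothetical system under test."""
    if is_member and order_over_100:
        return 20
    if is_member:
        return 10
    if order_over_100:
        return 5
    return 0

# Decision table: (member?, order over 100?) -> expected discount %
rules = {
    (True, True): 20,
    (True, False): 10,
    (False, True): 5,
    (False, False): 0,
}

for (member, big_order), expected in rules.items():
    assert discount(member, big_order) == expected
print("one test case per rule; all rules pass")
```

With two binary conditions there are exactly four rules, so the table guarantees no combination of conditions goes untested.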
1.2.3 Differences between Black Box Testing vs. White Box Testing:

Black Box Testing vs. White Box Testing:

1. Black box: a way of software testing in which the internal structure,
code or program is hidden and nothing is known about it. White box: a
way of testing in which the tester has knowledge of the internal
structure, code or program of the software.
2. Black box: implementation of code is not needed. White box: code
implementation is necessary.
3. Black box: mostly done by software testers. White box: mostly done by
software developers.
4. Black box: no knowledge of implementation is needed. White box:
knowledge of implementation is required.
5. Black box: can be referred to as outer or external software testing.
White box: inner or internal software testing.
6. Black box: a functional test of the software. White box: a structural
test of the software.
7. Black box: can be initiated based on the requirement specification
document. White box: started after the detailed design document.
8. Black box: no knowledge of programming is required. White box:
knowledge of programming is mandatory.
9. Black box: behaviour testing of the software. White box: logic testing
of the software.
10. Black box: applicable to the higher levels of software testing. White
box: generally applicable to the lower levels of software testing.
11. Black box: also called closed testing. White box: also called clear box
testing.
12. Black box: least time consuming. White box: most time consuming.
13. Black box: not suitable or preferred for algorithm testing. White box:
suitable for algorithm testing.
14. Black box: can be done by trial-and-error ways and methods. White
box: data domains along with inner or internal boundaries can be
better tested.
15. Black box example: search something on Google by using keywords.
White box example: checking and verifying loops by input.
16. Black-box test design techniques: decision table testing, all-pairs
testing, equivalence partitioning, error guessing. White-box test
design techniques: control flow testing, data flow testing, branch
testing.
17. Types of black box testing: functional testing, non-functional testing,
regression testing. Types of white box testing: path testing, loop
testing, condition testing.
18. Black box: less exhaustive. White box: comparatively more
exhaustive.

1.3 SOFTWARE TESTING LIFE CYCLE


The Software Testing Life Cycle (STLC) is a systematic approach to
testing a software application to ensure that it meets the requirements and
is free of defects. It is a process that follows a series of steps or phases, and
each phase has specific objectives and deliverables. The STLC is used to
ensure that the software is of high quality, reliable, and meets the needs of
the end-users.
The main goal of the STLC is to identify and document any defects or
issues in the software application as early as possible in the development
process. This allows for issues to be addressed and resolved before the
software is released to the public.
The stages of the STLC include Requirement Analysis, Test Planning,
Test Case Development, Test Environment Setup, Test Execution, and Test
Closure. Each of these stages includes specific activities and deliverables
that help to ensure that the software is thoroughly tested and meets the
requirements of the end users.
Overall, the STLC is an important process that helps to ensure the
quality of software applications and provides a systematic approach to
testing. It allows organizations to release high-quality software that meets
the needs of their customers, ultimately leading to customer satisfaction
and business success.

Phases of STLC

1. Requirement Analysis: Requirement Analysis is the first step of the
Software Testing Life Cycle (STLC). In this phase, the quality assurance team
understands the requirements, such as what is to be tested. If anything is
missing or not understandable, then the quality assurance team meets with
the stakeholders to gain detailed knowledge of the requirements.
The activities that take place during the Requirement Analysis stage
include:
• Reviewing the software requirements document (SRD) and other
related documents
• Interviewing stakeholders to gather additional information
• Identifying any ambiguities or inconsistencies in the requirements
• Identifying any missing or incomplete requirements
• Identifying any potential risks or issues that may impact the testing
process
• Creating a requirement traceability matrix (RTM) to map
requirements to test cases
At the end of this stage, the testing team should have a clear
understanding of the software requirements and should have identified any
potential issues that may impact the testing process. This will help to ensure
that the testing process is focused on the most important areas of the
software and that the testing team is able to deliver high-quality results.

2. Test Planning: Test Planning is the phase of the software testing life
cycle where all testing plans are defined. In this phase, the manager of the
testing team calculates the estimated effort and cost of the testing work.
This phase starts once the requirement-gathering phase is completed.
The activities that take place during the Test Planning stage include:
• Identifying the testing objectives and scope
• Developing a test strategy: selecting the testing methods and
techniques that will be used
• Identifying the testing environment and resources needed
• Identifying the test cases that will be executed and the test data that
will be used
• Estimating the time and cost required for testing
• Identifying the test deliverables and milestones
• Assigning roles and responsibilities to the testing team
• Reviewing and approving the test plan

At the end of this stage, the testing team should have a detailed plan
for the testing activities that will be performed, and a clear
understanding of the testing objectives, scope, and deliverables. This
will help to ensure that the testing process is well-organized and that
the testing team is able to deliver high-quality results.


3. Test Case Development: The test case development phase starts once
the test planning phase is completed. In this phase, the testing team writes
down the detailed test cases. The testing team also prepares the required
test data for the testing. When the test cases are prepared, they are
reviewed by the quality assurance team.
The activities that take place during the Test Case Development stage
include:
• Identifying the test cases that will be developed
• Writing test cases that are clear, concise, and easy to understand
• Creating test data and test scenarios that will be used in the test
cases
• Identifying the expected results for each test case
• Reviewing and validating the test cases
• Updating the requirement traceability matrix (RTM) to map
requirements to test cases
At the end of this stage, the testing team should have a set of
comprehensive and accurate test cases that provide adequate coverage of the
software or application. This will help to ensure that the testing process is
thorough and that any potential issues are identified and addressed before
the software is released.
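The activities above can be sketched as data: each test case records an ID, input, expected result and the requirement it traces to, giving a minimal requirement traceability matrix (RTM). The login rule and all the IDs below are hypothetical:

```python
# Hypothetical unit under test.
def login(username, password):
    return username == "alice" and password == "s3cret"

# Test cases with expected results, each traced to a requirement ID.
test_cases = [
    {"id": "TC-01", "req": "REQ-LOGIN-1",
     "input": ("alice", "s3cret"), "expected": True},
    {"id": "TC-02", "req": "REQ-LOGIN-1",
     "input": ("alice", "wrong"), "expected": False},
    {"id": "TC-03", "req": "REQ-LOGIN-2",
     "input": ("", ""), "expected": False},
]

# Execute each case and map requirements to test results (the RTM).
rtm = {}
for tc in test_cases:
    actual = login(*tc["input"])
    status = "PASS" if actual == tc["expected"] else "FAIL"
    rtm.setdefault(tc["req"], []).append((tc["id"], status))

print(rtm)
```

Keeping the RTM updated this way makes it easy to show, for review, which requirements are covered by passing test cases and which are not.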

4. Test Environment Setup: Test environment setup is a vital part of the
STLC. Basically, the test environment decides the conditions under which
the software is tested. It is an independent activity and can be started along
with test case development. The testing team is not necessarily involved in
this process; either the developer or the customer creates the testing
environment.

5. Test Execution: After test case development and test environment
setup, the test execution phase starts. In this phase, the testing team starts
executing the test cases prepared in the earlier step.
The activities that take place during the test execution stage of the
Software Testing Life Cycle (STLC) include:
• Test data preparation: Test data is prepared and loaded into the
system for test execution.
• Test environment setup: The necessary hardware, software, and
network configurations are set up for test execution.
• Test execution: The test cases and scripts created in the test design
stage are run against the software application, and the results are
collected and analyzed to identify any defects or issues.
• Defect logging: Any defects or issues that are found during test
execution are logged in a defect tracking system, along with details
such as the severity, priority, and description of the issue.
• Test result analysis: The results of the test execution are analyzed
to determine the software's performance and identify any defects or
issues.
• Defect retesting: Any defects that are identified during test
execution are retested to ensure that they have been fixed correctly.
• Test reporting: Test results are documented and reported to the
relevant stakeholders.
It is important to note that test execution is an iterative process and
may need to be repeated multiple times until all identified defects are fixed
and the software is deemed fit for release.

6. Test Closure: Test closure is the final stage of the Software Testing Life
Cycle (STLC) where all testing-related activities are completed and
documented. The main objective of the test closure stage is to ensure
that all testing-related activities have been completed and that the
software is ready for release.
At the end of the test closure stage, the testing team should have a
clear understanding of the software’s quality and reliability, and any defects
or issues that were identified during testing should have been resolved. The
test closure stage also includes documenting the testing process and any
lessons learned so that they can be used to improve future testing processes
The main activities that take place during the test closure stage include:
• Test summary report: A report is created that summarizes the
overall testing process, including the number of test cases executed, the
number of defects found, and the overall pass/fail rate.
• Defect tracking: All defects that were identified during testing are
tracked and managed until they are resolved.
• Test environment clean-up: The test environment is cleaned up,
and all test data and test artifacts are archived.
• Test closure report: A report is created that documents all the
testing-related activities that took place, including the testing objectives,
scope, schedule, and resources used.
• Knowledge transfer: Knowledge about the software and testing
process is shared with the rest of the team and any stakeholders who may
need to maintain or support the software in the future.
• Feedback and improvements: Feedback from the testing process is
collected and used to improve future testing processes

1.4 V-MODEL OF SOFTWARE TESTING


The V-Model is also referred to as the Verification and Validation Model. In
it, each phase of the SDLC must complete before the next phase starts. It
follows a sequential design process, the same as the waterfall model. Testing
of the product is planned in parallel with a corresponding stage of
development.
Verification: Process of evaluating a software system or component to
determine whether the products of a given development phase satisfy the
conditions imposed at the start of that phase. Verification is usually
associated with activities such as inspections and reviews of software
deliverables.
Validation: The process of evaluating a software system or its components
during, or at the end of, the development cycle in order to determine whether
it satisfies the specified requirements. Validation is usually associated with
traditional execution-based testing, that is, exercising the code with test
cases.
So the V-Model contains Verification phases on one side and Validation
phases on the other. The Verification and Validation processes are joined by
the coding phase in a V shape; thus it is known as the V-Model.


The various phases of the Verification side of the V-model are:

1. Business requirement analysis: This is the first step, where the
product requirements are understood from the customer's side. This phase
involves detailed communication to understand the customer's expectations
and exact requirements.
2. System Design: In this stage, system engineers analyze and interpret
the business requirements of the proposed system by studying the user
requirements document.
3. Architecture Design: The baseline in selecting the architecture is
that it should realize all of the requirements; it typically consists of a list of
modules, brief functionality of each module, their interface relationships,
dependencies, database tables, architecture diagrams, technology details,
etc. The integration testing plan is prepared in this particular phase.
4. Module Design: In the module design phase, the system is broken down into small modules. The detailed design of the modules is specified, which is known as Low-Level Design.
5. Coding Phase: After designing, the coding phase starts. A suitable programming language is chosen based on the requirements, and coding follows established guidelines and standards. Before check-in to the repository, the final build is optimized for better performance, and the code goes through many code reviews.

There are various phases in the Validation side of the V-Model:


1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module design phase. These UTPs are executed to eliminate errors at the code or unit level. A unit is the smallest entity that can exist independently, e.g., a program module. Unit testing verifies that the smallest entity functions correctly when isolated from the rest of the code.
2. Integration Testing: Integration Test Plans are developed during the Architectural Design phase. These tests verify that units created and tested independently can coexist and communicate among themselves.

3. System Testing: System Test Plans are developed during the System Design phase. Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's business team. System testing ensures that the expectations from the application under development are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis phase. It involves testing the software product in the user's environment. Acceptance tests reveal compatibility problems with the other systems available in the user environment. They also discover non-functional problems, such as load and performance defects, in the real user environment.

When to use V-Model?


When the requirements are well defined and unambiguous.
The V-shaped model should be used for small to medium-sized
projects where requirements are clearly defined and fixed.
The V-shaped model should be chosen when ample technical
resources with essential technical expertise are available.

Advantage (Pros) of V-Model

1. Easy to understand.
2. Testing activities like planning and test design happen well before coding. This saves a lot of time and gives a higher chance of success over the waterfall model.
3. Avoids the downward flow of defects.
4. Works well for small projects where requirements are easily understood.

Disadvantage (Cons) of V-Model


1. Very rigid and least flexible.
2. Not good for complex projects.
3. Software is developed during the implementation stage, so no early
prototypes of the software are produced.
4. If any changes happen midway, then the test documents, along
with the requirement documents, have to be updated.

1.5 PROGRAM CORRECTNESS AND VERIFICATION


We discuss software correctness from two perspectives: the operational and the symbolic approach. To show that a program is correct
• from the operational perspective, we use testing;
• from the symbolic perspective, we use proof.
The two perspectives, and with them testing and proof, are tightly related, and we make ample use of this relationship.


Testing a Simple Fragment (Version 1)


Knowing about the relationship between values and facts, we can formulate a simple testing method for program fragments. The fragments have the following general shape, consisting of three parts:

    initialize variables
    carry out computation
    check condition

The initialize variables part sets up the input values for the fragment. Usually, the input values are chosen taking into account conditions concerning input values, e.g., to avoid division by zero. The carry out computation part contains the "program". The check condition part specifies a condition to determine whether the program is correct.

Abstracting from Values and Computations


In order to complete the abstraction from values and computations we
are still lacking a means to describe assumptions about values at the
beginning of a fragment. The corresponding construct is called assume. It
specifies a fact that can be assumed to be true at the locations where it is
written. We can imagine assume to have the following effect during program
execution: if the condition assumed is true, assume does nothing; if the
condition is false, assume terminates the program “gently”, that is, it is not
considered an error. Contrast this to the assert statement, that aborts with
an error when its condition is false. Now, assume is not very useful for
programming because we are interested in error conditions that may be
encountered during program execution, but assume is very useful for
reasoning about program fragments.
Consider the following fragment involving a division.

    val x: Z = randomInt()
    assume(x >= 5)
    val y: Z = 5 / x
    assert(y <= 1)
As long as we know that the fact x >= 5 holds at the beginning we can
be certain that at the end y <= 1 holds. The use of assume will turn out to
be a powerful tool for drafting program fragments and also for determining
test cases. In fact, in conjunction with assert it is so useful, that a pair
consisting of an assume statement followed by an assert statement is given
its own terminology. We call such a pair a contract. We will see contracts in
different shapes. Special syntax is used for different purposes, for instance,
when specifying contracts for functions. Note that if a program fragment
with an assume statement was “called” and we would like that it does not
terminate gently, we would have to show that the condition of the assume
statement is met. We will come back to this point later.
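The gentle behaviour of assume, contrasted with assert, can be sketched in Python. The Val/Z fragment above is pseudo-code; the exception name GentleExit and the translation below are illustrative assumptions, not a standard API:

```python
import random

class GentleExit(Exception):
    """Raised by assume() to end a run quietly; not considered an error."""

def assume(condition: bool) -> None:
    # If the assumed fact is false, terminate "gently" instead of failing.
    if not condition:
        raise GentleExit()

def fragment() -> None:
    x = random.randint(-10, 10)     # val x: Z = randomInt()
    assume(x >= 5)                  # pre-condition: continue only when x >= 5
    y = 5 // x                      # val y: Z = 5 / x (integer division)
    assert y <= 1                   # post-condition: must hold whenever reached

for _ in range(1000):
    try:
        fragment()
    except GentleExit:
        pass                        # a gentle exit is fine, not a failure
```

Whenever the assumption x >= 5 holds, 5 // x is at most 1, so the assert never fires; runs that violate the assumption are discarded quietly.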

Testing a Simple Fragment (Version 2)


Instead of giving an initialisation as in Version 1, we can also use an assume statement to impose an initial condition for a test. Using the concept of a contract, we can specify:


    assume initial condition on variables
    specify computation
    assert final condition on variables

The fragment terminates gently if the initial condition is not met, and aborts with an error if the initial condition was met but the final condition is not. This way of specifying tests turns out to be a foundation for deriving test cases according to Version 1: reasoning about contracts, we can systematically develop test cases.

Program Correctness
Following the preceding discussion we base our notion of correctness
on contracts.
We consider two variants in which contracts can be specified:
• Pairs of assume-assert-statements in program fragments and tests.
These contracts are executable and can be evaluated at run-time.
• Pairs of requires-ensures-clauses used for specifying functions.
These contracts are not executable. They are exclusively used for proving
that functions satisfy their contracts. By contrast, assume and assert can
be used for proving properties and for their runtime checking. Properties
specified using requires and ensures are more expressive, permitting
statements over infinite value ranges, for instance, which cannot be
evaluated at run-time. We introduce function contracts in the next chapter.
We call the first component of a contract, its assume-statement or requires-clause, a pre-condition. We call its second component, its assert-statement or ensures-clause, a post-condition.
We say a program (or function) is correct if it satisfies its contract: starting in a state satisfying the pre-condition, it terminates in a state satisfying its post-condition.

Program Verification
To demonstrate that a program is correct, we verify it. We consider two principal methods for verifying programs.
 Proof
Using logical deduction, we show that any execution of the program starting in a state satisfying the pre-condition terminates in a state satisfying the post-condition. In other words, we show that the program is correct.
 Testing
Executing the program for specific states satisfying the pre-condition, we check whether on termination a state is reached that satisfies the post-condition. It is up to us to determine suitable pairs of states, called test cases. This approach does not show that a program is correct. In practice, we conjecture that a program that has been subjected to a sufficient number of tests is correct. This kind of reasoning is called induction: from a collection of tests that confirm correctness for precisely those tests, we infer that this is the case for all possible tests. Testing is a fallible verification method: it is entirely possible that all the tests we have provided appear to confirm correctness, but later we find a test case that refutes the conjecture. Then either the program contains an error or the test case is wrong.

1.6 RELIABILITY VERSUS SAFETY

1.6.1 Software Reliability

Software reliability engineering involves much more than analyzing test results, estimating remaining faults, and modeling future failure probabilities.
Although in most organizations software test is no longer an
afterthought, management is almost always surprised by the cost and
schedule requirements of the test program, and it is often downgraded in
favor of design activities. Often adding a new feature will seem more
beneficial than performing a complete test on existing features. A good
software reliability engineering program, introduced early in the
development cycle, will mitigate these problems by:
• Preparing program management in advance for the testing effort and
allowing them to plan both schedule and budget to cover the required
testing.
• Continuous review of requirements throughout the life cycle,
particularly for handling of exception conditions. If requirements are
incomplete there will be no testing of the exception conditions.
• Offering management a quantitative assessment of the dependence
of reliability metrics (software/system availability; software/system outages
per day, etc.) on the effort (time and cost) allotted to testing.
• Providing the most efficient test plan targeted to bringing the
product to market in the shortest time subject to the reliability
requirements imposed by the customer or market expectations.
• Performing continuous quantitative assessment of software/system
reliability and the effort/cost required to improve them by a specified
amount.

Reliability Program Tasks:

1. Reliability Allocation

Reliability allocation is the task of defining the necessary reliability of a software item. The item may be a part of an integrated hardware/software system, may be a relatively independent software application, or, more and more rarely, a standalone software program. In any of these cases, the goal is to bring system reliability within either a strict constraint required by a customer or an internally perceived readiness level, or to optimize reliability within schedule and cost constraints.


2. Defining and Analyzing Operational Profiles


The reliability of software is tied to the operational usage of an application much more strongly than the reliability of hardware. A software fault may lead to a system failure only if that fault is encountered during operational usage. If a fault is not accessed in a specific operational mode, it will not cause failures at all. It will cause failures more often if it is located in code that is part of a frequently used "operation" (an operation is defined as a major logical task, usually repeated multiple times within an hour of application usage). Therefore, in software reliability engineering we focus on the operational profile of the software, which weighs the occurrence probabilities of each operation. Unless safety requirements indicate a modification of this approach, we prioritize our testing according to this profile.
Software engineers have to complete the following tasks to generate a usable operational profile:
• Determine the operational modes (high traffic, low traffic, high
maintenance, remote use, local use, etc)
• Determine operation initiators (components that initiate the
operations in the system)
• Determine and group "Operations" so that the list includes only
operations that are significantly different from each other (and therefore
may present different faults)
• Determine occurrence rates for the different operations
• Construct the operational profile based on the individual operation
probabilities of occurrence.
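The last two tasks above can be sketched in Python: given raw occurrence rates per operation, the operational profile is simply each rate divided by the total. The operations and rates below are invented for illustration:

```python
def operational_profile(rates):
    """Convert per-operation occurrence rates (e.g. invocations per hour)
    into occurrence probabilities that sum to 1."""
    total = sum(rates.values())
    return {op: rate / total for op, rate in rates.items()}

# Hypothetical occurrence rates, in invocations per hour.
rates = {"login": 50, "search": 120, "checkout": 20, "admin-report": 10}
profile = operational_profile(rates)

assert abs(sum(profile.values()) - 1.0) < 1e-9
# Testing effort is then prioritized by probability: "search" comes first here.
assert max(profile, key=profile.get) == "search"
```
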

3. Test Preparation and Plan


Test preparation is a crucial step in the implementation of an effective
software reliability program. A test plan that is based on the operational
profile on the one hand, and subject to the reliability allocation constraints
on the other, will be effective in achieving the program's reliability goals in
the least amount of time and cost.
Software Reliability Engineering is concerned not only with feature
and regression test, but also with load test and performance test. All these
should be planned based on the activities outlined above.
The reliability program will inform and often determine the following
test preparation activities:
• Assessing the number of new test cases required for the current
release
• New test case allocation among the systems (if multi-system)
• New test case allocation for each system among its new operations
• Specifying new test cases
• Adding the new test cases to the existing test cases from previous
releases

4. Software Reliability Models


Software reliability engineering is often identified with reliability
models, in particular reliability growth models. These models, when applied
correctly, are successful at providing guidance to management decisions
such as:


• Test schedule
• Test resource allocation
• Time to market
• Maintenance resource allocation

The application of reliability models to software testing results allows us to infer the rate at which failures are encountered (depending on usage
profile) and, more importantly, the changes in this rate (reliability growth).
The ability to make these inferences depends critically on the quality of test
results. It is essential that testing be performed in such a way that each
failure incident is accurately reported.
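As an illustration of a reliability growth model (the notes do not name a specific one, so this example assumes the widely used Goel-Okumoto model), the expected number of failures by test time t is m(t) = a(1 - e^(-bt)), where a is the total expected number of failures and b the per-failure detection rate:

```python
import math

def expected_failures(a, b, t):
    """Goel-Okumoto model: expected cumulative failures after t units of test time.
    a = total expected failures, b = per-failure detection rate."""
    return a * (1.0 - math.exp(-b * t))

def failure_intensity(a, b, t):
    """Rate at which new failures are encountered at time t (derivative of m)."""
    return a * b * math.exp(-b * t)

# With an assumed a = 100 total faults and b = 0.05 per week:
m_10 = expected_failures(100, 0.05, 10)   # failures expected within 10 weeks
# The intensity decreases over time, which is the reliability growth
# that supports decisions such as test schedule and time to market.
assert failure_intensity(100, 0.05, 20) < failure_intensity(100, 0.05, 1)
```
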

1.6.2 Software Safety


As systems and products become more and more dependent on
software components it is no longer realistic to develop a system safety
program that does not include the software elements.

Does software fail?


We tend to believe that well written and well tested safety critical
software would never fail. Experience proves otherwise with software
making headlines when it actually does fail, sometimes critically. Software
does not fail the same way as hardware does, and the various failure
behaviors we are accustomed to from the world of hardware are often not
applicable to software. However, software does fail, and when it does, it can
be just as catastrophic as hardware failures.

Safety-critical software
Safety-critical software is very different from both non-critical software and safety-critical hardware. The difference lies in the massive testing program that such software undergoes.

What are "software failure modes"?


Software, especially in critical systems, tends to fail where least
expected. Software does not "break" but it must be able to deal with
"broken" input and conditions, which often cause the "software failures".
The task of dealing with abnormal/anomalous conditions and inputs is
handled by the exception code dispersed throughout the program. Setting
up a test plan and exhaustive test cases for the exception code is by
definition difficult and somewhat subjective.
Anomalous inputs can be due to:
• failed hardware
• timing problems
• harsh/unexpected environmental conditions
• multiple changes in conditions and inputs that are beyond
what the hardware is able to deal with
• unanticipated conditions during software mode changes
• bad or unexpected user input
Often the conditions most difficult to predict are multiple, coinciding,
irregular inputs and conditions.
Safety-critical software is usually tested to the point that no new
critical failures are observed. This of course does not mean that the
software is fault-free at this point, only that failures are no longer observed
in test.
Why are the faults leading to these types of failures overlooked in test?
These are faults that are not tested for any of the following reasons:
• Faults in code that is not frequently used and therefore not well
represented in the operational profiles used for testing
• Faults caused by multiple anomalous conditions that are difficult to
test
• Faults related to interfaces and controls of failed hardware
• Faults due to missing requirements
It is clear why these types of faults may remain outside of a
normal, reliability focused, test plan.

1.7 FAILURES, ERRORS AND FAULTS (DEFECTS)


Error:
An error is a mistake, misconception, or misunderstanding on the part of a software developer. Among developers we include software engineers, programmers, analysts, and testers. For example, a developer may misunderstand a design notation, or a programmer might type a variable name incorrectly.
Faults:
A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, and not according to its specification. Faults or defects are sometimes called "bugs".
Failures:
A failure is the inability of a software system or component to perform its required functions within specified performance requirements.
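The chain from error to fault to failure can be made concrete with a small example (the function is invented for illustration): the developer's mistake (error) of typing `>` where `>=` was required leaves a fault in the code, which becomes a failure only when an input exercises it:

```python
def is_passing_grade(score):
    """Specification: a score of 50 or more is a pass."""
    # Error: the developer typed `>` where `>=` was required.
    # The resulting fault (defect) now sits in the line below.
    return score > 50

# For most inputs the fault stays hidden...
assert is_passing_grade(80) is True
assert is_passing_grade(30) is False
# ...but the boundary input 50 turns the fault into an observable failure:
assert is_passing_grade(50) is False   # the specification requires True here
```
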

1.8 SOFTWARE TESTING PRINCIPLES

Software testing is a process that involves putting software or an application to use in order to find faults or flaws. Following certain guidelines helps us test software or applications without creating problems, and saves the test engineers' time and effort. In this part we will learn about the seven fundamental principles of software testing.
Let us see the seven different testing principles, one by one:


1. The goal of testing is to find defects before customers find them
Whatever a software organization develops should meet the needs of the customer; everything else is secondary. Testing is a means of making sure that the product meets the needs of the customer.
Customer does not mean only an external customer; there are also internal customers. For example, if a product is built using different components from different groups within an organization, the users of these different components should be considered customers, even if they are from the same organization.
Some organizations take the internal customer concept a step further: the development team considers the testing team as its internal customer. This way we can ensure that the product is built not only for usage requirements but also for testing requirements.

2. Exhaustive testing is not possible

Testing can only prove the presence of defects, never their absence.

Example: Consider a program that is supposed to accept a six-character code and ensure that the first character is numeric and the rest of the characters are alphanumeric. How many combinations of input data should we test? The first character can be filled in one of 10 ways (the digits 0-9). The second through sixth characters can each be filled in 62 ways (digits 0-9, lower-case letters a-z and capital letters A-Z). This means we have a total of 10 × 62^5, or 9,161,328,320, valid combinations of values to test. Assuming each combination takes 10 seconds to test, testing all these valid combinations would take approximately 2905 years.

It can therefore appear quite difficult to test all the modules and their features with effective and ineffective combinations of the input data during the real testing process.
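The arithmetic behind this example is easy to reproduce (the 10-seconds-per-test figure is the one assumed in the text):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60

# First character: 10 choices (digits 0-9).
# Characters 2 through 6: 62 choices each (0-9, a-z, A-Z).
combinations = 10 * 62 ** 5
assert combinations == 9_161_328_320

# At the assumed 10 seconds per test case:
years = combinations * 10 / SECONDS_PER_YEAR
assert round(years) == 2905          # roughly 2905 years of testing
```
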

3. A test in time
A defect in a product can come from any phase. There could have been errors while gathering initial requirements. If a wrong or incomplete requirement forms the basis for the design and development of a product, then that functionality can never be realized correctly in the eventual product. Similarly, when a product design, which forms the basis for the product development (that is, coding), is faulty, then the code that realizes the faulty design will also not meet the requirements. An essential condition is that every phase of software development (requirements, design, coding) should catch and correct defects at that phase, without letting the defects seep into the next stage.

4. Test the tests first

Test the tests first: a defective test is more dangerous than a defective product.

5. Defect clustering

Testing can only find a part of the defects that exist in a cluster; fixing a defect may introduce another defect into the cluster.

a. Defect clustering means that during the testing process we can identify quantities of problems that are associated with a limited number of modules. There are a number of explanations for this, including the possibility of intricate modules, difficult code, and more.
b. The Pareto principle suggests that, roughly, twenty percent of the modules contain eighty percent of the defects. This allows us to locate the problematic modules, but it has limitations if the same tests are run often, since they will not be able to spot any newly introduced flaws.
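A quick way to see whether a project's defects cluster in the Pareto fashion is to sort modules by defect count and check what share of the total the top fifth accounts for (the module names and counts below are invented):

```python
def top_fraction_share(defects_per_module, fraction=0.2):
    """Share of all defects held by the top `fraction` of modules."""
    counts = sorted(defects_per_module.values(), reverse=True)
    k = max(1, round(len(counts) * fraction))   # size of the top group
    return sum(counts[:k]) / sum(counts)

# Hypothetical defect counts for ten modules.
defects = {"m1": 40, "m2": 38, "m3": 5, "m4": 4, "m5": 3,
           "m6": 3, "m7": 2, "m8": 2, "m9": 2, "m10": 1}
share = top_fraction_share(defects)   # top 2 of 10 modules
assert share >= 0.78                  # roughly the 80/20 pattern
```
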

6. Beware of the pesticide paradox

Defects are like pests; testing is like designing the right pesticides to catch and kill the pests; and the test cases that are written are like pesticides. Like pesticides, you have to constantly revise their composition to tackle new pests (defects).

This is based on the observation that when you use a pesticide repeatedly on crops, insects eventually build up an immunity, rendering it ineffective. Similarly with testing: if the same tests are run continuously then, while they might confirm the software is working, eventually they will fail to find new issues. It is important to keep reviewing your tests and modifying or adding to your scenarios to help prevent the pesticide paradox from occurring, perhaps using varying testing techniques, methods and approaches in parallel.

7. Testing is context dependent


Testing is all about the context. The methods and types of testing carried out can depend completely on the context of the software or system. For example, an e-commerce website can require different types of testing and approaches than an API application or a database reporting application. What you are testing will always affect your approach.

1.9. PROGRAM INSPECTIONS


Program or Software inspection refers to a peer review of software to
identify bugs or defects at the early stages of SDLC. It is a formal review that
ensures the documentation produced during a given stage is consistent with
previous stages and conforms to pre- established rules and standards.
Software inspection involves people examining the software product to
discover defects and inconsistencies. Since it doesn’t require system
execution, inspection is usually done before implementation.

Purpose of software inspection:


Software inspection aims to identify software defects and deviations,
ensuring the product meets customer requirements, wants, and needs. In a
broader context, the objective of the inspection is to inhibit defective
software from flowing down the subsequent operations, thereby preventing
loss to the company.
Software inspection is designed to uncover defects or bugs, unlike testing, which is done to make corrections. It is divided into two types:

1. Document inspection: Here, the documents produced for a given phase are inspected, focusing on their quality, correctness, and relevance.
2. Code inspection: The code, program source files, and test scenarios are
inspected and reviewed.

Who are the key parties involved?

Moderator: A facilitator who organizes and reports on the inspection.
Author: The person who produced the work product under inspection.
Reader: A person who guides the examination of the software; more of a paraphraser.
Recorder: An inspector who logs all the defects on the defect list.
Inspector: An inspection team member responsible for identifying defects.

Software Inspection Process:


Software inspection involves six steps –
 Planning,
 Overview,
 Preparation,
 Meeting,
 Rework, and
 Follow-up.


1. Planning
The planning phase starts with the selection of a group review team. A
moderator plans the activities performed during the inspection and verifies
that the software entry criteria are met.
2. Overview
The overview phase intends to disseminate information regarding the
background of the product under review. Here, a presentation is given to the
inspector with some background information needed to review the software
product properly.

3. Preparation
In the individual preparation phase, the inspector collects all the
materials needed for inspection. Each reviewer studies the project
individually and notes the issues they encounter.

4. Meeting
The moderator conducts the meeting to collect and review defects.
Here, the reader reads through the product line by line while the inspector
points out the flaws. All issues are raised, and suggestions may be recorded.

5. Rework
Based on meeting notes, the author changes the work product.

6. Follow-up
In the last phase, the moderator verifies if necessary changes are
made to the software product, compiling a defect summary report.


1.10. STAGES OF TESTING

The figure shows the stages of testing in order of increasing scope:

 Unit Test: individual components
 Integration Test: component groups
 System Test: the system as a whole
 Acceptance Test

1.10.1 UNIT TESTING

 A unit is the smallest possible testable software component. A unit is traditionally viewed as a function or procedure implemented in a procedural (imperative) programming language. A unit may also be a small-sized COTS component purchased from an outside vendor that is undergoing evaluation by the purchaser, or a simple module retrieved from an in-house reuse library. Units may therefore be:
• procedures and functions
• classes/objects and methods
• procedure-sized reusable components (small-sized COTS components or modules from an in-house reuse library)

 Unit testing is a part of Test-Driven Development (TDD), a methodical strategy that carefully constructs a product via ongoing testing and refinement. It is also the first level of software testing, performed prior to additional techniques such as integration testing.
 To make sure a unit does not depend on any external code or functionality, unit tests are often isolated. Teams should run unit tests often, whether manually or, more frequently, automatically.

How Unit Tests Work:


 Three steps make up a unit test: planning, developing test cases, and running the test itself. Developers or QA experts prepare and examine the unit test in the first stage. They then go on to writing test cases and scripts. The code is tested in the third stage.
 For test-driven development to work, unit tests must first be written that fail. As soon as a test succeeds, developers create code and restructure the application. TDD often produces a code base that is clear and predictable.
 To confirm that the code has no dependencies, each test case is run
separately in an isolated environment. The software developer should
utilize a testing framework to record any failed tests and write criteria
to validate each test case.
 The creation of tests for every line of code would be time-consuming
for developers. The code that could influence how the programme
being created behaves should be the focus of the tests that developers
write.
 Only those properties that are essential to the operation of the unit
being evaluated are included in unit testing.
 This enables developers to make modifications to the source code
without worrying about how they could affect the operation of other
components or the programme as a whole right away.
 Teams may use integration testing to assess bigger programme components after every unit in the programme operates as effectively and error-free as feasible.
 Unit tests may be run manually or automatically by developers. Those using a manual technique may develop an intuitive document outlining each step in the process; however, automated testing is the more popular approach to unit testing.
 Automated methods often create test cases using a testing framework.
In addition to presenting a summary of the test cases, these
frameworks are also configured to flag and report any failed test
cases.
 The auxiliary code developed to support testing of units and components is called a test harness. The harness consists of drivers that call the target code and stubs that represent the modules it calls.
 The development of drivers and stubs requires testing resources. The drivers and stubs must be tested themselves to ensure they are working properly and that they are reusable for subsequent releases of the software.
 Drivers and stubs can be developed at several levels of functionality.
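A minimal illustration of a unit test with a driver and a stub, using Python's built-in unittest framework (the function under test and its price-service dependency are invented for this sketch):

```python
import unittest
from unittest import mock

def total_price(cart, lookup_price):
    """Unit under test: sums item prices. `lookup_price` is an external
    dependency (e.g. a price service) that the test replaces with a stub."""
    return sum(lookup_price(sku) * qty for sku, qty in cart)

class TotalPriceTest(unittest.TestCase):
    """The test method plays the role of the driver; the mock plays the
    role of a stub, so the unit is exercised in isolation."""

    def test_total(self):
        stub = mock.Mock(side_effect=lambda sku: {"apple": 2.0, "pear": 3.0}[sku])
        self.assertEqual(total_price([("apple", 2), ("pear", 1)], stub), 7.0)
        stub.assert_any_call("apple")   # the interface to the stub was exercised
```

Such a test is typically run with `python -m unittest` against the file containing it; because the price lookup is stubbed, the unit can be verified without any real external service.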

Unit testing advantages:

There are many advantages to unit testing, including the following:
 The sooner an issue is discovered, the less often compound mistakes occur.
 Fixing issues as they arise is often less expensive than waiting until they become serious, and debugging procedures are simplified.
 The codebase can be modified easily by developers.
 Code may be transferred to new projects and reused by developers.
Unit testing disadvantages:
While unit testing is integral to any software development and testing strategy, there are some aspects to be aware of. Disadvantages of unit testing include the following:
 Not all bugs will be found during tests.
 Unit testing does not identify integration flaws; it just checks data
sets and their functionality.
 To test one line of code, more lines of test code may need to be
developed, which might require additional time.
 To successfully apply unit testing, developers may need to pick up
new skills, such as how to utilize certain automated software tools.

1.10.2 INTEGRATION TESTING

Integration testing focuses on testing interfaces that are implicit and explicit, internal and external.

Why perform integration testing?


 There are many particular reasons why developers should do
integration testing, in addition to the basic reality that they must test
all software programmes before making them available to the general
public.
 Errors might result from incompatibility between programme
components.
 Every software module must be able to communicate with the database, and requirements are subject to change as a result of customer feedback. Those changed requirements need to be tested even when they were not part of the original test plan.
 Every software developer has their own conceptual framework and
coding logic. Integrity testing guarantees that these diverse elements
work together seamlessly.
 Modules often interface with third-party APIs or tools; thus we
require integration testing to confirm that the data these tools receive is
accurate.
 There may be possible hardware compatibility issues.
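The third-party-API point can be illustrated with Python's `unittest.mock`. The `submit_invoice` module and the `/invoices` endpoint below are hypothetical; the integration test confirms that the data handed to the external tool is accurate:

```python
from unittest import mock

# Hypothetical module that forwards an order total to a third-party API.
def submit_invoice(order, api_client):
    total = sum(item["price"] * item["qty"] for item in order)
    api_client.post("/invoices", {"total": total})  # hand data to the tool
    return total

def test_invoice_integration():
    api = mock.Mock()  # stands in for the real third-party client
    order = [{"price": 5, "qty": 2}, {"price": 3, "qty": 1}]
    total = submit_invoice(order, api)
    # Verify the data the external tool receives is accurate.
    api.post.assert_called_once_with("/invoices", {"total": 13})
    return total

print(test_invoice_integration())  # 13
```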

Top-Down Integration:-
Top-down integration testing involves testing the topmost component's
interface with the other components in the same order as you navigate from
top to bottom, until all the components are covered.
To understand this methodology, assume a new product/software
development where components become available one after another in the
order of the component numbers specified. The integration starts with
testing the interface between Component 1 and Component 2. To complete
the integration testing, all the interfaces mentioned, covering all the
arrows, have to be tested together. The order in which the interfaces are to
be tested is depicted
in the table below. In an incremental product development, where one or
two components get added to the product in each increment, the integration
testing methodology pertains only to those new interfaces that are added.

Component hierarchy (Component 1 at the top): Component 1 calls
Components 2, 3 and 4; Component 2 calls Component 5; Component 3
calls Components 6 and 7; Component 4 calls Component 8.

Order of Testing Interfaces

Steps Interfaces Tested
1 1-2
2 1-3
3 1-4
4 1-2-5
5 1-3-6
6 1-3-6-(3-7)
7 (1-2-5)-(1-3-6-(3-7))
8 1-4-8
9 (1-2-5)-(1-3-6-(3-7))-(1-4-8)
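The navigation order in the table can be derived mechanically from the component tree. Below is a minimal Python sketch (the `TREE` mapping and function name are illustrative, not from the source) that enumerates the parent-child interfaces in top-down, breadth-first order:

```python
# Component tree from the figure above: parent -> children.
TREE = {1: [2, 3, 4], 2: [5], 3: [6, 7], 4: [8]}

def top_down_order(tree, root):
    """List parent-child interfaces breadth-first, from the top down."""
    order, queue = [], [root]
    while queue:
        node = queue.pop(0)
        for child in tree.get(node, []):
            order.append((node, child))
            queue.append(child)
    return order

print(top_down_order(TREE, 1))
# [(1, 2), (1, 3), (1, 4), (2, 5), (3, 6), (3, 7), (4, 8)]
```

The first three pairs correspond to steps 1-3 in the table; the later steps then combine these interfaces into composite paths such as 1-2-5 and 1-3-6-(3-7).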

Bottom-Up Integration:-
Bottom-up integration is just the opposite of top-down integration: the
components for a new product development become available in reverse
order, starting from the bottom. Testing takes place from the bottom of the
control flow upwards, with the missing higher-level components or systems
substituted by drivers. The logic flow is from top to bottom, while the
integration path is from bottom to top. Navigation in bottom-up integration
starts from Component 1, covering all the subsystems, until Component 8 is
reached. The order is listed below. The number of steps in the bottom-up
approach can be optimized into four, by combining steps 2 and 3 and by
combining steps 5-8 in the table below.


Component hierarchy (Component 8 at the top): Component 8 is reached
from Components 5, 6 and 7; Component 5 is built on Component 1;
Component 6 on Components 2 and 3; Component 7 on Component 4.

Order of Interfaces Tested Using Bottom-Up Integration

Steps Interfaces Tested
1 1-5
2 2-6,3-6
3 2-6-(3-6)
4 4-7
5 1-5-8
6 2-6-(3-6)-8
7 4-7-8
8 (1-5-8)-(2-6-(3-6)-8)-(4-7-8)
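A bottom-up driver can be sketched in Python. The components below are hypothetical stand-ins whose numbering loosely follows the figure (Component 1 feeds Component 5, which feeds Component 8): the lowest-level interface is exercised first by `driver_level_1`, then the combined path by `driver_level_2`:

```python
# Hypothetical components; the real top-level callers do not exist yet,
# so drivers play their role.
def component_1(x):      # leaf component: doubles its input
    return x * 2

def component_5(x):      # mid-level component: calls component_1
    return component_1(x) + 1

def component_8(x):      # top-level component: aggregates component_5
    return component_5(x) * 10

# Driver for the lowest interface (1-5 in the table above).
def driver_level_1():
    assert component_5(3) == 7      # component_1(3) is 6, plus 1
    return True

# Driver for the combined path (1-5-8 in the table above).
def driver_level_2():
    assert component_8(3) == 70     # component_5(3) is 7, times 10
    return True

print(driver_level_1() and driver_level_2())  # True
```

As higher-level components become available, each driver is retired and replaced by the real caller.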

Big Bang Integration:-
In big bang integration, all the components are integrated together at once
and the entire system is tested as a whole. Big bang integration is ideal for
a product where the interfaces are stable, with fewer defects.

Advantages of Integration Testing:
 Integration testing ensures that every integrated module functions
correctly.
 Integration testing uncovers interface errors.
 Testers can initiate integration testing once a module is completed,
without waiting for other modules to be done and ready for testing.
 Testers can detect bugs, defects and security issues.
 Integration testing provides testers with a comprehensive analysis
of the whole system, dramatically reducing the likelihood of severe
connectivity issues.

Challenges of Integration Testing:

Unfortunately, integration testing has some difficulties to overcome as
well.
 Questions will arise about how components from two distinct systems
produced by two different suppliers will impact and interact with one
another during testing.
 Integrating new and old systems requires extensive testing and
possible revisions.
 Integration testing requires testing not just the integration links
but the environment itself, adding another level of complexity to
the process.

1.10.3 SYSTEM TESTING

 System testing is a sort of software testing done on a whole,
integrated system to determine whether it complies with the
specified requirements.
 The integrated components that have successfully passed integration
testing are used as input during system testing. System testing's
objective is to find any discrepancies between the integrated
components and the system as a whole.
 System testing finds flaws in the integrated modules as well as in the
whole system. The outcome of system testing is the observed
behavior of a component or system during testing. System testing is
done on the whole system, guided by functional requirement
specifications, system requirement specifications, or both.
 The design, behavior and customer expectations of the system are all
tested during system testing. It is also used to test the system beyond
the bounds specified in the Software Requirements Specification
(SRS).
 In essence, system testing is carried out by a testing team that is
separate from the development team, which helps to objectively
assess the system's quality. It covers both functional and non-
functional testing. System testing is a form of black-box testing. It is
carried out after integration testing and before acceptance testing.

Process for system testing:
The steps for system testing are as follows:
1. Set up the test environment: Establish a test environment for
higher-quality testing.
2. Produce test cases: Produce test cases for the testing process.
3. Produce test data: Produce the test data that will be put to the test.
4. Execute test cases: Test cases are carried out after the test cases
and the test data have been produced.
5. Defect reporting: System flaws are reported as they are discovered.
6. Regression testing: This technique is used to examine the side
effects of the fixes made during the testing procedure.
7. Log defects: In this stage, defects are logged and corrected.
8. Retest: If a test is unsuccessful, it is conducted again after the
defect is corrected.
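The steps above can be condensed into a toy harness. Everything here (the login flow, the in-memory database, the test cases) is a hypothetical sketch to illustrate environment setup, test-case and test-data production, execution and defect reporting:

```python
# Hypothetical system under test: a whole "login" flow.
def system_login_flow(username, password, db):
    user = db.get(username)
    if user and user["password"] == "s3cret" and password == user["password"]:
        return {"status": "ok", "profile": user["profile"]}
    return {"status": "denied"}

def run_system_test():
    # 1. Set up the test environment (an in-memory database here).
    db = {"alice": {"password": "s3cret", "profile": "admin"}}
    # 2-3. Produce test cases and test data.
    cases = [
        (("alice", "s3cret"), "ok"),
        (("alice", "wrong"), "denied"),
        (("bob", "s3cret"), "denied"),
    ]
    defects = []
    # 4. Execute the test cases.
    for args, expected in cases:
        result = system_login_flow(*args, db)
        if result["status"] != expected:
            # 5/7. Report and log the defect for correction and retest.
            defects.append((args, expected, result["status"]))
    return defects

print(run_system_test())  # [] when every case passes
```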

Types of System Testing:

Performance testing: This sort of software testing is used to evaluate
the speed, scalability, stability and dependability of software applications
and products.

Load testing: This sort of software testing is used to ascertain how a
system or software product will behave under high loads.

Stress testing: Stress testing is a sort of software testing carried out to
examine the system's resilience under changing loads.
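A minimal load-test sketch in Python (the `handle_request` operation is a hypothetical stand-in): fire a fixed number of requests and report aggregate timing, which are the raw ingredients of performance and load measurement:

```python
import time

# Hypothetical operation whose behaviour under load we want to observe.
def handle_request(payload):
    return sum(payload)

def load_test(n_requests, payload):
    """Fire n_requests sequentially and report total and average latency."""
    start = time.perf_counter()
    for _ in range(n_requests):
        handle_request(payload)
    total = time.perf_counter() - start
    return {"requests": n_requests, "total_s": total,
            "avg_ms": (total / n_requests) * 1000}

stats = load_test(10_000, list(range(100)))
print(stats["requests"])  # 10000
```

A stress test follows the same shape but ramps `n_requests` (or the payload size) upwards until the system degrades, to observe its resilience under changing loads.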

Advantages of system testing:

 The testers do not need in-depth programming knowledge to perform
this testing.
 It tests the complete product or piece of software, allowing us to
quickly find any faults or flaws that slipped through integration and
unit testing.
 The testing environment resembles a real-world production or
commercial setting.
 It addresses the technical and business needs of customers and uses
various test scripts to verify the system's full operation.
 Following this testing, the product will have practically all potential
flaws or faults fixed, allowing the development team to safely go on to
acceptance testing.

Disadvantages of system testing:

 Because this testing involves checking the complete product or piece
of software, it takes longer than other testing methods.
 Since the testing involves testing the complete piece of software,
the cost will be considerable.
 Without a proper debugging tool, the hidden faults won't be discovered.


PART –B QUESTIONS

1. Enumerate and elucidate the fundamental software testing principles and their
significance in the testing process
2. What is the V-Model of software testing? Show how it differs from
traditional development approaches and explain its role in promoting early
defect detection.
3. Explain the relationship between software failures, errors, and faults (defects)
and how they impact the reliability of software systems.
4. Explain the three stages of testing—Unit Testing, Integration Testing, and
System Testing—and their respective objectives and challenges.
5. Explain the concepts of Black-Box Testing and White-Box Testing,
highlighting their differences and use cases.
6. List out the fundamental reasons for testing software, and how do they
contribute to the software development process?
7. Apply the concept and importance of Integration Testing in the context of the
inventory management and order processing application.
8. Examine in detail about program inspections and outline their purpose,
processes, and key participants in a software development context.
9. As a quality analyst on a complex project, how would you decide whether
to use Black-Box Testing or White-Box Testing, and why?
10. Assume that you are a software project manager leading a project where
quality is paramount. How would you utilize the V-Model to ensure early
defect detection and effective testing?
