
UNIT-V

TEST AUTOMATION AND TESTING THE APPLICATION


Test automation:
Automated testing means using specialized software for tasks that people usually perform
when checking and testing a software product. Nowadays, many software projects use
automation testing from start to end, especially in agile and DevOps methods: the
engineering team runs tests automatically with the help of software tools, which helps
the testing team make the process faster.
Automation testing is the automation of a manual process. It allows repetitive tasks to
be executed without the intervention of a manual tester.
 It is used to automate the testing tasks that are difficult to perform manually.
 Automation tests can be run at any time of the day as they use scripted sequences to
examine the software.
 Automation tests can also enter test data, compare the expected result with the actual
result, and generate detailed test reports.
 The goal of automation tests is to reduce the number of test cases to be executed
manually but not to eliminate manual testing.
 It is possible to record the test suit and replay it when required.
Automated testing uses specialized software to replace manual testing tasks, speeding up
the process and integrating seamlessly with CI/CD pipelines. It allows for continuous code
verification and quicker deployment.
Some of the reasons for using automation testing:
 Quality Assurance: Manual testing is a tedious task that can be boring and at the same
time error-prone. Thus, using automation testing improves the quality of the software
under test as more test coverage can be achieved.
 Error or Bug-free Software: Automation testing is more efficient for detecting bugs in
comparison to manual testing.
 No Human Intervention: Manual testing requires huge manpower in comparison to
automation testing which requires no human intervention and the test cases can be
executed unattended.
 Increased test coverage: Automation testing ensures more test coverage in comparison
to manual testing where it is not possible to achieve 100% test coverage.
 Testing can be done frequently: Automation testing means that the testing can be
done frequently thus improving the overall quality of the software under test.
1. End-to-End tests
End-to-end testing, also known as end-to-end functional testing, is a type of software
testing used to check whether the flow of software from the initial stage to the final
stage behaves as expected. Its purpose is to identify system dependencies and to make
sure that data integrity is maintained between the various system components and systems.
2. Unit tests
Unit testing is automated and is run each time the code is changed to ensure that new code
does not break existing functionality. Unit tests are designed to validate the smallest
possible unit of code, such as a function or a method, and test it in isolation from the rest of
the system.
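As a minimal sketch of this idea (the `apply_discount` function and its rules are invented for illustration, not taken from any real system), a Python `unittest` test validates one function in isolation from the rest of the system:

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    """Unit tests: they exercise only apply_discount, nothing else."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)
```

A suite like this can be re-run automatically on every code change, e.g. with `python -m unittest`, which is exactly how unit tests guard existing functionality.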
3. Integration tests
Integration testing is the process of testing the interface between two software units or
modules. It focuses on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once all
the modules have been unit-tested, integration testing is performed.
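The idea can be sketched with two hypothetical modules (a storage unit and a service unit, both invented for illustration) where the test exercises the interface between them rather than either unit alone:

```python
# Hypothetical modules invented for illustration: a storage unit and a
# service unit that talks to it through a small interface.
class InMemoryUserStore:
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


class UserService:
    def __init__(self, store):
        self.store = store

    def register(self, user_id, name):
        if self.store.get(user_id) is not None:
            raise ValueError("user already exists")
        self.store.save(user_id, name)
        return self.store.get(user_id)


def test_register_persists_user():
    # Integration test: exercises the service *through* the store, so a
    # fault in the interaction between the two units would surface here.
    service = UserService(InMemoryUserStore())
    assert service.register(1, "Asha") == "Asha"

test_register_persists_user()
```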
4. Performance tests
Performance Testing is a type of software testing that ensures software applications
perform properly under their expected workload. It is a testing technique carried out to
determine system performance in terms of sensitivity, reactivity, and stability under a
particular workload.
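A toy sketch of a performance check (the `build_report` workload and the 2-second bound are arbitrary choices for illustration):

```python
import time

def build_report(rows):
    # Hypothetical workload: format each row as a CSV line.
    return "\n".join(f"{r['id']},{r['value']}" for r in rows)

def test_build_report_is_responsive():
    rows = [{"id": i, "value": i * 2} for i in range(10_000)]
    start = time.perf_counter()
    report = build_report(rows)
    elapsed = time.perf_counter() - start
    # Correctness still matters inside a performance test.
    assert report.startswith("0,0")
    # Responsiveness bound; kept generous so the test is not flaky.
    assert elapsed < 2.0, f"report took {elapsed:.3f}s"

test_build_report_is_responsive()
```

Real load and stress testing is usually done with dedicated tools; an inline timing assertion like this only guards against gross responsiveness regressions.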
Automation Testing Types
Below are the different types of automation testing:
 Unit testing: Unit testing is a phase in software testing to test the smallest piece of code
known as a unit that can be logically isolated from the code. It is carried out during the
development of the application.
 Integration testing: Integration testing is a phase in software testing in which individual
software components are combined and tested as a group. It is carried out to check the
compatibility of the component with the specified functional requirements.
 Smoke testing: Smoke testing is a type of software testing that determines whether the
built software is stable or not. It is the preliminary check of the software before its
release in the market.
 Performance testing: Performance testing is a type of software testing that is carried
out to determine how the system performs in terms of stability and responsiveness
under a particular load.
 Regression testing: Regression testing is a type of software testing that confirms that
previously developed software still works fine after the change and that the change has
not adversely affected existing features.
 Security testing: Security testing is a type of software testing that uncovers the risks,
and vulnerabilities in the security mechanism of the software application. It helps an
organization to identify the loopholes in the security mechanism and take corrective
measures to rectify the security gaps.
 Acceptance testing: Acceptance testing is the last phase of software testing that is
performed after the system testing. It helps to determine to what degree the application
meets end users’ approval.
 API testing: API testing is a type of software testing that validates the Application
Programming Interface(API) and checks the functionality, security, and reliability of
the programming interface.
 UI Testing: UI testing is a type of software testing that helps testers ensure that all the
fields, buttons, and other items on the screen function as desired.
Test Automation Frameworks
Some of the most common types of automation frameworks are:
 Linear framework: This is the most basic form of framework and is also known as the
record-and-playback framework. In it, testers create and execute the test scripts for
each test case individually. It is mostly suitable for small teams that don’t have a lot
of test automation experience.
 Modular-Based Framework: This framework organizes each test case into small
individual units known as modules. Each module is independent of the others and covers
different scenarios, but all modules are handled by a single master script. This approach
requires a lot of pre-planning and is best suited for testers who have experience with
test automation.
 Library Architecture Framework: This framework is an expansion of the modular-based
framework with a few differences. Here, the tasks within the test script are grouped into
functions according to a common objective. These functions are stored in a library so
that they can be accessed quickly when needed. This framework allows for greater
flexibility and reusability, but creating scripts takes a lot of time, so testers with
experience in automation testing benefit most from it.
Which Tests to Automate?
Below are some of the parameters to decide which tests to automate:
 Monotonous test: Repeatable and monotonous tests can be automated for further use in
the future.
 A test requiring multiple data sets: Extensive tests that require multiple data sets can
be automated.
 Business critical tests: High-risk business critical test cases can be automated and can
be scheduled to run regularly.
 Determinant test: Determinant test cases where it is easy for the computer to decide
whether the test has failed or not can be automated.
 Tedious test: Test cases that involve repeatedly doing the same action can be automated
so that the computer does the repetitive work, since humans are poor at performing
repetitive tasks efficiently, which increases the chance of error.
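A test requiring multiple data sets, for example, can be sketched as one scripted check driven by a table of inputs (the `is_valid_age` rule is invented for illustration):

```python
def is_valid_age(value):
    """Hypothetical validation rule: age must be an integer from 1 to 120."""
    return isinstance(value, int) and 1 <= value <= 120

# One scripted check, many data sets: (input, expected result) pairs.
TEST_DATA = [
    (25, True),      # typical value
    (1, True),       # lower boundary
    (120, True),     # upper boundary
    (0, False),      # below range
    (121, False),    # above range
    ("25", False),   # wrong type
]

def run_data_driven_tests():
    failures = [(inp, exp) for inp, exp in TEST_DATA if is_valid_age(inp) != exp]
    assert not failures, f"failing cases: {failures}"
    return len(TEST_DATA)

run_data_driven_tests()
```

Because the data table is separate from the test logic, extending coverage is a one-line change, which is what makes such tests cheap to automate.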
Automation Testing Process
1. Test Tool Selection: There will be some criteria for the selection of the tool. The
main criteria include: do we have skilled resources to allocate for automation tasks,
what are the budget constraints, and does the tool satisfy our needs?
2. Define Scope of Automation: This includes a few basic points: the framework should
support the automation scripts, maintenance should be low, return on investment should be
high, and there should not be many complex test cases.
3. Planning, Design, and Development: For this, we need to install the particular
frameworks or libraries, such as NUnit, JUnit, QUnit, or other required automation tools,
and start designing and developing the test cases.
4. Test Execution: The final execution of test cases takes place in this phase, and the
tooling depends on the language: for .NET we’ll use NUnit, for Java JUnit, and for
JavaScript QUnit or Jasmine.
5. Maintenance: Reports generated after the test runs should be documented so they can be
referred to in future iterations.
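As a rough sketch of the execution and maintenance phases, Python's built-in `unittest` runner can execute a suite programmatically and capture output for documentation (the single test here is a placeholder):

```python
import io
import unittest

class LoginChecks(unittest.TestCase):
    """Placeholder suite standing in for a real project's test cases."""

    def test_password_minimum_length(self):
        self.assertGreaterEqual(len("s3cret!!"), 8)

def run_suite():
    # Execution phase: load and run the suite programmatically.
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginChecks)
    stream = io.StringIO()
    result = unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)
    # Maintenance phase: the captured stream is the report that can be
    # saved and referred to in the next iteration.
    return {"run": result.testsRun, "failed": len(result.failures)}

summary = run_suite()
```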
Criteria to Select Automation Tool
Following are some of the criteria for selecting the automation tool:
 Ease of use: Some tools have a steep learning curve, they may require users to learn a
completely new scripting language to create test cases and some may require users to
maintain a costly and large test infrastructure to run the test cases.
 Support for multiple browsers: Cross-browser testing is vital for acceptance testing.
Users must check how easy it is to run the tests on different browsers that the
application supports.
 Flexibility: No single tool or framework can support all types of testing, so it is
advisable to carefully observe what each tool offers and then decide.
 Ease of analysis: Not all tools provide the same sort of analysis. Some tools have a nice
dashboard feature that shows all the statistics of the test like which test failed and which
test passed. On the other hand, there can be some tools that will first request users to
generate and download the test analysis report thus, not very user-friendly. It depends
entirely on the tester, project requirement, and budget to decide which tool to use.
 Cost of tool: Some tools are free and some are commercial tools but many other factors
need to be considered before deciding whether to use free or paid tools. If a tool takes a
lot of time to develop test cases and it is a business-critical process that is at stake then
it is better to use a paid tool that can generate test cases easily and at a faster rate.
 Availability of support: Free tools mostly provide community support; commercial tools,
on the other hand, provide customer support and training material such as tutorials,
videos, etc. Thus, it is very important to keep in mind the complexity of the tests
before selecting the appropriate tool.

Software Testability
Software testability is measured with respect to the efficiency and effectiveness of testing.
Efficient software architecture is very important for software testability. Software testing is
a time-consuming, necessary activity in the software development lifecycle, and making
this activity easier is one of the important tasks for software companies as it helps to reduce
costs and increase the probability of finding bugs. There are certain metrics that could be
used to measure testability in most of its aspects. Sometimes, testability is used to mean
how adequately a particular set of tests will cover the product.
 Testability helps to determine the efforts required to execute test activities.
 The lower the testability, the larger the effort required for testing, and vice versa.

Factors of Software Testability

Below are some of the factors that characterize software testability:


1. Operability: "The better it works, the more efficiently it can be tested."
 The system has few bugs (bugs add analysis and reporting overhead to the test
process).
 No bugs block the execution of tests.
 The product evolves in functional stages (allows simultaneous development testing).
2. Observability: "What you see is what you test."
 Distinct output is generated for each input.
 System states and variables are visible or queryable during execution.
 Past system states and variables are visible or queryable (for example, transaction
logs).
 All factors affecting the output are visible.
 Incorrect output is easily identified.
 Internal errors are automatically detected through self-testing mechanisms.
 Internal errors are automatically reported.
 Source code is accessible.
3. Controllability: "The better we can control the software, the more the testing can be
automated and optimized."
 All possible outputs can be generated through some combination of inputs.
 All code is executable through some combination of inputs.
 Software and hardware states and variables can be controlled directly by the test
engineer.
 Input and output formats are consistent and structured.
4. Decomposability: "By controlling the scope of testing, we can more quickly isolate
problems and perform smarter retesting."
 The software system is built from independent modules.
 Software modules can be tested independently.
5. Simplicity: "The less there is to test, the more quickly we can test it."
 Functional simplicity (e.g., the feature set is the minimum necessary to meet
requirements).
 Structural simplicity (e.g., architecture is modularized to limit the propagation of
faults).
 Code simplicity (e.g., a coding standard is adopted for ease of inspection and
maintenance).
6. Stability: "The fewer the changes, the fewer the disruptions to testing."
 Changes to the software are infrequent.
 Changes to the software are controlled.
 Changes to the software do not invalidate existing tests.
 The software recovers well from failures.
7. Understandability: "The more information we have, the smarter we will test."
 The design is well understood.
 Dependencies between internal, external, and shared components are well understood.
 Changes to the design are communicated.
 Technical documentation is instantly accessible.
 Technical documentation is well organized.
 Technical documentation is specific and detailed.
 Technical documentation is accurate.
8. Availability: "The more accessible the objects are, the easier it is to design test
cases." This is all about the accessibility of objects or entities for performing the
testing, including bugs, source code, etc.
9. Testing Tools: Testing tools that are easy to use reduce the staffing needed, and less
training will be required.
10. Documentation: Specifications and requirements documents should be according to the
client's needs and fully featured.

How to Measure Software Testability?

Software testability evaluates how easy it is to test the software and how likely software
testing will find the defects in the application. Software testability assessment can be
accomplished through software metrics assessment:
 Depth of Inheritance Tree.
 Fan Out (FOUT).
 Lack Of Cohesion Of Methods (LCOM).
 Lines Of Code per Class (LOCC).
 Response For Class (RFC).
 Weighted Methods per Class (WMC).
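As a rough illustration of the last two metrics, Python's `ast` module can count methods per class (an unweighted stand-in for WMC) and lines of code per class (LOCC); the `Cart` class below is a made-up example:

```python
import ast

SOURCE = '''
class Cart:
    def add(self, item):
        self.items.append(item)

    def total(self):
        return sum(i.price for i in self.items)
'''

def class_metrics(source):
    """Sketch of two testability metrics: method count per class
    (an unweighted stand-in for WMC) and lines of code per class (LOCC)."""
    tree = ast.parse(source)
    metrics = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body if isinstance(n, ast.FunctionDef)]
            locc = node.end_lineno - node.lineno + 1
            metrics[node.name] = {"methods": len(methods), "locc": locc}
    return metrics

metrics = class_metrics(SOURCE)
```

Classes with unusually high method counts or line counts are candidates for extra testing effort, which is the planning signal these metrics exist to provide.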
Before launch, it is crucial to determine which components may be more challenging to
test. Testability assessment matters most at the start of the testing phase, as it
affects the efficiency of the planning process.

Requirements of Software Testability

The attributes suggested by Bach can be used by a software engineer to develop a software
configuration (i.e., programs, data, and documents) that is amenable to testing. Below are
some of the capabilities that are associated with software testability requirements:
 Module capabilities: Software is developed in modules and each module will be tested
separately. Test cases will be designed for each module and then the interaction
between the modules will be tested.
 Testing support capabilities: Entry points for test drivers and stubs must be
maintained for each test interface, since during incremental testing the reliability and
accuracy of the drivers and stubs should be given high priority and importance.
 Defects disclosure capabilities: The system errors should be less so that they do not
block the software testing. The requirement document should also pass the following
parameters to be testable:
o The requirement must be accurate, correct, concise, and complete.
o The requirement should be unambiguous, i.e., it should have one meaning for
all staff members.
o A requirement should not contradict other requirements.
o Priority-based ranking of requirements should be implemented.
o A requirement must be domain-based so that the changing requirements
won't be a challenge to implement.
 Observation capabilities: Observing the software to monitor the inputs, their outcomes,
and the factors influencing them.

Types of Software Testability

Below are the different types of software testability:


1. Object-oriented programs testability: Testing object-oriented software is done at
three levels: unit, integration, and system testing. Unit testing is the most accessible
level at which to improve software testability, as testability examination can be applied
earlier in the development life cycle.
2. Domain-based testability: Software products developed with the concept of
domain-driven development are easy to test, and changes can also be made easily.
Domain-testable software is modifiable to make it observable and controllable.
3. Testability based on modules: The module-based approach for software testability
consists of three stages:
 Normalize program: In this stage, the program needs to be normalized using some
semantic and systematic tools to make it more reasonable for testability measures.
 Recognize testable components: In this stage, the testable components are recognized
based on the demonstrated normalized data flow.
 Measure program testability: Program testability is measured based on the information
stream testing criteria.

Improving Software Testability

Below are some of the parameters that can be implemented in practice to improve software
testability:
 Appropriate test environment: If the test environment corresponds to the production
environment then testing will be more accurate and easier.
 Adding tools for testers: Building special instruments for manual testing helps to make
the process easier and simpler.
 Consistent element naming: If the developers can ensure that they are naming the
elements correctly, consistently, logically, and uniquely, then it makes testing more
convenient, although this approach is difficult in larger projects with multiple
developers and engineers.
 Improve observability: Improving observability provides unique outputs for unique
inputs for the Software Under Test.
 Adding assertions: Adding assertions to the units in the software code helps to make the
software more testable and find more defects.
 Manipulating coupling: Manipulating coupling so that dependencies are domain-dependent
helps increase the testability of the code.
 Internal logging: If the software accurately logs its internal state, then manual
testing can be streamlined, as the logs make it possible to check what is happening
during any test.
 Consistent UI design: Consistent UI design also helps to improve software testability as
the testers can easily comprehend how the user interface principles work.
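Two of these ideas, adding assertions and internal logging, can be sketched on a hypothetical `transfer` operation (the names and business rules are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("transfer")

def transfer(balance, amount):
    """Hypothetical operation instrumented for testability: assertions
    detect internal errors; logging exposes internal state to testers."""
    assert amount > 0, "transfer amount must be positive"
    log.info("transfer requested: balance=%s amount=%s", balance, amount)
    if amount > balance:
        log.warning("rejected: insufficient funds")
        return balance
    new_balance = balance - amount
    assert new_balance >= 0, "balance must never go negative"
    log.info("completed: new_balance=%s", new_balance)
    return new_balance
```

The assertions turn silent internal errors into immediate failures, and the log lines let a tester observe what happened inside the function without a debugger.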

Benefits of software testability

 Minimizes testers' efforts: Improved testability minimizes the effort required to
perform testing, as it makes it easier to estimate how difficult software flaws will be
to find.
 Determines the volume of automated testing: Software testability determines the
volume of automated testing based on the software product's controllability.
 Early detection of bugs: Software testability helps in the early and effortless detection
of bugs and thus saves time, cost, and effort required in the software development
process.
Components of test cases:
Software testing is the process of validating and verifying the working of a
software application. It checks that the software meets the
requirements without errors, bugs, or other issues and provides the expected output to
the user.
The software testing process is not limited to finding faults in the present software but also
includes measures to improve the software in various aspects such as efficiency, usability,
and accuracy. To test software, software testing provides a particular format called a Test
Case.
A test case is a set of steps or actions performed on a system to check whether
it meets the software requirements and works as expected. It helps verify that the
system functions correctly under different conditions and meets expectations. Test
cases are essential for identifying issues and ensuring the software performs as
intended.
Parameters of a Test Case
Here are the important parameters of a test case, which help the development process
of software:
 Module Name: Subject or title that defines the functionality of the test.
 Test Case Id: A unique identifier assigned to every single condition in a test case.
 Tester Name: The name of the person who would be carrying out the test.
 Test scenario: The test scenario provides a brief description to the tester, as in providing
a small overview to know about what needs to be performed and the small features, and
components of the test.
 Test Case Description: The condition required to be checked for a given software,
e.g., check whether number-only validation works for an age input box.
 Test Steps: Steps to be performed for the checking of the condition.
 Prerequisite: The conditions required to be fulfilled before the start of the test process.
 Test Priority: As the name suggests, this gives priority to the test cases: which must
be performed first, which are more important, and which can be performed later.
 Test Data: The inputs to be taken while checking for the conditions.
 Test Expected Result: The output which should be expected at the end of the test.
 Test parameters: Parameters assigned to a particular test case.
 Actual Result: The output that is displayed at the end.
 Environment Information: The environment in which the test is being performed, such
as the operating system, security information, the software name, software version, etc.
 Status: The status of tests such as pass, fail, NA, etc.
 Comments: Remarks on the test regarding the test for the betterment of the software.
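The parameters above map naturally onto a record type; this sketch uses a Python dataclass with illustrative field names (only a subset of the parameters is shown):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A subset of the test case parameters, with illustrative names."""
    test_case_id: str
    module_name: str
    description: str
    prerequisite: str
    test_steps: list
    test_data: dict
    expected_result: str
    actual_result: str = ""
    status: str = "NA"        # Pass, Fail, or NA before execution

tc = TestCase(
    test_case_id="TC_LOGIN_01",
    module_name="Login",
    description="Check that the age input box accepts only numbers",
    prerequisite="User is on the registration page",
    test_steps=["Open registration page", "Type 'abc' in the age box", "Submit"],
    test_data={"age": "abc"},
    expected_result="Validation error is shown",
)

def record_result(case, actual):
    """Fill in the actual result and derive the status from it."""
    case.actual_result = actual
    case.status = "Pass" if actual == case.expected_result else "Fail"
    return case.status

status = record_result(tc, "Validation error is shown")
```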
When to Write Test Cases?
Test cases are written in different situations:
 Before development: Test cases could be written before the actual coding as that would
help to identify the requirement of the product/software and carry out the test later when
the product/software gets developed.
 After development: Test cases are also written directly after developing a
product/software or feature, but before its launch, to test the working of that
particular feature.
 During development: Test cases are sometimes written during development, in parallel,
so whenever a part of the module/software gets developed, it gets tested as well.
In all these cases, test cases help guide further development and make sure that all the
needed requirements are met.
Why Write Test Cases?
Test cases are one of the most important aspects of software engineering, as they define
how the testing would be carried out. Test cases are carried out for a very simple reason, to
check if the software works or not. There are many advantages of writing test cases:
 To check whether the software meets customer expectations: Test cases help to
check if a particular module/software is meeting the specified requirement or not.
 To check software consistency with conditions: Test cases determine if a particular
module/software works with a given set of conditions.
 Narrow down software updates: Test cases help to narrow down the software needs
and required updates.
 Better test coverage: Test cases help to make sure that all possible scenarios are
covered and documented.
 For consistency in test execution: Test cases help to maintain consistency in test
execution. A well-documented test case helps the tester to just have a look at the test
case and start testing the application.
 Helpful during maintenance: Test cases are detailed which makes them helpful during
the maintenance phase.
Test Case Template
A Test Case Template is a simple, organized format used in software testing to create test
cases. It helps ensure that all tests are written clearly and consistently.
Let’s look at a basic test case template for the login functionality.
 The Test case template contains the header section which has a set of parameters that
provide information about the test case such as the tester’s name, test case description,
Prerequisite, etc.
 The body section contains the actual test case content, such as test ID, test steps, test
input, expected result, etc.
Best Practice for Writing Test Case
There are certain practices that one could follow while writing the test cases that would be
considered beneficial.
 Simple and clear: Test cases need to be very concise, clear, and transparent. They
should be easy and simple to understand not only for oneself but for others as well.
 Unique test cases: While writing the test cases, it’s necessary to make sure that they
aren’t being written over and over again and that each case is different from the others.
 Zero assumptions: Test cases should not contain assumed data and should not refer to
features or modules that don’t exist.
 Traceability: Test cases should be traceable for future reference, so while writing,
it’s important to keep that in mind.
 Different input data: While writing test cases, all types of data must be taken into
consideration.
 Strong module name: The module name should be self-explanatory while writing the
test case.
 Minimal Description: The description of a test case should be small, one or two lines
are normally considered good practice, but it should give the basic overview properly.
 Maximum conditions: All kinds of conditions should be taken into consideration while
writing a test, increasing the effectiveness.
 Meeting requirements: While writing the test case the client/customer/end-user
requirements must be met.
 Repeatable results: The test case must be written in such a way that it provides the
same result every time it is run.
 Different techniques: Sometimes testing all conditions might not be possible, but
using different testing techniques with different test cases can help to check every
aspect of the software.
 Create test cases with the end user’s perspective: Create test cases by keeping end-
user in mind and the test cases must meet customer requirements.
 Use unique Test Case ID: It is considered a good practice to use a unique Test Case ID
for the test cases following a naming convention for better understanding.
 Add proper preconditions and postconditions: Preconditions and postconditions for
the test cases must be mentioned properly and clearly.
 Test cases should be reusable: There are times when the developer updates the code,
then the testers need to update the test cases to meet the changing requirements.
 Specify the exact expected outcome: Include the exact expected result, which tells us
what the result of a particular test step will be.
Test Case Management Tools
Test case management tools are important in managing test cases efficiently, making the
testing process faster and less time-consuming compared to traditional methods. These
tools provide features such as advanced dashboards, bug tracking, progress
tracking, custom templates, and integration with other testing tools. Test case
management tools help testers organize, manage, and execute their tests more effectively.
Here are some of the most popular test case management tools:
1. TestLink: TestLink is a widely used test management tool that offers easy integration
with bug tracking systems and provides a user-friendly interface to manage test cases,
test plans, and test runs.
2. X-ray: X-ray is a test management tool for Jira, designed to manage both manual and
automated tests. It integrates seamlessly with Jira, providing robust reporting,
traceability, and tracking for test cases.
3. TestRail: TestRail helps manage test cases, test plans, and test runs. It offers real-time
insights into testing progress and enables better collaboration between QA teams,
helping them streamline their test case management processes.
4. PractiTest Ltd.: PractiTest is a test case management tool that focuses on organizing test
cases, creating test plans, and offering detailed reports. It allows for seamless
integration with popular bug tracking systems and other testing tools.
5. TestCollab: TestCollab is a tool for managing test cases, test plans, and testing progress.
It provides strong reporting and analytics features to give teams insights into their
testing efforts.
6. Kualitee: Kualitee is a comprehensive test case management platform that supports
manual and automated testing. It provides features for test case creation, execution, and
tracking, along with strong integration with bug tracking tools.
7. Qase: Qase is an easy-to-use test management tool that supports manual test execution
and test case management. It integrates with various CI/CD tools and offers powerful
reporting and analytics features.
8. Testiny: Testiny is a lightweight test management tool that provides an intuitive
interface for managing test cases. It allows easy tracking of test executions and
facilitates collaboration within teams.
9. TestMonitor: TestMonitor is a test case management platform designed to enhance
collaboration among teams. It offers comprehensive features like test case management,
test plan creation, bug tracking, and detailed reporting.
10. SpiraTest: SpiraTest is a test management tool that allows users to manage test cases,
bugs, and requirements in one integrated platform. It provides full traceability and
powerful reporting for effective test case management.
Formal and Informal Test Case:
 Formal Test Cases: Formal test cases are test cases that follow the basic test case
format. It contains the test case parameters such as conditions, ID, Module name, etc.
Formal Test cases have set input data and expected results, they are performed as per
the given order of steps.
 Informal Test Cases: Informal test cases are test cases that don’t follow the basic
test case format. Here the test cases are written in real time as the tests are
performed, rather than in advance, and the inputs and expected results are not
predefined.
Types of Test Cases
Here are some of the Types of Test Cases:
 Functionality Test Case: The functionality test case determines whether the interface
of the software works smoothly with the rest of the system and its users. Black-box
testing is used for this test case, as everything is checked externally rather than
internally.
 Unit Test Case: A unit test case is where an individual part or a single unit of the
software is tested. Each unit is tested separately, and a different test case is created
for each unit.
 User Interface Test Case: The UI test, or user interface test, is when every component
of the UI that the user comes in contact with is tested. It checks whether the UI
requirements specified by the user are fulfilled.
 Integration Test Case: Integration testing is when all the units of the software are
combined and then they are tested. It is to check that each component and its units work
together without any issues.
 Performance Test Case: The performance test case helps to determine response time
as well as the overall effectiveness of the system/software. It’s to see if the application
will handle real-world expectations.
 Database Test Case: Also known as back-end testing or data testing, it checks that
everything works fine concerning the database. Test cases for tables, schemas, triggers,
etc. are included.
 Security Test Case: The security test case helps to determine that the application
restricts actions as well as permissions wherever necessary. Encryption and
authentication are considered as main objectives of the security test case. The security
test case is done to protect and safeguard the data of the software.
 Usability Test Case: Also known as a user experience test case, it checks how
user-friendly or approachable the software is. Usability test cases are designed by the
user experience team and performed by the testing team.
