
Module 4: Introduction to Testing

Guiding Principles of Testing – Composition of a Testing Team – Essential Skills of a Tester – STLC – Types of Testing – Evaluating the Quality of Test Cases – Techniques for Reducing the Number of Test Cases – Requirements for Effective Testing – Test Oracle – Economics of Testing – Handling Defects
(Textbook 1, Chapter 4, Sections 4.1–4.4)
Types of Testing – Functional Testing – Non-Functional Testing – Acceptance Testing – Regression Testing
• Software testing is a task conducted to assess the quality of software and enhance its
performance. This process ensures that the software behaves as expected according to user
requirements.
• The software development life cycle (SDLC) outlines various stages such as Analysis,
Requirements, Design, Development, Testing, Deployment, and Maintenance to guide the
software development process.
• The primary objective of SDLC is to deliver error-free software that meets user expectations
within specified timeframes.
• Software testing is an integral part of SDLC, serving to identify issues and ensure the
completeness, correctness, and quality of the developed software.
• The ultimate goal is to create efficient software and maintain high-quality assurance
throughout the product's lifecycle
• Defect – a flaw, error, or anomaly in a program that causes it to behave unexpectedly (incorrect results); a mismatch between the expected and actual result.
• Error – a mistake in code or design (found in unit testing).
• Bug – an error in the software which causes it to behave unexpectedly, e.g. system crashes or insufficient output (found during the testing phase).
• Failure – an error found by an end user.
Testing – A quality control activity for defect correction and for ensuring delivery of defect
free software.
For delivering high quality software, the testers must be equipped with the knowledge, skills
and tools to perform the job of a tester.

Guiding principles of Testing –

1. Testing should be done before the customer finds the defects.
2. Test cases should be designed to have a high probability of detecting a defect.
3. Testing should begin early and continue throughout the software life cycle.
4. Test cases have to be revised to find new bugs.
5. Getting maximum benefit from testing tools requires a well-planned automation strategy.
6. Testing requires talented, skilled and committed people working in teams.
Composition of a Testing Team –
• Test Manager
• Test Leaders
• Senior Test Engineers
• Testers

Role of a Tester –
- Design and document test cases
- Execute test cases
- Record test case results
- Document and track defects
- Perform test coverage analysis
Role of a Test Manager –
- Leads the team and is responsible for:
- Test plan preparation, estimation, schedule and training
- Test resource management (h/w & s/w, human resources)
- Defect analysis and identifying preventive actions
- Requirement analysis and identifying major scenarios
- Review of test design and identification of major scenarios
- Sharing reports with management
- Risk analysis and risk escalation
Essential Skills of a Tester –

- Keen sense of observation (an eye for detail to detect defects)
- Detective skills – a nose for coding defects and the ability to check whether documentation is up to date
- Destructive creativity – skills to crash the s/w and find flaws in its functionality
- Understanding of the product's business domain – understand the workflow of the product to test real-life scenarios
- Customer-oriented perspective – keeps the expectations of customers and end users in mind
- Organised, patient and flexible – organised enough to record all test cases and the results of execution; patient enough to plan and re-run significant test cases
- Perfectionist – the tester should find as many defects as possible to ensure the quality of the deliverables
- Analytical abilities – functional testing may require constructing a system model and testing all possible scenarios; building a model of a system requires analytical skills
Types of Testing –
Testing is applied at:
- Unit / module level
- Component level
- Subsystem level
- System level
Types of Testing -

1.Unit Testing
2.Integration Testing
3.System Testing
4.Functional Testing
5.Acceptance Testing
6.Smoke Testing
7.Regression Testing
8.Performance Testing
9.Security Testing
10.User Acceptance Testing

1. UNIT TESTING

Unit testing verifies individual units or modules of the software in isolation.

The main advantages of unit testing include:

1. It helps to identify bugs early in the development process, before they become more difficult and expensive to fix.
2. It helps to ensure that changes to the code do not introduce new bugs.
3. It makes the code more modular and easier to understand and maintain.
4. It helps to improve the overall quality and reliability of the software.

White Box Testing (glass box / structural testing) – Testing the internal structures of the code at the unit / module / component level to uncover logical defects; this is done by programmers.
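
As a concrete illustration (not from the textbook), here is a minimal unit test sketch using Python's built-in unittest module; the function calculate_discount is a hypothetical unit under test.

```python
import unittest

# Hypothetical unit under test: applies a percentage discount to a price.
def calculate_discount(price, percent):
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid input")
    return round(price * (1 - percent / 100), 2)

class TestCalculateDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(calculate_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(calculate_discount(99.99, 0), 99.99)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-5.0, 10)

if __name__ == "__main__":
    unittest.main()
```

Running the file executes the three tests; a failing assertion surfaces the defect at the unit level, before the code is integrated with other modules.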
2. INTEGRATION TESTING

Integration testing verifies that individual units of the software work correctly when they are combined. It can be performed in different ways, such as:

1. Top-down integration testing: starts with the highest-level modules and integrates them with lower-level modules.
2. Bottom-up integration testing: starts with the lowest-level modules and integrates them with higher-level modules.
3. Big-bang integration testing: combines all the modules and integrates them all at once.
4. Incremental integration testing: integrates the modules in small groups, testing each group as it is added.

The main advantages of integration testing include:

1. It helps to identify and resolve issues that may arise when different units of the software are combined.
2. It helps to ensure that the different units of the software work together as intended.
3. It helps to improve the overall reliability and stability of the software.

It is important to keep in mind that integration testing is essential for complex systems where different components are integrated together. As with unit testing, integration testing is only one aspect of software testing and should be used in combination with other types of testing, such as unit testing, functional testing, and acceptance testing, to ensure that the software meets the needs of its users. (A small sketch of top-down integration using a stub follows below.)
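
To make the top-down approach concrete, below is a hedged Python sketch; the module names OrderService and PaymentGateway are hypothetical. The lower-level payment module is replaced by a stub so that the higher-level module can be integration-tested first.

```python
# Hypothetical modules: OrderService (high-level) depends on PaymentGateway (low-level).
class PaymentGateway:
    def charge(self, amount):
        raise NotImplementedError("real gateway not yet integrated")

class PaymentGatewayStub(PaymentGateway):
    """Stub standing in for the lower-level module during top-down integration."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

# Integration test: exercise OrderService together with the stubbed dependency.
def test_place_order_with_stubbed_gateway():
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(49.99) is True

if __name__ == "__main__":
    test_place_order_with_stubbed_gateway()
    print("integration test with stub passed")
```

Once the real PaymentGateway is available, the stub is swapped out and the same test is re-run against the actual integration.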
System Testing – Also referred as Black box testing

- Testing all integrated software components that have successfully passed integration testing.
- Purpose to detect defects within the system as a whole and cover both functional and non-functional testing

- Functional Testing – Tests the functionality of the software as required by the customer and as specified in the requirement specification document (needs business and s/w knowledge).

- Non-functional testing – Trying to find discrepancies between the non functional characteristics of the system
and the specifications (eg – load / performance / security / usability)

Acceptance testing – Testing done before the customer accepts the s/w deliverable.

Re-testing / Confirmation testing – When the s/w fails with a defect and the developer fixes the defect, a new version of the s/w is released by the developers. Testers re-test the product to confirm that the defect has been fixed.

Regression Testing – Whenever a developer fixes a defect, the fix may introduce new defects in other parts of the s/w. Such unexpected side effects of fixes are detected through regression testing.
Positive testing – Tries to prove that a given product does what it is supposed to do; it is tested with valid input data.

- When a TC checks a valid requirement of the product and the result is compared with the expected output, it is called a positive test case.

- Checks results only for valid input data.

Negative testing – Done to ascertain whether the product fails or becomes unstable when an unexpected input is given.
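
A minimal sketch of positive vs. negative testing, assuming a hypothetical age validator (not from the source):

```python
# Hypothetical validator under test: accepts integer ages 1..120.
def is_valid_age(value):
    return isinstance(value, int) and 1 <= value <= 120

# Positive testing: valid inputs should be accepted.
for age in (1, 35, 120):
    assert is_valid_age(age)

# Negative testing: invalid or unexpected inputs should be rejected, not crash.
for age in (0, -4, 121, "forty", None):
    assert not is_valid_age(age)

print("positive and negative cases passed")
```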

Error Guessing – Based on experience and intuition about probable types of errors; TCs are written to expose these errors.

- No clear procedure.

e.g. (turned into an executable sketch below) –
- the input list is empty
- the input contains only one data value
- all inputs have the same value
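
The sketch below turns those guessed error cases into executable checks; the function largest is a hypothetical unit under test.

```python
# Hypothetical function under test: returns the largest value in a list.
def largest(values):
    if not values:
        raise ValueError("empty input")
    return max(values)

# Error-guessing cases from the list above:
# empty input, a single value, and all values equal.
try:
    largest([])
    raise AssertionError("expected ValueError for empty input")
except ValueError:
    pass

assert largest([7]) == 7          # single value
assert largest([3, 3, 3]) == 3    # all values equal

print("error-guessing cases passed")
```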

Exploratory Testing – This approach is useful when there is no specification (or the specifications are poorly defined) and the time for testing is limited.

Sanity Testing – Done prior to a more exhaustive round of testing, to check that the s/w produced doesn't have obvious defects.

- Surface-level testing to confirm that all functions and commands work before the s/w proceeds to exhaustive testing.
(Slide: ER diagram for an e-commerce website)
Database testing – To check for any errors in database

Ignoring DB testing results in –

- Data corruption (Poor design)


- Redundant data (duplicate records)
- Inconsistent data (data added to the DB by various apps)
- Redundant validation (validating business rules in the DB as well as in the client)

Before initiating –

- Review ER Diagram
- Review DB designs
- Review tables, views / Stored procedures

Testing should include –

- Front end / GUI


- Business logic layer
- Database
DB Testing – Steps (a small sketch follows the list)
- Create SQL queries for data manipulation
- Observe data flow and transformation across the different tables (logical organization, DB performance)
- Check that objects like views, triggers, stored procedures, functions and jobs work correctly
- Check whether the DB implements constraints so that only correct data can be stored in it
- Check data security
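
As an illustration of the constraint check in the steps above, here is a minimal sketch using Python's standard sqlite3 module; the orders table and its CHECK constraint are hypothetical, chosen only to show the idea.

```python
import sqlite3

# Verify that a NOT NULL / CHECK constraint really rejects bad data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    " id INTEGER PRIMARY KEY,"
    " amount REAL NOT NULL CHECK (amount > 0))"
)

# A valid row should be stored.
conn.execute("INSERT INTO orders (amount) VALUES (?)", (49.99,))
assert conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0] == 1

# An invalid row must be rejected by the database, not silently accepted.
try:
    conn.execute("INSERT INTO orders (amount) VALUES (?)", (-1.0,))
    raise AssertionError("constraint should have rejected a negative amount")
except sqlite3.IntegrityError:
    pass

conn.close()
print("constraint checks passed")
```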

Challenges in DB Testing –

- The tester should know the application as a whole and which DB is used by it.
- Knowledge of SQL and DB management tools is required.
- Data security is important.

Risk based testing – Method to reduce the time for exhaustive testing of all portions of s/w.
A software testing strategy that identifies & prioritizes areas of product that are most likely to fail (identify risk,
analyze risk, prioritize testing, address risks)

Factors considered when deciding Risks-

- functions & attributes critical (for success of product)


- visibility of problem to customer
- how often is that function / attribute used
- can this function be avoided
Evaluating the quality of test cases
- The quality of TCs is evaluated to make sure that they are able to detect defects in the s/w under test.

Two methods used are:
- Error seeding
- Mutation testing

Error seeding
- It is the process of consciously adding errors to the source code to evaluate the quality of the system test phase.
- It is used to compare the real errors found vs the seeded errors found.
- For example, in x = x + 5; the + would be replaced with -, *, or /; or a relational operator would be replaced, e.g. a > b.
- A number of errors are deliberately injected (sown) into the code and the tests are run.
- At the end of testing, the number of seeded defects detected is counted to see whether the test cases have detected the same proportion of defects as were sown (a small estimate sketch follows).
- It takes less time and is more economical.
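
One common way to use the real-vs-seeded counts (an assumption on my part; the text only says the counts are compared) is to estimate how many real defects remain. A small Python sketch:

```python
# Assumption: if testing finds a similar fraction of seeded and real defects,
# the remaining real defects can be estimated from the seed detection ratio.
def estimate_remaining_real_defects(seeded_total, seeded_found, real_found):
    if seeded_found == 0:
        raise ValueError("no seeded defects found; estimate undefined")
    detection_ratio = seeded_found / seeded_total          # fraction of seeds caught
    estimated_real_total = real_found / detection_ratio    # scale up real defects found
    return estimated_real_total - real_found               # defects likely still latent

# Example: 20 seeded, 16 found, plus 40 real defects found
# -> detection ratio 0.8 -> ~50 real defects in total -> ~10 still remaining.
print(estimate_remaining_real_defects(20, 16, 40))  # 10.0
```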
Requirements for effective Testing –

- Unambiguous and verifiable requirements
(Ambiguity in the requirement document makes it difficult for the tester to write precise TCs.)

- Effective test management
(The test manager must plan the project with an accurate estimation of test effort, and monitor and control progress within the scheduled time.)

- Configuration management
(Lack of configuration control causes problems. Testware, comprising the requirement document, test plans, test cases, test results and the code to be tested, is kept under s/w configuration management.)

- Coordination between software development and testing phases
(Use of the V-Model helps in better coordination between the s/w development and testing phases.)
Factors essential for quality s/w – (the slide also shows the V-Model diagram)
1. Effective configuration management
2. Well defined process
3. Appropriate tools and resources
4. Unambiguous requirements
5. Allocation of time as planned
6. Good test strategy
7. Co-ordination between development and
testing teams
8. Trained manpower
9. Effective test management
10. Management support
Test Oracle –

- Testing involves comparing the actual results vs the expected results for a test case.
- To determine this, a pass or fail status is marked against each TC.
- The expected results are absent if the s/w is newly developed, which makes it difficult to judge whether a TC has passed or failed.
- Such a situation needs a test oracle: a mechanism to produce the predicted outcomes to compare with the actual outcomes of the Software Under Test (SUT).

A test oracle may be a prior version of an existing system, a simulated system, a legacy system, a competitor's system, calculated values or test specifications, but the system under test itself cannot be used as the test oracle.
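
A minimal sketch of the oracle idea: a trusted reference (here Python's built-in sorted) predicts the expected outcomes for a hypothetical SUT, bubble_sort.

```python
# Hypothetical system under test.
def bubble_sort(items):
    data = list(items)
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

def test_against_oracle():
    test_inputs = [[3, 1, 2], [], [5, 5, 1], [9, -2, 0, 7]]
    for case in test_inputs:
        expected = sorted(case)       # oracle-predicted outcome
        actual = bubble_sort(case)    # actual outcome of the SUT
        assert actual == expected, f"mismatch for {case}"

if __name__ == "__main__":
    test_against_oracle()
    print("all oracle comparisons passed")
```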
Economics of Software Testing

- Adequate testing is necessary; testing beyond that is a waste of resources, while under-testing results in more bugs.
- Prolonged testing does not yield results sufficient to justify the testing effort, as the number of defects uncovered reduces after some optimal time.
- After some time the number of defects uncovered starts decreasing, while the cost incurred for testing remains the same.
- As time passes, the cost of finding defects increases, and after some point the cost of testing exceeds the benefits obtained from uncovering the defects.
- This point is the optimal time to stop and is the most cost-effective from a testing perspective.
Handling defects –
- Bug / Issue Tracking system
- To solve problems systematically, a bug tracking system is used.
- Issues reported by the development / testing teams are logged in the system.
- It is a software tool used to track and maintain reported defects and to get them fixed.

-Defect severity level


- Severity is the degree of impact a defect has on development or operation of the system
- Critical – crashes / hangs / data loss
- Major – Important feature is broken and workaround is possible
- Minor – Minor loss of function and workaround is possible
- Trivial – Cosmetic failure / misspelled word / misaligned text

- Defect Priority level

- Priority determines the order in which defects should be fixed.
- Resolve immediately – P1 – development cannot proceed until it is resolved.
- Give high attention – P2 – resolve as soon as possible.
- Normal queue – P3 – solved during the normal course of development activities.
- Low priority – P4 – can be fixed when it becomes serious.
- Defer – P5 – can be resolved in a future release.
What is a Bug/Defect?

• A defect is an error or bug in an application that is created during the building or designing of the software, and due to which the software starts to show abnormal behaviour during its use.

• So it is one of the important responsibilities of the tester to find as many defects as possible, to ensure that the quality of the product is not affected, that the end product fulfils all the requirements for which it has been designed, and that it provides the required services to the end user.

• The more defects are identified and resolved, the more closely the software will behave as per expectations.
What is a Defect Life Cycle?

The defect life cycle refers to the entire sequence of states a defect passes through, starting from a new defect being detected to the closing of that defect by the tester. Alternatively, it is also called the Bug Life Cycle.

• The journey of the Defect Cycle varies from organization to organization and also from project
to project because development procedures and platforms as well as testing methods and
testing tools differ depending upon organizations and projects.
• The number of states that a defect goes through also varies depending upon the different tools
used and processes followed during the testing of software.
• The objective of the defect lifecycle is to easily coordinate and communicate the current status
of the defect and thus help to make the defect-fixing process efficient.
Defect States Workflow
The number of states that a defect goes through varies from project to project. The list below covers all possible states:
•New: When a new defect is logged and posted for the first time, it is assigned the status NEW.
•Assigned: Once the bug is posted by the tester, the test lead approves the bug and assigns it to the developer team.
•Open: The developer starts analyzing and works on the defect fix.
•Fixed: When a developer makes the necessary code change and verifies the change, the bug status is set to "Fixed."
•Pending retest: Once the defect is fixed, the developer hands the code back to the tester for retesting. Since the retesting is still pending from the tester's end, the status assigned is "Pending retest."
•Retest: The tester retests the code at this stage to check whether the defect has been fixed by the developer, and changes the status to "Retest."
•Verified: The tester re-tests the bug after it has been fixed by the developer. If no bug is detected in the software, the bug is considered fixed and the status assigned is "Verified."
•Reopen: If the bug persists even after the developer has fixed it, the tester changes the status to "Reopened," and the bug goes through the life cycle once again.
•Closed: If the bug no longer exists, the tester assigns the status "Closed."
•Duplicate: If the defect is reported twice or corresponds to an already-reported bug, the status is changed to "Duplicate."
•Rejected: If the developer feels the defect is not a genuine defect, the status is changed to "Rejected."
•Deferred: If the present bug is not of prime priority and is expected to be fixed in the next release, the status "Deferred" is assigned.
•Not a bug: If the reported issue does not affect the functionality of the application, the status assigned is "Not a bug."
(One possible encoding of this workflow is sketched below.)
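
For illustration only, the workflow above could be encoded as an allowed-transition table; the exact transitions vary by organization and tool, as the text notes, so this is one plausible reading.

```python
# One possible encoding of the defect-state workflow described above.
ALLOWED_TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Deferred", "Not a bug"},
    "Fixed":          {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Reopen":         {"Assigned"},
    "Verified":       {"Closed"},
    "Closed":         {"Reopen"},
}

def can_transition(current, target):
    return target in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("Retest", "Reopen")      # failed retest re-opens the defect
assert not can_transition("New", "Closed")     # a new defect cannot be closed directly
print("workflow checks passed")
```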
1. The tester finds the defect.
2. The defect is assigned the status New.
3. The defect is forwarded to the Project Manager for analysis.
4. The Project Manager decides whether the defect is valid.
5. If the defect is not valid, it is given the status "Rejected."
6. So the project manager assigns the status Rejected. If the defect is not rejected, the next step is to check whether it is in scope. Suppose there is another function, e.g. email functionality, in the same application and you find a problem with it, but it is not part of the current release; such defects are assigned a Postponed or Deferred status.
7. Next, the manager verifies whether a similar defect was raised earlier. If yes, the defect is assigned the status Duplicate.
8. If not, the defect is assigned to the developer, who starts fixing the code. During this stage the defect is assigned the status In Progress.
9. Once the code is fixed, the defect is assigned the status Fixed.

10. Next, the tester re-tests the code. If the test case passes, the defect is closed. If the test case fails again, the defect is re-opened and assigned to the developer.

11. Consider a situation where, during the first release of a Flight Reservation application, a defect was found in the Fax Order feature; it was fixed and assigned the status Closed. During the second (upgrade) release, the same defect re-surfaced. In such cases, a closed defect is re-opened.
A software test plan is a document that describes all future testing-related activities. It defines the work products to be tested, how they will be tested, and how the test types are distributed among the testers. The test manager prepares this document.

• The test plan serves as the blueprint that changes according to the progressions in the project
and stays current at all times.
• It serves as a base for conducting testing activities and coordinating activities among a QA
team.
• It is shared with Business Analysts, Project Managers, and anyone associated with the project.
Factors and Roles:

- Who writes Test Plans? – Test Lead, Test Manager, Test Engineer
- Who reviews the Test Plan? – Test Lead, Test Manager, Test Engineer, Customer, Development Team
- Who approves the Test Plan? – Customer, Test Manager
- Who writes Test Cases? – Test Lead, Test Engineer
- Who reviews Test Cases? – Test Engineer, Test Lead, Customer, Development Team
- Who approves Test Cases? – Test Manager, Test Lead, Customer


The following are some of the key benefits of making a test plan:

•Quick guide for the testing process: The test plan serves as a quick guide for the testing process as it
offers a clear guide for QA engineers to conduct testing activities.
•Helps to avoid out-of-scope functionalities: The test plan offers detailed aspects such as test scope,
test estimation, strategy, etc.
•Helps to determine the time, cost, and effort: The test plan serves as the blueprint for conducting testing activities, and thus helps to deduce an estimate of the time, cost, and effort required for them.
•Provide a schedule for testing activities: A test plan is like a rule book that needs to be followed, it
thus helps to schedule activities that can be followed by all the team members.
•Test plan can be reused: The test plan documents important aspects like test estimation, test scope,
and test strategy which are reviewed by the Management Team and thus can be reused for other
projects.
Objectives of the Test Plan:
1.Overview of testing activities: The test plan provides an overview of the testing activities and where
to start and stop the work.
2.Provides timeline: The test plan helps to create the timeline for the testing activities based on the
number of hours and the workers needed.
3.Helps to estimate resources: The test plan helps to create an estimate of the number of resources
needed to finish the work.
4.Serves as a blueprint: The test plan serves as a blueprint for all the testing activities, it has every
detail from beginning to end.
5.Helps to identify solutions: A test plan helps the team members consider the project's challenges and identify the solutions.
6.Serves as a rulebook: The test plan serves as a rulebook of rules to be followed as the project is completed phase by phase.
How to create a Test Plan
1. Analyze the product: This phase focuses on analyzing the product, interviewing clients, designers, and developers, and performing a product walkthrough. This stage focuses on answering the following questions:
•What is the primary objective of the product?
•Who will use the product?
•What are the hardware and software specifications of the product?
•How does the product work?

2. Design the test strategy: The test strategy document is prepared by the manager and details the
following information:
•Scope of testing which means the components that will be tested and the ones that will be skipped.
•Type of testing which means different types of tests that will be used in the project.
•Risks and issues that will list all the possible risks that may occur during testing.
•Test logistics mentions the names of the testers and the tests that will be run by them.

3. Define test objectives: This phase defines the objectives and expected results of the test execution.
Objectives include:
•A list of software features like functionality, GUI, performance standards, etc.
•The ideal expected outcome for every aspect of the software that needs testing.
4. Define test criteria: Two main testing criteria determine all the activities in the testing project:
•Suspension criteria: Suspension criteria define the benchmarks for suspending all the tests.
•Exit criteria: Exit criteria define the benchmarks that signify the successful completion of the test phase
or project. These are expected results and must match before moving to the next stage of development.

5. Resource planning: This phase aims to create a detailed list of all the resources required for project
completion. For example, human effort, hardware and software requirements, all infrastructure needed,
etc.

6. Plan test environment: This phase is very important as the test environment is where the QAs run
their tests. The test environments must be real devices, installed with real browsers and operating systems
so that testers can monitor software behavior in real user conditions.

7. Schedule and Estimation: Break down the project into smaller tasks and allocate time and effort for
each task. This helps in efficient time estimation. Create a schedule to complete these tasks in the
designated time with a specific amount of effort.

8. Determine test deliverables: Test deliverables refer to the list of documents, tools, and other
equipment that must be created, provided, and maintained to support testing activities in the project.
Deliverables required before testing: Test Plan, Test Design
Deliverables required during testing: Test Scripts, Simulators, Test Data, Error and Execution Logs
Deliverables required after testing: Test Results, Defect Reports, Release Notes


The Software Testing Life Cycle (STLC) :
• It is a systematic approach to testing a software application to ensure that it meets the
requirements and is free of defects.
• The main goal of the STLC is to identify and document any defects or issues in the software
application as early as possible in the development process. This allows for issues to be
addressed and resolved before the software is released to the public.
Characteristics of STLC

•STLC is a fundamental part of the Software Development Life Cycle (SDLC) but STLC

consists of only the testing phases.

•STLC starts as soon as requirements are defined or software requirement document is shared

by stakeholders.

•STLC yields a step-by-step process to ensure quality software.


Requirement Analysis

The activities that take place during the Requirement Analysis stage include:

•Reviewing the software requirements document (SRD) and other related documents
•Interviewing stakeholders to gather additional information
•Identifying any ambiguities or inconsistencies in the requirements
•Identifying any missing or incomplete requirements
•Identifying any potential risks or issues that may impact the testing process
•Creating a requirement traceability matrix (RTM) to map requirements to test cases
2. Test Planning: In this phase the manager of the testing team calculates the estimated effort and cost for the testing work.

The activities that take place during the Test Planning stage include:
• Identifying the testing objectives and scope
• Developing a test strategy: selecting the testing methods and techniques that will be used
• Identifying the testing environment and resources needed
• Identifying the test cases that will be executed and the test data that will be used
• Estimating the time and cost required for testing
• Identifying the test deliverables and milestones
• Assigning roles and responsibilities to the testing team
• Reviewing and approving the test plan
3. Test Case Development:
• In this phase the testing team writes the detailed test cases.
• The testing team also prepares the required test data for the testing.
• Once the test cases are prepared, they are reviewed by the quality assurance team.

The activities that take place during the Test Case Development stage include:
•Identifying the test cases that will be developed
•Writing test cases that are clear, concise, and easy to understand
•Creating test data and test scenarios that will be used in the test cases
•Identifying the expected results for each test case
•Reviewing and validating the test cases
•Updating the requirement traceability matrix (RTM) to map requirements to test cases
4. Test Environment Setup: The test environment decides the conditions under which the software is tested. This is an independent activity and can be started along with test case development. The testing team is not involved in this process; either the developer or the customer creates the testing environment.
5. Test Execution: In this phase testing team starts executing test cases based on prepared test cases in the
earlier step.
The activities that take place during the test execution stage of the Software Testing Life Cycle (STLC)
include:
•Test execution: The test cases and scripts created in the test design stage are run against the software application, and the results are collected and analyzed to identify any defects or issues.
•Defect logging: Any defects or issues that are found during test execution are logged in a defect tracking system, along with details such as the severity, priority, and description of the issue.
•Test data preparation: Test data is prepared and loaded into the system for test execution.
•Test environment setup: The necessary hardware, software, and network configurations are set up for test execution.
•Test result analysis: The results of the test execution are analyzed to determine the software’s performance
and identify any defects or issues.
•Defect retesting: Any defects that are identified during test execution are retested to ensure that they have
been fixed correctly.
•Test Reporting: Test results are documented and reported to the relevant stakeholders.
6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC), in which all testing-related activities are completed and documented. The main objective of this stage is to ensure that all testing-related activities have been completed and that the software is ready for release. The main activities that take place during the test closure stage include:

•Test summary report: A report is created that summarizes the overall testing process, including the number of
test cases executed, the number of defects found, and the overall pass/fail rate.
•Defect tracking: All defects that were identified during testing are tracked and managed until they are
resolved.
•Test environment clean-up: The test environment is cleaned up, and all test data and test artifacts are
archived.
•Test closure report: A report is created that documents all the testing-related activities that took place,
including the testing objectives, scope, schedule, and resources used.
•Knowledge transfer: Knowledge about the software and testing process is shared with the rest of the team
and any stakeholders who may need to maintain or support the software in the future.
•Feedback and improvements: Feedback from the testing process is collected and used to improve future
testing processes
What is a Test Case?
A test case is a defined format for software testing required to check whether a particular application/software is working or not. A test case consists of a certain set of conditions that need to be checked to test an application or software; in simpler terms, when the conditions are checked, it is verified whether the resultant output matches the expected output or not. A test case consists of various parameters such as ID, condition, steps, input, expected result, actual result, status, and remarks.

Parameters of Test Cases:


Module Name: Subject or title that defines the functionality of the test.
Test Case Id: A unique identifier assigned to every single condition in a test case.
Tester Name: The name of the person who would be carrying out the test.
Test scenario: The test scenario provides a brief description to the tester, as in providing a small overview
to know about what needs to be performed and the small features, and components of the test.
Test Case Description: The condition required to be checked for a given software, e.g. check whether the numbers-only validation works for an age input box.
Test Steps: Steps to be performed for the checking of the condition.
Test Priority: As the name suggests, assigns priority to the test cases: which have to be performed first and which can be performed later.
Test Data: The inputs to be taken while checking for the conditions.
Test Expected Result: The output which should be expected at the end of the test.
Test parameters: Parameters assigned to a particular test case.
Actual Result: The output that is displayed at the end.
Environment Information: The environment in which the test is being performed, such as the operating
system, security information, the software name, software version, etc.
Status: The status of tests such as pass, fail, NA, etc.
Comments: Remarks on the test regarding the test for the betterment of the software.
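
For illustration, the parameters above could be captured as a structured record; all field values below are hypothetical.

```python
# One way to represent a single test case as a structured record.
test_case = {
    "module_name": "User Registration",
    "test_case_id": "TC_REG_001",
    "tester_name": "A. Tester",
    "test_scenario": "Verify the age field accepts only numeric input",
    "description": "Check that the age input box rejects non-numeric values",
    "priority": "P2",
    "test_steps": [
        "Open the registration page",
        "Enter 'abc' in the age field",
        "Click Submit",
    ],
    "test_data": {"age": "abc"},
    "expected_result": "Validation error: 'Age must be a number'",
    "actual_result": None,   # filled in after execution
    "status": "Not Run",
    "comments": "",
}
print(test_case["test_case_id"], "-", test_case["status"])
```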
Parameter – Test Case vs Test Scenario

- Definition – Test case: a defined format for software testing required to check if a particular application/software/module is working or not; here we check for different conditions regarding the same. Test scenario: provides a small description of what needs to be performed, based on the use case.
- Level of detailing – Test cases are more detailed, with several parameters. Test scenarios provide a small description, mostly one-line statements.
- Action level – Test cases are low-level actions. Test scenarios are high-level actions.
- Derived from – Test cases are mostly derived from test scenarios. Test scenarios are derived from documents like BRS, SRS, etc.
- Objective – A test case focuses on "What to test" and "How to test". A test scenario focuses more on "What to test".
- Resources required – Test cases require more resources for documentation and execution. Fewer resources are required to write test scenarios.
- Inputs – Test cases include all positive and negative inputs, expected results, navigation steps, etc. Test scenarios are one-liner statements.
- Time requirement – Test cases require more time compared to test scenarios. Test scenarios require less time.
- Maintenance – Test cases are hard to maintain. Test scenarios require less time to maintain.
Test case design techniques are the key to planning, designing, and implementing tests for software
applications. These techniques involve various steps that aim to ensure the effectiveness of test cases in
uncovering bugs or other defects in software programs.
A basic example of test case design
Let us take the example of an e-commerce app or website (like Amazon or Flipkart) for test case design. We want to ensure users can quickly check out and make payments without issues. Here we test with 1 product in the cart:
Title: Test that user can complete the checkout process when there is 1 item in the cart.
Description: Ensure users can checkout and make payments without issues on the website/app
Preconditions: The user is already logged in
Assumptions: They are using a supported device or browser to log in.
Test Steps:
1. Open the app/website.
2. Go to 1 product
3. Add the product to the cart.
4. Check out the item in the cart.
5. Add address information for delivery
6. Add payment information
7. Complete the checkout process.
Expected Result: The checkout process should be complete, and the user should receive confirmation.
What are the types of test case design techniques?
These test case design techniques can be classified into three major groups:
•Specification-based
•Structure-based
•Experience-based
Specification-Based or Black-Box techniques
Specification-based testing, also known as black-box testing, is a testing technique that focuses on
testing the software system based on its functional requirements and specifications without any
knowledge about the underlying code or system structure.
1.Boundary Value Analysis (BVA) identifies errors at the input domain’s boundary. A simple
example of boundary value analysis would be testing a text box that requires the user to enter a
number between 1 and 10. In this case, the boundary values would be 1 and 10, and we would test
with values that are just above, at, and just below these boundaries.

For example, we would test with 0, 1, 2, 9, 10, and 11. We can expect that errors or defects are
most likely to occur at or near the boundary values. Identifying these issues early can help prevent
them from causing problems later in the software development process.
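
A minimal sketch of those boundary checks in Python; the validator accepts is a hypothetical implementation of the 1–10 rule described above.

```python
# Hypothetical validator under test: accepts values 1..10.
def accepts(value):
    return 1 <= value <= 10

# Boundary value analysis: test just below, at, and just above each boundary.
boundary_cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
for value, expected in boundary_cases.items():
    assert accepts(value) == expected, f"unexpected result for {value}"

print("boundary cases passed")
```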
2. Equivalence Partitioning (EP) is another technique that helps reduce the number of required test cases. By partitioning the test input data into classes whose members are expected to be treated equivalently by the software, one can design test cases for each class or partition. This technique ensures that the software is tested thoroughly while minimizing the number of required test cases.

For example, if a program requires an input of numbers between 1 and 100, an EP test would
include a range of values, such as 1-50 and 51-100, and numbers outside that range, such as -1 or
101. Testing one value from each partition is sufficient to test all values within that partition.
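
A small sketch testing one representative value per partition; the validator in_range is a hypothetical implementation of the 1–100 rule above.

```python
# Hypothetical validator under test: accepts values 1..100.
def in_range(value):
    return 1 <= value <= 100

# Equivalence partitioning: one representative value per partition.
partitions = {
    "below range (invalid)": (-1, False),
    "lower valid partition 1-50": (25, True),
    "upper valid partition 51-100": (75, True),
    "above range (invalid)": (101, False),
}
for name, (representative, expected) in partitions.items():
    assert in_range(representative) == expected, name

print("one representative per partition passed")
```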

3. Decision Table Testing is a technique that involves designing test cases based on decision tables formulated from different combinations of inputs and their corresponding outputs under various conditions and scenarios, adhering to the business rules. This technique ensures that we test the software thoroughly and accurately.

For example, if a program offers discounts based on the type of customer and the amount spent, a
decision table would list all possible combinations of customer types and the amount paid to
receive a discount. Each cell in the table would specify the value that should be applied. Testers
can ensure the program behaves correctly under various scenarios by testing all combinations.
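
A hedged sketch of decision-table testing; the discount rules and percentages below are hypothetical, chosen only to show each table row becoming one test case.

```python
# Hypothetical implementation under test: discount based on customer type and spend.
def discount(customer_type, amount_spent):
    if customer_type == "member" and amount_spent >= 100:
        return 20
    if customer_type == "member":
        return 10
    if amount_spent >= 100:
        return 5
    return 0

# Each row of the decision table becomes one test case:
# (customer_type, amount_spent) -> expected discount %.
decision_table = [
    ("member", 150, 20),
    ("member", 50, 10),
    ("guest", 150, 5),
    ("guest", 50, 0),
]
for customer_type, amount, expected in decision_table:
    assert discount(customer_type, amount) == expected

print("all decision-table rows passed")
```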
4. State Transition Diagrams (STD) are used to test software with a finite number of states of different types. A set of rules that define the response to various inputs guides the transition from one state to another. This technique is handy for systems with specific workflows within them.

For example, consider an e-commerce website that has different states such as "logged out," "logged in," "cart empty," "cart not empty," and "order placed." The transitions between the states would be triggered by logging in and logging out, adding a product to the cart, removing a product from the cart, proceeding to checkout, etc. An STD can help visualize and test such complex states and transitions in a system.
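
A small sketch of a transition table for a simplified subset of those states; the table is illustrative, not a complete model of the site.

```python
# Illustrative (state, event) -> next state table for a simplified e-commerce flow.
TRANSITIONS = {
    ("logged out", "log in"): "logged in",
    ("logged in", "add to cart"): "cart not empty",
    ("cart not empty", "remove from cart"): "logged in",
    ("cart not empty", "checkout"): "order placed",
    ("logged in", "log out"): "logged out",
}

def next_state(state, event):
    # Invalid events leave the state unchanged in this sketch.
    return TRANSITIONS.get((state, event), state)

# Valid path: log in -> add to cart -> checkout.
state = "logged out"
for event in ("log in", "add to cart", "checkout"):
    state = next_state(state, event)
assert state == "order placed"

# Invalid transition: checking out while logged out should not place an order.
assert next_state("logged out", "checkout") == "logged out"
print("state-transition checks passed")
```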

5. Use Case Testing involves designing test cases to execute different business scenarios and end-
user functionalities.

For example, A use case could be a “student enrolling in a course” on an academic website. Test
cases would simulate the enrollment process and verify the system’s response from a student’s
perspective.
Structure-Based or White-Box techniques
Structure-based testing, also known as white-box testing, is a testing technique that involves the
testing of internal structures or components of software applications. In this approach, the tests
interact with the code directly. These tests are designed to ensure the code works correctly and
efficiently.
1. Statement Testing and Coverage is a technique that involves executing all the executable statements in the source code at least once. The percentage of executable statements actually exercised is then calculated against the required coverage level.

For example, consider code that inputs two numbers and checks if the first number is greater than
or equal to the second. A statement coverage test would verify that both the “greater than” and
“equal to” statements are executed during testing to ensure that all code branches are covered.
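
A minimal sketch of statement coverage for that comparison example; compare is a hypothetical implementation, and the two inputs together execute every statement at least once.

```python
# Hypothetical function under test.
def compare(a, b):
    if a >= b:
        result = "first is greater or equal"
    else:
        result = "first is smaller"
    return result

assert compare(5, 3) == "first is greater or equal"  # executes the if-branch statements
assert compare(2, 9) == "first is smaller"           # executes the else-branch statements
print("all statements executed at least once")
```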
2. Decision Testing Coverage, also known as branch coverage, validates all the branches in the code by executing each possible branch from each decision point at least once. This helps ensure that no branch leads to unexpected application behavior.

For example, if a program requires an input of a number between 1 and 100 and uses an "if/else" statement to check whether the number is even, decision testing coverage would ensure that both the even and odd outcomes have been tested, to confirm all possible scenarios have been checked.

3. Condition Testing, also known as predicate coverage testing, involves evaluating each Boolean expression in the code and checking its output values, TRUE or FALSE, against the expected outcomes. This test checks all outcomes at least once to achieve 100% condition coverage. We design test cases that make it easy to exercise the condition outcomes.

For example, if a program determines whether a user is eligible for a discount based on age, condition testing would verify that the code handles each age group accurately. It would test age values such as one less than, one more than, and within the age range requirement to evaluate whether the code performs as expected.

4. Multiple Condition Testing aims to test different combinations of conditions to achieve 100% coverage. This technique requires two or more test scripts, which may require more effort.

For example, if a program uses an "if/else" statement to check age and gender to provide a discount, multiple condition testing would verify that the program handles all possible scenarios correctly. It would test various age ranges and gender combinations to ensure the code performs accurately for all possibilities.

5. All Path Testing leverages the source code of a program to find every executable path.

For example, if a program asks a user for two inputs (A and B) and has multiple conditions, all path testing would ensure that each condition is tested independently. The technique would test all combinations of A and B, including zero, negative, and positive values, to identify any potential errors in the code.
Test case reduction techniques
Test case reduction, as the name suggests, is the process of cutting away everything that is irrelevant to the bug and generating a smaller program or test suite that still induces the bug. This helps in reducing the cost of executing, validating, and managing test cases or test suites for the development team.

Generic steps followed for reducing test cases can be:


• Write test cases manually or through automated tools.
• For each test case, build the coverage and data sets.
• Apply the proposed test case reduction technique and remove unwanted test cases.
• Analyze the effectiveness of the Test Case Reduction technique.

(Slide: Test Case Reduction Techniques – diagram)


Best Practices To Write A Good Test Case
With a test case template in place, you can start to document all of the test cases you are working on for a more structured and comprehensive view. Here are some best practices and tips to help you make the best use of the template:
•You can clone the template and have separate test case sheets for different areas of the software
• Follow a consistent naming convention for test cases to make them easily searchable.
•You can group similar test cases together under a common feature/scenario
•Familiarize yourself with the requirement or feature you're testing before creating the test case
so that you’ll know what information to include
•Use action verbs at the start of each test step like “Click”, “Enter” or “Validate”. If needed, you
may even create a semantic structure to describe your test case. You can check out how it is done
in BDD testing.
•Include any setup or prerequisites needed before executing the test.
•Ensure that the test cases you included are not only the “common” scenarios but also the
negative scenarios that users don’t typically face but do happen in the system
•Use formatting to make your test cases easier to read and follow
•Make sure to update your test cases regularly
What is a Test Case?
A test case is a specific scenario designed to verify the functionality and reliability of a software
system. In the test case, testers outline the specific steps to be taken, the input data to be used,
and the expected outcomes to determine if the software behaves as intended.

Test cases are typically documented in a dedicated testing document (such as Google Sheets, or
text document) or a test case management tool (TestRail, Zephyr, qTest, and PractiTest). For
automated testing, test cases can be written directly within code files using programming
languages and testing frameworks.

https://katalon.com/resources-center/blog/what-is-test-data-management
Types of Test Data
1.Positive Test Data: this type of data consists of input values that are valid and within the expected range; it is designed to test how the system behaves under expected, normal conditions. Example: a valid username and password pair that allows a user to log in to their account page on an eCommerce site.

2.Negative Test Data: in contrast with positive data, negative test data consists of input values that are invalid, unexpected, or outside the specified range. It is designed to test how the system behaves when users stray from the intended "correct" path. Example: a username and password that are too long.

3.Boundary Test Data: these are values at the edges or boundaries of acceptable input ranges
chosen to assess how the system handles inputs at the upper and lower limits of the allowed range.

4.Invalid Test Data: these are data that does not accurately represent the real-world scenarios or
conditions that the software is expected to encounter. It does not conform to the expected format,
structure, or rules within a given context.
