Manual

Test scenarios describe functionality that can be tested, while test cases contain the specific steps to execute tests. The software testing life cycle (STLC) involves requirement analysis, test planning, test case development, test environment setup, test execution, and test cycle closure to ensure quality goals are met. A defect's life cycle progresses through states such as new, assigned, open, fixed, retested, and closed. Priority indicates how soon a bug should be fixed, while severity describes a defect's impact on functionality. Regression testing checks for unexpected side effects of changes, while retesting verifies fixes for the original faults.

1. What is the difference between test scenarios and test cases?

Test Scenario: A test scenario is any functionality that can be tested. It is also
called a test condition or test possibility.
Test Case: A test case is a document that contains the steps that have to be
executed; it is planned in advance.

2. What is STLC? Explain the STLC steps.


Software Testing Life Cycle (STLC) is a sequence of specific activities conducted
during the testing process to ensure software quality goals are met.
STLC involves both verification and validation activities.
Contrary to popular belief, software testing is not just a single, isolated activity.
It consists of a series of activities carried out methodically to help
certify your software product. STLC stands for Software Testing Life Cycle.
-->Requirement Analysis
Activities in Requirement Phase Testing :
Identify types of tests to be performed.
Gather details about testing priorities and focus.
Prepare Requirement Traceability Matrix (RTM).
Identify test environment details where testing is supposed to be
carried out.
Automation feasibility analysis (if required).
-->Test Planning
Test Planning Activities :
Preparation of test plan/strategy document for various types of
testing
Test tool selection
Test effort estimation
Resource planning and determining roles and responsibilities.
Training requirement
-->Test case development
Test Case Development Activities :
Create test cases, automation scripts (if applicable)
Review and baseline test cases and scripts
Create test data (If Test Environment is available)
-->Test Environment setup
Test Environment Setup Activities :
Understand the required architecture, environment set-up and
prepare hardware and software requirement list for the Test Environment.
Setup test Environment and test data
Perform smoke test on the build
-->Test Execution
Test Execution Activities
Execute tests as per plan
Document test results, and log defects for failed cases
Map defects to test cases in RTM
Retest the Defect fixes
Track the defects to closure
-->Test Cycle closure
Test Cycle Closure Activities
Evaluate cycle completion criteria based on time, test coverage,
cost, software, critical business objectives, and quality
Prepare test metrics based on the above parameters.
Document the learning out of the project
Prepare Test closure report
Qualitative and quantitative reporting of quality of the work
product to the customer.
Test result analysis to find out the defect distribution by type
and severity.
3. Defect/Bug Life Cycle
The Defect/Bug Life Cycle is the set of states a defect passes through during its
entire life.
It starts as soon as a tester finds a new defect and ends when the tester closes
that defect, ensuring that it will not be reproduced again.
#1) New: This is the first state of a defect in the Defect Life Cycle. When
any new defect is found, it falls in a ‘New’ state, and validations & testing are
performed on this defect in the later stages of the Defect Life Cycle.
#2) Assigned: In this stage, a newly created defect is assigned to the
development team to work on the defect. This is assigned by the project lead or the
manager of the testing team to a developer.
#3) Open: Here, the developer starts the process of analyzing the defect and
works on fixing it, if required.
If the developer feels that the defect is not appropriate, it may be moved
to one of the following four states, namely Duplicate, Deferred, Rejected,
or Not a Bug, depending on the specific reason. We will discuss these four states
in a while.
#4) Fixed: When the developer finishes the task of fixing a defect by making
the required changes then he can mark the status of the defect as “Fixed”.
#5) Pending Retest: After fixing the defect, the developer assigns the defect
to the tester to retest the defect at their end, and until the tester works on
retesting the defect, the state of the defect remains in “Pending Retest”.
#6) Retest: At this point, the tester starts the task of retesting the defect
to verify if the defect is fixed accurately by the developer as per the
requirements or not.
	#7) Reopen: If the issue persists, the defect is assigned to the developer
again for fixing, and the status of the defect is changed to
‘Reopen’.
	#8) Verified: If the tester does not find any issue in the defect after it has
been fixed and retested, and feels that the defect has been fixed accurately,
then the status of the defect is set to ‘Verified’.
#9) Closed: When the defect does not exist any longer, then the tester
changes the status of the defect to “Closed”.
A Few More:
Rejected: If the defect is not considered a genuine defect by the
developer then it is marked as “Rejected” by the developer.
Duplicate: If the developer finds the defect as same as any other
defect or if the concept of the defect matches any other defect then the status of
the defect is changed to ‘Duplicate’ by the developer.
Deferred: If the developer feels that the defect is not of very
important priority and it can get fixed in the next releases or so in such a case,
he can change the status of the defect as ‘Deferred’.
Not a Bug: If the defect does not have an impact on the functionality
of the application, then the status of the defect gets changed to “Not a Bug”.
4. Severity & Priority :
Priority is the order in which the developer should resolve a defect whereas
Severity is the degree of impact that a defect has on the operation of the product.
Priority is categorized into three types: low, medium, and high, whereas
Severity is categorized into five types: critical, major, moderate, minor, and
cosmetic.
Priority is associated with scheduling while Severity is associated with
functionality or standards.
Priority indicates how soon the bug should be fixed whereas Severity indicates the
seriousness of the defect on the product functionality.
Priority of defects is decided in consultation with the manager/client while
Severity levels of the defects are determined by the QA engineer.
Priority is driven by business value while Severity is driven by functionality.
High priority and low severity indicates that the defect has to be fixed on an
immediate basis but does not severely affect the application, while
high severity and low priority indicates that the defect has to be fixed, but not
on an immediate basis.

A very low severity with a high priority:

A logo error on a shipment website can be of low severity, as it is not going to
affect the functionality of the website, but it can be of high priority because you
do not want any further shipments to proceed with the wrong logo.
A very high severity with a low priority:
Likewise, for a flight-operating website, a defect in the reservation functionality
may be of high severity but can be a low priority, as it can be scheduled for
release in the next cycle.
High Priority, High Severity : A key feature does not work
High Priority, Low Severity : The company logo is the wrong color
Low Priority, High Severity : A rarely used feature does not work
Low Priority, Low Severity : The caption on an image is written in the wrong font

The severity is decided based on the defect's impact on functionality.

Notes:
git status - check the status of the working copy
git commit - commit the code
git push - push the committed code to the remote repository

update employee set employeeid=30 where employeeid=20
A subquery is a query nested inside another query.

5. Difference between Regression & Retesting :


Regression testing is performed for passed test cases while Retesting is done only
for failed test cases.
Regression testing checks for unexpected side-effects while Re-testing makes sure
that the original fault has been corrected.
Regression Testing doesn’t include defect verification whereas Re-testing includes
defect verification.
Regression testing is known as generic testing whereas Re-testing is planned
testing.
Regression Testing is possible with the use of automation whereas Re-testing is not
possible with automation.

6. Regression Testing :
Regression testing is a black box testing technique.
It is used to verify that a code change in the software does not impact the
existing functionality of the product.
Regression testing makes sure that the product works fine with new
functionality, bug fixes, or any change to an existing feature.
Test cases are re-executed to check that the previous functionality of the
application is working fine and that the new changes have not introduced any bugs.
Regression testing can be performed on a new build when there is a significant
change in the original functionality.
It ensures that the code still works even when changes are being made.
Types Of Regression :
The different types of Regression Testing are as follows:
Unit Regression Testing [URT]
Regional Regression Testing[RRT]
Full or Complete Regression Testing [FRT]
Unit Regression Testing [URT]
	In this, we test only the changed unit, not the impact area,
because the change may affect only components within the same module.
	Example:
	In the first build of the application, the developer develops
the Search button so that it accepts 1-15 characters.
	Then the test engineer tests the Search button with the help of test case
design techniques.
	Now, the client modifies the requirement and requests
that the Search button accept 1-35 characters.
	The test engineer will test only the Search button to verify that it accepts
1-35 characters and does not check any other feature of the first build.
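	As a rough illustration of unit regression testing only (not part of the
original notes), the sketch below uses pytest and a hypothetical
validate_search_term function to show how the test for the changed unit might be
updated from the 1-15 character rule to the 1-35 character rule, while the rest of
the build is left untested.

	    # Hypothetical unit under test: validates the length of a search term.
	    def validate_search_term(term: str) -> bool:
	        # Build 2 requirement: 1-35 characters (was 1-15 in build 1).
	        return 1 <= len(term) <= 35

	    # Unit regression tests for the changed unit only (pytest style).
	    def test_accepts_minimum_length():
	        assert validate_search_term("a") is True

	    def test_accepts_new_maximum_length():
	        # Would have failed under the old 1-15 rule.
	        assert validate_search_term("x" * 35) is True

	    def test_rejects_empty_term():
	        assert validate_search_term("") is False

	    def test_rejects_term_longer_than_new_maximum():
	        assert validate_search_term("x" * 36) is False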
Regional Regression Testing [RRT]
	In this, we test the modification along with the impact areas or
regions; this is called Regional Regression Testing.
	Here, we test the impact area because, if there are dependent modules,
the change may affect those other modules as well.
Example:
Assume we have four different modules, such as Module A, Module B, Module C,
and Module D,
which are provided by the developers for the testing during the first build.
Now, the test engineer will identify the bugs in Module D.
The bug report is sent to the developers, and the development team fixes
those defects and sends the second build.
In the second build, the previous defects are fixed.
Now the test engineer understands that the bug fixing in Module D has
impacted some features in Module A and Module C.
Hence, the test engineer first tests the Module D where the bug has been
fixed and then checks the impact areas in Module A and Module C.
Therefore, this testing is known as Regional regression testing.
Full Regression testing [FRT]
	During the second and the third release of the product, the client asks to
add 3-4 new features,
and also some defects from the previous release need to be fixed.
Then the testing team will do the Impact Analysis and identify that the above
modification will lead us to test the entire product.
Therefore, we can say that testing the modified features and all the
remaining (old) features is called the Full Regression testing.

7. Integration Testing :
When individual software modules are combined and tested as a group, it is known
as integration testing.
Integration testing sits between unit testing and system testing.

Integration Testing Example

For example, testing the keyboard of a computer on its own is unit testing,
but combining the keyboard and the mouse of a computer and checking whether they
work together is integration testing.
So it is a prerequisite for integration testing that the system has been
unit tested first.

Why we need integration testing?


Integration testing is executed to establish whether the components interact with
each other according to the specification or not.
Integration testing in the large refers to joining all the components to form the
complete system.
It is performed by the developer, by the software tester, or by both.
Example: checking that a payroll system interacts as required with the Human
Resource system.

Integration Testing Types


1) Top-Down Integration Testing: As the name suggests, this testing always starts
at the top of the program hierarchy and travels towards its branches.
This can be done in either depth-first or breadth-first.
2) Bottom-Up Integration Testing: This testing always starts at the lowest level in
the program structure.

Techniques of integration testing


1) Top-down testing approach
2) Bottom-up testing approach
3) Big-Bang testing approach
4) Sandwiched testing approach

Conclusion:
In conclusion, integration testing focuses on testing multiple modules
working together; it is one of the more extensive exercises in software
testing, in which individual software modules are combined and tested as a group.

8. Smoke testing & Sanity Testing


Smoke Testing :
Smoke Testing is a software testing technique performed post software build
to verify that the critical functionalities of software are working fine.
It is executed before any detailed functional or regression tests are
executed.
	The main purpose of smoke testing is to reject a badly broken build early so
that the QA team does not waste time testing a broken software application.

Sanity Testing :
Sanity testing is a kind of Software Testing performed after receiving a
software build, with minor changes in code, or functionality,
to ascertain that the bugs have been fixed and no further issues are
introduced due to these changes.
The goal is to determine that the proposed functionality works roughly as
expected.
	If the sanity test fails, the build is rejected to save the time and cost
involved in more rigorous testing.

Smoke Testing has a goal to verify “stability” whereas Sanity Testing has a goal to
verify “rationality”.
Smoke Testing is done by either developers or testers whereas Sanity Testing is done
by testers.
Smoke Testing verifies the critical functionalities of the system whereas Sanity
Testing verifies the new functionality like bug fixes.
Smoke testing is a subset of acceptance testing whereas Sanity testing is a subset
of Regression Testing.
Smoke testing is documented or scripted whereas Sanity testing isn’t.
Smoke testing verifies the entire system from end to end whereas Sanity Testing
verifies only a particular component.
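
As a loose sketch (not from the original notes), the snippet below shows one way a
team might tag a handful of critical-path checks as a smoke suite using pytest
markers; the marker name and the checked functions are assumptions for illustration.

	    # pytest.ini would register the custom marker, e.g.:
	    # [pytest]
	    # markers =
	    #     smoke: critical-path checks run on every new build

	    import pytest

	    @pytest.mark.smoke
	    def test_application_starts():
	        # Assumed placeholder: verify the build launches / home page loads.
	        assert True

	    @pytest.mark.smoke
	    def test_login_page_reachable():
	        # Assumed placeholder: verify the critical login entry point is available.
	        assert True

	    def test_detailed_report_layout():
	        # Non-critical check, not part of the smoke suite.
	        assert True

	    # Run only the smoke suite before detailed testing:
	    #   pytest -m smoke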

9. SDLC :
Software Development Life Cycle
Requirement Gathering
Analysis
Design
Coding / Development
Testing
Deployment & Maintenance
SDLC Models
Waterfall Model
V Model
Prototype Model
Spiral Model
Agile Model

10. RTM - Requirement Traceability Matrix


Requirement Traceability Matrix (RTM) is a document that maps and traces user
requirements to test cases.
It captures all requirements proposed by the client and the requirement traceability
in a single document,
delivered at the conclusion of the software development life cycle.
The main purpose of the Requirement Traceability Matrix is to validate that all
requirements are checked via test cases,
so that no functionality is left unchecked during software testing.

Advantages :
With the help of the RTM document, we can display the complete test execution and
bugs status based on requirements.
It is used to show the missing requirements or conflicts in documents.
In this, we can ensure the complete test coverage, which means all the modules are
tested.
It also accounts for the testing team's effort towards reworking or
reconsidering the test cases.
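
A minimal illustrative RTM might look like the table below; the requirement and test
case IDs are hypothetical and only show the mapping idea.

	Requirement ID   Requirement Description      Test Case IDs      Status
	REQ-001          User can log in              TC-001, TC-002     Pass
	REQ-002          User can reset password      TC-003             Fail (DEF-101)
	REQ-003          User can update profile      TC-004, TC-005     Not executed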

11. Test Scenarios & Test cases :


Test Scenario :
It is a document that defines the multiple ways or combinations of testing
the application.
	Generally, it is prepared to understand the flow of an application. It does
not include any inputs or navigation steps.
	The test scenario is a detailed document of test conditions that covers the
end-to-end functionality of a software application in one-line statements.
	Each one-line statement is considered a scenario. The test scenario is a
high-level classification of testable requirements.
These requirements are grouped on the basis of the functionality of a module
and obtained from the use cases.
In the test scenario, there is a detailed testing process due to many
associated test cases.
Before performing the test scenario, the tester has to consider the test
cases for each scenario.
	In the test scenario, testers need to put themselves in the place of the user,
because they test the software application from the user's point of view.
Features of Test Scenario :
	The test scenario is a one-line statement that guides testers on the testing
sequence.
	Test scenarios reduce the complexity and repetition of the test effort.
	A test scenario means talking and thinking about tests in detail but writing
them as one-line statements.
	It is a thread of operations.
	Test scenarios become more important when the tester does not have enough
time to write test cases and team members agree with a detailed one-line scenario.
	The test scenario is a time-saving activity.
	It provides easy maintenance because the addition and modification of test
scenarios are easy and independent.
Test Case :
The test case is defined as a group of conditions under which a tester
determines whether a software application is working as per the customer's
requirements or not.
	Test case designing includes preconditions, case name, input conditions, and
expected result. A test case is a first-level action, derived from test
scenarios.
	It is a detailed document that contains all possible inputs (positive as
well as negative) and the navigation steps,
which are used in the test execution process.
	Writing test cases is a one-time effort that can be reused in the future at
the time of regression testing.
	A test case gives detailed information about the testing strategy, testing
process, preconditions, and expected output.
	Test cases are executed during the testing process to check whether the
software application is performing the task for which it was developed.
Test case helps the tester in defect reporting by linking defect with test
case ID.
Why do we write test cases?
	We write test cases for the following reasons:
	To ensure consistency in test case execution
	To ensure better test coverage
	It depends on the process rather than on a person
	To avoid training every new test engineer on the product
	To ensure consistency in test case execution: we follow the test case
and test the application in the same way every time.
	To ensure better test coverage: for this, we should cover all possible
scenarios and document them,
so that we need not remember all the scenarios again and again.
	It depends on the process rather than on a person: a test engineer may have
tested an application during the first release and
the second release, and then left the company at the time of the third release.
	The test engineer understood the module and tested the application
thoroughly by deriving many values. If that person is not there for the third
release,
it becomes difficult for the new person. Hence all the derived values
are documented so that they can be used in the future.
	To avoid training every new test engineer on the product: when a
test engineer leaves, he/she leaves with a lot of knowledge and scenarios.
	Those scenarios should be documented so that the new test engineer can
test with the given scenarios and also write new scenarios.
Test Case Template :
A set of input values, execution preconditions, expected results and execution
postconditions, developed for a particular objective or test condition, such as to
exercise a particular program path or
to verify compliance with a specific requirement.
Main Attributes from template :
Test Case Id: a Unique name/number (Alphanumeric)
Test Case Name: Name of Test Case
Test Suite ID: Unique name/number (Alphanumeric)
Module Name :
Release : One release can contain many versions of the release.
Pre-condition: These are the necessary conditions that need to be satisfied
by every test engineer before starting the test execution process.
Or it is the data configuration or the data setup that needs to be
created for the testing.
	For example: in an application, we are writing test cases to add users,
edit users, and delete users.
	The pre-condition is that user A must be added before it can be edited
or removed.
	Test data : These are the values or the input we need to create as per the
pre-condition.
	For example, the username, password, and account number of the users.
	Severity : The severity can be critical, major, or minor; the severity in
the test case indicates the importance of that particular test case.
	The test execution process always depends on the severity of the
test cases.
	We can choose the severity based on the module. There are many features
included in a module;
even if one element is critical, we mark that test case as
critical. It depends on the functions for which we are writing the test case.
	Brief description : The test engineer writes a test case for a
particular feature.
	Someone who reads the test case later would otherwise
not know for which feature it was written.
	So, the brief description helps them see which feature the test case is
written for.
Test case Type : UI/Functional/NonFunctional/Smoke
Input Data : To execute the test case
Steps: Steps for Executing the Test Case
Post-Condition: Status After Test Case Execution
Expected Result: Expected Result as per Requirements
Actual Results:
Test Results: Pass / Fail
Remarks: Comments (Optional)
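
For illustration only, a filled-in test case following the attributes above might
look like the sketch below; the application, IDs, and values are hypothetical.

	Test Case ID      : TC_LOGIN_001
	Test Case Name    : Verify login with valid credentials
	Test Suite ID     : TS_LOGIN
	Module Name       : Login
	Release           : R1.0
	Pre-condition     : User "userA" already exists and is active
	Test Data         : Username = userA, Password = Valid@123
	Severity          : Critical
	Brief Description : Checks that a registered user can log in successfully
	Test Case Type    : Functional
	Steps             : 1. Open the login page
	                    2. Enter the username and password
	                    3. Click the Login button
	Post-Condition    : User session is created
	Expected Result   : The home page is displayed for userA
	Actual Result     : As expected
	Test Result       : Pass
	Remarks           : -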

12. Exploratory Testing :


	If requirements documentation does not exist, then we do one round of
exploratory testing.
	For this, we first explore the application in all possible ways,
understand the flow of the application, prepare a test document, and then
test the application; this approach is known as exploratory testing.

How to perform exploratory testing

	To perform exploratory testing, first, we start using the application
and understand the requirements of the application
from a person who has good product knowledge, such as a senior test
engineer or the developers.
	Then we explore the application and write the necessary document, and
this document is sent to the domain expert,
who will go through the document.
	We can also test the application based on our own knowledge and with the help
of a competitive product that has already been launched in the market.
	We can perform exploratory testing in 3 ways: freestyle, strategy-based,
and scenario-based.
Freestyle : In freestyle testing, we do not follow any rules, there is no coverage
target, and we explore the application just like ad hoc testing.
	If we want to get familiar with the software or check another test
engineer's work, we can use freestyle exploratory testing.

Strategy-based : Strategy-based exploratory testing can be performed with the help
of multiple testing techniques such as risk-based testing,
boundary value analysis, and equivalence partitioning.
	It is done by an experienced tester who has been using the application for
the longest time, because he/she knows the application very well.
Scenario-based : Scenario-based exploratory testing is performed with the help of
multiple scenarios such as end-to-end scenarios, test scenarios,
and real user scenarios.
	The test engineer can find defects and also check various sets of
possibilities for the multiple scenarios using their application knowledge
while exploring the application.

13. Adhoc Testing :


We do this testing once the build has gone through the planned test sequence; then
we perform adhoc testing by checking the application randomly.
Adhoc testing is also known as monkey testing and gorilla testing.
It is a form of negative testing, because we test the application outside the
client's requirements.
An end-user using the application randomly may see a bug, but a
professional test engineer who uses the software systematically
may not find the same bug.

14. Globalization testing :


Globalization testing is used to test software that is developed for multiple
languages;
improving an application or software to support various languages is known
as globalization.
This testing ensures that the application will support multiple languages and
multiple features because,
in current scenarios, applications are planned in such a way that they can be used
globally.

Purpose of Globalization testing


	It is used to make sure that the application supports all the languages
around the world.
	It is used for the identification of the various phases of
implementation.
	It is used to define the user interfaces of the software.
	This testing focuses on the worldwide experience of the application.
	It is used to make sure that the code can handle all international support
without breaking the functionality of the application.
Internationalization testing (I18N testing):
	Internationalization testing is the procedure of developing and planning
the software or the application (product) or file content
so that it can be localized for any given language,
culture, and region without requiring any changes in the source code.
	This testing is also known as I18N testing; here 18 is the number of letters
between the I and the N in the word Internationalization.
	Internationalization testing touches multiple testing types, such as
functional testing,
integration testing, usability testing, user interface testing,
compatibility testing, installation testing, and validation testing.
Why do we do internationalization testing?
	To verify that the right content is in the correct place.
	To check for any adverse effect on product quality.
	To verify that the content is in the correct language.
Localization testing (L10N testing):
	Localization testing is essentially format testing, where we test the
format specification based on the country, region, etc.
	It is also known as L10N testing; here 10 is the number of letters
between the L and the N in the word Localization.
	The primary objective of localization is to give a product the look and
feel of the target market, matching its culture, location, and language.
	This testing only checks the localized formats; it does not itself localize the product.
Examples of what we check in localization testing:
	Date format testing : For example, in the USA the date format is MM-DD-YYYY,
and in India the date format is DD-MM-YYYY.
	Currency format testing : In this, we do not worry about the functional
conversion of the amount, such as whether $ is converted to Rs. or not.
	Here we only test whether the currency symbol appears first or last. For
example, 200$, $250, Rs.500 (the format should follow each country's standards).
	Pin code format testing : Some countries have pin codes that contain
characters, such as PQ230. Checking the pin code format is L10N
testing, and checking whether PQ is translated to French is I18N testing. L10N
testing covers the date format, currency format, and pin code format.
	Image format testing : In this, we can only change the name of the image
because the image itself cannot be translated.
	Therefore we must have multiple images depending on the country.
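
As a small, hedged sketch of the date-format idea above (not part of the original
notes), the snippet below formats the same date in a US-style and an India-style
pattern using Python's standard datetime module.

	    from datetime import date

	    release_date = date(2023, 4, 9)

	    # US-style format (MM-DD-YYYY) vs. India-style format (DD-MM-YYYY).
	    us_format = release_date.strftime("%m-%d-%Y")
	    india_format = release_date.strftime("%d-%m-%Y")

	    print("US format   :", us_format)     # 04-09-2023
	    print("India format:", india_format)  # 09-04-2023

	    # A localization check could assert that each build shows the expected pattern:
	    assert us_format == "04-09-2023"
	    assert india_format == "09-04-2023"
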
15. Static & Dynamic testing :
Static Testing :
	Static testing is a verification process used to test the application without
executing the code of the application.
	It is a cost-effective process.
	To avoid errors, we perform static testing in the initial stage of
development, because it is easier to identify the sources of errors there,
and they can be fixed easily.
	In other words, static testing can be done manually or with
the help of tools to improve the quality of the application
by finding errors at an early stage of development; this is also
called the verification process.
We can do some of the following important activities while performing static
testing:
Business requirement review
Design review
Code walkthroughs
The test documentation review
Static testing also helps us to identify those errors which may not be
found by Dynamic Testing.
We can test the various testing activities in Static Testing, which are as follows:
BRD [Business Requirements Document]
Functional or system Requirements
Unit Use Cases
Prototype
Prototype Specification Document
Test Data
DB Fields Dictionary Spreadsheet
Documentation/Training Guides/ User Manual
Test Cases/Test Plan Strategy Document
Traceability Matrix Document
Performance Test Scripts/Automation

Dynamic Testing :
	Dynamic testing is one of the most important parts of software testing; it
is used to analyse the code's dynamic behavior.
	Dynamic testing works with the software by giving input values and
verifying that the output is as expected,
executing specific test cases either manually or through an
automation process.
	Dynamic testing can only be done when the code is executed in a run-time
environment.
	It is a validation process where functional testing [unit, integration,
system, and user acceptance testing] and
non-functional testing [performance, usability, compatibility, recovery,
and security testing] are performed.
	Static testing is a verification process, whereas dynamic
testing is a validation process,
and together they help us deliver a cost-effective, quality software
product.

16. Acceptance Testing :


Acceptance testing is formal testing based on user requirements and functional
processing.
	It determines whether the software conforms to the specified requirements
and user requirements or not.
	It is conducted as a kind of black box testing where the required number of
users are involved in testing the acceptance level of the system.
User acceptance testing (UAT) is a type of testing which is done by the customer
before accepting the final product.
	Generally, UAT is done by the customer (domain expert) for their
satisfaction,
to check whether the application works according to the given business
scenarios and real-time scenarios.
In this, we concentrate only on those features and scenarios which are regularly
used by the customer,
the main user scenarios for the business, or the scenarios used daily by
the end-user or the customer.
Even though the software has passed through three testing levels (unit testing,
integration testing, system testing),
there may still be some minor errors which can only be identified when the system
is used by the end user in the actual scenario.

17. Alpha Testing :


Alpha testing is conducted within the organization and performed by a representative
group of end-users at the developer's site, and
sometimes by an independent team of testers.
Alpha testing is simulated or real operational testing at an in-house site.
It comes after unit testing, integration testing, etc.,
once all in-house testing has been executed.
It can be white box or black box testing, depending on the requirements; a
particular lab environment and a simulation of the actual environment are
required for this testing.
Advantages of Alpha Testing :
	One of the benefits of alpha testing is that it reduces the delivery time of
the project.
	It provides a complete test plan and test cases.
	It frees team members for other projects.
	Every piece of feedback helps to improve software quality.
	It provides better insight into the software's reliability and
accountability.
18. Beta Testing :
If we compare the various activities performed to develop ideal software, we will
find that software testing is as important as the software
development process itself. Testing is one of those activities which ensure the
accuracy of the development process while validating
its functionality and performance.
Features of Beta Testing :
	Beta testing is performed in a real environment at the user's site. Beta
testing helps in providing the actual picture of the quality.
	Testing is performed by clients, stakeholders, and end-users.
	Beta testing is always done after alpha testing and before releasing the
final product into the market.
	Beta testing is black-box testing.
	Beta testing is performed in the absence of the testing team and in the
presence of real users.
	Beta testing is generally done for software products such as utilities,
operating systems, and applications.

19. UI Vs Usability Testing :


GUI Testing :
	It is the technique for making sure that the graphical user interface of a
specific application behaves appropriately.
	GUI testing typically evaluates design elements such as layout, colors,
fonts, font sizes, labels, text boxes,
text formatting, captions, buttons, lists, icons, hyperlinks, and
content.
Usability Testing :
	It is the practice of checking how easy a design is to use with a group
of representative users.
	It generally involves observing users as they try to complete tasks
and can be performed for different kinds of designs,
from user interfaces to physical products.

1. UI : It is used to test the front-end part of any application.


Usability : It measures the extent of the friendliness of the User
Interface part and overall functioning of the software.
2. UI : It focuses on the look and feel of an application.
Usability : It focuses on the friendliness of an application.
3. UI : It assures the look and feel of an application by matching with
standards and user requirements.
Usability : It assures that the user should be comfortable to use any
app by making its design easy.
	4. UI : In this, the application should look good, whether or not it is
easy to use.
		Usability : In this, the application should be easy to use, whether or
not its appearance is up to the mark.
5. UI : The testing is performed on various platforms just to make sure
its appearance will be perfect.
Usability : It tests the app to check its difficulty level.
6. UI : In this type of testing, we do not test the functionality of an
app.
		Usability : In this type of testing, we test the functionality of an
app to check whether it is user-friendly or not.
7. UI : It concerns the interface part of the software.
Usability : It focuses on the product quality of software.

20. Testing type sequence


	20.1 Functional Testing ( Unit Testing ) - Unit testing involves the testing
of each unit or individual component of the software application.
		It is the first level of functional testing. The aim of unit
testing is to validate the behavior and performance of unit components.
		A unit is a single testable part of a software system, tested during
the development phase of the application software.
		The purpose of unit testing is to test the correctness of isolated
code. A unit component is an individual function or piece of code of the
application.
		A white box testing approach is used for unit testing, and it is
usually done by the developers.
		Whenever the application is ready and given to the test engineer,
he/she will start checking every component of the module or
every module of the application independently, one by one; this
process is known as unit testing or component testing.
		Unit testing helps testers and developers to understand the code base,
which enables them to change defect-causing code quickly.
		Unit testing helps with documentation.
		Unit testing fixes defects very early in the development phase, which
is why fewer defects tend to occur
in the subsequent testing levels.
		It helps with code reusability by migrating code and test cases.
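
		To make the idea concrete, here is a minimal, hypothetical unit test
in pytest style for a single isolated function; the function and values are
illustrative only, not taken from the original notes.

		    # Unit under test: a single, isolated function.
		    def calculate_discount(price: float, percent: float) -> float:
		        # Returns the price after applying a percentage discount.
		        return round(price - price * percent / 100, 2)

		    # Unit tests: each test checks the unit in isolation.
		    def test_discount_applied():
		        assert calculate_discount(200.0, 10) == 180.0

		    def test_zero_discount_returns_original_price():
		        assert calculate_discount(99.99, 0) == 99.99

		    def test_full_discount_returns_zero():
		        assert calculate_discount(50.0, 100) == 0.0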
20.2 Integration Testing -
		Integration testing is the second level of the software testing
process, which comes after unit testing.
In this testing, units or individual components of the software
are tested in a group.
The focus of the integration testing level is to expose defects
at the time of interaction between integrated components or units.
Unit testing uses modules for testing purpose, and these modules are
combined and tested in integration testing.
The Software is developed with a number of software modules that
are coded by different coders or programmers.
The goal of integration testing is to check the correctness of
communication among all the modules.

		We go for integration testing only after functional testing has been
completed on each module of the application.
We always do integration testing by picking module by module so that a
proper sequence is followed,
and also we don't miss out on any integration scenarios.
First, determine the test case strategy through which executable test
cases can be prepared according to test data.
Examine the structure and architecture of the application and identify
the crucial modules to test them first and
also identify all possible scenarios.
Design test cases to verify each interface in detail.
Choose input data for test case execution. Input data plays a
significant role in testing.
If we find any bugs then communicate the bug reports to developers and
fix defects and retest.
Perform positive and negative integration testing.
		Here positive testing implies that if the total balance is Rs 15,000
and we transfer Rs 1,500, we check whether the amount transfer works fine.
		If it does, then the test is a pass.
		Negative testing means that if the total balance is Rs 15,000 and we
try to transfer Rs 20,000, we check whether the transfer is blocked;
if the transfer does not occur, the test is a pass. If it happens, then
there is a bug in the code, and we will send it to the development team
for fixing that bug.
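
		The sketch below is only an illustration (the Account class and its
methods are assumptions, not from the original notes); it shows how the positive
and negative transfer checks described above could be written as
integration-style tests in pytest, exercising the balance and transfer pieces
together.

		    import pytest

		    # Assumed, simplified modules under integration.
		    class Account:
		        def __init__(self, balance: int):
		            self.balance = balance

		    def transfer(source: Account, target: Account, amount: int) -> None:
		        # Transfer is rejected when funds are insufficient.
		        if amount > source.balance:
		            raise ValueError("insufficient funds")
		        source.balance -= amount
		        target.balance += amount

		    def test_transfer_within_balance_succeeds():
		        # Positive case: Rs 1,500 out of Rs 15,000 should go through.
		        src, dst = Account(15_000), Account(0)
		        transfer(src, dst, 1_500)
		        assert src.balance == 13_500
		        assert dst.balance == 1_500

		    def test_transfer_above_balance_is_rejected():
		        # Negative case: Rs 20,000 out of Rs 15,000 must be blocked.
		        src, dst = Account(15_000), Account(0)
		        with pytest.raises(ValueError):
		            transfer(src, dst, 20_000)
		        assert src.balance == 15_000   # balance unchanged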
Integration Testing Technique :
Black Box Testing :
State Transition technique
Decision Table Technique
Boundary Value Analysis
All-pairs Testing
Cause and Effect Graph
Equivalence Partitioning
Error Guessing
White Box Testing :
Data flow testing
Control Flow Testing
Branch Coverage Testing
Decision Coverage Testing
Integration Types :
Incremental Approach -
			Top Down Approach - The top-down testing strategy deals with the
process in which higher-level modules are tested with lower-level modules
until testing of all the modules is successfully completed.
Major design flaws can be detected and fixed early because critical
modules are tested first. In this type of method, we add
the modules incrementally, one by one, and check the data flow in the same order.
In the top-down approach, we ensure that the
module we are adding is the child of the previous one, like Child C is a child of
Child B,
and so on.
			Bottom Up Approach - The bottom-up testing strategy deals with
the process in which lower-level modules are tested with higher-level modules
until testing of all the modules is successfully completed.
Top-level critical modules are tested last, so defects in them are found late.
In other words, we add the modules
from the bottom to the top and check the data flow in the same order.
In the bottom-up method, we ensure that the modules we
are adding are the parent of the previous one.
		Non Incremental Approach -
			We go for this method when the data flow is very complex
and when it is difficult to identify which module is a parent and which is a child.
			In such a case, we create the data in any one module,
check all the other existing modules, and verify that the data is present.
			Hence, it is also known as the Big Bang method.
20.3 System Testing -
System Testing includes testing of a fully integrated software system.
Generally, a computer system is made with the
integration of software (any software is only a single element of a
computer system).
The software is developed in units and then interfaced with other
software and hardware to create a complete computer system.
		In other words, a computer system consists of a group of software
components that perform various tasks, but software alone cannot perform the task;
for that, the software must be interfaced with compatible hardware.
System testing is a series of different types of tests whose purpose is to exercise
and examine the full working of an integrated software computer
system against the requirements.
To check the end-to-end flow of an application or the software as a
user is known as System testing.
In this, we navigate (go through) all the necessary modules of an
application and check if the end features or the end business works fine,
and test the product as a whole system.
It is end-to-end testing where the testing environment is similar to
the production environment.
		There are four levels of software testing: unit testing, integration
testing, system testing, and acceptance testing,
all used for testing purposes. Unit testing is used to test a
single unit of software; integration testing is used to test a group of units of
the software;
system testing is used to test the whole system; and acceptance testing is
used to test the acceptability of the system against business requirements.
		Here we are discussing system testing, which is the third level of
testing.
