Introduction to Software Testing
Learning Objectives :
Introduction to Software Testing
Why is Testing necessary?
Participants in Testing
Best Practices in Testing
Skills required for Testing
Software Testing
What is Software Testing?
In software testing it is not sufficient to demonstrate that the software is doing what it is supposed to do.
It is more important to demonstrate that the software is not doing what it is not supposed to do.
Software testing can also be stated as the process of validating
and verifying that a software program/application/product:
Works as expected according to the business and technical
requirements.
Works consistently and predictably.
Process of finding defects i.e. variance between Expected
results and Actual results.
Process of executing a software program application with intent
of finding errors.
Why – What – How – Who
Why to test?
What to test?
How often to test?
Who tests?
Why is Testing necessary?
Software Testing is necessary to make sure the product or
application is defect free, as per customer specifications.
Software Testing identifies faults whose removal increases
the software quality and increases the software's reliability.
The more complex the program, the more testing effort is required: testing effort is directly proportional to the complexity of the program.
Software should be:
Error Free
Efficient
Secure
Software testing is important because software that is not tested properly may cause mission failure, degrade operational performance, and be unreliable.
Start Testing – When?
Testing starts right from the requirements phase and
continues till the release time.
As soon as possible.
Objective of starting early:
Requirements related defects caught later in the SDLC result
in higher cost to fix the defect.
Stop Testing – When?
Participants in Testing
Customer
User – the end user, who uses the application
Developer
Tester
Auditor
Common problems in the software
development process
Poor requirements – unclear, incomplete, too general.
Inadequate testing – no one will know whether or not the program is any good until the customer complains or the system crashes.
Features – requests to pile on new features after development is underway; extremely common.
Miscommunication – if developers don't know what is needed, or customers have erroneous expectations, problems are guaranteed.
Best Practices in Testing
Test Planning
Code Testability
Test early, test often.
Measure test costs, coverage, results and effectiveness.
Negative testing: needs a "kick the wall" approach.
Skills required for Testing
Strong desire for quality and attention to detail.
Ability to understand the point of view of the customer.
Tact and diplomacy for maintaining the cooperative
relationship with developers.
Ability to communicate with both technical and non-technical
people.
Judgment skills are needed to assess high risk areas of an
application on which to focus testing efforts when time is limited.
‘Test to break’ attitude.
Misconceptions
Anyone can test software: No particular skill is required.
Testers can test the quality at the end of the project!
Finding defects means blaming the developers.
Interview Questions
What is Software Testing?
Justify the importance of Software testing.
What are the skills required for Testing?
What are the objectives of Early Testing?
Testing Principles
1. Testing is context dependent
2. Exhaustive testing is impossible
3. Early testing
4. Defect clustering
5. Pesticide paradox
6. Testing shows presence of defects
7. Absence of error fallacy
Testing is context dependent
Testing is done differently in different contexts.
For example, safety-critical software is tested differently from an e-commerce site.
Exhaustive testing is impossible
Testing everything (all combinations of inputs and
preconditions) is not feasible except for trivial cases. Instead of
exhaustive testing we use risks and priorities to focus testing
efforts.
Early testing
Testing activities should start as early as possible in the
software or system development life cycle and should be
focused on defined objectives.
Defect clustering
A small number of modules contain most of the defects
discovered during pre-release testing or show the most
operational failures.
80/20 Rule.
Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs.
To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised to potentially find more defects.
Testing shows presence of defects
Testing can show that defects are present, but cannot prove
that there are no defects.
Absence of error fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the user's needs and expectations.
Interview Questions
What are the principles of testing?
What are main benefits of designing test cases early?
What is 80/20 Rule?
Explain the principle "Exhaustive Testing"?
What steps can be taken to overcome the pesticide paradox?
Explain the principle "absence of error fallacy".
Learning Objectives:
Introduction to Software Process
PDCA Cycle
Phases in SDLC
SDLC Models
Waterfall Model
Incremental Model
Spiral Model
Agile Model
Software Process
Process –Projects –Products
A software process specifies a method of developing software.
A software project on the other hand, is a development project
in which a software process is used.
A Software product is the final outcome of a software project.
PDCA Cycle
PDCA is a successive cycle which starts off small, to test potential effects on processes, and then gradually leads to larger and more targeted change.
PLAN
Establish the objectives and processes necessary to deliver
results in accordance with the expected output.
DO
Implement the new processes, often on a small scale if possible, to test their effects.
CHECK
Measure the new processes and compare the results against
the expected results to ascertain any differences.
ACT
Analyze the differences to determine their cause, and take corrective action; this may involve repeating one or more of the P-D-C-A steps.
Software Development Life Cycle (SDLC)
The Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop and test high-quality software.
The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
The SDLC consists of a detailed plan describing how to
develop, maintain, replace and alter or enhance specific
software.
The life cycle defines a methodology for improving the quality
of software and the overall development process.
Stage 1: Requirement Gathering & Analysis
Business requirements are gathered in this phase.
It is performed by the senior members of the team, with inputs from the customer, the sales department, market surveys and domain experts in the industry.
Meetings with managers, stakeholders and users are held in order to determine the requirements.
After gathering, these requirements are analyzed for their validity and the feasibility of incorporating them into the system.
Stage 2 : Design
HLD – broader view
LLD – detailed view
The outputs gathered in requirement phase are the inputs of
the design phase.
Based on the requirements specified in the SRS, usually more than one design approach for the product architecture is proposed and documented in a DDS (Design Document Specification).
The DDS is reviewed by all the important stakeholders.
Stage 3 : Implementation & Coding
In this stage of SDLC the actual development starts and the
product is built.
The programming code is generated as per DDS during this
stage.
Different high-level programming languages, such as C, C++, Pascal, Java and PHP, are used for coding.
The programming language is chosen with respect to the type
of software being developed.
Stage 4 : Integration & Testing
After the code is developed individually, it is integrated into one project/product.
It is then tested against the requirements to make sure that the product actually solves the needs addressed and gathered during the requirements phase.
In this stage product defects are reported, tracked, fixed and retested, until the product reaches the quality standards defined in the SRS.
Stage 5 : Deployment
After it passes the testing phase, the product is
delivered/deployed to the customer for their use.
As soon as it is delivered, the customers first perform beta
testing on the product.
If any changes are required, or if any bugs are caught, the customers report them to the engineering team.
Once those changes are made or the bugs are fixed then the
final deployment happens.
Stage 6 : Maintenance
The software is maintained by updating the code in a timely manner, according to the changes taking place in the user's environment or in technology.
Maintenance phase may face challenges from hidden bugs
and real-world unidentified problems.
SDLC Models
Following are the various SDLC Models :
Waterfall Model
Incremental Model
Spiral Model
Agile Model
Waterfall Model
Phase – Output
Requirements Analysis – Software Requirements Specification (SRS), Use Cases
Design – Design Document, Design Classes
Implementation – Code
Test – Test Report, Change Requests
Advantages of Waterfall Model
This model is simple and easy to understand and use.
It is easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
In this model phases are processed and completed one at a
time. Phases do not overlap.
The waterfall model works well for smaller projects where the requirements are very well understood.
Disadvantages of Waterfall Model
Once an application is in the testing stage, it is very difficult to go
back and change something that was not well-thought out in the
concept stage.
No working software is produced until late during the life cycle.
High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for the projects where requirements are at a moderate
to high risk of changing.
Incremental Model
Advantages of Incremental Model
Generates working software quickly and early during the
software life cycle.
This model is more flexible –less costly to change scope and
requirements.
It is easier to test and debug during a smaller iteration.
In this model the customer can respond to each build.
Lowers initial delivery cost.
Easier to manage risk, because risky pieces are identified and handled during their own iterations.
Disadvantages of Incremental Model
Needs good planning and design.
Needs a clear and complete definition of the whole system
before it can be broken down and built incrementally.
Total cost is higher than waterfall.
Spiral Model
Spiral Model Strengths
High amount of risk analysis.
Good for large and mission critical projects.
Software is produced early in the SDLC.
Spiral Model Weaknesses
Can be costly model to use.
Risk analysis requires highly specific expertise.
Project success is highly dependent on the risk analysis
phase.
Does not work well for smaller projects.
When to use a spiral model?
When creation of a prototype is appropriate
When cost and risk evaluation is important
For medium- to high-risk projects
When users are unsure of their requirements
When requirements are complex
When significant changes are expected
Agile Model
Backlog – product backlog and sprint backlog
Sprint – an iteration
Scrum – the framework
Sprint planning – done at the start, to plan each sprint
Sprint retrospective – done at the end of the sprint, to check planned vs. actual
Advantages of Agile model
People and interactions are emphasized rather than process
and tools.
Customers, developers and testers constantly interact with
each other.
Working software is delivered frequently (weeks rather than
months).
Continuous attention to technical excellence and good
design.
Regular adaptation to changing circumstances. Even late
changes in requirements are welcomed.
Disadvantages of Agile Model
In case of some software deliverables, especially large ones, it is difficult to assess the effort required at the beginning of the software development life cycle.
There is a lack of emphasis on necessary design and documentation.
The project can easily get taken off track if the customer representative is not clear about what final outcome they want.
Only senior programmers are capable of taking the kind of
decisions required during the development process.
Interview Questions
What are different phases in SDLC?
What are the strengths of the waterfall model?
When to use spiral model?
What are advantages of Incremental Model?
What is the weakness of Agile Model?
What is Scrum Methodology used in Agile Software Development ?
What is the difference between BRS and SRS ?
CHAPTER-4
V-Shaped Model and CMMI
Learning Objectives :
Steps in the V-Shaped Model
The V-Shaped Model
V-Shaped Strengths
V-Shaped Weaknesses
Benefits of V-Model
The Capability Maturity Model Integration (CMMI)
The V-Shaped Model
The V-Model evolved from the waterfall model.
Each phase must be completed before the next phase begins.
Instead of moving down in a linear way, the process steps
are bent upwards after the coding phase, to form the typical
V shape.
Testing is emphasized in this model more than in the waterfall
model.
It is a structured approach to testing.
Brings high quality into the development of our products.
V-Shaped Strengths
Works well for small projects where requirements are
easily understood.
Each phase has specific deliverables.
Simple and Easy to use.
V-Shaped Weaknesses
Very rigid like the waterfall model.
Software is developed during the developing phase, so no
early prototypes of the software are produced.
Does not easily handle dynamic changes in
requirements.
Does not contain risk analysis activities
Benefits of V-Model
Faults are prevented and it stops fault multiplication.
Avoids the downward flow of defects.
Lower defect Resolution cost due to earlier detection.
Improved quality and reliability.
Reduction in the amount of Re-work.
Validation at each level of stage containment.
Allows testers to be active in the project early in the
project’s lifecycle. They develop critical knowledge about
the system.
CMMI (Capability Maturity Model Integration)
The Capability Maturity Model Integration (CMMI) is a methodology used to develop and refine an organization's software development process.
The CMMI is similar to ISO 9001, one of the ISO 9000 series
of standards specified by the International Organization for
Standardization (ISO).
To develop software using the SDLC process, CMMI standards have to be followed.
CMMI defines 5 levels of process maturity, based on certain Key Process Areas (KPAs).
Level 1 –Initial
--Initial (chaotic, ad hoc, individual heroics) -the starting point
for use of a new process.
--Quality is difficult to predict.
--Lowest Quality & Highest Risk
Level 2 –Repeatable
--Basic project management processes are established
to track cost, schedule and functionality.
--Repeat earlier successes on projects with similar
applications.
--Low Quality & High Risk
Level 3 –Defined
--The software process for both management and
engineering activities is documented, standardized and
integrated.
--All projects use a documented and approved version of the
organization’s process.
--Medium Quality & Medium Risk
Level 4 –Managed
--Detailed measures of the software process and product
quality are collected, understood and controlled.
--Management can identify ways to adjust and adapt the
process to particular projects without measurable losses
of quality.
--Higher Quality & Lower Risk
Level 5 –Optimizing
--Focus is on continually improving process performance
through both incremental and innovative technological
changes.
--Defects are minimized and products are delivered on time
and within the budget boundaries.
--Highest Quality & Lowest Risk
Interview Questions
Explain V-model architecture.
Explain validation phases in V-model.
Explain Advantages and Disadvantages of V Model.
What is CMMI and what is the advantage of implementing it
in an organization?
Explain the different process areas in CMMI.
CHAPTER-5:
Software Testing Life Cycle
The software testing life cycle, or STLC, refers to a comprehensive group of testing-related actions, specifying the details of every action along with the best time to perform each action.
Different organizations have different phases in the STLC; however, a generic STLC for the waterfall development model consists of the following phases.
Requirements Analysis
In this phase testers analyze the customer requirements.
It is very important to start testing activities from the requirements phase itself, because the cost of fixing a defect is much lower if it is found in the requirements phase rather than in later phases.
Test Planning
In this phase all the planning about testing is done, such as:
What needs to be tested
How the testing will be done
The test strategy to be followed
What the test environment will be
Which test methodologies will be followed
Hardware and software availability, resources, risks, etc.
Test Analysis
After the test planning phase is over, the test analysis phase starts. In this phase we need to dig deeper into the project and figure out what testing needs to be carried out in each SDLC phase.
Automation activities are also decided in this phase: whether automation needs to be done for the software product, how the automation will be done, how much time it will take, and which features need to be automated.
Test Design
In this phase, various black-box and white-box test design techniques are used to design the test cases; testers start writing test cases by following those design techniques.
If automation testing needs to be done, then automation scripts also need to be written in this phase.
Test construction and verification
In this phase testers prepare more test cases, keeping in mind positive and negative scenarios, end-user scenarios, etc.
The test plan document should also be finalized and verified
by reviewers.
Test Execution and Bug Reporting
The test cases are executed and defects are reported in bug
tracking tool.
Testing is an iterative process: if a defect is found and fixed, testing needs to be repeated after every defect fix.
Final Testing and implementation
In this phase the final testing is done for the software; non-functional testing, such as stress, load and performance testing, is performed in this phase.
Final test execution reports and documents are prepared in
this phase.
Post Implementation
Process review meetings are held and lessons learned are documented.
The test plan document should also be finalized and verified
by reviewers.
A document is prepared to cope with similar problems in future releases.
Software Testing Life Cycle
PHASES OF STLC
Test Planning – documentation done in this phase; it is created by the test manager.
Test Design – HLD: broader view, test scenarios; LLD: detailed view, test cases / test scripts.
Test Design Review – test cases or test scripts are checked by the manager or team lead.
Test Execution – manual (without any tool) or automated, with the help of tools (Selenium, JMeter, Cucumber, SoapUI, TestLink, etc.).
Defect Reporting – the tester reports the defects to the developer (Jira, Mantis, Bugzilla).
Post Implementation
Interview Questions
What is STLC?
Explain phases of STLC?
Difference between STLC and SDLC?
CHAPTER-6: Verification
Learning Objectives :
V&V Model
This model is called the Verification and Validation model.
The V&V model is the classic software development and testing model.
For each phase, the subsequent phase becomes the verification (QA) phase, and the corresponding testing phase in the other arm of the V becomes the validation (QC) phase.
Testing of the product is planned in parallel with the corresponding phase of development.
Verification
A disciplined approach to evaluate whether a software product fulfills the requirements or conditions imposed on it.
Are we doing the job right?
Also called as static testing.
Done by systematically reading the contents of a software
product with the intention of detecting defects.
Helps in identifying not only the defects but also their locations.
Build
An executable file of the application, released by the development team.
An integrated application with all modules released by the
development team is called as a build.
Types of Verification/Methods of verification
Walkthrough – a step-by-step presentation by the author of the document, in order to gather information and to establish a common understanding of its content.
Inspection – a type of peer review that relies on visual examination of documents to detect defects. This is the most formal review technique and is therefore always based on a documented procedure.
Technical Review – an evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements.
Audits:
Internal: Done by the organization
External: Done by people external to the organization to
check the standards and procedures of project
Walkthrough
Meeting led by author
Open-ended sessions
To explain (knowledge transfer) and evaluate the contents
of the document
To establish a common understanding of the document
The meeting is led by the authors; often a separate scribe
is present
A walkthrough is especially useful for higher-level
documents, such as requirement specifications and
architectural documents.
Inspection
Led by a trained moderator (not the author).
Helps the author improve the quality of the document under inspection.
A formal process, based on rules and checklists.
Removes defects efficiently, as early as possible.
Pre-meeting preparation.
Improves product quality by producing documents with a higher level of quality.
Formal follow-up process.
Learn from defects found, and improve processes in order to prevent similar defects.
Technical Review
It is often performed as a peer review without management participation.
Ideally it is led by a trained moderator, but possibly also by a technical expert.
A separate preparation is carried out, during which the product is examined and defects are found.
Benefits of Verification
Include early defect detection and correction.
Development productivity improvements.
Reduced development timescales.
Reduced testing time and cost.
Fewer defects and improved communication.
CHAPTER-7:
Validation
Learning Objectives:
Introduction of Validation.
1. A disciplined approach to evaluate whether the final product fulfills its specific intended use.
2. Are we doing the right job?
3. Also called dynamic testing.
4. Done by systematically testing a software product with the intention of finding defects.
5. Helps in identifying the presence of defects, not their location.
Levels of Testing
Unit Testing
Unit – the smallest testable piece of software.
Tests the functionality of units.
Typically done by the developers and not by testers.
It is typically used to verify control flow, data flow and memory-leak problems.
Integration Testing
Integration is a process of combining and testing multiple components together.
Starts at module level, when various modules are integrated with each other to form a system.
Concentrates on the interfaces between components.
Focuses on the design and construction of the software architecture.
Approaches:
Bottom Up
The process of testing the lowest layers of the software first is called the bottom-up approach.
The program is tested from the bottom to the top.
In this approach, programmers use a temporary program instead of the main program, which is under construction.
The temporary program is called a "driver" or "calling program".
Top Down
The process of testing the topmost layers of the software first is called the top-down approach.
Programmers use temporary programs called "stubs" instead of sub-programs, which are under construction.
The other name for a stub is "called program".
A stub returns control to the main program.
Difference between STUB & DRIVER
A stub is a temporary called program that stands in for a module not yet built (used in top-down testing), while a driver is a temporary calling program that invokes the module under test (used in bottom-up testing).
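A rough Java sketch of both ideas (the Billing and Discount module names are invented for this illustration, not from the course material):

    // Billing is ready, Discount is still under construction, so a stub
    // stands in for it; a driver then calls Billing to test it.
    class DiscountStub {                 // stub = temporary "called program"
        static double getDiscount(String customerId) {
            return 0.10;                 // canned answer instead of real logic
        }
    }
    class Billing {                      // module under test
        static double netAmount(String customerId, double gross) {
            return gross * (1 - DiscountStub.getDiscount(customerId));
        }
    }
    public class BillingDriver {         // driver = temporary "calling program"
        public static void main(String[] args) {
            // drives the module under test with sample input
            System.out.println(Billing.netAmount("C001", 200.0)); // expected: 180.0
        }
    }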
Hybrid Approach (Critical Part First)
Also known as the "sandwich approach", this is a combination of the top-down and bottom-up approaches.
Both top-down and bottom-up testing are started simultaneously, and testing is built up from both sides.
It needs a big team.
Big Bang approach
It is also known as the "system approach".
Big bang approach is the simplest integration testing
approach:
All the modules are simply put together and tested.
This technique is used only for very small systems.
System Testing
System testing is the testing of a finally integrated product
for compliance against user requirements.
After development of all required modules, the development
team releases a software build to be tested on the System
Testing Environment.
System testing is classified into three levels:
Usability Testing
Functional Testing (Black Box Testing Techniques)
Non Functional Testing
Usability Testing
Also called accessibility testing.
Checks the ease of use for the user, and how easy it is for the user to understand the application and its process flows.
This Usability testing consists of two Sub-Techniques:
a) User Interface Testing – the look and feel of the application: attractiveness and short, easy navigation.
b) Manual Support Testing – based on the online help document / user manual.
Functional Testing (Black Box Testing Techniques)
1. Functionality testing
Testing team concentrates on customer requirements in
terms of functionality.
Concentrating on requirements correctness and
completeness.
2. GUI coverage or behavioural coverage (Graphical User Interface)
(valid changes in the properties of objects and windows in our application build).
We check the properties of the objects available on the form, e.g. enabled, disabled, on, off, etc.
3. Error handling coverage
(the prevention of wrong operations with meaningful error messages, like displaying a message before closing a file without saving it).
4. Input domain coverage
(the validity of input values in terms of size and type, like giving alphabets to an age field).
5. Manipulations coverage
(the correctness of outputs or outcomes).
6. Order of functionalities
(the existence of functionality w.r.t. customer requirements).
7. Back-end coverage
(the impact of a front-end screen operation on the back-end table content in the corresponding functionality).
Smoke Testing
Also called basic functional testing.
Checks the testability of the software.
A shallow and wide approach to testing.
Each major function of the software is tested without going into finer details.
Sanity Testing
Also called narrow regression testing.
Checks the behaviour of the software.
A narrow and deep approach to testing.
One or a few parts of the system are tested in depth.
Smoke Testing Vs Sanity Testing
Smoke Testing | Sanity Testing
Cursory testing | In-depth testing
Wide and shallow | Narrow and deep
Subset of functional testing | Subset of regression testing
Test cases are scripted | Test cases are not scripted
Can be automated | Cannot be automated
Non-Functional Testing
The testing team concentrates on the characteristics of the software.
1. Recovery Testing
It is also called reliability testing.
Tests how well the system recovers from crashes, hardware failures or other sudden problems.
With the help of backup and recovery procedures, the application can go from an abnormal to a normal state.
Example :
While an application is receiving data from a network, unplug
the connecting cable. After some time, plug the cable back in
and analyze the application's ability to continue receiving data
from the point at which the network connection was
terminated.
Compatibility Testing
Also known as portability testing.
Validates whether the application build runs on the customer's expected platform or not.
Compatibility testing is done for things like Operating
Systems, Compilers, Browsers & Other system software.
Configuration Testing
Also known as "hardware compatibility testing".
Validates whether our software build supports different technology devices or not.
E.g. different types of printers, disk drives, different network technologies, mobile devices, etc.
Intersystem Testing
It is also known as "end-to-end" testing.
Testing of the interface between two or more systems.
Determines whether parameters and data are correctly passed between applications.
E.g. in a shopping mall, the billing system and the barcode reader are interconnected with each other.
Installation Testing
Performance Testing
Performance Testing is the process of determining the speed
or effectiveness of a computer, network, software program or
device.
1. Load Testing – tests the performance of an application by loading the system with the maximum number of users at the same time. E.g. websites such as Yahoo Mail or Gmail.
2. Stress Testing – checks how the system behaves under extremes such as insufficient memory, inadequate hardware, etc.
Data Volume Testing
Finds weaknesses in the system with respect to its handling of large amounts of data during short time periods.
Parallel Testing
It is also known as comparative or competitive testing.
Comparison of two different systems (old version vs. new version).
Compares with the competitive software in the market to estimate competitiveness.
Applicable to software products only.
Security Testing
It is also known as penetration testing.
During this, the testing team validates:
Authorization: access for valid users, and denial for invalid users.
Access control: giving valid users access permissions to specific services, such as features or functionalities in the software.
Encryption/Decryption: denying third parties access to the system; code conversion takes place between the client process and the server process.
Availability Testing
Availability testing is running an application for a planned period and collecting failure events with repair times.
It is conducted to check both the reliability (finding defects and reducing the number of failures) and the availability (measuring and minimizing the actual repair time) of an application.
Internationalization Testing
Checks the compatibility of an application to all possible languages.
Internationalization is the process of designing a software application so
that it can be adapted to various languages and regions without engineering
changes.
Globalization testing is the testing technique that ensures the compatibility of an application with all possible languages.
Localization Testing
Localization is the process of adapting a globalized
application to a particular culture/locale.
Localization testing is the testing process that validates whether the application is capable enough to be used in a particular location or country.
In localization testing, testing is carried out to check the quality of the product for a particular locale/culture.
For example, consider a zip code field in a sign-up form: for a locale such as India, it should allow only numbers in the input field, whereas for the UK it should allow alphanumeric characters.
Progression Testing and Retesting
Executing the test cases for the first time is called progression testing.
Re-executing all the failed test cases, to check whether fixes made by the development team really work, is called retesting.
Ensuring that a bug is fixed without any side effects is called regression testing:
the re-execution of selected test cases on a modified build, to estimate the completeness and correctness of the application without any ripple effects due to bug fixes.
User Acceptance Testing
Both testers and developers are involved.
After completion of system testing, the project management concentrates on UAT, to collect feedback from real customers or model customers.
There are 2 ways to conduct UAT:
Alpha Testing – performed at the development site, with the customer involved.
Beta Testing – performed at the customer's site by the end users.
Ad-hoc Testing or informal Testing
In general, every testing team conducts planned testing, but the testing team sometimes adopts informal testing due to challenges or risks.
E.g. lack of time, lack of resources, lack of team size, lack of skill, etc.
There are different ways of Ad-hoc testing.
Monkey Testing
Due to lack of time, the testing team concentrates on some of the main activities in the software build for testing.
This style of testing is known as "monkey testing", "chimpanzee testing" or "gorilla testing".
Buddy Testing
Due to lack of time, the management groups programmers and testers as "buddies".
Every buddy group consists of programmers and testers.
E.g. 1:1, 2:1 or 3:1 (preferable).
Exploratory Testing
Due to lack of proper documentation of the software being built, the test engineers depend on past experience, discuss with others, browse the Internet, operate similar projects, and contact customer-side people if possible.
This style of testing is called "exploratory testing".
Pair Testing
Due to lack of knowledge of the project domain, the management pairs a senior tester with a junior programmer, and they conduct testing together; this is called pair testing.
Defect Seeding
To estimate the efficiency of test engineers, the
programmers add some bugs to the build. This task is called
defect seeding / Bebugging.
It is a method in which developers plant defects in the AUT
to test the efficiency of the testing process.
It is also used as a method to determine an approximate
count of “the number of defects remaining in the application”.
Example of defect seeding and practical usage: let us take an assumption –
Seeded defects in the AUT = 100
Discovered seeded defects = 75
Actual defects = x
Discovered defects = 600
From the above statistics, we can infer that the testers are able to find 75 out of 100 seeded defects, i.e. 75% of the defects present in the AUT.
The number of defects remaining in the application can be calculated as: actual defects − discovered defects.
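Working through the assumed numbers above (extrapolating the 75% seeded-defect discovery rate to the real defects):

    Discovery rate = 75 / 100 = 75%
    Estimated actual defects: x ≈ 600 / 0.75 = 800
    Estimated defects remaining ≈ 800 − 600 = 200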
Interview Questions
What is the key difference between verification and
validation?
What are types of validation?
What are the different levels of testing?
Explain Sanity and Smoke testing?
What is the difference between 'internal level' and 'Field
level' testing?
What is defect seeding?
What is Retesting and Regression testing?
What is the difference between sanity testing and
regression testing?
What is exploratory testing?
CHAPTER-8
Introduction to Performance Testing
Learning Objectives :
Performance Testing Concepts
Pre-requisites of Performance Testing
Performance Testing Types
Common Performance problems
Performance Testing process
Performance Testing Concepts –
Performance Testing involves testing software applications to
ensure they will perform well under their expected workload.
The goal of performance testing is not to find bugs but to
eliminate performance bottlenecks.
Performance testing is done to provide stakeholders with
information about their application regarding speed, stability
and scalability.
Performance testing uncovers what needs to be improved
before the product goes to market.
Performance testing will determine whether or not the software meets speed, scalability and stability requirements under expected workloads.
Pre-requisites of Performance Testing
The base prerequisites for performance testing include:
Understanding the application under test.
Identifying performance requirements, such as response time, and normal and peak load.
Common traffic patterns and expected or required uptime.
Performance Test Types
Load testing – checks the application's ability to perform under anticipated user loads. The objective is to identify performance bottlenecks before the software application goes live.
Stress testing – the objective is to identify the breaking point of an application.
Endurance (soak) testing – done to make sure the software can handle the expected load over a long period of time.
Load Testing
Load testing is meant to test the system by constantly and steadily increasing the load on the system until it reaches the threshold limit.
Load testing tests the following:
The number of users using the website at a given time.
Peak loads, and how the system behaves under them.
Large amounts of data accessed by users.
Load Testing
Tests the performance of an application by loading the system with the maximum number of users at the same time. E.g. websites such as Yahoo Mail or Gmail.
Load (or scale) means the number of concurrent users (at the same time) who are operating the software.
The execution of our software build under the customer's expected configuration and the customer's expected load, to estimate the performance, is LOAD TESTING. (The inputs are the customer's expected configuration and load; the output is performance.)
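In practice this load is generated with dedicated tools such as JMeter (mentioned in the STLC chapter); purely to illustrate the idea, here is a minimal Java sketch. The user count and target URL are assumptions, not part of the course material.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.*;

    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            int users = 50; // assumed number of concurrent virtual users
            ExecutorService pool = Executors.newFixedThreadPool(users);
            for (int i = 0; i < users; i++) {
                pool.submit(() -> {
                    long start = System.nanoTime();
                    try {
                        // hypothetical target – replace with the system under test
                        HttpURLConnection c = (HttpURLConnection)
                                new URL("http://example.com/").openConnection();
                        int code = c.getResponseCode();
                        long ms = (System.nanoTime() - start) / 1_000_000;
                        System.out.println("status=" + code + " responseTimeMs=" + ms);
                    } catch (Exception e) {
                        System.out.println("request failed: " + e.getMessage());
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
        }
    }

Real tools add what this sketch lacks: ramp-up schedules, synchronization of users, and aggregated reporting.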
Stress Testing
Stress testing is a generic term used to describe the process of putting a system through exertion or stress.
Stress testing puts greater emphasis on robustness, availability and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
The goal of stress testing is to analyze post-crash reports to define the behavior of the application after failure.
Stress Testing
The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space), and to check the performance of memory, CPU, file handling, etc.
In successful stress testing, the system will come back to normality, along with all its components, after even the most terrible breakdown.
Examples of Stress Conditions
Excessive volume in terms of either users or data.
Resource reduction such as a disk drive failure.
Unexpected sequencing.
Unexpected outages/outage recovery.
Volume Testing
It is also known as storage testing or memory testing.
Finds weaknesses in the system with respect to its handling of large amounts of data during short time periods.
For example, this kind of testing ensures that the system will
process data across physical and logical boundaries such as
across servers and across disk partitions on one server.
Endurance(Soak) Testing
Endurance testing is a type of performance testing which is usually used to determine how long a system can sustain the continuous expected load.
The goal is to discover how the system behaves under
sustained use. That is, to ensure that the throughput and/or
response times after some long period of sustained activity
are as good or better than at the beginning of the test.
Common Performance Problems
Following are the limitations of performance testing when done manually:
Generating enough load to perform the testing.
Obtaining sufficient testing personnel (users) and host machines.
Synchronizing users.
Measuring test results.
Repeating tests after identifying and fixing bottlenecks.
Performance Testing Process
Identify your testing environment – understand the details of the hardware, software and network configurations used during testing before you begin the testing process. It will help testers create more efficient tests.
Identify the performance acceptance criteria – this includes goals and constraints for throughput, response times and resource allocation.
It is also necessary to identify project success criteria outside of these goals and constraints. Testers should be empowered to set performance criteria and goals.
Plan & design performance tests – determine how usage is likely to vary amongst end users, and identify key scenarios to test for all possible use cases.
It is necessary to simulate a variety of end users, plan performance test data, and outline what metrics will be gathered.
Configure the test environment – prepare the testing environment before execution.
Implement the test design – create the performance tests according to your test design.
Run the tests – execute and monitor the tests.
Interview Questions
Why Performance Testing is performed?
What are the tools used in performance testing?
What is performance tuning?
How to identify a memory leak?
What is throughput in performance testing?
What is concurrent user hits in load testing?
What is baseline testing and benchmark testing?
CHAPTER-9
Quality
Learning Objectives :
What is quality ?
Quality Views
Quality –Productivity
Software Quality
Quality Control (QC)
Quality Assurance (QA)
What is quality?
High levels of user satisfaction and low levels of defects, often associated with low complexity.
Quality is another name for consistently meeting customer requirements, cost, delivery schedule and services offered.
Quality Views
Customer’s view :
Delivering the right product
Satisfying the customers need
Meeting the customer’s expectations
Treating every customer in a proper way
Quality Views
Supplier’s view :
Doing the right thing
Doing it the right way
Doing it right the first time
Doing it on time
Any difference between the two views can cause problems.
Quality Means
Consistently meeting customer needs in terms of
Requirements
Cost
Delivery schedule
Service
Quality –Productivity
An increase in quality can directly lead to an increase in productivity.
Rework, repairs and complaints are key factors affecting quality and productivity.
There is a direct relationship between quality and the cost of the process and product.
Software quality includes activities related to both process and product.
Software Quality
SQA: Software Quality Assurance
SQC: Software Quality control
SQA: monitoring and measuring the strength of the development process is called SQA.
SQC: validation of the final product before release to the customer is called SQC.
What is Quality Control (QC)?
Attempts to test a product after it is built.
If the expected behaviour is not the same as the actual behaviour, fixes the product and rebuilds it.
Defect-detection and defect-correction oriented.
Examples: software testing at various levels.
Quality Assurance (QA)
Attempts defect prevention by concentrating on the process.
Defect prevention activity.
Examples : Reviews and Audits
Interview Questions
What is the difference between Quality Assurance, Quality
Control?
When the process of QA is started in a project?
What is the role of software Quality Assurance engineer?
What is QMS?
CHAPTER-10
Black Box and White Box Testing
Learning Objectives :
The Test Method
Black Box Testing
Equivalence partitioning
Boundary Value Analysis
Use case based testing
State Transition Testing
White Box testing techniques
Cyclomatic Complexity
Test Methods
Black Box Testing
Black box testing is testing that ignores the internal
mechanism of a system or component and focuses solely on
the outputs generated in response to selected inputs and
execution conditions.
(Also called behavioral testing, functional testing, data-driven testing, or I/O-driven testing.)
White Box Testing
White box testing is testing that takes into account the
internal mechanism of a system or component.
Also called structural testing, glass box testing, transparent-box testing and clear box testing.
Black Box Testing
An approach to testing where the program is considered as a "black box".
The system is considered as a black box where the outputs
of the test results are matched against user Specifications.
Inputs are given and outputs are compared against
specifications.
No implementation details of the code are considered.
Test cases are based on requirement specifications.
Test Case Design methods:
Equivalence Class partitioning method
Boundary value analysis
Decision Tables
State transition testing
Use case based testing
Error guessing
Equivalence partitioning
Equivalence partitioning is a black-box testing method.
Divide the input domain of a program into classes of data.
Derive test cases based on these partitions.
An equivalence class represents a set of valid or invalid states for an input condition.
E.g. a user name field given an invalid value such as "237578bhxb##$%", with the password field tested in the same way.
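A minimal Java sketch (the 18–60 age range and the class/method names are assumptions chosen for illustration): the input domain splits into one valid and two invalid classes, and one representative value per class is enough.

    // Partitions for an "age" input accepted in the range 18..60:
    //   invalid: < 18    valid: 18..60    invalid: > 60
    public class AgeValidator {
        static boolean isValidAge(int age) {
            return age >= 18 && age <= 60;
        }
        public static void main(String[] args) {
            System.out.println(isValidAge(10));  // invalid class -> false
            System.out.println(isValidAge(35));  // valid class   -> true
            System.out.println(isValidAge(75));  // invalid class -> false
        }
    }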
Boundary Value Analysis
Many systems have a tendency to fail at boundaries, so testing the boundary values of an application is important.
Boundary Value Analysis (BVA) is a functional testing technique where the extreme boundary values are chosen.
Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.
Extends equivalence partitioning: test both sides of each boundary, and look at output boundaries for test cases too.
E.g. for a field accepting values from 1 to 100: Min−1 (0), Min (1), Min+1 (2), a mid value (50), Max−1 (99), Max (100), Max+1 (101).
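Continuing with the assumed 1-to-100 range, a sketch of generating and checking the boundary values:

    public class BvaValues {
        public static void main(String[] args) {
            int min = 1, max = 100; // assumed valid range
            int[] tests = { min - 1, min, min + 1, 50, max - 1, max, max + 1 };
            for (int t : tests) {
                boolean expectedValid = t >= min && t <= max;
                System.out.println("input=" + t + " expectedValid=" + expectedValid);
            }
        }
    }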
Decision Tables
Explore combinations of inputs, situations or events.
It is very easy to overlook specific combinations of input.
Start by expressing the input conditions of interest so that they are either TRUE or FALSE, e.g. for a login form:
Valid ID (Vaishali), valid password (Vaishali@123)
Valid ID (Vaishali), invalid password (1234Va@#$%^&123)
Invalid ID (Vai2468&*(), valid password (Vaishali@123)
Invalid ID (Vaishal35768i), invalid password (Vais@$%%hali@123)
State Transition Testing
The process of writing test cases for each and every possible path where the application or system changes from one state to another.
It is used when there is a sequence of events, with associated conditions that apply to those events.
Example:
In a banking application's login screen, the correct login and password need to be entered to access the account details. Three attempts are allowed, and if a wrong password is entered the 4th time, the account is locked.
So testing with the correct password and with an incorrect password is compulsory; for that we use state transition testing.
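A minimal Java sketch of the lock-out behaviour described above (the class name and exact transition rules are illustrative assumptions):

    public class LoginStateMachine {
        enum State { LOGGED_OUT, LOGGED_IN, LOCKED }
        private State state = State.LOGGED_OUT;
        private int failedAttempts = 0;

        State attemptLogin(boolean passwordCorrect) {
            if (state == State.LOCKED) return state;   // locked account stays locked
            if (passwordCorrect) {
                failedAttempts = 0;
                state = State.LOGGED_IN;               // valid transition
            } else if (++failedAttempts > 3) {
                state = State.LOCKED;                  // 4th wrong attempt locks the account
            }
            return state;
        }

        public static void main(String[] args) {
            LoginStateMachine m = new LoginStateMachine();
            // four wrong passwords: the fourth attempt moves LOGGED_OUT -> LOCKED
            for (int i = 0; i < 4; i++) System.out.println(m.attemptLogin(false));
        }
    }

Each arrow of the state diagram (correct password, wrong password, fourth wrong password) becomes a test case.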
Use case based testing
Actors are represented with the help of stick figures, and use cases are represented with the help of ovals.
Error Guessing
The tester guesses, from his/her experience, the error-prone areas and concentrates testing around those areas.
Example : When the bike stops, first thing you do is –check
the petrol.
You probably won't go to a mechanic, read the manual, or check the spark plug.
You know from your experience or usage that the petrol could be over.
Advantages of Black box testing
More effective on larger units of code than glass box testing.
Testers and developers are independent of each other.
The tester needs no knowledge of the code in which the system is designed.
Testing is done from a user's point of view.
Test cases can be designed as soon as the specifications are complete.
Disadvantages of Black Box testing
Only a small number of possible inputs can actually be tested.
Without clear and concise specifications, test cases are hard
to design.
Repetition of test inputs if tester is not informed of the test
cases the programmer has already tried.
May leave many program paths untested.
White box Testing
Testing based on analysis of the internal logic (design, code, etc.).
White-box testing techniques apply primarily to lower levels of testing (e.g. unit and component).
Targets control flow, looping, data flow, and all the nodes and paths.
It is mandatory to have knowledge of the code in which the system is designed.
White Box testing techniques
Statement coverage
Decision coverage
Condition Coverage
Statement coverage
Execute all the statements at least once.
The weakest form of coverage, as it only requires every line of code to be executed.
  int a = 10;
  int b = 20;
  int d = 30;      // dead code – not needed, yet statement coverage must still execute it
  int c = a + b;   // c = 30
  System.out.println(c);
Decision coverage (branch coverage)
Exercise all logical decisions on their true and false sides.
To test a branch, we must check the true condition once and the false condition once.
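A small illustration (the grade method and the pass mark of 40 are assumptions): branch coverage needs one test per outcome of the decision.

    static String grade(int marks) {
        if (marks >= 40) {       // decision under test
            return "pass";       // true branch  – covered by e.g. marks = 50
        } else {
            return "fail";       // false branch – covered by e.g. marks = 30
        }
    }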
Condition Coverage
Execute each decision with all possible outcomes at least once.
It requires all cases.
Checks each of the ways a condition can be made true or false.
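For a compound decision, condition coverage asks that each atomic condition take both truth values. An illustrative sketch (the method and values are assumptions):

    static boolean bothPositive(int a, int b) {
        return a > 0 && b > 0;
    }
    // Condition coverage – each atomic condition true and false at least once:
    //   a = 1,  b = 1   ->  a>0 true,  b>0 true    (result: true)
    //   a = -1, b = 1   ->  a>0 false              (result: false)
    //   a = 1,  b = -1  ->  a>0 true,  b>0 false   (result: false)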
Cyclomatic Complexity
It is important to testers because it provides an indication of the amount of testing required.
For a control flow graph G, cyclomatic complexity V(G) is defined as:
V(G) = E − N + 2, where
N is the number of nodes in G (boxes)
E is the number of edges in G (lines)
E.g. V(G) = 5 − 5 + 2 = 2, or V(G) = 6 − 5 + 2 = 3.
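A small assumed example: a method with a single if-else decision, and its complexity computed from the control flow graph.

    static int abs(int x) {
        if (x < 0) {        // one decision node
            return -x;
        }
        return x;
    }
    // Control flow graph: N = 4 nodes, E = 4 edges
    // V(G) = E - N + 2 = 4 - 4 + 2 = 2
    // -> at least 2 test cases are needed, e.g. x = -5 and x = 5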
Advantage of WBT
As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
It helps in optimizing the code.
Helps in removing extra lines of code, which can bring in hidden defects.
Disadvantages of WBT
As knowledge of the internal coding structure is a prerequisite, a skilled tester is needed, so the cost increases.
It is impossible to look into every bit of code to find hidden errors, which may create problems resulting in failure of the application.
It fails to detect missing functions.
Gray Box Testing
Gray box testing is a combination of the black box and white box testing methods.
The internal structure is partially known.
This involves having access to internal data structures and algorithms for the purpose of designing the test cases, but testing at the user, or black-box, level.
Gray box testing is named so because, in the eyes of the tester, the software program is like a gray (semi-transparent) box, inside which one can partially see.
Interview Questions
What is difference between WBT & BBT?
Which test cases are prepared first, BBT or WBT? Justify.
Does 100% condition coverage guarantee 100% statement coverage and 100% branch coverage?
What is difference between test scenario and test case?
What is cyclomatic complexity?
CHAPTER-11
Test Plan
Learning Objectives :
What is Test Plan ?
Preparing a Test Plan
Test plan Template
Test Scope
Suspension and Resumption Criteria
Test Deliverables
Test Plan
It explains
What needs to be tested?
Why the tests are performed?
How the tests are conducted?
When the tests are scheduled?
Who does the testing?
Preparing a Test Plan
The test plan acts as the anchor for the execution, tracking and reporting of the entire testing project, and covers:
What needs to be tested.
How the testing is going to be performed.
What resources are needed for testing.
The timeline by which the testing activities will be performed.
Test plan Template
Test plan identifier
Introduction
Test items
Features to be tested
Features not to be tested
Approach
Item pass/fail criteria
Suspension criteria and resumption requirements
Test deliverables
Environmental needs
Responsibilities
Staffing and training needs
Schedule
Risks and contingencies
Approvals
Test Plan
Test Plan Identifier: provides a unique identifier for the document.
Introduction: State the purpose of the plan, specify the
goals and objectives.
Test Items : A list of what is to be tested.
Features to be tested
What is to be tested from the USERS viewpoint.
(Based on the BRS, the QA Manager decides the features to be tested.)
Features not to be tested
What is NOT to be tested, from the user's viewpoint.
We need to identify and justify why a feature is not to be tested; the reason could be that the feature is not to be included in this build or release of the software.
(The QA Manager will decide which features are not to be tested, based on the BRS.)
Approach
Approach: This mentions the overall test Strategy/Approach
for your Test Plan.
e.g. Specify the testing method (Manual or Automated, White
box or black box etc. )
Item Pass/Fail Criteria
Item Pass/Fail Criteria : Specify the criteria that will be
used to determine whether each test item (software/product)
has passed or failed testing.
To define the criteria for pass and fail, consider issues such
as the following :
How Significant is the problem. Does it affect any critical
function?
How likely is it that someone will encounter the problem?
Are there any show stopper issues?
Suspension and Resumption Criteria
Suspension and Resumption Criteria : Suspension
criteria specify the criteria to be used to suspend all or a
portion of the testing activities while resumption criteria
specify when testing can resume after it has been suspended.
Example :
Testing will be suspended if a mandatory network link is not available. Testing activity will resume when the network link is made available.
Test Deliverables
Test Deliverables: all the types of documents that are used in implementing the testing, from sign-in to sign-off.
For example:
1. Test Plan  2. Test Cases  3. Test Log, etc.
Examples of test Deliverables
Test cases Documents
Test reports
Test Plan
Test Summary Report
Test Bug Report
Test Analysis Report
Review documents
Bug Analysis report etc
Environmental needs :
Specify the properties of test environment: hardware,
software, network etc.
List any testing or related tools.
Responsibilities :
List the responsibilities of each team/role/individual.
Staffing and Training Needs
Staffing and training needs:
How many staff members are required to do the testing?
What are their required skills?
What kind of training will be required?
Schedule :
Built around the roadmap, but specific to testing, with testing milestones:
When will testing start
What will be the targets/milestones pertaining to time
When will testing cycle end
Risks and Contingencies
Any activity that is a threat to the testing schedule is a planning risk.
What are the contingencies (provisions for unforeseen events) in case a risk manifests itself?
What are the mitigation plans (alternate plans) for the risk?
Approvals :
Specify the names and roles of all persons who must
approve the plan.
Provide space for signatures and dates. (If the document is
to be printed.)
Interview Questions
What is a Test Plan?
What is the difference between suspension/resumption criteria and exit criteria?
What is Item Pass/Fail Criteria?
What are the parameters of a Test Plan?
What is a Test Scope?
CHAPTER-12
Test Cases
Learning Objectives
Structure of Test Cases
Sample Test Cases
Test Scenario
A scenario is any functionality that can be tested. It is also called a test condition or test possibility.
A scenario is an idea from which we can derive possible test cases for particular functionality.
Test Cases
A test case is a case that tests the functionality of a specific object.
A test case is a description of what is to be tested, what data is to be given, and what actions are to be done to check the actual result against the expected result.
It is a document which includes a step-by-step description of input and output conditions, with test data, along with the expected results.
Characteristics of good test cases
A good test case should have the following:
A TC should start with "what you are testing".
TCs should be independent.
TCs should not contain "if" statements.
TCs should be uniform.
Expected result = Actual result → Pass
Expected result ≠ Actual result → Fail → defect report
Issues to be considered
All the TCs should be traceable.
There should not be too many duplicate test cases.
Outdated test cases should be cleared out.
All the test cases should be executable.
Structure of a Test Case
Testing environment/configuration: hardware and software configuration.
Prerequisite: the initial condition.
Finalization: the action to be done after the test case is performed. E.g. if a test case crashes the database, the tester should restore it before other test cases are performed.
Input data description.
Expected results.
Sample Test Case
Preconditions:
Open a web browser and enter the given URL in the address bar.
Home page must be displayed.
All test cases must be executed from this page.
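An illustrative test case for the page described above (the ID, steps and data are assumptions, shown only to make the structure concrete):

    Test case ID   : TC_Login_01 (assumed)
    Description    : Verify login with valid credentials
    Precondition   : Home page is displayed
    Steps          : 1. Click Login  2. Enter user name and password  3. Click Submit
    Test data      : user = testuser, password = Test@123 (assumed)
    Expected result: User is logged in and the home page shows the user's name
    Actual result / Status: recorded at execution time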
Interview Questions
Write a test case on a pen / fan / table.
What is a Test Case?
What are the parameters included in a test case template?
Explain Test Case Execution Process .
For a login page, what will be the test procedure, test scenario, test description and test steps?
CHAPTER-13
Defect Life Cycle
Learning Objectives :
Understanding Defect
Severity and Priority
Format of Defect Report
Defect Management
Defect lifecycle
Types of Defect
What is Defect ?
"Anything missing, wrong or unexpected in the software, or anything that makes it difficult to understand or hard to use, is called a defect."
"A software error is present when the program does not do what the product specification states it should do."
Error → Fault → Failure
Who can report a Defect?
Anyone who has been involved in the software development life cycle, or who is using the software, can report a defect.
In most cases, defects are reported by the testing team.
Priority and Severity
Severity – the degree of impact the bug has on the application.
OR
Severity is the seriousness of the problem.
Priority – the relative degree of precedence given to a bug for fixing it.
OR
Priority is the urgency of fixing the problem.
Severity and Priority Table
Difference between Defect Priority and
Defect Severity
Examples of Severity and Priority
High severity and high priority
System crashes in Log-in Page
High severity and Low priority
System crashes on clicking a button, but the page on which the button is located is part of the next release.
Low severity and high priority
Spelling mistake in the company name on a web site.
Low severity and low priority
Location of a button in a web-based application.
Defect Report Template
Defect identifier
Defect summary
Test Id
Test case name
Module name
Reproducible
Severity
Priority
Raised by
Assigned to
Date of assignment
Status
Snap shots
Fixed by
Date of fixing
Approvals
Defect Management
The primary goal is to prevent defects.
Should be an integral part of the development process.
Should be automated as much as possible.
Defect information should be used for process improvements.
Defect submission
In the above defect submission process, the testing team uses defect tracking software.
E.g. TestDirector, Bugzilla, ALM (Application Lifecycle Management), etc.
New: The tester finds a new bug and reports it to the test lead.
Open: The test lead opens the bug and assigns it to a developer.
Assigned: Once the bug is assigned, the developer has three choices:
1. Rejected: The developer can state that this is not a bug, e.g. the
defect appears only because of a hardware or other environment problem.
2. Deferred: The developer can postpone fixing the bug according to its priority.
3. Duplicate: If the bug is reported twice, or two bugs describe
the same issue, the status of one bug is changed to
"Duplicate".
Fixed: If none of the conditions above (Rejected, Deferred, Duplicate)
applies, the developer fixes the bug.
Re-testing: The application is re-tested to check whether the
defect is still present.
If the bug is still present, it is re-opened; if it is not, it
moves to Verified.
Re-open: If the defect is raised again during re-testing, the bug is re-opened.
Verified: The whole application is tested (regression) to confirm the fix.
Closed: The defect is closed.
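The life cycle above can be sketched as a simple state machine in Python. The transition table follows the slide and is only illustrative; it is not the schema of any particular defect-tracking tool:

    # Allowed status transitions in the defect life cycle.
    TRANSITIONS = {
        "New":        ["Open"],
        "Open":       ["Assigned"],
        "Assigned":   ["Rejected", "Deferred", "Duplicate", "Fixed"],
        "Fixed":      ["Re-testing"],
        "Re-testing": ["Re-open", "Verified"],
        "Re-open":    ["Assigned"],
        "Verified":   ["Closed"],
    }

    def move(status, new_status):
        if new_status not in TRANSITIONS.get(status, []):
            raise ValueError(f"Invalid transition: {status} -> {new_status}")
        return new_status

    status = "New"
    for nxt in ["Open", "Assigned", "Fixed", "Re-testing", "Verified", "Closed"]:
        status = move(status, nxt)
    print(status)  # Closed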
What is defect Age?
Time gap between defect reporting & defect closing or
deferring.
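Defect age can be computed directly from the two dates; the dates below are illustrative:

    from datetime import date

    reported_on = date(2024, 1, 10)
    closed_on = date(2024, 1, 18)
    defect_age = (closed_on - reported_on).days
    print(defect_age)  # 8 (days)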
Types of defects
User interface defects
•Spelling mistakes
•Invalid label of an object w.r.t. functionality
•Improper right alignment
Error handling defects
•No error message appears for a wrong operation
•A wrong error message appears for a wrong operation
•The error message is correct but incomplete
Input domain defects
•Does not accept valid input
•Accepts invalid input as well as valid input
•Accepts the valid type but allows the range to be exceeded
Manipulations defects
•Wrong output
•Valid output but without decimal points
•Valid output but with rounded decimal points
•E.g. if the actual answer is 10.96: output 13 is high severity (wrong
output), 10 is medium (decimal points lost) and 10.9 is low (decimals rounded)
Race conditions defects
•Hang or deadlock
•Invalid order of functionalities
•Application build runs on only some of the platforms
H/W related defects
•Device does not connect
•Device connects but returns wrong output
•Device connects and returns correct output, but the output is
incomplete
Load condition defects
•Does not handle the customer-expected load
•Handles the customer-expected load on only some of the functionalities
•Handles the customer-expected load on all functionalities,
but not w.r.t. the benchmarks
Source defects
•Wrong help document
•Incomplete help document
•Correct and complete help document, but complex to understand
ID control defects
•Logo missing, wrong logo, version number missing, copyright
window missing, team members' names missing
Interview Questions
What is the difference between Defect Severity and Defect Priority?
What is the key difference between Error, Defect and Failure?
What is a deferred defect?
How are defects prioritized?
What is a reproducible defect?
Explain the Defect Life Cycle.
What is Defect Density?
Chapter 14
Incident management
Learning Objectives:
Incident Management
Incidents
Incident Life Cycle
Incident management
Incident:Any event that occurs during testing that requires
subsequent investigation or correction.
Actual results do not match expected results.
Possible causes:
software fault
the test was not performed correctly
the expected results were incorrect
Incidents
May be used to monitor and improve testing
Should be logged
Should be tracked through stages,
e.g.:
initial recording
analysis (s/w fault, test fault, enhancement, etc.)
assignment to fix (if fault)
fixed but not yet tested
fixed and tested OK
closed
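A rough sketch of logging an incident and moving it through the stages above; the record format is invented for the example, not a specific tool's schema:

    incident = {
        "id": "INC-017",
        "description": "Actual total differs from expected total at checkout",
        "stage": "initial recording",
        "history": [],
    }

    def advance(incident, stage):
        incident["history"].append(incident["stage"])
        incident["stage"] = stage

    advance(incident, "analysis (s/w fault)")
    advance(incident, "assignment to fix")
    advance(incident, "fixed and tested OK")
    advance(incident, "closed")
    print(incident["stage"], incident["history"])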
Incident Lifecycle
Interview Questions
What is an incident in software testing?
How to log an Incident in software testing?
Explain Incident Life Cycle.
What are the valid objectives of an incident report?
Chapter 15
Risk Analysis
Learning Objectives :
What is Risk?
Risk Analysis
Risk Management
Risk Mitigation
What is Risk?
Risk is the potential that a chosen action or activity will lead to
a loss or an undesirable outcome.
Risk is a potential problem. It might happen or might not.
E.g. there is a possibility that I might meet with an accident if I
drive.
There is no possibility of an accident if I do not drive at all.
Risk Analysis
Risk analysis and risk management are a series of steps that
help the software team to understand and manage uncertainty.
Risk analysis is "a combination of the likelihood and the impact
that it could have on the user".
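A minimal sketch of this "likelihood and impact" combination as a numeric risk score. The 1-5 scales and the threshold are illustrative conventions, not a standard:

    def risk_score(likelihood, impact):
        # Both factors rated on a 1-5 scale; a higher product means higher risk.
        return likelihood * impact

    score = risk_score(likelihood=4, impact=5)
    print(score)                                        # 20
    print("High risk" if score >= 15 else "Lower risk")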
What Risk Analysis Can Do?
Helps in
Forecasting any unwanted situation
Estimating the damage of such a situation
Making decisions to control such a situation
Evaluating the effectiveness of control measures
Interview Questions
What is Risk Analysis?
How is Risk Analysis related to Severity and Priority?
What is risk-based testing?
What determines the level of risk?
How will you conduct Risk Analysis?
How can you eliminate product risk in your project?
What are the common risks that lead to project failure?
Chapter 16
Introduction of Mobile Testing
Learning Objectives :
Overview of Mobile Devices
Types of Mobile Devices
Types of Mobile Operating Systems
Different types of Mobile Applications
The Future of Mobile Devices
What is a Mobile Device?
A pocket-sized computing device, typically having a display
screen with touch input or a miniature keyboard.
Types of Mobile Devices
Mobile Computers: Notebook PC, Mobile PC
Handheld Game Consoles
Media Recorders: Digital Camera, Digital Video Camera
What is Mobile Application Testing?
Mobile Application Testing is a process by which application
software developed for handheld devices is tested for its
functionality, consistency and usability.
Difference between mobile testing and
mobile application testing
Mobile Testing or Mobile Device Testing:
-Mobile Testing is testing of Mobile Handsets or devices.
-Testing that all the core features like SMS, voice calls,
connectivity (Bluetooth), battery (charging), signal reception and
network are working correctly.
-Testing is conducted on both hardware and software.
Mobile Application Testing:
-Mobile Application Testing is the testing of mobile
applications which we develop as a third party for the
targeted mobile handsets.
-Some core features of the mobile device are tested just to see that
your application has not created any side effects on the
device functionality.
Different Mobile Platforms
Each operating system has its own limitations.
Testing a single application across multiple devices running
on the same platform, and across every platform, poses a unique
challenge for testers.
Android
iOS (iPhone)
Symbian (Nokia)
J2ME
RIM (BlackBerry)
BREW
Windows Mobile or WinCE
Bada (Samsung)
Different types of Mobile Applications
Mobile apps are basically little, self-contained programs,
used to enhance existing functionality, hopefully in a simple,
more user-friendly way.
Normally, when people talk about apps they are almost
always referring to programs that run on mobile devices, such
as Smartphones or Tablet Computers.
There seems to be an app for everything. Whether it’s
checking up on breaking news, chatting with friends via social
networking or even booking last minute holidays there’s an
app out there to help you.
Most applications work alone, but some cooperate with tools
in other media.
Different types of Mobile Applications
Each app provides limited and isolated functionality, such as a
game, a calculator or mobile web browsing. There are different
types of apps:
Web App
Native App
Hybrid App
Native App
Native App has been developed for use on a particular
platform or device.
A native mobile app is a Smartphone application that is
coded in a specific programming language, such as Objective-C
for iOS or Java for Android operating systems.
Native mobile apps provide fast performance and a high
degree of reliability.
They also have access to a phone’s various devices, such
as its camera and address book.
Web App Testing
Web apps are stored on a remote server and delivered over
the internet through a browser.
Web mobile applications are software programs that run
directly from the web browser on mobile phones and tablets.
Web apps are not real apps; they are really websites that, in
many ways, look and feel like native applications.
They are run by a browser and typically written in HTML5.
Users first access them as they would access any web
page: they navigate to a special URL and then have the
option of “installing” them on their home screen by creating a
bookmark to that page.
Hybrid apps
Hybrid apps are part native apps, part web apps.
Like web apps, hybrid apps rely on HTML being rendered in a
browser, with the caveat that the browser is embedded within the
app.
Companies build hybrid apps as wrappers for an existing web
page; in that way, they hope to get a presence in the app store,
without spending significant effort for developing a different app.
Creating a hybrid app makes it easier to build multiple mobile apps
for different platforms quickly.
The code can be written once in web development languages (like
HTML, CSS or JavaScript) and then translated over to native iOS
or Android code.
Interview Questions
What is the difference between Mobile device testing and
mobile application testing?
What are the types of mobile applications?
Which parameters need to be considered while testing a
mobile application using the black box technique?
What are the challenges in mobile application testing?
Chapter 18
Automation Testing
Why Automation Testing?
Every organization has unique reasons for automating
software quality activities, but several reasons are common
across industries.
Improving the efficiency of testing.
Reducing testing costs.
Replicating testing across different platforms.
Giving consistent and accurate results.
Benefits of Automated Testing
Fast: An automation tool runs tests significantly faster than
human users.
Reliable: Tests perform precisely the same operations each
time they are run, thereby eliminating human error.
Repeatable: You can test how the Web site or application
reacts after repeated execution of the same operations.
Programmable: You can program sophisticated tests that
bring out hidden information.
Comprehensive: You can build a suite of tests that covers
every feature in your Web site or application.
Reusable: You can reuse tests on different versions of a Web
site or application, even if the user interface changes.
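As a small illustration of these fast, reliable, repeatable properties, here is a minimal automated test written with Python's unittest module; the function under test is invented for the example:

    import unittest

    def add(a, b):
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add_positive(self):
            # Performs precisely the same operation on every run.
            self.assertEqual(add(2, 3), 5)

        def test_add_negative(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()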
Things we do before Automation
Identify the test cases that cover functionalities of each
module.
Identify what can be tested and what cannot be tested.
Plan the flow of control and sequence of steps.
What to Automate?
Tests that will be run many times.
Tests that will be run with different sets of data.
When to Automate?
When the Application under manual test is stable.
Test cases that need to be run for every build such as Login
Module or core functions like Calendar utility.
Tests that require execution with multiple sets of data, called
parameterization (see the sketch after this list).
Identical test cases which have to be executed on different
hardware configurations.
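A sketch of parameterization (data-driven testing) using pytest's parametrize marker; the login function is invented for the example:

    import pytest

    def is_valid_login(username, password):
        return username == "admin" and password == "secret"

    @pytest.mark.parametrize("username,password,expected", [
        ("admin", "secret", True),    # valid credentials
        ("admin", "wrong", False),    # wrong password
        ("", "secret", False),        # missing username
    ])
    def test_login(username, password, expected):
        assert is_valid_login(username, password) == expected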
When not to automate?
It cannot be used when the functionality of the application
changes frequently.
Cannot be used for user acceptance testing because it is
done by end users.
When the project doesn’t have enough time for the initial
time required for writing automation test scripts.
Tests with unknown or non-quantifiable results, such as correct
color combinations or web site look-and-feel, cannot be
automated.
Selection of Automation tool
Choosing an automated software testing tool is an important
step.
Generally, a good tool should:
Test all the required functions
Have good debugging facilities
Have a clear help file and a user manual
Various Tool Vendors
Interview Questions
What are the steps involved in automation process?
When can a test be automated?
What are the types of frameworks used in automation
testing?
What are the pros and cons of automation?
Why Tools?
An efficient and time-saving way of testing is through the
use of testing tools.
Tools are used to continuously improve the quality of
testing as well as to make it faster.
Tools help to complete test execution without any manual
intervention.
Test tool necessity
Classify different types of test tools according to the test
process activities.
Recognize tools that may help developers in their testing.
Types of tools
Test management tools
Requirements management tools
Incident management tools
Performance testing/load testing/stress testing tools
Test management tools
Support for the management of tests and the testing
activities carried out.
Support for traceability of tests, test results and incidents to
source documents, such as requirement specifications.
Logging of test results and generation of progress reports
Requirements management Tools
Requirements management tools store requirement
statements and check for consistency and for undefined (missing)
requirements.
Allow requirements to be prioritized and enable individual
tests to be traceable to requirements.
Example:
GatherSpace
Incident management tools
Incident management tools store and manage incident
reports, i.e. defects and failures.
These tools enable the progress of incidents to be monitored
over time.
They are also known as defect tracking tools.
Example:
Bugzilla
Performance testing/load
testing/stress testing tools
Performance testing tools monitor and report on how a
system behaves under a variety of simulated usage
conditions.
They simulate a load on an application, a database, or a
system environment, such as a network or server.
Example : Load Runner
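In the same spirit as these tools, here is a very rough load-simulation sketch using only the Python standard library. The URL, user count and request count are placeholders, and dedicated tools such as Load Runner do far more (ramp-up profiles, transaction breakdowns, monitoring):

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    def hit(url):
        start = time.time()
        urllib.request.urlopen(url).read()      # one simulated request
        return time.time() - start              # response time in seconds

    # 10 simulated concurrent users issuing 50 requests in total.
    with ThreadPoolExecutor(max_workers=10) as pool:
        times = list(pool.map(hit, ["https://example.com"] * 50))

    print(f"avg response: {sum(times)/len(times):.3f}s, max: {max(times):.3f}s")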
Interview Questions
How many types of automation tools are there?
What is meant by a test management tool?
What are the benefits of defect tracking tools?
What is the difference between licensed and open source
tools?
Can any non-functional testing be done by tools?
It contains: