
Software Quality Engineering
Lecture 11 - 17
By Jawad Khalid @ FAST-NU
Next few lectures

- Optimizing test cases further using Risk-based Testing
- Understanding Test Case Management, Test Environments, Test Data, Test Execution and Bug Reporting
- Exploring and understanding Test Automation, including UI, Web API and Unit testing
- Understanding test planning after developing an understanding of different aspects of testing
Risk-Based Testing
Definition of Risk

- Possibility of a future event with negative consequences
- A factor that could result in future negative consequences
- The probability of an event, hazard, accident, threat or situation occurring and its undesirable consequences
- Risk = Probability of the event occurring x Impact if it did happen
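As a minimal sketch of this formula (the feature names and ratings below are hypothetical, not from the lecture), test areas can be ranked by risk exposure:

```python
# Minimal sketch: rank hypothetical test areas by risk exposure,
# where exposure = probability of the event occurring x impact if it happens.
features = {
    "payment processing": {"probability": 0.3, "impact": 9},
    "user profile page":  {"probability": 0.5, "impact": 3},
    "report export":      {"probability": 0.2, "impact": 2},
}

def exposure(risk):
    return risk["probability"] * risk["impact"]

# Highest exposure first: the order in which risk-based testing would focus.
for name, risk in sorted(features.items(), key=lambda kv: exposure(kv[1]), reverse=True):
    print(f"{name}: exposure = {exposure(risk):.2f}")
```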
Types of Risks
● Product Risks: related to characteristics of the product
○ Functional, Reliability, Usability
○ Performance, Security, Compatibility
○ Maintainability etc.
● Organizational Risks: associated with the organization itself
○ Unavailability of required skilled resources
○ Insufficiently trained resources
○ Personal issues or conflicting priorities
● Technical Risks: related to project management
○ Unclear requirements, bad architecture
○ Inaccurate estimates
○ Weak project management
● Third-party Risks: related to third-party contractors
○ Supplier failed to deliver services on time
○ Inadequate support from the supplier
○ Supplier goes bankrupt
● Process-related Risks: associated with the development process
○ Vague or non-standard development process
○ Poor quality code, testing, configuration
Risk-based Testing

- Testing in which the management, selection, prioritization, and use of testing activities and resources are based on corresponding risk types and risk levels.
- Risk-based testing is used to:
- decide when and where to start testing
- identify areas that require more focus
- reduce the probability of an adverse event
- reduce the impact of an adverse event
- get more information on the identified risks
Risk-based Testing

- Risk-based testing consists of four overlapping activities:


- Risk Identification
- Risk Assessment
- Risk Mitigation
- Risk Management
Risk Identification

- Risk identification is done during the product quality risk analysis that
usually involves all stakeholders
- Stakeholders can employ one or more of the following techniques for risk
identification:

(Figure: risk identification techniques: 1. Expert Interviews, 2. Independent Assessments, 3. Risk Workshops, 4. Brainstorming, 5. Project Retrospectives)
Risk Assessment

- Categorization of each identified risk
- Determining the likelihood of each risk
- Assessing the impact of each identified risk
- Identifying and assigning risk properties like Risk Owner etc.
Risk Assessment

- Risk categorization
- Risk Severity
- Critical (1) − Harsh effects that reduce productivity to zero. It can also cause the project to be terminated. It has top priority in risk management.
- High (2) − Large effects; can cause great loss and severely threatens the project.
- Medium (3) − Short-term damage that is still reversible through restoration activities.
- Low (4) − Little or minimal damage or loss. This can be monitored and managed by routine procedures.
- Risk Probability
- Frequent (A) − Expected to occur several times in most scenarios (91-100%)
- Probable (B) − Likely to arise several times in most scenarios (61-90%)
- Occasional (C) − May arise sometime (41-60%)
- Remote (D) − Unlikely to arise; may arise sometime (11-40%)
- Improbable (E) − May arise in rare scenarios (0-10%)
- Eliminated (F) − Impossible (0%)
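A hedged sketch of how severity and probability ratings like these might be combined into a coarse risk level; the numeric scores and thresholds below are illustrative assumptions, not part of the lecture:

```python
# Illustrative sketch: combine a severity rating (1=Critical .. 4=Low) and a
# probability rating (A=Frequent .. F=Eliminated) into a coarse risk level.
# The numeric scores and thresholds are assumptions for illustration.
PROBABILITY_SCORE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "F": 0}
SEVERITY_SCORE = {1: 4, 2: 3, 3: 2, 4: 1}  # Critical scores highest

def risk_level(severity: int, probability: str) -> str:
    score = SEVERITY_SCORE[severity] * PROBABILITY_SCORE[probability]
    if score >= 12:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

print(risk_level(1, "B"))  # Critical + Probable  -> High
print(risk_level(3, "C"))  # Medium  + Occasional -> Medium
print(risk_level(4, "E"))  # Low     + Improbable -> Low
```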
Risk Mitigation

- Risk mitigation activity includes:
- Designing effective test cases that help reduce product quality risk
- Reviewing specifications, design and user documentation
- Implementing risk mitigation activities identified earlier
- Re-evaluating known risks during the project life cycle
- Identifying new risks that emerge during testing
Risk Management

- Risk management occurs throughout the product lifecycle
- Risk management includes:
- Preparing guidelines about managing risks
- Encouraging teams to prioritize and focus on high- to medium-risk items
- Addressing sources and consequences of the risks
- Conducting retrospectives and brainstorming sessions related to overall risk management
Software Testing Prioritization
Risk Based Testing in Agile/DevOps
Test Case Management
Process to write Test Cases

1. System study
2. Identify all possible test scenarios
3. Write test cases by applying test design techniques
4. Review test cases
5. Fix the review comments
6. Test case approval
7. Store in the test case repository
Test Case Design

- Determining in which test areas low-level or high-level test cases are appropriate
- Determining the test technique(s) that will enable the necessary coverage to be
achieved. The techniques that may be used are established during test planning.
- Using test techniques to design test cases and sets of test cases that cover the
identified test conditions
- Identifying necessary test data to support test conditions and test cases
- Designing the test environment and identifying any required infrastructure including
tools
- Capturing bi-directional traceability (e.g., between the test basis, test conditions and
test cases)
Test Case Design

- High Level Test Case
- A test case with abstract preconditions, input data, expected results, postconditions, and actions (where applicable).
- Advantages: A tester is not bound to follow the test case step by step, which gives a chance to explore more edge cases and increases the chance of finding new bugs.
- Disadvantages: It is not certain that all scenarios are covered, and it is difficult for an inexperienced tester to work with these test cases.
Test Case Design

- Low Level Test Case
- A test case with concrete values for preconditions, input data, expected results, postconditions, and a detailed description of actions (where applicable).
- Advantages: A tester is unlikely to miss bugs, and it is easy for an inexperienced tester to work with these test cases, since the desired results can be obtained by following the steps.
- Disadvantages: It is tedious to execute these test cases again and again, and the tester may not find the work challenging.
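To make the contrast concrete, here is a sketch of a low-level test case in pytest for a hypothetical login function (names and values are illustrative). The high-level equivalent would simply say: log in with valid credentials and verify that login succeeds.

```python
# Sketch of a low-level test case: concrete preconditions, input data and
# expected results. `login` is a hypothetical stand-in for the system under test.
def login(username: str, password: str) -> bool:
    return username == "qa_user_01" and password == "P@ssw0rd!"

def test_login_with_valid_credentials():
    # Precondition: account qa_user_01 exists with password P@ssw0rd!
    assert login("qa_user_01", "P@ssw0rd!") is True

def test_login_with_wrong_password():
    # Expected result: login is rejected
    assert login("qa_user_01", "wrong-password") is False
```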
How much to document

- Factors that identify how much to document


- Project risks (what must/must not be documented)
- The added value which the documentation brings to the project
- Standards to be followed and/or regulations to be met
- SDLC or approach used (e.g., an Agile approach aims for “just enough”
documentation)
- The requirement for traceability from the test basis through test analysis and
design
Review Process

- Review Types
- Self-review
- Peer review
- Supervisory Review
Test Case lifecycle
Common Issues in Test Cases

- Incomplete test cases.


- Missing negative test cases.
- No test data.
- Inappropriate/Incorrect test data.
- Incorrect Expected behavior.
- Grammatical problems.
- Spelling errors.
- Replication of Test Cases.
- Inconsistent tense/voice.
- Incomplete results/number of test runs.
- Defect information was not recorded in the test case.
Test Case VS Exploratory Testing
Test Case VS Exploratory Testing

Test Cases
- Pros:
  - Explicit documented tests
  - Better guarantee of coverage
  - Easy to trace back to requirements
  - It's automatable
  - Medium and junior QA can test
  - Written directly from requirements
- Cons:
  - Not adaptable, i.e. testers can't deviate from the steps
  - Requires documentation and maintenance

Exploratory Testing
- Pros:
  - More adaptable: the tester determines the actual test steps
  - Learning and testing at the same time
  - Less test prep and documentation
- Cons:
  - Coverage depends on the tester's skill to explore and learn
  - It's not automatable
  - Difficult to trace back to requirements
Test Case VS Exploratory Testing

- Why Choose?
- Test case based testing and Exploratory testing are both excellent techniques for
reducing the number of defects found in production. Together they serve as two
great complementary techniques that build on each other. Ideally, we would like
to do both.
Test Environment
Test Environment

- A testing environment is a setup of software and hardware on which the testing team executes test cases. It comprises the real business and user environment as well as the physical environment, such as servers and the front-end running environment.
Different Major Available Environments

01 Development Environments
● Used by dev teams for feature preview and collaboration
● No client data

02 QA/Test Environments
● Used by QA and project team for acceptance testing with test data
● No client data
● A company may have multiple QA/test environments

03 Staging Environment
● Pre-production environment used for final acceptance based on a production-sized data set
● Limited production data
● Scaled-down replica of the production environment

04 Production Environment
● Used by clients (live)
● Full production data
Setting Up Test Environments

- Test Environment should also be designed during test designing


- Test Environments should be ready and available before the test implementations and
test execution
- Environment should be fit for purpose
- Should be capable of enabling the exposure of defects present during controlled
testing
- Operate normally when failures are not occurring so that we can focus on real
software bugs rather than environment specific issues
- Different test environment setups might be required for different types of testing (Unit/Integration/Performance/Security/etc.)
Setting Up Test Environments

- Environment setup should be repeatable (create identical environments every time you need one)
- Make use of virtualization (containers/VMs) and CI/CD for quick and easy setup
- Key setup components
- System and application components (frontend, backend, data server, etc.)
- Test data
- Client operating system
- Browsers
- Hardware, including the server operating system
- Network
- Documentation
Test Environments Management

- Maintenance of a central repository with all the updated versions of test environments
- Test Environment management as per the test team demands
- Creating new environments as per the new requirements
- Monitoring the environments
- Updating/Deleting outdated test environments
- Investigation of issues on the environment
- Coordination till an issue resolution
Challenges of Test Environment

- Poor planning on resource usage


- Ineffective planning for resource usage can affect the actual output. Also, it may lead to conflict
between teams.
- Remote environment
- It is possible that a Test environment is located geographically apart. In such a case, the testing
team has to rely on the support team for various test assets. (Software, hardware, and other
issues).
- Elaborate setup time
- Sometimes the test setup gets too elaborate
- Shared usage by teams
- If the testing environment is used by development & testing team simultaneously, test results will
be corrupted.
- Complex test configuration
- Certain tests require complex test environment configuration, which may pose a challenge to the test team.
Test Data Management
Have you seen Test Data?

- Yes, all the data that you have used or created during the testing of applications in lab exercises is test data, e.g.:
- Boards, lists and cards that you had to create to do the testing of search (Test
Data created to meet the pre-condition)
- The Value(s) that you used to test the search field (Test Data used as input to
confirm the functionality)
Test Data

- Data that is used in testing
- Test data is used to verify that the application works as expected given the decided set of input data
- Test data is also used to satisfy test preconditions and postconditions
- Test data may consist of synthetic (fake) or representative (real) values for any given input field
- Test data is ideal if it identifies all application errors with the minimum size of data set
Importance of Test Data

- Preparation and maintenance of test data consumes 30%-60% of a tester's time
- Manual test data practices create bottlenecks in CI/CD pipelines
Test Data Example

- You are designing test cases for WhatsApp. You want to test whether WhatsApp shows the correct messaging history for all types of users. Identify what test data you would need:
- A new user with no messaging/call history
- A very old user with lots of messaging history, with messages including
- Images
- Text
- Voice
- An old user who switched phones between Android and iOS
- An old user who deleted their account and then recreated it
How to Create Test Data

- Manual test data generation
- Time consuming and a bottleneck for testing in CI/CD
- Only recommended in exploratory testing
- From production data, manually
- Sensitive data should be masked
- Synthetic data should be added where required
- Data subsetting/sampling should be done to build compact but complete test data
- Using test data generation tools
- Quick data generation using online tools
- More robust data generation using specialized tools
- Customized data generation using custom scripting as part of application code
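A sketch of the custom-scripting option, assuming the third-party Faker library (installed with `pip install Faker`); seeding makes the generated data set reproducible:

```python
# Sketch: generate synthetic (fake) test data with the Faker library.
from faker import Faker

fake = Faker()
Faker.seed(42)  # reproducible output, so the same data set can be rebuilt

test_users = [
    {"name": fake.name(), "email": fake.email(), "address": fake.address()}
    for _ in range(5)
]
for user in test_users:
    print(user)
```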
Explore creating test data

- https://generatedata.com/
- https://mockaroo.com/
- https://www.bestrandoms.com/random-address-in-ph?quantity=6
Challenges in Preparing Test Data

- Testing team lacking knowledge and skills in test data generation tools
- Testing team not having access to data sources
- Delay in production data access to the testers
- Production data not being fully usable in case of developing business scenarios
- Large volumes of data required
- Synthetic data is less reliable and credible
- Non-representative data fails to identify critical bugs
- Replication and/or sharing of sensitive data can create legal issues
- Protecting test data in compliance with standards
Test Data Creation Criteria

- Following categories of test data should be considered while designing test data:
- No data: Check system response when no data is submitted
- Valid data: Check system response when Valid test data is submitted
- Invalid data: Check system response when invalid test data is submitted
- Illegal data format: Check system response when test data is in an invalid format
- Boundary Condition Dataset: Test data meeting boundary value conditions
- Equivalence Partition Data Set: Test data qualifying your equivalence partitions.
- Decision Table Data Set: Test data qualifying your decision table testing strategy
- State Transition Test Data Set: Test data meeting your state transition testing
strategy
- Use Case Test Data: Test Data in-sync with your use cases.
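A hedged pytest sketch showing how several of these categories (no data, valid, illegal format, boundary values) can drive one parameterized test; `validate_age` and its 18-65 rule are hypothetical:

```python
# Sketch: cover several test data categories with one parameterized test.
# `validate_age` and its 18-65 rule are hypothetical.
import pytest

def validate_age(value: str) -> bool:
    return value.isdigit() and 18 <= int(value) <= 65

@pytest.mark.parametrize("value,expected", [
    ("", False),      # no data
    ("30", True),     # valid data
    ("abc", False),   # illegal data format
    ("17", False),    # boundary: just below the lower bound
    ("18", True),     # boundary: lower bound
    ("65", True),     # boundary: upper bound
    ("66", False),    # boundary: just above the upper bound
])
def test_age_field(value, expected):
    assert validate_age(value) is expected
```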
Test Data Management Strategies

01 Flat files based on mapping rules
● Simple text files / CSV files
● Can be generated manually, exported from production if available, or generated using tools

02 SQL scripts to extract data from existing systems
● Maintaining SQL queries to extract data when required

03 Production data
● Managing sanitized production data to be used when required

04 Test data automation tools
● Tools that can find and make required data
● Can be commercial/open-source tools or custom-made modules to populate required test data
● Real-time automation and orchestration of enterprise test data
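A minimal sketch of strategy 02, keeping an extraction query as a reusable script; sqlite3 with an in-memory table stands in here for the real source system:

```python
# Sketch: keep a SQL extraction query as a reusable, maintained script.
# sqlite3 with an in-memory table stands in for the real source system.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, country TEXT, active INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [(1, "PK", 1), (2, "US", 0), (3, "PK", 1)],
)

# The maintained extraction rule: a compact but representative subset.
EXTRACT_ACTIVE_BY_COUNTRY = (
    "SELECT id, country FROM users WHERE active = 1 AND country = ?"
)
print(conn.execute(EXTRACT_ACTIVE_BY_COUNTRY, ("PK",)).fetchall())
# -> [(1, 'PK'), (3, 'PK')]
```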
Features required in Test Data Automation
Making Test Data always Available

- Keeping data up-to-date
- Need to update test data after any change in the data model
- Updates will be made according to the test data management strategy
- Guarding data from corruption
- Maintain a backup of test data
- Return modified data to its original state after test execution (see the fixture sketch below)
- Data division among the testers
- Informing the team of any changes to test data
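One common way to return modified data to its original state is a pytest fixture with teardown. A minimal sketch, assuming each test can work on a fresh copy of the data; with a real database the teardown step would restore a backup instead:

```python
# Sketch: hand each test a fresh copy of the test data and restore state
# afterwards, so one test cannot corrupt data for the next.
import copy
import pytest

ORIGINAL_DATA = {"user": "qa_user_01", "balance": 100}

@pytest.fixture
def account_data():
    data = copy.deepcopy(ORIGINAL_DATA)  # fresh copy per test
    yield data
    # Teardown runs after each test; with a real database this is where a
    # backup would be restored or the modified rows reset.

def test_withdrawal_reduces_balance(account_data):
    account_data["balance"] -= 40
    assert account_data["balance"] == 60
```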
Test Execution
Test Execution

- The activity that runs a test on a component or system producing actual results.
- Basic Test Execution Tasks
- Executing manual tests, including exploratory testing
- Executing automated tests
- Comparing actual results with expected results
- Analyzing anomalies to establish their likely causes
- Reporting defects based on the failures observed
- Logging the actual results of test execution
- Executing regression tests
Other Test Execution Tasks

- Recognizing defect clusters which may indicate the need for more testing of a
particular part of the test object
- Making suggestions for future exploratory testing sessions based on the findings
from exploratory testing
- Identifying new risks from information obtained when performing test execution tasks
- Making suggestions for improving any of the work products from the test
implementation activity (e.g., improvements to test procedures)
Test Execution Entry Criteria

- Test Bed (Test Environment + Test Data) is available


- Scripted Test Cases or Session Charters are available
- Tester has access to all required systems, applications, application logs, observability
dashboards, stubs/drivers, third-party software integrated with application, and any
other tools
Important Handy Tools

- Screencasting / screen capturing tool
- Browser console monitor
- Network monitoring tool
- Clipboard history manager
Test Execution Prioritization

- Sanity / Smoke test


- Execute Acceptance tests considering
- Current Changes under test (User Stories Acceptance Criteria)
- Associated Risk-based Prioritization done during test designing
- Execute Regression tests considering
- Risk-based Prioritization done during test planning
Test Execution Tracking

- Test management tools are used to create test cycles according to test plans.
- During execution, the tester needs to update the test case execution status properly:
- To-Be-Executed
- In-Progress
- Passed
- Failed
- Blocked
Bug Reporting
Bug Reporting

- One major task during test execution is reporting defects based on the failures
observed
Bug Reporting
- Bug Report Components
- Identifier
- Title/ Summary
- Reporting Date
- Reporter
- Test Item
- Test Environment
- Description
- Steps to reproduce
- Test Data used
- Screenshots / screencasts
- System Logs and any attachment that can help
- Expected & Actual Results
- Severity
- Priority
- Status
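An illustrative sketch of these components as a structured record; real teams would capture this in a tracker such as Jira, and all field values below are invented:

```python
# Illustrative sketch: bug report components as a structured record.
from dataclasses import dataclass

@dataclass
class BugReport:
    identifier: str
    title: str
    reporter: str
    test_item: str
    test_environment: str
    steps_to_reproduce: list
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    status: str = "New"

bug = BugReport(
    identifier="BUG-101",
    title="Login fails with valid credentials on Safari",
    reporter="QA Tester",
    test_item="Login page v2.3",
    test_environment="Staging, Safari 17, macOS 14",
    steps_to_reproduce=["Open /login", "Enter valid credentials", "Click Sign in"],
    expected_result="User is redirected to the dashboard",
    actual_result="An error 500 page is shown",
    severity="High",
    priority="P1",
)
print(bug.identifier, "-", bug.title, "-", bug.status)
```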
Bug’s Lifecycle

- This is a typical bug life cycle
- In Agile, a bug is reported alongside stories and therefore has a similar workflow to a regular story/task
- Notice that not all bugs are fixed:
- There's not enough time
- It's really not a bug
- It's too risky to fix
- It's just not worth it
- Ineffective bug reporting
Bug’s Severity Levels
Exercise

- Define the severity of the following:
- The website performance is too slow
- The login function of the website does not work properly
- The GUI of the website does not display correctly on mobile devices
- The website could not remember the user login session
- Some links don't work
Challenges of Bug Reporting

- Difficult to analyse and report bugs that occur randomly
- Occurrence frequency should be included in the bug report so that the bug can be prioritized
- Lack of process for bug logging
- Improper defect triage/communication process
- Improper setup of severity and priority
- Communication gaps between testers and developers
- Focus on justifying one's own perspective rather than understanding and focusing on overall quality
Test Plan
Test Plan: do we need it or not?

- As QA engineers, do we wait for requirements and start analyzing/testing when we get them?
- As QA engineers, do we wait for other work products and start analyzing/testing when we get them?
- As QA engineers, do we wait for the application and start testing when we get it?
- As QA engineers, do we wait for our test basis and then start designing the respective test cases?
Benefits and Goals of Test Plan

- The testing process can't operate in a vacuum
- Testing without knowing what has changed, why it changed, when it will be available, and what will be required to test it?
- A test plan helps us determine the effort needed to validate the quality of the application under test
- It helps people outside the test team, such as developers, business managers and customers, understand the details of testing
- A test plan guides our thinking. It is like a rule book, which needs to be followed.
- Important aspects like test estimation, test scope and test strategy are documented in the test plan, so it can be reviewed by the management team and re-used for other projects
Benefits and Goals of Test Plan

- The ultimate goal of the test planning process is communicating (not recording) the software test team's intent, its expectations, and its understanding of the testing that is to be performed.
Test Planning Topics To Discuss

- High-Level Expectations:
- Initial alignment of stakeholders on product, test planning process, etc
- People, Places, and Things:
- List of all key stakeholders and their contact details
- Definitions:
- High-level quality and reliability goals
- Inter-Group Responsibilities:
- Identify tasks and deliverables that potentially affect the test effort
- What Will and Won't Be Tested:
- Some things will not be tested because they are already-tested components, the changes do not affect them, or the component is a third party's tested product
- Test Phases
- Identify what to test and when to test based on development plan
Test Planning Topics To Discuss

- Test Strategy
- Define test strategy that the test team will use to test the software both overall and in
each phase.
- Resource Requirements
- People, Equipment, Office and lab space, Software, Outsource companies,
Miscellaneous supplies
- Tester Assignments
- Assign responsibilities to individuals or groups
- Test Schedule
- Discuss timelines based on scope, phases, strategy, resources, etc.
- Test Cases
- What test cases to write, whether we can use existing test cases, and how and where to manage the test cases
Test Planning Topics To Discuss

- Bug Reporting
- Process for bug reporting and Managing
- Metrics and Statistics
- Metrics and KPIs that need to be tracked to measure the progress of testing, the quality of testing, and the success of the project
- Risks and Issues
- Very useful and critical for successful testing and delivery of the project
- Risks include: unrealistic deadlines for testing, insufficient skills of testers, delays in environment or work product availability, required resources not being available, people leaving the company
- Note: these are risks to the testing effort, not the product risks that we used for Risk-Based Testing
Key Steps for Test Planning

(Figure: numbered key steps for test planning; the final step shown is 9. Risk Assessment and Mitigation)
Exercises

- Discussing some aspects of test planning


- Scenario 1: You are starting a new project for a new client. The product is also
new for which the first phase is to build a PoC. You have experienced QAs that
have worked on many different projects
- Scenario 2: You have a mature product and a dedicated QA teams. A set of bug
fixes are planned to be made in the application.
- Scenario 3: You have a mature agile and continuous delivery plan where each
user story has its own independent development lifecycle. You are to test a new
user story that adds few buttons and fields to an existing form.
Exercises

- For Scenario 1: A new test strategy will need to be devised. However, the team is experienced and only a PoC is required, so they can choose the most suitable strategy from their prior experience. As there is no resource allocation yet, we need to evaluate whether new hiring is required. Clear definition of entry/exit criteria aligned with the needs of the PoC avoids too little or too much testing.
- In Scope and Out of Scope:
- It's a PoC, so the scope may only include functional testing
- Only positive scenarios may need to be tested; no need to test negative or edge cases
- Formal test case writing may not be in scope
Exercises

- For Scenario 2: Most of the tooling, environments, skills, test cases, etc. will already be there. We may only need to check the schedule, resource availability from the existing team, risk-based and change-based testing to identify suitable test cases to execute, the risk of delay in availability of the new code, etc.
- In Scope and Out of Scope:
- Confirmation test suite, smoke test suite and regression test suite
- No need to test unrelated functionalities
- For Scenario 3: In this case, we may only require ATDD/BDD test cases, the required regression testing, and adjusting the story point estimate accordingly
Exercises
- Following are some tasks teams need to do:
- Product Vision Statement
- list of product components, their design/features
- Complete Project schedule
- Product Architecture and design
- Product Code
- Unit testing
- Test Planning
- Test Plan review by stakeholders
- General Testing
- Considering following teams
- Program Management
- Developers
- Tester
- Marketing

Identify which team is mainly responsible for each task above, and identify each task's interdependencies to highlight the associated risks.
Exercises
- Following are some tasks teams need to do:
- 1. Product Vision Statement (PM and Marketing)
- 2. List of product components, their design/features (PM) - Depends on 1
- 3. Complete project schedule (PM) - Depends on 2
- 4. Product architecture and design (Developers) - Depends on 2
- 5. Product code (Developers) - Depends on 4
- 6. Unit testing (Developers) - Depends on 5
- 7. Test planning (Testers) - Depends on 2 and 3
- 8. Test plan review by stakeholders (PM, Developers) - Depends on 7
- 9. General testing (Testers) - Depends on 5 and 8
Exercises
- Your company is building software that has the following timelines for different deliverables:
- Product high-level requirements - 1 week from now
- Product detailed requirements - 3 weeks from now
- First set of features developed - 5 weeks from now
- Second set of features developed - 7 weeks from now
- Complete product - 9 weeks from now
- You need to identify the schedule for the following:
- Finalized test plan
- Test case writing
- Testing
- Also identify what resources will be required and what their responsibilities will be:
- QA Lead
- QA Tester(s) - for test designing and test execution
Exercises
- High-level schedule with sample values:
- Update test plan: start 1 week from now, requires 2 days, QA Lead
- Finalize test plan: start 3 weeks from now, requires 2 days, QA Lead
- Test case writing for high-level requirements: start 1 week from now, requires 1 week, Tester 50% + Lead 20%
- Test case writing for Phase 1 requirements: start 3 weeks from now, requires 1 week, Tester 100% + Lead 20%
- Test case writing for Phase 2 requirements: start 4 weeks from now, requires 1 week, Tester 100% + Lead 20%
- Test case writing for Phase 3 requirements: start 5 weeks from now, requires 1 week, Tester 100% + Lead 20%
- Testing for Phase 1 requirements: start 5 weeks from now, requires 1 week, Tester 100% + Lead 20%
- Testing for Phase 2 requirements: start 7 weeks from now, requires 1 week, Tester 100% + Lead 20%
- Testing for Phase 3 requirements: start 9 weeks from now, requires 1 week, Tester 100% + Lead 20%
- Notes:
- For week 5 we need to allocate two testers and a total of 40% of the lead's time, based on the above schedule
- As testing takes only 1 week per phase while development of the next phase takes two weeks, a tester may have free time (they might have to test bug fixes for previous phases, or the test case writing task can be divided so that we don't have to involve a second tester)
- We may need to assign a redundant tester to stay involved in the project to avoid delays if the current tester/lead needs to take a few days off
Test Strategy
Test Strategy

- Test Strategy is a critical step in making a Test Plan in Software Testing.


- Provides set of guidelines that explains test design and determines how testing needs
to be done
Test Strategy (Already Covered)
- Dynamic
- Example
- Exploratory Testing
- Create a lightweight set of testing guidelines that focus on rapid adaptation or known weaknesses in
software
- Commonly concentrates on finding as many defects as possible during test execution and adapting to the
realities of the system under test as it is when delivered
- Analytical
- Example
- Risk-Based
- Requirement-Based
- Analytical test strategies use some formal or informal analytical technique, usually during the requirements
and design stages of the project.
- Here the testing team defines the testing conditions to be covered after analyzing the test basis, be it risks
or requirements, etc.
Other Test Strategy
- Model-based
- Example
- You can build mathematical models for loading and response for e-commerce servers, and test based on that model
- Common focus on the creation or selection of some formal or informal model for critical system behaviors, usually during the requirements and design stages of the project
- Methodical
- Example
- A checklist that you have put together over the years that suggests the major areas of testing to run, or you might follow an industry standard for software quality, such as ISO 9126, for your outline of major test areas. You then methodically design, implement and execute tests following this outline
- Adherence to a pre-planned, systematized approach that has been developed in-house, assembled from various concepts developed in-house and gathered from outside, or adapted significantly from outside ideas, and may have an early or late point of involvement for testing
Other Test Strategy
- Process- or Standard-Compliant
- Example
- Adhering to standards like IEEE 829, the testing needs of EN 61508, or the testing strategies defined for Scrum/XP
- Common reliance upon an externally developed approach to testing, often with little, if any, customization
- Consultative/Directed
- Example
- You might ask the users or developers of the system to tell you what to test, or even rely on them to do the testing
- Commonly relies on a group of non-testers to guide or perform the testing effort, and typically emphasizes the later stages of testing simply due to the lack of recognition of the value of early testing
Other Test Strategy
- Regression-Averse
- Example
- Automate all the tests of system functionality so that, whenever anything changes, you can re-run every test to ensure nothing has broken
- A regression-averse strategy may involve automating functional tests prior to release of the function, in which case it requires early testing; but sometimes the testing is almost entirely focused on functions that have already been released, which is in some sense a form of post-release test involvement
How to Choose?
- Risks
- Risk management is very important during testing, so consider the risks and the level of risk
- Skills
- Consider which skills your testers possess and lack because strategies must not only be chosen, they must
also be executed
- Objectives
- Testing must satisfy the needs and requirements of stakeholders to be successful
- Regulations
- Sometimes you must satisfy not only stakeholders, but also regulators
- Product
- Some products, like weapons systems and contract-development software, tend to have well-specified requirements
- Business
- Business considerations and business continuity are often important
Details Included in Test Strategy
- Test levels
- Entry as well exit conditions for each test level
- Relationships between the test levels
- Procedure to integrate different test levels
- Techniques for testing
- Degree of independence of each test
- Compulsory as well as non-compulsory standards that must be adhered to
- Testing environment
- Level of automation for testing
- Tools to be used in testing
- Confirmation and regression testing
- Re-usability of both software and testing work products
- Controlling testing
- Reporting on test results
- Metrics and measurements to be evaluated during testing
- Managing defects detected
- Managing test tools and infrastructure configuration
- Test team members roles and responsibilities
Testing Levels, Types, and
Methodologies
Levels of Testing

- As discussed, shifting left for verification and testing improves software quality; therefore we test at all levels
- We also test each work product, just as we tested the requirements document
Testing Levels
- Unit Testing
- focuses on components that are separately testable
- Test Basis Examples
- Detailed design
- Code
- Data model
- Component specifications
- Integration Testing
- focuses on interactions between components or systems
- Test Basis Examples
- Software and system design
- Sequence diagrams
- Interface and communication protocol specifications
- Architecture at component or system level
- Workflows
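A minimal sketch of the difference in scope, using hypothetical functions: the unit test exercises one separately testable component, while the integration test exercises the interaction between two:

```python
# Sketch: unit vs. integration test scope, with hypothetical components.
def calculate_discount(total: float) -> float:
    return total * 0.1 if total > 100 else 0.0

def checkout(total: float) -> float:
    return total - calculate_discount(total)

def test_calculate_discount_unit():
    # Unit level: one separately testable component in isolation.
    assert calculate_discount(200) == 20.0

def test_checkout_integration():
    # Integration level: checkout interacting with calculate_discount.
    assert checkout(200) == 180.0
```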
Testing Levels
- System Testing
- focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end
tasks the system can perform and the non-functional behaviors it exhibits while performing those tasks
- Objectives
- Validating system completeness, reducing risk, finding defects, building confidence
- Test Basis Examples
- System and software requirement specifications (functional and non-functional)
- Risk analysis reports
- Epics and user stories
- System and user manuals
Testing Levels
- Acceptance Testing
- Acceptance testing may produce information to assess the system’s readiness for deployment and use by the
customer (end-user)
- Common forms of acceptance testing include the following:
- User acceptance testing
- Focused on validating the fitness for use of the system by intended users in a real or simulated
operational environment
- Operational acceptance testing
- Testing of backup, restore, installation, uninstallation, upgrading, disaster recovery, user management, maintenance tasks, data load and migration, security checks, and performance testing
- Contractual and regulatory acceptance testing
- Contract’s acceptance criteria for producing custom-developed software
- Alpha and beta testing
- Alpha and beta testing are typically used by developers of commercial off-the-shelf (COTS) software
who want to get feedback from potential or existing users, customers, and/or operators before the
software product is put on the market
White box and Black box Testing

- Black-box testing: what is supposed to be done (functionality)
- Does not concern how the software accomplishes the work; an outside view of the software based on input-output relationships
- White-box (clear-box) testing: how the software does its job
- Looks into code details
Functional and Non Functional
- Functional testing of a system involves tests that evaluate functions that the system should perform.
- Non-functional testing of a system evaluates characteristics of systems and software such as usability, performance
efficiency or security
Change-related Testing
- Confirmation Testing
- Testing should be done to confirm that the changes have corrected the defect or implemented the functionality
correctly
- Regression Testing
- Testing should be done to confirm that the changes have not caused any unforeseen adverse consequences
Smoke Test & Sanity Testing
- Both testing help avoid wasting time and effort by quickly determining whether an application is too flawed to merit any
rigorous testing.
- Smoke Test
- A test suite that covers the main functionality of a component or system to determine whether it works
properly before planned regression testing begins.
- Sanity Testing
- A test suite that covers the high level testing of new functionality/bug fixes to determine whether it works
properly before planned confirmation testing begins.

The ISTQB Glossary does not differentiate between them and considers them synonyms.
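In pytest, one common way to carve out such a smoke/sanity suite is a custom marker; a hedged sketch (the marker name "smoke" is a team convention, not built into pytest):

```python
# Sketch: tag smoke tests with a custom pytest marker and run only them with
# `pytest -m smoke`. Registering the marker in pytest.ini avoids warnings:
#   [pytest]
#   markers = smoke: core functionality checks run before full regression
import pytest

@pytest.mark.smoke
def test_application_starts():
    assert True  # placeholder for a real "app is alive" check

def test_full_report_export():
    assert True  # wider regression test, excluded by `pytest -m smoke`
```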


Behavior-Driven Development and Test-Driven Development
Acceptance Test-Driven Development
Applying Testing in Agile
Test Process
- The testing process that we saw in the first lecture:
- Test Planning and Control
- Test Analysis and Design
- Test Implementation and Execution
- Evaluating Exit Criteria and Reporting
- Test Closure Activities
Testing Process
- Test Planning
- As discussed today
- Test Monitoring and Control
- Test monitoring involves the on-going comparison of actual progress against planned progress using any
test monitoring metrics defined in the test plan
- Test analysis
- The test basis is analyzed to identify testable features and define associated test conditions
- Major activities
- Analyzing the test basis appropriate to the test level being considered
- Evaluating the test basis and test items to identify defects of various types in test basis
- Identifying features and sets of features to be tested
- Defining and prioritizing test conditions for each feature based on analysis
- Capturing bi-directional traceability
Testing Process
- Test Design
- The test conditions are elaborated into high-level test cases
- Major Activities
- Designing and prioritizing test cases and sets of test cases
- Identifying necessary test data to support test conditions and test cases
- Designing the test environment and identifying any required infrastructure and tools
- Capturing bi-directional traceability
- Test implementation
- The testware necessary for test execution is created and/or completed, including sequencing the test cases into test
procedures
- Major Activities
- Developing and prioritizing test procedures, and, potentially, creating automated test scripts
- Creating test suites from the test procedures and (if any) automated test scripts
- Arranging the test suites within a test execution schedule in a way that results in efficient test execution
- Building the test environment (including, potentially, test harnesses, service virtualization, simulators, and
other infrastructure items) and verifying that everything needed has been set up correctly
- Preparing test data and ensuring it is properly loaded in the test environment
- Verifying and updating bi-directional traceability
Testing Process
- Test execution
- Test suites are run in accordance with the test execution schedule.
- Major Activities
- Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
- Executing tests either manually or by using test execution tools
- Comparing actual results with expected results
- Analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in the code, but
false positives also may occur)
- Reporting defects based on the failures observed
- Logging the outcome of test execution (e.g., pass, fail, blocked)
- Repeating test activities either as a result of action taken for an anomaly, or as part of the planned testing
(e.g., execution of a corrected test, confirmation testing, and/or regression testing)
- Verifying and updating bi-directional traceability between the test basis, test conditions, test cases, test
procedures, and test results.
Testing Process
- Test completion
- Test completion activities collect data from completed test activities to consolidate experience, testware, and any
other relevant information
- Major Activities
- Checking whether all defect reports are closed, entering change requests or product backlog items for any
defects that remain unresolved at the end of test execution
- Creating a test summary report to be communicated to stakeholders
- Finalizing and archiving the test environment, the test data, the test infrastructure, and other testware for later
reuse
- Handing over the testware to the maintenance teams, other project teams, and/or other stakeholders who
could benefit from its use
- Analyzing lessons learned from the completed test activities to determine changes needed for future
iterations, releases, and projects
- Using the information gathered to improve test process maturity
