Study Notes - ISTQB Foundation Level - Feb-2023

ISTQB - Course summary

Tuesday, January 3, 2023 10:06 AM

To do:
• Read the glossary

Learning objectives:
K1: Remember - recognize and recall something
• Identify typical objectives of testing
• Be able to recognize the definition of failure within the ISTQB Glossary
• Define risk levels by using likelihood and impact
K2: Understand - understand the reasons behind something
• Distinguish between the root cause of a defect and its effects
• Explain the impact of context on the test process
• Explain the differences and similarities between integration and system testing
K3: Apply - apply a concept to something
• Identify boundary values for valid and invalid partitions (see the sketch after this list)
• Write a defect report, covering a defect found during testing
• Apply a review technique to a work product to find defects
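As a worked example for the K3 boundary-value objective, here is a minimal sketch in Python, assuming a hypothetical age field whose valid partition is 18-65: boundary value analysis picks the values on each edge of the valid partition and the values just outside it.

```python
# Hypothetical example: an age field that accepts values 18..65 inclusive.
# Boundary value analysis tests the edges of the valid partition (18, 65)
# and the nearest invalid neighbors (17, 66).

import unittest

def is_valid_age(age: int) -> bool:
    """Valid partition: 18 <= age <= 65; everything else is invalid."""
    return 18 <= age <= 65

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        # (input, expected) pairs covering valid and invalid boundaries
        cases = [(17, False), (18, True), (65, True), (66, False)]
        for age, expected in cases:
            self.assertEqual(is_valid_age(age), expected, f"age={age}")

if __name__ == "__main__":
    unittest.main()
```

The four cases cover the minimum and maximum of the valid partition plus the first value on each invalid side, which is the standard two-value boundary approach.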

Errors ≠ Defects ≠ Failures
A person makes an error (mistake), which can introduce a defect (fault, bug) in the software; if a defect is executed, it may cause a failure.


Dynamic Testing / Static Testing

1.1 What is Testing?


Objectives of Testing:
• Prevent defects
• Verify fulfillment of requirements
• Check if system is complete
• Build confidence in the system
• Reduce the level of risk
• Provide sufficient information to stakeholders
• Comply with contractual and legal requirements

Quality Management
Testing ≠ Quality Assurance

The seven principles of testing:


1. Testing shows the presence of defects, not their absence
2. Exhaustive testing is impossible
○ Risk analysis, test techniques and priorities should be used to focus the testing effort
3. Early testing saves time and money
○ Also known as shift-left testing
4. Defects cluster together
5. Beware of the pesticide paradox
6. Testing is context dependent
7. The absence of errors is a fallacy

Testing Process:
• Test planning
• Test execution
• Analysing results
• Designing and implementing tests
• Reporting testing progress
• Evaluating the system that is being tested
Testing activities:
• Test planning
Activities that define the objectives of testing and the approach for meeting the test objectives
• Test monitoring and control
○ Monitoring: comparison of actual progress against planned progress
○ Control: taking actions necessary to meet the objectives of the test plan
○ Both supported by evaluation of exit criteria (definition of done)
• Test analysis
The system under test is analyzed to identify testable features
○ Determines WHAT to test
• Test design
Test conditions are elaborated into high-level tests and sets of test cases
○ Defines HOW to test
○ Identifying necessary test data to support test conditions
○ Designing the test environment and identifying any required tools
• Test implementation
Testware necessary for test execution is created
○ Determines whether you have everything in place to run the tests
○ Developing and prioritizing test procedures and potentially creating automated test scripts (a sketch follows this list)
○ Creating test suites from the test procedures
○ Preparing test data
• Test execution
○ Recording the IDs and versions of the test items
○ Executing tests either manually or through tools
○ Comparing actual results with expected results
○ Logging the outcome of test execution
• Test completion
Collect data from completed test activities to consolidate any relevant information
○ Checking whether all defect reports are closed
○ Creating a test summary report
○ Finalizing and archiving the testware
○ Handing over the testware to the maintenance teams
○ Using the information gathered to improve test process maturity
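A minimal sketch of the implementation and execution activities above, assuming a hypothetical login function as the test object (all names are illustrative): prepared test data, an automated test case, a suite built from it, and logged outcomes.

```python
# Illustrative sketch: test data, test case, suite, and logged outcomes.

import unittest

# Test data prepared during test implementation
LOGIN_TEST_DATA = [
    ("alice", "correct-password", True),
    ("alice", "wrong-password", False),
]

def login(user: str, password: str) -> bool:
    """Stand-in for the system under test."""
    return password == "correct-password"

class LoginTests(unittest.TestCase):
    def test_login_with_prepared_data(self):
        for user, password, expected in LOGIN_TEST_DATA:
            # Test execution: compare actual results with expected results
            self.assertEqual(login(user, password), expected)

if __name__ == "__main__":
    # Create a suite from the test case; the runner executes it and
    # logs a pass/fail outcome for each test
    suite = unittest.TestLoader().loadTestsFromTestCase(LoginTests)
    unittest.TextTestRunner(verbosity=2).run(suite)
```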

SDLC models:
• Sequential
Linear sequential flow of activities
○ Waterfall development model
○ V-model
• Iterative and incremental
○ Agile development models
Scrum
Kanban
○ Rational Unified Process (RUP)
○ Spiral
○ DevOps

Test Levels:
• Component testing
Focuses on components that are separately testable (see the sketch after this list)
○ Reducing risk
○ Verifying component behavior
○ Building confidence in the component's quality
○ Finding defects
○ Preventing defect escapes to higher test levels
• Integration testing
Focuses on interactions between components or systems
○ Reducing risk
○ Verifying interface behavior
○ Building confidence in the quality of the interfaces
○ Finding defects
○ Preventing defect escapes to higher test levels
 Component Integration Testing
Interactions and interfaces between integrated components
 System Integration Testing
Interactions and interfaces between systems, packages and microservices
• System testing
Focuses on the behavior and capabilities of a whole system or product, often considering the end-to-end tasks the system can perform
○ Reducing risk
○ Verifying total system behavior
○ Verifying that the system is complete
○ Building confidence in the quality of the system as a whole
○ Finding defects
○ Preventing defect escapes to higher test levels or to production
○ Satisfying legal or regulatory requirements
• Acceptance testing
Focuses on the behavior and capabilities of a whole system or product
○ Establishing confidence in the quality of the system as a whole
○ Validating that the system is complete
○ Verifying that the system behaves as specified
○ User acceptance testing
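A minimal component-test sketch, assuming a hypothetical discount calculator as the separately testable component; it is exercised in isolation, with no other components involved.

```python
# Component-test sketch: one separately testable unit (names invented).

import unittest

def apply_discount(price: float, percent: float) -> float:
    """Component under test: applies a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountComponentTests(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_is_rejected(self):
        # Component testing also verifies error handling at the unit level
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```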

Test levels Characteristics:


• Specific objectives
• Test basis
• Test object
• Typical defects
• Specific approaches and responsibilities

Test Type Objectives:


• Evaluating functional quality characteristics
• Evaluating nonfunctional characteristics
• Evaluating structural/architectural correctness
• Evaluating changes (intended and otherwise)

Test Types:
• Functional Testing
Involves tests that evaluate the functions that the system should perform
 Black-box techniques may be used to derive test conditions and test cases for the functionality of the component or system
○ The functions are WHAT the system should do
○ Considers the behavior of the software
○ Tests the functionality of the component or system
○ May involve special skills (e.g. knowledge of the particular business problem that the software solves)
• Nonfunctional Testing
○ Evaluates characteristics of systems and software such as
 Usability
 Performance efficiency
 Security
○ Tests HOW WELL the system behaves
○ Should be performed at all levels and as early as possible
○ Late discovery of nonfunctional issues can be dangerous
• White Box Testing
○ Derives tests based on the system's internal structure
Internal structure may include:
 Code
 Architecture
 Workflows
 Data flows
○ May require knowledge of the way the code is built and how data is stored
• Change Related Testing
Testing following a change to the system from a fix, upgrade or new feature, done to confirm that the change has corrected the defect or implemented the function correctly (see the sketch after this list)
○ Confirmation testing, after a defect is fixed - re-running the cases that failed due to the defect
○ Regression testing - testing other areas of the system to ensure stability
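A sketch of change-related testing under an assumed scenario: a defect in a hypothetical format_name function (empty first names produced stray spaces) has just been fixed. The confirmation test re-runs the previously failing case; the regression tests re-check neighboring behavior.

```python
# Change-related testing sketch (function and defect ID are invented).

import unittest

def format_name(first: str, last: str) -> str:
    """After the fix: empty name parts no longer produce stray spaces."""
    return " ".join(part for part in (first, last) if part)

class ConfirmationTest(unittest.TestCase):
    def test_defect_1234_empty_first_name(self):
        # Confirmation testing: this exact case failed before the fix
        self.assertEqual(format_name("", "Smith"), "Smith")

class RegressionTests(unittest.TestCase):
    # Regression testing: nearby behavior must be unchanged by the fix
    def test_normal_names_still_work(self):
        self.assertEqual(format_name("Ada", "Lovelace"), "Ada Lovelace")

    def test_empty_last_name_behavior_unchanged(self):
        self.assertEqual(format_name("Ada", ""), "Ada")

if __name__ == "__main__":
    unittest.main()
```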

Reasons for Maintenance:


• Fix defects found in software
• Add new functionality
• Remove existing functionality
• Preserve/improve characteristics of systems
 Performance
 Security
 Portability

Maintenance triggers:
• Modification
• Migration
 Operational test of the new environment
 Operational test of the changed software
• Retirement

IoT = Internet of Things

Impact Analysis:
Evaluates the changes that were made for a maintenance release to identify the intended consequences as well as the expected and possible side effects of the change.
Can also help identify the impact of a change on existing tests (see the toy sketch below).
• Evaluate the changes
• Identify the consequences
• Identify the impact of the change
• Can be done before a change is made
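A toy sketch of the "impact on existing tests" idea, assuming a simple hand-maintained traceability mapping from modules to tests (all names are invented):

```python
# Toy impact-analysis sketch: a traceability mapping from modules to the
# tests that cover them lets you list the tests impacted by a planned
# change, before the change is actually made.

TRACEABILITY = {
    "payment_module": ["test_checkout", "test_refund"],
    "login_module": ["test_login", "test_password_reset"],
}

def impacted_tests(changed_modules: list[str]) -> set[str]:
    """Return every test that covers at least one changed module."""
    return {test
            for module in changed_modules
            for test in TRACEABILITY.get(module, [])}

print(impacted_tests(["payment_module"]))
# e.g. {'test_checkout', 'test_refund'} (set order may vary)
```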

Static vs Dynamic Testing:

Dynamic Testing:
• Requires the execution of the software being tested
• Identifies failures caused by defects when the software is run
• Typically focuses on externally visible behaviors

Static Testing:
• Relies on the manual examination of work products or tool-driven evaluation of the code or other work products
• Finds defects in work products directly
• Can be used to improve the consistency and internal quality of work products

Static Testing:
• Static analysis is important for safety-critical computer systems
 Aviation, medical or nuclear software
• Is often incorporated into automated software build and distribution tools
 In agile development, continuous delivery and deployment
• Almost any work product can be examined using static testing
 Specifications and requirements (business, functional, security requirements)
 Epics, user stories, acceptance criteria
 Architecture and design specifications
 Code
 User guides
 Contracts
• Finds defects in work products directly rather than identifying failures caused by defects when the software is run (a toy static-analysis sketch follows)
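A toy static-analysis sketch using Python's standard-library ast module: the source is parsed but never executed, and a simple check flags bare except: clauses, a common maintainability defect.

```python
# Toy static analysis: examine source code without executing it.

import ast

SOURCE = """
def risky():
    try:
        do_work()
    except:        # bare except: swallows every error, including typos
        pass
"""

tree = ast.parse(SOURCE)  # parse only; the code is never run
for node in ast.walk(tree):
    # A bare "except:" clause has no exception type attached
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' found")
```

Real projects would use an established static-analysis tool rather than a hand-rolled check, but the principle is the same: defects are found directly in the work product.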

Test Techniques

Black-box vs White-box vs Experience-based Techniques:

Black-box (behavioral) techniques:
• Test conditions, test cases and test data derived from a test basis: formal requirements documents, specifications, use cases, user stories, business procedures
• May be used to detect gaps between the requirements and the implementation
• Applicable to both functional and nonfunctional testing
• Concentrate on the inputs/outputs without reference to internal structure
• Coverage measured based on items tested in the test basis

White-box (structural) techniques:
• Test conditions, test cases and test data derived from a test basis that may include code, software architecture, etc.
• Based on the analysis of: architecture, internal structure, detailed design, code of the test objects
• Concentrate on the structure and processing within the test objects
• Coverage measured based on items tested within a selected structure (code/interfaces)

Experience-based techniques:
• Test conditions, test cases and test data derived from a test basis that may include the knowledge or experience of team members
• Knowledge includes: expected use of the system, the environment, likely defects and the distribution of those defects
• Often combined with black-box and white-box techniques
• Leverage the experience of multiple project members to design, implement and execute tests
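A small sketch contrasting the two viewpoints on one hypothetical function: the black-box case is derived from an assumed specification ("orders of 50 or more ship free"), while the white-box case is chosen so that every branch in the code executes.

```python
# Black-box vs white-box on one tiny function (names invented).

import unittest

def shipping_cost(order_total: float) -> float:
    if order_total >= 50:   # branch 1: free shipping
        return 0.0
    return 4.99             # branch 2: flat fee

class ShippingTests(unittest.TestCase):
    # Black-box: input/output taken from the specification, without
    # looking at the code
    def test_spec_free_shipping_threshold(self):
        self.assertEqual(shipping_cost(50), 0.0)

    # White-box: chosen from the code so both branches are exercised
    def test_branch_flat_fee(self):
        self.assertEqual(shipping_cost(49.99), 4.99)

if __name__ == "__main__":
    unittest.main()
```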

Test Management

Degrees of Testing Independence (from least to most independent):


• No independent testers:
The only form of testing available is developers testing their own code.
• Independent developers or testers within the development teams:
Could be the developers testing each other's code.
• Independent test team or group within the organization:
Team reporting to project management or executive management.
• Independent testers from the business organization:
Or user community, or with specializations in specific test types, such as usability, security or performance.
• Independent testers external to the organization:
Most independent type, either working onsite or offsite.

*Developers should participate in testing, especially at lower levels, so as to exercise control over the quality of their own work.
*The way in which independence of testing is implemented varies depending on the software development lifecycle model.

Benefits of Test Independence:


• Independent testers are likely to recognize different kinds of failures, because of their different backgrounds, technical perspectives and biases.
• Independent testers can verify, challenge, or disprove assumptions made by stakeholders during specification and implementation of the system.
• A vendor's independent testers can report on the system under test in an upright and objective manner, without pressure from the company that hired them.

Drawbacks of test independence:


• Isolation from the development team:
May lead to a lack of collaboration, delays in providing feedback to the development team, or an adversarial relationship with them.
• Developers may lose their sense of responsibility for the quality of their work
• Independent testers may be seen as a bottleneck
• May lack some important information about the test objects

Test Manager:
The test manager is tasked with the overall responsibility for the test process and the successful leadership of the test activities
• Develop or review a test policy or strategy for the organization
• Plan the test activities by considering the context and understanding the test objectives and risks
• Write and update the test plans
• Coordinate the test plan with project managers and product owners
• Monitor test progress and results and check the status of exit criteria to facilitate test completion activities
• Prepare and deliver test progress reports and test summary reports based on the information gathered
• Adapt planning based on test results and progress, and take any actions necessary for test control
• Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
• Support the selection and implementation of tools
• Promote and advocate for the testers
• Develop the skills and careers of testers

Tester:
• Review and contribute to test plans
• Analyze, review and assess requirements, user stories and acceptance criteria
• Identify and document test conditions, capturing traceability between test cases, test conditions and the test basis
• Design, set up and verify test environments, often coordinating with system administration and network management
• Design and implement test cases and test procedures
• Prepare and acquire test data
• Create the test execution schedule
• Execute tests, evaluate the results and document any deviations from expected results
• Automate tests as needed
• Evaluate nonfunctional characteristics such as performance efficiency, reliability and security
• Review tests developed by others

Test Strategies:
• Analytical --> (e.g. risk-based testing technique)
• Model-based
• Methodical
• Process-compliant
• Directed or consultative
• Regression-averse
• Reactive --> (e.g. exploratory testing technique)
