
Key Concepts in Software Testing

1. Verification vs. Validation

• Verification: Ensures the software is built correctly (conforms to its specification).
  o Conducted during the development stages by developers.
  o Includes static and dynamic activities (e.g., reviews, analysis, simulations, unit testing).
• Validation: Ensures the right software is built (meets user needs).
  o Conducted at the end (system testing) by testers.
  o Involves only dynamic activities (executing the software against the requirements).

2. Levels of Testing

1. Unit Testing
  o Tests individual components (functions, modules, classes) in isolation.
  o Done by developers.
  o Identifies algorithmic and programming errors (a minimal example appears after this list).
2. Integration Testing
  o Tests how units interact with each other.
  o Done after unit testing.
  o Identifies interface-related issues (e.g., parameter mismatches).
3. System Testing
  o Tests the complete software system against its specification.
  o Conducted by testers.
  o Identifies performance issues and functionality failures.
4. Regression Testing
  o Conducted during maintenance, after changes to the software.
  o Ensures new changes don’t introduce defects in previously working features.
  o Helps detect regression bugs (existing functionality breaking due to modifications).
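
To make the unit-testing level concrete, here is a minimal sketch in Python using pytest. The `discount_price` function and its test cases are hypothetical, invented for illustration; they are not from the document.

```python
# test_pricing.py -- a minimal unit test sketch (hypothetical example).
import pytest

def discount_price(price: float, percent: float) -> float:
    """Unit under test: a small, isolated function."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Each test exercises the unit in isolation: a typical input, a boundary,
# and error handling -- where algorithmic and programming errors hide.

def test_typical_discount():
    assert discount_price(100.0, 20.0) == 80.0

def test_zero_discount_is_identity():
    assert discount_price(50.0, 0.0) == 50.0

def test_invalid_percent_raises():
    with pytest.raises(ValueError):
        discount_price(10.0, 150.0)
```

Re-running this same suite after every later modification is exactly regression testing: it confirms that previously working behavior still works.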

3. Testing Activities

• Test Suite Design – Creating a collection of test cases.
• Running Test Cases – Executing the tests on the software.
• Checking Test Results – Comparing actual vs. expected outputs (see the sketch after this list).
• Logging Failures – Preparing test reports and failure lists.
• Debugging & Fixing Bugs – Developers analyze the failure reports and correct the underlying faults.
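
A toy end-to-end sketch of these activities, with a hypothetical `square` function as the software under test: design a small suite, run each case, check actual against expected output, and log the failures.

```python
# A toy driver illustrating the design / run / check / log cycle
# (hypothetical example; real projects use a framework such as pytest).

def square(x):           # software under test (hypothetical)
    return x * x

# Test suite design: each case pairs an input with its expected output.
test_suite = [
    (0, 0),
    (3, 9),
    (-2, 4),
]

failures = []
for inp, expected in test_suite:
    actual = square(inp)                  # running the test case
    if actual != expected:                # checking the result
        failures.append((inp, expected, actual))

# Logging failures: this list feeds the test report that developers
# use for debugging and fixing.
for inp, expected, actual in failures:
    print(f"FAIL: square({inp}) = {actual}, expected {expected}")
print(f"{len(test_suite) - len(failures)}/{len(test_suite)} cases passed")
```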

4. When to Stop Testing?

• Based on bug detection rate: If no new bugs appear over a certain period, testing can stop.
• Based on bug seeding: Introduce known defects and measure how many are found; the detection rate indicates the thoroughness of testing (a worked estimate follows this list).
• The estimate is reliable only if the seeded bugs are representative of real-world defects.

5. Types of Bugs Found in Different Testing Phases

• Unit Testing: Algorithm errors, variable misuse.
• Integration Testing: Interface mismatches, incorrect data exchanges.
• System Testing: Performance issues, usability problems.

6. Importance of Unit Testing

• Why not just do system testing?
  o Debugging is harder at the system level because the fault can lie anywhere in a large codebase.
  o Fixing a bug found in system testing first requires identifying the faulty module.
  o Unit testing is cost-effective and catches errors early, close to where they were introduced.

7. Smoke Testing

• Performed frequently (daily or several times a day).
• Ensures basic functionality works before deeper testing begins (a minimal sketch follows this list).
• Helps detect severe integration issues early.
• Named after the real-world smoke tests used to check pipelines for leaks.
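
A smoke test only has to prove the build is not obviously broken. Here is a minimal, self-contained sketch; the `App` class is a hypothetical stand-in, and in practice you would import your real build instead.

```python
# smoke_test.py -- a minimal smoke test sketch (hypothetical app).
# Run after every build: if this fails, deeper testing is pointless.

class App:
    """Stand-in application so the example is self-contained."""
    def start(self):
        self.ready = True
    def health(self):
        return "ok" if getattr(self, "ready", False) else "down"

def smoke_test():
    # Only the most basic functionality: does the app start and answer?
    app = App()
    app.start()
    assert app.health() == "ok", "basic health check failed"

if __name__ == "__main__":
    smoke_test()
    print("smoke test passed: safe to proceed with deeper testing")
```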

8. System Testing Breakdown

• Functional Testing: Verifies the software behaves as per its requirements.
• Performance Testing: Checks non-functional aspects such as response time, stress handling, recovery, etc.

9. Types of System Testing

• Alpha Testing – Done by the development team before releasing to customers.
• Beta Testing – Done by a friendly group of customers before public release.
• Acceptance Testing – Performed by end users before accepting the software.

10. Performance Testing

• Focuses on non-functional requirements (speed, usability, stability).
• Types:
  o Response Time Testing – Measures speed (a timing sketch follows this list).
  o Throughput Testing – Checks handling capacity.
  o Stress Testing – Tests behavior under extreme loads.
  o Recovery Testing – Checks how well the system recovers from failures.
  o Configuration Testing – Verifies software behavior in different environments.
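
As one concrete example of performance testing, here is a minimal response-time check. The `operation` under test and the 100 ms budget are hypothetical choices, not from the document.

```python
import time
import statistics

def operation():
    # Stand-in workload; replace with a real request or function call.
    sum(range(10_000))

# Measure many samples, since single timings are noisy.
samples = []
for _ in range(100):
    start = time.perf_counter()
    operation()
    samples.append(time.perf_counter() - start)

median = statistics.median(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]

print(f"median: {median * 1000:.2f} ms, 95th percentile: {p95 * 1000:.2f} ms")

# Fail the test if the 95th percentile exceeds the (hypothetical)
# response-time budget of 100 ms.
assert p95 < 0.1, "response-time budget exceeded"
```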

11. Who Performs Testing?

• Programmers: Unit testing.
• Testers: Integration and system testing.
• Users: Acceptance and usability testing.

12. Testing in the Waterfall Model

• Developers perform verification (reviews, analysis, unit tests).
• Testers perform validation (integration and system tests).
• Testers are needed mainly during the testing phase, so their effort is concentrated late in the project, which is inefficient.

13. Pesticide Effect in Testing

• Analogy to pesticides: bugs that survive one type of test cannot be eliminated by repeating the same test, just as insects that survive a pesticide resist further doses of it.
• Eliminating the remaining bugs requires applying multiple testing techniques over time.
• Different testing methodologies act as different bug filters (e.g., equivalence partitioning, path testing, MC/DC testing).

14. Bug Survival Probability

• The effectiveness of testing improves with diverse test techniques.
• If multiple test strategies are used, the probability of a bug surviving them all decreases (a simple model is sketched below).
• This will be analyzed mathematically in the next session.
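
Ahead of that session, a common first-cut model: if each technique independently detects a given bug with probability p_i, the bug survives all of them with probability (1 − p_1)(1 − p_2)···(1 − p_n). The independence assumption and the detection rates below are hypothetical.

```python
# Bug survival under multiple (assumed independent) test techniques.
# Hypothetical per-technique detection probabilities for one bug:
detection_rates = {
    "equivalence partitioning": 0.50,
    "path testing": 0.60,
    "MC/DC testing": 0.70,
}

survival = 1.0
for technique, p in detection_rates.items():
    survival *= (1 - p)   # the bug must slip past every filter

print(f"P(bug survives all techniques) = {survival:.3f}")
# 0.5 * 0.4 * 0.3 = 0.060 -- each added technique shrinks survival.
```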
