SE Unit Iv part 1
Uploaded by cmptup2020

UNIT-IV

Unit IV: Software Testing

Software Testing: A Strategic Approach to Software Testing - Strategic Issues - Test Strategies for Conventional Software - Validation Testing - System Testing. Testing Conventional Applications: White-Box Testing - Basis Path Testing - Control Structure Testing - Black-Box Testing. Testing Web Applications: Testing Concepts for WebApps - The Testing Process - Content Testing - User Interface Testing - Component-Level Testing - Configuration Testing - Security Testing - Performance Testing.

Unit V: Test Automation and Software Maintenance

Software Testing Plan and Test Case Preparation: Introduction - Test Plan - Test Case. Test Automation: Expectations from Test Automation - Limitations - Automation Strategy - Automation Frameworks - Automation Metrics. Software Maintenance: Introduction - Maintenance Activities - Maintenance Process - Maintenance Cost - Maintenance Strategies.

By
Dr. C. Mohanapriya
M.Sc.(CT), M.Phil., SET, Ph.D.
Software Testing Strategies
• A software testing strategy describes:
• the steps to be conducted as part of testing,
• when these steps are planned and then undertaken, and
• how much effort, time, and resources will be required.
• Therefore, any testing strategy must incorporate:
• test planning,
• test case design,
• test execution, and
• resultant data collection and evaluation.
• A software testing strategy should be flexible enough to promote a customized testing approach.
• At the same time, it must be rigid enough to encourage reasonable planning and management tracking as the project progresses.
A STRATEGIC APPROACH TO SOFTWARE TESTING
• To perform effective testing, effective technical reviews should be conducted; by doing this, many errors will be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of the entire system.
• Different testing techniques are appropriate for different software engineering approaches and at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
A Strategic Approach to Software Testing
•Verification and Validation: This section discusses the importance of ensuring that the
software correctly implements a specific function (verification) and that the software meets
its intended purpose (validation).
•Organizing for Software Testing: Explores how testing teams should be organized,
emphasizing the need for independence and clear communication channels.
•Software Testing Strategy - The Big Picture: Provides an overview of different levels of
testing (unit, integration, system, etc.) and how they fit into the overall software
development process.
•Criteria for Completion of Testing: Discusses when testing should be considered complete,
including factors like coverage of requirements, bug discovery rate, and user acceptance.

Verification and Validation
Verification refers to the set of tasks that ensure that software correctly implements a specific
function.
Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Verification: “Are we building the product right?”

Example:
Imagine you're developing a calculator app. During verification, you would check whether each function (like addition, subtraction, multiplication) works correctly as per the design documents. If the design specifies that the calculator should handle decimal numbers, verification would involve testing whether the app correctly adds numbers like 3.5 + 2.7. If the function works as intended, the verification is successful.

Validation: “Are we building the right product?”

Example:
If the customer requested a simple calculator for basic arithmetic but the app you built includes complex scientific functions, it might fail validation, even if all those functions work correctly (as per verification).
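The verification step above can be sketched as a small unit test. The Calculator class and its add() method are hypothetical names used for illustration; the point is that the test checks behavior against the design specification (decimal support).

```python
# Minimal verification sketch for the calculator example: the design
# specifies decimal support, so a unit-level test confirms 3.5 + 2.7.
# Calculator and add() are illustrative names, not a real app's API.
import math

class Calculator:
    def add(self, a, b):
        return a + b

def test_add_handles_decimals():
    calc = Calculator()
    # Compare with a tolerance because of floating-point representation.
    assert math.isclose(calc.add(3.5, 2.7), 6.2)

test_add_handles_decimals()
print("verification passed")
```

If the assertion holds, the function works as the design document intends and verification of this behavior is successful.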
Verification vs. Validation
• Description — Verification: ensures that the software correctly implements a specific function as per design specifications. Validation: ensures that the software meets the needs and requirements of the customer or end user.
• Purpose — Verification: to confirm that the software product is being developed according to the design and technical standards. Validation: to confirm that the final software product fulfills its intended purpose and meets customer expectations.
• Methods — Verification: reviews, inspections, walkthroughs, and static analysis. Validation: testing, user acceptance testing (UAT), and beta testing.
• When — Verification: during the development phase, before the software is completed. Validation: after the software is fully developed, during and after testing phases.
• Example — Verification: a code review is conducted to ensure that a function processOrder() correctly follows the design document. Validation: a retail management system is tested with real customer data to ensure it handles transactions as expected by the client.
Unit, Integration, and System Testing
• Unit Testing — Description: testing individual components or functions of the software in isolation. Purpose: to verify that each unit works as intended. Example: testing a calculateTotal() function in an e-commerce application.
• Integration Testing — Description: testing the interactions between integrated units or components. Purpose: to identify issues in the interaction between units. Example: testing the integration between the calculateTotal() and applyDiscount() functions.
• System Testing — Description: testing the entire system as a whole in a production-like environment. Purpose: to validate that the complete system meets the specified requirements. Example: testing the entire checkout process in an e-commerce application.
Unit Testing Example:

Scenario: In a library management system, you have a function called checkAvailability(bookID) that checks if a particular book is available.
• Test: You write a test to ensure that when the function is called with bookID = 101, it returns true if the book is available and false if it is not.
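The unit test described above can be sketched as follows; the catalog dictionary stands in for the library database, and check_availability is an assumed Python rendering of checkAvailability():

```python
# Minimal unit-test sketch for the library example. The CATALOG dict
# is a stand-in for the real database; names are illustrative.
CATALOG = {101: True, 102: False}  # bookID -> availability

def check_availability(book_id):
    """Return True if the book is available, False otherwise."""
    return CATALOG.get(book_id, False)

def test_check_availability():
    assert check_availability(101) is True   # available book
    assert check_availability(102) is False  # checked-out book
    assert check_availability(999) is False  # unknown book

test_check_availability()
```

Note that the unit is tested in isolation: no other component of the library system is involved.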
Drivers and stubs are auxiliary code used in unit testing to simulate the behavior of components that a particular unit interacts with. They help isolate the unit being tested, ensuring that the test focuses only on that unit without dependencies on other components.

• In most applications, a driver is nothing more than a “main program” that accepts test-case data, passes such data to the component to be tested, and prints relevant results.
• Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested.
• A stub or “dummy subprogram” uses the subordinate module’s interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
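A minimal sketch of a driver and a stub, assuming a hypothetical compute_fine() component under test that invokes a subordinate due-date lookup module (all names here are illustrative, not from any real system):

```python
# Driver/stub sketch: compute_fine() is the component under test;
# lookup_due_date_stub() replaces its subordinate module.
import datetime

def lookup_due_date_stub(book_id):
    """Stub: mimics the subordinate module's interface, prints
    verification of entry, and returns canned data."""
    print(f"stub entered with book_id={book_id}")
    return datetime.date(2024, 1, 1)

def compute_fine(book_id, returned_on, lookup_due_date):
    """Component under test: charges 1 unit per day late."""
    due = lookup_due_date(book_id)
    days_late = (returned_on - due).days
    return max(days_late, 0)

def driver():
    """Driver: a 'main program' that feeds test-case data to the
    component and prints the result."""
    fine = compute_fine(101, datetime.date(2024, 1, 4), lookup_due_date_stub)
    print(f"fine={fine}")

driver()
```

The driver supplies the test data and reports results; the stub lets compute_fine() run without the real due-date module being present.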
Integration Testing Overview
• Integration testing is crucial for ensuring that the individual components of a
software system work together as intended.
• Even after all modules have been unit tested, putting them together can reveal issues related to interfacing, data handling, and interaction between components.
• Integration testing aims to uncover these issues systematically as the software is constructed.
Key Concepts:
1.Interfacing Issues: Data can be lost across interfaces, or components can adversely
affect each other when combined.
2.Incremental vs. Big Bang: Integration can be done incrementally, combining and
testing modules gradually, or using a "big bang" approach, where all modules are
combined at once. Incremental integration is preferred to isolate and fix errors more
easily.
Incremental Integration Strategies
1. Top-Down Integration:
•Approach: Integration starts with the main control module and moves downward
through the control hierarchy.
•Depth-First: Focuses on integrating all components along a major control path before
moving to other paths.
•Breadth-First: Integrates all components directly subordinate to the control module,
moving across the structure horizontally.
Process:
1.Use the main control module as a test driver with stubs for subordinate components.
2.Replace stubs one by one with actual components, testing each as it’s integrated.
3.Continue until the entire program structure is built.
4.Perform regression testing to ensure no new errors are introduced.

Incremental Integration Strategies

•Advantages:
•Verifies major control points early in the process.
•Demonstrates functional capabilities early, boosting stakeholder confidence.
•Challenges:
•Testing at upper levels can be difficult if low-level processing is required.
•Stubs can become complex, requiring careful management.

Incremental Integration Strategies
2. Bottom-Up Integration:
•Approach: Integration starts with the lowest-level modules and moves upward.
•Process:
• Combine low-level components into clusters that perform specific subfunctions.
• Use drivers to coordinate input/output during testing.
• Test each cluster and remove drivers before integrating with higher-level modules.
• Continue until the entire program structure is built.
•Advantages:
• Functionality from lower levels is always available, reducing the need for stubs.
• Errors are easier to isolate within smaller clusters.
•Challenges:
• Drivers are needed for testing, which can add overhead.
• Integration might delay the discovery of issues in high-level modules.
Integration Testing Example:

• Scenario: Continuing with the library system, after checking the availability of a book, the system reserves it using a reserveBook(userID, bookID) function.
• Test: Integration testing would check that checkAvailability() and reserveBook() work correctly together. For instance, if checkAvailability() returns true, reserveBook() should successfully reserve the book for the user.
Regression Testing Overview
• Regression testing is a vital process in software development that ensures changes,
such as new module additions or bug fixes, do not introduce new errors or
unintended behaviors.
• This testing process re-executes a subset of tests that have already been conducted,
focusing on verifying that the recent changes haven’t adversely affected existing
functionalities.

Regression Testing Overview
Key Concepts:
1.Reexecution of Tests: Regression testing involves rerunning previous tests to ensure
that changes in the software have not broken any existing features or introduced new
issues.
2.Propagation of Side Effects: When new modules are integrated or when changes are
made, new data flow paths, I/O operations, and control logic are introduced. Regression
testing checks for unintended side effects that these changes might cause.
3.Manual vs. Automated Testing: Regression testing can be done manually or through
automated tools like capture/playback, which records test cases and their results for
later comparison.
Regression Testing Overview

Regression Test Suite:
The regression test suite typically contains three types of test cases:
• Representative Sample Tests: These tests cover all software functions, ensuring a broad check across the entire application.
• Change-Specific Tests: These focus on areas most likely to be affected by recent changes, ensuring that those changes haven’t introduced new issues.
• Modified Component Tests: These specifically test the components that have undergone changes.
Smoke Testing Overview

• Smoke testing is an integration testing technique used primarily in product software development.
• It serves as a pacing mechanism in time-critical projects, allowing the software team to frequently assess the software's stability and integration progress.
Smoke Testing Overview
Key Concepts:
1.Frequent Builds: Smoke testing involves integrating new builds of the software daily,
with each build containing all necessary components, data files, libraries, and engineered
modules.
2.Quick Assessment: The tests in smoke testing aim to expose major errors quickly,
especially those that could delay the project.
3.These tests are not exhaustive but are thorough enough to ensure that the build is
stable for more detailed testing.
4.Integration Testing Approach: Smoke testing is often done using either top-down or bottom-up integration strategies. It gives teams a daily assessment of progress, helping identify integration issues early.
Benefits of Smoke Testing:
•Minimized Integration Risk: Daily testing reduces the likelihood of severe schedule
impacts due to early detection of major issues.
•Improved Product Quality: By uncovering errors early, particularly functional and
architectural errors, smoke testing helps improve the overall quality of the software.
•Simplified Error Diagnosis: Since errors are often linked to the most recent changes,
diagnosing and fixing them becomes easier.
•Easier Progress Assessment: Frequent tests provide a clear view of integration
progress, boosting team morale and giving managers a reliable way to track project
advancement.
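A daily smoke-test run can be sketched as a fail-fast sequence of critical-path checks; the individual checks below are placeholders for real build verifications, not part of any actual product:

```python
# Smoke-test sketch: each check exercises one critical path of a
# hypothetical daily build; the run fails fast on the first break.
def check_service_starts():
    return True  # e.g. launch the app and confirm it responds

def check_login_works():
    return True  # e.g. authenticate a known test user

def check_core_transaction():
    return True  # e.g. run one end-to-end business operation

SMOKE_CHECKS = [check_service_starts, check_login_works, check_core_transaction]

def run_smoke_tests():
    for check in SMOKE_CHECKS:
        if not check():
            print(f"SMOKE FAIL: {check.__name__}")
            return False
    print("smoke tests passed: build is stable enough for detailed testing")
    return True

run_smoke_tests()
```

The checks are deliberately shallow: the goal is a quick daily go/no-go signal, not exhaustive coverage.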
Strategic Options in Integration Testing

There is an ongoing debate about the best strategy for integration testing, particularly between top-down and bottom-up approaches.
Both have their advantages and disadvantages:
•Top-Down Integration:
• Advantages: Major control functions are tested early, helping to identify significant
control problems early in the process.
• Disadvantages: Requires stubs for lower-level modules, which can complicate
testing and add overhead.
Bottom-Up Integration:
• Advantages: Test cases are easier to design, and there is no need for stubs. Lower-
level functionality is always available, simplifying testing.
• Disadvantages: The full program doesn’t exist until the final module is integrated,
which can delay finding high-level issues.

Combined Approach (Sandwich Testing):

• A hybrid approach, combining top-down testing for upper levels and bottom-up testing for lower levels, is often the best compromise.
• This approach leverages the strengths of both strategies, testing critical modules early while maintaining easier test-case design and reducing the need for stubs.
Integration Test Work Products
• The integration testing process involves several key work products:
1.Test Specification: This document includes a test plan and procedure, detailing the phases
of integration testing, specific tests to be conducted, and the order of module integration.
2.Test Plan: Outlines the overall strategy for integration, including schedules, availability of
unit-tested modules, and descriptions of any necessary overhead software (e.g., stubs and
drivers).
3.Test Procedure: Describes the detailed steps for testing, including the order of integration,
test cases, expected results, and handling of any issues that arise during testing.
4.Test Report: Records actual test results, problems encountered, and any peculiarities,
providing valuable information for future maintenance.
• In summary, regression and smoke testing are essential practices for ensuring that software changes do not introduce new issues and that integration proceeds smoothly, with early detection of errors leading to a more stable and reliable final product.
Example of Regression Testing:

Banking Software
•Scenario: The bank updates its software to include a new loan calculator tool.
•Regression Testing:
• Retest the core functionalities like account balance inquiries, fund transfers, and
transaction history to confirm that these essential features still work correctly
after the new tool's integration.
• Ensure that the new tool does not interfere with the calculation of interests in
savings accounts or the application process for other financial products.

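The banking regression example can be sketched as follows; transfer() represents an existing core feature retested unchanged, and loan_payment() represents the newly added loan calculator (all names, balances, and the standard amortization formula are illustrative):

```python
# Regression-test sketch: after adding a new loan calculator, the
# existing core functionality is re-executed unchanged.
balances = {"A": 100.0, "B": 50.0}

def transfer(src, dst, amount):
    """Existing core feature under regression test."""
    if balances[src] < amount:
        return False
    balances[src] -= amount
    balances[dst] += amount
    return True

def loan_payment(principal, annual_rate, years):
    """Newly added loan calculator (the change that triggers the
    regression run); standard monthly amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

# Regression suite: core behavior must be unchanged by the new tool.
assert transfer("A", "B", 30.0)
assert balances == {"A": 70.0, "B": 80.0}
assert not transfer("A", "B", 1000.0)      # overdraft still rejected
# Change-specific test for the new component.
assert loan_payment(1200, 0.12, 1) > 0
print("regression suite passed")
```

The first three assertions are the re-executed subset of existing tests; the last one is a change-specific test aimed at the modified area.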
Validation Testing

• Validation testing is conducted to ensure that the software system meets the requirements and expectations of the customer.
• This testing occurs after integration testing, when individual components have been combined into a complete system and all interfaces between components have been tested and corrected.
•The Software Requirements Specification (SRS) serves as a guide, outlining all
user-visible attributes and defining the validation criteria.
•Validation testing involves running specific test cases designed to verify that
the software meets all functional, behavioral, performance, and usability
requirements.
•After each test, the results are compared against the expected outcomes. If the
software performs as expected, it is accepted; if not, a deficiency list is created
to document the issues that need to be resolved.

Configuration Review
• A configuration review is an essential part of the validation process.
• It ensures that all elements of the software configuration (such as the code,
documentation, and data) have been properly developed, cataloged, and detailed to
support the software's ongoing maintenance and operation.
•The review, sometimes called an audit, involves checking that all components of the
software configuration are complete and correctly documented.
•This review ensures that the software is ready for deployment and that all
necessary resources are in place to support the software post-release.
•The configuration review is crucial for maintaining the integrity and supportability of
the software.
•It helps prevent issues related to missing or incorrect documentation, incomplete code,
or other configuration problems that could affect the software's performance or
maintainability.
Alpha Testing

• Alpha testing is an initial testing phase where the software is tested by a small group of
end users at the developer’s site.
• This testing aims to identify any issues in a controlled environment before the software is
released to a broader audience.
•The software is used by a representative group of end users, while the developers observe
and record any errors or usability problems that arise.
•The testing is conducted in a natural setting but under the supervision of the developers,
allowing them to quickly address any issues and gather immediate feedback.
•The results of alpha testing help developers identify and fix bugs, improve usability, and ensure
that the software meets the users' expectations.
•It serves as a final check before the software moves on to beta testing.

Beta Testing
• Beta testing is the final testing phase before the software's public release.
• It is conducted by end users in a real-world environment to identify any issues that were not
discovered during alpha testing.
•The software is distributed to one or more end-user sites, where it is used in the users'
everyday environments.
• Unlike alpha testing, the developer is not present during beta testing.
•Users report any problems they encounter to the developer, who then makes necessary
modifications before the software is released to the general public.
•It ensures that the software is robust and ready for release.

Customer Acceptance Testing
• In the case of custom software developed for a specific customer, customer
acceptance testing is performed to validate that the software meets the contractual
requirements before it is officially accepted.
Process:
•The customer conducts a series of specific tests, often based on the agreed-upon
requirements and use cases, to ensure that the software functions as expected.
•This testing can range from informal testing by the customer to a formal, systematic
process that may take days or weeks.
•The software is accepted by the customer if it passes the acceptance tests, or it may
require further modifications if issues are found.
•This phase is critical for ensuring customer satisfaction and fulfilling contractual
obligations.

Validation Testing

• Validation-Test Criteria: Establishes criteria to ensure that the software meets the user's needs and functions in real-world scenarios.
• Configuration Review: Focuses on verifying that all configurations (software and hardware) are set up correctly.
• Alpha and Beta Testing: Discusses the importance of early user feedback through alpha (internal) and beta (external) testing phases.
Validation-Test Criteria, Configuration Review, Alpha & Beta Testing
• Validation-Test Criteria — Description: benchmarks to ensure the software meets user and stakeholder needs. Purpose: to confirm that the software functions correctly in real-world scenarios. Example: a hospital management system ensuring secure handling of patient records.
• Configuration Review — Description: reviewing the software and hardware configurations for testing and deployment. Purpose: to verify that all components are correctly configured for optimal performance. Example: reviewing servers and network settings before deploying a CRM system.
• Alpha and Beta Testing — Description: internal and external testing phases to identify issues and gather feedback. Purpose: to catch bugs early (alpha) and gather real-world feedback (beta). Example: alpha — internal testing of a mobile app; beta — limited release to users for feedback.
System Testing

• System testing is a comprehensive approach to validating and ensuring that a software application integrates correctly with other system components and performs as expected in real-world scenarios.
• The key types of system tests are:
1. Recovery Testing
2. Security Testing
3. Stress Testing
4. Performance Testing
5. Deployment Testing
1. Recovery Testing

• Purpose: To ensure that a system can recover from various faults and resume operation with minimal downtime.
• Example: Consider an online banking system that must recover quickly from a server crash. Recovery testing might involve simulating a server failure and verifying that the system can restore the user’s session and transaction data correctly.
• Automatic Recovery: Check mechanisms like checkpoints and data recovery.
• Manual Recovery: Evaluate mean-time-to-repair (MTTR) to ensure it meets acceptable limits.
2. Security Testing

• Purpose: To verify that the system's protection mechanisms guard against unauthorized access.
• Example: Penetration testing on a healthcare management system.
• Metrics: Vulnerability findings, attack resistance.
3. Stress Testing

• Purpose: To determine how a system performs under extreme conditions.
• Example: Testing an e-commerce site by simulating a high volume of concurrent users (e.g., 10,000 users) to see if it can handle the load without crashing or slowing down significantly.
• Notes:
• Extreme Conditions: High load, large data volumes.
• Sensitivity Testing: Identifies performance issues with valid but extreme input values.
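A stress-test sketch, simulating many concurrent users against a hypothetical handle_request() entry point with threads and verifying that every request is handled (the thread count is scaled down from the 10,000-user scenario for illustration):

```python
# Stress-test sketch: many concurrent "users" hit handle_request(),
# a stand-in for the system under load; we verify none are dropped.
import threading

counter_lock = threading.Lock()
handled = 0

def handle_request(user_id):
    """Stand-in for the system's request-processing path."""
    global handled
    with counter_lock:
        handled += 1

def stress(num_users=1000):
    threads = [threading.Thread(target=handle_request, args=(i,))
               for i in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return handled

print(f"requests handled: {stress()}")
```

In a real stress test the requests would go over the network to the deployed system and the load would be pushed past expected peaks until behavior degrades.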
4. Performance Testing

• Purpose: To measure how well the system performs in terms of responsiveness and stability under various conditions.
• Example: Testing a real-time trading platform to ensure it processes transactions within a specified time frame (e.g., under 2 seconds) even when handling large volumes of trades.
• Notes:
• Continuous Evaluation: Performance is assessed throughout the testing process.
• Instrumentation: Requires detailed monitoring of system resources and execution intervals.
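A performance-test sketch for the trading example: time a hypothetical process_trade() call and assert it stays within the 2-second budget mentioned above (the function body is a trivial placeholder for the real transaction path):

```python
# Performance-test sketch: instrument a call with a timer and check
# it against a response-time budget. process_trade() is illustrative.
import time

def process_trade(order):
    """Stand-in for the transaction-processing path being measured."""
    return sum(order)  # trivial work for illustration

def measure(func, *args):
    """Run func and return (result, elapsed seconds)."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = measure(process_trade, [10, 20, 30])
assert elapsed < 2.0, "trade exceeded the 2-second response budget"
print(f"trade processed in {elapsed:.6f}s")
```

Real performance testing would repeat the measurement under varying load and record resource utilization, not just a single elapsed time.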
5. Deployment Testing

• Purpose: To ensure that the software functions correctly across different environments and platforms, and that installation procedures are effective.
• Example: For a software application intended to run on Windows, macOS, and Linux, deployment testing would involve installing the software on each operating system and verifying that it functions as expected in each environment.
• Cross-Platform Testing: Ensures compatibility with various operating systems and configurations.
• Installation Procedures: Tests the installation process and user documentation.
Summary: Types of System Testing
• Recovery Testing — Purpose: validate fault-recovery mechanisms. Example: simulating server crashes in an online banking system. Key Focus: fault tolerance, MTTR.
• Security Testing — Purpose: verify protection against unauthorized access. Example: penetration testing on a healthcare management system. Key Focus: security mechanisms, attack resistance.
• Stress Testing — Purpose: assess performance under extreme conditions. Example: high-traffic simulation on an e-commerce site. Key Focus: system stability under load.
• Performance Testing — Purpose: measure run-time performance. Example: transaction processing times in a trading platform. Key Focus: responsiveness, resource utilization.
• Deployment Testing — Purpose: check compatibility and installation procedures. Example: software installation on Windows, macOS, Linux. Key Focus: cross-platform compatibility, installation success.
