Software Eng S4 (Software Testing)
• Testing Objectives,
• Unit Testing,
• Integration Testing,
• Acceptance Testing,
• Regression Testing,
• Testing for Functionality and Testing for Performance,
• Top-Down and Bottom-Up
• Testing Strategies:
• Test Drivers and Test Stubs,
• Structural Testing (White Box Testing),
• Functional Testing (Black Box Testing),
• Test Data Suite Preparation,
• Alpha and Beta Testing of Products.
• Static Testing Strategies: Formal Technical Reviews (Peer Reviews), Walk Through, Code Inspection,
Compliance with Design and Coding Standards.
Coding
• It is the phase where we translate a design or algorithm into code in a particular
programming language.
• It is a technical phase, so software engineering offers few guidelines for it; still, we can
describe the characteristics of good code.
• Characteristics of Good Code
• It must be simple and easy to understand (unconditional jumps must be
avoided)
• It must be readable, i.e. we must use proper spacing and comment lines and write code in a
hierarchical fashion
• It must be reusable, i.e. we should code in a modular fashion so that we can reuse previous code.
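These characteristics can be illustrated with a minimal Python sketch (the conversion functions are hypothetical examples, not from the notes):

```python
# Illustrative sketch: simple control flow (no jumps), readable names
# and comments, and small modular functions that can be reused.

def fahrenheit_to_celsius(f: float) -> float:
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32) * 5 / 9

def celsius_to_fahrenheit(c: float) -> float:
    """Convert a Celsius temperature to Fahrenheit.
    One clear responsibility per function keeps each unit reusable."""
    return c * 9 / 5 + 32
```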
Compliance with Design and Coding Standards
• Definition:
• Compliance with design and coding standards refers to adhering to a set of rules,
guidelines, and best practices that govern the process of designing and writing software
code, ensuring consistent, high-quality, and maintainable software products.
• Importance:
• Improves code readability and maintainability
• Facilitates collaboration and communication among team members
• Minimizes the introduction of defects and vulnerabilities
• Reduces development time and costs
• Design Standards:
• Architectural consistency: Ensuring that the overall structure of the software adheres to
established patterns and principles
• Scalability and performance: Designing software to handle future growth and changing
requirements without negatively impacting performance
• Coding Standards:
• Naming conventions: Establishing consistent rules for naming variables, functions, classes,
and other code elements
• Formatting: Defining guidelines for indentation, whitespace, and code layout to improve
readability
• Comments and documentation: Providing clear, concise, and accurate inline comments
and external documentation
• Error handling: Implementing proper error handling and reporting mechanisms to improve
software robustness and reliability
• Compliance Enforcement:
• Code reviews: Conducting regular peer reviews to ensure adherence to design and coding
standards
• Automated tools: Using static code analysis and linter tools to identify deviations from
established standards
• Continuous integration: Integrating and testing code changes frequently to catch issues
early in the development process
• Training and education: Providing team members with training and resources to stay up-
to-date on best practices and standards
• Challenges:
• Balancing strict adherence to standards with development speed and flexibility
• Ensuring that standards evolve as new technologies and best practices emerge
• Obtaining buy-in from all team members and fostering a culture of compliance
Testing
• Because of human error there will be bugs or faults in the code, and if a
bug/fault is executed it becomes a failure.
• Software testing is the process of executing a program with the intention of
finding bugs or faults in the code.
• It is generally very difficult to test software exhaustively (completely)
because the input space is very large. So, we have to write test cases wisely so
that with a minimum of tests we can provide the maximum reliability.
Preparation for Testing
• Before we start testing, we must have a written, complete & approved copy of the SRS.
• The budget, time & schedule for the testing must be written and documented with proper
timelines & milestones which are to be achieved.
• We must have a properly assembled team with well-understood responsibilities.
• We must have a written document of the limitations and scope of testing.
• There are two methods of organizing testing:
• Skill-based approach: In this, a person works on a specific technology for a long time;
here the person has a chance to become a true specialist in that particular area or
technology.
• Project-based approach: Here we assign a test team to a project so that they can have a
much deeper and better understanding of that particular project, which increases the chance
of success.
Objective of Integration Testing
● The primary objective of integration testing is to test the module interfaces, i.e. to check that
there are no errors in parameter passing when one module invokes another, and to verify the
functional, performance, and reliability requirements between the interacting modules.
Integration Testing
• Integration testing is the phase in software testing in which individual software modules are
combined and tested as a group. It occurs after unit testing and before validation testing.
• Purpose is to expose faults in the interaction between integrated units.
• It can be time-consuming and costly due to the complexity of inter-module dependencies. It
requires a lot of coordination between different teams. The sequence of tasks is crucial for
integration testing which may cause delays if not planned properly.
• Types of Integration Testing:
• Big Bang Integration Testing
• Top-Down Integration Testing
• Bottom-Up Integration Testing
• Sandwich/Hybrid Integration Testing
Big-Bang
• Big Bang integration testing is a type of integration testing where all the modules are integrated
simultaneously and then tested as a complete system.
Advantages of Big Bang Integration Testing
• Simplicity: It is easier to set up because testing starts only after all the modules have
been integrated.
• Suitability: It is better suited for smaller systems where the modules are heavily
interlinked.
• Efficiency: It can potentially save time, as testing is conducted after the entire software
has been developed and integrated.
Disadvantages of Big Bang Integration Testing
• Issue Detection and Resolution: It can be challenging to isolate and fix bugs because of the high level
of integration.
• High Risk: There's a high risk involved as any significant issues are only found late in the development
process, which can lead to project delays.
• Resource Consumption: It can be resource-intensive, requiring a significant amount of time and effort
to find and fix bugs.
• Inefficiency in Large Systems: It's inefficient for larger systems where problems can become
increasingly complex and hard to identify when all modules are integrated at once.
Top-Down Integration Testing
• Top-Down Integration Testing is a method of software integration testing where the top-level
modules are tested first and the lower-level modules are tested step by step after that. This
process continues until all components are integrated and the whole system has been
completely tested.
• Stub modules may be used to simulate the effect of lower-level modules that have not yet
been integrated and are called by the routines under test.
Advantages of Top-Down Integration Testing
• Early Defect Identification: Critical high-level design and control flow issues can be detected
at an early stage.
• Facilitates Progressive Testing: Testing is easier and more systematic, progressing from top-
level modules to lower-level modules.
• Supports Early Demonstration: The basic functionality of the system can be demonstrated
early in the testing process, even if lower-level modules are not yet developed or tested.
Disadvantages of Top-Down Integration Testing
• Stub Development: Stubs need to be created for simulating lower-level modules, which may
require additional time and resources.
• Late Detection of Lower-Level Bugs: Bugs in lower-level modules may not be found until the
later stages of testing, which can lead to delays.
• Incomplete Testing: Due to reliance on stubs, some types of errors can be difficult to detect
until full functionality is integrated.
• Difficulty in Test Management: The complexity and dependencies between different modules
can make test management challenging.
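Stubs can be sketched in a few lines of Python. In this hypothetical scenario (all names are illustrative), a high-level report module is ready but the lower-level database module it calls is not yet integrated:

```python
# Top-down integration sketch: a stub replaces the missing
# lower-level module so the high-level module can be tested now.

def fetch_orders_stub(customer_id):
    """Stub: stands in for the unintegrated lower-level module and
    returns fixed, predictable data."""
    return [{"id": 1, "total": 40.0}, {"id": 2, "total": 60.0}]

def order_summary(customer_id, fetch_orders):
    """High-level module under test; its data source is passed in so
    the stub can substitute for the real lower-level module."""
    orders = fetch_orders(customer_id)
    return {"count": len(orders), "grand_total": sum(o["total"] for o in orders)}

# Exercise the top-level module against the stub.
result = order_summary(42, fetch_orders_stub)
```

When the real database module is integrated, it simply replaces `fetch_orders_stub` in the call.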
Bottom-Up Integration Testing
• Bottom-Up Integration Testing is a strategy in software testing where the lower-level modules are
tested first and then integrated and tested with the higher-level modules. This approach often uses
"driver" modules for testing and simulation.
Advantages of Bottom-Up Integration Testing
• Early Problem Detection: It allows early detection of faults and failures in the lower-level
modules of the software.
• No Need for Stubs: Unlike top-down testing, bottom-up testing doesn't require the use of
stubs as testing begins from the lower-level modules.
Disadvantages of Bottom-Up Integration Testing
• Need for Drivers: Drivers need to be created to simulate higher-level modules, which can require
additional time and resources.
• Late Detection of Higher-Level Bugs: Issues in the integration of high-level modules may not become
apparent until late in the testing process.
• Incomplete System Overview: Early stages of testing do not provide a complete view of the system,
which may make it harder to assess overall functionality and performance.
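A driver can likewise be sketched in Python. In this hypothetical scenario (names are illustrative), the low-level discount module is finished, but the higher-level checkout module that will eventually call it does not exist yet:

```python
# Bottom-up integration sketch: a throwaway driver simulates the
# missing higher-level module by calling the unit with known inputs.

def apply_discount(price, percent):
    """Low-level module under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def driver():
    """Driver: stands in for the unwritten higher-level caller."""
    return [apply_discount(200.0, 10), apply_discount(50.0, 0)]

outputs = driver()
```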
Sandwich/Hybrid Integration Testing
• The combination of top-down and bottom-up integration testing is called sandwich integration.
• The system is viewed as three layers, just like a sandwich: the upper layer uses top-down
integration, the lower layer uses bottom-up integration, and
stubs and drivers are used to replace the missing modules in the middle layer.
• Combines Strengths: It combines the advantages of both Top-Down and Bottom-Up testing
approaches to achieve more comprehensive testing coverage.
• Time-Efficiency: The simultaneous testing at both ends can help to reduce the overall testing
time.
• Variety of Scenarios: This approach allows for a wide range of testing scenarios and can lead
to more thorough verification of the system's functionality.
• Flexibility: It offers flexibility in the testing process, as it can be adjusted according to the
nature of the software and the resources available.
System Testing
• System testing is a level of software testing where the complete and integrated software system as a whole is
tested to evaluate its compliance with specified requirements in SRS.
• It is a crucial step before the software gets deployed to the user, aiming to catch any defects that might have
slipped through the earlier stages of testing.
• System testing is performed in an environment that closely resembles the real-world or production
environment.
• Generally, it is performed by independent testers who haven't been involved in the development phase to
ensure unbiased testing.
• It may include functional testing, usability testing, performance testing, security testing, and compatibility
testing.
User Acceptance Testing
• Definition and Purpose: User Acceptance Testing (UAT) is the final testing phase before
software deployment, aiming to validate if the system meets the business requirements and is
fit for use.
• Participants: Usually performed by clients or end-users, UAT evaluates the software's
functionality in a real-world scenario.
• Focus: Emphasizing the software's user-friendliness, efficiency, and effectiveness, UAT goes
beyond purely technical aspects to assess overall user experience.
• Documentation: During UAT, all scenarios, outcomes, and user feedback are recorded to
inform potential changes and improvements.
• Outcome: Successful UAT culminates in user sign-off, signifying the system meets the set
acceptance criteria and is ready for release.
Regression Testing
• Definition: Regression testing is a type of software testing carried out to ensure that
previously developed and tested software still functions as expected after making changes,
such as updates or bug fixes.
• Purpose: The main goal is to identify any issues or defects introduced by changes in the
software, and to ensure that the changes have not disrupted any existing functionality.
• Types: Types of regression testing include unit regression, partial regression, and complete
regression testing.
• Test Cases: Regression testing generally involves re-running previously completed tests and
verifying that program behavior has not changed as a result of the newly introduced changes.
• Automation: Due to the repetitive nature of these tests, regression testing is often automated
to improve efficiency and accuracy.
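A minimal sketch of such an automated regression suite, for a hypothetical `slugify` function, re-runs recorded cases after every change to confirm existing behaviour is unchanged:

```python
# Automated regression sketch (hypothetical function and cases).

def slugify(title):
    """Function under regression test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

# Cases recorded from previously passing runs.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Spaced   Out ", "spaced-out"),
    ("single", "single"),
]

def run_regression():
    """Return the cases whose behaviour has changed (empty list = pass)."""
    return [(t, exp, slugify(t)) for t, exp in REGRESSION_CASES if slugify(t) != exp]
```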
Black box Testing/ White box Testing
• Black Box Testing (Validation) – Where we treat the system as a whole, and check the system
according to user requirements (are we making the right product?), i.e. we check the output for every
input.
• White Box Testing (Verification) – Here we go inside the system and check how the actual
functionality is performed (are we making the product right?).
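The two perspectives can be contrasted on one small unit (a hypothetical example, not from the notes):

```python
# The same unit viewed from the black-box and white-box perspectives.

def classify(n):
    """Classify an integer as 'even' or 'odd'; internally it has two branches."""
    if n % 2 == 0:
        return "even"
    return "odd"

# Black box: check outputs against the specification, ignoring internals.
black_box_cases = {4: "even", 7: "odd"}

# White box: choose inputs so every internal branch executes at least
# once (4 takes the if-branch, 7 takes the fall-through).
branch_coverage_inputs = [4, 7]
covered_branches = {classify(n) for n in branch_coverage_inputs}
```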
Alpha Testing/ Beta Testing
• Any type of testing which is done at the developer's site is called alpha testing; it is usually
performed with artificial test cases.
• Any type of testing which is done at the customer's site is called beta testing; it is usually
performed with real data.
Stress Testing
• Stress testing is also known as endurance testing.
• Stress testing evaluates system performance when it is stressed for short periods of time.
• Stress tests are black box tests which are designed to impose a range of abnormal and even
illegal input conditions so as to stress the capabilities of the software.
Boundary value analysis
• Boundary Value Analysis is a software testing technique that focuses on the values at the
boundaries of the input domain.
• The theory behind BVA is that errors are more likely to occur at the extremes of an input
domain rather than in the center. Hence, it's generally more useful to focus on testing the
boundary values.
• BVA is used for testing ranges and data array elements in a software application.
• In practice, BVA can be applied by identifying all the boundaries and then creating test cases
for the boundary values and just above and below the boundary values.
• Example: Let's consider a simple application that accepts an integer input from 1 to 100.
• The boundary values here would be 0 (just below the valid range), 1 (lower limit), 100 (upper
limit), and 101 (just above the valid range).
• You would then create test cases to input these values and verify the system's behavior.
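The worked example above translates directly into code (a sketch, with the range check standing in for the application):

```python
def accepts(value):
    """System under test from the example: valid input is an integer
    from 1 to 100 inclusive."""
    return 1 <= value <= 100

# Boundary values from the example: just below, on, and just above
# the limits of the valid range.
bva_cases = {0: False, 1: True, 100: True, 101: False}
results = {v: accepts(v) for v in bva_cases}
```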
Equivalence partitioning
• Equivalence Partitioning is a black box testing technique that divides the input data of a
software unit into partitions of equivalent data.
• The logic behind EP is that the system should handle all the equivalent data in the same way,
thus you can save testing effort by testing only one value from each partition.
• It helps to reduce the total number of test cases from an infinite pool to a more manageable
number.
• This technique can be used for both valid and invalid data input.
• Example: Let's consider the same application that accepts an integer input from 1
to 100.
• The equivalence classes here would be: less than 1 (invalid), between 1 and 100
(valid), and greater than 100 (invalid).
• You would then create test cases to input a value from each of these classes (for
example, 0, 50, and 101) and verify the system's behavior.
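The same example, sketched in code with one representative value per equivalence class:

```python
def accepts(value):
    """Same example system: valid input is an integer from 1 to 100."""
    return 1 <= value <= 100

# One representative value per equivalence class, as in the example.
partitions = [
    (0, False),    # class: less than 1 (invalid)
    (50, True),    # class: 1 to 100 (valid)
    (101, False),  # class: greater than 100 (invalid)
]
outcomes = [accepts(v) == expected for v, expected in partitions]
```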
Graph-Based Testing Methods
• Uses graphical representation for software testing.
• Nodes represent states; edges represent transitions.
• Example: For a web app with Login, Dashboard, and Logout screens, draw
nodes for each and edges for transitions. Test paths like Login -> Dashboard ->
Logout.
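The web-app example can be sketched as a transition graph, with a helper that checks whether a candidate test path only uses allowed edges:

```python
# Login/Dashboard/Logout example: nodes are screens (states),
# edges are the allowed transitions between them.
TRANSITIONS = {
    "Login": {"Dashboard"},
    "Dashboard": {"Logout"},
    "Logout": set(),
}

def is_valid_path(path):
    """A test path is valid if every consecutive pair of states is an
    edge in the transition graph."""
    return all(b in TRANSITIONS.get(a, set()) for a, b in zip(path, path[1:]))
```

For instance, Login -> Dashboard -> Logout is a valid test path, while Login -> Logout is not, since no direct edge exists.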
Formal Technical Review (Peer Reviews)
• A Formal Technical Review (FTR) is a structured, systematic, and disciplined approach to
examining and evaluating software artifacts, such as code, design documents, or
requirements specifications, with the primary goal of identifying and addressing defects,
inconsistencies, or areas for improvement.
• FTRs are conducted by a team of peers, consisting of the artifact's author and other software
professionals, who analyze the work product and provide constructive feedback to ensure
high-quality software development.
• Formal Technical Reviews play a crucial role in improving software quality, detecting issues
early in the development lifecycle, promoting knowledge sharing, and fostering collaboration
within software engineering teams.
• Types of Formal Technical Reviews
• Code reviews
• Walkthroughs
• Inspections
• Pair programming
• The Formal Technical Review Process
• A. Planning
• 1. Establish objectives
• 2. Select participants
• 3. Set schedule
• B. Preparation
• 1. Distribute materials
• 2. Review guidelines
• 3. Allocate time for individual review
• C. Review meeting
• 1. Present the work product
• 2. Discuss and record defects
• D. Post-review activities
• 1. Document review results
• 2. Implement action items
• 3. Monitor follow-up actions
• Roles in Formal Technical Reviews
• A. Review leader
• B. Author
• C. Reviewers
• D. Recorder
Walk Through
• The Walkthrough Process
• B. Preparation
• 1. Distribute materials
• 2. Review guidelines
• 3. Allocate time for individual review
• D. Post-walkthrough activities
• 1. Summarize findings
• 2. Assign action item
• 3. Monitor follow-up actions
• Roles in Walkthroughs
• A. Presenter
• B. Reviewers
• C. Recorder
• Benefits of Walkthroughs
• A. Early defect detection
• B. Knowledge sharing
• C. Team collaboration
• D. Training and mentoring
Code Inspection
• Definition:
• Code inspection is a systematic review process in which a team of developers evaluates a
software product's source code for potential issues, such as errors, vulnerabilities, and
deviations from coding standards.
• Objectives:
• Improve code quality
• Detect and fix defects early in the development cycle
• Share knowledge and best practices among team members
• Enforce coding standards and guidelines
• Process:
• Planning: Select the code to be inspected, define goals, and assemble the inspection team
• Preparation: Team members review the code individually to identify potential issues
• Inspection Meeting: The team discusses the identified issues, and the moderator notes
down agreed-upon action items
• Rework: The original developer addresses the identified issues and submits the revised
code
• Follow-up: The moderator verifies that all action items have been addressed and closes
the inspection
• Inspection Team Roles:
• Author: The developer who wrote the code being inspected
• Moderator: The person who leads the inspection process and ensures it runs smoothly
• Reviewers: Other developers who provide insights and suggestions for improvements
• Recorder: The person responsible for documenting the issues found and decisions made
during the inspection
• Benefits:
• Enhanced code quality and maintainability
• Reduced development costs and project risks
• Faster time-to-market due to early detection of defects
• Improved team collaboration and learning
• Limitations:
• Time-consuming process
• Possibility of human errors or oversights
• Potential for conflict among team members
• May not catch all types of defects, such as performance or concurrency issues