Week 9 Software
Verification and validation processes are concerned with checking that software being
developed meets its specification and delivers the functionality expected by the people paying
for the software.
Software verification is the process of checking that the software meets its stated functional and
non-functional requirements.
Validation is a more general process. The aim of software validation is to ensure that the
software meets the customer’s expectations.
1. Development testing
Development testing includes all testing activities that are carried out by the team
developing the system. The tester of the software is usually the programmer who developed
that software. Some development processes use programmer/tester pairs (Cusumano and
Selby 1998) where each programmer has an associated tester who develops tests and assists
with the testing process. For critical systems, a more formal process may be used, with a
separate testing group within the development team. This group is responsible for
developing tests and maintaining detailed records of test results. There are three stages of
development testing:
A. Unit Testing
Unit testing is the process of testing program components, such as methods or object classes.
Individual functions or methods are the simplest type of component. Your tests should be
calls to these routines with different input parameters.
When you are testing object classes, you should design your tests to provide coverage of all
of the features of the object. This means that you should test all operations associated with
the object; set and check the value of all attributes associated with the object; and put the
object into all possible states. This means that you should simulate all events that cause a
state change.
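As a minimal sketch of this idea (the TrafficLight class below is hypothetical, not from the source), a unit test can call the object's operations, check the value of its state attribute, and drive the object through all of its possible states:

import unittest

# Hypothetical class under test: a traffic light with three states.
class TrafficLight:
    STATES = ("red", "green", "yellow")

    def __init__(self):
        self.state = "red"

    def advance(self):
        # Event that causes a state change: red -> green -> yellow -> red.
        i = self.STATES.index(self.state)
        self.state = self.STATES[(i + 1) % len(self.STATES)]

class TrafficLightTest(unittest.TestCase):
    def test_initial_state(self):
        # Check the value of the state attribute after construction.
        self.assertEqual(TrafficLight().state, "red")

    def test_all_state_transitions(self):
        # Simulate every event that causes a state change.
        light = TrafficLight()
        for expected in ("green", "yellow", "red"):
            light.advance()
            self.assertEqual(light.state, expected)

if __name__ == "__main__":
    unittest.main()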
B. Component Testing
Software components are often composite components that are made up of several interacting objects. Testing composite components should therefore focus on showing that the component interface, or interfaces, behave according to their specification. You can assume that unit tests on the individual objects within the component have been completed.
There are different types of interface between program components and, consequently,
different types of interface error that can occur:
Parameter interfaces: These are interfaces in which data or sometimes function references
are passed from one component to another. Methods in an object have a parameter interface.
Shared memory interfaces: These are interfaces in which a block of memory is shared
between components. Data is placed in the memory by one subsystem and retrieved from
there by other subsystems. This type of interface is used in embedded systems, where
sensors create data that is retrieved and processed by other system components.
Procedural interfaces: These are interfaces in which one component encapsulates a set of
procedures that can be called by other components. Objects and reusable components have
this form of interface.
Message passing interfaces: These are interfaces in which one component requests a
service from another component by passing a message to it. A return message includes the
results of executing the service. Some object-oriented systems have this form of interface,
as do client–server systems.
Examine the code to be tested and identify each call to an external component. Design a set
of tests in which the values of the parameters to the external components are at the extreme
ends of their ranges. These extreme values are most likely to reveal interface
inconsistencies.
Where pointers are passed across an interface, always test the interface with null pointer
parameters. (Both this and the previous guideline are illustrated in the sketch that follows these guidelines.)
Where a component is called through a procedural interface, design tests that deliberately
cause the component to fail. Differing failure assumptions are one of the most common
specification misunderstandings.
Use stress testing in message passing systems. This means that you should design tests that
generate many more messages than are likely to occur in practice. This is an effective way
of revealing timing problems.
Where several components interact through shared memory, design tests that vary the order
in which these components are activated. These tests may reveal implicit assumptions made
by the programmer about the order in which the shared data is produced and consumed.
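As a sketch of the first two guidelines above (extreme parameter values and null parameters), consider the hypothetical component below; the function and its range checks are illustrative assumptions, not part of the source:

import unittest

# Hypothetical component called through a parameter interface.
def average_in_range(readings, start, end):
    if readings is None:
        raise ValueError("readings must not be None")
    window = readings[start:end]
    if not window:
        raise ValueError("empty range")
    return sum(window) / len(window)

class InterfaceTest(unittest.TestCase):
    def test_extreme_parameter_values(self):
        # Parameters at the extreme ends of their ranges are most likely
        # to reveal interface inconsistencies.
        readings = [1.0, 2.0, 3.0]
        self.assertEqual(average_in_range(readings, 0, len(readings)), 2.0)
        with self.assertRaises(ValueError):
            average_in_range(readings, 3, 3)  # empty window at the boundary

    def test_null_parameter(self):
        # Always test the interface with null (None) parameters.
        with self.assertRaises(ValueError):
            average_in_range(None, 0, 1)

if __name__ == "__main__":
    unittest.main()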
Sometimes it is better to use inspections and reviews rather than testing to look for interface
errors. Inspections can concentrate on component interfaces, and questions about the
assumed interface behavior can be asked during the inspection process.
C. System Testing
During system testing, reusable components that have been separately developed and off-
the-shelf systems may be integrated with newly developed components. The complete
system is then tested.
Components developed by different team members or sub-teams may be integrated at this
stage. System testing is a collective rather than an individual process. In some companies,
system testing may involve a separate testing team with no involvement from designers and
programmers.
2. Static Testing
Static testing is a software testing technique used to find defects in a software application
without executing the code. Static testing is done to catch errors at an early stage of
development, when they are easier to identify and fix. It also helps find errors that may not
be found by dynamic testing. Common static testing review techniques include:
Informal reviews
Walkthroughs
Technical review
Inspections
During the review process, four types of participants take part in testing: the moderator, who leads the review; the author, who produced the document under review; the scribe, who records the defects found; and the reviewers, who examine the work product.
Defects that are easier to find during static testing include security vulnerabilities,
undeclared variables, boundary violations, syntax violations, inconsistent interfaces, and the like.
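As one minimal illustration (an assumed example, using Python's standard ast module), a syntax violation can be detected by parsing the source without ever executing it:

import ast

def static_syntax_check(source):
    # Report syntax violations without executing the code.
    try:
        ast.parse(source)
        return []
    except SyntaxError as err:
        return ["line {}: {}".format(err.lineno, err.msg)]

# The code below is only analyzed, never run; the defect is found statically.
buggy_source = "def f(x):\n    return x +\n"
print(static_syntax_check(buggy_source))  # e.g. ['line 2: invalid syntax']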
Tips for Successful Static Testing Process
Carry out the inspection process to completely inspect the design of the application.
Use a checklist for each document under review to ensure that all review items are covered
completely. Static testing typically validates the following work products:
1. Use Cases Requirements Validation: It validates that all the end-user actions are
identified, as well as any input and output associated with them. The more detailed
and thorough the use cases are, the more accurate and comprehensive the test cases
can be.
2. Functional Requirements Validation: It ensures that the Functional Requirements
identify all necessary elements. It also looks at the database functionality, interface
listings, and hardware, software, and network requirements.
3. Architecture Review: All business-level processes are reviewed, such as server
locations, network diagrams, protocol definitions, load balancing, database
accessibility, test equipment, etc.
4. Prototype/Screen Mockup Validation: This stage includes validation of
requirements and use cases.
5. Field Dictionary Validation: Every field in the UI is defined well enough to create
field-level validation test cases. Fields are checked for min/max length, list values,
error messages, etc.
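As a sketch of field-level validation (the username field and its 3-12 character limits are hypothetical), boundary test cases check values at and just beyond the documented min/max lengths:

import unittest

# Hypothetical field dictionary entry: username, min length 3, max length 12.
def validate_username(value):
    if not (3 <= len(value) <= 12):
        raise ValueError("Username must be 3-12 characters")

class FieldValidationTest(unittest.TestCase):
    def test_min_max_length_boundaries(self):
        validate_username("abc")         # exactly min length: accepted
        validate_username("a" * 12)      # exactly max length: accepted
        with self.assertRaises(ValueError):
            validate_username("ab")      # below min length: rejected
        with self.assertRaises(ValueError):
            validate_username("a" * 13)  # above max length: rejected

if __name__ == "__main__":
    unittest.main()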
3. Dynamic Testing
Dynamic testing refers to analyzing the code's dynamic behavior in the software. In this type
of testing, you provide inputs and check the outputs against expectations by executing
test cases.
Execution of code: The software's code needs to compile and run in the test environment, and
it should be error-free.
Execution of test cases on the running system: First, identify the features that need to be
tested. Then you need to execute the test cases in the test environment. The test cases must be
prepared in the early stage of dynamic testing.
Inputs are provided during the execution: It is necessary to execute the code with the
required input per the end users' specifications.
Observing the output and behavior of the system: You need to analyze the actual output
after the test execution and compare it to the expected output; this output determines the
behavior of the system. If they match, the test passes. Otherwise, consider the test a
failure and report it as a bug.
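A minimal sketch of one such dynamic test run (the apply_discount function and its values are assumed for illustration): the code is executed with the required input, and the actual output is compared with the expected output to decide pass or fail:

# Hypothetical unit under test.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

test_case = {"input": (200.0, 15), "expected": 170.0}

actual = apply_discount(*test_case["input"])  # execution with the input
if actual == test_case["expected"]:
    print("PASS")
else:
    # Mismatch: the test fails and should be reported as a bug.
    print("FAIL: expected {}, got {}".format(test_case["expected"], actual))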
1. Functional Testing
Functional testing checks the functionality of an application against the requirement
specifications. Each module is tested by providing an input, assuming an expected output,
and verifying the actual result against the expected one. This testing is further divided into four types:
Unit Testing: It tests the code’s accuracy and validates every software module component. It
determines that every component or unit can work independently.
Integration Testing: It integrates or combines each component and tests the data flow between
them. It ensures that the components work together and interact well.
System Testing: It tests the entire system, so it is also known as end-to-end testing.
You should work through all the modules and check the features so that the product fits the
business requirements.
User Acceptance Testing: Customers perform this test just before the software is released to
the market, to check that the system meets the real users' conditions and business specifications.
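As a sketch of the difference between the unit and integration levels (both components below are hypothetical), an integration test checks the data flow between components rather than each component in isolation:

import unittest

# Two hypothetical components whose interaction is being tested.
def parse_order(line):
    item, qty = line.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def total_quantity(orders):
    return sum(order["qty"] for order in orders)

class OrderIntegrationTest(unittest.TestCase):
    def test_data_flows_between_components(self):
        # parse_order's output must be consumable by total_quantity.
        orders = [parse_order("widget, 2"), parse_order("gadget, 3")]
        self.assertEqual(total_quantity(orders), 5)

if __name__ == "__main__":
    unittest.main()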
2. Non-Functional Testing
Non-functional testing checks the quality of the software, that is, whether the software meets
the end users' expectations for usability, maintainability, effectiveness, and performance.
It therefore reduces the production risk associated with the non-functional aspects of the
product.
Performance Testing: In this testing, we check how the software performs under different
conditions. Three types of conditions are the most common subjects of performance testing
(a minimal load-testing sketch follows this list). They are:
Speed Testing: Measures the time required to load a web page with all its components: text,
images, videos, etc.
Load Testing: Tests the software's stability as the number of users gradually increases; that
is, it checks the system's performance under variable loads.
Stress Testing: Finds the limit at which the system breaks under a sudden increase in the
number of users.
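A minimal load-testing sketch, assuming the handle_request function stands in for the real system under test (in practice it would call, say, an HTTP endpoint): the load is increased gradually and the elapsed time observed at each step:

import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical operation under load.
def handle_request(i):
    time.sleep(0.01)  # simulated processing time
    return i

def run_load_test(num_users):
    # Fire num_users concurrent requests and return the elapsed seconds.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        list(pool.map(handle_request, range(num_users)))
    return time.perf_counter() - start

# Gradually increase the load and observe how response time degrades.
for users in (10, 50, 100):
    print("{:4d} users -> {:.3f}s".format(users, run_load_test(users)))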
Security Testing: Security testing reveals the vulnerabilities and threats of a system and
ensures that the system is protected from unauthorized access, data leakage, attacks, and
other issues, so that they can be fixed before deployment.
Usability Testing: This test checks how easily an end user can handle a software system or
application. Additionally, it checks the app's flexibility and its capability to meet the
users' requirements.
Test cases define the sequence of actions required to verify the system functionality. A typical
test case consists of test steps, preconditions, expected results, and actual results. QA teams
usually create test cases from test scenarios. These test scenarios provide a high-level overview
of what QA should test regarding user workflows and end-to-end functionality to confirm that
the system satisfies customer requirements.
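A minimal sketch of that structure (the field names are illustrative, not a standard schema):

from dataclasses import dataclass

# The fields mirror the typical parts of a test case described above.
@dataclass
class TestCase:
    name: str
    preconditions: list
    steps: list
    expected_result: str
    actual_result: str = ""  # filled in during execution

login_case = TestCase(
    name="Valid login",
    preconditions=["User account exists", "Login page is reachable"],
    steps=["Open login page", "Enter valid credentials", "Click 'Sign in'"],
    expected_result="User lands on the dashboard",
)
print(login_case)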
Test automation involves executing tests automatically, managing test data, and using the
results to improve software quality. It is a quality assurance measure, but its success depends
on the commitment of the entire software production team: from business analysts to developers
and DevOps engineers, getting the most out of test automation takes the inclusion of everyone.
It reduces the pressure on manual testers and allows them to focus on higher-value tasks such
as exploratory testing and reviewing test results. Automation testing is essential to achieve
maximum test coverage and effectiveness, shorten the testing cycle, and obtain more accurate
results.
Before developing automated test cases, it is essential to understand the roles that test
cases play throughout the automation process:
Test Coverage: Test cases define the specific scenarios, inputs, and expected outputs that must
be tested.
Test Script Creation: Test cases serve as a blueprint for creating automated test scripts.
Each test case is typically mapped to one or more test scripts.
Test Execution: Automated tests are executed based on the instructions provided by test cases.
Test cases define the sequence of steps performed during test execution.
Test Maintenance: When the software changes due to bug fixes, new features, or updates, the
existing test cases must be updated accordingly. Test cases clearly show what needs to be
modified or added.
Test Result Validation: After test execution, automated tests compare the actual results with
the expected results defined in the test cases. This comparison helps identify discrepancies,
errors, or bugs in the software under test.
Regression Testing: Automated tests can quickly identify any regression issues introduced by
new code changes or updates by executing the same test cases repeatedly.
Test Reporting and Analysis: Test cases provide a structured framework for reporting and
analyzing test results. By associating test cases with specific test outcomes, defects, or issues,
it becomes easier to track the overall progress.
A well-structured automated test case typically includes the following elements:
Specifications: The test case includes the details of the right application state for executing
the test, including browser launch and logins.
Sync and wait statements: These allow the necessary time for the application to reach the
required state before the actual functionality is tested.
Test steps: Writing test steps includes data entry requirements, detailed steps to reach the
next required state, and steps to return the application to its original state ready for
subsequent test runs.
Comments to explain the approach.
Debugging statements to invoke any available debugging functionality that can be used for
code correction and to avoid flaky tests.
Output statements that describe where and how to record the test results.
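A minimal end-to-end sketch combining these elements, assuming Selenium WebDriver with a local Chrome installation; the URL and the element IDs (username, password, submit) are hypothetical:

import logging
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

logging.basicConfig(level=logging.INFO)  # output statements go to the log
log = logging.getLogger("login-test")

# Specifications: launch the browser and open the (hypothetical) login page.
driver = webdriver.Chrome()
driver.get("https://example.com/login")

try:
    # Sync and wait: allow the application to reach the required state.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "username"))
    )

    # Test steps: data entry and the steps to reach the next state.
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()

    # Sync and wait again before checking the outcome.
    WebDriverWait(driver, 10).until(EC.url_contains("/dashboard"))

    # Output statement: record the test result.
    log.info("PASS: login reached the dashboard")
except Exception as exc:
    # Debugging statement: capture state to diagnose flaky failures.
    log.error("FAIL at url=%s: %s", driver.current_url, exc)
    raise
finally:
    # Return the application to its original state after the run.
    driver.quit()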