
SOFTWARE VERIFICATION AND VALIDATION

Verification and validation processes are concerned with checking that software being
developed meets its specification and delivers the functionality expected by the people paying
for the software.
Software verification is the process of checking that the software meets its stated functional and
non-functional requirements.
Validation is a more general process. The aim of software validation is to ensure that the
software meets the customer’s expectations.

[Figure: Test model of a system]

1. Development testing

Development testing includes all testing activities that are carried out by the team
developing the system. The tester of the software is usually the programmer who developed
that software. Some development processes use programmer/tester pairs (Cusumano and
Selby 1998) where each programmer has an associated tester who develops tests and assists
with the testing process. For critical systems, a more formal process may be used, with a
separate testing group within the development team. This group is responsible for
developing tests and maintaining detailed records of test results. There are three stages of
development testing:

A. Unit Testing

Unit testing is the process of testing program components, such as methods or object classes.
Individual functions or methods are the simplest type of component. Your tests should be
calls to these routines with different input parameters.
When you are testing object classes, you should design your tests to provide coverage of all
of the features of the object. This means that you should test all operations associated with
the object; set and check the value of all attributes associated with the object; and put the
object into all possible states. This means that you should simulate all events that cause a
state change.
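To make this concrete, here is a minimal sketch in Python using the standard unittest module. The Turnstile class, its two states, and its operations are hypothetical, invented for this example; the tests exercise every operation, check every attribute, and simulate the events that cause each state change:

```python
import unittest

class Turnstile:
    """A hypothetical two-state component: LOCKED <-> UNLOCKED."""
    def __init__(self):
        self.state = "LOCKED"
        self.coins = 0

    def insert_coin(self):   # event: causes LOCKED -> UNLOCKED
        self.coins += 1
        self.state = "UNLOCKED"

    def push(self):          # event: causes UNLOCKED -> LOCKED
        if self.state == "LOCKED":
            raise RuntimeError("pushed while locked")
        self.state = "LOCKED"

class TurnstileTest(unittest.TestCase):
    def test_initial_state(self):
        self.assertEqual(Turnstile().state, "LOCKED")

    def test_coin_unlocks(self):             # simulate LOCKED -> UNLOCKED
        t = Turnstile()
        t.insert_coin()
        self.assertEqual(t.state, "UNLOCKED")
        self.assertEqual(t.coins, 1)         # check every attribute, not just state

    def test_push_relocks(self):             # simulate UNLOCKED -> LOCKED
        t = Turnstile()
        t.insert_coin()
        t.push()
        self.assertEqual(t.state, "LOCKED")

    def test_push_while_locked_fails(self):  # invalid event from LOCKED
        with self.assertRaises(RuntimeError):
            Turnstile().push()

if __name__ == "__main__":
    unittest.main()
```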

B. Component Testing

Composite components are made up of several interacting objects, and their functionality is
accessed through a defined interface. Testing composite components should therefore focus on
showing that the component interface, or interfaces, behave according to the specification. You
can assume that unit tests on the individual objects within the component have been completed.

There are different types of interface between program components and, consequently,
different types of interface error that can occur:

Parameter interfaces: These are interfaces in which data or sometimes function references
are passed from one component to another. Methods in an object have a parameter interface.

Shared memory interfaces: These are interfaces in which a block of memory is shared
between components. Data is placed in the memory by one subsystem and retrieved from
there by other subsystems. This type of interface is used in embedded systems, where
sensors create data that is retrieved and processed by other system components.

Procedural interfaces: These are interfaces in which one component encapsulates a set of
procedures that can be called by other components. Objects and reusable components have
this form of interface.

Message passing interfaces: These are interfaces in which one component requests a
service from another component by passing a message to it. A return message includes the
results of executing the service. Some object-oriented systems have this form of interface,
as do client–server systems.

How to go about component testing?

Examine the code to be tested and identify each call to an external component. Design a set
of tests in which the values of the parameters to the external components are at the extreme
ends of their ranges. These extreme values are most likely to reveal interface
inconsistencies.
Where pointers are passed across an interface, always test the interface with null pointer
parameters.

Where a component is called through a procedural interface, design tests that deliberately
cause the component to fail. Differing assumptions about failure behavior are one of the most
common specification misunderstandings.
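A minimal sketch of these guidelines in Python with the standard unittest module. The reserve_seats() interface and its parameter ranges are hypothetical assumptions; the tests use values at the extreme ends of the range, a None (null) parameter, and a deliberately provoked failure:

```python
import unittest

def reserve_seats(flight, count, max_seats=500):
    """Hypothetical external interface under test."""
    if flight is None:
        raise ValueError("flight must not be None")
    if not 1 <= count <= max_seats:
        raise ValueError("count out of range")
    return count

class ReserveSeatsInterfaceTest(unittest.TestCase):
    def test_extreme_parameter_values(self):
        # Values at the extreme ends of the range are most likely
        # to reveal interface inconsistencies.
        self.assertEqual(reserve_seats("BA123", 1), 1)      # lower bound
        self.assertEqual(reserve_seats("BA123", 500), 500)  # upper bound

    def test_null_parameter(self):
        # Always test the interface with null (None) parameters.
        with self.assertRaises(ValueError):
            reserve_seats(None, 10)

    def test_deliberate_failure(self):
        # Deliberately cause the component to fail, to check that the
        # caller's failure assumptions match the specification.
        with self.assertRaises(ValueError):
            reserve_seats("BA123", 501)

if __name__ == "__main__":
    unittest.main()
```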

Use stress testing in message passing systems. This means that you should design tests that
generate many more messages than are likely to occur in practice. This is an effective way
of revealing timing problems.
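As an illustration, here is a sketch of a stress test for a message-passing interface using only Python's standard queue and threading modules; the message volume and the producer/consumer structure are illustrative assumptions:

```python
import queue
import threading

def run_stress_test(n_messages=100_000, n_producers=8):
    """Flood the consumer with far more messages than expected in practice."""
    inbox = queue.Queue()
    received = []

    def producer(pid):
        for i in range(n_messages // n_producers):
            inbox.put((pid, i))           # message-passing interface under stress

    def consumer():
        for _ in range(n_messages):
            received.append(inbox.get())  # may expose timing/ordering problems

    producers = [threading.Thread(target=producer, args=(p,))
                 for p in range(n_producers)]
    consumer_thread = threading.Thread(target=consumer)
    consumer_thread.start()
    for t in producers:
        t.start()
    for t in producers:
        t.join()
    consumer_thread.join()

    assert len(received) == n_messages    # no message may be lost under load

if __name__ == "__main__":
    run_stress_test()
    print("stress test passed")
```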

Where several components interact through shared memory, design tests that vary the order
in which these components are activated. These tests may reveal implicit assumptions made
by the programmer about the order in which the shared data is produced and consumed.
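A sketch of this idea in Python: two hypothetical components share a memory area (here a plain dict), and the test varies the order in which they are activated to make the implicit produced-before-consumed assumption visible:

```python
import itertools

shared = {}

def sensor_writer():
    """Hypothetical producer: places data in the shared area."""
    shared["reading"] = 42

def data_processor():
    """Hypothetical consumer: assumes the data is already there."""
    return shared.get("reading", None)   # None reveals the ordering bug

def test_all_activation_orders():
    for order in itertools.permutations([sensor_writer, data_processor]):
        shared.clear()
        results = [component() for component in order]
        # If the processor runs first, it sees None: an implicit
        # ordering assumption this test makes visible.
        print([f.__name__ for f in order], "->", results)

test_all_activation_orders()
```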

Sometimes it is better to use inspections and reviews rather than testing to look for interface
errors. Inspections can concentrate on component interfaces, and questions about the assumed
interface behavior can be asked during the inspection process.

C. System Testing

System testing during development involves integrating components to create a version of
the system and then testing the integrated system. System testing checks that components
are compatible, interact correctly, and transfer the right data at the right time across their
interfaces. It obviously overlaps with component testing, but there are two important
differences:

During system testing, reusable components that have been separately developed and off-
the-shelf systems may be integrated with newly developed components. The complete
system is then tested.

Components developed by different team members or sub-teams may be integrated at this
stage. System testing is a collective rather than an individual process. In some companies,
system testing may involve a separate testing team with no involvement from designers and
programmers.
2. Static Testing

Static testing is a software testing technique used to check for defects in a software
application without executing the code. It is done at an early stage of development, when
errors are easier to identify and cheaper to fix. It also helps find errors that may not be
found by dynamic testing.

Static testing can be classified into four main types of review:

 Informal reviews
 Walkthroughs
 Technical review
 Inspections

During the review process, the following participants take part in testing:

 Moderator: Performs the entry check, follows up on rework, coaches team members, and
schedules the meeting.
 Author: Takes responsibility for fixing the defects found and improves the quality of
the document.
 Scribe: Logs the defects during a review and attends the review meeting.
 Reviewer: Checks the material for defects and inspects it.
 Manager: Decides on the execution of reviews and ensures the review process
objectives are met.

Types of defects that are easier to find during static testing include:

 Deviations from standards
 Non-maintainable code
 Design defects
 Missing requirements
 Inconsistent interface specifications

Usually, the defects discovered during static testing are security vulnerabilities,
undeclared variables, boundary violations, syntax violations, inconsistent interfaces, etc.
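As an illustration, the deliberately defective (hypothetical) Python fragment below contains exactly these kinds of defects, each of which a reviewer or a static analysis tool such as pyflakes or pylint can flag without executing the code:

```python
# Deliberately defective code: the target of static testing, never executed here.

def average(values):
    total = 0
    for i in range(len(values) + 1):   # boundary violation: off-by-one index
        total += values[i]
    return total / len(vals)           # undeclared variable: 'vals' never defined

def Average2(values):                  # deviation from standards: naming style
    pass                               # missing requirement: behavior not implemented
```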
Tips for Successful Static Testing Process

Some useful tips for performing a static testing process in software engineering:

 Focus only on the things that really count
 Explicitly plan and track review activities (software walkthroughs and inspections are
generally combined into peer reviews)
 Train participants with examples
 Resolve people issues
 Keep the process as formal as the project culture allows
 Continuously improve the process and tools
 Remove major delays in test execution to reduce testing cost and time

Why Static Testing?

Static testing is performed for the following reasons:

 Early defect detection and correction
 Reduced development timescales
 Reduced testing cost and time
 Improved development productivity
 Fewer defects at later stages of testing

What is Tested in Static Testing

In static testing, the following artifacts are reviewed:

 Unit test cases
 Business Requirements Document (BRD)
 Use cases
 System/functional requirements
 Prototype
 Prototype specification document
 DB fields dictionary spreadsheet
 Test data
 Traceability matrix document
 User manuals/training guides/documentation
 Test plan strategy document/test cases
 Automation/performance test scripts

How Static Testing is Performed

Static testing is performed in the following ways:

 Carry out the inspection process to completely inspect the design of the application
 Use a checklist for each document under review to ensure all review points are covered
completely

The various activities for performing Static Testing are:

1. Use Cases Requirements Validation: It validates that all the end-user actions are
identified, as well as any input and output associated with them. The more detailed
and thorough the use cases are, the more accurate and comprehensive the test cases
can be.
2. Functional Requirements Validation: It ensures that the Functional Requirements
identify all necessary elements. It also looks at the database functionality, interface
listings, and hardware, software, and network requirements.
3. Architecture Review: Reviews all business-level processes, such as server locations,
network diagrams, protocol definitions, load balancing, database accessibility, test
equipment, etc.
4. Prototype/Screen Mockup Validation: This stage includes validation of
requirements and use cases.
5. Field Dictionary Validation: Every field in the UI is defined well enough to create
field-level validation test cases. Fields are checked for min/max length, list values, error
messages, etc.
3. Dynamic Testing

Dynamic testing refers to analyzing the code's dynamic behavior in the software. In this type
of testing, you provide inputs and check the outputs against expectations by executing test
cases.

Characteristics of Dynamic Testing

 Execution of code: The software's code needs to compile and run in the test environment,
and it should be error-free.
 Execution of test cases on the running system: First, identify the features that need to be
tested. Then execute the test cases in the test environment. The test cases must be
prepared in the early stages of dynamic testing.
 Inputs are provided during execution: The code must be executed with the required
inputs, per the end users' specifications.
 Observing the output and behavior of the system: Analyze the actual output after test
execution and compare it to the expected output. If they match, the test passes;
otherwise, the test fails and should be reported as a bug.
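A minimal sketch of this execute, observe, and compare cycle in Python; the discount() function under test and its expected outputs are assumptions invented for the example:

```python
def discount(price, is_member):
    """Hypothetical code under test: members get 10% off."""
    return round(price * 0.9, 2) if is_member else price

# Each test case: inputs provided during execution + expected output.
test_cases = [
    ((100.0, True), 90.0),
    ((100.0, False), 100.0),
    ((19.99, True), 17.99),
]

for args, expected in test_cases:
    actual = discount(*args)             # execute the code on the running system
    status = "PASS" if actual == expected else "FAIL (report as bug)"
    print(f"discount{args} -> {actual}, expected {expected}: {status}")
```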
1. Functional Testing
Functional testing checks the functionality of an application against the requirement
specifications. Each module is tested by giving an input, assuming an expected output, and
verifying the actual result against the expected one. This testing is further divided into four
types:

 Unit Testing: It tests the code's accuracy and validates every software module component. It
determines that every component or unit can work independently.
 Integration Testing: It integrates or combines the components and tests the data flow between
them, ensuring that the components work together and interact well (see the sketch after this
list).
 System Testing: It tests the entire system, so it is also known as end-to-end testing. You
should work through all the modules and check the features so that the product fits the
business requirements.
 User Acceptance Testing: Customers perform this test just before the software is released to
the market, to check that the system meets the real users' conditions and business specifications.
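Here is the integration-test sketch referred to above, in Python with the standard unittest module: two hypothetical components (a parser and a total calculator) are combined, and the data flow between them is tested:

```python
import unittest

# Two hypothetical components, each already unit-tested in isolation.
def parse_order(text):
    """Component A: turns 'item,qty,unit_price' into a dict."""
    item, qty, price = text.split(",")
    return {"item": item, "qty": int(qty), "unit_price": float(price)}

def order_total(order):
    """Component B: consumes Component A's output."""
    return order["qty"] * order["unit_price"]

class OrderIntegrationTest(unittest.TestCase):
    def test_data_flows_between_components(self):
        # Integration test: feed A's real output into B and check the result.
        order = parse_order("widget,3,2.50")
        self.assertEqual(order_total(order), 7.5)

if __name__ == "__main__":
    unittest.main()
```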
2. Non-Functional Testing
Non-functional testing checks the quality attributes of the software, that is, whether the
software meets the end users' quality expectations. It improves the product's usability,
maintainability, effectiveness, and performance, and hence reduces the production risk of the
non-functional components.

Performance Testing: In this testing, we check how the software performs under different
conditions. Three kinds of conditions are most commonly considered in performance testing:

Speed Testing: Measures the time required to load a web page with all its components: text,
images, videos, etc.

Load Testing: Tests the software's stability as the number of users increases gradually. By this
test, you can check the system's performance under varying loads.

Stress Testing: Determines the limit at which the system breaks due to a sudden increase in the
number of users.
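A simplified load-test sketch using only Python's standard library; handle_request() is a stand-in for a real service call, and the user counts are illustrative assumptions:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for a real service call (hypothetical)."""
    time.sleep(0.01)          # simulate 10 ms of work
    return "ok"

def measure_latency():
    start = time.perf_counter()
    handle_request()
    return time.perf_counter() - start

# Increase the number of concurrent users gradually and observe latency.
for users in (1, 10, 50, 100):
    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(lambda _: measure_latency(), range(users)))
    print(f"{users:3d} users: mean latency {statistics.mean(latencies) * 1000:.1f} ms")
```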

Security Testing: Security testing reveals the vulnerabilities and threats of a system and
ensures that the system is protected from unauthorized access, data leakage, attacks, and other
issues, so that they can be fixed before deployment.

Usability Testing: This test checks how easily an end user can handle a
software/system/application. It also checks the application's flexibility and its capability to
meet the user's requirements.

Dynamic Testing Methodologies


Black Box Testing
In this testing, test engineers test the software according to the requirements and specifications.
They don't need to know about the internal implementation or coding of the software, so
programming knowledge is not necessary for this testing.
Imagine a black-colored box on a table: you don't know what is inside it and can't see in
because of the black color. The same happens in this testing: testers don't know about the
internal structure and can't see it. The tester provides input for a selected test case and then
checks the functionality to determine whether it gives the expected output. If it gives the
expected output, the test is marked as 'pass'.
In the development lifecycle, black box testing is typically performed after white box testing.

White Box Testing


White box testing requires the tester to have coding knowledge, because it tests the internal
coding implementation and algorithms of the system. You can compare it to a transparent box
whose contents are all visible from the outside.
Similarly, in this testing the internal coding of the system is known and visible, hence the
name. For this testing, you execute the program line by line to find whether any line contains
errors; every line should be tested in this way. Inputs and outputs are already known, and
test cases are derived from the source code.
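A sketch of deriving test cases from the source code itself: the tests below are chosen by reading the (hypothetical) grade() function line by line, so that every branch executes:

```python
import unittest

def grade(score):
    """Hypothetical function under white box test."""
    if score < 0 or score > 100:    # branch 1: invalid input
        raise ValueError("score out of range")
    if score >= 50:                 # branch 2: pass
        return "pass"
    return "fail"                   # branch 3: fail

class GradeWhiteBoxTest(unittest.TestCase):
    # One test case per branch, derived by reading the source.
    def test_invalid_branch(self):
        with self.assertRaises(ValueError):
            grade(101)

    def test_pass_branch(self):
        self.assertEqual(grade(50), "pass")   # boundary of the >= 50 condition

    def test_fail_branch(self):
        self.assertEqual(grade(49), "fail")   # just below the boundary

if __name__ == "__main__":
    unittest.main()
```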

Gray Box Testing


Gray box testing combines black box and white box testing; the testers perform both kinds of
tests, hence the name. In this case, testers can see the internal coding partially, so you can say
the code is like a gray box, indicating semi-transparency.
Database testing is a practical example of gray box testing, because you have to test both the
front-end and back-end sides of the DB. The front end comprises UI-level operations, such as
login, where no programming knowledge is needed; that part is black box testing.
4. Creating Test Cases for Automated Tests

Test cases define the sequence of actions required to verify the system functionality. A typical
test case consists of test steps, preconditions, expected results, and actual results. QA teams
usually create test cases from test scenarios. These test scenarios provide a high-level overview
of what QA should test regarding user workflows and end-to-end functionality to confirm that
the system satisfies customer requirements.
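As an illustration, a typical test case record could be expressed as a Python dataclass; the field names mirror the components listed above, and the sample values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Structure of a typical test case."""
    case_id: str
    preconditions: list[str]
    steps: list[str]
    expected_result: str
    actual_result: str = ""        # filled in after execution

login_case = TestCase(
    case_id="TC-LOGIN-001",
    preconditions=["User account exists", "Application is reachable"],
    steps=["Open the login page",
           "Enter a valid username and password",
           "Click the Login button"],
    expected_result="User lands on the dashboard page",
)
print(login_case)
```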

Test automation involves executing tests automatically, managing test data, and utilizing
results to improve software quality. It is a quality assurance measure, but it takes the
commitment of the entire software production team: from business analysts to developers and
DevOps engineers, getting the most out of test automation requires the inclusion of everyone.
It reduces the pressure on manual testers and allows them to focus on higher-value tasks such as
exploratory testing and reviewing test results. Automation testing is essential to achieve
maximum test coverage and effectiveness, shorten testing cycle duration, and improve result
accuracy.

Before developing the automation test cases, it is essential to select the ideal test conditions
that should be automated based on the following factors:

 Tests that need to be executed across multiple test data sets
 Tests that give maximum test coverage for complex and end-to-end functionalities
 Tests that need to be executed across several hardware or software platforms and in multiple
environments
 Tests that consume a lot of time and effort to execute manually

Role of Test Cases in Automated Testing


Test cases play a crucial role in automated testing. They are the building blocks for designing,
executing, and validating automated tests. Here are some key functions that test cases fulfill
in the context of automated testing:

 Test Coverage: Test cases define the specific scenarios, inputs, and expected outputs that must
be tested.
 Test Script Creation: Serve as a blueprint for creating automated test scripts. Each test case is
typically mapped to one or more test scripts.
 Test Execution: Automated tests are executed based on the instructions provided by test cases.
Test cases define the sequence of steps performed during test execution.
 Test Maintenance: When the software changes due to bug fixes, new features, or updates, the
existing test cases must be updated accordingly. Test cases clearly show what needs to be
modified or added.
 Test Result Validation: After test execution, automated tests compare the actual results with
the expected results defined in the test cases. This comparison helps identify discrepancies,
errors, or bugs in the software under test.
 Regression Testing: Automated tests can quickly identify any regression issues introduced by
new code changes or updates by executing the same test cases repeatedly.
 Test Reporting and Analysis: Test cases provide a structured framework for reporting and
analyzing test results. By associating test cases with specific test outcomes, defects, or issues,
it becomes easier to track the overall progress.

How to write Test Cases in Automation Testing?


Writing an automated test case is a complex task that requires a different method than its
manual counterpart. Automation test cases should break workflows down further than
manual test cases do. Templates for automation test cases vary depending on the automation
tool; still, they should all have the following components:

 Specifications: The Test Case includes the details on the right application state for executing
the test, including browser launch and logins.
 Sync and wait statements: This allows the necessary time for the application to get to the
required state before testing the actual functionality.
 Test steps: Writing Test Steps includes data entry requirements, detailed steps to reach the next
required state, and steps to return the application to its original state before test runs.
 Comments to explain the approach.
 Debugging statements to invoke any available debugging functionality that can be used for
code correction to avoid the flakiness of the tests.
 Output statements that describe where and how to record the test results.
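As a sketch of how these components fit together, here is a hypothetical automated test case written with Selenium's Python bindings; the URL, element IDs, and credentials are placeholder assumptions:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Specifications: required application state (browser launch, login page).
driver = webdriver.Chrome()
driver.get("https://example.com/login")   # placeholder URL

try:
    # Sync and wait statement: give the application time to reach the
    # required state before testing the actual functionality.
    wait = WebDriverWait(driver, timeout=10)
    wait.until(EC.presence_of_element_located((By.ID, "username")))

    # Test steps: data entry and actions to reach the next required state.
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Debugging statement: capture evidence to diagnose flaky runs.
    driver.save_screenshot("after_login.png")

    # Output statement: record the test result.
    dashboard = wait.until(EC.presence_of_element_located((By.ID, "dashboard")))
    print("PASS" if dashboard.is_displayed() else "FAIL")
finally:
    # Return the application/browser to its original state.
    driver.quit()
```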
