
Module 4

Test Execution
Test execution is the process of executing the code and comparing the expected and actual
results. The following factors are to be considered for a test execution process:
• Based on risk, select a subset of the test suite to be executed for this cycle.
• Assign the test cases in each test suite to testers for execution.
• Execute tests, report bugs, and capture test status continuously.
• Resolve blocking issues as they arise.
• Report status, adjust assignments, and reconsider plans and priorities daily.
• Report test cycle findings and status.

The following points need to be considered for Test Execution.


• In this phase, the QA team performs the actual validation of the AUT based on the
prepared test cases and compares the stepwise results with the expected results.
• The entry criteria for this phase are completion of the Test Plan and Test Case
Development phases; the test data should also be ready.
• Validation of the test environment setup through smoke testing is always
recommended before officially entering test execution.
• The exit criteria require successful validation of all test cases; defects
should be closed or deferred; the test case execution and defect summary reports
should be ready.
Activities for Test Execution
The objective of this phase is real-time validation of the AUT before moving on to
production/release. To sign off from this phase, the QA team performs different types of
testing to ensure the quality of the product. Along with this, defect reporting and re-testing are
also crucial activities in this phase. Following are the important activities of this phase −
System Integration Testing
The real validation of the product / AUT starts here. System Integration Testing (SIT) is a black box
testing technique that evaluates the system's compliance against the specified requirements /
prepared test cases.
System Integration Testing is usually performed on a subset of the system. SIT can be
performed with minimal usage of testing tools; the interactions exchanged between components
are verified, and the behavior of each data field within an individual layer is also investigated.
After the integration, there are three main states of data flow −

• Data state within the integration layer


• Data state within the database layer
• Data state within the Application layer
Note − In SIT, the QA team tries to find as many defects as possible to ensure quality; finding
the maximum number of bugs is the main objective here.
Defect Reporting
A software bug arises when the expected result doesn't match the actual result. It can
be an error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and
errors made by developers or architects.
While performing SIT, the QA team finds these types of defects, and they need to be
reported to the concerned team members. The members take further action and fix the
defects. Another advantage of reporting is that it eases tracking the status of defects. There
are many popular tools, such as ALM, QC, JIRA, VersionOne, and Bugzilla, that support defect
reporting and tracking.
Defect Reporting is a process of finding defects in the application under test or product by
testing or recording feedback from customers and making new versions of the product that
fix the defects based on the client’s feedback.
Defect tracking is also an important process in software engineering, as complex and business-
critical systems have hundreds of defects. One of the most challenging factors is managing,
evaluating, and prioritizing these defects. The number of defects multiplies over a period
of time, and to manage them effectively, a defect tracking system is used to make the job easier.
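For illustration only (the field names below are hypothetical and not tied to ALM, JIRA, Bugzilla or any other tool), a defect record managed by such a tracking system typically carries information like this:

import java.time.LocalDate;

// Minimal sketch of the data a defect-tracking system manages.
// All names are illustrative, not taken from any real tool's schema.
public class DefectRecord {
    enum Severity { CRITICAL, MAJOR, MINOR, TRIVIAL }
    enum Status { NEW, ASSIGNED, FIXED, RETESTED, CLOSED, DEFERRED, REJECTED }

    String id;                 // e.g. "DEF-101"
    String summary;            // short description of the failure
    String linkedTestCase;     // failed/blocked test case, for traceability
    String linkedRequirement;  // requirement in the RTM that the test covers
    Severity severity;         // impact of the defect
    int priority;              // order in which it should be fixed (1 = highest)
    Status status;             // current position in the defect life cycle
    LocalDate reportedOn;      // when the defect was logged

    DefectRecord(String id, String summary, Severity severity, int priority) {
        this.id = id;
        this.summary = summary;
        this.severity = severity;
        this.priority = priority;
        this.status = Status.NEW;
        this.reportedOn = LocalDate.now();
    }
}

Keeping severity (impact) and priority (fix order) as separate fields reflects the fix/defer decisions described in the next section.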
Defect Mapping
Once a defect is reported and logged, it should be mapped to the concerned failed/blocked
test cases and the corresponding requirements in the Requirement Traceability Matrix. This mapping
is done by the defect reporter. It helps to make a proper defect report and to analyze the
impact on the product. Once the test cases and requirements are mapped to the defect,
stakeholders can analyze and decide whether to fix or defer the defect based on
priority and severity.
Re-testing
Re-testing is executing a previously failed test against AUT to check whether the problem is
resolved. After a defect has been fixed, re-testing is performed to check the scenario under
the same environmental conditions.
During re-testing, testers look for granular details at the changed area of functionality,
whereas regression testing covers all the main functions to ensure that no functionalities are
broken due to this change.
Regression Testing
Once all defects are in closed, deferred, or rejected status and none of the test cases are in
progress/failed/no-run status, it can be said that system integration testing is complete with
respect to the test cases and requirements. However, one round of quick testing is required to
ensure that no functionality is broken due to code changes / defect fixes.
Regression testing is a black box testing technique that consists of re-executing those tests
that have had an impact due to code changes. These tests should be executed as often as
possible throughout the software development life cycle.
Types of Regression Tests
• Final Regression Tests − A "final regression testing" is performed to validate
the build that has not undergone a change for a period of time. This build is
deployed or shipped to customers.
• Regression Tests − A normal regression testing is performed to verify if the
build has NOT broken any other parts of the application by the recent code
changes for defect fixing or for enhancement.
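To make such a regression subset easy to re-execute after every change, teams often tag the tests. Below is a minimal sketch assuming JUnit 5 (the formatAmount helper is a hypothetical stand-in for production code, not part of any framework):

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical example: tests tagged "regression" guard behavior that already worked
// and are re-executed after every defect fix or enhancement.
@Tag("regression")
class AmountFormatterRegressionTest {

    // Stand-in for the code under test; in a real project this lives in production code.
    static String formatAmount(int cents) {
        return String.format("%d.%02d", cents / 100, cents % 100);
    }

    @Test
    void formatsWholeAmounts() {
        assertEquals("10.00", formatAmount(1000));
    }

    @Test
    void keepsLeadingZeroInCents() {
        // Guards the previously working two-digit cents formatting.
        assertEquals("3.05", formatAmount(305));
    }
}

With most JUnit 5 build setups, the tagged subset can then be selected by tag name so the same regression cycle runs after each code change.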
Activity Block Diagram
The following block diagram shows the important activities performed in this phase; it also shows
the dependency on the previous phases −

Test Case Specification :


The Test Case Specification document describes, in detail, what scenarios will be
tested, how they will be tested, how often they will be tested, and so on, for a
given feature. It specifies the purpose of a specific test, identifies the required inputs and
expected results, provides step-by-step procedures for executing the test, and outlines the
pass/fail criteria for determining acceptance.

Test Case Specification has to be done separately for each unit. Based on the approach
specified in the test plan, the feature to be tested for each unit must be determined. The
overall approach stated in the plan is refined into specific test techniques that should be
followed and into the criteria to be used for evaluation. Based on these the test cases are
specified for the testing unit.
However, a Test Plan is a collection of all Test Specifications for a given area. The Test Plan
contains a high-level overview of what is tested for the given feature area.

Reason for Test Case Specification:


There are two basic reasons test cases are specified before they are used for testing:

• Testing has severe limitations and the effectiveness of testing depends heavily on
the exact nature of the test case. Even for a given criterion the exact nature of the
test cases affects the effectiveness of testing.
• Constructing a good Test Case that will reveal errors in programs is a very creative
activity and depends on the tester. It is important to ensure that the set of test
cases used is of high quality. This is the primary reason for having the test case
specification in the form of a document.

The Test Case Specification is developed in the Development Phase by the organization
responsible for the formal testing of the application.

What are Test Case Specification Identifiers?


The way to uniquely identify a test case is as follows:

• Test Case Objectives: Purpose of the test


• Test Items: Items (e.g., requirement specifications, design specifications, code, etc.)
required to run a particular test case. This should be provided in "Notes” or
“Attachment” feature. It describes the features and conditions required for testing.
• Input Specifications: Description of what is required (step-by-step) to execute the test
case (e.g., input files, values that must be entered into a field, etc.). This should be
provided in “Action” field.
• Output Specifications: Description of what the system should look like after the test
case is run. This should be provided in the “Expected Results” field.
• Environmental Needs: Description of any special environmental needs. This includes
system architectures, hardware & software tools, records or files, interfaces, etc.

To sum up, the Test Case Specification defines the exact setup and inputs for one Test
Case.
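As a purely illustrative sketch (the field names below are hypothetical, not mandated by any standard), the identifier fields listed above could be held together like this:

import java.util.List;

// Illustrative sketch only: one way to hold the fields of a Test Case Specification
// described above. Field names are hypothetical.
public class TestCaseSpecification {
    String id;                     // unique test case identifier, e.g. "TC-LOGIN-001"
    String objective;              // purpose of the test
    List<String> testItems;        // requirement/design/code items needed to run the test
    List<String> inputSteps;       // step-by-step inputs (the "Action" field)
    List<String> expectedResults;  // expected outputs (the "Expected Results" field)
    String environmentalNeeds;     // hardware, software, files, interfaces, etc.

    TestCaseSpecification(String id, String objective, List<String> testItems,
                          List<String> inputSteps, List<String> expectedResults,
                          String environmentalNeeds) {
        this.id = id;
        this.objective = objective;
        this.testItems = testItems;
        this.inputSteps = inputSteps;
        this.expectedResults = expectedResults;
        this.environmentalNeeds = environmentalNeeds;
    }
}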


Test Case Specification

After successfully achieving the software testing objectives, the team starts preparing
several reports and documents that are delivered to the client after the culmination of the testing
process. During this stage, these documents and reports are created by the team; they include
details about the testing process, its techniques, methods, test data, test environment, test suite,
etc. The Test Case Specification document, which is the last document published by the testing team,
is a vital part of these deliverables and is mainly developed by the organization responsible for
formally testing the software product or application.

What are Test Case Specifications?


One of the deliverables offered to the client, the test case specification is a document that delivers a
detailed summary of what scenarios will be tested in a software product during the software testing life
cycle (STLC). This document specifies the main objective of a specific test and identifies the
required inputs as well as expected results/outputs. Moreover, it acts as a guide for executing the
testing procedure and outlines the pass & fail criteria for determining acceptance. Test case
specification is among those documents whose format is set by the IEEE Standard for Software &
System Test Document (829-1998). With the assistance of the test case specification document, one
can verify the quality of the numerous test cases created during the software testing phase.
Format for Test Case Specifications:
Developed by the organization that is responsible for formal testing of the software, the test case
specification document needs to be prepared separately for each unit to ensure its effectiveness and
to help build a proper and efficient test plan. The format that is used for creating this
document is:

• Objectives: The purpose of testing the software is defined here in detail. Relevant and
crucial information that can make the process understandable for the reader is mainly
included in this section of the format.

• Preconditions: The items and documents that are required before executing a particular
test case are mentioned with proper evidence and records. Moreover, this section describes the
features and conditions required for testing. Other important details included here
are:

o Requirement Specification.
o Detailed Design Specification.
o User Guides.
o Operations Manual.
o System Design Specifications.

• Input Specifications: Once the preconditions are defined, the team works together to
identify all the inputs that are required for executing the test cases. These can vary based
on the level the test case is written for.

• Output Specification: These include all outputs that are required to verify the test case, such
as data, tables, human actions, conditions, files, timings, etc.

• Post Conditions: Here, the team defines the various environment requirements
stated by the team after the process of testing. Moreover, the team identifies any
special requirements and constraints on the test cases. It consists of details like:



o Hardware: configuration and limitations.
o Software: system, operating system, tools, etc.
o Procedural Requirements: special setup, output location & identification, operations interventions, etc.
o Others: mix of applications, facilities, trainings, among other things.

• Intercase Dependencies: Finally, the team identifies any requirements or prerequisite test
cases. Here, the test cases are documented with references to other requirements and
specifications. To ensure the quality of these test cases, the team identifies follow-on test
cases.
Scaffolding :

Scaffolding, as used in computing, refers to one of two techniques: the first is a code
generation technique related to database access in some model–view–controller frameworks;
the second is a project generation technique supported by various tools.

Code generation:
Scaffolding is a technique supported by some model–view–controller frameworks, in which the
programmer can specify how the application database may be used. The compiler or framework
uses this specification, together with pre-defined code templates, to generate the final code that
the application can use to create, read, update and delete database entries, effectively treating
the templates as a "scaffold" on which to build a more powerful application.
Scaffolding is an evolution of database code generators from earlier development environments,
such as Oracle's CASE Generator, and many other 4GL client-server software development
products.
Scaffolding was made popular by the Ruby on Rails framework. It has been adapted to other
software frameworks, including OutSystems Platform, Express Framework, Blitz.js, Play
framework, Django, web2py, MonoRail, Brail, Symfony, Laravel, CodeIgniter, Yii, CakePHP,
Phalcon PHP, Model-Glue, PRADO, Grails, Catalyst, Mojolicious, Seam Framework, Spring Roo,
JHipster, ASP.NET Dynamic Data, KumbiaPHP and ASP.NET MVC framework's Metadata Template Helpers.

Run-time vs. design-time scaffolding:


Scaffolding can occur at two different phases of the program lifecycle: design time and run time.
Design time scaffolding produces files of code that can later be modified by the programmer to
customize the way the application database is used. However, for large-scale applications this
approach may be difficult to maintain due to the sheer number of files produced, and the fact that
the design of the files was largely fixed when they were generated or copied from the original
templates. Alternatively, run time scaffolding produces code on the fly. It allows changes to the
design of the templates to be immediately reflected throughout the application. But modifying
the design of the templates may be more difficult or impractical in the case of run time scaffolding.

Scaffolding in Ruby on Rails:


When the line scaffold :model_name is added to a controller, Rails will automatically generate all
of the appropriate data interfaces at run time. Since the API is generated on the fly, the
programmer cannot easily modify the interfaces generated this way. Such a simple scaffold is often
used for prototyping applications and entering test data into a database.
The programmer may also run an external command to generate Ruby code for the scaffold in
advance: rails generate scaffold model_name. The generate script will produce files of Ruby
code that the application can use to interact with the database. It is somewhat less convenient
than dynamic scaffolding, but gives the programmer the flexibility of modifying and customizing
the generated APIs.
Note: As of Rails 2.0, dynamic scaffolding is no longer supported.

Server side vs Client side Scaffolding:


Scaffolding techniques based on the application database typically involve Server
side frameworks. Server side web frameworks commonly perform operations directly against
database entries, and code generation for these operations may be considered Server side
Scaffolding. Alternatively, Client side development often uses frameworks that perform data
transport operations instead of directly accessing the database. The focus of Client side
Scaffolding is thus more on generating a starter template for the application as a whole, rather
than generating code to access a database.
Some Client side web frameworks, such as Meteor, allow the client to perform database
operations in a manner similar to Server side frameworks. In this case, Scaffolding techniques can
go beyond merely generating a starter template. They can perform run time scaffolding of web
forms on the Client side to create, read, update and delete database entries. One example of this
is provided by an add-on to Meteor called aldeed:autoform.

Project generation:
Complicated software projects often share certain conventions on project structure and
requirements. For example, they often have separate folders for source code, binaries and code
tests, as well as files containing license agreements, release notes and contact information. To
simplify the creation of projects following those conventions, "scaffolding" tools can automatically
generate them at the beginning of each project. Such tools include Yeoman, Cargo and Ritchie CLI.

Generic vs Specific Scaffolding :

The difference is between the adjectives generic and specific. Specific is something clearly defined
or identified, whereas generic applies to a group of things and is the exact opposite of 'specific'.

So, in testing terms, does the scaffolding serve a generic purpose or a more specific one?
Test Oracles
A test oracle is a mechanism, different from the program itself, that can be used to check the
correctness of a program's output for test cases. Conceptually, we can consider testing as a process
in which test cases are given to the oracle and to the program under test. The outputs of the two
are then compared to determine whether the program behaves correctly for the test cases. This is
shown in the figure.

Testing oracles are required for testing. Ideally, we want an automated oracle, which always
gives the correct answer. However, oracles are often human beings, who mostly calculate by
hand what the output of the program should be. As it is often very difficult to determine
whether the behavior corresponds to the expected behavior, these human oracles may make
mistakes. Consequently, when there is a discrepancy between the program's output and the
oracle's result, we must verify the result produced by the oracle before declaring that there is a
defect in the program.

Human oracles typically use the program's specifications to decide what the correct behavior
of the program should be. To help the oracle determine the correct behavior, it is important that the
behavior of the system or component is explicitly specified and that the specification itself is error-
free; in other words, it should actually specify the true and correct behavior.

There are some systems where oracles are automatically generated from the specifications of
programs or modules. With such oracles, we are assured that the output of the oracle conforms
to the specifications. However, even this approach does not solve all our problems, as there is a
possibility of errors in the specifications. As a result, an oracle generated from the specifications will
produce correct results only if the specifications are correct, and it will not be reliable if the
specifications contain errors. In addition, systems that generate oracles from specifications require
formal specifications, which are often not produced during design.

In computing, software engineering, and software testing, a test oracle (or just oracle) is a
mechanism for determining whether a test has passed or failed.[1] The use of oracles involves
comparing the output(s) of the system under test, for a given test-case input, to the output(s) that
the oracle determines that product should have. The term "test oracle" was first introduced in a
paper by William E. Howden.[2] Additional work on different kinds of oracles was explored
by Elaine Weyuker.[3]
Oracles often operate separately from the system under test.[4] However, method postconditions
are part of the system under test, as automated oracles in design by
contract models.[5] Determining the correct output for a given input (and a set of program or
system states) is known as the oracle problem or test oracle problem,[6]: 507 which is a much
harder problem than it seems, and involves working with problems related to controllability and
observability.

Categories
A research literature survey covering 1978 to 2012[6] found several potential categories of test
oracles.
Specified
These oracles are typically associated with formalized approaches to software modeling and
software code construction. They are connected to formal specification,[8] model-based
design which may be used to generate test oracles,[9] state transition specification for which
oracles can be derived to aid model-based testing[10] and protocol conformance
testing,[11] and design by contract for which the equivalent test oracle is an assertion.
Specified Test Oracles have a number of challenges. Formal specification relies on abstraction,
which in turn may naturally have an element of imprecision as all models cannot capture all
behavior.[6]: 514
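As a hedged illustration of the assertion-style oracle mentioned above (design by contract), the sketch below uses plain Java assertions; the Account class and its postconditions are hypothetical, and assertions must be enabled with the -ea JVM flag:

// Postcondition as a built-in test oracle (design-by-contract style).
// Hypothetical example; compile and run with assertions enabled: java -ea Account
public class Account {
    private long balanceCents;

    public Account(long openingBalanceCents) {
        this.balanceCents = openingBalanceCents;
    }

    public void withdraw(long amountCents) {
        assert amountCents > 0 : "precondition: amount must be positive";
        long before = balanceCents;
        balanceCents -= amountCents;
        // Postconditions: balance decreased by exactly the requested amount
        // and never goes negative. A violation acts as the oracle for a defect.
        assert balanceCents == before - amountCents : "postcondition: wrong debit";
        assert balanceCents >= 0 : "postcondition: balance must not go negative";
    }

    public static void main(String[] args) {
        Account acc = new Account(10_000);
        acc.withdraw(2_500);
        System.out.println("Remaining balance (cents): " + acc.balanceCents);
        // A faulty change to withdraw() (e.g. debiting twice) would trip the
        // postcondition assertion above without any separately computed expected value.
    }
}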
Derived
A derived test oracle differentiates correct and incorrect behavior by using information derived
from artifacts of the system. These may include documentation, system execution results and
characteristics of versions of the system under test.[6]: 514 Regression test suites (or reports) are
an example of a derived test oracle - they are built on the assumption that the result from a
previous system version can be used as aid (oracle) for a future system version. Previously
measured performance characteristics may be used as an oracle for future system versions, for
example, to trigger a question about observed potential performance degradation. Textual
documentation from previous system versions may be used as a basis to guide expectations in
future system versions.
A pseudo-oracle[6]: 515 falls into the category of derived test oracle. A pseudo-oracle, as defined
by Weyuker,[12] is a separately written program which can take the same input as the program or
system under test so that their outputs may be compared to understand if there might be a
problem to investigate.
A partial oracle[6]: 515 is a hybrid between specified test oracle and derived test oracle. It specifies
important (but not complete) properties of the system under test. For example, metamorphic
testing exploits such properties, called metamorphic relations, across multiple executions of the
system.
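Below is a minimal sketch (not taken from the cited papers) of the pseudo-oracle idea: a separately written reference implementation takes the same inputs as a hypothetical routine under test, and any disagreement flags something to investigate:

import java.util.Arrays;
import java.util.Random;

// Pseudo-oracle sketch: the routine under test (fastSum) is checked against a
// separately written reference implementation (referenceSum) on the same inputs.
public class PseudoOracleDemo {

    // Hypothetical routine under test.
    static long fastSum(int[] values) {
        long total = 0;
        for (int v : values) total += v;
        return total;
    }

    // Independently written reference implementation used as the oracle.
    static long referenceSum(int[] values) {
        return Arrays.stream(values).asLongStream().sum();
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int run = 0; run < 1000; run++) {
            int[] input = rnd.ints(100, -1_000, 1_000).toArray();
            if (fastSum(input) != referenceSum(input)) {
                System.out.println("Disagreement on run " + run + ": investigate");
            }
        }
        System.out.println("Pseudo-oracle comparison finished.");
    }
}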
Implicit
An implicit test oracle relies on implied information and assumptions.[6]: 518 For example, there
may be some implied conclusion from a program crash, i.e. unwanted behavior - an oracle to
determine that there may be a problem. There are a number of ways to search and test for
unwanted behavior, whether some call it negative testing, where there are specialized subsets
such as fuzzing.
There are limitations in implicit test oracles - as they rely on implied conclusions and assumptions.
For example, a program or process crash may not be a priority issue if the system is a fault-tolerant
system and so operating under a form of self-healing/self-management. Implicit test oracles may
be susceptible to false positives due to environment dependencies.
Human
When specified, derived or implicit test oracles cannot be used, then human input to determine
the test oracles is required.[7] These can be thought of as quantitative and qualitative
approaches.[6]: 519–520 A quantitative approach aims to find the right amount of information to
gather on a system under test (e.g., test results) for a stakeholder to be able to make decisions on
fit-for-purpose or the release of the software. A qualitative approach aims to find the
representativeness and suitability of the input test data and context of the output from the system
under test. An example is using realistic and representative test data and making sense of the
results (if they are realistic). These can be guided by heuristic approaches, such as gut instincts,
rules of thumb, checklist aids, and experience to help tailor the specific combination selected for
the program/system under test.

Examples
Test oracles are most commonly based on specifications and documentation.[13][14] A formal
specification used as input to model-based design and model-based testing would be an example
of a specified test oracle. The model-based oracle uses the same model to generate and verify
system behavior.[15] Documentation that is not a full specification of the product, such as a usage
or installation guide, or a record of performance characteristics or minimum machine
requirements for the software, would typically be a derived test oracle.
A consistency oracle compares the results of one test execution to another for similarity.[16] This
is another example of a derived test oracle.
An oracle for a software program might be a second program that uses a different algorithm to
evaluate the same mathematical expression as the product under test. This is an example of a
pseudo-oracle, which is a derived test oracle.[12]: 466
During Google search, we do not have a complete oracle to verify whether the number of returned
results is correct. We may define a metamorphic relation[17] such that a follow-up narrowed-
down search will produce fewer results. This is an example of a partial oracle, which is a hybrid
between specified test oracle and derived test oracle.
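The sketch below illustrates that kind of metamorphic relation with a hypothetical in-memory searchCount function standing in for the real search system; the test never knows the correct number of results, only that narrowing the query must not increase it:

import java.util.List;
import java.util.Locale;

// Metamorphic-relation sketch (partial oracle): we cannot say how many results a
// query *should* return, but a narrowed query must never return more results.
public class MetamorphicSearchTest {

    // Stand-in corpus and search; a real test would target the actual system under test.
    static final List<String> DOCS = List.of(
            "software testing basics", "regression testing guide",
            "unit testing in java", "cooking pasta at home");

    static long searchCount(String query) {
        String q = query.toLowerCase(Locale.ROOT);
        return DOCS.stream().filter(doc -> doc.contains(q)).count();
    }

    public static void main(String[] args) {
        long broad = searchCount("testing");
        long narrowed = searchCount("regression testing");
        // Metamorphic relation: narrowing the query cannot increase the result count.
        if (narrowed > broad) {
            throw new AssertionError("Metamorphic relation violated");
        }
        System.out.println("broad=" + broad + ", narrowed=" + narrowed + " (relation holds)");
    }
}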
A statistical oracle uses probabilistic characteristics,[18] for example with image analysis where a
range of certainty and uncertainty is defined for the test oracle to pronounce a match or
otherwise. This would be an example of a quantitative approach in human test oracle.
A heuristic oracle provides representative or approximate results over a class of test
inputs.[19] This would be an example of a qualitative approach in human test oracle.
Self Checks as Oracles:

Rather than comparing actual values, self-checks use properties of the results to judge correctness.
They take the form of assertions, contracts, and other logical properties.
@Test
public void multiplicationOfZeroIntegersShouldReturnZero() {
    // "tester" is the calculator instance under test.
    assertEquals("10 x 0 must be 0", 0, tester.multiply(10, 0));
    assertEquals("0 x 10 must be 0", 0, tester.multiply(0, 10));
    assertEquals("0 x 0 must be 0", 0, tester.multiply(0, 0));
}

@Test
public void propertiesOfSort() {
    String[] input = {"banana", "apple", "cherry"};
    String[] sorted = quickSort(input);   // quickSort is the function under test
    // Property: sorting must not change the number of elements.
    assertEquals("Sorted array must keep the same length", input.length, sorted.length);
    // Property: every adjacent pair must be in non-decreasing order.
    for (int i = 1; i < sorted.length; i++) {
        assertTrue("Elements must be in order", sorted[i - 1].compareTo(sorted[i]) <= 0);
    }
}

Self-checks are usually written at the function level:
• For one method or one high-level "feature", with properties based on the behavior of that function.
• They work for any input to that function.
• They are only accurate for the specified properties:
o Faults may be missed if the specified properties are still obeyed.
o More properties = more expensive to write.

Capture/Replay Tool:

GUI capture & replay tools have been developed for testing applications through their graphical user
interfaces. Using a capture and replay tool, testers can run an application and record the
interaction between a user and the application. The script is recorded with all user actions,
including mouse movements, and the tool can then automatically replay the exact same interactive
session any number of times without requiring human intervention. This supports fully
automatic regression testing of graphical user interfaces.
Tools for GUI Capture/Replay:

• QF-Test (QFS): www.qfs.de/en/qftest/
• SWTBot (Open Source): http://wiki.eclipse.org/SWTBot/UsersGuide
• GUIdancer and Jubula (BREDEX): http://testing.bredex.de/
• TPTP GUI Recorder (Eclipse): http://www.eclipse.org/tptp/

Many applications today are too comprehensive to be tested manually with reasonable effort.
Capture and Replay tools automate test procedures by interacting with the graphical user
interface of the application under test. A Capture and Replay tool is a type of test execution tool
that records entries during a manual test with the goal of creating automated test scripts that can
then be used repeatedly. Capture and Replay – sometimes referred to as Capture and Playback –
is often used to support automated regression testing.
The Capture and Replay procedure
Capture and Replay defines four essential steps:
• Capture mode records user interactions with user interface elements. The result of Capture
and Replay is a script that documents both the test process and the test parameters.

• The scripts, which are usually defined and editable in XML formats, can be used to describe
simple test scenarios or complex test suites.

• During test evaluation, it must be verified whether previously defined events occur or
errors occur. Output formats, database contents or GUI states are checked and the results
documented accordingly.

• Test scenarios can be easily reproduced by repeatedly replaying (replay mode) the previously
recorded scripts. Individual elements of the user interface are also recognised if their
position or shape has changed. This works because capture mode, for example, not only
saves the behavior of the mouse pointer, but also records the corresponding object ID at
the same time.
Capture and Replay were developed to test the applications against graphical user interfaces. With
a capture and replay tool, an application can be tested in which an interactive session can be
repeated any number of times without human intervention. This saves time and effort while
providing valuable insights for further application development.
Sensitivity in capture and replay process:

The Interactive Test Capture (ITC) tool enables you to create automated tests interactively.
Automated tests can be used by programmers to manage code quality, detect code regression or
instability. An automated test is created by launching V5 using a specific capture mode during
which all user interactions are recorded. Once recorded, an automated test can be replayed on all
V5 supported platforms, independently of the recording platform.
ITC vs. CUT: ITC needs CUT to run since it is completely integrated in mkodt. With ITC you can
capture scenarios interactively whereas it is necessary to code the scenario when using CUT. A non
developer can use ITC to capture and replay scenarios. Developer skill is needed for failing replay
analysis.

This documentation explains how to use ITC and provides some insight on the internal mechanisms
used by the V5 capture/replay engine. A sample use case is provided along with a list of known
limitations and best practices.

As programs grow and become more and more complex, ensuring code quality becomes a more
and more challenging task. To ensure a good software quality, one might consider a multiple level
approach: formal analysis of the source code, automated tests, and manual tests.

ITC is a product dedicated to automated tests. Basically, it allows an interactive scenario to be
recorded and replayed. The scenario needs only to be recorded once, and can be replayed as many
times as necessary. This provides the developer with a way to make impact analysis or to check
for regressions. The tester saves a tedious repetition of manual tests.

The V5 record engine works at "V5 device level": what are recorded are mouse clicks, mouse
moves and keyboard events that are translated into V5 system events. Contrary to other
commercial automatic test tools, it takes advantage of its full integration into the V5 platform to
provide a good stability:

• Replay is not sensitive to the operating system. For instance, record can be done on
Windows and replay can be done on both Windows and Unix
• Replay is not sensitive to screen resolution
• Replay is not sensitive to window position, dialog box layout, toolbar position, and toolbar
layout
• Replay is not sensitive to the number of items or the order of items in combo-boxes or list-
boxes.

The purpose of ITC is to detect changes between code levels. This happens when a scenario cannot
be replayed anymore. When such a thing happens, it is left to the developer's responsibility to
analyze whether the failure is a regression symptom or not. In some cases, the failure is normal
because the software user interface or its internals have changed. In other cases, the failure is a
true regression symptom. A discussion on how to make both stable and relevant automated tests
can be found later in this document.

In particular, ITC does not provide any mechanism to ensure stability between a V5 code level and
another. No stability is ensured between V5 releases, service packs or hot fixes.

Other restrictions are:

• ITC is built on a technology that applies to Wintop based products only (no Webtop
support)
• Only file based scenarios are supported. No VPM interoperability is supported.
• Scenarios must be captured and replayed on V5 using the English language, installed on a
machine that uses the US-English locale.

Basic Tasks
Recording and Replaying

ITC product is accessed through mkodt tool. "-C" option is used for capture. No specific option is
necessary for replay.

• Create a shell file containing at least the "SetOdtParam TYPE=RECORD" command and the
name of the V5 executable to launch. In case of CATIA V5 it will be:

SetOdtParam TYPE=RECORD
CNEXT

Save the file under a given name in the TestCases directory, for instance
TestCases/MyRecord.sh.

In a licensed environment you need to activate the license for CNEXT. Create a
FunctionTest/InputData/MyRecord.rec directory, and drop a valid licensing.CATSettings
files that will activate the necessary license for capturing and replaying the scenario.

• Launch the command

mkodt -s MyRecord -C

This starts a V5 interactive session.

• Perform your scenario in the V5 session and end it by exiting the application.

The returned error code should be 0, meaning the record has successfully been captured.

Notice that the FunctionTests/InputData/MyRecord.rec directory has been created. This
directory contains three files: capture.rec, capture.env, and capture.ver.

o capture.rec contains the record data. This is a binary file.
o capture.env contains the list of authorized paths for the record. This is a text file.
o capture.ver specifies the level used for the record. This is a text file and is used for
record versioning.

More explanations of the content of those files will be given in the record engine internal
description section.

• Replay the test by launching the command:

mkodt -s MyRecord
Notice that the test is replayed on screen. The return code should be 0 if the test succeeds.
Using Settings

When recording a session, all V5 defaults settings values are used. If your scenario needs to be
recorded and replayed with non default setting values, simply put the necessary *.CATSettings
files in the .rec directory.

Those setting files are the same that are created and used when launching a normal V5 session.

In a licensed environment, a valid licensing.CATSettings file is needed to activate the necessary
licenses for the record and replay scenario.

Using P2 UI Level

By default, in capture mode, V5 sessions are launched using the P1 UI level. This is a behavior
difference between a normal V5 session and a record capture session. To obtain the P2 UI level
you need to put a copy of the FrameGeneral.CATSetting file into the *.rec directory.

It is recommended to use a clean FrameGeneral.CATSettings file. To obtain it, remove your
FrameGeneral.CATSettings file in your setting directory, launch a normal V5 session and exit
immediately.

Opening Documents

Access the necessary files using the File->Open command. Authorized paths for document files
are:

• "FW.tst/FunctionTests/InputData/"
• "FW.tst/FunctionTests/Output/"
• $ADL_ODT_TMP

Where:

• FW.tst is the test framework
• $ADL_ODT_TMP is a temporary directory that exists only while the scenario is captured or
replayed. You can choose it to open or save documents in the file selection box.

The record engine automatically makes file name and path name conversions to ensure stability
when the replay environment differs from the record environment (Unix vs. Windows).

However, to ensure stability when replaying Windows recorded scenario on Unix, make sure that
file paths entered in the "File Open" or directory chooser dialog box at record time have exactly the
same case as the actual directory structure on the machine. Otherwise, the scenario might capture
and replay correctly on Windows but fail on Unix.
Saving Documents

Saving files can be used to test the save functionality itself or to obtain documents that can be
compared to a reference later in the automated test. ITC does not provide such tools for document
comparison.

Save the file using the File->Save command. Authorized path for files are:

• $ADL_ODT_TMP
• "FW.tst/FunctionTests/Output/"

The $ADL_ODT_TMP directory is deleted at the end of mkodt whereas the content of "Output" is
kept. To avoid disk space saturation when replaying a large number of ODTs, use $ADL_ODT_TMP
to save documents.

Using the Cache System

Cache system location is stored in CATIAV5Cache.CATSettings file. Enter ${ADL_ODT_TMP} in
Cache Management directory to make sure that cgr files are generated to a valid directory. Also,
this directory is automatically cleaned, avoiding risk of disk saturation.

Capture Recommendations, Limitations and Tips

• Play your scenario once before capturing it to check that it works properly. Do not forget
that a record ending with an abend is not captured.
• Do not move your mouse too often since every movement is recorded. The less
interactions, the better.
• Do not use the wheel or mouse scroll button to scroll the specification tree, otherwise you
will not be able to replay the record. The wheel interaction is not supported by the
record/replay engine.
• Do not resize the MDI Client Window (document window). On the Unix platform it is not
possible to do it at capture time.
• Do not use the Window menu. Window size management controls ("maximize" and
"minimize" buttons, for instance) at the top-right corner of the window are not available.
• Do not use the MRU in the File or Start menu to load documents
• Bear in mind that any specification tree alteration (such as the insertion of a node) may
prevent the record to work properly.
• Check that a creation command works correctly by recording a selection and a deletion
through contextual menu of the created object afterwards
• Interactions with the compass can be recorded on condition that a local transformation
(such as a translation) of the 3D viewer is done before doing the first interaction.
• Some elements cannot be recorded:
o Preselection navigator (also known as drill selection)
o Drag and Drop
o Contextual menus on toolbars
o Context-sensitive help (also known as tool tips)
• The timing of interactions is not recorded. At replay, recorded interactions are launched
one after the other. If a scenario depends on a precise timing sequence of interactions,
then the replay will probably fail. In other words the elapsed time between two
interactions is not a stable value, and might depend on machine hardware configuration,
environment, etc.
• Macros cannot be recorded and replayed.

Replay Limitations and Tips

• CATDlgNotify dialog boxes launched using the CATDlgNotify::DisplayBlocked method are
never displayed at replay. The interactions made in those boxes are recorded and replayed
but the box itself is never shown at replay.
• Menus and contextual menus: interactions in menus are recorded and replayed, but the
menus are not visually expanded at replay.
• Modal dialog boxes are not modal at replay. If a modal dialog box appears at replay while
it was not there at capture time, it does not prevent the record engine going on replaying
the remaining interactions.

Common Return Code
Tip

The return code is the return code of the shell passed to mkodt using -s argument [1]. The return
code generated by the record engine can be overloaded in this shell. Before making an analysis
based on a given return code please check that it corresponds to the actual return code generated
by the replay engine.
Code 1

Code 1 is used by the replay engine when the scenario cannot be replayed anymore. Most
common situations are:

• "CATCommand not found": a notification went through a CATCommand at capture time,


but this CATCommand was not there at replay time. Since all Dialog objects are
CATCommands, this happens when any kind of element is missing in the UI: icon, menu or
contextual menu item, dialog box, etc...
• "Invalid record buffer": the capture.rec file is corrupted or its end has been reached in an
unexpected way. This situation might happen when an unexpected CATDlgNotify message
box is launched at replay time.
• "You want to create a new CATCommand named xxx, but you already have in your path a
CATCommand with the same name": two instances of CATCommand with the same name
share the same parent. This situation is not supported by the replay engine. Either this is a
design error, or one of the instances has not been deleted, revealing a life-cycle object
problem. A common mistake leading to this error is to use the same toolbar in two
different workbenches. The solution is to rename one of the toolbars.

Code 10

This code applies to problems encountered with dialog objects. Please read the associated
message to get a description of the problem. Common situations are:

• Visibility or sensitivity stability check fails

To ensure non-regression of the UI, the V5 record/replay mechanism checks that the visibility and
sensitivity of dialog objects on which interactions are made are stable. At capture time, the visibility
and sensitivity status of the interacted objects is stored along with the nature of the interaction.
If this status is different at replay time, then the replay fails.

An exception to this is the contextual menu, on which only the sensitivity check is applied.

• Missing lines in a CATDlgCombo or a CATDlgSelectorList

When selecting a line in a combo-box or a list-box, the record engine stores the content of
the line as a string. This ensures stability towards the change of the order and number of
items in the list. When the string is missing at replay time, the replay fails.

• Missing line in a CATDlgMultilist

When selecting a line in a multi-column list, the record engine stores the whole line as a
separate string for each column. At replay time, the selected line is restored based on the
content of the first visible column.

Feedback :

User Feedback: The Ultimate Guide

Matei Culcer

April 6, 2021

Table of contents:

• Introduction
• What is user feedback?
• Why is it important to collect user feedback?
• How should you build a user feedback strategy?
• What types of feedback exist?
• What should you do with negative feedback?
• What should you avoid when collecting user feedback?
• What tools should you use to capture user feedback?
• Best practices of user feedback management
• Conclusion

Introduction

It was 1989 when 2 editors of Inc. magazine, George Gendron and Bo Burlingham made the
nervous drive to Palo Alto, California. Not long beforehand they’d decided on who to name as
Inc.’s Entrepreneur of the Decade, and finally, they would get a chance to interview him.

As they entered the offices of NeXT, their interviewee approached them. In his trademark jeans
and turtleneck sweater, Steve Jobs led them up the stairs to his office and the interview
commenced.

Securing an interview with Steve Jobs was rare, even in 1989. And, wanting to make the most of
their time, the editors got straight to the point with their very first question:

“Where do great products come from?”


After a slight pause, and a shuffle in his chair, Jobs replied:

“I think really great products come from melding two points of view; the technology point of view
and the customer point of view. You need both. You can't just ask customers what they want and
then try to give that to them. By the time you get it built, they'll want something new.”

Silence overshadowed the room. Three decades later, and this powerful answer Jobs gave is
something that still isn’t often internalized in companies.

Collecting user feedback is incredibly important. As you’ll see examples of later in this article,
launching surveys, asking onboarding questions, and conducting customer interviews are all vital
tools for improving your product.

But the true lesson that Steve Jobs gave all this time ago was that user feedback isn’t as simple as
asking what users want, or what they think about your product, and making those changes. You
have to dive much deeper.

After gathering user feedback, it’s up to you to connect the dots and understand the real desires
beneath the surface.

As Henry Ford is famously quoted as saying: “If I’d asked my customers what they wanted, they’d
have asked for a faster horse.”

What did his users really want? A faster way of getting from A to B.

User feedback may not have given the right solution, but it would have identified the deeper
customer desire. Speed.

So as you read on and learn all about user feedback, bear this important lesson in mind – collecting
user feedback is great, but the customer shouldn’t always be trusted to come up with the solution.
What is user feedback?

User feedback describes the kind of responses elicited from those who use your product. In fact,
it’s a rather simple concept, though it’s not always the easiest to hear. Ideally, your consumer base
will tell you exactly what you want to hear: that they’ve enjoyed using your product and will
continue to in the future. However, constructive feedback is just as integral to the process of
product development, and any company should welcome and use any criticisms to its advantage.

To receive the most helpful information, it’s vital that your company begins by asking the right
questions. No matter the medium, surveying users should be able to realistically indicate the
performance of your product. After all, it’s been made for a specific use; and given the opportunity,
the users themselves will often tell you exactly what they think about it.

User feedback can primarily suggest changes and additional features for the product, as well as
whether or not users experienced any difficulties (and on what scale) when trying it out. While all
of these elements of feedback can provide the fuel for product development, the baseline
question should always be whether the customer’s needs are satisfied by the product. Depending
on a variety of factors, users might invest in a product that could use some improvement; but
seldom will customers continue to utilize a tool that doesn’t do what they need it to – if there are
viable alternatives. Of course, asking your users first-hand is the only way to truly gauge how they
feel about your product.

Partition:
What is Equivalence Partitioning Testing?
Equivalence Partitioning, also called equivalence class partitioning (ECP), is a software testing
technique that divides the input test data of the application under test into partitions of equivalent
data from which test cases can be derived, with each partition covered at least once.
An advantage of this approach is that it reduces the time required for testing the software because
of the smaller number of test cases.
Example:
The Below example best describes the equivalence class Partitioning:

Assume that the application accepts an integer in the range 100 to 999
Valid Equivalence Class partition: 100 to 999 inclusive.

Non-valid Equivalence Class partitions: integers less than 100, and integers greater than 999.
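As a minimal sketch of how these partitions turn into test cases (assuming JUnit 4, matching the assertion style used earlier in these notes; acceptsValue is a hypothetical stand-in for the application's input validation), one representative value is picked from each partition:

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Equivalence partitioning sketch for "accepts an integer in the range 100 to 999".
// One representative value per partition is enough to cover that partition.
public class RangeInputPartitionTest {

    // Hypothetical stand-in for the validation logic of the application under test.
    static boolean acceptsValue(int value) {
        return value >= 100 && value <= 999;
    }

    @Test
    public void valueFromValidPartitionIsAccepted() {
        assertTrue("500 lies in the valid partition 100-999", acceptsValue(500));
    }

    @Test
    public void valueBelowRangeIsRejected() {
        assertFalse("99 lies in the invalid partition below 100", acceptsValue(99));
    }

    @Test
    public void valueAboveRangeIsRejected() {
        assertFalse("1000 lies in the invalid partition above 999", acceptsValue(1000));
    }
}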

Restrictions :
Use cases cover only the functional requirements. The testing of non-functional requirements
may not be possible. Use cases are written from the user's perspective.

Visibility :

It is easy for testing work to be invisible or illegible. Most of it happens in your head, and
sometimes, when the software is pretty good to begin with, there is no evidence that testing
actually happened.
One of the things I am pretty regularly asked to do is review a testing practice. I usually find that
the testers are there, they are working hard, and they care about their work. The problem is that
their work is completely hidden. No one can see it, and because of that, no one understands it or
how it might be improved.

So, how can you make your testing work visible to your team?
Scrum

Most of the daily scrums I have been part of were difficult for testers. We went around the room,
starting with developers and product people. If testers were called on at all for a status update, it
was at the very end, and most people were distracted or uninterested by that point. A discussion
would sometimes start before the testers said their bit, and we'd run out of time.

When one of the testers did get called on, they would say something like, "I'm testing the social
media sharing card." If they were working on multiple projects, the status would be even less
specific, and they'd just say which project they were planning to work on for the day.

No one left the scrum knowing any more about the testing effort than they did when they walked
in.

Just like software development, testing is multifaceted. We aren't just developing or testing;
there are a lot of activities that happen in the context of testing. The daily scrum is a good time to
highlight some of those activities.

The team transitioned from being very general to actually talking about the work. One person
might be working on building test data, one person might be working on reproducing something
that seems like a race condition and could use some help, and another might just need an hour or
two more before they move on to the next change.

The daily standup is the appropriate time to share this level of detail.
Test Coverage

Wanting to know how much of a product or feature is tested is a very reasonable thing. Developers want to anticipate any bug-fixing they might need to do. Managers need to understand how the release is coming along. And product people need to be able to explain when a customer might see a much-needed feature or fix in production. However, these are also questions most testers have a hard time answering, and it undermines their value.
One of the best ways to talk about coverage is in terms of inventories. A software product can be described from different perspectives: configurations, pages, features, text fields, supported browsers, databases and so on. One way to talk coverage is to keep an inventory of each of these things. When someone asks about test coverage at a high level, you can tell them that you have tested 15 of the 45 documented features on two of your supported browsers. You can also talk about what is remaining and what is out of scope.
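A rough sketch of the inventory idea in Python (the feature names, browsers, and counts below are invented purely for illustration):

# Hypothetical coverage inventory: features x supported browsers.
features = ["login", "fund transfer", "statement download"]
browsers = ["Chrome", "Firefox"]

# Mark the (feature, browser) pairs that have been tested so far.
tested = {("login", "Chrome"), ("login", "Firefox"), ("fund transfer", "Chrome")}

total = len(features) * len(browsers)
print(f"Covered {len(tested)} of {total} feature/browser combinations "
      f"({len(tested) / total:.0%}).")

# Listing what remains makes the "remaining / out of scope" conversation concrete.
remaining = [(f, b) for f in features for b in browsers if (f, b) not in tested]
print("Remaining:", remaining)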


The first step in making your work more visible is to talk about what you do, so your daily standup
might be a good place to do this. Rather than glossing over the details of your work, pick out a few
that people might be interested in: risks you are exploring, technical tasks you have to perform,
developers you want to talk with. After that, pick specific points about your work you want to
share — test coverage, test design, planning — and find a relevant audience.

What is Test Monitoring?


Test Monitoring in test execution is a process in which the testing activities and testing effort are evaluated in order to track the current progress of testing, find and track test metrics, estimate future actions based on those metrics, and provide feedback to the concerned team as well as stakeholders about the current testing process.
What is Test Control?
Test Control in test execution is the process of taking action based on the results of the test monitoring process. In the test control phase, test activities are re-prioritized, the test schedule is revised, the test environment is reorganized, and other changes related to testing activities are made in order to improve the quality and efficiency of the future testing process.
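As an illustration of the kind of figures a monitoring and control step might work with (all numbers below are invented, not taken from any project):

# Hypothetical execution snapshot for one test cycle.
planned, executed, passed, failed, blocked = 120, 80, 68, 9, 3

execution_progress = executed / planned   # how far through the cycle we are
pass_rate = passed / executed             # quality signal on what has already run
print(f"Execution progress: {execution_progress:.0%}")
print(f"Pass rate so far:   {pass_rate:.0%}")
print(f"Blocked test cases: {blocked}")

# A simple control decision driven by the monitoring data.
if execution_progress < 0.75 and blocked > 0:
    print("Control action: escalate blocking defects and re-prioritize remaining tests.")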
Congratulations! We now start the Test Execution phase. While your team works on the assigned tasks, you need to monitor and control their work.
In the Test Management Phases tutorial, we briefly introduced Test Monitoring and Control. In this tutorial, you will learn about them in detail.

Why do we monitor?
This small example shows you why we need to monitor and control test activity.
After finishing the Test Estimation and test planning, the management board agreed with your
plan and the milestones are set as per the following figure.

You promised to finish and deliver all test artifacts of the Guru99 Bank Testing project as per the above milestones. Everything seems to be great, and your team is hard at work.

But after 4 weeks, things are not going as per plan. The task "Making Test Specification" is delayed by 4 working days. This has a cascading effect: all succeeding tasks are delayed, and you miss the milestone as well as the overall project deadline.
As a consequence, the project fails and your company loses the customer's trust. You must take full responsibility for the project's failure.

Analyze
In this step, you compare the progress you defined in the plan with the actual progress the team has made. By analyzing the record, you can also see how much time has been spent on each individual task and the total time spent on the project overall.
Let's move back to the report which the Test Administrators sent you in the previous section. In that report, what issue did you figure out?

• Nothing wrong, it's still good
• The task progress seems to be delayed
• I could not find any issue in that report

By tracking and analyzing project progress, you can detect any issue early and find a solution to resolve it.
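A small sketch of that planned-versus-actual comparison (the task names and dates below are invented for illustration):

from datetime import date

# Hypothetical plan vs. actual finish dates for the early tasks.
tasks = {
    "Making Test Specification": (date(2024, 3, 8), date(2024, 3, 14)),  # (planned, actual)
    "Prepare test data":         (date(2024, 3, 15), None),              # not finished yet
}

for name, (planned, actual) in tasks.items():
    if actual is None:
        print(f"{name}: in progress (planned finish {planned})")
    elif actual > planned:
        print(f"{name}: delayed by {(actual - planned).days} calendar days")
    else:
        print(f"{name}: on schedule")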

Organizational Factors for Successful Management of Software Development:

For some years, the information technology industry has been flourishing in both China and Hong
Kong. In view of the cost-efficiency and abundant supply of staff in China, more and more Hong
Kong organizations have set up software development labs in China in the form of wholly-owned
establishments or joint ventures. Due to different environmental factors, there are differences in
software project management issues in China and Hong Kong. This study focuses on two variables
that impact the success of a project: software development management practices and
organizational factors. Through a questionnaire survey, we have identified a list of good
management practices adopted in the two regions. We have also developed a list of organizational
factors that affect the adoption of management practices. The adoption of management practices
and differences in organizational factors in the two regions are then compared against those of
European countries. In view of the growing importance and close connection of software
industries in both regions, this study contributes to a better understanding of project management
issues in an organizational context.

Planning and Monitoring :


Test Monitoring is the process of evaluating and providing feedback on the test proceedings that
are currently in progress. It comprises techniques to ensure that specific targets are met at every
stage of testing so that they meet predetermined benchmarks and objectives.

Test and Analysis Strategies:


• Scope and Overview.
• Testing Methodology.
• Testing Environment Specifications.
• Testing Tools.
• Release Control.
• Risk Analysis.
• Review and Approvals.

Improving Test Process:


Quality Team:

No Silos in Sprint Planning


As in the Scrum methodology, process planning for software testing isn't just for QA leads to decide. Clients, stakeholders, and the remaining team members are expected to take part. Realistic requirements and effort prioritization are only possible with feedback from various roles and perspectives.
Say a feature request pops up. Project managers need to know whether or not it should be passed on to the development and QA teams. QA engineers could raise the lack of automation in place and the manually intensive work needed to verify that the new functionality doesn't break the old. Developers could foresee the limited time for unit testing and the risk of delivering buggy code to their QAs.
Overall, learn to set boundaries. It could be refusing to take meetings at lunchtime, or turning down feature requests that would double the effort for regression testing near the delivery date.

Working With Software Testing Technologies


Learning about the tech stack of competitors or top performers helps QA leads know their options and the tried-and-true strategies. However, what drives growth in one team won't necessarily do the same for another.

Every organization is different in size, expertise, and, of course, challenges. A Fortune 500 firm might have attained a much higher testing maturity than a start-up. Bigger firms have already gone through the major trials of finding what works and what doesn't; for long-standing players whose engineers have fully implemented CI/CD and DevOps, the priority now is simply to streamline the workflow as a whole.
Newcomers will probably have a different experience. These organizations twist and turn, searching for the optimal way of doing QA without being fully dependent on developers. Such scenarios call for a solution in which keywords/actions are built once by those with coding expertise, so that less seasoned testers can easily reuse them to design test cases faster (see the sketch below).
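One common shape for such a solution is a keyword-driven setup: engineers implement the actions once, and other testers compose test cases from them as plain data. A minimal sketch in Python (the keywords and steps are invented):

# Keywords implemented once by engineers who are comfortable with code.
def open_page(url):        print(f"open {url}")
def type_into(field, txt): print(f"type '{txt}' into {field}")
def click(button):         print(f"click {button}")

KEYWORDS = {"open_page": open_page, "type_into": type_into, "click": click}

# A test case written as plain data, so less experienced testers can author or reuse it.
login_test = [
    ("open_page", "https://example.test/login"),
    ("type_into", "username", "demo_user"),
    ("type_into", "password", "secret"),
    ("click", "Sign in"),
]

for keyword, *args in login_test:
    KEYWORDS[keyword](*args)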
To become a QA lead that members love, one should acknowledge their team’s difficulties and
offer a viable solution.

Reporting Testing Progress to Stakeholders


Software development projects rely on a variety of tools, which makes collecting data for reports a real hassle. No leader wants to present a poorly depicted picture of quality to project stakeholders and management.
