ST M4 Notes (1)
Test Execution
Test execution is the process of executing the code and comparing the expected and actual
results. The following factors are to be considered for a test execution process:
• Based on risk, select a subset of the test suite to be executed for this cycle.
• Assign the test cases in each test suite to testers for execution.
• Execute tests, report bugs, and capture test status continuously.
• Resolve blocking issues as they arise.
• Report status, adjust assignments, and reconsider plans and priorities daily.
• Report test cycle findings and status.
Test Case Specification has to be done separately for each unit. Based on the approach
specified in the test plan, the feature to be tested for each unit must be determined. The
overall approach stated in the plan is refined into specific test techniques that should be
followed and into the criteria to be used for evaluation. Based on these, the test cases are
specified for the unit being tested.
In contrast, a Test Plan is a collection of all Test Specifications for a given area. The Test Plan
contains a high-level overview of what is tested for the given feature area.
• Testing has severe limitations, and its effectiveness depends heavily on the exact nature
of the test cases. Even for a given selection criterion, the exact test cases chosen affect
the effectiveness of testing.
• Constructing a good Test Case that will reveal errors in programs is a very creative
activity and depends on the tester. It is important to ensure that the set of test
cases used is of high quality. This is the primary reason for having the test case
specification in the form of a document.
The Test Case Specification is developed in the Development Phase by the organization
responsible for the formal testing of the application.
To sum up, a Test Case Specification defines the exact setup and inputs for one Test Case.
After successfully achieving software testing objectives, the team starts preparing
several reports and documents that are delivered to the client after the culmination of
the testing process. During this stage, the team creates documents and reports that include
details about the testing process, its techniques, methods, test data, test environment, test
suite, etc. The Test Case Specification document, which is the last document published by the
testing team, is a vital part of these deliverables and is mainly developed by the organization
responsible for formally testing the software product or application.
Therefore, to help you understand the importance, format and specification of this document,
we have curated this article on Test Case Specification.
What are Test Case Specifications?
One of the deliverables offered to the client, the test case specification is a document that
delivers a detailed summary of the scenarios that will be tested in the software during the
software testing life cycle (STLC). This document specifies the main objective of a specific test
and identifies the required inputs as well as the expected results/outputs. Moreover, it acts as
a guide for executing the testing procedure and outlines the pass and fail criteria for
determining acceptance. The test case specification is among those documents whose format is
set by the IEEE Standard for Software & System Test Document (829-1998). With the assistance
of the test case specification document, one can verify the quality of the numerous test cases
created during the software testing phase.
Format for Test Case Specifications:
Developed by the organization that is responsible for formal testing of the software, the test
case specification document needs to be prepared separately for each unit to ensure its
effectiveness and to help build a proper and efficient test plan. The format used for creating
this document is:
• Objectives: The purpose of testing the software is defined here in detail, along with any
relevant and crucial information that can make the process understandable for the reader.
• Preconditions: The items and documents that are required before executing a particular
test case are mentioned here with proper evidence and records. Moreover, this section
describes the features and conditions required for testing. Other important details included
here are:
o Requirement Specification.
o User Guides.
o Operations Manual.
• Input Specifications: Once the preconditions are defined, the team works together to
identify all the inputs that are required for executing the test cases. These can vary on the
basis of the level the test case is written for.
• Output Specifications: These include all outputs that are required to verify the test case,
such as the expected results.
• Post Conditions: Here, the team defines the various environment requirements that apply
after the process of testing. Moreover, the team identifies any special requirements and
constraints on the test cases. It consists of details like interventions, etc.
• Intercase Dependencies: Finally, the team identifies any requirements or prerequisite test
cases. Here, the test cases are documented with references to other requirements and
specifications. To ensure the quality of these test cases, the team also identifies follow-on
test cases.
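Putting these fields together, a small and purely hypothetical test case specification entry might look like this:
Test Case ID: TC-LOGIN-001
Objective: Verify that a registered user can log in with valid credentials.
Preconditions: User account "demo_user" exists; the application is deployed in the test environment.
Input Specification: Username = "demo_user", Password = "Valid@123".
Output Specification: The user is redirected to the dashboard and a welcome message is displayed.
Post Conditions: A session record is created; no errors are logged.
Intercase Dependencies: TC-REG-001 (user registration) must have passed.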
Scaffolding:
Scaffolding, as used in computing, refers to one of two techniques: the first is a code
generation technique related to database access in some model–view–controller frameworks;
the second is a project generation technique supported by various tools.
Code generation:
Scaffolding is a technique supported by some model–view–controller frameworks, in which the
programmer can specify how the application database may be used. The compiler or framework
uses this specification, together with pre-defined code templates, to generate the final code that
the application can use to create, read, update and delete database entries, effectively treating
the templates as a "scaffold" on which to build a more powerful application.
Scaffolding is an evolution of database code generators from earlier development environments,
such as Oracle's CASE Generator, and many other 4GL client-server software development
products.
Scaffolding was made popular by the Ruby on Rails framework. It has been adapted to other
software frameworks, including OutSystems Platform, Express Framework, Blitz.js, Play
framework, Django, web2py, MonoRail, Brail, Symfony, Laravel, CodeIgniter, Yii, CakePHP,
Phalcon PHP, Model-Glue, PRADO, Grails, Catalyst, Mojolicious, Seam Framework, Spring Roo,
JHipster, ASP.NET Dynamic Data, KumbiaPHP and ASP.NET MVC framework's Metadata Template Helpers.
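For instance, in a Ruby on Rails application a single generator command (shown here with a hypothetical Post model) produces the model, database migration, controller, views and routes needed for basic create, read, update and delete operations:
rails generate scaffold Post title:string body:text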
Project generation:
Complicated software projects often share certain conventions on project structure and
requirements. For example, they often have separate folders for source code, binaries and code
tests, as well as files containing license agreements, release notes and contact information. To
simplify the creation of projects following those conventions, "scaffolding" tools can automatically
generate them at the beginning of each project. Such tools include Yeoman, Cargo and Ritchie CLI.
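As a small illustration, a project generator such as Cargo lays out this conventional structure with one command:
cargo new my_project
This creates a my_project directory containing a Cargo.toml manifest and a src/main.rs source file, ready to build.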
The difference is between the adjectives generic and specific. Specific is something clearly defined
or identified, whereas generic applies to a group of things and is the exact opposite of 'specific'.
So, in testing terms, does the scaffolding serve a generic purpose or a more specific one?
Test Oracles
A test oracle is a mechanism, different from the program itself, that can be used to check the
correctness of a program's output for test cases. Conceptually, we can consider testing as a
process in which the test cases are given both to the oracle and to the program under test. The
two outputs are then compared to determine whether the program behaved correctly for the
test cases.
Test oracles are required for testing. Ideally, we want an automated oracle that always gives
the correct answer. However, oracles are often human beings, who mostly compute by hand
what the output of the program should be. Because it is often very difficult to determine
whether the observed behavior corresponds to the expected behavior, these human oracles may
make mistakes. Consequently, when there is a discrepancy between the program's result and
the oracle's result, we must verify the result produced by the oracle before declaring that there
is a defect in the program.
Human oracles typically use the program's specifications to decide what the correct behavior
of the program should be. To help the oracle determine the correct behavior, it is important that
the behavior of the system or component is explicitly specified and that the specification itself is
error-free; in other words, the specification must actually describe the true and correct behavior.
There are some systems where oracles are automatically generated from the specifications of
programs or modules. With such oracles, we are assured that the output of the oracle conforms
to the specifications. However, even this approach does not solve all our problems, as there is a
possibility of errors in the specifications. As a result, an oracle generated from the specifications
will produce correct results only if the specifications are correct, and it will not be reliable if the
specifications contain errors. In addition, systems that generate oracles from specifications
require formal specifications, which are often not produced during design.
In computing, software engineering, and software testing, a test oracle (or just oracle) is a
mechanism for determining whether a test has passed or failed.[1] The use of oracles involves
comparing the output(s) of the system under test, for a given test-case input, to the output(s) that
the oracle determines that product should have. The term "test oracle" was first introduced in a
paper by William E. Howden.[2] Additional work on different kinds of oracles was explored
by Elaine Weyuker.[3]
Oracles often operate separately from the system under test.[4] However, method postconditions
are part of the system under test, as automated oracles in design by
contract models.[5] Determining the correct output for a given input (and a set of program or
system states) is known as the oracle problem or test oracle problem,[6]: 507 which is a much
harder problem than it seems, and involves working with problems related to controllability and
observability.
Categories
A research literature survey covering 1978 to 2012[6] found several potential categories of test
oracles.
Specified
These oracles are typically associated with formalized approaches to software modeling and
software code construction. They are connected to formal specification,[8] model-based
design which may be used to generate test oracles,[9] state transition specification for which
oracles can be derived to aid model-based testing[10] and protocol conformance
testing,[11] and design by contract for which the equivalent test oracle is an assertion.
Specified Test Oracles have a number of challenges. Formal specification relies on abstraction,
which in turn may naturally have an element of imprecision as all models cannot capture all
behavior.[6]: 514
Derived
A derived test oracle differentiates correct and incorrect behavior by using information derived
from artifacts of the system. These may include documentation, system execution results and
characteristics of versions of the system under test.[6]: 514 Regression test suites (or reports) are
an example of a derived test oracle - they are built on the assumption that the result from a
previous system version can be used as aid (oracle) for a future system version. Previously
measured performance characteristics may be used as an oracle for future system versions, for
example, to trigger a question about observed potential performance degradation. Textual
documentation from previous system versions may be used as a basis to guide expectations in
future system versions.
A pseudo-oracle[6]: 515 falls into the category of derived test oracle. A pseudo-oracle, as defined
by Weyuker,[12] is a separately written program which can take the same input as the program or
system under test so that their outputs may be compared to understand if there might be a
problem to investigate.
A partial oracle[6]: 515 is a hybrid between specified test oracle and derived test oracle. It specifies
important (but not complete) properties of the system under test. For example, metamorphic
testing exploits such properties, called metamorphic relations, across multiple executions of the
system.
Implicit
An implicit test oracle relies on implied information and assumptions.[6]: 518 For example, there
may be some implied conclusion from a program crash, i.e. unwanted behavior - an oracle to
determine that there may be a problem. There are a number of ways to search and test for
unwanted behavior; some call this negative testing, which has specialized subsets such as fuzzing.
There are limitations in implicit test oracles - as they rely on implied conclusions and assumptions.
For example, a program or process crash may not be a priority issue if the system is a fault-tolerant
system and so operating under a form of self-healing/self-management. Implicit test oracles may
be susceptible to false positives due to environment dependencies.
Human
When specified, derived or implicit test oracles cannot be used, then human input to determine
the test oracles is required.[7] These can be thought of as quantitative and qualitative
approaches.[6]: 519–520 A quantitative approach aims to find the right amount of information to
gather on a system under test (e.g., test results) for a stakeholder to be able to make decisions on
fit-for-purpose or the release of the software. A qualitative approach aims to find the
representativeness and suitability of the input test data and context of the output from the system
under test. An example is using realistic and representative test data and making sense of the
results (if they are realistic). These can be guided by heuristic approaches, such as gut instincts,
rules of thumb, checklist aids, and experience to help tailor the specific combination selected for
the program/system under test.
Examples
Test oracles are most commonly based on specifications and documentation.[13][14] A formal
specification used as input to model-based design and model-based testing would be an example
of a specified test oracle. The model-based oracle uses the same model to generate and verify
system behavior.[15] Documentation that is not a full specification of the product, such as a usage
or installation guide, or a record of performance characteristics or minimum machine
requirements for the software, would typically be a derived test oracle.
A consistency oracle compares the results of one test execution to another for similarity.[16] This
is another example of a derived test oracle.
An oracle for a software program might be a second program that uses a different algorithm to
evaluate the same mathematical expression as the product under test. This is an example of a
pseudo-oracle, which is a derived test oracle.[12]: 466
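A minimal sketch of such a pseudo-oracle, assuming JUnit 4 and hypothetical method names, compares an iterative summation against an independently written closed-form computation of the same value:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SumPseudoOracleTest {

    // Program under test (hypothetical): computes 1 + 2 + ... + n iteratively
    static long sumUpTo(int n) {
        long total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    // Pseudo-oracle: a separately written program computing the same value with a different algorithm
    static long sumUpToOracle(int n) {
        return (long) n * (n + 1) / 2;
    }

    @Test
    public void iterativeSumAgreesWithClosedForm() {
        for (int n = 0; n <= 1000; n++) {
            assertEquals("Results should agree for n = " + n, sumUpToOracle(n), sumUpTo(n));
        }
    }
}
A disagreement between the two outputs does not say which one is wrong; as with any oracle, the discrepancy is a prompt to investigate.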
During Google search, we do not have a complete oracle to verify whether the number of returned
results is correct. We may define a metamorphic relation[17] such that a follow-up narrowed-
down search will produce fewer results. This is an example of a partial oracle, which is a hybrid
between specified test oracle and derived test oracle.
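As a hedged sketch of how such a metamorphic relation can act as a partial oracle (again assuming JUnit 4), the identity sin(x) = sin(π - x) lets us check a sine implementation without knowing the true value of sin(x) for any input:
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class MetamorphicSineTest {

    // Metamorphic relation: sin(x) must equal sin(PI - x); no exact expected value is required
    @Test
    public void sineSatisfiesReflectionRelation() {
        for (double x = 0.0; x <= Math.PI; x += 0.01) {
            assertEquals("Relation violated at x = " + x, Math.sin(x), Math.sin(Math.PI - x), 1e-9);
        }
    }
}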
A statistical oracle uses probabilistic characteristics,[18] for example with image analysis where a
range of certainty and uncertainty is defined for the test oracle to pronounce a match or
otherwise. This would be an example of a quantitative approach in human test oracle.
A heuristic oracle provides representative or approximate results over a class of test
inputs.[19] This would be an example of a qualitative approach in human test oracle.
Self Checks as Oracles:
Rather than comparing actual values, self checks use properties of the results to judge correctness.
They take the form of assertions, contracts, and other logical properties.
@Test
public void multiplicationOfZeroIntegersShouldReturnZero() {
// Any multiplication involving zero must return zero
assertEquals("10 x 0 must be 0", 0, tester.multiply(10, 0));
assertEquals("0 x 10 must be 0", 0, tester.multiply(0, 10));
assertEquals("0 x 0 must be 0", 0, tester.multiply(0, 0));
}
@Test
public void propertiesOfSort() {
    // Property checks on the result rather than a comparison with exact expected values
    String[] input = {"pear", "apple", "orange"};   // sample input; quickSort is the method under test
    String[] sorted = quickSort(input);
    assertTrue("This array can't be empty.", sorted.length >= 1);
    for (int i = 1; i < sorted.length; i++) {
        assertTrue("Result must be in ascending order", sorted[i - 1].compareTo(sorted[i]) <= 0);
    }
}
Capture/Replay Tool:
GUI capture & replay tools have been developed for testing applications through their graphical
user interfaces. Using a capture and replay tool, testers can run an application and record the
interaction between a user and the application. The script is recorded with all user actions,
including mouse movements, and the tool can then automatically replay the exact same
interactive session any number of times without requiring human intervention. This supports
fully automatic regression testing of graphical user interfaces.
Tools for GUI Capture/Replay:
Many applications today are too comprehensive to be tested manually with reasonable effort.
Capture and Replay tools automate test procedures by interacting with the graphical user
interface of the application under test. A Capture and Replay tool is a type of test execution tool
that records entries during a manual test with the goal of creating automated test scripts that can
then be used repeatedly. Capture and Replay – sometimes referred to as Capture and Playback –
is often used to support automated regression testing.
The Capture and Replay procedure
Capture and Replay defines four essential steps:
• Capture mode records user interactions with user interface elements. The result of Capture
and Replay is a script that documents both the test process and the test parameters.
• The scripts, which are usually defined and editable in XML formats, can be used to describe
simple test scenarios or complex test suites.
• During test evaluation, it must be verified whether previously defined events occur or
errors occur. Output formats, database contents or GUI states are checked and the results
documented accordingly.
• Test scenarios can be easily reproduced by repeatedly replaying (replay mode) the previously
recorded scripts. Individual elements of the user interface are also recognised if their
position or shape has changed. This works because capture mode, for example, not only
saves the behavior of the mouse pointer but also records the corresponding object ID at
the same time.
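As a purely illustrative sketch (the element and attribute names are hypothetical; each tool defines its own schema), such a recorded script might pair every user action with the ID of the GUI object it targets:
<testscript name="login_smoke">
  <step action="click" target="button:LoginOpen"/>
  <step action="type" target="textfield:UserName" value="demo_user"/>
  <step action="click" target="button:Submit"/>
  <check object="window:Main" property="title" expected="Dashboard"/>
</testscript>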
Capture and replay tools were developed to test applications through their graphical user
interfaces. With a capture and replay tool, an application can be tested by recording an
interactive session and repeating it any number of times without human intervention. This
saves time and effort while providing valuable insights for further application development.
Sensitivity in capture and replay process:
The Interactive Test Capture (ITC) tool enables you to create automated tests interactively.
Automated tests can be used by programmers to manage code quality, detect code regression or
instability. An automated test is created by launching V5 using a specific capture mode during
which all user interactions are recorded. Once recorded, an automated test can be replayed on
all V5-supported platforms, independently of the recording platform.
ITC vs. CUT: ITC needs CUT to run since it is completely integrated in mkodt. With ITC you can
capture scenarios interactively, whereas it is necessary to code the scenario when using CUT. A
non-developer can use ITC to capture and replay scenarios; developer skill is needed to analyze
failing replays.
This documentation explains how to use ITC and provides some insight on the internal mechanisms
used by the V5 capture/replay engine. A sample use case is provided along with a list of known
limitations and best practices.
As programs grow and become more and more complex, ensuring code quality becomes a more
and more challenging task. To ensure a good software quality, one might consider a multiple level
approach: formal analysis of the source code, automated tests, and manual tests.
The V5 record engine works at "V5 device level": what are recorded are mouse clicks, mouse
moves and keyboard events that are translated into V5 system events. Contrary to other
commercial automatic test tools, it takes advantage of its full integration into the V5 platform to
provide good stability:
• Replay is not sensitive to the operating system. For instance, record can be done on
Windows and replay can be done on both Windows and Unix
• Replay is not sensitive to screen resolution
• Replay is not sensitive to window position, dialog box layout, toolbar position, and toolbar
layout
• Replay is not sensitive to the number of items or the order of items in combo-boxes or list-
boxes.
The purpose of ITC is to detect changes between code levels. This happens when a scenario cannot
be replayed anymore. When such a thing happens, it is left to the developer's responsibility to
analyze whether the failure is a regression symptom or not. In some cases, the failure is normal
because the software user interface or its internals have changed. In other cases, the failure is a
true regression symptom. A discussion on how to make both stable and relevant automated tests
can be found later in this document.
In particular, ITC does not provide any mechanism to ensure stability between one V5 code level
and another. No stability is ensured between V5 releases, service packs or hot fixes.
• ITC is built on a technology that applies to Wintop based products only (no Webtop
support)
• Only file-based scenarios are supported. No VPM interoperability is supported
• Scenarios must be captured and replayed on V5 using the English language, installed on a
machine that uses the US-English locale.
Basic Tasks
Recording and Replaying
The ITC product is accessed through the mkodt tool. The "-C" option is used for capture; no
specific option is necessary for replay.
• Create a shell file containing at least the "SetOdtParam TYPE=RECORD" command and the
name of the V5 executable to launch. In the case of CATIA V5 it will be:
SetOdtParam TYPE=RECORD
CNEXT
Save the file under a given name in the TestCases directory, for instance
TestCases/MyRecord.sh.
• In a licensed environment you need to activate the license for CNEXT. Create a
FunctionTest/InputData/MyRecord.rec directory, and drop a valid licensing.CATSettings
file that will activate the necessary license for capturing and replaying the scenario.
• Start the capture with:
mkodt -s MyRecord -C
• Perform your scenario in the V5 session and end it by exiting the application.
The returned error code should be 0, meaning the record has successfully been captured.
More explanations of the content of those files will be given in the record engine internal
description section.
• Replay the scenario with:
mkodt -s MyRecord
Notice that the test is replayed on screen. The return code should be 0 if the test succeeds.
Using Settings
When recording a session, all V5 default settings values are used. If your scenario needs to be
recorded and replayed with non-default setting values, simply put the necessary *.CATSettings
files in the .rec directory.
Those settings files are the same ones that are created and used when launching a normal V5 session.
Using P2 UI Level
By default, in capture mode, V5 sessions are launched using the P1 UI level. This is a behavior
difference between a normal V5 session and a record capture session. To obtain the P2 UI level
you need to put a copy of the FrameGeneral.CATSetting file into the *.rec directory.
Opening Documents
Access the necessary files using the File->Open command. Authorized paths for document files
are:
• "FW.tst/FunctionTests/InputData/"
• "FW.tst/FunctionTests/Output/"
• $ADL_ODT_TMP
The record engine automatically makes file name and path name conversions to ensure stability
when the replay environment differs from the record environment (Unix vs. Windows).
However, to ensure stability when replaying a Windows-recorded scenario on Unix, make sure
that file paths entered in the "File Open" or directory chooser dialog box at record time have
exactly the same case as the actual directory structure on the machine. Otherwise, the scenario
might capture and replay correctly on Windows but fail on Unix.
Saving Documents
Saving files can be used to test the save functionality itself or to obtain documents that can be
compared to a reference later in the automated test. ITC does not provide such tools for document
comparison.
Save the file using the File->Save command. Authorized paths for files are:
• $ADL_ODT_TMP
• "FW.tst/FunctionTests/Output/"
The $ADL_ODT_TMP directory is deleted at the end of mkodt whereas the content of "Output" is
kept. To avoid disk space saturation when replaying a large number of ODTs, use $ADL_ODT_TMP
to save documents.
Using the Cache System
Capture Recommendations, Limitations and Tips
• Play your scenario once before capturing it to check that it works properly. Do not forget
that a record ending with an abend is not captured.
• Do not move your mouse too often since every movement is recorded. The fewer the
interactions, the better.
• Do not use the wheel or mouse scroll button to scroll the specification tree, otherwise you
will not be able to replay the record. The wheel interaction is not supported by the
record/replay engine.
• Do not resize the MDI Client Window (document window). On the Unix platform it is not
possible to do it at capture time.
• Do not use the Window menu. Window size management controls ("maximize" and
"minimize" buttons, for instance) at the top-right corner of the window are not available.
• Do not use the MRU in the File or Start menu to load documents
• Bear in mind that any specification tree alteration (such as the insertion of a node) may
prevent the record from working properly.
• Check that a creation command works correctly by recording a selection and a deletion
through the contextual menu of the created object afterwards
• Interactions with the compass can be recorded on condition that a local transformation
(such as a translation) of the 3D viewer is done before doing the first interaction.
• Some elements cannot be recorded:
o Preselection navigator (also known as drill selection)
o Drag and Drop
o Contextual menus on toolbars
o Context-sensitive help (also known as tool tips)
• The timing of interactions is not recorded. At replay, recorded interactions are launched
one after the other. If a scenario depends on a precise timing sequence of interactions,
then the replay will probably fail. In other words the elapsed time between two
interactions is not a stable value, and might depend on machine hardware configuration,
environment, etc
• Macros cannot be recorded and replayed.
Common Return Code
Tip
The return code is the return code of the shell passed to mkodt using -s argument [1]. The return
code generated by the record engine can be overloaded in this shell. Before making an analysis
based on a given return code please check that it corresponds to the actual return code generated
by the replay engine.
Code 1
Code 1 is used by the replay engine when the scenario cannot be replayed anymore. Most
common situations are:
Code 10
This code applies to problems encountered with dialog objects. Please read the associated
message to get a description of the problem. Common situations are:
An exception to this is the contextual menu, on which only the sensitivity check is applied.
When selecting a line in a combo-box or a list-box, the record engine stores the content of
the line as a string. This ensures stability towards the change of the order and number of
items in the list. When the string is missing at replay time, the replay fails.
When selecting a line in a multi-column list, the record engine stores the whole line as a
separate string for each column. At replay time, the selected line is restored based on the
content of the first visible column.
Feedback:
User Feedback: The Ultimate Guide
Matei Culcer
April 6, 2021
Table of contents:
• Introduction
• What is user feedback?
• Why is it important to collect user feedback?
• How should you build a user feedback strategy?
• What types of feedback exist?
• What should you do with negative feedback?
• What should you avoid when collecting user feedback?
• What tools should you use to capture user feedback?
• Best practices of user feedback management
• Conclusion
Introduction
It was 1989 when 2 editors of Inc. magazine, George Gendron and Bo Burlingham made the
nervous drive to Palo Alto, California. Not long beforehand they’d decided on who to name as
Inc.’s Entrepreneur of the Decade, and finally, they would get a chance to interview him.
As they entered the offices of NeXT, their interviewee approached them. In his trademark jeans
and turtleneck sweater, Steve Jobs led them up the stairs to his office and the interview
commenced.
Securing an interview with Steve Jobs was rare, even in 1989. Wanting to make the most of
their time, the editors got straight to the point with their very first question, and Jobs replied:
“I think really great products come from melding two points of view; the technology point of view
and the customer point of view. You need both. You can't just ask customers what they want and
then try to give that to them. By the time you get it built, they'll want something new.”
Silence overshadowed the room. Three decades later, and this powerful answer Jobs gave is
something that still isn’t often internalized in companies.
Collecting user feedback is incredibly important. As you’ll see examples of later in this article,
launching surveys, asking onboarding questions, and conducting customer interviews are all vital
tools for improving your product.
But the true lesson that Steve Jobs gave all this time ago was that user feedback isn’t as simple as
asking what users want, or what they think about your product, and making those changes. You
have to dive much deeper.
After gathering user feedback, it’s up to you to connect the dots and understand the real desires
beneath the surface.
As Henry Ford is famously quoted as saying; “If I’d asked my customers what they wanted, they’d
have asked for a faster horse.”
What did his users really want? A faster way of getting from A to B.
User feedback may not have given the right solution, but it would have identified the deeper
customer desire. Speed.
So as you read on and learn all about user feedback, bear this important lesson in mind – collecting
user feedback is great, but the customer shouldn’t always be trusted to come up with the solution.
What is user feedback?
User feedback describes the kind of responses elicited from those who use your product. In fact,
it’s a rather simple concept, though it’s not always the easiest to hear. Ideally, your consumer base
will tell you exactly what you want to hear: that they’ve enjoyed using your product and will
continue to in the future. However, constructive feedback is just as integral to the process of
product development, and any company should welcome and use any criticisms to its advantage.
To receive the most helpful information, it’s vital that your company begins by asking the right
questions. No matter the medium, surveying users should be able to realistically indicate the
performance of your product. After all, it’s been made for a specific use; and given the opportunity,
the users themselves will often tell you exactly what they think about it.
User feedback can primarily suggest changes and additional features for the product, as well as
whether or not users experienced any difficulties (and on what scale) when trying it out. While all
of these elements of feedback can provide the fuel for product development, the baseline
question should always be whether the customer’s needs are satisfied by the product. Depending
on a variety of factors, users might invest in a product that could use some improvement; but
seldom will customers continue to utilize a tool that doesn’t do what they need it to – if there are
viable alternatives. Of course, asking your users first-hand is the only way to truly gauge how they
feel about your product.
Partition:
What is Equivalence Partitioning Testing?
Equivalence partitioning, also called equivalence class partitioning, is abbreviated as ECP. It is
a software testing technique that divides the input test data of the application under test into
partitions of equivalent data from which test cases can be derived, with each partition covered
at least once.
An advantage of this approach is that it reduces the time required for testing the software,
because fewer test cases are needed.
Example:
The example below best describes equivalence class partitioning:
Assume that the application accepts an integer in the range 100 to 999
Valid Equivalence Class partition: 100 to 999 inclusive.
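A minimal sketch of how this can be exercised with JUnit, assuming a hypothetical isValid method implementing the 100-to-999 rule above; one representative value is taken from the valid partition and one from each invalid partition (below 100 and above 999):
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class RangeEquivalencePartitionTest {

    // Hypothetical validator for the example above: accepts integers from 100 to 999 inclusive
    static boolean isValid(int value) {
        return value >= 100 && value <= 999;
    }

    @Test
    public void oneRepresentativeValuePerPartition() {
        assertFalse("Below the valid range (invalid partition)", isValid(50));
        assertTrue("Inside the valid range (valid partition)", isValid(500));
        assertFalse("Above the valid range (invalid partition)", isValid(1500));
    }
}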
Restrictions:
Use cases cover only the functional requirements. The testing of non-functional requirements
may not be possible. Use cases are written from the user's perspective.
Visibility:
It is easy for testing work to be invisible or illegible. Most of it happens in your head, and
sometimes when the software is pretty good to begin with, there is no evidence that testing
actually happened.
One of the things I am pretty regularly asked to do is review a testing practice. I usually find that
the testers are there, they are working hard, and they care about their work. The problem is that
their work is completely hidden. No one can see it, and because of that, no one understands or values it.
So, how can you make your testing work visible to your team?
Scrum
Most of the daily scrums I have been part of were difficult for testers. We went around the room,
starting with developers and product people. If testers were called on at all for a status update, it
was at the very end, and most people were distracted or uninterested by that point. A discussion
would sometimes start before the testers said their bit, and we’d run out of time.
When one of the testers did get called on, they would say something like, “I’m testing the social
media sharing card.” If they were working on multiple projects, the status would be even less
specific, and they’d just say which project they were planning to work on for the day.
No one left the scrum knowing any more about the testing effort than they did when they walked
in.
Just like software development, testing is multifaceted. We aren't just developing or testing;
there are a lot of activities that happen in the context of testing. The daily scrum is a good time
to surface them: one person might be working on building test data, one person might be working
on reproducing something that seems like a race condition and could use some help, and another
might just need an hour or two. The daily standup is the appropriate time to share this level of detail.
Test Coverage
Wanting to know how much of a product or feature is tested is a very reasonable thing. Developers
want to anticipate any bug-fixing they might need to do. Managers need to understand how the
release is coming along. And product people need to be able to explain when a customer might
see a much-needed feature or fix in production. However, these are also questions most testers find difficult to answer precisely.
One of the best ways to talk about coverage is in terms of inventories. A software product can be
described from different perspectives — configurations, pages, features, text fields, supported
browsers, databases and so on. One way to talk coverage is to keep an inventory of each of these
things. When someone asks about test coverage at a high level, you can tell them that you have
tested 15 of the 45 documented features on two of your supported browsers. You can also talk about coverage in terms of any of the other inventories.
Why do we monitor?
This small example shows you why we need to monitor and control test activity.
After finishing the test estimation and test planning, the management board agreed with your
plan and the milestones were set as per the following figure.
You promised to finish and deliver all test artifacts of the Guru99 Bank Testing project as per the above
milestones. Everything seems to be great, and your team is hard at work.
But after 4 weeks, things are not going as per plan. The task of “Making Test specification”
is delayed by 4 working days. It has a cascading effect and all succeeding tasks are delayed.
You missed the milestone as well as the overall project deadline.
As a consequence, your project fails and your company loses the customer trust. You must take
full responsibility for the project’s failure.
Analyze
In this step, you compare the progress you defined in the plan with the actual progress that the
team has made. By analyzing the record, you can also see how much time has been spent on
individual tasks and the total time spent on the project overall.
Let's go back to the report that the Test Administrators sent you in the previous section.
In that report, what issue did you figure out?
• Nothing wrong, it's still good
• The task progress seems to be delayed
• I could not find any issue in that report
By tracking and analyzing the project progress, you can detect early any issue that may affect
the project, and you can find a solution to resolve it.
For some years, the information technology industry has been flourishing in both China and Hong
Kong. In view of the cost-efficiency and abundant supply of staff in China, more and more Hong
Kong organizations have set up software development labs in China in the form of wholly-owned
establishments or joint ventures. Due to different environmental factors, there are differences in
software project management issues in China and Hong Kong. This study focuses on two variables
that impact the success of a project: software development management practices and
organizational factors. Through a questionnaire survey, we have identified a list of good
management practices adopted in the two regions. We have also developed a list of organizational
factors that affect the adoption of management practices. The adoption of management practices
and differences in organizational factors in the two regions are then compared against those of
European countries. In view of the growing importance and close connection of software
industries in both regions, this study contributes to a better understanding of project management
issues in an organizational context.
• Testing Methodology.
• Testing Tools.
• Release Control.
• Risk Analysis.
The nuts and bolts of every organization vary: size, expertise and, of course, challenges. A
Fortune 500 firm might have attained a much higher testing maturity than a start-up. Bigger firms
have gone through the major trials to find what works and what doesn't. For long-standing players,
their engineers have fully implemented CI/CD and DevOps, and the priority now is simply to
streamline the workflow as a whole.
Newcomers will probably have it different. Organizations twist and turn, searching for the optimal
way of doing QA without being fully dependent on developers. Such scenarios call for a solution
that allows keywords/actions to be created by those with coding expertise, so that unseasoned
testers can easily reuse them to design test cases more quickly.
To become a QA lead that members love, one should acknowledge their team’s difficulties and
offer a viable solution.