
Software Testing (introduction)

Alessandro Marchetto Fondazione Bruno Kessler - IRST

Link
Material for the course:
http://selab.fbk.eu/swat

Outline of the course


1. Acceptance Testing (Fitnesse)
2. Unit Testing (JUnit)
3. Structural and adequacy testing criteria
4. Regression Testing

Testing
One of the practical methods commonly used to detect the presence of errors (failures) in a computer program is to test it for a set of inputs.
[Figure: a set of inputs I1, I2, I3, ..., In is fed to our program; the obtained results are compared with the expected results to decide whether the output is correct.]

Terminology

Failure: an observable incorrect behavior or state of a given system. The system displays a behavior that is contrary to its specification/requirements. A failure is thus tied (only) to system executions/behaviors, and it occurs at runtime when some part of the system enters an unexpected state.

Fault: (commonly called a bug or defect) a defect in a system. A failure may be caused by the presence of one or more faults in a given system. However, the presence of a fault may or may not lead to a failure; e.g., a system may contain a fault in a fragment of code that is never exercised, so such a fault does not lead to a failure.

Error: the developer mistake that produces a fault. It is often caused by human activities such as typing errors.

Example

  LOC  Code
  1    program double ();
  2    var x, y: integer;
  3    begin
  4      read(x);
  5      y := x * x;
  6      write(y)
  7    end

Failure: for x = 3 the program outputs y = 9. This is a failure of the system, since the correct output would be 6.
Fault: the fault that causes the failure is in line 5: the * operator is used instead of +.
Error: the error that leads to this fault may be:
- a typing error (the developer wrote * instead of +);
- a conceptual error (e.g., the developer doesn't know what it means to double a number).

Terminology

Test Case: associated with a program behavior. It carries a set of inputs and a list of expected outputs.

Testing: the process of executing a program with the intent of finding errors.
- Testing cannot guarantee the absence of faults.
- There are strategies for defining test suites.
- Formal methods (e.g., model checking) can be used to statically verify software properties; this is not testing.

Debugging: finding and fixing faults in the code.

Sources for test cases definition


- The requirements of the program (its specification):
  - an informal description
  - a set of scenarios (use cases)
  - a set of sequence diagrams
  - a state machine
- The system itself (the code or the execution of the application)
- A set of selection criteria
- Heuristics (e.g., guidelines for testing)
- Experience (of the tester)

Testing: three main questions

1. At which level should testing be conducted?
   - Unit
   - Integration
   - System

2. How should inputs be chosen?
   - A randomly selected set of inputs is statistically insignificant.
   - Considering the program as a black box: using the specifications/use cases/requirements.
   - Considering the program as a white box: using the structure of the code.

3. How can the expected output be identified?

Test phases

Unit testing: basically the testing of a single function, procedure, or class.
Integration testing: checks that units tested in isolation work properly when put together.
System testing: here the emphasis is on ensuring that the whole system can cope with real data, monitoring system performance, and testing the system's error handling and recovery routines.
Regression testing: checks that the system preserves its functionality after maintenance and/or evolution tasks.
Acceptance testing: checks whether the overall system functions as required.


Testing tools

- High level: FIT/Fitnesse
- Low level: JUnit
- GUI: Jemmy/Abbot/JFCUnit/...
- Performance and load testing: JMeter/JUnitPerf
- Business logic: Cactus
- Web UI: HttpUnit/Canoo/Selenium
- Persistence layer: JUnit/SQLUnit/XMLUnit

Unit Testing

Unit tests are tests written by the developers to test functionality as they write it. Each unit test typically tests only a single class, or a small cluster of classes. Unit tests are typically written using a unit testing framework, such as JUnit (automated unit tests).

Target errors not found by unit testing:
- requirements are misinterpreted by the developer;
- modules don't integrate with each other.

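As a minimal sketch of the idea (plain Java, with a hand-rolled assertEquals standing in for JUnit's; the Calculator class under test is hypothetical):

```java
// Hypothetical unit under test plus two unit-style tests. A real JUnit test
// would use @Test methods and org.junit.Assert.assertEquals, but the shape of
// each check (exercise one unit, compare expected vs obtained) is the same.
public class CalculatorTest {

    // Minimal unit under test (hypothetical).
    static class Calculator {
        static int doubleOf(int x) { return x + x; }
    }

    // Stand-in for JUnit's assertEquals.
    static void assertEquals(int expected, int actual) {
        if (expected != actual)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }

    // Each test exercises a single behavior in isolation.
    static void testDoubleOfPositive() { assertEquals(6, Calculator.doubleOf(3)); }
    static void testDoubleOfZero()     { assertEquals(0, Calculator.doubleOf(0)); }

    public static void main(String[] args) {
        testDoubleOfPositive();
        testDoubleOfZero();
        System.out.println("all unit tests passed");
    }
}
```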

Unit testing: a white-box approach


Testing based on the coverage of the executed program (source) code. Different coverage criteria:
- statement coverage
- path coverage
- condition coverage
- definition-use coverage
- ...

It is often not possible to cover all the code, for instance:
- because of dead code (non-executable code);
- because of infeasible paths in the CFG;
- etc.
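The difference between these criteria can be seen on a small hypothetical function (not from the slides): with the single input -5 every statement is executed (full statement coverage), but only the true branch of the condition; condition coverage also requires an input with x >= 0.

```java
public class CoverageDemo {

    // A function with one branch (hypothetical example).
    static int abs(int x) {
        int result = x;       // always executed
        if (x < 0) {          // condition: true for x < 0, false otherwise
            result = -x;      // executed only when the condition is true
        }
        return result;        // always executed
    }

    public static void main(String[] args) {
        // abs(-5) alone reaches every statement, but not the false branch;
        // adding abs(5) covers both outcomes of the condition.
        System.out.println(abs(-5)); // 5
        System.out.println(abs(5));  // 5
    }
}
```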

Acceptance Testing

Acceptance tests are specified by the customer and analyst to test that the overall system functions as required (do developers build the right system?). Acceptance tests typically test the entire system, or some large chunk of it. When all the acceptance tests pass for a given user story (or use case, or textual requirement), that story is considered complete. At the very least, an acceptance test could consist of a script of user-interface actions and expected results that a human can run. Ideally, acceptance tests should be automated, either using the unit testing framework (JUnit) or a separate acceptance testing framework (Fitnesse).
14

Acceptance Testing

- Used to judge whether the product is acceptable to the customer.
- Coarse-grained tests of business operations.
- Scenario/story-based (contain expectations).
- Simple:
  - happy paths (confirmatory)
  - sad paths
  - alternative paths (deviance)


Acceptance testing: a black-box approach


1. Describe the system using a use-case diagram.
   - A use case of that diagram represents a functionality implemented by the system.
2. Detail each use case with a textual description of, e.g., its pre/post conditions and flow of events.
   - Events are related to: (i) the interactions between system and user; and (ii) the expected actions of the system.
   - A flow of events is composed of basic and alternate flows.
3. Define all instances of each use case (scenarios) executing the system to realize the functionality.
4. Define at least one test case for each scenario.
5. (Optional) Define additional test cases to test the interaction between use cases.
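The steps above can be sketched on a hypothetical "withdraw cash" use case (all names are made up for illustration): the basic flow (sufficient balance) and one alternate flow (insufficient balance) are two scenarios, and each scenario gets one test case.

```java
public class WithdrawScenariosDemo {

    // Minimal system under test (hypothetical).
    static class Account {
        int balance;
        Account(int balance) { this.balance = balance; }

        // Basic flow: the amount is withdrawn and the call returns true.
        // Alternate flow: invalid amount or insufficient balance, returns false.
        boolean withdraw(int amount) {
            if (amount <= 0 || amount > balance) return false;
            balance -= amount;
            return true;
        }
    }

    public static void main(String[] args) {
        // Test case for scenario 1 (basic flow): balance is sufficient.
        Account a = new Account(100);
        System.out.println(a.withdraw(40) && a.balance == 60);  // true

        // Test case for scenario 2 (alternate flow): balance is insufficient
        // and must remain unchanged.
        Account b = new Account(10);
        System.out.println(!b.withdraw(40) && b.balance == 10); // true
    }
}
```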

At different points in the process

Iterative software development:
- Write acceptance tests: before the development of an increment, for the prioritized functionalities.
- Write and execute unit tests: during the development of the increment.
- Execute acceptance tests: after the development, on the increment together with the system.

Acceptance vs Unit Testing


In theory:

Acceptance tests
- Written by the customer and analyst.
- Written using an acceptance testing framework (possibly also a unit testing framework).
- (Extreme programming) When acceptance tests pass, stop coding: the job is done.
- The motivation of acceptance testing is demonstrating working functionalities.
- Used to verify that the implementation is complete and correct.
- Used for integration, system, and regression testing.
- Used to indicate progress in the development phase (usually as a percentage).
- Used as a contract.
- Used for documentation (high level).
- Written before the development and executed after it.
- Starting point: user stories, user needs, use cases, textual requirements, ...

Unit tests
- Written by developers.
- Written using a unit testing framework.
- (Extreme programming) When unit tests pass, write another test that fails.
- The motivation of unit testing is finding faults.
- Used to find faults in individual modules or units (individual programs, functions, procedures, web pages, menus, classes, ...) of source code.
- Used for documentation (low level).
- Written and executed during the development.
- Starting point: a new capability (adding a new module/function or class/method).

Acceptance vs Unit Testing


In practice: The difference is not so clear-cut. We can often use the same tools for either or both kinds of tests.


Traditional Approaches for acceptance testing

Manual acceptance testing: the user exercises the system manually, using his creativity.
- Disadvantages: expensive, error prone, not repeatable, ...

Acceptance testing with GUI test drivers (at the GUI level): these tools help the developer do functional/acceptance testing through a user interface such as a native GUI or web interface. Capture-and-replay tools capture events (e.g., mouse, keyboard) in a modifiable script.
- Disadvantage: tests are brittle, i.e., they have to be re-captured if the GUI changes.

Avoid doing acceptance testing only in the final stage: it is too late to find bugs.

Table-based Approach for acceptance testing

Starting from a user story (or use case, or textual requirement), the customer enters into a table (spreadsheet application, HTML, Word, ...) the expectations about the program's behavior: one column per input, one column for the expected output. At this point the tables can be used as an oracle: the customer can manually insert the inputs into the system and compare the outputs with the expected results.

Pros: helps to clarify requirements; usable in system testing; ...
Cons: expensive, error prone, ...

Table-based test cases can help in clarifying requirements


It is estimated that 85% of the defects in developed software originate in the requirements (communication between customer and analyst, communication between analyst and developer). There are several sins to avoid when specifying requirements: noise, silence, ambiguity, over-specification, wishful thinking, ... => ambiguous, inconsistent, unusable requirements.

Example: an order-processing system for a brewery. If a retail store buys 50 cases of a seasonal brew, no discount is applied; but if the 50 cases are not seasonal, a 12% discount is applied. If a store buys 100 cases of a seasonal brew, a discount is applied, but it's only 5%. A 100-case order of a nonseasonal drink is discounted at 17%. There are similar rules for buying in quantities of 200.
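The brewery rules become much less ambiguous once each one is a row of a table. A minimal sketch (the discountFor function is hypothetical, and the unspecified 200-case rules are deliberately left out):

```java
public class BreweryDiscountDemo {

    // Hypothetical implementation of the four discount rules stated above.
    // The 200-case rules are not stated, so they are not encoded here.
    static int discountFor(int cases, boolean seasonal) {
        if (cases == 50)  return seasonal ? 0 : 12;
        if (cases == 100) return seasonal ? 5 : 17;
        throw new IllegalArgumentException("no rule specified for " + cases + " cases");
    }

    public static void main(String[] args) {
        // Each table row: { cases, seasonal (1 = yes), expected discount % }.
        int[][] rows = { {50, 1, 0}, {50, 0, 12}, {100, 1, 5}, {100, 0, 17} };
        for (int[] row : rows) {
            int got = discountFor(row[0], row[1] == 1);
            if (got != row[2])
                throw new AssertionError("row " + row[0] + "/" + row[1]
                        + ": expected " + row[2] + "%, got " + got + "%");
        }
        System.out.println("all table rows pass");
    }
}
```

Once the rules live in a table, the customer can review the rows directly, and the same rows can drive an automated check; this is exactly the idea behind Fit/Fitnesse.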

Badly designed systems make testing difficult

We have a thick GUI that contains program logic. The interfaces between the modules are not clearly defined. Testing of specific functions (unit testing) cannot be isolated, and testing has to be done through the GUI with GUI test drivers => Fit/Fitnesse is not sufficient. Testing is difficult.

Well architected applications make testing simple

The GUI does not contain any program logic other than presentation. The interfaces between the modules are well defined. This gives us testing advantages: unit testing and system/acceptance testing are simpler.

Well architected applications make testing simple: testing a module

When an application has modules with well-defined interfaces, each module can be tested independently of the other modules. Using this type of environment, the developer can test a module to make sure everything is working before trying to integrate it with other modules. This setting does not require Fit/FitNesse: you could use any automated test harness that works for your application (e.g., JUnit).

Conclusions

- Badly designed systems make testing difficult: unit testing is complex, and all end-to-end tests go through the GUI.
- Well architected applications simplify testing: unit testing is simple, and end-to-end tests go through the interfaces of the modules.
- The motivation of acceptance testing is demonstrating working functionalities; the motivation of JUnit is finding faults.
- Manual acceptance testing is expensive, error prone, and not repeatable.
- Table-based test cases help to clarify textual requirements.
- Table-based test cases can make requirements verifiable and executable.
- Table-based test cases can be useful for managers, customers, analysts, and developers.
