
Chp-3: Static Testing

1. Static Techniques – Review
2. Review Process (Informal & Formal)
3. Desk Checking
4. Technical or Peer Review
5. Walkthrough
6. Inspection
7. Static Techniques – Static Analysis
8. Data flow analysis
9. Control flow analysis
10. Static Analysis by Tools (Automated Static Analysis)
11. Case Study on Preparation of Inspection Checklist

Static Techniques – Review


Static testing is a type of testing which requires only the source code of the product, not an executable. Static testing does not involve executing the program on a computer; instead, selected people go through the code to find out whether –

- The code works according to the functional requirements.

- The code has been written in accordance with the design developed earlier in the project life cycle.

- The code for any functionality has been missed out.

- The code handles errors properly.

Static testing can be done by humans or with the help of specialized tools –

- These methods rely on the principle of a human reading the program code to detect errors, rather than a computer executing the code to find errors.

The process has several advantages:

(i) Sometimes a human can find errors that a computer cannot. E.g. when there are two variables with similar names and the programmer used the "wrong" variable by mistake in an expression, the computer will not detect the error, but will execute the statement and produce an incorrect result, whereas a human being can spot such an error.

There are multiple methods to achieve static testing by humans:


(i) Code Walkthrough
(ii) Code Review
(iii) Code Inspection

What, how, who and why of reviews (summary of the figure):

What (review items): Requirements, Design, Code, User documentation.

How & Who: Audit, Inspection, Peer Review, Walkthrough, Desk Check.

Why: Detect errors, check conformity with specifications, check quality attributes, check usage and standards, check progress.

Review Process

(i) Review: Reviews are conducted during and at the end of each phase of the life cycle to determine whether established requirements, design concepts and specifications have been met.
▪ A review consists of the presentation of material to a review panel.
▪ Reviews are most effective when conducted by personnel who have not been directly involved in the software being reviewed.

Reviews may be either formal or informal.

(ii) Formal Review: These reviews are conducted at the end of each life cycle phase. The acquirer of the software appoints the formal review panel, which may make or affect a go / no-go decision to proceed to the next step of the life cycle.
Formal reviews include –
 Software requirements review
 Software preliminary design review
 Software test readiness review
Informal review – Informal reviews are conducted on an as-needed basis. The developer chooses a review panel and provides and/or presents the material to be reviewed, as a document or hard copy.
Testing:
Testing is the operation of the software with real or simulated inputs to demonstrate that the product satisfies its requirements and, if it does not, to identify the specific differences between expected and actual results.
 Careful planning is required to get the most out of the testing and inspection process.
 Planning should start early in the development process.
 Describe the project life cycle and milestones.
 Summarize the schedule of verification and validation tasks and how verification and validation results provide feedback to the development process to support the overall project management function.
Goal of verification and validation:
Verification and validation should establish confidence that the software is fit for purpose.
Planning:
 Scope of work
 Software integrity levels, e.g. medical device – high level; personal record keeping – low level
 Development of the verification and validation plan
Benefits of verification and validation:
▪ Early defect detection leads to a better solution rather than quick fixes.
▪ Validation confirms that the solution is solving the right problem against the software requirements.
▪ Supports process improvement with objective feedback on the quality of the development process and its products.
Review Guidelines:
Minimum set of guidelines for formal technical reviews (FTRs):

(a) Review the product, not the producer
(b) Set an agenda and maintain it
(c) Limit debate
(d) Indicate problem areas
(e) Take written notes
(f) Limit the number of participants
(g) Develop a checklist for each product that is likely to be reviewed
(h) Allocate resources and schedule time for FTRs
(i) Conduct meaningful training for all reviewers
(j) Review your early reviews
(k) Each reviewer must present his or her findings
(l) At the end of the review meeting all attendees have to sign off the protocol
Review Meeting

The meeting typically consists of the following phases (partly depending on the review type):

1) Logging Phase  2) Discussion Phase  3) Decision Phase (sorting / classification)

(i) Logging Phase:

The issues, e.g. defects, that have been identified during the preparation are mentioned page by page, reviewer by reviewer, and are logged by the author.
To ensure progress and efficiency, no real discussion is allowed during the logging phase.
(ii) Discussion Phase:
If an issue needs discussion, the item is logged and then handled in the discussion phase. A detailed discussion on whether or not an issue is a defect is not very meaningful, as it is much more efficient to simply log it and proceed to the next one. Every defect and its severity should be logged; the participant who identifies the defect proposes the severity: Critical, Major or Minor.

(iii) Decision Phase:
At the end of the meeting, a decision on the document under review has to be made by the participants, sometimes based on formal exit criteria. The most important exit criterion: if the average number of critical and major defects per page exceeds a certain level, the document must be reviewed again after it has been reworked.

Rework – The author reworks the reviewed documents.

Follow up – The moderator is responsible for ensuring that satisfactory actions have been taken on all logged defects, process improvement suggestions and change requests.

In order to control and optimize the review process, a number of measurements are collected by the moderator at each step of the process, e.g. the number of defects found, the number of defects found per page, the time spent checking per page, the total review effort, etc.

Review Reporting and Record Keeping


All issues that have been raised are summarized at the end of the review meeting and a review
issues list is produced.

In addition, a formal technical review summary report is completed.

Review summary report – answers three questions.

(i) What was reviewed?

(ii) Who reviewed it?

(iii) What were the findings and conclusions?

Every single finding has to be weighted as –

Crucial error – Makes use of the product impossible; it has to be corrected before release.

Main error – Affects the usability of the product.

Good – Without error.

Software Inspections:

 Involve people examining the source code representation with the aim of discovering
anomalies and defects.
 Very effective technique for discovering errors.

(i) Inspection process:

Many different defects may be discovered in a single inspection. In testing, one defect may mask another, so several test executions are required. Inspection can also check conformance with the customer's real requirements.

(ii) Inspection Procedure:

▪ System overview presented to inspection team.

▪ Code and associated documents are distributed to inspection team in advance.

▪ Inspection takes place and discovered errors are noted.

▪ Modifications are made to repair discovered errors.

(iii) Inspection Checklist:

A checklist of common errors should be used to drive the inspection.

The error checklist is programming language dependent.

The weaker the type checking of the language, the larger the checklist.

An inspection is a detailed examination of a product on a step-by-step or line-of-code by line-of-code basis.

The purpose of code inspection is to find errors.

It is a formal meeting carried out by a small team of at least four people:

Author / Owner, Reader, Moderator, Chief Moderator

(iv) Inspection is defect detection:

Defects include –

▪ Logic errors

▪ Requirement mismatch

▪ Design defects

(v) Inspection Checklist Format:

Purpose:-

Process:-

Members:-

Inspection checklist:-

Role of members:-

(vi) Inspection must consider the following points:

 Prepare a checklist of likely errors to drive the inspection process.

 Train the readers first and then the team members.

(vii) Role of Inspection Team:

Author / Owner: Produces the program or design documents.

Reader: Summarizes the code or documents at an inspection meeting.

Moderator: Manages the process and facilitates the inspection; reports the process results to the chief moderator.

Chief Moderator: Responsible for inspection process improvements –

 Checklist updating
 Standards development, etc.

(viii) Contents of Inspection Checklist:


 Initialization
 Constant naming
 Loop termination
 Array bounds

Fault Class – Static analysis checks:

a) Data fault – Have all constants been named? Are all program variables initialized before their values are used?

b) Control fault – For each conditional statement: is the condition correct?

c) Input / Output fault – Can unexpected input / output cause corruption?

d) Storage management fault – Is space de-allocated when it is no longer required?

e) Exception handling fault – Have all possible error conditions been taken into account?
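
As an illustration, the hypothetical C++ fragment below (not taken from any real project) has been seeded with one instance of each fault class from the table above; these are exactly the kinds of findings a checklist-driven inspection or a static analyzer is expected to report:

// Hypothetical fragment seeded with one fault per fault class.
#include <cstdlib>

int average(const int* values, int count)       // Input/Output fault: values may be a null pointer
{
    int sum;                                    // Data fault: sum is never initialized
    for (int i = 0; i <= count; i++)            // Control fault: condition should be i < count
        sum += values[i];
    int* copy = (int*)std::malloc(count * sizeof(int));
    (void)copy;                                 // Storage management fault: copy is never freed
    return sum / count;                         // Exception handling fault: division by zero when count == 0
}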

Write a 7–8 point functional / non-functional inspection checklist for testing each of the following (a sample checklist for the ATM is sketched after the list) –

1. On-line banking
2. ATM
3. Railway reservation system
4. Mobile phone
5. Traffic signals
6. 2-way data sharing switch
7. Calculator
8. Bulb
9. Yahoo page
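
For instance, one possible (illustrative, not definitive) checklist for the ATM could be:

1. Is the card validated and the PIN verified before any transaction is allowed? (functional)
2. Is the card blocked or retained after three consecutive wrong PIN entries? (functional)
3. Is the requested withdrawal amount checked against the account balance and the daily limit? (functional)
4. Is cash dispensed only after the account has been debited, and is the transaction rolled back if dispensing fails? (functional)
5. Is a receipt or an on-screen confirmation produced for every transaction? (functional)
6. Does every transaction complete within an acceptable response time? (non-functional – performance)
7. Is the session timed out and the card returned or retained if the user stops responding? (non-functional – usability / security)
8. Are the PIN and account data encrypted and never written to logs in plain text? (non-functional – security)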

Automated Static Analysis:

 Helps to improve the quality of increasingly complex software and systems.

 As software becomes more complex, the probability of exposing end users to program defects increases exponentially.

 Static analysis literally means the study of things that are not changing; in software terms, however, the definition should be refined to mean the study of source code and/or binary code that is not currently being executed.

 To analyze running code you need a debugger or profiler, but you can learn a lot from code without ever running the program.

 Organizations may be able to produce better software at a lower cost by integrating static analysis information into the software development process.

 There are two basic reasons to embrace static analysis: the first is to reduce the time and cost of developing high quality code, and the second is to increase revenue and reduce business risk by providing reliable software to customers.

 Static analyzers are software tools for source text processing.

 Stages of static analysis:

1. Control flow analysis – checks for loops with multiple exits and entries.

2. Data use analysis – detects uninitialized variables.

3. Interface analysis – checks the consistency of routine and procedure declarations and their use.

4. Information flow analysis – identifies the dependencies of output variables.

5. Path analysis – identifies the paths through the program and sets out the statements to be executed in each path.
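
As a minimal illustration of tool support, even an ordinary compiler performs a limited form of control flow and data use analysis when warnings are enabled (shown here for GCC; dedicated static analysis tools go much further):

g++ -Wall -Wextra -Wuninitialized -c exchange.cpp

On the anomalous exchange() function discussed later in this chapter, such a run would typically warn that the variable Help is used uninitialized – the ur-anomaly that data use analysis is meant to detect.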

Walkthrough
A walkthrough in software testing is used to review documents with peers, managers, and fellow team members, who are guided by the author of the document in order to gather feedback and reach a consensus. A walkthrough can be pre-planned or organised based on need.

Walkthroughs are characterised by the following:

 It is not a formal process/review.
 It is led by the author.
 The author guides the participants through the document according to his or her thought process to achieve a common understanding and to gather feedback.
 It is useful for people who are not from the software discipline and who are not used to, or cannot easily understand, the software development process.
 It is especially useful for higher level documents such as the requirement specification, etc.

The goals of a walkthrough:

1. To present the document to people both within and outside the software discipline in order to gather information regarding the topic under documentation.
2. To explain or do the knowledge transfer and evaluate the contents of the document.
3. To achieve a common understanding and to gather feedback.
4. To examine and discuss the validity of the proposed solutions.

Desk Checking

A desk check is an informal, non-computerized or manual process for verifying the programming and logic of an algorithm before the program is launched. A desk check helps programmers find bugs and errors which would prevent the application from functioning properly. Although it is a useful technique for spotting errors, modern debugging applications and tools have made desk checks less relevant and not as essential as they previously were.

A desk check focuses on the logic and the values of the variables. This is quite different from a test plan, which does not focus on the internal workings and logic, and instead mostly focuses on the inputs and outputs required by the application.

A desk check is performed with the help of a table with columns for the pseudo-code line number, the condition, input/output, and the variables. The pseudo-code line number column helps in specifying the line or lines being executed. The condition column helps in showing the working when evaluating conditions. The input/output column helps in showing the inputs and outputs, and in evaluating the input received from the user and the output displayed by the logic. The variables column helps in evaluating the calculations using variables. The programmer/designer/tester starts with some possible inputs and walks through the algorithm line by line. The lines are assigned line numbers and are processed one by one, taking into account the change in values of the variables. All information is captured in the table columns. The evaluation is usually done with the help of pen/pencil and paper, and is similar to proofreading.

There are many benefits associated with desk checking. It can find and expose issues and errors with the algorithm. It also helps in demonstrating to the designer or programmer that the algorithm performs as intended. It is a fast and inexpensive technique. It can help in identifying errors in logic at early stages of evaluation.
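
A small hypothetical illustration: desk checking the pseudo-code below with the inputs a = 7 and b = 4.

1  read a, b
2  if a > b then
3      max = a
4  else
5      max = b
6  print max

Line no. | Condition     | Input / Output | Variables
1        |               | input: 7, 4    | a = 7, b = 4
2        | a > b : true  |                |
3        |               |                | max = 7
6        |               | output: 7      |

The table shows that for this input the else branch (lines 4–5) is never exercised, so at least one more input set, e.g. a = 2, b = 9, is needed to cover it.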
A desk check is not foolproof. It is the duty of the designer/programmer to make sure that all possible paths of the logic have been traversed and that every required data set has been used. Desk checking is subject to human error, as the evaluator needs to understand the requirements before evaluating the logic.
A third human error-detection process is the older practice of desk checking. A desk check can be
viewed as a one-person inspection or walkthrough: A person reads a program, checks it with respect to
an error list, and/or walks test data through it.
For most people, desk checking is relatively unproductive. One reason is that it is a completely
undisciplined process. A second, and more important, reason is that it runs counter to a testing
principle of Chapter 2—the principle that people are generally ineffective in testing their own
programs. For this reason, you could deduce that desk checking is best performed by a person other
than the author of the program (e.g., two programmers might swap programs rather than desk check
their own programs), but even this is less effective than the walkthrough or inspection process. The
reason is the synergistic effect of the walkthrough or inspection team. The team session fosters a
healthy environment of competition; people like to show off by finding errors. In a desk-checking
process, since there is no one to whom you can show off, this apparently valuable effect is missing. In
short, desk checking may be more valuable than doing nothing at all, but it is much less effective than
the inspection or walkthrough.

Data Flow Testing

Data flow testing focuses on the points at which a variable receives a value and the points at which that value is used. It detects improper use of data values (data flow anomalies) caused by coding errors.

Data flow testing can be static (performed at compile time, by inspection) or dynamic (performed at execution time).

Typical findings are –

 A variable that is defined but never used

 A variable that is used but never defined

Static data flow testing is a form of static analysis based on the definition and usage of variables. It is performed by analysing data use: the usage of data along paths through the program code is checked.

It is used to detect data flow anomalies, i.e. unexpected sequences of operations on a variable.

Examples of data flow anomalies –

 Reading a variable without previous initialization.
 Not using the value of a variable at all.

The usage of every single variable is inspected.

Three types of usage / states of variables –

i) Defined (d): The variable is assigned a value.
ii) Referenced (r): The value of the variable is read and/or used.
iii) Undefined (u): The variable has no defined value.

Three types of data flow anomalies –

a) ur-anomaly: An undefined value (u) of a variable is read (r) on a program path.
b) du-anomaly: The variable is assigned a value (d) that becomes invalid / undefined (u) without having been used.
c) dd-anomaly: The variable receives a value for the second time (d) although the first value (d) has not been used.

e.g. The following function is supposed to exchange the integer values of the parameters Max and Min, with the help of the variable Help, if the value of Min is greater than the value of Max:

void exchange (int& Min, int& Max)
{
    int Help;

    if (Min > Max)
    {
        Max  = Help;
        Max  = Min;
        Help = Min;
    }
}

The following anomalies are detected:

ur-anomaly of the variable Help

- The first usage of the variable is on the right side of an assignment statement.
- There was no initialization of the variable when it was declared.
- At this point the variable still has an undefined value, which is referenced here.

dd-anomaly of the variable Max

- The variable is used twice consecutively on the left side of an assignment and is therefore assigned a value twice.
- Either the first assignment can be omitted, or the use of the first value has been forgotten.

du-anomaly of the variable Help

- In the last assignment of the function the variable Help is assigned another value that cannot be used anywhere, because the variable is only valid inside the function.

The corrected function performs the exchange as follows:

void exchange (int& Min, int& Max)
{
    int Help;

    if (Min > Max)
    {
        Help = Max;
        Max  = Min;
        Min  = Help;
    }
}

- Data flow analysis can be used to increase program understanding and to develop test cases based on the data flow within the program.
- Data flow analysis focuses on the occurrences of variables, following the data flow from the initialization of a variable to its uses. The variable's value may be used for computing values of other variables, or used as a predicate (condition) variable to decide whether a predicate is true for traversing a specific execution path.
- The path of the usage of the data can help in identifying suspicious code blocks and in developing test cases to validate the runtime behaviour of the software.
- Data flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program.
- A simple way to perform data flow analysis of a program is to set up data flow equations for each node of the control flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e. it reaches a fixpoint (see the sketch after this list).
- Data flow analysis is inherently flow sensitive.
 Flow sensitive
 Path sensitive
 Context sensitive
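
The following sketch shows this iterative, fixpoint style of analysis on a small hand-built flow graph (a hypothetical example; the node contents and variable names are invented for illustration). It computes live variables, i.e. which variables may still be read after each node, by repeatedly re-evaluating the data flow equations until nothing changes:

// Sketch: iterative live-variable analysis on a hard-coded three-node flow graph.
// The data flow equations are re-evaluated until a fixpoint is reached.
#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Node {
    std::set<std::string> use;   // variables read in this node before being written
    std::set<std::string> def;   // variables written in this node
    std::vector<int> succ;       // successor nodes
    std::set<std::string> in, out;
};

int main() {
    // Node 0: x = read();   Node 1: if (x > 0) y = x;   Node 2: print(y);
    std::vector<Node> cfg(3);
    cfg[0].def = {"x"}; cfg[0].succ = {1};
    cfg[1].use = {"x"}; cfg[1].def = {"y"}; cfg[1].succ = {2};
    cfg[2].use = {"y"};                        // exit node, no successors

    bool changed = true;
    while (changed) {                          // iterate until the whole system stabilizes
        changed = false;
        for (int i = (int)cfg.size() - 1; i >= 0; --i) {
            std::set<std::string> out, in;
            for (int s : cfg[i].succ)          // out[n] = union of in[s] over all successors s
                out.insert(cfg[s].in.begin(), cfg[s].in.end());
            in = cfg[i].use;                   // in[n] = use[n] U (out[n] - def[n])
            for (const std::string& v : out)
                if (cfg[i].def.count(v) == 0) in.insert(v);
            if (in != cfg[i].in || out != cfg[i].out) {
                cfg[i].in = in; cfg[i].out = out; changed = true;
            }
        }
    }
    for (std::size_t i = 0; i < cfg.size(); ++i) {
        std::cout << "node " << i << " live-in:";
        for (const std::string& v : cfg[i].in) std::cout << ' ' << v;
        std::cout << '\n';
    }
    return 0;
}

A variable that turned out to be live on entry to the start node without ever having been defined would correspond to the ur-anomaly described above.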

Control Flow Analysis:

- A control flow graph (CFG) in computer science is a representation, using graph notation, of all paths that might be traversed through a program during its execution.
- In a control flow graph each node represents a basic block, i.e. a straight-line piece of code without any jumps or jump targets. Jump targets start a block and jumps end a block.

(Figure: a connected flow graph with an entry block, basic blocks a, b, c and d, and an exit block.)

- Directed edges are used to represent jumps in the control flow. In most presentations there are two specially designated blocks:
The entry block – through which control enters the flow graph, and
The exit block – through which all control flow leaves.
Flow Graph Notations –

Notations exist for the basic constructs: sequence, if-then-else, while and until.

(i) Arrows, called edges, represent flow of control.

(ii) Circles, called nodes, represent one or more actions.

(iii) Areas bounded by edges and nodes are called regions.

(iv) A predicate node is a node containing a condition.

- Any procedural design / program can be transformed into a flow graph. Later the flow graph can be analyzed for various paths within it.
- Control flow analysis (CFA) is a static analysis technique for determining the control flow of a program. The control flow is expressed as a control flow graph (CFG).
- For many languages, the control flow of a program is explicit in the program's source code; as a result, CFA usually refers to a static analysis technique for determining the receiver(s) of function or method calls in computer programs written in a higher-order programming language.
- For both functional programming languages and object-oriented programming languages, the term CFA refers to an algorithm that computes the control flow.
- To determine the possible targets of such calls, a control flow analysis has to approximate which functions or methods each call expression can invoke.
- It is based upon a graphical representation of the program process. In control flow analysis, the program graphs have nodes which represent a segment of code, possibly ending in an unresolved branch.
- The objective of control flow analysis is to determine potential problems in logic branches that might result in an infinite loop condition or improper processing.
- A control flow graph is an abstract representation of all possible sequences of events (paths) in the execution of a component or system.
- A program's structure is represented (modelled) by a control flow graph (CFG).
- A CFG is a directed graph that shows the sequences of events (paths) in the execution through a component or system.
- A CFG consists of nodes and edges:
Nodes – represent statements or sequences of statements.
Edges – represent the control flow from one statement or basic construct to another.
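
For example, the fragment "if (a > b) max = a; else max = b; return max;" could be modelled (one possible numbering) as:

Nodes: 1: if (a > b)   2: max = a   3: max = b   4: return max
Edges: 1→2 (true branch), 1→3 (false branch), 2→4, 3→4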

Example (figure not reproduced): a pseudocode fragment with nested if-then-else statements (conditions B, C, D, E and H; statements A, F, G, J, K, L and M) inside a do-while loop, together with its control flow graph.

Control flow analysis detects anomalies in the control flow of a test object, e.g. a jump out of a loop body, or a program structure that has many exits.

Draw the control flow graph for the following function, named FIND_MAX, and from the control flow graph determine its cyclomatic complexity.

int max;

if (i < j) then
    if (l < k) then
        max = l
    else
        max = k
else
    if (j > k) then
        max = j
    else
        max = k

return (max)

The control flow graph has 8 nodes (node 1 for the decision i < j, nodes 2 and 5 for the decisions l < k and j > k, nodes 3, 4, 6 and 7 for the four assignments, and node 8 for the return) and 10 edges.

M = E − N + 2P = 10 − 8 + 2 × 1 = 4

This agrees with the number of decisions + 1 = 3 + 1 = 4.

Cyclomatic complexity

It is used to measure the complexity of the software. It is used to determine how many test cases are needed to test the application in all possible ways.

Cyclomatic complexity comes under white box testing. It involves finding the basis paths and measuring the logical complexity of a program.

McCabe used graph theory in defining cyclomatic complexity. There is a set of linearly independent program paths through any program graph.

A maximal set of these linearly independent paths is called a basis set.

Cyclomatic complexity is the number of decision statements in the program being tested plus one.

Cyclomatic complexity is a software metric (measurement). It was developed by Thomas McCabe in 1976, and is used to indicate the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.

Cyclomatic complexity is computed using the control flow graph of the program: the nodes of the graph correspond to indivisible groups of commands of the program, and a directed edge connects two nodes if the second command might be executed immediately after the first command. Cyclomatic complexity may also be applied to individual functions, modules, methods or classes within a program.

The cyclomatic complexity of a section of source code is the count of the number of linearly independent paths through the source code.

E.g. if the source code contains no decision points such as if statements or for loops, the complexity would be 1, since there is only a single path through the code. If the code has a single if statement containing a single condition, there would be two paths through the code: one path where the if statement is evaluated as TRUE and one path where it is evaluated as FALSE.

Mathematically, the cyclomatic complexity of a structured program is defined with reference to the control flow graph of the program: a directed graph containing the basic blocks of the program, with an edge between two basic blocks if control may pass from the first to the second. The complexity M is defined as

M = E − N + 2P

Where E = the number of edges of the graph

N = the number of nodes of the graph

P = the number of connected components
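
A small sketch (a hypothetical helper, not a standard library routine) that applies the formula to a graph stored as an adjacency list:

#include <iostream>
#include <vector>

// M = E - N + 2P for a control flow graph given as an adjacency list.
// The graph is assumed to form a single connected component (P = 1) by default.
int cyclomatic_complexity(const std::vector<std::vector<int>>& successors, int components = 1) {
    int nodes = (int)successors.size();
    int edges = 0;
    for (const std::vector<int>& succ : successors)
        edges += (int)succ.size();
    return edges - nodes + 2 * components;
}

int main() {
    // The FIND_MAX graph from the earlier example: 8 nodes and 10 edges (0-based indices).
    std::vector<std::vector<int>> find_max = {
        {1, 4},     // node 1: if (i < j)
        {2, 3},     // node 2: if (l < k)
        {7},        // node 3: max = l
        {7},        // node 4: max = k
        {5, 6},     // node 5: if (j > k)
        {7},        // node 6: max = j
        {7},        // node 7: max = k
        {}          // node 8: return max
    };
    std::cout << cyclomatic_complexity(find_max) << '\n';   // prints 4
    return 0;
}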


A control flow graph of a simple program (figure not reproduced): the program begins executing at the red node and then enters a loop (the group of three nodes immediately below the red node). On exiting the loop, there is a conditional statement (the group below the loop) and finally the program exits at the blue node.

E = 9, N = 8, P = 1, so the cyclomatic complexity of the program is

9 − 8 + (2 × 1) = 3

For a single program (or subroutine or method) P is always equal to 1.

Cyclomatic complexity may, however, be applied to several programs or subroutines at the same time, and in these cases P will be equal to the number of programs in question.

If the same function is shown as a strongly connected control flow graph (by adding an edge from the exit back to the entry), the alternate formula M = E − N + P can be used. For this graph:

E = 10, N = 8, and P = 1

Cyclomatic complexity for this program is 10 − 8 + 1 = 3.

For example, consider a program that consists of two sequential if-then-else statements:

if (c1)
    f1();
else
    f2();

if (c2)
    f3();
else
    f4();

To achieve complete branch coverage, two test cases are sufficient here. For complete path coverage, four test cases are necessary. The cyclomatic complexity is M = 3.
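
Assuming c1 and c2 are independent boolean conditions, the counts work out as follows:

- Branch coverage: two test cases are enough, e.g. (c1 = true, c2 = true) and (c1 = false, c2 = false); between them, every branch of both decisions is taken at least once.
- Path coverage: four test cases are needed, one for each combination (true, true), (true, false), (false, true) and (false, false), because the two decisions in sequence create 2 × 2 distinct paths.
- Cyclomatic complexity: M = number of decisions + 1 = 2 + 1 = 3, so basis path testing requires three linearly independent paths, which lies between branch coverage and full path coverage.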

Cyclomatic complexity can be applied in several areas, including:

- Code development risk analysis
- Change risk analysis in maintenance
- Test planning – mathematical analysis has shown that cyclomatic complexity gives the exact number of tests needed to test every decision point in a program for each outcome.
- Re-engineering

Deriving test cases: Test cases are designed in many ways. The steps involved in test case design are –

(i) Using the design or code, draw corresponding flow graph.


(ii) Determine Cyclomatic complexity of the flow graph
(iii) Determine basic set of independent paths.
(iv) Prepare test cases that will force execution of each path in the basic set.

McCab’s Cyclomatic complexity is defined as

M = E – N + 2P

E = Number of links in the flow graph

N = Number of nodes in the flow graph

P = Number of disconnected path of the flow graph

Complexity of several graphs considered together is equal to the sum of the individual complexities of
these graphs.

E = 1, N = 2, P = 1, M = 1 − 2 + 2 = 1

E = 4, N = 4, P = 1, M = 4 − 4 + 2 × 1 = 2

E = 2, N = 4, P = 2, M = 2 − 4 + 2 × 2 = 2

E = 4, N = 5, P = 1, M = 4 − 5 + 2 = 1

- A software metric used to measure the complexity of software.


- Described as the number of decision points +1.
- A number of industry studies have indicated that the higher the complexity the higher
the probability of errors.
Basis path testing

We derive the independent paths (the flow graph figure, with nodes 1 to 8, is not reproduced here):

M = 4

Path 1: 1, 2, 3, 6, 7, 8
Path 2: 1, 2, 3, 5, 7, 8
Path 3: 1, 2, 4, 7, 8
Path 4: 1, 2, 4, 7, 2, 4, …, 7, 8

Finally we derive test cases to exercise these paths.

M = E − N + 2P = 4 for this graph.

You might also like