
Notes of Unit - III(SE)

Software design is a process to transform user requirements into some suitable form, which helps the programmer in software coding and implementation. For assessing user requirements, an SRS (Software Requirement Specification) document is created, whereas for coding and implementation there is a need for more specific and detailed requirements in software terms. The output of this process can directly be used in implementation in programming languages.

Software design is the first step in the SDLC (Software Development Life Cycle) that moves the concentration from the problem domain to the solution domain. It tries to specify how to fulfill the requirements mentioned in the SRS.

Software Design Levels


Software design yields three levels of results:

● Architectural Design - The architectural design is the highest abstract

version of the system. It identifies the software as a system with many

components interacting with each other. At this level, the designers

get an idea of the proposed solution domain.

● High-level Design- The high-level design breaks the ‘single

entity-multiple component’ concept of architectural design into a

less-abstracted view of subsystems and modules and depicts their

interaction with each other. High-level design focuses on how the

system along with all of its components can be implemented in forms

of modules. It recognizes the modular structure of each sub-system and the relations and interactions among them.


● Detailed Design or Low level Design- Detailed design deals with the

implementation part of what is seen as a system and its sub-systems in

the previous two designs. It is more detailed towards modules and

their implementations. It defines the logical structure of each module

and their interfaces to communicate with other modules.

Modularization
Modularization is a technique to divide a software system into multiple discrete and

independent modules, which are expected to be capable of carrying out task(s)

independently. These modules may work as basic constructs for the entire

software. Designers tend to design modules such that they can be executed and/or

compiled separately and independently.

Modular design naturally follows the ‘divide and conquer’ problem-solving strategy, and it brings many other benefits to the design of a software as well.
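
As a minimal illustration (the class names below are hypothetical, not taken from any standard), the sketch splits a small billing feature into two separately compilable Java modules, each with a single responsibility, so either one can be maintained, tested or reused on its own:

// TaxCalculator.java - a self-contained module; it knows nothing about printing.
public class TaxCalculator {
    public double taxFor(double amount, double ratePercent) {
        return amount * ratePercent / 100.0;
    }
}

// InvoicePrinter.java - a second module; it depends only on the small public interface above.
public class InvoicePrinter {
    private final TaxCalculator calculator = new TaxCalculator();

    public void print(double amount) {
        double total = amount + calculator.taxFor(amount, 18.0);
        System.out.println("Invoice total (incl. tax): " + total);
    }
}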

Advantage of modularization:

● Smaller components are easier to maintain

● Program can be divided based on functional aspects

● Desired level of abstraction can be brought in the program

● Components with high cohesion can be reused

● Concurrent execution can be made possible

● Desirable from a security aspect

Concurrency
In earlier times, all software was meant to be executed sequentially. By sequential execution we mean that the coded instructions are executed one after another, implying that only one portion of the program is active at any given time. Say a software has multiple modules; then only one of all the modules can be found active at any time of execution.

In software design, concurrency is implemented by splitting the software into

multiple independent units of execution, like modules and executing them in

parallel. In other words, concurrency provides capability to the software to execute

more than one part of code in parallel to each other.
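
A minimal Java sketch (module names are illustrative) of the idea: two independent units of execution are started in parallel instead of one after another.

public class ConcurrentModules {
    public static void main(String[] args) throws InterruptedException {
        // Two independent units of execution (think of them as two modules).
        Thread billing = new Thread(() -> System.out.println("Billing module running"));
        Thread reporting = new Thread(() -> System.out.println("Reporting module running"));

        // Both modules are active at the same time.
        billing.start();
        reporting.start();

        // Wait for both to finish before the program exits.
        billing.join();
        reporting.join();
    }
}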

Structure Chart
A Structure Chart represents the hierarchical structure of modules. It breaks down the entire system into the lowest functional modules and describes the functions and sub-functions of each module of a system in greater detail. A Structure Chart partitions the system into black boxes (the functionality of the system is known to the users, but the inner details are unknown). Inputs are given to the black boxes and appropriate outputs are generated.

Modules at the top level call the modules at the lower level. Components are read from top to bottom and left to right. When a module calls another, it views the called module as a black box, passing the required parameters and receiving the results.
Symbols used in the construction of a structure chart

1. Module

It represents the process or task of the system. It is of three types.

○ Control Module

A control module branches to more than one sub module.

○ Sub Module

Sub Module is a module which is the part (Child) of

another module.

○ Library Module

Library Module are reusable and invokable from any

module.

2. Conditional Call

It represents that control module can select any of the sub module
on the basis of some condition.

3. Loop (Repetitive call of module)

It represents the repetitive invocation of sub modules by a module. A curved arrow represents a loop in the module. All the sub modules covered by the loop are executed repeatedly.

4. Data Flow

It represents the flow of data between the modules. It is represented by a directed arrow with an empty circle at the end.

5. Control Flow

It represents the flow of control between the modules. It is

represented by a directed arrow with a filled circle at the end.

6. Physical Storage

Physical Storage is where all the information is to be stored.


Example : Structure chart for an Email server

Types of Structure Chart:

1. Transform Centered Structure:

These types of structure charts are designed for the systems that

receive an input which is transformed by a sequence of operations

being carried out by one module.


2. Transaction Centered Structure:

This structure describes a system that processes a number of

different types of transactions.

Algorithm: It’s an organized, logical sequence of actions or an approach towards a particular problem. A programmer implements an algorithm to solve a problem. Algorithms are expressed using natural language along with somewhat technical annotations.

Pseudo code: It’s simply an implementation of an algorithm in the form of

annotations and informative text written in plain English. It has no syntax like

any of the programming languages and thus can’t be compiled or interpreted

by the computer.

Advantages of Pseudocode

● Improves the readability of any approach. It’s one of the best

approaches to start implementation of an algorithm.

● Acts as a bridge between the program and the algorithm or

flowchart. Also works as a rough documentation, so the program of

one developer can be understood easily when a pseudo code is

written out. In industries, the approach of documentation is

essential. And that’s where a pseudo-code proves vital.


● The main goal of a pseudo code is to explain what exactly each line

of a program should do, hence making the code construction phase

easier for the programmer.

How to write a Pseudo-code?

1. Arrange the sequence of tasks and write the pseudocode

accordingly.

Start with the statement of a pseudo code which establishes the main goal

or the aim.

Example:

This program will allow the user to check whether the number is even or odd.

2. The way the if-else, for, and while loops are indented in a program, indent the statements likewise, as it helps to comprehend the decision control and execution mechanism. It also improves the readability to a great extent.

Example:
if "1"

print response

"I am case 1"

if "2"

print response

"I am case 2"

3. Use appropriate naming conventions. People tend to follow what they see: if a programmer goes through a pseudo code, his approach will mirror it, so the naming must be simple and distinct.

4. Use appropriate sentence casings, such as CamelCase for methods,

upper case for constants and lower case for variables.

5. Elaborate everything which is going to happen in the actual code.

Don’t make the pseudo code abstract.

6. Use standard programming structures such as ‘if-then’, ‘for’, ‘while’,

‘cases’ the way we use it in programming.

7. Check whether all the sections of a pseudo code are complete, finite

and clear to understand and comprehend.

8. Don’t write the pseudo code in a complete programmatic manner. It

is necessary to be simple to understand even for a layman or client,


hence don’t incorporate too many technical terms.
Example:

Let’s have a look at this code

● Java

// This program calculates the Lowest Common Multiple
// for excessively long input values

import java.util.*;

public class LowestCommonMultiple {

    private static long lcmNaive(long numberOne, long numberTwo)
    {
        long lowestCommonMultiple;
        lowestCommonMultiple = (numberOne * numberTwo)
                / greatestCommonDivisor(numberOne, numberTwo);
        return lowestCommonMultiple;
    }

    private static long greatestCommonDivisor(long numberOne, long numberTwo)
    {
        if (numberTwo == 0)
            return numberOne;
        return greatestCommonDivisor(numberTwo, numberOne % numberTwo);
    }

    public static void main(String args[])
    {
        Scanner scanner = new Scanner(System.in);

        System.out.println("Enter the inputs");
        long numberOne = scanner.nextInt();
        long numberTwo = scanner.nextInt();

        System.out.println(lcmNaive(numberOne, numberTwo));
    }
}

And here’s the Pseudo Code for the same.

This program calculates the Lowest Common Multiple
for excessively long input values

function lcmNaive(Argument one, Argument two){
    Calculate the lowest common multiple of Argument one and
    Argument two by dividing their product by their
    greatest common divisor

    return lowest common multiple
end
}

function greatestCommonDivisor(Argument one, Argument two){
    if Argument two is equal to zero
        then return Argument one

    return the greatest common divisor
end
}

In the main function
{
    print prompt "Input two numbers"

    Take the first number from the user
    Take the second number from the user

    Send the first number and the second number
    to the lcmNaive function and print
    the result to the user
}

Flowchart
A flowchart is a graphical representation of an algorithm. Programmers often use it as a program-planning tool to solve a problem. It makes use of symbols which are connected among themselves to indicate the flow of information and processing.
The process of drawing a flowchart for an algorithm is known as
“flowcharting”.
Basic Symbols used in Flowchart Designs
● Terminal: The oval symbol indicates Start, Stop and Halt in a program’s logic flow. A pause/halt is generally used in a program logic under some error conditions. Terminal is the first and last symbol in a flowchart.

● Input/Output: A parallelogram denotes any function of input/output


type. Program instructions that take input from input devices and
display output on output devices are indicated with parallelogram
in a flowchart.
● Processing: A box represents arithmetic instructions. All
arithmetic processes such as adding, subtracting, multiplication
and division are indicated by action or process symbol.

● Decision: A diamond symbol represents a decision point. Decision-based operations such as yes/no questions or true/false are indicated by a diamond in a flowchart.
● Connectors: Whenever flowchart becomes complex or it spreads
over more than one page, it is useful to use connectors to avoid
any confusions. It is represented by a circle.

● Flow lines: Flow lines indicate the exact sequence in which


instructions are executed. Arrows represent the direction of flow
of control and relationship among different symbols of flowchart.

Rules For Creating Flowchart :


A flowchart is a graphical representation of an algorithm. The following rules should be followed while creating a flowchart:
Rule 1: Flowchart opening statement must be ‘start’ keyword.
Rule 2: Flowchart ending statement must be ‘end’ keyword.
Rule 3: All symbols in the flowchart must be connected with an arrow line.
Rule 4: The decision symbol in the flowchart is associated with more than one arrow line, one outgoing line for each possible outcome.

Advantages of Flowchart:
● Flowcharts are a better way of communicating the logic of the system.
● Flowcharts act as a guide or blueprint during program design.
● Flowcharts help in the debugging process.
● With the help of flowcharts, programs can be easily analyzed.
● Flowcharts provide good documentation.
● Errors in the software are easy to trace.
● Flowcharts are easy to understand.
● A flowchart can be reused for convenience in the future.
● Flowcharts help to provide correct logic.

Disadvantages of Flowchart:
● It is difficult to draw flowcharts for large and complex programs.
● There is no standard to determine the amount of detail.
● Difficult to reproduce the flowcharts.
● It is very difficult to modify the Flowchart.
● Making a flowchart is costly.
● Some developers think that it is a waste of time.
● It slows down the software development process.
● If changes are done in software, then the flowchart must be
redrawn

Example : Draw a flowchart to input two numbers from the user and display
the largest of two numbers
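
A minimal Java sketch of the logic such a flowchart would describe: an input step (parallelogram), a decision step (diamond) and an output step (parallelogram).

import java.util.Scanner;

public class LargestOfTwo {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);   // Input: read two numbers
        int a = scanner.nextInt();
        int b = scanner.nextInt();

        if (a > b) {                                // Decision: is a greater than b?
            System.out.println("Largest: " + a);    // Output on the 'yes' branch
        } else {
            System.out.println("Largest: " + b);    // Output on the 'no' branch
        }
    }
}
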
Coupling and Cohesion
When a software program is modularized, its tasks are divided into several

modules based on some characteristics. As we know, modules are sets of instructions put together in order to achieve some tasks. Though they are considered as single entities, they may refer to each other to work together. There are measures by which the quality of the design of modules and their interaction among themselves can be measured. These measures are called coupling and cohesion.

Cohesion
Cohesion is a measure that defines the degree of intra-dependability within

elements of a module. The greater the cohesion, the better is the program design.

There are seven types of cohesion, namely –

● Co-incidental cohesion - It is unplanned and random cohesion, which

might be the result of breaking the program into smaller modules for

the sake of modularization. Because it is unplanned, it may cause confusion to the programmers and is generally not accepted.

● Logical cohesion - When logically categorized elements are put

together into a module, it is called logical cohesion.

● Temporal Cohesion - When elements of a module are organized such

that they are processed at a similar point in time, it is called temporal

cohesion.
● Procedural cohesion - When elements of a module are grouped

together, which are executed sequentially in order to perform a task, it

is called procedural cohesion.

● Communicational cohesion - When elements of a module are grouped

together, which are executed sequentially and work on the same data

(information), it is called communicational cohesion.

● Sequential cohesion - When elements of a module are grouped

because the output of one element serves as input to another and so

on, it is called sequential cohesion.

● Functional cohesion - It is considered to be the highest degree of

cohesion, and it is highly expected. Elements of module in functional

cohesion are grouped because they all contribute to a single

well-defined function. It can also be reused.
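
For example, the small hypothetical module sketched below is functionally cohesive: every statement in it contributes to one well-defined task (computing an average), which also makes it easy to reuse.

// A functionally cohesive module: everything serves one well-defined function.
public class Average {
    public static double of(double[] values) {
        double sum = 0.0;
        for (double v : values) {
            sum += v;                    // accumulate the total
        }
        return sum / values.length;      // the single, well-defined result
    }
}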

Coupling
Coupling is a measure that defines the level of inter-dependability among modules

of a program. It tells at what level the modules interfere and interact with each

other. The lower the coupling, the better the program.

There are five levels of coupling, namely -

● Content coupling - When a module can directly access or modify or

refer to the content of another module, it is called content level

coupling.

● Common coupling- When multiple modules have read and write access

to some global data, it is called common or global coupling.


● Control coupling- Two modules are called control-coupled if one of

them decides the function of the other module or changes its flow of

execution.

● Stamp coupling- When multiple modules share a common data

structure and work on different parts of it, it is called stamp coupling.

● Data coupling- Data coupling is when two modules interact with each

other by means of passing data (as parameter). If a module passes

data structure as parameter, then the receiving module should use all

its components.

Ideally, no coupling is considered to be the best.
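
As an illustration of the preferred, low end of this scale, the two hypothetical modules below are only data coupled: they interact purely by passing plain data as parameters, with no shared globals and no knowledge of each other’s internals.

// Discounts.java - interacts with callers only through parameters and a return value.
public class Discounts {
    public static double discountedPrice(double price, double percent) {
        return price - (price * percent / 100.0);
    }
}

// Checkout.java - only plain data crosses the module boundary (data coupling).
public class Checkout {
    public static void main(String[] args) {
        double toPay = Discounts.discountedPrice(200.0, 10.0);
        System.out.println("Amount to pay: " + toPay);
    }
}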

Design Verification
The output of software design process is design documentation, pseudo codes,

detailed logic diagrams, process diagrams, and detailed description of all functional

or non-functional requirements.

The next phase, which is the implementation of software, depends on all outputs

mentioned above.

It then becomes necessary to verify the output before proceeding to the next phase.

The earlier a mistake is detected, the better; otherwise it might not be detected until

testing of the product. If the outputs of the design phase are in formal notation

form, then their associated tools for verification should be used otherwise a

thorough design review can be used for verification and validation.


By structured verification approach, reviewers can detect defects that might be

caused by overlooking some conditions. A good design review is important for good

software design, accuracy and quality.

Differentiate between Coupling and Cohesion


● Coupling is also called Inter-Module Binding, whereas cohesion is also called Intra-Module Binding.
● Coupling shows the relationships between modules, whereas cohesion shows the relationships within a module.
● Coupling shows the relative independence between modules, whereas cohesion shows a module's relative functional strength.
● While creating a design you should aim for low coupling, i.e., dependency among modules should be minimal; whereas you should aim for high cohesion, i.e., each component/module should focus on a single function (single-mindedness) with little interaction with other modules of the system.
● In coupling, modules are linked to other modules, whereas in cohesion, a module focuses on a single thing.

In software engineering, the coupling is the degree of interdependence between


software modules. Two modules that are tightly coupled are strongly dependent on
each other. However, two modules that are loosely coupled are not dependent on each
other. Uncoupled modules have no interdependence at all between them.

The various types of coupling techniques are shown in fig:

A good design is the one that has low coupling. Coupling is measured by the number
of relations between the modules. That is, the coupling increases as the number of
calls between modules increase or the amount of shared data is large. Thus, it can be
said that a design with high coupling will have more errors.

Cohesion is an ordinal type of measurement and is generally described as "high


cohesion" or "low cohesion."

Software Design Strategies


Software design is a process to conceptualize the software requirements into

software implementation. Software design takes the user requirements as

challenges and tries to find optimum solution. While the software is being

conceptualized, a plan is chalked out to find the best possible design for

implementing the intended solution.

There are multiple variants of software design. Let us study them briefly:

Structured Design
Structured design is a conceptualization of problem into several well-organized

elements of solution. It is basically concerned with the solution design. The benefit of structured design is that it gives a better understanding of how the problem is being solved. Structured design also makes it simpler for the designer to concentrate on the

problem more accurately.

Structured design is mostly based on ‘divide and conquer’ strategy where a

problem is broken into several small problems and each small problem is

individually solved until the whole problem is solved.

The small pieces of problem are solved by means of solution modules. Structured

design emphasizes that these modules be well organized in order to achieve a precise

solution.

These modules are arranged in hierarchy. They communicate with each other. A

good structured design always follows some rules for communication among

multiple modules, namely -

Cohesion - grouping of all functionally related elements.

Coupling - communication between different modules.

A good structured design has high cohesion and low coupling arrangements.

Function Oriented Design


In function-oriented design, the system is comprised of many smaller sub-systems

known as functions. These functions are capable of performing significant task in

the system. The system is considered as top view of all functions.


Function oriented design inherits some properties of structured design where divide

and conquer methodology is used.

This design mechanism divides the whole system into smaller functions, which

provides a means of abstraction by concealing the information and its operation.

These functional modules can share information among themselves by means of

information passing and using information available globally.

Another characteristic of functions is that when a program calls a function, the

function changes the state of the program, which sometimes is not acceptable by

other modules. Function oriented design works well where the system state does

not matter and program/functions work on input rather than on a state.
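
A small sketch of that distinction (both methods are hypothetical): the first works only on its input, so its result never depends on program state; the second changes program state as a side effect, which other modules may not expect.

public class FunctionStyles {
    private static int counter = 0;      // shared program state

    // Works purely on its input; the same input always gives the same output.
    static int square(int x) {
        return x * x;
    }

    // Changes the state of the program every time it is called.
    static int nextId() {
        counter = counter + 1;
        return counter;
    }
}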

Design Process
● The whole system is seen as how data flows in the system by means of

data flow diagram.

● DFD depicts how functions change the data and state of the entire system.

● The entire system is logically broken down into smaller units known as

functions on the basis of their operation in the system.

● Each function is then described at large.

Object Oriented Design


Object oriented design works around the entities and their characteristics instead of

functions involved in the software system. This design strategy focuses on entities and their characteristics. The whole concept of the software solution revolves around the

engaged entities.

Let us see the important concepts of Object Oriented Design:


● Objects - All entities involved in the solution design are known as

objects. For example, person, banks, company and customers are

treated as objects. Every entity has some attributes associated to it and

has some methods to perform on the attributes.

● Classes - A class is a generalized description of an object. An object is

an instance of a class. Class defines all the attributes, which an object

can have and methods, which defines the functionality of the object.

In the solution design, attributes are stored as variables and

functionalities are defined by means of methods or procedures.

● Encapsulation - In OOD, the attributes (data variables) and methods

(operations on the data) are bundled together; this is called encapsulation.

Encapsulation not only bundles important information of an object

together, but also restricts access of the data and methods from the

outside world. This is called information hiding.

● Inheritance - OOD allows similar classes to stack up in hierarchical

manner where the lower or sub-classes can import, implement and

re-use allowed variables and methods from their immediate super

classes. This property of OOD is known as inheritance. This makes it

easier to define specific class and to create generalized classes from

specific ones.

● Polymorphism - OOD languages provide a mechanism where methods

performing similar tasks but vary in arguments, can be assigned same

name. This is called polymorphism, which allows a single interface

performing tasks for different types. Depending upon how the function

is invoked, respective portion of the code gets executed.
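
A compact Java sketch (the class names are illustrative) tying these concepts together: an encapsulated base class, a subclass that inherits from it, and a polymorphic method call.

// Encapsulation: the attribute is private and reachable only through methods.
class Account {
    private double balance;                            // attribute stored as a variable

    Account(double opening) { this.balance = opening; }

    double getBalance() { return balance; }

    double interest() { return balance * 0.04; }       // general behaviour
}

// Inheritance: SavingsAccount reuses and specializes Account.
class SavingsAccount extends Account {
    SavingsAccount(double opening) { super(opening); }

    // Polymorphism: the same call name, but different behaviour for this type.
    @Override
    double interest() { return getBalance() * 0.06; }
}

public class OodDemo {
    public static void main(String[] args) {
        Account a = new SavingsAccount(1000.0);        // an object is an instance of a class
        System.out.println(a.interest());              // prints 60.0 - the subclass version runs
    }
}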


Design Process
The software design process can be perceived as a series of well-defined steps. Though it varies according to the design approach (function oriented or object oriented), it may have the following steps involved:

● A solution design is created from requirement or previous used system

and/or system sequence diagram.

● Objects are identified and grouped into classes on the basis of similarity

in attribute characteristics.

● Class hierarchy and relation among them is defined.

● Application framework is defined.

Software Design Approaches


Here are two generic approaches for software designing:

Top Down Design


We know that a system is composed of more than one sub-system and contains a number of components. Further, these sub-systems and components may have their own sets of sub-systems and components, creating a hierarchical structure in the system.

Top-down design takes the whole software system as one entity and then

decomposes it to achieve more than one sub-system or component based on some

characteristics. Each sub-system or component is then treated as a system and

decomposed further. This process keeps on running until the lowest level of system

in the top-down hierarchy is achieved.


Top-down design starts with a generalized model of the system and keeps on defining its more specific parts. When all components are composed, the whole system comes into existence.

Top-down design is more suitable when the software solution needs to be designed

from scratch and specific details are unknown.

Bottom-up Design
The bottom up design model starts with most specific and basic components. It

proceeds with composing higher level of components by using basic or lower level

components. It keeps creating higher-level components until the desired system evolves as one single component. With each higher level, the amount of

abstraction is increased.

Bottom-up strategy is more suitable when a system needs to be created from some

existing system, where the basic primitives can be used in the newer system.

Both, top-down and bottom-up approaches are not practical individually. Instead, a

good combination of both is used.


Halstead Software Science

Halstead’s Software Metrics


A computer program is an implementation of an algorithm considered to be
a collection of tokens which can be classified as either operators or
operands. Halstead’s metrics are included in a number of current
commercial tools that count software lines of code. By counting the tokens
and determining which are operators and which are operands, the following
base measures can be collected :
n1 = Number of distinct operators.
n2 = Number of distinct operands.
N1 = Total number of occurrences of operators.
N2 = Total number of occurrences of operands.
In addition to the above, Halstead defines the following :
n1* = Number of potential operators.
n2* = Number of potential operands.
Halstead refers to n1* and n2* as the minimum possible number of
operators and operands for a module and a program respectively. This
minimum number would be embodied in the programming language itself,
in which the required operation would already exist (for example, in C
language, any program must contain at least the definition of the function
main()), possibly as a function or as a procedure: n1* = 2, since at least 2
operators must appear for any function or procedure : 1 for the name of the
function and 1 to serve as an assignment or grouping symbol, and n2*
represents the number of parameters, without repetition, which would need
to be passed on to the function or the procedure.

Halstead metrics –

Halstead metrics are :


● Halstead Program Length – The total number of operator
occurrences and the total number of operand occurrences.
N = N1 + N2
And estimated program length is N^ = n1 * log2(n1) + n2 * log2(n2)
The following alternate expressions have been published to
estimate program length:
○ NJ = log2(n1!) + log2(n2!)
○ NB = n1 * log2(n2) + n2 * log2(n1)
○ NC = n1 * sqrt(n1) + n2 * sqrt(n2)
○ NS = (n * log2(n)) / 2
● Halstead Vocabulary – The total number of unique operator and
unique operand occurrences.
n = n1 + n2
● Program Volume – Proportional to program size, represents the
size, in bits, of space necessary for storing the program. This
parameter is dependent on specific algorithm implementation. The
properties V, N, and the number of lines in the code are shown to
be linearly connected and equally valid for measuring relative
program size.
V = Size * (log2 vocabulary) = N * log2(n)
The unit of measurement of volume is the common unit for size
“bits”. It is the actual size of a program if a uniform binary
encoding for the vocabulary is used. And error = Volume / 3000
● Potential Minimum Volume – The potential minimum volume V* is
defined as the volume of the most succinct program in which a
problem can be coded.
V* = (2 + n2*) * log2(2 + n2*)
Here, n2* is the count of unique input and output parameters
● Program Level – To rank the programming languages, the level of
abstraction provided by the programming language, Program
Level (L) is considered. The higher the level of a language, the less
effort it takes to develop a program using that language.
L = V* / V
The value of L ranges between zero and one, with L=1
representing a program written at the highest possible level (i.e.,
with minimum size).
And estimated program level is L^ = (2 * n2) / (n1 * N2)
● Program Difficulty – This parameter shows how difficult to handle
the program is.
D = (n1 / 2) * (N2 / n2)
D=1/L
As the volume of the implementation of a program increases, the
program level decreases and the difficulty increases. Thus,
programming practices such as redundant usage of operands, or
the failure to use higher-level control constructs will tend to
increase the volume as well as the difficulty.
● Programming Effort – Measures the amount of mental activity
needed to translate the existing algorithm into implementation in
the specified program language.
E = V / L = D * V = Difficulty * Volume

● Language Level – Shows the algorithm implementation program


language level. The same algorithm demands additional effort if it
is written in a low-level program language. For example, it is easier
to program in Pascal than in Assembler.
L’ = V / (D * D)
lambda = L * V* = L^2 * V

● Intelligence Content – Determines the amount of intelligence


presented (stated) in the program This parameter provides a
measurement of program complexity, independently of the
program language in which it was implemented.
I=V/D
● Programming Time – Shows time (in minutes) needed to translate
the existing algorithm into implementation in the specified
program language.
T = E / (f * S)
The concept of the processing rate of the human brain, developed
by the psychologist John Stroud, is also used. Stroud defined a
moment as the time required by the human brain to carry
out the most elementary decision. The Stroud number S is
therefore the number of Stroud moments per second with:
5 <= S <= 20. Halstead uses 18. The value of S has been
empirically developed from psychological reasoning, and its
recommended value for programming applications is 18.
Stroud number S = 18 moments / second
seconds-to-minutes factor f = 60
Counting rules for C language –

1. Comments are not considered.


2. The identifier and function declarations are not considered
3. All the variables and constants are considered operands.
4. Global variables used in different modules of the same program
are counted as multiple occurrences of the same variable.
5. Local variables with the same name in different functions are
counted as unique operands.
6. Functions calls are considered as operators.
7. All looping statements e.g., do {…} while ( ), while ( ) {…}, for ( )
{…}, all control statements e.g., if ( ) {…}, if ( ) {…} else {…}, etc.
are considered as operators.
8. In control construct switch ( ) {case:…}, switch as well as all the
case statements are considered as operators.
9. The reserve words like return, default, continue, break, sizeof, etc.,
are considered as operators.
10. All the brackets, commas, and terminators are considered as
operators.
11.GOTO is counted as an operator and the label is counted as an
operand.
12. The unary and binary occurrence of “+” and “-” are dealt
separately. Similarly “*” (multiplication operator) are dealt
separately.
13. In the array variables such as “array-name [index]”
“array-name” and “index” are considered as operands and [ ] is
considered as operator.
14. In the structure variables such as “struct-name.member-name”
or “struct-name -> member-name”, struct-name, member-name are
taken as operands and ‘.’, ‘->’ are taken as operators. Some names
of member elements in different structure variables are counted as
unique operands.
15. All the hash directives are ignored.

Example – List out the operators and operands and also calculate the
values of the software science measures.

int sort (int x[ ], int n)
{
    int i, j, save, im1;
    /* This function sorts array x in ascending order */
    if (n < 2) return 1;
    for (i = 2; i <= n; i++)
    {
        im1 = i - 1;
        for (j = 1; j <= im1; j++)
            if (x[i] < x[j])
            {
                save = x[i];
                x[i] = x[j];
                x[j] = save;
            }
    }
    return 0;
}

Explanation –
operators      occurrences      operands      occurrences

int 4 sort 1

() 5 x 7

, 4 n 3

[] 7 i 8

if 2 j 7

< 2 save 3

; 11 im1 3
for 2 2 2

= 6 1 3

– 1 0 1

<= 2 – –

++ 2 – –

return 2 – –

{} 3 – –

n1=14 N1=53 n2=10 N2=38


Therefore,
N = 91
n = 24
V = 417.23 bits
N^ = 86.51
n2* = 3 (x:array holding integer
to be sorted. This is used both
as input and output)
V* = 11.6
L = 0.027
D = 37.03
L^ = 0.038
T = 610 seconds
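
Taking the counted values above (n1 = 14, n2 = 10, N1 = 53, N2 = 38, n2* = 3) as input, the sketch below recomputes the derived measures from the formulas in this section. Small differences from the figures quoted above can come from rounding and from which difficulty formula (D = (n1/2) * (N2/n2) versus D = 1/L) is applied.

public class HalsteadDemo {
    public static void main(String[] args) {
        int n1 = 14, n2 = 10;        // distinct operators and operands
        int N1 = 53, N2 = 38;        // total operator and operand occurrences
        int n2Star = 3;              // unique input/output parameters

        int N = N1 + N2;                                   // program length
        int n = n1 + n2;                                   // vocabulary
        double V = N * log2(n);                            // volume
        double Nhat = n1 * log2(n1) + n2 * log2(n2);       // estimated length
        double Vstar = (2 + n2Star) * log2(2 + n2Star);    // potential minimum volume
        double L = Vstar / V;                              // program level
        double D = (n1 / 2.0) * (N2 / (double) n2);        // difficulty
        double E = D * V;                                  // effort
        double T = E / 18.0;                               // time in seconds, with S = 18

        System.out.printf("N=%d n=%d V=%.2f N^=%.2f V*=%.2f L=%.3f D=%.2f E=%.2f T=%.0f s%n",
                N, n, V, Nhat, Vstar, L, D, E, T);
    }

    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }
}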

Advantages of Halstead Metrics:

● It is simple to calculate.
● It measures overall quality of the programs.
● It predicts the rate of error.
● It predicts maintenance effort.
● It does not require the full analysis of programming structure.
● It is useful in scheduling and reporting projects.
● It can be used for any programming language.

Disadvantages of Halstead Metrics:

● It depends on the complete code.


● It has no use as a predictive estimating model.
Function Point (FP)
Function Point (FP) is an element of software development which helps to approximate the cost of development early in the process. It measures functionality from the user's point of view.
Counting Function Point (FP):
Step-1: Compute F, the sum of the ratings of all 14 Complexity Adjustment Factors. Each factor is rated on a scale of 0 to 5; when every factor carries the same rating, this reduces to F = 14 * scale. The scale is:
0 - No Influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
● 5 - Essential
● Step-2: Calculate Complexity Adjustment Factor (CAF).
CAF = 0.65 + ( 0.01 * F )
● Step-3: Calculate Unadjusted Function Point (UFP).
TABLE (weighting factors)

Function Units                      Low    Average    High
External Inputs (EI)                 3        4         6
External Outputs (EO)                4        5         7
External Inquiries (EQ)              3        4         6
Internal Logical Files (ILF)         7       10        15
External Interface Files (EIF)       5        7        10

Multiply each individual function point count by the corresponding value in the TABLE.
● Step-4: Calculate Function Point.
FP = UFP * CAF
Example:
Given the following values, compute function point when all complexity
adjustment factor (CAF) and weighting factors are average.
User Input = 50
User Output = 40
User Inquiries = 35
User Files = 6
External Interface = 4

Explanation:
Step-1: As complexity adjustment factor is average (given in question),
hence,
scale = 3.
● F = 14 * 3 = 42
● Step-2:
CAF = 0.65 + ( 0.01 * 42 ) = 1.07
● Step-3: As weighting factors are also average (given in question)
hence we will multiply each individual function point to
corresponding values in TABLE.
UFP = (50*4) + (40*5) + (35*4) + (6*10) + (4*7) = 628
● Step-4:
Function Point = 628 * 1.07 = 671.96
This is the required answer.

Program to calculate Function Point is as follows :-


#include <bits/stdc++.h>
using namespace std;

// Function to calculate Function Point


void calfp(int frates[][3], int fac_rate)
{

// Function Units
string funUnits[5] = {
"External Inputs",
"External Outputs",
"External Inquiries",
"Internal Logical Files",
"External Interface Files"
};

// Weight Rates
string wtRates[3] = { "Low", "Average", "High" };

// Weight Factors
int wtFactors[5][3] = {
{ 3, 4, 6 },
{ 4, 5, 7 },
{ 3, 4, 6 },
{ 7, 10, 15 },
{ 5, 7, 10 },
};

int UFP = 0;

// Calculating UFP (Unadjusted Function Point)


for (int i = 0; i < 5; i++) {

for (int j = 0; j < 3; j++) {

int freq = frates[i][j];

UFP += freq * wtFactors[i][j];


}
}

// 14 factors
string aspects[14] = {
    "reliable backup and recovery required ?",
    "data communication required ?",
    "are there distributed processing functions ?",
    "is performance critical ?",
    "will the system run in an existing heavily utilized operational environment ?",
    "on line data entry required ?",
    "does the on line data entry require the input transaction to be built over multiple screens or operations ?",
    "are the master files updated on line ?",
    "is the inputs, outputs, files or inquiries complex ?",
    "is the internal processing complex ?",
    "is the code designed to be reusable ?",
    "are the conversion and installation included in the design ?",
    "is the system designed for multiple installations in different organizations ?",
    "is the application designed to facilitate change and ease of use by the user ?"
};

/*
Rate Scale of Factors
Rate the following aspects on a scale of 0-5 :-
0 - No influence
1 - Incidental
2 - Moderate
3 - Average
4 - Significant
5 - Essential
*/
int sumF = 0;

// Taking Input of factors rate


for (int i = 0; i < 14; i++) {

int rate = fac_rate;

sumF += rate;
}

// Calculate CAF (Complexity Adjustment Factor)
double CAF = 0.65 + 0.01 * sumF;

// Calculate Function Point (FP)


double FP = UFP * CAF;

// Output Values
cout << "Function Point Analysis :-" << endl;

cout << "Unadjusted Function Points (UFP) : " << UFP <<
endl;

cout << "Complexity Adjustment Factor (CAF) : " << CAF


<< endl;

cout << "Function Points (FP) : " << FP << endl;


}

// driver function
int main()
{
int frates[5][3] = {
{ 0, 50, 0 },
{ 0, 40, 0 },
{ 0, 35, 0 },
{ 0, 6, 0 },
{ 0, 4, 0 }
};

int fac_rate = 3;

calfp(frates, fac_rate);

return 0;
}

Output:
Function Point Analysis :-
Unadjusted Function Points (UFP) : 628
Complexity Adjustment Factor (CAF) : 1.07
Function Points (FP) : 671.96

Cyclomatic Complexity

Cyclomatic complexity of a code section is the quantitative measure of the


number of linearly independent paths in it. It is a software metric used to
indicate the complexity of a program. It is computed using the Control Flow
Graph of the program. The nodes in the graph indicate the smallest group
of commands of a program, and a directed edge connects two nodes if the
second command might immediately follow the first command.
For example, if source code contains no control flow statement then its
cyclomatic complexity will be 1 and source code contains a single path in
it. Similarly, if the source code contains one if condition then cyclomatic
complexity will be 2 because there will be two paths one for true and the
other for false.

Mathematically, for a structured program, the control flow graph is a
directed graph in which an edge joins two basic blocks of the program if
control may pass from the first to the second.

So, cyclomatic complexity M would be defined as,

M = E – N + 2P

where,

E = the number of edges in the control flow graph

N = the number of nodes in the control flow graph

P = the number of connected components


Steps that should be followed in calculating cyclomatic complexity and test
cases design are:

● Construction of graph with nodes and edges from code.


● Identification of independent paths.
● Cyclomatic Complexity Calculation
● Design of Test Cases

Consider a section of code such as:

A = 10
IF B > C THEN
A = B
ELSE
A = C
ENDIF
Print A
Print B

Print C
Control Flow Graph of above code
The cyclomatic complexity for the above code is calculated from its control
flow graph. The graph has seven shapes (nodes) and seven lines (edges),
hence the cyclomatic complexity is 7 - 7 + 2 = 2.
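
A minimal sketch that plugs the counts read off such a graph into M = E - N + 2P; the values used below are the ones from this example and are easy to swap for another graph.

public class CyclomaticComplexity {
    // M = E - N + 2P for a control flow graph.
    static int complexity(int edges, int nodes, int connectedComponents) {
        return edges - nodes + 2 * connectedComponents;
    }

    public static void main(String[] args) {
        // Seven edges, seven nodes, one connected component, as in the example above.
        System.out.println(complexity(7, 7, 1));   // prints 2
    }
}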

Use of Cyclomatic Complexity:

● Determining the independent path executions has proven to be very helpful for developers and testers.
● It can make sure that every path has been tested at least once.
● It thus helps to focus more on the uncovered paths.
● Code coverage can be improved.
● Risk associated with the program can be evaluated.
● Using these metrics early in the program helps in reducing the risks.

Advantages of Cyclomatic Complexity:

● It can be used as a quality metric, giving relative complexity of


various designs.
● It can be computed faster than Halstead's metrics.
● It is used to measure the minimum effort and best areas of
concentration for testing.
● It is able to guide the testing process.
● It is easy to apply.

Disadvantages of Cyclomatic Complexity:

● It is the measure of the program's control complexity and not the


data complexity.
● In this, nested conditional structures are harder to understand
than non-nested structures.
● In case of simple comparisons and decision structures, it may give
a misleading figure.

Reference: https://en.wikipedia.org/wiki/Cyclomatic_complexity

Control Flow Graph (CFG)



A Control Flow Graph (CFG) is the graphical representation of control flow


or computation during the execution of programs or applications. Control
flow graphs are mostly used in static analysis as well as compiler
applications, as they can accurately represent the flow inside of a program
unit. The control flow graph was originally developed by Frances E. Allen.

Characteristics of Control Flow Graph:

● Control flow graph is process oriented.


● Control flow graph shows all the paths that can be traversed
during a program execution.
● Control flow graph is a directed graph.
● Edges in CFG portray control flow paths and the nodes in CFG
portray basic blocks.

There exist 2 designated blocks in Control Flow Graph:

1. Entry Block:
Entry block allows the control to enter into the control flow graph.
2. Exit Block:
Control flow leaves through the exit block.

Hence, the control flow graph is comprised of all the building blocks
involved in a flow diagram such as the start node, end node and flows
between the nodes.

General Control Flow Graphs:


Control Flow Graph is represented differently for all statements and loops.
Following images describe it:

1. If-else:

2. while:
3. do-while:

4. for:
Example:

if A = 10 then
    if B > C
        A = B
    else
        A = C
    endif
endif

print A, B, C

Flowchart of above example will be:


Control Flow Graph of above example will be:
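
As a hedged illustration (the basic-block labels B0 to B4 are an assumption made for this sketch, not taken from the figure), such a CFG could be held in code as an adjacency list mapping each basic block to its successors; the same structure also yields the cyclomatic complexity.

import java.util.*;

public class CfgExample {
    public static void main(String[] args) {
        // Adjacency list: each basic block maps to the blocks control can flow to next.
        Map<String, List<String>> cfg = new LinkedHashMap<>();
        cfg.put("B0: if A = 10", List.of("B1: if B > C", "B4: print A, B, C"));
        cfg.put("B1: if B > C", List.of("B2: A = B", "B3: A = C"));
        cfg.put("B2: A = B", List.of("B4: print A, B, C"));
        cfg.put("B3: A = C", List.of("B4: print A, B, C"));
        cfg.put("B4: print A, B, C", List.of());          // exit block, no successors

        int nodes = cfg.size();
        int edges = cfg.values().stream().mapToInt(List::size).sum();
        System.out.println("Nodes = " + nodes + ", Edges = " + edges
                + ", Cyclomatic complexity = " + (edges - nodes + 2));
    }
}
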
Advantage of CFG:

There are many advantages of a control flow graph. It can easily


encapsulate the information per each basic block. It can easily locate
inaccessible code of a program, and syntactic structures such as loops
are easy to find in a control flow graph.
