SOFTWARE DESIGN PROCESS:
Software design is a process to transform user requirements into
some suitable form, which helps the programmer in software coding and
implementation. For assessing user requirements, an SRS (Software
Requirement Specification) document is created whereas for coding and
implementation, there is a need of more specific and detailed requirements in
software terms. The output of this process can directly be used for
implementation in programming languages.
Software design is the first step in SDLC (Software Design Life Cycle),
which moves the concentration from problem domain to solution domain. It
tries to specify how to fulfill the requirements mentioned in SRS.
Outcome of the Design Process:
The following items are designed and documented during the design
phase,
Different modules required: The different modules in the solution should be
identified. Each module is a collection of functions and the data shared by
these functions. Each module should accomplish some well-defined task out
of the overall responsibility of the software. Each module should be named
according to the task it performs. For example, in an academic automation
software, the module consisting of the functions and data necessary to
accomplish the task of registration of the students should be named handle
student registration.
Control relationships among modules: A control relationship between two
modules essentially arises due to function calls across the two modules. The
control relationships existing among various modules should be identified in
the design document.
Interfaces among different modules: The interfaces between two modules
identifies the exact data items that are exchanged between the two modules
when one module invokes a function of the other module.
Data structures of the individual modules: Each module normally stores
some data that the functions of the module need to share to accomplish the
overall responsibility of the module. Suitable data structures for storing and
managing the data of a module need to be properly designed and
documented.
Algorithms required to implement the individual modules: Each function in
a module usually performs some processing activity. The algorithms required
to accomplish the processing activities of various modules need to be
carefully designed and documented with due considerations given to the
accuracy of the results, and space and time complexities.

Classification of Design Activities:
A good software design is seldom realised by using a single step
procedure, rather it requires iterating over a series of steps called the design
activities. Let us first classify the design activities before discussing them in
detail. Depending on the order in which various design activities are
performed, we can broadly classify them into two important stages.
i. Preliminary (or high-level) design, and
ii. Detailed design.
The meaning and scope of these two stages can vary considerably
from one design methodology to another. However, for the traditional function
-oriented design approach, it is possible to define the objectives of the high-
level design as follows:
The outcome of high-level design is called the program structure or the
software architecture. High-level design is a crucial step in the overall design
of a software. When the high-level design is complete, the problem should
have been decomposed into many small functionally independent modules
that are cohesive, have low coupling among themselves, and are arranged in a
hierarchy.
Characteristics of Good Software Design:
Good design relies on a combination of high-level systems thinking and
low-level component knowledge. In modern software design, best practice
revolves around creating modular components that you can call and deploy as
needed. In doing this, you build software that is reusable, extensible, and easy
to test. Characteristics of good design are :
Correctness: A good design should first of all be correct. That is, it should
correctly implement all the functionalities of the system.
Understandability: A good design should be easily understandable. Unless a
design solution is easily understandable, it would be difficult to implement
and maintain it.
Efficiency: A good design solution should adequately address resource, time,
and cost optimization issues.
Maintainability: A good design should be easy to change. This is an important
requirement, since change requests usually keep coming from the customer
even after product release.

BLACK BOX TESTING:
In black-box testing, test cases are designed from an examination of
the input/output values only and no knowledge of design or code is required.
The following are the two main approaches available to design black box
test cases:
i. Equivalence class partitioning
ii. Boundary value analysis
Equivalence Class Partitioning
In the equivalence class partitioning approach, the domain of input
values to the program under test is partitioned into a set of equivalence
classes. The partitioning is done such that for every input data belonging to
the same equivalence class, the program behaves similarly.
Equivalence classes for a unit under test can be designed by
examining the input data and output data. The following are two general
guidelines for designing the equivalence classes:
1. If the input data values to a system can be specified by a range of
values, then one valid and two invalid equivalence classes need to be defined.
For example, if the equivalence class is the set of integers in the range 1 to
10 (i.e., [1,10]), then the invalid equivalence classes are [−∞,0] and [11,+∞].
2. If the input data assumes values from a set of discrete members of
some domain, then one equivalence class for the valid input values and
another equivalence class for the invalid input values should be defined.
Ex : For a software that computes the square root of an input integer that can
assume values in the range of 0 and 5000. Determine the equivalence classes
and the black box test suite.
Answer: There are three equivalence classes—The set of negative integers,
the set of integers in the range of 0 and 5000, and the set of integers larger
than 5000. Therefore, the test cases must include representatives for each of
the three equivalence classes.
A possible test suite can be: {-5, 500, 6000}.
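The partitioning for this example can be expressed as a small test helper. This is a hypothetical sketch: the `classify` function and the square-root unit under test are illustrative assumptions, not code from the text.

```python
import math

def classify(n):
    """Map an input integer to its equivalence class for the sqrt program."""
    if n < 0:
        return "negative"             # invalid class: below the range
    if n <= 5000:
        return "in range [0, 5000]"   # the single valid class
    return "greater than 5000"        # invalid class: above the range

# One representative test value per equivalence class.
test_suite = [-5, 500, 6000]
assert len({classify(n) for n in test_suite}) == 3  # every class covered

def sqrt_under_test(n):
    """Hypothetical unit under test: defined only for the valid class."""
    if classify(n) != "in range [0, 5000]":
        raise ValueError("input outside [0, 5000]")
    return math.isqrt(n)

print(sqrt_under_test(500))  # valid representative -> 22
```

Because the program behaves the same for every member of a class, one representative per class is enough to exercise all three behaviours.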
[Figure: the set of all inputs partitioned into equivalence classes — valid inputs (e.g., palindromes) and invalid inputs.]
Boundary Value Analysis
Boundary value analysis-based test suite design involves designing
test cases using the values at the boundaries of different equivalence classes.
To design boundary value test cases, it is required to examine the
equivalence classes to check if any of the equivalence classes contains a
range of values. For those equivalence classes that are not a range of values
no boundary value test cases can be defined. For an equivalence class that is
a range of values, the boundary values need to be included in the test suite.
For example, if an equivalence class contains the integers in the range
1 to 10, then the boundary value test suite is {0,1,10,11}.
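Generating the boundary value suite for a range can be sketched as a small helper (a hypothetical function, assuming the convention above of taking each boundary together with the value just outside it):

```python
def boundary_values(low, high):
    """Boundary value test cases for an equivalence class that is a
    range [low, high]: each boundary plus its adjacent out-of-range value."""
    return [low - 1, low, high, high + 1]

print(boundary_values(1, 10))  # the [1, 10] example: [0, 1, 10, 11]
```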
Ex : For a function that computes the square root of the integer values
in the range of 0 and 5000, determine the boundary value test suite.
Answer: There are three equivalence classes—The set of negative
integers, the set of integers in the range of 0 and 5000, and the set of integers
larger than 5000. The boundary value-based test
suite is: {0, -1, 5000, 5001}.

OBJECT-ORIENTED CONCEPTS:
The principles of object-orientation have been founded on a
few simple concepts. These concepts are pictorially shown in the figure
below.

[Figure: Important concepts used in the object-oriented approach — key concepts (objects, classes, inheritance, methods) together with related terms such as abstraction, method overriding, genericity, polymorphism, persistence, encapsulation, composite objects, agents, and widgets, built on the basic mechanisms.]
Basic Concepts:
A few important concepts form the cornerstones of the
object-oriented paradigm. They are:
Objects:
* Each object in an object-oriented program usually represents
a tangible real-world entity such as a library member, a book, an
issue register, etc.
* However while solving a problem, it becomes advantageous
at times to consider certain conceptual entities (e.g., a
scheduler, a controller, etc.) as objects as well.
= This simplifies the solution and helps to arrive at a good
design.
*As already mentioned, each object stores some data and
supports certain operations on the stored data.
* As an example, consider the libraryMember object of a library
automation application. The private data of a libraryMember
object can be the following:
name of the member
membership number
address
phone number
e-mail address
The operations supported by a libraryMember object can be the
following:
issue-book
return-book
find-membership-details
Class:
= Once we define a class, it can be used as a template for
object creation.
= All the objects constituting a class possess similar attributes
and methods.

Abstract data:
The data of an object can be accessed only through its
methods.
* Data type:
In programming language terminology, a data type defines a
collection of data values and a set of predefined operations
on those values.
Methods:
The operations (such as create, issue, return, etc.) supported
by an object are implemented in the form of methods.
Class Relationships:
Classes in a programming solution can be related to each
other in the following four ways:
- Inheritance
- Association and link
- Aggregation and composition
- Dependency

(i) Inheritance:
= The inheritance feature allows one to define a new class by
incrementally extending the features of an existing class.
= The original class is called the base class (also called
superclass or parent class ) and the new class obtained through
inheritance is called the derived class (also called a subclass or a
child class ).
The derived class is said to inherit the features of the base
class.
[Figure: Library information system — LibraryMember as the base class, with Faculty, Students, and Staff as derived classes; Students is further specialized into Undergrad, PostGrad, and Research.]
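The hierarchy in the figure can be sketched in Python; the method name and book limits below are illustrative assumptions. The overridden method also previews dynamic binding: the version invoked depends on the object's actual class.

```python
class LibraryMember:
    """Base class: data and operations common to all members."""
    def __init__(self, name, membership_number):
        self.name = name
        self.membership_number = membership_number

    def max_books(self):          # may be overridden in derived classes
        return 2

class Faculty(LibraryMember):
    def max_books(self):          # incrementally extends the base class
        return 10

class Students(LibraryMember):
    pass                          # inherits max_books() unchanged

class Research(Students):         # further derived from Students
    def max_books(self):
        return 5

# The same call resolves to a different method per actual class.
members = [Faculty("a", 1), Students("b", 2), Research("c", 3)]
print([m.max_books() for m in members])  # [10, 2, 5]
```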
Multiple inheritance:
= Multiple inheritance is a mechanism by which a subclass can
inherit attributes and methods from more than one base class.
[Figure: multiple inheritance in the library information system — a subclass inherits attributes and methods from more than one base class.]
(ii) Association and link :
= Association is a common type of relation among classes.
= When two classes are associated, they can take each other's
help (i.e., invoke each other's methods) to serve user requests.
= More technically, we can say that if one class is associated
with another bidirectionally, then the corresponding objects of
the two classes know each other's ids (identities).

n-ary association:
«Binary association between classes is very commonly
encountered in design problems. However, there can be situations
where three or more different classes can be involved in an
association.
* As an example of a ternary association, consider the
following—A person books a ticket for a certain show. Here, an
association exists among the classes Person, Ticket, and Show.
A class can have an association relationship with itself. This is
called a recursive association or unary association.
As an example, consider the following—two students may be
friends. Here, an association named friendship exists among
pairs of objects of the Student class.
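Both flavours can be sketched as plain object references; all class and attribute names here are illustrative assumptions. A Ticket links the three participants of the ternary association, and `friends` realises the recursive association on Student.

```python
class Person:
    def __init__(self, name):
        self.name = name

class Show:
    def __init__(self, title):
        self.title = title

class Ticket:
    """Ternary association: a Ticket links a Person to a Show."""
    def __init__(self, person, show, seat):
        self.person, self.show, self.seat = person, show, seat

class Student:
    """Recursive (unary) association: Students may be friends of Students."""
    def __init__(self, name):
        self.name = name
        self.friends = []

    def befriend(self, other):
        # Bidirectional link: each object knows the other's identity.
        self.friends.append(other)
        other.friends.append(self)

t = Ticket(Person("Asha"), Show("Hamlet"), seat="A12")
s1, s2 = Student("Ravi"), Student("Meena")
s1.befriend(s2)
print(t.show.title, s1.friends[0].name)  # Hamlet Meena
```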
[Figure: (a) a binary association "registers for" between Student and ElectiveSubject; (b) a ternary association "books" among Person, Ticket, and Show; (c) a unary association "friend of" on Student.]
= When two classes are associated, the relationship
between two objects of the corresponding classes is
called a link.
(iii) Composition and aggregation :
= Composition and aggregation represent part/whole
relationships among objects.
* Objects which contain other objects are called composite
objects.
[Figure: a Book is an aggregation of Chapters — an aggregation relationship.]
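The Book–Chapter aggregation can be sketched as follows (illustrative names); the composite object simply holds references to its parts:

```python
class Chapter:
    def __init__(self, title):
        self.title = title

class Book:
    """Composite object: a Book aggregates one or more Chapters."""
    def __init__(self, title, chapters):
        self.title = title
        self.chapters = list(chapters)   # the part/whole relationship

    def table_of_contents(self):
        return [c.title for c in self.chapters]

b = Book("Software Engineering", [Chapter("Design"), Chapter("Testing")])
print(b.table_of_contents())  # ['Design', 'Testing']
```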
(iv) Dependency :
*A dependency relation between two classes shows that any
change made to the independent class would require the
corresponding change to be made to the dependent class.
= Dependencies among classes may arise due to various causes.
= Two important reasons for dependency to exist between two
classes are the following:
Abstraction :
The abstraction mechanism allows us to represent a problem
in a simpler way by considering only those aspects that are
relevant to some purpose and omitting all other details that are
irrelevant.
Abstraction is supported in two different ways in an object-
oriented designs (OODs). These are the following:
- Feature abstraction
- Data abstraction

Encapsulation:
The data of an object is encapsulated within its methods. To
access the data internal to an object, other objects have to
invoke its methods, and cannot directly access the data.
[Figure: schematic representation of the concept of encapsulation.]
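Encapsulation can be sketched in Python with name-mangled (double-underscore) attributes that are reached only through methods; the member and book names are illustrative assumptions.

```python
class LibraryMember:
    def __init__(self, name):
        self.__name = name            # not directly accessible from outside
        self.__books_issued = []

    # The data internal to the object is reached only by invoking methods.
    def issue_book(self, title):
        self.__books_issued.append(title)

    def books_issued(self):
        return list(self.__books_issued)

m = LibraryMember("Asha")
m.issue_book("UML Distilled")
print(m.books_issued())  # ['UML Distilled']
# Direct access such as m.__books_issued raises AttributeError:
# the data is encapsulated within the object's methods.
```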
Polymorphism :
* Polymorphism literally means poly (many) morphism (forms).
* There are two main types of polymorphism in object-orientation:
(i) Static polymorphism:
Static polymorphism occurs when multiple methods implement the
same operation. In this type of polymorphism, when a method is called
(same method name but different parameter types), different behaviour
(actions) would be observed.
This type of polymorphism is also referred to as static binding .
(ii) Dynamic polymorphism:
Dynamic polymorphism is also called dynamic binding. In
dynamic binding, the exact method that would be invoked
(bound) on a method call can only be known at the run time
(dynamically) and cannot be determined at compile time.

PROJECT SIZE ESTIMATION:
+ Estimation is the process of finding an estimate, or approximation,
which is a value that can be used for some purpose even if input data
may be incomplete, uncertain, or unstable.
+ Estimation determines how much money, effort, resources, and time it
will take to build a specific system or product. Estimation is based on -
+ Past Data/Past Experience
+ Available Documents/Knowledge
+ Assumptions
* Identified Risks
Currently, two metrics are widely used to estimate size:
+ 1. Lines of code (LOC)
+ 2. Function point (FP)
* LINES OF CODE (LOC)
+ LOC is the simplest among all metrics available to estimate project
size.
* This metric is very popular because it is the simplest to use.
+ Using this metric, the project size is estimated by counting the number
of source instructions in the developed program.
* Obviously, while counting the number of source instructions, lines used
for commenting the code and the header lines should be ignored.
* Determining the LOC count at the end of a project is a very simple job.
+ However, accurate estimation of the LOC count at the beginning of a
project is very difficult.
* In order to estimate the LOC count at the beginning of a project,
project managers usually divide the problem into modules, and each
module into submodules and so on, until the sizes of the different leaf-
level modules can be approximately predicted.
* To be able to do this, past experience in developing similar products is
helpful.
+ By using the estimation of the lowest level modules, project managers
arrive at the total size estimation.

FUNCTION POINT (FP):
* Function point metric was proposed by Albrecht [1983].
* This metric overcomes many of the shortcomings of the LOC metric.
* Since its inception in the late 1970s, the function point metric has been slowly
gaining popularity.
* One of the important advantages of using the function point metric is
that it can be used to easily estimate the size of a software product
directly from the problem specification.
* This is in contrast to the LOC metric, where the size can be accurately
determined only after the product has fully been developed.
* The conceptual idea behind the function point metric is that the size of
a software product is directly dependent on the number of different
functions or features it supports.
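As a rough sketch of how an unadjusted function point count follows from the specification, the counts of the different function types can be weighted and summed. The weights below are the standard average-complexity values used with Albrecht's metric; the counts themselves are made-up figures for illustration.

```python
# Average-complexity weights commonly used with the function point metric.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_fp(counts):
    """Unadjusted function points: sum of item counts times their weights."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical counts read off a problem specification.
counts = {"external_inputs": 6, "external_outputs": 4,
          "external_inquiries": 2, "internal_files": 3,
          "external_interfaces": 1}
print(unadjusted_fp(counts))  # 6*4 + 4*5 + 2*4 + 3*10 + 1*7 = 89
```

The point of the metric is visible here: nothing in the computation requires the product to exist, only its specified functions and features.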
EFFORT ESTIMATION TECHNIQUES:
Estimation is the process of finding an estimate, or approximation,
which is a value that can be used for some purpose even if input data
may be incomplete, uncertain, or unstable.
+ Estimation determines how much money, effort, resources, and time it
will take to build a specific system or product.
Estimation is based on -
* Past Data/Past Experience
+ Available Documents/Knowledge
* Assumptions
* Identified Risks
* A realistic effort estimate requires you to have a clear understanding of
certain elements of the project:
* The purpose and scope of the project (If working with a client, what are
their expectations?)
* What needs to be done to achieve it
+ What resources should be allocated
* Timeline
+ The four basic steps in Software Project Estimation are -
+ Estimate the size of the development product.
+ Estimate the effort in person-months or person-hours.
* Estimate the schedule in calendar months.
* Estimate the project cost in agreed currency.

+ Estimation need not be a one-time task in a project.
+ It can take place during -
+ Acquiring a Project.
* Planning the Project.
* Execution of the Project as the need arises.
1. Top-down Estimate:
* Once more detail is learned on the project's scope, a top-down estimating
technique assigns an overall time for the project and divides the project into
parts according to the work breakdown structure.
2. Bottom-up Estimate:
* The bottom-up method is the opposite of top-down. It approaches the project
as a combination of small workpieces. By making a detailed estimate for
each task and combining them together, you can build an overall project
estimate.
3. Expert judgement:
* The expert judgment technique requires consulting the expert who will
perform the task to ask how long it will take to complete. This method relies
on your trust in the expert's insights and experience.
4. Analogous Estimating:
+ Analogous estimating is a technique for estimating based on similar
projects completed in the past. If the whole project has no analogs, it
can be applied by blending it with the bottom-up technique. In this case,
you compare the tasks with their counterparts, then combine them to
estimate the overall project.
5. Three-point Estimating:
* Three-point estimating is very straightforward. It involves three different
estimates that are usually obtained from subject matter experts:
* Optimistic estimate
* Pessimistic estimate
* Most likely estimate
(1) Optimistic estimate:
The optimistic estimate gives the amount of work and time that would be
required if everything went smoothly.
(2) Pessimistic estimate:
A pessimistic estimate provides the worst-case scenario.
(3) Most likely estimate:
The result will be most realistic when the optimistic and pessimistic
estimates are averaged with the most likely estimate.
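A common way to combine the three values is the PERT weighted average, E = (O + 4M + P) / 6, which gives the most likely estimate four times the weight of the extremes. A minimal sketch, assuming PERT weighting and made-up effort figures:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT (beta-distribution) weighted average of the three estimates."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

# Hypothetical effort figures in person-days from subject matter experts.
print(three_point_estimate(10, 14, 30))  # (10 + 56 + 30) / 6 = 16.0
```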
Use-case Diagram for Library Management System:
A use case diagram is referred to as a behaviour model or diagram.
It simply describes and displays the relation or interaction between the
users or customers and the providers of the application service or the
system.

[Figure: use case diagram for the library management system — the actors User and Librarian interact with use cases such as Register, Search book, Request book, Renew book, Return book, Pay fine, and Fill feedback form; the Librarian adds, updates, and deletes records in the library database.]
Some scenarios of the system are as follows:

1. A user who registers himself as a new user is initially regarded as
staff or student by the library system.
2. To get registered as a new user, the user must fill in the registration
forms that are available for this purpose.
3. After the form is filled, a library card is issued to the user. A unique
ID is assigned to each cardholder or user.
4. After getting the library card, the user requests a new book as per
their requirement.
5. After requesting, the desired book is reserved by the user, which
means no other user can request that book.
6. The user can renew a book, that is, get a new due date for the
desired book.
7. If the user forgets to return the book before the due date, then the
user pays a fine.
8. Users can fill in the feedback form available if they want to.
9. The librarian has a key role in the system. The librarian adds records
to the library database about each student or user every time a book is
issued or returned, or a fine is paid.
10. The librarian also deletes the record of a particular student if the
student leaves the college or has passed out of the college. If a book no
longer exists in the library, then the record of that particular book is
also deleted. Updating the database is an important role of the librarian.

Design Principles:
Software design is both a process and a model. The design process is
a sequence of steps that enable the designer to describe all aspects of the
software to be built.
It is important to note, however, that the design process is not simply a
cookbook. Creative skill, past experience, a sense of what makes “good”
software, and an overall commitment to quality are critical success factors for
a competent design. The design model is the equivalent of an architect's
plans for a house. It begins by representing the totality of the thing to be built
(e.g., a three-dimensional rendering of the house) and slowly refines the thing
to provide guidance for constructing each detail (e.g., the plumbing layout).
Similarly, the design model that is created for software provides a variety of
different views of the computer software. Basic design principles enable the
software engineer to navigate the design process. Davis [DAV95] suggests a
set of principles for software design, which have been adapted and extended
in the following list:
The design process should not suffer from “tunnel vision.” A good designer
should consider alternative approaches, judging each based on the
requirements of the problem and the resources available to do the job.
The design should be traceable to the analysis model. Because a single
element of the design model often traces to multiple requirements, it is
necessary to have a means for tracking how requirements have been satisfied
by the design model.
The design should not reinvent (create) the wheel. Systems are constructed
using a set of design patterns, many of which have likely been encountered
before. These patterns should always be chosen as an alternative to
reinvention. Time is short and resources are limited! Design time should be
invested in representing truly new ideas and integrating those patterns that
already exist.
The design should “minimize the intellectual distance” [DAV95]
between the software and the problem as it exists in the real world. That is,
the structure of the software design should (whenever possible) mimic the
structure of the problem domain.

The design should exhibit uniformity and integration. A design is uniform if it
appears that one person developed the entire thing. Rules of style and format
should be defined for a design team before design work begins. A design is
integrated if care is taken in defining interfaces between design components.
The design should be structured to accommodate change. The design
concepts discussed in the next section enable a design to achieve this
principle.
The design should be structured to degrade gently, even when aberrant data,
events, or operating conditions are encountered. Well designed software
should never “bomb.” It should be designed to accommodate unusual
circumstances, and if it must terminate processing, do so in a graceful
manner.
Design is not coding, coding is not design. Even when detailed procedural
designs are created for program components, the level of abstraction of the
design model is higher than source code. The only design decisions made at
the coding level address the small implementation details that enable the
procedural design to be coded.
The design should be assessed for quality as it is being created, not after the
fact. A variety of design concepts (Section 13.4) and design measures are
available to assist the designer in assessing quality.
The design should be reviewed to minimize conceptual (semantic) errors.
There is sometimes a tendency to focus on minutiae when the design is
reviewed, missing the forest for the trees. A design team should ensure that
major conceptual elements of the design (omissions, ambiguity,
inconsistency) have been addressed before worrying about the syntax of the
design model.

Software Project Management:
The main goal of software project management is to enable a group of
developers to work effectively towards the successful completion of a project.
It is an art and discipline of planning and supervising software projects.
It is a sub-discipline of project management in which
software projects are planned, implemented, monitored and controlled.
It is a procedure of managing, allocating and timing resources to
develop computer software that fulfills requirements.
In software Project Management, the client and the developers need to
know the length, period and cost of the project.
Need for software project management:
There are three needs for software project management. These are:
® Time
® Cost
® Quality
It is an essential part of the software organization to deliver a quality
product, keeping the cost within the client's budget and deliver the project as
per schedule.
There are various factors, both external and internal, which may impact
this triple factor.

Software Configuration Management:
When we develop software, the product (software) undergoes many
changes in its maintenance phase; we need to handle these changes
effectively.
Several individuals (programmers) work together to achieve these
common goals. These individuals produce several work products (SCIs), e.g.,
intermediate versions of modules or test data used during debugging, and
parts of the final product.
The elements that comprise all information produced as a part of the
software process are collectively called a software configuration.
As software development progresses, the number of Software
Configuration elements (SCIs) grows rapidly.
A configuration of the product refers not only to the product's
constituents but also to a particular version of the component.

Therefore, SCM is the discipline which:
@ Identifies changes
@ Monitors and controls changes
@ Ensures the proper implementation of changes made to the item
@ Audits and reports on the changes made
@ Configuration Management (CM) is a technique of identifying, organizing,
and controlling modification to software being built by a programming
team.
CM is essential due to the inventory management, library
management, and updation management of the items essential for the
project.
Why do we need Configuration Management?
@ Multiple people work on software which is consistently updating.
@ It may be a method where multiple versions, branches, and authors are
involved in a software project, and the team is geographically distributed
and works concurrently.
@ Changes in user requirements, policy, budget, and schedules need to be
accommodated.
Importance of SCM:
@ It is practical in controlling and managing the access to various SCIs, e.g.,
by preventing two members of a team from checking out the same
component for modification at the same time.
@ It provides the tool to ensure that changes are being properly
implemented.
@ It has the capability of describing and storing the various constituents of
software.
@ SCM is used in keeping a system in a consistent state by automatically
producing derived versions upon modification of the same component.

Sliding Window Planning:
It is usually very difficult to make accurate plans for large projects at
project initiation. A part of the difficulty arises from the fact that large projects
may take several years to complete. As a result, during the span of the project,
the project parameters, scope of the project, project staff, etc., often change
drastically resulting in the initial plans going haywire. In order to overcome
this problem, sometimes project managers undertake project planning over
several stages.
In the sliding window planning technique, starting with an initial plan,
the project is planned more accurately over a number of stages.
At the start of a project, the project manager has incomplete
knowledge about the nitty-gritty of the project. His information base gradually
improves as the project progresses through different development phases.

Interaction Diagrams:
Interaction diagrams depict interactions of objects and their relationships. They also include the
messages passed between them. There are two types of interaction diagrams:
© Sequence Diagrams
© Collaboration Diagrams
Interaction diagrams are used for modeling:
© The control flow by time ordering using sequence diagrams.
© The control flow of organization using collaboration diagrams.

Sequence Diagrams:
Sequence diagrams are interaction diagrams that illustrate the ordering of messages
according to time.
Notations: These diagrams are in the form of two-dimensional charts. The objects that initiate the
interaction are placed on the x-axis. The messages that these objects send and receive are placed
along the y-axis, in the order of increasing time from top to bottom.
Example: A sequence diagram for the Automated Trading House System is shown in the following
figure.
Collaboration Diagrams:
Collaboration diagrams are interaction diagrams that illustrate the structure of the objects that send
and receive messages.
Notations: In these diagrams, the objects that participate in the interaction are shown using vertices.
The links that connect the objects are used to send and receive messages. The message is shown as a
labelled arrow.
Example: A collaboration diagram for the Automated Trading House System is illustrated in the
figure below.

[Figure: collaboration diagram for the Automated Trading House System, showing a placeOrder message.]

Purpose of Interaction Diagrams:
The purpose of interaction diagrams is to visualize the interactive behavior of the
system. Visualizing the interaction is a difficult task. Hence, the solution is to
use different types of models to capture the different aspects of the interaction.
Sequence and collaboration diagrams are used to capture the dynamic nature but
from a different angle.
The purpose of interaction diagrams is -
@ To capture the dynamic behaviour of a
system.
@ To describe the message flow in the
system.
@ To describe the structural organization
of the objects.
e@ To describe the interaction among
objects.Structured Design:
The aim of structured design is to transform the results of the
structured analysis (that is, the DFD model) into a structure chart. A structure
chart represents the software architecture.
The structure chart representation can be easily implemented using
some programming language.
Since the main focus in a structure chart representation is on module
structure of a software and the interaction among the different modules, the
procedural aspects (e.g. how a particular functionality is achieved) are not
represented.
[Figure: structured design methodologies — the DFD model is transformed into a structure chart.]
The basic building blocks using which structure charts are designed are as
following:
Rectangular boxes: A rectangular box represents a module. Usually, every
rectangular box is annotated with the name of the module it represents.
Module invocation arrows: An arrow connecting two modules implies that
during program execution control is passed from one module to the other in
the direction of the connecting arrow.
Data flow arrows: These are small arrows appearing alongside the module
invocation arrows. The data flow arrows are annotated with the
corresponding data name. Data flow arrows represent the fact that the named
data passes from one module to the other in the direction of the arrow.
Selection: The diamond symbol represents the fact that one of several
modules connected with the diamond symbol is invoked depending on
the outcome of the condition attached with the diamond symbol.
Repetition: A loop around the control flow arrows denotes that the respective
modules are invoked repeatedly.
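The building blocks above (modules, invocation arrows, data flow arrows) can be captured in a simple data structure. A sketch, with hypothetical module and data names:

```python
# A sketch of a structure chart as data. Each module maps to the list of
# modules it invokes (invocation arrows); data flow arrows are recorded
# per (caller, callee) pair. Module and data names are hypothetical.

structure_chart = {
    "root":         ["get-input", "compute", "print-output"],
    "get-input":    [],
    "compute":      [],
    "print-output": [],
}

# data flow arrows: what each invocation passes down or returns up
data_flows = {
    ("root", "get-input"):    {"returns": ["raw-data"]},
    ("root", "compute"):      {"passes": ["raw-data"], "returns": ["result"]},
    ("root", "print-output"): {"passes": ["result"]},
}

def called_modules(chart, module):
    """Modules invoked by `module` (its children in the structure chart)."""
    return chart[module]

print(called_modules(structure_chart, "root"))
# ['get-input', 'compute', 'print-output']
```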
Transformation of a DFD Model into Structure Chart
Systematic techniques are available to transform the DFD representation of a
problem into a module structure represented by a structure chart.
Structured design provides two strategies to guide transformation of a DFD
into a structure chart:
1) Transform analysis 2) Transaction analysis
Library modules: A library module is usually represented by a rectangle with
double edges. Libraries comprise the frequently called modules. Usually,
when a module is invoked by many other modules, it is made into a library
module.
Transform analysis
Transform analysis identifies the primary functional components (modules)
and the input and output data for these components. The first step in
transform analysis is to divide the DFD into three types of parts:
1) Input
2) Processing
3) Output
The input portion of the DFD includes processes that transform input data
from physical form (e.g., characters from a terminal) to logical form (e.g.,
internal tables, lists, etc.). Each input portion is called an afferent branch.

The output portion of a DFD transforms output data from logical form to
physical form. Each output portion is called an efferent branch. The remaining
portion of a DFD is called central transform.
In the next step of transform analysis, the structure chart is derived by
drawing one functional component each for the central transform, the
afferent and efferent branches. These are drawn below a root module, which
would invoke these modules.
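The module structure produced by transform analysis can be sketched as a root module invoking one module per branch. All names below are hypothetical; the example assumes a trivial "average of numbers" problem:

```python
# A sketch of the module structure produced by transform analysis:
# an afferent branch converts physical input to logical form, a central
# transform does the processing, and an efferent branch converts the
# result back to physical (output) form. All names are hypothetical.

def read_numbers(text):          # afferent branch: physical -> logical
    return [int(tok) for tok in text.split()]

def compute_average(numbers):    # central transform
    return sum(numbers) / len(numbers)

def format_result(avg):          # efferent branch: logical -> physical
    return f"average = {avg:.2f}"

def main(text):                  # root module invokes the three branches
    numbers = read_numbers(text)
    avg = compute_average(numbers)
    return format_result(avg)

print(main("10 20 30"))  # average = 20.00
```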
Transaction analysis
Transaction analysis is an alternative to transform analysis and is
useful while designing transaction processing programs. A transaction allows
the user to perform some specific type of work by using the software. For
example, ‘issue book’, ‘return book’, ‘query book’, etc., are transactions.
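In a transaction-driven design, a root module inspects the input data item and routes it to the module that handles that transaction. A minimal sketch using the library transactions above; the handler bodies are hypothetical placeholders:

```python
# A sketch of transaction-centred dispatch: the input data item itself
# identifies the transaction, and the root module routes it to the
# corresponding handler module. Handler bodies are hypothetical.

def issue_book(book_id):
    return f"issued {book_id}"

def return_book(book_id):
    return f"returned {book_id}"

def query_book(book_id):
    return f"status of {book_id}"

TRANSACTIONS = {
    "issue": issue_book,
    "return": return_book,
    "query": query_book,
}

def dispatch(transaction_type, book_id):
    handler = TRANSACTIONS.get(transaction_type)
    if handler is None:
        raise ValueError(f"unknown transaction: {transaction_type}")
    return handler(book_id)

print(dispatch("issue", "B42"))  # issued B42
```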
As in transform analysis, first all data entering into the DFD need to be
identified. In a transaction-driven system, different data items may pass
through different computation paths through the DFD. Each different way in
which input data is processed is a transaction. A simple way to identify a
transaction is the following: check the input data.
The number of bubbles on which the input data to the DFD are
incident defines the number of transactions. However, some transactions
may not require any input data. These transactions can be identified based on
the experience gained from solving a large number of examples.

DEBUGGING
Debugging is the process of fixing a bug in the software. In other words,
it refers to identifying, analyzing, and removing errors. This activity begins
after the software fails to execute properly and concludes by solving the
problem and successfully testing the software. It is considered to be an
extremely complex and tedious task because errors need to be resolved at all
stages of debugging.
Debugging Approaches
The following are some of the approaches that are popularly adopted
by the programmers for debugging:
Brute force method
This is the most common method of debugging but is the least
efficient method. In this approach, print statements are inserted throughout
the program to print the intermediate values with the hope that some of the
printed values will help to identify the statement in error.
This approach becomes more systematic with the use of a symbolic
debugger (also called a source code debugger), because values of different
variables can be easily checked and break points and watch points can be
easily set to test the values of variables effortlessly.
Single stepping using a symbolic debugger is another form of this
approach, where the developer mentally computes the expected result after
every source instruction and checks whether the same is computed by single
stepping through the program.
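As a contrived illustration of the brute force approach, print statements inserted around a suspect computation expose the intermediate values as the program runs:

```python
# A contrived illustration of brute-force debugging: print statements
# expose intermediate values so the developer can spot where the
# computation first goes wrong. The function itself is a toy example.

def average(values):
    total = 0
    for v in values:
        total += v
        print(f"after adding {v}: total = {total}")  # debug print
    result = total / len(values)
    print(f"result = {result}")                      # debug print
    return result

average([2, 4, 6])  # the prints reveal how `total` evolves step by step
```

A symbolic debugger makes the same inspection systematic: instead of scattering prints, the developer sets breakpoints or watchpoints on `total` and single-steps through the loop.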
Backtracking
This is also a fairly common approach. In this approach, starting from
the statement at which an error symptom has been observed, the source code
is traced backwards until the error is discovered. Unfortunately, as the number
of source lines to be traced back increases, the number of potential backward
paths increases and may become unmanageably large for complex programs,
limiting the use of this approach.
Cause elimination method
In this approach, once a failure is observed, the symptoms of the
failure (e.g., a certain variable has a negative value though it should be
positive) are noted. Based on the failure symptoms, the causes which
could possibly have contributed to the symptom are developed, and tests are
conducted to eliminate each. A related technique of identifying the error
from the error symptom is software fault tree analysis.

Program slicing
This technique is similar to backtracking. In the backtracking approach,
one often has to examine a large number of statements. However, the search
space is reduced by defining slices.
A slice of a program for a particular variable and at a particular
statement is the set of source lines preceding this statement that can
influence the value of that variable. Program slicing makes use of the fact that
an error in the value of a variable can be caused by the statements on which it
is data dependent.
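The data-dependence idea behind slicing can be sketched over a straight-line program. Below, each statement is recorded as (line, defined variable, used variables); the statement table and variable names are hypothetical:

```python
# A sketch of a backward slice over a straight-line program. Each
# statement is (line, defined_var, used_vars). The slice for a variable
# at a statement is the set of earlier lines whose definitions can
# influence its value through data dependence. Table is hypothetical.

program = [
    (1, "a", set()),        # a = input()
    (2, "b", set()),        # b = input()
    (3, "c", {"a"}),        # c = a * 2
    (4, "d", {"b"}),        # d = b + 1
    (5, "e", {"c", "a"}),   # e = c + a
]

def backward_slice(stmts, var, at_line):
    """Lines before `at_line` whose definitions can affect `var` there."""
    relevant = {var}
    slice_lines = set()
    for line, defined, used in reversed(stmts):
        if line >= at_line:
            continue
        if defined in relevant:
            slice_lines.add(line)
            # the definition's inputs become relevant in their turn
            relevant = (relevant - {defined}) | used
    return sorted(slice_lines)

print(backward_slice(program, "e", 6))  # [1, 3, 5]
```

Line 4 is excluded because `d` never influences `e`: when debugging a wrong value of `e`, only lines 1, 3, and 5 need to be examined.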
Each of these debugging approaches can be supplemented with
debugging tools. We can apply a wide variety of debugging compilers, dynamic
debugging aids ("tracers"), automatic test case generators, memory dumps,
and cross-reference maps.
However, tools are not a substitute for careful evaluation based on a
complete software design document and clear source code.
Debugging Guidelines
Debugging is often carried out by programmers based on their
ingenuity and experience. The following are some general guidelines for
effective debugging:
• Many times debugging requires a thorough understanding of the
program design. Trying to debug based on a partial understanding of the
program design may require an inordinate amount of effort to be put into
debugging, even for simple problems.
• Debugging may sometimes even require a full redesign of the system. In
such cases, a common mistake that novice programmers often make is
attempting to fix not the error but its symptoms.
• One must beware of the possibility that an error correction may
introduce new errors. Therefore, after every round of error-fixing,
regression testing must be carried out.
Usability Testing:
Usability testing is a type of testing that is done from an end user's
perspective to determine whether the system is easily usable. Usability
testing is generally the practice of testing how easy a design is to use on a
group of representative users.
A very common mistake in usability testing is conducting a study too
late in the design process. If you wait until right before your product is
released, you won't have the time or money to fix any issues, and you'll
have wasted a lot of effort developing your product the wrong way.
Phases of Usability Testing
There are five phases in usability testing, which are followed when
usability testing is performed. These are given below:
Prepare your product or design to test: The first phase of usability testing is
choosing a product and then making it ready for usability testing. Usability
testing requires the relevant functions and operations to be in place, and this
phase provides them. Hence this is one of the most important phases in
usability testing.

Find your participants: The second phase of usability testing is finding
participants who will help you with performing usability testing. Generally, the
number of participants that you need is based on a number of case studies.
Generally, five participants are able to find almost as many usability problems
as you'd find using many more test participants.
Write a test plan: This is the third phase of usability testing. One of the first
steps in each round of usability testing is to develop a plan for the test. The
main purpose of the plan is to document what you are going to do, how you
are going to conduct the test, what metrics you are going to capture, the
number of participants you are going to test, and what scenarios you will use.
Take on the role of the moderator: This is the fourth phase of usability testing
and here the moderator plays a vital role that involves building a partnership
with the participant. Most of the research findings are derived by observing
the participant's actions and gathering verbal feedback. To be an effective
moderator, you need to be able to make instant decisions while
simultaneously overseeing various aspects of the research session.
Present your findings/final report: This phase generally involves combining
your results into an overall score and presenting it meaningfully to your
audience. An easy method to do this is to compare each data point to a target
goal and represent this as a single metric based on the percentage of users
who achieved this goal.

Software Measurement and Metrics:
Software Measurement: A measurement is a manifestation of the size,
quantity, amount, or dimension of a particular attribute of a product or
process.
Software measurement is a quantified attribute of a characteristic of a
software product or the software process.
It is a discipline within software engineering. The software
measurement process is defined and governed by ISO standards.
Software Measurement Principles:
The software measurement process can be characterized by five
activities:
Formulation: The derivation of software measures and metrics appropriate for
the representation of the software that is being considered.
Collection: The mechanism used to accumulate data required to derive the
formulated metrics.
Analysis: The computation of metrics and the application of mathematical
tools.
Interpretation: The evaluation of metrics resulting in insight into the quality of
the representation.
Feedback: Recommendation derived from the interpretation of product
metrics transmitted to the software team.
Need for Software Measurement: Software is measured to:
• Assess the quality of the current product or process.
• Anticipate future qualities of the product or process.
• Enhance the quality of a product or process.
• Regulate the state of the project in relation to budget and schedule.
Classification of Software Measurement:
There are 2 types of software measurement:
1) Direct Measurement: In direct measurement, the product, process, or thing
is measured directly using a standard scale.
2) Indirect Measurement: In indirect measurement, the quantity or quality to
be measured is measured using related parameters, i.e., by use of reference.

Metrics:
A metric is a measurement of the level to which any attribute belongs to
a system, product, or process. Software metrics will be useful only if they are
characterized effectively and validated so that their worth is proven. There are
4 functions related to software metrics:
• Planning
• Organizing
• Controlling
• Improving
Characteristics of software Metrics:
Quantitative: Metrics must possess quantitative nature. It means metrics can
be expressed in values.
Understandable: Metric computation should be easily understood, and the
method of computing metrics should be clearly defined.
Applicability: Metrics should be applicable in the initial phases of the
development of the software.
Repeatable: The metric values should be the same when measured repeatedly
and consistent in nature.
Economical: The computation of metrics should be economical.
Language Independent: Metrics should not depend on any programming
language.

Classification of Software Metrics: There are 3 types of software metrics:
Product Metrics: Product metrics are used to evaluate the state of the product,
tracing risks and uncovering prospective problem areas. The ability of the
team to control quality is evaluated, e.g., size, complexity, design, quality.
Process Metrics: Process metrics pay particular attention to enhancing the
long-term process of the team or organization (based on company
performance, effectively designed).
Project Metrics: Project metrics describe the project characteristics and
execution process, for example:
• Number of software developers
• Staffing patterns over the life cycle of software
• Cost and schedule
• Productivity
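As a toy illustration, a direct product metric such as size can be computed straight from the source text. A sketch that counts non-blank, non-comment lines; this particular definition of LOC is an assumption made for the example:

```python
# A sketch of one direct product metric: lines of code, counted here as
# non-blank lines that are not pure '#' comments. This LOC definition
# is an assumption chosen only for illustration.

def count_loc(source: str) -> int:
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """\
# a comment
x = 1

y = x + 2  # trailing comment still counts
"""
print(count_loc(sample))  # 2
```

An indirect metric, by contrast, would be derived from such direct measures (e.g., defects per thousand LOC) rather than measured on its own scale.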
Advantages of Software Metrics:
• Reduction in cost or budget.
• It helps to identify particular areas for improvement.
• It helps to increase product quality.
• Managing the workloads and teams.
• Reduction in overall time to produce the product.
• It helps to determine the complexity of the code and to test the code with
resources.
• It helps in providing effective planning, controlling and managing of the
entire product.
Disadvantages of Software Metrics:
• It is expensive and difficult to implement the metrics in some cases.
• The performance of the entire team or an individual from the team can't be
determined; only the performance of the product is determined.
• Sometimes the quality of the product does not meet expectations.
• It leads to measuring unwanted data, which is a waste of time.
• Measuring incorrect data leads to wrong decision making.