
Software Engineering (Comp 424)

Chapter Two
Modeling Software Systems
Chapter Objectives

At the end of the chapter students will be able to:


 Describe the requirement engineering activities.
 Differentiate classical requirement analysis from object-oriented requirement
analysis models.
 Discuss the system design activities.
 Differentiate classical and object-oriented design models.
 Describe system implementation.

Chapter Contents
2.1. Requirement Engineering
2.1.1 Activities in requirement analysis
2.1.2 Classical operational analysis vs. object-oriented analysis
2.2. System Design
2.2.1. Software design activity and its objective
2.2.2. Modularization techniques
2.2.3. Top-down versus bottom-up design
2.2.4. Design patterns: classical vs. object-oriented (UML)
2.3. System Implementation

2.1. Requirement Engineering

Requirement engineering provides the appropriate mechanism for understanding
what the customer wants, analyzing need, assessing feasibility, negotiating a
reasonable solution, specifying the solution unambiguously, validating the
specification, and managing the requirements as they are transformed into an
operational system.

The requirements engineering process can be described in six distinct steps:

 Requirements elicitation
 Requirements analysis
 Requirements specification
 Requirement modeling
 Requirements validation
 Requirements management

2.1.1. Requirement Elicitation

Before requirements can be analyzed, modeled, or specified, they must be
gathered through an elicitation process.

I. Initiating the process

The most commonly used requirements elicitation technique is to conduct a
meeting or interview. The analyst starts by asking a set of (context-free)
questions that will lead to a basic understanding of the problem, the people
who want the solution, the nature of the solution that is desired, and the
effectiveness of the first encounter. These questions focus on the customer,
the overall goals, and the benefits.

 These questions help to identify all stakeholders who will have interest in the
software to be built. In addition, the questions identify the measurable benefit of
a successful implementation and possible alternatives to custom software
development.
 The next set of questions enables the analyst to gain better understanding of the
problem and the customer to voice his/her perceptions about the solution
 The final set of questions focuses on the effectiveness of the meeting.
These questions will help to ‘break the ice’ and initiate the communication
that is essential to successful analysis.
 The Q&A session should be used for the first encounter only and then replaced
by a meeting format that combines elements of problem solving, negotiation
and specification.

II. Facilitated Application Specification Techniques (FAST)

A number of independent investigators have developed a team-oriented approach
to requirements gathering, applied during the early stages of analysis and
specification, called FAST. This approach encourages the creation of a joint
team of customers and developers who work together to identify the problem,
propose elements of the solution, negotiate different approaches, and specify
a preliminary set of solution requirements.
Many different approaches to FAST exist, but all follow these basic guidelines:
 A meeting is conducted at a neutral site and attended by both software engineers
and customers.
 Rules for preparation and participation are established.
 An agenda is suggested that is formal to cover all important points and informal
to encourage the free flow of ideas.
 A facilitator (one among the customer or developer or an outsider) controls the
meeting.
 A ‘definition mechanism’ (can be work sheets / wall stickers / electronic
bulletin board / chat room) is used. The goal is to identify the problem, propose
elements of the solution, negotiate different approaches and specify a
preliminary set of solution requirement in an atmosphere that is conducive to
the accomplishment of the goal.
 Each FAST attendee makes a list of objects, list of services, list of constraints
and performance criteria. After individual lists are presented, a combined list is
created by the group. The combined list of each topic is reviewed to develop a
consensus list.
 Once consensus lists have been completed, the team is divided into smaller
sub teams to develop mini-specifications for each list. After mini-specs are
completed, each FAST attendee makes a list of validation criteria for the
product / system.
 Finally, one or more participants are assigned the task of writing the
complete draft specification using all inputs.

The team approach in FAST provides the benefits of many points of view and of
instantaneous discussion and refinement, and it is a concrete step toward the
development of a specification.
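The consensus-list step described above can be sketched in code. This is our own minimal illustration, not part of the handout; the attendee lists and the "proposed by more than one attendee" rule are hypothetical simplifications of what a real FAST session would negotiate:

```python
# Sketch: combining the individual object lists that FAST attendees
# produce into a single consensus list. Names are hypothetical.

def build_consensus(individual_lists):
    """Combine attendees' lists, keeping items in first-seen order
    and counting how many attendees proposed each one."""
    counts = {}
    for items in individual_lists:
        for item in items:
            counts[item] = counts.get(item, 0) + 1
    # A simple consensus rule: keep items proposed by more than one attendee.
    return [item for item, n in counts.items() if n > 1]

attendee_objects = [
    ["part", "vendor", "order"],          # attendee 1's object list
    ["part", "vendor", "warehouse"],      # attendee 2's object list
    ["part", "order", "invoice"],         # attendee 3's object list
]
consensus = build_consensus(attendee_objects)
# "part" is proposed by all three; "vendor" and "order" by two each.
```

In a real session the combined list would be reviewed and revised by the group rather than filtered mechanically; the code only shows the merge step.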

III. Quality Function Deployment

QFD is a quality management technique that translates the needs of the
customer into technical requirements for software. It concentrates on
maximizing customer satisfaction from the software engineering process. It
emphasizes an understanding of what is valuable to the customer and then
deploys these values throughout the engineering process.

 Identifies three types of requirements:
 Normal requirements – objectives and goals that are stated for product during
meetings with the customer. They might be graphical displays, specific system
functions and defined levels of performance.
 Expected requirements – are implicit to the product or system, and their
absence will cause significant dissatisfaction. Examples are: human/machine
interaction, overall operational correctness, reliability and software
installation.
 Exciting requirements – these go beyond expectations and prove to be very
satisfying when present. For example, word processing software is requested
with standard features, and the delivered product contains a number of page
layout capabilities.
 Spans the entire engineering process, i.e., function deployment is used to
determine the value of each function that is required for the system.
 Uses customer interviews and observation, surveys and examination of historical data
as raw data for requirements gathering activity.
These data are then translated into a table of requirements – called the customer voice
table – that is reviewed with the customer.

2.1.2. Requirements Analysis

Initially, the analyst studies the system specification and the software
project plan. This is important to understand the software in a system context
and to review the software scope that was used to generate planning estimates.
Next, communication for analysis must be established so that problem
recognition, as perceived by the customers, is ensured.
Problem evaluation and solution synthesis is the next major area of effort for
analysis. The analyst must define all externally observable data objects,
evaluate the flow and content of information, define and elaborate all
software functions, understand software behavior in the context of events that
affect the system, establish system interface characteristics, and uncover
additional design constraints. Each of these tasks serves to describe the
problem so that an overall approach or solution may be synthesized.

For example, an inventory control system is required for a major supplier of
auto parts. The analyst finds that problems with the current manual system
include:
 Inability to obtain the status of a component rapidly,
 Two-or-three day turnaround to update a card file,
 Multiple reorders to the same vendor because there is no way to associate
vendors with components, and so on.

Once the problem is identified, the analyst determines what information is to
be produced and what data is to be supplied to the system. For example, the
customer desires a daily report of the parts taken from inventory and how many
remain, and indicates that clerks will log the identification number of each
part as it goes out of inventory.
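The daily report the customer asks for can be sketched as follows. This is our own hypothetical illustration of the example above; the part IDs, quantities, and data structures are invented, not part of the handout:

```python
# Sketch: clerks log a part ID each time a part leaves inventory;
# the daily report shows parts taken today and how many remain.

stock = {"P-100": 40, "P-205": 12}          # parts on hand at start of day
withdrawals = ["P-100", "P-100", "P-205"]   # IDs logged by clerks today

taken = {}
for part_id in withdrawals:
    taken[part_id] = taken.get(part_id, 0) + 1
    stock[part_id] -= 1                     # each log entry removes one part

for part_id, n in taken.items():
    print(f"{part_id}: taken {n}, remaining {stock[part_id]}")
```

Even a toy model like this forces the questions analysis must answer: what data the clerks supply (the log of part IDs) and what information the system produces (the daily report).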

Upon evaluating the current problems and desired information, the analyst
begins to synthesize one or more solutions:

 To start, the data objects, processing functions and behavior of the system are
defined in detail.
 Once this information is established, basic architectures for implementation are
considered.

The process of evaluation and synthesis continues until both analyst and customer feel
confident that software can be adequately specified for subsequent development steps.
During the evaluation and solution synthesis activity, analyst creates models of the
system in an effort to better understand data and control flow, functional processing,
operational behavior and information content. The model serves as a foundation for
software design and as the basis for the creation of specifications for the software.

Detailed specifications may not be possible at this stage. The customer may be
unsure of precisely what is required, and the developer may be unsure that a
specific approach will properly accomplish function and performance. For these
reasons, an alternative approach to requirement analysis, called prototyping,
is considered.

Requirement Analysis Principles

Each analysis method has a unique point of view. However, all analysis methods
are related by a set of operational principles:
 The information domain of a problem must be represented and understood.
 The function that the software is to perform must be defined.
 The behavior of the software must be represented.
 The models that depict information, function and behavior must be partitioned
in a manner that uncovers detail in a layered fashion.
 The analysis process should move from essential information toward
implementation detail.

Guiding principles for requirements analysis:

 Understand the problem before you begin to create the analysis model.
 Develop prototypes that enable user to understand how human/machine
interaction will occur.
 Record the origin of and the reason for every requirement.
 Use multiple views of requirements.
 Rank the requirements.
 Work to eliminate ambiguity.

A software engineer who applies these principles can develop a software
specification that will provide an excellent foundation for design.

2.1.3. Requirement Specification

Requirement specification is a representation process: regardless of the mode
through which we accomplish it, requirements must be represented in a manner
that leads to successful software implementation.

Specification Principles
A number of specification principles can be proposed:
 Separate functionality from implementation.
 Develop a model of the desired behavior of a system that encompasses data and
the functional responses of a system to various environments.
 Specify the manner in which other system components interact with software.
 Define the environment in which the system operates.
 Design a model of the system as perceived by its user, rather than a design
or implementation model.
 Recognize that a specification is an abstraction of some real situation that
is normally quite complex.
 The content and structure of a specification should be established in such a
way that it is amenable to change.

In many cases the software requirements specification may be accompanied by an
executable prototype or a preliminary user’s manual. The manual can serve as a
valuable tool for uncovering problems at the human / machine interface. In
other words, this document can be considered a proposal which includes the
following elements:

 Background: the subject area you want to work with and the organizational
background.
 Statement of the problem: state the problem clearly, in quantitative terms.
 Justification of the problem: show the efforts already made to solve the
problem and your project’s contribution.
 Objectives of the project: general and specific objectives.
 Methodology of the project: data collection methods and why, sample
selection and why, selection of requirement analysis method and why, selection
of design tools and why, implementation issues, programming language selection
and why, testing tools and methodology.
 Scope of the project: the boundary of your project; state the things your
project will do.
 Application of the project: to whom and how your project work can be
applicable.
 Project management: group formation, group management, group report
structure.
 Project budget: resources with estimations.
 Time management: activities with time schedule.
 References: cite your references using a standard citation technique.
 Appendixes: attach necessary documents and code scripts.

2.1.4. Requirement Analysis Modeling

Models created during requirements analysis serve important roles:

 The model aids the analyst in understanding the information, function and
behavior of a system, making the requirement analysis task easier and more
systematic.
 The model becomes the focal point for review, and for determining the
completeness, consistency and accuracy of the specifications.
 The model becomes the foundation for design, providing the designer with an
essential representation of software that can be ‘mapped’ into an
implementation.

2.1.4.1. Structured Requirement Analysis Modeling

The structured system analysis uses the following tools:

 Data Flow Diagram (DFD)
 Data Dictionary
 Entity Relationship Diagram (ERD)
 Structured English

I. DFD

A data flow diagram is a network representation of a system. It portrays the
system in terms of its component pieces, with all interfaces among the
components indicated. If there is an existing system, study the current
system: identify the data flows, data stores and processes. The analyst should
question the users repeatedly until he or she gains the required knowledge of
the current system. There are different methods to develop the current system
data flow diagram. Use the current organizational unit to easily identify
information flows, data stores and processes, such as yellow pages, model 19,
black carbon copy etc. When you develop the current system DFD, use the
existing physical names. This helps to establish good communication between
the analyst and the users.

As analysis proceeds, the physical considerations become a burden. Use of
physical terms also limits the readership of the documents to those who are
familiar with the details. Then transform all the physical information into
the current logical model. The logical DFD should show only what the system
performs, not how the system operates. The DFD is made up of only four basic
elements:

a) Data flow
A data flow is a pipeline through which packets of information of known
composition flow. It portrays some interface among the components of a data
flow diagram. Most data flows move between processes, but they can just as
well flow into or out of files, and to and from destination boxes and source
boxes respectively. It is represented by named vectors.

b) Processes
A process is a transformation of incoming data flow(s) into outgoing data
flow(s). It invariably shows some amount of work performed on data. It is
represented by circles, bubbles or rectangles.

c) Files
A file is a temporary repository of data or information. It may be a tape, an
area of disk, a card data set, a chart on a wall, an index file in someone’s
drawer, or the little book of deadbeat cardholders that the credit card
companies issue from time to time. It might even be a wastebasket. As long as
it is a temporary repository of data, it qualifies as a file.
Databases qualify as files under this definition; the term database carries
even more connotations about physical implementation than the term file does.
A file is represented by unclosed rectangles or lines.

d) Data sources and sinks
Any system or business area can be described on a DFD with data flows,
processes, and files. Sometimes, however, you can substantially increase the
readability of your diagram by showing where the net inputs to the system come
from and where the net outputs go. A source or sink is a person or
organization, lying outside the context of the system, that is a net
originator or receiver of system data. It is represented by boxes or ellipses.

II. Data Dictionary

The analysis model encompasses representations of data objects, function and
control. Thus, it is necessary to provide an organized approach for
representing the characteristics of each data object and control item. This is
accomplished by the data dictionary.
“Data dictionary is an organized listing of all data elements that are pertinent to the
system, with precise, rigorous definitions so that both user and system analyst will have a
common understanding of inputs, outputs, components of stores and intermediate
calculations”

The data dictionary is almost always implemented as part of a CASE structured
analysis and design tool. The information contained in the dictionary
includes:
 Name – the primary name of the data or control item, the data store or an external
entity.
 Alias – other names used for first entry.
 Where used / how used – a listing of the process that uses the data or control item and
how it is used (e.g., input to the process, output from the process, as a store, as an
external entity).
 Content description – a notation for representing content.
 Supplementary information – other information about data types, preset values (if
known), restrictions or limitations and so forth.

As an example, if the data item telephone number is specified as an input, the
data dictionary provides a precise definition of telephone number for the DFD.
In addition, it indicates where and how this data item is used and any
supplementary information that is relevant to it.

name: telephone number
aliases: none
where used/how used: assess against set-up (output)
dial phone (input)
description:
telephone number = [ local number | long distance number ]
local number = prefix + access number
long distance number = 1 + area code + local number
area code = [ 800 | 888 | 561 ]
prefix = * a three digit number that never starts with 0 or 1 *
access number = * any four number string *

The content description is expanded until all composite data items have been
represented as elementary items, or until all composite items are represented
in terms that are well known. The data dictionary grows rapidly in size and
complexity for large computer-based systems, so CASE tools should be used; it
is extremely difficult to maintain the dictionary manually.
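One benefit of a precise content description is that it can be checked mechanically. The sketch below is our own illustration: it renders the telephone number definition from the example dictionary entry as a regular expression (the handout itself does not do this):

```python
import re

# Each dictionary rule becomes a regex fragment.
prefix = r"[2-9]\d{2}"      # a three digit number that never starts with 0 or 1
access = r"\d{4}"           # any four number string
local = rf"{prefix}{access}"                 # local number = prefix + access
long_distance = rf"1(800|888|561){local}"    # 1 + area code + local number
telephone_number = re.compile(rf"^({local}|{long_distance})$")

telephone_number.match("5551234")       # a local number: matches
telephone_number.match("18005551234")   # a long distance number: matches
telephone_number.match("0551234")       # prefix starts with 0: no match
```

The point is that a rigorous dictionary definition, unlike informal prose, leaves no ambiguity about which values are legal.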

III. Entity Relationship Diagram (ERD)

The object/relationship pair can be represented graphically using
entity/relationship diagrams. The purpose of the ERD is to represent data
objects and their relationships. A set of components is identified for the
ERD: data objects, attributes, relationships and various type indicators.
Data objects are represented by labeled rectangles. Relationships are
indicated with a labeled line connecting objects. Connections between data
objects and relationships are established using a variety of special symbols
that indicate cardinality and modality.

Data modeling and ERD provide the analyst with a notation for examining data within
the context of a software application. This approach is used to create one piece of the
analysis model that can also be used for database design and to support other
requirements analysis methods.

An entity is any real-world object or event about which the system keeps
records. An attribute is an item of information recorded about an entity, for
example a borrower’s name or a borrower’s department. A key attribute is the
principal identifier of an entity: it has a unique value for each occurrence
of a record. In this example, borrower name is the key attribute of the
borrowers entity. A relationship is an indication that one object is
associated with one or more other objects.

The process of removing internal repeating groups from complex files and
setting them up separately is called normalization. It is used to avoid data
redundancy and to build a stable data model that can be expanded in line with
the user’s requirements. If data is accessed by a data flow but is not used by
any process, remove the form that contains that data item from your DFD.

IV. Structured English

All data flow diagrams (DFDs) and entity relationship diagrams (ERDs) should
be accompanied by descriptions written in structured English. This gives the
reader of the requirement engineering documents a clear picture and creates a
shared understanding between the requirement analyst and the user of the
software.
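As an illustration (our own, not from the handout), a structured English description of a hypothetical check-out process in a library system might read:

```
PROCESS: check-out-book
    GET borrower-id AND book-isbn
    IF borrower HAS overdue books THEN
        REJECT request AND ISSUE overdue-notice
    ELSE
        CREATE loan-record
        SET due-date TO today + loan-period
        UPDATE book-status TO "on loan"
    ENDIF
```

Structured English keeps the vocabulary of the users (borrower, loan, due-date) while imposing enough structure (IF/THEN/ELSE, one action per line) that the analyst and the user read the logic the same way.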

2.1.5. Requirement Verification

There are two general ways of verifying a specification. One consists of observing the
dynamic behavior of the specified system in order to check whether it conforms to the
intuitive understanding we had of the behavior of the ideal system. The other consists of
analyzing the properties of the specified system that can be deduced from the
specification. The properties that are deduced are then checked against the expected
properties of the system. The effectiveness of both techniques increases with
the degree of formality of the specification.

2.1.5.1. Object Oriented Requirement Analysis Modeling

The conventional specification methods, which are based on the structured
approach, view software in one of two ways: a data-oriented view or a process-
(action-) oriented view. Nevertheless, data and actions are two sides of the
same coin; a data item cannot change unless an action is performed on it, and
actions without associated data are equally meaningless.

In the object-oriented approach, data and actions are bound together to form
objects, which are instances of classes. Classes may also be related to each
other through hierarchical relationships such as inheritance. Thus, in the
object-oriented approach, data and actions are considered to be of equal
importance, and neither takes precedence over the other. This approach is
superior to the structured approach.
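The binding of data and actions, and the inheritance relationship between classes, can be shown in a few lines. The classes below are our own illustrative example, not from the handout:

```python
# Sketch: data (attributes) and actions (methods) bound together in a
# class, with inheritance relating classes hierarchically.

class Account:
    def __init__(self, balance):
        self.balance = balance       # data

    def withdraw(self, amount):      # action bound to the data
        self.balance -= amount

class SavingsAccount(Account):       # inherits both data and actions
    def add_interest(self, rate):
        self.balance += self.balance * rate

acct = SavingsAccount(100.0)
acct.withdraw(20.0)        # inherited action
acct.add_interest(0.10)    # balance becomes (100 - 20) * 1.10
```

Neither the data nor the actions takes precedence: the balance cannot change except through the actions defined on it, which is the encapsulation the text describes.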

Object-oriented analysis (OOA) is a semiformal specification technique for the
object-oriented paradigm. Just as there are a large number of equivalent
structured analysis techniques, there are a number of different techniques for
performing OOA. Because the different techniques use different graphical
representations, learning a specific technique requires learning the relevant
graphical notation. However, this has changed with the publication of the
Unified Modeling Language (UML).

Object oriented requirement analysis modeling includes the following UML
representations:

a) Use Case
A use case shows the interaction between external users and the system. It is
used to capture the business requirements for the system and to build a
process model, which defines the business processes in a more formal manner. A
use case is a set of activities that produce some output result. Each use case
describes how the system reacts to an event that triggers the system, and
contains a fairly complete description of all the activities that occur in
response to that trigger event. It is similar to the context diagram in the
structured approach.

b) Class Diagram
A class diagram shows the static structure of a system at the class level and
is used to illustrate the relationships between the classes modeled in the
system. It is similar to the data model, i.e., the entity relationship diagram
(ERD).

c) Object Diagram
It shows the static nature of a system at the class level. It is used to illustrate the
relationship between object modeled in the system. It is used when actual instances of the
class will better communicate the model. It is similar with data model i.e., Entity
Relationship Diagram (ERD).

d) Sequence Diagram
A sequence diagram shows the interaction between classes for a given use case,
arranged in time sequence. It models the behavior of classes within a use
case. It is similar to the process model (DFD).

e) Collaboration Diagram
A collaboration diagram shows the interaction between classes for a given use
case, not arranged in time sequence. It is used to model the behavior of
classes within a use case. It is similar to the process model (DFD).

f) State Chart Diagram
A state chart diagram shows the sequence of states that an object can assume,
the events that cause the object to transition from state to state, and the
significant activities and actions that occur as a result. It is used to
examine the behavior of one class within a use case.

g) Activity Diagram
An activity diagram models a specific business process, or the dynamics of a
group of objects, and provides a view of what is going on inside a use case or
among several classes. It illustrates the flow of activities in a use case.

h) Component Diagram
A component diagram shows the physical components (e.g., .exe files, .dll
files) in a design and where they are located. It illustrates the physical
structure of the software.

i) Deployment Diagram
A deployment diagram shows the structure of the run-time system; for example,
it can show how physical modules of code are distributed across various
hardware platforms. It shows the mapping of software onto hardware components.

Advantages of OO Approach

1) Better organization of inherent complexity: the use of inheritance ensures
that related concepts, resources, and other objects can be efficiently defined
and used.
2) Reduced development effort through reuse: reusing object classes that have
been written, tested, and maintained by others cuts development, testing and
maintenance time.
3) More extensible and maintainable systems: the use of OOP helps limit the
number of potential interactions between different parts of the software,
ensuring that changes to the implementation of a class can be made with little
impact on the rest of the system.
4) Objects enable programmers to customize an operating system to meet new
requirements without disrupting system integrity.
5) Because objects communicate by means of messages, it does not matter
whether two communicating objects are on the same system or on two different
systems in a network.
6) Objects pave the road to distributed computing.

2.2. System Design

2.2.1. Major Questions

In system design, a major decision is whether to buy the software or build it
in-house. The major questions raised are:

 Which parts of the system should be computerized?
 How should they be automated? The major alternatives are batch, online,
centralized and distributed.

The main task of system design is to provide detailed specifications of the
system that can be implemented by computer programmers and technicians.

2.2.2. Objective of System Design

There are two major goals in system design:

 To design a system that fulfills user requirements and is user friendly.
 To produce clear and complete specifications for the computer programmers
and technicians.

2.2.3. Considerations in the System Design

To achieve the above goals, the system analyst and designer should take the
following considerations into account during the system design stage.

1. Involve end users in the system design, such as in the design of outputs
and inputs, because it is the users who will work with the physical system.
2. Fulfill current and projected functional requirements. The designed system
should answer how the identified functions are met. Additional functions are
also added when we automate the manual system, such as entering data into the
computer system, editing input data, and security and performance functions
such as protecting sensitive data through passwords.
3. Design all information system components:
 Data and Information: each data and information flow was documented
during the analysis phase, and the media were specified during the system
selection phase. Now it is time to design the style, organization and
format of all inputs and outputs.
 Data Store: specify the format, organization and access methods for all
files and databases to be used in the computer-based system.
 End User: the roles people must play in the new system must be specified,
such as who will capture and input data, who will receive outputs, and so
on.

4. Methods and Procedures: the sequence of steps and flow of control through
the new system must be specified. The processing methods and intermediate
manual procedures must also be documented.
5. Computer Equipment: specify the type of hardware to be purchased.
6. Computer Programs: complete programming specifications must be prepared
for every program that must be written.
7. Internal Controls: specify internal controls to ensure the security and
reliability of the system.

2.2.4. Design Approaches

The software design phase consists of three activities: architectural design,
detailed design, and design testing.

Architectural Design

During architectural design (also known as general design, logical design, or
high-level design), a modular decomposition of the product is developed. That
is, the specifications are carefully analyzed, and a module structure that has
the desired functionality is produced. The output from this activity is a list
of the modules and a description of how they are to be interconnected. From
the viewpoint of abstraction, during architectural design the existence of
certain modules is assumed; the design is then developed in terms of those
modules.

Detailed Design

The next activity is detailed design, also known as modular design, physical
design, or low-level design, during which each module is designed in detail.
For example, specific algorithms are selected and data structures are chosen.
From the viewpoint of abstraction, during this activity the fact that the
modules are to be interconnected to form a complete product is ignored.

Design Testing

The third activity is testing, which is an integral part of design. It is not
something that is performed only after the architectural design and detailed
design have been completed.

2.2.5. Design Activities

I. File and Database Design

When we design files and databases, we have to keep in mind that:

 Files and databases are shared resources.
 Future programs may use the files and databases in ways not originally
assumed.
 We must consider how programs will access the data in order to improve
performance. The access types are sequential, linked list, random, etc. This
issue affects file and database organization decisions.
 We must also consider record sizes and storage volume requirements.
 Because files and databases are shared resources, we must design internal
controls to ensure proper security, and disaster recovery techniques in case
data is lost or destroyed.
 Files and databases should be designed to adapt to future requirements and
expansions.

II. Output Design

The output of a computer system is the primary contact between the system and
most users. The quality of this output and its usefulness determine whether
the system will be used, so it is essential to produce the best possible
output. Output design considerations include:

a. End user issues

When we design outputs, we should ensure that the outputs are clear and
understandable to end users. The following principles are important:

 Computer outputs should be simple to read and interpret.
 Every report or output screen should have a title.
 Reports and screens should include section headings to segment large
amounts of information.
 Information in columns should have column headings.
 Because section headings and column headings are frequently abbreviated
to conserve space, reports should include legends to interpret those
headings.
 Legends should also be used to formally define all fields on a report.
 Computer jargon and error messages should be omitted from all outputs,
or at the very least, relegated to the end of the output.

 The timing of computer outputs is important. Outputs must be received by their
recipients while the information is pertinent to transactions or decisions. This
can affect how the output is designed and implemented.
 The distribution of computer outputs must be sufficient to assist all relevant end
users.
 The computer outputs must be acceptable to the end users who will receive it.

b. Choices for Media and Formats of Output

There are different media to present the output, such as paper media, screen output and
secondary storage media. Hence we should specify which medium to use to present the
output. Format, on the other hand, refers to the way the information is displayed on that
medium. There are several formats you can consider for communicating information on a
medium.
 Tabular columns of text and numbers are the oldest and most common format for
computer outputs.
 Graphics output is becoming more popular as high-capacity computers and
specialized graphics software come on the market. To the end user a picture can be
more valuable than words. Bar charts, pie charts, line charts, step charts,
histograms, and other graphs can help end users grasp trends and data relationships
that cannot be easily seen in tabular numbers.

c. Internal Controls

Internal controls ensure information is delivered to the right person and protect
information from unnecessary misuse and fraud. The following guidelines are offered for
output controls:
 The timing and the volume of each output must be precisely specified.
 The distribution of all outputs must be specified. For each output, the recipients of
all copies must be determined. A distribution log, which provides an audit trail for
the outputs, is frequently required.
 Access controls are used to control accessibility of video (online) output. For
example, a password may be required to display a certain output on a CRT
terminal.
 Control totals should be incorporated into all reports. The number of records input
should equal the number of records output.
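As a rough sketch of the control-total guideline, assuming each record is a simple dictionary (the field names here are invented for illustration):

```python
# Hypothetical control-total check for a report: the number of records read
# in must equal the number of records written out, as the guideline requires.

def control_totals(input_records, output_records):
    """Return control totals and whether they reconcile."""
    totals = {
        "records_in": len(input_records),
        "records_out": len(output_records),
    }
    totals["balanced"] = totals["records_in"] == totals["records_out"]
    return totals

# Example: one record was dropped somewhere during processing, so the
# control totals do not balance and the report should be investigated.
incoming = [{"id": 1}, {"id": 2}, {"id": 3}]
outgoing = [{"id": 1}, {"id": 3}]
result = control_totals(incoming, outgoing)
```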

d. How to design outputs

 Review existing system outputs to easily identify the outputs generated by the
system and the data elements in each output.
 Add new data elements to existing outputs to meet new user requirements
 Prototype the layout for end users. There are different tools for rapid prototyping
such as Microsoft Excel Spreadsheet, CASE tools and using DBMS report
generator facilities etc.

III.Input Design

Input design serves to easily enter data into the computer system. What can be output
depends on what has been input to the system. Important terms in input design
include:

 Data Capture: refers to collecting the relevant data from the source documents
and recording it on a computer input form.
 Data Entry: is the process of converting data into machine-readable form, such as
data encoding through a computer keyboard.
 Data Input: refers to data in machine-readable form.

a. Input Methods and Media

Input methods are broadly classified into two:

Batch methods: The source documents are collected and sent periodically, say once a
week or month, to data encoders.
Online methods: Data is directly entered at its origin through a computer terminal. The
most common online medium is the display terminal, which includes at least a monitor
and a keyboard connected to a computer system. No form is used to collect data from the
source documents for later data entry, and there is also no data entry clerk. Data entry
errors are detected immediately during data entry by a computer edit program, which
notifies the CRT (Cathode Ray Tube) operator to make corrections. Of course, the edit
program does not detect all data entry errors, so human checking is still important. The
online data entry method is common at point-of-sale terminals in retail shops and groceries.

b. End user considerations for Input Design


End user consideration is very important for input design because end users are always
working with the system. Considerations include:
 Enter only variable data; do not enter constant data.
 Do not input data that can be calculated or stored in computer programs.
 Use codes for appropriate data elements.
 Include instructions for completing the form.
 Minimize the amount of handwriting.
 Design documents so they can be easily and quickly entered into the system.
 Data to be entered should be sequenced so it can be read like a book.
 Ideally, portions of the form that are not to be input are placed in or about the lower
right portion of the source document. Alternatively, this information can be
placed on the back of the form.

c. Internal control for inputs

Input controls ensure the accuracy of data input to the computer. These include:
 The number of inputs should be monitored for any missing or misplaced source
documents.
 Care must be taken to ensure that the data is valid and to catch errors such as
typing 123 instead of 132. Such checking includes:
o Completeness checks
o Limit and range checks

o Combination checks
o Self checking digits
Data validation requires that special edit programs be written to perform checks.
However, the input validation requirements should be designed when the inputs
themselves are designed.
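The edit checks listed above might be sketched as follows; the field names are hypothetical, and the Luhn scheme is used here only as one common example of a self-checking digit:

```python
# Illustrative input edit checks. The record fields (e.g. "qty") are
# invented; a real system would validate its own input data elements.

def completeness_check(record, required_fields):
    """Completeness check: every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in required_fields)

def range_check(value, low, high):
    """Limit/range check: value must fall within [low, high]."""
    return low <= value <= high

def luhn_check(number):
    """Self-checking digit using the Luhn scheme (as on card numbers)."""
    digits = [int(d) for d in str(number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # equivalent to summing the two digits
        total += d
    return total % 10 == 0
```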

d. Designing computer inputs

Follow the following steps to prototype and design your computer inputs:
 Review input requirements. Check your DFD. Any data flow that enters the
machine side of the system is an input to be designed. We should also check
whether the data elements are sufficient to produce the required output.
 Design how the input data flow will be implemented.
o Identify source documents that need to be designed or modified.
o Determine the input method to be used.
o Determine the timing and volume of input.
o Specify internal controls and special instructions to be followed for the
input.
o Study the input data elements to determine which data really needs to be
input.
 Design or prototype the source document. If the source document is used to
capture data, we prefer to design that document first.
 Prototype online input screens. You can produce sketches or prototypes and
show them to end users to comment on, then finalize your input screen designs.
You can also use different tools, such as database management software, to produce a
prototype screen design. This is usually done for online input and remote
batch inputs.

IV. Design System Methods, Procedures and Controls

Methods and procedures define the sequence of events that produce outputs from their
requisite inputs. Specifically, a method is a way of doing something. A procedure is a
step-by-step plan for implementing the method. Methods and procedures can also be
described as answering the questions "who does what and when do they do it?" and
"how will it be done?"

V. User Interface Design

 The overall process for designing a user interface begins with the creation of
different models of system function.
 The human-and-computer-oriented tasks that are required to achieve system
function are then outlined
 Design issues that apply to all interface designs are considered
 Tools are used to prototype and implement the design model
 The result is evaluated for quality.

User interface implementation involves the following:

 Initial analysis activity focuses on the profile of the users who will interact with
the system, i.e., skill level, business understanding and general receptiveness to
the new system are recorded.
 Once general requirements have been defined, a more detailed task analysis is
conducted. Those tasks that the user performs to accomplish the goals of the
system are identified, described and elaborated.
 The information gathered as part of the analysis activity is used to create an
analysis model for the interface. The goal is to define a set of interface objects and
actions that enable the user to perform all defined tasks in the manner that meets
every usability goal defined for the system.
 Validation focuses on
o Ability of the interface to implement every user task correctly, to
accommodate all task variations and to achieve all general user
requirements;
o Degree to which the interface is easy to use and easy to learn and
o The user’s acceptance of the interface as a useful tool in their work.

VI. Design of Computer Programs

Computer programs are designed on a modular basis to avoid the complexities of big
programs.

a) Modularization Techniques

A module is a well-defined component of a software system. A module may be a collection
of routines, a collection of data, a collection of type definitions, a collection of classes or
objects, or a mixture of all of these. It can be viewed as a provider of computational
resources or services.

When we decompose a system into modules, we must be able to describe the overall modular
structure precisely and state the relationships among the individual modules.

Functional Independence

Functional independence is a direct outgrowth of modularity and the concepts of


abstraction and information hiding.
 The software is designed in such a way so that each module addresses a specific
sub-function of requirements and has a simple interface when viewed from other
parts.
 Software with independent modules is easier to develop because functions may be
compartmentalized and interfaces are simplified.
 Independent modules are easier to maintain because secondary effects caused by
design or code modification are limited, error propagation is reduced and reusable
modules are possible.
 Independence is measured using two qualitative criteria:
 Coupling is a measure of the relative interdependence among modules.
 Cohesion is a measure of the relative functional strength of a module.
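As a minimal sketch of functional independence, the hypothetical tax-calculation module below has high cohesion (every function serves the same sub-function) and low coupling (data passes only through parameters and return values, not shared mutable state); the rate and allowance figures are invented:

```python
# A small, functionally independent module for one sub-function: tax.
# Constants are local to the module; callers see only a simple interface.

TAX_RATE = 0.15          # hypothetical flat rate
ALLOWANCE = 200.0        # hypothetical tax-free allowance

def taxable_income(gross):
    """Only income above the allowance is taxable."""
    return max(gross - ALLOWANCE, 0.0)

def tax_due(gross):
    """Single entry point other modules call; no globals are mutated."""
    return round(taxable_income(gross) * TAX_RATE, 2)
```

Because the module mutates no shared state, it can be modified or reused without secondary effects on its callers, which is exactly what the criteria above measure.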

b) Top-down Vs bottom-up

What strategy should we follow when we design a system: top-down or bottom-up? Both
strategies have strong and weak points. Criticisms of the top-down strategy include:

 Sub problems tend to be analyzed in isolation.


 No emphasis is placed on identification of commonalities or on
reusability of components.
 Little attention is paid to data and more generally information hiding.

Information hiding proceeds mainly bottom up. It suggests that we should first recognize
what we wish to encapsulate within a module and then provide an abstract interface to
define the module’s boundaries as seen from the clients. Note, however, that the
decision of what to hide inside a module may depend on the result of some top-down
activity. Since information hiding is proven to be highly effective in supporting
design for change, program families, and reusable components, its bottom-up
philosophy should be followed in a consistent way.
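As a minimal sketch of information hiding, the stack below keeps its representation (a Python list) as a module secret behind the abstract interface push/pop/is_empty; the example and its names are illustrative, not taken from the text:

```python
# Information hiding: clients use only the abstract interface, so the
# hidden list could later be replaced by another structure (e.g. a linked
# list) without touching any client code.

class Stack:
    def __init__(self):
        self._items = []        # hidden representation (by convention)

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

# Client code depends only on the interface, never on the list inside.
s = Stack()
s.push(1)
s.push(2)
```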

Design, however, is a highly critical and creative human activity. Good designers do not
proceed in a strictly top-down or strictly bottom-up fashion. A typical design strategy
may proceed partly top down and partly bottom up, depending on the phase of design or
the nature of the application being designed; this is called yo-yo design. For example, we might
start decomposing a system top down in terms of subsystems and, at some later point,
synthesize subsystems in terms of a hierarchy of information-hiding modules.

The top–down approach, however, is often useful as a way to document a design even if
the system has not been designed in top-down fashion.

2.2.6. Design Modeling

Software design is both a process and a model. The design process is a sequence of steps
that enable the designer to describe all aspects of the software to be built. The design
model is the equivalent of an architect’s plans for a house. It begins by representing the
totality of the thing to be built and slowly refines the thing to provide guidance for
constructing each detail. Similarly, the design model that is created for software provides
a variety of different views of the computer software.

2.2.6.1. Structured Design

Structured techniques deal with the size and complexity of a program by
breaking up the program into a hierarchy of modules, resulting in a computer
program that is easier to implement and maintain. The concepts of structured
design include:
 Design a program as a top-down hierarchy of modules.
 The hierarchy is developed according to various design rules and
guidelines.
 The modules are evaluated according to certain quality acceptance criteria

to ensure the best modular design for the program.
 The modules are implemented using structured programming principles.

Structured Chart

The most popular tool in the structured design methods is the structure
chart. A structure chart illustrates the modular design of a program. It shows how
the program has been partitioned into smaller, more manageable modules, the
hierarchy and organization of those modules, and the communication interfaces
between modules.
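The idea of a structure chart can be sketched in code as a call hierarchy: a boss module at the top invokes subordinate modules, and data passes between them only through arguments and return values. The payroll modules below are a hypothetical example, invented for illustration:

```python
# A structure chart drawn in code: one boss module, three leaf modules.

def read_timecard():                  # leaf module: input
    return {"employee": "E01", "hours": 40, "rate": 12.5}

def compute_gross(hours, rate):       # leaf module: processing
    return hours * rate

def format_pay_line(employee, gross): # leaf module: output
    return f"{employee}: {gross:.2f}"

def produce_payroll():                # boss module at the top of the chart
    card = read_timecard()
    gross = compute_gross(card["hours"], card["rate"])
    return format_pay_line(card["employee"], gross)
```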

2.2.6.2. Object-oriented Design

Object-oriented design is a technique that pushes to the extreme a design


approach based on abstract data types. Its product is a design document in terms
of objects which are instances of classes and subclasses.

Designing object-oriented software is hard, and designing reusable object-
oriented software is even harder. You need to find pertinent objects, factor them
into classes at the right granularity, define class interfaces and inheritance
hierarchies, and establish key relationships among them. Your design should be
specific to the problem at hand but also general enough to address future
problems and requirements. You also want to avoid redesign, or at least
minimize it. Before a design is finished, designers will usually try to reuse it
several times, modifying it each time.

Object oriented design (OOD) consists of the following steps:

1. Finding objects and classes.


2. Constructing detailed class diagrams

1. Finding class and objects

This process can be further classified into the following activities:

a. Identifying Objects

This phase starts from understanding of the problems domain by identifying


relevant and stable objects that will form the core of the application. An object in
OOD is an abstraction from the problem domain about which we wish to keep
information (attribute of an object) and with which we can interact (the services).
Identification of objects reduces the complexity of the model produced so far by
dividing or grouping it into more manageable and understandable subjects.

Hints to find relevant classes and objects

 Look for structure

 Look at other systems with which the system under consideration interacts
as a way of prompting potential classes and objects.
 Ask what physical devices the system interacts with
 Examine the events that must be remembered and recorded. e.g. date, the
roles that people play like owner, manager, client
 Examine the physical or geographical locations of relevance and also the
organizational unit. e.g. Divisions and teams.

E.g. university student administration

Classes - registration, student, course, registration clerk
Class - registration
Attributes - date, number, fee
Services - create, renew, terminate, suspend, approve, check qualifications
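The registration example above might be sketched as a class. The attribute names follow the example, but the simple status handling and the sample values are assumptions for illustration:

```python
# A sketch of the registration object: attributes hold its state, and
# services (methods) are triggered by messages to change that state.

class Registration:
    def __init__(self, number, date, fee):
        self.number = number       # attribute
        self.date = date           # attribute
        self.fee = fee             # attribute
        self.status = "created"    # the create service sets initial state

    def approve(self):             # service
        self.status = "approved"

    def suspend(self):             # service
        self.status = "suspended"

    def terminate(self):           # service
        self.status = "terminated"

reg = Registration(number=101, date="2024-01-15", fee=350.0)
reg.approve()
```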

b. Define attributes

After objects are identified, attributes should be defined. Attributes are data
variables contained within an object. They represent properties that describe the
state of an object. The fact that the internal processing and the details of the data
are hidden (or private) is known as encapsulation.

c. Defining services

It includes defining the processing, method or behaviour. A service is the
operation or process performed by an object in response to the receipt of a
message. Methods are procedures that can be triggered from outside to perform
certain functions. A method may change the state of the object, update some of
its variables, or act on outside resources to which the object has access.

2. Constructing detailed class diagrams

A class represents a kind of person, place, or thing about which the system must
capture and store information. Organize the basic classes and objects into
hierarchies that will enable the benefits of inheritance to be realized. This involves
identifying those aspects or objects that are common or generalized, and
separating them from those that are specific. So we are identifying
structures on classes.

A generalization-specialization structure usually shows a hierarchy of classes and is
known as the identification of superclasses and subclasses.

Generalization (e.g.): student
Specializations (e.g.): full-time student, part-time student
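This generalization-specialization structure can be expressed as an inheritance hierarchy: the student superclass holds what is common, and each specialization adds what is specific. The course-load figures below are hypothetical:

```python
# Superclass carries the generalized attributes; subclasses specialize.

class Student:
    def __init__(self, name):
        self.name = name           # common attribute, inherited by both

class FullTimeStudent(Student):
    def max_courses(self):
        return 6                   # hypothetical full-time course load

class PartTimeStudent(Student):
    def max_courses(self):
        return 3                   # hypothetical part-time course load

ft = FullTimeStudent("Abebe")
pt = PartTimeStudent("Sara")
```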
2.3. System Implementation

Choice of programming language is the first activity in implementation. Based on the
type of the problem at hand and the expertise we have, we can select an appropriate
programming language for coding.
Coding is undertaken once the design phase is complete. In the coding phase every
module identified and specified in the design document is independently coded and unit
tested. Since the module specifications state the data structures and algorithms for each
module, the objective of the coding phase is to transform the design of the system, as
given by its module specifications, into high-level language code and then to
unit test this code.
A good software development organization adheres to some well defined and standard
style of coding called coding standards.
They can be developed or formulated by organizations in order to suit their needs. A
coding standard sets out standard ways of doing several things, such as the way variables
are to be named, the way the code is to be laid out, the maximum number of source lines
that can be allowed per function, etc. Besides the coding standards, several coding guidelines are also
suggested by software companies. Coding guidelines provide some general suggestions
regarding the coding style to be followed but leave the actual implementation of these
guidelines to the discretion of the individual engineers.
After a module has been coded, usually code inspection and code walk-through are
carried out to ensure that coding standards are followed and to detect as many errors as
possible before testing. It is important to detect as many errors as possible during code
inspection, code walk-through, and unit testing because an error detected at these stages
requires much less effort for debugging than the effort that would be needed if the same
error was detected during integration or system testing.

I. Coding Standards and Guidelines


Different organizations usually develop their own coding standards and guidelines
depending on what best suits them. Therefore, the following are a few representative
coding standards and guidelines commonly adopted by many software development
organizations. Representative coding standards include:
 Rules for limiting the use of global variables: These rules list what types of data
can be declared global and what cannot.
 Contents of the headers preceding codes for different modules: The information
contained in the headers of different modules should be in a standard format. The
following are some standard header data:
o name of the module, date on which the module was created, author's
name, modification history, abstract of the module, different functions
supported, along with their input/output parameters, global variables
accessed/modified by the module.
 Naming conventions for global variables, local variables, and constant identifiers:
A possible naming convention can be that global variable names always start with
a capital letter, local variable names are made up of small letters, and constant
names are always capital letters.

 Error return conventions and exception handling mechanisms: The way error
conditions are reported by different functions in a program and the way common
exception conditions are handled should be standardized within an organization.
For example, different functions when they encounter an error condition should
either return a 0 or 1 consistently.
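A small Python module following these representative standards might look as follows; the header fields mirror the standard header data listed above, while the module details (names, dates, rates) are invented for illustration:

```python
# ---------------------------------------------------------------
# Module   : interest_calc
# Created  : 2024-03-01        Author: A. Designer
# History  : 2024-03-10 rounding rule changed
# Abstract : computes simple interest for a savings account
# Functions: simple_interest(principal, years) -> float
# Globals  : Base_Rate (read only)
# ---------------------------------------------------------------

MAX_YEARS = 30          # constant: all capital letters
Base_Rate = 0.07        # global: starts with a capital letter

def simple_interest(principal, years):
    rate = Base_Rate    # local variables: small letters
    years = min(years, MAX_YEARS)
    return principal * rate * years
```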

Representative coding guidelines include the following points, recommended by many
software development organizations. Wherever necessary the rationale underlying these
guidelines is also mentioned.
 Do not use an overly clever and difficult-to-understand coding style: Code should be
easy to understand. Clever coding can obscure the meaning of the code and
hamper understanding. It also makes maintenance difficult.
 Avoid ambiguous side effects: The side effects of a function call include
modification of parameters passed by reference, modification of global variables,
and I/O operations. These side effects make a piece of code difficult to
understand. For example, if a global variable is changed obscurely in a called
module, it becomes difficult for anybody to understand the code.
 Don’t use an identifier for multiple purposes: Programmers often use the same
identifier to denote several temporary entities. For example, some programmers
use a temporary loop variable for also storing the final result. The rationale that is
usually given by these programmers for such multiple uses of variables is memory
efficiency, e.g. three variables use up three memory locations, whereas the same
variable used in three different ways uses just one memory location. Some of the
inconveniences caused by such multiple uses of variables are as follows:
o Each variable should be given a descriptive name indicating its purpose.
This is not possible if an identifier is used for multiple purposes. The use
of a variable for multiple purposes can lead to confusion and annoyance
for somebody trying to read and understand the code.
o The use of variables for multiple purposes usually makes future
enhancements extremely difficult.
 The code should be well-documented: As a rule of thumb, there should be at least
one comment line on average for every three source lines.
 The length of any function should not exceed 10 source lines: A function that is
very lengthy is usually very difficult to understand as it probably carries out many
different functions.
 Do not use goto statements, etc.: The use of “goto” statements makes a program
unstructured and difficult to understand.
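As a small illustration of the identifier guideline, the function below gives the loop variable and the result their own descriptive names instead of one temporary doing double duty; the function and names are hypothetical:

```python
# One identifier, one purpose: running_total holds only the result, and
# line_amount is used only for iteration. Reusing a single temp for both
# would save a memory location but cost readability and maintainability.

def total_invoice(amounts):
    """Sum invoice line amounts using clearly named variables."""
    running_total = 0.0
    for line_amount in amounts:
        running_total += line_amount
    return running_total
```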
II. Code Walk-Through

A code walk-through is an informal technique for analysis of the code. A code walk-
through of a module is undertaken after the coding of the module is complete. In this

technique, after a module has been coded, members of the development team select some
test cases and simulate execution of the code by hand. Even though a code walk-through
is an informal analysis technique, several guidelines have been evolved over the years for
making this naive but useful analysis technique more effective. Of course, these
guidelines are based on personal experience, common sense, and several subjective
factors, and hence should be considered more as examples than rules to be applied
dogmatically. Some of these guidelines are given below:
 The team performing a code walk-through should not be either too big or too
small. Ideally, it should consist of three to seven members.
 Discussions should be focused on the discovery of errors and not on how to fix
the discovered errors.
 In order to foster cooperation and avoid the feeling among the engineers that
they are being evaluated, managers should not participate in the discussions.
III. Code Inspections

In contrast to code walk-throughs, code inspections aim explicitly at the discovery of


commonly made errors. In other words, during code inspection the code is examined for
the presence of certain kinds of errors in contrast to the hand simulation of code
execution as done in code walk-throughs. For instance, consider the classical error of
writing a procedure that modifies a formal parameter while the calling routine calls that
procedure with a constant actual parameter. It is more likely that such an error will be
discovered by looking for it in the code, rather than by simply hand simulating the
execution of the procedure. Most software development companies collect statistics to
identify the type of errors most frequently committed. Such a list of commonly
committed errors can be used during code inspections to keep a look-out for possible
errors.
Some classical programming errors which can be looked for during code inspection are:
 Use of uninitialized variables, Jumps into loops, Nonterminating loops,
Incompatible assignments, Array indices out of bounds, Improper storage
allocation and deallocation, Mismatches between actual and formal parameters
in procedure calls, Use of incorrect logical operators or incorrect precedence
among operators, Improper modification of loop variables, Comparison of
equality of floating point values, etc.
Adherence to coding standards is also checked during code inspections.
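One classical error from the checklist above, comparison of equality of floating point values, can be demonstrated directly; the tolerance value below is an arbitrary illustrative choice:

```python
# Exact equality on floats is a classical inspection target: 0.1 + 0.2 is
# not exactly 0.3 in binary floating point, so == gives a surprising False.

def floats_equal(a, b, tol=1e-9):
    """Compare floats within a tolerance instead of with ==."""
    return abs(a - b) <= tol

naive_result = (0.1 + 0.2 == 0.3)        # the error an inspection looks for
safe_result = floats_equal(0.1 + 0.2, 0.3)
```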
IV. Software Documentation

When we develop a software product we not only develop the executable files and the
source code but also develop various kinds of documents such as users' manual, software
requirements specification (SRS) document, design document, test document, installation
manual, etc. All these documents are a vital part of any good software development
practice. Good documents enhance the understandability and maintainability of a software
product. Different types of software documentation can be broadly classified as:
• Internal documentation, and
• External documentation (supporting documents).

Internal documentation is the code comprehension features provided as part of the source
code itself. Internal documentation is provided through appropriate module headers and
comments embedded in the source code. Internal documentation is also provided through
the use of meaningful variable names, code indentation, code structuring, use of
enumerated types and constant identifiers, use of user-defined data types, etc. Most
software development organizations usually ensure good internal documentation by
appropriately formulating their coding standards and coding guidelines.

External documentation is provided through various types of supporting documents such


as users' manual, software requirements specification document, design document, test
documents etc. A systematic software development style ensures that all these documents
are produced in an orderly fashion. An important feature of good documentation is
consistency. If the different documents are not consistent, a lot of confusion is created for
somebody trying to understand the product. Also, all the documents for a product should
be up-to-date. Even if only a few documents are not up-to-date, they create inconsistency
and lead to confusion.
