Software Engineering Unit-V (SE R23 JNTUK)
• A key benefit arising out of the use of a CASE environment is cost saving through all developmental phases. Different studies carried out to measure the impact of CASE put the effort reduction at between 30 per cent and 40 per cent.
• Use of CASE tools leads to considerable improvements in quality. This is mainly because one can effortlessly iterate through the different phases of software development, and the chances of human error are considerably reduced.
• CASE tools help produce high quality and consistent documents. Since the important data relating to a software product are maintained in a central repository, redundancy in the stored data is reduced, and therefore, the chances of inconsistent documentation are reduced to a great extent.
• CASE tools take out most of the drudgery (dull, repetitive work) in a software engineer’s work. For example, engineers need not meticulously check the balancing of the DFDs by hand, but can do it effortlessly at the press of a button (a small sketch of such a balancing check is given after this list).
• Introduction of a CASE environment has an impact on the style of working of a company, and makes it
oriented towards the structured and orderly approach.
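As an illustration of the DFD balancing check mentioned above, the following is a minimal sketch in Python of the kind of check a CASE tool automates; the process names and data flows are hypothetical, and a real tool would read them from its data dictionary.

# Sketch of an automated DFD balancing check of the kind a CASE tool performs.
# A parent process is balanced with its child diagram when the data flows entering
# and leaving the parent match the flows crossing the child diagram's boundary.

def is_balanced(parent_inputs, parent_outputs, child_inputs, child_outputs):
    """Return True if the child DFD conserves the parent process's data flows."""
    return (set(parent_inputs) == set(child_inputs) and
            set(parent_outputs) == set(child_outputs))

# Hypothetical level-1 process "Process Order" and its level-2 decomposition:
parent_in = ["order_details", "customer_id"]
parent_out = ["invoice", "rejection_note"]
child_in = ["order_details", "customer_id"]
child_out = ["invoice"]  # "rejection_note" is missing in the child diagram

print(is_balanced(parent_in, parent_out, child_in, child_out))  # False: diagrams are not balanced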
• Since one of the main uses of a prototyping CASE tool is graphical user interface (GUI) development, the user should be allowed to define all data entry forms, menus, and controls.
• It should integrate with the data dictionary of a CASE environment.
• If possible, it should be able to integrate with external user defined modules written in C or some popular high
level programming languages.
• The user should be able to define the sequence of states through which a created prototype can run (see the state-machine sketch after this list). The user should also be allowed to control the running of the prototype.
• The run-time system of the prototype should support a mock-up run of the actual system and management of the input and output data.
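The state-sequencing requirement above can be pictured as a small finite-state machine that drives the prototype’s screens. The sketch below is only illustrative; the screen names, events, and transitions are hypothetical.

# Illustrative sketch: a prototype mock-up run modelled as a user-defined sequence
# of states (screens). The state names, events and transitions are hypothetical.

PROTOTYPE_STATES = {
    "login_form": {"ok": "main_menu", "cancel": "login_form"},
    "main_menu": {"new_entry": "data_entry_form", "quit": "end"},
    "data_entry_form": {"save": "main_menu", "cancel": "main_menu"},
}

def run_prototype(events, start="login_form"):
    """Drive the mock-up through the defined sequence of states."""
    state = start
    for event in events:
        state = PROTOTYPE_STATES.get(state, {}).get(event, state)
        print(f"event '{event}' -> state '{state}'")
        if state == "end":
            break
    return state

run_prototype(["ok", "new_entry", "save", "quit"])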
12.3.2 Structured Analysis and Design
• Several diagramming techniques are used for structured analysis and structured design.
• A CASE tool should support one or more of the structured analysis and design techniques.
• The CASE tool should support effortless drawing of analysis and design diagrams.
• The CASE tool should support drawing complex diagrams and preferably through a hierarchy of levels.
• It should provide easy navigation through different levels and through design and analysis.
• The tool must support completeness and consistency checking across the design and analysis, and through all levels of the analysis hierarchy.
• The CASE tool should support generation of module skeletons or templates. The tool should generate records, structures, and class definitions automatically from the contents of the data dictionary in one or more popular programming languages (a small sketch of such skeleton generation is given after this list).
• It should be possible to include copyright message, brief description of the module, author name and the date
of creation in some selectable format.
• It should generate database tables for relational database management systems.
• The tool should generate code for the user interface from the prototype definition, for X Window and MS Windows based applications.
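The skeleton-generation requirement mentioned in the list above can be illustrated with a small sketch that emits a class definition, complete with a header block, from a data-dictionary entry. The dictionary format, field names, and header layout are assumptions made only for this example.

# Hypothetical sketch: generating a class skeleton from a data-dictionary entry,
# including a selectable header carrying the author, date and description.
from datetime import date

def generate_class_skeleton(entry, author, description):
    header = (f"# Module: {entry['name']}\n"
              f"# Author: {author}\n"
              f"# Created: {date.today().isoformat()}\n"
              f"# Description: {description}\n")
    fields = "\n".join(f"        self.{name} = None  # {dtype}"
                       for name, dtype in entry["fields"].items())
    body = (f"class {entry['name']}:\n"
            f"    def __init__(self):\n"
            f"{fields}\n")
    return header + body

# A hypothetical data-dictionary entry:
customer = {"name": "Customer",
            "fields": {"customer_id": "integer", "name": "string", "address": "string"}}
print(generate_class_skeleton(customer, "A. Analyst", "Customer record from the data dictionary"))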
Intelligent diagramming support: The fact that diagramming techniques are useful for system analysis and design is well established. Future CASE tools should provide help to automatically and aesthetically lay out the diagrams.
Integration with implementation environment: The CASE tools should provide integration between design and
implementation.
Data dictionary standards: The user should be allowed to integrate many development tools into one environment. It
is highly unlikely that any one vendor will be able to deliver a total solution. Moreover, a preferred tool would require
tuning up for a particular system. Thus, the user would act as a system integrator. This is possible only if some
standard on data dictionary emerges.
Customisation support: The user should be allowed to define new types of objects and connections. This facility may
be used to build some special methodologies. Ideally it should be possible to specify the rules of a methodology to a
rule engine for carrying out the necessary consistency checks.
User interface
• The user interface provides a consistent framework for accessing the different tools thus making it easier for
the users to interact with the different tools and reducing the overhead of learning how the different tools are
used.
Tool set:
• Tool set is a set of software application programs (CASE tools), which are used to automate SDLC activities.
• CASE tools are used by software project managers, analysts and engineers to develop software systems.
• There are a number of CASE tools available to simplify various stages of the Software Development Life Cycle, such as analysis tools, design tools, project management tools, database management tools, documentation tools, etc.
• Use of CASE tools accelerates the development of a project to produce the desired result, and helps to uncover flaws before moving ahead to the next stage of software development.
• Different CASE tools represent the software product as a set of entities such as specification, design, text data, project plan, etc.
• The object management system maps these logical entities into the underlying storage management system
(repository).
• The commercial relational database management systems are geared towards supporting large volumes of information structured as simple, relatively short records.
• There are a few types of entities but a large number of instances. By contrast, CASE tools create a large number of entity and relation types with perhaps only a few instances of each.
• The object management system takes care of appropriately mapping these entities into the underlying
storage management system.
Software maintenance: Any change made to a software product after it has been delivered to the customer is known as software maintenance. Maintenance is inevitable for almost any kind of product. However, whereas most products need maintenance due to the wear and tear caused by use, software does not wear out; it needs maintenance for the reasons described below.
There are three types of software maintenance, which are described as follows:
Corrective: Corrective maintenance of a software product is necessary to overcome the failures observed while the
system is in use.
Adaptive: A software product might need maintenance when the customers need the product to run on new
platforms, on new operating systems, or when they need the product to interface with new hardware or software.
Perfective: A software product needs maintenance to support any new features that the users may want it to
support, to change different functionalities of the system according to customer demands, or to enhance the
performance of the system.
Lehman and Belady studied the characteristics of evolution of several software products [1980]. They expressed their
observations in the form of laws. Their important laws are
Lehman’s first law: A software product must change continually, otherwise it becomes progressively less useful.
This law clearly shows that every product, irrespective of how well it has been designed, must undergo maintenance. In fact, when a product no longer needs any maintenance, it is a sign that the product is about to be retired or discarded.
Lehman’s second law: The structure of a program tends to degrade as more and more maintenance is carried out on it. The reason for the degraded structure is that maintenance activities usually result in patch work. Moreover, the members of the original development team are often not part of the maintenance team; the maintenance team therefore has only a partial and inadequate understanding of the architecture, design, and code of the software, and any modifications tend to be uglier and more complex than they should be.
Lehman’s third law: Over a program’s lifetime, its rate of development is approximately constant. The rate of development can be quantified in terms of the lines of code written or modified. This law therefore states that the rate at which code is written or modified is approximately the same during development and maintenance.
Currently, software maintenance work is much more expensive than it should be and takes more time than required. The reasons for this situation are the following:
• Software maintenance work in organisations is mostly carried out using ad hoc techniques, rather than systematic and planned activities. The primary reason is that software maintenance is one of the most neglected areas of software engineering.
• Software maintenance has a very poor image in industry. Therefore, an organisation often cannot employ the right engineers to carry out maintenance work. Yet, during maintenance it is necessary to thoroughly understand the existing product and to carry out the required modifications and extensions, which calls for experienced engineers.
• Another problem associated with maintenance work is that many of the software products needing maintenance are legacy products. A software system having a poor design and poor documentation can be considered a legacy system; legacy systems are typically poorly documented and unstructured.
13.2 SOFTWARE REVERSE ENGINEERING
• Software reverse engineering is the process of recovering the design and the requirements specification of a
product from an analysis of its code.
• The purpose of reverse engineering is to facilitate maintenance work by improving the understandability of a
system and to produce the necessary documents for a legacy system.
• Reverse engineering is becoming more important because legacy software products often lack proper documentation and are highly unstructured.
• The first stage of reverse engineering usually focuses on carrying out cosmetic changes to the code to
improve its readability, structure, and understandability, without changing any of its functionalities.
• A way to carry out these cosmetic changes is shown schematically in Figure 13.2
• A program can be reformatted using any of the several available pretty-printer programs, which lay out the program neatly.
• Assigning meaningful names is important: all variables, data structures, and functions should be assigned meaningful names wherever possible.
• Complex nested conditions in the program can be replaced by simpler conditional statements (a small before/after sketch is given after this list).
• After the cosmetic changes have been carried out on a legacy software, the process of extracting the code,
design, and the requirements specification can begin. These activities are schematically shown in Figure 13.1
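As an illustration of the cosmetic restructuring described above, the hypothetical before/after sketch below shows a complex nested condition replaced by simpler conditional statements without changing the functionality; the function and variable names are invented for this example.

# Before: deeply nested conditions typical of unstructured legacy code (hypothetical).
def approve_loan_old(age, income, defaulted):
    if age >= 18:
        if income > 25000:
            if not defaulted:
                return True
            else:
                return False
        else:
            return False
    else:
        return False

# After: the same logic rewritten as simpler, flat conditional statements.
def approve_loan_new(age, income, defaulted):
    if age < 18:
        return False
    if income <= 25000:
        return False
    return not defaulted

# The behaviour is unchanged, e.g.:
assert approve_loan_old(30, 40000, False) == approve_loan_new(30, 40000, False)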
When the changes needed to a software product are minor and straightforward (for small products), the code can be
directly modified and the changes are appropriately reflected in all the documents.
For complex projects, the software process can be represented by a reverse engineering cycle followed by a forward
engineering cycle with an emphasis on as much reuse as possible from the existing code and other documents.
First model
• The first model is preferred for projects involving small reworks where the code is changed directly, and the
changes are reflected in the relevant documents later.
• This maintenance process is graphically presented in Figure 13.3. In this approach, the project starts by
gathering the requirements for changes.
• The requirements are next analysed to formulate the strategies to be adopted for code change.
• At this stage, the association of at least a few members of the original development team goes a long way in
reducing the cycle time, especially for projects involving unstructured and inadequately documented code.
• The availability of a working old system to the maintenance engineers at the maintenance site greatly
facilitates the task of the maintenance team as they get a good insight into the working of the old system and
also can compare the working of their modified system with the old system.
• Debugging of the re-engineered system becomes easier as the program traces of both the systems can be
compared to localise the bugs.
Second model
• The second model is preferred for projects where the amount of rework required is significant. This approach
can be represented by a reverse engineering cycle followed by a forward engineering cycle. Such an approach
is also known as software re-engineering. This process model is depicted in Figure 13.4.
• During the reverse engineering, the old code is analysed (abstracted) to extract the module specifications.
• The module specifications are then analysed to produce the design.
• The design is analysed (abstracted) to produce the original requirements specification.
• The change requests are then applied to this requirements specification to arrive at the new requirements specification. At this point, forward engineering is carried out to produce the new code. At the design, module specification, and coding stages, substantial reuse is made from the reverse engineered products.
• An important advantage of this approach is that it produces a more structured design compared to what the
original product had, produces good documentation, and very often results in increased efficiency.
• The efficiency improvements are brought about by a more efficient design. However, this approach is more
costly than the first approach. An empirical study indicates that process 1 is preferable when the amount of
rework is no more than 15 per cent (see Figure 13.5).
Besides the amount of rework, several other factors might affect the decision regarding using process model 1 over
process model 2 as follows:
• Re-engineering might be preferable for products which exhibit a high failure rate.
• Re-engineering might also be preferable for legacy products having poor design and code structure.
Boehm [1981] proposed a formula for estimating maintenance costs as part of his COCOMO cost estimation model.
Boehm’s maintenance cost estimation is made in terms of a quantity called the annual change traffic (ACT). Boehm
defined ACT as the fraction of a software product’s source instructions which undergo change during a typical year
either through addition or deletion.
ACT = (KLOCadded + KLOCdeleted) / KLOCtotal
where,
KLOCadded is the total kilo lines of source code added during maintenance,
KLOCdeleted is the total kilo lines of source code deleted during maintenance, and
KLOCtotal is the total kilo lines of source code of the product.
Thus, the code that is changed should be counted in both the code added and the code deleted.
The annual change traffic (ACT) is multiplied with the total development cost to arrive at the maintenance cost:
maintenance cost = ACT × development cost
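A minimal worked sketch of this estimate follows, using hypothetical figures for the code added, the code deleted, and the original development cost.

# Hypothetical worked example of the ACT-based maintenance cost estimate.
kloc_total = 100.0           # size of the delivered product, in KLOC
kloc_added = 15.0            # KLOC added during a typical year of maintenance
kloc_deleted = 5.0           # KLOC deleted during a typical year of maintenance
development_cost = 800000.0  # original development cost (in some currency unit)

act = (kloc_added + kloc_deleted) / kloc_total   # annual change traffic = 0.20
maintenance_cost = act * development_cost        # 0.20 * 800000 = 160000

print(f"ACT = {act:.2f}, estimated annual maintenance cost = {maintenance_cost:,.0f}")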
Most maintenance cost estimation models, however, give only approximate results because they do not consider several factors such as the experience level of the engineers, the familiarity of the engineers with the product, hardware requirements, software complexity, etc.
The prominent development artifacts that can be reused are:
1. Requirements specification
2. Design
3. Code
4. Test cases
5. Knowledge
Knowledge is the most abstract development artifact that can be reused. However, two major difficulties with unplanned reuse of knowledge are:
(i) a developer experienced in one type of product might be included in a team developing a different type of
software.
(ii) it is difficult to remember complete details of the reusable development knowledge. For this, the reusable
knowledge should be systematically extracted and documented.
The basic issues that must be addressed in starting any reuse program are the following:
1. Component creation
2. Component indexing and storing
3. Component search
4. Component understanding
5. Component adaptation
6. Repository maintenance
Component creation: For component creation, the reusable components must be identified first. Selecting the right
kind of components having a potential for reuse is important. Domain analysis is a promising technique which can be used to create reusable components.
Component indexing and storing: Indexing requires classification of the reusable components, so that they can be
easily searched when we are looking for a component for reuse. The components need to be stored in a relational
database management system (RDBMS) or an object-oriented database system (ODBMS) for efficient access when
the number of components becomes large.
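A minimal sketch of such storage and indexing follows, using Python’s built-in sqlite3 module as the RDBMS; the table layout and classification attributes are assumptions made for this example, not a prescribed schema.

# Minimal sketch: storing classified components in a relational database (SQLite here)
# so that they can later be retrieved by their classification attributes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE components (
                    name TEXT, domain TEXT, function TEXT, language TEXT)""")
conn.executemany("INSERT INTO components VALUES (?, ?, ?, ?)", [
    ("seat_allocator", "airline reservation", "allocate seat", "Python"),
    ("fare_calculator", "airline reservation", "compute fare", "Python"),
    ("ledger_poster", "accounting", "post ledger entry", "Python"),
])
conn.commit()

# Retrieve all components classified under a given domain.
rows = conn.execute("SELECT name, function FROM components WHERE domain = ?",
                    ("airline reservation",)).fetchall()
print(rows)  # [('seat_allocator', 'allocate seat'), ('fare_calculator', 'compute fare')]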
Component searching: The programmers need to search the database for the right components matching their requirements. To be able to search components efficiently, the programmers require a proper method to describe the components that they are looking for.
Component understanding: The programmers need to understand a component sufficiently, what the component does and where it can be reused, before they can decide whether or not to reuse it. To facilitate understanding, the components should be well documented and should do something simple.
Component adaptation: Often, the components may need adaptation before they can be reused, since a selected
component may not fit exactly to the problem at hand.
Repository maintenance: Once a component repository is created, it requires continuous maintenance. Newly created components must be entered into the repository, and outdated components might be removed from it.
14.4.1 Domain Analysis: The aim of domain analysis is to identify the reusable components for a problem domain.
Reuse domain: A reuse domain is a technically related set of application areas; it is characterised by a pattern of similarity among the development components of the software products in those areas. Examples of domains are the accounting software domain, banking software domain, manufacturing automation software domain, telecommunication software domain, etc. One needs to be familiar with a network of related domains to successfully carry out domain analysis.
Domain analysis identifies the objects, operations, and the relationships among them. For example, consider an airline reservation system: the reusable objects can be seats, flights, airports, crew, meal orders, etc. During domain analysis, a specific community of software developers gets together to discuss community-wide solutions. The actual construction of the reusable components for a domain is called domain engineering.
Evolution of a reuse domain: The ultimate result of domain analysis is the development of problem-oriented languages, also known as application generators. Once these application generators are developed, they form application development standards. Domains develop slowly; as a domain develops, it passes through various stages, which may be distinguished as follows:
Stage 1: There is no clear and consistent set of notations. Obviously, no reusable components are available. All
software is written from scratch.
Stage 2: Here, only the experience from similar projects is used in a development effort. This means that there is
only knowledge reuse.
Stage 3: At this stage, the domain is ripe for reuse. The set of concepts are stabilised, and the notations are
standardised. Standard solutions to standard problems are available. There is both knowledge and component reuse.
Stage 4: The domain has been fully explored. The software development for the domain can largely be automated.
Programs are not written in the traditional sense anymore. Programs are written using a domain specific language,
which is also known as an application generator.
14.4.2 Component Classification: Components need to be properly classified in order to develop an effective
indexing and storage scheme.
Prieto-Diaz’s classification scheme: Each component is best described using a number of different characteristics, or facets. Prieto-Diaz’s faceted classification scheme requires choosing an n-tuple of facet values that best fits a component (a small sketch of such facet matching is given below).
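A small sketch of how a component might be described by an n-tuple of facet values and matched against a query tuple follows; the facet names and values are hypothetical and are not Prieto-Diaz’s original facet set.

# Hypothetical sketch of faceted classification: each component is described by an
# n-tuple of facet values, and a query tuple is matched facet by facet
# (None in the query means "don't care").
FACET_ORDER = ("function", "object", "medium", "application_domain")  # order used in the tuples below

components = {
    "sort_records": ("sort", "record", "file", "data processing"),
    "print_invoice": ("format", "invoice", "printer", "accounting"),
    "search_index": ("search", "record", "file", "data processing"),
}

def matches(query, description):
    return all(q is None or q == d for q, d in zip(query, description))

query = ("sort", None, "file", None)   # looking for a file-based sorting component
hits = [name for name, desc in components.items() if matches(query, desc)]
print(hits)  # ['sort_records']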
14.4.3 Searching:
• The domain repository may contain thousands of reuse items. In such large domains, what is the most efficient way to search for an item that one is looking for?
• A popular search technique that has proved to be very effective is one that provides a web interface to the
repository.
• Using such a web interface, one would first locate an item through an approximate automated search using keywords, and then browse from these results to related items (a minimal sketch of such a keyword search is given after this list).
• We must remember that the items to be searched may be components, designs, models, requirements, and even knowledge.
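A minimal sketch of the approximate keyword search described above follows, over a small hypothetical repository of reuse items; a real repository front end would of course index far more metadata.

# Minimal sketch of an approximate keyword search over a repository of reuse items.
# The item names and descriptions are hypothetical.
repository = {
    "flight_booking_design": "design document for booking a seat on a flight",
    "seat_allocator_code": "code component that allocates seats to passengers",
    "crew_roster_model": "analysis model for scheduling crew on flights",
}

def keyword_search(keywords):
    """Return the names of items whose description mentions any of the key words."""
    keywords = [k.lower() for k in keywords]
    return [name for name, text in repository.items()
            if any(k in text.lower() for k in keywords)]

print(keyword_search(["seat", "booking"]))
# ['flight_booking_design', 'seat_allocator_code']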
14.4.4 Repository Maintenance: Repository maintenance involves entering new items, removing old items which are no longer necessary, and modifying the search attributes of items to improve the effectiveness of search. Also, the links relating the different items may need to be modified.
• Assessment of a component’s reuse potential can be obtained by analysing a questionnaire circulated among the developers. Programmers working in a similar application domain can be asked to answer the questionnaire about the product’s reusability.
• Depending on the answers given by the programmers, the component may be taken up for reuse as it is, modified or refined before being entered into the reuse repository, or ignored.
• A sample questionnaire to assess a component’s reusability is the following:
1. Is the component’s functionality required for implementation of systems in the future?
2. How common is the component’s function within its domain?
3. Would there be a duplication of functions within the domain if the component is taken up?
4. Is the component hardware dependent?
5. Is the design of the component optimised enough?
6. If the component is non-reusable, then can it be decomposed to yield some reusable components?
7. Can we parametrise a non-reusable component so that it becomes reusable?
Refining products for greater reusability
For a product to be reusable, it must be relatively easy to adapt it to different contexts. Machine dependency must be
abstracted out or localised using data encapsulation techniques.
Name generalisation: The names should be general, rather than being directly related to a specific application.
Operation generalisation: Operations should be added to make the component more general. Also, operations that
are too specific to an application can be removed.
Exception generalisation: This involves checking each component to see which exceptions it might generate. For a
general component, several types of exceptions might have to be handled.
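The three kinds of generalisation above can be illustrated with a small before/after sketch; the component, the names, and the ten per cent rate are all hypothetical.

# Hypothetical illustration of refining a component for greater reusability.

# Before: the name and operation are tied to one application, and invalid input
# is not handled at all.
def compute_employee_tax(employee_salary):
    return employee_salary * 0.1

# After: name generalisation (no longer tied to "employee tax"), operation
# generalisation (the rate is now a parameter), and exception generalisation
# (invalid input is reported through an exception the caller can handle).
def compute_percentage(amount, rate=0.1):
    if amount < 0 or not 0 <= rate <= 1:
        raise ValueError("amount must be non-negative and rate must lie in [0, 1]")
    return amount * rate

print(compute_percentage(50000, rate=0.1))  # 5000.0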
• The programs also often need to call some operating system functionality, and these calls may not be the
same on all machines. Also, programs use some function libraries, which may not be available on all host
machines.
• A portability solution to overcome these problems is shown in Figure 14.1. The portability solution suggests
that rather than call the operating system and I/O procedures directly, abstract versions of these should be
called by the application program.
• All platform-related calls should be routed through the portability interface (a minimal sketch of such an interface is given after this list).
• One problem with this solution is the significant overhead incurred, which makes it inapplicable to many real-
time systems and applications requiring very fast response.
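A minimal sketch of such a portability interface follows: the application calls an abstract layer, and a platform-specific adapter routes the calls to the real operating system facilities. The interface, adapter names, and operations are hypothetical.

# Hypothetical sketch of a portability interface: the application never calls the
# operating system directly; platform-specific adapters implement the abstract calls.
import sys

class PortabilityInterface:
    """Abstract versions of the platform-dependent calls used by the application."""
    def temp_dir(self):
        raise NotImplementedError
    def write_message(self, text):
        raise NotImplementedError

class PosixAdapter(PortabilityInterface):
    def temp_dir(self):
        return "/tmp"
    def write_message(self, text):
        sys.stdout.write(text + "\n")

class WindowsAdapter(PortabilityInterface):
    def temp_dir(self):
        return "C:\\Temp"
    def write_message(self, text):
        sys.stdout.write(text + "\r\n")

def application(platform):
    # All platform-related calls are routed through the portability interface.
    platform.write_message("scratch files go in " + platform.temp_dir())

application(WindowsAdapter() if sys.platform == "win32" else PosixAdapter())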
In spite of all the shortcomings of the state-of-the-art reuse techniques, it is the experience of several organisations
that most of the factors inhibiting an effective reuse program are non-technical. Some of these factors are the
following: