Software Engineering Unit-I (Se R23 Jntuk)
• In the early days of programming, there were good programmers and bad programmers. The good
programmers knew certain principles (or tricks) that helped them to write good programs, which
they seldom shared with the bad programmers.
• Program writing in later years was similar to a craft. Over the next several years, all good principles
(or tricks) that were discovered by programmers along with research innovations have
systematically been organised into a body of knowledge that forms the discipline of software
engineering.
• Software engineering principles are now being widely used in industry, and new principles are still
emerging at a very rapid rate, making this discipline highly dynamic.
• Software engineering practices have proven to be indispensable (very essential/very necessary) to
the development of large software products, though exploratory styles are often used successfully to
develop small programs such as programs written by students as classroom assignments.
1.1.3 A Solution to the Software Crisis
• Software engineering appears to be one of the options available to tackle the present software crisis.
• Organisations are spending increasingly larger portions of their budgets on software compared to
hardware. Among all the symptoms of the present software crisis, the trend of increasing software
costs is probably the most vexing.
• In the early days, when you bought any hardware product, the essential software that ran on it
came free with it.
• At present, many organisations are spending much more on software than on hardware. If this trend
continues, we might soon have an amusing scenario, like in future when you buy any software
product the hardware on which the software runs would come free with the software!!!
• There are many factors that cause the software crisis; the important ones are:
i) rapidly increasing problem size
ii) lack of adequate training in software engineering techniques
iii) increasing skill shortage, and low productivity improvements.
• What is the remedy? It is believed that a satisfactory solution to the present software crisis is to
spread the software engineering practices among the developers, coupled with further
advancements to the software engineering discipline itself.
1.2 SOFTWARE DEVELOPMENT PROJECTS
Before discussing the various types of development projects that are undertaken by software
development companies, let us first understand the important ways in which professional software differs
from toy software, such as that written by a student in his first programming assignment.
Programs:
• Many toy programs are developed by individuals, such as students for their classroom
assignments and hobbyists for their personal use.
• These are usually small in size and support limited functionalities. Further, the author of a program
is usually the sole user of the software and maintains the code himself.
• Such toy software therefore usually lacks a good user interface and proper documentation, and
may have poor maintainability, efficiency, and reliability.
• Toy software does not have any supporting documents such as a users' manual, maintenance
manual, design document, test documents, etc.
• We call such toy software programs.
Products:
• Professional software usually has multiple users and, therefore, has a good user interface, proper
users' manuals, and good documentation support.
• A software product has a large number of users, so it is systematically designed, carefully
implemented, and thoroughly tested.
• Professionally written software usually consists not only of the program code but also of all the
associated documents, such as the requirements specification document, design document, test
document, users' manuals, etc.
• A further difference is that professional software is often too large and complex to be developed by
any single individual. It is usually developed by a group of developers working in a team.
Even though software engineering principles are primarily intended for use in development of professional
software, many results of software engineering can effectively be used for development of small programs
as well. However, when developing small programs for personal use, rigid adherence to software
engineering principles is often not worthwhile.
1.2.2 Types of Software Development Projects
A software development company has a large number of on-going projects. Each of these projects may be
classified as either a software product development project or a services-type project.
Customization projects:
• In this type of service project, an existing software product is modified to quickly fulfil the
specific requirements of a customer.
• At present, hardly anyone develops a software project from scratch by writing all the program
code; mostly, software is developed by customising some existing software.
• For example, to develop software to automate the payroll generation activities of an educational
institute, the vendor may customise existing software that might have been developed earlier for a
different client or educational institute.
• Due to heavy reuse of code, it has now become possible to develop large software systems in a
short period of time.
Maintenance projects:
• In this type of service project, technical support is provided for already deployed software.
• Software Maintenance refers to the process of modifying and updating a software system after it
has been delivered to the customer.
• This involves fixing bugs, adding new features, and adapting to new hardware or software
environments.
• Effective maintenance is crucial for extending the software’s lifespan and aligning it with evolving
user needs.
Outsourced projects:
• Development of outsourced software is a type of software service.
• Outsourced software projects may arise for many reasons. Sometimes, it can make good
commercial sense for a company developing a large project to outsource some parts of its
development work to other companies.
• The reasons behind such a decision may be many. For example, a company might consider the
outsourcing option, if it feels that it does not have sufficient experts to develop some specific parts
of the software; or it may determine that some parts can be developed cost-effectively by another
company.
• Since an outsourced project is a small part of some larger project, outsourced projects are usually
small in size and need to be completed within a few weeks or months.
• The block labelled sensory organs represents the five human senses: sight, hearing, touch, smell, and
taste.
• The block labelled neuromotor organs represents the hands, fingers, feet, etc.
We now elaborate on this human cognition model under the following headings:
1) Short-term memory
2) Long-term memory
3) Item
4) Evidence of short-term memory
5) The magical number 7
Short-term memory:
• The short-term memory, as the name itself suggests, can store information for a short while, usually
up to a few seconds, and at most for a few minutes. The short-term memory is also sometimes
referred to as the working memory.
• The information stored in the short-term memory is immediately accessible for processing by the
brain. The short-term memory of an average person can store up to seven items; but in extreme
cases it can vary anywhere from five to nine items (7 ± 2).
• It should be clear that the short-term memory plays a very crucial part in the human cognition
mechanism. All information collected through the sensory organs is first stored in the short-term
memory and used by the brain to drive the neuromotor organs.
Long-term memory:
• Unlike the short-term memory, the size of the long-term memory is not known to have a definite
upper bound. The size of the long-term memory can vary from several million items to several billion
items, largely depending on how actively a person exercises his mental faculty.
• An item once stored in the long-term memory, is usually retained for several years. But, how do
items get stored in the long-term memory?
• Items present in the short-term memory can get stored in the long-term memory either through a
large number of refreshments (repetitions) or by forming links with already existing items in the
long-term memory.
• For example, you possibly remember your own telephone number because you have repeated
(refreshed) it a large number of times in your short-term memory.
Item:
• An item is any set of related information. According to this definition, a character such as ‘a’ or a
digit such as ‘5’ can each be considered as an item. A word, a sentence, a story, or even a picture can
each be a single item. Each item normally occupies one place in memory.
• The phenomenon of forming one item from several items is referred to as chunking by psychologists.
For example, if you are given the binary number 110010101001—it may prove very hard for you to
understand and remember.
• But, the octal form of the number 6251 (i.e., the representation (110)(010)(101)(001)) may be much
easier to understand and remember since we have managed to create chunks of three items each.
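The chunking idea can be sketched in a short Python snippet (an illustration added here, not part of the original text; the function name is hypothetical):

```python
# Chunking illustration: a 12-bit binary string is hard to remember as 12
# separate items, but grouping it into 3-bit chunks yields just 4 octal digits.
def chunk_binary_to_octal(bits: str) -> str:
    """Group a binary string into 3-bit chunks; convert each chunk to an octal digit."""
    assert len(bits) % 3 == 0, "pad the string so its length is a multiple of 3"
    chunks = [bits[i:i + 3] for i in range(0, len(bits), 3)]
    return "".join(str(int(chunk, 2)) for chunk in chunks)

print(chunk_binary_to_octal("110010101001"))  # 12 items become 4: "6251"
```

Each 3-bit chunk occupies a single "place" in memory, which is why 6251 is far easier to retain than the raw bit string.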
Evidence of short-term memory:
• Evidence of short-term memory manifests itself in many of our day-to-day experiences. As an
example of the short-term memory, consider the following situation. Suppose, you look up a number
from the telephone directory and start dialling it. If you find the number to be busy, you would dial
the number again after a few seconds—in this case, you would be able to do so almost effortlessly
without having to look up the directory. But, after several hours or days since you dialled the number
last, you may not remember the number at all, and would need to consult the directory again.
The magical number 7:
• Miller called the number seven the magical number. If a person deals with seven or fewer items of
unrelated information at a time, these are easily accommodated in the short-term memory, and he
can easily understand them. As the number of items to deal with increases beyond seven, it
becomes exceedingly difficult to understand them.
1.3.2 Principles Deployed by Software Engineering to Overcome Human Cognitive Limitations
• Two important principles that are deployed by software engineering to overcome the problems
arising due to human cognitive limitations are:
1) abstraction
2) decomposition
Abstraction:
• Abstraction is a technique for managing the complexity of computer systems.
• Abstraction means displaying only the essential information and hiding the details. It works by
establishing a level of simplicity on which a person interacts with the system, suppressing the more
complex details below the current level.
• Data abstraction refers to providing only essential information about the data to the outside world,
hiding the background details or implementation. Consider a real-life example: a man driving a car
knows only how to drive the car, not the working mechanism of the car.
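A minimal Python sketch of this data abstraction idea, assuming a hypothetical Car class whose internal mechanism is hidden behind a simple driving interface:

```python
# Data abstraction sketch: the driver interacts with a simple interface
# (start), while the internal mechanism stays hidden below that level.
class Car:
    def __init__(self):
        self._engine_rpm = 0          # internal detail, hidden from the driver

    def start(self):
        self._ignite_fuel_mixture()   # complex internals suppressed below this level
        return "engine running"

    def _ignite_fuel_mixture(self):   # implementation detail (underscore convention)
        self._engine_rpm = 800

driver_view = Car()
print(driver_view.start())  # the driver needs only this, not the mechanism
```

The leading-underscore names follow Python's convention for implementation details that callers should not depend on.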
Decomposition:
• It is another important principle available to the software engineer to handle problem
complexity.
• This principle is used extensively by several software engineering techniques to contain the
exponential growth of the perceived problem complexity.
• The decomposition principle is popularly known as the divide and conquer principle. It
advocates decomposing the problem into many small independent parts.
• The small parts are then taken up one by one and solved separately. The idea is that each small part
would be easy to grasp and understand and can be easily solved. The full problem is solved when all
the parts are solved.
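As a rough illustration of the decomposition principle, the following Python sketch (with hypothetical helper names) splits a problem into small independent parts, solves each separately, and combines the results:

```python
# Divide and conquer sketch: a problem (summing a long list) is decomposed
# into small independent parts, each part is solved separately, and the
# partial results are combined to solve the full problem.
def solve_part(part):
    return sum(part)                  # each small part is easy to grasp and solve

def solve(problem, part_size=3):
    parts = [problem[i:i + part_size] for i in range(0, len(problem), part_size)]
    return sum(solve_part(p) for p in parts)   # combine the partial solutions

print(solve([1, 2, 3, 4, 5, 6, 7, 8]))  # prints 36
```

Because the parts are independent, each could even be handed to a different developer (or processor) and solved concurrently.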
Summary of the shortcomings of the exploratory style of software development:
We briefly summarise the important shortcomings of using the exploratory development style to develop a
professional software:
• The foremost difficulty is the exponential growth of development time and effort with problem size;
developing large-sized software becomes almost impossible using this style of development.
• The exploratory style usually results in unmaintainable code. The reason is that any code
developed without proper design and documentation tends to be highly unstructured and of poor
quality, so whenever changes are required, maintenance of the code becomes very difficult.
• It becomes very difficult to use the exploratory style in a team development environment. Most
modern software mandates huge development effort, necessitating team effort for developing large
software. Team development is indispensable (very essential/very necessary) for developing
modern software.
• It becomes very difficult to partition the work among a set of developers who can work concurrently.
2. High-level Language Programming: Computers became faster with the introduction of semiconductor
technology in the early 1960s. Faster semiconductor transistors replaced the prevalent vacuum tube-based
circuits in a computer. With the availability of more powerful computers, it became possible to solve larger
and more complex problems. At this time, high-level languages such as FORTRAN, ALGOL, and COBOL were
introduced. This considerably reduced the effort required to develop software and helped programmers to
write larger programs. Each high-level programming construct written, in effect, corresponds to several
machine instructions. However, programmers were still using the exploratory style of software
development.
3. Control Flow-based Design: A program’s control flow structure indicates the sequence in which
the program’s instructions are executed. In order to help develop programs having good control flow
structures, the flowcharting technique was developed. Even today, the flowcharting technique is
being used to represent and design algorithms.
4. Data Structure-oriented Design: Computers became even more powerful with the advent of Integrated
Circuits (ICs) in the early 1970s. These could now be used to solve more complex problems. Software
developers were tasked to develop larger and more complicated software, which often required writing in
excess of several tens of thousands of lines of source code. It was realised that it is much more important to
pay attention to the design of the important data structures of the program than to the design of its control
structure. Design techniques based on this principle are called Data Structure-oriented Design.
Example: Jackson’s Structured Programming (JSP) technique developed by Michael Jackson (1975). In
JSP methodology, a program’s data structure is first designed using the notations for sequence, selection,
and iteration. The JSP methodology provides an interesting technique to derive the program structure from
its data structure representation.
5. Data Flow-oriented Design: As computers became still faster and more powerful with the introduction of
very large scale integrated (VLSI) Circuits and some new architectural concepts, more complex and
sophisticated software were needed to solve further challenging problems. Therefore, software developers
looked out for more effective techniques for designing software and Data Flow-Oriented Design techniques
were proposed. The functions are also called processes, and the data items that are exchanged between
the different functions are represented in a diagram known as a Data Flow Diagram (DFD).
6. Object-oriented Design: Object-oriented design technique is an intuitively appealing approach, where the
natural objects (such as employees, etc.) relevant to a problem are first identified and then the relationships
among the objects, such as composition, reference, and inheritance, are determined. Each object essentially
acts as a data hiding (also known as data abstraction) entity. Object-oriented techniques have gained
widespread acceptance because of their simplicity, the scope for code and design reuse, promise of lower
development time, lower development cost, more robust code, and easier maintenance. Let us examine the
current challenges in designing software:
1) Program sizes are further increasing as compared to what was being developed a decade back.
2) Much of present-day software is required to work in a client-server environment through web
browser-based access (called web-based software).
1.5 NOTABLE CHANGES IN SOFTWARE DEVELOPMENT PRACTICES
• Before we discuss the details of various software engineering principles, it is worthwhile to examine
the differences between an exploratory style of software development and modern software
engineering practices.
Exploratory style of software development versus modern software engineering practices:
• Error handling: The exploratory style is based on error correction (build and fix), whereas software
engineering techniques are based on the principle of error prevention. Inherent in the software
engineering principles is the realisation that it is much more cost-effective to prevent errors from
occurring than to correct them as and when they are detected.
• Role of coding: In the exploratory style, coding was considered synonymous with software
development; this naive way of developing software believed in developing a working system as
quickly as possible and then successively modifying it until it performed satisfactorily. In the modern
style, coding is regarded as only a small part of the overall software development activities; there are
several development activities, such as design and testing, which may demand much more effort
than coding.
• Requirements phase: The exploratory style has no requirements phase, whereas in modern practice
a lot of attention is paid to requirements specification. Significant effort is devoted to developing a
clear and correct specification of the problem before any development activity starts.
• Design phase: The exploratory style has no design phase, whereas in modern practice there is a
distinct design phase in which standard design techniques are employed to yield coherent and
complete design models.
• Reviews: The exploratory style has no periodic reviews, whereas in modern practice periodic reviews
are carried out during all stages of the development process. The main objective of carrying out
reviews is phase containment of errors, i.e., to detect and correct errors as soon as possible.
• Testing phase: The exploratory style has no testing phase, whereas in modern practice software
testing has become very systematic and standard testing techniques are available. Testing has also
become all-encompassing, as test cases are developed right from the requirements specification
stage.
• Visibility: The exploratory style has no distinct phase activities, whereas modern practice gives
better visibility of the software through the various developmental activities.
• Planning: The exploratory style has no planning phase, whereas in modern practice projects are
thoroughly planned. The primary objective of project planning is to ensure that the various
development activities take place at the correct time and that no activity is halted for want of some
resource.
• Metrics: The exploratory style uses no metrics, whereas in modern practice several metrics of the
products and the product development activities are collected to help in software project
management and software quality assurance.
1.6 COMPUTER SYSTEMS ENGINEERING: different stages of computer system engineering are:
Feasibility study
• The main focus of the feasibility study stage is to determine whether it would be financially and
technically feasible to develop the software.
• The feasibility study involves several activities, such as the collection of basic information relating to
the software (the data to be input, how the input data would be processed, and the outputs produced
from the processed data), as well as various constraints on the development.
Requirements analysis and specification
• The aim of the requirements analysis and specification phase is to understand the exact
requirements of the customer and to document them properly.
Hardware software partitioning:
• One of the important stages in systems engineering is the stage in which a decision is made regarding
the parts of the problem that are to be implemented in hardware and the ones that would be
implemented in software. This has been represented by the box captioned hardware-software
partitioning in Figure 1.14.
• It is well known that all living organisms undergo a life cycle. For example, when a seed is planted, it
germinates, grows into a full tree, and finally dies.
• Based on this concept of a biological life cycle, the term software life cycle has been defined to
denote the different stages (or phases) through which a software product passes.
• The stage where the customer feels a need for the software and forms rough ideas about the
required features is known as the inception stage.
• Starting with the inception stage, a software evolves through a series of identifiable stages (also
called phases) on account of the development activities carried out by the developers, until it is fully
developed and released to the customers.
• Once installed and made available for use, the users start to use the software. This signals the start
of the operation (also called maintenance) phase.
• As the users use the software, they request fixes for any failures that they encounter and
suggest several improvements and modifications to the software.
• Thus, the maintenance phase usually involves continually making changes to the software to
accommodate the bug-fix and change requests from the user.
• Finally, the software is retired when the users no longer find it useful, due to reasons such as the
availability of new software with improved features and more efficient working.
Software development life cycle (SDLC) model
• A software development life cycle (SDLC) model (also called software life cycle model (SLCM) and
software development process model (SDPM)) describes the different activities that need to be
carried out for the software to evolve in its life cycle.
• A development process may not only describe the various activities that are carried out over the life
cycle but also prescribe a specific methodology for carrying out the activities, and recommend the
specific documents and other artifacts (work products) that should be produced at the end of each phase.
Process versus methodology
• Definition: A software process is the set of steps or activities that are used during the development
of software. A methodology is a way to solve a particular problem in a structured way.
• Focus: A process covers all the steps, from the start to the completion of the project, that are
needed during development. A methodology focuses more on a particular phase of the development.
• It is not enough for an organisation to just have a well-defined development process, but the
development process needs to be properly documented.
• We have identified a few important problems that may crop up when a development process is not
adequately documented.
• Those problems are as follows:
1. A documented process model ensures that every activity in the life cycle is accurately
defined. Without proper documentation, the activities and their ordering tend to be loosely
defined, which may lead to confusion and misinterpretation.
2. It is easier to tailor a documented process model, when it is required to modify certain
activities or phases of the life cycle.
3. A documented process model, as we discuss later, is a mandatory requirement of the modern
quality assurance standards such as ISO 9000 and SEI CMM, otherwise it would not qualify
for accreditation by any of the quality certifying agencies.
4. In the absence of a quality certification for the organisation, the customers would be
suspicious of its capability of developing a quality software and the organisation might find
it difficult to win tenders for software development.
Phase entry and exit criteria
• A good SDLC model, besides clearly identifying the different phases in the life cycle, should
unambiguously define the entry and exit criteria for each phase.
• The phase entry (or exit) criteria are usually expressed as a set of conditions that need to be
satisfied for the phase to start (or to complete).
• When the phase entry and exit criteria are not well-defined, it becomes very difficult for the project
manager to determine the exact status of the development and track the progress of the project.
• This often leads to the problem popularly known as the 99 per cent complete syndrome.
1.8 WATERFALL MODEL AND ITS EXTENSIONS
1. Classical Waterfall Model
2. Iterative Waterfall Model
3. V-Model
4. Prototyping Model
5. Incremental Development Model
6. Evolutionary Model
• Observe from the figure that, among all the life cycle phases, the maintenance phase normally
requires the maximum effort; on average, 60 per cent of the total effort is spent on the maintenance phase.
• Next, among the development phases, the integration and system testing phase requires the
maximum effort in a typical development project.
Feasibility study
• The main focus of the feasibility study stage is to determine whether it would be financially and
technically feasible to develop the software.
• The feasibility study involves several activities, such as the collection of basic information relating to
the software (the data to be input, how the input data would be processed, and the outputs produced
from the processed data), as well as various constraints on the development.
• These collected data are analysed to perform the following:
1. Development of an overall understanding of the problem: We need to understand the details of
the various requirements of the customer, such as the screen layouts, graphical user interface (GUI),
and algorithms.
2. Formulation of various possible strategies for solving the problem: In this activity, various
possible high-level solution schemes to the problem are determined.
3. Evaluation of the different solution strategies: The different identified solution schemes are
analysed to evaluate their benefits and shortcomings. Such evaluation often requires making
approximate estimates of the resources required, cost of development, and development time
required. The different solutions are compared, and the best solution is finalised. Once the best
solution is identified, all activities in the later phases are carried out as per this solution.
Requirements analysis and specification
• The aim of the requirements analysis and specification phase is to understand the exact
requirements of the customer and to document them properly.
• This phase consists of two distinct activities:
1. requirements gathering and analysis.
2. requirements specification.
Requirements gathering and analysis:
• The goal of the requirements gathering activity is to collect all relevant information
regarding the software to be developed from the customer, with a view to clearly
understanding the requirements.
• The goal of the requirements analysis activity is to weed out the incompleteness and
inconsistencies in these gathered requirements.
Requirements specification:
• After the requirements gathering and analysis activities are complete, the identified
requirements are documented. This document is called the software requirements
specification (SRS) document.
• The SRS document is written using end-user terminology. This makes the SRS document
understandable to the customer. Therefore, understandability of the SRS document is an
important issue.
• The SRS document normally serves as a contract between the development team and the
customer. Any future dispute between the customer and the developers can be settled by
examining the SRS document.
• The SRS document is an important document which must be thoroughly understood by the
development team and reviewed jointly with the customer.
Design
• The goal of the design phase is to transform the requirements specified in the SRS document into a
structure that is suitable for implementation in some programming language.
• In technical terms, during the design phase the software architecture is derived from the SRS
document.
• Two different design approaches are popularly being used, they are
1. procedural design approach.
2. object-oriented design approach.
Procedural design approach:
• The traditional procedural design approach is in use in many software development projects
at the present time.
• This traditional design technique is based on data flow modelling.
• It consists of two important activities:
1. Structured analysis: During structured analysis, the functional requirements specified in
the SRS document are decomposed into subfunctions and the dataflow among these
subfunctions is analysed and represented diagrammatically in the form of DFDs.
2. Structured design: Structured design consists of two main activities: architectural design
(also called high-level design) and detailed design (also called Low-level design). High-
level design involves decomposing the system into modules and representing the
interfaces and the invocation relationships among the modules. During the detailed
design activity, internals of the individual modules such as the data structures and
algorithms of the modules are designed and documented.
Object-oriented design approach:
• First, identify the various objects (real-world entities) occurring in the problem.
• Then, identify the relationships among the objects. For example, the objects in a payroll
software product may be employees, managers, the payroll register, departments, etc.
• The OOD approach is credited to have several benefits such as lower development time and
effort, and better maintainability of the software.
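A minimal, hypothetical Python sketch of the payroll objects mentioned above, showing inheritance (a Manager is an Employee) and composition (the register holds employees); the class names and the allowance figure are assumptions for illustration only:

```python
# OOD sketch for a payroll system: identify objects (Employee, Manager,
# PayrollRegister) and the relationships among them.
class Employee:
    def __init__(self, name, basic_pay):
        self.name = name
        self.basic_pay = basic_pay

    def monthly_pay(self):
        return self.basic_pay

class Manager(Employee):                 # inheritance relationship
    def monthly_pay(self):
        return self.basic_pay + 500      # assumed managerial allowance

class PayrollRegister:                   # composition relationship
    def __init__(self):
        self.employees = []

    def add(self, employee):
        self.employees.append(employee)

    def total_payout(self):
        return sum(e.monthly_pay() for e in self.employees)

register = PayrollRegister()
register.add(Employee("Asha", 3000))
register.add(Manager("Ravi", 4000))
print(register.total_payout())  # 3000 + 4500 = 7500
```

Each object hides its own data behind methods, which is the data abstraction benefit the OOD approach is credited with.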
Coding and unit testing
• The purpose of the coding and unit testing phase is to translate a software design into source code
and to ensure that individually each function is working correctly.
• The coding phase is also called the implementation phase, since the design is implemented into a
workable solution in this phase.
• Each component of the design is implemented as a program module.
• The outcome of this phase is a set of program modules that have been individually developed,
unit tested to confirm that they work correctly, and properly documented.
Integration and system testing
• Integration of different modules is undertaken soon after they have been coded and unit tested.
• During the integration and system testing phase, the different modules are integrated in a planned
manner. Various modules making up a software are almost never integrated in one shot.
• Integration of the various modules is normally carried out incrementally over a number of steps.
During each integration step, previously planned modules are added to the partially integrated
system and the resultant system is tested.
• Finally, after all the modules have been successfully integrated and tested, the full working system is
obtained. System testing is carried out on this fully working system.
• System testing usually consists of three different kinds of testing activities:
1. α-testing: α-testing is the system testing performed by the development team.
2. β-testing: This is the system testing performed by a friendly set of customers.
3. Acceptance testing: After the software has been delivered, the customer performs system
testing to determine whether to accept the delivered software or to reject it.
Maintenance
• The total effort spent on the maintenance of a typical software product during its operation phase is
usually far greater than that required for developing the software itself. Many studies confirm this and
indicate that the ratio of development effort to maintenance effort is roughly 40:60.
• Maintenance is required in the following three types of situations:
• Corrective maintenance: This type of maintenance is carried out to correct errors that were not
discovered during the product development phase.
• Perfective maintenance: This type of maintenance is carried out to improve the performance of the
system, or to enhance the functionalities of the system based on customer’s requests.
• Adaptive maintenance: Adaptive maintenance is usually required for porting the software to work
in a new environment. For example, porting may be required to get the software to work on a new
computer platform or with a new operating system.
Disadvantages (Shortcomings) of the classical waterfall model: The classical waterfall model is a very
simple and intuitive model. However, it suffers from several shortcomings. Let us identify some of the
important shortcomings of the classical waterfall model.
• No feedback paths: In classical waterfall model, the evolution of a software from one phase to the
next is analogous to a waterfall. Just as water in a waterfall after having flowed down cannot flow
back, once a phase is complete, the activities carried out in it and any artifacts produced in this
phase are final and are closed for any rework.
• Difficult to accommodate change requests: The customers’ requirements may keep on changing
with time. But, in this model it becomes difficult to accommodate any requirement change requests
made by the customer after the requirements specification phase is completed.
• Inefficient error corrections: This model defers integration of code and testing tasks until it is very
late when the problems are harder to resolve.
• No overlapping of phases: This model recommends that the phases be carried out strictly
sequentially: a new phase can start only after the completion of the previous phase.
Is the classical waterfall model useful at all?
• We have already pointed out that it is hard to use the classical waterfall model in real projects.
• In any practical development environment, as the software takes shape, several iterations through
the different waterfall stages become necessary for correction of errors committed during various
phases.
• Therefore, the classical waterfall model is hardly usable for software development. But, as
suggested by Parnas [1972], the final documents for the product should be written as if the product
had been developed using a pure classical waterfall model.
• No matter how careful a programmer may be, he might end up committing some mistake or other
while carrying out a life cycle activity. These mistakes result in errors (also called faults or bugs) in
the work product.
• It is advantageous to detect the errors in the same phase in which they take place, since early
detection of bugs reduces the effort and time required for correcting those errors.
Phase overlap
• An important reason for phase overlap is that usually the work required to be carried out in a phase
is divided among the team members. Some members may complete their part of the work earlier
than other members. If strict phase transitions are maintained, then the team members who
complete their work early would have to idle while waiting for the phase to complete; they are said
to be in a blocking state. Clearly, this is a wastage of resources and a source of cost escalation and
inefficiency. As a result, in real projects the phases are allowed to overlap: once a developer
completes his work assignment for a phase, he proceeds to start the work for the next phase
without waiting for all his team members to complete their respective work allocations.
1.8.3 V-Model
1) The verification phase:
• The verification phase comprises several levels. These levels are interdependent, i.e. if a change is
made at a higher level, something will also change at the levels below.
• An important step in the verification phase is gathering the requirements. The requirements must
be analysed in detail and clearly defined.
• In the system analysis step, it is clarified whether it is possible to achieve all these requirements.
• A design is developed for the entire system and, depending on the type of project, possible graphic
designs are also developed.
• The verification phase ends with the module design. This is the lowest level at which it is determined
how the functions will be implemented in your product.
2) The coding phase: In the coding phase, the product is programmed and the software is created.
3) The validation phase: Here the bottom-up principle is applied. This is the right-hand side of the V: you
start with the tests at unit level and work your way up to the tests at acceptance level.
• We are now on the right-hand side of the V. The tests found on this side always correspond to the
levels on the left-hand side of the V; therefore, they are opposite each other in the diagram.
• The unit test is at the lowest level. This is where the individual components are analysed.
• This is followed by the integration test. This checks that the modules work together and that the
data exchange between the components works smoothly.
• The system test is the latest point at which the customer is also involved; several test runs of the
entire system are carried out.
• The last phase is acceptance testing. Users should be invited to take part in the test in order to
obtain a test result that is as realistic as possible. With this last phase, software development is
finished according to the V-model.
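As a small illustration of the two lowest test levels on the right-hand side of the V, the sketch below uses Python's built-in unittest. The two functions under test (discount, invoice_total) are hypothetical examples, not from the text:

```python
# Sketch of the two lowest V-model test levels using Python's unittest.
# The functions under test (discount, invoice_total) are hypothetical.
import unittest

def discount(price, percent):
    """A single component: apply a percentage discount to one price."""
    return round(price * (1 - percent / 100), 2)

def invoice_total(prices, percent):
    """Integrates discount(): total of several discounted prices."""
    return round(sum(discount(p, percent) for p in prices), 2)

class UnitLevel(unittest.TestCase):
    """Unit test: an individual component analysed in isolation."""
    def test_discount(self):
        self.assertEqual(discount(100.0, 10), 90.0)

class IntegrationLevel(unittest.TestCase):
    """Integration test: checks that components exchange data correctly."""
    def test_invoice_total(self):
        self.assertEqual(invoice_total([100.0, 50.0], 10), 135.0)

# Run unit-level tests first, then integration-level, mirroring the V.
loader = unittest.defaultTestLoader
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(UnitLevel))
suite.addTests(loader.loadTestsFromTestCase(IntegrationLevel))
unittest.TextTestRunner(verbosity=2).run(suite)
```

Running the suite exercises the unit level before the integration level, the same bottom-up order the validation phase prescribes.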
Advantages of V-model
The important advantages of the V-model over the iterative waterfall model are as follows:
• In the V-model, much of the testing activities (test case design, test planning, etc.) are carried out in
parallel with the development activities.
• The test team is kept occupied throughout the development cycle, in contrast to the waterfall
model where the testers are active only during the testing phase. This leads to more efficient
manpower utilisation.
• In the V-model, the test team is associated with the project from the beginning. Therefore they build
up a good understanding of the development artifacts, and this in turn, helps them to carry out
effective testing of the software.
Disadvantages of V-model
• Being a derivative of the classical waterfall model, this model inherits most of the weaknesses of the
waterfall model.
Difference between the waterfall model and the V-model:
• Testing activities: In the waterfall model, testing activities start after the development activities are
over; in the V-model, testing activities start with the first stage.
• Testing during development: In the waterfall model, it is not possible to test the software during its
development; in the V-model, the software can be tested during its development.
Prototyping Model
• Prototype development starts with an initial requirements gathering phase. A quick design is carried
out and a prototype is built.
• The developed prototype is submitted to the customer for evaluation. Based on the customer
feedback, the requirements are refined and the prototype is suitably modified.
• This cycle of obtaining customer feedback and modifying the prototype continues till the customer
approves the prototype.
Iterative development:
• Once the customer approves the prototype, the actual software is developed using the iterative
waterfall approach.
• In spite of the availability of a working prototype, the SRS document usually still needs to be
developed.
Advantages of prototype model:
• It is advantageous to use the prototyping model for development of the graphical user interface
(GUI) part of an application. Through the use of a prototype, it becomes easier to illustrate the input
data formats, messages, reports, and the interactive dialogs to the customer.
• The prototyping model is especially useful when the exact technical solutions are unclear to the
development team.
• An important reason for developing a prototype is that it is impossible to “get it right” the first time.
• This model is the most appropriate for projects that suffer from risks arising from technical
uncertainties and unclear requirements. A constructed prototype helps overcome these risks.
Disadvantages of the prototype model:
• The prototype model can increase the cost of the project development.
• The prototyping model is ineffective for risks identified later during the development cycle.
Incremental Development Model
• In the incremental life cycle model, the requirements of the software are first broken down into
several modules or features that can be incrementally constructed and delivered.
• This has been pictorially depicted in Figure 2.7. At any time, a plan is made only for the next
increment and no long-term plans are made. Therefore, it becomes easier to accommodate change
requests from the customers.
• The development team first develops the core features of the system.
• Once the initial core features are developed, these are refined into increasing levels of capability by
adding new functionalities in successive versions.
• Each incremental version is usually developed using an iterative waterfall model of development.
The incremental model is schematically shown in Figure 2.8.
• As each successive version of the software is constructed and delivered to the customer, feedback
is obtained on the delivered version and incorporated into the next version.
• Each delivered version of the software incorporates additional features over the previous version.
• After the requirements gathering and specification, the requirements are split into several
increments.
• Starting with the core (increment 1), in each successive iteration, the next increment is constructed
using a waterfall model of development and deployed at the customer site.
• After the last increment (shown as increment n) has been developed and deployed at the client
site, the full software is developed and deployed.
Difference between the incremental and evolutionary models:
• Focus: Incremental: delivering a complete system by adding features incrementally. Evolutionary:
delivering a core version and evolving it based on user feedback.
• Requirements: Incremental: defined upfront, i.e. development activities start only after the
completion of the requirements specification. Evolutionary: evolve based on feedback.
• Scope: Incremental: fixed, with minor deviations possible. Evolutionary: flexible, evolves based on
feedback.
• Deliverables: Incremental: each increment is a working version of the system. Evolutionary: each
stage builds upon the previous one.
1.9 RAPID APPLICATION DEVELOPMENT (RAD)
The major goals of the RAD model are as follows:
1) To decrease the time taken and the cost incurred to develop software systems.
2) To limit the costs of accommodating change requests.
3) To reduce the communication gap between the customer and the developers.
• The critical feature of this model is the use of powerful development tools and techniques.
• A software project is broken down into small modules wherein each module can be assigned
independently to separate teams.
• The development team always includes a customer representative to clarify the requirements and to
reduce the communication gap between the customer and the developers.
• Multiple teams work in parallel on developing the software system using the RAD model.
• These modules can finally be combined to form the final product.
• Development of each module involves the various basic steps as in the waterfall model, i.e.
requirements analysis, design, coding, and then testing, etc., as shown in the above figure.
• Another striking feature of this model is the short delivery period: the time frame for delivery
(time-box) is generally 60-90 days.
• This process involves building a rapid prototype, delivering it to the customer, and taking feedback
from the customer. After validation by the customer, the SRS document is developed and the design
is finalised.
• This model has the features of both prototyping and evolutionary models.
• It deploys an evolutionary model to obtain and incorporate customer feedback on incrementally
delivered versions.
• In this model prototypes are constructed, and incrementally the features are developed and
delivered to the customer.
• But unlike the prototyping model, the prototypes are not thrown away but are enhanced and used in
the software construction.
• The customers usually suggest changes to a specific feature only after they have used it. Since the
features are delivered in small increments, such feedback can be obtained early.
• The decrease in development time and cost, and at the same time an increased flexibility to
incorporate changes, are achieved in the RAD model in two ways: minimal use of planning and
heavy reuse of existing code through rapid prototyping.
• The RAD model emphasises code reuse as an important mechanism for completing a project faster
and reducing the development cost.
Applicability of RAD (project types for which RAD is suitable):
Customised software: As already pointed out, customised software is developed for a customer by
adapting existing software. In customised software development projects, substantial reuse is usually
made of code from pre-existing software.
Non-critical software: The RAD model suggests that a quick and dirty software should first be developed
and later this should be refined into the final software for delivery.
Highly constrained project schedule: RAD aims to reduce development time. Naturally, for projects with
very aggressive time schedules, RAD model should be preferred.
Large software: Only for software supporting many features (large software) can incremental development
and delivery be meaningfully carried out.
Disadvantages of RAD (situations where RAD is not suitable):
Generic products (wide distribution): Software products are generic in nature and usually have wide
distribution. For such systems, optimal performance and reliability are more important in a competitive
market. The RAD model of development may not yield systems having optimal performance and reliability.
Requirement of optimal performance and/or reliability: For certain categories of products, optimal
performance or reliability is required. Examples of such systems include an operating system (high reliability
required) and a flight simulator software (high performance required). If such systems are to be developed
using the RAD model, the desired product performance and reliability may not be realised.
Lack of similar products: If a company has not developed similar software, then it is not possible to reuse
the existing artifacts, and the use of RAD model becomes meaningless.
Monolithic entity: For certain software, especially small-sized software, it may be hard to divide the
required features into small parts that can be incrementally developed and delivered. In this case, it
becomes difficult to develop a software incrementally.
1.10 AGILE DEVELOPMENT MODELS
• The agile model could help a project to adapt to change requests quickly.
• Thus, a major aim of the agile model is to facilitate quick project completion. It gives the required
flexibility so that activities that may not be necessary for a specific project can be easily removed.
Also, anything that wastes time and effort is avoided.
• Please note that agile model is an umbrella term used to refer to a group of development
processes. While these processes share certain common characteristics, they do have certain subtle
differences among themselves.
• A few popular agile SDLC models are the following:
1) Crystal
2) Atern (formerly DSDM)
3) Feature-driven development
4) Scrum
5) Extreme programming (XP)
6) Lean software development
7) Unified process
• In an agile model, the requirements are decomposed into many small parts that can be
incrementally developed.
• The agile models adopt an incremental and iterative approach. Each incremental part is developed
over an iteration. Each iteration is intended to be small and easily manageable and lasts for a couple
of weeks only. At a time, only one increment is planned, developed, and then deployed.
• The time to complete an iteration is called a time box. The implication of the term time box is that
the end date for an iteration does not change; that is, the iteration does not exceed its scheduled
time. A central principle of the agile model is the delivery of an increment to the customer after
each time box.
• For establishing close interactions with the customer during development and to gain a clear
understanding of domain-specific issues, each agile project usually includes a customer
representative in the team.
• At the end of each iteration, stakeholders and the customer representative review the progress
made and re-evaluate the requirements.
• Agile models emphasise the use of face-to-face communication in preference to written documents. It
is recommended that the development team size be kept small (5-9 people). This would help the
team members to meaningfully engage in face-to-face communication and lead to a collaborative
work environment.
1.11 SPIRAL MODEL
• This model gets its name from the appearance of its diagrammatic representation that looks like a
spiral with many loops (see Figure 2.12).
• The exact number of loops of the spiral is not fixed and can vary from project to project. The number
of loops shown in Figure 2.12 is just an example.
• Each loop of the spiral is called a phase of the software process. The exact number of phases
through which the product is developed can be varied by the project manager depending upon the
project risks.
• A prominent feature of the spiral model is handling unforeseen risks that can show up after the
project has started.
Phases of the Spiral Model
In this model, each phase is split into four sectors (or quadrants), as shown in Figure 2.12. In the first
quadrant, a few features of the software are identified to be taken up for immediate development, based on
how crucial they are to the overall software development. With each iteration around the spiral (beginning
at the centre and moving outwards), progressively more complete versions of the software get built.
Quadrant 1: The objectives are investigated, elaborated, and analysed. Based on this, the risks involved in
meeting the phase objectives are identified.
Quadrant 2: During the second quadrant, the alternative solutions are evaluated to select the best possible
solution. To be able to do this, the solutions are evaluated by developing an appropriate prototype.
Quadrant 3: At the end of the third quadrant, the identified features have been implemented and the next
version of the software is available.
Quadrant 4: Activities during the fourth quadrant concern reviewing the results of the stages traversed so
far (i.e. the developed version of the software) with the customer and planning the next iteration of the spiral.
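The four quadrants repeated over successive loops can be sketched as a simple control loop. This is a toy illustration only: the feature names and log messages are made up, and the spiral model itself prescribes no code:

```python
# Toy sketch of the spiral model's four quadrants as a loop.
# Feature names and log messages are illustrative only.

def spiral(features):
    log = []
    version = 0
    for feature in features:  # each pass around the spiral is one phase
        # Quadrant 1: objectives investigated, risks identified.
        log.append(f"Q1: objectives and risks identified for {feature}")
        # Quadrant 2: alternatives evaluated via a prototype.
        log.append(f"Q2: prototype built to evaluate solutions for {feature}")
        # Quadrant 3: the identified feature implemented; next version ready.
        version += 1
        log.append(f"Q3: version {version} implements {feature}")
        # Quadrant 4: results reviewed with the customer; next loop planned.
        log.append(f"Q4: customer review of version {version}; next loop planned")
    return log

for entry in spiral(["login", "search"]):
    print(entry)
```

Each pass through the loop corresponds to one loop of the spiral: risks are handled first, and a more complete version of the software emerges at the end of every phase.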
Advantage: For projects having many unknown risks that might show up after the development proceeds,
the spiral model would be the most appropriate development model to follow.
Disadvantage: It is very difficult to use, unless there are knowledgeable and experienced staff in the project.
Also, it is not suitable for the development of outsourced projects.
Difference between various SDLC Models (Waterfall, Agile, Spiral, RAD, V-model, Incremental):
• Development approach: Waterfall is sequential; the Agile, Spiral, RAD, V, and Incremental models are
iterative.
• Phases: Waterfall: Planning, Design, Coding, Testing (linear). Agile: Planning, Sprint, Review,
Retrospective (iterative cycles). Spiral: Planning, Risk Analysis, Engineering, Testing (cyclical). RAD:
Testing and Evaluation (repeated iteratively). V-model: Planning, Design, Implementation, Testing,
Deployment (testing in parallel with development). Incremental: divided into increments, each with
Planning, Implementation, Testing.
• Risk management: Waterfall: late mitigation, limited adaptability. Agile: proactive risk management,
adaptability to changes. Spiral: continuous risk assessment, proactive mitigation. RAD: continuous
risk assessment, adaptability to changes. V-model: risk management aligned with phases, moderate
adaptability. Incremental: proactive risk management, adaptability to changes.
• Time-to-market: Waterfall: longer. Agile: faster. Spiral: variable. RAD: faster. V-model: moderate.
Incremental: faster.
• User involvement: Waterfall: limited. Agile: continuous. Spiral: periodic. RAD: continuous. V-model:
periodic. Incremental: continuous.
• Complexity management: Waterfall: linear approach, limited adaptability. Agile: adaptive approach
to changes. Spiral: cyclical, risk-driven approach. RAD: easier to manage, adaptability to changes.
V-model: traceability helps manage complexity. Incremental: adaptive approach to changes.
When to use which SDLC model?
• Requirements stability: Waterfall: stable. Agile: can evolve. Spiral: can evolve. RAD: likely to change
frequently. V-model: moderate stability. Incremental: stable to moderate stability.
• Risk tolerance: Waterfall: low. Agile: moderate. Spiral: high. RAD: moderate. V-model: moderate.
Incremental: moderate to high.
• Time-to-market: Waterfall: moderate. Agile: faster. Spiral: variable. RAD: faster. V-model: moderate.
Incremental: faster.
• Testing approach: Waterfall: sequential testing after the development phases. Agile: continuous
testing throughout iterations. Spiral: continuous testing throughout the spiral phases. RAD:
continuous and collaborative testing. V-model: continuous testing throughout the phases.
Incremental: testing conducted after the completion of increments.
• Change flexibility: Waterfall: limited flexibility. Agile: high flexibility. Spiral: adaptive to changes.
RAD: highly adaptive to changes. V-model: moderate flexibility. Incremental: high flexibility.