software_engineering MCA
MODULE 1
* Software provides one of the most significant products of our day, information,
by doing things like:
=> It transforms personal data [for example, an individual's financial
transactions]
=> It manages business information
• All of these factors have contributed to the evolution of increasingly complex and
sophisticated computer-based systems.
• Sophistication and complexity yield desirable outcomes if the system works as
intended, but can pose serious challenges if the opposite is true.
• Large software companies now employ entire groups of specialists, each of whom
works on a specific aspect of the technology needed to complete a single application.
* The questions once asked of the lone programmer are the same ones asked throughout
the development of contemporary computer systems.
They are,
(1) What factors contribute to the protracted time required to finish software?
(2) What aspects of development add to the staggering costs?
(3) Why is it that we are unable to detect all of the faults in the software before
releasing it to our clients?
(4) Why do we expend so much time and energy keeping existing programmes
running?
(5) Why do we still struggle to accurately measure the amount of progress made
when the software is being built and maintained?
• The fact that these inquiries are being made demonstrates that businesses are
worried about software and its creation process.
• This worry has prompted the growth of software engineering practices.
1.4 Software:
Definition:
“It is a set of instructions that when executed provide desired features, functions
and performance”
(Or)
“It is the data structures that enable the programs to adequately manipulate
information”
(Or)
“It is the documents that describe the operation and use of the programs”
Dust, Vibration, Abuse, Extreme Temperatures, and Time all add up to shorten the
lifespan of hardware components.
This leads to an increase in the failure rate; alternatively, the hardware "wears out." The
"Bathtub Curve" below illustrates this point.
[Figure: the "Bathtub Curve" of hardware failure rate vs. time]
* The failure rate curve for software should take the form of an "idealised curve," as
software is not affected by environmental conditions in any way.
[Figure: software failure rate vs. time, showing the idealised curve and the actual
curve, which spikes with each change due to side effects]
* In the beginning stages of a program's life, significant failure rates are caused by
mistakes that have not yet been found.
* The curve will become flatter once these faults have been rectified (provided that no
new flaws are introduced).
Therefore, it is abundantly evident that:
• Software undergoes deterioration rather than complete obsolescence;
modifications occur throughout its lifespan; and the implementation of changes
introduces errors.
• Before the curve can return to its original steady-state failure rate, another
modification is requested, inducing another surge in the curve and thereby
escalating the failure rate. Consequently, this gradual degradation impacts the
software's quality.
• Hardware components are replaceable with spare parts when they fail; however,
software components are not replaceable. This distinction renders software
maintenance a more challenging endeavour.
• Hardware maintenance is less complex in comparison to software maintenance.
(3) Although the industry is moving toward component – based construction, most
software continues to be custom built
Component reuse is a natural element of the engineering process in the world of
hardware; in the world of software, it is something that has only begun to be achieved
on a broad scale.
Characteristics:
=> Extensive connection with computer hardware
=> Prolonged and intensive use by a number of individuals
=> Structures of data that are complex
=> Multiple connections to the outside world
=> Working in parallel at the same time
=> Embedded software ranges from a microwave oven's keypad control to a variety of
digital functions found in automobiles, including fuel control and the dashboard.
=> Components such as displays and the braking system are encompassed as well.
************************************************************************
1.7 Software Myths:
* Software myths are beliefs about software and the procedure used to build it, and
they can be traced back to the earliest days of computing.
* Myths have a number of characteristics that have contributed to them becoming
insidious [i.e. proceeding inconspicuously but harmfully]
* For example, myths give the impression of being factual claims that are rational [and
sometimes do contain aspects of truth].
Myth 1:
It suffices to commence programme writing with a broad statement of objectives.
The details can be completed later.
Reality:
• A statement that is equivocal, or has two meanings, gives rise to a multitude of
complications.
• However, unambiguous statements can only be generated via consistent and
efficient communication between the client and the developer.
• Consequently, it is not always feasible to formulate statements of requirements
that are exhaustive and consistent.
Myth 2:
If project requirements continually change, the changes can be easily
accommodated because software is flexible.
Reality:
• The proposed change has the potential to induce disruption, such as violent
change or disturbance, which may necessitate the allocation of supplementary
resources and substantial modifications to the design.
Myth 3:
Software engineering will make us create voluminous and unnecessary
documentation and will invariably slow us down.
Reality:
Software engineering focuses on producing high-quality software rather than mere
document creation.
* It identifies a small set of framework activities that can be used on all software
projects, no matter how big or complicated they are.
* The process framework also has a group of umbrella activities that apply throughout
the whole software development process.
Framework Activity:
* It has a set of software engineering actions [a group of related jobs that come together
to make a big piece of software engineering work].
Design is an action in software engineering.
Each action has its own set of tasks that need to be done. These tasks do some of the
work that the action implies.
A Process Framework
(1) Communication:
Constant interaction and cooperation with the client is required, as well as the gathering
of requirements and other relevant tasks.
(2) Planning:
* It describes the technical tasks to be carried out, the risks that are likely, the
resources that will be required, the work products to be produced, and a work schedule.
(3) Software Quality Assurance:
* It defines and carries out the tasks needed to ensure the quality of the software.
(4) Formal Technical Reviews:
* Get rid of any mistakes that you find before moving on to the next action (Or)
activity.
(5) Measurements:
To aid the team in delivering software, it specifies and collects process, project, and
product measures, and it can be used in conjunction with all other framework and
umbrella activities.
(6) Software configuration management:
Effectively handles change impact management for software development.
(7) Reusability management:
* It sets up a way to make parts that can be used again and again.
* It sets rules for reusing work products.
(8) Work product preparation and production:
* It includes the activities needed to produce work products, such as
=> Models
=> Documents
=> Logs, forms and lists
PROCESS MODELS
2.0 Process Models – Definition
* It is a distinct collection of activities, actions, tasks, and work products required to
create high-quality software.
* While not flawless, these process models do provide a helpful framework for software
engineering projects.
=> Communication
=> Planning
=> Modeling
=> Construction and
=> Deployment
Problems encountered in waterfall model:
(1) In practise, projects rarely progress in a linear fashion. Consequently, the team's
progress is muddled by the constant stream of changes.
(2) The consumer often has trouble articulating their needs in detail;
(3) The customer must be patient
* Because of the sequential structure of the waterfall model, a "Blocking State" occurs
when certain members of the project team must wait for others to finish dependent tasks.
* In certain contexts, the waterfall model can be utilized effectively as a process
model, for example when:
=> Requirements are fixed and work is to proceed to completion in a
linear manner
The Incremental Model:
* When the incremental model is used, the first increment is often a "core product,"
meaning that the fundamental needs have been met, but the primary extra features have
not yet been delivered.
* Either the fundamental product is put through extensive testing by the customer, or the
customer uses it.
* As a direct outcome of the evaluation, a strategy for the subsequent increment is
prepared.
=> Communication
=> Planning
=> Modeling
=> Construction
=> Deployment
Unlike a prototype, the incremental model delivers an operational product with each
increment. Early increments are stripped-down versions of the final product, but they
provide real capability to the user and a platform for customer evaluation.
* In contrast to the prototype methodology, the incremental model focuses on adapting
the original product to new circumstances.
* This strategy is particularly effective when personnel are unavailable for a
comprehensive implementation by the business deadline that has been imposed for the
project.
* It is possible to implement early increments with a smaller number of individuals. In
the event that the core product is favourably received, extra personnel may be added in
order to implement the subsequent increment.
* Increments can also be planned in order to mitigate technological risks.
* For instance, the production of a significant quantity of brand new hardware is now
underway, although the exact date of its release is unknown.
* Therefore, it is important to arrange early increments in a way that prevents the use of
this hardware. This will make it possible for end-users to receive partial functionality
without an excessive amount of delay.
What Is Agility ?
• Modifications to the software under development, adjustments to team members,
modifications brought about by new technology, and changes of any kind that could
affect the product or the project that creates it are all instances of the kinds of change
to which an agile team can adapt in a way that is appropriate.
• An agile team is aware that software is created by people working in groups, and that
the success of the project depends on the abilities and talents of these people
cooperating.
• Agility encompasses more than just the ability to adapt quickly to change. In addition to
that, it incorporates the agile way of thinking.
What Is An Agile Process?
The bulk of software development projects are based on three key assumptions, and
an AGILE SOFTWARE PROCESS is characterised by the way it handles these
assumptions.
1. It is impossible to determine in advance which software requirements will
continue to be necessary and which will be replaced by new ones. It is similarly
challenging to anticipate how the priorities of a customer will shift as a project moves
forward.
2. The phases of design and production are frequently combined in the creation of
different kinds of software. i.e. It is recommended that both processes be carried out
simultaneously so that design models can be validated as they are being developed. It
is challenging to make an accurate estimate of the amount of design work that must
be completed before construction can be used to validate the design.
3. The phases of analysis, design, construction, and testing are not as predictable
as we would like them to be (from a planning point of view).
• Based on these three presumptions, we are able to assert that the process's success
resides in its adaptability (to rapidly shifting technical conditions and project
parameters).
• Flexibility is an absolute requirement for an agile process.
• An agile process must therefore adapt incrementally, which requires an iterative
approach to software development.
• The agile team needs feedback from customers in order to achieve incremental goals.
• The iterative methodology enables the customer to frequently evaluate the software
increment, provide the software team with any necessary input, and have some say in
the process changes that are made to meet the feedback provided by the customer.
Those who wish to attain agility are required to adhere to the following 12 principles,
as defined by the Agile Alliance:
1. The earliest and most consistent delivery of useful software is our first and
foremost concern in order to fulfil the requirements of our customers.
2. Be prepared to modify plans in response to changing needs, particularly as
development progresses. Agile processes give the client a competitive edge by
allowing them to adapt to change.
3. Regularly deliver functional software, giving attention to completion in the least
amount of time. A few weeks or a few months could pass in this case.
4. Business experts and developers work together every day for the duration of the
project.
5. Focus on individuals with a strong sense of motivation. Have faith in their abilities
to do the task and give them the environment and assistance they need.
6. Direct, in-person communication is the most effective and beneficial means of
sharing information with other team members and members of a development
team.
7. The best measure of progress is having software that functions as intended.
8. Agile methodologies promote sustainable development. The pace is expected to be
steady indefinitely, and sponsors, developers, and users should all be able to
maintain it.
9. You can improve agility through continuous attention to sound design and technical
excellence.
10. Simplicity, the art of maximizing the amount of work not done, is essential.
11. The best architectures, specifications, and designs are created by self-
organizing teams.
12. On a regular basis, the team reflects on how it may become more efficient and
then adapts and changes its behavior to take those ideas into account.
PLANNING
• The creation of a collection of stories, also known as user stories, that outline the
features and functionalities required for the program to be developed is the first
step in the planning process.
• Each story, sometimes referred to as a "Use-Case," must be written by the
consumer and put on an index card.
• Based on the feature or function's overall business value, the customer assigns the
story a VALUE (priority).
• Following that, each story is assessed by the XP team members, who then
determine its cost, which is expressed in terms of the development weeks needed to
complete it.
• The customer will be asked to divide the story into manageable chunks and the
value and cost will be allocated to each story separately once more if the narrative
requires more than three weeks of development time.
•The author is free to write the new stories whenever they like.
•The XP team works with its customers to determine how best to arrange individual user
stories within the upcoming release, which is also referred to as the upcoming software
increment.
•The XP team will order the stories that will be developed in one of the following three ways
after a release date has been committed to:
1. Every story will be implemented immediately (in a matter of weeks).
2. The tasks related to the stories with the biggest possible impact will be finished
first and given a higher priority in the schedule.
3. The stories with the highest potential for failure will be placed to the front of the
schedule and implemented first.
DESIGN
• The design of XP adheres to the "Keep it Simple" (KIS) approach. A less
complicated representation is preferable over a more complicated design.
• The design offers implementation assistance for a story exactly as it is stated;
nothing less and nothing more than that.
• XP promotes the utilization of CRC ("Class-Responsibility-Collaborator") cards.
These cards identify and organize the object-oriented classes that are pertinent to
the currently active software increment.
• The CRC cards are the only design work products generated as part of the XP
process.
• Refactoring, a construction technique that doubles as a design technique, is
encouraged by XP.
• The term "REFACTORING" refers to a design process that is ongoing during the
construction of the system.
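A minimal sketch of what refactoring looks like in practice (the functions and discount
figures below are invented for illustration, not taken from any XP project). The external
behaviour is unchanged; only the internal design is simplified:

# Before refactoring: the discount arithmetic is duplicated.
def member_price(price):
    return price - (price * 10 / 100)

def seasonal_price(price):
    return price - (price * 25 / 100)

# After refactoring: the shared idea lives in one well-named helper,
# so the behaviour is identical but the design is simpler to change.
def apply_discount(price, percent):
    return price * (1 - percent / 100)

def member_price_refactored(price):
    return apply_discount(price, 10)

def seasonal_price_refactored(price):
    return apply_discount(price, 25)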
CODING
• According to the XP approach, once a team has finished working on the stories
and the preliminary design work, they should not move on to coding but rather
develop a set of unit tests that are included in the software increment that is
presently being worked on.
• Once the unit test has been constructed, the developer will have a much easier
time focusing on the requirements that need to be met in order for the unit test to
be passed.
• Once the programming has been completed, the code can immediately be put
through unit testing, which provides the developers with instant feedback.
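A minimal sketch of this test-first idea using Python's built-in unittest module
(validate_password is a hypothetical function invented for the example; it is not part of
XP itself). The test is written first, and the implementation exists only to make it pass:

import unittest

def validate_password(password):
    # Written only after the tests below existed: a password must be
    # at least eight characters long.
    return len(password) >= 8

class TestValidatePassword(unittest.TestCase):
    def test_accepts_eight_character_password(self):
        self.assertTrue(validate_password("abcd1234"))

    def test_rejects_short_password(self):
        self.assertFalse(validate_password("abc"))

if __name__ == "__main__":
    unittest.main()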
• XP recommends pair programming: two people work together at one computer
workstation to create the code for a story.
EX: It's possible that one person will focus on the specifics of the coding for a segment of
the design while another looks over their shoulder to make sure the coding standards are
being adhered to.
• The created code will "FIT" into the larger framework of the story.
• The integration work is the duty of the pair programmers. This technique of
continuous integration helps to avoid problems with compatibility and interfacing,
and it creates a "SMOKE TESTING" environment, which helps to expose defects
at an earlier stage.
TESTING
• The newly developed unit tests should be implemented using a framework that
enables them to be automated. This encourages a regression testing strategy
whenever code is modified in any way.
• Integration and validation testing of the system can be performed on a daily basis
once the unit tests have been organized into a "universal testing suite." This gives
the XP team a continuous indication of progress, and it can also raise early
warning signs if things start to deteriorate.
• XP acceptance tests, also known as customer tests, are tests that are specified by
the customer and concentrate on the overall features and functionality of the
system that are reviewed by the client.
• Acceptance tests are created from user stories after they have been incorporated
into a product release.
• Once the XP team has completed the delivery of the first release of the project,
they will compute PROJECT VELOCITY, which is the number of customer
stories that were implemented during the first release. Project Velocity may then
be used to
1. Contribute to the estimation of delivery dates and the release schedule for
subsequent versions and
2. Determine whether an overcommitment has been made for all of the stories that
are part of the overall development project. In the event that an overcommitment
takes place, either the content of the release or the end-delivery dates will be
adjusted.
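A small worked example of how project velocity might be applied (all figures are
invented for illustration):

# Release 1: 12 user stories were implemented in a 6-week increment.
velocity = 12 / 6                 # 2 stories per week

# Estimating a later release that still contains 18 stories:
remaining_stories = 18
estimated_weeks = remaining_stories / velocity
print(estimated_weeks)            # 9.0 weeks

If the stories committed to the next release imply more weeks than the calendar allows,
an overcommitment has occurred and the plan must be adjusted as described above.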
• As the development process progresses, the customer has the option of adding new
stories, altering the value of an existing story, splitting existing stories, or
removing existing stories. The XP team then reviews all of the releases that are
still to come and adjusts its plan accordingly.
SCRUM
SCRUM principles are consistent with the agile manifesto:
• Small working teams are organised to make the most of their communication,
minimise their overhead costs, and make the most of their opportunity to share
their knowledge.
• In order to "ensure the best product is produced," the process needs to be flexible
enough to accommodate both changes in technology and in business.
• The procedure results in frequent software increments "that can be inspected,
adjusted, tested, documented, and built upon."
• The work of development and the individuals who carry it out are separated "into
clean, low coupling partitions, or packets"
• As the product is being assembled, testing and documentation are carried out in a
continuous manner.
• The SCRUM methodology affords users the "flexibility to declare a product
'done' whenever it is necessary to do so."
• The SCRUM principles are utilised to direct the development activities that take
place within a process that also includes the activities of the following framework:
requirements, analysis, design, evolution, and delivery.
• Within each activity of the framework, work tasks take place according to a
process pattern known as a Sprint.
• The work that is completed during a Sprint (the number of sprints necessary for
each framework activity will vary based on the complexity and scale of the
product) is suited to the problem that is currently being worked on. Additionally,
the SCRUM team defines the work and frequently modifies it in real time. The
following diagram outlines the general steps involved in the SCRUM process.
• SCRUM places an emphasis on the utilisation of a collection of "Software process
patterns," which have been demonstrated to be effective for projects that have
tight timeframes, fluctuating requirements, and high business criticality.
• Scrum meetings are brief meetings that are held every day by the scrum team.
During the meeting, there are three important questions that each member of the
team answers:
o What have you done since our previous meeting?
o What challenges are you facing right now?
o What do you hope to have accomplished by the time we meet again as a
team?
• The gathering is directed by a group leader known as a "Scrum master," who also
evaluates the contributions made by each individual. The team is able to identify
potential difficulties at the earliest possible stage thanks to the Scrum sessions.
• The regular meetings facilitate "knowledge socialization," which in turn
contributes to the development of a structure that allows the team to organise
itself.
• Demos - Deliver the software increment to the customer so that the customer can
showcase and evaluate the functionality that has been built. This allows the
customer to provide feedback on the functionality.
• The demonstration might not have all of the functionality that was anticipated, but
it should be possible to implement these features within the time constraint that
was set.
Feasibility study – In order to determine if an application is a good fit for the DSDM
process, it is necessary to first establish the fundamental business requirements and
constraints connected with the application.
Business Study – Establishes the functional and information requirements that must be
met in order for the application to be able to give value to the company, as well as
specifies the fundamental architecture of the application and outlines the needs that must
be met for the application to be maintainable.
Implementation – Places the latest software increment into the environment in which it
will operate. It is essential to keep in mind that
1) The increment might not be completely done, or
2) Changes might be required while the increment is being implemented.
• DSDM and XP can be coupled to give a combined approach: DSDM provides a
stable process model, while XP supplies the nuts-and-bolts practices used to
construct the software increments.
AGILE MODELING(AM)
Software engineers often find themselves in the position of having to construct
massive, mission-critical systems for businesses.
Modelling the scope and complexity of such systems is necessary in order to achieve
the following goals:
1. Ensuring that all stakeholders have a better understanding of what has to be
achieved;
2. Dividing the individuals responsible for solving the problem into groups that
are more likely to be successful; and
3. Allowing quality to be evaluated at each stage of the system's engineering and
construction.
• Many other software engineering modelling methods and notations have been
suggested for use in the process of analysis and design; however, despite the major
virtues of these methods, it has been found that they are difficult to implement and
demanding to maintain.
• The "Weight" of these modelling methodologies is a contributing factor to the
problem. When we refer to this, we are referring to the amount of notation that is
required, the degree of formalism that is advised, the complexity of the models for
larger projects, as well as the difficulty in sustaining the model as change occurs.
• The only methods that provide a sustainable benefit for larger projects are the
analysis and the design modelling.
• Modeling
• Implementation
• Testing
• Deployment
• Configuration and Project Management
• Environment Management
Component-Based Development
1. Research and analysis are conducted on the component-based products that are
currently available on the market for the application domain in question.
2. Problems with component integration are taken into consideration.
3. A software architecture that can accommodate the components is built.
4. The architecture incorporates the components into its structure.
5. Extensive testing is carried out to validate the correct operation of the component.
The use of a component-based development methodology leads to software reuse, and
reusability provides measurable benefits to software engineers.
The Formal Methods Model
• Using formal techniques, you can describe, build, and validate a computer-based
system by employing a stringent mathematical language. This is made possible by
the use of formal methods.
• The creation of formal models now requires a significant amount of time and
money due to their complexity.
• Extensive training is necessary since only a small percentage of software
engineers have the appropriate experience to apply formal approaches.
• Customers who are not technically savvy will have a tough time understanding
how to use the models as a communication channel.
• The process will incorporate features from both evolutionary and concurrent
process models.
• This approach is known as aspect-oriented software development (AOSD).
THE UNIFIED PROCESS
It's an effort to incorporate many of the finest practises of agile software development
while drawing on the strengths of traditional software process models.
The Unified Process highlights the significance of software architecture and directs
the architect's attention to where it's needed most.
=> Appropriate objectives
=> Clarity
=> Adaptability
=> Reusability
* It suggests a process flow that is iterative and incremental, giving the software an
evolutionary feel.
MODULE-2
REQUIREMENTS ENGINEERING
What is Requirement
Requirements engineering is the systematic procedure of determining the specific services
that a client demands from a system, as well as the limitations and conditions that govern
its operation and development. Requirements are the explicit descriptions of the services
and limitations of a system that are developed during the process of requirements
engineering.
Different types of Requirement Specification
Domain requirements
Domain-specific requirements are system requirements that are derived from the
application domain and reflect the unique characteristics and functionalities of that domain.
Functional requirements
• Specify the features and services of the system.
• Be contingent upon the nature of the software, its intended audience, and the nature
of the system on which it runs.
• The functional system requirements must be specific, in contrast to the functional
user requirements which might be more general descriptions of the system's
expected behaviour.
Requirements completeness and consistency
It is generally accepted that requirements should be exhaustive and uniform.
Complete
– All services required by the user should be defined.
Consistent
– Requirements should not have contradictory or conflicting definitions.
It is impossible to create a comprehensive and consistent requirements document in
practise, but it is essential that there be no inconsistencies or conflicts in the descriptions
of system facilities.
Non-functional requirements
• These define the properties and constraints of a system, such as reliability and
response time. Constraints include I/O device capability, system representations,
and similar factors.
• Non-functional needs can be more crucial than functional requirements. If these
conditions are not fulfilled, the system becomes ineffective.
• The implementation of these needs may be spread out across the system. There are
two factors contributing to this:
• The non-functional requirements of a system may have more of an impact on the
system's architecture as a whole than on its individual components. To fulfil
performance criteria, for instance, you might have to organise the system in such a
way as to reduce the amount of communication that occurs between its various
components.
• It is possible for a single non-functional demand, such as a requirement for system
security, to spawn a number of related functional requirements that describe new
system services that are required.
Non-functional classifications
• Product requirements
• Prescriptions on how the delivered product must perform, including time to
completion, reliability, etc.
• Organisational requirements
• Company-specific requirements, such as those for meeting process standards,
meeting deadlines, etc., that are a direct outcome of the company's policies and
procedures.
• External requirements
- Requirements that are imposed on the system and its development process by elements
that are not directly related to the system itself, such as interoperability requirements,
legislative requirements, and so on.
Requirements specification
• An ideal requirements definition would result in user and system needs that are
clear, unambiguous, simple to grasp, exhaustive, consistent, and well-organized
into a requirements document.
• System users without extensive technical knowledge should be able to grasp the
user requirements for a system if they accurately represent the system's functional
and nonfunctional requirements.
• The system's external behaviour and operational restrictions should be simply
described in the requirements.
• They shouldn't worry about the system's design or implementation.
In principle, the requirements should exclude all design information, but in practice
this is not feasible. This is due to a number of factors:
1. To better organise the requirements specification, you may need to create an
initial architecture of the system. Requirements are categorised by the several
subsystems that make up the whole.
2. Most systems need to communicate with one another, which might place
limitations on the design and additional demands on the new system.
Requirements Analysis:
4. Requirements specification. The specifications are written down and fed into the
subsequent iteration of the spiral. Formal or informal requirements documents may be
produced.
Viewpoints
• Viewpoints serve as a method for organizing requirements in order to accurately
represent the many perspectives of different stakeholders. Stakeholders can be
categorized based on several perspectives.
• A multi-perspective analysis is crucial because there is no single definitive method
for analyzing system requirements.
Types of viewpoint
• Interactor viewpoints
• Individuals or other entities that directly engage with the system. In an ATM, the
customer and the account database are interactor viewpoints.
• Indirect viewpoints
• Individuals who are not directly involved in the system's operation but who have
an impact on its requirements. Both the management and the security staff of an
ATM are indirect viewpoints.
• Domain viewpoints
• The needs are influenced by the characteristics and constraints of the domain. In
the context of an Automated Teller Machine (ATM), an illustration would be the
protocols and guidelines governing the exchange of information between different
banks.
Interviewing
• During either a formal or a casual interview, the RE team will ask stakeholders
questions about the system that they currently use as well as the system that will
be constructed.
• There are two different kinds of interviews: closed interviews, in which
participants answer a series of questions that have been determined in advance,
and open interviews.
— Interviews with no set agenda, in which a wide variety of topics are discussed with
various stakeholders; these are open interviews.
Scenarios
• Scenarios are real-life examples of how a system might be utilised.
• Scenarios should include the following: a description of the beginning situation; a
description of the goal of the scenario.
Requirements checking
• Validity. Does the system offer the functions that most effectively meet the
customer's requirements?
• Consistency. Are there any conflicts arising from requirements?
• Completeness. Does the customer's requirements encompass all necessary
functions?
• Realism. Is it feasible to achieve the requirements within the constraints of the
existing budget and technology?
• Verifiability. Can the requirements be verified?
Requirements validation techniques
• Requirements reviews
– A methodical manual analysis of the specifications.
• Prototyping
– Verifying requirements with an executable model of the system.
• Test-case generation
– Creating tests to verify the testability of requirements.
Requirements reviews
• While the requirements definition is being developed, regular reviews ought to be
conducted.
• Staff from the contractor and the client should participate in reviews.
Review checks
• Verifiability. Can the requirement be tested in a practical way?
• Comprehensibility. Does the requirement make sense to you?
• Traceability. Does the requirement clearly identify where it came from?
• Adaptability. Is it possible to modify the requirement without significantly
affecting other requirements?
Requirements Management
• Managing evolving needs during requirements engineering and system
development is known as requirements management.
• There will always be incomplete and inconsistent requirements;
– As business needs evolve and a deeper understanding of the system is
created, new requirements will always arise;
– Diverse perspectives will result in diverse requirements, many of which
are contradictory.
Requirements evolution
Requirements classification
Data Objects
• A data object is a computer-understandable representation of complex data.
• A data object can be anything that produces or consumes information, including:
• a thing (such as a report or a display),
• an occurrence (such as a phone call) or an event (such as an alarm),
• a role (such as a salesperson),
• an organisational unit (such as an accounting department),
• a place (such as a warehouse), or
• a structure (such as a file).
Data Attributes
The properties of a data entity are referred to as its data attributes. These attributes
serve three different functions:
(1) they can be used to name an instance of the data object;
(2) they can be used to describe the instance; and
(3) they can be used to make a reference to another instance that is located in another
table.
Relationships
• There are various methods in which data objects are related to one another.
SCENARIO-BASED MODELING
These are the problems that need to be addressed in order for use cases to be a useful tool
for requirements modeling.
The SafeHome home surveillance functions performed by the homeowner
actor:
• Choose the camera you want to view.
• Make sure thumbnails are requested from all of the cameras.
• Views of the camera can be seen in a window on your computer.
• Manage the camera's pan and zoom settings individually.
• Record the output of the camera in a selectable manner.
• Play back the output from the camera.
• Use the Internet to access the video surveillance cameras.
Use case: Access camera surveillance via the Internet—display camera views
Actor: homeowner
1. The homeowner visits the SafeHome Products website and logs on to their account.
2. The user ID of the homeowner is entered into the system.
3. The homeowner is required to enter two passwords, each of which must be at least
eight characters long.
4. The system presents buttons for all of the primary functions.
5. The homeowner presses the button labelled "surveillance" to access the system's
primary functions.
6. The homeowner then chooses the option to "pick a camera."
7. The system will show you the layout of the house's floor plan.
8. The homeowner chooses an icon for a camera from the floor layout.
9. The homeowner clicks the "view" button on their computer screen.
Refining a Preliminary Use Case
Therefore, in order to evaluate each phase that makes up the major scenario, the
following questions will be asked.
• Is there any other course of action that the actor can take at this point?
• Is it feasible that the actor will experience some kind of error circumstance at this
particular juncture? If that's the case, what could it be?
• Is it likely that the actor will come across another behaviour at this point, such as a
behaviour that is triggered by an event that is not under the actor's control? If that's
the case, what could it be?
Preliminary use case diagram for the SafeHome system
Swimlane diagram for Access camera surveillance via the Internet—display camera views
function.
The following size-related variables dictate how much focus is placed on requirements
modeling for Web and mobile applications:
(1) The scope and intricacy of the application increment;
(2) The quantity of stakeholders (analysis can assist in identifying conflicting
requirements originating from various sources);
(3) The size of the app development team;
(4) The extent to which team members have collaborated previously (analysis can aid in
creating a shared understanding of the project); and
(5) The duration elapsed since the team's last collaboration.
Module 3
DESIGN ENGINEERING AND METRICS
1. Class Diagram
Purpose: Represents the static structure of a system by showing its classes, attributes,
operations, and the relationships among objects.
• Class: A blueprint for creating objects. It encapsulates data for the object and
methods to manipulate that data.
o Attributes: Properties or fields of a class.
o Operations (Methods): Functions or procedures that the class can perform.
• Association: A relationship between classes.
o Multiplicity: Defines how many instances of a class are associated with
one instance of another class (e.g., one-to-many, many-to-many).
• Aggregation: A type of association that represents a "whole-part" relationship. For
example, a library and books.
• Composition: A stronger form of aggregation with a life-cycle dependency. For
example, a house and its rooms.
• Inheritance: A mechanism where one class (subclass) inherits attributes and
operations from another class (superclass).
• Interface: Defines a contract that implementing classes must follow, without
providing the implementation details.
• Dependency: A relationship indicating that a change in one class may affect
another class.
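The same relationships can be sketched in code. A minimal Python illustration of
aggregation, composition, and inheritance (all class names are invented for the example):

class Book:
    def __init__(self, title):
        self.title = title                    # attribute

class Library:
    # Aggregation: a Library holds Books, but the Books can
    # exist independently of the Library ("whole-part").
    def __init__(self, books):
        self.books = books

class Room:
    pass

class House:
    # Composition: the Rooms are created and owned by the House,
    # so their life cycle depends on it.
    def __init__(self, room_count):
        self.rooms = [Room() for _ in range(room_count)]

class Publication:
    def describe(self):
        return "a publication"

class Journal(Publication):
    # Inheritance: Journal reuses and extends Publication.
    def describe(self):
        return "a journal, which is " + super().describe()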
2. Use Case Diagram
Purpose: Describes the functional requirements of a system from the end user's
perspective.
• Actor: An external entity (user or another system) that interacts with the system.
• Use Case: A specific functionality or service that the system provides to actors.
• System Boundary: Defines the scope of the system, showing which use cases are
included.
• Relationship:
o Association: A line connecting an actor to a use case indicating interaction.
3. Activity Diagram
Purpose: Illustrates the flow of activities or actions in a system, showing the sequence and
conditions for these activities.
4. Interaction Diagram
Purpose: Focuses on the flow of messages between objects and how they collaborate.
5. State Machine Diagram
Purpose: Models the dynamic behavior of a system or an object by showing its states and
transitions.
6. Component Diagram
Purpose: Shows the organization of and dependencies among software components.
7. Deployment Diagram
Purpose: Shows the physical deployment of artifacts on nodes and their relationships.
These UML diagrams collectively help in visualizing, specifying, constructing, and documenting
the artifacts of a software system, ensuring a comprehensive understanding of both static and
dynamic aspects of the system.
1. Process Metrics
Purpose: Process Metrics are the measures of the development process that create a body
of software.
• Cycle Time: The time taken to complete a particular process or task from start to finish.
For instance, the time to develop a feature or resolve a defect.
• Lead Time: The total time from when a request is made until the final delivery. This
includes development time, testing time, and any other phases.
• Throughput: The number of units of work (e.g., features, defects) completed in a
specific period. It reflects the productivity and capacity of the development team.
• Defect Density: The number of defects per unit size of the code (e.g., per 1000 lines of
code). This helps in evaluating the quality of the development process.
• Work in Progress (WIP): The number of tasks or features currently being worked on. It
helps in understanding the current load and efficiency.
• Process Compliance: The degree to which the development process adheres to defined
standards, guidelines, or regulations. It ensures consistency and quality in the software
development process.
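Defect density, defined above, is usually expressed per thousand lines of code (KLOC).
A small illustration with invented numbers:

defects_found = 30
lines_of_code = 15000
defect_density = defects_found / (lines_of_code / 1000)
print(defect_density)             # 2.0 defects per KLOC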
2. Project Metrics
• Cost Performance Index (CPI): A measure of cost efficiency in a project. CPI = Earned
Value / Actual Cost. A CPI greater than 1 indicates cost efficiency.
• Schedule Performance Index (SPI): A measure of schedule efficiency. SPI = Earned
Value / Planned Value. An SPI greater than 1 indicates ahead of schedule.
• Earned Value (EV): The value of work actually performed compared to the baseline
plan. It helps in measuring project performance.
• Planned Value (PV): The value of work planned to be performed by a specific time. It
helps in assessing whether the project is on track.
• Actual Cost (AC): The actual cost incurred for the work performed by a specific time. It
helps in comparing with the planned costs.
• Variance Analysis: Analyzing the differences between planned and actual performance.
This includes cost variance (CV = EV - AC) and schedule variance (SV = EV - PV).
• Risk Metrics: Measures related to risk management, such as the number of identified
risks, risk impact, and the effectiveness of risk mitigation actions.
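A short sketch that ties the earned-value formulas above together (the monetary figures
are invented for illustration):

earned_value = 40000      # EV: budgeted value of work actually done
planned_value = 50000     # PV: budgeted value of work scheduled to date
actual_cost = 45000       # AC: what the completed work actually cost

cpi = earned_value / actual_cost       # ~0.89 < 1: over budget
spi = earned_value / planned_value     # 0.80 < 1: behind schedule
cv = earned_value - actual_cost        # -5000: cost variance
sv = earned_value - planned_value      # -10000: schedule variance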
Software Measurement
• Size Metrics: Quantify the size of software components. Common measures include:
o Lines of Code (LOC): The number of lines in the codebase. It helps in estimating
complexity and effort.
o Function Points (FP): A measure of functionality provided to the user,
independent of the programming language.
• Complexity Metrics: Measure the complexity of software to estimate effort and
maintainability.
o Cyclomatic Complexity: Measures the number of linearly independent paths
through the program's source code. Higher values indicate higher complexity.
o Halstead Complexity Measures: Metrics based on the number of operators and
operands in the code.
• Maintainability Metrics: Assess how easily software can be modified.
o Maintainability Index: A composite metric that includes cyclomatic complexity,
lines of code, and other factors to estimate maintainability.
Metrics for Software Quality
By employing these metrics, organizations can gain insights into their software development
processes, manage projects more effectively, and ensure high-quality software products. Metrics
help in making data-driven decisions, improving performance, and achieving better alignment
with business goals.
* Various procedures are carried out under the umbrella term of "validation" to
guarantee that the final product of software development is traceable to the requirements
of the customer.
Example:
Verification: Are we building the product right?
Validation: Are we building the right product?
* The processes of verification and validation involve a wide range of SQA operations
that include the following:
=> Formal Technical Reviews
=> Quality and Configuration audits
=> Performance Monitoring
=> Simulation
=> Feasibility Study
=> Documentation Review
=> Database Review
=> Analysis Algorithm
=> Development Testing
=> Usability Testing
=> Qualification Testing
=> Installation Testing
(2) Organizing for Software Testing:
* The developer often also carries out integration testing, which is a phase of testing that
comes before the complete software architecture is constructed.
* Testing the various units (components) of the program is always the responsibility of
the software developer.
* Once the software architecture has been completed, an independent testing group will
be recruited to evaluate the product. The purpose of an Independent Test Group, which
is also abbreviated as an ITG, is to eliminate the inherent challenges that come when
the builder is given the opportunity to test the thing that has been built. This is
accomplished by eliminating the inherent difficulties. During the course of a software
project, the developer and the ITG work together very closely to ensure that thorough
testing will be performed.
* The developer needs to be available when testing is being done so that he or she may
fix any mistakes that are found.
* Unit testing starts at the centre of the spiral and concentrates on each component of
the software as it is written in the source code.
* Proceeding outward along the spiral, we encounter integration testing, which is
concerned with the design and construction of the software architecture.
* Next comes validation testing, which compares the software that has been built
against the requirements established during software requirements analysis.
* Finally, we arrive at system testing, which involves testing the software along with
the other components of the system as a whole.
Software Testing Steps:
(i) Unit Testing:
* The initial phase of testing involves the examination of each component in isolation to
verify its proper functioning as an independent unit.
* If you want your software testing strategy to be successful, you need to address the
following issues:
(1) Specify product requirements in a quantifiable manner well in advance of testing
beginning;
(2) State testing objectives explicitly;
(3) Understand who will be using the software and create a profile for each user category.
(4) Formulate a strategy for testing that places an emphasis on "Rapid Cycle Testing."
(5) Construct reliable software that is intended to perform its own testing.
(6) Employ efficient formal technical reviews as a filter prior to testing.
(7) Carry out formal technical reviews to evaluate the test strategy and test cases.
(8) Construct an approach for the testing process that emphasises continual improvement.
Independent Paths:
* Each and every basis path that passes through the control structures is investigated to
guarantee that
=> Each and every statement contained within a module has been run at least once.
Boundary Conditions:
Boundary Testing:
* This is one of the most significant responsibilities involved in unit testing
* A common cause of software failure is when it reaches one of its limits (for example,
an error frequently happens when the nth element of an n-dimensional array is
handled).
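A minimal sketch of boundary-value test cases for the array situation described above
(the function and data are invented for illustration):

def nth_element(items, n):
    # Returns the nth (1-based) element. The upper boundary,
    # n == len(items), is exactly where off-by-one defects appear.
    return items[n - 1]

data = [10, 20, 30]
assert nth_element(data, 1) == 10     # lower boundary
assert nth_element(data, 3) == 30     # upper boundary: the nth of n elements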
When evaluating error handling, potential errors that should be tested include:
(1) The error description is unintelligible or unclear.
(2) The error reported does not match the error actually encountered.
(3) An error condition causes operating system intervention before error handling
takes place.
(4) Processing under exception conditions is incorrect.
(5) The error description is insufficient to help identify the error's cause.
Unit Test Procedures:
* Unit test design can be done either before or after code is generated.
Driver:
* A driver is not much more than an application's "main programme" in the vast majority
of cases.
* It accepts
=> data from test cases,
=> sends these data to the component that is about to be tested, and
=> prints the relevant results.
Stub:
* A stub serves as a placeholder for a module that is subordinate to (called by) the
component being tested; it uses the subordinate module's interface and returns control
after minimal processing.
* Drivers and stubs are two different types of software that need to be built, but they are
not included in the final software product.
* The real overhead is reasonably modest if the drivers and stubs are kept simple;
otherwise, it is substantial.
* When a component that has a high cohesiveness is designed, it simplifies the unit
testing process.
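A minimal sketch of a driver and a stub in Python (all names are invented for
illustration). The driver feeds test-case data to the component under test; the stub stands
in for a subordinate module that is not yet available:

# Stub: replaces a real tax-rate lookup module that is not yet built.
def tax_rate_stub(region):
    return 0.10                        # fixed, predictable answer

# Component under test, written to accept its subordinate as a parameter.
def compute_total(amount, lookup_tax_rate):
    return amount + amount * lookup_tax_rate("default")

# Driver: a small "main programme" that passes test-case data to the
# component and prints the relevant results.
if __name__ == "__main__":
    for amount in (0, 100, 250):
        print(amount, "->", compute_total(amount, tax_rate_stub))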
Integration Testing:
* Once all of the modules have passed their own unit tests meaning that all of the
modules function properly, we have doubts about whether or not they will work, when
do we integrate them together?
* Integration testing is going to be the solution for this problem.
Interfacing:
It is the mechanism that brings together all of the individual modules.
The following are the issues that can arise throughout the interfacing process:
=> It is possible to lose data when moving across interfaces.
=> An unintended negative effect can be caused by one module on another module.
=> The combination of subfunctions might not result in the principal function that is
required.
=> An imprecision that is tolerable in a single module might be magnified to
intolerable levels when subfunctions are combined.
=> Issues may arise due to the use of global data structures.
Integration Testing – Definition:
* It is a methodical approach to building the software architecture, while at the same time
running tests to find faults linked with the software's interface.
* The goal is to use components that have undergone unit testing and construct a
programme structure according to the specifications set out by the design.
Incremental Integration:
* In this scenario, the programme is built and tested in small steps
* Errors are easy to localise and rectify
* Interfaces are tested in their entirety and a systematic testing strategy may be used
* There are several variants of the incremental integration method to choose from.
(1) Top-Down Integration:
The software architecture is built up in stages through this testing, which is an
incremental method.
* The modules are integrated by moving down the control hierarchy, beginning with the
main control module, also referred to as the main programme. Subordinate modules can
be incorporated into the principal control module either depth-first or breadth-first.
Advantages:
(1) Top-down integration has several advantages, including ensuring important control or
decision points are proven early in the testing process.
(2) In order to gain the trust of both the customer and the developer, it is advantageous to do
an early demonstration of the product's functional capabilities. This is noteworthy because it
shows that the feature is operating according to plan, which is crucial information to know.
(3) Although the strategy sounds relatively uncomplicated, it can result in a number of
logistical problems in practice.
(2) Bottom – Up Integration:
* In this scenario, construction and testing begin with the foundational components of
the software. Because components are integrated from the bottom up, the processing
required for components subordinate to a given level is always available, so stubs are
not required in this instance.
Clusters 1, 2, and 3 are formed by assembling the components, as seen in the image below.
* A driver assists in putting each Cluster through its paces.
* The components from Clusters 1 and 2 are subordinate to the Ma component. After
drivers D1 and D2 are removed, the clusters are connected to Ma directly. Similarly,
driver D3 for Cluster 3 is removed and the cluster is integrated with Mb.
The Mc structure incorporates both Ma and Mb as components.
Bottom up Integration
Regression Testing:
* Software engineers can record test cases and outcomes with capture and playback tools
for later comparison and playback.
* There are three types of test cases in the regression test suite:
(i) A sample set of tests that will run through every feature of the
software
(ii) Further testing concentrating on software features that are probably
going to be impacted by the modification
(iii) Tests concentrating on the modified software components
Smoke Testing:
* When developing software products, this approach to integration testing is frequently
employed.
* It is intended to serve as a pacing mechanism for projects that are time-sensitive,
enabling the software team to regularly evaluate its work.
Activities included in the smoke testing:
(1) A "Cluster" is assembled from software components that have been converted
into code. Every data file, library, reusable module, and engineering component needed
to carry out one or more product functions is included in a cluster.
(2) A battery of tests is intended to identify any problems that prevent the cluster
from operating as intended.
(3) The product is tested for smoke every day and the clusters are integrated with
additional clusters.
* The integration strategy might be either bottom-up or top-down.
Critical Module:
* It is a module which has one (Or) more of the following characteristics:
(i) Addresses several software requirements
(ii) Has a high level of control [resides relatively high in program
structure]
(iii) is complex (Or) error prone
(iv) Has definite performance requirements
* Testing the crucial module as soon as feasible is recommended.
Validation Testing:
* The validation process at the system level concentrates on the following:
=> User – visible actions
=> User recognizable output from the system
* Validation testing is only successful when the program operates as the customer would
reasonably expect it to.
* The Software Requirements Specifications define reasonable expectations
* A section of the specification called validation criteria serves as the foundation for a
validation testing approach
* One of the following two scenarios could arise following the validation test:
(i) The function's (Or) performance characteristics meet the specification
and are accepted.
(ii) A deviation from the specification is discovered and a list of
deficiencies is created.
* Deviations (Or) errors found at this point are rarely able to be fixed in time for the
delivery date.
Configuration Review (Or) Audit:
* It is a crucial component of the validation procedure.
* The purpose of the review is to make sure that
=> all software configuration elements have been generated or cataloged
correctly and
=> contain all the information needed to support each stage of the
software life cycle.
Alpha and Beta testing:
* To enable the customer to verify all requirements, a series of acceptance tests are
carried out when custom software is developed for a single customer.
* The acceptance test is carried out by the end user rather than by software
engineers.
* Since it is difficult to conduct acceptance tests with every client while developing
software for mass use, most software product developers employ a procedure known as
alpha (Or) beta testing.
(i) Alpha testing:
* End users perform it at the developer's site.
* The program is used in a realistic environment;
78
* The developer is present at all times;
* The developer monitors errors and usage issues;
* Alpha tests are conducted in a closely supervised environment.
(ii) Beta testing:
* The beta test is a "Live" implementation of the program in an environment that the
developer cannot control.
* It is carried out at end-user locations; the developer is not present at this time.
* The end-user logs any issues that come up during beta testing.
* These error reports are forwarded to the developer on a regular basis.
* The software engineer modifies the product based on the error reports, and then gets
ready to release it to all of the users.
System Testing:
* It is a set of several tests with the main goal of thoroughly testing the computer-based
system.
* Although the goals of each test vary, they always aim to confirm that the various
components of the system have been correctly integrated and are carrying out their
designated tasks.
Types:
(i) Recovery Testing
(ii) Security Testing
(iii) Stress Testing
(iv) Performance Testing
(v) Sensitivity Testing
(1) Recovery Testing:
* It is a type of system test that involves purposefully breaking the programme in a number
of various ways and checking to see whether or not recovery is carried out properly.
* If recovery is automatic (performed by the system itself), then re-initialization,
checkpointing mechanisms, data recovery, and restart are each evaluated for
correctness.
* If recovery requires human intervention, the Mean Time To Repair (MTTR) is
evaluated to determine whether it falls within the acceptable range of values.
(2) Security Testing:
* It verifies that the protection mechanisms built into a system will, in fact,
safeguard the system from unauthorised intrusion.
* During security testing, the tester plays the role of an individual who wishes
to penetrate the system.
* Given enough time and resources, good security testing will ultimately penetrate
any system. The role of the system designer is to make the cost of penetration
greater than the value of the information that would be obtained.
* The greater the consequences of an error, the greater the pressure to investigate
and pinpoint its root cause.
* This pressure often compels the software developer to fix one problem while
simultaneously introducing two more.
6.15 Debugging Strategies:
* In general three debugging strategies have been proposed
(i) Brute Force
(ii) Back Tracking
(iii) Cause Elimination
Using white-box testing, the software engineer is able to derive test cases that:
1. Guarantee that every independent path within a module has been traversed at
least once,
2. Exercise all logical decisions on both their true and false sides,
3. Execute all loops at their boundaries and within their operational limits, and
4. Exercise internal data structures to ensure their validity.
Each circle in figure B, which is referred to as a node on a flow graph, represents one or
more procedural statements.
• A mapping into a single node is possible for a series of process boxes and a decision
diamond.
• Similar to the arrows on flowcharts, the arrows on the flow graph, sometimes referred
to as edges or links, show the flow of control.
• An edge must terminate at a node, even when the node does not represent a
procedural statement.
• Any area bounded by edges and nodes is referred to as a region. When counting
regions, the area outside the graph is also counted as a region.
• In the event that a compound condition arises during the process of procedural design,
the flow graph will become marginally more convoluted.
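As a worked illustration (the node and edge counts here are assumed, not taken from a
specific figure): for a flow graph with E = 11 edges and N = 9 nodes, the cyclomatic
complexity V(G) = E − N + 2 = 4; this equals the number of regions (counting the outer
region) and gives the number of independent paths that must be exercised.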
1. Statement Coverage
Definition: Ensures that every executable statement in the code is executed at least once
during testing.
Objective: To verify that no line of code remains untested.
Ex:-
def calculate(num):
    if num > 0:
        print("Positive number")
    else:
        print("Non-positive number")
2. Branch Coverage
Definition: Ensures that every possible branch (outcome of each decision) is tested at least
once.
Objective: To test both the true and false outcomes of each decision point.
Ex:-
def is_even(num):
    if num % 2 == 0:
        return True
    else:
        return False
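Here, two test cases, is_even(4) and is_even(7), exercise both the true and the false
branch of the decision.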
3. Path Coverage
Definition: Ensures all possible paths through the code (including combinations of
branches) are tested.
Objective: To explore every unique path in the code, covering all branches and loops
Ex:-
def check_number(num):
    if num > 0:
        if num % 2 == 0:
            print("Positive even number")   # branch body assumed; the original omitted it
        else:
            print("Positive odd number")    # branch body assumed; the original omitted it
    else:
        print("Non-positive number")
4. Condition Coverage
Definition: Ensures that every individual condition in a decision (e.g., within if statements)
is tested for both true and false outcomes.
Objective: To test each condition independently.
Ex:-
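(The original example appears to be missing here; the following is a minimal hypothetical
sketch in which each atomic condition of a compound decision is exercised as both true and
false.)
def can_vote(age, is_citizen):
    # Compound decision with two atomic conditions
    if age >= 18 and is_citizen:
        return True
    return False

# Condition coverage: each individual condition takes both truth values.
can_vote(20, True)   # age >= 18 is true,  is_citizen is true
can_vote(15, True)   # age >= 18 is false, is_citizen is true
can_vote(20, False)  # age >= 18 is true,  is_citizen is false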
5. Loop Testing
Definition: Focuses on validating the behavior of loops under different conditions (e.g., zero
iterations, one iteration, multiple iterations, maximum iterations).
Objective: To ensure loops handle various cases correctly.
Ex:-
def sum_numbers(n):
    total = 0
    for i in range(n):
        total += i
    return total
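Loop testing of this example would exercise sum_numbers(0) (zero iterations),
sum_numbers(1) (one iteration), sum_numbers(5) (multiple iterations), and a large value
of n as a typical maximum.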
Functional Testing:
This type of testing is useful for the testers in identifying the functional requirements of a software
or system.
Regression Testing:
This testing type is performed after the system maintenance procedure, upgrades or code fixes to
know the impact of the new code over the earlier code.
Non-Functional Testing:
This testing type is not connected with testing for any specific functionality but relates to non-
functional parameters like usability, scalability and performance.
Graph-Based Testing Methods
• The first step is to understand the objects that are modelled in software and the
relationships that connect them.
• The next step is to define a series of tests that verify that "all objects have
the expected relationship to one another."
Stated another way:
- Create a graph of the important objects and their relationships.
- Devise a series of tests centred on the graph,
so that every object and relationship is exercised and any errors are uncovered.
• To begin, create a graph consisting of a set of nodes, which represent objects, and
links, which represent the relationships between those objects:
– link weights describe some characteristic of a link;
– node weights describe the properties of a node.
• The nodes in the network are depicted as circles, and the connections between them can
take on a variety of forms.
• A one-way relationship is denoted by a directed link, which is depicted as an arrow and
indicates that the link only goes in one direction.
• The relationship is considered to apply in both directions when there is a bidirectional
link, which is also referred to as a symmetric link.
When multiple distinct associations need to be constructed between two nodes in a graph,
the use of parallel links is necessary.
Example
• Object #1 is the “New File” menu selection.
• Object #2 is the window that is created for the document.
• Object #3 is the text that is present in the document.
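A minimal sketch of how such a graph might be represented and used to enumerate one test
per relationship (the Python structure below and the timing weight are hypothetical; the
three objects are those of the example above):
# Hypothetical sketch: objects as nodes, relationships as directed
# links with link weights; one test case is derived per link.
graph = {
    "newFile (menu selection)": [
        # (target object, link weight describing the relationship)
        ("documentWindow", "window is generated (assume < 1.0 sec)"),
    ],
    "documentWindow": [
        ("documentText", "window displays the document text"),
    ],
}

for source, links in graph.items():
    for target, weight in links:
        print(f"Test relationship: {source} -> {target} [{weight}]")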
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence
classes are defined.
2. If an input condition requires a specific value, one valid and two invalid
equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid
equivalence class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
Example
• Area code: blank or a three-digit number
• Prefix: a three-digit number not beginning with 0 or 1
• Suffix: a four-digit number
• Password: a six-character alphanumeric string
• Commands: such as "pay bill", "deposit cheque", and the like
The input conditions associated with each data element can then be stated as:
• area code: Boolean input condition (the area code may or may not be present);
value input condition (a three-digit number)
• prefix: range input condition (values defined between 200 and 999, with some
exceptions)
• suffix: value input condition (a four-digit length)
• password: Boolean input condition (a password may or may not be present); value
input condition (a six-character string)
Divide the input data into groups (or "partitions") where the system is expected to
behave similarly for every input within the same group.
Goal:
· Minimize the number of test cases by selecting representative values from each partition.
Example:
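As a minimal illustrative sketch (the field and its bounds below are hypothetical, chosen
only to demonstrate the technique):
# Hypothetical sketch: a field that accepts ages 18 to 60 yields one
# valid partition and two invalid partitions (below and above range).
def accepts_age(age):
    return 18 <= age <= 60

# One representative value per partition is enough:
partitions = {
    "invalid (below range)": 10,
    "valid (in range)": 35,
    "invalid (above range)": 70,
}
for name, value in partitions.items():
    print(name, "->", accepts_age(value))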
MODULE - 5
RISK, QUALITY MANAGEMENT AND REENGINEERING
Risk:-
Risk in the software development lifecycle refers to the possibility of uncertain
events or conditions that could negatively impact the project's objectives, such as
quality, timelines, budget, or performance.
RISK MANAGEMENT:-
Reactive Risk Strategies:
A reactive approach deals with risks after they have occurred. It is a "wait and see"
strategy, where the focus is on damage control.
Characteristics:
Advantages:
Lower upfront effort: No resources are spent on risks that may never occur.
Fast response to specific incidents.
Disadvantages:
Proactive Risk Strategies:
Proactive strategies aim to identify and mitigate risks before they occur. It is about
anticipation and prevention.
Characteristics:
Prevention focus: Risks are systematically identified and addressed during the development
process.
Long-term planning: Emphasis on creating robust systems that minimize potential risks.
Example scenarios:
o Conducting regular code reviews and static analysis to catch bugs early.
o Implementing automated tests and continuous integration (CI) pipelines.
o Investing in security audits and penetration testing.
Advantages:
Cost savings: Early detection and prevention of risks are often cheaper than fixing problems
later.
Improved quality: Reduces the likelihood of major disruptions, enhancing system
reliability.
Customer trust: Fewer issues in production enhance user satisfaction.
Disadvantages:
Higher upfront effort: Requires time and resources for risk assessment and mitigation
planning.
Possible over-preparation: Some risks may never materialize, leading to unnecessary work.
* The primary goal is to avoid risk, but because not all risks can be eliminated
entirely, the team must also establish a contingency plan that enables it to respond
in a controlled and effective manner should a risk become a reality.
Software Risks:
1) Project Risks
Risks that affect project management, timelines, and resource allocation. Example:
the project scope expands without corresponding adjustments to time or budget.
2) Technical Risks
Risks related to the technology, tools, or methods used in the project. Example:
the system may not meet performance requirements under high load.
3) Operational Risks
Risks that impact the day-to-day operations or processes of the software. Example:
downtime or crashes in production environments.
4) Security Risks
5) Business Risks
Risks that impact the business value or strategic goals of the software. Example:
the product does not align with customer expectations.
6) Process Risks
Components of RMMM
Risk Mitigation
Focuses on reducing the likelihood or impact of identified risks before they occur.
Strategies:
1. Avoidance: Change the project plan so that the risk cannot occur.
2. Transfer: Shift the risk or its impact to a third party (e.g., insurance or outsourcing).
3. Reduction: Take steps in advance to lower the probability or impact of the risk.
4. Acceptance: Acknowledge the risk and prepare to handle its impact if it occurs.
Risk Monitoring
Involves tracking identified risks and detecting new risks as the project progresses.
Key Activities:
Risk Management
Encompasses the decision-making and corrective actions required to handle risks effectively.
Key Activities:
1. Risk Identification: Identify risks using methods like brainstorming, risk checklists, or
historical data.
2. Risk Assessment: Evaluate the probability and impact of each risk.
3. Risk Prioritization: Rank risks based on their severity (high, medium, low).
4. Risk Mitigation Plan: Develop strategies to address high-priority risks.
5. Implementation of Mitigation: Execute the planned actions.
6. Monitoring and Reporting: Continuously track risks and update stakeholders.
7. Management and Response: Take corrective actions if risks evolve into actual issues.
1) Identified Risk: The payment gateway might fail during peak usage.
2) Mitigation Plan:
Use load testing tools to simulate high traffic and optimize the gateway's
performance.
3) Monitoring:
4) Management:
Benefits of RMMM
Software Quality Assurance (SQA) is a systematic process that ensures software products and
processes meet predefined quality standards throughout the software development lifecycle (SDLC).
It encompasses a range of activities, including planning, monitoring, testing, and improving
software quality to ensure the product meets customer expectations and complies with
organizational and regulatory standards.
1) Ensure Quality
2) Prevent Defects
Identify potential defects early in the development process to reduce cost and
time of fixing issues later.
4) Ensure Compliance
1) Quality Management:
Quality Planning: Define quality standards and establish processes to achieve them.
Quality Control: Monitor specific project outputs to ensure they meet the defined standards.
1. Verification: Ensures that the product is being developed correctly (e.g., design
reviews, code inspections).
2. Validation: Ensures the final product meets user needs and expectations (e.g., user
acceptance testing).
4) Testing
Includes various types of testing like unit testing, integration testing, system testing,
and performance testing to identify defects.
Conduct internal and external audits to ensure compliance with processes and
standards.
Regular code reviews, design reviews, and project status reviews to catch potential issues
early.
SQA Activities:-
Requirement Analysis
Testing Strategies
Plan and execute various levels of testing (unit, integration, system, acceptance).
Process Monitoring
Ensure that software development follows the defined processes (e.g., adherence to
coding standards).
Risk Management
Identify, assess, and mitigate risks that could impact software quality.
Benefits of SQA:-
Detecting and fixing issues early reduces the cost and time required for later-stage
corrections.
Customer Satisfaction
Reduced Risk
Challenges in SQA:-
Resource Constraints
o Limited time, budget, or skilled personnel can impact quality assurance activities.
Changing Requirements
Integration Complexity
• Each review meeting should be constrained by the following guidelines:
• Between three and five people should typically participate in the review.
• Advance preparation should occur, but it should require no more than two hours
of work from each individual.
• The duration of the review meeting should be less than two hours.
• Given these constraints, the FTR concentrates on a specific (and relatively small)
portion of the overall software. For instance, rather than attempting to review an
entire design, FTRs are conducted for each component or small group of components.
• The review summary report is sent to the project leader and any other parties interested in
the work, and it is included in the project's historical record.
• An issues list is typically included in the summary report. The issues list serves:
1. To identify the specific areas of the product that are producing problems, and
2. To serve as an action-item checklist that guides the producer as corrections are
made.
• If an issues list is not compiled, there is a chance that the issues raised will
"fall between the cracks."
• Creating a follow-up procedure is crucial to ensuring that the items on the issues
list are appropriately fixed; one tactic is to give the review leader responsibility
for follow-up.
SOFTWARE RELIABILITY
Software reliability is defined statistically as "the probability of failure-free operation of a
computer programme in a specified environment for a specified amount of time."
• What is really meant by the term "failure"?
–Failure in the context of any discussion about the reliability and quality of software is defined
as nonconformance to software requirements.
It is likely that correcting one error can cause others to arise, which will then cause more errors,
which will ultimately result in further failures.
•When development and historical data are combined, software reliability may be
tracked, directed, and evaluated.
In other words, an end-user is not concerned with the overall number of errors; rather, they are only
concerned with the number of failures. Because each individual fault identified within a program
has a different failure rate, the total error count is not a very trustworthy predictor of the
dependability of a system.
• In addition to a reliability measure, a measure of software availability must also
be developed.
• Software availability is the probability that a program is operating according to
requirements at a given point in time. It is defined as:
Availability = [MTTF / (MTTF + MTTR)] × 100%
• The MTBF reliability measure is equally sensitive to MTTF and MTTR.
• The availability measure is somewhat more sensitive to the MTTR, which is an
indirect indication of the maintainability of software.
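As a worked illustration (the figures are assumed for the sake of the example): with
MTTF = 1,000 hours and MTTR = 50 hours, Availability = [1000 / (1000 + 50)] × 100%
≈ 95.2%.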
Reengineering Introduction
• Regardless of an application's size, complexity, and domain, modification occurs:
1. due to new features demanded by customers,
2. due to errors, and
3. due to new technology.
• We have to maintain software when it is necessary, and we have to re-engineer it
when doing so is the right choice.
• What is it?
• Who does it?
• Why is it important?
• What are the steps?
❖ Maintenance corrects defects, adds new functionality as per user needs, and adapts
the software to a changing environment.
❖ At the strategic level, BPR identifies and evaluates existing business processes
and creates revised business processes that better meet current goals.
• What is the work product?
❖ A variety of maintenance and re-engineering work products are produced, e.g.,
use cases, analysis and design models, and test procedures.
❖ The final output is the upgraded software.
• How do you ensure that you have done right?
❖ Use the SQA practices that are applied to every SE process:
✓ Technical reviews assess the analysis and design models.
✓ Specialized reviews consider business applicability and compatibility.
✓ Testing is applied to uncover errors in content, functionality, etc.
Re-Engineering Advantages
✓ Reduced risk: there is a high risk in new software development; development
problems, staffing problems, and specification problems may all arise.
✓ Reduced cost: the cost of re-engineering is often significantly less than the
cost of developing new software.
Business Process Re-Engineering
• BPR extends far beyond the scope of IT and SE.
• Concerned with re-designing business processes to make them more responsive
and more efficient.
Business Process:
• BP is a set of logically related tasks performed to achieve a defined business
outcome.
• Within the BP, people, equipment, material resources, and business procedures are
combined to produce a specified result.
• The overall business can be segmented as follows:
Business -> Business System -> Business Process -> Business Sub-process.
• BPR can be applied at any level of this hierarchy, but as the scope of BPR
broadens, the risks associated with it grow dramatically.
BPR Model:
- *BPR is an Iterative Model*: The BPR process is evolutionary and does not have a
fixed start or end. It adapts continuously to changes in the environment.
- *BPR Model Activities*:
1. *Business Definition*:
- Identify business goals based on four key drivers: cost reduction, time reduction,
quality improvement, and personal development/empowerment.
- Goals can be defined at the overall business level or for specific business components.
2. *Process Identification*:
- Determine which processes are necessary to achieve the identified business goals.
- Rank these processes by importance, need for change, or other relevant criteria.
3. *Process Evaluation*:
- Analyze and measure the current process thoroughly.
- Identify tasks within the process and note the cost, time consumption, and performance
issues.
4. *Process Specification and Design*:
- Develop use-cases based on information from the previous activities.
- Define scenarios within these use-cases that reflect outcomes for the customer.
- Design new tasks and processes to address identified needs.
5. *Prototyping*:
- Create a prototype of the redesigned business process.
- Test the prototype to make necessary refinements before full integration.
• Re-engineering takes time, it costs significant amounts of money, and it absorbs
resources. For all these reasons, re-engineering is not accomplished in a few months
or even a few years.
• Re-engineering of information systems is an activity that will absorb IT
resources for many years. That is why every organization needs a
pragmatic strategy for software re-engineering.
• A workable strategy is encompassed in re-engineering process model.
• Re-engineering is a rebuilding activity.
Eg: Rebuilding of a house
❖ Before you can start rebuilding, it would seem reasonable to inspect the house.
❖ Before you tear down and rebuild the entire house, be sure that the structure is weak.
❖ Before you start rebuilding be sure you understand how the original was built.
❖ If you begin to rebuild, use only the most modern, long lasting materials.
❖ If you decide to rebuild be disciplined about it.
Software Re-engineering Activities:
• The scenario is all too common: an application has served the business
needs of a company for 10 or 15 years.
• During that time it has been corrected, adapted, and enhanced many times.
• The re-engineering activities paradigm is a cyclical model, meaning that each
activity presented as part of the paradigm may be revisited.
• In total there are six software re-engineering activities.
❖ Inventory analysis:
• Every software organization should have an inventory of all applications.
• The inventory is nothing more than a spreadsheet-style document.
• By sorting this information according to business criticality, longevity, etc.,
resources can then be allocated to candidate applications for re-engineering work.
• The inventory should be revisited on a regular cycle, because the status of
applications can change.
❖ Document Restructuring:
• Weak documentation is the trademark of many legacy systems. What can you do about
it? What are your options?
➢ Creating documentation is far too time consuming:
▪ If a system works, you live with what you have.
▪ In some cases this is the correct approach, because it is not possible to
re-create documentation for hundreds of computer programs.
▪ If a program is relatively static and is coming to the end of its useful
life, leave it as it is; otherwise documentation will eventually be needed.
➢ Documentation must be updated, but your organization has limited resources:
▪ Here a "document when touched" approach is used; it is not necessary to
fully document the application.
▪ Rather, those portions of the system that are currently undergoing change
are fully documented.
➢ The system is business critical and must be fully re-documented:
▪ Even in this case, an intelligent approach is to pare documentation down to
an essential minimum.
❖ Reverse engineering:
• The term reverse engineering (RE) has its origins in the hardware world.
• A company disassembles a competitor's hardware product in an effort to understand
its design and manufacturing "secrets."
• These secrets could be easily understood if the competitor's specifications were
obtained, but such documents are proprietary and not available.
• Because of this, engineers derive one or more design and manufacturing
specifications for a product by examining actual instances of it.
• Sometimes RE is applied to a company's own work, to recover specifications that
have been lost.
• RE is therefore the process of design recovery.
• RE tools extract data, architectural, and procedural design information from an
existing system.
❖ Code Restructuring:
• The most common type of re-engineering is code restructuring.
• Some legacy systems have a solid program architecture, but individual modules were
coded in a way that makes them difficult to understand, test, and maintain.
• In such cases, the code within the suspect modules can be restructured.
• To accomplish this task, the source code is analyzed using restructuring tools.
• Violations of structured programming constructs are noted, and the code is then
restructured (this can be done automatically) or rewritten in a more modern language.
• The resultant code is reviewed and tested to ensure that no anomalies have been
introduced.
• Internal code documentation is updated.
❖ Data restructuring:
• A program with a weak data architecture will be difficult to adapt and enhance.
• Code restructuring occurs at a low level, whereas data restructuring is a
full-scale re-engineering activity.
• In most cases, data restructuring begins with reverse engineering actions:
• The current data sets are dissected and the necessary data models are defined.
• Data objects and attributes are identified, and the existing data structures are
reviewed for quality.
• When the data architecture is weak, the data are re-engineered, because the data
architecture has a strong influence on both architecture- and code-level changes.
❖ Forward engineering:
• Applications can be rebuilt using an automated re-engineering engine.
• The old program is fed into the engine, analyzed, restructured, and then
regenerated in a form that exhibits the best aspects of software quality.
Reverse Engineering
• Reverse engineering converts an undocumented source file into fully documented
source code.
• In reverse engineering, the designer must extract design information from source
code, but:
❖ the abstraction level,
❖ the completeness of the documentation,
❖ the degree to which tools and a human analyst work together, and
❖ the directionality of the process
are all highly variable.
• The abstraction level refers to the sophistication of the design information that
can be extracted from the source code.
• The RE process should be capable of deriving:
❖ procedural design representations (a low level of abstraction),
❖ program and data structure information (a somewhat higher level),
❖ object models (a high level).
• As the abstraction level increases, you are provided with information that allows
easier understanding of the program.
• The completeness of the RE process refers to the level of detail that is provided
at a given abstraction level.
• Completeness improves in direct proportion to the amount of analysis performed by
the person doing the reverse engineering.
• Interactivity refers to the degree to which the human is integrated with automated
tools to create an effective reverse engineering process.
• In most cases, as the abstraction level increases, interactivity must increase or
completeness will suffer.
• If the directionality of the process is one-way, all information extracted from the
source code is provided to the software engineer, who can use it during any
maintenance activity.
• If directionality is two-way, the information is fed into re-engineering tools that
attempt to restructure or regenerate the old program.
• Before re-engineering commences, unstructured source code is restructured.
• This makes the source code easier to read and provides the basis for all subsequent
reverse engineering activities.
• You must evaluate the old program from its source code and develop:
❖ a meaningful specification,
❖ an understanding of the user interface applied, and
❖ an understanding of the program data structures or database used.
Reverse engineering to understand data
• Reverse engineering of data occurs at different levels of abstraction and is often
the first re-engineering task.
• At the program level, internal program data structures must often be reverse
engineered as part of the overall re-engineering effort.
• At the system level, global data structures can be evaluated.
Internal data structures:
• RE techniques for internal program data focus on the definition of classes of
objects.
• In many cases, the data organization within the code identifies abstract data types.
Database structure:
• Regardless of its logical organization and physical structure, a database allows
the definition of data objects and supports some method of establishing relationships
among the objects.
• Therefore, re-engineering one database schema into another requires an
understanding of the existing objects and their relationships.
• The following steps may be used to re-engineer to a new database:
❖ Build an initial object model.
❖ Determine candidate keys.
❖ Refine the tentative classes.
❖ Define generalizations.
❖ Discover associations using CRC techniques.
Reverse engineering to understand processing:
• RE to understand processing begins with an attempt to understand, and then extract,
the procedural abstractions represented in the source code.
• To understand a procedural abstraction, the code is analyzed at varying levels of
abstraction.
• Each program that makes up the application represents a functional abstraction at a
high level.
• A block diagram can be created.
• In some cases, system, program, and component specifications already exist; in this
situation, the specifications are reviewed.
• Things become more complex when the code inside a component represents a generic
procedural pattern.
• In almost every component, one section of code prepares data for processing (within
the module), a different section of code does the processing, and another section
prepares the results of processing for export.
• For large systems, RE is generally accomplished using a semi-automated approach.
• Automated tools are used to help understand the semantics of the code.
• The output of this process is then passed to restructuring and forward engineering
tools to complete the re-engineering process.
RE user interface:
• Sophisticated GUIs have become mandatory for computer-based systems and products.
• Before a user interface can be rebuilt, reverse engineering should occur.
• The structure and behaviour of the present user interface must be specified in
order to gain a complete understanding of it.
• Merlo and his coworkers suggest that the following questions be answered before a
UI rebuild begins:
• What are the basic actions that the interface must process?
• What is a compact description of the behavioural response of the system to these
actions?
• What is meant by "replacement", and how does the notion of interface equivalence
apply here?
• The answers to the first two questions can be obtained by modelling behaviour.
• It is frequently advantageous to conceive new interaction metaphors during the
rebuild.
Restructuring
• Software restructuring modifies source code and/or data in an effort to make the
software amenable to future changes.
• Restructuring does not modify the overall program architecture.
• It mainly focuses on the design details of individual modules and on local data
structures within modules.
• If the restructuring effort extends beyond module boundaries and encompasses the
software architecture, restructuring becomes forward engineering.
• Restructuring occurs when the basic architecture of an application is solid, even
though its technical internals need work.
• It is initiated when major parts of the software are serviceable and only a subset
of all modules and data need extensive modification.
Code Restructuring:
• Code restructuring is performed to yield a design that produces the same function
as the original program, but with higher quality.
• In general, code restructuring technology models program logic using Boolean
algebra and then applies a series of transformation rules that restructure the logic.
• A resource exchange diagram is a map that depicts each program module and the
resources that are exchanged between that module and the other modules.
• By first creating a representation of resource flow, the program design can be
restructured to achieve minimum coupling among modules, allowing for more
flexibility.
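As a minimal illustration (the code below is hypothetical, not from any particular
legacy program), restructuring preserves the program's function while simplifying its
logic:
# Before: convoluted, deeply nested logic that is hard to read, test, and maintain.
def classify_before(x):
    if x > 0:
        if x % 2 == 0:
            return "positive even"
        else:
            return "positive odd"
    else:
        if x == 0:
            return "zero"
        else:
            return "negative"

# After restructuring: the same function, expressed as flat guard clauses.
def classify_after(x):
    if x == 0:
        return "zero"
    if x < 0:
        return "negative"
    return "positive even" if x % 2 == 0 else "positive odd"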
Data Restructuring:
• Reverse engineering (analysis of the source code) must be carried out before data
restructuring can begin.
• All programming-language statements that contain data definitions, file
descriptions, I/O, or interface descriptions are evaluated.
• The purpose of this activity, known as data analysis, is to extract the data items
and objects, obtain information on data flow, and understand the existing data
structures that have been implemented.
• Once data analysis is complete, data redesign begins.
• A data record standardization step clarifies data definitions to achieve
consistency among data item names and physical record formats within the existing
data structures or file formats.
• Another form of redesign, called data name rationalization, ensures that all data
naming conventions conform to local standards and that aliases are eliminated as data
flow through the system.
• When restructuring moves beyond standardization and rationalization, physical
modifications can be made to the existing data design to make it more effective.
• This may mean translation from one file format to another, or in some cases
translation from one type of database to another.
Forward Engineering
• Consider a program whose control flow is the graphic equivalent of a bowl of
spaghetti. You have several options:
❖ You can struggle through modification after modification, fighting the ad
hoc design and source code to implement the necessary changes.
❖ You can attempt to understand the broader inner workings of the program in
an effort to make modifications more efficiently.
❖ You can redesign, recode, and test those portions of the software that
require modification, applying a software engineering approach.
❖ You can completely redesign, recode, and test the program, using
re-engineering tools to assist you in understanding the current design.
• There is no single correct option; circumstances may dictate the first option even
when the others are more desirable.
• In many cases, a mainframe application can be segmented into a set of desktop
applications controlled by a business rules layer.
Client application layer:
• It implements the business functions required by a specific group of end users.
Forward engineering for object oriented architectures
• First, the existing system is reverse engineered so that appropriate data,
functional, and behavioural models can be created.
• If the re-engineered system extends the functionality or behaviour of the original
application, use cases are created.
• The data models created during reverse engineering are then used in conjunction
with CRC modelling to establish the basis for the definition of classes.
• Class hierarchies, object-behaviour models, and subsystems are defined, and
object-oriented design commences.
• As object-oriented forward engineering progresses from analysis to design.