
Gate Degree & PG College

SOFTWARE ENGINEERING
Syllabus

UNIT-1
Introduction to Software Engineering:

The term software engineering is the product of two words: software and engineering.
Software is a collection of integrated programs. It consists of carefully organized instructions and code written by developers in any of various programming languages, together with related documentation such as requirements, design models and user manuals.
Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.

Software Engineering is required:


Software Engineering is required due to the following reasons:
o To manage Large software
o For more Scalability
o Cost Management
o To manage the dynamic nature of software
o For better quality Management
Need of Software Engineering:
The necessity of software engineering arises because of the high rate of change in user requirements and in the environment in which the software operates.
o Huge Programming: It is easier to build a wall than a house or a building; similarly, as the size of software becomes large, engineering has to step in to give it a scientific, disciplined process.
o Adaptability: If the software process were not based on scientific and engineering ideas, it would be easier to re-create new software than to scale an existing one.
o Cost: The hardware industry has shown its skill, and mass manufacturing has driven down the cost of computer and electronic hardware. The cost of software, however, remains high if a proper process is not adopted.
o Dynamic Nature: The continually growing and adapting nature of software depends heavily on the environment in which the user works. If the software is continually changing, new upgrades need to be made to the existing one.
o Quality Management: A better process of software development provides a better, higher-quality software product.
Characteristics of a good software engineer:
The features that good software engineers should possess are as follows:
Exposure to systematic methods, i.e., familiarity with software engineering principles.
Good technical knowledge of the project range (Domain knowledge).
Good programming abilities.
Good communication skills. These comprise oral, written, and interpersonal skills.
High motivation.
Sound knowledge of fundamentals of computer science.
Intelligence.
Ability to work in a team
Importance of Software Engineering:

The importance of Software engineering is as follows:


1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering offers a good way to reduce the complexity of any project: it divides big problems into many small issues and then solves each small issue one by one. All these small problems are solved independently of each other.
2. To minimize software cost: Software requires a lot of hard work, and software engineers are highly paid experts. A lot of manpower is required to develop software with a large amount of code. In software engineering, however, programmers plan everything and eliminate the things that are not needed. As a result, the cost of software production is lower than for software built without software engineering methods.
3. To decrease time: Anything not built according to a plan wastes time. When building large software you may need to write and run a great deal of code before arriving at the definitive running code. This is a very time-consuming procedure, and if it is not handled well it can take a lot of time. So if you build your software following software engineering methods, it will save a great deal of time.
4. Handling big projects: Big projects are not completed in a couple of days; they need patience, planning, and management. Investing six or seven months of a company's time requires a great deal of planning, direction, testing, and maintenance. No one can say they have spent four months of a company's time on a task while the project is still in its first stage, because the company has committed many resources to the plan and expects it to be completed. So to handle a big project without problems, the company has to adopt software engineering methods.
5. Reliable software: Software should be dependable: once delivered, it should work for at least its stated period or subscription, and if any bugs appear, the company is responsible for fixing them. Because testing and maintenance are built into software engineering, there is no need to worry about reliability.
6. Effectiveness: Effectiveness comes from building things according to standards. Meeting software standards is a major goal of companies in making products more effective, and software becomes more effective with the help of software engineering.

Size Factors of Software Engineering:


In software engineering, size factors play a crucial role in project estimation, planning, and
management. These factors help determine the scope, complexity, and effort required to
complete a software project. Here are some key size factors:
1. Lines of Code (LOC):
o Measures the total number of lines in the source code.
o Often used for productivity and quality metrics.
2. Function Points (FP):
o Measures the functionality provided to the user based on inputs, outputs, user interactions,
files, and external interfaces.
o Helps in comparing different software projects.
3. Use Case Points (UCP):
o Based on the number and complexity of use cases in the system.
o Considers actors and use case scenarios to estimate effort.
4. Object Points (OP):
o Measures the number of objects or classes in object-oriented design.
o Factors in object complexity and their interactions.
5. Feature Points:
o An extension of function points, taking into account additional factors like algorithm
complexity.
6. Story Points:
o Used in Agile methodologies.
o Measures the effort required to implement a user story based on complexity, risks, and
uncertainties.
7. Effort (Person-Months):
o Measures the amount of work required in terms of person-months or person-hours.
o Derived from other size factors to plan resources.
8. Software Size Metrics:
o Kilo Line of Code (KLOC): Thousands of lines of code.
o Effective Lines of Code (eLOC): Lines of code excluding comments and blank lines.
9. Complexity Metrics:
o Cyclomatic Complexity: Measures the number of linearly independent paths through a
program’s source code.
o Halstead Complexity Measures: Based on the number of operators and operands in the
code.
10. Work Breakdown Structure (WBS):
o Divides the project into smaller, manageable sections or tasks.
o Each section’s size can be estimated to sum up to the total project size.
These size factors are often used in conjunction with estimation models such as COCOMO
(Constructive Cost Model), which uses size factors to predict the effort, cost, and duration of
a software project.
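To make the LOC-based size factors above concrete, the following minimal sketch counts total lines and effective lines of code (eLOC) for one source file, treating blank lines and comment-only lines as non-effective. The file name and the '#'-style comment convention are assumptions for illustration, not part of the original notes.

import sys

# Minimal sketch: counting LOC and effective LOC (eLOC) for one source file.
# Assumes Python-style '#' comments; block comments and strings are ignored for simplicity.

def count_loc(path: str) -> tuple:
    """Return (total lines, effective lines excluding blanks and comment-only lines)."""
    total = effective = 0
    with open(path, encoding="utf-8") as source:
        for line in source:
            total += 1
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                effective += 1
    return total, effective

if __name__ == "__main__":
    loc, eloc = count_loc(sys.argv[1] if len(sys.argv) > 1 else "example.py")  # hypothetical file
    print(f"LOC = {loc}, eLOC = {eloc}")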

Quality and productivity Factors:


Quality and productivity are two critical aspects of software engineering that significantly
influence the success of a project. Here are some important factors that affect quality and
productivity in software engineering:
Quality Factors
1. Reliability:
o The ability of the software to perform its required functions under stated conditions for a
specified period.
o Metrics: Mean Time Between Failures (MTBF), Mean Time to Repair (MTTR).
2. Maintainability:
o The ease with which the software can be modified to correct faults, improve performance, or
adapt to a changed environment.
o Metrics: Change request frequency, defect density, code complexity.
3. Usability:
o The degree to which the software can be used by specified users to achieve specified goals
with effectiveness, efficiency, and satisfaction.
o Metrics: User satisfaction surveys, task completion time, error rates.
4. Efficiency:
o The capability of the software to provide appropriate performance relative to the amount of
resources used.
o Metrics: Response time, throughput, resource utilization.
5. Portability:
o The ease with which the software can be transferred from one environment to another.
o Metrics: Number of environments supported, effort required for porting.
6. Security:
o The software's ability to protect information and data so that unauthorized persons or systems
cannot read or modify them and authorized persons or systems are not denied access.
o Metrics: Number of security incidents, time to detect and respond to security threats.
7. Functionality:
o The degree to which the software performs its intended functions.
o Metrics: Compliance with requirements, number of features implemented.
8. Interoperability:
o The ability of the software to interact with other systems or software.
o Metrics: Number of supported integrations, ease of integration.
Productivity Factors
1. Development Process:
o The methodologies and practices used during software development.
o Metrics: Development time, defect rates, adherence to schedule.
2. Team Skills and Experience:
o The knowledge, experience, and skills of the development team.
o Metrics: Team experience levels, training hours, developer productivity.
3. Tools and Technologies:
o The effectiveness of development tools, programming languages, and frameworks used.
o Metrics: Tool usage frequency, defect rates, development speed.
4. Project Management:
o The practices related to planning, tracking, and managing software projects.
o Metrics: Schedule adherence, budget adherence, project success rates.
5. Communication:
o The effectiveness of communication within the development team and with stakeholders.
o Metrics: Frequency of meetings, communication clarity, feedback loop efficiency.
6. Requirements Management:
o The process of eliciting, documenting, and managing software requirements.
o Metrics: Requirements stability, requirements clarity, changes in requirements.
7. Code Quality:
o The overall quality of the source code, including readability, maintainability, and complexity.
o Metrics: Code reviews, static code analysis results, refactoring frequency.
8. Testing and Quality Assurance:
o The processes and practices used to ensure software quality through testing and validation.
o Metrics: Test coverage, defect detection rate, defect resolution time.
9. Automation:
o The extent to which development and testing processes are automated.
o Metrics: Build frequency, deployment frequency, automated test coverage.
10. Work Environment:
o The physical and psychological conditions under which the development team works.
o Metrics: Team morale, turnover rates, work-life balance.
Balancing these quality and productivity factors is crucial for delivering high-quality
software within time and budget constraints. Effective management practices, continuous
improvement, and adopting the right tools and methodologies can significantly enhance both
quality and productivity in software engineering.
Managerial issues in software engineering:
Managerial issues in software engineering encompass a wide range of challenges that
managers face while planning, executing, and controlling software projects. Addressing these
issues effectively is crucial for the successful delivery of software products. Here are some
of the key managerial issues:
1. Project Planning and Scheduling:
• Estimating Project Size and Effort: Accurately estimating the size and effort required for a
project can be challenging, leading to over- or under-estimation.
• Resource Allocation: Ensuring that the right resources (e.g., developers, testers) are available
and efficiently utilized throughout the project lifecycle.
• Scheduling: Creating realistic timelines and milestones, and adapting to changes in scope or
unexpected delays.
2. Risk Management:
• Identifying Risks: Recognizing potential risks early in the project, such as technical
challenges, resource shortages, or changes in requirements.
• Mitigating Risks: Developing strategies to minimize the impact of identified risks and
preparing contingency plans.
3. Scope Management:
• Scope Creep: Managing changes to the project scope to prevent uncontrolled growth and
ensure that new requirements are properly evaluated and integrated.
• Requirements Management: Ensuring that requirements are well-defined, documented, and
agreed upon by all stakeholders.
4. Quality Management:
• Quality Assurance: Implementing processes to ensure that the software meets defined quality
standards and is free of defects.
• Testing: Planning and executing comprehensive testing strategies to identify and fix issues
before deployment.
5. Team Management:
• Team Dynamics: Managing diverse teams, addressing conflicts, and fostering a collaborative
and productive work environment.
• Skill Development: Ensuring that team members have the necessary skills and providing
training or mentoring as needed.
• Motivation and Retention: Keeping the team motivated and engaged, and addressing factors
that contribute to employee turnover.
6. Communication and Collaboration:
• Stakeholder Communication: Ensuring clear and consistent communication with all
stakeholders, including clients, team members, and management.
• Collaboration Tools: Utilizing effective tools and practices to facilitate collaboration and
information sharing among team members.
7. Budget Management:
• Cost Estimation: Accurately estimating project costs and creating realistic budgets.
• Budget Control: Monitoring expenditures and ensuring that the project stays within budget.
8. Process Management:
• Adopting Methodologies: Choosing and implementing the appropriate software development methodologies (e.g., Agile, Waterfall) that fit the project needs.
• Process Improvement: Continuously evaluating and improving development processes to enhance efficiency and quality.
9. Technology Management:
• Tool Selection: Choosing the right tools and technologies that align with project requirements and team capabilities.
• Keeping Up with Trends: Staying updated with the latest industry trends and advancements to ensure that the project benefits from modern practices and technologies.
10. Change Management:
• Handling Changes: Managing changes in project scope, technology, team composition, and other factors effectively.
• Adaptability: Ensuring that the team and processes are flexible enough to adapt to changes without significant disruption.
11. Performance Monitoring:
• Tracking Progress: Regularly monitoring project progress against planned milestones and performance metrics.
• Performance Metrics: Using key performance indicators (KPIs) to assess productivity, quality, and other critical aspects.
12. Client and User Involvement:
• Requirements Gathering: Engaging clients and users in the requirements-gathering process to ensure that the final product meets their needs.
• Feedback: Soliciting and incorporating feedback from clients and users throughout the project lifecycle.
13. Compliance and Legal Issues:
• Regulatory Compliance: Ensuring that the software complies with relevant regulations and standards.
• Intellectual Property: Managing intellectual property rights and ensuring that the software does not infringe on third-party rights.
Effectively managing these issues requires a combination of strong leadership, effective
communication, and the ability to adapt to changing circumstances. By addressing these
managerial challenges, software engineering managers can increase the likelihood of
delivering successful projects on time and within budget.

Planning a Software Project:


The objective of software project planning is to provide a framework that enables the
manager to make reasonable estimates of resources, cost, and schedule. These estimates are
made within a limited time frame at the
beginning of a software project. Also they should be updated regularly as the project
progresses. The planning objective is achieved through a process of information discovery
that leads to reasonable estimates.
Software Scope:
The first activity in software project planning is the determination of software scope.
Software scope describes function, performance, constraints, interfaces, and reliability.
Functions are evaluated and in some cases refined to provide more detail prior to the
beginning of estimation. Both cost and schedule estimates are functionally oriented and
hence some degree of decomposition is often useful. Performance considerations encompass
processing and response time requirements. Constraints identify limits placed on the software
by external hardware, available memory, or other existing systems. To define scope, it is
necessary to obtain the relevant information and hence get the communication process started
between the customer and the developer. To accomplish this, a preliminary meeting or
interview is to be conducted. The analyst may start by asking context free questions. That is,
a set of questions that will lead to a basic understanding of the problem, the people who want
a solution, the nature of the solution that is desired, and the effectiveness of the first encounter
itself. The next set of questions enable the analyst to gain a better understanding of the
problem and the customer to voice his or her perceptions about a solution. The final set of
questions, known as Meta questions, focus on the effectiveness of the meeting. A team-
oriented approach, such as the Facilitated Application Specification Techniques (FAST),
helps to establish the scope of a project.
Resources :
The development resources needed are:
1. Development Environment (Hardware/Software Tools)
2. Reusable Software Components
3. Human Resources (People)
Each resource is specified with four characteristics – description of the resource, a statement
of availability, chronological time that the resource will be required, and duration of time that
the resource will be applied. The last two characteristics can be viewed as a time window.
Availability of the resource for a specified window must be established at the earliest
practical time.
Human Resources:
Both organizational positions (e.g. Manager, senior software engineer, etc.) and specialty
(e.g. Telecommunications, database, etc.) are specified. The number of people required varies
for every software project and it can be determined only after an estimate of development
effort is made.
Reusable Software Resources:
These are the software building blocks that can reduce development costs and speed up the
product delivery. The four software resource categories that should be considered as planning
proceeds are:
1. Off-the-shelf components:
Existing software that can be acquired from a third party or that has been developed internally for a past project. These components are ready for use and have been fully validated. Generally, the cost for acquisition and integration of such components will be less than the cost to develop equivalent software.
2. Full-experience components :
Existing specifications, designs, code, or test data developed for past projects that are similar
to the current project. Members of the current software team have had full experience in the
application area represented by these components. Therefore modifications will be relatively
low risk.
3. Partial-experience components:
Existing specifications, designs, code, or test data developed for past projects that are related
to the current project, but will require substantial modification. Members of the current
software team have only limited experience in the application area represented by these
components. Therefore, modifications will carry a fair degree of risk, and hence their use
for the current project must be analyzed in detail.
4. New components:
Software components that must be built by the software team specifically for the needs of
the current project.
Environmental Resources:
The environment that supports the software project, often called Software Engineering
Environment (SEE), incorporates hardware and software. Hardware provides a platform
that supports the tools required to produce the work products. A project planner must
prescribe the time window required for hardware and software and verify that these resources
will be available.
The phases of a software project :
Software projects are divided into individual phases. These phases collectively, and their chronological sequence, are termed the software life cycle.
Software life cycle: the time span in which a software product is developed and used, extending to its retirement.
The cyclical nature of the model expresses the fact that the phases can be carried out repeatedly in the development of a software product.
Requirements analysis and planning phase
Goal:

➢ Determining and documenting:


❖ Which steps need to be carried out,
❖ The nature of their mutual effects,
❖ Which parts are to be automated, and
❖ Which resources are available for the realization of the project.
Important activities:
➢ Completing the requirements analysis,
➢ Delimiting the problem domain,
➢ Roughly sketching the components of the target system,
➢ Making an initial estimate of the scope and the economic feasibility of
the planned project, and
➢ Creating a rough project schedule.
Products:
➢ User requirements,
➢ Project contract, and
➢ Rough project schedule.

System specification phase
Goal:
➢ A contract between the client and the software producer that precisely specifies what the target software system must do and the premises for its realization.
Important activities:
➢ Specifying the system,
➢ Compiling the requirements definition,
➢ Establishing an exact project schedule,
➢ Validating the system specification, and
➢ Justifying the economic feasibility of the project.
Products:
➢ Requirements definition, and
➢ Exact project schedule.

System and components design
Goal:
➢ Determining which system components will cover which requirements in the system specification, and
➢ How these system components will work together.
Important activities:
➢ Designing system architecture,
➢ Designing the underlying logical data model,
➢ Designing the algorithmic structure of the system components, and
➢ Validating the system architecture and the algorithms to realize the
individual system components.
Products:
➢ Description of the logical data model,
➢ Description of the system architecture,
➢ Description of the algorithmic structure of the system components, and
➢ Documentation of the design decisions.

Implementation and component test


Goal:
➢ Transforming the products of the design phase into a form that is
executable on a computer.
Important activities:
➢ Refining the algorithms for the individual components,
➢ Transferring the algorithms into a programming language (coding),
➢ Translating the logical data model into a physical one,
➢ Compiling and checking the syntactical correctness of the algorithm, and
➢ Testing, and syntactically and semantically correcting erroneous
system components.
Products:
➢ Program code of the system components,
➢ Logs of the component tests, and
➢ Physical data model.

System test
Goal:
➢ Testing the mutual effects of system components under conditions close
to reality,
➢ Detecting as many errors as possible in the software system, and
➢ Assuring that the system implementation fulfills the system specification.

Operation and maintenance


Task of software maintenance:
➢ Correcting errors that are detected during actual operation, and
➢ Carrying out system modifications and extensions.
This is normally the longest phase of the software
life cycle. Two important additional aspects:
➢ Documentation, and
➢ Quality assurance.
During the development phases the documentation should enable
communication among the persons involved in the development;
upon completion of the development phases it supports the
utilization and maintenance of the software product.
Quality assurance encompasses analytical, design and
organizational measures for quality planning and for fulfilling quality
criteria such as correctness, reliability, user friendliness,
maintainability, efficiency and portability.
DEFINING THE PROBLEM:
Most software projects are undertaken to provide solution to
business needs. In the beginning of a software project the business
needs are often expressed informally as part of a meeting or a casual
conversation. In a more formal approach, a customer could send
Request For Information (RFI) to organizations to know their area of
expertise and domain specifications. The customer puts up a Request
For Proposal (RFP) stating the business needs. Organizations will to
provide their services will send proposals and one of the proposals is
accepted by the customer.

DEVELOPING A SOLUTION STRATEGY:


The business needs have to be understood and the role of
software in providing the solution has to be identified. Software
development requires a model to be used to drive it and tract it to
completion. The model will provide an effective roadmap for the
software team.
PLANNING THE DEVELOPMENT PROCESS:

Planning the software development process involves several


important considerations. The first consideration is to define a
product life-cycle model. A software project goes through various
phases before it is ready to be used for practical purposes. For every
project, a framework must be used to define the flow of activities such
as define, develop, test, deliver, operate, and maintain a software
product. There are many well-defined models that can be used. There
could be variations to these models also, depending on the
deliverables and milestones for the project. A model has to be
selected and finalized to start a project.
The following section discusses the various models such as:
a. Waterfall Model
b. Prototype Model

c. Spiral Model
d. Object-oriented life-cycle model
Waterfall Model:

The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-sequential life cycle model or classic model. It is very simple to understand and use.
In a waterfall model, each phase must be completed before the next phase can begin and there
is no overlapping in the phases.
The Waterfall model is the earliest SDLC approach that was used for software development.

The waterfall Model illustrates the software development process in a linear sequential flow.
This means that any phase in the development process begins only if the previous phase is
complete. In this waterfall model, the phases do not overlap.
The Waterfall approach was the first SDLC model to be used widely in Software Engineering to ensure
success of the project. In "The Waterfall" approach, the whole process of software
development is divided into separate phases. In this Waterfall model, typically, the outcome
of one phase acts as the input for the next phase sequentially.
The following illustration is a representation of the different phases of the Waterfall Model.

The sequential phases in Waterfall model are −

• Requirement Gathering and analysis − All possible requirements of the system to


be developed are captured in this phase and documented in a requirement
specification document.
• System Design − The requirement specifications from first phase are studied in this
phase and the system design is prepared. This system design helps in specifying
hardware and system requirements and helps in defining the overall system
architecture.
• Implementation − With inputs from the system design, the system is first developed
in small programs called units, which are integrated in the next phase. Each unit is
developed and tested for its functionality, which is referred to as Unit Testing.
• Integration and Testing − All the units developed in the implementation phase are
integrated into a system after testing of each unit. Post integration the entire system
is tested for any faults and failures.
• Deployment of system − Once the functional and non-functional testing is done; the
product is deployed in the customer environment or released into the market.
• Maintenance − There are some issues which come up in the client environment. To
fix those issues, patches are released. Also to enhance the product some better
versions are released. Maintenance is done to deliver these changes in the customer
environment.
All these phases are cascaded to each other, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals is achieved for the previous phase and it is signed off, hence the name "Waterfall Model". In this model, phases do not overlap.
Advantages:

Some of the major advantages of the Waterfall Model are as follows −


• Simple and easy to understand and use
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• It is disciplined in approach.
Disadvantages:

• No working software is produced until late during the life cycle.


• High amounts of risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for projects where requirements are at a moderate to high risk of changing. So, risk and uncertainty are high with this process model.

Spiral Model:

The spiral model, initially proposed by Boehm, is a combination of the waterfall and iterative models. Using the spiral model, the software is developed in a series of incremental releases. Each phase in the spiral model begins with a planning phase and ends with an evaluation phase.
The spiral model has four phases. A software project repeatedly passes through these phases in iterations called spirals.

Planning phase:
This phase starts with gathering the business requirements in the baseline spiral. In the
subsequent spirals as the product matures, identification of system requirements, subsystem
requirements and unit requirements are all done in this phase.
This phase also includes understanding the system requirements by continuous communication between the customer and the system analyst. At the end of the spiral, the product is deployed in the identified market.

Risk Analysis:
Risk Analysis includes identifying, estimating and monitoring the technical feasibility and
management risks, such as schedule slippage and cost overrun. After testing the build, at the
end of first iteration, the customer evaluates the software and provides feedback.

Engineering or construct phase:


The Construct phase refers to production of the actual software product at every spiral. In the baseline spiral, when the product is just thought of and the design is being developed, a POC (Proof of Concept) is developed in this phase to get customer feedback.
Evaluation Phase:
This phase allows the customer to evaluate the output of the project to date before the project continues to the next spiral.
Software project repeatedly passes through all these four phases.

Advantages:

• Flexible model
• Project monitoring is very easy and effective
• Risk management
• Easy and frequent feedback from users.
Disadvantages:

• It doesn’t work for smaller projects


• Risk analysis require specific expertise.
• It is costly model & complex.
• Project success is highly dependent on risk.
Prototype Model:

To overcome the disadvantages of the waterfall model, this model is implemented with a special factor called a prototype. It is also known as the revaluation model.

Step 1: Requirements gathering and analysis

A prototyping model starts with requirement analysis. In this phase, the requirements of the system are defined in detail. During this process, the users of the system are interviewed to learn what their expectations from the system are.

Step 2: Quick design

The second phase is a preliminary design or a quick design. In this stage, a simple design of the system is created. However, it is not a complete design. It gives a brief idea of the system to the user. The quick design helps in developing the prototype.

Step 3: Build a Prototype


In this phase, an actual prototype is designed based on the information gathered from the quick design. It is a small working model of the required system.
Step 4: Initial user evaluation

In this stage, the proposed system is presented to the client for an initial evaluation. It helps to find out the strengths and weaknesses of the working model. Comments and suggestions are collected from the customer and provided to the developer.

Step 5: Refining prototype

If the user is not happy with the current prototype, you need to refine the prototype according to the user's feedback and suggestions.

This phase is not over until all the requirements specified by the user are met. Once the user is satisfied with the developed prototype, a final system is developed based on the approved final prototype.

Step 6: Implement Product and Maintain

Once the final system is developed based on the final prototype, it is thoroughly tested and deployed to production. The system undergoes routine maintenance to minimize downtime and prevent large-scale failures.

Advantages:

• Users are actively involved in development. Therefore, errors can be detected in the
initial stage of the software development process.
• Missing functionality can be identified, which helps to reduce the risk of failure as
Prototyping is also considered as a risk reduction activity.
• Helps team members to communicate effectively
• Customer satisfaction exists because the customer can feel the product at a very early
stage.
Disadvantages:

• Prototyping is a slow and time-consuming process.


• The cost of developing a prototype is a total waste as the prototype is
ultimately thrown away.
• Prototyping may encourage excessive change requests.
• After seeing an early prototype, the customers may think that the actual product will be delivered to them soon.
• The client may lose interest in the final product when he or she is not happy with
the initial prototype.
The object-oriented life-cycle model:

• The usual division of a software project into phases remains intact with the use
of object-oriented techniques.
• The requirements analysis stage strives to achieve an understanding of the
client’s application domain.
• The tasks that a software solution must address emerge in the course of requirements
analysis.
• The requirements analysis phase remains completely independent of an
implementation technique that might be applied later.
• In the system specification phase the requirements definition describes what the
software product must do, but not how this goal is to be achieved.
• One point of divergence from conventional phase models arises because
implementation with object-oriented programming is marked by the assembly of
already existing components.

The advantages of object-oriented life-cycle model:


• Design no longer is carried out independently of the later implementation because
during the design phase we must consider which components are available for the
solution of the problem. Design and implementation become more closely
associated, and even the choice of a different programming language can lead to
completely different program structures.
• The duration of the implementation phase is reduced. In particular, (sub) products
become available much earlier to allow testing of the correctness of the design.
Incorrect decisions can be recognized and corrected earlier. This makes for closer
feedback coupling of the design and implementation phases.
• The class library containing the reusable components must be continuously
maintained. Savings at the implementation end are partially lost as they are reinvested in this maintenance. A new job title emerges, the class librarian, who
is responsible for ensuring the efficient usability of the class library.
• During the test phase, the function of not only the new product but also of the
reused components is tested. Any deficiencies in the latter must be documented
exactly. The resulting modifications must be handled centrally in the class library
to ensure that they impact on other projects, both current and future.
• Newly created classes must be tested for their general usability. If there is a
chance that a component could be used in other projects as well, it must be
included in the class library and documented accordingly. This also means that
the new class must be announced and made accessible to other programmers
who might profit from it. This places new requirements on the in-house
communication structures.

The actual software life cycle recurs when new requirements arise in the
company that initiates a new requirements analysis stage.
The object and prototyping-oriented life-cycle model
The specification phase steadily creates new prototypes. Each time
we are confronted with the problem of having to modify or enhance
existing prototypes. If the prototypes were already implemented with
object-oriented technology, then modifications and extensions are
particularly easy to carry out. This allows an abbreviation of the
specification phase, which is particularly important when proposed
solutions are repeatedly discussed with the client. With such an
approach it is not important whether the prototype serves solely for
specification purposes or whether it is to be incrementally developed to
the final product. If no prototyping tools are available, object-oriented
programming can serve as a substitute tool for modeling user
interfaces. This particularly applies if an extensive class library is
available for user interface elements.

For incremental prototyping (i.e. if the product prototype is to be used as the


basis for the implementation of the product), object-oriented programming
also proves to be a suitable medium. Desired functionality can be added
stepwise to the prototypes without having to change the prototype itself.
These results in a clear distinction between the user interfaces modeled in the
specification phase and the actual functionality of the program. This is
particularly important for the following reasons:
• This assures that the user interface is not changed during the implementation
of the program functionality. The user interface developed in collaboration with
the client remains as it was defined in the specification phase.
• In the implementation of the functionality, each time a subtask is completed, a
more functional prototype results, which can be tested (preferably together with
the client) and compared with the specifications. During test runs, situations
sometimes arise that require rethinking the user interface. In such cases the
software life cycle retreats one step and a new user interface prototype is
constructed.
Since the user interface and the functional parts of the program are largely decoupled, two cycles result that share a common core. The integration of the functional classes and the user interface classes creates a prototype that can be tested and validated. This places new requirements on the user interface and/or the functionality, so that the cycle begins again.
Types of project planning:

Quality plan: Describes the quality procedures and standards that will be used in a project.

Validation plan: Describes the approach, resources and schedule used for system validation.

Configuration management plan: Describes the configuration management procedures and structures to be used.

Maintenance plan: Predicts the maintenance requirements of the system, maintenance costs and effort required.

Staff development plan: Describes how the skills and experience of the project team members will be developed.

PLANNING AN ORGANIZATION STRUCTURE:


Completing a software project is a team effort. The following options
are available for applying human resources to a project that will require
‘n’ people working for ‘K’ years.
• ‘n’ individuals are assigned to ‘m’ different functional tasks.
• ‘n’ individuals are assigned to ‘m’ different functional tasks (m<n) so that
informal teams are established and coordinated by project manager.
• ‘n’ individuals are organized into ‘t’ teams and each team is assigned
one/more functional tasks.
Even though the above three approaches have their pros and cons,
option 3 is most productive.
There are several roles within each software project team. Some of
the roles in a typical software project are listed below:

Designation – Job Profile

Project Manager: Initiates, plans, tracks and manages resources of an entire project.

Module Leader: A software engineer who manages and leads the team working on a particular module of the software project. The module leader conducts reviews and has to ensure the proper functionality of the module.

Analyst: A software engineer who analyzes the gathered requirements. Analysis of the requirements is done to get a clear understanding of the requirements.

Domain Consultant: An expert who knows the system of which the software is a part. This involves technical knowledge of how the entities of the domain interface with the software being developed. For example, a banking domain consultant or a telecom domain consultant.

Reviewer: A software engineer who reviews artifacts like project documents or code. The review can be a technical review which scrutinizes the technical details of the artifact, or a review in which the reviewer ascertains whether or not the artifact adheres to a prescribed standard.

Architect: A software engineer involved in the design of the solution after the analyst has clearly specified the business requirements.

Developer: A software engineer who writes the code, tests it and delivers it error free.

Tester: A software engineer who conducts tests on the completed software or a unit of the software. If any defects are found, these defects are logged and a report is sent to the owner of the tested unit.

Programming Team Structure


Every programming team must have an internal structure. The
best team structure for any particular project depends on the nature
of the project and the product, and on the characteristics of the
individual team members. Basic team structure includes:
a. Democratic team: Team Member participate in all decisions
b. Chief Programmer Team: A chief programmer is assisted and
supported by other team members.
c. Hierarchical Team: The project leader assigns tasks, attends reviews and walkthroughs, detects problem areas, balances the workload, and participates in technical activities.
Democratic Team
This was first described by Weinberg as the “egoless team”. In
an egoless team goals are set and decisions made by group consensus.
Group leadership rotates from member to member based on the tasks


to be performed and the differing abilities of the team members. Work


products (requirements, design, source code, user manual, etc) are
discussed openly and are freely examined by all team members.
Advantage:
❖ Opportunity for each team member to contribute to decision
❖ Opportunity for team members to learn from one another
❖ Increased Job satisfaction that results from good communication in
open, non-threatening work environments.
Disadvantages
❖ Communication overhead required in reaching decision,
❖ All team members must work well together,
❖ Less individual responsibility and authority can result in less initiative
and less personal drive from team members.
The chief programmer team
Baker's organizational model ([Baker 1972])

➢ Important characteristics:
• The lack of a project manager who is not personally involved in
system development
• The use of very good specialists
• The restriction of team size
➢ The chief programmer team consists of:
• The chief programmer
• The project assistant
• The project secretary
• Specialists (language specialists, programmers, test specialists).
➢ The chief programmer is actively involved in the planning,
specification and design process and, ideally, in the implementation
process as well.
➢ The chief programmer controls project progress, decides all
important questions, and assumes overall responsibility.
➢ The qualifications of the chief programmer need to be accordingly high.
➢ The project assistant is the closest technical coworker of the chief
programmer.
➢ The project assistant supports the chief programmer in all important
activities and serves as the chief programmer's representative in the latter's
absence. This team member's qualifications need to be as high as those
of the chief programmer.
➢ The project secretary relieves the chief programmer and all other


programmers of administrative tasks.


➢ The project secretary administrates all programs and documents and
assists in project progress checks.
➢ The main task of the project secretary is the administration of the project
library.
➢ The chief programmer determines the number of specialists needed.
➢ Specialists select the implementation language, implement individual
system components, choose and employ software tools, and carry out
tests.
Advantages
• The chief programmer is directly involved in system development and can
better exercise the control function.
• Communication difficulties of pure hierarchical
organization are ameliorated. Reporting concerning project
progress is institutionalized.
• Small teams are generally more productive than large teams.
Disadvantages
• It is limited to small teams. Not every project can be handled by a small
team.
• Personnel requirements can hardly be met. Few software engineers can
meet the qualifications of a chief programmer or a project assistant.
• The project secretary has an extremely difficult and responsible job,
although it consists primarily of routine tasks, which gives it a
subordinate position. This has significant psychological disadvantages.
Due to the central position, the project secretary can easily become a
bottleneck.
The organizational model provides no replacement for the project
secretary. The loss of the project secretary would have serious consequences for the remaining course of the project.
Hierarchical organizational model
➢ There are many ways to organize the staff of a project. For a long time the
organization of software projects oriented itself to the hierarchical
organization common to other industrial branches. Special importance
is

UNIT-II

SOFTWARE COST FACTORS

1. Programmer Ability
An experiment by Sackman and colleagues aimed to determine the relative influence of batch and time-shared access on programmers' productivity.
Example: 12 programmers were each given 2 programs to write; the programmers had up to 11 years of experience; the observed productivity variation was 16:1.
Individual differences in ability can be significant.
2. Product Complexity
There are 3 categories of software products:
✓ Application programs
✓ Utility programs
✓ System programs
Brooks states that utility programs are three times as difficult to write as application programs, and that systems programs are three times as difficult to write as utility programs.
1 (Application) - 3 (Utility) - 9 (Systems)

Boehm gives effort equations for the three levels, where PM is the effort in programmer-months and KDSI is the number of thousands of delivered source instructions:

Application programs: PM = 2.4 * (KDSI)^1.05
Utility programs:     PM = 3.0 * (KDSI)^1.12
Systems programs:     PM = 3.6 * (KDSI)^1.20

3. Product Size:
A large software product is obviously more expensive to develop than a small one. Boehm's equations indicate that the rate of increase in required effort grows with the number of source instructions at an exponential rate slightly greater than one.
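As a worked illustration of the equations above (a minimal sketch, not part of the original notes), the snippet below computes programmer-months for each category; the 10 KDSI input is an arbitrary example value.

# Sketch of Boehm's effort equations above: PM = a * KDSI**b for the
# three product categories. The 10 KDSI input is an arbitrary example.

COEFFICIENTS = {
    "application": (2.4, 1.05),
    "utility":     (3.0, 1.12),
    "systems":     (3.6, 1.20),
}

def effort_pm(kdsi: float, category: str) -> float:
    """Estimated effort in programmer-months for the given product category."""
    a, b = COEFFICIENTS[category]
    return a * kdsi ** b

for category in COEFFICIENTS:
    print(f"{category:11s}: {effort_pm(10.0, category):5.1f} PM")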

3
4.Available time
Total project effort is sensitive to the calendar time available for project completion. Most of them agree
that software projects require more total effort if development time is compressed or expanded from the
optimal time.
5. Required Level of Reliability
Software reliability can be defined as the probability that a program will perform a required function under stated conditions for a stated period of time. It can be expressed in terms of accuracy, robustness, completeness, and consistency of the source code.
Boehm describes five categories

Category Effect of failure

Very low Slight inconvenience

Low Losses easily recovered

Nominal Moderately difficult to recover losses

High High financial loss

Very high Risk to human life

6. Level of Technology
The level of technology in a software development project is reflected by the programming language, the abstract machine, the programming practices, and the software tools used. The number of source instructions written per day is largely independent of the language used, and statements written in a high-level language expand into several machine-level statements.

SOFTWARE COST ESTIMATION TECHNIQUES


✓ Software cost estimates are based on past performance.
✓ Historical data are used to identify cost factors and determine the relative importance of various factors within the environment of that organization.
✓ Cost estimates can be either top-down or bottom-up.
✓ Top-down estimation first focuses on system-level costs (such as the personnel required to develop the system).
✓ Bottom-up cost estimation first estimates the cost to develop each module or subsystem.

1. Expert Judgement
➢ The most widely used cost estimation technique is expert judgement, which is an inherently top-down estimation technique.
➢ Expert judgement relies on the experience, background, and business sense of one or more key people in the organization.
Disadvantages
✓ Experience can also be a liability.
✓ The expert may be confident that the project is similar to a previous one, but may have overlooked some factors that make the new project significantly different.
2. Delphi Cost Estimation
Developed at the Rand Corporation in 1948 to gain expert consensus without introducing adverse side effects.
The Delphi technique can be adapted to software cost estimation in the following manner:

➢ A coordinator provides each estimator with the system definition document and a form for recording a cost estimate.
➢ Estimators complete their estimates. They may ask questions of the coordinator, but they do not discuss their estimates with one another.
➢ The coordinator prepares and distributes a summary of the estimators' responses and includes any unusual rationales noted by the estimators.
➢ Estimators complete another estimate, using the results from the previous estimate.
➢ The process is iterated for as many rounds as required. No group discussion is allowed during the entire process.

3. Work Breakdown Structure

✓ The work breakdown chart can indicate either a product hierarchy or a process hierarchy.
✓ Product hierarchy identifies the product components and indicates the manner in which the components are interconnected.
✓ Process hierarchy identifies the work activities and the relationships among activities.

Product hierarchy

✓ Some planners use both product and process hierarchies.

Advantages
➢ The work breakdown structure technique identifies and accounts for various process and product factors, and makes explicit exactly which costs are included in the estimate.

Process of work breakdown structure

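To illustrate bottom-up estimation over such a hierarchy, the minimal sketch below sums leaf-level estimates of a nested product hierarchy. The component names and person-day figures are invented for the example and are not taken from the notes.

# Minimal sketch of bottom-up estimation over a product-hierarchy WBS.
# Component names and the per-module figures (person-days) are illustrative only.

wbs = {
    "Product": {
        "User interface": {"Screens": 12, "Reports": 8},
        "Business logic": {"Billing": 20, "Accounts": 15},
        "Database layer": 10,
    }
}

def total_cost(node) -> float:
    """Sum leaf estimates over a nested dict/number WBS tree."""
    if isinstance(node, dict):
        return sum(total_cost(child) for child in node.values())
    return float(node)

print("Total estimate:", total_cost(wbs), "person-days")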
4. Algorithmic Cost Models
➢ These are bottom-up estimators.
➢ The Constructive Cost Model (COCOMO) is an algorithmic cost model described by Boehm.
➢ COCOMO effort multipliers fall into four groups:
a. Product attributes
b. Computer attributes
c. Personnel attributes
d. Project attributes
Example: the nominal organic-mode equations apply in the following types of situations: small to medium-sized projects (2K to 32K DSI), a familiar application area, a stable and well-understood virtual machine, and an in-house development effort.

Staffing level estimation :


Once the effort required to develop a software has been determined, it is necessary to determine the staffing
requirement for the project. Putnam first studied the problem of what should be a proper staffing pattern for
software projects. He extended the work of Norden who had earlier investigated the staffing pattern of
research and development (R&D) type of projects. In order to appreciate the staffing pattern of software
projects, Norden’s and Putnam’s results must beunderstood.

Norden's Work:
Norden studied the staffing patterns of several R&D projects. He found that the staffing pattern can be approximated by the Rayleigh distribution curve. Norden represented the Rayleigh curve by the following equation:

E = (K / td²) * t * e^(−t² / (2·td²))

where E is the effort required at time t. E is an indication of the number of engineers (or the staffing level) at any particular time during the duration of the project, K is the area under the curve, and td is the time at which the curve attains its maximum value. It must be remembered that the results of Norden are applicable to general R&D projects and were not meant to model the staffing pattern of software development projects.
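The Rayleigh curve above can be evaluated directly. The following sketch (an illustration only; K and td are arbitrary example values) tabulates the implied staffing level over time.

import math

# Sketch of Norden's Rayleigh staffing curve: E(t) = (K/td**2) * t * exp(-t**2 / (2*td**2)).
# K (total effort, person-months) and td (time of peak staffing, months) are example values.

def rayleigh_staffing(t: float, K: float, td: float) -> float:
    """Staffing level at time t for total effort K and peak-staffing time td."""
    return (K / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

K, td = 200.0, 12.0            # hypothetical project parameters
for month in range(0, 25, 4):
    print(f"month {month:2d}: {rayleigh_staffing(month, K, td):5.1f} staff")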

Putnam's Work:
Putnam studied the problem of staffing of software projects and found that software development has characteristics very similar to the other R&D projects studied by Norden, and that the Rayleigh-Norden curve can be used to relate the number of delivered lines of code to the effort and the time required to develop the project. By analyzing a large number of army projects, Putnam derived the following expression:

L = Ck * K^(1/3) * td^(4/3)

The various terms of this expression are as follows:
• K is the total effort expended (in PM) in the product development, and L is the product size in KLOC.
• td corresponds to the time of system and integration testing. Therefore, td can be approximately considered as the time required to develop the software.
• Ck is the state-of-technology constant and reflects constraints that impede the progress of the programmer. Typical values are Ck = 2 for a poor development environment (no methodology, poor documentation and review, etc.), Ck = 8 for a good software development environment (software engineering principles are adhered to), and Ck = 11 for an excellent environment (in addition to following software engineering principles, automated tools and techniques are used). The exact value of Ck for a specific project can be computed from the historical data of the organization developing it.
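To see how the expression is used, the sketch below rearranges Putnam's equation to estimate the total effort K from the product size. The size, technology constant, and development time values are illustrative only, not taken from any real project.

# Sketch of rearranging Putnam's equation L = Ck * K**(1/3) * td**(4/3)
# to estimate total effort K = L**3 / (Ck**3 * td**4). The inputs below
# are illustrative; units follow the text (L in KLOC, K in PM).

def putnam_effort(L: float, Ck: float, td: float) -> float:
    """Total effort K implied by product size L, technology constant Ck and development time td."""
    return L ** 3 / (Ck ** 3 * td ** 4)

print(putnam_effort(L=40.0, Ck=8.0, td=1.0))   # e.g. a good development environment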
Putnam suggested that optimal staff build-up on a project should follow the Rayleigh curve. Only a small
number of engineers are needed at the beginning of a project to carry out planning and specification tasks.
As the project progresses and more detailed work is required, the number of engineers reaches a peak. After
implementation and unit testing, the number of project staff falls. However, the staff build-up should not be
carried out in large installments. The team size should either be increased or decreased slowly whenever
required to match the Rayleigh-Norden curve. Experience shows that a very rapid build up of project staff
any time during the project development correlates with schedule slippage.
It should be clear that a constant level of manpower throughout the project duration would lead to wastage of effort and increase the time and effort required to develop the product. If a constant number of engineers is used over all the phases of a project, some phases will be overstaffed and others understaffed, causing inefficient use of manpower and leading to schedule slippage and increased cost.
Effect of schedule change on cost:
By analyzing a large number of army projects, Putnam derived the following expression:
L = CkK 1/3td 4/3
12
Where, K is the total effort expended (in PM) in the product development and L is the product size in
KLOC, td corresponds to the time of system and integration testing and Ck is the state of technology
constant and reflects constraints that impede the progress of the programmer

Now by using the above expression it is obtained that,


K = L 3 /C k 3 td 4
OR
K = C/td 4
For the same product size,
C = L3 / C k 3 is a constant
or,
K1/K2 = td24 /t d1 4
or,
K ∝ 1/td 4
or, cost ∝ 1/td
(as project development effort is equally proportional to project development cost)
From the above expression, it can be easily observed that when the schedule of a project is
compressed, the required development effort as well as project development cost increases in proportion to
the fourth power of the degree of compression. It means that a relatively small compression in delivery
schedule can result in substantial penalty of human effort as well as development cost. For example, if the
estimated development time is 1 year, then in order to develop the product in 6 months, the total effort
required to develop the product (and hence the project cost) increases 16 times
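A minimal Python sketch of this fourth-power effect follows; the 12-month and 6-month figures correspond to the 1-year/6-month example in the paragraph above.

    # Effort ratio implied by K ∝ 1/td^4:
    # K_compressed / K_original = (td_original / td_compressed)^4

    def effort_ratio(td_original, td_compressed):
        return (td_original / td_compressed) ** 4

    print(effort_ratio(12, 6))   # 16.0 -> halving a 12-month schedule needs 16x the effort (and cost)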

Software Maintenance Cost Factors:


There are two types of cost factors involved in software maintenance.
These are
o Non-Technical Factors
o Technical Factors

Non-Technical Factors:

1. Application Domain
o If the application of the program is defined and well understood, the system requirements may be
definitive and maintenance due to changing needs minimized.
o If the application is entirely new, it is likely that the initial requirements will be modified frequently as users gain
experience with the system.

2. Staff Stability
o It is simpler for the original writer of a program to understand and change an application than for some
other person who must understand the program by studying the reports and code listings.
o If the developer of a system also maintains that system, maintenance costs will be reduced.
o In practice, the nature of the programming profession is such that people change jobs regularly. It is
unusual for one person to develop and maintain an application throughout its useful life.
3. Program Lifetime
o Programs become obsolete when the application they support becomes obsolete, or when their original
hardware is replaced and conversion costs exceed rewriting costs.
4. Dependence on External Environment
o If an application is dependent on its external environment, it must be modified as that environment changes.
o For example:
o Changes in a taxation system might require payroll, accounting, and stock control programs to be modified.
o Taxation changes are fairly frequent, and maintenance costs for these programs are related to the
frequency of these changes.
o A program used in mathematical applications does not typically face such changes, because the
assumptions on which the program is based rarely change.
5. Hardware Stability
o If an application is designed to operate on a specific hardware configuration and that configuration does
not change during the program's lifetime, no maintenance costs due to hardware changes will be
incurred.
o However, hardware developments are so rapid that this situation is rare.

o The application must be changed to use new hardware that replaces obsolete equipment.

Technical Factors:
Technical Factors include the following:

Module Independence
It should be possible to change one program unit of a system without affecting any other unit.

Programming Language
Programs written in a high-level programming language are generally easier to understand than programs
written in a low-level language.
Programming Style
The method in which a program is written contributes to its understandability and hence, the ease with
which it can be modified.
Program Validation and Testing
o Generally, the more time and effort spent on design validation and program testing, the fewer bugs remain in
the program and, consequently, the lower the maintenance costs resulting from bug correction.
o Maintenance costs due to bug correction are governed by the type of fault to be repaired.
o Coding errors are generally relatively cheap to correct; design errors are more expensive as they may
involve the rewriting of one or more program units.
o Bugs in the software requirements are usually the most expensive to correct because of the drastic redesign
which is generally involved.
Documentation
o If a program is supported by clear, complete yet concise documentation, the task of understanding
the application can be relatively straightforward.
o Program maintenance costs tend to be lower for well-documented systems than for systems supplied with
inadequate or incomplete documentation.
Configuration Management Techniques
o One of the essential costs of maintenance is keeping track of all system documents and ensuring that
these are kept consistent.
o Effective configuration management can help control these costs.

Software Requirement Specifications:


The product of the requirements phase of the software development process is the Software Requirements
Specification (SRS), also called the requirements document. This report lays a foundation for software
engineering activities and is constructed when the entire set of requirements has been elicited and analyzed. The SRS is a
formal report which acts as a representation of the software and enables the customers to review whether it
(the SRS) is according to their requirements. It also comprises the user requirements for a system as well as
detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications that perform
particular functions in a specific environment. It serves several goals depending on who is writing it. First,
the SRS could be written by the client of a system. Second, the SRS could be written by a developer of the
system. The two cases create entirely different situations and establish different purposes for the
document altogether. In the first case, the SRS is used to define the needs and expectations of the users. In the
second case, the SRS is written for a different purpose and serves as a contract document between customer and
developer.

Characteristics of good SRS:

Following are the features of a good SRS document:


1. Correctness: User reviews are used to ensure the accuracy of the requirements stated in the SRS. The SRS
is said to be correct if it covers all the needs that are truly expected from the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1). All essential requirements, whether relating to functionality, performance, design, constraints, attributes,
or external interfaces.

(2). Definition of their responses of the software to all realizable classes of input data in all available
categories of situations.

(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all terms and
units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of individual requirements described in its
conflict. There are three types of possible conflict in the SRS:

(1). The specified characteristics of real-world objects may conflict. For example,

(a) The format of an output report may be described in one requirement as tabular but in another as textual.

(b) One condition may state that all lights shall be green while another states that all lights shall be blue.

(2). There may be a logical or temporal conflict between the two specified actions. For example,

(a) One requirement may determine that the program will add two inputs, and another may determine that
the program will multiply them.

(b) One condition may state that "A" must always follow "B," while another requires that "A" and "B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different terms for that object.
For example, a program's request for user input may be called a "prompt" in one requirement and a "cue"
in another. The use of standard terminology and descriptions promotes consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one interpretation. This
suggests that each element is uniquely interpreted. In case a term is used with multiple meanings, the
requirements report should clarify the intended meaning in the SRS so that it is clear and simple to
understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if each
requirement in it has an identifier to indicate either the significance or stability of that particular
requirement.

Typically, all requirements are not equally important. Some prerequisites may be essential, especially for
life-critical applications, while others may be desirable. Each element should be identified to make these
differences clear and explicit. Another way to rank requirements is to distinguish classes of items as
essential, conditional, and optional.

6. Modifiability: The SRS should be made as modifiable as possible and should be capable of easily accommodating
changes to the system to some extent. Modifications should be properly indexed and cross-referenced.

7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a cost-effective process
to determine whether the final software meets those requirements. The requirements are verified with the help of
reviews.

8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it facilitates the
referencing of each requirement in future development or enhancement documentation.

There are two types of Traceability:


1. Backward Traceability: This depends upon each requirement explicitly referencing its source in earlier
documents.

2. Forward Traceability: This depends upon each element in the SRS having a unique name or reference
number.

The forward traceability of the SRS is especially crucial when the software product enters the operation and
maintenance phase. As code and design documents are modified, it is necessary to be able to ascertain the
complete set of requirements that may be affected by those modifications.

9. Design Independence: There should be an option to select from multiple design alternatives for the final
system. More specifically, the SRS should not contain any implementation details.

10. Testability: An SRS should be written in such a method that it is simple to generate test cases and test
plans from the report.

11. Understandable by the customer: An end user may be an expert in his/her own domain but might
not be trained in computer science. Hence, the use of formal notations and symbols should be avoided
as far as possible. The language should be kept simple and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the details should be
explained explicitly, whereas for a feasibility study less detail is needed. Hence, the level of
abstraction varies according to the objective of the SRS.

Properties of a good SRS document

The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and complete.
Verbose and irrelevant descriptions decrease readability and also increase error possibilities.

Structured: It should be well-structured. A well-structured document is simple to understand and modify.


In practice, the SRS document undergoes several revisions to cope with changing user requirements. Often,
user requirements evolve over a period of time. Therefore, to make the modifications to the SRS document
easy, it is vital to make the report well-structured.

Black-box view: It should only define what the system should do and refrain from stating how to do these.
This means that the SRS document should define the external behavior of the system and not discuss the
implementation issues. The SRS report should view the system to be developed as a black box and should
define the externally visible behavior of the system. For this reason, the SRS report is also known as the
black-box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily understand it.
Response to undesired events: It should characterize acceptable responses to unwanted events. These are
called system response to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should be correct. This
means that it should be possible to decide whether or not requirements have been met in an implementation.

Formal specification techniques:

Formal specification techniques in software engineering involve the use of mathematical models to define
the behavior, functionality, and constraints of software systems. These techniques aim to provide a precise
and unambiguous description of what a system is supposed to do, which helps in verifying and validating
software through formal methods.

Key Techniques
1. Algebraic Specifications:
o Description: Specify software components in terms of algebraic equations that define the
relationships between operations.
o Use Case: Typically used for abstract data types and interfaces.
o Example: Specifying a stack using operations like push, pop, and top with their corresponding
axioms (a small sketch of such axioms appears after this list).
2. Model-Based Specifications:
o Description: Use state-based models to define the system's behavior through states and transitions.
o Use Case: Suitable for reactive and stateful systems.
o Example: VDM (Vienna Development Method) and Z notation.
3. Petri Nets:
o Description: Graphical and mathematical modeling tool applicable to distributed systems.
o Use Case: Modeling concurrent, asynchronous, and distributed systems.
o Example: Describing workflow processes, communication protocols.
4. Finite State Machines (FSMs):
o Description: Model the system as a finite number of states and transitions between those states
based on inputs.
o Use Case: Systems with well-defined states and events.
o Example: Describing the behavior of a user interface or protocol.
5. Temporal Logic:
o Description: Use temporal operators to specify the timing of events within a system.
o Use Case: Real-time systems and systems where timing constraints are critical.
o Example: Linear Temporal Logic (LTL) and Computation Tree Logic (CTL).
6. Process Algebras:
o Description: Algebraic techniques to model and analyze the behaviors of concurrent systems.
o Use Case: Analyzing complex interactions in distributed systems.
o Example: CSP (Communicating Sequential Processes) and Pi-calculus.
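As a concrete illustration of the algebraic style (item 1 above), the classic stack axioms can be written down and checked against a simple model. The Python sketch below is only illustrative: the list-based model and the helper names are assumptions made here, not part of any particular specification language.

    # The usual algebraic axioms for a stack, checked against a naive list model:
    #   pop(push(s, x)) = s,  top(push(s, x)) = x,  empty(new()) = true.

    def new():       return []
    def push(s, x):  return s + [x]
    def pop(s):      return s[:-1]
    def top(s):      return s[-1]
    def empty(s):    return len(s) == 0

    s = push(push(new(), 1), 2)
    assert pop(push(s, 99)) == s          # pop(push(s, x)) = s
    assert top(push(s, 99)) == 99         # top(push(s, x)) = x
    assert empty(new()) and not empty(s)  # empty(new()) holds; a pushed stack is not empty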

Benefits

• Precision: Formal specifications eliminate ambiguity, providing a clear and precise description of system
behavior.
• Verification: Enable rigorous proofs of correctness and other properties using formal verification tools.
• Validation: Help in validating requirements by providing a formal model that can be analyzed and
simulated.
• Documentation: Serve as unambiguous documentation that can be referred to throughout the
development process.

Challenges

• Complexity: Creating formal specifications can be complex and time-consuming.


• Skill Requirements: Requires specialized knowledge in mathematical logic and formal methods.
• Scalability: May be challenging to apply to very large and complex systems.
• Tool Support: Limited availability of user-friendly tools for formal methods.

Examples of Formal Specification Languages

1. Z Notation: Used for describing and modeling computing systems, particularly in terms of their data and
operations.
2. B-Method: Focuses on the specification, design, and verification of software through a mathematical
approach.
3. VDM (Vienna Development Method): Provides a framework for developing precise and abstract
models of software systems.
4. Alloy: A lightweight modeling language for software design that uses a relational model to describe
structures and behaviors.

Formal specification techniques are powerful tools in software engineering, particularly for systems where
reliability, security, and correctness are critical. However, their adoption requires balancing the benefits
against the challenges and ensuring appropriate expertise and resources are available.

Languages and Processors for Requirements Specification:

In software engineering, languages and processors for requirements specification are essential tools to define,
analyze, and manage the requirements of a software system accurately and efficiently. They ensure that the
requirements are clear, unambiguous, and traceable throughout the software development lifecycle.

Key Languages for Requirements Specification

1. Natural Language:
o Description: Uses everyday language to describe requirements.
o Use Case: Widely used due to its accessibility to all stakeholders.
o Challenges: Ambiguity and lack of precision can lead to misunderstandings.
2. Structured Natural Language:
o Description: Imposes structure on natural language to reduce ambiguity.
o Use Case: Balances readability and precision.
o Examples: Use case specifications, user stories, and structured templates like the Volere
Requirements Specification Template.
3. Use Case Diagrams:
o Description: Visual representation of the interactions between users (actors) and the system.
o Use Case: Capturing functional requirements and user interactions.
o Example: UML (Unified Modeling Language) use case diagrams.
4. User Stories:
o Description: Short, simple descriptions of a feature from the user's perspective.
o Use Case: Common in agile methodologies for capturing requirements.
o Format: "As a [type of user], I want [a goal] so that [a reason]." For example: "As a registered
customer, I want to reset my password so that I can regain access to my account."
5. Formal Specification Languages:
o Description: Use mathematical notation to define requirements rigorously.
o Use Case: Critical systems requiring precision and unambiguity.
o Examples: Z Notation, B-Method, VDM (Vienna Development Method), Alloy.
6. Graphical Models:
o Description: Use diagrams to represent requirements and system behaviors.
o Use Case: Enhances understanding through visual representation.
o Examples: Statecharts, activity diagrams, sequence diagrams (UML, SysML).

Processors and Tools for Requirements Specification

1. Requirements Management Tools:


o Description: Software tools designed to manage, trace, and analyze requirements.
o Examples:
▪ IBM DOORS: For complex systems, supports traceability and collaboration.
▪ Jama Software: Comprehensive requirements management with collaboration and impact
analysis features.
▪ Helix RM: Focuses on managing requirements and ensuring compliance and traceability.
2. Model-Based Tools:
o Description: Tools that use models to specify, analyze, and verify requirements.
o Examples:
▪ Enterprise Architect: Supports UML, SysML, BPMN for comprehensive requirements
modeling.
▪ MagicDraw: Offers UML modeling and collaboration features for specifying and managing
requirements.
3. Formal Methods Tools:
o Description: Tools for creating, analyzing, and verifying formal specifications.
o Examples:
▪ Alloy Analyzer: For creating and analyzing models in the Alloy language.

▪ Rodin Platform: An Eclipse-based toolset for developing and verifying models in the B-
Method.
▪ Z/Eves: Supports Z notation for formal specification and verification.
4. Agile Tools:
o Description: Tools supporting agile methodologies, managing user stories and backlog items.
o Examples:
▪ JIRA: Popular agile project management tool for user stories, sprints, and backlog
management.
▪ VersionOne: Comprehensive agile project management tool supporting user stories, tasks,
and sprints.

Benefits of Using Specification Languages and Tools

• Clarity and Precision: Reduce ambiguity and ensure a common understanding among stakeholders.
• Traceability: Track requirements throughout the development process, ensuring that all requirements are
met.
• Verification and Validation: Facilitate early detection of inconsistencies, errors, and omissions in
requirements.
• Collaboration: Enhance communication and collaboration among project teams and stakeholders.

Challenges

• Complexity: Some formal specification languages and tools require specialized knowledge and
expertise.
• Cost: High-quality tools and training can be expensive.
• Adoption: Resistance to change from traditional methods to formal or structured approaches.

Examples of Formal Specification Languages

1. Z Notation:
o Description: A formal specification language used for describing and modeling computing
systems.
o Use Case: Defining data and operations of systems rigorously.
2. B-Method:
o Description: Focuses on the specification, design, and verification of software through a
mathematical approach.
o Use Case: Formal development and verification of software components.
3. VDM (Vienna Development Method):
o Description: Provides a framework for developing precise and abstract models of software
systems.
o Use Case: Specifying and modeling software and system requirements.
4. Alloy:
o Description: A lightweight modeling language for software design that uses a relational model to
describe structures and behaviors.
o Use Case: Analyzing complex system structures and their properties.

UNIT-III
Software Design:
The design phase of software development deals with transforming the customer
requirements as described in the SRS documents into a form implementable using a
programming language. The software design process can be divided into the following three
levels or phases of design:
1. Interface Design
2. Architectural Design
3. Detailed Design
Elements of a System
1. Architecture: This is the conceptual model that defines the structure, behavior, and
views of a system. We can use flowcharts to represent and illustrate the architecture.
2. Modules: These are components that handle one specific task in a system. A
combination of the modules makes up the system.
3. Components: This provides a particular function or group of related functions. They
are made up of modules.
4. Interfaces: This is the shared boundary across which the components of a system
exchange information and relate.
5. Data: This is the management of the information and data flow.

Interface Design
Interface design is the specification of the interaction between a system and its
environment. This phase proceeds at a high level of abstraction with respect to the
inner workings of the system, i.e., during interface design, the internals of the system
are completely ignored and the system is treated as a black box. Attention is focused
on the dialogue between the target system and the users, devices, and other systems
with which it interacts. The design problem statement produced during the problem
analysis step should identify the people, other systems, and devices which are
collectively called agents.
Interface design should include the following details:
1. Precise description of events in the environment, or messages from agents to which
the system must respond.
2. Precise description of the events or messages that the system must produce.
3. Specification of the data, and the formats of the data coming into and going out of the
system.
4. Specification of the ordering and timing relationships between incoming events or
messages, and outgoing events or outputs.
Architectural Design
Architectural design is the specification of the major components of a system, their
responsibilities, properties, interfaces, and the relationships and interactions between
them. In architectural design, the overall structure of the system is chosen, but the
internal details of major components are ignored. Issues in architectural design
include:
1. Gross decomposition of the systems into major components.
2. Allocation of functional responsibilities to components.
3. Component Interfaces.
4. Component scaling and performance properties, resource consumption properties,
reliability properties, and so forth.
5. Communication and interaction between components.
The architectural design adds important details ignored during the interface design.
Design of the internals of the major components is ignored until the last phase of the
design.
Detailed Design
Detailed design is the specification of the internal elements of all major system
components, their properties, relationships, processing, and often their algorithms and
the data structures. The detailed design may include:
1. Decomposition of major system components into program units.
2. Allocation of functional responsibilities to units.
3. User interfaces.
4. Unit states and state changes.
5. Data and control interaction between units.
6. Data packaging and implementation, including issues of scope and visibility of
program elements.
7. Algorithms and data structures.

Characteristics of a good software design:
1. Correctness: Software design should be correct as per the requirements.


2. Completeness: The design should have all components like data structures, modules,
and external interfaces, etc.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: Able to modify on changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough that it can be easily maintained
by other designers.

Fundamental Concepts in Software Design:


Fundamental concepts of Software design include Abstraction, Structure, Information
hiding, Modularity, Concurrency and Verification.
1. Abstraction – Abstraction is the intellectual tool which enables us to separate the
conceptual aspects of the system. For example, we may specify the FIFO property of a
queue or the LIFO property of a stack and the functional characteristics of the routines (new, push, pop, top,
empty) without concern for the algorithmic details of the routines. The three types of
abstraction mechanisms used are functional abstraction, data abstraction and control
abstraction.
▪ Functional Abstraction – Functional abstraction involves the use of
parameterized subprograms. Functional abstraction allows to bind different
parameter values on different invocations of the subprogram. Functional
abstraction can be generalized to collections of subprograms called groups
which may be visible or hidden.
▪ Data abstraction or abstract data typing – Data abstraction involves
specifying a data type or a data object by specifying the legal operations on objects;
representation and manipulation details are suppressed. Thus, we may define
the type ‘stack’ abstractly as a LIFO mechanism in which the routines New,
Push, Pop, Top and Empty are defined abstractly. The term abstract data type
is used to denote the declaration of a datatype or object like ‘stack’ from which
numerous instances can be created.
▪ Control abstraction – Control abstraction is used to state a desired effect
without specifying the exact mechanism of control; for example, the IF and
WHILE statements of high-level languages abstract away the underlying branch
instructions.
2. Information Hiding – While using information hiding approach, each module/function in
the system hides the internal details of its processing activities and modules communicate
only through well-defined interfaces. Information hiding may be applied to hide –
▪ Data structure, Internal linkage and the implementation details of the
classes/interfaces that manipulate it
▪ The format of control blocks such as queues in an Operating System
▪ Character codes and their implementation details
▪ Shifting, masking and other machine dependent details
3. Structure – Structure is a fundamental characteristic of computer software, and its most
general form is the network structure. A computing network can be represented as a directed
graph consisting of nodes and arcs. The nodes represent processing elements that
transform data, and the arcs represent data links between nodes. Alternatively, nodes
can represent data stores and the arcs data transformations.
In its simplest form, a network may specify data flow and processing steps within a single
subprogram, or data flow among a collection of sequential subprograms. In the process network
view of software structure, the network consists of nodes and links: concurrently and
sequentially executing processes involved in synchronous or asynchronous
message passing. Each process consists of various groups or objects and various processing
routines, and each group consists of a visible part, a static area and a hidden part which holds its
representation details.
4. Modularity – Modularity refers to the desirable property of software being organized as
modules, classes or interfaces, each having operational characteristics that can be reused
in different parts of the software, so that we reduce code duplication and reduce code
dependency when a huge amount of code needs to be modified as something changes. For
example, we may save a set of code as a function/method, a group of similar functions/methods as
a compiled module (like the header .h files in C) or as a compiled class (like VehicleControllers.class in Java). Also,
similar classes/interfaces may be compiled into an assembly, or a set of modules can be
compiled as a software package, a Visual Studio project file, or a .PRJ file as in C
programming.
Modularity enhances design clarity and eases implementation, debugging, testing,
documentation and maintenance of the software product.
Some of the desirable properties of a modular Software System include –
▪ Each processing abstraction should be well defined so that it may be applicable in
various situations.
▪ Each function/method in each abstraction has well defined purpose
▪ Each function/method manipulates no more than one major data structure
▪ Functions/methods those share global data structures selectively may be grouped
together
▪ Functions/methods that manipulate instances of abstract data types are encapsulated
with the data structure being manipulated.
5. Modularization criteria – There are various modularization criteria to represent the
Software as modules and depending on the criteria, various system structures may result.
Various modularization criteria include –
▪ Conventional criteria refers to modularization in which each module and its
submodules correspond to a processing step in the program execution sequence.
▪ Information hiding criteria refers to modularization in which each module hides a
difficult or changeable design decision from other modules.
▪ Data abstraction criterion refers to modularization in which each module hides the
representation details of a major data structure behind functions that access and modify
the data structure.
▪ Levels of abstraction in which modules and collections of modules provide a
hierarchical set of increasingly complex services.
▪ Coupling and Cohesion in which a system is structured to maximize the cohesion of
elements in each module and to minimize the coupling between modules.
▪ Problem Modelling in which the modular structure of the system matches the
structure of problem being solved.
6. Concurrency – Concurrent processes are those processes executing concurrently or in
parallel utilizing the concurrent and scheduling mechanism of the processor. Like concurrent
processes, we may implement concurrent threads or code segments of execution which may
execute concurrently utilizing the concurrent processing power of the processor and the
concurrency mechanism of the programming language used (eg: C#, Java).
7. Coupling and Cohesion – Coupling and Cohesion are applied to support the fundamental
goal of software design to structure the Software product so that the number and complexity
of interconnections between various modules and classes is minimized.
▪ Coupling – Coupling is defined between two modules and relates to the degree of
linkage between the coupled modules. The strength of coupling between two modules
is influenced by the complexity of the interface, the type of connection and the type of
communication. If all modules communicate only by parameter passing, then the
internal details of the modules can be modified without modifying the modules that
use them. Referencing a module by its content is a stronger form of coupling than
referencing it by name, since in the former case the entire content has to be taken into
account while referring to it. Communication between modules involves passing of
data, passing elements of control (flags, events, switches, labels, objects, etc.) and
modification of one module’s/interface’s code by another. The degree of coupling is
highest for modules that modify other modules, high for control communication and
lowest for data communication. The major types of coupling are as follows –
– Content Coupling – occurs when one module modifies local data values or
instructions in another module, and usually occurs in assembly language programs.
– Common Coupling – occurs when a set of routines reference a common data
block.
– Control Coupling – involves passing control flags between modules so that one
module controls the sequence of processing steps in another module.
– Stamp Coupling – Stamp coupling is similar to common coupling except that the
global data items are shared selectively among the routines that require the data. Stamp
coupling is more desirable than common coupling since fewer modules will have to be
modified if a shared data structure is modified.
– Data coupling – involves the use of parameter lists to pass data items between
routines.
The most desirable forms of coupling between modules are combinations of stamp and
data coupling.
▪ Cohesion – Cohesion, or internal cohesion, of a module is measured in terms
of the strength of binding of elements within the module. The following are the
various cohesion mechanisms –
– Coincidental Cohesion – occurs when the elements within a module have no apparent
relationship to one another.
– Logical Cohesion – refers to some relationship among the elements of the module,
such as those that perform all input/output operations or those that edit or validate data. Logically
bound modules often combine several related functions in a complex and interrelated
fashion.
– Temporal Cohesion – forms connections as complex as logical cohesion but is on the
higher end of the binding scale, since all elements are executed at one time and no parameter or
logic is required to determine which elements to execute, such as in the case of a module performing
program initialization.
– Communicational Cohesion – refers to elements that operate on the same set of input/output data; the
binding is higher on the binding scale than temporal binding since the elements are executed
at one time and also refer to the same data.
– Sequential Cohesion – occurs when the output of one element is the input of
another element. Sequential cohesion has a higher binding level since the module structure
usually represents the problem structure.
– Functional Cohesion – is a situation in which every element contributes to the
performance of a single function, such as the elements of a routine that computes
sqrt().
– Informational Cohesion – occurs when a complex data
structure is manipulated by several routines in a module, and each routine in the module
exhibits functional binding.
Modules and modularization Criteria:
Architectural design has the goal of producing a well-structured, modular software system. A
software module is a named entity that has the following characteristics:
Modules contain instructions, processing logic and data structures.
Modules can be separately compiled and stored in a library.
Modules can be included in a program.
Module segments can be used by invoking a name and some parameters.
Modules can use other modules.
Ex: procedures, subroutines, functions.
Coupling and Cohesion:
The fundamental goal of software design is to structure the software product so that the
number and complexity of interconnections between modules is minimized.
The strength of coupling between two modules is influenced by the complexity of the
interface, the type of connection, and the type of communication.
Obvious relationships result in less complexity.
Ex: common control blocks, common data blocks, common overlay regions in memory.
Loosely coupled modules establish connections by referring to other modules by name.
Connections between modules involve passing of data and passing of control elements (flags, switches,
labels and procedure names). The degree of coupling is:
lowest – data communication
higher – control communication
highest – modification of other modules.
Coupling can be ranked as follows:
a. Content coupling: when one module modifies local data values or instructions in another
module.
b. Common coupling: are bound together by global data structures
c. Control coupling: involves passing of control flags between modules so that one module
controls the sequence of processing steps in another module.
d. Stamp coupling: similar to common coupling except that global data items are shared
selectively among routines that require the data.
e. Data coupling: involves the use of parameter lists to pass data items between routines
(data coupling and common coupling are contrasted in the short sketch below).
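The difference between the weaker and stronger forms listed above can be seen in a short, hypothetical Python sketch; the routine and variable names below are invented purely for illustration.

    # Common coupling: both routines read/write a shared global data block.
    payroll_record = {"hours": 0, "rate": 0}      # global data block

    def record_timesheet(hours, rate):
        payroll_record["hours"] = hours
        payroll_record["rate"] = rate

    def compute_pay_common():
        return payroll_record["hours"] * payroll_record["rate"]

    # Data coupling: the same information is passed explicitly through a parameter list,
    # so either routine can change internally without touching a shared structure.
    def compute_pay_data(hours, rate):
        return hours * rate

    record_timesheet(40, 12)
    assert compute_pay_common() == compute_pay_data(40, 12) == 480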
• Internal cohesion of a module is measured in terms of the strength of binding of element
within the module.
• Cohesion elements occur on the scale of weakest to strongest as follows:
a. Coincidental cohesion: Module is created from a group of unrelated instructions that
appear several times in other modules.
b. Logical cohesion: implies some relationship among the elements of the module.
ex: module performs all i/o operations.
c. Temporal cohesion: all elements are executed at one time and no parameter logic are
required to determine which elements to execute.
d. Communication cohesion: refer to same set of input or output data
Ex: ‘print and punch’ the output file is communicationally bound.
e. Sequential cohesion: of elements occurs when the output of one element is the input for
the next element.
ex: ‘read next transaction and update master file’
f. Functional cohesion: is the strongest type of binding of elements in a module because all
elements are related to the performance of a single function (see also the short sketch after this list).
Ex: compute square root, obtain random number, etc.
g. Informational cohesion: occurs when the module contains a complex data structure and
several routines to manipulate the data structure.
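A similarly hypothetical Python sketch can contrast two points on the cohesion scale above; the function names and the mixture of tasks are invented only to make the contrast visible.

    import math

    # Functional cohesion: every statement contributes to a single function.
    def square_root(x):
        return math.sqrt(x)

    # Coincidental cohesion: unrelated actions grouped together only by accident,
    # which makes the module hard to name, reuse and maintain.
    def misc_utilities(x, text, path):
        root = math.sqrt(x)             # numeric work
        cleaned = text.strip().lower()  # string editing
        print("logging to", path)       # output side effect
        return root, cleaned

    print(square_root(16))                         # 4.0
    print(misc_utilities(16, " Hi ", "run.log"))   # (4.0, 'hi')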

Design notations:
Dynamic
Data flow diagrams (DFDs).
State transition diagrams (STDs).
State charts.
Structure diagrams.
Static
Entity Relationship Diagrams (ERDs).
Class diagrams.
Structure charts.
Object diagrams.

Data Flow Diagrams (DFDs):


A notation developed in conjunction with structured systems analysis/structured design
(SSA/SD).
Used primarily for pipe-and-filter styles of architecture.
Graph–based diagrammatic notation.
There are extensions for real-time systems that distinguish control flow from data flow.

DFDs: Diagrammatic elements

External entity: a producer or consumer of information that resides outside the bounds of the system
to be modeled.
Process: a transformation of information (a function) that resides within the bounds of the system to
be modeled.
Data flow: a data object; the arrowhead indicates the direction of data flow.
Data store: a repository of data that is to be stored for use by one or more processes; may be as simple as
a buffer or queue or as sophisticated as a relational database.
For example, in a library system, ‘Member’ is an external entity, ‘Issue Book’ is a process, ‘book details’
is a data flow, and ‘Book catalogue’ is a data store.
State Transition Diagrams (STDs) :
Used for capturing state transition behavior in cases where there is an intuitive finite
collections of states.
Derives from the notion of a finite state automaton.
Graph–based diagrammatic notation.
Labeled nodes correspond to states.
Arcs correspond to transitions.
Arcs are labeled with events and actions (actions can cause further events to occur).
E.g.: a telephone call (a small code sketch of such a machine follows below).
Describes a single underlying process.
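A minimal table-driven sketch of such a machine is given below; the states and events for the telephone-call example are assumed here purely for illustration.

    # Finite state machine sketch: labeled nodes are states, and the table maps
    # (state, event) pairs to the next state, i.e. the labeled arcs of the diagram.
    transitions = {
        ("idle",      "lift receiver"):  "dial tone",
        ("dial tone", "dial number"):    "ringing",
        ("ringing",   "callee answers"): "connected",
        ("connected", "hang up"):        "idle",
    }

    def next_state(state, event):
        return transitions.get((state, event), state)   # ignore events with no outgoing arc

    state = "idle"
    for event in ("lift receiver", "dial number", "callee answers", "hang up"):
        state = next_state(state, event)
        print(event, "->", state)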

State charts:
Developed by David Harel.
A generalization of STDs:
States can have zero, one, two or more STDs contained within.
Related to Petri nets.
Higraph–based diagrammatic notation.
Labeled nodes correspond to states.
Arcs correspond to transitions.
Arcs are labeled with events and actions (actions can cause further events to occur).
Describes one or more underlying processes.
Structure Diagrams :
Used in Jackson Structured Programming.
Used to describe several kinds of things.
Ordered hierarchical structure.
Sequential processing.
Based on the idea of regular languages.
Sequencing.
Selection.
Iteration.
Entity Relationship Diagrams (ERDs):
Used to model the data in a system: the entities, their attributes, and the relationships between
entities.
Structure Charts
Based on the fundamental notion of a module.
Used in structured systems analysis/structured design (SSA/SD).
Graph–based diagrammatic notation:
a structure chart is a collection of one or more node labeled rooted directed acyclic graphs.
Each graph is a process.
Nodes and modules are synonymous.
A directed edge from module M1 to module M2 captures the fact that M1 directly uses in
some way the services provided by M2.
Definitions: The fan-in of a module is the count of the number of arcs directed toward the
module. The fan-out of a module is the count of the number of arcs outgoing from the
module. For example, if modules M1 and M2 both use module M3 and M3 uses no other module,
then M3 has a fan-in of 2 and a fan-out of 0.
Strategy of Design:
A good system design strategy is to organize the program modules in such a way that they are
easy to develop and, later, to change. Structured design methods help developers to deal
with the size and complexity of programs. Analysts generate instructions for the developers
about how code should be composed and how pieces of code should fit together to form a
program.
To design a system, there are two possible approaches:
1. Top-down Approach
2. Bottom-up Approach
1. Top-down Approach: This approach starts with the identification of the main components
and then decomposing them into their more detailed sub-components.
We know that a system is composed of more than one sub-system and contains a number
of components. Further, these sub-systems and components may have their own set of sub-
systems and components, creating a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it to
achieve more than one sub-system or component based on some characteristics. Each
subsystem or component is then treated as a system and decomposed further. This process
keeps on running until the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of system and keeps on defining the more
specific part of it. When all components are composed the whole system comes into
existence.
Top-down design is more suitable when the software solution needs to be designed from
scratch and specific details are unknown.
2. Bottom-up Approach: A bottom-up approach begins with the lower-level details and moves
up the hierarchy. This approach is suitable in the case of an existing system.
The bottom-up design model starts with the most specific and basic components. It
proceeds by composing higher-level components using these basic or lower-level
components. It keeps creating higher-level components until the desired system
evolves as one single component. With each higher level, the amount of abstraction
increases.
Bottom-up strategy is more suitable when a system needs to be created from some
existing system, where the basic primitives can be used in the newer system.

Both, top-down and bottom-up approaches are not practical individually. Instead, a
good combination of both is used.
Walkthrough
Walkthrough is a method of conducting an informal group/individual review. In a
walkthrough, the author describes and explains the work product in an informal meeting to his
peers or supervisor to get feedback. Here, the validity of the proposed solution for the work
product is checked.
• It is cheaper to make changes when the design is on paper rather than at the time of
conversion. Walkthrough is a static method of quality assurance. Walkthroughs are
informal meetings, but with a purpose.
INSPECTION
• An inspection is defined as formal, rigorous, in depth group review designed to
identify problems as close to their point of origin as possible. Inspections improve
reliability, availability, and maintainability of software product.
• Anything readable that is produced during the software development can be
inspected. Inspections can be combined with structured, systematic testing to provide
a powerful tool for creating defect-free programs.
• Inspection activity follows a specified process and participants play well-defined
roles. An inspection team consists of three to eight members who play the roles of
moderator, author, reader, recorder and inspector.
UNIT-IV
USER INTERFACE DESIGN:

The user interface is the front-end application view with which the user
interacts in order to use the software. The user can manipulate and control the software as
well as hardware by means of the user interface.
User interface design creates an effective communication medium
between a human and a computer. UI provides the fundamental platform for human-
computer interaction. A good user interface should be:
1. Attractive
2. Simple to use
3. Responsive in a short time
4. Clear to understand
5. Consistent on all interface screens
Types of User Interface
1. Command Line Interface: The Command Line Interface provides a
command prompt, where the user types the command and feeds it to the
system. The user needs to remember the syntax of the command and its
use.
2. Graphical User Interface: Graphical User Interface provides a simple
interactive interface to interact with the system. GUI can be a
combination of both hardware and software. Using GUI, the user
interprets the software.
User Interface Design Process
The analysis and design process of a user interface is iterative and can be
represented by a spiral model. The analysis and design process of user
interface consists of four framework activities.
1. User, Task, Environmental Analysis, and Modeling
Initially, the focus is based on the profile of users who will interact with the
system, i.e., their understanding, skill and knowledge, type of user, etc. Based on the
user's profile, users are grouped into categories. From each category, requirements
are gathered. Based on the requirements, the developer understands how to develop
the interface. Once all the requirements are gathered, a detailed analysis is
conducted. In the analysis part, the tasks that the user performs to establish the
goals of the system are identified, described and elaborated. The analysis of the
user environment focuses on the physical work environment. Among the
questions to be asked are:
1. Where will the interface be located physically?
2. Will the user be sitting, standing, or performing other tasks unrelated to
the interface?
3. Does the interface hardware accommodate space, light, or noise
constraints?
4. Are there special human factors considerations driven by environmental
factors?
2. Interface Design
The goal of this phase is to define the set of interface objects and actions i.e.,
control mechanisms that enable the user to perform desired tasks. Indicate how
these control mechanisms affect the system. Specify the action sequence of
tasks and subtasks, also called a user scenario. Indicate the state of the system
when the user performs a particular task. Always follow the three golden rules
stated by Theo Mandel. Design issues such as response time, command and
action structure, error handling, and help facilities are considered as the design
model is refined. This phase serves as the foundation for the implementation
phase.
3. Interface Construction and Implementation
The implementation activity begins with the creation of a prototype (model) that
enables usage scenarios to be evaluated. As iterative design process continues a
User Interface toolkit that allows the creation of windows, menus, device
interaction, error messages, commands, and many other elements of an
interactive environment can be used for completing the construction of an
interface.
4. Interface Validation
This phase focuses on testing the interface. The interface should be in such a
way that it should be able to perform tasks correctly, and it should be able to
handle a variety of tasks. It should achieve all the user’s requirements. It should
be easy to use and easy to learn. Users should accept the interface as a useful
one in their work.
User Interface Design Golden Rules
The following are the golden rules stated by Theo Mandel that must be followed
during the design of the interface. Place the user in control:
1. Define the interaction modes in such a way that does not force the
user into unnecessary or undesired actions: The user should be able to
easily enter and exit the mode with little or no effort.
2. Provide for flexible interaction: Different people will use different
interaction mechanisms, some might use keyboard commands, some
might use mouse, some might use touch screen, etc., Hence all interaction
mechanisms should be provided.
3. Allow user interaction to be interruptible and undoable: When a user
is doing a sequence of actions the user must be able to interrupt the
sequence to do some other work without losing the work that had been
done. The user should also be able to do undo operation.
4. Streamline interaction as skill level advances and allow the
interaction to be customized: Advanced or highly skilled user should be
provided a chance to customize the interface as user wants which allows
different interaction mechanisms so that user doesn’t feel bored while
using the same interaction mechanism.
5. Hide technical internals from casual users: The user should not be
aware of the internal technical details of the system. He should interact
with the interface just to do his work.
6. Design for direct interaction with objects that appear on-screen: The
user should be able to use the objects and manipulate the objects that are
present on the screen to perform a necessary task. By this, the user feels
easy to control over the screen.
Reduce the User’s Memory Load
1. Reduce demand on short-term memory: When users are involved in
some complex tasks the demand on short-term memory is significant. So
the interface should be designed in such a way to reduce the remembering
of previously done actions, given inputs and results.
2. Establish meaningful defaults: Always an initial set of defaults should
be provided to the average user, if a user needs to add some new features
then he should be able to add the required features.
3. Define shortcuts that are intuitive: Mnemonics should be used by the
user. Mnemonics means the keyboard shortcuts to do some action on the
screen.
4. The visual layout of the interface should be based on a real-world
metaphor: Anything you represent on a screen if it is a metaphor for a
real-world entity then users would easily understand.
5. Disclose information in a progressive fashion: The interface should be
organized hierarchically i.e., on the main screen the information about the
task, an object or some behavior should be presented first at a high level
of abstraction. More detail should be presented after the user indicates
interest with a mouse pick.
Make the Interface Consistent
1. Allow the user to put the current task into a meaningful
context: Many interfaces have dozens of screens, so it is important to
provide indicators consistently so that the user knows the context of the
work being done. The user should also know from which page they have navigated
to the current page and where they can navigate to from the current page.
2. Maintain consistency across a family of applications: The
development of a set of applications should follow and implement
the same design rules so that consistency is maintained among the
applications.
3. If past interactive models have created user expectations, do not make
changes unless there is a compelling reason: Once a particular
interactive sequence has become standard (e.g., Ctrl+S to save a file), the user
expects this in every application she encounters.
User interface design is a crucial aspect of software engineering, as it is
the means by which users interact with software applications. A well-
designed user interface can improve the usability and user experience of
an application, making it easier to use and more effective.
Key Principles for Designing User Interfaces
1. User-centered design: User interface design should be focused on the
needs and preferences of the user. This involves understanding the user’s
goals, tasks, and context of use, and designing interfaces that meet their
needs and expectations.
2. Consistency: Consistency is important in user interface design, as it helps
users to understand and learn how to use an application. Consistent
design elements such as icons, color schemes, and navigation menus
should be used throughout the application.
3. Simplicity: User interfaces should be designed to be simple and easy to
use, with clear and concise language and intuitive navigation. Users
should be able to accomplish their tasks without being overwhelmed by
unnecessary complexity.
4. Feedback: Feedback is significant in user interface design, as it helps
users to understand the results of their actions and confirms that they are
making progress towards their goals. Feedback can take the form of
visual cues, messages, or sounds.
5. Accessibility: User interfaces should be designed to be accessible to all
users, regardless of their abilities. This involves considering factors such
as color contrast, font size, and assistive technologies such as screen
readers.
6. Flexibility: User interfaces should be designed to be flexible and
customizable, allowing users to tailor the interface to their own
preferences and needs.
Real-time systems:
A real-time system means that the system is subjected to real-time, i.e.,
the response should be guaranteed within a specified timing constraint or
the system should meet the specified deadline. For example flight control
systems, real-time monitors, etc.
Types of real-time systems based on timing constraints:
1. Hard real-time system: This type of system can never miss its deadline.
Missing the deadline may have disastrous consequences. The usefulness
of results produced by a hard real-time system decreases abruptly and
may become negative if tardiness increases. Tardiness means how late a
real-time system completes its task with respect to its deadline. Example:
Flight controller system.
2. Soft real-time system: This type of system can miss its deadline
occasionally with some acceptably low probability. Missing the deadline
have no disastrous consequences. The usefulness of results produced by a
soft real-time system decreases gradually with an increase in tardiness.
Example: Telephone switches.
3. Firm Real-Time Systems: These are systems that lie between hard and
soft real-time systems. In firm real-time systems, missing a deadline is
tolerable, but the usefulness of the output decreases with time. Examples
of firm real-time systems include online trading systems, online auction
systems, and reservation systems.
Reference model of the real-time system:
Our reference model is characterized by three elements:
1. A workload model: It specifies the application supported by the system.
2. A resource model: It specifies the resources available to the application.
3. Algorithms: It specifies how the application system will use resources.
Terms related to real-time system:
1. Job: A job is a small piece of work that can be assigned to a processor
and may or may not require resources.
2. Task: A set of related jobs that jointly provide some system
functionality.
3. Release time of a job: It is the time at which the job becomes ready for
execution.
4. Execution time of a job: It is the time taken by the job to finish its
execution.
5. Deadline of a job: It is the time by which a job should finish its
execution. Deadline is of two types: absolute deadline and relative
deadline.
6. Response time of a job: It is the length of time from the release time of a
job to the instant when it finishes.
7. The maximum allowable response time of a job is called its relative
deadline.
8. The absolute deadline of a job is equal to its relative deadline plus its
release time (see the small numeric sketch after this list).
9. Processors are also known as active resources. They are essential for the
execution of a job. A job must have one or more processors in order to
execute and proceed towards completion. Example: computer,
transmission links.
10.Resources are also known as passive resources. A job may or may not
require a resource during its execution. Example: memory, mutex
11.Two resources are identical if they can be used interchangeably else they
are heterogeneous.
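A small numeric sketch of these terms follows; all of the timing values are assumed purely for illustration.

    # A job released at t = 5 ms with a relative deadline of 20 ms that
    # finishes executing at t = 18 ms (all values assumed).
    release_time      = 5    # ms
    relative_deadline = 20   # ms
    finish_time       = 18   # ms

    absolute_deadline = release_time + relative_deadline     # = 25 ms (item 8 above)
    response_time     = finish_time - release_time           # = 13 ms (item 6 above)

    print(absolute_deadline, response_time)                   # 25 13
    print("deadline met:", finish_time <= absolute_deadline)  # True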

Advantages:
• Real-time systems provide immediate and accurate responses to external
events, making them suitable for critical applications such as air traffic
control, medical equipment, and industrial automation.
• They can automate complex tasks that would otherwise be impossible to
perform manually, thus improving productivity and efficiency.
• Real-time systems can reduce human error by automating tasks that
require precision, accuracy, and consistency.
• They can help to reduce costs by minimizing the need for human
intervention and reducing the risk of errors.
• Real-time systems can be customized to meet specific requirements,
making them ideal for a wide range of applications.
Disadvantages:
• Real-time systems can be complex and difficult to design, implement, and
test, requiring specialized skills and expertise.
• They can be expensive to develop, as they require specialized hardware
and software components.
• Real-time systems are typically less flexible than other types of computer
systems, as they must adhere to strict timing requirements and cannot be
easily modified or adapted to changing circumstances.
• They can be vulnerable to failures and malfunctions, which can have
serious consequences in critical applications.
• Real-time systems require careful planning and management, as they
must be continually monitored and maintained to ensure they operate
correctly.
HUMAN FACTORS:
Consideration of human factors is imperative for the design and
development of any software work. It presents the underlying idea for
incorporating these factors into the software life cycle. Many giant
companies came to recognise that the success of a product depends upon
a solid human factors design. Human factors discovers and applies
information about human behaviour, abilities, limitations and other
characteristics to the design of tools, machines, systems, tasks, jobs and
environments for productive, safe, comfortable and effective human use.
Study of human factors is essential for every software manager, since
he/she must be acquainted with how his/her staff members interact
with each other. Generally, software products are used by a variety of people,
and it is necessary to take into account the abilities of such a group to make the
software more useful and popular.
Objective of human factors design:
The purpose of human factors design is to create products that meet
operability and learnability goals. The design should meet the user's needs
by being not only effective and efficient but also of high quality, while
keeping an eye on the major concern of the customer in most cases, which
is affordability.
The engineering discipline for designers and developers must focus on
the following:
• Users and their psychology
• Amount of work that the user must do, including task goals,
performance requirements and group communication requirements.
• Quality and performance.
• Information required by users and their job.
Benefits:
• Elevated user satisfaction.
• Decreased training time and costs.
• Reduced operator stress.
• Reduced product liability.
• Decrement of operating costs.
• Lesser operational error.

Usability-based approach to human factors:

People often do not take human factors seriously because they are
regarded as common sense. Many companies heavily channel their
resources and time towards aspects of software development such as
planning, management and control. They often neglect the fact that they
must present their product in such a way that it is easy to learn and
implement and that it should be aesthetic in nature.
Interface designers and engineering psychologists apply systematic human
factors techniques to produce designs for hardware and software.
A systematic approach is required in the human factors design process, and
this is where usability comes in.
Usability is a software quality characteristic, studied in surveys of software
usability costs and benefits, and it can simply be defined as an external
attribute of software quality. Involving users in the development life cycle
ensures that the product is user-friendly and widely accepted.
Usability aims at the following:
• Shortening the time to accomplish tasks.
• Reducing the number of mistakes made.
• Reducing learning time.
• Improving people's satisfaction with a system.

Benefits of usability:
• Elevated sales and consumer satisfaction.
• Increased productivity and efficiency.
• Decreased training costs and time.
• Lesser support and maintenance costs.
• Reduced documentation and support costs.
• Increased satisfaction, performance and productivity.

For a software product to be successful with the customer, a software
engineer needs to develop the product in such a way that it is easy to
understand, learn and use; human factors therefore play a very important
role in the software life cycle.
A software engineer must always keep in mind the end user who is
going to use the product, should make things as simple as possible, and
should provide the best while at the same time not being too hard on the
user's pocket.
Usability testing deals with the effective designing of a product.

Human-computer Interaction:

The Human-computer interaction (HCI) program will play a leading
role in the creation of tomorrow's exciting new user interface design
software and technology, by supporting the broad spectrum of fundamental
research that will ultimately transform the human-computer interaction
experience so that the computer is no longer a distracting focus of attention.

Computer:
A Computer system comprises various elements, each of
which affects the user of the system. Input devices for interactive use,
allowing text entry, drawing and selection from the screen.
➢ Text entry: Traditional keyboard, phone text entry.
➢ Pointing: Mouse, but also touch pads.
Output display devices for interactive use:
➢ Different types of screen, mostly using the same form of bitmap display.
➢ Large displays and situated displays for shared and public use.
Memory:
Short-term memory: RAM.
Long-term memory: magnetic and optical disks; capacity limitations relate
to document and video storage.
Processing:
The effects when systems run too slow or too fast; the myth of the
infinitely fast machine.
Limitations on processing speed.
Instead of workstations, computers may be in the form of
embedded computational machines, such as parts of microwave ovens.
Because the techniques for designing these interfaces bear so much
relationship to the techniques for designing workstation interfaces, they
can be profitably treated together. Human-computer interaction, by
contrast, studies both the machine side and the human side, but of a
narrower class of devices.
Human:
Humans are limited in their capacity to process information. This
has important implications for design. Information is received and
responses are given via a number of input and output channels.
➢ Visual channel.
➢ Auditory channel
➢ Movement
Information is stored in memory:
➢ Sensory memory.
➢ Short term memory.
➢ Long term memory.
Information is processed and applied:
➢ Reasoning.
➢ Problem solving.
➢ Error.
Interaction:
The communication between the user and the system is their interaction.
The interaction framework has four parts:
1.User
2.Input
3.System
4.Output
Interaction models help us to understand what is going on in the interaction
between user and system. They address the translations between what the user
wants and what the system does.
Human-Computer interaction is concerned with the joint performance of tasks by
humans and machines; the structure of communication between human and
machine, human capabilities to use machines.
The goals of HCI are to produce usable and safe systems as well as functional
systems. In order to produce computer systems with good usability, developers
must attempt to:
➢ Understand the factors that determines how people use technology.
➢ Develop tools and technique to enable building suitable system.
➢ Achieve efficient, effective and safe interaction.
➢ Put people first.

HCI arose as a field from intertwined roots in computer graphics,
operating systems, human factors, ergonomics, cognitive
psychology and the systems part of computer science.
A key aim of HCI is to understand how humans interact with
computers, and to represent how knowledge is passed between the
two.
Interaction styles:
Interaction can be seen as a dialogue between the computer and the user.
Some applications have very distinct styles of interaction.
We can identify some common styles.
• Command line interface
• Menus
• Natural language
• Form-fills and spread sheets
• WIMP
Command line interface:
Way of expressing instructions to the computer directly, can be
function keys, single characters, short abbreviations.
➢ Suitable for repetitive tasks.
➢ Better for expert users than novices.
➢ Offer direct access to system functionality.
Menus:
A set of options is displayed on the screen. Because the options are visible they
demand less recall; they rely on recognition, so names should be meaningful.
Selection is made using the mouse, numeric or alphabetic keys.
A menu system can be:
➢ Purely text based, with options presented as numbered choices, or
➢ Graphical, with the menu appearing in a box and choices made either by
typing the initial letter or by moving around with the arrow keys.
Form filling interfaces:
➢ Primarily for data entry or data retrieval.
➢ Screen like paper form.
➢ Data put in relevant place.
WIMP interface:
➢ Windows
➢ Icon
➢ Menus
➢ Pointers
Windows: Areas of the screen that behave as if they were independent terminals.
• Can contain text or graphics.
• Can be moved or resized.
• Scroll bars allow the user to move the contents of the window up and down or
from side to side.
• Title bars describe the name of the window.
Icon: A small picture or image used to represent some object in the interface, often
a window. Windows can be closed down to this small representation, allowing
many windows to be accessible. Icons can take many and various forms: highly
stylized or realistic representations.
Pointers: An important component, since the WIMP style relies on pointing at and
selecting things such as icons and menu items.
➢ Usually achieved with a mouse.
➢ A wide variety of alternatives such as joysticks, trackballs, cursor keys or
keyboard shortcuts are also used.
Menus: A choice of operations or services that can be performed is offered on the
screen; the required option is selected with the pointer.
➢ Problem – menus can take up a lot of screen space.
➢ Solution – use pull-down or pop-up menus.
➢ Pull-down menus are dragged down from a single title at the top of the
screen.
➢ Pop-up menus appear when a particular region of the screen is clicked on.

Interaction devices:
Different tasks, different types of data and different types of users
all require different user interface devices. In most cases, interface devices are
either input or output devices; a touch screen, for example, combines both.
➢ Interface devices correlate to the human senses.
➢ Nowadays, a device is usually designed either for input or for output.
Input devices:
Most commonly, personal computers are equipped with text input and
pointing devices. For text input, the QWERTY keyboard is the standard solution,
although alternatives exist depending on the purpose of the system. At the same
time, the mouse is not the only imaginable pointing device; alternatives for similar
but slightly different purposes include the touchpad, trackball and joystick.
Output devices:
Output from a personal computer in most cases means output of visual data.
Devices for 'dynamic visualisation' include the traditional cathode ray tube
(CRT) and the liquid crystal display (LCD). Printers are also very important
devices for visual output but are substantially different from screens in that their
output is static.
The subject of HCI is very rich both in terms of the disciplines it draws from
and in the opportunities it offers for research. The study of user interfaces provides
a double-sided approach to understanding how humans and machines interact.
From studying human psychology, we can design better interfaces for people to
interact with computers.
Human- Computer Interface Design:
The overall process for designing a user interface begins with
the creation of different models. The intention of computer interface design is to
learn the ways of designing user-friendly interfaces or interactions.
Interface Design Models:
Four different models come into play when a human-computer
interface (HCI) is to be designed.
The software engineer creates a design model, a human engineer (or the
software engineer) establishes a user model, the end user develops a mental image
that is often called the user's model or the system perception, and the implementers
of the system create a system image.
Task Analysis and Modelling:
Task analysis and modelling can be applied to understand the tasks that
people currently perform and map these into a similar set of tasks to be supported
by the user interface.
For example, assume that a small software company wants to build a
computer-aided design system explicitly for interior designers. By observing a
designer at work, the engineer notices that interior design comprises a
number of activities: furniture layout, fabric and material selection, wall and
window covering selection, presentation, costing and shopping. Each of these
major tasks can be elaborated into subtasks. For example, furniture layout can be
refined into the following tasks:
(1) Draw floor plan based on room dimensions;
(2) Place windows and doors at appropriate locations;
(3) Use furniture templates to draw scaled furniture outlines on floor
plan;
(4) Move furniture outlines to get best placement;
(5) Label all furniture outlines;
(6) Draw dimensions to show location; and
(7) Draw perspective view for customer.

Subtasks 1 to 7 can each be refined further. Subtasks 1 to 6 will be performed by
manipulating information and performing actions within the user interface. On the
other hand, subtask 7 can be performed automatically in software and will result
in little direct user interaction.
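As a small, hedged illustration (the dictionary structure and function below are invented for this sketch and are not part of any task-analysis tool), the observed activities and the furniture-layout subtasks can be recorded as a simple hierarchical task model:

# Illustrative hierarchical task model for the interior-design example above.
interior_design_tasks = {
    "furniture layout": [
        "draw floor plan based on room dimensions",
        "place windows and doors at appropriate locations",
        "use furniture templates to draw scaled furniture outlines on floor plan",
        "move furniture outlines to get best placement",
        "label all furniture outlines",
        "draw dimensions to show location",
        "draw perspective view for customer",
    ],
    "fabric and material selection": [],
    "wall and window covering selection": [],
    "presentation": [],
    "costing": [],
    "shopping": [],
}

def print_task_model(model: dict) -> None:
    # Walk the task model and print each major task with its numbered subtasks.
    for task, subtasks in model.items():
        print(task)
        for i, sub in enumerate(subtasks, start=1):
            print(f"  ({i}) {sub}")

print_task_model(interior_design_tasks)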
Design issues:
As the design of a user interface evolves, four common design issues
almost always surface: system response time, user help facilities, error
information handling, and command labelling.
System response time is the primary complaint for many interactive
systems. In general, system response time is measured from the point at which
the user performs some control action until the software responds with the desired
output or action.
System response time has two important characteristics: length and variability.
If the system response time is too long, user frustration and stress are the inevitable
result.
Variability refers to the deviation from average response time, and in many
ways it is the more important of the two response time characteristics.
In many cases, however, modern software provides on-line help facilities
that enable a user to get a question answered or resolve a problem without leaving
the interface.
Two different types of help facilities are encountered: integrated and add
on. An integrated help facility is designed into the software from the beginning.
An add-on help facility is added to the software after the system has been built.
In many ways, it is really an on-line user’s manual with limited query capability.
There is little doubt that the integrated help facility is preferable to the add-on
approach.
A poor error message provides no real indication of what is wrong or where
to look for additional information, and it does nothing to assuage user anxiety or
to help correct the problem. A good error message should satisfy the following
guidelines:
• The message should describe the problem in jargon that the user can
understand.
• The message should provide constructive advice for recovering from the
error.
• The message should indicate any negative consequences of the error.
Implementation Tools:
The process of user interface design is iterative. That is, a design model is
implemented as a prototype, examined by users and modified based on their
comments. To accommodate this iterative design approach, a broad class of
interface design and prototyping tools has evolved. Called user interface toolkits,
these tools provide routines or objects that facilitate the creation of windows,
menus, device interaction, error messages, commands, and many other elements
of an interactive environment.
Design Evaluation:
After the preliminary design has been completed, an operational user
interface prototype is created. The prototype is evaluated by the user, who
provides the designer with direct comments about the efficiency of the interface.
In addition, if formal evaluation techniques are used (e.g., questionnaires,
rating sheets), the designer may extract information from them (e.g., 80 percent
of all users did not like the mechanism for saving data files).
Design modifications are made based on user input and the next-
level prototype is created. The evaluation cycle continues until no further
modifications to the interface design are necessary.

Interface design :
Interface design is one of the most important parts of software design. It is crucial
in the sense that user interaction with the system takes place through the various
interfaces provided by the software product.
Think of the days of text-based systems, where the user had to type a command
on the command line to execute even a simple task.
Example of a command line interface:
• run prog1.exe /i=2 message=on
The above command line executes the program prog1.exe with an input i=2
and with messages during execution set to on. Although such a command line
interface gives the user the liberty to run a program with a concise command, it is
difficult for a novice user and is error prone. It also requires the user to remember
the commands for various tasks, with the various details of their options, as
shown above. An example of a menu, with options being asked from the user, is
given in Figure 3.11.

This simple menu allows the user to execute the program with the options available
as a selection, and further provides options for exiting the program and going back
to the previous screen. Although it provides greater flexibility than the command
line option and does not need the user to remember the commands, the user still
cannot navigate directly to any desired option from this screen. At best the user can
go back to the previous screen to select a different option.
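Since Figure 3.11 is not reproduced here, the following is a minimal sketch (purely illustrative; the option texts and the program name prog1 are invented) of the kind of menu-driven interaction being described, where the user picks a numbered option instead of remembering a command:

def show_menu() -> str:
    # Display the available options and read the user's selection.
    print("1. Run prog1 with i=2, messages on")
    print("2. Run prog1 with i=2, messages off")
    print("3. Go back to the previous screen")
    print("4. Exit")
    return input("Select an option: ")

choice = show_menu()
if choice == "1":
    print("Running prog1 with i=2, messages on...")
elif choice == "2":
    print("Running prog1 with i=2, messages off...")
elif choice == "3":
    print("Returning to the previous screen...")
else:
    print("Exiting.")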
Modern graphical user interface provides tools for easy navigation and
interactivity to the user to perform different tasks.
The following are the advantages of a Graphical User Interface (GUI):
• Various information can be displayed, allowing the user to switch to a different
task directly from the present screen.
• Useful graphical icons and pull-down menus reduce typing effort by the user.
• Keyboard shortcuts are provided to perform frequently performed tasks.
• Simultaneous operation of various tasks is possible without losing the present
context.
Any interface design is targeted at users of different categories:
• Expert user with adequate knowledge of the system and application
• Average user with reasonable knowledge
• Novice user with little or no knowledge.
The following are the elements of good interface design:
• The goal and intention of the task must be identified.
• The important thing about designing interfaces is maintaining consistency.
Use of a consistent color scheme, messages and terminology helps.
• Develop standards for good interface design and stick to them.
• Use icons wherever possible to provide appropriate messages.
• Allow the user to undo the current command. This helps in undoing mistakes
committed by the user.
• Provide context-sensitive help to guide the user.
• Use a proper navigational scheme for easy navigation within the application.
• Discuss with current users to improve the interface.
• Think from the user's perspective.
• The text appearing on the screen is the primary source of information exchange
between the user and the system. Avoid using abbreviations. Be very specific in
communicating mistakes to the user. If possible, provide the reason for the error.
• Navigation within the screen is important and is especially useful for data entry
screens where the keyboard is used intensively to input data.
• Use of color should be of secondary importance. Keep in mind that users may
access the application on a monochrome screen.
• Expect the user to make mistakes and provide appropriate measures to handle
such errors through proper interface design.
• Grouping of data elements is important. Group related data items accordingly.
• Justify the data items.
• Avoid high-density screen layouts. Keep a significant amount of the screen blank.
• Make sure that an accidental double click instead of a single click does not do
something unexpected.
• Provide a file browser. Do not expect the user to remember the path of the
required file.
• Provide keyboard shortcuts for frequently done tasks. This saves time.
• Provide an on-line manual to help the user in operating the software.
• Always allow a way out (i.e., cancellation of an action already completed).
• Warn the user about critical tasks, like deletion of a file or updating of critical
information.
• Programmers are not always good interface designers. Take the help of expert
professionals who understand human perception better than programmers.
• Include all possible features in the application even if a feature is available in
the operating system.
• Word messages carefully in a user-understandable manner.
• Develop the navigational procedure prior to developing the user interface.
Interface standards:
A user interface is the system by which people (user) interact with machine.
Why we need standards?
➢ Despite the best efforts of HCI, we are still getting it wrong.
➢ We specify the system behaviour.
➢ We validate our specification.
➢ We test the code and prove the correctness of our system.
➢ It is not just a design issue or a usability testing issue.
History of user interface standards
• In 1965, human factors specialists worked to make user interfaces usable,
that is, accurate and easy to learn.
• In 1985, we realised that usability was not enough; we needed consistency,
and standards became important.
• User interface standards are very effective when you are developing,
testing or designing any new site or application, or when you are revising
over 50 percent of the pages in an existing application or site.
Creating a user interface standard helps you to create user interfaces that are
consistent and easy to understand.
Example:
1. Modelling a system which has user-controlled display options.
2. The user can select from one of three choices.
3. The choices determine the size of the current window display.
4. So the designers came up with a schema and presented the first prototype.
Select screen display
FULL
HALF
PANEL

Problem:
➢ User testing shows the system breaks when a user selects more than one
option.
➢ Designer fixes it and presents the second prototype.
➢ But isn’t this the original prototype?
➢ Designer has ‘improved it’.
➢ User can now only select one checkbox.
➢ Designer has broken guidelines regarding selection controls.
Guidelines for using selection controls:
➢ Use radio buttons to indicate one or more options that must be either on or
off, but which are mutually exclusive.
➢ Use checkboxes to indicate one or more options that must be either on or
off, but which are not mutually exclusive.
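The following is a minimal sketch of these guidelines in practice, using Python's standard tkinter widgets (the FULL/HALF/PANEL choices echo the prototype above; the "Show toolbar" option is invented): mutually exclusive choices get radio buttons sharing one variable, while an independent on/off option gets a checkbox.

import tkinter as tk

root = tk.Tk()
root.title("Select screen display")

# Mutually exclusive options -> radio buttons sharing one variable.
display = tk.StringVar(value="FULL")
for choice in ("FULL", "HALF", "PANEL"):
    tk.Radiobutton(root, text=choice, variable=display, value=choice).pack(anchor="w")

# Independent on/off option -> a checkbox with its own variable.
show_toolbar = tk.BooleanVar(value=True)
tk.Checkbutton(root, text="Show toolbar", variable=show_toolbar).pack(anchor="w")

root.mainloop()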
Extending the specification:
➢ Design must satisfy our specification.
➢ Design must also satisfy guidelines.
➢ Find a way to specify selection widget guidelines.
➢ Ensure the described property holds in our system.
➢ So, they extend specification and present revised prototype.
Types of standards:
There are 3 types of standards
Methodological standards: This is a checklist to remind developers of the tasks
needed to create usable systems, such as user interviews, task analysis, design, etc.
Design standards: This is the 'building code': a set of absolute requirements that
ensure a consistent look and feel.
Design principles: Good design principles are specific and research-based, and
developers work well within the design standards' rules.
Building the design standards:
Major activities when building these standards are
➢ Project kick off and planning
• You collaborate with key members of the project team to define the
goals and scope of the user interface standards
• This includes whether the UI document is to be considered a
guideline, standard or style guide, which UI technology it will be
based on and who should participate in its development.
• You work closely with your team and other stake holders to identify
your key business need and business flows.
➢ Gather user interface samples
Based on the information and direction received from your team,
you begin by reviewing your major business applications and
extracting examples for the UI standard.
This is an iterative process that takes feedback from as wide
an audience as is appropriate.
➢ Develop user interface document
The document itself includes
• How to change and update the document.
• Common UI elements and when to use them.
• General navigation, graphic look and feel(or style), error handling,
messages.
➢ Review with team
• This is an iterative process that takes feedback from as wide an
audience as is appropriate.
• The standard is reviewed and refined with your team and stakeholders
in a consensus-building process.
➢ Present user interface document.
• You present the UI document in electronic form or paper form.
Benefits of standards:
1. The goal of UI design is to make the user interaction as simple and efficient as
possible.
2. Your users or customers see a consistent UI within and between applications.
3. Reduced costs for support, user training packages and job aids.
4. Most important, customer satisfaction: your users will make fewer errors and
have reduced training requirements and less frustration time per transaction.
5. Reduced cost and effort for system maintenance.
UNIT-V
What is Software Quality?
Software quality shows how good and reliable a product is. To give an
example, think about functionally correct software: it performs all the functions
laid out in the SRS document, but has a virtually unusable user interface. Even
though it is functionally correct, we do not consider it to be a high-quality
product.
Software Quality Assurance (SQA):
Software Quality Assurance (SQA) is simply a way to assure quality in the
software. It is the set of activities that ensure processes, procedures as well as
standards are suitable for the project and implemented correctly.
Software Quality Assurance is a process that works parallel to Software
Development. It focuses on improving the process of development of software so
that problems can be prevented before they become major issues. Software
Quality Assurance is a kind of Umbrella activity that is applied throughout the
software process.
What is quality?
Quality in a product or service can be defined by several measurable
characteristics. Each of these characteristics plays a crucial role in determining
the overall quality.

Software Quality Assurance (SQA) encompasses:
• An SQA process.
• Specific quality assurance and quality control tasks (including technical
reviews and a multitiered testing strategy).
• Effective software engineering practice (methods and tools).
• Control of all software work products and the changes made to them.
• A procedure to ensure compliance with software development standards
(when applicable).
• Measurement and reporting mechanisms.
Elements of Software Quality Assurance (SQA)
1. Standards: The IEEE, ISO, and other standards organizations have
produced a broad array of software engineering standards and related
documents. The job of SQA is to ensure that standards that have been
adopted are followed and that all work products conform to them.
2. Reviews and audits: Technical reviews are a quality control activity
performed by software engineers for software engineers. Their intent is to
uncover errors. Audits are a type of review performed by SQA personnel
(people employed in an organization) with the intent of ensuring that
quality guidelines are being followed for software engineering work.
3. Testing: Software testing is a quality control function that has one primary
goal: to find errors. The job of SQA is to ensure that testing is properly
planned and efficiently conducted so that it has the highest likelihood of
achieving its primary goal.
4. Error/defect collection and analysis : SQA collects and analyzes error
and defect data to better understand how errors are introduced and what
software engineering activities are best suited to eliminating them.
5. Change management: SQA ensures that adequate change management
practices have been instituted.
6. Education: Every software organization wants to improve its software
engineering practices. A key contributor to improvement is education of
software engineers, their managers, and other stakeholders. The SQA
organization takes the lead in software process improvement which is key
proponent and sponsor of educational programs.
7. Security management: SQA ensures that appropriate process and
technology are used to achieve software security.
8. Safety: SQA may be responsible for assessing the impact of software
failure and for initiating those steps required to reduce risk.
9. Risk management : The SQA organization ensures that risk management
activities are properly conducted and that risk-related contingency plans
have been established.
Software Quality Assurance (SQA) focuses
The Software Quality Assurance (SQA) focuses on the following

• Software’s portability: Software’s portability refers to its ability to be
easily transferred or adapted to different environments or platforms without
needing significant modifications. This ensures that the software can run
efficiently across various systems, enhancing its accessibility and
flexibility.
• software’s usability: Usability of software refers to how easy and
intuitive it is for users to interact with and navigate through the application.
A high level of usability ensures that users can effectively accomplish their
tasks with minimal confusion or frustration, leading to a positive user
experience.
• software’s reusability: Reusability in software development involves
designing components or modules that can be reused in multiple parts of
the software or in different projects. This promotes efficiency and reduces
development time by eliminating the need to reinvent the wheel for similar
functionalities, enhancing productivity and maintainability.
• software’s correctness: Correctness of software refers to its ability to
produce the desired results under specific conditions or inputs. Correct
software behaves as expected without errors or unexpected behaviors,
meeting the requirements and specifications defined for its functionality.
• software’s maintainability: Maintainability of software refers to how
easily it can be modified, updated, or extended over time. Well-maintained
software is structured and documented in a way that allows developers to
make changes efficiently without introducing errors or compromising its
stability.
• software’s error control: Error control in software involves
implementing mechanisms to detect, handle, and recover from errors or
unexpected situations gracefully. Effective error control ensures that the
software remains robust and reliable, minimizing disruptions to users and
providing a smoother experience overall.
Software Quality Assurance (SQA) Include
1. A quality management approach.
2. Formal technical reviews.
3. Multi testing strategy.
4. Effective software engineering technology.
5. Measurement and reporting mechanism.
Major Software Quality Assurance (SQA) Activities
1. SQA Management Plan: Make a plan for how you will carry out the SQA
throughout the project. Think about which set of software engineering
activities is best for the project, and check the skill level of the SQA team.
2. Set The Check Points: The SQA team should set checkpoints and evaluate
the performance of the project on the basis of data collected at these
checkpoints.
3. Measure Change Impact: The changes made to correct an error sometimes
reintroduce more errors, so keep a measure of the impact of each change on
the project. Re-test the new change to check the compatibility of the fix
with the whole project.
4. Multi testing Strategy: Do not depend on a single testing approach. When
you have a lot of testing approaches available, use them.
5. Manage Good Relations: In the working environment, maintaining good
relations with the other teams involved in the project development is
mandatory. A bad relationship between the SQA team and the programmers'
team will directly and badly impact the project. Don't play politics.
6. Maintaining records and reports: Comprehensively document and share
all QA records, including test cases, defects, changes, and cycles, for
stakeholder awareness and future reference.
7. Reviews software engineering activities: The SQA group identifies and
documents the processes. The group also verifies the correctness of
software product.
8. Formalize deviation handling: Track and document software deviations
meticulously. Follow established procedures for handling variances.
Benefits of Software Quality Assurance (SQA)
1. SQA produces high quality software.
2. High quality application saves time and cost.
3. SQA is beneficial for better reliability.
4. SQA is beneficial in the condition of no maintenance for a long time.
5. High quality commercial software increase market share of company.
6. Improving the process of creating software.
7. Improves the quality of the software.
8. It cuts maintenance costs. Get the release right the first time, and your
company can forget about it and move on to the next big thing. Release a
product with chronic issues, and your business bogs down in a costly, time-
consuming, never-ending cycle of repairs.
Disadvantage of Software Quality Assurance (SQA)
There are a number of disadvantages of quality assurance.
• Cost: Quality assurance requires adding more resources for the betterment
of the product, which increases the budget.
• Time Consuming: Testing and deployment of the project take more time,
which can cause delays in the project.
• Overhead : SQA processes can introduce administrative overhead,
requiring documentation, reporting, and tracking of quality metrics. This
additional administrative burden can sometimes outweigh the benefits,
especially for smaller projects.
• Resource Intensive : SQA requires skilled personnel with expertise in
testing methodologies, tools, and quality assurance practices. Acquiring
and retaining such talent can be challenging and expensive.
• Resistance to Change : Some team members may resist the
implementation of SQA processes, viewing them as bureaucratic or
unnecessary. This resistance can hinder the adoption and effectiveness of
quality assurance practices within an organization.
• Not Foolproof : Despite thorough testing and quality assurance efforts,
software can still contain defects or vulnerabilities. SQA cannot guarantee
the elimination of all bugs or issues in software products.
• Complexity : SQA processes can be complex, especially in large-scale
projects with multiple stakeholders, dependencies, and integration points.
Managing the complexity of quality assurance activities requires careful
planning and coordination.
Goals and Measures of Software Quality Assurance:
Software Quality simply means to measure how well software is designed i.e.
the quality of design, and how well software conforms to that design i.e. quality
of conformance. Software quality describes degree at which component of
software meets specified requirement and user or customers’ needs and
expectations.
Software Quality Assurance (SQA) is a planned and systematic pattern of
activities that are necessary to provide a high degree of confidence regarding
quality of a product. It actually provides or gives a quality assessment of quality
control activities and helps in determining validity of data or procedures for
determining quality. It generally monitors software processes and methods that
are used in a project to ensure or assure and maintain quality of software.
Goals of Software Quality Assurance :
• Quality assurance consists of a set of reporting and auditing functions.
• These functions are useful for assessing and controlling effectiveness and
completeness of quality control activities.
• It ensures management of data which is important for product quality.
• It also ensures that the software being developed meets and complies
with standard quality assurance requirements.
• It ensures that the end result or product meets and satisfies user and
business requirements.
• It finds or identifies defects or bugs and reduces the effect of these
defects.
Measures of Software Quality Assurance :
There are various measures of software quality. These are given below:
1. Reliability –
It includes aspects such as availability, accuracy, and recoverability of
system to continue functioning under specific use over a given period of
time. For example, recoverability of system from shut-down failure is a
reliability measure.
2. Performance –
It means to measure throughput of system using system response time,
recovery time, and start up time. It is a type of testing done to measure
performance of system under a heavy workload in terms of
responsiveness and stability.
3. Functionality –
It represents that system is satisfying main functional requirements. It
simply refers to required and specified capabilities of a system.
4. Supportability –
There are a number of other requirements or attributes that software
system must satisfy. These include- testability, adaptability,
maintainability, scalability, and so on. These requirements generally
enhance capability to support software.
5. Usability –
It is capability or degree to which a software system is easy to understand
and used by its specified users or customers to achieve specified goals
with effectiveness, efficiency, and satisfaction. It includes aesthetics,
consistency, documentation, and responsiveness.
Software Quality Assurance (SQA)
consists of a set of activities that monitor the software engineering
processes and methods used to ensure quality.
Software Quality Assurance (SQA) Encompasses
1. A quality management approach.
2. Effective software engineering technology (methods and tools).
3. Some formal technical reviews are applied throughout the software
process.
4. A multi-tiered testing strategy.
5. Controlling software documentation and the changes made to it.
6. Procedure to ensure compliance with software development standards
(when applicable).
7. Measurement and reporting mechanisms.
Software Quality
Software quality is defined in different ways but here it means the
conformance to explicitly stated functional and performance
requirements, explicitly documented development standards, and implicit
characteristics that are expected of all professionally developed software.
Following are the quality management system models under which the
software system is created is normally based:
1. CMMI
2. Six Sigma
3. ISO 9000
Note: There may be many other models for quality management, but the
ones mentioned above are the most popular.
Software Quality Assurance (SQA) Activities
Software Quality Assurance is composed of a variety of tasks associated
with two different fields:
1. The software engineers who do technical work.
2. SQA group that has responsibility for quality assurance planning,
oversight, record keeping, analysis, and reporting.
Basically, software engineers address quality (and perform quality
assurance and quality control activities) by applying solid technical
methods and measures, conducting formal technical reviews, and
performing well-planned software testing.
Prepares an SQA Plan for a Project
This type of plan is developed during project planning and is reviewed by
all interested parties. The quality assurance activities performed by the
software engineering team and the SQA group are governed by the plan.
The plan identifies:
• Evaluations to be performed.
• Audits and reviews to be performed.
• Standards that are applicable to the project.
• Procedures for error reporting and tracking.
• All the documents to be produced by the SQA group.
• The total amount of feedback provided to the software project team.
Measuring Software Quality using Quality Metrics:
In software engineering, software measurement is done based on
software metrics, which are measures of various characteristics of the software.
In software engineering, Software Quality Assurance (SQA) assures the quality
of the software. A set of SQA activities is continuously applied throughout
the software process. Software quality is measured based on some software
quality metrics.
There are a number of metrics available based on which software quality is
measured, but among them a few are the most useful and essential in software
quality measurement. They are:
1. Code Quality
2. Reliability
3. Performance
4. Usability
5. Correctness
6. Maintainability
7. Integrity
8. Security
Now let’s understand each quality metric in detail –
1. Code Quality – Code quality metrics measure the quality of code used for
software project development. Maintaining the software code quality by writing
Bug-free and semantically correct code is very important for good software
project development. In code quality, both quantitative metrics like the number
of lines, complexity, functions, rate of bug generation, etc., and qualitative
metrics like readability, code clarity, efficiency, maintainability, etc., are
measured (a small sketch of two simple quantitative metrics appears after this list).
2. Reliability – Reliability metrics express the reliability of software in different
conditions. The software is able to provide exact service at the right time or not
checked. Reliability can be checked using Mean Time Between Failure (MTBF)
and Mean Time To Repair (MTTR).
3. Performance – Performance metrics are used to measure the performance of
the software. Each software has been developed for some specific purposes.
Performance metrics measure the performance of the software by determining
whether the software is fulfilling the user requirements or not, by analyzing how
much time and resource it is utilizing for providing the service.
4. Usability – Usability metrics check whether the program is user-friendly or
not. Each software is used by the end-user. So it is important to measure that the
end-user is happy or not by using this software.
5. Correctness – Correctness is one of the important software quality metrics as
this checks whether the system or software is working correctly without any
error by satisfying the user. Correctness gives the degree of service each
function provides as per developed.
6. Maintainability – Each software product requires maintenance and up-
gradation. Maintenance is an expensive and time-consuming process. So if the
software product provides easy maintainability then we can say software quality
is up to mark. Maintainability metrics include the time required to adapt to new
features/functionality, Mean Time to Change (MTTC), performance in changing
environments, etc.
7. Integrity – Software integrity is important in terms of how much it is easy to
integrate with other required software which increases software functionality
and what is the control on integration from unauthorized software’s which
increases the chances of cyberattacks.
8. Security – Security metrics measure how secure the software is. In the age of
cyber terrorism, security is the most essential part of every software. Security
assures that there are no unauthorized changes, no fear of cyber attacks, etc
when the software product is in use by the end-user.
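Referring back to the quantitative code-quality measures mentioned in item 1 above, here is a minimal sketch (not a standard tool; the file name used in the usage comment is hypothetical) that collects two simple quantitative metrics for a Python source file:

import ast

def simple_code_metrics(path: str) -> dict:
    # Read a Python source file and report two basic quantitative measures:
    # the number of source lines and the number of function definitions.
    with open(path, encoding="utf-8") as f:
        source = f.read()
    tree = ast.parse(source)
    functions = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
    return {
        "lines_of_code": len(source.splitlines()),
        "function_count": len(functions),
    }

# Example usage (assumes a file named example.py exists in the working directory):
# print(simple_code_metrics("example.py"))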

SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a
software system for a specified time in a specified environment.
DEFINITIONS OF SOFTWARE RELIABILITY
Software reliability is defined as the probability of failure-free operation of a
software system for a specified time in a specified environment. The key
elements of the definition include probability of failure-free operation, length of
time of failure-free operation and the given execution environment. Failure
intensity is a measure of the reliability of a software system operating in a given
environment. Example: An air traffic control system fails once in two years.
Factors Influencing Software Reliability
• A user’s perception of the reliability of a software depends upon two
categories of information.
o The number of faults present in the software.
o The way users operate the system. This is known as the operational
profile.
• The fault count in a system is influenced by the following.
o Size and complexity of code.
o Characteristics of the development process used.
o Education, experience, and training of development personnel.
o Operational environment.
Applications of Software Reliability
The applications of software reliability includes
• Comparison of software engineering technologies.
o What is the cost of adopting a technology?
o What is the return from the technology — in terms of cost and
quality?
• Measuring the progress of system testing –The failure intensity
measure tells us about the present quality of the system: high intensity
means more tests are to be performed.
• Controlling the system in operation –The amount of change to a
software for maintenance affects its reliability.
• Better insight into software development processes – Quantification of
quality gives us a better insight into the development processes.
FUNCTIONAL AND NON-FUNCTIONAL REQUIREMENTS
System functional requirements may specify error checking, recovery features,
and system failure protection. System reliability and availability are specified as
part of the non-functional requirements for the system.
SYSTEM RELIABILITY SPECIFICATION
• Hardware reliability focuses on the probability a hardware component
fails.
• Software reliability focuses on the probability a software component will
produce an incorrect output.
• The software does not wear out and it can continue to operate after a bad
result.
• Operator reliability focuses on the probability that a system user makes
an error.
FAILURE PROBABILITIES
If there are two independent components in a system and the operation of the
system depends on them both, then the probability of system failure is
P(S) = P(A) + P(B).
If the components are replicated, then the probability of failure is P(S) = P(A)^n,
which means the system fails only if all n replicated components fail at once.
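A minimal numeric sketch of the two cases above (the probability values are illustrative only):

def series_failure(p_a: float, p_b: float) -> float:
    # Simple approximation used above: P(S) = P(A) + P(B)
    return p_a + p_b

def replicated_failure(p_a: float, n: int) -> float:
    # P(S) = P(A) ** n : the system fails only if all n replicas fail at once
    return p_a ** n

print(series_failure(0.01, 0.02))    # ~0.03
print(replicated_failure(0.01, 3))   # ~1e-06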
FUNCTIONAL RELIABILITY REQUIREMENTS
• The system will check all operator inputs to see that they fall within their
required ranges.
• The system will check all disks for bad blocks each time it is booted.
• The system must be implemented using a standard implementation of
Ada.
NON-FUNCTIONAL RELIABILITY SPECIFICATION
The required level of reliability must be expressed quantitatively. Reliability is
a dynamic system attribute. Source code reliability specifications are
meaningless (e.g. N faults/1000 LOC). An appropriate metric should be chosen
to specify the overall system reliability.
HARDWARE RELIABILITY METRICS
Hardware metrics are not suitable for software since its metrics are based on
notion of component failure. Software failures are often design failures. Often
the system is available after the failure has occurred. Hardware components can
wear out.
SOFTWARE RELIABILITY METRICS
Reliability metrics are units of measure for system reliability. System reliability
is measured by counting the number of operational failures and relating these to
demands made on the system at the time of failure. A long-term measurement
program is required to assess the reliability of critical systems.
PROBABILITY OF FAILURE ON DEMAND
The probability that the system will fail when a service request is made. It is useful
when requests are made on an intermittent or infrequent basis. It is appropriate
for protection systems where service requests may be rare and consequences
can be serious if service is not delivered. It is relevant for many safety-critical
systems with exception handlers.
RELIABILITY METRICS
• Probability of Failure on Demand (PoFoD)
o PoFoD = 0.001.
o The service fails for one out of every 1000 requests.
• Rate of Fault Occurrence (RoCoF)
o RoCoF = 0.02.
o Two failures for each 100 operational time units of operation.
• Mean Time to Failure (MTTF)
o The average time between observed failures (aka MTBF)
o It measures time between observable system failures.
o For stable systems MTTF = 1/RoCoF.
o It is relevant for systems when individual transactions take lots of
processing time (e.g. CAD or WP systems).
• Availability = MTBF / (MTBF+MTTR)
o MTBF = Mean Time Between Failure
o MTTR = Mean Time to Repair
• Reliability = MTBF / (1+MTBF)
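A minimal sketch (the operating-time and repair-time figures are illustrative) showing how MTBF, availability and the MTBF-based reliability figure listed above can be computed from observed failure data:

def mtbf(total_operating_time: float, failures: int) -> float:
    # Mean Time Between Failures = operating time / number of failures
    return total_operating_time / failures

def availability(mtbf_value: float, mttr_value: float) -> float:
    # Availability = MTBF / (MTBF + MTTR)
    return mtbf_value / (mtbf_value + mttr_value)

def reliability(mtbf_value: float) -> float:
    # Reliability = MTBF / (1 + MTBF), as given above
    return mtbf_value / (1 + mtbf_value)

m = mtbf(total_operating_time=1000.0, failures=4)   # 250 time units between failures
print(availability(m, mttr_value=2.0))               # ~0.992
print(reliability(m))                                 # ~0.996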
TIME UNITS
Time units include:
• Raw Execution Time which is employed in non-stop system
• Calendar Time is employed when the system has regular usage patterns
• Number of Transactions is employed for demand type transaction
systems
AVAILABILITY
Availability measures the fraction of time system is really available for use. It
takes repair and restart times into account. It is relevant for non-stop
continuously running systems (e.g. traffic signal).
FAILURE CONSEQUENCES – STUDY 1
Reliability does not take consequences into account. Transient faults have no
real consequences but other faults might cause data loss or corruption. Hence it
may be worthwhile to identify different classes of failure, and use different
metrics for each.
FAILURE CONSEQUENCES – STUDY 2
When specifying reliability both the number of failures and the consequences
of each matter. Failures with serious consequences are more damaging than
those where repair and recovery is straightforward. In some cases, different
reliability specifications may be defined for different failure types.
FAILURE CLASSIFICATION
Failure can be classified as the following
• Transient – only occurs with certain inputs.
• Permanent – occurs with all inputs.
• Recoverable – system can recover without operator help.
• Unrecoverable – operator has to help.
• Non-corrupting – failure does not corrupt system state or data.
• Corrupting – system state or data are altered.
BUILDING RELIABILITY SPECIFICATION
The building of reliability specification involves consequences analysis of
possible system failures for each sub-system. From system failure analysis,
partition the failures into appropriate classes. For each class, set out the
appropriate reliability metric.
SPECIFICATION VALIDATION
It is impossible to empirically validate high reliability specifications. No
database corruption really means PoFoD class < 1 in 200 million. If each
transaction takes 1 second to verify, simulation of one day’s transactions takes
3.5 days.
Software testing:
Software testing is an important process in the software development
lifecycle . It involves verifying and validating that a software application is
free of bugs, meets the technical requirements set by
its design and development , and satisfies user requirements efficiently and
effectively.
This process ensures that the application can handle all exceptional and
boundary cases, providing a robust and reliable user experience. By
systematically identifying and fixing issues, software testing helps deliver high-
quality software that performs as expected in various scenarios.
Software Testing is a method to assess the functionality of the software
program. The process checks whether the actual software matches the expected
requirements and ensures the software is bug-free. The purpose of software
testing is to identify the errors, faults, or missing requirements in contrast to
actual requirements. It mainly aims at measuring the specification, functionality,
and performance of a software program or application.
Software testing can be divided into two steps
1. Verification: It refers to the set of tasks that ensure that the software
correctly implements a specific function. It means “Are we building the
product right?”.
2. Validation: It refers to a different set of tasks that ensure that the
software that has been built is traceable to customer requirements. It
means “Are we building the right product?”.
Different Types Of Software Testing

Software Testing can be broadly classified into 3 types:
1. Functional testing: It is a type of software testing that validates the
software systems against the functional requirements. It is performed to
check whether the application is working as per the software's
functional requirements or not. Various types of functional testing are
Unit testing, Integration testing, System testing, Smoke testing, and so
on.
2. Non-functional testing: It is a type of software testing that checks the
application for non-functional requirements like performance,
scalability, portability, stress, etc. Various types of non-functional
testing are Performance testing, Stress testing, Usability testing, and so
on.
3. Maintenance testing: It is the process of changing, modifying, and
updating the software to keep up with the customer's needs. It involves
regression testing that verifies that recent changes to the code have not
adversely affected other previously working parts of the software.

Apart from the above classification, software testing can be further divided into
2 more ways of testing:
1. Manual testing: It includes testing software manually, i.e., without
using any automation tool or script. In this type, the tester takes over the
role of an end-user and tests the software to identify any unexpected
behaviour or bug. There are different stages of manual testing such as
unit testing, integration testing, system testing, and user acceptance
testing. Testers use test plans, test cases, or test scenarios to test the
software and ensure the completeness of testing. Manual testing also
includes exploratory testing, as testers explore the software to identify
errors in it.
2. Automation testing: Also known as Test Automation, this is when the
tester writes scripts and uses other software to test the product. This
process involves the automation of a manual process. Automation
testing is used to quickly and repeatedly re-run the test scenarios that
were performed manually in manual testing.
Apart from regression testing, automation testing is also used to test the
application from a load, performance, and stress point of view. It increases the
test coverage, improves accuracy, and saves time and money when compared
to manual testing.

Different Types of Software Testing Techniques

Software testing techniques can be majorly classified into two categories:

1. Black box Testing : Testing in which the tester doesn’t have access to
the source code of the software and is conducted at the software
interface without any concern with the internal logical structure of the
software known as black-box testing.

2. White box Testing : Testing in which the tester is aware of the internal
workings of the product, has access to its source code, and is conducted
by making sure that all internal operations are performed according to
the specifications is known as white box testing.

3. Grey Box Testing : Testing in which the testers should have knowledge
of implementation, however, they need not be experts.

Different Levels of Software Testing
Software level testing can be majorly classified into 4 levels:
1. Unit testing: It is a level of the software testing process where individual
units/components of a software/system are tested. The purpose is to
validate that each unit of the software performs as designed.
2. Integration testing: It is a level of the software testing process where
individual units are combined and tested as a group. The purpose of this
level of testing is to expose faults in the interaction between integrated
units.
3. System testing: It is a level of the software testing process where a
complete, integrated system/software is tested. The purpose of this test
is to evaluate the system's compliance with the specified requirements.
4. Acceptance testing: It is a level of the software testing process where a
system is tested for acceptability. The purpose of this test is to evaluate
the system's compliance with the business requirements and assess
whether it is acceptable for delivery.
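As a small illustration of the first level, unit testing, here is a hedged sketch using Python's built-in unittest framework (the function under test and the test names are invented for this example):

import unittest

def discount_price(price: float, percent: float) -> float:
    # Unit under test: apply a percentage discount to a price.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestDiscountPrice(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount_price(100.0, 150)

if __name__ == "__main__":
    unittest.main()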

Benefits of Software Testing

• Product quality: Testing ensures the delivery of a high-quality product
as the errors are discovered and fixed early in the development cycle.
• Customer satisfaction: Software testing aims to detect errors or
vulnerabilities in the software early in the development phase so that the
detected bugs can be fixed before the delivery of the product. Usability
testing is a type of software testing that checks how easy the application
is for users to use.
• Cost-effective: Testing any project on time helps to save money and
time in the long term. If the bugs are caught in the early phases of
software testing, it costs less to fix those errors.
• Security: Security testing is a type of software testing that is focused on
testing the application for security vulnerabilities from internal or
external sources.

Path Testing:
Path Testing is a method that is used to design the test cases. In the
path testing method, the control flow graph of a program is designed to
find a set of linearly independent paths of execution. In this method,
Cyclomatic Complexity is used to determine the number of linearly
independent paths and then test cases are generated for each path.
It gives complete branch coverage but achieves that without covering all
possible paths of the control flow graph. McCabe’s Cyclomatic
Complexity is used in path testing. It is a structural testing method that
uses the source code of a program to find every possible executable
path.

• Control Flow Graph:


Draw the corresponding control flow graph of the program in which all
the executable paths are to be discovered.

• Cyclomatic Complexity:
After the generation of the control flow graph, calculate the cyclomatic
complexity of the program using the formula V(G) = E - N + 2, where E is the
number of edges and N is the number of nodes in the control flow graph.

• Make Set:
Make a set of linearly independent paths according to the control flow
graph. The cardinality of the set is equal to the calculated cyclomatic
complexity.
• Create Test Cases:
Create a test case for each path of the set obtained in the above step.
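
As a minimal illustration of the steps above (the grade function, its
node/edge counts, and the test inputs are hypothetical and exist only for this
sketch):

# Hypothetical function used to illustrate path testing.
def grade(score):
    if score < 0 or score > 100:   # decision 1
        return "invalid"
    if score >= 50:                # decision 2
        return "pass"
    return "fail"

# Hand-drawn control flow graph for this function (single exit node):
#   edges E = 7, nodes N = 6, so V(G) = E - N + 2 = 3
#   (equivalently, 2 decision nodes + 1 = 3).
# Hence 3 linearly independent paths, and one test case per path:
test_cases = [
    (150, "invalid"),   # decision 1 true
    (80,  "pass"),      # decision 1 false, decision 2 true
    (30,  "fail"),      # decision 1 false, decision 2 false
]
for score, expected in test_cases:
    assert grade(score) == expected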

Path Testing Techniques

• Control Flow Graph:


The program is converted into a control flow graph by representing the
code into nodes and edges.

• Decision to Decision path:


The control flow graph can be broken into various Decision to Decision
paths and then collapsed into individual nodes.

• Independent paths:
An Independent path is a path through a Decision to Decision path
graph that cannot be reproduced from other paths by other methods.

Advantages of Path Testing

1. The path testing method reduces the redundant tests.

2. Path testing focuses on the logic of the programs.

3. Path testing is used in test case design.

Disadvantages of Path Testing

1. A tester needs to have a good understanding of programming knowledge


or code knowledge to execute the tests.

2. The number of test cases increases as the code complexity increases.

3. It will be difficult to create a test path if the application has a high


complexity of code.

4. Some test paths may skip some of the conditions in the code. It
may not cover some conditions or scenarios if there is an error in
the specific paths.

Control structure testing:

Control structure testing is used to increase the coverage area by testing


various control structures present in the program. The different types of testing
performed under control structure testing are as follows
1. Condition Testing

2. Data Flow Testing

3. Loop Testing

1. Condition Testing: Condition testing is a test case design method, which
ensures that the logical conditions and decision statements are free from errors.
The errors present in logical conditions can be incorrect Boolean operators,
missing parentheses in a Boolean expression, errors in relational operators,
arithmetic expressions, and so on. The common types of logical conditions
that are tested using condition testing are-

1. A relational expression, like E1 op E2, where ‘E1’ and ‘E2’ are arithmetic
expressions and ‘op’ is a relational operator.
2. A simple condition like any relational expression preceded by a NOT
(~) operator. For example, (~E1) where ‘E1’ is an arithmetic expression
and ‘~’ denotes the NOT operator.
3. A compound condition consists of two or more simple conditions,
Boolean operator, and parenthesis. For example, (E1 & E2)|(E2 & E3)
where E1, E2, E3 denote arithmetic expression and ‘&’ and ‘|’ denote
AND or OR operators.
4. A Boolean expression consists of operands and a Boolean operator like
‘AND’, OR, NOT. For example, ‘A|B’ is a Boolean expression where
‘A’ and ‘B’ denote operands and | denotes OR operator.
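
A minimal sketch of condition testing for a compound condition; the
ship_order function and its inputs are hypothetical examples, not taken from
any real system. Each simple condition is driven to both true and false at
least once:

# Hypothetical compound condition used to illustrate condition testing.
def ship_order(in_stock, paid, backorder_allowed):
    # compound condition: (in_stock AND paid) OR backorder_allowed
    return (in_stock and paid) or backorder_allowed

tests = [
    ((True,  True,  False), True),   # E1=T, E2=T        -> True
    ((True,  False, False), False),  # E2=F              -> False
    ((False, True,  False), False),  # E1=F              -> False
    ((False, False, True),  True),   # E3=T alone        -> True
]
for args, expected in tests:
    assert ship_order(*args) == expected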

2. Data Flow Testing: The data flow testing method chooses the test paths of a
program based on the locations of the definitions and uses of the
variables in the program. To illustrate the approach, suppose each
statement in a program is assigned a unique statement number and that no
function can modify its parameters or global variables. For a statement
with S as its statement number,

DEF (S) = {X | Statement S has a definition of X}

USE (S) = {X | Statement S has a use of X}

If statement S is an if or loop statement, then its DEF set is empty and its USE
set depends on the condition of statement S. The definition of the variable X at
statement S is said to be live at statement S’ if there exists a path from
S to statement S’ that contains no other definition of X. A definition-use (DU)
chain of variable X has the form [X, S, S’], where S and S’ denote statement
numbers, X is in DEF(S) and USE(S’), and the definition of X in statement S
is live at statement S’. A simple data flow testing strategy requires that every DU
chain be covered at least once. This is known as the DU testing
strategy. DU testing does not guarantee coverage of all branches of a
program; however, a branch is not covered by DU testing only in rare cases,
such as an if-then-else construct in which the “then” part has no definition
of any variable and the “else” part is absent. Data
flow testing strategies are useful for selecting test paths of a program
containing nested if and loop statements.
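
A small, hypothetical sketch of DEF/USE sets and DU chains (the
compute_total function and its statement numbers exist only for this
illustration):

# Hypothetical program fragment with statement numbers for illustration.
def compute_total(price, qty):
    total = price * qty        # S1: DEF(S1) = {total}, USE(S1) = {price, qty}
    if total > 100:            # S2: DEF(S2) = {},      USE(S2) = {total}
        total = total * 0.9    # S3: DEF(S3) = {total}, USE(S3) = {total}
    return total               # S4: DEF(S4) = {},      USE(S4) = {total}

# DU chains for 'total' include [total, S1, S2], [total, S1, S4] (taken when
# the if-branch is skipped) and [total, S3, S4]. A DU-testing suite chooses
# inputs so that each chain is exercised at least once, e.g.:
assert compute_total(10, 5) == 50        # exercises [total, S1, S2] and [total, S1, S4]
assert compute_total(20, 10) == 180.0    # exercises [total, S1, S2] and [total, S3, S4]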

3. Loop Testing: Loop testing is actually a white box testing technique. It


specifically focuses on the validity of loop construction. Following are the
types of loops.

1. Simple Loop – The following set of tests can be applied to simple loops,
where the maximum allowable number of passes through the loop is n.
1. Skip the entire loop.
2. Traverse the loop only once.
3. Traverse the loop two times.
4. Make p passes through the loop where p<n.
5. Traverse the loop n-1, n, n+1 times.
2. Concatenated Loops – If the loops are not dependent on each other,
concatenated loops can be tested using the approach used in simple loops. If
the loops are interdependent, the steps for nested loops are followed.

3. Nested Loops – Loops within loops are called nested loops. When
testing nested loops, the number of tests increases as the level of nesting
increases. The steps for testing nested loops are as follows-
1. Start with the inner loop. Set all other loops to minimum values.
2. Conduct simple loop testing on the inner loop.
3. Work outwards.
4. Continue until all loops are tested.
4. Unstructured loops – This type of loop should be redesigned,
whenever possible, to reflect the use of structured
programming constructs.
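
A minimal sketch of the simple-loop test set, assuming a hypothetical loop
with a maximum of n = 10 passes (the sum_first function is invented only for
this example):

# Hypothetical loop with a maximum of n = 10 passes.
def sum_first(values, n=10):
    total = 0
    for v in values[:n]:    # the loop under test
        total += v
    return total

n = 10
# Skip the loop (0 passes), one pass, two passes, p < n passes,
# then n-1, n and n+1 passes (the n+1 case probes the loop bound).
for passes in [0, 1, 2, n // 2, n - 1, n, n + 1]:
    data = [1] * passes
    assert sum_first(data, n) == min(passes, n)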

Black Box Testing:


Black Box Testing is an important part of making sure software works
as it should. Instead of peeking into the code, testers check how the software
behaves from the outside, just like users would. This helps catch any issues or
bugs that might affect how the software works.
This simple guide gives you an overview of what Black Box Testing is all
about and why it matters in software development.
Black-box testing is a type of software testing in which the tester is not
concerned with the software’s internal knowledge or implementation details
but rather focuses on validating the functionality based on the provided
specifications or requirements.

Types Of Black Box Testing


The following are the several categories of black box testing:

1. Functional Testing

2. Regression Testing

3. Nonfunctional Testing (NFT)

Functional Testing

• Functional testing is defined as a type of testing that verifies that each


function of the software application works in conformance with the
requirement and specification.

• This testing is not concerned with the source code of the application.
Each functionality of the software application is tested by providing
appropriate test input, expecting the output, and comparing the actual
output with the expected output.

• This testing focuses on checking the user interface, APIs, database,


security, client or server application, and functionality of the
Application Under Test. Functional testing can be manual or
automated. It determines the system’s software functional requirements.
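
For example, a minimal sketch of a functional (black-box style) test using
Python's unittest; the login function here is a hypothetical stand-in for the
application under test, shown only so the sketch runs:

import unittest

# Hypothetical function under test; only its specified behaviour matters here.
def login(username, password):
    return username == "admin" and password == "secret"

class LoginFunctionalTest(unittest.TestCase):
    def test_valid_credentials_accepted(self):
        # supply test input, compare actual output with expected output
        self.assertTrue(login("admin", "secret"))

    def test_invalid_password_rejected(self):
        self.assertFalse(login("admin", "wrong"))

if __name__ == "__main__":
    unittest.main()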

Regression Testing

• Regression Testing is the process of testing the modified parts of the


code and the parts that might get affected due to the modifications to
ensure that no new errors have been introduced in the software after the
modifications have been made.

• Regression means the return of something and in the software field, it


refers to the return of a bug. It ensures that the newly added code is
compatible with the existing code.

• In other words, a new software update has no impact on the


functionality of the software. This is carried out after a system
maintenance operation and upgrades.
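
A minimal sketch of the idea: a previously passing test suite is re-run
unchanged after a modification, so any returning bug is caught. The
apply_discount function and its expected values are hypothetical:

# Hypothetical regression suite: these cases passed on the previous version
# and are re-run unchanged after every modification.
def apply_discount(price, percent):
    # newly modified function (rounding added in the latest change)
    return round(price * (1 - percent / 100), 2)

regression_suite = [
    ((200, 10), 180.0),     # existing behaviour that must not break
    ((99.99, 0), 99.99),
]
for args, expected in regression_suite:
    assert apply_discount(*args) == expected, "regression detected"
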
Nonfunctional Testing

• Non-functional testing is a software testing technique that checks the


non-functional attributes of the system.

• Non-functional testing is defined as a type of software testing to check


non-functional aspects of a software application.

• It is designed to test the readiness of a system as per nonfunctional


parameters which are never addressed by functional testing.

• Non-functional testing is as important as functional testing.

• Non-functional testing is also known as NFT. This testing is not


functional testing of software. It focuses on the software’s performance,
usability, and scalability.

Advantages of Black Box Testing

• The tester does not need to have more functional knowledge or


programming skills to implement the Black Box Testing.

• It is efficient for implementing the tests in the larger system.

• Tests are executed from the user’s or client’s point of view.

• Test cases are easily reproducible.

• It is used to find the ambiguity and contradictions in the functional


specifications.

Disadvantages of Black Box Testing

• There is a possibility of repeating the same tests while implementing the


testing process.

• Without clear functional specifications, test cases are difficult to


implement.

• It is difficult to execute the test cases because of complex inputs at


different stages of testing.

• Sometimes, the reason for the test failure cannot be detected.

• Some programs in the application are not tested.


• It does not reveal the errors in the control structure.

• Working with a large sample space of inputs can be exhaustive and


consumes a lot of time.

Ways of Black Box Testing Done

1. Syntax-Driven Testing – This type of testing is applied to systems that can


be syntactically represented by some language. For example, language can be
represented by context-free grammar. In this, the test cases are generated so
that each grammar rule is used at least once.

2. Equivalence partitioning – It is often seen that many types of inputs work


similarly so instead of giving all of them separately we can group them and
test only one input of each group. The idea is to partition the input domain of
the system into several equivalence classes such that each member of the class
works similarly, i.e., if a test case in one class results in some error, other
members of the class would also result in the same error.

The technique involves two steps:

1. Identification of equivalence class – Partition any input domain into a


minimum of two sets: valid values and invalid values . For example, if
the valid range is 0 to 100 then select one valid input like 49 and one
invalid like 104.

2. Generating test cases – (i) To each valid and invalid class of input
assign a unique identification number. (ii) Write a test case covering all
valid and invalid test cases considering that no two invalid inputs mask
each other. To calculate the square root of a number, the equivalence
classes will be (a) Valid inputs:

• The whole number which is a perfect square - output will be an
integer.

• The whole number which is not a perfect square - output will be a
decimal number.

• Positive decimals.

(b) Invalid inputs:

• Negative numbers (integer or decimal).

• Characters other than numbers like “a”, “!”, “;”, etc.
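
A small sketch of this square-root example with one representative input per
equivalence class; the safe_sqrt function and its error strings are
hypothetical stand-ins for the application under test:

import math

# Hypothetical square-root routine standing in for the application under test.
def safe_sqrt(value):
    if not isinstance(value, (int, float)):
        return "error: not a number"
    if value < 0:
        return "error: negative input"
    return math.sqrt(value)

# One representative test input per equivalence class.
cases = {
    "perfect square (integer result)":  (49,   7.0),
    "non-perfect square (decimal)":     (50,   math.sqrt(50)),
    "positive decimal":                 (2.25, 1.5),
    "negative number (invalid)":        (-4,   "error: negative input"),
    "non-numeric character (invalid)":  ("a",  "error: not a number"),
}
for name, (given, expected) in cases.items():
    assert safe_sqrt(given) == expected, name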


3. Boundary value analysis – Boundaries are very good places for errors to
occur. Hence, if test cases are designed for boundary values of the input
domain then the efficiency of testing improves and the probability of finding
errors also increases. For example – If the valid range is 10 to 100 then test for
10,100 also apart from valid and invalid inputs.
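
Continuing the example, a minimal sketch assuming a valid range of 10 to 100
and a hypothetical in_valid_range validator:

# Hypothetical validator for a field whose valid range is 10 to 100.
def in_valid_range(x):
    return 10 <= x <= 100

# Test at and around both boundaries, plus one nominal value.
boundary_cases = [
    (9, False), (10, True), (11, True),     # lower boundary
    (99, True), (100, True), (101, False),  # upper boundary
    (55, True),                             # nominal valid value
]
for value, expected in boundary_cases:
    assert in_valid_range(value) == expected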

4. Cause effect graphing – This technique establishes a relationship between


logical input called causes with corresponding actions called the effect. The
causes and effects are represented using Boolean graphs. The following steps
are followed:

1. Identify inputs (causes) and outputs (effect).

2. Develop a cause-effect graph.

3. Transform the graph into a decision table.

4. Convert decision table rules to test cases.

For example, in a cause-effect graph whose decision table has four rules,
each column corresponds to a rule, which will become a test case for testing.
So there will be 4 test cases.
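
A tiny, hypothetical illustration of converting a decision table into test
cases, assuming two causes (C1, C2) and one effect (E1); all names and values
are invented for this sketch:

# Hypothetical causes: C1 = "account is active", C2 = "balance is sufficient".
# Effect E1 = "withdrawal allowed". Each row below is one rule (one column of
# the decision table), and each rule becomes one test case.
decision_table = [
    # (C1,    C2,    expected E1)
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

def withdrawal_allowed(active, sufficient):   # stands in for the system under test
    return active and sufficient

for c1, c2, effect in decision_table:
    assert withdrawal_allowed(c1, c2) == effect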

5. Requirement-based testing – It includes validating the requirements given


in the SRS of a software system.

6. Compatibility testing – The test case results depend not only on the
product but also on the infrastructure for delivering functionality. When the
infrastructure parameters are changed, the product is still expected to work properly.
Some parameters that generally affect the compatibility of software are:

1. Processor (Pentium 3, Pentium 4) and number of processors.

2. Architecture and characteristics of machine (32-bit or 64-bit).

3. Back-end components such as database servers.

4. Operating System (Windows, Linux, etc).

Tools Used for Black Box Testing:

1. Appium

2. Selenium

3. Microsoft Coded UI

4. Applitools

5. HP QTP .

What can be identified by Black Box Testing

1. Discovers missing functions, incorrect function & interface errors

2. Discover the errors faced in accessing the database

3. Discovers the errors that occur while initiating & terminating any
functions.

4. Discovers the errors in performance or behaviour of software.

Features of black box testing


1. Independent testing: Black box testing is performed by testers who are
not involved in the development of the application, which helps to
ensure that testing is unbiased and impartial.

2. Testing from a user’s perspective: Black box testing is conducted


from the perspective of an end user, which helps to ensure that the
application meets user requirements and is easy to use.

3. No knowledge of internal code: Testers performing black box testing


do not have access to the application’s internal code, which allows them
to focus on testing the application’s external behaviour and
functionality.

4. Requirements-based testing: Black box testing is typically based on


the application’s requirements, which helps to ensure that the
application meets the required specifications.

5. Different testing techniques: Black box testing can be performed using


various testing techniques, such as functional testing, usability testing,
acceptance testing, and regression testing.

6. Easy to automate: Black box testing is easy to automate using various


automation tools, which helps to reduce the overall testing time and
effort.

7. Scalability: Black box testing can be scaled up or down depending on


the size and complexity of the application being tested.

8. Limited knowledge of application: Testers performing black box


testing have limited knowledge of the application being tested, which
helps to ensure that testing is more representative of how the end users
will interact with the application.

Integration testing:

Integration testing is the process of testing the interface between two


software units or modules. It focuses on determining the correctness of the
interface. The purpose of integration testing is to expose faults in the
interaction between integrated units. Once all the modules have been unit-
tested, integration testing is performed.

What is Integration Testing?


Integration testing is a software testing technique that focuses on verifying the
interactions and data exchange between different components or modules of a
software application. The goal of integration testing is to identify any
problems or bugs that arise when different components are combined and
interact with each other. Integration testing is typically performed after unit
testing and before system testing. It helps to identify and resolve integration
issues early in the development cycle, reducing the risk of more severe and
costly problems later on.

• Integration testing can be done by picking modules one by one, so that
a proper sequence is followed.

• Following the proper sequence also ensures that no integration scenario
is missed.

• The major focus of integration testing is exposing the defects that arise
at the time of interaction between the integrated units.

Why is Integration Testing Important?


Integration testing is important because it verifies that individual
software modules or components work together correctly as a whole system.
This ensures that the integrated software functions as intended and helps
identify any compatibility or communication issues between different parts of
the system. By detecting and resolving integration problems early, integration
testing contributes to the overall reliability, performance, and quality of the
software product.
Integration test approaches
There are four types of integration testing approaches. Those approaches are
the following:

1. Big-Bang Integration Testing

• It is the simplest integration testing approach, where all the modules are
combined and the functionality is verified after the completion of
individual module testing.

• In simple words, all the modules of the system are simply put together
and tested.

• This approach is practicable only for very small systems. If an error is


found during the integration testing, it is very difficult to localize the
error as the error may potentially belong to any of the modules being
integrated.

• So, debugging errors reported during Big Bang integration testing is


very expensive to fix.

• Big-bang integration testing is a software testing approach in which all


components or modules of a software application are combined and
tested at once.

• This approach is typically used when the software components have a


low degree of interdependence or when there are constraints in the
development environment that prevent testing individual components.

• The goal of big-bang integration testing is to verify the overall


functionality of the system and to identify any integration problems that
arise when the components are combined.
• While big-bang integration testing can be useful in some situations, it
can also be a high-risk approach, as the complexity of the system and
the number of interactions between components can make it difficult to
identify and diagnose problems.

Advantages of Big-Bang Integration Testing

• It is convenient for small systems.

• Simple and straightforward approach.

• Can be completed quickly.

• Does not require a lot of planning or coordination.

• May be suitable for small systems or projects with a low degree of


interdependence between components.

Disadvantages of Big-Bang Integration Testing

• There will be quite a lot of delay because you would have to wait for all
the modules to be integrated.

• High-risk critical modules are not isolated and tested on priority since
all modules are tested at once.

• Not Good for long projects.

• High risk of integration problems that are difficult to identify and


diagnose.

• This can result in long and complex debugging and troubleshooting


efforts.

• This can lead to system downtime and increased development costs.

• May not provide enough visibility into the interactions and data
exchange between components.

• This can result in a lack of confidence in the system’s stability and


reliability.

• This can lead to decreased efficiency and productivity.

• This may result in a lack of confidence in the development team.


• This can lead to system failure and decreased user satisfaction.

2. Bottom-Up Integration Testing

In bottom-up testing, each module at the lower levels is tested with higher-level
modules until all modules are tested. The primary purpose of this integration
testing is that each subsystem tests the interfaces among the various modules
making up the subsystem. This integration testing uses test drivers to drive and
pass appropriate data to the lower-level modules.

Advantages of Bottom-Up Integration Testing

• In bottom-up testing, no stubs are required.

• A principal advantage of this integration testing is that several disjoint


subsystems can be tested simultaneously.

• It is easy to create the test conditions.

• Best for applications that use a bottom-up design approach.

• It is Easy to observe the test results.

Disadvantages of Bottom-Up Integration Testing

• Driver modules must be produced.

• Testing becomes complex when the system is made up of a large number of
small subsystems.

• Until the higher-level modules are created, no working model of the system
can be demonstrated.

3. Top-Down Integration Testing

In top-down integration testing, stubs are used to simulate the
behaviour of the lower-level modules that are not yet integrated. In this
integration testing, testing takes place from top to bottom. First, high-level
modules are tested, then low-level modules, and finally the low-level
modules are integrated with the high-level ones to ensure the system is working as intended.
Advantages of Top-Down Integration Testing

• Separately debugged module.

• Few or no drivers needed.

• It is more stable and accurate at the aggregate level.

• Easier isolation of interface errors.

• In this, design defects can be found in the early stages.

Disadvantages of Top-Down Integration Testing

• Needs many stubs.

• Modules at the lower level are tested inadequately.

• It is difficult to observe the test output.

• Stub design is difficult.
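
A minimal sketch of the stubs used in top-down testing and the drivers used in
bottom-up testing; all module names here are hypothetical:

# --- Top-down: the high-level module is real, the lower-level one is a stub ---
def tax_service_stub(amount):
    # Stub: returns a canned value instead of calling the real tax module.
    return 0.10 * amount

def compute_invoice(amount, tax_service=tax_service_stub):   # module under test
    return amount + tax_service(amount)

assert compute_invoice(100) == 110.0

# --- Bottom-up: the lower-level module is real, a driver exercises it ---
def real_tax_service(amount):        # lower-level module under test
    return 0.10 * amount

def tax_service_driver():
    # Driver: feeds test data to the lower-level module and checks results.
    assert real_tax_service(200) == 20.0

tax_service_driver()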

4. Mixed Integration Testing

Mixed integration testing is also called sandwich integration testing. It
follows a combination of the top-down and bottom-up
testing approaches. In the top-down approach, testing can start only after the top-
level modules have been coded and unit tested. In the bottom-up approach, testing
can start only after the bottom-level modules are ready. The sandwich or
mixed approach overcomes this shortcoming of the top-down and bottom-up
approaches. It is also called hybrid integration testing. Both stubs and
drivers are used in mixed integration testing.

Advantages of Mixed Integration Testing

• Mixed approach is useful for very large projects having several sub
projects.
• This Sandwich approach overcomes this shortcoming of the top-down
and bottom-up approaches.

• Parallel test can be performed in top and bottom layer tests.

Disadvantages of Mixed Integration Testing

• For mixed integration testing, it requires very high cost because one part
has a Top-down approach while another part has a bottom-up approach.

• This integration testing cannot be used for smaller systems with huge
interdependence between different modules.

Applications of Integration Testing

1. Identify the components: Identify the individual components of your


application that need to be integrated. This could include the frontend,
backend, database, and any third-party services.

2. Create a test plan: Develop a test plan that outlines the scenarios and
test cases that need to be executed to validate the integration points
between the different components. This could include testing data flow,
communication protocols, and error handling.

3. Set up test environment: Set up a test environment that mirrors the


production environment as closely as possible. This will help ensure that
the results of your integration tests are accurate and reliable.

4. Execute the tests: Execute the tests outlined in your test plan, starting
with the most critical and complex scenarios. Be sure to log any defects
or issues that you encounter during testing.

5. Analyze the results: Analyze the results of your integration tests to


identify any defects or issues that need to be addressed. This may
involve working with developers to fix bugs or make changes to the
application architecture.

6. Repeat testing: Once defects have been fixed, repeat the integration
testing process to ensure that the changes have been successful and that
the application still works as expected.

Test Cases For Integration Testing


• Interface Testing : Verify that data exchange between modules occurs
correctly. Validate input/output parameters and formats. Ensure proper
error handling and exception propagation between modules.

• Functional Flow Testing : Test end-to-end functionality by simulating


user interactions. Verify that user inputs are processed correctly and
produce expected outputs. Ensure seamless flow of data and control
between modules.

• Data Integration Testing : Validate data integrity and consistency


across different modules. Test data transformation and conversion
between formats. Verify proper handling of edge cases and boundary
conditions.

• Dependency Testing : Test interactions between dependent modules.


Verify that changes in one module do not adversely affect others.
Ensure proper synchronization and communication between modules.

• Error Handling Testing : Validate error detection and reporting


mechanisms. Test error recovery and fault tolerance capabilities. Ensure
that error messages are clear and informative.

• Performance Testing : Measure system performance under integrated


conditions. Test response times, throughput, and resource utilization.
Verify scalability and concurrency handling between modules.

• Security Testing : Test access controls and permissions between


integrated modules. Verify encryption and data protection mechanisms.
Ensure compliance with security standards and regulations.

• Compatibility Testing : Test compatibility with external systems, APIs,


and third-party components. Validate interoperability and data exchange
protocols. Ensure seamless integration with different platforms and
environments.

Validation and System Testing:

At the end of integration testing, software is completely assembled as a


package, interfacing errors have been uncovered and corrected and now
validation testing is performed. Software validation is achieved through a
series of black-box tests that demonstrate conformity with requirements.

After each validation test case has been conducted, one of two possible
conditions exists:
1. The function or performance characteristics conform to specification
and are accepted, or

2. a deviation from specification is uncovered and a deficiency list is
created.

A deviation or error discovered at this stage in a project can rarely be
corrected prior to the scheduled delivery.

Alpha and Beta Testing:

It is virtually impossible for a software developer to foresee how the


customer will really use a program:

• Instructions for use may be misinterpreted.

• Strange combinations of data may be regularly used.

• Output that seemed clear to the tester may be unintelligible to a user in
the field.

When custom software is built for one customer, a series of
acceptance tests are conducted to enable the customer to validate all
requirements. If software is developed as a product to be used by many
customers, it is impractical to perform acceptance tests with each one. Therefore,
alpha and beta tests are used to uncover errors that only the end-user seems
able to find.

The Alpha Test is conducted at the developer’s site by a customer. The


software is used in a natural setting with the developer “looking over the
shoulder” of the user and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.

The Beta test is conducted at one or more customer sites by the end-user of
the software. Unlike alpha testing, the developer is generally not present. Therefore, the
beta test is a "live" application of the software in an environment that
cannot be controlled by the developer. The customer records all problems
(real or imagined) that are encountered during beta testing and reports these
to the developer at regular intervals. As a result of problems reported
during beta tests, software engineers make modifications and then prepare
for release of the software product to the entire customer base

System Testing:
System testing is actually a series of different tests whose primary
purpose is to fully exercise the computer-based system. Although each test has
a different purpose, all work to verify that system elements have been properly
integrated and perform allocated functions.

System Testing is basically performed by a testing team that is


independent of the development team that helps to test the quality of the
system impartial.

System Testing is carried out on the whole system in the context of


either system requirement specifications or functional requirement
specifications or in the context of both. System testing tests the design and
behavior of the system and also the expectations of the customer.

Types of System Testing:

• Performance Testing: Performance Testing is a type of software


testing that is carried out to test the speed, scalability, stability and reliability
of the software product or application.

• Load Testing: Load Testing is a type of software testing which is


carried out to determine the behavior of a system or software product under
extreme load.

• Stress Testing: Stress Testing is a type of software testing performed


to check the robustness of the system under the varying loads.

• Scalability Testing: Scalability Testing is a type of software testing


which is carried out to check the performance of a software application or
system in terms of its capability to scale up or scale down the number of user
request load.

Reverse Engineering:

Software Reverse Engineering is a process of recovering the design,


requirement specifications, and functions of a product from an analysis of its
code. It builds a program database and generates information from this. This
article focuses on discussing reverse engineering in detail.

What is Reverse Engineering?

Reverse engineering can extract design information from source code,


but the abstraction level, the completeness of the documentation, the degree to
which tools and a human analyst work together, and the directionality of the
process are highly variable.

Objective of Reverse Engineering:

1. Reducing Costs: Reverse engineering can help cut costs in product


development by finding replacements or cost-effective alternatives for
systems or components.

2. Analysis of Security: Reverse engineering is used in cybersecurity to


examine exploits, vulnerabilities, and malware. This helps in
understanding of threat mechanisms and the development of practical
defenses by security experts.

3. Integration and Customization: Through the process of reverse


engineering, developers can incorporate or modify hardware or software
components into pre-existing systems to improve their operation or
tailor them to meet particular needs.

4. Recovering Lost Source Code: Reverse engineering can be used to


recover the source code of a software application that has been lost or is
inaccessible or at the very least, to produce a higher-level representation
of it.

5. Fixing bugs and maintenance: Reverse engineering can help find and
repair flaws or provide updates for systems for which the original source
code is either unavailable or inadequately documented.

Reverse Engineering Goals:

1. Cope with Complexity: Reverse engineering is a common tool used to


understand and control system complexity. It gives engineers the ability
to analyze complex systems and reveal details about their architecture,
relationships and design patterns.

2. Recover lost information: Reverse engineering seeks to retrieve as


much information as possible in situations where source code or
documentation are lost or unavailable. Rebuilding source code,
analyzing data structures and retrieving design details are a few
examples of this.

3. Detect side effects: Understanding a system or component’s behavior


requires analyzing its side effects. Unintended implications,
dependencies, and interactions that might not be obvious from the
system’s documentation or original source code can be found with the
use of reverse engineering.

4. Synthesis higher abstraction: Abstracting low-level features in order


to build higher-level representations is a common practice in reverse
engineering. This abstraction makes communication and analysis easier
by facilitating a greater understanding of the system’s functionality.

5. Facilitate Reuse: Reverse engineering can be used to find reusable


parts or modules in systems that already exist. By understanding the
functionality and architecture of a system, developers can extract and
repurpose components for use in other projects, improving efficiency
and decreasing development time.

Re-engineering:

Re-engineering is a process of software development that is done to


improve the maintainability of a software system. Re-engineering is the
examination and alteration of a system to reconstitute it in a new form.
This process encompasses a combination of sub-processes like reverse
engineering, forward engineering, reconstructing, etc.

What is Re-engineering?
Re-engineering, also known as software re-engineering, is the process of
analyzing, designing, and modifying existing software systems to
improve their quality, performance, and maintainability.

1. This can include updating the software to work with new hardware or
software platforms, adding new features, or improving the software’s
overall design and architecture.

2. Software re-engineering, also known as software restructuring or


software renovation, refers to the process of improving or upgrading
existing software systems to improve their quality, maintainability, or
functionality.

3. It involves reusing the existing software artifacts, such as code, design,


and documentation, and transforming them to meet new or updated
requirements.

Objective of Re-engineering

The primary goal of software re-engineering is to improve the quality


and maintainability of the software system while minimizing the risks
and costs associated with the redevelopment of the system from scratch.
Software re-engineering can be initiated for various reasons, such as:

1. To describe a cost-effective option for system evolution.

2. To describe the activities involved in the software maintenance process.

3. To distinguish between software and data re-engineering and to explain


the problems of data re-engineering.

Overall, software re-engineering can be a cost-effective way to improve


the quality and functionality of existing software systems, while
minimizing the risks and costs associated with starting from scratch.

Process of Software Re-engineering

The process of software re-engineering involves the following steps:


1. Planning: The first step is to plan the re-engineering process, which
involves identifying the reasons for re-engineering, defining the scope,
and establishing the goals and objectives of the process.

2. Analysis: The next step is to analyze the existing system, including the
code, documentation, and other artifacts. This involves identifying the
system’s strengths and weaknesses, as well as any issues that need to be
addressed.

3. Design: Based on the analysis, the next step is to design the new or
updated software system. This involves identifying the changes that
need to be made and developing a plan to implement them.

4. Implementation: The next step is to implement the changes by


modifying the existing code, adding new features, and updating the
documentation and other artifacts.

5. Testing: Once the changes have been implemented, the software system
needs to be tested to ensure that it meets the new requirements and
specifications.

6. Deployment: The final step is to deploy the re-engineered software


system and make it available to end-users.

Steps involved in Re-engineering

1. Inventory Analysis
2. Document Reconstruction

3. Reverse Engineering

4. Code Reconstruction

5. Data Reconstruction

6. Forward Engineering

Re-engineering Cost Factors

1. The quality of the software to be re-engineered.

2. The tool support available for re-engineering.

3. The extent of the required data conversion.

4. The availability of expert staff for re-engineering.

Advantages of Re-engineering

1. Reduced Risk: As the software is already existing, the risk is less as


compared to new software development. Development problems,
staffing problems and specification problems are the lots of problems
that may arise in new software development.

2. Reduced Cost: The cost of re-engineering is less than the costs of


developing new software.
3. Revelation of Business Rules: As a system is re-engineered , business
rules that are embedded in the system are rediscovered.

4. Better use of Existing Staff: Existing staff expertise can be maintained
and extended to accommodate new skills during re-engineering.

5. Improved efficiency: By analyzing and redesigning processes, re-


engineering can lead to significant improvements in productivity, speed,
and cost-effectiveness.

6. Increased flexibility: Re-engineering can make systems more adaptable


to changing business needs and market conditions.

7. Better customer service: By redesigning processes to focus on


customer needs, re-engineering can lead to improved customer
satisfaction and loyalty.

8. Increased competitiveness: Re-engineering can help organizations


become more competitive by improving efficiency, flexibility, and
customer service.

9. Improved quality: Re-engineering can lead to better quality products


and services by identifying and eliminating defects and inefficiencies in
processes.

10.Increased innovation: Re-engineering can lead to new and innovative


ways of doing things, helping organizations to stay ahead of their
competitors.

11.Improved compliance: Re-engineering can help organizations to


comply with industry standards and regulations by identifying and
addressing areas of non-compliance.

Disadvantages of Re-engineering

Major architectural changes or radical reorganizing of the system’s data
management have to be done manually. A re-engineered system is not likely to be
as maintainable as a new system developed using modern software
engineering methods.

1. High costs: Re-engineering can be a costly process, requiring


significant investments in time, resources, and technology.
2. Disruption to business operations: Re-engineering can disrupt normal
business operations and cause inconvenience to customers, employees
and other stakeholders.

3. Resistance to change: Re-engineering can encounter resistance from


employees who may be resistant to change and uncomfortable with new
processes and technologies.

4. Risk of failure: Re-engineering projects can fail if they are not planned
and executed properly, resulting in wasted resources and lost
opportunities.

5. Lack of employee involvement: Re-engineering projects that are not
properly communicated and do not involve employees may lead to a lack of
employee engagement and ownership, resulting in failure of the project.

6. Difficulty in measuring success: Re-engineering can be difficult to


measure in terms of success, making it difficult to justify the cost and
effort involved.

7. Difficulty in maintaining continuity: Re-engineering can lead to


significant changes in processes and systems, making it difficult to
maintain continuity and consistency in the organization.

CASE Tools:
CASE tools are set of software application programs, which are used to
automate SDLC activities. CASE tools are used by software project
managers, analysts and engineers to develop software system.

There are number of CASE tools available to simplify various stages of


Software Development Life Cycle such as Analysis tools, Design tools,
Project management tools, Database Management tools, Documentation
tools are to name a few.

Use of CASE tools accelerates the development of project to produce


desired result and helps to uncover flaws before moving ahead with next
stage in software development.

Components of CASE Tools

CASE tools can be broadly divided into the following parts based on their
use at a particular SDLC stage:
• Central Repository - CASE tools require a central repository, which can
serve as a source of common, integrated and consistent information.
Central repository is a central place of storage where product
specifications, requirement documents, related reports and diagrams,
other useful information regarding management is stored. Central
repository also serves as data dictionary.

• Upper Case Tools - Upper CASE tools are used in planning, analysis
and design stages of SDLC.
• Lower Case Tools - Lower CASE tools are used in implementation,
testing and maintenance.
• Integrated Case Tools - Integrated CASE tools are helpful in all the
stages of SDLC, from Requirement gathering to Testing and
documentation.

CASE tools can be grouped together if they have similar functionality, process
activities and capability of getting integrated with other tools.

Project Management Tools

These tools are used for project planning, cost and effort estimation, project
scheduling and resource planning. Managers have to ensure that project
execution strictly complies with every step of software project management. Project
management tools help in storing and sharing project information in real-time
throughout the organization. For example, Creative Pro Office, Trac Project,
Basecamp.
Analysis Tools

These tools help to gather requirements, automatically check for any


inconsistency, inaccuracy in the diagrams, data redundancies or erroneous
omissions. For example, Accept 360, Accompa, CaseComplete for requirement
analysis, Visible Analyst for total analysis.

Design Tools

These tools help software designers to design the block structure of the
software, which may further be broken down into smaller modules using
refinement techniques. These tools provide detailing of each module and the
interconnections among modules. For example, Animated Software Design.

Programming Tools

These tools consist of programming environments like IDE (Integrated


Development Environment), in-built modules library and simulation tools.
These tools provide comprehensive aid in building software product and include
features for simulation and testing. For example, Cscope to search code in C,
Eclipse.

Integration testing tools

Integration testing tools are used to test the interface between modules and find
the bugs; these bugs may happen because of the multiple modules integration.
The main objective of these tools is to make sure that the specific modules are
working as per the client's needs. To construct integration testing suites, we will
use these tools.

Some of the most used integration testing tools are as follows:

o Citrus
o FitNesse
o TESSY
o Protractor
o Rational Integration tester
Software Development Life Cycle (SDLC)

A software life cycle model (also termed process model) is a pictorial and
diagrammatic representation of the software life cycle. A life cycle model
represents all the methods required to make a software product transit
through its life cycle stages.

SDLC Cycle represents the process of developing software. SDLC framework


includes the following steps:

The stages of SDLC are as follows:

Stage1: Communication and requirement analysis

In communication, the user requests software by meeting the service
provider. Requirement analysis is the most important and necessary stage in
SDLC.

The business analyst and project organizer set up a meeting with the client to
gather all the data, like what the customer wants to build, who will be the end
user, and what the objective of the product is. Before creating a product, a core
understanding or knowledge of the product is very necessary.

Once the requirement is understood, the SRS (Software Requirement


Specification) document is created. The developers should thoroughly follow
this document and also should be reviewed by the customer for future
reference.

Stage2: Feasibility study and system analysis

A rough plan and road map for the software are prepared using algorithms and models.

Stage3: Designing the Software

The next phase is about bringing together all the knowledge of requirements and
analysis to produce the design of the software project. This phase uses the
outputs of the last two stages, such as inputs from the customer and the
requirement gathering, to create a blueprint of the software.
Stage4: Developing the project
In this phase of SDLC, the actual development begins, and the
programming is built. The implementation of design begins concerning
writing code. Developers have to follow the coding guidelines described by
their management, and programming tools like compilers, interpreters,
debuggers, etc. are used to develop and implement the code.

Stage5: Testing

After the code is generated, it is tested against the requirements to make sure
that the products are solving the needs addressed and gathered during the
requirements stage.

During this stage, unit testing, integration testing, system testing, acceptance
testing are done.

Stage6: Deployment

Once the software is certified, and no bugs or errors are stated, then it is
deployed.

Then, based on the assessment, the software may be released as it is or with
suggested enhancements in the targeted segment.

Stage7: Maintenance

Once when the client starts using the developed systems, then the real issues
come up and requirements to be solved from time to time.

This procedure where the care is taken for the developed product is known as
maintenance.

Different Software models

Waterfall Model:

The Waterfall Model was the first Process Model to be introduced. It is also
referred to as a linear- sequential life cycle model or classic model. It is
very simple to understand and use. In a waterfall model, each phase must be
completed before the next phase can begin and there is no overlapping in the
phases.
The Waterfall model is the earliest SDLC approach that was used for software
development.
The waterfall Model illustrates the software development process in a linear
sequential flow. This means that any phase in the development process begins
only if the previous phase is complete. In this waterfall model, the phases do
not overlap.
Waterfall approach was first SDLC Model to be used widely in Software
Engineering to ensure success of the project. In "The Waterfall" approach, the
whole process of software development is divided into separate phases. In
this Waterfall model, typically, the outcome of one phase acts as the input for
the next phase sequentially.
The following illustration is a representation of the different phases of the
Waterfall Model.

The sequential phases in Waterfall model are −


Requirement Gathering and analysis − All possible requirements of
the system to be developed are captured in this phase and documented
in a requirement specification document.
System Design − The requirement specifications from first phase are
studied in this phase and the system design is prepared. This system
design helps in specifying hardware and system requirements and helps
in defining the overall system architecture.
Implementation − With inputs from the system design, the system is
first developed in small programs called units, which are integrated in
the next phase. Each unit is developed and tested for its functionality,
which is referred to as Unit Testing.
Integration and Testing − All the units developed in the
implementation phase are integrated into a system after testing of each
unit. Post integration the entire system is tested for any faults and
failures.
Deployment of system − Once the functional and non-functional
testing is done; the product is deployed in the customer environment or
released into the market.
Maintenance − There are some issues which come up in the client
environment. To fix those issues, patches are released. Also to enhance
the product some better versions are released. Maintenance is done to
deliver these changes in the customer environment.
All these phases are cascaded to each other in which progress is seen as
flowing steadily downwards (like a waterfall) through the phases. The next
phase is started only after the defined set of goals are achieved for previous
phase and it is signed off, so the name "Waterfall Model". In this model, phases
do not overlap.
Advantages:

Some of the major advantages of the Waterfall Model are as follows −


Simple and easy to understand and use
Phases are processed and completed one at a time.
Works well for smaller projects where requirements are very well
understood.
It is disciplined in approach.
Disadvantages:

No working software is produced until late during the life cycle.


High amounts of risk and uncertainty.
Not a good model for complex and object-oriented projects.
Poor model for long and ongoing projects.
Not suitable for projects where requirements are at a moderate to
high risk of changing. So, risk and uncertainty are high with this
process model.

Spiral Model:

The spiral model, initially proposed by Boehm, is a combination of the
waterfall and iterative models. Using the spiral model, the software is
developed in a series of incremental releases. Each phase in the spiral model
begins with a planning phase and ends with an evaluation phase.

The spiral model has four phases. A software project repeatedly passes
through these phases in iterations called Spirals.

Planning phase
This phase starts with gathering the business requirements in the baseline
spiral. In the subsequent spirals as the product matures, identification of
system requirements, subsystem requirements and unit requirements are all
done in this phase.
This phase also includes understanding the system requirements by
continuous communication between the customer and the system analyst. At
the end of the spiral, the product is deployed in the identified market.
Risk Analysis
Risk Analysis includes identifying, estimating and monitoring the technical
feasibility and management risks, such as schedule slippage and cost overrun.
After testing the build, at the end of first iteration, the customer evaluates the
software and provides feedback.
Engineering or construct phase
The Construct phase refers to production of the actual software product at
every spiral. In the baseline spiral, when the product is just thought of and the
design is being developed a POC (Proof of Concept) is developed in this phase
to get customer feedback.
Evaluation Phase
This phase allows the customer to evaluate the output of the project to update
before the project continues to the next spiral.
Software project repeatedly passes through all these four phases.
Advantages:
Flexible model
Project monitoring is very easy and effective
Risk management
Easy and frequent feedback from users.
Disadvantages:
It doesn’t work for smaller projects.
Risk analysis requires specific expertise.
It is a costly and complex model.
Project success is highly dependent on the risk analysis phase.
Prototype Model:

To overcome the disadvantages of the waterfall model, this model is
implemented with a special factor called a prototype. It is also known as
the revaluation model.
Step 1: Requirements gathering and analysis

A prototyping model starts with requirement analysis. In this phase, the


requirements of the system are defined in detail. During the process, the users
of the system are interviewed to know what is their expectation from the
system.
Step 2: Quick design

The second phase is a preliminary design or a quick design. In this stage, a


simple design of the system is created. However, it is not a complete design.
It gives a brief idea of the system to the user. The quick design helps in
developing the prototype.

Step 3: Build a Prototype

In this phase, an actual prototype is designed based on the information


gathered from quick design. It is a small working model of the required
system.

Step 4: Initial user evaluation

In this stage, the proposed system is presented to the client for an initial
evaluation. It helps to find out the strength and weakness of the working
model. Comment and suggestion are collected from the customer and
provided to the developer.

Step 5: Refining prototype

If the user is not happy with the current prototype, you need to refine the
prototype according to the user's feedback and suggestions.

This phase will not be over until all the requirements specified by the user are
met. Once the user is satisfied with the developed prototype, a final system is
developed based on the approved final prototype.

Step 6: Implement Product and Maintain

Once the final system is developed based on the final prototype, it is


thoroughly tested and deployed to production. The system undergoes routine
maintenance to minimize downtime and prevent large-scale failures.
Advantages:

Users are actively involved in development. Therefore, errors can


be detected in the initial stage of the software development process.
Missing functionality can be identified, which helps to reduce the risk
of failure as Prototyping is also considered as a risk reduction activity.
Helps team members to communicate effectively.
Customer satisfaction exists because the customer can feel the product
at a very early stage.

Disadvantages:

Prototyping is a slow and time-consuming process.


The cost of developing a prototype is a total waste as the
prototype is ultimately thrown away.
Prototyping may encourage excessive change requests.
After seeing an early prototype model, the customers may think that
the actual product will be delivered to him soon.
The client may lose interest in the final product when he or she is
not happy with the initial prototype.

SDLC - V-Model

The V-model is an SDLC model where execution of processes happens in a


sequential manner in a V- shape. It is also known as Verification and
Validation model.
The V-Model is an extension of the waterfall model and is based on the
association of a testing phase for each corresponding development stage. This
means that for every single phase in the development cycle, there is a directly
associated testing phase. This is a highly-disciplined model and the next phase
starts only after completion of the previous phase.
V- Model - Design

Under the V-Model, the corresponding testing phase of the development phase is
planned in parallel. So, there are Verification phases on one side of the ‘V’
and Validation phases on the other side. The Coding Phase joins the two sides
of the V-Model.

There are the various phases of Verification Phase of V-model:

1. Business requirement analysis: This is the first step where the product
requirements are understood from the customer's side. This phase
involves detailed communication to understand the customer's
expectations and exact requirements.
2. System Design: In this stage system engineers analyze and
interpret the business of the proposed system by studying the
user requirements document.
3. Architecture Design: The baseline in selecting the architecture is that
it should realize all the requirements. The design typically consists of the list of modules,
brief functionality of each module, their interface relationships,
dependencies, database tables, architecture diagrams, technology
details, etc. The integration test plans are developed in this particular
phase.
4. Module Design: In the module design phase, the system is broken down
into small modules. The detailed design of the modules is specified,
which is known as Low-Level Design (LLD).
5. Coding Phase: After designing, the coding phase starts. Based on
the requirements, a suitable programming language is decided. Coding
follows defined guidelines and standards. Before the code is checked into
the repository, the final build is optimized for better performance, and
the code goes through many code reviews.

The various phases of the Validation side of the V-model are as follows:

1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed
during the module design phase. These UTPs are executed to
eliminate errors at the code or unit level. A unit is the smallest entity
that can exist independently, e.g., a program module. Unit testing
verifies that this smallest entity functions correctly when isolated
from the rest of the code/units (a minimal example is sketched after this list).
2. Integration Testing: Integration Test Plans are developed during
the Architecture Design phase. These tests verify that modules
developed and tested independently can coexist and communicate
with each other.
3. System Testing: System Test Plans are developed during the System
Design phase. Unlike Unit and Integration Test Plans, System Test
Plans are composed by the client's business team. System testing
ensures that the expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business
requirement analysis phase. It involves testing the software product in the
user environment. Acceptance tests reveal compatibility problems
with the other systems available in the user environment. They also
discover non-functional problems, such as load and performance defects,
in the real user environment.
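
As an illustration of what a unit-level test might look like, the following is a minimal sketch in Python using the standard unittest module. The add() function and the expected values are hypothetical examples, not taken from any specific project.

import unittest

# Hypothetical unit under test: the smallest independently testable entity.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Each test verifies the unit in isolation from the rest of the code.
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-1, -1), -2)

if __name__ == "__main__":
    unittest.main()   # runs every test case defined in this module

Each test case exercises the unit only through its inputs and outputs, which is what the Unit Test Plans prepared during module design are executed to check.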

When to use V-Model?


1. When the requirements are well defined and unambiguous.
2. The V-shaped model should be used for small to
medium-sized projects where requirements are clearly
defined and fixed.
3. The V-shaped model should be chosen when ample technical
resources with essential technical expertise are available.

Advantage:

• Easy to understand.
• Testing activities like test planning and test design happen well before
coding.
• This saves a lot of time, hence a higher chance of success over the
waterfall model.
• Avoids the downward flow of defects.
• Works well for small projects where requirements are easily understood.

Disadvantage:

• Very rigid and least flexible.
• Not a good model for complex projects.
• Software is developed during the implementation stage, so no
early prototypes of the software are produced.
• If any changes happen midway, then the test documents, along
with the requirement documents, have to be updated.

SDLC - RAD Model

The RAD (Rapid Application Development) model is based on prototyping
and iterative development with no specific planning involved. The process of
writing the software itself involves the planning required for developing the
product.
Rapid Application Development focuses on gathering customer requirements
through workshops or focus groups, early testing of the prototypes by the
customer using an iterative concept, reuse of the existing prototypes
(components), continuous integration and rapid delivery.
RAD Model Design:

RAD model distributes the analysis, design, build and test phases into a series
of short, iterative development cycles.

Following are the various phases of the RAD Model −


Business Modelling:
The business model for the product under development is designed in terms
of the flow of information and the distribution of information between various
business channels. A complete business analysis is performed to find the
information vital to the business, how it can be obtained, how and when the
information is processed, and what factors drive the successful flow of
information.
Data Modelling:
The information gathered in the Business Modelling phase is reviewed and
analyzed to form the sets of data objects vital for the business. The attributes of
each data set are identified and defined. The relations between these data objects
are established and defined in detail in relevance to the business model.
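
As a small illustration (not part of the RAD model itself), data objects, their attributes and a relation between them could be sketched in Python as follows. The Customer and Order objects and their fields are hypothetical examples.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    order_id: int        # attribute of the Order data object
    amount: float        # attribute of the Order data object

@dataclass
class Customer:
    customer_id: int     # attribute of the Customer data object
    name: str
    # relation: one customer is associated with many orders
    orders: List[Order] = field(default_factory=list)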
Process Modelling:
The data object sets defined in the Data Modelling phase are converted to
establish the business information flow needed to achieve specific business
objectives as per the business model. The process model for any changes or
enhancements to the data object sets is defined in this phase. Process
descriptions for adding, deleting, retrieving or modifying a data object are
given.
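
For illustration only, the process descriptions for adding, retrieving, modifying and deleting a data object could be sketched as simple operations on an in-memory store. The customer store and function names below are hypothetical.

# Hypothetical in-memory store for the Customer data object, keyed by id.
customers = {}

def add_customer(customer_id, name):
    # "add" process description
    customers[customer_id] = {"name": name}

def retrieve_customer(customer_id):
    # "retrieve" process description
    return customers.get(customer_id)

def modify_customer(customer_id, new_name):
    # "modify" process description
    if customer_id in customers:
        customers[customer_id]["name"] = new_name

def delete_customer(customer_id):
    # "delete" process description
    customers.pop(customer_id, None)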
Application Generation:
The actual system is built and coding is done by using automation tools to
convert process and data models into actual prototypes.
Testing and Turnover:
The overall testing time is reduced in the RAD model as the prototypes are
independently tested during every iteration. However, the data flow and the
interfaces between all the components need to be thoroughly tested with
complete test coverage. Since most of the programming components have
already been tested, it reduces the risk of any major issues.
Incremental model:

It is a process of software development where the requirements are divided into
multiple standalone modules of the software development cycle.
Each module goes through the requirement, design, implementation and
testing phases. This process continues until the complete system is achieved.

The various phases of the incremental model are as follows:

1. Requirement gathering & analysis: In this phase, requirements are
gathered from customers and checked by an analyst to determine whether
they can be fulfilled. The analyst also checks whether the requirements can
be achieved within the budget. After this, the software team moves to the
next phase.
2. Design: In the design phase, the team designs the software using different
diagrams such as the data flow diagram, activity diagram, class diagram, state
transition diagram, etc.

3. Implementation: In the implementation phase, the requirements are written in
a programming language and transformed into computer programs, which are
called software.

4. Testing: After completing the coding phase, software testing starts using
different test methods. There are many test methods, but the most common
are white-box, black-box and grey-box test methods (a small black-box-style
example is sketched after this list).

5. Deployment: After completing all the phases, the software is deployed to its
working environment.

6. Review: In this phase, after product deployment, a review is
performed to check the behaviour and validity of the developed product.
If any errors are found, the process starts again from requirement
gathering.

7. Maintenance: In the maintenance phase, after the software has been deployed
in the working environment, there may be bugs or errors, or new updates
may be required. Maintenance involves debugging and adding new options.
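
To make the black-box idea concrete, the following minimal Python sketch tests a hypothetical discount() function by checking inputs against expected outputs only, without looking at the internal structure of the code; a white-box test, by contrast, would be designed from the code paths inside the function. The function and values are assumptions for illustration.

# Hypothetical unit under test.
def discount(price, percent):
    return price - price * percent / 100

# Black-box style checks: only inputs and expected outputs are considered;
# the internal implementation of discount() is treated as unknown.
assert discount(100, 10) == 90
assert discount(200, 0) == 200
print("black-box checks passed")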

Advantages:

1. Testing and debugging during a smaller iteration is easy.
2. Parallel development can be planned.
3. It easily accommodates the ever-changing needs of the project.
4. Risks are identified and resolved during each iteration.
5. Limited time is spent on documentation and extra time on designing.

Disadvantages:

1. It is not suitable for smaller projects.
2. The design may be changed again and again because of imperfect
requirements.
3. Requirement changes can cause the project to go over budget.
4. The project completion date cannot be confirmed because of changing
requirements.
