
Department of Computer Science and Business Systems

Question and Answers in Software Engineering


SYLLABUS: SOFTWARE ENGINEERING
UNIT I
Introduction: Programming in the small vs. programming in the large; software project failures
and importance of software quality and timely availability; engineering approach to software
development; role of software engineering towards successful execution of large software projects;
emergence of software engineering as a discipline.
UNIT II
Software Project Management: Basic concepts of life cycle models – different models and
milestones; software project planning – identification of activities and resources; concepts of
feasibility study; techniques for estimation of schedule and effort; software cost estimation models
and concepts of software engineering economics; techniques of software project control and
reporting; introduction to measurement of software size; introduction to the concepts of risk and
its mitigation; configuration management.
UNIT III
Software Quality and Reliability: Internal and external qualities; process and product quality;
principles to achieve software quality; introduction to different software quality models like
McCall, Boehm, FURPS / FURPS+, Dromey, ISO – 9126; introduction to Capability Maturity
Models (CMM and CMMI); introduction to software reliability, reliability models and estimation.
UNIT IV
Problem Space Understanding: How an industry works, how an IT company works, How IT
supports business, Problem Space Understanding, Knowledge Driven Development (KDD),
Domain knowledge framework of KDD, usage of domain knowledge framework in Insurance,
Banking and Automobile, KDD as a project delivery methodology, Linking domain knowledge to
software development, An example to illustrate this, A case study to produce a KDD artifact using
Agile.
Software Requirements Analysis, Design and Construction: Introduction to Software
Requirements Specifications (SRS) and requirement elicitation techniques; techniques for
requirement modelling – decision tables, event tables, state transition tables, Petri nets;
requirements documentation through use cases; introduction to UML, introduction to software
metrics and metrics based control methods; measures of code and design quality.
UNIT V
Software Testing: Introduction to faults and failures; basic testing concepts; concepts of
verification and validation; black box and white box tests; white box test coverage – code coverage,
condition coverage, branch coverage; basic concepts of black-box tests – equivalence classes,
boundary value tests, usage of state tables; testing use cases; transaction based testing; testing for
non-functional requirements – volume, performance and efficiency; concepts of inspection.
Text Books:
1. Software Engineering, Ian Sommerville
Reference Books:
1. Fundamentals of Software Engineering, Carlo Ghezzi, Jazayeri Mehdi, Mandrioli Dino
2. Software Requirements and Specification: A Lexicon of Practice, Principles and Prejudices,
Michael Jackson
3. The Unified Software Development Process, Ivar Jacobson, Grady Booch, James Rumbaugh
4. Design Patterns: Elements of Reusable Object-Oriented Software, Erich Gamma, Richard
Helm, Ralph Johnson, John Vlissides
5. Software Metrics: A Rigorous and Practical Approach, Norman E Fenton, Shari Lawrence
Pfleeger
6. Software Engineering: Theory and Practice, Shari Lawrence Pfleeger and Joanne M. Atlee
7. Object-Oriented Software Construction, Bertrand Meyer
8. Object-Oriented Software Engineering: A Use Case Driven Approach, Ivar Jacobson
9. Touch of Class: Learning to Program Well with Objects and Contracts, Bertrand Meyer
10. UML Distilled: A Brief Guide to the Standard Object Modeling Language, Martin Fowler

UNIT- I
PART - A (2 Marks)
1. Define software engineering
2. What is meant by small programming
3. What is meant by large programming
4. Write any two characteristics of software as a product
5. What do you mean by software quality
6. What are the major differences between system engineering and software engineering
7. List the roles of software engineer
8. List the roles of software developer

PART- B (10 Marks)


1. Explain the reasons for software project failure
2. Illustrate the importance of software quality
3. Explain in detail about Engineering approach to software development
4. Explain the role of software engineering towards successful execution of large software
projects
5. Demonstrate the emergence of software engineering as a discipline

UNIT II
PART-A (2 Marks)
1. List the phases in SDLC
2. What do you mean by feasibility study
3. List the techniques for estimation of schedule and effort
4. List the software cost estimation models
5. What do you mean by software engineering economics
6. List the techniques of software project control and reporting
7. How can you measure the software size
8. Define a risk
9. What is configuration management

PART-B (10 Marks)

1. Explain the Waterfall Model


2. Explain in detail about the Spiral Model
3. Explain the Evolutionary and Incremental Model
4. What are the necessities of a life cycle model? Elaborate on the various issues of the software life cycle
5. Demonstrate the software project planning
6. Demonstrate the feasibility study
7. Explain the techniques for estimation of schedule and effort
8. Illustrate the software cost estimation models
9. Explain the techniques of software project control and reporting
10. Explain about risk and its mitigation
11. Explain about configuration management
12. Demonstrate COCOMO with an example

UNIT - III
PART- A (2 Marks)
1. List the internal and external qualities of software
2. List the Principles to achieve software quality
3. What are the software quality models
4. What is CMMI
5. Define software reliability
6. List the reliability models

PART- B (10 Marks)

1. Explain the Principles to achieve software quality


2. Illustrate the software quality models like McCall and Boehm
3. Demonstrate the software quality models like FURPS and Dromey
4. Discuss about ISO – 9126 standards
5. Explain about Capability Maturity Models
6. Discuss about reliability models

UNIT – IV
PART-A (2 Marks)
1. Define a Problem Space
2. How an IT company works
3. How IT supports business
4. What do you mean by Knowledge Driven Development
5. List the usage of domain knowledge framework in Insurance
6. What is SRS
7. List the requirement elicitation techniques
8. List the techniques for requirement modelling
9. Define a decision table
10. Define an event table
11. Define state transition table
12. Define UML
13. Define software metrics

PART- B (10 Marks)

1. Explain the need for Problem Space Understanding in the IT industry


2. Explain about Knowledge Driven Development
3. Illustrate the Domain knowledge framework of KDD
4. Explain the usage of domain knowledge framework in Insurance, Banking and
Automobile
5. Explain about Software Requirements Specifications
6. Illustrate the requirement elicitation techniques
7. Demonstrate the techniques for requirement modelling
8. Explain the Importance and Principles of Modeling
9. Explain the importance of UML
10. Explain the importance of software metrics
11. Demonstrate the measures of code and design quality

UNIT – V
PART- A (2 Marks)
1. Define software testing
2. What are the objectives of testing
3. Define White Box Testing
4. What are the two levels of testing
5. What are the various testing activities
6. Write short note on black box testing
7. What is equivalence partitioning
8. What is Regression Testing
9. What is a boundary value analysis
10. What is cyclomatic complexity
11. How to compute the cyclomatic complexity
12. Distinguish between verification and validation
13. Distinguish between alpha and beta testing
14. State the objectives and guidelines for debugging
15. What do you mean by test case management

PART- B (10 Marks)


1. Explain the testing objectives and its principles
2. What is the need for software maintenance and maintenance report
3. What are the attributes of the good test? Explain the test case design
4. Write a note on
(i) Black box testing
(ii) Regression testing
(iii) White box testing
(iv) Integration testing
5. Discuss the differences between black box and white box testing
6. Explain boundary value analysis
7. Justify the importance of testing process
8. Explain automated testing tools. How are test cases generated
9. What are the various testing strategies for software testing
10. Illustrate about transaction-based testing
11. Explain the testing for non-functional requirements

UNIT I (Short Answers)


Define Software Engineering

 Software Engineering is a systematic engineering approach to software product/application development. It is an engineering branch associated with analyzing user requirements and designing, developing, testing and maintaining software products.

 Software engineering is defined as a process of analyzing user requirements and then designing, building, and testing a software application that will satisfy those requirements.

 IEEE defines software engineering as ‘the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software’.

 Fritz Bauer defined it as ‘the establishment and use of sound engineering principles in order to obtain, economically, software that is reliable and works efficiently on real machines’.

 Boehm defines software engineering as ‘the practical application of scientific knowledge to the creative design and building of computer programs. It also includes the associated documentation needed for developing, operating, and maintaining them.’

What is meant by Small Programming

See the Long Answers section.

What is meant by Large Programming

See the Long Answers section.

Write any two characteristics of Software as a Product

Software Products are nothing but software systems delivered to the customer with the
documentation that describes how to install and use the system. In certain cases, software
products may be part of system products where hardware, as well as software, is delivered to a
customer. Software products are produced with the help of the software process. The software
process is a way in which we produce software.
Types of software products:
Software products fall into two broad categories:
1. Generic products:
Generic products are stand-alone systems that are developed by a production unit
and sold on the open market to any customer who is able to buy them.
2. Customized Products:
Customized products are the systems that are commissioned by a particular
customer. Some contractor develops the software for that customer.
Essential characteristics of Well-Engineered Software Product:
A well-engineered software product should possess the following essential characteristics:
 Efficiency:
The software should not make wasteful use of system resources such as memory
and processor cycles.
 Maintainability:
It should be possible to evolve the software to meet the changing requirements of
customers.
 Dependability:
It is the property of the software that it ought not to cause any physical or economic injury in the event of system failure. It includes a range of characteristics such as reliability, security, and safety.
 In time:
Software should be developed well in time.
 Within Budget:
The software development costs should not overrun and it should be within the
budgetary limit.
 Functionality:
The software system should exhibit the proper functionality, i.e. it should perform
all the functions it is supposed to perform.
 Adaptability:
The software system should have the ability to get adapted to a reasonable extent
with the changing requirements.

What do you mean by Software Quality

 Software quality is defined as a field of study and practice that describes the desirable
attributes of software products. There are two main approaches to software quality:
defect management and quality attributes.
 Software Quality Attributes are features that facilitate the measurement of
performance of a software product by Software Testing professionals, and include
attributes such as availability, interoperability, correctness, reliability, learnability,
robustness, maintainability, readability, extensibility, testability.

What are the major differences between System Engineering and Software Engineering

1. System Engineer:

 A System Engineer is a person who deals with the overall management of engineering
projects during their life cycle (focusing more on physical aspects).
 They follow an interdisciplinary approach governing the total technical and managerial
effort required to transform requirements into solutions.
 They are generally concerned with all aspects of computer-based system development,
including hardware, software and process engineering.
Systems Engineering Methods :
 Stakeholder Analysis
 Interface Specification
 Design Tradeoffs
 Configuration Management
 Systematic Verification and Validation
 Requirements Engineering
2. Software Engineer:
 A Software Engineer is a person who deals with the designing and developing good
quality of software applications/software products.
 They follow a systematic and disciplined approach to the design, development,
deployment and maintenance of software applications.
 They are generally concerned with all aspects of software development, infrastructure,
control, applications and databases in the system.
Software Engineering Methods :
 Process Modeling
 Incremental Verification and Validation
 Process Improvement
 Model-Driven Development
 Agile Methods
 Continuous Integration

Difference between System Engineer and Software Engineer :


1. A System Engineer is a person who deals with the overall management of engineering projects during their life cycle (focusing more on physical aspects), whereas a Software Engineer is a person who deals with designing and developing good-quality software applications/products.

2. System Engineers follow an interdisciplinary approach governing the total technical and managerial effort required to transform requirements into solutions, whereas Software Engineers follow a systematic and disciplined approach to the design, development, deployment and maintenance of software applications.

3. In general, System Engineers are concerned with all aspects of computer-based system development, including hardware, software and process engineering, whereas Software Engineers are concerned with all aspects of software development, infrastructure, control, applications and databases in the system.

4. One thing software engineering can learn from system engineering is the consideration of trade-offs and the use of framework methods; one thing system engineering can learn from software engineering is a disciplined approach to cost estimation.

5. System Engineers mostly focus on users and domains, whereas Software Engineers mostly focus on developing good software.

6. Systems engineering methods include stakeholder analysis, interface specification, design trade-offs, configuration management, systematic verification and validation, and requirements engineering; software engineering methods include process modeling, incremental verification and validation, process improvement, model-driven development, agile methods, and continuous integration.

7. System engineering ensures correct external interfaces and interfaces among subsystems and software, whereas software engineering makes the interfaces among software modules, data and communication paths work.

8. System Engineers require a broader educational background (Engineering, Mathematics, Computer Science, etc.), while Software Engineers require a Computer Science or Computer Engineering background.

 However, these two disciplines are interconnected: there are no such hard and
fast rules for these titles in the IT industry, and we can also see how these two disciplines
cooperate with each other.
List the roles of Software Engineer
See the above answer
List the roles of Software Developer
 Talking through requirements with clients.
 Testing software and fixing problems.
 Maintaining systems once they're up and running.
 Being a part of technical designing.
 Integrate software components.
 Producing efficient code.
 Writing program code for reference and reporting.

(Long Answers)
Programming in the small vs. programming in the large:
 In software engineering, programming in the large and programming in the small refer to
two different aspects of writing software: designing a larger system as a composition of
smaller parts, and creating those smaller parts by writing lines of code in a programming
language, respectively.
 The terms were coined by Frank DeRemer and Hans Kron in their 1975 paper
"Programming-in-the-large versus programming-in-the-small", in which they argue that
the two are essentially different activities, and that typical programming languages and the
practice of structured programming provide good support for the latter, but not for the
former.

 Fred Brooks identifies that the way an individual program is created is different from how
a programming systems product is created.
 The former likely does one relatively simple task well. It is probably coded by a single
engineer, is complete in itself, and is ready to run on the system on which it was developed.
 The programming activity was probably fairly short-lived, as simple tasks are quick and
easy to complete. This is the endeavor that DeRemer and Kron describe as programming
in the small.
 The latter, in contrast, is likely to be split up into several or even hundreds of separate
modules which individually are of a similar complexity to the individual programs described
above. However, each module will define an interface to its surrounding modules.
 Brooks describes how programming systems projects are typically run as formal projects
that follow industry best practices and will comprise testing, documentation and ongoing
maintenance activities as well as activities to ensure that the product is generalized to work
in different scenarios including on systems other than the development systems on which
it was created.

Programming in the large


 In software development, programming in the large can involve programming by larger
groups of people or by smaller groups over longer time periods. Either of these conditions
will result in large, and hence complicated, programs that can be challenging for
maintainers to understand.
 With programming in the large, coding managers place emphasis on partitioning work
into modules with precisely-specified interactions. This requires careful planning and
careful documentation.
 With programming in the large, program changes can become difficult. If a change
operates across module boundaries, the work of many people may need re-doing. Because
of this, one goal of programming in the large involves setting up modules that will not need
altering in the event of probable changes. This is achieved by designing modules so they
have high cohesion and loose coupling.
 Programming in the large requires abstraction-creating skills. Until a module becomes
implemented it remains an abstraction. Taken together, the abstractions should create
an architecture unlikely to need change. They should define interactions that have
precision and demonstrable correctness.
 Programming in the large requires management skills. The process of building abstractions
aims not just to describe something that can work but also to direct the efforts of people
who will make it work.
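
To make these ideas concrete, the following is a minimal, hypothetical Python sketch (the names PaymentGateway, MockGateway and BillingService are illustrative, not taken from any particular project): two modules interact only through a precisely specified interface, so either side can be changed internally without re-doing the other side's work.

    from abc import ABC, abstractmethod

    class PaymentGateway(ABC):
        """The precisely specified interface agreed between two teams."""
        @abstractmethod
        def charge(self, account_id: str, amount_cents: int) -> bool:
            ...

    class MockGateway(PaymentGateway):
        """One team's implementation; it can change freely behind the interface."""
        def charge(self, account_id: str, amount_cents: int) -> bool:
            return amount_cents > 0  # stand-in for real payment processing

    class BillingService:
        """Another team's module; it depends only on the interface (loose coupling)."""
        def __init__(self, gateway: PaymentGateway):
            self._gateway = gateway

        def bill(self, account_id: str, amount_cents: int) -> str:
            return "billed" if self._gateway.charge(account_id, amount_cents) else "declined"

    print(BillingService(MockGateway()).bill("acct-1", 500))  # prints: billed

Because BillingService never sees MockGateway's internals, swapping in a real gateway is a change that does not cross the module boundary, which is exactly the goal of high cohesion and loose coupling described above.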

Programming in the small


 In software development, programming in the small describes the activity of writing a
small program. Small programs are typified by being small in terms of their source code
size, are easy to specify, quick to code and typically perform one task or a few very closely
related tasks very well.
 Programming in the small can involve programming by individuals or small groups over
short time periods and may involve less formal practices (for instance less emphasis on
documentation or testing), tools and programming languages (e.g. the selection of a loosely
typed scripting language in preference to a strictly typed programming language).
 Programming in the small can also describe an approach to making a prototype software
or where rapid application development is more important than stability or correctness.
 In computer science terms, programming in the small deals with short-lived programmatic
behavior, often executed as a single ACID transaction, which allows access to local
logic and resources such as files, databases, etc.
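
In contrast with the sketch above, a complete "program in the small" can be only a few lines that perform one task well; this hypothetical example counts word frequencies in a piece of text:

    from collections import Counter

    def word_counts(text: str) -> Counter:
        """One small, easily specified task: count word occurrences."""
        return Counter(text.lower().split())

    print(word_counts("the quick brown fox jumps over the lazy dog"))

It is quick to code, complete in itself, and needs no module boundaries or formal project practices.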
Software project failures: (Explain the reasons for software project failure)
Common Software Failure Causes:

 Lack of user participation


 Changing requirements
 Unrealistic or unarticulated project goals
 Inaccurate estimates of needed resources
 Badly defined system requirements
 Poor reporting of the project’s status
 Lack of resources
 Unmanaged risks
 Poor communication among customers, developers, and users
 Use of immature technology
 Inability to handle the project’s complexity
 Sloppy development practices
 Poor Project Management
 Stakeholder politics
 Lack of Stakeholder involvement
 Commercial pressures

Strategies to avoid Software project failures:

Strategy 1- Define a Software Product Vision


 A good software product vision describes the core essence of a software product and where
that product is headed.

 It explains the higher purpose of why that product exists in the first place.

 It sets the direction for where the product is going and what it will deliver in the future.

Strategy 2- Define Roles & Responsibilities from the Outset


 Executive involvement is a primary variable in predicting the success of a software project.

 Having a leadership team aligned across an organization articulating the purpose, value,
and rationale for a software project goes a long way towards getting stakeholders and end-
users pulling the proverbial rope in the same direction.

 You need to ensure that you define the key stakeholders within your business that will be
involved in the delivery of the solution.

Strategy 3- Prioritize & Budget But Keep Your Scope Flexible

 One would imagine that every software development project needs a detailed, step-by-step
plan.

 The kind of plan that would outline every requirement, detail every risk and mitigation
steps, document the key people involved and so on.

 The old proverb of “You don’t know what you don’t know” is key here. No matter how
much time is spent developing a detailed plan, specification and wireframes or prototypes
there’s no way for a team to know what they don’t know.

 A much better approach is to define a product vision and then define a broad-strokes plan
that allows a project to begin to inch forward.

 You spend time detailing a small but useful subset of features of the overall project, get it
built, review it, discuss it, compare against your original product vision/plan and rinse and
repeat. This is an agile approach to software development.

 The ability to adapt and incorporate changes to your business as you move forward can
mean the difference between success and failure, and moreover creates the best possible
product.

 The way to achieve this is to prioritize ruthlessly. Work should be undertaken in a close-knit
partnership between the software development team and the customer.

 Meetings should take place regularly to discuss the status and re-prioritize. For these
meetings to be meaningful, the software partner must be transparent about the budget/time
consumed and progress made towards the completion of the project.

 This flexible approach keeps everybody actively focused on the delivery of the product
vision to the exclusion of all waste.

 It also allows both parties to mould the scope of the project in response to new information.

 The finished solution is a better product that meets your product vision. Given the wealth
of data provided to the customer about the progress made versus budget spent, the customer
always makes scope changes armed with the information they require.
 We respect that it’s often the case that a business will want a cost and timescale for the
building of a new software solution.

 However, one should be very mindful of the change that will likely come about in your
requirements as you progress, and build in mechanisms to allow changes to happen.

 This is true partnership working, where information is shared and the product quality and
scope carefully sculpted in collaboration.

Strategy 4- Use Wireframes & Prototypes to Your Advantage

 In many ways, one of the hardest parts about creating software is not the actual building of
the software, but instead the communication of the requirements in the first place.

 Imagine you met somebody who had never seen a car before, and the only mechanism you
could use to describe what a car looks like and how it works is the written word.

 It would be a pretty grueling experience, to say the least. Just attempting to describe what
the average car looks like could be a dozen pages of text. How many wheels does it have,
and where do the wheels go? Hold on, what exactly is a wheel? And so on…

 Now imagine instead if you were simply able to draw them a car. Wouldn’t that make the
communication of what the car looks like so much easier?

 In software development, this is what we would refer to as a wireframe. Simple static


drawings of one or more screens within the proposed software solution.

 Now take this analogy a step further; imagine you could create a small plastic model of a
car with some basic operating parts like turning wheels, opening doors, and so on.

 This is what we would call a prototype, and a prototype typically conveys more
information than a wireframe due to its interactive nature.

 There’s a reason the saying “a picture is worth a thousand words” is often quoted. In
software, wireframe or prototype of a proposed software solution is an essential step for
communicating the desired outcome.

 Getting a wireframe or prototype in place helps everybody on the team to get on the same
page about what success looks like much quicker.

 You’ll have multiple stakeholders within your company to involve, and the sooner they all
agree on the desired outcome, the sooner your project can begin.

Strategy 5- Traditional (Waterfall) Vs Agile Project Management


 Requirements changing or being added to are almost an inevitable fact of software
development, so failing to plan for them is to plan to fail.

 The problem here is not a lack of good intention. The problem is specifically with the
opening statement “I know all of my requirements”.

 You probably don’t, and the approach to your commercials and project management is
now set up such that when you realize this is the case it’s too late.

 The contract between you and the software developer attempts by its nature to limit,
perhaps even prevent, change.

 Alternatively, agile project management builds the need to change requirements as you
progress your project into the approach explicitly.

 Agile recognizes that changes in requirements are not software project failures; instead,
they’re opportunities to realign the project with the overall product vision as more
information about the requirements is unearthed.

 Another tenet of agile is that planning, design and documentation beyond the minimum
necessary to begin work are waste.
 It specifically focuses on delivering working features, which is where the value for the
customer comes to life.
 Lots have been written on the benefits of agile so rather than labor all of them, we’ll break
down the highlights.

Strategy 6- Test Thoroughly Before Release


 Testing isn’t the most ‘fun’ part of a software project but arguably it’s the most
important.

Bugs that make it out into the real world are costly
 Various studies show that it is over five times more expensive to fix bugs or issues that
make it out “into the wild” than it is to find and fix those bugs during the testing phase.

 Thorough software testing is the primary way to avoid bugs leaking out into the wild, and
for this sixth and final strategy, we consider two unique yet related types of testing:

 Testing performed by your software partner, which can include one or more of the
following:
o Cross-browser testing – does the solution work on multiple web browsers
o Functional testing – does the software do what it’s supposed to do
o Automated testing – a routine which can perform a certain set of steps to check the
outcome is as expected
o Unit testing – lines of code which automatically check that other lines of code
produce the correct output given certain inputs (see the sketch after this list)
 Testing performed by you, the customer, which is commonly referred to as User
Acceptance Testing (UAT).
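
As a minimal sketch of the unit-testing idea listed above, written with Python's standard unittest module (the apply_discount function is purely illustrative):

    import unittest

    def apply_discount(price: float, percent: float) -> float:
        """Function under test: apply a percentage discount to a price."""
        return round(price * (1 - percent / 100), 2)

    class TestApplyDiscount(unittest.TestCase):
        def test_ten_percent_off(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount_leaves_price_unchanged(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

    if __name__ == "__main__":
        unittest.main()

Each test feeds the code under test a known input and automatically checks the output, which is how bugs are caught during the testing phase rather than "in the wild".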
Importance of software quality and timely availability:
(Illustrate the importance of software quality)

 The quality of a software product is defined in terms of its fitness of purpose. That is, a quality
product does precisely what the users want it to do. For software products, fitness of
purpose is generally explained in terms of satisfaction of the requirements laid down in the SRS
document. Although "fitness of purpose" is a satisfactory interpretation of quality for many
devices such as a car, a table fan, a grinding machine, etc., for software products "fitness
of purpose" is not a wholly satisfactory definition of quality.

Example: Consider a functionally correct software product, i.e. one that performs all tasks as
specified in the SRS document but has an almost unusable user interface. Even though it may be
functionally right, we cannot consider it to be a quality product.

The modern view of quality associates a software product with several quality attributes,
such as the following:

Portability: A software product is said to be portable if it can freely be made to work in various
operating system environments, on multiple machines, with other software products, etc.

Usability: A software product has better usability if various categories of users can easily invoke
the functions of the product.

Reusability: A software product has excellent reusability if different modules of the product can
quickly be reused to develop new products.

Correctness: A software product is correct if various requirements as specified in the SRS


document have been correctly implemented.

Maintainability: A software product is maintainable if bugs can be easily corrected as and when
they show up, new tasks can be easily added to the product, and the functionalities of the product
can be easily modified, etc.

Software Quality Management System

 A quality management system is the principal method used by organizations to ensure
that the products they develop have the desired quality.

A quality system consists of the following:

 Managerial Structure and Individual Responsibilities: A quality system is the
responsibility of the organization as a whole. However, every organization has a separate
quality department to perform various quality system activities. The quality system of an
organization should have the support of the top management; without support for the quality
system at a high level in a company, few members of staff will take the quality system
seriously.
 Quality System Activities: The quality system activities encompass the following:

Auditing of projects
Review of the quality system
Development of standards, methods, and guidelines, etc.
Production of documents for the top management summarizing the effectiveness of the
quality system in the organization.
Engineering approach to software development:
(Explain in detail about Engineering approach to software development)

Basic Principles of Good Software Engineering approach


Software Engineering is a systematic engineering approach to software product/application
development. It is an engineering branch associated with analyzing user requirements and
designing, developing, testing and maintaining software products.

Some basic principles of good software engineering are:

1. One of the basic software engineering principles is better requirements analysis, which
gives a clear vision of the project. Ultimately, a good understanding of user requirements
delivers value to users in the form of a good software product that meets those requirements.

2. All designs and implementations should be as simple as possible, i.e. the KISS
(Keep It Simple, Stupid) principle should be followed. Simple code makes debugging and
further maintenance easier.

3. Maintaining the vision of the project throughout the development process is essential
for the success of a software project, as a clear vision steers the development of the
project in the right direction.

4. Software projects include a number of functionalities; all functionalities should be
developed in a modular approach so that development is faster and easier. This
modularity makes functions or system components independent.

5. Another specialization of the principle of separation of concerns is abstraction, which
suppresses complex detail and delivers simplicity to the customer/user: it provides what
the user actually needs and hides unnecessary things.

6. Think, then act: before starting to develop functionality, first think about the application
architecture, as good planning of the flow of project development produces better results.

7. Sometimes developers build all imaginable functionality up front and later find no use
for some of it. Following the "never add extra" principle is important: implement only what
is actually needed now and add further features when they are required, which saves
effort and time.

8. When other developers work in someone else's code they should not be surprised, and should
not waste their time deciphering it. So providing good documentation at the required steps
is a good way of developing software projects.

9. The Law of Demeter should be followed, as it keeps classes independent in their
functionality and reduces the connections and interdependence between classes, which is
called coupling (see the sketch after this list).

10. Developers should develop the project in such a way that it satisfies the principle of
generality: it should not be limited or restricted to some specific cases/functions, but
should be free from unnatural restrictions and able to serve the customer's actual or
general needs in an extensive manner.

11. The principle of consistency is important in coding style and in designing the GUI
(Graphical User Interface): a consistent coding style makes code easier to read, and
consistency in the GUI makes it easier for users to learn the interface and use the software.

12. Never waste time re-building anything that already exists; take the help of open
source and adapt it in your own way as per the requirement.

13. Performing continuous validation helps to check that the software system meets its
requirement specifications and fulfills its intended purpose, which supports better
software quality control.

14. To survive in the current technology market, using modern programming practices is
important in order to meet users' requirements in the latest and most advanced way.

15. Scalability should be maintained so the software application can grow and manage
increased demand.
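
A minimal, hypothetical sketch of the Law of Demeter from principle 9 above (the Wallet, Customer and Shop names are illustrative): each class talks only to its direct collaborator instead of reaching through it, which keeps coupling low.

    class Wallet:
        def __init__(self, balance: float):
            self._balance = balance

        def deduct(self, amount: float) -> None:
            self._balance -= amount

    class Customer:
        def __init__(self, wallet: Wallet):
            self._wallet = wallet

        def pay(self, amount: float) -> None:
            self._wallet.deduct(amount)  # a Customer manages its own wallet

    class Shop:
        def charge(self, customer: Customer, amount: float) -> None:
            # Follows Demeter: ask the direct collaborator to act.
            # A violation would be: customer._wallet.deduct(amount)
            customer.pay(amount)

    Shop().charge(Customer(Wallet(100.0)), 30.0)

If Wallet's internals change, only Customer needs updating; Shop is untouched because it never reached through Customer into Wallet.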

Role of software engineering towards successful execution of large software projects:


(Explain the role of software engineering towards successful execution of large software
projects)
 Software development is a field which provides scope for creativity and innovation.
 Developing successful software systems requires painstaking effort, careful study of
requirements, high-quality design decisions, evolving a futuristic architecture,
good programming practice, and a deep sense of commitment to excel.
 A passionate software developer is always aspiring to better his skills and methods and
keep his creative and innovative faculties alive, exploiting his abilities to the fullest to
churn out each project with success.
 The developer gets an immense sense of achievement, pride and motivation when he
sees his own creation put to beneficial real-life use. No job can be more gratifying than
this.
Seven steps to successful execution of Software projects:
Step 1: Understand the Requirements
 A software project begins with a client approaching the developer with a vague list of
requirements.
 This is the trigger point for the developer to engage and take charge, sit down with the
client along with some of their key end-users and carry out a detailed discussion.
 Listen carefully to the client and each user to get a perspective of what their goals and
objectives are and what functionalities they require.
 Understand their business processes, ask questions, and make suggestions to activate
the thought process of the end users.
 This engagement will enable a deeper participation of the users who will help come out
with a clearer specification.
 This process helps refine the sketchy wish list and crystallize the requirements.
Remember, your client understands their business processes well and has a clear
vision of their business needs.
 However, they do not understand the capabilities and possibilities of a software
solution, and hence there is a general tendency to simplify the requirements.
 Simplified description often leads to overlooking some of the key attributes of the
business process, which if otherwise implemented in the software, could give them an
operational edge.
 As a software expert, you are expected to facilitate the process of extracting this out
from the client's team.
 And, mark my words - if you succeed in doing this, you have already crossed the first
hurdle of gaining your client's trust and confidence.
 Your positive contribution to steering this process will make the client comfortable in
dealing with you, which means that later when you come out with a concrete proposal
you will not have much difficulty in bagging the contract at a good price.
 Apart from crystallizing the processes, functionalities, and features required, an
important activity at this stage is to also get a clear picture of the kind of data that the
software should be designed to work upon and the kind of data it will output.
 You need a clear idea of the kinds of data the software should be
capable of handling, the volume of data it has to work upon daily, and whether
there are any extreme data conditions.
Step 2: Freeze the Scope of work & Timeline
 Once the general requirements are gathered from the client, you need to sit down with
your own team to prepare a detailed scope of work document.
 Detailed description of functionalities, user interfaces, reports generated, etc. must be
worked out in as much detail as possible. This document should also clearly state the
exclusions, to avoid ambiguity and subsequent conflict.
 As each component of work involves cost to you, this detailing is extremely important
and your entire profitability will depend upon how clearly you have envisaged and
communicated the scope of work to the client.
 The scope of work document should also be accompanied by a project execution
plan with clear-cut phase wise timelines and the overall project timeline.
 Cost incurred at the end of each phase of the development process should be worked
out accurately and a payment schedule should also accompany your scope of work
document.
 For large development projects, professional project management software is used to
create a detailed plan of resources versus timeline using the PERT technique.
 It is also a good idea to include a list of functionalities and features for future
enhancement, thus opening the door to more work from the client in future and a
continuing relationship.
 The scope of work document would eventually become a legal document so that if there
are ever disputes, any ambiguity of what was promised to the client can be clarified.
 An agreement on the scope of work should be reached, and written approval should be
taken from the client.
 Eventually when you bag the order, this document should form a part of the client’s
order document.
Step 3: Design the Solution
 This is the most important stage and brings your real experience and expertise to the
fore. In this stage you conceptualize the entire system before getting down to code
writing.
 You focus on all technical and operational requirements and keep an overall vision of
what the software system should do and how it should do it.
 The overall structure of the entire system is worked out without getting distracted by
the nitty-gritty of coding and implementation details. Abstraction is the key.
 The system is conceived as consisting of multiple building blocks referred to as software
elements/components, with each element deemed to perform well-defined roles and
functions.
 Each element will serve a purpose, and the elements will interact with each other to
make the complete software system.
 The entire process of planning and designing the solution is referred to as architecting the
system.
 Software architecting is a vast subject. Some of the activities include conceptualizing
and preparing the complete solution, documenting it well, and communicating it to all
stakeholders.
 A software architect has to work closely with the development team and also interact
with the client during the entire architecting and development stage.
Key architecting activities
1. Conceptualization of the overall system structure.
2. Break down of the structure into well defined elements/components – each capable of
performing a set of roles and functions and delivering a specialized service. These
elements will form the building blocks of the software system.
3. Identification of relationships among the software elements.
4. Identification of cross-cutting components (see the sketch after this list). Cross-cutting
components are those that operate across the entire breadth of the software system and
interact with most of the other software elements. The functionalities these components
handle include user authentication and authorization, exception management,
communication, notification, caching, instrumentation & logging, data validation, etc.
5. Defining standardized protocol(s) for communication amongst the software elements and
cross-cutting components.
6. Deciding on appropriate combination of architectural styles to use for interoperability of
the software elements. The architectural styles considered, include - client/server,
component-based architecture, domain driven design, layered architecture, message bus
architecture, n-tier/3-tier, object-oriented, and service-oriented architecture. Suitable
combinations of one or more of these architectural styles will be used depending upon the
manner in which the software elements inter-operate to deliver service and the quality
attributes required. Selection of architectural styles is also influenced by several other
constraints such as – existing infrastructure at the client, budget constraints, specific client
preferences, capabilities and experience of your developers, etc.
7. Selection of technology platform on which the software system will operate. The codes
will be written accordingly using appropriate programming languages.
8. Design of database, database tables and entity relationship diagrams.
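
To make activity 4 concrete, here is a minimal, hypothetical Python sketch of one cross-cutting component, instrumentation & logging, implemented as a decorator that can be applied to any software element without being owned by any of them (the function names are illustrative):

    import functools
    import logging

    logging.basicConfig(level=logging.INFO)

    def logged(func):
        """Cross-cutting concern: log every call to the wrapped element."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            logging.info("calling %s", func.__name__)
            return func(*args, **kwargs)
        return wrapper

    @logged
    def transfer_funds(src: str, dst: str, amount: float) -> str:
        return f"moved {amount} from {src} to {dst}"   # one element's own logic

    @logged
    def send_notification(user: str) -> str:
        return f"notified {user}"                      # a different element

    transfer_funds("A", "B", 25.0)
    send_notification("A")

The same wrapper cuts across both elements; authentication, caching or exception management can be layered on in the same style.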
Key design considerations
 The key quality attributes that are kept in view while working with the above activities,
include - performance efficiency, capacity to handle multiple users concurrently,
reliability, security, fault-tolerance, maintainability, manageability, scalability, and
adaptability to change in business processes and technology with time.
 Other considerations include - configurability, reusability of design components and
software elements, and ability to plug in additional functionalities without disruption
after the software is deployed and is live. There are economic considerations as well
which call for tradeoffs due to budget and technology constraints.
Simplification & organizing
 When designing the system, the architect aims to minimize complexity by separating
the design into three broad areas of concern, viz. business processing, data access,
and user interface.
 Within each area, the components & software elements are designed to focus on that
specific area and do not mix code from other areas of concern. For instance, user
interface processing components will not include code to directly access a data source,
instead it will make a service call to either business components or data access
components to retrieve data.
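
A minimal, hypothetical sketch of this separation (the class names are illustrative): the user-interface component never touches the data source; it makes a service call into the business layer, which in turn uses the data access layer.

    class CustomerRepository:                      # data access area
        _rows = {1: "Ada Lovelace"}

        def find_name(self, customer_id: int) -> str:
            return self._rows.get(customer_id, "unknown")

    class CustomerService:                         # business processing area
        def __init__(self, repo: CustomerRepository):
            self._repo = repo

        def greeting(self, customer_id: int) -> str:
            return f"Hello, {self._repo.find_name(customer_id)}!"

    class CustomerScreen:                          # user interface area
        def __init__(self, service: CustomerService):
            self._service = service                # no direct data access here

        def render(self, customer_id: int) -> None:
            print(self._service.greeting(customer_id))

    CustomerScreen(CustomerService(CustomerRepository())).render(1)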
Communicating your design

 Communicating your design is critical for architecture reviews with the client, as well as
to ensure it is implemented correctly by your developers.
 You must communicate your architectural design to all the stakeholders including the
development team, system administrators and operators, business owners, and other
interested parties.
Periodic review of the architecture
 It is important to revisit and review the architecture at major project milestones, as the
development progresses.
 This will help identify and fix architectural problems at the earlier stage and will help
prevent cost and time over-runs.
Documentation

 Detailed recording of the design generated during the software architecting process is
important.
 Documentation helps communicate the entire system to the developers and the client.
It also helps in future maintenance and enhancements.
 This is treated as the final technical specification for the project and all developers are
expected to code with strict adherence to the architecture, so that conceptual system
integrity is preserved.
Step 4: Build the System
 After the architecture of the software system has been built and documented, a team of
programmers/developers sits down to build the system.
 This stage is also known as the development stage and takes the longest time.
 The developers have at their disposal a well-documented design specification along
with instructions on the processes, standards and tools to be used.
 Now, they have to convert the design prepared by the architect, into a working system
that takes care of all the requirements addressed in the design document.
 The next stage, testing, also commences as the developers go along building the system.
Development activities entail –
 Building all the system elements and components. You must establish a standardized
coding style and naming convention for development. Check to see if the client has
established coding style and naming standards. If not, you should establish common
standards. This provides a consistent model that makes it easier for team members to
review code they did not write, leading to better maintainability.
 Integrating the elements into larger components.
 Preparing the technical environment/ infrastructure for the system.
 Building a Proof of Concept. This is a skeleton system that tests key elements of the
solution on a non-production simulation of the proposed operational environment. The
team walks users through the solution to get their feedback and re-confirm their
requirements before full-fledged coding begins.
 Developing testing tools and pre-populated test data. Unit test cases are created and
automated scripts are written to test each of the software elements individually as well as
test their inter-operability with other elements.
 Preparing code documents. As individual software elements and their integration
components are built, it is the developer’s responsibility to document the internal design
of the software fully. The documentation must cover detailed provisions, specific
coding instructions, and procedures for issue tracking. Everything that helps explain the
functionality of the software must be documented so that the code can in future be
understood by other programmers when they need to do maintenance, fine tuning, or
enhancements.
 The Integration documents will describe the assembly and interaction of the software
elements with each other and also the interaction of the software elements with the
hardware.
 Preparing Implementation plan. This document will describe how the software system
should be deployed in the production environment. Here you define all planned activities
to ensure successful implementation of the entire software system.
 Preparing Operation & Maintenance manual. This document will detail out necessary
procedures and instructions that will be required by system administrators and the
maintenance team to ensure smooth system operation. Various operational and
maintenance procedures must be described, and standards used must be specified.
 Preparing help documents and training manuals. This document will outline technical and
user training needs. It must contain all necessary help instructions and explanations of
implemented business rules and processes so that end users fully understand the system's
capabilities, configurability, and adaptability to change. Each user interface must be
accompanied by a context-based help page which explains the fields and data validation
requirements. Essentially, anything that helps users use the system to its full capability
should be described in this documentation.
Step 5: Test the System
 As we have mentioned earlier, development and testing go hand in hand. The testing
process is designed to identify and address potential issues prior to deployment.
 During the development stage, each software element is tested independently. Then,
when the software elements are integrated together, more testing is done to test the
integration fully on various scenarios. Automated scripts are written to perform testing.
 After the entire system is ready, further testing is done by users, system administrators and
maintenance team to evaluate the system and identify any remaining issues they need to
address before releasing for real life use.

 Test Analysis Reports are prepared that present a description of the unit tests and the
results mapped to the system requirements.

 Testing helps identify system capabilities and deficiencies and hence proper selection of
test data and real-life use cases must be done.
The entire range of testing covers -
 Code component testing
 Database testing
 Infrastructure testing
 Security testing
 Integration testing
 User acceptance and usability testing
 Stress, capacity, and performance testing. This will identify any issues with the system’s
architecture and design itself.
Step 6: Deploy for Production
 After the software system has been tested thoroughly and in its entirety, it is considered
ready to be launched and is approved for release by a quality assurance team.
Deployment is the final stage of releasing an application for real life use. If you reach
this far in your project, you have succeeded.
 However, there are still things that can go wrong. You need to plan for deployment and
you can use the deployment checklist that would have been prepared by the
development team while documenting the Implementation plan.
Step 7: Maintain the System
 It is normal practice that clients tend to pass on this activity to the same company that
did the development work. This is a continuous process and entails responding to user
problems and resolving them quickly. You would be required to have a small dedicated
or semi-dedicated team (depending upon the size of the software system), who would
engage in the activity of tweaking and fine tuning the codes to accommodate day-to-
day arising needs. Normally, a well-designed software system should require minimal
tweaking. Yet, in reality some minor tweaking may be required.
 Maintaining and enhancing software to cope with newly discovered faults or
requirements can take substantial time and effort, as missed requirements may force
redesign of some modules of the software. This can be kept to the minimum by proper
execution of the first 6 stages of the development process.
 If you are also entrusted with the task of maintaining the server(s) and hardware
infrastructure, server & network management would become an important activity.
You will have to ensure that the server infrastructure on which your software system is
running stays up all the time. You may have signed an SLA (service level
agreement) with the client committing to a certain uptime, say 99.9%. Ensuring this
may require a small team to be constantly monitoring the server and other associated
hardware.
Emergence of software engineering as a discipline:
(Demonstrate the emergence of software engineering as a discipline)

Software engineering discipline is the result of advancement in the field of technology. Various
innovations and technologies that led to the emergence of software engineering discipline are:
Early Computer Programming

 As we know that in the early 1950s, computers were slow and expensive. Though the
programs at that time were very small in size, these computers took considerable time to
process them.
 They relied on assembly language which was specific to computer architecture. Thus,
developing a program required lot of effort.
 Every programmer used his own style to develop the programs.
High Level Language Programming

 With the introduction of semiconductor technology, the computers became smaller,


faster, cheaper, and reliable than their predecessors.
 One of the major developments was the progression from assembly language to high-level
languages.
 Early high-level programming languages such as COBOL and FORTRAN came into
existence.
 As a result, the programming became easier and thus, increased the productivity of the
programmers.
 However, still the programs were limited in size and the programmers developed
programs using their own style and experience.
Control Flow Based Design

 With the advent of powerful machines and high-level languages, the usage of computers
grew rapidly. In addition, the nature of programs also changed from simple to complex.
 The increased size and complexity could not be managed using individual styles.
 It was analyzed that clarity of control flow (the sequence in which the program’s
instructions are executed) is of great importance.
 To help the programmer to design programs having good control flow
structure, flowcharting technique was developed.
 In flowcharting technique, the algorithm is represented using flowcharts. A flowchart is
a graphical representation that depicts the sequence of operations to be carried out to solve
a given problem.
 Note that having more GOTO constructs in the flowchart makes the control flow messy,
which makes it difficult to understand and debug.
 In order to provide clarity of control flow, the use of GOTO constructs in flowcharts
should be avoided and structured constructs-decision, sequence, and loop-should be
used to develop structured flowcharts.
 The decision structures are used for conditional execution of statements (for example, if
statement). The sequence structures are used for the sequentially executed statements.
 The loop structures are used for performing some repetitive tasks in the program. The use
of structured constructs formed the basis of the structured programming methodology.
 Structured programming became a powerful tool that allowed programmers to write
moderately complex programs easily.
 It forces a logical structure in the program to be written in an efficient and understandable
manner.
 The purpose of structured programming is to make the software code easy to modify when
required.
 Some languages such as Ada, Pascal, and dBase are designed with features that implement
the logical program structure in the software code.
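The following minimal Python sketch (illustrative only; any structured language would do)
shows the three structured constructs named above:

    # Sequence: statements executed one after the other.
    total = 0
    count = 0

    # Loop: a repetitive task (summing the numbers 1 to 10).
    for n in range(1, 11):
        total += n
        count += 1

    # Decision: conditional execution of statements.
    if count > 0:
        average = total / count
    else:
        average = 0

    print(total, average)  # prints 55 5.5

No GOTO is needed; the control flow can be read from top to bottom, which is exactly the
clarity that structured programming aims for.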
Data-Flow Oriented Design

 With the introduction of Very Large Scale Integrated (VLSI) circuits, computers became
more powerful and faster.
 As a result, various significant developments like networking and GUIs came into being.
Clearly, the complexity of software could no longer be dealt with using control-flow-based
design.
 Thus, a new technique, namely, data-flow-oriented technique came into existence.
 In this technique, the flow of data through business functions or processes is represented
using Data-flow Diagram (DFD).
 IEEE defines a data-flow diagram (also known as bubble chart and work-flow
diagram) as ‘a diagram that depicts data sources, data sinks, data storage, and processes
performed on data as nodes, and logical flow of data as links between the nodes.’
Object Oriented Design

 Object-oriented design technique has revolutionized the process of software development.


 It not only includes the best features of structured programming but also some new and
powerful features such as encapsulation, abstraction, inheritance, and polymorphism.
 These new features have tremendously helped in the development of well-designed and
high-quality software.
 Object-oriented techniques are widely used these days as they allow reusability of the
code. They lead to faster software development and high-quality programs.
 Moreover, they are easier to adapt and scale, that is, large systems can be created by
assembling reusable subsystems, as the small sketch below illustrates.
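A small, hypothetical Python sketch of these features (the class names are illustrative, not
taken from any particular system):

    class Account:                       # abstraction: models only what callers need
        def __init__(self, balance):
            self._balance = balance      # encapsulation: state kept behind methods

        def deposit(self, amount):
            self._balance += amount

        def interest(self):
            return 0.0

    class SavingsAccount(Account):       # inheritance: reuses the Account code
        def interest(self):              # polymorphism: overrides base behaviour
            return self._balance * 0.04

    accounts = [Account(100), SavingsAccount(100)]
    print([a.interest() for a in accounts])  # prints [0.0, 4.0]

The subclass reuses Account unchanged, which is the kind of reusability and scaling described
in the points above.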

UNIT II (Short Answers)


List the phases in SDLC
Refer from Long Answers Section

What do you mean by feasibility study


 Feasibility is defined as the practical extent to which a project can be performed
successfully.
 To evaluate feasibility, a feasibility study is performed, which determines whether the
solution considered to accomplish the requirements is practical and workable in the
software.
 Information such as resource availability, cost estimation for software development,
benefits of the software to the organization after it is developed and cost to be incurred on
its maintenance are considered during the feasibility study.
 The objective of the feasibility study is to establish the reasons for developing the software
that is acceptable to users, adaptable to change and conformable to established standards.
Various other objectives of feasibility study are listed below.
• To analyze whether the software will meet organizational requirements.
• To determine whether the software can be implemented using the current technology and within
the specified budget and schedule.
• To determine whether the software can be integrated with other existing software.

Types of Feasibility

Various types of feasibility that are commonly considered include technical feasibility,
operational feasibility, and economic feasibility.

Technical feasibility assesses the current resources (such as hardware and software) and
technology, which are required to accomplish user requirements in the software within the
allocated time and budget. For this, the software development team ascertains whether the current
resources and technology can be upgraded or added in the software to accomplish specified user
requirements. Technical feasibility also performs the following tasks.
• Analyzes the technical skills and capabilities of the software development team members.
• Determines whether the relevant technology is stable and established.
• Ascertains that the technology chosen for software development has a large number of users so
that they can be consulted when problems arise or improvements are required.
Operational feasibility assesses the extent to which the required software performs a series of
steps to solve business problems and user requirements. This feasibility is dependent on human
resources (software development team) and involves visualizing whether the software will
operate after it is developed and be operative once it is installed. Operational feasibility also
performs the following tasks.
• Determines whether the problems anticipated in user requirements are of high priority.
• Determines whether the solution suggested by the software development team is acceptable.
• Analyzes whether users will adapt to a new software.
• Determines whether the organization is satisfied by the alternative solutions proposed by the
software development team.
Economic feasibility determines whether the required software is capable of generating financial
gains for an organization. It involves the cost incurred on the software development team,
estimated cost of hardware and software, cost of performing feasibility study, and so on. For this,
it is essential to consider expenses made on purchases (such as hardware purchase) and activities
required to carry out software development. In addition, it is necessary to consider the benefits
that can be achieved by developing the software. Software is said to be economically feasible if
it focuses on the issues listed below (a simple payback sketch follows the list).
• Cost incurred on software development to produce long-term gains for an organization.
• Cost required to conduct full software investigation (such as requirements elicitation and
requirements analysis).
• Cost of hardware, software, development team, and training.
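As a hedged illustration of weighing these costs against benefits, here is a minimal
payback-period sketch in Python (all figures are hypothetical):

    # Hypothetical figures for an economic feasibility check.
    development_cost = 500_000   # team, hardware, software, and training
    annual_benefit   = 200_000   # estimated yearly gain to the organization
    annual_upkeep    =  50_000   # estimated maintenance cost per year

    net_annual_gain = annual_benefit - annual_upkeep
    payback_years = development_cost / net_annual_gain
    print(f"Payback period: {payback_years:.1f} years")  # 3.3 years

If the payback period exceeds the planned operational life of the software, the project is
unlikely to be economically feasible.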
Feasibility Study Process
Feasibility study comprises the following steps.

• Information assessment: Identifies information about whether the system helps in achieving
the objectives of the organization. It also verifies that the system can be implemented using new
technology and within the budget and whether the system can be integrated with the existing
system.
• Information collection: Specifies the sources from where information about software can be
obtained. Generally, these sources include users (who will operate the software), organization
(where the software will be used), and the software development team (which understands user
requirements and knows how to fulfill them in software).
• Report writing: Uses a feasibility report, which is the conclusion of the feasibility study by the
software development team. It includes the recommendations whether the software development
should continue. This report may also include information about changes in the software scope,
budget, and schedule and suggestions of any requirements in the system.
• General information: Describes the purpose and scope of feasibility study. It also describes
system overview, project references, acronyms and abbreviations, and points of contact to be
used. System overview provides description about the name of the organization responsible for
the software development, system name or title, system category, operational status, and so
on. Project references provide a list of the references used to prepare this document such as
documents relating to the project or previously developed documents that are related to the
project. Acronyms and abbreviations provide a list of the terms that are used in this document
along with their meanings. Points of contact provide a list of points of organizational contact
with users for information and coordination. For example, users require assistance to solve
problems (such as troubleshooting) and collect information such as contact number, e-mail
address, and so on.
• Management summary: Provides the following information.
• Environment: Identifies the individuals responsible for software development. It provides
information about input and output requirements, processing requirements of the software and
the interaction of the software with other software. It also identifies system security requirements
and the system’s processing requirements
• Current functional procedures: Describes the current functional procedures of the existing
system, whether automated or manual. It also includes the data-flow of the current system and
the number of team members required to operate and maintain the software.
• Functional objective: Provides information about functions of the system such as new services,
increased capacity, and so on.
• Performance objective: Provides information about performance objectives such as reduced
staff and equipment costs, increased processing speeds of software, and improved controls.
• Assumptions and constraints: Provides information about assumptions and constraints such
as operational life of the proposed software, financial constraints, changing hardware, software
and operating environment, and availability of information and sources.
• Methodology: Describes the methods that are applied to evaluate the proposed software in order
to reach a feasible alternative. These methods include survey, modeling, benchmarking, etc.
• Evaluation criteria: Identifies criteria such as cost, priority, development time, and ease of
system use, which are applicable for the development process to determine the most suitable
system option.
• Recommendation: Describes a recommendation for the proposed system. This includes the
delays and acceptable risks.
• Proposed software: Describes the overall concept of the system as well as the procedure to be
used to meet user requirements. In addition, it provides information about improvements, time
and resource costs, and impacts. Improvements are performed to enhance the functionality and
performance of the existing software. Time and resource costs include the costs associated with
software development from its requirements to its maintenance and staff training. Impacts
describe the possibility of future happenings and include various types of impacts as listed below.
• Equipment impacts: Determine new equipment requirements and changes to be made in the
currently available equipment requirements.
• Software impacts: Specify any additions or modifications required in the existing software and
supporting software to adapt to the proposed software.
• Organizational impacts: Describe any changes in organization, staff and skills requirement.
• Operational impacts: Describe effects on operations such as user-operating procedures, data
processing, data entry procedures, and so on.
• Developmental impacts: Specify developmental impacts such as resources required to develop
databases, resources required to develop and test the software, and specific activities to be
performed by users during software development.
• Security impacts: Describe security factors that may influence the development, design, and
continued operation of the proposed software.
• Alternative systems: Provide description of alternative systems, which are considered in a
feasibility study. This also describes the reasons for choosing a particular alternative system to
develop the proposed software and the reason for rejecting alternative systems.
List the techniques for estimation of schedule and effort
Refer from Long Answers Section

List the software cost estimation models


Refer from Long Answers Section

What do you mean by software engineering economics


Refer from Long Answers Section

List the techniques of software project control and reporting


Refer from Long Answers Section

How can you measure the software size


Refer from Long Answers Section

Define a risk
Refer from Long Answers Section

What is configuration management


Refer from Long Answers Section

(Long Answers)

Explain Waterfall Model


Explain in detail about Spiral Model
Explain the Evolutionary and Incremental Model.
What are the necessities of Life cycle model? Elaborate on the various issues
of Software lifecycle
Demonstrate the feasibility study
Explain the techniques for estimation of schedule and effort
Illustrate the software cost estimation models
Explain the techniques of software project control and reporting
Explain about risk and its mitigation
Explain about configuration management
Demonstrate COCOMO with example
Software Project Management: Basic concepts of life cycle models – different models and
milestones:
Explain in detail about Software Development Life Cycle

 A software life cycle model is a pictorial and diagrammatic representation of the software
life cycle.
 A life cycle model represents all the methods required to make a software product transit
through its life cycle stages.
 It also captures the structure in which these methods are to be undertaken.
 Life cycle model maps the various activities performed on a software product from its
inception to retirement.
 Different life cycle models may map the necessary development activities to phases in
different ways.
 Thus, no matter which life cycle model is followed, the essential activities are contained
in all life cycle models, though the activities may be carried out in different orders in
different life cycle models.
 During any life cycle stage, more than one activity may also be carried out.

Need of SDLC

 The development team must determine a suitable life cycle model for a particular project
and then adhere to it.
 Without using an exact life cycle model, the development of a software product would not
be in a systematic and disciplined manner.
 When a team is developing a software product, there must be a clear understanding among
team members about when and what to do. Otherwise, it would lead to chaos and
project failure.
 A software life cycle model describes entry and exit criteria for each phase.
 A phase can begin only if its stage-entry criteria have been fulfilled.
 Without a software life cycle model, the entry and exit criteria for a stage cannot be
recognized.
 Without software life cycle models, it becomes tough for software project managers to
monitor the progress of the project.

SDLC Cycle

SDLC Cycle represents the process of developing software. The stages of SDLC are as follows:

Stage 1: Planning and requirement analysis

 The senior members of the team perform it with inputs from all the stakeholders and
domain experts in the industry.
 Planning for the quality assurance requirements and identifications of the risks associated
with the projects is also done at this stage.
 Business analyst and Project organizer set up a meeting with the client to gather all the data
like what the customer wants to build, who will be the end user, what is the objective of
the product. Before creating a product, a core understanding or knowledge of the product
is very necessary.

For Example, A client wants to have an application which concerns money transactions. In this
method, the requirement has to be precise like what kind of operations will be done, how it will be
done, in which currency it will be done, etc.

 Once the requirements are gathered, an analysis is performed to audit the feasibility of
developing the product.
 In case of any ambiguity, a signal is set up for further discussion.
 Once the requirement is understood, the SRS (Software Requirement Specification)
document is created.
 The developers should thoroughly follow this document, and it should also be reviewed
by the customer for future reference.
Stage 2: Defining Requirements

 Once the requirement analysis is done, the next stage is to clearly represent and document
the software requirements and get them accepted by the project stakeholders.
 This is accomplished through "SRS"- Software Requirement Specification document
which contains all the product requirements to be constructed and developed during the
project life cycle.

Stage 3: Designing the Software

 The next phase is to bring together all the knowledge of requirements and analysis in the
design of the software project.
 This phase is the product of the previous two, using the inputs from the customer and the
requirement gathering.

Stage 4: Developing the project

 In this phase of SDLC, the actual development begins, and the program is built.
 The implementation of design begins concerning writing code.
 Developers have to follow the coding guidelines described by their management and
programming tools like compilers, interpreters, debuggers, etc. are used to develop and
implement the code.

Stage 5: Testing

 After the code is generated, it is tested against the requirements to make sure that the
products are solving the needs addressed and gathered during the requirements stage.
 During this stage, unit testing, integration testing, system testing, acceptance testing are
done.

Stage 6: Deployment

 Once the software is certified, and no bugs or errors are reported, then it is deployed.
 Then, based on the assessment, the software may be released as it is or with suggested
enhancements in the targeted segment.
 After the software is deployed, then its maintenance begins.

Stage 7: Maintenance

 Once the client starts using the developed system, real issues come up and need to be
solved from time to time.
 This procedure of taking care of the developed product is known as maintenance.

SDLC Models
 Software Development Life Cycle (SDLC) is a conceptual model used in project management
that defines the stages included in an information system development project, from an
initial feasibility study to the maintenance of the completed application.
 There are different software development life cycle models, specified and designed, which are
followed during the software development phase.
 These models are also called "Software Development Process Models." Each process
model follows a series of phases unique to its type to ensure success in the process of
software development.

Some important SDLC models:

Waterfall Model

 The waterfall is a universally accepted SDLC model. In this method, the whole process of
software development is divided into various phases.
 The waterfall model is a sequential software development model in which development is
seen as flowing steadily downwards (like a waterfall) through the phases of requirements
analysis, design, implementation, testing (validation), integration, and maintenance.
 Linear ordering of activities has some significant consequences.
 First, to identify the end of a phase and the beginning of the next, some certification
techniques have to be employed at the end of each step.
 Verification and validation usually serve this purpose, ensuring that the output of a
stage is consistent with its input (which is the output of the previous stage), and that the
output of the stage is consistent with the overall requirements of the system.

RAD Model
 RAD or Rapid Application Development is an adaptation of the waterfall model; it
targets developing software in a short period.
 The RAD model is based on the concept that a better system can be developed in lesser
time by using focus groups to gather system requirements.

o Business Modeling
o Data Modeling
o Process Modeling
o Application Generation
o Testing and Turnover

Spiral Model

 The spiral model is a risk-driven process model.


 This SDLC model helps the group to adopt elements of one or more process models, like
the waterfall, incremental, etc.
 The spiral technique is a combination of rapid prototyping and concurrency in design and
development activities.
 Each cycle in the spiral begins with the identification of objectives for that cycle, the
different alternatives that are possible for achieving the goals, and the constraints that exist.
This is the first quadrant of the cycle (upper-left quadrant).
 The next step in the cycle is to evaluate these different alternatives based on the objectives
and constraints. The focus of evaluation in this step is based on the risk perception for the
project.
 The next step is to develop strategies that solve uncertainties and risks. This step may
involve activities such as benchmarking, simulation, and prototyping.

V-Model

 In this type of SDLC model, the testing and development steps are planned in parallel.
 So, there are verification phases on one side and validation phases on the other.
The two sides are joined by the coding phase.

Incremental Model

 The incremental model is not a separate model. It is essentially a series of waterfall cycles.
 The requirements are divided into groups at the start of the project.
 For each group, the SDLC model is followed to develop software.
 The SDLC process is repeated, with each release adding more functionality until all
requirements are met.
 In this method, each cycle acts as the maintenance phase for the previous software release.
 Modification to the incremental model allows development cycles to overlap.
 After that subsequent cycle may begin before the previous cycle is complete.

Agile Model

 Agile methodology is a practice which promotes continuous interaction of development and
testing during the SDLC process of any project.
 In the Agile method, the entire project is divided into small incremental builds.
 All of these builds are provided in iterations, and each iteration lasts from one to three
weeks.
 Any agile software phase is characterized in a manner that addresses several key
assumptions about the bulk of software projects:

1. It is difficult to predict in advance which software requirements will persist and which will
change. It is equally difficult to predict how user priorities will change as the project
proceeds.
2. For many types of software, design and development are interleaved. That is, both activities
should be performed in tandem so that design models are proven as they are created. It is
difficult to predict how much design is necessary before construction is used to prove the
design.
3. Analysis, design, development, and testing are not as predictable (from a planning point of
view) as we might like.

Iterative Model

 It is a particular implementation of a software development life cycle that focuses on an
initial, simplified implementation, which then progressively gains more complexity and a
broader feature set until the final system is complete.
 In short, iterative development is a way of breaking down the software development of a
large application into smaller pieces.

Big bang model

 The Big Bang model focuses all available resources on software development and coding,
with little or no planning.
 The requirements are understood and implemented as they come.
 This model works best for small projects with a small development team working
together.
 It is also useful for academic software development projects.
 It is an ideal model where requirements are either unknown or final release date is not
given.
Prototype Model

 The prototyping model starts with the requirements gathering.


 The developer and the user meet and define the purpose of the software, identify the needs,
etc.
 A 'quick design' is then created. This design focuses on those aspects of the software that
will be visible to the user. It then leads to the development of a prototype. The customer
then checks the prototype, and any modifications or changes that are needed are made to
the prototype.
 Looping takes place in this step, and better versions of the prototype are created.
 These are continuously shown to the user so that any new changes can be updated in the
prototype.
 This process continues until the customer is satisfied with the system.
 Once a user is satisfied, the prototype is converted to the actual system with all
considerations for quality and security.

Waterfall model

 Winston Royce introduced the Waterfall Model in 1970.


 This model has five phases: requirements analysis and specification; design;
implementation and unit testing; integration and system testing; and operation and
maintenance.
 The steps always follow in this order and do not overlap.
 The developer must complete every phase before the next phase begins.
 This model is named "Waterfall Model", because its diagrammatic representation
resembles a cascade of waterfalls.

1. Requirements analysis and specification phase:

 The aim of this phase is to understand the exact requirements of the customer and to
document them properly.
 Both the customer and the software developer work together so as to document all the
functions, performance, and interfacing requirement of the software.
 It describes the "what" of the system to be produced and not the "how".
 In this phase, a large document called the Software Requirement Specification
(SRS) document is created, which contains a detailed description of what the system will
do in common language.
2. Design Phase:

 This phase aims to transform the requirements gathered in the SRS into a suitable form
which permits further coding in a programming language.
 It defines the overall software architecture together with high level and detailed design. All
this work is documented as a Software Design Document (SDD).

3. Implementation and unit testing:

 During this phase, design is implemented. If the SDD is complete, the implementation or
coding phase proceeds smoothly, because all the information needed by software
developers is contained in the SDD.
 During testing, the code is thoroughly examined and modified. Small modules are tested
in isolation initially.
 After that these modules are tested by writing some overhead code to check the interaction
between these modules and the flow of intermediate output.

4. Integration and System Testing:

 This phase is highly crucial as the quality of the end product is determined by the
effectiveness of the testing carried out.
 The better output will lead to satisfied customers, lower maintenance costs, and accurate
results. Unit testing determines the efficiency of individual modules.
 However, in this phase, the modules are tested for their interactions with each other and
with the system.

5. Operation and maintenance phase:


 Maintenance comprises the tasks performed to support the software once it has been
delivered to the customer, installed, and made operational.

When to use SDLC Waterfall Model

o When the requirements are constant and not changed regularly.


o A project is short
o The situation is calm
o Where the tools and technology used is consistent and is not changing
o When resources are well prepared and are available to use.

Advantages of Waterfall model

o This model is simple to implement; also, the number of resources required for it is
minimal.
o The requirements are simple and explicitly declared; they remain unchanged during the
entire project development.
o The start and end points for each phase are fixed, which makes it easy to track progress.
o The release date for the complete product, as well as its final cost, can be determined before
development.
o It gives ease of control and clarity to the customer due to a strict reporting system.

Disadvantages of Waterfall model

o In this model, the risk factor is higher, so it is not suitable for large and complex
projects.
o This model cannot accept the changes in requirements during development.
o It is difficult to go back to an earlier phase. For example, once the application has moved
to the coding phase and a requirement changes, it becomes tough to go back and
accommodate the change.
o Since testing is done at a later stage, challenges and risks cannot be identified in the
earlier phases, which makes a risk reduction strategy difficult to prepare.
RAD (Rapid Application Development) Model

 RAD is a linear sequential software development process model that emphasizes a concise
development cycle using an element-based construction approach.
 If the requirements are well understood and described, and the project scope is a constraint,
the RAD process enables a development team to create a fully functional system within a
concise time period.
 RAD (Rapid Application Development) is a concept that products can be developed faster
and of higher quality through:

o Gathering requirements using workshops or focus groups


o Prototyping and early, reiterative user testing of designs
o The re-use of software components
o A rigidly paced schedule that defers design improvements to the next product version
o Less formality in reviews and other team communication

The various phases of RAD are as follows:


1.Business Modelling:

 The information flow among business functions is defined by answering questions like
what data drives the business process, what data is generated, who generates it, where does
the information go, who processes it, and so on.

2. Data Modelling:

 The data collected from business modeling is refined into a set of data objects (entities)
that are needed to support the business.
 The attributes (character of each entity) are identified, and the relation between these data
objects (entities) is defined.

3. Process Modelling:

 The information objects defined in the data modeling phase are transformed to achieve the
data flow necessary to implement a business function.
 Processing descriptions are created for adding, modifying, deleting, or retrieving a data
object.

4. Application Generation:

 Automated tools are used to facilitate the construction of the software; they even use
fourth-generation language (4GL) techniques.

5. Testing & Turnover:

 Many of the programming components have already been tested, since RAD emphasizes
reuse.
 This reduces the overall testing time. But the new part must be tested, and all interfaces
must be fully exercised.

When to use RAD Model

o When the system needs to be created as a project that can be modularized within a short
span of time (2-3 months).
o When the requirements are well-known.
o When the technical risk is limited.
o It should be used only if the budget allows the use of automatic code generating tools.

Advantage of RAD Model


o This model is flexible, and changes are easily adopted.
o Each phase in RAD brings highest priority functionality to the customer.
o It reduced development time.
o It increases the reusability of features.

Disadvantage of RAD Model

o It requires highly skilled designers.


o Not all applications are compatible with RAD.
o The RAD model cannot be used for smaller projects.
o It is not suitable where technical risk is high.
o It requires strong user involvement.

Spiral Model

 The spiral model, initially proposed by Boehm, is an evolutionary software process model
that couples the iterative feature of prototyping with the controlled and systematic aspects
of the linear sequential model.
 It implements the potential for rapid development of new versions of the software. Using
the spiral model, the software is developed in a series of incremental releases.
 During the early iterations, the additional release may be a paper model or prototype.
 During later iterations, more and more complete versions of the engineered system are
produced.

The Spiral Model is shown in fig:


Each cycle in the spiral is divided into four parts:

Objective setting:

 Each cycle in the spiral starts with the identification of the purpose for that cycle, the various
alternatives that are possible for achieving the targets, and the constraints that exist.

Risk Assessment and reduction:

 The next phase in the cycle is to evaluate these various alternatives based on the goals and
constraints.
 The focus of evaluation in this stage is placed on the risk perception for the project.

Development and validation:

 The next phase is to develop strategies that resolve uncertainties and risks.
 This process may include activities such as benchmarking, simulation, and prototyping.

Planning:

 Finally, the next step is planned.


 The project is reviewed, and a choice made whether to continue with a further period of
the spiral.
 If it is determined to keep, plans are drawn up for the next step of the project.
 The development phase depends on the remaining risks.
 For example, if performance or user-interface risks are treated more essential than the
program development risks, the next phase may be an evolutionary development that
includes developing a more detailed prototype for solving the risks.
 The risk-driven feature of the spiral model allows it to accommodate any mixture of a
specification-oriented, prototype-oriented, simulation-oriented, or another type of
approach.
 An essential element of the model is that each period of the spiral is completed by a review
that includes all the products developed during that cycle, including plans for the next
cycle.
 The spiral model works for development as well as enhancement projects.

When to use Spiral Model

o When deliveries are required to be frequent.


o When the project is large
o When requirements are unclear and complex
o When changes may require at any time
o Large and high budget projects

Advantages

o High amount of risk analysis


o Useful for large and mission-critical projects.

Disadvantages

o Can be a costly model to use.


o Risk analysis needed highly particular expertise
o Doesn't work well for smaller projects.

V-Model

 V-Model also referred to as the Verification and Validation Model.


 In this model, each phase of SDLC must be completed before the next phase starts.
 It follows a sequential design process, the same as the waterfall model.
 Testing of the product is planned in parallel with a corresponding stage of development.
Verification:

 It involves a static analysis method (review) done without executing code.


 It is the process of evaluating the product development process to determine whether the
specified requirements are met.

Validation:

 It involves dynamic analysis methods (functional, non-functional); testing is done by
executing the code.
 Validation is the process of evaluating the software after the completion of the
development process to determine whether it meets the customer's expectations and
requirements.
 The V-Model contains Verification phases on one side and Validation phases on the other
side.
 The Verification and Validation phases are joined by the coding phase in a V shape. Thus,
it is known as the V-Model.

There are the various phases of Verification Phase of V-model:


1. Business requirement analysis: This is the first step, where product requirements are
understood from the customer's side. This phase involves detailed communication to
understand the customer's expectations and exact requirements.
2. System Design: In this stage system engineers analyze and interpret the business of the
proposed system by studying the user requirements document.
3. Architecture Design: The baseline for selecting the architecture is that it should cover all
that the design typically consists of: the list of modules, brief functionality of each module, their
interface relationships, dependencies, database tables, architecture diagrams, technology
details, etc. The integration test plan is prepared in this phase.
4. Module Design: In the module design phase, the system is broken down into small modules.
The detailed design of the modules is specified, which is known as Low-Level Design.
5. Coding Phase: After designing, the coding phase is started. Based on the requirements, a
suitable programming language is decided. There are some guidelines and standards for
coding. Before checking in the repository, the final build is optimized for better
performance, and the code goes through many code reviews to check the performance.

There are the various phases of Validation Phase of V-model:

1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module
design phase. These UTPs are executed to eliminate errors at code level or unit level. A
unit is the smallest entity which can independently exist, e.g., a program module. Unit
testing verifies that the smallest entity can function correctly when isolated from the rest
of the codes/ units.
2. Integration Testing: Integration Test Plans are developed during the Architectural Design
Phase. These tests verify that groups created and tested independently can coexist and
communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase. Unlike
Unit and Integration Test Plans, System Test Plans are composed by the client's business
team. System testing ensures that the expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis
part. It includes testing the software product in the user environment. Acceptance tests reveal
compatibility problems with the other systems available within the user environment. They
also uncover non-functional problems, such as load and performance defects, in the real user
environment.
When to use V-Model

o When the requirement is well defined and not ambiguous.


o The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
o The V-shaped model should be chosen when ample technical resources are available with
essential technical expertise.

Advantage (Pros) of V-Model:

1. Easy to Understand.
2. Testing Methods like planning, test designing happens well before coding.
3. This saves a lot of time. Hence a higher chance of success over the waterfall model.
4. Avoids the downward flow of the defects.
5. Works well for small projects where requirements are easily understood.

Disadvantage (Cons) of V-Model:

1. Very rigid and least flexible.


2. Not good for complex projects.
3. Software is developed during the implementation stage, so no early prototypes of the
software are produced.
4. If any changes happen midway, then the test documents, along with the requirement
documents, have to be updated.

Incremental Model

 Incremental Model is a process of software development where requirements are divided
into multiple standalone modules of the software development cycle.
 In this model, each module goes through the requirements, design, implementation and
testing phases.
 Every subsequent release of the module adds function to the previous release.
 The process continues until the complete system achieved.
The various phases of incremental model are as follows:

1. Requirement analysis:

 In the first phase of the incremental model, product analysis experts identify the
requirements.
 The system's functional requirements are understood by the requirement analysis team.
 This phase plays a crucial role in developing software under the incremental model.

2. Design & Development:

 In this phase of the Incremental model of SDLC, the design of the system functionality and
the development method are completed successfully.
 Whenever new functionality is added, the incremental model goes through the design and
development phase again.

3. Testing:

In the incremental model, the testing phase checks the performance of each existing function
as well as the additional functionality.
In the testing phase, various methods are used to test the behavior of each task.

4. Implementation:

 The implementation phase enables the coding of the development system.
 It involves the final coding of the design produced in the designing and development
phase, and testing of the functionality in the testing phase.
 After the completion of this phase, the working of the product is enhanced and
upgraded up to the final system product.

When we use the Incremental Model

o When the major requirements are clearly known up front.


o A project has a lengthy development schedule.
o When the software team is not very well skilled or trained.
o When the customer demands a quick release of the product.
o You can develop prioritized requirements first.

Advantage of Incremental Model

o Errors are easy to be recognized.


o Easier to test and debug
o More flexible.
o Risk is simple to manage because it is handled during each iteration.
o The Client gets important functionality early.

Disadvantage of Incremental Model

o Need for good planning


o Total Cost is high.
o Well defined module interfaces are needed.

Agile Model

 The meaning of Agile is swift or versatile. "Agile process model" refers to a software
development approach based on iterative development.
 Agile methods break tasks into smaller iterations, or parts, and do not directly involve
long-term planning.
 The project scope and requirements are laid down at the beginning of the development
process.
 Plans regarding the number of iterations, the duration and the scope of each iteration are
clearly defined in advance.
 Each iteration is considered as a short time "frame" in the Agile process model, which
typically lasts from one to four weeks.
 The division of the entire project into smaller parts helps to minimize the project risk and
to reduce the overall project delivery time requirements.
 Each iteration involves a team working through a full software development life cycle
including planning, requirements analysis, design, coding, and testing before a working
product is demonstrated to the client.

Phases of Agile Model:

Following are the phases in the Agile model are as follows:

1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You should explain
business opportunities and plan the time and effort needed to build the project. Based on this
information, you can evaluate technical and economic feasibility.

2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to show
the work of new features and show how it will apply to your existing system.

3. Construction/ iteration: When the team defines the requirements, the work begins. Designers
and developers start working on their project, which aims to deploy a working product. The
product will undergo various stages of improvement, so it includes simple, minimal functionality.

4. Testing: In this phase, the Quality Assurance team examines the product's performance and
looks for the bug.

5. Deployment: In this phase, the team issues a product for the user's work environment.

6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.

Agile Testing Methods:

o Scrum
o Crystal
o Dynamic Software Development Method (DSDM)
o Feature Driven Development (FDD)
o Lean Software Development
o eXtreme Programming (XP)

Scrum

SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions. There are three roles in it, and their responsibilities are:

o Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes
obstacles in the process.
o Product owner: The product owner creates the product backlog, prioritizes the backlog, and
is responsible for the delivery of functionality at each iteration.
o Scrum Team: The team manages its work and organizes the work to complete the sprint
or cycle.
eXtreme Programming(XP)

This type of methodology is used when customers are constantly changing demands or
requirements, or when they are not sure about the system's performance.

Crystal:

There are three concepts of this method-

1. Chartering: Multi activities are involved in this phase such as making a development team,
performing feasibility analysis, developing plans, etc.
2. Cyclic delivery: Under this, two further cycles are carried out:
o Team updates the release plan.
o Integrated product delivers to the users.
3. Wrap up: According to the user environment, this phase performs deployment and
post-deployment activities.

Dynamic Software Development Method (DSDM):

DSDM is a rapid application development strategy for software development and gives an agile
project distribution structure. The essential features of DSDM are that users must be actively
connected, and teams have been given the right to make decisions. The techniques used in DSDM
are:

1. Time Boxing
2. MoSCoW Rules
3. Prototyping

The DSDM project contains seven stages:

1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Feature Driven Development(FDD):

This method focuses on "Designing and Building" features. In contrast to other agile methods,
FDD describes small steps of the work that should be carried out separately for each feature.

Lean Software Development:

Lean software development methodology follows the principle of "just-in-time production." The
lean method aims at increasing the speed of software development and reducing costs. Lean
development can be summarized by the following seven principles.

1. Eliminating Waste
2. Amplifying learning
3. Defer commitment (deciding as late as possible)
4. Early delivery
5. Empowering the team
6. Building Integrity
7. Optimize the whole

When to use the Agile Model

o When frequent changes are required.


o When a highly qualified and experienced team is available.
o When a customer is ready to have a meeting with a software team all the time.
o When project size is small.

Advantage(Pros) of Agile Method:

1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design and fulfils the business requirement.
4. Anytime changes are acceptable.
5. It reduces total development time.

Disadvantages(Cons) of Agile Model:


1. Due to the shortage of formal documents, it creates confusion and crucial decisions taken
throughout various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project is completed and the developers
are allotted to another project, maintenance of the finished product can become a difficulty.

Iterative Model

 In this Model, you can start with some of the software specifications and develop the first
version of the software.
 After the first version if there is a need to change the software, then a new version of the
software is created with a new iteration.
 Every release of the Iterative Model finishes in an exact and fixed period that is called
iteration.
 The Iterative Model allows access to earlier phases, in which variations are made
accordingly.
 The final output of the project is renewed at the end of the Software Development Life Cycle
(SDLC) process.
The various phases of Iterative model are as follows:

1. Requirement gathering & analysis: In this phase, requirements are gathered from customers
and checked by an analyst to see whether they can be fulfilled. The analyst also checks whether
the needs can be achieved within budget. After all of this, the software team moves to the next phase.

2. Design: In the design phase, the team designs the software using different diagrams, like the
data-flow diagram, activity diagram, class diagram, state transition diagram, etc.

3. Implementation: In the implementation phase, requirements are written in the coding language
and transformed into computer programs, which are called software.

4. Testing: After completing the coding phase, software testing starts using different test methods.
There are many test methods, but the most common are white box, black box, and grey box test
methods.

5. Deployment: After completing all the phases, software is deployed to its work environment.

6. Review: In this phase, after the product deployment, the review phase is performed to check the
behaviour and validity of the developed product. If any errors are found, the process
starts again from requirement gathering.

7. Maintenance: In the maintenance phase, after deployment of the software in the working
environment, there may be some bugs or errors, or new updates may be required. Maintenance
involves debugging and adding new features.

When to use the Iterative Model

1. When requirements are defined clearly and easy to understand.


2. When the software application is large.
3. When there is a requirement of changes in future.

Advantage(Pros) of Iterative Model:

1. Testing and debugging during smaller iteration is easy.


2. Parallel development can be planned.
3. It is easily acceptable to ever-changing needs of the project.
4. Risks are identified and resolved during iteration.
5. Limited time spent on documentation and extra time on designing.

Disadvantage(Cons) of Iterative Model:


1. It is not suitable for smaller projects.
2. More Resources may be required.
3. The design can be changed again and again because of imperfect requirements.
4. Requirement changes can cause the project to go over budget.
5. The project completion date is not confirmed because of changing requirements.

Big Bang Model

 In this model, developers do not follow any specific process.


 Development begins with the necessary funds and efforts in the form of inputs.
 And the result may or may not be as per the customer's requirement, because in this model,
even the customer requirements are not defined.
 This model is ideal for small projects like academic projects or practical projects.
 One or two developers can work together on this model.

When to use Big Bang Model

As discussed above, this model is suited to projects that are small, like an academic or practical
project. This method is also used when the size of the developer team is small, when
requirements are not defined, and when the release date is not confirmed or given by the customer.

Advantage(Pros) of Big Bang Model:


1. There is no planning required.
2. Simple Model.
3. Few resources required.
4. Easy to manage.
5. Flexible for developers.

Disadvantage(Cons) of Big Bang Model:

1. There is high risk and uncertainty.


2. Not acceptable for a large project.
3. If requirements are not clear, the project can turn out to be very expensive.

Prototype Model

 The prototype model requires that before carrying out the development of actual software,
a working prototype of the system should be built.
 A prototype is a toy implementation of the system. A prototype usually turns out to be a
very crude version of the actual system, possibly exhibiting limited functional capabilities,
low reliability, and inefficient performance as compared to the actual software.
 In many instances, the client only has a general view of what is expected from the software
product.
 In such a scenario where there is an absence of detailed information regarding the input to
the system, the processing needs, and the output requirement, the prototyping model may
be employed.
Steps of Prototype Model

1. Requirement Gathering and Analysis


2. Quick Decision
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product

Advantage of Prototype Model

1. Reduce the risk of incorrect user requirement


2. Good where requirements are changing/uncommitted
3. Regular visible process aids management
4. Support early product marketing
5. Reduce Maintenance cost.
6. Errors can be detected much earlier as the system is made side by side.

Disadvantage of Prototype Model

1. An unstable/badly implemented prototype often becomes the final product.


2. Require extensive customer collaboration
o Costs customer money
o Needs committed customer
o Difficult to finish if the customer withdraws
o May be too customer specific, no broad market
3. Difficult to know how long the project will last.
4. Easy to fall back into the code and fix without proper requirement analysis, design,
customer evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools & techniques are required to build a prototype.
7. It is a time-consuming process.

Evolutionary Process Model

 Evolutionary process model resembles the iterative enhancement model.


 The same phases as defined for the waterfall model occur here in a cyclical fashion.
 This model differs from the iterative enhancement model in the sense that this does not
require a useful product at the end of each cycle.
 In evolutionary development, requirements are implemented by category rather than by
priority.
 For example, in a simple database application, one cycle might implement the graphical
user Interface (GUI), another file manipulation, another queries and another updates. All
four cycles must complete before there is a working product available.
 GUI allows the users to interact with the system, file manipulation allows the data to be
saved and retrieved, queries allow users to get data out of the system, and updates allow
users to put data into the system.
Benefits of Evolutionary Process Model

 Use of EVO brings a significant reduction in risk for software projects.


 EVO can reduce costs by providing a structured, disciplined avenue for experimentation.
 EVO allows the marketing department access to early deliveries, facilitating the
development of documentation and demonstration.
 Better fit the product to user needs and market requirements.
 Manage project risk with the definition of early cycle content.
 Uncover key issues early and focus attention appropriately.
 Increase the opportunity to hit market windows.
 Accelerate sales cycles with early customer exposure.
 Increase management visibility of project progress.
 Increase product team productivity and motivations.

Software project planning – identification of activities and resources:


(Demonstrate the software project planning)

 A Software Project is the complete procedure of software development, from
requirement gathering to testing and maintenance, carried out according to the execution
methodologies, in a specified period of time, to achieve the intended software product.
 Software manager is responsible for planning and scheduling project development.
 They manage the work to ensure that it is completed to the required standard.
 They monitor the progress to check that the event is on time and within budget.
 The project planning must incorporate the major issues like size and cost estimation,
scheduling, project monitoring, personnel selection and evaluation, and risk management.
 To plan a successful software project, we must understand:

o Scope of work to be completed


o Risk analysis
o The resources required
o The tasks to be accomplished
o The schedule to be followed

Software project planning starts before technical work starts. The various steps of planning
activities are:
 The size is the crucial parameter for the estimation of other activities.
 Resource requirements are estimated based on cost and development time.
 Project schedule may prove to be very useful for controlling and monitoring the progress
of the project.
 This is dependent on resources & development time.

Software Cost Estimation

 For any new software project, it is necessary to know how much it will cost to develop
and how much development time it will take.
 These estimates are needed before development is initiated, but how is this done? Several
estimation procedures have been developed, and they have the following attributes in
common.

1. Project scope must be established in advance.
2. Software metrics are used as a basis from which estimates are made.
3. The project is broken into small pieces, which are estimated individually.

To achieve reliable cost and schedule estimates, several options arise:

1. Delay estimation until late in the project.
2. Use relatively simple decomposition techniques to generate project cost and schedule
estimates.
3. Acquire one or more automated estimation tools.

Uses of Cost Estimation

1. During the planning stage, one needs to choose how many engineers are required for the
project and to develop a schedule.
2. In monitoring the project's progress, one needs to assess whether the project is
progressing according to plan and take corrective action, if necessary.
Cost Estimation Models

 A model may be static or dynamic.


 In a static model, a single variable is taken as a key element for calculating cost and time.
 In a dynamic model, all variables are interdependent, and there is no basic variable.

Static, Single Variable Models:

 A model that makes use of a single variable to calculate desired values such as cost, time,
effort, etc. is said to be a single variable model.
 The most common form of the equation is:

C = a * L^b

where C is the cost, L is the size, and a and b are constants.

 The Software Engineering Laboratory established a model called the SEL model for
estimating its software production.
 This model is an example of the static, single variable model.

E = 1.4 * L^0.93
DOC = 30.4 * L^0.90
D = 4.6 * L^0.26

where

E = effort (in person-months)
DOC = documentation (in number of pages)
D = duration (in months)
L = size (in KLOC, i.e., thousands of lines of code)
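A minimal Python sketch that evaluates the SEL equations above (L in KLOC, consistent with
the worked example further below):

    def sel_model(kloc):
        """Evaluate the SEL static single-variable equations."""
        effort = 1.4 * kloc ** 0.93    # E, in person-months
        doc    = 30.4 * kloc ** 0.90   # DOC, in pages
        months = 4.6 * kloc ** 0.26    # D, in months
        return effort, doc, months

    # For example, a 94.264 KLOC product gives roughly 96 person-months of effort.
    print(sel_model(94.264))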

Static, Multivariable Models:

 Unlike the single variable models, these models depend on several variables describing various
aspects of the software development environment.
 In such models, several variables are needed to describe the software development process,
and the selected equation combines these variables to give an estimate of time and cost.
 These models are called multivariable models.
 Walston and Felix developed such a model at IBM; it provides the following equation, giving
the relationship between lines of source code and effort:

E = 5.2 * L^0.91

In the same manner, the duration of development is given by

D = 4.1 * L^0.36

The model also defines a productivity index that uses 29 variables found to be highly correlated
with productivity:

I = Σ (i = 1 to 29) W_i * X_i

where W_i is the weight factor for the i-th variable and X_i ∈ {-1, 0, +1}; the estimator gives X_i
the value -1, 0 or +1 depending on whether the variable decreases, has no effect on, or increases
productivity.

Example: Compare the Walston-Felix Model with the SEL model on a software development
expected to involve 8 person-years of effort.

a. Calculate the number of lines of source code that can be produced.


b. Calculate the duration of the development.
c. Calculate the productivity in LOC/PY
d. Calculate the average manning

Solution:

The amount of manpower involved = 8 PY = 96 person-months

(a) The number of lines of source code can be obtained by inverting the effort equation,
L = (E/a)^(1/b), with L in KLOC:

L(SEL) = (96/1.4)^(1/0.93) = 94.264 KLOC ≈ 94,264 LOC
L(W-F) = (96/5.2)^(1/0.91) = 24.632 KLOC ≈ 24,632 LOC

(b) Duration in months can be calculated by means of the duration equations:

D(SEL) = 4.6 * (94.264)^0.26 = 15 months
D(W-F) = 4.1 * (24.632)^0.36 = 13 months

(c) Productivity is the number of lines of code produced per person-month:

P(SEL) = 94,264 / 96 ≈ 982 LOC/PM
P(W-F) = 24,632 / 96 ≈ 257 LOC/PM

(d) Average manning is the average number of persons required per month in the project:

M(SEL) = 96 / 15 ≈ 6.4 persons
M(W-F) = 96 / 13 ≈ 7.4 persons
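The arithmetic above can be checked with a short Python sketch; the inversion L = (E/a)^(1/b) and the rounded figures follow directly from the equations already given:

# Verify the SEL vs. Walston-Felix comparison for E = 96 person-months.
E = 96.0                          # 8 person-years of effort

L_sel = (E / 1.4) ** (1 / 0.93)   # size in KLOC, ~94.26
L_wf = (E / 5.2) ** (1 / 0.91)    # size in KLOC, ~24.63

D_sel = 4.6 * L_sel ** 0.26       # duration, ~15 months
D_wf = 4.1 * L_wf ** 0.36         # duration, ~13 months

print(round(1000 * L_sel / E), round(1000 * L_wf / E))  # ~982 vs ~257 LOC/PM
print(round(E / D_sel, 1), round(E / D_wf, 1))          # ~6.4 vs ~7.4 persons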

COCOMO Model

 Boehm proposed COCOMO (Constructive Cost Model) in 1981.
 COCOMO is one of the most widely used software estimation models in the world.
 COCOMO predicts the effort and schedule of a software product based on the size of the
software.

The necessary steps in this model are:

1. Get an initial estimate of the development effort from the estimate of the size in thousands
of delivered lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate by all the multiplying
factors, i.e., multiply the value from step 1 by the factors from step 2.

 The initial estimate (also called the nominal estimate) is determined by an equation of the
form used in the static single-variable models, using KDLOC as the measure of size.
 To determine the initial effort Ei in person-months, an equation of the following type is
used:

Ei = a * (KDLOC)^b

The values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into three types:
1. Organic
2. Semidetached
3. Embedded

1.Organic:

 A development project can be treated as organic if the project deals with developing a
well-understood application program, the size of the development team is reasonably
small, and the team members are experienced in developing similar types of projects.
 Examples of this type of project are simple business systems, simple inventory
management systems, and data processing systems.

2. Semidetached:

 A development project can be treated as semidetached if the development team consists
of a mixture of experienced and inexperienced staff. Team members may have limited
experience with related systems but may be unfamiliar with some aspects of the system
being developed.
 Examples of semidetached systems include a new operating system (OS), a Database
Management System (DBMS), and a complex inventory management system.

3. Embedded:

 A development project is treated as embedded if the software being developed is strongly
coupled to complex hardware, or if stringent constraints on the operational procedures
exist.
 For example: ATM software, air traffic control.

According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model

1. Basic COCOMO Model:

 The basic COCOMO model gives an approximate estimate of the project parameters.
 The following expressions give the basic COCOMO estimation model:

Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months

where
 KLOC is the estimated size of the software product, expressed in kilo lines of code,
 a1, a2, b1, b2 are constants for each category of software products,
 Tdev is the estimated time to develop the software, expressed in months,
 Effort is the total effort required to develop the software product, expressed in person-
months (PMs).

Estimation of development effort

 For the three classes of software products, the formulas for estimating the effort based on
the code size are shown below:

 Organic: Effort = 2.4 * (KLOC)^1.05 PM

 Semi-detached: Effort = 3.0 * (KLOC)^1.12 PM

 Embedded: Effort = 3.6 * (KLOC)^1.20 PM

Estimation of development time

 For the three classes of software products, the formulas for estimating the development
time based on the effort are given below:
 Organic: Tdev = 2.5 * (Effort)^0.38 months
 Semi-detached: Tdev = 2.5 * (Effort)^0.35 months
 Embedded: Tdev = 2.5 * (Effort)^0.32 months
 Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes.
 Fig. shows a plot of estimated effort versus product size.
 From the figure, we can observe that the effort is somewhat superlinear in the size of the
software product.
 Thus, the effort required to develop a product increases very rapidly with project size.
 The development time versus the product size in KLOC is plotted in the figure.
 From the figure it can be observed that the development time is a sublinear function of the
size of the product, i.e.,
 when the size of the product increases by two times, the time to develop the product does
not double but rises moderately.
 This can be explained by the fact that for larger products, a larger number of activities
which can be carried out concurrently can be identified.
 The parallel activities can be carried out simultaneously by the engineers. This reduces the
time to complete the project.
 Further, from fig, it can be observed that the development time is roughly the same for all
three categories of products.
 For example, a 60 KLOC program can be developed in approximately 18 months,
regardless of whether it is of organic, semidetached, or embedded type.

 From the effort estimation, the project cost can be obtained by multiplying the required
effort by the manpower cost per month.
 But, implicit in this project cost computation is the assumption that the entire project cost
is incurred on account of the manpower cost alone.
 In addition to manpower cost, a project would incur costs due to hardware and software
required for the project and the company overheads for administration, office space, etc.
 It is important to note that the effort and the duration estimations obtained using the
COCOMO model are called a nominal effort estimate and nominal duration estimate.
 The term nominal implies that if anyone tries to complete the project in a time shorter than
the estimated duration, then the cost will increase drastically.
 But if anyone completes the project over a longer period of time than estimated, then
there is almost no decrease in the estimated cost value.

Example1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and
development time for each of the three model i.e., organic, semi-detached & embedded.
Solution: The basic COCOMO equation takes the form:

Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 months
Estimated size of project = 400 KLOC

(i) Organic Mode

E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 months

(ii) Semidetached Mode

E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 months

(iii) Embedded Mode

E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 = 38 months

Example2: A project size of 200 KLOC is to be developed. Software development team has
average experience on similar type of projects. The project schedule is not very tight. Calculate
the Effort, development time, average staff size, and productivity of the project.

Solution: The semidetached mode is the most appropriate mode, keeping in view the size,
schedule and experience of the development team.

Hence,

E = 3.0 * (200)^1.12 = 1133.12 PM
D = 2.5 * (1133.12)^0.35 = 29.3 months

Average staff size = E / D = 1133.12 / 29.3 ≈ 38.7 persons
P = 200,000 / 1133.12 ≈ 176 LOC/PM
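Both examples can be reproduced with a small basic-COCOMO calculator; the following is a sketch built directly from the constants given above:

# Basic COCOMO sketch: effort in person-months, development time in months.
COEFFS = {  # mode: (a1, a2, b1, b2)
    "organic": (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded": (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a1, a2, b1, b2 = COEFFS[mode]
    effort = a1 * kloc ** a2   # person-months
    tdev = b1 * effort ** b2   # months
    return effort, tdev

for mode in COEFFS:            # Example 1: 400 KLOC in all three modes
    e, d = basic_cocomo(400, mode)
    print(f"{mode:12s} E = {e:7.2f} PM, Tdev = {d:5.2f} months")

e, d = basic_cocomo(200, "semidetached")  # Example 2: 200 KLOC
print(f"staff = {e / d:.1f} persons, productivity = {200_000 / e:.0f} LOC/PM")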

2. Intermediate Model: The basic COCOMO model assumes that effort is only a function of
the number of lines of code and some constants chosen according to the type of software
system. In reality, however, effort also depends on other attributes of the project. The
intermediate COCOMO model recognizes this fact and refines the initial estimate obtained
through the basic COCOMO model by using a set of 15 cost drivers based on various
attributes of software engineering.

Classification of Cost Drivers and their attributes:

The 15 cost drivers are grouped into four categories:

(i) Product attributes -

o Required software reliability extent
o Size of the application database
o The complexity of the product

(ii) Hardware attributes -

o Run-time performance constraints


o Memory constraints
o The volatility of the virtual machine environment
o Required turnabout time

(iii) Personnel attributes -

o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience

(iv) Project attributes -

o Use of software tools


o Application of software engineering methods

o Required development schedule

Intermediate COCOMO equations:

E = a_i * (KLOC)^(b_i) * EAF
D = c_i * (E)^(d_i)

where EAF (the effort adjustment factor) is the product of the multipliers chosen for the 15 cost
drivers.

Coefficients for intermediate COCOMO

Project ai bi ci di

Organic 2.4 1.05 2.5 0.38


Semidetached 3.0 1.12 2.5 0.35

Embedded 3.6 1.20 2.5 0.32
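To show how the EAF enters the calculation, here is a minimal sketch using the coefficient table above; the two cost-driver multipliers used are made-up illustrative values, not Boehm's published ratings:

# Intermediate COCOMO sketch: E = a_i * (KLOC)^b_i * EAF, D = c_i * E^d_i.
from math import prod

COEFFS = {  # mode: (a_i, b_i, c_i, d_i), from the table above
    "organic": (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded": (3.6, 1.20, 2.5, 0.32),
}

def intermediate_cocomo(kloc, mode, multipliers):
    a, b, c, d = COEFFS[mode]
    eaf = prod(multipliers)        # effort adjustment factor
    effort = a * kloc ** b * eaf   # person-months
    tdev = c * effort ** d         # months
    return effort, tdev, eaf

# Hypothetical ratings: one driver raising effort (1.15), one lowering it
# (0.85); the remaining thirteen drivers are assumed nominal (1.0).
print(intermediate_cocomo(200, "semidetached", [1.15, 0.85]))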

3. Detailed COCOMO Model:

 Detailed COCOMO incorporates all characteristics of the intermediate version with an
assessment of the cost drivers' effect on each step of the software engineering process.
 The detailed model uses different effort multipliers for each cost driver attribute.
 In detailed COCOMO, the whole software is divided into multiple modules; COCOMO is
then applied to the various modules to estimate effort, and the efforts are summed.

The Six phases of detailed COCOMO are:

1. Planning and requirements


2. System structure
3. Complete structure
4. Module code and test
5. Integration and test
6. Cost Constructive model

The effort is determined as a function of program size, and a set of cost drivers is applied in
every phase of the software life cycle.

Concepts of software engineering economics:


(Explain about software engineering economics)
 Software engineering economics is about making decisions related to software engineering
in a business context. The success of a software product, service, and solution depends on
good business management. Yet, in many companies and organizations, software business
relationships to software development and engineering remain vague. This knowledge area
(KA) provides an overview on software engineering economics.
 Economics is the study of value, costs, resources, and their relationship in a given context
or situation. In the discipline of software engineering, activities have costs, but the resulting
software itself has economic attributes as well.
 Software engineering economics provides a way to study the attributes of software and
software processes in a systematic way that relates them to economic measures. These
economic measures can be weighed and analyzed when making decisions that are within
the scope of a software organization and those within the integrated scope of an entire
producing or acquiring business.
 Software engineering economics is concerned with aligning software technical decisions
with the business goals of the organization.
BREAKDOWN OF TOPICS FOR SOFTWARE ENGINEERING ECONOMICS

1 Software Engineering Economics Fundamentals
1.1 Finance

Finance is the branch of economics concerned with issues such as allocation, management,
acquisition, and investment of resources. Finance is an element of every organization, including
software engineering organizations. The field of finance deals with the concepts of time, money,
risk, and how they are interrelated. It also deals with how money is spent and budgeted. Corporate
finance is concerned with providing the funds for an organization’s activities. Generally, this
involves balancing risk and profitability, while attempting to maximize an organization’s wealth
and the value of its stock. This holds primarily for “for-profit” organizations, but also applies to
“not-for-profit” organizations. The latter needs finances to ensure sustainability, while not
targeting tangible profit. To do this, an organization must

 identify organizational goals, time horizons, risk factors, tax considerations, and financial
constraints;
 identify and implement the appropriate business strategy, such as which portfolio and
investment decisions to take, how to manage cash flow, and where to get the funding;
 measure financial performance, such as cash flow and ROI (see section 4.3, Return on
Investment), and take corrective actions in case of deviation from objectives and strategy.
1.2 Accounting

Accounting is part of finance. It allows people whose money is being used to run an organization
to know the results of their investment: did they get the profit they were expecting? In “for-profit”
organizations, this relates to the tangible ROI, while in “not-for-profit” and governmental
organizations as well as “for-profit” organizations, it translates into sustainably staying in business.
The primary role of accounting is to measure the organization’s actual financial performance and
to communicate financial information about a business entity to stakeholders, such as shareholders,
financial auditors, and investors. Communication is generally in the form of financial statements
that show in money terms the economic resources to be controlled. It is important to select the
right information that is both relevant and reliable to the user. Information and its timing are
partially governed by risk management and governance policies. Accounting systems are also a
rich source of historical data for estimating.
1.3 Controlling
Controlling is an element of finance and accounting. Controlling involves measuring and
correcting the performance of finance and accounting. It ensures that an organization’s objectives
and plans are accomplished. Controlling cost is a specialized branch of controlling used to detect
variances of actual costs from planned costs.
1.4 Cash Flow
Cash flow is the movement of money into or out of a business, project, or financial product over a
given period. The concepts of cash flow instances and cash flow streams are used to describe the
business perspective of a proposal. To make a meaningful business decision about any specific
proposal, that proposal will need to be evaluated from a business perspective. In a proposal to
develop and launch product X, the payment for new software licenses is an example of an outgoing
cash flow instance. Money would need to be spent to carry out that proposal. The sales income
from product X in the 11th month after market launch is an example of an incoming cash flow
instance. Money would be coming in because of carrying out the proposal.
The term cash flow stream refers to the set of cash flow instances over time that are caused by
carrying out some given proposal. The cash flow stream is, in effect, the complete financial picture
of that proposal. How much money goes out? When does it go out? How much money comes in?
When does it come in? Simply, if the cash flow stream for Proposal A is more desirable than the
cash flow stream for Proposal B, then—all other things being equal—the organization is better off
carrying out Proposal A than Proposal B. Thus, the cash flow stream is an important input for
investment decision-making. A cash flow instance is a specific amount of money flowing into or
out of the organization at a specific time as a direct result of some activity. A cash flow diagram
is a picture of a cash flow stream. It gives the reader a quick overview of the financial picture of
the subject organization or project. Figure 12.2 shows an example of a cash flow diagram for a
proposal.

Figure 12.2: A Cash Flow Diagram


1.5 Decision-Making Process

If we assume that candidate solutions solve a given technical problem equally well, why should
the organization care which one is chosen? The answer is that there is usually a large difference in
the costs and incomes from the different solutions. A commercial, off-the-shelf, object request
broker product might cost a few thousand dollars, but the effort to develop a homegrown service
that gives the same functionality could easily cost several hundred times that amount.
If the candidate solutions all adequately solve the problem from a technical perspective, then the
selection of the most appropriate alternative should be based on commercial factors such as
optimizing total cost of ownership (TCO) or maximizing the short-term return on investment
(ROI). Life cycle costs such as defect correction, field service, and support duration are also
relevant considerations. These costs need to be factored in when selecting among acceptable
technical approaches, as they are part of the lifetime ROI. A systematic process for making
decisions will achieve transparency and allow later justification. Governance criteria in many
organizations demand selection from at least two alternatives.
A systematic process is shown in Figure 12.3. It starts with a business challenge at hand and
describes the steps to identify alternative solutions, define selection criteria, evaluate the solutions,
implement one selected solution, and monitor the performance of that solution.
Figure 12.3 shows the process as mostly stepwise and serial. The real process is more fluid.
Sometimes the steps can be done in a different order and often several of the steps can be done in
parallel. The important thing is to be sure that none of the steps are skipped or curtailed. It’s also
important to understand that this same process applies at all levels of decision making: from a
decision as big as determining whether a software project should be done at all, to deciding on
an algorithm or data structure to use in a software module. The difference is how financially
significant the decision is and, therefore, how much effort should be invested in making that
decision. The project-level decision is financially significant and probably warrants a relatively
high level of effort to make the decision. Selecting an algorithm is often much less financially
significant and warrants a much lower level of effort to make the decision, even though the same
basic decision-making process is being used.
More often than not, an organization could carry out more than one proposal if it wanted to, and
usually there are important relationships among proposals. Maybe Proposal Y can only be carried
out if Proposal X is also carried out. Or maybe Proposal P cannot be carried out if Proposal Q is
carried out, nor could Q be carried out if P were. Choices are much easier to make when there are
mutually exclusive paths—for example, either A or B or C or whatever is chosen. In preparing
decisions, it is recommended to turn any given set of proposals, along with their various
interrelationships, into a set of mutually exclusive alternatives. The choice can then be made
among these alternatives.

Figure 12.3: The Basic Business Decision-Making Process


1.6 Valuation

In an abstract sense, the decision-making process— be it financial decision making or other— is


about maximizing value. The alternative that maximizes total value should always be chosen. A
financial basis for value-based comparison is comparing two or more cash flows. Several bases of
comparison are available, including

 present worth
 future worth
 annual equivalent
 internal rate of return
 (discounted) payback period.
Based on the time-value of money, two or more cash flows are equivalent only when they equal
the same amount of money at a common point in time. Comparing cash flows only makes sense
when they are expressed in the same time frame.
Note that value can’t always be expressed in terms of money. For example, whether an item is a
brand name or not can significantly affect its perceived value. Relevant values that can’t be
expressed in terms of money still need to be expressed in similar terms so that they can be evaluated
objectively.
1.7 Inflation

Inflation describes long-term trends in prices. Inflation means that the same things cost more than
they did before. If the planning horizon of a business decision is longer than a few years, or if the
inflation rate is over a couple of percentage points annually, it can cause noticeable changes in the
value of a proposal. The present time value therefore needs to be adjusted for inflation rates and
also for exchange rate fluctuations.
1.8 Depreciation

Depreciation involves spreading the cost of a tangible asset across a number of time periods; it is
used to determine how investments in capitalized assets are charged against income over several
years. Depreciation is an important part of determining after-tax cash flow, which is critical for
accurately addressing profit and taxes. If a software product is to be sold after the development
costs are incurred, those costs should be capitalized and depreciated over subsequent time periods.
The depreciation expense for each time period is the capitalized cost of developing the software
divided across the number of periods in which the software will be sold. A software project
proposal may be compared to other software and non-software proposals or to alternative
investment options, so it is important to determine how those other proposals would be depreciated
and how profits would be estimated.
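For instance, under simple straight-line depreciation (one common method; the figures below are hypothetical), the capitalized development cost is charged evenly across the periods in which the software is sold:

# Straight-line depreciation sketch (hypothetical figures).
capitalized_cost = 600_000  # capitalized software development cost
selling_periods = 5         # years over which the product will be sold

annual_depreciation = capitalized_cost / selling_periods
print(annual_depreciation)  # 120000.0 charged against income each year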
1.9 Taxation

Governments charge taxes in order to finance expenses that society needs but that no single
organization would invest in. Companies have to pay income taxes, which can take a substantial
portion of a corporation’s gross profit. A decision analysis that does not account for taxation can
lead to the wrong choice. A proposal with a high pretax profit won’t look nearly as profitable in
post-tax terms. Not accounting for taxation can also lead to unrealistically high expectations about
how profitable a proposed product might be.
1.10 Time-Value of Money

One of the most fundamental concepts in finance—and therefore, in business decisions— is that
money has time-value: its value changes over time. A specific amount of money right now almost
always has a different value than the same amount of money at some other time. This concept has
been around since the earliest recorded human history and is commonly known as time-value. In
order to compare proposals or portfolio elements, they should be normalized in cost, value, and
risk to the net present value. Currency exchange variations over time need to be taken into account
based on historical data. This is particularly important in cross-border developments of all kinds.
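A minimal sketch of this normalization: each future cash flow instance is discounted back to its present value at an assumed discount rate (the proposal figures here are hypothetical):

# Net present value sketch: discount a cash flow stream to "now".
def npv(rate, cash_flows):
    # cash_flows[t] is the net cash flow at the end of period t (t = 0 is now).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical proposal: spend 100 now, receive 40 per year for 3 years.
print(round(npv(0.10, [-100, 40, 40, 40]), 2))  # ~-0.53: unattractive at 10%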
1.11 Efficiency

Economic efficiency of a process, activity, or task is the ratio of resources actually consumed to
resources expected to be consumed or desired to be consumed in accomplishing the process,
activity, or task. Efficiency means “doing things right.” An efficient behavior, like an effective
behavior, delivers results—but keeps the necessary effort to a minimum. Factors that may affect
efficiency in software engineering include product complexity, quality requirements, time
pressure, process capability, team distribution, interrupts, feature churn, tools, and programming
language.
1.12 Effectiveness

Effectiveness is about having impact. It is the relationship between achieved objectives to defined
objectives. Effectiveness means “doing the right things.” Effectiveness looks only at whether
defined objectives are reached—not at how they are reached.
1.13 Productivity

Productivity is the ratio of output over input from an economic perspective. Output is the value
delivered. Input covers all resources (e.g., effort) spent to generate the output. Productivity
combines efficiency and effectiveness from a value-oriented perspective: maximizing productivity
is about generating highest value with lowest resource consumption.

2 Life Cycle Economics


2.1 Product

A product is an economic good (or output) that is created in a process that transforms product
factors (or inputs) to an output. When sold, a product is a deliverable that creates both a value and
an experience for its users. A product can be a combination of systems, solutions, materials, and
services delivered internally (e.g., in-house IT solution) or externally (e.g., software application),
either as-is or as a component for another product (e.g., embedded software).
2.2 Project

A project is “a temporary endeavor undertaken to create a unique product, service, or result”. In


software engineering, different project types are distinguished (e.g., product development,
outsourced services, software maintenance, service creation, and so on). During its life cycle, a
software product may require many projects. For example, during the product conception phase, a
project might be conducted to determine the customer need and market requirements; during
maintenance, a project might be conducted to produce a next version of a product.
2.3 Program
A program is “a group of related projects, subprograms, and program activities managed in a
coordinated way to obtain benefits not available from managing them individually.” Programs are
often used to identify and manage different deliveries to a single customer or market over a time
horizon of several years.
2.4 Portfolio
Portfolios are “projects, programs, sub-portfolios, and operations managed as a group to achieve
strategic objectives.” Portfolios are used to group and then manage simultaneously all assets within
a business line or organization. Looking to an entire portfolio makes sure that impacts of decisions
are considered, such as resource allocation to a specific project—which means that the same
resources are not available for other projects.
2.5 Product Life Cycle

A software product life cycle (SPLC) includes all activities needed to define, build, operate,
maintain, and retire a software product or service and its variants. The SPLC activities of “operate,”
“maintain,” and “retire” typically occur in a much longer time frame than initial software
development (the software development life cycle—SDLC—see Software Life Cycle Models in
the Software Engineering Process KA). Also the operate-maintain-retire activities of an SPLC
typically consume more total effort and other resources than the SDLC activities (see Majority of
Maintenance Costs in the Software Maintenance KA). The value contributed by a software product
or associated services can be objectively determined during the “operate and maintain” time frame.
Software engineering economics should be concerned with all SPLC activities, including the
activities after initial product release.
2.6 Project Life Cycle

Project life cycle activities typically involve five process groups—Initiating, Planning, Executing,
Monitoring and Controlling, and Closing. The activities within a software project life cycle are
often interleaved, overlapped, and iterated in various ways [3*, c2]. For instance, agile product
development within an SPLC involves multiple iterations that produce increments of deliverable
software. An SPLC should include risk management and synchronization with different suppliers
(if any), while providing auditable decision-making information (e.g., complying with product
liability needs or governance regulations). The software project life cycle and the software product
life cycle are interrelated; an SPLC may include several SDLCs.
2.7 Proposals

Making a business decision begins with the notion of a proposal. Proposals relate to reaching a
business objective—at the project, product, or portfolio level. A proposal is a single, separate
option that is being considered, like carrying out a particular software development project or not.
Another proposal could be to enhance an existing software component, and still another might be
to redevelop that same software from scratch. Each proposal represents a unit of choice—either
you can choose to carry out that proposal or you can choose not to. The whole purpose of business
decision-making is to figure out, given the current business circumstances, which proposals should
be carried out and which shouldn’t.
2.8 Investment Decisions

Investors make investment decisions to spend money and resources on achieving a target objective.
Investors are either inside (e.g., finance, board) or outside (e.g., banks) the organization. The target
relates to some economic criteria, such as achieving a high return on the investment, strengthening
the capabilities of the organization, or improving the value of the company. Intangible aspects such
as goodwill, culture, and competences should be considered.
2.9 Planning Horizon

When an organization chooses to invest in a particular proposal, money gets tied up in that
proposal— so-called “frozen assets.” The economic impact of frozen assets tends to start high and
decreases over time. On the other hand, operating and maintenance costs of elements associated
with the proposal tend to start low but increase over time. The total cost of the proposal—that is,
owning and operating a product—is the sum of those two costs. Early on, frozen asset costs
dominate; later, the operating and maintenance costs dominate. There is a point in time where the
sum of the costs is minimized; this is called the minimum cost lifetime.
To properly compare a proposal with a four-year life span to a proposal with a six-year life span,
the economic effects of either cutting the six-year proposal by two years or investing the profits
from the four-year proposal for another two years need to be addressed. The planning horizon,
sometimes known as the study period, is the consistent time frame over which proposals are
considered. Effects such as software lifetime will need to be factored into establishing a planning
horizon. Once the planning horizon is established, several techniques are available for putting
proposals with different life spans into that planning horizon.
2.10 Price and Pricing
A price is what is paid in exchange for a good or service. Price is a fundamental aspect of financial
modeling and is one of the four Ps of the marketing mix. The other three Ps are product, promotion,
and place. Price is the only revenue-generating element amongst the four Ps; the rest are costs.
Pricing is an element of finance and marketing. It is the process of determining what a company
will receive in exchange for its products. Pricing factors include manufacturing cost, market
placement, competition, market condition, and quality of product. Pricing applies prices to
products and services based on factors such as fixed amount, quantity break, promotion or sales
campaign, specific vendor quote, shipment or invoice date, combination of multiple orders, service
offerings, and many others. The needs of the consumer can be converted into demand only if the
consumer has the willingness and capacity to buy the product. Thus, pricing is very important in
marketing. Pricing is initially done during the project initiation phase and is a part of “go” decision
making.
2.11 Cost and Costing

A cost is the value of money that has been used up to produce something and, hence, is not
available for use anymore. In economics, a cost is an alternative that is given up as a result of a
decision.
A sunk cost is the expenses before a certain time, typically used to abstract decisions from expenses
in the past, which can cause emotional hurdles in looking forward. From a traditional economics
point of view, sunk costs should not be considered in decision making. Opportunity cost is the cost
of an alternative that must be forgone in order to pursue another alternative.
Costing is part of finance and product management. It is the process to determine the cost based
on expenses (e.g., production, software engineering, distribution, rework) and on the target cost to
be competitive and successful in a market. The target cost can be below the actual estimated cost.
The planning and controlling of these costs (called cost management) is important and should
always be included in costing.
An important concept in costing is the total cost of ownership (TCO). This holds especially for
software, because there are many not-so-obvious costs related to SPLC activities after initial
product development. TCO for a software product is defined as the total cost for acquiring,
activating, and keeping that product running. These costs can be grouped as direct and indirect
costs. TCO is an accounting method that is crucial in making sound economic decisions.
2.12 Performance Measurement

Performance measurement is the process whereby an organization establishes and measures the
parameters used to determine whether programs, investments, and acquisitions are achieving the
desired results. It is used to evaluate whether performance objectives are actually achieved; to
control budgets, resources, progress, and decisions; and to improve performance.
2.13 Earned Value Management

Earned value management (EVM) is a project management technique for measuring progress
based on created value. At a given moment, the results achieved to date in a project are compared
with the projected budget and the planned schedule progress for that date. Progress relates already-
consumed resources and achieved results at a given point in time with the respective planned
values for the same date. It helps to identify possible performance problems at an early stage. A
key principle in EVM is tracking cost and schedule variances via comparison of planned versus
actual schedule and budget versus actual cost. EVM tracking gives much earlier visibility to
deviations and thus permits corrections earlier than classic cost and schedule tracking that only
looks at delivered documents and products.
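The core EVM comparisons reduce to a handful of standard formulas: schedule variance SV = EV − PV, cost variance CV = EV − AC, and the indices SPI = EV/PV and CPI = EV/AC. A sketch with hypothetical status-date figures:

# Earned value management sketch (hypothetical figures at a status date).
PV = 120_000  # planned value: budgeted cost of work scheduled to date
EV = 100_000  # earned value: budgeted cost of work actually performed
AC = 130_000  # actual cost of the work performed

print("SV  =", EV - PV)            # -20000: behind schedule
print("CV  =", EV - AC)            # -30000: over budget
print("SPI =", round(EV / PV, 2))  # 0.83
print("CPI =", round(EV / AC, 2))  # 0.77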
2.14 Termination Decisions
Termination means to end a project or product. Termination can be preplanned for the end of a
long product lifetime (e.g., when foreseeing that a product will reach its lifetime) or can come
rather spontaneously during product development (e.g., when project performance targets are not
achieved). In both cases, the decision should be carefully prepared, considering always the
alternatives of continuing versus terminating. Costs of different alternatives must be estimated—
covering topics such as replacement, information collection, suppliers, alternatives, assets, and
utilizing resources for other opportunities. Sunk costs should not be considered in such decision
making because they have been spent and will not reappear as a value.
2.15 Replacement and Retirement Decisions

A replacement decision is made when an organization already has a particular asset and they are
considering replacing it with something else; for example, deciding between maintaining and
supporting a legacy software product or redeveloping it from the ground up. Replacement
decisions use the same business decision process as described above, but there are additional
challenges: sunk cost and salvage value. Retirement decisions are also about getting out of an
activity altogether, such as when a software company considers not selling a software product
anymore or a hardware manufacturer considers not building and selling a particular model of
computer any longer. Retirement decision can be influenced by lock-in factors such as technology
dependency and high exit costs.

3 Risk and Uncertainty


3.1 Goals, Estimates, and Plans

Goals in software engineering economics are mostly business goals (or business objectives). A
business goal relates business needs (such as increasing profitability) to investing resources (such
as starting a project or launching a product with a given budget, content, and timing). Goals apply
to operational planning (for instance, to reach a certain milestone at a given date or to extend
software testing by some time to achieve a desired quality level—see Key Issues in the Software
Testing KA) and to the strategic level (such as reaching a certain profitability or market share in a
stated time period).
An estimate is a well-founded evaluation of resources and time that will be needed to achieve
stated goals. A software estimate is used to determine whether the project goals can be achieved
within the constraints on schedule, budget, features, and quality attributes. Estimates are typically
internally generated and are not necessarily visible externally. Estimates should not be driven
exclusively by the project goals because this could make an estimate overly optimistic. Estimation
is a periodic activity; estimates should be continually revised during a project.
A plan describes the activities and milestones that are necessary in order to reach the goals of a
project. The plan should be in line with the goal and the estimate, which is not necessarily easy
and obvious— such as when a software project with given requirements would take longer than
the target date foreseen by the client. In such cases, plans demand a review of initial goals as well
as estimates and the underlying uncertainties and inaccuracies. Creative solutions with the
underlying rationale of achieving a win-win position are applied to resolve conflicts.
To be of value, planning should involve consideration of the project constraints and commitments
to stakeholders. Figure 12.4 shows how goals are initially defined. Estimates are done based on
the initial goals. The plan tries to match the goals and the estimates. This is an iterative process,
because an initial estimate typically does not meet the initial goals.

Figure 12.4: Goals, Estimates, and Plans


3.2 Estimation Techniques

Estimations are used to analyze and forecast the resources or time necessary to implement
requirements. Five families of estimation techniques exist:

 Expert judgment
 Analogy
 Estimation by parts
 Parametric methods
 Statistical methods.
No single estimation technique is perfect, so using multiple estimation techniques is useful.
Convergence among the estimates produced by different techniques indicates that the estimates
are probably accurate. Spread among the estimates indicates that certain factors might have been
overlooked. Finding the factors that caused the spread and then re-estimating again to produce
results that converge could lead to a better estimate.
3.3 Addressing Uncertainty

Because of the many unknown factors during project initiation and planning, estimates are
inherently uncertain; that uncertainty should be addressed in business decisions. Techniques for
addressing uncertainty include

 consider ranges of estimates


 analyze sensitivity to changes of assumptions
 delay final decisions.
3.4 Prioritization
Prioritization involves ranking alternatives based on common criteria to deliver the best possible
value. In software engineering projects, software requirements are often prioritized in order to
deliver the most value to the client within constraints of schedule, budget, resources, and
technology, or to provide for building product increments, where the first increments provide the
highest value to the customer.
3.5 Decisions under Risk

Decisions under risk techniques are used when the decision maker can assign probabilities to the
different possible outcomes. The specific techniques (see the sketch after this list) include

 expected value decision making


 expectation variance and decision making
 Monte Carlo analysis
 expected value of perfect information.
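As an illustration of the first technique, expected value decision making weighs each outcome's payoff by its probability and picks the alternative with the highest expected value (all figures hypothetical):

# Expected value decision making sketch (hypothetical alternatives).
alternatives = {
    # alternative: list of (probability, payoff) outcomes
    "build in-house": [(0.6, 250_000), (0.4, -100_000)],
    "buy COTS": [(0.9, 120_000), (0.1, -20_000)],
}

for name, outcomes in alternatives.items():
    ev = sum(p * payoff for p, payoff in outcomes)
    print(f"{name}: expected value = {ev:,.0f}")
# build in-house: 110,000 vs. buy COTS: 106,000 -> in-house wins on EV alone.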
3.6 Decisions under Uncertainty

Decisions under uncertainty techniques are used when the decision maker cannot assign
probabilities to the different possible outcomes because needed information is not available.
Specific techniques include

 Laplace Rule
 Maximin Rule
 Maximax Rule
 Hurwicz Rule
 Minimax Regret Rule.

4 Economic Analysis Methods


4.1 For-Profit Decision Analysis

Figure 12.5 describes a process for identifying the best alternative from a set of mutually exclusive
alternatives. Decision criteria depend on the business objectives and typically include ROI or
Return on Capital Employed (ROCE).
For-profit decision techniques don’t apply for government and nonprofit organizations. In these
cases, organizations have different goals—which means that a different set of decision techniques
are needed, such as cost-benefit or cost-effectiveness analysis.
Figure 12.5: The for-profit decision-making process
4.2 Minimum Acceptable Rate of Return

The minimum acceptable rate of return (MARR) is the lowest internal rate of return the
organization would consider to be a good investment. Generally speaking, it wouldn’t be smart to
invest in an activity with a return of 10% when there’s another activity that’s known to return 20%.
The MARR is a statement that an organization is confident it can achieve at least that rate of return.
The MARR represents the organization’s opportunity cost for investments. By choosing to invest
in some activity, the organization is explicitly deciding to not invest that same money somewhere
else. If the organization is already confident it can get some known rate of return, other alternatives
should be chosen only if their rate of return is at least that high. A simple way to account for that
opportunity cost is to use the MARR as the interest rate in business decisions. An alternative's
present worth evaluated at the MARR shows how much more or less (in present-day cash
terms) that alternative is worth than investing at the MARR.
4.3 Return on Investment

Return on investment (ROI) is a measure of the profitability of a company or business unit. It is
defined as the ratio of money gained or lost (whether realized or unrealized) on an investment
relative to the amount of money invested. The purpose of ROI varies and includes, for instance,
providing a rationale for future investments and acquisition decisions.
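In its simplest form, ROI = (money gained − money invested) / money invested; a sketch with hypothetical figures:

# Simple ROI sketch (hypothetical figures).
invested = 200_000  # money put into the investment
returned = 260_000  # money gained back (realized or unrealized)

roi = (returned - invested) / invested
print(f"ROI = {roi:.0%}")  # 30%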
4.4 Return on Capital Employed
The return on capital employed (ROCE) is a measure of the profitability of a company or business
unit. It is defined as the ratio of a gross profit before taxes and interest (EBIT) to the total assets
minus current liabilities. It describes the return on the used capital.
4.5 Cost-Benefit Analysis

Cost-benefit analysis is one of the most widely used methods for evaluating individual proposals.
Any proposal with a benefit-cost ratio of less than 1.0 can usually be rejected without further
analysis because it would cost more than the benefit. Proposals with a higher ratio need to consider
the associated risk of an investment and compare the benefits with the option of investing the
money at a guaranteed interest rate.
4.6 Cost-Effectiveness Analysis

Cost-effectiveness analysis is similar to cost benefit analysis. There are two versions of cost
effectiveness analysis: the fixed-cost version maximizes the benefit given some upper bound on
cost; the fixed-effectiveness version minimizes the cost needed to achieve a fixed goal.
4.7 Break-Even Analysis

Break-even analysis identifies the point where the costs of developing a product and the revenue
to be generated are equal. Such an analysis can be used to choose between different proposals at
different estimated costs and revenue. Given estimated costs and revenue of two or more proposals,
break-even analysis helps in choosing among them.
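A minimal break-even sketch, assuming a fixed development cost plus a constant price and variable cost per unit (all figures hypothetical):

# Break-even sketch: units at which total revenue equals total cost.
fixed_cost = 500_000       # up-front development cost
price_per_unit = 50.0      # revenue per unit sold
variable_cost_per_unit = 10.0

break_even_units = fixed_cost / (price_per_unit - variable_cost_per_unit)
print(break_even_units)    # 12500.0 units to recover the development cost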
4.8 Business Case

The business case is the consolidated information summarizing and explaining a business proposal
from different perspectives for a decision maker (cost, benefit, risk, and so on). It is often used to
assess the potential value of a product, which can be used as a basis in the investment decision
making process. As opposed to a mere profit loss calculation, the business case is a “case” of plans
and analyses that is owned by the product manager and used in support of achieving the business
objectives.
4.9 Multiple Attribute Evaluation
The topics discussed so far are used to make decisions based on a single decision criterion: money.
The alternative with the best present worth, the best ROI, and so forth is the one selected. Aside
from technical feasibility, money is almost always the most important decision criterion, but it’s
not always the only one. Quite often there are other criteria, other “attributes,” that need to be
considered, and those attributes can’t be cast in terms of money. Multiple attribute decision
techniques allow other, nonfinancial criteria to be factored into the decision.
There are two families of multiple attribute decision techniques that differ in how they use the
attributes in the decision. One family is the “compensatory,” or single-dimensioned, techniques.
This family collapses all of the attributes onto a single figure of merit. The family is called
compensatory because, for any given alternative, a lower score in one attribute can be compensated
by—or traded off against—a higher score in other attributes. The compensatory techniques include

 nondimensional scaling
 additive weighting
 analytic hierarchy process.
In contrast, the other family is the “non-compensatory,” or fully dimensioned, techniques. This
family does not allow tradeoffs among the attributes. Each attribute is treated as a separate entity
in the decision process. The non-compensatory techniques include

 dominance
 satisficing
 lexicography.
4.10 Optimization Analysis

The typical use of optimization analysis is to study a cost function over a range of values to find
the point where overall performance is best. Software’s classic space-time tradeoff is an example
of optimization; an algorithm that runs faster will often use more memory. Optimization balances
the value of the faster runtime against the cost of the additional memory.
Real options analysis can be used to quantify the value of project choices, including the value of
delaying a decision. Such options are difficult to compute with precision. However, awareness that
choices have a monetary value provides insight in the timing of decisions such as increasing project
staff or lengthening time to market to improve quality.

5 Practical Considerations
5.1 The “Good Enough” Principle

Often software engineering projects and products are not precise about the targets that should be
achieved. Software requirements are stated, but the marginal value of adding a bit more
functionality cannot be measured. The result could be late delivery or too-high cost. The “good
enough” principle relates marginal value to marginal cost and provides guidance to determine
criteria when a deliverable is “good enough” to be delivered. These criteria depend on business
objectives and on prioritization of different alternatives, such as ranking software requirements,
measurable quality attributes, or relating schedule to product content and cost.
The RACE principle (reduce accidents and control essence) is a popular rule towards good enough
software. Accidents imply unnecessary overheads such as gold-plating and rework due to late
defect removal or too many requirements changes. Essence is what customers pay for. Software
engineering economics provides the mechanisms to define criteria that determine when a
deliverable is “good enough” to be delivered. It also highlights that both words are relevant: “good”
and “enough.” Insufficient quality or insufficient quantity is not good enough.
Agile methods are examples of “good enough” that try to optimize value by reducing the overhead
of delayed rework and the gold plating that results from adding features that have low marginal
value for the users. In agile methods, detailed planning and lengthy development phases are
replaced by incremental planning and frequent delivery of small increments of a deliverable
product that is tested and evaluated by user representatives.
5.2 Friction-Free Economy
Economic friction is everything that keeps markets from having perfect competition. It involves
distance, cost of delivery, restrictive regulations, and/or imperfect information. In high-friction
markets, customers don’t have many suppliers from which to choose. Having been in a business
for a while or owning a store in a good location determines the economic position. It’s hard for
new competitors to start business and compete. The marketplace moves slowly and predictably.
Friction-free markets are just the reverse. New competitors emerge and customers are quick to
respond. The marketplace is anything but predictable. Theoretically, software and IT are friction
free. New companies can easily create products and often do so at a much lower cost than
established companies, since they need not consider any legacies. Marketing and sales can be done
via the Internet and social networks, and basically free distribution mechanisms can enable a ramp
up to a global business. Software engineering economics aims to provide foundations to judge how
a software business performs and how friction-free a market actually is. For instance, competition
among software app developers is inhibited when apps must be sold through an app store and
comply with that store’s rules.
5.3 Ecosystems
An ecosystem is an environment consisting of all the mutually dependent stakeholders, business
units, and companies working in a particular area. In a typical ecosystem, there are producers and
consumers, where the consumers add value to the consumed resources. Note that a consumer is
not the end user but an organization that uses the product to enhance it. A software ecosystem is,
for instance, a supplier of an application working with companies doing the installation and support
in different regions. Neither one could exist without the other. Ecosystems can be permanent or
temporary. Software engineering economics provides the mechanisms to evaluate alternatives in
establishing or extending an ecosystem—for instance, assessing whether to work with a specific
distributor or have the distribution done by a company doing service in an area.
5.4 Offshoring and Outsourcing
Offshoring means executing a business activity beyond sales and marketing outside the home
country of an enterprise. Enterprises typically either have their offshoring branches in low-cost
countries or they ask specialized companies abroad to execute the respective activity. Offshoring
should therefore not be confused with outsourcing. Offshoring within a company is called captive
offshoring. Outsourcing is the result-oriented relationship with a supplier who executes business
activities for an enterprise when, traditionally, those activities were executed inside the enterprise.
Outsourcing is site-independent. The supplier can reside in the neighborhood of the enterprise or
offshore (outsourced offshoring). Software engineering economics provides the basic criteria and
business tools to evaluate different sourcing mechanisms and control their performance. For
instance, using an outsourcing supplier for software development and maintenance might reduce
the cost per hour of software development, but increase the number of hours and capital expenses
due to an increased need for monitoring and communication.
Techniques of software project control and reporting:
 Project control focuses on the factors that impact the schedule and budget of your
project.
 It is a key factor in keeping your project running smoothly.
 A key issue associated with project control is scope creep, which results in blow-outs of
cost and schedule.

Why is project reporting essential to project management?


 Without adequate project management reports, the project team and project stakeholders end
up being in the dark, unable to put their finger on what’s going on with the project.
 As a result, it’s all too easy for the project to fail, simply because the right insights aren’t
getting through and therefore, appropriate decisions aren’t being taken.
 Project reporting fulfills the need for information in the project management process so that
data is taken from where it’s generated, and delivered to where it’s interpreted and applied.

Overall, project management reports are important because they:

 Show the project team what's working, so they can explain why it's working and
focus more on it.
 Uncover what's not working so the team can investigate and determine an appropriate
course of action, i.e. what to do about it, with the help of the project dashboard.
 Give the team a 360° overview of how the project is doing so they can determine what
steps to take next.

The different types of project management reports


1. Project status report

The project status report is a critical report that shows stakeholders a general snapshot of how well
the project is advancing toward its targets. The project status report can be thought of as a general
update that's designed to keep stakeholders informed of project progress, emerging issues, and key
points to note, all at a glance.

2. Project health report

Project health reports are designed to update stakeholders on the overall health of the project, derived
from whether the project is either advancing as projected, in danger of stagnating or completely
stagnated.

Why you need project health reports:


The project health report answers the following questions:

 Are we on track to deliver this project on target? Have we stagnated?


 How far off are we from the target?
 What needs the most attention to get us back on track?

Project health reports make it easy to identify when something's wrong so the team can pinpoint
what it is and get it out of the way.

3. Team availability reports

The team availability report functions like a team calendar that shows every team member’s schedule
so it’s easy to see who’s occupied and when they are busy. This way, stakeholders who are planning
for a project or requiring input anywhere can see which team members can be assigned, those who
can safely take on more work, as well as those who are at full capacity and might need assistance.

Why you need team availability reports:

 Availability reports make it easy to visualize how much everyone has on their plates so
work can be more evenly distributed to achieve faster results, higher efficiency, and most
importantly, prevent project burnout between teams.
 An availability report plots staff names against calendar days, with either a color tone or a
written designation showing their workload for each calendar day.

4. Risk reports

A risk report identifies the blockers hindering a project’s successful completion and presents it for the
stakeholders’ analysis. The risk report is designed to not only display existing or potential obstacles
but to offer a sense of the danger they pose to the project so the project’s stakeholders can take
adequate steps to eliminate project risks or adapt the project.

Why you need risk reports:

Project risk reporting helps the project team to:

 Determine existing and potential project constraints that are already holding back the
project, or that will hold it back
 Visualize obstacles on a risk scale to determine which ones to prioritize
 Determine how to keep future projects from running into similar hitches

5. Variance report
It’s quite common for teams to deviate from the project’s key targets without even knowing. In the
end, this results in project failure after time and resources have been expended.
A variance report helps the project team and stakeholders to ensure that doesn’t happen. You can
track the target project milestones and objectives of the project along with the work that’s getting
done.

Why you need variance reports:

With a variance report, the team can see whether the work they're getting done is actually meeting
the project's targets or whether they're just spending time without ticking off the following:

o project milestones
o project objectives
o project deliverables.

6. Time tracking report

Project time tracking helps the project team & stakeholders see how much time is getting spent by
team members at every stage of the project management process. A time tracking report helps the
team to see how much time overall is spent on specific tasks and how much individual team members
spend on tasks.

Why you need time tracking reports:

 Time tracking reports help in assigning team members to tasks where they’re more
efficient,
 Tracking time spent on tasks for compensation,
 as well as optimizing systems and processes so work gets done faster.

How to create effective project reports

The aim of project reporting is to offer all the information generated from your projects in a simple
format so stakeholders can understand and apply those insights. Here are some best practices that’ll
help you create reports that actually enable project stakeholders to make informed decisions.

1. Keep data at the center

The aim of project management reports is to deliver processed data to those who need it so they can
be informed and make appropriate decisions from it. It’s important that reports present solid data that
stakeholders can look at and get an idea of the big picture.

2. Visualize the data

Use images, charts, and graphs liberally wherever appropriate to fully illustrate the implications of
whatever data you present, with the help of visual project management tools.

3. Leave the stage open for constructive communication


Reports shouldn’t be full stops that spit out data and end there; rather, they should explain the data
and its implications while inviting further questions. This can be demanding, but it ensures that all
collaborating stakeholders are on the same page and get a full picture of what you’re trying to
convey.

4. Create reports appropriate for your audience

Senior-level management won’t have the time to sift through small details; team members won’t be
able to make much out of a report that shows only a few figures, project management charts, and
notes. Reports must be adapted to the needs of your specific audience so they get all the information
they need through project communication management, without getting bogged down or left in the
dark with incomplete data.

Introduction to measurement of software size:


 Software sizing or Software size estimation is an activity in software engineering that is
used to determine or estimate the size of a software application or component in order to
be able to implement other software project management activities (such as estimating or
tracking).
 Size is an inherent characteristic of a piece of software just like weight is an inherent
characteristic of a tangible material.
 Software sizing is different from software effort estimation.
 Sizing estimates the probable size of a piece of software while effort estimation predicts
the effort needed to build it.
 The relationship between the size of software and the effort required to produce it is
called productivity.
For example, if a software engineer has built a small web-based calculator application, we can say
that the project effort was 280 man-hours. However, this does not give any information about the
size of the software product itself. Conversely, we can say that the application size is 5,000 LOCs
(Lines Of Code), or 30 FPs (Function Points) without identifying the project effort required to
produce it.
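To make the size/effort relationship concrete, here is a minimal Python sketch of the productivity calculation using the numbers from the example above (the figures are purely illustrative):

effort_hours = 280          # project effort in man-hours
size_loc = 5000             # size in lines of code
size_fp = 30                # size in function points

# Productivity = size produced per unit of effort.
print(round(size_loc / effort_hours, 1))   # ~17.9 LOC per man-hour
print(round(size_fp / effort_hours, 3))    # ~0.107 FP per man-hour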
Functional software-sizing methods
 Historically, the most common software sizing methodology has been counting the lines
of code written in the application source.
 Another approach is to do Functional Size Measurement, to express the functionality size
as a number by performing function point analysis.
 The original functional size measurement method is IFPUG FPA. The IFPUG FPA functional
sizing method (FSM) has been used successfully – despite being less accurate for complex
algorithms and being relatively more difficult to apply than estimating lines of code (a
sketch of an unadjusted function point count follows this list).
 Adaptations of the original Functional Size Measurement methodology have emerged, and
these standards are: COSMIC Function Points, Mk II Function Points, Nesma Function
Points, and FiSMA Function Points.
 Other variants of these standards include Object-Oriented Function Points (OOFP) and
newer variants such as Weighted Micro Function Points, which factor in algorithmic and
control-flow complexity.
 The best Functional Sizing Method depends on a number of factors, including the
functional domain of the applications, the process maturity of the developing organization
and the extent of use of the FSM Method.
 There are many uses and benefits of function points beyond measuring project productivity
and estimating planned projects, these include monitoring project progress and evaluating
the requirements coverage of commercial off-the-shelf (COTS) packages.
 Other software sizing methods include Use Case-based software sizing, which relies on
counting the number and characteristics of use cases found in a piece of software,
and COSMIC functional size measurement, which addresses sizing software that has a very
limited amount of stored data such as 'process control' and 'real time' systems.
 Both the IFPUG Method and the COSMIC Methods are ISO/IEC standards.
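As a rough illustration of function point analysis, the sketch below computes an unadjusted function point (UFP) count in the IFPUG style. The weight table uses the commonly published IFPUG complexity weights; the component counts themselves are hypothetical:

# (component type, complexity) -> IFPUG weight
WEIGHTS = {
    ("EI",  "simple"): 3,  ("EI",  "average"): 4,  ("EI",  "complex"): 6,
    ("EO",  "simple"): 4,  ("EO",  "average"): 5,  ("EO",  "complex"): 7,
    ("EQ",  "simple"): 3,  ("EQ",  "average"): 4,  ("EQ",  "complex"): 6,
    ("ILF", "simple"): 7,  ("ILF", "average"): 10, ("ILF", "complex"): 15,
    ("EIF", "simple"): 5,  ("EIF", "average"): 7,  ("EIF", "complex"): 10,
}

# Hypothetical counts for a small application.
counts = [
    ("EI",  "average", 6),   # external inputs
    ("EO",  "simple",  4),   # external outputs
    ("EQ",  "average", 3),   # external inquiries
    ("ILF", "simple",  2),   # internal logical files
    ("EIF", "average", 1),   # external interface files
]

ufp = sum(WEIGHTS[(ctype, cx)] * n for ctype, cx, n in counts)
print(ufp)   # 6*4 + 4*4 + 3*4 + 2*7 + 1*7 = 73 unadjusted function points

In a full IFPUG count, the UFP would then be adjusted by a value adjustment factor; that step is omitted here.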
Non-functional software-sizing method
 The IFPUG method for sizing the non-functional aspects of a software product or component
is called SNAP, so the non-functional size is measured in SNAP points.
 The SNAP model consists of four categories and fourteen sub-categories for measuring
non-functional requirements.
 Non-functional requirements are mapped to the relevant sub-categories. Each sub-category
is sized, and the size of a requirement is the sum of the sizes of its sub-categories (a small
totaling sketch follows this list).
 The SNAP sizing process is very similar to the function point sizing process. Within the
application boundary, non-functional requirements are associated with relevant categories
and their sub-categories.
 Using a standardized set of basic criteria, each of the sub-categories is then sized according
to its type and complexity; the size of such a requirement is the sum of the sizes of its sub-
categories.
 These sizes are then totaled to give the measure of non-functional size of the software
application.
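A minimal sketch of this totaling rule, with hypothetical requirement names, sub-categories, and point values:

# Each non-functional requirement maps to sized sub-categories.
requirements = {
    "multi-language UI": {"UI changes": 12, "help methods": 4},
    "batch throughput":  {"batch processing": 9},
}

# Size of a requirement = sum of the sizes of its sub-categories.
for req, subcats in requirements.items():
    print(req, "->", sum(subcats.values()), "SNAP points")

# Non-functional size of the application = total over all requirements.
total = sum(sum(s.values()) for s in requirements.values())
print("application:", total, "SNAP points")   # 25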
Introduction to the concepts of risk and its mitigation:
 Risk mitigation is a systematic process where an organization develops actions and options
to increase opportunities and reduce threats to project objectives.
 Risk mitigation implementation refers to the process of risk mitigation actions.

A software risk is a problem that may or may not occur; this uncertainty is what characterizes risk,
but if the problem does occur, unwanted losses or threatening consequences follow. Risk generally
arises from incomplete information, lack of control, or lack of time. Risk assessment and risk
mitigation together form the process of identifying, assessing, and mitigating risks to the scope,
schedule, cost, and quality of the project.

Risk Assessment:

Risk assessment describes the overall process or method of identifying the risks and problem
factors that might cause harm. It is a systematic examination of a task or project that you perform
to identify significant risks, problems, and hazards, and then to find the control measures you will
take to reduce the risk. A good approach is to prepare a set of questions that project managers can
answer in order to assess overall project risk.
These questions are shown below:
 Will the project get proper support from the customer manager?
 Are end-users committed to the software that has been produced?
 Is there a clear understanding of the requirements?
 Is there active involvement of customers in the requirements definition?
 Are the expectations set for the product realistic?
 Is the project scope stable?
 Are there team members with the required skills?
 Are the project requirements stable?
 Is the technology used in the software known to the developers?
 Is the size of the team sufficient to develop the required product?
 Do all customers understand the importance of the product and the requirements of the
system to be built?
Thus, the number of negative answers to these questions represents the severity of the impact of
risk on the overall project (a simple tally is sketched below). It is not about producing a large
number of work papers, but rather about identifying risks and finding measures to control them in
your workplace.
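The tally itself is trivial; the sketch below scores a hypothetical set of answers to abbreviated versions of the questions above:

answers = {
    "customer management support": True,
    "end users committed": True,
    "requirements clearly understood": False,
    "customers active in requirements definition": True,
    "expectations realistic": False,
    "scope stable": True,
    "team has required skills": True,
    "requirements stable": False,
    "technology known to developers": True,
    "team size sufficient": True,
    "customers understand product importance": True,
}

negatives = sum(1 for ok in answers.values() if not ok)
print(f"{negatives} of {len(answers)} answers negative")   # 3 of 11

The more negative answers, the more severe the risk to the overall project.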

Risk Mitigation:

Risk mitigation means reducing the adverse effects and impact of risks that are harmful to the
project and business continuity. It includes introducing measures and steps into the project plan to
mitigate, reduce, eliminate, or control risk. At its strongest, risk mitigation means preventing risks
from occurring at all (risk avoidance).

Following are measures and steps to be taken for mitigating risks:

 Communicate with concerned staff to find probable risks.


 Identify and eliminate all those causes and issues that can create risk before the
beginning of project work.
 Develop organizational policies that will help the project continue even if some staff
leave the organization.
 Everybody in the project team should be aware of and familiar with the current
development activity.
 Maintain corresponding documents in a timely manner. This documentation should
be strictly followed as per standards set by the organization.
 Conduct timely reviews in order to speed up work.
 Provide additional staff, as required, for conducting every critical activity during
software development.
Risk management:
 Maintain a global perspective: view software risks within the context of the system
and the business problem it is intended to solve.
 Take a forward-looking view: think about risks that may occur in the future and
make plans for managing future events.
 Encourage open communication: encourage all stakeholders and users to point out
risks at any time.
 Integrate: consideration of risk should be integrated into the software process.
 Emphasize a continuous process: modify the identified risks as more information
becomes known, and add new risks as better insight is achieved.
 Develop a shared product vision: if all the stakeholders share the same vision of the
software, risk identification becomes easier.
 Encourage teamwork: when conducting risk management activities, pool the
skills and knowledge of all stakeholders.

Configuration management:

 Configuration Management (CM) is a technique for identifying, organizing, and controlling
modifications to software being built by a programming team.
 The objective is to maximize productivity by minimizing mistakes (errors).
 CM is essential because of the inventory management, library management, and
update management of the items needed for the project.

Why do we need Configuration Management

 Multiple people work on software that is constantly being updated.
 Multiple versions, branches, and authors may be involved in a software project, and the
team may be geographically distributed and working concurrently.
 Changes in user requirements, policy, budget, and schedule need to be accommodated.

Importance of SCM

 It is practical in controlling and managing access to the various SCIs, e.g., by preventing
two members of a team from checking out the same component for modification at the
same time.
 It provides the tools to ensure that changes are being properly implemented.
 It is capable of describing and storing the various constituents of the software.
 SCM keeps a system in a consistent state by automatically producing derived
versions upon modification of components.

SCM Process

 SCM uses tools to ensure that each necessary change has been implemented adequately in
the appropriate component. The SCM process defines a number of tasks:

o Identification of objects in the software configuration


o Version Control
o Change Control
o Configuration Audit
o Status Reporting

Identification

 Basic Object: A unit of text created by a software engineer during analysis, design, coding,
or testing.
 Aggregate Object: A collection of basic objects and other aggregate objects. A design
specification is an aggregate object.
 Each object has a set of distinct characteristics that identify it uniquely: a name, a
description, a list of resources, and a "realization."
 The interrelationships between configuration objects can be described with a Module
Interconnection Language (MIL).

Version Control

 Version control combines procedures and tools to handle the different versions of
configuration objects that are generated during the software process.
 Clemm defines version control in the context of SCM: Configuration management
allows a user to specify the alternative configuration of the software system through the
selection of appropriate versions. This is supported by associating attributes with each
software version, and then allowing a configuration to be specified [and constructed] by
describing the set of desired attributes.

Change Control
 James Bach describes change control in the context of SCM as follows: Change control is
vital. But the forces that make it essential also make it annoying.
 We worry about change because a small confusion in the code can create a big failure in
the product. But it can also fix a significant failure or enable incredible new capabilities.
 We worry about change because a single rogue developer could sink the project, yet
brilliant ideas originate in the mind of those rogues, and
 A burdensome change control process could effectively discourage them from doing
creative work.
 A change request is submitted and evaluated to assess technical merit, potential side
effects, the overall impact on other configuration objects and system functions, and the
projected cost of the change.
 The results of the evaluations are presented as a change report, which is used by a change
control authority (CCA) - a person or a group who makes a final decision on the status and
priority of the change.
 The "check-in" and "check-out" process implements two necessary elements of change
control-access control and synchronization control.
 Access Control governs which software engineers have the authority to access and modify
a particular configuration object.
 Synchronization Control helps to ensure that parallel changes, performed by two
different people, don't overwrite one another.
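The following toy sketch shows how the two controls interact; the class and names are hypothetical and only stand in for what a real SCM tool would do:

class ConfigObject:
    def __init__(self, name, authorized):
        self.name = name
        self.authorized = set(authorized)   # access control list
        self.locked_by = None               # synchronization lock

    def check_out(self, engineer):
        if engineer not in self.authorized:     # access control
            raise PermissionError(f"{engineer} may not modify {self.name}")
        if self.locked_by is not None:          # synchronization control
            raise RuntimeError(f"{self.name} is checked out by {self.locked_by}")
        self.locked_by = engineer

    def check_in(self, engineer):
        if self.locked_by != engineer:
            raise RuntimeError(f"{self.name} is not checked out by {engineer}")
        self.locked_by = None                   # release for others

obj = ConfigObject("payment_module", authorized={"alice", "bob"})
obj.check_out("alice")
# obj.check_out("bob") would now raise: parallel modification is blocked.
obj.check_in("alice")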

Configuration Audit

 An SCM audit verifies that the software product satisfies the baseline requirements and
ensures that what is built is what is delivered.
 SCM audits also ensure that traceability is maintained between all CIs and that all work
requests are associated with one or more CI modifications.
 SCM audits are the "watchdogs" that ensure that the integrity of the project's scope is
preserved.

Status Reporting

 Configuration status reporting (sometimes also called status accounting) provides
accurate status and current configuration data to developers, testers, end users, customers,
and stakeholders through admin guides, user guides, FAQs, release notes, installation
guides, configuration guides, etc.
Software Quality and Reliability: Internal and external qualities; process and product quality;
principles to achieve software quality; introduction to different software quality models like
McCall, Boehm, FURPS / FURPS+, Dromey, ISO – 9126; introduction to Capability Maturity
Models (CMM and CMMI); introduction to software reliability, reliability models and estimation.
UNIT-III

What is Software Quality



Software quality is defined as a field of study and practice that describes the desirable
attributes of software products. There are two main approaches to software quality: defect
management and quality attributes.
 A software defect can be regarded as any failure to address end-user requirements.
Common defects include missed or misunderstood requirements and errors in design,
functional logic, data relationships, process timing, validity checking, and coding errors.
 The software defect management approach is based on counting and managing defects.
Defects are commonly categorized by severity, and the numbers in each category are used
for planning.

Software Quality Attributes Approach

This approach to software quality is best exemplified by fixed quality models, such as ISO/IEC
25010:2011. This standard describes a hierarchy of eight quality characteristics, each composed
of sub-characteristics:

1. Functional suitability
2. Reliability
3. Operability
4. Performance efficiency
5. Security
6. Compatibility
7. Maintainability
8. Transferability
(Figure: ISO/IEC 25010:2011 software quality model.)

Additionally, the standard defines a quality-in-use model composed of five characteristics:

1. Effectiveness
2. Efficiency
3. Satisfaction
4. Safety
5. Usability

A fixed software quality model is often helpful for developing an overall understanding of
software quality. In practice, the relative importance of particular software characteristics typically
depends on software domain, product type, and intended usage. Thus, software characteristics
should be defined for, and used to guide the development of, each product.

Software Reliability

 Software reliability means operational reliability. It is described as the ability of a
system or component to perform its required functions under stated conditions for a specified
period of time.
 Software reliability is also defined as the probability that a software system fulfills its
assigned task in a given environment for a predefined number of input cases, assuming that
the hardware and the input are free of error.
 Software reliability is an essential facet of software quality, together with
functionality, usability, performance, serviceability, capability, installability,
maintainability, and documentation.
 Software reliability is hard to achieve because the complexity of software tends to be high.
While any system with a high degree of complexity, including software, will be hard to
bring to a certain level of reliability, system developers tend to push complexity into the
software layer, given the speedy growth of system size and the ease of doing so by upgrading
the software.

For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.

Internal vs. External Software Quality

External Quality Characteristics:

Correctness
Usability
Efficiency
Reliability
Integrity
Adaptability
Accuracy and
Robustness

Internal Quality Characteristics:

Maintainability
Flexibility
Portability
Re-usability
Readability
Testability and
Understandability

 Internal Quality determines your ability to move forward on a project

 External Quality determines the fulfillment of stakeholder requirements

 Software with a high internal quality is easy to change, easy to add new features, and easy
to test.

 Software with a low internal quality is hard to understand, difficult to change, and
troublesome to extend.

 Measures like McCabe’s cyclomatic complexity, cohesion, coupling, and function points
can all be used to understand internal quality (a small sketch follows this list).

 External software quality is a measure of how the system as a whole meets the requirements
of stakeholders.
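As an illustration of one such measure, the sketch below approximates McCabe's cyclomatic complexity for Python source by counting decision points (complexity = decisions + 1). It is a simplified approximation for illustration, not a full implementation of the metric:

import ast

def cyclomatic_complexity(source: str) -> int:
    """Count decision points in the source and add 1."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        # Each branching construct adds one independent path.
        if isinstance(node, (ast.If, ast.For, ast.While, ast.IfExp,
                             ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b', 'a or b or c': one decision per extra operand.
            decisions += len(node.values) - 1
    return decisions + 1

code = '''
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
'''
print(cyclomatic_complexity(code))   # 3: two decisions + 1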

Benefits
 The business case for unit/integration/system tests is much clearer
 The motivation for a particular type of test is easier to understand
 Understanding the motivation for a particular test avoids mixing test abstraction levels

Feedback from tests

What is interesting about this definition is that different testing approaches garner quality feedback
of different types. End-to-end system tests provide the most feedback on the external quality of the
system, while unit tests provide the most feedback on internal quality.

It also underlines the importance of multiple testing approaches. If you want to make a system that
both meets the stakeholder requirements and is easy to understand and change (who doesn’t?) then
it makes sense to develop with both unit and system level tests.

Principles to achieve Software Quality

In today’s software marketplace, the principal focus is on cost, schedule, and function; quality is
lost in the noise. This is unfortunate since poor quality performance is the root cause of most
software cost and schedule problems. The first step is adopting and demanding that vendors follow
these six principles of software quality:
Principle 1:
If a customer does not demand a quality product, he or she will probably not get one.
Principle 2:
To consistently produce quality products, the developers must manage the quality of their work.
Principle 3:
To manage product quality, the developers must measure quality.
Principle 4:
The quality of a product is determined by the quality of the process used to develop it.
Principle 5:
Since a test removes only a fraction of a product’s defects, to get a quality product out of test you
must put a quality product into test.
Principle 6:
Quality products are only produced by motivated professionals who take pride in their work.

Principle No. 1
If the customer does not demand a quality product, he or she will probably not get one.
 If you want quality products, you must demand them. But how do you do that? That is the
subject of this article. I first define quality, then I discuss quality management, and then
third, I cover quality measurement. Next, I describe the methods for verifying the quality
of software products before you get them, and finally, I give some pointers for those
acquisition managers who would like to consider using these methods.
Defining Quality
 Product developers typically define a quality product as one that satisfies the customer.
However, this definition is not of much help to you, the customer. What you need is a
definition of quality to guide your acquisition process. To get this, you must define what
quality means to you and how you would recognize a quality product if you got one.
 In the broadest sense, a quality product is one that is delivered on time, costs what it was
committed to cost, and flawlessly performs all of its intended functions. While the first two
of these criteria are relatively easy to determine, the third is not. These first two criteria are
part of the normal procurement process and typically receive the bulk of the customer’s
and supplier’s attention during a procurement cycle, but the third is generally the source of
most acquisition problems. This is because poor product quality is often the reason for a
software-intensive system’s cost and schedule problems.
Think of it this way
 If quality did not matter, you would have to accept whatever quality the supplier provided,
and the cost and schedule would be largely determined by the supplier. In simplistic terms,
the supplier’s strategy would be to supply whatever quality level he felt would get the
product accepted and paid for. In fact, even if you had contracted for a specific quality
level, as long as you could not verify that quality level prior to delivery and acceptance
testing, the supplier’s optimum strategy would be to deliver whatever quality level it could
get away with as long as it was paid.
 Since, at least for software, most quality problems do not show up until well after the end
of the normal acquisition cycle, you would be no better off than before. I do not mean to
imply that this is how most suppliers behave, but merely that this would be their most
economically attractive short-term strategy. In the long term, quality work has always
proven to be most economically attractive.
Addressing the Quality Problem
 In principle, there are only two ways to address the software quality problem. First, use a
supplier that has a sufficiently good record of delivering quality products so you will be
comfortable that the products he provides will be of high quality. Then, just leave the
supplier alone to do the development work. The second choice would be to closely monitor
the development process the supplier uses to be assured that the product being produced
will be of the desired quality.
 While the first approach would be ideal, it is not useful when the supplier has historically
had quality problems or where his current performance causes concern. In these cases, you
are left with the second choice: to monitor the development work. To do this, you must
consider the second principle of quality management.
Principle No. 2
To produce quality products consistently, developers must manage the quality of their work.
Managing Product Quality
 While you may want a quality product, if you have no way to determine the product’s
quality until after you get it, you will not be able to pressure the supplier to do quality work
until it is too late. The best time to influence the product’s quality is early in its development
cycle where you can determine the quality of the product before it is delivered and
influence the way the work is done. At least you can do this if your contract provides you
the needed leverage.
 This, of course, means that you must anticipate the product’s quality before it is delivered,
and you must also know what to tell the supplier to do to assure that the delivered product
will actually be of high quality. Therefore, the first need is to predict the product’s quality
before it is built. This is essential, for if you only measure the product’s quality after it has
been built, it is too late to do anything but fix its defects. This results in a defective product
with patches for the known defects. Unless you have an extraordinarily effective test and
evaluation system, you will not then know about most of the product’s defects before you
accept the product and pay the supplier.
 While you might still have warranties and other contract provisions to help you recover
damages, and you might still be able to require the supplier to fix the product’s defects,
these contractual provisions cannot protect you from getting a poor quality product.
Identifying Quality Work
 To determine the likely quality of a product while it is being developed, we must consider
the third principle of quality work.
Principle No. 3
To manage product quality, the developers must measure quality.
 To monitor product quality before delivery you must measure quality during development.
Further, you must require that the developers gather quality measurements and supply them
to you while they do the development work. What measures do you want, and how would
you use them? This article suggests a proven set of quality measures, but first, to define
these measures, we must consider what a quality product looks like.
 While software experts debate this point, every other field of engineering agrees on one
basic characteristic of quality: A quality product contains few, if any, defects. In fact, it has
been shown that this definition is equally true for software. We also know that software
professionals who consistently produce defect-free or near defect-free products are proud
of their work and that they strive to remove all the product’s defects before they begin
testing. Low defect content is one of the principal criteria used for identifying the quality
of software products.
Defining Process Quality
 To define the needed quality measures, we must consider the fourth quality principle.
Principle No. 4
The quality of a product is determined by the quality of the process used to develop it.
 This implies that to manage product quality, we must manage the quality of the process
used to develop that product.
 If a quality product has few if any defects, that means that a quality process must produce
products having few if any defects. What kind of process would consistently produce
products with few if any defects? Some argue that extensive testing is the only way to
produce quality software, and others believe that extensive reviews and inspections are the
answer.
 No single defect-removal method can be relied upon to produce high-quality software
products. A high-quality process must use a broad spectrum of quality management
methods.
 Examples are many kinds of testing, team inspections, personal design and code reviews,
design analysis, defect tracking and analysis, and defect prevention.
 One indicator of the quality of a process is the completeness of the defect management
methods it employs. However, because the methods could be applied with varying
effectiveness, a simple listing of the methods is not sufficient.
 So, given two processes that use similar defect-removal methods, how could you tell which
one would produce the highest quality products? To determine this, you must determine
how well these defect-removal methods were applied. That takes measurement and
analysis.

The Filter View of Defect-Removal


This leads us to the next quality principle.
Principle No. 5
Since a test removes only a fraction of a product’s defects, to get a quality product out of test,
you must put a quality product into test.
 This principle also applies to every defect-removal method, from reviews and inspections,
through all the tests and other quality verification methods.
 Every defect-removal method only removes a fraction of the defects in the product; so to
understand the quality of a development process, you must understand the effectiveness of
the defect-removal methods that were used.
 Further, to predict the quality of the delivered product, you must measure the effectiveness
of every defect-removal step.
 This also means that the highest quality development process would be the one that
removed the highest percentage of the product’s defects early in the process and then had
the lowest number of defects in final testing.
 Finally, this means that the highest-quality products are those with the fewest defects on
entry into the final stage of testing (a small sketch of this filter effect follows).
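The following sketch illustrates this filter effect under assumed yields: each defect-removal step passes on a fraction of the defects it receives, so the quality entering test largely determines the quality coming out. The yields and the initial defect count are hypothetical:

injected = 100.0    # defects present at the start
yields = [
    ("design review", 0.5),
    ("code review",   0.5),
    ("compile",       0.3),
    ("unit test",     0.4),
    ("system test",   0.5),
]

remaining = injected
for step, y in yields:
    remaining *= (1 - y)    # each filter passes (1 - yield) of its input
    print(f"after {step}: {remaining:.1f} defects remain")
# after system test: 5.2 defects remain. Most removal happened before test,
# which is exactly what the criteria above demand.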
Criteria for a Quality Process
 To evaluate a process, you must measure that process and then compare the measures with
your criteria for a quality process. This means that you must have criteria that define what
a quality process looks like. Defect removal is like removing impurities from water.
 To get water that is pure enough to drink, we should find progressively fewer impurities in
each successive filtration step.
 Finally, if we were going to actually drink the water ourselves, we would not want to find
any impurities in the final filtration step.
 In effect, this means that the last filtration step is really used to verify the quality of the
water produced by the prior stages.
 If there were any significant impurities, you would want to put that water through the entire
filtration process again, starting from the very beginning. Then you might be willing to
take a drink. Similarly, for a software system, this suggests three quality criteria.

1. Most of the defects must be found early in the development process.


2. Toward the end of the process, fewer defects should be found in each successive filtration stage.
3. The number of defects found in the final process stages must be fewer than some predefined
minimum.

Determining Process Quality


 While these sound like appropriate process-quality criteria, they have one major failing –
you will not have complete defect data until the end of the process after the product has
been built, tested, accepted, and used.
 During the process you will only know the number of defects found so far and not the
number to be found in future stages. This is a problem because a low number of defects in
a defect-removal stage could be because the product was of high quality, because the
defect-removal stage was improperly performed, or because the defect data on that stage
were incomplete.
 This means that you must have multiple ways to determine the effectiveness of a defect-
removal stage and that these ways must include at least one way to evaluate the
effectiveness of that stage at the time that it is actually enacted. Partial defect data can be
used to do that. In fact, without these data, there is no way to determine the effectiveness
of the defect-removal stages.
 The three things we can measure about a process stage are: (1) the time the developers
spent in that stage, (2) the number of defects removed in that stage, and (3) the size of the
product produced by that stage.
In-Process Quality Measures
 From data on 3,240 Personal Software Process (PSP) exercise programs written by
experienced software developers, the SEI has determined the characteristics of a high-
quality software process. They show that developers inject about 2.0 defects per hour
during detailed design and find about 3.3 defects per hour during detail-level-design
reviews.
 To find the defects injected in one hour of design work, the average developer would have
to spend 60*2/3.3 = 36 minutes reviewing that design. Similarly, since developers inject
an average of about 4.6 defects per hour during coding and find about 6.0 defects per hour
in code reviews, this same average developer should spend about 60*4.6/6 = 46 minutes
reviewing the code produced in each hour. Since there is considerable variation among
developers, the SEI has established the general guideline that developers personally spend
at least half as much time reviewing design or code quality as they spent producing that
design or code.
 Further, from data on many programs, we have found that, when there are fewer than 10
defects found while compiling each 1,000 lines of code and fewer than 5.0 defects found
while unit testing each 1,000 lines of code, that program is likely to have few if any
remaining defects. Combining these criteria with an additional requirement that developers
spend at least as much time designing a program as they spent coding it, gives the following
five software process quality criteria.

Calculating the Quality Profile


 The quality profile has five terms that are derived from the above data. The equations for
these terms are as follows.

1. Design/Code Time = Minimum (design time/coding time: 1.0)


2. Design Review Time = Minimum (2* design review time/design time: 1.0)
3. Code Review Time = Minimum (2* code review time/coding time: 1.0)
4. Compile Defects/KLOC = Minimum (20/(10 + compile defects/KLOC): 1.0)
5. Unit Test Defects/KLOC = Minimum (10/(5 + unit test defects/KLOC): 1.0)

 To derive the five profile terms, consider formula No. 3 for code reviews. In one hour of
coding, a typical software developer will inject 4.6 defects. Since this developer can find
and fix defects at the rate of 6.0 per hour, he or she needs to spend 4.6/6.0 = 0.7667 of an
hour, or about 46 minutes, reviewing the code produced in one hour. Since there is wide
variation in these injection and removal rates, and since the number 0.7667 is hard to
remember, the SEI uses 0.5 as the factor.
 Based on experience to date, this has proven to be suitable. Since these parameter values
are sensitive to application type and operational criticality, we suggest that organizations
periodically analyze their own data and adjust these values accordingly.
 The formula for the code review profile term compares the ratio of the actual time the
developer spent reviewing code with the actual time spent in coding. If that ratio equals or
is greater than 0.5, then the criteria are met.
 The factor of 2 in the equation is used to double both sides of this equation so it compares
twice the ratio of review to coding time with 1.0. Also, to get a useful quality figure of
merit, we need a measure that varies between 0 and 1.0, where 0 is very poor and 1.0 is
good. Therefore, the equation’s value should equal 1.0 whenever 2 times the code review
time is equal to or greater than the coding time and be progressively less with lower
reviewing times.
 This is the reason for the Minimum function in each equation, where Minimum (A:B) is
the minimum of A and B. A little calculation will show that this is precisely the way
equation No. 3 works. Equations No. 1 and No. 2 work in exactly the same way (except
design time should equal or exceed coding time in equation No. 1).
 To produce equations No. 4 and No. 5, the SEI used data it gathered while training software
developers for TSP teams. It found that when more than about 10 defects/thousand lines of
code (KLOC) were found in compiling, programs typically had poor code quality in testing,
and when more than about five defects/KLOC were found in initial (or unit) testing,
program quality was often poor in integration and system testing.
 Therefore, we seek an equation that will produce a value of 1.0 when fewer than 10
defects/KLOC are found in compiling, and we want this value to progressively decrease as
more defects are found. A little calculation will show that this is precisely what equation
No. 4 does. Equation No. 5 works the same way for the value of five defects/KLOC in unit
testing.
 One of the great advantages of these five criteria is that they can be determined at the time
that process step is performed. Therefore, at the end of the design review for example, the
developer can tell if he or she has met the design-review quality criteria. Since these
measures can all be available before integration and system test entry, and since they can
be calculated for every component part of a large system, they provide the information
needed to correct quality problems well before product delivery.
The Process Quality Index
 For large products, it is customary to combine the data for all components into a composite
system quality profile. Since the data for a few poor quality components could then be
masked by the data for a large number of high quality components, it is important to have
a way to identify any potentially defective system components. The process quality index
(PQI) was devised for this purpose. It is calculated by multiplying together the five
components of the quality profile to give a value between 0.0 and 1.0. Then the components
with PQI values below some threshold can be quickly identified and reviewed to see which
ones should be reinspected, reworked, or replaced.
 Experience to date shows that, with PQI values above about 0.4, components typically have
no defects found after development. Since the quality problems for large systems are
normally caused by a relatively small number of defective components, the PQI measure
permits acquisition groups to rapidly pinpoint the likely troublesome components and to
require they be repaired or replaced prior to delivery. Once organizations have sufficient
data, they should reexamine these criteria values and make appropriate adjustments.
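To make the calculation concrete, here is a minimal Python sketch of the five profile terms and the PQI, following the equations given earlier; the component data in the example are hypothetical:

def quality_profile(design_hrs, code_hrs, design_review_hrs,
                    code_review_hrs, compile_defects_per_kloc,
                    unit_test_defects_per_kloc):
    """Return the five quality-profile terms, each clamped to [0, 1]."""
    return [
        min(design_hrs / code_hrs, 1.0),                 # 1. design/code time
        min(2 * design_review_hrs / design_hrs, 1.0),    # 2. design review time
        min(2 * code_review_hrs / code_hrs, 1.0),        # 3. code review time
        min(20 / (10 + compile_defects_per_kloc), 1.0),  # 4. compile defects/KLOC
        min(10 / (5 + unit_test_defects_per_kloc), 1.0), # 5. unit test defects/KLOC
    ]

def pqi(profile):
    """Process quality index: the product of the five profile terms."""
    result = 1.0
    for term in profile:
        result *= term
    return result

# Hypothetical component: 12 h design, 10 h coding, 3 h design review,
# 4 h code review, 12 compile defects/KLOC, 6 unit-test defects/KLOC.
profile = quality_profile(12, 10, 3, 4, 12, 6)
print([round(t, 2) for t in profile])   # [1.0, 0.5, 0.8, 0.91, 0.91]
print(round(pqi(profile), 2))           # 0.33: below the 0.4 threshold, so
                                        # this component would be flagged for review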

Doing Quality Work


 Since few software development groups currently gather the data required to use modern
software quality management practices, we must consider the sixth principle of software
quality.
Principle No. 6
Quality products are only produced by motivated professionals who take pride in the quality of
their work.
 Because the measures required for quality management must be gathered by the software
professionals themselves, these professionals must be motivated to gather and use the
needed data. If they are not, they will either not gather the data or the data will not be very
accurate.
 Experience shows that developers will only be motivated to gather and use data on their
work if they use the data themselves, and if they believe that the practices required to
consistently produce quality software products will help them do better work. Most
developers who have used the TSP believe these things, but without proper training very
few developers will.
 While these measures and quality practices are not difficult, they represent a significant
behavioral change for most practicing software professionals and their management. There
are, however, a growing number of professionals who do practice these methods, and the
SEI now has a program to transition these methods into general practice.
 The methodology involved is the PSP, and to consistently use the PSP methods on a
project, development groups must use the TSP. There is now considerable experience with
these methods, and it shows that with proper use TSP teams typically produce defect-free
or nearly defect-free products at or very close to their committed costs and schedules.
Acquisition Pointers
 Sound quality management is the key to software quality; without appropriate quality
measures, it is impossible to manage the quality of a process or to predict the quality of the
products that process produces.
 The developers must gather and analyze these data; they will not do this unless they know
how to gather and how to use these data. This is why the sixth quality principle is critically
important. Merely ordering the organization to provide the desired data will guarantee
getting lots of numbers that are unlikely to be useful unless quality principle No. 6 is met.
This requires motivating development management, and having development management
train and motivate the developers in the needed quality measurement and management
practices.
 Once the developers regularly gather, analyze, and use these data, there only remains the
question of how acquisition executives can get and use the data. This is both a contracting
and a customer-supplier issue. Experience to date shows that when the developers use the
TSP, you should have no trouble getting the required data.
The specific data needed to measure and manage software quality are the following:

1. The time spent in each phase of the development process. These times must be measured in
minutes.
2. The number of defects found in each defect-removal phase of the process, including reviews,
inspections, compiling, and testing.
3. The sizes of the products produced by each phase, typically in pages, database elements, or lines
of code.

Introduction to different software quality models:

Software quality models are a standardized way of measuring a software product. With the growth
of the software industry, new applications are planned and developed every day. This gives rise to
the need to ensure that each product meets at least the expected standards.

McCall
 McCall software quality model was introduced in 1977.
 The model incorporates many attributes, termed software factors, which influence the
quality of a software product.
 The model distinguishes between two levels of quality attributes :
1. Quality Factors: The higher level quality attributes which can be assessed directly
are called quality factors. These attributes are external attributes. The attributes in this
level are given more importance by the users and managers.
2. Quality Criteria: The lower or second level quality attributes which can be assessed
either subjectively or objectively are called Quality Criteria. These attributes are
internal attributes. Each quality factor is composed of several second-level quality attributes, or
quality criteria.

Example:

The usability quality factor, for example, is divided into operability, training, communicativeness,
input/output volume, and input/output rate. The model classifies all software requirements
into 11 software quality factors, organised into three categories – product operation, product
revision, and product transition. The following are the product quality factors:
1. Product Operation :

It includes five software quality factors, which relate to requirements that directly affect
the operation of the software, such as operational performance, convenience, ease of use,
and correctness. These factors contribute to a better user experience.
 Correctness –
The extent to which a software meets its requirements specification.
 Efficiency –
The amount of hardware resources and code the software needs to
perform a function.
 Integrity –
The extent to which the software can prevent unauthorized persons from
accessing the data or software.
 Reliability –
The extent to which a software performs its intended functions without
failure.
 Usability –
The extent of effort required to learn, operate and understand the functions
of the software.
2. Product Revision :

It includes three software quality factors, which are required for testing and maintenance
of the software. They address ease of maintenance, flexibility, and the testing effort
needed to keep the software functional according to the future needs and requirements
of the user.
 Maintainability –
The effort required to detect and correct an error during maintenance
phase.
 Flexibility –
The effort needed to improve an operational software program.
 Testability –
The effort required to verify a software to ensure that it meets the
specified requirements.

3. Product Transition :

It includes three software quality factors that allow the software to adapt to a change of
environment, such as a new platform or technology.
 Portability –
The effort required to transfer a program from one platform to another.
 Re-usability –
The extent to which the program’s code can be reused in other
applications.
 Interoperability –
The effort required to integrate two systems with one another.
Boehm
 In 1978, B.W. Boehm introduced his software quality model.
 The model is a hierarchical quality model, similar to McCall's, that defines software
quality using a predefined set of attributes and metrics, each of which contributes to the
overall quality of the software.
 The difference between Boehm’s and McCall’s model is that McCall’s model primarily
focuses on precise measurement of high-level characteristics, whereas Boehm’s quality
model is based on a wider range of characteristics.
Example:

Boehm's model includes characteristics of hardware performance that are missing from McCall's
model. Boehm's model has three levels of quality attributes, divided according to their
characteristics: primary uses (high-level characteristics), intermediate constructs (mid-level
characteristics), and primitive constructs (primitive characteristics). The highest level
of Boehm's model has the following three primary uses, stated below –
1. As is utility –
Extent to which, we can use software as-is.
2. Maintainability –
Effort required to detect and fix an error during maintenance.
3. Portability –
Effort required to change software to fit in a new environment.
The next level of Boehm’s hierarchical model consists of seven quality factors associated with
three primary uses, stated as below –
1. Portability –
Effort required to change software to fit in a new environment.
2. Reliability –
Extent to which software performs according to requirements.
3. Efficiency –
Amount of hardware resources and code required to execute a function.
4. Usability (Human Engineering) –
Extent of effort required to learn, operate and understand functions of the software.
5. Testability –
Effort required to verify that software performs its intended functions.
6. Understandability –
Effort required for a user to recognize logical concept and its applicability.
7. Modifiability –
Effort required to modify a software during maintenance phase.
Boehm further classified characteristics into primitive constructs as follows: device
independence, accuracy, completeness, consistency, device efficiency, accessibility,
communicativeness, self-descriptiveness, legibility, structuredness, conciseness, and
augmentability. For example, testability is broken down into accessibility, communicativeness,
structuredness, and self-descriptiveness.
Advantages :
 It focuses and tries to satisfy the needs of the user.
 It focuses on software maintenance cost effectiveness.
Disadvantages :
 It doesn’t suggest how to measure the quality characteristics.
 It is difficult to evaluate the quality of software using the top-down approach.
 So we can say that Boehm’s model is an improved version of McCall’s model and is
used extensively, but because of its top-down approach to assessing software quality,
it cannot always be employed.
FURPS / FURPS+
FURPS is an acronym representing a model for classifying software quality attributes
(functional and non-functional requirements):

 Functionality - Capability (Size & Generality of Feature Set), Reusability


(Compatibility, Interoperability, Portability), Security (Safety & Exploitability)
 Usability (UX) - Human Factors, Aesthetics, Consistency, Documentation,
Responsiveness
 Reliability - Availability (Failure Frequency (Robustness/Durability/Resilience),
Failure Extent & Time-Length (Recoverability/Survivability)), Predictability
(Stability), Accuracy (Frequency/Severity of Error)
 Performance - Speed, Efficiency, Resource Consumption (power, ram, cache, etc.),
Throughput, Capacity, Scalability
 Supportability (Serviceability, Maintainability, Sustainability, Repair Speed) -
Testability, Flexibility (Modifiability, Configurability, Adaptability, Extensibility,
Modularity), Installability, Localizability
 The model, developed at Hewlett-Packard, was first publicly elaborated by Grady and
Caswell.
 FURPS+ is now widely used in the software industry.
 The + was added to the model later, after various campaigns at HP to extend the acronym
to emphasize additional attributes.
Functional requirements
 In software engineering and systems engineering, a functional requirement defines a
function of a system or its component, where a function is described as a specification of
behavior between inputs and outputs.
 Functional requirements may involve calculations, technical details, data manipulation and
processing, and other specific functionality that define what a system is supposed to
accomplish.
 Behavioral requirements describe all the cases where the system uses the functional
requirements; these are captured in use cases. Functional requirements are supported by
non-functional requirements, which impose constraints on the design or implementation.
Generally, functional requirements are expressed in the form "the system must do X," while non-
functional requirements take the form "the system shall be Y."
 The plan for implementing functional requirements is detailed in the system design,
whereas non-functional requirements are detailed in the system architecture.
 As defined in requirements engineering, functional requirements specify particular results
of a system. This should be contrasted with non-functional requirements, which specify
overall characteristics such as cost and reliability.
 Functional requirements drive the application architecture of a system, while non-
functional requirements drive the technical architecture of a system.
Non-functional requirements

 In systems engineering and requirements engineering, a non-functional


requirement (NFR) is a requirement that specifies criteria that can be used to judge the
operation of a system, rather than specific behaviors.
 They are contrasted with functional requirements that define specific behavior or
functions. The plan for implementing functional requirements is detailed in
the system design.
 The plan for implementing non-functional requirements is detailed in
the system architecture, because they are usually architecturally significant requirements.
 Broadly, functional requirements define what a system is supposed to do and non-
functional requirements define how a system is supposed to be.
 Functional requirements are usually in the form "the system shall do X" – an individual action
or part of the system, perhaps described explicitly in the sense of a mathematical function, or
as a black-box description of input, output, process, and control (an IPO model).
 In contrast, non-functional requirements are in the form "the system shall be Y" – an overall
property of the system as a whole or of a particular aspect, not a specific function.
 The system's overall properties commonly mark the difference between whether the
development project has succeeded or failed.
 Non-functional requirements are often mistakenly called the "quality attributes" of a
system; however, there is a distinction between the two.
 Non-functional requirements are the criteria for evaluating how a software system should
perform and a software system must have certain quality attributes in order to meet non-
functional requirements.
 So when we say a system should be "secure", "highly-available", "portable", "scalable" and
so on, we are talking about its quality attributes. Other terms for non-functional
requirements are "qualities", "quality goals", "quality of service requirements",
"constraints", "non-behavioral requirements", or "technical requirements".
 Informally these are sometimes called the "qualities", from attributes like stability and
portability. Non-functional requirements can be divided into two main categories:

1. Execution qualities, such as safety, security and usability, which are observable
during operation (at run time).
2. Evolution qualities, such as testability, maintainability, extensibility and
scalability, which are embodied in the static structure of the system.
Dromey: Dromey’s Quality Model (1995):
Dromey has built a quality evaluation framework that analyzes the quality of software components
through the measurement of tangible quality properties. Each artifact produced in the software life-
cycle can be associated with a quality evaluation model. Dromey gives the following examples of
what he means by software components for each of the different models:
• Variables, functions, statements, etc. can be considered components of the Implementation
model;
• A requirement can be considered a component of the requirements model;
• A module can be considered a component of the design model;
According to Dromey, all these components possess intrinsic properties that can be classified into
four categories:
• Correctness : Evaluates whether some basic principles are violated.
• Internal : Measures how well a component has been deployed according to its intended use.
• Contextual : Deals with the external influences by and on the use of a component.
• Descriptive : Measures the descriptiveness of a component.
 Dromey proposes a product based quality model that recognizes that quality evaluation
differs for each product and that a more dynamic idea for modeling the process is needed
to be wide enough to apply for different systems.
 Dromey is focusing on the relationship between the quality attributes and the sub-attributes,
as well as attempting to connect software product properties with software quality
attributes.
 This quality model presented by R. Geoff Dromey is the most recent of these models and
is similar to McCall's and Boehm's. It is built around three principal elements:
1) Product properties that influence quality.
2) High-level quality attributes.
3) Means of linking the product properties with the quality attributes.
 Dromey’s Quality Model is further structured around a 5 step process:

1) Choose a set of high-level quality attributes necessary for the evaluation.


2) List components/modules in your system.
3) Identify quality-carrying properties for the components/modules (qualities of the component
that have the most impact on the product properties from the list above).
4) Determine how each property affects the quality attributes.
5) Evaluate the model and identify weaknesses.
ISO-9126:
 The ISO/IEC 9126 standard describes a software quality model which categorizes
software quality into six characteristics (factors) which are sub-divided into sub-
characteristics (criteria).
 The characteristics are manifested externally when the software is used as a consequence
of internal software attributes.
 The internal software attributes are measured by means of internal metrics (e.g., monitoring
of software development before delivery).
 The quality characteristics are measured externally by means of external metrics (e.g.,
evaluation of software products to be delivered).
The six characteristics and their sub-characteristics are listed below. For each sub-characteristic,
the standard associates a set of highly related metrics and related metrics.

 Functionality
o Suitability
o Accuracy
o Interoperability
o Security
o Compliance
 Reliability
o Maturity
o Fault-tolerance
o Recoverability
o Compliance
 Usability
o Understandability
o Learnability
o Operability
o Attractiveness
o Compliance
 Efficiency
o Time behavior
o Resource utilization
o Compliance
 Maintainability
o Analyzability
o Changeability
o Stability
o Testability
o Compliance
 Portability
o Adaptability
o Installability
o Co-existence
o Replaceability
o Compliance

Some extended versions of the model also add a Re-Usability characteristic, with the
sub-characteristics Understandability for Reuse, Learnability for Reuse, Operability for Reuse
(Programmability), Attractiveness for Reuse, and Compliance for Reuse.
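Since the model is just a two-level taxonomy, it can be captured compactly in code. The following
minimal Python sketch records the six characteristics and their sub-characteristics as listed above;
the dictionary structure itself is our illustration, not part of the standard, but it could serve as
the skeleton of a quality-evaluation checklist:

```python
# ISO/IEC 9126 characteristics and sub-characteristics, as listed above.
# The dictionary structure is illustrative, not part of the standard itself.
ISO_9126 = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability", "Security", "Compliance"],
    "Reliability": ["Maturity", "Fault-tolerance", "Recoverability", "Compliance"],
    "Usability": ["Understandability", "Learnability", "Operability", "Attractiveness", "Compliance"],
    "Efficiency": ["Time behavior", "Resource utilization", "Compliance"],
    "Maintainability": ["Analyzability", "Changeability", "Stability", "Testability", "Compliance"],
    "Portability": ["Adaptability", "Installability", "Co-existence", "Replaceability", "Compliance"],
}

# Print the model as a checklist.
for characteristic, subs in ISO_9126.items():
    print(f"{characteristic}: {', '.join(subs)}")
```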
McCall’s Quality Model (1977)

Also called the General Electric Model, this model was mainly developed for the US military to
bridge the gap between users and developers. It has 3 major representations for defining
and identifying the quality of a software product, namely:
Product Revision : This identifies quality factors that influence the ability to change the
software product.
(1) Maintainability : Effort required to locate and fix a fault in the program within its
operating environment.
(2) Flexibility : The ease of making changes required as dictated by the business or by changes
in the operating environment.
(3) Testability : The ease of testing the program to ensure that it is error-free and meets its
specification, i.e., validating the software requirements.

Product Transition : This identifies quality factors that influence the ability to adapt the
software to new environments.
(1) Portability : The effort required to transfer a program from one environment to
another.
(2) Re-usability : The ease of reusing software in a different context.
(3) Interoperability: The effort required to couple the system to another system.

Product Operations : This identifies quality factors that influence the extent to which the
software fulfills its specification.
(1) Correctness : The extent to which a functionality matches its specification.
(2) Reliability : The system's ability not to fail, or the extent to which the system fails.
(3) Efficiency : Further categorized into execution efficiency and storage efficiency; generally
means the usage of system resources, e.g., processor time and memory.
(4) Integrity : The protection of program from unauthorized access.
(5) Usability : The ease of using software.
[Figure: McCall’s Quality Model, showing the hierarchy of the 11 quality attributes]
Boehm’s Quality Model (1978):
 Boehm’s model is similar to the McCall Quality Model in that it also presents a hierarchical
quality model structured around high-level characteristics, intermediate level
characteristics, primitive characteristics – each of which contributes to the overall quality
level.
 The high-level characteristics represent basic high-level requirements of actual use to
which evaluation of software quality could be put – the general utility of software. The
high-level characteristics address three main questions that a buyer of software has:

• As-is utility : How well (easily, reliably, efficiently) can I use it as-is?
• Maintainability: How easy is it to understand, modify and retest?
• Portability : Can I still use it if I change my environment?
 The intermediate level characteristic represents Boehm’s 7 quality factors that together
represent the qualities expected from a software system:
• Portability (General utility characteristics): Code possesses the characteristic portability
to the extent that it can be operated easily and well on computer configurations other than
its current one.
• Reliability (As-is utility characteristics): Code possesses the characteristic reliability to
the extent that it can be expected to perform its intended functions satisfactorily.
• Efficiency (As-is utility characteristics): Code possesses the characteristic efficiency to
the extent that it fulfills its purpose without waste of resources.
• Usability (As-is utility characteristics, Human Engineering): Code possesses the
characteristic usability to the extent that it is reliable, efficient and human-engineered.
• Testability (Maintainability characteristics): Code possesses the characteristic testability
to the extent that it facilitates the establishment of verification criteria and supports
evaluation of its performance.
• Understandability (Maintainability characteristics): Code possesses the characteristic
understandability to the extent that its purpose is clear to the inspector.
• Flexibility (Maintainability characteristics, Modifiability): Code possesses the
characteristic modifiability to the extent that it facilitates the incorporation of changes,
once the nature of the desired change has been determined.
 The lowest level structure of the characteristics hierarchy in Boehm’s model is the
primitive characteristics metrics hierarchy. The primitive characteristics provide the
foundation for defining quality metrics, which was one of the goals when Boehm constructed his
quality model. Consequently, the model presents one or more metrics intended to measure each
primitive characteristic.
 Though Boehm’s and McCall’s models might appear very similar, the difference is that
McCall’s model primarily focuses on the precise measurement of the high-level
characteristics “As-is utility”, whereas Boehm’s quality model is based on a wider range
of characteristics with an extended and detailed focus on primarily maintainability.
[Figure: Boehm’s Quality Model]
ISO 9000 Certification

ISO (the International Organization for Standardization) is a group or consortium of 63 countries
established to plan and foster standardization. ISO declared its 9000 series of standards in 1987.
It serves as a reference for the contract between independent parties. The ISO 9000 standard
determines the guidelines for maintaining a quality system. The ISO standard mainly addresses
operational methods and organizational methods such as responsibilities, reporting, etc. ISO 9000
defines a set of guidelines for the production process and is not directly concerned with the
product itself.

Types of ISO 9000 Quality Standards


The ISO 9000 series of standards is based on the assumption that if a proper process is followed
for production, then good quality products are bound to follow automatically. The types of
industries to which the various ISO standards apply are as follows.

1. ISO 9001: This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products but
are only involved in production. Examples of this category include steel and car
manufacturing industries that buy product and plant designs from external sources and are
engaged only in manufacturing those products. Therefore, ISO 9002 does
3. ISO 9003: This standard applies to organizations that are involved only in the installation
and testing of the products. For example, Gas companies.

How to get ISO 9000 Certification?

An organization that decides to obtain ISO 9000 certification applies to a registrar's office for
registration. The process consists of the following stages:
1. Application: Once an organization decides to go for ISO certification, it applies to the
registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the
organization.
3. Document review and adequacy audit: During this stage, the registrar reviews the
documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has
complied with the suggestions it made during the review.
5. Registration: The registrar awards the ISO certification after the successful completion
of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization periodically.

Introduction to Capability Maturity Model (CMM):


CMM was developed by the Software Engineering Institute (SEI) at Carnegie Mellon University
in 1987.
 It is not a software process model. It is a framework that is used to analyze the approach
and techniques followed by any organization to develop software products.
 It also provides guidelines to further enhance the maturity of the process used to
develop those software products.
 It is based on the feedback and development practices adopted by the most
successful organizations worldwide.
 This model describes a strategy for software process improvement that should be
followed by moving through 5 different levels.
 Each level of maturity shows a process capability level. All the levels except level-1
are further described by Key Process Areas (KPA’s).
Shortcomings of SEI/CMM:
 It encourages the achievement of a higher maturity level in some cases by displacing
the true mission, which is improving the process and overall software quality.
 It only helps if it is put into place early in the software development process.
 It has no formal theoretical basis and in fact is based on the experience of very
knowledgeable people.
 It does not have good empirical support and this same empirical support could also be
constructed to support other models.
Key Process Areas (KPA’s):
Each of these KPA’s defines the basic requirements that should be met by a software process in
order to satisfy the KPA and achieve that level of maturity.
Conceptually, key process areas form the basis for management control of the software project and
establish a context in which technical methods are applied, work products like models, documents,
data, reports, etc. are produced, milestones are established, quality is ensured and change is
properly managed.
The 5 levels of CMM are as follows:
Level-1: Initial
 No KPA’s defined.
 Processes followed are ad hoc and immature and are not well defined.
 Unstable environment for software development.
 No basis for predicting product quality, time for completion, etc.
Level-2: Repeatable
 Focuses on establishing basic project management policies.
 Experience with earlier projects is used for managing new similar natured projects.
 Project Planning- It includes defining resources required, goals, constraints, etc. for
the project. It presents a detailed plan to be followed systematically for the successful
completion of good quality software.
 Configuration Management- The focus is on maintaining the performance of the
software product, including all its components, for the entire lifecycle.
 Requirements Management- It includes the management of customer reviews and
feedback which result in some changes in the requirement set. It also consists of
accommodation of those modified requirements.
 Subcontract Management- It focuses on the effective management of qualified
software contractors i.e. it manages the parts of the software which are developed by
third parties.
 Software Quality Assurance- It guarantees a good quality software product by
following certain rules and quality standard guidelines while developing.
Level-3: Defined
 At this level, documentation of the standard guidelines and procedures takes place.
 It is a well-defined integrated set of project-specific software engineering and
management processes.
 Peer Reviews- In this method, defects are removed by using a number of review
methods like walkthroughs, inspections, buddy checks, etc.
 Intergroup Coordination- It consists of planned interactions between different
development teams to ensure efficient and proper fulfillment of customer needs.
 Organization Process Definition- Its key focus is on the development and
maintenance of the standard development processes.
 Organization Process Focus- It includes activities and practices that should be
followed to improve the process capabilities of an organization.
 Training Programs- It focuses on the enhancement of knowledge and skills of the
team members including the developers and ensuring an increase in work efficiency.
Level-4: Managed
 At this stage, quantitative quality goals are set for the organization for software
products as well as software processes.
 The measurements made help the organization to predict the product and process
quality within some limits defined quantitatively.
 Software Quality Management- It includes the establishment of plans and strategies
to develop quantitative analysis and understanding of the product’s quality.
 Quantitative Management- It focuses on controlling the project performance in a
quantitative manner.
Level-5: Optimizing
 This is the highest level of process maturity in CMM and focuses on continuous process
improvement in the organization using quantitative feedback.
 Use of new tools, techniques, and evaluation of software processes is done to prevent
recurrence of known defects.
 Process Change Management- Its focus is on the continuous improvement of the
organization’s software processes to improve productivity, quality, and cycle time for
the software product.
 Technology Change Management- It consists of the identification and use of new
technologies to improve product quality and decrease product development time.
 Defect Prevention- It focuses on the identification of causes of defects and prevents
them from recurring in future projects by improving project-defined processes.
Introduction to Capability Maturity Model (CMMI):
 Capability Maturity Model Integration (CMMI) is a successor of CMM and is a more
evolved model that incorporates best components of individual disciplines of CMM like
Software CMM, Systems Engineering CMM, People CMM, etc.
 Since CMM is a reference model of matured practices in a specific discipline, so it becomes
difficult to integrate these disciplines as per the requirements. This is why CMMI is used
as it allows the integration of multiple disciplines as and when needed.
Objectives of CMMI :
1. Fulfilling customer needs and expectations.
2. Value creation for investors/stockholders.
3. Market growth is increased.
4. Improved quality of products and services.
5. Enhanced reputation in Industry.
CMMI Representation – Staged and Continuous :
A representation allows an organization to pursue a different set of improvement objectives.
There are two representations for CMMI :
 Staged Representation :
 uses a pre-defined set of process areas to define improvement path.
 provides a sequence of improvements, where each part in the sequence
serves as a foundation for the next.
 the improvement path is defined by maturity levels.
 a maturity level describes the maturity of processes in the organization.
 Staged CMMI representation allows comparison between different
organizations for multiple maturity levels.
 Continuous Representation :
 allows selection of specific process areas.
 uses capability levels that measures improvement of an individual process
area.
 Continuous CMMI representation allows comparison between different
organizations on a process-area-by-process-area basis.
 allows organizations to select processes which require more improvement.
 In this representation, order of improvement of various processes can be
selected which allows the organizations to meet their objectives and
eliminate risks.
CMMI Model – Maturity Levels :
In CMMI with staged representation, there are five maturity levels described as follows :
1. Maturity level 1 : Initial
 processes are poorly managed or controlled.
 unpredictable outcomes of processes involved.
 ad hoc and chaotic approach used.
 No KPAs (Key Process Areas) defined.
 Lowest quality and highest risk.
2. Maturity level 2 : Managed
 requirements are managed.
 processes are planned and controlled.
 projects are managed and implemented according to their documented
plans.
 The risk involved is lower than at the Initial level, but still exists.
 Quality is better than at the Initial level.
3. Maturity level 3 : Defined
 processes are well characterized and described using standards, proper
procedures, and methods, tools, etc.
 Medium quality and medium risk involved.
 Focus is process standardization.
4. Maturity level 4 : Quantitatively managed
 quantitative objectives for process performance and quality are set.
 quantitative objectives are based on customer requirements, organization
needs, etc.
 process performance measures are analyzed quantitatively.
 higher quality of processes is achieved.
 lower risk
5. Maturity level 5 : Optimizing
 continuous improvement in processes and their performance.
 improvement has to be both incremental and innovative.
 highest quality of processes.
 lowest risk in processes and their performance.
CMMI Model – Capability Levels
A capability level includes relevant specific and generic practices for a specific process area that
can improve the organization’s processes associated with that process area. For CMMI models
with continuous representation, there are six capability levels as described below :
1. Capability level 0 : Incomplete
 incomplete process – partially or not performed.
 one or more specific goals of process area are not met.
 No generic goals are specified for this level.
 this capability level is the same as maturity level 1.
2. Capability level 1 : Performed
 process performance may not be stable.
 objectives of quality, cost and schedule may not be met.
 a capability level 1 process is expected to perform all specific and generic
practices for this level.
 only a start-step for process improvement.
3. Capability level 2 : Managed
 process is planned, monitored and controlled.
 managing the process by ensuring that objectives are achieved.
 objectives include both model objectives and others such as cost, quality, and schedule.
 the process is actively managed with the help of metrics.
4. Capability level 3 : Defined
 a defined process is managed and meets the organization’s set of
guidelines and standards.
 focus is process standardization.
5. Capability level 4 : Quantitatively Managed
 process is controlled using statistical and quantitative techniques.
 process performance and quality is understood in statistical terms and
metrics.
 quantitative objectives for process quality and performance are
established.
6. Capability level 5 : Optimizing
 focuses on continually improving process performance.
 performance is improved in both ways: incremental and innovative.
 emphasizes on studying the performance results across the organization to
ensure that common causes or issues are identified and fixed.

Introduction to Software Reliability, Reliability Models and Estimation:

Software Reliability Models:

 A software reliability model indicates the form of a random process that describes the
behavior of software failures with respect to time.
 Software reliability models have appeared as people try to understand the features of how
and why software fails, and attempt to quantify software reliability.
 Over 200 models have been established since the early 1970s, but how to quantify software
reliability remains mostly unsolved.
 There is no single model that can be used in all situations. No model is complete or
even representative.

Most software models contain the following parts:

o Assumptions
o Factors

A mathematical function then relates reliability to these elements. The mathematical function
is generally higher-order exponential or logarithmic.

Software Reliability Modeling Techniques


 Both kinds of modeling techniques (prediction models and estimation models, compared
below) are based on observing and accumulating failure data and analyzing it with
statistical inference.

Differentiate between software reliability prediction models and software reliability estimation
models

Data Reference:
- Prediction models use historical data.
- Estimation models use data from the current software development effort.

When used in the development cycle:
- Prediction models are usually applied before the development or test phases, and can be used
as early as the concept phase.
- Estimation models are usually applied later in the life cycle (after some data have been
collected), and are not typically used in the concept or development phases.

Time Frame:
- Prediction models predict reliability at some future time.
- Estimation models estimate reliability at either the present time or some future time.

Reliability Models

 A reliability growth model is a numerical model of software reliability, which predicts how
software reliability should improve over time as errors are discovered and repaired.
 These models help the manager in deciding how much effort should be devoted to testing.
The objective of the project manager is to test and debug the system until the required level
of reliability is reached.

The software reliability models are as follows:
Jelinski and Moranda Model

 The Jelinski-Moranda (JM) model, which is also a Markov process model, has strongly
affected many later models which are in fact modifications of this simple model.

Characteristics of JM Model

1. It is a binomial-type model.
2. It is one of the earliest and most well-known black-box models.
3. The J-M model always yields an over-optimistic reliability prediction.
4. The JM model assumes a perfect debugging step, i.e., the detected fault is removed with
certainty.
5. The constant software failure rate of the J-M model at the i-th failure interval is given by:

λ(ti) = ϕ[N − (i−1)], i = 1, 2, ..., N .........equation 1

Where

ϕ=a constant of proportionality indicating the failure rate provided by each fault

N=the initial number of errors in the software

ti=the time between (i-1)th and (i)th failure.

 The mean value and the failure intensity functions for this model, which belongs to the
binomial type, can be obtained by multiplying the inherent number of faults by the
cumulative distribution and probability density functions (pdf) respectively:

μ(ti) = N(1 − e^(−ϕ·ti)) ..............equation 2

And

λ(ti) = N·ϕ·e^(−ϕ·ti) .............equation 3

Those characteristics, plus four other characteristics of the J-M model, are summarized in the
table below:

Measure of reliability: Formula
Probability density function: f(ti) = ϕ[N − (i−1)]·e^(−ϕ[N − (i−1)]·ti)
Software reliability function: R(ti) = e^(−ϕ[N − (i−1)]·ti)
Failure rate function: λ(ti) = ϕ[N − (i−1)]
Mean time to failure function: 1 / (ϕ[N − (i−1)])
Mean value function: μ(ti) = N(1 − e^(−ϕ·ti))
Failure intensity function: λ(ti) = N·ϕ·e^(−ϕ·ti)
Median: m = ln 2 / (ϕ[N − (i−1)])
Cumulative distribution function: F(ti) = 1 − e^(−ϕ[N − (i−1)]·ti)

Assumptions

The assumptions made in the J-M model include the following:

1. The number of initial software errors is unknown but fixed and constant.
2. Each error in the software is independent and equally likely to cause a failure during a test.
3. Time intervals between occurrences of failure are independent, exponentially distributed
random variables.
4. The software failure rate remains constant over the intervals between fault occurrences.
5. The failure rate is proportional to the number of faults that remain in the software.
6. A detected error is removed immediately, and no new mistakes are introduced during the
removal of the detected defect.
7. Whenever a failure appears, the corresponding fault is removed with certainty.
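To make the formulas above concrete, here is a minimal Python sketch (the parameter values N and
ϕ are hypothetical, chosen only for illustration) that evaluates the J-M failure rate, interval
reliability, and mean time to failure:

```python
import math

def jm_failure_rate(phi: float, N: int, i: int) -> float:
    """Failure rate at the i-th failure interval: lambda(ti) = phi * (N - (i - 1))."""
    return phi * (N - (i - 1))

def jm_reliability(phi: float, N: int, i: int, t: float) -> float:
    """Reliability over an interval of length t: R(ti) = exp(-phi * (N - (i - 1)) * t)."""
    return math.exp(-jm_failure_rate(phi, N, i) * t)

def jm_mttf(phi: float, N: int, i: int) -> float:
    """Mean time to failure at the i-th interval: the reciprocal of the failure rate."""
    return 1.0 / jm_failure_rate(phi, N, i)

# Hypothetical example: N = 100 initial faults, phi = 0.002 failures per fault per hour.
N, phi = 100, 0.002
for i in (1, 50, 99):
    print(f"interval {i}: rate={jm_failure_rate(phi, N, i):.4f}, "
          f"MTTF={jm_mttf(phi, N, i):.1f} h, R(10 h)={jm_reliability(phi, N, i, 10.0):.3f}")
```

As faults are removed (i grows), the failure rate falls and both MTTF and interval reliability
improve, which is exactly the reliability growth behavior the model encodes.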

Variations in JM Model

 The JM model was the first prominent software reliability model. Several researchers showed
interest and modified this model, using different parameters such as failure rate, perfect
debugging, imperfect debugging, number of failures, etc. Below, we discuss the existing
variations of this model, with small illustrative code sketches for some of them.
1. Lipow Modified Version of Jelinski-Moranda Geometric Model

It allows removal of multiple bugs in a time interval. The program failure rate becomes

λ(ti) = D·K^(n(i−1))

where n(i−1) is the cumulative number of errors found up to the (i−1)st time interval.

2. Sukert Modified Schick-Wolverton Model

Sukert modifies the S-W model to allow more than one failure at each time interval. The program
failure rate becomes

Where ni-1 is the cumulative number of failures at the (i-1)th failure interval.

3. Schick Wolverton Model


 The Schick and Wolverton (S-W) model is similar to the J-M model, except that it further
considers that the failure rate at the i-th time interval increases with the time elapsed
since the last debugging.

Assumptions

o Errors occur by accident.


o The bug detection rate in the defined time intervals is constant.
o Errors are independent of each other.
o No new bugs are developed.
o Bugs are corrected after they have been detected.

In the model, the program failure rate method is:

λ(ti) = ϕ[N − (i−1)]·ti

where ϕ is a proportionality constant, N is the initial number of bugs in the program, and ti is
the test time since the (i−1)st failure.
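A short sketch of the S-W failure rate, reusing the same hypothetical parameter values as in the
J-M example above, shows how the rate grows with the test time elapsed since the last debugging:

```python
def sw_failure_rate(phi: float, N: int, i: int, t: float) -> float:
    """S-W failure rate: lambda(ti) = phi * (N - (i - 1)) * ti,
    where ti is the test time since the (i-1)st failure."""
    return phi * (N - (i - 1)) * t

# Hypothetical values: the rate doubles as t goes from 5 to 10 hours.
print(sw_failure_rate(phi=0.002, N=100, i=10, t=5.0))
print(sw_failure_rate(phi=0.002, N=100, i=10, t=10.0))
```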

4. GO-Imperfect Debugging Model

Goel and Okumoto extend the J-M model by assuming that an error is removed with probability
p whenever a failure appears. The program failure rate and reliability at the i-th failure
interval are

λ(ti) = ϕ[N − p(i−1)]

R(ti) = e^(−ϕ[N − p(i−1)]·ti)
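The imperfect-debugging rate is easy to compare against the plain J-M rate in code. The sketch
below (hypothetical parameter values again) shows that with p < 1 the failure rate decreases more
slowly per observed failure:

```python
import math

def go_failure_rate(phi: float, N: int, p: float, i: int) -> float:
    """Goel-Okumoto imperfect debugging: lambda(ti) = phi * (N - p * (i - 1))."""
    return phi * (N - p * (i - 1))

def go_reliability(phi: float, N: int, p: float, i: int, t: float) -> float:
    """R(ti) = exp(-phi * (N - p * (i - 1)) * ti)."""
    return math.exp(-go_failure_rate(phi, N, p, i) * t)

# With p = 0.9, each observed failure removes only 0.9 faults on average,
# so the rate stays above the perfect-debugging (p = 1) case.
print(go_failure_rate(phi=0.002, N=100, p=0.9, i=50))
print(go_failure_rate(phi=0.002, N=100, p=1.0, i=50))
```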

5. Jelinski-Moranda Geometric Model

This model assumes that the program failure rate function is initially a constant D and decreases
geometrically at each failure time. The program failure rate and the reliability of the time
between failures at the i-th failure interval are

λ(ti) = D·K^(i−1)

R(ti) = e^(−D·K^(i−1)·ti)

where K is the parameter of the geometric function, 0 < K < 1.
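In code, the geometric decay of the failure rate is a one-liner. The sketch below uses
hypothetical values for D and K:

```python
import math

def geometric_failure_rate(D: float, K: float, i: int) -> float:
    """Geometric model: the rate starts at D and shrinks by factor K (0 < K < 1) per failure."""
    return D * K ** (i - 1)

def geometric_reliability(D: float, K: float, i: int, t: float) -> float:
    """R(ti) = exp(-D * K**(i - 1) * ti)."""
    return math.exp(-geometric_failure_rate(D, K, i) * t)

# Hypothetical values: D = 0.5 failures/hour initially, K = 0.95.
for i in (1, 10, 50):
    print(i, geometric_failure_rate(D=0.5, K=0.95, i=i))
```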

6. Littlewood-Verrall Bayesian Model

This model assumes that the times between failures are independent exponential random variables
with parameters ξ(i), i = 1, 2, ..., n, where ξ(i) itself has a prior gamma distribution with
parameters Ψ(i) and α reflecting programmer quality and function difficulty. In the associated
failure-rate expression, B represents the fault reduction factor.

7. Shanthikumar General Markov Model

This model expresses the failure intensity as a function of the number of failures removed, as
given below:

λSG(n, t) = Ψ(t)·(N0 − n)

where Ψ(t) is the proportionality factor and N0 is the initial number of faults.

8. An Error Detection Model for Application during Software Development

 The primary feature of this new model is that the variable (growing) size of a developing
program is accommodated so that the quality of a program can be predicted by analyzing
a basic segment.

Assumptions

This model has the following assumptions along with the JM model assumptions:

1. Any tested initial portion of the program is representative of the entire program in the
number and nature of its incipient errors.
2. The detectability of a mistake is unaffected by the "dilution" incurred when the initially
tested portion is augmented by new code.
3. The number of lines of code which exists at any time is known.
4. The growth function and the bug detection process are independent.

9. The Langberg Singpurwalla Model

This model shows how several models used to define the reliability of computer software can be
comprehensively viewed by adopting a Bayesian point of view.

This model provides a different motivation for a commonly used model using notions from shock
models.

10. Jewell Bayesian Software Reliability Model

 Jewell extended a result by Langberg and Singpurwalla (1985) and made an expansion of
the Jelinski-Moranda model.
Assumptions

1. The testing protocol is authorized to run for a fixed length of time, possibly, but not
necessarily, coinciding with a failure epoch.
2. The distribution of the unknown number of defects is generalized from the one-parameter
Poisson distribution by assuming that the parameter is itself a random quantity with a
Beta prior distribution.
3. Although the estimation of the posterior distributions of the parameters leads to complex
expressions, the calculation of the predictive distribution for undetected bugs is
straightforward.
4. Although it is now known that the MLEs for reliability growth can be volatile, if a point
estimator is needed, the predictive value is easily calculated without obtaining the full
distribution first.

11. Quantum Modification to the JM Model

This model replaces the JM Model assumption, each error has the same contribution to the
unreliability of software, with the new assumption that different types of errors may have different
effects on the failure rate of the software.

Failure Rate:

Where
Q = the initial number of failure-quantum units inherent in the software
Ψ = the failure rate corresponding to a single failure-quantum unit
wi = the number of failure-quantum units of the i-th fault, i.e., the size of the i-th failure
quantum

12. Optimal Software Released Based on Markovian Software Reliability Model

In this model, software fault detection is described by a Markovian birth process with absorption.
The model amends the optimal software release policies by taking into account the waste of
software testing time.

13. A Modification to the Jelinski-Moranda Software Reliability Growth Model Based on


Cloud Model Theory

A new unknown parameter θ is included in the JM model parameter estimation such that θ ∈ [θL, θU].
The confidence level is the probability value (1 − α) related to a confidence interval. In general,
if the confidence interval for a software reliability index θ is obtained, we can estimate the
mathematical characteristics of the virtual cloud C(Ex, En, He), which can be converted into a
qualitative system evaluation by an X-condition cloud generator.

14. Modified JM Model with imperfect Debugging Phenomenon

The modified JM model extends the J-M model by relaxing the assumption of a perfect debugging
process. There are two types of incomplete removal:

1. The fault is not deleted successfully, while no new faults are introduced.
2. The fault is not deleted successfully, while new faults are created due to incorrect diagnoses.

Assumptions

The assumptions made in the Modified J-M model contain the following:

o The number of initial software errors is unknown but fixed and constant.
o Each error in the software is independent and equally likely to cause a failure during a
test.
o Time intervals between occurrences of failure are independent, exponentially distributed
random variables.
o The software failure rate remains fixed over the intervals between fault occurrences.
o The failure rate is proportional to the number of errors that remain in the software.
o Whenever a failure occurs, the detected error is removed with probability p, the detected
fault is not entirely removed with probability q, and a new fault is generated with
probability r. So it is evident that p + q + r = 1 and q ≥ r.

List of various characteristics underlying the Modified JM Model with imperfect Debugging
Phenomenon

Measure of reliability: Formula
Software failure rate: λ(ti) = ϕ[N − (i−1)(p−r)]
Failure density function: f(ti) = ϕ[N − (i−1)(p−r)]·exp(−ϕ[N − (i−1)(p−r)]·ti)
Distribution function: F(ti) = 1 − exp(−ϕ[N − (i−1)(p−r)]·ti)
Reliability function at the i-th failure interval: R(ti) = 1 − F(ti) = exp(−ϕ[N − (i−1)(p−r)]·ti)
Mean time to failure function: 1 / (ϕ[N − (i−1)(p−r)])
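The table above differs from the original J-M formulas only in replacing the per-failure fault
reduction of 1 with (p − r). A minimal Python sketch (hypothetical values for p, q, and r) makes
the comparison direct:

```python
import math

def mod_jm_failure_rate(phi: float, N: int, i: int, p: float, r: float) -> float:
    """Modified J-M with imperfect debugging: lambda(ti) = phi * (N - (i - 1) * (p - r)).
    p = prob. the fault is removed, q = prob. it is not removed,
    r = prob. a new fault is introduced; p + q + r = 1 and q >= r."""
    return phi * (N - (i - 1) * (p - r))

def mod_jm_reliability(phi: float, N: int, i: int, p: float, r: float, t: float) -> float:
    """R(ti) = exp(-phi * (N - (i - 1) * (p - r)) * ti)."""
    return math.exp(-mod_jm_failure_rate(phi, N, i, p, r) * t)

# Hypothetical: p = 0.85, r = 0.05 (so q = 0.10); effective reduction per failure is 0.8 faults.
print(mod_jm_failure_rate(phi=0.002, N=100, i=50, p=0.85, r=0.05))
```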


UNIT IV
Problem Space Understanding: How an industry works, how an IT company works, How IT
supports business, Problem Space Understanding, Knowledge Driven Development (KDD),
Domain knowledge framework of KDD, usage of domain knowledge framework in Insurance,
Banking and Automobile, KDD as a project delivery methodology, Linking domain knowledge to
software development, An example to illustrate this, A case study to produce a KDD artifact using
Agile.
Software Requirements Analysis, Design and Construction: Introduction to Software
Requirements Specifications (SRS) and requirement elicitation techniques; techniques for
requirement modelling – decision tables, event tables, state transition tables, Petri nets;
requirements documentation through use cases; introduction to UML, introduction to software
metrics and metrics based control methods; measures of code and design quality.
How an industry works: How an IT company works:
 A software company is a company whose primary products are various forms of software,
software technology, distribution, and software product development.
 They make up the software industry.

Common roles in a software company


Organizing a software company requires a very specialized type of management skill, where
experienced persons can turn an organizational problem into a unique benefit. For example, having
sub-teams spread across different time zones may allow a 24-hour company working day, if the teams,
systems, and procedures are well established. A good example is having the test team in a time zone
8 hours ahead of or behind the development team, so that software bugs found by the testers can be
fixed around the clock.
A professional software company normally consists of at least three dedicated sub-teams :

 Business analysts who define the business needs of the market


 Software developers who create the technical specification and write the software
 Software testers who are responsible for the whole process of quality management
In bigger software companies, greater specialization is employed, and quite often there are
also:

 Technical writers who write all the documentation such as user guides
 Release specialists who are responsible for building the whole product and software
versioning
 User experience designers, who are creating the design architecture based on business
requirements, user research and expertise in usability
 Graphic designers who are normally responsible for the design of the graphical user
interface.
 Maintenance engineers who are behind two, three or more lines of support
 Consultants are responsible for making the solution operational, especially if some
specialist knowledge is necessary.
 Examples of this include: building multidimensional cubes in business intelligence
software, integrating with existing solutions, and implementing business scenarios
in Business Process Management software.

Structure
 The manager of a software company is usually called the Head Of Development
(HOD), and reports to the stakeholders.
 He or she leads the sub-teams directly or via the managers/leaders depending on the size
of the organization.
 Usually, teams of up to 10 people are the most operational.
 In bigger organizations, there are in general two models of the hierarchy:

Typical structure of a software company

 All the teams are fully independent and work separately on different projects.
 The structure is quite simple and all employees report to one person, which makes the
situation quite clear; however, it is not a good solution in terms of knowledge exchange and
optimal usage of human resources.

Matrix structure
 In this model there are dedicated managers/leaders for each main specialization, "renting"
their people for particular projects led by product/project managers, who formally or
informally buy the people and pay for their time.
 This leads to each employee having two bosses: the product/project manager and
the specialized "resource" manager.
 On one hand this optimizes the usage of human resources; on the other hand it may give
rise to conflicts about which manager has priority in the structure.
 There are also a number of variants of these structures, and a number
of organizations have this structure spread and split within various departments and units.

How IT supports business:

6 Reasons Why IT Support is Important for Your Business:


 Every business, whether small or big, needs an effective and reliable IT department.
 An intact IT support enables an organization or business to stay competitive and curb
any potential IT costs.
 In addition, businesses attain higher flexibility by means of IT support, which allows
them to make higher profits.
 However, there are numerous reasons that necessitate crucial IT services.
1. Effective management of data

 For any business, data storage and management are of utmost significance.

 A sound IT department ensures seamless management of your business’ data.

 With a comprehensive IT support team, businesses don’t have to suffer from the problems of
lost files, virus infection, accidental deletion, and so on.

2. Expert IT professionals

 An IT department gives your business access to expert professionals.

 These professionals are the mavens who impart IT training to your staff without incurring
any additional expenses. As a result, your IT training gets financed easily.

3. High-end solutions to technical problems

 Even software in its most excellent form can give you annoying technical glitches.

 An effective IT support matches you with excellent solutions for solving your niggling
issues quickly, which allows you to become more effective in your job.

 In addition, it saves valuable hours from your important day that you would otherwise
spend in fixing numerous issues.

4. Safety from viruses


 IT departments ascertain the security of your computer systems from different sorts of
viruses and other threats.

 Here, the role of an IT department is to provide a combination of standard antivirus
management measures in order to protect your devices.

 Due to this, you get to save your time, money, and other resources.

5. Monitoring at every stage

 It is indeed important to monitor the performance and status of your business at each and
every stage.

 Especially, businesses serving online customers require monitoring at all stages in order to
ensure efficiency.

 For example, for a person running shopping cart software on his/her website, it is
imperative to have proper control.

 Consider a scenario, in which a business’ network goes down for a few hours, which may
lead to huge financial losses due to reduced sales.

 By means of IT support, such risky situations can be avoided easily. You can recover your
site within a few minutes, if you avail IT services.

6. Security of information

 Businesses possess sensitive and crucial information, such as salary, financial, and HR
details.

 By means of IT support, confidential information is kept safe from hacking and other
malicious attempts.

 An IT department is responsible for getting these elements rightly monitored and policed.

 Furthermore, an IT department ensures that preventing data leakage is prioritized and that
staff members don’t disclose the company’s sensitive data to the outside world.
9 Benefits of Outsourcing IT Support services for Business:

1. Effective Data Management

Businesses carry important data such as employees’ salary, income, and HR details. For this
reason, data storage and management are very crucial for any kind of business and it is also a great
example of why IT support is important. The inclusion of competent IT services in data
management enforces deeper assessment of business needs and careful scrutiny of the company’s
data landscape.

An efficient back-up system for all important files and software helps boost a business’ security
against data breach attempts. Hiring a team of highly skilled and knowledgeable IT personnel to
manage and secure a company’s valuable data goes hand-in-hand with the creation of an effective
data management strategy.

When this happens, confidential records are effectively kept safe from hacking and any other
attempt to leak valuable company and employee information.

2. Improve Decision Making


Good business decisions are based on solid market research. The process is possible through video
conferences, reviewing public comments on social media, industry forums and online survey
feedback. These processes are factors that contribute to better business decisions and goal-setting.

There are also digital marketing tools such as Microsoft CRM Dynamics and Google
Analytics that enable companies to track progress and development. On a larger scale, IT software
enhances existing strategies by presenting more precise and advanced alternatives to how core
objectives can be achieved.

3. Solve Complex Problems

Executing advanced and precise solutions to complex problems involving the internal systems that
keep a business running is another concrete example of importance of IT support.

IT services and systems provide businesses the tools needed to obtain improved hardware, such as
high-capacity memory storage, faster processors, and high-quality displays. Combined with smarter
applications, such as mind-mapping software, collaborative systems, and automated processes that
make work more streamlined and organized, these tools help industries research and collate data
easily, analyze information, and plan for scalability. The result is the generation of more viable
solutions to complex business dilemmas.

4. Safety from Viruses and Other Compromising Software


Your IT support services assure the security of your computer systems from a variety of viruses
and other online threats. The role of your IT department is to set a combination of standard
antivirus management to extensively protect your devices. Keeping your computer systems
updated and well-monitored effectively keeps your business from falling prey to the risks of digital
data access and operations.

To give you a better idea as to why technical support is important for maintaining a strong
defensive wall against destructive computer viruses, consider that several companies in the past
have fallen prey to virus, malware, and ransomware attacks. These companies include Dropbox, Pitney
Bowes, Capital One, and Asco. Their business websites, along with the security of their end-users,
were significantly compromised by the unexpected security breaches.

When you commit time and resources in enhancing your IT systems and empowering your tech
support team, it saves time and money while assuring you of long-term protection.

5. Comprehensive Monitoring

It is important to monitor the performance and progress of a business’ internal operations and
customer reach efforts at every stage. Among the best ways that IT can help execute a more refined
supervision of a business’ core operations include improving quality control, facilities planning
and logistics for companies with manufacturing sites, and internal auditing.

Comprehensive monitoring through the aid of a competent IT system is also a must for companies
offering online services to customers. This is to prevent their services as well as the security of
their customers from being jeopardized.

6. Organize Company Manpower and Human Resource Management


Paper-based documents are simply no longer efficient and practical, considering there are more
hi-tech and more manageable substitutes for record-keeping. An information system can be
developed specifically for a business’ unique structure and employment procedures and provide
another concrete exhibition of why IT support is important for startups and steadily growing
businesses.

A great example is the creation of a portal that only in-house employees can access. The portal
contains information about their employment status. This information may range from their job
description and employment contract, to their contact information and the periodic progress of
their individual performances. Moreover, a human resource information system helps determine
between resources and job openings that are still open from those that have already been fulfilled.

7. Enhanced Online Marketing Strategies

Marketing strategies can be amplified by information systems in terms of facilitating more
accurate market research and accumulating valuable data. This includes finding target audiences,
discovering their unique needs and demands, and building a promotional campaign that entices
people to buy.

Likewise, there are algorithms designed to continuously measure online business transactions and
customer purchasing behavior on a daily basis. When planning and deciding on new strategies to
serve a business’ goals, the marketing mix subsystem is a business function of IT that provides
programs to assist the decision making process on the following: introducing new products,
allocating prices, promoting products and services, and distributing and keeping track of sales.
8. Improved Customer Support
Through IT support services, customers can be assisted from multiple communication channels
and it gives end-users more choices for how they can reach a business. Whether it’s through
telephone, email, social media messaging, live chat or even SMS, these channels make customers
reach your business conveniently. Hence, employing IT services to boost customer satisfaction is
a great way for businesses to understand customer behavior.

Applying technology in customer support systems can also take the form of using the benefits of
outsourced IT support. Startups have a limited workforce, and as their services and audience reach
continue to expand, it becomes a challenge to keep up with the increasing volume of queries and
customer concerns. But with a reliable IT system, hiring remote staff to supplement the business’
existing team of support representatives is possible.

IT support services are essential for any kind of business, whether it is a startup or an established
company. It is crucial to not only maintain systems but to also excel through consistent upgrades
that can guarantee the optimum level of operations for your business.

9. Propel Better Branding

The last of the examples describing why technical support is important is the influence it has on
improving branding strategies. When branding is paired with information services and systems, it is
not limited to enhancing existing marketing strategies or helping form a new advertising approach.
Branding can be further augmented by IT through maximizing the originality of a business’ lineup of
products and services.

Developing apps and systems to drive higher customer engagement or boost satisfaction rates, as
well as to gain an edge over competitors, effectively enhance a business’ marketability, purpose
and overall impact.

Providing an app or software to customers that make services more accessible and convenient
consequently drives higher authority on what businesses can offer.

What are some key pointers when executing IT systems for business purposes?
Injecting the importance and advantages of IT in a business’ internal and external operations is an
undeniably major change. It entails necessary cost adjustments and workforce preparation;
otherwise the entire company will fail to adjust to the demands of the technology that it plans to
employ.

Employees must also be properly oriented and given sufficient training in order to be highly
familiar with the software or system. Allocate a budget that will cover for the equipment,
installation, and additional manpower required to avoid any delays in updating the business’
system and workflow.

Problem Space Understanding:


 Problem space is where the needs of your customers reside.
 It is where you learn more about users and their problems to enable you to determine what
your product needs to do.
 Essentially, it is the foundation upon which the solution space stands. However, learning
about customers' needs isn't the simplest of tasks.

Problem Space Vs Solution Space

 Customer problems are known to be at the base of all product development. Yet if we speak
to developers or product managers, they are often not very clear about the requirements.
Here, the problem lies in the terminology.
 There are two concepts buried in the one word “requirements”: customer needs and product
requirements. To clarify this, it is important to understand a high-level
concept: separating the problem space from the solution space.

PROBLEM SPACE

 A market is a set of related customer needs, which rests squarely in the problem space;
you can say “problems” define a market, not “solutions”. A market is not tied to any
specific solutions that meet market needs. It is a broader space.
 There is no product or design that exists in problem space. Instead, problem space is
where all the customer needs that you’d like your product to deliver live. You shouldn’t
interpret the word “needs” too narrowly: Whether it’s a customer pain point, a desire, a job
to be done, or a user story, it lives in problem space.

SOLUTION SPACE
 In the solution space, any product or product design, such as mock-ups, wire-frames, or
prototypes, depends on and is built upon the problem space, but belongs to the solution
space.
 So we can say the problem space is at the base of the solution space. Solution space
includes any product or representation of a product that is used by or intended for use by a
customer. When you build a product, you have chosen a specific implementation. Whether
you have done so explicitly or not, you have determined how the product looks, what it does,
and how it works.

THE WHAT AND HOW APPROACH

 “What” the product needs to accomplish for customers is the problem space. The
“what” describes the benefits the product should give to the target customer.
 “How” the product accomplishes it is the solution space. The “how” is the way
in which the product delivers the “what” to the target customer: the design of
the product and the specific technology used to implement the product.

INSIDE-OUT Vs OUTSIDE-IN PRODUCT DEVELOPMENT

 A failure to gain a clear understanding of the problem space before proceeding to the
solution space is prevalent in product teams that practice “inside-out” product development,
where “inside” refers to the company and “outside” refers to customers and the
market.
 In such teams, the product idea is what the product team thinks would be good to build. They
don’t test the ideas with customers to verify that they would solve actual customer needs.
 The best way to mitigate the risk of an “inside-out” mindset is “outside-in” mindset.
 The product development starts with talking to customers to understand their needs, as well
as what they like and don’t like about existing solutions.
 Outside-in product teams form a robust problem-space definition before starting product
design.

USING THE SOLUTION SPACE TO DISCOVER THE PROBLEM SPACE

 It’s hard for customers to talk about specific benefits they require and their importance.
Even if they do, it’s going to be very vague.
 It is therefore up to the product team to understand these requirements and define the
problem space.
 The problem here is that if you ask a customer about a problem or need, at best they may
just talk about the existing solutions available.
 The reality is that customers are much better at giving you feedback in the solution
space.
 If you show them a new product or design, they can tell you what they like and don’t like.
They can compare it to other solutions and identify pros and cons.
 Hence, having solution space discussions with customers is much more fruitful than
trying to explicitly discuss the problem space with them.
 In this way you can form your hypotheses of problem space.
 The feedback you gather in solution space actually helps you test and improve your problem
space hypotheses.
 The best problem space learning often comes from feedback you receive from
customers on the solution space.

Knowledge Driven Development (KDD):

Domain knowledge framework of KDD:

Software Requirement Specifications

The output of the requirements stage of the software development process is the Software
Requirements Specification (SRS), also called a requirements document. This report lays a
foundation for software engineering activities and is constructed once the entire set of
requirements has been elicited and analyzed. The SRS is a formal report which acts as a
representation of the software, enabling customers to review whether it meets their requirements.
It also comprises user requirements for the system as well as detailed specifications of the system
requirements.

The SRS is a specification for a specific software product, program, or set of applications that
perform particular functions in a specific environment. It serves several goals depending on who
is writing it. First, the SRS could be written by the client of the system. Second, the SRS could
be written by a developer of the system. The two approaches create entirely different situations
and establish different purposes for the document altogether. In the first case, the SRS is used to
define the needs and expectations of the users. In the second case, the SRS is written for various
purposes and serves as a contract document between customer and developer.

Characteristics of good SRS

Following are the features of a good SRS document:

1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS.
The SRS is said to be correct if it covers all the needs that are truly expected from the system.

2. Completeness: The SRS is complete if, and only if, it includes the following elements:

(1). All essential requirements, whether relating to functionality, performance, design, constraints,
attributes, or external interfaces.
(2). Definition of the responses of the software to all realizable classes of input data in all
available categories of situations.

(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all
terms and units of measure.

3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. There are three types of possible conflict in the SRS:

(1). The specified characteristics of real-world objects may conflict. For example,

(a) The format of an output report may be described in one requirement as tabular but in another
as textual.

(b) One condition may state that all lights shall be green while another states that all lights shall
be blue.

(2). There may be a logical or temporal conflict between two specified actions. For
example,

(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.

(b) One condition may state that "A" must always follow "B," while another requires that "A" and
"B" occur simultaneously.

(3). Two or more requirements may define the same real-world object but use different terms for
that object. For example, a program's request for user input may be called a "prompt" in one
requirement's and a "cue" in another. The use of standard terminology and descriptions promotes
consistency.

4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. If a term is used with
multiple possible meanings, the requirements report should clarify the intended meaning in the SRS
so that it is clear and simple to understand.

5. Ranking for importance and stability: The SRS is ranked for importance and stability if each
requirement in it has an identifier to indicate either the significance or stability of that particular
requirement.

Typically, all requirements are not equally important. Some prerequisites may be essential,
especially for life-critical applications, while others may be desirable. Each element should be
identified to make these differences clear and explicit. Another way to rank requirements is to
distinguish classes of items as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be able to
accommodate changes to the system quickly, to some extent. Modifications should be properly indexed
and cross-referenced.

7. Verifiability: The SRS is verifiable when the specified requirements can be checked by a cost-
effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.

8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it
facilitates the referencing of each condition in future development or enhancement documentation.

There are two types of Traceability:

1. Backward Traceability: This depends upon each requirement explicitly referencing its source
in earlier documents.

2. Forward Traceability: This depends upon each element in the SRS having a unique name or
reference number.

The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design documents are modified, it is necessary to be
able to ascertain the complete set of requirements that may be affected by those modifications.

9. Design Independence: There should be an option to select from multiple design alternatives
for the final system. More specifically, the SRS should not contain any implementation details.

10. Testability: An SRS should be written in such a method that it is simple to generate test cases
and test plans from the report.

11. Understandable by the customer: An end user may be an expert in his/her own domain
but might not be trained in computer science. Hence, the use of formal notations and symbols
should be avoided as much as possible. The language should be kept simple and clear.

12. The right level of abstraction: If the SRS is written for the requirements stage, the details
should be explained explicitly, whereas for a feasibility study, less detail is needed. Hence,
the level of abstraction varies according to the objective of the SRS.

Properties of a good SRS document:

The essential properties of a good SRS document are the following:

Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and
complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Structured: It should be well-structured. A well-structured document is simple to understand and
modify. In practice, the SRS document undergoes several revisions to cope with user
requirements, which often evolve over a period of time. Therefore, to make modifications to the
SRS document easy, it is vital to make the report well-structured.

Black-box view: It should only define what the system should do and refrain from stating how to
do these. This means that the SRS document should define the external behavior of the system and
not discuss the implementation issues. The SRS report should view the system to be developed as
a black box and should define the externally visible behavior of the system. For this reason, the
SRS report is also known as the black-box specification of a system.

Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.

Response to undesired events: It should characterize acceptable responses to undesired events.
These are called system responses to exceptional conditions.

Verifiable: All requirements of the system, as documented in the SRS document, should be
verifiable. This means that it should be possible to decide whether or not each requirement has
been met in an implementation.

Requirements Elicitation

 Requirements elicitation is perhaps the most difficult, most error-prone and most
communication-intensive part of software development.
 It can be successful only through an effective customer-developer partnership.
 It is needed to know what the users really need.

Requirement’s elicitation Activities:

 Knowledge of the overall area where the system is applied.


 The details of the precise customer problem to which the system is going to be applied
must be understood.
 Interaction of system with external requirements.
 Detailed investigation of user needs.
 Define the constraints for system development.

Requirement’s elicitation Methods:


There are a number of requirements elicitation methods. Few of them are listed below –
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach
 The success of an elicitation technique used depends on the maturity of the analyst, developers,
users, and the customer involved.
1. Interviews:
Objective of conducting an interview is to understand the customer’s expectations from the
software.
It is impossible to interview every stakeholder hence representatives from groups are selected
based on their expertise and credibility.
Interviews may be open-ended or structured.
1. In open-ended interviews there is no pre-set agenda. Context free questions may be
asked to understand the problem.
2. In a structured interview, an agenda of fairly open questions is prepared. Sometimes a
proper questionnaire is designed for the interview.
2. Brainstorming Sessions:
 It is a group technique
 It is intended to generate lots of new ideas hence providing a platform to share views
 A highly trained facilitator is required to handle group bias and group conflicts.
 Every idea is documented so that everyone can see it.
 Finally, a document is prepared which consists of the list of requirements and their
priority if possible.
3. Facilitated Application Specification Technique:
Its objective is to bridge the expectation gap – the difference between what the developers think
they are supposed to build and what customers think they are going to get.
A team-oriented approach is developed for requirements gathering.
Each attendee is asked to make a list of objects that are-
1. Part of the environment that surrounds the system
2. Produced by the system
3. Used by the system
Each participant prepares his/her list, different lists are then combined, redundant entries are
eliminated, team is divided into smaller sub-teams to develop mini-specifications and finally a
draft of specifications is written down using all the inputs from the meeting.
4. Quality Function Deployment:
In this technique customer satisfaction is of prime concern, hence it emphasizes on the
requirements which are valuable to the customer.
3 types of requirements are identified –
 Normal requirements –
In this the objective and goals of the proposed software are discussed with the
customer. Example – normal requirements for a result management system may be
entry of marks, calculation of results, etc
 Expected requirements –
These requirements are so obvious that the customer need not explicitly state them.
Example – protection from unauthorized access.
 Exciting requirements –
It includes features that are beyond customer’s expectations and prove to be very
satisfying when present. Example – when unauthorized access is detected, it should
backup and shutdown all processes.
The major steps involved in this procedure are –
1. Identify all the stakeholders, e.g., users, developers, customers, etc.
2. List out all requirements from customer.
3. A value indicating degree of importance is assigned to each requirement.
4. In the end the final list of requirements is categorized as –
 It is possible to achieve
 It should be deferred and the reason for it
 It is impossible to achieve and should be dropped off
5. Use Case Approach:
This technique combines text and pictures to provide a better understanding of the requirements.
The use cases describe the ‘what’ of a system and not the ‘how’. Hence, they only give a functional
view of the system.
The components of use case design include three major things – actors, use cases, and the use case
diagram.
1. Actor –
It is an external agent that lies outside the system but interacts with it in some way. An
actor may be a person, a machine, etc. It is represented as a stick figure. Actors can be
primary actors or secondary actors.
 Primary actor – It requires assistance from the system to achieve a goal.
 Secondary actor – It is an actor from which the system needs assistance.
2. Use cases –
They describe the sequence of interactions between the actors and the system. They
capture who (the actors) does what (the interactions) with the system. A complete set of use cases
specifies all possible ways to use the system.
3. Use case diagram –
A use case diagram graphically represents what happens when an actor interacts with
a system. It captures the functional aspect of the system.
 A stick figure is used to represent an actor.
 An oval is used to represent a use case.
 A line is used to represent a relationship between an actor and a use case.
What Is Requirements Elicitation?
It is all about obtaining information from stakeholders. In other words, once the business analyst
has communicated with stakeholders to understand their requirements, it can be described as
elicitation. It can also be described as requirement gathering.
Requirement elicitation can be done by communicating with stakeholders directly or by doing
some research or experiments. The activities can be planned, unplanned, or both.

 Planned activities include workshops, experiments.


 Unplanned activities happen randomly. Prior notice is not required for such
activities. For example, you directly go to the client site and start discussing the
requirements even though no specific agenda was published in advance.
Following tasks are the part of elicitation:
 Prepare for Elicitation: The purpose here is to understand the elicitation activity
scope, select the right techniques, and plan for appropriate resources.
 Conduct Elicitation: The purpose here is to explore and identify information
related to change.
 Confirm Elicitation Results: In this step, the information gathered in the
elicitation session is checked for accuracy.
We hope you have got an idea about requirement elicitation by now. Let’s move on to the
requirements elicitation techniques.

Requirements Elicitation Techniques:


#1) Stakeholder Analysis
Stakeholders can include team members, customers, any individual who is impacted by the project,
or a supplier. Stakeholder analysis is done to identify the stakeholders who will be
impacted by the system.

#2) Brainstorming
This technique is used to generate new ideas and find a solution for a specific issue. The members
included in brainstorming can be domain experts and subject matter experts. Multiple ideas and
pieces of information give you a repository of knowledge from which you can choose.

This session is generally conducted as a round-table discussion. All participants should be given
an equal amount of time to express their ideas.

Brainstorming technique is used to answer the below questions:


 What is the expectation of a system?
 What are the risk factors that affect the proposed system development and what to
do to avoid that?
 What are the business and organizational rules that must be followed?
 What are the options available to resolve the current issues?
 What should we do so that this particular issue does not happen in the future?
There are some basic rules for this technique which should be followed to make it a success:
 The time limit for the session should be predefined.
 Identify the participants in advance. One should include 6-8 members for the
session.
 The agenda should be clear enough for all the participants.
 Clear expectations should be set with the participants.
 Once you get all the information, combine the ideas, and remove the duplicate
ideas.
 Once the final list is ready, distribute it among other parties.
Benefits:
 Creative thinking is the result of the brainstorming session.
 Plenty of ideas in a short time.
 Promotes equal participation.
Drawbacks:
 Participants can be involved in debating ideas.
 There can be multiple duplicate ideas.
#3) Interview
This is the most common technique used for requirement elicitation. Interview techniques should
be used for building strong relationships between business analysts and stakeholders. In this
technique, the interviewer directs the question to stakeholders to obtain information. One to one
interview is the most commonly used technique.

If the interviewer has a predefined set of questions, then it’s called a structured interview.
If the interviewer does not have any particular format or specific questions, then it’s called
an unstructured interview.
For an effective interview, you can consider the 5 Whys technique. When you get an answer to all
your Whys, you are done with your interview process. Open-ended questions are used to obtain
detailed information; the interviewee cannot answer them with just Yes or No.

Closed questions can be answered in Yes or No form and are used to get confirmation
on answers.

Basic Rules:
 The overall purpose of performing the interviews should be clear.
 Identify the interviewees in advance.
 Interview goals should be communicated to the interviewee.
 Interview questions should be prepared before the interview.
 The location of the interview should be predefined.
 The time limit should be described.
 The interviewer should organize the information and confirm the results with the
interviewees as soon as possible after the interview.
Benefits:
 Interactive discussion with stakeholders.
 The immediate follow-up to ensure the interviewer’s understanding.
 Encourage participation and build relationships by establishing rapport with the
stakeholder.
Drawbacks:
 Time is required to plan and conduct interviews.
 Commitment is required from all the participants.
 Sometimes training is required to conduct effective interviews.
#4) Document Analysis/Review
This technique is used to gather business information by reviewing/examining the available
materials that describe the business environment. This analysis is helpful to validate the
implementation of current solutions and is also helpful in understanding the business need.

Document analysis includes reviewing the business plans, technical documents, problem reports,
existing requirement documents, etc. This is useful when the plan is to update an existing system.
This technique is useful for migration projects.

This technique is important in identifying the gaps in the system i.e. to compare the AS-IS process
with the TO-BE process. This analysis also helps when the person who has prepared the existing
documentation is no longer present in the system.

Benefits:
 Existing documents can be used to compare current and future processes.
 Existing documents can be used as a base for future analysis.
Drawbacks:
 Existing documents might not be updated.
 Existing documents might be completely outdated.
 Resources worked on the existing documents might not be available to provide
information.
 This process is time-consuming.
#5) Focus Group
By using a focus group, you can get information about a product, service from a group. The Focus
group includes subject matter experts. The objective of this group is to discuss the topic and
provide information. A moderator manages this session.

The moderator should work with business analysts to analyze the results and provide findings to
the stakeholders.

If a product is under development and the discussion is required on that product then the result
will be to update the existing requirement or you might get new requirements. If a product is ready
to ship then the discussion will be on releasing the product.

How are focus groups different from group interviews?


A Focus group is not an interview session conducted as a group; rather it is a discussion during
which feedback is collected on a specific subject. The session results are usually analyzed and
reported. A focus group typically consists of 6 to 12 members. If you want more participants then
create more than one focus group.

Benefits:
 You can get information in a single session rather than conducting one to one
interview.
 Active discussion with the participants creates a healthy environment.
 One can learn from other’s experiences.
Drawbacks:
 It might be difficult to gather the group on the same date and time.
 If you are doing this using the online method then the participant’s interaction will
be limited.
 A Skilled Moderator is required to manage focus group discussions.
#6) Interface Analysis
Interface analysis is used to review the system, people, and processes. This analysis is used to
identify how information is exchanged between components. An interface can be described
as a connection between two components.
The interface analysis focus on the below questions:
1. Who will be using the interface?
2. What kind of data will be exchanged?
3. When will the data be exchanged?
4. How to implement the interface?
5. Why do we need the interface? Can’t the task be completed without using it?
Benefits:
 Provide missed requirements.
 Determine regulations or interface standards.
 Uncover areas where it could be a risk for the project.
Drawbacks:
 The analysis is difficult if internal components are not available.
 It cannot be used as a standalone elicitation activity.
#7) Observation
The main objective of the observation session is to understand the activity, task, tools used, and
events performed by others.

The plan for observation ensures that all stakeholders are aware of the purpose of the observation
session, that they agree on the expected outcomes, and that the session meets their expectations. You
need to inform the participants that their performance is not being judged.

During the session, the observer should record all the activities and the time taken to perform the
work by others so that he/she can simulate the same. After the session, the BA will review the
results and will follow up with the participants. Observation can be either active or passive.

Active observation is to ask questions and try to attempt the work that other persons are doing.
Passive observation is silent observation, i.e., you sit with others and just observe how they are
doing their work without interrupting them.
Benefits:
 The observer will get a practical insight into the work.
 Improvement areas can be easily identified.
Drawbacks:
 Participants might get disturbed.
 Participants might change their way of working during observation and the observer
might not get a clear picture.
 Knowledge-based activities cannot be observed.
#8) Prototyping
Prototyping is used to identify missing or unspecified requirements. In this technique, frequent
demos are given to the client by creating prototypes so that the client can get an idea of how the
product will look. Prototypes can be used to create mock-ups of sites and to describe processes
using diagrams.

Benefits:
 Gives a visual representation of the product.
 Stakeholders can provide feedback early.
Drawbacks:
 If the system or process is highly complex, the prototyping process may become
time-consuming.
 Stakeholders may focus on the design specifications of the solution rather than the
requirements that any solution must address.
#9) Joint Application Development (JAD)/ Requirement Workshops
This technique is more process-oriented and formal as compared to other techniques. These are
structured meetings involving end-users, PMs, SMEs. This is used to define, clarify, and complete
requirements.

This technique can be divided into the following categories:


 Formal Workshops: These workshops are highly structured and are usually
conducted with the selected group of stakeholders. The main focus of this workshop
is to define, create, refine, and reach closure on business requirements.
 Business Process Improvement Workshops: These are less formal as compared
to the above one. Here, existing business processes are analyzed and process
improvements are identified.
Benefits:
 Documentation is completed within hours and is provided quickly back to
participants for review.
 You can get on the spot confirmation on requirements.
 Successfully gathered requirements from a large group in a short period.
 Consensus can be achieved as issues and questions are asked in the presence of all
the stakeholders.
Drawbacks:
 Stakeholder’s availability might ruin the session.
 The success rate depends on the expertise of the facilitator.
 A workshop motive cannot be achieved if there are too many participants.
#10) Survey/Questionnaire
For Survey/Questionnaire, a set of questions is given to stakeholders to quantify their thoughts.
After collecting the responses from stakeholders, data is analyzed to identify the area of interest of
stakeholders.

Questions should be based on high priority risks. Questions should be direct and unambiguous.
Once the survey is ready, notify the participants and remind them to participate.

Two types of questions can be used here:


 Open-Ended: Respondent is given the freedom to provide answers in their own
words rather than selecting from predefined responses. This is useful but at the
same time, this is time- consuming as interpreting the responses is difficult.
 Close Ended: It includes a predefined set of answers for all the questions and the
respondent has to choose from those answers. Questions can be multiple choice or
can be ranked from not important to very important.
Benefits:
 Easy to get data from a large audience.
 Less time is required for the participants to respond.
 You can get more accurate information as compared to interviews.
Drawback:
 All the Stakeholders might not participate in the surveys.
 Questions may not be clear to all the participants.
 Open-ended questions require more analysis.
 Follow up surveys might be required based on the responses provided by
participants.


Elements of the Requirements Model

 Scenario-based elements :
Using a scenario-based approach, the system is described from the user’s point of view. For
example, basic use cases and their corresponding use-case diagrams evolve into more
elaborate template-based use cases. Figure 1(a) depicts a UML activity diagram for
eliciting requirements and representing them using use cases. There are three levels of
elaboration.
 Class-based elements :
A collection of things that have similar attributes and common behaviors, i.e., objects,
are categorized into classes. For example, a UML class diagram can be used to depict
a Sensor class for the SafeHome security function. Note that the diagram lists the attributes
of sensors and the operations that can be applied to modify these attributes. In addition to
class diagrams, other analysis modeling elements depict the manner in which classes
collaborate with one another and the relationships and interactions between classes.

 Behavioral elements :
The behavior of a computer-based system can affect the design that is chosen and the
implementation approach that is applied. Modeling elements that depict behavior
must be provided by the requirements model.

UML activity diagrams for eliciting requirements


Class diagram for sensor

A state diagram is a method for representing the behavior of a system by depicting its states
and the events that cause the system to change state. A state is an externally observable mode
of behavior. In addition, a state diagram indicates actions taken as a consequence of a
particular event.
To illustrate the use of a state diagram, consider the software embedded within the SafeHome
control panel that is responsible for reading user input. A simplified UML state diagram
is shown in figure 2.

Figure 2: UML state diagram notation

 Flow-oriented elements :
As information flows through a computer-based system, it is transformed. The system
accepts input, applies functions to transform it, and produces output in various
forms. Input may be a control signal transmitted by a transducer, a series of numbers
typed by a human operator, a packet of information transmitted on a network link, or a
voluminous data file retrieved from secondary storage. The transform may comprise a
single logical comparison, a complex numerical algorithm, or the rule-inference
approach of an expert system. Output may produce a 200-page report or may light a single
LED. In effect, we can create a flow model for any computer-based system,
regardless of size and complexity.

Decision Table:
A decision table is a brief visual representation for specifying which actions to perform depending
on given conditions. The information represented in decision tables can also be represented as
decision trees or in a programming language using if-then-else and switch-case statements.
A decision table is a good way to deal with different combinations of inputs and their corresponding
outputs, and it is also called a cause-effect table. It gets this name from a related
logical diagramming technique, cause-effect graphing, that is basically used to derive the
decision table.
Importance of Decision Table:

 Decision tables are very much helpful in test design techniques.


 It helps testers to explore the effects of combinations of different inputs and other
software states that must correctly implement business rules.
 It provides a systematic way of stating complex business rules, which is helpful for
developers as well as for testers.
 It assists the development process, helping the developer do a better job; testing with
all combinations might otherwise be impractical.
 A decision table is basically an outstanding technique used in both testing and
requirements management.
 It is a structured exercise to prepare requirements when dealing with complex business
rules.
 It is also used to model complicated logic.

Decision Table in test designing:

Blank Decision Table


CONDITIONS STEP 1 STEP 2 STEP 3 STEP 4
Condition 1
Condition 2
Condition 3
Condition 4

Decision Table: Combinations


CONDITIONS STEP 1 STEP 2 STEP 3 STEP 4
Condition 1 Y Y N N
Condition 2 Y N Y N
Condition 3 Y N N Y
Condition 4 N Y Y N
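
Such a table maps mechanically onto executable checks. Below is a minimal Python sketch (the rule values and the evaluate function are hypothetical, chosen only to illustrate the idea that each column becomes one test case):

# Each column of the decision table becomes one test case: a
# combination of condition values plus the expected outcome.
rules = [
    {"c1": True,  "c2": True,  "c3": True,  "c4": False, "expected": "approve"},
    {"c1": True,  "c2": False, "c3": False, "c4": True,  "expected": "review"},
    {"c1": False, "c2": True,  "c3": False, "c4": True,  "expected": "review"},
    {"c1": False, "c2": False, "c3": True,  "c4": False, "expected": "reject"},
]

def evaluate(c1, c2, c3, c4):
    # Hypothetical business rule under test (c3 is a "don't care" here).
    if c1 and c2:
        return "approve"
    if c4:
        return "review"
    return "reject"

for i, rule in enumerate(rules, start=1):
    actual = evaluate(rule["c1"], rule["c2"], rule["c3"], rule["c4"])
    assert actual == rule["expected"], f"Rule {i} failed: got {actual}"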

Advantage of Decision Table:

 Any complex business flow can be easily converted into test scenarios & test cases
using this technique.
 Decision tables work iteratively, which means the table created in the first iteration is
used as the input table for the next tables. Iteration is done only if the initial table is
not satisfactory.
 Simple to understand and everyone can use this method to design the test scenarios &
test cases.
 It provides complete coverage of test cases which helps to reduce the rework on writing
test scenarios & test cases.
 These tables guarantee that we consider every possible combination of condition
values. This is known as its completeness property.

Introduction to Software Metrics:

The standard of measure for estimating the quality, progress, and health of a software
effort is called a software metric. Metrics can be divided into three groups: product metrics, process
metrics, and project metrics. Product characteristics like size, design features,
complexity, performance, level of quality, etc., are described using product metrics. In contrast,
process metrics are used to improve software development and maintenance. A project’s
characteristics and execution are described by project metrics, examples of which include the count
of software developers, cost, etc.

It is necessary to develop software metrics based on some guidelines. Those guidelines are:

 It must be simple and computable. The derivation must be easy to learn, and the time
and effort involved must be reasonable.
 The results given must be objective and consistent; they should not be ambiguous.
 It must make use of consistent units and dimensions if there are mathematical computations
involved.
 The development of metrics should be based on an analysis model, a design model, or the
structure of the model, and it should be independent of the programming language.
 A metric is effective if and only if it helps deliver high-quality software products.
 It must be able to adapt to the changing requirements of the project; that is, calibration
must be easy.
 The cost of developing the metrics must be reasonable, and one must be able to obtain them easily.
 If software metrics are used for making decisions, they must be validated before being
applied to those decisions.
 The developed metric must be robust: it should not be overly sensitive to small changes
in the project, process, or product.
 The value of a metric should change as the software characteristic it represents changes,
and it should lie within a meaningful range. Let us
say, for example, the range of a software metric is zero to five.

Types of Software Metrics


1. Process Metrics
Process metrics are used to measure the characteristics of the process of software development.
The example includes the efficiency of detection of fault etc. The characteristics of the methods,
tools, and techniques used for software development can be measured using process metrics.

2. Product Metrics
The characteristics of the software product are measured using product metrics. Some of the
important characteristics of the software are:

 Software size and complexity


 Software reliability and quality

Computation of these metrics is done for different stages of the software development lifecycle.

3. Internal Metrics
The properties which are of great importance to a software developer can be measured using the
metrics called internal metrics. An example is a measure of Lines of code (LOC).
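
As a toy illustration of an internal metric, a crude non-blank, non-comment LOC count can be computed in a few lines of Python (a simplified sketch; real LOC tools also handle block comments, strings, and language grammar, and the file name below is hypothetical):

def count_loc(path):
    # Counts non-blank lines that are not '#' comments -- a crude
    # approximation of the Lines of Code (LOC) internal metric.
    loc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

print(count_loc("example.py"))   # hypothetical file path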

4. External Metrics
The properties which are of great importance to a user can be measured using the metrics called
external metrics. An example is portability, reliability, usability, etc.

5. Project Metrics
The progress of the project is checked by the project manager using the metrics called project
metrics. Various metrics such as time, cost, etc., are collected by using the data from the projects
in the past, and they are used as an estimate for the new software. The project manager checks the
progress of the project from time to time, and effort, time and cost are compared with the original
effort, time and cost. The cost of development, efforts, risks and time can be reduced by using
these metrics. The quality of the project can also be improved. With the increase in quality, there
is a reduction in the number of errors, time, cost, etc.

Advantages

 The design methodology of the software systems can be studied comparatively.


 The characteristics of various programming languages can be studied for analysis and
comparison using software metrics.
 The software quality specifications can be prepared using software metrics.
 The compliance of requirements and specifications of software systems can be verified.
 The effort that needs to be put into the development and design of software systems can be
inferred.
 The complexity of the code can be determined.
 The decision of whether to divide a complex module or not can be done.
 Resource managers can be guided to utilize resources to their fullest.
 Design trade-offs and comparing maintenance costs and software development costs can
be done.
 The progress and quality of different phases of the software development life cycle can be
measured, and feedback can be given to the project managers using software metrics.
 The allocation of resources to test the code can be done based on software metrics.

Disadvantages

 It is not easy to apply metrics in all cases; in some cases it is difficult and expensive.
 It is difficult to verify the validity of the historical or empirical data on which the verification
and justification are based.
 Software products can be managed, but the technical staff’s performance cannot be
evaluated using software metrics.
 Software metrics are defined and derived using the available tools and the working
environment, and there is no standard for defining and deriving them.
 Certain variables are estimated based on predictive models and are often not known
precisely.
UNIT-V
Software Testing: Introduction to faults and failures; basic testing concepts; concepts of
verification and validation; black box and white box tests; white box test coverage – code coverage,
condition coverage, branch coverage; basic concepts of black-box tests – equivalence classes,
boundary value tests, usage of state tables; testing use cases; transaction based testing; testing for
non-functional requirements – volume, performance and efficiency; concepts of inspection.

Introduction to faults and failures:


Fault :

 A fault is an incorrect step in a process or an incorrect data definition in a computer program that is
responsible for the unintended behavior of the program.
 Faults or bugs in hardware or software may cause errors.
 An error can be defined as a part of the system state which may lead to the failure of the system.
Basically, an error in a program is an indication that a failure has occurred or is about to occur.
 If there are multiple components in the system, errors in the system will lead to
component failures.
 As the many components in the system interact with each other, the failure of one
component might introduce one or more faults into the system. The following
cycle shows the behavior of a fault.

Figure: Fault Behavior

Types of fault :
Different types of faults can occur in software products. In order to remove a fault, we
have to know what type of fault our program is facing. The following are the types
of faults:
Figure: Types of Faults

1. Algorithm Fault :
This type of fault occurs when the component's algorithm or logic does not provide the
proper result for the given input due to wrong processing steps. It can often be
removed by reading through the program, i.e., desk checking.
2. Computational Fault :
This type of fault occurs when the implementation of a computation is wrong or not capable of
calculating the desired result, e.g., combining integer and floating-point variables may
produce unexpected results.
3. Syntax Fault :
This type of fault occurs due to the use of wrong syntax in the program. We have to use
the proper syntax of the programming language we are using.
4. Documentation Fault :
The documentation of a program tells what the program actually does. This fault
occurs when the program does not match its documentation.
5. Overload Fault :
For memory purposes, we use data structures like arrays, queues, stacks, etc. in our
programs. When they are filled to their given capacity and we use them
beyond that capacity, an overload fault occurs in our program.
6. Timing Fault :
When the system does not respond within the specified time after a failure occurs in the program,
this type of fault is referred to as a timing fault.
7. Hardware Fault :
This type of fault occurs when the specified hardware for the given software does
not work properly. Basically, it is due to a problem with the hardware that does not
conform to the specification.
8. Software Fault :
This can occur when the specified software is not working properly or does not support the
platform, or, we can say, the operating system being used.
9. Omission Fault :
This can occur when a key aspect is missing from the program, e.g., when the initialization
of a variable is not done in the program.
10. Commission Fault :
This can occur when a statement or expression is wrong, e.g., an integer is initialized with
a float (see the sketch after this list).
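
As a small illustration (hypothetical Python snippets, not from the original material), an omission fault and a commission fault might look like this:

# Omission fault: the initialization of `total` is missing, so
# calling this function raises UnboundLocalError.
def total_with_omission(values):
    for v in values:
        total = total + v   # fault: `total` was never initialized
    return total

# Commission fault: a wrong statement is present -- floor division
# is used where true division was intended.
def average_with_commission(values):
    return sum(values) // len(values)   # fault: should be / not //

print(average_with_commission([1, 2]))   # prints 1, expected 1.5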
Fault Avoidance :
 Faults in a program can be avoided by using techniques and procedures which aim to
avoid the introduction of faults during any phase of the safety lifecycle of the safety-
related system.
Fault Tolerance :
 It is the ability of a functional unit to continue to perform a required function even in the
presence of faults.

Basic testing concepts:


 Software testing is the process of evaluating and verifying that a software product or
application does what it is supposed to do.
 The benefits of testing include preventing bugs, reducing development costs and improving
performance.

Software Testing Techniques:


 Software testing techniques are the ways employed to test the application under test against
the functional or non-functional requirements gathered from business. Each testing technique
helps to find a specific type of defect.
 For example, techniques which may find structural defects might not be able to find
defects against the end-to-end business flow.
 Hence, multiple testing techniques are applied in a testing project to conclude it with acceptable
quality.

Principles of Testing:

1. All the tests should meet the customer's requirements.


2. To keep testing unbiased, it should be performed by a third party.
3. Exhaustive testing is not possible; we need an optimal amount of testing based on
the risk assessment of the application.
4. All the tests to be conducted should be planned before being implemented.
5. Testing follows the Pareto rule (80/20 rule), which states that 80% of errors come from 20%
of program components.
6. Start testing with small parts and extend to larger parts.

Types of Software Testing Techniques:

There are two main categories of software testing techniques:


1. Static Testing Techniques are testing techniques which are used to find defects in
Application under test without executing the code. Static Testing is done to avoid errors
at an early stage of the development cycle and thus reducing the cost of fixing them.
2. Dynamic Testing Techniques are testing techniques that are used to test the dynamic
behavior of the application under test, that is by the execution of the code base. The
main purpose of dynamic testing is to test the application with dynamic inputs- some
of which may be allowed as per requirement (Positive testing) and some are not allowed
(Negative Testing).
Each testing technique has further types as showcased in the below diagram. Each one of them
will be explained in detail with examples below.
Testing Techniques

Static Testing Techniques:

Static Testing techniques are testing techniques that do not require the execution of a code base.
Static Testing Techniques are divided into two major categories:
1. Reviews: They can range from purely informal peer reviews between two
developers/testers on the artifacts (code/test cases/test data) to totally
formal Inspections which are led by moderators who can be internal/external to the
organization.
1. Peer Reviews: Informal reviews are generally conducted without any
formal setup. It is between peers. For Example- Two developers/Testers
review each other’s artifacts like code/test cases.
2. Walkthroughs: Walkthrough is a category where the author of work (code
or test case or document under review) walks through what he/she has done
and the logic behind it to the stakeholders to achieve a common
understanding or for the intent of feedback.
3. Technical review: It is a review meeting that focuses solely on the
technical aspects of the document under review to achieve a consensus. It
has less or no focus on the identification of defects based on reference
documentation. Technical experts like architects/chief designers are
required for doing the review. It can vary from Informal to fully formal.
4. Inspection: Inspection is the most formal category of reviews. Before the
inspection, The document under review is thoroughly prepared before going
for an inspection. Defects that are identified in the Inspection meeting are
logged in the defect management tool and followed up until closure. The
discussion on defects is avoided and a separate discussion phase is used for
discussions, which makes Inspections a very effective form of reviews.
2. Static Analysis: Static analysis is an examination of requirements, code, or design with
the aim of identifying defects that may or may not cause failures. For example,
reviewing the code for adherence to coding standards: not following a standard is a defect
that may or may not cause a failure. There are many tools for static analysis that are
mainly used by developers before or during component or integration
testing. Even a compiler is a static analysis tool, as it points out incorrect usage of
syntax and does not execute the code per se. There are several aspects to code
structure, namely data flow, control flow, and data structure.
1. Data Flow: It means how the data trail is followed in a given program –
How data gets accessed and modified as per the instructions in the program.
By Data flow analysis, You can identify defects like a variable definition
that never got used.
2. Control flow: It is the structure of how program instructions get executed,
i.e., conditions, iterations, or loops. Control flow analysis helps to identify
defects such as dead code, i.e., code that never gets executed under any
condition.
3. Data Structure: It refers to the organization of data irrespective of code.
The complexity of data structures adds to the complexity of code. Thus, it
provides information on how to test the control flow and data flow in a given
code.

Dynamic Testing Techniques:

Dynamic techniques are subdivided into three categories:


1. Structure-based Testing:
These are also called White box techniques. Structure-based testing techniques are focused on how
the code structure works and test accordingly. To understand Structure-based techniques, We first
need to understand the concept of code coverage.
Code Coverage is normally done in Component and Integration Testing. It establishes what code
is covered by structural testing techniques out of the total code written. One drawback of code
coverage is that- it does not talk about code that has not been written at all (Missed requirement),
There are tools in the market that can help measure code coverage.
There are multiple ways to test code coverage:
1. Statement coverage: Number of Statements of code exercised/Total number of statements. For
Example, If a code segment has 10 lines and the test designed by you covers only 5 of them then
we can say that statement coverage given by the test is 50%.
2. Decision coverage: Number of decision outcomes exercised/Total number of Decisions. For
Example, If a code segment has 4 decisions (If conditions) and your test executes just 1, then
decision coverage is 25%
3. Conditional/Multiple condition coverage: It has the aim to identify that each outcome of every
logical condition in a program has been exercised.
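
The statement and decision coverage formulas above reduce to simple arithmetic; the numbers from the two examples can be checked directly (a trivial Python sketch):

# Statement coverage = statements exercised / total statements.
statements_exercised, total_statements = 5, 10
print(statements_exercised / total_statements * 100)   # 50.0 (%)

# Decision coverage = decision outcomes exercised / total decisions.
decisions_exercised, total_decisions = 1, 4
print(decisions_exercised / total_decisions * 100)     # 25.0 (%)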
2. Experience-Based Techniques:
These are techniques of executing testing activities with the help of experience gained over the
years. Domain skill and background are major contributors to this type of testing. These techniques
are used mainly for UAT/business user testing. They work on top of structured techniques like
specification-based and structure-based testing and complement them. Here are the types of
experience-based techniques:
1. Error guessing: It is used by a tester who has either very good experience in testing or with the
application under test, and hence may know where a system might have a weakness. It cannot
be an effective technique when used stand-alone, but it is really helpful when used along with
structured techniques.
2. Exploratory testing: It is hands-on testing where the aim is to have maximum execution
coverage with minimal planning. The test design and execution are carried out in parallel without
documenting the test design steps. The key aspect of this type of testing is the tester’s learning
about the strengths and weaknesses of an application under test. Similar to error guessing, it is
used along with other formal techniques to be useful.
3. Specification-based Techniques:
This includes both functional and nonfunctional techniques (i.e. quality characteristics). It
basically means creating and executing tests based on functional or non-functional specifications
from the business. Its focus is on identifying defects corresponding to given specifications. Here
are the types of specification-based techniques:
1. Equivalence partitioning: It is generally used together with boundary value analysis and can be
applied at any level of testing. The idea is to partition the input range of data into valid and invalid
sections such that each partition is considered “equivalent”. Once we have the partitions identified,
it only requires us to test with one value from a given partition, assuming that all values in the
partition will behave the same. For example, if the input field takes values between 1-999, then
values between 1-999 will yield similar results, and we need NOT test with each value to call the
testing complete.
2. Boundary Value Analysis (BVA): This analysis tests the boundaries of the range- both valid
and invalid. In the example above, 0,1,999, and 1000 are boundaries that can be tested. The
reasoning behind this kind of testing is that more often than not, boundaries are not handled
gracefully in the code.
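
Continuing the 1-999 example, the equivalence partitions and boundary values translate directly into test inputs. A minimal sketch follows (the accepts function is a hypothetical system under test, assumed to accept values 1 through 999):

def accepts(value):
    # Hypothetical function under test: valid range is 1-999.
    return 1 <= value <= 999

# Equivalence partitioning: one representative per partition.
assert accepts(500) is True     # valid partition 1..999
assert accepts(-5) is False     # invalid partition below range
assert accepts(1500) is False   # invalid partition above range

# Boundary value analysis: test just inside and just outside.
for v, expected in [(0, False), (1, True), (999, True), (1000, False)]:
    assert accepts(v) is expected, f"boundary {v} mishandled"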
3. Decision Tables: These are a good way to test the combination of inputs. It is also called
a Cause-Effect table. In layman’s language, One can structure the conditions applicable for the
application segment under test as a table and identify the outcomes against each one of them to
reach an effective test.
1. It should be taken into consideration that there are not too many combinations, so that
the table does not become too big to be effective.
2. Take the example of a credit card that is issued only if both the credit score and salary
limit criteria are met. This can be illustrated in the decision table below:

Decision Table
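
Since the original table image is not reproduced here, a plausible reconstruction of the credit card example (assuming exactly two Boolean conditions) would be:

CONDITIONS             RULE 1   RULE 2   RULE 3   RULE 4
Credit score met         Y        Y        N        N
Salary limit met         Y        N        Y        N
ACTION
Issue credit card        Y        N        N        N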

4. Use case-based Testing: This technique helps us to identify test cases that exercise the system
as a whole – like an actual user (actor), transaction by transaction. Use cases are a sequence of
steps that describe the interaction between the actor and the system. They are always defined in
the language of the actor, not the system. This testing is most effective in identifying
integration defects. A use case also defines any preconditions and postconditions of the process flow.
An ATM machine example can be tested via a use case:

Use case-based Testing
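
The original figure is likewise not reproduced; a typical ATM withdrawal use case (illustrative only, not taken from the source) reads:

Use case: Withdraw Cash
Actors: Bank customer (primary), bank server (secondary)
Precondition: The ATM is online and the customer holds a valid card
Main flow:
1. The customer inserts the card; the system prompts for the PIN.
2. The customer enters the PIN; the system validates it with the bank server.
3. The customer selects "Withdraw" and enters an amount.
4. The system checks the balance, dispenses the cash, and returns the card.
Postcondition: The account balance is reduced by the withdrawn amount.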

5. State Transition Testing: It is used where an application under test, or a part of it, can be treated
as an FSM, or finite state machine. Continuing the simplified ATM example above, we can say that
the ATM flow has finite states and hence can be tested with the state transition technique. There are
4 basic things to consider –
1. The states a system can achieve
2. The events that cause a change of state
3. The transitions from one state to another
4. The outcomes of a change of state
A state-event pair table can be created to derive test conditions – both positive and negative.
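
A minimal sketch of state-transition testing in Python follows (the ATM states and events below are assumptions for illustration, not the original figure):

# State-event pairs for a simplified ATM, as a transition table.
transitions = {
    ("idle", "card_inserted"): "awaiting_pin",
    ("awaiting_pin", "pin_ok"): "ready",
    ("awaiting_pin", "pin_bad"): "idle",
    ("ready", "withdraw"): "dispensing",
    ("dispensing", "cash_taken"): "idle",
}

def next_state(state, event):
    # Invalid state/event pairs are negative test conditions.
    return transitions.get((state, event), "error")

# Positive test: a full happy-path sequence ends back at idle.
state = "idle"
for event in ["card_inserted", "pin_ok", "withdraw", "cash_taken"]:
    state = next_state(state, event)
assert state == "idle"

# Negative test: withdrawing before PIN entry is rejected.
assert next_state("awaiting_pin", "withdraw") == "error"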
Concepts of verification and validation:
 Verification and validation are processes that collect evidence of a model's correctness or
accuracy for a specific scenario; thus, V&V cannot prove that a model is correct and accurate
for all possible conditions and applications, but, rather, it can provide evidence that a model is
sufficiently accurate.
 Verification process includes checking of documents, design, code and program whereas
Validation process includes testing and validation of the actual product.
 Verification checks whether the software conforms to a specification, whereas validation
checks whether the software meets the requirements and expectations.
 The four fundamental methods of verification are Inspection, Demonstration, Test, and
Analysis. The four methods are somewhat hierarchical in nature, as each verifies requirements
of a product or system with increasing rigor.
 Method validation is the process used to confirm that the analytical procedure employed
for a specific test is suitable for its intended use. Results from method validation can be used
to judge the quality, reliability and consistency of analytical results; it is an integral part of any
good analytical practice.

What is Verification Testing ?

 Verification is the process of evaluating work-products of a development phase to determine


whether they meet the specified requirements.
 Verification ensures that the product is built according to the requirements and design
specifications. It also answers to the question, Are we building the product right?

Verification Testing - Workflow:

Verification testing can be best demonstrated using the V-Model. Artefacts such as test plans,
requirement specifications, design, code, and test cases are evaluated.
Activities:

 Reviews
 Walkthroughs
 Inspection

Validation Testing

 The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
 Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
 It answers to the question, Are we building the right product?

Validation Testing - Workflow:

Validation testing can be best demonstrated using V-Model. The Software/product under test is
evaluated during this type of testing.

Activities:

 Unit Testing
 Integration Testing
 System Testing
 User Acceptance Testing

Black box and White box tests:

Black Box Testing vs White Box Testing:

Software Testing can be majorly classified into two categories:

1. Black Box Testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is not known to the tester

2. White Box Testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is known to the tester.

Differences between Black Box Testing vs White Box Testing:

Black Box Testing | White Box Testing
It is a way of software testing in which the internal structure, program, or code is hidden and nothing is known about it. | It is a way of testing the software in which the tester has knowledge of the internal structure, code, or program of the software.
It is mostly done by software testers. | It is mostly done by software developers.
No knowledge of implementation is needed. | Knowledge of implementation is required.
It can be referred to as outer or external software testing. | It is the inner or internal software testing.
It is a functional test of the software. | It is a structural test of the software.
This testing can be initiated on the basis of the requirement specification document. | This type of testing is started after the detailed design document is available.
No knowledge of programming is required. | It is mandatory to have knowledge of programming.
It is the behavior testing of the software. | It is the logic testing of the software.
It is applicable to the higher levels of software testing. | It is generally applicable to the lower levels of software testing.
It is also called closed testing. | It is also called clear box testing.
It is the least time-consuming. | It is the most time-consuming.
It is not suitable or preferred for algorithm testing. | It is suitable for algorithm testing.
It can be done by trial-and-error ways and methods. | Data domains along with inner or internal boundaries can be better tested.
Example: searching something on Google using keywords. | Example: supplying input to check and verify loops.

Types of Black Box Testing:

A. Functional Testing
B. Non-functional testing
C. Regression Testing

White box Testing:

 White box testing techniques analyze the internal structures of the used data structures, internal
design, code structure and the working of the software rather than just the functionality as in
black box testing.
 It is also called glass box testing or clear box testing or structural testing.
Working process of white box testing:
 Input: Requirements, Functional specifications, design documents, source code.
 Processing: Performing risk analysis for guiding through the entire process.
 Proper test planning: Designing test cases so as to cover the entire code. Execute rinse-
and-repeat until error-free software is reached. Also, the results are communicated.
 Output: Preparing final report of the entire testing process.

Testing techniques:
 Statement coverage: In this technique, the aim is to traverse all statements at least once.
Hence, each line of code is tested. In the case of a flowchart, every node must be traversed
at least once. Since all lines of code are covered, it helps in pointing out faulty code.

Statement Coverage Example
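
Since the example figure is not reproduced, here is a small hypothetical illustration of statement coverage in Python:

def classify(x):
    if x < 0:
        return "negative"
    return "non-negative"

# classify(-1) alone executes 2 of the 3 statements in the function
# body (about 67% statement coverage); adding classify(1) executes
# the remaining return, reaching 100% statement coverage.
assert classify(-1) == "negative"
assert classify(1) == "non-negative"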

 Branch Coverage: In this technique, test cases are designed so that each branch from
all decision points is traversed at least once. In a flowchart, all edges must be traversed
at least once.

4 test cases are required such that all branches of all decisions are covered, i.e., all edges of the
flowchart are covered.

 Condition Coverage: In this technique, all individual conditions must be covered, as
shown in the following example:

READ X, Y
IF (X == 0 || Y == 0)
    PRINT '0'

In this example, there are 2 conditions: X == 0 and Y == 0. The tests must make each of
these conditions evaluate to both TRUE and FALSE. One possible set of test cases would be:

TC1 – X = 0, Y = 55
TC2 – X = 5, Y = 0

 Multiple Condition Coverage: In this technique, all possible combinations of the
outcomes of the conditions are tested at least once. Consider the following
example:

READ X, Y
IF (X == 0 || Y == 0)
    PRINT '0'

TC1: X = 0, Y = 0
TC2: X = 0, Y = 5
TC3: X = 55, Y = 0
TC4: X = 55, Y = 5

Hence, four test cases are required for two individual conditions.
Similarly, if there are n conditions, then 2^n test cases would be required.

 Basis Path Testing: In this technique, a control flow graph is made from the code or
flowchart, and then the cyclomatic complexity is calculated, which defines the number of
independent paths, so that a minimal number of test cases can be designed, one for each
independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements or a decision point. A predicate node is one
that represents a decision point containing a condition, after which the graph splits.
Regions are bounded by nodes and edges.
Cyclomatic Complexity: It is a measure of the logical complexity of the software and
is used to define the number of independent paths. For a graph G, V(G) is its cyclomatic
complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number
of nodes
3. V(G) = the number of non-overlapping regions in the graph
Example:

V(G) = 4 (using any of the above formulae)


Number of independent paths = 4

P1: 1 – 2 – 4 – 7 – 8
P2: 1 – 2 – 3 – 5 – 7 – 8
P3: 1 – 2 – 3 – 6 – 7 – 8
P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
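
The formula V(G) = E – N + 2 is easy to check mechanically. The sketch below encodes a flow graph consistent with the four paths listed above (the edge list is an assumption reconstructed from those paths):

# Flow graph as a list of directed edges between numbered nodes.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (3, 6),
         (4, 7), (5, 7), (6, 7), (7, 1), (7, 8)]

nodes = {n for edge in edges for n in edge}
print(len(edges) - len(nodes) + 2)   # 10 - 8 + 2 = 4 independent paths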

 Loop Testing: Loops are widely used and are fundamental to many algorithms;
hence, their testing is very important. Errors often occur at the beginnings and ends of
loops.
1. Simple loops: For simple loops of size n, test cases are designed that (see the
sketch after this list):
 Skip the loop entirely
 Make only one pass through the loop
 Make 2 passes
 Make m passes, where m < n
 Make n-1, n and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum
count, and we start from the innermost loop. Simple loop tests are
conducted for the innermost loop, and this is worked outwards until all the
loops have been tested.
3. Concatenated loops: These are independent loops, one after another. Simple loop
tests are applied to each.
If they are not independent, treat them like nested loops.
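
The simple-loop schedule above can be generated mechanically. A small sketch (the choice of m = n // 2 is an arbitrary assumption for a value with m < n):

def loop_test_passes(n):
    # Pass counts for a simple loop of size n: skip, one pass, two
    # passes, some m < n passes, and n-1, n, n+1 passes.
    m = n // 2
    return sorted({0, 1, 2, m, n - 1, n, n + 1})

print(loop_test_passes(10))   # [0, 1, 2, 5, 9, 10, 11]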
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code, removing errors and helping to remove extra lines
of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.

Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming
language as opposed to black box testing.
4. Missing functionalities cannot be detected, as only the code that exists is tested.
5. Very complex and at times not realistic.

Black box testing:

 Black box testing is a type of software testing in which the internal working of the software is not
known to the tester.
 The testing is done without internal knowledge of the product. Black box testing can be
done in the following ways:

1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example, compilers and languages that can be represented by a
context-free grammar. In this, the test cases are generated so that each grammar rule is used at least
once.

2. Equivalence partitioning – It is often seen that many types of input work similarly, so instead
of testing all of them separately we can group them together and test only one input from each group.
The idea is to partition the input domain of the system into a number of equivalence classes such
that each member of a class works in a similar way, i.e., if a test case for one member of a class
results in some error, the other members of the class would result in the same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into a minimum of two
sets: valid values and invalid values. For example, if the valid range is 0 to 100, then
select one valid input like 49 and one invalid input like 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two
invalid inputs mask each other.
To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
• A whole number which is a perfect square – the output will be an integer.
• A whole number which is not a perfect square – the output will be a decimal
number.
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers, like "a", "!", ";", etc.
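A hedged sketch of these classes as concrete test cases, using Python's math.sqrt as a stand-in for the function under test, with one representative input per class:

import math

# One representative input per equivalence class.
cases = [
    (25,   "valid: perfect square, integer result"),
    (26,   "valid: non-perfect square, decimal result"),
    (6.25, "valid: positive decimal"),
    (-4,   "invalid: negative number"),
    ("a",  "invalid: non-numeric character"),
]

for value, label in cases:
    try:
        print(label, "->", math.sqrt(value))
    except (ValueError, TypeError) as err:  # invalid classes should be rejected
        print(label, "-> rejected:", err)

If the representative of a class misbehaves, every other member of that class is presumed to misbehave in the same way, so no further inputs from it need to be tested.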

3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test
cases are designed for the boundary values of the input domain, the efficiency of testing improves
and the probability of finding errors increases. For example, if the valid range is 10 to 100, then
test 10 and 100 (and values just outside the range, such as 9 and 101) apart from typical valid and invalid inputs.
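A minimal sketch that derives the usual boundary test values for a valid range, here 10 to 100 as in the example (including the just-outside values is an assumption of this sketch):

def boundary_values(low, high):
    # Values at, just inside, and just outside each boundary of [low, high].
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(10, 100))  # prints [9, 10, 11, 99, 100, 101]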

4. Cause effect Graphing – This technique establishes a relationship between the logical inputs,
called causes, and the corresponding actions, called effects. The causes and effects are represented
using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effects).
2. Develop the cause effect graph.
3. Transform the graph into a decision table.
4. Convert the decision table rules to test cases.
For example, in the following cause effect graph:
(Cause effect graph figure omitted.)
It can be converted into a decision table like:
(Decision table figure omitted.)
Each column of the table corresponds to a rule, which becomes a test case for testing. So there will be 4
test cases.
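Since the original figures are not reproduced here, the sketch below assumes a simple graph with two causes and one effect that fires when both causes hold; enumerating the cause combinations yields the decision table rules, one test case per rule:

from itertools import product

# Assumed cause-effect relationship: effect E1 fires when C1 AND C2 hold.
def effect(c1, c2):
    return c1 and c2

# Decision table: one column (rule) per combination of causes -> 4 test cases.
for rule, (c1, c2) in enumerate(product([True, False], repeat=2), start=1):
    print(f"Rule {rule}: C1={c1}, C2={c2} -> E1={effect(c1, c2)}")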
5. Requirement based testing – It includes validating the requirements given in the SRS of the
software system.

6. Compatibility testing – The test case results depend not only on the product but also on the
infrastructure delivering the functionality. When the infrastructure parameters are changed, the
software is still expected to work properly. Some parameters that generally affect the compatibility
of software are:
1. Processor (e.g., Pentium 3, Pentium 4) and number of processors.
2. Architecture and characteristics of the machine (32-bit or 64-bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc.).

What is Functional Testing? Types & Examples

What is Functional Testing?
• Functional Testing is a type of software testing that validates the software system against the
functional requirements/specifications.
• The purpose of functional tests is to test each function of the software application by providing
appropriate input and verifying the output against the functional requirements.
• Functional testing mainly involves black box testing, and it is not concerned with the source
code of the application.
• This testing checks the User Interface, APIs, database, security, client/server communication and
other functionality of the Application Under Test.
• The testing can be done either manually or using automation.

What do you test in Functional Testing?
The prime objective of functional testing is checking the functionalities of the software system. It
mainly concentrates on:

• Mainline functions: Testing the main functions of an application.
• Basic usability: Basic usability testing of the system; it checks whether a user
can freely navigate through the screens without any difficulty.
• Accessibility: Checks the accessibility of the system for the user.
• Error conditions: Usage of testing techniques to check for error conditions; it checks
whether suitable error messages are displayed.
How to do Functional Testing: Following is a step-by-step process:

1. Understand the functional requirements.
2. Identify test input or test data based on the requirements.
3. Compute the expected outcomes for the selected test input values.
4. Execute the test cases.
5. Compare the actual and the computed expected results.
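These steps map directly onto an automated check. A minimal sketch in which apply_discount and its requirement (10% off a price of 100 yields 90) are hypothetical:

import math

def apply_discount(price, percent):
    # Hypothetical function under test.
    return price * (1 - percent / 100)

# Steps 2-3: pick test input from the requirement, compute the expected outcome.
test_input = (100, 10)   # price, discount percent
expected = 90.0

# Steps 4-5: execute the test case and compare actual vs. expected results.
actual = apply_discount(*test_input)
assert math.isclose(actual, expected), f"expected {expected}, got {actual}"
print("functional test passed")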

Functional Vs Non-Functional Testing:

Functional Testing | Non-Functional Testing
Functional testing is performed using the functional specification provided by the client and verifies the system against the functional requirements. | Non-functional testing checks the performance, reliability, scalability and other non-functional aspects of the software system.
Functional testing is executed first. | Non-functional testing should be performed after functional testing.
Manual testing or automation tools can be used for functional testing. | Using tools will be effective for this testing.
Business requirements are the inputs to functional testing. | Performance parameters like speed and scalability are the inputs to non-functional testing.
Functional testing describes what the product does. | Non-functional testing describes how well the product works.
Easy to do manual testing. | Tough to do manual testing.
Examples: Unit Testing, Smoke Testing, Sanity Testing, Integration Testing, White box testing, Black box testing, User Acceptance testing, Regression Testing. | Examples: Performance Testing, Load Testing, Volume Testing, Stress Testing, Security Testing, Installation Testing, Penetration Testing, Compatibility Testing, Migration Testing.

Functional Testing Tools:

• Selenium – Popular open-source functional testing tool.
• QTP – Very user-friendly functional test tool by HP.
• JUnit – Used mainly for Java applications; it can be used in unit and system testing.
• soapUI – An open-source functional testing tool, mainly used for web service
testing. It supports multiple protocols such as HTTP, SOAP, and JDBC.
• Watir – A functional testing tool for web applications. It supports tests executed in
the web browser and uses the Ruby scripting language.

What is Non Functional Testing? Types with Example

What is Non-Functional Testing?
• Non-Functional Testing is defined as a type of software testing that checks the non-functional
aspects (performance, usability, reliability, etc.) of a software application.
• It is designed to test the readiness of a system according to non-functional parameters that are
never addressed by functional testing.
• An excellent example of a non-functional test is to check how many people can
simultaneously log into a software system (see the sketch after this list).
• Non-functional testing is equally as important as functional testing and affects client satisfaction.
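A hedged sketch of that simultaneous-login example using threads (login is a hypothetical stand-in; a real test would send requests to the actual system):

from concurrent.futures import ThreadPoolExecutor

def login(user_id):
    # Hypothetical stand-in for a real login request to the system under test.
    return f"user-{user_id}: ok"

# Fire 100 logins concurrently and count how many complete successfully.
with ThreadPoolExecutor(max_workers=100) as pool:
    results = list(pool.map(login, range(100)))
print(f"{len(results)} concurrent logins completed")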

Objectives of Non-functional testing

• Non-functional testing should increase the usability, efficiency, maintainability, and
portability of the product.
• It helps to reduce the production risk and cost associated with the non-functional aspects of
the product.
• Optimize the way the product is installed, set up, executed, managed and monitored.
• Collect and produce measurements and metrics for internal research and development.
• Improve and enhance knowledge of the product's behavior and the technologies in use.

Characteristics of Non-functional testing

• Non-functional testing should be measurable, so there is no place for subjective
characterization like good, better, best, etc.
• Exact numbers are unlikely to be known at the start of the requirements process.
• It is important to prioritize the requirements.
• Ensure that quality attributes are identified correctly.

Non-functional testing Parameters

1) Security:
The parameter defines how a system is safeguarded against deliberate and sudden attacks from
internal and external sources. This is tested via Security Testing.

2) Reliability:
The extent to which any software system continuously performs the specified functions without
failure. This is tested by Reliability Testing

3) Survivability:
The parameter checks that the software system continues to function and recovers itself in case of
system failure. This is checked by Recovery Testing
4) Availability:
The parameter determines the degree to which a user can depend on the system during its operation.
This is checked by Stability Testing.

5) Usability:
The ease with which the user can learn, operate, prepare inputs and outputs through interaction
with a system. This is checked by Usability Testing

6) Scalability:
The term refers to the degree to which a software application can expand its processing capacity
to meet an increase in demand. This is tested by Scalability Testing.

7) Interoperability:
This non-functional parameter checks whether a software system interfaces properly with other
software systems. This is checked by Interoperability Testing.

8) Efficiency:
The extent to which a software system can handle the required capacity, quantity and response time.

9) Flexibility:
The term refers to the ease with which the application can work in different hardware and software
configurations, for example, minimum RAM and CPU requirements.

10) Portability:
The ease with which the software can be transferred from its current hardware or software
environment to another.

11) Reusability:
It refers to a portion of the software system that can be converted for use in another application.
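As a hedged illustration of how such a parameter is made measurable, a minimal response-time check (the operation and the 0.5-second budget are assumptions of the sketch):

import time

def operation():
    # Hypothetical operation whose response time is being measured.
    return sum(range(1_000_000))

start = time.perf_counter()
operation()
elapsed = time.perf_counter() - start

BUDGET_SECONDS = 0.5  # assumed non-functional requirement for the sketch
assert elapsed <= BUDGET_SECONDS, f"too slow: {elapsed:.3f}s"
print(f"response time {elapsed:.3f}s is within budget")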

Concepts of inspection:
How does Software Inspection improve Software Quality?
• The term software inspection was developed at IBM in the early 1970s, when it was noticed
that testing alone was not sufficient to attain high-quality software for large applications.
• Inspection is used to find the defects in the code and remove them efficiently.
• This prevents defects early and enhances the quality of testing in removing defects.
• The software inspection method achieved the highest level of efficiency in removing defects
and improving software quality.
There are some factors that generate high-quality software:
• Quality design inspections and code inspections: This factor refers to formal
oversight that follows protocols such as training of participants and material distributed
in advance for inspection. Both moderators and recorders are present to analyze defect statistics.
• Quality assurance: This factor refers to an active software quality assurance
group, which joins the software development groups to support them in the
development of high-quality software.
• Formal testing: The test process is run under defined conditions:
• A test plan is created for the application.
• Specifications are complete enough that test cases can be made without
significant gaps.
• Library control tools are used.
• Test coverage analysis tools are used.

Software Inspection Process :


• The inspection process was developed in the mid-1970s and was later extended and revised.
• The process must have an entry criterion that determines whether the inspection process is
ready to begin; this prevents incomplete products from entering the inspection process.
• Entry criteria can include checklist items such as "the document has been spell-checked".
There are several stages in the software inspection process, such as:
• Planning: The moderator plans the inspection.
• Overview meeting: The author describes the background of the work product.
• Preparation: The inspectors examine the work product to identify
possible defects.
• Inspection meeting: The reader reads through the work product part by part during this
meeting, and the inspectors point out the faults in each part.
• Rework: After the inspection meeting, the author changes the work product according
to the plans made during the meeting.
• Follow-up: The changes made by the author are checked to make sure that everything
is correct.

Advantages of Software Inspection:

• Helps in the early removal of major defects.
• Enables a numeric quality assessment of any technical document.
• Helps in process improvement.
• Helps in on-the-job staff training.
• Helps in gradual productivity improvement.

Disadvantages of Software Inspection:
• It is a time-consuming process.
• Software inspection requires discipline.

What is an Inspection?

• Inspection is the most formal form of review, a strategy adopted during the static testing phase.

Characteristics of Inspection:

• Inspection is usually led by a trained moderator, who is not the author. The moderator's role is
to conduct a peer examination of a document.
• Inspection is the most formal review type and is driven by checklists and rules.
• This review process makes use of entry and exit criteria.
• It is essential to have pre-meeting preparation.
• An inspection report is prepared and shared with the author for appropriate actions.
• Post inspection, a formal follow-up process is used to ensure timely and prompt
corrective action.
• The aim of inspection is NOT only to identify defects but also to bring about process
improvement.

*********************THE END**********************
