CSBS-SE Question & Answers - 1,2,3,4,5
UNIT- I
PART - A (2 Marks)
1. Define software engineering
2. What is meant by small programming
3. What is meant by large programming
4. Write any two characteristics of software as a product
5. What do you mean by software quality
6. What are the major differences between system engineering and software engineering
7. List the roles of software engineer
8. List the roles of software developer
UNIT II
PART-A (2 Marks)
1. List the phases in SDLC
2. What do you mean by feasibility study
3. List the techniques for estimation of schedule and effort
4. List the software cost estimation models
5. What do you mean by software engineering economics
6. List the techniques of software project control and reporting
7. How can you measure the software size
8. Define a risk
9. What is configuration management
UNIT-III:
PART- A (2 Marks)
1. List the internal and external quality attributes of software
2. List the Principles to achieve software quality
3. What are the software quality models
4. What is CMMI
5. Define software reliability
6. List the reliability models
UNIT – IV
PART-A (2 Marks)
1. Define a Problem Space
2. How does an IT company work
3. How does IT support a business
4. What do you mean by Knowledge Driven Development
5. List the usage of domain knowledge framework in Insurance
6. What is SRS
7. List the requirement elicitation techniques
8. List the techniques for requirement modelling
9. Define a decision table
10. Define an event table
11. Define state transition table
12. Define UML
13. Define software metrics
UNIT – V
PART- A (2 Marks)
1. Define software testing
2. What are the objectives of testing
3. Define White Box Testing
4. What are the two levels of testing
5. What are the various testing activities
6. Write short note on black box testing
7. What is equivalence partitioning
8. What is Regression Testing
9. What is boundary value analysis
10. What is cyclomatic complexity
11. How to compute the cyclomatic complexity
12. Distinguish between verification and validation
13. Distinguish between alpha and beta testing
14. State the objectives and guidelines for debugging
15. What do you mean by test case management
Software Products are nothing but software systems delivered to the customer with the
documentation that describes how to install and use the system. In certain cases, software
products may be part of system products where hardware, as well as software, is delivered to a
customer. Software products are produced with the help of the software process. The software
process is a way in which we produce software.
Types of software products:
Software products fall into two broad categories:
1. Generic products:
Generic products are stand-alone systems that are developed by a production unit
and sold on the open market to any customer who is able to buy them.
2. Customized Products:
Customized products are the systems that are commissioned by a particular
customer. Some contractor develops the software for that customer.
Essential characteristics of Well-Engineered Software Product:
A well-engineered software product should possess the following essential characteristics:
Efficiency:
The software should not make wasteful use of system resources such as memory
and processor cycles.
Maintainability:
It should be possible to evolve the software to meet the changing requirements of
customers.
Dependability:
It is the ability of the software not to cause any physical or economic damage in the
event of system failure. It includes a range of characteristics such as
reliability, security, and safety.
In time:
Software should be developed well in time.
Within Budget:
The software development costs should not overrun and it should be within the
budgetary limit.
Functionality:
The software system should exhibit the proper functionality, i.e. it should perform
all the functions it is supposed to perform.
Adaptability:
The software system should have the ability to adapt, to a reasonable extent, to the
changing requirements.
Software quality is defined as a field of study and practice that describes the desirable
attributes of software products. There are two main approaches to software quality:
defect management and quality attributes.
Software Quality Attributes are features that facilitate the measurement of the
performance of a software product by software testing professionals, and include
attributes such as availability, interoperability, correctness, reliability, learnability,
robustness, maintainability, readability, extensibility, and testability.
What are the major differences between System Engineering and Software Engineering
System Engineer:
A System Engineer is a person who deals with the overall management of engineering
projects during their life cycle (focusing more on physical aspects).
They follow an interdisciplinary approach governing the total technical and managerial
effort required to transform requirements into solutions.
They are generally concerned with all aspects of computer-based system development,
including hardware, software, and process engineering.
Systems Engineering Methods :
Stakeholder Analysis
Interface Specification
Design Tradeoffs
Configuration Management
Systematic Verification and Validation
Requirements Engineering
Software Engineer:
A Software Engineer is a person who deals with designing and developing good-quality
software applications/software products.
They follow a systematic and disciplined approach for software design, development,
deployment, and maintenance of software applications.
They are generally concerned with all aspects of software development, infrastructure,
control, applications and databases in the system.
Software Engineering Methods :
Process Modeling
Incremental Verification and Validation
Process Improvement
Model-Driven Development
Agile Methods
Continuous Integration
In general, system engineers are concerned with all aspects of computer-based system
development, including hardware, software, and process engineering, whereas software
engineers are concerned with all aspects of software development, infrastructure, control,
applications, and databases in the system.
One thing software engineering can learn from system engineering is the consideration of
trade-offs and the use of framework methods; one thing system engineering can learn from
software engineering is a disciplined approach to cost estimation.
However, these two disciplines are interconnected; there are no hard and fast rules for
these titles in the IT industry, and the two disciplines cooperate closely with each other.
List the roles of Software Engineer
See the above answer
List the roles of Software Developer
Talking through requirements with clients.
Testing software and fixing problems.
Maintaining systems once they're up and running.
Taking part in technical design.
Integrating software components.
Producing efficient code.
Writing program code for reference and reporting.
(Long Answers)
Programming in the small vs. programming in the large:
In software engineering, programming in the large and programming in the small refer to
two different aspects of writing software, namely,
Designing a larger system as a composition of smaller parts, and Creating those smaller
parts by writing lines of code in a programming language, respectively.
The terms were coined by Frank DeRemer and Hans Kron in their 1975 paper
"Programming-in-the-large versus programming-in-the-small", in which they argue that
the two are essentially different activities, and that typical programming languages, and the
practice of structured programming, provide good support for the latter, but not for the
former.
Fred Brooks identifies that the way an individual program is created is different from how
a programming systems product is created.
The former likely does one relatively simple task well. It is probably coded by a single
engineer, is complete in itself, and is ready to run on the system on which it was developed.
The programming activity was probably fairly short-lived as simple tasks are quick and
easy to complete. This is the endeavor that DeRemer and Kron describe as programming
in the small.
The project is likely to be split up into several or hundreds of separate modules which
individually are of a similar complexity to the individual programs described above.
However, each module will define an interface to its surrounding modules.
Brooks describes how programming systems projects are typically run as formal projects
that follow industry best practices and will comprise testing, documentation and ongoing
maintenance activities as well as activities to ensure that the product is generalized to work
in different scenarios including on systems other than the development systems on which
it was created.
A product vision explains the higher purpose of why that product exists in the first place.
It sets the direction for where the product is going and what it will deliver in the future.
Having a leadership team aligned across an organization articulating the purpose, value,
and rationale for a software project goes a long way towards getting stakeholders and end-
users pulling the proverbial rope in the same direction.
You need to ensure that you define the key stakeholders within your business that will be
involved in the delivery of the solution.
One would imagine that every software development project needs a detailed, step-by-step
plan.
The kind of plan that would outline every requirement, detail every risk and mitigation
steps, document the key people involved and so on.
The old proverb of “You don’t know what you don’t know” is key here. No matter how
much time is spent developing a detailed plan, specification and wireframes or prototypes
there’s no way for a team to know what they don’t know.
A much better approach is to define a product vision and then define a broad-strokes plan
that allows a project to begin to inch forward.
You spend time detailing a small but useful subset of features of the overall project, get it
built, review it, discuss it, compare against your original product vision/plan and rinse and
repeat. This is an agile approach to software development.
The ability to adapt and incorporate changes to your business as you move forward can
mean the difference between success and failure, and moreover creates the best possible
product.
Meetings should take place regularly to discuss the status and re-prioritize. For these
meetings to be meaningful, the software partner must be transparent about the budget/time
consumed and progress made towards the completion of the project.
This flexible approach keeps everybody actively focused on the delivery of the product
vision to the exclusion of all waste.
It also allows both parties to mould the scope of the project in response to new information.
The finished solution is a better product that meets your product vision. Given the wealth
of data provided to the customer about the progress made versus budget spent, the customer
always makes scope changes armed with the information they require.
We respect that it’s often the case that a business will want a cost and timescale for the
building of a new software solution.
However, one should be very mindful of the change that will likely come about in your
requirements as you progress, and build in mechanisms to allow changes to happen.
This is true partnership working, where information is shared and the product quality and
scope carefully sculpted in collaboration.
In many ways, one of the hardest parts about creating software is not the actual building of
the software, but instead the communication of the requirements in the first place.
Imagine you met somebody who had never seen a car before, and the only mechanism you
could use to describe what a car looks like and how it works is the written word.
It would be a pretty grueling experience, to say the least. Just attempting to describe what
the average car looks like could be a dozen pages of text. How many wheels does it have,
and where do the wheels go? Hold on, what exactly is a wheel? And so on…
Now imagine instead if you were simply able to draw them a car. Wouldn’t that make the
communication of what the car looks like so much easier?
Now take this analogy a step further; imagine you could create a small plastic model of a
car with some basic operating parts like turning wheels, opening doors, and so on.
This is what we would call a prototype, and a prototype typically conveys more
information than a wireframe due to its interactive nature.
There’s a reason the saying “a picture is worth a thousand words” is often quoted. In
software, a wireframe or prototype of a proposed software solution is an essential step for
communicating the desired outcome.
Getting a wireframe or prototype in place helps everybody on the team to get on the same
page about what success looks like much quicker.
You’ll have multiple stakeholders within your company to involve, and the sooner they all
agree on the desired outcome, the sooner your project can begin.
The problem here is not a lack of good intention. The problem is specifically with the
opening statement “I know all of my requirements”.
You probably don’t, and the approach to your commercials and project management is now
set up such that when you realize this is the case it’s too late.
The contract between you and the software developer attempts by its nature to limit,
perhaps even prevent, change.
Alternatively, agile project management builds the need to change requirements as you
progress your project into the approach explicitly.
Agile recognizes that changes in requirements are not software project failures; instead,
they’re opportunities to realign the project with the overall product vision as more
information about the requirements is unearthed.
Another tenet of agile is that planning, design, and documentation beyond the minimum
necessary to begin work are treated as waste.
It specifically focuses on delivering working features, which is where the value for the
customer comes to life.
Lots have been written on the benefits of agile so rather than labor all of them, we’ll break
down the highlights.
Bugs that make it out into the real world are costly
Various studies show that it is over five times more expensive to fix bugs or issues that
make it out “into the wild” than it is to find and fix those bugs during the testing phase.
Thorough software testing is the primary way to avoid bugs leaking out into the wild, and
for this sixth and final strategy, we consider two unique yet related types of testing:
Testing performed by your software partner, which can include one or more of the
following:
o Cross-browser testing – does the solution work on multiple web browsers
o Functional testing – does the software do what it’s supposed to do
o Automated testing – a routine which can perform a certain set of steps to check the
outcome is as expected
o Unit testing – lines of code which automatically check that other lines of code produce
the correct output given certain inputs (a short sketch follows this list)
Testing performed by you, the customer, which is commonly referred to as User
Acceptance Testing (UAT).
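To make the idea of unit testing concrete, here is a minimal sketch using Python's built-in
unittest module. The function apply_discount and its figures are hypothetical illustrations,
not taken from the original material.

import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # A known input should produce the expected output.
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percent_is_rejected(self):
        # Invalid input should be reported as an error.
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()

Running such tests automatically after every change is what allows defects to be caught
during development instead of "in the wild".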
Importance of software quality and timely availability:
(Illustrate the importance of software quality)
The quality of a software product is defined in terms of its fitness of purpose. That is, a quality
product does precisely what the users want it to do. For software products, the fitness of
use is generally explained in terms of satisfaction of the requirements laid down in the SRS
document. Although "fitness of purpose" is a satisfactory interpretation of quality for many
devices such as a car, a table fan, or a grinding machine, for software products "fitness
of purpose" is not a wholly satisfactory definition of quality.
Example: Consider a functionally correct software product. That is, it performs all tasks as
specified in the SRS document. But it has an almost unusable user interface. Even though it may
be functionally right, we cannot consider it to be a quality product.
The modern view of quality associates several quality attributes with a software product,
such as the following:
Portability: A software product is said to be portable if it can easily be made to work in various
operating system environments, on multiple machines, with other software products, etc.
Usability: A software product has better usability if various categories of users can easily invoke
the functions of the product.
Reusability: A software product has excellent reusability if different modules of the product can
quickly be reused to develop new products.
Maintainability: A software product is maintainable if bugs can be easily corrected as and when
they show up, new tasks can be easily added to the product, and the functionalities of the product
can be easily modified, etc.
Quality system activities encompass the following:
Auditing of projects
Review of the quality system
Development of standards, methods, and guidelines, etc.
Production of documents for the top management summarizing the effectiveness of the
quality system in the organization.
Engineering approach to software development:
(Explain in detail about Engineering approach to software development)
1. One of the basic software engineering principles is better requirements analysis, which gives
a clear vision of the project. Ultimately, a good understanding of the user requirements
delivers a software product that meets those requirements and provides value to its users.
2. All designs and implementations should be kept as simple as possible, i.e. the KISS
(Keep It Simple, Stupid) principle should be followed. Simple code makes debugging
and further maintenance easier.
6. Think then Act is an essential principle of software engineering: before starting to develop
functionality, first think about the application architecture, as good planning of the flow of
project development produces better results.
7. Sometimes developers add up all functionalities together but later find no use for them.
Following the Never Add Extra principle is therefore important, as it means implementing
only what is actually needed now and implementing later what is required, which saves
effort and time.
8. When other developers work on another's code, they should not be surprised and should
not waste their time understanding the code. So providing good documentation at the required
steps is a good way of developing software projects.
10. Developers should develop the project in such a way that it satisfies the principle of
Generality: it should not be limited or restricted to some cases/functions; rather, it
should be free from unnatural restrictions and should be able to provide the service that
customers actually need, or general needs, in an extensive manner.
11. The principle of Consistency is important in coding style and in designing the GUI (Graphical
User Interface), as a consistent coding style makes code easier to read, and consistency in the
GUI makes it easier for users to learn the interface and use the software.
12. Never waste time reimplementing anything that already exists; take the help of
open source and adapt it in your own way as per the requirement.
14. To exist in the current technology market trend, using modern programming practices is
important to meet users' requirements in the latest and most advanced way.
15. Scalability in software engineering should be maintained so that the software application
can grow and manage increased demand.
Communicating your design is critical for architecture reviews with client, as well as
to ensure it is implemented correctly by your developers.
You must communicate your architectural design to all the stakeholders including the
development team, system administrators and operators, business owners, and other
interested parties.
Periodic review of the architecture
It is important to revisit and review the architecture at major project milestones, as the
development progresses.
This will help identify and fix architectural problems at the earlier stage and will help
prevent cost and time over-runs.
Documentation
Detailed recording of the design generated during the software architecting process is
important.
Documentation helps communicate the entire system to the developers and the client.
It also helps in future maintenance and enhancements.
This is treated as the final technical specification for the project and all developers are
expected to code with strict adherence to the architecture, so that conceptual system
integrity is preserved.
Step 4: Build the System
After the architecture of the software system has been built and documented, a team of
programmers/developers sit down to build the system.
This stage is also known as the development stage and takes the longest time.
The developers have at their disposal a well-documented design specification along
with instructions on processes, standards, and tools to be used.
Now, they have to convert the design prepared by the architect, into a working system
that takes care of all the requirements addressed in the design document.
The next stage, testing, also commences as the developers go along building the system.
Development activities entail –
Building all the system elements and components. You must establish a standardized
coding style and naming convention for development. Check to see if the client has
established coding style and naming standards. If not, you should establish common
standards. This provides a consistent model that makes it easier for team members to
review code they did not write, leading to better maintainability.
Integrating the elements into larger components.
Preparing the technical environment/ infrastructure for the system.
Building a Proof of Concept. This is a skeleton system that tests key elements of the
solution on a non-production simulation of the proposed operational environment. The
team walks users through the solution to get their feedback and re-confirm their
requirements before full-fledged coding begins.
Developing testing tools and pre-populated test data. Unit test cases are created and
automated scripts are written to test each of the software elements individually as well as
test their inter-operability with other elements.
Preparing code documents. As individual software elements and their integration
components are built, it is the developer’s responsibility to document the internal design
of the software fully. The documentation must cover the provisions, specific
coding instructions, and procedures for issue tracking. Everything that helps explain the
functionality of the software must be documented so that the code can in future be
understood by other programmers when they need to do maintenance, fine tweaking, or
enhancements.
The Integration documents will describe the assembly and interaction of the software
elements with each other and also the interaction of the software elements with the
hardware.
Preparing Implementation plan. This document will describe how the software system
should be deployed in the production environment. Here you define all planned activities
to ensure successful implementation of the entire software system.
Preparing Operation & Maintenance manual. This document will detail out necessary
procedures and instructions that will be required by system administrators and the
maintenance team to ensure smooth system operation. Various operational and
maintenance procedures must be described, and standards used must be specified.
Preparing help documents and training manuals. This document will outline technical and
user training needs. It must contain all necessary help instructions and explanations of
implemented business rules and processes so that end users fully understand the system
capabilities, configurability, and adaptability to change. Each user interface must be
accompanied by a context-based help page which explains the fields and data validation
requirements. Essentially, anything that helps users use the system to its full capability
should be described in this documentation.
Step 5: Test the System
As we have mentioned earlier, development and testing go hand-in hand. The testing
process is designed to identify and address potential issues prior to deployment.
During the development stage, each software element is tested independently. Then,
when the software elements are integrated together, more testing is done to test the
integration fully on various scenarios. Automated scripts are written to perform testing.
After the entire system is ready, further testing is done by users, system administrators and
maintenance team to evaluate the system and identify any remaining issues they need to
address before releasing for real life use.
Test Analysis Reports are prepared that present a description of the unit tests and the
results mapped to the system requirements.
Testing helps identify system capabilities and deficiencies and hence proper selection of
test data and real-life use cases must be done.
The entire range of testing covers -
Code component testing
Database testing
Infrastructure testing
Security testing
Integration testing
User acceptance and usability testing
Stress, capacity, and performance testing. This will identify any issues with the system’s
architecture and design itself.
Step 6: Deploy for Production
After the software system has been tested thoroughly and in its entirety, it is considered
ready to be launched and is approved for release by a quality assurance team.
Deployment is the final stage of releasing an application for real life use. If you reach
this far in your project, you have succeeded.
However, there are still things that can go wrong. You need to plan for deployment and
you can use the deployment checklist that would have been prepared by the
development team while documenting the Implementation plan.
Step 7: Maintain the System
It is normal practice that clients tend to pass on this activity to the same company that
did the development work. This is a continuous process and entails responding to user
problems and resolving them quickly. You would be required to have a small dedicated
or semi-dedicated team (depending upon the size of the software system), who would
engage in the activity of tweaking and fine tuning the codes to accommodate day-to-
day arising needs. Normally, a well-designed software system should require minimal
tweaking. Yet, in reality, some minor tweaking may be required.
Maintaining and enhancing software to cope with newly discovered faults or
requirements can take substantial time and effort, as missed requirements may force
redesign of some modules of the software. This can be kept to the minimum by proper
execution of the first 6 stages of the development process.
If you are also entrusted with the task of maintaining the server(s) and hardware
infrastructure, server & network management would become an important activity.
You will have to ensure that the server infrastructure on which your software system is
running is up and running all the time. You may have signed an SLA (service level
agreement) with the client to ensure a certain uptime say, 99.9% uptime. Ensuring this
may require a small team to be constantly monitoring the server and other associated
hardware.
Emergence of software engineering as a discipline:
(Demonstrate the emergence of software engineering as a discipline)
Software engineering discipline is the result of advancement in the field of technology. Various
innovations and technologies that led to the emergence of software engineering discipline are:
Early Computer Programming
As we know that in the early 1950s, computers were slow and expensive. Though the
programs at that time were very small in size, these computers took considerable time to
process them.
They relied on assembly language, which was specific to the computer architecture. Thus,
developing a program required a lot of effort.
Every programmer used his own style to develop the programs.
High Level Language Programming
With the advent of powerful machines and high-level languages, the usage of computers
grew rapidly. In addition, the nature of programs also changed from simple to complex.
The increased size and the complexity could not be managed by individual style.
It was analyzed that clarity of control flow (the sequence in which the program’s
instructions are executed) is of great importance.
To help the programmer to design programs having good control flow
structure, flowcharting technique was developed.
In flowcharting technique, the algorithm is represented using flowcharts. A flowchart is
a graphical representation that depicts the sequence of operations to be carried out to solve
a given problem.
Note that having more GOTO constructs in the flowchart makes the control flow messy,
which makes it difficult to understand and debug.
In order to provide clarity of control flow, the use of GOTO constructs in flowcharts
should be avoided and structured constructs-decision, sequence, and loop-should be
used to develop structured flowcharts.
The decision structures are used for conditional execution of statements (for example, if
statement). The sequence structures are used for the sequentially executed statements.
The loop structures are used for performing some repetitive tasks in the program. The use
of structured constructs formed the basis of the structured programming methodology.
Structured programming became a powerful tool that allowed programmers to write
moderately complex programs easily.
It forces a logical structure in the program to be written in an efficient and understandable
manner.
The purpose of structured programming is to make the software code easy to modify when
required.
Some languages such as Ada, Pascal, and dBase are designed with features that implement
the logical program structure in the software code.
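As a brief illustration of the structured constructs described above, the following sketch
(a hypothetical example, written here in Python rather than the languages named in the text)
uses only sequence, decision, and loop structures instead of GOTO-style jumps:

def classify_marks(marks):
    # Sequence: statements executed one after another.
    total = sum(marks)
    average = total / len(marks)

    # Decision: conditional execution of statements.
    if average >= 50:
        result = "pass"
    else:
        result = "fail"

    # Loop: a repetitive task, counting marks above the average.
    above_average = 0
    for mark in marks:
        if mark > average:
            above_average += 1

    return result, above_average

print(classify_marks([40, 65, 70, 55]))  # prints ('pass', 2) for this sample data

Because the control flow follows these three structures only, the program can be read from
top to bottom, which is exactly the clarity that structured programming aims for.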
Data-Flow Oriented Design
With the introduction of very Large-Scale Integrated circuits (VLSI), the computers
became more powerful and faster.
As a result, various significant developments like networking and GUIs came into being.
Clearly, the complexity of software could not be dealt using control flow-based design.
Thus, a new technique, namely, data-flow-oriented technique came into existence.
In this technique, the flow of data through business functions or processes is represented
using Data-flow Diagram (DFD).
IEEE defines a data-flow diagram (also known as bubble chart and work-flow
diagram) as ‘a diagram that depicts data sources, data sinks, data storage, and processes
performed on data as nodes, and logical flow of data as links between the nodes.’
Object Oriented Design
Types of Feasibility
Various types of feasibility that are commonly considered include technical feasibility,
operational feasibility, and economic feasibility.
Technical feasibility assesses the current resources (such as hardware and software) and
technology, which are required to accomplish user requirements in the software within the
allocated time and budget. For this, the software development team ascertains whether the current
resources and technology can be upgraded or added in the software to accomplish specified user
requirements. Technical feasibility also performs the following tasks.
• Analyzes the technical skills and capabilities of the software development team members.
• Determines whether the relevant technology is stable and established.
• Ascertains that the technology chosen for software development has a large number of users so
that they can be consulted when problems arise or improvements are required.
Operational feasibility assesses the extent to which the required software performs a series of
steps to solve business problems and user requirements. This feasibility is dependent on human
resources (software development team) and involves visualizing whether the software will
operate after it is developed and be operative once it is installed. Operational feasibility also
performs the following tasks.
• Determines whether the problems anticipated in user requirements are of high priority.
• Determines whether the solution suggested by the software development team is acceptable.
• Analyzes whether users will adapt to a new software.
• Determines whether the organization is satisfied by the alternative solutions proposed by the
software development team.
Economic feasibility determines whether the required software is capable of generating financial
gains for an organization. It involves the cost incurred on the software development team,
estimated cost of hardware and software, cost of performing feasibility study, and so on. For this,
it is essential to consider expenses made on purchases (such as hardware purchase) and activities
required to carry out software development. In addition, it is necessary to consider the benefits
that can be achieved by developing the software. Software is said to be economically feasible if
it focuses on the issues listed below.
• Cost incurred on software development to produce long-term gains for an organization.
• Cost required to conduct full software investigation (such as requirements elicitation and
requirements analysis).
• Cost of hardware, software, development team, and training.
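As a rough, purely illustrative sketch of how the costs and benefits listed above might be
compared (all figures are hypothetical and not taken from the text), a simple payback-period
calculation could look like this:

# Hypothetical figures for illustration only.
development_cost = 500_000      # one-time cost: team, hardware, software, training
annual_operating_cost = 50_000  # recurring cost per year
annual_benefit = 220_000        # estimated yearly gain from using the software

net_annual_benefit = annual_benefit - annual_operating_cost
payback_period_years = development_cost / net_annual_benefit

print("Net annual benefit:", net_annual_benefit)
print("Payback period (years):", round(payback_period_years, 1))
# If the payback period falls within the software's expected operational life,
# the project can be considered economically feasible.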
Feasibility Study Process
Feasibility study comprises the following steps.
• Information assessment: Identifies information about whether the system helps in achieving
the objectives of the organization. It also verifies that the system can be implemented using new
technology and within the budget and whether the system can be integrated with the existing
system.
• Information collection: Specifies the sources from where information about software can be
obtained. Generally, these sources include users (who will operate the software), organization
(where the software will be used), and the software development team (which understands user
requirements and knows how to fulfill them in software).
• Report writing: Uses a feasibility report, which is the conclusion of the feasibility study by the
software development team. It includes the recommendations whether the software development
should continue. This report may also include information about changes in the software scope,
budget, and schedule and suggestions of any requirements in the system.
• General information: Describes the purpose and scope of feasibility study. It also describes
system overview, project references, acronyms and abbreviations, and points of contact to be
used. System overview provides description about the name of the organization responsible for
the software development, system name or title, system category, operational status, and so
on. Project references provide a list of the references used to prepare this document such as
documents relating to the project or previously developed documents that are related to the
project. Acronyms and abbreviations provide a list of the terms that are used in this document
along with their meanings. Points of contact provide a list of points of organizational contact
with users for information and coordination. For example, users require assistance to solve
problems (such as troubleshooting) and collect information such as contact number, e-mail
address, and so on.
Management summary: Provides the following information.
• Environment: Identifies the individuals responsible for software development. It provides
information about input and output requirements, processing requirements of the software and
the interaction of the software with other software. It also identifies system security requirements
and the system’s processing requirements
• Current functional procedures: Describes the current functional procedures of the existing
system, whether automated or manual. It also includes the data-flow of the current system and
the number of team members required to operate and maintain the software.
• Functional objective: Provides information about functions of the system such as new services,
increased capacity, and so on.
• Performance objective: Provides information about performance objectives such as reduced
staff and equipment costs, increased processing speeds of software, and improved controls.
• Assumptions and constraints: Provides information about assumptions and constraints such
as operational life of the proposed software, financial constraints, changing hardware, software
and operating environment, and availability of information and sources.
• Methodology: Describes the methods that are applied to evaluate the proposed software in order
to reach a feasible alternative. These methods include survey, modeling, benchmarking, etc.
• Evaluation criteria: Identifies criteria such as cost, priority, development time, and ease of
system use, which are applicable for the development process to determine the most suitable
system option.
• Recommendation: Describes a recommendation for the proposed system. This includes the
delays and acceptable risks.
• Proposed software: Describes the overall concept of the system as well as the procedure to be
used to meet user requirements. In addition, it provides information about improvements, time
and resource costs, and impacts. Improvements are performed to enhance the functionality and
performance of the existing software. Time and resource costs include the costs associated with
software development from its requirements to its maintenance and staff training. Impacts
describe the possibility of future happenings and include various types of impacts as listed below.
• Equipment impacts: Determine new equipment requirements and changes to be made in the
currently available equipment requirements.
• Software impacts: Specify any additions or modifications required in the existing software and
supporting software to adapt to the proposed software.
• Organizational impacts: Describe any changes in organization, staff and skills requirement.
• Operational impacts: Describe effects on operations such as user-operating procedures, data
processing, data entry procedures, and so on.
• Developmental impacts: Specify developmental impacts such as resources required to develop
databases, resources required to develop and test the software, and specific activities to be
performed by users during software development.
• Security impacts: Describe security factors that may influence the development, design, and
continued operation of the proposed software.
• Alternative systems: Provide description of alternative systems, which are considered in a
feasibility study. This also describes the reasons for choosing a particular alternative system to
develop the proposed software and the reason for rejecting alternative systems.
List the techniques for estimation of schedule and effort
Refer to the Long Answers section.
Define a risk
Refer to the Long Answers section.
(Long Answers)
A software life cycle model is a pictorial and diagrammatic representation of the software
life cycle.
A life cycle model represents all the methods required to make a software product transit
through its life cycle stages.
It also captures the structure in which these methods are to be undertaken.
Life cycle model maps the various activities performed on a software product from its
inception to retirement.
Different life cycle models may plan the necessary development activities to phases in
different ways.
Thus, no matter which life cycle model is followed, the essential activities are contained
in all life cycle models, though the activities may be carried out in different orders in
different life cycle models.
During any life cycle stage, more than one activity may also be carried out.
Need of SDLC
The development team must determine a suitable life cycle model for a particular plan and
then adhere to it.
Without using an exact life cycle model, the development of a software product would not
be carried out in a systematic and disciplined manner.
When a team is developing a software product, there must be a clear understanding among
the team representatives about when and what to do. Otherwise, it would lead to chaos and
project failure.
A software life cycle model describes entry and exit criteria for each phase.
A phase can begin only if its stage-entry criteria have been fulfilled.
Without a software life cycle model, the entry and exit criteria for a stage cannot be
recognized.
Without software life cycle models, it becomes tough for software project managers to
monitor the progress of the project.
SDLC Cycle
SDLC Cycle represents the process of developing software. SDLC framework includes the
following steps:
The stages of SDLC are as follows:
Stage 1: Planning and Requirement Analysis
The senior members of the team perform it with inputs from all the stakeholders and
domain experts in the industry.
Planning for the quality assurance requirements and identifications of the risks associated
with the projects is also done at this stage.
Business analyst and Project organizer set up a meeting with the client to gather all the data
like what the customer wants to build, who will be the end user, what is the objective of
the product. Before creating a product, a core understanding or knowledge of the product
is very necessary.
For Example, A client wants to have an application which concerns money transactions. In this
method, the requirement has to be precise like what kind of operations will be done, how it will be
done, in which currency it will be done, etc.
Once the requirement gathering is done, an analysis is carried out to check the feasibility of
developing the product.
In case of any ambiguity, a signal is set up for further discussion.
Once the requirement is understood, the SRS (Software Requirement Specification)
document is created.
The developers should thoroughly follow this document, and it should also be reviewed by
the customer for future reference.
Stage 2: Defining Requirements
Once the requirement analysis is done, the next stage is to clearly represent and document
the software requirements and get them approved by the project stakeholders.
This is accomplished through "SRS"- Software Requirement Specification document
which contains all the product requirements to be constructed and developed during the
project life cycle.
Stage 3: Designing the Software
The next phase brings together all the knowledge of requirements and analysis into the
design of the software project.
This phase is the product of the last two, taking as inputs the customer requirements and
the requirement gathering.
Stage 4: Developing the Project
In this phase of SDLC, the actual development begins, and the programming is built.
The implementation of the design begins in the form of writing code.
Developers have to follow the coding guidelines defined by their management, and
programming tools like compilers, interpreters, debuggers, etc. are used to develop and
implement the code.
Stage 5: Testing
After the code is generated, it is tested against the requirements to make sure that the
products are solving the needs addressed and gathered during the requirements stage.
During this stage, unit testing, integration testing, system testing, acceptance testing are
done.
Stage 6: Deployment
Once the software is certified and no bugs or errors are reported, then it is deployed.
Then, based on the assessment, the software may be released as it is or with suggested
enhancements in the targeted segment.
After the software is deployed, then its maintenance begins.
Stage 7: Maintenance
Once the client starts using the developed system, the real issues come up and the
requirements need to be solved from time to time.
This procedure, where care is taken of the developed product, is known as maintenance.
SDLC Models
Software Development Life Cycle (SDLC) is a conceptual model used in project management
that defines the stages included in an information system development project, from an
initial feasibility study to the maintenance of the completed application.
There are different software development life cycle models defined and designed, which are
followed during the software development process.
These models are also called "Software Development Process Models." Each process
model follows a series of phases unique to its type to ensure success in the steps of software
development.
Waterfall Model
The waterfall is a universally accepted SDLC model. In this method, the whole process of
software development is divided into various phases.
The waterfall model is a sequential software development model in which development is
seen as flowing steadily downwards (like a waterfall) through the steps of requirements
analysis, design, implementation, testing (validation), integration, and maintenance.
Linear ordering of activities has some significant consequences.
First, to identify the end of a phase and the beginning of the next, some certification
techniques have to be employed at the end of each step.
This is usually done by some verification and validation means that ensure that the output of
the stage is consistent with its input (which is the output of the previous step), and that the
output of the stage is consistent with the overall requirements of the system.
RAD Model
RAD or Rapid Application Development is an adaptation of the waterfall model; it
targets developing software in a short period.
The RAD model is based on the concept that a better system can be developed in lesser
time by using focus groups to gather system requirements.
o Business Modeling
o Data Modeling
o Process Modeling
o Application Generation
o Testing and Turnover
Spiral Model
V-Model
In this type of SDLC model, testing and development are planned in parallel.
So, there are verification phases on one side and validation phases on the other side.
The V-Model is joined by the coding phase.
Incremental Model
The incremental model is not a separate model. It is essentially a series of waterfall cycles.
The requirements are divided into groups at the start of the project.
For each group, the SDLC model is followed to develop software.
The SDLC process is repeated, with each release adding more functionality until all
requirements are met.
In this method, each cycle acts as the maintenance phase for the previous software release.
Modification to the incremental model allows development cycles to overlap.
That is, a subsequent cycle may begin before the previous cycle is complete.
Agile Model
1. It is difficult to think in advance which software requirements will persist and which will
change. It is equally difficult to predict how user priorities will change as the project
proceeds.
2. For many types of software, design and development are interleaved. That is, both activities
should be performed in tandem so that design models are proven as they are created. It is
difficult to think about how much design is necessary before construction is used to test
the configuration.
3. Analysis, design, development, and testing are not as predictable (from a planning point of
view) as we might like.
Big Bang Model
The Big Bang model focuses all types of resources on software development and coding,
with no or very little planning.
with no or very little planning.
The requirements are understood and implemented when they come.
This model works best for small projects with a small development team working
together.
It is also useful for academic software development projects.
It is an ideal model where requirements are either unknown or final release date is not
given.
Prototype Model
Waterfall Model
1. Requirements Analysis and Specification Phase:
The aim of this phase is to understand the exact requirements of the customer and to
document them properly.
Both the customer and the software developer work together to document all the
functions, performance, and interfacing requirements of the software.
It describes the "what" of the system to be produced and not the "how."
In this phase, a large document called the Software Requirement Specification (SRS)
document is created, which contains a detailed description of what the system will do in
the common language.
2. Design Phase:
This phase aims to transform the requirements gathered in the SRS into a suitable form
which permits further coding in a programming language.
It defines the overall software architecture together with high level and detailed design. All
this work is documented as a Software Design Document (SDD).
3. Implementation and Coding Phase:
During this phase, the design is implemented. If the SDD is complete, the implementation or
coding phase proceeds smoothly, because all the information needed by the software
developers is contained in the SDD.
4. Testing Phase:
During testing, the code is thoroughly examined and modified. Small modules are tested
in isolation initially.
After that these modules are tested by writing some overhead code to check the interaction
between these modules and the flow of intermediate output.
This phase is highly crucial as the quality of the end product is determined by the
effectiveness of the testing carried out.
The better output will lead to satisfied customers, lower maintenance costs, and accurate
results. Unit testing determines the efficiency of individual modules.
However, in this phase, the modules are tested for their interactions with each other and
with the system.
Advantages of the Waterfall Model:
o This model is simple to implement, and the number of resources required for it is
minimal.
o The requirements are simple and explicitly declared; they remain unchanged during the
entire project development.
o The start and end points for each phase are fixed, which makes it easy to track progress.
o The release date for the complete product, as well as its final cost, can be determined before
development.
o It gives easy control and clarity for the customer due to a strict reporting system.
Disadvantages of the Waterfall Model:
o In this model, the risk factor is higher, so this model is not suitable for large and
complex projects.
o This model cannot accept the changes in requirements during development.
o It becomes tough to go back to a previous phase. For example, if the application has now
moved to the coding phase and there is a change in requirements, it becomes tough to go
back and change it.
o Since testing is done at a later stage, it does not allow identifying the challenges and risks
in the earlier phases, so the risk reduction strategy is difficult to prepare.
RAD (Rapid Application Development) Model
RAD is a linear sequential software development process model that emphasizes a concise
development cycle using an element-based construction approach.
If the requirements are well understood and described, and the project scope is a constraint,
the RAD process enables a development team to create a fully functional system within a
concise time period.
RAD (Rapid Application Development) is a concept that products can be developed faster
and of higher quality through:
1. Business Modelling:
The information flow among business functions is defined by answering questions like
what data drives the business process, what data is generated, who generates it, where does
the information go, who processes it, and so on.
2. Data Modelling:
The data collected from business modeling is refined into a set of data objects (entities)
that are needed to support the business.
The attributes (character of each entity) are identified, and the relation between these data
objects (entities) is defined.
3. Process Modelling:
The information objects defined in the data modeling phase are transformed to achieve the
data flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a data
object.
4. Application Generation:
Automated tools are used to facilitate construction of the software; they even use
fourth-generation language (4GL) techniques.
5. Testing and Turnover:
Many of the programming components have already been tested, since RAD emphasizes
reuse.
This reduces the overall testing time. But the new parts must be tested, and all interfaces
must be fully exercised.
When to use the RAD Model:
o When the system needs to be created as a project that can be modularized in a short span
of time (2-3 months).
o When the requirements are well known.
o When the technical risk is limited.
o When there is a necessity to build a system that can be modularized within a 2-3 month
period.
o It should be used only if the budget allows the use of automatic code generating tools.
Spiral Model
The spiral model, initially proposed by Boehm, is an evolutionary software process model
that couples the iterative feature of prototyping with the controlled and systematic aspects
of the linear sequential model.
It implements the potential for rapid development of new versions of the software. Using
the spiral model, the software is developed in a series of incremental releases.
During the early iterations, the additional release may be a paper model or prototype.
During later iterations, more and more complete versions of the engineered system are
produced.
Objective setting:
Each cycle in the spiral starts with the identification of the purpose for that cycle, the various
alternatives that are possible for achieving the targets, and the constraints that exist.
Risk assessment and reduction:
The next phase in the cycle is to evaluate these various alternatives based on the goals and
constraints.
The focus of evaluation in this stage is placed on the risk perception for the project.
The next phase is to develop strategies that resolve uncertainties and risks.
This process may include activities such as benchmarking, simulation, and prototyping.
Planning:
Advantages
Disadvantages
V-Model
Validation:
1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the module
design phase. These UTPs are executed to eliminate errors at code level or unit level. A
unit is the smallest entity which can independently exist, e.g., a program module. Unit
testing verifies that the smallest entity can function correctly when isolated from the rest
of the codes/ units.
2. Integration Testing: Integration Test Plans are developed during the Architectural Design
Phase. These tests verify that groups created and tested independently can coexist and
communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase. Unlike
Unit and Integration Test Plans, System Test Plans are composed by the client's business
team. System testing ensures that the expectations from the application developed are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement analysis
part. It includes testing the software product in the user environment. Acceptance tests reveal
compatibility problems with the other systems available within the user environment. They
also discover non-functional problems such as load and performance defects in the real
user environment.
Advantages of the V-Model
1. Easy to Understand.
2. Testing Methods like planning, test designing happens well before coding.
3. This saves a lot of time. Hence a higher chance of success over the waterfall model.
4. Avoids the downward flow of the defects.
5. Works well for small projects where requirements are easily understood.
Incremental Model
1. Requirement analysis:
In the first phase of the incremental model, the product analysis experts identify the
requirements.
And the system functional requirements are understood by the requirement analysis team.
To develop the software under the incremental model, this phase performs a crucial role.
2. Design & Development:
In this phase of the Incremental model of SDLC, the design of the system functionality and
the development method are completed successfully.
When the software requires new functionality, the incremental model uses the design and
development phase.
3. Testing:
In the incremental model, the testing phase checks the performance of each existing function
as well as additional functionality.
In the testing phase, various methods are used to test the behavior of each task.
4. Implementation:
Agile Model
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback
1. Requirements gathering: In this phase, you must define the requirements. You should explain
business opportunities and plan the time and effort needed to build the project. Based on this
information, you can evaluate technical and economic feasibility.
2. Design the requirements: When you have identified the project, work with stakeholders to
define requirements. You can use the user flow diagram or the high-level UML diagram to show
the work of new features and show how it will apply to your existing system.
3. Construction/ iteration: When the team defines the requirements, the work begins. Designers
and developers start working on their project, which aims to deploy a working product. The
product will undergo various stages of improvement, so it includes simple, minimal functionality.
4. Testing: In this phase, the Quality Assurance team examines the product's performance and
looks for bugs.
5. Deployment: In this phase, the team issues a product for the user's work environment.
6. Feedback: After releasing the product, the last step is feedback. In this, the team receives
feedback about the product and works through the feedback.
The following agile methodologies are commonly used:
o Scrum
o Crystal
o Dynamic Software Development Method(DSDM)
o Feature Driven Development(FDD)
o Lean Software Development
o eXtreme Programming(XP)
Scrum
SCRUM is an agile development process focused primarily on ways to manage tasks in team-
based development conditions. There are three roles in it, and their responsibilities are:
o Scrum Master: The scrum master sets up the team, arranges the meetings and removes
obstacles to the process.
o Product owner: The product owner creates the product backlog, prioritizes the backlog and
is responsible for the distribution of functionality in each iteration.
o Scrum Team: The team manages its work and organizes the work to complete the sprint
or cycle.
eXtreme Programming(XP)
This type of methodology is used when customers have constantly changing demands or
requirements, or when they are not sure about the system's performance.
Crystal:
1. Chartering: Multi activities are involved in this phase such as making a development team,
performing feasibility analysis, developing plans, etc.
2. Cyclic delivery: this phase consists of two or more delivery cycles, during which:
o the team updates the release plan, and
o the integrated product is delivered to the users.
3. Wrap up: According to the user environment, this phase performs deployment and
post-deployment activities.
Dynamic Software Development Method (DSDM):
DSDM is a rapid application development strategy for software development and gives an agile
project distribution structure. The essential features of DSDM are that users must be actively
connected, and teams have been given the right to make decisions. The techniques used in DSDM
are:
1. Time Boxing
2. MoSCoW Rules
3. Prototyping
The DSDM project consists of the following seven phases:
1. Pre-project
2. Feasibility Study
3. Business Study
4. Functional Model Iteration
5. Design and build Iteration
6. Implementation
7. Post-project
Feature Driven Development(FDD):
This method focuses on "Designing and Building" features. In contrast to other agile methods,
FDD describes the small steps of the work that should be achieved separately for each feature.
Lean software development methodology follows the principle "just in time production." The lean
method indicates the increasing speed of software development and reducing costs. Lean
development can be summarized in seven phases.
1. Eliminating Waste
2. Amplifying learning
3. Defer commitment (deciding as late as possible)
4. Early delivery
5. Empowering the team
6. Building Integrity
7. Optimize the whole
Advantages of the Agile Model:
1. Frequent delivery.
2. Face-to-face communication with clients.
3. Efficient design that fulfils the business requirements.
4. Changes are acceptable at any time.
5. It reduces total development time.
Iterative Model
In this Model, you can start with some of the software specifications and develop the first
version of the software.
After the first version if there is a need to change the software, then a new version of the
software is created with a new iteration.
Every release of the Iterative Model finishes in an exact and fixed period that is called
iteration.
The Iterative Model allows accessing earlier phases, in which variations are made
accordingly.
The final output of the project is renewed at the end of the Software Development Life Cycle
(SDLC) process.
The various phases of Iterative model are as follows:
1. Requirement gathering & analysis: In this phase, requirements are gathered from customers
and checked by an analyst to determine whether they can be fulfilled. The analyst also checks
whether the needs can be achieved within budget. After all of this, the software team moves to the
next phase.
2. Design: In the design phase, the team designs the software using different diagrams like the
Data Flow diagram, activity diagram, class diagram, state transition diagram, etc.
3. Implementation: In the implementation phase, requirements are written in a programming
language and transformed into computer programs, which are called software.
4. Testing: After completing the coding phase, software testing starts using different test methods.
There are many test methods, but the most common are white box, black box, and grey box test
methods.
5. Deployment: After completing all the phases, software is deployed to its work environment.
6. Review: In this phase, after the product deployment, the review phase is performed to check the
behaviour and validity of the developed product. If any errors are found, then the process
starts again from requirement gathering.
7. Maintenance: In the maintenance phase, after deployment of the software in the working
environment, there may be some bugs or errors, or new updates may be required. Maintenance
involves debugging and adding new features or options.
As discussed above, this model is suitable when the project is small, like an academic or
practical project. It is also used when the size of the development team is small, when
requirements are not fully defined, and when the release date is not confirmed or given by the customer.
Prototype Model
The prototype model requires that before carrying out the development of actual software,
a working prototype of the system should be built.
A prototype is a toy implementation of the system. A prototype usually turns out to be a
very crude version of the actual system, possibly exhibiting limited functional capabilities,
low reliability, and inefficient performance as compared to the actual software.
In many instances, the client only has a general view of what is expected from the software
product.
In such a scenario where there is an absence of detailed information regarding the input to
the system, the processing needs, and the output requirement, the prototyping model may
be employed.
Steps of Prototype Model
Software project planning starts before technical work starts. The various planning
activities are:
The size is the crucial parameter for the estimation of other activities.
Resource requirements are estimated based on cost and development time.
Project schedule may prove to be very useful for controlling and monitoring the progress
of the project.
This is dependent on resources & development time.
For any new software project, it is necessary to know how much it will cost to develop and
how much development time it will take.
These estimates are needed before development is initiated, but how is this done? Several
estimation procedures have been developed, and they have the following attributes in
common.
1. During the planning stage, one needs to decide how many engineers are required for the
project and to develop a schedule.
2. In monitoring the project's progress, one needs to assess whether the project is
progressing according to plan and take corrective action, if necessary.
Cost Estimation Models
A model that makes use of a single variable to calculate desired values such as cost, time,
effort, etc. is said to be a static single-variable model.
The most common form of the equation is:
C = a * L^b
where C is the parameter being estimated (such as cost or effort), L is the size of the software,
and a and b are constants derived from historical data.
The Software Engineering Laboratory established a model called SEL model, for
estimating its software production.
This model is an example of the static, single variable model.
E = 1.4 * L^0.93
DOC = 30.4 * L^0.90
D = 4.6 * L^0.26
Where
E = Effort (in person-months)
DOC = Documentation (number of pages)
D = Duration (in months)
L = Size of the product (in thousands of lines of code, KLOC)
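To make the arithmetic concrete, the SEL equations above can be evaluated directly once the size L (in KLOC) is known. The short Python sketch below is illustrative only; the function name and the sample size of 50 KLOC are assumptions made for demonstration, not part of the model.

# Illustrative sketch of the SEL static single-variable model.
# E = 1.4 * L^0.93, DOC = 30.4 * L^0.90, D = 4.6 * L^0.26, with L in KLOC.
def sel_estimates(kloc):
    effort = 1.4 * kloc ** 0.93          # E, in person-months
    documentation = 30.4 * kloc ** 0.90  # DOC, in pages
    duration = 4.6 * kloc ** 0.26        # D, in months
    return effort, documentation, duration

e, doc, d = sel_estimates(50)  # assumed 50 KLOC product, for illustration
print(round(e, 1), "PM,", round(doc), "pages,", round(d, 1), "months")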
Static, Multivariable Models:
These models are based on method (1); they depend on several variables describing various
aspects of the software development environment.
In some models, several variables are needed to describe the software development process,
and a selected equation combines these variables to give the estimate of time and cost.
These models are called multivariable models.
WALSTON and FELIX developed a model at IBM that provides the following equations, giving
the relationship between lines of source code and effort:
E = 5.2 * L^0.91
D = 4.1 * L^0.36
The productivity index uses 29 variables which are found to be highly correlated with productivity, as
follows:
I = W1*X1 + W2*X2 + ... + W29*X29
where Wi is the weight factor for the i-th variable and Xi = {-1, 0, +1}; the estimator gives Xi one of the
values -1, 0 or +1 depending on whether the variable decreases, has no effect on, or increases the productivity.
Example: Compare the Walston-Felix Model with the SEL model on a software development
expected to involve 8 person-years of effort.
Solution: 8 person-years = 8 x 12 = 96 person-months, so E = 96 PM. Then
L (SEL) = (96/1.4)^(1/0.93) ≈ 94.26 KLOC ≈ 94,264 LOC
L (W-F) = (96/5.2)^(1/0.91) ≈ 24.63 KLOC ≈ 24,632 LOC
Average manning is the average number of persons required per month in the project.
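The example above can be reproduced by inverting E = a*L^b to L = (E/a)^(1/b). The following Python sketch simply re-checks the two values; the 96 person-months figure comes from converting 8 person-years.

# Sketch: solving L = (E/a)^(1/b) for the SEL and Walston-Felix effort equations.
effort_pm = 8 * 12  # 8 person-years = 96 person-months

l_sel = (effort_pm / 1.4) ** (1 / 0.93)  # SEL: E = 1.4 * L^0.93
l_wf = (effort_pm / 5.2) ** (1 / 0.91)   # Walston-Felix: E = 5.2 * L^0.91

print(round(l_sel * 1000), "LOC (SEL)")  # about 94,000 LOC
print(round(l_wf * 1000), "LOC (W-F)")   # about 24,600 LOC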
COCOMO Model
1. Get an initial estimate of the development effort from evaluation of thousands of delivered
lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate with all the multiplying
factors, i.e., multiply the value obtained in step 1 by the factors determined in step 2.
The initial estimate (also called nominal estimate) is determined by an equation of the form
used in the static single variable models, using KDLOC as the measure of the size.
To determine the initial effort Ei in person-months the equation used is of the type is shown
below
Ei = a*(KDLOC)^b
The values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into three types:
1. Organic
2. Semidetached
3. Embedded
1.Organic:
A development project can be treated as the organic type if the project deals with
developing a well-understood application program, the size of the development team is
reasonably small, and the team members are experienced in developing similar types of
projects.
Examples of this type of projects are simple business systems, simple inventory
management systems, and data processing systems.
2. Semidetached:
A development project can be treated as the semidetached type if the development consists
of a mixture of experienced and inexperienced staff. Team members may have limited
experience of related systems but may be unfamiliar with some aspects of the system being
developed.
Examples of semidetached systems include developing a new operating system (OS), a
database management system (DBMS), and a complex inventory management system.
3. Embedded:
A development project is considered to be of the embedded type if the software being
developed is strongly coupled to complex hardware, or if stringent regulations on the
operational procedures exist. Examples include software for air traffic control systems and ATMs.
According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model
1. Basic Model: The basic COCOMO model gives an approximate estimate of the project parameters.
The following expressions give the basic COCOMO estimation model:
Effort = a1*(KLOC)^a2 PM
Tdev = b1*(Effort)^b2 Months
Where
KLOC is the estimated size of the software product expressed in Kilo Lines of Code,
a1, a2, b1, b2 are constants for each group of software products,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person
months (PMs).
For the three classes of software products, the formulas for estimating the effort based on
the code size are shown below:
Organic: Effort = 2.4(KLOC)^1.05 PM
Semi-detached: Effort = 3.0(KLOC)^1.12 PM
Embedded: Effort = 3.6(KLOC)^1.20 PM
For the three classes of software products, the formulas for estimating the development
time based on the effort are given below:
Organic: Tdev = 2.5(Effort)^0.38 Months
Semi-detached: Tdev = 2.5(Effort)^0.35 Months
Embedded: Tdev = 2.5(Effort)^0.32 Months
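The basic COCOMO formulas above can be packaged into a small helper once KLOC has been estimated. The Python sketch below is a minimal illustration; the function and dictionary names are assumptions, while the constant pairs are the standard basic COCOMO values already listed.

# Sketch of basic COCOMO: Effort = a1*(KLOC)^a2 PM, Tdev = b1*(Effort)^b2 Months.
BASIC_COCOMO = {
    # project class : (a1,  a2,   b1,  b2)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semidetached":  (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, project_class):
    a1, a2, b1, b2 = BASIC_COCOMO[project_class]
    effort = a1 * kloc ** a2   # person-months
    tdev = b1 * effort ** b2   # months
    return effort, tdev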
Some insight into the basic COCOMO model can be obtained by plotting the estimated
characteristics for different software sizes.
The figure shows a plot of estimated effort versus product size.
From the figure, we can observe that the effort is somewhat superlinear in the size of the software
product.
Thus, the effort required to develop a product increases very rapidly with project size.
The development time versus the product size in KLOC is plotted in the figure.
From the figure it can be observed that the development time is a sublinear function of the size
of the product, i.e.
when the size of the product increases by two times, the time to develop the product does
not double but rises moderately.
This can be explained by the fact that for larger products, a larger number of activities
which can be carried out concurrently can be identified.
The parallel activities can be carried out simultaneously by the engineers. This reduces the
time to complete the project.
Further, from the figure, it can be observed that the development time is roughly the same for all
three categories of products.
For example, a 60 KLOC program can be developed in approximately 18 months,
regardless of whether it is of organic, semidetached, or embedded type.
From the effort estimation, the project cost can be obtained by multiplying the required
effort by the manpower cost per month.
But, implicit in this project cost computation is the assumption that the entire project cost
is incurred on account of the manpower cost alone.
In addition to manpower cost, a project would incur costs due to hardware and software
required for the project and the company overheads for administration, office space, etc.
It is important to note that the effort and the duration estimations obtained using the
COCOMO model are called a nominal effort estimate and nominal duration estimate.
The term nominal implies that if anyone tries to complete the project in a time shorter than
the estimated duration, then the cost will increase drastically.
But, if anyone completes the project over a longer period of time than the estimated, then
there is almost no decrease in the estimated cost value.
Example1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and
development time for each of the three model i.e., organic, semi-detached & embedded.
Solution: The basic COCOMO equation takes the form:
Effort = a1*(KLOC)^a2 PM
Tdev = b1*(Effort)^b2 Months
Estimated Size of project= 400 KLOC
(i) Organic Mode
E = 2.4*(400)^1.05 = 1295.31 PM
D = 2.5*(1295.31)^0.38 = 38.07 Months
(ii) Semidetached Mode
E = 3.0*(400)^1.12 = 2462.79 PM
D = 2.5*(2462.79)^0.35 = 38.45 Months
(iii) Embedded Mode
E = 3.6*(400)^1.20 = 4772.81 PM
D = 2.5*(4772.81)^0.32 = 37.60 Months
Example2: A project size of 200 KLOC is to be developed. Software development team has
average experience on similar type of projects. The project schedule is not very tight. Calculate
the Effort, development time, average staff size, and productivity of the project.
Solution: The semidetached mode is the most appropriate mode, keeping in view the size,
schedule and experience of development time.
Hence,
E = 3.0*(200)^1.12 = 1133.12 PM
D = 2.5*(1133.12)^0.35 = 29.3 Months
Average staff size (SS) = E/D = 1133.12/29.3 ≈ 38.67 persons
Productivity (P) = 200,000 LOC / 1133.12 PM ≈ 176 LOC/PM
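Example 2 can be checked mechanically with the same formulas; the Python sketch below uses the standard semidetached constants and the usual definitions of average staff size (E/D) and productivity (LOC/E).

# Sketch: reproducing Example 2 for a 200 KLOC semidetached project.
kloc = 200
effort = 3.0 * kloc ** 1.12          # about 1133 person-months
tdev = 2.5 * effort ** 0.35          # about 29.3 months
staff = effort / tdev                # average staff size, about 39 persons
productivity = kloc * 1000 / effort  # about 176 LOC per person-month
print(round(effort, 2), round(tdev, 1), round(staff, 1), round(productivity))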
2. Intermediate Model: The basic COCOMO model considers that the effort is only a function
of the number of lines of code and some constants calculated according to the various software
systems. However, in reality, effort and schedule cannot be estimated solely on the basis of lines
of code; various other project attributes also matter. The intermediate COCOMO model recognizes
these facts and refines the initial estimates obtained through the basic COCOMO model by using a
set of 15 cost drivers based on various attributes of software engineering.
Hardware attributes -
o Run-time performance constraints
o Memory constraints
o Volatility of the virtual machine environment
o Required turnaround time
Personnel attributes -
o Analyst capability
o Software engineering capability
o Applications experience
o Virtual machine experience
o Programming language experience
Project attributes -
o Use of software tools
o Application of software engineering methods
o Required development schedule
The intermediate COCOMO constants for the three project classes are:
Project          ai     bi     ci     di
Organic          3.2    1.05   2.5    0.38
Semidetached     3.0    1.12   2.5    0.35
Embedded         2.8    1.20   2.5    0.32
3. Detailed Model: Detailed COCOMO incorporates all characteristics of the intermediate version
with an assessment of the cost drivers' impact on each step of the software engineering process.
The detailed model uses various effort multipliers for each cost driver property.
In detailed COCOMO, the whole software is differentiated into multiple modules, and then
we apply COCOMO in various modules to estimate effort and then sum the effort.
The effort is determined as a function of program estimate, and a set of cost drivers are given
according to every phase of the software lifecycle.
1.1 Finance
Finance is the branch of economics concerned with issues such as allocation, management,
acquisition, and investment of resources. Finance is an element of every organization, including
software engineering organizations. The field of finance deals with the concepts of time, money,
risk, and how they are interrelated. It also deals with how money is spent and budgeted. Corporate
finance is concerned with providing the funds for an organization’s activities. Generally, this
involves balancing risk and profitability, while attempting to maximize an organization’s wealth
and the value of its stock. This holds primarily for “for-profit” organizations, but also applies to
“not-for-profit” organizations. The latter needs finances to ensure sustainability, while not
targeting tangible profit. To do this, an organization must
1.2 Accounting
Accounting is part of finance. It allows people whose money is being used to run an organization
to know the results of their investment: did they get the profit they were expecting? In “for-profit”
organizations, this relates to the tangible ROI, while in “not-for-profit” and governmental
organizations as well as “for-profit” organizations, it translates into sustainably staying in business.
The primary role of accounting is to measure the organization’s actual financial performance and
to communicate financial information about a business entity to stakeholders, such as shareholders,
financial auditors, and investors. Communication is generally in the form of financial statements
that show in money terms the economic resources to be controlled. It is important to select the
right information that is both relevant and reliable to the user. Information and its timing are
partially governed by risk management and governance policies. Accounting systems are also a
rich source of historical data for estimating.
1.3 Controlling
Controlling is an element of finance and accounting. Controlling involves measuring and
correcting the performance of finance and accounting. It ensures that an organization’s objectives
and plans are accomplished. Controlling cost is a specialized branch of controlling used to detect
variances of actual costs from planned costs.
1.4 Cash Flow
Cash flow is the movement of money into or out of a business, project, or financial product over a
given period. The concepts of cash flow instances and cash flow streams are used to describe the
business perspective of a proposal. To make a meaningful business decision about any specific
proposal, that proposal will need to be evaluated from a business perspective. In a proposal to
develop and launch product X, the payment for new software licenses is an example of an outgoing
cash flow instance. Money would need to be spent to carry out that proposal. The sales income
from product X in the 11th month after market launch is an example of an incoming cash flow
instance. Money would be coming in because of carrying out the proposal.
The term cash flow stream refers to the set of cash flow instances over time that are caused by
carrying out some given proposal. The cash flow stream is, in effect, the complete financial picture
of that proposal. How much money goes out? When does it go out? How much money comes in?
When does it come in? Simply, if the cash flow stream for Proposal A is more desirable than the
cash flow stream for Proposal B, then—all other things being equal—the organization is better off
carrying out Proposal A than Proposal B. Thus, the cash flow stream is an important input for
investment decision-making. A cash flow instance is a specific amount of money flowing into or
out of the organization at a specific time as a direct result of some activity. A cash flow diagram
is a picture of a cash flow stream. It gives the reader a quick overview of the financial picture of
the subject organization or project. Figure 12.2 shows an example of a cash flow diagram for a
proposal.
If we assume that candidate solutions solve a given technical problem equally well, why should
the organization care which one is chosen? The answer is that there is usually a large difference in
the costs and incomes from the different solutions. A commercial, off-the-shelf, object request
broker product might cost a few thousand dollars, but the effort to develop a homegrown service
that gives the same functionality could easily cost several hundred times that amount.
If the candidate solutions all adequately solve the problem from a technical perspective, then the
selection of the most appropriate alternative should be based on commercial factors such as
optimizing total cost of ownership (TCO) or maximizing the short-term return on investment
(ROI). Life cycle costs such as defect correction, field service, and support duration are also
relevant considerations. These costs need to be factored in when selecting among acceptable
technical approaches, as they are part of the lifetime ROI. A systematic process for making
decisions will achieve transparency and allow later justification. Governance criteria in many
organizations demand selection from at least two alternatives.
A systematic process is shown in Figure 12.3. It starts with a business challenge at hand and
describes the steps to identify alternative solutions, define selection criteria, evaluate the solutions,
implement one selected solution, and monitor the performance of that solution.
Figure 12.3 shows the process as mostly stepwise and serial. The real process is more fluid.
Sometimes the steps can be done in a different order and often several of the steps can be done in
parallel. The important thing is to be sure that none of the steps are skipped or curtailed. It’s also
important to understand that this same process applies at all levels of decision making: from a
decision as big as determining whether a software project should be done at all, to a deciding on
an algorithm or data structure to use in a software module. The difference is how financially
significant the decision is and, therefore, how much effort should be invested in making that
decision. The project-level decision is financially significant and probably warrants a relatively
high level of effort to make the decision. Selecting an algorithm is often much less financially
significant and warrants a much lower level of effort to make the decision, even though the same
basic decision-making process is being used.
More often than not, an organization could carry out more than one proposal if it wanted to, and
usually there are important relationships among proposals. Maybe Proposal Y can only be carried
out if Proposal X is also carried out. Or maybe Proposal P cannot be carried out if Proposal Q is
carried out, nor could Q be carried out if P were. Choices are much easier to make when there are
mutually exclusive paths—for example, either A or B or C or whatever is chosen. In preparing
decisions, it is recommended to turn any given set of proposals, along with their various
interrelationships, into a set of mutually exclusive alternatives. The choice can then be made
among these alternatives.
Several bases exist for comparing cash flow streams:
present worth
future worth
annual equivalent
internal rate of return
(discounted) payback period.
Based on the time-value of money, two or more cash flows are equivalent only when they equal
the same amount of money at a common point in time. Comparing cash flows only makes sense
when they are expressed in the same time frame.
Note that value can’t always be expressed in terms of money. For example, whether an item is a
brand name or not can significantly affect its perceived value. Relevant values that can’t be
expressed in terms of money still need to be expressed in similar terms so that they can be evaluated
objectively.
1.7 Inflation
Inflation describes long-term trends in prices. Inflation means that the same things cost more than
they did before. If the planning horizon of a business decision is longer than a few years, or if the
inflation rate is over a couple of percentage points annually, it can cause noticeable changes in the
value of a proposal. The present time value therefore needs to be adjusted for inflation rates and
also for exchange rate fluctuations.
1.8 Depreciation
Depreciation involves spreading the cost of a tangible asset across a number of time periods; it is
used to determine how investments in capitalized assets are charged against income over several
years. Depreciation is an important part of determining after-tax cash flow, which is critical for
accurately addressing profit and taxes. If a software product is to be sold after the development
costs are incurred, those costs should be capitalized and depreciated over subsequent time periods.
The depreciation expense for each time period is the capitalized cost of developing the software
divided across the number of periods in which the software will be sold. A software project
proposal may be compared to other software and non-software proposals or to alternative
investment options, so it is important to determine how those other proposals would be depreciated
and how profits would be estimated.
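As a simple illustration of the straight-line approach described above, the Python sketch below divides a capitalized development cost evenly across the periods in which the software will be sold; the figures are invented for the example.

# Sketch: straight-line depreciation of a capitalized software development cost.
# Both figures below are assumed purely for illustration.
capitalized_cost = 600_000  # capitalized development cost
sales_periods = 5           # number of years the software will be sold

depreciation_per_period = capitalized_cost / sales_periods
print("Depreciation expense per period:", depreciation_per_period)  # 120000.0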
1.9 Taxation
Governments charge taxes in order to finance expenses that society needs but that no single
organization would invest in. Companies have to pay income taxes, which can take a substantial
portion of a corporation’s gross profit. A decision analysis that does not account for taxation can
lead to the wrong choice. A proposal with a high pretax profit won’t look nearly as profitable in
post-tax terms. Not accounting for taxation can also lead to unrealistically high expectations about
how profitable a proposed product might be.
1.10 Time-Value of Money
One of the most fundamental concepts in finance—and therefore, in business decisions— is that
money has time-value: its value changes over time. A specific amount of money right now almost
always has a different value than the same amount of money at some other time. This concept has
been around since the earliest recorded human history and is commonly known as time-value. In
order to compare proposals or portfolio elements, they should be normalized in cost, value, and
risk to the net present value. Currency exchange variations over time need to be taken into account
based on historical data. This is particularly important in cross-border developments of all kinds.
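Normalizing cash flows to a common point in time is usually done with a present worth (net present value) calculation, PW = sum of Ft / (1 + i)^t over the planning horizon. The sketch below is illustrative; the cash flow figures and the 10% discount rate (which could be the organization's MARR) are assumptions.

# Sketch: present worth (net present value) of a cash flow stream.
# cash_flows[t] is the net cash flow at the end of year t; year 0 is the initial outlay.
cash_flows = [-100_000, 30_000, 40_000, 40_000, 35_000]  # assumed figures
rate = 0.10                                              # assumed discount rate

present_worth = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
print(round(present_worth, 2))  # positive: better than investing at the assumed rate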
1.11 Efficiency
Economic efficiency of a process, activity, or task is the ratio of resources actually consumed to
resources expected to be consumed or desired to be consumed in accomplishing the process,
activity, or task. Efficiency means “doing things right.” An efficient behavior, like an effective
behavior, delivers results—but keeps the necessary effort to a minimum. Factors that may affect
efficiency in software engineering include product complexity, quality requirements, time
pressure, process capability, team distribution, interrupts, feature churn, tools, and programming
language.
1.12 Effectiveness
Effectiveness is about having impact. It is the relationship between achieved objectives to defined
objectives. Effectiveness means “doing the right things.” Effectiveness looks only at whether
defined objectives are reached—not at how they are reached.
1.13 Productivity
Productivity is the ratio of output over input from an economic perspective. Output is the value
delivered. Input covers all resources (e.g., effort) spent to generate the output. Productivity
combines efficiency and effectiveness from a value-oriented perspective: maximizing productivity
is about generating highest value with lowest resource consumption.
2.1 Product
A product is an economic good (or output) that is created in a process that transforms product
factors (or inputs) to an output. When sold, a product is a deliverable that creates both a value and
an experience for its users. A product can be a combination of systems, solutions, materials, and
services delivered internally (e.g., in-house IT solution) or externally (e.g., software application),
either as-is or as a component for another product (e.g., embedded software).
2.5 Product Life Cycle
A software product life cycle (SPLC) includes all activities needed to define, build, operate,
maintain, and retire a software product or service and its variants. The SPLC activities of “operate,”
“maintain,” and “retire” typically occur in a much longer time frame than initial software
development (the software development life cycle—SDLC—see Software Life Cycle Models in
the Software Engineering Process KA). Also the operate-maintain-retire activities of an SPLC
typically consume more total effort and other resources than the SDLC activities (see Majority of
Maintenance Costs in the Software Maintenance KA). The value contributed by a software product
or associated services can be objectively determined during the “operate and maintain” time frame.
Software engineering economics should be concerned with all SPLC activities, including the
activities after initial product release.
2.6 Project Life Cycle
Project life cycle activities typically involve five process groups—Initiating, Planning, Executing,
Monitoring and Controlling, and Closing. The activities within a software project life cycle are
often interleaved, overlapped, and iterated in various ways [3*, c2]. For instance, agile product
development within an SPLC involves multiple iterations that produce increments of deliverable
software. An SPLC should include risk management and synchronization with different suppliers
(if any), while providing auditable decision-making information (e.g., complying with product
liability needs or governance regulations). The software project life cycle and the software product
life cycle are interrelated; an SPLC may include several SDLCs.
2.7 Proposals
Making a business decision begins with the notion of a proposal. Proposals relate to reaching a
business objective—at the project, product, or portfolio level. A proposal is a single, separate
option that is being considered, like carrying out a particular software development project or not.
Another proposal could be to enhance an existing software component, and still another might be
to redevelop that same software from scratch. Each proposal represents a unit of choice—either
you can choose to carry out that proposal or you can choose not to. The whole purpose of business
decision-making is to figure out, given the current business circumstances, which proposals should
be carried out and which shouldn’t.
2.8 Investment Decisions
Investors make investment decisions to spend money and resources on achieving a target objective.
Investors are either inside (e.g., finance, board) or outside (e.g., banks) the organization. The target
relates to some economic criteria, such as achieving a high return on the investment, strengthening
the capabilities of the organization, or improving the value of the company. Intangible aspects such
as goodwill, culture, and competences should be considered.
2.9 Planning Horizon
When an organization chooses to invest in a particular proposal, money gets tied up in that
proposal— so-called “frozen assets.” The economic impact of frozen assets tends to start high and
decreases over time. On the other hand, operating and maintenance costs of elements associated
with the proposal tend to start low but increase over time. The total cost of the proposal—that is,
owning and operating a product—is the sum of those two costs. Early on, frozen asset costs
dominate; later, the operating and maintenance costs dominate. There is a point in time where the
sum of the costs is minimized; this is called the minimum cost lifetime.
To properly compare a proposal with a four-year life span to a proposal with a six-year life span,
the economic effects of either cutting the six-year proposal by two years or investing the profits
from the four-year proposal for another two years need to be addressed. The planning horizon,
sometimes known as the study period, is the consistent time frame over which proposals are
considered. Effects such as software lifetime will need to be factored into establishing a planning
horizon. Once the planning horizon is established, several techniques are available for putting
proposals with different life spans into that planning horizon.
2.10 Price and Pricing
A price is what is paid in exchange for a good or service. Price is a fundamental aspect of financial
modeling and is one of the four Ps of the marketing mix. The other three Ps are product, promotion,
and place. Price is the only revenue-generating element amongst the four Ps; the rest are costs.
Pricing is an element of finance and marketing. It is the process of determining what a company
will receive in exchange for its products. Pricing factors include manufacturing cost, market
placement, competition, market condition, and quality of product. Pricing applies prices to
products and services based on factors such as fixed amount, quantity break, promotion or sales
campaign, specific vendor quote, shipment or invoice date, combination of multiple orders, service
offerings, and many others. The needs of the consumer can be converted into demand only if the
consumer has the willingness and capacity to buy the product. Thus, pricing is very important in
marketing. Pricing is initially done during the project initiation phase and is a part of “go” decision
making.
2.11 Cost and Costing
A cost is the value of money that has been used up to produce something and, hence, is not
available for use anymore. In economics, a cost is an alternative that is given up as a result of a
decision.
A sunk cost is the expenses before a certain time, typically used to abstract decisions from expenses
in the past, which can cause emotional hurdles in looking forward. From a traditional economics
point of view, sunk costs should not be considered in decision making. Opportunity cost is the cost
of an alternative that must be forgone in order to pursue another alternative.
Costing is part of finance and product management. It is the process to determine the cost based
on expenses (e.g., production, software engineering, distribution, rework) and on the target cost to
be competitive and successful in a market. The target cost can be below the actual estimated cost.
The planning and controlling of these costs (called cost management) is important and should
always be included in costing.
An important concept in costing is the total cost of ownership (TCO). This holds especially for
software, because there are many not-so-obvious costs related to SPLC activities after initial
product development. TCO for a software product is defined as the total cost for acquiring,
activating, and keeping that product running. These costs can be grouped as direct and indirect
costs. TCO is an accounting method that is crucial in making sound economic decisions.
2.12 Performance Measurement
Performance measurement is the process whereby an organization establishes and measures the
parameters used to determine whether programs, investments, and acquisitions are achieving the
desired results. It is used to evaluate whether performance objectives are actually achieved; to
control budgets, resources, progress, and decisions; and to improve performance.
2.13 Earned Value Management
Earned value management (EVM) is a project management technique for measuring progress
based on created value. At a given moment, the results achieved to date in a project are compared
with the projected budget and the planned schedule progress for that date. Progress relates already-
consumed resources and achieved results at a given point in time with the respective planned
values for the same date. It helps to identify possible performance problems at an early stage. A
key principle in EVM is tracking cost and schedule variances via comparison of planned versus
actual schedule and budget versus actual cost. EVM tracking gives much earlier visibility to
deviations and thus permits corrections earlier than classic cost and schedule tracking that only
looks at delivered documents and products.
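The comparison of planned versus actual figures is usually expressed through a few standard EVM quantities: planned value (PV), earned value (EV), and actual cost (AC), from which variances and performance indices are derived. The Python sketch below uses invented numbers purely for illustration.

# Sketch: basic earned value management (EVM) indicators (figures are assumed).
pv = 120_000  # planned value: budgeted cost of work scheduled to date
ev = 100_000  # earned value: budgeted cost of work actually performed
ac = 110_000  # actual cost of the work performed

cost_variance = ev - ac       # negative means over budget
schedule_variance = ev - pv   # negative means behind schedule
cpi = ev / ac                 # cost performance index (< 1 means over budget)
spi = ev / pv                 # schedule performance index (< 1 means behind schedule)
print(cost_variance, schedule_variance, round(cpi, 2), round(spi, 2))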
2.14 Termination Decisions
Termination means to end a project or product. Termination can be preplanned for the end of a
long product lifetime (e.g., when foreseeing that a product will reach its lifetime) or can come
rather spontaneously during product development (e.g., when project performance targets are not
achieved). In both cases, the decision should be carefully prepared, considering always the
alternatives of continuing versus terminating. Costs of different alternatives must be estimated—
covering topics such as replacement, information collection, suppliers, alternatives, assets, and
utilizing resources for other opportunities. Sunk costs should not be considered in such decision
making because they have been spent and will not reappear as a value.
2.15 Replacement and Retirement Decisions
A replacement decision is made when an organization already has a particular asset and they are
considering replacing it with something else; for example, deciding between maintaining and
supporting a legacy software product or redeveloping it from the ground up. Replacement
decisions use the same business decision process as described above, but there are additional
challenges: sunk cost and salvage value. Retirement decisions are also about getting out of an
activity altogether, such as when a software company considers not selling a software product
anymore or a hardware manufacturer considers not building and selling a particular model of
computer any longer. Retirement decision can be influenced by lock-in factors such as technology
dependency and high exit costs.
Goals in software engineering economics are mostly business goals (or business objectives). A
business goal relates business needs (such as increasing profitability) to investing resources (such
as starting a project or launching a product with a given budget, content, and timing). Goals apply
to operational planning (for instance, to reach a certain milestone at a given date or to extend
software testing by some time to achieve a desired quality level—see Key Issues in the Software
Testing KA) and to the strategic level (such as reaching a certain profitability or market share in a
stated time period).
An estimate is a well-founded evaluation of resources and time that will be needed to achieve
stated goals. A software estimate is used to determine whether the project goals can be achieved
within the constraints on schedule, budget, features, and quality attributes. Estimates are typically
internally generated and are not necessarily visible externally. Estimates should not be driven
exclusively by the project goals because this could make an estimate overly optimistic. Estimation
is a periodic activity; estimates should be continually revised during a project.
A plan describes the activities and milestones that are necessary in order to reach the goals of a
project. The plan should be in line with the goal and the estimate, which is not necessarily easy
and obvious— such as when a software project with given requirements would take longer than
the target date foreseen by the client. In such cases, plans demand a review of initial goals as well
as estimates and the underlying uncertainties and inaccuracies. Creative solutions with the
underlying rationale of achieving a win-win position are applied to resolve conflicts.
To be of value, planning should involve consideration of the project constraints and commitments
to stakeholders. Figure 12.4 shows how goals are initially defined. Estimates are done based on
the initial goals. The plan tries to match the goals and the estimates. This is an iterative process,
because an initial estimate typically does not meet the initial goals.
Estimations are used to analyze and forecast the resources or time necessary to implement
requirements. Five families of estimation techniques exist:
Expert judgment
Analogy
Estimation by parts
Parametric methods
Statistical methods.
No single estimation technique is perfect, so using multiple estimation techniques is useful.
Convergence among the estimates produced by different techniques indicates that the estimates
are probably accurate. Spread among the estimates indicates that certain factors might have been
overlooked. Finding the factors that caused the spread and then re-estimating again to produce
results that converge could lead to a better estimate.
3.3 Addressing Uncertainty
Because of the many unknown factors during project initiation and planning, estimates are
inherently uncertain; that uncertainty should be addressed in business decisions. Techniques for
addressing uncertainty include
Decisions under risk techniques are used when the decision maker can assign probabilities to the
different possible outcomes. The specific techniques include
Decisions under uncertainty techniques are used when the decision maker cannot assign
probabilities to the different possible outcomes because needed information is not available.
Specific techniques include
Laplace Rule
Maximin Rule
Maximax Rule
Hurwicz Rule
Minimax Regret Rule.
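To illustrate how some of these rules can differ, the Python sketch below applies the Laplace, Maximin, and Maximax rules to a small invented payoff matrix (one row of outcomes per alternative, one column per possible state of nature); the proposals and payoffs are assumptions.

# Sketch: decision-under-uncertainty rules applied to an assumed payoff matrix.
payoffs = {
    "Proposal A": [50, 20, -10],
    "Proposal B": [30, 25, 15],
    "Proposal C": [70, 0, -30],
}

laplace = max(payoffs, key=lambda a: sum(payoffs[a]) / len(payoffs[a]))  # best average outcome
maximin = max(payoffs, key=lambda a: min(payoffs[a]))                    # best worst case
maximax = max(payoffs, key=lambda a: max(payoffs[a]))                    # best best case
print(laplace, maximin, maximax)  # Proposal B, Proposal B, Proposal C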
Figure 12.5 describes a process for identifying the best alternative from a set of mutually exclusive
alternatives. Decision criteria depend on the business objectives and typically include ROI or
Return on Capital Employed (ROCE).
For-profit decision techniques don’t apply for government and nonprofit organizations. In these
cases, organizations have different goals—which means that a different set of decision techniques
are needed, such as cost-benefit or cost-effectiveness analysis.
Figure 12.5: The for-profit decision-making process
4.2 Minimum Acceptable Rate of Return
The minimum acceptable rate of return (MARR) is the lowest internal rate of return the
organization would consider to be a good investment. Generally speaking, it wouldn’t be smart to
invest in an activity with a return of 10% when there’s another activity that’s known to return 20%.
The MARR is a statement that an organization is confident it can achieve at least that rate of return.
The MARR represents the organization’s opportunity cost for investments. By choosing to invest
in some activity, the organization is explicitly deciding to not invest that same money somewhere
else. If the organization is already confident it can get some known rate of return, other alternatives
should be chosen only if their rate of return is at least that high. A simple way to account for that
opportunity cost is to use the MARR as the interest rate in business decisions. An alternative’s
present worth evaluated at the MARR shows how much more or less (in present- day cash terms)
that alternative is worth than investing at the MARR.
4.4 Cost-Benefit Analysis
Cost-benefit analysis is one of the most widely used methods for evaluating individual proposals.
Any proposal with a benefit-cost ratio of less than 1.0 can usually be rejected without further
analysis because it would cost more than the benefit. Proposals with a higher ratio need to consider
the associated risk of an investment and compare the benefits with the option of investing the
money at a guaranteed interest rate.
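A minimal sketch of the screening step described above, assuming the benefits and costs of a proposal have already been expressed in present-worth terms; all figures are invented.

# Sketch: benefit-cost ratio screening of a single proposal (figures are assumed).
pw_benefits = 180_000  # present worth of expected benefits
pw_costs = 150_000     # present worth of expected costs

bcr = pw_benefits / pw_costs
if bcr < 1.0:
    print("Reject: costs exceed benefits")
else:
    print("Benefit-cost ratio:", round(bcr, 2), "- compare against alternatives and risk")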
4.6 Cost-Effectiveness Analysis
Cost-effectiveness analysis is similar to cost benefit analysis. There are two versions of cost
effectiveness analysis: the fixed-cost version maximizes the benefit given some upper bound on
cost; the fixed-effectiveness version minimizes the cost needed to achieve a fixed goal.
4.7 Break-Even Analysis
Break-even analysis identifies the point where the costs of developing a product and the revenue
to be generated are equal. Such an analysis can be used to choose between different proposals at
different estimated costs and revenue. Given estimated costs and revenue of two or more proposals,
break-even analysis helps in choosing among them.
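For instance, when comparing a proposal with a higher fixed cost but lower per-unit cost against one with a lower fixed cost but higher per-unit cost, the break-even quantity is where the two total-cost lines cross. The sketch below uses invented figures.

# Sketch: break-even point between two proposals with assumed cost structures.
# Total cost = fixed cost + (variable cost per unit) * quantity.
fixed_a, var_a = 200_000, 10  # Proposal A: e.g., develop in-house
fixed_b, var_b = 50_000, 60   # Proposal B: e.g., license per unit

# Break-even quantity where fixed_a + var_a*q == fixed_b + var_b*q
q_break_even = (fixed_a - fixed_b) / (var_b - var_a)
print("Break-even at about", round(q_break_even), "units")  # above this, Proposal A is cheaper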
4.8 Business Case
The business case is the consolidated information summarizing and explaining a business proposal
from different perspectives for a decision maker (cost, benefit, risk, and so on). It is often used to
assess the potential value of a product, which can be used as a basis in the investment decision
making process. As opposed to a mere profit-and-loss calculation, the business case is a “case” of plans
and analyses that is owned by the product manager and used in support of achieving the business
objectives.
4.9 Multiple Attribute Evaluation
The topics discussed so far are used to make decisions based on a single decision criterion: money.
The alternative with the best present worth, the best ROI, and so forth is the one selected. Aside
from technical feasibility, money is almost always the most important decision criterion, but it’s
not always the only one. Quite often there are other criteria, other “attributes,” that need to be
considered, and those attributes can’t be cast in terms of money. Multiple attribute decision
techniques allow other, nonfinancial criteria to be factored into the decision.
There are two families of multiple attribute decision techniques that differ in how they use the
attributes in the decision. One family is the “compensatory,” or single-dimensioned, techniques.
This family collapses all of the attributes onto a single figure of merit. The family is called
compensatory because, for any given alternative, a lower score in one attribute can be compensated
by—or traded off against—a higher score in other attributes. The compensatory techniques include
nondimensional scaling
additive weighting
analytic hierarchy process.
In contrast, the other family is the “non-compensatory,” or fully dimensioned, techniques. This
family does not allow tradeoffs among the attributes. Each attribute is treated as a separate entity
in the decision process. The non-compensatory techniques include
dominance
satisficing
lexicography.
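As an illustration of a compensatory technique, the Python sketch below applies simple additive weighting to two invented alternatives; the criteria, weights (summing to 1.0), and scores are assumptions made only for the example.

# Sketch: additive weighting, a compensatory multiple-attribute technique.
weights = {"cost": 0.5, "time_to_market": 0.3, "vendor_support": 0.2}

scores = {
    "Alternative A": {"cost": 7, "time_to_market": 5, "vendor_support": 9},
    "Alternative B": {"cost": 6, "time_to_market": 8, "vendor_support": 6},
}

totals = {name: sum(weights[c] * s[c] for c in weights) for name, s in scores.items()}
best = max(totals, key=totals.get)
print(totals, "->", best)  # A scores 6.8, B scores 6.6 -> Alternative A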
4.10 Optimization Analysis
The typical use of optimization analysis is to study a cost function over a range of values to find
the point where overall performance is best. Software’s classic space-time tradeoff is an example
of optimization; an algorithm that runs faster will often use more memory. Optimization balances
the value of the faster runtime against the cost of the additional memory.
Real options analysis can be used to quantify the value of project choices, including the value of
delaying a decision. Such options are difficult to compute with precision. However, awareness that
choices have a monetary value provides insight in the timing of decisions such as increasing project
staff or lengthening time to market to improve quality.
5 Practical Considerations
5.1 The “Good Enough” Principle
Often software engineering projects and products are not precise about the targets that should be
achieved. Software requirements are stated, but the marginal value of adding a bit more
functionality cannot be measured. The result could be late delivery or too-high cost. The “good
enough” principle relates marginal value to marginal cost and provides guidance to determine
criteria when a deliverable is “good enough” to be delivered. These criteria depend on business
objectives and on prioritization of different alternatives, such as ranking software requirements,
measurable quality attributes, or relating schedule to product content and cost.
The RACE principle (reduce accidents and control essence) is a popular rule towards good enough
software. Accidents imply unnecessary overheads such as gold-plating and rework due to late
defect removal or too many requirements changes. Essence is what customers pay for. Software
engineering economics provides the mechanisms to define criteria that determine when a
deliverable is “good enough” to be delivered. It also highlights that both words are relevant: “good”
and “enough.” Insufficient quality or insufficient quantity is not good enough.
Agile methods are examples of “good enough” that try to optimize value by reducing the overhead
of delayed rework and the gold plating that results from adding features that have low marginal
value for the users. In agile methods, detailed planning and lengthy development phases are
replaced by incremental planning and frequent delivery of small increments of a deliverable
product that is tested and evaluated by user representatives.
5.2 Friction-Free Economy
Economic friction is everything that keeps markets from having perfect competition. It involves
distance, cost of delivery, restrictive regulations, and/or imperfect information. In high-friction
markets, customers don’t have many suppliers from which to choose. Having been in a business
for a while or owning a store in a good location determines the economic position. It’s hard for
new competitors to start business and compete. The marketplace moves slowly and predictably.
Friction-free markets are just the reverse. New competitors emerge and customers are quick to
respond. The marketplace is anything but predictable. Theoretically, software and IT are friction
free. New companies can easily create products and often do so at a much lower cost than
established companies, since they need not consider any legacies. Marketing and sales can be done
via the Internet and social networks, and basically free distribution mechanisms can enable a ramp
up to a global business. Software engineering economics aims to provide foundations to judge how
a software business performs and how friction-free a market actually is. For instance, competition
among software app developers is inhibited when apps must be sold through an app store and
comply with that store’s rules.
5.3 Ecosystems
An ecosystem is an environment consisting of all the mutually dependent stakeholders, business
units, and companies working in a particular area. In a typical ecosystem, there are producers and
consumers, where the consumers add value to the consumed resources. Note that a consumer is
not the end user but an organization that uses the product to enhance it. A software ecosystem is,
for instance, a supplier of an application working with companies doing the installation and support
in different regions. Neither one could exist without the other. Ecosystems can be permanent or
temporary. Software engineering economics provides the mechanisms to evaluate alternatives in
establishing or extending an ecosystem—for instance, assessing whether to work with a specific
distributor or have the distribution done by a company doing service in an area.
5.4 Offshoring and Outsourcing
Offshoring means executing a business activity beyond sales and marketing outside the home
country of an enterprise. Enterprises typically either have their offshoring branches in low-cost
countries or they ask specialized companies abroad to execute the respective activity. Offshoring
should therefore not be confused with outsourcing. Offshoring within a company is called captive
offshoring. Outsourcing is the result-oriented relationship with a supplier who executes business
activities for an enterprise when, traditionally, those activities were executed inside the enterprise.
Outsourcing is site-independent. The supplier can reside in the neighborhood of the enterprise or
offshore (outsourced offshoring). Software engineering economics provides the basic criteria and
business tools to evaluate different sourcing mechanisms and control their performance. For
instance, using an outsourcing supplier for software development and maintenance might reduce
the cost per hour of software development, but increase the number of hours and capital expenses
due to an increased need for monitoring and communication.
Techniques of software project control and reporting:
Project Control focuses on the factors that impact the schedule and budget of your
project.
It is a key factor in keeping your project running smoothly.
Issues associated with project control are: Scope creep – resulting in blow-out of cost
and schedule.
Effective project reporting:
Shows the project team what is working, so they can explain why it's working and
focus more on it.
Uncovers what’s not working so the team can investigate and determine an appropriate
course of action i.e. what to do about it with the help of the project dashboard.
Gives the team a 360° overview of how the project is doing so they can determine what
steps to take next.
1. Project status reports
The project status report is a critical report that shows stakeholders a general snapshot of how well
the project is advancing toward its targets. The project status report can be thought of as a general
update designed to keep stakeholders informed of project progress, emerging issues, and key points to
note, all at a glance.
2. Project health reports
Project health reports are designed to update stakeholders on the overall health of the project, derived
from whether the project is advancing as projected, in danger of stagnating, or completely
stagnated.
Project health reports make it easy to identify when something's wrong so the team can identify what
it is and get it out of the way.
3. Team availability reports
The team availability report functions like a team calendar that shows every team member's schedule
so it’s easy to see who’s occupied and when they are busy. This way, stakeholders who are planning
for a project or requiring input anywhere can see which team members can be assigned, those who
can safely take on more work, as well as those who are at full capacity and might need assistance.
Availability reports make it easy to visualize how much everyone has on their plates so
work can be more evenly distributed to achieve faster results, higher efficiency, and most
importantly, prevent project burnout between teams.
An availability report plots staff names against calendar days, with either a color tone or a
written designation showing their workload for each calendar day.
4. Risk reports
A risk report identifies the blockers hindering a project’s successful completion and presents it for the
stakeholders’ analysis. The risk report is designed to not only display existing or potential obstacles
but to offer a sense of the danger they pose to the project so the project’s stakeholders can take
adequate steps to eliminate project risks or adapt the project.
Determine existing and potential project constraints that are already holding back the
project, or that will hold it back
Visualize obstacles on a risk scale to determine which ones to prioritize
Determine how to keep future projects from running into similar hitches
5. Variance report
It’s quite common for teams to deviate from the project’s key targets without even knowing. In the
end, this results in project failure after time and resources have been expended.
A variance report helps the project team and stakeholders to ensure that doesn’t happen. You can
track the target project milestones and objectives of the project along with the work that’s getting
done.
With a variance report, the team can see whether the work they're getting done is actually meeting the
project's targets, or whether they're just spending time without ticking off the following:
o project milestones
o project objectives
o project deliverables.
6. Time tracking reports
Project time tracking helps the project team & stakeholders see how much time is getting spent by
team members at every stage of the project management process. A time tracking report helps the
team to see how much time overall is spent on specific tasks and how much individual team members
spend on tasks.
Time tracking reports help in assigning team members to tasks where they’re more
efficient,
Tracking time spent on tasks for compensation,
as well as optimizing systems and processes so work gets done faster.
The aim of project reporting is to offer all the information generated from your projects in a simple
format so stakeholders can understand and apply those insights. Here are some best practices that’ll
help you create reports that actually enable project stakeholders to make informed decisions.
The aim of project management reports is to deliver processed data to those who need it so they can
be informed and make appropriate decisions from it. It’s important that reports present solid data that
stakeholders can look at and get an idea of the big picture.
Apply an abundance of images, charts, and graphs wherever appropriate to fully illustrate the
implications of whatever data you present with the help of visual project management tools.
Senior-level management won’t have the time to sift through small details; team members won’t be
able to make much out of a report that shows only a few figures, project management charts, and
notes. Reports must be adapted to the needs of your specific audience so they get all the information
they need through project communication management, without getting bogged down or left in the
dark with incomplete data.
A software risk is a problem that may or may not occur; this reflects the uncertainty of risk, but
if it does occur, unwanted losses or threatening consequences will follow. It is generally
caused by incomplete information, lack of control, or lack of time. Risk assessment and risk
mitigation is a process in which risks to the scope, schedule, cost, and quality of the project are
identified, assessed, and mitigated.
Risk Assessment:
Risk assessment simply means to describe the overall process or method to identify risk and
problem factors that might cause harm. It is actually a systematic examination of a task or project
that you perform to simply identify significant risks, problems, hazards, and then to find out
control measures that you will take to reduce risk. The best approach is to prepare a set of
questions that can be answered by project managers in order to assess overall project risks.
These questions are shown below:
Will the project get proper support from the customer manager?
Are end-users committed to software that has been produced?
Is there a clear understanding of the requirements?
Is there an active involvement of customers in the requirement definition?
Are the expectations set for the product realistic?
Is project scope stable?
Are there team members with the required skills?
Are project requirements stable?
Is the technology used for the software known to the developers?
Is the size of the team sufficient to develop the required product?
Do all customers understand the importance of the product and the requirements of the system
to be built?
Thus, the number of negative answers to these questions represents the severity of the impact of
risk on the overall project. It is not about creating a large number of work papers, but
rather about simply identifying risks and finding measures to control them in your workplace.
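As a simple illustration of turning these answers into a rough risk indicator, the hypothetical Python sketch below counts the negative answers; the question labels mirror the list above, while the scoring thresholds are purely illustrative assumptions.

# Illustrative sketch: counting negative answers to the risk-assessment questions
# as a rough indicator of overall project risk. Thresholds are assumptions.
answers = {
    "Customer manager support": True,
    "End-user commitment": True,
    "Clear understanding of requirements": False,
    "Customer involvement in requirements definition": True,
    "Realistic expectations": False,
    "Stable project scope": True,
    "Team has required skills": True,
    "Stable requirements": False,
    "Technology known to developers": True,
    "Sufficient team size": True,
    "Customers understand importance of requirements": True,
}
negatives = sum(1 for ok in answers.values() if not ok)
ratio = negatives / len(answers)
severity = "high" if ratio > 0.4 else "moderate" if ratio > 0.2 else "low"
print(f"{negatives} negative answers out of {len(answers)} -> {severity} project risk")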
Risk Mitigation:
Risk mitigation simply means reducing the adverse effects and impact of risks that are harmful to the
project and to business continuity. It includes introducing measures and steps into the project
plan to mitigate, reduce, eliminate, or control risk. In its strongest form, risk mitigation means
preventing risks from occurring at all (risk avoidance).
Configuration management:
Importance of SCM
It helps in controlling and managing access to various SCIs, e.g., by preventing
two members of a team from checking out the same component for modification at the
same time.
It provides the tools to ensure that changes are being properly implemented.
It has the capability of describing and storing the various constituents of the software.
SCM keeps the system in a consistent state by automatically producing derived
versions upon modification of a component.
SCM Process
It uses tools to ensure that the necessary changes have been implemented adequately in
the appropriate components. The SCM process defines a number of tasks:
Identification
Basic Object: A unit of text created by a software engineer during analysis, design, code,
or test.
Aggregate Object: A collection of basic objects and other aggregate objects. A design
specification is an aggregate object.
Each object has a set of distinct characteristics that identify it uniquely: a name, a
description, a list of resources, and a "realization."
The interrelationships between configuration objects can be described with a Module
Interconnection Language (MIL).
Version Control
Version control combines procedures and tools to handle the different versions of configuration
objects that are generated during the software process.
Clemm defines version control in the context of SCM: Configuration management
allows a user to specify the alternative configuration of the software system through the
selection of appropriate versions. This is supported by associating attributes with each
software version, and then allowing a configuration to be specified [and constructed] by
describing the set of desired attributes.
Change Control
James Bach describes change control in the context of SCM as: change control is vital,
but the forces that make it essential also make it annoying.
We worry about change because a small confusion in the code can create a big failure in
the product. But it can also fix a significant failure or enable incredible new capabilities.
We worry about change because a single rogue developer could sink the project, yet
brilliant ideas originate in the minds of those rogues, and
a burdensome change control process could effectively discourage them from doing
creative work.
A change request is submitted and evaluated to assess technical merit, potential side
effects, the overall impact on other configuration objects and system functions, and the
projected cost of the change.
The results of the evaluations are presented as a change report, which is used by a change
control authority (CCA) - a person or a group who makes a final decision on the status and
priority of the change.
The "check-in" and "check-out" process implements two necessary elements of change
control-access control and synchronization control.
Access Control governs which software engineers have the authority to access and modify
a particular configuration object.
Synchronization Control helps to ensure that parallel changes, performed by two
different people, don't overwrite one another.
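To make access control and synchronization control concrete, here is a small hypothetical Python sketch of a lock-based check-out/check-in scheme; the class and method names are illustrative assumptions, not taken from any real SCM tool.

# Hypothetical sketch of access control and synchronization control via check-out locks.
class ConfigurationObject:
    def __init__(self, name, authorized_users):
        self.name = name
        self.authorized_users = set(authorized_users)  # access control list
        self.locked_by = None                          # synchronization lock

    def check_out(self, user):
        if user not in self.authorized_users:          # access control
            raise PermissionError(f"{user} may not modify {self.name}")
        if self.locked_by is not None:                 # synchronization control
            raise RuntimeError(f"{self.name} is already checked out by {self.locked_by}")
        self.locked_by = user

    def check_in(self, user):
        if self.locked_by != user:
            raise RuntimeError(f"{self.name} is not checked out by {user}")
        self.locked_by = None                          # release the lock

obj = ConfigurationObject("payment_module", authorized_users=["alice", "bob"])
obj.check_out("alice")
# obj.check_out("bob")   # would raise: already checked out by alice
obj.check_in("alice")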
Configuration Audit
An SCM audit verifies that the software product satisfies the baseline requirements and
ensures consistency between what is built and what is delivered.
SCM audits also ensure that traceability is maintained between all CIs and that all work
requests are associated with one or more CI modifications.
SCM audits are the "watchdogs" that ensure that the integrity of the project's scope is
preserved.
Status Reporting
Configuration status reporting (also called status accounting) records and reports what has
happened to each configuration item: which changes were made, when, by whom, and what else is
affected, so that everyone involved knows the current state of the system.
One common approach to software quality is the use of fixed quality models, such as ISO/IEC
25010:2011. This standard describes a hierarchy of eight product quality characteristics, each
composed of sub-characteristics:
1. Functional suitability
2. Reliability
3. Operability
4. Performance efficiency
5. Security
6. Compatibility
7. Maintainability
8. Transferability
Additionally, the ISO/IEC 25010:2011 standard defines a quality-in-use model composed of five characteristics:
1. Effectiveness
2. Efficiency
3. Satisfaction
4. Safety
5. Usability
A fixed software quality model is often helpful for gaining an overall understanding of
software quality. In practice, the relative importance of particular software characteristics typically
depends on the software domain, product type, and intended usage. Thus, software characteristics
should be defined for, and used to guide the development of, each product.
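As a rough, hypothetical Python sketch of tailoring a fixed model to a particular product, the snippet below weights the ISO/IEC 25010 characteristics differently per domain and combines assessed scores into a single figure; the weights, scores, and function name are illustrative assumptions, not part of the standard.

# Hypothetical sketch: weighting the ISO/IEC 25010 characteristics per product domain.
def weighted_quality_score(scores, weights):
    """Combine per-characteristic scores (0.0 to 1.0) using domain-specific weights."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight

# Example: a safety-critical product might weight reliability and security heavily.
weights = {"functional suitability": 3, "reliability": 5, "operability": 1,
           "performance efficiency": 3, "security": 4, "compatibility": 1,
           "maintainability": 2, "transferability": 1}
scores = {c: 0.8 for c in weights}   # placeholder assessment results
print(round(weighted_quality_score(scores, weights), 2))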
Software Reliability
Software reliability is the probability of failure-free software operation for a specified period of
time in a specified environment. Software is also growing rapidly in size: for example, large
next-generation aircraft will have over 1 million source lines of software on-board; next-generation
air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will have
over 5 million source lines of software. While the complexity of software is inversely associated
with software reliability, it is directly related to other vital factors in software quality, especially
functionality, capability, etc.
External quality factors (visible to the users of the software):
Correctness
Usability
Efficiency
Reliability
Integrity
Adaptability
Accuracy
Robustness
Internal quality factors (visible to the developers):
Maintainability
Flexibility
Portability
Re-usability
Readability
Testability
Understandability
Software with a high internal quality is easy to change, easy to add new features, and easy
to test.
Software with a low internal quality is hard to understand, difficult to change, and
troublesome to extend.
Measures like McCabe’s Cyclomatic Complexity, Cohesion, Coupling and Function Points
can all be used to understand internal quality.
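For example, McCabe's cyclomatic complexity can be computed from a control-flow graph as V(G) = E − N + 2P. The small Python sketch below illustrates this; the graph is a made-up example for a function containing one if/else and one loop.

# Illustrative sketch: cyclomatic complexity V(G) = E - N + 2P,
# where E = edges, N = nodes, P = connected components of the control-flow graph.
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    return len(edges) - num_nodes + 2 * num_components

# Hypothetical control-flow graph (nodes 1..7) with one if/else and one loop.
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (6, 2), (6, 7)]
print(cyclomatic_complexity(edges, num_nodes=7))   # -> 3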
External software quality is a measure of how the system as a whole meets the requirements
of stakeholders.
Benefits
The business case for unit/integration/system tests is much clearer
The motivation for a particular type of test is easier to understand
Understanding the motivation for a particular test avoids mixing test abstraction levels
What is interesting about this definition is that different testing approaches yield
quality feedback of different types. End-to-end system tests provide the most
feedback on the external quality of the system, while unit tests provide the most
feedback on internal quality.
It also underlines the importance of multiple testing approaches. If you want to make a system that
both meets the stakeholder requirements and is easy to understand and change (who doesn’t?) then
it makes sense to develop with both unit and system level tests.
In today’s software marketplace, the principal focus is on cost, schedule, and function; quality is
lost in the noise. This is unfortunate since poor quality performance is the root cause of most
software cost and schedule problems. The first step is adopting and demanding that vendors follow
these six principles of software quality:
Principle 1:
If a customer does not demand a quality product, he or she will probably not get one.
Principle 2:
To consistently produce quality products, the developers must manage the quality of their work.
Principle 3:
To manage product quality, the developers must measure quality.
Principle 4:
The quality of a product is determined by the quality of the process used to develop it.
Principle 5:
Since a test removes only a fraction of a product’s defects, to get a quality product out of test you
must put a quality product into test.
Principle 6:
Quality products are only produced by motivated professionals who take pride in their work.
Principle No. 1
If the customer does not demand a quality product, he or she will probably not get one.
If you want quality products, you must demand them. But how do you do that? That is the
subject of this article. I first define quality, then I discuss quality management, and then
third, I cover quality measurement. Next, I describe the methods for verifying the quality
of software products before you get them, and finally, I give some pointers for those
acquisition managers who would like to consider using these methods.
Defining Quality
Product developers typically define a quality product as one that satisfies the customer.
However, this definition is not of much help to you, the customer. What you need is a
definition of quality to guide your acquisition process. To get this, you must define what
quality means to you and how you would recognize a quality product if you got one.
In the broadest sense, a quality product is one that is delivered on time, costs what it was
committed to cost, and flawlessly performs all of its intended functions. While the first two
of these criteria are relatively easy to determine, the third is not. These first two criteria are
part of the normal procurement process and typically receive the bulk of the customer’s
and supplier’s attention during a procurement cycle, but the third is generally the source of
most acquisition problems. This is because poor product quality is often the reason for a
software-intensive system’s cost and schedule problems.
Think of it this way
If quality did not matter, you would have to accept whatever quality the supplier provided,
and the cost and schedule would be largely determined by the supplier. In simplistic terms,
the supplier’s strategy would be to supply whatever quality level he felt would get the
product accepted and paid for. In fact, even if you had contracted for a specific quality
level, as long as you could not verify that quality level prior to delivery and acceptance
testing, the supplier’s optimum strategy would be to deliver whatever quality level it could
get away with as long as it was paid.
Since, at least for software, most quality problems do not show up until well after the end
of the normal acquisition cycle, you would be no better off than before. I do not mean to
imply that this is how most suppliers behave, but merely that this would be their most
economically attractive short-term strategy. In the long term, quality work has always
proven to be most economically attractive.
Addressing the Quality Problem
In principle, there are only two ways to address the software quality problem. First, use a
supplier that has a sufficiently good record of delivering quality products so you will be
comfortable that the products he provides will be of high quality. Then, just leave the
supplier alone to do the development work. The second choice would be to closely monitor
the development process the supplier uses to be assured that the product being produced
will be of the desired quality.
While the first approach would be ideal, it is not useful when the supplier has historically
had quality problems or where his current performance causes concern. In these cases, you
are left with the second choice: to monitor the development work. To do this, you must
consider the second principle of quality management.
Principle No. 2
To produce quality products consistently, developers must manage the quality of their work.
Managing Product Quality
While you may want a quality product, if you have no way to determine the product’s
quality until after you get it, you will not be able to pressure the supplier to do quality work
until it is too late. The best time to influence the product’s quality is early in its development
cycle where you can determine the quality of the product before it is delivered and
influence the way the work is done. At least you can do this if your contract provides you
the needed leverage.
This, of course, means that you must anticipate the product’s quality before it is delivered,
and you must also know what to tell the supplier to do to assure that the delivered product
will actually be of high quality. Therefore, the first need is to predict the product’s quality
before it is built. This is essential, for if you only measure the product’s quality after it has
been built, it is too late to do anything but fix its defects. This results in a defective product
with patches for the known defects. Unless you have an extraordinarily effective test and
evaluation system, you will not then know about most of the product’s defects before you
accept the product and pay the supplier.
While you might still have warranties and other contract provisions to help you recover
damages, and you might still be able to require the supplier to fix the product’s defects,
these contractual provisions cannot protect you from getting a poor quality product.
Identifying Quality Work
To determine the likely quality of a product while it is being developed, we must consider
the third principle of quality work.
Principle No. 3
To manage product quality, the developers must measure quality.
To monitor product quality before delivery you must measure quality during development.
Further, you must require that the developers gather quality measurements and supply them
to you while they do the development work. What measures do you want, and how would
you use them? This article suggests a proven set of quality measures, but first, to define
these measures, we must consider what a quality product looks like.
While software experts debate this point, every other field of engineering agrees on one
basic characteristic of quality: A quality product contains few, if any, defects. In fact, it has
been shown that this definition is equally true for software. We also know that software
professionals who consistently produce defect-free or near defect-free products are proud
of their work and that they strive to remove all the product’s defects before they begin
testing. Low defect content is one of the principal criteria used for identifying the quality
of software products.
Defining Process Quality
To define the needed quality measures, we must consider the fourth quality principle.
Principle No. 4
The quality of a product is determined by the quality of the process used to develop it.
This implies that to manage product quality, we must manage the quality of the process
used to develop that product.
If a quality product has few if any defects, that means that a quality process must produce
products having few if any defects. What kind of process would consistently produce
products with few if any defects? Some argue that extensive testing is the only way to
produce quality software, and others believe that extensive reviews and inspections are the
answer.
No single defect-removal method can be relied upon to produce high-quality software
products. A high-quality process must use a broad spectrum of quality management
methods.
Examples are many kinds of testing, team inspections, personal design and code reviews,
design analysis, defect tracking and analysis, and defect prevention.
One indicator of the quality of a process is the completeness of the defect management
methods it employs. However, because the methods could be applied with varying
effectiveness, a simple listing of the methods is not sufficient.
So, given two processes that use similar defect-removal methods, how could you tell which
one would produce the highest quality products? To determine this, you must determine
how well these defect-removal methods were applied. That takes measurement and
analysis.
To derive the five profile terms, consider formula No. 3 for code reviews. In one hour of
coding, a typical software developer will inject 4.6 defects. Since this developer can find
and fix defects at the rate of 6.0 per hour, he or she needs to spend 4.6/6.0 = 0.7667 of an
hour, or about 46 minutes, reviewing the code produced in one hour. Since there is wide
variation in these injection and removal rates, and since the number 0.7667 is hard to
remember, the SEI uses 0.5 as the factor.
Based on experience to date, this has proven to be suitable. Since these parameter values
are sensitive to application type and operational criticality, we suggest that organizations
periodically analyze their own data and adjust these values accordingly.
The formula for the code review profile term compares the ratio of the actual time the
developer spent reviewing code with the actual time spent in coding. If that ratio equals or
is greater than 0.5, then the criteria are met.
The factor of 2 in the equation is used to double both sides of this equation so it compares
twice the ratio of review to coding time with 1.0. Also, to get a useful quality figure of
merit, we need a measure that varies between 0 and 1.0, where 0 is very poor and 1.0 is
good. Therefore, the equation’s value should equal 1.0 whenever 2 times the code review
time is equal to or greater than the coding time and be progressively less with lower
reviewing times.
This is the reason for the Minimum function in each equation, where Minimum (A:B) is
the minimum of A and B. A little calculation will show that this is precisely the way
equation No. 3 works. Equations No. 1 and No. 2 work in exactly the same way (except
design time should equal or exceed coding time in equation No. 1).
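Based on the prose above, formula No. 3 for the code review term (the numbered equations themselves are not reproduced in these notes) can plausibly be reconstructed as:

\[ \text{Code review term} = \min\!\left(1.0,\ \frac{2 \times \text{code review time}}{\text{coding time}}\right) \]

Equations No. 1 and No. 2 presumably follow the same minimum-function pattern, with equation No. 1 comparing design time directly against coding time.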
To produce equations No. 4 and No. 5, the SEI used data it gathered while training software
developers for TSP teams. It found that when more than about 10 defects/thousand lines of
code (KLOC) were found in compiling, programs typically had poor code quality in testing,
and when more than about five defects/KLOC were found in initial (or unit) testing,
program quality was often poor in integration and system testing.
Therefore, we seek an equation that will produce a value of 1.0 when fewer than 10
defects/KLOC are found in compiling, and we want this value to progressively decrease as
more defects are found. A little calculation will show that this is precisely what equation
No. 4 does. Equation No. 5 works the same way for the value of five defects/KLOC in unit
testing.
One of the great advantages of these five criteria is that they can be determined at the time
that process step is performed. Therefore, at the end of the design review for example, the
developer can tell if he or she has met the design-review quality criteria. Since these
measures can all be available before integration and system test entry, and since they can
be calculated for every component part of a large system, they provide the information
needed to correct quality problems well before product delivery.
The Process Quality Index
For large products, it is customary to combine the data for all components into a composite
system quality profile. Since the data for a few poor quality components could then be
masked by the data for a large number of high quality components, it is important to have
a way to identify any potentially defective system components. The process quality index
(PQI) was devised for this purpose. It is calculated by multiplying together the five
components of the quality profile to give a value between 0.0 and 1.0. Then the components
with PQI values below some threshold can be quickly identified and reviewed to see which
ones should be reinspected, reworked, or replaced.
Experience to date shows that, with PQI values above about 0.4, components typically have
no defects found after development. Since the quality problems for large systems are
normally caused by a relatively small number of defective components, the PQI measure
permits acquisition groups to rapidly pinpoint the likely troublesome components and to
require they be repaired or replaced prior to delivery. Once organizations have sufficient
data, they should reexamine these criteria values and make appropriate adjustments.
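To make the profile and PQI calculation concrete, here is a minimal hypothetical Python sketch. The exact SEI equations are not reproduced in these notes, so the five terms below are reconstructed from the prose (the minimum function, the factor of 2 on review times, and the 10 and 5 defects/KLOC thresholds) and should be read as illustrative assumptions rather than the official formulas.

# Hypothetical reconstruction of the five quality-profile terms and the PQI.
def profile_terms(design_time, coding_time, design_review_time, code_review_time,
                  compile_defects_per_kloc, unit_test_defects_per_kloc):
    return {
        "design/code time":   min(1.0, design_time / coding_time),
        "design review time": min(1.0, 2 * design_review_time / design_time),
        "code review time":   min(1.0, 2 * code_review_time / coding_time),
        "compile defects":    min(1.0, 20.0 / (10.0 + compile_defects_per_kloc)),
        "unit test defects":  min(1.0, 10.0 / (5.0 + unit_test_defects_per_kloc)),
    }

def pqi(terms):
    """Process Quality Index: product of the five profile terms (0.0 to 1.0)."""
    result = 1.0
    for value in terms.values():
        result *= value
    return result

terms = profile_terms(design_time=10, coding_time=8, design_review_time=5,
                      code_review_time=2, compile_defects_per_kloc=15,
                      unit_test_defects_per_kloc=6)
value = pqi(terms)   # below the 0.4 guideline for these example numbers
print(round(value, 2), "component looks sound" if value > 0.4 else "component needs review")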
To compute these quality measures, the developers must collect three kinds of data:
1. The time spent in each phase of the development process. These times must be measured in
minutes.
2. The number of defects found in each defect-removal phase of the process, including reviews,
inspections, compiling, and testing.
3. The sizes of the products produced by each phase, typically in pages, database elements, or lines
of code.
Software quality models are a standardized way of measuring a software product. With the
growth of the software industry, new applications are planned and developed every day. This
gives rise to the need to ensure that the product built meets at least the expected
standards.
McCall
McCall software quality model was introduced in 1977.
This model incorporates many attributes, termed software quality factors, which
influence the software.
The model distinguishes between two levels of quality attributes :
1. Quality Factors: The higher level quality attributes which can be assessed directly
are called quality factors. These attributes are external attributes. The attributes in this
level are given more importance by the users and managers.
2. Quality Criteria: The lower or second level quality attributes which can be assessed
either subjectively or objectively are called Quality Criteria. These attributes are
internal attributes. Each quality factor has many second level of quality attributes or
quality criteria.
1. Product Operation :
It includes five software quality factors, which are related to the requirements that
directly affect the operation of the software, such as operational performance,
convenience, ease of usage, and correctness. These factors help in providing a better
user experience.
Correctness –
The extent to which a software meets its requirements specification.
Efficiency –
The amount of hardware resources and code the software needs to
perform a function.
Integrity –
The extent to which the software can prevent an unauthorized person from
accessing the data or the software.
Reliability –
The extent to which a software performs its intended functions without
failure.
Usability –
The extent of effort required to learn, operate and understand the functions
of the software.
2. Product Revision :
It includes three software quality factors, which are required for testing and
maintenance of the software. They provide ease of maintenance, flexibility and testing
effort to support the software to be functional according to the needs and requirements
of the user in the future.
Maintainability –
The effort required to detect and correct an error during maintenance
phase.
Flexibility –
The effort needed to improve an operational software program.
Testability –
The effort required to verify a software to ensure that it meets the
specified requirements.
3. Product Transition :
It includes three software quality factors, that allows the software to adapt to the change
of environments in the new platform or technology from the previous.
Portability –
The effort required to transfer a program from one platform to another.
Re-usability –
The extent to which the program’s code can be reused in other
applications.
Interoperability –
The effort required to integrate two systems with one another.
Boehm
In 1978, B.W. Boehm introduced his software quality model.
The model represents a hierarchical quality model similar to McCall Quality Model to
define software quality using a predefined set of attributes and metrics, each of which
contributes to overall quality of software.
The difference between Boehm’s and McCall’s model is that McCall’s model primarily
focuses on precise measurement of high-level characteristics, whereas Boehm’s quality
model is based on a wider range of characteristics.
For example, Boehm's model includes characteristics of hardware performance that are missing in
McCall's model. Boehm's model has three levels of quality attributes, divided based on their
characteristics. These levels are primary uses (high level characteristics), intermediate constructs
(mid-level characteristics) and primitive constructs (primitive characteristics). The highest level
of Boehm’s model has following three primary uses stated as below –
1. As is utility –
Extent to which, we can use software as-is.
2. Maintainability –
Effort required to detect and fix an error during maintenance.
3. Portability –
Effort required to change software to fit in a new environment.
The next level of Boehm’s hierarchical model consists of seven quality factors associated with
three primary uses, stated as below –
1. Portability –
Effort required to change software to fit in a new environment.
2. Reliability –
Extent to which software performs according to requirements.
3. Efficiency –
Amount of hardware resources and code required to execute a function.
4. Usability (Human Engineering) –
Extent of effort required to learn, operate and understand functions of the software.
5. Testability –
Effort required to verify that software performs its intended functions.
6. Understandability –
Effort required for a user to recognize logical concept and its applicability.
7. Modifiability –
Effort required to modify a software during maintenance phase.
Boehm further classified characteristics into primitive constructs as follows: device
independence, accuracy, completeness, consistency, device efficiency, accessibility,
communicativeness, self-descriptiveness, legibility, structuredness, conciseness, and
augmentability. For example, testability is broken down into accessibility,
communicativeness, structuredness, and self-descriptiveness.
Advantages :
It focuses and tries to satisfy the needs of the user.
It focuses on software maintenance cost effectiveness.
Disadvantages :
It doesn't suggest how to measure the quality characteristics.
It is difficult to evaluate the quality of software using the top-down approach.
So, we can say that Boehm's model is an improved version of McCall's model and is
used extensively, but because of its top-down approach to assessing software quality, Boehm's
model cannot always be employed.
FURPS / FURPS+
FURPS is an acronym representing a model for classifying software quality attributes
(functional and non-functional requirements): Functionality, Usability, Reliability, Performance,
and Supportability. These attributes can also be grouped into two broad categories:
1. Execution qualities, such as safety, security and usability, which are observable
during operation (at run time).
2. Evolution qualities, such as testability, maintainability, extensibility and
scalability, which are embodied in the static structure of the system.
Dromey's Quality Model (1995):
Dromey has built a quality evaluation framework that analyzes the quality of software components
through the measurement of tangible quality properties. Each artifact produced in the software life-
cycle can be associated with a quality evaluation model. Dromey gives the following examples of
what he means by software components for each of the different models:
• Variables, functions, statements, etc. can be considered components of the Implementation
model;
• A requirement can be considered a component of the requirements model;
• A module can be considered a component of the design model;
According to Dromey, all these components possess intrinsic properties that can be classified into
four categories:
• Correctness : Evaluates if some basic principles are violated.
• Internal : Measure how well a component has been deployed according to its intended use.
• Contextual : Deals with the external influences by and on the use of a component.
• Descriptive : Measure the descriptiveness of a component.
Dromey proposes a product based quality model that recognizes that quality evaluation
differs for each product and that a more dynamic idea for modeling the process is needed
to be wide enough to apply for different systems.
Dromey is focusing on the relationship between the quality attributes and the sub-attributes,
as well as attempting to connect software product properties with software quality
attributes.
This quality model, presented by R. Geoff Dromey, is the most recent of these models and is also
similar to McCall's and Boehm's models. It consists of three principal elements:
1) Product properties that influence quality.
2) High level quality attributes.
3) Means of linking the product properties with the quality attributes.
Dromey's quality model is further structured around a five-step process:
1. Choose a set of high-level quality attributes for the evaluation.
2. List the components or modules of the system.
3. Identify the quality-carrying properties of the components.
4. Determine how each property affects the quality attributes.
5. Evaluate the model and identify its weaknesses.
The quality characteristics and sub-characteristics typically associated with the model are:
Functionality: suitability, accuracy, interoperability, security, compliance
Reliability: maturity, fault-tolerance, recoverability, compliance
Usability: understandability, learnability, operability, attractiveness, compliance
Re-usability: understandability for reuse, learnability for reuse, operability for reuse (programmability), attractiveness for reuse, compliance for reuse
Efficiency: time behavior, resource utilization, compliance
Maintainability: analyzability, changeability, stability, testability, compliance
Portability: adaptability, installability, co-existence, replaceability, compliance
(Each sub-characteristic has its own set of highly related and related metrics.)
McCall's Quality Model (1977)
Also called the General Electric model. This model was mainly developed for the US military to
bridge the gap between users and developers. It has three major representations for defining
and identifying the quality of a software product, namely:
Product Revision : This identifies quality factors that influence the ability to change the
software product.
(1) Maintainability : Effort required to locate and fix a fault in the program within its
operating environment.
(2) Flexibility : The ease of making changes required as dictated by business by changes
in the operating environment.
(3) Testability : The ease of testing program to ensure that it is error-free and meets its
specification, i.e, validating the software requirements.
Product Transition : This identifies quality factors that influence the ability to adapt the
software to new environments.
(1) Portability : The effort required to transfer a program from one environment to
another.
(2) Re-usability : The ease of reusing software in a different context.
(3) Interoperability: The effort required to couple the system to another system.
Product Operations : This identifies quality factors that influence the extent to which the
software fulfills its specification.
(1) Correctness : The extent to which a functionality matches its specification.
(2) Reliability : The system's ability not to fail; alternatively, the extent or frequency with which the system fails.
(3) Efficiency : Further categorized into execution efficiency and storage efficiency and
generally means the usage of system resources, example: processor time, memory.
(4) Integrity : The protection of program from unauthorized access.
(5) Usability : The ease of using software.
(Figure: McCall's Quality Model, showing the hierarchy of its 11 quality attributes.)
Boehm’s Quality model (1978):
Boehm’s model is similar to the McCall Quality Model in that it also presents a hierarchical
quality model structured around high-level characteristics, intermediate level
characteristics, primitive characteristics – each of which contributes to the overall quality
level.
The high-level characteristics represent basic high-level requirements of actual use to
which evaluation of software quality could be put – the general utility of software. The
high-level characteristics address three main questions that a buyer of software has:
• As-is utility : How well (easily, reliably, efficiently) can I use it as-is?
• Maintainability: How easy is it to understand, modify and retest?
• Portability : Can I still use it if I change my environment?
The intermediate level characteristic represents Boehm’s 7 quality factors that together
represent the qualities expected from a software system:
• Portability (General utility characteristics): Code possesses the characteristic portability
to the extent that it can be operated easily and well on computer configurations other than
its current one.
• Reliability (As-is utility characteristics): Code possesses the characteristic reliability to
the extent that it can be expected to perform its intended functions satisfactorily.
• Efficiency (As-is utility characteristics): Code possesses the characteristic efficiency to
the extent that it fulfills its purpose without waste of resources.
• Usability (As-is utility characteristics, Human Engineering): Code possesses the
characteristic usability to the extent that it is reliable, efficient and human-engineered.
• Testability (Maintainability characteristics): Code possesses the characteristic testability
to the extent that it facilitates the establishment of verification criteria and supports
evaluation of its performance.
• Understandability (Maintainability characteristics): Code possesses the characteristic
understandability to the extent that its purpose is clear to the inspector.
• Flexibility (Maintainability characteristics, Modifiability): Code possesses the
characteristic modifiability to the extent that it facilitates the incorporation of changes,
once the nature of the desired change has been determined.
The lowest level structure of the characteristics hierarchy in Boehm’s model is the
primitive characteristics metrics hierarchy. The primitive characteristics provide the
foundation for defining quality metrics, which was one of the goals when Boehm
constructed his quality model. Consequently, the model presents one or more metrics
supposedly measuring a given primitive characteristic.
Though Boehm’s and McCall’s models might appear very similar, the difference is that
McCall’s model primarily focuses on the precise measurement of the high-level
characteristics “As-is utility”, whereas Boehm’s quality model is based on a wider range
of characteristics with an extended and detailed focus on primarily maintainability.
ISO 9000 Certification
1. ISO 9001: This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
2. ISO 9002: This standard applies to those organizations which do not design products but
are only involved in the production. Examples of these category industries contain steel
and car manufacturing industries that buy the product and plants designs from external
sources and are engaged in only manufacturing those products. Therefore, ISO 9002 does
not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation
and testing of the products. For example, Gas companies.
An organization that decides to obtain ISO 9000 certification applies to a registrar's office for
registration. The process consists of the following stages:
1. Application: Once an organization decided to go for ISO certification, it applies to the
registrar for registration.
2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the
organization.
3. Document review and Adequacy of Audit: During this stage, the registrar reviews the
documents submitted by the organization and suggests improvements.
4. Compliance Audit: During this stage, the registrar checks whether the organization has
complied with the suggestions made by it during the review.
5. Registration: The Registrar awards the ISO certification after the successful completion
of all the phases.
6. Continued Inspection: The registrar continues to monitor the organization from time to time.
A software reliability model indicates the form of a random process that defines the
behavior of software failures to time.
Software reliability models have appeared as people try to understand the features of how
and why software fails, and attempt to quantify software reliability.
Over 200 models have been established since the early 1970s, but how to quantify software
reliability remains mostly unsolved.
There is no individual model that can be used in all situations. No model is complete or
even representative.
Most software reliability models contain the following parts:
o Assumptions
o Factors
o A mathematical function that relates reliability to these factors; the function is generally a
higher-order exponential or logarithmic function.
Differentiate between software reliability prediction models and software reliability estimation
models:
Data Reference: Prediction models use historical information; estimation models use data from the
current software development effort.
When used in the development cycle: Prediction models are usually made before the development or
test phases and can be used as early as the concept phase; estimation models are usually made later
in the life cycle (after some data have been collected) and are not typically used in the concept or
development phases.
Time Frame: Prediction models predict reliability at some future time; estimation models estimate
reliability at either the present time or some future time.
Reliability Models
A reliability growth model is a numerical model of software reliability, which predicts how
software reliability should improve over time as errors are discovered and repaired.
These models help the manager in deciding how much effort should be devoted to testing.
The objective of the project manager is to test and debug the system until the required level
of reliability is reached.
The Jelinski-Moranda (JM) model, which is also a Markov process model, has strongly
affected many later models which are in fact modifications of this simple model.
Characteristics of JM Model
The failure rate at the ith failure interval is λ(t_i) = ϕ[N − (i − 1)], where
N = the number of faults initially present in the software, and
ϕ = a constant of proportionality indicating the failure rate contributed by each fault.
The mean value function and the failure intensity function for this model, which belongs to the
binomial type, can be obtained by multiplying the inherent number of faults by the cumulative
failure distribution function and the probability density function (pdf), respectively:
μ(t_i) = N(1 − e^(−ϕ t_i)) ...(equation 2)
and
λ(t_i) = N ϕ e^(−ϕ t_i) ...(equation 3)
Those characteristics, plus four other characteristics of the J-M model, are summarized below.
Assumptions
1. The number of initial software errors is unknown but fixed and constant.
2. Each error in the software is independent and equally likely to cause a failure during a test.
3. The time intervals between occurrences of failure are independent, exponentially distributed
random variables.
4. The software failure rate remains constant over the intervals between fault occurrences.
5. The failure rate is proportional to the number of faults that remain in the software.
6. A detected error is removed immediately, and no new errors are introduced during the
removal of the detected defect.
7. Whenever a failure occurs, the corresponding fault is removed with certainty.
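As a small numerical illustration of the J-M quantities above, here is a hypothetical Python sketch; the parameter values are made up for demonstration and are not calibrated estimates.

import math

# Illustrative sketch of the Jelinski-Moranda quantities described above.
def jm_failure_rate(N, phi, i):
    """Failure rate during the i-th failure interval: phi * (N - (i - 1))."""
    return phi * (N - (i - 1))

def jm_mean_failures(N, phi, t):
    """Expected cumulative failures by time t: mu(t) = N * (1 - exp(-phi * t))."""
    return N * (1 - math.exp(-phi * t))

N, phi = 100, 0.02   # assumed initial fault count and per-fault failure rate
print(jm_failure_rate(N, phi, i=1))              # 2.0 failures per unit time at the start
print(round(jm_mean_failures(N, phi, t=50), 1))  # about 63.2 expected failures by t = 50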
Variations in JM Model
The JM model was the first prominent software reliability model. Several researchers have
modified this model using different parameters such as failure rate, perfect
debugging, imperfect debugging, number of failures, etc. We will now discuss different
existing variations of this model.
1. Lipow Modified Version of Jelinski-Moranda Geometric Model
It allows multiple bug removals in a time interval. The program failure rate becomes
λ(t_i) = D K^(n_(i-1))
where n_(i-1) is the cumulative number of errors found up to the (i−1)st time interval.
Sukert modifies the S-W model to allow more than one failure at each time interval. The program
failure rate becomes
λ(t_i) = ϕ[N − n_(i-1)] t_i
where n_(i-1) is the cumulative number of failures at the (i−1)th failure interval.
The original Schick-Wolverton (S-W) model assumes a failure rate of
λ(t_i) = ϕ[N − (i − 1)] t_i
where ϕ is a proportional constant, N is the initial number of bugs in the program, and t_i is the test
time since the (i−1)st failure.
Goel and Okumoto expand the J-M model by assuming that an error is removed with probability
p whenever a failure appears. The program failure rate at the ith failure interval is
λ(t_i) = ϕ[N − p(i − 1)]
R(t_i) = e^(−ϕ[N − p(i − 1)] t_i)
This model considers that the program failure rate function is initially a constant D and decreases
geometrically at each failure time. The program failure rate and the reliability function of the time
between failures at the ith failure interval are
λ(t_i) = D K^(i−1)
R(t_i) = e^(−D K^(i−1) t_i)
This model considers that the times between failures are independent exponential random variables
with a parameter λ_i, i = 1, 2, ..., n, which itself has a prior gamma distribution with parameters Ψ(i)
and α, reflecting programmer quality and task difficulty.
Where B represents the fault reduction factor
This model considers that the failure intensity is a function of the number of failures removed.
The primary feature of this new model is that the variable (growing) size of a developing
program is accommodated so that the quality of a program can be predicted by analyzing
a basic segment.
Assumptions
This model has the following assumptions along with the JM model assumptions:
1. Any tested initial portion of the program describes the entire program for the number and
nature of its incipient errors.
2. The detectability of a mistake is unaffected by the "dilution" incurred when the initially
tested portion is augmented by new code.
3. The number of lines of code which exists at any time is known.
4. The growth function and the bug detection process are independent.
This model shows how several models used to define the reliability of computer software can be
comprehensively viewed by adopting a Bayesian point of view.
This model provides a different motivation for a commonly used model using notions from shock
models.
Jewell extended a result by Langberg and Singpurwalla (1985) and made an expansion of
the Jelinski-Moranda model.
Assumptions
1. The testing protocol is authorized to run for a fixed length of time-possibly, but not
certainly, coinciding with a failure epoch.
2. The distribution of the unknown number of faults is generalized from the one-parameter
Poisson distribution by considering that the parameter is itself a random quantity with a
Beta prior distribution.
3. Although the estimation of the posterior distributions of the parameters leads to complex
expressions, we show that the calculation of the predictive distribution for undetected bugs
is straightforward.
4. Although it is now recognized that the MLEs for reliability growth can be volatile, we show
that, if a point estimator is needed, the predictive model is easily calculated without
obtaining the full distribution first.
This model replaces the JM Model assumption, each error has the same contribution to the
unreliability of software, with the new assumption that different types of errors may have different
effects on the failure rate of the software.
Failure Rate:
Where
Q = initial number of failure quantum units inherent in a software
Ψ = the failure rate corresponding to a single failure quantum unit
w_j = the number of failure-quantum units of the jth fault, i.e., the size of the jth failure quantum
In this model, the software fault detection process is described by a Markovian birth process with
absorption. The model amends the optimal software release policies by taking account of wasted
software testing time.
A new unknown parameter θ is included in the estimation of the JM model parameters such that θ ∈ [θ_L,
θ_U]. The confidence level is the probability value (1 − α) associated with a confidence interval. In general,
if the confidence interval for a software reliability index θ is obtained, we can estimate the
mathematical characteristics of the virtual cloud C(Ex, En, He), which can be converted to a
qualitative evaluation of the system by an X-condition cloud generator.
The modified JM model extends the J-M model by relaxing the assumption of a perfect
debugging process; two types of incomplete fault removal are considered:
1. The fault is not deleted successfully while no new faults are introduced
2. The fault is not deleted successfully while new faults are created due to incorrect diagnoses.
Assumptions
The assumptions made in the Modified J-M model contain the following:
o The number of initial software errors is unknown but fixed and constant.
o Each error in the software is independent and equally feasible to cause a failure during a
test.
o Time intervals between occurrences of failure are independent, exponentially distributed
random variables.
o The software failure rate remains fixed over the intervals between fault occurrences.
o The failure rate is proportional to the number of errors that remain in the software.
o Whenever a failure occurs, the detected fault is removed with probability p, the detected
fault is not entirely removed with probability q, and a new fault is generated with
probability r. So it is evident that p + q + r = 1 and q ≥ r.
List of various characteristics underlying the Modified JM Model with imperfect Debugging
Phenomenon
Reliability function at the ith failure interval: R(t_i) = 1 − F_i(t_i) = exp(−ϕ[N − (i − 1)(p − r)] t_i)
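As a quick numerical illustration of the reliability function above, here is a hypothetical Python snippet; the parameter values (N, ϕ, p, r, t) are invented for demonstration only.

import math

# Illustrative sketch of the Modified J-M reliability function described above.
def modified_jm_reliability(N, phi, p, r, i, t):
    """R(t_i) = exp(-phi * [N - (i - 1) * (p - r)] * t_i)."""
    return math.exp(-phi * (N - (i - 1) * (p - r)) * t)

# With p = 1 and r = 0 (perfect debugging) this reduces to the original J-M reliability.
print(round(modified_jm_reliability(N=100, phi=0.02, p=0.9, r=0.05, i=10, t=0.5), 3))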
Technical writers who write all the documentation such as user guides
Release specialists who are responsible for building the whole product and software
versioning
User experience designers, who are creating the design architecture based on business
requirements, user research and expertise in usability
Graphic designers who are normally responsible for the design of the graphical user
interface.
Maintenance engineers who provide the second, third, or further lines of support
Consultants are responsible for making the solution operational, especially if some
specialist knowledge is necessary.
Examples of this include: building multidimensional cubes in business intelligence
software, integrating with existing solutions, and implementing business scenarios
in Business Process Management software.
Structure
The manager of a software company is usually called the Head Of Development
(HOD), and reports to the stakeholders.
He or she leads the sub-teams directly or via the managers/leaders depending on the size
of the organization.
Usually teams of up to 10 people are the most effective.
In bigger organizations, there are in general two models of the hierarchy:
Matrix structure
In this model there are dedicated managers/leaders for each main specialization, "renting"
their people for particular projects led by product/project managers, who formally or
informally buy the people and pay for their time.
This leads to each individual employee having two bosses: the product/project manager and
the specialized "resource" manager.
On one hand this optimizes the usage of human resources; on the other hand it may give
rise to conflicts about which manager has priority in the structure.
There are also a number of variants of these structures, and a number
of organizations have this structure spread and split within various departments and units.
1. Data storage and management
For any business, data storage and management are of utmost significance.
2. Expert IT professionals
These professionals are the mavens who impart IT training to your staff without incurring
any additional expenses. As a result, your IT training is financed easily.
Even software in its best form can give you annoying technical glitches.
Effective IT support matches you with excellent solutions for solving your niggling
issues quickly, which allows you to become more effective in your job.
In addition, it saves valuable hours from your important day that you would otherwise
spend in fixing numerous issues.
Every business, whether small or big, needs an effective and reliable IT department.
An intact IT support enables an organization or business to stay competitive and curb any
potential IT costs.
In addition, businesses attain higher flexibility by means of IT support, which allows them
to make higher profits.
However, there are numerous reasons that make managed IT services crucial.
IT departments ensure the security of your computer systems against different sorts of
viruses and other threats.
This saves you time, money, and other resources.
It is indeed important to monitor the performance and status of your business at each and
every stage.
Especially, businesses serving online customers require monitoring at all stages in order to
ensure efficiency.
For example, for a person running a shopping cart software on his/ her website, it is
imperative to have proper control.
Consider a scenario, in which a business’ network goes down for a few hours, which may
lead to huge financial losses due to reduced sales.
By means of IT support, such risky situations can be avoided easily. You can recover your
site within a few minutes if you avail of IT services.
6. Security of information
Businesses possess sensitive and crucial information, such as salary, financial, and HR
details.
By means of IT support, confidential information is kept safe from hacking and other
malicious attempts.
An IT department is responsible for getting these elements properly monitored and policed.
Furthermore, an IT department ensures that data-leakage prevention is prioritized and that staff
members don't disclose the company's sensitive data to the outside world.
9 Benefits of Outsourcing IT Support services for Business:
Businesses carry important data such as employees’ salary, income, and HR details. For this
reason, data storage and management are very crucial for any kind of business and it is also a great
example of why IT support is important. The inclusion of competent IT services in data
management enforces deeper assessment of business needs and careful scrutiny of the company’s
data landscape.
An efficient back-up system for all important files and software helps boost a business’ security
against data breach attempts. Hiring a team of highly skilled and knowledgeable IT personnel to
manage and secure a company’s valuable data goes hand-in-hand with the creation of an effective
data management strategy.
When this happens, confidential records are effectively kept safe from hacking and any other
attempt to leak valuable company and employee information.
There are also digital marketing tools such as Microsoft CRM Dynamics and Google
Analytics that enable companies to track progress and development. On a larger scale, IT software
enhances existing strategies by presenting more precise and advanced alternatives to how core
objectives can be achieved.
Executing advanced and precise solutions to complex problems involving the internal systems that
keep a business running is another concrete example of importance of IT support.
IT services and systems provide businesses the tools needed to obtain improved hardware such as
high memory storage, faster processors, and high-quality displays. Combined with smarter
applications like mind-mapping software, collaborative systems, and automated processes that
make work more streamlined and organized, these tools help industries research and collate data
easily, analyze information, and plan for scalability. The result is the generation of more viable
solutions to complex business dilemmas.
To give you a better idea as to why is technical support important for maintaining a strong
defensive wall against destructive computer viruses, several companies in the past have fallen prey
to viruses and malware and ransomware attacks. These companies include Dropbox, Pitney
Bowes, Capital One, and Asco. Their business websites, along with the security of their end-users
were significantly compromised by the unexpected security breach.
When you commit time and resources in enhancing your IT systems and empowering your tech
support team, it saves time and money while assuring you of long-term protection.
5. Comprehensive Monitoring
It is important to monitor the performance and progress of a business’ internal operations and
customer reach efforts at every stage. Among the best ways that IT can help execute a more refined
supervision of a business’ core operations include improving quality control, facilities planning
and logistics for companies with manufacturing sites, and internal auditing.
Comprehensive monitoring through the aid of a competent IT system is also a must for companies
offering online services to customers. This is to prevent their services as well as the security of
their customers from being jeopardized.
A great example is the creation of a portal that only in-house employees can access. The portal
contains information about their employment status. This information may range from their job
description and employment contract, to their contact information and the periodic progress of
their individual performances. Moreover, a human resource information system helps distinguish
job openings that are still open from those that have already been filled.
Likewise, there are algorithms designed to continuously measure online business transactions and
customer purchasing behavior on a daily basis. When planning and deciding on new strategies to
meet a business' goals, marketing-mix subsystems are a business function of IT that provide
programs to assist the decision-making process on the following: introducing new products,
allocating prices, promoting products and services, and distributing and keeping track of sales.
8. Improved Customer Support
Through IT support services, customers can be assisted through multiple communication channels,
which gives end-users more choices for how they can reach a business. Whether it's through
telephone, email, social media messaging, live chat or even SMS, these channels make customers
reach your business conveniently. Hence, employing IT services to boost customer satisfaction is
a great way for businesses to understand customer behavior.
Applying technology in customer support systems can also be in the form of using the benefits of
outsourcing IT support. Startups have a limited workforce, and as their services and audience reach
continue to expand, it becomes a challenge to keep up with the increasing volume of queries and
customer concerns. But with a reliable IT system, hiring remote staff to supplement the business'
existing team of support representatives is possible.
IT support services are essential for any kind of business, whether it is a startup or an established
company. It is crucial to not only maintain systems but to also excel through consistent upgrades
that can guarantee the optimum level of operations for your business.
Last but not least among the examples that describe why technical support is important is the
influence that it has over improving branding strategies. When branding is paired with information
services and systems, it is not limited to enhancing existing marketing strategies alone or helping
form a new advertising approach. Branding can be further augmented by IT through maximizing
the originality of a business’ lineup of products and services.
Developing apps and systems to drive higher customer engagement or boost satisfaction rates, as
well as to gain an edge over competitors, effectively enhance a business’ marketability, purpose
and overall impact.
Providing an app or software to customers that make services more accessible and convenient
consequently drives higher authority on what businesses can offer.
What are some key pointers when executing IT systems for business purposes?
Introducing IT into a business' internal and external operations is an undeniably major change.
It entails necessary cost adjustments and workforce preparation; otherwise, the entire company
will fail to adjust to the demands of the technology that it plans to employ.
Employees must also be properly oriented and given sufficient training in order to become highly
familiar with the software or system. Allocate a budget that will cover the equipment,
installation, and additional manpower required, to avoid any delays in updating the business'
system and workflow.
Customer problems are known to be at the base of all product development. Yet if we speak
to developers or product managers, they are often not very clear about the requirements.
Here, the problem lies in the terminology.
There are two concepts buried in the one word "requirements": customer needs and product
requirements. To clarify this, it is important to understand a high-level concept:
separating problem space from solution space.
PROBLEM SPACE
A market is a set of related customer needs, which rests squarely in problem space; you could
say that "problems" define a market, not "solutions". A market is not tied to any specific
solutions that meet market needs. It is a broader space.
There is no product or design that exists in problem space. Instead, problem space is
where all the customer needs that you’d like your product to deliver live. You shouldn’t
interpret the word “needs” too narrowly: Whether it’s a customer pain point, a desire, a job
to be done, or a user story, it lives in problem space.
SOLUTION SPACE
In solution space, any product or product design, such as mock-ups, wireframes, and prototypes,
depends on and is built upon the problem space, but it lives in solution space.
So we can say problem space is at the base of solution space. Solution space includes any
product or representation of a product that is used by or intended for use by a customer.
When you build a product, you have chosen a specific implementation. Whether you’ve
done so explicitly or not, you’ve determined how the product looks, what it does, and how
it works.
"What" the product needs to accomplish for customers is the problem space. The
"what" describes the benefits the product should give to the target customer.
Whereas "how" the product will accomplish it is the solution space. The "how" is the way
in which the product delivers the "what" to the target customer. The "how" is the design of
the product and the specific technology used to implement the product.
A failure to gain a clear understanding of the problem space before proceeding to the
solution space is prevalent in product teams that practice “inside-out” product development,
where “inside” refers to the company and “outside” refers to customers and the
market.
In such teams, the product idea is whatever the product team thinks would be good to build. They
do not test the ideas with customers to verify that they would solve actual customer needs.
The best way to mitigate the risk of an "inside-out" mindset is an "outside-in" mindset.
Product development then starts with talking to customers to understand their needs, as well
as what they like and do not like about existing solutions.
Outside-in product teams form a robust problem-space definition before starting product
design.
It is hard for customers to talk about the specific benefits they require and their importance.
Even if they do, it is going to be very vague.
It is therefore up to the product team to understand these requirements and define the
problem space.
The problem here is that if you mention a problem or need to a customer and ask for their
input, at best they may just talk about the existing solutions available.
The reality is that customers are much better at giving you feedback in the solution
space.
If you show them a new product or design, they can tell you what they like and don’t like.
They can compare it to other solutions and identify pros and cons.
Hence, having solution space discussions with customers is much more fruitful than
trying to explicitly discuss the problem space with them.
In this way you can form your hypotheses of problem space.
The feedback you gather in solution space actually helps you test and improve your problem
space hypotheses.
The best problem space learning often comes from feedback you receive from
customers on the solution space.
The product of the requirements stage of the software development process is the Software
Requirements Specification (SRS), also called a requirements document. This report lays a
foundation for software engineering activities and is constructed once the entire set of
requirements has been elicited and analyzed. The SRS is a formal report which acts as a
representation of the software and enables the customers to review whether it (the SRS) is
according to their requirements. It comprises the user requirements for a system as well as
detailed specifications of the system requirements.
The SRS is a specification for a specific software product, program, or set of applications that
perform particular functions in a specific environment. It serves several goals depending on who
is writing it. First, the SRS could be written by the client of a system. Second, the SRS could be
written by a developer of the system. The two cases create entirely different situations and
establish different purposes for the document altogether. In the first case, the SRS is used to
define the needs and expectations of the users. In the second case, the SRS is written for various
purposes and serves as a contract document between customer and developer.
1. Correctness: User review is used to ensure the accuracy of the requirements stated in the SRS.
The SRS is said to be correct if it covers all the needs that are truly expected from the system.
2. Completeness: The SRS is complete if, and only if, it includes the following elements:
(1). All essential requirements, whether relating to functionality, performance, design, constraints,
attributes, or external interfaces.
(2). Definition of their responses of the software to all realizable classes of input data in all
available categories of situations.
(3). Full labels and references to all figures, tables, and diagrams in the SRS and definitions of all
terms and units of measure.
3. Consistency: The SRS is consistent if, and only if, no subset of the individual requirements
described in it conflict. There are three types of possible conflict in the SRS:
(1). The specified characteristics of real-world objects may conflict. For example,
(a) The format of an output report may be described in one requirement as tabular but in another
as textual.
(b) One condition may state that all lights shall be green while another states that all lights shall
be blue.
(2). There may be a reasonable or temporal conflict between the two specified actions. For
example,
(a) One requirement may determine that the program will add two inputs, and another may
determine that the program will multiply them.
(b) One condition may state that "A" must always follow "B," while another requires that "A" and
"B" occur simultaneously.
(3). Two or more requirements may define the same real-world object but use different terms for
that object. For example, a program's request for user input may be called a "prompt" in one
requirement and a "cue" in another. The use of standard terminology and descriptions promotes
consistency.
4. Unambiguousness: The SRS is unambiguous when every stated requirement has only one
interpretation. This suggests that each element is uniquely interpreted. If a term is used with
multiple definitions, the requirements report should specify the intended meaning so that the SRS
is clear and simple to understand.
5. Ranking for importance and stability: The SRS is ranked for importance and stability if each
requirement in it has an identifier to indicate either the significance or stability of that particular
requirement.
Typically, all requirements are not equally important. Some prerequisites may be essential,
especially for life-critical applications, while others may be desirable. Each element should be
identified to make these differences clear and explicit. Another way to rank requirements is to
distinguish classes of items as essential, conditional, and optional.
6. Modifiability: The SRS should be made as modifiable as possible and should be able to
accommodate changes to the system quickly. Modifications should be properly indexed and
cross-referenced.
7. Verifiability: The SRS is verifiable when the specified requirements can be checked with a
cost-effective process to determine whether the final software meets those requirements. The
requirements are verified with the help of reviews.
8. Traceability: The SRS is traceable if the origin of each of the requirements is clear and if it
facilitates the referencing of each condition in future development or enhancement documentation.
1. Backward Traceability: This depends upon each requirement explicitly referencing its source
in earlier documents.
2. Forward Traceability: This depends upon each element in the SRS having a unique name or
reference number.
The forward traceability of the SRS is especially crucial when the software product enters the
operation and maintenance phase. As code and design document is modified, it is necessary to be
able to ascertain the complete set of requirements that may be concerned by those modifications.
9. Design Independence: There should be an option to select from multiple design alternatives
for the final system. More specifically, the SRS should not contain any implementation details.
10. Testability: An SRS should be written in such a method that it is simple to generate test cases
and test plans from the report.
11. Understandable by the customer: An end user may be an expert in his/her own domain
but might not be trained in computer science. Hence, the use of formal notations and symbols
should be avoided as much as possible. The language should be kept simple and clear.
12. The right level of abstraction: If the SRS is written for the requirements stage, the details
should be explained explicitly, whereas for a feasibility study less detail is needed. Hence,
the level of abstraction varies according to the objective of the SRS.
Concise: The SRS report should be concise and at the same time, unambiguous, consistent, and
complete. Verbose and irrelevant descriptions decrease readability and also increase error
possibilities.
Structured: It should be well-structured. A well-structured document is simple to understand and
modify. In practice, the SRS document undergoes several revisions to cope with changing user
requirements. Often, user requirements evolve over a period of time. Therefore, to make
modifications to the SRS document easy, it is vital to make the report well-structured.
Black-box view: It should only define what the system should do and refrain from stating how to
do these. This means that the SRS document should define the external behavior of the system and
not discuss the implementation issues. The SRS report should view the system to be developed as
a black box and should define the externally visible behavior of the system. For this reason, the
SRS report is also known as the black-box specification of a system.
Conceptual integrity: It should show conceptual integrity so that the reader can easily
understand it.
Response to undesired events: It should characterize acceptable responses to undesired events.
These are called system responses to exceptional conditions.
Verifiable: All requirements of the system, as documented in the SRS document, should be
correct. This means that it should be possible to decide whether or not requirements have been met
in an implementation.
Requirements Elicitation
Requirements elicitation is perhaps the most difficult, most error-prone and most
communication-intensive activity in software development.
It can be successful only through an effective customer-developer partnership.
It is needed to know what the users really need.
#2) Brainstorming
This technique is used to generate new ideas and find a solution for a specific issue. The members
included in brainstorming can be domain experts and subject matter experts. Multiple ideas and
pieces of information give you a repository of knowledge, and you can choose from the different
ideas.
This session is generally conducted as a round-the-table discussion. All participants should be
given an equal amount of time to express their ideas.
#3) Interview
If the interviewer has a predefined set of questions then it's called a structured interview.
If the interviewer does not have any particular format or any specific questions then it's called
an unstructured interview.
For an effective interview, you can consider the 5 Whys technique. When you get an answer to all
your Whys, then you are done with your interview process. Open-ended questions are used to obtain
detailed information; the interviewee cannot answer them with just Yes or No.
Closed questions can be answered in Yes or No form and are used to get confirmation on answers.
Basic Rules:
The overall purpose of performing the interviews should be clear.
Identify the interviewees in advance.
Interview goals should be communicated to the interviewee.
Interview questions should be prepared before the interview.
The location of the interview should be predefined.
The time limit should be described.
The interviewer should organize the information and confirm the results with the
interviewees as soon as possible after the interview.
Benefits:
Interactive discussion with stakeholders.
The immediate follow-up to ensure the interviewer’s understanding.
Encourage participation and build relationships by establishing rapport with the
stakeholder.
Drawbacks:
Time is required to plan and conduct interviews.
Commitment is required from all the participants.
Sometimes training is required to conduct effective interviews.
#4) Document Analysis/Review
This technique is used to gather business information by reviewing/examining the available
materials that describe the business environment. This analysis is helpful to validate the
implementation of current solutions and is also helpful in understanding the business need.
Document analysis includes reviewing the business plans, technical documents, problem reports,
existing requirement documents, etc. This is useful when the plan is to update an existing system.
This technique is useful for migration projects.
This technique is important in identifying the gaps in the system i.e. to compare the AS-IS process
with the TO-BE process. This analysis also helps when the person who has prepared the existing
documentation is no longer present in the system.
Benefits:
Existing documents can be used to compare current and future processes.
Existing documents can be used as a base for future analysis.
Drawbacks:
Existing documents might not be updated.
Existing documents might be completely outdated.
Resources worked on the existing documents might not be available to provide
information.
This process is time-consuming.
#5) Focus Group
By using a focus group, you can get information about a product, service from a group. The Focus
group includes subject matter experts. The objective of this group is to discuss the topic and
provide information. A moderator manages this session.
The moderator should work with business analysts to analyze the results and provide findings to
the stakeholders.
If a product is under development and the discussion is required on that product then the result
will be to update the existing requirement or you might get new requirements. If a product is ready
to ship then the discussion will be on releasing the product.
Benefits:
You can get information in a single session rather than conducting one to one
interview.
Active discussion with the participants creates a healthy environment.
One can learn from other’s experiences.
Drawbacks:
It might be difficult to gather the group on the same date and time.
If you are doing this using the online method then the participant’s interaction will
be limited.
A Skilled Moderator is required to manage focus group discussions.
#6) Interface Analysis
Interface analysis is used to review the system, people, and processes. This analysis is used to
identify how information is exchanged between components. An interface can be described as a
connection between two components.
The interface analysis focuses on the below questions:
1. Who will be using the interface?
2. What kind of data will be exchanged?
3. When will the data be exchanged?
4. How to implement the interface?
5. Why do we need the interface? Can't the task be completed without using the
interface?
Benefits:
Provide missed requirements.
Determine regulations or interface standards.
Uncover areas where it could be a risk for the project.
Drawbacks:
The analysis is difficult if internal components are not available.
It cannot be used as a standalone elicitation activity.
#7) Observation
The main objective of the observation session is to understand the activity, task, tools used, and
events performed by others.
The plan for observation ensures that all stakeholders are aware of the purpose of the observation
session, they agree on the expected outcomes, and that the session meets their expectations. You
need to inform the participants that their performance is not judged.
During the session, the observer should record all the activities and the time taken to perform the
work by others so that he/she can simulate the same. After the session, the BA will review the
results and will follow up with the participants. Observation can be either active or passive.
Active observation means asking questions and trying to attempt the work that other persons are
doing.
Passive observation is silent observation, i.e., you sit with others and just observe how they
are doing their work without interrupting them.
Benefits:
The observer will get a practical insight into the work.
Improvement areas can be easily identified.
Drawbacks:
Participants might get disturbed.
Participants might change their way of working during observation and the observer
might not get a clear picture.
Knowledge-based activities cannot be observed.
#8) Prototyping
Prototyping is used to identify missing or unspecified requirements. In this technique, frequent
demos are given to the client by creating prototypes so that the client can get an idea of what
the product will look like. Prototypes can be used to create mock-ups of sites and to describe
the process using diagrams.
Benefits:
Gives a visual representation of the product.
Stakeholders can provide feedback early.
Drawbacks:
If the system or process is highly complex, the prototyping process may become
time-consuming.
Stakeholders may focus on the design specifications of the solution rather than the
requirements that any solution must address.
#9) Joint Application Development (JAD)/ Requirement Workshops
This technique is more process-oriented and formal as compared to other techniques. These are
structured meetings involving end-users, PMs, SMEs. This is used to define, clarify, and complete
requirements.
#10) Survey/Questionnaire
Questions should be based on high-priority risks. Questions should be direct and unambiguous.
Once the survey is ready, notify the participants and remind them to participate.
Amongst all the above techniques, a handful (the top five) are the most commonly used for
elicitation in practice.
Scenario-based elements :
Using a scenario-based approach, the system is described from the user's point of view. For
example, basic use cases and their corresponding use-case diagrams evolve into more
elaborate template-based use cases. Figure 1(a) depicts a UML activity diagram for
eliciting requirements and representing them using use cases. There are three levels of
elaboration.
Class-based elements :
A collection of things that have similar attributes and common behaviors, i.e., objects,
is categorized into classes. For example, a UML class diagram can be used to depict
a Sensor class for the SafeHome security function. Note that the diagram lists the attributes
of sensors and the operations that can be applied to modify these attributes. In addition to
class diagrams, other analysis modeling elements depict the manner in which classes
collaborate with one another and the relationships and interactions between classes.
Behavioral elements :
The behavior of a computer-based system can have an effect on the design that is chosen and
the implementation approach that is applied. Modeling elements that depict behavior must be
provided by the requirements model.
A state diagram is a method for representing the behavior of a system by depicting its states
and the events that cause the system to change state. A state is an externally observable mode
of behavior. In addition, a state diagram indicates the actions taken as a consequence of a
particular event.
To illustrate the use of a state diagram, consider the software embedded within the SafeHome
control panel that is responsible for reading user input. A simplified UML state diagram
is shown in figure 2.
Flow-oriented elements :
As it flows through a computer-based system, information is transformed. The system
accepts input, applies functions to transform it, and produces output in various
forms. Input may be a control signal transmitted by a transducer, a series of numbers
typed by a human operator, a packet of information transmitted on a network link, or a
voluminous data file retrieved from secondary storage. The transform may comprise a
single logical comparison, a complex numerical algorithm, or the rule-inference
approach of an expert system. Output may produce a 200-page report or may light a single
LED. In effect, we can create a flow model for any computer-based system,
regardless of size and complexity.
Decision Table:
A decision table is a brief visual representation for specifying which actions to perform depending
on given conditions. The information represented in decision tables can also be represented as
decision trees or in a programming language using if-then-else and switch-case statements.
A decision table is a good way to deal with different combinations of inputs and their
corresponding outputs, and it is also called a cause-effect table. It is called a cause-effect
table because of a related logical diagramming technique, cause-effect graphing, that is
basically used to obtain the decision table. A small executable sketch is given after the list
below.
Importance of Decision Table:
Any complex business flow can be easily converted into test scenarios & test cases
using this technique.
Decision tables work iteratively, which means the table created in the first iteration is
used as input for the next table. The iteration is done only if the initial table is
not satisfactory.
Simple to understand and everyone can use this method to design the test scenarios &
test cases.
It provides complete coverage of test cases which helps to reduce the rework on writing
test scenarios & test cases.
These tables guarantee that we consider every possible combination of condition
values. This is known as its completeness property.
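To make the idea concrete, here is a minimal Python sketch of a decision table. The loan-approval
rule, the function name loan_decision, and the two conditions are assumptions made purely for
illustration; they are not taken from the original text.

def loan_decision(is_salaried: bool, good_credit: bool) -> str:
    # The same rule could be drawn as a decision table, a decision tree,
    # or written with if-then-else statements, as noted above.
    if is_salaried and good_credit:
        return "approve"
    return "reject"

# One rule per combination of condition values; this is the completeness
# property mentioned above: every combination is considered exactly once.
decision_table = [
    # (is_salaried, good_credit, expected action)
    (True,  True,  "approve"),
    (True,  False, "reject"),
    (False, True,  "reject"),
    (False, False, "reject"),
]
for is_salaried, good_credit, expected in decision_table:
    assert loan_decision(is_salaried, good_credit) == expected
print("all decision-table rules verified")

Each rule of the table becomes one test case, which is how a decision table is converted into
test scenarios and test cases.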
The standard of measure for the estimation of quality, progress and health of the software testing
effort is called software metrics. Metrics can be divided into three groups: product metrics,
process metrics, and project metrics. Product characteristics like size, design features,
complexity, performance, and level of quality are described using product metrics. In contrast,
software development and maintenance are improved using process metrics. A project's
characteristics and execution are described by project metrics, whose examples include the number
of software developers, cost, etc.
It is necessary to develop software metrics based on some guidelines. Those guidelines are:
A metric must be simple and computable. Its derivation must be easy to learn, and the time
and effort involved must be moderate.
The results given must be objective and consistent. The results should not be ambiguous.
It must make use of units and dimensions if mathematical computations are involved.
The development of metrics should be based on an analysis model, design model or the
structure of the model, and it should be independent of the programming language.
A metric is effective if and only if it can help deliver high-quality software products.
It must be able to adapt to the changing requirements of the project, i.e., calibration
must be easy.
The cost of developing the metrics must be reasonable. One must be able to obtain them easily.
If we are using software metrics for making any decisions, they must be validated before being
applied to those decisions.
The developed metrics must be robust, i.e., they should not be sensitive to small changes in
the project, process, or product.
As and when the value of the software characteristic a metric represents changes, the metric
value must also change; for this to happen, the metric must lie in a meaningful range, for
example, zero to five.
2. Product Metrics
The characteristics of the software product are measured using product metrics. Some of the
important characteristics of the software are its size, design features, complexity, performance,
and level of quality.
Computation of these metrics is done for different stages of the software development lifecycle.
3. Internal Metrics
The properties which are of great importance to a software developer can be measured using the
metrics called internal metrics. An example is a measure of Lines of code (LOC).
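As a rough illustration of an internal metric, the sketch below counts non-blank, non-comment
lines to produce a simple LOC figure. The helper name count_loc and the counting rule are
assumptions made for illustration only.

def count_loc(source_text: str) -> int:
    # Count lines that are neither blank nor pure comments.
    loc = 0
    for line in source_text.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """# demo module
def add(a, b):
    return a + b
"""
print(count_loc(sample))  # prints 2: the def line and the return line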
4. External Metrics
The properties which are of great importance to a user can be measured using the metrics called
external metrics. An example is portability, reliability, usability, etc.
5. Project Metrics
The progress of the project is checked by the project manager using the metrics called project
metrics. Various metrics such as time, cost, etc., are collected by using the data from the projects
in the past, and they are used as an estimate for the new software. The project manager checks the
progress of the project from time to time, and effort, time and cost are compared with the original
effort, time and cost. The cost of development, efforts, risks and time can be reduced by using
these metrics. The quality of the project can also be improved. With the increase in quality, there
is a reduction in the number of errors, time, cost, etc.
Advantages
Disadvantages
It is not easy to apply metrics in all cases. It is difficult and expensive in some cases.
It is difficult to verify the validity of the historical or empirical data on which the
verification and justification are based.
Software products can be managed, but the technical staff's performance cannot be
evaluated using software metrics.
The available tools and the working environment are used to define and derive the software
metrics, and there is no standard for defining and deriving them.
Certain variables are estimated based on predictive models, and they are often not known
accurately.
UNIT-V
Software Testing: Introduction to faults and failures; basic testing concepts; concepts of
verification and validation; black box and white box tests; white box test coverage – code coverage,
condition coverage, branch coverage; basic concepts of black-box tests – equivalence classes,
boundary value tests, usage of state tables; testing use cases; transaction based testing; testing for
non-functional requirements – volume, performance and efficiency; concepts of inspection.
A fault is an incorrect step, process, or data definition in a computer program that is
responsible for the unintended behavior of the program.
Faults or bugs in hardware or software may cause errors.
An error can be defined as a part of the system which will lead to the failure of the system.
Basically, an error in a program is an indication that a failure has occurred or is about to
occur.
If there are multiple components in the system, errors in those components will lead to
component failure.
As there are many components in the system that interact with each other, the failure of one
component might introduce one or more faults into the system. The following cycle shows the
behavior of a fault.
Types of fault :
In software products, different types of faults can occur. In order to remove a fault, we
have to know what type of fault our program is facing. The following are the types
of faults:
Figure: Types of Faults
1. Algorithm Fault :
This type of fault occurs when the component's algorithm or logic does not provide the
proper result for the given input due to wrong processing steps. It can be easily
removed by reading through the program, i.e., desk checking.
2. Computational Fault :
This type of fault occurs when the implementation of a formula is wrong or is not capable of
calculating the desired result, e.g., combining integer and floating-point variables may
produce an unexpected result.
3. Syntax Fault :
This type of fault occurs due to the use of wrong syntax in the program. We have to use
the proper syntax of the programming language we are using.
4. Documentation Fault :
The documentation of the program tells what the program actually does. Thus, this fault
occurs when the program does not match its documentation.
5. Overload Fault :
For memory purposes, we use data structures such as arrays, queues, and stacks in our
programs. When they are filled to their given capacity and we use them beyond that
capacity, an overload fault occurs in our program.
6. Timing Fault :
When the system does not respond within the expected time after a failure occurs in the
program, this type of fault is referred to as a timing fault.
7. Hardware Fault :
This type of fault occurs when the specified hardware for the given software does not work
properly. Basically, it is due to a problem with the hardware that is not covered in the
specification.
8. Software Fault :
This fault occurs when the specified software does not work properly or does not support the
platform (operating system) being used.
9. Omission Fault :
It can occur when a key aspect is missing from the program, e.g., when the initialization
of a variable is not done in the program.
10. Commission Fault :
It can occur when a statement or expression is wrong, e.g., an integer is initialized with
a float.
Fault Avoidance :
Faults in a program can be avoided by using techniques and procedures that aim to
avoid the introduction of faults during any phase of the safety lifecycle of a safety-related
system.
Fault Tolerance :
It is the ability of a functional unit to continue to perform a required function even in the
presence of faults.
Principles of Testing:
Static Testing techniques are testing techniques that do not require the execution of a code base.
Static Testing Techniques are divided into two major categories:
1. Reviews: They can range from purely informal peer reviews between two
developers/testers on the artifacts (code/test cases/test data) to totally
formal Inspections which are led by moderators who can be internal/external to the
organization.
1. Peer Reviews: Informal reviews are generally conducted without any
formal setup. It is between peers. For Example- Two developers/Testers
review each other’s artifacts like code/test cases.
2. Walkthroughs: Walkthrough is a category where the author of work (code
or test case or document under review) walks through what he/she has done
and the logic behind it to the stakeholders to achieve a common
understanding or for the intent of feedback.
3. Technical review: It is a review meeting that focuses solely on the
technical aspects of the document under review to achieve a consensus. It
has less or no focus on the identification of defects based on reference
documentation. Technical experts like architects/chief designers are
required for doing the review. It can vary from Informal to fully formal.
4. Inspection: Inspection is the most formal category of reviews. Before the
inspection, The document under review is thoroughly prepared before going
for an inspection. Defects that are identified in the Inspection meeting are
logged in the defect management tool and followed up until closure. The
discussion on defects is avoided and a separate discussion phase is used for
discussions, which makes Inspections a very effective form of reviews.
2. Static Analysis: Static analysis is an examination of requirements, code, or design with
the aim of identifying defects that may or may not cause failures. For example, reviewing the
code for adherence to coding standards: not following a standard is a defect that may or may
not cause a failure. There are many tools for static analysis that are mainly used by developers
before or during Component or Integration Testing. Even a compiler is a static analysis tool,
as it points out incorrect usage of syntax without executing the code per se. There are several
aspects to the code structure, namely data flow, control flow, and data structure.
1. Data Flow: It means how the data trail is followed in a given program –
How data gets accessed and modified as per the instructions in the program.
By Data flow analysis, You can identify defects like a variable definition
that never got used.
2. Control flow: It is the structure of how program instructions get executed
i.e conditions, iterations, or loops. Control flow analysis helps to identify
defects such as Dead code i.e a code that never gets used under any
condition.
3. Data Structure: It refers to the organization of data irrespective of code.
The complexity of data structures adds to the complexity of code. Thus, it
provides information on how to test the control flow and data flow in a given
code.
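To make the data-flow and control-flow defects above concrete, here is a minimal Python sketch;
the function and the defects in it are assumptions made only for illustration. A static analysis
tool can flag both issues without ever executing the code.

def apply_discount(price, customer_type):
    discount = 0.05            # data-flow defect: variable defined but never used
    if customer_type == "member":
        rate = 0.10
    else:
        rate = 0.02
    return price * (1 - rate)
    print("discount applied")  # control-flow defect: dead code, never reachable

Running a linter such as pylint over this module would typically report the unused variable and
the unreachable statement, which is exactly the kind of defect static analysis is meant to catch.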
3. Decision Table Testing: Test cases are derived from a decision table that covers combinations
of conditions, as described in the Decision Table section above.
4. Use case-based Testing: This technique helps us to identify test cases that execute the system
as a whole- like an actual user (Actor), transaction by transaction. Use cases are a sequence of
steps that describe the interaction between the Actor and the system. They are always defined in
the language of the Actor, not the system. This testing is most effective in identifying the
integration defects. Use case also defines any preconditions and postconditions of the process flow.
An ATM machine can be tested via a use case, as sketched below:
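A minimal sketch of such a use case is given below; the "withdraw cash" steps, actor, and
pre/postconditions are assumptions made for illustration, since the original figure is not
reproduced here.

withdraw_cash_use_case = {
    "actor": "Bank customer",
    "preconditions": ["Card is valid", "ATM has sufficient cash"],
    "main_flow": [
        ("Actor inserts card", "System prompts for PIN"),
        ("Actor enters PIN", "System validates PIN and shows menu"),
        ("Actor selects Withdraw and enters amount", "System checks balance and limits"),
        ("Actor confirms", "System dispenses cash and prints receipt"),
    ],
    "postconditions": ["Account is debited", "Card is returned"],
}

# Each (actor action, expected system response) pair becomes one test step,
# executed transaction by transaction against the system under test.
for step, (action, expected) in enumerate(withdraw_cash_use_case["main_flow"], start=1):
    print(f"Step {step}: {action} -> expect: {expected}")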
5. State Transition Testing: It is used where an application under test or a part of it can be treated
as FSM or finite state machine. Continuing the simplified ATM example above, We can say that
ATM flow has finite states and hence can be tested with the State transition technique. There are
4 basic things to consider –
1. States a system can achieve
2. Events that cause the change of state
3. The transition from one state to another
4. Outcomes of change of state
A state event pair table can be created to derive test conditions – both positive and negative.
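Continuing the simplified ATM idea, the sketch below shows how a state-event pair table can drive
both positive and negative test conditions. The states, events, and transitions listed are
assumptions made for illustration, not taken from the original figure.

# State-event pair table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "card_inserted"): "awaiting_pin",
    ("awaiting_pin", "valid_pin"): "menu",
    ("awaiting_pin", "invalid_pin"): "awaiting_pin",  # simplified retry
    ("menu", "eject_card"): "idle",
}

def next_state(state, event):
    # Any (state, event) pair not in the table is an invalid transition.
    return TRANSITIONS.get((state, event), "error")

assert next_state("idle", "card_inserted") == "awaiting_pin"  # positive test condition
assert next_state("awaiting_pin", "valid_pin") == "menu"      # positive test condition
assert next_state("idle", "valid_pin") == "error"             # negative: event not valid here
print("state transition checks pass")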
Concepts of verification and validation:
Verification and validation are processes that collect evidence of a model's correctness or
accuracy for a specific scenario; thus, V&V cannot prove that a model is correct and accurate
for all possible conditions and applications, but, rather, it can provide evidence that a model is
sufficiently accurate.
Verification process includes checking of documents, design, code and program whereas
Validation process includes testing and validation of the actual product.
Verification checks whether the software conforms to a specification, whereas validation
checks whether the software meets the requirements and expectations.
The four fundamental methods of verification are Inspection, Demonstration, Test, and
Analysis. The four methods are somewhat hierarchical in nature, as each verifies requirements
of a product or system with increasing rigor.
Method validation is the process used to confirm that the analytical procedure employed
for a specific test is suitable for its intended use. Results from method validation can be used
to judge the quality, reliability and consistency of analytical results; it is an integral part of any
good analytical practice.
Verification testing can be best demonstrated using V-Model. The artefacts such as test Plans,
requirement specification, design, code and test cases are evaluated.
Activities:
Reviews
Walkthroughs
Inspection
Validation Testing
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also be
defined as to demonstrate that the product fulfills its intended use when deployed on
appropriate environment.
It answers the question: Are we building the right product?
Validation testing can be best demonstrated using V-Model. The Software/product under test is
evaluated during this type of testing.
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
1. Black Box Testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is not known to the tester
2. White Box Testing is a software testing method in which the internal structure/ design/
implementation of the item being tested is known to the tester.
Black Box Testing vs. White Box Testing:
Black box testing can be referred to as outer or external software testing; white box testing
is the inner or internal software testing.
Black box testing can be initiated on the basis of the requirement specification document;
white box testing is started after the detailed design document.
Black box testing is behavior testing of the software; white box testing is logic testing of
the software.
Black box testing is applicable to the higher levels of software testing; white box testing is
generally applicable to the lower levels of software testing.
Black box testing can be done by trial-and-error ways and methods; in white box testing, data
domains along with inner or internal boundaries can be better tested.
The main categories of Black Box Testing are:
A. Functional Testing
B. Non-functional Testing
C. Regression Testing
White box testing techniques analyze the internal structures of the used data structures, internal
design, code structure and the working of the software rather than just the functionality as in
black box testing.
It is also called glass box testing or clear box testing or structural testing.
Working process of white box testing:
Input: Requirements, Functional specifications, design documents, source code.
Processing: Performing risk analysis for guiding through the entire process.
Proper test planning: Designing test cases so as to cover entire code. Execute rinse-
repeat until error-free software is reached. Also, the results are communicated.
Output: Preparing final report of the entire testing process.
Testing techniques:
Statement coverage: In this technique, the aim is to traverse every statement at least once.
Hence, each line of code is tested. In the case of a flowchart, every node must be traversed
at least once. Since all lines of code are covered, it helps in pointing out faulty code.
Branch Coverage: In this technique, test cases are designed so that each branch from
all decision points is traversed at least once. In a flowchart, all edges must be traversed
at least once. In the referenced flowchart example, 4 test cases are required such that all
branches of all decisions are covered, i.e., all edges of the flowchart are covered.
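As a small, self-contained illustration of branch coverage (the flowchart the text refers to is
not reproduced here), consider the assumed grading function below. The point is that every true
and false edge of each decision must be exercised; in this smaller example three cases already
cover every branch edge.

def grade(score):
    if score >= 50:              # decision 1
        result = "pass"
    else:
        result = "fail"
    if score == 100:             # decision 2
        result += " with distinction"
    return result

# Branch coverage: take both the true and the false edge of each decision.
cases = [
    (100, "pass with distinction"),  # decision 1 true, decision 2 true
    (60, "pass"),                    # decision 1 true, decision 2 false
    (10, "fail"),                    # decision 1 false, decision 2 false
]
for score, expected in cases:
    assert grade(score) == expected
print("every branch edge of both decisions has been taken")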
Multiple Condition Coverage: In this technique, all the possible combinations of the
possible outcomes of conditions are tested at least once. Let’s consider the following
example:
READ X, Y
IF (X == 0 || Y == 0)
    PRINT '0'
#TC1: X = 0, Y = 0
#TC2: X = 0, Y = 5
#TC3: X = 55, Y = 0
#TC4: X = 55, Y = 5
Hence, four test cases are required for two individual conditions. Similarly, if there are n
conditions, then 2^n test cases would be required.
Basis Path Testing: In this technique, control flow graphs are made from code or
flowchart and then Cyclomatic complexity is calculated which defines the number of
independent paths so that the minimal number of test cases can be designed for each
independent path.
Steps:
1. Make the corresponding control flow graph
2. Calculate the cyclomatic complexity
3. Find the independent paths
4. Design test cases corresponding to each independent path
Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one
that represents a decision point that contains a condition after which the graph splits.
Regions are bounded by nodes and edges.
Cyclomatic Complexity: It is a measure of the logical complexity of the software and
is used to define the number of independent paths. For a graph G, V(G) is its cyclomatic
complexity.
Calculating V(G):
1. V(G) = P + 1, where P is the number of predicate nodes in the flow graph
2. V(G) = E – N + 2, where E is the number of edges and N is the total number of nodes
3. V(G) = Number of non-overlapping regions in the graph
Example: for the referenced flow graph, the independent paths are
#P1: 1 – 2 – 4 – 7 – 8
#P2: 1 – 2 – 3 – 5 – 7 – 8
#P3: 1 – 2 – 3 – 6 – 7 – 8
#P4: 1 – 2 – 4 – 7 – 1 – . . . – 7 – 8
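A minimal sketch of the calculation is shown below. The graph is hypothetical, chosen only so
that it is consistent with the four independent paths listed above (node 7 can either exit to
node 8 or loop back to node 1); the edge list is an assumption, not the original figure.

# Hypothetical control flow graph: nodes 1..8, node 7 loops back to node 1.
edges = [
    (1, 2),
    (2, 3), (2, 4),        # node 2 is a predicate node (two outgoing edges)
    (3, 5), (3, 6),        # node 3 is a predicate node
    (4, 7), (5, 7), (6, 7),
    (7, 8), (7, 1),        # node 7 is a predicate node (exit or loop back)
]

nodes = {n for edge in edges for n in edge}
E, N = len(edges), len(nodes)

# Predicate nodes are those with more than one outgoing edge.
out_degree = {}
for src, _ in edges:
    out_degree[src] = out_degree.get(src, 0) + 1
P = sum(1 for d in out_degree.values() if d > 1)

print("V(G) = E - N + 2 =", E - N + 2)  # 10 - 8 + 2 = 4
print("V(G) = P + 1     =", P + 1)      # 3 + 1 = 4

Both formulas agree on V(G) = 4, matching the four independent paths #P1 to #P4.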
Loop Testing: Loops are widely used and are fundamental to many algorithms; hence, their
testing is very important. Errors often occur at the beginnings and ends of loops.
1. Simple loops: For a simple loop of size n, test cases are designed that (see the
sketch after this list):
Skip the loop entirely
Make only one pass through the loop
Make 2 passes
Make m passes, where m < n
Make n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum
count and we start from the innermost loop. Simple loop tests are
conducted for the innermost loop and this is worked outwards till all the
loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop
tests are applied for each.
If they’re not independent, treat them like nesting.
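The sketch below applies the simple-loop test cases to an assumed function with one loop; the
function name running_total and the data values are illustrative only.

def running_total(values):
    # A simple loop of size n = len(values): the loop under test.
    total = 0
    for v in values:
        total += v
    return total

n = 5
cases = {
    "skip the loop": [],
    "one pass": [1],
    "two passes": [1, 2],
    "m < n passes": [1, 2, 3],          # m = 3
    "n-1 passes": list(range(n - 1)),
    "n+1 passes": list(range(n + 1)),
}
for name, data in cases.items():
    assert running_total(data) == sum(data), name
print("all simple-loop cases pass")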
Advantages:
1. White box testing is very thorough as the entire code and structures are tested.
2. It results in the optimization of code removing error and helps in removing extra lines
of code.
3. It can start at an earlier stage as it doesn’t require any interface as in case of black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming
language as opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.
Black box testing is a type of software testing in which the internal structure and
implementation of the software are not known to the tester.
The testing is done without internal knowledge of the product. Black box testing can be
done in the following ways:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example, compilers and languages that can be represented by a
context-free grammar. In this, the test cases are generated so that each grammar rule is used at
least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so
instead of testing all of them separately we can group them together and test only one input of
each group. The idea is to partition the input domain of the system into a number of equivalence
classes such that each member of a class works in a similar way, i.e., if a test case for one
member of a class results in some error, other members of the class would also result in the
same error.
The technique involves two steps:
1. Identification of equivalence classes – Partition the input domain into a minimum of two
sets: valid values and invalid values. For example, if the valid range is 0 to 100, then select
one valid input like 49 and one invalid input like 104.
2. Generating test cases –
(i) Assign a unique identification number to each valid and invalid class of input.
(ii) Write test cases covering all valid and invalid classes, ensuring that no two invalid
inputs mask each other.
To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
Whole number which is a perfect square- output will be an integer.
Whole number which is not a perfect square- output will be decimal
number.
Positive decimals
(b) Invalid inputs:
Negative numbers (integer or decimal).
Characters other than numbers, like "a", "!", ";", etc.
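A minimal sketch of these classes as executable test cases is given below; the function
safe_sqrt and its error behaviour are assumptions made for illustration.

import math

def safe_sqrt(value):
    # Valid inputs are non-negative numbers; anything else raises ValueError.
    if not isinstance(value, (int, float)) or value < 0:
        raise ValueError("input must be a non-negative number")
    return math.sqrt(value)

# One representative test case per equivalence class:
assert safe_sqrt(49) == 7                    # valid: perfect square -> integer result
assert abs(safe_sqrt(2) - 1.4142) < 1e-3     # valid: not a perfect square -> decimal result
assert abs(safe_sqrt(6.25) - 2.5) < 1e-9     # valid: positive decimal
for bad in (-4, -0.5, "a"):                  # invalid: negative numbers and characters
    try:
        safe_sqrt(bad)
        raise AssertionError("expected ValueError for %r" % (bad,))
    except ValueError:
        pass
print("one representative per equivalence class behaves as expected")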
3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if test
cases are designed for boundary values of the input domain, then the efficiency of testing
improves and the probability of finding errors also increases. For example, if the valid range
is 10 to 100, then test for 10 and 100 as well, apart from other valid and invalid inputs.
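A minimal sketch for the 10-to-100 range mentioned above; the validator accepts_value is an
assumed helper used only to show where the boundary test cases sit.

def accepts_value(x):
    # Assumed rule: inputs are valid only in the inclusive range [10, 100].
    return 10 <= x <= 100

# Boundary value test cases cluster around both edges of the range.
boundary_cases = {9: False, 10: True, 11: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accepts_value(value) == expected, value
print("all boundary cases behave as expected")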
4. Cause effect Graphing – This technique establishes a relationship between logical inputs,
called causes, and the corresponding actions, called effects. The causes and effects are
represented using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effects).
2. Develop the cause-effect graph.
3. Transform the graph into a decision table.
4. Convert the decision table rules to test cases.
For example, consider the small cause-effect sketch below:
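Since the original graph is not reproduced here, the sketch below uses an assumed login rule with
two causes (valid username, valid password) and two effects (grant access, show error) to show
how the graph collapses into a decision table and then into test cases.

from itertools import product

def effect(c1_username_valid, c2_password_valid):
    # Assumed rule: access is granted only when both causes are true.
    return "grant access" if (c1_username_valid and c2_password_valid) else "show error"

# The cause-effect graph is transformed into this decision table;
# each rule (column) of the table becomes one test case.
for c1, c2 in product([True, False], repeat=2):
    expected = "grant access" if (c1 and c2) else "show error"
    assert effect(c1, c2) == expected
    print(f"C1={c1!s:5} C2={c2!s:5} -> {effect(c1, c2)}")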
6. Compatibility testing – The test case results depend not only on the product but also on the
infrastructure that delivers the functionality. When the infrastructure parameters are changed,
the product is still expected to work properly. Some parameters that generally affect the
compatibility of software are:
1. Processor (Pentium 3,Pentium 4) and number of processors.
2. Architecture and characteristic of machine (32 bit or 64 bit).
3. Back-end components such as database servers.
4. Operating System (Windows, Linux, etc).
Functional testing mainly involves black box testing and it is not concerned about the source
code of the application.
This testing checks User Interface, APIs, Database, Security, Client/Server communication and
other functionality of the Application Under Test.
Functional testing vs. Non-functional testing:
Business requirements are the inputs to functional testing; performance parameters like speed
and scalability are inputs to non-functional testing.
Functional testing describes what the product does; non-functional testing describes how well
the product works.
Functional testing is easy to do manually; non-functional testing is tough to do manually.
Examples of Functional testing are: Unit Testing, Smoke Testing, Sanity Testing, Integration
Testing, White Box Testing, Black Box Testing, User Acceptance Testing, Regression Testing.
Examples of Non-functional testing are: Performance Testing, Load Testing, Volume Testing,
Stress Testing, Security Testing, Installation Testing, Penetration Testing, Compatibility
Testing, Migration Testing.
1) Security:
The parameter defines how a system is safeguarded against deliberate and sudden attacks from
internal and external sources. This is tested via Security Testing.
2) Reliability:
The extent to which any software system continuously performs the specified functions without
failure. This is tested by Reliability Testing
3) Survivability:
The parameter checks that the software system continues to function and recovers itself in case of
system failure. This is checked by Recovery Testing
4) Availability:
The parameter determines the degree to which a user can depend on the system during its
operation. This is checked by Stability Testing.
5) Usability:
The ease with which the user can learn, operate, prepare inputs and outputs through interaction
with a system. This is checked by Usability Testing
6) Scalability:
The term refers to the degree in which any software application can expand its processing capacity
to meet an increase in demand. This is tested by Scalability Testing
7) Interoperability:
This non-functional parameter checks how a software system interfaces with other software
systems. This is checked by Interoperability Testing.
8) Efficiency:
The extent to which any software system can handle capacity, quantity and response time.
9) Flexibility:
The term refers to the ease with which the application can work in different hardware and software
configurations. Like minimum RAM, CPU requirements.
10) Portability:
The flexibility of software to transfer from its current hardware or software environment.
11) Reusability:
It refers to a portion of the software system that can be converted for use in another application.
Concepts of inspection:
How Software Inspection improves Software Quality ?
The term software inspection was developed by IBM in the early 1970s, when it was noticed
that testing alone was not sufficient to attain high-quality software for large applications.
Inspection is used to determine the defects in the code and remove it efficiently.
This prevents defects and enhances the quality of testing to remove defects.
This software inspection method achieved the highest level for efficiently removing defects
and improving software quality.
There are some factors that generate the high quality software:
Formal design and code inspections: This factor refers to formal inspections that follow
protocols such as trained participants and material distributed in advance for inspection.
Both moderators and recorders are present, and defect statistics are analyzed.
Formal quality assurance: This factor refers to an active software quality assurance
group, which joins the software development groups to support them in the
development of high-quality software.
Formal testing: This factor means the test process satisfies certain conditions:
A test plan is created for the application.
Specifications are complete enough that test cases can be written without
significant gaps.
Library control tools are used.
Test coverage analysis tools are used.
What is an Inspection?
Inspection is the most formal form of reviews, a strategy adopted during static testing phase.
Characteristics of Inspection:
Inspection is usually led by a trained moderator, who is not the author. Moderator's role is
to do a peer examination of a document
Inspection is most formal and driven by checklists and rules.
This review process makes use of entry and exit criteria.
It is essential to have a pre-meeting preparation.
Inspection report is prepared and shared with the author for appropriate actions.
Post Inspection, a formal follow-up process is used to ensure a timely and a prompt
corrective action.
The aim of Inspection is NOT only to identify defects but also to bring about process
improvement.
*********************THE END**********************