Throwaway Prototype & Waterfall Model
Benefits of a throwaway prototype:
1. Quick Feedback: It allows stakeholders and users to see and interact with a version of the product early in
development, providing feedback before investing in full-scale development.
2. Risk Reduction: By testing and refining ideas early, throwaway prototyping helps to minimize the risk of
spending time on a flawed design or functionality.
3. Cost-Effectiveness: Since it’s intended to be discarded, it saves time and resources by focusing only on what’s
needed for early validation without committing to full functionality.
4. Improving Requirements: The prototype helps clarify requirements by showing what works and what
doesn’t, leading to more refined specifications.
The Classical Waterfall Model suffers from various shortcomings, because of which it cannot be used in real projects; instead, other software development lifecycle models based on the classical waterfall model are used. Below are some major drawbacks of this model.
• No Feedback Path: In the classical waterfall model, the evolution of software from one phase to another is like a waterfall. It assumes that no error is ever committed by developers during any phase. Therefore, it does not incorporate any mechanism for error correction.
• Difficult to accommodate Change Requests: This model assumes that all the customer requirements can be
completely and correctly defined at the beginning of the project, but the customer’s requirements keep on
changing with time. It is difficult to accommodate any change requests after the requirements specification
phase is complete.
• No Overlapping of Phases: This model recommends that a new phase can start only after the completion of
the previous phase. But in real projects, this can’t be maintained. To increase efficiency and reduce cost,
phases may overlap.
• Limited Flexibility: The Waterfall Model is a rigid and linear approach to software development, which
means that it is not well-suited for projects with changing or uncertain requirements. Once a phase has been
completed, it is difficult to make changes or go back to a previous phase.
• Limited Stakeholder Involvement: The Waterfall Model is a structured and sequential approach, which
means that stakeholders are typically involved in the early phases of the project (requirements gathering and
analysis) but may not be involved in the later phases (implementation, testing, and deployment).
• Late Defect Detection: In the Waterfall Model, testing is typically done toward the end of the development
process. This means that defects may not be discovered until late in the development process, which can be
expensive and time-consuming to fix.
c) List any two Non-Functional Requirements from both, developer and end user’s perspective each.
Non-functional requirements (NFRs) refer to the quality attributes, performance standards, and usability metrics of a
system rather than its specific functionality. Here are two examples from both the developer's and end user's
perspectives:
Developer’s Perspective
1. Scalability: The system should be able to handle an increasing number of users or transactions without
performance degradation. Developers focus on designing architecture that can expand as needed to
accommodate growth.
2. Maintainability: The codebase should be organized and documented to facilitate easy updates, debugging,
and enhancement. This is crucial for long-term sustainability and smooth operation of the system.
End User’s Perspective
1. Performance: The system should respond quickly to user actions, with minimal loading times. End users
expect a fast and efficient experience, particularly for actions like page loads and data retrieval.
2. Reliability: The system should be available and function correctly whenever the user needs it. Users want
assurance that the system will operate smoothly without frequent crashes or downtime.
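Requirements like performance can be made measurable with automated checks. Below is a minimal Python sketch of a response-time check; the fetch_dashboard operation and the 200 ms threshold are hypothetical, chosen only to illustrate how such a requirement might be verified.

import time

def fetch_dashboard():
    # Placeholder for the real operation whose latency we care about
    time.sleep(0.05)
    return "dashboard data"

def check_response_time(operation, threshold_seconds=0.2):
    # Measure elapsed wall-clock time for one call of the operation
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed <= threshold_seconds

assert check_response_time(fetch_dashboard), "Performance requirement violated"
print("Response-time requirement met")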
Drawbacks of questionnaires as a requirements elicitation technique:
1. Lack of Depth: Questionnaires are typically limited in detail, which can prevent a deep understanding of
complex requirements. They may not capture nuanced insights that interviews or focus groups might reveal.
2. Misinterpretation: Respondents might misunderstand questions or interpret them differently than intended.
This can lead to inaccurate or incomplete responses, affecting the reliability of the data collected.
3. Limited Flexibility: Once a questionnaire is distributed, it’s hard to adapt or ask follow-up questions based on
a respondent’s answers. This rigid format can limit the quality and relevance of the information gathered.
4. Low Response Rates: Participants might not be motivated to complete questionnaires, especially if they’re
lengthy or complex. This can lead to lower response rates, and the answers may not represent the broader
user base accurately.
5. Superficial Responses: Because questionnaires often include close-ended questions, they may encourage
respondents to give superficial answers without explaining their reasoning or preferences in detail.
6. Bias: Respondents may answer questions in a way that they think is expected or socially acceptable, which
could skew the data. Additionally, the way questions are phrased can introduce unintended bias.
e) While designing for modularity in projects, which factors should be considered by the developer?
When designing for modularity in a project, developers should consider several critical factors to ensure that the codebase is maintainable, flexible, and scalable. Here are the main factors, with a short code sketch after the list:
1. Cohesion: Each module should focus on a specific functionality or purpose. High cohesion ensures that
modules have a single, well-defined responsibility, making them easier to understand, maintain, and test.
2. Coupling: Low coupling between modules is essential to avoid dependencies that could complicate changes
or updates. Modules should interact through clear, well-defined interfaces, minimizing interdependencies
and allowing for easier isolation and modification.
3. Encapsulation: Encapsulating module details hides implementation specifics from other parts of the system,
reducing complexity and protecting internal module states. This ensures that each module can be modified
independently without impacting others.
4. Reusability: Modules should be designed to be reusable wherever possible, allowing developers to apply the
same module in different parts of the project or across different projects. This reduces redundant code and
improves development efficiency.
5. Scalability: Modules should be designed to scale independently, allowing for easy addition or modification
without requiring significant changes to other parts of the system. This is crucial for accommodating growth
and changing requirements.
6. Testability: Each module should be easy to test independently, with well-defined inputs and outputs.
Modularity improves test coverage, as each component can be verified individually, simplifying debugging
and improving reliability.
7. Consistency: A consistent approach in naming conventions, coding standards, and module structure
improves readability and ease of collaboration among team members. Consistency helps developers quickly
understand and work with each module.
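As a minimal Python sketch of the first three factors, the classes below are illustrative only: each has one cohesive responsibility, interacts through a narrow public interface (low coupling), and hides its internal state (encapsulation).

class TaxCalculator:
    # High cohesion: this class only computes tax, nothing else
    def __init__(self, rate):
        self._rate = rate  # internal detail, hidden behind tax_for()

    def tax_for(self, amount):
        return amount * self._rate

class InvoicePrinter:
    # Low coupling: depends only on the calculator's public method,
    # not on how the rate is stored internally
    def __init__(self, calculator):
        self._calculator = calculator

    def render(self, amount):
        tax = self._calculator.tax_for(amount)
        return f"subtotal={amount:.2f} tax={tax:.2f} total={amount + tax:.2f}"

printer = InvoicePrinter(TaxCalculator(rate=0.18))
print(printer.render(100.0))

Because InvoicePrinter only uses tax_for(), the tax rules can change inside TaxCalculator without touching the printer, which also makes each class easy to test in isolation.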
Benefits of walkthroughs:
1. Early Detection of Errors: Walkthroughs allow team members to examine requirements, design documents,
or code line by line, helping to catch errors, inconsistencies, or missing functionality early in the development
process. Early detection minimizes costly fixes in later stages.
2. Improved Understanding of Requirements: By going through the project in detail, walkthroughs help ensure
that the team fully understands the requirements and intended functionality. This shared understanding
reduces the likelihood of misinterpretations that could lead to defects.
3. Cross-Functional Feedback: Walkthroughs involve input from various team members with different expertise
(e.g., developers, testers, designers), allowing them to contribute diverse perspectives. This cross-functional
feedback can identify potential issues or improvements that one individual might overlook.
Types of equivalence class partitions (a short code sketch follows the list):
1. Valid Partitions
• These are partitions where inputs are within the expected or allowable range.
• Testing one representative value from this partition should be sufficient to assume that the entire partition
behaves correctly.
2. Invalid Partitions
• Testing from invalid partitions ensures that the system properly handles and restricts invalid inputs.
3. Boundary Partitions
• Boundary partitions focus on the edges of the valid and invalid partitions to check boundary-related issues.
• This type of testing ensures that the system correctly handles the exact limits of the allowed inputs.
4. Error Partitions
• These partitions cover inputs that should trigger specific error handling, such as invalid data types or formats.
• Testing error partitions helps ensure the system can handle unexpected input formats gracefully.
5. Special/Exceptional Partitions
• These partitions handle cases that are unusual but may be valid in certain contexts, like empty or null inputs.
• Testing these ensures that the system handles these edge cases as expected without crashing.
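To make this concrete, suppose an input field accepts integer ages from 18 to 60 (a hypothetical rule chosen only for this sketch). One representative value per partition could be tested in Python as follows:

def is_valid_age(value):
    # Accept only integers in the hypothetical valid range 18-60
    return isinstance(value, int) and not isinstance(value, bool) and 18 <= value <= 60

# One representative value per partition
cases = {
    "valid":          (30, True),    # inside the allowed range
    "invalid_low":    (10, False),   # below the range
    "invalid_high":   (70, False),   # above the range
    "boundary_lower": (18, True),    # exact lower edge
    "boundary_upper": (60, True),    # exact upper edge
    "error_type":     ("18", False), # wrong data type
    "special_null":   (None, False), # null/empty input
}

for name, (value, expected) in cases.items():
    assert is_valid_age(value) == expected, name
print("All partition representatives behave as expected")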
How system complexity increases software maintenance costs:
1. Increased Effort in Understanding the System: Complex systems often have intricate codebases, numerous
interdependencies, and convoluted logic. Maintenance teams require more time to understand these details
before making changes or fixes, leading to higher labor costs and extended maintenance timelines.
2. Higher Risk of Errors: With increased complexity, the likelihood of introducing bugs during updates or
modifications rises. Complex systems have multiple interdependent components, where a change in one part
can inadvertently impact others, necessitating more extensive testing and debugging. This added caution and
corrective work contribute to higher maintenance costs.
3. Difficulty in Isolating Issues: When systems are complex, identifying the root cause of a problem becomes
challenging, requiring more time and specialized skills. For instance, tracking down a bug in a codebase with
multiple interconnected modules can be time-consuming and costly.
4. More Frequent Documentation and Training Needs: Complex systems require more extensive
documentation to explain architecture, workflows, and dependencies. Additionally, maintenance teams often
need specialized training to understand and work with complex architectures, adding to maintenance
expenses.
5. Increased Testing Requirements: In complex systems, testing becomes more comprehensive and expensive
because more components and dependencies need to be verified for each change. Regression testing,
especially, grows in scope to ensure that existing functionality isn't broken by new changes.
Commonly reused software artifacts (a short example follows the list):
1. Code Libraries: Collections of pre-written code functions or modules that perform common tasks, such as
mathematical computations, data manipulation, or string handling, which developers can integrate across
different projects.
2. Frameworks: These provide a structured foundation for developing applications, offering pre-defined design
patterns, libraries, and tools for specific types of projects (e.g., web, mobile, or desktop applications). Popular
frameworks include Django for Python, Angular for JavaScript, and Spring for Java.
3. Modules: Self-contained units of code that perform specific functions, such as authentication, logging, or
data processing, which can be integrated into multiple applications.
4. Classes and Objects: In object-oriented programming, classes and objects can be reused across applications
to implement reusable entities, like "User," "Product," or "Invoice," with common properties and methods.
5. APIs (Application Programming Interfaces): Reusable interfaces that allow different software components or
services to communicate, enabling features such as payment processing, social media integration, and data
retrieval from external services.
6. Web Services and Microservices: Independent services that offer specific functionalities over a network. For
example, microservices for user management or notifications can be reused across multiple applications.
7. User Interface Components: Reusable UI elements, such as buttons, forms, navigation bars, and templates,
help create consistent and efficient user interfaces. These are often part of UI libraries like React, Angular
Material, or Bootstrap.
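As a small illustration of reuse at the module/class level, the sketch below shows a generic helper (names hypothetical) written once and then used by two unrelated features:

import datetime

class EventLogger:
    # Reusable class: it has no knowledge of the applications that use it
    def __init__(self, source):
        self.source = source

    def log(self, message):
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        print(f"[{stamp}] {self.source}: {message}")

# The same class reused by two unrelated features
billing_log = EventLogger("billing")
auth_log = EventLogger("auth")
billing_log.log("invoice generated")
auth_log.log("user signed in")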
Part-2
2)a) Discuss the phases of Software development life cycle. Explain in detail.
The SDLC Model involves six phases or stages while developing any software.
Stage-1: Planning and Requirement Analysis
Planning is a crucial step in software development, just as in everything else. In this stage, requirement analysis is also performed by the developers of the organization, based on customer inputs and sales department/market surveys.
The information from this analysis forms the building blocks of a basic project. The quality of the project is a result of
planning. Thus, in this stage, the basic project is designed with all the available information.
Stage-2: Defining Requirements
In this stage, all the requirements for the target software are specified. These requirements get approval from customers, market analysts, and stakeholders.
This is done by means of an SRS (Software Requirement Specification) document, which specifies everything that needs to be defined and created during the entire project cycle.
Stage-3: Designing Architecture
The SRS is a reference for software designers to come up with the best architecture for the software. Hence, using the requirements defined in the SRS, multiple designs for the product architecture are documented in a Design Document Specification (DDS).
This DDS is assessed by market analysts and stakeholders. After evaluating all the possible factors, the most practical
and logical design is chosen for development.
Stage-4: Developing Product
At this stage, the fundamental development of the product starts. For this, developers use a specific programming
code as per the design in the DDS. Hence, it is important for the coders to follow the protocols set by the association.
Conventional programming tools like compilers, interpreters, debuggers, etc. are also put into use at this stage. Some
popular languages like C/C++, Python, Java, etc. are put into use as per the software regulations.
Stage-5: Product Testing and Integration
After the development of the product, testing of the software is necessary to ensure its smooth execution, although minimal testing is already conducted at every stage of the SDLC. At this stage, all the probable flaws are tracked, fixed, and retested. This ensures that the product meets the quality requirements of the SRS.
Documentation, Training, and Support: Software documentation is an essential part of the software development life cycle. A well-written document acts as a tool and an information repository necessary to understand software processes, functions, and maintenance. Documentation also provides information about how to use the product. Training is an attempt to improve current or future employee performance by increasing an employee’s ability to work through learning, usually by changing attitudes and developing skills and understanding.
Stage-6: Deployment and Maintenance
After detailed testing, the conclusive product is released in phases as per the organization’s strategy, and then tested in a real industrial environment to ensure its smooth performance. If it performs well, the organization sends out the product as a whole. After retrieving beneficial feedback, the company releases it as it is or with auxiliary improvements to make it further helpful for the customers. However, deployment alone is not enough; along with deployment, the product must be supervised and maintained.
b) Differentiate between Waterfall Model and Prototyping Model.
• In the Waterfall Model, requirements must be frozen before development begins; in the Prototyping Model, requirements are refined through user feedback on successive prototypes.
• The Waterfall Model is strictly linear with no feedback paths, whereas the Prototyping Model is iterative by design.
• In the Waterfall Model, a working product appears only at the end of the cycle; the Prototyping Model gives users something to interact with early.
• The Waterfall Model suits projects with well-understood, stable requirements; the Prototyping Model suits projects where requirements are unclear or likely to change.
c) Explain the structure of a Software Requirements Specification (SRS) document.
1. Introduction
• Purpose: Describes the purpose of the SRS document, clarifying the intended audience and why it’s being
created.
• Scope: Defines the boundaries of the software, outlining what it will and won’t include.
• Definitions, Acronyms, and Abbreviations: Lists any specific terms, acronyms, or abbreviations used within
the document to aid understanding.
• References: Lists documents, standards, and other references that are relevant to the SRS.
• Overview: Provides a high-level description of the system, including its goals, major functions, and benefits.
2. Overall Description
• Product Perspective: Describes the product's context, such as its relationship to other systems, interfaces, or
dependencies.
• Product Functions: Summarizes the major functions of the software from a high-level viewpoint, typically in
the form of a list or diagram.
• User Characteristics: Defines the types of users who will interact with the software, including their
experience level and expected technical proficiency.
• Constraints: Lists restrictions that impact the system, such as regulatory requirements, hardware limitations,
or software dependencies.
• Assumptions and Dependencies: Outlines assumptions about the project (e.g., availability of certain
technologies) and dependencies on other systems or software.
3. Functional Requirements
• This section details the core functional requirements, specifying the features and capabilities that the software must provide. Each feature is often broken down into a description, its inputs, processing, and outputs.
• Functional requirements are typically organized by priority or sequence and may include use cases, user stories, or flow diagrams for clarity.
4. External Interface Requirements
• User Interfaces: Specifies the requirements for the user interface, including screen layouts, navigation, and UI components.
• Hardware Interfaces: Describes the hardware components the software interacts with, such as servers,
printers, or external devices.
• Software Interfaces: Defines interactions with other software systems, including databases, APIs, and third-
party services.
5. Non-Functional Requirements
• Performance Requirements: Specifies performance metrics such as response time, load handling, and
transaction processing rates.
• Security Requirements: Defines security standards, access controls, data protection needs, and privacy
requirements.
• Reliability: Outlines expected system reliability, uptime, and fault tolerance requirements.
• Usability: Sets expectations for user experience, ease of use, accessibility, and adaptability for different user
groups.
• Maintainability: Describes requirements for ease of maintenance, including modularity, readability, and
documentation standards.
• Portability: Specifies compatibility with different environments, such as operating systems, browsers, or
devices.
6. Other Requirements
• This section includes any additional requirements that don’t fall under functional or non-functional
categories. Examples might include legal, compliance, or regulatory requirements specific to the industry.
7. Appendices
• Contains supporting information such as glossaries, analysis models, or other supplementary material referenced in the document.
d) What do you understand with the term “requirement elicitation”? Discuss any two Techniques.
Requirement elicitation
1. The process of investigating and learning about a system’s requirements from users, clients, and other stakeholders is known as requirements elicitation. It is perhaps the most difficult, most error-prone, and most communication-intensive part of software development.
2. Requirements elicitation involves the identification, collection, analysis, and refinement of the requirements
for a software system.
3. Requirement Elicitation is a critical part of the software development life cycle and is typically performed at
the beginning of the project.
4. Requirements elicitation involves stakeholders from different areas of the organization, including business
owners, end-users, and technical experts.
5. The output of the requirements elicitation process is a set of clear, concise, and well-defined requirements
that serve as the basis for the design and development of the software system.
6. Requirements elicitation is difficult because just questioning users and customers about system needs may
not collect all relevant requirements, particularly for safety and dependability.
7. Interviews, surveys, user observation, workshops, brainstorming, use cases, role-playing, and prototyping are
all methods for eliciting requirements.
1. Interviews
The objective of conducting an interview is to understand the customer’s expectations of the software.
Since it is impossible to interview every stakeholder, representatives from groups are selected based on their expertise and credibility. Interviews may be open-ended or structured.
1. In open-ended interviews, there is no pre-set agenda. Context-free questions may be asked to understand
the problem.
2. In a structured interview, an agenda of fairly open questions is prepared. Sometimes a proper questionnaire
is designed for the interview.
2. Brainstorming Sessions
• It is intended to generate lots of new ideas hence providing a platform to share views
• Finally, a document is prepared which consists of the list of requirements and their priority if possible.
3. Facilitated Application Specification Technique (FAST)
Its objective is to bridge the expectation gap – the difference between what the developers think they are supposed to build and what customers think they are going to get. A team-oriented approach is developed for requirements gathering. Each attendee is asked to make a list of objects that are part of the environment surrounding the system, produced by the system, or used by the system.
Each participant prepares his/her list, different lists are then combined, redundant entries are eliminated, the team is
divided into smaller sub-teams to develop mini-specifications and finally, a draft of specifications is written down
using all the inputs from the meeting.
e) Explain the Boehm’s spiral life cycle development model with suitable diagrams and benefits,
shortcomings.
The Spiral Model is one of the most important Software Development Life Cycle models. It is a combination of the waterfall model and the iterative model, and it provides support for risk handling. The Spiral Model was first proposed by Barry Boehm.
The Spiral Model is a Software Development Life Cycle (SDLC) model that provides a systematic and iterative approach to software development. In its diagrammatic representation, it looks like a spiral with many loops. The exact number of loops of the spiral is unknown and can vary from project to project. Each loop of the spiral is called a phase of the software development process.
1. The exact number of phases needed to develop the product can be varied by the project manager depending
upon the project risks.
2. As the project manager dynamically determines the number of phases, the project manager has an
important role in developing a product using the spiral model.
3. It is based on the idea of a spiral, with each iteration of the spiral representing a complete software
development cycle, from requirements gathering and analysis to design, implementation, testing, and
maintenance.
Advantages of the Spiral Model
1. Risk Handling: For projects with many unknown risks that surface as development proceeds, the Spiral Model is the best development model to follow, because risk analysis and risk handling are done at every phase.
2. Good for large projects: It is recommended to use the Spiral Model in large and complex projects.
3. Flexibility in Requirements: Change requests in the Requirements at a later phase can be incorporated
accurately by using this model.
4. Customer Satisfaction: Customers can see the development of the product at an early phase of the software development, and thus they become familiar with the system by using it before completion of the total product.
5. Iterative and Incremental Approach: The Spiral Model provides an iterative and incremental approach to
software development, allowing for flexibility and adaptability in response to changing requirements or
unexpected events.
6. Emphasis on Risk Management: The Spiral Model places a strong emphasis on risk management, which
helps to minimize the impact of uncertainty and risk on the software development process.
7. Improved Communication: The Spiral Model provides for regular evaluations and reviews, which can
improve communication between the customer and the development team.
8. Improved Quality: The Spiral Model allows for multiple iterations of the software development process,
which can result in improved software quality and reliability.
Disadvantages of the Spiral Model
1. Complex: The Spiral Model is much more complex than other SDLC models.
2. Expensive: The Spiral Model is not suitable for small projects, as it is expensive.
3. Too much dependence on Risk Analysis: The successful completion of the project depends heavily on risk analysis. Without highly experienced experts, developing a project using this model is likely to fail.
4. Difficulty in time management: As the number of phases is unknown at the start of the project, time
estimation is very difficult.
5. Complexity: The Spiral Model can be complex, as it involves multiple iterations of the software development
process.
6. Time-Consuming: The Spiral Model can be time-consuming, as it requires multiple evaluations and reviews.
7. Resource Intensive: The Spiral Model can be resource-intensive, as it requires a significant investment in
planning, risk analysis, and evaluations.
g) What do you mean by Coding standard? Also, write down the types of code reviews with relevant
examples.
Good software development organizations want their programmers to adhere to a well-defined and standard style of coding, called coding standards. They usually create their own coding standards and guidelines depending on what suits their organization best and on the types of software they develop. It is very important for programmers to follow the coding standards; otherwise the code may be rejected during code review.
1. Limited use of globals: These rules specify which types of data can be declared global and which cannot.
2. Standard headers for different modules: For better understanding and maintenance of the code, the headers of different modules should follow a standard format, typically including the module’s name, creation date, author, modification history, synopsis, supported functions with their input/output parameters, and the global variables accessed or modified by the module.
3. Naming conventions for variables, constants, and functions:
• Meaningful and understandable variable names help anyone to understand the reason for using them.
• Local variables should be named using camel case lettering starting with small letter (e.g. localData)
whereas Global variables names should start with a capital letter (e.g. GlobalData). Constant names
should be formed using capital letters only (e.g. CONSDATA).
• It is better to avoid the use of digits in variable names.
• Function names should be written in camel case starting with a lowercase letter.
• The name of a function must describe the reason for using the function clearly and briefly.
4. Indentation: Proper indentation is very important to increase the readability of the code. To make the code readable, programmers should use white space properly. Some of the spacing conventions are given below:
• There must be a space after giving a comma between two function arguments.
• Each nested block should be properly indented and spaced.
• Proper Indentation should be there at the beginning and at the end of each block in the program.
• All braces should start from a new line, and the code following the closing brace should also start from a new line.
5. Error return values and exception handling conventions: All functions that encounter an error condition should return either 0 or 1, to simplify debugging.
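The short Python fragment below illustrates several of these conventions: a standard module header, a capital-letters-only constant, camel-case names, proper indentation, and 0/1 error return values. Note that the camel-case rules follow the text above; Python’s own PEP 8 style guide instead recommends snake_case names.

# Module: temperature utilities
# Author: A. Developer    Date: 2024-01-01    Modification history: initial version
# Synopsis: converts temperatures, illustrating the header format described above

MAXTEMP = 150  # constant name in capital letters only

def convertToFahrenheit(celsiusValue, result):
    # Camel-case function and local variable names, starting with a lowercase letter
    if celsiusValue > MAXTEMP:
        return 1  # error return value: 1 signals an invalid input
    result.append(celsiusValue * 9 / 5 + 32)
    return 0  # 0 signals success

result = []
if convertToFahrenheit(37, result) == 0:
    print(result[0])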
Examples:
1. Peer Review
• Example:
o Developer A writes a function to handle user authentication and submits it for review to Developer
B.
o Developer B checks the code for issues such as potential security flaws, incorrect usage of
authentication libraries, or readability problems.
o Developer B suggests refactoring the function to improve readability by breaking it into smaller
helper methods.
2. Formal Review
• Example:
o The team is working on a new payment gateway integration.
o A formal review meeting is scheduled, where Developer A presents the code to the team (including
the project manager, lead developer, and QA engineer).
o The team goes through a checklist to ensure the code is secure, adheres to company standards, and
integrates correctly with the existing system.
o They identify potential issues like improper error handling and propose changes to improve the
code's robustness.
3. Walkthrough
• Example:
o Developer A is implementing a new search feature for a web application.
o Developer A invites Developer B and Developer C to a walkthrough session, where they go over the
code together.
o During the walkthrough, Developer A explains the search algorithm, and Developer B suggests
adding more specific error messages to enhance user feedback, while Developer C points out that
caching could improve performance.
4. Tool-Based Review
• Example:
o Developer A is working on a feature to display real-time notifications in a web application.
o Before submitting the code for manual review, Developer A runs the code through SonarQube to
check for any potential issues like security vulnerabilities, code smells, and adherence to coding
standards.
o The tool flags some unused variables and provides suggestions for refactoring redundant code, which
Developer A addresses before finalizing the code.
5. Pair Programming
• Example:
o Developer A and Developer B are working on a new module to process user inputs.
o Developer A writes the code for input validation while Developer B reviews the logic, suggesting
improvements in validation rules and error handling in real-time.
o Developer B notices a potential edge case for invalid inputs that Developer A hadn’t considered, and
they correct it together.
6. Over-the-Shoulder Review
• Example:
o Developer A is working on a bug fix for a feature in the application, but they’re unsure about the
correct way to implement a particular change.
o Developer A asks Developer B for an over-the-shoulder review.
o Developer B looks at the code, offers a quick suggestion on fixing the bug more efficiently, and
recommends adding a unit test to verify the fix.
7. Group Review
• Example:
o The development team is implementing a new feature that will allow users to upload files.
o A group review is held with the project manager, a security expert, a backend developer, and a QA
engineer to go over the feature’s code.
o During the review, the security expert points out that file uploads need better sanitization to prevent
malicious files from being uploaded, and the QA engineer suggests adding specific edge cases for
large file uploads in the test plan.
8. Ad-Hoc Review
• Example:
o Developer A writes a simple function to format a date in a specific format but is unsure if it follows
best practices.
o Developer A quickly asks Developer B for an ad-hoc review.
o Developer B reviews the function, gives feedback on improving its efficiency by using a built-in date
formatting method, and suggests adding a validation check for invalid date inputs.
h) What is meant by Software Quality? List the inherent attributes of Software Quality.
Software quality shows how good and reliable a product is. To give an example, consider functionally correct software: it performs all the functions laid out in the SRS document, but has an almost unusable user interface. Even though it is functionally correct, we would not consider it a high-quality product.
Another example is a product that has everything the users need, but has almost incomprehensible and unmaintainable code. Therefore, the traditional concept of quality as “fitness of purpose” is not satisfactory for software products.
The modern view of quality associates software with several quality factors, such as the following:
1. Portability: Software is said to be portable if it can easily be made to work in different operating system environments, on different machines, with other software products, etc.
2. Usability: Software has good usability if different categories of users (i.e., expert and novice users) can easily invoke the functions of the product.
3. Reusability: Software has good reusability if different modules of the product can easily be reused to develop new products.
4. Correctness: Software is correct if the different requirements specified in the SRS document have been correctly implemented.
5. Maintainability: Software is maintainable if errors can easily be corrected as and when they show up, new functions can easily be added to the product, and the functionalities of the product can easily be modified, etc.
6. Reliability: Software is more reliable if it has fewer failures. Since software engineers do not deliberately plan
for their software to fail, reliability depends on the number and type of mistakes they make. Designers can
improve reliability by ensuring the software is easy to implement and change, by testing it thoroughly, and
also by ensuring that if failures occur, the system can handle them or can recover easily.
7. Efficiency: The more efficient software is, the less it uses of CPU-time, memory, disk space, network
bandwidth, and other resources. This is important to customers in order to reduce their costs of running the
software, although with today’s powerful computers, CPU time, memory and disk usage are less of a concern
than in years gone by.
i) Define the term “risk management”. State the approach to identify the best risk reduction method when many risk reduction approaches exist.
Risk Management:
Risk management is the process of identifying, assessing, and controlling risks that could potentially affect the
achievement of project goals or organizational objectives. It involves developing strategies to minimize the impact of
these risks or to avoid them altogether. Risk management is essential for ensuring that potential problems do not
derail a project or system, and it helps in making informed decisions to safeguard resources, timelines, and
outcomes.
1. Risk Identification: The first step is to identify potential risks that could threaten the success of a project,
such as technical challenges, financial issues, legal problems, or resource constraints.
2. Risk Assessment: Once risks are identified, they must be analyzed to determine their likelihood of occurring
and their potential impact on the project. This helps prioritize which risks need immediate attention.
3. Risk Mitigation or Reduction: After assessing the risks, strategies are developed to either reduce their
likelihood, minimize their impact, or avoid them altogether. This may involve planning, control measures, or
backup plans.
4. Monitoring and Review: Risk management is an ongoing process, and the effectiveness of the strategies
should be continuously monitored, with adjustments made as necessary to address new or emerging risks.
When multiple risk reduction approaches exist, choosing the best one involves considering several factors. Here’s an
approach to help determine the most effective method for risk reduction:
1. Risk Prioritization
• Assess Risk Severity: The first step is to assess and prioritize risks based on their probability (likelihood of
occurring) and impact (potential effect on the project).
o High Probability, High Impact Risks should be prioritized for immediate action.
• Risk Matrix: Create a risk matrix or heat map to visualize the severity of each risk. This can help prioritize
risks based on their combined impact and likelihood.
2. Cost-Benefit Analysis
• Analyze Cost vs. Benefit: For each risk reduction method, perform a cost-benefit analysis. This involves
evaluating the cost of implementing the method against the potential benefits (reduced impact or likelihood
of risk).
o Choose the method that offers the best risk reduction at the lowest cost.
• Example: A preventive measure may cost more upfront but may result in significant savings by avoiding a
high-impact risk. A less costly but less effective mitigation may be chosen for lower-priority risks.
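A minimal Python sketch of these two steps, with entirely hypothetical figures: risks are ranked by exposure (probability × impact), and each mitigation’s cost is weighed against the exposure it is expected to remove.

# Each risk: (name, probability of occurring, impact in cost units,
#             mitigation cost, fraction of exposure the mitigation removes)
risks = [
    ("key developer leaves",   0.3, 100_000, 10_000, 0.8),
    ("third-party API change", 0.6,  20_000,  2_000, 0.9),
    ("data-center outage",     0.1, 500_000, 40_000, 0.7),
]

# Rank by exposure (probability x impact), highest first
for name, prob, impact, cost, reduction in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    exposure = prob * impact               # expected loss if we do nothing
    benefit = exposure * reduction - cost  # net value of the mitigation
    verdict = "mitigate" if benefit > 0 else "reconsider"
    print(f"{name}: exposure={exposure:,.0f} net benefit={benefit:,.0f} -> {verdict}")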
3. Risk Response Strategies
• Avoidance: Eliminate the risk altogether by changing the project plan, scope, or objectives.
• Mitigation: Reduce the likelihood or impact of the risk through preventive actions, like improving security
practices, testing, or adding redundant systems.
• Transference: Transfer the risk to a third party (e.g., insurance or outsourcing) to manage the consequences.
• Acceptance: Accept the risk if the cost of mitigating it is too high relative to its impact. This is often used for
low-priority or low-impact risks.
Choose the method that aligns with the project’s objectives, budget, timeline, and risk tolerance.
4. Expert Judgment
• Seek advice from experienced team members, stakeholders, or external experts who have dealt with similar
risks in past projects. They may provide insights into which methods have been effective in the past and offer
advice tailored to your project’s context.
5. Feasibility and Practicality
• Evaluate the feasibility of each approach considering the resources available (time, budget, and team
capacity). Some methods may be theoretically effective but difficult to implement due to constraints.
• Consider the practicality and ease of implementing each method. A method that requires significant changes
to the project infrastructure or technology may not be viable if it introduces too much complexity.
6. Stakeholder Preferences
• Engage stakeholders to understand their risk tolerance and preferences. For example, clients or investors
may have strong opinions about the level of risk they are willing to accept.
• This ensures that the risk management strategy matches stakeholder expectations and aligns with overall project goals.
7. Iterative Approach
• In many cases, a combination of risk reduction methods might be the best approach. For example, high-
priority risks may be avoided or mitigated with preventive measures, while lower-priority risks might be
accepted or transferred.
• As risks and circumstances change throughout the project, risk management should be iterative, revisiting
the methods chosen as new risks emerge or old ones are resolved.
j) What do you mean by software maintenance? Why does the software need maintenance?
Software maintenance is a continuous process that occurs throughout the entire life cycle of the software system.
• The goal of software maintenance is to keep the software system working correctly, efficiently, and securely,
and to ensure that it continues to meet the needs of the users.
• This can include fixing bugs, adding new features, improving performance, or updating the software to work
with new hardware or software systems.
• It is also important to consider the cost and effort required for software maintenance when planning and
developing a software system.
• It is important to have a well-defined maintenance process in place, which includes testing and validation,
version control, and communication with stakeholders.
• It’s important to note that software maintenance can be costly and complex, especially for large and complex
systems. Therefore, the cost and effort of maintenance should be taken into account during the planning and
development phases of a software project.
• It’s also important to have a clear and well-defined maintenance plan that includes regular maintenance
activities, such as testing, backup, and bug fixing.
Software maintenance may be needed in order to:
• Correct faults.
• Improve the design.
• Implement enhancements.
• Interface with other systems.
• Accommodate programs so that different hardware, software, system features, and telecommunications
facilities can be used.
• Migrate legacy software.
• Retire software.
• Requirement of user changes.
• Improve the performance of the code.
Software maintenance is necessary for several reasons:
1. Bug Fixing:
• No software is perfect, and bugs or errors inevitably appear after release. Corrective maintenance addresses
these bugs and ensures the software continues to function as expected.
• Example: A critical bug that crashes the application needs to be fixed promptly to restore functionality.
2. Adapting to Changes:
• The environment in which the software operates can change over time (e.g., operating systems, hardware,
third-party libraries). Adaptive maintenance ensures that the software continues to work despite these
changes.
• Example: A software application might stop working after an operating system update, and maintenance
would be required to ensure compatibility.
3. Improving Performance:
• As software is used over time, users may report performance issues, or developers may discover ways to
optimize the system. Perfective maintenance improves the software’s performance and ensures it meets
users' expectations.
4. Enhancing Features:
• Business requirements evolve, and users may request new features or functionality. Perfective maintenance
is required to add new features, modify existing ones, or update the software to meet changing needs.
• Example: Adding support for a new payment gateway in an e-commerce application based on customer
demand.
5. Security Updates:
• As cybersecurity threats evolve, software must be regularly updated to address security vulnerabilities.
Preventive maintenance ensures that security patches and updates are applied to protect the software and
its users.
• Example: A vulnerability in the software is discovered, and a patch is released to protect against potential
exploits.
6. Regulatory Compliance:
• Changes in laws or regulations can require modifications to the software to ensure compliance. This may
involve adapting the software to new standards or practices.
• Example: A new data protection law (such as GDPR) requires changes in how the software handles personal
data, so it’s updated accordingly.
7. User Feedback:
• Software users often provide valuable feedback on issues or features that need improvement. Regular
updates through perfective maintenance allow the software to evolve based on this feedback.
• Example: Users may request more intuitive navigation or additional language support, leading to adjustments
in the software’s UI/UX design.
8. Technology Upgrades:
• As technology advances, the software might need to be updated to integrate with newer technologies,
platforms, or frameworks.
• Example: Updating a mobile application to support a new version of the mobile operating system (e.g.,
Android or iOS).
k) Describe Alpha and Beta testing along with their advantages and disadvantages.
Alpha Testing
Alpha Testing is a type of software testing performed to identify bugs before releasing the product to real users or to
the public. Alpha Testing is one of the user acceptance tests. It is the first stage of software testing, during which the
internal development team tests the program before making it available to clients or people outside the company.
Advantages of Alpha Testing:
1. Early Bug Detection: It helps find and fix major issues early in the software development cycle before the
software is released to the public.
2. Controlled Environment: Since it is done within the development environment, developers can easily track
and address issues in real-time.
3. Thorough Testing: Alpha testing is typically comprehensive, covering most aspects of the software, including
functionality, usability, and performance.
4. Feedback from Internal Teams: Developers and testers can provide immediate feedback and make
improvements.
Disadvantages of Alpha Testing:
1. Limited Real-World Exposure: Since it is conducted by internal testers who may not have the same
perspective as end users, it may not uncover all usability issues.
2. Incomplete Test Coverage: As it’s done in-house, certain edge cases or usage patterns that might occur in a
real-world environment might be missed.
3. Potential Bias: Internal testers may be familiar with the code and the product, which could lead to
unconscious bias or overlooking of certain defects.
Beta Testing
Beta Testing is performed by real users of the software application in a real environment. Beta testing is one type of
User Acceptance Testing. A pre-release version of the product is made available for testing to a chosen set of external
users or customers during the second phase of software testing.
Advantages of Beta Testing:
1. Real-World Testing: The software is tested in real-world conditions, with testers from various backgrounds,
helping to identify issues that would not have been caught in a controlled environment.
2. User Feedback: Beta testers can provide valuable insights into usability, features, and overall user
experience, allowing developers to make user-centered improvements.
3. Unbiased Results: External testers are less likely to be influenced by the development process, so the
feedback tends to be more objective.
4. Exposure to a Larger Audience: Beta testing helps ensure that the software works under diverse conditions
(hardware, network, operating system) and on various devices (for mobile applications).
5. Market Readiness: It helps gauge user interest and readiness for the product in the market, offering a trial
run before the full release.
Disadvantages of Beta Testing:
1. Uncontrolled Environment: Since beta testing is performed by external users, developers may have less
control over the testing environment and conditions, which can lead to inconsistent feedback.
2. Security Risks: Sharing the software with external testers could expose vulnerabilities or lead to security
concerns.
3. Inconsistent Feedback: Beta testers may provide varied feedback, some of which may be unconstructive or
difficult to act upon, leading to ambiguity.
4. Incomplete Testing: Some testers may not explore all features or use the software thoroughly, meaning some
issues may remain undetected.
Code inspection, reviews, and walk-throughs are all techniques used to improve software quality by detecting defects
early in the development process. While they share similarities in their goal of identifying issues and improving code
quality, they differ in terms of structure, participants, goals, and focus. Below is a comparative study of these three
techniques.
1. Code Inspection
Definition:
Code inspection is a formal, structured process where a group of developers, often including a moderator, scrutinizes
the code for errors, standards violations, and potential improvements. It is a highly formal and systematic process.
Key Features:
• Formal Process: A highly structured approach with defined roles and a systematic process.
• Focus: Detect defects such as coding errors, adherence to coding standards, and design flaws.
• Participants: Typically includes a moderator, author, reviewers, and sometimes a scribe.
• Documentation: Detailed records are kept of the inspection process and outcomes.
• Timing: Done after code is written but before it is integrated into the system.
Advantages:
• High Detection Rate: Code inspection is effective at detecting defects early in the development process.
• In-Depth Analysis: Provides a detailed review of code and design, which can catch deep and subtle defects.
• Improves Code Quality: Encourages adherence to coding standards, improving overall code quality.
Disadvantages:
• Time-Consuming: It requires a significant amount of time and effort from the team, especially if the code is
complex.
• Costly: More resources are required due to the formal nature of the process.
• Requires Skilled Inspectors: The quality of the inspection depends heavily on the experience and expertise of
the participants.
2. Code Review
Definition:
Code review is a process in which one or more developers review code written by another developer. This is typically
less formal than an inspection and may involve checking for logic, functionality, readability, or conformance to
standards.
Key Features:
• Informal or Semi-Formal: Can range from a casual, ad-hoc review to a more structured review, but is typically
less formal than an inspection.
• Focus: Primarily on functionality, logic, adherence to coding standards, and potential bugs.
• Participants: Usually involves the author of the code and one or more reviewers.
• Documentation: Reviews may or may not be formally documented. Some teams maintain detailed records,
while others do not.
• Timing: Can be conducted at various stages of the development cycle (e.g., after code writing or as part of a
continuous integration process).
Advantages:
• Quick and Efficient: Code reviews are usually faster than inspections due to their less formal structure.
• Knowledge Sharing: Allows developers to share knowledge and learn from each other, which can improve
team skills.
Disadvantages:
• Limited Depth: May not catch all defects, especially in complex or poorly documented code.
• Subjectivity: The quality of feedback depends on the experience and perspective of the reviewers.
• Potential Bias: The author may defend their code or be resistant to feedback, especially in informal settings.
3. Walk-Through
Definition:
A walk-through is an informal process in which the code author presents their code to peers or stakeholders,
explaining its design, logic, and implementation. The goal is to gather feedback and identify potential issues, but it is
not as formal or exhaustive as an inspection.
Key Features:
• Informal: Walk-throughs are typically less structured than code inspections and reviews.
• Focus: Primarily focused on understanding the code and design rather than exhaustively checking for defects.
• Participants: Usually involves the author and a group of peers, including developers, designers, or
stakeholders.
• Documentation: Walk-throughs may or may not have formal documentation, but feedback can be captured
informally.
• Timing: Usually conducted early in the development process (e.g., during the design phase or before the
implementation phase).
Advantages:
• Collaborative and Educational: Encourages knowledge transfer and communication between team members.
• Low Cost: Less formal and less time-consuming than inspections and reviews, making them more cost-
effective.
• Promotes Shared Understanding: Helps ensure that everyone on the team understands the design and code
approach.
Disadvantages:
• Limited Bug Detection: Walk-throughs are not designed to be exhaustive in detecting bugs or defects.
• Lack of Structure: The informal nature of the walk-through means it can sometimes lack thoroughness or
focus.
• Potential for Miscommunication: Since it's a group discussion, there is the potential for miscommunication
or misunderstanding between the participants.
Part-3
1) Why is the Spiral Model preferred over the Waterfall Model for large projects? Compare the two models.
1. Adaptability to Change:
o Waterfall Model is linear and rigid, making it difficult to accommodate changes once the project is
underway. Once you move past a phase (e.g., requirements gathering), it is hard to go back and make
adjustments without significant rework.
o Spiral Model, on the other hand, is iterative. It allows developers to revisit earlier stages of the
project at any point, making it easier to accommodate changes in requirements, technology, or
scope.
2. Risk Management:
o The Spiral Model emphasizes risk analysis and management at every phase, which is crucial in large
projects where uncertainties are high. The model allows for identifying potential risks early and
iterating through solutions.
o In the Waterfall Model, risks are often addressed too late in the process, which can lead to problems
not being detected until later phases, potentially causing delays and cost overruns.
3. Incremental Development:
o Spiral Model promotes incremental releases and gradual development, which helps deliver working
versions of the product at various stages. This is essential for large projects that need continuous
feedback and can benefit from early versions being deployed to test out the design and
functionalities.
o In the Waterfall Model, the product is built in one go, and no working software is available until the
end of the development cycle, which could delay feedback and product validation.
4. Customer Involvement:
o The Spiral Model encourages frequent customer feedback after each iteration. This allows customers
to validate features, suggest changes, and make adjustments to the product.
o In contrast, the Waterfall Model typically involves limited customer interaction, primarily during the
requirements and delivery phases, which means the product might not meet the customer’s
expectations by the time it’s finished.
Advantages of the Spiral Model for Large Projects
1. Continuous Refinement and Flexibility:
o The Spiral Model allows for continuous refinement of the software. Since the development process
involves several iterations, developers can continuously adapt to changing requirements, new
technologies, and market demands.
o Example: In large e-commerce systems, customer needs and trends change frequently. The Spiral
Model allows for introducing new features such as payment gateway integrations or UI/UX changes
after each iteration.
2. Risk Mitigation:
o One of the primary advantages of the Spiral Model is its focus on early risk analysis. By identifying
risks and addressing them in each phase, the model helps avoid catastrophic failures later on.
o Example: In a large financial application, the Spiral Model allows risk analysis in early iterations to
detect potential security vulnerabilities or regulatory compliance issues before they become costly to
fix.
3. Improved Quality:
o The iterative approach allows for ongoing testing and validation, resulting in higher-quality software.
Issues can be detected early, and corrective actions can be taken continuously.
o Example: A large-scale enterprise resource planning (ERP) system can undergo rigorous testing at
every iteration, ensuring that each module (finance, HR, sales, etc.) is robust and functions properly
before being fully integrated.
4. Customer Satisfaction:
o Frequent involvement of customers and stakeholders in the iterative process ensures that the
product better matches their expectations. Customers can see prototypes early and suggest
modifications as development progresses.
o Example: A large healthcare management system might evolve according to feedback from doctors,
nurses, and administrative staff to ensure the final product meets their practical needs.
Disadvantages of the Spiral Model for Large Projects
1. Complexity:
o The Spiral Model can be more complex to manage than the Waterfall Model, especially for teams
that are not experienced with iterative development or risk management.
o Managing several iterations, maintaining a clear record of changes, and continually assessing risks
can be challenging without the proper processes in place.
o Example: A large financial system might require careful tracking of risks and requirements across
many iterations, making the management of the process complicated and resource-intensive.
2. Time-Consuming and Resource-Intensive:
o Since the Spiral Model involves multiple cycles of planning, risk assessment, development, and
testing, it can be time-consuming and resource-heavy, especially in the initial stages of the project.
o Example: A large-scale social media platform may require significant upfront effort to plan each
iteration, assess risks, and develop early prototypes, leading to higher initial costs compared to a
simple Waterfall approach.
3. Difficult to Estimate Total Project Cost:
o Due to its iterative nature, it can be difficult to accurately estimate the total cost and timeline of the
project at the beginning. Since the project is divided into phases that may evolve, predicting the final
cost becomes uncertain.
o Example: In the development of a large software product like a customer relationship management
(CRM) system, it might be hard to determine how many iterations will be required to refine features,
making budget estimation more challenging.
4. Requires Expertise:
o The Spiral Model requires a high level of expertise in both risk management and project
management. Not every team is equipped to handle the complexity of iterative development
combined with continuous risk analysis.
o Example: A large-scale government project that requires compliance with stringent regulations might
need experts who can properly conduct risk analysis in every phase of development to avoid delays
and legal complications.
In a Software Requirements Specification (SRS) document, the requirements are categorized into functional and
non-functional requirements, each serving a distinct purpose in defining the scope and expectations of the software
system.
1. Functional Requirements
Definition:
Functional requirements describe the specific behavior or functions of the system that need to be implemented.
They define what the system should do, including actions, operations, data processing, or interactions with users,
other systems, or hardware.
Advantages:
• Clarity of Operations: They help in detailing exactly how the system should operate and interact, reducing
ambiguity.
• Guides Design and Implementation: Functional requirements serve as a clear blueprint for developers to
design and implement the system.
• Testable: They are easy to test, as they can be mapped to specific actions or functions that can be validated.
Disadvantages:
• May Overlook User Experience: Focusing only on functionality may lead to a system that works well
technically but is hard to use.
• Limited Scope for Flexibility: If functional requirements are rigid, it can limit changes or improvements later
in the development process.
• Requires Clear Understanding: Developers need a precise understanding of the problem domain and user
needs to write effective functional requirements.
Examples for a Hospital System:
• Patient Registration: The system must allow users to register new patients by entering their personal
information such as name, age, contact details, and medical history.
• Appointment Scheduling: The system must enable patients to schedule appointments with doctors based on
available timeslots.
• Billing System: The system must calculate and process payments for treatments, including insurance
handling, discounts, and receipts.
• Prescription Management: The system must allow doctors to prescribe medications and generate
prescriptions for patients, which can be printed or sent electronically to pharmacies.
2. Non-Functional Requirements
Definition:
Non-functional requirements describe the attributes or qualities that the system must have. They focus on how well
the system performs its functions and deal with aspects such as performance, security, usability, reliability, and
scalability. Non-functional requirements do not describe specific behaviors, but instead specify the conditions under
which the system should operate.
Advantages:
• Improves User Experience: Non-functional requirements like usability and performance ensure the system is
pleasant and efficient to use.
• Ensures System Quality: They help ensure the system is secure, reliable, and meets standards like
compliance and maintainability.
• Supports Scalability: Non-functional requirements like scalability and performance help ensure the system
can handle increased load or complexity in the future.
Disadvantages:
• Harder to Define: Non-functional requirements are often less concrete than functional ones, making them
harder to quantify and specify.
• Difficult to Test: Some non-functional requirements like usability and performance are subjective or hard to
measure in concrete terms.
• May Be Overlooked: If non-functional requirements are not well-defined or prioritized, they may be
neglected in the design and development phases.
Examples for a Hospital System:
• Performance: The system must support at least 500 simultaneous users without a noticeable decrease in
performance, ensuring quick response times for patient registrations, appointments, etc.
• Security: All patient data, including personal and medical records, must be encrypted both in transit and at
rest. The system must comply with data protection regulations like HIPAA or GDPR.
• Usability: The user interface should be intuitive and user-friendly, so hospital staff with minimal training can
use the system effectively.
• Availability: The system must be available 99.9% of the time, with planned downtimes for maintenance,
ensuring high reliability during hospital operations.
• Scalability: The system should be able to scale easily to accommodate future expansions, such as adding
more clinics or integrating with new technologies.
Answers:
Functional Independence refers to the concept in software design where each module or component of a system has
a clear, distinct responsibility, and its functionality is not overly dependent on other modules. In other words, a
module should perform a single task, and changes in one module should not directly affect the operations or
behavior of others.
Functional independence is often achieved through high cohesion and low coupling:
• High cohesion means that the elements within a module are closely related to each other and work towards
a single purpose.
• Low coupling implies that modules have minimal dependencies on each other, meaning changes to one
module have little or no impact on others.
Functional independence is significant for the following reasons (a short code sketch after this list illustrates the idea):
1. Ease of Maintenance: When modules are functionally independent, developers can make changes to a
module without impacting others. This reduces the risk of introducing bugs when updating or adding new
features, thus making the software easier to maintain over time.
2. Reusability: Independent modules can often be reused in other parts of the software or even in other
projects. When a module has a clear and isolated function, it is easier to understand and integrate into
different contexts.
3. Scalability: A system with functionally independent modules can more easily handle changes as the software
grows. New features can be added without disturbing the existing functionality, allowing for smoother
scaling.
4. Testing: Functional independence facilitates unit testing. Since the behavior of a module is isolated from
other parts of the system, it can be tested independently, making it easier to identify issues.
5. Parallel Development: Independent modules enable teams to work on different parts of the system
simultaneously without interfering with each other's work. This improves development speed and efficiency.
6. Clearer System Structure: A design with functional independence results in a clear, logical structure, where
each module has a well-defined role. This leads to better readability and understanding of the system as a
whole.
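To make this concrete, below is a minimal Python sketch (all module and function names are hypothetical) of high cohesion and low coupling: each part has a single purpose, and the billing logic depends on the tax logic only through one small, swappable function.

# tax module -- high cohesion: its only responsibility is tax calculation.
def tax_for(amount: float, rate: float = 0.18) -> float:
    # Return the tax owed on the given amount (hypothetical flat rate).
    return amount * rate

# billing module -- low coupling: it depends on the tax logic only through an
# injected function, so changing the tax rules never touches billing code.
def total_due(amount: float, tax_fn=tax_for) -> float:
    # Return the amount plus tax, using whichever tax function is supplied.
    return amount + tax_fn(amount)

if __name__ == "__main__":
    print(total_due(100.0))                 # default tax rule -> 118.0
    print(total_due(100.0, lambda a: 0.0))  # tax rule swapped independently -> 100.0

Because total_due() never reaches into the internals of the tax logic, either part can be maintained, tested, or reused on its own, which is exactly the benefit described in the points above.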
In Object-Oriented Design (OOD), objects are the basic units of the design, and they encapsulate both data (attributes or
behavior (methods or functions). The design process involves organizing the system into interacting objects, each
representing a specific concept or entity from the real world.
Software Testing
Software testing is an important process in the software development lifecycle. It
involves verifying and validating that a software application is free of bugs, meets the technical
requirements set by its design and development, and satisfies user requirements efficiently and effectively.
This process ensures that the application can handle all exceptional and boundary cases, providing a robust
and reliable user experience. By systematically identifying and fixing issues, software testing helps deliver
high-quality software that performs as expected in various scenarios.
1. Manual Testing
2. Automation Testing
These are the two types of software testing currently used in the industry; both have their own advantages and
disadvantages.
1. Manual Testing
Manual testing is a technique in which the software is tested by hand, using the functions and features of an
application without automation tools. In manual software testing, a tester carries out tests on the software by
following a set of predefined test cases and then prepares a final report on the software. Manual testing is
time-consuming because it is done by humans, and there is a chance of human error. Its advantages include:
• Fast and accurate visual feedback: Manual testing gives quick visual feedback and is well suited to testing
dynamically changing GUI designs such as layout and text.
• Less expensive: It is less expensive, as it does not require high-level skills or a specific type of tool.
• No coding is required: No programming knowledge is required when using the black box testing method, so it
is easy for new testers to learn.
• Efficient for unplanned changes: Manual testing is suitable in the case of unplanned changes to the
application, as it can be adapted easily.
2. Automation Testing
Automation testing is a technique in which the tester writes test scripts and uses suitable software or automation
tools to test the software; it automates the manual testing process, allowing repetitive tasks to be executed without
the intervention of a manual tester (a minimal example script is sketched after the advantages below). Its advantages
include:
• Simplifies Test Case Execution: Automation testing can be left virtually unattended, with the results monitored
at the end of the process, which simplifies overall test execution and increases the efficiency of testing the
application.
• Improves Reliability of Tests: Automation testing ensures that there is equal focus on all areas of the testing,
helping to ensure a high-quality end product.
• Increases Amount of Test Coverage: Using automation testing, more test cases can be created and executed
for the application under test, resulting in higher test coverage and the detection of more bugs. This allows
more complex applications and more features to be tested.
• Minimizes Human Interaction: In automation testing, everything from test case creation to execution is
automated, so there is little chance of human error due to neglect. This reduces the need to fix
glitches in the post-release phase.
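As promised above, here is a minimal sketch of an automated test script. Python's built-in unittest module stands in for an automation tool, and add() is a hypothetical unit under test; a real project would point such a script at its own code.

import unittest

def add(a, b):
    # Hypothetical unit under test.
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    # Running this file executes every test with no tester intervention and
    # reports the results at the end -- the repetitive work is automated.
    unittest.main()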
White box testing techniques analyze the internal structures of the software: the data structures used, the internal
design, the code structure, and the working of the software, rather than just its functionality as in black box testing. It
is also called glass box testing, clear box testing, structural testing, transparent testing, or open box testing.
White box testing is a software testing technique that involves testing the internal structure and workings of a
software application. The tester has access to the source code and uses this knowledge to design test cases that can
verify the correctness of the software at the code level.
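As a minimal white box sketch (classify() is a hypothetical function): because the tester can read the source, test cases are chosen so that every branch in the code executes at least once.

import unittest

def classify(age):
    # Unit under test: the white box tester can see both branches.
    if age < 18:
        return "minor"
    return "adult"

class TestClassifyBranches(unittest.TestCase):
    def test_minor_branch(self):
        # Exercises the 'if' branch.
        self.assertEqual(classify(10), "minor")

    def test_adult_branch(self):
        # Exercises the fall-through branch, completing branch coverage.
        self.assertEqual(classify(30), "adult")

if __name__ == "__main__":
    unittest.main()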
Black-box testing is a type of software testing in which the tester is not concerned with the internal knowledge or
implementation details of the software but rather focuses on validating the functionality based on the provided
specifications or requirements.
Gray Box Testing is a software testing technique that is a combination of the Black Box Testing technique and
the White Box Testing technique.
1. In the black box testing technique, the tester is unaware of the internal structure of the item being tested,
while in white box testing the internal structure is known to the tester.
2. In gray box testing, the internal structure is partially known: the tester has access to some internal data
structures and algorithms and uses this knowledge to design the test cases.
Types of Black Box Testing
1. Functional Testing
2. Non-Functional Testing
1. Functional Testing
Functional Testing is a type of Software Testing in which the system is tested against the functional requirements and
specifications. Functional testing ensures that the requirements or specifications are properly satisfied by the
application. This type of testing is particularly concerned with the results of processing. It focuses on simulating
actual system usage without making assumptions about the system's internal structure. Its benefits include:
• Bug-free product: Functional testing helps deliver a high-quality product with few defects.
• Customer satisfaction: It verifies that all requirements are met, helping to ensure that the customer is satisfied.
• Testing focused on specifications: Functional testing is driven by the specifications and by how customers
actually use the system.
• Proper working of application: It ensures that the application works as expected and that all of its
functionality operates properly.
• Improves quality of the product: Functional testing checks the security and safety of the product and improves
its overall quality.
2. Non-Functional Testing
Non-functional testing is a type of software testing that is performed to verify the non-functional requirements of
the application, i.e. the aspects that functional testing does not cover. It checks whether the behavior of the system
meets these requirements and tests the readiness of the system against non-functional parameters, such as
performance, usability, and security, that are never addressed by functional testing. Non-functional testing is as
important as functional testing. Its advantages include:
• Improved performance: Non-functional testing checks the performance of the system and identifies the
bottlenecks that can degrade it.
• Less time-consuming: Non-functional testing is generally less time-consuming than other testing processes.
• Improves user experience: Non-functional testing such as usability testing checks how easy to use and user-
friendly the software is, thus improving the overall user experience of the application.
• More secure product: Non-functional testing includes security testing, which checks the security bottlenecks
of the application and how secure it is against attacks from internal and external sources.
1. Unit Testing
Unit testing is a method of testing individual units or components of a software application. It is typically done by
developers and is used to ensure that the individual units of the software are working as intended. Unit tests are
usually automated and are designed to test specific parts of the code, such as a particular function or method. Unit
testing is done at the lowest level of the software development process, where individual units of code are tested in
isolation.
2. Integration Testing
Integration testing is a method of testing how different units or components of a software application interact with
each other. It is used to identify and resolve any issues that may arise when different units of the software are
combined. Integration testing is typically done after unit testing and before functional testing and is used to verify
that the different units of the software work together as intended.
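Below is a minimal integration sketch (both classes are hypothetical, loosely echoing the hospital system discussed earlier): each unit may pass its own unit tests in isolation, while the integration test verifies that the two units exchange data correctly once combined.

import unittest

class Database:
    # Hypothetical low-level unit: stores records in memory.
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)

class PatientService:
    # Hypothetical high-level unit that depends on Database.
    def __init__(self, db):
        self.db = db

    def register(self, patient_id, name):
        self.db.save(patient_id, name)

    def lookup(self, patient_id):
        return self.db.load(patient_id)

class TestPatientServiceIntegration(unittest.TestCase):
    def test_register_then_lookup(self):
        # The two units are wired together and tested as a pair.
        service = PatientService(Database())
        service.register(1, "Alice")
        self.assertEqual(service.lookup(1), "Alice")

if __name__ == "__main__":
    unittest.main()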
3. System Testing
System testing is a type of software testing that evaluates the overall functionality and performance of a complete
and fully integrated software solution. It tests whether the system meets the specified requirements and whether it is
suitable for delivery to the end users. It is performed on the completely integrated system, taking components that
have passed integration testing as input, and is carried out after integration testing and before acceptance testing.
4. End-to-end Testing
End-to-end testing is a type of software testing used to test the entire software from start to finish, along with its
integration with external interfaces. Its main purpose is to identify system dependencies and to make sure that data
integrity is maintained and that communication with other systems, interfaces, and databases works correctly in a
complete, production-like scenario.
5. Acceptance Testing
It is formal testing, conducted according to user needs, requirements, and business processes, to determine whether
a system satisfies the acceptance criteria and to enable the users, customers, or other authorized entities to decide
whether to accept the system.
1. Incremental Testing
2. Non-Incremental Testing
1. Incremental Testing
Like development, testing is also a phase of the SDLC (Software Development Life Cycle). Different tests are performed
at different stages of the development cycle. Incremental testing is an approach commonly used in the software field
during the integration testing phase, which is performed after unit testing. Several stubs and drivers are used to test
the modules one after another, which helps in discovering errors and defects in specific modules.
Top-down testing is a type of incremental integration testing in which testing is done by integrating two or more
modules while moving down from top to bottom through the control flow of the architectural structure. High-level
modules are tested first, then low-level modules, and finally the integration is verified to ensure that the system works
properly. Stubs are used to simulate the behavior of lower-level modules that have not yet been integrated.
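A minimal top-down sketch (all names hypothetical): the high-level discharge module is tested first, with a hand-written stub standing in for the lower-level billing module that has not yet been integrated.

import unittest

class BillingStub:
    # Stub for the not-yet-integrated low-level billing module: it returns a
    # fixed, predictable value instead of running real billing logic.
    def calculate_bill(self, patient_id):
        return 100.0

class DischargeModule:
    # High-level module under test; in the finished system it would call the
    # real billing module instead of the stub.
    def __init__(self, billing):
        self.billing = billing

    def discharge(self, patient_id):
        return {"patient": patient_id,
                "amount_due": self.billing.calculate_bill(patient_id)}

class TestTopDownIntegration(unittest.TestCase):
    def test_discharge_with_stubbed_billing(self):
        summary = DischargeModule(BillingStub()).discharge(42)
        self.assertEqual(summary["amount_due"], 100.0)

if __name__ == "__main__":
    unittest.main()

Once the real billing module is ready, it replaces the stub and the same test is rerun against the integrated pair.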
Bottom-up testing is a type of incremental integration testing in which testing is done by integrating two or more
modules while moving upward from bottom to top through the control flow of the architectural structure. Low-level
modules are tested first, then high-level modules; drivers are used to simulate the behavior of higher-level modules
that have not yet been integrated. This approach is sometimes described as a form of synthesis, since the system is
built up and verified from its parts, and it is considered a straightforward approach that generally yields reliable
results.
1. Performance Testing
2. Usability Testing
3. Compatibility Testing
1. Performance Testing
Performance testing is a type of software testing that ensures software applications perform properly under their
expected workload. It is carried out to determine system performance in terms of sensitivity, reactivity, and stability
under a particular workload, and it focuses on evaluating the performance and scalability of a system or application.
The goal is to identify bottlenecks, measure system performance under various loads and conditions, and ensure that
the system can handle the expected number of users or transactions.
2. Usability Testing
Consider a physical product, say a refrigerator: when it is completely ready, potential customers test it to check that
it works before it comes onto the market. Usability testing is the software equivalent: before the software is launched
into the market, potential users put it through various testing processes to judge how easy it is to use. It is a part of
the software development lifecycle (SDLC).
3. Compatibility Testing
Compatibility testing is software testing that comes under the non-functional testing category. It is performed on an
application to check its compatibility (its ability to run) on different platforms and environments, and it is done only
once the application becomes stable. Put simply, compatibility testing aims to check how the developed software
application functions on various software and hardware platforms, networks, browsers, etc. It is important from a
product production and implementation point of view, as it is performed to avoid future compatibility issues.
1. Load Testing
Load testing determines the behavior of the application when multiple users use it at the same time. It measures the
response of the system under varying load conditions; a minimal code sketch follows the points below.
1. Load testing is carried out for both normal and extreme load conditions.
2. Load testing is a type of performance testing that simulates a real-world load on a system or application to
see how it performs under stress.
3. The goal of load testing is to identify bottlenecks and determine the maximum number of users or
transactions the system can handle.
4. It is an important aspect of software testing as it helps ensure that the system can handle the expected usage
levels and identify any potential issues before the system is deployed to production.
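As noted above, here is a minimal load-test sketch: handle_request() is a hypothetical stand-in for the system under test, and a thread pool from Python's standard library simulates many simultaneous users while per-request latency is recorded.

import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    # Hypothetical stand-in for the system under test.
    time.sleep(0.01)  # simulate processing work
    return f"ok-{user_id}"

def timed_request(user_id):
    # Measure the latency of one simulated user's request.
    start = time.perf_counter()
    handle_request(user_id)
    return time.perf_counter() - start

if __name__ == "__main__":
    users = 200  # number of simulated concurrent users
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = list(pool.map(timed_request, range(users)))
    print(f"max latency:  {max(latencies):.4f}s")
    print(f"mean latency: {sum(latencies) / len(latencies):.4f}s")

A real load test would drive the deployed system (for example, over HTTP) with a dedicated tool, but the shape is the same: generate concurrent load, then inspect the latency figures for bottlenecks.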
2. Stress Testing
In stress testing, we subject the system to unfavorable conditions and check how it performs under them.
3. Scalability Testing
Scalability testing is a type of non-functional testing in which the performance of a software application, system,
network, or process is tested in terms of its capability to scale the user request load, or other such performance
attributes, up or down. It can be carried out at the hardware, software, or database level. Scalability is the ability of a
network, system, application, product, or process to keep performing its function correctly when its size or volume is
changed to meet a growing need. Scalability testing therefore ensures that a software product can manage the
planned increase in user traffic, data volume, transaction frequency, and other such factors, and it tests the ability of
the system, processes, or database to meet that growing need.
4. Stability Testing
Stability testing is a type of software testing that checks the quality and behavior of the software under different
environmental parameters. It is defined as the ability of the product to continue functioning over time without
failure. It is a non-functional testing technique that focuses on stressing the software component to its maximum,
checking the efficiency of the developed product beyond normal operational capacity, up to the break point. It is
most significant for error handling, software reliability, robustness, and scalability under heavy load, rather than for
checking system behavior under normal circumstances.
Stability testing assesses stability problems and is mainly intended to check whether the application will crash at any
point in time.
1. Smoke Testing
Smoke testing is done to make sure that the software under test is ready and stable for further testing. It is called a
smoke test because an initial pass is done to check that the system does not "catch fire or smoke" when first switched
on.
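A minimal smoke-test sketch (both checks are hypothetical placeholders; a real suite would exercise the actual build): a handful of fast, shallow checks decide whether the build is stable enough for deeper testing.

import unittest

def app_starts():
    # Hypothetical check that the application boots without crashing.
    return True

def database_reachable():
    # Hypothetical check that a database connection can be opened.
    return True

class SmokeTests(unittest.TestCase):
    # Shallow checks only; deeper testing proceeds if these pass.
    def test_app_starts(self):
        self.assertTrue(app_starts())

    def test_database_reachable(self):
        self.assertTrue(database_reachable())

if __name__ == "__main__":
    unittest.main()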
2. Sanity Testing
It is a subset of regression testing. Sanity testing is performed to ensure that the code changes that have been made
are working properly; it acts as a checkpoint to decide whether testing of the build can proceed. During sanity
testing, the team's focus is on validating the functionality of the application, not on detailed testing. Sanity testing is
generally performed on a build where production deployment is required immediately, such as after a critical bug fix.
3. Regression Testing
Regression testing is the process of testing the modified parts of the code, and the parts that might be affected by
the modifications, to ensure that no new errors have been introduced into the software after the changes were made.
Regression means the return of something, and in the software field it refers to the return of a bug.
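A minimal regression sketch (the discount function and its old bug are hypothetical): after a defect is fixed, a test pinning the original failing input stays in the suite, so the bug's return is caught automatically by any later change.

import unittest

def apply_discount(price, percent):
    # Fixed code: a hypothetical earlier version misbehaved when percent was 0.
    if percent == 0:
        return price
    return price - price * percent / 100

class TestDiscountRegression(unittest.TestCase):
    def test_zero_percent_discount(self):
        # Regression test pinning the old bug: it must keep passing after
        # every future modification of apply_discount().
        self.assertEqual(apply_discount(100.0, 0), 100.0)

if __name__ == "__main__":
    unittest.main()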
4. Acceptance Testing
Acceptance testing is done by the customers to check whether the delivered product performs the desired tasks, as
stated in the requirements.
5. User Acceptance Testing
User Acceptance Testing (UAT) is a testing methodology in which clients/end users participate in product testing to
validate the product against their requirements. It is done at the client's site or at the developer's site. For industries
such as medicine or aerospace, contractual and regulatory compliance testing and operational acceptance tests are
also performed as part of user acceptance testing. UAT is context-dependent: UAT plans are prepared based on the
requirements, it is not mandatory to perform every kind of user acceptance test, and the tests may be coordinated
and contributed to by the testing team.
6. Exploratory Testing
Exploratory Testing is a type of software testing in which the tester is free to select any possible methodology to test
the software. It is an unscripted approach to software testing. In exploratory testing, testers use their learning,
knowledge, skills, and abilities to test the software. Exploratory testing checks the functionality and operations of the
software and identifies functional and technical faults in it. Its aim is to optimize and improve the software in every
possible way.
7. Ad-hoc Testing
Ad-hoc testing is a type of software testing that is performed informally and randomly after the formal testing is
completed to find any loophole in the system. For this reason, it is also known as Random or Monkey testing. Ad-hoc
testing is not performed in a structured way so it is not based on any methodological approach. That’s why Ad-hoc
testing is a type of Unstructured Software Testing.
8. Security Testing
Security Testing is a type of Software Testing that uncovers vulnerabilities in the system and determines that the data
and resources of the system are protected from possible intruders. It ensures that the software system and
application are free from any threats or risks that can cause a loss. Security testing of any system is focused on
finding all possible loopholes and weaknesses of the system that might result in the loss of information or damage to
the reputation of the organization.
9. Globalization Testing
Globalization Testing is a type of software testing that is performed to ensure the system or software application can
function independently of the geographical and cultural environment. It ensures that the application can be used all
over the world and accepts text in all languages. Nowadays, with the worldwide spread of technology, most software
products are designed to be globalized software products.
Regression testing is a method of testing that is used to ensure that changes made to the software do not introduce
new bugs or cause existing functionality to break. It is typically done after changes have been made to the code, such
as bug fixes or new features, and is used to verify that the software still works as intended.
Smoke testing is done to make sure that the software under test is ready and stable for further testing. It is called a
smoke test because an initial pass is done to check that the system does not "catch fire or smoke" when first switched
on.
Alpha testing is a type of validation testing. It is a type of acceptance testing that is done before the product is
released to customers. It is typically done by QA people.
The beta test is conducted at one or more customer sites by the end-user of the software. This version is released for
a limited number of users for testing in a real-time environment.