SQA IA2
Definition:
SDLC (Software Development Life Cycle) is a systematic process used by software organizations to design, develop, test, and deploy high-quality software applications.
Purpose:
It defines each phase in the development process and ensures the final product meets user requirements with
minimum cost and time.
Goal:
To deliver software that is:
• Efficient
• Reliable
• Cost-effective
The Software Development Life Cycle typically consists of 6–7 major phases:
Phase 1: Requirement Gathering and Analysis
Objective: Understand and document what the users need from the software.
Activities:
• Gather requirements from stakeholders, analyze them, and document them.
Example:
For a banking app, requirements could include — fund transfer, account login, mini-statement view, and balance check.
Phase 2: Design
Activities:
• Design database schema, data flow diagrams (DFD), and UML diagrams.
• Choose technology stack and frameworks.
Output:
• Design documents (system architecture, DFDs, UML diagrams)
Example:
Designing login module, user database schema, and how the front-end connects to the back-end.
Phase 3: Implementation (Coding)
Activities:
• Developers write code in chosen programming languages (C++, Java, Python, etc.).
Output:
• Source code
Example:
Writing the login authentication code using Java and SQL queries to validate users.
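A minimal, illustrative sketch of such code (the class, table, and column names below are assumptions for illustration, not taken from a real banking system); using a PreparedStatement keeps the SQL query safe from injection:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoginService {
    private final Connection connection;          // supplied by the application's data source

    public LoginService(Connection connection) {
        this.connection = connection;
    }

    // Returns true if a user with this account number and password hash exists.
    public boolean authenticate(String accountNo, String passwordHash) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE account_no = ? AND password_hash = ?";
        try (PreparedStatement ps = connection.prepareStatement(sql)) {
            ps.setString(1, accountNo);
            ps.setString(2, passwordHash);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();                 // a matching row means valid credentials
            }
        }
    }
}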
Phase 4: Testing
Objective: Verify that the software works as expected and meets all requirements.
Activities:
o Unit Testing
o Integration Testing
o System Testing
o Acceptance Testing
Output:
• Tested software
• Test reports
Example:
Test login, sign-up, and balance-check features with valid and invalid inputs.
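A minimal unit-test sketch for the login check (JUnit 5; the small in-test stand-in below replaces the real service and is an assumption for illustration):

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class LoginTest {

    // Tiny stand-in for the real login logic: accepts exactly one known credential pair.
    static boolean authenticate(String accountNo, String password) {
        return "1001".equals(accountNo) && "secret".equals(password);
    }

    @Test
    void validCredentialsAreAccepted() {
        assertTrue(authenticate("1001", "secret"));
    }

    @Test
    void invalidPasswordIsRejected() {
        assertFalse(authenticate("1001", "wrong-password"));
    }

    @Test
    void unknownAccountIsRejected() {
        assertFalse(authenticate("9999", "secret"));
    }
}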
Phase 5: Deployment
Activities:
• Release the software to the production environment (live server or app store).
Output:
• Deployed software available to end users
Example:
Publishing the banking app on a live server or Play Store for users.
Phase 6: Maintenance
Activities:
• Fix bugs, apply security patches, and add enhancements after release.
Output:
• Updated, stable software
Example:
Updating the app with new security patches or adding a UPI payment feature.
The process is iterative, meaning it can return to earlier phases for improvements or corrections.
Model | Description | Best Suited For
Waterfall Model | Linear & sequential; one phase after another | Small, simple, well-defined projects
Iterative Model | Develops software in repeated cycles | When requirements evolve over time
Spiral Model | Combines design & prototyping with risk analysis | Large, complex, high-risk projects
V-Model | Each development phase has a corresponding testing phase | High reliability & verification needed
Agile Model | Iterative and incremental, emphasizes collaboration | Dynamic projects with changing needs
Big Bang Model | No formal planning, quick coding & testing | Small projects or academic experiments
Advantages of SDLC
Disadvantages of SDLC
2. Prototyping Model
1. Introduction
• The Prototyping Model is a software development approach where a prototype, or an early working version of
the software, is built before the final product.
• This model is mainly used when the requirements of the user are not clearly understood at the beginning.
• The prototype lets users see and interact with a working version of the system and give feedback, which helps developers understand exactly what is required.
2. Objective
• The main goal of this model is to reduce misunderstandings between the user and the developer by showing
an early version of the product.
• It helps in gathering accurate requirements and ensures that the final software matches user expectations.
3. Phases of the Prototyping Model
a) Requirement Identification
• In this phase, only the basic system requirements are gathered from the user.
• The focus is on identifying the main input and output of the system, not the detailed functionalities.
• For example, if you are building a hospital management system, you may initially identify modules like patient
registration, appointment booking, and billing.
b) Quick Design
• This design mainly focuses on user interfaces, showing how the system will look on the screen, how data will
flow, and how users will interact with it.
• The purpose is not to create a detailed design but to give a rough idea of the system’s structure.
c) Build Prototype
• Using the quick design, a prototype or a simple working model of the system is developed.
• It includes only the main features and may not have the full functionality or backend processing.
• The prototype is used to demonstrate how the system will function in real life.
d) User Evaluation
• The prototype is then shown to the users for feedback and evaluation.
• Users interact with it, test its functions, and suggest changes or improvements.
• This helps the developers to understand what users like or dislike and what changes need to be made.
e) Refinement
• This process is repeated multiple times until the user is fully satisfied with the system’s design and features.
• The final version of the prototype represents the complete set of user requirements.
f) Final Product Development and Maintenance
• Once the prototype is approved, developers use the finalized requirements to design and build the actual
system.
• Proper coding, testing, and integration are done in this phase to produce the final product.
• Any issues or bugs found after deployment are corrected during the maintenance phase.
4. Diagram Representation
User Requirements → Quick Design → Build Prototype → User Evaluation → Refinement / Modification → (repeat until approved) → Final Product
5. Types of Prototyping
1. Throwaway/Rapid Prototype:
Built quickly to understand requirements and then discarded. The final product is developed from scratch.
2. Evolutionary Prototype:
Improved step-by-step until it becomes the final product.
3. Incremental Prototype:
Several prototypes are built for different modules and then combined into one complete system.
6. Advantages
• It helps in clarifying unclear requirements and ensures a better understanding between user and developer.
• Users get a chance to see and test the system early, which increases satisfaction.
7. Disadvantages
• Users may keep demanding new features, leading to scope creep (project expansion).
• Documentation may not be properly maintained since focus is more on the prototype.
• It may not be suitable for large, complex systems that need strong backend design.
8. When to Use
• When user requirements are not clearly defined or may change frequently.
9. Example
10. Conclusion
3. Spiral Model
1. Introduction
• The Spiral Model is a risk-driven software development model that combines features of both the Waterfall
Model and the Prototyping Model.
• This model is called “spiral” because the process looks like a spiral with many loops, where each loop
represents one phase of the software development process.
2. Objective
• The main goal of the Spiral Model is to identify and minimize risks at every stage of the project.
• It allows repeated refinement of the system through several iterations, leading to better accuracy and stability.
3. Key Features
• It combines iterative development (like in prototyping) with systematic steps (like in waterfall).
• Each phase in the spiral involves risk analysis, which helps avoid costly mistakes later.
• The number of loops depends on the size and complexity of the project.
4. Phases of the Spiral Model
The spiral model is divided into four main phases that are repeated for each loop in the spiral:
a) Planning Phase
• In this phase, the objectives and requirements of the project are determined.
• The project scope, constraints, and system functionality are defined.
b) Risk Analysis Phase
• Possible risks and uncertainties (technical, financial, or operational) are identified and analyzed.
• Alternate solutions are suggested, and a prototype may be developed to reduce the risk.
c) Engineering Phase
• The actual development and testing of the product take place in this phase.
• The design, coding, and verification of the software are carried out according to the planned objectives.
d) Evaluation Phase
• The customer reviews the product and provides feedback on the work done.
• Based on the feedback, decisions are made for the next loop — whether to continue, modify, or stop the
project.
5. Diagram Representation
Planning Phase → Risk Analysis → Engineering Phase → Customer Evaluation → (next loop)
Each loop represents one development cycle, and the product evolves through several spirals.
6. Advantages
7. Disadvantages
9. Example
For example, in the development of a flight control system, where safety and reliability are crucial, the Spiral Model is
used.
Risks such as hardware failure or software malfunction are analyzed in each loop, and prototypes are tested repeatedly
to ensure reliability.
10. Conclusion
4. Object-Oriented Model
1. Introduction
• The Object-Oriented Model (OOM) is a software development approach based on the concept of objects,
which represent both data and the operations that can be performed on that data.
• It focuses on building the system around real-world entities, making it more natural, reusable, and easier to
maintain.
2. Objective
• The main goal of the Object-Oriented Model is to increase software reusability, scalability, and maintainability
by organizing the system into interacting objects rather than separate functions or procedures.
3. Key Concepts
a) Object
• An object is a real-world entity that contains both data (attributes) and functions (methods).
• Example: A Car object has attributes like color, model, speed, and methods like start() or stop().
b) Class
• A class is a blueprint or template used to create objects that share the same attributes and methods.
• Example: The class Car can be used to create multiple objects like Car1, Car2, etc.
c) Encapsulation
• Encapsulation binds data and the methods that operate on it into a single unit (the class).
• It hides internal details from the outside world, ensuring data security.
d) Inheritance
• It allows a new class to acquire the properties and methods of an existing class, promoting code reuse.
e) Polymorphism
• It allows the same function or method to behave differently depending on the object that calls it.
• Example: The method move() can behave differently for Car and Bike objects.
f) Abstraction
• It exposes only the essential features of an object and hides the internal implementation details.
• Implement the design using OOP languages like Java, C++, or Python.
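A small Java sketch of how these concepts fit together (Vehicle, Car, and Bike follow the examples above; all details are illustrative only):

// Abstraction + inheritance: Vehicle exposes only the essential operation move().
abstract class Vehicle {
    // Encapsulation: state is private and reachable only through methods.
    private int speed;

    public int getSpeed() { return speed; }
    protected void setSpeed(int speed) { this.speed = speed; }

    // Polymorphism: each subclass provides its own behaviour for move().
    public abstract String move();
}

class Car extends Vehicle {                       // Car inherits from Vehicle
    public String move() {
        setSpeed(80);
        return "Car drives on four wheels at " + getSpeed() + " km/h";
    }
}

class Bike extends Vehicle {                      // Bike inherits from Vehicle
    public String move() {
        setSpeed(30);
        return "Bike moves on two wheels at " + getSpeed() + " km/h";
    }
}

public class OopDemo {
    public static void main(String[] args) {
        Vehicle[] vehicles = { new Car(), new Bike() };   // objects created from classes
        for (Vehicle v : vehicles) {
            System.out.println(v.move());                 // same call, different behaviour
        }
    }
}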
5. Advantages
7. When to Use
• When the system is large and complex, involving multiple real-world entities.
8. Example
• Real-world entities are modeled as objects, and the relationships among these objects help model the entire system effectively.
9. Conclusion
The Object-Oriented Model focuses on representing software as a collection of interacting objects based on real-world
entities.
It enhances reusability, scalability, and maintainability and forms the foundation for modern software development in
languages like Java, C++, and Python.
Factors Affecting the Intensity of Quality Assurance (QA) Activities in the Development Process
1. Introduction
• Quality Assurance (QA) refers to the systematic process of ensuring that the software product meets the
required quality standards and functions correctly.
• The intensity (or amount of effort and resources) needed for QA activities depends on several factors related to
the software project, team, and requirements.
2. Factors Affecting QA Intensity
a) Project Size and Complexity
• Complex logic or numerous modules increase the need for deeper inspection and testing.
b) Criticality of the Application
• Projects that affect human life, safety, or finances (like medical, defense, or banking systems) demand high-intensity QA.
c) Development Methodology
• The QA effort depends on the model used (e.g., Waterfall, Agile, Spiral).
d) Availability of Tools and Automation
• The availability of automated testing tools, configuration management tools, and debugging utilities can reduce QA intensity.
e) Schedule and Budget Constraints
• Limited time or budget may reduce the intensity of QA, but this increases the risk of defects.
f) Customer Requirements and Expectations
• If the customer expects high-quality or mission-critical performance, QA activities must be more rigorous.
g) Past Project Experience
• If similar past projects had many defects, it indicates that more QA focus is needed this time.
• QA intensity can be adjusted based on previous experiences.
3. Conclusion
• The intensity of QA activities is determined by multiple technical, managerial, and environmental factors.
• Properly assessing these factors helps in allocating appropriate QA resources, ensuring that the final product
meets the desired quality standards.
Verification, Validation, and Qualification
1. Introduction
• Verification, Validation, and Qualification are three important activities in software quality assurance.
• They ensure that the software system is built correctly and that it meets user needs and intended purposes.
2. Verification
a) Definition
• Verification means checking whether the product is being built correctly according to design specifications
and requirements.
b) Objective
• To confirm that each phase of development meets its specified inputs and outputs.
c) Activities Involved
• Reviews
• Inspections
• Walkthroughs
• Desk checking
d) Example
• Reviewing the design document against the requirements specification to confirm that every requirement is covered.
e) Question Phrase
“Are we building the product right?”
3. Validation
a) Definition
• Validation means checking whether the right product has been built — that is, whether the software actually
meets the user’s needs and expectations.
b) Objective
• To ensure that the final product functions as intended in the real-world environment.
c) Activities Involved
• System testing
• Beta testing
d) Example
• Running the software with real data to see if it produces correct results.
e) Question Phrase
“Are we building the right product?”
4. Qualification
a) Definition
• Qualification is the process of ensuring that the software, hardware, or system components meet the
specified operational requirements in the target environment.
b) Objective
• To demonstrate that the software and its components are fit for use and work correctly in the intended
operational context.
c) Activities Involved
d) Example
• In a medical device software, qualification ensures that the system operates safely and effectively in hospitals.
5. Differences at a Glance
Aspect | Verification | Validation | Qualification
Purpose | Ensure product is built correctly | Ensure correct product is built | Ensure system works correctly in its environment
Performed by | Developers, QA team | End users, testers | QA team, regulatory authorities
Question | Are we building the product right? | Are we building the right product? | Is the product fit for use in real conditions?
6. Conclusion
Defect Removal Effectiveness (DRE)
1. Introduction
• SQA (Software Quality Assurance) focuses on improving the processes used to develop software so that the
final product is defect-free.
• Defect Removal Effectiveness (DRE) is an important metric used in SQA to measure how effectively defects are
detected and removed before software delivery.
2. Definition
• Defect Removal Effectiveness (DRE) measures the percentage of defects that are detected and removed during
a specific phase of software development.
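3. Formula
• DRE = (defects removed before delivery) / (defects removed before delivery + defects found by the customer after delivery) × 100
• A DRE of 100% would mean that no defects escaped to the customer.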
4. Example
• Suppose 80 defects were found and fixed during testing, but later, 20 more were found by the customer.
DRE = 80 / (80 + 20) × 100 = 80%
5. Aspects of DRE Measurement
• Data Aspect:
o Collects data about the number of defects, the phase in which they were found, and how many
escaped to later phases.
• Model Aspect:
o Uses mathematical models and historical data to predict defect rates and improvement areas.
6. Importance
7. Conclusion
• DRE is a vital SQA metric that reflects the efficiency of defect detection and removal.
• A higher DRE value means better quality assurance and fewer defects reaching customers.
McCabe’s Cyclomatic Complexity
1. Introduction
• McCabe’s Cyclomatic Complexity is a software metric used to measure the complexity of a program’s control
flow.
• It was introduced by Thomas McCabe in 1976 to help determine how difficult a program is to test and
maintain.
2. Definition
• Cyclomatic Complexity measures the number of linearly independent paths through a program’s source code.
• It gives an idea of how many test cases are needed for complete branch coverage.
3. Formula
V(G) = E − N + 2
Where:
• E = number of edges in the control flow graph
• N = number of nodes in the control flow graph
Alternative formula:
V(G) = P + 1
Where P = number of predicate (decision) nodes (like if, while, for, etc.)
4. Steps to Calculate
• Draw the control flow graph of the program.
• Count the edges (E) and nodes (N), or count the predicate (decision) nodes (P).
• Apply V(G) = E − N + 2 or V(G) = P + 1.
5. Example
#include <stdio.h>
int main(void) {
    int a = 1, b = -1;                 /* sample inputs */
    if (a > 0)                         /* decision node 1 */
        printf("A positive\n");
    else if (b > 0)                    /* decision node 2 */
        printf("B positive\n");
    else
        printf("Both non-positive\n");
    return 0;
}
• Predicate (decision) nodes: the two if conditions, so P = 2.
• So, V(G) = P + 1 = 2 + 1 = 3
Cyclomatic complexity = 3
→ Therefore, 3 independent test cases are needed to test all branches.
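The result can be cross-checked with the edge/node formula: drawing the control flow graph of this fragment with a single exit node gives E = 7 edges and N = 6 nodes, so V(G) = E − N + 2 = 7 − 6 + 2 = 3, matching the predicate-node count (the exact E and N depend on how the graph is drawn, but V(G) does not).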
6. Interpretation
V(G) Value | Complexity | Risk
1 – 10 | Simple | Low
11 – 20 | Moderate | Medium
21 – 50 | Complex | High
> 50 | Highly complex | Very high
7. Importance
8. Conclusion
• McCabe’s Cyclomatic Complexity is a key metric that measures the logical complexity of software.
• It ensures that the code is understandable, testable, and maintainable, reducing potential errors.
Software Testing Strategy
1. Introduction
• A software testing strategy defines a systematic plan to test software at different levels to ensure quality,
correctness, and performance.
• It specifies what types of testing will be done, how, and in what order.
2. Objective
• The main goal is to detect errors systematically at different stages of software development before
deployment.
3. Levels of Testing
a) Unit Testing
• Tests individual modules or components in isolation to verify that each works correctly.
b) Integration Testing
• Tests the interfaces between combined modules to ensure they work together correctly.
c) System Testing
• Ensures that all modules, interfaces, and subsystems work properly together.
d) Acceptance Testing
• Performed by the end users or clients to ensure the software meets business needs.
• Unit and integration testing are usually done by developers.
• System and acceptance testing are usually done by testers and end users.
6. Importance
7. Conclusion
• A well-planned software testing strategy ensures that testing is done at all levels — unit, integration, system,
and acceptance — in a structured manner.
• It improves software quality, reduces risk, and ensures a reliable final product.
Objectives of Software Testing
1. Introduction
• Software Testing is the process of executing a program with the intent to find errors and ensure that the final
product meets user requirements.
• The main objectives of software testing focus on ensuring quality, correctness, and reliability of the software.
2. Main Objectives
a) To Detect Errors
• The primary goal of testing is to find and fix defects before software delivery.
b) To Verify Functionality
• Testing ensures that the software performs all intended functions correctly according to the requirements and
design specifications.
c) To Ensure Reliability
• It checks whether the software performs consistently and accurately over time and under various
environments.
d) To Ensure Quality
• Testing helps achieve high-quality software that satisfies both functional and non-functional requirements such
as performance, usability, and security.
e) To Validate Requirements
• It ensures that the final product meets user expectations and real-world needs — confirming that the right
product is built.
f) To Improve the Development Process
• Testing also provides feedback to improve the development process, helping to prevent similar issues in future
projects.
g) To Evaluate Performance
• It measures the response time, scalability, and stability of the system under various load conditions.
h) To Ensure Maintainability
• Testing ensures that future changes, updates, or enhancements do not break existing functionality (through
regression testing).
3. Conclusion
• The ultimate objective of software testing is not only to find bugs but also to build confidence in the product,
ensure it meets user needs, and deliver a reliable and high-quality software system.
White Box Testing and Black Box Testing
1. Introduction
• White Box Testing and Black Box Testing are two fundamental approaches to software testing.
• Each has its own advantages and limitations based on what aspect of the software they test.
2. White Box Testing
a) Definition
• White Box Testing is a structural testing technique where the tester has complete knowledge of the internal
logic, code, and structure of the software.
Pros (Advantages)
Helps in optimizing the code by identifying hidden errors, dead code, and inefficiencies.
Ensures complete path coverage and checks internal logic thoroughly.
Useful for unit testing and verifying the correctness of algorithms.
Detects logical errors early in development.
Supports code maintainability and performance improvement.
Cons (Disadvantages)
Requires in-depth programming knowledge.
Time-consuming for large systems with complex logic.
May miss missing functionalities (focuses only on what’s implemented).
Not effective for system-level or user-level testing.
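As a small illustration (a hypothetical method and tests, not taken from these notes), white-box test cases are derived from the code’s structure so that every branch is exercised:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AbsoluteValueTest {

    // Code under test: two branches, so V(G) = 2 and at least two tests are needed.
    static int absolute(int x) {
        if (x < 0) {
            return -x;    // branch 1
        }
        return x;         // branch 2
    }

    @Test
    void negativeInputExercisesBranch1() {
        assertEquals(5, absolute(-5));
    }

    @Test
    void nonNegativeInputExercisesBranch2() {
        assertEquals(7, absolute(7));
    }
}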
3. Black Box Testing
a) Definition
• Black Box Testing is a functional testing technique that focuses on input and output behavior of the software
without knowing its internal code.
Pros (Advantages)
Cons (Disadvantages)
4. Comparison Table
Aspect | White Box Testing | Black Box Testing
Knowledge Required | Requires knowledge of internal code | No need for code knowledge
Best For | Unit and integration testing | System and acceptance testing
5. Conclusion
• White Box Testing ensures internal correctness, while Black Box Testing ensures external functionality.
• A combination of both provides complete coverage and guarantees both code quality and user satisfaction.
Revision Factor Testing Classes
1. Introduction
• Revision Testing (also known as Regression Testing) is a type of testing performed after modifications or
enhancements are made to the software.
• The goal is to ensure that new changes do not introduce new defects and that the existing functionalities still
work properly.
• Testing classes (types) are divided based on how the software is affected by revisions.
2. Definition
• Revision Factor Testing refers to testing activities performed to re-evaluate software after updates, ensuring
that changes do not negatively impact system behavior or performance.
3. Classes of Revision Factor Testing
a) Regression Testing
• It checks whether previously tested functionalities still work after code changes, bug fixes, or updates.
• Example: After adding a new payment feature, old login and cart modules should still function properly.
b) Re-Testing
• It involves testing a specific defect again after it has been fixed to ensure that it has been resolved.
• Example: If a bug was reported in the “search” feature and fixed, the same test case is re-executed.
c) Confirmation Testing
• It is done to confirm that the changes implemented (such as patches or updates) work correctly in the
modified software.
d) Maintenance Testing
• When the software is deployed and changes are made due to updates or environment changes, maintenance
testing ensures continued functionality.
• Example: After upgrading from Windows 10 to Windows 11, the software should still work as intended.
• Impact analysis of a change helps testers focus on the areas that are most likely to be affected.
4. Importance
5. Conclusion
• Revision Factor Testing is essential for maintaining software quality during updates and bug fixes.
• It ensures that the software remains reliable, stable, and consistent even after changes are made.
Transition Factor Testing Classes
1. Introduction
• Transition Factor Testing Classes are a group of tests that focus on ensuring that software transitions smoothly
from development to operational use.
• This testing class helps ensure that changes like upgrades, migrations, or installations do not introduce new
defects or cause system failures.
• It is an essential part of quality assurance during the final stages of software deployment.
2. Purpose
• To detect and fix defects that could occur due to environment changes, data conversions, or system updates.
3. Types of Transition Factor Testing
Installation Testing
• Confirms that the software can be installed correctly in the target environment without errors.
• Ensures all files, libraries, databases, and configurations are correctly installed.
Migration Testing
• Ensures that data and configurations from an older version or system are accurately transferred to the new
system.
• Example: Migrating customer data from an old banking database to a new one (a sketch follows after this list).
Conversion Testing
• Verifies that the new system correctly handles old data formats and integrates with existing applications.
• Ensures compatibility between old and new systems, avoiding functionality breaks.
Readiness Testing
• Confirms that users, system administrators, and support teams are prepared for the software transition.
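Returning to the migration-testing example above, a minimal sketch of an automated migration check (the JDBC URLs and table name are assumptions); it simply compares record counts in the old and new databases after the data transfer:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MigrationCheck {

    // Counts the rows of one table through a plain JDBC connection.
    private static long countRows(String jdbcUrl, String table) throws SQLException {
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM " + table)) {
            rs.next();
            return rs.getLong(1);
        }
    }

    public static void main(String[] args) throws SQLException {
        long oldCount = countRows("jdbc:postgresql://old-host/bankdb", "customers");
        long newCount = countRows("jdbc:postgresql://new-host/bankdb", "customers");
        if (oldCount == newCount) {
            System.out.println("Row counts match: " + newCount);
        } else {
            System.out.println("MISMATCH: old=" + oldCount + ", new=" + newCount);
        }
    }
}

Real migration testing would also compare field values and checksums, but even a row-count check catches gross transfer errors early.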
4. Importance
• Reduces risks during deployment and ensures a smooth transition from development to production.
5. Conclusion
• Transition Factor Testing Classes are critical to ensuring software stability and usability after deployment.
• By testing installation, migration, conversion, and readiness, organizations can avoid costly operational issues
and deliver reliable software.
1. Definition
• Project Progress Control is the systematic process of monitoring, measuring, and managing project activities
to ensure timely completion, budget compliance, and quality standards.
• It ensures that deviations from the project plan are identified and corrected promptly.
2. Objectives
3. Major Components
Project Plan
• Defines the tasks, schedule, milestones, and resources that serve as the baseline for measuring progress.
Performance Metrics
• Quantitative indicators such as effort spent, cost incurred, tasks completed, and defects found.
• Helps objectively evaluate progress and productivity.
Risk Management
• Identifies and tracks risks that could delay the project and defines mitigation actions.
4. Importance
Internal Projects and External Participants
1. Internal Projects
• Internal projects are developed within the organization to meet internal needs.
• Internal project teams are responsible for planning, execution, monitoring, and reporting.
• Project progress control is simpler because all stakeholders belong to the same organization.
2. External Participants
• External participants include clients, consultants, contractors, or third-party vendors involved in the project.
4. Importance
• Both internal and external participants are critical to successful project progress control.
• Proper coordination ensures on-time delivery, quality assurance, and stakeholder satisfaction.
1. Introduction
• Project Progress Control Regimes are systematic approaches used to monitor, manage, and control project
activities to ensure that objectives are achieved on time, within budget, and at the required quality level.
• Implementation involves defining procedures, assigning responsibilities, and using tools to track progress
effectively.
2. Steps in Implementation
Define Objectives and Metrics
• Identify key objectives of the project and the criteria to measure progress.
• Common metrics include schedule adherence, cost performance, task completion, and defect rates.
Establish a Baseline Plan
• Develop a project plan with clearly defined tasks, timelines, resources, and milestones.
• This baseline serves as a reference for comparing actual vs. planned progress.
Monitor Progress
• Monitoring includes tracking task completion, resource utilization, and milestone achievement.
Assign Responsibilities
• Allocate roles for project managers, team leads, and QA personnel to oversee different aspects of progress control.
Control Changes
• Changes in scope, schedule, or resources are documented, reviewed, and approved through a formal process.
Analyze Variances
• Identify differences between planned and actual progress (schedule variance, cost variance, quality deviations); a worked example follows after these steps.
Take Corrective Action
• Take corrective measures such as reallocation of resources, rescheduling tasks, or risk mitigation.
Review and Communicate
• Conduct regular progress review meetings and audits to ensure adherence to project plans.
• Maintain clear communication with stakeholders through progress reports, dashboards, and meetings.
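One common way to quantify the schedule and cost variances mentioned in the steps above is earned value analysis (named here as an illustration; these notes do not prescribe a specific technique):
• Schedule Variance: SV = EV − PV (earned value minus planned value)
• Cost Variance: CV = EV − AC (earned value minus actual cost)
• Example: if work worth 40 units has been completed (EV = 40), 50 units of work were planned by this date (PV = 50), and 45 units have been spent (AC = 45), then SV = −10 (behind schedule) and CV = −5 (over budget).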
3. Importance
4. Conclusion
• Implementing project progress control regimes systematically ensures that projects are delivered successfully.
• It provides a structured approach to tracking performance, managing risks, and improving overall project
management efficiency.
Computerized Tools for Software Progress Control
1. Introduction
• Computerized tools for software progress control are software applications that help project managers
monitor, track, and manage software development activities efficiently.
• These tools automate many control activities, reduce manual effort, and improve accuracy and transparency.
2. Key Functions
Task Management
• Assign tasks to team members and monitor completion status, delays, or bottlenecks.
Resource Management
• Track the allocation and utilization of team members, budget, and other resources across tasks.
Version and Configuration Control
• Tools maintain version history, code changes, and configuration management to ensure consistency.
4. Importance
5. Conclusion
• Using computerized tools for software progress control makes project monitoring systematic, automated, and
effective.
• These tools help managers control schedule, resources, costs, quality, and risks, leading to successful project
completion.