Manual Testing

The document outlines various Software Development Life Cycle (SDLC) models, including Waterfall, Spiral, V and V, Prototype, Customized, Hybrid, and Agile models, detailing their advantages and disadvantages. It also describes the stages of SDLC, software testing processes, types of testing, and the importance of testing in ensuring software quality and functionality. Key concepts such as manual testing, the defect life cycle, and specific testing methodologies like black box and end-to-end testing are also covered.

SDLC models

Waterfall model

Spiral model

V and V model

Customized Model

Prototype model

Hybrid model

Agile model

SDLC Stages

The step-by-step process of developing new software:

Requirement analysis -> Feasibility study -> Design -> Coding -> Testing -> Installation -> Maintenance

Requirement analysis - the process of collecting the requirements from the customer.

Feasibility study - the designated team decides whether or not to take up the project.

Design - creating the architecture of the application.

Coding - done by developers.

Testing - done by test engineers.

Installation - deploying the software in an environment where it is accessible to users, e.g. the Play Store.

Maintenance - monitoring the software after it has been delivered to the customer, e.g. releasing updates and fixing bugs.

Why SDLC

To estimate the resources, manpower and cost of the project, and to maintain proper supporting documents.

Waterfall model

A standard procedure for building new software. Here, requirement collection is done only once.

Advantages:

Initial investment is less.

Software quality is good.

Disadvantages:

Takes more time.

Suitable only for small projects.

Requirements cannot be changed once development has started.

Spiral model

A standard procedure for building new software, also known as the incremental and iterative model. It is used to overcome the drawbacks of the waterfall model. Use it only when modules have dependencies on one another.

Advantages:

Requirement changes are allowed after every cycle.

Also suited to small projects.

Software quality is good.

The spiral model is a controlled model, as it analyses risk after each stage of development.

Disadvantages:

Requirement changes are not allowed in the middle of a cycle.

The requirement and design stages are not tested.

The developer is involved in testing.

Verification and Validation model

A standard procedure for building new software.

Verification: reviewing the CRS, SRS, HLD and LLD and checking whether they match the requirements. Done by the test engineer, before software development. Here we ask, "Are we building the software right?" Also called static testing.

Validation: testing the functionality of the application against the requirements. Done by the test engineer, after software development. Here we ask, "Have we built the right software?" Also called dynamic testing.

Advantages:

Total cost is less.

Requirement changes are allowed.

All stages are tested.

Disadvantages:

Initial investment is high.

More documentation is required.

The interaction between developer and test engineer has to be managed.

Prototype model

A dummy model prepared by web designers and developers, who convert the textual requirements into visual mock-ups using tools like Adobe Photoshop and MS Paint.

Advantages:

Improved communication between the customer, the development team and the testing team.

The customer gets to see the outcome of the end product at an early stage.

Requirement changes are allowed.

Software quality is good.

Disadvantages:

Extra investment is needed just to build the prototype.

Actual development starts late, because the team spends time designing and developing the prototype.

Customized model

Here we take any basic model and customize it as per the requirements. Also called the desired model.
Hybrid model

The process of combining more than one model into a single model is called a hybrid model, for example:

Spiral with prototype hybrid model

V and V with prototype hybrid model

Agile model

Breaks the project into small pieces delivered in short iterations known as sprints; a team works on each piece while getting continuous feedback.

Delivers the project in small increments rather than as one finished product.

The goal is to deliver good software quickly and frequently.

Advantages:

Easy adjustments

Communication

Customer feedback

Disadvantages:

Not suitable for large complex projects with strict deadlines

Difficult to predict cost

HLD and LLD

HLD (High-Level Design) focuses on the overall architecture and how the components work together as a whole, while LLD (Low-Level Design) describes each component in detail.
Software Testing

Software - a product designed to perform a particular task.

Manual Testing

We have an application, and we (as humans) test it manually against the customer requirements, again and again, to find defects.

No automation tools are used here.

Why we do software testing?

To find and fix bugs.

To improve the quality of the software.

To check whether the software was built according to the customer requirement specification (CRS).

Error, defect and bugs

Error - a coding mistake made by the developer, e.g. typing "-" instead of "+".

Defect - caused by an error in the code; the software fails to meet the requirements because it behaves incorrectly.

Bug - a serious defect, e.g. one that makes the software crash.

An error leads to a defect, and a severe defect is a bug.

Failure - a defect leads to a failure.


Advantages and disadvantages of manual testing

Advantages:

Software quality is good.

Can find issues which automation testing cannot.

Programming knowledge is not required.

Disadvantages:

Resource utilisation is higher.

Takes more time.

No consistency.

Tough and repetitive.

Software testing life cycle

System study -> Prepare test plan -> Write test cases -> Prepare traceability matrix -> Test execution -> Defect tracking -> Test execution report -> Retrospective meeting

System study - read the requirements and try to understand them. If there is any doubt, interact with the customer, development team, testing team, business analyst and product owners.

Prepare test plan - the test lead prepares this document before test execution.

Write test cases - a detailed document written by looking at the requirements and test scenarios.

Template

Header- requirement no, test case type, test data, precondition

Body- step no, description, input, expected result, actual result, status,
comment

Footer- author, reviewer, reviewed on

Test case:

Test case ID:

Precondition:

Test steps:

Description:

Input if any

Expected result:

Actual result:

Status:

Example:

Test scenario: check whether the user can move a specific mail from the Inbox to the Spam folder.

Test case:

Test case ID: 101

Precondition: the user is logged in to Gmail, on a phone or PC, with an active internet connection.

Steps:

1. Open Gmail and click the Inbox button -> Inbox mails should be displayed.

2. Select a mail from the Inbox and move it to Spam -> the mail should be moved to Spam.

3. Click the Spam button -> the mail should be displayed on the Spam page.

Prepare traceability matrix: a document prepared to ensure that every requirement has at least one test case.
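
As a rough illustration (the requirement and test case IDs below are made up), a traceability matrix can be thought of as a mapping from each requirement to the test cases that cover it, and the basic check is simply that no requirement is left uncovered:

# Hypothetical traceability matrix: requirement ID -> covering test case IDs
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet -> should be flagged
}

uncovered = [req for req, cases in traceability.items() if not cases]
if uncovered:
    print("Requirements without test cases:", uncovered)
else:
    print("Every requirement has at least one test case.")
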

Test execution: executing the tests.

Defect tracking: tracking and recording the defects found during test execution in a defect tracking tool.

Test execution report: a report of what happened during the execution process.

Retrospective meeting: a meeting held to discuss what went well, what didn't, and how it can be improved in the future.

Test scenario:

A brief outline of what we are supposed to test.

Defect life cycle

New/Open - the defect has just been found.

Assigned - the defect is assigned to a developer.

Fixed - the code issue is resolved and is ready for retesting.

Closed - the defect is verified as resolved.

Reopen - the defect still exists after being closed, so it is reopened.

From Assigned, a defect can also be moved into one of six statuses (a small sketch of these transitions follows the list):

1. Rejected: it is not a defect but expected behaviour / a feature.
2. Duplicate: the same defect has already been raised by someone else.
3. Cannot be fixed: it does not affect the customer's business, so it will not be fixed.
4. Postponed/Deferred: the developer does not have enough time to fix the defect in this cycle, so it is deferred to the next cycle.
5. Issue not reproducible: the defect cannot be reproduced in the developer's environment ("works for me").
6. Request for enhancement (RFE): it is a suggestion for enhancing existing functionality, not a defect that needs immediate correction.
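
A minimal sketch of the life cycle above in Python; the state names follow these notes, and a real tool such as Jira may use a slightly different workflow:

# Allowed defect status transitions, as described above (simplified sketch)
ALLOWED_TRANSITIONS = {
    "New/Open": ["Assigned"],
    "Assigned": ["Fixed", "Rejected", "Duplicate", "Cannot be fixed",
                 "Postponed/Deferred", "Issue not reproducible", "RFE"],
    "Fixed": ["Closed", "Reopen"],   # retesting decides whether it is closed or reopened
    "Reopen": ["Assigned"],
    "Closed": ["Reopen"],            # a closed defect can resurface later
}

def move(current_status: str, new_status: str) -> str:
    """Return the new status if the transition is allowed, else raise an error."""
    if new_status not in ALLOWED_TRANSITIONS.get(current_status, []):
        raise ValueError(f"Cannot move defect from {current_status} to {new_status}")
    return new_status

status = move("New/Open", "Assigned")
status = move(status, "Fixed")
status = move(status, "Closed")
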

Test cycle

Testing from start to finish, from planning through to closure.

Respin:

An updated build of the software that includes all the fixes from the previous test cycle.

Types of testing
Black box testing

White box testing

Black box testing

Also called closed box testing or behavioural testing.

It is done by the test engineer.

Programming knowledge is not required.

Tests the functionality of the software without any knowledge of the internal code or design.

Types: functional vs non-functional

Functional - checks whether the product works according to the requirements.

- System testing (f)
- Smoke testing (f)
- Sanity testing (f)
- Acceptance testing (f)
- User acceptance testing (f)
- Regression testing (f)
- Unit testing
- Integration testing (f)
- Exploratory testing
- End-to-end testing

Non-functional - focuses on how the product performs under certain conditions, rather than what it does.

- Performance testing (nf)
- Usability testing (nf)
- Compatibility testing (nf)
- Accessibility testing (nf)
- Ad-hoc testing (nf)
- Recovery testing (nf)
- Localization testing (nf)
- Internationalization testing (nf)

Software testing input

Over testing - running more tests than necessary, wasting time and resources.

Under testing - not testing enough; bugs can be missed.

Optimized testing - focusing only on the high-risk areas.

Positive testing - giving valid input and testing the behaviour.

Negative testing - giving invalid input and testing how the software handles it.

System Testing

Here we test the complete software to ensure it works according to the requirements. The focus is mostly on internal features and requirements.

Test case: user registration process

1. Test case ID: ST-001
2. Objective: verify that the user registration functionality works as intended.
3. Preconditions: the user is on the registration page of the application.
4. Steps:
   - Open the application.
   - Navigate to the registration page.
   - Enter valid user details (name, email, password).
   - Click the "Register" button.
   - Verify that the user data is saved in the database.
   - Check that the system generates a confirmation email.
   - Ensure that the email contains the correct confirmation link.
5. Expected result:
   - User data should be accurately stored in the database.
   - A confirmation email should be generated with the correct details.

When to test:

After integration testing and before acceptance testing

Unit -> Integration -> System -> Acceptance

Integration testing

Testing the data flow between two modules.

Types

Incremental integration testing (modules are not yet fully integrated)

Non-incremental integration testing (modules are fully integrated)

Incremental Integration Testing

- Top-down incremental integration testing

You start testing the water bottle's features from the top level. For example, you might first test the lid and then integrate it with the bottle body. You check whether the lid seals properly and prevents leaks when the bottle is inverted.

Chime example:

You might start by testing the user interface for scheduling a meeting. Once that works, you integrate it with the backend to check whether the meeting details are saved correctly in the database.

- Bottom-up incremental integration testing

Here you start with the basic components, like testing the bottle material and the insulation layer. Once confirmed, you integrate these parts with the lid and check whether the entire assembly keeps the contents cold as expected.

Stub: a dummy module that stands in for a real module and can receive and return data. Used in top-down integration testing.
If the lid has not been developed yet, you might use a dummy lid, called a stub.

Driver: acts as an interface between two modules; it can call the module under test, analyse the result and give the output. Used in bottom-up integration testing.
If you are testing the bottle's ability to be filled and poured, a driver could simulate the pouring action to see how well the design works in practice.
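
A small Python sketch of the idea, reusing the hypothetical water-bottle example: the stub stands in for a lid module that is not ready yet (top-down), while the driver calls into a finished low-level module to exercise it (bottom-up). The class and function names here are illustrative, not from any real code base.

# --- Top-down: the real Lid module is not developed yet, so we use a stub ---
class LidStub:
    """Dummy lid that simply pretends to seal, so the Bottle module can be tested."""
    def seal(self) -> bool:
        return True  # canned response instead of real sealing logic

class Bottle:
    def __init__(self, lid):
        self.lid = lid
    def is_leak_proof(self) -> bool:
        return self.lid.seal()

assert Bottle(LidStub()).is_leak_proof()

# --- Bottom-up: the low-level pouring module exists, and a driver exercises it ---
class PouringModule:
    def pour(self, millilitres: int) -> str:
        return f"poured {millilitres} ml"

def pouring_driver():
    """Driver that feeds test data to the module and checks the result."""
    result = PouringModule().pour(250)
    assert result == "poured 250 ml"

pouring_driver()
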
Non-incremental integration testing
You test the entire water bottle as a whole after all components are developed. For example, you fill the assembled bottle with water, seal it, and test for leaks, temperature retention and usability all at once.

Acceptance Testing

The final level of testing, conducted to ensure the software works according to the requirements. Typically done by clients or end users.

Types include user acceptance testing (UAT).

Example 1: water bottle. Scenario: a company has developed a new water bottle that is supposed to keep drinks cold for 12 hours and be leak-proof.

Acceptance testing steps:

1. Acceptance criteria:
   - The bottle should keep water cold for at least 12 hours.
   - The bottle should not leak when turned upside down.
   - The design should be comfortable and easy to carry.

2. Test cases:
   - Test case 1: fill the bottle with ice water, seal it, and check the temperature after 12 hours.
   - Test case 2: fill the bottle with water, seal it, turn it upside down, and check for leaks after 1 hour.
   - Test case 3: evaluate the design for comfort while holding and carrying.

3. Execution:
   - Conduct the tests as per the test cases.
   - Gather feedback from users who try the bottle.

4. Evaluation:
   - Analyse the results: did the bottle keep the water cold? Was there any leakage? Was the design comfortable?

5. Sign-off:
   - If the bottle meets all the criteria, stakeholders sign off on the product for market release.

Test case example for the Amazon Chime app:

Test case ID: UAT-001

- Test scenario: user joins a scheduled meeting

- Preconditions:
  - The user has the Amazon Chime app installed on their device.
  - The user is logged in to their Amazon Chime account.
  - A meeting is scheduled and the user has the meeting link.

- Steps to execute:
  1. Open the Amazon Chime app.
  2. Navigate to the "Meetings" section.
  3. Find the scheduled meeting in the list.
  4. Click on the meeting link or "Join" button.
  5. Wait for the meeting to load.
  6. Verify that the user can see and hear other participants.

- Expected result:
  - The user should successfully join the meeting without any errors.
  - The meeting interface should display all participants.
  - Audio and video should be functioning properly.

- Actual result: (to be filled in during testing)

- Status: (to be marked as Pass or Fail based on the actual result)

User acceptance testing

End-to-end testing done by end users, where they use the software for a particular period of time and check whether it can handle real-time business scenarios.

Focuses on the user experience.


End to end testing

Tests the application flow from start to finish, ensuring all integrated parts work together well.

Mostly done from the end user's point of view, so it covers practically everything.

Done by testers.

1. Test case ID: E2E-001
2. Objective: verify that the user registration process works correctly from start to finish.
3. Preconditions: the user is on the registration page of the application.
4. Steps:
   - Open the application.
   - Navigate to the registration page.
   - Enter valid user details (name, email, password).
   - Click the "Register" button.
   - Check for a confirmation email in the user's inbox.
   - Click the confirmation link in the email.
   - Log in with the newly registered credentials.
5. Expected result:
   - User should be registered successfully.
   - Confirmation email should be received.
   - User should be able to log in with the registered credentials.

Important examples

System testing: a type of testing where the complete and integrated software application is tested as a whole. The focus is on verifying that the system meets the specified requirements.

Example: when you perform system testing for a login feature, you would check that:
- The login page loads correctly.
- Users can enter their username and password.
- The system correctly validates the credentials.
- Appropriate error messages are displayed for incorrect logins.
- Successful logins redirect the user to the correct dashboard.

In this case, you are testing the login functionality within the context of the entire application, ensuring all components work together as intended.

End-to-end testing, on the other hand, is a more comprehensive approach that checks the entire workflow of the application from start to finish. It simulates real user scenarios and ensures that all integrated components work together as expected.

Example: for the login feature, end-to-end testing would involve:
- Starting from the home page of the application.
- Navigating to the login page.
- Entering valid credentials.
- Submitting the login form.
- Verifying that the user is redirected to the correct dashboard.
- Checking that all subsequent features (like accessing user settings or logging out) work correctly after logging in.

In this scenario, you are testing the complete flow of the application, ensuring that all parts of the system work together seamlessly from the user's perspective.

Smoke Testing

Testing the basic or critical features before thorough testing.

Why do we do it?

To check whether the product is testable or not.

When we test the basic or critical features first and find defects there, it is better to send the build back to the developer at an early stage so that the developer has enough time to fix it.

Example with a water bottle:

Imagine you have just received a new water bottle. The smoke test would be checking whether it can hold water without leaking. You fill it up and look for any obvious leaks. If it holds water, it passes the smoke test.

E.g. GPay: sending money.

Amazon Chime:
Before a meeting, you would do a quick smoke test to ensure that the basic features are working. This could include checking whether you can log in to the application, whether your microphone and camera are functioning, and whether you can join a meeting. If all these basic functions work, the application is stable enough for the meeting.
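
As a rough sketch (the check functions below are hypothetical placeholders, not a real Chime API), a smoke suite is just a short list of critical checks run before any detailed testing, and testing stops if any of them fails:

# Hypothetical smoke checks: each should return True when the critical feature basically works
def can_log_in() -> bool:
    return True   # placeholder: replace with a real login check

def microphone_and_camera_work() -> bool:
    return True   # placeholder: replace with a real device check

def can_join_meeting() -> bool:
    return True   # placeholder: replace with a real meeting-join check

def smoke_test() -> bool:
    checks = {
        "login": can_log_in,
        "microphone and camera": microphone_and_camera_work,
        "join meeting": can_join_meeting,
    }
    for name, check in checks.items():
        if not check():
            print(f"Smoke test failed at: {name} - report to the developer before further testing")
            return False
    print("Smoke test passed: the build is stable enough for detailed testing")
    return True

smoke_test()
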

Sanity Testing:

Used to verify that specific functionalities work normally after an update to the application.

Focuses on targeted areas rather than the entire application.

E.g. GPay: splitting money.

Example: water bottle

After fixing a design flaw in the water bottle's cap, sanity testing would involve checking just the cap to see whether it now seals properly and doesn't leak. You wouldn't check the entire bottle again, just the part that was changed.

Amazon Chime:

The team added a new feature that allows screen sharing. After the update, you would do a sanity test to check whether the screen-sharing feature works properly. You would specifically test the screen-sharing function to ensure it operates as intended, without going through all the other features of Chime again.

Regression Testing

Testing that an update did not break the existing features of the application.

E.g. a WhatsApp video call filter.

Types of regression testing

Unit regression testing - testing only the update/fix itself.

Regional regression testing - testing the update along with the impacted regions, e.g. send and unsend.

Full regression testing - testing the update and all remaining features, e.g. in WhatsApp when the video call function was not working.

When do we go for full regression testing?

Whenever there is an update.

When the update is made in the root of the product.

When the environment or platform changes.

Example:
Scenario: after an update, a new screen-sharing feature is added to Amazon Chime.

Regression testing steps:

1. Verify screen sharing: test the new screen-sharing feature to ensure it works correctly.

2. Check existing features: test other existing features like video calls, chat functionality and meeting scheduling to ensure they still function properly after the update.

The goal is to confirm that the new feature works and that the update didn't break any existing functionality.

Retesting/ Confirmation

Confirmation testing to check whether a defect is really fixed in the new build.

Retesting is not automated, and it is planned.

Retesting is done only for failed test cases.

Build - a new version of the software.

Update - the software after specific changes.

Example:
Scenario: a bug was found in the chat feature where messages weren't being sent.

Retesting steps:

1. Fix the bug: the development team fixes the issue with the chat feature.

2. Retest the chat function: send messages again to ensure they are now delivered successfully without any issues.

A test case can fail because of:

Defects found

Poor design

Platform changes

Exploratory testing:

Testers explore the application without a test plan. Testers use their knowledge and experience, instead of test cases, to find defects.

Exploratory testing example: a tester is assigned to evaluate a new feature in a mobile app. They create a test charter to explore the user registration process, focusing on various input scenarios like valid emails, invalid passwords, and edge cases such as special characters. They document their findings and any bugs they encounter during the session.

Ad-hoc testing example: a tester opens the same mobile app without any specific plan. They randomly tap buttons, swipe through screens, and try to perform actions like logging in with random credentials. They find a crash but don't document their steps or the specific actions that led to it.
Non-Functional Testing

Performance testing

Testing the stability and response time of an application by applying load.

Response time: the total time taken to receive the request, execute the program and respond to the user.

Load: the number of users.

Stability: the ability to withstand the load.

When do we go for performance testing?

When the product has become functionally stable.

Performance testing tools:

JMeter, NeoLoad, LoadRunner

Types of performance testing:

Load testing

Stress Testing

Volume Testing

Soak Testing

Load Testing

Testing the stability and response time of an application by applying a load that is less than or equal to the designed number of users.

Imagine testing the app by simulating 500 users trying to place pizza orders at the same time. This helps you see whether the app can handle many orders without crashing or slowing down.
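
A minimal sketch of the idea in Python, assuming a hypothetical order endpoint; real load tests are normally built in a dedicated tool such as JMeter rather than hand-rolled like this:

import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://example.com/api/orders"   # hypothetical order endpoint
USERS = 500                              # at or below the designed number of users

def place_order(_):
    start = time.time()
    try:
        ok = requests.get(URL, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.time() - start       # success flag and response time

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(place_order, range(USERS)))

succeeded = sum(ok for ok, _ in results)
avg = sum(t for _, t in results) / len(results)
print(f"{succeeded}/{USERS} requests succeeded, average response time {avg:.2f}s")
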

Stress Testing:

Testing the stability and response time of an application by applying a load that is more than the designed number of users.
Here, you could push the app to its limits by having 5,000 users trying to access it at once. This test checks how the app behaves under extreme conditions, for example whether it can still process orders or whether it fails.

Volume Testing:

Testing the stability and response time of an application by transferring a huge amount of data.

This checks how the app performs when you add thousands of pizza items or customer orders at once. It ensures the app can handle large amounts of data without crashing or slowing down.

Soak Testing:

Testing the stability and response time of an application by applying load continuously for a prolonged period of time.
This involves running the app with a steady number of users (say 100) placing orders continuously for a long time, such as 24 hours. It checks whether the app remains stable and responsive over time, looking for performance issues or memory leaks.
Adhoc Testing

Testing the application randomly, without looking into the requirements.

It is done without test cases too; knowledge of the requirements is not needed, although it is good to have.

Imagine you are testing a social media application. As an ad-hoc tester, you might do the following:

1. Explore features: you start by logging into the app and navigating through various features like posting a status, uploading photos, or sending messages. You take note of how each feature works and look for any inconsistencies.

2. Test edge cases: you could try to post a status with an unusually long text (say 10,000 characters) or upload a very large image file to see how the app handles these scenarios.

3. Check for usability issues: while using the app, you might notice whether certain buttons are hard to find or whether the navigation feels clunky, and note these usability aspects.

4. Simulate real-world use: you could also try to perform actions that a typical user might do, like quickly switching between different tabs or refreshing the feed multiple times in a row.

Why do we do ad-hoc testing?

When the product is launched into the market, people might use the application in random ways and, because of that, find more defects. To get ahead of this, the test engineer tests the application randomly and finds those bugs first.

If we only test according to the requirements, fewer defects will be found, so we also test the application randomly to find more defects.

When do we do ad-hoc testing?

When there is a tight deadline.

After feedback from users.


Types:

Monkey testing - the test engineer feeds random inputs to find defects.

Pair testing - a test engineer sits with another test engineer.

Buddy testing - a test engineer sits with a developer.

A simple example of monkey testing: imagine you have a mobile app for online shopping. During monkey testing, a tester might do the following:

1. Randomly tap: the tester randomly taps different buttons in the app, like "Add to Cart", "Checkout" or "Search", in no specific order.

2. Input random data: they might enter random text into search fields or input invalid credit card numbers during checkout.

3. Navigate erratically: the tester switches between different sections of the app quickly, like jumping from the home page to the account settings and back to the product page.

By doing this, the tester looks for crashes, error messages, or unexpected behaviour caused by these random actions.

Usability Testing:

Testing the user-friendliness of an application.

How do we do it?

We check whether the look and feel of the application is good.

Whether the application is simple to understand.

Whether important or frequently used features are easily accessible within 3-4 clicks.

Whether the entire application is easily accessible.

Accessibility testing

Testing the user-friendliness of an application from the point of view of a physically challenged person.

Example:
Verifying that the application works well with screen readers, which read out content for visually impaired users.
Colour contrast and text size: checking that text is readable, with sufficient contrast against the background, and that text can be resized without loss of content.

Compatibility testing

Checking how an application performs on different platforms, hardware and software.

Why do we use it?

The developer may have developed the application on one platform and the tester may have tested it on that same platform; when the product is launched, users might run it on a different platform where it may not work, and customer usage will go down.

To check whether the features work consistently on all platforms, we use compatibility testing.

When do we do it?
When the application is functionally stable on one base platform, we test the same application on other platforms.

Tool: BrowserStack

Test how Amazon Chime performs on different web browsers such as Google Chrome, Mozilla Firefox, Microsoft Edge, and Safari, checking all features.

Ensure that Amazon Chime works on various operating systems like Windows, macOS, and Linux.

Test the application on different devices, including desktops, laptops, tablets, and smartphones.

Test under different network conditions.

Types of application:

Standalone application - e.g. a calculator

Client-server application - e.g. a banking app

Web application - e.g. Instagram

Globalization:

I18N (Internationalization) - testing that the software is developed to support multiple languages.

L10N (Localization) - testing that the software is developed according to a country's standards and culture.

For example, dates in India are written DD/MM/YYYY, while in the US they are written MM/DD/YYYY.
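
A small illustration of that date-format difference using Python's datetime (formatting only; full localization testing also covers currency, language, text direction and so on):

from datetime import date

d = date(2024, 3, 7)
print(d.strftime("%d/%m/%Y"))  # 07/03/2024 - DD/MM/YYYY, as used in India
print(d.strftime("%m/%d/%Y"))  # 03/07/2024 - MM/DD/YYYY, as used in the US
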

Recovery Testing
Testing to ensure a system can recover automatically from crashes or failures.

Scenario: imagine a user is watching their favourite anime on Crunchyroll, and suddenly the application crashes.

Testing steps:

1. Simulate the crash: force the application to crash while the user is streaming.

2. Recovery mechanism: check whether the application automatically restarts after the crash.

3. Session restoration: once the application is back up, verify whether the user can resume watching from where they left off or has to start over.

Expected outcome: the application should restart without requiring user intervention, and the user should be able to continue watching their anime without losing their place.

Difference between retesting and regression testing

Retesting - confirmation testing to check whether a defect is really fixed in the new build.
Regression - testing that the update did not break the existing features of the application.

Retesting is done only for failed test cases; regression is done for both passed and failed test cases.

Retesting is planned; regression is not planned.

In retesting we don't go for automation; in regression we go for automation.

Difference between smoke and sanity testing

Smoke is high-level and wide testing; sanity is deep and narrow testing.

Smoke tries to cover all basic and critical features; sanity takes one feature and tests it thoroughly.

Smoke is positive testing; sanity is both positive and negative testing.

In smoke testing we document scenarios and test cases; in sanity testing we don't document scenarios and test cases.

In smoke testing we go for automation; in sanity testing we don't do automation.

Smoke testing is done by both developers and test engineers; sanity testing is done only by test engineers.

Exploratory testing is structured and has a specific goal to test; ad-hoc testing is unstructured and random.

Exploratory testing documents everything; ad-hoc testing has little to no documentation.

Exploratory testing has an outline of what to test (test charters) and relies on the requirements; ad-hoc testing has no test cases or requirements.

1. Exploratory testing:
- Structured approach: while it doesn't follow a predefined test plan, exploratory testing is guided by a test charter or specific objectives. Testers have a focus and know which areas they want to explore.
- Documentation: testers typically document their findings, including any bugs encountered and the scenarios they tested. This helps in replicating issues and provides insights for future testing.

2. Ad-hoc testing:
- Unstructured approach: ad-hoc testing is more random and informal. Testers do not have a specific plan or objectives and often test without a clear focus.
- Minimal documentation: testers usually do not document their actions or the steps taken during testing. This can make it hard to replicate issues found, as there is often no record of what was done.

White box testing (clear box / glass box)

Here the tester has knowledge of the internal workings of the application, such as the code, structure and logic.

Types: unit testing, integration testing, API testing

Unit: test each part of the code to make sure it works correctly before combining the parts into an application. Used to catch issues early.
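
A minimal example of a unit test in Python, assuming a hypothetical discount function; each small piece of behaviour is checked on its own before the pieces are integrated:

import pytest

# Hypothetical unit under test
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Unit tests, runnable with pytest
def test_apply_discount_normal():
    assert apply_discount(200.0, 10) == 180.0

def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(200.0, 150)
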

Integration example (Chime):

How the login module interacts with the database to store user information.

Regression: verify the code after an update.

API testing: makes sure an API does what it is supposed to do and handles unexpected situations well. E.g. a weather API:

Verify that it returns accurate weather information based on location, date, time, etc.
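
A sketch of API testing with the requests library; the endpoint URL, parameters and response fields below are assumptions made up for illustration, not a real weather service:

import requests

def test_weather_api_returns_forecast_for_location():
    resp = requests.get(
        "https://api.example.com/weather",              # hypothetical endpoint
        params={"city": "Bengaluru", "date": "2024-03-07"},
        timeout=10,
    )
    assert resp.status_code == 200
    body = resp.json()
    assert body["city"] == "Bengaluru"                   # field names are assumptions
    assert "temperature" in body

def test_weather_api_handles_unknown_city():
    resp = requests.get(
        "https://api.example.com/weather",
        params={"city": "Nowhereville"},
        timeout=10,
    )
    assert resp.status_code in (400, 404)                # should fail gracefully, not crash
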

Automation testing

Uses tools to run pre-scripted test cases. It is meant for repetitive tasks where the same tests need to be executed frequently.

Includes unit, integration, regression, smoke testing, etc.

Tools include Selenium and Test Studio.
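
A short Selenium (Python) sketch of a scripted test; the URL and element locators are hypothetical, and in practice such scripts are usually wrapped in a framework such as pytest:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                              # requires a local Chrome setup
try:
    driver.get("https://example.com/login")              # hypothetical login page
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()
    # Simple check that login landed on the expected page
    assert "Dashboard" in driver.title
finally:
    driver.quit()
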

Priority and Severity

Severity - the impact of a defect.

- Blocker: affects the business 100 percent, e.g. a blank page.

- Critical: e.g. a user can shop without paying (a serious issue).

- Major: e.g. discount codes don't apply (affects the user experience).

- Minor: e.g. a small typo in a product description (not serious).

Priority - how urgently a defect should be fixed.

- High: fix the checkout crash immediately (urgent).

- Medium: fix the discount code issue soon (important but not urgent).

- Low: address the typo later (not urgent).

Test case techniques / strategies / design techniques

Equivalence partitioning - divide the inputs into valid and invalid partitions and test one representative value from each, giving roughly 50% positive and 50% negative tests (see the sketch after this list).

Boundary value analysis - test values at and just around the boundaries of the valid input range.

Exploratory testing - exploring the app without any test cases.

Error guessing - guessing what issues a user might encounter (brainstorming likely errors).
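
A sketch of equivalence partitioning and boundary value analysis for a hypothetical age field that accepts 18-60: one representative value is taken from each valid and invalid partition, plus values at and around the two boundaries.

import pytest

def is_valid_age(age: int) -> bool:
    """Hypothetical rule under test: ages 18-60 inclusive are accepted."""
    return 18 <= age <= 60

# Equivalence partitioning: one value per partition (below range, valid, above range)
# Boundary value analysis: values at and just around the boundaries 18 and 60
@pytest.mark.parametrize("age, expected", [
    (5, False),                              # invalid partition below the range
    (30, True),                              # valid partition
    (75, False),                             # invalid partition above the range
    (17, False), (18, True), (19, True),     # lower boundary
    (59, True), (60, True), (61, False),     # upper boundary
])
def test_is_valid_age(age, expected):
    assert is_valid_age(age) == expected
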

Jira
Used for defect tracking. We raise an issue found during testing by entering its details in the Jira tool, and developers use it to fix the software. The same fields can also be submitted through Jira's REST API, as shown in the sketch after this list.

Go to Create.

Project dropdown - select the project that was previously set up in Jira.

Issue type - select the issue type, such as Bug.

Summary - write the basic information about the bug.

Description - describe the complete bug, then the expected and actual results, then the test steps (recommended) for how to reproduce it.

Priority - assign high, medium or low.

Label

Environment
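
Defects are usually raised through the Jira UI as described above, but the same fields can be sent to Jira's REST API; the site URL, credentials and project key in this sketch are hypothetical placeholders:

import requests

JIRA_URL = "https://your-site.atlassian.net"      # hypothetical Jira Cloud site
AUTH = ("you@example.com", "api-token-here")      # hypothetical email + API token

issue = {
    "fields": {
        "project": {"key": "APP"},                # hypothetical project key
        "issuetype": {"name": "Bug"},
        "summary": "Checkout page crashes when applying a discount code",
        "description": "Test steps, expected result and actual result go here.",
        "priority": {"name": "High"},
        "labels": ["checkout", "regression"],
    }
}

resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=issue, auth=AUTH, timeout=30)
print(resp.status_code, resp.json())
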

Bug types

Functional bugs - the behaviour is opposite to the requirements.

Performance bugs - e.g. the application takes too much time to load.

Compatibility bugs - the application doesn't work properly on different platforms, e.g. layout or size issues.

Security bugs - vulnerabilities that can expose the software to threats, such as unauthorized access or data breaches.

Test data - the input used during testing.

Alpha testing - testing done in-house by test engineers before the product is released to the customer; it is performed before beta testing.

Beta testing - thousands of end users test the software and give feedback before the official release to the public.
