Software Testing
By VIKRAM
Contents
Software Quality
Fish Model
V-Model
• Reviews in Analysis
• Reviews in Design
• Unit Testing
• Integration Testing
• System Testing
1. Usability Testing
2. Functional Testing
3. Non Functional Testing
• User Acceptance Testing
• Release Testing
• Testing During Maintenance
• Risks and Ad-Hoc Testing
System Testing Process
• Test Initiation
• Test Planning
• Test Design
• Test Execution
1. Formal Meeting
2. Build Version Control
3. Levels of Test Execution
4. Levels of Test Execution VS Test Cases
5. Level-0(Sanity Testing)
6. Level-1(Comprehensive Testing)
7. Level-2(Regression Testing)
• Test Reporting
• Test Closure
• User Acceptance Testing
• Sign Off
Case Study
Manual Testing VS Automation Testing
WinRunner
Automation Test creation in WinRunner
• Recording Modes
1. Context Sensitive mode
2. Analog mode
• Check Points
1. GUI check point
for single point
for object or window
for multiple objects
2. Bitmap check point
for object or window
for screen area
3. Database check point
default check
custom check
runtime record check
4. Text check point
from object or window
from screen area
• Data Driven Testing
1. From Key Board
2. From Flat Files
3. From Front end Objects
4. From XL Sheets
• Silent mode
• Synchronization point
1. wait
2. for object/window property
3. for object/window bitmap
4. for screen area bitmap
5. Change Runtime settings
• Function Generator
• Administration of WinRunner
1. WinRunner Frame Work
Global GUI Map file
Per Test mode
2. Changes in references
3. GUI Map configuration
4. Virtual object wizard
5. Description Programming
6. Start script
7. Selected applications
• User defined functions
• Compiled Module
• Exception Handling or Recovery Manager
1. TSL exceptions
2. Object exceptions
3. Popup exceptions
• Web Test Option
1. Links coverage
2. Content coverage
3. Web functions
• Batch Testing
• Parameter Passing
• Data Driven Batch Testing
• Transaction Point
• Debugging
• Short/Soft Key Configuration
• Rapid Test Script Wizard(RTSW)
• GUI Spy
QuickTest Professional
• Recording modes
1. General recording
2. Analog recording
3. Low level recording
• Check Points
1. Standard check point
2. Bitmap check point
3. Text check point
4. Textarea check point
5. Database check point
6. Accessibility check point
7. XML check point
8. XML check point(File)
VBScript
• Step Generator
• Data Driven Testing
• DDT through Key Board
• DDT through Front end objects
• DDT through XL sheet
• DDT through flat file
Multiple Actions
• Reusable actions
• Parameters
• Synchronization Point
• QTP FrameWork
• Per Test mode
• Export references
• Regular Expressions
• Object Identification
• Smart Identification
• Virtual Object Wizard
Web Test Option
• Links Coverage
• Content coverage
Recovery Scenario Manager
Batch testing
Parameter Passing
With statement
Active Screen
Output value
Call to WinRunner
Advanced Testing Process
MANUAL TESTING
Software Quality
"The monitoring and measuring of the strength of the development process is called Software Quality Assurance." It is a process-based concept.
"The testing of a deliverable after completion of a process is called Software Quality Control."
QA is specified as Verification and QC is specified as Validation.
*In the above process model, the development stages indicate the SDLC (Software Development Life Cycle) and the lower angle indicates the STLC (Software Testing Life Cycle).
*During the SDLC, organizations follow standards, internal auditing and strategies; this is called SQA.
*After completion of every development stage, organizations conduct testing; this is called QC.
*QA indicates defect prevention. QC indicates defect detection and correction.
*Finally, the STLC indicates Quality Control.
BRS: The Business Requirements Specification defines the requirements of the customer to be developed as software.
SRS: The Software Requirements Specification defines the functional requirements to be developed and the system requirements to be used.
Walk Through: A static testing technique. During this, the responsible people study the document to estimate its completeness and correctness.
Inspection: Also a static testing technique, used to check a specific factor in the corresponding document.
Peer Review: The comparison of similar documents, also called a Point-to-Point review.
HLD: The High Level Design document represents the overall view of the software from root functionality to leaf functionality. The HLD is also known as the External or Architectural design.
LLD: The Low Level Design document represents the internal logic of every functionality. This design is also known as the Internal or Detailed design.
White Box Testing: A program-based testing technique used to estimate the completeness and correctness of the internal program structure.
Black Box Testing: A software-level testing technique used to estimate the completeness and correctness of the external functionality.
NOTE: White Box Testing is also known as Clear Box or Open Box Testing. "The combination of WBT and BBT is called Grey Box Testing."
The above V-Model defines multiple stages of development with multiple stages of testing. Maintaining separate teams for all stages is expensive for small and medium scale companies.
*For this reason, small and medium scale organizations maintain a separate testing team only for System Testing, because this is the bottleneck stage in the software process.
A) Reviews in Analysis
After completion of requirements gathering, the Business Analyst category people develop the SRS with the required functional requirements and system requirements. The same category people conduct reviews on those documents to estimate completeness and correctness. In this review meeting, they follow Walk Throughs, Inspections and Peer Reviews to estimate the factors below.
*Are the requirements complete?
*Are the requirements correct?
*Are the requirements achievable? (practically)
*Are the requirements reasonable? (budget and time)
*Are the requirements testable?
B) Reviews in Design
After completion of Analysis and its reviews, the designer category people prepare the HLDs and LLDs. After completion of designing, the same category people conduct a review meeting to estimate completeness and correctness through the factors below.
*Are they understandable?
*Are they complete?
*Are they correct?
*Are they followable?
*Do they handle errors?
C) Unit Testing
After completion of analysis and design, the programming category people in the organization start coding.
"The Analysis and Design level reviews are also known as Verification Testing." After completion of Verification Testing, the programmers start coding and verify every program's internal structure using WBT techniques as follows.
1. Basis Path Testing: (whether it is executing or not) In this, the programmers check all executable areas in the program to estimate whether that program is running or not. To conduct this testing, the programmers follow the approach below.
*Write the program w.r.t. the design logic (HLDs and LLDs).
*Prepare the flow graph.
*Calculate the number of individual paths in that flow graph, called the Cyclomatic Complexity.
*Run the program more than once to cover all individual paths.
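The cyclomatic complexity step above can be sketched in Python: given a flow graph with E edges and N nodes, V(G) = E - N + 2 gives the number of independent paths the program must be run through. The flow graph below is a hypothetical example for a program with a single if/else.

```python
# Hypothetical flow graph for a program with one if/else decision,
# written as an adjacency list: node -> list of successor nodes.
flow_graph = {
    1: [2],     # entry -> decision
    2: [3, 4],  # decision -> true branch, false branch
    3: [5],     # true branch -> exit
    4: [5],     # false branch -> exit
    5: [],      # exit
}

def cyclomatic_complexity(graph):
    # V(G) = E - N + 2, where E = edges and N = nodes.
    nodes = len(graph)
    edges = sum(len(successors) for successors in graph.values())
    return edges - nodes + 2

# 5 edges - 5 nodes + 2 = 2 independent paths, so the program must be
# run at least twice to cover all basis paths.
print(cyclomatic_complexity(flow_graph))  # 2
```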
After completion of Basis Path Testing, the programmers concentrate on the correctness of inputs and outputs using Control Structure Testing.
2. Control Structure Testing: In this, the programmers verify every statement, condition and loop in terms of completeness and correctness of I/O (example: debugging).
3. Program Technique Testing: During this, the programmer calculates the execution time of the program. If the execution time is not reasonable, the programmer makes changes to the structure of the program without disturbing its functionality.
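As a sketch of Program Technique Testing, the snippet below times two functionally equivalent programs with Python's standard `timeit` module; both functions are illustrative examples, not from the text. The loop-based version can be restructured into the closed-form version without disturbing the functionality.

```python
import timeit

# Two functionally equivalent programs: same output, different structure.
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_formula(n):
    # Restructured version: closed form, constant time.
    return n * (n + 1) // 2

# Functionality is undisturbed by the restructuring.
assert sum_loop(1000) == sum_formula(1000) == 500500

# Program Technique Testing: compare execution times of the two structures.
t_loop = timeit.timeit(lambda: sum_loop(1000), number=1000)
t_formula = timeit.timeit(lambda: sum_formula(1000), number=1000)
print(f"loop: {t_loop:.4f}s, formula: {t_formula:.4f}s")
```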
D) Integration Testing
After completion of dependent program development and Unit Testing, the programmers interconnect them to form a complete system. After completion of the interconnection, the programmers check the completeness and correctness of that interconnection. This Integration Testing is also known as Interface Testing. There are 4 approaches to interconnect programs and test those interconnections.
1. Top-Down Approach: In this approach, programmers interconnect the main module and the completed sub modules without using under-construction sub modules. In place of the under-construction sub modules, the programmers use temporary or alternative programs called Stubs. These stubs are also known as Called Programs, because they are called by the main module.
2. Bottom-Up Approach: In this approach, programmers interconnect the completed sub modules without the main module, which is under construction. The programmers use a temporary program instead of the main module, called a Driver or Calling Program.
3. Hybrid Approach: A combination of the Top-Down and Bottom-Up approaches, using stubs and drivers as needed. It is also known as the Sandwich approach.
4. System Approach: The programmers complete the development and Unit Testing of all modules, and then integrate them all at once. This approach is known as the Big Bang approach.
E) System Testing
After completion of all required module integration, the development team releases a software build to a separate testing team in the organization. The software build is also known as the Application Under Test (AUT). System Testing is classified into 3 levels: Usability Testing, Functional Testing (using Black Box Testing techniques) and Non-Functional Testing (an expensive level of testing).
1. Usability Testing
In general, the separate testing team starts test execution with Usability Testing, to estimate the user-friendliness of the software build (compare judging the appearance of a bike). During this, test engineers apply the sub tests below.
Case study
2. Functional testing
After completion of user interface testing on the responsible screens in the application build, the separate testing team concentrates on the correctness and completeness of the requirements in that build. In this testing, the separate testing team uses a set of Black Box Testing techniques, such as Boundary Value Analysis, Equivalence Class Partitioning, Error Guessing, etc.
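As an illustration of the first two techniques, Boundary Value Analysis for a size-limited field can be sketched as follows (the 4-16 range is a hypothetical field limit):

```python
# Boundary Value Analysis for a field accepting 4 to 16 characters
# (hypothetical limits): test just below, at, and just above each boundary.
def bva_sizes(minimum, maximum):
    return sorted({minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1})

def expected_result(size, minimum, maximum):
    # Equivalence classes: sizes inside the range pass, outside fail.
    return "pass" if minimum <= size <= maximum else "fail"

for size in bva_sizes(4, 16):
    print(size, expected_result(size, 4, 16))
```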
This testing is classified into 2 sub tests as follows
a) Functionality Testing: During this test, test engineers validate the completeness and correctness of every functionality. This testing is also known as Requirements Testing. In this test, the separate testing team validates the correctness of every functionality through the coverages below.
*GUI coverage or Behavioral coverage (valid changes in the properties of objects and windows in the application build).
*Error handling coverage (the prevention of wrong operations with meaningful error messages, like displaying a message before closing a file without saving it).
*Input Domain coverage (the validity of input values in terms of size and type, like giving alphabets to an age field).
*Manipulations coverage (the correctness of outputs or outcomes).
*Order of functionalities (the existence of functionality w.r.t. the customer requirements).
*Back end coverage (the impact of a front end screen operation on the back end table content in the corresponding functionality).
NOTE: The above coverages are applicable to every functionality in the application build, with the help of Black Box Testing techniques.
b) Sanitation Testing: Also known as Garbage Testing. During this test, the separate testing team detects extra functionalities in the software build w.r.t. the customer requirements (like an extra link on the sign-in page).
3) Non-Functional Testing
This is also a mandatory testing level in the System Testing phase, but it is expensive and complex to conduct. During this test, the testing team concentrates on the characteristics of the software.
a) Recovery/Reliability Testing: During this, the test engineers validate whether the software build changes from an abnormal state to a normal state or not.
c) Configuration Testing: Also known as Hardware Compatibility Testing. During this, the testing team validates whether the software build supports different technology devices or not (for example, different types of printers, different types of networks, etc.).
d) Inter-System Testing: Also known as End-to-End Testing. During this, the testing team validates whether the software build co-exists with other software or not (to share common resources).
EX: E-Server
f) Data Volume Testing: Also known as Storage Testing or Memory Testing. During this, the testing team calculates the peak limit of data handled by the software build (EX: hospital software). It is also known as Mass Testing.
EX: MS Access technology oriented software builds support 2GB of data as a maximum.
g) Load Testing: Also known as Performance or Scalability Testing. Load or scale means the number of concurrent users (at the same time) operating a software. The execution of the software build under the customer-expected configuration and customer-expected load to estimate performance is Load Testing (the inputs are the customer-expected configuration and load; the output is performance). Performance means the speed of processing.
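A minimal load-testing sketch, assuming a simulated transaction in place of the real application under test:

```python
import threading
import time

# A toy stand-in for one transaction against the application under test
# (hypothetical; a real load test would issue requests to the build).
def transaction(results, index):
    start = time.perf_counter()
    time.sleep(0.01)  # simulated processing time of one request
    results[index] = time.perf_counter() - start

# Customer-expected load: e.g. 10 concurrent users.
users = 10
results = [0.0] * users
threads = [threading.Thread(target=transaction, args=(results, i))
           for i in range(users)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Performance = speed of processing under the expected load.
print("average response:", sum(results) / users)
```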
h) Stress Testing: The execution of the software build under the customer-expected configuration and various load levels to estimate stability or continuity is called Stress Testing or Endurance Testing.
i) Security Testing: Also known as Penetration Testing. During this, the testing team validates authorization, access control and encryption/decryption. Authorization indicates the validity of users of the software, like a student entering a class. Access control indicates the authorities of valid users to use features, functionalities or modules in the software; for example, after entering, the student has limited resources to use. Encryption and decryption procedures prevent third-party access.
NOTE: In general, the separate testing team covers authorization and access control checking, while the development people cover encryption/decryption checking.
G) Release Testing
After completion of UAT and the resulting modifications, the project manager defines a Release or Delivery team with a few developers, a few testers and a few hardware engineers. This release team goes to the responsible customer site and conducts Release Testing, also called Port Testing or Green Box Testing. In this testing, the release team observes the factors below at the customer site.
*Compact installation (fully installed or not).
*Overall functionality.
*Input device handling (keyboard, mouse, etc.).
*Output device handling (monitor, printer, etc.).
*Secondary storage device handling (CD drive, hard disk, floppy, etc.).
*OS error handling (reliability).
*Co-existence with other software applications.
After completion of Port Testing, the responsible release team conducts training sessions for the end users or customer-site people.
Case study:
Testing phase/level/stage | Responsible | Testing technique
In analysis | Business analyst | Walk throughs, Inspections and Peer reviews
In design | Designer | Walk throughs, Inspections and Peer reviews
Unit testing | Programmer | White box testing
Integration/Interface testing | Programmer | Top-down, Bottom-up and Hybrid/System approach
System testing (usability, functionality and non-functionality) | Testing team | Black box testing
UAT | Real/Model customers | Alpha and Beta testing
Release testing | Release team | Port testing factors
Testing during maintenance | CCB | Test software changes (Regression testing)
Test Initiation
In general, the System Testing process starts with Test Initiation or Test Commencement. In this stage, the Project Manager or Test Manager selects a reasonable approach or methodology to be followed by the separate testing team. This approach or methodology is called the Test Strategy.
The Test Strategy document consists of the components below.
1. Scope and Objective: The importance of testing in this project.
2. Business Issues: The cost and time allocation for testing (of the total project cost, roughly 36% goes to testing and the rest to development and maintenance).
3. Test Approach: The selected list of reasonable testing factors or issues w.r.t. the requirements in the project, the scope of the requirements, and the risks involved in testing.
4. Roles and Responsibilities: The names of the jobs in the testing team and their responsibilities.
5. Communication and Status Reporting: The required negotiations between every two consecutive jobs in the testing team.
6. Test Automation and Tools: The importance of test automation in this project's testing and the names of the available testing tools in the organization.
7. Defect Reporting and Tracking: The required negotiation between developers and testers to report and resolve defects.
8. Testing Measurements and Metrics: The selected lists of measures and metrics to estimate the testing process.
9. Risks and Assumptions: The list of expected risks that may come up in the future, and the solutions to overcome them.
10. Change and Configuration Management: The management of deliverables related to software development and testing.
11. Training Plan: The required number of training sessions for the testing team before starting the current project's testing process.
In the above example, 9 test factors or issues were finalized by the PM, to be applied by the testing team in the current project's System Testing.
Test Planning
After preparation of the Test Strategy document with the required details, the test lead category people define the test plan in terms of what to test, how to test, when to test and who is to test.
In this stage, the test lead prepares the system test plan and then divides that plan into module test plans (a master test plan into detailed test plans). In this test planning, the test lead follows the approach below to prepare the test plans.
a) Testing team formation: In general, the test planning process starts with testing team formation by the test lead. In this team formation, the test lead depends on the factors below.
*Project size (EX: number of functional points).
*Availability of test engineers.
*Available test duration.
*Availability of test environment resources (EX: testing tools).
b) Identify tactical risks: After completion of testing team formation, the test lead analyzes the possible risks w.r.t. the team. Example risks are:
*Lack of knowledge of the project requirements domain.
*Lack of time.
*Lack of resources.
*Delays in delivery.
*Lack of documentation.
*Lack of development process seriousness.
*Lack of communication.
c) Prepare test plans: After completion of testing team formation and risk analysis, the test lead concentrates on developing the master test plan and the detailed test plans. Every test plan document follows a fixed format, IEEE 829 (IEEE: Institute of Electrical and Electronics Engineers). The IEEE 829 standard is specially designed for test documentation. The format is:
1. Test Plan Id: A unique number or name for future reference.
2. Introduction: About the project.
3. Test Items: The names of all modules or features.
4. Features to be tested: The names of the modules or features to test.
5. Features not to be tested: The names of the modules or features which are already tested.
(3, 4 and 5 indicate what to test.)
6. Tests to be applied: The selected list of testing techniques to be applied (from the Project Manager's Test Strategy).
7. Test Environment: The required hardware and software, including testing tools.
8. Entry Criteria: When the test engineers are able to start test execution to find defects in the software build:
*all valid test cases prepared;
*test environment established;
*stable build received from the developers.
9. Suspension Criteria: When the test engineers interrupt test execution:
*the test environment is not working;
*a high-severity bug or show-stopper problem is detected;
*the pending defects are not serious but numerous (called a quality gap).
10. Exit Criteria: When the test engineers stop test execution:
*all major bugs are resolved;
*all modules or features are tested;
*the scheduled time has been crossed.
11. Test Deliverables: The names of the testing documents to be prepared by the test engineers:
*test scenarios;
*test case documents;
*test logs;
*defect logs;
*summary reports.
(6 to 11 indicate how to test.)
12. Staff and Training Needs: The selected names of the test engineers and the required number of training sessions.
13. Responsibilities: The work allocation in terms of test engineers vs. requirements, or test engineers vs. testing techniques.
(12 and 13 indicate who is to test.)
14. Schedule: Dates and times. (It indicates when to test.)
15. Risks and Assumptions: The previously analyzed list of risks and their assumptions.
16. Approvals: The signatures of the test lead and the project manager or test manager.
d) Review test plans: After completion of the master and detailed test plan preparation, the test lead reviews the documentation for completeness and correctness. In that review meeting, the test lead depends on the following factors:
*requirements-oriented plan review;
*testing-techniques-oriented plan review;
*risks-oriented plan review.
After completion of this review, the project management conducts training sessions for the selected test engineers. In this training period, the project management invites subject experts or domain experts to share their knowledge with the engineers.
Test Design
After completion of the required training, the responsible test engineers concentrate on test case preparation. Every test case defines a unique test condition to be applied to the software build. There are 3 methods to prepare test cases:
*Functional and system specification based test case design.
*Use case based test case design.
*User interface or application based test case design.
1. Functional and system specification based: In general, most test engineers prepare test cases depending on the functional and system specifications in the SRS.
From the above model, test engineers study all the responsible functional and system specifications to prepare test cases.
Approach:
Step 1: Gather all responsible functional and system specifications from the SRS (available in the configuration repository).
Step 2: Select one specification and its dependencies.
Step 3: Study that specification and identify the base state, inputs required, outputs or outcomes, normal flow, end state, alternative flows and exceptions.
Step 4: Prepare test case titles or scenarios.
Step 5: Review those titles and then prepare the test case documents.
Step 6: Go to Step 2 until all specifications have been studied and test cases prepared.
Specification 1:
A login process allows a userid and password to authorize users. The userid takes alphanumerics in lower case, from 4 to 16 characters long. The password object allows alphabets in lower case, from 4 to 8 characters long. Prepare test case scenarios or titles.
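A data matrix for the userid field of this specification could be verified with a sketch like the one below; the regular expression encodes the 4-16 lowercase alphanumeric rule, and the sample values are hypothetical test data.

```python
import re

# The userid rule from Specification 1: lowercase alphanumerics, 4-16 chars.
USERID_RE = re.compile(r"^[a-z0-9]{4,16}$")

def check_userid(value):
    return "pass" if USERID_RE.fullmatch(value) else "fail"

# Data matrix: boundary and equivalence-class values with expected results.
matrix = {
    "abc1": "pass",    # minimum length
    "a" * 16: "pass",  # maximum length
    "abc": "fail",     # below minimum length
    "a" * 17: "fail",  # above maximum length
    "ABCD": "fail",    # upper case not allowed
    "ab_d": "fail",    # special character not allowed
}
for value, expected in matrix.items():
    assert check_userid(value) == expected
print("data matrix verified")
```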
Specification 2:
In an insurance application, users can apply for different types of insurance policies. When a user applies for Type A insurance, the system asks for the age of that user. The age value should be greater than 16 years and less than 70 years. Prepare test case titles or scenarios.
Test case title 1: check Type A selection as the insurance type.
Test case title 2: check that focus moves to age after selection of Type A.
Test case title 3: check the age value.
Specification 3:
In a shopping application, the users can place purchase orders for different types of items. Every purchase order allows the user to select an item number and enter a quantity up to 10. Every purchase order returns the single-item price and the total amount. Prepare test case titles or scenarios.
Test case title 1: check item number selection.
Test case title 2: check the quantity value.
Test case title 3: check the return values using Total amount = Price * Quantity.
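Test case title 3 (manipulations coverage) can be sketched as a check on the returned values; the item catalogue and function name here are hypothetical stand-ins for the application under test.

```python
# Hypothetical item catalogue: item number -> single-item price.
PRICES = {101: 250.0, 102: 99.5}

def purchase_order(item_number, quantity):
    # Input domain checks from the specification: known item, quantity 1-10.
    if item_number not in PRICES:
        raise ValueError("unknown item number")
    if not 1 <= quantity <= 10:
        raise ValueError("quantity must be 1-10")
    price = PRICES[item_number]
    # Manipulation under test: Total amount = Price * Quantity.
    return price, price * quantity

price, total = purchase_order(101, 4)
print(price, total)  # 250.0 1000.0
```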
Specification 4:
A door opens when a person comes in front of it, and the door closes when that person goes inside. Prepare test case titles or scenarios.
Test case title 1: check the door opens.
Test case title 2: check the door closes.
Test case title 3: check the door's operation when a person is standing in the middle of the doorway.
Specification 5:
In an e-banking application, users connect to the bank server using an internet connection. In this application, users fill in the fields below to log in to the bank server.
Password - 6-digit number.
Area code - 3-digit number, optional.
Prefix - 3-digit number that does not start with 0 or 1.
Suffix - 6-character alphanumeric.
Commands - cheque deposit, money transfer, mini statement and bill pay.
Prepare test case titles or scenarios.
Test case title 1: check the password.
Specification 6:
Prepare test case titles or scenarios for a computer shutdown operation.
Test case title 1: check Shutdown option selection using the Start menu.
Test case title 2: check Shutdown option selection using Alt+F4.
Test case title 3: check the shutdown operation using the command prompt.
Test case title 4: check the shutdown operation using the Shutdown option in the Start menu.
Test case title 5: check the shutdown operation when a process is running.
Test case title 6: check the shutdown operation using the power-off button.
Specification 7:
Prepare test case titles for washing machine operations.
Test case title 1: check the power supply to the washing machine.
Test case title 2: check the door opens.
Test case title 3: check water filling.
Test case title 4: check clothes filling.
Test case title 5: check the door closes.
Test case title 6: check the door closing when clothes overflow.
Test case title 7: check selection of the washing settings.
Test case title 8: check the washing operation.
Test case title 9: check the washing operation with an improper power supply.
Test case title 10: check its operation when the door is opened in the middle of the process (security testing).
Test case title 11: check its operation when water leaks from the door (security testing).
Test case title 12: check its operation with a clothes overload (stress testing).
Test case title 13: check with improper settings.
Test case title 14: check with any machinery problem.
Specification 8:
Money withdrawal from an ATM, with all rules and regulations.
Test case title 1: check ATM card insertion.
Test case title 2: check operation with the card inserted the wrong way.
Test case title 3: check operation with an invalid card inserted (like another bank's card, expired, scratched, etc.).
Test case title 4: check PIN number entry.
Test case title 5: check operation when the wrong PIN number is entered 3 times consecutively.
Test case title 6: check language selection.
Test case title 7: check account type selection.
Test case title 8: check operation when the wrong account type is selected w.r.t. the inserted card.
Test case title 9: check withdrawal option selection.
Test case title 10: check amount entry.
Test case title 11: check operation when an amount with the wrong denominations is entered (EX: withdrawal of Rs 999).
Test case title 12: check successful withdrawal operation (received the correct amount, got the right receipt and was able to take the card back).
Test case title 13: check the withdrawal operation when the given amount is greater than the available balance.
Test case title 14: check the withdrawal operation when the ATM machine has insufficient cash.
Test case title 15: check the withdrawal operation when the ATM has a machinery or network problem.
Test case title 16: check the withdrawal operation when the requested amount is greater than the day limit for the card.
Test case title 17: check the withdrawal operation when the current transaction number is greater than the number of transactions allowed per day.
Test case title 18: check the withdrawal operation when cancel is clicked after insertion of the card.
Test case title 19: check the withdrawal operation when cancel is clicked after entering the PIN number.
Test case title 20: check the withdrawal operation when cancel is clicked after language selection.
Test case title 21: check the withdrawal operation when cancel is clicked after account type selection.
Test case title 22: check the withdrawal operation when cancel is clicked after withdrawal selection.
Test case title 23: check the withdrawal operation when cancel is clicked after entry of the amount.
NOTE: After completion of the required test case titles or scenarios, the test engineers prepare the test case documents with all required details.
11) Test case pass or fail criteria: the final result of the test case after execution on the build or AUT.
NOTE: In the above test case format, the test engineers prepare a test procedure when the test case covers an operation, and they prepare a data matrix when the test case covers an object (taking inputs).
NOTE: In general, the test engineers do not fill in the full lengthy test case format above. To save time, they fill in some of the fields and keep the remaining field values in mind.
NOTE: In general, the test engineers prepare the test case documents in MS Excel or in an available test management tool (like TestDirector).
Specification 9:
A login process authorizes users using a userid and password. The userid object allows alphanumerics in lower case, from 4 to 16 characters long. The password object allows alphabets in lower case, from 4 to 8 characters long. Prepare test case documents.
Document1:
*test case id: TC_Login_Sri_14_11_06_1.
*test case name: check userid.
*test suit id: TS_Login.
*priority: p0.
*test setup: userid object is taking inputs.
*data matrix:
Document2:
*test case id: TC_Login_Sri_14_11_06_2.
*test case name: check password.
*test suit id: TS_Login.
*priority: p0.
*test setup: password object is taking inputs.
*data matrix:
Document3:
*test case id: TC_Login_Sri_15_11_06_3.
*test case name: check login operation.
*test suit id: TS_Login.
*priority: p0.
*test setup: valid and invalid userid and password object values given.
*test procedure:
Specification 10:
In a bank application, the bank employees create fixed deposit forms with the help of customer-given data. The fixed deposit form takes the values below from the bank employees.
Depositor name: alphabets in lower case with the initial capitalized.
Amount: 1500-100000.
Tenure: up to 12 months.
Interest: numeric with one decimal.
In this fixed deposit operation, if the tenure > 10 months then the interest must also be greater than 10%. Prepare test case documents.
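The tenure/interest rule can be expressed as a small predicate for the test procedure (a sketch; field-level validation of the other inputs is omitted):

```python
# Specification 10 rule: if tenure > 10 months, the interest
# must also be greater than 10%.
def fd_rule_ok(tenure_months, interest_percent):
    if tenure_months > 10:
        return interest_percent > 10.0
    return True  # rule does not constrain shorter tenures

print(fd_rule_ok(11, 10.5), fd_rule_ok(11, 9.5), fd_rule_ok(8, 9.5))
# True False True
```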
Document1:
*test case id: TC_FD_Sri_15_11_06_1.
*test case name: check depositor name.
*test suit id: TS_FD.
*priority: p0.
*test setup: depositor name is taking inputs.
*data matrix:
Document2:
*test case id: TC_FD_Sri_15_11_06_2.
*test case name: check amount.
*test suit id: TS_FD.
*priority: p0.
*test setup: amount is taking inputs.
*data matrix:
Document3:
*test case id: TC_FD_Sri_15_11_06_3.
*test case name: check tenure.
*test suit id: TS_FD.
*priority: p0.
*test setup: tenure is taking inputs.
*data matrix:
Document4:
*test case id: TC_FD_Sri_15_11_06_4.
*test case name: check interest.
*test suit id: TS_FD.
*priority: p0.
*test setup: interest is taking inputs.
*data matrix:
Document5:
*test case id: TC_FD_Sri_15_11_06_5.
*test case name: check fixed deposit operation.
*test suit id: TS_FD.
*priority: p0.
*test setup: valid and invalid values are available in hand.
*test procedure:
Document6:
*test case id:TC_FD_Sri_15_11_06_6.
*test case name:check tenure and interest rule.
*test suit id:TS_FD.
*priority:p0.
*test setup:valid and invalid values are available.
*test procedure:
Specification 11:
Readers Paradise is a library management system. This software allows new users through registration. In this new registration, the software takes details from the users and then returns (o/p) a person identity number like RP_Date_XXXX (EX: RP_15_11_06_1111). Fields in the registration form:
User name: alphabets in capitals.
Address: street name (alphabets), city name (alphabets), and pin code (numerics).
DOB: day, month, year as valid (in this project the date separator / is taken automatically).
e-mail id: valid ids, optional ([email protected]).
userid: 1-256 characters and 0-9.
sitename: 1-256 characters and numbers 0-9.
sitetype: 1-3 characters.
Prepare test case documents.
Document1:
*test case id:TC_RP_Sri_15_11_06_1.
*test case name:check user name.
*test suit id:TS_RP.
*priority:p0.
*test setup:user name object is taking inputs.
*data matrix:
Document2:
*test case id:TC_RP_Sri_15_11_06_2.
*test case name:check street name.
*test suit id:TS_RP.
*priority:p0.
*test setup:street object is taking inputs.
*data matrix:
Document3:
*test case id:TC_RP_Sri_15_11_06_3.
*test case name:check city name.
*test suit id:TS_RP.
*priority:p0.
*test setup:city name object is taking inputs.
*data matrix:
Document4:
*test case id:TC_RP_Sri_15_11_06_4.
*test case name:check pincode.
*test suit id:TS_RP.
*priority:p0.
*test setup:pincode object is taking inputs.
*data matrix:
Document5:
*test case id:TC_RP_Sri_15_11_06_5.
*test case name:check date.
*test suit id:TS_RP.
*priority:p0.
*test setup:date object is taking inputs.
*data matrix:
Decision table:
Day Month Year
01-31 01,03,05,07,08,10,12 00-99
01-30 04,06,09,11 00-99
01-28 02 00-99
01-29 02 Leap year in b/w 00-99
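The decision table above can be sketched as a small validation function. This is an illustrative Python sketch only, not part of the Readers Paradise build; the leap-year test assumes a simple divisible-by-4 rule for the two-digit years 00-99, since the specification does not name the century.

```python
def is_valid_date(day: int, month: int, year: int) -> bool:
    """Validate a date against the decision table: 31-day months,
    30-day months, and February with the leap-year rule."""
    if not (0 <= year <= 99):
        return False
    if month in (1, 3, 5, 7, 8, 10, 12):      # 31-day months
        return 1 <= day <= 31
    if month in (4, 6, 9, 11):                # 30-day months
        return 1 <= day <= 30
    if month == 2:                            # February
        max_day = 29 if year % 4 == 0 else 28  # assumed leap rule
        return 1 <= day <= max_day
    return False                              # month outside 01-12
```

Each row of the table maps to one branch, so boundary values from the data matrix (31/04, 29/02 in a non-leap year) fall out of the invalid branches directly.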
Document6:
*test case id:TC_RP_Sri_16_11_06_6.
*test case name:check e-mail id.
*test suit id:TS_RP.
*priority:p0.
*test setup:e-mail object is taking inputs.
*data matrix:
Document7:check registration.
*test case id:TC_RP_Sri_16_11_06_7.
*test case name:check registration.
*test suit id:TS_RP.
*priority:p0.
*test setup:all valid and invalid values available in hand.
*test procedure:
2.Usecases based Test case design: Usecases are more elaborate than the functional and system specifications
in the SRS. In this usecase-oriented test case design, test engineers do not make their own assumptions.
A usecase defines how to use a functionality; every test case defines how to test a functionality, and every
test case is derived from a usecase. Depending on the agreement, the responsible development team
management people or the responsible testing team management people develop the usecases from the
functional and system specifications in the SRS.
Usecase format:
1)usecase id: a unique number or name.
2)usecase description: the summary of the requirement.
3)actors: the types of users who access this requirement in our application build.
4)preconditions: necessary tasks to do before starting this requirement functionality.
5)event list: a step-by-step procedure with required input and expected output.
6)post conditions: necessary tasks to do after completion of this requirement functionality.
7)flow diagram: a pictorial presentation of the requirement functionality.
8)prototype: a sample screen to indicate the requirement functionality.
9)business rules: a list of rules and regulations, if applicable.
10)alternative flows: a list of alternative events to do the same requirement functionality, if possible.
11)dependent usecases: a list of usecases related to this usecase.
Depending on usecases formatted as above, test engineers prepare test cases without any assumptions of their
own, because the usecases provide all details about the corresponding requirement functionality.
Usecase1:
*usecase id:UC_Login.
*usecase desc:a login process allows user id and password to authorize users.
*actors:registered users(they have valid id and password).
*pre conditions: every user is registered before going to login.
*event list:activate login window.
enter user id as alpha numerics in lower case from 4 to 16 char long.
enter password as alphabets in lower case 4 to 8 characters.
click SUBMIT button.
*post conditions: mail box opened after successful login; error message for unsuccessful login.
*flow diagram:
*prototype:
Document2:
*test case id: TC_Login_Sri_16_11_06_2.
*test case name: check password.
*test suit id: TS_Login.
*priority: p0.
*test setup: password object is taking inputs.
*data matrix:
Document3:
*test case id: TC_Login_Sri_16_11_06_3.
*test case name: check login.
*test suit id: TS_Login.
*priority: p0.
*test setup: login form is verified.
*test procedure:
Step no Task/Event Required input Expected output
1 Activate login window None Userid and pwd are empty by default
2 Enter userid and pwd Userid and pwd SUBMIT button enabled
3 Click SUBMIT valid & valid Mail box opened
valid & invalid Error message
invalid & valid Error message
valid & blank Error message
blank & value Error message
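The decision rows in the procedure above can be sketched as one function. This is a hedged illustrative sketch, not the real build: `valid_users` is a made-up stand-in for the actual authentication store, and the sample credential is invented.

```python
# Assumed sample credential, purely for illustration.
valid_users = {"testuser": "secret12"}

def login_result(userid: str, pwd: str) -> str:
    """Return the expected output for each userid/pwd row of the table."""
    # "valid & blank" and "blank & value" rows
    if not userid or not pwd:
        return "Error message"
    # "valid & valid" row
    if valid_users.get(userid) == pwd:
        return "Mail box opened"
    # "valid & invalid" and "invalid & valid" rows
    return "Error message"
```

A test batch can loop over the five input rows and compare each returned string against the Expected output column.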
Usecase2:
*use case id: UC_Book_Issue.
*usecase desc: the administrator opens the book issue form and enters a book id to know the availability and,
if available, issues the book to a valid user.
*actors: administrators and valid users.
*pre conditions: the administrator and user should be registered.
*event list: check for book by entering the bookid in bookid field and click GO (EX: RP_XXXX).
Check the availability from the message window that is displayed on GO click.
For a given user id, verify whether the user is valid or not (EX: RP_Date_XXXX).
If the user is valid, then the message window is displayed on GO click.
Issue that book through a click on the Issue button.
If the book is not available or the user is not valid, then click Cancel.
*post condition: issue the book.
*flow diagram:
*prototype:
Document2:
*test case id: TC_BookIssue _Sri_17_11_06_2.
*test case name: check GO for availability verification.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: valid and invalid book ids are available in hand.
*test procedure:
Step no Task/Event Required input Expected
1 activate BookIssue window none bookid object focused
2 enter bookid and click GO available bookid message as Available
unavailable message as Unavailable
Document3:
*test case id: TC_BookIssue _Sri_17_11_06_3.
*test case name: check user id value.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: user id object takes some value.
*test matrix:
Document4:
*test case id: TC_BookIssue _Sri_17_11_06_4.
*test case name: check user id validation by GO.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: valid and invalid user ids are available in hand.
*test procedure:
Step no Task/Event Required input Expected
1 activate BookIssue window none bookid object focused
2 enter bookid and click GO available bookid message as Available and focus to user id
3 enter user id and click GO valid id message as Issue book permitted
invalid id message as Not permitted; focus to Cancel
Document5:
*test case id: TC_BookIssue _Sri_17_11_06_5.
*test case name: check BookIssue operation.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: valid bookid and valid user id available in hand.
*test procedure:
Step no Task/Event Required input Expected
1 activate BookIssue window none bookid object focused
2 enter bookid and click GO available bookid message as Available and focus to user id
3 enter user id and click GO valid id message as Valid user and Issue button enabled
4 click Issue none "Acknowledgement" message
Document6:
*test case id: TC_BookIssue _Sri_17_11_06_6.
*test case name: check Cancel operation.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: invalid bookid and invalid user id available in hand.
*test procedure:
Step no Task/Event Required input Expected
1 activate Book Issue window none bookid object focused
2 enter bookid and click GO unavailable bookid focus to Cancel button
3 enter user id and click GO invalid id focus to Cancel button
NOTE: In general, most testing teams follow functional and system specification based test case design
depending on the SRS. In this method, the test engineers apply their knowledge depending on the SRS, previous
experience, discussions with others, similar s/w browsing, internet surfing, etc.
3.User Interface test case design: In general, test engineers prepare test cases for functional and
non-functional tests depending on either of the previous 2 methods. To prepare test cases for Usability testing,
test engineers depend on user interface based test case design.
In this method, test engineers identify the interests of customer-site people and the user interface rules in the
market.
Example test cases:
Test case title1: check spelling.
Test case title2: check font uniqueness in every screen.
Test case title3: check style uniqueness in every screen.
Test case title4: check labels initial letters as capitals.
Test case title5: check alignment of object in every screen.
Test case title6: check color contrast in every screen.
Test case title7: check name spacing uniqueness in every screen.
Test case title8: check spacing uniqueness in b/w label and object.
Test case title9: check spacing in b/w objects.
Test case title10: check dependent objects grouping.
Test case title11: check borders of object groups.
Test case title12: check tool tips of icons in all screens.
Test case title13: check abbreviations or full forms.
Test case title14: check multiple data objects positions in every screen (Ex: Dropdown list box, Menus
(always at top), Tables and data windows).
Test case title15: check scroll bars in every screen.
Test case title16: check short cut keys in keyboards to operate our build.
Test case title17: check visibility of all icons in every screen.
Test case title18: check help documents (Manual support testing).
Test case title19: check identity controls (EX: title of s/w, version of s/w, logo of company, copyright of
s/w).
NOTE: The above usability test cases are applicable to any GUI application for Usability testing. For these test
cases, testers give p2 as the priority.
NOTE: Most of the above usability test cases are STATIC because they are applicable on the build without
operating it.
Case Study
Project – Flight Reservation.
Feature to be tested – login.
Tests to be conducted – usability, functional and non functional (compatibility and performance) testing.
Test case titles:
1. Functional testing:
*check agent name.
*check password.
*check login operation.
*check Cancel operation.
*check help button.
2. Non functional testing:
*compatibility testing: check login in windows 2000, xp, win NT. (these are customer expected
platforms)
*load testing: check login performance under customer expected load.
*stress testing: check login reliability under various load levels.
3. Usability testing:
*refer user interface test cases examples given.
Test Execution
After completion of test case design and review, the testing people concentrate on test execution. In
this stage the testing people communicate with the development team for feature negotiations.
a)Formal Meeting: The test execution process starts with a small formal meeting involving the PM,
project leads, developers, test leads and test engineers. In this meeting the members confirm the architecture
of the required environment.
b)Build version control: After confirming the required environment, the formal review meeting members
concentrate on build version control. Under this concept the development people assign a unique version
number to every modified build after solving defects. This version numbering system is understandable to the
testing team.
c)Levels of test execution: After completion of the formal review meeting, the testing people concentrate
on finalizing the test execution levels.
*Level -0 testing on Initial build.
*Level-1 testing on Stable build or Working build.
*Level-2 testing on Modified build.
*Level-3 testing on Master build.
*UAT on release build.
*Finally Golden build is released to customer site.
e)Level-0 (Sanity testing): Practically, the test execution process starts with sanity test execution to
estimate the stability of the build. In this sanity testing, the test engineers concentrate on the factors below
through coverage of the basic functionality in the build.
*Understandable (on seeing the project).
*Operable (no hanging during operation).
*Observable (its flow can be known).
*Controllable (do the operation and undo the operation).
*Consistent (in functionality).
*Simple (less navigation required).
*Maintainable (in the tester's system).
*Automatable (whether some tools are applicable or not).
This Level-0 sanity testing estimates the testability of the build. It is also known as Smoke testing or
Testability testing or Build Acceptance testing or Build Verification testing or Tester Acceptance testing or
Octangle testing (after the above 8 testing factors).
f)Level-1 (Comprehensive testing): After completion of sanity testing, the test engineers conduct Level-1
real testing to detect defects in the build. In this level, the test engineers execute all test cases, either manually
or in automation, as test batches. Every test batch consists of a set of defined test cases. A test batch is also
known as a Test Suite or Test Set or Test Chain or Test Build.
While executing these test cases as batches on the build, the test engineers prepare test log documents with
3 types of entries.
*Passed: all the test case's expected values are equal to the build's actual values.
*Failed: any one expected value is not equal to the build's actual value.
*Blocked: the test case execution is postponed due to incorrect parent functionality.
In this Level-1 test execution as test batches, the test engineers follow the approach below.
Following this approach, the test engineers skip some test cases due to lack of time for test execution.
The final status of every test case is CLOSED or SKIP.
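The three test-log entries can be sketched in a few lines. This is an illustrative Python sketch, not WinRunner output; the entry names match the list above and nothing else is assumed.

```python
def log_entry(expected, actual, parent_failed=False) -> str:
    """Derive one test-log entry: Blocked when the parent functionality
    already failed, else Passed only when every expected value equals
    the corresponding actual value."""
    if parent_failed:
        return "Blocked"
    return "Passed" if list(expected) == list(actual) else "Failed"
```

A batch run simply collects one such entry per test case into the test log document.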
g)Level-2 (Regression testing): During Level-1 comprehensive testing, the test engineers report
mismatches between test case expected values and build actual values as defect reports. After receiving defect
reports from testers, the developers conduct a review meeting to fix the defects. If a defect is accepted by the
developers, they make changes in the coding and then release a Modified Build with a release note. The release
note of a modified build describes the changes made in that build to resolve the reported defects.
Test engineers then plan regression testing to conduct on that modified build w.r.t the release note.
Approach to Regression testing:
*Receive the modified build along with the release note from the developers.
*Apply a Sanity or Smoke test on that modified build.
*Select the test cases to be executed on that modified build w.r.t the modifications specified in the release note.
*Run the selected test cases on that modified build to ensure the correctness of the modifications sans side
effects in the build.
In the above regular regression testing approach, the selection of test cases w.r.t modifications is a critical task.
For this reason, the test engineers follow some standardized process models for regression testing.
Case1: If the development team resolved defect severity is high, then the test engineers are re-executing all
functional, all non-functional and maximum usability test cases on that modified build to ensure the correctness of
modifications sans side effects.
Case2: If the development team resolved defect severity is medium, then the test engineers are re-executing
all functional, maximum non-functional and some usability test cases on that modified build to ensure the
correctness of modifications sans side effects.
Case3:If the development team resolved defect severity is low,then the test engineers are re-executing some
functional,some non-functional and some usability test cases on that modified build to ensure the correctness of
modifications sans side effects.
Case4: If the development team released modified build due to sudden changes in customer requirements,
then the test engineers are performing changes in corresponding test cases and then re-executing that test case on
that modified build to ensure the correctness of modifications w.r.t changes in requirements.
After completion of the required level of regression testing, the test engineers are continuing remaining
level1 test execution.
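Cases 1-3 above reduce to a severity-to-scope lookup. This is a sketch for illustration only; the words "all", "maximum" and "some" are kept verbatim from the cases rather than replaced with invented fractions.

```python
# Severity of the resolved defect -> share of each suite to re-execute,
# per Cases 1-3 above.
REGRESSION_SCOPE = {
    "high":   {"functional": "all",  "non-functional": "all",     "usability": "maximum"},
    "medium": {"functional": "all",  "non-functional": "maximum", "usability": "some"},
    "low":    {"functional": "some", "non-functional": "some",    "usability": "some"},
}

def regression_scope(defect_severity: str) -> dict:
    """Which share of each test suite is re-executed on the modified build."""
    return REGRESSION_SCOPE[defect_severity.lower()]
```

Case 4 (changed requirements) falls outside this table, since there the test cases themselves are first updated before re-execution.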
Test Reporting
During Level-1 and Level-2 test execution, test engineers report mismatches to the development team as
defects. A defect is also known as an Error or Issue or Bug.
A problem detected by a programmer in a program is called an ERROR.
A problem detected by a tester in a build is called a DEFECT or ISSUE.
A reported defect or issue accepted for resolution is called a BUG.
In this defect reporting to developers, the test engineers follow a standard defect report format
(IEEE 829).
a)Defect Report:
*defect id: a unique name or number.
*description: the summary of the defect.
*build version id: the version number of the build in which the test engineer detected the defect.
*feature: the name of the module or function in which the test engineer found this defect.
*test case title: the title of the failed test case.
*detected by: the name of the test engineer.
*detected on: the date of defect detection and submission.
*status: New (reported for the first time) or Re-Open (re-reported).
*severity: the seriousness of the defect in terms of functionality. If it is High or Show-stopper, testing cannot
continue without resolving that defect. If it is Medium or Major, testing can continue but the defect is mandatory
to resolve. If it is Low or Minor, testing can continue and the defect may or may not be resolved.
*priority: the importance of resolving the defect in terms of the customer (high, medium, low).
*reproducible: Yes or No. Yes means the defect appears every time in test execution (then attach the test
procedure). No means the defect appears rarely in test execution (then attach a snapshot and the test procedure;
the snapshot is taken with the Print Screen button when the defect occurs).
*assigned to: the name of the responsible person to receive this defect at the development site.
*suggested fix: a suggestion to accept or reject the defect. It is optional.
NOTE: In general, the test engineers report a defect to the development team after getting permission from the
test lead.
NOTE: In application-oriented s/w development, test engineers report defects to the customer site also.
New->Open->Closed
New->Open->Reopen->Closed
New->Reject->Closed
New->Reject->Reopen->Closed
New->Deferred
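The five chains above reduce to one transition table. This is a sketch of the life cycle as listed, not the model of any real defect-tracking tool.

```python
# Allowed next states for each defect status, taken from the chains above.
TRANSITIONS = {
    "New":      {"Open", "Reject", "Deferred"},
    "Open":     {"Reopen", "Closed"},
    "Reject":   {"Reopen", "Closed"},
    "Reopen":   {"Closed"},
    "Deferred": set(),   # terminal in the chains shown
    "Closed":   set(),   # terminal
}

def can_move(current: str, nxt: str) -> bool:
    """True when the status change appears in one of the listed chains."""
    return nxt in TRANSITIONS.get(current, set())
```

Any status change a tool attempts outside this table would not match one of the five listed chains.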
d)Defect age: The time gap in b/w defect reporting and defect closing or deferring is called the Defect age.
e)Defect density: The average number of defects detected by the testing team in one module of the application
build.
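Both metrics fit in a few lines. The dates and counts below are invented sample values for illustration, not project data.

```python
from datetime import date

def defect_age(reported_on: date, closed_on: date) -> int:
    """Days between defect reporting and defect closing/deferring."""
    return (closed_on - reported_on).days

def defect_density(defects_per_module) -> float:
    """Average number of defects per module of the application build."""
    return sum(defects_per_module) / len(defects_per_module)
```

At test closure, a high density value for one module flags it for the final regression pass described later.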
f)Defect Resolution types: After receiving a defect report from the testing team, the development team
conducts a review meeting to fix that defect and then sends a resolution type to the testing team. There are 12
types:
*duplicate: the reported defect is rejected due to similarity with a previously reported defect.
*enhancement: the reported defect is rejected because it relates to future requirements of the
customer.
*s/w limitation: rejected due to a limitation of the s/w technology (ex 2 GB->999 records only).
*h/w limitation: rejected due to a limitation of the h/w technology.
*not applicable: rejected due to wrong test case execution.
*functions as designed: rejected because the coding is correct w.r.t the design documents.
*need more information: the defect is neither accepted nor rejected; the developers require more information to
understand the defect properly.
*not reproducible: the defect is neither accepted nor rejected; the developers require the
correct procedure to reproduce the defect.
*no plan to fix it: the defect is neither accepted nor rejected; the developers require some extra time to fix the
defect.
*fixed or open: the defect is accepted; the developers are ready to resolve it and release a modified build
along with a release note.
*fixed indirectly or deferred: the report is accepted but postponed to a future release due to low severity and low
priority.
*user direction: the defect is accepted, but the developers produce a message in the build about that defect
without resolving it (like showing an Error message to the user).
g)Types of defects: During usability, functional and non-functional test execution on the application build or
UAT, the test engineers detect the categories below.
*user interface defects (low severity):
Spelling mistakes (high priority)
Invalid label of object w.r.t functionality (medium priority)
Improper right alignment (low priority)
*error handling defects (medium severity)
Error message not coming for wrong operation (high priority)
Wrong error message is coming for wrong operation (medium)
Correct error message but incomplete (low)
*input domain defects (medium severity)
Does not take valid input (high)
Takes invalid input along with valid (medium)
Takes valid type and valid size values but the range is exceeded (low)
*manipulations defects (high severity)
Wrong output (high)
Valid output without decimal points (medium)
Valid output with rounded decimal points (low)
EX: actual answer is 10.96
High: 13 (wrong output), medium: 10 (decimals dropped), low: 10.9 (rounded)
*race conditions defects (high)
Hang or dead lock (show stopper and high priority)
Invalid order of functionalities (medium)
Application build is running on some of platforms only (low)
*h/w related defects (high)
Device is not connecting (high)
Device is connecting but returning wrong output (medium)
Device is connecting and returning correct output but incomplete (low)
*load condition defects (high)
Does not allow customer expected load (high)
Allow customer expected load on some of the functionalities (medium)
Allowing customer expected load on all functionalities w.r.t benchmarks (low)
*source defects (medium)
Wrong help document (high)
Incomplete help document (medium)
Correct and complete help but complex to understand (low)
*version control defects (medium)
Unwanted differences in b/w old build and modified build
*id control defects (medium)
Logo missing, wrong logo, version number missing, copyright window missing, team members'
names missing.
Test Closure (UAT)
After completion of all reasonable test cycles, the test lead conducts a review meeting to
estimate the completeness and correctness of the test execution. If the test execution status is equal to the EXIT
CRITERIA, the testing team stops testing. Otherwise the team continues the remaining test execution
w.r.t available time. In this test closure review meeting, the test lead depends on the factors below.
a)Coverage analysis:
*requirements oriented coverages
*testing techniques oriented coverages
b)Defect density:
*modules or functionalities
NOTE: In general the project management defers low severity and low priority defects only.
After completion of the above test closure review meeting, the testing team concentrates on Level-3 test
execution. This level of testing is also known as Post-mortem testing or Final regression testing or Pre-acceptance
testing. In this test execution the test engineers follow the approach below.
In this final regression testing the test engineers concentrate on high defect density modules or
functionalities only. If they find any defect at this level, it is called a Golden defect or Lucky defect. After
resolving all golden defects, the testing team concentrates on UAT along with the developers.
Sign Off
After completion of UAT and its modifications, the test lead conducts a sign-off review. In this review
the test lead garners all testing documents from the test engineers: Test Strategy, System test plan and detailed
test plans, Test scenarios or titles, Test case documents, Test log, Defect reports,
Final defects summary report (defect id, description, severity, detected by and status (closed or deferred)), and
Requirements Traceability Matrix (RTM) (req id, test case, defect id, status).
*The RTM maps between requirements and defects via test cases.
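The RTM mapping can be sketched as a nested structure. This is a hypothetical fragment: the ids below (REQ_LOGIN, TC_Login_1, TC_Login_3, DF_101) are invented for illustration and are not from any real project.

```python
# requirement -> {test case -> defects it uncovered}
rtm = {
    "REQ_LOGIN": {
        "TC_Login_1": [],           # passed, no defect
        "TC_Login_3": ["DF_101"],   # failed, one defect traced back
    },
}

def defects_for(req_id: str):
    """Trace a requirement to its defects via its test cases."""
    return [d for defects in rtm.get(req_id, {}).values() for d in defects]
```

Reading the matrix this way answers the sign-off question directly: which requirements still have open or deferred defects against them.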
Case Study
1. Test Initiation
Done by Project manager.
Deliver test strategy document.
Test Responsibility Matrix (TRM) defines the reasonable tests to be applied (part of the test strategy).
2. Test Planning
Done by Test lead or senior test engineer.
Deliver System test plan and detailed test plans.
Follows IEEE 829 document standards.
3. Test Design
Done by test engineer.
Deliver test scenarios or titles and test documents.
4. Test Execution
Done by test engineer.
Prepare automation programs (if possible).
Deliver test logs or test results.
5. Test Reporting
Done by test engineer and test lead.
Send defect reports.
Receive modified build with release note.
6. Test Closure
Done by test lead and test engineer.
Plan post-mortem testing.
Initiate UAT (User Acceptance testing).
7. Sign Off
Done by test lead
Garner all test documents
Finalize RTM (Requirements traceability Matrix).
Objective
* Study Functional and system specification or Use Cases.
* Prepare Functional test cases in English.
* Convert to TSL programs (automation).
Add-In Manager
This window lists all WinRunner-supported technologies w.r.t the license. The test engineers select the
current application build technology.
1) Recording Modes: To generate an automation program, test engineers record build actions or operations
in 2 types of modes:
* Context Sensitive mode
* Analog mode
Context Sensitive mode: In this mode, WinRunner records all mouse and keyboard operations w.r.t
objects and windows in our application build. To select this mode we can use the options below:
* Click the "Start Record" icon
* Test menu -> Context Sensitive option
Analog mode: In this mode, WinRunner records all mouse pointer movements w.r.t desktop coordinates.
To select analog mode we can use the options below:
* Click the Start Record icon twice
* Test menu -> Analog option (Examples: recording digital signatures, graph drawing and image
movements)
NOTE: To change from one mode to the other, the test engineers use F2 as a shortcut key.
NOTE: In Analog mode, WinRunner records mouse pointer movements w.r.t desktop coordinates instead
of windows and objects. For this reason, the test engineers keep the corresponding window position on
the desktop and the monitor resolution constant.
2) Check Points: After recording the required actions or operations, the test engineers insert
the required check points into that recorded script. WinRunner 8.0 supports 4 types of check points.
*GUI check point.
*Bitmap check point.
*Database check point.
*Text check point.
Every check point compares the test engineer's given expected value with the build's actual value. The above 4
check points automate all functional test coverages on our application build.
*GUI or behavioral coverage.
*Error handling coverage.
*Input domain coverage.
*Manipulation coverage.
*Backend coverage.
*Order of functionality coverage.
GUI check point: To check the behavior of objects in application windows, we can use this check point. It
consists of three sub-options.
*For single property.
*For object or window.
*For multiple objects.
a) For single property: to verify one property of one object we can use this option (like starting a Mile with one
step)
EX1: Manual test case
Test case id: TC_EX_SRI_24NOV_1
Test case name: check Delete Order button
Test suit id: TS_EX
Priority: P0
Test set up: already one record is inserted to delete
Test procedure:
Step no Event Input required Expected output
1 Focus to Flight Reservation window None “Delete Order” button disabled
2 Open an order Valid order number “Delete order” button enabled
Build -> Flight Reservation window
Automation program:
set_window ("Flight Reservation", 1);
button_check_info ("Delete Order", "enabled", 0);
# check point on Delete Order button
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order no", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Delete Order", "enabled", 1);
# check point on Delete Order button
EX2: Manual test case
Test case id: TC_EX_SRI_24NOV_2
Test case name: check Update Order button
Test suit id: TS_EX
Priority: P0
Test set up: already one valid record is inserted to update
Test procedure:
Step no Event Input required Expected output
1 Focus to Flight Reservation window None “Update Order” button disabled
2 Open an order Valid order number “Update Order” button disabled
3 Perform a change Valid change is required “Update Order” enabled
Build -> Flight Reservation window
Automation program:
set_window ("Flight Reservation", 2);
button_check_info ("Update Order", "enabled", 0);
# check point on Update Order button
menu_select_item ("File;Open Order...");
set_window ("Open Order", 4);
button_set ("Order no", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Update Order", "enabled", 0);
# check point on Update Order button
set_window ("Flight Reservation", 2);
button_set ("First", ON);
button_check_info ("Update Order", "enabled", 1);
# check point on Update Order button
Automation program:
set_window ("Sample", 5);
button_check_info ("OK", "enabled", 0);
edit_set ("Name", "Sri");
button_check_info ("OK", "enabled", 1);
EX4: Manual test case
Test case id: TC_EX_SRI_28NOV_4
Test case name: check SUBMIT button
Test suit id: TS_EX
Priority: P0
Test set up: all input objects are taking values
Test procedure:
Step no Event Input required Expected output
1 Focus to Registration window None SUBMIT button disabled
2 Enter Name Valid SUBMIT button disabled
3 Select Gender as M or F None SUBMIT button disabled
4 Say Y/N for Passport availability None SUBMIT button disabled
5 Select Country None SUBMIT button enabled
Build->Registration form
Automation program:
set_window ("Registration", 5);
button_check_info ("SUBMIT", "enabled", 0);
edit_set ("Name", "Sri");
button_check_info ("SUBMIT", "enabled", 0);
button_set ("Male", ON);
button_check_info ("SUBMIT", "enabled", 0);
button_set ("YES", ON);
button_check_info ("SUBMIT", "enabled", 0);
list_select_item ("COUNTRY", "INDIA");
button_check_info ("SUBMIT", "enabled", 1);
Case Study
Object Type Testable Properties
Push button Enabled (0 or 1), Focus
Radio button Enabled (0 or 1), Status (ON or OFF)
Check box Enabled (0 or 1), Status (ON or OFF)
List or Combo box Enabled (0 or 1), Count, Value (of selected item)
Menu Enabled (0 or 1), Count
Edit or Text box Enabled (0 or 1), Focused, Value, Range, Regular expression (text or pwd), Data format, Time format, ...
Table grid Rows count, Columns count, Cell count
EX5: Manual test case
Test case id: TC_EX_SRI_28NOV_5
Test case name: check Flight to count
Test suit id: TS_EX
Priority: P0
Test set up: Fly From and Fly To consists of valid city name
Test procedure:
Step no Event Input required Expected output
1 Focus to Journey window and select one city name in Fly From None Fly To count decreased by one
Build -> Journey
Automation program:
set_window ("Journey", 5);
list_get_info ("Fly To", "count", x);
list_select_item ("Fly From", "VIZ");
list_check_info ("Fly To", "count", x-1);
EX6: Manual test case
Test case id: TC_EX_SRI_28NOV_6
Test case name: check Message value
Test suit id: TS_EX
Priority: P0
Test set up: all valid names are available for Messages
Test procedure:
Step no Event Input required Expected output
1 Focus to Display window None OK button disabled
2 Select a Name None OK button enabled
3 Click OK None Coming message is equal to selected message
Build->Display form
Automation program:
set_window ("Display", 5);
button_check_info ("OK", "enabled", 0);
list_select_item ("Name", "Sri");
list_get_info ("Name", "value", x);
button_check_info ("OK", "enabled", 1);
button_press ("OK");
edit_check_info ("Message", "value", x);
EX7: Manual test case
Test case id: TC_EX_SRI_28NOV_7
Test case name: check SUM button
Test suit id: TS_EX
Priority: P0
Test set up: input objects consists of numeric values
Test procedure:
Step no Event Input required Expected output
1 Focus to Addition window None OK button disabled
2 Select input one None OK button disabled
3 Select input two None OK button enabled
4 OK click none Coming output is equal to addition of 2 inputs
Build->Addition form
Automation program:
set_window ("Addition", 5);
button_check_info ("OK", "enabled", 0);
list_select_item ("INPUT1", "20");
list_get_info ("INPUT1", "value", x);
button_check_info ("OK", "enabled", 0);
list_select_item ("INPUT2", "4");
list_get_info ("INPUT2", "value", y);
button_check_info ("OK", "enabled", 1);
button_press ("OK");
edit_check_info ("SUM", "value", x+y);
EX8: Manual test case
Test case id: TC_EX_SRI_29NOV_8
Test case name: check Age, Gender and Qualification objects
Test suit id: TS_EX
Priority: P0
Test set up: all insurance policy types are available
Test procedure:
Step no Event Input required Expected output
1 Focus to Insurance window and select type of insurance policy None If type is A then Age is focused; if type is B then Gender is focused; otherwise Qualification is focused
Build->Insurance form
Automation program:
set_window ("Insurance", 5);
list_select_item ("Type", "xx");
list_get_info ("Type", "value", x);
if (x == "A")
edit_check_info ("Age", "focused", 1);
else if (x == "B")
list_check_info ("Gender", "focused", 1);
else
list_check_info ("Qualification", "focused", 1);
EX9: Manual test case
Test case id: TC_EX_SRI_29NOV_9
Test case name: check Student grade
Test suite id: TS_EX
Priority: P0
Test set up: all valid students' marks are already entered
Test procedure:
Step no Event Input required Expected output
1 Focus to Student window None OK button disabled
2 Select a Student roll number None OK button enabled
3 OK click none Returns total marks and grade
If total >= 800 then grade is A.
If total >= 700 and <800 then grade is B.
If total >= 600 and <700 then grade is C.
If total <600 then grade is D.
Build->Student form
Automation program:
set_window ("Student", 5);
button_check_info ("OK", "enabled", 0);
list_select_item ("Roll no", "xx");
button_check_info ("OK", "enabled", 1);
button_press ("OK");
edit_get_info ("Total", "value", x);
if (x >= 800)
edit_check_info ("Grade", "value", "A");
else if (x < 800 && x >= 700)
edit_check_info ("Grade", "value", "B");
else if (x < 700 && x >= 600)
edit_check_info ("Grade", "value", "C");
else
edit_check_info ("Grade", "value", "D");
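The grade thresholds above can also be sketched outside WinRunner as a small Python helper (a hypothetical re-implementation of the EX9 rule, not part of the TSL script), which makes the boundary values easy to verify:

```python
# Hypothetical Python sketch of the EX9 grading rule, useful for
# checking the boundaries (800, 700, 600) independently of the build.
def grade(total):
    if total >= 800:
        return "A"
    elif total >= 700:
        return "B"
    elif total >= 600:
        return "C"
    else:
        return "D"
```

Boundary inputs such as 800, 799, 700, 699, 600 and 599 exercise every branch of the rule.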
EX10: Manual test case
Test case id: TC_EX_SRI_29NOV_10
Test case name: check Gross salary of an Employee
Test suite id: TS_EX
Priority: P0
Test set up: all valid employees' Basic salaries are already entered.
Test procedure:
Automation program:
set_window ("Employee", 5);
button_check_info ("OK", "enabled", 0);
list_select_item ("Empno", "xxx");
button_check_info ("OK", "enabled", 1);
button_press ("OK");
edit_get_info ("Basic", "value", x);
if (x >= 15000)
edit_check_info ("Gross", "value", x + (10/100)*x);
else if (x < 15000 && x >= 8000)
edit_check_info ("Gross", "value", x + (5/100)*x);
else
edit_check_info ("Gross", "value", x + 200);
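The gross-salary rule being tested (a 10% allowance at or above 15000, 5% from 8000 up to 15000, and a flat 200 otherwise) can be cross-checked with a hypothetical Python helper; the thresholds come from the manual expectation, not from the build itself:

```python
# Hypothetical sketch of the EX10 gross-salary rule for verifying
# expected values before automating the check in WinRunner.
def gross(basic):
    if basic >= 15000:
        return basic + 0.10 * basic
    elif basic >= 8000:
        return basic + 0.05 * basic
    else:
        return basic + 200
```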
b) For Object or Window: To verify more than one property of a single object, we can use this option.
EXAMPLE:
*Update order button is disabled after focus to window
*Update order button disabled after open a record
*Update order button enabled and focused after perform a change. (Here one object with TWO properties)
Build -> Flight Reservation window
Automation program:
set_window ("Flight Reservation", 5);
button_check_info ("Update Order", "enabled", 0);
#check point on Update order button
menu_select_item ("File; Open order ….");
set_window ("Open Order", 4);
button_set ("Order no", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Update Order", "enabled", 0);
#check point on Update order button
set_window ("Flight Reservation", 2);
button_set ("First", ON);
obj_check_gui ("Update Order", "list1.ckl", "gui1", 1);
#check point for MULTIPLE properties
SYNTAX for Multiple properties:
obj_check_gui ("Object name", "CheckListfile.ckl", "Expected values file (GUI)", Time);
In the above syntax:
the CHECKLIST FILE specifies the selected list of properties;
the EXPECTED VALUES FILE specifies the expected values for those properties.
c) For Multiple Objects: To check more than one property of multiple objects, we can use this option. (The objects must be in the same WINDOW.)
EXAMPLE
*Insert, Delete and Update order buttons are disabled after focus to window.
*Insert and Update order buttons are disabled, Delete order button is enabled after open a record.
*Insert order button is disabled; Update order button enabled and focused and Delete button is enabled after
perform a change.
Automation program:
set_window ("Flight Reservation", 5);
win_check_gui ("Flight Reservation", "list1.ckl", "gui1", 1);
#check point
menu_select_item ("File; Open order ….");
set_window ("Open Order", 4);
button_set ("Order no", ON);
edit_set ("Edit", "1");
button_press ("OK");
win_check_gui ("Flight Reservation", "list2.ckl", "gui2", 1);
#check point
set_window ("Flight Reservation", 1);
button_set ("First", ON);
win_check_gui ("Flight Reservation", "list3.ckl", "gui3", 1);
#check point
#check point
Syntax for Multiple Objects:
win_check_gui ("Window name", "CheckListfile.ckl", "Expected values file (GUI)", Time);
NOTE: This check point is applicable on more than one object in the same window.
Navigation to insert Check Point:
*Select a position in Script
*Choose Insert Menu option
*In it, choose GUI Check point
*Then select sub option as For Multiple objects
*Click Add button and select Testable objects
*Now right click to release from selection
*Select required properties with expected values
*Click OK
EX11: Manual test case
Test case id: TC_EX_SRI_30NOV_11
Test case name: check value of tickets
Test suite id: TS_EX
Priority: P0
Test set up: all valid records are entered.
Test procedure:
Step no Event Input required Expected output
1 Focus to Flight Reservation Valid Order no No of Tickets value is numeric up to 10
window and Open an order
NOTE: Checking the TYPE OF VALUE of an object is done with a REGULAR EXPRESSION.
Build->Flight Reservation window
Automation program:
set_window ("Flight Reservation", 2);
menu_select_item ("File; Open order ….");
set_window ("Open Order", 4);
button_set ("Order no", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window ("Flight Reservation", 1);
obj_check_gui ("Tickets", "list1.ckl", "gui", 1);
#Check point for Range and Regular Expression with 0 to 10 and [0-9]* (* is for multiple positions)
EX12: Prepare Regular expression for Alpha numeric
[a-zA-Z0-9]*
EX13: Prepare Regular expression for Alpha numeric in lower case with initial as capital.
[A-Z][a-z0-9]*
EX14: Prepare Regular expression for Alpha numeric in lower case but start with capital and end with lower case.
[A-Z][a-z0-9]*[a-z]
EX15: Prepare Regular expression for Alpha numeric in lower case with underscore, which does not start with _
[a-z0-9][a-z0-9_]*[a-z0-9]
EX16: Prepare Regular expression for Yahoo mail user id
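The patterns from EX12–EX15 use standard character-class syntax, so they can be checked with any regex engine; a quick Python sketch (the sample strings are illustrative only):

```python
import re

# Patterns copied from EX12-EX15; re.fullmatch() mimics matching the
# whole field value against the regular expression.
alnum            = r"[a-zA-Z0-9]*"                # EX12
initial_capital  = r"[A-Z][a-z0-9]*"              # EX13
cap_to_lowercase = r"[A-Z][a-z0-9]*[a-z]"         # EX14
underscored      = r"[a-z0-9][a-z0-9_]*[a-z0-9]"  # EX15

def matches(pattern, value):
    return re.fullmatch(pattern, value) is not None
```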
Changes in Check points: the most irritating part in software testing.
Due to sudden changes in customer requirements, or mistakes in test creation, test engineers perform changes in existing check points.
a) Changes in expected values:
*Run our test
*Open result
*Perform changes in expected values
*Click OK
*Close results
*Re-execute that test
b) Add new properties:
*Insert Menu
*edit GUI checklist
*select Checklist file name
*click OK
*select new properties for testing
*click OK
*click OK to over write checklist file
*click OK after reading suggestion to update
*change run mode to Update mode and run from top (the modified checklist is taking default values
as expected)
*run our test script in verify mode to get results
*analyze that results manually
If the default expected values are not correct, then test engineers change those expected values and re-run the test.
Bitmap check point: (binary presentation of an image). It is an optional check point in functional testing. Test engineers use this option to compare images. This check point supports static images only, and consists of 2 sub options:
1) For object or window bitmap: To compare our expected image with our application build actual image, we
can use this option.
EX: logo testing
NOTE: TSL does not support function overloading, but it does support a variable number of arguments or parameters for functions.
NOTE: The GUI check point is mandatory, but the Bitmap check point is optional, because not all windows contain images.
Database Check point: The GUI and Bitmap check points are applicable only on the front end screens of our application build. The Database check point is applicable on the back end tables of our application build, to estimate the impact of front end screen operations on back end table content. This checking is called DATABASE OR BACK END
TESTING.
To automate Database or Back end testing, the database check point in WinRunner follows the approach below.
From the above model, every report screen of the application build retrieves data from database tables. To estimate the completeness and correctness of that retrieval process, test engineers use this check point in WinRunner.
Automation program:
set_window ("Audit", 2);
obj_get_text ("File1", x);
x = substr (x, 1, length(x)-2);
obj_get_text ("File2", y);
y = substr (y, 1, length(y)-2);
obj_get_text ("Sum", s);
s = substr (s, 1, length(s)-2);
if (s == x + y)
printf ("Test is pass");
else
printf ("Test is fail");
EX4: Manual expected is Total = price * quantity
Build:
Automation program:
set_window ("Shopping", 2);
obj_get_text ("Quantity", q);
obj_get_text ("Price", p);
p = substr (p, 4, length(p)-5);
obj_get_text ("Total", t);
t = substr (t, 4, length(t)-5);
if (t == p*q)
printf ("Test is pass");
else
printf ("Test is fail");
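The substr()/length() trimming in EX4 can be mirrored with Python slicing; the prefix and suffix widths here are assumptions inferred from the substr arguments (a 3-character currency prefix such as "Rs " and a 2-character suffix):

```python
# Hypothetical analogue of substr(p, 4, length(p)-5): keep the text
# starting at the 4th character, length(p)-5 characters long -- i.e.
# drop an assumed 3-character prefix and 2-character suffix.
def trim(text, prefix_len=3, suffix_len=2):
    return text[prefix_len:len(text) - suffix_len]

def total_is_price_times_quantity(price_text, quantity, total_text):
    return float(trim(total_text)) == float(trim(price_text)) * quantity
```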
* tl_step (): we can use this to create a tester-defined Pass or Fail message in the test results.
Syntax: tl_step ("Step name", 0/1, "message");
0 for pass
1 for fail
*Data Driven Testing: The re-execution of a test with multiple test data is called DDT or Iterative testing or Re-
testing. WinRunner8.0 is supporting 4 types of DDT.
*DDT with test data from Key Board.
*DDT with test data from a Flat File. (.txt files)
*DDT with test data from Front end Objects.
*DDT with test data from XL Sheets.
1. From Key Board: Sometimes test engineers re-execute their test cases with multiple test data submitted dynamically through the keyboard.
To get the required test data from the keyboard, test engineers use the TSL statement below in the corresponding automation program:
create_input_dialog ("Message");
EX1: Manual expected is Delete order button is enabled after open an order.
Build: Flight Reservation
Test data: Ten unique order numbers
Automation program:
for (i=1; i<=10; i++)
{
x = create_input_dialog ("Enter the order number");
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open order ….");
set_window ("Open Order", 1);
button_set ("Order no", ON);
edit_set ("Edit", x); #Parameterization.
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Delete Order", "enabled", 1);
}
EX2: Manual expected is Result = input1 * input2.
Build:Multiply
In above approach, the required test data is coming from a Flat file without test engineer interaction. So this
method is known as 24/7 testing.
a) file_open(): We can use this function to open a specified flat file into RAM.
file_open ("Path of file", FO_MODE_READ or FO_MODE_WRITE or FO_MODE_APPEND);
b) file_getline(): We can use this function to read a line of text from the opened file.
file_getline ("Path of file", variable);
In the above syntax, the file pointer is incremented automatically.
c) file_close(): We can use this function to swap an opened file out of RAM.
file_close ("Path of file");
EX1: Manual expected is Insert order button disabled after open an existing order.
Build: Flight Reservation window
Test data: C:\\Documents and Settings\\Administration\\My Documents\\b8amdynamic.txt
Automation program:
f = "C:\\Documents and Settings\\Administration\\My Documents\\b8amdynamic.txt";
file_open (f, FO_MODE_READ);
while (file_getline (f, x) != E_FILE_EOF)
{
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open order ….");
set_window ("Open Order", 1);
button_set ("Order no", ON);
edit_set ("Edit", x); #Parameterization.
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Insert Order", "enabled", 0);
}
file_close (f);
EX2: Manual expected is Result = Input1 * Input2.
Build:
Automation program:
f = "C:\\Documents and Settings\\Administration\\My Documents\\result.txt";
file_open (f, FO_MODE_READ);
while (file_getline (f, x) != E_FILE_EOF)
{
split (x, y, "");
set_window ("Multiply", 1);
edit_set ("Input1", y[1]);
edit_set ("Input2", y[2]);
button_press ("OK");
obj_get_text ("Result", z);
if (z == y[1]*y[2])
tl_step ("s1", 0, "Pass");
else
tl_step ("s1", 1, "Fail");
}
file_close (f);
EX3: Manual test expected is Total = Price * Quantity.
Build:Shopping
Automation program:
f = "C:\\Documents and Settings\\Administration\\My Documents\\login.txt";
file_open (f, FO_MODE_READ);
while (file_getline (f, x) != E_FILE_EOF)
{
split (x, y, "");
split (y[1], z, "@");
set_window ("Login", 1);
edit_set ("Userid", z[1]);
password_edit_set ("Password", password_encrypt (y[2]));
button_check_info ("OK", "enabled", 1);
button_press ("Clear");
}
file_close (f);
d) file_compare(): WinRunner provides this function to compare the content of two files.
file_compare ("Path of file1", "Path of file2", "Folder name");
In the above syntax, the Folder name is optional. WinRunner creates a new folder with that name to store both compared files (file comparison and file concatenation in the new folder).
e) file_printf(): We can use this function to write a line of text into an opened file in WRITE or APPEND mode.
file_printf ("Path of file", "Format", values or variables);
In the above syntax, the format specifies the type of value to write in the specified fields.
EX: to write a=xxxx and b=xxxx into a file: file_printf ("Path of file", "a=%d and b=%d", a, b);
%d for int, %f for real/float, %c for char, %s for string.
3. From Front end objects: Sometimes test engineers re-execute tests depending on the values of multiple data objects such as list boxes, table grids, menus, ActiveX controls and data windows.
EX1: Manual expected is selected City name in Flyfrom does not appear in Flyto list.
Build:Journey
From the above model, test engineers use XL sheet content as test data in their automation programs. To manipulate XL sheet content as test data, test engineers use the TSL functions below.
a) ddt_open(): We can use this function to open an XL sheet into RAM.
ddt_open ("Path of the XL sheet", DDT_MODE_READ/READWRITE);
b) ddt_get_row_count(): We can use this function to find the number of rows in an XL sheet.
ddt_get_row_count ("Path of XL sheet", variable);
In the above syntax, the variable holds the number of rows in the XL sheet, excluding the header.
c) ddt_set_row(): We can use this function to point to a row in an XL sheet.
ddt_set_row ("Path of XL sheet", rownumber);
d) ddt_val(): We can use this function to capture a specified XL sheet column value.
ddt_val ("Path of XL sheet", columnname);
e) ddt_close(): To swap an opened XL sheet out of RAM, we can use this function.
ddt_close ("Path of XL sheet");
NOTE: In this method, the WinRunner is generating DDT script implicitly.
Navigation:
*open WinRunner and application build.
*create an automation program for corresponding manual test case.
*select Table menu.
*And select Data Driver Wizard option.
*In wizard click Next.
*browse the path of XL sheet(Default is given by tool).
*specify variable name to store path of XL sheet.
*select Import data from database option.
*specify connect to database using ODBC or data junction
*select Specify sql statement option
*click Next
*click create to select connectivity.
*write select statement to import required data from database into XL sheet.
*click Next.
*parameterize that imported data in required place of automation program.
*click Next.
*say Yes or No to show XL sheet.
*now run the project and analyze the results manually.
EX1: Manual expected is Delete order button is enabled after opening an existing record.
Build: Flight Reservation window.
Test data: existing order numbers imported from the database, which are available in an XL sheet.
Automation program:
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READWRITE);
if (rc != E_OK && rc != E_FILE_OPEN)
pause ("Cannot open the table");
ddt_update_from_db (table, "msqr1.sql", count);
ddt_save (table);
ddt_get_row_count (table, n);
for (i=1; i<=n; i++)
{
ddt_set_row (table, i);
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open order ….");
set_window ("Open Order", 1);
button_set ("Order no", ON);
edit_set ("Edit", ddt_val (table, "order_number"));
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Delete Order", "enabled", 1);
}
ddt_close (table);
EX2:Manual expected is Insert order button is disabled after open an existing record.
Build:Flight Reservation window.
Test data:Manually entered valid Order numbers, which are available in XL sheet.
Automation program:
table = "default.xls";
rc = ddt_open (table, DDT_MODE_READ);
if (rc != E_OK && rc != E_FILE_OPEN)
pause ("Cannot open the table");
ddt_get_row_count (table, n);
for (i=1; i<=n; i++)
{
ddt_set_row (table, i);
set_window ("Flight Reservation", 1);
menu_select_item ("File; Open order ….");
set_window ("Open Order", 1);
button_set ("Order no", ON);
edit_set ("Edit", ddt_val (table, "input"));
button_press ("OK");
set_window ("Flight Reservation", 1);
button_check_info ("Insert Order", "enabled", 0);
}
ddt_close (table);
EX3:Manual expected is Result=input1*Input2.
Build:Multiply
In this mode, the WinRunner tool maintains a fixed buffer to store unsaved GUI Map references.
*Tool menu.
*GUI Map editor.
*View sub menu.
*GUI files.
*LO<temporary> which is a local buffer.
b) Per Test mode: In this mode, WinRunner creates separate references to objects and windows for every test.
By default, WinRunner 8.0 maintains Global GUI Map file mode, to prevent repetition in the references of objects and windows. To select Per Test mode, test engineers follow the navigation below.
*Tools menu.
*General options.
*General tab.
*change GUI Map file mode to GUI Map file per test.
*click Apply and Ok.
2. Changes in references: Sometimes test engineers change the references of dynamic objects and windows. Dynamic objects and windows change their label names during operation. To recognize this type of object and window, test engineers perform changes in the GUI Map references of the corresponding objects and windows.
Navigation:Tools menu->GUI Map editor->select Dynamic object or window reference->click Modify->in Modify
window->add wild card characters or regular expressions(like ! and *) to label of that object in Physical
description->click OK.
EX1:logical name:Fax order no.1
{
class: window,
label: ”Fax order no.1”
}
This is actual reference in GUI Map editor.
logical name:Fax order no.1
{
class: window,
label: ”!Fax order no.*”
}
This is modified one by placing ! and *.
EX2: Stop or Start on the same button is called a Toggle object.
Original reference:
logical name: Start
{
class: push-button,
label: "Start"
}
Modified reference:
logical name: Start
{
class: push-button,
label: "![S][t][ao][rp][a-z]*"
}
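The modified label "![S][t][ao][rp][a-z]*" is an ordinary character-class pattern (the leading "!" is WinRunner's marker that the label is a regular expression, not part of the pattern), so its behaviour can be checked with any regex engine; a Python sketch:

```python
import re

# The toggle-button label pattern from EX2: matches both "Start"
# and "Stop", but not unrelated labels.
toggle_label = r"[S][t][ao][rp][a-z]*"

def label_matches(label):
    return re.fullmatch(toggle_label, label) is not None
```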
3. GUI Map configuration: Sometimes our application build screens maintain more than one similar object. To distinguish these similar objects, test engineers enhance the physical description of those similar objects.
Descriptive programming:
EX: TSL statement: button_press ("OK");
GUI Map reference:
logical name: OK
{
class: push-button,
label: "OK"
}
It is converted into descriptive programming as:
TSL statement: button_press ("{class: push-button, label: OK}");
User defined functions
For code re-usability, test engineers use the user defined functions concept. Every user defined function represents a re-usable operation in our application build w.r.t. testing.
EX: All are automation programs.
*A function contains only operations or recorded statements.
*Tests contain both operations and check points.
Syntax: public function functionname(in/out arg1,…….)
{
Body of function re-usable for other tests
}
EX1:public function add(in a,in b,out c)
{
c=a+b;
}
calling test:
x=10;
y=20;
add(x,y,z); #x to a,y to b and z from c.
printf(z);
EX2: public function add(in a, inout b)
{
b=a+b;
}
calling test:
x=10;
y=20;
add(x,y); #x to a and y is sent to b and get back to y.
printf(y);
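TSL's in/out/inout parameter modes have no direct Python equivalent; the usual analogue is to return the "out" values, as in this hypothetical sketch of the add() examples:

```python
# public function add(in a, in b, out c)  ->  return the "out" value
def add(a, b):
    return a + b

# public function add(in a, inout b)  ->  pass b in, get it back updated
def add_inout(a, b):
    return a + b

z = add(10, 20)        # x to a, y to b, z from c
y = add_inout(10, 20)  # y is sent in and received back updated
```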
EX3:public function login(in x,in y)
{
set_window(“Login”,2);
edit_set(“Agent name”,x);
password_edit_set(“Password”,password_encrypt(y));
button_press(“OK”);
}
calling test:
login(“sri”,”mercury”);
Case Study:
User Defined Functions.
public function login(in x, in y)
{
set_window ("Login", 2);
edit_set ("Agent name", x);
password_edit_set ("Password", password_encrypt(y));
button_press ("OK");
}
public function open(in x)
{
set_window ("Flight Reservation", 2);
menu_select_item ("File; Open order ….");
set_window ("Open Order", 4);
button_set ("Order no", ON);
edit_set ("Edit", x);
button_press ("OK");
}
Calling test 1: Check Update order button
login ("sri", "mercury");
set_window ("Flight Reservation", 2);
button_check_info ("Update order", "enabled", 0);
open (5);
set_window ("Flight Reservation", 1);
button_check_info ("Update order", "enabled", 0);
button_set ("Business", ON);
button_check_info ("Update order", "enabled", 1);
Calling test 2: Check Delete order button
login ("sri", "mercury");
set_window ("Flight Reservation", 2);
button_check_info ("Delete order", "enabled", 0);
open (2);
set_window ("Flight Reservation", 1);
button_check_info ("Delete order", "enabled", 1);
Compiled Module
After creating UDFs in TSL, test engineers make those UDFs executable. An executable form of a UDF is called a Compiled Module. To create Compiled Modules we can follow the navigation below.
*open WinRunner and Build.
*record repeatable operations once.
*make that repeatable operations as user defined functions with unique function names.
*save those functions.
*open File menu.
*select Test properties option.
*change test type to Compiled Module.
*click Apply and OK.
*and then execute once that saved file.(this file may contain more functions)
*write load("filename", 0, 1); in the start up script or program of WinRunner.
The load() function is used to load a Compiled Module file into RAM.
load ("Compiled Module filename", 0 or 1, 0 or 1);
In the above syntax, in the second argument 0 indicates a user defined module and 1 indicates a system defined module that is loaded automatically. In the third argument, 0 makes the function body appear while running and 1 hides the function body while running.
Exception Handling or Recovery Manager
To recover from abnormal situations while running tests, test engineers use 3 types of recovery techniques.
*TSL exceptions.
*Object exceptions.
*Popup exceptions.
1. TSL exceptions: These exceptions are raised when a TSL statement returns a specified return code.
Navigation:
*Tools menu.
*select Recovery Manager.
*click New button.
*select Exception event type as TSL.
*click Next.
*enter exception name with description for future reference.
*click Next.
*select TSL function name with return code to define problem.
*click Next.
*enter Recovery function name.
*click Define recovery function button.
*select Paste to paste it to current test option.
*click OK and Next and Finish.
*define function body to recover from problem.
*make that function as Compiled Module and write load statement of that function in Start up script.
EX: TSL Function:<<any function>> option in that wizard is selected.
Error code:E_ANY_ERROR.
Handler function:abc.
public function abc(in func,in rc)
{
printf (func & " returns " & rc); # the & represents concatenation.
}
2. Object exceptions: These exceptions are raised when a specified object property is equal to our expected value.
Navigation:
*Tools menu.
*select Recovery Manager.
*click New button.
*select Exception event type as Object.
*click Next.
*enter exception name with description for future reference.
*click Next.
*select object property with value to define problem.
*click Next.
*enter Recovery function name.
*click Define recovery function button.
*select Paste to paste it to current test option.
*click OK and Next and Finish.
*define recovery function body.
*make that function as Compiled Module and write load statement of that function in Start up script.
EX: Window:Flight Reservation.
Object: Insert order button.
Property:enabled.
Value:1
Handler function:pqr
public function pqr(in win,in obj,in attr,in val)
{
printf(“enabled”);
}
3. Popup Window exceptions: These exceptions are raised when a specified unwanted window appears in the build.
Navigation:
*Tools menu.
*select Recovery Manager.
*click New button.
*select Exception event type as Popup.
*click Next.
*enter exception name with description for future reference.
*click Next.
*select that un wanted window.
*click Next.
*specify Recovery operation to skip that window.
*click button,close window and execute a function.
*click Next and Finish.
NOTE: If our recovery operation is a function, then test engineers create that recovery function as a Compiled Module and write its load statement in the Start up script.
EX: Window:Flight Reservation
Handler action:fname
public function fname(in window)
{
set_window ("Flight Reservation", 2);
button_press ("OK");
set_window ("Open order", 2);
edit_set ("Edit", "1"); # a default order no (1) is given by the tester
button_press ("OK");
}
This exception occurs when an order is not there while running open order n times.
NOTE: WinRunner 8.0 allows you to administrate existing exceptions.
*exception_off ("exception name"); is used to disable an exception.
*exception_on ("exception name"); is used to enable an exception.
*exception_off_all (); is used to disable all exceptions.
NOTE: In this exception handling and recovery manager concept, test engineers predict abnormal situations that may arise, and solutions to overcome them, depending on available documents, past experience, discussions with others and browsing the application build more than once.
Web Test Option
WinRunner 8.0 also supports functional test automation on web applications such as HTML pages. It does not support XML web pages. To conduct testing on XML web pages, test engineers follow manual test execution or use the QTP tool.
During functional testing on website,test engineers are concentrating on below manual automation
coverages.
*behavioral coverage(changes in properties of web objects).
*input domain coverage(the type and size of web input objects).
*error handling coverage(the prevention of wrong operation).
*manipulations coverage(the correctness of output or outcome).
*order of functionalities coverage(the arrangement of functionalities or modules in a web site).
*backend coverage(the impact of web page operation on backend tables).
*links coverage or URL coverage(the execution and existence of every link in a web page).
*content coverage(the completeness and correctness of existing text in a web page).
NOTE:The last 2 coverages are applicable for Web applications functionality testing.
NOTE: One web site consists of more than one interlinked web page.
NOTE: To open web applications or sites, a browser is mandatory.
NOTE: During web application development and testing, the development and testing teams use Off-Line mode. There are two types of Off-Line modes: local host and local network.
1.Links coverage:It is a new coverage in web functionality testing.During this test,the test engineers are
validating every link in every web page of a website in terms of link execution and link existence.
“Link Execution means that the correctness of next page after clicking a particular link”.
"Link Existence means the place of a link in a web page, i.e. whether it is in the correct position and order or not".
To automate this links coverage,test engineers are using Web Test option in Add-In manager of WinRunner.
After launching WinRunner with WebTest option,test engineers are using GUI check point to automate
every link object.
a)Text link:Insert menu->GUI check point->for object/window->select testable text link->select URL and
Brokenlink properties with expected values->click OK.
obj_check_gui(“Link text”,”checklist name.ckl”,”expected values file”,time);
b)Image link:Insert menu->GUI check point->for object/window->select testable image link->select URL
and Brokenlink properties with expected values->click OK.
obj_check_gui(“Image text file name”,”checklist name.ckl”,”expected values file”,time);
c)Page/Frame:WinRunner8.0 is allowing us to create check point on all page level links.The flow is Insert
menu->GUI check point->for object/window->select one link object in testable Webpage->change your selection
from link object to Page->specify expected values for URL and Brokenlink properties->click OK.
win_check_gui(“Web page name”,”checklist name.ckl”,”expected values file”,time);
NOTE:In general,the test engineers are using GUI check point at page level for link coverages.
2.Content coverage:It is also a new coverage in Web functionality testing.During this test,the test engineers
are checking spelling,grammar,word missing,line missing etc. in content of web page.To automate this coverage
using WinRunner,test engineers are using Text check point.This check point consists of 4 sub options,when you
select the Web test option in Add-In manager.
*from object/window.
*from screen area.
*from selection(web only).
*web text check point.
a)from object/window:It is used to capture specified web object value(like capturing value from text box).
Navigation:Insert menu->get text check point->from object/window->select testable object in Web page to
capture that object value into variable.
web_obj_get_text ("Object name", "#line row no", "#line column no", variable, "text before", "text after", time);
In the above syntax, the row and column numbers indicate the line number within a text area object's content. A text box or edit box consists of only one line of text, i.e. #0 and #0. "Text before" and "Text after" indicate the unwanted content around the required line of text in a text box or textarea box.
EX:
It is a functional testing tool developed by Mercury Interactive. It is derived from WinRunner and Astra QuickTest. WinRunner is a testing tool from Mercury Interactive with scripts in TSL, whereas Astra QuickTest was from the Astra company with scripts in VBScript. Mercury Interactive took over the Astra company, and a new testing tool called QTP emerged with concepts from both WinRunner and Astra QuickTest. QTP scripting is in VBScript.
It converts our manual test cases into VBScript programs. QTP supports all the technologies that WinRunner 8.0 supports, plus XML, Multimedia, SAP, PeopleSoft and Oracle Apps.
Mercury Interactive was later taken over by the HP company.
Test process
*select manual functional test cases to be automated.
*receive Stable build from developers ie build after Sanity testing.
*convert selected manual cases into VBScript programs.
*make those programs as test batches.
*Run test batches.
*analyze results manually for defect reporting if required.
Add-In Manager
This window list out all QTP supported technologies w.r.t license.
Welcome Screen window provides 4 options unlike WinRunner.
*Tutorial->used for help documents.
*Start recording->used to open a new test with recording mode.
*Open existing->used to open previously created test.
*Blank test->used to open new test.
NOTE:
*Like WinRunner 8.0, QTP 8.2 also maintains one global XL sheet for every test. But in WinRunner the XL sheet is opened explicitly, whereas in QTP it is opened implicitly.
*The QTP Stop icon is useful to stop both recording and running.
*QTP 8.2 takes the required application path before starting every automation program creation. This "Selected Application" option is optional in WinRunner, but in QTP it is mandatory.
*Unlike WinRunner 8.0, QTP 8.2 maintains the recorded script in 2 views: Expert view and Keyword view. The Expert view maintains the script in the VBScript language, whereas the Keyword view maintains the script as English documentation.
*VBScript is not case sensitive and does not require a delimiter (;) at the end of every statement.
Recording modes
The QTP8.2 is allowing us to record our manual test case actions in 3 types of modes.
*General recording.
*Analog recording.
*Low Level recording.
a) General recording: In this mode, QTP records mouse and keyboard operations w.r.t. objects and windows in our application build, the same as Context Sensitive mode in WinRunner. This mode is the default in QTP. To select this mode, we can use the options below.
*click Start record icon or
*Test menu->Record option or
*F3 as short key.
b)Analog recording:To record mouse pointer movements on the desktop,we can use this mode.To select
this mode,test engineers are using below options
*Test menu->Analog recording option or
*Ctrl+Shift+F3 as short key.
EX:Digital signatures recording,Graphs drawing recording,image movements recording(not available in
WinRunner),etc.
NOTE:
*unlike in WinRunner, the recording of operations starts with General recording in QTP. To change to
other modes, test engineers use the available options.
*unlike in WinRunner, QTP8.2 provides a facility to record mouse pointer movements relative to the
desktop (this option is there in WinRunner) or relative to a specific window (this is a new option in QTP).
*in QTP,the test engineers are using same options or short keys to Start and to Stop Analog and Low level
recording modes.
c)Low level recording:Test engineers are using this mode to record non recognized or advanced technology
object operations in our application build.
EX:Advanced technology objects,Time based operations on objects,non recognized objects in supported technology
etc.
To select this mode,test engineers are using below process.
*Test menu->low level recording option or
*Ctrl+Shift+F3 as short key.
NOTE:Unlike WinRunner, which maintains F2 as the short key for both of its modes, QTP8.2 does not maintain
any common short key for the above 3 modes.
Check Points
The QTP8.2 is a Functional testing tool and it provides facilities to automate functional testing coverages
like
*GUI or behavioral coverage
*input domain coverage
*error handling coverage
*manipulations coverage
*order of functionalities
*backend coverage
*links coverage or URL coverage(for web site only)
*content coverage(for web site only)
To automate above coverages the test engineers are using below 8 check points in QTP.
1)Standard check point->to check the properties of objects and windows like GUI check point in
WinRunner.
2)Bitmap check point->to compare static images and Dynamic images(new in QTP).
3)Text check point->to check selected object value.
4)Textarea check point->to check selected area value.
5)Database check point->to check the completeness and correctness of changes in database tables.
6)Accessibility check point->to check hidden properties, for Web site testing only.
7)XML check point(Web page)->to check properties of XML objects in our web pages.
8)XML check point(File)->to check XML code tags.
*QTP allows us to insert check points in Recording mode only, except the Database check point and
XML check point(File). But in WinRunner we can insert check points both while recording and after completion of
recording.
1)Standard check point:To check properties of objects,we use this check point.
Navigation:
*select a position in script to insert check point.
*select Insert menu.
*choose Check point option.
*select Standard check point sub option.
*now select the testable object in build.
*click OK after confirmation message.
*select testable properties with expected values.
*click OK.
Syntax:Window(“Window name”).WinObject(“Object name”).Check Checkpoint(“name of the check point”)
NOTE:
*QTP allows checkpoint insertion while recording operations only.
*objects confirmation is mandatory while inserting check points.
*the QTP check points allow one object at a time.
*VBScript maintains a similar syntax for all types of check point statements.
*the QTP check points take 2 types of expected values: Constant and Parameter (for Parameter, an
XL sheet column name).
*If our expected value is a Constant in a check point, then QTP runs that automation program one time by default. If
our expected value is a Parameter in a check point, then QTP runs that automation program more than one time,
depending on the number of rows in the XL sheet column.
*the silent mode concept is optional in WinRunner but implicit or mandatory in QTP, because QTP
continues test execution even when a check point fails.
2)Bitmap check point:we can use this option to compare an expected image and an actual image. Unlike
WinRunner, this check point also supports dynamic image comparison. To create this check point on dynamic
images, test engineers select the Multi Media option in the Add-In Manager.
EX: logo testing
Navigation :
*open expected image in build.
*select Insert menu.
*select Bitmap check point option.
*select that expected image.
*click OK after confirmation message.
*select area of image if required and click OK.
*now close the Expected image and open Actual image.
*run check point.
*analyze results manually.
NOTE:The Bitmap check point in QTP supports static and dynamic image comparison, but this check
point does not provide the differences between the images.
3)Database check point:We can use this check point to conduct database testing.During database
testing,the testing team is concentrating on the impact of front end screen operations on database table content in
terms of data validity and data integrity.
Data validity means that the correctness of new data stored into database.Data integrity means that the
correctness of changes in existing values.
To automate the above observations on our application build database, test engineers use this check
point in QTP. This check point depends on the content of database tables, like the default check of the
WinRunner database check point.
Navigation:
*open QTP and select database check point in Insert menu(no need to insert this check point while
recording).
*specify connect to database using ODBC or Data Junction.
*select Specify sql statement manually option.
*click Next.
*click Create to select database connectivity name of our application build provided by development team.
*write select statement on impacted database tables.
*click Finish and click OK after confirmation of database content as expected.
*open our application build.
*perform front end operation in build and Run database check point.
*analyze results manually.
EX:
Automation program:
option explicit
dim x,y
x=Window("Sample").WinEdit("Input").GetVisibleText
y=Window("Sample").WinEdit("Output").GetVisibleText
if cdbl(y)=cdbl(x)*100 then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
EX2: Manual expected is Total = price * quantity
Build:
Automation program:
option explicit
dim q,p,t
q=Window("Shopping").WinEdit("Quantity").GetVisibleText
p=Window("Shopping").WinEdit("Price").GetVisibleText
p=mid(p,4,len(p)-3)
t=Window("Shopping").WinEdit("Total").GetVisibleText
t=mid(t,4,len(t)-3)
if cdbl(t)=cdbl(p)*cdbl(q) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
EX3: Manual expected is Total = price * no of tickets
Build:Flight Reservation Window
Automation program:
option explicit
dim t,p,tot
Window("Flight Reservation").WinMenu("Menu").Select ("File;Open Order")
Window("Flight Reservation").Dialog("Open Order").WinCheckbox("Order no").Set "ON"
Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set "1"
Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
t=Window("Flight Reservation").WinEdit("Tickets").GetVisibleText
p=Window("Flight Reservation").WinEdit("Price").GetVisibleText
p=mid(p,2,len(p)-1)
tot=Window("Flight Reservation").WinEdit("Total").GetVisibleText
tot=mid(tot,2,len(tot)-1)
if cdbl(tot)=cdbl(p)*cdbl(t) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
*Data Driven Testing
The re-execution of a test on the same application with multiple test data is called DDT or Iterative testing or
Re-testing. In DDT, the test engineer concentrates on the validation of a functionality with possible input
values. There are four types of DDT:
*DDT through Key Board.
*DDT through Flat File. (.txt files)
*DDT through Front end Objects.
*DDT through XL Sheets.
1. DDT through Key Board: Sometimes the test engineers are re-executing their test cases with multiple
test data through Dynamic submission using Keyboard.
To read data from keyboard dynamically,we can use below statement in VBScript
option explicit
dim x
x=inputbox(“Message”)
EX1: Manual expected is Result = input1 * input2.
Build:Multiply
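A minimal sketch of such a keyboard-driven program, following the pattern of the other examples in this section (the Multiply window and its object names are assumptions):

```vbscript
' hypothetical automation program for Result = input1 * input2,
' reading both test data values dynamically from the keyboard
option explicit
dim x,y,r
x=inputbox("Enter input1")
y=inputbox("Enter input2")
Window("Multiply").WinEdit("Input1").Set x
Window("Multiply").WinEdit("Input2").Set y
Window("Multiply").WinButton("OK").Click
r=Window("Multiply").WinEdit("Result").GetVisibleText
if cdbl(r)=cdbl(x)*cdbl(y) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
```

Each run prompts for fresh inputs, which is why the tester stays involved in execution with this DDT method.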
EX2: Manual expected is If Life insurance type is “A” then the Age object is focused. If Life insurance type is “B” then
the Gender object is focused. If Life insurance type is any other, then the Qualification object is focused.
Build:
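A possible sketch for this conditional case (the window/object names and the use of the "focused" property are assumptions):

```vbscript
' hypothetical program: verify which object receives focus
' for each Life insurance type entered from the keyboard
option explicit
dim t,f
t=inputbox("Enter Life insurance type")
Window("Insurance").WinComboBox("Type").Select t
if t="A" then
f=Window("Insurance").WinEdit("Age").GetROProperty("focused")
elseif t="B" then
f=Window("Insurance").WinComboBox("Gender").GetROProperty("focused")
else
f=Window("Insurance").WinEdit("Qualification").GetROProperty("focused")
end if
if f=true then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
```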
To use file content as test data in an automation program,test engineers are adding below VBScript
statements to that automation program.
Option explicit
Dim fso,f
set fso=createobject(“scripting.filesystemobject”)
set f=fso.opentextfile(“file path”,1,TRUE)
Here 1 is used for READ mode,2 for WRITE mode and 8 for APPEND mode.
Test engineers are using above VBScript command set to use flat file content as test data.
EX1: Manual expected is Delete order button enabled after open an existing order.
Build: Flight Reservation window
Test data: C:\Documents and Settings\Administration\My Documents\b8amdynamic.txt
Automation program:
Option explicit
Dim fso,f,p,x
set fso=createobject("scripting.filesystemobject")
p="C:\Documents and Settings\Administration\My Documents\b8amdynamic.txt"
set f=fso.opentextfile(p,1,TRUE)
while f.atendofstream<>true
x=f.readline
Window("Flight Reservation").WinMenu("Menu").Select("File;Open Order")
Window("Flight Reservation").Dialog("Open Order").WinCheckbox("Order no").Set "ON"
Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set x
Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
Window("Flight Reservation").WinButton("Delete order").Check Checkpoint("Delete order")
wend
EX2: Manual expected is Result = Input1 * Input2.
Build:
Automation program:
Option explicit
Dim fso,f,p,x,y,r
set fso=createobject("scripting.filesystemobject")
p="C:\Documents and Settings\Administration\My Documents\result.txt"
set f=fso.opentextfile(p,1,TRUE)
while f.atendofstream<>true
x=f.readline
y=split(x," ")
Window("Multiply").WinEdit("Input1").Set y(0)
Window("Multiply").WinEdit("Input2").Set y(1)
Window("Multiply").WinButton("OK").Click
r=Window("Multiply").WinEdit("Result").GetVisibleText
if cdbl(r)=cdbl(y(0))*cdbl(y(1)) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
wend
EX3: Manual test expected is Total = Price * Quantity.
Build:Shopping
Automation program:
Option explicit
Dim fso,f,p,x,y,price,tot
set fso=createobject("scripting.filesystemobject")
p="C:\Documents and Settings\Administration\My Documents\price.txt"
set f=fso.opentextfile(p,1,TRUE)
while f.atendofstream<>true
x=f.readline
y=split(x," ")
Window("Shopping").WinEdit("Itemno").Set y(2)
Window("Shopping").WinEdit("Quantity").Set y(5)
Window("Shopping").WinButton("OK").Click
price=Window("Shopping").WinEdit("Price").GetVisibleText
price=mid(price,2,len(price)-1)
tot=Window("Shopping").WinEdit("Total").GetVisibleText
tot=mid(tot,2,len(tot)-1)
if cdbl(tot)=cdbl(y(5))*cdbl(price) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
wend
Case study
Data Driven Testing method | Tester involvement in scripting | Tester involvement in execution
Through keyboard | Yes (by using the inputbox("") option) | Yes
Through front end objects | Yes (using VBScript statements) | No (this is 24/7 testing)
Through XL sheet | No (using the navigations) | No
Through flat files | Yes (filesystemobject statements) | No
Multiple Actions
The QTP8.2 is allowing multiple actions creation in an automation program.
To create multiple actions in an automation program,test engineers are following below navigation.
*select Insert menu.
*choose Call to new action option.
*specify Action name and click Ok.
Reusable actions
To improve modularity in an automation program, test engineers use the reusability concept. An action of one
program that is invoked in another automation program is called a Reusable action.
In the above example, Action1 of Test1 is invoked in Action1 of Test2 for code reusability.
To create reusable actions test engineers are following below navigation.
*record a reusable operation in our application build in a test as separate Action.
*select Step menu and choose Action properties.
*select Reusable Action checkbox and click OK.
*now open other test and select position in required place to insert reusable action.
*select Insert menu.
*choose Call to existing action option.
*browse previous test path and reusable action name.
*click OK.
Here one test reusable action is added to another test.
Parameters
*open reusable action in script.
*select Step menu and choose Action properties option.
*now select Parameters tab.
*click Add icon(+) to add parameter.
*now enter details.
*click add icon again to add more parameters and click OK.
*use that parameter in required place of sample inputs.
*save modifications in reusable action.
EX:
Test1
Action1(this is set as Reusable action)
--------
--------
Window(“login”).WinEdit(“Agent name”).set parameter(“x”)
Window(“login”).WinEdit(“Password”).setsecure crypt.encrypt(parameter(“y”))
--------
--------
Action2
- - - - - - - - some check points
--------
Test2
Action1
RunAction "Action1[Test1]",oneIteration,"sri","mercury"
--------
--------
The x,y parameters are sent while calling Action1 of test1 in Action1 of Test2.
NOTE:In WinRunner TSL, the sample input values are replaced with parameter names. But in QTP VBScript, the
sample input values are replaced by parameter statements like parameter(“parameter name”).
Synchronization Point
For successful test execution the test engineers are maintaining synchronization point to define time
mapping in b/w tool and our application build.
a)wait(): this function defines a fixed waiting time in test execution.
wait(time) or wait time
b)for object or window status: the above function defines a fixed waiting time, but our application build
operations take variable times to complete. For this reason, test engineers synchronize tool and build
depending on properties of objects like a status bar or progress bar.
Navigation:
*select a position in script.
*select Insert menu.
*choose Step option.
*select synchronization point option.
*select process completion indicator object(like status bar,progress bar).
*click OK after confirmation message.
*select enabled property as true.
*specify maximum time to wait in milliseconds and click OK.
Syntax: Window("Window name").WinObject("Object name").WaitProperty "enabled",TRUE,100000
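For example, after an Insert Order operation the script can wait on the status bar instead of using a fixed wait (the object names here are assumptions):

```vbscript
Window("Flight Reservation").WinButton("Insert Order").Click
' a fixed wait would always pause 10 seconds:  wait 10
' the synchronization point waits up to 100 seconds but resumes
' as soon as the indicator object becomes enabled
Window("Flight Reservation").WinObject("Status Bar").WaitProperty "enabled",TRUE,100000
```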
c)Change Run Settings:Sometimes our application build is not maintaining progress completion indicator
objects.In this situation test engineers are changing Run Settings of QTP to synchronize with build.
Navigation:
*Test menu.
*select Settings option.
*choose Run tab.
*increase Timeout in milliseconds.
*click Apply and click Ok.
NOTE:By default in WinRunner8.0 the timeout is 10000 ms, but in QTP8.2 the timeout is 20000 ms. In QTP, the
timeout settings are applied for that specified test only.
NOTE:The Run settings in WinRunner are tool level, whereas in QTP the settings are test level. For every new test
the settings will change.
Administration of QTP
a)QTP FrameWork:
c)Export references:Sometimes the test engineers execute an automation program in VBScript at
different locations. In this situation the test engineers follow the below process.
System1: create test (recording + inserting check points)
save test
export references
System2: download test and references files
open build and run that test
To export references to external file,test engineers are following below navigation.
*Tools menu.
*Object Repository.
*click Export button.
*save file name with .tsr as extension(Test Script Resource).
*click Save.
To use these references in other system,test engineers are following as
*Test menu.
*Settings.
*Resources tab.
*check Shared object option and browse the path of .tsr file.
*click Apply and click Ok.
d)Regular Expressions:Sometimes our application build object or window labels change
dynamically. To make QTP recognize these objects and windows, test engineers use regular expressions to change the
corresponding dynamic object or window reference.
Navigation:
*Tools menu.
*Object Repository.
*select Corresponding reference.
*click Constant value options icon.
*insert Regular expression into constant label.
*select regular expression checkbox.
*click OK.
EX: the reference with logical name Fax Orderno:1 is changed as follows.
Before:
{
class:Window
label:"Fax Orderno:1"
}
After (with the Regular Expression option checked):
{
class:Window
label:"Fax Orderno:[0-9]*"
}
e)Object Identification:Sometimes our application build windows consist of more than one similar object. To
distinguish these objects in QTP, the test engineer uses the Object Identification concept, like GUI Map configuration
in WinRunner.
*select Tools menu.
*choose Object Identification option.
*select Environment(means the technology used to construct our application build).
*select similar objects type.
*add MSWId as Assistive properties and finally click OK.
f)Smart Identification:Sometimes the test engineers use the Smart Identification concept to recognize
non-recognized or non-standard objects instead of Low level recording.
Navigation:
*select Tools menu.
*choose Object Identification option.
*select Environment.
*select Object type.
*check enable Smart Identification option.
*click Configure.
*select our required properties as base and optional.
*click OK.
g)Virtual Object Wizard:Instead of low level recording and smart identification, test engineers use the
VOW to recognize non-recognized objects forcibly.
Navigation:
*select Tools menu.
*Virtual objects option.
*new virtual object.
*click Next.
*select expected standard type.
*click Next.
*mark non recognized object area.
*click Next.
*confirmation of non standard object area selection.
*click Next.
*say Yes or No to create more virtual objects and click Finish.
NOTE:In general,the QTP8.2 is maintaining references for objects and windows in Per test mode using object
repository.This tool is maintaining Virtual objects references in Global mode using Virtual Object Manager.
Web Test Option
Like WinRunner8.0, QTP also allows us to automate Functional test cases on Web Sites. During
functional testing on web applications, test engineers concentrate on Links coverage and Content coverage as
extra coverages. To automate these, test engineers use the Standard check point (for HTML) or XML check point,
and the Text check point, respectively.
During web site testing, test engineers maintain an offline mode like Localhost or a local n/w. To
automate functional testing on web sites, test engineers select the Web option in the Add-In manager.
a)Links Coverage:It is a new coverage in web functionality testing.During this test,test engineers are
verifying every link existence and execution.In automation of links coverage,testers are using standard check point
for HTML links and XML check point for XML links.The links coverage automation is possible at 2 levels such as
link level and page level.
1)Text link or Image link:
*select Insert menu.
*choose Check point option.
*select Standard check point option.
*select testable Text or Image link.
*select HREF property with expected path of next page.
*click OK.
Browser(“Browser name”).Page(“Page name”).Link(“Link text”).Check Check point(“Checkpoint name”)
Browser(“Browser name”).Page(“Page name”).Image(“Image link text”).Check Check point(“Checkpoint name”)
2) Page/Frame:
*select Insert menu.
*choose Check point option.
*select Standard Check Point.
*select testable link.
*now select page level selection in confirmation message and click OK.
*click Filter link check.
*specify expected href/url for all page level links.
*click OK.
*say Yes (or) No to verify broken links.
*click Ok.
Browser(“browser name”).Page(“page name”).Check Check point(“check point name”)
NOTE:The QTP8.2 is allowing you to automate broken links testing at page level only.
NOTE:WinRunner8.0 does not support XML, but QTP8.2 provides XML check point(Web page) and
XML check point(File) to verify XML web page properties and XML code respectively.
b)Content coverage:During this test, the test engineers validate the correctness of the existing text in the
corresponding web pages in terms of spelling and grammar. To automate this content coverage using QTP, test
engineers use the Text check point in the Insert menu. This check point compares the test engineer's given expected
text with the web page's actual text in terms of Match Case, Exact Match, Ignore Space and TextNotDisplayed.
Navigation:
*select Insert menu.
*choose Check Point option.
*select Text Check point option.
*select web page content to test.
*specify expected text provided by customer as Constant or Parameter.
*select required type of comparison(like 4 match types).
*click OK.
Browser(“Browser name”).Page(“Page name”).Check Check point(“Checkpoint name”)
NOTE:The QTP8.2 is allowing us to apply above testing on part of web page content also.The marking of part of
web page content is depending on “Text Before” and “Text After” buttons.
Recovery Scenario Manager
In general, the test engineers plan 24/7 testing in test automation. In this style of test execution, the
testing tool recovers from an Abnormal state to the Normal state using existing recovery scenarios.
Approach:
Step1:specify details about Abnormal state(Popup(for windows),Objects,Test run and Application crash).
Step2:specify details about recovery(mouse/keyboard operation,function calling,close window,restart
windows).
Step3:specify post recovery to continue testing.
Step4:save above details for future reference.
Navigation:
*select Tools menu.
*choose Recovery scenario manager option.
*click new scenario icon.
*click Next.
*select type of problem(popup/object/test run error/application crash).
*specify required details to define that problem.
*click Next.
*select type of recovery(keyboard or mouse operations/function call/close application process/restart MS
windows).
*click Next.
*specify details to define recovery.
*click Next.
*specify the post recovery option as (repeat current step and continue/proceed to next step/proceed to next
action/proceed to next test iteration/restart current test execution/stop test execution).
*specify scenario name with description for future reference.
*click Next.
*select add to current test or add to default test settings.
*click Finish.
Batch testing
The sequential execution of more than one automation program is called Batch Testing. Every test batch
consists of a set of dependent test cases. The end state of one test is the base state for the next test, to continue test execution
without missing or repeating test case executions. Every test batch is also known as a Test Suite/Test Belt/Test
Chain/Test Build. To create test batches in QTP, test engineers use 2 concepts:
*Test Batch Runner option.
*RunAction command.
a)Test Batch Runner:It is a new topic in QTP8.2.It provides a facility to test engineers to make dependent
tests as batches.To create batch using test batch runner,we follow below navigation.
*Start menu.
*Programs.
*QTP.
*Tools.
*Test Batch Runner.
*click add icon.
*browse the path of test in order.
*click Run icon after base state creation in build for first test in that batch.
*analyze results of batch manually.
b)RunAction command:Like the CALL statement in WinRunner, the test engineers use the RunAction
command to interconnect dependent automation programs.
Parameter Passing
*make dependent tests as a batch using RunAction command.
*provide Reusable action permission to all Sub/Called tests.
*open required Called/Sub test.
*select Step menu.
*choose Action properties.
*select Parameters tab.
*click Add icon to create parameters with required details.
*replace that parameter in the place of sample input using parameter statement.
*open corresponding main test and add constant or parameter value to RunAction statement.
Ex:for constant give value directly as 3.
for parameter give as datatable.value(“Input”)
NOTE:
*whenever check point is failed the QTP8.2 is providing screen shots in results.
*Test Batch Runner like option is not available in WinRunner tool.
*Test Batch Runner concept is not allowing parameter passing in b/w tests.
*In Data Driven Batch Testing, the end state of the last test is the base state for the first test (of the next iteration).
EX: Test1(Login)
SystemUtil.Run “flight.exe”,””,””,”open”
Dialog(“Login”).WinEdit(“Agent name”).Set “sri”
Dialog(“Login”).WinEdit(“Password”).SetSecure “554ds”
Dialog(“Login”).WinButton(“OK”).Click
RunAction “Action1[Test2]”,oneIteration,datatable.value(“Input”)
Test2(Open order operation and x as parameter)
Window(“Flight Reservation”).WinMenu(“Menu”).Select(“File;Open Order”)
Window(“Flight Reservation”).Dialog(“Open Order”).WinCheckbox(“Order no”).Set “ON”
Window(“Flight Reservation”).Dialog(“Open Order”).WinEdit(“Edit”).Set parameter(“x”)
Window(“Flight Reservation”).Dialog(“Open Order”).WinButton(“OK”).Click
RunAction “Action1[Test3]”,oneIteration
Test3(check point insertion)
Window(“Flight Reservation”).WinButton(“Delete order”).Check Checkpoint(“Delete order”)
Window(“Flight Reservation”).WinButton(“Insert order”).Check Checkpoint(“Insert order”)
Window(“Flight Reservation”).WinButton(“Update order”).Check Checkpoint(“Update order”)
Window(“Flight Reservation”).Close
With statement
To decrease the size of a program in VBScript, the test engineers use this With statement.
*select Edit menu.
*choose Apply “With” to script option
We can use Remove “with” statement option to get original VBScript using same navigation as shown
above.
With Window(“Flight Reservation”)
.WinMenu(“Menu”).Select(“File;Open Order”)
With .Dialog(“Open Order”)
.WinCheckbox(“Order no”).Set “ON”
.WinEdit(“Edit”).Set parameter(“x”)
.WinButton(“OK”).Click
End With
End With
Active Screen
QTP8.2 provides a facility to see a snapshot of the build w.r.t. the existing automation program in VBScript.
*select View menu.
*and choose Active Screen option.
NOTE:Like WinRunner8.0, QTP8.2 also allows us to decrease the memory space of an automation
program in VBScript. To decrease space, test engineers use
*in File menu.
*select Export test to zip file option.
And to get original VBScript code,test engineers are using
*in File menu.
*select Import test from zip file option.
Accessibility check point:This check point is only applicable on Web Pages.Test engineers are using this check
point to verify hidden properties of web objects.
EX:Alternative Image testing,Page/Frame Titles check,Server Side Image check,Tables check.
Navigation:
*in Insert menu
*select Check point option.
*now choose Accessibility check point.
*select a Page/Object in build.
*select Accessibility setting.
*click OK.
The syntax is the same as for the other browser check points.
Q:How to conduct testing on Web Site development rules and regulations?
A:Using Accessibility check point we can conduct testing on properties of web objects.
XML check point(File):This check point is applicable on XML source code.
A Black Box tester trying to know the internal code of a build is performing Gray Box Testing.
Navigation:
*in Insert menu
*select Check point.
*choose XML check point(File).
*browse XML file path.
*click OK.
*select testable statement in that XML code.
*specify required properties with expected values using Attributes button.
*click OK.
NOTE:Unlike WinRunner, QTP8.2 allows us to automate our testing on XML web pages and XML
program files.
Syntax: XMLFile(“.xml”).Check Checkpoint(“.xml”)
Output values:This concept is providing a facility to get values from build like get_info statement in WinRunner.
Syntax:for browser Browser(“browser name”).Page(“page name”).Link(“link text”).Output Checkpoint(“name”)
For window Window(“window name”).WinObject(“object name”).Output Checkpoint(“name”)
Navigation:
*select Insert menu.
*choose Output value option.
*select type of output.
*provide details.
*select required properties to get value of that property.
*click OK.
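As a sketch, an output value can capture the Total field into the run-time data table, after which the captured value is reusable in later statements (the object name and the data table column name are assumptions):

```vbscript
' capture the visible value of Total into the data table during the run
Window("Flight Reservation").WinEdit("Total").Output Checkpoint("Total")
' reuse the captured value later in the test (assumed column name)
msgbox datatable.value("Total_out")
```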
Call to WinRunner
In companies, version one of a s/w may have been tested in WinRunner; in version two testing, they test the main module
in VBScript (QTP) and the sub modules in TSL (WinRunner).
The QTP is allowing us to call TSL programs and functions in required place of VBScript program.
a)Test Call:
*select position in QTP.
*and now select Insert menu.
*choose Call to WinRunner option.
*in it choose Test option.
*browse the path of test.
*specify parameters if required.
*say Yes/No to run WinRunner minimized.
*say Yes/No to close WinRunner after running the test.
*click OK.
Syntax: TSLTest.RunTestEx “path of the WinRunner test file”,parameters,T/F,T/F
b)Function Call:
*select position in QTP.
*and now select Insert menu.
*choose Call to WinRunner option.
*in it choose Function option.
*browse the path of the Compiled Module file.
*enter function name used in that module.
*specify parameters if required.
*say Yes/No to run WinRunner minimized.
*say Yes/No to close WinRunner after running the test.
*click OK.
Syntax: TSLTest.CallFunEx “path of the WinRunner Compiled Module file”,”function name”,parameters,T/F,T/F
NOTE:
*like WinRunner8.0, QTP8.2 also allows Transactions to calculate the execution time of a selected part
of our program.
*like WinRunner8.0, QTP8.2 also supports Run From Step to run a part of a program, and
Update Run to run our automated check points taking default values as expected.
*in WinRunner, the test engineers use F6 as the short key to debug a program line by line. But in QTP, the
test engineers use F11 as the short key to debug our VB program line by line.
*like the GUI Spy in WinRunner, the test engineers use the Object Spy in QTP to know whether an object
in our build is identifiable by our tool or not.
*like WinRunner, QTP also allows User Defined Function creation for repeatable operations in
our application build.
Syntax for User Defined functions:
Function functionname(Parameters)
-------
-------
End Function
*Save this file with (.vbs) as extension.
*open a test after saving that UDF.
*select Test menu.
*choose Setting option.
*select Resources tab.
*browse path of that function file.
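A minimal sketch of such a function file (the function name, file name and recorded operations are assumptions): save the lines below as, say, login.vbs, browse it in the Resources tab, and then call the function from any test as Login "sri","mercury".

```vbscript
' login.vbs - hypothetical user defined function for a repeatable login operation
Function Login(agent,pwd)
SystemUtil.Run "flight.exe","","","open"
Dialog("Login").WinEdit("Agent Name").Set agent
Dialog("Login").WinEdit("Password").Set pwd
Dialog("Login").WinButton("OK").Click
End Function
```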
Load Testing
The execution of our application build under the customer-expected configuration and customer-expected load,
to estimate speed of processing, is called Load Testing.
Load means the number of concurrent users working on our application build at the same time.
Stress Testing
The execution of our application build under the customer-expected configuration and various levels of load, to estimate
continuity, is called Stress Testing or Endurance testing.
Manual Vs Automation Performance testing
Manual load and stress testing is expensive and complex to conduct. For this reason, testing teams
concentrate on test automation to create virtual load.
Examples:LoadRunner, Silk Performer, JMeter, Rational Load Test etc.
LoadRunner8.0
It is also developed by Mercury Interactive. It is a load and stress testing tool. It supports
Client/Server, Web, ERP and Legacy applications.
It creates virtual users instead of using a real network of client computers.
Virtual Environment
RCL:The Remote Command Launcher converts a local request into a remote request using loop back
addressing (meaning the source computer address and destination computer address are the same).
VUGen:The Virtual User Generator turns the one real remote request into multiple virtual requests, but the
responses generated due to these virtual requests are real.
Port Mapper:It submits all virtual user requests to a single server process port.
Controller Scenario(CS):The CS returns the performance results.
NOTE:The Remote Command Launcher and Port Mapper are internal components in LoadRunner.VUGen,
Controller Scenario and Results Analyzer are external components.
Time Parameters:
a)Think Time:The time to create a request in client process.
b)Elapsed Time:The time taken to complete request transmission,processing in server and response
transmission is called Elapsed/Turn Around/Swing Time.
c)Response Time:The time to get first response from server process or the time to start an operation in
server.
b)Load and Stress testing Environment:we use the following to establish the test environment:
*customer expected computer configuration.
*AUT or Build.
*LoadRunner.
*Database server.
c)Load and stress test cases:
d) Navigation to create load and stress testing: let us assume our computer is the customer-expected
configured computer.
*Install our application build and the LoadRunner software
*Install the corresponding database server
*select Start menu
*Programs
*open the corresponding database server (like Oracle, SQL Server, MySQL, Quardbase etc.)
*now select Start menu
*Programs
*Mercury LoadRunner
*LoadRunner
*choose the Create/Edit Script option on the Welcome screen
*specify the build type as Client/Server
*select the database server type (ODBC by default)
*click OK and specify the path of our application build
*select recording into actions (Vuser_init, Action (one action only), Vuser_end)
*click OK and record our build's front-end operations for one user as init, action and end
*click Stop Recording
*save that script for one user (VSL)
*Tools menu
*Create Controller Scenario
*specify the number of users to define the customer-expected load and click OK
*click Start Scenario
NOTE: If the applied load passes on the build, the testing team analyzes the performance results. If the applied
load is not accepted by the build (memory leakage: space is not sufficient), the testing team reports a defect to the
development team.
The final result is for the whole process (init, action and end).
Transaction Point: to get performance results for a specific operation, test engineers mark the required operation
as a transaction.
Navigation:
*position the cursor above the required operation in Action
*Insert menu
*Start Transaction, specify a name and click OK
*now position the cursor at the end of the operation
*Insert menu
*End Transaction, specify the same name and click OK
EX: lr_start_transaction(“transaction name”);
open cursor
select/insert/delete/update/commit/rollback
close cursor
lr_end_transaction(“transaction name”,LR_AUTO);
The above script is the operation/action for one user; LR_AUTO makes LoadRunner determine the transaction's
pass/fail status automatically.
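The start/end transaction pair behaves like a named stopwatch around one operation. A minimal Python analogue is sketched below; the names `transaction` and `timings` are illustrative, not part of LoadRunner.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def transaction(name):
    """Illustrative stand-in for lr_start_transaction/lr_end_transaction:
    records the wall-clock time of the wrapped operation under `name`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with transaction("select_orders"):
    time.sleep(0.05)          # stands in for the database select under test

print(f"select_orders took {timings['select_orders']:.3f}s")
```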
Rendezvous Point: it is an interrupt point in load test execution. This point stops the current Vuser's
execution until the remaining Vusers have also executed up to that point; this is called load balancing/load
controlling.
lr_rendezvous(“name”);
lr_start_transaction(“transaction name”);
open cursor
select/insert/delete/update/commit/rollback
close cursor
lr_end_transaction(“transaction name”,LR_AUTO);
It is placed above the transaction point so that any Vuser executing fast is slowed down until the other Vusers
reach that point.
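The same hold-until-everyone-arrives behaviour can be sketched with a threading barrier. Each thread plays one Vuser arriving at a different time; all of them start the measured operation together once the last one arrives.

```python
import threading
import time

NUM_VUSERS = 3
rendezvous = threading.Barrier(NUM_VUSERS)   # all Vusers must arrive before any proceeds
results = []

def vuser(vid, think_time):
    time.sleep(think_time)        # each Vuser reaches the rendezvous at a different time
    rendezvous.wait()             # rendezvous point: wait for the other Vusers
    start = time.perf_counter()   # transaction starts only after all are released together
    time.sleep(0.01)              # stands in for the measured operation
    results.append((vid, start))

threads = [threading.Thread(target=vuser, args=(i, 0.02 * i)) for i in range(NUM_VUSERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

starts = [s for _, s in results]
print(f"spread between first and last start: {max(starts) - min(starts):.4f}s")
```

Even though the Vusers arrive 20 ms apart, their transaction start times are nearly identical, which is exactly the load-balancing effect the rendezvous point provides.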
Analyze Results
If the applied customer-expected load passes, the test engineers concentrate on analyzing the results;
otherwise they concentrate on defect reporting and tracking.
*Results menu
*Analyze results
*identify average response time in seconds.
Results Submission: After completion of the required load test scenario executions, the test engineers submit the
performance results to project management in the format below.
Scenario (select/insert/delete/update/commit/rollback)   Load (no. of Vusers)   Average response time (seconds)
Select                                                   5                      0.02
Select                                                   10                     0.05
Select                                                   20                     0.07
---                                                      ---                    ---
---                                                      ---                    ---
NOTE: In load and stress testing, the peak load accepted by our application build must be greater than or equal to
the customer-expected load.
Bench Mark Testing
After receiving the performance results from the testing team, the PM conducts Bench Mark Testing. In this
test, the PM confirms whether the performance is acceptable or not.
In Bench Mark Testing, the PM depends on the performance results of the previous version of the software,
competitive products in the market, the interests of customer-site people or the interests of product managers. If the
reported performance of the build is acceptable, the PM concentrates on the release of that software. If the reported
performance of the build is not acceptable, the programmers concentrate on changes in the coding structure without
disturbing functionality, or suggest that customer-site people improve the configuration of the working environment.
Increase Load
LoadRunner allows us to increase the load up to the peak load. In general, test engineers increase the
load to apply stress on the build.
Navigation:
*Tools menu
*Create Controller Scenario
*Vusers button
*Add Vusers button
*specify Quantity to Add to increase load and click OK
*click Close
*click Start Scenario for load test execution
*follow above navigation until peak load
Mixed Operations Load Testing
LoadRunner 8.0 allows us to perform load testing with various operations in our application build.
EX: 5 users for a Select operation and 2 users for an Update operation; the total load is 7 Vusers.
Navigation:
*create Vuser scripts separately for all required operations
*keep the Rendezvous point name the same in all Vuser scripts
*Tools menu
*Create Controller Scenario option
*specify the load for the current Vuser script and click OK
*select the Add Group button
*specify another Vuser script with its quantity
*click Add Group again to add multiple Vuser scripts with different loads
*click Start Scenario
*analyze the results manually when the load passes
*send reports if it fails
NOTE: For mixed-operations load testing, test engineers keep the Rendezvous point name the same. At the
Rendezvous point, the maximum time a Vuser waits for the other Vusers is 30 seconds.
Since the Rendezvous point is placed above the Transaction point, a Vuser's waiting time is not included
in the final performance time.
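The 30-second cap on rendezvous waiting can also be modeled with a barrier timeout: a Vuser that waits past the limit simply proceeds without the others. A 0.1 s timeout is used below instead of 30 s so the sketch runs quickly.

```python
import threading

timed_out = False
# Barrier for 2 Vusers, but only one will ever arrive.
barrier = threading.Barrier(2, timeout=0.1)

try:
    barrier.wait()                  # blocks until the timeout breaks the barrier
except threading.BrokenBarrierError:
    timed_out = True                # proceed alone, as a real Vuser would after 30 s

print("timed out:", timed_out)      # → timed out: True
```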
II)Web load and stress testing
a)Web site architecture
Data Submission: it allows us to submit formless or contentless data to the web server under load, such as auto
login, auto logout, auto transaction etc.
Performance Results Submission
After completion of the required scenarios' load and stress testing, the testing team submits the performance
results to project management.
Scenario (URL, image link, text link,   Hits per second   Throughput   Average response time
form and data submission)
URL open                                100               25 kbps      1 sec
URL open                                150               42 kbps      1 sec
URL open                                200               50 kbps      2 sec
URL open                                250               50 kbps      3 sec
--------                                ------            -------      -----
Bench Mark Testing
After receiving the performance results from the testing team, project management conducts Bench Mark
testing. In this test, the PM compares the current performance values with the previous version's performance,
competitive websites' performance or World Wide Web Consortium (W3C) standards. By these standards, a quality
website takes 3 seconds for link operations and 12 seconds for transactional operations.
NOTE: If the estimated performance is not acceptable, project management sends the website build back to the
development team to make changes in the coding structure without disturbing functionality. Sometimes project
management suggests that the customer site improve the configuration of the environment.
Run Time Settings
To conduct load and stress testing efficiently, test engineers use the Run Time Settings in the Virtual User
Generator.
a) Pacing: we can use this option to run our Vuser script iteratively with a fixed delay or a random delay
between iterations.
b) Log: LoadRunner maintains 2 types of performance logs, the Standard log and the Extended log.
c) Think Time: the time to create a request in the client. LoadRunner allows us to add think time to the
performance time if required. But we normally do not include think time, as it is the time for the user's request, not
the build's performance time.
EX: cookies of the server running at the client.
d) Additional Attributes: LoadRunner 8.0 allows parameter passing in Vuser scripts.
e) Miscellaneous: LoadRunner maintains silent mode if required. It allows processing in multithreading
or multiprocessing mode. It defines each action as a transaction if required.
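Pacing with a fixed or random delay between iterations can be sketched as follows; the function names here are illustrative, not LoadRunner APIs.

```python
import random
import time

def run_iterations(action, iterations, pacing="fixed", delay=0.05):
    """Sketch of VUGen pacing: run the action repeatedly with either a
    fixed delay or a random delay (uniform in [0, delay]) between runs."""
    for _ in range(iterations):
        action()
        if pacing == "fixed":
            time.sleep(delay)
        else:
            time.sleep(random.uniform(0, delay))

count = 0
def action():
    global count
    count += 1            # stands in for one execution of the Vuser script

run_iterations(action, iterations=3, pacing="fixed", delay=0.01)
print(count)              # → 3
```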
TestDirector 8.0
Configuration Repository
It is a storage area on our organization's server. It consists of all the documents and software programs related
to a project or product, kept for future reference.