Method Validation - where do I start?

Dr Geraldine O’Donnell
Director of DNA



Overview - Where do I start?

 Why, when, when not and how to validate
 Instrumental methods (qualitative) - performance parameters
 Instrumental methods (quantitative) - performance parameters
 Template for validation plan/report
 Touch on human-based method validation
Why Validate?
 During the introduction or implementation of a new method, a specific step must be taken to prove in an objective way that the method is suitable for its intended use. This step is called validation.
 Demand for validated methods has been driven by customers, accreditation bodies (e.g. as part of ISO accreditation requirements) and the forensic community.
 Validation is an integral part of the development and implementation process of a new method.
When to Validate
ISO/IEC 17025 [Link] requires validation of:
 non-standard methods
 laboratory-designed/developed methods
 standard methods used outside their intended scope
 amplifications and modifications of standard methods
When not to validate
 For standardised methods such as ISO or ASTM methods, a full validation is not necessary.
 The laboratory does need to verify the in-house performance of the method, as detailed in ISO/IEC 17025 5.4.2.
 The laboratory shall confirm that it can properly operate standard methods before introducing the tests or calibrations.
 This is called verification according to the VIM.


How to validate
 Make the decision that initial method development is finished.
 It is often not possible to determine exactly where method development finishes and validation begins.
 Many of the method performance parameters associated with method validation are in fact usually evaluated, at least in part, as part of method development.
 Document the measurement procedure (SOP).


How to validate
Part A - Initiate Project
 A designated person is appointed to draw up the validation plan.
 The plan should contain the following four elements:
1. The laboratory and customer requirements
2. The performance parameters that will need to be used to ensure that the outputs meet the laboratory and customer requirements
- The parameters will depend on the technique or process under consideration, but should in general address, as appropriate:
 Sampling
 Precision
 Repeatability
 Within-lab reproducibility
 Bias
 Matrix/substrate effects
 Specificity
 Working range
 Limit of detection/sensitivity
 Linearity
 Robustness
 Environmental susceptibility
 Competency of personnel
Eurachem guide: Terminology in analytical measurement - Introduction to VIM 3 (2011), available from [Link].
How to validate
Part A - Initiate Project
3. The acceptance criteria to be used to assess whether the performance parameters have been met
Note: It is critical to the success of the validation that the acceptance criteria are set as specifically as possible prior to the commencement of the validation work.
4. The design of the validation tests should also be considered at this stage, to ensure that they are as objective as possible.
 The validation plan will be reviewed by the operational manager/quality manager.
How to validate
Part B
 Start the validation work
The tests performed should be those specified in the validation plan. New or additional tests should not be introduced, nor planned tests omitted, unless authorised by the operational manager for the designated area.
 Prepare the implementation plan
The designated person has to consider what needs to be in place before the new technique or process can be implemented and how the implementation will be carried out. These considerations should be included as part of the final validation report.
How to validate
Part B
 Where appropriate the following should be addressed:
 The staff training plan and the arrangements for competence assessment and proficiency testing
 The protocols for calibration, monitoring and maintenance of any equipment
 The supply and traceability of any standards/reference materials
 The supply and quality control of key materials and reagents
 The SOP documents for the process and the interpretation/reporting of results
 Anti-contamination protocols
 Any special requirements associated with health and safety
 Review progress on the validation work


How to validate
Part C
 Complete and prepare the validation report
 It is important that this includes all the information needed to facilitate independent assessment of the fitness for purpose of the technique or process.
 A summary of the raw experimental data will normally suffice, but the raw data must be available.
 A statement of fitness for purpose regarding the method is added to the report.
How to validate
Part C
 Review the validation report and implementation plan
 The final validation report and implementation plan will be reviewed and approved at least by the operational manager for the designated area or the quality manager.
 They must be signed off formally as deemed fit for use.
 Consideration should be given to an executive summary.
 The aim of this is to provide those making decisions on the use of the results with a summary of the validation steps performed and the key issues surrounding the validation.
Instrumental Methods (Qualitative)
Performance parameters
 Qualitative analysis can be defined as "classification according to specified criteria".
 In analytical chemistry and related disciplines, the 'criteria' are understood to relate, in general, to information about the presence, composition and/or structure of materials.
 A qualitative method would normally give three possible results: positive, negative or inconclusive.
 For single-laboratory validation of qualitative methods, the Metrology of Qualitative Chemical Analysis (MEQUALAN) report is recommended: report EUR 20605 EN, ISBN 92-894-5194-7, European Commission.
Instrumental Methods (Qualitative)
Performance parameters
 Precision
 The most useful estimate is the within-lab reproducibility, which is the precision measured with different analysts, over extended timescales, within a single laboratory.
 Precision for a qualitative measurement can be stated in terms of the percentage of similar results obtained for test samples (see the sketch below).
 Precision is generally dependent on analyte concentration, and so should be determined at different levels.
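A minimal sketch (not part of the original slides) of how the percentage of similar results could be worked out from repeat classifications of the same test samples; the replicate calls and analyte levels below are hypothetical.

# Percentage of concordant calls for repeat tests of the same sample,
# as a simple figure for the within-lab reproducibility of a qualitative method.
def percent_agreement(results, expected):
    # Share of replicate results (e.g. 'positive'/'negative') matching the expected call
    matches = sum(1 for r in results if r == expected)
    return 100.0 * matches / len(results)

# Hypothetical replicates at two analyte levels (different analysts/days)
low_level = ["positive", "positive", "negative", "positive", "positive"]
high_level = ["positive"] * 10

print(f"Low level:  {percent_agreement(low_level, 'positive'):.0f}% concordant")
print(f"High level: {percent_agreement(high_level, 'positive'):.0f}% concordant")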
Instrumental Methods (Qualitative)
Performance parameters
 Trueness - Bias
 Trueness is an expression of how close the mean of a set of results is to the reference value.
 We can measure the false positive and false negative rates when we have prior information about the presence or absence of an analyte in a test sample.
 For a given test method, the basic properties that need to be measured are the numbers of true positive (TP) and true negative (TN) results and the numbers of false positive (FP) and false negative (FN) results obtained on a range of samples. From these numbers, the fundamental measures of reliability, viz. the false positive and false negative rates, can be calculated (see the sketch below).
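As a minimal sketch (not from the slides), the false positive and false negative rates can be calculated from the TP/TN/FP/FN counts using the usual conventions; the counts below are hypothetical.

# False positive rate = FP / (FP + TN): proportion of known-negative samples reported positive.
# False negative rate = FN / (FN + TP): proportion of known-positive samples reported negative.
def false_rates(tp, tn, fp, fn):
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

tp, tn, fp, fn = 48, 45, 5, 2   # hypothetical counts from a validation study
fpr, fnr = false_rates(tp, tn, fp, fn)
print(f"False positive rate: {fpr:.1%}")
print(f"False negative rate: {fnr:.1%}")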
 
Instrumental Methods (Qualitative)
Performance parameters
 The false rates vary with the level of the analyte. For high levels of analyte, the likelihood of false negatives will be very low, but at levels slightly above the threshold it will be relatively high.
Instrumental Methods (Qualitative)
Performance parameters
 Measurement range
 For a qualitative test there is a limit of detection (LOD) or a threshold.
 For further guidance see The Fitness for Purpose of Analytical Methods: a Laboratory Guide to Method Validation and Related Topics, 2nd Edition, Eurachem (2014).
Instrumental Methods (Qualitative)
Performance parameters
 Ruggedness
 Control of uncertainties in test parameters, such as times, temperatures and lengths, is vital for reliable qualitative testing.
 The laboratory is expected to control factors affecting the test result to within specified tolerances, or to demonstrate in a validation that the possible variation in individual test parameters has no significant influence on the outcome of the test.
 Measurement uncertainty
 Not directly applicable to a qualitative test.
Instrumental Methods (Quantitative)
Performance parameters
 Precision
 The most useful estimate is the within-laboratory reproducibility.
 Precision is usually stated in terms of standard deviation or relative standard deviation (RSD), as in the sketch below.
 Precision is generally dependent on analyte concentration, and so should be determined at a number of concentrations.
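A minimal sketch (not from the slides) of expressing precision as standard deviation and RSD at several concentration levels; the replicate results are hypothetical and in arbitrary units.

from statistics import mean, stdev

# Hypothetical replicate results at three concentration levels
replicates = {
    "low":    [2.1, 2.3, 1.9, 2.2, 2.0],
    "medium": [10.2, 9.8, 10.1, 10.4, 9.9],
    "high":   [49.7, 50.3, 50.1, 49.5, 50.2],
}

for level, values in replicates.items():
    s = stdev(values)                 # sample standard deviation
    rsd = 100.0 * s / mean(values)    # relative standard deviation, %
    print(f"{level:>6}: mean = {mean(values):.2f}, s = {s:.2f}, RSD = {rsd:.1f}%")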
Instrumental Methods (Quantitative)
Performance parameters
 Trueness - Bias
 Trueness is normally expressed in terms of measurement bias.
 Bias is estimated from the difference between the mean value of several measurement results, preferably obtained under within-laboratory reproducibility conditions, and a reference value (see the sketch below).
 Trueness can also be investigated as selectivity, e.g. by measuring bias at different levels of known interferences and in different matrices.
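A minimal sketch (not from the slides) of estimating bias as the difference between the mean of replicate results and a reference value; the reference value and results below are hypothetical.

from statistics import mean

reference_value = 10.0                       # assigned/certified reference value
results = [9.7, 10.1, 9.8, 9.9, 10.0, 9.6]   # replicate results, ideally under within-lab reproducibility conditions

bias = mean(results) - reference_value
relative_bias = 100.0 * bias / reference_value
print(f"Bias = {bias:+.2f}, relative bias = {relative_bias:+.1f}%")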
Instrumental Methods (Quantitative)
Performance parameters
 Measurement range
 The measuring range is the interval of concentration which can be measured with a specified uncertainty using the method.
 The lower limit of the measuring range is often considered to be the limit of quantification (LOQ).
Eurachem guide: Terminology in analytical measurement - Introduction to VIM 3 (2011)
Instrumental Methods (Quantitative)
Performance parameters
 Ruggedness/Robustness
 In any method there will be certain steps which, if not carried out sufficiently carefully, will have a severe effect on method performance.
 These steps should be identified and their influence on method performance evaluated.
 This involves making deliberate variations to the method and investigating the subsequent effect on performance (see the sketch below).
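A minimal sketch (not from the slides) of one deliberate-variation check: results obtained at the nominal setting of a method parameter are compared with results obtained after varying it, here using a two-sample t-test as one possible significance test. The data, the choice of parameter and the 5% significance level are illustrative assumptions.

from statistics import mean
from scipy import stats   # assumes SciPy is available

nominal = [10.1, 9.9, 10.2, 10.0, 9.8]   # results at nominal conditions
varied  = [10.0, 9.7, 10.1, 9.9, 9.8]    # results with one parameter deliberately varied

t_stat, p_value = stats.ttest_ind(nominal, varied)
print(f"Mean shift = {mean(varied) - mean(nominal):+.2f}, p = {p_value:.2f}")
print("Significant influence" if p_value < 0.05 else "No significant influence at the 5% level")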
Instrumental Methods (Quantitative)
Performance parameters
 Measurement uncertainty
 The expanded measurement uncertainty is normally what the laboratory reports to the customer.
 It provides an interval within which the value of the measurand is believed to lie with a high level of confidence (normally 95%).
 In proficiency testing the claimed uncertainty can be verified by comparing the difference between the result of the laboratory and the assigned value (see the sketch below).
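A minimal sketch (not from the slides) of reporting an expanded uncertainty and of a simple proficiency-testing check. The coverage factor k = 2 for roughly 95% confidence is a common convention, and the numbers and the acceptance rule (difference within the combined expanded uncertainty) are illustrative assumptions rather than the procedure prescribed by the slides.

k = 2                       # coverage factor for ~95% confidence (common convention)
u_c = 0.15                  # combined standard uncertainty estimated by the lab
U = k * u_c                 # expanded uncertainty reported to the customer

lab_result = 10.2
assigned_value = 10.0       # assigned value from the proficiency-testing provider
U_assigned = 0.20           # expanded uncertainty of the assigned value

difference = lab_result - assigned_value
limit = (U**2 + U_assigned**2) ** 0.5
print(f"Reported result: {lab_result} +/- {U:.2f} (k = {k})")
print(f"|difference| = {abs(difference):.2f}, acceptance limit = {limit:.2f}")
print("Uncertainty claim supported" if abs(difference) <= limit else "Review the uncertainty claim")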
Template of the validation plan and report
 IN-HOUSE VALIDATION OF MEASUREMENT PROCEDURES - VALIDATION PLAN
[The original slides show the in-house validation plan and report template forms here.]
The key to understanding how to validate human-based methods is understanding what changes arise from substituting the INSTRUMENT with the HUMAN.
Human Based
 Aspects are not as self-determining as for instrumental methods (i.e. the selection of critical features)
 No standards are available
Hence:
 As method development is more demanding, two requirements arise:
 documentation of the feature set, to demonstrate that the method is fit for purpose
 documentation of the decisions/choices made for the selection of the critical features
 Need to check the basis of these decisions ahead of performance testing (method development check)
Instrumental Methods
 Parameters to be validated, e.g. precision, trueness, measurement range etc.
Human Based Methods
 Workflow: Specimen → Analysis → Comparison → Evaluation → Verification
 Performance testing involves validating the Analysis, Comparison and Verification steps.
 Samples where the ground truth is known for the features must be used in performance testing.
THANK YOU!!

geraldineaodonnell@[Link]
