
Analytical Quality Control Working Group

STANDARD OPERATING PROCEDURE


Laboratory Data Management;
Out of Specification (OOS) Results

Authors:
Dr Christopher Burgess (09-Aug-2013)
Dr Bernd Renger (09-Aug-2013)

Technical Review:
Dr Matthias Heuermann
Dr Ulrich Rose
On behalf of the Analytical Quality Control Working Group (Core Team)

Approved by:
Dr Günter Brendelberger (09-Aug-2013)
On behalf of the Analytical Quality Control Working Group


Contents

Document Revision History
Scope & Application
Regulatory References
SOP Process Flow
Overview of Laboratory Data Management & the Analytical Process
QU involvement
Reportable values (results)
Gross Laboratory Errors & System Suitability Failures (Phase Ia)
Laboratory Data Record Errors (Phase Ib)
Full Laboratory Investigations (Phase II)
Outlier testing
Retesting protocol
Appendix 1; Checklist for Phase Ia OOS investigation
Appendix 2; Example calculation of a 95% Confidence Limit
Appendix 3; Example calculation of a robust mean and robust 95% Confidence Limit (H15 method)
Appendix 4; Technical Glossary
Document Revision History

Version | Date | Reason for Change | Status
V 1.0 | 11-Oct-2011 | First approved version of this SOP. Used as input for public review at ECA OOS Forum, Prague, June 2012 | Released
V 2.01 | 11-Jul-2012 | Draft version 2 by CB & BR based on inputs from OOS Forum and informal comments by FDA | Draft
V 2.01 | 11-Jul-2012 | New Appendix III added | Draft
V 2.02 | 01-Aug-2012 | Technical Approval & Release process | Draft
V 2.0 | 10-Aug-2012 | Second approved version | Released
V 2.1 | 09-Aug-2013 | Typographical errors in Appendix 2 corrected | Released


Scope & Application

This procedure applies to physicochemical-based laboratory testing. It is directed toward traditional drug
testing and release methods. These laboratory tests are performed on active pharmaceutical ingredients,
excipients and other components, in-process materials, and finished drug products. It is not applicable to
PAT (Process Analytical Technology) or RTR (Real Time Release) approaches.

In addition, it is intended to cover only continuous variables, for example assay, impurity values, hardness
etc., which may be assumed to be normally distributed. It is not intended to cover discrete variables, for
example, particle counts, identity tests or cosmetic quality defects derived from AQLs. This procedure is
confined to the analytical laboratory and does not apply to microbiological testing.

Specifically;
 Batch release testing and testing of starting materials.
 Registered In-Process Control testing if data are used as part of batch evaluation or certification.
 Stability studies on marketed products and/or active pharmaceutical ingredients (ongoing and follow-up
stability).
 Batches for clinical trials or IMPs.

If, at the end of Phase II of the OOS investigation, no assignable root cause has been identified, a full failure
investigation will be undertaken. This should be conducted under other procedures and is not covered by
this SOP.

Regulatory References
1. Guidance for Industry; Investigating Out-of-Specification (OOS) Test Results for Pharmaceutical
Production, US Food and Drug Administration (FDA, CDER), October 2006
2. Out Of Specification Investigations, Medicines and Healthcare products Regulatory Agency (MHRA), UK,
November 2010,
http://www.mhra.gov.uk/Howweregulate/Medicines/Inspectionandstandards/GoodManufacturingPractice/FAQ/OOSFAQs/index.htm
3. Evaluation and Reporting of Results, OMCL Network of the Council of Europe, Quality Assurance
Document PA/PH/OMCL (07) 28 DEF CORR, December 2007
4. USP 35 (2012) General Chapter <1010>, Analytical Data; Interpretation & Treatment
5. ISO 5725-5 (1998) Accuracy (trueness and precision) of measurement methods and results, Part 5:
Alternative methods for the determination of the precision of a standard measurement method;
Section 6.2, Robust analysis: Algorithm A, page 35


SOP Process Flow

[Flow chart: SOP process flow. START: analysis produces results and reportable values. Phase Ia investigation
(Analyst): if an obvious error is found, the result or value is documented and invalidated, its occurrence
trended, a CAPA considered if appropriate, and the analysis repeated, subject to Supervisor review and
approval. If no obvious error is found and the result is not within specification, the Phase Ib investigation
(Analyst and Supervisor) follows: a formal review of all analytical documentation; where a probable root cause
is identified and confirmed by remeasurement or correction, the result or value is invalidated, a correct value
substituted and documented under Supervisor review and approval, and a CAPA considered if appropriate. If no
root cause is identified or the outcome is inconclusive, the Phase II investigation (QC & QA) begins: sampling
and sample management review and batch record review; resampling only if appropriate and, exceptionally,
with a documented rationale and justification; where sufficient laboratory sample exists, a QU-authorised
retesting protocol is executed. If the acceptance criteria are achieved, all results are in specification and the
OOS is not confirmed. Otherwise the OOS is confirmed and a Phase III FULL FAILURE INVESTIGATION, in which
all relevant departments must be involved, follows. Deviation Management, CAPA & Change Control apply
throughout, and BATCH DISPOSITION is made by the QU.]

Overview of Laboratory Data Management & the Analytical Process

Laboratory data quality management processes are a part of the overall Quality Management System as
required by Chapter 1 of EU GMP and the FDA cGMPs as specified in 21 CFR §210 & §211.

Analytical processes and procedures are managed as part of a lifecycle concept. Laboratory data integrity
and security are critical requirements under the GMPs. Such a process is illustrated below.

[Diagram: the Analytical Testing Process (laboratory data integrity). INPUTS: samples, reagents, standards.
OUTPUTS: data, raw data and metadata, held as paper, hybrid or electronic records under data and record
security. CONTROLLED FACTORS: policies & procedures, qualified instruments, qualified & validated systems,
validated methods, trained staff. UNCONTROLLED FACTORS: instrument failures, system failures, deviations
from procedures, method drift, uncontrolled modifications and human error.]

The purpose of this SOP is to define the procedures for managing laboratory data which are out of
specification. Managing laboratory data which are Out-of-Expectation (OOE) or Out-of-Trend (OOT) is the
subject of a separate SOP.

QU involvement

The flow chart is a high level process guide to the SOP and is not intended to include all details or eventualities in this
SOP. Quality Control testing is considered an integral part of the Company's Quality Unit as explicitly required by EU
GMP. Formal Quality involvement, eg by a separate QA function, is kept to the minimum consistent with US & EU
regulatory expectations and requirements based upon published legislation and guidelines as well as the Core Team’s
experience. The extent of Quality oversight is very dependent on individual company requirements.

Organisation and nomenclature of Quality Control and Assurance functions and assignment of responsibilities are also
highly company specific. This SOP does not dictate or recommend specific steps that must be supervised by specific
quality functions other than those required by regulation. Therefore the term Quality Unit (QU), as used in the
revised Chapter 1 of the EU GMP Guide, is used here.

The initial investigation, however, should be performed directly under the responsibility of the competent laboratory.


Reportable values (results)

As defined in the Technical Glossary (Appendix 4), a reportable value or result is the predefined combination
of analytical measurements to arrive at a final analytical result which is compared with a registered
specification.

An analytical method/procedure might consist of a specific number of replicates to arrive at a reportable
value or result.

Performing more than one measurement (replicate measurements) is a valid and acceptable approach to
reduce analytical variability of the reportable result derived from these replicate measurements. This
predefined combination of analytical measurements must be formally written in the analytical
method/procedure used.

In this instance, individual analytical measurements are NOT compared with the registered specification.
However, they all must lie within the known predefined acceptance criteria based on the analytical
method/procedure's process capability which are also documented in the analytical method/procedure.

The reportable value or result is derived from one full execution of that method/procedure, starting from
the original (homogeneous) laboratory sample.

For instance;

1. An HPLC assay may be determined by averaging the peak responses from a number of consecutive,
replicate injections from the same preparation (usually 2 or 3). The assay result would be calculated
using the peak response average. This determination is considered one test and one reportable
result. The spread of the individual peak responses would need to meet the acceptance criteria set
in the analytical method/procedure before the averaging took place.

2. In some cases, eg assays, a series of sample preparations from the same homogeneous laboratory
sample constitute the method/procedure. It is appropriate to specify in the method/procedure that
the average of these multiple sample preparations from the same homogeneous laboratory sample
is considered to be one test and represents one reportable result.

Limits on acceptable variability among the individual assay results are based on the known variability
of the validated method/procedure and specified in the authorised written method/procedure.

If the individual assay results do not comply with these limits, the test is invalidated.

Note that the individual values from a series of sample preparations from the same homogeneous
laboratory sample are NOT compared with the registered specification.
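
As an illustration only, and not part of the procedure itself, the logic described above can be sketched in a few
lines of Python; the 2.0% spread criterion and the function name are hypothetical stand-ins for whatever the
validated analytical method/procedure actually specifies;

    import statistics

    # Hypothetical spread criterion standing in for the acceptance
    # criterion written into the validated analytical method/procedure.
    MAX_SPREAD_PCT = 2.0

    def reportable_value(replicates):
        """Combine replicate measurements into one reportable value.

        The individual replicates are checked against the method's
        predefined spread criterion, never against the registered
        specification; only the averaged result is compared with the
        specification.
        """
        mean = statistics.fmean(replicates)
        spread_pct = (max(replicates) - min(replicates)) / mean * 100
        if spread_pct > MAX_SPREAD_PCT:
            raise ValueError("replicate spread exceeds the method's "
                             "acceptance criterion; the test is invalidated")
        return mean

For example, reportable_value([98.1, 97.6, 97.9]) returns the single reportable value 97.9 (the mean), whereas
replicates spreading wider than the criterion invalidate the test before any comparison with the specification
takes place.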


Gross Laboratory Errors & System Suitability Failures (Phase Ia)

The Phase Ia investigation is to determine and document whether there has been a clear, obvious laboratory
error or whether the basic data quality fails system suitability acceptance criteria. It is the responsibility of the
Analyst to perform and of the Supervisor to confirm. If a clear, obvious laboratory error occurs, the procedure
must be stopped and the Supervisor informed immediately.

Examples of a clear obvious laboratory error are;

 Spillage of sample or standard materials
 Incorrect volumetric glassware used
 Instrument or system malfunction
 Incorrectly set instrument or system parameters
 Wrong standard or reagents used
 Sample preparation problems

If this is a recurrent problem, raising a CAPA will be appropriate.

The number and type of clear obvious laboratory errors should be trended as part of the overall Laboratory
Data Management system.

The Analyst must ensure that only those instruments meeting established performance specifications are
used and that all instruments are properly calibrated. The Analyst is responsible for checking that analytical
measurement data are within the predetermined acceptance criteria for System Suitability as defined in the
analytical procedure. All sample and standard preparations must be retained and only disposed of after the
Supervisor has signed off any investigation.

Examples of System Suitability failures are;

 System precision exceeded at the start or the end of a chromatographic run
 Permitted range of analytical measurements or results exceeded
 Standard responses drifting beyond predetermined limits
 Non-conformance of chromatographic parameters as defined in USP <621> and Pharm. Eur. 2.2.46

System Suitability failures indicate that the system is not functioning properly. All of the data collected
during the suspect time period should be properly identified and should not be used.
The cause of the malfunction should be identified and, if possible, corrected before a decision is made
whether to use any data prior to the suspect period.

The supervisor will determine the appropriate remedial actions to be taken. A CAPA would normally be
expected to be raised in the case of System Suitability failures if the root cause is not trivial.

The documentation of this investigation is conveniently handled using a simple check list. An example of a
generic model check list is given in Appendix 1.


If gross analytical errors or system failures are identified, the data are invalidated and documented in the
analytical record. The analysis is repeated using the original laboratory sample in accordance with the
analytical procedure.

If no gross analytical errors or system failures are identified, proceed to Phase Ib.

Laboratory Data Record Errors (Phase Ib)

In the absence of a clear obvious laboratory error or system suitability failure, a Phase Ib investigation is
carried out to review the analytical process and record to determine whether a laboratory root cause can be
assigned. This is the responsibility of the competent Supervisor and should cover at least;

 Establishing that the correct sample(s) were tested.
 Confirming that sample integrity was maintained in the laboratory.
 Checking that all instrumentation and systems used in the testing were within calibration date, and
reviewing log books for any changes.
 Confirming that the Analyst was trained to carry out the method.
 Discussing the test method with the Analyst and confirming analyst knowledge of and performance
of the correct procedure.
 Examining the actual samples and standards used and any of the solutions prepared.
 Examining the raw data obtained in the analysis, including chromatograms and spectra, and
identifying any anomalous or suspect information or appearance.
 Reviewing environmental temperature/humidity data within the area whilst the test was conducted.
 Determining that appropriate reference standards, solvents, reagents, and other solutions were
used.
 Identifying any previous issues with this procedure.
 Identifying other potentially interfering testing/activities that occurred at the time of the testing.
 Verifying that the calculations used to convert raw data values into a result(s) and reportable value
are scientifically sound, appropriate, and correct.
 Fully documenting and preserving records of this Phase of the laboratory assessment.

As a result of this investigation, there may be a laboratory error hypothesis concerning the root cause of an
OOS result. This may be confirmed by reappraisal and/or remeasurement of the previously prepared
solutions. Retesting is not appropriate at this stage.

Laboratory error should be relatively rare. Frequent errors suggest a problem that might be due to
inadequate training of analysts, poorly maintained or improperly calibrated equipment, insufficiently
detailed written methods/procedures or careless work.

Whenever a laboratory error is identified, the source of that error should be determined and corrective
action and, if appropriate, preventative actions taken.


If a laboratory error is identified, the initial OOS data are invalidated and recorded in the analytical
documentation. The analysis should be repeated using the original laboratory sample in accordance with
the original analytical procedure.

If the Phase Ib investigation is inconclusive and does not identify a root cause for the OOS result, a full Phase
II laboratory failure investigation is undertaken under the control of the Quality Unit.

Full Laboratory Investigations (Phase II)

Before consideration of any retesting, the Quality Unit must conduct a review of batch records, in accordance
with an established procedure not covered by this SOP, to ascertain whether any deviations or errors in the
production or sampling process might indicate the root cause for the OOS result.

If the outcome of this QU review is negative, a protocol for the retesting of the original laboratory sample,
with the intention of 'isolating' or confirming the OOS result, should be prepared under the control of the
QU.

Very infrequently, resampling may have to be carried out. In this instance, a documented rationale for and
justification of this activity must be prepared.

Outlier testing

On rare occasions, a reportable value may be obtained that is markedly different from the others in a series
obtained using a validated method. Such a value may qualify as a statistical outlier.

It should never be assumed that the reason for an outlier is error in the testing procedure, rather than
inherent variability in the sample being tested.

Outlier testing is a statistical procedure for identifying from an array those data that are extreme. The
possible use of outlier tests should be determined in advance in accordance with an SOP.

An outlier test is of value in estimating the probability that the OOS result is discordant from a data set. This
information can be used in an auxiliary fashion, along with all other data from an investigation, to evaluate
the significance of the result. However, a statistical outlier test will not identify the cause of an extreme
observation and therefore must not be used alone to invalidate a suspect result.

Outlier testing must be used with extreme caution. The use of any outlier test and the minimum
number of results required to obtain a statistically significant assessment from the specified
outlier test must be approved in advance by the QU.

A description of some possible test approaches is given in Regulatory Reference 4. A robust method is to be
preferred, as non-robust tests are strongly influenced by the nature of the outlier itself.
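
As a hedged illustration of such a robust approach, and not a test prescribed by this SOP, candidate outliers
can be flagged with z-scores built from the median and the median absolute deviation (MAD) rather than from
the mean and standard deviation; the cut-off of 3 used here is an assumption and, as stated above, the chosen
test and its minimum number of results must be approved in advance by the QU;

    import statistics

    def robust_z_scores(values):
        """z-scores from the median and 1.483 * MAD; the factor 1.483
        rescales the MAD to estimate the standard deviation for normally
        distributed data, while remaining far less influenced by the
        extreme value under examination."""
        med = statistics.median(values)
        mad = statistics.median(abs(v - med) for v in values)
        if mad == 0:
            raise ValueError("no spread in the data; robust z is undefined")
        return [(v - med) / (1.483 * mad) for v in values]

    def flag_outliers(values, cutoff=3.0):
        """Flagging only: a |z| above the pre-approved cut-off marks a value
        as discordant, but flagging alone never invalidates a result."""
        return [abs(z) > cutoff for z in robust_z_scores(values)]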


Retesting protocol

The retesting protocol will be authorised by the Quality Unit and contain at least the following sections;

1. The purpose of the retesting.
2. The method to be used. The number of retests relates to the number of complete executions of this
method, including standards.
3. The use of a second analyst if considered appropriate.
4. The maximum number of retests to be performed on the sample should be specified in advance.
This can be based on scientifically sound statistical principles as described in USP General Chapter
<1010>. In addition, the procedures of Hofer1 or Andersen2 may be usefully consulted.
5. Methods of data evaluation are defined.
6. The acceptance criteria for the successful isolation of the OOS result must be defined. This will
include that all retest results are within specification and that the spread is within the known process
capability of the method.
7. The 95% confidence intervals of the data including the initial OOS result must also lie within
specification. A description and examples of a simple approach are given in Appendix 2.
8. Preferably, a 95% confidence interval using a robust method of estimating the mean and standard
deviation can be undertaken, such as that described in ISO 5725-5 [Regulatory Reference 5] or its
extension, Huber's H15 method. This approach is briefly described in Appendix 3. These
approaches are more mathematically demanding but are more statistically sound.
9. An evaluation of all data and a conclusion of the outcome of the retesting.

The chosen retesting rationale must be scientifically sound and well documented. It is proposed by the
Company and should be defendable to the Competent Authority. The approaches outlined in Appendices 2
and 3 are only examples of suitable approaches. Other equally valid approaches may be adopted.

If all the acceptance criteria in the approved retesting protocol are met, the batch is considered to be within
specification. However, the initial OOS result cannot be invalidated, as no assignable root cause has been
established. All data must be documented in the analytical record. The reportable value is taken as the mean
of the retest data without the original OOS result.

If the Phase II investigation fails to isolate the OOS result, or is inconclusive because it does not identify a root
cause for the OOS, a full Phase III investigation is undertaken under the control of the Quality Unit.

A full Phase III investigation is not the subject of this SOP.

1 J D Hofer, Considerations When Determining a Routine Sample Size for a Retest Procedure, Pharmaceutical
Technology, November 2003
2 S Andersen, An alternative to the ESD approach for determining sample size and test for Out-of-Specification
situations, Pharmaceutical Technology, May 2004


Appendix 1; Checklist for Phase Ia OOS investigation

Sample/Laboratory Reference:
Analytical Method Number:
Date & Time of the Investigation:
OBVIOUS OR GROSS ERRORS
 Weighing error  Instrument or equipment malfunction
 Sample or standard spillage  Instrument parameters incorrectly set
 Incorrect pipette used  Incorrect Instrument used
 Incorrect volumetric flask used  Power failure during analysis
 Volumetric dilution error  External interferences; eg vibration, excessive airflows
 Contamination or cross contamination  Sample management compromised
 Incorrect sample used  Laboratory Sample storage compromised
Documented evidence

METHOD OR PROCEDURE DEVIATIONS
 Incorrect method/procedure used  Wrong Reference standard(s) used
 Instrument/System not calibrated  Wrong Reference standard(s) preparation
 Wrong concentration range used  Improper sample/standard solution storage
 Wrong grade of reagents/solvents used  Analyst not trained to carry out method/procedure
 Environmental conditions not met  Deviation from documented method/procedure requirements
 Improper sample/standard solution storage  Method/procedure not followed
 Additional or missing peaks observed in a chromatogram  Additional or missing peaks observed in a spectrum
Documented evidence

SYSTEM SUITABILITY FAILURES
 RSD of standards exceeded  Range between injections exceeded
 RSD of samples exceeded  Range between sample replicates exceeded
 Measurement outside validated range  System suitability tests missing
 USP <621> or EP 2.2.46 criteria exceeded
Documented evidence

CALCULATION ERRORS
 Data transcription error  Calculation formula incorrect
 Arithmetical error  Incorrect factor used
Documented evidence

Analyst (Date & Time)

Supervisor (Date & Time)


Appendix 2; Example calculation of a 95% Confidence Limit

Assume that the registered specification is 95.0 to 105.0% of claim and that an OOS result of 94.7 is
obtained.
The Phase II investigation fails to identify a root cause. QU authorise a retesting protocol for n=6 and the
retest results obtained are; 98.0, 97.0, 96.1, 96.5, 97.4, 96.2.

The lower 95% Confidence Limit is calculated for all n (= 7) results from the formula

$$\bar{X} - \frac{t_{(0.05,\,n-1)}\; s}{\sqrt{n}}$$

where $\bar{X}$ is the mean of all 7 values, $t_{(0.05,\,n-1)}$ is the value of the t distribution for 6 degrees of
freedom and $s$ is the calculated sample standard deviation:

$$\bar{X} - \frac{t_{(0.05,\,n-1)}\; s}{\sqrt{n}} = 96.56 - \frac{2.45 \times 1.06}{\sqrt{7}} = 95.6$$

As this value is greater than the LSL of 95.0%, the 'isolation' of the initial OOS result is achieved.
The reportable value is hence the mean value of the six retest results, 96.9. This situation is shown graphically
below.
[Dot plot: the initial OOS result (94.7) and the six retests plotted on an analytical results axis from 90 to 105,
with the LSL and USL marked; the lower 95% confidence limit lies above the LSL.]

However, if the initial OOS result obtained was 92.7 instead of 94.7, the lower 95% confidence limit would
have been 94.7, which is below the registered specification limit:

$$\bar{X} - \frac{t_{(0.05,\,n-1)}\; s}{\sqrt{n}} = 96.27 - \frac{2.45 \times 1.71}{\sqrt{7}} = 94.7$$
[Dot plot: the initial OOS result (92.7) and the six retests plotted on the same 90 to 105 axis; the lower 95%
confidence limit now falls below the LSL.]

In this case, the 'isolation' of the outlier was unsuccessful.
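
For transparency, the two calculations above can be reproduced with a short Python sketch; this is illustrative
only, and the t value of 2.45 is simply the tabulated figure quoted in this appendix rather than one computed
in code;

    import math
    import statistics

    T_VALUE = 2.45  # tabulated t for 6 degrees of freedom, as quoted above

    def lower_95_confidence_limit(results, t_value=T_VALUE):
        """Mean minus t * s / sqrt(n), taken over all n results."""
        n = len(results)
        return (statistics.fmean(results)
                - t_value * statistics.stdev(results) / math.sqrt(n))

    retests = [98.0, 97.0, 96.1, 96.5, 97.4, 96.2]
    print(lower_95_confidence_limit([94.7] + retests))  # ~95.6: isolation achieved
    print(lower_95_confidence_limit([92.7] + retests))  # ~94.7: isolation fails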


Appendix 3; Example calculation of a robust mean and robust 95% Confidence Limit (H15 method)

In Appendix 2, the reportable value was calculated excluding the outlier which had been 'isolated' using a
standard 95% confidence interval.

A more statistically sound approach is to include this value using a robust procedure based on the median and
deviations from it because medians are less influenced by outlying values. The method briefly described
here is based on Huber's H15 method. This is more difficult to calculate than the standard confidence
interval as it relies on an iterative procedure. However this is easily accomplished using an Excel
spreadsheet without the necessity for macros. The mathematical details are given by Ellison, Barwick and
Duguid Farrant3.

However, H15 has a major advantage in that the calculation includes the outlier as part of the data set and
provides values for the robust mean and robust standard deviation.

Briefly, the algorithm used is as follows;

1. Calculate the initial estimates of the median and robust standard deviation

$$x_0^* = \mathrm{median}(x_1, \ldots, x_n)$$
$$s_0^* = 1.483 \cdot \mathrm{median}\,\lvert x_i - x_0^* \rvert$$

2. Evaluate all the data points using

$$x_i^* = \begin{cases} x^* - 1.5\,s^* & \text{if } x_i < x^* - 1.5\,s^* \\ x^* + 1.5\,s^* & \text{if } x_i > x^* + 1.5\,s^* \\ x_i & \text{otherwise} \end{cases}$$

where $x^*$ and $s^*$ are the estimates from the previous iteration. This replaces points outside the range
with values at the calculated 99% confidence limits but leaves other values unchanged.

3. Update the estimates of the robust mean and robust standard deviation from the modified data points,
applying a correction factor, β, for the normal distribution to the standard deviation.
4. Repeat steps 2 and 3 on the updated data until the values for the robust mean and robust standard
deviation no longer change, for example by more than 0.1%.
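
A minimal Python sketch of this iteration is given below, assuming the ISO 5725-5 Algorithm A form with
β = 1.134 applied to the sample standard deviation; other β conventions exist, so intermediate iteration values
may differ slightly from those shown in the figure that follows, although the converged estimates should agree
closely;

    import statistics

    def huber_h15(values, rel_tol=1e-4, max_iter=1000):
        """Iterative robust mean and SD (Huber H15 / ISO 5725-5 Algorithm A).

        Step 1: start from the median and 1.483 * MAD.
        Step 2: winsorize points lying outside mean +/- 1.5 * SD.
        Step 3: re-estimate; the factor 1.134 corrects the SD of the
                winsorized data back to a normal-distribution scale.
        Step 4: repeat until both estimates stop changing.
        """
        x = statistics.median(values)
        s = 1.483 * statistics.median(abs(v - x) for v in values)
        for _ in range(max_iter):
            lo, hi = x - 1.5 * s, x + 1.5 * s
            w = [min(max(v, lo), hi) for v in values]  # winsorized copy
            new_x = statistics.fmean(w)
            new_s = 1.134 * statistics.stdev(w)
            if abs(new_x - x) <= rel_tol * s and abs(new_s - s) <= rel_tol * s:
                return new_x, new_s
            x, s = new_x, new_s
        return x, s

    data = [94.7, 98.0, 97.0, 96.1, 96.5, 97.4, 96.2]
    print(huber_h15(data))  # robust mean ~96.6, robust SD ~0.8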

It is easier to understand the process graphically, using the same data set as in Appendix 2.

3 Ellison SLR, Barwick VJ, Duguid Farrant TJ, Practical Statistics for the Analytical Scientist, Section 5.3.2: Robust
estimators for population means, Royal Society of Chemistry, 2009, ISBN 978-0-85404-131-2


The initial OOS result is shown in red and the retest data in green in the figure below, under "Initial data". The
lower specification limit (LSL) is also marked on the dot plot. The first iteration moves both the initial OOS
result and one retest result to the calculated confidence limits. Note that 5 of the 6 retest results are
unchanged throughout the calculation. As the iterations proceed, the robust mean and robust standard
deviation change until the values converge. After 28 iterations, convergence is obtained, the 99% confidence
interval is well within the specification, and the robust mean of 96.64, based on all 7 data points, is determined
as the reportable value. For comparison, the values of the mean and standard deviation are given for the
initial data and for each iteration.

[Dot plots of the data at each stage on a 94.0 to 99.0 axis, with the LSL and the 2.96 s* (99%) interval marked.
The accompanying estimates are:]

Stage | Mean | Standard deviation
Initial robust estimates | x*0 = 96.5 | 2.96 s*0 = 2.20
Initial data (classical) | 96.56 | 1.063
Iteration 1 | 96.62 | 0.731
Iteration 5 | 96.66 | 0.798
Iteration 28 (converged) | 96.64 | 0.860 (2.96 s*28 = 2.54)


Appendix 4; Technical Glossary

Acceptance Criteria Numerical limits, ranges, or other suitable measures for acceptance of the
measurements or results of analytical methods and procedures
Analytical Measurement The output from an instrument or procedure eg absorbance or peak area.
Analytical Result The conversion of an analytical measurement to a property of the sample eg
concentration.
CAPA Corrective Action and Preventative Action system to manage significant
deviations.
Change Control A formal system to manage planned events or differences from policies,
procedures or standards.
Confidence Interval A range of values so defined that there is a specified probability (normally
95%) that the value of a measurement or result lies within it.
Deviation An unplanned event or a difference from procedures or standards.
Gross Error A mistake or systematic error that would cause a measurement to be invalid
or biased.
Original Laboratory Sample The representative sample obtained from the batch by a scientifically sound
sampling procedure.
Out of expectation result (OOE) An atypical, aberrant or anomalous result within a series of results obtained
over a short period of time which is nevertheless still within the acceptance
range of the specification.
Out of Specification result (OOS) A result that falls outside acceptance criteria which have been established
in official compendia and/or by company documentation.
Out of Trend result (OOT) A time dependent result which falls outside a prediction interval or fails a
statistical process control criterion.
Outlier A measurement or result which lies outside the expected distribution of
sample measurements or results at a defined level of confidence.
Protocol An approved document which specifies the extent of testing and methods for
data evaluation.
Random Error Errors in measurement that lead to measured values being inconsistent when
repeated analytical measurements are made.
Remeasurement The repeat of an analytical measurement on prepared solutions to test a
postulated root cause hypothesis.
Reportable Value (Result) The predefined combination of analytical measurements to arrive at a final
analytical result which is compared with a specification.
Resampling Any additional units collected as part of the original sampling procedure or
from a new sample collected from the batch, should that be necessary.


Retesting The repeat of an analytical procedure using the original sample under a
protocol to isolate an initial OOS, OOE or OOT result for which no root
cause could be identified.
Root Cause An assignable cause identifying the reason for obtaining an OOS, OOE
or OOT result.
Specification A list of tests with references to analytical procedures and appropriate
acceptance criteria.
Standard Error of the Mean The error of a mean value (of n determinations) at a specified
probability (normally 95%) determined by the product of the value of
the estimated sample standard deviation and the value of the t
distribution with n-1 degrees of freedom divided by the square root of
the number of determinations, n.
System Suitability Test A test to ensure that analytical data are within the predetermined
acceptance criteria for an analytical system based upon the known
analytical process measurement capability.
