
Advanced QC Strategies
Risk-Based Design for Medical Laboratories

James O. Westgard, PhD


Hassan Bayat, CLS
Sten Westgard, MS

Copyright © 2022

Westgard QC, Inc.


7614 Gray Fox Trail
Madison WI 53717
Phone 608-833-4718
http://www.westgard.com
Library of Congress Control Number: 2022942184
ISBN 1-886958-36-X
ISBN13: 978-1-886958-36-4
Published by Westgard QC, Inc.
7614 Gray Fox Trail
Madison, WI 53717

Phone 608-833-4718

Copyright © 2022 by Westgard QC (WQC). All rights reserved. No part
of this publication may be reproduced, stored in a retrieval system, or
transmitted in any form or by any means, electronic, mechanical,
photocopying, recording, or otherwise, without prior written permission
of Westgard QC, Inc.

Portions of Chapter 17 were previously posted on Westgard.com.


Portions of Chapter 21 were previously posted on Westgard.com, as well as
the prepared text of the graduation address to the Mayo Clinic CLS program
in 2022.
Preface, James O. Westgard

Twenty years ago, we published a book on “Basic Planning for Quality” [1]
that used Charts of Operating Specifications to select appropriate control
rules and numbers of control measurements based on the quality required
for the test and the performance (precision, bias) observed for the methods
in the laboratory. Since then, a major advance, Dr. Curtis Parvin’s
development of a patient risk model [2], has expanded the ability to design
QC by providing an objective selection of the frequency of SQC, modeled as
the number of patient samples between QC events. This model is especially
important for optimizing SQC strategies for the high-volume, continuous-production
analyzers that are the workhorses in today’s highly automated
medical laboratories.

This book focuses on improving SQC practices by better design and
planning of risk-based SQC strategies. As you should recall, Deming’s PDCA
cycle (Plan, Do, Check, Act) is the fundamental underpinning of today’s
Quality Management Systems (QMS). In the Deming cycle, the Plan step
is perhaps the basic function most often overlooked (or under-developed) in
medical laboratories. Laboratory scientists tend to be “Do” people who want
to get on with doing the work, rather than sitting around thinking about
how to do it. Yet we know it is important to have well defined processes
and practices for doing the work if we are to provide consistent high-quality
testing for our patients.
One area where current processes and practices have questionable
quality is Statistical QC itself [3]. Many laboratories have used the “trial
and error” approach to establish their control rules, numbers of control
measurements, and frequency of QC events. We think laboratories can
and should do better by careful design and planning of SQC procedures.
Guidance is provided by the CLSI C24-Ed4 document [4] and its “road map”
for planning risk-based SQC strategies. The difficulty with this guidance is
the mathematical model and related calculations, hence the need for simple
and practical SQC planning tools to implement the risk-based model.
Improving SQC practices is our objective with this book. We approach
this issue as part of the broader laboratory QMS, recommending adoption
of Six Sigma principles and tools, a focus on a Total QC Plan (rather than
the Individualized QC Plan recommended for CLIA compliance), adoption
of the C24-Ed4 road map, and implementation of the planning process with
Sigma SQC planning tools.
In this context, we begin by describing the basic philosophy of Deming’s
PDCA cycle and then provide a Six Sigma QMS framework for analytical
quality management, followed by a detailed SQC planning process that
makes use of simple graphical tools and internet and spreadsheet calcula-
tors. We discuss how to design/plan SQC for different modes of operation,
such as batch, critical control point, and bracketed operation of continuous
production processes. We describe a variety of applications based on data
in the clinical chemistry literature to demonstrate the planning process and
planning tools, but also to address some current SQC problems such as the
use of a Repeat:2s sampling strategy, recommendations for patient based
real-time QC procedures (PBRTQC), application of clinical control limits,
and the use of moving average statistics with stable control materials.
We conclude with a summary of important conclusions, recommen-
dations on how to implement QC planning in your laboratory, and some
detailed directions and worksheets to guide and support your applications.
James O. Westgard
Madison, Wisconsin

References

1. Westgard JO. Basic Planning for Quality: Training in analytical quality
management for healthcare laboratories. Madison WI: Westgard QC, Inc., 2000.
2. Parvin CA. Assessing the impact of frequency of quality control testing
on the quality of reported patient results. Clin Chem 2008;54:2049-2054.
3. Rosenbaum MW, Flood JG, Melanson SEF, Baumann NA, Marzinke
MA, et al. Quality control practices for chemistry and immunochemistry
in a cohort of 21 large academic medical centers. Am J Clin Pathol 2018;
150:96-104.
4. CLSI C24-Ed4. Statistical Quality Control for Quantitative Measure-
ment Procedures: Principles and Definitions. Clinical and Laboratory
Standards Institute, 950 West Valley Road, Suite 2500, Wayne PA, 2016.
Preface, Hassan Bayat

The ongoing improvements in science and technology provide more
options for addressing healthcare issues, including Quality Management.
This book provides readers with new achievements and approaches to
Statistical Quality Control in laboratory medicine. Adding new techniques
and tools to our toolbox, while honing the old ones, leads to a more
empowered Quality Management.
As a personal note, over the past 15 years I have learned a great deal
about QC/QA from Professor Westgard and Sten Westgard. It is now my great
pleasure and pride to collaborate with the Westgards on this book.

Hassan Bayat
Doctor of Clinical Laboratory Science
Sina Clinical Laboratory, Qaem Shahr, Iran

Preface, Sten Westgard

From the vantage point of mid-2022, it is hard to view progress as
inevitable, to trust that things always get better and that the “arc of the
universe” proceeds toward justice. Indeed, sometimes it feels like there are setbacks.
For laboratories, however, there is objective evidence that things, in
fact, have gotten worse. In global surveys on QC Practices conducted in 2017
and 2021, worrying trends were detected:
• The % of labs using manufacturer ranges increased from 43% to 57%.
• The use of the 1:2s control rule increased from 55% to 59%.
• The use of manufacturer controls increased from 64% to 67%, while the
use of third-party controls has declined.

• Running control once a day increased from 49% to 54% of labs.


• The number of labs that never release patient results when there is a
control failure declined from 54% to 48%.
• 30% of laboratories release results after control flags on a regular (if
rare) basis.
These are not advances; they look like a regression into the past.
It is sad to see a resurgence of backward practices when there are more
tools and opportunities than ever to make advances in QC practices. Indeed,
this book describes in detail a revolutionary new approach, built on the
risk-based MaxE(Nuf) model, empowered by Sigma-metrics, and enabled by
the Westgard Sigma Rules and the Sigma QC Frequency Nomogram, that
offers laboratories a never-before-available chance to design every element
of their QC: the right rules, the right number of controls, and the right
frequency of running QC. It simply has never been possible to answer all of
these questions before now.
Confounding this opportunity to leap forward are a number of digres-
sions and distractions. The momentum behind measurement uncertainty
continues to metastasize – threatening to completely up-end the current
practice of quality control. We will discuss what is being proposed by the
latest calculations and intended control practices for measurement uncer-
tainty and uncertainty controls.
PBRTQC, the latest wave of enthusiasm for moving averages and other
patient-based approaches, has been touted as a replacement for traditional
quality control. While there are new capabilities to implement these tech-
niques, as we will discuss, complexities remain and the best approach is to
implement PBRTQC selectively, almost sparingly.
Navigating this landscape has been our passion for over 50 years;
through this book we hope to provide you with the tools to keep moving
toward a future of better quality, more efficient operation, and reduced risk.
For more than 25 years, Westgard QC has been publishing books on
quality, becoming a fixture on many laboratory shelves. It is the honor of a
lifetime to be a trusted colleague to so many, and we do not take that
responsibility lightly. Here we impart the latest wisdom and hope you are
well equipped for the next part of your quality journey.
About the Authors
James O. Westgard, PhD, FACB is an Emeritus Professor in the Depart-
ment of Pathology and Laboratory Medicine at the University of Wisconsin Medical
School. In addition to pioneering the use of validation protocols, he is best known
for popularizing the multirule QC procedure, often called the "Westgard Rules."

Hassan Bayat, CLS, was born in Tehran, Iran, in 1966. He studied Clinical
Laboratory Science and completed his doctorate in Clinical Laboratory Science in
1994. His professional activity is mainly focused on directing his own private
laboratory from Shahid Beheshti University of Medical Sciences. In his research, he
has pursued the Total Error model, Sigma-metrics, the MaxE(Nuf) QC model, Method
Validation/Verification, and Measurement Uncertainty. From 2014 until 2017 he was
a member of the EFLM Task and Finish Group on Total Error. He has collaborated
with the Westgards on several papers, especially papers devoted to providing tools
for applying the MaxE(Nuf) QC model.

Sten Westgard, MS, is the Director of Client Services and Technology for
Westgard QC, Inc. For more than 25 years, he has managed the Westgard media
and verification operations, from book publishing, to the web, to training portals and
quality assessment programs in Sigma quality. Westgard.com has a membership of
over 72,000 laboratory professionals worldwide. It provides over 800 articles, case
studies, downloads, and online tools for free to any laboratory. The monthly e-news-
letter reaches more than 26,000 laboratory professionals. The Westgard Sigma VP
program works with a network of over 80 laboratories worldwide.
There's more online at Westgard Web
Visit http://www.westgard.com/aqc-extras.html for access to:
• Spreadsheets, worksheets and other downloads
• Frequently-Asked-Questions (FAQs)
• Glossary of terms
• Complete reference list
• Links to QC Frequency calculators, including some exclusively
available to the owners of this book.
Advanced QC Strategies, 1st Edition

Table of Contents
1. Managing Quality............................................................................................................................................ 1
2. Reviewing Current SQC Practice Guidelines................................................................13
3. Developing a Total QC Plan.............................................................................................................27
4. Adopting a Sigma-Based SQC Planning Process....................................................33
5. Planning SQC Strategies for Bracketed Operation.................................................51
6. Optimizing QC Frequency for Patient Risk........................................................................71
7. Preparing Excel QC Frequency Calculators...................................................................91
8. Considering Sigma for Multiple Control Levels........................................................ 101
9. Planning SQC for Multitest Analyzers................................................................................ 109
10. Defining Quality Required for Intended Use............................................................ 123
11. Assessing Potential Usefulness of PBRTQC.................................. 131
12. Upgrading Multirules with Moving Averages........................................................... 143
13. Re-designing QC Wrongly for the Traceability Era.......................................... 151
14. Determining MU from QC Data.................................................................................. 165
15. Evaluating Repeat:2s QC Practices................................................................................. 177
16. Standardizing Means and SDs for Multiple Instruments............................. 189
17. Controlling Differences between Reagent Lots..................................................... 197
18. Summing it Up!....................................................................................................................................... 205
19. Boiling it Down......................................................................................................................................... 219
20. Preparing for Practical Applications................................................................................... 235
21. A Final Word............................................................................................................................................... 255

Index....................................................................................................................................... 259


1: Managing Quality
James O. Westgard, PhD
Quality management is often described as a journey without end. In
less charitable terms, it could be described as a death march. There’s
a little truth in both of those perspectives. Quality is never “done”
because your success today doesn’t guarantee that tomorrow will be
successful. It takes continuous effort, week after week, month after
month, year after year. You have to succeed every day. Ultimately
you will need to train the next generation to continue this pursuit.
I know something about that. I have spent more than 50 years
of my career devoted to Quality. I didn’t “solve” the quality challenges
and walk away to retirement and celebration. Each victory led to
another challenge. For 40 years, I also trained the next generation
of laboratory scientists, so they can master these challenges, too. It
is their journey along the path of Quality that matters next.
In this sense, Quality has a philosophical dimension. But it is
equally important to have practical guidance. We might talk about
this journey in abstract ways, but we still need a road map and an
itinerary to identify the next stop.
Our journey starts with the basic philosophy of Deming: the Plan-
Do-Check-Act cycle, or PDCA. To this, we add an error framework
which can be applied in medical laboratories. We encapsulate that in
a Six Sigma Quality Management System for medical laboratories.

Deming’s Plan-Do-Check-Act Cycle


Fundamental to Deming’s approach to quality management is the
scientific method, which is embodied in the Plan-Do-Check-Act cycle,
commonly referred to as PDCA. As scientists, we learned the process
of planning an experiment, performing the experiment, checking
the experimental data, and acting on that data. In Total Quality
Management, this PDCA cycle is applied to planning, implementing,
monitoring, and improving production processes.
• PLAN refers to the initial phase where management plans
what needs to be done and how to do it.


2. Reviewing Current SQC Practice Guidelines
The state of QC practice in US laboratories is not good! According to
a recent survey of 21 large academic laboratories [1], the predominant
practice is to use 2 SD control limits and analyze 2 controls once
a shift or once a day. That practice represents the minimum
requirement for compliance with the CLIA rule [2]: “at least once
each day patient specimens are assayed or examined, [laboratories
should] for each quantitative procedure include two control materials
of different concentrations…” By comparison, the global standard
for accreditation, ISO 15189 [3], requires laboratories to “design
statistical quality control procedures that verify the attainment of
the intended quality of results.” The ISO requirement focuses on
ensuring quality needed for patient care, whereas CLIA focuses on a
minimum frequency of running controls. For regulatory compliance,
such a minimum often becomes the maximum standard of practice.
The survey revealed that the frequency of QC varied widely
from 1 to 12 QC events a day for chemistry analyzers, with the
most common frequency being 3 times per day. For immunoassay
analyzers, frequency ranged from 1 to 4 events per day, with 2 or
3 being most common. In addition, the survey found that the most
common criterion for judging whether the analytical process is
in-control or out-of-control was the 2SD rule, i.e., Target Value ±
2SD. This control rule (1:2s) was used in 95% of these laboratories
and common practice was to repeat the control if outside of 2 SD,
accept the run if the repeat control was within 2 SD limits, and
reject the run if the repeat control was outside 2 SD control limits.
Everyone knows that 2 SD control limits cause a problem with false
rejections (remember, 1 out of 20 results falls outside the limits with
N=1 and 1 out of 10 when N=2), but US laboratories have apparently
overcome this limitation, possibly by continuously repeating the controls
until they are “in,” or more likely by selecting SDs that are inflated for
multiple instruments, multiple laboratories, or peer groups, or by using
manufacturers’ labeled bottle values and assigned values that are expected
to encompass the results from a large group of laboratories. In addition,
controls are typically analyzed upfront
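
Those false rejection rates follow directly from the normal distribution. The short Python sketch below reproduces them from standard statistical theory; it is an illustration only, not data from the survey.

    # Probability of false rejection for 2 SD (1:2s) limits, assuming stable,
    # normally distributed control results.
    from statistics import NormalDist

    p_outside_2sd = 2 * (1 - NormalDist().cdf(2.0))      # ~0.0455, about 1 in 20

    for n in (1, 2, 3):
        # At least one of N controls falls outside the 2 SD limits by chance.
        p_false_reject = 1 - (1 - p_outside_2sd) ** n
        print(f"N={n}: P(false rejection) = {p_false_reject:.3f}")
    # N=1 -> ~0.046 (about 1 in 20); N=2 -> ~0.089 (about 1 in 10)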


3. Developing a Total QC Plan


Our purpose in this book is to describe a QC planning methodology
that is practical for medical laboratories today. However, we first
focus on a Total QC Plan (TQCP) to provide an alternative to the
Individualized QC Plan (IQCP), the newest option for compliance
CLIA regulations.
We recommend development of a Total QC Plan because it keeps
you in compliance with CLIA’s minimum standards (2 controls per
day for most tests), but at the same time it accommodates additional
control mechanisms for specific failure-modes throughout the Total
Testing Process. This approach does not require a formal Failure
Mode and Effects Analysis (FMEA). Instead, it fulfills the goal of
risk management by developing a risk-based Statistical QC (SQC)
strategy, which is easier to execute than formal FMEA.
The advantages of a risk-based SQC strategy are that (a) it is the
reproducible outcome of a quantitative SQC planning process and (b) it
provides objective specifications for control rules, numbers of control
measurements, and the frequency of QC events. In contrast, an
IQCP is a subjective process that leads to an arbitrary set of control
mechanisms as well as an arbitrary SQC procedure with arbitrary
control rules, numbers of control measurements, and frequency of
QC events.
This chapter will focus even more narrowly on the Total QC
Plan and risk-based SQC Strategy.

Approach for Developing Risk-Based QC Plans


Figure 3-1 outlines the steps for developing QC plans, either a Total QC
Plan that includes a risk-based SQC procedure or an Individualized
QC Plan based on a risk assessment. As mentioned above, we focus
on the Total QC Plan in the methodology presented here.


Figure 3-1. Flowchart showing the steps for developing and implementing a QC Plan.


5. Planning SQC Strategies for Bracketed Operation
Our focus here is on risk-based SQC strategies for the bracketed
operation of continuous production processes, i.e., the high volume
testing processes in use in most medical laboratories. Bracketed
operation involves two QC events that are separated by a group of
patient samples. Patients’ results are not reported unless both the QC
events at the beginning and end of the group of patient samples pass
QC evaluation. The number of patient samples between consecutive
QC events defines the frequency of QC, a critical parameter for
continuous production with periodic release of patient test results.
The cost-effectiveness of bracketed operation of continuous
production processes may be improved by implementation of multi-
stage SQC procedures that involve two or more different designs,
switching from one to another when appropriate. For example, a
multi-stage control procedure could have a Startup design that is
used for initial testing, a Monitor design that is used for routine
operation following startup, and even a Retrospective design that is
used to review control data over a period longer than a single run.
The design of multi-stage Bracket SQC Strategies can be sup-
ported by use of a Sigma SQC Run Size Nomogram (also referred
to as Sigma Run Size Nomogram), coupled with a Power Function
Graph to ensure that the initial QC event provides the high error
detection required for a Critical Control Point Startup design. The
Monitor design may be based on the desired reporting interval and
may consider single rules with only 1 control measurement. Such
candidate SQC procedures have been included in both the Run Size
Nomogram and Power Function Graph in the materials provided
here. A worksheet is also included to guide and document the process.
These graphical tools have been demonstrated earlier in an
article in Clinical Chemistry that focused specifically on “Planning
risk-based SQC schedules for bracketed operation of continuous
production processes” [1]. The discussion in that paper is a valuable
addition to the material presented here.
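
To make the bracketing logic concrete, here is a minimal sketch of how results might be held and released under bracketed operation. It is our illustration, not vendor middleware; the qc_passes check is a placeholder for whatever control rules the SQC design specifies.

    # Patient results collected between two QC events are reported only if
    # BOTH the opening and closing QC events pass the selected control rules.

    def qc_passes(qc_event) -> bool:
        """Placeholder rule evaluation on control z-scores (here, a simple 1:3s check)."""
        return all(abs(z) < 3.0 for z in qc_event)

    def bracketed_release(opening_qc, patient_results, closing_qc):
        """Release the bracketed patient results only if both QC events pass."""
        if qc_passes(opening_qc) and qc_passes(closing_qc):
            return patient_results          # report the whole bracket
        return []                           # hold the bracket for troubleshooting/repeat

    # Example: two controls per QC event, 80 patient samples between events.
    released = bracketed_release([0.4, -1.1],
                                 [f"sample{i}" for i in range(80)],
                                 [0.8, 2.1])
    print(f"{len(released)} results released")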


6. Optimizing QC Frequency for Patient Risk
We focused on graphical tools in the earlier chapters, but now want
to describe some simple calculators available as online tools at the
Westgard Website and implementable with spreadsheets. Although
the graphical tools are simple to use, they are manual and therefore
laborious when considering multiple levels of controls and multi-test
analytical systems. To better support more complicated planning
activities, we have converted the Sigma Run Size Nomogram into a
calculator that also allows the patient risk factor to be a variable for
planning SQC strategies. This is particularly useful for applications
where there are differences in performance at different levels of
controls and differences in performance for individual tests in a
multi-test analyzer, which is the ultimate challenge in designing
risk-based SQC strategies.
At some point, it became apparent that the Sigma Run Size
Nomogram should be converted to a calculator. The relationship
between Sigma and the log base 10 (log10) of run size is essentially
linear in the Sigma range from 3 to 6, which is the relevant range
of Sigma quality where the design of SQC strategies is important.
At 6-Sigma, world class quality is achieved, and QC is easy; below
3.5-Sigma, a laboratory can’t do enough QC to ensure the desired
quality is achieved; below 3-Sigma, industrial guidance says the
process is inadequate for routine production. In between, it is
important to implement appropriate SQC strategies to ensure the
quality needed for intended medical use.
One advantage of these calculators is that patient risk itself can
be a parameter for optimizing process performance [1,2]. There are
situations where performance at one level of control is more critical
than at another; cases where one test is more critical for patient care
than another in a multitest analytical system. Adjusting the patient
risk factor may allow implementation of a simpler SQC strategy.


7. Preparing Simple Excel QC Frequency Calculators
It is important that you have practical tools for your own work.
The Sigma Run Size Nomogram is practical [1], but we know many
laboratory analysts prefer an automated tool to a manual one. In this
case, a simple QC Frequency Calculator can be prepared to calculate
appropriate run sizes for different SQC procedures [2]. Given the
ready availability of Google Sheets and Microsoft Excel, labs can use
the directions here to set up their own run size calculators.
The details are shown in Figure 7-1A and B on the following
pages. This view of the spreadsheet shows the formulas that are
needed in the various cells. Rows 4-12 are for the information that
must be entered by the user. Most critical are the rows for the
quality requirement, method inaccuracy, and method imprecision.
These must all be entered in the same units, either concentration
units or percentage related to the critical decision level in row 9.
We most often work in % units, but concentration units are fine.
What matters is that all three parameters are in the same format.
From this information, Sigma will be calculated as (%TEa
- |%Bias|)/%CV or (TEa-|Bias|)/SD in row 13, which is labeled
“Calculated Sigma-metric” to distinguish it from the “Patient Risk
Sigma” in row 14. If the calculated Sigma is greater than 6, it is
replaced with value of 6 as the maximum Sigma for use in the cal-
culations. That’s the outcome of the equation =+IF(G13>6,6,G13).
If G13 is greater than 6, then a value of 6 will be entered. If not,
the actual calculated value in G13 will be used for the Patient Risk
Sigma. Setting a maximum value of 6 for Sigma and a maximum
value of 1,000 for run size makes the calculator behave the same as
the Sigma Run Size Nomogram, i.e., it limits the calculations to a
useful range and eliminates extrapolations that would go far beyond
the range of the nomogram (and the reality of the lab).
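
The same logic can be scripted outside a spreadsheet. The Python sketch below mirrors the worksheet: it computes the Sigma-metric, caps it at 6, and converts it to a run size capped at 1,000. The intercept and slope shown are placeholders; the actual regression coefficients are those in Figure 7-1A and differ for each candidate SQC procedure.

    def patient_risk_sigma(tea_pct, bias_pct, cv_pct):
        """Calculated Sigma-metric, capped at 6 as in =IF(G13>6,6,G13)."""
        sigma = (tea_pct - abs(bias_pct)) / cv_pct
        return min(sigma, 6.0)

    def run_size(sigma, intercept, slope, max_run=1000):
        """Run size from the (approximately linear) log10(run size) vs Sigma model."""
        size = 10 ** (intercept + slope * sigma)
        return min(round(size), max_run)     # cap at 1,000, matching the nomogram

    # Hypothetical inputs; intercept and slope are placeholders for one candidate rule.
    sigma = patient_risk_sigma(tea_pct=10.0, bias_pct=1.0, cv_pct=2.0)   # 4.5 Sigma
    print(f"Patient Risk Sigma = {sigma:.2f}, run size = {run_size(sigma, -3.0, 1.0)}")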


Figure 7-1A. Left side of the worksheet shows the regression coefficients for
calculating log base 10 (log10) of run size. Middle section shows the entry
parameters at the top, calculated parameters in the middle, and candidate SQC
procedures for which run size will be calculated. Equations for calculating
Sigma (G13) and Patient Risk Sigma (G14) are shown at the top, followed by
the parameters for setting Patient Risk Factor of 1 (G15) and Maximum run
size of 1000 (G16), and finally the equations for calculating run sizes (G19 to
G28).


8. Considering Sigma for Multiple Control Levels
If you haven’t already figured it out, the Sigma quality of a test is
a predictor of risk and the key parameter for planning risk-based
SQC strategies. One issue that must be considered is what is the
best estimate of Sigma when 2 or 3 levels of controls are analyzed.
Many labs run two levels of controls for chemistry tests. For other
tests, e.g., immunoassays, hematology, labs often run three levels.
In an earlier chapter, we illustrated how the online QC Fre-
quency Calculator can accommodate up to 4 tests or up to 4 levels
of controls. That allows data from multiple levels of controls to be
used to calculate Sigma and compare the run sizes appropriate at
different concentrations and different decision levels.
To provide an alternative based on an average Sigma that
represents performance over a wide analytical range, two other QC
calculators are available:
• http://tools.westgard.com/frequency_calculator2.shtml and
• http://tools.westgard.com/frequency_calculator3.shtml.
These are similar in format to the first QC Frequency calculator
but include an additional column for the “average” Patient Risk Sigma.
This should facilitate selection/design of SQC strategies based on the
Sigma performance observed over a concentration range, rather than
the Sigma performance at a single concentration. These calculators
are intended to support the application of the CLSI C24-Ed4 “road
map” [1] for developing risk-based SQC strategies, with calculation
of QC Frequency in terms of run size, in accordance with Parvin’s
patient risk model [2].
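
For illustration, the sketch below calculates Sigma at each control level and an overall average. The simple arithmetic mean used as the “average” Patient Risk Sigma is an assumption of this sketch (the online calculators implement the intended definition), and the inputs are hypothetical.

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma = (%TEa - |%Bias|)/%CV, capped at 6."""
        return min((tea_pct - abs(bias_pct)) / cv_pct, 6.0)

    levels = {                    # level: (%TEa, %Bias, %CV) observed at that level
        "Level 1": (10.0, 0.5, 1.8),
        "Level 2": (10.0, 1.0, 2.2),
        "Level 3": (10.0, 1.5, 2.6),
    }
    sigmas = {name: sigma_metric(*vals) for name, vals in levels.items()}
    average_sigma = sum(sigmas.values()) / len(sigmas)

    for name, s in sigmas.items():
        print(f"{name}: Sigma = {s:.1f}")
    print(f"Average Patient Risk Sigma = {average_sigma:.1f}")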
These calculators can be used to compare the performance for
different levels of controls, compare the SQC strategies appropri-
ate at different levels of controls, and compare the SQC strategies
appropriate over the range of concentrations represented by the
controls. We know that it is likely to observe different Sigmas at
different concentrations. The issue is how to handle those differences


10. Defining Quality Required for Intended Use
Let us admit that what most laboratories actually practice is Arbitrary
Control. It doesn’t sound as nice as Quality Control, but if you run
QC without defining the goal for quality, you have no idea if you
are achieving anything.
Perhaps an analogy will help. You can’t tell if you have made
a basket (in basketball) if there is no rim, no net, and no backboard.
You’re just throwing a ball away. Simply put: without defining a
goal, you can’t tell if you’ve been successful or if you have failed.
When you have defined a goal, you can validate performance in the
laboratory, you can determine if the method will achieve the desired
quality, and later you can establish appropriate SQC procedures for
monitoring test performance.
In the absence of a stated requirement for quality, the manage-
ment of that process can only achieve an arbitrary level of quality
that may or may not meet customer needs. Think of the common
and widespread use of 2 SD control limits with Ns of 2 or 3 for most,
if not all, tests in a laboratory [1]. While many laboratory professionals
agree that one size QC does not fit all tests, in practice many apply
2 SD limits across the board for their tests.
The remedy is to implement an objective process for designing
SQC procedures based on the quality required for intended use, the
imprecision and bias observed in the laboratory, and the rejection
characteristics inherent in the control rules and numbers of control
measurements applied.
Now we return to the issue of what quality is required for the
intended use of a test. We often take up this issue at the beginning
of the story, but in the context of the discussion here it fits nicely
following the planning process and the “options” available if run
size does not satisfy the desired reporting interval, as discussed in
the previous chapter and shown in Figure 10-1.


Figure 10-1. Options 1 to 5 for improving QC when run size initially does NOT satisfy the desired reporting interval.
Of course, the first option is to improve performance, if possible,
by reducing the bias and/or the SD. The second option is to reassess
the quality requirement that was applied. We mentioned earlier
that the EFLM is now advising labs using biologic goals to switch
from the “desirable” goal to the “minimum” goal. Changing the goal
sounds simple, but it assumes considerable knowledge about ana-
lytical performance specifications, so we will undertake a thorough
discussion here to review some of the history and current practices.


11. Assessing Potential Usefulness of PBRTQC
One of the options for improving QC is to implement procedures that
make use of patient data, rather than depending on a few control
measurements using traditional SQC procedures. This approach is
becoming popular due to the recommendations and articles coming
from an IFCC working group on Patient Based Real Time Quality
Control (PBRTQC). Clinical Chemistry highlighted the potential
usefulness [1]. Presented in an informal Question and Answer format,
the IFCC workgroup optimistically promoted PBRTQC applications:
PBRTQC will become the mainstay of QC in laboratories once
the profession sees the advantages of this form of process control,
and manufacturers and middleware vendors provide the onboard
capability.
The power of these techniques is that they offer exquisite customization
to provide very sensitive detection of a change in bias.
Hand in hand with the implementation of PBRTQC is a need to
change the mindset from human decision making to AI approaches
to QC.
There is a need for large analytical systems to not only use the
Hospital Information System to identify patient subgroups, but
also for the Laboratory Information System to identify a significant
drift, interrogate manufacturers databases regarding calibrator
and reagent lot quality, and to initiate recalibration.
PBRTQC is a major step to integrating the laboratory into the
hospital information system, and to a bigger dataset with the
ultimate goal of better patient outcomes.

Dreams of the Future vs Present Reality


While it is exciting to speculate about the future, it’s also important
to assess what is practical in the present.


12. Upgrading Multirules with Moving Averages
The original multirule paper was never intended to be a “one size
fits all” recommendation for IQC. In fact, it recommended different
control rules for different numbers of control measurements [1, Table
4]. Certain rules were recommended to inspect within-run results
and others were recommended to be used across (consecutive) runs.
For example:
• for 2 control measurements, the 1:3s and 2:2s were recommended
for use within-run and the 4:1s and 10:x across-runs;
• for 3 control measurements, 1:3s, 2of3:2s, and 3:1s were
recommended for within-run and 9:x across-runs;
• for 4 control measurements per run, 1:3s, 2:2s, R:4s, and 4:1s
within-run and 8:x across-runs;
• for Ns greater than 4, the recommendation was to use mean
and range rules within-run and “trend rules” across-runs.
The term “trend rules” referenced a paper by Cembrowski et
al [2] that described the use of a Moving Average Algorithm (MAA)
in the form of an exponentially smoothed moving average. Thus,
it was expected that when the number of control measurements
increased above 4 per run, simple traditional control rules would
be replaced by control techniques related to mean and range rules
(and associated moving estimates).
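
As a concrete illustration of the within-run rules recommended for 2 control measurements, the sketch below applies the 1:3s and 2:2s checks to control z-scores; the across-run rules (4:1s, 10:x) need results from previous runs and are omitted here.

    # z-scores are (result - mean)/SD for each of the two controls in the run.

    def within_run_reject(z1: float, z2: float) -> bool:
        rule_1_3s = abs(z1) > 3 or abs(z2) > 3                      # 1:3s
        rule_2_2s = (z1 > 2 and z2 > 2) or (z1 < -2 and z2 < -2)    # 2:2s, same side
        return rule_1_3s or rule_2_2s

    print(within_run_reject(1.2, -0.8))   # False: run accepted
    print(within_run_reject(2.3, 2.6))    # True: 2:2s violation, run rejected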
Power curves for mean and range QC procedures with Ns of 6
and 8 are shown in Figure 12-1, along with the power curve for an
N=6 multirule. The mean and range procedures have been selected
to maintain low false rejections from 0.02 to 0.00, whereas the N=6
multirule procedure has a Pfr of 0.07. You can most easily identify
the multirule procedure by looking at the y-axis and identifying
the curve with the highest intercept. The family of mean/range
rules demonstrate their appropriateness for maintaining low false
rejections and high error detection as Sigma quality approaches
3.0. Thus, the recommendations from the original multirule paper

anticipated the use of mean and range types of procedures for higher
numbers of control measurements due to the higher false rejections
for multirule procedures.

Figure 12-1. Power curves for mean and range rules with Ns of 6 and 8 compared
with a multirule procedure with N of 6.

Performance of Moving Average Algorithms


More recently, a paper by Po et al [3] recommended replacing
Westgard multirules by moving average algorithms (MAA). One
of these authors has been involved with the IFCC group that is
promoting PBRTQC procedures, thus their work with MAAs for
patient-based QC might be expected to carry over to applications
for stable control materials used in IQC. The authors studied the
performance of Westgard multirules with Ns of 2 and 4 and MAA
with block sizes of 5, 10, and 20. The larger block sizes for MAAs
should provide better error detection, however, there is a subtle
issue with the speed of response after a systematic error occurs that


13. Re-designing QC Wrongly for the Traceability Era
According to published recommendations from a 2019 conference
on metrological traceability and IQC [1], the structure of Internal
Quality Control (IQC) should be fundamentally changed. IQC should
be divided into two parts.
• IQC Component I applies to control materials that are used
to monitor analytic performance and make decisions to accept
or reject analytical runs.
• IQC Component II requires a commutable control that is
analyzed once per day over a period of 6 months solely for the
purpose of estimating measurement uncertainty (MU).
While there will be an obvious objection to doubling the amount
of QC being run in laboratories, that’s not what we want to address
in this chapter. Instead, we will focus on Component I’s recommended
decision-making for acceptance or rejection of analytical runs.
The specific recommendation is to calculate the control limits
for a control chart as Target Value ± 2*APSu, which represents a
95% “acceptability range” for the Analytical Performance Specifi-
cation (APS) for standard Measurement Uncertainty (u, expressed
as SD, s, or CV). One of the fundamental principles of SQC is that
each laboratory should characterize its own imprecision and use
that SD in calculation of control limits. Instead, the authors argue:
“What is lacking is the link with the new scientific background [for
metrological traceability] … To obtain this, the acceptability range
for QC component I should correspond to APS for MU derived
according to the appropriate Milan model and it should be set
based on unbiased target value of the material obtained by the
manufacturer as the mean of replicate measurement on the same
measuring system optimally calibrated to the selected reference.”


Fixed control limits still have statistical performance characteristics
The direct use of an “acceptability range” for control limits has the
same problems as earlier practices using “clinical limits” and “fixed
limits”. We discussed the fallacy of using such limits when the CLIA
rules were being finalized in the mid-1990s [2]. The mechanics of
applying today’s “acceptability limits” are the same. The idea is to
just draw the limits that represent the performance specification
directly on the control charts, in this case ± 2*APSu. This advice
does not consider measurement uncertainty in the interpretation
of individual control measurements. If the purpose of MU is to aid
the interpretation of test results, that should apply to control results
as well as patient results.
Regardless of the rationale, those lines for fixed control limits
still have the properties of statistical control limits because of the
measurement uncertainty associated with each individual control
result. The particular statistical control rule can be identified by
dividing the clinical control limit by the SD observed for the partic-
ular laboratory method. Then the power curve for that control rule
can be determined to characterize the probabilities for rejection for
various error conditions. Given that individual laboratory methods
in different laboratories will have different amounts of imprecision,
the performance of such fixed control limits will differ from one lab-
oratory to another. Measurement uncertainty itself is the reason
that fixed clinical control limits won’t provide appropriate QC.
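
A minimal sketch of that identification step follows, using an acceptability range of ±6.0% (2 × APSu of 3.0%, the HbA1c example discussed next) and three illustrative CVs.

    # Divide the fixed "acceptability" limit by each laboratory's own CV to see
    # which statistical control rule the fixed line actually imposes.
    fixed_limit_pct = 6.0                      # 2 * APSu, drawn directly on the chart

    for cv_pct in (1.0, 1.5, 2.0):
        multiplier = fixed_limit_pct / cv_pct  # effective control rule, e.g., 1:6s
        print(f"CV = {cv_pct:.1f}%: the fixed limit acts as a 1:{multiplier:.1f}s rule")
    # The same fixed line is a 6s limit for one laboratory but only a 3s limit for
    # another, so its error detection and false rejection characteristics differ.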
For example, APSu for HbA1c is 3.0%, according to recommen-
dations published by these same authors [3], so the acceptability
range of ± 2*APSu would be TV ± 6.0%. If Method A has stable im-
precision of 1.0% and bias of 0.0%, the method demonstrates 6-Sigma
performance [(6.0%-0.0%)/1.0%] and the MU acceptability range
provides a 6s control range (6.0%/1.0%). If out-of-control is defined
as 1 control result exceeding a control limit, then for Method A the
control rule is 1:6s N=1, where N represents the total number of
control measurements in a QC event. If another method has stable
imprecision of 1.5% and bias of 0.0%, it demonstrates 4-Sigma per-
formance and will require more intensive QC. A method with a CV
of 2.0% and bias of 0.0% would provide 3-Sigma performance, which


14. Determining MU from QC Data


As discussed in the previous chapter, metrologists have proposed
that measurement uncertainty be estimated from QC data. While
they would prefer a commutable control material be used, current
practices for estimating MU do in fact rely on QC data. However,
there are issues about the proper way to estimate MU from that
data. Having already opened the metrology can of worms, it seems
necessary to address the issue of how to calculate MU from QC data.
According to ISO 15189 [1], section 5.5.1.4, “the laboratory shall
determine measurement uncertainty for each measurement procedure
in the examination phases used to report measured quantity values
on patients’ samples.” Although this requirement has been in place
for years, there are continuing arguments about how to calculate
measurement uncertainty. A new ISO document 20914:2019 [2]
specifically addresses the issue, but there still is vigorous debate in
the literature about how to properly calculate measurement uncer-
tainty [3-4], particularly how to incorporate the effects of uncorrected
clinically significant bias.
Originally, the debate was about proper application of the
bottom-up methodology recommended by GUM – Guide to the
expression of uncertainty in measurement [5]. The bottom-up ap-
proach depended on identifying individual components of variation,
estimating their size, then summing the variances and extracting
the overall standard deviation, or standard uncertainty. After many
attempts at implementation, it was concluded that the bottom-up ap-
proach was too complicated for medical laboratories. The alternative
was to employ a top-down methodology that made use of available
data on measurement precision, specifically, internal quality control
data obtained over a period of a few months, commonly referred to
as intermediate precision data. By 2012 when the CLSI published
guidance C51-A on “Expression of Measurement Uncertainty in Lab-
oratory Medicine”, both bottom-up and top-down methodologies were
included [6]. Given the more complicated mathematical calculations
behind the bottom-up model, a large portion of that document is
devoted to explaining that model.
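
As a minimal sketch of the top-down idea, the standard uncertainty can be taken as the intermediate precision SD of IQC data and expanded with a coverage factor of 2 for an approximately 95% interval; how uncorrected bias should be incorporated is exactly the debated point and is deliberately left out here (the data are invented).

    from statistics import mean, stdev

    # Months of IQC results for one control level (invented numbers).
    qc_results = [4.02, 3.97, 4.05, 3.99, 4.01, 3.95, 4.08, 4.00, 3.98, 4.03]

    u = stdev(qc_results)      # standard uncertainty from intermediate precision
    U = 2 * u                  # expanded uncertainty, coverage factor k = 2 (~95%)
    print(f"mean = {mean(qc_results):.3f}, u = {u:.3f}, U (k=2) = {U:.3f}")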


15. Evaluating Repeat:2s QC Practices

If at first you don’t succeed, try, try again.
– Thomas H. Palmer

Those who do not remember the past are condemned to repeat it.
– George Santayana

History repeats itself, first as tragedy, second as farce.
– Karl Marx

There are many aphorisms that can provide us wisdom and guidance
on how to work in the laboratory. But while the proverbs listed above
are catchy, they are not QC rules.
In Chapter 2, we observed that common QC practices don’t
always conform to good laboratory practices. The issue of using
Repeat:2s control rules provides a good example of the problem. As
surveys of QC practices show [1], the most common QC practice is
using 2 SD control limits. Everyone knows about the false rejection
problem with 2SD limits, so how have laboratories rationalized the
use of this practice? The existence of a scientific paper that recom-
mends a repeat:2s sampling strategy is the answer [2]. It may be
questionable whether laboratories actually comply with the protocols
for using Repeat:2s rules, but they still rationalize their applications
based on the theory of repeat QC sampling.
We first became aware of the Repeat:2s sampling strategy
from a poster presentation at the 2011 National AACC Meeting.
In response, we discussed this recommendation on the Westgard
website in October of 2011 [3].
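
To see what the Repeat:2s practice described in Chapter 2 does to false rejections, here is a small Monte Carlo sketch for a stable, in-control process; it is our own illustration, not the analysis from reference [2].

    # Repeat:2s: if a control exceeds 2 SD, repeat it; accept the run if the
    # repeat is within 2 SD, reject only if the repeat is also outside.
    import random

    random.seed(1)
    trials, rejected = 100_000, 0
    for _ in range(trials):
        first = random.gauss(0, 1)
        if abs(first) > 2:                  # initial 1:2s flag
            repeat = random.gauss(0, 1)     # repeat the control
            if abs(repeat) > 2:
                rejected += 1               # final rejection only if the repeat also fails
    print(f"Final false rejection rate ~ {rejected / trials:.4f}")
    # Roughly 0.002 for a stable process, versus ~0.05 if every 1:2s flag were a rejection.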


16. Applying Individual vs Pooled Means and SDs for Multiple Analyzers
One of the biggest challenges of laboratories today is to grapple
with the sheer scale of testing. At the dawn of the laboratory age, a
laboratory had a single instrument for each test, and it operated in
isolation from all other tests. One of the first major breakthroughs
was the multitest instrument, but even then, the laboratory had a
single chemistry instrument that might run a score of tests.
Today’s laboratories can run dozens of instruments – reference
laboratories exist that run hundreds of instruments – and they no
longer operate in a vacuum. Your laboratory is probably part of a
healthcare system, and patients will migrate from outpatient clinics
to smaller clinical centers to large hospitals (and then back). They
will be tested by multiple instruments located across multiple lab-
oratories. And of course there is great pressure to make sure those
results are comparable across all instruments and all laboratories.
There is relatively little discussion in the literature of how to
sustain such an effort. It’s clear there are a wide range of approaches.
The most popular choice seems to be common means and common
SDs. While this may be the easiest and most convenient choice,
there’s no evidence that this is the appropriate solution to a scientific
problem. And while everyone seems to agree that the discussion is
restricted to a set of the same instruments, same lot of reagents,
same lot of control materials, etc., the reality is that this approach
is also being implemented across heterogeneous systems – where
different instruments, different reagent lots, are nevertheless being
assigned the same mean and SD.
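
A small sketch shows why the choice matters: the same control result can look alarming against an instrument's own mean and SD yet unremarkable against a common (pooled) mean and SD. The data are invented for illustration.

    from statistics import mean, stdev

    inst_a = [100.2, 99.8, 100.5, 100.1, 99.9, 100.3]    # instrument A QC history
    inst_b = [102.0, 101.6, 102.3, 101.9, 102.1, 101.8]  # instrument B runs slightly higher

    common = inst_a + inst_b                             # pooled "common" statistics
    result_on_a = 100.9                                  # today's control on instrument A

    z_individual = (result_on_a - mean(inst_a)) / stdev(inst_a)
    z_common = (result_on_a - mean(common)) / stdev(common)

    print(f"z vs A's own limits:  {z_individual:+.2f}")  # nearly a 3 SD shift
    print(f"z vs common limits:   {z_common:+.2f}")      # shifted mean and inflated SD hide it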
Selecting SQC strategies for multiple instruments is a suffi-
ciently difficult problem that the most recent CLSI C24-Ed4 guid-
ance document [1] did not address this issue, stating that “although
significant advances in QC thinking have occurred, there are still
important areas that could benefit from additional developments,
such as QC strategy design and implementation for laboratories
with multiple instruments of the same type performing the same
measurement procedures.” The C24-Ed4 guideline deliberately


17. Controlling Differences between Reagent Lots
Reagent lots have differences. This is widely known and despite all
the advances in engineering and technology, remains distressingly
common. Across decades of encounters with laboratories, we have
seen a wide array of practices for approving / validating / verifying
new reagent lots. The old habits range from a simple check of the QC
(“Controls in? All right then…”), to a flat goal of 10% allowable
difference between lots, to the use of the entire total allowable error
budget as the acceptability criterion.
Let’s be honest: in many cases, these practices are wrong.
The better approach to judging lot-to-lot reagent acceptability
is to use real patient samples and determine an analyte-specific
criterion for allowable difference. We’ll explain in more detail. But
first, let’s explain why the practices above are less than ideal.
1. The problem with just checking some controls is that there is always
the issue of commutability and matrix effects. If the controls aren’t
fully commutable (and most aren’t), the acceptability of controls does
not guarantee that the patients won’t be affected by a difference
in reagent lots.
2. The problem with using a single goal for lot acceptability for all
analytes is that we all know there are individual performance
specifications for individual analytes. Reagent lot criteria also
need to be individualized.
3. Finally, given an allowable error specification that needs to
encompass all sources of random and systematic error, you can’t
use it all up at once. You can’t blow the whole budget just on the
bias between reagent lots.
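
As a rough illustration of that approach, the sketch below compares two reagent lots on the same patient samples and judges the mean difference against an analyte-specific criterion; the 2.0% criterion is a placeholder for illustration, not the book's recommended value.

    from statistics import mean

    old_lot = [5.2, 6.8, 7.4, 9.1, 11.3, 4.9, 8.6, 10.2]   # patient results, old lot
    new_lot = [5.3, 6.9, 7.6, 9.3, 11.6, 5.0, 8.8, 10.4]   # same samples, new lot

    pct_diffs = [100 * (n - o) / o for o, n in zip(old_lot, new_lot)]
    mean_diff = mean(pct_diffs)

    allowable_diff_pct = 2.0                               # analyte-specific; placeholder value
    verdict = "acceptable" if abs(mean_diff) <= allowable_diff_pct else "unacceptable"
    print(f"Mean lot-to-lot difference = {mean_diff:+.2f}% -> {verdict}")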


20. Preparing for Practical Applications


This chapter provides materials that you can copy – or download – for
use in your own applications. You can also use them as a starting
point for developing your own QC design procedures. The materials
include step-by-step directions for use of the various graphical tools,
worksheets to guide the calculation of the Sigma quality of a testing
process, forms for documenting planning applications, and a template
for the Sigma Run Size Nomogram (likely the most useful tool).
D-1. Directions for Calculation of Sigma for a Testing Process
WS-1. Calculation of Sigma from Manufacturer’s Claims
WS-2. Calculation of Sigma from Method Validation Data
WS-3. Calculation of Sigma from SQC and PT(EQA) Data
D-2. Directions for Comparing Current QC Procedures with Westgard
Sigma Rules with Run Size.
WS-4. Initial Assessment of Current QC Procedures
D-3. Directions for Planning Batch and CCP SQC Events using Power
Function Graphs
WS-5. Planning a Batch/CCP SQC Event (2 control levels)
WS-6. Planning a Batch/CCP SQC Event (3 control levels)
D-4. Directions for Assessing Batch and CCP SQC Procedures for a
Group of Tests using Normalized OPSpecs Charts
WS-7. Assessing Batch/CCP SQC using NOPSpecs Chart for 2
levels
WS-8. Assessing Batch/CCP SQC using NOPSpecs Chart for 3
levels
D-5. Directions for Assessing Performance of Bracketed SQC using a
Sigma Run Size Matrix
WS-9. Assessing Bracketed SQC using a Sigma Run Size Matrix


21. A Final Word


We cannot end this book without commenting on the impact of
COVID19 on the laboratory community, and how it reflects a longer-
term struggle for the future.
In the past 2 years, there has been ample dissection of what
went wrong in the US pandemic response, from initial CDC testing
method failures to the vacuum of political leadership. At the same
time, there’s been a similar media narrative praising all the hard-
won accomplishments, the heroic achievements, the sacrifices of
our healthcare heroes.
If these last few years represent a triumph for laboratory
testing, it is a strange victory. It’s not the win we were hoping for,
nor is it the win we needed.
By any measure, the response of the laboratory to COVID19
has been amazing. From PCR to antibody to antigen, the testing
methods available now are ample to the need. But testing lagged so
far behind, it was the vaccinations that truly saved the patients, not
the laboratories. In the public mind, the laboratory was not where
the war was won. In the thick of this crisis, laboratory professionals
worked punishing hours, 6-7 days a week, grappling with constant
supply shortages, allocations, and a roller coaster of regulatory rec-
ommendations. Even as vaccinations have risen, and some COVID19
testing volumes have diminished, for many hospital labs, there is
now a new normal – COVID19 testing on everyone, on top of the
typical routine testing workload. So the “post”-pandemic workload
is greater than the pre-pandemic, which was already crushing.
As the news from medical technology programs comes in, we
have not seen larger incoming classes, expanded programs, or new
programs being established. Apparently, the pandemic has had no
impact at all on the number of people entering the profession.
So, instead of a triumph, we have the same crisis that we were
facing before: too much work, not enough staff, not enough respect or
resources given to us. Except now it is worse than ever. Particularly
in the US, there is a critical shortage of staff at the bench level – we
have gone from Lean to skeletal.


Index
Average of Normals (AoN)   8, 119

B
Batch QC   20, 33, 87,   208,  226
Bias  220
Symbols biologic goals  124
block, block size   132, 134,  145
∆SEcrit  208
Boiling it Down   219–234
ΔPE  184
1. Determine the Sigma-metrics for your
tests  219
A 2. Make an initial assessment of current SQC
“acceptability range” for traceability con- performance using the Westgard Sigma
trols   152, 158, 161, 215 Rules with Run Size diagram and/or the
across-run  143 Sigma SQC Run Size Matrix   221
Adapting Deming PDCA to Laboratory Manage- 3. Consider application requirements to help
ment  3 select the right QC planning tools   225
Adopting a Sigma-Based SQC Planning Pro- 4. For planning batch and critical control point
cess  33–50 (CCP) applications, start with power
AdvaMed  14 function graphs as your QC planning
A Final Word   255–259 tool.  225
application requirements  225 5. For batch and CCP applications with multitest
Applying Individual vs Pooled Means and SDs for systems, consider Normalized OPSpecs
Multiple Analyzers  189–196 charts to display the performance of sever-
Consideration of an individual method or instru- al tests simultaneously   227
ment  190 6. For bracketed operation of continuous pro-
Consideration of multiple instruments   192 duction processes, adopt the Sigma Run
Possible recommendations  190 Size Nomogram to implement Parvin’s
Sustaining a Standard Mean or Standard patient-risk model to estimate frequency
SD  193 of QC, or run size   228
Approach for Developing Risk-Based QC 7. Implement QC Frequency calculators for com-
Plans  27 plicated applications, such as continuous
appropriate control materials   158 production multitest analyzers   230
APSu (analytical performance specifica- 8. Employ a common SQC design for high Sigma
tion for measurement uncertain- methods  230
ty)   151, 152, 161, 215 9. Individualize the SQC designs for low Sigma
AQA (analytical quality assurance)   208,  227 methods, particularly ≤ 4.0 Sigma quality,
Arbitrary Control  123 and pay special attention to the QC Plans
ARLed (Average Run Length to error detec- for these tests   231
tion)    42, 146,   184 bottom-up estimation of measurement uncertain-
Assessing Other Options for QC   118 ty  165
adjust the patient risk factor   118 Bracket SQC strategy   21, 33, 51, 230
patient data QC procedures   119
perform a more in-depth risk assessment   119
reassess TEa  118 C
reduce bias and/or imprecision   118
Calculate Sigma-Metric  221
Assessing Potential Usefulness of PBRTQC   131–
Candidate QC Procedures   73, 87
142
CCP QC  208
Average of Deltas   138

Page 259
Advanced QC Strategies, 1st Edition

CDC/CMS  17, 18
Celebrate your victories  256
Cembrowski and Cervinski  106
Cembrowski et al  143
CHECK  7
CLIA  7, 13, 27, 125, 152, 158, 212, 220
    Final Rule  14–25
CLIA QC Options for Compliance  15–25
    Default QC  15
    IQCP  16
    Right QC  15
CLSI  14, 18, 34
CLSI C24-Ed4  8, 14, 22, 30, 33, 59, 77, 79, 101, 102, 106, 109, 116, 132, 157, 161, 179, 186, 189, 206, 207, 217
CLSI C51-A  165
CLSI EP23-A  8, 14, 158
Considering Sigma for Multiple Control Levels  101–108
Controlling Differences between Reagent Lots  197–204
    A check of critical differences from CLSI  198
    A simplified recommendation for allowable between-lot variation  202
    In the beginning, wilderness  198
    Solutions at higher levels  201
    The Latest Attempt: MUsing on acceptable differences  199
control mechanism  18
corrective action  18
coverage factor  167
COVID19  255
Critical Control Point QC  20, 33, 51–52, 87, 226
Critical Difference  199
Critical Error Graph  23, 205, 208
Critical systematic error  36

D

D-2. Directions for Assessing QC Performance of Tests  241
D-3. Directions for Planning Batch and CCP SQC Events using Power Function Graphs  243
D-4. Directions for Assessing Batch and CCP SQC Procedures for a Group of Tests Using a Normalized OPSpecs Chart  246
D-5. Directions for Assessing Performance of Bracketed SQC using a Sigma Run Size Matrix  249
D-6. Directions for Planning Risk-Based Bracket SQC Strategies using Sigma Run Size Nomograms and Power Function Graphs  251
Decide whether to analyze 2 levels of controls/day  30
Define quality for intended use  5, 29, 123–130
Deming  1
Deming’s Plan-Do-Check-Act Cycle  1
Determine precision  220
Determine Sigma Quality  30
Determine Trueness or Bias  220
Determining MU from QC Data  165–176
Determining Precision, Bias, and Sigma  114
Develop an IQCP  31
Develop a Total Quality Control Plan (TQC Plan)  8, 27–32, 30
Developing a QC strategy for a multi-test analyzer  82
Developing risk-based SQC strategies is the new objective for improving QC  159
Developing SQC strategy for different levels of control  82
Directions for Calculating Sigma  236
Don’t wait for a ref  256
DPM  9, 30, 219

E

EFLM  118, 124, 127, 128, 213
EFLM database  29
Electronic QC  14
E(Nuf)  41, 42
EQA  29, 112, 125, 128, 195, 201, 220
Equivalent QC  16
Erika Cheung  257
Evaluating Repeat:2s QC Practices  177–188
examination procedure  5
expanded uncertainty interval, U  167
expected patient distribution  132
Exponentially Weighted Moving Average  132
Expression of Measurement Uncertainty in Laboratory Medicine  165
External Quality Assessment  9

F

false rejection problems  192
Feldhammer et al  105, 109
Fixed control limits still have statistical performance characteristics  152, 161
FMEA  9, 17, 27, 119, 207
Format of Run Size Calculator  73
FRACAS  9
Frenkel et al  169, 170

G

Goals based on biologic variability  127
Graphic assessment of performance of a fixed control limit  153
GUM  165

H

HbA1c  82, 84, 152, 153, 156, 160, 213
    standardization  86

I

identification of hazards  17
IFCC  86, 131, 144
implement the examination procedure  7
implement the TQC Plan  8
improve the QC Plan  9
Influence of Risk Management  17
intended use  5
intermediate precision conditions  9
IQC Component I  151
IQC Component II  151
IQCP  14, 17, 27, 84, 111, 158, 206
    Developing an IQCP  17
ISO 15189  5, 8, 13, 16, 165, 175, 205
ISO 20914:2019  165, 168
ISO 20914 nomenclature  166

L

Laboratory quality control based on risk management  158
lot-to-lot reagent acceptability  197

M

Make Improvements in the QC Plan & Testing Process  31
Managing Quality  1–12
Maroto et al  170
Martindale, Cembrowski, Journault et al  198
Max(ENuf)  19, 41, 42, 52, 72, 84, 85, 88, 159, 184, 185, 192, 209, 211, 228
mean and range  143
Measurement uncertainty (see MU)  9, 125, 151, 215
    Definition  166
measuring quality and performance  9
Method Decision Chart  8
method validation  7
metrological traceability  151
Monitor  51, 73, 112, 114, 129, 216
Monitor nonconformities  9
Monitor Performance, Quality, and Safety  31
Monitor SQC  128
Moving Average Algorithm (MAA)  143
MU
    Comparison of uncertainty intervals  173
    Comparison of uncertainty results when bias is included  172
    correct the uncertainty interval  175
    Missing Option 4. Within lab imprecision, calibration uncertainty, and uncorrected bias  169
    Option 1. Within-laboratory imprecision or random error  168
    Option 2. Within-lab imprecision and calibration uncertainty  168
    Option 3. Within-lab imprecision, calibration uncertainty, and bias correction  168
    Possible approaches for incorporating uncorrected bias  170
    Process for estimation  168
    SUMU model  172
    Type A evaluation  166
    Type B evaluation  166
multi-stage control procedure  51
multi-stage QC design  61, 73, 109, 112, 128, 216
multi-test analyzer  82
Multitest Chemistry Analyzer  112

N

New CLSI Practice Guideline for Risk-Based SQC  19
NGSP  84
normalized Method Decision Chart  228
normalized OPSpecs Chart  227

O

One size fits all  10
operating point  38, 156
OPSpecs Chart  8, 38–40, 126, 156, 205, 208, 225
Optimizing QC Frequency for Patient Risk  71–90

P

Parvin  19, 23, 33, 41, 72, 77, 84, 85, 101, 120, 158, 159, 184, 192, 209, 228
Parvin’s Patient Risk Model  41
Patient risk  88, 223
Patient risk factor  77, 82, 87, 88, 102, 111, 212
Patient Risk Nomograms  43
Patient Risk, Quantified  72
Patient Risk Sigma  74, 77, 79, 101, 102, 104, 117
Patient-weighted Sigma  84
PBRTQC  112, 131, 135, 144, 148, 213
    A Quick Assessment of Usefulness  134
    Dreams of the Future vs Present Reality  131
    Error detection  145
    Real World Applications  138
    Speed of response  146
PDCA  1, 6
    ACT  2, 5, 9
    CHECK  2, 5
    DO  2, 4
    PLAN  1, 4, 10
    questions  3
Ped (Probability of error detection)  23, 33, 36, 37, 42, 52, 60, 87, 137, 145, 184, 208, 210, 211, 222, 226
Peer Comparison  9
Peng et al  112
performance characteristics  158
permissible MU  127
Pfr (probability of false rejection)  23, 33, 35, 37, 52, 60, 73, 87, 137, 143, 210, 211, 222
Phillips et al  170
Planning SQC for Multitest Analyzers  109–122
Planning SQC Strategies for Bracketed Operation  51–72
Po et al  144
pooled mean  191
pooled SD  191
power function graph  35, 52, 134, 145, 148, 155, 225, 226
Preparing for Practical Applications  235–254
preventive maintenance  116
Process for Planning Batch and Critical Control Point SQC  37
Proficiency Testing  9
PT (proficiency testing)  29, 125, 195

Q

qualitative risk score  158
Quality Control plan
    Definition  20
Quality Control (QC)  7
Quality Control (QC) event
    Definition  20
Quality Control (QC) strategy
    Definition  20
Quality Goals and Requirements for Intended Use  125
Quality indicators  31
Quality is a journey with no end  258
Quality requirement
    Definition  20
quality specifications  158
Quantitative assessment using QC planning tools  155

R

RARTQC (Regression adjusted real time quality control)  138
Reagent lot  197
Recommendation for Combined Multirule SMA Procedure  147
Re-designing QC Wrongly for the Traceability Era  151–164
Regulatory Environment  14–25
Relationship between MaxE(Nuf) vs. Pedc  43
Relationship between Run Size vs. Pedc  44
Relationship between Run Size vs Sigma  45
Relationship of Goals to Operating Specifications  126–127
Repeat:2s Rules  177, 180, 216
    Confirmation by Run Size Calculations  184
    Logistical Considerations  186
    One size doesn’t fit all  186
    Power Function Graphs  181
    repeating endlessly  179
    “repeat, repeat” practice  178
    “repeat, repeat, repeat” practice  178
    “repeat, repeat, repeat, repeat” practice  178
retained patient specimens  193
Reviewing Current SQC Practice Guidelines  13–26
Ricos goals  29, 127, 213
Risk
    Definition  17
Risk analysis
    Definition  18
Risk assessment
    Definition  18
risk-based bracketed SQC strategy  40, 219
Risk-Based SQC Planning Tools  40
risk-based SQC procedure  19
risk-based SQC strategy  27, 51, 101, 106, 112, 119, 120, 161, 228
    overview  55
Risk-Based SQC strategy  206
Risk estimation
    Definition  18
Risk evaluation
    Definition  18
risk management  14
Risk management provides the new model for SQC strategies  158
Road map for Planning SQC Strategies  22
Road maps through the Wilderness  198
root cause analysis  116
Root Sum of Squares U (expanded uncertainty), URSSU  171
Root Sum of Squares u (standard uncertainty), URSSu  170
Rosenbaum et al  109, 179
Run size  72, 185, 205, 210

S

SDmean  170
select an analytic measurement procedure  5
select an optimal Statistical QC (SQC)  8
Selecting “Sets” of Control Rules  117
Setting QC limits is a process, not a fixed specification  157
severity of harm  158
Shewhart-CuSum QC procedure  147
Sigma-based Planning Process  34
Sigma-metric  7, 8, 30, 34, 42, 52, 59, 77, 79, 84, 88, 104, 105, 114, 133, 159, 160, 195, 208, 219, 221, 226
    Equation  74
Sigma Run Size Matrix  87, 193, 210, 221, 222, 224, 230
Sigma Run Size Nomogram  46, 51, 53, 59, 71, 77, 105, 133, 157, 159, 193, 209, 225, 229
Sigma scale  36
Simple Moving Average (SMA)  119, 145, 213
Six Sigma  205
Six Sigma Quality Management System  5–11, 175
    Flowchart  6
SOPs  7
Spop/Smeas  134, 135, 136, 214
standard uncertainty  167
Startup design  51, 73, 112, 114, 128–129, 216
State of the art  2
Stavelin et al  201
Sum Expanded Uncertainty + Absolute Bias, originally called Ubias by Maroto [9] but termed USUMU|bias| here  171
Summing It Up  205–218
    1. Adopt Six Sigma concepts and a Six Sigma Quality Management framework  206
    2. Understand context and performance of current SQC practices  206
    3. Focus on a Total QC Plan with a risk-based SQC strategy  206
    4. Use traditional QC planning tools for selecting Batch and Critical Control Point (CCP) QC  208
    5. Use “risk-based” planning tools based on Parvin’s patient risk model to determine the Frequency of QC  209
    6. Use patient risk to optimize QC Frequency, or run size  210
    7. Setup QC Frequency calculators  211
    8. Consider the Sigma-metrics of multiple levels of controls  212
    9. Plan multitest applications  212
    10. Assess (or re-assess) the quality required for intended use  213
    11. Assess potential usefulness of PBRTQC  214
    12. Consider multirule algorithms that include a moving average  214
    13. Assess performance of “acceptability ranges” (fixed control limits) as statistical rules  215
    14. Assess MU from routine control data  215
    15. Prioritize Immediate Decision Rules over Repeat:2s Rules  216
Sum of Expanded Uncertainty +/- Bias, USUMU  171
SUMU  174

T

TE (total error)  36
TEa (total allowable error)  29, 37, 40, 52, 59, 73, 77, 79, 82, 84, 109, 110, 111, 125, 126, 127, 129, 133, 175, 208, 213, 219, 221, 227
The 2SD dilemma!  178
Theranos  257
Three Modes of QC Operation  33
top-down measurement uncertainty  165
Total QC Plan  27, 33, 206
Total QC Strategy  112
Total Quality Management  205
Total Testing Process  158, 207
TQC strategy  7
Traditional QC Planning  22
troubleshooting  116
trueness  59

U

ubias standard uncertainty of a bias value  168
ubrlot,a  199
ucal  199
ucal standard uncertainty of the value assigned to an end-user calibrator  167
uncertainty interval  174
uncertainty in the estimate of bias (ubias)  170
Upgrading Multirules with Moving Averages  143–150
uref standard uncertainty of the value assigned to a reference material  168
uRw standard uncertainty for long-term imprecision of measured values obtained under defined conditions in same laboratory for a period sufficient to include all routine changes to measuring conditions  167
uwrlot  200

V

Validate safety characteristics  29
van Rossum and van den Broek  139
verify the attainment of the intended quality of results  8

W

Watch “The Drop Out.”  257
Westgard Rules  192, 205
Westgard Sigma Rules  23
Westgard Sigma Rules with Run Size  47, 105, 193, 221
within-run  143
Woodworth et al  84, 85
WS/T 403-2012  112

X

Xc  77

Y

Yago and Alcover  43, 106

Z

Zeng et al  128

Westgard QC Order Form
Westgard QC, 7614 Gray Fox Trail, Madison WI 53717
CALL 1-608-833-4718 if you wish to pay by purchase order or other means.

Item Price (US$) Quantity Subtotal

NEW! Advanced QC Strategies, 1st Edition $80.00

Most Popular! Basic QC Practices manual, 4th Edition $80.00

Basic Method Validation, 4th Edition $80.00

Six Sigma Quality Design & Control, 2nd Edition $90.00

Poor Lab’s Guide to the Regulations, 2021 Edition $80.00

Basic Quality Management Systems $80.00

Westgard Online Courses


NEW! Six Sigma Metrics $195.00

"Westgard Rules" and Levey-Jennings Charts mini-course (3 credits) $75.00

Most Popular! Basic QC Practices – complete online course (14 credits) $135.00

Basic Method Validation – complete online course (15 credits) $175.00

Grand Subtotal

Shipping & Handling: Within US = $9


Canada & Mexico = Add 10%; Europe, Asia, & all other countries = Add 20%

Sales Tax (Add 5.5% in WI, 6.0% in CT)


GRAND TOTAL

Visit http://www.westgard.com/store.htm to place order online


Use coupon code SKIPTHEFORM to save $15 off any purchase

Your Name

Institution & Department

Street Address

City State Zip Code

Country

Business Phone/Business Fax

E-mail Address

Credit Card Type (circle one) VISA Mastercard American Express

Credit Card Number Exp. Date

Signature
