PDA Technical Report No. 60 (TR 60-2013)
Process Validation: A Lifecycle Approach
Paradigm Change in Manufacturing Operations℠
PDA Task Force on Technical Report No. 60: Process Validation: A Lifecycle Approach

Authors
Scott Bozzone, Ph.D., Chair, Pfizer, Inc.
Harold S. Baseman, Co-Chair, Valsource, LLC
Vincent Anicetti, Parenteral Drug Association, Keck Graduate Institute
John A. Bennan, Ph.D., ComplianceNet, Inc.
Michael N. Blackton, Imclone Systems, Inc.
Vijay Chiruvolu, Ph.D., MBA, Amgen, Inc.
Rebecca A. Devine, Ph.D., Consultant to the Biopharmaceutical Industry
Stephen Duffy, Covidien, LLC
Panna L. Dutta, Ph.D., The Medicines Company
Kurtis Epp, BioTechLogic, Inc.
Igor Gorsky, Shire Pharmaceuticals, Inc.
Norbert Hentschel, Boehringer Ingelheim Pharma GmbH & Co. KG
Pedro Hernandez, Ph.D., PHPD, LLC
Irwin Hirsh, Novo Nordisk A/S
Raj Jani, Baxter Healthcare Corporation
Peter F. Levy, PL Consulting, LLC
Michael Long, Ph.D., Concordia Valsource, LLC
John McShane, Roche-Genentech, Inc.
Victor G. Maqueda, Sr., Consultant
José Luis Ortega, Pharma Mar S.A. Sociedad Unipersonal
Elizabeth Plaza, Pharma-Bio Serv, Inc.
Praveen Prasanna, Ph.D., Shire Human Genetic Therapies, Inc.
David Reifsnyder, Roche-Genentech, Inc.
Markus Schneider, Ph.D., Novartis Pharma AG
Iolanda Teodor, Baxter Healthcare Corporation
Mark Varney, Abbott Laboratories
Alpaslan Yaman, Ph.D., Biotech, Pharma and Device Consulting, LLC
Wendy Zwolenski-Lambert, Abbott Laboratories
This technical report was developed as part of PDA’s Paradigm Change in Manufacturing Operations (PCMO)
project. The content and views expressed in this Technical Report are the result of a consensus achieved by the
members of the authorizing Task Force, and are not necessarily the views of the organizations they represent.
Process Validation: A
Lifecycle Approach
Technical Report No. 60
ISBN: 978-0-939459-51-3
© 2013 Parenteral Drug Association, Inc.
All rights reserved.
Bethesda Towers
4350 East West Highway
Suite 200
Bethesda, MD 20814 USA
Tel: 1 (301) 656-5900
Fax: 1 (301) 986-0296
E-mail: [email protected]
Web site: www.pda.org
Paradigm Change in Manufacturing Operations (PCMO℠)
PDA launched the project activities related to the PCMO℠ program in December 2008 to help implement the scientific application of the ICH Q8, Q9 and Q10 series. The PDA Board of Directors approved this program in cooperation with the Regulatory Affairs and Quality Advisory Board, the Biotechnology Advisory Board, and the Science Advisory Board of PDA.
Although there are a number of acceptable pathways to address this concept, the PCMO program fol-
lows and covers the drug product lifecycle, employing the strategic theme of process robustness with-
in the framework of the manufacturing operations. This project focuses on Pharmaceutical Quality
Systems as an enabler of Quality Risk Management and Knowledge Management.
Using the Parenteral Drug Association’s (PDA) membership expertise, the goal of the Paradigm
Change in Manufacturing Operations Project is to drive the establishment of ‘best practice’ docu-
ments and /or training events in order to assist pharmaceutical manufacturers of Investigational
Medicinal Products (IMPs) and commercial products in implementing the ICH guidelines on Phar-
maceutical Development (ICH Q8, Q11), Quality Risk Management (ICH Q9) and Pharmaceutical
Quality Systems (ICH Q10).
The PCMO program facilitates communication among the experts from industry, university and regula-
tors as well as experts from the respective ICH Expert Working Groups and Implementation Working
Group. PCMO task force members also contribute to PDA conferences and workshops on the subject.
PCMO follows the product lifecycle concept and has the following strategic intent:
• Enable an innovative environment for continual improvement of products and systems
• Integrate science and technology into manufacturing practice
• Enhance manufacturing process robustness, risk based decision making and knowledge management
• Foster communication among industry and regulatory authorities
For more information, including the PCMO℠ Dossier, and to get involved, go to
www.pda.org/pcmo
Table of Contents

Figure 1.2-1 Applicability of ICH Q8 (R2) through Q11 Relative to the FDA Stage Approach to Process Validation
Figure 1.2-2 Common Timing of Process Validation Enablers and Deliverables to Validation Stage Activities
Figure 3.0-1 Overall Sequence of Process Validation Activities
Figure 3.4-1 Example Process Diagram for a Tangential Flow Filtration Step
Table 3.4-1 Example Process Parameter Table for a Tangential Flow Filtration Step
Figure 3.6-1 Decision Tree for Designating Parameter Criticality
Figure 4.1-1 Typical System Qualification Sequence
Table 4.1.3-1 Qualification Information
Figure 4.3.1-1 Relationship of Prior Knowledge to the Amount of PPQ Data Required
Table 4.3.2.6-1 Illustration of a Matrix Approach for Filling Process PPQ
Table 4.4-1 Example of PPQ Acceptance Criteria Table
Figure 5.1.2-1 Development of a Continued Process Verification Plan
Figure 5.1.2-2 CPV Plan within Validation Documentation System
Figure 5.1.3-1 CPV Plan Determination for Legacy Products
Figure 5.2.1-1 Body of Knowledge and Maintenance of Process Control
Figure 6.1-1 Quality Risk Management: A Lifecycle Tool for Process Development and Validation
Figure 6.1-2 Product Attribute Criticality Risk Assessment Example
Table 6.1-1 Risk-Based Qualification Planning
Table 6.1-2 Severity Rating and Sampling Requirements
Table 6.2-1 Statistical Methods and the Typical Stages at Which They Are Used
Figure 6.2.2-1 Process in Classical Statistical Control; Common Cause Variation Only
Figure 6.2.2-2 Process Not in Statistical Control - Special Cause Variation
Figure 6.2.2-3 A Process with Both Within-lot and Between-lot Variation
Figure 6.2.2.1-1 Xbar/S Control Chart for Fill Weight, n=5 per Group
Figure 6.2.2.1.3-1 Process Capability Statistics Cp and Cpk
Figure 6.2.2.1.3-2 Examples of Process Capability Statistics Cp and Cpk
Table 6.2.2.1.3-2 Relationship Between Capability and % or Per Million Nonconforming
Figure 6.2.3-1 Example of an Operating Characteristic Curve
Table 6.3.3-1 Examples of PAT Tools and Their Application
Table 6.4-1 Technology Transfer Activities Throughout Product Lifecycle
Figure 6.4-1 Distribution of Technology Transfer Activities Throughout the Product Lifecycle
Table 7.1-1 Stage 1: Process Design
Table 7.1-2 Stage 2: Process Qualification
Table 7.1-3 Stage 3: Continued Process Verification
Table 7.2-1 Stage 1: Process Design
Table 7.2-2 Stage 2: Process Qualification
Table 7.2-3 Stage 3: Continued Process Verification
Table 8.1.2-1 Expected Between-Lot Variation Coverage in nL Lots
Table 8.1.6-1 Number of Lots to Demonstrate Confidence for Lot Conformance Rate
Figure 8.1.7-1 Wald's Sequential Probability Ratio Example
Table 8.1.9-1 Effect of Between-Lot Variation on the Total Process Variance
Table 8.1.10-1 Sample Size to Estimate a Standard Deviation to Within ±X% of True Value
1.0 Introduction
The PV lifecycle concept links product and process development, the qualification of the commercial
manufacturing processes, and maintenance of the commercial production process in a coordinated
effort (3). When based on sound process understanding and used with quality risk management prin-
ciples, the lifecycle approach allows manufacturers to use continuous process verification (enhanced
approach) in addition to, or instead of, traditional PV (1,2,6).
The information in this TR applies to the manufacturing processes for drug substances and drug
products, including:
• Pharmaceuticals, sterile and non-sterile
• Biotechnological/biological products, including vaccines
• Active Pharmaceutical Ingredients (APIs)
• Radiopharmaceuticals
• Veterinary drugs
• Drug constituents of combination products (e.g., a combination drug and medical device)
This report is prepared for global use and applies to new and existing (i.e., legacy) commercial manu-
facturing processes. Its scope does not include manufacturing processes for:
• Medical devices
• Dietary supplements
• Medicated feed
• Human tissues
Although these product categories are outside the scope of this TR, its recommendations are based
on modern quality concepts, ICH Quality Guidelines, and recent regulatory authority guidance docu-
ments. As such, it may be a useful reference in the development of PV lifecycle approaches for other
product categories. The validation of ancillary supporting operations used in pharmaceutical manu-
facturing processes is not discussed in the report. Many PDA TRs already provide specific guidance for
such procedures; for example, cleaning, aseptic process simulation, moist heat sterilization and dry
heat sterilization (7-10).
1.2 Background
The lifecycle concept includes all phases in the life of a product from initial development through commercial production and product discontinuation (4,11). The use of a lifecycle approach to pharmaceutical product quality is widely thought to facilitate innovation and continual improvement as well as strengthen the link between pharmaceutical development and manufacturing (ICH Q10). The lifecycle philosophy is fundamental in the ICH guidance documents for Pharmaceutical Development (ICH Q8 (R2)), Quality Risk Management (ICH Q9), and Pharmaceutical Quality Systems (ICH Q10).
The ICH Q8 (R2) guidance document for pharmaceutical development defines procedures for link-
ing product and process development planning to the final commercial process control strategy and
quality system. It describes an enhanced scientific and risk-based approach to product and process
development that emphasizes statistical analysis, formal experimental design, and the incorporation
of knowledge gained from similar products and processes. Manufacturing capabilities and the quality
system must be integrated into the process development plan to ensure effective and compliant com-
mercial operations. The functionality and limitations of commercial manufacturing equipment are a
primary consideration in the process design.
The ICH Quality Risk Management guidance document (ICH Q9) describes the use of a risk-based
approach to pharmaceutical development and manufacturing quality. These approaches identify and
prioritize those process parameters and product quality attributes with the greatest potential to af-
fect product quality. Specific guidance on the application of the ICH Q9 concepts can be found in
PDA Technical Report 54: Implementation of Quality Risk Management for Pharmaceutical and Biotechnology
Manufacturing Operations and PDA Technical Report 59: Utilization of Statistical Methods for Production
and Business Processes (13,14). The FDA process validation guidance document stresses a risk-based
approach to develop criteria and process performance indicators, and improve the design and execu-
tion of other validation-related activities, such as developing confidence levels and sampling plans (3).
Both the FDA and EMA process validation guidance documents aim to integrate PV activities into the
pharmaceutical quality system. To achieve the goals outlined in ICH Q10, it is essential to integrate
the process design stage into the quality system. Throughout the development effort, product and
process development input and alignment from the Quality Unit are required to ensure compatibility
with the quality system. Key considerations in product and process design include the commercial
control strategy and use of modern quality risk management procedures. Quality and Regulatory
organizational components should be part of the cross-functional product team from the beginning
of the process validation study design. Their participation is essential to ensure that the study design
is compatible with the firm’s quality system, and that submissions will meet regulatory agency ex-
pectations.
The Quality Unit should provide appropriate oversight and approval of process validation studies re-
quired under GMPs. Although not all process validation activities are performed under GMPs (for
example, some Stage 1 – Process Design studies) (4), it is wise to include the Quality and Regulatory
representatives on the cross-functional team. The degree and type of documentation required varies
during the validation lifecycle, but documentation is an important element of all stages of process
validation. Documentation requirements are greatest during the process qualification and verification
stages. Studies during these stages should conform to GMPs and be approved by the Quality Unit.
The Process Validation Master Plan (PVMP) should describe the rationale, overall validation strategy,
and list of specific studies. It should reside within the firm’s quality documentation system (15). A suc-
cessful validation program is one that is initiated early in the product lifecycle and is not completed
until the process or product reaches the end of that lifecycle. A comprehensive corporate policy that
This TR follows the principles and general recommendations presented in current regulatory process validation guidance documents. Of particular note is that the TR uses the traditional/nontraditional (enhanced) process validation terminology employed by EMA (1). In this context, nontraditional or
enhanced process validation may use Continuous Process Verification as an alternative approach to
traditional PV. In the enhanced approach, manufacturing process performance is continuously moni-
tored and evaluated. It is a science and risk-based real-time approach to verify and demonstrate that
a process operates within specified parameters and consistently produces material that meets quality
and process performance requirements.
The FDA three-stage process validation lifecycle nomenclature (Stage 1 - Process Design, Stage 2 - Process Qualification, and Stage 3 - Continued Process Verification) is used in this TR. Implementation of these stages is discussed in detail in Sections 3-5. It should be noted that Continued Process Verification and Continuous Process Verification are distinct terms with different meanings. Continuous Process Verification refers to validating manufacturing processes that utilize advanced manufacturing and analytical technologies (e.g., PAT systems). FDA uses the term Continued Process Verification more generally to mean those activities that maintain the process in a state of control; it encompasses all manufacturing scenarios, i.e., traditional manufacturing, manufacturing employing advanced technologies of any kind, or any combination thereof.
These are defined in Section 2.0 and are also discussed later in this TR. Figure 1.2-1 shows the re-
lationship between the relevant ICH guidance documents and the FDA stage approach to process
validation across the product lifecycle.
Figure 1.2-1 Applicability of ICH Q8 (R2) through Q11 Relative to the FDA Stage Approach to Process Validation
[Figure: product lifecycle timeline annotated with Development - ICH Q8 (R2); Quality Systems / Knowledge Management - ICH Q10; Development and Manufacture of Drug Substances - ICH Q11]
The intent of this TR is not to establish mandatory standards, but rather to be a single-source overview that complements existing regulatory authority guidance documents. References throughout the document provide greater detail on various topics. It is always advisable to consult with the appropriate regulatory authorities for agreement on the strategies employed for product development and lifecycle management.
Figure 1.2-2 illustrates the progression of typical process validation enablers or deliverables relative
to validation activities that are conducted throughout the product lifecycle. The figure represents
stages and validation studies as single “point in time” events. However, in practice, the exact timing
of product development activities or validation studies may vary with the specific product develop-
ment strategy. For example, the enablers for Stage 1 process validation activities will be much less
extensive for a production formulation change than for development of a new molecular entity. Thus,
the figure presents an overall sequence of activities and their approximate correlation to the stages of
process validation.
[Figure 1.2-2 (excerpt), Stage 1 enablers: Quality Target Product Profile (updated); Quality Attributes Evaluation (updated); Initiate Formal Stability Studies; Clinical Manufacturing; Process Validation Master Plan; Process Parameter Categorization and Acceptable Ranges; Qualify Manufacturing Equipment & Facility; Continued Assay and Process Development]
Tools used throughout the lifecycle (e.g., risk management, statistical analysis, Process Analytical Tech-
nology [PAT], technology transfer, documentation, and knowledge management) are described in Sec-
tion 6.0. Examples of the lifecycle approach for a large and small molecule are described in Section 7.0.
Terminology usage may differ among companies, and some terms may change over time. The terms used in a validation program should be clearly defined, documented, and well understood. Terminology definitions that are widely recognized by the industry
should be considered when establishing internal definitions. These can be found in regulatory guid-
ance documents. Definitions of company-specific terminology should also be included in the valida-
tion documents to provide clarity and context. This Technical Report uses the terms below, which are
accompanied by their definitions, synonyms, and references where applicable:
2.1 Acronyms
API — Active Pharmaceutical Ingredient
CMA — Critical Material Attribute
CPP — Critical Process Parameter
CPV — Continued Process Verification
CQA — Critical Quality Attribute
DoE — Design of Experiments
DP — Drug Product
DS — Drug Substance
FMEA — Failure Mode Effects Analysis
HCP — Host Cell Protein
ICH — International Conference on Harmonisation
KPP — Key Process Parameter
NOR — Normal Operating Range
PAR — Proven Acceptable Range
PAT — Process Analytical Technology
PPQ — Process Performance Qualification
PTT — Product Technical Team
PVMP — Process Validation Master Plan
QbD — Quality by Design
QTPP — Quality Target Product Profile
TPP — Target Product Profile
TT — Technology Transfer
Sources of knowledge available prior to (and that may be used during) Stage 1 of the Process Validation lifecycle include:
• Previous experience with similar processes (e.g., platform processes)
• Product and process understanding (from clinical and pre-clinical activities)
• Analytical characterization
• Published literature
• Engineering studies/batches
• Clinical manufacturing
• Process development and characterization studies
The following sections outline the Stage 1 outputs from a general lifecycle approach to Process Valida-
tion, as depicted in Figure 3.0-1.
[Figure 3.0-1 (excerpt): Stage 1 (Process Design) outputs include Process Characterization, Process Characterization Plan and Protocols, and Study Data Reports; Stage 2 (Process Qualification) covers Facilities, Utilities, and Equipment Qualification for the commercial process]
The QTPP captures all relevant quality requirements for the drug product. Consequently, it is peri-
odically updated to incorporate any new data that may be generated during pharmaceutical develop-
ment. However, the QTPP should not depart from the core targets established in the drug product
Target Product Profile (TPP).
Note: TPP is used as a tool that facilitates sponsor-regulator interactions and communication. Con-
sequently, the TPP contains such information as Drug Indications and Use; Dosage and Administra-
tion; Dosage Forms and Strengths; Contraindications; Warnings and Precautions; Adverse Reac-
tions; Drug Interactions; Abuse and Dependence; and Overdose that are not covered under the
scope of this document (25).
The QTPP summarizes the quality attributes of the product that ensure safety and efficacy. It pro-
vides a starting point for assessing the criticality of product quality attributes.
CQAs are not synonymous with specifications. In addition, there is not necessarily a one-to-one rela-
tionship between CQAs and specifications. Specifications are a list of tests, references to analytical pro-
cedures, and appropriate acceptance criteria that are numerical limits, ranges, or other criteria for the
tests described. Several product attributes identified as CQAs may be detected by a single test method,
and therefore, built into a single test specification (e.g., API solubility, hardness, porosity are CQAs
evaluated using a single test: dissolution). Some CQAs may not be included in the specifications if they
The identification of potential CQAs is an ongoing activity initiated early in product development. It
makes use of general knowledge about the product and its application, as well as available clinical and
non-clinical data. CQAs are subject to change in the early stages of product development, and thus re-
quire a quality risk management approach that evolves as knowledge about the product and process is
generated (for discussion, see Section 6.1 “Application of Risk Management”). CQAs for commercial
products should be defined prior to initiation of Stage 2 activities.
A process diagram for a single unit operation is presented as an example in Figure 3.4-1 and a sample
description table is provided in Table 3.4-1. The evolution of process knowledge and understanding
is reflected in clinical batch records; these are an important source of information for defining the
manufacturing process in the process description. Data collected from clinical trial material manu-
facture may be useful to determine process capabilities, set specifications, design PPQ protocols and
acceptance criteria, evaluate laboratory models, and transfer processes. Strategies and fundamentals
of knowledge management are discussed further in Section 6.5, Knowledge Management.
Process descriptions are documented in reports and may be incorporated into the Technology Trans-
fer (TT) Package for the product. The process may change during Stage 1 due to increases in material
demand (i.e., process and analytical development, clinical needs), improved product understanding
that leads to changes to CQAs, or improved process understanding that results in addition, elimina-
tion or adjustments of unit operations. Documentation should capture these changes and the sup-
porting justifications. This information should be archived in the Knowledge Management System.
[Figure 3.4-1 Example Process Diagram for a Tangential Flow Filtration (UF/DF) Step, showing feed tank (agitation, rpm; jacket temperature, °C; pH), feed pump, pressure regulator, CIP inlet, UF/DF membrane (area, m2; pore size, kD), and feed/permeate streams]
As shown in Figure 3.0-1, the initial identification of critical quality attributes is followed by a quality risk assessment in Stage 1. The initial quality risk assessment is a cause-and-effect type of analysis used to identify process input parameters where variability is likely to have the greatest impact on product quality or process performance. This assessment is based primarily on prior knowledge or early development work, and its outcome provides the foundation for the process characterization studies that follow.

Understanding the impact of process parameter variability and applying the appropriate controls is a fundamental element in development of the commercial control strategy. ICH Q8 (R2) defines a Critical Process Parameter (CPP) as "one with variability that has an impact on a CQA, and therefore, should be monitored or controlled to ensure that the process produces the desired quality" (3).
Beyond the generally recognized definition of a critical process parameter from ICH Q8 (R2), however, process parameter designations are not standardized and approaches may vary. For this reason, definitions for parameter designations must be clearly documented and understood within the organization. Definitions should remain consistent throughout the process validation lifecycle.
Figure 3.6-1 provides an example of a decision tree developed to guide the assignment of parameter designations in conjunction with the quality risk assessments. The decision tree facilitates categorization of process parameters as critical, key, or non-key (see definitions). Decision-making tools can facilitate common understanding among participants and have the advantage of increasing consistency in the decision-making process, as well as consistent documentation of rationales as part of the risk assessment process.

The decision tree can be used for risk assessments both before and after the supporting data from process characterization studies are available.
• Parameter or Attribute: Process variables can be outputs from one unit operation and inputs to another. For
a given unit operation, each variable is initially established as a parameter or an attribute on the basis of direct
controllability
Yes — Directly controllable process input parameters can theoretically contribute to process variability.
No — Process outputs that are not directly controllable are attributes that are monitored and may be indicative of
process performance or product quality.
• Non-CPP: Does the parameter have the potential to impact process performance or consistency if run outside of the defined range?
Yes — Parameter is designated a KPP.
No — Parameter has little impact on the process over a wide range. Parameter is designated a non-KPP.
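The designation logic above can be sketched as a small function. This is an illustrative sketch only: the function name, inputs, and labels are hypothetical and not part of the TR, and in practice the designation would be made and documented within a formal risk assessment.

```python
def designate(directly_controllable: bool,
              impacts_cqa: bool,
              impacts_performance: bool) -> str:
    """Illustrative parameter-designation logic modeled on the
    Figure 3.6-1 decision tree (all names are hypothetical)."""
    if not directly_controllable:
        # Outputs that cannot be directly controlled are monitored
        # attributes, not process parameters.
        return "attribute (monitor)"
    if impacts_cqa:
        # Variability can impact a Critical Quality Attribute.
        return "CPP"
    if impacts_performance:
        # Impacts process performance/consistency, but not a CQA.
        return "KPP"
    # Little impact on the process over a wide range.
    return "non-KPP"

print(designate(True, True, False))    # CPP
print(designate(True, False, True))    # KPP
print(designate(True, False, False))   # non-KPP
print(designate(False, False, False))  # attribute (monitor)
```

Encoding the tree this way mirrors the advantage noted in the text: every designation follows the same documented rule, so outcomes are consistent across assessors.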
Studies designed to characterize the process and set acceptable ranges for process parameters are usually performed at laboratory scale. The ability of laboratory-scale studies to predict process performance is desirable. When a laboratory-scale model is used in development, the adequacy of the model should be verified and justified. When there are differences between actual and expected performance, laboratory models and model predictions should be appropriately modified. Because the conclusions drawn from these studies are applied directly to the commercial-scale process, qualification of laboratory-scale models is essential. Qualification of the scaled-down models should confirm that they perform in a manner that is representative of the full-scale process. This is shown by comparing operational parameters and inputs and outputs, including product quality attributes.
Scaled-down models for chromatography steps for protein products can be qualified by performing multiple runs with input parameters at set points and comparing the results to the full-scale unit operation.
Parameters evaluated should include those that affect process consistency, such as step yields, elution
profile, elution volume, and/or retention time. These should then be combined with those that represent
product quality, such as pool purity and levels of process-related and host cell-related impurities.
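One way such a comparison might be organized is to derive a comparability range for each output from historical full-scale results and check whether scale-down runs fall within it. The sketch below is illustrative only: the mean ± 3·SD range, the function names, and the step-yield figures are assumptions for demonstration, not a method prescribed by this TR, and real comparability criteria would be justified statistically for each output.

```python
import statistics

def comparability_range(full_scale_values, k=3.0):
    """Illustrative comparability range (mean ± k·SD) derived from
    historical full-scale results for one output, e.g. step yield (%)."""
    mean = statistics.mean(full_scale_values)
    sd = statistics.stdev(full_scale_values)
    return mean - k * sd, mean + k * sd

def model_run_comparable(run_value, full_scale_values, k=3.0):
    """True if a scale-down run result falls within the range."""
    lo, hi = comparability_range(full_scale_values, k)
    return lo <= run_value <= hi

# Hypothetical step yields (%) from full-scale lots and scale-down runs
full_scale = [92.1, 93.4, 91.8, 92.9, 93.0, 92.5]
scale_down_runs = [92.4, 91.5, 93.2]

print([model_run_comparable(v, full_scale) for v in scale_down_runs])
```

The same comparison would be repeated for each output named in the text (elution profile, elution volume, retention time, pool purity, impurity levels) before concluding that the model is representative.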
Pilot-scale models that are representative of the commercial manufacturing process for small molecules may be used for supportive PPQ data. For solid and liquid oral dosage forms, 10% of the commercial batch size and/or 100,000 units has been considered a representative scale (1). Scale-up effects for certain processes, such as mixing freely soluble substances, tablet compression, or liquid filling, may be well known. Batch sizes at 10% of bulk size or run times of 100,000 dosage units provide sufficient duration to determine a degree of control and process characterization, while uncovering any major problems early. Full-scale confirmation/evaluation may be carried out when small-scale studies are used to support PPQ.
For scale-down studies, the raw materials, component attributes, equipment, and process parameters
should be comparable and indicative of the process intended for the commercial product.
In-Process Controls
In-Process Controls (IPCs) are checks performed during production to monitor and, if appropriate, adjust the process, and/or to ensure that the intermediates or product conform to specifications or other defined quality criteria.
Performance Parameters
Performance parameters (e.g., tablet/capsule disintegration; harvest or peak growth cell densities/
viability) are process outputs that cannot be directly controlled but are indicators that the process has
performed as expected.
PAT uses product and process knowledge as well as equipment automation and analytical instrumen-
tation technologies. Successful application of PAT requires a thoroughly characterized process (Sec-
tion 3.7) in which the relationship between CPPs and CQAs is explored using mathematical models,
such as multivariate analysis. Application of this understanding to the Control Strategy (Section 3.9)
also affects the design and qualification of the instrumentation and control systems in the manufac-
turing process.
To support implementation of PAT, Stage 1 deliverables must describe the CQA monitoring scheme
and the algorithm for adjusting CPPs based on the process response. Qualification of the equipment,
measurement system, and process (Stage 2) must demonstrate the capability to adjust CPPs accord-
ing to the established algorithm and confirm that these adjustments result in acceptable and predict-
able outputs. Therefore, PAT-based control methods need to be qualified (29).
Compatibility of the process streams with the equipment and materials that they contact (e.g., polymeric membranes, elastomers, disposable bags, and other plastic parts) is necessary to ensure product safety and efficacy. Product contact materials as well as extractables and leachables need to be evaluated for compatibility. This work should begin in Stage 1, may include studies that require long lead times, and should be completed in conjunction with Stage 2.
Compatibility of the process streams with equipment surfaces is a measure of their reactivity, absorp-
tion, and stability when in contact during manufacturing. Compatibility tests should demonstrate
that the material properties of the equipment surfaces are not altered by contact with the solutions
or other product-related materials. In addition, the contact materials should not alter the process
solutions or materials (either by adsorption of product components or excessive leaching that could
adulterate the product).
Extractables are components of a material (e.g., a product contact surface that is used in drug manu-
facture or storage) that are recovered by use of an exaggerated force (solvent, time, temperature).
Leachables are contact material components from process equipment or storage containers that mi-
grate into the product under normal conditions of use.
The identity and quantity of leachables from polymeric wetted components (plastic storage contain-
ers, filters, primary packaging materials, gaskets and O-rings) used in drug manufacture, storage,
and packaging must be documented to assure that the product is not adulterated. A combination of
literature reviews, risk assessments, and laboratory studies can be used to address leachables. Various
approaches to determine the extent of testing and identification of leachable species, and the setting
of acceptable levels, have been published.
• “Safety Thresholds and Best Practices for Extractables and Leachables in Orally Inhaled and Nasal Drug Products” (30)
• “Evaluation of Extractables from Product-Contact Surfaces” (31)
• “Application of Quality by Design (QbD) Principles to Extractables/Leachables Assessment: Establishing a Design Space for
Terminally Sterilized Aqueous Drug Products Stored in a Plastic Packaging System” (32)
• “Leachables Evaluation for Bulk Drug Substances” (33)
Process Qualification (PQ) during Stage 2 demonstrates that the process works as intended and yields
reproducible commercial product. It should be completed before release of commercial product lots,
and covers the following elements:
1. Design and qualification of the facility, equipment, and utilities (this should be completed prior to
qualification of the process)
2. Process Performance Qualification (PPQ), which demonstrates control of variability and the ability to
produce product that meets predetermined quality attributes
The following section provides considerations for preparation and performance of system qualifica-
tion. More information on approaches to planning and performing system qualification may be found
in several sources (26,34,35). Figure 4.1-1 presents a typical sequence of activities that support the
system qualification effort.
Figure 4.1-1 Typical System Qualification Sequence
System design should be based on process parameters, control strategies, and performance require-
ments developed or identified during Stage 1 Process Design. This information is transferred to those
designing engineering requirements for facility and manufacturing systems. Design qualification in-
volves a review of the system design to assure that it is aligned with process control strategy and
performance requirements.
In situations where the process is being transferred to an established facility with qualified equipment,
a risk assessment should be performed to identify any equipment control gaps. These can be ad-
dressed through equipment modifications (which may require requalification) or through operational
controls.
4.1.2 Installation
Upon completion, system testing and inspection should be used to verify that the systems have been fabricated, constructed, and installed to engineering and process specifications. The information
from this verification should be accurate, reliable, and useful. If so, then information from these ac-
tivities may be leveraged or used to support qualification testing.
The start-up and commissioning of these systems should confirm that they are in good working order
and operate as designed. Engineering studies can provide confidence that the systems will perform
under process conditions. Adjustments to the systems to achieve the specified level of performance
and operation may be needed. Information on modifications or adjustments should be documented
and transferred to the team preparing the qualification plans and protocols.
If sufficient process understanding is not available, or the scale-up effect is unknown, existing knowl-
edge may be used during design and commissioning to define user requirements.
Formal system operating and maintenance procedures or instructions should be in place prior to the
execution of test functions. Operators and those conducting studies should be trained in the opera-
tion of the systems and conduct of the tests. These should be conducted under GMP conditions and
documented according to GDPs. All measuring and test instruments should be calibrated and trace-
able to appropriate standards.
Deviations in the execution of qualification testing should be documented, investigated, and ad-
dressed. Conclusions should be based on the suitability and capability of the system to meet the pro-
cess requirements. When necessary, systems may be modified and studies repeated.
Periodic assessment of systems may lead to additional qualification-related activities or testing. In addition to
periodic assessment, event-driven assessments and re-qualifications may arise from process-related changes,
out-of-specification results and trends, and investigations. System assessments and the events that trigger event-driven assessments should be recorded in a formal procedure that also addresses the mechanism for deciding when re-qualification is warranted, the criteria for doing so, and those responsible. Subject matter experts and the quality unit should be involved in these decisions.
The number of “successful” batches executed during the PPQ study should not be viewed as the
primary objective of a PPQ campaign. While successful runs of commercial-scale batches can indi-
cate overall operational proficiency and sound process design, these batches should also be viewed
as a means to obtain information and data needed to demonstrate that the process control strategy
is effective. The type and amount of information should be based on understanding of the process,
the impact of process variables on product quality, and the process control strategy developed during
Stage 1 Process Design. As appropriate, other prior knowledge should be used as well. The number
of batches needed to acquire this information and data may be based in part on a statistically sound
sampling plan that supports the desired confidence level. It may also be influenced by the approach
selected to demonstrate that the batch-to-batch variability of CQAs is acceptable.
This section will discuss design strategy for the PPQ, recommended content for the protocol and
report, and the transition to Stage 3 of the process validation lifecycle.
Critical Quality Attributes with Criticality Assessment — CQAs are identified early in Stage 1. They are confirmed, taking into account additional analytical characterization, clinical and/or non-clinical data, and information gathered during Stage 1. CPPs that impact CQAs are reviewed and updated based on detectability and occurrence (11,36).
Analytical Methods — Appropriately validated or suitably qualified methods should be identified and
their status documented. Methods for product release and stability should be fully validated accord-
ing to ICH requirements prior to initiating PPQ batch testing. Additional tests beyond normal release
testing used to support PPQ should be identified and suitably qualified/validated prior to being used
to test PPQ batches. The justification of the status for use in the PPQ studies (qualified and/or vali-
dated) should be fully documented for each analytical method.
Approved commercial batch records — Changes made to batch records during Stage 1 should enhance, clarify, or optimize manufacturing instructions and/or reflect knowledge gained during process characterization. Batch records reflecting the final commercial process to be studied in PPQ should be approved prior to PPQ batch execution.
Process Design Report — This report (as described in Section 3.11) is the repository for the process de-
sign justification, and includes parameter risk ranking, and ranges for the process that will undergo PPQ
study. The data summarized in this report will support the selection of the elements of the PPQ stud-
ies and proposed PPQ acceptance criteria. The process development summary should provide the link
between the detailed process description, risk assessments, control strategy description, characterization
reports, rationale for parameter designations, and clinical manufacturing history. It is a best practice for
this information to be finalized prior to PPQ study design since it provides the scientific support to justify
the PPQ acceptance criteria.
Process Validation Master Plan (PVMP) — Drafting of the process validation master plan should begin
in Stage 1 and be finalized prior to PPQ study initiation. Elements of the Process Validation Master
Plan are outlined in Section 3.12.
Quality System and Training — Qualified and trained personnel will be integral to the PPQ studies.
Detailed, documented training specific to the PPQ is recommended for functional groups directly
involved in the execution of the study. Personnel should understand their role in protocol execution to minimize the risk of human error. Quality Unit approval
of PPQ activities should be completed prior to PPQ study initiation, and all PPQ studies should be
conducted within the quality system.
Approved protocols for PPQ Studies — Protocols for each study should be approved and finalized
prior to initiation of PPQ studies. Design and content of process performance qualification protocols
is discussed in Section 4.4.
Figure 4.3.1-1 illustrates the relationship of the amount of knowledge to Stage 1 and 2 activities.
Where there is greater prior knowledge or more extensive process design data for a new product or process, the scope of the PPQ studies may be decreased; less prior knowledge will require more Stage 1 and/or PPQ data.
Figure 4.3.1-1 Relationship of Prior Knowledge to the Amount of PPQ Data Required
In some cases, Stage 1 data that supports PPQ may be supplemented by stricter testing for a defined number of batches to confirm the results obtained in the Stage 1 studies and the PPQ batches. For example, small-scale column lifetime studies may be used to support column reuse limits. These are then confirmed with a heightened level of impurity monitoring until the reuse period has been reached at full scale.
Risk-based approaches allow a balance between the number of batches studied and the risk of the process. They can also be used in conjunction with objective approaches to determine the number of batches to include.
Where practical, statistical methods are recommended to guide the determination of the number of
PPQ batches needed to achieve a desired level of statistical confidence (see Sections 6 and 8 on statisti-
cal approaches to determining the number of batches and sampling plans). However, this approach
alone may not always be feasible or meaningful. One such example is PPQ of a protein drug substance process with a limited number of clinical batches. This dearth of output could be due to such factors as manufacturing scale or product indications (e.g., orphan drug) where infrequent future manufacturing campaigns are to be performed. In addition to limitations on manufacturing batch production, the nature of protein drug substance manufacturing means that increasing the number of samples taken from the process streams is of limited usefulness in achieving a statistically-based sample size. When it is
not feasible or meaningful to use conventional statistical approaches, a practical, scientifically-based,
holistic approach may be more appropriate. In this case, the following factors may be used to support
the rationale for the number of PPQ batches selected:
• Prior knowledge and platform manufacturing information/data
• Risk analysis of the process to factor the level of risk into the batch number selection
• Increased reliance on Stage 1 data to support that the process is under control and to add to the data set
• Continuation of heightened sampling/testing plans during continued process verification until a sufficient
dataset to achieve statistical confidence has been accumulated.
When a combination of approaches and data are used, the rationale and justification should be clear-
ly documented in the process validation master plan. Also, references to all supporting source data
should be included.
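Where a statistically based batch count is pursued, one common illustrative tool is the success-run theorem, which gives the minimum number of consecutive conforming batches needed to claim a given reliability at a given confidence. This sketch is an assumption for illustration, not a method prescribed by this report:

```python
import math

def batches_for_confidence(confidence: float, reliability: float) -> int:
    """Success-run theorem: minimum number of consecutive conforming
    batches needed to claim, with the stated confidence, that the
    probability a batch conforms is at least `reliability`.
    Assumes batches are independent and identically distributed."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

# Illustrative targets (not prescribed by this report):
n_90_90 = batches_for_confidence(0.90, 0.90)  # 22 batches
n_95_95 = batches_for_confidence(0.95, 0.95)  # 59 batches
```

For 90% confidence of at least 90% batch reliability, this yields 22 batches, which illustrates why a purely statistical batch count may be impractical for low-volume products and why the holistic factors listed above may be needed instead.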
• Different container sizes or different fill volumes in the same container closure system.
The rationale for selection of representative groups and numbers of batches should be scientifically
justified, risk assessed, and outlined in the process validation master plan and PPQ protocols.
An example of a matrix design is shown for a PPQ of a filling process where manipulation of three
variables results in multiple drug product strengths. Variables in this example include:
• Fill Volume
• Bulk Drug Product Solution Concentration
• Final Drug Product Strength
Table 4.3.2.6-1 Illustration of a Matrix Approach for Filling Process PPQ
Rationale for selection of representative groups and number of batches should be scientifically justi-
fied and outlined in the process validation master plan and PPQ protocols.
Equipment Family
Cell culture for biological product manufacturing can be performed in multiple trains using the same
equipment and process in each one. Use of a family or grouping approach may be valid for the PPQ
for the fermentation unit operations. This example shows how such an approach, which limits the
number of batches for the PPQ versus repeated multiple runs from each fermenter, could be used.
In this case, each equipment train was evaluated for similarity of the equipment (identical equipment trains with duplicated equipment of the same model and manufacturer). Identical equipment trains reduce the number of batches needed to show that the process is reliable in each one.
The use of PAT controls can provide an alternate approach to PPQ. If a PAT system is used to control
every commercial batch, then the PPQ stage will have a different focus. For example, if a powder
blending or solution mixing operation is controlled by a PAT system, such as NIR (near-infrared) assays, the PPQ will involve demonstrating that the control system and the process model work as predicted in commercial manufacturing.
Qualification of the equipment, measurement system, and process must demonstrate the capability to
adjust CPPs according to the established algorithm and confirm that the adjustments result in accept-
able and predictable outputs. In other words, a PAT-based control method needs to be qualified (20).
For processes or individual unit operations that yield a single homogenous pool of material, statisti-
cally based sampling plans may not be useful in ascertaining the level of intra-batch process variability.
For example, analysis of multiple samples from a homogeneous bulk solution or API provides infor-
mation on the variability of the analytical method only, not intra-batch variability of the process. In
these cases, extended characterization of intermediate pools and non-routine sampling performed at
certain points in the process and comparison of the data between batches can demonstrate process
control and reproducibility.
When establishing acceptance criteria for PPQ, the following considerations should be taken into account:
• Historical data / prior knowledge
• Preclinical, development, clinical, and pre-commercial batches
• Early analytical method suitability (if data is used from clinical lots)
• Amount of data available (level of process understanding)
• Sampling point in the process
• Ability to meet compendial requirements with high confidence
An overview of the factors considered for determining PPQ acceptance criteria should be described (or referenced, if included in a different document). Criteria for determining inter- and intra-batch consistency should be defined. All parameters and attributes designated for tracking and trending in Stage 3 Continued Process Verification should be included in PPQ acceptance criteria.
Acceptance criteria may include:
Incoming material — Meets designated criteria (may be raw material or the output of a preceding step).
Process Parameters — All process parameters are expected to remain within Normal Operating
Ranges; particular attention is focused on parameters with Critical or Key designations.
• Critical Process Parameters (CPP) with the potential to impact critical quality attributes
• Key Process Parameters (KPP) with the potential to impact process performance.
Attributes — All product quality and process performance attributes should meet pre-defined accep-
tance criteria and include statistical criteria where appropriate.
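Where statistical criteria for inter- and intra-batch consistency are applied, one hedged sketch is a one-way ANOVA variance decomposition of assay results into within-batch and between-batch components. The function name and data layout below are assumptions, not a method prescribed by this report:

```python
from statistics import mean, variance

def variance_components(batches):
    """Estimate intra-batch (within) and inter-batch (between) variance
    from equal-sized groups of assay results, using a one-way ANOVA
    method-of-moments estimate. `batches` is a list of lists, one inner
    list of results per batch; all batches must have the same size."""
    n = len(batches[0])
    assert all(len(b) == n for b in batches) and n > 1
    ms_within = mean(variance(b) for b in batches)         # pooled within-batch mean square
    ms_between = n * variance([mean(b) for b in batches])  # between-batch mean square
    # Negative estimates are truncated to zero, a common convention.
    between_component = max(0.0, (ms_between - ms_within) / n)
    return ms_within, between_component
```

If the between-batch component dominates the within-batch component, inter-batch consistency (rather than assay or sampling noise) is the variability driver and would warrant investigation.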
Introduction
The introduction should include a description of the process and/or specific unit operations under
qualification, including the intended purpose of the operations in the context of the overall manufac-
turing process. The introduction should provide an overview of the study(ies), and important back-
ground information.
References
References to relevant documents related to the study should be included in the protocol:
• Development and/or Process Characterization Reports that provide supporting data for Operational Parameter
and Attribute ranges
• Process Design Report
• Process Validation Master Plan
• Commercial manufacturing batch records
• Related qualification documents (facilities, utilities, equipment, other PPQ studies)
• Analytical methods
• Specification documents
• Approved batch records
Responsibilities
A designation of various functional groups and their responsibilities as they relate to execution of the
study, and verification that appropriate training has been conducted for all contributors.
A discussion of the number of batches planned should be included, and the rationale should be stated.
The level of confidence expected at the conclusion of the PPQ study should be included as applicable.
Data Collection
Roles and responsibilities for various functional groups as they relate to collection and analysis of
PPQ data and documentation should be included. The list of process data to be collected and how it
will be analyzed should be stated.
Sampling Plan
A description of a defined prospective sampling plan and its Operating Characteristic Curve with de-
tails on the number of samples, frequency of sampling, and sampling points supported by statistical
justification, as applicable:
• Sampling points
• Number of samples and statistical basis for sampling, as appropriate
• Sample volume
• Non-routine sampling for extended characterization
• Sample storage requirements
• Analytical testing for each sample
See Section 6 and Appendix 8 for further information on statistically-based sampling plans.
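The Operating Characteristic curve referenced above can be computed directly for a simple single-stage attribute plan (accept if no more than c defectives appear in n samples). The plan parameters in the example are purely illustrative:

```python
from math import comb

def prob_accept(n: int, c: int, p: float) -> float:
    """Probability of acceptance for a single-stage attribute sampling
    plan (n samples, accept on <= c defectives) at true defect rate p.
    Evaluating this over a range of p traces the plan's Operating
    Characteristic (OC) curve."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Sketch of an OC curve for an illustrative plan n=50, c=1:
oc_curve = {p: prob_accept(50, 1, p) for p in (0.01, 0.02, 0.05, 0.10)}
```

Plotting `oc_curve` shows how steeply the plan discriminates between acceptable and unacceptable defect rates, which is the statistical justification the protocol should document.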
Analytical Testing
The overall validation package includes the methods used for all analytical testing performed, from
assessment of raw materials to extended characterization of the drug product. A listing of all analyti-
cal methods used in each protocol and the validation or qualification status of each (and references to
source documents) should be included. Analytical method validation should also be included as part
of the process validation master plan.
Deviations
Not all potential deviations can be anticipated, regardless of the level of characterization and knowledge. A general framework for defining the boundaries of qualification is appropriate, for example:
• Out-of-specification or out-of-limits test results.
• Failure of a CPP to remain within normal operating range; a CPP is designated as such due to the potential
impact on a corresponding CQA. Failure to control may indicate overconfidence in an immature control
strategy. This would be grounds for protocol failure.
• Missed samples or samples held under incorrect storage conditions
• How individual batches or lots failing to meet validation acceptance criteria will impact the study.
Introduction
The introduction should include a concise description and outline of the unit operations or group of
unit operations that have been qualified. It should summarize the overall results of the study, provid-
ing background information and explanations as necessary.
Deviations
A summary of the deviations and corresponding root causes, as well as a discussion of the potential
impact to the PPQ, should be included. Corrective actions resulting from deviations should be dis-
cussed. Their impact on the process, the PPQ, and on the affected batches should be provided.
Protocol Excursions
Protocol excursions and unexpected results should be included and fully described in the report. A ref-
erence to the root cause analysis should be provided if documented separately from the PPQ report.
Any corrective actions and their impact on PPQ should be outlined in the report.
Data summarized and compared with pre-defined acceptance criteria should be presented in tabular or graphical format whenever possible, and data used from Stage 1 studies should be clearly identified. A reference to the original study should be provided when data from outside the PPQ is used to augment the PPQ data set for statistical analysis or other support. The level of statistical confidence achieved should be stated. If the desired level of statistical confidence was not achieved, the reasons for this and follow-up actions should be discussed.
The discussion should provide support for any study conclusions. The impact of ranges and devia-
tions should be discussed if they affect the study results. Risk assessment and any follow-up conclu-
sions, including corrective actions, should be stated.
Findings associated with batches or lots that fail to meet the acceptance criteria in the protocol should be referenced in the final PPQ package, as should any corrective measures taken in response to the cause of failure.
Conclusions
Conclusions as to whether the data demonstrate that the process is in a state of control should be provided. Pass or fail results should be stated for each acceptance criterion and its corresponding results. When a unit operation approach is used, PPQ reports should be prepared for each unit operation study. A summary executive report that unifies all the study results to support the overall process PPQ should be written.
Continued monitoring of process variables enables adjustments to inputs covered in the scope of a CPV plan, compensating for process variability and ensuring that outputs remain consistent. Since all sources of potential variability may not be anticipated and defined in Stages 1 and 2, unanticipated events or trends identified from continued process monitoring may indicate process control issues and/or highlight opportunities for process improvement. Science- and risk-based tools help achieve high levels of process understanding during the development phase, and subsequent knowledge management across the product life stages facilitates continuous monitoring (see Sections 3.0 and 4.0).
High-level quality system policies/documents outline how various departments interact and how
information is compiled and reviewed to ensure maintenance of the validated state. Under that policy
document as well as a process validation master plan, a product-specific CPV plan should include the
following elements:
• Roles and responsibilities of various functional groups
• Sampling and testing strategy
• Data analysis methods (e.g., Statistical Process Control methods)
• Acceptance criteria (where appropriate)
• Strategy for handling Out of Trend (OOT) and Out of Specification (OOS) results
• Mechanism for determining what process changes/trends require going back to Stage 1 and/or Stage 2
• Timing for reevaluation of the CPV testing plan
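A minimal sketch of the kind of Statistical Process Control analysis a CPV plan might specify — 3-sigma limits derived from baseline batches, with flagging of out-of-limit points — is shown below. A production implementation would normally use a formal I-MR or Shewhart chart with additional run rules; the function names here are assumptions:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Compute 3-sigma control limits from baseline data (e.g., PPQ
    plus early commercial batch results). Returns (LCL, UCL)."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def out_of_control(values, lcl, ucl):
    """Return (batch index, value) pairs falling outside the limits;
    each hit would trigger Quality System actions per the CPV plan."""
    return [(i, v) for i, v in enumerate(values) if not lcl <= v <= ucl]
```

Limits would be re-derived as the data set matures and the CPV plan is periodically reevaluated.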
Figure 5.1.2-1 illustrates an example of the development of a CPV monitoring strategy throughout
the lifecycle stages. Ideally, the majority of the control strategy is established prior to Stage 2, when
PPQ is conducted. When adopting the concept of Continued Process Verification for legacy products,
the same general approach should be taken to document and execute the CPV program (see Section
5.1.3, Legacy Products and Continued Process Verification).
Because Stage 3 is part of the lifecycle validation approach (5.1.2-2), Continued Process Verification
should be governed by both an overarching quality system for validation practices and a process vali-
dation master plan. At a minimum, the process validation master plan should make high-level com-
mitments for both Process Design (Stage 1) and Continued Process Verification (Stage 3) in addition
to Process Qualification (Stage 2). The specifics of the CPV sampling/testing strategy may not be
finalized until completion of PPQ. Therefore, the process validation master plan may include general
commitments to the planned CPV strategy. These are then further clarified in a separate CPV Plan
referenced in the process validation master plan. It is still possible that a process validation master plan
[Figure elements: Draft Initial Plan — statistical methods; data to be trended and rationale; confidence in the process based on small-scale models; frequency of reporting. Stage 2. Periodic review to assess state of control.]
In considering whether the sampling plans for legacy products are adequate, it may be determined
that a statistically-driven approach should be applied. However, the amount and type of data may also
lead to a decision that statistical justification of the sampling plan is unnecessary. This determination
should be part of the initial assessment of the historical data and monitoring approach. Although
statistically-derived models may not be required, the sampling plan should be scientifically sound and
representative of the process and each batch sampled.
* Is an appropriate Process Control Strategy (demonstrating understanding of the impact of process parameters on CQAs) defined, and does statistical analysis of data show that variability is controlled?
The prospective CPV plan should provide specific instructions for analyses that are conducted to a limited degree and subsequently discontinued once a sufficient number of data points have been accumulated to determine process control. The number of batches sampled and the frequency of sampling within a batch should be stated in a Stage 3 enhanced sampling plan. Depending on the data generated, samples collected and analyzed for information only (FIO) should have a designated end-point. A more open-ended approach, where no specific number of batches is identified, could be used to address data trends and results. A plan that describes an approach to reduce (step-down) or increase (step-up) sampling and testing as a result of trending and results is also an option.
Prospective criteria should be established to ensure that the process is in a state of control. However companies define it, an "out of control" result (e.g., Out-of-Trend, Out-of-Control, Out-of-Specification, outside Action Limit) should trigger actions per the Quality System (e.g., investigation, impact assessment to validated state). Specific actions will vary on a case-by-case basis, but the CPV plan should specify how such results are to be addressed.
Section 5.1.4 covers sources of process variability that may not be parameter-related (e.g., raw materials, personnel, and environment). As part of the overall CPV assessment, high-risk potential sources of variability should be risk-mitigated, and also assessed and demonstrated to be under control.
Trends in purity for a critical raw material, for example, may indicate subtle differences between
suppliers. Even seemingly innocuous changes by a supplier may lead to out-of-trend or out-of-speci-
fication events. These should be evaluated in light of overall process consistency and product quality.
Figure 5.2.1-1 depicts sources of data that contribute to continuous improvement of a manufactur-
ing process. While not intended to be an all-inclusive list, the figure shows typical categories of data
associated with product production and performance.
Figure 5.2.1-1 Body of Knowledge and Maintenance of Process Control
[Figure elements: process trend analysis; process change impact; periodic review; deviation review; extended characterization data; CAPA; product complaints.]
The frequency of data review will depend heavily on risk. The period of review for various processes and
sub-processes is likely to vary greatly depending upon the levels of associated risk and the complexity of
control. The starting point for defining the review period will be the most recent process risk commu-
nication document. As more production data is generated, deeper process understanding is gained and
control is likely to be more easily demonstrated. Thus, the period or intensity of review may be reduced.
An annual commercial data compilation effort in preparation for Annual Product Review (APR) may
be sufficient. However, more frequent data reviews and comparisons to defined acceptance criteria
may help manufacturers be more proactive and less reactive. APR packages are necessary, as per regu-
latory guidelines. However, APR exercises are likely to become high-level reviews and summaries of
multiple, more frequent CPV data reviews. The APR will identify any gaps in the CPV data reviews
and will summarize long-term trends, but more frequent CPV data reviews should be performed by
the manufacturer at defined intervals.
Note: FDA 21 CFR 211.180(e) requires an evaluation at least annually. The periodicity of the review is
to be established by the manufacturer, but should be at least annually.
This section presents tools and methods to assist in the planning and performance of the process validation
program. It includes sections on risk and knowledge management, statistical methodology, process analyti-
cal technology, and technology transfer. These tools can be used to identify, capture, and communicate
information needed for the design and assurance of process control. They facilitate informed decision-
making, prioritization of activities, and interpretation of results related to the process validation effort.
The Quality Risk Management system is an “enabler” or “enabling system.” When correctly applied,
it adds supportive elements to the product lifecycle and other systems (e.g., the Pharmaceutical Qual-
ity System). The application of risk management principles and approaches is instrumental to effec-
tive decision-making in the Process Validation Lifecycle.
Management of variability is one example of applying risk management in the validation lifecycle.
The level of control required to manage variability is directly related to the level of risk that variability
imparts to the process and the product. The use of risk management to address variability requires
understanding of:
• The origin of the variability
• The potential range of the variability
• The impact of the variability on the process, product, and ultimately, the patient
Risk assessment should occur early in the lifecycle, be controlled appropriately, and effectively com-
municated. Risk Management increases product and process knowledge, which translates into greater
control of product and process variability, and a lower residual risk to patients.
The process validation lifecycle provides continued assurance that processes will manufacture prod-
uct in a predictable and consistent manner. Where decisions related to product quality or process per-
formance are made, risk can be assessed at several points throughout the process validation lifecycle.
Quality Risk Management applications throughout the process validation lifecycle include the follow-
ing (see Figure 6.1.1):
Stage 1 — Process Design
• Identification of product attributes that may affect quality and patient safety
• Criticality analysis of product quality attributes (CQA identification)
• Cause and Effect Analysis or Risk Ranking and Filtering, which link the process steps and parameters to process
performance or product quality attributes. These can be used to screen potential variables for future process
characterization (e.g., DoE) and testing.
• Preliminary Hazards Analysis (PHA) or early FMEA
Figure 6.1-1 depicts a quality risk management lifecycle tool for process development and validation (21).
Figure 6.1-1 Quality Risk Management: A Lifecycle Tool for Process Development and Validation
[Figure elements — column headings: Based on product quality and patient safety; Is the process known?; Are the variables known?; When is confidence achieved?; What is looked for and for how long? Column entries include: Risk Assessment, CQAs, Requirements, Control Strategy; Parameters, Variables, DOE; Support System Commissioning, IQ, OQ, PQ; Qualification, Statistical Sampling Plans; Continuous Process Verification, Monitoring, Reaction to Issues, Process Improvements.]
Criticality of product attributes is assessed along a continuum; i.e., it is not a yes or no question. This is
accomplished by performing a risk assessment analysis that uses Severity and Uncertainty, rather than
the usual Severity and Occurrence. The process, which is iterative, is based on building product and
process knowledge.
[Figure: Severity vs. Uncertainty criticality matrix — attributes of High severity (catastrophic patient
impact) are rated Critical at every level of uncertainty, while attributes of Medium severity (moderate
patient impact) are rated Potential]
Risk management is commonly applied during the Facilities, Utilities, and Equipment Qualification phase
of Stage 2. Functional specifications are reviewed to help plan qualification activities. Higher-risk items
require a higher level of qualification testing, while lower-risk items can be satisfied by commissioning
activities with appropriate risk reviews and controls. Risk assessment output ratings can be applied
against standard criteria to create the plan (see Table 6.1-1).
Table 6.1-1 Risk-Based Qualification Planning
[Table: risk assessment output ratings mapped to corresponding qualification planning actions]
Severity — Determines the level of testing required during Stage 2. The higher the severity rating for
a particular attribute, the higher the statistical confidence required (see Table 6.1-2).
Occurrence — The occurrence rating is tied directly to variation. High Occurrence rates may require
further testing or development to reduce variation and increase process knowledge. Testing at this
stage reduces additional and more costly testing during Stage 3. When the true occurrence rate is
unknown, additional development or engineering studies may be required. When testing is complete,
the occurrence ranking and overall risk rating for the failure mode can be updated with the new pro-
cess knowledge.
Detection (controls) — If the level of assessed controls is zero, the control strategy may need to be
updated or new controls created. Controls do not have to be technology-based. The HACCP system
is an example of a control, as are procedures and training.
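The rating logic described above can be sketched in code. This is a hypothetical illustration: the 1–5 rating scales, the RPN thresholds, and the qualification plan names are invented for the example and are not taken from this report.

```python
# Hypothetical sketch of risk-based qualification planning. The rating
# scales, thresholds, and plan names below are illustrative assumptions,
# not values drawn from the technical report.

def risk_rating(severity: int, occurrence: int, detection: int) -> int:
    """Combine 1-5 ratings into a simple risk priority number (RPN)."""
    for r in (severity, occurrence, detection):
        if not 1 <= r <= 5:
            raise ValueError("ratings must be on a 1-5 scale")
    return severity * occurrence * detection

def qualification_level(rpn: int) -> str:
    """Map an RPN to an assumed qualification planning category."""
    if rpn >= 60:
        return "full qualification (IQ/OQ/PQ with enhanced sampling)"
    if rpn >= 20:
        return "standard qualification (IQ/OQ/PQ)"
    return "commissioning with risk review"

print(qualification_level(risk_rating(5, 4, 4)))  # high-risk item
print(qualification_level(risk_rating(2, 2, 2)))  # low-risk item
```

As testing reduces the occurrence ranking, the same mapping yields a lighter qualification plan, which mirrors the update loop described in the text.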
Table 6.1-2 Severity Rating and Sampling Requirements
Severity | Rating | Statistical Confidence Required
Med      | ++     | 95%
Low      | +      | 90%
Risks to patients should also be addressed during commercial production. This can be done through
a risk assessment process that builds on the current understanding of risk and process knowledge, com-
bined with the Continuous Process Verification program. QRM is a lifecycle process, with assess-
ments that occur throughout the lifecycle of the product.
Subtle changes in raw materials can often lead to significant and unforeseen variations in production.
In one case, the cause of a change in elution profile was lot-to-lot variation in the particle size distribution
of a chromatographic resin (39). Techniques such as Near Infrared (NIR) spectroscopy or even Nuclear
Magnetic Resonance (NMR) can be used to ensure that raw materials meet their specifications and CQAs.
An important risk mitigation strategy is for drug manufacturers to work with their suppliers so that each
can understand the other's quality systems and demands.
[Table: analysis tools — Design of Experiments, Multi-Vari Chart, and Pareto Analysis — marked against
the lifecycle stages in which they apply]
DoE differs from the classical approach to experimentation, where only one parameter is varied while
all others are held constant. This “one-factor-at-a-time” type of experimentation cannot determine
process parameter interactions, where the effect of one parameter on a quality attribute differs de-
pending on the level of the other parameters. The basic steps for the DoE approach are summarized
below:
1. Determine the input parameters and output quality attributes to study.
a. This is best done as part of a team approach to identify potential critical process parameters and
quality attributes; in many cases, the process may be well-understood and the parameters and
attributes for experimentation readily determined.
b. If there are a large number of input parameters, an initial screening design, such as a fractional
factorial or Plackett-Burman design, may be used (40). The purpose of a screening experiment
is to identify the critical parameters that have the most important statistical effect on the quality
attributes. Since screening designs do not always clearly identify interactions, the reduced number
of parameters identified by the screening experiment will be included in further experiments.
c. If the change is to an existing process, it is often valuable to construct a Multi-Vari chart
or SPC chart from current process data (41). A Multi-Vari chart can be used to identify if
the biggest sources of variation are within-batch variation, between-batch variation, or
positional variation (e.g., between fill heads on a multi-head filler). Variance components can
also be calculated from the data to determine the largest component of variance. Process
parameters that could be causing the largest sources of variation are then identified and
included in subsequent experiments.
For example, if within-batch variation appears to be the largest source of variation, then
charge-in of components done once at the beginning of the batch is not likely to be a key
contributor to this variation. Charge-in differences due to inadequate weighing, for example,
could cause between-batch variation rather than within-batch variation. This simple but
powerful tool can sometimes discover important yet unsuspected critical parameters or
“lurking variables” that contribute to process variation, even if they are not initially on the
list of parameters.
The same data may also be used to create SPC charts to determine if the process is in statistical
control. Since a lack of statistical control will contribute to experimental error variation, it
will be more difficult to understand the results of an experiment if the process is not in
statistical control. Lack of statistical control may also mean that there are “lurking variables”
not on the list of process parameters that are contributing to process variation.
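As a rough numerical counterpart to the Multi-Vari chart described above, the variance components can be estimated from balanced lot data with one-way random-effects ANOVA. A minimal sketch, with made-up measurements:

```python
# Illustrative sketch (not from the report): estimating within-lot and
# between-lot variance components from balanced lot data via one-way
# random-effects ANOVA, the numerical counterpart of a Multi-Vari chart.
from statistics import mean

def variance_components(lots):
    """lots: list of equally sized lists of measurements, one per lot."""
    k = len(lots)               # number of lots
    n = len(lots[0])            # measurements per lot (balanced design)
    grand = mean(x for lot in lots for x in lot)
    ss_within = sum((x - mean(lot)) ** 2 for lot in lots for x in lot)
    ss_between = n * sum((mean(lot) - grand) ** 2 for lot in lots)
    ms_within = ss_within / (k * (n - 1))
    ms_between = ss_between / (k - 1)
    var_within = ms_within
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_within, var_between

# Invented data where lot means drift more than values within a lot:
lots = [[99.8, 100.1, 100.0], [101.0, 101.3, 101.1], [99.0, 99.2, 99.1]]
vw, vb = variance_components(lots)
print(f"within-lot variance:  {vw:.4f}")
print(f"between-lot variance: {vb:.4f}")
```

Here the between-lot component dominates, which would point subsequent experiments at parameters that change from lot to lot (e.g., charge-in) rather than within a lot.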
[Figure: a stable process exhibiting only within-lot variation, plotted by lot (Lot 1 through Lot 5) and
as the total process over time]
[Figure: the same stable process plotted against the upper (USL) and lower (LSL) specification limits]
A more complex form of a process that is also stable and in control is shown in Figure 6.2.2-3. This
pattern is typical of many processes where there is variation both within and between lots, but the
variation between lots is in control. One purpose of validation and CPV is to determine both within-
and between-lot variations.
Figure 6.2.2-3 A Process with Both Within-lot and Between-lot Variation
Figure 6.2.2.1-1 shows an example of an Xbar/S chart for fill weight, where five vials from a single-
head filler were sampled every 15 minutes over a six-hour production order, or lot, yielding 24 samples.
Both the mean and standard deviation appear to be stable, with no values exceeding the 3-sigma con-
trol limits. The process appears to be stable and in a reasonable state of statistical control.
Figure 6.2.2.1-1 Xbar/S Control Chart for Fill Weight, n=5 per group
[Figure: Xbar chart with center line X̄=52.04 and control limits UCL=52.37, LCL=51.70; S chart with
center line S̄=0.24 and control limits UCL=0.49, LCL=0; 24 subgroups across the production order]
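The control limits for an Xbar/S chart like the one described above follow directly from the subgroup means and standard deviations. A minimal sketch for subgroups of n=5, using the standard SPC constants for that subgroup size; the fill-weight subgroups are invented, not the data behind Figure 6.2.2.1-1:

```python
# Minimal Xbar/S control-limit calculation for subgroups of n=5, using
# the standard SPC table constants A3=1.427, B3=0, B4=2.089 for that
# subgroup size. The fill-weight data below are made up for illustration.
from statistics import mean, stdev

A3, B3, B4 = 1.427, 0.0, 2.089   # constants for subgroup size n=5

def xbar_s_limits(subgroups):
    xbar_bar = mean(mean(g) for g in subgroups)   # grand mean
    s_bar = mean(stdev(g) for g in subgroups)     # average subgroup stdev
    return {
        "xbar": (xbar_bar - A3 * s_bar, xbar_bar, xbar_bar + A3 * s_bar),
        "s": (B3 * s_bar, s_bar, B4 * s_bar),
    }

subgroups = [[52.1, 51.9, 52.0, 52.2, 51.8],
             [52.0, 52.3, 51.9, 52.1, 52.0],
             [51.8, 52.0, 52.2, 52.1, 51.9]]
limits = xbar_s_limits(subgroups)
print("Xbar chart (LCL, CL, UCL):", limits["xbar"])
print("S chart    (LCL, CL, UCL):", limits["s"])
```

A point outside these limits on either chart would flag a possible special cause for investigation, as discussed below.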
Control charts can be used during all three validation stages for within- or between-lot data. During
Stages 1 and 2, they can be used to determine if the process is stable and in control in order to com-
mence commercial production. Control charts are particularly useful during Stage 3 (CPV Stage).
Special causes of variation affect almost every process at some point. Control charts help identify
when such a special cause has occurred and when an investigation may be needed. As special causes
are identified and corrective actions taken, process variability is reduced and quality improved. Con-
trol charts are easy to construct and can be used by operators for ongoing process control. They also
create a common language for discussing process performance, and can prevent unnecessary adjust-
ments and investigations. They encourage staff to be responsible for monitoring and improving their
process, rather than just taking action when QC test results fail.
When possible, it is preferable to use variables data rather than attributes data. A measured value
contains more information than an attributes value, such as conforming/nonconforming. Control
charts for variables data have more statistical power and can use smaller sample sizes than attributes
data charts. Although the underlying theory for control charts assumes normally distributed and
uncorrelated data, control charts are robust and generally work well even when these assumptions
are not met (40). One exception is for attributes data with low values, which have a highly skewed
non-normal distribution. Bioburden monitoring is an example of a process with low attributes data
values, where many or most of the data are zeroes. Exact probability control limits, based on the
negative binomial, Poisson, or another suitable distribution, can be used to prevent too high a false-
alarm rate; see Understanding Statistical Process Control, 2nd ed. (42). Additional information on control
charts is provided in Appendix 8.2, Types of Control Charts.
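For low-count attributes data such as bioburden, an exact-probability upper limit can be derived directly from an assumed Poisson model rather than from 3-sigma limits. A sketch; the mean count and false-alarm rate below are assumptions for illustration:

```python
# Hedged sketch: an exact-probability upper control limit for low-count
# attributes data (e.g., bioburden counts), assuming a Poisson model,
# as an alternative to 3-sigma limits that misbehave on highly skewed
# data. The mean count and alpha are illustrative assumptions.
from math import exp, factorial

def poisson_upper_limit(mean_count: float, alpha: float = 0.005) -> int:
    """Smallest limit c such that P(X > c) <= alpha for X ~ Poisson(mean)."""
    cdf, c = 0.0, 0
    while True:
        cdf += exp(-mean_count) * mean_count ** c / factorial(c)
        if 1.0 - cdf <= alpha:
            return c
        c += 1

# With an average of 0.5 counts per sample, signal only above this limit:
print(poisson_upper_limit(0.5, alpha=0.005))  # → 3
```

Counts above the returned limit would trigger an investigation; tightening alpha raises the limit and lowers the false-alarm rate.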
[Figure 6.2.2.1.3-1: five process distributions plotted between LSL=95 and USL=105, illustrating
combinations of capability indices (e.g., Cp=2.0/Cpk=2.0, Cp=2.0/Cpk=1.0, Cp=1.0/Cpk=1.0,
Cp=1.33/Cpk=1.33)]
If the process is in statistical control, the standard deviation (s) used to calculate Cp and Cpk in Figure
6.2.2.1.3-1 is usually based on estimates derived from the control chart for the standard deviation or
range. These estimates of s will not include between-subgroup variation that may have occurred in
the mean. For an individuals chart where n=1 per subgroup, the standard deviation is usually based
on the moving range, which minimizes the effect of between-subgroup variation. If the standard
deviation is calculated by the familiar equation s = √(Σ(xi − x̄)²/(n − 1)) applied to all the data combined, this
estimate will include between-subgroup variation, such as between-lot variation, and the indices are
then called Pp and Ppk. If a process is in statistical control, there will be little difference between Cp and
Pp or between Cpk and Ppk. If a process is not in statistical control, it is difficult to determine process
capability because of the lack of process stability; see Figure 6.2.2-2. If a process is not in statistical
control, Pp and Ppk are preferred as they include variation due to lack of stability. However, this prac-
tice is somewhat controversial; see “Introduction to Statistical Quality Control, 6th ed.” (43)
Figure 6.2.2.1.3-2 shows the relationship between the process capability index Cpk and the probability
that the process output will be out of specification. The table assumes the process is in statistical control,
normally distributed, and centered between the two-sided lower (LSL) and upper (USL) specification
limits. If the process is not normally distributed, process capability methods for non-normal
distributions should be used.
Acceptable values for Cpk depend on the criticality of the characteristic, but 1.0 and 1.33 are commonly
selected minimum values. Six-sigma quality is usually defined as Cp ≥ 2.0 and Cpk ≥ 1.5 for a normally
distributed process in statistical control. See Wheeler (40) or Montgomery (43) for more complete
treatments of SPC and process capability.
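The indices above reduce to two short formulas. A minimal sketch; the means, sigmas, and limits are illustrative, and whether the result is Cp/Cpk or Pp/Ppk depends only on whether a within-subgroup or overall sigma is supplied:

```python
# Illustrative computation of the capability indices discussed above.
# Cp/Cpk would normally use a within-subgroup sigma estimated from a
# control chart; passing the overall standard deviation instead yields
# Pp/Ppk. All numbers below are invented for the example.
def capability(mean: float, sigma: float, lsl: float, usl: float):
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Centered process: LSL=95, USL=105, mean=100, sigma=1.25
cp, cpk = capability(100.0, 1.25, 95.0, 105.0)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")    # → Cp=1.33, Cpk=1.33

# Off-center process with the same spread:
cp, cpk = capability(101.25, 1.25, 95.0, 105.0)
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}")    # → Cp=1.33, Cpk=1.00
```

The second call shows why Cpk falls below Cp as the mean drifts off center while the spread is unchanged.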
Samples should be representative of the entire population being sampled. Random, stratified, and
periodic/systematic sampling are the most commonly used approaches. Targeted sampling to in-
clude suspected worst-case locations within the batch or process may be used when appropriate. For
example, samples from the very beginning and end of the batch may be selected to assure that these
potential trouble spots are included, while the rest of the required samples are randomly selected
from throughout the batch.
Reaching at least 90% confidence at the end of PPQ is desirable when using statistical acceptance
sampling for validation with little prior confidence. This means that the combined information from
the PPQ runs shows that there is at least 90% confidence that the validation performance level has
been met; 90% confidence is recommended as the minimum because it is the traditional confidence
associated with detecting unacceptable quality levels (called the Rejection Quality Level [RQL], Lot
Tolerance Percent Defective [LTPD], or Limiting Quality [LQ]) (46). Note that this use of the term
"confidence" differs from the traditional 95% confidence of acceptance associated with the Acceptance
Quality Limit (AQL) in routine lot acceptance sampling. The AQL relates to the Type I error of
incorrectly rejecting an acceptable lot, while the 90% minimum confidence recommended here refers
to the Type II error of incorrectly accepting an unacceptable process.
Single sampling for attributes is the simplest type of sampling. For example, a sampling plan of n=388
units, accept on 1 nonconformance, reject on 2, would detect a 1% nonconformance rate with 90% con-
fidence. The statistical operating characteristic curve for this sampling plan is shown in Figure 6.2.3-1.
[Figure 6.2.3-1: operating characteristic curve showing the probability of acceptance, and its
complement, confidence = 1 − Pr(accept), versus the % nonconforming units in the process
(0.0% to 1.5%)]
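The claim that the n=388, accept-on-1 plan detects a 1% nonconformance rate with 90% confidence can be checked directly with the binomial distribution:

```python
# Checking the single sampling plan quoted in the text (n=388, accept
# on 1, reject on 2): the probability of acceptance at a 1% nonconformance
# rate should be at most 10%, i.e., at least 90% confidence of detection.
from math import comb

def prob_accept(n: int, accept: int, p: float) -> float:
    """P(number of nonconforming units <= accept) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(accept + 1))

pa = prob_accept(388, 1, 0.01)
print(f"P(accept) = {pa:.4f}, confidence = {1 - pa:.4f}")
```

Evaluating prob_accept over a grid of p values reproduces the operating characteristic curve for the plan.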
Double sampling plans for attributes may take a second set of samples depending on the results of
the first set. For example, the double sampling plan n1=250, a1=0, r1=2; n2=250, a2=1, r2=2 will
also detect a 1% nonconformance rate with 90% confidence. The values n1 and n2 are the stage 1 and
stage 2 sample sizes; a1 and a2 are the accept numbers; r1 and r2 are the reject numbers. If a1=0 non-
conformances are found in the first set of n1=250 samples, the sampling plan is passed. If exactly 1
nonconformance is found in the first sample of n1=250 units, an additional n2=250 units are sampled.
If the total number of non-conformances found in the combined 500 samples is no more than a2=1,
the sampling plan is passed. If the total number of nonconformances found in the combined 500
samples is r2=2 or greater, the sampling plan is failed. One advantage of double sampling plans is that
they often have lower false reject rates; i.e., good processes will not fail the sampling plan as often.
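The double plan above can be verified the same way. A sketch; the helper assumes, as in this plan, that a second stage is triggered only by first-stage counts between a1+1 and a2:

```python
# Sketch for the double sampling plan described above
# (n1=250, a1=0, r1=2; n2=250, a2=1, r2=2): accept with 0 nonconformances
# in the first 250 samples, or with exactly 1 in the first 250 and a
# combined total of no more than 1 after 250 more samples.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def prob_accept_double(n1, a1, n2, a2, p):
    # Accept outright if the first-stage count is <= a1; otherwise, for
    # each first-stage count d that forces a second stage, accept if the
    # combined count stays <= a2.
    pa = sum(binom_pmf(d, n1, p) for d in range(a1 + 1))
    for d in range(a1 + 1, a2 + 1):
        pa += binom_pmf(d, n1, p) * sum(
            binom_pmf(k, n2, p) for k in range(a2 - d + 1))
    return pa

pa = prob_accept_double(250, 0, 250, 1, 0.01)
print(f"P(accept) = {pa:.4f}, confidence = {1 - pa:.4f}")
```

The acceptance probability at 1% nonconforming comes out just under 10%, matching the stated 90% confidence, with a smaller expected sample size than the single plan when quality is good.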
Several types of variables sampling plans may be used for validation, one of the most common being
the normal tolerance interval. For example, one normal tolerance interval sampling plan for two-
sided specifications is n=30, k=3.17. If the average ± 3.17 standard deviation is contained within the
specification limits, the sampling plan is passed. This plan also provides 90% confidence in detecting a
1% nonconformance rate. Variables sampling plans assume the data are normally distributed, and this
assumption should be confirmed with a suitable normality test. An advantage of variables sampling
plans is that they often are able to use much smaller sample sizes than attributes plans to provide the
same confidence.
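The n=30, k=3.17 plan above reduces to a one-line check. A minimal sketch with invented, well-centered data; as noted, a normality test should precede this in practice:

```python
# Minimal sketch of the normal tolerance interval plan quoted above
# (n=30, k=3.17): pass if mean ± k·s lies entirely within the
# specification limits. The data below are invented for illustration.
from statistics import mean, stdev

def tolerance_interval_pass(data, k, lsl, usl):
    m, s = mean(data), stdev(data)   # sample mean and standard deviation
    return (m - k * s) >= lsl and (m + k * s) <= usl

# A tight, centered data set comfortably inside LSL=95 / USL=105:
data = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0, 99.7, 100.3,
        100.1, 99.9, 100.0, 100.2, 99.8, 100.1, 99.9, 100.0,
        100.3, 99.7, 100.0, 100.1, 99.9, 100.2, 99.8, 100.0,
        100.1, 99.9, 100.0, 100.2, 99.8, 100.1]
print(tolerance_interval_pass(data, 3.17, 95.0, 105.0))
```

Because the pass criterion uses the sample standard deviation, a handful of variables measurements can deliver the same 90% confidence as an attributes plan of several hundred units.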
Example: The validation will show with 90% confidence that the process averages ≤0.1% leaking contain-
ers after simulated shipping. This requires an attributes sampling plan of n=2300, accept=0, reject=1.
Three lots will be used for the Stage 2 PPQ, so n = 2300/3 = 767 containers per lot will be inspected for
leakage after simulated shipping. If no leakers are found in the combined n=2300 samples, the sampling
plan is passed.
ANSI/ASQ Z1.4 “Sampling Procedures and Tables for Inspection by Attributes” and ANSI/ASQ Z1.9 “Sam-
pling Procedures and Tables for Inspection by Variables” are commonly used sampling plans for routine
production (47,48). They should be used with care for validation, since they may not provide a high
enough level of confidence. For example, one Z1.4 tightened sampling plan for AQL 0.4% is n=315,
Not all sampling plans used to make accept/reject decisions are for percent nonconforming units. For
example, the USP test for content uniformity (of dosage units) is specified in terms of a two-stage
sampling plan given in USP. In this case, validation sampling should provide confidence that the USP
test can be passed with high confidence (49).
Example: The sampling plan will show with 95% confidence that the routine USP content uniformity (of
dosage units) test requirements can be met.
Depending on the prior information and/or risk involved, it may not be necessary to determine the
number of PPQ lots using statistical methods. The less information and confidence at the transition to
Stage 2 (PPQ), the more advisable it is to use statistical methods to help determine the number of PPQ
lots where feasible and meaningful. See the Appendix 8.1, Statistical Methods for Determining the
Number of Lots, for statistical approaches to determine the number of lots. Regardless of the number
selected and acceptance criteria used, the data collected during PPQ should be statistically analyzed to
help understand process stability, capability, and within (intra-) and between (inter-) lot variation.
Lots produced during Stage 1 under similar conditions as the PPQ lots may potentially be used to
reduce the number of lots required at PPQ. This can be done using Bayesian statistical methods or by
combining the Stage 1 data and Stage 2 PPQ results – if there are no significant differences in the data
(50). The criteria for combining Stage 1 data and PPQ data should be specified before the PPQ lots
are produced. These criteria would typically include such statistical comparisons as ANOVA (analy-
sis of variance) to compare lot means, Levene/Brown-Forsythe or Bartlett’s test to compare the lot
standard deviations, SPC charts, and equivalence tests to demonstrate that Stage 1 and PPQ data are
similar (51).
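One of the statistical comparisons mentioned, ANOVA on lot means, can be sketched as follows. The lot values are invented; in practice scipy.stats.f_oneway, plus a variance test such as Levene's, would typically be used instead of this hand-rolled version:

```python
# Hedged sketch of one criterion for combining Stage 1 and PPQ lots:
# a one-way ANOVA F statistic comparing lot means, built from the
# standard sums of squares. Lot data below are invented.
from statistics import mean

def anova_f(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

stage1_lot = [99.9, 100.1, 100.0, 99.8, 100.2]
ppq_lot_a  = [100.0, 100.2, 99.9, 100.1, 99.8]
ppq_lot_b  = [100.1, 99.9, 100.0, 100.2, 99.8]
f = anova_f([stage1_lot, ppq_lot_a, ppq_lot_b])
print(f"F = {f:.3f}")  # compare against the F critical value for (2, 12) df
```

A small F statistic (well below the critical value for the relevant degrees of freedom) supports treating the Stage 1 and PPQ lot means as similar, which is the prespecified condition for pooling the data.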
PAT can provide an opportunity to enhance process analysis and process knowledge compared to
traditional tests. It can support process validation whether it is a parallel activity (concurrent with
process validation), reductive activity (reduces execution of existing tests), or replacement activity (al-
ternative to traditional testing). Effective use of PAT to provide process control relies on the selection
of correct quality attributes, process performance ranges, and methods for monitoring and reporting.
It also relies on the proper design, use, and validation of the PAT monitoring, measurement, and
control loop systems. The validation of the PAT system is based in part on the following principles:
1. Measurement of the correct product and in-process quality attributes
2. Accuracy and understanding of the correlation between these quality attributes and the process parameters
that will be adjusted
3. Reliability, suitability, capability, and accuracy of the monitoring, measurement, and process control loop or
adjustment systems
4. Acceptable performance of the PAT system throughout commercial manufacturing, including the ability to
identify opportunities for process improvement.
Function and operation of the equipment and instrumentation used in the PAT system should be
qualified to assure that it will monitor and control the process parameters accurately and reliably.
Equipment and instruments used during the process should be qualified to verify that they are suit-
able for in-process use, including compatibility with process materials and conditions, accuracy, sen-
sitivity, security, and reliability.
By definition, PAT provides continuous process and product attribute verification. Stage 3 activities
should therefore focus on accuracy and reliability of control methods, possible process control im-
provements, and process variables missed during process development and qualification. Evaluation
of PAT and/or in-process derived data should be part of the Quality System and review processes (11).
Where data trending shows excursions in anticipated monitoring results, analysis of the cause of the
excursion should be conducted to determine if changes to the control system are needed or opportu-
nities for process improvement can be identified.
When variables are found that are not being monitored adequately, changes to the monitoring meth-
ods may be needed. All changes should be evaluated for impact on the process and product attributes.
Changes should be evaluated and actions implemented to assure that residual risks do not adversely
affect process performance or product quality. These actions may include steps to qualify the changed
process and equipment.
Technology transfer is successful if process understanding has increased, and there is documented
evidence that the recipient of the technology transfer can routinely reproduce the transferred prod-
uct, process, or method against a predefined set of specifications from the sender. Process understand-
ing and knowledge increase significantly during technology transfer, providing useful information
for process control strategy design and process validation. Technology transfer can occur at different
stages of the process validation lifecycle. If a new process is being transferred from research and de-
velopment to commercial manufacturing, the technology transfer may occur between Stages 1 and
2. However, if it occurs after a product has been launched and it is in the commercial manufactur-
ing phase, then transfer will occur during Stages 2 and 3. Refer to Table 6.4-1 below for distribution
of Technology Transfer Activities throughout the Product Lifecycle, which outlines the increasing
knowledge and process understanding with each technology transfer.
Table 6.4-1 Technology Transfer Activities throughout the Product Lifecycle
(columns: Process Validation Lifecycle Stage — Activities — Knowledge Development — Data Application)

Stage 1
Activities: Process Design provides product and process development knowledge and data for technology transfer.
Knowledge Development — Development Report:
• Development history, including criticality assessments and DoE with sources of variation
• Data and knowledge development from stability studies and development batches
• Rationale for specifications and methods
• Critical Process Parameters (CPPs)
• Critical Material Attributes (CMAs)
• Critical Quality Attributes (CQAs)
• KPPs, PARs, NORs
• Manufacturing Process Description, Equipment Train
Data Application: Technology Transfer Batches manufactured during Stage 1 are intended to establish
comparability of product quality between sites and to develop filing/market authorization data. The
Development Report summarizes activities from Stage 1.

Stage 2
Activities — most technology transfer activities in a product lifecycle are carried out at Stage 2:
• Development of Transfer Strategy
• Manufacturing of Commercial-Scale PPQ Batches
• Site Equivalency Analysis (from receiving to sending unit) under SUPAC guidelines, if applicable
• Transfer and Validation of Analytical Methods
• Confirming CPPs at Commercial Scale
• Conducting Stability Studies at Commercial Scale under Commercial Package configurations
• Confirming Risk Assessments, Criticality Analysis
• Establishing Sampling Plans and Statistical Methods at Commercial Scale
• Evaluation of Personnel Qualifications and Training
Knowledge Development — Technology Transfer Strategy:
• Product and Process Description (as designed in Stage 1 and reported in the Development Report)
• Assessment of Site Change Requirements (e.g., Post-Approval and Prior-Approval category, with rationale)
• Number of batches required to meet transfer requirements, including validation/PPQ strategy/Matrix Approach
• Specifications and Methods Transfer Plan
• Validation Plan
• Control Strategy
Data Application: Technology Transfer Batches manufactured during Stage 2 are intended to reproduce
the manufacturing process, including components and composition configurations, at the transfer site,
and to conduct PPQ. Equivalency analysis between sites is intended to compare equipment and facilities
to assure that they are equivalent and qualified for commercial manufacturing.
[Figure: knowledge hierarchy — Data builds into Information, Information into Knowledge, and
Knowledge into Process Understanding]
Knowledge management includes systems that capture review and feedback information to ensure that
correct decisions were made and to identify where process improvements can be implemented.
Sources of knowledge include, but are not limited to:
• Prior knowledge (public domain or internally documented, such as similar processes)
• Pharmaceutical development studies
• Technology transfer activities
• Process validation studies over the product lifecycle
• Manufacturing experience
• Risk assessments
• Continual improvement
• Change management activities
The concept of sustainable and continually improved knowledge systems is essential to a lifecycle
process validation program, supporting the flow of information from Stage 1 Process Design through
the later lifecycle stages. Knowledge management systems should be designed, installed, used, and
maintained. They play a
pivotal role in finding problems and preventing process shifts by providing feedback for continuous
improvement efforts (4).
Appropriate information must be acquired, used, and archived. It should be accurate, timely, and use-
ful. Information should also be properly interpreted and effectively communicated.
Information or knowledge gained in Stages 2 and 3 that can improve the process should be communi-
cated back to those responsible for process design and development. The information (including respon-
sible individuals, sampling plans, and justification) should be communicated via an appropriate tool.
Information needed to support the process validation effort should also be communicated to those
responsible for monitoring and providing feedback on commercial product manufacturing. A system
should be in place to provide feedback to those responsible for process design and development, to
confirm the accuracy of early process design assumptions, and to improve the process where possible.
When changes are made in Stages 2 and 3, they should be communicated to all affected parties in an
efficient, accurate, and timely manner. Formal Change Control procedures are a recommended and
required Quality System component (4).
Transparent interaction between teams collecting data, performing risk assessments, and transferring
information is essential to the process validation effort. Joint reviews between teams responsible for
process development, risk assessments, and data collection should be conducted throughout the life-
cycle of the process. These reviews enable the effective transfer of information from scale-up through
full-scale manufacturing batches, and help to ensure that the process operates in a reliable and predict-
able manner.
Documenting Process Design • Analytical methods were not validated for PE/demonstration batches; however, they were validated and transferred from R&D to manufacturing sites prior to stability batch Analytical methods were dependable but not validated initially since:
production at a GMP site. Factors included specificity, forced degradation, precision, linearity, LOD/LOQ, accuracy, and robustness. • The knowledge-gathering phase with experimental batches early in the lifecycle were
• Scientists were encouraged to write technical reports that summarized different aspects of the process. In general, they focused on a single unit operation, describing carried out
changes and improvements. A technical review reference document was also prepared. It summarized all of the developmental reports covering methods, ranges, condi- • Draft specs were used and case changes made in the ranges
tions, and knowledge of the entire process.
• Saving on timeline of analytical method validation at this stage
• The documents are updated each time significant process changes occur. The technical review reference document and associated specifications and procedures are filed
in a central archiving system, and are then used by manufacturing for generation of batch production records. Upon site transfer, lab analysts will be present for method validation according to internal
SOPs. These will also meet ICH Q2, USP or other regulatory or compendial standards.
Process Validation Master Developed a detailed Validation Master Plan (VMP) that identified specific studies to be performed. Individual Process Validation protocols were written for each batch. The PPQ The process validation plan was initiated prior to Stage 2 to identify supportive information
Plan batches were completed just before the expected NDA approval. needed from Stage 1. However, the formal Validation Master Plan was finalized during
• In addition to new process validation studies, the plan identified studies and appropriate references that had been executed for other projects, but would be used to support Stage 2, when all attributes, parameters, and systems were known.
this product.
Process Performance Technology Transfer and • Manufacturing, analytical, and biological procedure specs were transferred to the manufacturing site based on process evalua- The demonstration batch included verifying:
Qualification Engineering Runs tion batch results. • Solution mixing process
• Three stability batches of DP strength were produced at 10-15% of commercial batch volume. • Filling process
• Three different batches from various API suppliers were factored in (matrix approach) among all stability batches. One batch with • Sterilization process (as applicable)
the highest strength per API supplier was performed using the worst case scenario for CPP (e.g., solution hold-time). Stability
• Packaging and confirmation of finished product meeting final specifications.
studies were initiated using tank release, in-process testing, and finished product release testing assays.
• Analytical and microbiological methods were validated. Assays were performed by a dedicated stability operations group. Long For the registration, the different suppliers provided matrix; everything else in the process remained the same.
term stability studies for the aforementioned batches at 2-8° C/60% RH; 30° C/65% RH and 40° C/75%RH were initiated. For one PPQ batches verified the same process parameters and quality attributes used in the demonstration (pre-validation) batch.
batch of each strength, stability data were generated for the products, which were stored in an inverted orientation. A formal
stability plan was prepared prior to entering stability production, and was issued prior to submission. Formal stability production
protocols were issued for each code prior to stability production.
• A full-scale commercial ‘demo” batch was followed by multiple PPQ batches at a manufacturing facility for launch quantities
Process Performance • Sites for the commercial production process were identified in Stage 1. Readiness for PPQ was confirmed at ‘Stage gate’ meet- Ensure finalization of:
Qualification Readiness ings. PPQs runs took place under nominal, routine conditions. • Specifications • Previous batches done at worst case scenario
Assessment
• Previous reports (e.g., Formulation, PE) • The readiness of other items, (e.g., labeling)
• Equipment and facility qualifications
PPQ campaign
Material requirements to support commercial runs determined the number of runs for the campaign. The DP PPQ campaign consisted of 5 runs, covering 3 batches at the highest strength, 1 batch at mid-range strength, and 1 batch at the lowest strength of finished product, made at the commercial production scale. Additional sampling was performed for all the runs. All runs that met commercial release criteria could be used to support commercial supply. All PPQ batches were performed at nominal conditions.
• A hold study was performed on one of the three highest-strength product batches to establish and validate hold intervals for solution mix, fill, and hold prior to sterilization (as applicable).
• A data report of initial analysis of the variation of outputs, such as quality and performance attributes, in Stages 1 and 2 was prepared.
• Cleaning validation specific to the new process was performed concurrently with the PPQ runs.
• Solution mixing step: time, temperature, dissolved oxygen, agitator speed, solution homogeneity by drug assay, and pH (mixing validation).
• Sterile fill or terminally sterilized finished product testing: finished product assay, degradation and impurities, pH, particulates, microbial and sterility testing.
The number of batches in the entire process validation (Stages 1 and 2) was:
• 3-6 feasibility batches
• 3-6 formulation batches
• 1-2 PE batches
• 3 stability batches
• 5 PPQ batches
Data were analyzed for the total of both stages. A slight variation in the number of development and PPQ batches can depend on dosage strengths, complexity of formulation and process, and results of PE and stability batches.
Five batches provided 3 at the worst-case (most concentrated) conditions; the lower concentrations were confirmed with 2 batches.
PPQ batches were scheduled in advance of the targeted NDA submission date to allow initial data to be included in the application. In this case, 1 month of real-time and accelerated stability data was available. This led to starting the DS and DP PPQ runs 12 and 9 months, respectively, prior to the anticipated approval date. PPQ batches were thus able to be commercialized.
Stability
All batches of DP from the PPQ campaign were put into the stability program. In addition to real-time testing at the designated storage temperature, stability at accelerated conditions was performed per ICH guidelines. In addition to the primary stability data obtained during the Stage 1 runs, supportive stability data acquired during PPQ runs were also used in the submission package on an as-needed basis.
8.1.1 Average Run Length (ARL) to detect a p×100% lot failure rate
The average number of lots until the first lot failure is ARL = 1/p, where p is the lot failure rate that
is important to detect.
Example: A lot failure rate of 20% is deemed unacceptable for a given process. A lot failure rate of 20%
would be detected on average in 1/0.2 = 5 lots.
Common choices for p would be 25%, 20%, 10%, and 5%, depending on the other factors given earlier (e.g., prior knowledge, risk, production rate). Five percent (5%) would generally be the tightest value to consider, since a process running right at the Acceptance Quality Limit is still expected to have a 5% lot rejection rate. If applicable, this approach can also be used to determine the number of lots to use with tightened sampling during CPV (continued process verification). It may be particularly useful
when there are many quality attributes to assess. Rather than determine the number of lots required
separately for each attribute, the PPQ stage is complete when all attributes pass for the required num-
ber of lots.
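The ARL relationship above is simple arithmetic. As an illustrative sketch (the function name is ours, not from this report), the average number of lots until the first failing lot can be tabulated for the common choices of p:

```python
def average_run_length(p):
    """Average number of lots until the first failing lot,
    for a lot failure rate p (mean of a geometric distribution)."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return 1.0 / p

# Common choices of p discussed in the text
for p in (0.25, 0.20, 0.10, 0.05):
    print(f"p = {p:.0%}: ARL = {average_run_length(p):g} lots")
```

For example, at the 5% rate tied to the Acceptance Quality Limit, a failure would be seen on average only once every 20 lots, which illustrates why tighter values of p demand more lots.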
Example: It is desired to represent two-thirds = 67% of the between-lot variation during PPQ. The number
of lots required is nL=5 lots.
If there are no significant differences between the lots, the simplest way to deal with multiple lots is to
combine the data. ANOVA may be used to compare lot means; within-lot variation may be compared
with the Levene/Brown-Forsyth, Bartlett, Cochran, or Fmax tests (54-57). An omnibus test may also
be used. If there are no significant differences between lots or if the between-lot variance component
is not statistically significant, the standard normal tolerance interval for the combined data may be
used. The sample size per lot and number of lots should be statistically determined to have adequate
power to detect any between-lot variation.
Example: The specification for cap removal torque for a small volume parenteral (SVP) product is 8.0-12.0
inch-pounds. Limited data from Stage 1 showed a standard deviation of about 0.5. The production AQL
(Acceptance Quality Limit) for removal torque is 1.0%. The acceptance criterion for the PPQ is to show with
90% confidence that at least 99% (1 minus the AQL) of the cap torques are within specifications.
Three lots are included in the PPQ to evaluate the within- and between-lot variations. A sample size of 30
units per PPQ lot was tested to detect between-lot variation as large as the within-lot variation with 90%
confidence (58). Samples were tested from throughout each of the three lots, and the acceptance criteria for
each lot was met. An I/MR SPC chart indicated that the process was in control during each lot. Normal-
ity tests for each lot did not indicate significant non-normality. Since ANOVA and Levene’s test showed no
significant difference between the three lots, the data were combined. The 90 test results had a mean of 9.59
and standard deviation of 0.51.
A 90% confidence normal tolerance interval for 99% of the population is 9.59 ± 2.872 x 0.51 = (8.13,
11.05). This interval is within the specification limits of 8.0-12.0. Thus, the PPQ has shown with 90%
confidence that at least 99% of torque results are expected to meet specifications.
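The k-factor and interval in this example can be approximated as follows. This sketch uses Howe's approximation for the two-sided normal tolerance factor with a Wilson-Hilferty chi-square quantile; the function name is ours, and exact tabled factors may differ slightly:

```python
from math import sqrt
from statistics import NormalDist

def tolerance_k(n, coverage, confidence):
    """Approximate two-sided normal tolerance factor (Howe's method)."""
    z_cov = NormalDist().inv_cdf((1 + coverage) / 2)
    nu = n - 1
    # Wilson-Hilferty approximation to the lower chi-square quantile
    z_a = NormalDist().inv_cdf(1 - confidence)
    chi2_q = nu * (1 - 2 / (9 * nu) + z_a * sqrt(2 / (9 * nu))) ** 3
    return z_cov * sqrt(nu * (1 + 1 / n) / chi2_q)

# Cap-torque example: n = 90 results, 99% coverage, 90% confidence
k = tolerance_k(90, 0.99, 0.90)        # approximately 2.872
mean, sd = 9.59, 0.51
print(f"k = {k:.3f}, interval = ({mean - k*sd:.2f}, {mean + k*sd:.2f})")
```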
If there are statistically significant differences between lots, the tolerance interval should be construct-
ed with more advanced methods that take the between-lot variance component into account (56,57).
A potential problem with the use of SPC charts, such as Xbar/S charts plotted across lots, is that they define a process as being in statistical control only if there is no underlying lot-to-lot variation (Figure 6.2.2.1-1). This is often not the case; some lot-to-lot variation is typical and expected, especially for lot means. In these cases, an I/MR chart for the lot means and/or standard deviations, or a three-way between/within chart, can be used to detect out-of-statistical-control between-lot variation.
If there is only one test result per lot, such as lot assay or pH of a tank of solution, the 20-30 time peri-
ods become 20-30 lots. This is seldom feasible for PPQ. An alternative is to select a smaller number of
lots, perhaps 5-10, and construct a preliminary I/MR control chart. If it shows an in-control process,
the PPQ would be complete and the control chart extended into Stage 3 to verify longer term statisti-
cal control during CPV.
Example: Fill volume specification limits for a small-volume parenteral product are 98-102. PPQ acceptance
criteria are that each lot’s Ppk≥1.0; also, that the overall process Ppk is ≥1.0 with 95% confidence. To detect a
between-lot standard deviation that is half of the within-lot standard deviation with 90% confidence, 33
units will need to be tested from across each of five PPQ lots.
The data from the five lots were analyzed by control charts, histograms, normality tests, Levene’s test, and
ANOVA. These analyses indicated that the data from the five lots could be combined. Each of the five lots’
Ppks were > 1.0. The calculated Ppk from the combined data was 1.14, with a lower 95% confidence bound of 1.03. Since each lot met its Ppk requirement and the lower confidence bound for the overall process Ppk was above the acceptance limit of 1.0, the PPQ acceptance criteria were met.
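A calculation along these lines can be sketched as follows. The lower confidence bound here uses a Bissell-style normal approximation (one common choice; the report does not specify a method), and the summary statistics are hypothetical values consistent with the fill-volume setup of five lots of 33 units:

```python
from math import sqrt
from statistics import NormalDist

def ppk(mean, sd, lsl, usl):
    """Process performance index from overall mean and standard deviation."""
    return min(usl - mean, mean - lsl) / (3 * sd)

def ppk_lower_bound(ppk_hat, n, confidence=0.95):
    """Approximate one-sided lower confidence bound for Ppk
    (Bissell-style normal approximation)."""
    z = NormalDist().inv_cdf(confidence)
    se = sqrt(1 / (9 * n) + ppk_hat ** 2 / (2 * (n - 1)))
    return ppk_hat - z * se

# Hypothetical summary: 5 lots x 33 units = 165 results, specs 98-102
p = ppk(mean=100.0, sd=0.5, lsl=98.0, usl=102.0)
print(f"Ppk = {p:.2f}, 95% lower bound = {ppk_lower_bound(p, 165):.2f}")
```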
An alternative to calculating a parametric confidence interval for Ppk is to require four or five lots in a
row to each meet the Ppk acceptance criteria. For example, four PPQ lots, each with Ppk ≥ 1.0, provides
over 90% confidence that the process median Ppk is ≥1.0. Five lots provide over 95% confidence.
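The "four or five lots in a row" confidence figures follow from a sign-test argument: if the true median Ppk were below 1.0, each lot would have at most a 50% chance of showing Ppk ≥ 1.0. A quick check of that arithmetic (our own illustration):

```python
def run_confidence(n_lots):
    """Confidence that the median Ppk meets the criterion when
    n_lots consecutive lots each pass (sign-test argument)."""
    return 1 - 0.5 ** n_lots

print(f"4 lots: {run_confidence(4):.1%}")   # 93.8%, i.e., over 90%
print(f"5 lots: {run_confidence(5):.1%}")   # 96.9%, i.e., over 95%
```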
Example: To demonstrate the process is acceptable, the PPQ acceptance criterion will be to show with 90%
confidence that the process lot conformance rate (the lot pass rate) is at least 90%. A total of 22 passing lots
in a row will demonstrate this.
Requiring such a large number of lots during PPQ to reach 90% or 95% confidence is difficult. An
alternative is to use 50% confidence in PPQ and monitor the process further during CPV to reach the
final desired confidence. Crossing the 50% confidence threshold is the point at which it is more likely
that the selected lot conformance rate is met. For the example above, once 7 passing lots are reached,
it is more likely that the conformance rate is greater than 90% rather than less than 90%, and the
PPQ could be considered complete. An additional 15 lots during early CPV would reach the required
22 lots. This approach may be particularly useful when there are many quality attributes to assess.
Rather than determine the number of lots required separately for each attribute, the PPQ stage is
complete when all attributes pass for the required number of lots.
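The lot counts in this example (22 lots for 90% confidence, 7 lots for the 50% crossover) follow from the rule that n consecutive passing lots demonstrate, with confidence 1 - r^n, that the lot conformance rate exceeds r. A sketch of that calculation (the function name is ours):

```python
from math import ceil, log

def lots_needed(conf_rate, confidence):
    """Smallest number of consecutive passing lots n such that
    conf_rate ** n <= 1 - confidence, demonstrating the lot
    conformance rate exceeds conf_rate at the stated confidence."""
    return ceil(log(1 - confidence) / log(conf_rate))

print(lots_needed(0.90, 0.90))  # 22 passing lots for 90% confidence
print(lots_needed(0.90, 0.50))  # 7 passing lots to cross 50% confidence
```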
Example: A 5% lot failure rate is considered minimally acceptable, while a 25% lot failure rate is not acceptable. The SPRT decision chart below was made using α = 0.05, β = 0.2, p1 = 0.05, p2 = 0.25. The failure decision line was crossed at lot 7; the PPQ failed due to too many lot failures (3 in 7).
[Figure: SPRT decision chart. x-axis: Lot Number (1-15); y-axis: Lots Failed (0-5). The cumulative failure count (X) crosses into the FAIL region at lot 7, with the PASS region below the lower decision line.]
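The decision lines in such a chart come from Wald's sequential probability ratio test. A minimal sketch under the parameters above (α = 0.05, β = 0.2, p1 = 0.05, p2 = 0.25; the function name is ours):

```python
from math import log

ALPHA, BETA, P1, P2 = 0.05, 0.20, 0.05, 0.25
UPPER = log((1 - BETA) / ALPHA)   # crossing above -> fail
LOWER = log(BETA / (1 - ALPHA))   # crossing below -> pass

def sprt_decision(failures, lots):
    """Wald SPRT for a lot failure rate: 'fail', 'pass', or 'continue'."""
    llr = (failures * log(P2 / P1)
           + (lots - failures) * log((1 - P2) / (1 - P1)))
    if llr >= UPPER:
        return "fail"
    if llr <= LOWER:
        return "pass"
    return "continue"

print(sprt_decision(3, 7))   # fail: 3 failures in 7 lots crosses the line
print(sprt_decision(0, 7))   # pass: 7 straight passing lots
```

Note that this reproduces the example: 3 failures in 7 lots lands in the FAIL region, matching the chart.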
8.1.8 Narrow Limit Gauging
Narrow limit gauging can be used to reduce the sample size or number of lots required in PPQ. The
basic idea is to use narrowed pseudo-specification limits during PPQ to obtain more statistical power.
An example is a case in which the assay specification for an active ingredient in a solution is 95-105, and only one assay result is determined per lot. If five lots are all within narrowed limits of 97.5-102.5, this gives the same confidence in detecting the lot nonconformance rate as a larger number of lots being within the unadjusted 95-105 specification limits (see Farnum and Stanton for calculation details) (57).
One form of narrow limit gauging is called PRE-Control. It is often used as a QC procedure for fill volume. If 5 units in a row fall in the middle 50% of the specification range, then the lot is qualified for startup. This concept can be extended to quality characteristics (e.g., lot assay, pH) where there is one result per lot. For an assay specification of 95-105 for an active ingredient, the PRE-Control narrow limits would be 97.5-102.5. If five PPQ lots in a row meet the 50% narrow limits, then the PPQ is complete. Note that the narrow limits are not used to determine lot acceptance, but only to determine whether the PPQ acceptance criteria are met.
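The PRE-Control limits and the five-lots-in-a-row check described above can be sketched as follows (function names and the sample results are ours, for illustration):

```python
def narrow_limits(lsl, usl):
    """PRE-Control narrow limits: the middle 50% of the specification range."""
    quarter = (usl - lsl) / 4
    return lsl + quarter, usl - quarter

def ppq_complete(lot_results, lsl, usl, required=5):
    """PPQ complete if the last `required` consecutive lot results
    all fall within the narrow limits."""
    lo, hi = narrow_limits(lsl, usl)
    return (len(lot_results) >= required
            and all(lo <= x <= hi for x in lot_results[-required:]))

print(narrow_limits(95, 105))                                    # (97.5, 102.5)
print(ppq_complete([99.1, 100.4, 98.2, 101.0, 99.7], 95, 105))   # True
```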
Within (σw)   Between (σb)   Total (σt)   Total Variance (σt²)   Between as % of Total (σb²/σt²)
1.00          2.00           2.24         5.00                   80%
1.00          1.50           1.80         3.25                   69%
1.00          1.00           1.41         2.00                   50%
1.00          0.75           1.25         1.56                   36%
1.00          0.50           1.12         1.25                   20%
1.00          0.25           1.03         1.06                   6%
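The rows of the table above follow from the additivity of variance components (σt² = σw² + σb²; variances add, standard deviations do not). A quick sketch reproducing the first row:

```python
from math import sqrt

def total_sd(sd_within, sd_between):
    """Total standard deviation from within- and between-lot components."""
    return sqrt(sd_within ** 2 + sd_between ** 2)

def between_share(sd_within, sd_between):
    """Between-lot variance as a fraction of total variance."""
    return sd_between ** 2 / (sd_within ** 2 + sd_between ** 2)

# First row of the table: sigma_w = 1.00, sigma_b = 2.00
print(f"{total_sd(1.0, 2.0):.2f}")        # 2.24
print(f"{between_share(1.0, 2.0):.0%}")   # 80%
```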
Table 8.1.10-1 indicates that a minimum of 32 lots are required for the estimated between-lot stan-
dard deviation σb to be within ±25% of its true value σb with 95% confidence. Since the table assumes
the lot means are estimated exactly, more than 32 lots may be required if the sample size per lot is
small or there is substantial within-lot variation. Table 8.1.10-1 shows the difficulty in estimating a
standard deviation: large sample sizes are required to obtain precise estimates. Again, a phased ap-
proach could be used where the PPQ is based on five lots, and additional data is collected during CPV
to obtain a more precise estimate.
• One problem in using control charts during validation is that most only determine if the process is
in classical statistical control (Figure 6.2.2-1). If the process has the more complex form of statisti-
cal control with intra- and inter-lot variation (Figure 6.2.2-3), most charts will incorrectly indicate
that the process is “out of statistical control.” This problem is often exacerbated during validation
because of the large sample sizes used. This increases the statistical power of the control chart in
detecting small differences between subgroups or lots. The simplest solution for this problem is to
use an I/MR chart for the subgroup means and standard deviations, which will take the between-
lot variation into account. Other solutions are to use a three-way I/MR/S between/within control chart (62) or to use separate control charts for each lot, without charting across lots.
• Another problem in determining if a process is in control relates to the overall probability of find-
ing one or more subgroups beyond the 3-sigma limits. The false alarm probability of a subgroup
exceeding the 3-sigma limits or failing a runs test (if several of the commonly used runs tests are
used) is often as high as 1%. If a PPQ takes samples from 32 time periods from each of four PPQ
lots, a total of 4x32 = 128 subgroups will be plotted on each control chart. If Xbar/R charts are
used, there will be a total of 256 plotted values, each with as much as a 1% chance of indicating
“out of statistical control” even when the process is actually in statistical control. The probability
that all 256 plotted values will show a process to be in control is only 0.99^256 ≈ 0.08 = 8%. There is
only about an 8% chance that zero “out of statistical control” events will occur even though the
process is in statistical control. If acceptance criteria are used for control charts during PPQ, they
should take the number of sampled points into account and allow a statistically determined small
number of values to exceed the control limits or fail one of the runs tests to control the overall
false alarm rate.
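The 8% figure can be checked directly, and the same calculation gives the overall false alarm picture for any number of plotted points (a sketch; the 1% per-point rate is the assumption stated in the text):

```python
def prob_all_in_control(n_points, per_point_false_alarm=0.01):
    """Probability that every plotted point stays inside the limits
    when the process is truly in statistical control."""
    return (1 - per_point_false_alarm) ** n_points

# 4 PPQ lots x 32 subgroups on Xbar/R charts = 256 plotted values
print(f"{prob_all_in_control(256):.2f}")   # about 0.08
```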
In addition, the Individuals/Moving Range chart can be used for attributes data if most of the plotted
values are not 0. For very large sample sizes per period, say greater than 500 or 1,000, the I/MR chart is
usually preferred to the corresponding attributes chart. This is because the assumption of a binomial
or Poisson distribution that attributes charts use is often violated for large sample sizes and long time
periods. For example, if an automatic metal detector is used to inspect 100% of all tablets, an I/MR
control chart would be preferred to a p-chart for plotting the percent of tablets rejected across lots if
1,000 or more tablets are inspected per lot.