Alec Fufidio

Analytical Method Validation: The Rationale for the Rationale

Analytical methods are the interface between the theoretical and the practical. They are how we measure and derive quantifiable information about our processes and their final results. We validate our methods to ensure they are suitable for their intended purpose and yield accurate and precise results[1].

 

Analytical method validation demonstrates the performance characteristics of the method, which include aspects such as accuracy, precision, linearity, and range. While not all methods are required to demonstrate the same performance characteristics, every performance characteristic must have predetermined acceptance criteria prior to initiating validation. These acceptance criteria are the numerical limits, ranges, or other suitable measures for acceptance of the results of analytical procedures[2]. They are a statement of the tolerance we are willing to accept in our methods[3]. If the acceptance criterion for accuracy is 80-120% recovery, then we are stating that we will accept up to a 20% variance in our results.
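As a rough illustration (a minimal Python sketch, not drawn from any guidance document), an 80-120% recovery criterion can be translated into the range of reported results we are agreeing to tolerate for a sample of known true content:

```python
def tolerated_result_range(true_value, recovery_low=0.80, recovery_high=1.20):
    """Range of reported results a recovery acceptance criterion tolerates.
    Recovery = measured / true, so measured = true * recovery."""
    return true_value * recovery_low, true_value * recovery_high

# For a sample whose true content is 10 ppm, a passing result may
# fall anywhere in this range:
low, high = tolerated_result_range(10.0)
print(f"{low:.1f} - {high:.1f} ppm")  # 8.0 - 12.0 ppm
```

The helper name and the 10 ppm figure are illustrative only; the point is that the criterion itself defines how far a passing result may sit from the truth.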

 

Frequently, companies set acceptance criteria without adequate rationale, setting the bar so low that passing is a foregone conclusion. In their haste to produce a validated method, they neglect that these acceptance criteria must include justification[4]. Without justification, the acceptance criteria are incomplete and may crumble if pressed. It only takes one simple question from an inspector to create panic: "Where did the acceptance criteria come from?" If your acceptance criteria are found to be inadequately justified, your method is not suitable for its intended purpose and cannot be considered valid[5].

 

There are two opposing forces at work when creating acceptance criteria. A high degree of capability in method performance (e.g., a very accurate method) drives narrow acceptance criteria; wide acceptance criteria are driven by the desire for a successful validation. If we observe consistent performance throughout development, we might be tempted to set our acceptance criteria at the highest variability observed during that limited number of trials. However, this might not encompass the full degree of variability that is possible once the method is introduced to routine use, so we want to allow for a degree of unexpected variability. A balance between the two factors is therefore imperative when setting acceptance criteria[3]. If we set our criteria too narrow, we risk failing the validation; if we set them too wide, we risk using unsuitable methods.

 

So, how do we generate good acceptance criteria that include rationale? Unfortunately, there is no simple mathematical equation for this. It is a complex interplay of method performance, method specifications, and risk.

 

Method performance, with data points gathered from the method development stage or from a similar method that is already validated, is a strong starting point when setting acceptance criteria. Observations from method development give a good indication of how the method performs against characteristics such as accuracy or precision. These observations should then be used to set acceptance criteria from the narrow side.

 

The method specifications then elucidate the most critical aspects of the method: what are we trying to measure, and what is our desired outcome? For example, for a specification of less than 0.10 ppm of an impurity, we can rationally set our precision acceptance criterion quite wide, as a minuscule variance in measured value will have a large impact on the percent RSD. Contrast that with a release specification of 10 mg/mL, where the analyte is abundant in the sample. Here our acceptance criteria should be narrow, as the impact of variance is buffered by the magnitude of the measurement.
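The effect of measurement magnitude on %RSD can be shown with a small Python sketch. The replicate values below are hypothetical: both sets have the same absolute spread, but the trace-level set sits near a 0.10 ppm spec while the abundant set sits near 10 mg/mL:

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Hypothetical replicates with the same absolute spread (+/- 0.01 units)
trace = [0.09, 0.10, 0.11]       # near a 0.10 ppm impurity spec
abundant = [9.99, 10.00, 10.01]  # near a 10 mg/mL release spec

print(f"trace %RSD:    {percent_rsd(trace):.1f}%")     # 10.0%
print(f"abundant %RSD: {percent_rsd(abundant):.1f}%")  # 0.1%
```

The identical absolute variability produces a 100-fold difference in %RSD, which is why a wider precision criterion can be rational at trace levels.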

 

That leaves risk: the most important factor, and yet the least defined. The risk in the method's variance is ultimately up to you to decide. Critically, the rationale for our acceptance criteria should justify the risk we are willing to accept. To elaborate on our initial example of setting the accuracy acceptance criterion at 80-120% recovery, let's put it into a hypothetical scenario.

 

Suppose we have an HPLC cleaning residue detection method with a specification of no more than 10 ppm, and we can expect typical HPLC accuracy to be within 10% of the true value, based on method development and previous validations. For a standard with a concentration of exactly 10 ppm, our measurements will give us a result between 9 and 11 ppm. Given the variance of swab recovery from field samples, a wider tolerance for accuracy may be justified, typically 80-120%.

In this scenario, if our acceptance criterion is 80-120% recovery and our specification is 10 ppm, we are stating that we are confident our result is between 8 and 12 ppm. This is where risk comes into play. If our method measures low on a sample that is at our limit, we may have a failing sample with a passing result. If there are controls in place to flag results close to the limit or out of trend, these acceptance criteria are justifiable. If there are no such controls, the risk may not be worth taking.

If our acceptance criterion for carryover is 10 ppm but our maximum allowable carryover (MACO) is 15 ppm, the risk could be justified. However, if these worst-case results (e.g., 12 ppm) could exceed our MACO limit, and thus have possible patient safety impact, this risk would be unacceptable and the acceptance criteria would need to be tighter. These considerations must be taken into account when setting your method acceptance criteria. Failure to do so could result in deviations, failed validations, patient safety issues, or regulatory actions.
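The arithmetic behind this risk check can be sketched in a few lines of Python. The 10 ppm specification, 20% accuracy tolerance, and 15 ppm MACO are the example figures from the scenario above, not values from any guidance:

```python
def worst_case_true_residue(measured_ppm, accuracy_tolerance=0.20):
    """If the method may read up to 20% low, a result at the limit
    may understate the true residue by that fraction."""
    return measured_ppm * (1 + accuracy_tolerance)

spec_ppm = 10.0  # cleaning residue specification
maco_ppm = 15.0  # maximum allowable carryover (hypothetical)

worst_case = worst_case_true_residue(spec_ppm)
if worst_case <= maco_ppm:
    print(f"worst case {worst_case:.1f} ppm <= MACO {maco_ppm:.1f} ppm: risk may be justifiable")
else:
    print(f"worst case {worst_case:.1f} ppm exceeds MACO {maco_ppm:.1f} ppm: tighten the criteria")
```

With these figures the worst-case true residue is 12 ppm, which still sits below the 15 ppm MACO; had the MACO been at or below 12 ppm, the same acceptance criteria would be indefensible.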

 

References

1. ICH. (2023). Q2(R2): Validation of Analytical Procedures.

2. ICH. (1999). Q6A: Specifications: Test Procedures and Acceptance Criteria for New Drug Substances and New Drug Products: Chemical Substances.

3. PDA. (2012). Technical Report No. 57: Analytical Method Validation and Transfer for Biotechnology Products.

4. FDA. (2015). Analytical Procedures and Methods Validation for Drugs and Biologics: Guidance for Industry.

5. USP. (2017). General Chapter <1225>: Validation of Compendial Procedures.
