Journal of Biopharmaceutical Statistics
-
One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf-life claim at registration. ⋯ In this paper, we introduce two nonparametric bootstrap procedures for shelf-life estimation based on accelerated stability testing and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap methods gave better coverage and led to shelf-life predictions closer to those based on long-term stability data.
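For intuition, here is a minimal sketch of a nonparametric residual bootstrap for shelf-life estimation from accelerated data, assuming zero-order potency loss with an Arrhenius rate; this is not the authors' exact procedure, and all data, parameter values, and the 95%-of-label specification limit are hypothetical.

```python
# A minimal sketch, assuming zero-order potency loss with an Arrhenius
# temperature dependence; NOT the paper's exact procedure. All data,
# parameter values, and the 95%-of-label spec limit are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

R_GAS = 8.314  # gas constant, J/(mol*K)

def model(X, p0, logA, Ea):
    """Potency(t, T) = p0 - k(T)*t, with rate k(T) = exp(logA - Ea/(R*T))."""
    t, T = X
    return p0 - np.exp(logA - Ea / (R_GAS * T)) * t

# Simulated accelerated data: 5 time points (months) at 25, 40, and 50 C.
times = np.tile([0.0, 3.0, 6.0, 9.0, 12.0], 3)
temps = np.repeat([298.15, 313.15, 323.15], 5)
rng = np.random.default_rng(1)
potency = model((times, temps), 100.0, 30.5, 8.0e4) + rng.normal(0, 0.3, 15)

def shelf_life(params, T=298.15, spec=95.0):
    """Time at which the mean degradation curve crosses the spec limit."""
    p0, logA, Ea = params
    return (p0 - spec) / np.exp(logA - Ea / (R_GAS * T))

fit, _ = curve_fit(model, (times, temps), potency, p0=(100.0, 30.0, 7.5e4))
resid = potency - model((times, temps), *fit)

# Nonparametric bootstrap: resample residuals, refit, recompute shelf life.
boot = []
for _ in range(1000):
    y_star = model((times, temps), *fit) + rng.choice(resid, resid.size)
    b, _ = curve_fit(model, (times, temps), y_star, p0=fit)
    boot.append(shelf_life(b))

# A one-sided lower percentile gives a conservative shelf-life estimate.
print(f"point estimate: {shelf_life(fit):.1f} months; "
      f"5th percentile: {np.percentile(boot, 5):.1f} months")
```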
-
Comparative Study
Dissolution curve comparisons through the F2 parameter, a Bayesian extension of the f2 statistic.
Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is to justify a biowaiver for post-approval changes, which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. ⋯ Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable, and in many cases superior, to the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
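For reference, the classical f2 similarity statistic that the F2 parameter generalizes is computed as below; the dissolution profiles are hypothetical, and f2 >= 50 is the conventional similarity criterion (roughly no more than a 10% average difference between curves).

```python
# The standard f2 similarity factor:
#   f2 = 50 * log10( 100 / sqrt(1 + (1/n) * sum_t (R_t - T_t)^2) )
# where R_t and T_t are mean percent dissolved at time point t.
# The profiles below are hypothetical.
import numpy as np

def f2(ref, test):
    """f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Mean percent dissolved at common time points (e.g. 10, 20, 30, 45 min).
reference = [35, 58, 74, 89]
test_prod = [30, 52, 70, 86]
print(f"f2 = {f2(reference, test_prod):.1f}")  # >= 50 suggests similarity
```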
-
Single-arm studies are typically used in phase II clinical trials, where the main objective is to determine whether a new treatment warrants further testing in a randomized phase III trial. The introduction of randomization in phase II, to avoid the limitations of studies based on historical controls, is a critical issue widely debated in the recent literature. We use a Bayesian approach to compare single-arm and randomized studies based on a binary response variable, in terms of their ability to reach the correct decision about the new treatment, both when it performs better than the standard one and when it is less effective. We evaluate how the historical control rate, the total sample size, and the elicitation of the prior distributions affect the decision about which study performs better.
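As a rough illustration of the kind of Bayesian decision rule being compared, conjugate Beta-Binomial updating makes both designs easy to evaluate; the priors, sample sizes, historical control rate, and 0.95 threshold below are assumptions for illustration, not the authors' specification.

```python
# A minimal Beta-Binomial "go/no-go" sketch; priors, sample sizes,
# historical rate, and the 0.95 threshold are assumptions, not the
# authors' specification.
import numpy as np
from scipy.stats import beta

p_hist = 0.20          # historical control response rate
a0, b0 = 0.5, 0.5      # Jeffreys priors for both arms

# Single-arm design: compare the new treatment's posterior to p_hist.
n_new, resp_new = 40, 13
post_new = beta(a0 + resp_new, b0 + n_new - resp_new)
print(f"single-arm: P(p_new > {p_hist}) = {post_new.sf(p_hist):.3f}")

# Randomized design: the control rate is itself estimated from concurrent
# data, so compare the two posteriors by Monte Carlo.
n_ctrl, resp_ctrl = 20, 4
post_ctrl = beta(a0 + resp_ctrl, b0 + n_ctrl - resp_ctrl)
rng = np.random.default_rng(2)
d_new = post_new.rvs(100_000, random_state=rng)
d_ctrl = post_ctrl.rvs(100_000, random_state=rng)
print(f"randomized: P(p_new > p_ctrl) = {(d_new > d_ctrl).mean():.3f}")
# A "go" decision might require the posterior probability to exceed 0.95.
```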
-
Here, we developed a new dose-finding method for two-agent combination phase I trials that partitions a cohort of patients based on the number of dose combinations falling within a prespecified acceptable toxicity range. In the proposed method, patients in the same cohort are allocated across several dose combinations, whereas most existing methods assign all patients in a cohort to a single dose combination. We compared the operating characteristics of the proposed and existing methods through simulation studies.
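The core allocation idea can be sketched as follows; the toxicity estimates, acceptable range, and even splitting rule below are purely illustrative assumptions, and the authors' actual partitioning rule may differ.

```python
# Purely illustrative: identify dose combinations whose estimated toxicity
# probability falls in a prespecified acceptable range, then split the
# cohort across them rather than assigning everyone to one combination.
# All numbers are hypothetical; this is not the paper's exact rule.
import numpy as np

tox_est = np.array([[0.05, 0.12, 0.22],   # estimated DLT probability for
                    [0.10, 0.18, 0.30],   # each (agent A, agent B) dose pair
                    [0.16, 0.28, 0.45]])
lo, hi = 0.16, 0.33                       # acceptable toxicity range (assumed)
cohort_size = 6

acceptable = np.argwhere((tox_est >= lo) & (tox_est <= hi))
base, extra = divmod(cohort_size, len(acceptable))
for k, (i, j) in enumerate(acceptable):
    n_pat = base + (1 if k < extra else 0)
    print(f"dose pair (A{i+1}, B{j+1}): {n_pat} patients")
```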
-
Validation of linearity is a regulatory requirement. Although many methods have been proposed, they suffer from several deficiencies, including difficulty in setting fit-for-purpose acceptance limits, dependency on the concentration levels used in the linearity experiment, and challenges in implementation for statistically lay users. ⋯ The method uses two one-sided tests (TOST) of equivalence to evaluate the bias that can result from approximating a higher-order polynomial response with a linear function. By using orthogonal polynomials and generalized pivotal quantity analysis, the method provides a closed-form solution, making linearity testing easy to implement.
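To make the idea concrete, here is a minimal sketch that fits a second-order orthogonal polynomial and applies TOST to the quadratic coefficient, declaring linearity if its contribution is equivalent to zero; the data and equivalence bound are hypothetical, and the paper's generalized-pivotal-quantity closed form is not reproduced here.

```python
# A minimal TOST-for-linearity sketch: fit an orthogonal quadratic term and
# test whether it is equivalent to zero. The data and the equivalence bound
# delta are hypothetical; this is not the paper's GPQ-based solution.
import numpy as np
from scipy import stats

conc = np.repeat([20.0, 40.0, 60.0, 80.0, 100.0], 3)  # % of target conc.
rng = np.random.default_rng(3)
resp = 0.01 * conc + rng.normal(0, 0.004, conc.size)  # near-linear response

# Orthonormal polynomial design matrix (degree 2) via QR decomposition;
# orthonormal columns mean each coefficient has variance sigma^2.
V = np.vander(conc, 3, increasing=True)
Q, _ = np.linalg.qr(V)                  # columns: constant, linear, quadratic
coef, res_ss, *_ = np.linalg.lstsq(Q, resp, rcond=None)
dof = conc.size - 3
se_quad = np.sqrt(res_ss[0] / dof)      # std. error of the quadratic coef.

# TOST: reject H0 of non-equivalence if both one-sided tests reject,
# i.e. if the larger of the two p-values is below alpha.
delta = 0.01                            # equivalence bound (assumed)
t_lo = (coef[2] + delta) / se_quad
t_hi = (coef[2] - delta) / se_quad
p_tost = max(stats.t.sf(t_lo, dof), stats.t.cdf(t_hi, dof))
print(f"quadratic coef = {coef[2]:.4f}, TOST p = {p_tost:.3f} "
      f"(declare linear if p < 0.05)")
```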