Journal of Biopharmaceutical Statistics
-
In dose-finding trials of chemotherapeutic agents, the maximum tolerated dose is usually identified from toxicity information alone, under the assumption that the highest safe dose also offers the most promising outlook for efficacy. Trials of molecularly targeted agents challenge accepted dose-finding methods because minimal toxicity may arise across all doses under consideration and higher doses may not yield greater response. In this article, we propose a new early-phase method for trials investigating targeted agents. We provide simulation results illustrating the operating characteristics of our design.
-
One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years, and long-term stability studies are required to support such a shelf-life claim at registration. ⋯ In this paper, we introduce two nonparametric bootstrap procedures for shelf-life estimation based on accelerated stability testing and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than the nominal level. Our bootstrap methods give better coverage and lead to shelf-life predictions closer to those based on long-term stability data.
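The idea of a nonparametric bootstrap for shelf-life estimation can be sketched as follows. This is a simplified illustration with hypothetical single-temperature data and a linear degradation model, not the paper's two-temperature Arrhenius procedure: resample the (time, potency) pairs with replacement, refit, and take a lower percentile of the resampled shelf-life estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical long-term stability data: months on test vs. % label claim.
months = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
potency = np.array([100.2, 99.5, 99.1, 98.4, 97.9, 96.8, 95.9])

SPEC = 95.0  # lower specification limit (% label claim)

def shelf_life(t, y):
    """Time at which a fitted linear degradation line crosses SPEC."""
    slope, intercept = np.polyfit(t, y, 1)
    return (SPEC - intercept) / slope  # slope is negative, so this is positive

est = shelf_life(months, potency)

# Nonparametric bootstrap: resample (time, potency) pairs with replacement.
boot = []
n = len(months)
for _ in range(2000):
    idx = rng.integers(0, n, n)
    if len(np.unique(months[idx])) < 2:  # need two distinct times to fit a line
        continue
    boot.append(shelf_life(months[idx], potency[idx]))

# Conservative claim: a low percentile of the bootstrap distribution.
lower = np.percentile(boot, 5)
print(f"point estimate: {est:.1f} months, bootstrap 5th percentile: {lower:.1f} months")
```

The resampling step makes no distributional assumption about the errors, which is the property the abstract credits for the better coverage relative to the one-stage parametric prediction.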
-
Validation of linearity is a regulatory requirement. Although many methods have been proposed, they suffer from several deficiencies, including the difficulty of setting fit-for-purpose acceptance limits, dependence on the concentration levels used in the linearity experiment, and challenges in implementation for statistically lay users. ⋯ The method uses two one-sided tests (TOST) of equivalence to evaluate the bias that can result from approximating a higher-order polynomial response with a linear function. By using orthogonal polynomials and generalized pivotal quantity analysis, the method provides a closed-form solution, making linearity testing easy to implement.
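A minimal TOST-style check of the kind described can be sketched as follows. The data, equivalence limit, and the simple centered-quadratic design are hypothetical, and an ordinary t-based confidence interval stands in for the paper's generalized pivotal quantity analysis: linearity is concluded when the 90% confidence interval for the quadratic coefficient lies entirely within the equivalence limits.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical linearity experiment: 5 concentration levels, 3 replicates each.
conc = np.repeat([1.0, 2.0, 3.0, 4.0, 5.0], 3)
resp = 2.0 * conc + 0.5 + rng.normal(0, 0.05, conc.size)  # near-linear response

# Design matrix with centered linear and quadratic terms; for these equally
# spaced, balanced levels the columns are mutually orthogonal.
x = conc - conc.mean()
X = np.column_stack([np.ones_like(x), x, x**2 - np.mean(x**2)])

beta, res, *_ = np.linalg.lstsq(X, resp, rcond=None)
n, p = X.shape
dof = n - p
sigma2 = res[0] / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
se_b2 = np.sqrt(cov[2, 2])

delta = 0.05  # hypothetical fit-for-purpose equivalence limit on the quadratic term
alpha = 0.05

# TOST: both one-sided tests reject at level alpha exactly when the
# (1 - 2*alpha) confidence interval for beta2 lies inside (-delta, delta).
t_crit = stats.t.ppf(1 - alpha, dof)
ci = (beta[2] - t_crit * se_b2, beta[2] + t_crit * se_b2)
linear = ci[0] > -delta and ci[1] < delta
print(f"beta2 = {beta[2]:.4f}, 90% CI = ({ci[0]:.4f}, {ci[1]:.4f}), linear: {linear}")
```

Framing the decision as equivalence rather than a significance test of the quadratic term is what lets an acceptance limit be set from fitness-for-purpose considerations.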
-
The cut point of an immunogenicity screening assay is the level of response at or above which a sample is defined to be positive and below which it is defined to be negative. The Food and Drug Administration Guidance for Industry on Assay Development for Immunogenicity Testing of Therapeutic Protein Products recommends that the cut point be the upper 95th percentile of the negative-control patient responses. In this article, we assume that the assay data are a random sample from a normal distribution. ⋯ The methods evaluated for immunogenicity screening assay cut-point determination are the sample normal percentile, the exact lower confidence limit of a normal percentile (Chakraborti and Li, 2007), and the approximate lower confidence limit of a normal percentile. It is shown that the actual coverage probability of the approximate lower confidence limit is much larger than the required confidence level for the small numbers of assays conducted in practice. We recommend using the exact lower confidence limit of a normal percentile for cut-point determination.
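Two of the candidate cut points can be sketched as follows, using hypothetical negative-control data and assuming normality as in the article. The exact lower confidence limit for a normal percentile follows from the noncentral t distribution: the limit is x̄ + s·t(α; n−1, z_p√n)/√n, where z_p is the standard normal p-quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical negative-control responses (e.g., log-transformed assay signal).
n = 30
x = rng.normal(loc=1.0, scale=0.2, size=n)
xbar, s = x.mean(), x.std(ddof=1)

p, conf = 0.95, 0.95
zp = stats.norm.ppf(p)

# Plug-in cut point: the sample normal 95th percentile.
naive = xbar + zp * s

# Exact lower 95% confidence limit for the 95th percentile, via the
# noncentral t distribution with noncentrality zp * sqrt(n).
k = stats.nct.ppf(1 - conf, df=n - 1, nc=zp * np.sqrt(n)) / np.sqrt(n)
exact_lower = xbar + k * s
print(f"sample percentile: {naive:.3f}, exact lower limit: {exact_lower:.3f}")
```

The exact limit sits below the plug-in percentile, reflecting the sampling uncertainty in x̄ and s; the article's point is that the approximate version of this limit does not attain the stated confidence level as reliably when few assay runs are available.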
-
Substantial heterogeneity in treatment effects across subgroups can cause significant findings in the overall population to be driven predominantly by a single subgroup, raising concern about whether the treatment should be prescribed for the least benefited subgroup. Because of its low power, a nonsignificant interaction test can lead to incorrectly prescribing the treatment for the overall population. This article investigates the power of the interaction test and its implications, as well as the probability of prescribing the treatment to a non-benefited subgroup on the basis of a nonsignificant interaction test and other recently proposed criteria.
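The low power of the interaction test is easy to see by simulation. The setting below is hypothetical and deliberately simple (two subgroups, known unit variance, z-test on the difference of treatment effects), not the article's exact design: even a subgroup difference of half a standard deviation is missed more often than not at typical per-cell sample sizes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical trial: treatment effect 0.5 SD in subgroup A, none in subgroup B.
n_per_cell = 50                 # patients per treatment-by-subgroup cell
effects = {"A": 0.5, "B": 0.0}
n_sim = 2000

reject = 0
for _ in range(n_sim):
    diffs = {}
    for g, eff in effects.items():
        trt = rng.normal(eff, 1.0, n_per_cell)
        ctl = rng.normal(0.0, 1.0, n_per_cell)
        diffs[g] = trt.mean() - ctl.mean()
    # Interaction estimate: difference of the subgroup treatment effects.
    est = diffs["A"] - diffs["B"]
    se = np.sqrt(4 / n_per_cell)  # four cells of size n, unit variance
    if abs(est / se) > stats.norm.ppf(0.975):
        reject += 1

power = reject / n_sim
print(f"simulated power of the interaction test: {power:.2f}")
```

With these (assumed) numbers the test detects the heterogeneity well under half the time, so a nonsignificant interaction is weak evidence that the subgroups benefit equally.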