Biometrics
-
We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193-208) to Bayesian SSD with a focus on controlling the type I error and power. ⋯ Various properties of the proposed Bayesian SSD methodology are examined, and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
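The simulation-based algorithm can be illustrated with a minimal sketch: for a candidate sample size n, data are repeatedly generated under a sampling prior and analyzed under a fitting prior, and the proportion of replicates in which the posterior probability of noninferiority clears a threshold estimates the operating characteristic. The two-arm normal model, the point-mass sampling prior, the flat fitting prior, and all names below are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from math import erf, sqrt

def bayes_ssd_power(n, n_sim=2000, margin=-0.1, threshold=0.95,
                    true_diff=0.0, sigma=1.0, seed=0):
    """Monte Carlo estimate of the probability that the trial declares
    noninferiority, i.e. that P(diff > margin | data) > threshold.

    Sampling prior (data generation): point mass at `true_diff`.
    Fitting prior (analysis): flat prior on the mean difference, so with
    sigma known the posterior of diff is N(d_hat, se^2).
    Setting true_diff = margin gives the Bayesian type I error;
    true_diff = 0 gives power under exact equivalence of the arms.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        new = rng.normal(true_diff, sigma, n)   # new-treatment arm
        std = rng.normal(0.0, sigma, n)         # standard arm
        d_hat = new.mean() - std.mean()
        se = sigma * sqrt(2.0 / n)
        # P(diff > margin | data) under the N(d_hat, se^2) posterior
        post_prob = 0.5 * (1 + erf((d_hat - margin) / (se * sqrt(2))))
        hits += post_prob > threshold
    return hits / n_sim
```

Sweeping n upward until the estimated power reaches its target, while checking that the type I error run stays controlled, yields the sample size.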
-
Minimization as an alternative to randomization is gaining popularity for small clinical trials. In response to critics' questions about the proper analysis of such a trial, proponents have argued that a rerandomization approach, akin to a permutation test with conventional randomization, can be used. However, they add that this computationally intensive approach is not necessary because its results are very similar to those of a t-test or test of proportions unless the sample size is very small. We show that minimization applied with unequal allocation causes problems that challenge this conventional wisdom.
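The rerandomization analysis the proponents describe can be sketched as follows: rerun the minimization algorithm many times on the same covariate stream and compare the observed test statistic with its rerandomization distribution. The Pocock–Simon-style rule, the 2:1 target ratio, the assignment probability, and the difference-in-means statistic below are illustrative assumptions, not a specific proposal from the article.

```python
import numpy as np

def minimize_assign(strata, ratio=(2, 1), p_best=0.8, rng=None):
    """Two-arm minimization with target allocation ratio[0]:ratio[1] and
    one stratification factor. Each arm's stratum count is divided by its
    target share before comparison, and the less-represented arm is
    chosen with probability p_best (random tie-break)."""
    if rng is None:
        rng = np.random.default_rng()
    counts = {}  # stratum -> [n_arm0, n_arm1]
    arms = np.empty(len(strata), dtype=int)
    for i, s in enumerate(strata):
        c = counts.setdefault(s, [0, 0])
        score0 = (c[0] + 1) / ratio[0]  # imbalance if arm 0 is chosen
        score1 = (c[1] + 1) / ratio[1]
        if score0 == score1:
            best = int(rng.integers(2))
        else:
            best = 0 if score0 < score1 else 1
        arm = best if rng.random() < p_best else 1 - best
        c[arm] += 1
        arms[i] = arm
    return arms

def rerandomization_pvalue(strata, y, arms_obs, n_rerand=1000, seed=1):
    """Two-sided rerandomization p-value for the difference in means:
    regenerate assignments with the same minimization rule and covariate
    stream, and locate the observed statistic in that distribution."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    arms_obs = np.asarray(arms_obs)
    def stat(a):
        return y[a == 0].mean() - y[a == 1].mean()
    obs = stat(arms_obs)
    null = [stat(minimize_assign(strata, rng=rng)) for _ in range(n_rerand)]
    return float(np.mean([abs(s) >= abs(obs) for s in null]))
```

Because the reference distribution is generated by the actual allocation rule, this test remains valid in exactly the unequal-allocation settings where the t-test approximation is questioned.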
-
Clinical studies often periodically record information on disease progression as well as laboratory results that are believed to reflect the progressing stages of the disease. A primary aim of such a study is to determine the relationship between the laboratory measurements and disease progression. If there were no missing or censored data, these analyses would be straightforward. ⋯ In this article, we propose a simple test for the association between a longitudinal marker and an event time from incomplete data. We derive the test using an intuitive technique: calculating the expected complete-data score conditional on the observed incomplete data (the conditional expected score test, CEST). The problem was motivated by data from an observational study of patients with diabetes.
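The conditioning idea can be illustrated with a deliberately simplified sketch: form the complete-data score for the association parameter, then replace each unobserved quantity by a conditional expectation given what was observed. The exponential model, the use of the observed-marker mean as the stand-in conditional expectation, and all names below are illustrative assumptions; the paper's CEST handles the longitudinal and censoring structure far more carefully.

```python
import numpy as np

def cest_statistic(x, t, d):
    """Toy conditional-expected-score statistic (not the paper's exact
    CEST): score test of beta = 0 in an exponential model
    T ~ Exp(lam * exp(beta * x)), where missing marker values (NaN in x)
    are replaced by a conditional expectation -- here simply the mean of
    the observed markers. Returns the standardized score, approximately
    N(0, 1) under no marker-event association.

    x : marker values (NaN where unobserved)
    t : follow-up times, d : event indicators (1 = event, 0 = censored)
    """
    x = np.asarray(x, float)
    t = np.asarray(t, float)
    d = np.asarray(d, float)
    x_filled = np.where(np.isnan(x), np.nanmean(x), x)  # E[x | observed] stand-in
    lam = d.sum() / t.sum()             # MLE of the rate at beta = 0
    xc = x_filled - x_filled.mean()     # center to profile out lam
    u = np.sum(xc * (d - lam * t))      # expected complete-data score
    info = np.sum(xc**2 * lam * t)      # information approximation
    return u / np.sqrt(info)
```

The appeal of the construction is that the familiar complete-data score is retained and only the missing pieces are averaged over, rather than discarding incomplete subjects.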
-
Recurrent event data analyses are usually conducted under the assumption that the censoring time is independent of the recurrent event process. In many applications, however, the censoring time can be informative about the underlying recurrent event process, especially when a correlated failure event could terminate the observation of recurrent events. In this article, we consider a semiparametric model for recurrent event data that accommodates correlation between the censoring time and the recurrent event process via a frailty. ⋯ Large-sample properties of the regression parameter estimates and the estimated baseline cumulative intensity functions are studied. Numerical studies demonstrate that the proposed methodology performs well for realistic sample sizes. An analysis of hospitalization data for patients in an AIDS cohort study is presented to illustrate the proposed method.
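The frailty mechanism can be sketched with a small simulation: a shared subject-level frailty multiplies both the recurrent-event intensity and the hazard of the terminating event, so high-frailty subjects accrue events faster and are censored earlier, making the censoring informative. The gamma frailty, the Poisson recurrence model, and all parameter names below are illustrative assumptions, not the paper's estimation procedure.

```python
import numpy as np

def simulate_recurrent(n=2000, base_rate=1.0, term_rate=0.5,
                       theta=0.25, seed=0):
    """Simulate recurrent events with a gamma frailty Z (mean 1,
    variance theta) that multiplies BOTH the recurrent-event intensity
    and the hazard of the terminating/censoring event, inducing
    dependence between follow-up length and the event process.
    Returns per-subject (event_count, followup_time)."""
    rng = np.random.default_rng(seed)
    z = rng.gamma(1 / theta, theta, n)                # frailty: E[Z]=1, Var[Z]=theta
    followup = rng.exponential(1.0 / (term_rate * z)) # frailty-dependent termination
    counts = rng.poisson(base_rate * z * followup)    # events over follow-up
    return counts, followup
```

A quick check of the simulated data shows the informative-censoring signature: subjects censored early exhibit a higher event rate than those followed longer, which is exactly the dependence an independent-censoring analysis would miss.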