Biometrical Journal. Biometrische Zeitschrift
-
Comparative Study
Comparison of procedures to assess non-linear and time-varying effects in multivariable models for survival data.
The focus of many medical applications is to model the impact of several factors on time to an event. A standard approach for such analyses is the Cox proportional hazards model. It assumes that the factors act linearly on the log hazard function (linearity assumption) and that their effects are constant over time (proportional hazards (PH) assumption). ⋯ However, they are not sufficient to derive a model, as appropriate modelling of the shape of time-varying effects is required. In three examples, we compare five recently published strategies for assessing whether and how the effects of covariates in a multivariable model vary over time, and we conclude with recommendations for practical use.
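To make the PH assumption concrete: under proportional hazards the hazard ratio between two covariate values is constant in time, whereas a time-varying effect makes it drift. A minimal sketch, with a hypothetical log-linear-in-time effect shape (the parameter values and the `beta(t) = beta0 + beta1 * log(t)` form are illustrative assumptions, not taken from the papers compared in the abstract):

```python
import math

def log_hazard(t, x, beta0=0.5, beta1=-0.2, log_h0=0.0):
    """Log hazard with a hypothetical time-varying effect
    beta(t) = beta0 + beta1 * log(t) for covariate x.
    Setting beta1 = 0 recovers the usual PH model."""
    return log_h0 + (beta0 + beta1 * math.log(t)) * x

# Hazard ratio (x = 1 vs x = 0) at an early and a late time point:
hr_early = math.exp(log_hazard(1.0, 1) - log_hazard(1.0, 0))   # exp(0.5)
hr_late = math.exp(log_hazard(10.0, 1) - log_hazard(10.0, 0))  # close to 1
```

Under PH these two ratios would coincide; here the effect attenuates with follow-up time, the typical pattern that motivates testing and modelling time-varying effects.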
-
The receiver operating characteristic (ROC) curve is often used to assess the usefulness of a diagnostic test. We present a new method to estimate the parameters of a popular semi-parametric ROC model, called the binormal model. Our method is based on minimization of the functional distance between two estimators of an unknown transformation postulated by the model, and has a simple, closed-form solution. We study the asymptotics of our estimators, show via simulation that they compare favorably with existing estimators, and illustrate how covariates may be incorporated into the norm minimization framework.
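The binormal model referred to above posits that the ROC curve has the form ROC(t) = Φ(a + bΦ⁻¹(t)), which also yields the closed-form AUC Φ(a/√(1+b²)). A minimal sketch of the model itself (not of the authors' norm-minimization estimator; the parameter values are arbitrary for illustration):

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal: cdf and inverse cdf

def binormal_roc(t, a, b):
    """Binormal ROC curve: sensitivity at false-positive rate t."""
    return N.cdf(a + b * N.inv_cdf(t))

def binormal_auc(a, b):
    """Closed-form AUC implied by the binormal model."""
    return N.cdf(a / math.sqrt(1.0 + b * b))

# Numerical check: the area under the curve matches the closed form.
grid = [i / 20000 for i in range(1, 20000)]
auc_num = sum(binormal_roc(t, 1.2, 0.8) for t in grid) / len(grid)
```

The intercept `a` reflects the separation of the diseased and non-diseased score distributions and the slope `b` their relative spread; `b = 1` gives a symmetric ROC curve.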
-
For a non-inferiority trial without a placebo arm, the direct comparison between the test treatment and the selected positive control is in principle the only basis for statistical inference. Therefore, evaluating the test treatment relative to the non-existent placebo presents extreme challenges and requires some kind of bridging from the past to the present in the absence of current placebo data. For such inference, based partly on an indirect bridging argument, the fixed margin method and the synthesis method are the two most widely discussed methods in the recent literature. ⋯ In contrast, the synthesis method connects the historical data to the non-inferiority trial data for making broader inferences relating the test treatment to the non-existent current placebo. On the other hand, the type I error rate associated with the direct comparison between the test treatment and the active control cannot shed any light on the appropriateness of the indirect inference comparing the test treatment against the non-existent placebo. This work explores an approach for assessing the impact of potential bias due to violation of a key statistical assumption to guide determination of the non-inferiority margin.
-
In recent times, group sequential and adaptive designs for clinical trials have attracted great attention from industry, academia and regulatory authorities. These designs allow analyses of accumulating data - as opposed to classical, "fixed-sample" statistics. ⋯ First, we provide a concise overview of the essential technical concepts, with special emphasis on their interrelationships. Second, we give a structured review of the current controversial discussion on practical issues, opportunities and challenges of these new designs.
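One of the essential technical concepts behind group sequential designs is the alpha-spending function, which allocates the overall type I error rate across interim analyses. A minimal sketch of two standard spending functions (Lan-DeMets O'Brien-Fleming-type and Pocock-type; the information fractions chosen below are illustrative):

```python
import math
from statistics import NormalDist

N = NormalDist()

def obf_spending(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function:
    alpha*(t) = 2(1 - Phi(z_{1-alpha/2} / sqrt(t))) for
    information fraction t in (0, 1]. Spends very little alpha early."""
    return 2.0 * (1.0 - N.cdf(N.inv_cdf(1.0 - alpha / 2.0) / math.sqrt(t)))

def pocock_spending(t, alpha=0.05):
    """Pocock-type spending function:
    alpha*(t) = alpha * ln(1 + (e - 1) t). Spends alpha more evenly."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Cumulative alpha spent at two interim looks and the final analysis:
fractions = [0.25, 0.5, 1.0]
obf = [obf_spending(t) for t in fractions]
poc = [pocock_spending(t) for t in fractions]
```

Both functions spend the full alpha = 0.05 at information fraction 1, but the O'Brien-Fleming-type function is far more conservative at early looks, which is why it is often preferred when early stopping should require overwhelming evidence.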
-
In cluster randomized trials, intact social units such as schools, worksites or medical practices - rather than individuals themselves - are randomly allocated to intervention and control conditions, and the outcomes of interest are then observed on individuals within each cluster. Such trials are becoming increasingly common in the fields of health promotion and health services research. Attrition is a common occurrence in randomized trials, and a standard approach for dealing with the resulting missing values is imputation. ⋯ We show that cluster mean imputation yields valid inferences and, given its simplicity, may be an attractive option in some large community intervention trials which are subject to individual-level attrition only; however, it may yield less powerful inferences than alternative procedures which pool across clusters, especially when the cluster sizes are small and cluster follow-up rates are highly variable. When pooling across clusters, the imputation procedure should generally take intracluster correlation into account to obtain valid inferences; however, as long as the intracluster correlation coefficient is small, we show that standard multiple imputation procedures may yield acceptable type I error rates; moreover, these procedures may yield more powerful inferences than a specialized procedure, especially when the number of available clusters is small. Within-cluster multiple imputation is shown to be the least powerful among the procedures considered.
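Cluster mean imputation, the simplest of the procedures compared above, replaces each missing individual outcome with the mean of the observed outcomes in the same cluster. A minimal sketch (the function name and toy data are illustrative, not from the paper):

```python
def cluster_mean_impute(y, cluster):
    """Cluster mean imputation: replace each missing outcome (None)
    with the mean of the observed outcomes in the same cluster."""
    means = {}
    for c in set(cluster):
        obs = [v for v, g in zip(y, cluster) if g == c and v is not None]
        means[c] = sum(obs) / len(obs)  # assumes >= 1 observed value per cluster
    return [means[g] if v is None else v for v, g in zip(y, cluster)]

# Two clusters of three individuals, one dropout in each:
imputed = cluster_mean_impute([1, 3, None, 10, None, 14],
                              [1, 1, 1, 2, 2, 2])
# -> [1, 3, 2.0, 10, 12.0, 14]
```

Because each fill-in value comes only from its own cluster, the method trivially respects the intracluster correlation structure, which is why it remains valid without special adjustment; the price, as the abstract notes, is reduced power relative to procedures that pool information across clusters.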