BMC Med Res Methodol
-
BMC Med Res Methodol · Oct 2015
Evaluation of a weighting approach for performing sensitivity analysis after multiple imputation.
Multiple imputation (MI) is a well-recognised statistical technique for handling missing data. As usually implemented in standard statistical software, MI assumes that data are 'missing at random' (MAR), an assumption that is implausible in many settings. Because it is not possible to distinguish between MAR and 'missing not at random' (MNAR) mechanisms using the observed data alone, it is desirable to assess the impact of departures from the MAR assumption on MI results by conducting sensitivity analyses. A weighting approach based on a selection model has been proposed for performing such MNAR analyses, to assess the robustness of results obtained under standard MI to departures from MAR. ⋯ Overall, the weighting approach is not recommended for sensitivity analyses following MI, and further research is required to develop more appropriate methods for performing such sensitivity analyses.
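To make the idea concrete, here is a minimal sketch in Python of the kind of selection-model reweighting described above. It is not the paper's implementation: the function and variable names (`mnar_weighted_estimate`, `theta_hats`, `imputed_sums`, `delta`) are illustrative, and the exponential-tilting form of the weights is one common way such a selection model is expressed. Each MAR imputation is reweighted according to how plausible its imputed values are under an assumed MNAR mechanism governed by a sensitivity parameter `delta`.

```python
import numpy as np

def mnar_weighted_estimate(theta_hats, imputed_sums, delta):
    """Approximate an MNAR estimate by reweighting MAR imputations.

    theta_hats   : per-imputation parameter estimates (one per imputed dataset)
    imputed_sums : sum of the imputed values of the incomplete variable
                   in each imputed dataset
    delta        : sensitivity parameter of the assumed selection model;
                   delta = 0 recovers the ordinary (equal-weight) MAR estimate
    """
    theta_hats = np.asarray(theta_hats, dtype=float)
    imputed_sums = np.asarray(imputed_sums, dtype=float)
    # Centre before exponentiating so the weights stay numerically stable;
    # centring does not change the normalised weights.
    log_w = delta * (imputed_sums - imputed_sums.mean())
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return float(np.sum(w * theta_hats))
```

Varying `delta` over a plausible range and checking whether the reweighted estimate would change the study's conclusions constitutes the sensitivity analysis being evaluated.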
-
BMC Med Res Methodol · Sep 2015
Impact of preconception enrollment on birth enrollment and timing of exposure assessment in the initial vanguard cohort of the U.S. National Children's Study.
The initial vanguard cohort of the U.S. National Children's Study was a pregnancy and birth cohort study that sought to enroll some women prior to pregnancy and to assess exposures early in pregnancy. ⋯ There were demographic differences among births to women enrolled preconception while trying for pregnancy, enrolled preconception while not trying for pregnancy, and enrolled during pregnancy. Time to pregnancy was shorter for women actively trying to conceive. Most women enrolled preconception did not have an exposure assessment within 30 days of conception, but their exposures were assessed much earlier in pregnancy than those of women who enrolled during pregnancy.
-
BMC Med Res Methodol · Aug 2015
A general framework for comparative Bayesian meta-analysis of diagnostic studies.
Selecting the most effective diagnostic method is essential for patient management and public health interventions, and requires evidence of the relative performance of alternative tests or diagnostic algorithms. Consequently, there is a need for diagnostic test accuracy meta-analyses that allow the comparison of the accuracy of two or more competing tests. Such meta-analyses are complicated, however, by the paucity of studies that directly compare the performance of diagnostic tests. A second complication is that the diagnostic accuracy of the tests is usually determined by comparing the index test results with those of a reference standard. These reference standards are presumed to be perfect, i.e. to classify diseased and non-diseased subjects without error. In practice, however, this assumption is rarely valid, and most reference standards produce some false positive or false negative results. When an imperfect reference standard is used, the estimated accuracy of the tests of interest may be biased, as may the comparisons between these tests. ⋯ Our proposed meta-analytic model can improve the comparison of the diagnostic accuracy of competing tests in a systematic review. This holds, however, only if the studies, and in particular the reference tests used, are reported in sufficient detail. More specifically, the type of reference test and the exact procedures used are needed, including any cut-offs applied and the number of subjects excluded from full reference test assessment. If this information is lacking, it may be better to limit the meta-analysis to direct comparisons.
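The paper's Bayesian model itself is beyond the scope of an abstract, but the bias it addresses is easy to demonstrate. The sketch below, written under the common conditional-independence assumption (index and reference tests err independently given true disease status), computes the accuracy one would appear to observe when judging an index test against an imperfect reference; the function name and the example numbers are illustrative, not taken from the paper.

```python
def apparent_accuracy(se_t, sp_t, se_r, sp_r, prev):
    """Expected 'observed' sensitivity and specificity of an index test T
    when judged against an imperfect reference R, assuming T and R err
    independently given true disease status (conditional independence).
    """
    # Cells of the expected 2x2 table cross-classifying T against R
    p_both_pos = prev * se_t * se_r + (1 - prev) * (1 - sp_t) * (1 - sp_r)
    p_ref_pos = prev * se_r + (1 - prev) * (1 - sp_r)
    p_both_neg = prev * (1 - se_t) * (1 - se_r) + (1 - prev) * sp_t * sp_r
    p_ref_neg = 1 - p_ref_pos
    return p_both_pos / p_ref_pos, p_both_neg / p_ref_neg

# A test with true Se/Sp of 0.90/0.95 appears to have Se/Sp of roughly
# 0.74/0.92 against a reference with Se/Sp of 0.85/0.95 at 20% prevalence.
print(apparent_accuracy(0.90, 0.95, 0.85, 0.95, 0.20))
```

Even a mildly imperfect reference visibly deflates the apparent accuracy here, and because the distortion depends on prevalence, comparisons between competing tests can be biased as well.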
-
BMC Med Res Methodol · Jul 2015
Ranking treatments in frequentist network meta-analysis works without resampling methods.
Network meta-analysis is used to compare three or more treatments for the same condition. Within a Bayesian framework, the probability that each treatment is best, or, more generally, that it has a certain rank, can be derived from the posterior distributions of all treatments. The treatments can then be ranked by the surface under the cumulative ranking curve (SUCRA). For comparing treatments in a network meta-analysis, we propose a frequentist analogue to SUCRA, which we call the P-score, that works without resampling. ⋯ Ranking treatments in frequentist network meta-analysis works without resampling methods. Like SUCRA values, P-scores induce a ranking of all treatments that mostly follows that of the point estimates but takes precision into account. However, neither SUCRA nor the P-score offers a major advantage over inspecting credible or confidence intervals.
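As a sketch of how such a score can be computed without resampling: for each pair of treatments, a one-sided normal tail probability that one beats the other is taken from the estimated difference and its standard error, and these probabilities are averaged over all competitors. The code below assumes the point estimates and the standard errors of the pairwise differences have already been obtained from a fitted network meta-analysis; the function name and argument layout are mine.

```python
import numpy as np
from scipy.stats import norm

def p_scores(theta, se_diff, higher_is_better=True):
    """P-scores for a frequentist network meta-analysis.

    theta   : (n,) point estimates of the treatment effects
    se_diff : (n, n) standard errors of each pairwise difference,
              as estimated by the network meta-analysis (diagonal ignored)
    Returns the mean normal-approximation probability that each
    treatment is better than each of its competitors.
    """
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    diff = theta[:, None] - theta[None, :]   # diff[i, j] = theta_i - theta_j
    if not higher_is_better:
        diff = -diff
    with np.errstate(divide="ignore", invalid="ignore"):
        z = diff / np.asarray(se_diff, dtype=float)
    p = norm.cdf(z)
    np.fill_diagonal(p, 0.0)                 # exclude self-comparisons
    return p.sum(axis=1) / (n - 1)
```

A score near 1 indicates a treatment that is almost certainly better than all its competitors; a score near 0, the reverse.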
-
BMC Med Res Methodol · Jul 2015
Efficiency of pragmatic search strategies to update clinical guidelines recommendations.
A major challenge in updating clinical guidelines is to identify new, relevant evidence efficiently. We evaluated the efficiency and feasibility of two new approaches: the development of restrictive search strategies using PubMed Clinical Queries for MEDLINE, and the use of the PLUS (McMaster Premium Literature Service) database. ⋯ The proposed restrictive approach is a highly efficient and feasible method for identifying new evidence that triggers a recommendation update. Searching only the PLUS database proved suboptimal, suggesting the need for topic-specific tailoring.
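As an illustration of what a restrictive update search might look like in practice, here is a minimal sketch using Biopython's Entrez interface to the NCBI E-utilities. This is not one of the authors' actual strategies: the guideline topic, date window, and email address are placeholders, and the methodological filter shown is the PubMed Clinical Queries narrow therapy filter.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI requires a contact address

topic = '"atrial fibrillation"[MeSH Terms]'  # hypothetical guideline topic
narrow_therapy = (
    "(randomized controlled trial[Publication Type] "
    "OR (randomized[Title/Abstract] AND controlled[Title/Abstract] "
    "AND trial[Title/Abstract]))"
)

# Restrict by both the methodological filter and the publication-date
# window since the last guideline search.
handle = Entrez.esearch(
    db="pubmed",
    term=f"{topic} AND {narrow_therapy}",
    datetype="pdat",
    mindate="2012/01/01",
    maxdate="2015/06/30",
    retmax=200,
)
record = Entrez.read(handle)
print(record["Count"], "candidate records:", record["IdList"][:10])
```

Combining a narrow methodological filter with a date restriction is what keeps the number of records to screen manageable, which is the efficiency the restrictive approach aims for.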