Journal of Comparative Effectiveness Research
-
Big data holds big potential for comparative effectiveness research. The ability to quickly synthesize and use vast amounts of health data to compare medical interventions across settings of care, patient populations, payers and time will greatly inform efforts to improve quality, reduce costs and deliver more patient-centered care. However, the use of big data raises significant legal and ethical issues that may present barriers to realizing that potential. This paper addresses the scope of some of these legal and ethical issues and how they may be managed effectively so that the potential of big data can be fully realized.
-
Cluster randomized trials randomize clusters of people, rather than individuals, and are becoming increasingly common. ⋯ This article will highlight and illustrate these developments. It will also discuss issues regarding the reporting of cluster randomized trials; a purely illustrative sketch of the basic design follows below.
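As a hedged illustration of the design described above (the clinic names and code are hypothetical and not drawn from the article), the following Python sketch shows cluster-level allocation: whole clusters are randomized to trial arms, and each individual simply inherits the arm assigned to their cluster rather than being randomized separately.

import random

# Minimal, illustrative sketch of cluster-level random allocation.
# The clinics below are hypothetical; the unit of randomization is the clinic.
random.seed(2024)  # fixed seed so the example allocation is reproducible

clinics = ["clinic_A", "clinic_B", "clinic_C", "clinic_D", "clinic_E", "clinic_F"]

# Shuffle the clusters, then split them evenly between the two trial arms.
shuffled = random.sample(clinics, k=len(clinics))
half = len(shuffled) // 2
allocation = {clinic: "intervention" for clinic in shuffled[:half]}
allocation.update({clinic: "control" for clinic in shuffled[half:]})

def arm_for_patient(patient_clinic: str) -> str:
    # Individuals are not randomized separately; they take their cluster's arm.
    return allocation[patient_clinic]

print(allocation)
print(arm_for_patient("clinic_C"))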
-
Chronic conditions are the most important cause of morbidity, mortality and health expenditure in the USA. Comparative effectiveness research (CER) seeks to provide evidence supporting the relative value of alternative courses of action. This research often concludes with estimates of the likelihood of desirable and undesirable outcomes associated with each option. ⋯ In these ways, SDM and CER are interrelated. SDM translates CER into patient-centered practice, while CER provides the backbone evidence about options and outcomes in SDM interventions. In this review, we explore the potential for an SDM-CER synergy in improving healthcare for patients with chronic conditions.
-
Quasi-experiments are likely to be the workhorse study design used to generate evidence about the comparative effectiveness of alternative treatments, because of their feasibility, timeliness, affordability and external validity compared with randomized trials. In this review, we outline potential sources of discordance in results between quasi-experiments and experiments, review study design choices that can improve the internal validity of quasi-experiments, and describe innovative data linkage strategies that may be particularly useful in quasi-experimental comparative effectiveness research. There is an urgent need to resolve the debate about the evidentiary value of quasi-experiments, since equal consideration of rigorous quasi-experiments will broaden the base of evidence that can be brought to bear in clinical decision-making and governmental policy-making.
-
Traditional randomized controlled trials are the 'gold standard' for evaluating health interventions and are typically designed to maximize internal validity, often at the cost of limited generalizability. Pragmatic randomized controlled trials should be designed with a conscious effort to generate evidence with greater external validity by making the research question as similar as possible to the questions faced by clinical decision-makers (i.e., patients and their families, physicians, policy makers and administrators) and then answering that question with rigor. Clarity and transparency about the specifics of the research question are the keys to designing, as well as interpreting, any clinical trial.