BMC Med Res Methodol
-
BMC Med Res Methodol · Jun 2006
Circular instead of hierarchical: methodological principles for the evaluation of complex interventions.
The reasoning behind evaluating medical interventions is that a hierarchy of methods exists which successively produces more rigorous, and therefore more reliable, evidence upon which to base clinical decisions. At the foundation of this hierarchy are case studies and retrospective and prospective case series, followed by cohort studies with historical and concomitant non-randomized controls. Open-label randomized controlled trials (RCTs) and, finally, blinded, placebo-controlled RCTs, which offer the greatest internal validity, are considered the most reliable evidence. Rigorous RCTs minimize bias, and evidence from RCTs forms the basis of meta-analyses and systematic reviews. This hierarchy, founded on a pharmacological model of therapy, is generalized to other interventions that may be complex and non-pharmacological (e.g., healing, acupuncture, and surgery). ⋯ Instead of an evidence hierarchy, we propose a Circular Model. This would imply a multiplicity of methods, using different designs, counterbalancing their individual strengths and weaknesses, to arrive at pragmatic but equally rigorous evidence which would provide significant assistance in clinical and health systems innovation. Such evidence would better inform national health care technology assessment agencies and promote evidence-based health reform.
-
BMC Med Res Methodol · Jun 2006
Comparative Study
Does updating improve the methodological and reporting quality of systematic reviews?
Systematic reviews (SRs) must be of high quality. The purpose of our research was to compare the methodological and reporting quality of original versus updated Cochrane SRs to determine whether updating had improved these two quality dimensions. ⋯ The overall quality of Cochrane SRs is fair to good. Although reporting quality improved on certain individual items, there was no overall improvement with updating, and methodological quality remained unchanged. There is room for improvement in both reporting and methodological quality. Authors updating reviews should address identified methodological and reporting weaknesses, and we recommend giving full attention to both quality domains when updating SRs.
-
BMC Med Res Methodol · Mar 2006
Reproducibility of the STARD checklist: an instrument to assess the quality of reporting of diagnostic accuracy studies.
In January 2003, the STAndards for the Reporting of Diagnostic accuracy studies (STARD) statement was published in a number of journals to improve the quality of reporting of diagnostic accuracy studies. We designed a study to investigate the inter-assessment reproducibility, and the intra- and inter-observer reproducibility, of the items in the STARD statement. ⋯ Although the overall reproducibility of assessing the quality of reporting of diagnostic accuracy studies using the STARD statement was good, substantial disagreements were found for specific items. These disagreements were caused not so much by differences in the reviewers' interpretation of the items as by difficulties in assessing the reporting of those items owing to a lack of clarity within the articles. Including a flow diagram in all reports of diagnostic accuracy studies would be very helpful in reducing confusion between readers and among reviewers.
-
BMC Med Res Methodol · Jan 2006
Does a "Level I Evidence" rating imply high quality of reporting in orthopaedic randomised controlled trials?
The Levels of Evidence Rating System is widely believed to categorize studies by quality, with Level I studies representing the highest-quality evidence. We aimed to determine the reporting quality of randomised controlled trials (RCTs) published in the most frequently cited general orthopaedic journals. ⋯ Our findings suggest that readers should not assume that 1) studies labelled Level I have high reporting quality, or that 2) Level I studies have better reporting quality than Level II studies. Methodological safeguards should be assessed individually.
-
BMC Med Res Methodol · Jan 2006
Reviewer agreement trends from four years of electronic submissions of conference abstracts.
The purpose of this study was to determine the inter-rater agreement between reviewers on the quality of abstract submissions to an annual national scientific meeting (Canadian Association of Emergency Physicians; CAEP) and to identify factors associated with low agreement. ⋯ The correlation between reviewers' total scores suggests general recognition of "high-quality" and "low-quality" abstracts. Criteria based on the presence or absence of objective methodological parameters (e.g., blinding in a controlled clinical trial) resulted in higher inter-rater agreement than the more subjective, opinion-based criteria. In future abstract competitions, defining criteria more objectively so that reviewers can base their responses on empirical evidence may increase the consistency of scoring and, presumably, fairness to submitters.
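The abstract does not state which agreement statistic was used; a common chance-corrected choice for two raters assigning categorical scores is Cohen's kappa. A minimal sketch, assuming pairwise Cohen's kappa on categorical ratings (the function name, labels, and data are illustrative, not from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two reviewers rating ten abstracts
a = ["accept", "accept", "reject", "accept", "reject",
     "accept", "reject", "reject", "accept", "accept"]
b = ["accept", "reject", "reject", "accept", "reject",
     "accept", "reject", "accept", "accept", "accept"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Kappa of 0 means agreement no better than chance and 1 means perfect agreement, which is why chance-corrected statistics are preferred over raw percent agreement when marginal rating frequencies are skewed.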