BMC Med Res Methodol
-
BMC Med Res Methodol · Jan 2011
Methodological criteria for the assessment of moderators in systematic reviews of randomised controlled trials: a consensus study.
Current methodological guidelines provide advice about the assessment of sub-group analyses within RCTs, but do not specify explicit criteria for assessment. Our objective was to provide researchers with a set of criteria that will facilitate the grading of evidence for moderators in systematic reviews. ⋯ There is consensus from a group of 21 international experts that methodological criteria to assess moderators within systematic reviews of RCTs are both timely and necessary. The expert consensus resulted in five criteria, divided into two groups for synthesising evidence: confirmatory findings that support hypotheses about moderators, and exploratory findings that inform future research. These recommendations are discussed with reference to previous recommendations for evaluating and reporting moderator studies.
-
BMC Med Res Methodol · Jan 2011
Potential application of item-response theory to interpretation of medical codes in electronic patient records.
Electronic patient records are generally coded using extensive sets of codes, but the significance of how individual codes are used may be unclear. Item response theory (IRT) models are used to characterise the psychometric properties of items included in tests and questionnaires. This study asked whether the properties of medical codes in electronic patient records can be characterised by applying item response theory models. ⋯ The application of item response theory models to coded electronic patient records might help to identify medical codes that offer poor discrimination or low calibration. This might indicate a need for improved coding sets or for improved clinical coding practice. However, in this study, estimates were obtained for only a small proportion of participants, and there was some evidence of poor model fit. There was also evidence of variation in the utilisation of codes between family practices, raising the possibility that, in practice, the properties of codes may vary between coders.
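The abstract does not specify which IRT model was fitted; as a point of reference only, the standard two-parameter logistic (2PL) model characterises each item (here, each medical code) j by a discrimination parameter a_j and a difficulty or location parameter b_j, with theta_i the latent trait of patient i:
\[ P(X_{ij} = 1 \mid \theta_i) = \frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}. \]
Under such a model, a small a_j corresponds to a poorly discriminating code, while b_j places the code on the latent scale, which is the sense in which item "calibration" is usually discussed.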
-
BMC Med Res Methodol · Jan 2011
The Global Evidence Mapping Initiative: scoping research in broad topic areas.
Evidence mapping describes the quantity, design and characteristics of research in broad topic areas, in contrast to systematic reviews, which usually address narrowly focused research questions. The breadth of evidence mapping helps to identify evidence gaps and may guide future research efforts. The Global Evidence Mapping (GEM) Initiative was established in 2007 to create evidence maps providing an overview of existing research in Traumatic Brain Injury (TBI) and Spinal Cord Injury (SCI). ⋯ GEM Initiative evidence maps have a broad range of potential end-users, including funding agencies, researchers and clinicians. Evidence mapping is at least as resource-intensive as systematic reviewing. The GEM Initiative has made advancements in evidence mapping, most notably in question development and prioritisation. Evidence mapping complements other review methods for describing existing research, informing future research efforts, and addressing evidence gaps.
-
Competing risks methodology allows for an event-specific analysis of the single components of composite time-to-event endpoints. A key feature of competing risks is that there are as many hazards as there are competing risks. This is not always well accounted for in the applied literature. ⋯ There are as many hazards as there are competing risks. All of them should be analysed; this includes estimation of the baseline hazards. Study planning must account for these aspects as well.
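As a brief illustration of why every hazard matters (a standard formulation, not reproduced from the paper itself): with K competing event types, each cause k has its own cause-specific hazard
\[ \alpha_k(t) = \lim_{\Delta t \downarrow 0} \frac{P(t \le T < t + \Delta t,\; D = k \mid T \ge t)}{\Delta t}, \qquad k = 1, \dots, K, \]
and the probability of ever observing cause k, the cumulative incidence function, depends on all of the hazards through the overall survival function:
\[ F_k(t) = \int_0^t S(u^-)\,\alpha_k(u)\,\mathrm{d}u, \qquad S(t) = \exp\!\Big(-\int_0^t \sum_{k=1}^{K} \alpha_k(u)\,\mathrm{d}u\Big). \]
Reporting, say, only a hazard ratio for the event of interest therefore leaves the picture incomplete: the other cause-specific hazards and the baseline hazards are needed to recover event probabilities.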
-
BMC Med Res Methodol · Jan 2011
Comparative Study
Logistic random effects regression models: a comparison of statistical packages for binary and ordinal outcomes.
Logistic random effects models are a popular tool to analyze multilevel (also called hierarchical) data with a binary or ordinal outcome. Here, we aim to compare different statistical software implementations of these models. ⋯ On relatively large data sets, the different software implementations of logistic random effects regression models produced similar results. Thus, for a large data set there seems to be no clear preference (assuming none on philosophical grounds) for either a frequentist or a Bayesian approach (when based on vague priors). The choice of a particular implementation may largely depend on the desired flexibility and the usability of the package. For small data sets the random effects variances are difficult to estimate. In the frequentist approaches, the maximum likelihood estimate of this variance was often zero, with a standard error that was either zero or could not be determined, while for Bayesian methods the estimates could depend on the chosen "non-informative" prior for the variance parameter. The starting value for the variance parameter may also be critical for the convergence of the Markov chain.
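The model specification itself is not reproduced in this summary; for the binary case, a typical logistic random-intercept model (a common special case of the models being compared) is
\[ \operatorname{logit} P(y_{ij} = 1 \mid b_j) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + b_j, \qquad b_j \sim N(0, \sigma_b^2), \]
where i indexes observations within cluster j, \(\boldsymbol{\beta}\) are the fixed effects, and \(\sigma_b^2\) is the random-effects variance whose estimation is discussed above (zero maximum likelihood estimates in small samples, prior sensitivity in Bayesian fits). For ordinal outcomes, the analogous cumulative-logit (proportional odds) formulation replaces the single intercept with a set of ordered thresholds.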