Academic Medicine: Journal of the Association of American Medical Colleges
-
The Medical Education Research Study Quality Instrument (MERSQI) and the Newcastle-Ottawa Scale-Education (NOS-E) were developed to appraise methodological quality in medical education research. The study's objective was to evaluate interrater reliability, establish normative scores, and assess between-instrument correlation for these two instruments. ⋯ The MERSQI and NOS-E are useful, reliable, complementary tools for appraising the methodological quality of medical education research. Interpretation and use of their scores should focus on item-specific codes rather than overall scores. Normative scores should be used for relative rather than absolute judgments because different research questions require different study designs.
-
To identify and examine the characteristics of the 50 top-cited articles in medical education. ⋯ The finding that over half of List B articles were published in non-medical-education journals is consistent with medical education's integrated nature and subspecialty breadth. Twenty of these articles were among their respective non-medical-education journals' 50 top-cited papers, showing that medical education articles can compete with subject-based articles.
-
To evaluate interns' perceived preparedness for defined surgical residency responsibilities and to determine whether fourth-year medical school (M4) preparatory courses ("boot camps") facilitate the transition to internship. ⋯ Entering surgical residency, interns report feeling unprepared to fulfill common clinical and professional responsibilities. Because M4 curricula may enhance preparation, programs facilitating the transition to residency should be developed and evaluated.
-
To compare procedure-specific checklists and a global rating scale in assessing technical competence. ⋯ Assessment using a global rating scale may be superior to assessment using a checklist for evaluating technical competence. Traditional standard-setting methods may establish checklist cut scores with too little specificity: high checklist scores did not rule out incompetence. The role of clinically significant errors in determining procedural competence should be evaluated further.