Open-access, peer-reviewed article

Predictors of Narrative Evaluation Quality in Undergraduate Medical Education Clerkships

2022; Lippincott Williams & Wilkins; Volume: 97; Issue: 11S; Language: English

DOI

10.1097/acm.0000000000004809

ISSN

1938-808X

Authors

Christopher J. Mooney, Amy E. Blatt, Jennifer M. Pascoe, Valerie J. Lang, Michael S. Kelly, Mélanie Braun, Jaclyn E. Burch, Robert Thompson Stone

Topic(s)

Empathy and Medical Education

Abstract

Purpose: Prior work has established validity evidence for narrative assessments¹ and suggests that constructivist–interpretivist assessment approaches provide more meaningful² and, potentially, more valid representations of trainee performance than numeric-based assessments.³ Yet narratives are frequently perceived as vague, nonspecific, and low quality.⁴ Evidence also points to consistent patterns of bias in narrative evaluations by factors including student gender and underrepresented minority status.⁵ To date, there is little research examining factors associated with narrative evaluation quality, particularly in the undergraduate medical education setting. Thus, the purpose of this work was to examine associations of faculty- and student-level characteristics with the quality of faculty members’ narrative evaluations within in-training evaluation reports.

Method: We reviewed faculty narrative evaluations of 50 randomly selected students who completed their medicine and neurology clerkships, yielding 165 unique evaluations in the neurology clerkship and 87 in the medicine clerkship. We evaluated narrative evaluation quality using the Narrative Evaluation Quality Instrument (NEQI). We used linear mixed-effects modeling to predict total NEQI score (maximum 12 points). Explanatory covariates included: time to evaluation completion, number of weeks spent with the student, faculty total weeks on service per year, total faculty years in clinical education, student gender, faculty gender, and an interaction term between student and faculty gender. Secondary analyses explored associations of the explanatory covariates with NEQI subcomponent scores: performance domains, specificity, and usefulness. The study was approved as exempt by our institutional committee on human subjects research.
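The mixed-effects specification described above can be sketched as follows. This is a generic illustration only, not the authors' actual code: all data are synthetic, the variable names merely mirror the covariates named in the abstract, and the example assumes the Python statsmodels library.

```python
# Minimal sketch of a linear mixed-effects model of total NEQI score
# with a random intercept per faculty evaluator (evaluations are nested
# within faculty). All values below are synthetic and hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 250  # roughly the number of evaluations in the study

df = pd.DataFrame({
    "faculty_id": rng.integers(0, 40, n),          # grouping variable
    "days_to_completion": rng.integers(0, 60, n),  # time to evaluation completion
    "weeks_with_student": rng.integers(1, 5, n),
    "faculty_female": rng.integers(0, 2, n),       # 1 = female faculty
    "student_female": rng.integers(0, 2, n),       # 1 = female student
})

# Synthetic outcome echoing the directions reported in the abstract:
# quality falls ~0.03 points per day; female faculty ~1.8 points higher.
df["neqi_total"] = (
    6.0
    - 0.03 * df["days_to_completion"]
    + 1.8 * df["faculty_female"]
    + rng.normal(0, 1.5, n)
).clip(0, 12)

# Fixed effects plus a student-by-faculty gender interaction;
# random intercept for each faculty member via `groups=`.
model = smf.mixedlm(
    "neqi_total ~ days_to_completion + weeks_with_student"
    " + faculty_female * student_female",
    data=df,
    groups=df["faculty_id"],
)
result = model.fit()
print(result.params[["days_to_completion", "faculty_female"]])
```

With data simulated this way, the fitted coefficient on `days_to_completion` comes out negative and the coefficient on `faculty_female` positive, matching the signs of the effects the abstract reports.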
Results: Higher narrative evaluation quality was significantly associated with shorter time to evaluation completion: NEQI scores decreased by approximately 0.3 points for every 10 days elapsed after students’ rotations (b = −0.03, P = .004). Additionally, female faculty wrote significantly higher-quality narrative evaluations, with NEQI scores 1.81 points greater than those of their male counterparts (b = 1.81, P = .012). No other covariates were significant. The pseudo-R², or estimated proportion of variance accounted for, was 0.08 within faculty (R²₁) and 0.09 between faculty (R²₂); the latter suggests the model explained about 9% of between-faculty differences in NEQI scores. Secondary analyses showed that none of the covariates predicted the performance-domain subcomponent score. For the specificity subcomponent score, only faculty gender was statistically significant (b = 0.52, P = .006). However, time to evaluation completion (b = −0.02, P < .001), time the evaluator spent with the student (b = 0.40, P = .04), and faculty gender (b = 1.02, P = .005) were all statistically significant predictors of the usefulness subcomponent score.

Discussion: We found that time to narrative evaluation completion and faculty gender were associated with overall narrative evaluation quality. Conversely, factors reflecting continuity of supervision and faculty clinical teaching experience were not. Importantly, there was no significant interaction between faculty and student gender.

Significance: Findings from this study advance understanding of ways to improve the quality of narrative evaluations, which is imperative given programmatic assessment models that will increase both the volume of and reliance on narrative assessments. Additionally, the recent elimination of the USMLE Step 2 Clinical Skills examination further increases the importance of narrative assessments as residency programs look for alternative means of discriminating between levels of trainee performance. Findings can also inform faculty development efforts to improve the quality of student evaluations by promoting processes that facilitate timely completion of narratives. Further investigation should explore the disparity in narrative quality by faculty gender and its impact on faculty professional development.
