Open-access, peer-reviewed article

Pediatric Emergency Medicine Asynchronous E-learning: A Multicenter Randomized Controlled Solomon Four-group Study

2014; Wiley; Volume: 21; Issue: 8; Language: English

10.1111/acem.12434

ISSN

1553-2712

Authors

Todd P. Chang, Phung K. Pham, Brad Sobolewski, Cara Doughty, Nazreen Jamal, Karen Y. Kwan, Kim Little, Timothy E. Brenkert, David J. Mathison

Topic(s)

Radiology practices and education

Abstract

Asynchronous e-learning allows for targeted teaching, particularly advantageous when bedside and didactic education is insufficient. An asynchronous e-learning curriculum has not been studied across multiple centers in the context of a clinical rotation. We hypothesize that an asynchronous e-learning curriculum during the pediatric emergency medicine (EM) rotation improves medical knowledge among residents and students across multiple participating centers. Trainees on pediatric EM rotations at four large pediatric centers from 2012 to 2013 were randomized in a Solomon four-group design. The experimental arms received an asynchronous e-learning curriculum consisting of nine Web-based, interactive, peer-reviewed Flash/HTML5 modules. Postrotation testing and in-training examination (ITE) scores quantified improvements in knowledge. A 2 × 2 analysis of covariance (ANCOVA) tested interaction and main effects, and Pearson's correlation tested associations between module usage, scores, and ITE scores. A total of 256 of 458 participants completed all study elements; 104 had access to asynchronous e-learning modules, and 152 were controls who used the current education standards. No pretest sensitization was found (p = 0.75). Use of asynchronous e-learning modules was associated with an improvement in posttest scores (p < 0.001), from a mean score of 18.45 (95% confidence interval [CI] = 17.92 to 18.98) to 21.30 (95% CI = 20.69 to 21.91), a large effect (partial η2 = 0.19). Posttest scores correlated with ITE scores (r2 = 0.14, p < 0.001) among pediatric residents. Asynchronous e-learning is an effective educational tool to improve knowledge in a clinical rotation. Web-based asynchronous e-learning is a promising modality to standardize education among multiple institutions with common curricula, particularly in clinical rotations where scheduling difficulties, seasonality, and variable experiences limit in-hospital learning.
Web-based asynchronous e-learning has become a common educational modality in undergraduate and graduate medical education. Increasingly, medical educators are turning to asynchronous e-learning as an efficient, effective method of delivering learner-centered education1 when direct patient care is insufficient.2 Learners are receptive to this type of teaching,3, 4 and there is evidence of improvements in knowledge in the small, local venues in which the modules were designed.3, 5-10

In pediatric emergency medicine (EM), asynchronous e-learning is particularly important for three reasons. First, resident trainees come from disparate disciplines (general pediatrics, adult EM, and family medicine), each with its own needs, knowledge deficits, and objectives.11 Second, different institutions provide variable teaching and patient exposure opportunities.2, 11 Third, the shift-based nature of EM poses a logistical challenge to the scheduling of synchronous education suitable to all trainees. Asynchronous e-learning provides a palatable solution for learners who are limited by work schedules or duty hours and can encompass learners with varied needs, clinical knowledge, and learning styles.5, 12 Such educational interventions have been shown to be successful at single institutions for rotating trainees in both pediatric and general EM.3, 13 However, to our knowledge no studies have examined the feasibility of a large-scale asynchronous e-learning curriculum in pediatric EM.

In addition, most studies do not address pretest sensitization, which can threaten the internal validity of education studies; pretest sensitization occurs when simply taking the pretest assessment improves knowledge, even in control groups without an intervention.14 To minimize this effect, a Solomon four-group design can be used to examine the main effects of an asynchronous e-learning intervention while accounting for expected pretest sensitization.15

We sought to develop a multi-institution, learner-centered, Web-based, asynchronous e-learning curriculum using multimedia. We then sought to evaluate its effects on medical knowledge as measured by posttests and in-training examination (ITE) scores. We hypothesized that a single multicenter e-learning curriculum was feasible and could universally improve medical knowledge among many different types of residents and students in the pediatric emergency department (ED).

This was a prospective study using the Solomon four-group design.15 Institutional review board approvals were obtained at all four institutions.
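To make the Solomon four-group analysis concrete, the following is a minimal sketch in Python of the 2 × 2 full-factorial ANCOVA on simulated data; the variable names (modules, pretested, train_year), covariate slope, and noise level are illustrative assumptions, not the study's actual data or code. A significant modules:pretested interaction in the ANOVA table would indicate pretest sensitization; its absence alongside a significant modules main effect is the pattern this study reports.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 256  # analyzed sample size reported in the study

# Simulated trainees; effect magnitudes are illustrative, not the study's data.
df = pd.DataFrame({
    "modules": rng.integers(0, 2, n),      # 1 = asynchronous e-learning arm
    "pretested": rng.integers(0, 2, n),    # 1 = took a pretest (Solomon split)
    "train_year": rng.integers(1, 5, n),   # covariate: training year
})
df["posttest"] = (18.45                      # control-arm mean from the paper
                  + 2.85 * df["modules"]     # module effect (18.45 -> 21.30)
                  + 0.8 * df["train_year"]   # assumed covariate slope
                  + rng.normal(0, 3.0, n))   # assumed residual noise

# 2 x 2 full-factorial ANCOVA per Braver and Braver: a significant
# modules:pretested interaction would signal pretest sensitization.
model = smf.ols("posttest ~ modules * pretested + train_year", data=df).fit()
aov = anova_lm(model, typ=2)
print(aov)

# Partial eta-squared for the module main effect
ss_mod, ss_err = aov.loc["modules", "sum_sq"], aov.loc["Residual", "sum_sq"]
print("partial eta^2 (modules):", round(ss_mod / (ss_mod + ss_err), 3))
```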
All four sites were free-standing tertiary-care children's hospitals across the United States with dedicated pediatric EDs staffed with residents, fellows, and board-certified attending emergency physicians (EPs). Patient volumes ranged from 66,000 to 110,000 patient visits per year. Eligible participants included any resident or fourth-year medical student rotating through the ED for a minimum of 2 weeks. Those rotating multiple times were enrolled only once.

All e-learning modules were developed, storyboarded, and published using the curriculum design process of Kern et al.16 by board-certified/board-eligible pediatric EPs from multiple institutions to accommodate regional differences in clinical management. Educational materials were chosen based on a needs assessment survey of residents at multiple institutions and matched to the American Board of Pediatrics General Pediatrics and Pediatric Emergency Medicine specifications.17, 18 Six topics were chosen for the study: three (apparent life-threatening event, pediatric upper airway obstruction, and febrile seizure) were pediatric-centric and three (ocular trauma, mammalian bites, and fractures) were EM-focused. Modules were storyboarded in Adobe Captivate 5.5 or 6 software. This allowed for maximal user interactivity and a multimedia approach with audio narration, clinical videos, photos, diagrams, and links to evidence-based articles. Instructional design was standardized using principles of cognitive load theory and multimedia learning theory,10, 19, 20 including specific photos, diagrams, and patient videos or audio clips when appropriate, with minimal written text. All modules had voiceover accompaniment by the author of the module, and they contained regularly spaced questions to bolster interactivity as recommended by Cook et al.8 Two pediatric EPs from different institutions critiqued each module on content, technical specifications, and generalizability. Institution-specific content, such as protocol management not applicable to all institutions, was omitted. The modules were also beta-tested on attending board-certified pediatric EPs across our four institutions, with feedback incorporated over a minimum of three iterative upgrades. The completed modules were published on a password-protected Web site that tracked usage statistics via embedded tracking software within the Adobe Captivate file or through a Moodle Learning Management System, depending on the institution's information technology infrastructure. Otherwise, all modules functioned identically across users, lasting approximately 20 minutes each for a total of nine modules. Examples of the modules used are provided at http://www.therotationstation.org/EmergencyMedicine.html.

A total of 60 multiple-choice questions were written based on the learning objectives for each topic and revised by group consensus over 3 months; these were divided into a 30-question multiple-choice pretest and a 30-question multiple-choice posttest. Both tests were piloted on a preliminary group of nonparticipating residents, pediatric emergency fellows, and attending physicians (n = 22), and items with poor item-total correlations were revised. The pretest underwent one revision and the posttest two revisions to achieve a Kuder-Richardson (KR-20) reliability coefficient of 0.6 or better using pilot data. An a priori decision was made to remove the three items with the lowest item-total correlations from the final analysis, yielding a corrected maximum score of 27 points for each pretest and posttest.
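As an illustration of the item analysis described above, here is a minimal sketch of KR-20 and corrected item-total correlations for binary test responses; the response matrix is fabricated, and dropping the three lowest-correlating items mirrors the study's a priori decision.

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for an (examinees x items) 0/1 matrix."""
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion correct per item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * (1 - p)).sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total score excluding that item."""
    totals = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Fabricated pilot responses: 22 examinees x 30 items, ~60% correct on average
rng = np.random.default_rng(1)
responses = (rng.random((22, 30)) < 0.6).astype(int)

# Drop the three items with the lowest corrected item-total correlations,
# mirroring the study's a priori decision, then recompute reliability.
keep = np.argsort(corrected_item_total(responses))[3:]
print(kr20(responses[:, keep]))   # KR-20 of the corrected 27-item test
```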
Participants were enrolled in a rolling fashion at each institution within the 2012–2013 academic year. The duration of participation by each institution ranged from 4 to 10 months. Stratified block randomization assigned each participant to one of four groups in a Solomon four-group design using a random number generator (http://www.random.org), stratified by specialty. That is, pediatric residents were block randomized within their specialty, EM residents were randomized separately, and so forth. Each institution randomized independently. This design had two experimental groups and two control groups15 and is summarized in Figure 1.

Participants were enrolled when starting their pediatric ED rotations and immediately randomized. Tests were administered in either paper or electronic format at each institution's discretion to accommodate administrative and technology support. We administered pretests and posttests at the beginning and end of the rotation, respectively. Experimental groups received module access for the entire rotation. Participants were informed that study participation was optional and that study completion would not affect their rotation evaluations or residency status. Up to three automated e-mail reminders were sent for incomplete pretests, posttests, or modules, but neither module nor test completion was mandated as part of the standard rotation. Participants were withdrawn from analysis if they did not complete the tests within 90 days; experimental group participants were also withdrawn if they completed zero modules. Modules were unlocked for all participants at study completion. Physicians providing usual bedside teaching were blinded to group assignments and to the enrollment status of participants.

Demographic variables, such as specialty and training year, were analyzed to compare withdrawn participants with participants included in the final sample. We also collected ITE scores from the 2012–2013 academic year when available for postgraduate residents. ITEs are practice examinations administered to trainees by their specialty's American Board and served as a source of external validity.21 All electronic data were collected by corresponding software such as SurveyMonkey, REDCap,22 or Moodle, depending on the institution. All data were deidentified at the source. Outcome variables included corrected pretest and posttest scores, number of modules completed, and ITE scores.

Statistical Package for the Social Sciences version 20 (IBM, Armonk, NY) was used for all data analyses. Initial sample size calculations were performed using G*Power 3.1.5.23 We used an analysis of covariance (ANCOVA) model assuming training year as a covariate, with power = 0.8, alpha = 0.05, and a 30% standard deviation (SD) from mean posttest scores for a 10% expected difference between groups, representing an effect size of Cohen's f = 0.2 or partial η2 = 0.04. These values reflect a predicted improvement from previous asynchronous e-learning literature,3, 13 as well as values gleaned from the pilot data. This yielded a minimum of 50 per group, for a total sample size of 200. An anticipated 40% attrition rate increased the target sample size to 280. Intent-to-treat analysis was not possible, as withdrawn participants contributed little analyzable data.
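This sample size calculation can be approximated without G*Power by working directly with the noncentral F distribution; the sketch below assumes the effect of interest has one numerator degree of freedom in the 2 × 2 ANCOVA (four cells plus one covariate), which roughly reproduces the reported 50-per-group minimum and the 40% inflation to 280.

```python
import math
from scipy import stats

def ancova_power(n_total: int, f: float = 0.2, alpha: float = 0.05,
                 df_num: int = 1, n_cells: int = 4, n_cov: int = 1) -> float:
    """Power of an ANCOVA main effect, via the noncentral F distribution."""
    df_den = n_total - n_cells - n_cov           # error degrees of freedom
    lam = f ** 2 * n_total                       # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, df_num, df_den)
    return 1 - stats.ncf.cdf(f_crit, df_num, df_den, lam)

n = 8
while ancova_power(n) < 0.80:   # smallest balanced sample reaching 80% power
    n += 4                      # step by 4 to keep the four Solomon groups balanced
print(n, round(ancova_power(n), 3))  # ~200 total, i.e. ~50 per group
print(math.ceil(n * 1.4))            # +40% anticipated attrition -> ~280 recruits
```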
Descriptive statistics and chi-square tests with standardized residuals were used to characterize demographic, enrollment, and usage data. Internal consistency of the binary-scale pretest and posttest assessments was established using KR-20, and the three items from each test with the poorest item-total correlations were deleted from the total score, yielding a possible score of 27 points for each test. This was renamed the corrected test score, and we recalculated the internal consistency coefficients for both tests.

Data from trainees were analyzed using a two-way between-groups full factorial ANCOVA in accordance with Braver and Braver's statistical approach to the Solomon four-group design.15 Training year was the covariate. This ANCOVA model could detect pretest sensitization that could spuriously affect the corrected posttest score. The two categorical independent variables in this analysis were "modules" and "pretest." We verified that the data met equality of error variances assumptions15 before setting an alpha of 0.05 for the primary ANCOVA. Effect size was expressed as partial η2 and interpreted using Huck's parameters: a partial η2 of 0.01 conferred a small effect size, 0.06 a medium effect size, and 0.14 a large effect size.24 We repeated the ANCOVA for all four specialties represented, using a Bonferroni-corrected alpha of 0.012. A one-way between-groups analysis of variance analyzed differences in corrected test scores between the four institutions. Pearson's correlations tested associations among corrected test scores, ITE scores, and number of modules used; the alpha for these analyses was conservatively set to 0.001. To seek differences between the withdrawn and enrolled populations, the posttest scores available from withdrawn participants were compared against the scores of those who completed the asynchronous e-learning.

Among 506 eligible trainees, 458 enrolled and 48 declined, most citing lack of time as the primary reason. Of the 458 participants across the four institutions, 256 completed all parts of the study. The full CONSORT diagram is shown in Figure 2. The majority of analyzed trainees were residents in general pediatrics or medicine-pediatrics (n = 173), followed by EM (n = 41), family medicine (n = 22), and fourth-year medical students (n = 20). Our sample represented a spectrum from fourth-year medical students to PGY-4 residents. A large dropout rate of 44% was noted. There was a higher withdrawal rate due to incompletion of study materials among participants randomized to use modules than among those without modules (p < 0.001). EM residents were withdrawn more than any other group (p = 0.04), but otherwise no demographic differences were found between the analyzed and the withdrawn trainees (p > 0.18). Table 1 summarizes comparisons between withdrawn and analyzed participants. The 104 remaining participants who received access to modules (groups A and C) viewed a mean (±SD) of 6.4 (±2.6) of the nine modules; 28 participants (26.9%) completed all nine modules, and an additional 25 (24%) completed eight. The final 27-point multiple-choice assessments demonstrated internal consistency with a KR-20 of 0.56 for the pretest and 0.61 for the posttest.
Our primary ANCOVA using the 27-point posttest met all required assumptions of homogeneity of variance (Levene's test, p = 0.28), normality of sampling distributions,25 linearity of training year as a covariate (r2 = 0.11, p < 0.001), and homogeneity of regression (no effect of the covariate on the independent variables; p ≥ 0.07). Our results did not change significantly with removal of two low-scoring outliers. Using our primary ANCOVA, we found no significant interaction (p = 0.75) between administration of a pretest and the modules, indicating no pretest sensitization (for full results see Data Supplement S1, available as supporting information in the online version of this paper). The use of asynchronous e-learning modules raised mean posttest corrected scores from 18.45 to 21.30 and exerted a significant main effect (p < 0.001) with a partial η2 of 0.19, corresponding to a large effect size (Table 2).24

When broken down by specialty, there was no significant pretest sensitization (p ≥ 0.30). Module usage as a main effect was significant for both pediatrics (p < 0.001) and EM residents (p = 0.004), with large effect sizes (partial η2 of 0.23 and 0.21, respectively). No significant module effects were found among family medicine residents (p = 0.33, partial η2 = 0.06) or medical students (p = 0.11, partial η2 = 0.15; Table 2). One-way between-groups ANOVA indicated that corrected posttest scores did not significantly differ between institutions (p = 0.22).

Correlations between ITE scores and corrected scores are listed in Table 3. For pediatrics residents, both pretest and posttest corrected scores were significantly associated with ITE scores. This correlation was not found with EM residents or family medicine residents. In addition, there was a correlation between the number of modules viewed (one to nine of nine) and posttest corrected score among all participants (r2 = 0.14, p < 0.001). Within group A, the delta (the increase in score from pretest to posttest) did not correlate significantly with the number of modules viewed (r2 = 0.01, p = 0.4). Among those withdrawn, 28 completed posttests, with a mean (±SD) corrected score of 17.68 (±4.47). This was significantly lower than the mean experimental group score (p < 0.001) but not significantly different from control group scores (p = 0.3).

To the best of our knowledge, our study is the first rigorous Solomon four-group, randomized controlled study evaluating the effects of asynchronous e-learning for trainees on a clinical rotation. Our data suggest positive incremental gains in knowledge using supplemental asynchronous e-learning. The curriculum also improves knowledge among pediatric and EM residents, despite baseline knowledge differences. Our study was designed to determine if pretest sensitization exists, which is a methodologic weakness of simple pretest/posttest studies.14 That is, taking a pretest gives a preview of the posttest; learners may learn directly from the pretest or alter their rotation learning habits to match the posttest, independent of the asynchronous e-learning intervention's effects. Our data show no significant pretest sensitization and instead show a large effect using a single asynchronous e-learning curriculum across four institutions. We did not find significant improvements attributable to module usage in either medical students or family medicine residents.
This is likely a result of the smaller sample sizes for these groups (n = 20 and n = 22) rather than a discordance between module content and learner skill. Branzetti et al.13 showed significant improvements in internal medicine residents rotating through an adult ED, and their sample was double ours, with only two study arms. We demonstrated a weak to moderate correlation between both pretest and posttest corrected scores and ITEs in pediatrics residents, suggesting that our tests cover content similar to that demanded in ITE questions. With only 22 or fewer participants, we were unable to conclusively demonstrate any such correlation with the ITEs in EM or family medicine residents.

Asynchronous e-learning has frequently been cited as a useful instructional method.7 It provides asynchronous education for residencies affected by strict resident duty hour restrictions.26 With decreased opportunities for didactic lectures and bedside teaching, asynchronous e-learning allows residents to learn on their own time.27, 28 Our findings are consistent with previous studies evaluating the use of asynchronous e-learning at single institutions to augment trainee knowledge. Asynchronous e-learning for medical students has been used to replace or complement didactic teaching in EM, with relative improvements in learning.3, 29, 30 In previous studies, trainees have shown a preference for computerized learning,30 and asynchronous e-learning is a promising way to allow for nonlinear and customized teaching.31

Among our included study participants, use of asynchronous e-learning resulted in a 15% increase in test scores, and there was an association between increased module usage and increased knowledge. Our data are consistent with other pediatric EM asynchronous e-learning studies using PowerPoint or augmented PowerPoint slides at single institutions, with score improvements in the literature ranging from 5.2% to 15.7%.3, 13 We have expanded the single-institution pediatric EM curriculum to a multi-institution curriculum and have shown similar improvements, despite logistic and shift schedule differences among institutions. Multicenter asynchronous e-learning studies of specific patient care elements have shown effectiveness in the literature,32-34 but our data are the first to demonstrate a curricular effect using asynchronous e-learning during a pediatric EM rotation. The knowledge gain demonstrated in this multicenter study indicates the possibility of a standardized pediatric EM curriculum for trainees across the country.

Emergency medicine faculty can find scheduling didactic teaching difficult, and asynchronous e-learning has emerged as a teaching technique that mitigates time constraints.1, 35 Online asynchronous e-learning curricula, once developed, are scalable and easy to implement at no further financial or time cost. Cook et al.9 have shown that asynchronous e-learning allows learners to spend as much time learning as with traditional methods while requiring less time from faculty. Many universities and hospitals already have hosting platforms, such as learning management systems, for educational coursework or employee safety compliance. Organizations that lack such a system can use a simple online hosting service, such as Moodle, with universal access from computers, tablets, or other mobile devices. In addition, this scalability allows sharing of a single asynchronous e-learning curriculum across multiple institutions to distribute up-front costs.
We studied modules authored by physicians at two of the four institutions; aside from research resources, there were no implementation costs for the other two institutions, as the modules were accessible at any time from any location. Maintenance of the system is minimal and can be relegated to nonfaculty in information technology or educational technology services; periodic updates to the asynchronous e-learning modules are planned every few years to reflect advancements in science, but the infrastructure is already in place.

Although we had many participants across multiple institutions, the withdrawal rate was 44%. We attribute this to a high enrollment rate, driven by the allure of asynchronous e-learning, that inflated the denominator,30 combined with a lack of consequences for study incompletion. Unlike other asynchronous e-learning studies,3, 13, 36 we deliberately chose not to mandate module usage, to simulate self-motivated learning akin to reading reference books and journals. Without consequences for incompletion, a high withdrawal rate tempers the strength of conclusions from our data. The additional comparison of withdrawn participants to the analyzed cohort shows scores similar to those of the control group but lower than those of participants who viewed the asynchronous e-learning modules. Withdrawal may have been due to technical problems, perceived lack of value, or lack of time. We addressed all technical complaints successfully but did not pursue feedback from all participants. In addition, the included participants in the experimental groups were likely self-selected for more motivated, self-disciplined learners who would naturally perform better on any assessment.37

Because all four institutions in the study were children's hospitals, EM and family medicine residents were all rotating residents. Rotating residents may feel less compelled or obligated to complete assigned tasks from outside their home institutions, which likely explains their higher withdrawal rates. On the other hand, fourth-year medical students seeking residencies are more likely to follow through and complete the study, as reflected in their low withdrawal rate. Our data are insufficient to comment on the effects of mandated asynchronous e-learning or on learners who do not complete the assignments.

Another limitation, aside from the high withdrawal rate, is the only fair internal consistency of the posttest, which threatens the reliability and validity of our assessments despite the positive correlation with ITE scores.21 We developed the questions using the five validity principles of Cook et al.21 and improved reliability through pilot testing. Reliability would have improved with more questions; Burnette et al.3 used 75 questions for their mandatory assessment compared to our 27. However, they used the same questions on the pretest and the posttest, risking pretest sensitization, and a longer test would have risked further incompletion. We chose 60 questions to ensure at least five questions per topic on each test and used different questions to minimize pretest sensitization. Our study lacked sufficient numbers in the nonpediatric resident and medical student subgroups and was not powered for subgroup analyses. Finally, we did not seek outcomes beyond test scores, and our study was not designed to detect trainee behavior changes or patient care changes. It is difficult to know whether a 15% improvement in knowledge translated to higher-level outcomes.
The literature shows no obvious correlations between test scores and clinical performance,38, 39 and only sparse asynchronous e-learning literature has shown improved clinical outcomes.40 Further study should begin to bridge the gap from improved knowledge to improved care.

An asynchronous e-learning curriculum in pediatric emergency medicine can improve medical knowledge among learners independent of clinical bedside teaching. Given our large withdrawal rate, future studies should seek to address barriers to asynchronous e-learning usage, including mandated usage and distribution of asynchronous e-learning through the home institution for rotating residents. In addition, further inquiry into how learners choose topics important to their own specialties and career paths can shed light on how a larger, national or international asynchronous e-learning curriculum can best serve our subspecialty and other health care disciplines.
