Peer-reviewed article

PD19-05 HIGH-VOLUME ASSESSMENT OF SURGICAL VIDEOS VIA CROWD-SOURCING: THE BASIC LAPAROSCOPIC UROLOGIC SKILLS (BLUS) INITIATIVE

2015; Lippincott Williams & Wilkins; Volume: 193; Issue: 4S; Language: English

DOI

10.1016/j.juro.2015.02.704

ISSN

1527-3792

Authors

Timothy M. Kowalewski, Robert Sweet, Ashleigh Menhadji, Timothy D. Averch, Geoffrey N. Box, Timothy C. Brand, Michael Fearrandino, Jihad Kaouk, Bodo E. Knudsen, Jamie Landman, Benjamin Lee, Bradley F. Schwartz, Bryan Comstock, Cory R. Schaffhausen, Elspeth M. McDougall, Thomas S. Lendvay

Topic(s)

Surgical Simulation and Training

Abstract

INTRODUCTION AND OBJECTIVES: The American Urological Association created the Basic Laparoscopic Urologic Skills (BLUS) tasks to objectively measure the technical skills of urologic trainees on a large scale. Widespread adoption of this curriculum will yield massive amounts of video data. While quantitatively rigorous, this presents a major challenge for large-scale evaluation using validated objective structured assessment tools such as the Global Objective Assessment of Laparoscopic Skills (GOALS). We sought to evaluate whether large-scale crowd-sourcing can effectively cope with such large datasets and provide statistically reliable ratings.

METHODS: A total of 454 videos of dry-lab laparoscopic tasks performed by urology trainees and experienced faculty, comprising Peg Transfer (n=110), Cutting (n=110), Intracorporeal Suturing (n=115), and Clip Apply (n=119), were recorded with the Electronic Data Generation and Evaluation (EDGE) system (Simulab Corp., Seattle, WA). We sought at least 30 Crowd-Sourced Assessment of Technical Skills (C-SATS) ratings per video via the GOALS survey tool, with an added pass/fail domain, from crowdworkers on the Amazon Mechanical Turk platform (Amazon Inc., Seattle, WA). We created a custom web-based video display and survey tool (Zoho Corp., Pleasanton, CA) that automatically uploads and manages hundreds of video ratings. We employed receiver operating characteristic (ROC) curves and the area under the curve (AUC) metric to characterize crowd scores.

RESULTS: Crowdworkers provided 16,418 complete GOALS ratings of the 454 videos in 8.7 days, averaging 78.6 ratings per hour, or 1.3 ratings per minute; each video received an average of 38 evaluations. The ROC curve generated from a faculty-derived cutoff score (11.1/20) and the crowd's pass/fail votes appears in Figure 1, along with the corresponding per-video probability of passing. A failing video is defined by a mean crowd GOALS score below 11.1 and a passing probability of less than 50%. The AUC was 99.1%, indicating strong internal consistency between the crowd's Likert-domain ratings and their overall pass/fail votes.

CONCLUSIONS: We conclude that crowd-sourcing can realize high-volume assessment of basic dry-lab laparoscopic surgical skill via blinded video review.

Figure 1. ROC curve for all crowd GOALS scores and the GOALS pass/fail question (left), and the corresponding passing probability per video (right).

Journal of Urology, Volume 193, Issue 4S, April 2015, page e393. © 2015 by American Urological Association Education and Research, Inc.
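For readers who want to see the shape of this analysis, the following Python sketch illustrates it under stated assumptions: the per-video aggregates (mean_goals, pass_fraction) and their toy values are hypothetical placeholders, not the study's data; only the 11.1/20 faculty cutoff and the failing rule (mean crowd GOALS below 11.1 and passing probability under 50%) come from the abstract. This is a minimal sketch, not the authors' actual pipeline.

    # Minimal sketch of an ROC/AUC analysis of crowd pass/fail votes
    # against a faculty-derived GOALS cutoff. All data below are
    # hypothetical; only the cutoff and failing rule come from the study.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    FACULTY_CUTOFF = 11.1  # faculty-derived GOALS cutoff (out of 20)

    # Hypothetical per-video aggregates: mean crowd GOALS score and the
    # fraction of crowdworkers who voted "pass" for that video.
    mean_goals = np.array([8.2, 10.9, 11.5, 14.3, 16.8, 9.7, 12.4, 18.1])
    pass_fraction = np.array([0.10, 0.35, 0.55, 0.80, 0.95, 0.20, 0.60, 0.99])

    # Ground-truth labels from the faculty cutoff: 1 = passing video.
    y_true = (mean_goals >= FACULTY_CUTOFF).astype(int)

    # ROC curve and AUC of the crowd's pass votes against the cutoff labels.
    fpr, tpr, thresholds = roc_curve(y_true, pass_fraction)
    auc = roc_auc_score(y_true, pass_fraction)
    print(f"AUC = {auc:.3f}")  # the study reported 99.1% on 454 videos

    # The abstract's failing rule: mean crowd GOALS below the cutoff AND
    # a passing probability under 50%.
    fails = (mean_goals < FACULTY_CUTOFF) & (pass_fraction < 0.5)
    print("Failing videos:", np.flatnonzero(fails))

Using the pass-vote fraction as the ROC score variable mirrors the abstract's comparison of the crowd's pass/fail votes against the faculty-derived cutoff; when the two crowd signals agree as closely as reported, the AUC approaches 1, consistent with the 99.1% figure.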
