
Reproducible Artificial Intelligence Research Requires Open Communication of Complete Source Code
2020; Radiological Society of North America; Volume 2, Issue 4; Language: English
10.1148/ryai.2020200060
ISSN: 2638-6100
Authors: Felipe C. Kitamura, Ian Pan, Timothy L. Kline
Topic(s): AI in cancer detection
Radiology: Artificial Intelligence, Vol. 2, No. 4. Letters to the Editor (Free Access).

Felipe C. Kitamura*, Ian Pan†, Timothy L. Kline‡

Author affiliations: *Department of Diagnostic Imaging, Universidade Federal de São Paulo, Rua Napoleão de Barros 800, Vila Clementino, São Paulo, SP 04024-002, Brazil; Head of AI at DASA. †Warren Alpert Medical School, Brown University, Providence, RI. ‡Department of Radiology, Mayo Clinic, Rochester, Minn. E-mail: [email protected]

Published online: July 29, 2020

Editor:

In the March 2020 issue of Radiology: Artificial Intelligence, Drs Mongan, Moy, and Kahn proposed a thorough checklist, the Checklist for Artificial Intelligence in Medical Imaging (CLAIM), to aid authors and reviewers in evaluating artificial intelligence (AI) manuscripts (1). CLAIM was modeled after STARD (Standards for Reporting of Diagnostic Accuracy Studies) and extends it to address applications of AI in medical imaging.

The checklist presents a thoughtful approach, including the importance of depositing all the "computer code used for modeling and/or data analysis into a publicly accessible repository" (1). Unfortunately, this element receives very little emphasis. We believe that open communication of source code is of the utmost importance and that this component should be highlighted and expanded upon in any checklist for AI research in medical imaging. AI research that is not readily usable, reproducible, replicable (2), or transparent is essentially useless to other researchers.
Access to working code and models is required to make this type of research actionable for others. We propose the use of the Machine Learning Code Completeness Checklist, part of the official Neural Information Processing Systems (NeurIPS) 2020 submission guidelines (3). It is a checklist for code repositories that could complement the best practices proposed by the authors of CLAIM. Of note, this checklist was published after CLAIM, so the authors could not have been aware of it at the time.

Given that AI research and code are inextricably linked, we believe that more emphasis should be placed on making available code that is both sound and reproducible, so that the wider scientific community can efficiently build upon the work initiated by the original published manuscript. Otherwise, these publications risk becoming simple "show-and-tell" projects without meaningful contribution to the field.

We thank Drs Mongan, Moy, and Kahn for putting together an excellent guide for authors and reviewers in the field of AI in medical imaging. Our suggestions are meant to complement their work and are in no way intended to detract from the CLAIM article, which will serve as an excellent resource to help foster useful and reproducible AI research.

Disclosures of Conflicts of Interest: F.C.K. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: consultant for MD.ai; employed by DASA as head of AI. Other relationships: disclosed no relevant relationships. I.P. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: consultant for MD.ai. Other relationships: disclosed no relevant relationships. T.L.K. disclosed no relevant relationships.

References

1. Mongan J, Moy L, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiol Artif Intell 2020;2(2):e200029.
2. Beam AL, Manrai AK, Ghassemi M. Challenges to the Reproducibility of Machine Learning Models in Health Care. JAMA 2020 Jan 6 [Epub ahead of print].

3. Stojnic R. Tips for Publishing Research Code. NeurIPS. https://github.com/paperswithcode/releasing-research-code. Accessed April 11, 2020.