Article Open Access Peer-reviewed

Are Artificial Intelligence Challenges Becoming Radiology’s New “Bee’s Knees”?

2021; Radiological Society of North America; Volume: 3; Issue: 3 Language: English

10.1148/ryai.2021210056

ISSN

2638-6100

Authors

Hesham Elhalawani, Raymond H. Mak,

Topic(s)

Radiation Dose and Imaging

Abstract

Published online: Apr 21, 2021. https://doi.org/10.1148/ryai.2021210056

From the Department of Radiation Oncology, Brigham and Women's Hospital and Dana-Farber Cancer Institute, Harvard Medical School, 75 Francis St, Boston, MA 02115. Address correspondence to H.E. (e-mail: [email protected]).

See also the article by Desai et al in this issue.

Hesham Elhalawani, MD, MSc, is a clinical fellow in the CNS radiation oncology division at Harvard Medical School and Brigham and Women's Hospital/Dana-Farber Cancer Institute. Dr Elhalawani co-organized the MICCAI grand challenges in 2016, 2018, and 2020. His research interests focus on leveraging artificial intelligence and quantitative imaging analytics, including radiomics and multiparametric MRI, to personalize radiation therapy. He serves as a member of the RSNA Radiology Informatics Committee.

Raymond Mak, MD, is an associate professor of radiation oncology at Harvard Medical School and Brigham and Women's Hospital/Dana-Farber Cancer Institute. Dr Mak's research interests focus on developing imaging biomarkers to predict radiation therapy response in patients with lung cancer and applying artificial intelligence techniques to automate radiation therapy planning. He has led crowd innovation and clinical trials to develop novel, clinically relevant artificial intelligence techniques.

In a recently published research report by Data Bridge on artificial intelligence (AI), the global market for AI in medical imaging was projected to rise from $21.48 billion in 2018 to $264.85 billion in 2026 (1). Radiology, with its ingrained big-data potential, distinctive data elements, and structured reporting, needs to embrace and invest in AI via personnel education, dedicated research, and resource allocation. The article by Desai et al in the current issue of Radiology: Artificial Intelligence engaged crowdsourcing methods to prime AI-enabled algorithms to accurately replicate expert radiologists' performance in segmenting knee articular cartilage and meniscus on MRI (2). Manual three-dimensional (3D) segmentation is integral to automated diagnosis and the evaluation of imaging biomarkers, but it is strikingly time- and training-intensive and entails inherent interobserver inconsistencies with potentially detrimental effects on biomarker robustness. Hence, it is no surprise that AI research endeavors have sought to automate segmentation workflows, a goal that is crucial to a field known to be short of practitioners on a global scale (3).

Beyond segmentation, other AI applications in radiology, including natural language processing (NLP), radiomics, and radiogenomics, have shown promise to reform radiology workflows and to enhance diagnostic accuracy and patient risk stratification. Nonetheless, the dearth of large-scale curated clinical-imaging datasets and the lack of data standardization are ongoing challenges for generalization in AI (4). Organized competitions, such as the Medical Image Computing and Computer Assisted Intervention (MICCAI) SKI10 challenge, have provided reliable curated datasets with which to investigate AI solutions (5).
However, generalizability of the resulting models was always in question, given varying real-world scanner and acquisition parameters and inconsistent techniques of data partitioning into training, validation, and test subsets.

Desai et al hosted the 2019 International Workshop on Osteoarthritis Imaging Knee Segmentation Challenge on an open-source website dedicated to the Osteoarthritis Initiative, a multicenter, 10-year observational study of men and women, sponsored by the National Institutes of Health, that aims to enable better understanding of prevention and treatment of knee osteoarthritis (https://nda.nih.gov/oai) (6). The authors provided a curated imaging dataset of 88 patients with expert segmentations, a feat not seen routinely in most clinical studies. Moreover, the authors are to be commended for hosting this contest on a non-custom-constructed challenge platform while providing a generalized framework for characterizing and evaluating the semantic and clinical efficacy of automatic segmentation methods. This framework included careful definition and justification of dataset partitioning into training, validation, and test subsets of images, independently and without overlap. All participants were allowed to use training data from other sources and to perform data augmentation. Participants' models were then evaluated against the unreleased ground truth segmentations of the test set.

The authors evaluated the potential of an ensemble of convolutional neural networks (CNNs) to grade and combine outputs from multiple high-performance networks. CNNs have shown great potential for automating segmentation, yet comparing the performance of different networks has always been a challenge. Other investigators recently evaluated 3D CNN-assisted detection and grading of abnormalities in knee MRI studies (7).
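To make the ensembling and evaluation concepts concrete, here is a toy sketch in Python: a per-pixel majority vote over binary masks from several hypothetical networks, scored with the Dice overlap coefficient. The masks and numbers are invented for illustration and are not from the study.

```python
def majority_vote(masks):
    """Per-pixel majority vote across binary masks from several models."""
    n = len(masks)
    return [1 if 2 * sum(px) > n else 0 for px in zip(*masks)]

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = sum(x & y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))

# Toy 1D "masks": a ground truth and three hypothetical network outputs
gt = [1, 1, 1, 0, 0, 0]
m1 = [1, 1, 0, 0, 0, 0]
m2 = [1, 1, 1, 1, 0, 0]
m3 = [0, 1, 1, 0, 0, 0]

ens = majority_vote([m1, m2, m3])
print(dice(ens, gt))  # 1.0: on this toy example the vote beats each member,
                      # though the study found that it need not in practice
```

In real use, masks are 3D arrays, and, as the study underscores, a high pixel-level Dice score does not by itself guarantee accuracy of derived tissue-level measures such as cartilage thickness.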
Interestingly, in the current study, the voting ensemble, that is, the model that combines the predictions from multiple other models, did not exceed individual network performance, and high segmentation accuracy did not correlate with cartilage thickness accuracy. This finding may indicate that all networks systematically overestimate or underestimate cartilage thickness per patient. It also suggests the added value of using both pixel-level segmentation accuracy and tissue-level thickness accuracy metrics when constructing the networks.

Appropriately, the authors reported caveats to their work, including the small sample size and the fact that challenge participants applied minimal CNN postprocessing, a step important for refining CNN outputs. Of note, only five teams participated in the challenge, while a sixth team submitted an entry after the challenge closed. This raises the question of how to incentivize contestants to participate in similar challenges and which tasks are most appropriate for crowdsourcing. Nonetheless, Desai et al are to be commended for going beyond merely publishing "significant" challenge results to outlining a roadmap for future organizers to carry out and evaluate challenge entries.

Data repositories such as The Cancer Imaging Archive (TCIA) (http://www.cancerimagingarchive.net/), founded in 2011, and the Radiological Society of North America (RSNA) Medical Imaging and Data Resource Center (MIDRC) (https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70230281) were created to facilitate sharing of high-quality clinical data in a centralized, standardized fashion to promote collaborative AI research. The continued development and integration of these and other medical data repositories provide the robust, high-fidelity "big data" sets that are critical to the future of AI analyses. As AI investigations have become increasingly cross-disciplinary, the need for collaboration among these distinct databases is more imperative now than ever.
For instance, Zhu et al used integrated data from TCIA and The Cancer Genome Atlas databases to cross-link radiomic phenotypes and genomic mechanisms in head and neck cancers (8).

We believe that a complementary data descriptor, published and then indexed in PubMed/PubMed Central, accords with our community's ethical responsibility to promote data sharing in keeping with the FAIR (Findability, Accessibility, Interoperability, and Reusability) guiding principles for scientific data management (9) and the CLAIM (Checklist for Artificial Intelligence in Medical Imaging) standards for reporting AI in medical imaging studies (10). In parallel, workflow automation that enables "batch" data anonymization and transfer to the repository will become more and more compelling. Moreover, it is crucial to develop community-driven, container-based software engines and platforms for the structured dissemination of deep learning models (eg, ModelHub.AI, http://modelhub.ai/). Further support is needed so that these models can evolve into application programming interfaces, with eventual commercialization of AI tools. The American College of Radiology's Data Science Institute has paved the way for developers to expedite Food and Drug Administration (FDA) clearance of new AI tools in accord with the FDA review process through its Assess AI program (https://www.acrdsi.org/DSI-Services/Assess-AI).

Incorporating AI into the mainstream of clinical research and practice should be done responsibly, and results should be interpreted wisely to maximize the benefits and avoid potential or collateral harm. Perhaps the most critical concern is ensuring patients' safety and the privacy of their health information as prerequisites for interinstitutional or public data sharing.
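The batch de-identification step that such sharing presupposes can be sketched in a few lines. The tag names and hashing scheme below are illustrative assumptions for the sketch, not the behavior of any specific tool, and a real workflow must cover the full set of identifying attributes defined by the DICOM standard.

```python
import hashlib

# Illustrative protected health information (PHI) attributes only
PHI_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def anonymize(header, study_key):
    """Drop PHI attributes and substitute a non-reversible pseudo-identifier."""
    clean = {k: v for k, v in header.items() if k not in PHI_TAGS}
    digest = hashlib.sha256((study_key + header["PatientID"]).encode()).hexdigest()
    clean["PseudoID"] = digest[:12]  # same patient always maps to the same ID
    return clean

def batch_anonymize(headers, study_key):
    """'Batch' de-identification of a list of image headers."""
    return [anonymize(h, study_key) for h in headers]
```

Hashing the original identifier with a study-specific key keeps one patient's images linkable within a study without exposing the identifier itself.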
The Clinical Trial Processor, developed by the RSNA (http://mircwiki.rsna.org/index.php?title=CTP-The_RSNA_Clinical_Trial_Processor), is an excellent tool for ensuring Health Insurance Portability and Accountability Act compliant anonymization of patient health identifiers embedded in images. Simultaneous access to public and private data will also be a significant consideration, given the varying data protection policies of the stakeholders involved. For instance, the newly instituted European General Data Protection Regulation imposes comparatively tighter constraints, necessitating both public and private versions of large-scale databases. Not surprisingly, enthusiasm has been growing for a more distributed learning approach in which advanced statistical and AI models are developed and externally validated safely in-house across multi-institutional datasets.

In sum, the radiology community has been taking measures not only to keep up with the AI revolution taking place in almost every discipline, but also to steer it in the direction that best serves our clinical practice. It is imperative that we build AI infrastructures with the capacity to include and process multidisciplinary data attributes. Large-scale data curation and transfer workflows, as well as advanced postprocessing toolboxes such as autosegmentation algorithms and batched anonymization capabilities with common-ontology data dictionaries, are central to the success of this revolutionary endeavor. Integrating NLP and machine learning algorithms and evaluating the effect of their interplay on refining data extraction from radiology reports are other promising horizons yet to be fully explored (11).
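As a toy illustration of extracting structured labels from report text, consider a cue-phrase categorizer for oncologic response. Real systems, such as the one in reference 11, train machine learning models on labeled reports; the categories and phrase lists here are invented for the sketch.

```python
# Invented cue phrases for the sketch; a trained NLP model would learn these
RESPONSE_CUES = {
    "progression": ["new lesion", "enlarging", "increased", "progression"],
    "response": ["resolved", "smaller", "decrease", "improvement"],
    "stable": ["stable", "unchanged", "no significant change"],
}

def categorize(report_text):
    """Categorize a report impression by counting cue-phrase hits per class."""
    text = report_text.lower()
    scores = {label: sum(text.count(p) for p in phrases)
              for label, phrases in RESPONSE_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "indeterminate"
```

Even this crude keyword approach shows why combining NLP with downstream machine learning is attractive: free-text impressions become categorical outcomes that large-scale analyses can consume.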
Multi-institutional AI challenges like the current study can hone and leverage robust AI techniques, identifying clinically applicable means of extracting clinical outcomes from existing large-scale multi-institutional data streams toward more intelligent, evidence-based, personalized medicine.

Disclosures of Conflicts of Interest: H.E. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: trainee editorial board member of Radiology: Artificial Intelligence. Other relationships: disclosed no relevant relationships. R.M. Activities related to the present article: disclosed no relevant relationships. Activities not related to the present article: consultant for ViewRay and AstraZeneca; received a grant from ViewRay. Other relationships: disclosed no relevant relationships.

References

1. Global Artificial Intelligence in Medical Imaging Market: Industry Trends and Forecast to 2028. https://www.databridgemarketresearch.com/reports/global-artificial-intelligence-in-medical-imaging-market?pm. Accessed February 7, 2021.
2. Desai AD, Caliva F, Iriondo C, et al. The International Workshop on Osteoarthritis Imaging Knee MRI Segmentation Challenge: A Multi-Institute Evaluation and Analysis Framework on a Standardized Dataset. Radiol Artif Intell 2021;3(3):e200078.
3. Mollura DJ, Culp MP, Pollack E, et al. Artificial Intelligence in Low- and Middle-Income Countries: Innovating Global Health Radiology. Radiology 2020;297(3):513–520.
4. Welch ML, McIntosh C, Haibe-Kains B, et al. Vulnerabilities of radiomic signature development: The need for safeguards. Radiother Oncol 2019;130:2–9.
5. Emmanuel K, Quinn E, Niu J, et al. Quantitative measures of meniscus extrusion predict incident radiographic knee osteoarthritis: data from the Osteoarthritis Initiative. Osteoarthritis Cartilage 2016;24(2):262–269.
6. Eckstein F, Wirth W, Nevitt MC. Recent advances in osteoarthritis imaging: the osteoarthritis initiative. Nat Rev Rheumatol 2012;8(10):622–630.
7. Astuto B, Flament I, Namiri NK, et al. Automatic Deep Learning Assisted Detection and Grading of Abnormalities in Knee MRI Studies. Radiol Artif Intell 2021;3(3):e200165.
8. Zhu Y, Mohamed ASR, Lai SY, et al. Imaging-Genomic Study of Head and Neck Squamous Cell Carcinoma: Associations Between Radiomic Phenotypes and Genomic Mechanisms via Integration of The Cancer Genome Atlas and The Cancer Imaging Archive. JCO Clin Cancer Inform 2019;3(3):1–9.
9. Wilkinson MD, Dumontier M, Aalbersberg IJ, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data 2016;3(1):160018. [Published correction appears in Sci Data 2019;6(1):6.]
10. Mongan J, Moy L, Kahn CE Jr. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A guide for authors and reviewers. Radiol Artif Intell 2020;2(2):e200029.
11. Chen PH, Zafar H, Galperin-Aizenberg M, Cook T. Integrating Natural Language Processing and Machine Learning Algorithms to Categorize Oncologic Response in Radiology Reports. J Digit Imaging 2018;31(2):178–184.

Article History: Received Feb 16, 2021; revision requested Feb 19, 2021; revision received Feb 20, 2021; accepted Mar 11, 2021; published online Apr 21, 2021.
