Letter · Open access · Peer reviewed

Artificial intelligence and capsule endoscopy: Is the truly “smart” capsule nearly here?

2018; Elsevier BV; Volume: 89; Issue: 1; Language: English

10.1016/j.gie.2018.08.017

ISSN

1097-6779

Authors

Michael F. Byrne, Fergal Donnellan

Topic(s)

Gastric Cancer Management and Outcomes

Abstract

One wonders whether Gavriel Iddan truly realized what an impact his invention of wireless capsule endoscopy (CE) would have on the world of GI endoscopy [1]. Since its official release in 2001, it has revolutionized the management of small-bowel diseases, including GI bleeding, Crohn’s disease, abnormal radiologic imaging, polyposis syndromes, and celiac disease. Furthermore, the advent of balloon enteroscopy to “chase” the findings of small-bowel CE has brought a very powerful solution to occult GI bleeding. Despite these advances, the field of CE has not stood still. Since 2014 we have had a third generation of the original capsule, with improved image resolution and a wider field of view than its predecessors (PillCam SB3; Given Imaging, Yokneam, Israel). In addition, we have other capsule systems from Korea (MiroCam; IntroMedic, Seoul, Korea), Japan (EndoCapsule; Olympus, Tokyo, Japan), and China (OMOM; Jinshan Science and Technology Company, Chongqing, China). Despite the undoubted benefits of CE, there have, however, been several challenges since its first clinical use. Chief among them is the ability of the human reader to identify disease in a study that may be several hours in length, whereas the offending pathologic changes may be seen in only a few frames or occasionally only 1 frame. There have been many advances in technology to try to improve the reading experience, accuracy, and diagnostic capability of capsule studies. They include the suspected blood indicator, adaptive frame rate technology, and the Quick-view algorithm. However, the challenge of maximizing the function of CE as a diagnostic tool still exists.
The suspected blood indicator, introduced in 2003, has not replaced the physician reader and in its current format is unlikely to, with a recent study demonstrating a sensitivity below 60% even in active bleeding [2]. Similarly, adaptive frame rate technology, combined with a 30% improvement in image resolution and introduced on the latest PillCam SB3, has not resulted in better diagnostic yields [3]. In addition, Quick-view, which selects the 10% of images deemed most relevant for review, can reduce reading times but has been associated with a not-insignificant miss rate for noteworthy lesions [4]. Clearly, there are still many challenges in the reporting of capsule studies. The human eye is imprecise, and the human attention span has limits. It is not uncommon that a capsule study is reviewed by the same or a different reader, and pathologic changes that were missed on the first read are noted on a subsequent re-evaluation. Are better solutions coming?
In the past few years, we have been hearing a great deal, in both medical and nonmedical spaces, about the great promise of artificial intelligence (AI) or its various loose synonyms, such as machine learning, deep learning, computer-aided diagnosis (CADx), and computer-aided detection (CADe). AI is making huge advances in medicine, owing to significant improvements in machine learning, faster computational power, and the availability of large bodies of clinical data. In the field of camera-based imaging of the GI tract alone, several recent studies have revealed the potential for AI to make a significant positive impact on the quality of endoscopy [5]. For example, AI-assisted lesion detection, and lesion differentiation or “optical biopsy,” are areas of much interest for colon polyps, with several studies showing great promise for real-time solutions to be available for some form of clinical use in the near future, rather than several years away [6-8]. Moving away from AI methods that rely on human feature extraction to true deep learning, in which algorithms can “learn” from raw and unprocessed videos, has put us at a tipping point in this field. In this edition of Gastrointestinal Endoscopy, Leenhardt et al [9] report the use of convolutional neural networks, a common platform in deep learning, to improve the detection of GI angiectasias identified on small-bowel CE. The study is impressive in that it stems from a French national CE database incorporating 15 centers. Still frames from 4166 third-generation small-bowel CE videos (PillCam SB3 system) were collected. A total of 2946 still frames showing vascular lesions were extracted and used as one dataset; the same number of normal still frames was used as the control dataset. All still frames were validated by a group of expert CE readers. The primary endpoint, the sensitivity of the CADx algorithm for the detection of GI angiectasias, was 100%; the secondary endpoint, the specificity of the algorithm, was 96%. In addition, the reading process for a single frame took 46.8 milliseconds, equating to a reading time of approximately 39 minutes for a full-length capsule study of 50,000 images. Despite the interesting findings, the authors acknowledge several of the study’s drawbacks: the algorithm was used on still images rather than video, on the PillCam SB3 system rather than on the other systems, and on clean images rather than suboptimally prepared images with a significant bubble burden.
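As a quick sanity check, the reported per-frame processing speed does reproduce the quoted whole-study reading time. A minimal sketch using only the figures stated above (46.8 milliseconds per frame, a 50,000-frame study):

```python
# Back-of-the-envelope check of the whole-study reading time implied by
# the reported per-frame speed of the algorithm.

MS_PER_FRAME = 46.8        # reported single-frame reading time, milliseconds
FRAMES_PER_STUDY = 50_000  # typical full-length capsule study, as quoted

total_ms = MS_PER_FRAME * FRAMES_PER_STUDY   # 2,340,000 ms
total_minutes = total_ms / 1000 / 60         # 39.0 minutes

print(round(total_minutes))  # → 39, matching the figure in the text
```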
The use of still images, and “clean” images at that, rather than videos is worthy of discussion, inasmuch as this has been the approach of earlier AI work in endoscopy and means that, in their current state, such AI algorithms are not ready for real-time clinical use. In real video scenarios, blurry frames, dirty frames, and frames with only partial views of pathologic changes are a reality and present much more of a challenge for AI reading assistance. Regarding the machine learning and technical aspects of the study by Leenhardt et al [9], the authors have used a robust AI model that stands up to scrutiny. For this, they are to be commended. However, there are some limitations of their neural network techniques that limit the ability of this kind of work to translate into a tool usable now, or in the near future, in a clinical scenario. For example, they used manually segmented images, which are difficult and expensive to obtain, a limitation they acknowledge in their article. More recent AI techniques can use “weak labels” to achieve very good classification and detection performance. Other approaches use bounding boxes to localize the region of interest; bounding boxes are easier to obtain, and approaches using localization convolutional neural networks have shown outstanding results for the classification and detection of objects. Temporal information could also be exploited, for example with recurrent neural networks, to improve the results. This work is, nonetheless, a good proof of concept and offers promise for the future.
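For readers unfamiliar with the terminology, the core operation a convolutional neural network applies is a learned filter slid across the image. The toy sketch below is not from the study; it applies a single hand-written vertical-edge filter to a tiny synthetic patch (all values illustrative), whereas a trained network learns thousands of such filters directly from labeled frames:

```python
# Toy illustration of the convolution operation at the heart of a
# convolutional neural network (CNN). The kernel here is hand-picked;
# a real CNN learns its kernels from labeled training images.

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution (no padding, stride 1), in pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x5 grayscale patch with a sharp vertical boundary (dark left, bright right).
patch = [
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]

# Hand-picked vertical-edge detector; a trained network learns its own.
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

response = convolve2d(patch, edge_kernel)
print(response)  # → [[3.0, 3.0, 0.0]]: strongest where the boundary sits
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what lets a network respond to lesion-like texture and color patterns without any hand-crafted feature engineering.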
However, we have seen several “promising” AI detection and diagnosis proof-of-concept studies in the field of GI endoscopy in the past couple of years, and proof of concept on perfect still images is becoming a little repetitive at this stage. Clearly, if the work presented by the French group is to excite us in the near future, rapid progress toward real-time video analysis will be required, along with appropriate clinical trials; the authors’ stated commitment to testing their work in more live scenarios is therefore very welcome. So, where does this leave us currently with CE? The French study in this issue at least points to a different and better future for capsule reading. Although there is clearly still a gap between the methodology described and a true real-time, clinically applicable solution, we are no longer talking in aspirational terms about a future with AI in CE. There is genuine hope that AI solutions in various forms of GI endoscopic imaging will be available to us in the relatively near future. Whether AI technology will actually replace the human reader remains to be seen. For now, we should seek to embrace AI as a clinical decision support tool or as a second reader. The European Society of Gastrointestinal Endoscopy has made some recent statements in this regard, suggesting that we are not yet at a place where we have standalone AI systems for the various forms of endoscopic imaging [10]. We agree, but that does not mean that we cannot and should not encourage a stepwise adoption of AI into our practice. We do not need the perfect AI of futuristic thinking to start embedding some of this incredible technology into our daily practice.
Advances in AI will undoubtedly improve the diagnostic yield of CE, not just for angiectasia but in other disease states as well, such as Crohn’s disease, tumors, and ulcers. This will be particularly welcome in the field of colon polyp detection with the colon capsule, because it has proved very difficult to achieve widespread, efficient human reading of colon capsule studies for polyps. This has also been more of a challenge for AI than optical colonoscopy, because the image quality of colon capsule studies is inferior to that of optical colonoscopy; however, we need to be wary of falling into the trap of thinking like humans when it comes to the potential of AI. In our own work on colon polyp optical biopsy during optical colonoscopy, whereas the NBI International Colorectal Endoscopic (NICE) classification requires the human eye to discern 3 features for polyp differentiation, dissection of our neural networks revealed that the algorithm was seeing more than 1000 discriminating features per polyp. It is probably fair to say that current medical hardware lags behind the advances made in AI and that a more cohesive collaboration among clinicians, computer scientists, industry, and regulatory bodies will accelerate the adoption of AI in medical devices in the near future. “Transfer learning” in AI, whereby lessons learned in one area can be applied to different but related problems, will undoubtedly aid in this acceleration, as advances in polyp detection and optical biopsy with standard endoscopes lead to improvements in CE reading, in the assessment of Barrett’s esophagus and early esophageal cancer, and in many other luminal camera-based imaging techniques such as confocal laser endomicroscopy and optical coherence tomography. The future of endoscopy, including CE, is going to be really quite different in the next few years as this tidal wave of AI technology starts to find its place in our daily practice.
Physician and patient acceptance will be obvious barriers, but we, as a clinical community, should start preparing now to embrace this radically different future, inasmuch as it is coming sooner rather than later. We are not there yet, but it now seems a matter of how quickly, rather than whether, we will get there.

Disclosure: Dr Byrne is the founder of ai4gi, a joint venture between Imagia and Satis Operations; the CEO of Satis Operations Inc; and a participant in a codevelopment agreement between ai4gi and Olympus America. The other author disclosed no financial relationships relevant to this publication.

References

1. Iddan G, Meron G, Glukhovsky A, et al. Wireless capsule endoscopy. Nature 2000;405:417.
2. Buscaglia JM, Giday SA, Kantsevoy SV, et al. Performance characteristics of the suspected blood indicator feature in capsule endoscopy according to indication for study. Clin Gastroenterol Hepatol 2008;6:298-301.
3. Xavier S, Monteiro S, Magalhães J, et al. Capsule endoscopy with PillCam SB2 versus PillCam SB3: has the improvement in technology resulted in a step forward? Rev Esp Enferm Dig 2018;110:155-9.
4. Shiotani A, Honda K, Kawakami M, et al. Analysis of small-bowel capsule endoscopy reading by using Quickview mode: training assistants for reading may produce a high diagnostic yield and save time for physicians. J Clin Gastroenterol 2012;46:e92-5.
5. Byrne MF, Shahidi N, Rex DK. Will computer-aided detection and diagnosis revolutionize colonoscopy? Gastroenterology 2017;153:1460-4.
6. Byrne MF, Chapados N, Soudan F, et al. Real-time differentiation of adenomatous and hyperplastic diminutive colorectal polyps during analysis of unaltered videos of standard colonoscopy using a deep learning model. Gut. Epub 2017 Oct 24.
7. Misawa M, Kudo SE, Mori Y, et al. Artificial intelligence-assisted polyp detection for colonoscopy: initial experience. Gastroenterology 2018;154:2027-9.
8. Urban G, Tripathi P, Alkayali T, et al. Deep learning localizes and identifies polyps in real time with 96% accuracy in screening colonoscopy. Gastroenterology. Epub 2018 Jun 18.
9. Leenhardt R, Vasseur P, Li C, et al. A neural network algorithm for detection of GI angiectasia during small-bowel capsule endoscopy. Gastrointest Endosc 2019;89:189-94.
10. East JE, Vleugels JL, Roelandt P, et al. Advanced endoscopic imaging: European Society of Gastrointestinal Endoscopy (ESGE) technology review. Endoscopy 2016;48:1029-45.