Automated Quantitative Analysis of Wound Histology Using Deep-Learning Neural Networks
2020; Elsevier BV; Volume 141; Issue 5; Language: English
DOI: 10.1016/j.jid.2020.10.010
ISSN: 1523-1747
Every year, ∼8 million Americans require advanced care for nonhealing wounds, which collectively are estimated to cost between $28 billion and $96 billion (Sen, 2019). Complications in healing disproportionately afflict the elderly, who commonly suffer from comorbidities, such as vascular insufficiency and diabetes mellitus (Gosain and DiPietro, 2004; Gould et al., 2015), that disrupt wound closure. Histological staining of skin tissue sections with H&E can provide insight into cellular infiltration into the wound, infection, hyperproliferation at the wound edge, and fibrosis, and it serves as a critical technique in the research laboratory for understanding wound pathophysiology and evaluating new wound care products (Eming et al., 2014; Gantwerker and Hom, 2012). However, the analysis of wound histology is time-intensive, reliant on subjective user input, and largely qualitative. The goal of this study was to develop an objective and automated method to quantitatively assess H&E-stained wound sections to aid in wound healing research.

Recently, convolutional neural networks (CNNs) have been applied to many biomedical applications and have demonstrated an ability to classify and segment large quantities of image data rapidly and accurately (Calderon-Delgado et al., 2018; Kose et al., 2020; Oskal et al., 2019; Rivenson et al., 2019; Ronneberger et al., 2015; Tang et al., 2019). CNNs typically utilize a deep-learning approach that allows them to learn features unique to different image regions and delineate them from other distinct regions of an image. This is accomplished using supervised learning, in which a CNN learns image features from user-traced image segmentations that it treats as ground truth. This contrasts with unsupervised approaches, which do not require labeled data and instead find intrinsic patterns and features within the data set provided. Although unsupervised approaches are immune to potential training biases, it is difficult to control which patterns the network will choose to delineate. Supervised learning benefits from being able to teach a network a known number of relevant classes, which has led to its widespread application in biomedical image segmentation. Training a neural network can be a significant time investment, as it requires hundreds or more training images and significant processing power to learn to classify data accurately. Once trained, however, CNNs produce repeatable, consistent results rapidly across datasets. In the last five years, networks employing U-Net architectures (Ronneberger et al., 2015) have proven capable of segmenting images on a pixel-by-pixel basis with accuracies greater than 90% in optical coherence tomography images of skin (Calderon-Delgado et al., 2018) and uninjured H&E-stained skin sections (Oskal et al., 2019). This pixel-wise accuracy makes U-Net CNNs ideal for collecting automated dimensional measurements and could provide quantitative metrics to evaluate pathological delays in healing. In this study, we trained a CNN capable of segmenting morphologically distinct and clinically relevant regions of wound tissue for the automated calculation of wound depth, wound width, epidermal and dermal thicknesses, and re-epithelialization percentage. To accomplish this, a U-Net segmentation network was trained and evaluated using images of H&E-stained murine skin tissue containing full-thickness, excisional wounds from animals between 4 and 24 months of age, with and without streptozotocin-induced diabetes (Jones et al., 2018).
Animal studies were approved and conducted in accordance with University of Arkansas Institutional Animal Care and Use Committee protocols #16001 and #17063. Full details on the methods can be found in the Supplementary Material. The U-Net architecture was composed of four symmetric encoding and decoding layers created using the Deep Learning Toolbox in MATLAB 2019a (Supplementary Figure S1a). To train the network, 395 unique 512 × 512 pixel images from 25 H&E-stained murine tissue sections were collected at days 3 (n = 8), 5 (n = 8), and 10 (n = 9) after wounding. Custom-written MATLAB code was used to manually segment seven regions: the epidermis, dermis and hypodermis, granulation tissue, scab, hair follicles, skeletal muscle, and background. Each of the 395 images was augmented by reflection to improve network accuracy and robustness (Perez and Wang, 2017), increasing the size of the image set to 790. Of these images, 70% were randomly assigned to a training set, 20% to a validation set, and 10% to a testing set (Supplementary Figure S1b). Training was performed with an initial learning rate of 10⁻³ using an Adam optimizer and a cross-entropy loss function. Training continued for up to 100 epochs and was terminated early when validation loss stopped decreasing, to prevent overfitting.

Once trained, the network segmented an independent test set of images, and its output masks were compared on a pixel-by-pixel basis with the corresponding user-segmented masks (Figure 1). The granulation tissue, epidermis, dermis, muscle, and background classes were all classified with accuracies ≥90%, whereas the scab and hair follicle classes were slightly lower, with some misclassification along their boundaries with surrounding tissue regions (Figure 2b). Overall, the network had a classification accuracy of 92.5% when compared with the user-defined images in the test set, performing similarly to published segmentation networks for other applications (Calderon-Delgado et al., 2018; Oskal et al., 2019; Roy et al., 2017; Tang et al., 2019).
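For readers reproducing this pipeline, the steps above map closely onto MATLAB's Deep Learning and Computer Vision Toolbox APIs. The following is a minimal sketch only; the directory layout, class names, label IDs, and validation patience are our illustrative assumptions, not the authors' released code (which is linked at the end of this article).

```matlab
% Minimal sketch of the training pipeline (MATLAB R2019a, Deep Learning
% and Computer Vision Toolboxes). The directory layout, class names,
% label IDs, and validation patience are illustrative assumptions.
classes  = ["epidermis" "dermis" "granulation" "scab" ...
            "hairFollicle" "muscle" "background"];
labelIDs = 1:numel(classes);    % assumed pixel values in the label masks

% Datastores for the 70/20/10 training/validation/testing splits
imdsTrain = imageDatastore(fullfile('data', 'train', 'images'));
pxdsTrain = pixelLabelDatastore(fullfile('data', 'train', 'labels'), ...
                                classes, labelIDs);
trainData = pixelLabelImageDatastore(imdsTrain, pxdsTrain);

imdsVal = imageDatastore(fullfile('data', 'val', 'images'));
pxdsVal = pixelLabelDatastore(fullfile('data', 'val', 'labels'), ...
                              classes, labelIDs);
valData = pixelLabelImageDatastore(imdsVal, pxdsVal);

imdsTest = imageDatastore(fullfile('data', 'test', 'images'));
pxdsTest = pixelLabelDatastore(fullfile('data', 'test', 'labels'), ...
                               classes, labelIDs);

% U-Net with four symmetric encoding/decoding stages; the final
% pixel classification layer applies a cross-entropy loss by default
lgraph = unetLayers([512 512 3], numel(classes), 'EncoderDepth', 4);

options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 100, ...
    'ValidationData', valData, ...
    'ValidationPatience', 5, ...         % assumed early-stopping window
    'Shuffle', 'every-epoch');

net = trainNetwork(trainData, lgraph, options);

% Pixel-wise evaluation of the held-out test set against user tracings
pxdsPred = semanticseg(imdsTest, net, 'WriteLocation', tempdir);
metrics  = evaluateSemanticSegmentation(pxdsPred, pxdsTest);
```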
Figure 2. Automated network segmentation and quantification of whole wound sections. (a) Representative H&E-stained sections of skin wound tissue from days 3, 5, and 10 after wounding (top row). Segmentation results from manual user tracing (middle row) and the CNN (bottom row) demonstrate the network's ability to accurately segment full-thickness wounds. (b) The network demonstrated good accuracy across different wound regions and had an overall accuracy of 94.06%. (c) Automated measurements using the wound segmentation results revealed only small errors between the network- and user-defined ground truth results. Bar = 500 μm. Avg., average; BG, background; CNN, convolutional neural network; D, dermis; E, epidermis; GT, granulation tissue; HF, hair follicle; M, muscle; S, scab.

To assess the ability of the network to segment and quantify whole wound sections, an additional test set of six whole sections from days 3, 5, and 10 after wounding was manually traced and then segmented by the trained network (Figure 2a). Accuracy of the whole-section classification was 94.06% (Figure 2b), similar to the original test set evaluation (Figure 1), and the segmentation accuracies of individual slides ranged from 92.32% to 96.22%. Based on the segmented regions, wound depth, wound width, epidermal and dermal thicknesses, and percentage of re-epithelialization were automatically quantified from the whole tissue sections. Minimum separation distances based on the pixel-wise locations of the epidermis and dermis classes were used to define wound width and the percentage of re-epithelialization, whereas wound depth was assessed as the depth of the granulation tissue at the wound midpoint (Figure 2a). The average thicknesses of the epithelium (including the migrating epithelial tongue) and of the dermis and hypodermis were calculated using Euclidean distance transform measurements. Percent error was calculated for each metric from the network segmentation results relative to measurements of the user-traced sections (Figure 2c). Overall error in these measurements was 4.3% ± 2.7%, with no time point demonstrating substantially different (>2 SD) levels of error. Additionally, thickness measurements along the length of the wound sections were strongly correlated between the user- and network-defined masks (R = 0.91 ± 0.04 for the epidermis and R = 0.98 ± 0.02 for the dermis) (Supplementary Figure S2).
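These geometric measurements can be reproduced with standard morphological operations on the label mask. Below is a minimal sketch using the network trained above; the input file name, the class-ID scheme, the pixel size, the midline-based thickness estimate, and the left/right split at the granulation midpoint are all illustrative assumptions, not the authors' exact implementation.

```matlab
% Sketch of automated wound measurements from a segmented label matrix.
% Assumed class IDs: 1 = epidermis, 2 = dermis, 3 = granulation tissue.
I   = imread('wound_section.png');    % assumed whole-section image
seg = double(semanticseg(I, net));    % class index per pixel
pixelSize = 1.0;                      % assumed microns per pixel

epi  = (seg == 1);
derm = (seg == 2);

% Average layer thickness via the Euclidean distance transform: the
% distance from each in-layer pixel to the nearest out-of-layer pixel
% reaches half the local thickness along the layer's midline.
dEpi = bwdist(~epi);
midline = bwskel(epi);
epiThickness = 2 * mean(dEpi(midline)) * pixelSize;

% Wound width as the minimum separation between dermis pixels on the
% two sides of the gap, split at the granulation-tissue midpoint
[rD, cD] = find(derm);
mid   = median(find(any(seg == 3, 1)));   % wound midpoint column
left  = [rD(cD < mid),  cD(cD < mid)];
right = [rD(cD >= mid), cD(cD >= mid)];
woundWidth = min(pdist2(left, right), [], 'all') * pixelSize;

% The percentage of re-epithelialization can be estimated analogously,
% comparing the remaining epidermal gap with the dermis-defined width.
```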
In summary, this work demonstrates that a CNN can be developed to accurately segment full H&E-stained wound sections on a pixel-wise basis in less than 30 seconds using a desktop computer (Figure 1). These segmentation masks can be used to automatically measure wound geometry with minimal error (Figure 2). Automatic delineation of relevant wound regions also provides a foundation for quantifying other image features, and our network could be paired with additional neural networks or automated image processing techniques to quantify region-specific microvessel or cellular densities in the future. Furthermore, the network generated here for rapid segmentation and evaluation of H&E sections can be retrained via transfer learning (Shin et al., 2016) to develop future CNNs capable of quantifying wound features using substantially different staining protocols, imaging parameters, or sources of contrast.
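As a rough illustration of that retraining path, the published network could be reloaded and its class-specific head swapped before fine-tuning. In this hedged sketch, the .mat file name, variable names, layer names (taken from unetLayers defaults), class scheme, and hyperparameters are all assumptions rather than the authors' procedure.

```matlab
% Sketch of retraining via transfer learning. The .mat file, variable
% names, and layer names (unetLayers defaults) are assumptions.
load('trainedWoundNet.mat', 'net');   % e.g., from the GitHub repository

lgraph = layerGraph(net);

% Swap the class-specific head for a hypothetical four-class task on a
% differently stained data set
newClasses = ["epidermis" "dermis" "granulation" "background"];
newConv = convolution2dLayer(1, numel(newClasses), ...
                             'Name', 'Final-ConvolutionLayer');
newOut  = pixelClassificationLayer('Name', 'Segmentation-Layer', ...
                                   'Classes', newClasses);
lgraph = replaceLayer(lgraph, 'Final-ConvolutionLayer', newConv);
lgraph = replaceLayer(lgraph, 'Segmentation-Layer', newOut);

% New labeled data, built as in the training sketch above
imdsNew = imageDatastore(fullfile('newstain', 'images'));
pxdsNew = pixelLabelDatastore(fullfile('newstain', 'labels'), ...
                              newClasses, 1:numel(newClasses));
newData = pixelLabelImageDatastore(imdsNew, pxdsNew);

% Fine-tune at a reduced learning rate so pretrained encoder features
% are preserved while the new head adapts
options   = trainingOptions('adam', 'InitialLearnRate', 1e-4, ...
                            'MaxEpochs', 30);
retrained = trainNetwork(newData, lgraph, options);
```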
Data Availability. Code, example images, and trained network data related to this article can be found at https://github.com/kylepquinn/JID_WoundSegmentation_2020, hosted at GitHub. Imaging datasets related to this article can be provided by the authors on request.

ORCIDs. Jake D. Jones: http://orcid.org/0000-0002-6407-8726; Kyle P. Quinn: http://orcid.org/0000-0002-6876-3608

Conflict of Interest. The authors state no conflict of interest.

Acknowledgments. We would like to acknowledge Gianna Busch, Caila Hanes, and Ayman Yosef for their help with tissue sectioning and imaging of H&E-stained tissue samples. This research was funded by National Institutes of Health grant numbers R00EB017723 and R01AG056560 and the Arkansas Biosciences Institute.

Author Contributions. Conceptualization: JDJ, KPQ; Data Curation: JDJ, KPQ; Formal Analysis: JDJ; Funding Acquisition: KPQ; Methodology: JDJ, KPQ; Project Administration: KPQ; Software: JDJ; Supervision: KPQ; Validation: JDJ; Writing - Original Draft Preparation: JDJ, KPQ; Writing - Review and Editing: JDJ, KPQ

References
Calderon-Delgado M, Tiju J-W, Lin M-Y, Huang S-L. High resolution human skin image segmentation by means of fully convolutional neural networks. Paper presented at: International Conference on Numerical Simulation of Optoelectronic Devices (NUSOD); 5-9 November 2018; Hong Kong, China.
Eming SA, Martin P, Tomic-Canic M. Wound repair and regeneration: mechanisms, signaling, and translation. Sci Transl Med 2014;6:265sr6.
Gantwerker EA, Hom DB. Skin: histology and physiology of wound healing. Clin Plast Surg 2012;39:85-97.
Gosain A, DiPietro LA. Aging and wound healing. World J Surg 2004;28:321-326.
Gould L, Abadir P, Brem H, Carter M, Conner-Kerr T, Davidson J, et al. Chronic wound repair and healing in older adults: current status and future research. J Am Geriatr Soc 2015;63:427-438.
Jones JD, Ramser HE, Woessner AE, Quinn KP. In vivo multiphoton microscopy detects longitudinal metabolic changes associated with delayed skin wound healing. Commun Biol 2018;1:198.
Kose K, Bozkurt A, Alessi-Fox C, Brooks DH, Dy JG, Rajadhyaksha M, et al. Utilizing machine learning for image quality assessment for reflectance confocal microscopy. J Invest Dermatol 2020;140:1214-1222.
Oskal KR, Risdal M, Janssen EA, Undersrud ES, Gulsrud TO. A U-Net based approach to epidermal tissue segmentation in whole slide histopathological images. SN Appl Sci 2019;1:672.
Perez L, Wang J. The effectiveness of data augmentation in image classification using deep learning. 2017. arXiv:1712.04621v1.
Rivenson Y, Wang H, Wei Z, de Haan K, Zhang Y, Wu Y, et al. Virtual histological staining of unlabelled tissue-autofluorescence images via deep learning. Nat Biomed Eng 2019;3:466-477.
Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015. Lecture Notes in Computer Science, vol. 9351. Cham: Springer International Publishing; 2015. p. 234-241.
Roy AG, Conjeti S, Karri SPK, Sheet D, Katouzian A, Wachinger C, et al. ReLayNet: retinal layer and fluid segmentation of macular optical coherence tomography using fully convolutional networks. Biomed Opt Express 2017;8:3627-3642.
Sen CK. Human wounds and its burden: an updated compendium of estimates. Adv Wound Care (New Rochelle) 2019;8:39-48.
Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, et al. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging 2016;35:1285-1298.
Tang Y, Yang F, Yuan S, Zhan C. Multi-stage framework with context information fusion structure for skin lesion segmentation. Paper presented at: IEEE 16th International Symposium on Biomedical Imaging; 8-11 April 2019; Venice, Italy.