Subcutaneous sweat pore estimation from optical coherence tomography
2021; Institution of Engineering and Technology; Volume 15, Issue 13; Language: English
DOI: 10.1049/ipr2.12322
ISSN: 1751-9667
Authors: Baojin Ding, Haixia Wang, Peng Chen, Yilong Zhang, Ronghua Liang, Yipeng Liu
Topic(s): Image and Object Detection Techniques
IET Image Processing, Volume 15, Issue 13, pp. 3267-3280. Original Research Paper, Open Access.

Baojin Ding (College of Information Engineering, Zhejiang University of Technology, Hangzhou, China); Haixia Wang (corresponding author; College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China; hxwang@zjut.edu.cn; ORCID 0000-0002-2378-2725); Peng Chen (College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China; ORCID 0000-0001-6122-0574); Yilong Zhang, Ronghua Liang and Yipeng Liu (College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China).

First published: 09 August 2021. https://doi.org/10.1049/ipr2.12322

Abstract

Sweat pores, one of the level 3 features of fingerprints, have attracted much attention in fingerprint recognition. Sweat pores on the fingerprint surface become unclear or blurred when fingers are stained or damaged. Subcutaneous sweat pores, as cross sections of the sweat glands, are resistant to external interference. With 3D fingertip information measured by optical coherence tomography (OCT), subcutaneous sweat pore estimation from OCT volume data is investigated. First, an adaptive subcutaneous pore image reconstruction method is proposed.
It utilizes the skin surface and viable epidermis junction as references and realizes depth-adaptive pore image reconstruction. Second, a dilated U-Net combining the U-Net with dilated convolution is proposed for subcutaneous sweat pore extraction, which can prevent the information loss of sweat pores caused by downsampling. To the best of our knowledge, this is the first time that subcutaneous sweat pore extraction has been investigated. Experiments on subcutaneous pore image reconstruction and sweat pore extraction are both conducted. The qualitative and quantitative results show that the proposed adaptive method performs better in subcutaneous pore image reconstruction than the fix-depth method, and that the dilated U-Net outperforms other methods on subcutaneous sweat pore extraction.

1 INTRODUCTION

Fingerprints play an important role in identity security worldwide [1-3]. Fingerprint features can be divided into three levels [4]. Level 1 features refer to the overall ridge flow patterns. Level 2 features mainly refer to minutiae points such as ridge bifurcations and endings. Level 3 features are dimensional attributes of the ridges, including ridge path deviations, widths, and sweat pores, among which the pores lying on the ridges are the most distinctive. Conventional automated fingerprint recognition systems (AFRS) capture fingerprint images at a standard resolution of 500 dpi and utilize the level 1 and level 2 features for identification. Level 3 features such as sweat pores can only be captured in high-resolution images of at least 1000 dpi [5]. The effectiveness of sweat pores in personal identification has been statistically analysed and validated [6]. In recent studies, sweat pores have been found to yield high recognition capability, especially in the case of partial fingerprints [7, 8]. However, they have obvious shortcomings that prevent wide application.
First, an appropriate image resolution is required for proper capture [9]. Second, skin condition is critical: sweat pores can be unclear or blurred when fingers are wet, stained, or damaged [10]. Lastly, sweat pore status constantly changes with perspiration; pores may be either closed or open. A closed pore looks like an isolated bright dot, whereas an open pore, which is perspiring, is very likely to be connected to its neighbouring valleys [4, 5]. Hence, it is not easy to obtain a high-quality pore image or to extract sweat pores from it. Sweat pores are present not only on the skin surface, but also under the skin as sweat glands, which grow from tissue deep under the skin. Traditional sweat pores are defined as the openings of sweat glands on the skin surface, while sweat pores under the skin can also be estimated from the cross sections of the sweat glands. These two types of sweat pores are referred to as surface sweat pores and subcutaneous sweat pores, respectively. Subcutaneous sweat pores are robust against poor skin conditions and are essentially isolated. Hence, they are more stable than their conventional counterparts and promising for fingerprint recognition. Optical coherence tomography (OCT) is a non-invasive imaging technique that can acquire information from 0–3 mm below the skin surface and has become popular in three-dimensional (3D) fingerprint capture [11-13]. It obtains high-resolution 3D volume data of the finger, which contain information on biological tissues such as the epidermis, dermis, and sweat glands [14, 15]. The sweat glands measured by OCT are present between the skin surface and the viable epidermis junction. There are two ways to estimate subcutaneous sweat pores from OCT fingertip volume data. The first is to extract the 3D sweat glands and then use their locations to obtain the subcutaneous sweat pores. Sun et al.
[16] proposed to use Frangi's filter to detect the sweat glands' locations and segment them by thresholding. Ding et al. [17] proposed a modified U-Net that combines residual learning, bidirectional convolutional long short-term memory and hybrid dilated convolution to segment sweat glands directly. Both estimated subcutaneous sweat pores from the extracted sweat glands. This category of methods needs to perform 3D segmentation, which is computationally expensive and produces redundant information. Another way to estimate subcutaneous sweat pores consists of two steps: (i) subcutaneous pore image reconstruction using the en face information of the OCT volume data, and (ii) subcutaneous sweat pore extraction from the pore image. Some works have addressed the first step. Liu et al. [18] demonstrated the subcutaneous sweat pore image by averaging the en face images over depths of 100 to 270 μm from spectral domain OCT (SD-OCT) fingertip data. Zam et al. [19] used correlation mapping OCT (cmOCT) to measure a live fingertip and showed the sweat pore distributions at a preset depth. The performance of these works relies on a manually set depth in the OCT data. However, this depth varies across individuals and is hard to determine when a large number of samples is considered. Though no work on subcutaneous pore extraction has been presented, methods have been proposed to extract surface sweat pores from fingerprint images. Conventional methods use models to detect sweat pores [4, 5]. Recently, deep learning methods have gradually become a new trend for pore extraction [20-24]. Here, subcutaneous sweat pore estimation is of interest. In this paper, an SD-OCT system is established to measure 3D fingertip volume data. There are two challenges in achieving subcutaneous sweat pore estimation from the measured volume data. First, the performance of subcutaneous pore image reconstruction depends on a proper en face depth setting.
However, the depth varies across fingers and is hard to determine. Second, sweat pore extraction from OCT subcutaneous pore images has not been studied. Noise and some subcutaneous tissue structures that are not present in traditional fingerprints may be misjudged and become disturbances in subcutaneous pore extraction. To tackle these challenges, two strategies are proposed here. Since only the parts of the sweat glands between the viable epidermis junction and the skin surface can be measured by OCT, an adaptive method based on the location of the viable epidermis junction is proposed for subcutaneous pore image reconstruction. U-Net [25] is utilized for subcutaneous sweat pore extraction due to its strong performance in medical/biological image segmentation [26-28]. Subcutaneous sweat pores are small, so dilated convolution [29] is introduced to prevent information loss of small objects. The flowchart is shown in Figure 1. The main contributions are as follows:

- An adaptive subcutaneous pore image reconstruction method is proposed. It utilizes the depth of the viable epidermis junction as a reference and realizes depth-adaptive reconstruction. The influence of the en face depth on subcutaneous pore image reconstruction is also investigated.
- To the best of our knowledge, this is the first time that subcutaneous sweat pore extraction from an OCT subcutaneous pore image is presented.
- A dilated U-Net is proposed for subcutaneous sweat pore extraction. It combines dilated convolution with the U-Net, which prevents the information loss of sweat pores caused by the downsampling of U-Net while realizing accurate extraction.

Figure 1: The flowchart of subcutaneous sweat pore estimation

The rest of the paper is structured as follows. Related works are reviewed in Section 2. The subcutaneous sweat pore image reconstruction method and the sweat pore extraction method are presented in Sections 3 and 4, respectively.
The experimental results are given in Section 5, and conclusions are drawn in Section 6.

2 RELATED WORK

2.1 Contour extraction in OCT volume data

The tissue structures of the fingertip are presented using a slice image (B-scan) of the OCT volume data, as shown in Figure 2. The corresponding reconstructed surface fingerprint, subcutaneous pore image, and internal fingerprint are also presented. The upper layer of the epidermis is the stratum corneum, whose top surface represents the surface fingerprint. The bottom junction of the stratum corneum is the viable epidermis junction, which represents the internal fingerprint [30, 31]. Sweat glands grow from the dermis layer and have helix-like or oval-shaped structures [32]. The sweat glands measured by OCT are mainly present inside the stratum corneum, bounded by the skin surface and the viable epidermis junction, where the subcutaneous pore image can be reconstructed. Current methods usually use the skin surface as the sole reference to locate the en face image for subcutaneous sweat pores [18, 19]. When the en face image is close to the viable epidermis junction or even crosses it, the sweat glands become unclear or disappear. Hence, besides the skin surface, the viable epidermis junction can also be a good reference for pore image reconstruction.

Figure 2: Demonstration of a B-scan and the corresponding reconstructed images

Methods have been proposed to extract the contour of the viable epidermis junction for internal fingerprint reconstruction. Bossen et al. [33] demonstrated that the viable epidermis junction is approximately located at an average depth of 0.34 mm, but only a few examples were used for validation. Aum et al. [34] calculated the distance between the first and second maximum intensities of averaged B-scans to obtain a more accurate depth of the viable epidermis junction. Darlow et al. [35] used k-means clustering to extract the viable epidermis junction contour after locating the stratum corneum. Wang et al.
[36] proposed a hybrid hierarchical clustering (HHC) method to segment the stratum corneum and viable epidermis junction. Parameters used in these methods are usually case-sensitive and require manual selection. Convolutional neural network (CNN)-based methods have also been proposed for viable epidermis junction extraction [37, 17]. The CNN-based methods perform better than the traditional ones, but require specific hardware such as a GPU and adequate time to train the network in advance. Although internal fingerprint reconstruction and subcutaneous pore image reconstruction both involve viable epidermis junction extraction, their estimation methods differ. For internal fingerprint reconstruction, an accurate contour of the viable epidermis junction is required, as the internal fingerprint is located at the junction zone. For the subcutaneous pore image, the en face information within a large zone (between the skin surface and the viable epidermis junction) can be used for reconstruction. The depth of the viable epidermis junction serves as a reference, but an accurate contour is not necessary. Therefore, a simple and time-efficient method for viable epidermis junction extraction is required here.

2.2 Sweat pore extraction from surface fingerprint image

Many methods have been proposed to extract surface sweat pores from traditional fingerprint images. Jain et al. [4] used Gabor filters and the Mexican hat wavelet transform to extract pores. Zhao et al. used the difference of Gaussians (DoG) [38] to extract sweat pores, then proposed the dynamic anisotropic pore model (DAPM) [5] to describe the pores more flexibly and accurately. Genovese et al. [20] proposed computational intelligence techniques based on neural networks to select only the actual sweat pores from the set of extracted candidate points. Recently, deep learning has achieved great success in the field of image analysis [39, 40]. Labati et al. [21] proposed a CNN for pore detection (CNND) and another CNN for refinement (CNNR).
Jang et al. [22] proposed a deep CNN-based pore extraction method (DeepPore), which consists of pore extraction using a CNN and postprocessing using pore intensity refinement (PIR). Wang et al. [23] proposed a U-Net to extract sweat pores. Liu et al. [24] proposed a Judge-CNN that treats pore extraction as a binary classification problem and uses the ridge–valley information as prior knowledge in the pre- and post-processing. Meanwhile, sweat pore extraction from OCT subcutaneous pore images has not been studied. Unlike in the traditional fingerprint image, the ridges and valleys in the subcutaneous pore image are not clearly represented as references, and some subcutaneous tissues appear as disturbances, making the extraction challenging. The U-Net network is considered for extracting sweat pores from the subcutaneous pore image due to its good performance in medical/biological image segmentation [26-28]. U-Net is a fully convolutional U-shaped network, whose left part is the contracting (encoding) stage and right part is the expanding (decoding) stage. The contracting stage uses convolution and downsampling to extract features and reduce feature size, while the expanding stage uses convolution and upsampling to decode and expand features. However, the downsampling operations in the contracting stage can lead to information loss of sweat pores, which needs to be dealt with. Therefore, dilated convolution [29] is introduced to prevent information loss of small objects.

3 SUBCUTANEOUS SWEAT PORE IMAGE RECONSTRUCTION

3.1 OCT data measurement and preprocessing

An SD-OCT system is built as in [36] to acquire fingertip volume data, as shown in Figure 3. The system consists of a broadband light source, a spectrometer, a reference arm and a sample arm. The light source has a central wavelength of λ = 848 nm, a spectral bandwidth of Δλ = 46 nm and an output power P of 4.81 mW. To reduce the influence of finger curvature, a fixed cover glass G1 is placed.
The finger is pressed tightly against the surface of G1 during acquisition, so that most of the fingertip is imaged in the focal plane. Another identical cover glass G2 is placed at the corresponding position in the reference light path for dispersion compensation. In the case of large-area acquisition, the reflected light at the edge of the image is weak due to the oscillating mode of the galvanometer, leading to a defocusing phenomenon. Therefore, a small-focus collimator M (F260APC-C, Thorlabs Inc.) is installed in the sample arm to increase the focal depth of the system. The 3D volume data measured by the system comprise 1200 B-scan images, each consisting of 1500 A-scan lines of 500 pixels. The volume size of the OCT data is 1500 × 500 × 1200 (1 ≤ x ≤ 1500, 1 ≤ y ≤ 500, 1 ≤ z ≤ 1200). The offset for both A-scan and B-scan is set to 0.01 mm, measuring a fingertip area of 15 mm × 12 mm.

Figure 3: Optical coherence tomography system

The OCT data are deteriorated by speckle noise resulting from the coherent nature of laser radiation and the interferometric detection of the scattered light. The block-matching 3D method [41] is applied to remove the speckle noise and enhance structural information such as the viable epidermis junction.

3.2 Subcutaneous sweat pore image reconstruction

The sweat glands are located between the skin surface and the viable epidermis junction. An adaptive method is proposed for subcutaneous sweat pore image reconstruction in this subsection. The contours of the skin surface and viable epidermis junction are identified adaptively as references in Section 3.2.1. The subcutaneous sweat pore image reconstruction is then presented in Section 3.2.2.

3.2.1 Contour extractions of the skin surface and viable epidermis junction

This subsection presents a simple and fast contour extraction method for the skin surface and viable epidermis junction.
It contains three steps: feature point determination, skin surface contour extraction and viable epidermis junction contour extraction. First, as the red arrows in Figure 4a indicate, there are obvious intensity changes in the vertical direction at the skin surface and viable epidermis junction. Pixels with large intensity derivatives are thus defined as initial feature points. For a pixel at coordinate (x, y) (1 ≤ x ≤ 1500, 1 ≤ y ≤ 499) in a B-scan image with index z (1 ≤ z ≤ 1200), its vertical derivative value Iy(x, y) is calculated as

Iy(x, y) = I(x, y + 1) − I(x, y),  (1)

where I(x, y) is the intensity value of the pixel at coordinate (x, y) in the B-scan; a larger value of y corresponds to a deeper position. For each column of the OCT image, the N largest values of Iy are selected as initial feature points. N is set to 8 to ensure that feature points of the skin surface and viable epidermis junction are included in case of disturbances. The initial feature point set is denoted as P0 and shown in Figure 4b, containing the skin surface, the viable epidermis junction and outliers.

Figure 4: Illustration of contour extractions of the skin surface and viable epidermis junction. (a) Denoised B-scan patch; (b) the initial feature point set P0; (c) extracted skin surface contour Lg; (d) point set Pr; (e) green curve Q; (f) extracted viable epidermis junction contour Lej

Second, the contour of the skin surface is identified. The feature points of the skin surface are generally well connected and locally straight. The Hough transform [42] is applied, and line segments that basically cover the skin surface contour are obtained. These line segments are connected into a continuous line. Points whose distances from the line are less than 1 pixel are regarded as skin surface points, denoted Pg. Since the contour of the skin surface is a slightly smooth curve, quadratic polynomial fitting is utilized to model the skin surface contour Lg from Pg, as shown in Figure 4c.
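The feature-point step above (Equation (1) plus per-column selection of the N largest responses) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function name and array layout are assumptions.

```python
import numpy as np

def initial_feature_points(bscan: np.ndarray, n: int = 8) -> np.ndarray:
    """Select the n pixels with the largest vertical derivative in each column.

    bscan: 2D array of shape (H, W); the first axis is y, with larger y
    meaning a deeper position. Returns a boolean mask of the initial set P0.
    """
    # Equation (1): vertical forward difference I_y(x, y) = I(x, y+1) - I(x, y)
    iy = bscan[1:, :].astype(np.float64) - bscan[:-1, :].astype(np.float64)
    mask = np.zeros_like(iy, dtype=bool)
    # Row indices of the n largest derivative responses in every column
    idx = np.argsort(iy, axis=0)[-n:, :]
    cols = np.arange(iy.shape[1])
    mask[idx, cols] = True
    return mask
```

With N = 8 as in the paper, this keeps enough candidates per column to cover both the skin surface and the viable epidermis junction even when sweat glands produce spurious responses.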
Lastly, after removing Pg and the points above the skin surface from P0 as outliers, the remaining feature point set Pr includes the viable epidermis junction and outliers mainly caused by sweat glands, as shown in Figure 4d. Discrete points in Pr far from the viable epidermis junction are considered outliers. A point is defined as an outlier if it meets the following criterion:

y − Q(x) ≥ T,  (2)

where (x, y) is the coordinate of the point; Q is the quadratic polynomial curve fitted to the point set Pr to coarsely approximate the location of the viable epidermis junction, shown as the green curve in Figure 4e; Q(x) is the y coordinate of curve Q at abscissa x; and T is the threshold defined as

T = β × Td,  (3)

with

Td = (1/X) Σ_{x=1}^{X} (Q(x) − Lg(x)),  (4)

where β is the distance coefficient, set to 0.4, and Td is the average distance between the skin surface and the viable epidermis junction, which adapts the threshold to the depth of the junction. After the points satisfying Equation (2) are removed as outliers, the remaining point set is used to construct Pej by keeping only the top point in each column. Pej is usually discontinuous due to the low contrast of the viable epidermis junction, so cubic spline interpolation is employed for fitting. The obtained continuous and complete contour of the viable epidermis junction Lej is shown in Figure 4f.

3.2.2 Image reconstruction

An adaptive reconstruction method for the subcutaneous sweat pore image is presented in this subsection. The en face information between the skin surface and the viable epidermis junction is utilized for reconstruction. For every B-scan image, a line between the skin surface and the viable epidermis junction is estimated to cross the sweat glands. These lines are stacked together in order to form the subcutaneous pore image.
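The junction-contour step of Section 3.2.1 (coarse quadratic fit, the depth-adaptive threshold of Equations (2)-(4), and cubic spline fitting) can be sketched as follows. This is an illustrative sketch under assumed array layouts, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def junction_contour(pr_x, pr_y, lg, beta=0.4):
    """Estimate the viable epidermis junction contour Lej from point set Pr.

    pr_x, pr_y: integer column indices and depths of the remaining points Pr.
    lg: skin surface contour Lg sampled at every column (length X).
    Returns Lej evaluated at every column. Names are illustrative assumptions.
    """
    X = len(lg)
    # Coarse quadratic fit Q to Pr (the green curve in Figure 4e)
    q = np.polyval(np.polyfit(pr_x, pr_y, 2), np.arange(X))
    # Equations (3)-(4): threshold scaled by the average surface-junction gap
    td = np.mean(q - lg)
    t = beta * td
    # Equation (2): drop points lying T or more below the coarse curve
    keep = (pr_y - q[pr_x]) < t
    kx, ky = pr_x[keep], pr_y[keep]
    # Keep only the topmost (smallest y) point per column -> Pej
    order = np.lexsort((ky, kx))
    kx, ky = kx[order], ky[order]
    first = np.r_[True, np.diff(kx) != 0]
    # Cubic spline interpolation yields the continuous contour Lej
    return CubicSpline(kx[first], ky[first])(np.arange(X))
```

Because Td tracks the actual surface-to-junction distance, the same β = 0.4 works across fingers with different epidermis depths.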
For a B-scan, the line can be obtained as

Lpl(x, z) = Lg(x, z) + D(x, z),  (5)

where z is the index of the B-scan and D is the depth, adaptively generated as

D(x, z) = round(λ (Lej(x, z) − Lg(x, z))), 0 < λ < 1,  (6)

where λ determines how deep the line Lpl lies between Lg and Lej, and round(·) is the rounding function. Thus, the proposed method adapts to the depth of the viable epidermis junction and never exceeds it. In the conventional fix-depth method, D is instead set as

D(x, z) = d,  (7)

where d is a fixed value. Consequently, the subcutaneous pore image can be reconstructed as

Ip(x, z) = I(x, Lpl(x, z), z),  (8)

where I(x, y, z) denotes the intensity value at coordinate (x, y, z) in the OCT volume data. Examples of the pore images reconstructed by the two methods are shown in Figure 5, where d is set to 30 and λ is set to 0.7, respectively. The subcutaneous pore image estimated by the fix-depth method may contain unclear white areas, as indicated by the green arrows. The pore image estimated by the adaptive method has higher quality than that of the fix-depth method. In the subcutaneous pore image, the white dots represent sweat pores, which are brighter than the background. Unlike surface sweat pores, which may be perspiring and connected to their neighbouring valleys, subcutaneous sweat pores are always isolated dots distributed on the ridges. More discussion is presented in Section 5.

Figure 5: Demonstration of subcutaneous pore image reconstruction

4 SUBCUTANEOUS SWEAT PORE EXTRACTION

4.1 Proposed dilated U-Net

In this section, we propose a dilated U-Net that combines the U-Net with dilated convolution to extract sweat pores from the subcutaneous pore image. Pore extraction is a task of finding dense, small and isolated targets. Here, U-Net is utilized to achieve pixel-wise segmentation of sweat pores.
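Before turning to the network, note that the adaptive reconstruction of Section 3.2.2 (Equations (5), (6) and (8)) reduces to a few array operations. A minimal sketch, assuming the volume and both contours are available as NumPy arrays (the names are illustrative):

```python
import numpy as np

def reconstruct_pore_image(volume, lg, lej, lam=0.7):
    """Adaptive subcutaneous pore image reconstruction.

    volume: OCT data indexed as volume[x, y, z].
    lg, lej: skin surface and viable epidermis junction depths, shape (X, Z).
    lam: weight with 0 < lam < 1 controlling how deep the sampled line lies.
    """
    # Equation (6): depth-adaptive offset; never crosses the junction
    d = np.round(lam * (lej - lg)).astype(int)
    # Equation (5): sampling line between the two contours
    lpl = lg.astype(int) + d
    x, z = np.meshgrid(np.arange(volume.shape[0]),
                       np.arange(volume.shape[2]), indexing="ij")
    # Equation (8): pick one en face intensity per (x, z)
    return volume[x, lpl, z]
```

The fix-depth baseline of Equation (7) is the special case where `d` is a constant array, e.g. `np.full_like(lg, 30, dtype=int)`.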
There are four downsampling operations (max pooling) in the encoding stage of the standard U-Net, which increase the receptive field and provide a degree of feature invariance. Each downsampling halves the width/height of the feature maps and hence may cause information loss for small objects. If all the max pooling layers in the U-Net were simply removed, the receptive field of the network would shrink and the computational burden would increase. To balance efficiency and effectiveness, half of the max pooling layers in the U-Net are removed here. Dilated convolution [29] is a variation of convolution that enlarges the receptive field without reducing the feature resolution. Hence, dilated convolution is integrated to recover the receptive field, which allows better segmentation of the sweat pores. The proposed dilated U-Net is shown in Figure 6. The input of the network is a subcutaneous sweat pore image, and the output is a pixel-wise probability map. The network consists of a contracting stage (left side) and an expansive stage (right side). In the contracting stage, the number of feature maps grows and their sizes shrink, while the expansive stage mostly performs the opposite. As shown in Figure 6, each blue box represents multi-channel feature maps, with the number of channels shown on top of each box; the white boxes are copied feature maps, and arrows with different colours represent different operations/units.

Figure 6: The dilated U-Net architecture

The contracting stage contains three convolution blocks, two dilated blocks and two max pooling layers, implemented alternately. Each convolution block consists of two convolution units, each comprising a 3 × 3 convolution layer, a rectified linear unit (ReLU) activation layer [43] and a batch normalization (BN) layer [44].
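The key property of dilated convolution used here, namely a larger receptive field at unchanged feature-map size, can be seen in a minimal single-channel NumPy sketch (an illustration of the operation itself, not the network code):

```python
import numpy as np

def dilated_conv2d(x, k, rate=1):
    """'Same'-padded 3x3 dilated convolution on a single channel.

    With a 3x3 kernel, padding equal to the dilation rate keeps the output
    the same size as the input, while the kernel spans (2*rate + 1) pixels
    per side, i.e. the receptive field grows with the rate.
    """
    pad = rate
    xp = np.pad(x, pad)
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    # Accumulate the 9 taps, each offset by `rate` pixels from its neighbour
    for i in range(3):
        for j in range(3):
            out += k[i, j] * xp[i * rate:i * rate + h, j * rate:j * rate + w]
    return out
```

Stacking rates 1, 2 and 3, as the dilated blocks below do, covers the enlarged receptive field densely rather than with the gaps a single large rate would leave.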
Each dilated block consists of three dilated convolution units, each comprising a 3 × 3 dilated convolution layer, a ReLU activation layer and a BN layer. Dilated convolution enlarges the receptive field without loss of resolution. The dilated block uses three dilated convolutions with dilation rates of 1, 2 and 3, which avoids the gridding effect [45]. The two max pooling layers are placed before the two dilated blocks, respectively, each halving the size of the feature maps. The right part of the dilated U-Net is the expanding stage. It contains four convolution blocks and two deconvolution layers. Each deconvolution layer increases the resolution and decreases the number of feature maps. The result of each deconvolution is concatenated with the features from the contracting stage at the same level, which allows low-level global information to be utilized for pore segmentation. Finally, a 1 × 1 convolution and a Sigmoid function are used to generate the probability map, which is binarized with a threshold of 0.6.

4.2 Network training

The network is trained by minimizing the dice loss [46] between the predicted result and the ground truth. The dice loss suits the segmentation of unbalanced categories (the sweat pores occupy small areas in the pore image) and is defined as

L = 1 − (2 Σ_{i=1}^{N} p_i l_i) / (Σ_{i=1}^{N} p_i + Σ_{i=1}^{N} l_i),  (9)

where N is the number of pixels, p_i is the predicted probability of pixel i, and l_i is the value of the corresponding label. To optimize network learning, adaptive moment estimation (ADAM) [47] is used with adaptive learning rates. The initial learning rate is set to 10^−4 and is reduced by a factor of 0.8 if the dice loss does not decrease for 10 consecutive epochs. Training ends early if the learning rate reaches a minimum of 10^−8. A batch size of 4 is used and the network is trained for 100 epochs.
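The dice loss of Equation (9) is straightforward to implement; a minimal NumPy sketch (the `eps` guard against an empty union is an implementation detail, not from the paper):

```python
import numpy as np

def dice_loss(pred, label, eps=1e-7):
    """Dice loss of Equation (9) on flattened probability/label arrays.

    pred: predicted probabilities p_i in [0, 1]; label: binary ground truth
    l_i. A perfect prediction gives a loss near 0; a disjoint one gives 1.
    """
    pred = pred.ravel().astype(float)
    label = label.ravel().astype(float)
    intersection = np.sum(pred * label)
    return 1.0 - (2.0 * intersection) / (np.sum(pred) + np.sum(label) + eps)
```

Because both sums are dominated by the sparse foreground, the loss is insensitive to the large background area, which is why it suits the heavily unbalanced pore/background split.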
In the training stage, each image with a resolution of 1500 × 1200 is divided into 99 non-overlapping sub-images with a resolution of 128 × 128. Two types of flips (vertical and horizontal) and three types of rotation (90, 180 and 270 degrees) are used for data augmentation. The experiments were run on an Intel Xeon E5 processor and an Nvidia TITAN Xp. The training time was approximately 7 h.

5 EXPERIMENTS

5.1 Database and evaluation criteria

In our study, 60 OCT volumes are collected from 60 fingers. The adaptive method and the fix-depth method are both used to reconstruct subcu