YOLOv4 Object Detection Model for Nondestructive Radiographic Testing in Aviation Maintenance Tasks
2021; American Institute of Aeronautics and Astronautics; Language: English
DOI: 10.2514/1.J060860
ISSN: 1533-385X
Authors: Zhi-Hao Chen, Jyh-Ching Juang
Topic(s): Non-Destructive Testing Techniques
Abstract (Open Access)

Zhi-Hao Chen (https://orcid.org/0000-0002-9455-0833), Air Force Institute of Technology, Kaohsiung City 820, Taiwan, Republic of China; and Jyh-Ching Juang, National Cheng Kung University, Tainan City 701, Taiwan, Republic of China. Published online: 7 October 2021, https://doi.org/10.2514/1.J060860

I. Introduction

The purpose of this paper is to mitigate the risk of civil aircraft damage accidents; to that end, it is of paramount importance to perform quality-assured safety inspections during aviation maintenance, manufacturing, and overhaul. If the online inspections and key maintenance safety tasks for a failing power plant are neglected, flight safety problems may result. This kind of incident, an "uncontained failure," has occurred in two turbofan failure events: one on United Airlines Flight 328 (UA328), a Boeing 777-222, on February 20, 2021, and another on a Boeing 737-700 passenger aircraft in April 2018 (Fig. 1) [1]. The main cause of such civil aviation accidents is that a proper inspection was not implemented when the aircraft and engine were repaired in the plant: components had exceeded their service life, and hidden fatigue defects in the engine blades went undetected by the inspector. Fatigue cracks on metal surfaces continue to expand and lead to eventual failure. It is thus desirable to augment the existing inspection procedure with artificial intelligence that recognizes potential cracks through automatic nondestructive testing (NDT) of civil aeroengines.
Therefore, this paper proposes a deep learning (DL) algorithm for fast object detection and identification in NDT systems, optimized for graphics processing unit (GPU) parallel computation. Compared with other state-of-the-art object detectors, the proposed YOLOv4 runs twice as fast as the Faster Region-based Convolutional Neural Network (Faster R-CNN) detector [16]. Its computational cost, measured by the theoretical indicator of billions of floating-point operations (BFLOPs) [2], is suitable for real-time operation on an aviation-maintenance NDT dataset using a conventional GPU. Unlike earlier two-stage object detectors such as the R-CNN, Faster R-CNN, and Region-based Fully Convolutional Network (R-FCN) series, YOLO, the most representative one-stage detector, extracts features by convolution in a deterministic manner, which makes single-GPU training efficient and practical. This one-stage detector modifies the path-aggregation blocks in the neck, including the Path Aggregation Network (PAN) [3] and spatial pyramid pooling (SPP) [17]. As a result, the YOLOv4 model uses plug-in modules that increase the inference cost by a small amount but can significantly improve the accuracy of object detection [27].

Fig. 1 CFM56-7B turbofan and Pratt and Whitney PW4077 turbofan failures. (Adapted from [1] and reprinted with permission from the National Transportation Safety Board, 2021.)

II. Techniques and Methods

According to the Federal Aviation Administration's (FAA's) Maintenance Review Board (MRB) [7], each in-service civil aircraft must pass periodic A-, B-, C-, and D-level checks covering maintenance, repair, and overhaul (MRO) work [8].
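For reference, the BFLOP indicator cited earlier for comparing detector compute cost can be estimated layer by layer; a minimal sketch for a standard convolution layer follows (the layer dimensions used in the example are illustrative, not values reported in the paper):

```python
def conv_bflops(k, c_in, c_out, h, w):
    """Estimate the cost of one k x k convolution layer in billions of
    floating-point operations: one multiply and one add per weight per
    output position (bias and activation costs are ignored)."""
    return 2.0 * k * k * c_in * c_out * h * w / 1e9

# e.g. a 3x3 convolution mapping 256 -> 512 channels on a 52x52 feature map
print(round(conv_bflops(3, 256, 512, 52, 52), 2))  # 6.38
```

Summing this quantity over every layer of a network gives the per-image BFLOP figure used when comparing detectors such as YOLOv4 and Faster R-CNN.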
Civil aircraft structural and engine components are inspected for hidden cracks within their internal structure using NDT during the D-level maintenance check [9]. The National Transportation Safety Board's Global Aviation Safety Bulletin [10] states that "a single abnormal engine can cause a serious aircraft accident." Such an accident occurred on United Airlines Flight 328 (UA328), a Boeing 777-222 airliner, when an engine blade ruptured, puncturing the aircraft's cabin and damaging the fuselage, with no reported injuries (Fig. 2) [1]. This section discusses improvements to the convolutional architecture for fast feature embedding (Caffe) model layers, the YOLOv4 model, and the multiple DL algorithms used for object identification and classification. The improved CNN and YOLOv4 models are adopted and then delineated, and some modifications to the existing one-stage neural network are highlighted. Two-stage network models are generally more accurate architectures than one-stage models; yet even though Faster R-CNN [16] and R-FCN [21] are fast networks, neither is more accurate or faster than the YOLOv4 model. Both Faster R-CNN and R-CNN can take advantage of a better feature extractor, whereas the benefit is less significant for the SSD and DetectNet algorithms. The one-stage network model performs well even with a simple extractor and can match the accuracy of detectors that use better extractors. For this reason, we chose the YOLO structure as the basis of our modified and improved DL model, making it faster and more accurate. This paper uses SPP and PANet [14] as the parameter-aggregation methods between different backbone levels and detector levels, rather than the feature pyramid network (FPN) used in YOLOv3.
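The SPP block adopted here applies stride-1 max pooling at several kernel sizes and concatenates the results along the channel axis, enlarging the receptive field without changing the spatial resolution. A minimal NumPy sketch of that idea follows (the 5/9/13 kernel sizes follow the common YOLOv4 convention; the feature-map shape is made up for illustration):

```python
import numpy as np

def maxpool_same(x, k):
    """Stride-1 max pooling with 'same' padding on a (C, H, W) feature map:
    pad with -inf so border windows ignore the padding."""
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)), constant_values=-np.inf)
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k), axis=(1, 2))
    return win.max(axis=(-1, -2))  # -> (C, H, W)

def spp_block(x, kernels=(5, 9, 13)):
    """Concatenate the identity branch with pooled branches along channels."""
    return np.concatenate([x] + [maxpool_same(x, k) for k in kernels], axis=0)

feat = np.random.rand(8, 13, 13).astype(np.float32)
out = spp_block(feat)
print(out.shape)  # (32, 13, 13): 4x the input channels, same spatial size
```

In the actual network, a 1x1 convolution typically follows the concatenation to compress the channels back down; that step is omitted here for brevity.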
Finally, we chose the SPP add-on module, the CSPDarknet53 backbone, the SPP and PAN path-aggregation neck, and the YOLOv3 (anchor-based) [2] head as the architecture of the proposed YOLOv4 neural network, as shown in Fig. 3.

Fig. 2 UA1175 No. 11 fan blade root section fracture surface. (Adapted from [1] and reprinted with permission from the National Transportation Safety Board, 2021.)

Fig. 3 Darknet-Caffe YOLOv4 network structure.

III. Experiment Process, Method Setup, and Results

To achieve high-speed, accurate automatic classification in NDT, the YOLOv4 algorithm was added to the Caffe model layers [11] as the basic framework for training and testing a one-stage defect-detector paradigm on radiographic testing (RT) images. Given the high manual failure rate in NDT, a limited amount and insufficient variety of radiographic defect samples can seriously degrade the accuracy of a training model intended to cope with multiple inspection types. Because the YOLOv4 model was to be deployed on an artificial intelligence (AI) chip with a C++ compiler, converting the Caffe model was a necessary task. A preliminary experiment used a CNN model to enhance and improve existing automated NDT diagnostics [12]. Aircraft maintenance technical managers, inspired by AI technology, have updated recent work operation manuals for aircraft production and maintenance so that AI automatically learns to describe the content of defect-area marks in RT image files. This capability benefits attention mechanisms on feature maps for NDT, enabling faster and more accurately understood repairs. The paper first briefly reviews the related literature on the automatic inspection of aeroengines using RT [9] with a DL model.

A. System Setup

In this paper, image processing techniques based on the YOLOv4 object detection model are combined with NDT techniques to improve the inspection of aircraft structural and engine components. Some renowned manufacturers of civil passenger and cargo aircraft have recently updated their factories' NDT machinery and equipment so that engineers can visually inspect all RT images faster, reducing the burden of judging images by eye; long hours and work fatigue increase the occurrence of mistakes during manual work. The contributions of this experiment are the completion of a dataset of over 6100 labeled RT images of civil aircraft and engines for machine learning computer vision applications, and the successful compilation of a unique DL framework model for a one-stage defect-detector paradigm. The results can help aircraft and engine factories update their machine capabilities for detecting defective objects in RT images; the DL network model we designed achieved a learning accuracy of 80%. As predicted, the YOLOv4 object detection model can be installed easily and cheaply alongside the RT imaging machine, saving the data to SD disks. This paper shows how a plant's machinery was updated and how a small model was retrained for deployment on the NVIDIA® Jetson™ TX2 platform.

B. Dataset Preparation

All training and testing image datasets were taken from the archives of civil aircraft fuselage and engine repair records. Preprocessing with the Python Imaging Library (PIL) was then executed to convert the image file format and resize each output to 480×480 pixels, stored as Lightning Memory-Mapped Database (LMDB) data with labels in a .txt file format [20], as depicted in Fig. 4.

In this dataset, defects are categorized as a) slag inclusion, b) undercut, c) incomplete penetration, d) blowhole, e) crack, f) incomplete fusion, g) welding spatter, or h) porosity. RT images under different lighting conditions and resolutions were labeled with these defects. The eight defect classes are stored in the first row of the category table string, with a dataset of over 6100 labeled RT images, as depicted in Fig. 5.

Fig. 4 a) LMDB data store; b) RT files. (Adapted from Chen and Juang [35], 2021.)

Fig. 5 Classification of defect labels. (Adapted from Chen and Juang [35], 2021.)

C. YOLOv4-Based Edge Object Detection Techniques

Compared with traditional machine learning methods, the YOLOv4 model can perform object detection by learning representations from the entire image. The detector consists of a backbone pretrained on the Darknet53 [18] network and a head that predicts object classes and bounding boxes. These DL models can quickly locate defects in material components inside a civil aircraft fuselage. An expanded convolutional neural network chain is proposed by applying the SPP block over the CSPDarknet53 backbone; this extends the receptive field to extract richer information about defects in civil aircraft fuselage and engine parts. The network integrates the global and local information of the feature map to locate engine defects in the SPP plane more precisely and accurately. The YOLOv3 (anchor-based) head is used to extract regions of interest (ROIs) [19] to detect hidden cracks in composite material from single and multiple NDT images with high reliability in the SPP add-on module, and the global spatial information of the ROI is then used to detect internal structural features.
The Darknet-Caffe model proposes an improvement based on the YOLOv4 (one-stage detector) structure: a simple method for creating bypass nets from the remaining layers of the encoder-decoder neural block. Based on the improved YOLOv4, this DL model is unique, simple to code, and fast at detecting material defects such as cracks. In Fig. 6, the crack region is highlighted with a red square, and manually labeled damage is marked with solid red lines indicating position, size, shape, and direction. The model is a one-stage detector structure; such detectors achieve impressive frames per second (FPS) [9] by using lower-resolution images, trading only a small amount of accuracy for speed. Like the modified R-CNN models, the modified YOLOv4 object detector is composed of four parts: the input RT images, the backbone, the neck, and the dense-prediction head.

Fig. 6 One-stage detector parts. (Adapted from Chen and Juang [35], 2021.)

IV. Experimental Results and Discussion

The experimental results for YOLO depend on the performance of the region proposal module, so region modules composed from an R-CNN model can be updated to a YOLO neural network. A possible next step is to make the one-stage object detector anchor-free, such as RetinaNet [15]; the most representative one-stage models are YOLOv4 [16,17] and SSD [13].

A. System Setup

In this innovative Darknet-Caffe YOLOv4 DL object detection, classification, and recognition module, LMDB data stored in .txt file format and labeled defect feature maps were used to train the object-localization bounding box. The host computer is responsible for the entire data processing and training task. Once the model has been trained and validated on the host, it is compiled and ported to the NVIDIA Jetson TX2 embedded development board for inference.

B. Training Optimization

The new data-enhancement structure of the YOLOv4 model used in the experiments is improved and optimized; it consists of the backbone (CSPDarknet53 [18], CutMix); the neck (SPP [32], PAN [3]); the detector (CutMix [33], cross mini-batch normalization (CmBN), DropBlock regularization [34], CIoU loss [30]); and the head (YOLOv3 [2]). The loss function for a target detection task usually consists of two components: a classification loss and a bounding-box regression loss. YOLOv4 uses a bounding-box regression loss that has evolved into the DIoU and CIoU losses (2020) [30]. CmBN, defined as cross mini-batch normalization, collects statistics only between mini-batches within a single batch. These components require careful adjustment of the loss function so that the influence-factor functions in Eqs. (1) and (2) converge more quickly and the feature-extraction ability shown in Fig. 7 is enhanced.

Fig. 7 Bounding of cracks.

C. Self-Adversarial Training

To overcome this attention-based neural network issue, this paper employed a DL paradigm of optimal object detection to tackle both single and multiple detections. A simple experiment compares the different Faster R-CNN and YOLOv4 structure models. The mean average precision (mAP) results are listed in Table 1: the accuracy of YOLO is slightly higher than that of Faster R-CNN, and the loss score of Faster R-CNN is higher than that of YOLO. Moreover, the average elapsed times of the one-stage methods were much shorter than those of the two-stage methods. Our YOLOv4 is superior to the fastest detectors in terms of both speed and accuracy.

V. Conclusions

The results of this paper make a considerable contribution to improving the accuracy and efficiency of the NDT of civil aircraft and engines at the D-level inspection stage.
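The CIoU regression loss mentioned above combines three terms: the IoU overlap, the normalized distance between box centers, and an aspect-ratio consistency penalty [30]. A plain-Python sketch under those definitions (corner-format boxes are assumed, and the small epsilon is a numerical-stability assumption, not from the paper):

```python
import math

def ciou_loss(box_p, box_g):
    """CIoU loss for two boxes given as (x1, y1, x2, y2) corners:
    1 - IoU + center-distance penalty + aspect-ratio penalty."""
    px1, py1, px2, py2 = box_p
    gx1, gy1, gx2, gy2 = box_g
    # intersection over union
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # squared distance between box centers
    rho2 = (((px1 + px2) - (gx1 + gx2)) ** 2
            + ((py1 + py2) - (gy1 + gy2)) ** 2) / 4.0
    # squared diagonal of the smallest enclosing box
    cw = max(px2, gx2) - min(px1, gx1)
    ch = max(py2, gy2) - min(py1, gy1)
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and its trade-off weight alpha
    v = (4.0 / math.pi ** 2) * (math.atan((gx2 - gx1) / (gy2 - gy1))
                                - math.atan((px2 - px1) / (py2 - py1))) ** 2
    alpha = v / ((1.0 - iou) + v + 1e-9)
    return 1.0 - iou + rho2 / c2 + alpha * v
```

For identical boxes the loss is zero; for non-overlapping boxes it exceeds 1, with the center-distance term still providing a useful gradient, which is why CIoU converges faster than a plain IoU loss.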
The outcome of this experiment provides a new DL model that detects defect damage in RT image files, assisting aeroengine radiographic inspection systems. In the experiments, over 6100 RT images were pretrained and input; with a region model size of 480×480 pixels per film, the material defect classifier achieved a 0.9 mAP while taking only approximately 100 s. Considerable effort has been made to improve the accuracy and efficiency of the nondestructive inspection of jet engine fans through the YOLOv4 object detection algorithm and DL techniques, to the benefit of methods used in the aerospace industry.

R. Ohayon, Associate Editor

Acknowledgment

The second author would like to acknowledge the support of the Ministry of Science and Technology, Taiwan (Grant No. MOST 109-2224-E-006-005).

References

[1] "United Airlines Flight 328" and "Southwest Airlines Flight 1380," National Transportation Safety Board, NTSB/AAR-19/03, Feb. 2021.
[2] Redmon, J., and Farhadi, A., "YOLOv3: An Incremental Improvement," Computer Vision and Pattern Recognition, 2018, https://arxiv.org/abs/1804.02767v1.
[3] Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J., "Path Aggregation Network for Instance Segmentation," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8759–8768.
[4] Woo, S., Park, J., Lee, J.-Y., and Kweon, I. S., "CBAM: Convolutional Block Attention Module," Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 3–19.
[5] Yao, Z., Cao, Y., Zheng, S., Huang, G., and Lin, S., "Cross-Iteration Batch Normalization," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 12331–12340.
[6] Shin, H. C., Roth, H. R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D., and Summers, R. M., "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning," IEEE Transactions on Medical Imaging, Vol. 35, No. 5, 2016, pp. 1285–1298. https://doi.org/10.1109/TMI.2016.2528162
[7] Pontecorvo, J. A., "MSG-3—A Method for Maintenance Program Planning," SAE International, Oct. 1984. https://doi.org/10.4271/841485
[8] Mathaisel, D. F., "A Lean Architecture for Transforming the Aerospace Maintenance, Repair and Overhaul (MRO) Enterprise," International Journal of Productivity and Performance Management, Vol. 54, No. 8, 2005, pp. 623–644. https://doi.org/10.1108/17410400510627499
[9] Sikora, R., Baniukiewicz, P., Chady, T., Łopato, P., Piekarczyk, B., Psuj, G., Grzywacz, B., and Misztal, L., "Detection and Classification of Weld Defects in Industrial Radiography with Use of Advanced AI Methods," Far East Forum on Nondestructive Evaluation/Testing: New Technology and Application, 2013, pp. 12–17. https://doi.org/10.1109/FENDT.2013.6635520
[10] "DCA18MA142 SWA1380 Investigative Update," National Transportation Safety Board, 2018.
[11] Komar, M., Yakobchuk, P., Golovko, V., Dorosh, V., and Sachenko, A., "Deep Neural Network for Image Recognition Based on the Caffe Framework," IEEE Second International Conference on Data Stream Mining & Processing (DSMP), 2018, pp. 102–106. https://doi.org/10.1109/DSMP.2018.8478621
[12] Gong, Y., Shao, H., Luo, J., and Li, Z., "A Deep Transfer Learning Model for Inclusion Defect Detection of Aeronautics Composite Materials," Composite Structures, Vol. 252, Nov. 2020, Paper 112681. https://doi.org/10.1016/j.compstruct.2020.112681
[13] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A. C., "SSD: Single Shot MultiBox Detector," European Conference on Computer Vision—ECCV, Vol. 9905, Springer, Cham, 2016, pp. 21–37. https://doi.org/10.1007/978-3-319-46448-0_2
[14] Kannadaguli, P., "YOLO v4 Based Human Detection System Using Aerial Thermal Imaging for UAV Based Surveillance Applications," IEEE International Conference on Decision Aid Sciences and Application (DASA), 2020, pp. 1213–1219. https://doi.org/10.1109/DASA51403.2020.9317198
[15] Yang, F., Fan, H., Chu, P., Blasch, E., and Ling, H., "Clustered Object Detection in Aerial Images," IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 8310–8319.
[16] Ren, S. Q., He, K. M., Girshick, R., and Sun, J., "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 39, No. 6, 2017, pp. 1137–1149. https://doi.org/10.1109/TPAMI.2016.2577031
[17] Redmon, J., and Farhadi, A., "YOLO9000: Better, Faster, Stronger," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 7263–7271, https://arxiv.org/abs/1612.08242v1.
[18] Wang, C.-Y., Liao, H.-Y. M., Wu, Y.-H., Chen, P.-Y., Hsieh, J.-W., and Yeh, I.-H., "CSPNet: A New Backbone That Can Enhance Learning Capability of CNN," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 390–391.
[19] Xiao, Z. F., Gong, Y. P., Long, Y., Li, D. R., Wang, X. Y., and Liu, H., "Airport Detection Based on a Multiscale Fusion Feature for Optical Remote Sensing Images," IEEE Geoscience and Remote Sensing Letters, Vol. 14, No. 9, 2017, pp. 1469–1473. https://doi.org/10.1109/LGRS.2017.2712638
[20] Henry, G., "Howard Chu on Lightning Memory-Mapped Database," IEEE Software, Vol. 36, No. 6, 2019, pp. 83–87. https://doi.org/10.1109/MS.2019.2936273
[21] Zhang, J., Cosma, G., and Watkins, J., "Image Enhanced Mask R-CNN: A Deep Learning Pipeline with New Evaluation Measures for Wind Turbine Blade Defect Detection and Classification," Journal of Imaging, Vol. 7, No. 3, 2021, pp. 1–20. https://doi.org/10.3390/jimaging7030046
[22] Aswathy, P., and Mishra, D., "Deep GoogLeNet Features for Visual Object Tracking," IEEE 13th International Conference on Industrial and Information Systems (ICIIS), 2018, pp. 60–66. https://doi.org/10.1109/ICIINFS.2018.8721317
[23] Buck, I., "GPU Computing: Programming a Massively Parallel Processor," International Symposium on Code Generation and Optimization (CGO '07), 2007, pp. 17–18. https://doi.org/10.1109/CGO.2007.13
[24] Subramanian, A., and Schwartz, R., "Reference-Free Inference of Tumor Phylogenies from Single-Cell Sequencing Data," IEEE 4th International Conference on Computational Advances in Bio and Medical Sciences (ICCABS), 2014, pp. 1–2. https://doi.org/10.1109/ICCABS.2014.6863944
[25] Chang, Y., Huang, J. C., Su, L., Chen, Y. A., Chen, C., and Chou, C., "Localized Surface Plasmon Coupled Fluorescence Fiber-Optic Biosensor for Severe Acute Respiratory Syndrome Coronavirus Nucleocapsid Protein Detection," 14th OptoElectronics and Communications Conference, 2009, pp. 1–2. https://doi.org/10.1109/OECC.2009.5215723
[26] Saari, I. S., Mahmud, Z., and Abdullah, N. N., "Diagnosis of Response Behavioural Patterns Towards the Risk of Pandemic Flu Influenza A (H1N1) of Urban Community Based on Rasch Measurement Model," International Conference on Statistics in Science, Business and Engineering (ICSSBE), 2012, pp. 1–0. https://doi.org/10.1109/ICSSBE.2012.6396567
[27] Bochkovskiy, A., Wang, C.-Y., and Liao, H.-Y. M., "YOLOv4: Optimal Speed and Accuracy of Object Detection," arXiv preprint arXiv:2004.10934, 2020.
[28] Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L., "Microsoft COCO: Common Objects in Context," European Conference on Computer Vision, Vol. 8693, Springer, Cham, 2014, pp. 740–755. https://doi.org/10.1007/978-3-319-10602-1_48
[29] Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., and Savarese, S., "Generalized Intersection over Union: A Metric and a Loss for Bounding Box Regression," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 658–666.
[30] Zheng, Z., Wang, P., Liu, W., Li, J., Ye, R., and Ren, D., "Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression," Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 12993–13000. https://doi.org/10.1609/aaai.v34i07.6999
[31] Hu, J., Shen, L., and Sun, G., "Squeeze-and-Excitation Networks," IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 7132–7141.
[32] He, K., Zhang, X., Ren, S., and Sun, J., "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 37, No. 9, 2015, pp. 1904–1916. https://doi.org/10.1109/TPAMI.2015.2389824
[33] Yun, S., Han, D., Oh, S. J., Chun, S., Choe, J., and Yoo, Y., "CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features," Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6023–6032.
[34] Ma, N., Zhang, X., Zheng, H.-T., and Sun, J., "ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design," Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 116–131.
[35] Chen, Z., and Juang, J., "Attention-Based YOLOv4 Algorithm in Non-Destructive Radiographic Testing for Civic Aviation Maintenance," Preprints, April 2021, 2021040653, pp. 1–15. https://doi.org/10.20944/preprints202104.0653.v1

Table 1 Inspection results of four methods

Method        | Accuracy, mAP  | T, s
Faster R-CNN  | [0.5, 0.78229] | 0.2708
SSD           | [0.5, 0.79129] | 0.3982
YOLO-v3       | [0.5, 0.79229] | 0.0298
YOLO-v4       | [0.5, 0.72239] | 0.0326

Note: Adapted from Chen and Juang [35], 2021.

Copyright © 2021 by Zhi-Hao Chen and Jyh-Ching Juang. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

Received 29 April 2021; accepted 13 August 2021; published online 7 October 2021.