Automatic Container Code Recognition From Multiple Views
2016; Electronics and Telecommunications Research Institute; Language: English
DOI: 10.4218/etrij.16.0014.0069
ISSN: 2233-7326
Authors: Youngwoo Yoon, Kyu-Dae Ban, Hosub Yoon, Jaehong Kim
Topic(s): Advanced Image and Video Retrieval Techniques
ETRI Journal, Volume 38, Issue 4, pp. 767-775
First published: 01 August 2016, https://doi.org/10.4218/etrij.16.0014.0069. Citations: 6.
Youngwoo Yoon (corresponding author, [email protected]), Kyu-Dae Ban ([email protected]), Hosub Yoon ([email protected]), and Jaehong Kim ([email protected]) are with the SW & Content Research Laboratory, ETRI, Daejeon, Rep. of Korea.

Abstract
Automatic container code recognition from a captured image is used for tracking and monitoring containers, but it often fails when the code is not captured clearly. In this paper, we increase the accuracy of container code recognition by using multiple views. A character-level integration method combines the recognized codes from different single views to generate a new code. A decision-level integration then selects the most probable result from among the single-view codes and the new integrated code. Experiments confirmed that the proposed integration works successfully. Recognition from single views achieved an accuracy of around 70% for test images collected on a working pier, whereas the proposed integration method achieved an accuracy of 96%.

I. Introduction
A shipping container is a standardized steel box that carries and stores freight. Containers are widely used by transportation and trading companies owing to their safety and efficiency. There are over 17 million containers in the world, and each one has a unique identification code consisting of four letters, six numbers, and a check digit. The format of these codes has been standardized as ISO 6346 [1]. The flow of a container is tracked by reading the code printed on its five exposed sides.

Automatic container code recognition (ACCR) systems have been developed for automated transportation and for monitoring container flows. The two most popular solutions are radio frequency identification (RFID) and optical character recognition (OCR) technologies, each with its own relative strengths and weaknesses. An RFID-based solution has nearly perfect accuracy, but requires high installation and maintenance costs. An OCR-based solution has a lower accuracy of under 95%, but does not require tags to be installed on the containers.
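The ISO 6346 code structure mentioned above can be made concrete with a short routine for the check digit, which is also what the decision-level integration in Section IV later uses to validate recognized codes. The sketch below follows the public ISO 6346 rule (letter values skip multiples of 11, characters are weighted by powers of two, and the weighted sum is taken modulo 11); it is an illustrative implementation, not code from the paper, and the sample code string is only an example.

```python
import string

def iso6346_letter_values():
    """Map A-Z to ISO 6346 numerical values (multiples of 11 are never assigned)."""
    values, v = {}, 10
    for ch in string.ascii_uppercase:
        if v % 11 == 0:          # skip 11, 22, 33
            v += 1
        values[ch] = v
        v += 1
    return values

_LETTER_VALUES = iso6346_letter_values()

def iso6346_check_digit(code10: str) -> int:
    """Check digit for the first 10 characters (4 letters + 6 digits) of a container code."""
    assert len(code10) == 10
    total = 0
    for i, ch in enumerate(code10.upper()):
        value = _LETTER_VALUES[ch] if ch.isalpha() else int(ch)
        total += value * (2 ** i)        # position-dependent weight
    return (total % 11) % 10             # a remainder of 10 maps to 0

def is_valid_container_code(code11: str) -> bool:
    """True if the 11-character code has a consistent check digit."""
    return iso6346_check_digit(code11[:10]) == int(code11[10])

# Example: the widely cited sample code CSQU3054383 validates.
print(is_valid_container_code("CSQU3054383"))   # True
```

Because only roughly one in ten random corruptions of a code still satisfies this test, it is a cheap but effective filter for rejecting misrecognized codes.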
In this paper, we focus on OCR-based container code recognition. Existing studies on ACCR have reported accuracies of around 92% [2], [3]. The primary reason for this relatively low accuracy is that the codes printed on the container planes become damaged or contaminated by rust and by structural damage from rough handling and exposure to sea water. To increase the accuracy, we propose container code recognition from multiple views. Previous studies have recognized a single image per container, and thus identifying a container is often impossible when the code in that image is not sufficiently clean. The proposed method observes a container from multiple views and integrates the recognition results not only at the decision level but also at the character level.

The gates of a pier are used as the target site. Trucks carrying containers pass through the gates to load the containers onto ships. Currently, office workers check the codes by sight to determine whether the containers are passing through the correct gates. However, people can make mistakes, and although the frequency of such mistakes is not very high, a single mistake can be costly. ACCR helps reduce such mistakes by double-checking the codes. An entrance gate has five CCTV cameras capturing the top, front, rear, left, and right planes of a container placed on a truck. The configuration of the cameras and some sample images are shown in Figs. 1 and 2, respectively.

Figure 1. Configuration of the cameras capturing each plane of a container. Five cameras are installed to capture the top, front, rear, left, and right planes. Image capturing is triggered by infrared sensors.

Figure 2. Sample images of (a) top, (b) front, (c) left, (d) right, and (e) rear views.

In the following sections, we first review existing methods for recognizing codes in natural scene images and for information fusion. Section III introduces our code recognition algorithm for a single view, and Section IV describes how we combine the results from multiple views. Experimental results are then presented in Section V.

II. Related Works
OCR in natural scene images has been widely studied for license plates, signboards, road signs, and other items. The conventional pipeline includes localization of the regions of interest, character segmentation, and character recognition. For the localization step, many studies have used edge statistics, based on the fact that characters have more edges than other parts of an image [4]. Some studies have additionally used mathematical morphology [5] and heuristic blob analysis [3]. Localization using appearance learning has recently been increasing in popularity [4]. This approach trains a classification model on numerous images of regions of interest; the model then determines whether an input region is a region of interest or background. Signboards in street-view images have been successfully found using appearance-based detection [6].

The character segmentation step finds the exact positions of the characters within a region of interest found in the localization step. A segmentation scheme using connected component analysis on binary images has been used in many studies [4], [7]. Such segmentation often fails in natural scene images because damaged characters and unexpected noise may exist. Different approaches for better segmentation have been proposed.
Local binarization [8], combining multiple binary images [9], and pixel projection with post-processing [3] have all worked successfully. The last step is character recognition, which has been well studied in the fields of document analysis and handwritten character recognition [10]. Characters on license plates and road signs appear in a fixed font, and thus character recognition is not problematic; containers also use only a few different fonts.

Container code recognition has been studied in a few papers. Wu and co-workers implemented a complete process of code localization, character segmentation, and recognition [3]. They located the codes using a text-line analysis and then segmented the characters using a histogram of the vertical projection of a binary edge image. Their work also suppresses, during the character segmentation step, the reflections generated by the zigzag patterns of the containers. Another study, on recognizing the rear surface of containers, proposed a character segmentation method with a dynamic design that recursively finds the optimal segmentation by considering the character likelihoods and the spaces between characters [2].

Recognition systems using multiple views, multiple instances, or multiple modalities have also been studied. A survey on information fusion describes two levels of fusion [11]. Feature-level fusion combines features from multiple sources, and a classifier then analyzes the combined features. In decision-level fusion, each source is analyzed independently, and an aggregator gathers the results and makes a final decision. A few studies have introduced multiple views for computer vision problems. The recognition of objects, faces, and gaits uses multiple views to reduce the negative effect of viewing angles [12], [13]. Multiple views are also used for tracking, where they reduce occlusions between objects [14]. Our study uses multiple views of a container along with a hybrid fusion method, which is a combination of feature- and decision-level fusion.

III. Code Recognition from a Single View
In this section, we describe a code recognition method for a single view. The recognition follows the conventional pipeline of code localization, character segmentation, and character recognition. Figure 3 shows these steps with sample results. The same processes are applied to all planes, but the rear planes require additional handling for vertical bars. Note that the recognition from a single view is the baseline for the recognition from multiple views: the proposed algorithm, which uses multiple views, builds on the recognition results of each plane of a container.

Figure 3. Code recognition steps: (a) code localization, where the red rectangles show detected character regions and the green dotted box shows the location of the code; (b) character segmentation, where the first image is the initial segmentation result and the second shows the search for missing blobs; and (c) the character recognition results.

1. Code Localization
Codes are localized by grouping the characters. We used an appearance-based character detector that has proven effective for license plate localization [15]. The detector finds characters in an input image by inspecting every region with a sliding window across the image. The AdaBoost algorithm is used to train the detector, and the training set includes more than 40,000 character images and 260 background images (also known as negative images) extracted from the container code images. As shown in Fig. 3(a), many false detections occur because natural scene images contain edges that look similar to those of numbers and letters. From these detections, we find vertical or horizontal alignments that indicate a container code.
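To make this step concrete, the sketch below shows one possible implementation of appearance-based character detection followed by alignment-based grouping. It assumes a Viola-Jones-style cascade trained offline (the file name container_char_cascade.xml is hypothetical), uses OpenCV's standard CascadeClassifier.detectMultiScale call for the sliding-window scan, and groups detections into a code line by clustering their vertical centers; it is a simplified illustration, not the authors' implementation.

```python
import cv2
import numpy as np

def localize_code(image_bgr, cascade_path="container_char_cascade.xml"):
    """Detect character candidates and group them into a horizontal code line."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(cascade_path)   # hypothetical trained cascade

    # Sliding-window detection over scales; returns (x, y, w, h) boxes.
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3,
                                      minSize=(10, 15))
    if len(boxes) == 0:
        return None

    # Keep detections whose vertical centers roughly agree, a crude stand-in
    # for finding a horizontal alignment of characters.
    centers_y = boxes[:, 1] + boxes[:, 3] / 2.0
    median_y = np.median(centers_y)
    median_h = np.median(boxes[:, 3])
    aligned = boxes[np.abs(centers_y - median_y) < 0.5 * median_h]
    if len(aligned) < 4:            # too few aligned characters to be a code
        return None

    # Bounding box of the aligned group is the candidate code region.
    x0, y0 = aligned[:, 0].min(), aligned[:, 1].min()
    x1 = (aligned[:, 0] + aligned[:, 2]).max()
    y1 = (aligned[:, 1] + aligned[:, 3]).max()
    return (x0, y0, x1 - x0, y1 - y0)
```

A vertically written code could be handled analogously by clustering the horizontal centers of the detections instead.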
2. Character Segmentation
In the previous code localization step, characters are detected in natural scene images, but the accuracy is insufficient: appearance-based detection tends to produce oversized or undersized bounding boxes. The character segmentation step finds more precise character bounding boxes. We use connected component analysis (CCA) on a binary image. First, the input code image is binarized using Niblack's method, which has proven more effective for character segmentation than conventional binarization methods [8]. CCA then finds the connected regions in the binary image. Blobs smaller or larger than the estimated character sizes are filtered out, and outlier blobs that lie in the wrong place vertically or horizontally are also excluded. In the best case, exactly 11 blobs remain after filtering, and we proceed directly to the character recognition step. However, there may be fewer or more than 11 blobs. When there are more than 11 blobs, the blobs with the highest scores are selected; the score is defined as the sum of the character recognition score and a geometric relevance score, which measures how well a blob fits the others in terms of height and vertical position (horizontal position when the code is written vertically). When there are fewer than 11 blobs, we search for missing blobs by inspecting the empty spaces in front of the first blob, between the blobs, and behind the last blob. A sliding window of the same size as the other blobs inspects every candidate region, and regions with higher character recognition scores are selected as missing blobs. Figure 3(b) shows sample results of finding missing blobs.
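A minimal sketch of this segmentation stage is given below. It assumes dark characters on a lighter background, binarizes with Niblack's method via scikit-image, labels connected components, and filters blobs by size and vertical position, roughly mirroring the filtering rules described above; the window size and thresholds are illustrative choices, not values from the paper.

```python
import numpy as np
from skimage.filters import threshold_niblack
from skimage.measure import label, regionprops

def segment_characters(gray, window_size=25, k=0.2,
                       min_h_frac=0.3, max_h_frac=0.95):
    """Return candidate character bounding boxes (row0, col0, row1, col1)."""
    # Niblack local thresholding; assume dark characters on a light background.
    thresh = threshold_niblack(gray, window_size=window_size, k=k)
    binary = gray < thresh

    # Connected component analysis on the binary image.
    labels = label(binary, connectivity=2)
    H = gray.shape[0]
    blobs = []
    for region in regionprops(labels):
        r0, c0, r1, c1 = region.bbox
        h, w = r1 - r0, c1 - c0
        # Keep only blobs with a plausible character size for this code region.
        if not (min_h_frac * H <= h <= max_h_frac * H) or w > 1.5 * h:
            continue
        blobs.append((r0, c0, r1, c1))
    if not blobs:
        return []

    # Reject vertical outliers: keep blobs whose centers sit near the median row.
    centers = np.array([(b[0] + b[2]) / 2.0 for b in blobs])
    med = np.median(centers)
    med_h = np.median([b[2] - b[0] for b in blobs])
    blobs = [b for b, c in zip(blobs, centers) if abs(c - med) < 0.5 * med_h]
    return sorted(blobs, key=lambda b: b[1])   # left-to-right order
```

The recovery of missing blobs and the selection of the final 11 blobs by recognition score would then operate on these candidates.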
3. Character Recognition
We used the character recognition algorithm described in [10]. The algorithm converts an input character image into an eight-direction gradient feature vector, and an SVM classifies the feature vector into letter and number classes. There are 44 classes in total: ten for numbers, 24 for letters, and ten for numbers with surrounding rectangles (check digits). The letters 'I' and 'O' are merged into the classes for '1' and '0', respectively. The first four characters of a code are recognized as letters, and the remaining characters as numbers. Training samples are selected from a set of container code images, with about 135 samples per class. The check digit, which appears last in a code, is surrounded by a rectangle, which makes it difficult to segment correctly when the digit is connected to the surrounding rectangle by dirt or noise. To recognize such connected blobs, the check digit image, including the digit and the surrounding rectangle together, is also included in the training set as a separate class.

4. Recognition of Rear Planes
The rear planes of the containers, depicted in Fig. 2(e), have vertical bars for opening the doors. These bars cross the codes, and thus the segmentation step often selects the bar regions, recognizing them as the number '1'. We therefore added a procedure that detects the vertical bars and prevents their regions from being selected as characters. The Hough transform reveals long vertical lines in the binary edge image of a container, and a line is marked as a vertical-bar region if it crosses the code region.

IV. Integrating Results from Multiple Views
Recognizing the five planes of a container yields five recognition results. In the best case the results are identical, but normally they are not. Integrating or selecting among the differing results is not easy because we cannot be sure which one is correct. Herein, we propose character-level integration and decision-level integration of the results. Character-level integration produces a new code assembled from the recognized codes of the planes. Decision-level integration then selects one among the six codes, including the new code from the character-level integration.

Figure 4 shows the procedure of the character-level integration with example results. The example input images are partially obscured by dirt, and thus no single view yields a correct result. However, a correct code can be assembled because every character is readable in at least one image. The first step of the integration is aligning the codes. Multiple sequence alignment (MSA) is used to find the optimal alignment between the codes; the MSA technique is widely used in bioinformatics to compare RNA and DNA sequences. Calculating an alignment requires a matrix that defines the matching score between characters. The simplest matching assigns a score of '1' to two identical characters and '0' to different characters. We customized the matrix to compensate for failures of the character recognition: the output class probabilities of the character recognizer are used to construct the matrix. For instance, the classifier may output probabilities of 0.7 for '2' and 0.3 for 'Z' when it recognizes an image of the number '2'. Each matrix entry is set to the average output probability for the corresponding pair of characters, which increases the chance of obtaining the correct alignment even when some character recognition results are incorrect. We used the SeqAn sequence alignment library for MSA [16].

Figure 4. Character-level integration of the code recognition results: (a) processing flow and (b) example results of each step.

After MSA, a new code is generated by assembling the aligned codes. In the assembly, the characters at each position are compared separately. If the aligned codes have the same character at a position, the new code simply takes that character. When there is a conflict between aligned codes, the character with the higher character recognition score is selected for the new code. Lastly, in the character selection step, the final 11 characters are selected based on the character recognition scores.
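The alignment idea can be illustrated on a reduced scale with a pairwise Needleman-Wunsch alignment (rather than a full MSA, and without SeqAn) whose substitution score is looked up from recognizer confusion probabilities. The CONFUSION table, the gap penalty, and the example strings below are hypothetical placeholders for the averaged class probabilities described above; this is a sketch of the principle, not the authors' implementation.

```python
import numpy as np

# Hypothetical character-confusion scores (averaged recognizer probabilities);
# in the paper these come from the SVM's output class probabilities.
CONFUSION = {("2", "Z"): 0.3, ("0", "O"): 0.5, ("1", "I"): 0.5}

def match_score(a, b):
    """Substitution score: 1.0 for identical characters, else a confusion score."""
    if a == b:
        return 1.0
    return CONFUSION.get((a, b)) or CONFUSION.get((b, a)) or 0.0

def align_pair(s1, s2, gap=-0.5):
    """Needleman-Wunsch global alignment of two recognized code strings."""
    n, m = len(s1), len(s2)
    dp = np.zeros((n + 1, m + 1))
    dp[:, 0] = np.arange(n + 1) * gap
    dp[0, :] = np.arange(m + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = max(dp[i - 1, j - 1] + match_score(s1[i - 1], s2[j - 1]),
                           dp[i - 1, j] + gap,      # gap in s2
                           dp[i, j - 1] + gap)      # gap in s1
    # Trace back to recover the aligned strings ('-' marks a gap).
    a1, a2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and np.isclose(
                dp[i, j], dp[i - 1, j - 1] + match_score(s1[i - 1], s2[j - 1])):
            a1.append(s1[i - 1]); a2.append(s2[j - 1]); i -= 1; j -= 1
        elif i > 0 and np.isclose(dp[i, j], dp[i - 1, j] + gap):
            a1.append(s1[i - 1]); a2.append("-"); i -= 1
        else:
            a1.append("-"); a2.append(s2[j - 1]); j -= 1
    return "".join(reversed(a1)), "".join(reversed(a2))

# Two partially wrong single-view results of the same (fictitious) code.
print(align_pair("CSQU305438", "CSQU3O5433"))
```

In the full system, the five codes are aligned jointly, and the assembly step then picks, at each aligned position, the character with the highest recognition score.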
Among the five original recognition results and the new integrated result, we select one as the final result. We do not simply use the integrated code because it occasionally gives an incorrect result when a noise blob has a high character recognition score. For this decision-level integration, we first exclude codes that do not pass the check digit test; the check digit should agree with the value calculated from the remaining letters and numbers [1].

A blob arrangement test is then applied to the codes that have passed the check digit test. For horizontally written codes, we examine the variances of the vertical positions, heights, and spaces between the characters:

S_Y = Var({Y_i}) / mean({H_i}),  S_H = Var({H_i}) / mean({H_i}),  S_Sp = Var({Sp_i}) / mean({H_i}),   (1)

where Y_i is the vertical position and H_i the height of the i-th character, and Sp_i is the horizontal space between the i-th character and the character immediately to its right. Sp_4 and Sp_10 are not considered because those spaces are usually much larger than the others. Dividing by the mean height normalizes the scores. For vertically written codes, horizontal positions are considered instead of vertical positions. If the variance scores are below a predetermined threshold, we select the code with the minimum variances. Note that this arrangement test applies only to codes recognized from a single view, because the integrated code has character information but no position information. If no code passes the blob arrangement test, we use the new integrated code as the final result. Algorithm 1 summarizes the overall process.

Algorithm 1. Proposed Container Code Recognition from Multiple Views
Input: ImgSet ← {five images of the planes of a container}
1: for each I in ImgSet
2:   CodeResult_1,…,5 ← RecognizeSingleView(I)
3: end
4:
5: CodeResult_6 ← CharacterLevelIntegration(CodeResult_1,…,5)
6:
7: minVariance ← MAX_VAL
8: for each Code in CodeResult_1,…,6
9:   passed ← TestCheckDigit(Code)
10:  if passed and VarianceScore(Code) < minVariance then
11:    minVariance ← VarianceScore(Code)
12:    tempResult ← Code
13:  end
14: end
15:
16:
17: if minVariance < threshold then
18:   finalResult ← tempResult
19: elseif TestCheckDigit(CodeResult_6) then
20:   finalResult ← CodeResult_6
21: else
22:   finalResult ← NULL
23: end

V. Experimental Results
The proposed ACCR was evaluated on test images collected at a pier. We installed five cameras, one capturing each plane of a container, at the gate of the pier. Conventional surveillance cameras were used, with fish-eye lenses for the left and right planes because those cameras are installed in close proximity to the container. Image capturing is triggered by a sensor that detects the entrance of a truck carrying a container. A perspective transformation is applied to all images to make the codes appear at the correct angle; the transformation matrix is manually calculated from sample images. Fish-eye distortions in the images of the left and right planes are also corrected by estimating the intrinsic camera parameters, including the skew and radial distortion coefficients. Figure 5 shows the warped images.

Figure 5. Results of perspective warping of the images in Fig. 2.

We captured 1,902 containers over two consecutive days, and the number of images totaled 9,260. The capturing system occasionally failed to capture all planes, and thus there were about 4.87 images per container. The image resolution was .
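The image rectification described in this setup, a perspective warp from manually chosen correspondences plus lens-distortion correction for the left and right cameras, can be sketched with standard OpenCV calls. The function and parameter names below are illustrative assumptions, not the authors' code, and the intrinsics are presumed to have been estimated offline.

```python
import cv2
import numpy as np

def correct_view(image_bgr, camera_matrix, dist_coeffs, src_quad, out_size):
    """Undistort a fish-eye view and warp the code plane to a frontal view.

    camera_matrix, dist_coeffs: intrinsics estimated offline (illustrative names).
    src_quad: four pixel corners of the code plane, chosen manually from sample images.
    out_size: (width, height) of the rectified output.
    """
    # Radial/tangential distortion correction from the estimated intrinsics.
    undistorted = cv2.undistort(image_bgr, camera_matrix, dist_coeffs)

    # Manually chosen correspondences define the perspective transform.
    w, h = out_size
    dst_quad = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(np.float32(src_quad), dst_quad)
    return cv2.warpPerspective(undistorted, M, (w, h))
```

Since the cameras and gate geometry are fixed, the correspondences and intrinsics only need to be determined once per camera.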
1. Quantitative Results
Table 1 shows the experimental results of the baseline algorithm, which recognizes images from single views, and of the proposed algorithm integrating multiple views. We counted a code as correct when all 11 characters were correctly recognized. The baseline algorithm showed a relatively low accuracy of below 72%. Recognition on the rear planes showed the best accuracy among the five planes, whereas the right and left planes showed relatively low accuracies. The codes on the rear planes are clearer than the others because the bars for the rear doors protect the code printing; in addition, the rear plane has no zigzag pattern, which otherwise makes character segmentation difficult.

The proposed algorithm using multiple views showed a superior accuracy of over 96%: a total of 1,830 of the 1,902 containers were correctly recognized. We also tested the proposed algorithm without character-level integration, using only decision-level integration; the performance degradation was about 3%.

Table 1. Recognition accuracies from single and multiple views.
  Method                                                                        Accuracy (%)
  Recognition from a single view: Top                                           56.33
  Recognition from a single view: Front                                         47.71
  Recognition from a single view: Rear                                          71.45
  Recognition from a single view: Left                                          42.45
  Recognition from a single view: Right                                         39.51
  Proposed algorithm using multiple views without character-level integration   93.33
  Proposed algorithm using multiple views                                        96.20

Because each view has a different accuracy, we also tested an integration method that penalizes the views with low accuracy. The character recognition scores of each view are decreased before MSA by amounts inversely proportional to the single-view accuracies. However, this yielded no improvement over using no score penalties.

The recognition rates for different numbers of views are shown in Fig. 6. Clearly, using more images per container yields a higher accuracy. The planes with higher accuracies are added first, and thus the marginal gains decline.

Figure 6. Recognition rates for different numbers of images. One to five images are used for the recognition of a container. The planes with higher accuracies are applied first. 'All' includes the rear, top, front, left, and right views.

Table 2 compares the performance of the proposed method with that of previous studies. It is worth noting that these studies used different datasets and environments; there is no public test database for ACCR, and thus the results cannot be compared directly. Kumano and co-workers reported a recognition rate of 92.8% for the rear planes, which is much higher than our single-view result of 71.45%. We built a baseline algorithm for a single view that can be applied to all of the top, front, rear, left, and right planes. In contrast, previous studies used algorithms optimized for the specific planes of interest and used a dictionary of owner codes, the first four letters of a container code, to correct the recognized codes. This difference in approach accounts for the accuracy gap between the previous studies and our own. Our main contribution is that we achieved 96% accuracy by combining low-accuracy results by means of character-level and decision-level integration.

Table 2. Comparison with previous studies.
  Reference  Recognition approach                                                                  Dataset         Accuracy (%)
  [2]        Dynamic design of character segmentation; dictionary for owner codes (first four letters)  Rear view       92.8 (558/601)
  [3]        Heuristic segmentation suppressing reflection and noise                              Side view       91.7 (1,113/1,214)
  [5]        Spatial structure window for recognizing different arrangements of characters        Side view       53.5 (18,201/34,000)
  Ours       Integrating results from multiple views                                              Multiple views  96.2 (1,830/1,902)

The processing time of the proposed algorithm is 3 s per set of five container images on a 3.0 GHz PC in a single-thread environment. Recognizing one plane without integration takes approximately 0.58 s.

2. Qualitative Results
Figure 7 shows three correct and three incorrect samples. The codes in correct sample #1 are easy to recognize; except for the top plane, both the multiple-view and single-view results are correct.
In correct sample #2, most of the single-view recognitions failed in the character segmentation step because of dirty and broken characters. Character-level integration also produced an incorrect code: a background region with some edges was recognized as the number '4' and was selected instead of the letter 'X'. However, decision-level integration selected the top plane as the final result because the recognized code of the top plane passed the check digit test. Correct sample #3 is a case in which all single-view recognitions failed, but the integration worked successfully. The character-level integration step obtained the letters from the front plane and the numbers from the remaining planes. Incorrect samples #1 through #3 contain severely damaged characters. The rear plane of incorrect sample #2 is not even visible because the container is hidden by the following container. The integration also failed in these cases because there was insufficient evidence to integrate.

Figure 7. Correct and incorrect sample images. The aspect ratios of the images for the left and right views are adjusted for better viewing.

VI. Conclusion
In this study, we proposed a novel container code recognition method that uses multiple views. A baseline algorithm for single views was developed. A character-level integration algorithm assembles a new code from the recognized codes of the five container planes, and decision-level integration selects the most convincing result from among the raw results of the baseline algorithm and the integrated code. An experiment confirmed that the proposed integration works successfully. Recognition from single views achieved accuracies of 40% to 71%, varying according to the plane, whereas the proposed integration method achieved an accuracy of 96%, which is higher than that of previous works. The integration is independent of the baseline algorithm used for single views, and thus replacing the baseline algorithm is straightforward. We hope to increase the overall accuracy to nearly 99% by adopting more sophisticated single-view recognition algorithms, such as the one proposed by Kumano and others [2]. Moreover, recognition from multiple views can be applied to other tasks such as license plate and signboard recognition. There are now many sources of such images; for example, images of signboards can be obtained from multiple street views in online map services and from geo-tagged images. Expanding and generalizing our algorithms to other applications remains future work.

Acknowledgements
This work was supported by the Converging Research Center Program funded by the Ministry of Education, Science and Technology, Korea (2012K001330).

Biographies
Youngwoo Yoon received his BS and MS degrees in computer science with honors from the Information and Communications University, Daejeon, Rep. of Korea (merged with the Korea Advanced Institute of Science and Technology, Daejeon, Rep. of Korea, in 2009), in 2006 and 2008, respectively. He is currently a researcher at ETRI, Daejeon, Rep. of Korea. His research interests include computer vision, HRI, and HCI.

Kyu-Dae Ban received his PhD in computer software and engineering from the University of Science and Technology, Daejeon, Rep. of Korea, in 2011. He has been a researcher at ETRI since 2011. His research interests include image processing and pattern recognition.

Hosub Yoon received his BS and MS degrees in computer science from Soongsil University, Seoul, Rep.
of Korea, in 1989 and 1991, respectively. He received his PhD degree in image processing from the Korea Advanced Institute of Science and Technology, Daejeon, Rep. of Korea, in 2003. He joined the Korea Institute of Science and Technology/Systems Engineering Research Institute, Daejeon, Rep. of Korea, in 1991, and transferred to ETRI in 1999. His major research interests include HRI, image processing, audio processing, and pattern recognition.

Jaehong Kim received his PhD from Kyungpook National University, Daegu, Rep. of Korea, in 1996. He has been a researcher at ETRI since 2001. His research interests include elderly-care robotics and social HRI frameworks.

References
[1] ISO 6346:1995, International Organization for Standardization. http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=20453
[2] S. Kumano et al., "Development of a Container Identification Mark Recognition System," Electron. Commun. Jpn. (Part II: Electron.), vol. 87, no. 12, Dec. 2004, pp. 38-50.
[3] W. Wu et al., "An Automated Vision System for Container-Code Recognition," Expert Syst. Appl., vol. 39, no. 3, Feb. 2012, pp. 2842-2855.
[4] C.E. Anagnostopoulos et al., "License Plate Recognition from Still Images and Video Sequences: A Survey," IEEE Trans. Intell. Transp. Syst., vol. 9, no. 3, 2008, pp. 377-391.
[5] K.M. Koo and E.Y. Cha, "A Novel Container ISO-Code Recognition Method Using Texture Clustering with a Spatial Structure Window," Int. J. Adv. Sci. Technol., vol. 41, 2012, pp. 83-92.
[6] K. Wang, B. Babenko, and S. Belongie, "End-to-End Scene Text Recognition," Int. Conf. Comput. Vis., Barcelona, Spain, Nov. 6-13, 2011, pp. 1457-1464.
[7] Y. Yoon et al., "Blob Extraction Based Character Segmentation Method for Automatic License Plate Recognition System," IEEE Int. Conf. Syst., Man, Cybern., Anchorage, AK, USA, Oct. 9-12, 2011, pp. 2192-2196.
[8] M. Sezgin and B. Sankur, "Survey over Image Thresholding Techniques and Quantitative Performance Evaluation," J. Electron. Imaging, vol. 13, no. 1, 2004, pp. 146-168.
[9] Y. Yoon et al., "Best Combination of Binarization Methods for License Plate Character Segmentation," ETRI J., vol. 35, no. 3, June 2013, pp. 491-500.
[10] C. Liu et al., "Handwritten Digit Recognition: Benchmarking of State-of-the-Art Techniques," Pattern Recogn., vol. 36, no. 10, Oct. 2003, pp. 2271-2285.
[11] P.K. Atrey et al., "Multimodal Fusion for Multimedia Analysis: A Survey," Multimedia Syst., vol. 16, no. 6, 2010, pp. 345-379.
[12] A. Selinger and R.C. Nelson, "Appearance-Based Object Recognition Using Multiple Views," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn., vol. 1, 2001, pp. I-905-I-911.
[13] G. Shakhnarovich, L. Lee, and T. Darrell, "Integrated Face and Gait Recognition from Multiple Views," Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recogn., vol. 1, 2001, pp. I-439-I-446.
[14] D. Delannay, N. Danhier, and C. De Vleeschouwer, "Detection and Recognition of Sports (Wo)men from Multiple Views," ACM/IEEE Int. Conf. Distrib. Smart Cameras, Como, Italy, Aug. 30-Sept. 2, 2009, pp. 1-7.
[15] Y. Yoon et al., "Blob Detection and Filtering for Character Segmentation of License Plates," IEEE Int. Workshop Multimedia Signal Process., Banff, AB, USA, Sept. 17-19, 2012, pp. 349-353.
[16] A. Döring et al., "SeqAn: An Efficient, Generic C++ Library for Sequence Analysis," BMC Bioinformatics, vol. 9, no. 1, 2008, p. 11.