Analysis of the synthetic periocular iris images for robust Presentation Attacks Detection algorithms
2022; Institution of Engineering and Technology; Volume: 11; Issue: 4; Language: English
DOI: 10.1049/bme2.12084
ISSN: 2047-4946
Authors: Jose Maureira, Juan E. Tapia, Claudia Arellano, Christoph Busch
Topic(s): Face recognition and analysis
IET Biometrics, Volume 11, Issue 4, pp. 343-354. Original Research. Open Access. First published: 07 June 2022. https://doi.org/10.1049/bme2.12084

Jose Maureira (TOC Biometrics, R&D Center SR-226, Santiago, Chile); Juan E. Tapia, corresponding author (da/sec-Biometrics and Internet Security Research Group, Hochschule Darmstadt, Haardtring 100, 64295 Darmstadt, Germany; email: [email protected]); Claudia Arellano (Universidad Adolfo Ibañez, Santiago, Chile); Christoph Busch (da/sec-Biometrics and Internet Security Research Group, Hochschule Darmstadt, Darmstadt, Germany)

Abstract

The LivDet-2020 competition, which focuses on Presentation Attack Detection (PAD) algorithms, showed that there are still open problems, mainly regarding unknown attack scenarios. It is therefore crucial to enhance PAD methods. This can be achieved by augmenting the number of Presentation Attack Instruments (PAI) and bona fide (genuine) images used to train such algorithms. Unfortunately, capturing and creating PAIs, and even capturing bona fide images, is sometimes complex to achieve. The generation of synthetic images with Generative Adversarial Network (GAN) algorithms may help, and has shown significant improvements in recent years. This paper presents a benchmark of GAN methods used to build a novel synthetic PAI from a small set of periocular near-infrared images. The best PAI was obtained using StyleGAN2 and was tested against the best PAD algorithm from LivDet-2020. The synthetic PAI was able to fool that algorithm: all synthetic images were classified as bona fide. A MobileNetV2 was then trained using the synthetic PAI as a new class to achieve a more robust PAD. The resulting PAD was able to classify 96.7% of the synthetic images as attacks, with a BPCER10 of 0.24%.
Such results demonstrate the need for PAD algorithms to be constantly updated and trained with synthetic images.

1 INTRODUCTION

Iris recognition systems have been increasing in robustness over time. They are also affordable, non-invasive, and touchless; these strengths have promoted their popularity in the market in recent years. Iris recognition systems are usually based on eye images captured using near-infrared (NIR) lighting and sensors. Despite the constant development and improvement of new algorithms, these systems are still susceptible to Presentation Attack Instruments (PAI) [1]. A PAI corresponds to a biometric characteristic or object used in a presentation attack; in other words, a presentation attack instrument is any set of characteristics or objects, such as images, that is used to attack and fool a biometric system.

There are several techniques for creating PAIs. Printed images, for instance, are easy to reproduce with different kinds of paper. PAIs can also be made using contact lenses, cosmetic lenses, or plastic lenses, which are readily available from different brands, although harder to acquire than printed images [1].

Presentation Attack Detection (PAD), on the other hand, refers to the ability of a biometric system to recognise PAIs that would otherwise fool the system into identifying an illegitimate user as genuine by presenting a forged version of the original biometric trait to the capture device. The biometric community, including both researchers and vendors, has been investigating the challenging task of proposing and developing efficient protection mechanisms against the threat that PAIs represent [2, 3]. Attacks on biometric systems are no longer restricted to merely theoretical or academic scenarios, as they are starting to be carried out in real life. One example is the hacking of the iris unlock system of Samsung Galaxy S8 devices using a regular printer and a contact lens, which has been reported by hacking groups seeking recognition, in real criminal cases, and in live biometric demonstrations at conferences.¹

Results from the LivDet-2020 competition [4] indicate that the development of iris PAD is still far from a solved problem. There are significant differences in accuracy among baseline algorithms, since they are primarily trained with different data. It is therefore essential to have access to larger and more diversified training datasets. Such datasets need to include a larger number of bona fide images and PAIs in order to train more robust PAD systems that consider both known and unknown attacks.

This paper proposes the following contributions. First, the creation of synthetic periocular NIR iris images that can be used as a PAI; a benchmark of four state-of-the-art GAN methods is used to synthesise such images. Second, the best synthetic PAI generated is tested against a state-of-the-art PAD algorithm (the winner of the LivDet-2020 competition) [5]. Finally, a PAD algorithm is proposed using a MobileNetV2 architecture trained with the synthetic PAI as an additional attack class; as a result, a PAD that is robust to synthetic PAIs may be achieved.

The rest of the article is organised as follows. Section 2 summarises related work on Presentation Attack Detection, PAI creation, and Generative Adversarial Networks. The proposed method is presented in Section 3.
Section 4 describes the experiments and results, while Sections 5 and 6 present the conclusions and future work, respectively.

2 RELATED WORK

2.1 Presentation Attack Detection

There is a vast amount of research on PAD algorithms. Hu et al. [6] proposed a regional PAD method in which regional features are extracted from local neighbourhoods. The method was based on spatial pyramids (multi-level resolution) and relational measures (convolution on features with variable-size kernels). Several feature extractors were examined, such as Local Binary Patterns (LBP), Local Phase Quantisation (LPQ), and the intensity correlogram. The best performance was obtained using a three-scale LBP-based feature.

Nguyen et al. [7] also proposed a PAD method that combines features extracted from local and global iris regions. First, they trained multiple VGG19 [8] networks from scratch on different iris regions. Then, features were extracted from the last fully connected layer before the classification layer of each trained model. The experimental results showed that PAD performance improved when fusing the features with feature-level and score-level fusion rules.

Gragnaniello et al. [9] explored liveness detection in order to recognise attack presentations. They found that the sclera region contains essential information about iris liveness (SIDPAD). Therefore, the authors extracted features from both the iris and sclera regions. First, the two regions are segmented, and scale-invariant local descriptors (SID) are applied. A bag-of-features method was then used to summarise the features, and a linear Support Vector Machine (SVM) performed the final prediction.

In Ref. [10], the authors incorporated domain-specific knowledge of iris PAD into the design of their prediction model (DACNN). With this domain knowledge, a compact network architecture was obtained, and regularisation terms were added to the loss function to enforce high-pass/low-pass behaviour. The authors demonstrated that this method can detect both face and iris attacks.

Yadav et al. [11] reported a method in which a combination of handcrafted and deep-learning-based features was used for iris PAD. They fused multi-level Haralick features with VGG16 features to encode the iris textural patterns. The VGG16 features were extracted from the last fully connected layer, with a size of 4096, and their dimensionality was then reduced by Principal Component Analysis (PCA).

A more recent set of algorithms was presented at the LivDet-2020 competition [4]. The best method was proposed by Tapia et al. [5], who presented an approach that achieved an Average Classification Error Rate (ACER) of 29.78%. This method also reached the lowest Bona fide Presentation Classification Error Rate (BPCER) of 0.46% among all participants. This work showed the relevance of focussing mainly on bona fide images as a "first filter". However, broad room for improvement was identified in the PAI scenarios, especially for cadaver and printed iris images. Figure 1 shows examples of the different PAIs used during training by the winning algorithm of the LivDet-2020 competition.

FIGURE 1 Image examples of presentation attack instruments used in LivDet-2020. Left to right: Bona fide, print-out (LivDet-2020-Iris), cadaver (post-mortem subject), and cosmetic contact lens
In order to achieve better PAD algorithms, it is important to train them with all sorts of PAIs that could be used to fool biometric systems. The most common presentation attack instruments are created using contact lenses with a printed pattern or printouts of an iris [12]. Building PAIs is not trivial and usually requires great effort. Even the creation of PAIs from printed eye images can be time-consuming when large datasets with different paper qualities are needed, and it is even more complicated for methods based on images from cadavers, contact lenses, or other instruments. The generation of artificial images is thus an alternative for creating novel PAIs, as well as bona fide images, with which better PAD algorithms can be trained. Generative Adversarial Networks (GAN) are a promising technique that allows novel images to be synthesised from a dataset.

2.2 GANs for synthetic images

The GAN algorithm was first introduced by Goodfellow et al. [13]. It approaches the problem of unsupervised learning by simultaneously training two deep networks, called the generator G and the discriminator D, which compete and cooperate with each other. While the generator creates new instances of the data, the discriminator evaluates them for authenticity; in the course of training, both networks learn to perform their tasks. To learn a generator distribution p_g over data x, the generator builds a mapping function from a prior noise distribution p_z(z) to data space, G(z; θ_g). The discriminator D(x; θ_d), on the other hand, outputs a single scalar representing the probability that x came from the training data rather than from p_g. During training, the parameters of G are adjusted to minimise log(1 − D(G(z))) while the parameters of D are simultaneously adjusted to maximise log D(x), as in a two-player min-max game with value function V(G, D):

$$\min_{G}\max_{D} V(D,G) = \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_{z}(z)}[\log(1 - D(G(z)))] \quad (1)$$
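As an illustration of Equation (1), the following PyTorch sketch alternates discriminator and generator updates. The tiny fully connected networks, layer sizes, and learning rates are placeholder assumptions, not the architectures benchmarked in this paper; the generator step uses the common non-saturating variant of the loss.

```python
# Minimal sketch of the adversarial objective in Equation (1) using PyTorch.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical sizes for illustration

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: maximise log D(x) + log(1 - D(G(z))).
    fake = G(torch.randn(n, latent_dim)).detach()  # no gradient into G here
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: non-saturating variant maximises log D(G(z))
    # instead of minimising log(1 - D(G(z))) for stronger early gradients.
    loss_g = bce(D(G(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```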
There are several variants of the GAN algorithm. Chen et al. [14], for instance, proposed Info-GAN, an information-theoretic extension of GAN that is able to learn disentangled representations in a completely unsupervised manner. Info-GAN maximises the mutual information between a small subset of the latent variables and the observation.

Zhu et al. [15] proposed an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples. The goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y, using an adversarial loss. Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, and photo enhancement.

Wang et al. [16] presented pix2pixHD, a method based on the pix2pix network. This GAN synthesises high-resolution images from semantic label maps using conditional GANs. Conditional GANs have enabled a variety of applications, but their results are often limited to low resolution and are still far from realistic. The authors used a generator that produces visually appealing 2048 × 1024 results, together with a novel adversarial loss and new multi-scale generator and discriminator architectures.

Yingying et al. [17] introduced the Wasserstein GAN (WGAN), an alternative to traditional GAN training. With this model, the authors improve the stability of learning, avoid problems such as mode collapse, and obtain meaningful learning curves that are useful for debugging and hyperparameter searches. They also show that the corresponding optimisation problem is sound, and provide extensive theoretical work highlighting the deep connections to other distances between distributions. Afterwards, Gulrajani et al. [18] proposed the Wasserstein GAN with Gradient Penalty (WGAN-GP) to further improve and stabilise the training process and obtain better results.

Karras et al. [19] developed StyleGAN2. This GAN is an extension of the progressive growing GAN, an approach for training generator models capable of synthesising large high-quality images via the incremental expansion of the discriminator and generator from small to large images during training. In addition to the gradual growth of the models during training, StyleGAN2 changes the architecture of the generator significantly. The StyleGAN2 generator no longer takes a point from the latent space as input; instead, two new sources of randomness are used to generate a synthetic image: a standalone mapping network and noise layers. StyleGAN2 introduces the mapping network f to transform z into an intermediate latent space w using eight fully connected layers; this intermediate latent space w can be viewed as the new z (z′). Through this network, a 512-D latent space z is transformed into a 512-D intermediate latent space w.

Generative adversarial nets can also be extended to conditional models [14, 15, 20, 21] if both the generator and the discriminator are conditioned on extra information y. The introduction of external information allows specific representations of the generated images to be created. The cGAN is a variant of the standard GAN that was introduced to augment GANs with the capability of generating data samples conditioned on both latent variables (or intrinsic factors) and known auxiliary information (or extrinsic factors); extrinsic factors could be class information or associated data from other modalities. In other words, cGANs are generative models that produce data samples x conditioned on both latent variables z and known auxiliary information y.
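As an illustration of this conditioning mechanism, the sketch below shows one common way to build G(z, y) in PyTorch: the label y (here a binary gender code, an assumed encoding) is embedded and concatenated with the latent vector z. The layer sizes are illustrative assumptions, not the cGAN configuration used in this work.

```python
# Sketch of a conditional generator: the class label y is embedded and
# concatenated with the latent vector z, so G(z, y) can be steered at
# sampling time. All sizes here are assumptions for illustration.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim=100, n_classes=2, embed_dim=16, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim + embed_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        # Condition by concatenating the label embedding with the noise.
        return self.net(torch.cat([z, self.embed(y)], dim=1))

g = ConditionalGenerator()
z = torch.randn(8, 100)
y = torch.randint(0, 2, (8,))  # 0 = female, 1 = male (assumed coding)
fake = g(z, y)                 # eight label-conditioned samples
```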
2.3 GANs applied to iris images

GAN algorithms have also been applied to generate synthetic gender-labelled iris images. Tapia et al. [22], for instance, proposed E-DCGAN, a conditional GAN algorithm that preserves gender information while generating synthetic images from periocular NIR images. They showed that standard GAN algorithms are not able to preserve soft-biometric features such as gender, while using conditional information allows a gender-labelled synthetic database to be built. Such a synthetic database was demonstrated to improve gender classification algorithms when used to augment the training dataset.

Hoffman et al. [23] proposed an iris PAD method that performs well in both intra-dataset and cross-dataset scenarios. It advances the state of the art by considering the cross-dataset evaluation scenario, which has received very little attention in the iris biometrics literature. Segmented iris images were used for this evaluation.

Yadav et al. [24, 25] also proposed a new technique for generating synthetic iris images and demonstrated its potential for Presentation Attack Detection. The proposed technique improved the loss function to enhance the generative capability of a Relativistic Average Standard Generative Adversarial Network (RaSGAN), synthesising high-quality NIR images that were previously aligned and cropped to the iris. Kohli et al. [26] proposed a new iris presentation attack that synthesises iris images with a deep convolutional generative adversarial network, and demonstrated that it is possible to attack commercial iris recognition systems. However, only segmented iris and pupil areas were used as input.

One of the difficulties with GAN algorithms, particularly when applied to iris images or biometrics in general, is assessing the quality and meaningfulness of the resulting (synthesised) images. Only recently has a suite of qualitative and quantitative metrics been developed to assess the performance of a GAN model based on the quality and diversity of the generated synthetic images [24, 26-29]. Some such metrics are the Inception Score (IS) [27], the Fréchet Inception Distance (FID) [28, 30], and the Perceptual Path Length (PPL) [19]. These metrics allow the results of different GAN models to be compared. FID and IS are based on feature extraction (the presence or absence of features).

This work uses four state-of-the-art GAN algorithms to generate synthetic PAIs from a small set of periocular NIR iris images. The FID metric is used to compare and evaluate the quality of the resulting collection of images. The resulting PAI is treated as an unknown attack and used to test the best algorithm from the LivDet-2020 competition [4].

3 METHODOLOGY

This paper explores the robustness of state-of-the-art PAD algorithms to synthetic attacks (PAIs). Synthetic periocular iris images were created from a small dataset using four state-of-the-art methods: cGAN, WGAN, WGAN-GP, and StyleGAN2 (Section 3.2). To compare the quality of the generated synthetic images, the FID metric is chosen (Section 3.3). As a result, a novel Synthetic Periocular Iris PAI (SPI-PAI) was created from the best synthetic images generated. This new dataset will be made available to researchers upon request, and it is expected that this analysis will contribute to creating new PAD systems. It is essential to point out that the LivDet-2020 training dataset did not contain synthetic images. The newly created PAI is evaluated using the live iris image detection algorithm proposed by Tapia et al. [5] at the LivDet-2020 competition. A flow chart of the process is shown in Figure 2. Finally, a MobileNetV2 architecture is trained as a new PAD method using the synthetic PAI as a new class; a PAD that is more robust to synthetic attacks is therefore expected.

FIGURE 2 Proposed framework to generate synthetic Presentation Attack Instruments (PAI) using Generative Adversarial Network (GAN) algorithms. The synthetic images are evaluated using the Fréchet Inception Distance (FID) score, and the best PAI is tested using a state-of-the-art Presentation Attack Detection (PAD) algorithm

In the following, the database used for the synthesis of images is presented in Section 3.1. The GAN algorithms and the metrics used for assessing the resulting synthetic images are described in Sections 3.2 and 3.3, respectively.
Finally, Section 3.4 describes the new PAD algorithm trained using the synthetic PAI, and Section 3.5 presents the metrics used to evaluate presentation attack detection algorithms.

3.1 Database

For this paper, and for all GAN methods implemented, the GFI-UND database was used [31]. It comprises 3000 NIR periocular iris images with a resolution of 640 × 480 pixels, captured with an LG-4000 device, equally distributed into 1500 left and 1500 right iris images. The database is also gender-balanced, with 750 male and 750 female subjects, and is subject-disjoint. For all training methods, the input to the GANs was the full set of 3000 images. Probabilistic data augmentation based on the imgaug library [32], with a high occurrence rate (p = 0.75), was used in all experiments; an illustrative configuration is sketched below.
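As an illustration of this augmentation strategy, the following sketch applies imgaug operators, each with occurrence probability p = 0.75. The particular operators (flips, small rotations, blur, contrast jitter) are assumptions for demonstration; the paper does not specify which ones were used.

```python
# Probabilistic augmentation with the imgaug library: each operator fires
# with probability p = 0.75. Operator choices are illustrative assumptions.
import numpy as np
import imgaug.augmenters as iaa

p = 0.75
sometimes = lambda aug: iaa.Sometimes(p, aug)

seq = iaa.Sequential([
    sometimes(iaa.Fliplr(1.0)),                     # horizontal flip
    sometimes(iaa.Affine(rotate=(-10, 10))),        # small rotations
    sometimes(iaa.GaussianBlur(sigma=(0.0, 1.0))),  # mild blur
    sometimes(iaa.LinearContrast((0.8, 1.2))),      # contrast jitter
])

# NIR periocular batch: N x H x W x C (here 480 x 640 single-channel), uint8.
images = np.zeros((16, 480, 640, 1), dtype=np.uint8)
augmented = seq(images=images)
```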
3.2 GAN algorithms benchmark

Unlike other deep learning models, which are trained with a loss function until convergence, a GAN generator is trained jointly with a second model, the discriminator, which learns to classify images as real or generated; both models are trained together to maintain an equilibrium. This work implements and tests four state-of-the-art GAN methods for creating synthetic NIR periocular iris images:

- Conditional GAN (cGAN). Proposed by Mirza et al. [20], it considers conditional information; in this case, the conditional information used is gender. The database contains a gender label (female or male) for each image, and all images from the dataset are used as input together with their respective labels.
- Wasserstein GAN (WGAN). Proposed in Ref. [17], this method, unlike the cGAN, does not consider conditional information, so no distinction between images from male and female subjects is made. All images in the training database are used as input to generate the synthetic images.
- Wasserstein Gradient Penalty GAN (WGAN-GP). An improvement over the original WGAN; it does not include conditional information either, and all images are considered to belong to the same class.
- StyleGAN2. Proposed in Ref. [19], it differs from the above techniques mainly in two characteristics. First, it uses an internal step that evaluates the quality of the generated images via the FID distance between the generated images and the training database (see Section 3.3). Second, a seed is generated (instead of a set of images), which can be used to synthesise new images of the same class used to train the model (with similar mean and variance).

A novel PAI is created with each of these techniques. To benchmark the algorithms and choose the best one, the FID metric is used, which measures the similarity of the synthetic images to the original class [28] (see Figure 2).

3.3 Evaluation metrics for synthetic images

There is no objective loss function for training GAN generator models, and thus no way to assess the progress of training or the relative or absolute quality of a model from loss alone. In most methods, the resulting images look like the original class of images, but specific features are not necessarily preserved. This is particularly challenging for periocular images, since a synthesised PAI can look like an eye while soft-biometric features or other identifying information may not have been preserved.

For FID, a pre-trained Inception network is used to extract features from an intermediate representation of the network. The distribution of these features for the real data x is modelled as a multivariate Gaussian with mean μ_x and covariance Σ_x; similarly, a Gaussian with mean μ_g and covariance Σ_g is fitted to the features of the generated data g. The two distributions are then compared as follows:

$$FID(x,g) = \lVert \mu_x - \mu_g \rVert^2 + \mathrm{Tr}\left(\Sigma_x + \Sigma_g - 2\left(\Sigma_x \Sigma_g\right)^{1/2}\right) \quad (2)$$

where Tr sums up all the diagonal elements. FID is more robust to noise than the IS metric and is a better measurement of image diversity than previously reported metrics [27]. When computing the FID between the training and the synthetically generated datasets, the expected output should be close to zero: the lower the FID, the closer the synthesised images are to the originals used for training. In this work, the metric is used to compare the synthetic images generated by the four state-of-the-art GAN algorithms described above. For cGAN, WGAN, and WGAN-GP, the FID score is computed between the original set of images used for training and the resulting synthetic images.
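Given Inception features already extracted for the real and synthetic sets, Equation (2) reduces to a few lines of linear algebra. The sketch below assumes 2048-D Inception-v3 pooling features as inputs; it is an illustration of the formula, not the exact implementation used in this work.

```python
# Sketch of Equation (2): FID computed from Inception features of the real
# and synthetic sets. Feature extraction is assumed to have happened already.
import numpy as np
from scipy import linalg

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    mu_x, mu_g = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma_x = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_fake, rowvar=False)

    # Matrix square root of the covariance product; small imaginary
    # parts caused by numerical error are discarded.
    covmean = linalg.sqrtm(sigma_x @ sigma_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_x - mu_g
    return float(diff @ diff + np.trace(sigma_x + sigma_g - 2.0 * covmean))
```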
3.4 Presentation Attack Detection algorithm

This work used a modified MobileNetV2 to detect bona fide and attack presentation images. The algorithm is based on a multi-label CNN that has been used to detect printed images and patterned contact lenses [4]. The SMobileNet and FMobileNet models are both based on MobileNetV2 [5]. SMobileNet was trained from scratch to detect the presence of patterned contact lenses in the iris area of the image. FMobileNet was trained by fine-tuning, with average and max pooling options, to detect printed images from the whole image by identifying its physical source. Finally, a multi-output classifier was developed to distinguish fake from live (real) images. This option allowed a lightweight classifier to be created and deployed on a mobile iris recognition camera such as the Gemini Iritech.

3.5 Evaluation metrics for PAD algorithms

Regarding presentation attack metrics, the ISO/IEC 30107-3 standard² presents methodologies for evaluating the performance of PAD algorithms for biometric systems. The APCER metric measures the proportion of attack presentations, for each different PAI, incorrectly classified as bona fide presentations. This metric is calculated for each PAI, and ultimately the worst-case scenario is considered. Equation (3) details how to compute APCER, where N_PAIS is the number of attack presentation images and RES_i is 1 if the algorithm classifies the i-th image as an attack presentation (spoofed image) and 0 if it is classified as a bona fide presentation (real image):

$$APCER = \frac{1}{N_{PAIS}} \sum_{i=1}^{N_{PAIS}} \left(1 - RES_i\right) \quad (3)$$

Additionally, the BPCER metric measures the proportion of bona fide (live) presentations mistakenly classified as attack presentations, that is, the ratio of false rejections to total genuine attempts. BPCER is formulated in Equation (4), where N_BF is the number of bona fide presentation images and RES_i takes the same values as in the APCER metric:

$$BPCER = \frac{\sum_{i=1}^{N_{BF}} RES_i}{N_{BF}} \quad (4)$$

These metrics effectively measure to what degree the algorithm confuses presentations of spoofed images with real images, and vice versa. Furthermore, the Average Classification Error Rate (ACER), computed by averaging the APCER and BPCER metrics as shown in Equation (5), is also used. Although this metric evaluates the overall system performance, it is not part of ISO/IEC 30107-3 and is computed here mainly for comparison with the state of the art. The APCER, BPCER, and ACER metrics all depend on a decision threshold.

$$ACER = \frac{APCER + BPCER}{2} \quad (5)$$

A Detection Error Trade-off (DET) curve is also reported for all experiments. In the DET curve, the Equal Error Rate (EER) represents the operating point at which APCER equals BPCER; values in this curve are presented as percentages. Additionally, two operating points are reported according to ISO/IEC 30107-3: BPCER10, the BPCER when APCER is fixed at 10%, and BPCER20, the BPCER when APCER is fixed at 5%. Because their thresholds are set by the APCER constraint, BPCER10 and BPCER20 do not depend on a freely chosen decision threshold.
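The three metrics are straightforward to compute from binary PAD decisions. The following is a minimal sketch of Equations (3)-(5), assuming a decision array where 1 means "classified as attack" and 0 means "classified as bona fide"; the example values are hypothetical.

```python
# Sketch of Equations (3)-(5): APCER, BPCER and ACER from binary PAD
# decisions, where RES_i = 1 means "classified as attack".
import numpy as np

def apcer(res_attack: np.ndarray) -> float:
    # Proportion of attack presentations wrongly accepted as bona fide.
    return float(np.mean(1 - res_attack))

def bpcer(res_bonafide: np.ndarray) -> float:
    # Proportion of bona fide presentations wrongly rejected as attacks.
    return float(np.mean(res_bonafide))

def acer(res_attack: np.ndarray, res_bonafide: np.ndarray) -> float:
    return 0.5 * (apcer(res_attack) + bpcer(res_bonafide))

# Hypothetical decisions for illustration:
res_attack = np.array([1, 1, 0, 1])  # one attack missed   -> APCER = 0.25
res_bona = np.array([0, 0, 0, 1])    # one bona fide rejected -> BPCER = 0.25
print(apcer(res_attack), bpcer(res_bona), acer(res_attack, res_bona))
```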
4 EXPERIMENTS AND RESULTS

4.1 Synthetic image generation

As mentioned before, this experiment focuses on obtaining high-quality synthetic images using four state-of-the-art GAN-based algorithms. All experiments were performed on a system with an Intel Xeon E5 CPU and an Nvidia Tesla V100 16 GB GPU. Four experiments, one for each GAN method (cGAN, WGAN, WGAN-GP, and StyleGAN2), were performed as follows.

4.1.1 Experiment 1: cGAN

The cGAN method was implemented and trained using two image sizes, 80 × 160 and 320 × 240 pixels. Several sets of synthetic images were created using this method. Each s