Face morph detection for unknown morphing algorithms and image sources: a multi‐scale block local binary pattern fusion approach
2020; Institution of Engineering and Technology; Volume: 9; Issue: 6; Language: English
10.1049/iet-bmt.2019.0206
ISSN: 2047-4946
Authors: Ulrich Scherhag, Jonas Kunze, Christian Rathgeb, Christoph Busch
Topic(s): Face and Expression Recognition
IET Biometrics, Volume 9, Issue 6, pp. 278-289. Research Article, Open Access. First published: 24 September 2020, https://doi.org/10.1049/iet-bmt.2019.0206
Ulrich Scherhag (corresponding author, ulrich.scherhag@h-da.de), Jonas Kunze, Christian Rathgeb (ORCID 0000-0003-1901-9468), Christoph Busch (ORCID 0000-0002-9159-2923); all with the da/sec – Biometrics and Internet Security Research Group, Hochschule Darmstadt, Darmstadt, Germany.

Abstract

The vulnerability of face recognition systems against so-called morphing attacks has been revealed in the past years. Recently, different kinds of morphing attack detection approaches have been proposed. However, the vast majority of published results have been obtained from rather constrained experimental setups. In particular, most investigations do not consider variations in morphing techniques, image sources, and image post-processing. Hence, reported performance rates cannot be maintained in realistic scenarios, as the NIST FRVT MORPH performance evaluation showed. In this work, existing algorithms are benchmarked on a new, more realistic database. This database consists of two different data sets, from which morphs were created using four different morphing algorithms. In addition, the database contains four different post-processings (including print-scan transformation and JPEG2000 compression). Further, a new morphing attack detection method based on a fusion of different configurations of multi-scale block local binary patterns (MB-LBP) on an image divided into multiple cells is presented.
The proposed score-level fusion of a maximum number of 18 different configurations is shown to significantly improve the robustness of the resulting morphing attack detection scheme, yielding an average performance between 2.26% and 8.52% in terms of detection equal error rate (D-EER), depending on the applied post-processing.

1 Introduction

Image manipulation techniques can be applied to substantially change the appearance of face images and hence negatively affect the recognition accuracy and security of face recognition systems (FRSs). Face alteration methods include replacement or re-enactment [1, 2], which are frequently referred to as 'face swapping' or 'deep-fakes', retouching [3, 4], as well as morphing [5, 6]. Morphing techniques can be used to create artificial face images that resemble the biometric information of two (or more) subjects in the image and feature domain. Usually, the morphing process comprises the definition of corresponding landmarks, averaging, triangulation, warping, and alpha-blending [5]. Alternatively, morphs might as well be created using generative adversarial networks (GANs) [7]. An example of a morphed facial image is shown in Fig. 1b. With high probability, the morphed facial image is successfully verified against probe samples from both subjects contributing to the morph using state-of-the-art FRSs. This means that if a morphed facial image is somehow stored as a reference in the database of an FRS or in the chip of an electronic travel document, both subjects involved can successfully be verified against this manipulated reference. Morphed facial images thus pose a serious threat to FRSs, as the basic principle of biometrics, i.e. the unambiguous link between biometric data and the subject, is broken.

Fig. 1: Example of a morphed face image of subject 1 and subject 2; the morph was created using FantaMorph. (a) Subject 1, (b) Morph, (c) Subject 2

In 2014, Ferrara et al. [8] were the first to thoroughly investigate the vulnerability of a commercial FRS against face morphing attacks. So far, a considerable amount of morphing attack detection (MAD) mechanisms has been published. For a comprehensive survey, the reader is referred to [5]. Proposed approaches can be categorised according to the MAD scenario. In the no-reference MAD scenario, the detector processes a single image, e.g. an image that is presented in a passport application procedure (this scenario is also referred to as single-image MAD or forensic MAD). On the contrary, in the differential MAD scenario, a trusted live capture from an authentication attempt serves as an additional source of information for the morph detector, e.g. during authentication at an automatic border control gate (this scenario is also referred to as image-pair-based MAD). Note that all information extracted by no-reference morph detectors might as well be leveraged within this scenario [9].

In this work, the focus is put on the more challenging no-reference scenario. A comprehensive evaluation on two different face databases using four morphing algorithms and four post-processing methods is conducted. It is shown that a fusion of multiple configurations of multi-scale block LBP (MB-LBP) improves the performance as well as the robustness of the MAD system. Further, the proposed fusion-based scheme that combines the complementary information extracted from various scales outperforms diverse published approaches.
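For illustration, the generic landmark-based morphing pipeline mentioned above (corresponding landmarks, triangulation, warping, alpha-blending) can be sketched roughly as follows. This is a simplified example built on dlib and OpenCV, not one of the morphing tools evaluated later in this work; the landmark model path is an assumption, and the region outside the landmark hull is simply left untouched here, whereas the tools discussed in Section 3.1 handle it in different ways.

```python
# Minimal sketch of a landmark-based face morph (alpha = 0.5 gives a symmetric morph).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Path to the 68-point landmark model is an assumption; it must be downloaded separately.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(img):
    """Detect the first face and return its 68 landmarks as a (68, 2) float32 array."""
    rect = detector(img, 1)[0]
    return np.array([(p.x, p.y) for p in predictor(img, rect).parts()], dtype=np.float32)

def morph(img1, img2, alpha=0.5):
    """Triangulate averaged landmarks, warp both faces onto them and alpha-blend the textures."""
    h, w = img1.shape[:2]
    pts1, pts2 = landmarks(img1), landmarks(img2)
    pts_m = (1 - alpha) * pts1 + alpha * pts2              # averaged landmark positions
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for x, y in pts_m:
        subdiv.insert((float(x), float(y)))
    out = np.zeros_like(img1, dtype=np.float32)
    for tri in subdiv.getTriangleList().reshape(-1, 3, 2):  # Delaunay triangles
        if np.any(tri < 0) or np.any(tri[:, 0] >= w) or np.any(tri[:, 1] >= h):
            continue                                        # skip triangles touching virtual outer vertices
        idx = [int(np.argmin(np.linalg.norm(pts_m - v, axis=1))) for v in tri]
        m1 = cv2.getAffineTransform(pts1[idx], pts_m[idx])
        m2 = cv2.getAffineTransform(pts2[idx], pts_m[idx])
        w1 = cv2.warpAffine(img1.astype(np.float32), m1, (w, h))
        w2 = cv2.warpAffine(img2.astype(np.float32), m2, (w, h))
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.fillConvexPoly(mask, pts_m[idx].astype(np.int32), 1)
        out[mask > 0] = (1 - alpha) * w1[mask > 0] + alpha * w2[mask > 0]  # alpha-blending
    return np.clip(out, 0, 255).astype(np.uint8)
```

The blending and warping steps in this sketch are exactly the operations that leave the smoothing and ghosting artefacts exploited by the detectors discussed below.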
Moreover, as opposed to existing works, it is shown that MAD remains a challenging task in real-world scenarios where the image source and/or the algorithm used to morph face images is unknown to the detection system.

The remainder of this paper is organised as follows: in Section 2, the related work is briefly revisited. Subsequently, the used image databases are summarised in Section 3. In Section 4, the proposed system is described in detail. An in-depth evaluation is presented in Section 5. Finally, a conclusion is given in Section 6.

2 Related work

In general, no-reference face morphing attack detectors can be divided into three algorithm classes, which utilise either (i) texture descriptors, (ii) digital forensics, or (iii) deep learning. The most relevant published approaches and their properties are listed in Table 1.

Table 1. Overview of the most relevant no-reference face MAD algorithms (adapted from [5]); columns: Ref. | Approach | Category | Morphing method | Source face database | Post-processing | Remarks
[10] | BSIF with SVM | texture descriptors | GIMP/GAP | in-house | — | —
[11] | BSIF with SVM | texture descriptors | GIMP/GAP | in-house | print and scan | fixed database of [12]
[13] | multi-channel LBP with Pro-CRC | texture descriptors | OpenCV | FRGCv2 | print and scan | —
[14] | multi-channel LBP with SRKDA | texture descriptors | [15] | [15] | print and scan | —
[16] | WLMP with SVM | texture descriptors | Snapchat | in-house | — | —
[17, 18] | ULBP and RIPS with KNN | texture descriptors | [19] | Utrecht | — | —
[9] | BSIF with SVM | texture descriptors | triangulation + blending | FRGCv2 | — | —
[12] | score-level fusion of general-purpose image descriptors | texture descriptors | triangulation + blending | FRGCv2 | — | —
[20] | HOG with SVM | texture descriptors | triangulation + blending | FRGCv2, FERET, ARface | — | cross-database performance evaluation
[21] | LBP with SVM | texture descriptors | triangulation + blending | FRGCv2, FERET | — | cross-database performance evaluation
[7] | LBP with SVM | texture descriptors | MorGAN [7] | CelebA | — | —
[22] | high-dim. LBP with SVM | texture descriptors | triangulation + blending + swapping | Multi-PIE | — | —
[23] | image degradation | digital forensics | triangulation + blending (+ swapping) | in-house, Utrecht | — | —
[24-26] | PRNU analysis | digital forensics | triangulation + blending | FRGCv2 | hist. equalisation, scaling, sharpening | —
[27] | PRNU analysis | digital forensics | triangulation + blending, MorGAN [7] | CelebA | — | —
[28] | SPN analysis | digital forensics | triangulation + blending (+ swapping) | Utrecht, FEI | — | —
[19] | double-compression artefact analysis | digital forensics | triangulation + blending (+ swapping) | Utrecht, FEI | — | —
[29] | double-compression artefact analysis | digital forensics | [19] | Utrecht, FEI | — | —
[30] | reflection analysis | digital forensics | triangulation + blending (+ swapping) | in-house | — | —
[15] | luminance component and steerable pyramid with ProCRC | digital forensics | triangulation + blending (+ swapping) | [13] | extended print and scan | —
[31] | image quality features with SVM | digital forensics | GAN-generated morphs | VidTIMIT | — | —
[32] | VGG19 and AlexNet with ProCRC | deep learning | [11] | in-house | print and scan | —
[33] | VGG19, GoogLeNet, AlexNet | deep learning | triangulation + blending (+ swapping) | in-house | — | —
[34] | VGG19 | deep learning | triangulation + blending (+ swapping) | BU-4DFE, CFD, FEI, FERET, PUT, scFace, Utrecht, in-house | motion blur, Gaussian blur, salt-and-pepper noise, Gaussian noise | trained on all combinations (no unseen attack classes)
[35] | OpenFace NN4.SMALL2 and LBP with SVM | deep learning and texture descriptors | [36] | CelebA | — | candidate selection presented in [36]
[37] | VGG19 with SVM | deep learning | triangulation + blending (+ swapping) | FRGCv2, FERET, ARface, Biometix | print and scan | —

During the morphing process, various artefacts are created, which can be detected by analysing the texture. Due to the averaging of two images, the resulting morph is smoothed, e.g. the skin textures lose their sharpness. Furthermore, ghost artefacts or half-shade effects occur if the two morphed images are not aligned correctly and if there are too few or incorrectly positioned landmarks. These artefacts occur particularly frequently in the area of the pupils and the nostrils. Other artefacts detectable by texture descriptors are distorted corners and offset image areas. In several publications, the use of common texture descriptors, e.g. local binary patterns (LBPs) [38] or binarised statistical image features (BSIFs) [39], has already been demonstrated [7, 9-11, 21]. An extension of these algorithms to several colour channels [13, 14] or higher dimensions [22] can lead to further improvements. Other texture descriptors, such as unified LBPs (ULBPs) [17, 18] or weighted local magnitude patterns (WLMPs) [16], have also been tested.

The distortion and blending during the morphing process influence the high-frequency information of the image. These changes can be analysed by image forensics-based detection methods. For example, it has been shown that morphs can be detected by analysing the photo response non-uniformity (PRNU) [24-27] or the sensor pattern noise (SPN) [28]. Moreover, the quality of the images is reduced by editing and saving them during the morphing process. Under the assumption that the quality of morphed images is always lower than that of bona fide images, image quality can be used for morph detection. This can be done either by analysing intentional degradation of the image in question [23] or by using several quality features in combination with a classifier [31]. Under the assumption that the images are stored in a lossy compression format before and after morphing, it is possible to detect morphs by analysing double-compression artefacts [19, 29]. Furthermore, the images can be examined for inconsistencies, e.g. non-natural lighting conditions or colour values [30].
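To make the texture-descriptor family discussed above concrete, a plain LBP histogram feature can be computed as sketched below. This is a minimal example using scikit-image and is not the exact configuration of any of the cited detectors; such histograms, possibly extended over several colour channels or image cells, are then fed to a classifier such as an SVM.

```python
# Minimal sketch of an LBP-histogram texture feature for no-reference MAD.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, radius=1, points=8):
    """Compute a normalised uniform-LBP histogram of a greyscale image."""
    codes = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    n_bins = points + 2                     # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```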
The third class of no-reference algorithms is based on deep learning. Deep learning-based feature extractors offer the advantage that they can theoretically learn to detect any artefact present in the training set. This, however, carries the risk of overfitting to artefacts which only occur with the morphing algorithms used for training and are therefore not generally valid. One possibility is the training or transfer learning of a network for the detection of morphs [33, 34]. Another possibility is the use of pre-trained neural networks for feature extraction in combination with a classifier (e.g. a support vector machine, SVM) [32, 37]. Deep features can also be combined with other features (e.g. LBP) [35].

While the majority of morph detection approaches report practical detection error rates, these are commonly evaluated on a dataset of bona fide and morphed face images which are extracted from a single (in-house) face database and created by a single morphing algorithm. It was shown that variations in the dataset [20], the morphing process, and the post-processing (e.g. print and scan [11]) might negatively influence the performance of morph detection algorithms. This has also been confirmed in the face recognition vendor test (FRVT) MORPH conducted by the National Institute of Standards and Technology (NIST) [40]. In [12], a fusion of multiple algorithms was proposed, as it might improve the detection performance of no-reference algorithms. Even the fusion of different configurations of the same algorithm was found to be beneficial.

3 Database

The results of this work were obtained on subsets of the FRGCv2 [41] and FERET [42] face image databases. From these databases, potential reference images meeting the International Civil Aviation Organization (ICAO) passport photo quality standards [43] are selected. From the pre-selected images, image pairs are created for the morphing process; where possible, different images are used for morphing and as bona fide samples. However, for some subjects there are not enough samples, so the same image is used in both subsets (morphed and bona fide). The number of used subjects and bona fide images as well as the number of created morphs are given in Table 2.

Table 2. Number of subjects, bona fide and morphed face images (per morphing algorithm) of the used datasets; 'F' and 'M' indicate female and male subjects, respectively.
Database | Subjects | Bona fide face images | Morphed face images (per algorithm)
FRGCv2 | 533 (231 F, 302 M) | 1441 | 964
FERET | 529 (200 F, 329 M) | 622 | 529

3.1 Morphing

Different morphing algorithms produce morphs with different artefacts. For a comprehensive evaluation, a database covering different morphing algorithms is therefore necessary to ensure that the morphing attack detection algorithms have not overfitted to algorithm-specific artefacts. For this purpose, four morphing algorithms were used to ensure a large variation of morphs; examples with equal contribution of both subjects are shown in Fig. 2.

(i) FaceFusion [www.wearemoment.com/FaceFusion], a proprietary morphing algorithm. Due to the inaccessible source code, it is not possible to determine in which way the morphs are generated. It can be seen that, after the morphing process, parts of the first subject are blended over the morph to hide artefacts (eyes, nostrils, outer facial region). The created morphs are of high quality and show few to no visible artefacts.

(ii) FaceMorpher [github.com/alyssaq/face_morpher], an open-source implementation in Python. In the version used, the algorithm applies STASM for landmark detection.
Delaunay triangles are formed from the landmarks, which are warped and blended. The area outside the landmarks is averaged. The generated morphs show strong artefacts, in particular in the area of the neck and hair.

(iii) OpenCV, a self-implemented morphing algorithm based on the tutorial 'Face Morph Using OpenCV' [www.learnopencv.com/face-morph-using-opencv-cpp-python/]. This algorithm works similarly to FaceMorpher. Important differences are that Dlib is used for landmark detection instead of STASM, and that landmarks are additionally positioned at the edge of the image and are also used to create the morphs. Thus, in contrast to FaceMorpher, the border region does not consist of an averaged image but, like the rest of the image, of morphed triangles. However, also in this version, strong artefacts outside the facial area can be observed, which is mainly due to missing landmarks.

(iv) UBO-Morpher, the morphing tool of the University of Bologna, as used e.g. in [44]. This algorithm receives two input images as well as the corresponding landmarks; Dlib landmarks were used for this morphing tool. The morphs are generated by triangulation, averaging, and blending. To avoid artefacts in the area outside the face, the morphed face is copied onto the background of one of the original images. Even if the colours are adjusted, visible edges may appear at the transitions.

In order to conduct a fair benchmark in our experiments, the same combinations of morphed face images were created for each of the listed algorithms.

Fig. 2: Examples of morphed face images from all four algorithms (resized). From left to right: Subject 1, FaceFusion morph, FaceMorpher morph, OpenCV morph, UBO-Morpher morph, and Subject 2

3.2 Post-processing

In addition to the considered ICAO compliance, various post-processings of the images must also be taken into account, since the images of the database aim at imitating the real-world scenario of the application process for an electronic travel document. In many countries, the images are down-scaled, e.g. to 360 × 480 pixels, and compressed, e.g. to 15 kB using JPEG2000, prior to storing them on the chip of an electronic travel document, e.g. an ePassport. In addition, the images can be handed over in printed form by the applicant. It can be assumed that morphs are easier to recognise in unprocessed images and that each post-processing step increases the difficulty of reliable detection. In order to cover realistic scenarios, the following post-processings have been applied:

(i) Resizing (RS): the resolution of the images is reduced to the minimum inter-eye distance (90 px) required by the ICAO guidelines for electronic travel documents [43]. This post-processing corresponds to the scenario that an image is submitted digitally by the applicant. An example is shown in Fig. 3a. This post-processing is applied in advance to all subsequent post-processings described below.

(ii) JPEG2000 compression (J2): a wavelet-based image compression method that is recommended for electronic travel documents [45]. The setting is selected such that a target file size of 15 kB is achieved. This post-processing corresponds to the scenario that a digitally submitted image is stored on the chip of the electronic travel document. An example is shown in Fig. 3b.
(iii) Printing and scanning (PS): the images are first printed with a high-quality laser printer (Fujifilm Frontier 5700R Minilab on Fujicolor Crystal Archive Paper Supreme HD Lustre photo paper) and then scanned with a premium flatbed scanner (Epson DS-50000) at 300 dpi. A dust and scratch filter is then applied in order to reduce image noise. This post-processing corresponds to the scenario that an analogue image is submitted with the electronic travel document application. An example is shown in Fig. 3c.

(iv) Printing, scanning, and JPEG2000 compression (PS-J2): a combination of the previous post-processings. The images are first printed and scanned and then compressed using JPEG2000. This post-processing corresponds to the scenario that an analogue submitted image is stored on the chip of the electronic travel document. An example is shown in Fig. 3d.

Fig. 3: Comparison of different post-processings (FaceFusion), zoomed in to reveal artefacts and noise more clearly. (a) Resized, (b) JPEG2000, (c) Print/Scan, (d) Print/Scan and JPEG2000

3.3 Validation of attack potential

To assure the significance of the following experiments, the attack potential of the created databases is evaluated in a first step. For this purpose, comparison scores for genuine and impostor comparisons, as well as for morphing attacks, are determined, and the mated morph presentation match rate (MMPMR) and the relative morph match rate (RMMR) defined in [46] are estimated. The FRGCv2 database provides probe images showing a significantly higher variance (and therefore higher realism) compared to the probe images contained in the FERET database; thus, the validation of the attack potential is limited to the FRGCv2 database. Due to the lower variance of its sample images, the comparisons on the FERET database result in higher comparison scores for genuine and morphing attack comparisons; thus, the results obtained on FRGCv2 can be considered a lower limit for the attack potential. The comparison scores were generated using a commercial off-the-shelf (COTS) FRS. The resulting probability density functions (PDFs) are depicted in Fig. 4.

In most publications, databases with symmetric morphs are used. This means that both subjects contribute equally to the creation of the morph. However, it is also suggested, e.g. in [44], to assign a lower weight to one subject in order to increase the chances in the case of a manual control with this subject. For this reason, in addition to the PDFs of symmetrical morphs in Fig. 4b, the distributions of asymmetrical morphs with a weighting of 25 and 75% are shown in Fig. 4a; the corresponding MMPMR and RMMR values are listed in Table 3. Since the FRS maintains a zero FNMR at the considered FMR of 0.1%, the MMPMR is equal to the RMMR. However, it is evident that the asymmetric morphs, regardless of the applied morphing algorithm, have no attack potential for the used FRS. This behaviour is reinforced by the realistic variance of the probe images used. As a consequence, only symmetrical morphs are considered in this paper.

Fig. 4: PDFs of comparison scores of genuine, impostor, and morphing attack comparisons for symmetrical and asymmetrical morphs; the estimated threshold for an FMR of 0.1% is also depicted. (a) 25/75 morphs, (b) 50/50 morphs

Table 3. Vulnerability assessment of the COTS FRS (MMPMR/RMMR in %).
Morph weighting | FaceFusion | FaceMorpher | OpenCV | UBO-Morpher
0.25 | 18.8 | 8.4 | 9.8 | 3.0
0.5 | 79.4 | 60.1 | 62.8 | 81.5

4 Proposed system
The proposed system, which is depicted in Fig. 5, comprises three key modules: (i) MB-LBP extraction, (ii) cell division, and (iii) training and score-level fusion; in the following subsections, all modules are described in detail. To avoid overfitting of the algorithm to avoidable artefacts, e.g. ghost artefacts in hair regions, the image is cropped to a size of 320 × 320 pixels using predefined offsets, whereby the image area showing the face is cut out. Finally, the cropped face part is converted to a greyscale image.

Fig. 5: Overview of the proposed multiple-configuration MB-LBP fusion approach with division into multiple cells to detect morphed facial images; k is the parameter for the MB-LBP block size and c the parameter for the cell division

4.1 Multi-scale block LBP

LBP is a powerful feature for texture classification. Specifically, LBP is suitable for detecting morphed face images in no-reference scenarios [12]. The distortions introduced to the images by the morphing process change the texture of the images in a way that can be detected in an LBP histogram. Further, the images are averaged during blending, which smooths the resulting morph, leading to less sharp edges, which are reflected in an LBP histogram, too. In addition, the morphing process might introduce minor artefacts to the image [46]. As LBP is designed for the representation of surface properties, these artefacts can be represented in the LBP histogram as well and can be utilised to detect morphed face images.

The original LBP operator labels the pixels of an image by thresholding the 3 × 3 neighbourhood of each pixel with the centre value and considering the result as a binary string or a decimal number. The histogram of the extracted LBP values can then be used as a texture descriptor. MB-LBP [47] is an extension of the basic LBP to neighbourhoods of different sizes. In MB-LBP, the comparison between single pixels in LBP is replaced by a comparison between average pixel intensities of sub-regions. Each sub-region is a square block of neighbouring pixels whose side length is determined by the block-size parameter k. In each sub-region, the average of the pixel intensities is computed. These averages are then thresholded by that of the centre block. The whole filter is composed of nine blocks (the centre block and eight neighbouring blocks). If a higher value for k is selected, details are lost while robustness increases [47]. An example of the basic LBP and the MB-LBP operator is shown in Fig. 6. In order to be able to compute the LBP blocks in the peripheral regions, padding border rows and columns replicating the outer pixel values are added to the image in advance.

Fig. 6: Basic LBP operator and the MB-LBP operator. (a) Basic LBP, (b) MB-LBP

4.2 MB-LBP feature extraction over multiple cells

Even if the performance of LBP in constrained scenarios is promising, the detection performance of LBP degrades strongly when the face images are post-processed, e.g. by printing and scanning. Further, it was observed that smaller blocks show a higher performance on single databases, but larger blocks are more robust in a cross-database analysis [20]. Scherhag et al. [12] have shown that a fusion of two LBP configurations might lead to increased performance and robustness of the algorithm. After the computation of the MB-LBP values, the resulting image is divided into c × c cells. For each cell a histogram is calculated, and the individual histograms are concatenated into a longer MB-LBP feature vector.
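A minimal sketch of this extraction step (MB-LBP codes followed by cell-wise histogram concatenation) is given below, using only OpenCV and NumPy. The exact mapping of the paper's scale parameter k to the block side length s is an assumption here, not the paper's definition; in the proposed system, such a feature vector would be computed once per (k, c) configuration.

```python
# Minimal sketch of MB-LBP extraction with cell-wise histograms.
import cv2
import numpy as np

def mb_lbp(gray, s=3):
    """Compute MB-LBP codes: the mean of each s x s block is compared against the
    means of the eight neighbouring blocks; replicate padding handles the image border."""
    img = cv2.copyMakeBorder(gray, s, s, s, s, cv2.BORDER_REPLICATE).astype(np.float32)
    means = cv2.boxFilter(img, ddepth=-1, ksize=(s, s), borderType=cv2.BORDER_REPLICATE)
    h, w = gray.shape
    centre = means[s:s + h, s:s + w]
    codes = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-s, -s), (-s, 0), (-s, s), (0, s), (s, s), (s, 0), (s, -s), (0, -s)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = means[s + dy:s + dy + h, s + dx:s + dx + w]
        codes |= (neigh >= centre).astype(np.uint8) << bit   # one bit per neighbouring block
    return codes

def cell_histograms(codes, c=3):
    """Divide the MB-LBP code image into c x c cells and concatenate per-cell histograms."""
    feats = []
    for rows in np.array_split(codes, c, axis=0):
        for cell in np.array_split(rows, c, axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256), density=True)
            feats.append(hist)
    return np.concatenate(feats)                             # length 256 * c * c
```

The per-cell histograms preserve the spatial layout of morphing artefacts, which is the motivation for the cell division.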
As c increases, so does the number of concatenated histograms and thus the size of the feature vector. With that comes the benefit of retaining more local information. Thus, at feature extraction, the MB-LBP feature extraction is applied to the post-processed image in different configurations. The configurations consist of the possible combinations resulting from the values for k and c. Values from 0 to 5 are selected for k, since too much information is lost with even larger values. The image is divided into at most 3 × 3 cells (c ∈ {1, 2, 3}); otherwise, the ratio between the patch size and the cell size becomes disproportionate. This results in 6 × 3 = 18 possible configurations.

4.3 Training and score-level fusion

To distinguish between bona fide and morphed face images, one SVM is trained per configuration of k and c. The default hyperparameters of the scikit-learn implementation of the linear-kernel SVM [https://scikit-learn.org/stable/modules/svm.html] are used (C = 1.0, gamma = 1/(n_features × variance)). For a given face image, each SVM generates a normalised attack detection score in the range [0, 1]. In the fusion stage, a sum-rule score-level fusion is applied to the scores of the different classifiers. The number of fused algorithms ranges from 1 (no fusion) to the total number of MB-LBP configurations and cell divisions, i.e. 18. Considering all possible combinations, this results in 2^18 − 1 = 262,143 possible fusions. Despite this large number of possible fusion combinations, it is expected that the fusion of the maximum number of configurations reveals competitive detection performance, as will be shown in the experiments.

5 Experiments

In the following section, the experimental setup as well as the evaluation of the experiments are described, including a discussion of the observed results. The performance evaluations are conducted based on the database described in Section 3.

5.1 Morph detection performance evaluation

For the performance evaluation of the described algorithm, the SVM classifiers are each trained on one post-processing and one morphing algorithm at a time using the FERET database. The evaluation is performed on the FRGCv2 database and all other morphing algorithms, resulting in 12 combinations per post-processing and 48 combinations in total. The performance of the detection algorithms is reported using the detection equal error rate (D-EER), i.e. the operating point where the proportion of attack presentations incorrectly classified as bona fide presentations (APCER) equals the proportion of bona fide presentations incorrectly classified as presentation attacks (BPCER). For APCER and BPCER, the definitions of ISO/IEC 30107-3 [48] are used:
APCER: the proportion of attack presentations incorrectly classified as bona fide presentations in a specific scenario.
BPCER: the proportion of bona fide presentations incorrectly classified as presentation attacks in a specific scenario.
In a preliminary analysis