Feature Level Fusion Approach in Multimodal Biometric System Design

A Comparative Analysis of Feature Fusion Techniques in Multimodal Biometric Systems

by Mahananda D. Malkauthekar*,

- Published in Journal of Advances in Science and Technology, E-ISSN: 2230-9659

Volume 13, Issue No. 1, Mar 2017, Pages 312 - 317 (6)

Published by: Ignited Minds Journals


ABSTRACT

This paper discusses fusion at the feature extraction level for face and fingerprint biometrics. The proposed approach fuses the two traits by extracting independent feature point sets from the two modalities and making the two point sets compatible for concatenation. Moreover, to handle the curse of dimensionality, the feature point sets are reduced in dimension. Different feature reduction techniques are applied, both before and after fusion of the feature point sets, and the results are recorded. Principal Component Analysis is used to reduce the dimensionality of facial images.

KEYWORD

feature level fusion, multimodal biometric system design, face, fingerprint, biometrics, fusion, feature extraction level, independent feature point sets, modalities, concatenation, problem of curse of dimensionality, feature reduction techniques, dimensionality, principal component analysis

INTRODUCTION

Multi biometric systems combine the information presented by multiple biometric sensors, algorithms, samples, units, or traits [1]. Biometric data represent physical and behavioral characteristics that can enable verification and validation of a person's identity. Biometrics includes, but is not limited to, finger, face, hand, eye, voice, and DNA characteristics of an individual [2]. The biometric verification problem can be considered a classification problem in which a decision is made on whether or not a claimed identity is genuine with reference to some matching criteria. A brief description of the commonly used biometrics is given below and illustrated in Figure 1 [3][4].

a) Face:

Face recognition is a nonintrusive method, and facial images are probably the most common biometric characteristic used by humans to make personal recognition [33][34].

b) Fingerprint:

Humans have used fingerprints for personal identification for many decades, and the matching (i.e., identification) accuracy using fingerprints has been shown to be very high.

c) Hand Geometry:

Hand geometry recognition systems are based on a number of measurements taken from the human hand, including its shape, size of palm, and lengths and widths of the fingers.

Fig1. Examples of biometric characteristics: (a) face, (b) fingerprint, (c) hand geometry, (d) iris, (e) keystroke, (f) signature, and (g) voice.

d) Iris:

The iris is the annular region of the eye bounded by the pupil and the sclera (white portion of the eye) on either side. The visual texture of the iris is formed during fetal development and stabilizes during the first two years of life. The complex iris texture carries very distinctive information useful for personal recognition.

e) Keystroke:

It is hypothesized that each person types on a keyboard in a characteristic way. This behavioral biometric is not expected to be unique to each individual but it is expected to offer sufficient discriminatory information that permits identity verification.


Biometric identifiers represent measurements of a biological trait or behaviour. These identifiers are prone to wear and tear, accidental injuries, malfunctions, and pathophysiological development. Manual work, accidents, etc., inflict injuries to the finger, thereby changing the ridge structure of the finger either permanently or semi-permanently. Facial hair growth, accidents, attachments, makeup, swelling, and different hairstyles may all lead to irreproducible face depictions. Retinal measurements can change with some pathological developments (e.g., diabetic retinopathy). Inebriation results in erratic signatures. The common cold changes a person's voice [4]. A multimodal biometric system can be designed to overcome the limitations of unimodal biometric systems. Such systems are expected to be more reliable due to the presence of multiple, independent pieces of evidence. Multi biometric systems can address the problem of non-universality, since multiple traits ensure sufficient population coverage. Further, multi biometric systems can provide anti-spoofing measures by making it difficult for an intruder to simultaneously spoof the multiple biometric traits of a legitimate user [1]. Still, there remains the challenge of finding the optimal approach to combining different biometrics and algorithms, and this challenge is expected to continue in the coming years.

LITERATURE SURVEY

An overview of the different biometric systems, an enumeration of their advantages and weaknesses, and some of the newly introduced biometrics are presented in [1]. Fingerprint enhancement is the first step in a fingerprint identification system; it includes methods such as the separable Gabor filter [8] and segmentation performed in the online image-capture process. Fingerprint singularities play an important role in several fingerprint recognition and classification systems [10][11]. Different methods are used for feature extraction from fingerprints, such as minutiae-based methods [12-18], global and local features [19], and Level 3 features (pores and ridges), as shown in Figure 2 [20].

Level 3 Feature Extraction

It must be noted that Level 1, 2 and 3 features are not independent within the domain of fingerprint authentication. For example, the distribution of pores is not random, but naturally follows the ridge structure. Therefore, in order to reliably extract Level 3 features, namely, pores and ridge contours, the following feature extraction algorithm is proposed by combining wavelet transform and Gabor filter enhancement.

Fig. 2: Level 3 feature extraction. (a) A partial fingerprint image at 1000dpi. (b) Wavelet response (s=1.32) of the image in (a). (c) Ridge enhancement of image in (a) using Gabor filters. (d) Pore enhancement using a linear addition of (b) and (c). (e) Extracted pores (red circles) after thresholding on (d). (f) Ridge enhancement using a linear subtraction of wavelet response (s=1.74) and (c). (g) Identified ridges after binarization on (f). (h) Extracted ridge contours after applying filters on (g)

1) Pore Detection

Based on their position on the ridges, pores are often divided into two categories: open and closed. A closed pore is entirely enclosed by a ridge, while an open pore intersects the valley lying between two ridges (Figure 2(a)). A method to extract pores from a skeletonized image was proposed for 2000 dpi fingerprint images [6, 8]. Generally, if a point has 1 (or 3) neighbours in the skeletonized image, it is determined to be an open (or closed) pore. However, this method is very sensitive to noise and fails to work when images are of poor quality or of lower resolution (1000 dpi). Pore positions often give a high negative frequency response, as intensity values change abruptly from white to black. In order to capture this sudden change, we apply the Mexican hat wavelet transform to the original image f(x, y) ∈ R² to obtain the frequency response w:

w(s; a, b) = (1/s) ∫∫ f(x, y) ψ((x − a)/s, (y − b)/s) dx dy,   (Eq. 1)

where s is the scale factor (= 1.32), (a, b) is the shifting parameter, and ψ is the Mexican hat wavelet. Essentially, this wavelet acts as a band-pass filter. After normalizing the wavelet response to the range 0-255 using the min-max rule, pore regions, which typically have a high negative frequency response, appear as small blobs with low intensities (Figure 2(b)). Since pores are naturally distributed along the ridge, it is important to also identify the ridges so that no points in the valley are misclassified as pores. We apply the Gabor filter enhancement proposed in [9] to separate ridges from valleys (Figure 2(c)). By simply adding the wavelet response to the Gabor-enhanced image, we obtain an "optimal" enhancement of pores on the ridges (Figure 2(d)). This procedure also removes the difference between open and closed pores and therefore simplifies the pore extraction process. Finally, an empirically determined threshold (=58) is applied to extract pores with a blob size less than 40 pixels (Figure 2(e)).
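The pore-detection steps above can be illustrated with a short sketch. This is only an outline under stated assumptions, not the authors' implementation: the Mexican hat response is approximated with a negative Laplacian-of-Gaussian filter from SciPy, a single skimage Gabor filter (with a hypothetical ridge frequency of 0.12) stands in for the orientation-tuned enhancement of [9], and the input `fingerprint` is assumed to be a grayscale NumPy array of a 1000 dpi fingerprint patch. The thresholds mirror the values quoted in the text (intensity 58, blob size 40).

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor

def minmax_255(img):
    """Normalize an array to the 0-255 range (min-max rule)."""
    img = img.astype(float)
    return 255.0 * (img - img.min()) / (img.max() - img.min() + 1e-9)

def detect_pores(fingerprint, scale=1.32, thresh=58, max_blob=40):
    """Illustrative pore detection: wavelet-like response plus ridge
    enhancement, additive combination, thresholding, blob-size filter."""
    # 1) Mexican-hat-like band-pass response (negative LoG approximates
    #    the Mexican hat wavelet at scale s = 1.32).
    wavelet = -ndimage.gaussian_laplace(fingerprint.astype(float), sigma=scale)
    wavelet = minmax_255(wavelet)

    # 2) Ridge enhancement with a Gabor filter (stand-in for the
    #    orientation-tuned Gabor enhancement cited in the text).
    ridge_real, _ = gabor(fingerprint.astype(float), frequency=0.12)
    ridges = minmax_255(ridge_real)

    # 3) Additive combination emphasises pores lying on the ridges.
    combined = minmax_255(wavelet + ridges)

    # 4) Threshold and keep only small connected blobs as pore candidates.
    candidates = combined < thresh          # low-intensity pixels after enhancement
    labels, n = ndimage.label(candidates)
    sizes = ndimage.sum(candidates, labels, range(1, n + 1))
    pores = np.isin(labels, 1 + np.flatnonzero(sizes < max_blob))
    return pores
```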

Ridge Contour Extraction

Since the wavelet response of an image emphasizes the regions with high intensity variation, we further exploit it for the extraction of ridge contours. First, the scale s (Eq. 1) is increased (=1.74) to accommodate smoother ridge contours. Then we subtract the wavelet response from the enhanced image to identify the ridges (Figure 2(f)). The resulting image is further binarized using an empirically defined threshold δ (=10) (Figure 2(g)). Finally, ridge contours can be extracted by convolving the binarized image f_b(x, y) with a filter H (Figure 2(h)):

r(x, y) = (f_b ∗ H)(x, y),

where the filter H = (0, 1, 0; 1, 0, 1; 0, 1, 0) counts the number of neighbourhood edge points for each pixel. A point (x, y) is classified as a ridge contour point if r(x, y) = 1 or 2. A short code sketch of this step is given below, after the survey of related feature-extraction methods.

Other approaches to fingerprint feature extraction include the local triangle feature set [21], model-based density maps [22], [23], directional fields computed with a PCA-based method [24], and Gabor filters [25]. In [26], the author proposed novel methods of feature extraction from ear, lip, and palm-print images. The implementation of a multimodal approach combining iris and fingerprint biometrics is discussed in [27]. When fingerprint and voice data are combined in a biometric fusion problem, the result gives performance comparable to that of a neural network with a much faster computing speed [28]. A method of text-prompted speaker recognition based on multimodal biometrics using kernel Fisher discriminant analysis is proposed in [29]. In [30], the authors investigated a new approach for the adaptive combination of multiple biometrics to dynamically ensure performance at the desired level of security, combining multiple biometrics at the matching-score level. The score-level representation contains more information than the decision level and therefore gives more reliable performance. The study of fusion at the feature extraction level for face and fingerprint biometrics is carried out in [31]. It is noticed that fusion at the feature level is relatively difficult to achieve because multiple modalities may have incompatible feature sets and the correspondence among different feature spaces may be unknown. The multimodal biometric decision fusion problem is addressed in [32].
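Returning to the ridge contour step above, the convolution with the filter H and the r(x, y) = 1 or 2 rule can be sketched as follows. This is a minimal illustration assuming the binarized ridge image is available as a 0/1 NumPy array; restricting contour points to ridge pixels is an added assumption, not something stated in the text.

```python
import numpy as np
from scipy.signal import convolve2d

# Filter H from the text: counts the 4-neighbourhood edge points of each pixel.
H = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

def ridge_contour_points(binary_ridges):
    """Keep a ridge pixel as a contour point if its 4-neighbourhood
    contains exactly 1 or 2 ridge pixels (r(x, y) = 1 or 2)."""
    r = convolve2d(binary_ridges.astype(int), H, mode="same", boundary="fill")
    return binary_ridges.astype(bool) & ((r == 1) | (r == 2))
```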

Feature level

The feature sets extracted from the multiple sources of information are concatenated into a joint feature vector. This new, high-dimensional feature vector represents an individual. Various feature selection or transformation procedures may be adopted to reduce the dimensionality of the resultant feature set. The joint vector is then compared to an enrolment template (which is itself a joint feature vector stored in a database) and classification is performed accordingly. The block diagram representing the flow of feature level fusion is shown in Fig. 3.

Fig.3 Feature level Fusion
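As a concrete illustration of this flow, the sketch below shows the concatenation of two already-extracted feature vectors into a joint vector and its comparison with an enrolment template using Euclidean distance. The function names, the distance measure, and the threshold are hypothetical; the paper does not prescribe a specific matcher.

```python
import numpy as np

def fuse_features(face_vec, finger_vec):
    """Concatenate two already-extracted feature vectors into one
    joint feature vector representing an individual."""
    return np.concatenate([face_vec, finger_vec])

def verify(probe_joint, enrolled_joint, threshold=0.5):
    """Compare the probe joint vector with the enrolment template
    (also a joint vector) using Euclidean distance."""
    dist = np.linalg.norm(probe_joint - enrolled_joint)
    return dist < threshold   # accept the claimed identity if close enough

# Hypothetical usage: face_features and finger_features are the
# independent feature point sets extracted from the two modalities.
# probe = fuse_features(face_features, finger_features)
# accepted = verify(probe, enrolled_template)
```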

Fusion at the feature level, however, is a relatively understudied problem [4]. Fusion at this level is difficult to achieve in practice because multiple modalities may have incompatible feature sets or the feature spaces may be unknown; the concatenated feature vector may lead to the curse of dimensionality; a more complex matcher may be required for the concatenated feature vector; and the concatenated feature vector may contain noisy or redundant data, leading to a decrease in the performance of the classifier. Nevertheless, fusion at the feature level is expected to provide better authentication results than fusion at the match score or final decision level, because the feature set contains richer information about the raw biometric data. Mark Abernethy stated that data fusion investigations demonstrated that multimodal biometric authentication systems provide additional accuracy improvements compared to unimodal biometric authentication systems, and that fusion at the feature level demonstrated improved accuracy compared with confidence-score-level and decision-level data fusion methods [11][35].
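One common way to mitigate the incompatibility and dimensionality issues listed above, consistent with the abstract's use of Principal Component Analysis, is to min-max normalize each modality's features before concatenation and then reduce the joint vector with PCA. The sketch below assumes training feature matrices for both modalities are available; the choice of 50 components is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA

def minmax_normalize(X):
    """Scale each feature (column) to [0, 1] so the two modalities
    become numerically compatible before concatenation."""
    return (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-9)

def fuse_and_reduce(face_X, finger_X, n_components=50):
    """Normalize, concatenate, and PCA-reduce training feature matrices
    (rows = samples). Returns the reduced vectors and the fitted PCA."""
    joint = np.hstack([minmax_normalize(face_X), minmax_normalize(finger_X)])
    pca = PCA(n_components=n_components)
    reduced = pca.fit_transform(joint)   # tackles the curse of dimensionality
    return reduced, pca
```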


The proposed feature level fusion in multimodal biometrics can achieve higher classification accuracy than score level fusion. To tackle the other grand challenges of biometrics, such as performance, security, and privacy, it is necessary to adopt a multi biometric approach. It is possible to adopt a more flexible approach in choosing which modalities to integrate depending on individual user needs and constraints, thus removing, or at least reducing, the barrier to use by "outlier" individuals and facilitating universal access through biometrics. Using face and fingerprint features in a multimodal system can address the problem of false rejection caused by sustained change in biometric features due to aging or any other factor. The challenge is to integrate these observations across modalities and over time.

CONCLUSION

Multi biometric systems are expected to be more reliable due to the presence of multiple, independent pieces of evidence. These systems are also able to meet the stringent performance requirements imposed by various applications. Multi biometric systems address the problem of non-universality, since multiple traits ensure sufficient population coverage [1]. Attacks on fingerprint-based biometric systems using fake reproductions of the finger may be a serious threat, in particular for non-supervised access control and remote authentication applications [6]. Further, multi biometric systems provide anti-spoofing measures by making it difficult for an intruder to simultaneously spoof the multiple biometric traits of a legitimate user. Thus, a challenge-response type of authentication can be facilitated using multi biometric systems. Multi biometric systems seek to alleviate some of these drawbacks by providing multiple pieces of evidence of the same identity. These systems help achieve an increase in performance that may not be possible using a single biometric indicator. Combining face and fingerprint recognition can give better performance than either biometric alone. However, an effective fusion scheme is necessary to combine the information presented by multiple experts.

Future work

This work will focus on face and fingerprint features in a multimodal system and their fusion at the feature extraction level; the main effort will be on arriving at a fusion methodology that maximizes the accuracy of the combined decision.

REFERENCES

1. Mohamed Deriche, "Trends and Challenges in Mono and Multi Biometrics", Image Processing Theory, Tools & Applications, IEEE, 2008.
2. Kristin Giammarco, Deepinder Sidhu, "Building Systems with Predictable Performance: A Joint Biometrics Architecture Emulation", pp. 1-8, IEEE, 2008.
3. Kar-Ann Toh and Wei-Yun Yau, "Combination of Hyperbolic Functions for Multimodal Biometrics Data Fusion", pp. 1199-1209, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 34, No. 2, April 2004.
4. Anil K. Jain, Arun Ross and Sharath Pankanti, "Biometrics: A Tool for Information Security", pp. 125-143, IEEE Transactions on Information Forensics and Security, Vol. 1, No. 2, June 2006.
5. Jiying Wu, Gaoyun An, and Qiuqi Ruan, "Independent Gabor Analysis of Discriminant Features Fusion for Face Recognition", pp. 97-100, IEEE Signal Processing Letters, Vol. 16, No. 2, February 2009.
6. Athos Antonelli, Raffaele Cappelli, Dario Maio, and Davide Maltoni, "Fake Finger Detection by Skin Distortion Analysis", pp. 360-373, IEEE Transactions on Information Forensics and Security, Vol. 1, No. 3, September 2006.
7. Anil K. Jain, "Biometrics: Proving Ground for Image and Pattern Recognition", Fourth International Conference on Image and Graphics.
8. Vutipong Areekul, Ukrit Watchareeruetai, Kittiwat, "Separable Gabor Filter Realization for Fast Fingerprint Enhancement", IEEE, 2005.
9. Jiang-Zhong Cao, Qing-Yun Dai, "A Novel Online Fingerprint Segmentation Method Based on Frame-Difference", IEEE, 2009.
10. Raffaele Cappelli and Davide Maltoni, "On the Spatial Distribution of Fingerprint Singularities", pp. 742-748, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 31, No. 4, April 2009.
11. Yi Wang, Jiankun Hu, and Damien Phillips, "A Fingerprint Orientation Model Based on 2D Fourier Expansion (FOMFE) and Its Application to Singular-Point Detection and Fingerprint Indexing", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 4, April 2007.
12. Marius Tico, Pauli Kuosmanen, "Fingerprint Matching Using an Orientation-Based Minutia Descriptor", pp. 1009-1014, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 8, August 2003.
13. Virginia Espinosa, "Minutiae Detection Algorithm for Fingerprint Recognition", pp. 7-10, IEEE Systems Magazine, March 2002.
14. Shubhangi Vaikole, S. D. Sawarkar, Shila Hivrale, Taruna Sharma, "Minutiae Feature Extraction from Fingerprint Images", pp. 691-696, 2009 IEEE International Advance Computing Conference (IACC 2009), Patiala, India, 6-7 March 2009.
15. Fanglin Chen, Jie Zhou, and Chunyu Yang, "Reconstructing Orientation Field from Fingerprint Minutiae to Improve Minutiae-Matching Accuracy", pp. 1665-1670, IEEE Transactions on Image Processing, Vol. 18, No. 7, July 2009.
16. Hartwig Fronthaler, Klaus Kollreider, and Josef Bigun, "Local Features for Enhancement and Minutiae Extraction in Fingerprints", pp. 354-363, IEEE Transactions on Image Processing, Vol. 17, No. 3, March 2008.
17. Chulhan Lee, Jeung-Yoon Choi, Kar-Ann Toh, Sangyoun Lee, and Jaihie Kim, "Alignment-Free Cancelable Fingerprint Templates Based on Local Minutiae Information", pp. 980-992, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 37, No. 4, August 2007.
18. Yongfang Zhu, Sarat C. Dass, and Anil K. Jain, "Statistical Models for Assessing the Individuality of Fingerprints", pp. 391-401, IEEE Transactions on Information Forensics and Security, Vol. 2, No. 3, September 2007.
19. Jinwei Gu, Jie Zhou, and Chunyu Yang, "Fingerprint Recognition by Combining Global Structure and Local Cues", pp. 1952-1964, IEEE Transactions on Image Processing, Vol. 15, No. 7, July 2006.
20. Anil K. Jain, Yi Chen, and Meltem Demirkus, "Pores and Ridges: High-Resolution Fingerprint Matching Using Level 3 Features", pp. 15-27, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 1, January 2007.
21. Yangyang Zhang, "An Algorithm for Distorted Fingerprint Matching Based on Local Triangle Feature Set", pp. 169-177, IEEE Transactions on Information Forensics and Security, Vol. 1, No. 2, June 2006.
22. Dingrui Wan and Jie Zhou, "Fingerprint Recognition Using Model-Based Density Map", pp. 1690-1696, IEEE Transactions on Image Processing, Vol. 15, No. 6, June 2006.
23. Jie Zhou and Jinwei Gu, "A Model-Based Method for the Computation of Fingerprints' Orientation Field", pp. 821-835, IEEE Transactions on Image Processing, Vol. 13, No. 6, June 2004.
24. Asker M. Bazen and Sabih H. Gerez, "Systematic Methods for the Computation of the Directional Fields and Singular Points of Fingerprints", pp. 905-919, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, July 2002.
25. Chih-Jen Lee and Sheng-De Wang, "Fingerprint Feature Extraction Using Gabor Filters", pp. 288-290, Electronics Letters, Vol. 35, No. 4, February 1999.
26. Michal Choras, "Emerging Methods of Biometrics Human Identification", IEEE, 2007.
27. Stelvio Cimato, Marco Gamassi, Vincenzo Piuri, Roberto Sassi and Fabio Scotti, "Privacy-Aware Biometrics: Design and Implementation of a Multimodal Verification System", Annual Computer Security Applications Conference, pp. 130-139, IEEE, 2008.
28. Kar-Ann Toh, Wei-Yun Yau, and Xudong Jiang, "A Reduced Multivariate Polynomial Model for Multimodal Biometrics and Classifiers Fusion", pp. 224-233, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 14, No. 2, February 2004.
29. Masatsugu Ichino, Hitoshi Sakano and Naohisa Komatsu, "Multimodal Biometrics of Lip Movements and Voice Using Kernel Fisher Discriminant Analysis", ICARCV 2006, IEEE.
30. Ajay Kumar, Vivek Kanhangad, David Zhang, "Multimodal Biometrics Management Using Adaptive Score-Level Combination", IEEE, 2008.
31. "… Fingerprint Biometrics", IEEE, 2007.
32. Kar-Ann Toh, Xudong Jiang and Wei-Yun Yau, "Exploiting Global and Local Decisions for Multimodal Biometrics Verification", pp. 3059-3071, IEEE Transactions on Signal Processing, Vol. 52, No. 10, October 2004.
33. M. D. Malkauthekar and S. D. Sapkal, "Analysis of Classification Methods of Face Images Using PCA and Fisher-Based Algorithms", pp. 442-446, ICACT 2008, Hyderabad.
34. M. D. Malkauthekar, "Template Security for Fingerprint Recognition System with Two Variables Polynomial of Fuzzy Vault for Minutiae Points", pp. 1856-1859, ICCSP 2015, Tamil Nadu.
35. Shubhangi Sapkal, "Data Level Fusion for Multibiometric System Using Face and Finger", pp. 80-84, IJARCSEE, Vol. 1, Issue 2, April 2012.

Corresponding Author Mahananda D. Malkauthekar*

Department of MCA, Government College of Engineering, Karad, Maharashtra, India

E-Mail – mahananda.malkauthekar@gcekarad.ac.in