Palmprint Recognition via Bandlet, Ridgelet, Wavelet and Neural Network


Mohanad A. M. Abukmeil¹, Hatem Elaydi¹, Mohammed Alhanjouri²

¹Electrical Engineering, Islamic University of Gaza, Gaza, Palestine

²Computer Engineering, Islamic University of Gaza, Gaza, Palestine

Abstract

Palmprint recognition has emerged as a valid biometric tool for personal identification. The applications of palmprints are determined by their features: high-resolution features such as minutiae points, ridges and singular points, or low-resolution features such as wrinkles and principal lines. In this paper, the 700 nm spectral band of the PolyU hyperspectral palmprint database is utilized, and the multiscale bandlet image transform is used for feature extraction; its results are compared with those of the ridgelet and 2D discrete wavelet transforms. The size of the features is reduced using principal component analysis and linear discriminant analysis; in addition, a feed-forward back-propagation neural network is used as a classifier. The results show that the recognition accuracy of the bandlet transform outperforms the others.


Cite this article: Abukmeil, M. A. M., Elaydi, H., & Alhanjouri, M. (2015). Palmprint Recognition via Bandlet, Ridgelet, Wavelet and Neural Network. Journal of Computer Sciences and Applications, 3(2), 23-28.

1. Introduction

Biometric systems may be used for personal identification instead of token-based systems such as passports, physical keys and ID cards, or knowledge-based systems such as passwords. In token-based systems, the token can be stolen or lost easily, while in knowledge-based systems the knowledge can be forgotten or guessed [1].

Palmprint identification has emerged as a leading and promising biometric modality for both forensic and commercial applications [2, 3]. Palmprint features are unique and may be used efficiently to identify people. Palmprint features can be classified, according to their use, into two groups: low-resolution features (<100 dpi), such as principal lines and wrinkles, which may be used in commercial applications; and high-resolution features (>100 dpi), such as singular points, ridges and minutiae points, which may be used for forensic applications [3]. Both high- and low-resolution palmprint image features are shown in Figure 1.

Several multiscale image transforms have been utilized for the analysis and feature extraction of palmprints. The results vary depending on the other components of the system, such as the dimensionality reduction method and the recognition technique.

This article uses the multiscale bandlet image transform for feature extraction from palmprint images and compares its results with those of the ridgelet and 2D discrete wavelet transforms. It also utilizes 2D PCA and 2D LDA for dimensionality reduction and compares their results. The recognition is accomplished using a feed-forward back-propagation neural network.

The rest of this paper is organized as follows: Section 2 gives a brief description of related work. The multiscale image transforms, dimensionality reduction with 2D PCA and 2D LDA, and the feed-forward back-propagation neural network are highlighted in Section 3. Section 4 reports the feature extraction and recognition results for each multiscale image transform. Finally, the conclusion and future work are presented in Section 5.

2. Related Work

The development of multiscale image transforms together with dimensionality reduction techniques has led to valuable research on identifying people using palmprint features. Various techniques have gained popularity and attracted much interest for extracting features from palmprint images.

Lu et al. (2006) [4] used wavelet decomposition and 2D principal component analysis (2DPCA) for palmprint recognition on the PolyU database. The 2D wavelet transform was applied and 2DPCA was performed on the low-frequency components. The algorithm achieved a comparatively high recognition rate. The major limitation was the use of only 100 palmprints with six samples per palm. The results were compared only with 1-D PCA and ICA, although the images naturally call for a 2-D representation. Ten projection vectors were used as classifier input, resulting in high complexity and long delays.

Masood et al. (2009) [5] suggested a palmprint-based identification approach that drew on the textural information available in the palmprint by utilizing a combination of contourlet and non-subsampled contourlet transforms. The algorithm was tested on 500 palm images from the GPDS hand database, and its results were compared with results reported in the literature. The proposed algorithm outperformed other reported palmprint matching methods in terms of the equal error rate (EER). The ROI was 256×256 pixels, which may increase the complexity in some phases. The selected features may be inadequate to distinguish the different classes, especially given the limitations of the Euclidean distance classifier.

Sharkas et al. (2010) [6] compared two techniques for palmprint recognition. The first extracted the edges from the palm images and then performed the contourlet transform (CT) or the discrete wavelet transform (DWT) on the edge-extracted images. The second employed principal component analysis (PCA). Features extracted by both techniques were tested and compared, and the best achieved recognition rate was about 94%. However, the minimum distance classifier used was insensitive to differences in variance. Only five palmprint images were used for training, and the recognition depended on the number of eigenvectors, which was insufficient.

Kekre et al. (2012) [7] suggested the use of a hybrid wavelet, generated by the Kronecker product of two existing orthogonal transforms (Walsh and DCT), to identify multispectral palmprints. One-to-many identification on a large database containing three sets of 6000 multispectral palmprint images from 500 different palms was used to validate the performance. The proposed method achieved a genuine acceptance ratio of 99.979% using score-level fusion. Feature vector selection depended on high-energy components, which was insufficient to select the most discriminative features. Moreover, the recognition phase was complex and time consuming.

Elaydi et al. [8] used the PolyU hyperspectral palmprint database and applied a back-propagation neural network for recognition, linear discriminant analysis for dimensionality reduction, and the 2D discrete wavelet, ridgelet, curvelet, and contourlet transforms for feature extraction. The ridgelet and curvelet transforms showed promising outcomes.

Elaydi et al. [9] provided a comparative palmprint recognition approach using multiscale transforms (2D wavelets, ridgelets, curvelets, and contourlets) for the feature extraction phase, 2D principal component analysis (2D PCA) for dimensionality reduction, and an artificial neural network for the recognition phase. A comparative analysis was presented, and the algorithms were tested using the PolyU hyperspectral palmprint database. The recognition accuracy was very good and, listed from highest to lowest, the order was curvelets, contourlets, ridgelets, and 2D discrete wavelets, with the curvelets outperforming the others.

The major disadvantages of the presented works are high implementation complexity, execution time, and cost. The classifiers used in some studies may be time consuming and less reliable compared with a neural network classifier. Some studies use more than one vector as classifier input, meaning that recognition may consume more time. The projection technique may not support the 2-D domain, and the combination of classifier and image transform technique may be inconsistent [8, 9]. To overcome the disadvantages of existing techniques, a new palmprint recognition approach is proposed based on the combination of multiscale image transforms, dimensionality reduction with 2D PCA and 2D LDA, and a back-propagation neural network, which requires less formal statistical training and is fast in testing.

3. Multiscale and Classifier

Multiscale representations describe passband systems whose spatial scale is controlled by a single parameter, such as a linear filter whose wavelength is that parameter. Wavelengths are closely related to resolutions, in the sense that short wavelengths are needed to describe small objects associated with fine resolutions [10].

3.1. Transforms
3.1.1. The 2D Discrete Wavelet Transform

The 2D DWT is built with separable orthogonal mother wavelets of a given regularity [11, 12]. At every iteration of the DWT, the lines (rows) of the input image (obtained in the previous iteration) are low-pass filtered with a filter having impulse response $h$ and high-pass filtered with the filter $g$. Then, the lines of the two images obtained at the output of the two filters are decimated by a factor of 2. Next, the columns of the two resulting images are low-pass filtered with $h$ and high-pass filtered with $g$. The columns of those four images are also decimated by a factor of 2.

Four new sub-images (representing the result of the current iteration) are generated. The first one, obtained after two low-pass filterings, is named the approximation sub-image (or LL image); the other three are named detail sub-images: LH, HL and HH. The LL image is the input for the next iteration. In the following, the coefficients of the DWT will be denoted $d_k^m[f]$, where $f$ is the image whose DWT is computed, $m$ is the iteration index (the resolution level), and $k = 1, 2, 3, 4$, with $k=1$ for the HH image, $k=2$ for the HL image, $k=3$ for the LH image and $k=4$ for the LL image. These coefficients are computed using the following relation:

$d_k^m[f](n_1, n_2) = \big\langle f(x, y),\ 2^{-m}\,\psi_k\big(2^{-m}x - n_1,\ 2^{-m}y - n_2\big) \big\rangle \qquad (1)$

where the wavelets can be factorized into separable 1-D functions:

$\psi_k(x, y) = \psi_{k,1}(x)\, \psi_{k,2}(y) \qquad (2)$

and the two factors can be computed from the scale function $\varphi$ and the mother wavelet $\psi$, each factor being either $\varphi$ or $\psi$ depending on whether the corresponding direction was low-pass or high-pass filtered.


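As an illustration of this decomposition, the sketch below computes a two-level 2D DWT of a palmprint-sized image with the PyWavelets library and returns the approximation (LL) and detail sub-images of the deepest level; the choice of the "db2" mother wavelet and the random test image are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
import pywt

def dwt2_subbands(image, wavelet="db2", levels=2):
    """Return the LL approximation and the detail sub-images of the deepest level."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=levels)
    ll = coeffs[0]            # approximation (LL) at the deepest level
    ch, cv, cd = coeffs[1]    # horizontal, vertical, diagonal details at that level
    return ll, ch, cv, cd

palm = np.random.rand(128, 128)   # stands in for a 128x128 palmprint ROI
ll, ch, cv, cd = dwt2_subbands(palm)
print(ll.shape, ch.shape, cv.shape, cd.shape)
```
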
3.1.2. Continuous Ridgelet Transform

Given an integrable bivariate function $f(x)$, its Continuous Ridgelet Transform (CRT) in $\mathbb{R}^2$ is defined by [13, 14]

$CRT_f(a, b, \theta) = \int_{\mathbb{R}^2} \psi_{a,b,\theta}(x)\, f(x)\, dx \qquad (3)$

where the ridgelets $\psi_{a,b,\theta}(x)$ in 2-D are defined from a wavelet-type function $\psi(t)$ in 1-D as

$\psi_{a,b,\theta}(x) = a^{-1/2}\, \psi\!\left(\frac{x_1\cos\theta + x_2\sin\theta - b}{a}\right) \qquad (4)$

Figure 2 shows an example of a ridgelet function, which is oriented at an angle $\theta$ and is constant along the lines

$x_1\cos\theta + x_2\sin\theta = \mathrm{const.} \qquad (5)$

The CRT is similar to the 2-D continuous wavelet transform except that the point parameters $(b_1, b_2)$ are replaced by the line parameters $(b, \theta)$. In other words, these 2-D multiscale transforms are related through the Radon transform: the ridgelet transform amounts to applying a 1-D wavelet transform to the projections (slices) of the Radon transform of the image.

As a consequence, wavelets are very effective in representing objects with isolated point singularities, while ridgelets are very effective in representing objects with singularities along lines. Thus, the ridgelet transform can be viewed as a way of concatenating 1-D wavelets along lines. Hence, the motivation for using ridgelets in image processing tasks is appealing, since singularities are often joined together along edges or contours in images [15, 16].
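
A minimal sketch of this idea follows, assuming the common discrete construction in which a 1-D wavelet transform is applied to the projections of the Radon transform; scikit-image's radon and PyWavelets are used for illustration only, and the angle grid, wavelet and random test image are arbitrary assumptions.

```python
import numpy as np
import pywt
from skimage.transform import radon

def ridgelet_coeffs(image, num_angles=64, wavelet="db2", level=2):
    """Radon transform followed by a 1-D DWT along each projection."""
    angles = np.linspace(0.0, 180.0, num_angles, endpoint=False)
    sinogram = radon(image, theta=angles, circle=False)   # one column per projection angle
    return [pywt.wavedec(sinogram[:, i], wavelet, level=level)
            for i in range(sinogram.shape[1])]

palm = np.random.rand(128, 128)
coeffs = ridgelet_coeffs(palm)
print(len(coeffs), [c.shape for c in coeffs[0]])
```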


3.1.3. Bandlet Transform

Orthogonal bandlets using an adaptive segmentation and a local geometric flow are well suited to capture the anisotropic regularity of edge structures. They are constructed with a “bandletization” which is a local orthogonal transformation applied to wavelet coefficients. The approximation in these bandlet bases exhibits an asymptotically optimal decay for images that are regular outside a set of regular edges. These bandlets can be used to perform image compression and noise removal [17].

Each orthogonal bandlet basis is parametrized using a geometry that specifies, for each scale and each orientation k of the wavelet transform,

•  a dyadic segmentation of the corresponding wavelet coefficients,

•  a flow that indicates the approximate geometric direction over each square of the segmentation that contains an edge [18].

The bandlets are obtained through an orthogonal retransformation of the wavelet coefficients inside each square that contains an edge. This retransformation is the decomposition of each set of wavelet coefficients on an orthogonal basis of Alpert multi-wavelets [19].

Bandlet bases are gathered in a dictionary of orthogonal bandlet bases indexed by a geometry. The efficiency of these bases is linked to the use of two fast algorithms. The first performs the analysis (decomposition) and synthesis (reconstruction) of a function f in some given basis B. The second searches the whole dictionary for the best basis B adapted to the function f one wishes to approximate.
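
The following is a deliberately simplified sketch of the bandletization idea: each dyadic square of a wavelet subband is re-transformed with a 1-D orthogonal transform applied along an estimated direction of regularity. A DCT stands in for the Alpert multiwavelet basis and the flow estimate is crude, so this illustrates the principle only and is not the published bandlet algorithm; all names and parameters are assumptions.

```python
import numpy as np
from scipy.fft import dct

def bandletize_square(square):
    """Re-transform one square of wavelet coefficients along its dominant direction."""
    gy, gx = np.gradient(square)
    # crude flow estimate: is the square more regular along rows or along columns?
    along_rows = np.abs(gx).sum() < np.abs(gy).sum()
    block = square if along_rows else square.T
    coeffs = dct(block, norm="ortho", axis=1)   # 1-D orthogonal transform along the chosen direction
    return coeffs if along_rows else coeffs.T

def bandletize(subband, square_size=8):
    """Apply the square-by-square re-transformation over a dyadic segmentation."""
    out = np.zeros_like(subband)
    for i in range(0, subband.shape[0], square_size):
        for j in range(0, subband.shape[1], square_size):
            blk = subband[i:i + square_size, j:j + square_size]
            out[i:i + square_size, j:j + square_size] = bandletize_square(blk)
    return out

subband = np.random.rand(64, 64)   # stands in for one wavelet subband of a palmprint
print(bandletize(subband).shape)
```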

3.2. Dimensionality Reduction

After a palmprint image is transformed from the spatial domain to the transform (frequency) domain, a matrix is formed as a result of the transformation, meaning that each image pixel is represented by a number. In our experimental work, we used 128×128 pixel palmprint images. It is infeasible to use the matrix representing the image directly without reducing its size if the recognition process is to meet commercial requirements. Two powerful dimensionality reduction techniques are used in this paper.


3.2.1. 2D Principal Component Analysis

Principal component analysis (PCA) is one of the simplest, oldest and most robust methods of performing dimensionality reduction. PCA takes high-dimensional data and exploits the dependencies between the variables to represent it in a more tractable, lower-dimensional form, without losing too much information.

The purpose of 2D PCA is to select a good projection vector $x$. To evaluate the goodness of a projection vector, 2D PCA uses the total scatter of the projected samples, which can be characterized by the trace of the covariance matrix of the projected feature vectors [20]. Thus, the criterion is to maximize the following:

$J(x) = \operatorname{tr}(S_x) \qquad (6)$

where $S_x$ is the covariance matrix of the projected feature vectors $y = A x$ (with $A$ an image matrix), written as

$S_x = E\big[(y - Ey)(y - Ey)^T\big] = E\big[\big((A - EA)x\big)\big((A - EA)x\big)^T\big] \qquad (7)$

Hence

$\operatorname{tr}(S_x) = x^T\, E\big[(A - EA)^T (A - EA)\big]\, x \qquad (8)$

where $\operatorname{tr}(S_x)$ is the trace of the covariance matrix. Given a set of $M$ training images $A_1, \dots, A_M$, the criterion (8) becomes

$J(x) = x^T G_t\, x, \qquad G_t = \frac{1}{M} \sum_{j=1}^{M} (A_j - \bar{A})^T (A_j - \bar{A}) \qquad (9)$

where $\bar{A}$ is the average of all training images.
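
A minimal sketch of this criterion in code: the image covariance matrix $G_t$ of Eq. (9) is formed directly from the training image matrices and each image is projected onto its leading eigenvectors. The array shapes, the random data and the single-component choice are illustrative assumptions.

```python
import numpy as np

def two_d_pca(images, num_components=1):
    """images: array of shape (M, h, w); returns a (w, num_components) projection matrix."""
    centered = images - images.mean(axis=0)
    # G_t = (1/M) * sum_j (A_j - Abar)^T (A_j - Abar), a w x w image covariance matrix
    G_t = np.einsum("mij,mik->jk", centered, centered) / images.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G_t)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:num_components]]

train_images = np.random.rand(240, 128, 128)   # stands in for 240 training palmprints
X = two_d_pca(train_images, num_components=1)
features = train_images @ X                    # each image becomes a 128 x 1 feature matrix
print(X.shape, features.shape)
```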


3.2.2. The 2D Linear Discriminant Analysis

Linear discriminant analysis is based on linear combinations of features that best separate the classes. The 2D LDA [21] directly performs discriminant feature analysis on an image matrix rather than on a vector. 2D LDA tries to find the optimal projection vector

$x^{*} = \arg\max_{x} \dfrac{x^T S_b\, x}{x^T S_w\, x} \qquad (10)$

where $S_b$ and $S_w$ are the between-class scatter matrix and the within-class scatter matrix, respectively.
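
A corresponding sketch of 2D LDA, assuming the usual matrix-based definition of the scatter matrices: $S_b$ and $S_w$ are built from the class means of the image matrices and the projection solves the generalized eigenvalue problem of Eq. (10). The labels, shapes and the small regularization term are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def two_d_lda(images, labels, num_components=1):
    """images: (M, h, w); labels: (M,). Returns a (w, num_components) projection matrix."""
    overall_mean = images.mean(axis=0)
    w = images.shape[2]
    S_b = np.zeros((w, w))
    S_w = np.zeros((w, w))
    for c in np.unique(labels):
        class_imgs = images[labels == c]
        class_mean = class_imgs.mean(axis=0)
        diff = class_mean - overall_mean
        S_b += len(class_imgs) * diff.T @ diff                 # between-class image scatter
        centered = class_imgs - class_mean
        S_w += np.einsum("mij,mik->jk", centered, centered)    # within-class image scatter
    S_w += 1e-6 * np.eye(w)             # small regularization in case S_w is singular
    eigvals, eigvecs = eigh(S_b, S_w)   # generalized problem S_b x = lambda S_w x
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:num_components]]

images = np.random.rand(240, 128, 128)
labels = np.repeat(np.arange(30), 8)    # 30 subjects, 8 training images each
print(two_d_lda(images, labels).shape)
```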

3.3. Feed-Forward Back-Propagation Neural Network

The back-propagation neural network (BPNN) is a widely used learning algorithm for training the multilayer perceptron (MLP) [22]. Back-propagation is a multilayer, feed-forward, supervised learning network based on the gradient descent learning rule.

A typical back-propagation network [23] with multilayer, feed-forward supervised learning is shown in Figure 3. The learning process in back-propagation requires pairs of input and target vectors. The output vector 'o' is compared with the target vector 't'; if they differ, the weights are adjusted to minimize the difference. Initially, random weights and thresholds are assigned to the network. These weights are updated at every iteration in order to minimize the mean square error between the output vector and the target vector [22].

Figure 3. Basic block of Back-propagation neural networks

Appropriate selection of the training parameters ensures efficient operation. The initial weights influence whether the network reaches a global or a local minimum of the error and, if so, how rapidly it converges. To get the best results, the initial weights are set to random numbers between -1 and 1 [22, 23].

Training a network is performed in order to achieve a balance between memorization and generalization. It is not necessarily advantageous to continue training until the error reaches a minimum value. The weight adjustments are based on the training patterns. As long as the validation error decreases, training continues; whenever it begins to increase, the network is starting to memorize the training patterns, and training is terminated at that point. If the activation function can vary, it can be shown that an n-input, m-output function requires at most 2n+1 hidden units.
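
As a concrete stand-in for the network described above, the sketch below trains a small feed-forward network with gradient-descent back-propagation and validation-based early stopping using scikit-learn's MLPClassifier; the hidden-layer size, the toy feature vectors and the 0.05 learning rate (the value reported later in Section 4) are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((240, 128))        # one projected feature vector per training palmprint
y_train = np.repeat(np.arange(30), 8)   # 30 subjects, 8 training images each

clf = MLPClassifier(hidden_layer_sizes=(64,),   # single hidden layer (size is illustrative)
                    solver="sgd",               # gradient-descent back-propagation
                    learning_rate_init=0.05,    # learning rate used in the experiments
                    early_stopping=True,        # stop when the validation error starts rising
                    max_iter=2000,
                    random_state=0)
clf.fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))
```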

3.4. Palmprint Database

The hyperspectral palmprint database developed by the Biometric Research Centre of the Department of Computing at The Hong Kong Polytechnic University has been used [24]. Hyperspectral palmprint images were collected from 190 volunteers with ages ranging from 20 to 60 years. The size of each palmprint is 128×128 pixels. Palmprint images from the 700 nm spectral band are used in this research, and the region of interest (ROI) is depicted in Figure 4.

4. Features Extraction and Recognition Results

4.1. Feature Extraction

Biometric-based commercial applications require a fast and effective pattern recognizer. Low-resolution palmprint images that meet commercial requirements can be represented by line features. The principal lines can be extracted using stack filters or other filters. However, the principal lines are not sufficient to represent the uniqueness of each individual's palmprint, because different people may have similar principal lines; moreover, some palmprint images do not have clear wrinkles.

Several techniques have been implemented to extract features, namely the bandlet, ridgelet, and 2D discrete wavelet transforms. The extracted features are projected with 2D PCA and 2D LDA in order to reduce the dimensionality. Finally, the resulting projection vector is passed to the feed-forward back-propagation neural network for the training and testing phases.

Bandlet features: The bandlet transform is based on adaptive segmentation and a local geometric flow. Applying a local orthogonal transformation to the wavelet coefficients yields the bandlet coefficients. Applying wavelets to a palmprint can capture the isotropic regularity of edges over square domains of varying size, whereas the geometric regularity offered by bandlets can capture the anisotropic regularity of edges in the palmprint. The bandlet transform exploits such anisotropic regularity by constructing orthogonal vectors in the direction in which the function has maximum regularity.

Although the principal lines and wrinkles are discontinuous in some images, with bandlets the image can be treated as differentiable in the direction parallel to the edge curve. The geometric representation of the bandlet is illustrated in Figure 5.

The MATLAB bandlet toolbox, developed by the Research Center of Magnetic Resonance and Medical Imaging at Xiamen University [25], is used to extract the bandlet coefficients, which are then reduced by applying 2D PCA and 2D LDA to obtain the most discriminative vector to be passed to the feed-forward back-propagation classifier.

Ridgelet features: The ridgelet transform offers a mathematical framework for organizing linear information at different scales and resolutions. First, the ridgelet transform is applied to the palmprint images in order to convert them into the transform domain, yielding the ridgelet coefficients. Figure 6 shows the ridgelet transform features of the palmprint image illustrated in Figure 4.

After the palmprints are transformed by the ridgelet transform, 2D PCA and 2D LDA are applied to reduce the dimensionality and obtain a single projection vector. The resulting vector is then passed to the feed-forward back-propagation neural network for the recognition phase.

2D wavelet features: Applying the wavelet transform yields different bands of wavelet coefficients of the original palmprint images. In this paper, a 2nd-level 2D discrete wavelet decomposition is applied; Figure 6 shows the vertical coefficients of the 2nd decomposition level, which are taken as the palmprint features. 2D PCA and 2D LDA are applied to the vertical coefficients, leading to a single vector which is passed to the feed-forward back-propagation neural network for training or testing.
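
Putting the pieces together, the following end-to-end sketch extracts second-level vertical wavelet coefficients, projects them with a small 2D PCA, and trains a feed-forward network using the 8-training/4-testing split described in Section 4.2; the data, the "db2" wavelet and the network settings are illustrative stand-ins for the actual experiment, not a reproduction of it.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_vertical_features(image, wavelet="db2", level=2):
    """Vertical detail coefficients of the deepest (2nd) decomposition level."""
    _, (ch, cv, cd) = pywt.wavedec2(image, wavelet, level=level)[:2]
    return cv

def two_d_pca(imgs, k=1):
    centered = imgs - imgs.mean(axis=0)
    G = np.einsum("mij,mik->jk", centered, centered) / len(imgs)
    vals, vecs = np.linalg.eigh(G)
    return vecs[:, np.argsort(vals)[::-1][:k]]

# toy data standing in for the 30-person, 12-images-per-person subset of 128x128 ROIs
images = np.random.rand(360, 128, 128)
labels = np.repeat(np.arange(30), 12)

subbands = np.stack([wavelet_vertical_features(img) for img in images])
X = two_d_pca(subbands, k=1)
features = (subbands @ X).reshape(len(images), -1)   # one projection vector per image

# 8 images per subject for training, 4 for testing (mirroring Section 4.2)
train_idx = np.array([s * 12 + i for s in range(30) for i in range(8)])
test_idx = np.array([s * 12 + i for s in range(30) for i in range(8, 12)])

clf = MLPClassifier(hidden_layer_sizes=(64,), solver="sgd",
                    learning_rate_init=0.05, max_iter=2000, random_state=0)
clf.fit(features[train_idx], labels[train_idx])
print("test accuracy:", clf.score(features[test_idx], labels[test_idx]))
```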

4.2. Recognition Results

A test sample of 30 persons has been taken into account, with a total of 360 palmprint images divided into 240 palmprints for the training phase and 120 palmprints for the testing phase. For each person, eight palmprint images were used as the training set and four images as the testing set. The results are shown in Table 1.

Table 1. Recognition Results

Transform            | Accuracy with 2D PCA | Accuracy with 2D LDA
Bandlet              | 78%                  | 96.5%
Ridgelet             | 91.3%                | 95.8%
2D discrete wavelet  | 87.5%                | 93.3%

Table 1 shows that the highest recognition accuracy is 96.5%, obtained using the bandlet transform for feature extraction and 2D LDA for dimensionality reduction. This is because the bandlet transform is based on adaptive segmentation and a local geometric flow, which is suited to capturing the anisotropic regularity of principal line and wrinkle structures. Considering recognition accuracy as a function of the dimensionality reduction method, 2D LDA outperformed 2D PCA.

2D LDA tries to identify attributes that account for the most variance between classes; thus, 2D LDA is a supervised method that uses known class labels (the class label field is also called the target field). In contrast, the 2D PCA definition makes no mention of class labels, and keeping the dimensions of largest energy (variance) is good but not always sufficient.

The recognition phase in this work has been divided into two stages. The first is the training stage: each feature vector resulting from a multiscale transformation and projected by 2D PCA or 2D LDA is passed to the feed-forward back-propagation neural network and trained using a gradient descent rule. The same transformations are applied to the palmprint images used in the testing stage, but the resulting feature vectors are not used for training. The learning rate was 0.05.

Comparing the results presented in this paper with similar works: In [7] the accuracy was 99.9%, but the dependency on high-energy components was insufficient to select the most discriminative features, and the whole recognition algorithm was time consuming and unreliable. In [6] the contourlet transform, PCA, and a minimum distance classifier were used and the accuracy was 94%, but the limitations of the classifier and database made the result inconsistent. In [5] a combination of the contourlet and non-subsampled contourlet transforms was used with a Euclidean distance classifier; different levels of accuracy were achieved, but the classifier type, image size, and comparison were inadequate. In [4] wavelets and 2D PCA were used and the recognition accuracy was 97%, but several limitations appear in that work: the sample size was limited, the comparison was made against 1-D projection techniques, and ten projection vectors were used in recognition, whereas our experimental work used only one projection vector.

5. Conclusion

This paper proposed a novel approach for recognizing individuals based on their palmprints. The novelty of the approach lies in the combination of image transform techniques, 2D LDA and 2D PCA feature reduction techniques, and a feed-forward neural network classifier. The PolyU pre-processed 700 nm hyperspectral database was used. The recognition accuracies for the bandlet, ridgelet, and 2D discrete wavelet transforms were 78%, 91.3%, and 87.5% with 2D PCA, and 96.5%, 95.8%, and 93.3% with 2D LDA, respectively. The best result was obtained using the bandlet transform with 2D LDA.

Future work might utilize other multiscale image transformations such as the shapelet, platelet, surfacelet, beamlet, wedgelet and other modern techniques; in addition, other feature reduction techniques such as two-dimensional independent component analysis (2D ICA), kernel PCA and other modern techniques could be investigated. Swarm-based optimization such as bee colony, particle swarm, and bacterial foraging could be used to enhance the learning procedure.

References

[1]  D. Zhang, Wai-Kin Kong, J. You and Michael Wong, “Online palmprint identification,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1041-1050, Sept. 2003.
[2]  A. Jain, R. Bolle and S. Pankanti (eds.), Biometrics: Personal Identification in Networked Society, Boston, Mass: Kluwer Academic Publishers, 1999.
[3]  A. Kong, D. Zhang and M. Kamel, “A survey of palmprint recognition,” Journal of Pattern Recognition, vol. 42, pp. 1408-1418, July. 2009.
[4]  Jiwen Lu, Erhu Zhang, Xiaobin Kang and Yanxue Xue, “Palmprint recognition using wavelet decomposition and 2D principal component analysis,” International Conference on Communications, Circuits and Systems Proceedings, vol. 3, pp. 2133-2136, June 2006.
[5]  H. Masood, M. Asim, M. Mumtaz and A. Mansoor, “Combined contourlet and non-subsampled contourlet transforms based approach for personal identification using palmprint,” Digital Image Computing: Techniques and Applications, DICTA '09, pp. 408-415, Dec. 2009.
[6]  M. Sharkas, I. El-Rube and M. A. Mostafa, “The contourlet transform with the principal component analysis for palmprint recognition,” International Conference on Computational Intelligence, Communication Systems and Networks (CICSyN), pp. 262-267, July 2010.
[7]  H. B. Kekre, R. Vig and S. Bisani, “Identification of multi-spectral palmprints using energy compaction by hybrid wavelet,” International Conference on Biometrics (ICB), pp. 433-438, March 2012.
[8]  H. Elaydi, M. Alhanjouri, and M. Abukmeil, “Palmprint recognition using 2-d wavelet, ridgelet, curvelet and contourlet,” i-manager's Journal on Electrical Engineering (JEE), vol. 7 Issue 1, pp. 9-19, Jul-Sep 2013.
[9]  Hatem Elaydi, Mohanad A. M. Abukmeil, Mohammed Alhanjouri, Palmprint Recognition Using Multiscale Transform, Linear Discriminate Analysis, and Neural Network, Science Journal of Circuits, Systems and Signal Processing. vol. 2, no. 5, pp. 112-118, 2013.
[10]  J. Andrew Bangham and Richard V. Aldridge, “Multiscale decomposition using median and morphological filters,” IEEE Winter Workshop on Nonlinear Digital Signal Processing, 6.1_1.1 - 6.1_1.4, 1993.
[11]  Alexandru Isar, Sorin Moga, and Xavier Lurton, “A statistical analysis of the 2D discrete wavelet transform,” Proceedings of the International Conference AMSDA 2005, pp. 1275-1281, 17-20 May 2005.
[12]  Samuel Foucher, Gozé Bertin Bénié, Jean-Marc Boucher, “Multiscale MAP filtering of SAR images,” IEEE Transactions on Image Processing, vol. 10, no. 1, pp. 49-60, January 2001.
[13]  E. J. Candès and D. L. Donoho, “Ridgelets: a key to higher-dimensional intermittency?” Phil. Trans. R. Soc. Lond. A, pp. 2495-2509, 1999.
[14]  G. T. Herman, Image Reconstruction from Projections: The Fundamentals of Computerized Tomography, Academic Press, 1980.
[15]  A. Rosenfeld and A. C. Kak, Digital Picture Processing, Academic Press, 2nd edition, 1982.
[16]  H. Führ, L. Demaret and F. Friedrich, Beyond wavelets: new image representation paradigms. Book chapter. In: M. Barni and F. Bartolini (Eds.), Document and Image Compression, CRC Press, Boca Raton, FL, 2006.
[17]  Stéphane Mallat and Gabriel Peyré, “A Review of Bandlet Methods for Geometrical Image Representation,” Numerical Algorithms 44, 3 (2007), 205-234.
[18]  Gabriel Peyré, Erwan Le Pennec, Charles Dossal, Stéphane Mallat, “Geometrical Image Estimation with Orthogonal Bandlet Bases,” Numerical Algorithms 44, 3 (2007), 205-234.
[19]  B. Alpert, “Wavelets and Other Bases for Fast Numerical Linear Algebra,” pp. 181-216, C. K. Chui, editor, Academic Press, San Diego, CA, USA, 1992.
[20]  J. Yang and D. Zhang, “Two-dimensional PCA: A new approach to appearance-based face representation and recognition,” IEEE Trans. Pattern Anal. Machine Intell., PAMI-26 (1), 131-137, 2004.
[21]  W. S. Zheng, J. H. Lai, S. Z. Li, “1D-LDA vs. 2D-LDA: When is vector-based linear discriminant analysis better than matrix-based?” Pattern Recognition, vol. 41, pp. 2156-2172, July 2008.
[22]  P. Latha, L. Ganesan and S. Annadurai, “Face recognition using neural networks,” Signal Processing: An International Journal (SPIJ), 3 (5), 153-160, Nov. 2009.
[23]  S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back, “Face recognition: a convolutional neural network approach,” IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98-113, 1997.
[24]  Department of Computing, The Hong Kong Polytechnic University (PolyU), Hyperspectral Palmprint Database, accessed on Aug. 22, 2013, available at: https://www4.comp.polyu.edu.hk/~biometrics/Hyperspectral Palmprint/HSP.htm.
[25]  Xiaobo Qu, Bandelet Image Fusion Toolbox, accessed on March 27, 2014, available at: https://www.quxiaobo.org/software/software_BandeletFusion.html.