
Illumination-Invariant Face Recognition in Hyperspectral Images

Han Wang, Glenn Healey
Journal of Computer Sciences and Applications. 2019, 7(1), 21-30. DOI: 10.12691/jcsa-7-1-4
Received March 21, 2019; Revised April 13, 2019; Accepted April 22, 2019

Abstract

Illumination-invariant face recognition remains a challenging problem. Previous studies use either spatial or spectral information to address this problem. In this paper, we propose an algorithm that uses spatial and spectral information simultaneously. We first learn a basis in the spectral domain. We then extract spatial features using 2D Gabor filters. Finally, we use the basis and the spatial features to classify face images. We demonstrate the effectiveness of the algorithm on a database of 200 subjects.

1. Introduction

The performance of face recognition systems under controlled conditions has reached a satisfactory level. However, when conditions are not controlled, performance degrades dramatically. Illumination variation is one of these challenges [1]. In some cases, the variations caused by illumination are larger than the variations between subjects, causing problems for algorithms that do not compensate for illumination variations [2]. To address this challenge, many algorithms have been proposed. Making use of spatial information is the most common approach. For example, a variety of subspace-based methods have been proposed [3, 4]. Methods based on illumination-invariant features have also been proposed [5, 6, 7, 8, 9, 10, 11, 12, 13, 14]. On the other hand, previous studies have shown that spectral information is useful for this purpose [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. However, existing approaches use either spatial or spectral information. In recent years, methods that use 3D face scans have also been proposed [31, 32]. In this paper, we propose an algorithm that uses spatial and spectral information simultaneously. We first learn a basis in the spectral domain that represents a large number of illumination conditions. We then filter the image to obtain the retinex representation and extract Gabor phase features from it. When a probe is processed, it is first projected onto the basis to obtain its spectral representation. It is then processed to obtain phase features using 2D Gabor filters. Finally, the spectral and phase features are used by a nearest neighbor classifier. In the remainder of this paper, we first review related work. We then introduce our method and present our results.

2. Related Work

Existing face recognition methods can be divided into two categories: visible spectrum-based and non-visible spectrum-based [2]. Visible spectrum-based methods use gray-scale or color images. Methods in this category can be further divided into two groups: variation modeling and invariant features. Representatives of the modeling group include the 3D linear subspace method [3] and the illumination cone method [4]. The 3D linear subspace method uses multiple images of the same face taken under different lighting directions to construct a 3D basis for the face, using the observation that the images of a Lambertian surface lie in a 3D linear subspace of the image space. Belhumeur and Kriegman [4] showed that all images of a convex Lambertian object taken from the same viewpoint but illuminated by an arbitrary number of distant point sources form a convex illumination cone. Using three images of a face taken with different lighting directions, the shape and albedo of the face can be estimated. However, these methods often require multiple images of a subject to recover the subspace, which may not be available in practice.

Methods using invariant features have also been proposed; these require only one image per subject during training, or multiple images for only a small set of subjects. Methods in this category aim to find illumination-invariant representations. The Quotient Image [5] uses the relative reflectance with respect to model images to represent a subject. Retinex-based approaches [6, 7, 11] aim to remove the illumination effect by estimating the illumination through filtering. Transformation-based methods [8, 9, 10, 11] use phase or other representations in the frequency domain after a transformation of features. Zhang and Xie proposed a two-stage framework that consists of a preprocessing stage and a feature extraction stage [10]. Kaur et al. proposed a method that extracts LOG-DCT features from retinex images [11]. Fan et al. proposed a method based on the phase of 2D Gabor features [12]. Zhu et al. proposed using logarithm gradient orientation and logarithm gradient magnitude to derive a gradient histogram [13]. Essa et al. proposed using local edge responses and region histograms as features [14].

Non-visible spectral ranges can also be used for face recognition. Thermal infrared images capture the thermal emission of subjects and are not affected by illumination conditions. Face recognition has been performed in thermal imagery to deal with variations caused by outdoor illumination [15]. Ghiass et al. proposed a method that is based on a series of AAM models [16]. Each of these models specializes in a range of poses and a region of thermal IR face space. Shwetank et al. compared the performance of classifiers based on different cost functions: maximum likelihood, minimum distance, and spectral angle [17]. Comprehensive reviews of thermal methods can be found in [18, 19, 20].

Previous studies have shown that spectral information can be used for face recognition [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. Pan et al. proposed a method that uses spectral signatures in the near-infrared (near-IR) to overcome variation in expression, pose, and illumination [21, 22, 26]. The near-IR range was chosen because it has a larger penetration depth than visible radiation, which makes near-IR characteristics difficult for a subject to modify [33]. These works [21, 22, 23, 24, 25, 26, 27, 28, 29, 30] showed that spectral signatures are stable for a person and differ from person to person, which makes these signatures useful for recognition. Near-IR images also provide spatial information that can be exploited.

In recent years, methods based on 3D face scans have also been proposed [31, 32]. Drira et al. proposed a method that uses radial curves emanating from the nose tip to represent faces and an elastic Riemannian metric to measure the distance between them [31]. Liang et al. proposed a Bayesian multi-distribution-based feature extraction method to enhance the dataset [32]. However, these methods require 3D face databases, which are often not available in practice.

No previous studies have made use of both the spatial and the spectral information associated with near-IR illumination. This is the approach we take to address illumination variations.

3. Background

A 2D Gabor function is a sinusoidal function modulated by a Gaussian envelope given by

g(x, y) = w(x, y) s(x, y) (1)

where

w(x, y) = exp(−(x²/(2σx²) + y²/(2σy²))) (2)

is the Gaussian component and

s(x, y) = exp(j2πf(x cosθ + y sinθ)) (3)

is the sinusoidal component. The standard deviations (σx, σy) define the size of the Gaussian envelope, f is the center frequency magnitude, and θ is the center frequency orientation in the frequency domain. An example of a Gabor function in the spatial domain is shown in Figure 1(a) where the red area represents positive values of the filter and the blue area represents negative values. The frequency magnitude of the filter in the frequency domain is shown in Figure 1(b) where the central gray dot indicates the origin. The distance between the origin and the magnitude center is associated with the center frequency f. The angle between the line connecting the origin and the center frequency and the horizontal axis is associated with the orientation θ.

4. Method

The proposed algorithm for illumination-invariant face recognition uses two types of features: spectral features and Gabor phase features. In the remainder of this section, we explain how the two types of features are obtained and used for classification.

4.1. Spectral Subspace

A hyperspectral image of a Lambertian surface can be represented by

I(x, y, λ) = R(x, y, λ) L(x, y, λ) (4)

where R(x, y, λ) is the reflectance function of the surface material and L(x, y, λ) is the illumination function. The reflectance function characterizes the surface response to different wavelengths and can be found using the method described in [21].

We model the illumination function according to

L(x, y, λ) = M(λ) S(x, y) (5)

where M(λ) describes the spectral variation and S(x, y) describes the spatial variation.

We use MODTRAN [34] to model the spectral variation M(λ). A set of n illumination spectra M1(λ), M2(λ), …, Mn(λ) is generated. An example of the resulting irradiance spectra Mi(λ) generated for different solar angles is shown in Figure 2. As we can see, the irradiance functions exhibit a large degree of variation.

We use the simulated M(λ) spectra and a reflectance function R(λ) for a subject tissue type to simulate radiance spectra with S(x, y) = 1 using equation (4). Setting S(x, y) = 1 represents the effect of diffuse light on the surface and is the condition used in this study to learn spectral bases. Previous studies have shown that low-dimensional subspaces are useful for representing the spectral variation caused by varying illumination [35, 36]. Similarly, we can use a linear subspace to model the I(x, y, λ) spectra. Therefore, the image spectra for a tissue type with reflectance R(λ) can be represented by

I(λ) = Bα + e(λ) (6)

where B = [b1(λ), b2(λ), …, bK(λ)] is a basis for the I(λ) spectra, α = [α1, α2, …, αK] is the corresponding coefficient vector, and e(λ) is an error term.

To learn the basis, we apply PCA to the set of spectra M1(λ)R(λ), M2(λ)R(λ), …, Mn(λ)R(λ) to obtain B. A basis is learned for each of the four tissue types: left cheek, right cheek, forehead, and chin. Denote the reflectance spectra for the four tissue types by R1(λ), R2(λ), R3(λ), and R4(λ). Each reflectance spectrum Ri(λ) is obtained by averaging an 11x11 pixel region for the given tissue type for a particular subject. An example of the four regions highlighted by black squares and their average radiance for a particular M(λ) is shown in Figure 3.

The result of the process is a basis Bt for each of the four tissue types for each subject where t indicates the tissue type. The number of basis vectors K is chosen to capture 90% of the variance over the training data.
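A minimal sketch of this basis-learning step is given below, assuming PCA is applied to the uncentered simulated radiance spectra so that equation (6) holds without a separate mean term; the function name and array layout are illustrative.

```python
import numpy as np

def learn_basis(M, R, var_frac=0.90):
    """M: n x L matrix of illumination spectra Mi(lambda); R: length-L
    reflectance for one tissue type. Returns an L x K basis Bt whose
    columns capture var_frac of the variance over the training spectra."""
    X = M * R[None, :]  # simulated radiance spectra with S(x, y) = 1
    _, svals, Vt = np.linalg.svd(X, full_matrices=False)
    frac = np.cumsum(svals**2) / np.sum(svals**2)
    K = int(np.searchsorted(frac, var_frac)) + 1  # smallest K reaching 90%
    return Vt[:K].T  # orthonormal basis vectors b_k(lambda) as columns
```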

4.2. Gabor Phase

The central band of a hyperspectral image is used to extract Gabor phase features. This band is first normalized to the range [0,1] to remove scaling effects. The normalization is achieved by subtracting the minimum pixel value from each pixel and dividing the result by the difference between the maximum and the minimum pixel values. The resulting image is denoted by I(x, y). To alleviate shadow effects, the retinex representation [6] is computed according to

r(x, y) = log I(x, y) − log(F(x, y) * I(x, y)) (7)

where F(x, y) is a Gaussian filter and * denotes the convolution operation. We use 10 pixels as the standard deviation for the Gaussian filter. An example of an original, normalized, and retinex image is shown in Figure 4. For viewing purposes, a monotonic gray-scale mapping is applied to the retinex image to generate the image in Figure 4(d). Compared to the original image, the retinex representation alleviates shadow effects significantly.
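A sketch of the normalization and retinex step of equation (7) follows; the 10-pixel standard deviation matches the text, while the small epsilon guarding against log(0) is our addition.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex(band, sigma=10.0, eps=1e-6):
    """Normalize the central band to [0, 1], then subtract the log of the
    Gaussian-smoothed image to suppress slowly varying illumination."""
    I = (band - band.min()) / (band.max() - band.min())
    smoothed = gaussian_filter(I, sigma)             # F(x, y) * I(x, y)
    return np.log(I + eps) - np.log(smoothed + eps)  # Eq. (7)
```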

The retinex image is then filtered by 2D Gabor filters. The phase of the filtered image is defined by

Φ(x, y) = angle(r(x, y) * g(x, y)) (8)

where g(x, y) is a Gabor filter and angle() is the phase extraction operation. We define the Gabor filters using σx = 8 and σy = σx/2. We use f = 1/σx to ensure that the half-peak bandwidths of adjacent filters overlap in the frequency domain. Eight orientations are used: θ = 0, π/8, …, 7π/8. This gives eight phase images that are used to generate features for classification. An example of the eight phase images using the image in Figure 4(c) is shown in Figure 5. The phase images demonstrate the orientation selectivity of Gabor filters, where structure at certain orientations is kept. Shadow effects are less visible in the phase images than in the original image shown in Figure 4(a).
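Reusing the gabor_kernel sketch from Section 3, the eight phase images of equation (8) can be computed as follows; the convolution routine is our choice, and the parameters follow the text.

```python
import numpy as np
from scipy.signal import fftconvolve

def phase_images(r, sigma_x=8.0):
    """Filter the retinex image r with eight Gabor filters and keep the
    phase of each complex response, per Eq. (8)."""
    phases = []
    for k in range(8):
        g = gabor_kernel(sigma_x, sigma_x / 2, 1.0 / sigma_x, k * np.pi / 8)
        phases.append(np.angle(fftconvolve(r, g, mode='same')))
    return phases
```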

4.3. Classification

The four tissue samples are extracted from a probe image and the average radiance spectra Īt are computed by averaging over 11x11 pixel regions. Each Īt is then projected onto the corresponding spectral basis Bt for each gallery subject as defined by equation (6) to obtain a vector of coefficients αt = Bt^T Īt, where the superscript T denotes the transpose (the columns of Bt are orthonormal). We use the Euclidean distance to measure the spectral similarity between Īt and the best fit using the basis for the gallery subject according to

ds,t = ||Īt − Bt αt|| (9)

The spectral distances for the four tissue types are then combined to give the total spectral distance

Ds = Σt it ds,t (10)

where it is a binary indicator that removes tissue type t when the corresponding region of the probe image is in shadow, and the subscript s indicates that the distance is associated with the spectral features. Each indicator is found by comparing the average radiance Īt(λc) of the probe image in the sample region at the central band λc with a threshold:

it = 1 if Īt(λc) ≥ I0, and it = 0 otherwise (11)

where I0 is the threshold. In this study, I0 is chosen to be half of the maximum of the average radiances at the central band λc over the four tissue types for the probe image. This eliminates tissue types that are dark.
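A sketch of the spectral distance of equations (9)-(11) is shown below; since the PCA basis columns are orthonormal, the best-fit coefficients reduce to an inner product, and names such as spectral_distance are illustrative.

```python
import numpy as np

def spectral_distance(probe_spectra, bases, central_idx):
    """probe_spectra: the four mean tissue spectra from a probe image;
    bases: the corresponding bases Bt for one gallery subject;
    central_idx: index of the central band lambda_c."""
    central_vals = [s[central_idx] for s in probe_spectra]
    I0 = 0.5 * max(central_vals)            # threshold used in Eq. (11)
    D = 0.0
    for s, B, c in zip(probe_spectra, bases, central_vals):
        if c < I0:                          # indicator i_t = 0: region is dark
            continue
        alpha = B.T @ s                     # coefficients, Eq. (6)
        D += np.linalg.norm(s - B @ alpha)  # Eq. (9), accumulated per Eq. (10)
    return D
```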

The central band of a hyperspectral image is used to extract Gabor phase features as described in section 4.2. The image is first cropped so that only the face region is kept as shown in Figure 4. The cropped image is then processed to extract Gabor phase features. We use the Euclidean distance to measure the similarity between the phase images obtained from a probe image and a gallery image given by

Dp = Σi ||Φi − ΦiG|| (12)

where i indicates that the phase image is obtained by using the ith Gabor filter, G indicates that the phase image is obtained from the gallery image, and p indicates that the distance is associated with the Gabor phase features.

The total distance is defined as a weighted average of the two distance metrics according to

D = ws Ds + wp Dp (13)

where the weights ws and wp are the reciprocals of the maximum distance between a probe image and a gallery image over the test data for each metric.
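The distance fusion of equations (12) and (13) can be sketched as follows, where max_spec and max_phase stand for the per-metric maxima over the test data described above; the function signature is illustrative.

```python
import numpy as np

def total_distance(d_spec, phases_probe, phases_gallery, max_spec, max_phase):
    """Combine the spectral distance with the Gabor phase distance."""
    d_phase = sum(np.linalg.norm(p - g)          # Eq. (12)
                  for p, g in zip(phases_probe, phases_gallery))
    w_s, w_p = 1.0 / max_spec, 1.0 / max_phase   # reciprocal-maximum weights
    return w_s * d_spec + w_p * d_phase          # Eq. (13)
```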

5. Experiment

5.1. Database

We used a face database of 200 subjects for our experiments [21]. All images have 31 spectral bands with center wavelengths separated by 0.01 μm over the near-IR (0.7 μm-1.0 μm). The spatial resolution is 494x468 pixels. All subjects are illuminated by diffuse light sources. Each subject has two images, fg and fa, which were collected several minutes apart. Reflectance images were obtained using the method described in [21]. An example of the two reflectance images fg and fa is shown in Figure 6.

After the reflectance images were obtained, they were rotated and cropped so that the eyes were roughly aligned and only the face regions were kept. An example of a rotated and cropped image is shown in Figure 4(a).

We used fg and MODTRAN to generate training data and to learn spectral basis sets. To simulate irradiance functions M(λ) using MODTRAN, we used four elevations (0 km, 2 km, 4 km, 6 km), four solar angles (0°, 20°, 40°, 60°), two atmospheric models (tropical and U.S. standard), four aerosol models (rural, urban, maritime, desert), and five visibilities (5 km, 10 km, 15 km, 20 km, 25 km). This gave 640 illumination conditions. We randomly chose 320 conditions to form a training set using fg according to equation (4). The training data was used to learn the basis for each subject for each tissue type according to the process described in Section 4.1.
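The 640-condition grid and the random split can be reproduced with a few lines; the MODTRAN runs themselves are external, and the seed here is illustrative.

```python
import itertools
import random

elevations = [0, 2, 4, 6]                 # km
solar_angles = [0, 20, 40, 60]            # degrees
atmospheres = ['tropical', 'us_standard']
aerosols = ['rural', 'urban', 'maritime', 'desert']
visibilities = [5, 10, 15, 20, 25]        # km

conditions = list(itertools.product(
    elevations, solar_angles, atmospheres, aerosols, visibilities))
assert len(conditions) == 640             # 4 * 4 * 2 * 4 * 5

random.seed(0)
random.shuffle(conditions)
train, test = conditions[:320], conditions[320:]
```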

We used fa and the other 320 conditions to generate simulated test data. Test data should have different illumination conditions than those used to generate training data. This is achieved by using different illumination spectra and spatial variation functions. Test data is thus obtained using simulated illumination spectra and spatial data according to equations (4) and (5).

Spatial variation S(x, y) can be simulated using a frontally illuminated image A(x, y) and a model image m(x, y) that has a different illumination condition. This is an illumination synthesis problem for which a variety of approaches have been proposed. We use the Quotient Image method [5] to obtain the synthesized image A′(x, y) given by

A′(x, y) = Q(A(x, y), m(x, y)) (14)

where Q(•) denotes the Quotient Image method. An example of an original, synthesized and model image is shown in Figure 7. The spatial variation function S(x, y) is obtained according to

S(x, y) = A′(x, y) / A(x, y) (15)

The spatial variation function S(x, y) is then used with the other 320 MODTRAN conditions M(λ) to simulate illumination functions according to equation (5). As a result, the simulated illumination functions provide a shadow effect similar to what is seen in the model image m(x, y) with wavelength dependence specified by M(λ). After the illumination functions are obtained, the radiance data is obtained by using fa according to equation (4) to form the test data.
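A sketch of this synthesis pipeline, chaining equations (14), (15), (5), and (4), is given below; quotient_image stands in for the Quotient Image method [5] and is assumed rather than implemented here.

```python
import numpy as np

def synthesize_probe(R_cube, A, m, M, eps=1e-6):
    """R_cube: reflectance image R(x, y, lambda) of shape (H, W, L);
    A: frontally illuminated central band; m: model image with shadows;
    M: length-L illumination spectrum from MODTRAN."""
    A_syn = quotient_image(A, m)           # Eq. (14); hypothetical helper
    S = A_syn / (A + eps)                  # Eq. (15): spatial variation
    L = S[:, :, None] * M[None, None, :]   # Eq. (5): illumination function
    return R_cube * L                      # Eq. (4): simulated radiance cube
```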

In our experiment, we used the central band of the reflectance function R(x, y, λ) of fa as A(x, y). Shadow conditions m(x, y) were from the Extended Yale Face Database B [37]. Thirty images of ten subjects under three independent shadow conditions (A+000E+00, A-095E+00, A+000E+90) were used as the training set required by the Quotient Image method. The condition name indicates the light position where A denotes azimuth followed by the angle and E denotes elevation followed by the angle. Both angles are in degrees. Nine shadow conditions of another subject were used as the model condition m(x, y). Among the nine conditions, five conditions have the light rotated to the right incrementally. They are A-025E+00, A-050E+00, A-070E+00, A-095E+00, and A-120E+00. Two conditions A-020E-10 and A-020E-40 have the light rotated to the lower right corner. Two conditions A-020E+10 and A-035E+65 have the light rotated to the upper right corner. For each shadow condition m(x, y) and for each subject, the spatial variation S(x, y) was obtained using the central band of fa and the shadow condition m(x, y) according to the process described above. The test data was obtained by using S(x, y), the other 320 spectral conditions from the MODTRAN data, and fa according to equation (4). Therefore, for each shadow condition, we synthesized 320 hyperspectral test images corresponding to the 320 spectral conditions for each subject. An example of the central band of a gallery image and probe images of a subject is shown in Figure 8.

5.2. Results

The classification results for the various illumination conditions are shown in Table 1 and Table 2. We report the results for each shadow condition separately. The spectral result for each table entry is the average performance over the 320 test images. The Gabor phase results are the same for all of the test images with a given shadow condition because only the central band of an image is used to extract features, and that band is normalized to [0,1], which removes the scaling effect introduced by the spectral variation.

To compare with other algorithms, we also used the non-weighted version of the SQI method [7], which is based on retinex. For this method, we used a Gaussian filter with a standard deviation of 10 pixels as the smoothing filter and the natural logarithm as the nonlinear transformation. We applied the method to the central band of a hyperspectral image and averaged the results for the 320 spectral conditions. The result is reported as SQI in Table 1 and Table 2. We also used the Eigenface method provided by the CSU Face Identification Evaluation System [38, 39] for comparison. Results from this method are also included in Table 1 and Table 2.
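For reference, the non-weighted SQI baseline as described reduces to a few lines; the epsilon guards against division by zero and log(0) are our additions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sqi(I, sigma=10.0, eps=1e-6):
    """Self quotient image: log of the ratio between the image and its
    Gaussian-smoothed version."""
    return np.log(I / (gaussian_filter(I, sigma) + eps) + eps)
```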

As the illumination direction moves to the right, the face becomes darker, and the difficulty of classification increases. This is reflected in Table 1 as the performance of all methods degrades or remains about the same as we move from left to right. Among the Gabor phase method, the SQI method, and the Eigenface method, all of which are spatial methods, the Gabor phase method performs reasonably well across the shadow conditions except for the condition A-120E+00 where the light is almost behind the face which leaves the face very dark. When the light is slightly behind the face as for condition A-095E+00, the method still achieves a 75% classification rate. This suggests that retinex-based phase information is invariant to shadow variation to a large extent. On the other hand, the SQI method and the Eigenface method degrade significantly as the face gets darker, degrading from an 80% classification rate to less than a 1% classification rate. Spectral features also provide a significant amount of useful information for classification. The spectral method achieves the best result among all of the individual methods for the most extreme condition A-120E+00. This suggests that the spectral method is more robust in extreme shadow conditions. The spectral results also suggest that the spectral basis learned from the training data is able to represent variation in the test data. By using the spectral and Gabor phase features, the classification rate is improved further and reaches more than 85% for the four less severe shadow conditions and 47% for the most extreme condition. The combined method outperforms the baseline methods by a large margin ranging from 15% to 70%.

Table 2 includes the classification results when the light position also changes vertically. In this case, shadow effects are moderate. The Gabor phase method performs consistently across the shadow conditions and achieves a 90% classification rate for all conditions. Interestingly, spectral classification does not work well for condition A-035E+65, where the entire face except the nose area is dark. This suggests that in order for spectral features to be effective, a minimum level of illumination is required. We speculated that if spectral features were extracted from the nose, then performance could be improved. To test this hypothesis, another tissue sample extracted from the nose area was added as shown in Figure 9, and the results are reported after the slash in Table 2. The improved results support this hypothesis. In other words, for spectral features to be effective, they need to be extracted from areas that have a certain level of illumination. Again, by using the spectral and Gabor phase features, performance is improved. The combined method reaches a 95% classification rate across all of the illumination conditions. Compared to the baseline methods, the combined method is more effective and more robust to illumination variation.

The cumulative match score (CMS) functions for the nine shadow conditions are shown in Figure 10 and Figure 11 where rank N means that the correct match is within the top N candidates selected by the algorithm.
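Given a probe-by-gallery distance matrix, the CMS curve can be computed as in the following sketch; the array layout and names are illustrative.

```python
import numpy as np

def cms(dist, labels, max_rank=10):
    """dist: P x G distance matrix; labels: correct gallery index for each
    probe. Returns the fraction of probes whose correct match appears
    within the top N candidates, for N = 1..max_rank."""
    order = np.argsort(dist, axis=1)  # gallery indices sorted best-first
    ranks = np.array([int(np.where(order[p] == labels[p])[0][0])
                      for p in range(dist.shape[0])])
    return [float((ranks < n).mean()) for n in range(1, max_rank + 1)]
```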

From Figure 10 we see that the combined method is the least affected by the illumination variation. This method reaches a 95% classification rate at rank 3 for the first four shadow conditions and an 85% classification rate at rank 10 for condition A-120E+00. This suggests that spectral and spatial information are complementary to each other. On the contrary, the classification rate for the SQI method and the Eigenface method varies significantly as the conditions change, and for the most extreme condition A-120E+00, the classification rate is not more than 20% even at rank 10.

Figure 11 shows the CMS functions for the illumination conditions when the light also moves vertically. In this case, the Gabor phase method and the combined method perform consistently across illumination variation while the other methods are more significantly affected by the variation.

A case-by-case analysis reveals something interesting. Figure 12 and Figure 13 show an example of a subject misclassified by the SQI method that is classified correctly by the Gabor phase method. Figure 12(a) shows the squared difference between the SQI representation of the probe image and the correct gallery image, and Figure 12(b) shows the squared difference between the SQI representation of the probe image and the mismatched gallery image. Each of the difference images is on a log scale where larger distances are redder. These images show that a significant proportion of the difference is near high intensity areas like the eyes and nose. At first glance, Figure 12(a) appears to have a smaller total difference than Figure 12(b) because its reddish area is smaller; however, a region-by-region comparison reveals that the eyebrow and nostril areas contribute a significant amount of error, which makes the total difference in Figure 12(a) larger than the total difference in Figure 12(b).

For the same probe and gallery images, Figure 13(a) shows the difference between the phase images of the probe image and the correct gallery image, and Figure 13(b) shows the difference between the phase images of the probe image and the mismatched gallery image. The difference at each pixel is obtained by summing up the squared differences of the eight phase images between the probe image and the gallery image. Compared to Figure 12, the difference is no longer concentrated in high intensity areas but is scattered around the face. This suggests that Gabor phase features are more dependent on fine structures than on intensity levels. Visually, Figure 13(b) has more bright spots than Figure 13(a) which results in the correct gallery image being selected.

Figure 14 shows an example of a probe that is classified incorrectly when using Gabor phase features. The probe image has an exaggerated expression causing the eyes to look very different from those in the correct gallery image. This is reflected in the Gabor phase difference image shown in Figure 14(a) as the difference between the probe and the correct gallery images has large values around the eyes. Fortunately, spectral features provide helpful information. Figure 15 plots the absolute difference between the probe spectrum p and the reconstructed spectrum g obtained by projecting the probe spectrum to the basis for the correct gallery subject, and the absolute difference between the probe spectrum p and the reconstructed spectrum g' obtained by projecting the probe spectrum to the basis for the mismatched gallery subject. From the figure, we see that the spectral difference is smaller for the correct match than for the incorrect match.

In other cases, Gabor phase features classify correctly while spectral features misclassify a probe image. Figure 16 shows an example where the probe image is misclassified by spectral features while Figure 17 shows that the probe image is classified correctly by the Gabor phase features.

6. Conclusion

We have presented an algorithm for illumination-invariant face recognition in hyperspectral images that uses both spatial and spectral information. We constructed a basis to represent spectral variation. We used a Gaussian filter to alleviate shadow effects and designed a set of 2D Gabor filters to extract spatial information. Experimental results show that phase information and spectral information are complementary to each other and that the new approach can accommodate large illumination variation to improve on the effectiveness of existing methods. Future work could include testing the method on a larger dataset or using other filtering techniques to extract invariant features.

References

[1] W. Zhao, R. Chellappa, P. J. Phillips, A. Rosenfeld, "Face recognition: A literature survey," ACM Computing Surveys 35 (4) (2003) 399-458.
[2] X. Zou, J. Kittler, K. Messer, "Illumination invariant face recognition: A survey," in: International Conference on Biometrics: Theory, Applications, and Systems, 2007, pp. 1-8.
[3] P. Belhumeur, J. Hespanha, D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (7) (1997) 711-720.
[4] P. Belhumeur, D. Kriegman, "What is the set of images of an object under all possible illumination conditions?," International Journal of Computer Vision 28 (3) (1998) 245-260.
[5] A. Shashua, T. Riklin-Raviv, "The quotient image: Class-based re-rendering and recognition with varying illuminations," IEEE Transactions on Pattern Analysis and Machine Intelligence 23 (2) (2001) 129-139.
[6] D. J. Jobson, Z. Rahman, G. A. Woodell, "Properties and performance of a center/surround retinex," IEEE Transactions on Image Processing 6 (3) (1997) 451-462.
[7] H. Wang, S. Li, Y. Wang, "Face recognition under varying lighting condition using self quotient image," in: Proceedings of the IEEE Conference on Automatic Face and Gesture Recognition, 2004, pp. 819-824.
[8] L. Qing, S. Shan, X. Chen, W. Gao, "Face recognition under varying lighting based on the probabilistic model of Gabor phase," in: Proceedings of the IEEE Conference on Pattern Recognition, 2006, pp. 1139-1142.
[9] M. Savvides, B. V. K. V. Kumar, P. K. Khosla, "Eigenphases vs. eigenfaces," in: Proceedings of the IEEE Conference on Pattern Recognition, Vol. 3, 2004, pp. 810-813.
[10] J. Zhang, X. Xie, "A study on the effective approach to illumination-invariant face recognition based on a single image," Biometric Recognition (2012) 33-41.
[11] H. Kaur, A. Kaur, "Illumination invariant face recognition," International Journal of Computer Applications 64 (21) (2013) 23-27.
[12] C. Fan, S. Wang, H. Zhang, "Efficient Gabor phase based illumination invariant for face recognition," Advances in Multimedia, Vol. 2017, Article ID 1356385, 2017.
[13] J. Zhu et al., "Illumination invariant single face image recognition under heterogeneous lighting condition," Pattern Recognition 66 (2017) 313-327.
[14] A. Essa, V. Asari, "Local boosted features for illumination invariant face recognition," Imaging and Multimedia Analytics in a Web and Mobile World, Vol. 4, 2017, pp. 70-73.
[15] D. Socolinsky, A. Selinger, "Thermal face recognition in an operational scenario," in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vol. 2, 2004, pp. 1012-1019.
[16] R. S. Ghiass, O. Arandjelovic, H. Bendada, X. Maldague, "Illumination-invariant face recognition from a single image across extreme pose using a dual dimension AAM ensemble in the thermal infrared spectrum," in: International Joint Conference on Neural Networks, 2013.
[17] Shwetank, Neeraj, Jitendra, Vikesh, "Pixel based supervised classification of hyperspectral face images for face recognition," 2018, pp. 706-717.
[18] G. Hermosilla, J. R. del Solar, R. Verschae, M. Correa, "A comparative study of thermal face recognition methods in unconstrained environments," Pattern Recognition 45 (2012) 2445-2459.
[19] R. S. Ghiass, O. Arandjelovic, A. Bendada, X. Maldague, "Infrared face recognition: A comprehensive review of methodologies and databases," Pattern Recognition 47 (2014) 2807-2824.
[20] R. S. Choras, "Thermal face recognition," in: Image Processing and Communications Challenges 7, 2016, pp. 37-46.
[21] Z. Pan, G. Healey, M. Prasad, B. Tromberg, "Face recognition in hyperspectral images," IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (12) (2003) 1552-1560.
[22] Z. Pan, G. Healey, M. Prasad, B. Tromberg, "Recognizing faces in hyperspectral images," in: Proceedings of the SPIE, Vol. 4725, 2002, pp. 168-176.
[23] S. A. Robila, "Toward hyperspectral face recognition," in: Proceedings of the SPIE, Vol. 6812, 2008.
[24] C. P. Huynh, A. Robles-Kelly, "Hyperspectral imaging for skin recognition and biometrics," in: Proceedings of the IEEE Conference on Image Processing, 2010, pp. 2325-2328.
[25] A. Wimberly, S. A. Robila, T. Peplau, "Spectral face recognition using orthogonal subspace bases," in: Proceedings of the SPIE, Vol. 7695, 2010.
[26] Z. Pan, G. Healey, B. J. Tromberg, "Hyperspectral face recognition under unknown illumination," Optical Engineering 46 (7) (2007).
[27] H. Wang, G. Healey, "Pose-invariant face recognition in hyperspectral images," in: Proceedings of Image Processing and Computer Vision, 2013.
[28] H. Wang, T. C. Bau, G. Healey, "Expression-invariant face recognition in hyperspectral images," in: Proceedings of the SPIE, 2011.
[29] L. Shen, S. Zheng, "Hyperspectral face recognition using 3D Gabor wavelets," in: Proceedings of the International Conference on Pattern Recognition, 2012.
[30] M. Uzair, A. Mahmood, A. Mian, "Hyperspectral face recognition with spatiospectral information fusion and PLS regression," IEEE Transactions on Image Processing 24 (2015) 1127-1137.
[31] H. Drira, B. Amor, A. Srivastava, M. Daoudi, R. Slama, "3D face recognition under expressions, occlusions, and pose variations," IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (2013) 2270-2283.
[32] R. Liang, W. Shen, X.-X. Li, H. Wang, "Bayesian multi-distribution-based discriminative feature extraction for 3D face recognition," Information Sciences 320 (2015) 406-417.
[33] M. Hiraoka, M. Firbank, M. Essenpreis, M. Cope, S. Arridge, P. van der Zee, D. Delpy, "A Monte Carlo investigation of optical pathlength in inhomogeneous tissue and its application to near-infrared spectroscopy," Physics in Medicine and Biology 38 (12) (1994) 1859-1876.
[34] A. Berk, G. Anderson, P. Acharya, L. Bernstein, L. Muratov, J. Lee, M. Fox, S. Adler-Golden, J. Chetwynd, M. Hoke, R. Lockwood, J. Gardner, T. Cooley, C. Borel, P. Lewis, E. Shettle, "MODTRAN5: 2006 update," in: Proceedings of the SPIE, Vol. 6233, 2006.
[35] D. Judd, D. MacAdam, G. Wyszecki, "Spectral distribution of typical daylight as a function of correlated color temperature," Journal of the Optical Society of America 54 (8) (1964) 1031-1040.
[36] D. Slater, G. Healey, "Analyzing the spectral dimensionality of outdoor visible and near-infrared illumination functions," Journal of the Optical Society of America A 15 (11) (1998) 2913-2920.
[37] A. Georghiades, P. Belhumeur, D. J. Kriegman, "From few to many: Illumination cone models for face recognition under variable lighting and pose," IEEE Transactions on Pattern Analysis and Machine Intelligence 23 (6) (2001) 643-660.
[38] J. R. Beveridge, D. S. Bolme, M. Teixeira, B. Draper, "The CSU face identification evaluation system user's guide: Version 5.0," Technical Report, Computer Science Department, Colorado State University, 2003.
[39] D. Bolme, J. R. Beveridge, M. Teixeira, B. A. Draper, "The CSU face identification evaluation system: Its purpose, features and structure," in: Proceedings of the International Conference on Computer Vision Systems, 2003, pp. 304-311.
 

Published with license by Science and Education Publishing, Copyright © 2019 Han Wang and Glenn Healey

Creative CommonsThis work is licensed under a Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
