Iris Verification and ANOVA for Iris Image Quality
Lidong Wang
Department of Engineering Technology, Mississippi Valley State University, Itta Bena, USA
Abstract
Iris recognition has been acknowledged as the most accurate method in biometrics, which is one of the main automated identification technologies. Iris image quality and iris verification were tested for eyes in dark brown, regular brown, hazel, green, and blue. The effects of eyeglasses and contact lenses on the iris image quality score and iris verification were investigated. The results indicate that iris verification with eyeglasses or contact lenses can still be successful, although both eyeglasses and contact lenses decrease iris image quality. Analysis of variance (ANOVA) for the iris image quality of three eye colors (dark brown, hazel, and blue) was conducted to study whether the image quality differs with eye color. The ANOVA results show that there is no significant difference in the iris image quality of dark brown, hazel, and blue eyes at the 0.05 level of significance.
Keywords: iris image quality, iris verification, biometrics, automated identification, ANOVA, level of significance
Journal of Automation and Control, 2014, 2(1), pp. 33-38.
DOI: 10.12691/automation-2-1-5
Received November 27, 2013; Revised February 27, 2014; Accepted March 04, 2014
Copyright © 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
- Wang, Lidong. "Iris Verification and ANOVA for Iris Image Quality." Journal of Automation and Control 2.1 (2014): 33-38.
- Wang, L. (2014). Iris Verification and ANOVA for Iris Image Quality. Journal of Automation and Control, 2(1), 33-38.
- Wang, Lidong. "Iris Verification and ANOVA for Iris Image Quality." Journal of Automation and Control 2, no. 1 (2014): 33-38.
1. Introduction
The human iris is rich with features and can be used to quantitatively distinguish one eye from another. The iris contains many collagen fibers, contraction furrows, coronas, crypts, color, serpentine vasculature, striations, freckles, rifts, and pits. Measuring the patterns of these features and their spatial relationships to each other provides other quantifiable parameters for identification processes [1]. To capture a rich iris texture, the iris recognition system should have high resolution, good sharpness, and good lighting conditions [2]. Iris recognition technology has been reported to be more stable and reliable than other biometric technologies; however, not all of the iris images acquired from the device are in-focus and sharp enough for recognition [3].
Poor-quality images increase the false rejection rate (FRR) and the false accept rate (FAR) and significantly degrade the performance of iris recognition systems [4]. It has been shown that the matching performance of iris recognition systems can be improved when the iris image quality is improved [3]. Good image quality optimizes segmentation and recognition performance, especially the false non-match rate (FNMR), and quality predicts match performance [5]. Therefore, iris image quality assessment is very important for an iris recognition system.
Several quality factors have an effect on the iris image quality. They are: defocus blur, motion blur, occlusion, off-angle, specular reflection, lighting, and pixel-counts [6, 7, 8, 9]. It is a complex problem to assess the overall quality of an iris image. Defocus blur, motion blur, occlusion, and off-angle are the main factors among these factors [3, 6].
Defocus blur can result from many sources, but in general, defocus occurs when the focal point is outside the ‘depth of field’ of the object to be captured. The further an object is from this depth of field, the higher the degree of defocus. Depth of field is affected by the aperture size (i.e., the smaller the aperture size, the greater the depth of field) [3]. Motion blur can result from either the relative motion of an object or the relative motion of the camera during the exposure time [6]. In general, there are two types of motion blur: linear and non-linear. Linear motion blur can be considered as smearing in only one direction, while non-linear motion blur can be considered as smearing in multiple directions at different strengths. Occlusion occurs when a part of the iris area is covered by the eyelid and eyelashes [3]. Off-angle images can result from non-cooperative users or from capturing irises at a distance [6].
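As a minimal illustration of the linear case (a sketch assuming a grayscale image array is available; the function name and kernel length are hypothetical), horizontal motion blur can be simulated by smearing intensities along one direction with a uniform kernel:

```python
import numpy as np
from scipy.ndimage import convolve

def linear_motion_blur(image, length=9):
    """Simulate horizontal linear motion blur by smearing intensity over `length` pixels."""
    kernel = np.full((1, length), 1.0 / length)   # uniform 1-D averaging kernel
    return convolve(image.astype(float), kernel, mode="nearest")

# Example with a synthetic grayscale image.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64)).astype(float)
blurred = linear_motion_blur(sharp, length=9)
```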
Specular reflections come in two forms: diffuse-source reflections, caused by scattered ambient light, and point-source reflections (PSRs), usually caused by highly localized sources such as desk lamps or overhead lighting. With diffuse-source reflections the underlying image remains partially visible and the affected area can be quite large. PSRs are usually smaller high-intensity regions with semi-transparent borders [10]. The position of a specular reflection relative to the pupil boundary provides an indication of the gaze angle; the size of a specular reflection indicates focus quality and motion blur; pupil and iris edge contrast/sharpness indicate focus quality; the distance between the upper and lower lid can be compared to the iris diameter to estimate iris exposure; and the presence of specular reflections outside the pupil may indicate obscuration of the iris area [5].
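A simple way to flag candidate point-source specular reflections in a grayscale iris image is to look for small, near-saturated connected regions. The sketch below is only an assumed illustration (the threshold, size limit, and function name are not taken from [10]):

```python
import numpy as np
from scipy.ndimage import label

def find_point_source_reflections(image, intensity_threshold=240, max_area=200):
    """Return label IDs of small, near-saturated regions (likely PSRs) and the label map."""
    bright = image >= intensity_threshold      # near-saturated pixels
    labeled, n_regions = label(bright)         # connected components of bright pixels
    psr_ids = [rid for rid in range(1, n_regions + 1)
               if np.sum(labeled == rid) <= max_area]   # PSRs are usually small regions
    return psr_ids, labeled
```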
Several kinds of iris image quality metrics have been presented. Cambier (2007) [5] presented quality metrics including precise segmentation and determination of the iris area based on eyelids, eyelashes, specular reflections, etc.; a focus assessment based on spatial frequency content, possibly limited to the iris area; and the measurement of the pupil/iris diameter ratio. McConnon et al. (2010) [10] presented quality metrics to assess iris image characteristics with regard to focus, entropy, reflections, pupil constriction, and pupillary boundary contrast. Li et al. (2011) [9] proposed three approaches to estimate the quality metrics of defocus, motion blur, and off-angle in an iris image, respectively. A fusion method based on the likelihood ratio was proposed to combine six quality factors (defocus, motion blur, off-angle, occlusion, deformation, and light variation) into a unified quality score, and a statistical quantization method based on the t-test was proposed to adaptively classify the iris images in a database into a number of quality levels. The relationship between iris recognition results and the quality level of iris images could then be explicitly formulated.
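A common proxy for focus assessment based on spatial frequency content is the variance of the Laplacian response over the iris area; the sketch below only illustrates that general idea and is not the specific metric of [5]:

```python
import numpy as np
from scipy.ndimage import laplace

def focus_score(image, iris_mask=None):
    """Higher variance of the Laplacian (high-frequency energy) indicates a sharper image."""
    response = laplace(image.astype(float))    # second-derivative (high-frequency) response
    if iris_mask is not None:
        response = response[iris_mask]         # optionally restrict to the iris area
    return float(response.var())
```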
Chaskar et al. (2012) [11] noted that iris images are usually affected by a wide range of quality factors such as dilation, iris resolution, specular reflection, motion blur, camera diffusion, the presence of eyelids and eyelashes, head rotation, camera angle, contrast, and luminosity. Nine quality factors were assessed: dilation measure (DM), ideal iris resolution (IIR), actual iris resolution (AIR), processable iris resolution (PIR), signal-to-noise ratio (SNR), occlusion measure (OM), specular reflection (SR), eccentric distance measure (EDM), and angular assessment (θ). The dilation of a pupil can affect recognition accuracy: if the pupil is too dilated, information may be lost and the remaining iris area may not provide adequate information for recognition. IIR is the resolution of the iris obtained ideally in the absence of noise; AIR is the resolution of the iris obtained in the presence of noise such as an eyelid or an eyelash; PIR is the available part of the iris from which features can be extracted for further processing, and the ratio of AIR to IIR gives the PIR; SNR should be as high as possible, and it is high if the noise is low; OM measures what percentage of the iris area is invalid due to eyelids, eyelashes, and other noise; once eyelid occlusions are estimated, occlusions resulting from specular reflection (SR) are evaluated on the remaining iris portion unaffected by the eyelids; EDM measures the position of the iris and the pupil with respect to each other using the coordinates of their centers; and θ is the evaluation of the iris center relative to the pupil center.
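Two of these measures reduce to simple pixel ratios. The sketch below (variable and function names are assumptions, not taken from [11]) computes PIR as the ratio AIR/IIR and OM as the percentage of iris pixels invalidated by noise, given binary masks of the ideal iris region and of the noisy pixels:

```python
import numpy as np

def quality_ratios(iris_mask, noise_mask):
    """Compute PIR = AIR / IIR and OM (percentage of occluded/invalid iris pixels)."""
    iir = int(np.sum(iris_mask))                       # ideal iris pixels (no noise)
    air = int(np.sum(iris_mask & ~noise_mask))         # iris pixels unaffected by noise
    pir = air / iir if iir else 0.0                    # processable iris resolution
    om = 100.0 * (iir - air) / iir if iir else 100.0   # occlusion measure in percent
    return pir, om
```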
Research on iris recognition was conducted based on iris images acquired under varying illumination, at the visible wavelength, and using less constrained imaging protocols. The term “less constrained iris imaging conditions” refers to the acquisition of iris images at larger distances, under dynamic lighting conditions, and without requiring any active participation of the subjects. This extends the use of iris recognition technology to domains where the subjects’ cooperation cannot be expected. However, it is highly probable that the captured images are off-angle or contain partial or blurred irises (due to bad focus or motion) [12].
As wireless technology has advanced, many applications have been developed on mobile devices. Therefore, many mobile iris recognition devices, such as the PierTM 2.3 system and the HIIDETM Series 4, have been commercialized recently. A motion-and-optically blurred image can sometimes be captured because the user holds the recognition device while capturing a subject’s iris images and the user’s hand shakes. Motion-and-optical blurred images reduce iris recognition accuracy. To overcome this problem, a new method of restoring motion-and-optical blurred iris images at the same time was proposed [13].
In general, a greater capture distance offers convenience and hygiene value, because the device need not be brought very close to the eyes during capture. Sources of motion can be body motion in three dimensions, the head’s independent motion, and the eye’s independent motion. Practitioners have typically assumed that: 1) both the eyes’ and the head’s position must be controlled, and eye motion during exposure causes image blurring; 2) the iris capture process is sensitive to lighting conditions in the testing room, and no direct or artificial light should reflect directly off the enrollee’s eyes [6]; and 3) image quality can be affected by changes in the stand-off distance (capture distance). The author previously studied the effects of these three factors on iris image quality using the IrisAccessTM 4000 system [14, 15] and reached the following conclusions [16]: 1) there were almost no motion-induced effects on iris image quality scores, and iris verification was successful when a person shook his/her head, nodded, or had eye motion during enrollment and verification; 2) with the lights on or off in the lab where iris images were captured, there was almost no difference in iris image scores; and 3) distance changes within the capture range (18-25 cm) of the IrisAccessTM 4000 system did not affect the iris image quality score.
Dark brown irises often record lighter with infrared photography than blue ones. For blue irises, infrared illumination may produce less iris detail than visible light. Under visible illumination, blue irises are lighter than dark brown irises [17]. The iris has a complex texture. As all enrollment forms suggest, iris color (usually called eye color) is a recognized discriminant among people, yet iris recognition methods do not use color; they use structure (shape) [18]. For Asian people, brown irises are dominant, and their irises have a lower-contrast texture [19]. The combined use of texture and color for iris recognition systems was proposed; it contributes to improved system accuracy by reducing the false acceptance rate (FAR) and the false rejection rate (FRR). Experimental results indicated that the proposed method using only color achieves an accuracy of 99.9993, a FAR of 0.0160, and a FRR of 0.0813 [20]. The effect of eye color on texture-based iris recognition in both the near-IR and visible bands was examined; the findings showed that the matching performance associated with texture-based encoding varies with eye color and the specific band of the iris template [21]. Iris recognition performance as a function of eye color was also studied, with the performance described in the form of a relative operating characteristic (ROC) curve; it was shown that there was a statistically significant difference in performance between blue eyes and brown eyes [22, 25].
In this paper, the following study has been conducted: 1) iris images and their quality scores were obtained for people with different eye colors (dark brown, regular brown, hazel, and blue); 2) the effect of eyeglasses on the iris image quality score and iris verification was investigated; 3) the effect of contact lenses on the iris image quality score and iris verification was investigated; and 4) statistical analysis of the image quality for three eye colors (dark brown, hazel, and blue) was conducted based on the parametric method ANOVA to study whether the iris image quality differs with eye color.
2. The Iris Recognition System and the Experimental Method
An IrisAccessTM recognition system, developed by the LG Electronics Iris Technology Division, was used in this study. The EAC Software v 3.00.14 was installed in the iris recognition system. The EAC Software is designed to operate with both IrisAccessTM 3000 and IrisAccessTM 4000 series hardware. An iCAM4000 iris camera from the IrisAccessTM 4000 series was used to capture the subjects’ iris images. The iCAM4000 is a two-eye iris camera that includes an alignment indicator behind the mirror. The iris system can be used for enrollment and verification [14, 15]. The system displays the iris image and the iris image quality score as soon as an iris scan is completed. The iris image quality score, called “IrisCodeTM quality,” ranges from 0 to 100; the lowest value is fixed at 0 and the highest value is fixed at 100. To obtain iris images with higher quality, a user should keep both eyes wide open and look into the rectangular mirror, aligning the colored dot between the eyes, until the audio message “we finish taking pictures of your eyes” plays. The user should not rotate, pan, or tilt his/her face. To test the effect of eyeglasses or contact lenses on the iris image quality score and iris verification, eyeglasses or contact lenses were worn during both iris enrollment and iris verification.
The user can choose to scan the right eye, the left eye, or both through a setting in the system. When both the right eye and the left eye are chosen, the iris image quality score is a combined score for both.
3. Experimental Results and Analysis
3.1. Image Quality Scores and Iris Verification for Different Eye Colors and Races
Four people’s iris images were captured using the IrisAccessTM 4000 system in the Automated Identification Technology Lab at Mississippi Valley State University, USA, in April 2013. The four people had hazel, dark brown, regular brown, and blue eyes, respectively. None of them wore eyeglasses. The stand-off distance from the camera to the individual was 21 cm. Figure 1 shows the four individuals’ right iris images and the image quality scores after the enrollment process. The iris image acquisition yielded images of the irises and the surrounding eye regions. All four images were used for each individual’s verification, and the verifications were successful, although Figure 1 (d) is an image with a relatively low quality score due to lower eyelid occlusion.
Figure 2 (a) shows a Chinese male’s brown iris image and the quality score during the iris enrollment with eyeglasses. It indicates that there were reflections due to his eyeglasses; therefore, the iris quality score decreased. Figure 2 (b) shows an African American male’s dark brown iris image and the quality score during the iris enrollment with contact lenses. It shows a contact lens shifted over the lower right part of the iris region and slightly off-center of the pupil.
Six people were tested when they wore eyeglasses and when they did not wear eyeglasses using the IrisAccessTM 4000 system. Ten people were tested when they wore contact lenses and when they did not wear contact lenses using the system. The stand-off distance was still 21 cm. Table 1 lists the iris testing results, including the iris image quality scores of the enrollment and verification outcomes under the conditions with/without eyeglasses. Table 2 lists the iris testing results, including the iris image quality scores of the enrollment and verification outcomes under the conditions with/without contact lenses. Because the verification was conducted right after the enrollment process (almost at the same time), the iris image score during the verification was regarded as almost the same as the score during the enrollment. Table 1 indicates that eyeglasses did not affect the success in iris verification although they can decrease the iris image quality scores. Table 2 indicates that contact lenses can decrease the iris image quality scores; however, they did not affect the success in iris verification.
Table 1. The effects of eyeglasses on iris image scores and verification outcomes (without occlusion; the right eye tested)
In addition to the subjects tested above, 98 additional people were invited to participate in iris enrollment and verification tests in April 2013 to study the difference in iris image quality among three eye colors: dark brown, hazel, and blue. The individuals were 17 to 28 years old. Thirty-four of them were African Americans with dark brown eyes, 33 were African Americans with hazel eyes, and 31 were Caucasians with blue eyes. The mean of the 98 people’s iris image quality scores is approximately 98.79 (the grand total of 9681.3 divided by N = 98); the median is m = 98.8; and the standard deviation is s = 0.393. Figure 3 shows the frequency distribution of these people’s iris image quality scores.
The Pearsonian coefficient of skewness [23] is given by
\[ SK = \frac{3(\bar{x} - m)}{s} \tag{1} \]
Substituting the values of the mean, the median, and the standard deviation into the formula gives SK = -0.0856.
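Using the grand total of the quality scores (9681.3, listed with the ANOVA data below) divided by N = 98 as the mean, the substitution works out to

\[ SK = \frac{3\left(\frac{9681.3}{98} - 98.8\right)}{0.393} \approx -0.086, \]

which agrees with the reported value up to rounding.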
This SK value indicates an approximately symmetrical distribution with a very slight negative skewness. Therefore, the distribution can be assumed to be closely approximated by a normal distribution.
Table 3 shows the means and standard deviations of the iris image quality scores for the people with the three eye colors. The means and the standard deviations for the people with dark brown, hazel, and blue eyes are very close to one another.
Analysis of variance (ANOVA) for the means of the iris image quality scores of the people with the three eye colors was conducted. In the ANOVA procedure, the total sum of squares (SST), the treatment sum of squares (SS(Tr)), the error sum of squares (SSE), the treatment mean square (MS(Tr)), the error mean square (MSE), and the F statistic were calculated using the following formulas [23]:
\[ SST = \sum_{i=1}^{k}\sum_{j=1}^{n_i} x_{ij}^{2} - \frac{T^{2}}{N} \tag{2} \]
\[ SS(Tr) = \sum_{i=1}^{k}\frac{T_i^{2}}{n_i} - \frac{T^{2}}{N} \tag{3} \]
\[ SSE = SST - SS(Tr) \tag{4} \]
\[ MS(Tr) = \frac{SS(Tr)}{k-1} \tag{5} \]
\[ MSE = \frac{SSE}{N-k} \tag{6} \]
\[ F = \frac{MS(Tr)}{MSE} \tag{7} \]
where $x_{ij}$ is the jth observation of the ith sample; $n_i$ is the size of the ith sample; $T_i$ denotes the sum of the values in the ith sample; and $T$ denotes the grand total of all the data in the k samples. In this study, k = 3; $n_1$ = 34; $n_2$ = 33; $n_3$ = 31; N = 98; $T_1$ = 3359.2; $T_2$ = 3261.2; $T_3$ = 3060.9; and T = 9681.3.
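As a small consistency check (a sketch, not part of the original analysis), SS(Tr) and MS(Tr) can be evaluated directly from these group totals using Eqs. (3) and (5); SST and SSE additionally require the individual scores:

```python
# Evaluate Eq. (3) and Eq. (5) from the reported group totals and sample sizes.
n = [34, 33, 31]                  # sample sizes: dark brown, hazel, blue
T_i = [3359.2, 3261.2, 3060.9]    # per-group totals of the quality scores
T = sum(T_i)                      # grand total, 9681.3
N = sum(n)                        # 98 subjects in total
k = len(n)                        # 3 eye-color groups

ss_tr = sum(t ** 2 / ni for t, ni in zip(T_i, n)) - T ** 2 / N   # treatment sum of squares
ms_tr = ss_tr / (k - 1)                                          # treatment mean square
print(f"SS(Tr) = {ss_tr:.3f}, MS(Tr) = {ms_tr:.3f}")
# SS(Tr) is small (about 0.1), reflecting the nearly equal group means in Table 3.
```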
The author formulated the following null hypothesis:
There is no statistically significant difference in the means of the iris image quality scores of the people with the three eye colors (dark brown, hazel, and blue). The outcome is that the hypothesis is either accepted or rejected at $\alpha$ = 0.05, where $\alpha$ is the level of significance (it could also be set to other values such as 0.01). The ANOVA test follows this criterion: if F exceeds the critical value $F_{\alpha}$ with k - 1 and N - k degrees of freedom, the null hypothesis must be rejected.
All the results for the people in the three eye colors are shown in the following analysis-of-variance table:
Since F = 0.4153 is less than $F_{0.05}$ = 3.09 (for 2 and 95 degrees of freedom, obtained from the literature [23, 24]), the null hypothesis is accepted. In other words, there is no significant difference in the iris image quality scores of dark brown, hazel, and blue eyes at the 0.05 level of significance.
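The critical value and the decision can be reproduced with a short sketch using standard SciPy functions (only the reported F statistic and the degrees of freedom are used; recomputing F itself would require the raw per-subject scores):

```python
from scipy import stats

F_observed = 0.4153          # F statistic reported above
k, N = 3, 98                 # number of eye-color groups and total sample size
alpha = 0.05

# Upper-tail critical value of the F distribution with (k - 1, N - k) degrees of freedom.
F_critical = stats.f.ppf(1 - alpha, k - 1, N - k)    # approximately 3.09

print(f"F = {F_observed}, F_0.05 = {F_critical:.2f}")
print("Reject H0" if F_observed > F_critical else "Accept H0 (no significant difference)")
```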
4. Conclusions and Future Work
Iris image quality scores for five kinds of eyes (dark brown, regular brown, hazel, green, and blue) without eyeglasses were obtained through the IrisAccessTM 4000 system. The iris verifications for the five kinds of eyes were successful although occlusion reduced the image quality score of the blue eye.
Iris verification with eyeglasses can still be successful although eyeglasses decrease iris image quality scores. Contact lenses decrease the iris image quality scores; however, they do not affect the success of iris verification.
The results obtained from the parametric method based on ANOVA indicate that there is no significant difference in the iris image quality of eyes in dark brown, hazel, and blue when the level of significance α is 0.05.
For the author’s future work, the experimental datasets should include more iris image quality data, in particular from subjects of different races. The effect of choosing the left eye, the right eye, or both on performance during iris enrollment and iris verification (such as the number of trials or attempts needed to capture images or complete verification, and the success rate) will be evaluated; this will require a large number of subjects to participate in testing. More research will be conducted on iris recognition under less-controlled conditions and on improving recognition or identification performance for people wearing eyeglasses or contact lenses.
References
[1] Devireddy, Srinivasa Kumar. “An Accurate Human Identification Through Iris Recognition.” Computer Science and Telecommunications, vol. 23, no. 6, 2009, pp. 22-29.
[2] Ezhilarasan, M., Jacthish, R., Subramanian, Ganabathy K. S. and Umapathy, R. “Iris Recognition Based on Its Texture Patterns.” International Journal on Computer Science and Engineering, vol. 2, no. 9, 2010, pp. 3071-3074.
[3] Lee, J-C., Su, Y., Tu, T-M. and Chang, C-P. “A novel approach to image quality assessment in iris recognition systems.” The Imaging Science Journal, vol. 58, 2010, pp. 136-145.
[4] Wei, Zhuoshi, Tan, Tieniu, Sun, Zhenan and Cui, Jiali. “Robust and Fast Assessment of Iris Image Quality.” Proceedings of the 2006 International Conference on Advances in Biometrics, Springer-Verlag, Berlin, Heidelberg, 2006, pp. 464-471.
[5] Cambier, Jim. “Iris Image Quality Metrics.” Company Confidential and Proprietary, Technical Report, November 2007.
[6] Kalka, Nathan D., Zuo, Jinyu, Schmid, Natalia A. and Cukic, Bojan. “Image quality assessment for iris biometric.” Proc. of 2006 SPIE Conf. on Biometric Technology for Human Identification III, Orlando, FL, USA, April 17-18, 2006, vol. 6202, pp. 445-452.
[7] Bowyer, Kevin W., Hollingsworth, Karen and Flynn, Patrick J. “Image Understanding for Iris Biometrics: A Survey.” Computer Vision and Image Understanding, vol. 110, no. 2, May 2008, pp. 281-307.
[8] Kang, Byung Jun and Park, Kang Ryoung. “A new multi-unit iris authentication based on quality assessment and score level fusion for mobile phones.” Machine Vision and Applications, vol. 21, 2010, pp. 541-553.
[9] Li, Xingguang, Sun, Zhenan and Tan, Tieniu. “Comprehensive assessment of iris image quality.” 18th IEEE International Conference on Image Processing, Brussels, Belgium, September 11-14, 2011, pp. 3117-3120.
[10] McConnon, G. et al. “A Survey of Point-Source Specular Reflections in Noisy Iris Images.” International Conference on Emerging Security Technologies, Canterbury, United Kingdom, September 6-7, 2010, pp. 13-17.
[11] Chaskar, U. M., Sutaone, M. S., Shah, N. S. and Jaison, T. “Iris Image Quality Assessment for Biometric Application.” International Journal of Computer Science Issues, vol. 9, no. 3, May 2012, pp. 474-478.
[12] Proenca, Hugo. “An iris recognition approach through structural pattern analysis methods.” Expert Systems, vol. 27, no. 1, February 2010, pp. 6-16.
[13] Kang, Byung Jun and Park, Kang Ryoung. “A Study on Restoration of Iris Images with Motion-and-Optical Blur on Mobile Iris Recognition Devices.” International Journal of Imaging Systems & Technology, vol. 19, 2009, pp. 323-331.
[14] LG Electronics - Iris Technology Division. IrisAccessTM Software User Manual, Version 3.00, New Jersey, USA, December 13, 2007.
[15] LG Electronics - Iris Technology Division. IrisAccessTM 4000 Hardware Manual, New Jersey, USA, 2008.
[16] Wang, Lidong. “Iris Image Quality Testing and Iris Verification.” International Journal of Electrical and Computer Engineering, vol. 3, no. 4, 2013, pp. 1-7.
[17] Wasserman, Philip D. “Digital Image Quality for Iris Recognition.” Biometric Image Quality Workshop, National Institute of Standards and Technology, USA, March 8-9, 2006.
[18] Fu, Jian, Caulfield, H. John, Yoo, Seong-Moo and Atluri, Venkata. “Use of Artificial Color filtering to improve iris recognition and searching.” Pattern Recognition Letters, vol. 26, 2005, pp. 2244-2251.
[19] Wang, Changyu and Song, Shangling. “An iris recognition algorithm based on fractal dimension.” Acta Automatica Sinica, vol. 33, no. 7, 2007, pp. 608-702.
[20] Birgale, L. and Kokare, M. “Comparison of Color and Texture for Iris Recognition.” International Journal of Pattern Recognition and Artificial Intelligence, vol. 26, no. 3, 1256007, 2012.
[21] Monaco, Matthew K. “Effect of Eye Color on Iris Recognition.” Thesis for the degree of Master of Science, Virginia University, 2007.
[22] Smith, Kelly N. “Analysis of Pigmentation and Wavefront Coding(TM) Acquisition in Iris Recognition.” ProQuest, January 1, 2007.
[23] Freund, J. E. and Perles, B. M. Statistics: A First Course, 8th ed., Pearson Prentice Hall, New Jersey, 2004.
[24] Merrington, M. and Thompson, C. M. “Tables of percentage points of the inverted beta (F) distribution.” Biometrika, vol. 33, 1943.
[25] Sulem, P. et al. “Genetic determinants of hair, eye and skin pigmentation in Europeans.” Nature Genetics, vol. 39, December 2007, pp. 1443-1452.