
ISIS (Iris Segmentation for Identification Systems)

ISIS (Iris Segmentation for Identification Systems) is an iris segmentation system which addresses several common problems, such as ellipse-fitting errors, the presence of spotlights, and image heterogeneity. The algorithm proceeds through four main phases, each consisting of several steps: a) image pre-processing, b) pupil detection, c) polar transform, d) limbus detection.

Pre-processing: the input image I contains information that is superfluous, and sometimes even misleading, for the purpose of locating the pupil. Details such as veins, skin pores, or the shapes of eyelashes are complex patterns that can negatively interfere with edge detection. The first phase of the algorithm removes such interference through a posterization filter FE (Enhance).
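The filter FE is not specified in detail here; a minimal posterization sketch in Python (the number of grey-level bands is an illustrative assumption, not the value used by ISIS) could look like this:

```python
def posterize(pixels, levels=4):
    """Quantize 8-bit grey values into `levels` uniform bands,
    flattening fine texture (veins, pores, lashes) that would
    otherwise mislead edge detection.

    `pixels` is a list of rows of integer grey values in [0, 255].
    """
    step = 256 // levels
    return [[(p // step) * step for p in row] for row in pixels]
```

With `levels=4` every pixel is snapped to one of the values 0, 64, 128, 192, so small intensity fluctuations inside a band disappear.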

 

Segmentation: after locating the pupil, a number of approaches in the literature start from its centre and analyze the image in the radial direction to find the limbus, which represents the outer iris contour. ISIS adopts a different approach. It relies on the observation that segmenting an eye image by searching for linear structures produces more accurate results, through techniques which are less computationally expensive.

 

Special Cases: when the image is blurred, or the iris is poor in detail, the pupil location process might identify the whole iris instead of the pupil within it. This might appear to be a weakness of the approach, if interpreted as a sign of limited robustness. On the contrary, it can easily be turned into a strength of the algorithm.

 

ISIS has been tested on three iris databases: UPOL, CASIA v1.0, and UBIRIS Session 2. Its performance has been compared with that of Masek's implementation of the integro-differential operator. In most cases both pupil and iris are correctly detected, even under large variations in acquisition conditions and without any parameter tuning.

 

Quality indexes for detected irises

Most existing quality indexes aim at quantitatively and qualitatively evaluating the information provided by the iris after its location, i.e. the range of features that can be useful for accurate recognition. This is the main reason why they rely heavily on the Fourier transform, the wavelet transform, or entropy to measure information content.

The main goal of the index used by ISIS is rather to provide an estimate of the quality of the location itself, in terms of the accuracy in determining the iris radius, the pupil radius, and the pupil centre. This accuracy measure is computed as the weighted sum of three different indexes: pupil separability (Spupil), iris separability (Siris), and Gaussian distribution of grey levels (Gdist).
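The combination of the three indexes can be sketched as follows; the weights are illustrative placeholders (the actual weights used by ISIS are not given here), and each index is assumed to be normalized to [0, 1]:

```python
def location_quality(s_pupil, s_iris, g_dist, weights=(1/3, 1/3, 1/3)):
    """Weighted sum of the three location-quality indexes:
    pupil separability (Spupil), iris separability (Siris), and
    Gaussian distribution of grey levels (Gdist).

    The equal weights are an illustrative assumption only.
    """
    w_p, w_i, w_g = weights
    return w_p * s_pupil + w_i * s_iris + w_g * g_dist
```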

 

N-IRIS (Iris Coding and Matching)

After pupil and iris contour identification, and after noisy elements have been accounted for (e.g. eyelashes, which occupy different positions and extents in different captures of the same subject), the relevant iris annulus must be coded. The differing nature of its useful visual structures seems to call for different coding schemes, and for the fusion of their results as in multimodal feature extraction. N-Iris exploits a version of Local Binary Patterns (LBP) to record the textural regularities present in the iris, and BLOB identification for coding lighter or darker spots inside the iris region. Future developments will investigate the addition of appropriate versions of more local operators.

The Local Binary Pattern (LBP) was introduced in 1996 to analyze image texture. In its basic version, the operator associates to each pixel in the image a value, computed according to its 3 × 3 neighbourhood. This value is the decimal representation of the binary string obtained by comparing the value of the pixel with each value in its neighbourhood: if the central pixel has a lower value than a given neighbour, a 1 is recorded in the string for that neighbour, and a 0 otherwise. The basic operator was later extended to process pixel neighbourhoods of variable size, and to be invariant to rotations. The circular neighbourhood of a pixel is exploited, and sample points are identified by interpolation. The resulting operator is called LBP_{P,R}, where P is the number of sample points, and R is the radius of the neighbourhood.
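The basic 3 × 3 operator described above can be sketched in Python, following the text's convention that a neighbour strictly brighter than the centre yields a 1 (the clockwise-from-top-left bit order is an illustrative choice; any fixed order works as long as it is used consistently):

```python
def lbp_3x3(img, y, x):
    """Basic LBP code of the pixel at (y, x) from its 3x3 neighbourhood.

    Each neighbour contributes one bit: 1 if it is strictly greater
    than the centre pixel, 0 otherwise. Bits are read clockwise from
    the top-left neighbour and interpreted as a decimal value.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    c = img[y][x]
    code = 0
    for dy, dx in offsets:
        code = (code << 1) | (1 if img[y + dy][x + dx] > c else 0)
    return code
```

For a centre of 5 with the three top neighbours at 9 and the rest at 1, the bit string is 11100000, i.e. the code 224; a perfectly flat patch yields 0.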

Differential operators can address the problem of identifying lighter or darker regions in the iris. The combination of the Laplacian operator (effective as a contour detector, but very sensitive to noise) with a Gaussian filter (to preliminarily smooth the image) presents two advantages: noise reduction, due to smoothing, and better blob enhancement, due to the increased size of the Gaussian filter. This is the core idea of what we call BLOB, where iris blobs are modeled by a non-symmetric two-dimensional Gaussian function.
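A minimal sketch of the Laplacian-of-Gaussian idea follows; the kernel radius and sigma are illustrative choices, not the BLOB parameters, and a real implementation would use an optimized convolution library:

```python
import math

def laplacian_of_gaussian(img, sigma=1.0, radius=2):
    """Smooth with a Gaussian, then apply a discrete Laplacian.

    Bright or dark spots (blobs) give strong responses after smoothing,
    while the Gaussian suppresses the noise the Laplacian is
    sensitive to. Border pixels are left at 0 for simplicity.
    """
    size = 2 * radius + 1
    # Build and normalize a 2D Gaussian kernel.
    g = [[math.exp(-((y - radius) ** 2 + (x - radius) ** 2) / (2 * sigma ** 2))
          for x in range(size)] for y in range(size)]
    total = sum(map(sum, g))
    g = [[v / total for v in row] for row in g]

    h, w = len(img), len(img[0])

    def conv(src, kernel, kr):
        out = [[0.0] * w for _ in range(h)]
        for y in range(kr, h - kr):
            for x in range(kr, w - kr):
                out[y][x] = sum(kernel[dy + kr][dx + kr] * src[y + dy][x + dx]
                                for dy in range(-kr, kr + 1)
                                for dx in range(-kr, kr + 1))
        return out

    smoothed = conv(img, g, radius)
    laplacian = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]
    return conv(smoothed, laplacian, 1)
```

A bright spot on a dark background produces a strongly negative response at its centre, which is how a blob detector picks it out.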

N-Iris fuses the LBP and BLOB methods. The two codes are simply concatenated, and the results of the two matching procedures are fused at score level, so that, at present, coding and matching of one method do not affect the other. Matching between two binary codes can be performed using the Hamming distance weighted by the segmentation masks, as discussed by Daugman. We also considered shifts of up to 10 pixels to address rotation variations. The final distance is the one computed on the alignment returning the maximum match. The experiments to assess N-IRIS performance were mainly performed on the databases UBIRIS v1s2 (version 1, session 2) and UBIRIS v2 (version 2), following the protocols defined by the program committee of NICE II.
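The mask-weighted Hamming matching with shift compensation can be sketched as follows; codes and masks are modeled as flat bit lists, and the helper names are illustrative, not the N-Iris implementation:

```python
def masked_hamming(code_a, mask_a, code_b, mask_b):
    """Daugman-style masked Hamming distance between two binary codes.

    Bits flagged as noisy (0) in either segmentation mask are excluded;
    the distance is the fraction of disagreeing bits among valid ones.
    """
    valid = disagree = 0
    for a, ma, b, mb in zip(code_a, mask_a, code_b, mask_b):
        if ma and mb:
            valid += 1
            disagree += (a != b)
    return disagree / valid if valid else 1.0

def best_match(code_a, mask_a, code_b, mask_b, max_shift=10):
    """Try circular shifts of up to max_shift positions (as with the
    10-pixel shifts mentioned above) and keep the alignment that gives
    the lowest distance, i.e. the maximum match."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted_code = code_b[s:] + code_b[:s]
        shifted_mask = mask_b[s:] + mask_b[:s]
        best = min(best, masked_hamming(code_a, mask_a,
                                        shifted_code, shifted_mask))
    return best
```

Because the mask is shifted together with the code, a code that is a pure rotation of another (within the shift range) matches with distance 0.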

The combined method was tested within the provided Java framework on a dataset of 1000 images that the NICE II program committee extracted from UBIRIS v2, and its performance was measured in terms of decidability. On that dataset, the method achieved a decidability value of 1.4825, while on the dataset used during the independent evaluation within NICE II, the obtained decidability value was 1.2565. We had no access to the latter dataset, so we cannot explain this difference.