WO2006025129A1 - Personal Authentication Device (個人認証装置) - Google Patents
Personal Authentication Device
- Publication number
- WO2006025129A1 (PCT/JP2005/004214)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- iris
- image
- recognition
- pupil
- personal authentication
- Prior art date
- Legal status: Ceased (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Definitions
- the present invention relates to a personal authentication device that acquires the iris pattern of a human eye and uses it to identify and authenticate an individual.
- an advantage of iris-based personal identification is that the iris pattern stabilizes about two years after birth and does not change thereafter, so re-registration is unnecessary and counterfeiting is difficult. Another advantage is that the iris is less likely to be injured than a finger or face. Iris recognition is therefore promising as a security measure for personal computer and mobile phone passwords and for gate management of entrances and exits.
- Patent Document 1 discloses a method of identifying an eye position by detecting the center position and diameter of a pupil and an iris in iris recognition.
- Patent Document 2 discloses a method of determining impersonation by extracting the density of a specific region of a human eye in iris recognition.
- Patent Document 3 discloses an iris authentication device that detects whether or not an iris is a living iris by eye movement or the like in iris recognition.
- Patent Document 4 discloses an iris imaging device that enables iris imaging even if the identified person moves after the position of the identified person is measured in iris recognition.
- Patent Document 5 discloses a personal identification device that makes it possible to capture and identify the iris of a person to be identified.
- Patent Document 6 discloses an iris image acquisition device that enables the imaging of an iris image by searching for the position of the eye from the silhouette of the person to be identified in iris recognition.
- the conventional pattern-matching method used for iris recognition is vulnerable to rotation of the recognized iris because the iris is circular, and recognition is slow because the input must be matched against registered images at recognition time. In addition, when using such a system, the subject had to align the eyes with a designated position on the system. This is a serious problem when the iris recognition system is miniaturized and used as a password device for a PC or mobile phone.
- as disclosed in Non-Patent Documents 1 and 2, the present inventors have announced a rotational diffusion neural network that can recognize the rotation orientation and shape of an object, modeled on the information processing of the brain's spatial recognition and memory system (parietal association area).
- This rotation-diffusion neural network performs polar coordinate conversion and is suitable for recognizing the shape and rotational orientation of concentric patterns such as irises.
- orientation-invariant shape recognition and shape-invariant orientation recognition are possible at the same time.
- since the memory matrix is created during learning (registration), the recognition time is very short.
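Because recognition reduces to multiplying fixed memory matrices by the input's diffusion-pattern vector, no search over registered images is needed at recognition time. The following Python sketch illustrates only this recall step; all sizes and the random matrices are purely illustrative stand-ins (the real matrices come from the orthogonal learning described later):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 25x120 polar image flattened to a 3000-element
# diffusion-pattern vector, 30 orientation neurons, 10 shape neurons.
n_in, n_orient, n_shape = 3000, 30, 10

# Memory matrices are fixed after learning (registration); random here.
M_orient = rng.normal(size=(n_orient, n_in))
M_shape = rng.normal(size=(n_shape, n_in))

v = rng.normal(size=n_in)  # diffusion pattern of an input iris

# Recognition is just two matrix-vector products, which is why the
# recognition time is very short compared with image-by-image matching.
orient_out = M_orient @ v
shape_out = M_shape @ v
```

The computational cost is independent of how many irises are registered beyond the fixed matrix sizes, which is the point the text makes about short recognition times.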
- Patent Document 1: JP 2004-21406 A
- Patent Document 2: JP 2002-312772 A
- Patent Document 3: JP 2001-34754 A
- Patent Document 4: JP 2000-11163 A
- Patent Document 5: JP 10-137220 A
- Patent Document 6: JP 10-137219 A
- Non-Patent Document 1: IEICE Transactions
- Non-Patent Document 2: IEICE Technical Report, NC2002-207, pp. 25-30, 2003
- Disclosure of Invention
- the conventional rotational diffusion neural network recognized characters and faces using still images, and its use was limited to offline processing of still images, so it was inferior in practicality. In practice, it is impossible to capture the iris pattern at recognition time in exactly the same orientation as the learning (registered) image. Because the iris image actually used for recognition was not rotation-corrected, a recognition rate of practical accuracy was not reached. Furthermore, the rejection rate for unregistered iris patterns had not been investigated and verified, which is insufficient for application to personal authentication devices.
- the present invention has been made in view of the above-described problems of the prior art, and recognizes a human iris image to identify and authenticate a person accurately and at high speed.
- the purpose is to provide a personal authentication device that can be widely used in information systems and other management tasks.
- the present invention includes a camera capable of capturing a moving image and a storage device that stores images captured by the camera at a predetermined cycle, together with iris/pupil detection means that compares the image of a person's face captured by the camera with a template whose size falls within the range of a human iris or pupil, scans the image, and thereby detects the iris or pupil of the eye, and iris pattern acquisition means that captures the iris pattern detected by the iris/pupil detection means.
- the device further includes memory conversion means that learns and stores the rotation orientation and shape of the iris pattern as vector information, obtained by converting the iris pattern with a rotational diffusion neural network that creates a diffusion pattern by multiplying a polar-coordinate-transformed image by a periodic Gaussian function.
- the device also includes iris pattern shape determination means that compares an arbitrarily acquired iris pattern, converted by the rotational diffusion neural network in the same manner, with the vector information stored by the memory conversion means.
- in the shape determination by the iris pattern shape determination means, the orientation of each piece of vector information to be compared is recognized and corrected, an orientation memory matrix and a shape memory matrix are formed from the vector information, and the orientation recognition neurons and shape recognition neurons that are the outputs of the rotational diffusion neural network perform shape recognition by correlating the recognized iris shapes with each other, thereby constituting a personal authentication device.
- at learning, an original image of the iris at a predetermined 0° orientation, or its polar coordinate conversion image, is registered as vector information. At recognition, the iris pattern shape determination means corrects an input iris image of arbitrary orientation using the recognition orientation obtained by the rotational diffusion neural network, obtains the rotational diffusion neural network output using the original image or its polar coordinate conversion image as the vector information, and compares it with a preset threshold value.
- the iris pattern acquisition means includes pupil center position detection means by labeling, center correction by the least squares method, iris edge detection means using a Laplacian filter, and iris size standardization means using linear interpolation.
- at least one of the inner product and the minimum distance of the pieces of vector information is obtained, and identity or non-identity is judged by comparison with a preset threshold value to perform personal identification.
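The inner-product and minimum-distance criteria described above can be sketched as follows in Python; the threshold values here are hypothetical placeholders, not the values used in the patent:

```python
import numpy as np

def verify(v_reg, v_in, ip_thresh=0.9, dist_thresh=0.5):
    """Accept the claimant if the normalized inner product is high
    enough OR the Euclidean distance is small enough (the text says
    at least one of the two criteria is used).

    ip_thresh and dist_thresh are assumed, illustrative values."""
    ip = float(np.dot(v_reg, v_in) /
               (np.linalg.norm(v_reg) * np.linalg.norm(v_in)))
    dist = float(np.linalg.norm(v_reg - v_in))
    return ip >= ip_thresh or dist <= dist_thresh
```

For identical vectors the inner product is 1 and the distance 0, so the claimant is accepted; for dissimilar vectors both criteria fail and the claimant is rejected.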
- the iris pattern acquisition means includes a flash light emitting device that causes a pupil reaction, an infrared light source for imaging, and an infrared transmission filter attached to the lens of the camera, and acquires an iris image while the pupil size is substantially constant. Furthermore, the iris pattern acquisition means continuously measures the change in pupil diameter over time, so it can cope with impersonation using a still image such as a photograph.
- as an alternative to correcting the orientation recognized by the rotational diffusion neural network, the orientation correction in the shape determination by the iris pattern shape determination means may learn the diffusion pattern of the iris used for recognition, recognize the input orientation, and correct the orientation by vector synthesis.
- the iris pattern acquisition means may first search for the position of the face and eyes of the person to be recognized in a low-resolution, thinned-out image, identify the eye position, and then detect the iris region from the high-resolution image.
- each pixel value in a certain region of an image including a human eye is measured to obtain an average luminance, which is standardized to a constant value to set the luminance of the iris. Furthermore, the luminance averages and standard deviations of the pupils and irises of the measurers are obtained, and the pupil and iris are discriminated using a binarization threshold determined by the ratio of the standard deviations. This constitutes a personal authentication device.
- the change in pupil diameter during the light reflex caused by flash light irradiation is measured, and the biological reaction is detected from the difference or ratio between the maximum and minimum pupil diameters. If this difference or ratio is equal to or less than a reference value, the measured iris image is determined to be an impersonation.
- the personal authentication device can identify an individual accurately and quickly using an iris pattern, reduces the burden on the subject at the time of authentication, and can reliably prevent impersonation by iris counterfeiting. In particular, since rotation and displacement of the iris pattern can be corrected, the device is not limited by usage conditions such as how the authentication apparatus is installed.
- FIG. 1 is a schematic diagram showing a device configuration of a personal authentication device according to an embodiment of the present invention.
- FIG. 2 is a conceptual diagram showing the rotational diffusion neural network used in the personal authentication device of this embodiment.
- FIG. 3 is a conceptual explanatory view showing image conversion by the rotational diffusion neural network used in the personal authentication device of this embodiment.
- FIG. 4 is a schematic flowchart showing iris image acquisition using the rotational diffusion neural network used in the personal authentication device of this embodiment.
- FIG. 5 is a graph showing changes in pupil diameter due to flash irradiation of the personal authentication device of this embodiment.
- FIG. 6 is a front view showing an eye and a template in iris image acquisition by the personal authentication device of this embodiment.
- FIG. 7 is a schematic diagram showing labeling of the personal authentication device of this embodiment.
- FIG. 8 is a front view showing a display screen when an iris image is acquired by the personal authentication device of this embodiment.
- FIG. 9 is a front view showing an image for obtaining an average luminance for acquiring an iris image by the personal authentication device of this embodiment.
- FIG. 10 is a graph showing the luminance frequency values and cumulative pixel counts of the iris and pupil in iris image acquisition by the personal authentication device of this embodiment.
- FIG. 11 is a schematic flowchart showing processing of the personal authentication device of this embodiment.
- FIG. 16 is a graph showing the orientation recognition characteristic (a) and the shape recognition characteristic (b) of the recognition experiment result by the example of the personal authentication device of the present invention.
- FIG. 18 is a graph showing the rejection rate for the registered person and the acceptance rate for others according to an embodiment of the personal authentication device of the present invention.
- 22 Flash light emitting device, 24 Infrared transmission filter
- the rotational diffusion neural network creates a diffusion pattern by multiplying a polar-coordinate-transformed image by a periodic Gaussian function in the rotation direction, and consists of an orientation recognition system that recognizes the orientation of the object and a shape recognition system that recognizes its shape.
- Figure 2 shows a conceptual diagram of the rotational diffusion neural network of this embodiment.
- the network's orientation recognition memory-system neurons (orientation recognition neurons) are arranged, for example, 30 in number at 12° intervals on the circumference, and an appropriate number of shape recognition memory-system neurons (shape recognition neurons), for example 10, correspond one each to the shapes of the objects.
- this rotational diffusion neural network inputs the converted image, generated from the original image on polar coordinates, to the diffusion layer, and diffuses the rotation information into the surrounding space.
- the object orientation and shape are recognized using the diffusion pattern that is the output of the diffusion layer.
- as shown in the explanatory diagram of FIG. 3, the coordinate system is rotated 90° counterclockwise to match the position vector.
- the object orientation is defined as the counterclockwise rotation angle from the non-rotated state, which is taken as 0°.
- the object position is an issue in object orientation recognition, but it can be handled by another method. Therefore, in object orientation recognition, the figure (object) is positioned at the center of the image, and the center of rotation of the object coincides with the origin of the xy coordinates of the original image.
- the original image used for learning and recall (the Arabic numeral 1) is a binary image of 480 × 480 dots; the original image is divided in polar coordinates at fixed radius and angle intervals, and the resulting image is used as the converted image.
- Figure 3 shows an example of the Arabic numeral 1 with a rotation angle of 0 ° and its converted image. Object orientation recognition is performed on the premise that object position recognition has already been performed, and it is considered that there is no effect of object position deviation. The converted image is generated using Equation (1).
- T is the pixel value at coordinates (r, ⁇ ) on the converted image.
- Equation (1) divides the original image of radius 200 dots into 20 radial divisions and angular divisions of 3°, further divides each small region bounded by these divisions into 10 × 10 points, and takes the sum of the pixel values at those points as the value of one element of the converted image. Consequently, one element of the converted image takes a value from 0 to 100, r takes values from 1 to 20, and θ takes integer values from 1 to 120.
- Figure 3 shows how the pixel value T at coordinates (r, θ) of the converted image is calculated by dividing the corresponding small region on the original image (radius 10 dots × angle 3°) into 10 × 10 points; each point is represented by (x, y), and the pixel values I(x, y) are summed.
- for the iris, an input pattern of 300 × 300 pixels, excluding the pupil (center) portion, is converted into a polar coordinate image of 25 × 120 pixels.
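As a rough illustration of the polar conversion described above, the following Python sketch samples a square iris image onto an (r, θ) grid. Unlike Equation (1) it takes one nearest pixel per cell instead of summing a 10 × 10 sub-grid, and the pupil-exclusion radius `r_min` is an assumed value:

```python
import numpy as np

def to_polar(img, n_r=25, n_theta=120, r_min=30):
    """Sample an image on a polar grid around the image centre.

    Simplified stand-in for the patent's Equation (1): one nearest
    pixel per (r, theta) cell; r_min skips the pupil region."""
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r_max = min(cy, cx)
    out = np.zeros((n_r, n_theta))
    for i in range(n_r):
        r = r_min + (r_max - r_min) * (i + 0.5) / n_r
        for j in range(n_theta):
            t = 2 * np.pi * j / n_theta
            y = int(round(cy + r * np.sin(t)))
            x = int(round(cx + r * np.cos(t)))
            out[i, j] = img[min(max(y, 0), h - 1), min(max(x, 0), w - 1)]
    return out
```

A key property exploited later is that rotating the original image corresponds to a circular shift of this polar image along the θ axis.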
- a polar coordinate conversion image is input to the diffusion layer to obtain a diffusion pattern that is vector information.
- this diffusion pattern is multiplied by an orientation memory matrix and a shape memory matrix.
- orientation recognition neuron outputs and shape recognition neuron outputs are obtained.
- the orientation is recognized from the resulting 30 orientation recognition neuron outputs using the population vector method.
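The population vector method mentioned above can be sketched as a vector sum of each neuron's preferred direction weighted by its output; the 12° spacing follows the 30-neuron layout described earlier:

```python
import numpy as np

def population_vector_angle(outputs):
    """Decode orientation from 30 orientation-recognition neurons
    whose preferred directions are spaced 12 degrees apart.

    Each neuron contributes a unit vector along its preferred
    direction, scaled by its output; the angle of the resultant
    vector is the recognized orientation."""
    preferred = np.deg2rad(np.arange(30) * 12)
    x = np.sum(outputs * np.cos(preferred))
    y = np.sum(outputs * np.sin(preferred))
    return float(np.rad2deg(np.arctan2(y, x)) % 360)
```

Because the decoded angle interpolates between neurons, the resolution can be finer than the 12° neuron spacing, which is why the orientation correction step later uses a 1° grid around the decoded value.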
- shape recognition associates each recognition object with a different shape recognition neuron on a one-to-one basis, and shape recognition is performed using the maximum output of these neurons.
- the rotational diffusion neural network is trained in the learning (memorization) process using an orthogonal learning method.
- learning is performed between the diffusion pattern V of the converted image for learning and the orientation recognition neuron teacher signal TO and the shape recognition neuron teacher signal TF according to Equations (2)-(7).
- at recognition, the converted image of an input iris image in an arbitrary orientation is input to the diffusion layer, and the output is the product of the diffusion pattern V and the orientation memory matrix M.
- FIG. 1 shows a system configuration diagram of one embodiment of the present invention.
- This system has a small camera 14 and a lens 15 capable of capturing a moving image for capturing an iris 12 of a human eye 10, a computer 16 for capturing the captured iris image, and a display 18.
- the main body of the computer 16 includes an image input board for capturing image data into the CPU, a DLL (Dynamic Link Library) for manipulating and processing iris images, and other storage devices.
- attached to the small camera 14 are a near-infrared projector 20, which is an infrared light source for clearly capturing the iris pattern; a flash light emitting device 22 for causing pupil reflection; and a plastic infrared transmission filter 24 for cutting visible light noise reflected in the iris 12.
- the light emitting device 22 can emit light at an arbitrary timing (in frame units) in synchronization with an external trigger output signal from the image input board of the computer 16.
- the input image is a grayscale image of 640 × 480 pixels with 256 gradations. This system is capable of real-time image capture at approximately 13 frames/s.
- the computer 16 and its operating system used a commercially available personal computer.
- the processing flow of the iris recognition system is shown in the flow chart of FIG.
- the eye 10 of the person to be recognized is photographed with the small camera 14 (slO).
- the imaging iris diameter is initialized (s11).
- an image of the iris 12 is acquired when the pupil diameter is at the constant pupil size (2.9 mm-3.0 mm).
- the image is normalized using the average luminance of a certain region including the eye 10 as shown in FIG. 9, and the pupil 27 is detected using the one-eye partial template 26 shown in FIG. 6 (s13).
- labeling is a method that detects specific parts such as the iris by attaching the same label (number) to all connected pixels (connected components) and assigning different numbers to different connected components.
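A minimal Python version of such connected-component labeling (4-connectivity, breadth-first flood fill) might look like this; it is a generic sketch, not the patent's implementation:

```python
import numpy as np
from collections import deque

def label(binary):
    """4-connected component labeling: every connected set of
    foreground pixels gets its own number, different components
    get different numbers."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1  # start a new component
                q = deque([(sy, sx)])
                labels[sy, sx] = current
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx]
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current
```

The largest dark component inside the search range would then be taken as the pupil; its centroid and extent give the pupil center, diameter, and area measured in the next step.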
- the pupil center, pupil diameter, and pupil area are measured simultaneously, and pupil detection is completed (s15).
- the pupil center measured by the above labeling is corrected using the least squares method.
- the iris diameter is measured. If the iris diameter has been initialized, the previously measured value is used as it is (s16).
- Laplacian processing is performed in the measurement of iris diameter.
- with 0° directly above the center of the pupil and counterclockwise taken as positive, the right iris edge is searched in the angle range 75°-135°, and the left iris edge in the range -75° to -135°.
- the positions with the maximum cumulative pixel value are taken as the right edge and the left edge of the iris, respectively.
- from the relative ratio between the measured sizes (in pixels) of the iris 12 and the pupil on the image, it is confirmed that the pupil diameter is 2.9 mm-3.0 mm.
- the pupil diameter is specified as the range 2.9 mm-3.0 mm, rather than a single value, to allow a slight size error and make it easier to acquire the iris image.
- Fig. 8 shows the screen of the display 18 when an iris image is acquired. From the obtained image, a 300 × 300 pixel image centered on the pupil is cut out (s18).
- the sizes of the iris and pupil on the image change depending on the distance from the camera 14 and the zoom.
- the size is standardized using a known linear interpolation method to make the iris size constant.
- the average luminance of multiple images is obtained and a correction coefficient is set to normalize the luminance (s19).
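A self-contained bilinear (linear interpolation) resize, which is one standard way to realize the size standardization described above, could look like this in Python:

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Scale an image with bilinear interpolation so that, e.g., the
    iris diameter on every captured image becomes the same pixel size."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical interpolation weights
    wx = (xs - x0)[None, :]  # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

In practice the output size would be chosen so the measured iris diameter maps to a fixed number of pixels before the 300 × 300 crop is converted to polar coordinates.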
- this standardized image is defined as the reference iris pattern (the input image of the rotational diffusion neural network) (s20), and the reference iris pattern is learned and memorized.
- each pixel value in a certain region A of the image including the eye is measured to obtain an average luminance, and this is standardized to a constant value for each measurement image, thereby eliminating variation in luminance among measurers and among acquired images.
- the average luminance of region A is measured so that the sclera, iris, and pupil remain distinct after luminance standardization; this average is set to the middle of the 256-level range.
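The luminance standardization described here amounts to rescaling the image so the mean of region A lands mid-range. A hedged Python sketch, where the target value 128 and the region argument are assumptions consistent with "the middle of the 256-level range":

```python
import numpy as np

def normalize_luminance(img, region, target=128.0):
    """Scale the whole image so that the mean luminance of the given
    region (a pair of slices, like region A around the eye) becomes
    the middle of the 0-255 range, then clip to valid values."""
    mean = img[region].mean()
    out = img * (target / mean)
    return np.clip(out, 0.0, 255.0)
```

Using a multiplicative correction keeps the relative contrast between sclera, iris, and pupil while removing per-capture brightness variation.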
- Range B surrounded by the inner line is the pupil detection range.
- a method for determining the binarization threshold for pupil detection will be described. After luminance standardization, the luminance of the iris and pupil is measured in the same manner as when the luminance standard was determined. The average luminance and standard deviation of the pupils and irises of multiple measurers are calculated, and the optimum binarization threshold is determined from the ratio of the standard deviations. The threshold Y is determined by the following formula.
- AVp is the average pupil luminance, AVi is the average iris luminance, and SDp is the standard deviation of the pupil luminance.
- Fig. 10 shows a graph of the cumulative number of pixels for each luminance value using the data of all subjects after luminance standardization. According to Fig. 10, the iris luminance and pupil luminance are clearly separated by the binarization threshold. As a result, the pupil center is detected by the one-eye template, and the pupil edge and iris edge are detected accurately.
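The exact formula for Y is not reproduced in this text, but one plausible form of a threshold "determined by the ratio of the standard deviations" is the point between the two class means weighted by each class's spread. The following Python function is an assumption for illustration, not the patented formula:

```python
def binarize_threshold(av_pupil, sd_pupil, av_iris, sd_iris):
    """Hypothetical threshold between pupil and iris luminance:
    the crossover point sits closer to the class with the smaller
    standard deviation. NOT the patent's exact formula."""
    return (sd_iris * av_pupil + sd_pupil * av_iris) / (sd_pupil + sd_iris)
```

With well-separated distributions like those in Fig. 10, any threshold between the two peaks separates pupil from iris pixels; weighting by the standard deviations just balances the two error rates.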
- an iris input pattern at the time of recognizing the iris of another person is also obtained by the same procedure as that for acquiring the reference iris pattern.
- personal identification is performed by shape recognition.
- the normalized diffusion pattern is used to perform learning and recognition using the above-described rotational diffusion neural network.
- a new shape recognition criterion is added in order to improve the discrimination accuracy of the unlearned iris.
- the new shape recognition criteria use the inner product and the minimum distance, which are often used to examine vector similarity.
- these methods are known to be vulnerable to pattern variations.
- in iris recognition there are various pattern variations; one of them is orientation deviation.
- it is almost impossible to capture the learning image and the recognition image from the camera 14 in exactly the same orientation. Therefore, the inner product and the minimum distance can be introduced as shape recognition criteria by correcting the orientation using the orientation recognition that is a feature of the rotational diffusion neural network.
- although the orientation of the learning image is defined as 0°, the orientation of the input image at recognition time is not necessarily 0°. Therefore, in authentication using the rotational diffusion neural network, the orientation is first recognized and the orientation of the input image is corrected.
- Fig. 11 shows the flow of personal authentication using the rotational diffusion neural network.
- the orientation correction range, step angle, and recognition method are selected.
- the lower limit of orientation correction is set to -3°, the upper limit to +3°, and the step angle to 1°.
- the correction orientations are therefore -3° to +3°, and seven iris patterns with different orientation corrections are obtained.
- the correction orientation is not limited to a single value in order to allow for the resolution and error of the recognition orientation obtained by the rotational diffusion neural network.
- individual iris pattern (shape) authentication is performed by the specified recognition method (inner product, minimum distance, or shape recognition neuron output).
- in recognition by inner product and minimum distance, vector calculations (inner product, minimum distance) are performed between the learning image and the input image rotation-corrected within the specified range based on the orientation recognized by the rotational diffusion neural network.
- the inner product and minimum distance between the learning image and the seven rotation-corrected input images are calculated.
- for the inner product, the maximum value is used; for the minimum distance, the minimum value is used.
- if the maximum inner product is greater than the determination threshold, the person is accepted as the registered person; if it is less than the threshold, the person is judged to be another person.
- if the minimum distance is smaller than the determination threshold, the person is accepted as the registered person; if it is larger than the threshold, the person is judged to be another person.
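The seven-way orientation-corrected matching can be sketched on polar images, where a rotation of the iris is a circular shift along the θ axis. Note that for a 120-column polar image one column corresponds to 3°, coarser than the 1° step described above, so this is only illustrative:

```python
import numpy as np

def match_with_rotation(polar_reg, polar_in, steps=range(-3, 4)):
    """Try seven orientation corrections (shifts of -3..+3 theta
    columns) and keep the best (largest) normalized inner product
    and the best (smallest) Euclidean distance."""
    best_ip, best_dist = -np.inf, np.inf
    v_reg = polar_reg.ravel()
    for s in steps:
        v = np.roll(polar_in, s, axis=1).ravel()  # rotate in theta
        ip = np.dot(v_reg, v) / (np.linalg.norm(v_reg) * np.linalg.norm(v))
        best_ip = max(best_ip, float(ip))
        best_dist = min(best_dist, float(np.linalg.norm(v_reg - v)))
    return best_ip, best_dist
```

The returned maximum inner product and minimum distance would then be compared against the determination thresholds as described above.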
- the shape recognition neuron output is also used for shape determination (personal authentication).
- the orientation-corrected image is input again to the rotational diffusion neural network, and the determination is made from the shape recognition neuron outputs. If the shape recognition neuron output representing a registered person is larger than the preset determination threshold, the input iris image is judged to be the iris of the person represented by that neuron. If no shape recognition neuron output exceeds the determination threshold, the input is judged to be unregistered.
- V is a vector representing the normalized diffusion pattern of the learning iris image, and V' is that of the recognition image.
- |V| and |V'| are the absolute values of V and V', representing the length of each vector.
- the minimum distance is the magnitude of the vector difference |V - V'|.
- FIG. 13 shows a flowchart of recognition by the rotational diffusion neural network in the above-described iris pattern recognition according to this embodiment.
- orientation correction is performed in the recognition process when the inner product and the minimum distance are used because the learning (registered) image is the 0° iris image, so the recognition target must also be at the 0° orientation.
- the orientation of the iris pattern can be recognized by the rotational diffusion neural network, so rotational changes can be handled by correcting the orientation. Furthermore, by introducing the inner product and the minimum distance as shape recognition criteria, a 0% acceptance rate of others can be achieved using orientation-corrected iris patterns. Moreover, the pupil center, pupil edge, and iris edge can be detected automatically through pupil center position detection by labeling, center correction by the least squares method, and edge detection using a Laplacian filter.
- the sizes of the iris and pupil on the image change depending on the distance from the camera 14 and the zoom, but the image can be scaled up or down using linear interpolation, and the iris size can be standardized to accommodate the size change. Furthermore, by shining a flash light on the eye 10 to induce a pupil response and measuring the temporal change in pupil diameter, impersonation using an iris photograph or the like can be rejected.
- the personal authentication device of the present invention creates a memory matrix that characterizes each learned iris. Therefore, the recognition time is short because the amount of calculation is small. In addition, since the pupil center position, pupil edge, and iris edge are automatically detected, eye alignment is not required and it can be used in a wide range of applications.
- the personal authentication device of the present invention is not limited to the above-described embodiment; it is sufficient to have a camera capable of capturing moving images. If the iris image can be captured directly by the CPU, the image input board and the DLL for manipulating and processing iris images are not always necessary.
- the personal authentication device of the present invention can also be used without the flash light emitting device. In this case, an iris image converted to a fixed relative pupil size by the linear interpolation is captured.
- the vector information used for recognition is not limited to the original image captured as described above or its polar coordinate conversion image; it may be information obtained by processing the original image with a diffusion pattern or a Laplacian filter.
- the following processing is performed to further prevent impersonation.
- the processing from s16 and s17 onward is performed as shown in FIG.
- the LED is flashed to obtain a constant pupil size
- the impersonation is determined by comparing the relative pupil diameter before and after the light emission.
- the relative pupil diameter from when the first LED emits light during recognition until the light reflex of the pupil occurs is used for the impersonation determination.
- the first flash is emitted at the 20th frame of the measurement image, so the light reflex occurs by the 30th frame.
- the relative pupil diameters of frames 20 to 29, during which the flash is emitted and the light reflex occurs, are stored.
- the maximum and minimum are calculated from the stored relative pupil diameters, and their difference or ratio is obtained. If the difference or ratio is equal to or less than the reference value, the image is determined to be an impersonation.
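The frame-20-to-29 impersonation test described above reduces to comparing the spread of relative pupil diameters against a reference value: a live eye constricts under the flash, while a photograph stays constant. A sketch with a hypothetical reference value:

```python
def spoof_check(relative_diameters, ref_diff=0.1):
    """relative_diameters: pupil diameter relative to the iris for
    frames 20-29 (flash fires at frame 20, reflex by frame 30).

    Returns True if the image is judged an impersonation, i.e. the
    max-min difference is at or below the reference value ref_diff
    (an assumed placeholder, not the patent's value)."""
    d_max = max(relative_diameters)
    d_min = min(relative_diameters)
    return (d_max - d_min) <= ref_diff
```

The same check could equally be done with the ratio d_max / d_min, as the text allows either the difference or the ratio.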
- Fig. 15 shows the iris images (300 × 300 pixels) used for learning and recognition. All learning and recognition was performed with the iris image of each subject's right eye. The learning orientations were six, at 60° intervals between 0° and 360°. The number of learning patterns is given by (number of recognized irises) × (number of learning orientations): 18, 30, and 60 patterns for 3, 5, and 10 subjects, respectively.
- Fig. 16 (a) shows the orientation recognition characteristics and Fig. 16 (b) shows the shape recognition characteristics when the recognition experiment was performed by 10 people.
- the horizontal axis is the input rotation orientation of the iris
- the vertical axis is the recognition orientation.
- the horizontal axis is the input rotation direction of the iris
- the vertical axis is the shape recognition neuron output
- ⁇ is the average value of the target neuron output
- × represents the average value of the non-target neuron output.
- the vertical line in each input direction represents the standard deviation.
- the average value of the target neuron is approximately 1.0 and the average value of the non-target neuron is approximately 0.0, and the target neuron output is higher than the non-target neuron output.
- The iris images used for learning and recognition were then changed to those of three and five people, and after learning in advance, a real-time recognition experiment was performed.
- The iris images (300 × 300 pixels) used for learning were selected from Fig. 15. Learning and recognition were performed with the same subject's right-eye iris.
- The number of learning patterns is given by (number of enrolled irises) × (number of learning orientations): 18 and 30 patterns for 3 and 5 subjects, respectively.
- the images used for learning were taken in advance on the day of the experiment.
- Fig. 17(a) shows the orientation recognition characteristics, and Fig. 17(b) the shape recognition characteristics, for the recognition experiment with five subjects.
- In Fig. 17(a), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the recognized orientation. Good linearity was seen between the input and recognized orientations, showing that the orientation could be recognized almost correctly.
- In Fig. 17(b), the horizontal axis is the input rotation orientation of the iris and the vertical axis is the shape-recognition neuron output; ○ represents the average target-neuron output and × the average non-target-neuron output.
- The vertical bar at each input orientation represents the standard deviation.
- The average target-neuron output is approximately 1.0, the average non-target-neuron output approximately 0.0, and the target-neuron output is always larger than the non-target-neuron output.
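Since in both experiments the target neuron fires near 1.0 and the non-target neurons near 0.0, the enrolled person can be read off as the iris whose shape-recognition neuron output is largest. A minimal sketch of that readout (the function and names are illustrative assumptions, not the patent's implementation):

```python
# Identify the subject from the shape-recognition neuron outputs: the
# enrolled iris whose neuron fires strongest is taken as the recognized
# person. Names and signature are hypothetical.

def recognize_person(neuron_outputs, enrolled_names):
    """Return the enrolled name whose shape-recognition neuron output is largest."""
    best_index = max(range(len(neuron_outputs)), key=neuron_outputs.__getitem__)
    return enrolled_names[best_index]
```

With outputs like `[0.05, 0.97, 0.02]` for three enrolled irises, the second person would be selected.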
- A personal authentication experiment was then performed using images captured from the camera.
- The experiment was conducted with 10 subjects: 5 enrolled (learned) and 5 unenrolled (unlearned).
- The images used for learning were taken in advance on the day of the authentication experiment. Learning was performed in 10 sets in total, replacing the iris images of the five enrolled people one at a time.
- For the learning iris image, the image captured when the input rotation orientation of the iris was 0° was used. Since 10 subjects were tested against each of the 10 learning sets, there were 100 trials in total (50 trials for enrolled subjects and 50 for unenrolled subjects).
- Fig. 18 shows the error rates when the shape-recognition neuron output is used as the criterion, Fig. 19 when the inner product is used, and Fig. 20 when the minimum distance is used.
- The dotted line represents the false rejection rate (the rate of rejecting the enrolled user by mistake), and the solid line the false acceptance rate (the rate of accepting another person by mistake).
- The vertical axis represents each error rate and the horizontal axis the decision threshold. The false rejection rate was obtained by counting the trials in which the enrolled user was rejected, i.e. when the shape-recognition neuron output or inner product corresponding to that user was smaller than the decision threshold, or when the minimum distance was larger than the decision threshold.
- The false acceptance rate was likewise calculated by counting the trials in which another person was accepted. From the experimental results: with the shape-recognition neuron output as the criterion, the false rejection and false acceptance curves crossed at a threshold of about 0.78, with an error rate of about 43%. With the inner product, the curves crossed at a threshold of about 0.94, with an error rate of about 15%; however, at a decision threshold of 0.96, the false rejection rate was 20% but all other persons could be rejected. With the minimum distance, the curves crossed at a decision threshold of about 0.35, with an error rate of about 13%; at a decision threshold of 0.25, the false rejection rate was 26% but all other persons could be rejected.
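The threshold sweep behind Figs. 18 to 20 can be sketched as below, assuming a "higher score means same person" criterion such as the neuron output or inner product (the comparisons would be inverted for the minimum-distance criterion). The function names and the score lists in the usage note are illustrative assumptions, not the experimental data.

```python
# For each candidate threshold, the false rejection rate (FRR) counts
# enrolled-user trials whose score falls below the threshold, and the
# false acceptance rate (FAR) counts impostor trials at or above it.
# The crossover of the two curves gives the equal-error operating point.

def error_rates(genuine_scores, impostor_scores, threshold):
    """FRR/FAR for a 'higher score = same person' criterion."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return frr, far

def equal_error_point(genuine_scores, impostor_scores, steps=100):
    """Scan thresholds in [0, 1] and return (threshold, frr, far)
    closest to the crossover of the two error-rate curves."""
    best = None
    for i in range(steps + 1):
        t = i / steps
        frr, far = error_rates(genuine_scores, impostor_scores, t)
        if best is None or abs(frr - far) < abs(best[1] - best[2]):
            best = (t, frr, far)
    return best
```

With well-separated scores, e.g. genuine scores around 0.9 and impostor scores below 0.6, the crossover lands between the two groups with both error rates at zero.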
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2004249933 | 2004-08-30 | ||
| JP2004-249933 | 2004-08-30 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2006025129A1 true WO2006025129A1 (ja) | 2006-03-09 |
Family
ID=35999793
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2005/004214 (WO2006025129A1, Ceased) | 個人認証装置 (Personal authentication device) | 2004-08-30 | 2005-03-10 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2006025129A1 (ja) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH05288520A (ja) * | 1992-04-14 | 1993-11-02 | Matsushita Electric Ind Co Ltd | パターンマッチング法 |
| JP2001167284A (ja) * | 1999-12-06 | 2001-06-22 | Oki Electric Ind Co Ltd | 眼鏡反射検出装置及び眼鏡反射検出方法 |
| JP2002006474A (ja) * | 2000-06-21 | 2002-01-09 | Toppan Printing Co Ltd | マスクパターン画像処理方法 |
| JP2003030659A (ja) * | 2001-07-16 | 2003-01-31 | Matsushita Electric Ind Co Ltd | 虹彩認証装置及び虹彩撮像装置 |
| JP2003187247A (ja) * | 2001-12-14 | 2003-07-04 | Fujitsu Ltd | 口唇形状特定プログラム,発話意図検出プログラム及び顔認識用画像生成プログラム |
Non-Patent Citations (2)
| Title |
|---|
| ARIMURA K ET AL: "Kaiten Kakusangata Neural Net ni yoru 2 jigen Teiji Ichi ni Taisuru Ichi Fuhen na Buttai Hoi to Keijo no Doji Ninshiki.", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS., vol. 100, no. 490, 1 December 2000 (2000-12-01), pages 23 - 30, XP002996087 * |
| MURAKAMI M ET AL: "Kaiten Kakusangata Neural Net o Mochiita Kosai ni yoru Real time Kojin Shikibetsu.", THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS., vol. 103, no. 733, 11 March 2004 (2004-03-11), pages 55 - 60, XP002996086 * |
Cited By (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104781830A (zh) * | 2012-11-19 | 2015-07-15 | 虹膜技术公司 | 活人眼睛的识别方法及识别装置 |
| CN104781830B (zh) * | 2012-11-19 | 2018-02-02 | 虹膜技术公司 | 活人眼睛的识别方法及识别装置 |
| CN106778567A (zh) * | 2016-12-05 | 2017-05-31 | 望墨科技(武汉)有限公司 | 一种通过神经网络来进行虹膜识别的方法 |
| CN106778567B (zh) * | 2016-12-05 | 2019-05-28 | 望墨科技(武汉)有限公司 | 一种通过神经网络来进行虹膜识别的方法 |
| CN107330395A (zh) * | 2017-06-27 | 2017-11-07 | 中国矿业大学 | 一种基于卷积神经网络的虹膜图像加密方法 |
| CN107330395B (zh) * | 2017-06-27 | 2018-11-09 | 中国矿业大学 | 一种基于卷积神经网络的虹膜图像加密方法 |
| CN114727094A (zh) * | 2022-03-23 | 2022-07-08 | 苏州思源科安信息技术有限公司 | 一种旋转机构零位校准的实现方法 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
| AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase | ||
| NENP | Non-entry into the national phase |
Ref country code: JP |