HK1211716B - Methods and systems for spoof detection for biometric authentication - Google Patents
- Publication number: HK1211716B (application HK15110176.8A)
- Authority: HK (Hong Kong)
Abstract
The subject application relates to methods and systems for spoof detection for biometric authentication. This specification describes technologies relating to biometric authentication based on images of the eye. In general, one aspect of the subject matter described in this specification can be embodied in methods that include obtaining images of a subject including a view of an eye. The methods may further include determining a behavioral metric based on detected movement of the eye as the eye appears in a plurality of the images, determining a spatial metric based on a distance from a sensor to a landmark that appears in a plurality of the images each having a different respective focus distance, and determining a reflectance metric based on detected changes in surface glare or specular reflection patterns on a surface of the eye. The methods may further include determining a score based on the behavioral, spatial, and reflectance metrics and rejecting or accepting the one or more images based on the score.
Description
Related information of divisional application
This application is a divisional application. The parent application is an invention patent application filed on July 2, 2013, with application number 201310276022.6, entitled "Method and system for spoof detection for biometric authentication".
Technical Field
The invention relates to biometric verification based on images of the eye.
Background
It is often desirable to limit access to a property or resource to a particular individual. Biometric identification systems may be used to verify the identity of an individual to grant or disallow access to a resource. For example, an iris scanner may be used by a biometric security system to identify an individual based on unique structures in the iris of the individual.
Disclosure of Invention
This specification describes technologies relating to biometric verification based on images of the eye. In general, one aspect of the subject matter described in this specification can be embodied in a method that includes obtaining two or more images of a subject that include a view of an eye, where the images collectively include multiple focal lengths. The method may further include determining a behavioral metric based at least on the detected movement of the eye as it appears in a plurality of the images. The behavioral metric may be a measure of deviation of the detected movement and timing from an expected movement of the eye. The method may further include determining a spatial metric based at least on distances from a sensor to landmarks appearing in a plurality of the images each having a different respective focal length. The method may further include determining a reflection metric based at least on a detected change in surface glare or specular reflection patterns on a surface of the eye as the eye appears in a plurality of the images, wherein the reflection metric is a measure of change in glare or specular reflection patches on the surface of the eye. The method may further include determining a score based at least on the behavioral, spatial, and reflection metrics. The method may further include rejecting or accepting one or more images based on the score.
In general, one aspect of the subject matter described in this specification can be embodied in a system that includes a sensor configured to capture two or more images of a subject that include a view of an eye, where the images collectively include multiple focal lengths. The system may further include an illumination element that provides light stimulation in synchronization with the capture of one or more images by the sensor. The system may further include means for determining a behavioral metric based at least on the detected movement of the eye as it appears in the plurality of the images. The behavioral metric is a measure of deviation of the detected movement and timing from an expected movement of the eye. The system can further include a module configured to determine a spatial metric based at least on distances from sensors to landmarks appearing in a plurality of the images each having a different respective focal length. The system may further include a module configured to determine a reflection metric based at least on the detected change in surface glare or specular reflection pattern on a surface of the eye as the eye appears in a plurality of the images, wherein the reflection metric is a measure of the change in glare or specular reflection patch on the surface of the eye. The system may further include a module configured to determine a score based at least on the behavioral, spatial, and reflectance metrics. The system may further include an interface configured to reject or accept one or more images based on the score.
In general, one aspect of the subject matter described in this specification can be embodied in a system that includes a data processing apparatus and a memory coupled to the data processing apparatus. The memory has stored thereon instructions that, when executed by the data processing apparatus, cause the data processing apparatus to perform operations including obtaining two or more images of a subject including a view of an eye, wherein the images collectively include a plurality of focal lengths. The operations may further include determining a behavioral metric based at least on the detected movement of the eye as it appears in the plurality of the images. The behavioral metric may be a measure of deviation of the detected movement and timing from an expected movement of the eye. The operations may further include determining a spatial metric based at least on distances from sensors to landmarks appearing in a plurality of the images each having a different respective focal length. The operations may further include determining a reflection metric based at least on the detected change in surface glare or specular reflection pattern on a surface of the eye as the eye appears in the plurality of the images, wherein the reflection metric is a measure of change in glare or specular reflection patches on the surface of the eye. The operations may further include determining a score based at least on the behavioral, spatial, and reflection metrics. The operations may further include rejecting or accepting one or more images based on the score.
In general, one aspect of the subject matter described in this specification can be embodied in a non-transitory computer-readable medium that stores software including instructions executable by a processing device, which upon such execution, cause the processing device to perform operations including obtaining two or more images of a subject including a view of an eye, wherein the images collectively include multiple focal lengths. The operations may further include determining a behavioral metric based at least on the detected movement of the eye as it appears in the plurality of the images. The behavioral metric may be a measure of deviation of the detected movement and timing from an expected movement of the eye. The operations may further include determining a spatial metric based at least on distances from sensors to landmarks appearing in a plurality of the images each having a different respective focal length. The operations may further include determining a reflection metric based at least on the detected change in surface glare or specular reflection pattern on a surface of the eye as the eye appears in the plurality of the images, wherein the reflection metric is a measure of change in glare or specular reflection patches on the surface of the eye. The operations may further include determining a score based at least on the behavioral, spatial, and reflection metrics. The operations may further include rejecting or accepting one or more images based on the score.
These and other embodiments may each optionally include one or more of the following features. Determining the behavioral metric may include determining the onset, duration, velocity, or acceleration of pupil constriction in response to light stimuli. The light stimulus may comprise a flash pulse. The light stimulus may include a change in intensity of light output by a display. Determining the behavioral metric may include determining an onset, duration, or acceleration of a gaze point transition in response to an external stimulus. The external stimulus may include a prompt instructing the user to orient their gaze point. The external stimulus may include an object depicted in a display that moves within the display. The spatial metric may be a measure of deviation of the subject from a two-dimensional plane. The spatial metric may be a measure of deviation of the subject from an expected three-dimensional shape. Determining the spatial metric may include determining a disparity of two or more landmarks appearing in a plurality of the images. A halftone may be detected in an image captured using a reduced dynamic range, and the image may be rejected based at least in part on the detected halftone. Determining the behavioral metric may include detecting blood flow of the eye as the eye appears in a plurality of the images. Determining the score may include determining the score using a trained function approximator. The landmark may be a portion of a face depicted in the image. Determining the reflection metric may include pulsing a flash to illuminate the subject while one or more of the images are captured, detecting occurrences of glare on the eye from the flash in the images, and measuring time differences between the pulses of the flash and the occurrences of corresponding glare on the eye in the images. Determining the reflection metric may include pulsing a flash to illuminate the subject while one or more of the images are captured and detecting a fine three-dimensional texture of the white of the eye by measuring the uniformity of the glare pattern on the eye from the flash in the images. A sensor setting that controls focus may be adjusted to a plurality of different settings during capture of two or more of the images. The images captured with different focus settings may be compared to determine whether these images reflect their respective focus settings. A sensor setting that controls exposure may be adjusted to a plurality of different settings during capture of two or more of the images. The images captured with different exposure settings may be compared to determine whether these images reflect their respective exposure settings. A sensor setting that controls white balance may be adjusted to a plurality of different settings during capture of two or more of the images. The images captured with different white balance settings may be compared to determine whether these images reflect their respective white balance settings.
Particular embodiments of the invention may be implemented to realize none, one, or more of the following advantages. Some embodiments may provide security by reliably authenticating individuals. Some embodiments may prevent spoofing of an eye biometric-based authentication system using an object that is not a real human eye.
The details of one or more embodiments of the invention are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims.
Drawings
Fig. 1 is a diagram of the anatomy of a human eye.
Fig. 2 is a diagram of an example image including a portion of vasculature showing the white of the eye.
FIG. 3 is a diagram of an example image segmented for analysis.
Fig. 4 is a block diagram of an example security system configured to authenticate an individual based at least in part on one or more images of the whites of the eyes.
FIG. 5 is a block diagram of an example online environment.
FIG. 6 is a flow diagram of an example process for authenticating an individual based on one or more images of the whites of the eyes, in which the obtained images used for authentication are checked to confirm that they depict a real eye.
FIG. 7 is a flow diagram of an example process for determining a liveness score for one or more images of an eye.
Fig. 8A is a flow diagram of an example process for determining a behavioral metric based on pupil contraction in response to a light stimulus.
Fig. 8B is a flow diagram of an example process for determining a behavioral metric based on a gaze point transition of an iris in response to an external stimulus.
FIG. 9 shows an example of a computer device and a mobile computer device that may be used to implement the techniques described herein.
Detailed Description
The unique features of the visible vasculature in the white of the eye of an individual can be used to identify or authenticate the individual. For example, an image of the whites of the user's eyes may be obtained and analyzed to compare features of the eyes to a reference record in order to authenticate the user and grant or disallow the user access to a resource. An adversary or intruder may attempt to spoof a security system using this authentication method by presenting something other than a live eye (e.g., an image of an authorized user's face or a plastic model of an authorized user's eye) to the optical sensor of the security system. Some spoofing attempts may be defeated by configuring the security system to analyze the acquired images to distinguish the image of a live eye from the image of a prop.
One or more liveness metrics may be calculated that reflect properties a real eye is expected to exhibit, which may not be exhibited by certain spoofing attempts. For example, stimuli may be applied to a user during an image acquisition process, and the response of the eye depicted in the images may be quantified with a metric and compared to the expected response of a real human eye to those stimuli. In some implementations, the acquired images can be examined at multiple focal distances to determine whether the eye depicted in the images is three-dimensional (e.g., whether the eye has landmarks that appear to be positioned at distances from the sensor that deviate from a single plane). In some embodiments, a metric related to the reflective properties of the eye may be determined. The human eye has unique reflective properties resulting from its three-dimensional shape and its fine surface texture and moisture, which may not be exhibited by many spoof attack props. For example, a flash device may be used to illuminate the subject during part of the image acquisition process, and the timing and quality of the reflection of the flash pulse on the subject's eye may be analyzed to determine whether it is indeed a real human eyeball that is being imaged in real time.
In some implementations, multiple liveness metrics may be combined to determine a liveness score or decision that reflects the likelihood that the image depicts a live eye, rather than, for example, a model image or a two-dimensional image of the eye. For example, a trained function approximator (e.g., a neural network) may be used to determine a liveness score based on a plurality of liveness metrics. The obtained image may then be accepted or rejected based on the liveness score. In some implementations, a spoofing attempt may be reported when the liveness score indicates that the image does not depict a live eye.
Fig. 1 is a diagram of the anatomy of a human eye 100. The figure is a cross-section of the eye with an enlarged view 102 of the anatomy near the limbal boundary of the eye, which separates the colored iris 110 from the surrounding white of the eye. The white of the eye contains complex vascular structures that are not only easily visible and scannable from outside the eye, but are also unique and vary from individual to individual. Thus, these vascular structures of the white of the eye, primarily the vasculature of the conjunctiva and the outer layers of the sclera, can be scanned and advantageously used as a biometric feature. This biometric feature can be used to authenticate a particular individual, or to identify an unknown individual.
The white of the eye has several layers. The sclera 120 is an opaque fibrous protective layer of the eye that contains collagen and elastic fibers. The sclera 120 is covered by an outer scleral layer 130, the outer scleral layer 130 having a substantial number of blood vessels and veins therethrough and thereon. The outer scleral layer 130 is covered by a bulbar conjunctiva 140, the bulbar conjunctiva 140 being a thin transparent membrane that interfaces with the eyelid 150 or with the environment when the eyelid is open. Blood vessels and veins pass through all of these layers of the white of the eye and can be detected in images of the eye. The eye also has eyelashes 160, which eyelashes 160 may sometimes obscure portions of the white of the eye in an image.
Fig. 2 is a diagram of an example image 200 including a portion of vasculature showing the white of an eye. This image 200 may be captured with a sensor (e.g., a camera) integrated into a computing device, such as a smartphone, tablet, television, laptop, or personal computer. For example, the user may be prompted by a display or audio prompt to look to the left when capturing an image, thus exposing a larger area of the white of the eye to the right of the iris to the field of view of the sensor. Similarly, the user may be prompted to look right, up, down, forward, etc. when capturing an image. The example image includes a view of the iris 220 with the pupil 210 at its center. The iris 220 extends to the limbus boundary 225 of the eye. The white 230 of the eye is outside the limbus boundary 225 of the eye. The coarse vasculature 240 of the white of the eye is visible in the image 200. This vasculature 240 may be specific to an individual. In some embodiments, the unique features of the vasculature 240 may be used as a basis for identifying, verifying, or authenticating individual users.
Fig. 3 is a diagram of an example image 300 including portions of vasculature showing the whites of both eyes, segmented for analysis. Captured image 310 may be obtained in a variety of ways. The captured image 310 may be preprocessed and segmented to isolate regions of interest within the image and enhance the view of vasculature in the white of the eye. For example, the region of interest may be a tiled portion that forms a grid covering some or all of the whites of the eyes. The portion 320 to the left of the iris corresponding to the white of the right eye may be isolated, for example, by identifying the limbal boundary and the edges of the eyelids. Similarly, a portion 322 to the left of the iris corresponding to the left eye white may be isolated. Preprocessing may be used to enhance the view of the vasculature in this region, for example, by selecting from the image data a component color that maximizes the contrast between the vasculature of the white of the eye and the surrounding white portions of the eye. In some embodiments, these portions 320, 322 of the image may be further segmented into tiles of grids 330, 332 that divide the exposed surface area of the whites of the eyes into smaller regions for analysis purposes. Features of the vasculature in these regions of interest may be used to identify, verify or authenticate an individual.
Fig. 4 is a block diagram of an example security system 400 configured to authenticate an individual based at least in part on one or more images of the white of an eye 410. A user of the security system 400 may present their eye 410 to the light sensor 420. In this way, one or more images of the white of the eye 410 may be captured. Digital cameras, three-dimensional (3D) cameras, and light field sensors are examples of light sensors that may be used. The light sensor 420 may use a variety of technologies, such as digital Charge Coupled Device (CCD) or Complementary Metal Oxide Semiconductor (CMOS). In some implementations, the user may be prompted via a message displayed on display 424 to assume certain poses to expose portions of the white of the eye 410 and facilitate image acquisition. For example, the user may be prompted to orient their gaze point so as to rotate the iris of their eye 410 to the left, to the right, to the upper left, and to the upper right. In some embodiments, not shown, the user may be prompted to assume certain poses by a message played through a speaker, by an indicator light (e.g., an LED), or not at all.
In some implementations, the sensor 420 can be configured to detect when the eye 410 has been properly positioned in the field of view of the sensor. Alternatively, software or hardware implemented on computing device 430 may analyze one or more images generated by light sensor 420 to determine whether eye 410 has been properly positioned. In some implementations, the user can manually indicate when the eye 410 is properly positioned through a user interface (e.g., a button, keyboard, keypad, touchpad, or touch screen).
An authentication module 440 implemented on computing device 430 may obtain one or more images of the whites of the eyes via light sensor 420. In some implementations, the computing device 430 is integrated with the light sensor 420 or electronically coupled to the light sensor 420. In some implementations, the computing device 430 may communicate with the light sensor 420 through a wireless interface (e.g., an antenna).
The authentication module 440 processes images obtained via the light sensor 420 to control access to the security device 450. For example, the authentication module 440 may implement the authentication process described with respect to fig. 6. In some implementations, the security device 450 may include an actuator 460 (e.g., a locking mechanism) that effects access control instructions from the authentication module 440.
The computing device may integrate or interface with the security device 450 in a variety of ways. For example, security device 450 may be an automobile, light sensor 420 may be a camera integrated into the steering wheel or dashboard of the automobile, and computing device 430 may be integrated into the automobile and electrically connected to the camera and ignition lock system acting as security actuator 460. The user may present a view of the whites of their eyes to the camera in order to be verified as an authorized driver of the automobile and start the engine.
In some implementations, the security device 450 may be a real estate key box, the light sensor 420 may be a camera integrated with the user's mobile device (e.g., a smartphone or tablet device), and the processing of the authentication module 440 may be performed in part by the user's mobile device and in part by a computing device integrated with the key box that controls the power locking mechanism. Two computing devices may communicate via a wireless interface. For example, a user (e.g., a real estate broker giving a property show) may use a camera on their mobile device to obtain one or more images and submit data to a key box based on the images in order to be authenticated as an authorized user and authorized to use keys deposited in the key box.
In some embodiments, the security device 450 is a gate or door that controls access to a property. The light sensor 420 may be integrated into the door or gate or positioned on a wall or fence near the door or gate. The computing device 430 may be positioned near the light sensor 420 and the power locking mechanism in the door or gate acting as the actuator 460, and may communicate with the light sensor 420 and the power locking mechanism via a wireless interface. In some implementations, the security device 450 may be a rifle, and the light sensor 420 may be integrated with a scope attached to the rifle. The computing device 430 may be integrated into the stock of the rifle, and may be electrically connected to the light sensor 420 and a trigger or striker locking mechanism that acts as an actuator 460. In some embodiments, security apparatus 450 may be a piece of rental equipment (e.g., a bicycle).
Computing device 430 may include a processing device 432 (e.g., as described with respect to fig. 9) and a machine-readable repository or database 434. In some implementations, the machine-readable repository may include flash memory. The machine-readable repository 434 may be used to store one or more reference records. A reference record may include data derived from one or more images of the whites of the eyes of a registered or authorized user of the security device 450. In some implementations, the reference record includes a complete reference image. In some implementations, the reference record includes features extracted from the reference image. In some implementations, the reference record includes encrypted features extracted from the reference image. In some implementations, the reference record includes an identification key encrypted by features extracted from the reference image. To establish a reference record for a new user, an enrollment or registration process may be performed. The enrollment process may include capturing one or more reference images of the whites of the eyes of the newly enrolled user. In some implementations, the light sensor 420 and the computing device 430 of the security system 400 can be used to perform the enrollment process.
FIG. 5 is a block diagram showing an example of a network environment 500 in which techniques described herein may be implemented. The network environment 500 includes computing devices 502, 504, 506, 508, 510 configured to communicate with a first server system 512 and/or a second server system 514 via a network 511. Computing devices 502, 504, 506, 508, 510 have respective users 522, 524, 526, 528, 530 associated therewith. First and second server systems 512, 514 each include a computing device 516, 517 and a machine-readable repository or database 518, 519. Example environment 500 may include thousands of websites, computing devices, and servers not shown.
The network 511 may include a large computer network, examples of which include a Local Area Network (LAN), a Wide Area Network (WAN), a cellular network, or a combination thereof, connecting a number of mobile computing devices, fixed computing devices, and server systems. The networks included in network 511 may provide communication under various modes or protocols, examples of which include Transmission Control Protocol/Internet Protocol (TCP/IP), Global System for Mobile communications (GSM) voice calls, Short Message Service (SMS), Enhanced Messaging Service (EMS), or Multimedia Messaging Service (MMS) messaging, Ethernet, Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Personal Digital Cellular (PDC), Wideband Code Division Multiple Access (WCDMA), CDMA2000, or General Packet Radio Service (GPRS), among others. Communication may occur through a radio-frequency transceiver. Additionally, short-range communications may occur, for example, using BLUETOOTH, WiFi, or other such transceiver systems.
Computing devices 502, 504, 506, 508, 510 enable respective users 522, 524, 526, 528, 530 to access and view documents, such as web pages included in a website. For example, a user 522 of the computing device 502 may view a web page using a web browser. The web page may be provided to computing device 502 by server system 512, server system 514, or another server system (not shown).
In the example environment 500, the computing devices 502, 504, 506 are illustrated as desktop computing devices, the computing device 508 is illustrated as a laptop computing device 508, and the computing device 510 is illustrated as a mobile computing device. It is noted, however, that the computing devices 502, 504, 506, 508, 510 may include, for example, a desktop computer, a laptop computer, a handheld computer, a television having one or more processors embedded therein and/or coupled thereto, a tablet computing device, a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a camera, a smartphone, an Enhanced General Packet Radio Service (EGPRS) mobile phone, a media player, a navigation device, an electronic messaging device, a game console, or a combination of two or more of these data processing devices or other suitable data processing devices. In some implementations, the computing device can be included as part of a motor vehicle (e.g., an automobile, an emergency vehicle (e.g., a fire truck, an ambulance), a bus).
A user interacting with a computing device 502, 504, 506, 508, 510 may interact with a secure transaction service 523 hosted, for example, by a server system 512, by authenticating itself and issuing instructions or commands over a network 511. The secure transaction may include, for example, an e-commerce purchase, a financial transaction (e.g., an online banking transaction, a credit or bankcard transaction, a membership reward point redemption), or an online vote. The secure transaction service may include an authentication module 525 that coordinates authentication of the user from the secure server side of the transaction. In some implementations, authentication module 525 may receive image data from a user device (e.g., computing devices 502, 504, 506, 508, 510) that includes one or more images of the eyes of a user (e.g., users 522, 524, 526, 528, 530). An authentication module may then process the image data to authenticate the user by determining whether the image data matches a reference record of a recognized user identity, the reference record having been previously established based on image data collected during an enrollment session.
In some embodiments, a user who has submitted a service request may be redirected to an authentication module 540 running on a separate server system 514. The authentication module 540 may maintain reference records for registered or enrolled users of the secure transaction service 523, and may also include reference records for users of other secure transaction services. The authentication module 540 may establish secure sessions with various secure transaction services (e.g., the secure transaction service 523) using encrypted network communications (e.g., using public key encryption protocols) to indicate to the secure transaction service whether the user has been authenticated as a registered or enrolled user. Much like authentication module 525, authentication module 540 may receive image data from a requesting user's computing device (e.g., computing devices 502, 504, 506, 508, 510) and may process the image data to authenticate the user. In some implementations, the authentication module may determine a liveness score for the images received from a user, and may accept or reject the images based on the liveness score. When the images are rejected as a spoofing attempt presenting something other than a real person's eye, the authentication module 540 may send a network communication message to report the spoofing attempt to the secure transaction service 523 or an associated authority.
The authentication module 540 may be implemented as software, hardware, or a combination of software and hardware executing on a processing apparatus, such as one or more computing devices, such as the computer system illustrated in fig. 9.
A user device, such as computing device 510, may include an authentication application 550. The authentication application 550 may facilitate authentication of a user as a registered or enrolled user identity in order to access a secure service (e.g., the secure transaction service 523) via the network 511. For example, the authentication application 550 may be a mobile application or another type of client application for interacting with a server-side authentication module (e.g., authentication module 540). The authentication application 550 may drive a sensor (e.g., a camera connected to or integrated with the user computing device) to capture one or more images of the user (e.g., the user 530), including a view of the whites of the user's eyes. The authentication application 550 may prompt the user (e.g., via a display or speaker) to pose for image capture. For example, the user may be prompted to face the sensor and orient their gaze point to the left or right to expose a large portion of the white of the eye to the sensor.
In some implementations, the authentication application 550 transmits the captured image data to an authentication module (e.g., authentication module 525 or 540) on a remote server (e.g., server system 512 or 514) via the network 511. Collecting image data from a user may facilitate enrollment and establishment of a reference record for the user. Collecting image data from the user may also facilitate verifying the user's identity against a reference record.
In some implementations, additional processing of the image data may be performed by the authentication application 550 for authentication purposes, and the results of that processing may be transmitted to an authentication module (e.g., authentication module 525 or 540). In this way, the authentication functions may be distributed between the client-side and server-side processes in a manner appropriate to the particular application. For example, in some implementations, the authentication application 550 determines a liveness score for the captured images and rejects any images having liveness scores indicative of a spoofing attack. If the liveness score indicates a live eye, the image data may be transmitted to a server-side authentication module (e.g., authentication module 525 or 540) for further analysis based on the accepted images.
In some embodiments, the authentication application accesses a reference record of the user's identity and performs a full authentication process, then reports the result (e.g., whether the user was accepted or rejected) to the server-side authentication module.
The authentication application 550 may be implemented as software, hardware, or a combination of software and hardware executing on a processing apparatus, such as one or more computing devices, such as the computer system illustrated in fig. 9.
FIG. 6 is a flow diagram of an example process 600 for authenticating an individual based on one or more images of the white of the eye. A liveness score for the obtained image is determined and the image is accepted or rejected using the liveness score. When an image with a real human eye is detected and accepted, the image is further analyzed to determine a match score by extracting features from the image and comparing the features to a reference record. The user is then accepted or rejected based on the match score.
For example, the process 600 may be implemented by the authentication module 440 in the computing device 430 of FIG. 4. In some implementations, the computing device 430 is a data processing apparatus that includes one or more processors configured to perform the actions of the process 600. For example, the data processing apparatus may be a computing device (e.g., as illustrated in fig. 9). In some implementations, the process 600 can be implemented in whole or in part by the authentication application 550, which is executed by a user computing device (e.g., computing device 510). For example, the user computing device may be a mobile computing device (e.g., mobile computing device 950 of fig. 9). In some embodiments, process 600 may be implemented in whole or in part by authentication module 540, which is executed by a server system (e.g., server system 514). In some implementations, the server system 514 is a data processing apparatus that includes one or more processors configured to perform the actions of the process 600. For example, the data processing apparatus may be a computing device (e.g., as illustrated in fig. 9). In some implementations, a computer-readable medium may include instructions that, when executed by a computing device (e.g., a computer system), cause the device to perform the actions of process 600.
One or more images of the eye are obtained 602. The images include a view of a portion of the vasculature of the eye outside of a limbus boundary of the eye. The obtained images may be monochrome or represented in various color spaces (e.g., RGB, SRGB, HSV, HSL, or YCbCr). In some implementations, an image can be obtained using a light sensor (e.g., a digital camera, a 3D camera, or a light field sensor). The sensor may be sensitive to light in various wavelength ranges. For example, the sensor may be sensitive to the visible spectrum of light. In some embodiments, the sensor is paired with a flash or flashlight that can be pulsed to illuminate objects in the view of the sensor. The capture of the images may be synchronized with or time-locked to the pulsing of the flash. In some implementations, the sensor captures a series of images that can be used to track the motion of an object within the field of view of the sensor. The sensor may include one or more settings that control image capture (e.g., focus, flash intensity, exposure, and white balance). The images may collectively include multiple focal lengths. For example, a series of images may be captured, each captured with a different focal length setting of the sensor, and/or some sensors (e.g., light field sensors) may capture images focused at multiple distances from the sensor. In some implementations, one or more images may be obtained 602 by receiving them through a network interface (e.g., a network interface of the server system 514).
A liveness score for the one or more images may then be determined 604. In some embodiments, image data elements (e.g., voxels, pixels, rays, or red, green, or blue channel values) are input directly to a trained function approximator, which outputs a liveness score. The function approximator may be trained using data corresponding to training images of both real eyes and spoof props paired with ideal scores (e.g., 1 for a real eye and 0 for a spoof prop). The function approximator or classifier models a mapping from input data (i.e., training image data or features) to output data (i.e., a resulting liveness score or binary decision) with a set of model parameters. The model parameter values are selected using a training algorithm applied to the training data. For example, the function approximator may be based on the following models: linear regression, Volterra series, Wiener series, radial basis functions, kernel methods, polynomial methods, piecewise linear models, Bayesian classifiers, k-nearest neighbor classifiers, neural networks, support vector machines, or fuzzy function approximators. Other models are possible. In some implementations, the liveness score may be binary.
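As an illustration of how such a trained function approximator might be wired up, the following is a minimal sketch assuming scikit-learn and a logistic-regression model; the metric names, training values, and the 0.5 threshold are illustrative assumptions rather than details from this specification.

```python
# Minimal sketch: train a function approximator that maps liveness metrics to
# a liveness score, assuming scikit-learn is available. The metric layout and
# training values below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [behavioral_metric, spatial_metric, reflectance_metric]
# Label 1 = real eye, 0 = spoof prop (the "ideal scores" used for training).
X_train = np.array([
    [0.05, 0.10, 0.08],   # real eye: small deviations from expected behavior
    [0.02, 0.07, 0.12],
    [0.90, 0.85, 0.95],   # spoof prop: large deviations
    [0.75, 0.92, 0.80],
])
y_train = np.array([1, 1, 0, 0])

model = LogisticRegression()
model.fit(X_train, y_train)

def liveness_score(metrics):
    """Return the estimated probability that the metrics came from a live eye."""
    return float(model.predict_proba(np.asarray(metrics).reshape(1, -1))[0, 1])

score = liveness_score([0.04, 0.12, 0.10])
accepted = score > 0.5  # threshold check as in step 606
print(score, accepted)
```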
In some implementations, the liveness score is determined 604 based on one or more liveness metrics, which in turn are determined based on the obtained images. Some examples of this process are described with respect to fig. 7.
For example, the liveness score may be determined 604 by the authentication module 440, the authentication application 550, the authentication module 525, or the authentication module 540.
The liveness score is checked 606 to determine if the image is likely to contain a view of a real eye. In some implementations, the liveness score may be compared to a threshold.
If the liveness score indicates a low likelihood of a live eye and, therefore, a high likelihood of a spoofing attack, then the one or more images are rejected 608. In some implementations, a spoofing attack may then be reported 610. In some embodiments, the spoofing attack is reported 610 by a display or speaker (e.g., with an alarm sound or a flashing display). In some implementations, a spoofing attack is reported 610 by transmitting one or more messages over a network using a network interface. The user may then be rejected 630 and not allowed access to the secure device or service.
In some implementations (not shown), a check may be performed to verify that the obtained images were captured from a particular sensor, and that the particular sensor has not been bypassed by the submission of spoofed image data. For example, during image capture, one or more sensor configuration settings may be adjusted to assume different settings during capture of two or more of the images. It is expected that these different settings are reflected in the obtained image data. If the image data fails to change between images captured with different settings, this may indicate that the sensor has been bypassed by a spoofing attack. For example, sensor configuration settings that control focus, exposure time, or white balance may be adjusted in this manner. If no corresponding change in the acquired image data is detected, the acquired images may be rejected 608.
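As a rough illustration of this check, the sketch below compares the mean brightness of frames captured with deliberately different exposure times; the brightness statistic and the threshold are assumptions for illustration, not values prescribed by this specification.

```python
# Sketch of the sensor-bypass check: images captured with deliberately
# different exposure settings should differ measurably; if they do not, the
# sensor may have been bypassed with replayed image data.
import numpy as np

def settings_reflected(images, exposures, min_brightness_delta=5.0):
    """images: list of grayscale arrays captured with the given exposure times."""
    # Sort captures by exposure and compare mean brightness of consecutive pairs.
    order = np.argsort(exposures)
    brightness = [float(np.mean(images[i])) for i in order]
    deltas = np.diff(brightness)
    # A longer exposure should yield a measurably brighter frame (up to saturation).
    return bool(np.all(deltas > min_brightness_delta))

# If settings_reflected(...) returns False, the images may be rejected (step 608).
```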
If the liveness score indicates a high likelihood of depicting a live eye in the image, then one or more images are accepted 616 and subjected to further analysis to complete the verification process.
The one or more images may be segmented 620 to identify regions of interest that include the best view of vasculature in the white of the eye. In some implementations, anatomical landmarks (e.g., the iris, its center and limbal boundaries, the corners of the eye, and the edges of the eyelids) may be identified in the one or more images. Regions of interest within the image may be identified and selected based on their location relative to the identified anatomical landmarks. For example, the region of interest may be located to the left, right, above, or below the iris in the white of the eye. In some implementations, the selected regions of interest are tiled to form a grid that covers a larger portion of the white of the eye. In some implementations, selected regions of the image are discontinuous (e.g., adjacent regions may overlap, or adjacent regions may have space between them). The selected region of interest may correspond to a region of interest selected from a reference image on which the data in the reference record is based.
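The following is a small sketch of the tiling step, assuming the segmented region of interest and a sclera mask are already available as arrays; the grid dimensions and the 50% coverage rule are illustrative assumptions.

```python
# Sketch: split a segmented region of interest (e.g., the white of the eye to
# one side of the iris) into a grid of tiles for analysis. Tile counts and the
# masking convention are assumptions for illustration.
import numpy as np

def tile_region(region, mask, rows=4, cols=5):
    """region: 2D image crop; mask: boolean array, True where sclera is visible."""
    tiles = []
    h, w = region.shape[:2]
    for r in range(rows):
        for c in range(cols):
            ys = slice(r * h // rows, (r + 1) * h // rows)
            xs = slice(c * w // cols, (c + 1) * w // cols)
            tile, tile_mask = region[ys, xs], mask[ys, xs]
            # Keep only tiles with enough exposed sclera to be useful.
            if tile_mask.mean() > 0.5:
                tiles.append((tile, tile_mask))
    return tiles
```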
In some embodiments, the corner of the eye (canthus) is found by fitting curves to the detected portions of the eyelids over the sclera, and then extrapolating and finding the intersection of those curves. If one intersection (canthus) cannot be found because not enough sclera is visible (e.g., due to gaze direction), a template derived from the same canthus region in a photograph taken with the opposite gaze direction can be applied to the neighborhood of the problematic canthus in the image at hand, and the maximally correlated location can be labeled as the canthus.
In some implementations, the eyelids are found by an adaptive thresholding method that finds the white of the eye from the image, which borders the eyelids. The sclera mask itself may be corrected by morphological operations (e.g., convex hull) to remove aberrations.
In some implementations, the limbus boundary is found from the sclera mask as the location where the sclera ends, since it terminates at the limbus boundary of the iris.
In some embodiments, the center of the iris is found via a variety of methods. If the iris is light in color, the center of the pupil can be found and used as the center of the iris. If the iris is too dark, the center of an ellipse fitted to the limbus boundary is found, or the center is determined as the focal point of normal rays (i.e., lines perpendicular to tangents of the limbus boundary) converging around the center of the iris, or a combination of the above methods is used.
The image region may be preprocessed 622 to enhance the view of the vasculature within the image. In some implementations, the pre-processing 622 includes color image enhancement and Contrast Limited Adaptive Histogram Equalization (CLAHE), which enhances the contrast of the intensity image. CLAHE operates in small regions of the image, called tiles. The contrast of each tile is enhanced such that the output histogram approximately matches the histogram specified by a particular distribution (e.g., a uniform distribution, an exponential distribution, or a rayleigh distribution). Adjacent tiles are then combined using bilinear interpolation to eliminate the artificially created boundaries. In some implementations, the image may be enhanced by selecting one of the red, green, or blue components that has the best contrast between the blood vessels and the background. The green component may be preferred as it may provide the best contrast between the blood vessels and the background.
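A minimal sketch of this preprocessing step is shown below, assuming OpenCV is available; the clip limit and tile grid size are illustrative defaults rather than values from this specification.

```python
# Sketch of preprocessing step 622 using OpenCV: pick the green channel (often
# the best vessel/background contrast) and apply CLAHE tile-wise.
import cv2
import numpy as np

def enhance_vasculature(bgr_region):
    """bgr_region: uint8 color crop of the white of the eye."""
    green = bgr_region[:, :, 1]  # OpenCV uses BGR channel order
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)  # contrast-limited adaptive histogram equalization
    return enhanced
```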
In some implementations, the pre-processing 622 includes applying a multi-scale enhancement filtering scheme to enhance the intensity of the image, thereby facilitating detection and subsequent extraction of features of the vascular structure. The parameters of the filter may be determined empirically to account for variations in the girth of the vessel. The algorithm used can have good curve sensitivity, good curve specificity and suppress objects of other shapes. The algorithm may be based on the second derivative of the image. First, since the second derivative is sensitive to noise, the image segment is convolved with a gaussian function. The parameter σ of the gaussian function may correspond to the thickness of the blood vessel. Next, for each image data element, a hessian matrix may be established and eigenvalues λ l and λ 2 may be calculated. In each hessian matrix, the matrix ridge is defined as the point where the image has an extreme value in the direction of curvature. The curvature direction is the eigenvector of the second derivative of the image, which corresponds to the largest absolute eigenvalue λ. The sign of the eigenvalue determines whether it is a local minimum lambda >0 or a maximum lambda < 0. The calculated characteristic values are then used to filter the blood line with the following equation:
I_line(λ1, λ2) = |λ1| - |λ2| if λ1 < 0, and I_line(λ1, λ2) = 0 if λ1 ≥ 0
The diameter of the blood vessels varies, but the algorithm assumes that the diameter lies within an interval [d0, d1]. Gaussian smoothing filters may be used over the scale range [d0/4, d1/4]. This filtering may be repeated N times based on the following smoothing scales:
σ1 = d0/4, σ2 = r*σ1, σ3 = r^2*σ1, ..., σN = r^(N-1)*σ1 = d1/4
The final output may be the maximum of the outputs from the individual filters at all N scales.
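The following is a sketch of this multi-scale Hessian line filter using NumPy and SciPy; the default scale values and the use of scipy.ndimage Gaussian derivatives are assumptions made for illustration, while the per-pixel response follows the I_line equation above.

```python
# Sketch of the multi-scale Hessian-based line filter described above.
import numpy as np
from scipy.ndimage import gaussian_filter

def line_response(image, sigma):
    # Gaussian-smoothed second derivatives (Hessian entries) at scale sigma.
    ixx = gaussian_filter(image, sigma, order=(2, 0))
    iyy = gaussian_filter(image, sigma, order=(0, 2))
    ixy = gaussian_filter(image, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 Hessian, computed per pixel.
    half_trace = (ixx + iyy) / 2.0
    root = np.sqrt(((ixx - iyy) / 2.0) ** 2 + ixy ** 2)
    e1, e2 = half_trace + root, half_trace - root
    # lambda1 = eigenvalue with the larger absolute value, lambda2 the other.
    swap = np.abs(e2) > np.abs(e1)
    lam1, lam2 = np.where(swap, e2, e1), np.where(swap, e1, e2)
    # I_line = |lambda1| - |lambda2| where lambda1 < 0, else 0.
    return np.where(lam1 < 0, np.abs(lam1) - np.abs(lam2), 0.0)

def multiscale_vessel_filter(image, d0=4.0, d1=16.0, n_scales=4):
    image = image.astype(np.float64)
    sigmas = np.geomspace(d0 / 4.0, d1 / 4.0, n_scales)  # sigma_1 ... sigma_N
    responses = [line_response(image, s) for s in sigmas]
    return np.max(responses, axis=0)  # maximum over all N scales
```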
Features are determined 624 for each image region that reflect structures or characteristics of the vasculature visible in that region of the user's eye. In some embodiments, a minutiae detection method may be used to extract features of the user's vasculature. An example of a minutiae detection process is described in U.S. patent No. 7,327,860.
In some implementations, the features may be determined 624 in part by applying a set of filters to the image regions that correspond to the texture features of those image regions. For example, the features may be determined in part by applying a set of complex Gabor filters at various angles to the image. The parameters of the filters may be determined empirically to account for variations in the spacing, orientation, and girth of the blood vessels. The texture features of the image may be measured as the amount of sharply visible vasculature in the region of interest. This quality can be determined as the ratio of the area of the sharply visible vasculature to the area of the region of interest. The phase of the Gabor-filtered image, when binarized using a threshold, may facilitate detection and reveal sharply visible vasculature.
When the Gabor filter kernel is configured with a sigma of 2.5 pixels, a frequency of 6, and a gamma of 1, the phase of the complex Gabor-filtered image reflects the vessel patterns at different angles. The choice of frequency may depend on the distance between the vessels, which in turn depends on the resolution and the distance between the image acquisition system and the subject. These parameters may be kept invariant across images. For example, the kernel parameters may be derived for eye images captured using a particular sensor (e.g., a rear camera on a smartphone) at a distance of 6 to 12 centimeters from the eye, and the segmented scleral region may be resized to a resolution of, for example, 401x501 pixels for analysis. The visible surface vasculature of the eye may be spread in all directions over the white of the eye. For example, the Gabor kernels may be oriented at six different angles (0, 30, 60, 90, 120, and 150 degrees). The phase of the Gabor-filtered image may vary from -pi to +pi radians. Phase values above 0.25 radians and below -0.25 radians may correspond to vascular structures. To binarize the phase image using thresholding, all phase values above 0.25 or below -0.25 may be set to 1, and the remaining values to 0. This may result in sharp vasculature structures in the corresponding phase images. This operation may be performed for the images produced by applying all six Gabor kernels at their different angles. All six binarized images may be added to reveal a fine and crisp vascular structure. In some implementations, a vector of the elements of the binarized phase images can be used as a feature vector for comparing the image to a reference record. In some implementations, differences in texture features between regions of interest of an image may be used as feature vectors. The sum of all 1's in a binarized image area divided by the area of the region of interest may reflect the extent of the visible vasculature.
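A sketch of this Gabor phase feature is shown below; the hand-rolled complex Gabor kernel, the kernel size, the reading of "frequency of 6" as a wavelength of 6 pixels, and the way the six binarized responses are combined into a single vascularity ratio are assumptions made for illustration.

```python
# Sketch of the Gabor phase feature: complex Gabor kernels at six orientations
# (sigma 2.5 px, wavelength 6 px, gamma 1), phase thresholded at +/-0.25 rad.
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(sigma=2.5, wavelength=6.0, gamma=1.0, theta=0.0, size=21):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * 2 * np.pi * xr / wavelength)

def binarized_phase_features(region, mask, angles_deg=(0, 30, 60, 90, 120, 150)):
    """region: float grayscale crop of the sclera; mask: True where sclera is visible."""
    combined = np.zeros_like(region, dtype=np.float64)
    for deg in angles_deg:
        k = gabor_kernel(theta=np.deg2rad(deg))
        # Convolve with the real and imaginary parts to get a complex response.
        response = convolve(region, k.real) + 1j * convolve(region, k.imag)
        phase = np.angle(response)                    # ranges from -pi to +pi
        combined += (np.abs(phase) > 0.25) & mask     # 1 where phase suggests a vessel
    # Fraction of the region of interest flagged as visible vasculature.
    visible_vasculature = np.count_nonzero(combined) / max(np.count_nonzero(mask), 1)
    return combined, visible_vasculature
```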
A match score is determined 626 based on the features and corresponding features from the reference record. The reference record may include data based at least in part on one or more reference images captured during the user's enrollment or registration process. In some implementations, the match score may be determined 626 as a distance (e.g., Euclidean distance, correlation coefficient, modified Hausdorff distance, Mahalanobis distance, Bregman divergence, cosine similarity, Kullback-Leibler distance, or Jensen-Shannon divergence) between feature vectors extracted from the one or more obtained images and feature vectors from the reference record. In some implementations, the match score may be determined 626 by inputting features extracted from the one or more obtained images and features from the reference record to a trained function approximator.
In some embodiments, a quality-based fused match score is determined 626 based on match scores of multiple images of the same vasculature. In some implementations, the match scores of the multiple images are combined by adding the match scores together in a weighted linear combination with weights that respectively depend on quality scores determined for each of the multiple images. Other examples of techniques that may be used to combine the match scores of multiple images based on their respective quality scores include hierarchical mixtures, sum rules, product rules, gated fusion, Dempster-Shafer combination, and stacked generalization, among others.
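As a concrete illustration, the sketch below computes a cosine-similarity match score and a quality-weighted linear fusion of per-image scores; both the similarity measure and the weight normalization are just one of the options listed above, chosen here as an assumption.

```python
# Sketch of step 626: a match score from feature-vector similarity and a
# quality-weighted fusion of per-image match scores.
import numpy as np

def match_score(features, reference_features):
    a = np.asarray(features, dtype=np.float64)
    b = np.asarray(reference_features, dtype=np.float64)
    # Cosine similarity between the extracted features and the reference record.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def fused_match_score(scores, quality_scores):
    # Weighted linear combination, weights proportional to per-image quality.
    w = np.asarray(quality_scores, dtype=np.float64)
    w = w / (w.sum() + 1e-12)
    return float(np.dot(w, np.asarray(scores, dtype=np.float64)))

# Example: three images of the same vasculature with differing quality.
fused = fused_match_score([0.81, 0.65, 0.77], quality_scores=[0.9, 0.4, 0.7])
```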
In some implementations, the match score is determined 626 by an authentication module (e.g., the authentication module 440 running on the computing device 430).
The match score may be checked 628 to determine whether a match exists between the one or more obtained images and the reference record. For example, the match score may be compared to a threshold. A match may reflect a high likelihood that the user whose eyes are depicted in the one or more obtained images is the same individual associated with the reference record.
If there is no match, the user may be rejected 630. As a result, the user may not be allowed access to the secure device or service (e.g., secure device 450 or secure transaction service 523). In some implementations, the user may be notified of the rejection 630 by a message shown on a display or played through a speaker. In some embodiments, the rejection may be effected by transmitting a message over the network reflecting the status of the rejected user. For example, the authentication module 540, upon rejecting the user 530, may transmit a rejection message to the secure transaction service 523 using the network interface of the server system 514. The authentication module 540 in this case may also send a rejection message to the user computing device 510.
If there is a match, then the user may be accepted 632. As a result, the user may be granted access to a secure device or service (e.g., secure device 450 or secure transaction service 523). In some implementations, the user may be notified of the acceptance 632 by a message shown on a display or played through a speaker. In some embodiments, the acceptance may be effected by transmitting a message over the network reflecting the status of the accepted user. For example, the authentication module 540, upon accepting the user 530, may transmit an acceptance message to the secure transaction service 523 using the network interface of the server system 514. The authentication module 540 in this case may also send an acceptance message to the user computing device 510.
FIG. 7 is a flow diagram of an example process 700 for determining a liveness score for one or more images of an eye. One or more liveness metrics are determined 710 for the image, and a liveness score is determined 730 based on the one or more liveness metrics.
For example, process 700 may be implemented by the authentication module 440 in the computing device 430 of FIG. 4. In some implementations, computing device 430 is a data processing apparatus that includes one or more processors configured to perform the actions of process 700. For example, the data processing apparatus may be a computing device (e.g., as illustrated in fig. 9). In some implementations, the process 700 may be implemented in whole or in part by the authentication application 550, which is executed by a user computing device (e.g., computing device 510). For example, the user computing device may be a mobile computing device (e.g., mobile computing device 950 of fig. 9). In some embodiments, process 700 may be implemented in whole or in part by authentication module 540, which is executed by a server system (e.g., server system 514). In some implementations, the server system 514 is a data processing apparatus that includes one or more processors configured to perform the actions of the process 700. For example, the data processing apparatus may be a computing device (e.g., as illustrated in fig. 9). In some implementations, a computer-readable medium may include instructions that, when executed by a computing device (e.g., a computer system), cause the device to perform the actions of process 700.
The process 700 begins 702 when one or more images are received for processing. For example, the one or more images may be encoded as two-, three-, or four-dimensional arrays of image data elements (e.g., pixels, voxels, rays, or red, green, or blue channel values).
One or more liveness metrics may then be determined 710 based on the one or more images. In this example, a behavioral metric is determined 712 based on the detected movement of the eye as it appears in the plurality of images. The behavioral metric may be a measure of deviation of the detected movement and timing from an expected movement of the eye.
In some implementations, a light stimulus (e.g., a flash pulse or a change in the brightness of an LCD display) is applied to the subject while the images are captured. In response to these light stimuli, the pupils of a real human eye are expected to constrict to accommodate the change in illumination. Furthermore, the pupil is expected to constrict in a particular manner over time, with an onset time that depends on the user's reaction time, a duration of the constriction movement required to reach the new steady-state pupil diameter, an average velocity of constriction, and a particular acceleration profile of the constriction motion. By examining the sequence of images captured before and after the start of the light stimulus, one or more parameters of the detected motion may be determined and compared to one or more parameters of the expected motion. Substantial deviation from the expected motion in response to the light stimulus may indicate that the subject in the camera view is not a real human eye and that a spoofing attack has occurred. An example of this implementation is described with respect to fig. 8A.
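A simplified sketch of how such constriction parameters might be extracted and scored is shown below; the threshold-based onset and settling detection, the expected-value comparison, and all numeric constants are illustrative assumptions rather than details from this specification.

```python
# Sketch: estimate onset, duration, and average velocity of pupil constriction
# from per-frame pupil radii captured around a flash pulse, then score the
# deviation from expected parameters.
import numpy as np

def constriction_parameters(timestamps, radii, stimulus_time):
    """Simplified sketch; assumes frames both before and after the stimulus."""
    t = np.asarray(timestamps, dtype=np.float64)
    r = np.asarray(radii, dtype=np.float64)
    baseline = np.median(r[t < stimulus_time])        # pre-stimulus pupil radius
    final = np.median(r[t > t[-1] - 0.5])             # last ~0.5 s = new steady state
    drop = baseline - final
    started = np.where((t >= stimulus_time) & (r < baseline - 0.1 * drop))[0]
    settled = np.where((t >= stimulus_time) & (r < final + 0.1 * drop))[0]
    if started.size == 0 or settled.size == 0 or drop <= 0:
        return np.inf, np.inf, 0.0                     # no constriction detected
    onset = t[started[0]] - stimulus_time              # reaction latency
    duration = max(t[settled[0]] - t[started[0]], 1e-6)
    velocity = drop / duration                         # average constriction speed
    return onset, duration, velocity

def behavioral_metric(measured, expected, scale):
    # Normalized deviation of (onset, duration, velocity) from expected values.
    m, e, s = map(np.asarray, (measured, expected, scale))
    return float(np.mean(np.abs(m - e) / s))
```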
In some implementations, the behavioral metric may be determined 712 by applying an external stimulus to the subject during image capture (e.g., a prompt instructing the user to direct their gaze point, or a display showing a moving object that the user follows with their eyes) and tracking the resulting gaze point transition. In response to these external stimuli, a real human eye is expected to move in a particular manner over time. Parameters of the expected gaze point transition motion may include an onset time that depends on the user's reaction time, a duration of the gaze point transition movement required to reach the new steady gaze direction, an average velocity, and a particular acceleration profile of the gaze point transition motion. By examining the sequence of images captured before and after the onset of the external stimulus, one or more parameters of the detected motion may be determined and compared to one or more parameters of the expected motion. Substantial deviation from the expected motion in response to the external stimulus may indicate that the subject in the camera view is not a real human eye and that a spoofing attack has occurred. An example of this implementation is described with respect to FIG. 8B.
In some embodiments, determining 712 a behavioral metric may include detecting blood flow in the vasculature of the white of the eye (e.g., vasculature in the outer layers of the sclera). The sequence of images may be analyzed to detect changes in hue and changes in the visible width of veins and blood vessels in the white of the eye over time. The vasculature of a human eye is expected to exhibit regular changes in vessel width and hue corresponding to the user's pulse. Substantial deviation from the expected blood flow pattern may indicate that the subject in the camera view is not a real human eye and that a spoofing attack has occurred.
For example, consider a section of vasculature between two branching points or sharp bends. The tubular body of that vessel changes shape and color as the heart pumps blood through it. In some implementations, 300 frames or images may be captured over a 10-second period. The image regions may be registered from one capture instance to the next. Blood flow can then be measured by comparing the physical dimensions (2D or 3D) of points of interest along the vessels over time and the color of those vessels over time. In this way, changes consistent with a pulse may be detected. For example, the measured "pulse" signal may be checked against waveforms, such as a square wave, that would not be produced by a natural circulatory system. If the measured "pulse" signal consists of peaks (both vasodilation and the corresponding color changes) at regular time intervals within the normal range for a human user (possibly even for a particular user), then the input may correspond to a true pulse. The distance between the measured pulse signal and the expected pulse signal may be determined to assess the likelihood that the subject is a real eye rather than a spoofing attack.
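A minimal sketch of this pulse-consistency check is given below, assuming per-frame vessel width and color measurements have already been extracted from registered image regions; the function name and its inputs (vessel_widths, vessel_hues, fps) are illustrative rather than part of the described system.

```python
import numpy as np

def pulse_consistency_metric(vessel_widths, vessel_hues, fps=30.0,
                             pulse_band=(0.75, 3.0)):
    """Illustrative sketch: score how pulse-like the width/hue series are.

    vessel_widths, vessel_hues: 1-D arrays sampled once per registered frame
    (hypothetical inputs; 300 frames over 10 s gives fps=30).
    Returns the fraction of signal energy inside a plausible heart-rate band;
    values near 0 suggest a static or unnaturally varying subject.
    """
    scores = []
    for series in (np.asarray(vessel_widths, float), np.asarray(vessel_hues, float)):
        series = series - series.mean()            # remove the DC component
        spectrum = np.abs(np.fft.rfft(series)) ** 2
        freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
        in_band = (freqs >= pulse_band[0]) & (freqs <= pulse_band[1])
        total = spectrum[1:].sum() + 1e-12         # skip the DC bin
        scores.append(spectrum[in_band].sum() / total)
    return float(np.mean(scores))
```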
In some implementations, the expected motion parameters are specific to a particular user and are determined during a registration session and stored as part of a reference record for the particular user. In some embodiments, the expected motion parameters are determined based on data collected from a large population of users or from offline studies.
For example, the behavior metric may be determined 712 by a verification module or application (e.g., verification module 440).
In this example, a spatial metric is determined 714 based on distances from the sensor to landmarks appearing in the plurality of images, each image having a different respective focal length. The focal length is the distance from the sensor to the point in its field of view that is in perfect focus. The focal length may be adjusted for different images by adjusting the focus configuration settings of the sensor. For example, landmarks (e.g., an iris, a corner of the eye, a nose, an ear, or a background object) may be identified and located in multiple images with different focal lengths. A landmark in a particular image appears with a degree of focus that depends on how far the object corresponding to the landmark is from the focal point in the field of view of the sensor. The degree of focus is a measure of the extent to which the image of the landmark is blurred by optical effects in the light sensor (e.g., due to diffraction and convolution caused by the shape of the aperture). The degree of focus of a landmark in a particular image may be estimated by determining the high frequency components of the image signal in the vicinity of the landmark. When a landmark is in focus, more high frequency components are expected in its vicinity; when the landmark is out of focus, fewer high frequency components are expected. By comparing the degree of focus of a landmark across images with different focal lengths, the distance from the sensor to the landmark can be estimated. In some embodiments, the distances of a plurality of landmarks from a sensor (e.g., a camera) are estimated to form a topological map (comprising a set of three-dimensional landmark positions) of the subject in the sensor view. The locations of these landmarks in the space viewed by the camera may be compared to a model by determining a spatial metric that reflects a deviation from the model (e.g., the mean square error between the detected locations of one or more landmarks and the corresponding modeled locations of the one or more landmarks).
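The focus-based estimation above can be illustrated with a short sketch that scores the degree of focus around a landmark by the high-frequency content of a local window; the Laplacian-variance proxy and the names focus_measure and landmark_xy are assumptions for illustration, not the described implementation.

```python
import numpy as np
import cv2

def focus_measure(gray_image, landmark_xy, window=32):
    """Degree of focus around a landmark, estimated from high-frequency content.

    A sketch only: uses the variance of the Laplacian in a window around the
    landmark as a proxy for the high-frequency components described above.
    gray_image: single-channel image; landmark_xy: (x, y) pixel coordinates.
    """
    x, y = int(landmark_xy[0]), int(landmark_xy[1])
    h, w = gray_image.shape
    x0, x1 = max(0, x - window), min(w, x + window)
    y0, y1 = max(0, y - window), min(h, y + window)
    patch = gray_image[y0:y1, x0:x1].astype(np.float64)
    return cv2.Laplacian(patch, cv2.CV_64F).var()

# Hypothetical usage: the focus distance whose image renders the landmark
# sharpest gives a coarse estimate of the landmark's distance from the sensor.
# best_focus_distance = max(zip(focus_distances, images),
#                           key=lambda fd_img: focus_measure(fd_img[1], landmark))[0]
```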
In some embodiments, the spatial metric is a measure of deviation of the subject from a two-dimensional plane. One possible spoofing strategy is to present a two-dimensional image (e.g., a photograph) of the registered user's eyes to the sensor. However, unlike landmarks in and around a real eye, the locations of landmarks (e.g., eyes, nose, mouth, and ears) in the two-dimensional image will lie in a two-dimensional plane. For example, the locations of a plurality of landmarks may be fitted to the nearest two-dimensional plane, and the average distance of the landmarks from this fitted plane may be determined as a spatial metric. A high value of this spatial metric may indicate that the subject is three-dimensional and therefore more likely to be a real human eye, while a low value may indicate a high likelihood that the subject is a two-dimensional spoofing attack.
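Below is a brief sketch of the plane-fit deviation described in this paragraph, assuming three-dimensional landmark positions have already been estimated (for example, from the focus-based distances above); the function and array names are hypothetical.

```python
import numpy as np

def planarity_metric(landmark_points_3d):
    """Sketch: mean distance of estimated 3-D landmark positions from their
    best-fit plane. Small values are consistent with a flat (2-D) spoof.

    landmark_points_3d: (N, 3) array of positions (illustrative input).
    """
    pts = np.asarray(landmark_points_3d, float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    distances = np.abs((pts - centroid) @ normal)
    return float(distances.mean())
```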
In some embodiments, the spatial metric is a measure of deviation of the subject from an expected three-dimensional shape. A three-dimensional model containing locations corresponding to landmarks, which describes the expected shape of a subject that includes a real human eye, may be compared with the detected landmark locations. In some implementations, the relative positions of landmarks on a particular user's face may be determined during a registration session and used to generate a three-dimensional model stored as part of a reference record. In some embodiments, a three-dimensional model for a population of users may be determined based on measurements collected from a number of people or on studies. Various types of metrics may be used as spatial metrics to compare detected landmark positions to the expected shape (e.g., Euclidean distance, correlation coefficient, modified Hausdorff distance, Mahalanobis distance, Bregman divergence, Kullback-Leibler distance, and Jensen-Shannon divergence).
In some implementations, determining 714 the spatial metric includes determining a disparity of two or more landmarks appearing in the plurality of images. Parallax is the apparent displacement of an observed object due to a change in the position of the observer. Multiple images taken from different angles relative to the subject may cause landmarks within the images to appear to move by different amounts because of their differences in distance from the sensor. This parallax effect can be measured and used as a spatial metric that reflects the three-dimensional nature of the subject in the sensor view. If all landmarks in the images experience the same apparent displacement caused by the relative motion of the sensor, i.e., the disparity of the parallax effects across the landmarks is small, then the subject being viewed by the camera has a higher probability of being a two-dimensional spoofing attack. In some implementations, the sensor is moved around the subject during image capture to collect image data from different orientations relative to the subject. For example, a single camera may be rotated or slid slightly, or multiple cameras at different positions may be used for image capture. In some embodiments, the user is prompted to move in order to change the relative orientation of the subject and the sensor. In some embodiments, it is assumed that the sensor will naturally move relative to the subject. For example, where the sensor is a camera in a handheld user device (e.g., a smartphone or tablet computer), the sensor may naturally move relative to the user's face due to involuntary hand motion.
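The parallax-disparity idea can be sketched as follows, assuming the same landmarks have been located in two images taken from slightly different sensor positions; the inputs and the use of displacement spread as the metric are illustrative choices.

```python
import numpy as np

def parallax_spread(landmarks_frame_a, landmarks_frame_b):
    """Sketch: how differently landmarks shift between two views.

    landmarks_frame_a/b: (N, 2) pixel positions of the same landmarks in two
    images taken from slightly different sensor positions (hypothetical input).
    A near-zero spread means every landmark moved by the same amount, as a flat
    photograph would produce; a larger spread suggests real depth.
    """
    a = np.asarray(landmarks_frame_a, float)
    b = np.asarray(landmarks_frame_b, float)
    displacements = np.linalg.norm(b - a, axis=1)
    return float(displacements.std())
```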
For example, the spatial metric may be determined 714 by a verification module or application (e.g., verification module 440).
In this example, the reflection metric is determined 716 based on detected changes in surface glare or specular reflection patterns on the surface of the eye as the eye appears in the plurality of images. The reflection metric may be a measure of changes in glare or specular reflection patches on the surface of the eye. As the illumination of the eye in the view of the sensor changes, the glare and specular reflection patterns visible on the eye are expected to change by appearing, disappearing, growing, shrinking, or moving, due to relative motion of the eye and the light source or due to a dynamic light source (e.g., a flash, an LCD screen, or another lighting element). In some implementations, the illumination change is caused by a light stimulus (e.g., a flash pulse) or an external stimulus (e.g., a prompt instructing the user to change gaze direction) during image capture. For example, glare (including its boundaries) may be detected by thresholding a contrast-enhanced image to find the whitest regions. The detected changes in the glare or specular reflection pattern on the eye in the images may be compared to the expected changes by determining 716 a reflection metric that measures the deviation of the detected changes from the expected changes.
Changes in the area and shape of this glare may be examined. The perimeter-to-area ratio of the glare patch may also be examined.
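A short sketch of glare detection and the perimeter-to-area measurement follows, assuming an 8-bit grayscale crop of the eye and OpenCV 4; the fixed threshold of 240 and the function name are illustrative.

```python
import cv2

def glare_shape_features(gray_eye_image, thresh=240):
    """Sketch: locate the brightest (glare) patch and describe its shape.

    Contrast is stretched with histogram equalization, the whitest pixels are
    thresholded, and the largest connected blob is treated as the glare patch.
    Returns (area, perimeter, perimeter/area); the threshold is illustrative.
    """
    enhanced = cv2.equalizeHist(gray_eye_image)          # expects 8-bit grayscale
    _, mask = cv2.threshold(enhanced, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0.0, 0.0, float('inf')                    # no glare detected
    blob = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(blob)
    perimeter = cv2.arcLength(blob, True)
    return area, perimeter, perimeter / max(area, 1e-6)
```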
In some implementations, the flash may be pulsed while one or more of the images are captured, illuminating the subject. Glare from the flash may be detected on the eye as the eye appears in the images. The pulsing of the flash may be synchronized with image capture such that the time difference between the time the flash is pulsed and the time the corresponding glare appears in the images can be measured. The reflection metric may be based on this time difference. A large deviation from the expected synchronization or time-locking of the flash pulse with the onset of the corresponding glare or specular reflection may indicate a spoofing attack. For example, a replay attack uses pre-recorded video of a capture scenario. Glare changes in pre-recorded video are unlikely to be time-locked with real-time flash events during the current session. Another example is presenting a printed image of the eye to the sensor, in which case glare may be spread across the printed image in an unnatural, uniform manner, or may change imperceptibly due to the lack of moisture on the viewed surface. If no corresponding glare or specular reflection is detected, the reflection metric may be set to an arbitrarily large value corresponding to poor synchronization or a lack of time-locking between the flash pulse and the detected glare or specular reflection.
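The flash-to-glare time-locking check may be sketched as follows, assuming per-frame glare areas (for example, from the glare sketch above), frame timestamps, and the times at which the flash was pulsed are available; the onset heuristic and the penalty value are illustrative assumptions.

```python
import numpy as np

def flash_glare_sync_metric(flash_times, glare_areas, frame_times,
                            onset_factor=2.0, max_penalty=1e6):
    """Sketch: delay between each flash pulse and the onset of new glare.

    flash_times: times at which the flash was pulsed (hypothetical input).
    glare_areas: per-frame glare area; frame_times: capture time of each frame.
    Returns the mean delay in seconds, or max_penalty if a flash produced no
    detectable glare, mirroring the "arbitrarily large value" described above.
    """
    glare_areas = np.asarray(glare_areas, float)
    frame_times = np.asarray(frame_times, float)
    baseline = np.median(glare_areas)
    delays = []
    for t in flash_times:
        after = frame_times >= t
        onset = np.flatnonzero(after & (glare_areas > onset_factor * baseline))
        delays.append(frame_times[onset[0]] - t if onset.size else max_penalty)
    return float(np.mean(delays)) if delays else max_penalty
```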
In some implementations, a change in the uniformity of the glare pattern can be detected as the intensity of the illumination increases, caused by more of the fine three-dimensional texture of the white of the eye being revealed. For example, a flash may be pulsed while one or more of the images are captured, illuminating the subject at a higher intensity. The fine three-dimensional texture of the white of the eye can be detected by measuring the uniformity of the glare pattern on the eye in images captured before and after the start of the flash pulse. For example, the uniformity of the glare or specular reflection pattern may be measured as the ratio of the perimeter of the glare to its area. The more this ratio exceeds 2/R (where R is the estimated radius of the glare patch), the less circular and the more uneven the glare. In some implementations, a function approximator (e.g., a neural network) is trained to distinguish between specular reflection patterns recorded from a real human eye and from a synthetic eye (e.g., a 3D-printed eye) using a sensor with an illumination element (e.g., a flash).
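The 2/R uniformity comparison can be written out as a small sketch; normalizing against an equal-area circle is one possible reading of the ratio described above and is not the only way to implement it.

```python
import numpy as np

def glare_uniformity(area, perimeter):
    """Sketch of the perimeter-to-area uniformity check described above.

    For a perfect circle of radius R, perimeter/area = 2/R. The returned ratio
    is >= 1 and grows as the glare patch becomes less circular, i.e. as the
    fine scleral texture breaks the glare up under stronger illumination.
    """
    if area <= 0:
        return float('inf')
    radius_estimate = np.sqrt(area / np.pi)       # R of a circle with this area
    ideal_ratio = 2.0 / radius_estimate           # perimeter/area of that circle
    return (perimeter / area) / ideal_ratio
```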
For example, the reflection metric may be determined 716 by a verification module or application (e.g., verification module 440).
In some implementations (not shown), additional liveness metrics may be determined 710. For example, a metric reflecting the degree of saccadic motion of the eye in the sensor view may be determined. The iris of the eye may be identified as a landmark in a sequence of images so that its position or orientation may be tracked. This sequence of positions or orientations can be analyzed to determine the degree of saccadic motion by filtering the motion at the particular frequencies associated with normal saccadic motion.
In some implementations, a liveness metric that reflects the degree of halftoning in the captured images may be determined 710. Halftoning is an artifact of digitally printed images that may be used in a spoofing attack, and thus its presence can indicate a high likelihood of a spoofing attack. For example, one or more images may be captured using a reduced dynamic range of a sensor (e.g., a camera) so that a finer resolution in the intensity of the detected light is achieved over the range in which it appears in the captured images. In this way, the intensity or color scale may be amplified to reveal more subtle changes in the level of the detected image signal. If the captured subject is a real human eye, the detected color or intensity values are expected to vary continuously. In contrast, a spoofed image (e.g., a digital photograph presented to the sensor) may exhibit large, discontinuous jumps corresponding to halftoning. The degree of halftoning in an image can be measured in various ways (e.g., as an average or maximum eigenvalue of an estimated Hessian matrix in an image region, or as the high frequency components of the image signal). In some embodiments, images with a halftone metric above a threshold are rejected. In some implementations, a histogram of the shades of gray in the image is generated and the uniformity of the distribution among gray level bins (e.g., 256 bins) is measured.
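One simple way to quantify the gray-level uniformity described in this paragraph is an entropy-based histogram measure, sketched below; treating low entropy as evidence of halftoning is an illustrative choice, not the described implementation.

```python
import numpy as np

def halftone_metric(gray_image, bins=256):
    """Sketch: how unevenly the gray levels are occupied.

    A real eye imaged with an amplified intensity scale tends to fill many
    gray-level bins smoothly; a halftoned print concentrates mass in a few
    levels. Returns 1 - (normalized entropy), so higher values suggest
    halftoning. The bin count of 256 follows the example in the text.
    """
    hist, _ = np.histogram(gray_image.ravel(), bins=bins, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    nonzero = p[p > 0]
    entropy = -(nonzero * np.log2(nonzero)).sum()
    return 1.0 - entropy / np.log2(bins)
```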
In some implementations, the liveness metrics are determined 710 in parallel. In some implementations, the liveness metrics are determined 710 sequentially.
A liveness score may then be determined 730 based on the one or more liveness metrics. In some implementations, the liveness score is determined by inputting one or more liveness metrics into the trained function approximator.
The function approximator may be trained using training images, correctly labeled to provide the desired output signal, that correspond to real human eyes and to various spoofing attacks. The function approximator models a mapping from input data (i.e., the liveness metrics for a training image) to output data (i.e., a liveness score) with a set of model parameters. The model parameter values are selected by applying a training algorithm to the training data. For example, the function approximator may be based on the following models: linear regression, Volterra series, Wiener series, radial basis functions, kernel methods, polynomial methods, piecewise linear models, Bayesian classifiers, k-nearest neighbor classifiers, neural networks, support vector machines, or fuzzy function approximators. In some implementations, the liveness score may be binary.
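As an illustration only, a simple linear classifier can stand in for the trained function approximator; logistic regression is not one of the models enumerated above, but it serves the same role of mapping liveness metrics to a score, and the toy training arrays below are placeholders rather than real data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X_train holds one row of liveness metrics per labeled training image
# (behavioral, spatial, reflection, halftone, ...); y_train is 1 for genuine
# eyes and 0 for spoof attacks. These arrays are placeholders only.
X_train = np.array([[0.8, 2.1, 0.1, 0.05],
                    [0.1, 0.2, 3.5, 0.60]])
y_train = np.array([1, 0])

model = LogisticRegression().fit(X_train, y_train)

def liveness_score(metrics):
    """Map a vector of liveness metrics to a score in [0, 1]."""
    features = np.asarray(metrics, float).reshape(1, -1)
    return float(model.predict_proba(features)[0, 1])
```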
For example, a liveness score may be determined 730 by a verification module or application (e.g., verification module 440) based on one or more liveness metrics.
The resulting liveness score may then be returned 740 and may be used by a verification system (e.g., verification system 400) in a variety of ways. For example, the liveness score may be used to accept or reject one or more images.
Fig. 8A is a flow diagram of an example process 800 for determining a behavioral metric based on pupil contraction in response to a light stimulus. One or more light stimuli are applied 810 to a scene viewed by a sensor, such as light sensor 420. For example, the light stimulus may include a flash pulse or a change in brightness of a display (e.g., an LCD display). A sequence of images is captured 812 by the sensor before and after the start of the light stimulus. For example, the sequence of images may be captured at regularly spaced times (e.g., 10, 30, or 60Hz) within a time interval (e.g., 2, 5, or 10 seconds) that includes the start of the light stimulus.
In some implementations, the pupil is delineated as a landmark in each of the captured images, and the diameter of the pupil is determined 814 in each captured image. The diameter may be determined 814 relative to a starting diameter of the pupil measured in one or more images captured prior to the start of the light stimulus.
The resulting sequence of pupil diameters measured in response to the light stimulus may be analyzed to determine 816 one or more motion parameters of the pupil constriction in response to the light stimulus. In some implementations, the motion parameters of pupil constriction can include the onset time of the constriction motion relative to the start of the light stimulus. The onset is the time delay between the start of the light stimulus and the start of the constriction motion. In some implementations, the motion parameters of pupil constriction can include the duration of the constriction motion. The duration is the length of time between the beginning of the constriction motion and the end of the constriction motion, when the pupil diameter reaches a new steady-state value (e.g., after which the diameter does not change for a minimum time interval). In some implementations, the motion parameters of pupil constriction can include the speed of the pupil constriction. For example, the speed may be determined as the difference in pupil diameter between two points in time divided by the length of the time interval between them. In some implementations, the motion parameters of pupil constriction may include the acceleration of the pupil constriction in different time segments of the constriction motion. For example, the acceleration may be determined as the difference in speed between two time intervals.
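A minimal sketch of extracting onset, duration, and mean speed from the measured diameter sequence follows; the relative-drop onset heuristic and the input names are assumptions for illustration.

```python
import numpy as np

def constriction_parameters(pupil_diameters, frame_times, stimulus_time,
                            rel_threshold=0.02):
    """Sketch: onset, duration, and mean speed of pupil constriction.

    pupil_diameters / frame_times: per-frame measurements (hypothetical input);
    stimulus_time: when the light stimulus started. Onset is taken as the first
    post-stimulus frame where the diameter has dropped by rel_threshold of the
    baseline, and the end as the frame of minimum diameter.
    """
    d = np.asarray(pupil_diameters, float)
    t = np.asarray(frame_times, float)
    baseline = d[t < stimulus_time].mean()        # assumes pre-stimulus frames exist
    post = np.flatnonzero((t >= stimulus_time) & (d < baseline * (1 - rel_threshold)))
    if post.size == 0:
        return None                               # no constriction detected
    start_idx = post[0]
    end_idx = start_idx + int(np.argmin(d[start_idx:]))
    onset = t[start_idx] - stimulus_time
    duration = t[end_idx] - t[start_idx]
    speed = (d[start_idx] - d[end_idx]) / max(duration, 1e-6)
    return {'onset': onset, 'duration': duration, 'mean_speed': speed}
```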
The behavioral metric may be determined 818 as a distance between the one or more determined motion parameters and the one or more expected motion parameters. For example, the behavioral metric may include the difference between the detected onset time and the expected onset time for a real human eye. For example, the behavioral metric may include the difference between the detected duration and the expected duration of pupil constriction for a real human eye. In some embodiments, the sequence of pupil diameters is compared to an expected sequence of pupil diameters by determining a distance between the two sequences (e.g., Euclidean distance, correlation coefficient, modified Hausdorff distance, Mahalanobis distance, Bregman divergence, Kullback-Leibler distance, or Jensen-Shannon divergence). In some embodiments, the sequence of pupil constriction speeds for a constriction movement is compared to an expected sequence of pupil constriction speeds by determining a distance between the two sequences of speeds. In some embodiments, the sequence of pupil constriction accelerations for a constriction movement is compared to an expected sequence of pupil constriction accelerations by determining a distance between the two sequences of accelerations.
For example, process 800 may be implemented by an authentication module or application (e.g., authentication module 440) that controls light sensors (e.g., light sensor 420) and lighting elements.
Fig. 8B is a flow diagram of an example process 820 for determining a behavioral metric based on a gaze point transition of the iris in response to an external stimulus. One or more external stimuli are applied 830 to a user viewed by a sensor (e.g., light sensor 420). For example, the external stimulus may include a prompt during image capture instructing the user to direct their gaze point (e.g., look right, look left, look up, look down, or look forward). The prompt may be visual, auditory, and/or tactile. In some implementations, the external stimulus may include an object moving within a display that the user follows with their eyes.
A sequence of images is captured 832 by the sensor before and after the start of the external stimulus. For example, the sequence of images may be captured at regularly spaced times (e.g., 10, 30, or 60Hz) over a time interval (e.g., 2, 5, or 10 seconds) that includes the onset of the external stimulus.
In some implementations, the iris is delineated as a landmark in each of the captured images, and the position or orientation of the iris is determined 834 in each captured image. The position may be determined 834 relative to a starting position of the iris measured in one or more images captured prior to the start of the external stimulus.
The resulting sequence of iris positions measured in response to the external stimulus may be analyzed to determine 836 one or more motion parameters of the gaze point transition in response to the external stimulus. In some implementations, the motion parameters of the gaze point transition may include the onset time of the gaze point transition motion relative to the start of the external stimulus. The onset is the time delay between the start of the external stimulus and the start of the gaze point transition movement. In some implementations, the motion parameters of the gaze point transition may include the duration of the gaze point transition motion. The duration is the length of time between the beginning of the gaze point transition motion and the end of the gaze point transition motion, when the iris reaches a new steady-state position (e.g., after which the iris does not move for a minimum time interval). In some implementations, the motion parameters of the gaze point transition may include the speed of the gaze point transition. For example, the speed may be determined as the difference in iris position between two points in time divided by the length of the time interval between them. In some implementations, the motion parameters of the gaze point transition may include the acceleration of the gaze point transition. For example, the acceleration may be determined as the difference in speed between two time intervals.
The behavioral metric may be determined 838 as a distance between the one or more determined motion parameters and the one or more expected motion parameters. For example, the behavioral metric may include the difference between the detected onset time and the expected onset time for a real human eye. For example, the behavioral metric may include the difference between the detected duration and the expected duration of a gaze point transition for a real human eye. In some embodiments, the sequence of iris positions is compared to an expected sequence of iris positions by determining a distance between the two sequences (e.g., Euclidean distance, correlation coefficient, modified Hausdorff distance, Mahalanobis distance, Bregman divergence, Kullback-Leibler distance, or Jensen-Shannon divergence). In some embodiments, the sequence of transition speeds for the gaze point transition motion is compared to an expected sequence of transition speeds by determining a distance between the two sequences of speeds. In some embodiments, the sequence of gaze point transition accelerations for the gaze point transition motion is compared to an expected sequence of gaze point transition accelerations by determining a distance between the two sequences of accelerations.
For example, process 820 may be implemented by a verification module or application (e.g., verification module 440) that controls a light sensor (e.g., light sensor 420) and a prompting device (e.g., a display, speaker, or haptic feedback device).
FIG. 9 shows an example of a general purpose computer device 900 and a general purpose mobile computing device 950, which may be used with the techniques described herein. Computing device 900 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Computing device 950 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed in this document.
Computing device 900 includes a processor 902, memory 904, a storage device 906, a high-speed interface 908 connecting to memory 904 and high-speed expansion ports 910, and a low speed interface 912 connecting to low speed bus 914 and storage device 906. Each of the components 902, 904, 906, 908, 910, and 912, are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 902 may process instructions for execution within the computing device 900, including instructions stored in the memory 904 or on the storage device 906 to display graphical information for a GUI on an external input/output device, such as display 916 coupled to high speed interface 908. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and memory types. Also, multiple computing devices 900 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 904 stores information within the computing device 900. In one implementation, the memory 904 is a volatile memory unit or units. In another implementation, the memory 904 is one or more non-volatile memory units. The memory 904 may also be another form of computer-readable medium, such as a magnetic or optical disk.
Storage 906 is capable of providing a large amount of storage for computing device 900. In one implementation, the storage device 906 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. A computer program product may be tangibly embodied in an information carrier. The computer program product may also contain instructions that, when executed, perform one or more methods, such as those described above. For example, the information carrier is a computer-or machine-readable medium, such as the memory 904, the storage device 906, or memory on processor 902.
The high-speed controller 908 manages bandwidth-intensive operations for the computing device 900, while the low-speed controller 912 manages lower bandwidth-intensive operations. This allocation of functionality is exemplary only. In one implementation, the high-speed controller 908 is coupled to memory 904, to display 916 (e.g., through a graphics processor or accelerator), and to high-speed expansion ports 910, which may accept various expansion cards (not shown). In an implementation, low-speed controller 912 is coupled to storage device 906 and low-speed expansion port 914. The low-speed expansion port, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, for example, through a network adapter.
As shown in the figures, computing device 900 may be implemented in a number of different forms. For example, computing device 900 may be implemented as a standard server 920 or multiple times in a group of such servers. The computing device 900 may also be implemented as part of a rack server system 924. Additionally, computing device 900 may be implemented in a personal computer such as a laptop computer 922. Alternatively, components from computing device 900 may be combined with other components in a mobile device (not shown), such as device 950. Each of these devices may contain one or more of computing device 900, 950, and an entire system may be made up of multiple computing devices 900, 950 communicating with each other.
Computing device 950 includes a processor 952, memory 964, input/output devices such as a display 954, a communication interface 966, and a transceiver 968, among other components. The device 950 may also be provided with a storage device, such as a mini-hard disk or other device, to provide additional storage. Each of the components 950, 952, 964, 954, 966, and 968, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 952 may execute instructions within the computing device 950, including instructions stored in the memory 964. The processor may be implemented as a chipset of chips that include separate and multiple analog and digital processors. For example, the processor may provide for coordination of the other components of the device 950, such as control of user interfaces, applications executed by device 950, and wireless communication by device 950.
The processor 952 may communicate with a user through the control interface 958 and a display interface 956 coupled to a display 954. For example, the display 954 may be a TFT LCD (thin film transistor liquid Crystal display) or OLED (organic light emitting diode) display or other suitable display technology. The display interface 956 may comprise appropriate circuitry for driving the display 954 to present graphical and other information to a user. The control interface 958 may receive commands from a user and convert them for submission to the processor 952. In addition, an external interface 962 may be provided in communication with processor 952, so as to enable near area communication of device 950 with other devices. For example, external interface 962 may provide for wired communication in some implementations, or wireless communication in other implementations, and multiple interfaces may also be used.
The memory 964 stores information within the computing device 950. The memory 964 may be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. Expansion memory 974 may also be provided and connected to device 950 through expansion interface 972, which may include, for example, a SIMM (Single In Line Memory Module) card interface. Such expansion memory 974 may provide additional storage space for device 950, or may also store applications or other information for device 950. Specifically, expansion memory 974 may include instructions to carry out or supplement the processes described above, and may also include secure information. Thus, for example, expansion memory 974 may be provided as a security module for device 950, and may be programmed with instructions that permit secure use of device 950. In addition, secure applications may be provided via the SIMM card, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
For example, the memory may include flash memory and/or NVRAM memory, as discussed below. In one embodiment, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer-or machine-readable medium, such as the memory 964, expansion memory 974, memory on processor 952, or a propagated signal that may be received, for example, over transceiver 968 or external interface 962.
Device 950 may communicate wirelessly via communication interface 966, which communication interface 966 may include digital signal processing circuitry as necessary. Communication interface 966 may provide for communications under various modes or protocols, such as GSM voice calls, SMS, EMS, or MMS messaging, CDMA, TDMA, PDC, WCDMA, CDMA2000, or GPRS, among other protocols. This communication may occur, for example, via radio frequency transceiver 968. Additionally, short-range communications may occur, for example, using a bluetooth, WiFi, or other such transceiver (not shown). Additionally, GPS (global positioning system) receiver module 970 may provide additional navigation-and location-related wireless data to device 950, which may be used as appropriate by applications running on device 950.
Device 950 may also communicate audibly using audio codec 960, and audio codec 960 may receive verbal information from a user and convert it to usable digital information. Audio codec 960 may likewise produce audible sound for a user (e.g., through a speaker, for example, in a handset of device 950). Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on device 950.
As shown in the figures, computing device 950 may be implemented in a number of different forms. For example, the computing device 950 may be implemented as a cellular telephone 980. Computing device 950 may also be implemented as part of a smartphone 982, personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide interaction with the user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, voice, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), and the internet.
A computing system may include clients and servers. The client and server are typically remote from each other and typically interact via a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Several embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention.
In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. In addition, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other embodiments are within the scope of the following claims.
Claims (27)
1. A computer-implemented method, comprising:
capturing, using a light sensor, a plurality of images of a subject including a view of an eye of the subject;
determining a reflectance metric by at least one of:
measuring a time difference between the following two: (i) an occurrence of a flash pulse captured by one or more of the images and (ii) an occurrence of a corresponding glare detected on the eye in one or more of the images; and
measuring a pattern uniformity of glare on an outer surface of the eye visible in one or more of the images, the glare caused by one or more flash pulses captured by one or more of the images; and
rejecting or accepting the image based at least on the reflection metric.
2. The method of claim 1, wherein rejecting or accepting the image comprises:
determining a behavioral metric based at least on the detected movement of the eye as it appears in the plurality of images, wherein the behavioral metric is a measure of deviation of the detected movement and timing from expected movement of the eye; and
rejecting or accepting the image based on at least the behavioral and reflectance metrics.
3. The method of claim 2, wherein determining the behavioral metric comprises determining an onset, duration, velocity, or acceleration of pupil constriction in response to a light stimulus.
4. The method of claim 2, wherein determining the behavioral metric comprises determining a start, duration, or acceleration of a gaze point transition in response to an external stimulus.
5. The method of claim 2, wherein determining the behavioral metric comprises detecting blood flow of the eye as it appears in the plurality of images.
6. The method of claim 1, further comprising:
determining a spatial metric based at least on distances from the light sensor to landmarks appearing in the plurality of images, each image having a different respective focal length; and
wherein the image is rejected or accepted further based on the spatial metric.
7. The method of claim 6, wherein the spatial metric is a measure of deviation of the subject from an expected three-dimensional shape.
8. The method of claim 6, wherein determining the spatial metric comprises determining a disparity of two or more landmarks appearing in the plurality of images.
9. The method of claim 1, further comprising changing one or more parameters at different times during the image being captured.
10. The method of claim 9, wherein a particular parameter is a flash pulse, a focus setting of the light sensor, a brightness of a display, an exposure of the light sensor, a white balance of the light sensor, or an external stimulus, wherein the external stimulus is a prompt instructing a user to orient a point of regard or an object depicted in a display that moves within the display.
11. A system for spoof detection for biometric verification comprising a data processing device programmed to perform operations comprising:
capturing, using a light sensor, a plurality of images of a subject including a view of an eye of the subject;
determining a reflectance metric by at least one of:
measuring a time difference between the following two: (i) an occurrence of a flash pulse captured by one or more of the images and (ii) an occurrence of a corresponding glare detected on the eye in one or more of the images; and
measuring a pattern uniformity of glare on an outer surface of the eye visible in one or more of the images, the glare caused by one or more flash pulses captured by one or more of the images; and
rejecting or accepting the image based at least on the reflection metric.
12. The system of claim 11, wherein rejecting or accepting the image comprises:
determining a behavioral metric based at least on the detected movement of the eye as it appears in the plurality of images, wherein the behavioral metric is a measure of deviation of the detected movement and timing from expected movement of the eye; and
rejecting or accepting the image based on at least the behavioral and reflectance metrics.
13. The system of claim 12, wherein determining the behavioral metric comprises determining an onset, duration, velocity, or acceleration of pupil constriction in response to a light stimulus.
14. The system of claim 12, wherein determining the behavioral metric comprises determining a start, duration, or acceleration of a gaze point transition in response to an external stimulus.
15. The system of claim 12, wherein determining the behavioral metric comprises detecting blood flow of the eye as it appears in the plurality of images.
16. The system of claim 11, the operations further comprising:
determining a spatial metric based at least on distances from the light sensor to landmarks appearing in the plurality of images, each image having a different respective focal length; and
wherein the image is rejected or accepted further based on the spatial metric.
17. The system of claim 16, wherein the spatial metric is a measure of deviation of the subject from an expected three-dimensional shape.
18. The system of claim 16, wherein determining the spatial metric comprises determining a disparity of two or more landmarks appearing in the plurality of images.
19. The system of claim 11, wherein the operations further comprise changing one or more parameters at different times during the capturing of the image.
20. The system of claim 19, wherein a particular parameter is a flash pulse, a focus setting of the light sensor, a brightness of a display, an exposure of the light sensor, a white balance of the light sensor, or an external stimulus, wherein the external stimulus is a prompt instructing a user to orient a point of regard or an object depicted in a display that moves within the display.
21. A computer-readable storage medium encoded with instructions that, when executed by data processing apparatus, cause the data processing apparatus to perform operations comprising:
capturing, using a light sensor, a plurality of images of a subject including a view of an eye of the subject;
determining a reflectance metric by at least one of:
measuring a time difference between the following two: (i) an occurrence of a flash pulse captured by one or more of the images and (ii) an occurrence of a corresponding glare detected on the eye in one or more of the images; and
measuring a pattern uniformity of glare on an outer surface of the eye visible in one or more of the images, the glare caused by one or more flash pulses captured by one or more of the images; and
rejecting or accepting the image based at least on the reflection metric.
22. The storage medium of claim 21, wherein rejecting or accepting the image comprises:
determining a behavioral metric based at least on the detected movement of the eye as it appears in the plurality of images, wherein the behavioral metric is a measure of deviation of the detected movement and timing from expected movement of the eye; and
rejecting or accepting the image based on at least the behavioral and reflectance metrics.
23. The storage medium of claim 22, wherein determining the behavioral metric comprises determining an onset, duration, velocity, or acceleration of pupil constriction in response to a light stimulus.
24. The storage medium of claim 22, wherein determining the behavioral metric comprises determining a start, duration, or acceleration of a gaze point transition in response to an external stimulus.
25. The storage medium of claim 22, wherein determining the behavioral metric comprises detecting blood flow of the eye as it appears in the plurality of images.
26. The storage medium of claim 21, wherein the operations further comprise changing one or more parameters at different times during the capturing of the image.
27. The storage medium of claim 26, wherein a particular parameter is a flash pulse, a focus setting of the light sensor, a brightness of a display, an exposure of the light sensor, a white balance of the light sensor, or an external stimulus, wherein the external stimulus is a prompt instructing a user to direct a point of regard or an object depicted in a display that moves within the display.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/572,097 | 2012-08-10 | ||
US13/572,097 US8437513B1 (en) | 2012-08-10 | 2012-08-10 | Spoof detection for biometric authentication |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
HK14102694.9A Addition HK1189673B (en) | 2012-08-10 | 2014-03-18 | Methods and systems for spoof detection for biometric authentication |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
HK14102694.9A Division HK1189673B (en) | 2012-08-10 | 2014-03-18 | Methods and systems for spoof detection for biometric authentication |
Publications (2)
Publication Number | Publication Date |
---|---|
HK1211716A1 HK1211716A1 (en) | 2016-05-27 |
HK1211716B true HK1211716B (en) | 2019-02-01 |