

Iris recognition method, device, electronic device and storage medium

Info

Publication number
CN120452049A
Authority
CN
China
Prior art keywords
iris
image
images
pupil
iris image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410173894.8A
Other languages
Chinese (zh)
Inventor
张晓翼
侯锦坤
郭润增
王少鸣
崔齐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202410173894.8A
Priority to PCT/CN2025/075701 (published as WO2025167869A1)
Publication of CN120452049A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The application relates to the technical field of image processing, and in particular to an iris recognition method, an iris recognition device, an electronic device and a storage medium, which are used to improve the accuracy of iris recognition. The method comprises: obtaining an iris image group collected for an object to be identified, the group comprising a plurality of iris images collected at different collection view angles for the same eye region of the object to be identified; performing stereo matching between every two iris images in the group to obtain corresponding disparity maps; locating the pupil edge in any one iris image of the group based on the obtained disparity maps; determining the iris region in that iris image according to the pupil edge; extracting features from the iris region to obtain corresponding iris features; and performing identity recognition on the object based on the iris features. By locating the pupil edge through stereo matching, the application reduces edge positioning errors and improves iris recognition accuracy.

Description

Iris recognition method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an iris recognition method, an iris recognition device, an electronic device, and a storage medium.
Background
Iris recognition is a biometric technique that performs identity recognition by analyzing the iris texture features in an individual's eyes. Because of its high reliability and uniqueness, iris recognition is widely applied in scenes such as security authentication and identity verification.
Iris recognition methods in the related art mainly locate the pupil edge by methods such as edge detection and morphological processing, and then determine the iris region on the basis of the located pupil edge in order to perform iris recognition.
However, under conditions such as pupil size changes, illumination changes and pupil edge blurring, the edge positioning obtained by these methods is poor, which affects the accuracy of iris recognition.
Therefore, a way to improve the accuracy of iris recognition is urgently needed.
Disclosure of Invention
The embodiment of the application provides an iris recognition method, an iris recognition device, electronic equipment and a storage medium, which are used for improving the accuracy of iris recognition.
The iris recognition method provided by the embodiment of the application comprises the following steps:
acquiring an iris image group collected for an object to be identified, wherein the iris image group comprises a plurality of iris images collected at different acquisition view angles for the same eye region of the object to be identified;
performing stereo matching between every two iris images in the iris image group to obtain corresponding disparity maps, wherein the disparity elements in each disparity map represent the displacement, in a specified direction, of each pair of corresponding points between the corresponding two iris images;
locating the pupil edge in any one iris image contained in the iris image group based on each obtained disparity map;
determining an iris region in the iris image according to the pupil edge, and extracting features from the iris region to obtain corresponding iris features;
and performing identity recognition on the object to be identified based on the iris features.
The iris recognition device provided by the embodiment of the application comprises:
an image acquisition unit, configured to acquire an iris image group collected for an object to be recognized, wherein the iris image group comprises a plurality of iris images collected at different acquisition view angles for the same eye area of the object to be recognized;
The stereo matching unit is used for respectively carrying out stereo matching between every two iris images in the iris image group to obtain corresponding parallax images, wherein parallax elements in each parallax image represent displacement of each corresponding point between the corresponding two iris images in a specified direction;
A pupil positioning unit, configured to position a pupil edge from any one iris image included in the iris image group based on each obtained parallax map;
The iris feature extraction unit is used for determining an iris region in any one iris image according to the pupil edge, and extracting features of any one iris region to obtain corresponding iris features;
and the identification unit is used for carrying out identity identification on the object to be identified based on the iris characteristics.
Optionally, the stereo matching unit is specifically configured to:
Eliminating parallax between two iris images by correcting the two iris images;
Pupil characteristic points are extracted from each corrected iris image respectively;
Determining corresponding points from the extracted pupil characteristic points through stereo matching;
And obtaining parallax maps corresponding to the two iris images based on the determined positions of the corresponding points in the corresponding iris images.
Optionally, the pupil positioning unit is specifically configured to:
if one parallax map is obtained, positioning the pupil edge from any one iris image contained in the iris image group based on the parallax map;
if multiple disparity maps are obtained, either the multiple disparity maps are fused into one image and the pupil edge is located in any one iris image of the iris image group based on the fused disparity map; or a pupil edge is located, based on each disparity map separately, in any one of the iris images corresponding to that disparity map, and the located pupil edges are then fused to obtain the fused pupil edge.
Optionally, the pupil positioning unit is specifically configured to:
based on a disparity map, pupil edges are located from an iris image by:
Extracting gradient information in the parallax map through an edge detection operator to obtain a gradient map corresponding to the parallax map, wherein gradient elements in the gradient map represent the change rate of gray values of all pixel points in the parallax map;
Determining pupil edge points from the disparity map based on a preset gradient threshold;
And positioning the pupil edge from the iris image according to the determined pupil edge point.
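As a rough, non-authoritative illustration of this gradient-based step (a minimal sketch only: the Sobel operator and the threshold value are assumptions, not choices prescribed by the application), the following Python/OpenCV code extracts a gradient map from a disparity map and keeps the points whose gradient exceeds a preset threshold:

```python
import cv2
import numpy as np

def pupil_edge_points(disparity, grad_thresh=30.0):
    """Find candidate pupil edge points in a disparity map via its gradients."""
    # Sobel operators estimate the rate of change of the disparity values.
    gx = cv2.Sobel(disparity, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(disparity, cv2.CV_32F, 0, 1, ksize=3)
    grad_map = cv2.magnitude(gx, gy)  # gradient map of the disparity map

    # Disparity changes abruptly at the pupil boundary, so pixels whose
    # gradient exceeds the preset threshold are kept as edge candidates.
    ys, xs = np.where(grad_map > grad_thresh)
    return np.stack([xs, ys], axis=1)  # (x, y) coordinates of edge points
```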
Optionally, the iris feature extraction unit is specifically configured to:
Setting a preset margin around the pupil edge in any one iris image;
and taking the annular area determined based on the pupil edge and the preset margin as an iris area in any one iris image.
Optionally, the iris feature extraction unit is specifically configured to perform at least one of the following steps:
Extracting features of the iris region through filters with different scales and directions to obtain corresponding response values; combining the response values extracted by the filters to form iris features corresponding to the iris areas;
And extracting iris characteristics corresponding to the iris region by comparing the gray values of each pixel point in the iris region and the corresponding neighborhood pixel points.
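For illustration only, here is a minimal sketch of the first option: a small Gabor filter bank whose responses are combined into a feature vector (all kernel parameters below are assumptions to be tuned, not values given by the application):

```python
import cv2
import numpy as np

def gabor_iris_features(iris_region, scales=(9, 15), n_orient=4):
    """Filter an iris region at several scales and orientations, then
    combine the response statistics into a single feature vector."""
    feats = []
    for ksize in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient  # filter orientation
            kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                      lambd=10.0, gamma=0.5, psi=0.0)
            resp = cv2.filter2D(iris_region, cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])  # combine responses
    return np.array(feats, dtype=np.float32)
```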
Optionally, the iris feature extraction unit is further configured to enhance a contrast of the iris region before the feature extraction is performed on the iris region to obtain the corresponding iris feature.
Optionally, the iris feature extraction unit is further configured to process, before the feature extraction is performed on the iris region to obtain the corresponding iris feature, at least one disparity map to update the iris region by:
Extracting gradient information in the parallax map through different edge detection operators to obtain a plurality of gradient maps corresponding to the parallax map, wherein each edge detection operator corresponds to one gradient map, and gradient elements in the gradient map represent the change rate of gray values of pixel points in the parallax map;
determining new pupil edge points from the disparity map based on the plurality of gradient maps;
And updating the iris region in any one iris image according to the new pupil edge point.
Optionally, if there are multiple iris image groups, each corresponding to an image mode, the stereo matching unit is further configured to, before the stereo matching is performed between each two iris images to obtain the corresponding disparity maps:
Dividing the iris image groups into iris image candidate sets corresponding to different acquisition view angles, wherein the iris image candidates in each iris image candidate set are iris images acquired under different image modes and the same acquisition view angle;
For each pixel point, carrying out feature fusion on the corresponding pixel points of the iris image candidates in the same iris image candidate set, to obtain a fused iris image;
The stereo matching unit is specifically configured to:
And respectively carrying out stereo matching between every two fused iris images to obtain the corresponding disparity maps.
Optionally, the image mode includes some or all of the following:
RGB image mode, infrared image mode, visible light image mode.
Optionally, the stereo matching unit is specifically configured to:
And carrying out weighted average on the gray values of the corresponding pixel points on each iris image candidate in the same iris image candidate set.
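A minimal sketch of such pixel-wise fusion (uniform weights are an assumption; the application only specifies a weighted average of gray values):

```python
import numpy as np

def fuse_candidate_set(candidates, weights=None):
    """Weighted average of the gray values of same-view iris images
    captured in different image modes; weights=None means uniform."""
    stack = np.stack([c.astype(np.float32) for c in candidates])
    fused = np.average(stack, axis=0, weights=weights)
    return np.clip(fused, 0, 255).astype(np.uint8)
```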
Optionally, before the stereo matching unit performs stereo matching between each two iris images, the stereo matching unit is further configured to:
and carrying out image enhancement processing on each iris image in the iris image group in at least one mode.
An electronic device provided in an embodiment of the present application includes a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is caused to execute any one of the steps of the iris recognition method described above.
An embodiment of the present application provides a computer-readable storage medium including a computer program for causing an electronic device to execute the steps of any one of the iris recognition methods described above when the computer program is run on the electronic device.
Embodiments of the present application provide a computer program product comprising a computer program stored in a computer readable storage medium, which when read from the computer readable storage medium by a processor of an electronic device, causes the electronic device to perform the steps of any one of the iris recognition methods described above.
The application has the following beneficial effects:
The embodiment of the application provides an iris recognition method, an iris recognition device, an electronic device and a storage medium. In the embodiment of the application, during iris recognition, an iris image group is acquired, and the pupil edge is located by performing stereo matching between every two iris images in the group. Because stereo matching can accurately locate the pupil edge under different illumination conditions, facial poses and expressions, and is robust to interference factors such as noise and occlusion, this positioning approach effectively reduces pupil edge positioning errors.
In summary, using stereo matching techniques can more accurately locate pupil edges under complex illumination and pupil size variations. On the basis of accurately finding the edge of the pupil, a more accurate iris range can be determined, the iris recognition accuracy is improved, and the method still has a better recognition effect under the complex conditions of pupil size change, light change, pupil edge blurring and the like.
In addition, the stereo matching technology has higher real-time performance, can rapidly detect and position pupil edges, can meet the requirements of real-time application, can be optimized and improved according to different application scenes and requirements, and has higher expandability and flexibility.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is an alternative schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2 is a schematic diagram of an iris image of a human eye in accordance with an embodiment of the present application;
FIG. 3 is a flowchart of an iris recognition method according to an embodiment of the present application;
FIG. 4 is a logical schematic diagram of stereo matching of four iris images in an embodiment of the application;
FIG. 5 is a schematic diagram of a gray level histogram according to an embodiment of the present application;
FIG. 6 is a schematic illustration of two iris images in an embodiment of the application;
FIG. 7 is a schematic diagram of a correction principle in an embodiment of the present application;
FIG. 8 is a schematic diagram of an iris image before and after correction in an embodiment of the application;
FIG. 9 is a schematic diagram of disparity map fusion according to an embodiment of the present application;
FIG. 10 is a flowchart of a pupil edge localization method according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a determination of iris areas in an embodiment of the application;
FIG. 12 is a flowchart of an iris region updating method according to an embodiment of the application;
FIG. 13 is a schematic diagram of an identification process performed on an object through feature matching in an embodiment of the present application;
FIG. 14 is a schematic diagram of partitioning logic of a candidate iris image set according to an embodiment of the application;
FIG. 15 is a modular block diagram of an iris recognition process in an embodiment of the application;
FIG. 16 is a schematic diagram of the composition structure of an iris recognition device according to an embodiment of the present application;
FIG. 17 is a schematic diagram of a hardware configuration of an electronic device to which embodiments of the present application are applied;
FIG. 18 is a schematic diagram of a hardware configuration of another electronic device to which the embodiment of the present application is applied.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the technical solutions of the present application, but not all embodiments. All other embodiments, based on the embodiments described in the present document, which can be obtained by a person skilled in the art without any creative effort, are within the scope of protection of the technical solutions of the present application.
Some of the concepts involved in the embodiments of the present application are described below.
The pupil is a small circular hole in the center of the iris of an animal or human eye and is the passage through which light enters the eye. Contraction of the pupillary sphincter muscle on the iris constricts the pupil, while contraction of the pupillary dilator muscle dilates it; the dilation and constriction of the pupil control the amount of light entering the eye.
The iris is an annular, pigmented membrane at the front of the eyeball, located around the pupil and roughly circular in shape. The center of the circle on which the iris lies is usually the center of the pupil.
Iris recognition (Iris Recognition): a biometric technology that identifies individuals by analyzing the texture features of the iris of the eye.
Stereo matching (Stereo Matching): a computer vision technique that determines the positions of corresponding points in images by comparing the similarities between two stereo views, thereby enabling three-dimensional reconstruction or object localization.
Disparity map (Disparity Map): an image representing the amount of horizontal displacement between corresponding points in the left and right views. The disparity map can be used to calculate depth information of an object, thereby enabling three-dimensional reconstruction or object localization. In the stereo matching process, the disparity map is a representation of the matching result. Each pixel value in the disparity map represents the amount of horizontal displacement of the corresponding point between the left and right views; a larger value means the object is closer to the observer (e.g., a binocular camera, as below).
Artificial intelligence (Artificial Intelligence, AI) is a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making. Artificial intelligence is a comprehensive discipline covering a wide range of fields, at both the hardware level and the software level. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning. The technical solution provided by the embodiment of the application mainly relates to computer vision and machine learning/deep learning within artificial intelligence.
Computer vision (Computer Vision, CV) is a science that studies how to make machines "see"; more specifically, it replaces human eyes with cameras and computers to perform machine vision tasks such as identifying and measuring targets, and further performs graphic processing so that the result becomes an image more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.
The technical scheme provided by the embodiment of the application mainly relates to image recognition in the computer vision technology. Specifically, the iris recognition method of the application realizes the identification of the object to be recognized by recognizing the iris image acquired based on the object to be recognized.
The following briefly describes the design concept of the embodiment of the present application:
Biometric identification is a recently developed identification technology; simply put, it performs identity verification using the physiological characteristics of living beings. Compared with traditional identification techniques (such as door keys and passwords), biometric identification offers higher stability, security and portability.
Iris recognition is a biometric technology that performs identity authentication using the iris texture features of the eye, and it has unique advantages over other biometric technologies such as face recognition, palm print recognition and fingerprint recognition. First, the iris has extremely strong biological activity, tied to the life signs of the human body, so replacing a living iris with a photo or video is not feasible. Second, the iris is extremely stable: it forms before birth, is fully shaped 6-18 months after birth, and remains unchanged for the rest of life. Finally, the iris is unique: the information contained in each iris differs, with extremely high randomness, and even the iris textures of the left and right eyes of the same person differ from each other.
Specifically, iris recognition performs identity recognition by analyzing the iris texture features in an individual's eyes. Owing to its high reliability and uniqueness, it is widely applied in security authentication and identity verification scenarios.
In the iris recognition method in the related art, iris images are acquired by an iris acquisition device, the pupil edge in the iris image is then detected, mainly by methods such as edge detection and morphological processing, and the iris region is determined on the basis of the located pupil edge so as to perform iris recognition.
However, under the conditions of pupil size change, light ray change, pupil edge blurring and the like, the edge positioning effect obtained by adopting the method is poor, so that the accuracy of iris recognition is influenced.
In view of this, the embodiments of the present application provide an iris recognition method, apparatus, electronic device and storage medium. In the embodiment of the application, during iris recognition, an iris image group is acquired, and the pupil edge is located by performing stereo matching between every two iris images in the group. Because stereo matching can accurately locate the pupil edge under different illumination conditions, facial poses and expressions, and is robust to interference factors such as noise and occlusion, this positioning approach effectively reduces pupil edge positioning errors.
In summary, using stereo matching techniques can more accurately locate pupil edges under complex illumination and pupil size variations. On the basis of accurately finding the edge of the pupil, a more accurate iris range can be determined, the iris recognition accuracy is improved, and the method still has a better recognition effect under the complex conditions of pupil size change, light change, pupil edge blurring and the like.
In addition, the stereo matching technology has higher real-time performance, can rapidly detect and position pupil edges, can meet the requirements of real-time application, can be optimized and improved according to different application scenes and requirements, and has higher expandability and flexibility.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
The scheme provided by the embodiment of the application can be suitable for iris recognition, such as iris recognition during identity recognition, and can be applied to various scenes as a basic technology, including but not limited to cloud technology, artificial intelligence, intelligent transportation, auxiliary driving and other scenes. Fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application. The application scene graph comprises an iris acquisition device 101 and an iris recognition device 102.
The iris acquisition device 101 is used as a front-end device, and can be used for acquiring iris images of an eye region of an object to be identified, and can include, but is not limited to, a contact iris acquisition instrument, an intelligent iris face integrated machine, a portable iris recognition device, a remote noninductive iris acquisition recognition device and other devices specially used for iris image acquisition, and can also be a mobile phone, a tablet personal computer (PAD), a notebook computer, a desktop computer, an intelligent vehicle-mounted device, an intelligent voice interaction device, an intelligent household appliance, an intelligent wearable device, an aircraft and other terminal devices with iris image acquisition functions, which are not particularly limited herein.
Iris recognition technology is widely used in various fields due to its high safety and accuracy. Along with the development of technology, iris acquisition equipment is also continuously advancing, becomes more efficient and convenient, and can adapt to various different application scenes. In selecting iris acquisition devices, it is desirable to consider the acquisition efficiency, accuracy, portability of the device, and whether the requirements of a particular application are met.
As shown in fig. 1, taking an iris acquisition device such as a binocular camera as an example, an object 1012 to be identified is photographed by the binocular camera to acquire an iris image, as shown by 1011. Wherein the object to be identified 1012 shown in fig. 1 is a person. Of course, in addition to this, the iris recognition method in the embodiment of the present application is also applicable to other individuals with irises, such as other primates, other mammals, birds, etc., and will not be described in detail herein.
The following is a brief description of the human example:
Fig. 2 is a schematic diagram of an iris image of a human eye according to an embodiment of the present application. The human eye is composed of the sclera, iris, pupil, lens, retina, etc. As shown in fig. 2, the iris (the gray area in fig. 2) is the annular portion between the black pupil and the white sclera, which contains numerous interlaced spots, filaments, crowns, stripes, crypts and other details. Under infrared illumination at certain wavelengths (generally between 700 and 900 nanometers), the iris presents a radial structure from inside to outside; these fine features are called the texture features of the iris, are unique, and have important application value in various fields.
It should be noted that, the iris image is only a simple example, and detailed features such as texture included in the actual iris may not be represented in the drawings herein, but are not meant to be included in the iris image, so as to be described.
The iris recognition device 102 may be any of various electronic devices with an iris recognition function, such as a security access control system or an identity authentication device, and may also be a terminal device, a server, etc. In the case where the iris recognition device 102 is a server, it may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), big data and artificial intelligence platforms.
If the iris acquisition device is a terminal device, the terminal device may be further provided with a client related to iris recognition, where the client may be software (such as payment software, iris recognition software, etc.), or may be a web page, an applet, etc., and the server is a background server corresponding to the software or web page, applet, etc., or a server specifically used for iris recognition, and the application is not limited specifically.
In practical application, the iris acquisition device 101 sends an iris image group containing a plurality of iris images to the iris recognition device 102. Using the iris recognition method of the embodiment of the application, the iris recognition device 102 performs stereo matching between every two iris images in the group to obtain corresponding disparity maps, locates the pupil edge in any one iris image of the group based on the obtained disparity maps, determines the iris region in that iris image according to the pupil edge, extracts features from the iris region to obtain corresponding iris features, and finally performs identity recognition on the object to be recognized based on the extracted iris features.
It should be noted that, in practical application, when the processing capability of the iris acquisition apparatus is sufficient, the iris recognition apparatus 102 and the iris acquisition apparatus 101 may be implemented by the same apparatus, which is not limited in the embodiment of the present application.
In the embodiment of the present application, the iris acquisition device 101 and the iris recognition device 102 may be directly or indirectly connected through one or more networks. The network may be a wired network, or may be a Wireless network, for example, a mobile cellular network, or may be a Wireless-Fidelity (WIFI) network, or may be other possible networks, which embodiments of the present application are not limited in this respect.
It should be noted that, the number of iris acquisition devices and iris recognition devices shown in fig. 1 is merely illustrative, and the number of iris acquisition devices and iris recognition devices is not limited in practice, and the embodiment of the present application is not particularly limited.
In the embodiment of the application, when there are multiple servers, they may form a blockchain, with each server being a node on the blockchain. The iris recognition method disclosed in the embodiment of the application may save related iris recognition data on the blockchain, such as iris images, disparity maps, pupil edges, iris regions, iris features, identity recognition results, and the like.
The following list some common iris recognition application scenarios:
(1) Access control: iris recognition can be used in the access control systems of enterprises, government institutions, laboratories and the like, so that only authorized personnel can enter specific areas.
(2) Finance: iris recognition can be used in banking, automated teller machines (ATMs) and other financial scenarios, providing a secure and convenient way to verify customer identity.
(3) Electronic device unlocking: iris recognition can be used to unlock electronic devices such as smartphones and tablet computers, providing a safer and more convenient unlocking method.
(4) Attendance: iris recognition can be used in the attendance systems of enterprises and schools, ensuring that the attendance records of staff or students are accurate.
(5) Medical industry: iris recognition can be used by medical institutions to verify patient identity, ensuring the accuracy and safety of medical information.
(6) Driving license examination: iris recognition can be used in driving license examinations to ensure the authenticity of examinee identity and prevent impersonation.
(7) Voting systems: iris recognition can be used in voting systems to ensure the authenticity of voter identity and the fairness of elections.
(8) Smart home: iris recognition can be used in smart home systems to identify family members and provide personalized home services.
It should be noted that, the above-listed iris recognition scenarios are also only examples, and other iris recognition scenarios are also applicable to the embodiments of the present application, and are not described herein in detail.
It will be appreciated that in particular embodiments of the present application, related data such as iris images are involved, and that when the above embodiments of the present application are applied to particular products or technologies, subject permissions or consents need to be obtained, and that the collection, use and processing of related data is required to comply with relevant laws and regulations and standards of the relevant countries and regions.
The iris recognition method provided by the exemplary embodiments of the present application will be described below with reference to the accompanying drawings in conjunction with the above-described application scenarios, it being noted that the above-described application scenarios are only shown for the convenience of understanding the spirit and principle of the present application, and the embodiments of the present application are not limited in any way in this respect.
Referring to fig. 3, a flowchart of an implementation of an iris recognition method according to an embodiment of the present application is shown, where the implementation of the method is as follows:
S31, acquiring an iris image group acquired for an object to be identified.
The iris image group comprises a plurality of iris images acquired at different acquisition view angles for the same eye region of the object to be identified.
Specifically, this step is mainly for acquiring iris images of the object (individual) to be identified. In the present application, a dedicated iris acquisition device, such as an iris recognition camera, a contact iris acquisition instrument, a non-contact iris acquisition instrument, etc., may be generally used to capture a clear iris image.
The iris image acquired in the application needs to be subjected to stereo matching, and the stereo matching technology determines the positions of corresponding points in the image by comparing the similarity of two or more stereo views, thereby realizing three-dimensional reconstruction or object positioning.
Thus, it is necessary to perform the acquisition of iris images at different acquisition perspectives for the same eye region of the object to be identified to obtain a plurality of iris images, wherein each acquisition perspectives corresponds to at least one iris image.
For example, a left camera and a right camera of a binocular camera are adopted to shoot left and right view images of the same scene, wherein the left and right cameras correspond to different acquisition view angles, so that iris images of the same eye region of an object to be identified are acquired through the binocular camera, and iris images containing a certain overlapping region under two different acquisition view angles can be obtained so as to be matched.
For another example, a monocular or other camera may be used to switch between different acquisition perspectives to acquire iris images of the same eye region of the subject to be identified, and so on.
It should be noted that, any iris acquisition device is adopted, and the method for acquiring the iris image set for the object to be identified is applicable to the embodiment of the present application, and will not be described in detail herein.
In order to further improve the image quality and thus the accuracy of subsequent pupil edge positioning, some preprocessing operations for enhancing the image quality can be performed on the acquired iris image.
In an alternative embodiment, after step S31 and before step S32, the acquired iris image may be further preprocessed by at least one of the following methods:
In the first mode, denoising is performed on the iris image.
In particular, image denoising aims to eliminate noise in an image to improve the quality and visual effect of the image. Noise may come from sensor errors, illumination non-uniformities, etc. during image acquisition.
In the embodiment of the application, a plurality of methods for denoising iris images are provided, including but not limited to the following parts or all:
and (one) denoising through a filter.
Such as mean filters, gaussian filters, median filters, bilateral filters, etc.
The mean filter smooths the iris image and reduces noise by replacing each pixel value with the average of its neighborhood. The Gaussian filter achieves a smoothing effect by weighted-averaging the iris image with a Gaussian function as the weight. The median filter takes a fixed-size neighborhood (such as 3x3 or 5x5) centered on each pixel, sorts the pixel values within the neighborhood, and takes the median as the new value of the current pixel; this can effectively retain image edge information while removing noise. The bilateral filter combines spatial proximity and pixel-value similarity to remove noise while preserving edge information.
In iris recognition, a common denoising method is median filtering, which can effectively eliminate salt-and-pepper noise (see the sketch after this list).
And (II) non-local means denoising, namely averaging over the whole iris image by exploiting its self-similarity, effectively retaining texture and detail.
And thirdly, wavelet transformation, namely performing wavelet decomposition of different layers on the iris image through multi-scale analysis, performing threshold processing on detail coefficients, and finally reconstructing to obtain the denoised iris image.
And (IV) anisotropic diffusion, namely simulating a heat conduction process, and gradually smoothing the iris image through iterative calculation while retaining edges.
And (V) total variation denoising, namely denoising by minimizing the total variation of the iris image, and maintaining the edge and texture information of the iris image.
And (six) low-rank matrix recovery, namely regarding the iris image as a matrix, and removing noise by solving a low-rank matrix recovery problem.
And (seven) deep learning methods, namely learning the prior knowledge of iris images using deep neural networks, such as convolutional neural networks (Convolutional Neural Networks, CNN) and generative adversarial networks (Generative Adversarial Networks, GAN), to achieve efficient denoising.
It should be noted that, the above-listed image denoising method is only a simple example, and other methods are also applicable to the embodiments of the present application, and are not described in detail herein. Furthermore, the selection of an appropriate denoising method depends on the noise type, image content, and application scenario. In practical applications, it may be necessary to combine various methods or adjust parameters to obtain the best denoising effect, which is not particularly limited herein.
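As a minimal sketch of the filter-based options above (kernel sizes and parameters are illustrative assumptions, and "iris.png" is a placeholder path):

```python
import cv2

# Load an iris image in grayscale ("iris.png" is a hypothetical path).
iris = cv2.imread("iris.png", cv2.IMREAD_GRAYSCALE)

# Median filtering over a 5x5 window removes salt-and-pepper noise while
# preserving the pupil/iris edges better than simple mean filtering.
denoised = cv2.medianBlur(iris, 5)

# Alternatives from the list above: Gaussian and bilateral filtering.
smoothed = cv2.GaussianBlur(iris, (5, 5), sigmaX=1.5)
edge_kept = cv2.bilateralFilter(iris, d=9, sigmaColor=75, sigmaSpace=75)
```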
And secondly, carrying out histogram equalization processing on the iris image.
Histogram equalization is an image enhancement method that can improve the contrast of the image, making the iris texture more pronounced. The basic idea is to widen the gray level with high frequency of occurrence in the image and reduce the gray level with low frequency of occurrence, so that the gray histogram of the image tends to be uniform, thereby improving the overall contrast and visual effect of the image.
Specifically, the steps of performing histogram equalization processing on an iris image are summarized as follows:
(1) The original iris image is converted into a gray image, the gray level of the gray image is determined, and a gray histogram is obtained through statistics, wherein the gray histogram represents the number distribution of each gray level pixel in the iris image.
Fig. 5 is a schematic diagram of a gray level histogram according to an embodiment of the present application. Where the horizontal axis k in fig. 5 represents the gray level, the vertical axis h (k) represents the number of pixels, fig. 5 simply illustrates 4 gray levels, and the number distribution of each gray level pixel, as in fig. 5, 4 pixels at gray level 0, 5 pixels at gray level 1,3 pixels at gray level 2, and 4 pixels at gray level 3.
It should be noted that, in practice, the gray level in the image will be greater (typically 256), the number of pixels in the image will be greater, and fig. 5 is only a simple example and will not be described in detail herein.
(2) A cumulative distribution function (Cumulative Distribution Function, CDF) for each gray level is calculated.
The cumulative distribution function is simply called as the distribution function, is the integral of the probability density function, and can completely describe the probability distribution of a real random variable.
For example, suppose the original iris image has L gray levels in total (typically 256), with n_i pixels at gray level i, 0 ≤ i < L. The probability of a pixel of gray level i occurring in the image is p_x(i) = p(x = i) = n_i / n, where n is the total number of pixels in the image.
The cumulative distribution function of p_x is the cumulative normalized histogram of the image:
cdf_x(i) = Σ_{j=0}^{i} p_x(j)
(3) The gray values of the original iris image are mapped to new gray values using the cumulative distribution function, so that the gray distribution of the new image is more uniform.
The specific mapping formula is as follows:
h(v) = round( (cdf(v) − cdf_min) / (M·N − cdf_min) × (L − 1) )
where cdf_min is the minimum value of the cumulative distribution function, M and N are the numbers of pixels along the image height and width respectively, v is the original gray value, and h(v) is the new gray value after mapping.
In the embodiment of the application, the histogram equalization can simply and effectively improve the contrast of the image, and especially for the image with smaller dynamic range, the contrast of the iris image is improved after the histogram equalization treatment, so that details such as iris textures in the iris image are clearer, the boundary between the pupil and the iris is clearer, and the subsequent operations such as pupil edge positioning and iris feature extraction are facilitated.
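The mapping above can be sketched directly in Python/NumPy (a minimal illustration of the formula; on 8-bit images, OpenCV's cv2.equalizeHist gives an equivalent result):

```python
import numpy as np

def equalize_histogram(img, L=256):
    """Histogram equalization via the mapping h(v) defined above."""
    hist = np.bincount(img.ravel(), minlength=L)  # n_i for each gray level
    cdf = hist.cumsum()                           # unnormalized CDF
    cdf_min = cdf[cdf > 0].min()                  # cdf_min in the formula
    # h(v) = round((cdf(v) - cdf_min) / (M*N - cdf_min) * (L - 1))
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * (L - 1))
    lut = np.clip(lut, 0, L - 1).astype(np.uint8)
    return lut[img]  # map every original gray value to its new value
```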
And mode three, gray scale stretching.
The contrast range is extended by linearly stretching the pixel value of the iris image, so that the details of the iris image are more obvious.
In addition to the above-mentioned image quality enhancement methods, image enhancement may be performed in other ways to improve image quality, such as sharpening, color correction, etc., and may also be used to improve image quality.
It should be noted that each method has its applicable scene and advantages, and the selection of the appropriate method depends on the specific application requirements and the characteristics of the image. In the embodiment of the application, the purpose of image enhancement is to improve the visual effect of the image, so that the image is more suitable for further analysis and processing, such as subsequent pupil edge positioning, iris feature extraction and the like. In practical applications, it may be desirable to combine several methods to achieve the best enhancement, without specific limitation.
In the embodiment, the image quality is improved through a certain preprocessing operation, so that the related detailed features of the iris, the pupil and the like in the image are clearer, and the accuracy of the subsequent pupil edge positioning can be effectively improved.
S32, respectively carrying out stereo matching between every two iris images in the iris image group to obtain corresponding parallax images.
Wherein the disparity elements in each disparity map represent the displacement, in a specified direction, of each pair of corresponding points between the corresponding two iris images.
Specifically, the specified direction is related to the placement position of the iris acquisition device.
Taking a binocular camera as an example, the disparity map is in fact the deviation between the pixel positions at which the same scene is imaged by the two cameras; because the two cameras of the binocular rig are placed horizontally, this position deviation appears in the horizontal direction. Of course, if the cameras are not placed horizontally, the specified direction may be another direction, which is not analyzed in detail here.
In the embodiment of the application, when the number of iris images in the iris image group is different, the number of parallax images obtained is also different:
If the iris image group only contains two iris images, the two iris images are subjected to stereo matching, so that a parallax image can be obtained, and the parallax image reflects the displacement of corresponding points in the two iris images in a specified direction (such as a horizontal direction).
If the iris image group contains three or more iris images, all of them correspond to the same eye area of the object to be identified, so a certain overlapping area exists between every two iris images (if an iris image has no overlapping area with the others, or its overlapping area is unrelated to the eye, that image can be ignored). Stereo matching can thus be performed between every two iris images in the group, yielding multiple disparity maps, with every pair of iris images corresponding to one disparity map.
Fig. 4 is a logic diagram of stereo matching of four iris images according to an embodiment of the present application.
Assume there are four iris images in the iris image group, denoted iris image 1, iris image 2, iris image 3 and iris image 4. Stereo matching iris image 1 with iris image 2 yields disparity map 1; iris image 1 with iris image 3 yields disparity map 2; iris image 1 with iris image 4 yields disparity map 3; iris image 2 with iris image 3 yields disparity map 4; iris image 2 with iris image 4 yields disparity map 5; and iris image 3 with iris image 4 yields disparity map 6.
It should be noted that, in the embodiment of the present application, a disparity map and the iris images it is computed from are identical in size; that is, the pixels of the disparity map correspond one-to-one to the pixels of the iris image.
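A minimal sketch of this pairwise matching (the SGBM matcher and its parameters are assumptions; any stereo matcher producing a dense disparity map, as described below, would fit):

```python
import cv2
from itertools import combinations

def pairwise_disparity_maps(iris_images):
    """Compute one disparity map per pair of rectified grayscale iris images."""
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=64,  # multiple of 16
                                    blockSize=7)
    maps = {}
    for (i, left), (j, right) in combinations(enumerate(iris_images), 2):
        disp = matcher.compute(left, right).astype("float32") / 16.0
        maps[(i, j)] = disp  # same size as the input iris images
    return maps
```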
The following describes a process of obtaining a disparity map by performing stereo matching between each two iris images:
Stereo matching is a computer vision technique for finding the position of a corresponding point in two stereo images, thereby realizing three-dimensional reconstruction or object positioning. In iris recognition, stereo matching may be used to locate the pupil edge.
In an alternative embodiment, the stereo matching process in S32 may be implemented according to the following steps, including the following steps S321 to S324 (not shown in fig. 3):
and S321, correcting the two iris images to eliminate parallax between the two iris images.
Specifically, stereo matching requires two iris images. Taking the two cameras of a binocular camera as an example, the image captured by the left camera is denoted the left view, and the image captured by the right camera is denoted the right view.
Fig. 6 is a schematic diagram of two iris images in an embodiment of the application. The left view in fig. 6 is an image of one eye of the object to be identified, which is captured by the left camera of the binocular camera, the right view is an image of the eye, which is captured by the right camera of the binocular camera, and both images may be referred to as iris images, and the two images have a certain overlapping area so as to be matched.
Further, before stereo matching is performed, the images may be corrected to eliminate the vertical parallax between the left view and the right view. Parallax is caused by the difference in position between the left and right cameras and affects the accuracy of stereo matching. There are many correction methods, such as binocular stereo rectification (Stereo Rectification), perspective transformation and affine transformation, so that later, when forming the disparity map, the value at each position only needs to represent the horizontal displacement of the pixels between the left and right views.
Binocular stereo correction is an image preprocessing method for eliminating parallax between a pair of binocular images such that left and right views are on the same horizontal line. Therefore, the stereo matching process can be simplified, and the matching accuracy is improved. Binocular stereo correction is typically achieved by computing the internal and external parameters of the camera, and geometrically transforming the image.
Stereo rectification makes the two de-distorted images correspond strictly row by row: using the epipolar constraint, the epipolar lines of the two images are brought exactly onto the same horizontal lines, so that any point on one image and its corresponding point on the other image have the same row number, and the corresponding point can be found by a one-dimensional search along that row.
Binocular rectification uses the monocular intrinsic parameters (focal length, principal point, distortion coefficients) and the relative pose of the two cameras (rotation matrix and translation vector) obtained from camera calibration to remove distortion and align the rows of the left and right views, so that the imaging origin coordinates of the left and right views are consistent, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned.
Fig. 7 is a schematic diagram of a correction principle in an embodiment of the present application.
Assuming that there is a point P in space whose coordinates in the world coordinate system are Pw, its coordinates in the left camera coordinate system in the binocular camera may be represented as Pl and its coordinates in the right camera coordinate system in the binocular camera may be represented as Pr.
As is apparent from fig. 7, the two corrected images achieve planar coplanarity and alignment of epipolar lines, so that any point on one image and a corresponding point on the other image have the same line number, and thus, the corresponding points can be matched only by one-dimensional search in the subsequent line, and the matching efficiency of the subsequent corresponding points is improved.
In iris recognition, the binocular stereo correction method is adopted to carry out stereo correction on two iris images. Fig. 8 is a schematic diagram of iris images before and after correction in an embodiment of the application. The two iris images on the left side of fig. 8 represent left and right view images captured by the binocular camera in the same eye area of the object to be identified before correction, and the two iris images after binocular stereo correction are shown on the right side of fig. 8.
In the embodiment, the searching range of the matching of the corresponding points in the two iris images is reduced from two dimensions to one dimension through image correction, so that the efficiency is greatly improved.
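A minimal sketch of binocular stereo rectification with OpenCV (the calibration inputs K1/D1/K2/D2/R/T are assumed to come from a prior camera calibration step):

```python
import cv2

def rectify_pair(left_img, right_img, K1, D1, K2, D2, R, T):
    """Row-align two iris images using calibrated stereo parameters.

    K1/K2: 3x3 camera intrinsics; D1/D2: distortion coefficients;
    R/T: rotation and translation of the right camera w.r.t. the left.
    """
    h, w = left_img.shape[:2]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, (w, h), R, T)
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, (w, h), cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, (w, h), cv2.CV_32FC1)
    left_rect = cv2.remap(left_img, m1x, m1y, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right_img, m2x, m2y, cv2.INTER_LINEAR)
    return left_rect, right_rect  # epipolar lines now lie on the same rows
```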
S322, extracting pupil characteristic points from each corrected iris image respectively.
In the embodiment of the application, in the corrected image, the characteristic points of the pupils need to be extracted so as to be matched. Specifically, the extracted pupil feature points include, but are not limited to, some or all of the following:
(One) Corner points.
Where corner points are one of the local features in an iris image, generally points with significant angular variations. The corner points have good stability and distinguishability.
In the embodiment of the application, algorithms for extracting corner points include Harris corner detection, Shi-Tomasi corner detection, and the like, which are not particularly limited herein.
(II) Edge points.
The edge points are also one of local features in the image, and generally refer to points with significantly changed gray values in the iris image, and can be used for describing contour and shape information of an object.
In the embodiment of the present application, algorithms for extracting edge points include Canny edge detection, Sobel edge detection, Roberts edge detection, Prewitt edge detection, Laplacian edge detection, Laplacian of Gaussian (LoG) edge detection, and the like, which are not particularly limited herein.
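By way of example, the two kinds of feature points can be extracted as sketched below, using Shi-Tomasi corner detection and Canny edge detection from the algorithms listed above; the parameter values are illustrative assumptions rather than values prescribed by this embodiment.

```python
import cv2
import numpy as np

def extract_pupil_feature_points(gray):
    """Extract candidate pupil feature points from a rectified gray image."""
    # Corner points: Shi-Tomasi "good features to track".
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=200, qualityLevel=0.01, minDistance=5)
    corners = np.empty((0, 2)) if corners is None else corners.reshape(-1, 2)
    # Edge points: pixels marked by the Canny detector.
    edges = cv2.Canny(gray, 50, 150)
    edge_points = np.column_stack(np.nonzero(edges))[:, ::-1]  # as (x, y)
    return corners, edge_points
```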
S323, determining corresponding points from the extracted pupil characteristic points through stereo matching.
After feature points are extracted, a stereo matching algorithm may be used to find corresponding points in the two iris images (left and right views as listed above).
Among these, stereo matching algorithms include, but are not limited to, some or all of the following:
Region-based matching (Area-based Matching), feature-based matching (Feature-based Matching), and the like.
In the embodiment of the application, stereo matching is mainly used for pupil positioning in iris recognition, and takes an area-based matching algorithm as an example, namely, the positions of corresponding points are determined by comparing pixel value similarity in two iris images (such as a left view and a right view listed above).
Specifically, the measurement method of the similarity of the pixel values includes, but is not limited to, some or all of the following:
(I) Sum of squared differences (Sum of Squared Differences, SSD) algorithm.
Wherein SSD is a similarity measure for calculating the difference between two image areas, a smaller SSD value indicating that the two areas are more similar.
In the embodiment of the application, SSD values between the pupil feature points extracted from the two iris images can be computed with the SSD algorithm, and corresponding points are determined from these values. For example, when the SSD value between two pupil feature points is smaller than a preset SSD threshold, the two pupil feature points can be considered a set of corresponding points in the two iris images.
(II) Normalized cross-correlation (Normalized Cross Correlation, NCC) algorithm.
Where NCC is another similarity measure for calculating the correlation between two image regions, a larger NCC value indicates that the two regions are more similar.
In the embodiment of the application, the NCC values of the pupil characteristic points extracted from the two iris images can be compared through an NCC algorithm, and corresponding points are analyzed according to the values. For example, when the NCC value between two pupil feature points is greater than a preset NCC threshold, the two pupil feature points can be considered as a set of corresponding points in the two iris images.
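The two similarity measures can be written down directly, as in the hedged sketch below; the patch-based one-dimensional search along a single row relies on the rectification performed earlier, and the window size and search range are illustrative assumptions.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: smaller means more similar."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.sum(d * d))

def ncc(a, b):
    """Normalized cross-correlation: closer to 1 means more similar."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

def match_along_row(left, right, x, y, half=3, max_disp=64):
    """Find the corresponding point of (x, y) by a 1-D search on the row."""
    tpl = left[y - half:y + half + 1, x - half:x + half + 1]
    best_score, best_x = -1.0, None
    for d in range(max_disp):
        xr = x - d
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
        score = ncc(tpl, cand)
        if score > best_score:
            best_score, best_x = score, xr
    return best_x, best_score
```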
It should be noted that, the above-listed manner of stereo matching two iris images is only a simple example, and other stereo matching manners are also applicable to the embodiments of the present application, and are not described herein again.
S324, based on the determined positions of the corresponding points in the corresponding iris images, obtaining parallax images corresponding to the two iris images.
In particular, when the human eye (or camera) views the same object from two slightly different viewpoints, two slightly different images are seen. This difference between images is referred to herein as parallax. Taking the human eye as an example, when we concentrate the line of sight on distant objects, the positions of near objects in the images seen by both eyes appear to be greatly different, in which case the parallax is greater. Conversely, distant objects have less difference in position in the images seen by the two eyes and therefore less parallax. Based on this principle, our brain can understand the three-dimensional structure of the world we see by resolving these differences (parallaxes).
In the present context, parallax refers to this spatial difference between images, i.e. the offset of a point's position in one image relative to its position in the other. It can be expressed as a value (e.g. the pixel difference in position) or as a brightness level (in a disparity map). Such differences, or "disparities", can be used to calculate the depth or distance of an object.
In the embodiment of the application, a parallax map is generated in the process of stereo matching of two iris images, wherein the value of each pixel in the parallax map can represent the horizontal displacement of the corresponding point in the left view and the right view, and the larger value represents that the object is closer to an observer (such as a binocular camera).
Assume a simplified disparity map, represented as the following two-dimensional matrix:

1 1 1 1 1
1 3 3 3 1
1 3 5 3 1
1 3 3 3 1
1 1 1 1 1
In the disparity map, a larger value indicates that the object is closer to the observer.
This simplified disparity map represents a pupil-like structure. In particular, in this simplified example, it can be seen that the central value is 5 and the peripheral values gradually decrease, the value change in the center of the matrix being the greatest, which means that the pupil edge may be located in this area.
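In practice the dense disparity map need not be assembled by hand; a standard stereo matcher can produce it from the rectified pair. The sketch below uses OpenCV's semi-global block matching as one possible choice, with illustrative parameter values.

```python
import cv2

def disparity_map(rect_l, rect_r):
    """Compute a disparity map from a rectified gray image pair."""
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # search range; must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,        # smoothness penalty (small disparity change)
        P2=32 * 5 * 5,       # smoothness penalty (large disparity change)
        uniquenessRatio=10)
    # OpenCV returns fixed-point disparities scaled by 16.
    return sgbm.compute(rect_l, rect_r).astype('float32') / 16.0
```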
In the embodiment of the application, the pupil edge can be more accurately positioned under the condition of complex illumination and pupil size change by using the stereo matching technology. On the basis of accurately finding the edge of the pupil, a more accurate iris range can be determined, the iris recognition accuracy is improved, and the method still has a better recognition effect under the complex conditions of pupil size change, light change, pupil edge blurring and the like.
Specifically, after the disparity map is obtained with a stereo matching algorithm, the position of the pupil can be determined by calculating its depth information, thereby accurately locating the pupil edge. The specific process is as follows:
And S33, positioning the pupil edge from any one iris image contained in the iris image group based on each obtained parallax image.
As described in the above embodiments, the iris image group in the embodiments of the present application may include two iris images, or three or more. When the iris image group includes two iris images, one disparity map is obtained; when it includes three or more iris images, multiple disparity maps are obtained.
Thus, in implementing step S33, different positioning methods may be set according to the number of disparity maps.
Specifically, if a disparity map is obtained, the pupil edge is located from any one of the iris images included in the iris image group directly based on the disparity map.
If multiple parallax images are obtained, the following two positioning methods can be specifically classified:
The first positioning mode is disparity-map fusion positioning.
The method includes the steps of performing image fusion on a plurality of parallax images, and positioning pupil edges from any one iris image contained in an iris image group based on the fused parallax images.
Optionally, when the multiple parallax images are fused, an optional implementation manner is as follows:
Operate at the pixel level: take a weighted average (or another form of combination) of the values of corresponding points on the different disparity maps to obtain the final disparity map; a sketch of this operation is given after the example below.
Fig. 9 is a schematic diagram of disparity map fusion according to an embodiment of the present application. Assuming that three simplified disparity maps are provided, each of which is represented as a two-dimensional matrix, the three disparity maps shown in fig. 9 can be represented as:
In each disparity map, a larger value indicates that the object is closer to the viewer.
When the disparity maps are fused, the values of corresponding points in the three disparity maps can be averaged, and the finally fused disparity maps can be expressed as the following two-dimensional matrix:
It should be noted that, the above-listed manner of fusion of parallax images is only a simple example, and other image fusion manners are also applicable to the embodiments of the present application, and are not described herein in detail.
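A minimal sketch of the pixel-level fusion just described is given below; equal weights reproduce the plain averaging of the fig. 9 example, and the weights themselves are an assumption to be tuned in practice.

```python
import numpy as np

def fuse_disparity_maps(disp_maps, weights=None):
    """Fuse several disparity maps by a per-pixel weighted average."""
    stack = np.stack([d.astype(np.float64) for d in disp_maps])
    if weights is None:                      # default: plain average
        weights = [1.0 / len(disp_maps)] * len(disp_maps)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    return np.sum(stack * w, axis=0)
```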
The second positioning mode is pupil-edge fusion positioning.
In this mode, one pupil edge is located from the corresponding iris image for each disparity map, and the pupil edges so determined are then fused to obtain the fused pupil edge.
Specifically, when pupil edge fusion is performed, a plurality of pupil edges can be aligned in space position, for example, the corresponding relationship between the pupil edges can be found through feature point matching technologies such as corner detection and descriptor matching.
An image fusion algorithm may then be applied to merge these edges. Alternative image fusion methods include, but are not limited to, some or all of the following: alpha blending, pyramid fusion, and Poisson fusion. Alpha blending superimposes a foreground onto a background through transparency, while pyramid fusion and Poisson fusion provide different image fusion techniques that merge images while preserving image details.
Optionally, post-processing, such as morphological manipulation, may be performed on the fused pupil edges to remove possible gaps and discontinuities, ensuring edge continuity and integrity.
In the above embodiment, whether multiple disparity maps are fused, or multiple pupil-positioning results are fused to jointly determine a single pupil edge, pupil positioning accuracy can be improved to a certain extent.
In summary, the above-listed ways of locating the pupil edge from multiple views are only examples; other ways are also applicable to the embodiments of the present application and are not described here in detail.
In the embodiment of the present application, in either of the above modes, the pupil edge is ultimately determined based on a disparity map. Specifically, when locating the pupil edge based on a disparity map, gradient information in the disparity map may be calculated in order to find the pupil edge position. The gradient represents the rate of change of gray values in the image and can be used to detect edges: gray values change sharply at edge positions, and on this basis the pupil edge positions can be obtained through subsequent threshold processing. An alternative embodiment is:
fig. 10 is a flowchart of a pupil edge positioning method according to an embodiment of the present application, where positioning a pupil edge from an iris image in the manner shown in fig. 10 includes the following steps S101 to S103:
S101, extracting gradient information in the parallax map through an edge detection operator to obtain a gradient map corresponding to the parallax map.
The gradient element at a certain position in the gradient map represents the change rate of the gray value of the pixel point corresponding to the position in the parallax map.
S102, determining pupil edge points from the parallax map based on a preset gradient threshold.
And S103, positioning the pupil edge from an iris image according to the determined pupil edge point.
In the embodiment of the present application, the pupil is recessed within the eyeball, so its depth differs from that of the surrounding iris; in the disparity map this makes the pupil region stand out from the iris region (e.g., the value 5 at the center of the simplified matrix above), and the sharp change in disparity values at the boundary of this region marks the pupil edge.
Next, the position of the pupil edge needs to be found from the disparity map. To find the edges, the method used here computes the gray-level gradient of the disparity map. A gradient is a vector pointing in the direction in which the directional derivative of a function (here, the gray value of the image) is greatest, with magnitude equal to that maximum rate of change. In images, the gradient magnitude is often used as a measure of edge strength.
Taking the disparity map matrix listed above as an example, the values in the center of the matrix vary the most, from 5 to 3 to 1, a significant variation, while the surrounding values are relatively stable (all 1). The gradient therefore takes larger values in this region. Accordingly, a gradient map can be obtained by calculating the difference (i.e., gradient) between each pixel point and its neighboring points.
In the embodiment of the application, the gradient map can be computed with an edge detection operator. The principle by which edge detection operators extract gradient information rests on discontinuities in local image characteristics: edges usually correspond to abrupt changes of gray level, color, or texture, and in the embodiment of the present application can be understood as abrupt gray-level changes in the iris image. Specifically, any one or more of the following edge detection operators can be used to detect edges.
Specifically, the edge detection operator includes, but is not limited to, part or all of the following:
Canny operator, Sobel operator, Roberts operator, Prewitt operator, Laplacian operator, LoG operator.
In general, these operators have advantages and disadvantages, and the applicable conditions are different. For example, the Roberts operator is simple and fast but sensitive to noise, the Sobel operator and the Prewitt operator have certain resistance to noise but may lose some edge information, the Laplacian operator has accurate edge positioning but is very sensitive to noise, and the Canny operator provides a more comprehensive edge detection method but is more complex in calculation. In practical applications, the selection of an appropriate edge detection operator needs to be determined according to specific image content and processing requirements, which are not specifically limited herein.
For example, gradient information in the disparity map is extracted through a Canny operator, and a gradient map corresponding to the disparity map can be obtained.
Like the disparity map listed above, the gradient map can be represented as a two-dimensional matrix of the same size as the original map, i.e. a 5×5 two-dimensional matrix.
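Step S101 can be sketched as below, with the Sobel operator standing in for any of the operators listed above; the kernel size is an illustrative assumption.

```python
import cv2

def gradient_map(disp):
    """Gradient-magnitude map of a disparity map (step S101)."""
    f = disp.astype('float32')
    gx = cv2.Sobel(f, cv2.CV_32F, 1, 0, ksize=3)  # horizontal rate of change
    gy = cv2.Sobel(f, cv2.CV_32F, 0, 1, ksize=3)  # vertical rate of change
    return cv2.magnitude(gx, gy)
```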
In the embodiment of the application, after the gradient map is obtained, a threshold is applied to determine which gradient values can be considered edges. This threshold may be fixed or may be calculated dynamically. By setting the threshold it is possible to determine which edges are significant and which are likely to be caused by noise, resulting in a final edge detection result.
A lower threshold detects more edges, but the result becomes more susceptible to noise in the image and more irrelevant features are picked up. Conversely, a high threshold will miss thin or short line segments. It is therefore important to select an appropriate threshold value.
An alternative dynamic determination is:
In the embodiment of the application, a threshold selection method with hysteresis can be adopted, using different thresholds to search for pupil edge points. An upper threshold is first used to find where an edge starts. Once a starting point is found, the path of pupil edge points is traced point by point across the image: a position is recorded as a pupil edge point while its value stays above the lower threshold, and recording stops once the value falls below that lower limit (a sketch of this two-threshold selection is given after this discussion).
This approach assumes that the pupil edge is a continuous boundary; it can follow blurred portions of the edge without marking isolated noise points in the image as pupil edge points.
Or considering that a single global threshold may not be sufficient to process the entire image, it may be considered to use an adaptive thresholding method to adjust the threshold according to the local characteristics of the image.
Still alternatively, a fixed threshold may be set empirically or experimentally, and the gradient map is then filtered against that threshold, selecting points whose gradient value exceeds it as pupil edge points. The optimal threshold can be determined through experimentation: a lower threshold may be applied first and then gradually increased while observing the edge-detection result. The ideal threshold should extract the true edges to the maximum extent while suppressing noise.
It should be noted that the above ways of selecting a threshold to find pupil edge points are only simple examples; other ways are also applicable to the embodiments of the present application and are not described here again.
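The hysteresis-style selection can be sketched as below: pixels above the upper threshold seed edges, which are then extended through connected pixels that stay above the lower threshold. This is a hedged sketch; the connected-component formulation is one way of realizing the point-by-point tracing described above.

```python
import numpy as np
from scipy import ndimage

def hysteresis_edge_points(grad, low, high):
    """Pick pupil edge points from a gradient map with two thresholds."""
    strong = grad >= high            # reliable edge seeds
    weak = grad >= low               # candidate edge pixels
    labels, n = ndimage.label(weak)  # connected components of candidates
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True  # components containing a seed
    keep[0] = False                  # drop the background label
    edge_mask = keep[labels]
    return np.column_stack(np.nonzero(edge_mask))  # (row, col) points
```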
In step S103, after determining the pupil edge point, a least square parabolic fitting method (or other fitting methods) may be used to calculate the coordinates of the extreme points of the edge points in the left and right fixed areas, so as to obtain the initial center coordinates and the radius of the pupil, and further determine the pupil edge.
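For the fitting step, a least-squares circle fit is a common alternative to the parabolic fitting mentioned above; the sketch below solves the algebraic (Kåsa) formulation to obtain the initial pupil center and radius, and is an assumption-level illustration rather than the embodiment's prescribed method.

```python
import numpy as np

def fit_pupil_circle(points):
    """Least-squares circle fit over (x, y) pupil edge points."""
    x = points[:, 0].astype(np.float64)
    y = points[:, 1].astype(np.float64)
    # Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x * x + y * y
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx * cx + cy * cy)
    return (cx, cy), r
```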
In the above embodiment, the present application considers that the texture and structural features of the pupil region differ from those of the iris region; if the pupil region were not excluded, feature extraction could suffer interference.
The procedure after the pupil edge has been determined is described below:
S34, determining an iris region in any one iris image according to the pupil edge, and extracting the characteristics of any one iris region to obtain corresponding iris characteristics.
As can be seen from the eye diagram illustrated in fig. 2, the iris is an annular region located outside the pupil, so that after determining the pupil edge, the annular region within a certain range of the pupil edge can be used as the iris region, and an alternative embodiment is as follows:
and setting a preset margin around the pupil edge in any one iris image, and taking the annular area determined based on the pupil edge and the preset margin as an iris area in any one iris image.
Specifically, the preset margin represents the difference between the outer radius and the inner radius of the annular iris region, i.e., the annulus width in the embodiment of the present application. In general, different types of objects to be identified can correspond to different preset margins. For example, the iris annulus of an adult human is about 2-4 mm wide, though the range varies from person to person, so a preset margin of 3 mm may be set; the iris annulus of an infant is generally narrower, on the order of 1-2 mm, so a preset margin of 1.5 mm may be set.
As another example, for cats and dogs the width of the iris region varies with breed, age, and individual. Typically, the iris region of a cat is about 1-2 mm wide, so a preset margin of 1.5 mm may be set; the iris region of a dog is about 2-4 mm wide, so a preset margin of 3 mm may be set.
It should be noted that the above-listed preset margin values are only examples, and may be flexibly set in practice due to individual differences, etc., and are not particularly limited herein.
Fig. 11 is a schematic diagram of a manner of determining an iris region according to an embodiment of the application. The black circular area is the pupil and the white ring is the pupil edge; once the pupil edge and the preset margin r are determined, the annular region of width r outside the pupil edge is taken as the iris region, such as the gray area in fig. 11.
In the above embodiment, the position and size information of the pupil edge can help to determine the range of the iris, specifically, by setting a certain margin around the pupil edge, an iris region unrelated to the pupil can be obtained, and the region contains the main texture information of the iris, which is beneficial to improving the accuracy of feature extraction.
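Given the pupil center, pupil radius, and a preset margin (already converted from millimeters to pixels for the camera in use, which is an assumption of this sketch), the annular iris region can be carved out as follows.

```python
import numpy as np

def iris_region_mask(shape, center, pupil_radius, margin):
    """Boolean mask of the annulus between the pupil edge and edge+margin."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    dist = np.hypot(xx - center[0], yy - center[1])
    return (dist >= pupil_radius) & (dist <= pupil_radius + margin)
```

The masked iris image can then be handed to the feature extraction step, e.g. with iris = img * iris_region_mask(img.shape, (cx, cy), r, margin).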
After the iris region is determined in the above manner, the texture features of the iris need to be extracted. The main purpose is to obtain distinguishing features that reflect the texture and structural information of the iris image; these features should have high distinctiveness and stability for use in the subsequent recognition and matching process.
In the embodiment of the application, the feature extraction is performed on the iris region to obtain the corresponding iris feature, which comprises at least one of the following modes:
The first feature extraction mode is to extract features of the iris region through filters with different scales and directions to obtain corresponding response values, and to combine the response values extracted by the filters to form iris features corresponding to the iris region.
For example, a Gabor filter is used to extract the texture features of the iris region. Gabor filters are a method commonly used for texture analysis and feature extraction. It can capture local texture information of an image in different scales and directions.
The orientation of the Gabor filter is determined by the angle parameter θ of the filter kernel, which describes the orientation of the parallel stripes in the kernel; in practice the orientation parameter may take any real value from 0° to 360°, allowing the Gabor filter to respond to features in different directions in the image. This directional selectivity makes Gabor filters particularly suitable for processing texture information, and they find wide application in vision science because they can simulate the sensitivity of the human visual system to orientation.
The dimension of the Gabor filter is a parameter related to the frequency bandwidth and the directional selectivity of the Gabor filter. The scale parameter is typically related to the standard deviation sigma of the gaussian envelope function, which determines the width of the filter in the frequency domain. A larger sigma value means that the filter has a wider response range in the frequency domain, capturing more frequency components, while a smaller sigma value corresponds to a narrower frequency domain response, capturing only frequency components of a specific range. By adjusting these parameters, the Gabor filter can be designed to respond only to image features of a particular scale, which makes it very useful in multi-scale analysis.
In iris recognition in the present application, the Gabor filter can effectively extract iris texture features. Specifically, a series of Gabor filters with different scales and directions are applied to the iris image, the response of the Gabor filters captures rich details of the iris texture according to spectrum and space local characteristics, and the extracted response values can be used for representing texture information in the iris image, so that the response values can be combined to serve as feature vectors of the iris.
The selection of parameters such as the dimension, the direction and the like of the Gabor filter needs to adapt to the characteristics of the iris texture, and a self-adaptive method can be adopted to select proper parameters of the filter.
In the above embodiment, the Gabor filter can simulate the response of the human visual system, and can effectively extract iris features. And by adjusting parameters of the filter, the filter can be better adapted to the texture characteristics of the iris, so that the accuracy and the robustness of identification are improved.
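A small Gabor filter bank over several orientations and scales can be sketched as below; pooling the responses over the iris region into a vector is one simple way of combining the response values, and all parameter values here are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_iris_features(iris, mask,
                        thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                        sigmas=(2.0, 4.0)):
    """Concatenate pooled Gabor responses over the iris region."""
    feats = []
    for sigma in sigmas:           # scales
        for theta in thetas:       # orientations
            kern = cv2.getGaborKernel((21, 21), sigma, theta,
                                      lambd=8.0, gamma=0.5, psi=0)
            resp = cv2.filter2D(iris.astype(np.float32), cv2.CV_32F, kern)
            feats.append(resp[mask].mean())  # pooled response statistics
            feats.append(resp[mask].std())
    return np.asarray(feats)
```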
It should be noted that, in addition to the above-mentioned manner of extracting iris features by using Gabor filters, other filters may be used to extract iris features, for example, gaussian filters, butterworth filters, laplace filters, and the like, which are not described in detail herein.
In addition, when selecting the filter, the characteristics of the iris texture and the requirements of subsequent feature matching and classification need to be considered, and various filters and analysis methods can be combined to extract the iris texture features with more distinguishing components, such as a Gabor filter, a Gaussian filter and the like.
And in a second feature extraction mode, extracting iris features corresponding to the iris region by comparing gray values of each pixel point in the iris region and the corresponding neighborhood pixel points.
This approach is the local binary pattern (Local Binary Pattern, LBP), a texture-description method that can characterize local texture features in an image. Specifically, the LBP algorithm compares the gray value of each pixel with those of its neighboring pixels and generates a binary sequence as that pixel's LBP value.
In iris recognition in the present application, LBP may be used to extract local texture features of the iris. Applying LBP to the whole iris image can result in a feature vector describing the texture of the iris.
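A hedged LBP sketch using scikit-image is given below; the uniform-pattern histogram over the iris region serves as the texture feature vector, with the neighborhood parameters as assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_iris_features(iris, mask, n_points=8, radius=1):
    """Histogram of uniform LBP codes over the iris region."""
    codes = local_binary_pattern(iris, n_points, radius, method='uniform')
    # The 'uniform' method yields n_points + 2 distinct code values.
    hist, _ = np.histogram(codes[mask],
                           bins=n_points + 2, range=(0, n_points + 2))
    return hist / max(hist.sum(), 1)   # normalized feature vector
```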
In the above embodiments, both Gabor filters and LBP algorithms can effectively capture iris texture features, thereby providing strong support for subsequent recognition and matching processes.
It should be noted that the above feature extraction methods are only simple examples. In practical applications other methods may be used to extract iris features, for example methods based on the LoG operator, the scale-invariant feature transform (Scale-Invariant Feature Transform, SIFT), speeded-up robust features (Speeded-Up Robust Features, SURF), the Fourier transform, multi-scale analysis, and the like, all of which can effectively extract feature information of iris textures to improve the accuracy and robustness of iris recognition; these are not described in detail here.
The Fourier transform maps the image from the spatial domain to the frequency domain, where it can be processed with frequency-domain filters. By analyzing the spectral characteristics of the iris texture, a corresponding frequency-domain filter can be designed to extract specific texture features. Multi-scale analysis methods such as the wavelet transform can also be used to extract iris texture features; these methods analyze the image at different scales and capture the multi-scale characteristics of the iris texture.
Any of the feature extraction methods listed above may be used alone or in combination with other feature extraction methods, and are not particularly limited herein.
In the embodiment of the application, the pupil region can be removed from the iris image by locating the pupil edge, and interference of the pupil region on feature extraction is eliminated, so that interference in the iris feature extraction process is reduced.
In the embodiment of the application, considering that the iris area near the pupil edge is possibly influenced by illumination, shielding and other reasons, by locating the pupil edge, corresponding processing strategies can be adopted for the influenced areas, such as adjusting filter parameters, enhancing contrast and the like, so as to optimize the feature extraction effect.
An alternative embodiment is to enhance the contrast of the iris region before feature extraction of the iris region to obtain corresponding iris features.
For example, the iris region may be subjected to a histogram equalization process again, or subjected to local contrast enhancement, adaptive contrast enhancement, gamma correction, filtering using a filter, or the like, to enhance the contrast of the iris region.
It should be noted that any manner of enhancing the contrast of the image is suitable for the embodiments of the present application, and will not be described herein.
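As an illustration, two of the options above (adaptive histogram equalization via CLAHE, then gamma correction) can be chained as in the sketch below; the clip limit, tile size, and gamma value are assumptions to be tuned.

```python
import cv2
import numpy as np

def enhance_contrast(gray, clip=2.0, tiles=(8, 8), gamma=0.8):
    """Contrast enhancement: CLAHE followed by gamma correction."""
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    eq = clahe.apply(gray)                      # local adaptive equalization
    lut = np.array([(i / 255.0) ** gamma * 255  # gamma lookup table
                    for i in range(256)], dtype=np.uint8)
    return cv2.LUT(eq, lut)
```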
In another alternative implementation manner, before extracting features of the iris region to obtain corresponding iris features, at least one parallax map may be further processed to update the iris region, and the specific flow is shown in fig. 12, which is a flow chart of an iris region updating method according to an embodiment of the present application, and includes the following steps S121 to S124:
S121, extracting gradient information in the parallax map through different edge detection operators to obtain a plurality of gradient maps corresponding to the parallax map.
Each edge detection operator corresponds to one gradient graph, and gradient elements in the gradient graph represent the change rate of gray values of pixel points in the parallax graph.
And S122, determining new pupil edge points from the parallax images based on the gradient images.
In step S121, for a disparity map, a plurality of different edge detection operators may be used for gradient computation, for example, a Sobel operator, a Roberts operator, a Prewitt operator, a LoG operator, and a Canny operator are used to process the disparity map. After that, step S122 may be performed.
Specifically, when multiple edge detection operators are used simultaneously, the gradient maps extracted by the individual operators can first be fused, for example by taking a weighted average of the gradient magnitudes obtained by the different operators. A threshold is then applied to the fused gradient map to find the new pupil edge points. This threshold may be the fixed threshold set earlier empirically or experimentally; a new threshold obtained by adjusting that fixed threshold for the fused gradient map in view of the characteristics of the different edge detection operators; or a threshold selected with the hysteresis-based or adaptive thresholding methods listed above. For specific implementations, see the foregoing embodiments; the repetition is omitted.
The method of fusing the gradient map is the same as the method of fusing the parallax map, and will not be described in detail here.
S123, updating the iris region in any iris image according to the new pupil edge point.
In step S123, after determining a new pupil edge point, the coordinates of extreme points of edge points in the left and right fixed areas may be calculated by using a least square parabolic fitting method (or other fitting methods), so as to obtain an initial center coordinate and a radius of the pupil, and further determine a new pupil edge.
On this basis, a preset margin can be set around the new pupil edge in any one iris image, an annular area determined based on the new pupil edge and the preset margin is used as a new iris area, after the new iris area is extracted, the iris features of the area are re-extracted, and the specific implementation can be seen in the above embodiments, and the repetition is omitted.
Through the embodiment, the extraction of the iris features can be optimized, and more accurate iris features can be effectively extracted under the condition that texture information is possibly influenced by illumination, shielding and other reasons, so that the accuracy of subsequent recognition is improved.
Specifically, after the iris features are extracted, the iris features are compared with iris features stored in a pre-constructed database to realize identification of the individual identity, and the specific process is as follows:
and S35, carrying out identity recognition on the object to be recognized based on the iris characteristics.
In the embodiment of the application, the identification process generally adopts a feature matching algorithm, such as calculating the cosine distance, Hamming distance, or Minkowski distance between two iris features to measure their similarity and determine whether they belong to the same individual.
Taking the cosine distance between two iris features as an example: the iris features in the embodiment of the application may be represented as feature vectors, and the cosine similarity of two feature vectors is obtained by dividing their dot product by the product of their norms. The cosine similarity can then be converted into a cosine distance, assumed here to be a fraction between 0 and 1, where 0 represents exactly the same direction and 1 exactly the opposite direction.
If the cosine distance between the two iris features is smaller than a certain threshold value, the two iris features can be considered to be matched, and then the identity information of the object to be identified can be determined according to the identity information corresponding to the matched iris features.
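The matching step can be sketched as below, using the cosine distance defined above; the database layout and the decision threshold of 0.2 are illustrative assumptions.

```python
import numpy as np

def cosine_distance(a, b):
    """Map cosine similarity to [0, 1]: 0 = same direction, 1 = opposite."""
    sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return (1.0 - sim) / 2.0

def identify(probe, database, threshold=0.2):
    """Return the identity whose enrolled feature best matches the probe."""
    # database: list of (identity_info, feature_vector) pairs.
    identity, feat = min(database,
                         key=lambda e: cosine_distance(probe, e[1]))
    return identity if cosine_distance(probe, feat) < threshold else None
```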
Fig. 13 is a schematic diagram of an identification process performed on an object through feature matching in an embodiment of the present application. Assuming that 4 iris features are stored in the database, as shown in fig. 13, the iris feature of the object to be identified is matched against each iris feature in the database, and the second feature is determined to match successfully. According to the identity information associated with each iris feature in the database, the identity information of the object to be identified can then be determined and its identity recognized; in fig. 13, the identification result includes name, gender, age, and the like.
It should be noted that the above-listed databases and the identity information in the databases are only examples, and in practical applications, the number of iris features stored in the databases may be more or less, and correspondingly, the stored identity information may be simpler or more complex, which is only a simple example and is not limited herein.
Based on the above, the application considers that relying too heavily on one type of image can ignore useful information contained in other types of images. To further improve the accuracy of iris recognition, the application can also combine multi-source image information, such as infrared images and visible-light images, to improve the accuracy of pupil edge positioning and thereby the accuracy of iris recognition.
That is, in step S31, the iris image may be acquired in a plurality of image modes, and in each image mode, a plurality of iris image groups each corresponding to one image mode may be obtained by acquiring iris images at a plurality of acquisition perspectives.
In embodiments of the present application, different image modes refer to different ways in which image data is captured and represented.
As in the present application, several image modes are included, but are not limited to:
RGB image mode, infrared image mode, visible light image mode.
The iris image acquired in the RGB image mode is an RGB image, which is more common, and will not be described here too much.
The iris image acquired in the infrared image mode is an infrared image, and the pupil region can be highlighted by the infrared image because the pupil has high transmittance to infrared light.
The iris image collected in the visible light image mode can be a visible light image, and the visible light image can clearly display the structure of the whole eye.
First, it is necessary to collect both infrared and visible images of the pupil. This may be done by special equipment, such as a camera with both infrared and visible modes.
Then, before step S32, the iris images in the multiple image modes should be fused, and an alternative embodiment is as follows:
First, a plurality of iris image groups are divided into corresponding iris image candidate sets under different acquisition perspectives.
The iris image candidates in each iris image candidate set are iris images acquired under different image modes and the same acquisition view angle.
Fig. 14 is a schematic diagram of division logic of a candidate iris image set according to an embodiment of the application. Fig. 14 illustrates three image modes, namely, iris images acquired by using a binocular camera, wherein images acquired by two cameras of the binocular camera can be recorded as a left view and a right view, and then 3 iris image groups are acquired in three image modes of RGB, infrared and visible light, for example, the iris image group 1 in fig. 14 comprises a left RGB image and a right RGB image, the iris image group 2 comprises a left infrared image and a right infrared image, and the iris image group 3 comprises a left visible light image and a right visible light image.
In the embodiment of the application, the number of the candidate iris image sets is consistent with the number of the acquisition view angles, and the number of the candidate iris images in one candidate iris image set is consistent with the number of the image modes. That is, iris images in the same candidate iris image set belong to iris images acquired under the same acquisition view angle in different image modes.
In fig. 14, the binocular camera corresponds to two acquisition view angles, and two iris image candidate sets may be obtained by dividing, for example, the iris image candidate set 1 includes a left RGB image, a left infrared image and a left visible light image, and the iris image candidate set 2 includes a right RGB image, a right infrared image and a right visible light image.
And then, aiming at each pixel point, carrying out feature fusion on the corresponding pixel point on each iris image candidate in the same iris image candidate set to obtain a fused iris image.
In this process, features between images are fused. An alternative implementation is to take a weighted average of the gray values of corresponding pixels on each iris image candidate in the same iris image candidate set.
Specifically, the gray values of the pixels at the same position of the plurality of candidate iris images are weighted and averaged, and then the gray values of the pixels at the position in the iris images after corresponding fusion can be obtained. The weight value when the weighted average is performed can be flexibly set according to actual requirements, or determined through experiments, for example, when the RGB image, the infrared image and the visible light image are fused, the weights of the respective weights can be set to be 1/3, and the specific limitation is omitted herein.
Alternatively, for subsequent pupil edge positioning, more emphasis may be placed on information in the infrared image, because the infrared image may better highlight the pupil, and when the RGB image, the infrared image, and the visible image are fused, the respective 1/3 weights may not be set, and instead the weights of the infrared image may be set higher than the other two weights, for example, the weights of the RGB image, the infrared image, and the visible image may be set to 1/4, 1/2, and 1/4, respectively. For another example, the weights of the RGB image, the infrared image, and the visible image are set to 1/6, 1/2, and 1/3, respectively, and so on.
In addition, other pixel features, such as color, transparency, brightness, saturation, etc., may be fused in addition to the gray level, and the detailed description is omitted here.
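The per-pixel weighted fusion of one view's three modality images can be sketched as below; the weights follow the 1/4, 1/2, 1/4 example above and, like the assumption that all inputs are aligned single-channel gray images, should be adapted in practice.

```python
import numpy as np

def fuse_modalities(rgb_gray, ir, vis, weights=(0.25, 0.5, 0.25)):
    """Weighted per-pixel average of RGB-derived gray, infrared, visible."""
    stack = np.stack([rgb_gray, ir, vis]).astype(np.float64)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)
    fused = (stack * w).sum(axis=0)
    return np.clip(fused, 0, 255).astype(np.uint8)
```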
In addition, in order to further improve accuracy of iris recognition, besides fusing the multi-source images, the above-mentioned method may be adopted, and each iris image in each iris image group may be preprocessed, for example, the collected images may be preprocessed by denoising, histogram equalization, and the like, so that the image quality is improved.
On the basis of obtaining the fused iris images, when step S32 is performed, that is, three-dimensional matching is performed between every two fused iris images to obtain corresponding parallax images, and then, subsequent processes such as pupil edge positioning, iris region determination, iris feature extraction and the like can be performed, and specific embodiments can be seen in the above embodiments, and repeated parts are omitted.
In the embodiment of the application, through fusion among the multi-source images, richer and more accurate information can be obtained, and on the basis, when a subsequent recognition process is carried out through the fused images, recognition can be carried out based on the richer and more accurate information, so that the accuracy of iris recognition is further improved.
Taking the following example of capturing an iris image group (also called an iris image pair) containing two iris images by using a binocular camera, a specific process of iris recognition is described in a modularized manner:
Referring to fig. 15, a block diagram of an iris recognition process according to an embodiment of the application is shown. Specifically, the process can be simply divided into an image acquisition module, a preprocessing module, a stereo matching module, a feature extraction module and an identification and matching module.
The image acquisition module may acquire iris images of the same eye region of the object to be identified based on the binocular camera, so as to obtain an iris image pair as shown in fig. 15.
Then, each iris image in the iris image pair can be preprocessed by the preprocessing module so as to improve the image quality. Specifically, operations such as denoising, histogram equalization and the like can be performed on the iris image, and the image quality is improved. In fig. 15, taking histogram equalization as an example, firstly, a gray level histogram of an image needs to be counted, a cumulative distribution function of each gray level is calculated on the basis, and finally, the gray level value of an original image is mapped to a new gray level value by using the cumulative distribution function, so that the gray level distribution of the new image is more uniform, and the histogram equalization process is completed.
After the histogram equalization treatment, the contrast of the image is improved, the iris texture is clearer, and the subsequent pupil edge positioning and feature extraction operation are facilitated.
In the stereo matching module, a stereo matching algorithm is used to accurately locate the pupil edge. The process can be summarized as follows: obtain an iris image pair; correct the images before stereo matching to eliminate vertical parallax between the left-eye and right-eye images; extract pupil feature points in the corrected images for matching; use a stereo matching algorithm to find the corresponding points in the left-eye and right-eye images and obtain a disparity map; and finally perform pupil edge positioning, determining the pupil position by calculating its depth information. Specific embodiments can be seen in the above embodiments, and repeated parts are omitted.
In the feature extraction module, iris features may be extracted based on one or more modes listed in fig. 15, for subsequent recognition and matching processes, and the specific implementation may refer to the above embodiments, and the repetition is omitted.
In the identifying and matching module, the extracted iris features may be respectively matched with the iris features in the database to obtain an identification result, and the specific implementation can refer to the above embodiment, and the repetition is omitted.
In conclusion, by adopting the iris recognition method in the embodiment of the application, the pupil edge positioning error can be effectively reduced, and the iris recognition accuracy can be improved. Under the complex conditions of pupil size change, light ray change, pupil edge blurring and the like, the method still has a good recognition effect.
Based on the same inventive concept, the embodiment of the application also provides an iris recognition device. As shown in fig. 16, which is a schematic structural diagram of the iris recognition device 1600, may include:
An image acquisition unit 1601 for acquiring an iris image group acquired for an object to be identified, the iris image group including a plurality of iris images acquired under different acquisition perspectives for the same eye region of the object to be identified;
The stereo matching unit 1602 is configured to perform stereo matching between each two iris images in the iris image group, so as to obtain corresponding parallax images, where parallax elements in each parallax image represent displacement amounts of corresponding points between the two iris images in a specified direction;
A pupil positioning unit 1603 for positioning a pupil edge from any one of the iris images included in the iris image group based on the obtained parallax images;
The iris feature extraction unit 1604 is configured to determine an iris region in any one of the iris images according to the pupil edge, and perform feature extraction on any one of the iris regions to obtain a corresponding iris feature;
the identifying unit 1605 is configured to identify the object to be identified based on the iris feature.
Optionally, the stereo matching unit 1602 is specifically configured to:
Correcting the two iris images to eliminate parallax between them;
Pupil characteristic points are extracted from each corrected iris image respectively;
Determining corresponding points from the extracted pupil characteristic points through stereo matching;
Based on the determined positions of the corresponding points in the corresponding iris images, parallax images corresponding to the two iris images are obtained.
Optionally, the pupil positioning unit 1603 is specifically configured to:
if a parallax map is obtained, positioning the pupil edge from any one iris image contained in the iris image group based on the parallax map;
if multiple parallax images are obtained, the multiple parallax images are subjected to image fusion, pupil edges are positioned from any one iris image contained in the iris image group based on the fused parallax images, or one pupil edge is positioned from any one iris image corresponding to the parallax images based on each parallax image, and the determined pupil edges are fused to obtain the fused pupil edges.
Optionally, the pupil positioning unit 1603 is specifically configured to:
based on a disparity map, pupil edges are located from an iris image by:
extracting gradient information in the parallax map through an edge detection operator to obtain a gradient map corresponding to the parallax map, wherein gradient elements in the gradient map represent the change rate of gray values of all pixel points in the parallax map;
Determining pupil edge points from the parallax map based on a preset gradient threshold;
based on the determined pupil edge points, pupil edges are located from an iris image.
Optionally, the iris feature extraction unit 1604 is specifically configured to:
Setting a preset margin around the pupil edge in any one iris image;
and taking the annular area determined based on the pupil edge and the preset margin as an iris area in any one iris image.
Optionally, the iris feature extraction unit 1604 is specifically configured to perform at least one of the following steps:
the iris region is subjected to characteristic extraction through filters with different scales and directions to obtain corresponding response values;
And extracting iris characteristics corresponding to the iris region by comparing the gray value of each pixel point in the iris region with the gray value of the corresponding neighborhood pixel point.
Optionally, the iris feature extraction unit 1604 is further configured to enhance the contrast of the iris region before performing feature extraction on the iris region to obtain the corresponding iris feature.
Optionally, the iris feature extraction unit 1604 is further configured to process at least one disparity map to update the iris region before performing feature extraction on the iris region to obtain the corresponding iris feature, by:
extracting gradient information in the parallax map through different edge detection operators to obtain a plurality of gradient maps corresponding to the parallax map, wherein each edge detection operator corresponds to one gradient map, and gradient elements in the gradient map represent the change rate of gray values of all pixel points in the parallax map;
determining new pupil edge points from the disparity map based on the plurality of gradient maps;
And updating the iris region in any one iris image according to the new pupil edge point.
Optionally, if there are multiple iris image groups, each iris image group corresponding to one image mode, the stereo matching unit 1602 is further configured to, before stereo matching is performed between each two iris images to obtain the corresponding disparity maps:
Dividing a plurality of iris image groups into corresponding iris image candidate sets under different acquisition view angles, wherein the iris image candidates in each iris image candidate set are iris images acquired under different image modes and the same acquisition view angle;
Aiming at each pixel point, carrying out feature fusion on the corresponding pixel point on each iris image candidate in the same iris image candidate set to obtain a fused iris image;
the stereo matching unit 1602 specifically is configured to:
And respectively carrying out three-dimensional matching between every two fused iris images to obtain corresponding parallax images.
Optionally, the image mode includes some or all of the following:
RGB image mode, infrared image mode, visible light image mode.
Optionally, the stereo matching unit 1602 is specifically configured to:
And carrying out weighted average on the gray values of the corresponding pixel points on each iris image candidate in the same iris image candidate set.
Optionally, before stereo matching is performed between each two iris images, the stereo matching unit 1602 is further configured to:
for each iris image in the iris image group, image enhancement processing is performed on the iris image by at least one mode.
For convenience of description, the above parts are divided into modules (or units) by function and described separately. Of course, when implementing the present application, the functions of each module (or unit) may be implemented in one or more pieces of software or hardware.
In the present embodiment, the term "module" or "unit" refers to a computer program or a part of a computer program having a predetermined function and working together with other relevant parts to achieve a predetermined object, and may be implemented in whole or in part using software, hardware (such as a processing circuit or a memory), or a combination thereof. Likewise, one or more processors (or memories) may be used to implement one or more modules or units. Furthermore, each module or unit may be part of an overall module or unit that incorporates its functionality.
Having described the iris recognition method and apparatus of an exemplary embodiment of the present application, next, an electronic device according to another exemplary embodiment of the present application is described.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein collectively as a "circuit," "module," or "system."
The embodiment of the application also provides electronic equipment based on the same conception as the embodiment of the method. In one embodiment, the electronic device may be a server, and in this embodiment, the electronic device may be configured as shown in fig. 17, including a memory 1701, a communication module 1703, and one or more processors 1702.
A memory 1701 for storing computer programs for execution by the processor 1702. The memory 1701 may mainly include a storage program area in which an operating system, programs required for running an instant messaging function, and the like are stored, and a storage data area in which various instant messaging information, an operation instruction set, and the like are stored.
The memory 1701 may be a volatile memory such as a random-access memory (RAM); the memory 1701 may also be a non-volatile memory such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 1701 may be any other medium that can be used to carry or store a desired computer program in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1701 may also be a combination of the above.
The processor 1702 may include one or more central processing units (CPUs), a digital processing unit, or the like. The processor 1702 is configured to implement the iris recognition method described above when calling the computer program stored in the memory 1701.
The communication module 1703 is used for communicating with a terminal device and other servers.
The specific connection medium between the memory 1701, the communication module 1703, and the processor 1702 is not limited in the embodiments of the present application. In fig. 17, the memory 1701 and the processor 1702 are connected by a bus 1704, drawn as a bold line; the connections between the other components are merely illustrative and not limiting. The bus 1704 may be classified as an address bus, a data bus, a control bus, or the like. For ease of description, only one thick line is drawn in fig. 17, but this does not mean that there is only one bus or only one type of bus.
The memory 1701 stores therein a computer storage medium having stored therein computer executable instructions for implementing the iris recognition method of the embodiment of the present application. The processor 1702 is configured to perform the iris recognition method described above, as shown in fig. 3.
In another embodiment, the electronic device may also be other electronic devices, such as a terminal device. In this embodiment, the structure of the electronic device may include a communication component 1810, a memory 1820, a display unit 1830, a camera 1840, a sensor 1850, an audio circuit 1860, a Bluetooth module 1870, a processor 1880, and the like, as shown in FIG. 18.
The communication component 1810 is used for communicating with a server. In some embodiments, a wireless fidelity (WiFi) module may be included; the WiFi module belongs to short-range wireless transmission technology, and the electronic device may help the user send and receive information through the WiFi module.
Memory 1820 may be used for storing software programs and data. The processor 1880 performs various functions and data processing of the terminal device 110 by executing software programs or data stored in the memory 1820. Memory 1820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 1820 stores an operating system that enables the terminal device 110 to operate. The memory 1820 may store an operating system and various application programs, and may also store a computer program for performing the iris recognition method according to the embodiment of the present application.
The display unit 1830 may be used to display information input by the user or information provided to the user, as well as the graphical user interface (GUI) of the various menus of the terminal device 110. Specifically, the display unit 1830 may include a display screen 1832 disposed on the front of the terminal device 110, which may be configured in the form of a liquid crystal display, light-emitting diodes, or the like. The display unit 1830 may be used to display the identification result and the like in the embodiments of the present application.
The display unit 1830 may also be used to receive input numeric or character information and to generate signal inputs related to user settings and function control of the terminal device 110. Specifically, the display unit 1830 may include a touch screen 1831 disposed on the front of the terminal device 110, which may collect touch operations performed by the user on or near it, such as clicking a button or dragging a scroll box.
The touch screen 1831 may cover the display screen 1832, or the touch screen 1831 may be integrated with the display screen 1832 to implement the input and output functions of the terminal device 110; after integration they may be referred to simply as a touch display screen. The display unit 1830 may display the application programs and the corresponding operation steps of the present application.
The camera 1840 may be used to capture still images, and a user may post images captured by the camera 1840 through an application. The number of cameras 1840 may be one or more. The object generates an optical image through the lens, which is projected onto a photosensitive element. The photosensitive element may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then passed to the processor 1880 for conversion into a digital image signal.
The terminal device may further comprise at least one sensor 1850, such as an acceleration sensor 1851, a distance sensor 1852, a fingerprint sensor 1853, and a temperature sensor 1854. The terminal device may also be configured with other sensors, such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, a light sensor, a motion sensor, and the like.
The audio circuit 1860, the speaker 1861 and the microphone 1862 may provide an audio interface between the user and the terminal device 110. The audio circuit 1860 may transmit an electrical signal converted from received audio data to the speaker 1861, which converts it into a sound signal for output. The terminal device 110 may also be configured with a volume button for adjusting the volume of the sound signal. Conversely, the microphone 1862 converts a collected sound signal into an electrical signal, which the audio circuit 1860 receives and converts into audio data; the audio data are then output to the communication component 1810 for transmission to, for example, another terminal device 110, or output to the memory 1820 for further processing.
The Bluetooth module 1870 is used for exchanging information with other Bluetooth-capable devices through the Bluetooth protocol. For example, the terminal device may establish a Bluetooth connection with a wearable electronic device (e.g., a smart watch) that also has a Bluetooth module through its Bluetooth module 1870, thereby performing data interaction.
The processor 1880 is the control center of the terminal device; it connects the various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs stored in the memory 1820 and calling the data stored in the memory 1820. In some embodiments, the processor 1880 may include one or more processing units, and may also integrate an application processor and a baseband processor, wherein the application processor mainly handles the operating system, user interface, application programs and the like, while the baseband processor mainly handles wireless communication. It will be appreciated that the baseband processor may also not be integrated into the processor 1880. The processor 1880 in the present application may run the operating system, the application programs, the user interface display and touch response, as well as the iris recognition method of the embodiments of the present application. In addition, the processor 1880 is coupled to the display unit 1830.
In some possible embodiments, aspects of the iris recognition method provided by the present application may also be implemented in the form of a program product comprising a computer program; when the program product runs on an electronic device, the computer program causes the electronic device to perform the steps of the iris recognition method according to the various exemplary embodiments of the application described above, for example the steps shown in fig. 3.
The program product may take the form of any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection having one or more wires, a portable disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of the embodiments of the present application may take the form of a portable compact disc read-only memory (CD-ROM) comprising a computer program, and may be run on an electronic device. However, the program product of the present application is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with a command execution system, apparatus or device.
A readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which a computer-readable program is embodied. Such a propagated data signal may take any of a variety of forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can send, propagate or transport a program for use by or in connection with a command execution system, apparatus or device.
A computer program embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer programs for performing the operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language. The computer program may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the latter case, the remote electronic device may be connected to the consumer electronic device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external electronic device (for example, through the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, in accordance with embodiments of the present application, the features and functions of two or more of the units described above may be embodied in one unit; conversely, the features and functions of one unit described above may be further divided so as to be embodied by multiple units.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, in order to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be split into multiple steps.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having a computer-usable program embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (16)

1. An iris recognition method, characterized in that the method comprises:
acquiring an iris image group collected for an object to be identified, the iris image group comprising a plurality of iris images collected at different acquisition viewing angles for the same eye region of the object to be identified;
performing, in the iris image group, stereo matching between each two iris images to obtain corresponding disparity maps, wherein the disparity elements in each disparity map represent the displacement, in a specified direction, of each pair of corresponding points between the two corresponding iris images;
locating a pupil edge from any one iris image included in the iris image group based on the obtained disparity maps;
determining an iris region in the any one iris image according to the pupil edge, and performing feature extraction on the any one iris region to obtain corresponding iris features; and
identifying the identity of the object to be identified based on the iris features.

2. The method according to claim 1, wherein performing stereo matching between each two iris images to obtain corresponding disparity maps comprises:
eliminating the parallax between two iris images by rectifying the two iris images;
extracting pupil feature points from each rectified iris image respectively;
determining corresponding points from the extracted pupil feature points through stereo matching; and
obtaining the disparity map corresponding to the two iris images based on the positions of the determined corresponding points in the respective iris images.

3. The method according to claim 1, wherein locating the pupil edge from any one iris image included in the iris image group based on the obtained disparity maps comprises:
if one disparity map is obtained, locating the pupil edge from any one iris image included in the iris image group based on the one disparity map;
if a plurality of disparity maps are obtained, fusing the plurality of disparity maps and locating the pupil edge from any one iris image included in the iris image group based on the fused disparity map; or, based on each disparity map respectively, locating a pupil edge from any one iris image corresponding to that disparity map, and fusing the plurality of determined pupil edges to obtain a fused pupil edge.

4. The method according to any one of claims 1 to 3, wherein, based on one disparity map, the pupil edge is located from one iris image in the following manner:
extracting gradient information from the disparity map using an edge detection operator to obtain a gradient map corresponding to the disparity map, wherein a gradient element in the gradient map represents the rate of change of the grayscale value of each pixel in the disparity map;
determining pupil edge points from the disparity map based on a preset gradient threshold; and
locating the pupil edge from the one iris image according to the determined pupil edge points.

5. The method according to any one of claims 1 to 3, wherein determining the iris region in the any one iris image according to the pupil edge comprises:
setting a preset margin around the pupil edge in the any one iris image; and
taking the annular region determined by the pupil edge and the preset margin as the iris region in the any one iris image.

6. The method according to any one of claims 1 to 3, wherein performing feature extraction on the iris region to obtain corresponding iris features comprises at least one of the following manners:
performing feature extraction on the iris region using filters of different scales and orientations to obtain corresponding response values, and combining the response values extracted by the filters to form the iris features corresponding to the iris region;
extracting the iris features corresponding to the iris region by comparing the grayscale value of each pixel in the iris region with those of its corresponding neighboring pixels.

7. The method according to claim 6, characterized in that, before performing feature extraction on the iris region to obtain the corresponding iris features, the method further comprises:
enhancing the contrast of the iris region.

8. The method according to claim 6, characterized in that, before performing feature extraction on the iris region to obtain the corresponding iris features, the method further comprises processing at least one disparity map to update the iris region in the following manner:
extracting gradient information from the disparity map using different edge detection operators to obtain a plurality of gradient maps corresponding to the disparity map, wherein each edge detection operator corresponds to one gradient map, and a gradient element in a gradient map represents the rate of change of the grayscale value of each pixel in the disparity map;
determining new pupil edge points from the disparity map based on the plurality of gradient maps; and
updating the iris region in the any one iris image according to the new pupil edge points.

9. The method according to any one of claims 1 to 3, characterized in that, if there are a plurality of iris image groups and each iris image group corresponds to one image mode, before performing stereo matching between each two iris images to obtain corresponding disparity maps, the method further comprises:
dividing the plurality of iris image groups into candidate iris image sets corresponding to different acquisition viewing angles, wherein the candidate iris images in each candidate iris image set are iris images acquired in different image modes at the same acquisition viewing angle; and
for each pixel, performing feature fusion on the corresponding pixels of each candidate iris image in the same candidate iris image set to obtain fused iris images;
and wherein performing stereo matching between each two iris images to obtain corresponding disparity maps comprises:
performing stereo matching between each two fused iris images to obtain the corresponding disparity maps.

10. The method according to claim 9, wherein the image mode includes some or all of the following: an RGB image mode, an infrared image mode, and a visible-light image mode.

11. The method according to claim 9, wherein performing feature fusion on the corresponding pixels of each candidate iris image in the same candidate iris image set comprises:
performing a weighted average of the grayscale values of the corresponding pixels of each candidate iris image in the same candidate iris image set.

12. The method according to any one of claims 1 to 3, characterized in that, before performing stereo matching between each two iris images to obtain corresponding disparity maps, the method further comprises:
for each iris image in the iris image group, performing image enhancement processing on the iris image in at least one manner.

13. An iris recognition device, characterized in that it comprises:
an image acquisition unit configured to acquire an iris image group collected for an object to be identified, the iris image group comprising a plurality of iris images collected at different acquisition viewing angles for the same eye region of the object to be identified;
a stereo matching unit configured to perform stereo matching between each two iris images in the iris image group to obtain corresponding disparity maps, wherein the disparity elements in each disparity map represent the displacement, in a specified direction, of the corresponding points between the two iris images;
a pupil locating unit configured to locate a pupil edge from any one iris image included in the iris image group based on the obtained disparity maps;
an iris feature extraction unit configured to determine an iris region in the any one iris image according to the pupil edge, and to perform feature extraction on the any one iris region to obtain corresponding iris features; and
an identification unit configured to identify the object to be identified based on the iris features.

14. An electronic device, characterized in that it comprises a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the method according to any one of claims 1 to 12.

15. A computer-readable storage medium, characterized in that it comprises a computer program which, when run on an electronic device, causes the electronic device to perform the steps of the method according to any one of claims 1 to 12.

16. A computer program product, characterized in that it comprises a computer program stored in a computer-readable storage medium; when a processor of an electronic device reads the computer program from the computer-readable storage medium and executes it, the electronic device is caused to perform the steps of the method according to any one of claims 1 to 12.
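To make the disparity-based pupil localization of claims 1 to 4 concrete, the following is a minimal sketch in Python with OpenCV. It is illustrative only: rectification (claim 2) is assumed to have been done already, and the matcher settings, gradient operator and threshold value are hypothetical choices, not the patented implementation.

```python
import cv2
import numpy as np

def pupil_edge_from_disparity(left_img, right_img, grad_thresh=40.0):
    """Sketch of claims 1-4: locate pupil edge points via a disparity map.

    left_img, right_img: 8-bit grayscale iris images of the same eye,
    captured from two viewing angles and already rectified.
    """
    # Stereo matching between the two iris images yields a disparity map
    # whose elements are per-pixel displacements, in the horizontal
    # direction, between corresponding points (claim 1).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=7)
    disparity = matcher.compute(left_img, right_img).astype(np.float32) / 16.0

    # An edge detection operator (here Sobel) extracts the gradient
    # information of the disparity map (claim 4); the gradient magnitude
    # is the rate of change of the disparity values.
    gx = cv2.Sobel(disparity, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(disparity, cv2.CV_32F, 0, 1, ksize=3)
    grad = cv2.magnitude(gx, gy)

    # Pixels whose gradient exceeds a preset threshold are kept as pupil
    # edge points: the pupil lies at a different depth than the iris
    # surface, so the disparity changes sharply at its boundary.
    edge_points = np.column_stack(np.nonzero(grad > grad_thresh))
    return disparity, edge_points
```

A circle or ellipse fitted to edge_points would then give the pupil edge in either of the two images.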
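Claims 5 and 6 define the iris region as a ring between the pupil edge and a preset margin, and extract features either with a bank of filters at different scales and orientations or by neighbourhood grey-value comparison. The sketch below shows the filter-bank route; the Gabor kernel size, the two scales and the four orientations are assumed values, not taken from the source.

```python
import cv2
import numpy as np

def iris_features(img, pupil_center, pupil_radius, margin=40):
    """Sketch of claims 5-6: annular iris region plus multi-scale,
    multi-orientation filter responses. img is 8-bit grayscale,
    pupil_center is (x, y), margin is the preset margin of claim 5."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - pupil_center[0], ys - pupil_center[1])
    # Claim 5: the ring between the pupil edge and the preset margin.
    ring = (dist >= pupil_radius) & (dist <= pupil_radius + margin)

    # Claim 6, first route: filters of different scales and orientations
    # produce response values that are combined into the iris feature.
    feats = []
    img32 = img.astype(np.float32)
    for sigma in (2.0, 4.0):                            # two scales
        for theta in np.arange(0.0, np.pi, np.pi / 4):  # four orientations
            kern = cv2.getGaborKernel((15, 15), sigma, theta, 10.0, 0.5)
            resp = cv2.filter2D(img32, cv2.CV_32F, kern)
            feats.append(resp[ring].mean())             # pooled response
    return np.array(feats)
```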
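Claim 8 refines the pupil edge by running several different edge detection operators over the disparity map and combining the resulting gradient maps before re-thresholding. A hedged sketch; the three operators and the plain averaging rule are assumptions:

```python
import cv2
import numpy as np

def refined_edge_points(disparity, grad_thresh=40.0):
    # Each operator yields one gradient map of the disparity map
    # (claim 8); combining them damps operator-specific noise.
    d = disparity.astype(np.float32)
    sobel = cv2.magnitude(cv2.Sobel(d, cv2.CV_32F, 1, 0),
                          cv2.Sobel(d, cv2.CV_32F, 0, 1))
    scharr = cv2.magnitude(cv2.Scharr(d, cv2.CV_32F, 1, 0),
                           cv2.Scharr(d, cv2.CV_32F, 0, 1))
    laplace = np.abs(cv2.Laplacian(d, cv2.CV_32F))
    combined = (sobel + scharr + laplace) / 3.0   # simple average fusion
    # New pupil edge points, used to update the iris region.
    return np.column_stack(np.nonzero(combined > grad_thresh))
```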
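Claims 9 to 11 reduce, per acquisition viewing angle, the iris images captured in different image modes to one image by a per-pixel weighted average of grey values. A minimal sketch, assuming the images in a candidate set are pixel-aligned and the weights are supplied by the caller:

```python
import numpy as np

def fuse_candidate_set(images, weights):
    """Sketch of claim 11: weighted average of the grey values of
    corresponding pixels across one candidate iris image set."""
    stack = np.stack([img.astype(np.float32) for img in images])
    w = np.asarray(weights, dtype=np.float32).reshape(-1, 1, 1)
    fused = (stack * w).sum(axis=0) / w.sum()
    return np.clip(fused, 0, 255).astype(np.uint8)

# e.g. fuse_candidate_set([rgb_gray, infrared], [0.3, 0.7]) — weights
# favouring the infrared mode are an assumption — yields the fused image
# that then enters the stereo matching of claim 9.
```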
CN202410173894.8A 2024-02-07 2024-02-07 Iris recognition method, device, electronic device and storage medium Pending CN120452049A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202410173894.8A CN120452049A (en) 2024-02-07 2024-02-07 Iris recognition method, device, electronic device and storage medium
PCT/CN2025/075701 WO2025167869A1 (en) 2024-02-07 2025-02-05 Iris recognition method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410173894.8A CN120452049A (en) 2024-02-07 2024-02-07 Iris recognition method, device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN120452049A (en)

Family

ID=96620784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410173894.8A Pending CN120452049A (en) 2024-02-07 2024-02-07 Iris recognition method, device, electronic device and storage medium

Country Status (2)

Country Link
CN (1) CN120452049A (en)
WO (1) WO2025167869A1 (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3998863B2 (en) * 1999-06-30 2007-10-31 富士フイルム株式会社 Depth detection device and imaging device
CN101051349B (en) * 2007-05-18 2010-10-13 北京中科虹霸科技有限公司 Multiple iris collecting device using active vision feedback
KR101316316B1 (en) * 2011-12-07 2013-10-08 기아자동차주식회사 Apparatus and method for extracting the pupil using streo camera
CN107194231A (en) * 2017-06-27 2017-09-22 上海与德科技有限公司 Unlocking method, device and mobile terminal based on iris

Also Published As

Publication number Publication date
WO2025167869A1 (en) 2025-08-14

Similar Documents

Publication Publication Date Title
US11783639B2 (en) Liveness test method and apparatus
JP6599421B2 (en) Feature extraction and matching and template update for biometric authentication
WO2019056988A1 (en) Face recognition method and apparatus, and computer device
US11227149B2 (en) Method and apparatus with liveness detection and object recognition
CN112651380B (en) Face recognition method, face recognition device, terminal equipment and storage medium
US11625954B2 (en) Method and apparatus with liveness testing
EP3825905A1 (en) Method and apparatus with liveness test and/or biometric authentication, computer program therefore and medium storing the same
CN113614731B (en) Authentication verification using soft biometrics
TW201712580A (en) Image and feature quality for image enhancement and feature capture of ocular blood vessels and facial recognition, and fusion of ocular blood vessels and facial and/or sub-facial information for biometric systems
WO2016010721A1 (en) Multispectral eye analysis for identity authentication
CN107480654B (en) A three-dimensional vein recognition device applied to wearable devices
US12300038B2 (en) Method and apparatus with liveness detection
Chen et al. 3d face mask anti-spoofing via deep fusion of dynamic texture and shape clues
Bastias et al. A method for 3D iris reconstruction from multiple 2D near-infrared images
CN120124032A (en) Intelligent lock unlocking method and system based on face image processing
CN116978081A (en) Image processing method and device, storage medium, and program product
CN120452049A (en) Iris recognition method, device, electronic device and storage medium
CN120496155B (en) Iris recognition method, device, medium and equipment based on uncertainty modeling
CN120032416A (en) Iris recognition method, device, equipment and storage medium
Stentiford Visual attention: low-level and high-level viewpoints
CN120183007A (en) Fingerprint liveness detection method, model training method and security equipment
CN118366191A (en) Palm image processing method, device, equipment and storage medium
HK40016785B (en) Biometric system and computer-implemented method based on image

Legal Events

Date Code Title Description
PB01 Publication