CN111832464A - Living body detection method and device based on near-infrared camera - Google Patents
- Publication number
- CN111832464A (application CN202010649058.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- pixel points
- image
- screening condition
- infrared image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a living body (liveness) detection method and device based on a near-infrared camera, comprising the following steps: acquiring image data, where the image data comprises a color image captured by a visible-light camera and a near-infrared image captured by a near-infrared camera; locating a face region in the image data; judging whether the color image shows a real face according to the color of the face region in the color image, by traversing the pixels of that face region against a screening condition and counting the pixels that satisfy it; if the count exceeds a number threshold, continuing to the next step, otherwise ending the detection; judging whether the near-infrared image shows a real face according to the brightness of the face region in the near-infrared image, again by traversing the pixels of the face region against a screening condition and counting the pixels that satisfy it; if the count exceeds a number threshold, the near-infrared image is judged to be a real face; otherwise, the detection ends.
Description
Technical Field
The invention relates to a living body detection method and a living body detection device based on a near-infrared camera, and belongs to the field of face recognition.
Background
With the vigorous development of face recognition technology, its range of applications keeps widening and now covers nearly every scenario of daily life. However, although current face recognition technology can identify whose face it sees, it cannot accurately tell whether the input face is genuine, so there is a risk of attack with a forged face. Common ways of forging a face include the following three: pictures containing the user's face, videos containing the user's face, and 3D models or mask headgear made from the user's face. It is therefore necessary to perform liveness detection on the recognized face, i.e. to determine whether it comes from a real user or from a picture, video, or other fake containing the user's face.
Current liveness detection methods fall mainly into three categories: micro-texture based, motion-information based, and multi-spectral. Micro-texture methods are susceptible to lighting, image resolution, and video replay fraud; motion-information methods require user interaction, which hurts the user experience; multi-spectral methods detect by exploiting the difference in spectral reflectance between skin and other materials, mostly using near-infrared light, and offer strong discrimination and high accuracy, but discriminate poorly against pictures printed on certain special materials. In addition, most existing near-infrared liveness detection methods perform detection by building and training a deep learning model, which requires large amounts of data and training time, making them costly and complex.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a liveness detection method and device based on a near-infrared camera; by analyzing the color range of the human face and the regularity of its illumination distribution under different lighting conditions, the method is kept simple and easy to implement. The technical schemes of the invention are as follows:
Technical scheme one:
a living body detection method based on a near-infrared camera comprises the following steps:
acquiring image data, wherein the image data comprises a color image obtained by a visible light camera and a near-infrared image obtained by a near-infrared camera;
locating a face region in the image data: inputting the color image and the near-infrared image respectively into a multitask convolutional neural network, which locates the face region in both images; if a face region is successfully located in both images, continuing to the next step; otherwise, ending the detection;
judging whether the color image is a real face or not according to the color of a face area in the color image, presetting a screening condition of pixel points, wherein the screening condition comprises three factors of hue, saturation and lightness, and traversing the pixel points in the face area in the color image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, continuing the next step if the number of the pixels meeting the screening condition exceeds the number threshold, and otherwise, finishing the detection;
judging whether the near-infrared image is a real face or not according to the brightness of the face region in the near-infrared image, presetting a screening condition of pixel points, and traversing the pixel points in the face region in the near-infrared image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, and if the number of the pixels meeting the screening condition exceeds the number threshold, determining that the near-infrared image is a real face; otherwise, ending the detection.
Further, the screening condition used in the process of judging whether the color image is a real face is specifically as follows: the hue range of the pixel point is 0 to 25 or 335 to 360, the saturation range is 0.2 to 0.9, and the brightness range is 0 to 0.4.
Further, the screening condition used in the process of judging whether the near-infrared image is a real face is specifically: traverse the pixels of the face region in the near-infrared image, denoting the point visited at each step as point A; select another point near point A, denoted point B, such that point B is farther from the center of the face than point A; compute the average lightness values avrA and avrB over the respective neighborhoods of points A and B, and if avrA is greater than avrB, consider point A to satisfy the screening condition.
Further, the multitask convolution neural network locates the human face areas on the two images and also locates the area where the human eyes are located; after the face area on the image data is positioned, whether the color image and the near-infrared image are real faces is judged according to the bright pupil effect, and the method specifically comprises the following steps:
Respectively calculate the average brightness of all pixels in the eye region of the color image and of the near-infrared image; traverse the pixels of each eye region and count, in each image, the pixels whose brightness exceeds 1.5 times the average brightness of the eye region they belong to, recording the counts as Sum1 and Sum2 respectively; if Sum1 exceeds 15% of the total number of pixels in the eye region of the color image and Sum2 is below 20% of the total number of pixels in the eye region of the near-infrared image, the color image and the near-infrared image are considered to show a real face; otherwise, end the detection.
Further, after the face region on the image data is located, an edge detection algorithm is used to judge whether the near-infrared image is a real face, and the specific steps are as follows:
finding out pixel points forming the human face contour in the near-infrared image, and if the number of the pixel points forming the human face contour is larger than a threshold value, determining that the near-infrared image is a real human face; otherwise, the detection is finished.
Technical scheme two:
A near-infrared camera-based liveness detection device comprising a memory and a processor, the memory storing instructions adapted to be loaded by the processor and to perform the steps of:
acquiring image data, wherein the image data comprises a color image obtained by a visible light camera and a near-infrared image obtained by a near-infrared camera;
locating a face region in the image data: inputting the color image and the near-infrared image respectively into a multitask convolutional neural network, which locates the face region in both images; if a face region is successfully located in both images, continuing to the next step; otherwise, ending the detection;
judging whether the color image is a real face or not according to the color of a face area in the color image, presetting a screening condition of pixel points, wherein the screening condition comprises three factors of hue, saturation and lightness, and traversing the pixel points in the face area in the color image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, continuing the next step if the number of the pixels meeting the screening condition exceeds the number threshold, and otherwise, finishing the detection;
judging whether the near-infrared image is a real face or not according to the brightness of the face region in the near-infrared image, presetting a screening condition of pixel points, and traversing the pixel points in the face region in the near-infrared image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, and if the number of the pixels meeting the screening condition exceeds the number threshold, determining that the near-infrared image is a real face; otherwise, ending the detection.
Further, the screening condition used in the process of judging whether the color image is a real face is specifically as follows: the hue range of the pixel point is 0 to 25 or 335 to 360, the saturation range is 0.2 to 0.9, and the brightness range is 0 to 0.4.
Further, the screening condition used in the process of judging whether the near-infrared image is a real face is specifically: traverse the pixels of the face region in the near-infrared image, denoting the point visited at each step as point A; select another point near point A, denoted point B, such that point B is farther from the center of the face than point A; compute the average lightness values avrA and avrB over the respective neighborhoods of points A and B, and if avrA is greater than avrB, consider point A to satisfy the screening condition.
Further, the multitask convolution neural network locates the human face areas on the two images and also locates the area where the human eyes are located; after the face area on the image data is positioned, whether the color image and the near-infrared image are real faces is judged according to the bright pupil effect, and the method specifically comprises the following steps:
Respectively calculate the average brightness of all pixels in the eye region of the color image and of the near-infrared image; traverse the pixels of each eye region and count, in each image, the pixels whose brightness exceeds 1.5 times the average brightness of the eye region they belong to, recording the counts as Sum1 and Sum2 respectively; if Sum1 exceeds 15% of the total number of pixels in the eye region of the color image and Sum2 is below 20% of the total number of pixels in the eye region of the near-infrared image, the color image and the near-infrared image are considered to show a real face; otherwise, end the detection.
Further, after the face region on the image data is located, an edge detection algorithm is used to judge whether the near-infrared image is a real face, and the specific steps are as follows:
finding out pixel points forming the human face contour in the near-infrared image, and if the number of the pixel points forming the human face contour is larger than a threshold value, determining that the near-infrared image is a real human face; otherwise, the detection is finished.
The invention has the following beneficial effects:
1. The method is simple, easy to implement, and highly real-time; its judgment accuracy is high and its detection capability excellent; and the user need not perform any interactive actions, giving a good user experience.
2. Screening according to the bright pupil effect reduces the subsequent amount of computation and strengthens the detection capability.
3. Screening of the near-infrared image is strengthened by an edge detection algorithm, improving accuracy and detection capability.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of brightness of a face region;
FIG. 3 is a comparison graph of human eyes in a forged face photographed by visible light and near infrared light, respectively;
fig. 4 is a schematic diagram of a human face contour in a near-infrared image.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Example one
Referring to fig. 1 to 4, a living body detection method based on a near-infrared camera includes the following steps:
acquiring image data, wherein the image data comprises a color image obtained by a visible light camera and a near-infrared image obtained by a near-infrared camera;
locating a face region in the image data: inputting the color image and the near-infrared image respectively into a multitask convolutional neural network, which locates the face region in both images; if a face region is successfully located in both images, continuing to the next step; otherwise, ending the detection. If a face is located in the color image but no face is found in the near-infrared image, this indicates an attack with a forged face, and no further detection is needed;
judging whether the color image is a real face or not according to the color of a face area in the color image, presetting a screening condition of pixel points, wherein the screening condition comprises three factors of hue, saturation and lightness, and traversing the pixel points in the face area in the color image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, if the number of the pixels meeting the screening condition exceeds the number threshold (in the embodiment, the number threshold is 15% of the number of all the pixels in the face area), continuing the next step, otherwise, ending the detection;
judging whether the near-infrared image is a real face or not according to the brightness of the face region in the near-infrared image, presetting a screening condition of pixel points, and traversing the pixel points in the face region in the near-infrared image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, and if the number of the pixels meeting the screening condition exceeds the number threshold (in the embodiment, the number threshold is 50% of the number of all the pixels in the face region), determining that the near-infrared image is a real face; otherwise, ending the detection.
Further, the screening condition used in the process of judging whether the color image is a real face is specifically as follows: the hue range of the pixel point is 0 to 25 or 335 to 360, the saturation range is 0.2 to 0.9, and the brightness range is 0 to 0.4.
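The color screen described above can be expressed as a vectorized mask. The sketch below is a minimal NumPy illustration, not the patented implementation: it assumes hue is in degrees [0, 360) with saturation and lightness normalized to [0, 1], and uses the 15% count threshold from this embodiment; the function name and array layout are invented for the sketch.

```python
import numpy as np

# Screening ranges stated in the patent: hue 0-25 or 335-360 (degrees),
# saturation 0.2-0.9, lightness 0-0.4.
HUE_LOW, HUE_HIGH = 25.0, 335.0
SAT_MIN, SAT_MAX = 0.2, 0.9
VAL_MAX = 0.4

def color_face_check(hsv_face, ratio_threshold=0.15):
    """Return True when enough face pixels fall inside the skin ranges.

    hsv_face is an (H, W, 3) float array: hue in degrees [0, 360),
    saturation and lightness normalized to [0, 1]."""
    hue, sat, val = hsv_face[..., 0], hsv_face[..., 1], hsv_face[..., 2]
    mask = ((hue <= HUE_LOW) | (hue >= HUE_HIGH)) \
        & (sat >= SAT_MIN) & (sat <= SAT_MAX) & (val <= VAL_MAX)
    n_pixels = hsv_face.shape[0] * hsv_face.shape[1]
    return np.count_nonzero(mask) > ratio_threshold * n_pixels
```

Note that OpenCV's default HSV conversion scales hue to 0-179 and saturation/value to 0-255, so real inputs would need rescaling first.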
Further, referring to fig. 2, the screening condition used in judging whether the near-infrared image is a real face is as follows: traverse the pixels of the face region in the near-infrared image, denoting the pixel visited at each step as point A; select another point near point A, denoted point B, such that point B lies farther from the center of the face than point A (in this embodiment, point B is chosen with the same ordinate as point A and with its abscissa shifted several pixels in the direction away from the face center); then compute the average lightness values avrA and avrB over the respective neighborhoods of points A and B. The formula for avrA is:
avrA = (1/25) × Σ_{k ∈ N(A)} I(k)
where N(A) denotes the neighborhood of point A, taken as the 5 × 5 square centered on A (and likewise N(B) for point B), and I(k) is the lightness of pixel k in the neighborhood; avrB is obtained in the same way.
If avrA > avrB, point A is considered to satisfy the screening condition.
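The neighborhood comparison can be sketched as follows. This is a rough NumPy illustration under stated assumptions: point B's horizontal offset is fixed at 6 pixels (the embodiment only says several pixels), the face center line is taken as the crop's middle column, and the 50% threshold is applied to the pixels actually visited, since border pixels whose 5 × 5 neighborhoods would leave the crop are skipped.

```python
import numpy as np

def nir_brightness_check(gray_face, offset=6, ratio_threshold=0.5):
    """Screen a near-infrared face crop by the illumination fall-off rule:
    for each interior pixel A, pick a point B on the same row, `offset`
    pixels farther from the vertical center line, and compare the mean
    lightness of their 5x5 neighborhoods."""
    h, w = gray_face.shape
    cx = w // 2  # column of the assumed face center line
    passing = total = 0
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            # B shares A's ordinate; its abscissa shifts away from center.
            bx = x + offset if x >= cx else x - offset
            if bx < 2 or bx > w - 3:
                continue  # B's 5x5 neighborhood would fall outside the crop
            avr_a = gray_face[y - 2:y + 3, x - 2:x + 3].mean()
            avr_b = gray_face[y - 2:y + 3, bx - 2:bx + 3].mean()
            total += 1
            if avr_a > avr_b:
                passing += 1
    return total > 0 and passing > ratio_threshold * total
```

A crop whose brightness peaks at the center column and falls off to both sides passes this check; a flat crop (e.g. a uniformly lit print) does not.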
The advantage of this method is that the color image captured by the visible-light camera is judged by the rule that the color of a real face falls within a certain range, and the near-infrared image captured by the near-infrared camera is judged by the rule that the illumination on a real face decreases gradually from the center line of the face toward both sides. The method is simple, easy to implement, and highly real-time; its liveness judgment is accurate and its detection capability strong; and the user need not perform any interactive actions, giving a good user experience.
Example two
Further, the multitask convolutional neural network, when locating the face regions in the two images, also locates the regions where the eyes are; after the face region in the image data has been located, referring to fig. 3, it is further judged whether the color image and the near-infrared image show a real face according to the bright pupil effect (the human eye reflects near-infrared light only weakly, so the bright region of a real eye appears darker in the near-infrared image than in the color image), with the following specific steps:
Respectively calculate the average brightness of all pixels in the eye region of the color image and of the near-infrared image; in this embodiment, the average brightness is computed as:
avr = (1/n) × Σ_{k ∈ E} I(k)
where E denotes the whole eye region, I(k) is the brightness of pixel k in that region, and n is the total number of pixels in the eye region;
Traverse the pixels of the eye region in the color image and in the near-infrared image, and count, in each image, the pixels whose brightness exceeds 1.5 times the average brightness of the eye region they belong to, recording the counts as Sum1 and Sum2 respectively; if Sum1 exceeds 15% of the total number of pixels in the eye region of the color image and Sum2 is below 20% of the total number of pixels in the eye region of the near-infrared image, the color image and the near-infrared image are considered to show a real face; otherwise, end the detection.
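The bright-pupil screen can be sketched compactly. The snippet below is an illustrative NumPy version, assuming the two eye regions arrive as grayscale brightness arrays; the 1.5×, 15%, and 20% constants follow this embodiment, while the function names are invented for the sketch.

```python
import numpy as np

def bright_pupil_check(eye_color, eye_nir):
    """Bright-pupil screen: a real eye shows a strong highlight under
    visible light (Sum1 large) but reflects near infrared weakly
    (Sum2 small). Inputs are grayscale brightness arrays of the two
    eye regions."""
    def bright_fraction(region):
        # fraction of pixels brighter than 1.5x the region's mean
        return np.count_nonzero(region > 1.5 * region.mean()) / region.size
    return bright_fraction(eye_color) > 0.15 and bright_fraction(eye_nir) < 0.20
```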
The advantage of this step is that judging whether the color image and the near-infrared image show a real face according to the bright pupil effect provides a preliminary screen for the subsequent judgments, reducing the subsequent amount of computation and strengthening the detection capability.
Example three
Further, after the face region on the image data is located, an edge detection algorithm is used to judge whether the near-infrared image is a real face, and the specific steps are as follows:
Referring to fig. 4, find the pixels forming the face contour in the near-infrared image (using the open-source Canny edge detection algorithm); if the number of contour pixels is greater than the threshold (in this embodiment, 50), the near-infrared image is considered a real face; otherwise, end the detection.
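The contour count can be illustrated as follows. The patent names the open-source Canny detector; to keep this sketch dependency-free, a plain gradient-magnitude test stands in for Canny, so the gradient threshold of 40 is an assumed value, while the count threshold of 50 follows this embodiment.

```python
import numpy as np

def nir_contour_check(gray_face, grad_threshold=40.0, count_threshold=50):
    """Count edge pixels in the NIR face crop and require more than
    count_threshold of them (50 in this embodiment). A plain gradient-
    magnitude test stands in for the Canny detector named by the patent;
    grad_threshold is an assumed value for this sketch."""
    g = gray_face.astype(float)
    gy, gx = np.gradient(g)   # per-axis central differences
    mag = np.hypot(gx, gy)    # gradient magnitude per pixel
    return int(np.count_nonzero(mag > grad_threshold)) > count_threshold
```

In practice `cv2.Canny` would replace the gradient test, adding non-maximum suppression and hysteresis thresholding.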
The method has the advantages that the screening of the near-infrared image is enhanced through an edge detection algorithm, the accuracy is improved, and the detection capability is enhanced.
Example four
Referring to fig. 1-4, a near-infrared camera-based liveness detection device includes a memory and a processor, the memory storing instructions adapted to be loaded by the processor and to perform the steps of:
acquiring image data, wherein the image data comprises a color image obtained by a visible light camera and a near-infrared image obtained by a near-infrared camera;
locating a face region in the image data: inputting the color image and the near-infrared image respectively into a multitask convolutional neural network, which locates the face region in both images; if a face region is successfully located in both images, continuing to the next step; otherwise, ending the detection. If a face is located in the color image but no face is found in the near-infrared image, this indicates an attack with a forged face, and no further detection is needed;
judging whether the color image is a real face or not according to the color of a face area in the color image, presetting a screening condition of pixel points, wherein the screening condition comprises three factors of hue, saturation and lightness, and traversing the pixel points in the face area in the color image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, if the number of the pixels meeting the screening condition exceeds the number threshold (in the embodiment, the number threshold is 15% of the number of all the pixels in the face area), continuing the next step, otherwise, ending the detection;
judging whether the near-infrared image is a real face or not according to the brightness of the face region in the near-infrared image, presetting a screening condition of pixel points, and traversing the pixel points in the face region in the near-infrared image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, and if the number of the pixels meeting the screening condition exceeds the number threshold (in the embodiment, the number threshold is 50% of the number of all the pixels in the face region), determining that the near-infrared image is a real face; otherwise, ending the detection.
Further, the screening condition used in the process of judging whether the color image is a real face is specifically as follows: the hue range of the pixel point is 0 to 25 or 335 to 360, the saturation range is 0.2 to 0.9, and the brightness range is 0 to 0.4.
Further, referring to fig. 2, the screening condition used in judging whether the near-infrared image is a real face is as follows: traverse the pixels of the face region in the near-infrared image, denoting the pixel visited at each step as point A; select another point near point A, denoted point B, such that point B lies farther from the center of the face than point A (in this embodiment, point B is chosen with the same ordinate as point A and with its abscissa shifted several pixels in the direction away from the face center); then compute the average lightness values avrA and avrB over the respective neighborhoods of points A and B. The formula for avrA is:
avrA = (1/25) × Σ_{k ∈ N(A)} I(k)
where N(A) denotes the neighborhood of point A, taken as the 5 × 5 square centered on A (and likewise N(B) for point B), and I(k) is the lightness of pixel k in the neighborhood; avrB is obtained in the same way.
If avrA > avrB, point A is considered to satisfy the screening condition.
The advantage of this method is that the color image captured by the visible-light camera is judged by the rule that the color of a real face falls within a certain range, and the near-infrared image captured by the near-infrared camera is judged by the rule that the illumination on a real face decreases gradually from the center line of the face toward both sides. The method is simple, easy to implement, and highly real-time; its liveness judgment is accurate and its detection capability strong; and the user need not perform any interactive actions, giving a good user experience.
Example five
Further, the multitask convolutional neural network, when locating the face regions in the two images, also locates the regions where the eyes are; after the face region in the image data has been located, referring to fig. 3, it is further judged whether the color image and the near-infrared image show a real face according to the bright pupil effect (the human eye reflects near-infrared light only weakly, so the bright region of a real eye appears darker in the near-infrared image than in the color image), with the following specific steps:
Respectively calculate the average brightness of all pixels in the eye region of the color image and of the near-infrared image; in this embodiment, the average brightness is computed as:
avr = (1/n) × Σ_{k ∈ E} I(k)
where E denotes the whole eye region, I(k) is the brightness of pixel k in that region, and n is the total number of pixels in the eye region;
Traverse the pixels of the eye region in the color image and in the near-infrared image, and count, in each image, the pixels whose brightness exceeds 1.5 times the average brightness of the eye region they belong to, recording the counts as Sum1 and Sum2 respectively; if Sum1 exceeds 15% of the total number of pixels in the eye region of the color image and Sum2 is below 20% of the total number of pixels in the eye region of the near-infrared image, the color image and the near-infrared image are considered to show a real face; otherwise, end the detection.
The advantage of this step is that judging whether the color image and the near-infrared image show a real face according to the bright pupil effect provides a preliminary screen for the subsequent judgments, reducing the subsequent amount of computation and strengthening the detection capability.
Example six
Further, after the face region on the image data is located, an edge detection algorithm is used to judge whether the near-infrared image is a real face, and the specific steps are as follows:
Referring to fig. 4, find the pixels forming the face contour in the near-infrared image (using the open-source Canny edge detection algorithm); if the number of contour pixels is greater than the threshold (in this embodiment, 50), the near-infrared image is considered a real face; otherwise, end the detection.
The method has the advantages that the screening of the near-infrared image is enhanced through an edge detection algorithm, the accuracy is improved, and the detection capability is enhanced.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A living body detection method based on a near-infrared camera is characterized by comprising the following steps:
acquiring image data, wherein the image data comprises a color image obtained by a visible light camera and a near-infrared image obtained by a near-infrared camera;
locating a face region in the image data: inputting the color image and the near-infrared image respectively into a multitask convolutional neural network, which locates the face region in both images; if a face region is successfully located in both images, continuing to the next step; otherwise, ending the detection;
judging whether the color image is a real face or not according to the color of a face area in the color image, presetting a screening condition of pixel points, wherein the screening condition comprises three factors of hue, saturation and lightness, and traversing the pixel points in the face area in the color image according to the screening condition; presetting a number threshold of pixels, counting the number of the pixels meeting the screening condition, continuing the next step if the number of the pixels meeting the screening condition exceeds the number threshold, and otherwise, finishing the detection;
judging whether the near-infrared image is a real face according to the lightness of the face region in the near-infrared image: presetting a screening condition for pixel points and traversing the pixel points in the face region of the near-infrared image against it; presetting a number threshold, counting the pixel points that meet the screening condition, and if their number exceeds the threshold, determining that the near-infrared image is a real face; otherwise, ending the detection.
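The staged flow of claim 1 can be sketched as a short cascade in which any failing stage ends the detection; the stage implementations below are placeholder callables, not the patent's concrete detector or thresholds:

```python
# High-level flow of the claimed method, with each stage stubbed out; the
# detector, screening rules and thresholds are assumptions, not the patent's
# concrete values.
def detect_liveness(color_img, nir_img, locate_face, color_count_ok, nir_count_ok):
    """Return True only if every stage passes; any failure ends detection."""
    face_color = locate_face(color_img)   # e.g. an MTCNN-style detector
    face_nir = locate_face(nir_img)
    if face_color is None or face_nir is None:
        return False                      # a face must be found in both images
    if not color_count_ok(face_color):    # HSV screening + count threshold
        return False
    return nir_count_ok(face_nir)         # NIR lightness screening + count threshold

# Usage with trivial stand-in stages:
print(detect_liveness("c", "n", lambda im: im, lambda f: True, lambda f: True))  # True
```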
2. The living body detection method based on the near-infrared camera according to claim 1, wherein the screening condition used when judging whether the color image is a real face is specifically: the hue of a pixel point ranges from 0 to 25 or from 335 to 360, its saturation from 0.2 to 0.9, and its lightness from 0 to 0.4.
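A minimal per-pixel check of this screening condition might look as follows, assuming hue on a 0-360 scale and saturation and lightness on 0-1 scales, with Python's standard `colorsys` performing the RGB-to-HSV conversion; the counting helper and its pixel-list interface are illustrative, not from the patent:

```python
import colorsys

def meets_skin_condition(r, g, b):
    """Screening condition of claim 2, assuming RGB inputs in 0-255 and the
    HSV scales hue 0-360, saturation 0-1, lightness (value) 0-1."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue = h * 360.0
    return (hue <= 25 or hue >= 335) and 0.2 <= s <= 0.9 and v <= 0.4

def color_image_is_real(pixels, count_thresh):
    # pixels: iterable of (r, g, b) tuples from the located face region;
    # the face passes when enough pixels satisfy the screening condition.
    return sum(meets_skin_condition(*p) for p in pixels) > count_thresh

# A dark reddish pixel (hue ~12, saturation 0.5, value ~0.39) satisfies it:
print(meets_skin_condition(100, 60, 50))  # True
```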
3. The living body detection method based on the near-infrared camera according to claim 2, wherein the screening condition used when judging whether the near-infrared image is a real face is specifically: traversing the pixel points in the face region of the near-infrared image, the point visited on each iteration being recorded as point A; selecting another point near point A, farther from the center of the face than point A, and recording it as point B; computing the average lightness values avrA and avrB of the pixel points in the respective neighborhoods of points A and B, and if avrA is greater than avrB, deeming that point A meets the screening condition.
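One possible reading of this condition is sketched below; the claim does not fix the neighborhood size, the distance from A to B, or how densely the region is traversed, so the window radius `r`, the outward `step`, and the sampling stride of 2 are all assumptions. A real, illuminator-lit 3-D face tends to be brighter toward its center than toward its rim, which is what the avrA > avrB comparison captures:

```python
import numpy as np

def neighborhood_mean(img, y, x, r=2):
    """Mean lightness in a (2r+1)x(2r+1) window, clipped at the borders."""
    h, w = img.shape
    return float(img[max(0, y - r):min(h, y + r + 1),
                     max(0, x - r):min(w, x + r + 1)].mean())

def count_center_brighter(gray, cy, cx, step=4, r=2):
    """For each sampled point A, take point B `step` pixels farther from the
    face center (cy, cx) along the radial direction and count A when
    avrA > avrB; the returned count is compared against a preset threshold."""
    h, w = gray.shape
    count = 0
    for y in range(r, h - r, 2):
        for x in range(r, w - r, 2):
            dy, dx = y - cy, x - cx
            norm = max(1.0, (dy * dy + dx * dx) ** 0.5)
            by = int(round(y + step * dy / norm))   # B lies outward from A
            bx = int(round(x + step * dx / norm))
            if 0 <= by < h and 0 <= bx < w:
                if neighborhood_mean(gray, y, x, r) > neighborhood_mean(gray, by, bx, r):
                    count += 1
    return count
```

A flat image (e.g. a printed photo re-imaged under even illumination) yields a count of zero, while a radial brightness gradient centered on the face yields a high count.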
4. The living body detection method based on the near-infrared camera according to claim 1, characterized in that: the multi-task convolutional neural network locates the face regions on the two images and also locates the regions where the human eyes are located; after the face region on the image data is located, whether the color image and the near-infrared image show a real face is further judged according to the bright-pupil effect, with the following specific steps:
respectively calculating the average brightness of all pixel points in the eye regions of the color image and the near-infrared image; traversing the pixel points in those eye regions and counting, for each image, the pixel points whose brightness exceeds 1.5 times the average brightness of the eye region they belong to, recorded as Sum1 and Sum2 respectively; if Sum1 exceeds 15% of the total number of pixel points in the eye region of the color image and Sum2 is less than 20% of the total number of pixel points in the eye region of the near-infrared image, considering that the color image and the near-infrared image show a real face; otherwise, ending the detection.
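The bright-pupil test above can be sketched as two fraction checks over brightness arrays of the located eye regions; the 1.5x multiplier and the 15%/20% bounds come from the claim, while the 2-D-array interface is an assumption:

```python
import numpy as np

def bright_fraction(region):
    """Fraction of pixels brighter than 1.5x the region's mean brightness."""
    region = np.asarray(region, dtype=np.float64)
    return float((region > 1.5 * region.mean()).sum()) / region.size

def bright_pupil_check(color_eye, nir_eye):
    # Claim 4: Sum1 must exceed 15% of the color eye region's pixels and
    # Sum2 must stay below 20% of the NIR eye region's pixels.
    return bright_fraction(color_eye) > 0.15 and bright_fraction(nir_eye) < 0.20
```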
5. The living body detection method based on the near-infrared camera according to claim 1, wherein after the face region on the image data is located, an edge detection algorithm is further used to determine whether the near-infrared image is a real face, and the specific steps are as follows:
finding out pixel points forming the human face contour in the near-infrared image, and if the number of the pixel points forming the human face contour is larger than a threshold value, determining that the near-infrared image is a real human face; otherwise, the detection is finished.
6. A living body detection device based on a near-infrared camera, comprising a memory and a processor, the memory storing instructions adapted to be loaded and executed by the processor to perform the following steps:
acquiring image data, wherein the image data comprises a color image obtained by a visible light camera and a near-infrared image obtained by a near-infrared camera;
locating a face region on the image data: respectively inputting the color image and the near-infrared image into a multi-task convolutional neural network, which locates the face region on both images; if the face region is successfully located on both images, continuing to the next step; otherwise, ending the detection;
judging whether the color image is a real face according to the color of the face region in the color image: presetting a screening condition for pixel points comprising three factors, hue, saturation and lightness, and traversing the pixel points in the face region of the color image against the screening condition; presetting a number threshold, counting the pixel points that meet the screening condition, and if their number exceeds the threshold, continuing to the next step; otherwise, ending the detection;
judging whether the near-infrared image is a real face according to the lightness of the face region in the near-infrared image: presetting a screening condition for pixel points and traversing the pixel points in the face region of the near-infrared image against it; presetting a number threshold, counting the pixel points that meet the screening condition, and if their number exceeds the threshold, determining that the near-infrared image is a real face; otherwise, ending the detection.
7. The living body detection device based on the near-infrared camera according to claim 6, wherein the screening condition used when judging whether the color image is a real face is specifically: the hue of a pixel point ranges from 0 to 25 or from 335 to 360, its saturation from 0.2 to 0.9, and its lightness from 0 to 0.4.
8. The living body detection device based on the near-infrared camera according to claim 7, wherein the screening condition used when judging whether the near-infrared image is a real face is specifically: traversing the pixel points in the face region of the near-infrared image, the point visited on each iteration being recorded as point A; selecting another point near point A, farther from the center of the face than point A, and recording it as point B; computing the average lightness values avrA and avrB of the pixel points in the respective neighborhoods of points A and B, and if avrA is greater than avrB, deeming that point A meets the screening condition.
9. The living body detection device based on the near-infrared camera according to claim 6, characterized in that: the multi-task convolutional neural network locates the face regions on the two images and also locates the regions where the human eyes are located; after the face region on the image data is located, whether the color image and the near-infrared image show a real face is further judged according to the bright-pupil effect, with the following specific steps:
respectively calculating the average brightness of all pixel points in the eye regions of the color image and the near-infrared image; traversing the pixel points in those eye regions and counting, for each image, the pixel points whose brightness exceeds 1.5 times the average brightness of the eye region they belong to, recorded as Sum1 and Sum2 respectively; if Sum1 exceeds 15% of the total number of pixel points in the eye region of the color image and Sum2 is less than 20% of the total number of pixel points in the eye region of the near-infrared image, considering that the color image and the near-infrared image show a real face; otherwise, ending the detection.
10. The living body detection device based on the near-infrared camera according to claim 6, wherein after the face region on the image data is located, an edge detection algorithm is further used to determine whether the near-infrared image is a real face, and the specific steps are as follows:
finding out pixel points forming the human face contour in the near-infrared image, and if the number of the pixel points forming the human face contour is larger than a threshold value, determining that the near-infrared image is a real human face; otherwise, the detection is finished.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010649058.4A CN111832464B (en) | 2020-07-08 | 2020-07-08 | Living body detection method and device based on near infrared camera |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111832464A true CN111832464A (en) | 2020-10-27 |
| CN111832464B CN111832464B (en) | 2024-10-15 |
Family
ID=72899692
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010649058.4A Active CN111832464B (en) | 2020-07-08 | 2020-07-08 | Living body detection method and device based on near infrared camera |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111832464B (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113435287A (en) * | 2021-06-21 | 2021-09-24 | 深圳拓邦股份有限公司 | Lawn obstacle recognition method and device, mowing robot and readable storage medium |
| CN114067445A (en) * | 2021-11-26 | 2022-02-18 | 中科海微(北京)科技有限公司 | Data processing method, device and equipment for face authenticity identification and storage medium |
| CN114187667A (en) * | 2021-12-14 | 2022-03-15 | 中国电信集团系统集成有限责任公司 | IT engineering project full life cycle management system |
| CN114220147A (en) * | 2021-12-06 | 2022-03-22 | 盛视科技股份有限公司 | Silent living body face recognition method, terminal and readable medium |
| CN115439885A (en) * | 2022-08-24 | 2022-12-06 | 奥比中光科技集团股份有限公司 | Skin region identification method, living body detection method and related device |
| US12279857B2 (en) | 2021-04-01 | 2025-04-22 | Hill-Rom Services, Inc. | Video used to estimate vital signs |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2013131407A1 (en) * | 2012-03-08 | 2013-09-12 | 无锡中科奥森科技有限公司 | Double verification face anti-counterfeiting method and device |
| CN105205437A (en) * | 2014-06-16 | 2015-12-30 | 浙江宇视科技有限公司 | Side face detecting method and device based on head profile authentication |
| CN107798281A (en) * | 2016-09-07 | 2018-03-13 | 北京眼神科技有限公司 | A kind of human face in-vivo detection method and device based on LBP features |
| CN108764071A (en) * | 2018-05-11 | 2018-11-06 | 四川大学 | It is a kind of based on infrared and visible images real human face detection method and device |
| CN110532993A (en) * | 2019-09-04 | 2019-12-03 | 深圳市捷顺科技实业股份有限公司 | A kind of face method for anti-counterfeit, device, electronic equipment and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |