CN111738176A - A living body detection model training, living body detection method, device, equipment and medium - Google Patents
- Publication number
- CN111738176A (application number CN202010594280.9A)
- Authority
- CN
- China
- Prior art keywords
- living body
- body detection
- face
- picture
- sample
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/40—Spoof detection, e.g. liveness detection
- G06V40/45—Detection of the body part being alive
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
Description
Technical Field

The present application relates to the field of computer technology, and in particular to a liveness detection model training method, a liveness detection method, and a corresponding apparatus, device and medium.

Background

Liveness detection is an important application of computer technology, and improving its accuracy and efficiency has become an important topic. In view of this, more effective and more efficient liveness detection schemes are needed.
Summary of the Invention

The embodiments of this specification provide a liveness detection model training method, a liveness detection method, and a corresponding apparatus, device and medium, so as to solve the technical problem of how to perform liveness detection more effectively and more efficiently.

To solve the above technical problem, the embodiments of this specification adopt the following technical solutions.

The embodiments of this specification provide a liveness detection model training method, including:

obtaining a sample set for model training, where the samples in the sample set include positive samples and negative samples, and each sample contains at least two face pictures taken at different distances;

determining the optical flow map corresponding to each sample; and

training a liveness detection model using the optical flow map corresponding to each sample.

The embodiments of this specification provide a first liveness detection method, including:

obtaining a picture group containing at least two face pictures taken at different distances, where the face pictures contain the face region of the detection subject;

determining the optical flow map corresponding to the picture group; and

inputting the optical flow map into the liveness detection model obtained by the above training, and judging whether the detection subject is a live person according to the output data of the model.

The embodiments of this specification provide a second liveness detection method, including:

collecting a picture group containing at least two face pictures taken at different distances, where the face pictures contain the face region of the detection subject; and

sending the picture group to a liveness detection end, so that the liveness detection end determines the optical flow map corresponding to the picture group, inputs the optical flow map into the liveness detection model obtained by the above training, and judges whether the detection subject is a live person according to the output data of the model.
The embodiments of this specification provide a liveness detection model training apparatus, including:

a sample module, configured to obtain a sample set for model training, where the samples in the sample set include positive samples and negative samples, and each sample contains at least two face pictures taken at different distances;

an optical flow module, configured to determine the optical flow map corresponding to each sample; and

a training module, configured to train a liveness detection model using the optical flow map corresponding to each sample.

The embodiments of this specification provide a liveness detection apparatus configured with the liveness detection model obtained by the above training, including:

a picture module, configured to obtain a picture group containing at least two face pictures taken at different distances, where the face pictures contain the face region of the detection subject;

an optical flow module, configured to determine the optical flow map corresponding to the picture group; and

a detection module, configured to input the optical flow map into the liveness detection model and judge whether the detection subject is a live person according to the output data of the model.

The embodiments of this specification provide a liveness detection apparatus, including:

a picture module, configured to collect a picture group containing at least two face pictures taken at different distances, where the face pictures contain the face region of the detection subject; and

a sending module, configured to send the picture group to a liveness detection end, so that the liveness detection end determines the optical flow map corresponding to the picture group, inputs the optical flow map into the liveness detection model obtained by the method of any one of claims 1 to 7, and judges whether the detection subject is a live person according to the output data of the model.
The embodiments of this specification provide a liveness detection model training device, including:

at least one processor; and

a memory communicatively connected to the at least one processor;

where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above liveness detection model training method.

The embodiments of this specification provide a liveness detection device, including:

at least one processor; and

a memory communicatively connected to the at least one processor;

where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above first liveness detection method.

The embodiments of this specification provide a liveness detection device, including:

at least one processor; and

a memory communicatively connected to the at least one processor;

where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the above second liveness detection method.

The embodiments of this specification provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above liveness detection model training method.

The embodiments of this specification provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above first liveness detection method.

The embodiments of this specification provide a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the above second liveness detection method.
At least one of the above technical solutions adopted in the embodiments of this specification can achieve the following beneficial effects: by computing the optical flow between face pictures taken at different distances and generating an optical flow map, and then using the optical flow map to train a liveness detection model and perform liveness detection, the dynamic changes and perspective effects of a real face can be captured, improving both the accuracy and the efficiency of liveness detection.
Brief Description of the Drawings

In order to illustrate the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some of the embodiments described in this specification; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram of the execution subject of the liveness detection model training method provided by the first embodiment of this specification.

FIG. 2 is a schematic flowchart of the liveness detection model training method provided by the first embodiment of this specification.

FIG. 3 is a schematic diagram of liveness detection model training provided by the first embodiment of this specification.

FIG. 4 is a schematic flowchart of the liveness detection method provided by the second embodiment of this specification.

FIG. 5 is a schematic diagram of a display in the second embodiment of this specification.

FIG. 6 is a schematic diagram of another display in the second embodiment of this specification.

FIG. 7 is a schematic diagram of another display in the second embodiment of this specification.

FIG. 8 is a schematic flowchart of the liveness detection method provided by the third embodiment of this specification.

FIG. 9 is a schematic structural diagram of the liveness detection model training apparatus provided by the fourth embodiment of this specification.

FIG. 10 is a schematic structural diagram of the liveness detection apparatus provided by the fifth embodiment of this specification.

FIG. 11 is a schematic structural diagram of the liveness detection apparatus provided by the sixth embodiment of this specification.
Detailed Description

In order to enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments of this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the prior art, liveness detection is already a common requirement; scenarios such as face-scan login and face-scan payment usually require it. In general, pictures or videos of the detection subject are collected for liveness detection, and many models and algorithms have been developed for this purpose, such as single-frame silent liveness algorithms based on deep learning models, or action-based liveness algorithms (i.e., algorithms that require the subject to perform actions such as blinking or shaking the head). On the other hand, various spoofing attacks against liveness detection have appeared, such as presenting software-generated videos or pictures, pictures re-shot from a screen, or printed pictures. For example, although the single-frame silent algorithm can effectively intercept screens and re-shot photos with obvious borders or material characteristics, it performs poorly on photos of high-definition screens. The action-based algorithm can, to some extent, also recognize high-definition screen photos and re-shot or printed photos, but it is easily defeated by videos generated by face-synthesis software that contain the required actions (such as blinking, head shaking, or mouth opening).
The first embodiment of this specification provides a liveness detection model training method. The execution subject of this embodiment may be a terminal (including but not limited to a mobile phone, a computer, or a pad), a server, or a corresponding liveness detection model training platform, system, or operating system; that is, the execution subject may take many forms and may be set, used, or changed as needed. In addition, a third-party application may assist the execution subject in executing this embodiment. For example, as shown in FIG. 1, the liveness detection model training method in this embodiment may be executed by a server, and an application corresponding to the server may be installed on a terminal (held by a user). Data can be transmitted between the terminal or application and the server, and the terminal or application can collect, input, or output data, or display pages or information to the user, thereby assisting the server in executing the method.
As shown in FIG. 2 and FIG. 3, the liveness detection model training method provided by this embodiment includes:

S101: (The execution subject) obtains a sample set for model training, where the samples in the sample set include positive samples and negative samples, and each sample contains at least two face pictures taken at different distances.

In this embodiment, a sample set containing a number of samples can be obtained. The samples may include positive samples and negative samples, and each sample contains at least two face pictures taken at different distances (the face pictures may come from a corresponding database). Further, each face picture should contain a complete face region. In this embodiment, a positive sample is a sample from a live person, i.e., its face pictures come from a live person (a living body in this specification generally refers to a natural person), as in the "live" row of FIG. 3. A negative sample is a sample that does not come from a live person, i.e., its face pictures are not captured from a live person; this includes, but is not limited to, face pictures generated by software or re-shot from a screen or photo, as in the "photo" and "screen" rows of FIG. 3.

In this embodiment, the number of face pictures in each sample is not specifically limited; it may be two or more. Generally, the face pictures in each positive sample should come from the same live person, and the face pictures in each negative sample should also show the same person's face, even if they are software-generated or re-shot.
In this embodiment, whether the distances of any two face pictures are the same or different can be determined according to the ratio of the region of interest in each face picture to that face picture. Specifically, the face region in the face picture (which may be selected by an existing face detection algorithm; its shape includes but is not limited to a rectangle) can be taken as the region of interest, and the ratio of the region of interest to the face picture is computed. This includes, but is not limited to, the ratio of the area of the region of interest to the area of the face picture, the ratio of the width of the region of interest to the width of the face picture, or the ratio of the length of the region of interest to the length of the face picture. The units of the above area/width/length include but are not limited to centimeters, pixels, and inches. The face region can be detected with an existing face detection algorithm or model, including but not limited to the MTCNN algorithm.

The distance between different face pictures can then be compared according to the above ratio: the smaller the ratio, the larger the distance. For example, if the ratio in picture A is 0.5 and the ratio in picture B is 0.3, picture A is judged to have been taken at a smaller distance than picture B. It should be noted that when comparing the ratios of any two face pictures, ratios with the same meaning should be compared; for example, both pictures should use the area ratio, or both the width ratio, or both the length ratio.
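The ratio comparison above can be sketched as follows. The face boxes would in practice come from a face detector such as MTCNN; here they are hard-coded illustrative values, and the width ratio is used as the common measure:

```python
def roi_ratio(face_box, image_size):
    """Width ratio of the detected face box to the whole picture.

    face_box: (x, y, w, h) of the face region, e.g. from an MTCNN-style
    detector (assumed; any face detector works).
    image_size: (width, height) of the picture, in the same units.
    """
    _, _, face_w, _ = face_box
    img_w, _ = image_size
    return face_w / img_w

def farther_picture(box_a, size_a, box_b, size_b):
    """Return 'A' if picture A was taken farther away, 'B' otherwise.

    A smaller face-to-picture ratio means a larger shooting distance.
    Both pictures must use the same kind of ratio (here: width ratio).
    """
    ratio_a = roi_ratio(box_a, size_a)
    ratio_b = roi_ratio(box_b, size_b)
    return "A" if ratio_a < ratio_b else "B"

# Picture A's face fills 0.5 of the width, picture B's fills 0.3,
# so B is judged to be the farther picture, as in the example above.
print(farther_picture((100, 80, 320, 400), (640, 480),
                      (200, 100, 192, 240), (640, 480)))  # prints "B"
```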
"Different distances" in this embodiment are equivalent to different shooting distances. For a positive sample, face pictures at different distances can be obtained because the live person is at different distances from the capture device; for a negative sample, because the above ratios differ, the face pictures at least "look like" they were taken at different shooting distances, or represent different shooting distances.
S103: (The execution subject) determines the optical flow map corresponding to each sample.

In this embodiment, the corresponding optical flow map can be determined for each sample. Specifically, determining the optical flow map corresponding to each sample includes: for any sample, aligning the face pictures contained in the sample, and using the aligned face pictures to determine the optical flow map corresponding to the sample.

In this embodiment, for any sample, aligning the face pictures it contains includes:

S1031: For any sample, performing first-stage alignment on the face pictures contained in the sample according to the facial key points in those pictures.

Specifically, for any sample, the key points of each face picture in the sample are extracted, and the face pictures in the sample are aligned in a first stage according to the key points (the first-stage alignment may also be called "coarse alignment"). Key points can be extracted with existing algorithms or models.

S1033: Performing second-stage alignment on the first-stage-aligned face pictures according to the machine vision features of the face pictures contained in the sample.

After the first-stage alignment of the sample's face pictures, machine vision features of each face picture in the sample can be extracted, and the face pictures are aligned in a second stage according to these features (the second-stage alignment may also be called "fine alignment"). The machine vision features may include at least one of SIFT, HOG, SURF, ORB, LBP, and HAAR features.
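The patent does not name a specific alignment algorithm. One common way to realize the coarse (first-stage) key-point alignment is a least-squares similarity fit between corresponding landmarks (Umeyama's method), sketched below with NumPy only; the key points themselves are assumed to come from any landmark extractor:

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping src key points onto dst key points (Umeyama's method).

    src, dst: (N, 2) arrays of corresponding facial key points
    (e.g. eye corners, nose tip, mouth corners from any landmark model).
    Returns a 2x3 affine matrix that can be used to warp one face
    picture onto the other before the fine (second-stage) alignment.
    """
    src_mean, dst_mean = src.mean(0), dst.mean(0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    # Cross-covariance between the centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    # Guard against reflections: force a proper rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

The second-stage "fine alignment" would then refine this estimate using the machine vision features listed above (e.g. matching SIFT or ORB descriptors), which is omitted here.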
For any sample, after aligning the face pictures it contains, the optical flow between the (aligned) face pictures can be computed, and an optical flow map corresponding to the face pictures is generated from this flow; this is the optical flow map corresponding to the sample.

With the above, the optical flow map corresponding to each sample (both positive and negative) can be determined.
S105: (The execution subject) trains a liveness detection model using the optical flow map corresponding to each sample.

In this embodiment, after the optical flow map corresponding to each sample has been generated, the liveness detection model can be trained with these optical flow maps. Specifically, this includes training a classification model (or optical-flow-map classification model) with the optical flow maps corresponding to the samples to obtain the liveness detection model.

The above classification model includes but is not limited to a CNN classification model, a PCA model, and the like, which is not specifically limited in this embodiment.
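As one toy instance of the PCA option mentioned above (a CNN would normally be the stronger choice in practice), a flow-map classifier can project flattened optical flow maps onto principal components and classify by nearest class centroid. The class and method names here are illustrative only:

```python
import numpy as np

class PCAFlowClassifier:
    """Toy optical-flow-map classifier: PCA projection followed by
    nearest-centroid classification. A sketch, not the patent's model."""

    def fit(self, flow_maps, labels, n_components=8):
        X = np.stack([f.ravel() for f in flow_maps]).astype(float)
        self.mean_ = X.mean(0)
        Xc = X - self.mean_
        # Principal axes from the SVD of the centered data matrix.
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        self.components_ = Vt[:n_components]
        Z = Xc @ self.components_.T
        self.labels_ = np.unique(labels)
        self.centroids_ = np.stack(
            [Z[np.asarray(labels) == c].mean(0) for c in self.labels_])
        return self

    def predict(self, flow_maps):
        X = np.stack([f.ravel() for f in flow_maps]).astype(float)
        Z = (X - self.mean_) @ self.components_.T
        # Distance of each projected map to each class centroid.
        d = np.linalg.norm(Z[:, None, :] - self.centroids_[None], axis=2)
        return self.labels_[d.argmin(1)]
```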
In this embodiment, each sample contains face pictures taken at different distances. If the face pictures in a sample come from a live person, different perspective effects are formed because of the different distances between the person and the capture device, and the optical flow map can accurately represent these perspective effects. This embodiment therefore uses optical flow maps to train the liveness detection model, so that the resulting model can perform liveness detection based on these perspective effects. Non-live videos or pictures, including but not limited to software-generated videos or pictures, re-shot pictures, or printed pictures, cannot exhibit these perspective effects, so the liveness detection model obtained in this embodiment can accurately detect whether the subject is a live person, with good detection performance and high efficiency. In this embodiment, the face pictures can be aligned in two stages before the optical flow map is generated, so that the generated map represents the perspective effects better, further improving the detection performance and efficiency of the resulting model.
The second embodiment of this specification provides a liveness detection method, which can be regarded as an application of the liveness detection model obtained in the first embodiment. The execution subject of this embodiment may be a terminal (including but not limited to a mobile phone, a computer, or a pad), a server, or a corresponding liveness detection platform, system, or operating system; that is, the execution subject may take many forms and may be set, used, or changed as needed. In addition, a third-party application may assist the execution subject in executing this embodiment. For example, as shown in FIG. 1, the liveness detection method in this embodiment may be executed by a server, and an application corresponding to the server may be installed on a terminal (held by a user). Data can be transmitted between the terminal or application and the server, and the terminal or application can collect, input, or output data, or display pages or information to the user, thereby assisting the server in executing the method.
As shown in FIG. 4, the liveness detection method provided by this embodiment includes:

S201: (The execution subject) obtains a picture group containing at least two face pictures taken at different distances, where the face pictures contain the face region of the detection subject.

In this embodiment, a picture group containing at least two face pictures at different distances (equivalent to different shooting distances) can be obtained in the manner described in 1.1, 1.2, or 1.3 below (though this embodiment is not limited to these manners):
1.1 Video decomposition
In this embodiment, a photographing device (which may include but is not limited to a camera, the same below) can shoot a video of the living body detection object. Depending on the detection result, the living body detection object here may be a living body, or may be a non-living video or picture (referred to as a "non-living body" for short, such as a re-photographed or generated video or picture, the same below). Specifically, the photographing device may start shooting after it detects the living body detection object, and may then send the captured video to the execution subject of this embodiment (the "execution subject" for short).
After receiving the captured video, the execution subject can decompose it into a number of frames and select at least two face pictures at different distances from those frames to form the picture group. Whether the distances of the frames differ can be determined in the manner provided in the first embodiment, the same below.
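As a minimal sketch of the frame-selection step (the function name, the ratio-based distance proxy, and the `min_gap` value are illustrative assumptions, not the patent's reference implementation), frames decomposed from the video could be filtered by their face-region ratio so that the picture group contains at least two clearly different shooting distances:

```python
def select_picture_group(frames, min_gap=0.1):
    """Pick at least two frames whose face-region ratios differ.

    frames: list of (frame_id, face_ratio) pairs, where face_ratio is the
    area of the region of interest divided by the whole picture area
    (a proxy for shooting distance, as in the first embodiment).
    min_gap: minimum ratio difference treated as "different distance".
    Returns a list of frame ids (ordered by ratio), or None if no valid
    group of at least two frames exists.
    """
    frames = sorted(frames, key=lambda f: f[1])
    group = [frames[0]]
    for frame in frames[1:]:
        if frame[1] - group[-1][1] >= min_gap:
            group.append(frame)
    return [fid for fid, _ in group] if len(group) >= 2 else None
```

For example, frames with ratios 0.20, 0.22, and 0.45 would yield a group of the first and third frames, since only those two differ enough in apparent distance.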
1.2 Receiving pictures
In this embodiment, each time the photographing device takes a face picture of the living body detection object, it can send the picture to the execution subject. Each time the execution subject receives a face picture, it sends an instruction to the photographing device so that the device issues prompt information, the prompt information being used to prompt the living body detection object to move closer to or farther from the photographing device; for example, the prompt may be "please move closer to or away from the camera". The photographing device may issue the prompt in text or audio form, the same below. For example, the device may have a display screen through which the prompt is shown as text, as shown in FIG. 5; and/or the device may have an audio module through which the prompt is played as audio.
It can be expected that if the living body detection object is a living body, it will move closer to or farther from the photographing device after receiving (for example, seeing or hearing, the same below) the prompt information; if the object is a non-living body, it should have been placed in front of the photographing device by its holder, and after receiving the prompt the holder will likewise move the object closer to or farther from the device.
After issuing the prompt information, the photographing device takes the next face picture and sends it to the execution subject.
The photographing device repeats the above process, i.e., "take and send the previous face picture - receive the instruction and issue the prompt - take and send the next face picture" ("previous" and "next" denote two adjacent pictures, the same below). The execution subject can thus receive a series of face pictures and select at least two at different distances to form the picture group.
1.3 Receiving pictures and judging distance
In this embodiment, each time the photographing device takes a face picture of the living body detection object, it can send the picture to the execution subject. Each time the execution subject receives a face picture, it determines the prompt information according to the ratio of the region of interest in that picture to the whole picture (as in the first embodiment), and sends the prompt to the photographing device so that the device issues it, prompting the living body detection object to move closer to or farther from the device.
Specifically, a threshold can be set. Each time the execution subject receives a face picture, it computes the ratio of the region of interest in that picture to the whole picture. If the ratio is greater than (or greater than or equal to) the threshold, the living body detection object is relatively close to the photographing device, and the prompt information can prompt the object to move away from the device, e.g., "please move away from the camera", as shown in FIG. 6. If the ratio is less than or equal to (or less than) the threshold, the object is relatively far from the device, and the prompt can ask it to move closer, e.g., "please move closer to the camera", as shown in FIG. 7.
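The threshold rule above can be sketched as follows (the 0.35 threshold and the function name are illustrative assumptions; the patent leaves the exact threshold unspecified):

```python
def choose_prompt(roi_area, picture_area, threshold=0.35):
    """Decide which prompt to send back to the photographing device.

    roi_area / picture_area is the region-of-interest ratio described
    above: a large ratio means the face fills the frame, i.e. the
    subject is close to the camera.
    """
    ratio = roi_area / picture_area
    if ratio > threshold:  # subject is close -> ask it to back off
        return "please move away from the camera"
    return "please move closer to the camera"
```

A face occupying half the frame would thus trigger the "move away" prompt, while a small face region triggers "move closer".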
After issuing the prompt information, the photographing device takes the next face picture and sends it to the execution subject.
The photographing device repeats the above process, i.e., "take and send the previous face picture - receive the instruction and issue the prompt - take and send the next face picture", so that the execution subject can receive a series of face pictures and select at least two at different distances to form the picture group.
In particular, in 1.2 or 1.3, after issuing the prompt information the photographing device can detect whether the living body detection object has actually moved, and take the next face picture only after movement has occurred. If the object does not move after the prompt is issued, the prompt is issued continuously, to prevent the holder of a non-living object from evading detection by keeping the object still.
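One simple way to implement the movement check (an illustrative sketch; the patent does not specify a detection method) is to compare consecutive camera frames and treat a sufficiently large mean pixel difference as movement:

```python
import numpy as np

def has_moved(prev_frame, next_frame, diff_threshold=5.0):
    """Return True if the scene changed enough between two grayscale
    frames (uint8 arrays of equal shape) to count as movement.

    diff_threshold is an assumed tuning constant: the mean absolute
    per-pixel intensity change that separates "still" from "moved".
    """
    diff = np.abs(prev_frame.astype(np.int16) - next_frame.astype(np.int16))
    return float(diff.mean()) > diff_threshold
```

A production system would likely restrict the comparison to the face region and smooth over several frames, but the thresholded frame difference captures the idea.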
In 1.2, the prompt information does not specify whether the living body detection object should move closer or farther; in 1.3, the distance between the object and the photographing device is judged (via the ratio), and the prompt then specifies moving closer or farther, so that the face region in the picture is neither too large nor too small and the face picture quality is better. In 1.3, if the object is close to the device it is prompted to move away, and vice versa; that is, the method adapts to the object's distance from the device and improves face picture quality.
In 1.2 and 1.3, once the execution subject has received a sufficient number of face pictures satisfying the different-distance requirement, it may instruct the photographing device to stop taking face pictures.
Each face picture in the above picture group should contain the (complete) face region of the living body detection object; even if the object is a non-living body, it can be expected to contain a face region, since it is intended to pass living body detection.
In this embodiment, whether any two face pictures were taken at the same or different distances can be determined from the ratio of the region of interest in each picture to that picture, with reference to the first embodiment.
S203: The execution subject determines the optical flow map corresponding to the picture group.
After the picture group is acquired, the optical flow map corresponding to the face pictures in the group, i.e., the optical flow map corresponding to the group, can be determined: the face pictures are aligned, and the aligned face pictures are used to determine the optical flow map of the group. Specifically, a first-stage alignment is performed on the face pictures according to the facial key points in each picture, and a second-stage alignment is then performed on the first-stage-aligned pictures according to their machine vision features. The machine vision features include at least one of SIFT, HOG, SURF, ORB, LBP, and HAAR features.
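The first-stage keypoint alignment can be sketched as a least-squares similarity transform between corresponding facial key points (an illustrative sketch using Umeyama's method; the patent does not prescribe a particular alignment algorithm, and the function name is an assumption):

```python
import numpy as np

def similarity_align(src_pts, dst_pts):
    """Estimate the similarity transform (scale, rotation, translation)
    mapping src_pts onto dst_pts by least squares (Umeyama's method),
    and return the transformed src_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding facial key points.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / n                      # cross-covariance
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, d])                          # guard against reflection
    R = U @ D @ Vt                                 # optimal rotation
    var_s = (src_c ** 2).sum() / n
    scale = (S * np.diag(D)).sum() / var_s         # optimal scale
    t = mu_d - scale * (R @ mu_s)                  # optimal translation
    return (scale * (R @ src.T)).T + t
```

In practice the same transform would then be applied to the whole face picture (warping pixels, not just key points) before the second-stage, feature-based refinement.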
Using the aligned face pictures to determine the optical flow map corresponding to the picture group includes: calculating the optical flow between the face pictures, and generating the optical flow map of the group from those flows.
For details, refer to the first embodiment.
S205: The execution subject inputs the optical flow map into the living body detection model obtained in the first embodiment, and judges whether the living body detection object is a living body according to the output data of the model.
After the optical flow map of the picture group is determined, it can be input into the living body detection model trained in the first embodiment, and whether the living body detection object is a living body is judged according to the model's output data. Specifically, this includes judging whether the object is a living body according to the confidence output by the model (that is, the output data includes a confidence).
In this embodiment, after the optical flow map is input into the living body detection model, the method further includes:
causing the living body detection model to extract image features from the optical flow map and compute a softmax over those features, so that the model determines and outputs the confidence. That is, the optical flow map is input into the living body detection model, which extracts image features from it, computes a softmax, and determines and outputs the confidence, where the confidence represents the probability that the living body detection object is a living body.
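The final softmax step can be sketched as follows (a minimal sketch assuming a two-class head with class order [non-living, living] and a 0.5 decision threshold, all of which are illustrative assumptions):

```python
import numpy as np

def liveness_confidence(logits):
    """Turn the model's two-class logits ([non-living, living]) into a
    softmax probability; the second entry is the living-body confidence."""
    z = np.asarray(logits, dtype=float)
    z = z - z.max()                      # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax
    return float(probs[1])

def is_living(logits, threshold=0.5):
    """Judge the detection object as living when the confidence reaches
    the (assumed) decision threshold."""
    return liveness_confidence(logits) >= threshold
```

Equal logits yield a confidence of 0.5, and a large "living" logit pushes the confidence toward 1.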
Through the above, it can be judged whether the living body detection object is a living body.
In this embodiment, the picture group contains face pictures taken at different distances. If the pictures come from a living body, different perspective effects are formed because of the different distances between the living body and the photographing device, and the optical flow map can accurately represent this effect. The living body detection model of this embodiment can perform detection based on this perspective effect, and inputting the optical flow map into the model allows accurate detection of whether the object is a living body, with good performance and high efficiency. The face pictures may undergo two-stage alignment before the optical flow map is generated, so that the map represents the perspective effect better, improving detection performance and efficiency. This embodiment also adapts to the shooting distance of the living body detection object and issues prompts to make the object move, thereby capturing the distinctive optical flow a living portrait forms while moving, further improving detection performance and efficiency.
The third embodiment of this specification provides a living body detection method, which can be regarded as an application of the living body detection model obtained in the first embodiment. The execution subject of this embodiment may be a terminal (including but not limited to a mobile phone, a computer, a tablet, etc.), a server, or a corresponding living body detection platform, system, or operating system; that is, the execution subject can take many forms and can be set, used, or changed as required. In addition, a third-party application program may assist the execution subject in carrying out this embodiment. For example, as shown in FIG. 1, the living body detection method may be executed by a server, with a corresponding application program installed on the user's terminal; data can be transmitted between the terminal or application program and the server, and the terminal or application program can collect, input, or output data, or display pages or information to the user, thereby assisting the server. The following description takes a photographing device as the execution subject by way of example.
As shown in FIG. 8, the living body detection method provided in this embodiment includes:
S301: The execution subject collects a picture group containing at least two face pictures taken at different distances, where each face picture contains the face region of the living body detection object.
In this embodiment, a picture group containing at least two face pictures at different distances (equivalently, different shooting distances) can be collected in any of the ways described in 2.1, 2.2, or 2.3 below (this embodiment is not limited to these ways):
2.1 Video decomposition
In this embodiment, the photographing device (which may include but is not limited to a camera) can shoot a video of the living body detection object; depending on the detection result, the object may be a living body or a non-living video or picture ("non-living body" for short, such as a re-photographed or generated video or picture, the same below). Specifically, the photographing device may start shooting after it detects the living body detection object.
The execution subject can decompose the captured video into a number of frames and select at least two face pictures at different distances from those frames to form the picture group. Whether the distances of the frames differ can be determined in the manner provided in the first embodiment, the same below.
2.2 Collecting pictures
In this embodiment, the photographing device can collect face pictures of the living body detection object, and each time it takes one, it issues prompt information prompting the object to move closer to or farther from the device; for example, the prompt may be "please move closer to or away from the camera". The device may issue the prompt in text or audio form, as in 1.2.
It can be expected that if the living body detection object is a living body, it will move closer to or farther from the photographing device after receiving (for example, seeing or hearing, the same below) the prompt information; if the object is a non-living body, it should have been placed in front of the device by its holder, and after receiving the prompt the holder will likewise move the object closer to or farther from the device.
After issuing the prompt information, the photographing device takes the next face picture.
The photographing device repeats the above process, i.e., "take the previous face picture - issue the prompt - take the next face picture", so that a series of face pictures can be collected, from which at least two at different distances are selected to form the picture group.
2.3 Collecting pictures and judging distance
In this embodiment, the photographing device can collect face pictures of the living body detection object, and each time it takes one, it determines prompt information according to the ratio of the region of interest in that picture to the whole picture (as in the first embodiment) and issues it, prompting the object to move closer to or farther from the device.
Specifically, a threshold can be set. Each time a face picture is collected, the ratio of the region of interest in that picture to the whole picture is computed. If the ratio is greater than (or greater than or equal to) the threshold, the living body detection object is relatively close to the photographing device, and the prompt can ask it to move away, e.g., "please move away from the camera"; if the ratio is less than or equal to (or less than) the threshold, the object is relatively far from the device, and the prompt can ask it to move closer, e.g., "please move closer to the camera".
After issuing the prompt information, the photographing device takes the next face picture.
The photographing device repeats the above process, i.e., "take the previous face picture - issue the prompt - take the next face picture", so that a series of face pictures can be collected, from which at least two at different distances are selected to form the picture group.
In particular, in 2.2 or 2.3, after issuing the prompt information the photographing device can detect whether the living body detection object has actually moved, and take the next face picture only after movement has occurred. If the object does not move after the prompt is issued, the prompt is issued continuously, to prevent the holder of a non-living object from evading detection by keeping the object still.
In 2.2, the prompt information does not specify whether the living body detection object should move closer or farther; in 2.3, the distance between the object and the photographing device is judged (via the ratio), and the prompt then specifies moving closer or farther, so that the face region in the picture is neither too large nor too small and the face picture quality is better. In 2.3, if the object is close to the device it is prompted to move away, and vice versa; that is, the method adapts to the object's distance from the device and improves face picture quality.
In 2.2 and 2.3, once a sufficient number of face pictures satisfying the different-distance requirement have been collected, the photographing device may stop taking face pictures.
Each face picture in the above picture group should contain the (complete) face region of the living body detection object; even if the object is a non-living body, it can be expected to contain a face region, since it is intended to pass living body detection.
In this embodiment, whether any two face pictures were taken at the same or different distances can be determined from the ratio of the region of interest in each picture to that picture, with reference to the first embodiment.
S303: The execution subject sends the picture group to a living body detection end, so that the detection end determines the optical flow map corresponding to the picture group, inputs the optical flow map into the living body detection model obtained in the first embodiment, and judges whether the living body detection object is a living body according to the model's output data.
The living body detection end may be the execution subject of the second embodiment, or another subject. The content here corresponds to that of the second embodiment and is not repeated.
For content not described in detail in this embodiment, refer to the first or second embodiment.
In this embodiment, the picture group contains face pictures taken at different distances. If the pictures come from a living body, different perspective effects are formed because of the different distances between the living body and the photographing device, and the optical flow map can accurately represent this effect. The living body detection model of this embodiment can perform detection based on this perspective effect, and inputting the optical flow map into the model allows accurate detection of whether the object is a living body, with good performance and high efficiency. The face pictures may undergo two-stage alignment before the optical flow map is generated, so that the map represents the perspective effect better, improving detection performance and efficiency. This embodiment also adapts to the shooting distance of the living body detection object and issues prompts to make the object move, thereby capturing the distinctive optical flow a living portrait forms while moving, further improving detection performance and efficiency.
As shown in FIG. 9, the fourth embodiment of this specification provides a living body detection model training apparatus, including:
a sample module 401, configured to acquire a sample set for model training, the samples in the set including positive samples and negative samples, each sample containing at least two face pictures taken at different distances;
an optical flow module 403, configured to determine the optical flow map corresponding to each sample; and
a training module 405, configured to train a living body detection model using the optical flow maps corresponding to the samples.
Optionally, determining the optical flow map corresponding to each sample includes:
for any sample, aligning the face pictures contained in that sample, and using the aligned face pictures to determine the optical flow map corresponding to the sample.
Optionally, for any sample, aligning the face pictures contained in the sample includes:
for any sample, performing a first-stage alignment of the face pictures contained in the sample according to the facial key points in those pictures; and
performing a second-stage alignment of the sample's first-stage-aligned face pictures according to the machine vision features of those pictures.
Optionally, the machine vision features include at least one of SIFT, HOG, SURF, ORB, LBP, and HAAR features.
Optionally, using the aligned face pictures of the sample to determine the optical flow map corresponding to the sample includes:
calculating the optical flow of the sample's aligned face pictures; and
generating the optical flow map corresponding to the sample from the optical flows of the aligned face pictures.
Optionally, the sample module 401 determines whether any two face pictures were taken at the same or different distances according to the ratio of the region of interest in each face picture to that picture.
Optionally, training a living body detection model using the optical flow maps corresponding to the samples includes:
training a classification model using the optical flow maps corresponding to the samples to obtain the living body detection model.
As shown in FIG. 10, the fifth embodiment of this specification provides a living body detection apparatus configured with the living body detection model obtained in the first embodiment, including:
a picture module 501, configured to acquire a picture group containing at least two face pictures taken at different distances, the face pictures containing the face region of the living body detection object;
an optical flow module 503, configured to determine the optical flow map corresponding to the picture group; and
a detection module 505, configured to input the optical flow map into the living body detection model and judge whether the living body detection object is a living body according to the model's output data.
可选的,获取包含至少两张不同距离的脸部图片的图片组包括:Optionally, acquiring a picture group including at least two face pictures with different distances includes:
接收拍摄设备发送的针对活体检测对象的拍摄视频,分解所述拍摄视频以获取若干帧图像,从所述若干帧图像中选择至少两张不同距离的脸部图片以形成图片组;Receive the shooting video for the living body detection object sent by the shooting device, decompose the shooting video to obtain several frames of images, and select at least two face pictures with different distances from the several frame images to form a picture group;
或,or,
接收拍摄设备发送的上一张脸部图片,向所述拍摄设备发送指令,以使所述拍摄设备发布提示信息,所述提示信息用于提示所述活体检测对象靠近或远离所述拍摄设备;接收所述拍摄设备发送的所述下一张脸部图片;Receive the last face picture sent by the photographing device, and send an instruction to the photographing device, so that the photographing device issues prompt information, the prompt information is used to prompt the living body detection object to approach or move away from the photographing device; receiving the next face picture sent by the photographing device;
从接收的各所述脸部图片中选择至少两张不同距离的脸部图片以形成图片组;Selecting at least two face pictures at different distances from the received face pictures to form a picture group;
或,or,
接收拍摄设备发送的上一张脸部图片，根据所述上一张脸部图片中感兴趣区域与该张脸部图片的比例确定并向所述拍摄设备发送提示信息，以使所述拍摄设备发布所述提示信息，所述提示信息用于提示所述活体检测对象靠近或远离所述拍摄设备；接收所述拍摄设备发送的所述下一张脸部图片；Receive the previous face picture sent by the photographing device; determine prompt information according to the ratio of the region of interest in the previous face picture to that face picture and send it to the photographing device, so that the photographing device issues the prompt information, where the prompt information is used to prompt the liveness detection object to move closer to or farther from the photographing device; and receive the next face picture sent by the photographing device;
从接收的各所述脸部图片中选择至少两张不同距离的脸部图片以形成图片组。At least two face pictures at different distances are selected from each of the received face pictures to form a picture group.
可选的，所述图片模块501根据每张脸部图片中感兴趣区域与该张脸部图片的比例确定任两张所述脸部图片的距离是否相同或不同。Optionally, the picture module 501 determines whether the distances of any two of the face pictures are the same or different according to the ratio of the region of interest in each face picture to that face picture.
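The ratio-based distance check described above can be sketched as follows. The `tolerance` value and the function names are illustrative assumptions, not taken from the specification.

```python
# Sketch of the distance check: two face pictures count as taken at the
# "same" distance when the face region of interest occupies a similar
# fraction of the frame. Tolerance is an illustrative assumption.

def roi_ratio(face_box_area, picture_area):
    """Fraction of the picture occupied by the face region of interest."""
    return face_box_area / picture_area

def same_distance(ratio_a, ratio_b, tolerance=0.05):
    """Treat two pictures as same-distance when their ROI ratios are close."""
    return abs(ratio_a - ratio_b) <= tolerance

# Close-up shot: the face fills 40% of the frame; far shot: only 10%.
near = roi_ratio(face_box_area=192_000, picture_area=480_000)  # 0.4
far = roi_ratio(face_box_area=48_000, picture_area=480_000)    # 0.1
print(same_distance(near, far))  # expected: False
```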
可选的,确定所述图片组对应的光流图包括:Optionally, determining the optical flow map corresponding to the picture group includes:
对齐各所述脸部图片,使用对齐后的各所述脸部图片确定所述图片组对应的光流图。Align each of the face pictures, and use the aligned face pictures to determine an optical flow map corresponding to the picture group.
可选的,对齐各所述脸部图片包括:Optionally, aligning each of the face pictures includes:
根据各所述脸部图片中的脸部关键点对各所述脸部图片进行第一阶段对齐;Perform the first-stage alignment on each of the facial pictures according to the facial key points in each of the facial pictures;
根据各所述脸部图片的机器视觉特征，对经所述第一阶段对齐后的各所述脸部图片进行第二阶段对齐。A second-stage alignment is performed, according to the machine vision features of each of the face pictures, on the face pictures obtained after the first-stage alignment.
可选的,所述机器视觉特征包括SIFT特征、HOG特征、SURF特征、ORB特征、LBP特征、HAAR特征的至少一种。Optionally, the machine vision features include at least one of SIFT features, HOG features, SURF features, ORB features, LBP features, and HAAR features.
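As an illustrative sketch of the first alignment stage (the specification does not fix the transform model), the following fits a uniform scale and translation that maps one picture's face key points onto a reference picture's key points by least squares; the second stage would then refine the result with matches of the machine vision features listed above. The key-point layout and helper names are assumptions.

```python
# Illustrative sketch of the first-stage alignment: fit a uniform scale s
# and translation (tx, ty) such that dst ≈ s * src + t, by least squares
# over matched face key points (anchored at the point centroids).

def fit_scale_translation(src_points, dst_points):
    """Return (s, tx, ty) minimizing sum ||s*src + t - dst||^2."""
    n = len(src_points)
    cx_s = sum(x for x, _ in src_points) / n
    cy_s = sum(y for _, y in src_points) / n
    cx_d = sum(x for x, _ in dst_points) / n
    cy_d = sum(y for _, y in dst_points) / n
    num = sum((x - cx_s) * (u - cx_d) + (y - cy_s) * (v - cy_d)
              for (x, y), (u, v) in zip(src_points, dst_points))
    den = sum((x - cx_s) ** 2 + (y - cy_s) ** 2 for x, y in src_points)
    s = num / den
    return s, cx_d - s * cx_s, cy_d - s * cy_s

def apply_transform(s, tx, ty, points):
    return [(s * x + tx, s * y + ty) for x, y in points]

# Eye corners and nose tip seen from farther away (half scale, shifted),
# aligned back onto the reference key points.
ref = [(100.0, 120.0), (180.0, 120.0), (140.0, 180.0)]
far = [(x * 0.5 + 30.0, y * 0.5 + 10.0) for x, y in ref]
s, tx, ty = fit_scale_translation(far, ref)
print(round(s, 6), round(tx, 6), round(ty, 6))  # expected: 2.0 -60.0 -20.0
```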
可选的,使用对齐后的各所述脸部图片确定所述图片组对应的光流图包括:Optionally, using the aligned face pictures to determine the optical flow map corresponding to the picture group includes:
计算各所述脸部图片的光流;calculating the optical flow of each of the face pictures;
根据各所述脸部图片的光流生成该图片组对应的光流图。An optical flow map corresponding to the picture group is generated according to the optical flow of each of the face pictures.
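A minimal sketch of the flow computation, under the simplest possible motion model: a single integer displacement between two small grayscale frames, found by brute-force search minimizing the mean absolute difference on the overlapping region. Real systems would compute dense optical flow (e.g. Farnebäck's algorithm in OpenCV); this only illustrates the principle of recovering motion between the aligned pictures.

```python
# Minimal sketch: recover a single integer displacement (dx, dy) between
# two equal-size grayscale frames by brute-force search over shifts,
# scoring each shift with the mean absolute difference over the overlap.

def estimate_shift(prev, curr, max_shift=2):
    """prev, curr: equal-size 2-D lists. Returns the best (dx, dy)."""
    h, w = len(prev), len(prev[0])
    best_score, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            total, count = 0, 0
            for y in range(h):
                for x in range(w):
                    ys, xs = y - dy, x - dx
                    if 0 <= ys < h and 0 <= xs < w:
                        total += abs(curr[y][x] - prev[ys][xs])
                        count += 1
            score = total / count
            if best_score is None or score < best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift

# A textured 5x5 frame moves one pixel right and one pixel down.
prev = [[10 * y + x + 1 for x in range(5)] for y in range(5)]
curr = [[prev[y - 1][x - 1] if y > 0 and x > 0 else 0 for x in range(5)]
        for y in range(5)]
print(estimate_shift(prev, curr))  # expected: (1, 1)
```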
可选的,根据所述活体检测模型的输出数据判断所述活体检测对象是否为活体包括:Optionally, judging whether the living body detection object is a living body according to the output data of the living body detection model includes:
根据所述活体检测模型输出的置信度判断所述活体检测对象是否为活体。Whether the living body detection object is a living body is determined according to the confidence level output by the living body detection model.
可选的，所述输出数据包括置信度；将所述光流图输入所述活体检测模型后，所述检测模块505还用于：Optionally, the output data includes a confidence level; after the optical flow map is input into the liveness detection model, the detection module 505 is further configured to:
使所述活体检测模型提取所述光流图的图像特征,并对图像特征计算softmax,以使所述活体检测模型确定并输出所述置信度。The living body detection model is made to extract image features of the optical flow map, and a softmax is calculated on the image features, so that the living body detection model can determine and output the confidence level.
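The softmax step above can be sketched as follows. The two-class layout, the live-class index, and the 0.5 threshold are illustrative assumptions, not taken from the specification.

```python
# Sketch of the confidence step: the model's final feature scores (logits)
# are turned into class probabilities with softmax, and the probability of
# the "live" class serves as the confidence compared against a threshold.
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_live(logits, live_index=1, threshold=0.5):
    """Decide liveness from the softmax confidence of the 'live' class."""
    return softmax(logits)[live_index] >= threshold

probs = softmax([0.2, 2.2])  # [spoof_logit, live_logit]
print(round(probs[1], 4))    # expected: 0.8808
print(is_live([0.2, 2.2]))   # expected: True
```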
如图11所示,本说明书第六个实施例提供一种活体检测装置,包括:As shown in FIG. 11 , a sixth embodiment of the present specification provides a living body detection device, including:
图片模块601，用于采集包含至少两张不同距离的脸部图片的图片组，所述脸部图片包含活体检测对象脸部区域；A picture module 601, configured to collect a picture group containing at least two face pictures taken at different distances, where the face pictures contain the face region of the liveness detection object;
发送模块603，用于向活体检测端发送所述图片组，以使所述活体检测端确定所述图片组对应的光流图；以及，将所述光流图输入第一个实施例所得到的活体检测模型，根据所述活体检测模型的输出数据判断所述活体检测对象是否为活体。A sending module 603, configured to send the picture group to the liveness detection end, so that the liveness detection end determines the optical flow map corresponding to the picture group, inputs the optical flow map into the liveness detection model obtained in the first embodiment, and determines, according to the output data of the liveness detection model, whether the liveness detection object is a live body.
可选的,获取包含至少两张不同距离的脸部图片的图片组包括:Optionally, acquiring a picture group including at least two face pictures with different distances includes:
采集针对活体检测对象的拍摄视频,分解所述拍摄视频以获取若干帧图像,从所述若干帧图像中选择至少两张不同距离的脸部图片以形成图片组;Collecting a shooting video for a living body detection object, decomposing the shooting video to obtain several frames of images, and selecting at least two face pictures with different distances from the several frames of images to form a picture group;
或,or,
采集上一张脸部图片后，发布提示信息，所述提示信息用于提示活体检测对象靠近或远离所述拍摄设备；After collecting the previous face picture, issue prompt information, where the prompt information is used to prompt the liveness detection object to move closer to or farther from the photographing device;
若判断所述活体检测对象在采集所述上一张脸部图片后发生了移动，则采集下一张脸部图片；If it is determined that the liveness detection object has moved after the previous face picture was collected, collect the next face picture;
从采集的各所述脸部图片中选择至少两张不同距离的脸部图片以形成图片组;Select at least two face pictures at different distances from the collected face pictures to form a picture group;
或,or,
采集上一张脸部图片后，根据所述上一张脸部图片中感兴趣区域与该张脸部图片的比例确定并发布提示信息，所述提示信息用于提示所述活体检测对象靠近或远离所述拍摄设备；After collecting the previous face picture, determine and issue prompt information according to the ratio of the region of interest in the previous face picture to that face picture, where the prompt information is used to prompt the liveness detection object to move closer to or farther from the photographing device;
若判断所述活体检测对象在采集所述上一张脸部图片后发生了移动，则采集下一张脸部图片；If it is determined that the liveness detection object has moved after the previous face picture was collected, collect the next face picture;
从采集的各所述脸部图片中选择至少两张不同距离的脸部图片以形成图片组。At least two face pictures at different distances are selected from the collected face pictures to form a picture group.
本说明书第七个实施例提供一种活体检测模型训练设备,包括:A seventh embodiment of the present specification provides a device for training a living body detection model, including:
至少一个处理器;at least one processor;
以及，and,
与所述至少一个处理器通信连接的存储器;a memory communicatively coupled to the at least one processor;
其中，wherein,
所述存储器存储有可被所述至少一个处理器执行的指令，所述指令被所述至少一个处理器执行，使所述至少一个处理器能够执行第一个实施例所述的活体检测模型训练方法。The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the liveness detection model training method described in the first embodiment.
本说明书第八个实施例提供一种活体检测设备，包括：An eighth embodiment of the present specification provides a liveness detection device, including:
至少一个处理器;at least one processor;
以及，and,
与所述至少一个处理器通信连接的存储器;a memory communicatively coupled to the at least one processor;
其中，wherein,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,使所述至少一个处理器能够执行第二个实施例所述的活体检测方法。The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the living body detection method of the second embodiment.
本说明书第九个实施例提供一种活体检测设备，包括：A ninth embodiment of the present specification provides a liveness detection device, including:
至少一个处理器;at least one processor;
以及，and,
与所述至少一个处理器通信连接的存储器;a memory communicatively coupled to the at least one processor;
其中，wherein,
所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,使所述至少一个处理器能够执行第三个实施例所述的活体检测方法。The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to execute the living body detection method of the third embodiment.
本说明书第十个实施例提供一种计算机可读存储介质，所述计算机可读存储介质存储有计算机可执行指令，所述计算机可执行指令被处理器执行时实现第一个实施例所述的活体检测模型训练方法。A tenth embodiment of the present specification provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the liveness detection model training method described in the first embodiment.
本说明书第十一个实施例提供一种计算机可读存储介质，所述计算机可读存储介质存储有计算机可执行指令，所述计算机可执行指令被处理器执行时实现第二个实施例所述的活体检测方法。An eleventh embodiment of the present specification provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the liveness detection method described in the second embodiment.
本说明书第十二个实施例提供一种计算机可读存储介质，所述计算机可读存储介质存储有计算机可执行指令，所述计算机可执行指令被处理器执行时实现第三个实施例所述的活体检测方法。A twelfth embodiment of the present specification provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the liveness detection method described in the third embodiment.
上述各实施例可以结合使用,不同实施例之间名称相同的模块可相同可不同。The above embodiments may be used in combination, and modules with the same name may be the same or different between different embodiments.
上述对本说明书特定实施例进行了描述,其它实施例在所附权利要求书的范围内。在一些情况下,在权利要求书中记载的动作或步骤可以按照不同于实施例中的顺序来执行并且仍然可以实现期望的结果。另外,附图中描绘的过程不一定必须按照示出的特定顺序或者连续顺序才能实现期望的结果。在某些实施方式中,多任务处理和并行处理也是可以的或者可能是有利的。While the foregoing has described specific embodiments of this specification, other embodiments are within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve desirable results. Additionally, the processes depicted in the figures do not necessarily have to follow the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
本说明书中的各个实施例均采用递进的方式描述，各个实施例之间相同相似的部分互相参见即可，每个实施例重点说明的都是与其他实施例的不同之处。尤其，对于装置、设备、非易失性计算机可读存储介质实施例而言，由于其基本相似于方法实施例，所以描述的比较简单，相关之处参见方法实施例的部分说明即可。Each embodiment in this specification is described in a progressive manner; for identical or similar parts between embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the apparatus, device, and non-volatile computer-readable storage medium embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant parts, refer to the corresponding descriptions of the method embodiments.
本说明书实施例提供的装置、设备、非易失性计算机可读存储介质与方法是对应的,因此,装置、设备、非易失性计算机存储介质也具有与对应方法类似的有益技术效果,由于上面已经对方法的有益技术效果进行了详细说明,因此,这里不再赘述对应装置、设备、非易失性计算机存储介质的有益技术效果。The apparatuses, devices, and non-volatile computer-readable storage media provided in the embodiments of this specification correspond to the methods. Therefore, the apparatuses, devices, and non-volatile computer storage media also have beneficial technical effects similar to those of the corresponding methods. The beneficial technical effects of the method have been described in detail above, therefore, the beneficial technical effects of the corresponding apparatus, equipment, and non-volatile computer storage medium will not be repeated here.
在20世纪90年代,对于一个技术的改进可以很明显地区分是硬件上的改进(例如,对二极管、晶体管、开关等电路结构的改进)还是软件上的改进(对于方法流程的改进)。然而,随着技术的发展,当今的很多方法流程的改进已经可以视为硬件电路结构的直接改进。设计人员几乎都通过将改进的方法流程编程到硬件电路中来得到相应的硬件电路结构。因此,不能说一个方法流程的改进就不能用硬件实体模块来实现。例如,可编程逻辑器件(Programmable Logic Device,PLD)(例如现场可编程门阵列(Field Programmable GateArray,FPGA))就是这样一种集成电路,其逻辑功能由用户对器件编程来确定。由设计人员自行编程来把一个数字系统“集成”在一片PLD上,而不需要请芯片制造厂商来设计和制作专用的集成电路芯片。而且,如今,取代手工地制作集成电路芯片,这种编程也多半改用“逻辑编译器(logic compiler)”软件来实现,它与程序开发撰写时所用的软件编译器相类似,而要编译之前的原始代码也得用特定的编程语言来撰写,此称之为硬件描述语言(Hardware Description Language,HDL),而HDL也并非仅有一种,而是有许多种,如ABEL(Advanced Boolean Expression Language)、AHDL(Altera Hardware DescriptionLanguage)、Confluence、CUPL(Cornell University Programming Language)、HDCal、JHDL(Java Hardware Description Language)、Lava、Lola、MyHDL、PALASM、RHDL(RubyHardware Description Language)等,目前最普遍使用的是VHDL(Very-High-SpeedIntegrated Circuit Hardware Description Language)与Verilog。本领域技术人员也应该清楚,只需要将方法流程用上述几种硬件描述语言稍作逻辑编程并编程到集成电路中,就可以很容易得到实现该逻辑方法流程的硬件电路。In the 1990s, improvements in a technology could be clearly differentiated between improvements in hardware (eg, improvements to circuit structures such as diodes, transistors, switches, etc.) or improvements in software (improvements in method flow). However, with the development of technology, the improvement of many methods and processes today can be regarded as a direct improvement of the hardware circuit structure. Designers almost get the corresponding hardware circuit structure by programming the improved method flow into the hardware circuit. Therefore, it cannot be said that the improvement of a method flow cannot be realized by hardware entity modules. For example, a Programmable Logic Device (PLD) (eg, Field Programmable Gate Array (FPGA)) is an integrated circuit whose logic function is determined by user programming of the device. It is programmed by the designer to "integrate" a digital system on a PLD without having to ask the chip manufacturer to design and manufacture a dedicated integrated circuit chip. 
And, instead of making integrated circuit chips by hand, these days, much of this programming is done using software called a "logic compiler", which is similar to the software compiler used in program development and writing, but before compiling The original code also has to be written in a specific programming language, which is called Hardware Description Language (HDL), and there is not only one HDL, but many kinds, such as ABEL (Advanced Boolean Expression Language) , AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, RHDL (RubyHardware Description Language), etc. The most commonly used are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. It should also be clear to those skilled in the art that a hardware circuit for implementing the logic method process can be easily obtained by simply programming the method process in the above-mentioned several hardware description languages and programming it into the integrated circuit.
控制器可以按任何适当的方式实现,例如,控制器可以采取例如微处理器或处理器以及存储可由该(微)处理器执行的计算机可读程序代码(例如软件或固件)的计算机可读介质、逻辑门、开关、专用集成电路(Application Specific Integrated Circuit,ASIC)、可编程逻辑控制器和嵌入微控制器的形式,控制器的例子包括但不限于以下微控制器:ARC 625D、Atmel AT91SAM、Microchip PIC18F26K20以及Silicone Labs C8051F320,存储器控制器还可以被实现为存储器的控制逻辑的一部分。本领域技术人员也知道,除了以纯计算机可读程序代码方式实现控制器以外,完全可以通过将方法步骤进行逻辑编程来使得控制器以逻辑门、开关、专用集成电路、可编程逻辑控制器和嵌入微控制器等的形式来实现相同功能。因此这种控制器可以被认为是一种硬件部件,而对其内包括的用于实现各种功能的装置也可以视为硬件部件内的结构。或者甚至,可以将用于实现各种功能的装置视为既可以是实现方法的软件模块又可以是硬件部件内的结构。The controller may be implemented in any suitable manner, for example, the controller may take the form of eg a microprocessor or processor and a computer readable medium storing computer readable program code (eg software or firmware) executable by the (micro)processor , logic gates, switches, application specific integrated circuits (ASICs), programmable logic controllers and embedded microcontrollers, examples of controllers include but are not limited to the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320, the memory controller can also be implemented as part of the control logic of the memory. Those skilled in the art also know that, in addition to implementing the controller in the form of pure computer-readable program code, the controller can be implemented as logic gates, switches, application-specific integrated circuits, programmable logic controllers and embedded devices by logically programming the method steps. The same function can be realized in the form of a microcontroller, etc. Therefore, such a controller can be regarded as a hardware component, and the devices included therein for realizing various functions can also be regarded as a structure within the hardware component. Or even, the means for implementing various functions can be regarded as both a software module implementing a method and a structure within a hardware component.
上述实施例阐明的系统、装置、模块或单元,具体可以由计算机芯片或实体实现,或者由具有某种功能的产品来实现。一种典型的实现设备为计算机。具体的,计算机例如可以为个人计算机、膝上型计算机、蜂窝电话、相机电话、智能电话、个人数字助理、媒体播放器、导航设备、电子邮件设备、游戏控制台、平板计算机、可穿戴设备或者这些设备中的任何设备的组合。The systems, devices, modules or units described in the above embodiments may be specifically implemented by computer chips or entities, or by products with certain functions. A typical implementation device is a computer. Specifically, the computer can be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or A combination of any of these devices.
为了描述的方便,描述以上装置时以功能分为各种单元分别描述。当然,在实施本说明书时可以把各单元的功能在同一个或多个软件和/或硬件中实现。For the convenience of description, when describing the above device, the functions are divided into various units and described respectively. Of course, when implementing this specification, the functions of each unit may be implemented in one or more software and/or hardware.
本领域内的技术人员应明白,本说明书实施例可提供为方法、系统、或计算机程序产品。因此,本说明书实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本说明书实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。As will be appreciated by one skilled in the art, the embodiments of the present specification may be provided as a method, a system, or a computer program product. Accordingly, embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present specification may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
本说明书是参照根据本说明书实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器，使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。This specification is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中，使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品，该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, which implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上，使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理，从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device thus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
在一个典型的配置中,计算设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。Memory may include non-persistent memory in computer readable media, random access memory (RAM) and/or non-volatile memory in the form of, for example, read only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带式磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括暂存电脑可读媒体(transitory media),如调制的数据信号和载波。Computer-readable media includes both persistent and non-permanent, removable and non-removable media, and storage of information may be implemented by any method or technology. Information may be computer readable instructions, data structures, modules of programs, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cartridges, tape-based disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, excludes transitory computer-readable media, such as modulated data signals and carrier waves.
还需要说明的是,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、商品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、商品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个......”限定的要素,并不排除在包括所述要素的过程、方法、商品或者设备中还存在另外的相同要素。It should also be noted that the terms "comprising", "comprising" or any other variation thereof are intended to encompass a non-exclusive inclusion such that a process, method, article or device comprising a series of elements includes not only those elements, but also Other elements not expressly listed, or which are inherent to such a process, method, article of manufacture, or apparatus are also included. Without further limitation, an element qualified by the phrase "comprising a..." does not preclude the presence of additional identical elements in the process, method, article of manufacture or device that includes the element.
本说明书可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本说明书,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。This specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including storage devices.
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于系统实施例而言,由于其基本相似于方法实施例,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。Each embodiment in this specification is described in a progressive manner, and the same and similar parts between the various embodiments may be referred to each other, and each embodiment focuses on the differences from other embodiments. In particular, for the system embodiments, since they are basically similar to the method embodiments, the description is relatively simple, and for related parts, please refer to the partial descriptions of the method embodiments.
以上所述仅为本说明书实施例而已,并不用于限制本申请。对于本领域技术人员来说,本申请可以有各种更改和变化。凡在本申请的精神和原理之内所作的任何修改、等同替换、改进等,均应包含在本申请的权利要求范围之内。The above descriptions are merely embodiments of the present specification, and are not intended to limit the present application. Various modifications and variations of this application are possible for those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of this application shall be included within the scope of the claims of this application.
Claims (27)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010594280.9A CN111738176B (en) | 2020-06-24 | 2020-06-24 | Liveness detection model training, liveness detection method, device, equipment and medium |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111738176A true CN111738176A (en) | 2020-10-02 |
| CN111738176B CN111738176B (en) | 2025-02-25 |
Family
ID=72651225
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010594280.9A Active CN111738176B (en) | 2020-06-24 | 2020-06-24 | Liveness detection model training, liveness detection method, device, equipment and medium |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111738176B (en) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107179683A (en) * | 2017-04-01 | 2017-09-19 | Zhejiang University of Technology | Interactive robot intelligent motion detection and control method based on neural network |
| CN109598242A (en) * | 2018-12-06 | 2019-04-09 | SeetaTech (Beijing) Technology Co., Ltd. | A novel liveness detection method |
| US20190377963A1 (en) * | 2018-06-11 | 2019-12-12 | Laurence Hamid | Liveness detection |
| CN111104833A (en) * | 2018-10-29 | 2020-05-05 | Beijing Sankuai Online Technology Co., Ltd. | Method and apparatus for liveness detection, storage medium, and electronic device |
-
2020
- 2020-06-24 CN CN202010594280.9A patent/CN111738176B/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN111738176B (en) | 2025-02-25 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN107358157B (en) | Face living body detection method and device and electronic equipment | |
| CN111178341B (en) | A kind of living body detection method, device and equipment | |
| CN109086691B (en) | A three-dimensional face living body detection method, face authentication and recognition method and device | |
| CN111553333B (en) | Face image recognition model training method, recognition method, device and electronic equipment | |
| US9898849B2 (en) | Facial expression based avatar rendering in video animation and method | |
| US10755087B2 (en) | Automated image capture based on emotion detection | |
| CN111368944B (en) | Method and device for recognizing copied image and certificate photo and training model and electronic equipment | |
| CN107977634A (en) | A kind of expression recognition method, device and equipment for video | |
| CN110263805B (en) | Document verification, identity verification method, device and equipment | |
| WO2019214321A1 (en) | Vehicle damage identification processing method, processing device, client and server | |
| CN111242034A (en) | A document image processing method, device, processing device and client | |
| CN110287851A (en) | A target image positioning method, device, system and storage medium | |
| CN115004245A (en) | Target detection method, target detection device, electronic equipment and computer storage medium | |
| CN114581951B (en) | A gesture recognition method, system, device and medium | |
| CN111753583A (en) | A kind of identification method and device | |
| CN114331848A (en) | Video image splicing method, device and equipment | |
| CN111738176A (en) | A living body detection model training, living body detection method, device, equipment and medium | |
| CN114663965B (en) | Testimony comparison method and device based on two-stage alternative learning | |
| CN118379772A (en) | Micro-expression recognition method and device | |
| HK40039022B (en) | Living body detection model training method and device, living body detection method and device, equipment and medium | |
| CN110059576A (en) | Screening technique, device and the electronic equipment of picture | |
| HK40039022A (en) | Living body detection model training method and device, living body detection method and device, equipment and medium | |
| CN118379781B (en) | Damping-off face recognition method and device based on damping-off face recognition model | |
| CN118038087B (en) | Image processing method and device | |
| KV et al. | A Synopsis on Intelligent Face Discovery Frameworks. |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 40039022 Country of ref document: HK |
|
| TA01 | Transfer of patent application right |
Effective date of registration: 20241113 Address after: 128 Meizhi Road, Singapore, Guohao Times City # 20-01189773 Applicant after: Ant Shield Co.,Ltd. Country or region after: Singapore Address before: 45-01 Anson Building, 8 Shanton Avenue, Singapore 068811 Applicant before: Alipay laboratories (Singapore) Ltd. Country or region before: Singapore |
|
| TA01 | Transfer of patent application right | ||
| GR01 | Patent grant | ||
| GR01 | Patent grant |