
HK1246921B - Face shielding detection method, device and storage medium - Google Patents

Face shielding detection method, device and storage medium Download PDF

Info

Publication number
HK1246921B
HK1246921B HK18106241.4A
Authority
HK
Hong Kong
Prior art keywords
face
lip
image
facial
eye
Prior art date
Application number
HK18106241.4A
Other languages
Chinese (zh)
Other versions
HK1246921A1 (en)
HK1246921A (en)
Inventor
陳林 (Chen Lin)
張國輝 (Zhang Guohui)
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority to HK18106241.4A priority Critical patent/HK1246921B/en
Publication of HK1246921A1 publication Critical patent/HK1246921A1/en
Publication of HK1246921A publication Critical patent/HK1246921A/en
Publication of HK1246921B publication Critical patent/HK1246921B/en

Landscapes

  • Collating Specific Patterns (AREA)

Description

Face occlusion detection method, device and storage medium

Technical Field

The present invention relates to the field of computer vision processing technology, and in particular to a face occlusion detection method, device and computer-readable storage medium.

Background Art

Face recognition is a biometric technology that authenticates identity based on facial features: it captures images or video streams containing faces, detects and tracks the faces within them, and then matches and identifies the detected faces. Face recognition is now widely applied, playing an important role in fields such as financial payment, access control and attendance, and identity verification, and bringing great convenience to daily life. However, it is crucial that the face not be occluded, so before performing face recognition it is necessary to detect whether the face in an image is occluded.

Products in the industry generally judge face occlusion by deep-learning-based training. However, this approach demands a large sample size, and predicting occlusion with deep learning is computationally expensive and relatively slow.

Summary of the Invention

The present invention provides a face occlusion detection method, device and computer-readable storage medium, whose main purpose is to quickly detect face occlusion in real-time facial images.

To achieve the above object, the present invention provides an electronic device comprising a memory, a processor and a camera device, wherein the memory stores a face occlusion detection program which, when executed by the processor, implements the following steps:

Image acquisition step: acquiring a real-time image captured by the camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;

Feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to recognize t facial feature points in the real-time facial image; and

Feature region judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging from the results whether the face in the real-time image is occluded.

Optionally, when the face occlusion detection program is executed by the processor, the following step is further implemented:

Judgment step: judging whether the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true.

Optionally, when the face occlusion detection program is executed by the processor, the following steps are further implemented:

When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and

When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region include an untrue result, prompting that the face in the real-time facial image is occluded.

In addition, to achieve the above purpose, the present invention also provides a face occlusion detection method, the method comprising:

Image acquisition step: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;

Feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to recognize t facial feature points in the real-time facial image; and

Feature region judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging from the results whether the face in the real-time image is occluded.

Optionally, the method further comprises:

Judgment step: judging whether the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true.

Optionally, the method further comprises:

When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and

When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region include an untrue result, prompting that the face in the real-time facial image is occluded.

In addition, to achieve the above purpose, the present invention also provides a computer-readable storage medium that stores a face occlusion detection program which, when executed by a processor, implements any of the steps of the face occlusion detection method described above.

The face occlusion detection method, electronic device and computer-readable storage medium proposed by the present invention input a real-time facial image into a facial average model to identify the facial feature points in that image, use an eye classification model and a lip classification model of the face to judge the authenticity of the eye region and lip region determined by those feature points, and judge from that authenticity whether the face in the real-time facial image is occluded.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present invention;

FIG. 2 is a functional module diagram of the face occlusion detection program in FIG. 1;

FIG. 3 is a flow chart of a preferred embodiment of the face occlusion detection method of the present invention.

The realization of the objects, functional features and advantages of the present invention will be further explained with reference to the embodiments and the accompanying drawings.

DETAILED DESCRIPTION

It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.

The present invention provides a face occlusion detection method applied to an electronic device 1. FIG. 1 is a schematic diagram of the application environment of a preferred embodiment of the face occlusion detection method of the present invention.

In this embodiment, the electronic device 1 may be a terminal device with computing capability, such as a server, a smartphone, a tablet computer, a portable computer or a desktop computer.

The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14 and a communication bus 15. The camera device 13 is installed in a particular place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured images to the processor 12 over a network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The communication bus 15 realizes connection and communication between these components.

The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as flash memory, a hard disk, a multimedia card or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk. In other embodiments, it may be an external memory of the electronic device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card.

In this embodiment, the readable storage medium of the memory 11 is generally used to store the face occlusion detection program 10 installed in the electronic device 1, the face image sample library, the human eye sample library, the human lip sample library, and the constructed and trained facial average model of facial feature points, eye classification model and lip classification model of the face. The memory 11 may also be used to temporarily store data that has been output or is to be output.

In some embodiments, the processor 12 may be a central processing unit (CPU), a microprocessor or another data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the face occlusion detection program 10.

FIG. 1 shows only the electronic device 1 with components 11-15, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.

Optionally, the electronic device 1 may further include a user interface, which may include an input unit such as a keyboard, a voice input device such as a microphone or another device with voice recognition capability, and a voice output device such as a speaker or headphones; optionally, the user interface may also include a standard wired or wireless interface.

Optionally, the electronic device 1 may further include a display, which may also be called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display is used to show the information processed in the electronic device 1 and to present a visual user interface.

Optionally, the electronic device 1 further includes a touch sensor. The area the touch sensor provides for the user's touch operations is called the touch region. The touch sensor may be a resistive touch sensor, a capacitive touch sensor, or the like; it includes not only contact touch sensors but also proximity touch sensors. It may be a single sensor or several sensors arranged, for example, in an array.

The area of the display of the electronic device 1 may be the same as or different from that of the touch sensor. Optionally, the display is stacked with the touch sensor to form a touch display screen, on which the device detects user-triggered touch operations.

Optionally, the electronic device 1 may further include an RF (Radio Frequency) circuit, sensors, an audio circuit, etc., which are not described in detail here.

In the device embodiment shown in FIG. 1, the memory 11, as a computer storage medium, may include an operating system and a face occlusion detection program 10; when the processor 12 executes the face occlusion detection program 10 stored in the memory 11, the following steps are implemented:

A real-time image captured by the camera device 13 is acquired, and the processor 12 extracts a real-time facial image from it using a face recognition algorithm, calls the facial average model, the eye classification model and the lip classification model of the face from the memory 11, inputs the real-time facial image into the facial average model to identify the facial feature points in it, inputs the eye region and lip region determined by those feature points into the eye classification model and lip classification model, and judges whether the face in the real-time facial image is occluded by judging the authenticity of the eye region and lip region.

In other embodiments, the face occlusion detection program 10 may also be divided into one or more modules, which are stored in the memory 11 and executed by the processor 12 to accomplish the present invention. A module in the present invention refers to a series of computer program instruction segments capable of performing a specific function.

FIG. 2 is a functional module diagram of the face occlusion detection program 10 in FIG. 1.

The face occlusion detection program 10 may be divided into an acquisition module 110, a recognition module 120, a judgment module 130 and a prompt module 140.

The acquisition module 110 is used to acquire the real-time image captured by the camera device 13 and to extract a real-time facial image from it using a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12; once the processor 12 receives it, the acquisition module 110 extracts the real-time facial image using a face recognition algorithm.

Specifically, the face recognition algorithm used to extract the real-time facial image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.
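The embodiment does not fix which face recognition algorithm is used. As a minimal sketch under that open choice, the following Python snippet uses OpenCV's bundled Haar cascade frontal-face detector to crop one face region from a camera frame; the camera index, detector parameters and largest-face heuristic are illustrative assumptions, not part of the disclosure.

```python
import cv2

def extract_face(frame):
    """Return the largest detected face region of a BGR frame, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection, matching "extract one real-time face image".
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])
    return frame[y:y + h, x:x + w]

cap = cv2.VideoCapture(0)            # stands in for the camera device 13
ok, frame = cap.read()
face = extract_face(frame) if ok else None
```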

The recognition module 120 is used to input the real-time facial image into a pre-trained facial average model and to use that model to identify t facial feature points in the image. Suppose t = 34: among the 34 facial feature points of the facial average model there are 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points. After the acquisition module 110 extracts the real-time facial image, the recognition module 120 calls the trained facial average model of facial feature points from the memory 11, aligns the real-time facial image with the facial average model, and then uses a feature extraction algorithm to search the real-time facial image for the 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points that match the 34 facial feature points of the facial average model. The facial average model of facial feature points is constructed and trained in advance; the specific implementation is described in the face occlusion detection method below.

In this embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. SIFT extracts the local feature of each facial feature point of the facial average model, such as the 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points, takes one eye or lip feature point as a reference feature point, and searches the real-time facial image for feature points whose local features are the same as or similar to the reference point's, for example by checking whether the difference between the local features of two feature points is within a preset range; if it is, that feature point is taken as an eye or lip feature point. This continues until all facial feature points have been found in the real-time facial image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, etc.
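For orientation only: dlib's shape predictor implements the same ERT algorithm that this embodiment trains, so the feature point recognition step can be sketched as follows. The model file name is an assumption: a predictor trained on the 34-point scheme described here (12 eye socket, 2 eyeball, 20 lip points) would replace dlib's stock 68-point model.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumed file: an ERT shape predictor trained on this embodiment's
# 34-point scheme; dlib's distributed model uses 68 points instead.
predictor = dlib.shape_predictor("face_landmarks_34.dat")

def detect_landmarks(gray_face):
    """Return a (t, 2) array of landmark coordinates, or None if no face."""
    rects = detector(gray_face, 1)
    if not rects:
        return None
    shape = predictor(gray_face, rects[0])
    return np.array([[p.x, p.y] for p in shape.parts()])
```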

The judgment module 130 is used to determine the eye region and the lip region according to the position information of the t facial feature points, to input the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, to judge the authenticity of the eye region and lip region, and to judge from the results whether the face in the real-time image is occluded. After the recognition module 120 identifies the 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points in the real-time facial image, an eye region can be determined from the 12 eye socket feature points and 2 eyeball feature points, and a lip region from the 20 lip feature points; the determined regions are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the models' outputs. That is, the outputs may be all false, all true, or a mixture of true and false. When both models output false, the regions are not a human eye region and a human lip region; when both output true, they are. The eye classification model and lip classification model of the face are constructed and trained in advance; the specific implementation is described in the face occlusion detection method below.
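A hedged sketch of the judgment module's region handling: crop a padded bounding box around the eye and lip landmarks, describe each crop with HOG features, and let the trained SVMs vote. The landmark index layout (0-11 eye socket, 12-13 eyeball, 14-33 lip) and the crop/resize sizes are illustrative assumptions; `eye_svm` and `lip_svm` are classifiers trained as described below.

```python
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def crop_region(gray, points, margin=8):
    """Bounding box around a landmark set, padded by `margin` pixels."""
    pts = np.asarray(points, dtype=int)
    x0, y0 = pts.min(axis=0) - margin
    x1, y1 = pts.max(axis=0) + margin
    h, w = gray.shape
    return gray[max(0, y0):min(h, y1), max(0, x0):min(w, x1)]

def region_is_real(region, svm):
    """True if the SVM judges the crop to be a genuine eye/lip region."""
    feat = hog(resize(region, (64, 64)), pixels_per_cell=(8, 8))
    return svm.predict([feat])[0] == 1

def face_occluded(gray, landmarks, eye_svm, lip_svm):
    eye = crop_region(gray, landmarks[:14])    # 12 socket + 2 eyeball points
    lip = crop_region(gray, landmarks[14:34])  # 20 lip points
    return not (region_is_real(eye, eye_svm) and region_is_real(lip, lip_svm))
```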

Specifically, the judgment module 130 is also used to judge whether the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, that is, whether the models' outputs contain only true.

The judgment module 130 is also used to judge that the face in the real-time facial image is not occluded when the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true. In other words, when the eye region and lip region determined from the facial feature points are indeed a human eye region and a human lip region, the face in the real-time facial image is considered unoccluded.

The prompt module 140 is used to prompt that the face in the real-time facial image is occluded when the judgment results of the eye classification model and the lip classification model for the eye region and the lip region include an untrue result. When any of the regions determined from the facial feature points is not a human eye region or a human lip region, the face in the real-time facial image is considered occluded, and the prompt module 140 prompts accordingly.

Further, when the eye classification model outputs false, the eye region in the image is considered occluded; when the lip classification model outputs false, the lip region is considered occluded; and a corresponding prompt is given.

In other embodiments, if face recognition is to follow the occlusion check, then when the face in the real-time facial image is occluded, the prompt module 140 also prompts that the face in the current facial image is occluded, and the acquisition module re-acquires the real-time image captured by the camera device and the subsequent steps are performed again.

The electronic device 1 proposed in this embodiment extracts a real-time facial image from a real-time image, identifies the facial feature points in it using a facial average model, analyzes the eye region and lip region determined by those feature points using an eye classification model and a lip classification model of the face, and quickly judges from the authenticity of those regions whether the face in the current image is occluded.

In addition, the present invention also provides a face occlusion detection method. FIG. 3 is a flow chart of a first embodiment of the face occlusion detection method of the present invention. The method may be executed by a device implemented in software and/or hardware.

In this embodiment, the face occlusion detection method includes:

Step S10: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from it using a face recognition algorithm. When the camera device captures a real-time image, it sends the image to the processor; once the processor receives it, the real-time facial image is extracted using a face recognition algorithm.

Specifically, the face recognition algorithm used to extract the real-time facial image may be a geometric-feature-based method, a local feature analysis method, an eigenface method, an elastic-model-based method, a neural network method, and so on.

Step S20: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to identify t facial feature points in the real-time facial image.

A first sample library of n face images is established, and t facial feature points are marked in each face image. The t facial feature points include t1 eye socket feature points and t2 eyeball feature points representing the eye positions, and t3 lip feature points representing the lip positions. In each face image in the first sample library, the t1 eye socket feature points, t2 eyeball feature points and t3 lip feature points are marked manually; the (t1 + t2 + t3) feature points of each face image form one shape feature vector S, yielding n facial shape feature vectors S.
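As a small illustrative sketch of how one shape feature vector S might be assembled from the marked points (the array names are hypothetical):

```python
import numpy as np

# t1 = 12 eye socket, t2 = 2 eyeball, t3 = 20 lip points per image (t = 34).
def shape_vector(orbit_pts, eyeball_pts, lip_pts):
    """Concatenate the marked landmarks of one face image into the vector S."""
    pts = np.vstack([orbit_pts, eyeball_pts, lip_pts])  # shape (34, 2)
    return pts.ravel()                                   # S as a flat vector
```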

The t facial feature points are used to train a face feature recognition model, yielding the facial average model. The face feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, expressed as

$$\hat{S}^{(t+1)} = \hat{S}^{(t)} + \tau_t\left(I, \hat{S}^{(t)}\right)$$

where $t$ is the cascade index and $\tau_t(\cdot,\cdot)$ is the regressor of the current stage. Each regressor consists of many regression trees, and the purpose of training is to obtain these trees.

Here $\hat{S}^{(t)}$ is the shape estimate of the current model; each regressor $\tau_t(\cdot,\cdot)$ predicts an increment $\tau_t(I, \hat{S}^{(t)})$ from the input image $I$ and $\hat{S}^{(t)}$, and adds it to the current shape estimate to improve the model. Each stage of the regressor predicts on the basis of the feature points. The training data set is $(I_1, S_1), \ldots, (I_n, S_n)$, where $I$ is an input sample image and $S$ is the shape feature vector formed by the feature points of that sample image.

In the model training process, the number of face images in the first sample library is n. Suppose each sample image has 34 feature points, where the feature vector components x1-x12 are the abscissas of the eye socket feature points, x13-x14 the abscissas of the eyeball feature points, and x15-x34 the abscissas of the lip feature points. A first regression tree is trained on the feature vectors S formed from a subset of the feature points of all sample images (for example, 25 feature points taken at random from the 34 of each sample image); the residual between the first tree's predictions and the true values of those feature points (the weighted average of the 25 feature points taken from each sample image) is used to train the second tree, and so on, until the residual between the Nth tree's predictions and the true values approaches 0. This yields all the regression trees of the ERT algorithm, from which the facial average model (mean shape) is obtained; the model file and sample library are saved to the memory. Because the training samples are marked with 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points, the trained facial average model can be used to identify 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points in a face image.
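At inference time, the cascade the formula describes reduces to a short loop: start from the mean shape and let each trained regressor add its predicted increment. The `regressors` sequence below is an illustrative stand-in for the trained trees, not the disclosed implementation.

```python
import numpy as np

def align_shape(image, mean_shape, regressors):
    """Cascaded shape regression: S(t+1) = S(t) + tau_t(I, S(t))."""
    shape = mean_shape.copy()        # initial estimate: the mean shape
    for tau in regressors:           # one ERT cascade stage per iteration
        shape += tau(image, shape)   # each stage predicts an increment
    return shape
```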

After the real-time facial image is acquired, the trained facial average model is called from the memory, the real-time facial image is aligned with it, and a feature extraction algorithm is used to search the real-time facial image for the 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points matching those of the facial average model. The 20 lip feature points are evenly distributed over the lips.

Step S30: determining the eye region and the lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into the pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and lip region, and judging from the results whether the face in the real-time image is occluded.

A first number of positive human-eye sample images and a second number of negative human-eye sample images are collected, and the local features of each are extracted. A positive human-eye sample image is an eye sample containing human eyes; the two-eye portion can be cropped from the face image sample library as the eye sample. A negative human-eye sample image is one in which the eye area is incomplete. The positive and negative human-eye sample images form the second sample library.

A third number of positive lip sample images and a fourth number of negative lip sample images are collected, and the local features of each are extracted. A positive lip sample image is an image containing human lips; the lip portion can be cropped from the face image sample library as the positive lip sample. A negative lip sample image is one in which the lip area is incomplete, or in which the lips are not human (for example, an animal's). The positive and negative lip sample images form the third sample library.

Specifically, the local feature is the Histogram of Oriented Gradients (HOG) feature, extracted from the eye and lip sample images by a feature extraction algorithm. Because the color information in the sample images contributes little, each image is usually converted to grayscale and normalized as a whole; the gradients along the horizontal and vertical image axes are computed, and from them the gradient orientation at each pixel, which captures contours, silhouettes and some texture information while further weakening the influence of illumination. The image is then divided into cells (8×8 pixels), and a histogram of gradient orientations is built for each cell to accumulate and quantize the local gradient information, yielding a feature description vector for each local image region. The cells are then grouped into larger blocks; because local illumination and foreground-background contrast vary, gradient magnitudes span a very wide range, so they are normalized within each block, further compressing the effects of lighting, shadow and edges. Finally, the HOG descriptors of all blocks are concatenated to form the final HOG feature description vector.
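A minimal sketch of this HOG pipeline using scikit-image; the 8×8 cell follows the text, while the nine orientation bins, 2×2 block and L2-Hys normalization are common defaults assumed here.

```python
from skimage import io
from skimage.feature import hog

def hog_descriptor(path):
    """Grayscale load, 8x8 cells, block normalization -> HOG feature vector."""
    gray = io.imread(path, as_gray=True)  # color adds little, per the text
    return hog(gray,
               orientations=9,            # gradient-direction bins per cell
               pixels_per_cell=(8, 8),    # the cell size given in the text
               cells_per_block=(2, 2),    # blocks over which gradients are normalized
               block_norm="L2-Hys")       # compresses lighting/shadow effects
```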

A support vector machine (SVM) classifier is trained with the positive and negative sample images of the second and third sample libraries and their extracted HOG features, yielding the eye classification model and the lip classification model of the face.
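A hedged sketch of that training step with scikit-learn's linear SVM; the feature arrays are hypothetical names standing for the HOG vectors of the sample libraries.

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_region_classifier(pos_feats, neg_feats):
    """Fit an SVM on HOG vectors of positive/negative sample images."""
    X = np.vstack([pos_feats, neg_feats])
    y = np.hstack([np.ones(len(pos_feats)), np.zeros(len(neg_feats))])
    return LinearSVC(C=1.0).fit(X, y)

# One classifier per region, from the second and third sample libraries:
# eye_svm = train_region_classifier(eye_pos_hogs, eye_neg_hogs)
# lip_svm = train_region_classifier(lip_pos_hogs, lip_neg_hogs)
```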

After the 12 eye socket feature points, 2 eyeball feature points and 20 lip feature points are identified in the real-time facial image, an eye region can be determined from the 12 eye socket feature points and 2 eyeball feature points, and a lip region from the 20 lip feature points; the determined regions are then input into the trained eye classification model and lip classification model of the face, and their authenticity is judged from the models' outputs. That is, the outputs may be all false, all true, or a mixture of true and false. When both models output false, the regions are not a human eye region and a human lip region; when both output true, they are.

Step S40: judging whether the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, that is, whether the models' outputs contain only true.

Step S50: when the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded. In other words, when the eye region and lip region determined from the facial feature points are indeed a human eye region and a human lip region, the face in the real-time facial image is considered unoccluded.

Step S60: when the judgment results of the eye classification model and the lip classification model for the eye region and the lip region include an untrue result, prompting that the face in the real-time facial image is occluded. When any of the regions determined from the facial feature points is not a human eye region or a human lip region, the face in the real-time facial image is considered occluded, and a prompt is given.

Further, when the eye classification model outputs false, the eye region in the image is considered occluded; when the lip classification model outputs false, the lip region is considered occluded; and a corresponding prompt is given.
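Steps S40-S60 amount to a conjunction with per-region prompts; a minimal sketch (the prompt strings are illustrative):

```python
def report_occlusion(eye_real: bool, lip_real: bool) -> str:
    """S40-S60: the face is unoccluded only if both judgments are true."""
    if eye_real and lip_real:
        return "face not occluded"
    parts = []
    if not eye_real:
        parts.append("eye region occluded")
    if not lip_real:
        parts.append("lip region occluded")
    return "; ".join(parts)
```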

In other embodiments, if face recognition is to follow the occlusion check, then when the face in the real-time facial image is occluded, step S50 further includes:

prompting that the face in the current facial image is occluded, after which the acquisition module re-acquires the real-time image captured by the camera device and the subsequent steps are performed again.

The face occlusion detection method proposed in this embodiment identifies the key facial feature points in the real-time facial image using the facial average model of facial feature points, analyzes the eye region and lip region determined by those feature points using the eye classification model and lip classification model of the face, and judges from the authenticity of those regions whether the face in the current image is occluded, thereby quickly detecting face occlusion in real-time facial images.

In addition, an embodiment of the present invention further provides a computer-readable storage medium that stores a face occlusion detection program which, when executed by a processor, implements the following operations:

Image acquisition step: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm;

Feature point recognition step: inputting the real-time facial image into a pre-trained facial average model, and using the facial average model to recognize t facial feature points in the real-time facial image; and

Feature region judgment step: determining an eye region and a lip region according to the position information of the t facial feature points, inputting the eye region and the lip region into a pre-trained eye classification model and lip classification model of the face, judging the authenticity of the eye region and the lip region, and judging from the results whether the face in the real-time image is occluded.

Optionally, when the face occlusion detection program is executed by the processor, the following operation is also implemented:

Judgment step: judging whether the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true.

Optionally, when the face occlusion detection program is executed by the processor, the following operations are also implemented:

When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, judging that the face in the real-time facial image is not occluded; and

When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region include an untrue result, prompting that the face in the real-time facial image is occluded.

Optionally, the training steps of the facial average model include:

establishing a first sample library of n face images, and marking t facial feature points in each face image, the t facial feature points including t1 eye socket feature points and t2 eyeball feature points representing the eye positions and t3 lip feature points representing the lip positions; and

training a face feature recognition model with the t facial feature points to obtain the facial average model.

Optionally, the training steps of the eye classification model and lip classification model of the face include:

collecting a first number of positive human-eye sample images and a second number of negative human-eye sample images, and extracting the local features of each;

training a support vector machine (SVM) classifier with the positive and negative human-eye sample images and their local features to obtain the eye classification model of the face;

collecting a third number of positive lip sample images and a fourth number of negative lip sample images, and extracting the local features of each; and

training a support vector machine (SVM) classifier with the positive and negative lip sample images and their local features to obtain the lip classification model of the face.

The specific implementation of the computer-readable storage medium of the present invention is substantially the same as that of the face occlusion detection method described above and is not repeated here.

It should be noted that, as used herein, the terms "include", "comprise" and any of their variants are intended to cover non-exclusive inclusion, so that a process, device, article or method that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article or method. In the absence of further restrictions, an element qualified by "includes a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes it.

The serial numbers of the above embodiments of the present invention are for description only and do not indicate the merits of the embodiments. From the description of the above implementations, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferable. On this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied as a software product stored on a storage medium as described above (such as ROM/RAM, a magnetic disk or an optical disc), including instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.

The above are only preferred embodiments of the present invention and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (9)

1.一种电子装置,其特征在于,所述装置包括:存储器、处理器及摄像装置,所述存储器中包括人脸遮挡检测程序,所述人脸遮挡检测程序被所述处理器执行时实现如下步骤:1. An electronic device, characterized in that the device comprises: a memory, a processor and a camera device, the memory comprises a face occlusion detection program, and the face occlusion detection program, when executed by the processor, implements the following steps: 图像获取步骤:获取摄像装置拍摄的实时图像,利用人脸识别算法从该实时图像中提取一张实时脸部图像;Image acquisition step: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm; 特征点识别步骤:将该实时脸部图像输入预先训练好的面部平均模型,利用该面部平均模型从该实时脸部图像中识别出t个面部特征点,该面部平均模型的训练步骤包括:Feature point recognition step: input the real-time facial image into a pre-trained facial average model, and use the facial average model to recognize t facial feature points from the real-time facial image. The training steps of the facial average model include: 建立一个有n张人脸图像的第一样本库,在每张人脸图像中标记t个面部特征点,所述t个面部特征点包括:代表眼部位置的t1个眼眶特征点、t2个眼球特征点及代表唇部位置的t3个唇部特征点,每张人脸图像中的该t个特征点组成一个形状特征向量S,得到训练数据集(I1,S1),...,(In,Sn),其中I为人脸图像,S是人脸图像中的特征点组成的形状特征向量;及Establish a first sample library with n face images, mark t facial feature points in each face image, the t facial feature points include: t 1 orbital feature points representing eye positions, t 2 eyeball feature points and t 3 lip feature points representing lip positions, the t feature points in each face image form a shape feature vector S, and obtain a training data set (I1, S1), ..., (In, Sn), where I is a face image, and S is a shape feature vector composed of feature points in the face image; and 利用所述训练数据集对人脸特征识别模型进行训练得到面部平均模型,该人脸特征识别模型为ERT算法,公式为:其中t表示级联序号,τt(·,·)表示当前级的回归器、每个回归器由多棵回归树组成,为当前模型的形状估计;每个回归器τt(·,·)根据输入图像I和来预测一个增量The training data set is used to train the face feature recognition model to obtain the face average model. The face feature recognition model is the ERT algorithm, and the formula is: where t represents the cascade number, τ t (·, ·) represents the regressor of the current level, each regressor is composed of multiple regression trees, and is the shape estimation of the current model; each regressor τ t (·, ·) predicts an increment according to the input image I and 在模型训练的过程中,根据样本库中人脸图像中所述t个面部特征点组成的所述特征向量S训练出第一棵回归树,将第一棵回归树的预测值与所述t个面部特征点的真实值的残差用来训练第二棵树;依次类推,直到训练出第N棵树的预测值与所述t个面部特征点的真实值的残差接近于0,得到ERT算法的所有回归树,根据所述回归树得到该面部平均模型;及In the process of model training, a first regression tree is trained according to the feature vector S composed of the t facial feature points in the face image in the sample library, and the residual between the predicted value of the first regression tree and the true value of the t facial feature points is used to train the second tree; and so on, until the residual between the predicted value of the trained Nth tree and the true value of the t facial feature points is close to 0, all regression trees of the ERT algorithm are obtained, and the facial average model is obtained according to the regression trees; and 特征区域判断步骤:根据该t个面部特征点的位置信息确定眼部区域和唇部区域,将该眼部区域和该唇部区域输入预先训练好的人脸的眼部分类模型、人脸的唇部分类模型,判断所述眼部区域和唇部区域的真实性,并根据判断结果判断该实时图像中的人脸是否发生遮挡。Feature area judgment step: determine the eye area and the lip area according to the position information of the t facial feature points, input the eye area and the lip area into a pre-trained face eye classification model and a face lip classification model, judge the authenticity of the eye area and the lip area, and judge whether the face in the real-time image is occluded based on the judgment result. 
2.根据权利要求1所述的电子装置,其特征在于,所述人脸遮挡检测程序被所述处理器执行时,还实现如下步骤:2. The electronic device according to claim 1, wherein when the face occlusion detection program is executed by the processor, the following steps are further implemented: 判断步骤:判断所述人脸的眼部分类模型、人脸的唇部分类模型对所述眼部区域及唇部区域的判断结果是否均为真实。Judgment step: judging whether the judgment results of the eye area and the lip area of the face by the eye classification model and the lip classification model are both true. 3.根据权利要求1或2所述的电子装置,其特征在于,所述人脸遮挡检测程序被所述处理器执行时,还实现如下步骤:3. The electronic device according to claim 1 or 2, characterized in that when the face occlusion detection program is executed by the processor, the following steps are further implemented: 当所述人脸的眼部分类模型、人脸的唇部分类模型对所述眼部区域及唇部区域的判断结果均为真实时,判断该实时脸部图像中的人脸未发生遮挡;及When the judgment results of the eye classification model and the lip classification model for the eye region and the lip region are both true, it is judged that the face in the real-time facial image is not blocked; and 当所述人脸的眼部分类模型、人脸的唇部分类模型对所述眼部区域及唇部区域的判断结果包含不真实时,提示该实时脸部图像中的人脸发生遮挡。When the judgment results of the eye area and the lip area by the eye classification model and the lip classification model of the face contain untrue information, it is prompted that the face in the real-time facial image is occluded. 4.根据权利要求1所述的电子装置,其特征在于,所述人脸的眼部分类模型及唇部分类模型的训练步骤包括:4. The electronic device according to claim 1, wherein the training steps of the eye classification model and the lip classification model of the face include: 收集第一数量的人眼正样本图像和第二数量的人眼负样本图像,提取每张人眼正样本图像、人眼负样本图像的局部特征;Collecting a first number of positive sample images of human eyes and a second number of negative sample images of human eyes, and extracting local features of each positive sample image of human eyes and each negative sample image of human eyes; 利用人眼正样本图像、人眼睛负样本图像及其局部特征对支持向量分类器(SVM)进行训练,得到人脸的眼部分类模型;The support vector classifier (SVM) is trained using positive sample images of human eyes, negative sample images of human eyes and their local features to obtain a face eye classification model; 收集第三数量的唇部正样本图像和第四数量的唇部负样本图像,提取每张唇部正样本图像、唇部负样本图像的局部特征;及Collecting a third number of lip positive sample images and a fourth number of lip negative sample images, and extracting local features of each lip positive sample image and each lip negative sample image; and 利用唇部正样本图像、唇部负样本图像及其局部特征对支持向量分类器(SVM)进行训练,得到人脸的唇部分类模型。The lip positive sample images, lip negative sample images and their local features are used to train a support vector classifier (SVM) to obtain a lip classification model for the face. 5.一种人脸遮挡检测方法,其特征在于,所述方法包括:5. A face occlusion detection method, characterized in that the method comprises: 图像获取步骤:获取摄像装置拍摄的实时图像,利用人脸识别算法从该实时图像中提取一张实时脸部图像;Image acquisition step: acquiring a real-time image captured by a camera device, and extracting a real-time facial image from the real-time image using a face recognition algorithm; 特征点识别步骤:将该实时脸部图像输入预先训练好的面部平均模型,利用该面部平均模型从该实时脸部图像中识别出t个面部特征点,该面部平均模型的训练步骤包括:建立一个有n张人脸图像的第一样本库,在每张人脸图像中标记t个面部特征点,所述t个面部特征点包括:代表眼部位置的t1个眼眶特征点、t2个眼球特征点及代表唇部位置的t3个唇部特征点,每张人脸图像中的该t个特征点组成一个形状特征向量S,得到训练数据集(I1,S1),...,(In,Sn),其中I为人脸图像,S是人脸图像中的特征点组成的形状特征向量;及Feature point recognition step: input the real-time facial image into a pre-trained facial average model, and use the facial average model to identify t facial feature points from the real-time facial image. 
The training step of the facial average model includes: establishing a first sample library with n facial images, marking t facial feature points in each facial image, the t facial feature points including: t1 orbital feature points representing eye positions, t2 eyeball feature points and t3 lip feature points representing lip positions, the t feature points in each facial image form a shape feature vector S, and obtain a training data set (I1, S1), ..., (In, Sn), where I is a facial image and S is a shape feature vector composed of feature points in the facial image; and 利用所述训练数据集对人脸特征识别模型进行训练得到面部平均模型,该人脸特征识别模型为ERT算法,公式为:其中t表示级联序号,τt(·,·)表示当前级的回归器、每个回归器由多棵回归树组成,为当前模型的形状估计;每个回归器τt(·,·)根据输入图像I和来预测一个增量The training data set is used to train the face feature recognition model to obtain the face average model. The face feature recognition model is the ERT algorithm, and the formula is: where t represents the cascade number, τ t (·, ·) represents the regressor of the current level, each regressor is composed of multiple regression trees, and is the shape estimation of the current model; each regressor τ t (·, ·) predicts an increment according to the input image I and 在模型训练的过程中,根据样本库中人脸图像中所述t个面部特征点组成的所述特征向量S训练出第一棵回归树,将第一棵回归树的预测值与所述t个面部特征点的真实值的残差用来训练第二棵树;依次类推,直到训练出第N棵树的预测值与所述t个面部特征点的真实值的残差接近于0,得到ERT算法的所有回归树,根据所述回归树得到该面部平均模型;及In the process of model training, a first regression tree is trained according to the feature vector S composed of the t facial feature points in the face image in the sample library, and the residual between the predicted value of the first regression tree and the true value of the t facial feature points is used to train the second tree; and so on, until the residual between the predicted value of the trained Nth tree and the true value of the t facial feature points is close to 0, all regression trees of the ERT algorithm are obtained, and the facial average model is obtained according to the regression trees; and 特征区域判断步骤:根据该t个面部特征点的位置信息确定眼部区域和唇部区域,将该眼部区域和该唇部区域输入预先训练好的人脸的眼部分类模型、人脸的唇部分类模型,判断所述眼部区域和唇部区域的真实性,并根据判断结果判断该实时图像中的人脸是否发生遮挡。Feature area judgment step: determine the eye area and the lip area according to the position information of the t facial feature points, input the eye area and the lip area into a pre-trained face eye classification model and a face lip classification model, judge the authenticity of the eye area and the lip area, and judge whether the face in the real-time image is occluded based on the judgment result. 6.根据权利要求5所述的人脸遮挡检测方法,其特征在于,该方法还包括:6. The face occlusion detection method according to claim 5, characterized in that the method further comprises: 判断步骤:判断所述人脸的眼部分类模型、人脸的唇部分类模型对所述眼部区域及唇部区域的判断结果是否均为真实。Judgment step: judging whether the judgment results of the eye area and the lip area of the face by the eye classification model and the lip classification model are both true. 7.根据权利要求5或6所述的人脸遮挡检测方法,其特征在于,该方法还包括:7. 
7. The face occlusion detection method according to claim 5 or 6, characterized in that the method further comprises:

when the eye classification model and the lip classification model judge both the eye region and the lip region to be genuine, determining that the face in the real-time facial image is not occluded; and

when the judgment results for the eye region and the lip region include a result that is not genuine, prompting that the face in the real-time facial image is occluded.

8. The face occlusion detection method according to claim 5, characterized in that the training of the eye classification model and the lip classification model of the face comprises:

collecting a first number of positive eye sample images and a second number of negative eye sample images, and extracting local features from each positive and negative eye sample image;

training a support vector machine (SVM) classifier with the positive eye sample images, the negative eye sample images and their local features to obtain the eye classification model of the face;

collecting a third number of positive lip sample images and a fourth number of negative lip sample images, and extracting local features from each positive and negative lip sample image; and

training a support vector machine (SVM) classifier with the positive lip sample images, the negative lip sample images and their local features to obtain the lip classification model of the face.

9. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a face occlusion detection program which, when executed by a processor, implements the steps of the face occlusion detection method according to any one of claims 5 to 8.
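Tying the two sketches above together, this final editorial sketch shows the occlusion decision of claims 3 and 7: each cropped region is classified by its SVM, and the face is reported occluded unless both regions are judged genuine. The names eye_model, lip_model, eye_region and lip_region refer to the illustrative objects defined in the earlier sketches, not to anything named in the patent.

```python
from skimage import transform
from skimage.feature import hog

def region_is_genuine(model, region):
    """Classify one cropped region with the same HOG pipeline used in training."""
    patch = transform.resize(region, (32, 32), anti_aliasing=True)
    feat = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return model.predict(feat.reshape(1, -1))[0] == 1

def face_occluded(eye_model, lip_model, eye_region, lip_region):
    """Not occluded only if both judgments are genuine (claims 3 and 7)."""
    return not (region_is_genuine(eye_model, eye_region)
                and region_is_genuine(lip_model, lip_region))

if face_occluded(eye_model, lip_model, eye_region, lip_region):
    print("Face occluded - prompt the user")
else:
    print("Face not occluded - proceed to recognition")
```

The decision reduces to two SVM evaluations per frame, so the check stays cheap enough to run on live video.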
HK18106241.4A 2018-05-14 2018-05-14 Face shielding detection method, device and storage medium HK1246921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
HK18106241.4A HK1246921B (en) 2018-05-14 2018-05-14 Face shielding detection method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
HK18106241.4A HK1246921B (en) 2018-05-14 2018-05-14 Face shielding detection method, device and storage medium

Publications (3)

Publication Number Publication Date
HK1246921A1 HK1246921A1 (en) 2018-09-14
HK1246921A HK1246921A (en) 2018-09-14
HK1246921B true HK1246921B (en) 2019-11-22

Family

ID=71144579

Family Applications (1)

Application Number Title Priority Date Filing Date
HK18106241.4A HK1246921B (en) 2018-05-14 2018-05-14 Face shielding detection method, device and storage medium

Country Status (1)

Country Link
HK (1) HK1246921B (en)

Similar Documents

Publication Publication Date Title
US10534957B2 (en) Eyeball movement analysis method and device, and storage medium
US10445562B2 (en) AU feature recognition method and device, and storage medium
WO2019033572A1 (en) Method for detecting whether face is blocked, device and storage medium
CN107633205B (en) lip motion analysis method, device and storage medium
US10489636B2 (en) Lip movement capturing method and device, and storage medium
WO2019033571A1 (en) Facial feature point detection method, apparatus and storage medium
US10650234B2 (en) Eyeball movement capturing method and device, and storage medium
WO2019071664A1 (en) Human face recognition method and apparatus combined with depth information, and storage medium
CN113743371B (en) Fingerprint identification method and fingerprint identification device
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
EP3200092A1 (en) Method and terminal for implementing image sequencing
CN109711287B (en) Face acquisition method and related product
HK1246926B (en) Eyeball motion analysis method, device and storage medium
HK1246925B (en) Lip motion analysis method, device and storage medium
HK1246923A (en) Lip motion capturing method, device and storage medium
HK1246924B (en) Identifying method, device and storage medium for AU features