CN109035167A - Method, apparatus, device and medium for processing multiple human faces in an image - Google Patents
- Publication number
- CN109035167A CN109035167A CN201810786664.3A CN201810786664A CN109035167A CN 109035167 A CN109035167 A CN 109035167A CN 201810786664 A CN201810786664 A CN 201810786664A CN 109035167 A CN109035167 A CN 109035167A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- faces
- processing
- given
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The present disclosure relates to a method, apparatus, device and medium for processing multiple human faces in an image. According to one embodiment, a method for processing multiple human faces in an image is provided. In the method, multiple human faces in the image are detected; the image is segmented to separate the multiple faces from the background of the image; for a given face among the separated faces, a given evaluation vector is generated based on attributes of the given face; corresponding blurring is applied to the given face based on the given evaluation vector; and the blurred faces are restored into the image.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular to a method, apparatus, device and medium for performing face blurring on an image containing multiple human faces.
Background
With the advent of the big-data era, massive amounts of data are generated every day, including pictures, audio, video, text, and so on. With the widespread adoption of the Internet, ordinary people can easily obtain a wide variety of pictures, which raises issues of personal privacy and portrait rights. For this reason, pictures and videos often need to be blurred before use.
Summary of the Invention
In view of the above problems, example embodiments of the present disclosure provide a solution for processing multiple human faces in an image.
In a first aspect of the present disclosure, a method for processing multiple human faces in an image is provided. Specifically, the method includes: detecting multiple human faces in an image; segmenting the image to separate the faces from the background of the image; for a given face among the separated faces, generating a given evaluation vector based on attributes of the given face; applying corresponding blurring to the given face based on the given evaluation vector; and restoring the blurred faces into the image.
In a second aspect of the present disclosure, an apparatus for processing multiple human faces in an image is provided. Specifically, the apparatus includes: a detection module configured to detect multiple human faces in an image; a segmentation module configured to segment the image to separate the faces from the background of the image; an evaluation module configured to generate, for a given face among the separated faces, a given evaluation vector based on attributes of the given face; a processing module configured to apply corresponding blurring to the given face based on the given evaluation vector; and a restoration module configured to restore the blurred faces into the image.
In a third aspect of the present disclosure, a device is provided, including one or more processors and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to the first aspect of the present disclosure.
In a fourth aspect of the present disclosure, a computer-readable medium is provided on which a computer program is stored, the program, when executed by a processor, implementing the method according to the first aspect of the present disclosure.
Embodiments of the present disclosure can perform automatic face blurring on large batches of pictures and videos. In addition, while blurring multiple faces, embodiments of the present disclosure reduce the impact of the blurring on picture or video quality, thereby improving the viewing experience.
It should be understood that the content described in this Summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description.
Brief Description of the Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the drawings, identical or similar reference numerals denote identical or similar elements, in which:
FIG. 1 schematically shows a flowchart of a method for processing multiple human faces in an image according to an embodiment of the present disclosure;
FIG. 2A schematically shows a flowchart of a method for detecting multiple human faces in an image according to an embodiment of the present disclosure;
FIG. 2B schematically shows a flowchart of a method for segmenting an image according to an embodiment of the present disclosure;
FIG. 3 schematically shows a flowchart of a method for processing multiple human faces in an image according to another embodiment of the present disclosure;
FIG. 4 schematically shows a block diagram of an apparatus for processing multiple human faces in an image according to an embodiment of the present disclosure;
FIG. 5A schematically shows a block diagram of a detection module according to an embodiment of the present disclosure;
FIG. 5B schematically shows a block diagram of a segmentation module according to an embodiment of the present disclosure;
FIG. 6 schematically shows a block diagram of an apparatus for processing multiple human faces in an image according to another embodiment;
FIG. 7 shows a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the present disclosure will be understood thoroughly and completely. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the protection scope of the present disclosure.
In the description of the embodiments of the present disclosure, the term "including" and similar expressions should be understood as open-ended inclusion, i.e., "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first", "second", and so on may refer to different or identical objects. Other explicit and implicit definitions may also be included below.
In image processing, the process of blurring sensitive elements such as human faces may be referred to as "desensitization". Traditional desensitization is usually performed by professionals who manually blur faces using complex, cumbersome image-processing software, manually selecting each region to be processed. When an image contains many people, this traditional approach requires a very large workload and consumes a great deal of time.
In addition, when traditional face blurring is applied to an image containing multiple faces, a single blurring method is used for all faces regardless of their orientation and condition, and no consideration is given to whether the blurred faces remain in harmony with the background. As a result, after all faces in a multi-person image are blurred, the blurred regions appear abrupt and the viewing quality of the image degrades noticeably.
The present disclosure proposes a solution for desensitizing images containing multiple human faces. The solution can automatically desensitize large batches of pictures or videos, greatly reducing the desensitization workload. In addition, the present disclosure improves the visual quality of desensitized images, so that multi-person images retain a good viewing experience after desensitization.
Embodiments of the present disclosure mainly relate to desensitizing images containing multiple human faces. It should be understood, however, that the present disclosure is not limited to blurring human faces; any target requiring desensitization within the spirit and scope of the appended claims should be considered to fall within the protection scope of the present disclosure.
FIG. 1 schematically shows a flowchart of a method 100 for processing multiple human faces in an image according to an embodiment of the present disclosure. As shown in FIG. 1, the method 100 may include blocks 101-105, which are described in detail below.
As shown in FIG. 1, at block 101, multiple human faces in the image are detected. This involves collecting the images to be desensitized; an image here may be a picture or a video. If the input is a video file, the video needs to be parsed, and subsequent processing is performed frame by frame. After the images to be processed have been collected, batch face blurring can begin. More details of the operations performed at block 101 are described below with reference to FIG. 2A.
FIG. 2A shows a flowchart of a method 200A for detecting multiple human faces in an image according to an embodiment of the present disclosure. First, face detection is performed on the image to find the face images of all persons in the picture or video frame. Specifically, as shown in FIG. 2A, the face detection process may include blocks 210-230.
At block 210, the image containing multiple faces is first preprocessed; the preprocessing may include mean subtraction, normalization, and resizing. Specifically, for the algorithm to work effectively, the image features should have similar value ranges, so a normalization operation is required; this operation may include subtracting the mean, normalizing the values, and resizing the image.
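The preprocessing at block 210 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation: the per-channel mean, global standard deviation, nearest-neighbour resizing, and the 224x224 target size are all assumptions made here for concreteness.

```python
import numpy as np

def preprocess(image, target_size=(224, 224)):
    """Mean subtraction, normalization, and resizing (nearest-neighbour).

    `target_size` is a hypothetical choice; the patent only states that
    the image is resized, not to which dimensions.
    """
    img = image.astype(np.float32)
    img -= img.mean(axis=(0, 1), keepdims=True)   # subtract per-channel mean
    std = img.std() or 1.0
    img /= std                                    # bring values to a similar range
    h, w = img.shape[:2]
    th, tw = target_size
    rows = np.arange(th) * h // th                # nearest-neighbour row indices
    cols = np.arange(tw) * w // tw                # nearest-neighbour column indices
    return img[rows][:, cols]
```

In practice a trained detector would dictate the exact mean, scale, and input size; the structure of the three steps stays the same.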
At block 220, the preprocessed image is detected to generate face regions. Here, an existing face detection algorithm is used to detect faces in the image and obtain the face regions in the picture or video frame. The face detection algorithm may be a neural-network-based detector comprising coarse face detection, facial landmark detection, and face re-filtering stages. The detection algorithm may be packaged as a module, so that an image can be fed into the module and the detection results output from it. The final detection result may be the positions of all faces in the image, expressed as (x, y, w, h), where x and y are the vertex coordinates of the face region and w and h are its width and height.
At block 230, each face region is expanded outward by a certain number of pixels. After block 220, the location regions of all faces in the image have been obtained. To prevent parts of a face from being lost in the subsequent segmentation, the regions may be expanded outward on the basis of the above location regions, for example by 5-10 pixels on each of the four borders of the rectangular region.
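The expansion at block 230 amounts to growing the rectangle on all four sides while clamping it to the image borders. A minimal sketch (the 8-pixel margin is one value from the 5-10 pixel range suggested above):

```python
def expand_box(x, y, w, h, img_w, img_h, margin=8):
    """Expand a face box (x, y, w, h) outward by `margin` pixels on each
    side, clamped so the box stays inside an img_w x img_h image."""
    x0 = max(x - margin, 0)
    y0 = max(y - margin, 0)
    x1 = min(x + w + margin, img_w)
    y1 = min(y + h + margin, img_h)
    return x0, y0, x1 - x0, y1 - y0
```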
After block 101, images of multiple faces have been obtained, but these are only rough face images that still contain some non-face background around each face. Since this background is not needed when processing the faces, the face images need to be further segmented.
Returning to FIG. 1, at block 102, the image is segmented to separate the multiple faces from the background of the image. Specifically, at this block the face images obtained at block 101 undergo precise face segmentation to finally obtain pixel-level face images. More details of the operations performed at block 102 are described below with reference to FIG. 2B.
FIG. 2B shows a flowchart of a method 200B for segmenting an image according to an embodiment of the present disclosure. As shown in FIG. 2B, the method 200B includes operations performed at blocks 240-260.
At block 240, the face regions expanded at block 230 are preprocessed; the preprocessing includes mean subtraction, normalization, and resizing.
At block 250, the faces in the preprocessed face regions are precisely segmented to generate pixel-level face images.
According to an embodiment of the present disclosure, at block 250 a face segmentation algorithm may be used to perform precise face segmentation. The algorithm is built on a fully convolutional neural network and applies techniques such as atrous (dilated) convolution, multi-scale feature-map collection, and fully connected CRFs, maximizing the accuracy of the region on which the face processing acts. The face segmentation algorithm may be packaged as a module, so that an image can be fed into the segmentation module and the result output from it.
At block 260, the pixels of the segmented faces are recorded. Specifically, to facilitate the later restoration step, the pixel positions of each segmented face are recorded in a file.
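The record kept at block 260 can be as simple as the list of (row, column) coordinates where the segmentation mask marks a face pixel. A NumPy sketch, assuming a binary mask; the on-disk format is not specified in the text, so a plain coordinate array (savable with `np.save`) is assumed:

```python
import numpy as np

def record_face_pixels(mask):
    """Return an (N, 2) array of (row, col) coordinates of the pixels the
    segmentation mask marks as face (mask > 0). The array can be written
    to a file and used later to paste the blurred pixels back."""
    return np.argwhere(mask > 0)
```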
Returning to FIG. 1, at block 103, for a given face among the separated faces, a given evaluation vector is generated based on attributes of the given face.
Specifically, after the segmentation at block 102, precisely segmented face images have been obtained. A given face among the multiple faces then needs to be evaluated and an evaluation vector generated. The given face may be a single face or a combination of several faces.
According to an embodiment of the present disclosure, the evaluation vector has multiple evaluation dimensions, such as sharpness, resolution, face angle, image brightness, and image contrast, where the face angle comprises the roll, pitch, and yaw angles. Any other face attribute may also be added as needed. The attributes listed above can be expressed as a 1x7 vector, for example [1, 1, 0.8, 0.9, 0.8, 1, 1], where each dimension corresponds to one of the above attributes.
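With the seven attributes each normalized to [0, 1], assembling the 1x7 evaluation vector is a straightforward concatenation. A sketch; the ordering (sharpness, resolution, roll, pitch, yaw, brightness, contrast) is an assumption, since the patent gives only the example vector [1, 1, 0.8, 0.9, 0.8, 1, 1] without naming which dimension is which:

```python
def evaluation_vector(sharpness, resolution, roll, pitch, yaw,
                      brightness, contrast):
    """Assemble the 1x7 evaluation vector from attribute scores.

    All inputs are assumed to already be normalized to [0, 1]."""
    v = [sharpness, resolution, roll, pitch, yaw, brightness, contrast]
    assert all(0.0 <= a <= 1.0 for a in v), "attributes must be normalized"
    return v
```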
According to an embodiment of the present disclosure, the sharpness, resolution, face angle, image brightness, and image contrast may be computed as follows:
Sharpness: the resolution and the video bit rate may be combined with weights to compute the sharpness of the video, which is then normalized to [0, 1];
Resolution: obtained directly from the image, after which all resolutions are normalized to [0, 1];
Face angle: an open-source algorithm directly computes the values of the three face angles (roll, pitch, yaw), which are then converted to radians and normalized;
Image brightness: 100 points are sampled at random from the image, their mean value is computed, and the result is normalized;
Image contrast: may be obtained from the formula C = Σ_δ δ(i,j)² · P_δ(i,j), where δ(i,j) = |i - j| denotes the gray-level difference between adjacent pixels and P_δ(i,j) is the probability that the gray-level difference between adjacent pixels equals δ.
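The contrast formula above can be evaluated by histogramming the absolute gray-level differences between each pixel and its horizontal and vertical neighbours. A NumPy sketch; four-connectivity is an assumption, since the patent does not state which neighbourhood is used:

```python
import numpy as np

def contrast(gray):
    """Contrast C = sum over delta of delta^2 * P(delta), where delta is
    the absolute gray-level difference between adjacent (4-connected)
    pixels and P(delta) is the empirical probability of that difference."""
    gray = gray.astype(np.int64)
    diffs = np.concatenate([
        np.abs(np.diff(gray, axis=0)).ravel(),   # vertical neighbour pairs
        np.abs(np.diff(gray, axis=1)).ravel(),   # horizontal neighbour pairs
    ])
    values, counts = np.unique(diffs, return_counts=True)
    p = counts / counts.sum()                    # empirical P(delta)
    return float(np.sum(values.astype(np.float64) ** 2 * p))
```

A checkerboard of 0s and 1s, where every adjacent pair differs by exactly 1, yields a contrast of 1.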
As shown in FIG. 1, at block 104, corresponding blurring is applied to the given face based on the given evaluation vector. Specifically, each of the multiple face images is blurred at a different level according to its evaluation vector.
According to an embodiment of the present disclosure, every dimension of the evaluation vector may participate in the blurring computation, yielding a weighting coefficient according to which the face image is processed. For example, comparing a face image with a large head-up angle, poor sharpness, and low brightness against one with a relatively frontal angle, good sharpness, and normal brightness, the former obtains a different weighting coefficient from the latter, so that the former needs only relatively light processing while the latter requires a higher level of processing.
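One simple way for every dimension to participate is a weighted average of the evaluation vector. This is an illustrative choice, not the patent's exact formula, which is not disclosed; the equal default weights are hypothetical:

```python
def blur_coefficient(eval_vec, weights=None):
    """Collapse the evaluation vector into one coefficient in [0, 1].

    A sharp, frontal, well-lit face scores near 1 and receives strong
    blurring; a poor-quality face scores lower and is blurred lightly.
    `weights` default to hypothetical equal weights."""
    if weights is None:
        weights = [1.0 / len(eval_vec)] * len(eval_vec)
    return sum(v * w for v, w in zip(eval_vec, weights))
```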
According to an embodiment of the present disclosure, the blurring may include image-processing techniques such as Gaussian filtering, contrast enhancement, image dithering, and mosaicking.
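Of the listed techniques, Gaussian filtering maps most directly onto a per-face strength: the kernel's standard deviation can be scaled by the coefficient derived from the evaluation vector. A self-contained separable-Gaussian sketch in NumPy; in practice `cv2.GaussianBlur` would do the same job, and scaling sigma by the coefficient is an illustrative design rather than the patent's stated formula:

```python
import numpy as np

def gaussian_blur(gray, coeff, base_sigma=4.0):
    """Blur a 2-D gray image with a separable Gaussian whose sigma is
    base_sigma * coeff, so higher-coefficient faces are blurred harder."""
    sigma = max(base_sigma * coeff, 1e-3)
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    k /= k.sum()                                    # normalized 1-D kernel
    padded = np.pad(gray.astype(np.float64), radius, mode="edge")
    # convolve rows then columns with the 1-D kernel (separability)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
```

Because the kernel is normalized, a constant image passes through unchanged, which is a convenient sanity check.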
As shown in FIG. 1, at block 105, the blurred faces are restored into the image. Specifically, after a face image has been processed at block 104, each face is restored to its place in the original image.
According to an embodiment of the present disclosure, the pixel positions of the segmented faces were recorded during the segmentation described above (see block 260), so the processed face images can be restored into the original image according to the recorded data (for example, a segmentation-position file).
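Given the coordinate record from block 260, restoration is a per-pixel paste. A NumPy sketch, assuming the record is the (row, col) coordinate array described at the segmentation step:

```python
import numpy as np

def restore_face(image, blurred_face, coords):
    """Write the blurred face pixels back into the original image.

    `coords` is an (N, 2) array of (row, col) positions recorded during
    segmentation; `blurred_face` holds the processed values at the same
    positions of a same-sized array."""
    out = image.copy()
    rows, cols = coords[:, 0], coords[:, 1]
    out[rows, cols] = blurred_face[rows, cols]
    return out
```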
If the original input was a picture, the processing ends here; if it was a video, the same processing is applied to each video frame as the video is parsed, and the processed frames are finally recombined into a video.
FIG. 3 shows a flowchart of a method 300 for processing multiple human faces in an image according to another embodiment of the present disclosure. As shown in FIG. 3, the method 300 includes blocks 101-103, 105, and 304, and additionally includes block 306. The method 300 differs from the method 100 of FIG. 1 in that block 306 is added between blocks 102 and 103, and the processing at block 304 is adjusted accordingly relative to block 104 of FIG. 1.
As shown in FIG. 3, at block 306, a deviation rate of the image is determined, the deviation rate indicating the difference between the multiple faces and the background of the image; and at block 304, in addition to the given evaluation vector, the blurring of the given face is also based on the deviation rate.
According to an embodiment of the present disclosure, the deviation rate is obtained from the original image and indicates the difference between the multiple faces and the background of the image, so as to control the strength of subsequent processing. For example, a deviation rate of 1 means no deviation limit is imposed, while a deviation rate of 0.8 means the strength of the face blurring is reduced to 0.8 times the original. Setting a deviation rate ensures, to the greatest extent possible, that the processed faces do not clash with the background. For example, if the background of the original image is clearly darker than normal, the deviation rate can be used to tone down the blurring so that the blurred faces also appear darker overall, matching the dark background and preventing the processed faces from standing out against it.
According to an embodiment of the present disclosure, the deviation rate may be determined as follows: normalize the pixel values of the image; select multiple sampling points from the faces and the background of the image based on a predetermined probability distribution; and determine the deviation rate from the selected sampling points. For example, first normalize all pixel values of the picture to [-1, 1]; then, following a Gaussian distribution with mean 0 and variance 1, randomly sample N points from the image with probability p and take their pixel values; compute a weighted mean of these values, which still lies in [-1, 1]; take the absolute value of the weighted mean; and finally subtract that absolute value from 1 to obtain the deviation rate, where p and N are preset parameters.
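The recipe above can be sketched as follows. The text leaves two details open, which are filled in here as assumptions: the sampled values are weighted uniformly, and the Gaussian distribution is used to draw sample positions concentrated around the image centre:

```python
import numpy as np

def deviation_rate(image, n_points=100, rng=None):
    """Deviation rate: normalize pixels to [-1, 1], sample points whose
    positions follow a Gaussian distribution, and return
    1 - |mean of sampled values|.

    Uniform weighting and centre-anchored Gaussian sampling are
    assumptions; the patent only fixes the overall recipe."""
    rng = rng or np.random.default_rng()
    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    norm = 2.0 * (img - lo) / (hi - lo or 1.0) - 1.0   # map to [-1, 1]
    h, w = norm.shape[:2]
    # mean-0, variance-1 offsets mapped onto pixel indices and clipped
    ys = np.clip(((rng.standard_normal(n_points) / 3 + 1) / 2 * h).astype(int), 0, h - 1)
    xs = np.clip(((rng.standard_normal(n_points) / 3 + 1) / 2 * w).astype(int), 0, w - 1)
    weighted_mean = norm[ys, xs].mean()                # uniform weights assumed
    return 1.0 - abs(weighted_mean)
```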
Through automatic face recognition, segmentation and cropping, blurring, and restoration, the above embodiments of the present disclosure can automatically blur faces in large batches of pictures and videos. In addition, embodiments of the present disclosure evaluate each face and build an evaluation vector so that each face is processed individually, and set a deviation rate according to the image to reduce conflict between the processed faces and the background. Blurring multiple faces in this way minimizes the impact on the picture or video and improves its viewing quality.
FIG. 4 shows a block diagram of an apparatus 400 for processing multiple human faces in an image according to an embodiment of the present disclosure. The apparatus 400 includes modules 401-405, each of which is described by way of example below.
As shown in FIG. 4, module 401 is a detection module configured to detect multiple human faces in an image. More details of module 401 are described below with reference to FIG. 5A.
FIG. 5A shows a schematic block diagram of a detection module according to an embodiment of the present disclosure. As shown in FIG. 5A, the detection module 500A (i.e., the detection module 401 in FIG. 4) may include submodules 510-530.
Submodule 510 is a first preprocessing submodule configured to preprocess the image containing multiple faces; the preprocessing may include mean subtraction, normalization, and resizing.
Submodule 520 is a detection submodule configured to detect the preprocessed image to generate face regions.
Submodule 530 is an expansion submodule configured to expand each face region outward by a certain number of pixels.
Images of multiple faces are obtained through the detection module 401. Since these images contain some non-face background, the face images need to be further segmented.
返回图4,模块402为分割模块。分割模块被配置为对图像进行分割处理以将所述多个人脸与所述图像的背景分离。具体地,分割模块402将对从检测模块401获得的人脸图像进行精确的人脸分割以最终获得像素级别的人脸图像。在下文中,将参见图5B详细描述模块402的更多细节。Returning to Fig. 4, module 402 is a segmentation module. The segmentation module is configured to perform segmentation processing on the image to separate the plurality of human faces from the background of the image. Specifically, the segmentation module 402 will perform accurate face segmentation on the face image obtained from the detection module 401 to finally obtain a pixel-level face image. Hereinafter, more details of the module 402 will be described in detail with reference to FIG. 5B .
图5B示出了根据本公开的一个实施方式的分割模块的示意框图。如图所示,分割模块500B(即图4中的检测模块402)包括子模块540-560。Fig. 5B shows a schematic block diagram of a segmentation module according to an embodiment of the present disclosure. As shown, segmentation module 500B (ie, detection module 402 in FIG. 4 ) includes submodules 540-560.
Sub-module 540 is a second preprocessing sub-module configured to preprocess the expanded face regions; this preprocessing includes mean subtraction, normalization, and resizing.
Sub-module 550 is a precise segmentation sub-module configured to precisely segment the faces in the preprocessed face regions to generate pixel-level face images.
Sub-module 560 is a recording sub-module configured to record the pixel positions of the segmented faces. Specifically, to facilitate the later restoration processing, the pixel positions of the segmented faces are recorded in a file.
Returning to FIG. 4, module 403 is an evaluation module configured to generate, for a given face among the separated faces, a given evaluation vector based on attributes of that face.
Specifically, precisely segmented face images are obtained after the segmentation processing of module 402. Each of the multiple faces then needs to be evaluated, and an evaluation vector is generated for it.
According to an embodiment of the present disclosure, the evaluation vector has multiple dimensions, for example sharpness, resolution, face angle, image brightness, and image contrast, where the face angle includes the roll, pitch, and yaw angles.
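One way to assemble such an evaluation vector is sketched below. The dimensions follow the ones listed in the text; the concrete metrics (Laplacian variance for sharpness, standard deviation for contrast) and the assumption that the pose angles come from an external pose estimator are illustrative, not the patent's specification:

```python
import numpy as np

def evaluation_vector(face, angles):
    """Build an evaluation vector for one face crop.

    face   : H x W grayscale float array in [0, 1]
    angles : (roll, pitch, yaw) in degrees, assumed to come from a
             pose estimator not shown here
    """
    # Sharpness: variance of a 4-neighbour Laplacian (finite differences).
    lap = (np.roll(face, 1, 0) + np.roll(face, -1, 0)
           + np.roll(face, 1, 1) + np.roll(face, -1, 1) - 4 * face)
    sharpness = float(lap.var())
    resolution = float(face.shape[0] * face.shape[1])  # pixel count
    brightness = float(face.mean())
    contrast = float(face.std())
    roll, pitch, yaw = angles
    return np.array([sharpness, resolution, roll, pitch, yaw,
                     brightness, contrast])
```

A perfectly flat crop, for instance, scores zero sharpness and zero contrast, so it would need little or no additional blurring.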
As shown in FIG. 4, module 404 is a processing module configured to perform corresponding blurring on the given face based on the given evaluation vector. Specifically, different levels of blurring are applied to each of the multiple face images according to its evaluation vector.
According to an embodiment of the present disclosure, the blurring processing includes, for example, image processing techniques such as Gaussian filtering, contrast enhancement, image dithering, and pixelation.
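Of the techniques named, Gaussian filtering with an evaluation-dependent strength is the simplest to sketch. The level mapping (score thresholds and sigma values) is an illustrative assumption, as is collapsing the evaluation vector into a single score:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur of a 2-D array in pure NumPy
    (a stand-in for a library call such as cv2.GaussianBlur)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # Convolve rows, then columns, keeping the original size.
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def blur_for_score(face, score, thresholds=(0.3, 0.6)):
    """Map an aggregate evaluation score in [0, 1] to a blur strength.
    Clearer faces (high score) get stronger blurring; thresholds and
    sigma values are illustrative."""
    if score < thresholds[0]:
        return face                      # already hard to recognize
    sigma = 2.0 if score < thresholds[1] else 5.0
    return gaussian_blur(face, sigma)
```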
As shown in FIG. 4, module 405 is a restoration module configured to restore the blurred faces into the image. Specifically, after a face image has been processed at the processing module 404, each face is restored to its original position in the image at the restoration module 405.
According to an embodiment of the present disclosure, because the segmented face pixels were recorded during the aforementioned segmentation processing, the processed face images can be restored to the original image according to the recorded data (for example, a segmentation-position file).
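The write-back step can be sketched as follows. The record layout (one coordinate pair list plus one flat array of processed pixel values per face) is an assumption made for illustration; the text only says that the segmented pixel positions are stored in a file:

```python
import numpy as np

def restore_faces(original, face_values, pixel_records):
    """Write processed face pixels back into a copy of the original image.

    face_values[i]   : 1-D array of processed pixel values for face i
    pixel_records[i] : (rows, cols) index arrays recorded during
                       segmentation, aligned with face_values[i]
    """
    out = original.copy()
    for vals, (rows, cols) in zip(face_values, pixel_records):
        out[rows, cols] = vals           # pixel-level write-back
    return out
```

Copying first keeps the original image intact, so the restoration is non-destructive with respect to the input.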
FIG. 6 shows a block diagram of an apparatus 600 for processing multiple faces in an image according to another embodiment. As shown, the apparatus 600 includes modules 401, 402, 606, 403, 604, and 405. The apparatus 600 of FIG. 6 differs from the apparatus 400 of FIG. 4 in that a deviation-rate calculation module 606 is added and the processing module is adjusted accordingly (i.e., the processing module 404 of FIG. 4 is replaced with module 604).
The added deviation-rate calculation module 606 is configured to determine a deviation rate of the image, the deviation rate indicating the difference between the multiple faces and the background of the image; and the new processing module 604 is configured to blur the given face based on the deviation rate in addition to the given evaluation vector.
According to an embodiment of the present disclosure, the deviation rate is obtained from the original image; it indicates the difference between the multiple faces and the background of the image and is used to control the strength of the subsequent image processing.
According to an embodiment of the present disclosure, the deviation rate may be determined by: normalizing the pixel values of the image; selecting a plurality of sampling points from the multiple faces and from the background of the image based on a predetermined probability distribution; and determining the deviation rate based on the selected sampling points.
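The three steps just listed can be sketched as follows. Uniform random sampling stands in for the unspecified "predetermined probability distribution", and the absolute difference of sample means is one plausible reading of the deviation rate, not the patent's exact formula:

```python
import numpy as np

def deviation_rate(image, face_mask, n_samples=500, rng=None):
    """Estimate how much the face region differs from the background.

    image     : H x W (x C) uint8 image
    face_mask : H x W boolean mask marking face pixels
    """
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float32) / 255.0          # step 1: normalize
    face_vals = img[face_mask]
    bg_vals = img[~face_mask]
    # Step 2: sample points from faces and background (uniform here).
    fs = rng.choice(face_vals, size=min(n_samples, len(face_vals)),
                    replace=False)
    bs = rng.choice(bg_vals, size=min(n_samples, len(bg_vals)),
                    replace=False)
    # Step 3: deviation rate from the sampled points.
    return float(abs(fs.mean() - bs.mean()))
```

A rate near 1 means the faces stand out strongly from the background, suggesting stronger blurring downstream; a rate near 0 means they already blend in.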
The apparatus 400 of FIG. 4 and the apparatus 600 of FIG. 6 correspond to the method 100 of FIG. 1 and the method 300 of FIG. 3, respectively. Therefore, the descriptions of methods 100 and 300 apply equally to apparatuses 400 and 600.
According to an exemplary implementation of the present disclosure, a device is provided that includes one or more processors and a storage apparatus for storing one or more programs. When the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to the present disclosure.
According to an exemplary implementation of the present disclosure, a computer-readable medium is provided on which a computer program is stored; when the program is executed by a processor, the method according to the present disclosure is implemented.
FIG. 7 shows a block diagram of a computing device 700 capable of implementing multiple embodiments of the present disclosure. As shown, the device 700 includes a central processing unit (CPU) 701, which can perform various appropriate actions and processes according to computer program instructions stored in a read-only memory (ROM) 702 or loaded from a storage unit 708 into a random-access memory (RAM) 703. The RAM 703 can also store various programs and data required for the operation of the device 700. The CPU 701, the ROM 702, and the RAM 703 are connected to one another via a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Multiple components of the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard or mouse; an output unit 707 such as various types of displays and speakers; a storage unit 708 such as a magnetic disk or optical disc; and a communication unit 709 such as a network card, modem, or wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The processing unit 701 executes the methods and processes described above, for example methods 100, 200A, 200B, and/or 300. For example, in some embodiments, methods 100, 200A, 200B, and/or 300 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the CPU 701, one or more steps of the methods 100, 200A, 200B, and/or 300 described above may be performed. Alternatively, in other embodiments, the CPU 701 may be configured to execute methods 100, 200A, 200B, and/or 300 in any other suitable manner (for example, by means of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), application-specific standard products (ASSP), systems on a chip (SOC), complex programmable logic devices (CPLD), and so on.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data-processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by, or in connection with, an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are contained in the above discussion, these should not be construed as limitations on the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are merely example forms of implementing the claims.
Claims (18)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810786664.3A CN109035167B (en) | 2018-07-17 | 2018-07-17 | Method, device, equipment and medium for processing multiple faces in image |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN109035167A true CN109035167A (en) | 2018-12-18 |
| CN109035167B CN109035167B (en) | 2021-05-18 |
Family
ID=64643667
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201810786664.3A Active CN109035167B (en) | 2018-07-17 | 2018-07-17 | Method, device, equipment and medium for processing multiple faces in image |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN109035167B (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110210393A (en) * | 2019-05-31 | 2019-09-06 | 百度在线网络技术(北京)有限公司 | The detection method and device of facial image |
| CN111783814A (en) * | 2019-11-27 | 2020-10-16 | 北京沃东天骏信息技术有限公司 | Data augmentation method, apparatus, device and computer readable medium |
| CN113362369A (en) * | 2021-06-07 | 2021-09-07 | 中国科学技术大学 | State detection method and detection device for moving object |
| CN114187628A (en) * | 2021-11-24 | 2022-03-15 | 支付宝(杭州)信息技术有限公司 | Identity authentication method, device and equipment based on privacy protection |
| CN115019348A (en) * | 2022-06-27 | 2022-09-06 | 北京睿家科技有限公司 | Biological feature recognition processing method, device, system, equipment and medium |
| CN115795507A (en) * | 2022-12-01 | 2023-03-14 | 厦门瑞为信息技术有限公司 | A side-end multi-channel video stream desensitization and reversal method, system and special player |
Citations (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1767638A (en) * | 2005-11-30 | 2006-05-03 | 北京中星微电子有限公司 | Visible image monitoring method for protecting privacy right and its system |
| US7440594B2 (en) * | 2002-07-30 | 2008-10-21 | Omron Corporation | Face identification device and face identification method |
| CN101510957A (en) * | 2008-02-15 | 2009-08-19 | 索尼株式会社 | Image processing device, camera device, communication system, image processing method, and program |
| CN104966067A (en) * | 2015-06-29 | 2015-10-07 | 福建天晴数码有限公司 | Image processing method and system for protecting privacy |
| CN105550592A (en) * | 2015-12-09 | 2016-05-04 | 上海斐讯数据通信技术有限公司 | Face image protection method and system and mobile terminal |
| CN105957001A (en) * | 2016-04-18 | 2016-09-21 | 深圳感官密码科技有限公司 | Privacy protecting method and privacy protecting device |
| CN105989574A (en) * | 2015-02-25 | 2016-10-05 | 光宝科技股份有限公司 | Image processing apparatus and image depth processing method |
| CN107038362A (en) * | 2015-12-01 | 2017-08-11 | 卡西欧计算机株式会社 | Image processing apparatus and image processing method |
| CN107454332A (en) * | 2017-08-28 | 2017-12-08 | 厦门美图之家科技有限公司 | Image processing method, device and electronic equipment |
| CN108073909A (en) * | 2017-12-29 | 2018-05-25 | 深圳云天励飞技术有限公司 | Method and apparatus, computer installation and the storage medium of the fuzzy facial image of synthesis |
- 2018-07-17: application CN201810786664.3A granted as patent CN109035167B (en), status: Active
Non-Patent Citations (3)
| Title |
|---|
| F. DUFAUX et al., "Scrambling for Video Surveillance with Privacy", 2006 Conference on Computer Vision and Pattern Recognition Workshop (CVPRW'06) * |
| PANAGIOTIS ILIA et al., "Face/Off: Preventing Privacy Leakage From Photos in Social Networks", CCS '15: Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security * |
| XU Jiayun, "Research on Video Encryption and Decryption Algorithms Based on Regions of Interest", China Master's Theses Full-text Database, Information Science and Technology * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109035167B (en) | 2021-05-18 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109035167A (en) | Method, device, device and medium for processing multiple human faces in an image | |
| Ding et al. | Single image rain and snow removal via guided L0 smoothing filter | |
| EP3798975B1 (en) | Method and apparatus for detecting subject, electronic device, and computer readable storage medium | |
| CN110136224B (en) | Image fusion method and device | |
| US8971628B2 (en) | Face detection using division-generated haar-like features for illumination invariance | |
| CN113781406B (en) | Scratch detection method and device for electronic component and computer equipment | |
| CN109492642B (en) | License plate recognition method, device, computer equipment and storage medium | |
| KR20180109665A (en) | A method and apparatus of image processing for object detection | |
| JP7419080B2 (en) | computer systems and programs | |
| US20110085741A1 (en) | Methods and apparatus for editing images | |
| WO2017100971A1 (en) | Deblurring method and device for out-of-focus blurred image | |
| CN112214773B (en) | Image processing method and device based on privacy protection and electronic equipment | |
| WO2023051377A1 (en) | Desensitization method and apparatus for image data | |
| CN111145086A (en) | Image processing method and device and electronic equipment | |
| Tao et al. | Simultaneous enhancement and noise suppression under complex illumination conditions | |
| CN114943649A (en) | Image deblurring method, device and computer readable storage medium | |
| US20080007747A1 (en) | Method and apparatus for model based anisotropic diffusion | |
| CN113239738B (en) | Image blurring detection method and blurring detection device | |
| CN116245752A (en) | Infrared image processing method and device, storage medium and electronic equipment | |
| WO2011033744A1 (en) | Image processing device, image processing method, and program for processing image | |
| CN113971671B (en) | Instance segmentation method, device, electronic device and storage medium | |
| US9686449B1 (en) | Methods and systems for detection of blur artifact in digital video due to high quantization | |
| CN118314591A (en) | Document image processing method, device, electronic equipment and medium | |
| CN111062272A (en) | Image processing and pedestrian identification method and device based on color recovery and readable storage medium | |
| CN117974482A (en) | Image enhancement method, system and device for enhancing anomaly detection capability |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |