
CN115063322A - An image processing method and system based on deep learning


Info

Publication number
CN115063322A
Authority
CN
China
Prior art keywords
image
important
area
image frame
target feature
Prior art date
Legal status
Pending
Application number
CN202210879112.3A
Other languages
Chinese (zh)
Inventor
黄梅志
Current Assignee
Wuhan Huaxia University of Technology
Original Assignee
Wuhan Huaxia University of Technology
Priority date
Filing date
Publication date
Application filed by Wuhan Huaxia University of Technology
Priority to CN202210879112.3A
Publication of CN115063322A

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/73 Deblurring; Sharpening
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/90 Determination of colour characteristics
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method and system based on deep learning. The method comprises the following steps: receiving a video file transmitted by an external terminal and performing a security check on the external terminal; separating image frames from the video file and extracting the important image frames that contain target features; correcting the important image frames, adjusting the sharpness of the corrected frames, and outputting high-resolution images; and constructing a target feature pattern model based on deep learning and inputting the resulting high-resolution images into the model to determine the target feature pattern. The technical scheme of the invention improves image processing efficiency and rapidly determines the target feature pattern.

Description

An image processing method and system based on deep learning

Technical Field

The present invention relates to the technical field of image processing, and in particular to an image processing method and system based on deep learning.

Background Art

Image processing technology uses computers to process image information. It mainly includes image digitization, image enhancement and restoration, image data encoding, image segmentation, and image recognition.

When an external device requests the server to track the movement pattern of a target, it uploads a large number of video files to the server, and the server receives a large number of such video files from external devices every day. In a video, the target of interest usually leaves a traceable trail, from which its subsequent trajectory can be predicted. However, existing image processing methods generally rely on large amounts of manual screening when determining trajectory patterns, which is inefficient. On this basis, the present invention proposes an image processing method and system based on deep learning that can process images quickly and thereby speed up the screening of target trajectory patterns.

SUMMARY OF THE INVENTION

The present invention provides an image processing method based on deep learning, comprising:

receiving a video file transmitted by an external terminal and performing a security check on the external terminal;

separating image frames from the video file and extracting, from those frames, the important image frames that contain target features;

correcting the important image frames, adjusting the sharpness of the corrected important image frames, and outputting high-resolution images;

constructing a target feature pattern model based on deep learning, and inputting the resulting high-resolution images into the model to determine the target feature pattern.
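The four claimed steps can be read as a simple processing pipeline. The sketch below is illustrative only: every function name and placeholder body is an assumption, since the patent specifies the steps but provides no code.

```python
# Illustrative pipeline skeleton for the four claimed steps.
# All function bodies are placeholder assumptions for demonstration.

def security_check(terminal_info: dict) -> bool:
    # Step 1 (placeholder): accept only terminals already marked trusted.
    return terminal_info.get("trusted", False)

def extract_important_frames(frames: list) -> list:
    # Step 2 (placeholder): keep frames flagged as containing the target.
    return [f for f in frames if f.get("has_target")]

def correct_and_sharpen(frames: list) -> list:
    # Step 3 (placeholder): mark frames as corrected high-resolution output.
    return [dict(f, corrected=True) for f in frames]

def determine_pattern(frames: list) -> str:
    # Step 4 (placeholder): a trained model would infer the pattern here.
    return "target-feature-pattern" if frames else "no-target"

def process_video(terminal_info: dict, frames: list) -> str:
    """Run the four steps in order, rejecting untrusted terminals."""
    if not security_check(terminal_info):
        return "rejected"
    important = extract_important_frames(frames)
    high_res = correct_and_sharpen(important)
    return determine_pattern(high_res)
```

Each placeholder corresponds to one claimed step; the detailed embodiments below refine steps 1 through 4.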

In the above deep-learning-based image processing method, the external terminal information is extracted from the video file and the security degree of the external terminal is computed; if the security degree of the external device is higher than the server's openness degree, image processing of the external device's video file is allowed; otherwise the external terminal's request is rejected.

In the above deep-learning-based image processing method, extracting the important image frames that contain target features from the image frames specifically comprises the following sub-steps:

from all the image frames separated from the video, finding the first important image frame that contains the target feature;

performing grayscale processing on the target feature in the first important image frame to determine the grayscale value of the target feature;

calculating the frame distance between each image frame to be examined and the first important image frame according to the grayscale distribution of the target feature, and determining all important image frames containing the target feature according to that frame distance.

In the above deep-learning-based image processing method, correcting the important image frames specifically comprises the following sub-steps:

extracting the target area from every important image frame, back-projecting the target area, checking for differences between the original image and the back-projected image, and selecting the important image frames whose differences lie within a predetermined range;

calculating the sharpness of each important image frame, selecting the sharpest image as the reference image frame, and performing an initial sharpness adjustment on important image frames whose sharpness is below a preset value;

adjusting the pixels, brightness, and feature areas of the other important image frames to match the reference image frame.

In the above deep-learning-based image processing method, extracting the target area from an important image frame specifically comprises: obtaining the foreground/background areas and the unknown area of the important image frame; then, so that points in the unknown area are pushed toward the foreground or background as far as possible, taking each point of the unknown area as a center, computing the distance between the pixel colors within a given radius and the pixel color of that point, assigning pixels whose distance exceeds a set maximum threshold to the foreground area and pixels whose distance is below a set minimum threshold to the background area, thereby shrinking the unknown area.

The present invention also provides an image processing system based on deep learning, comprising:

a security check module, configured to receive a video file transmitted by an external terminal and perform a security check on the external terminal;

an important-image-frame determination module, configured to separate image frames from the video file and extract, from those frames, the important image frames that contain target features;

a high-resolution image output module, configured to correct the important image frames, adjust the sharpness of the corrected important image frames, and output high-resolution images;

a deep learning module, configured to construct a target feature pattern model based on deep learning and input the resulting high-resolution images into the model to determine the target feature pattern.

In the above deep-learning-based image processing system, the security check module is specifically configured to extract the external terminal information from the video file and compute the security degree of the external terminal; if the security degree of the external device is higher than the server's openness degree, image processing of the external device's video file is allowed; otherwise the external terminal's request is rejected.

In the above deep-learning-based image processing system, the important-image-frame determination module is specifically configured to: find, among all the image frames separated from the video, the first important image frame that contains the target feature; perform grayscale processing on the target feature in the first important image frame to determine its grayscale value; and calculate the frame distance between each image frame to be examined and the first important image frame according to the grayscale distribution of the target feature, determining all important image frames containing the target feature according to that frame distance.

In the above deep-learning-based image processing system, the high-resolution image output module is specifically configured to: extract the target area from every important image frame, back-project the target area, check for differences between the original image and the back-projected image, and select the important image frames whose differences lie within a predetermined range; calculate the sharpness of each important image frame, select the sharpest image as the reference image frame, and perform an initial sharpness adjustment on important image frames whose sharpness is below the preset value; and adjust the pixels, brightness, and feature areas of the other important image frames to match the reference image frame.

In the above deep-learning-based image processing system, in the high-resolution image output module, extracting the target area from an important image frame specifically comprises: obtaining the foreground/background areas and the unknown area of the important image frame; then, so that points in the unknown area are pushed toward the foreground or background as far as possible, taking each point of the unknown area as a center, computing the distance between the pixel colors within a given radius and the pixel color of that point, assigning pixels whose distance exceeds a set maximum threshold to the foreground area and pixels whose distance is below a set minimum threshold to the background area, thereby shrinking the unknown area.

The beneficial effects achieved by the present invention are as follows: the technical scheme of the present invention improves image processing efficiency and rapidly determines the target feature pattern.

BRIEF DESCRIPTION OF THE DRAWINGS

To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them.

Fig. 1 is a flowchart of a deep-learning-based image processing method provided in Embodiment 1 of the present invention;

Fig. 2 is a schematic diagram of a deep-learning-based image processing system provided in Embodiment 2 of the present invention.

DETAILED DESCRIPTION

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative effort fall within the protection scope of the present invention.

Embodiment 1

As shown in Fig. 1, an embodiment of the present invention provides an image processing method based on deep learning, comprising:

Step S110: the server receives a video file transmitted by an external terminal and performs a security check on the external terminal.

In this embodiment, the external terminal information is extracted from the video file and the security degree of the external terminal is computed; if the security degree of the external device is higher than the server's openness degree, image processing of the external device's video file is allowed; otherwise the external terminal's request is rejected.

Specifically, the security check on the external terminal comprises the following sub-steps:

Step 111: extract the external terminal information from the video file of the external terminal, including the device information, the send and receive times of the data packets, the IP address, the port, and so on.

Step 112: determine whether the external terminal's IP address and port are allowed direct access by the server; if so, allow image processing of the external device's video file; otherwise, proceed to step 113.

If the IP address and port of the external device are already stored in the server as a safe address, subsequent image processing of the device's video file is allowed directly. If they are not stored in the server, the device may pose a security risk and must therefore be security-vetted.
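Step 112 amounts to a whitelist lookup. The sketch below is a minimal illustration; the whitelist representation and the addresses in it are assumptions, not part of the patent.

```python
# A minimal sketch of step 112: check whether the terminal's (IP, port)
# pair is already stored in the server as a safe address. The set-based
# whitelist and the sample addresses are hypothetical.

SAFE_ADDRESSES = {("10.0.0.5", 8080), ("192.168.1.20", 443)}  # hypothetical

def allowed_direct_access(ip: str, port: int) -> bool:
    """Return True if the terminal may skip the security-degree check."""
    return (ip, port) in SAFE_ADDRESSES

def handle_terminal(ip: str, port: int) -> str:
    if allowed_direct_access(ip, port):
        return "process"   # step 112: proceed directly to image processing
    return "vet"           # step 113: compute the security degree first
```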

Step 113: compute the security degree of the external device and the degree to which the server is open to access by external devices; if the device's security degree exceeds the server's openness degree, allow image processing of the device's video file; otherwise reject the external terminal's request.

Specifically, the security degree of the external device is computed with a formula (shown in the original only as an image and not reproduced here) in which Se denotes the security degree of the external device; λ1 is the weight, given by the firewall, of the external device's risk in the security degree; λ2 is the weight of the relation between the external device's access address and the firewall's filtering rules in the security assessment; e = 2.718; μ is the firewall's blocking-risk factor for the external device's IP port; Nr is the number of firewall vulnerabilities that affect data processing on the external device's IP port; Ns is the total number of firewall vulnerabilities; one vector represents the external device's IP address and another represents the firewall's filtering rules, with t ranging from 1 to T, where T is the total number of characters in the firewall filtering rules.

The degree to which the server is open to the external device is then computed with a second formula (likewise shown in the original only as an image), in which Kf denotes the server's openness to the external device; e = 2.718; N1 is the number of the external device's business categories that fall within the server's business scope, and N2 the number that fall outside it; βIP indicates whether the external device's IP address lies in the server's illegal IP domain (βIP = 1 if not, 0 if so); and a further indicator records whether the server and the external device share a common security protocol.

If Se ≥ Kf, the external device is considered sufficiently secure and image processing of its video file is allowed; otherwise the device is considered insufficiently secure and the server rejects the external terminal's request.
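Since the formulas for Se and Kf appear in the original only as images, only the final decision rule can be sketched; the sketch below assumes both quantities have already been computed.

```python
# The final gate of the security check: the device's security degree Se
# is compared against the server's openness degree Kf. How Se and Kf are
# computed is not reproduced here (image-only formulas in the original).

def admit(se: float, kf: float) -> bool:
    """Allow image processing iff Se >= Kf, per the embodiment."""
    return se >= kf
```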

Step S120: separate image frames from the video and extract, from those frames, the important image frames that contain target features.

Specifically, extracting the important image frames that contain target features comprises the following sub-steps:

Step 121: from all the image frames separated from the video, find the first important image frame that contains the target feature.

Step 122: perform grayscale processing on the target feature in the first important image frame to determine the grayscale value of the target feature.

Specifically, the RGB value of each pixel is extracted from the target-feature region of the first important image frame, and the grayscale value of each pixel is computed as Gray(i,j) = WR*R(i,j) + WG*G(i,j) + WB*B(i,j), where WR, WG, and WB are the weights of the pixel's R, G, and B values. The grayscale value of the target feature in the first important image frame is then obtained from these per-pixel values (the aggregation formula appears in the original only as an image and is not reproduced here).
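The per-pixel step can be sketched directly from the stated formula. The weights WR/WG/WB are not fixed in this passage, so the common luminance weights 0.299/0.587/0.114 (which do appear later, in the brightness step) are assumed here; the mean-based aggregation is likewise an assumption, since the patent's aggregation formula is only an image.

```python
# Per-pixel grayscale: Gray(i,j) = WR*R + WG*G + WB*B.
# Weights 0.299/0.587/0.114 are assumed from the brightness step.

WR, WG, WB = 0.299, 0.587, 0.114

def gray(r: float, g: float, b: float) -> float:
    """Grayscale value of one pixel."""
    return WR * r + WG * g + WB * b

def region_gray(pixels):
    """Mean grayscale value over a target-feature region; one plausible
    aggregation (the patent's formula is not reproduced in the text)."""
    values = [gray(r, g, b) for (r, g, b) in pixels]
    return sum(values) / len(values)
```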

Step 123: calculate the frame distance between each image frame to be examined and the first important image frame according to the grayscale distribution of the target feature, and determine all important image frames containing the target feature according to that frame distance.

Specifically, after the first important image frame has been determined, the frame distance between each subsequent image frame and the first important image frame is calculated with a formula (shown in the original only as an image) in which N is the number of image frames and the remaining terms are the grayscale values of the (i+1)-th and i-th image frames and of the first important image frame. All important image frames containing the target feature are then determined from the frame distance: the frames are sorted by frame distance, and the frames with the highest frame distances, or those whose frame distance exceeds a preset threshold, are selected as important image frames.
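The selection rule at the end of step 123 can be sketched independently of the (image-only) distance formula: given a frame distance per frame, keep either the top-k frames or those above a threshold. The function name and signature are illustrative assumptions.

```python
# Selection rule of step 123: keep frames with the highest frame
# distances (top-k) or those above a preset threshold. How the
# distances themselves are computed is not reproduced here.

def select_important(distances: dict, k: int = None, threshold: float = None):
    """distances maps frame index -> frame distance; returns sorted indices."""
    if k is not None:
        ranked = sorted(distances, key=distances.get, reverse=True)
        return sorted(ranked[:k])
    return sorted(i for i, d in distances.items() if d > threshold)
```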

Step S130: correct the important image frames, adjust the sharpness of the corrected important image frames, and output high-resolution images.

In this embodiment, correcting the important image frames comprises the following sub-steps:

Step 131: extract the target area from every important image frame, back-project the target area, check for differences between the original image and the back-projected image, and select the important image frames whose differences lie within a predetermined range.

Extracting the target area from every important image frame specifically comprises: obtaining the foreground/background areas and the unknown area of the important image frame according to the compositing relation Ii = αFi + (1-α)Bi, where α is the transparency, Fi the foreground pixel, Bi the background pixel, and i the index of the i-th pixel. So that points in the unknown area are pushed toward the foreground or background as far as possible, each point of the unknown area is taken as a center and the distance between the pixel colors within a given radius and the pixel color of that point is computed (the distance formula appears in the original only as an image); pixels whose distance exceeds a set maximum threshold are assigned to the foreground area, and pixels whose distance is below a set minimum threshold are assigned to the background area, thereby shrinking the unknown area. The value obtained by back-projection is then compared with the value obtained by forward projection; if the two results differ beyond the predetermined range, the image is judged very unclear and is discarded.
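The unknown-region narrowing step can be sketched as a thresholded color-distance test. Since the patent's distance formula is only an image, Euclidean RGB distance to the mean neighborhood color is assumed here, and the threshold values are hypothetical.

```python
# Narrowing the unknown region: reassign an unknown pixel to foreground
# or background by comparing its color distance against two thresholds.
# Euclidean RGB distance to the neighborhood mean is an assumed stand-in
# for the patent's (image-only) distance formula.

import math

T_MAX, T_MIN = 100.0, 20.0  # hypothetical max/min thresholds

def classify_unknown(point_color, neighborhood_colors):
    """Return 'foreground', 'background', or 'unknown' for one pixel."""
    n = len(neighborhood_colors)
    mean = tuple(sum(c[k] for c in neighborhood_colors) / n for k in range(3))
    d = math.dist(point_color, mean)  # Euclidean distance in RGB space
    if d > T_MAX:
        return "foreground"
    if d < T_MIN:
        return "background"
    return "unknown"
```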

Step 132: calculate the sharpness of each important image frame, select the sharpest image as the reference image frame, and perform an initial sharpness adjustment on important image frames whose sharpness is below the preset value.

Specifically, the sharpness of each important image frame is calculated with a formula (shown in the original only as an image) in which x is the number of pixels along the length of the important image frame, y the number of pixels along its width, and z the image size; the sharpest image is taken as the reference image frame.
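Because the patent's sharpness formula survives only as an image, the sketch below substitutes a standard sharpness proxy, the variance of a 4-neighbour Laplacian, to illustrate how the sharpest frame would be picked as the reference frame.

```python
# Reference-frame selection with a substitute sharpness measure
# (variance of the Laplacian); the patent's own formula is not
# reproduced in the text.

def laplacian_variance(img):
    """img: 2D list of gray values; higher variance = sharper image."""
    h, w = len(img), len(img[0])
    vals = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i - 1][j] + img[i + 1][j] + img[i][j - 1]
                   + img[i][j + 1] - 4 * img[i][j])
            vals.append(lap)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def pick_reference(frames):
    """Return the index of the sharpest frame."""
    scores = [laplacian_variance(f) for f in frames]
    return scores.index(max(scores))
```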

If there are important image frames whose sharpness is below the preset value, their sharpness is adjusted, which specifically comprises: locating the blurred area blocks in those frames, reconstructing the blurred area blocks, and encoding the reconstructed image frames. Reconstructing a blurred area block specifically means reconstructing its pixels; the reconstructed pixel-value coordinate is given by a formula (shown in the original only as an image) in which n indexes points on the circle centered at the center of the starting area block with radius equal to the block's diagonal length. Generally n = 4 points uniformly distributed on the circle are selected; di is the distance between the reconstruction coordinate of the starting area block and a pixel coordinate on the circle, and Rj is the pixel value at the selected point's pixel coordinate on the circle. That is, d1 is the distance between the reconstruction coordinate of the starting area block and the pixel coordinate of the left adjacent block, and R1 the pixel value at the pixel coordinate of the right adjacent block; d2 is the distance to the center of the right adjacent block's pixel coordinates, and R2 the pixel value of the left adjacent block; d3 is the distance to the center of the upper adjacent block's pixel coordinates, and R3 the pixel value of the lower adjacent block; d4 is the distance to the center of the lower adjacent block's pixel coordinates, and R4 the pixel value of the upper adjacent block.

Step 133: adjust the pixels, brightness, and feature areas of the other important image frames to match the reference image frame.

The pixel adjustment specifically comprises: determining the pixel value h(k) of the target area of each other important image frame from its back-projected image, using a formula (shown in the original only as an image) in which n is the number of projection lines in the image and qk,i is the i-th projection line passing through pixel k; computing the pixel ratio Z = Pv/h(k) between these target areas and the target area of the reference image frame, where Pv is the reference image frame's pixel value; and scaling the pixels of the other important image frames according to this ratio. The brightness adjustment specifically comprises: computing the brightness of the reference image frame as R*0.299 + G*0.587 + B*0.114 (the weights may be adjusted as needed) and adjusting the brightness of the other key image frames to match it. The feature-area adjustment specifically comprises rotating, proportionally scaling, and otherwise transforming the other key image frames to match the reference image frame.
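The brightness step uses the stated luminance weights directly. The sketch below computes that luminance and rescales another frame toward the reference; the multiplicative rescaling is an illustrative assumption, since the patent only states that other frames are adjusted to the reference brightness.

```python
# Brightness matching: luminance = R*0.299 + G*0.587 + B*0.114 per the
# text; the multiplicative scaling toward the reference is assumed.

def brightness(pixels):
    """Mean luminance of a list of (R, G, B) pixels."""
    lum = [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels]
    return sum(lum) / len(lum)

def match_brightness(frame, reference):
    """Scale frame's pixels so its mean luminance matches reference's."""
    scale = brightness(reference) / brightness(frame)
    return [tuple(min(255.0, c * scale) for c in px) for px in frame]
```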

步骤S140、基于深度学习构建目标特征规律模型,将得到的高分辨率图像输入目标特征规律模型确定目标特征规律;Step S140, constructing a target feature law model based on deep learning, and inputting the obtained high-resolution image into the target feature law model to determine the target feature law;

具体地,基于深度学习构建目标特征规律模型,将得到的高分辨率图像输入目标特征规律模型确定目标特征规律,具体包括如下子步骤:Specifically, a target feature law model is constructed based on deep learning, and the obtained high-resolution image is input into the target feature law model to determine the target feature law, which specifically includes the following sub-steps:

步骤141、从得到的高分辨率图像中提取目标特征,形成目标特征向量;Step 141, extracting target features from the obtained high-resolution image to form a target feature vector;

Step 142: Input the target feature vector set into the target feature law model, train multiple damping trend prediction models to obtain different sub-trend prediction models, use each sub-trend prediction model to perform trend prediction on the target feature vector set, and estimate the set of weights of each sub-prediction model from the trend prediction results;

Specifically, the target pedestrian feature vector set is input into the target pedestrian feature trend model, multiple damping trend prediction models are trained on it, and each damping trend prediction model is taken as a sub-prediction model

Figure BDA0003763542370000084

The sub-prediction models are then used to predict on the target pedestrian feature vector set; from the prediction results, the formula

Figure BDA0003763542370000081

is used to estimate the set of sub-prediction-model weights {λ1, λ2, λ3, ..., λT}, where xi is the target shape feature in the target feature vector, yi is the target spatial-relationship feature of the target feature vector, μ1 and μ2 are the influence weights of the target shape feature and the target spatial-relationship feature, n is the total number of target feature vectors, and T is the number of sub-prediction models.
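To make the role of the weights {λ1, ..., λT} concrete, here is a minimal sketch (not the patented method) of combining T sub-prediction models by a weighted sum. The linear combination is an assumption on my part, since the patent's estimation formula and final model are given only as formula images; the toy sub-models are placeholders.

```python
# Combine T sub-prediction models f_1..f_T with weights λ_1..λ_T.
# The weighted-sum form is an assumed reading of the figure-only formula.
def ensemble_predict(sub_models, weights, x):
    return sum(w * f(x) for f, w in zip(sub_models, weights))

subs = [lambda x: x + 1, lambda x: 2 * x, lambda x: x ** 2]  # toy sub-models
lam = [0.5, 0.3, 0.2]                                        # toy weights
print(ensemble_predict(subs, lam, 2.0))  # 0.5*3 + 0.3*4 + 0.2*4 = 3.5
```

Step 143 below would then search for the λ values that optimize this combined prediction.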

步骤143、寻找权重的集合中的每个权重对应的最优值,通过各个子预测模型和其对应的权重的最优值的组合确定目标特征规律;Step 143: Find the optimal value corresponding to each weight in the set of weights, and determine the target characteristic rule by combining each sub-prediction model and the optimal value of its corresponding weight;

Specifically, in the weight set {λ1, λ2, λ3, ..., λT} of each sub-prediction model

Figure BDA0003763542370000085

the optimal value corresponding to each weight is computed; each sub-prediction model

Figure BDA0003763542370000082

is then combined with the optimal values {λ1, λ2, λ3, ..., λT} of its corresponding weights to determine the target feature law

Figure BDA0003763542370000083

实施例二Embodiment 2

如图2所示,本发明实施例提供一种基于深度学习的图像处理系统2,包括:As shown in FIG. 2, an embodiment of the present invention provides an image processing system 2 based on deep learning, including:

安全检查模块21,用于接收外部终端传输的视频文件,对外部终端进行安全检查;The security check module 21 is used for receiving the video file transmitted by the external terminal, and performing security check on the external terminal;

The security check module 21 is configured to extract external terminal information from the video file and compute the security degree of the external terminal; if the security degree of the external device is higher than the server's security degree, image processing of the external device's video file is permitted, otherwise the external terminal's request is rejected. Specifically, the security check module 21 extracts the external terminal information from the external terminal's video file, including the external terminal's device information, packet send/receive times, IP address and port; it then determines whether the external terminal's IP address and port are allowed direct access by the server. If so, image processing of the external device's video file is permitted; otherwise the module computes the security degree of the external device and the server's degree of access openness toward the external device, and permits image processing of the external device's video file only if the device's security degree exceeds the server's openness degree, rejecting the external terminal's request otherwise.
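The decision flow of the security check module can be sketched as follows. This is an illustration only: the whitelist and the two scores Se and Kf are placeholders, since the patent's scoring formulas are reproduced only as formula images.

```python
# Hedged sketch of module 21's flow: direct-access whitelist first, then
# compare device security degree Se against server openness degree Kf.
WHITELIST = {("10.0.0.5", 8080)}  # hypothetical allowed (IP, port) pairs

def allow_processing(ip, port, se, kf):
    if (ip, port) in WHITELIST:
        return True   # IP/port allowed direct access by the server
    return se >= kf   # otherwise require Se >= Kf

print(allow_processing("10.0.0.5", 8080, 0.0, 1.0))  # True (whitelisted)
print(allow_processing("1.2.3.4", 80, 0.7, 0.6))     # True (Se >= Kf)
print(allow_processing("1.2.3.4", 80, 0.4, 0.6))     # False (rejected)
```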

Specifically, the formula

Figure BDA0003763542370000091

is used to compute the security degree of the external device, where Se is the security degree of the external device; λ1 is the weight, given by the firewall, of the external device's risk on the security degree; λ2 is the weight of the association between the external device's access address and the firewall filtering rules on the security assessment value; e = 2.718; μ is the firewall's blocking-risk factor for the external device's IP port; Nr is the number of firewall vulnerabilities affecting data processing on the external device's IP port; Ns is the total number of firewall vulnerabilities;

Figure BDA0003763542370000092

is the vector representation of the external device's IP address; and

Figure BDA0003763542370000093

is the vector representation of the firewall filtering rules, with t ranging from 1 to T, where T is the total number of characters in the firewall filtering rules.

The formula

Figure BDA0003763542370000094

is then used to compute the server's degree of access openness toward the external device, where Kf is that openness degree; e = 2.718; N1 is the total number of the external device's business categories that fall within the server's business scope; N2 is the total number that fall outside it; βIP indicates whether the external device's IP address lies in the server's illegal IP domain, with βIP = 1 if not and βIP = 0 if so; and

Figure BDA0003763542370000096

indicates whether the server and the external device share a common security protocol, being

Figure BDA0003763542370000097

if so and

Figure BDA0003763542370000098

otherwise.

If Se ≥ Kf, the external device is considered sufficiently secure and image processing of its video file is permitted; otherwise the device's security degree is deemed too low and the server rejects the external terminal's request.

重要图像帧确定模块22,用于从视频文件中分离图像帧,并从图像帧中提取含有目标特征的重要图像帧;The important image frame determination module 22 is used to separate the image frame from the video file, and extract the important image frame containing the target feature from the image frame;

The important-image-frame determination module 22 is specifically configured to: find, among all image frames separated from the video, the first important image frame containing the target feature; grayscale the target feature in the first important frame to determine the target feature's grayscale value; compute the frame distance between each image frame to be extracted and the first important frame from the grayscale distribution of the target feature; and determine from the frame distances all important image frames containing the target feature.

Specifically, the RGB value of each pixel is extracted from the target-feature image region of the first important frame, and the grayscale value of each pixel of the target feature is computed as Gray(i,j) = WR*R(i,j) + WG*G(i,j) + WB*B(i,j), where WR, WG and WB are the weights of the pixel's R, G and B values; the grayscale value of the target feature of the first important frame is then

Figure BDA0003763542370000095

After the first important frame is determined, the frame distance between each subsequent frame and the first important frame is computed as

Figure BDA0003763542370000101

where N is the number of image frames,

Figure BDA0003763542370000102

and

Figure BDA0003763542370000103

denote the grayscale values of the (i+1)-th and i-th image frames, and

Figure BDA0003763542370000104

denotes the grayscale value of the first important image frame. All important frames containing the target feature are then determined from the frame distances: specifically, all frames are sorted by frame distance, and the frames with the highest frame distances, or those whose frame distance exceeds a preset threshold, are selected as important image frames.
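The grayscale and frame-distance selection above can be sketched as follows. This is an illustration under stated assumptions: the concrete frame distance used here (absolute difference of mean grayscale values) is my own stand-in, since the patent's distance formula is reproduced only as a formula image.

```python
import numpy as np

# Per-pixel weighted grayscale, then select frames whose "distance" to the
# first important frame exceeds a threshold.
WR, WG, WB = 0.299, 0.587, 0.114  # RGB weights from the description

def gray_value(rgb):
    return WR * rgb[..., 0] + WG * rgb[..., 1] + WB * rgb[..., 2]

def frame_distance(frame, first_frame):
    # Assumed distance: difference of mean grayscale values.
    return abs(gray_value(frame).mean() - gray_value(first_frame).mean())

def select_important(frames, first, threshold):
    return [f for f in frames if frame_distance(f, first) > threshold]

first = np.zeros((2, 2, 3))          # all-black first important frame
bright = np.full((2, 2, 3), 255.0)   # all-white later frame
print(len(select_important([first, bright], first, 10.0)))  # -> 1
```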

高分辨率图像输出模块23,用于对重要图像帧进行校正,对校正后的重要图像帧进行清晰度调整,输出高分辨率图像;The high-resolution image output module 23 is used to correct important image frames, adjust the sharpness of the corrected important image frames, and output high-resolution images;

The high-resolution image output module 23 is specifically configured to: extract the target area from all important image frames, back-project the target area, compare the original and back-projected images for differences, and select the important frames whose differences fall within a predetermined range; compute the sharpness of each important frame, select the sharpest image as the reference image frame, and perform an initial sharpness adjustment on important frames whose sharpness is below a preset value; and adjust the pixels, brightness and feature areas of the other important frames according to the reference image frame.

Specifically, extracting the target area from all important image frames includes: obtaining the foreground/background regions and the unknown region of each important frame as Ii = αFi + (1-α)Bi, where α is the transparency, Fi is a foreground pixel, Bi is a background pixel and i indexes the pixels. To push points of the unknown region toward the foreground or background regions, each point of the unknown region is taken as a center, and the distance between the pixel colors within its radius neighborhood and the pixel color of that point is obtained as

Figure BDA0003763542370000105

Pixels whose distance exceeds a set maximum threshold are assigned to the foreground region, and pixels below a set minimum threshold to the background region, shrinking the unknown region. The value obtained by back-projection is then compared with that obtained by forward projection; if the two differ beyond a predetermined range, the image is judged too unclear and is discarded.
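The trimap-shrinking step above amounts to a two-threshold classification of each unknown pixel. A minimal sketch follows; the Euclidean color distance is an assumption, since the patent's distance formula appears only as an image, and the neighborhood handling is reduced to a single distance per pixel for clarity.

```python
import math

def color_distance(c1, c2):
    # Assumed Euclidean distance between two RGB colors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def classify_unknown(dist, t_max, t_min):
    # Two-threshold rule from the description: far -> foreground,
    # near -> background, otherwise the pixel stays unknown.
    if dist > t_max:
        return "foreground"
    if dist < t_min:
        return "background"
    return "unknown"

d = color_distance((255, 0, 0), (0, 0, 0))
print(classify_unknown(d, t_max=200.0, t_min=20.0))    # -> foreground
print(classify_unknown(5.0, t_max=200.0, t_min=20.0))  # -> background
```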

The formula

Figure BDA0003763542370000106

is used to compute the sharpness of each important image frame, where x is the frame length in pixels, y is the frame width in pixels and z is the image size; the image with the highest sharpness is taken as the reference image frame.
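The patent's sharpness formula is reproduced only as an image, so as an illustrative stand-in consistent with its variables (x: length in pixels, y: width in pixels, z: image size), the sketch below scores frames by pixel count per unit size and picks the sharpest as the reference frame. The x*y/z form is an assumption, not the patented formula.

```python
def sharpness(x, y, z):
    # Assumed proxy for the figure-only formula: pixels per unit size.
    return (x * y) / z

def pick_reference(frames):
    # frames: list of (name, x, y, z); return the name of the sharpest.
    return max(frames, key=lambda f: sharpness(f[1], f[2], f[3]))[0]

frames = [("a", 640, 480, 1000.0), ("b", 1920, 1080, 1000.0)]
print(pick_reference(frames))  # -> b
```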

If any important image frames have sharpness below the preset value, their sharpness is adjusted, specifically: blurred area blocks are located in those frames, the blurred blocks are reconstructed, and the reconstructed image frames are re-encoded. Reconstructing a blurred block specifically means reconstructing its pixels, with reconstructed pixel-value coordinates

Figure BDA0003763542370000107

where n is the number of points on the circle centered at the starting block with radius equal to the block's diagonal length (typically n = 4, taken as four evenly spaced points on the circle), di is the distance from the starting block's reconstruction coordinates to the pixel coordinates of the i-th circle point, and Rj is the pixel value at the pixel coordinates of the selected circle point; that is, d1 is the distance from the starting block's reconstruction coordinates to the pixel coordinates of the left neighboring block and R1 is the pixel value at the pixel coordinates of the right neighboring block; d2 is the distance to the center of the right neighboring block's pixel coordinates and R2 the pixel value of the left neighboring block; d3 is the distance to the center of the upper neighboring block's pixel coordinates and R3 the pixel value of the lower neighboring block; d4 is the distance to the center of the lower neighboring block's pixel coordinates and R4 the pixel value of the upper neighboring block.
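The reconstruction step combines the four distances d1..d4 and pixel values R1..R4. Since the exact combination appears only as a formula image, the sketch below uses inverse-distance weighting as one plausible reading; this choice is an assumption, not the patented formula.

```python
# Reconstruct a blurred block's pixel value from n circle points
# (distances d_i, pixel values R_i) via inverse-distance weighting.
def reconstruct_pixel(distances, values):
    weights = [1.0 / d for d in distances]  # closer points weigh more
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, values)) / total

# Equidistant neighbours reduce to a plain average of the four values.
print(reconstruct_pixel([1.0, 1.0, 1.0, 1.0], [10.0, 20.0, 30.0, 40.0]))  # -> 25.0
```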

The pixel adjustment specifically includes: determining the pixel value h(k) of the target area of each other important image frame from its back-projection image

Figure BDA0003763542370000111

where n is the number of projection lines in the image and q k,i is the i-th projection line passing through pixel k; computing the pixel ratio Z = P V /h(k) between these target areas and the target area of the reference image frame, where P V is the pixel value of the reference image frame; and scaling the pixels of the other important image frames according to this ratio. The brightness adjustment specifically includes: computing the brightness of the reference image frame as R*0.299 + G*0.587 + B*0.114, where the weights may be tuned as needed, and adjusting the brightness of the other important image frames to match the reference frame. The feature-area adjustment specifically includes: rotating and proportionally scaling the other important image frames to align with the reference image frame.

深度学习模块24,用于基于深度学习构建目标特征规律模型,将得到的高分辨率图像输入目标特征规律模型确定目标特征规律。The deep learning module 24 is configured to construct a target feature law model based on deep learning, and input the obtained high-resolution image into the target feature law model to determine the target feature law.

The deep learning module 24 is specifically configured to: extract target features from the obtained high-resolution images to form target feature vectors; input the target feature vector set into the target feature law model, train multiple damping trend prediction models to obtain different sub-trend prediction models, use each sub-trend prediction model to perform trend prediction on the target feature vector set, and estimate the set of weights of each sub-prediction model from the trend prediction results; and find the optimal value corresponding to each weight in the set of weights, determining the target feature law from the combination of each sub-prediction model and the optimal values of its corresponding weights.

Specifically, the target pedestrian feature vector set is input into the target pedestrian feature trend model, multiple damping trend prediction models are trained on it, and each damping trend prediction model is taken as a sub-prediction model

Figure BDA0003763542370000113

The sub-prediction models are then used to predict on the target pedestrian feature vector set; from the prediction results, the formula

Figure BDA0003763542370000112

is used to estimate the set of sub-prediction-model weights {λ1, λ2, λ3, ..., λT}, where xi is the target shape feature in the target feature vector, yi is the target spatial-relationship feature of the target feature vector, μ1 and μ2 are the influence weights of the target shape feature and the target spatial-relationship feature, n is the total number of target feature vectors, and T is the number of sub-prediction models.

Specifically, in the weight set {λ1, λ2, λ3, ..., λT} of each sub-prediction model

Figure BDA0003763542370000123

the optimal value corresponding to each weight is computed; each sub-prediction model

Figure BDA0003763542370000121

is then combined with the optimal values {λ1, λ2, λ3, ..., λT} of its corresponding weights to determine the target feature law

Figure BDA0003763542370000122

以上所述的具体实施方式,对本发明的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明的具体实施方式而已,并不用于限定本发明的保护范围,凡在本发明的技术方案的基础之上,所做的任何修改、等同替换、改进等,均应包括在本发明的保护范围之内。The specific embodiments described above further describe the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above descriptions are only specific embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made on the basis of the technical solution of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A deep-learning-based image processing method, characterized by comprising: receiving a video file transmitted by an external terminal and performing a security check on the external terminal; separating image frames from the video file and extracting from them the important image frames containing the target feature; correcting the important image frames, adjusting the sharpness of the corrected important frames, and outputting a high-resolution image; and constructing a target feature law model based on deep learning and inputting the obtained high-resolution image into the target feature law model to determine the target feature law.

2. The deep-learning-based image processing method of claim 1, characterized in that external terminal information is extracted from the video file and the security degree of the external terminal is computed; if the security degree of the external device is higher than the server's security degree, image processing of the external device's video file is permitted, otherwise the external terminal's request is rejected.

3. The deep-learning-based image processing method of claim 1, characterized in that extracting the important image frames containing the target feature comprises the sub-steps of: finding, among all image frames separated from the video, the first important image frame containing the target feature; grayscaling the target feature in the first important frame to determine the target feature's grayscale value; and computing the frame distance between each frame to be extracted and the first important frame from the grayscale distribution of the target feature, and determining from the frame distances all important frames containing the target feature.

4. The deep-learning-based image processing method of claim 1, characterized in that correcting the important image frames comprises the sub-steps of: extracting the target area from all important frames, back-projecting the target area, comparing the original and back-projected images for differences, and selecting the important frames whose differences fall within a predetermined range; computing the sharpness of each important frame, selecting the sharpest image as the reference image frame, and performing an initial sharpness adjustment on important frames whose sharpness is below a preset value; and adjusting the pixels, brightness and feature areas of the other important frames according to the reference frame.

5. The deep-learning-based image processing method of claim 4, characterized in that extracting the target area from an important image frame specifically comprises: obtaining the foreground/background regions and the unknown region of the frame; and, to push points of the unknown region toward the foreground or background regions, taking each point of the unknown region as a center, obtaining the distance between the pixel colors within its radius neighborhood and the pixel color of that point, assigning pixels whose distance exceeds a set maximum threshold to the foreground region and pixels below a set minimum threshold to the background region, thereby shrinking the unknown region.

6. A deep-learning-based image processing system, characterized by comprising: a security check module for receiving a video file transmitted by an external terminal and performing a security check on the external terminal; an important-image-frame determination module for separating image frames from the video file and extracting from them the important frames containing the target feature; a high-resolution image output module for correcting the important frames, adjusting the sharpness of the corrected frames, and outputting a high-resolution image; and a deep learning module for constructing a target feature law model based on deep learning and inputting the obtained high-resolution image into the target feature law model to determine the target feature law.

7. The deep-learning-based image processing system of claim 6, characterized in that the security check module is specifically configured to extract external terminal information from the video file and compute the security degree of the external terminal; if the security degree of the external device is higher than the server's security degree, image processing of the external device's video file is permitted, otherwise the external terminal's request is rejected.

8. The deep-learning-based image processing system of claim 6, characterized in that the important-image-frame determination module is specifically configured to find, among all image frames separated from the video, the first important frame containing the target feature; grayscale the target feature in that frame to determine the target feature's grayscale value; and compute the frame distance between each frame to be extracted and the first important frame from the grayscale distribution of the target feature, determining from the frame distances all important frames containing the target feature.

9. The deep-learning-based image processing system of claim 6, characterized in that the high-resolution image output module is specifically configured to extract the target area from all important frames, back-project the target area, compare the original and back-projected images for differences, and select the important frames whose differences fall within a predetermined range; compute the sharpness of each important frame, select the sharpest image as the reference image frame, and perform an initial sharpness adjustment on important frames whose sharpness is below a preset value; and adjust the pixels, brightness and feature areas of the other important frames according to the reference frame.

10. The deep-learning-based image processing system of claim 9, characterized in that, in the high-resolution image output module, extracting the target area from an important image frame specifically comprises: obtaining the foreground/background regions and the unknown region of the frame; and, to push points of the unknown region toward the foreground or background regions, taking each point of the unknown region as a center, obtaining the distance between the pixel colors within its radius neighborhood and the pixel color of that point, assigning pixels whose distance exceeds a set maximum threshold to the foreground region and pixels below a set minimum threshold to the background region, thereby shrinking the unknown region.
CN202210879112.3A 2022-07-25 2022-07-25 An image processing method and system based on deep learning Pending CN115063322A (en)


Publications (1)

Publication Number Publication Date
CN115063322A true CN115063322A (en) 2022-09-16

Family

ID=83206694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210879112.3A Pending CN115063322A (en) 2022-07-25 2022-07-25 An image processing method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN115063322A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118429854A (en) * 2024-04-09 2024-08-02 博林中凯(北京)科技有限公司 A method and system for visual image processing

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030210886A1 (en) * 2002-05-07 2003-11-13 Ying Li Scalable video summarization and navigation system and method
CN108550163A (en) * 2018-04-19 2018-09-18 湖南理工学院 Moving target detecting method in a kind of complex background scene
CN110276379A (en) * 2019-05-21 2019-09-24 方佳欣 A kind of the condition of a disaster information rapid extracting method based on video image analysis
CN112017120A (en) * 2020-09-04 2020-12-01 北京伟杰东博信息科技有限公司 Image synthesis method and device
CN112270247A (en) * 2020-10-23 2021-01-26 杭州卷积云科技有限公司 Key frame extraction method based on inter-frame difference and color histogram difference
CN113516739A (en) * 2020-04-09 2021-10-19 上海米哈游天命科技有限公司 Animation processing method and device, storage medium and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118429854A (en) * 2024-04-09 2024-08-02 博林中凯(北京)科技有限公司 A method and system for visual image processing
CN118429854B (en) * 2024-04-09 2024-12-20 博林中凯(北京)科技有限公司 A method and system for visual image processing

Similar Documents

Publication Publication Date Title
US11790499B2 (en) Certificate image extraction method and terminal device
Chen et al. Hazy image restoration by bi-histogram modification
CN108876756B (en) Image similarity measurement method and device
CN110706196B (en) Clustering perception-based no-reference tone mapping image quality evaluation algorithm
CN111179202B (en) Single-image defogging enhancement method and system based on a generative adversarial network
CN111259792B (en) Face liveness detection method based on DWT-LBP-DCT features
CN115131714A (en) Intelligent video image detection and analysis method and system
CN107358585A (en) Foggy image enhancement method based on fractional-order differential and the dark channel prior
CN112508800A (en) Attention-mechanism-based highlight removal method for metal part surfaces from a single grayscale image
CN116681627B (en) Cross-scale fusion self-adaptive underwater image generation countermeasure enhancement method
CN109003287A (en) Image segmentation method based on an improved adaptive genetic algorithm (IAGA)
CN111445487A (en) Image segmentation method and device, computer equipment and storage medium
CN109933639A (en) A layer-stack-oriented adaptive fusion method for multispectral and panchromatic images
CN117151990B (en) Image defogging method based on self-attention encoding and decoding
CN109949200B (en) Filter subset selection and CNN-based steganalysis framework construction method
CN114724218A (en) Video detection method, device, equipment and medium
CN112116535A (en) Image completion method based on parallel autoencoders
Yousaf et al. Single Image Dehazing and Edge Preservation Based on the Dark Channel Probability‐Weighted Moments
CN108961209B (en) Pedestrian image quality evaluation method, electronic device and computer readable medium
CN115063322A (en) An image processing method and system based on deep learning
CN110189262B (en) Image defogging method based on neural network and histogram matching
CN109784357B (en) Image rephotography detection method based on statistical model
Rani et al. for Underwater Image Enhancement
CN115620117B (en) Face information encryption method and system for network access authority authentication
Wang et al. Fast visibility restoration using a single degradation image in scattering media

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination