
CN105005977B - Single-video frame rate restoration method based on pixel streams and temporal prior information - Google Patents

Single-video frame rate restoration method based on pixel streams and temporal prior information

Info

Publication number
CN105005977B
CN105005977B (application CN201510414187.4A)
Authority
CN
China
Prior art keywords
pixel stream
restored
video
pixel
stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510414187.4A
Other languages
Chinese (zh)
Other versions
CN105005977A (en)
Inventor
徐枫
蒋德富
王慧斌
石爱业
张振
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hohai University HHU
Priority to CN201510414187.4A
Publication of CN105005977A
Application granted
Publication of CN105005977B
Legal status: Expired - Fee Related
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a single-video frame rate restoration method based on pixel streams and temporal prior information. First, the single video to be restored is acquired, the observed pixel streams are constructed, and the single video is represented as a matrix of observed pixel streams. Then the degradation model of the observed pixel stream and the probability estimate of the original pixel stream are established, from which the restoration formula of the original pixel stream containing temporal prior information is derived. The observed pixel streams are then restored one by one, with every temporal prior model used in the restoration determined in a data-driven way. Finally, the restored pixel streams are combined in matrix form into the restored video, which serves as the final restored high-frame-rate video. The restoration method uses a single video, making acquisition convenient and the restoration pipeline simple. Frame rate restoration is based on a probabilistic framework over pixel streams, introduces temporal prior information, and determines the temporal prior model in a data-driven way, which not only improves video fidelity but also effectively removes the smearing of video frames.

Description

Single-Video Frame Rate Restoration Method Based on Pixel Streams and Temporal Prior Information

Technical Field

The invention relates to a video restoration method, in particular to a single-video frame rate restoration method based on pixel streams and temporal prior information, and belongs to the technical field of computer image and video processing.

Background Art

Most traditional video restoration methods address the problem of low spatial resolution: they improve the spatial resolution of the video and recover its spatial detail either by restoring a single video frame by frame or by reconstructing complementary spatial information from multiple videos. Building on years of extensive and in-depth research, the various video restoration methods proposed so far can alleviate low spatial resolution to some extent or meet basic application requirements.

However, even when such targeted methods recover spatial detail, the restored video still suffers from low temporal resolution and missing frame information, which can manifest as flicker, stutter, or judder.

Therefore, unlike common video restoration techniques, video restoration should concern itself not only with improving spatial resolution and recovering spatial detail, but also with temporal resolution (i.e., frame rate) and temporal detail, so as to further improve video quality and visual experience. Some researchers at home and abroad have noticed this problem and proposed frame rate restoration methods for video.

Existing video frame rate restoration methods generally follow one of two paths.

The first collects multiple videos of the same scene over the same period and restores the frame rate by fusing the redundant/complementary frame information of the videos. However, this path is constrained by acquisition conditions, such as whether enough devices of a uniform model are available, and it involves synchronization and temporal registration of multiple videos, making it complex to implement.

The second, simpler path needs only one video and restores the frame rate by inter-frame interpolation. However, the interpolation function (e.g., linear, spline, or quadratic) is chosen rather arbitrarily, so the fidelity of the interpolated frames is limited. Even interpolation under a minimum mean-square-error criterion cannot remove the smearing of video frames: this artifact, caused mainly by the long exposure time of the capture device, makes a fast-moving object appear blurred along its motion trajectory.

Summary of the Invention

The main purpose of the present invention is to overcome the deficiencies of the prior art and provide a single-video frame rate restoration method based on pixel streams and temporal prior information, which is especially suitable for restoring video of fast-moving objects.

The technical problem to be solved by the present invention is to provide a single-video frame rate restoration method based on pixel streams and temporal prior information that makes video acquisition convenient, keeps the restoration pipeline simple, yields reliable results, and is highly practical. It avoids the complex synchronization and temporal-registration procedures required by multi-video restoration, substantially improves video fidelity, and effectively removes the smearing caused by long exposure times, giving it considerable industrial value.

To achieve the above object, the technical solution adopted by the present invention is as follows:

A single-video frame rate restoration method based on pixel streams and temporal prior information, characterized by comprising the following steps:

Step (1): acquire the video to be restored. A single video I = {I(i) | i ∈ N} is obtained by video capture as the video to be restored, where I(i) is a frame of the video and i numbers the frames in chronological order;

Step (2): design the construction of the pixel streams to be restored. Ordered by frame, the pixels I_mn(i) located at the same coordinate (m, n) in every frame of the single video are concatenated to form the observed pixel stream I_mn = {I_mn(i) | i ∈ N}, which serves as the pixel stream to be restored;

Step (3): represent the single video as a matrix of observed pixel streams. Following the construction of step (2), the observed pixel streams are built one by one in the coordinate order of each frame and combined, so that the single video is represented in matrix form as I = [I_mn], each element of the matrix being one constructed observed pixel stream;

Step (4): establish the degradation model of the observed pixel stream: I_mn = DBH_mn + E, where D is the temporal downsampling matrix, B is the temporal blur matrix that models the exposure time, H_mn is the original pixel stream, and E is an additive Gaussian noise vector;

Step (5): derive the probability estimate of the original pixel stream H_mn:

According to Bayes' rule, the probability estimate of the original pixel stream H_mn is

$$\hat{H}_{mn} = \arg\max_{H_{mn}} P(H_{mn} \mid I_{mn}) = \arg\max_{H_{mn}} \big[ P(I_{mn} \mid H_{mn})\, P(H_{mn}) \big];$$

Step (6): derive the restoration formula of the original pixel stream H_mn:

From the degradation model of the observed pixel stream established in step (4) and the probability estimate of H_mn obtained in step (5), logarithmic calculation yields the restoration formula

$$\hat{H}_{mn} = \arg\max_{H_{mn}} \Big( \log P(H_{mn}) - \alpha \,\lVert I_{mn} - DBH_{mn} \rVert_2^2 \Big),$$

where Ĥ_mn is the restored pixel stream, log P(H_mn) is the temporal prior information term of the original pixel stream, and α is an optimization parameter;

Step (7): restore the observed pixel streams. Using the restoration formula of H_mn obtained in step (6), the single video represented in step (3) as a matrix of observed pixel streams is restored stream by stream in subscript order; each I_mn in the matrix is restored by the following steps to obtain the restored pixel stream Ĥ_mn:

Step (7-1): determine the temporal prior model P(H_mn) of the observed pixel stream and judge whether it is Gaussian or Laplacian:

From the observed pixel stream I_mn, a data-driven procedure determines whether the temporal prior model P(H_mn) of the observed pixel stream is the Gaussian model P_G(ΓH_mn) or the Laplacian model P_L(ΓH_mn), where Γ denotes a high-pass operator on the signal;

Step (7-2): derive the partial-derivative equation from the restoration formula of the original pixel stream H_mn:

After the temporal prior model P(H_mn) is determined, based on the restoration formula of H_mn obtained in step (6),

$$\hat{H}_{mn} = \arg\max_{H_{mn}} \Big( \log P(H_{mn}) - \alpha \,\lVert I_{mn} - DBH_{mn} \rVert_2^2 \Big),$$

the partial-derivative equation is derived as

$$\frac{\partial}{\partial H_{mn}} \big[ \log P(H_{mn}) \big] + \alpha B^{T} D^{T} \big( I_{mn} - DBH_{mn} \big) = 0;$$

Step (7-3): linearly interpolate the observed pixel stream I_mn to obtain the pixel stream Ĥ⁰_mn as the initial value of the iteration;

Step (7-4): iteratively solve the partial-derivative equation obtained in step (7-2) by the conjugate gradient method to obtain the restored pixel stream Ĥ_mn;

Step (8): combine to obtain the restored video. The restored pixel streams Ĥ_mn restored one by one in step (7) are combined in matrix form into the restored video H = [Ĥ_mn], which is output as the final high-frame-rate video.

In a further aspect of the invention, the video capture in step (1) may use a camera to film a moving scene to obtain the single video.

In a further aspect, the optimization parameter α in step (6) is set by trial and error.

In a further aspect, the determination in step (7-1) of the temporal prior model P(H_mn) of the observed pixel stream, and the decision between the Gaussian and Laplacian types, is made by a data-driven method comprising the following steps:

Step (7-1-1): linearly interpolate the observed pixel stream I_mn to obtain the pixel stream Ĥ⁰_mn;

Step (7-1-2): for the high-pass version ΓĤ⁰_mn of the pixel stream Ĥ⁰_mn, establish the Gaussian prior model

$$P_G(\Gamma\hat{H}_{mn}^{0}) = (2\pi\sigma_G^2)^{-K/2} \exp\left\{ -\frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_2^2}{2\sigma_G^2} \right\}$$

and the Laplacian prior model

$$P_L(\Gamma\hat{H}_{mn}^{0}) = (2\sigma_L)^{-K} \exp\left\{ -\frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_1}{\sigma_L} \right\},$$

where σ_G and σ_L are the standard deviations of the Gaussian and Laplacian prior models respectively, and K is the dimension of ΓĤ⁰_mn;

Step (7-1-3): with ΓĤ⁰_mn known, use the Gaussian and Laplacian prior models established in step (7-1-2) and the maximum-likelihood principle to estimate the standard deviations

$$\hat{\sigma}_G = \sqrt{\frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_2^2}{K}}, \qquad \hat{\sigma}_L = \frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_1}{K};$$

Step (7-1-4): evaluate P_G(ΓĤ⁰_mn) with σ̂_G and P_L(ΓĤ⁰_mn) with σ̂_L and compare the two values; if the former is greater, the temporal prior model of the observed pixel stream is judged to be Gaussian; otherwise it is judged to be Laplacian.

Compared with the prior art, the present invention has the following beneficial effects:

Restoring a single captured video through a pixel-stream-based method not only removes the constraints of multi-video acquisition, such as whether enough devices of a uniform model are available, but also avoids the complex procedures of synchronized multi-video capture and temporal registration, keeping the path concise and the restoration pipeline simple. Frame rate restoration within a probabilistic framework over the original pixel streams avoids the arbitrariness of an assumed interpolation function and substantially improves video fidelity. At the same time, temporal prior information is introduced into this probabilistic framework and the temporal prior model is determined in a data-driven way, which improves the credibility of the prior model and effectively removes the smearing caused by long exposure times.

The above is only an overview of the technical solution of the present invention. To make the technical means of the present invention clearer, the invention is further described below with reference to the accompanying drawings.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method of the present invention;

Fig. 2 is a schematic diagram of the construction of the pixel streams to be restored;

Fig. 3 is a schematic diagram of combining the constructed observed pixel streams in matrix form into the single video;

Fig. 4 is a schematic side view of the single video represented as a matrix of observed pixel streams;

Fig. 5 shows the relationship between the original pixel stream and the observed pixel stream;

Fig. 6 is a flowchart of restoring an observed pixel stream to the original pixel stream;

Fig. 7 is a flowchart of the data-driven determination of the temporal prior model of an observed pixel stream.

Detailed Description

The present invention is further described below with reference to the accompanying drawings.

As shown in Fig. 1, the present invention provides a single-video frame rate restoration method based on pixel streams and temporal prior information. First, the single video to be restored is acquired, the observed pixel streams are constructed as the pixel streams to be restored, and the single video is represented as a matrix of observed pixel streams. Then the degradation model of the observed pixel stream and the probability estimate of the original pixel stream are established, from which the restoration formula of the original pixel stream containing temporal prior information is derived. The observed pixel streams are then restored one by one, with every temporal prior model used in the restoration determined in a data-driven way. Finally, the restored pixel streams are combined in matrix form into the restored video, which is output as the final high-frame-rate video. The specific steps are as follows:

Step (1): acquire the video to be restored. A moving scene is filmed with a camera to obtain a single video I = {I(i) | i ∈ N} as the video to be restored, where I(i) is a frame of the video and i numbers the frames in chronological order.

Step (2): design the construction of the pixel streams to be restored. Ordered by frame, the pixels I_mn(i) located at the same coordinate (m, n) in every frame of the single video are concatenated to form the observed pixel stream I_mn = {I_mn(i) | i ∈ N} as the pixel stream to be restored, as shown in Fig. 2.

Step (3): represent the single video as a matrix of observed pixel streams. Following the construction of step (2), the observed pixel streams are built one by one in the coordinate order of each frame and combined, as shown in Fig. 3, so that the single video is represented in matrix form as I = [I_mn], each element of the matrix being one constructed observed pixel stream. Fig. 4 is a schematic side view of the single video represented in this matrix form.
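The rearrangement of steps (2)–(3) can be sketched as follows. This is a minimal NumPy illustration; the array shapes and function names are assumptions for demonstration, not part of the patent:

```python
import numpy as np

def video_to_pixel_streams(video):
    """Rearrange a video of shape (num_frames, M, N) into an M x N matrix of
    observed pixel streams; streams[m, n] is the time series I_mn."""
    return np.transpose(video, (1, 2, 0))

def pixel_streams_to_video(streams):
    """Inverse: combine the (M, N, num_frames) stream matrix back into a
    video of shape (num_frames, M, N), as in step (8)."""
    return np.transpose(streams, (2, 0, 1))

video = np.random.rand(8, 4, 5)          # 8 frames, each 4 x 5 pixels
streams = video_to_pixel_streams(video)  # streams[m, n] == I_mn
```

The rearrangement is lossless, so combining the streams back in matrix form (step (8)) recovers the video exactly.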

Step (4): establish the degradation model of the observed pixel stream: I_mn = DBH_mn + E, where D is the temporal downsampling matrix, B is the temporal blur matrix that models the exposure time, H_mn is the original pixel stream, and E is an additive Gaussian noise vector.
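The degradation model I_mn = DBH_mn + E can be sketched numerically as follows. The boxcar blur kernel and the downsampling factor are illustrative assumptions; the patent only specifies that B is a temporal blur matrix modeling the exposure time and D a temporal downsampling matrix:

```python
import numpy as np

def blur_matrix(K, exposure):
    """Temporal blur matrix B (K x K): averages over `exposure` consecutive
    high-rate samples, a simple model of a long shutter time."""
    B = np.zeros((K, K))
    for t in range(K):
        window = range(t, min(t + exposure, K))
        for u in window:
            B[t, u] = 1.0 / len(window)
    return B

def downsample_matrix(K, factor):
    """Temporal downsampling matrix D ((K // factor) x K): keeps every
    `factor`-th sample of the high-rate stream."""
    D = np.zeros((K // factor, K))
    for i in range(K // factor):
        D[i, i * factor] = 1.0
    return D

K, factor, exposure = 12, 3, 3
H = np.random.rand(K)                        # original high-rate stream H_mn
noise = 0.01 * np.random.randn(K // factor)  # Gaussian noise vector E
I_obs = downsample_matrix(K, factor) @ blur_matrix(K, exposure) @ H + noise
```

Each observed sample is thus a weighted sum of several original samples, which is exactly the convolution-sum structure discussed below in connection with smearing.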

Step (5): derive the probability estimate of the original pixel stream H_mn:

According to Bayes' rule, the probability estimate of the original pixel stream H_mn is

$$\hat{H}_{mn} = \arg\max_{H_{mn}} P(H_{mn} \mid I_{mn}) = \arg\max_{H_{mn}} \big[ P(I_{mn} \mid H_{mn})\, P(H_{mn}) \big];$$

The restoration of the original pixel stream is realized through this probabilistic framework rather than by interpolation-based estimation of H_mn, so the restored result has higher fidelity.

Step (6): derive the restoration formula of the original pixel stream H_mn:

From the degradation model of the observed pixel stream established in step (4) and the probability estimate of H_mn obtained in step (5), logarithmic calculation yields the restoration formula

$$\hat{H}_{mn} = \arg\max_{H_{mn}} \Big( \log P(H_{mn}) - \alpha \,\lVert I_{mn} - DBH_{mn} \rVert_2^2 \Big),$$

where Ĥ_mn is the restored pixel stream, log P(H_mn) is the temporal prior information term of the original pixel stream, and α is an optimization parameter whose value may be set by trial and error.

The temporal prior information introduced in step (6) makes it possible to effectively remove the smearing of the video.

The smearing is, in essence, the result of temporally convolving the original pixel stream H_mn with the temporal blur matrix B that represents the exposure time. As shown in Fig. 5, each pixel value of I_mn is essentially a convolution sum of several pixels of H_mn, so removing the smearing naturally requires deconvolution.

If the original pixel stream H_mn were restored only by minimum mean-square-error deconvolution, the temporal prior information would be ignored; such an approach is essentially a pure likelihood estimate, which can hardly separate the convolution sums hidden in the observed pixel stream and therefore can hardly remove the smearing.

Step (7): restore the observed pixel streams:

Using the restoration formula of the original pixel stream H_mn obtained in step (6), the single video represented in step (3) as a matrix of observed pixel streams is restored stream by stream in subscript order; each I_mn in the matrix is restored by the steps shown in Fig. 6 to obtain the restored pixel stream Ĥ_mn.

Step (7-1): determine the temporal prior model P(H_mn) of the observed pixel stream and judge whether it is Gaussian or Laplacian:

From the observed pixel stream I_mn, a data-driven procedure — the steps shown in Fig. 7 — determines whether the temporal prior model P(H_mn) of the observed pixel stream is the Gaussian model P_G(ΓH_mn) or the Laplacian model P_L(ΓH_mn), where Γ denotes a high-pass operator on the signal. Determining the temporal prior model in a data-driven way brings the model closer to the data characteristics of the original pixel stream and thereby avoids the arbitrariness of a hand-assumed model.

Step (7-1-1): linearly interpolate the observed pixel stream I_mn to obtain the pixel stream Ĥ⁰_mn;

Step (7-1-2): for the high-pass version ΓĤ⁰_mn of the pixel stream Ĥ⁰_mn, establish the Gaussian prior model

$$P_G(\Gamma\hat{H}_{mn}^{0}) = (2\pi\sigma_G^2)^{-K/2} \exp\left\{ -\frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_2^2}{2\sigma_G^2} \right\}$$

and the Laplacian prior model

$$P_L(\Gamma\hat{H}_{mn}^{0}) = (2\sigma_L)^{-K} \exp\left\{ -\frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_1}{\sigma_L} \right\},$$

where σ_G and σ_L are the standard deviations of the Gaussian and Laplacian prior models respectively, and K is the dimension of ΓĤ⁰_mn;

Step (7-1-3): with ΓĤ⁰_mn known, use the Gaussian and Laplacian prior models established in step (7-1-2) and the maximum-likelihood principle to estimate the standard deviations

$$\hat{\sigma}_G = \sqrt{\frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_2^2}{K}}, \qquad \hat{\sigma}_L = \frac{\lVert \Gamma\hat{H}_{mn}^{0} \rVert_1}{K};$$

Step (7-1-4): evaluate P_G(ΓĤ⁰_mn) with σ̂_G and P_L(ΓĤ⁰_mn) with σ̂_L and compare the two values; if the former is greater, the temporal prior model of the observed pixel stream is judged to be Gaussian; otherwise it is judged to be Laplacian.
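The data-driven model selection of steps (7-1-1)–(7-1-4) can be sketched as follows. The choice of the first-order temporal difference as the high-pass operator Γ is an illustrative assumption, and the comparison is carried out on the two maximized log-likelihoods:

```python
import numpy as np

def select_temporal_prior(stream_interp):
    """Data-driven choice between a Gaussian and a Laplacian temporal prior
    for one interpolated pixel stream (steps (7-1-1)-(7-1-4))."""
    g = np.diff(stream_interp)  # high-pass version: first-order differences
    K = g.size
    # maximum-likelihood estimates of the two scale parameters (step (7-1-3))
    sigma_g = np.sqrt(np.sum(g ** 2) / K)
    sigma_l = np.sum(np.abs(g)) / K
    # maximized log-likelihoods of the two prior models (step (7-1-4))
    ll_gauss = (-0.5 * K * np.log(2.0 * np.pi * sigma_g ** 2)
                - np.sum(g ** 2) / (2.0 * sigma_g ** 2))
    ll_laplace = -K * np.log(2.0 * sigma_l) - np.sum(np.abs(g)) / sigma_l
    return "gaussian" if ll_gauss > ll_laplace else "laplacian"

# streams whose high-pass residuals are light-tailed vs. heavy-tailed
smooth = np.concatenate([[0.0], np.cumsum([1.0, -1.0] * 5)])
spiky = np.concatenate([[0.0], np.cumsum([10.0] + [0.0] * 9)])
```

A stream with residuals of roughly constant magnitude favors the light-tailed Gaussian prior, while a stream with a few large jumps among many near-zero residuals favors the heavy-tailed Laplacian prior.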

Step (7-2): derive the partial-derivative equation from the restoration formula of the original pixel stream H_mn:

After the temporal prior model P(H_mn) is determined, based on the restoration formula of H_mn obtained in step (6),

$$\hat{H}_{mn} = \arg\max_{H_{mn}} \Big( \log P(H_{mn}) - \alpha \,\lVert I_{mn} - DBH_{mn} \rVert_2^2 \Big),$$

the partial-derivative equation is derived as

$$\frac{\partial}{\partial H_{mn}} \big[ \log P(H_{mn}) \big] + \alpha B^{T} D^{T} \big( I_{mn} - DBH_{mn} \big) = 0.$$

Step (7-3): linearly interpolate the observed pixel stream I_mn to obtain the pixel stream Ĥ⁰_mn as the initial value of the iteration.

Step (7-4): iteratively solve the partial-derivative equation of step (7-2) by the conjugate gradient method to obtain the restored pixel stream Ĥ_mn.
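Under a Gaussian temporal prior, log P(H) = -‖ΓH‖²/(2σ²) + const, so the partial-derivative equation of step (7-2) becomes the linear system (ΓᵀΓ/σ² + αBᵀDᵀDB)H = αBᵀDᵀI, which steps (7-3)–(7-4) solve by conjugate gradients from a linearly interpolated start. The sketch below is one possible reading; the first-difference operator for Γ, the values of α and σ, and the noise-free toy setup are all illustrative assumptions:

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters=500, tol=1e-12):
    """Plain conjugate-gradient solver for the symmetric positive-definite
    system A x = b, started from x0 (step (7-4))."""
    x = x0.astype(float).copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if rs < tol:
            break
        Ap = A @ p
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def restore_stream(I_obs, D, B, alpha=100.0, sigma=0.1):
    """Restore one pixel stream under a Gaussian temporal prior by solving
    (G^T G / sigma^2 + alpha B^T D^T D B) H = alpha B^T D^T I_obs."""
    K = B.shape[0]
    G = np.diff(np.eye(K), axis=0)  # first-order difference as Γ (assumed)
    A = G.T @ G / sigma**2 + alpha * B.T @ D.T @ D @ B
    b = alpha * B.T @ D.T @ I_obs
    # step (7-3): linear interpolation of the observed stream as the start
    factor = K // I_obs.size
    t_low = np.arange(I_obs.size) * factor
    x0 = np.interp(np.arange(K), t_low, I_obs)
    return conjugate_gradient(A, b, x0)

# toy check: 4 observed samples of a constant stream, no blur, factor 3
K = 12
D = np.zeros((4, K))
for i in range(4):
    D[i, 3 * i] = 1.0
B = np.eye(K)
I_obs = D @ B @ np.full(K, 0.5)
H_hat = restore_stream(I_obs, D, B)
```

For a constant stream both the smoothness penalty and the data term vanish at the true solution, so the restored stream reproduces it; a Laplacian prior would instead make the system nonlinear and require an iterative reweighting inside the same solver.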

Step (8): combine to obtain the restored video. The restored pixel streams Ĥ_mn restored one by one in step (7) are combined in matrix form into the restored video H = [Ĥ_mn], which is output.

The innovation of the present invention is that a single video is used to achieve frame rate restoration with convenient acquisition and a simple restoration pipeline. The method rests on a probabilistic framework over pixel streams, introduces temporal prior information, and determines the temporal prior model in a data-driven way, which not only improves video fidelity but also effectively removes the smearing of video frames.

The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments; the embodiments and the description merely illustrate its principles. Various changes and improvements may be made without departing from the spirit and scope of the invention, and all such changes and improvements fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (4)

1. A single-video frame rate restoration method based on pixel streams and temporal prior information, comprising the steps of:
step (1) acquiring a video to be restored: acquiring a single video I = {I(i) | i ∈ N} through video capture, wherein I(i) is a frame of the video and i is the chronological number of each frame;
step (2) designing a construction method for the pixel streams to be restored: ordered by frame, concatenating the pixels I_mn(i) located at the same coordinate (m, n) in each frame of the single video to form an observed pixel stream I_mn = {I_mn(i) | i ∈ N} as a pixel stream to be restored;
step (3) representing the single video as a matrix of observed pixel streams: according to the construction method of step (2), constructing the observed pixel streams one by one in the coordinate order of each frame and combining them, so that the single video is represented in matrix form as I = [I_mn], wherein each element of the matrix is one constructed observed pixel stream;
step (4) establishing a degradation model of the observed pixel stream: I_mn = DBH_mn + E, wherein D is a temporal downsampling matrix, B is a temporal blur matrix used to model the exposure time, H_mn is the original pixel stream, and E is an additive Gaussian-distributed noise vector;
step (5) deriving the probability estimate of the original pixel stream H_mn:

according to Bayes' rule, the probability estimate of the original pixel stream H_mn is

$$\hat{H}_{mn} = \arg\max_{H_{mn}} P(H_{mn} \mid I_{mn}) = \arg\max_{H_{mn}} \big[ P(I_{mn} \mid H_{mn})\, P(H_{mn}) \big];$$
step (6) deriving the restoration formula of the original pixel stream H_mn:

according to the degradation model of the observed pixel stream established in step (4) and the probability estimate of the original pixel stream H_mn obtained in step (5), obtaining the restoration formula by logarithmic calculation as

$$\hat{H}_{mn} = \arg\max_{H_{mn}} \Big( \log P(H_{mn}) - \alpha \,\lVert I_{mn} - DBH_{mn} \rVert_2^2 \Big),$$

wherein Ĥ_mn is the restored pixel stream, log P(H_mn) is the temporal prior information term of the original pixel stream, and α is an optimization parameter;
step (7) restoring the observed pixel streams: using the restoration formula of the original pixel stream H_mn obtained in step (6), restoring the single video represented in step (3) as a matrix of observed pixel streams stream by stream in subscript order, each I_mn in the matrix being restored by the following steps to obtain a restored pixel stream Ĥ_mn:
step (7-1) determining the temporal prior model P(H_mn) of the observed pixel stream and judging whether it is of Gaussian or Laplace type: from the observed pixel stream I_mn, the temporal prior model P(H_mn) of the observed pixel stream is determined in a data-driven manner to be of Gaussian type, P(H_mn) ∝ exp{ −||ΓH_mn||²₂ / (2σ²_G) }, or of Laplace type, P(H_mn) ∝ exp{ −||ΓH_mn||₁ / σ_L }, where Γ denotes a high-pass operator acting on the signal;
step (7-2) deriving the partial-derivative equation from the restoration formula of the original pixel stream H_mn: after the temporal prior model P(H_mn) is determined, based on the restoration formula of the original pixel stream H_mn obtained in step (6),

Ĥ_mn = argmax_{H_mn} ( log P(H_mn) − α · ||I_mn − DBH_mn||²₂ ),

the partial-derivative equation is derived as

∂/∂H_mn [ log P(H_mn) ] + α · BᵀDᵀ( I_mn − DBH_mn ) = 0;
step (7-3) performing linear interpolation on the observed pixel stream I_mn to obtain the pixel stream Ĥ⁰_mn as the initial value of the iteration;
step (7-4) iteratively solving the partial-derivative equation obtained in step (7-2) by the conjugate gradient method to obtain the restored pixel stream Ĥ_mn;
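For the Gaussian prior, the optimality condition of step (7-2) is linear in H_mn, so the conjugate gradient solve of steps (7-3)/(7-4) can be sketched directly. The matrices D, B, the first-difference high-pass Γ, and all parameter values below are stand-in assumptions, and `scipy.sparse.linalg.cg` stands in for whatever conjugate gradient implementation the patent used:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
K, r = 12, 3  # assumed stream length and down-sampling factor

# Assumed stand-in operators: 2-tap temporal blur B, keep-every-r-th D,
# and a first-difference high-pass Gamma (upper bidiagonal, nonsingular).
B = np.eye(K) * 0.5 + np.eye(K, k=1) * 0.5
D = np.zeros((K // r, K))
D[np.arange(K // r), np.arange(0, K, r)] = 1.0
Gamma = np.eye(K) - np.eye(K, k=1)

H_true = np.cumsum(rng.standard_normal(K))  # a smooth-ish ground truth
I_obs = D @ B @ H_true + 0.01 * rng.standard_normal(K // r)

# Gaussian prior log P(H) = -||Gamma H||^2 / (2 sigma^2) + const, so the
# condition d/dH[log P(H)] + alpha B^T D^T (I - DBH) = 0 becomes the SPD
# linear system (alpha B^T D^T D B + Gamma^T Gamma / sigma^2) H = alpha B^T D^T I.
alpha, sigma2 = 100.0, 1.0
A = alpha * B.T @ D.T @ D @ B + Gamma.T @ Gamma / sigma2
b = alpha * B.T @ D.T @ I_obs

# Initial value: linear interpolation of the observed stream (step (7-3)).
H0 = np.interp(np.arange(K), np.arange(0, K, r), I_obs)

H_hat, info = cg(A, b, x0=H0)  # info == 0 signals convergence
```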
step (8) combining to obtain the restored video: the restored pixel streams Ĥ_mn restored one by one in step (7) are combined in matrix form into the restored video H as output, the restored video being expressed in matrix form as H = [ Ĥ_mn ].
2. The method for restoring single-video frame rate based on pixel streams and temporal prior information according to claim 1, characterized in that: the video acquisition in step (1) may use a camera to shoot a moving scene to obtain the single video.
3. The method for restoring single-video frame rate based on pixel streams and temporal prior information according to claim 1, characterized in that: the value of the optimization parameter α in step (6) is set by a trial-and-error method.
4. The method for restoring single-video frame rate based on pixel streams and temporal prior information according to claim 1, characterized in that: determining the temporal prior model P(H_mn) of the observed pixel stream in step (7-1) and judging whether it is of Gaussian or Laplace type is performed by a data-driven method comprising the following steps:
step (7-1-1) performing linear interpolation on the observed pixel stream I_mn to obtain the pixel stream Ĥ⁰_mn;
step (7-1-2) establishing, for the high-pass version ΓĤ⁰_mn of the pixel stream Ĥ⁰_mn, the Gaussian prior model

P_G(ΓĤ⁰_mn) = (2πσ²_G)^(−K/2) · exp{ −||ΓĤ⁰_mn||²₂ / (2σ²_G) }

and the Laplace prior model

P_L(ΓĤ⁰_mn) = (2σ_L)^(−K) · exp{ −||ΓĤ⁰_mn||₁ / σ_L },

where σ_G and σ_L denote the standard deviations of the Gaussian and Laplace prior models respectively, and K denotes the dimension of ΓĤ⁰_mn;
step (7-1-3) using the Gaussian and Laplace prior models established in step (7-1-2), with ΓĤ⁰_mn taken as known, estimating the standard deviations according to the maximum-likelihood rule as

σ̂_G = sqrt( ||ΓĤ⁰_mn||²₂ / K ),  σ̂_L = ||ΓĤ⁰_mn||₁ / K;
step (7-1-4) calculating P_G(ΓĤ⁰_mn) with σ̂_G and P_L(ΓĤ⁰_mn) with σ̂_L and comparing the magnitudes of the two values; if P_G(ΓĤ⁰_mn) is greater than P_L(ΓĤ⁰_mn), the temporal prior model of the observed pixel stream is judged to be of Gaussian type; otherwise, the temporal prior model of the observed pixel stream is judged to be of Laplace type.
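Steps (7-1-1) through (7-1-4) can be sketched as a maximized log-likelihood comparison (a minimal illustration; the first-difference choice of Γ and the test signals are our own assumptions, and logs are compared instead of raw probabilities to avoid underflow):

```python
import numpy as np

def select_prior(h0):
    """Judge Gaussian vs. Laplace type for an interpolated pixel stream h0.

    Returns 'gaussian' or 'laplace'.
    """
    g = np.diff(h0)  # Gamma: first-difference high-pass (assumed form)
    K = g.size
    # Maximum-likelihood standard-deviation estimates (step (7-1-3)).
    sigma_G = np.sqrt(np.sum(g ** 2) / K)
    sigma_L = np.sum(np.abs(g)) / K
    # Maximized log-likelihoods of the two prior models (step (7-1-2)).
    log_PG = (-0.5 * K * np.log(2 * np.pi * sigma_G ** 2)
              - np.sum(g ** 2) / (2 * sigma_G ** 2))
    log_PL = -K * np.log(2 * sigma_L) - np.sum(np.abs(g)) / sigma_L
    # Step (7-1-4): pick the model with the larger likelihood.
    return 'gaussian' if log_PG > log_PL else 'laplace'

# A stream with Gaussian increments should be judged Gaussian type.
smooth = np.cumsum(np.random.default_rng(3).standard_normal(200))
```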
CN201510414187.4A 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation Expired - Fee Related CN105005977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510414187.4A CN105005977B (en) 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation

Publications (2)

Publication Number Publication Date
CN105005977A CN105005977A (en) 2015-10-28
CN105005977B true CN105005977B (en) 2016-04-27

Family

ID=54378636

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510414187.4A Expired - Fee Related CN105005977B (en) 2015-07-14 2015-07-14 A kind of single video frame per second restored method based on pixel stream and time prior imformation

Country Status (1)

Country Link
CN (1) CN105005977B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112291676B (en) * 2020-05-18 2021-10-15 珠海市杰理科技股份有限公司 Method and system, chip, and electronic device for suppressing audio signal smearing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5493513A (en) * 1993-11-24 1996-02-20 Intel Corporation Process, apparatus and system for encoding video signals using motion estimation
CN104103050A (en) * 2014-08-07 2014-10-15 重庆大学 Real video recovery method based on local strategies
CN104376547A (en) * 2014-11-04 2015-02-25 中国航天科工集团第三研究院第八三五七研究所 Motion blurred image restoration method

Also Published As

Publication number Publication date
CN105005977A (en) 2015-10-28

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160427

Termination date: 20210714

CF01 Termination of patent right due to non-payment of annual fee