CN102131094A - motion prediction method - Google Patents
motion prediction method
- Publication number
- CN102131094A, CN201110020283A
- Authority
- CN
- China
- Prior art keywords
- motion
- unit
- candidate
- units
- motion vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Technical Field
The present invention relates to video processing, and in particular to motion prediction for video data.
Background
The H.264 compression standard provides excellent video quality at a much lower bit rate than previous standards by employing features such as sub-pixel accuracy and multiple referencing. A video compression process can generally be divided into five parts: inter-prediction/intra-prediction, transform/inverse transform, quantization/inverse quantization, loop filtering, and entropy encoding. H.264 is used in a variety of applications, such as Blu-ray Disc, DVB broadcast services, direct-broadcast satellite television services, cable television services, and real-time video conferencing.
A video data stream comprises a series of frames. Each frame is divided into a plurality of coding units (e.g., macroblocks or extended macroblocks) for video processing. Each coding unit may be split into quad-tree partitions, and a leaf coding unit is called a prediction unit. A prediction unit may be further split into quad-tree partitions, and each partition is assigned a motion parameter. To reduce the cost of transmitting a large number of motion parameters, a motion vector predictor (hereinafter referred to as MVP) is calculated for each partition by referring to neighboring coded blocks; because the motion of neighboring blocks tends to have high spatial correlation, coding efficiency can thereby be improved.
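For readers who prefer a concrete picture of the partitioning, the following is a minimal sketch, assuming square units split into four equal children; the `Unit` class, its fields, and `split_quadtree` are illustrative names introduced here, not terms defined by this disclosure.

```python
# Minimal quad-tree splitting sketch: a coding unit is recursively split into
# four equal partitions until a target leaf size is reached.

class Unit:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size
        self.children = []          # empty for leaf units (prediction units)

def split_quadtree(unit, leaf_size):
    """Recursively split a unit into quad-tree partitions down to leaf_size."""
    if unit.size <= leaf_size:
        return unit                 # leaf unit: would carry motion parameters
    half = unit.size // 2
    for dy in (0, half):
        for dx in (0, half):
            child = Unit(unit.x + dx, unit.y + dy, half)
            unit.children.append(split_quadtree(child, leaf_size))
    return unit

# Example: split a 64x64 coding unit down to 16x16 leaves.
root = split_quadtree(Unit(0, 0, 64), leaf_size=16)
```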
Please refer to FIG. 1, which is a schematic diagram of a current (coding) unit 100 and a plurality of neighboring (coding) units A, B, C, and D. In this example, the current unit 100 and the neighboring units A, B, C, and D have the same size; however, these units need not be the same size. The MVP of the current unit 100 is predicted according to neighboring units A, B, and C, or according to A, B, and D if C is unavailable. When the current unit 100 is a 16×16 block and the motion vector of neighboring unit C exists, the median of the motion vectors of neighboring units A, B, and C is determined to be the MVP of the current unit 100. When the current unit 100 is a 16×16 block and the motion vector of neighboring unit C does not exist, the median of the motion vectors of neighboring units A, B, and D is determined to be the MVP of the current unit 100. When the current unit 100 is an 8×16 partition in the left half of a 16×16 block, the motion vector of neighboring unit A is determined to be the MVP of the current unit 100. When the current unit 100 is an 8×16 partition in the right half of a 16×16 block, the motion vector of neighboring unit C is determined to be the MVP of the current unit 100. When the current unit 100 is a 16×8 partition in the upper half of a 16×16 block, the motion vector of neighboring unit B is determined to be the MVP of the current unit 100. When the current unit 100 is a 16×8 partition in the lower half of a 16×16 block, the motion vector of neighboring unit A is determined to be the MVP of the current unit 100.
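As a worked illustration of the selection rules above, the following sketch encodes them in Python; the partition labels, the `median_mv` helper, and the argument names are assumptions made for this example rather than text taken from the H.264 specification.

```python
def median_mv(mv_a, mv_b, mv_c):
    """Component-wise median of three motion vectors given as (x, y) tuples."""
    xs = sorted(v[0] for v in (mv_a, mv_b, mv_c))
    ys = sorted(v[1] for v in (mv_a, mv_b, mv_c))
    return (xs[1], ys[1])

def h264_style_mvp(partition, mv_a, mv_b, mv_c, mv_d):
    """Select the MVP of the current unit according to its partition type.

    partition: one of '16x16', '8x16_left', '8x16_right', '16x8_top', '16x8_bottom'
    mv_c may be None when neighboring unit C is unavailable.
    """
    if partition == '16x16':
        # Median of A, B, C; fall back to A, B, D when C does not exist.
        return median_mv(mv_a, mv_b, mv_c if mv_c is not None else mv_d)
    if partition == '8x16_left':
        return mv_a
    if partition == '8x16_right':
        return mv_c
    if partition == '16x8_top':
        return mv_b
    if partition == '16x8_bottom':
        return mv_a
    raise ValueError('unsupported partition type')
```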
When the MVP of the current unit is predicted from the motion vectors of neighboring units A, B, C, and D, those motion vectors are not properly scaled in time. For example, the reference frames of neighboring units A, B, and C differ, and the motion vectors of neighboring units A, B, and C correspond to those respective reference frames. The temporal distance between each reference frame and the current frame is different. Therefore, before the MVP of the current unit 100 is predicted from the motion vectors of neighboring units A, B, and C, those motion vectors should be temporally scaled according to the temporal distances.
The MVP of the current unit 100 is predicted only from the motion vectors (hereinafter referred to as MVs) of neighboring units A, B, C, and D. The prediction accuracy of the MVP can be further improved if more candidate MVPs are considered and the best one is selected from the candidates through rate-distortion optimization. For example, motion vector competition (MVC) has been proposed to select the best MVP from a predetermined candidate set specified at the sequence level. The predetermined candidate set comprises the H.264 standard predictor (e.g., the median MV of neighboring units), the MV of a collocated unit, and the MVs of neighboring units, where the collocated unit is located in the reference frame at the same position as the current unit in the current frame. The recommended number of MVPs in the predetermined candidate set is two. According to the motion vector competition method, the predetermined candidate set is fixed at the video sequence level.
Summary of the Invention
To solve the above technical problems, the following technical solutions are provided.
An embodiment of the present invention provides a motion prediction method, comprising: determining a plurality of candidate units corresponding to a current unit of a current frame; obtaining motion vectors of the candidate units; calculating temporal scaling factors of the candidate units according to temporal distances between the reference frames of the candidate units and the current frame; scaling the motion vectors of the candidate units according to the temporal scaling factors to obtain scaled motion vectors; and selecting, according to the scaled motion vectors, a motion vector predictor for motion prediction of the current unit from the candidate units.
Another embodiment of the present invention provides a motion prediction method, comprising: determining candidate units for motion prediction of a current unit; determining coded units corresponding to the current unit; calculating, for each candidate unit, motion differences between the motion vector of the candidate unit and the motion vector of each of the coded units; summing the motion differences corresponding to each candidate unit according to a series of weights to obtain a plurality of weighted sums, each corresponding to one of the candidate units; and selecting, according to the weighted sums, at least one selected candidate unit from the candidate units for motion prediction of the current unit.
In the motion prediction methods described above, the candidate set is adaptively determined according to characteristics of the current unit, which can improve the performance of motion prediction.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a current coding unit and a plurality of neighboring coding units.
FIG. 2 is a block diagram of a video encoder according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of scaling of the motion vectors of two candidate units.
FIG. 4 is a flowchart of a motion prediction method with temporal difference adjustment.
FIG. 5 is a schematic diagram of a plurality of candidate units for motion prediction of a current unit according to an embodiment of the present invention.
FIG. 6A and FIG. 6B are flowcharts of a motion prediction method with adaptively selected candidate units according to an embodiment of the present invention.
FIG. 7 is a schematic diagram of a table recording motion differences corresponding to different coded units and candidate units according to an embodiment of the present invention.
Detailed Description
Certain terms are used throughout the description and the claims to refer to particular elements. Those skilled in the art will appreciate that hardware manufacturers may refer to the same element by different names. This description and the claims do not distinguish elements by name, but by function. The term "comprising" used in the description and the claims is an open-ended term and should therefore be interpreted as "including but not limited to". In addition, the term "coupled" herein encompasses any direct or indirect means of electrical connection. Therefore, if a first device is described as being coupled to a second device, the first device may be directly electrically connected to the second device, or indirectly electrically connected to the second device through other devices or connection means.
Please refer to FIG. 2, which is a block diagram of a video encoder 200 according to an embodiment of the present invention. In one embodiment, the video encoder 200 comprises a motion prediction module 202, a subtraction module 204, a transform module 206, a quantization module 208, and an entropy encoding module 210. The video encoder 200 receives a video input and generates a bitstream as output. The motion prediction module 202 performs motion prediction on the video input to generate predicted samples and prediction information. The subtraction module 204 then subtracts the predicted samples from the video input to obtain residues, thereby reducing the amount of video data from that of the video input to that of the residues. The residues are then sent sequentially to the transform module 206 and the quantization module 208. The transform module 206 performs a discrete cosine transform (DCT) on the residues to obtain transformed residues. The quantization module 208 then quantizes the transformed residues to obtain quantized residues. The entropy encoding module 210 then performs entropy encoding on the quantized residues and the prediction information to obtain the bitstream as output.
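A toy sketch of the data flow through modules 204 to 208 is given below for orientation; it assumes NumPy/SciPy, a fixed quantization step, and block-level processing, and it is not the encoder 200 itself.

```python
import numpy as np
from scipy.fftpack import dct

def encode_block(block, predicted, q_step=8.0):
    """Toy residue -> 2-D DCT -> quantization path for a single block."""
    residue = block.astype(np.float64) - predicted            # subtraction module 204
    transformed = dct(dct(residue, axis=0, norm='ortho'),     # transform module 206
                      axis=1, norm='ortho')
    quantized = np.round(transformed / q_step).astype(int)    # quantization module 208
    # A real encoder would entropy-code `quantized` together with the
    # prediction information to produce the output bitstream (module 210).
    return quantized
```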
The motion prediction module 202 predicts the MVP of a current unit of the current frame according to the motion vectors of a plurality of candidate units. In one embodiment, the candidate units are neighboring units adjacent to the current unit. Before the motion prediction module 202 predicts the MVP of the current unit, the temporal distances between the reference frames of the candidate units and the current frame are calculated, and the motion vectors of the candidate units are scaled according to the temporal distances. Please refer to FIG. 3, which is a schematic diagram of scaling of the motion vectors of two candidate units 310 and 320. The current frame k comprises two candidate units for motion prediction of a current unit 300: a first candidate unit 310 and a second candidate unit 320. The first candidate unit 310 has a motion vector MV1 corresponding to a reference frame i, and a first temporal distance Dik between the reference frame i and the current frame k is calculated. The second candidate unit 320 has a motion vector MV2 corresponding to a reference frame l, and a second temporal distance Dlk between the reference frame l and the current frame k is calculated.
A target temporal distance Djk between a target search frame j and the current frame k is then calculated, where the target search frame j is the selected reference frame. A first temporal scaling factor is calculated by dividing the target temporal distance Djk by the first temporal distance Dik, and the motion vector MV1 of the first candidate unit 310 is multiplied by the first temporal scaling factor (Djk/Dik) to obtain a scaled motion vector MV1' corresponding to the first candidate unit 310. A second temporal scaling factor is calculated by dividing the target temporal distance Djk by the second temporal distance Dlk, and the motion vector MV2 of the second candidate unit 320 is multiplied by the second temporal scaling factor (Djk/Dlk) to obtain a scaled motion vector MV2' corresponding to the second candidate unit 320. The scaled motion vectors MV1' and MV2' are thus both measured with respect to the target search frame j, so the effect of differing temporal distances is removed from the scaled motion vectors MV1' and MV2'. The motion prediction module 202 can then predict the MVP of the current unit 300 according to the scaled motion vectors MV1' and MV2' of the candidate units 310 and 320.
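The scaling operation can be stated compactly as in the sketch below, which assumes frame indices serve as temporal positions; the function and argument names are illustrative, not part of this disclosure.

```python
def scale_motion_vector(mv, ref_frame, current_frame, target_frame):
    """Scale mv, measured against ref_frame, so it refers to target_frame.

    The factor is Djk / Dik: Djk is the temporal distance between the target
    search frame j and the current frame k, and Dik is the temporal distance
    between the candidate's reference frame i and the current frame k.
    """
    d_ik = current_frame - ref_frame
    d_jk = current_frame - target_frame
    factor = d_jk / d_ik
    return (mv[0] * factor, mv[1] * factor)

# Worked example: current frame k = 10, reference frame i = 6, target frame j = 8.
# Dik = 4, Djk = 2, factor = 0.5, so MV1 = (8, -4) becomes MV1' = (4, -2).
print(scale_motion_vector((8, -4), ref_frame=6, current_frame=10, target_frame=8))
```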
Please refer to FIG. 4, which is a flowchart of a motion prediction method 400 with temporal difference adjustment. First, a plurality of candidate units of a current unit of a current frame are determined (step 402). The candidate units and the current unit are blocks of the same size or of different sizes, and each of these units may be a coding unit, a prediction unit, or a prediction unit partition. In one embodiment, the candidate units comprise a left unit A on the left side of the current unit, an upper unit B above the current unit, an upper-right unit C at the upper-right corner of the current unit, and an upper-left unit D at the upper-left corner of the current unit. Motion vectors of the candidate units are then obtained (step 404). Temporal scaling factors of the candidate units are then calculated according to the temporal distances between the reference frames of the candidate units and the current frame (step 406). In one embodiment, the temporal distances between the reference frames of the candidate units and the current frame are first calculated, the target temporal distance between a target search frame and the current frame is also calculated, and the target temporal distance is then divided by each of the temporal distances corresponding to the candidate units to obtain the temporal scaling factors corresponding to the candidate units, as shown in FIG. 3.
The motion vectors of the candidate units are then scaled according to the temporal scaling factors to obtain a plurality of scaled motion vectors (step 408). In one embodiment, the motion vectors of the candidate units are respectively multiplied by the temporal scaling factors of the candidate units to obtain the scaled motion vectors of the candidate units, as shown in FIG. 3. A motion vector predictor of the current unit is then selected from the candidate units according to the scaled motion vectors (step 410). In one embodiment, a median scaled motion vector is determined from the scaled motion vectors (e.g., by sorting the scaled motion vectors), and the median scaled motion vector is then selected as the MVP of the current unit.
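Putting steps 402 to 410 together, one possible sketch (under the same frame-index assumption, with a simple component-wise median and an invented `Candidate` record) looks like the following; it is an illustration of the flow, not the method 400 implementation itself.

```python
from typing import List, NamedTuple, Tuple

class Candidate(NamedTuple):
    mv: Tuple[float, float]   # motion vector of the candidate unit
    ref_frame: int            # index of the candidate's reference frame

def predict_mvp(candidates: List[Candidate], current_frame: int, target_frame: int):
    """Scale candidate MVs to the target search frame and take their median."""
    d_jk = current_frame - target_frame
    scaled = []
    for c in candidates:                              # steps 404-408
        d_ik = current_frame - c.ref_frame
        factor = d_jk / d_ik                          # temporal scaling factor
        scaled.append((c.mv[0] * factor, c.mv[1] * factor))
    # Step 410: component-wise median of the scaled MVs (for an even count,
    # this simplification takes the upper middle element).
    xs = sorted(v[0] for v in scaled)
    ys = sorted(v[1] for v in scaled)
    mid = len(scaled) // 2
    return (xs[mid], ys[mid])

# Example with three candidates, current frame 10 and target search frame 8.
mvp = predict_mvp([Candidate((8, -4), 6), Candidate((2, 2), 9), Candidate((6, 0), 8)], 10, 8)
```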
When the motion prediction module 202 determines the MVP of the current unit according to the motion vector competition method, generally only the motion vectors of two candidate units determined at the sequence level are included in the candidate set used for determining the MVP of the current unit. Furthermore, the candidate set is not adaptively determined according to characteristics of the current unit. If the candidate set were adaptively determined according to characteristics of the current unit, the performance of motion prediction could be improved.
Please refer to FIG. 5, which is a schematic diagram of a plurality of candidate units for motion prediction of a current unit 512 according to an embodiment of the present invention. In this embodiment, the current unit 512 and the candidate units are blocks of different sizes; for example, the current unit 512 is a 16×16 block and the candidate units are 4×4 blocks. In another embodiment, the sizes of the current and candidate units may be the same or different, and the size may be 4×4, 8×8, 8×16, 16×8, 16×16, 32×32, or 64×64. In this embodiment, the motion vectors of four candidate units A, B, C, and D in the current frame 502 may serve as candidates for determining the MVP of the current unit 512. In addition, a collocated unit 514 is located in a reference frame 504 at the same position as the current unit 512 in the current frame 502, and the motion vectors of a plurality of candidate units a–j adjacent to or located within the collocated unit 514 may also serve as candidates for determining the MVP of the current unit 512.
The candidate unit A in the current frame 502 is the partition on the left side of the current unit 512, the candidate unit B in the current frame 502 is the partition above the current unit 512, the candidate unit C in the current frame 502 is the partition at the upper-right corner of the current unit 512, and the candidate unit D in the current frame 502 is the partition at the upper-left corner of the current unit 512. The candidate unit a in the reference frame 504 is the partition on the left side of the collocated unit 514, the candidate unit b in the reference frame 504 is the partition above the collocated unit 514, the candidate unit c in the reference frame 504 is the partition at the upper-right corner of the collocated unit 514, and the candidate unit d in the reference frame 504 is the partition at the upper-left corner of the collocated unit 514. In addition, the candidate unit e in the reference frame 504 is a partition inside the collocated unit 514, the candidate units f and g in the reference frame 504 are partitions on the right side of the collocated unit 514, the candidate unit h in the reference frame 504 is the partition at the lower-left corner of the collocated unit 514, the candidate unit i in the reference frame 504 is the partition below the collocated unit 514, and the candidate unit j in the reference frame 504 is the partition at the lower-right corner of the collocated unit 514. In one embodiment, the candidate set for determining the MVP of the current unit 512 further comprises calculated motion vectors, for example, a motion vector equal to the median of the motion vectors of candidate units A, B, and C, a motion vector equal to the median of the motion vectors of candidate units A, B, and D, and a scaled MVP obtained by a method similar to that shown in FIG. 4.
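To make the geometry of FIG. 5 concrete, the sketch below enumerates one plausible set of block coordinates for the candidates, assuming 4×4 candidate blocks addressed by their top-left corners; the exact positions chosen for e, f, and g are assumptions for illustration only.

```python
def spatial_candidates(x, y, size, blk=4):
    """Top-left corners of candidates A-D around a current unit at (x, y)."""
    return {
        'A': (x - blk, y),             # left of the current unit
        'B': (x, y - blk),             # above the current unit
        'C': (x + size, y - blk),      # upper-right corner
        'D': (x - blk, y - blk),       # upper-left corner
    }

def temporal_candidates(x, y, size, blk=4):
    """Candidates a-j around (or inside) the collocated unit in the reference frame."""
    return {
        'a': (x - blk, y),                     # left of the collocated unit
        'b': (x, y - blk),                     # above the collocated unit
        'c': (x + size, y - blk),              # upper-right corner
        'd': (x - blk, y - blk),               # upper-left corner
        'e': (x + size // 2, y + size // 2),   # inside the collocated unit
        'f': (x + size, y),                    # right of the collocated unit
        'g': (x + size, y + size // 2),        # right of the collocated unit
        'h': (x - blk, y + size),              # lower-left corner
        'i': (x, y + size),                    # below the collocated unit
        'j': (x + size, y + size),             # lower-right corner
    }
```

These coordinates would then be used to look up the stored motion vectors of the corresponding 4×4 blocks in the current frame and in the reference frame.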
After the plurality of motion vectors corresponding to the current unit 512 are determined to be included in the candidate set, at least one motion vector is adaptively selected from the candidate set for motion prediction of the current unit 512. Please refer to FIG. 6A and FIG. 6B, which are flowcharts of a motion prediction method 600 with adaptively selected candidate units according to an embodiment of the present invention. A plurality of coded units corresponding to the current unit 512 are determined (step 602). A plurality of candidate units corresponding to the current unit 512 are determined (step 603). The candidate set for the current unit 512 is selected from a plurality of motion vectors corresponding to the current unit 512. The motion vectors may comprise one or a combination of the motion vectors of coded partitions/blocks in the same frame, calculated motion vectors, and motion vectors in the reference frame. In one embodiment, the candidate set corresponding to the current unit 512 shown in FIG. 5 comprises the motion vectors of units A, B, C, and D in the current frame 502 and the motion vector of unit e in the reference frame 504. The candidate set may be determined according to one or more of previous statistics, neighboring information, the shape of the current unit, and the position of the current unit. For example, the plurality of motion vectors corresponding to the current unit 512 are ranked according to the neighboring information, and the first three motion vectors are selected for inclusion in the candidate set. The final MVP may be selected from the candidate set by the motion vector competition method or another selection method. In some embodiments, the motion vectors are ranked according to a selection order, and the selection order is determined by a weighted sum of motion differences. A motion difference is the difference between a motion vector predictor and the corresponding decoded motion vector (i.e., the actual motion vector) of a candidate unit. The weights may be determined by the shape and position of the current unit, or by the shapes and positions of neighboring blocks.
Please refer to FIG. 7, which is a schematic diagram of a table recording motion differences corresponding to different coded units and candidate units according to an embodiment of the present invention. For example, assume unit A is selected as the target coded unit. A motion difference DA,A between the motion vector of unit A and the motion vector of the candidate unit AA on the left side of unit A is calculated. A motion difference DB,A between the motion vector of unit A and the motion vector of the candidate unit BA above unit A is also calculated. A motion difference DC,A between the motion vector of unit A and the motion vector of the candidate unit CA at the upper-right corner of unit A is also calculated. A motion difference DD,A between the motion vector of unit A and the motion vector of the candidate unit DA at the upper-left corner of unit A is also calculated. A motion difference Da,A between the motion vector of unit A and the motion vector of the candidate unit aA on the left side of the collocated unit corresponding to unit A is also calculated. Similarly, the motion differences Db,A, ..., Dj,A corresponding to the coded unit A are also calculated. The calculated motion differences DA,A, DB,A, DC,A, DD,A, Da,A, Db,A, ..., Dj,A corresponding to the coded unit A are then recorded in the table shown in FIG. 7. A target coded unit B is then selected from the coded units (step 604), and the motion differences DA,B, DB,B, DC,B, DD,B, Da,B, Db,B, ..., Dj,B between the motion vector of the target coded unit B and the motion vectors of the plurality of candidate units corresponding to the target coded unit B are calculated (step 606) and recorded in the table shown in FIG. 7. Steps 604 and 606 are repeated until all coded units A, B, C, D, and e have been selected as the target coded unit and the motion differences corresponding to the coded units A, B, C, D, and e have all been calculated (step 608).
After the motion differences corresponding to the coded units A, B, C, D, and e have all been calculated, the selection order of the motion vectors is determined by weighted sums of the motion differences, and a target candidate unit is selected from the candidate units (step 610). For example, if candidate unit A is selected as the target candidate unit, the motion differences DA,A, DA,B, DA,C, DA,D, and DA,e corresponding to the target candidate unit A are summed according to a series of weights WA, WB, WC, WD, and We to obtain a weighted sum SA = [(DA,A×WA)+(DA,B×WB)+(DA,C×WC)+(DA,D×WD)+(DA,e×We)] corresponding to the target candidate unit A (step 612), where the weights WA, WB, WC, WD, and We respectively correspond to one of the coded units A, B, C, D, and e. The other candidate units B, C, D, e, ..., i, and j are then sequentially selected as the target candidate unit, and the weighted sums SB, SC, SD, Se, ..., Si, and Sj corresponding to the candidate units B, C, D, e, ..., i, and j are sequentially calculated.
When all candidate units have been selected as the target candidate unit and the weighted sums SA, SB, SC, SD, Se, ..., Si, and Sj corresponding to all candidate units A, B, C, D, e, ..., i, and j have been calculated (step 614), at least one selected candidate unit for motion prediction of the current unit is chosen from the candidate units A, B, C, D, e, ..., i, and j according to the weighted sums SA, SB, SC, SD, Se, ..., Si, and Sj (step 616). In one embodiment, the weighted sums SA, SB, SC, SD, Se, ..., Si, and Sj are sorted by magnitude, and the candidate unit corresponding to the best weighted sum (which may be the smallest or the largest weighted sum, depending on the weighting method) is determined to be the selected candidate unit. Finally, the motion vector of the current unit 512 is predicted according to the motion vector of the selected candidate unit.
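A compact sketch of steps 602 to 616 follows; it assumes the motion difference is the sum of absolute component differences and that the smallest weighted sum is best, and the dictionary layout is an invented convenience rather than the format of the table in FIG. 7.

```python
def motion_difference(mv1, mv2):
    """Sum of absolute component differences between two motion vectors."""
    return abs(mv1[0] - mv2[0]) + abs(mv1[1] - mv2[1])

def select_candidate(coded_mvs, candidate_mvs_around, weights):
    """coded_mvs:            {coded_unit: mv} for coded units A, B, C, D, e.
    candidate_mvs_around:    {coded_unit: {candidate: mv}} giving, for each coded
                             unit, the MVs of the candidate units in the same
                             relative positions (AA, BA, ..., j-e in FIG. 7).
    weights:                 {coded_unit: weight} (WA, WB, WC, WD, We).
    """
    # Steps 604-608: fill the table of motion differences D[candidate][coded_unit].
    diffs = {}
    for coded_unit, mv in coded_mvs.items():
        for cand, cand_mv in candidate_mvs_around[coded_unit].items():
            diffs.setdefault(cand, {})[coded_unit] = motion_difference(mv, cand_mv)

    # Steps 610-614: weighted sum per candidate, e.g. SA = sum of D[A][u] * W[u].
    weighted_sums = {
        cand: sum(d * weights[u] for u, d in per_unit.items())
        for cand, per_unit in diffs.items()
    }

    # Step 616: pick the candidate with the best (here: smallest) weighted sum.
    return min(weighted_sums, key=weighted_sums.get)
```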
Although the present invention has been disclosed above in terms of preferred embodiments, they are not intended to limit the present invention. Those skilled in the art may make modifications without departing from the scope of the present invention; therefore, the scope of protection of the present invention shall be defined by the claims.
Claims (18)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201210272628.8A CN102833540B (en) | 2010-01-18 | 2011-01-18 | motion prediction method |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US29581010P | 2010-01-18 | 2010-01-18 | |
| US61/295,810 | 2010-01-18 | ||
| US32673110P | 2010-04-22 | 2010-04-22 | |
| US61/326,731 | 2010-04-22 | ||
| US12/957,644 US9036692B2 (en) | 2010-01-18 | 2010-12-01 | Motion prediction method |
| US12/957,644 | 2010-12-01 |
Related Child Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201210272628.8A Division CN102833540B (en) | 2010-01-18 | 2011-01-18 | motion prediction method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN102131094A true CN102131094A (en) | 2011-07-20 |
Family
ID=44268965
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN 201110020283 Pending CN102131094A (en) | 2010-01-18 | 2011-01-18 | motion prediction method |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN102131094A (en) |
Cited By (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014106435A1 (en) * | 2013-01-07 | 2014-07-10 | Mediatek Inc. | Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding |
| CN104838656A (en) * | 2012-07-09 | 2015-08-12 | 高通股份有限公司 | Temporal motion vector prediction in video coding extensions |
| CN106851273A (en) * | 2017-02-10 | 2017-06-13 | 北京奇艺世纪科技有限公司 | A kind of motion-vector coding method and device |
| CN107483956A (en) * | 2011-11-07 | 2017-12-15 | 英孚布瑞智有限私人贸易公司 | The coding/decoding method of video data |
| CN107483928A (en) * | 2011-09-09 | 2017-12-15 | 株式会社Kt | Method for decoding video signal |
| CN107566835A (en) * | 2011-12-23 | 2018-01-09 | 韩国电子通信研究院 | Picture decoding method, method for encoding images and recording medium |
| CN107948656A (en) * | 2011-10-28 | 2018-04-20 | 太阳专利托管公司 | Picture decoding method and picture decoding apparatus |
| CN108184125A (en) * | 2011-11-10 | 2018-06-19 | 索尼公司 | Image processing equipment and method |
| CN108696754A (en) * | 2017-04-06 | 2018-10-23 | 联发科技股份有限公司 | Method and apparatus for motion vector prediction |
| CN113678455A (en) * | 2019-03-12 | 2021-11-19 | Lg电子株式会社 | Video or image coding for deriving weight index information for bi-prediction |
| US11356696B2 (en) | 2011-10-28 | 2022-06-07 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1523896A (en) * | 2003-09-12 | 2004-08-25 | 浙江大学 | Method and device for predicting motion vector in video codec |
| US20040223548A1 (en) * | 2003-05-07 | 2004-11-11 | Ntt Docomo, Inc. | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program |
| US20080285653A1 (en) * | 2007-05-14 | 2008-11-20 | Himax Technologies Limited | Motion estimation method |
| CN101605256A (en) * | 2008-06-12 | 2009-12-16 | 华为技术有限公司 | Method and device for video encoding and decoding |
- 2011-01-18: Application CN 201110020283 filed in China; published as CN102131094A (status: pending)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040223548A1 (en) * | 2003-05-07 | 2004-11-11 | Ntt Docomo, Inc. | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, moving picture decoding method, moving picture encoding program, and moving picture decoding program |
| CN1592421A (en) * | 2003-05-07 | 2005-03-09 | 株式会社Ntt都科摩 | Moving image encoder, moving image decoder, moving image encoding method, moving image decoding method |
| CN1523896A (en) * | 2003-09-12 | 2004-08-25 | 浙江大学 | Method and device for predicting motion vector in video codec |
| US20080285653A1 (en) * | 2007-05-14 | 2008-11-20 | Himax Technologies Limited | Motion estimation method |
| CN101605256A (en) * | 2008-06-12 | 2009-12-16 | 华为技术有限公司 | Method and device for video encoding and decoding |
Cited By (56)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107580219B (en) * | 2011-09-09 | 2020-12-08 | 株式会社Kt | Method for decoding video signal |
| CN107635140B (en) * | 2011-09-09 | 2020-12-08 | 株式会社Kt | Method for decoding video signal |
| US10805639B2 (en) | 2011-09-09 | 2020-10-13 | Kt Corporation | Method for deriving a temporal predictive motion vector, and apparatus using the method |
| CN107580220B (en) * | 2011-09-09 | 2020-06-19 | 株式会社Kt | Method for decoding video signal |
| CN107483928A (en) * | 2011-09-09 | 2017-12-15 | 株式会社Kt | Method for decoding video signal |
| CN107580218B (en) * | 2011-09-09 | 2020-05-12 | 株式会社Kt | Method for decoding video signal |
| CN107580218A (en) * | 2011-09-09 | 2018-01-12 | 株式会社Kt | Method for decoding video signal |
| CN107580219A (en) * | 2011-09-09 | 2018-01-12 | 株式会社Kt | Method for decoding video signal |
| CN107580220A (en) * | 2011-09-09 | 2018-01-12 | 株式会社Kt | Method for decoding video signal |
| CN107580221A (en) * | 2011-09-09 | 2018-01-12 | 株式会社Kt | Method for decoding video signal |
| CN107592528B (en) * | 2011-09-09 | 2020-05-12 | 株式会社Kt | Method for decoding video signal |
| CN107592527A (en) * | 2011-09-09 | 2018-01-16 | 株式会社Kt | Method for decoding video signal |
| CN107592528A (en) * | 2011-09-09 | 2018-01-16 | 株式会社Kt | Method for decoding video signal |
| CN107635140A (en) * | 2011-09-09 | 2018-01-26 | 株式会社Kt | Method for decoding video signal |
| CN107483928B (en) * | 2011-09-09 | 2020-05-12 | 株式会社Kt | Method for decoding video signal |
| CN107592529B (en) * | 2011-09-09 | 2020-05-12 | 株式会社Kt | Method for decoding video signal |
| CN107592527B (en) * | 2011-09-09 | 2020-05-12 | 株式会社Kt | Method for decoding video signal |
| CN107580221B (en) * | 2011-09-09 | 2020-12-08 | 株式会社Kt | Method for decoding video signal |
| CN107592529A (en) * | 2011-09-09 | 2018-01-16 | 株式会社Kt | Method for decoding video signal |
| US10523967B2 (en) | 2011-09-09 | 2019-12-31 | Kt Corporation | Method for deriving a temporal predictive motion vector, and apparatus using the method |
| US11089333B2 (en) | 2011-09-09 | 2021-08-10 | Kt Corporation | Method for deriving a temporal predictive motion vector, and apparatus using the method |
| US11902568B2 (en) | 2011-10-28 | 2024-02-13 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US12132930B2 (en) | 2011-10-28 | 2024-10-29 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| CN107948656B (en) * | 2011-10-28 | 2021-06-01 | 太阳专利托管公司 | Image decoding method and image decoding device |
| CN107948656A (en) * | 2011-10-28 | 2018-04-20 | 太阳专利托管公司 | Picture decoding method and picture decoding apparatus |
| US11356696B2 (en) | 2011-10-28 | 2022-06-07 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US11622128B2 (en) | 2011-10-28 | 2023-04-04 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US12225228B2 (en) | 2011-10-28 | 2025-02-11 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US11115677B2 (en) | 2011-10-28 | 2021-09-07 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| US11831907B2 (en) | 2011-10-28 | 2023-11-28 | Sun Patent Trust | Image coding method, image decoding method, image coding apparatus, and image decoding apparatus |
| CN107483956A (en) * | 2011-11-07 | 2017-12-15 | 英孚布瑞智有限私人贸易公司 | The coding/decoding method of video data |
| CN107483956B (en) * | 2011-11-07 | 2020-04-21 | 英孚布瑞智有限私人贸易公司 | Method for decoding video data |
| CN108184125B (en) * | 2011-11-10 | 2022-03-08 | 索尼公司 | Image processing device and method |
| CN108184125A (en) * | 2011-11-10 | 2018-06-19 | 索尼公司 | Image processing equipment and method |
| CN107659813A (en) * | 2011-12-23 | 2018-02-02 | 韩国电子通信研究院 | Picture decoding method, method for encoding images and recording medium |
| US11843768B2 (en) | 2011-12-23 | 2023-12-12 | Electronics And Telecommunications Research Institute | Method and apparatus for setting reference picture index of temporal merging candidate |
| CN107659813B (en) * | 2011-12-23 | 2020-04-17 | 韩国电子通信研究院 | Image decoding method, image encoding method, and recording medium |
| CN107682704B (en) * | 2011-12-23 | 2020-04-17 | 韩国电子通信研究院 | Image decoding method, image encoding method, and recording medium |
| CN107566835B (en) * | 2011-12-23 | 2020-02-28 | 韩国电子通信研究院 | Image decoding method, image encoding method, and recording medium |
| CN107566835A (en) * | 2011-12-23 | 2018-01-09 | 韩国电子通信研究院 | Picture decoding method, method for encoding images and recording medium |
| US10848757B2 (en) | 2011-12-23 | 2020-11-24 | Electronics And Telecommunications Research Institute | Method and apparatus for setting reference picture index of temporal merging candidate |
| US12212740B2 (en) | 2011-12-23 | 2025-01-28 | Electronics And Telecommunications Research Institute | Method and apparatus for setting reference picture index of temporal merging candidate |
| US12212741B2 (en) | 2011-12-23 | 2025-01-28 | Electronics And Telecommunications Research Institute | Method and apparatus for setting reference picture index of temporal merging candidate |
| US11284067B2 (en) | 2011-12-23 | 2022-03-22 | Electronics And Telecommunications Research Institute | Method and apparatus for setting reference picture index of temporal merging candidate |
| US11843769B2 (en) | 2011-12-23 | 2023-12-12 | Electronics And Telecommunications Research Institute | Method and apparatus for setting reference picture index of temporal merging candidate |
| CN107682704A (en) * | 2011-12-23 | 2018-02-09 | 韩国电子通信研究院 | Picture decoding method, method for encoding images and recording medium |
| CN104838656B (en) * | 2012-07-09 | 2018-04-17 | 高通股份有限公司 | Decoding and method, video decoder and the computer-readable storage medium of encoded video data |
| CN104838656A (en) * | 2012-07-09 | 2015-08-12 | 高通股份有限公司 | Temporal motion vector prediction in video coding extensions |
| WO2014106435A1 (en) * | 2013-01-07 | 2014-07-10 | Mediatek Inc. | Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding |
| US9967586B2 (en) | 2013-01-07 | 2018-05-08 | Mediatek Inc. | Method and apparatus of spatial motion vector prediction derivation for direct and skip modes in three-dimensional video coding |
| CN106851273A (en) * | 2017-02-10 | 2017-06-13 | 北京奇艺世纪科技有限公司 | A kind of motion-vector coding method and device |
| CN106851273B (en) * | 2017-02-10 | 2019-08-06 | 北京奇艺世纪科技有限公司 | A kind of motion-vector coding method and device |
| CN108696754A (en) * | 2017-04-06 | 2018-10-23 | 联发科技股份有限公司 | Method and apparatus for motion vector prediction |
| CN113678455B (en) * | 2019-03-12 | 2024-01-16 | Lg电子株式会社 | Video or image coding for deriving weight index information for bi-prediction |
| US11876960B2 (en) | 2019-03-12 | 2024-01-16 | Lg Electronics Inc. | Video or image coding for inducing weight index information for bi-prediction |
| CN113678455A (en) * | 2019-03-12 | 2021-11-19 | Lg电子株式会社 | Video or image coding for deriving weight index information for bi-prediction |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN102833540B (en) | motion prediction method | |
| CN102131094A (en) | motion prediction method | |
| EP2534841B1 (en) | Motion vector prediction method | |
| EP3886437B1 (en) | Video encoding and decoding | |
| RU2705435C1 (en) | Method and device for encoding motion information, as well as a method and apparatus for decoding | |
| KR100952340B1 (en) | Method and apparatus for determining coding mode using spatiotemporal complexity | |
| KR101387467B1 (en) | Methods For Encoding/Decoding High Definition Image And Apparatuses For Performing The Same | |
| KR101444675B1 (en) | Method and Apparatus for Encoding and Decoding Video | |
| KR20130116056A (en) | Methods for encoding/decoding high definition image and apparatuses for performing the same | |
| KR20200055124A (en) | Methods and devices for video encoding and video decoding | |
| KR20130112374A (en) | Video coding method for fast intra prediction and apparatus thereof | |
| KR20080069069A (en) | Method and apparatus for intra/inter prediction | |
| WO2012081949A2 (en) | Method and apparatus for inter prediction | |
| KR101582501B1 (en) | Intra Prediction Method and Apparatus and Image Encoding/Decoding Method and Apparatus Using Same | |
| KR101533434B1 (en) | Intra Prediction Method and Apparatus and Image Encoding/Decoding Method and Apparatus Using Same | |
| Mayuran et al. | Evolutionary strategy based improved motion estimation technique for H. 264 video coding | |
| Hsu et al. | Selective Block Size Decision Algorithm for Intra Prediction in Video Coding and Learning Website | |
| KR20100098225A (en) | Method and apparatus for computational complexity control of video encoding |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
| WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20110720 |