
HK1216273B - Method and device for encoding and decoding video data - Google Patents

Method and device for encoding and decoding video data

Info

Publication number
HK1216273B
Authority
HK
Hong Kong
Prior art keywords
uni
prediction
list
inter
directional
Prior art date
Application number
HK16104244.8A
Other languages
Chinese (zh)
Other versions
HK1216273A1 (en)
Inventor
Xianglin Wang
Vadim Seregin
Marta Karczewicz
Original Assignee
QUALCOMM Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/628,562 (US 9,451,277 B2)
Application filed by QUALCOMM Incorporated
Publication of HK1216273A1
Publication of HK1216273B


Abstract

The subject application relates to restriction of prediction units in B slices to uni-directional inter prediction. A computing device determines whether a prediction unit (PU) in a B slice is restricted to uni-directional inter prediction. In addition, the computing device generates a merge candidate list for the PU and determines a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the computing device generates a predictive video block for the PU based on no more than one reference block associated with motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, the computing device generates the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.

Description

Method and apparatus for encoding and decoding video data
This application is a divisional application. The parent application is an invention patent application with international application number PCT/US2013/025153, filed on February 7, 2013, assigned application number 201380008193.9 upon entry into the China national stage, and entitled "Restriction of prediction units in B slices to uni-directional inter prediction".
The present application claims the benefit of United States provisional patent application No. 61/596,597, filed February 8, 2012, and United States provisional patent application No. 61/622,968, filed April 11, 2012, the entire contents of each of which are incorporated herein by reference.
Technical Field
This disclosure relates to video coding, and in particular to inter-prediction in video coding.
Background
Digital video capabilities can be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, Personal Digital Assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, video gaming devices, video gaming consoles, cellular or satellite radio telephones, so-called "smart phones," video teleconferencing devices, video streaming devices, and the like. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), the High Efficiency Video Coding (HEVC) standard currently under development, and extensions of such standards. Video devices may transmit, receive, encode, decode, and/or store digital video information more efficiently by implementing such video compression techniques.
Video compression techniques perform spatial (intra-picture) prediction and/or temporal (inter-picture) prediction to reduce or remove redundancy inherent in video sequences. For block-based video coding, a video slice (i.e., a video frame or a portion of a video frame) may be partitioned into video blocks, which may also be referred to as treeblocks, Coding Units (CUs) and/or coding nodes. Video blocks in an intra-coded (I) slice of a picture are encoded relative to reference samples in neighboring blocks in the same picture using spatial prediction. Video blocks in an inter-coded (P or B) slice of a picture may use spatial prediction with respect to reference samples in neighboring blocks in the same picture, or temporal prediction with respect to reference samples in other reference pictures. A picture may be referred to as a frame and a reference picture may be referred to as a reference frame.
The spatial or temporal prediction results generate a predicted video block of the block to be coded. The residual data represents pixel differences between the original block to be coded and the predictive video block. An inter-coded block is encoded according to a motion vector that points to a block of reference samples that forms a predictive video block and residual data that indicates the difference between the coded block and the predictive video block. The intra-coded block is encoded according to an intra-coding mode and residual data. For further compression, the residual data may be transformed from the pixel domain to a transform domain, producing residual transform coefficients, which may then be quantized. The quantized transform coefficients initially arranged in a two-dimensional array may be scanned in order to generate a one-dimensional vector of transform coefficients, and entropy coding may be applied to achieve more compression.
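To make this pipeline concrete, the following minimal sketch computes a residual block, transforms it with an orthonormal DCT-II, quantizes the coefficients, and scans them into a one-dimensional vector. It is illustrative only: the 4x4 block size, the floating-point DCT, and the quantization step of 10 are arbitrary choices for this example, not values taken from any coding standard (real codecs use integer transform approximations).

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; real codecs use integer approximations of this.
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def zigzag(block):
    # Scan a square 2-D coefficient block into a 1-D vector along anti-diagonals.
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([block[i, j] for i, j in order])

rng = np.random.default_rng(0)
original = rng.integers(0, 256, (4, 4))        # block to be coded
predicted = rng.integers(0, 256, (4, 4))       # predictive video block
residual = original - predicted                # pixel differences
d = dct_matrix(4)
coeffs = d @ residual @ d.T                    # pixel domain -> transform domain
quantized = np.round(coeffs / 10).astype(int)  # quantization (step 10, lossy)
print(zigzag(quantized))                       # 1-D vector for entropy coding
```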
Disclosure of Invention
In general, techniques are described for inter-prediction in a video coding process. The video coder determines whether a Prediction Unit (PU) in the B slice is restricted to uni-directional inter prediction. In addition, the video coder generates a merge candidate list for the PU and determines a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the video coder generates a predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, the video coder generates a predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
In one aspect, this disclosure describes a method for coding video data. The method includes determining whether a PU in a B slice is restricted to uni-directional inter prediction. The method also includes generating a merge candidate list for the PU. Additionally, the method includes determining a selected merge candidate in the merge candidate list. In addition, the method includes generating, if the PU is restricted to uni-directional inter prediction, a predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. The method also includes generating, if the PU is not restricted to uni-directional inter prediction, the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
In another aspect, this disclosure describes a video coding device comprising one or more processors configured to determine whether a PU in a B slice is restricted to uni-directional inter prediction. The one or more processors are also configured to generate a merge candidate list for the PU and determine a selected merge candidate in the merge candidate list. The one or more processors are configured such that if the PU is restricted to uni-directional inter prediction, the one or more processors generate a predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. Furthermore, the one or more processors are configured such that if the PU is not restricted to uni-directional inter prediction, the one or more processors generate the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
In another aspect, this disclosure describes a video coding device comprising means for determining whether a PU in a B slice is restricted to uni-directional inter prediction. The video coding device also comprises means for generating a merge candidate list for the PU. In addition, the video coding device comprises means for determining a selected merge candidate in a merge candidate list. The video coding device also comprises means for generating, if the PU is restricted to uni-directional inter prediction, a predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. The video coding device also comprises means for generating, if the PU is not restricted to uni-directional inter prediction, a predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
In another aspect, this disclosure describes a computer program product comprising one or more computer-readable storage media storing instructions that, when executed, configure one or more processors to determine whether a PU in a B slice is restricted to uni-directional inter prediction. The instructions also configure the one or more processors to generate a merge candidate list for the PU, and determine a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the instructions configure the one or more processors to generate a predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, the instructions configure the one or more processors to generate a predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
The details of one or more examples are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
Drawings
FIG. 1 is a block diagram illustrating an example video coding system that may utilize the techniques described in this disclosure.
FIG. 2 is a block diagram illustrating an example video encoder configured to implement the techniques described in this disclosure.
FIG. 3 is a block diagram illustrating an example video decoder configured to implement the techniques described in this disclosure.
Fig. 4 is a flow diagram illustrating an example motion compensation operation.
Fig. 5 is a flow diagram illustrating another example motion compensation operation.
Fig. 6 is a flow diagram illustrating example operations for generating a merge candidate list.
Fig. 7 is a flow diagram illustrating an example process for generating artificial merge candidates.
Fig. 8 is a flow diagram illustrating example operations for determining motion information for a prediction unit using an advanced motion vector prediction mode.
Detailed Description
As described below, a picture may be divided into one or more slices. Each of the slices may include an integer number of Coding Units (CUs). Each CU may have one or more Prediction Units (PUs). The slice may be an I slice, a P slice, or a B slice. In an I slice, all PUs are intra predicted. The video encoder may perform intra prediction or uni-directional inter prediction on PUs in the P slice. When the video encoder performs uni-directional inter prediction on a PU in a P slice, the video encoder may identify a reference block in a reference picture listed in a first reference picture list ("list 0"), or may synthesize a reference block based on reference samples in a reference picture listed in list 0. The reference block may be a block of reference samples within a reference picture. The reference samples may correspond to actual pixels in the reference block, or to pixels synthesized, for example, by interpolation using actual pixels. The video encoder may then generate the predictive video block for the PU based on the reference block for the PU.
The video encoder may perform list 0 uni-directional inter prediction, list 1 uni-directional inter prediction, or bi-directional inter prediction on the PUs in the B slice. When the video encoder performs list 0 uni-directional inter prediction on a PU, the video encoder may identify a reference block in the reference picture listed in list 0 or synthesize a reference block based on reference samples in the reference picture listed in list 0. The video encoder may then generate the predictive video block for the PU based on the reference block. When the video encoder performs list 1 uni-directional inter prediction on a PU, the video encoder may identify a reference block in a reference picture listed in a second reference picture list ("list 1") or may synthesize a reference block based on reference samples in the reference picture listed in list 1. The video encoder may then generate the predictive video block for the PU based on the reference block. When the video encoder performs bi-directional inter prediction on a PU, the video encoder may identify a reference block in the reference picture listed in list 0 or synthesize a reference block based on reference samples in the reference picture listed in list 0. In addition, when the video encoder performs bi-directional inter prediction on the PU, the video encoder may identify a reference block in the reference pictures listed in list 1 or synthesize a reference block based on reference samples in the reference pictures listed in list 1. The video encoder may then generate the predictive video block for the PU based on the two reference blocks.
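The difference between these modes comes down to how many reference blocks feed the predictive video block. The following sketch is a simplified illustration; the rounded average used here stands in for the weighted-prediction machinery a real codec would apply, and the block contents are arbitrary:

```python
import numpy as np

def predictive_block(ref_l0=None, ref_l1=None):
    """Form a predictive video block from one (uni-directional) or two
    (bi-directional) reference blocks."""
    if ref_l0 is not None and ref_l1 is not None:
        # Bi-directional: combine the list 0 and list 1 reference blocks
        # (rounded average here; real codecs may apply weights and offsets).
        return (ref_l0.astype(np.int32) + ref_l1 + 1) >> 1
    ref = ref_l0 if ref_l0 is not None else ref_l1
    return ref.copy()  # uni-directional: the single reference block

l0 = np.full((4, 4), 100, dtype=np.int32)
l1 = np.full((4, 4), 110, dtype=np.int32)
print(predictive_block(l0)[0, 0])      # 100 (list 0 uni-directional)
print(predictive_block(l0, l1)[0, 0])  # 105 (bi-directional average)
```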
The video encoder may signal the motion information of the PU to enable the video decoder to identify or synthesize a reference block that the video encoder uses to generate the predictive video block of the PU. The motion information of the PU may include one or more motion vectors, reference picture indices, and a flag indicating whether inter prediction is based on list 0 and/or list 1. In some examples, a video encoder may signal the motion information of a PU using merge mode. When the video encoder signals the motion information of the PU using the merge mode, the video encoder may generate a merge candidate list for the PU. The merge candidate list may include a plurality of merge candidates, each specifying a set of motion information.
A merge candidate may be a uni-directional merge candidate if the merge candidate specifies motion information that identifies a single position in a reference picture listed in list 0 or list 1. A reference block may be associated with a set of motion information if samples in the reference block are determined based on samples at locations identified by the motion information in a reference picture identified by the motion information. For example, a reference block may be associated with a set of motion information if the samples in the reference block are the same as the samples in the video block at the location identified by the motion information in the reference picture identified by the motion information. A reference block may also be associated with a set of motion information if samples in the reference block are synthesized (e.g., interpolated) from samples in a video block at a location identified by the motion information in a reference frame identified by the motion information.
The merge candidate may be a bi-directional merge candidate if the merge candidate specifies motion information that identifies a position in a reference picture listed in list 0 and a position in a reference picture listed in list 1. The video encoder may generate the motion information specified by the merge candidate based on motion information of PUs that spatially neighbor the current PU and/or the co-located PU in the different picture. After generating the merge list for the current PU, the video encoder may select one of the merge candidates in the merge candidate list and signal a position within the merge candidate list of the selected merge candidate. The video decoder may determine motion information for the current PU based on the motion information specified by the selected merge candidate.
In terms of the operations and memory bandwidth required, generating a predictive video block for a PU based on two reference blocks may be more complex than generating a predictive video block for a PU based on a single reference block. The complexity associated with generating predictive video blocks based on two reference blocks may increase as the number of bi-directionally inter-predicted PUs in a B slice increases. This may be especially true when the number of small bi-directionally inter-predicted PUs increases. Thus, some PUs in a B slice may advantageously be limited to uni-directional inter prediction.
The video encoder may restrict PUs in a B slice to uni-directional inter prediction by selecting uni-directional merge candidates only from the merge candidate list of the PU. However, in some examples, the merge candidate list may not include any uni-directional merge candidates. In such examples, the video encoder may not be able to signal the motion information of the PU using merge mode. This may reduce coding performance. Furthermore, even if the merge candidate list includes at least one uni-directional merge candidate, coding efficiency may be reduced if the reference block associated with the motion information specified by the uni-directional merge candidate is not sufficiently similar to the video block associated with the PU.
In accordance with the techniques of this disclosure, a video coder (e.g., a video encoder or a video decoder) may determine whether a PU in a B slice is restricted to uni-directional inter prediction. For example, the video coder may determine that the PU is restricted to uni-directional inter prediction if a size characteristic of the PU is less than a particular threshold. The size characteristic of the PU may be a characteristic of the size of the video block associated with the PU, such as the height, width, diagonal length, etc., of the video block associated with the PU. In addition, the video coder may generate a merge candidate list for the PU and determine a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the video coder may generate the predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, the video coder may generate the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate. By limiting some PUs to uni-directional inter prediction in this manner, a video coder may reduce the complexity associated with generating a prediction video block based on multiple reference blocks. This may increase the speed at which a video coder is able to code video data and may reduce data bandwidth requirements.
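A minimal sketch of this decision, under assumed names: the `MergeCandidate` fields, the helper names, and the default threshold of 8 are illustrative choices for this example, not definitions from any specification.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MergeCandidate:
    # A None entry for one list means the candidate is uni-directional.
    mv_l0: Optional[Tuple[int, int]] = None
    ref_idx_l0: Optional[int] = None
    mv_l1: Optional[Tuple[int, int]] = None
    ref_idx_l1: Optional[int] = None

def is_restricted(width, height, threshold=8):
    # One possible size criterion described above: restrict small PUs.
    return min(width, height) < threshold

def used_reference_blocks(cand, restricted):
    """Return which prediction directions actually feed the predictive video
    block: no more than one when the PU is restricted to uni-directional
    inter prediction."""
    dirs = [d for d, mv in ((0, cand.mv_l0), (1, cand.mv_l1)) if mv is not None]
    if restricted and len(dirs) == 2:
        return dirs[:1]  # ignore list 1; keep no more than one reference block
    return dirs

bi = MergeCandidate(mv_l0=(3, -1), ref_idx_l0=0, mv_l1=(-2, 4), ref_idx_l1=1)
print(used_reference_blocks(bi, is_restricted(4, 8)))    # [0]    -> uni only
print(used_reference_blocks(bi, is_restricted(16, 16)))  # [0, 1] -> bi allowed
```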
For ease of explanation, this disclosure may describe locations or video blocks as having various spatial relationships with CUs or PUs. This description may be interpreted to mean that locations or video blocks have various spatial relationships to video blocks associated with a CU or PU. Furthermore, this disclosure may refer to a PU that the video coder is currently coding as the current PU. This disclosure may refer to a CU that a video coder is currently coding as a current CU. This disclosure may refer to a picture that a video coder is currently coding as a current picture.
The figures illustrate examples. Elements indicated by reference numerals in the drawings correspond to elements indicated by the same reference numerals in the following description. In the present disclosure, elements having names beginning with ordinal words (e.g., "first," "second," "third," etc.) do not necessarily imply a particular order to the elements. Rather, such ordinal words are used only to refer to different elements of the same or similar type.
FIG. 1 is a block diagram illustrating an example video coding system 10 that may utilize techniques of this disclosure. As used herein, the term "video coder" generally refers to both video encoders and video decoders. In this disclosure, the term "video coding" or "coding" may generally refer to video encoding or video decoding.
As shown in fig. 1, video coding system 10 includes a source device 12 and a destination device 14. Source device 12 generates encoded video data. Accordingly, source device 12 may be referred to as a video encoding device or a video encoding apparatus. Destination device 14 may decode the encoded video data generated by source device 12. Destination device 14 may therefore be referred to as a video decoding device or a video decoding apparatus. Source device 12 and destination device 14 may be examples of video coding devices or video coding apparatuses.
Source device 12 and destination device 14 may comprise a wide range of devices, including desktop computers, mobile computing devices, notebook (e.g., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called "smart" phones, televisions, cameras, display devices, digital media players, video game consoles, in-vehicle computers, and the like. In some examples, source device 12 and destination device 14 may be equipped for wireless communication.
Destination device 14 may receive encoded video data from source device 12 via channel 16. Channel 16 may comprise a type of media or device capable of moving encoded video data from source device 12 to destination device 14. In one example, channel 16 may comprise a communication medium that enables source device 12 to transmit encoded video data directly to destination device 14 in real-time. In this example, source device 12 may modulate the encoded video data according to a communication standard, such as a wireless communication protocol, and may transmit the modulated video data to destination device 14. The communication medium may comprise a wireless or wired communication medium such as a Radio Frequency (RF) spectrum or one or more physical transmission lines. The communication medium may form part of a packet-based network, such as a local area network, a wide area network, or a global network (e.g., the internet). The communication medium may include routers, switches, base stations, or other apparatus that facilitate communication from source device 12 to destination device 14.
In another example, channel 16 may correspond to a storage medium that stores encoded video data generated by source device 12. In this example, destination device 14 may access the storage medium via disk access or card access. The storage medium may comprise a variety of locally-accessed data storage media such as Blu-ray discs, DVDs, CD-ROMs, flash memory, or other suitable digital storage media for storing encoded video data. In another example, channel 16 may include a file server or another intermediate storage device that stores encoded video data generated by source device 12. In this example, destination device 14 may access encoded video data stored at the file server or other intermediate storage device via streaming or download. The file server may be a type of server capable of storing encoded video data and transmitting the encoded video data to destination device 14. Example file servers include web servers (e.g., for websites), File Transfer Protocol (FTP) servers, Network Attached Storage (NAS) devices, and local disk drives. Destination device 14 may access the encoded video data via a standard data connection, including an internet connection. Example types of data connections may include a wireless channel (e.g., a Wi-Fi connection), a wired connection (e.g., DSL, cable modem, etc.), or a combination of both that is suitable for accessing encoded video data stored on a file server. The transmission of the encoded video data from the file server may be a streaming transmission, a download transmission, or a combination of both.
The techniques of this disclosure are not limited to wireless applications or settings. The techniques may be applied to video coding to support any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, streaming video transmissions (e.g., via the internet), encoding digital video for storage on a data storage medium, decoding digital video stored on a data storage medium, or other applications. In some examples, video coding system 10 may be configured to support one-way or two-way video transmission to support applications such as video streaming, video playback, video broadcasting, and/or video telephony.
In the example of fig. 1, source device 12 includes a video source 18, a video encoder 20, and an output interface 22. In some cases, output interface 22 may include a modulator/demodulator (modem) and/or a transmitter. In source device 12, video source 18 may include sources such as a video capture device (e.g., a video camera), a video archive containing previously captured video data, a video feed interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources.
Video encoder 20 may encode captured, pre-captured, or computer-generated video data. The encoded video data may be transmitted directly to destination device 14 via output interface 22 of source device 12. The encoded video data may also be stored onto a storage medium or file server for subsequent access by destination device 14 for decoding and/or playback.
In the example of fig. 1, destination device 14 includes an input interface 28, a video decoder 30, and a display device 32. In some cases, input interface 28 may include a receiver and/or a modem. Input interface 28 of destination device 14 receives the encoded video data over channel 16. The encoded video data may include a variety of syntax elements generated by video encoder 20 that represent the video data. Such syntax elements may be included with encoded video data transmitted over a communication medium, stored on a storage medium, or stored on a file server.
The display device 32 may be integrated with the destination device 14 or may be external to the destination device 14. In some examples, destination device 14 may include an integrated display device, and may also be configured to interface with an external display device. In other examples, destination device 14 may be a display device. In general, display device 32 displays the decoded video data to a user. The display device 32 may comprise any of a variety of display devices, such as a Liquid Crystal Display (LCD), a plasma display, an Organic Light Emitting Diode (OLED) display, or another type of display device.
Video encoder 20 and video decoder 30 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard currently under development, and may conform to the HEVC Test Model (HM). A recent draft of the upcoming HEVC standard, referred to as "HEVC Working Draft 7" or "WD7," is described in document JCTVC-I1003_d54, Bross et al., "High Efficiency Video Coding (HEVC) text specification draft 7," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 9th Meeting: Geneva, Switzerland, May 2012, which is downloadable from http://phenix.int-evry.fr/jct/doc_end_user/documents/9_Geneva/wg11/JCTVC-I1003-v6.zip, the entire contents of which are incorporated herein by reference. Alternatively, video encoder 20 and video decoder 30 may operate according to other proprietary or industry standards, such as the ITU-T H.264 standard, alternatively referred to as MPEG-4, Part 10, Advanced Video Coding (AVC), or an extension of such a standard. However, the techniques of this disclosure are not limited to any particular coding standard or technique. Other examples of video compression standards and techniques include MPEG-2, ITU-T H.263, and proprietary or open source compression formats (e.g., VP8 and related formats).
Although not shown in the example of fig. 1, video encoder 20 and video decoder 30 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, in some examples, the MUX-DEMUX units may conform to the ITU H.223 multiplexer protocol, or other protocols such as the User Datagram Protocol (UDP).
Again, fig. 1 is merely an example, and the techniques of this disclosure may be applicable to video coding settings (e.g., video encoding or video decoding) that do not necessarily include any data communication between encoding and decoding devices. In other examples, the data may be retrieved from local memory, streamed over a network, and so on. The encoding device may encode and store data to memory, and/or the decoding device may retrieve and decode data from memory. In many examples, the encoding and decoding are performed by devices that do not communicate with each other, but only encode data to and/or retrieve and decode data from memory.
Video encoder 20 and video decoder 30 may each be implemented as any of a variety of suitable circuits, such as one or more microprocessors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, hardware, or any combinations thereof. When the techniques are implemented in part in software, a device may store instructions of the software in a suitable non-transitory computer-readable storage medium and may execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Any of the above, including hardware, software, a combination of hardware and software, etc., may be considered as one or more processors. Each of video encoder 20 and video decoder 30 may be included in one or more encoders or decoders, either of which may be integrated in a respective device as part of a combined encoder/decoder (CODEC).
This disclosure may generally refer to video encoder 20 "signaling" particular information to another device, such as video decoder 30. It should be understood, however, that video encoder 20 may signal information by associating particular syntax elements with various encoded portions of video data. That is, video encoder 20 may "signal" data by storing particular syntax elements to headers of various encoded portions of the video data. In some cases, such syntax elements may be encoded and stored (e.g., in a storage system) prior to being received and decoded by video decoder 30. Thus, the term "signaling" may generally refer to the communication of syntax elements or other data used to decode compressed video data. Such communication may occur in real time or near real time. Alternatively, such communication may occur over a span of time, such as when syntax elements are stored to a medium in an encoded bitstream at the time of encoding and are then retrieved by a decoding device at any time after being stored to the medium.
As briefly mentioned above, video encoder 20 encodes video data. The video data may include one or more pictures, each of which may be a still image that forms part of the video. In some examples, a picture may be referred to as a video "frame". When video encoder 20 encodes video data, video encoder 20 may generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. A coded picture is a coded representation of a picture.
To generate the bitstream, video encoder 20 may perform an encoding operation on each picture in the video data. When video encoder 20 performs an encoding operation on a picture, video encoder 20 may generate a series of coded pictures and associated data. The associated data may include sequence parameter sets, picture parameter sets, adaptation parameter sets, and other syntax structures. A Sequence Parameter Set (SPS) may contain parameters applicable to zero or more picture sequences. A Picture Parameter Set (PPS) may contain parameters applicable to zero or more pictures. An Adaptation Parameter Set (APS) may contain parameters applicable to zero or more pictures. The parameters in the APS may be parameters that are more likely to change than the parameters in the PPS.
To generate a coded picture, video encoder 20 may partition the picture into equal-sized video blocks. Each video block may be a two-dimensional array of samples. Each of the video blocks is associated with a treeblock. In some examples, a treeblock may be referred to as a Largest Coding Unit (LCU) or a "coding treeblock." The treeblocks of HEVC may be broadly analogous to the macroblocks of previous standards such as H.264/AVC. However, a treeblock is not necessarily limited to a particular size and may include one or more Coding Units (CUs). Video encoder 20 may use quadtree partitioning to partition the video block of a treeblock into video blocks associated with CUs, hence the term "treeblock."
In some examples, video encoder 20 may partition a picture into multiple slices. Each of the slices may include an integer number of CUs. In some examples, a slice includes an integer number of treeblocks. In other examples, the boundary of the slice may be within a tree block.
As part of performing encoding operations on the picture, video encoder 20 may perform encoding operations on each slice of the picture. When video encoder 20 performs an encoding operation on a slice, video encoder 20 may generate encoded data associated with the slice. The encoded data associated with a slice may be referred to as a "coded slice".
To generate a coded slice, video encoder 20 may perform an encoding operation on each treeblock in the slice. When video encoder 20 performs an encoding operation on a treeblock, video encoder 20 may generate a coded treeblock. The coded treeblock may comprise an encoded representation of the treeblock.
When video encoder 20 generates a coded slice, video encoder 20 may perform encoding operations on (i.e., encode) the treeblocks in the slice, which in this case represent largest coding units, according to a raster scan order. In other words, video encoder 20 may encode the treeblocks of the slice in an order that proceeds from left to right across the topmost row of treeblocks in the slice, then from left to right across the next lower row of treeblocks, and so on, until video encoder 20 has encoded each of the treeblocks in the slice.
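The raster scan order itself is simple to state in code; a minimal sketch (the grid dimensions are arbitrary):

```python
def raster_order(rows, cols):
    # Treeblock coding order: left to right across the top row, then left to
    # right across the next lower row, and so on.
    for row in range(rows):
        for col in range(cols):
            yield (row, col)

print(list(raster_order(2, 3)))  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```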
Since the treeblocks are encoded according to a raster scan order, treeblocks above and to the left of a given treeblock may have been encoded, but treeblocks below and to the right of the given treeblock have not been encoded. Thus, video encoder 20 may be able to access information generated by encoding treeblocks above and to the left of a given treeblock when encoding the given treeblock. However, video encoder 20 may not have access to information generated by encoding treeblocks below and to the right of a given treeblock when encoding the given treeblock.
To generate a coded treeblock, video encoder 20 may recursively perform quadtree partitioning on the video block of the treeblock to partition the video block into progressively smaller video blocks. Each of the smaller video blocks may be associated with a different CU. For example, video encoder 20 may partition the video block of a treeblock into four equally sized sub-blocks, partition one or more of the sub-blocks into four equally sized sub-sub-blocks, and so on. A partitioned CU may be a CU whose video block is partitioned into video blocks associated with other CUs. An undivided CU may be a CU whose video block is not partitioned into video blocks associated with other CUs.
One or more syntax elements in the bitstream may indicate the maximum number of times video encoder 20 may partition the video block of a treeblock. The video block of a CU may be square in shape. The size of the video block of a CU (i.e., the size of the CU) may range from 8x8 pixels up to the size of the video block of the treeblock (i.e., the size of the treeblock), with a maximum of 64x64 pixels or greater.
Video encoder 20 may perform an encoding operation (i.e., encoding) on each CU of a treeblock according to the z-scan order. In other words, video encoder 20 may encode the top-left CU, the top-right CU, the bottom-left CU, and then the bottom-right CU in that order. When video encoder 20 performs an encoding operation on a partitioned CU, video encoder 20 may encode CUs associated with sub-blocks of a video block of the partitioned CU according to the z-scan order. In other words, video encoder 20 may encode the CU associated with the top-left sub-block, the CU associated with the top-right sub-block, the CU associated with the bottom-left sub-block, and then the CU associated with the bottom-right sub-block in that order.
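Because the z-scan order follows the quadtree, it is naturally expressed as a recursion. The following sketch assumes, for illustration only, a treeblock that is fully partitioned down to a fixed minimum CU size:

```python
def z_scan(x, y, size, min_size, out):
    # Visit quadrants in z order: top-left, top-right, bottom-left, bottom-right.
    if size == min_size:
        out.append((x, y, size))
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            z_scan(x + dx, y + dy, half, min_size, out)

order = []
z_scan(0, 0, 64, 32, order)
print(order)  # [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]
```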
Since the CUs of a treeblock are encoded according to the z-scan order, the CUs above, above and to the left of, above and to the right of, to the left of, and below and to the left of a given CU may already have been encoded. CUs below or to the right of the given CU have not yet been encoded. Thus, video encoder 20 may be able to access information generated by encoding some CUs that neighbor a given CU when the given CU is encoded. However, video encoder 20 may not have access to information generated by encoding other CUs that neighbor the given CU when the given CU is encoded.
When video encoder 20 encodes an undivided CU, video encoder 20 may generate one or more Prediction Units (PUs) of the CU. Each of the PUs of the CU may be associated with a different video block within the video block of the CU. Video encoder 20 may generate a predictive video block for each PU of the CU. The predictive video block of the PU may be a block of samples. Video encoder 20 may use intra prediction or inter prediction to generate the predictive video block for the PU.
When video encoder 20 generates the predictive video block for the PU using intra prediction, video encoder 20 may generate the predictive video block for the PU based on decoded samples of the picture associated with the PU. A CU is an intra-predicted CU if video encoder 20 uses intra-prediction to generate predicted video blocks for PUs of the CU.
When video encoder 20 generates the predictive video block for the PU using inter prediction, video encoder 20 may generate the predictive video block for the PU based on decoded samples of one or more pictures other than the picture associated with the PU. A CU is an inter-predicted CU if video encoder 20 uses inter prediction to generate the predictive video blocks for the PUs of the CU.
Moreover, when video encoder 20 generates the predictive video block for the PU using inter prediction, video encoder 20 may generate motion information for the PU. The motion information of the PU may indicate one or more reference blocks of the PU. Each reference block of a PU may be a video block within a reference picture. The reference picture may be a picture other than the picture associated with the PU. In some examples, a reference block of a PU may also be referred to as a "reference sample" of the PU. Video encoder 20 may generate the predictive video block for the PU based on the reference block for the PU.
As discussed above, a slice may be an I slice, a P slice, or a B slice. In an I slice, all PUs are intra predicted. In P-slices and B-slices, a PU may be intra-predicted or inter-predicted. When video encoder 20 performs inter prediction on PUs in a P slice, video encoder 20 may generate motion information that identifies a location in a single reference picture. In other words, a PU may be uni-directionally inter predicted. The motion information may include a reference picture index and a motion vector. The reference picture index may indicate the position of the reference picture in a first reference picture list ("list 0"). The motion vector may indicate a spatial displacement between a video block associated with the PU and a reference block within a reference picture. A video coder, such as video encoder 20 or video decoder 30, may then generate the predictive video block for the PU based on the single reference block associated with the motion information of the PU. For example, the video coder may generate the predictive video block for the PU such that the predictive video block matches the reference block.
A PU in a B slice may be uni-directional inter-predicted based on list 0, uni-directional inter-predicted based on a second reference picture list ("list 1"), or bi-directional inter-predicted. If a PU in a B slice is uni-directionally inter predicted based on list 0, the motion information of the PU may include a list 0 reference picture index and a list 0 motion vector. The list 0 reference picture index may identify a reference picture by indicating the position of the reference picture in list 0. The list 0 motion vector may indicate a spatial displacement between the video block associated with the PU and a reference block within a reference picture. Video encoder 20 may generate the predictive video block for the PU based on the reference block associated with the list 0 motion vector. In other words, video encoder 20 may generate the predictive video block for the PU based on the block of reference samples identified by the list 0 motion vector, or may generate the predictive video block for the PU based on a block of reference samples synthesized (e.g., interpolated) from the block of reference samples identified by the list 0 motion vector.
If a PU in a B slice is uni-directionally inter predicted based on list 1, the motion information of the PU may include a list 1 reference picture index and a list 1 motion vector. The list 1 reference picture index may identify the reference picture by indicating the position of the reference picture in list 1. The list 1 motion vector may indicate a spatial displacement between the PU and a reference block within the reference picture. Video encoder 20 may generate the predictive video block for the PU based on the block of reference samples associated with the list 1 motion vector. For example, video encoder 20 may generate the predictive video block for the PU based on the block of reference samples identified by the list 1 motion vector, or may generate the predictive video block for the PU based on a block of reference samples synthesized (e.g., interpolated) from the block of reference samples identified by the list 1 motion vector.
If a PU in a B slice is bi-directionally inter predicted, the motion information of the PU may include a list 0 reference picture index, a list 0 motion vector, a list 1 reference picture index, and a list 1 motion vector. In some examples, the list 0 and list 1 reference picture indices may identify the same picture. Video encoder 20 may generate the predictive video block for the PU based on the reference blocks associated with the list 0 and list 1 motion vectors. In some examples, video encoder 20 may generate the predictive video block for the PU by interpolating the predictive video block from samples in the reference block associated with the list 0 motion vector and samples in the reference block associated with the list 1 motion vector.
After video encoder 20 generates the predictive video blocks for one or more PUs of the CU, video encoder 20 may generate residual data for the CU based on the predictive video blocks for the PUs of the CU. The residual data of the CU may indicate differences between samples in the prediction video blocks of the PUs of the CU and the original video block of the CU.
Moreover, as part of performing encoding operations on an undivided CU, video encoder 20 may perform recursive quadtree partitioning on the residual data of the CU to partition the residual data of the CU into one or more smaller blocks of residual data (i.e., residual video blocks) associated with Transform Units (TUs) of the CU.
Video encoder 20 may apply one or more transforms to a residual video block associated with a TU to generate a transform coefficient block (i.e., a block of transform coefficients) associated with the TU. Conceptually, a transform coefficient block may be a two-dimensional (2D) matrix of transform coefficients.
After generating the transform coefficient block, video encoder 20 may perform a quantization process on the transform coefficient block. Quantization generally refers to the process by which transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing further compression. The quantization process may reduce the bit depth associated with some or all of the transform coefficients. For example, an n-bit transform coefficient may be rounded down to an m-bit transform coefficient during quantization, where n is greater than m.
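As a worked example of the rounding-down behavior just described (the values are arbitrary):

```python
def reduce_bit_depth(coeff, n, m):
    # Round an n-bit coefficient down to m bits by discarding the n - m least
    # significant bits, where n is greater than m, as described above.
    assert n > m >= 1
    return coeff >> (n - m)

print(reduce_bit_depth(0b10110111, 8, 4))  # 11 (binary 1011)
```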
Video encoder 20 may associate each CU with a Quantization Parameter (QP) value. The QP value associated with a CU may determine how video encoder 20 quantizes a transform coefficient block associated with the CU. Video encoder 20 may adjust the degree of quantization applied to the transform coefficient block associated with the CU by adjusting the QP value associated with the CU.
After video encoder 20 quantizes the transform coefficient block, video encoder 20 may generate a set of syntax elements that represent the transform coefficients in the quantized transform coefficient block. Video encoder 20 may apply entropy encoding operations, such as Context Adaptive Binary Arithmetic Coding (CABAC) operations, to some of these syntax elements.
The bitstream generated by video encoder 20 may include a series of Network Abstraction Layer (NAL) units. Each of the NAL units may be a syntax structure containing an indication of the type of data in the NAL unit and the bytes containing the data. For example, a NAL unit may contain data representing a sequence parameter set, a picture parameter set, a coded slice, Supplemental Enhancement Information (SEI), an access unit delimiter, padding data, or another type of data. The data in a NAL unit may include various syntax structures.
Video decoder 30 may receive the bitstream generated by video encoder 20. The bitstream may include a coded representation of the video data encoded by video encoder 20. When video decoder 30 receives the bitstream, video decoder 30 may perform a parsing operation on the bitstream. When video decoder 30 performs a parsing operation, video decoder 30 may extract syntax elements from the bitstream. Video decoder 30 may reconstruct pictures of the video data based on syntax elements extracted from the bitstream. The process of reconstructing video data based on syntax elements may generally be reciprocal to the process performed by video encoder 20 to generate syntax elements.
After video decoder 30 extracts the syntax elements associated with the CU, video decoder 30 may generate the predictive video blocks for the PUs of the CU based on the syntax elements. In addition, video decoder 30 may inverse quantize transform coefficient blocks associated with TUs of the CU. Video decoder 30 may perform an inverse transform on the transform coefficient blocks to reconstruct residual video blocks associated with TUs of the CU. After generating the prediction video block and reconstructing the residual video block, video decoder 30 may reconstruct the video block of the CU based on the prediction video block and the residual video block. In this way, video decoder 30 may reconstruct the video block of the CU based on the syntax elements in the bitstream.
As described above, video encoder 20 may generate the predictive video block associated with the motion information of the PU of the CU using inter prediction. In many examples, the motion information of a given PU may be the same as or similar to the motion information of one or more nearby PUs (i.e., PUs whose video blocks are nearby, spatially or temporally, to the video block of the given PU). Because nearby PUs frequently have similar motion information, video encoder 20 may encode the motion information of a given PU with reference to the motion information of one or more nearby PUs. Encoding motion information of a given PU with reference to motion information of the one or more nearby PUs may reduce the number of bits in the bitstream required to indicate the motion information of the given PU.
Video encoder 20 may encode the motion information for a given PU in various ways with reference to the motion information of one or more nearby PUs. For example, video encoder 20 may encode the motion information for a given PU using merge mode or Advanced Motion Vector Prediction (AMVP) mode. To encode the motion information of a PU using merge mode, video encoder 20 may generate a merge candidate list for the PU. The merge candidate list may include one or more merge candidates, each of which specifies a set of motion information. Video encoder 20 may generate one or more of the merge candidates based on motion information specified by PUs that spatially neighbor the PU in the same picture (which may be referred to as spatial merge candidates) or based on a co-located PU in another picture (which may be referred to as a temporal merge candidate). If the motion information specified by a merge candidate is associated with two reference blocks, the merge candidate may be referred to herein as a bi-directional merge candidate. Otherwise, if the motion information specified by the merge candidate is associated with only a single reference block, the merge candidate may be referred to herein as a uni-directional merge candidate. Video encoder 20 may select one of the merge candidates from the merge candidate list and signal a candidate index value for the PU. The candidate index value may indicate a position of the selected merge candidate in the merge candidate list.
When video encoder 20 encodes the motion information of the PU using merge mode, video decoder 30 may generate a merge candidate list for the same PU as video encoder 20 generated for the PU. Video decoder 30 may then determine which of the merge candidates in the merge candidate list was selected by video encoder 20 based on the candidate index value of the PU. Video decoder 30 may then employ the motion information specified by the selected merge candidate as the motion information for the PU. The motion information specified by the selected candidate may include one or more motion vectors and one or more reference picture indices.
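The division of labor in merge mode can be summarized in a few lines. In this sketch, candidate-list construction is omitted and the tuple layout of the motion information is an illustrative assumption:

```python
def encode_merge(candidate_list, chosen_motion_info):
    # Encoder side: signal only the position of the selected merge candidate.
    return candidate_list.index(chosen_motion_info)

def decode_merge(candidate_list, merge_idx):
    # Decoder side: rebuild the identical list and adopt the motion
    # information of the candidate at the signaled position.
    return candidate_list[merge_idx]

# Each candidate: (motion vector, reference picture index) per active list.
candidates = [((1, 0), 0), ((3, -2), 1), ((0, 0), 0)]
idx = encode_merge(candidates, ((3, -2), 1))
assert decode_merge(candidates, idx) == ((3, -2), 1)
```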
When video encoder 20 uses AMVP to signal the motion information of a PU, video encoder 20 may generate a list 0 MV predictor candidate list for the PU if the PU is uni-directionally inter predicted based on list 0 or if the PU is bi-directionally inter predicted based on reference pictures in list 0 and list 1. The list 0 MV predictor candidate list may include one or more MV predictor candidates. Each of the MV predictor candidates specifies a set of motion information. Video encoder 20 may select a list 0 MV predictor candidate from the list 0 MV predictor candidate list. Video encoder 20 may signal a list 0 MV predictor flag indicating the position of the selected list 0 MV predictor candidate in the list 0 MV predictor candidate list. The list 0 MV predictor flag may be denoted "mvp_l0_flag".
In addition, when video encoder 20 uses AMVP to signal the motion information of a PU, video encoder 20 may generate a list 1 MV predictor candidate list for the PU if the PU is uni-directionally inter predicted based on list 1 or if the PU is bi-directionally inter predicted. The list 1 MV predictor candidate list may include one or more MV predictor candidates. Each of the MV predictor candidates specifies a set of motion information. Video encoder 20 may then select a list 1 MV predictor candidate from the list 1 MV predictor candidate list. Video encoder 20 may signal a list 1 MV predictor flag indicating the position of the selected list 1 MV predictor candidate in the list 1 MV predictor candidate list. The list 1 MV predictor flag may be denoted "mvp_l1_flag".
In addition, when video encoder 20 uses AMVP to signal the motion information of the PU, video encoder 20 may calculate a list 0 Motion Vector Difference (MVD) for the PU if the PU is uni-directionally inter predicted based on list 0 or if the PU is bi-directionally inter predicted. The list 0 MVD indicates the difference between the list 0 motion vector of the PU and the motion vector specified by the selected list 0 MV predictor candidate. Similarly, video encoder 20 may calculate a list 1 MVD for the PU if the PU is uni-directionally inter predicted based on list 1 or if the PU is bi-directionally inter predicted. The list 1 MVD indicates the difference between the list 1 motion vector of the PU and the motion vector specified by the selected list 1 MV predictor candidate. Video encoder 20 may signal the list 0 MVD and/or the list 1 MVD.
When video encoder 20 signals the motion information of the PU using AMVP mode, video decoder 30 may independently generate the same list 0 and/or list 1 MV predictor candidate lists generated by video encoder 20. In other examples, video encoder 20 may encode syntax elements that specify the list 0 and list 1 MV predictor candidate lists. If the PU is uni-directionally inter predicted based on list 0 or if the PU is bi-directionally inter predicted, video decoder 30 may determine the selected list 0 MV predictor candidate from the list 0 MV predictor candidate list. Video decoder 30 may then determine the list 0 motion vector of the PU based on the selected list 0 MV predictor candidate and the list 0 MVD of the PU. For example, video decoder 30 may determine the list 0 motion vector of the PU by adding the motion vector specified by the selected list 0 MV predictor candidate and the list 0 MVD. If the PU is uni-directionally inter predicted based on list 1 or if the PU is bi-directionally inter predicted, video decoder 30 may determine the selected list 1 MV predictor candidate from the list 1 MV predictor candidate list. Video decoder 30 may then determine the list 1 motion vector of the PU based on the selected list 1 MV predictor candidate and the list 1 MVD of the PU. For example, video decoder 30 may determine the list 1 motion vector of the PU by adding the motion vector specified by the selected list 1 MV predictor candidate and the list 1 MVD.
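The decoder-side arithmetic of AMVP reduces to adding the signaled difference back onto the selected predictor. A minimal sketch, with the motion vector component layout and the example values assumed for illustration:

```python
def reconstruct_mv(mvp, mvd):
    # AMVP decoding: motion vector = selected MV predictor + signaled MVD.
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

mvp_candidates_l0 = [(4, -2), (6, 0)]  # list 0 MV predictor candidate list
mvp_l0_flag = 0                        # signaled position of the predictor
mvd_l0 = (1, 3)                        # signaled list 0 MVD
print(reconstruct_mv(mvp_candidates_l0[mvp_l0_flag], mvd_l0))  # (5, 1)
```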
As discussed briefly above, when video encoder 20 performs inter prediction on a PU in a B slice, video encoder 20 may generate motion information associated with one or two reference blocks of the PU. A video coder, such as video encoder 20 or video decoder 30, may then generate the predictive video block for the PU based on the reference block associated with the motion information of the PU. To generate the prediction video block based on the two reference blocks, the video coder may retrieve both of the reference blocks from memory. Because memory bandwidth (i.e., the rate at which data can be transferred from memory) may be limited, it may take longer to retrieve two reference blocks from memory than it would take to retrieve a single reference block from memory. Thus, if a B slice includes many small bi-directionally inter-predicted PUs, the additional time required to retrieve the two reference blocks for each of the PUs may slow the video coder down in being able to generate the predictive video blocks for the PUs in the B slice.
In accordance with various examples of the techniques of this disclosure, a video coder, such as video encoder 20 or video decoder 30, may determine whether a PU in a B slice is restricted to uni-directional inter prediction. In some examples, the video coder may determine, based on a size characteristic of the PU or some parameter, that the PU is restricted to uni-directional inter prediction. In addition, the video coder may generate a merge candidate list for the PU and determine a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, the video coder may generate the predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. On the other hand, if the PU is not restricted to uni-directional inter prediction, the video coder may generate the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate. Limiting, by the video encoder and decoder, a particular PU in a B slice to uni-directional inter prediction may increase the speed at which the video encoder and decoder are able to generate predictive video blocks for PUs in the B slice because the video coder transfers less data from memory when generating the predictive video block based on a single reference block than when generating the predictive video block based on two reference blocks.
A video coder (i.e., a video encoder or a video decoder) may determine that a PU in a B slice is restricted to uni-directional inter prediction based on various criteria. For example, the video coder may determine that a PU in a B slice is restricted to uni-directional inter prediction if a size characteristic of the PU is below a particular threshold. In this example, the video coder may determine that the PU is not restricted to uni-directional inter prediction if the size characteristic of the PU is not below the particular threshold. For instance, the video coder may determine that the PU is restricted to uni-directional inter prediction if a height or width of a video block associated with the PU is below a threshold, e.g., if the height and/or width of the video block associated with the PU is less than N (e.g., N = 8) pixels.
In another example, the video coder may determine that a PU in a B slice is restricted to uni-directional inter prediction if a first size of a video block associated with the PU is less than a threshold and a second size of the video block associated with the PU is less than or equal to the threshold. The size of the video block may be the width or height of the video block. For example, if the threshold is equal to 8, the video coder may determine that the PU is not restricted to uni-directional inter prediction if the width of the video block is equal to 4 but the height of the video block is equal to 16. However, if the threshold is equal to 8, the video coder may determine that the PU is restricted to uni-directional inter prediction if the width of the video block is equal to 4 but the height of the video block is equal to 8.
In another example, the video coder may determine that a PU in a B slice is restricted to uni-directional inter prediction if a first size of a video block associated with the PU is less than a first threshold and a second size of the video block associated with the PU is less than a second threshold. For example, if the width of the video block is less than 8 and the height of the video block is less than 16, the video coder may determine that the PU is restricted to uni-directional inter prediction. In some examples, the first threshold may be the same as the second threshold.
In another example, the video coder may determine that the PU is restricted to uni-directional inter prediction if the size characteristic of the CU associated with the PU (e.g., the current CU) is equal to a particular size and the size characteristic of the PU is below a threshold. In this example, the video coder may determine that the PU is not restricted to uni-directional inter prediction if the size characteristic of the CU is not equal to the particular size or the size characteristic of the PU is not below the threshold. In this example, the particular size may be equal to N (e.g., N = 8) pixels, and the threshold may also be equal to N (e.g., N = 8) pixels. Thus, for a CU having a size of 8x8, any PU of the CU having a size less than 8x8 may be prohibited from being bi-directionally inter predicted.
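The size-based criteria described in the preceding paragraphs might be expressed as a single predicate along the following lines. The threshold of 8 and the particular combination of tests are examples taken from this description, and the function name is hypothetical; any particular codec may use different rules.

```cpp
// Returns true if a PU in a B slice is restricted to uni-directional inter
// prediction under one example criterion from this description: one dimension
// of the PU's video block is strictly below the threshold N and the other is
// at most N. With N = 8, 4x8 and 8x4 PUs are restricted, while 8x8 and 4x16
// PUs are not.
bool isRestrictedToUniPrediction(int puWidth, int puHeight, int threshold) {
  return (puWidth < threshold && puHeight <= threshold) ||
         (puHeight < threshold && puWidth <= threshold);
}
```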
In another example, if the parameter indicates that the PU in the B slice is to be restricted to uni-directional inter prediction, the video coder may determine that the PU in the B slice is restricted to uni-directional inter prediction.
A video coder may restrict PUs in a B slice to uni-directional inter prediction in various ways. For example, the video coder may ignore one of the reference blocks associated with the motion information of the PU and generate the predictive video block for the PU based on another of the reference blocks associated with the motion information of the PU. For example, the video coder may generate a merge candidate list, and if the selected merge candidate is a bi-directional merge candidate, the video coder may generate the predictive video block for the PU based on the reference block associated with the list 0 reference picture index of the selected merge candidate and the list 0 motion vector of the selected merge candidate. In a similar example, the video coder may generate the predictive video block for the PU based on the reference block associated with the list 1 reference picture index of the selected merge candidate and the list 1 motion vector of the selected merge candidate.
In another example regarding how a video coder may restrict a PU in a B slice to uni-directional inter prediction, the video coder may include uni-directional merge candidates in a merge candidate list of the PU, and not include bi-directional merge candidates in the merge candidate list of the PU. In this example, the video coder does not convert the bi-directional merge candidate into a uni-directional merge candidate. In this example, the video coder may include an artificial uni-directional merge candidate in the merge candidate list if the number of available uni-directional merge candidates is insufficient to fill the merge candidate list. An artificial merge candidate may be a merge candidate that is generated based on the motion information of one or more PUs but that does not specify the motion information of any one of the one or more PUs.
In another example regarding how a video coder may restrict PUs in a B slice to uni-directional inter prediction, the video coder may convert bi-directional merge candidates into one or more uni-directional merge candidates, and include the one or more uni-directional merge candidates in a merge candidate list. In some such examples, the video coder may convert the bi-directional merge candidate into a single uni-directional merge candidate associated with a reference picture in list 0 or a reference picture in list 1. In some such cases, whenever a video coder converts a bi-directional merge candidate into a uni-directional merge candidate, the uni-directional merge candidate is associated with a reference picture in a particular reference list. For example, the video coder may only convert the bi-directional merge candidate into a single uni-directional merge candidate associated with the reference picture in list 0. Alternatively, the video coder may only convert the bi-directional merge candidate into a single uni-directional merge candidate associated with the reference picture in list 1. In other such examples, the video coder may convert a bi-directional merge candidate into two uni-directional merge candidates, one of which is associated with a reference picture in list 0 and the other of which is associated with a reference picture in list 1. Thus, in some examples, after generating the merge candidate list, the video coder may convert the bi-directional merge candidates in the merge candidate list into uni-directional merge candidates, and include the uni-directional merge candidates in the merge candidate list in place of the bi-directional merge candidates.
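One possible form of the conversion described above, keeping the list 0 motion information and discarding the list 1 motion information, is sketched below in C++. The structure fields and names are illustrative; a coder could equally keep the list 1 motion, or produce two uni-directional candidates, as this description notes.

```cpp
#include <cstdint>

struct MotionVector { int16_t x; int16_t y; };

// Illustrative merge candidate carrying motion for up to two reference lists.
struct MergeCandidate {
  bool predFlagL0 = false;
  bool predFlagL1 = false;
  MotionVector mvL0{};
  MotionVector mvL1{};
  int refIdxL0 = -1;
  int refIdxL1 = -1;
};

// Convert a bi-directional merge candidate into a uni-directional merge
// candidate associated with a reference picture in list 0 by discarding the
// list 1 motion information.
MergeCandidate convertToUniDirectionalList0(MergeCandidate cand) {
  if (cand.predFlagL0 && cand.predFlagL1) {
    cand.predFlagL1 = false;
    cand.mvL1 = MotionVector{};
    cand.refIdxL1 = -1;
  }
  return cand;
}
```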
In some examples, the video coder may remove duplicate merge candidates from the merge candidate list prior to converting the bi-directional merge candidate into a uni-directional merge candidate. In other examples, the video coder may remove duplicate merge candidates from the merge candidate list after converting the bi-directional merge candidate into a uni-directional merge candidate.
When video encoder 20 encodes motion information for a PU in a B slice using AMVP, video encoder 20 may generate, entropy encode, and output an inter prediction mode indicator for the PU. The inter prediction mode indicator may be denoted as "inter_pred_idc". The inter prediction mode indicator may indicate whether the PU is uni-directionally inter predicted based on list 0, uni-directionally inter predicted based on list 1, or bi-directionally inter predicted. Video decoder 30 may use the inter prediction mode indicator when performing inter prediction on the PU. Since the inter prediction mode indicator has three possible values, it may conventionally be represented using two bits.
However, if a PU in a B slice is restricted to uni-directional inter prediction, the inter prediction mode indicator may have only two possible values: uni-directional inter prediction based on list 0 and uni-directional inter prediction based on list 1. Thus, according to the techniques of this disclosure, if a PU in a B slice is restricted to uni-directional inter prediction, the inter prediction mode indicator may be represented using a single bit. Otherwise, if the PU is not restricted to uni-directional inter prediction, the inter prediction mode indicator may be represented using two bits. Using a single bit to represent the inter prediction mode indicator may increase coding efficiency when the PU is restricted to uni-directional inter prediction.
Moreover, inter prediction mode indicators for PUs in a B slice may be entropy coded using different contexts if the PU is restricted to uni-directional inter prediction than if the PU is not restricted to uni-directional inter prediction. This may further increase coding efficiency.
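A simplified parsing routine consistent with this signaling scheme might look as follows. The bin reader is a stand-in for a CABAC engine, and the binarization shown is a sketch rather than the exact HEVC binarization or context modeling.

```cpp
#include <cstddef>
#include <vector>

enum class InterPredDir { kList0, kList1, kBi };

// Trivial stand-in for a CABAC engine: yields pre-decoded bins in order.
struct BinReader {
  std::vector<bool> bins;
  std::size_t pos = 0;
  bool readBin() { return bins.at(pos++); }
};

// Parse the inter prediction mode indicator. When the PU is restricted to
// uni-directional inter prediction, only list 0 and list 1 prediction are
// possible, so a single bin suffices; otherwise an additional bin is needed
// to distinguish bi-prediction.
InterPredDir parseInterPredIdc(BinReader& r, bool restrictedToUni) {
  if (restrictedToUni) {
    return r.readBin() ? InterPredDir::kList1 : InterPredDir::kList0;
  }
  if (r.readBin()) {  // first bin: bi-prediction?
    return InterPredDir::kBi;
  }
  return r.readBin() ? InterPredDir::kList1 : InterPredDir::kList0;
}
```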
FIG. 2 is a block diagram illustrating an example video encoder 20 configured to implement the techniques of this disclosure. Fig. 2 is provided for purposes of illustration and should not be construed as limiting the technology as broadly exemplified and described herein. For purposes of explanation, this disclosure describes video encoder 20 in the context of HEVC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.
In the example of fig. 2, video encoder 20 includes a plurality of functional components. The functional components of video encoder 20 include a prediction module 100, a residual generation module 102, a transform module 104, a quantization module 106, an inverse quantization module 108, an inverse transform module 110, a reconstruction module 112, a filter module 113, a decoded picture buffer 114, and an entropy encoding module 116. Prediction module 100 includes an inter prediction module 121, a motion estimation module 122, a motion compensation module 124, and an intra prediction module 126. In other examples, video encoder 20 may include more, fewer, or different functional components. Furthermore, motion estimation module 122 and motion compensation module 124 may be highly integrated, but are represented separately in the example of fig. 2 for purposes of explanation.
Video encoder 20 may receive video data. Video encoder 20 may receive video data from various sources. For example, video encoder 20 may receive video data from video source 18 (fig. 1) or another source. The video data may represent a series of pictures. To encode video data, video encoder 20 may perform encoding operations on each of the pictures. As part of performing encoding operations on the picture, video encoder 20 may perform encoding operations on each slice of the picture. As part of performing encoding operations on slices, video encoder 20 may perform encoding operations on treeblocks in the slices.
As part of performing encoding operations on treeblocks, prediction module 100 may perform quadtree partitioning on video blocks of treeblocks to divide the video blocks into progressively smaller video blocks. Each of the smaller video blocks may be associated with a different CU. For example, prediction module 100 may partition a video block of a treeblock into equally sized sub-blocks, partition one or more of the sub-blocks into equally sized sub-blocks, and so on.
The size of the video blocks associated with a CU may range from 8x8 samples up to the size of the treeblock, with a maximum of 64x64 samples or larger. In this disclosure, "NxN" and "N by N" may be used interchangeably to refer to the sample size of a video block in terms of vertical and horizontal dimensions (e.g., 16x16 samples or 16 by 16 samples). In general, a 16x16 video block has sixteen samples in the vertical direction (y = 16) and sixteen samples in the horizontal direction (x = 16). Likewise, an N by N block typically has N samples in the vertical direction and N samples in the horizontal direction, where N represents a non-negative integer value.
Further, as part of performing encoding operations on the treeblock, prediction module 100 may generate a hierarchical quadtree data structure of the treeblock. For example, a treeblock may correspond to a root node of a quadtree data structure. If prediction module 100 partitions the video block of the tree block into four sub-blocks, the root node has four sub-nodes in a quadtree data structure. Each of the sub-nodes corresponds to a CU associated with one of the sub-blocks. If prediction module 100 partitions one of the sub-blocks into four sub-blocks, the node corresponding to the CU associated with the sub-block may have four sub-nodes, each of which corresponds to the CU associated with one of the sub-blocks.
Each node of the quadtree data structure may contain syntax data (e.g., syntax elements) for a corresponding treeblock or CU. For example, a node in the quadtree may include a split flag that indicates whether the video block of the CU corresponding to the node is partitioned (i.e., split) into four sub-blocks. Syntax elements of a CU may be defined recursively and may depend on whether the video block of the CU is split into sub-blocks. A CU whose video block is not partitioned may correspond to a leaf node in the quadtree data structure. A coded treeblock may include data based on the quadtree data structure for the corresponding treeblock.
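The recursive CU quadtree described above could be modeled as in the following sketch; real encoders store considerably more per-node syntax, and the field names here are illustrative.

```cpp
#include <array>
#include <memory>

// A node of the CU quadtree. A leaf node (splitFlag == false) corresponds to
// a CU whose video block is not further partitioned.
struct QuadtreeNode {
  int x = 0;               // top-left sample position of the video block
  int y = 0;
  int size = 0;            // width/height of the square video block in samples
  bool splitFlag = false;  // split flag for the corresponding CU
  std::array<std::unique_ptr<QuadtreeNode>, 4> children;
};

// Split a node's video block into four equally sized sub-blocks, each
// associated with a child CU.
void splitNode(QuadtreeNode& node) {
  node.splitFlag = true;
  const int half = node.size / 2;
  for (int i = 0; i < 4; ++i) {
    auto child = std::make_unique<QuadtreeNode>();
    child->x = node.x + (i % 2) * half;
    child->y = node.y + (i / 2) * half;
    child->size = half;
    node.children[i] = std::move(child);
  }
}
```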
Video encoder 20 may perform an encoding operation on each undivided CU of a treeblock. When video encoder 20 performs an encoding operation on an undivided CU, video encoder 20 may generate an encoded representation of the undivided CU.
As part of performing the encoding operation on the CU, prediction module 100 may partition the video block of the CU among one or more PUs of the CU. Video encoder 20 and video decoder 30 may support various PU sizes. Assuming that the size of a particular CU is 2Nx2N, video encoder 20 and video decoder 30 may support PU sizes of 2Nx2N or NxN for intra prediction, and symmetric PU sizes of 2Nx2N, 2NxN, Nx2N, NxN, or similar for inter prediction. Video encoder 20 and video decoder 30 may also support asymmetric partitioning with PU sizes of 2NxnU, 2NxnD, nLx2N, and nRx2N for inter prediction. In some examples, prediction module 100 may perform geometric partitioning to partition the video block of the CU between PUs of the CU along boundaries that do not meet the sides of the video block of the CU at right angles.
Inter prediction module 121 may perform inter prediction on each of the PUs of the CU. Inter-prediction may provide temporal compression. When inter prediction module 121 performs inter prediction on a PU, inter prediction module 121 may generate prediction data for the PU. The prediction data for the PU may include prediction video blocks corresponding to the PU and motion information of the PU. Motion estimation module 122 may generate motion information for the PU. In some examples, motion estimation module 122 may use merge mode or AMVP mode to signal the motion information of the PU. Motion compensation module 124 may generate the predictive video block for the PU based on samples of one or more pictures other than the current picture (i.e., the reference picture).
The slice may be an I slice, a P slice, or a B slice. Motion estimation module 122 and motion compensation module 124 may perform different operations for a PU of a CU depending on whether the PU is in an I-slice, a P-slice, or a B-slice. In an I slice, all PUs are intra predicted. Thus, if the PU is in an I-slice, motion estimation module 122 and motion compensation module 124 do not perform inter prediction on the PU.
If the PU is in a P slice, the picture containing the PU is associated with a list of reference pictures referred to as "list 0". In some examples, each reference picture listed in list 0 occurs before the current picture in display order. Each of the reference pictures in list 0 contains samples that may be used for inter prediction of other pictures. When motion estimation module 122 performs a motion estimation operation with respect to a PU in a P slice, motion estimation module 122 may search the reference pictures in list 0 for the reference block of the PU. The reference block of the PU may be a set of samples, e.g., a block of samples, that most closely corresponds to the samples in the video block of the PU. Motion estimation module 122 may use a variety of metrics to determine how closely a set of samples in a reference picture corresponds to the samples in the video block of the PU. For example, motion estimation module 122 may determine how closely a set of samples in a reference picture corresponds to the samples in the video block of the PU by a sum of absolute differences (SAD), a sum of squared differences (SSD), or another difference metric.
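A sum of absolute differences is straightforward to compute; the following C++ sketch assumes both blocks have identical dimensions and are stored row by row, and is purely illustrative.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

// Sum of absolute differences between a PU's video block and a candidate
// reference block of the same dimensions, both stored row by row.
int computeSAD(const std::vector<uint8_t>& puBlock,
               const std::vector<uint8_t>& refBlock) {
  int sad = 0;
  for (std::size_t i = 0; i < puBlock.size(); ++i) {
    sad += std::abs(static_cast<int>(puBlock[i]) -
                    static_cast<int>(refBlock[i]));
  }
  return sad;
}
```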
After identifying or synthesizing the reference block for the PU in the P slice, motion estimation module 122 may generate a reference picture index that indicates a reference picture in list 0 that contains the reference block, and a motion vector that indicates a spatial displacement between the PU and the reference block. Motion estimation module 122 may generate motion vectors of different precision. For example, motion estimation module 122 may generate motion vectors of one-quarter sample precision, one-eighth sample precision, or other fractional sample precision. In the case of fractional sample precision, the reference block value may be interpolated from integer position sample values in the reference picture. Motion estimation module 122 may output the reference picture index and the motion vector as the motion information for the PU. Motion compensation module 124 may generate the predictive video block for the PU based on a reference block associated with the motion information of the PU.
If the PU is in a B slice, the picture containing the PU may be associated with two lists of reference pictures, referred to as "list 0" and "list 1". In some examples, a picture containing a B slice may be associated with a list combination that is a combination of list 0 and list 1. In some examples, each reference picture listed in list 1 occurs after the current picture in display order.
Furthermore, if the PU is in a B slice, motion estimation module 122 may perform uni-directional inter prediction or bi-directional inter prediction for the PU. When motion estimation module 122 performs uni-directional inter prediction for the PU, motion estimation module 122 may search the reference pictures of list 0 or list 1 for the reference block of the PU. Motion estimation module 122 may then generate a reference picture index that indicates a reference picture in list 0 or list 1 that contains the reference block, and a motion vector that indicates a spatial displacement between the PU and the reference block.
When motion estimation module 122 performs bi-directional inter prediction for the PU, motion estimation module 122 may search the reference pictures of list 0 for a reference block of the PU and may also search the reference pictures of list 1 for another reference block of the PU. Motion estimation module 122 may then generate reference picture indices that indicate the reference pictures in list 0 and list 1 that contain the reference blocks, and motion vectors that indicate spatial displacements between the reference blocks and the PU. The motion information of the PU may include the reference picture indices and the motion vectors of the PU. Motion compensation module 124 may generate the predictive video block for the PU based on the reference blocks indicated by the motion information of the PU.
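When the two reference blocks are combined into a bi-predictive video block, one common (though not the only) choice is sample-wise averaging with rounding, as in the following sketch; this also makes the memory-bandwidth point concrete, since both reference blocks must be fetched before any sample of the prediction can be formed.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Combine two reference blocks into a bi-predictive video block by averaging
// corresponding samples with rounding. Both blocks are assumed to have the
// same dimensions and row-by-row storage.
std::vector<uint8_t> biPredict(const std::vector<uint8_t>& ref0,
                               const std::vector<uint8_t>& ref1) {
  std::vector<uint8_t> pred(ref0.size());
  for (std::size_t i = 0; i < ref0.size(); ++i) {
    pred[i] = static_cast<uint8_t>((ref0[i] + ref1[i] + 1) >> 1);
  }
  return pred;
}
```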
Motion compensation module 124 may generate the predictive video block for the PU based on one or more reference blocks associated with the motion information of the PU. In accordance with the techniques of this disclosure, motion compensation module 124 may determine whether the PU is restricted to uni-directional inter prediction. In addition, motion compensation module 124 may generate a merge candidate list for the PU and determine a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, motion compensation module 124 may generate the predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. If the PU is not restricted to uni-directional inter prediction, motion compensation module 124 may generate the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
As part of performing the encoding operation on the CU, intra prediction module 126 may perform intra prediction on PUs of the CU. Intra-prediction may provide spatial compression. When intra-prediction module 126 performs intra-prediction on a PU, intra-prediction module 126 may generate prediction data for the PU based on decoded samples of other PUs in the same picture. The prediction data for a PU may include a prediction video block and various syntax elements. Intra prediction module 126 may perform intra prediction on PUs in I-slices, P-slices, and B-slices.
To perform intra-prediction on a PU, intra-prediction module 126 may use a plurality of intra-prediction modes to generate a plurality of sets of prediction data for the PU. When intra-prediction module 126 uses an intra-prediction mode to generate a set of prediction data for the PU, intra-prediction module 126 may extend samples from the video blocks of neighboring PUs across the video block of the PU in a direction and/or gradient associated with the intra-prediction mode. The neighboring PUs may be above, above and to the right, above and to the left, or to the left of the PU, assuming a left-to-right, top-to-bottom coding order for PUs, CUs, and treeblocks. Intra-prediction module 126 may use various numbers of intra-prediction modes, such as 33 directional intra-prediction modes. In some examples, the number of intra-prediction modes may depend on the size of the PU.
Prediction module 100 may select the prediction data for the PU from the prediction data generated by motion compensation module 124 for the PU or the prediction data generated by intra prediction module 126 for the PU. In some examples, prediction module 100 selects the prediction data for the PU based on a rate/distortion metric for the set of prediction data.
If prediction module 100 selects the prediction data generated by intra-prediction module 126, prediction module 100 may signal the intra-prediction mode used to generate the prediction data for the PU, i.e., the selected intra-prediction mode. The prediction module 100 may signal the selected intra-prediction mode in various ways. For example, it is possible that the selected intra prediction mode is the same as the intra prediction mode of the neighboring PU. In other words, the intra prediction mode of the neighboring PU may be the most probable mode of the current PU. Thus, prediction module 100 may generate a syntax element to indicate that the selected intra-prediction mode is the same as the intra-prediction modes of the neighboring PUs.
After prediction module 100 selects the prediction data for the PUs of the CU, residual generation module 102 may generate the residual data for the CU by subtracting the prediction video blocks of the PUs of the CU from the video blocks of the CU. The residual data of the CU may include 2D residual video blocks corresponding to different sample components of samples in the video block of the CU. For example, the residual data may include residual video blocks corresponding to differences between the luma components of the samples in the prediction video blocks of the PUs of the CU and the luma components of the samples in the original video blocks of the CU. In addition, the residual data of the CU may include a residual video block corresponding to a difference between chrominance components of samples in the prediction video block of the PU of the CU and chrominance components of samples in the original video block of the CU.
Prediction module 100 may perform quadtree partitioning to partition a residual video block of a CU into sub-blocks. Each undivided residual video block may be associated with a different TU of the CU. The sizes and locations of the residual video blocks associated with the TUs of the CU may or may not be based on the sizes and locations of the video blocks associated with the PUs of the CU. A quadtree structure, referred to as a "residual quadtree" (RQT), may include nodes associated with each of the residual video blocks. A TU of a CU may correspond to a leaf node of the RQT.
Transform module 104 may generate one or more transform coefficient blocks for each TU of the CU by applying one or more transforms to residual video blocks associated with the TU. Each of the transform coefficient blocks may be a 2D matrix of transform coefficients. Transform module 104 may apply various transforms to residual video blocks associated with TUs. For example, transform module 104 may apply a Discrete Cosine Transform (DCT), a directional transform, or a conceptually similar transform to a residual video block associated with a TU.
After transform module 104 generates a transform coefficient block associated with a TU, quantization module 106 may quantize the transform coefficients in the transform coefficient block. Quantization module 106 may quantize transform coefficient blocks associated with TUs of a CU based on QP values associated with the CU.
Video encoder 20 may associate QP values with CUs in various ways. For example, video encoder 20 may perform rate-distortion analysis on treeblocks associated with CUs. In rate-distortion analysis, video encoder 20 may generate multiple coded representations of a treeblock by performing an encoding operation on the treeblock multiple times. When video encoder 20 generates different encoded representations of treeblocks, video encoder 20 may associate different QP values with the CUs. When the given QP value is associated with the CU having the lowest bitrate and distortion metric in the coded representation of the treeblock, video encoder 20 may signal that the given QP value is associated with the CU.
Inverse quantization module 108 and inverse transform module 110 may apply inverse quantization and inverse transform, respectively, to the transform coefficient block to reconstruct a residual video block from the transform coefficient block. Reconstruction module 112 may add the reconstructed residual video block to corresponding samples from one or more prediction video blocks generated by prediction module 100 to generate a reconstructed video block associated with the TU. By reconstructing video blocks for each TU of a CU in this manner, video encoder 20 may reconstruct the video blocks of the CU.
After reconstruction module 112, represented as an adder, reconstructs the video block of the CU, filter module 113 may perform deblocking operations to reduce blocking artifacts in the video block associated with the CU. After performing the one or more deblocking operations, filter module 113 may store the reconstructed video block of the CU in decoded picture buffer 114. Motion estimation module 122 and motion compensation module 124 may perform inter prediction on PUs of subsequent pictures using the reference picture containing the reconstructed video block. In addition, intra prediction module 126 may perform intra prediction on other PUs in the same picture as the CU using reconstructed video blocks in decoded picture buffer 114.
Entropy encoding module 116 may receive data from other functional components of video encoder 20. For example, entropy encoding module 116 may receive the transform coefficient block from quantization module 106 and may receive syntax elements from prediction module 100. When entropy encoding module 116 receives the data, entropy encoding module 116 may perform one or more entropy encoding operations to generate entropy encoded data. For example, video encoder 20 may perform a Context Adaptive Variable Length Coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context adaptive binary arithmetic coding (SBAC) operation, a Probability Interval Partitioning Entropy (PIPE) coding operation, or another type of entropy encoding operation on the data. Entropy encoding module 116 may output a bitstream that includes the entropy encoded data.
As part of performing entropy encoding operations on the data, entropy encoding module 116 may select a context model. If entropy encoding module 116 is performing a CABAC operation, the context model may indicate an estimate of the probability of a particular bin having a particular value. In the context of CABAC, the term "bin" is used to refer to a bit of a binarized version of a syntax element.
FIG. 3 is a block diagram illustrating an example video decoder 30 configured to implement the techniques of this disclosure. Fig. 3 is provided for purposes of illustration and is not limiting of the techniques as broadly exemplified and described herein. For purposes of explanation, this disclosure describes video decoder 30 in the context of HEVC coding. However, the techniques of this disclosure may be applicable to other coding standards or methods.
In the example of fig. 3, video decoder 30 includes a plurality of functional components. The functional components of video decoder 30 include an entropy decoding module 150, a prediction module 152, an inverse quantization module 154, an inverse transform module 156, a reconstruction module 158, a filter module 159, and a decoded picture buffer 160. Prediction module 152 includes a motion compensation module 162 and an intra-prediction module 164. In some examples, video decoder 30 may perform a decoding pass that is generally reciprocal to the encoding pass described with respect to video encoder 20 of fig. 2. In other examples, video decoder 30 may include more, fewer, or different functional components.
Video decoder 30 may receive a bitstream that includes encoded video data. The bitstream may include a plurality of syntax elements. When video decoder 30 receives the bitstream, entropy decoding module 150 may perform a parsing operation on the bitstream. As a result of performing a parsing operation on the bitstream, entropy decoding module 150 may extract syntax elements from the bitstream. As part of performing the parsing operation, entropy decoding module 150 may entropy decode entropy-encoded syntax elements in the bitstream. Prediction module 152, inverse quantization module 154, inverse transform module 156, reconstruction module 158, and filter module 159 may perform reconstruction operations that generate decoded video data based on syntax elements extracted from the bitstream.
As discussed above, a bitstream may comprise a series of NAL units. NAL units of a bitstream may include sequence parameter set NAL units, picture parameter set NAL units, SEI NAL units, and the like. As part of performing a parsing operation on the bitstream, entropy decoding module 150 may perform a parsing operation that extracts sequence parameter sets from a sequence parameter set NAL unit, picture parameter sets from a picture parameter set NAL unit, and SEI data from a SEI NAL unit, and entropy decodes the sequence parameter sets, the picture parameter sets, and the SEI data.
In addition, NAL units of the bitstream may include coded slice NAL units. As part of performing a parsing operation on the bitstream, entropy decoding module 150 may perform a parsing operation that extracts and entropy decodes a coded slice from a coded slice NAL unit. Each of the coded slices may include a slice header and slice data. The slice header may contain syntax elements for the slice. The syntax elements in the slice header may include syntax elements that identify a picture parameter set associated with the picture containing the slice. Entropy decoding module 150 may perform entropy decoding operations (e.g., CABAC decoding operations) on syntax elements in the coded slice header to recover the slice header.
As part of extracting slice data from the coded slice NAL units, entropy decoding module 150 may perform a parsing operation that extracts syntax elements from the coded CUs in the slice data. The extracted syntax elements may include syntax elements associated with transform coefficient blocks. Entropy decoding module 150 may then perform CABAC decoding operations on some syntax elements.
After entropy decoding module 150 performs the parsing operation on the undivided CU, video decoder 30 may perform the reconstruction operation on the undivided CU. To perform a reconstruction operation on an undivided CU, video decoder 30 may perform a reconstruction operation on each TU of the CU. By performing a reconstruction operation for each TU of the CU, video decoder 30 may reconstruct a residual video block associated with the CU.
As part of performing the reconstruction operation on the TU, inverse quantization module 154 may inverse quantize, i.e., dequantize, the transform coefficient block associated with the TU. Inverse quantization module 154 may inverse quantize the transform coefficient block in a manner similar to the inverse quantization processes proposed for HEVC or defined by the H.264 decoding standard. Inverse quantization module 154 may use a quantization parameter QP calculated by video encoder 20 for the CU associated with the transform coefficient block to determine a degree of quantization and, likewise, a degree of inverse quantization for inverse quantization module 154 to apply.
After inverse quantization module 154 inverse quantizes the transform coefficient block, inverse transform module 156 may generate a residual video block for a TU associated with the transform coefficient block. Inverse transform module 156 may apply an inverse transform to the transform coefficient block in order to generate a residual video block for the TU. For example, inverse transform module 156 may apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform, or another inverse transform to the transform coefficient block.
In some examples, inverse transform module 156 may determine the inverse transform applied to the transform coefficient block based on signaling from video encoder 20. In such examples, inverse transform module 156 may determine the inverse transform based on a signaled transform at a root node of a quadtree of a treeblock associated with the transform coefficient block. In other examples, inverse transform module 156 may infer the inverse transform from one or more coding characteristics, such as block size, coding mode, and so on. In some examples, inverse transform module 156 may apply a cascaded inverse transform.
If the motion information of the PU was encoded in skip mode or using merge mode, motion compensation module 162 may generate a merge candidate list for the PU. Motion compensation module 162 may then identify the selected merge candidate in the merge candidate list. After identifying the selected merge candidate in the merge candidate list, motion compensation module 162 may generate the predictive video block for the PU based on the one or more reference blocks associated with the motion information specified by the selected merge candidate.
In accordance with the techniques of this disclosure, motion compensation module 162 may determine whether the PU is restricted to uni-directional inter prediction. Further, motion compensation module 162 may generate a merge candidate list for the PU and determine a selected merge candidate in the merge candidate list. If the PU is restricted to uni-directional inter prediction, motion compensation module 162 may generate the predictive video block for the PU based on no more than one reference block associated with the motion information specified by the selected merge candidate. Otherwise, if the PU is not restricted to uni-directional inter prediction, motion compensation module 162 may generate the predictive video block for the PU based on one or more reference blocks associated with the motion information specified by the selected merge candidate.
If the motion information of the PU is encoded using AMVP mode, motion compensation module 162 may generate a list 0 MV predictor candidate list and/or a list 1 MV predictor candidate list. Motion compensation module 162 may then determine a selected list 0 MV predictor candidate and/or a selected list 1 MV predictor candidate. Next, motion compensation module 162 may determine a list 0 motion vector for the PU and/or a list 1 motion vector for the PU based on the list 0 MVD, the list 1 MVD, the list 0 motion vector specified by the selected list 0 MV predictor candidate, and/or the list 1 motion vector specified by the selected list 1 MV predictor candidate. Motion compensation module 162 may then generate the predictive video block for the PU based on the reference blocks associated with the list 0 motion vector and the list 0 reference picture index and/or the list 1 motion vector and the list 1 reference picture index.
In some examples, motion compensation module 162 may refine the predictive video block of the PU by performing interpolation based on interpolation filters. Identifiers of the interpolation filters to be used for motion compensation with sub-sample precision may be included in the syntax elements. Motion compensation module 162 may use the same interpolation filters used by video encoder 20 during generation of the predictive video block of the PU to calculate interpolated values for sub-integer samples of the reference block. Motion compensation module 162 may determine the interpolation filters used by video encoder 20 according to the received syntax information and use the interpolation filters to generate the predictive video block.
If the PU is encoded using intra prediction, intra prediction module 164 may perform intra prediction to generate a predicted video block for the PU. For example, intra-prediction module 164 may determine the intra-prediction mode of the PU based on syntax elements in the bitstream. The bitstream may include syntax elements that intra-prediction module 164 may use to determine the intra-prediction mode of the PU.
In some examples, the syntax element may indicate that intra prediction module 164 is to use the intra prediction mode of another PU to determine the intra prediction mode of the current PU. For example, it is possible that the intra prediction mode of the current PU is the same as the intra prediction mode of the neighboring PU. In other words, the intra prediction mode of the neighboring PU may be the most probable mode of the current PU. Thus, in this example, the bitstream may include a small syntax element that indicates that the intra-prediction mode of the PU is the same as the intra-prediction modes of neighboring PUs. Intra-prediction module 164 may then generate prediction data (e.g., prediction samples) for the PU based on the video blocks of the spatially neighboring PUs using the intra-prediction mode.
Reconstruction module 158 may reconstruct the video blocks of the CU using residual video blocks associated with the TUs of the CU and predictive video blocks, i.e., intra-prediction data or inter-prediction data (if desired), of the PUs of the CU. In particular, reconstruction module 158 may add residual data to the prediction data to reconstruct the coded video data. Accordingly, video decoder 30 may generate the prediction video block and the residual video block based on syntax elements in the bitstream, and may generate the video block based on the prediction video block and the residual video block.
After reconstruction module 158 reconstructs the video block of the CU, filter module 159 may perform deblocking operations to reduce blocking artifacts associated with the CU. After filter module 159 performs the deblocking operation to reduce blocking artifacts associated with the CU, video decoder 30 may store the video block of the CU in decoded picture buffer 160. Decoded picture buffer 160 may provide reference pictures for subsequent motion compensation, intra prediction, and presentation on a display device (e.g., display device 32 of fig. 1). For example, video decoder 30 may perform intra-prediction or inter-prediction operations on PUs of other CUs based on the video blocks in decoded picture buffer 160.
Fig. 4 is a flow diagram illustrating an example motion compensation operation 200. A video coder, such as video encoder 20 or video decoder 30, may perform motion compensation operation 200. The video coder may perform motion compensation operation 200 to generate a predictive video block for the current PU.
After the video coder begins motion compensation operation 200, the video coder may determine whether the prediction mode of the current PU is a skip mode (202). If the prediction mode of the current PU is not the skip mode ("no" of 202), the video coder may determine whether the prediction mode of the current PU is inter mode and the inter prediction mode of the current PU is merge mode (204). If the prediction mode of the current PU is skip mode ("yes" of 202), or if the prediction mode of the current PU is inter mode and the inter prediction mode of the current PU is merge mode ("yes" of 204), the video coder may generate a merge candidate list for the current PU (206). The merge candidate list may include a plurality of merge candidates. Each of the merge candidates specifies a set of motion information, such as one or more motion vectors, one or more reference picture indices, a list 0 prediction flag, and a list 1 prediction flag. The merge candidate list may include one or more uni-directional merge candidates and/or bi-directional merge candidates. In some examples, the video coder may perform the example operations described below with reference to fig. 6 to generate the merge candidate list.
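The per-candidate motion information enumerated above might be grouped as in the following sketch; the field names are illustrative rather than drawn from any reference implementation.

```cpp
#include <cstdint>
#include <vector>

struct MotionVector { int16_t x; int16_t y; };

// Motion information carried by one merge candidate: motion vectors,
// reference picture indices, and the list 0 / list 1 prediction flags that
// indicate which reference picture lists the candidate uses.
struct MergeCandidate {
  MotionVector mvL0{0, 0};
  MotionVector mvL1{0, 0};
  int refIdxL0 = -1;
  int refIdxL1 = -1;
  bool predFlagL0 = false;  // list 0 prediction flag
  bool predFlagL1 = false;  // list 1 prediction flag

  bool isBiDirectional() const { return predFlagL0 && predFlagL1; }
};

using MergeCandidateList = std::vector<MergeCandidate>;
```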
After generating the merge candidate list, the video coder may determine a selected merge candidate in the merge candidate list (208). If the video coder is a video encoder, the video coder may select a merge candidate from a merge candidate list based on rate-distortion analysis. If the video coder is a video decoder, the video coder may select a merge candidate based on a syntax element (e.g., merge_idx) that identifies a position of the selected merge candidate in the merge candidate list.
The video coder may then determine motion information for the current PU based on the motion information specified by the selected merge candidate (210). The motion information may include one or more motion vectors and reference picture indices. The video coder may determine the motion information for the current PU based on the motion information specified by the selected merge candidate in various ways. For example, the video coder may determine that the motion information of the current PU is the same as the motion information specified by the selected merge candidate.
If the inter-prediction mode of the current PU is not merge mode ("no" of 204), the video coder may determine motion information of the current PU using AMVP mode (212). Fig. 8, described in detail below, is a flowchart illustrating an example operation for determining motion information for a PU using AMVP mode.
After determining the motion information for the current PU, the video coder may determine whether the current PU is restricted to uni-directional inter prediction (214). The video coder may determine whether the current PU is restricted to uni-directional inter prediction in various ways. For example, if a size characteristic of the current PU is less than a threshold, the video coder may determine that the current PU is restricted to uni-directional inter prediction. In this example, the video coder may determine that the current PU is restricted to uni-directional inter prediction if the size of the PU is 8x4, 4x8, or smaller. In another example, if the video coder is a video decoder, the video decoder may determine, based on syntax elements in the received bitstream, that the current PU is restricted to uni-directional inter prediction.
In response to determining that the current PU is restricted to uni-directional inter prediction ("yes" of 214), the video coder may generate a predicted video block for the current PU based on no more than one reference block associated with motion information of the current PU (216). As indicated above, the reference block may be identified by or synthesized from the reference sample identified by the motion information specified by the selected merge candidate.
On the other hand, in response to determining that the current PU is not restricted to uni-directional inter prediction ("no" of 214), the video coder may generate a predictive video block for the current PU based on one or more reference blocks associated with motion information of the current PU (218). As indicated above, the one or more reference blocks may be identified by the motion information specified by the selected merge candidate and/or synthesized from reference samples identified by the motion information specified by the selected merge candidate.
Fig. 5 is a flow diagram illustrating another example motion compensation operation 270. A video coder, such as video encoder 20 or video decoder 30, may perform motion compensation operation 270 to generate a predictive video block for the current PU. The video coder may perform motion compensation operation 270 as an alternative to performing motion compensation operation 200.
After the video coder begins motion compensation operation 270, the video coder may determine whether the prediction mode of the current PU is a skip mode (272). If the prediction mode of the current PU is not the skip mode ("no" of 272), the video coder may determine whether the prediction mode of the current PU is inter mode and the inter prediction mode of the current PU is merge mode (273). If the prediction mode of the current PU is skip mode ("yes" of 272) or if the prediction mode of the current PU is inter mode and the inter prediction mode of the current PU is merge mode ("yes" of 273), the video coder may determine whether the current PU is restricted to uni-directional inter prediction (274). If the current PU is restricted to uni-directional inter prediction ("yes" of 274), the video coder may generate a merge candidate list for the current PU such that the merge candidate list does not include bi-directional merge candidates (276). The video coder may generate the merge candidate list for the current PU using the example operation illustrated in fig. 6.
On the other hand, if the current PU is not restricted to uni-directional inter prediction ("no" of 274), the video coder may generate a merge candidate list (278) that may include uni-directional and bi-directional merge candidates. In some examples, the video coder may perform the example operations described below with reference to fig. 6 to generate the merge candidate list. Thus, if the current PU is not restricted to uni-directional inter prediction, the merge candidate list may include uni-directional merge candidates and bi-directional merge candidates.
After generating the merge candidate list for the current PU, the video coder may determine a selected merge candidate in the merge candidate list (280). If the video coder is a video encoder, the video coder may select a merge candidate from a merge candidate list based on rate-distortion analysis. If the video coder is a video decoder, the video coder may select a merge candidate based on a syntax element (e.g., merge_idx) that identifies a position of the selected merge candidate in the merge candidate list.
The video coder may then determine motion information for the current PU based on the motion information specified by the selected merge candidate (282). The motion information specified by the selected merge candidate may specify one or more motion vectors and one or more reference picture indices. The video coder may determine the motion information for the current PU based on the motion information specified by the selected merge candidate in various ways. For example, the video coder may determine that the motion information of the current PU is the same as the motion information specified by the selected merge candidate.
If the inter-prediction mode of the current PU is not the merge mode ("no" of 273), the video coder may determine motion information of the current PU using AMVP mode (284). Fig. 8, described in detail below, is a flowchart illustrating an example operation for determining motion information for a PU using AMVP mode.
After determining the motion information for the current PU, the video coder may generate a predictive video block for the current PU (286). Because the merge candidate list includes only uni-directional merge candidates if the current PU is restricted to uni-directional inter prediction, the selected merge candidate is associated with only a single reference block. Thus, if the current PU is in a B slice and is restricted to uni-directional inter prediction, the predictive video block for the current PU may be based on no more than one reference block associated with the motion information specified by the selected merge candidate.
On the other hand, if the current PU is not restricted to uni-directional inter prediction, the merge candidate list may include uni-directional merge candidates and bi-directional merge candidates. Because the merge candidate list may include uni-directional merge candidates and bi-directional merge candidates, the selected merge candidate may be associated with one or two reference blocks. Thus, if the current PU is in a B slice and is not restricted to uni-directional inter prediction, the predictive video block for the current PU may be based on one or more reference blocks associated with the selected merge candidate.
Fig. 6 is a flow diagram illustrating example operations 300 for generating a merge candidate list. A video coder, such as video encoder 20 or video decoder 30, may perform operation 300 to generate a merge candidate list for a current PU. The video coder may perform operation 300 when the prediction mode of the current PU is a skip mode or when the prediction mode of the current PU is an inter mode and the inter prediction mode of the current PU is a merge mode.
After the video coder begins operation 300, the video coder may determine motion information and availability of spatial merge candidates (302). The video coder may determine motion information for the spatial merge candidate based on motion information of PUs that cover locations spatially neighboring the current PU. For example, the video coder may determine motion information for the spatial merge candidate based on motion information of PUs that cover the left, lower left, upper left, and upper right sides of the current PU.
The video coder may determine the availability of spatial merge candidates in various ways. For example, the video coder may determine that a spatial merge candidate is unavailable if the spatial merge candidate corresponds to a PU that is intra predicted, is located outside the current frame, or is located outside the current slice. Furthermore, the video coder may determine that the spatial merge candidate is unavailable if the motion information of the spatial merge candidate is the same as the motion information of another spatial merge candidate.
In addition, the video coder may determine motion information and availability of temporal merging candidates (304). The temporal merge candidate may specify motion information for a PU collocated with the current PU but in a different picture than the current PU. The video coder may determine the availability of temporal merging candidates in various ways. For example, if a temporal merge candidate corresponds to an intra-predicted PU, the video coder may determine that the temporal merge candidate is unavailable.
After generating the spatial merge candidate and the temporal merge candidate, the video coder may include the available ones of the spatial merge candidate and the temporal merge candidate in the merge candidate list for the current PU (306). The video coder may include a spatial or temporal merge candidate in the merge candidate list if the merge candidate is available, and may exclude the merge candidate from the merge candidate list if the merge candidate is not available. By excluding unavailable merge candidates from the merge candidate list, the video coder may effectively perform a pruning process that prunes (e.g., omits) the unavailable merge candidates from the merge candidate list.
In some examples, the video coder generates the merge candidate list such that the merge candidate list includes only uni-directional merge candidates. In some such examples, the video coder may determine that a bi-directional merge candidate is not available. That is, if a merge candidate specifies both a list 0 motion vector and a list 1 motion vector, the video coder may determine that the merge candidate is unavailable. Thus, if the current PU is restricted to uni-directional inter prediction, the video coder may determine that uni-directional merge candidates are available, while bi-directional merge candidates are not. Because the video coder may not include an unavailable merge candidate in the merge candidate list, the merge candidate list may include only uni-directional merge candidates in some examples. In this example, the video coder may effectively perform a pruning process that prunes bi-directional merge candidates from the merge candidate list.
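This availability-based exclusion might be sketched as follows, with bi-directional candidates treated as unavailable whenever the PU is restricted to uni-directional inter prediction; the structure and function names are hypothetical.

```cpp
#include <vector>

// Illustrative candidate: only the prediction flags matter for availability
// in this sketch; motion vectors and reference indices are omitted.
struct MergeCandidate {
  bool predFlagL0 = false;
  bool predFlagL1 = false;
};

// Build a merge candidate list in which bi-directional candidates are treated
// as unavailable whenever the PU is restricted to uni-directional inter
// prediction, so the ordinary pruning of unavailable candidates also keeps
// bi-directional candidates out of the list.
std::vector<MergeCandidate> pruneUnavailable(
    const std::vector<MergeCandidate>& candidates, bool restrictedToUni) {
  std::vector<MergeCandidate> mergeList;
  for (const auto& c : candidates) {
    const bool isBi = c.predFlagL0 && c.predFlagL1;
    if (restrictedToUni && isBi) continue;  // bi-directional: unavailable
    mergeList.push_back(c);
  }
  return mergeList;
}
```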
In other examples where the video coder generates the merge candidate list such that the merge candidate list includes only uni-directional merge candidates, the video coder may convert bi-directional merge candidates into uni-directional merge candidates and then include the available ones of the uni-directional merge candidates in the merge candidate list. In such examples, the video coder may not add a uni-directional merge candidate to the merge candidate list if the uni-directional merge candidate is the same as a uni-directional merge candidate that has already been added to the merge candidate list. In this way, the video coder may prune duplicate uni-directional merge candidates from the merge candidate list. By converting bi-directional merge candidates into uni-directional merge candidates before pruning duplicate uni-directional merge candidates from the merge candidate list, the video coder may be able to avoid redundant merge candidates remaining in the merge candidate list after pruning. However, converting bi-directional merge candidates into uni-directional merge candidates before pruning duplicate uni-directional merge candidates may increase the hardware complexity of the video coder. In addition, the video coder may end up converting multiple identical bi-directional merge candidates into uni-directional merge candidates.
In other examples, the video coder may initially include the available bi-directional merge candidates in the merge candidate list of the current PU. The video coder may then delete the duplicate merge candidates from the merge candidate list. After the video coder has generated the merge candidate list, the video coder may determine a selected merge candidate from the merge candidate list and convert the selected merge candidate into a uni-directional merge candidate if the selected merge candidate is a bi-directional merge candidate. In this example, the video coder may convert the selected bi-directional merge candidate into a uni-directional merge candidate by generating the predictive video block for the current PU using only the reference blocks indicated by the list 0 motion vector or the list 1 motion vector.
Converting the selected bi-directional merge candidate into a uni-directional merge candidate after pruning the duplicate merge candidate from the merge candidate list may involve only a single conversion, as opposed to multiple conversions, as compared to converting the bi-directional merge candidate into a uni-directional merge candidate before pruning the duplicate merge candidate from the merge candidate list. For example, if the conversion occurs after pruning duplicate merge candidates, the selected merge candidate is the third merge candidate in the merge candidate list, and the third merge candidate is a bi-directional merge candidate, the video coder may convert only the third merge candidate to a uni-directional merge candidate. In this example, if the conversion occurs before pruning the duplicate merge candidate, the selected merge candidate is the third merge candidate in the merge candidate list, and the third merge candidate is a bi-directional merge candidate, the video coder may have to convert the three bi-directional merge candidates before the video coder is able to determine the selected merge candidate due to performing the pruning operation after the conversion.
The video coder may generate a different merge candidate list depending on whether the video coder converts bi-directional merge candidates into uni-directional merge candidates before or after removing duplicate merge candidates from the merge candidate list. For example, the video coder may convert a bi-directional merge candidate into a uni-directional merge candidate by taking the list 0 motion vector of the bi-directional merge candidate and ignoring the list 1 motion vector of the bi-directional merge candidate. In this example, the first merge candidate may be uni-directional and may specify a list 0 motion vector equal to the value MV1. The second merge candidate may be bi-directional and may specify a list 0 motion vector equal to the value MV1 and a list 1 motion vector equal to the value MV2. The first and second merge candidates may specify the same list 0 reference picture index. In this example, if the video coder converts the second merge candidate into a uni-directional merge candidate before pruning duplicate merge candidates from the merge candidate list, there may be two uni-directional merge candidates whose list 0 motion vectors equal MV1. Thus, the video coder may delete the uni-directional merge candidate generated from the second merge candidate because it is redundant with the first merge candidate. In this case, the video coder includes only one of these merge candidates (e.g., the first merge candidate) in the merge candidate list.
However, in the example of the previous paragraph, if the video coder converts the second merge candidate into a uni-directional merge candidate after deleting duplicate merge candidates from the merge candidate list, the video coder may include both the first and second merge candidates in the merge candidate list. After including the first and second merge candidates in the merge candidate list, the video coder may convert the second merge candidate into a uni-directional merge candidate by taking (i.e., keeping) the list 0 motion vector of the second merge candidate and ignoring the list 1 motion vector of the second merge candidate. Thus, the merge candidate list may, in effect, include two merge candidates that both specify a list 0 motion vector equal to MV1.
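A companion sketch of the prune-then-convert ordering, under the same hypothetical candidate representation as the sketch above, illustrates why the two orderings can yield different lists:

```python
# Hypothetical representation: {"l0": (mv, ref_idx) or None,
#                               "l1": (mv, ref_idx) or None}

def prune_then_convert(available_cands, selected_idx):
    merge_list = []
    for cand in available_cands:
        if cand not in merge_list:      # prune exact duplicates only
            merge_list.append(cand)
    # Single conversion, applied only to the selected candidate:
    sel = merge_list[selected_idx]
    if sel["l0"] is not None and sel["l1"] is not None:
        sel = {"l0": sel["l0"], "l1": None}
    return merge_list, sel

cands = [
    {"l0": ((3, -1), 0), "l1": None},          # uni-directional, MV1
    {"l0": ((3, -1), 0), "l1": ((2, 5), 1)},   # bi-directional, MV1/MV2
]
merge_list, selected = prune_then_convert(cands, selected_idx=1)
print(len(merge_list))  # 2: the bi-directional candidate is not a duplicate
print(selected)         # after conversion it repeats candidate 0's list 0 MV
```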
After including the available merge candidates in the merge candidate list, the video coder may determine whether the current PU is in a B slice (308). In response to determining that the current PU is in a B slice ("yes" of 308), the video coder may perform a process that generates zero or more artificial merge candidates and includes the artificial merge candidates in the merge candidate list (310). FIG. 7, described in detail below, illustrates an example process for generating artificial merge candidates.
In response to determining that the current PU is not in a B slice ("no" of 308), or after performing the process of generating artificial merge candidates, the video coder may determine whether the number of merge candidates in the merge candidate list is less than the maximum number of merge candidates (312). If the number of merge candidates in the merge candidate list is not less than the maximum number of merge candidates ("no" of 312), the video coder has completed generating the merge candidate list.
However, in response to determining that the number of merge candidates in the merge candidate list is less than the maximum number of merge candidates ("yes" of 312), the video coder may generate a zero-value merge candidate (314). If the current PU is in a P slice, the zero-value merge candidate may specify a list 0 motion vector with a magnitude equal to zero. If the current PU is in a B slice and the current PU is not restricted to uni-directional inter prediction, the zero-value merge candidate may specify a list 0 motion vector having a magnitude equal to zero and a list 1 motion vector having a magnitude equal to zero. In some examples, if the current PU is in a B slice and the current PU is restricted to uni-directional inter prediction, the zero-value merge candidate may specify a list 0 motion vector or a list 1 motion vector having a magnitude equal to zero. The video coder may then include the zero-value merge candidate in the merge candidate list (316).
After including the zero-value merge candidate in the merge candidate list, the video coder may again determine whether the number of merge candidates in the merge candidate list is less than the maximum number of merge candidates (312) and, if so, generate an additional zero-value merge candidate. In this way, the video coder may continue to generate zero-value merge candidates and include them in the merge candidate list until the number of merge candidates in the merge candidate list is equal to the maximum number of merge candidates.
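As a rough illustration, the fill loop might look like the following sketch. The candidate representation matches the earlier sketches, and the choice to keep the reference index fixed at 0 is a simplification; a real coder may vary the reference index across successive zero-value candidates.

```python
ZERO_MV = ((0, 0), 0)  # zero-magnitude motion vector, reference index 0

def fill_with_zero_candidates(merge_list, max_cands, is_b_slice, uni_only):
    """Append zero-value candidates until the list reaches max_cands."""
    while len(merge_list) < max_cands:
        if is_b_slice and not uni_only:
            # Bi-directional zero candidate: zero MV in both lists.
            merge_list.append({"l0": ZERO_MV, "l1": ZERO_MV})
        else:
            # Uni-directional zero candidate (list 0 chosen here).
            merge_list.append({"l0": ZERO_MV, "l1": None})
    return merge_list

print(fill_with_zero_candidates([], 5, is_b_slice=True, uni_only=True))
```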
FIG. 7 is a flow diagram illustrating an example process 350 for generating artificial merge candidates. A video coder, such as video encoder 20 or video decoder 30, may perform process 350 to generate artificial merge candidates for inclusion in the merge candidate list of the current PU.
After the video coder begins process 350, the video coder may determine whether to generate an artificial merge candidate (352). The video coder may determine whether to generate an artificial merge candidate in various ways. For example, the video coder may determine whether the number of artificial merge candidates in the merge candidate list is equal to the total number of unique artificial candidates that can be generated from the original merge candidates in the merge candidate list. An original merge candidate is a merge candidate that specifies the motion information of a PU other than the current PU. Furthermore, in this example, the video coder may determine whether the merge candidate list includes the maximum number of merge candidates. In this example, if both of these conditions are false, the video coder may determine to generate an artificial merge candidate.
If the video coder makes a determination to generate an artificial merge candidate ("yes" of 352), the video coder may determine whether the current PU is restricted to uni-directional inter prediction (354). As described above, the video coder may determine whether the current PU is restricted to uni-directional inter prediction in various ways. For example, the video coder may determine whether the current PU is restricted to uni-directional inter prediction based on size characteristics of the current PU. In another example, the video coder may determine whether the current PU is restricted to uni-directional inter prediction based on parameters indicated in syntax elements of the current treeblock, current CU, or current PU, or in a slice header, PPS, APS, SPS, or in another parameter set. In some examples, the parameters in the treeblock may specify that all PUs associated with the treeblock are restricted to unidirectional inter prediction. In some examples, the parameters in the CU may specify that all PUs associated with the CU are restricted to unidirectional inter prediction. In some examples, parameters in a PPS may specify that all PUs associated with the PPS are restricted to uni-directional inter prediction. In some examples, parameters in the APS may specify that all PUs associated with the APS are restricted to uni-directional inter prediction. In some examples, parameters in an SPS may specify that all PUs associated with pictures in a sequence associated with the SPS are restricted to uni-directional inter prediction.
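As an illustration of the size-based check, the following sketch restricts any PU whose width or height falls below a threshold of 8; both the rule and the threshold mirror the examples discussed in this document but are otherwise assumptions.

```python
def restricted_to_uni_inter_prediction(width, height, threshold=8):
    """Return True if a PU of the given dimensions is restricted to
    uni-directional inter prediction under this hypothetical rule."""
    return width < threshold or height < threshold

print(restricted_to_uni_inter_prediction(8, 4))    # True: an 8x4 PU
print(restricted_to_uni_inter_prediction(16, 16))  # False
```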
In response to determining that the current PU is restricted to uni-directional inter prediction ("yes" of 354), the video coder may generate an artificial uni-directional merge candidate (356). After generating the artificial uni-directional merge candidate, the video coder may include the artificial uni-directional merge candidate in a merge candidate list (358). After including the artificial uni-directional merge candidate in the merge candidate list, the video coder may determine whether to generate another artificial merge candidate (352), and if so, generate another artificial merge candidate.
The video coder may generate artificial uni-directional merge candidates in various ways. For example, the video coder may generate an artificial uni-directional merge candidate by first taking a pair of uni-directional merge candidates that are already in the merge candidate list. The first and second uni-directional merge candidates may specify motion vectors MV1 and MV2, respectively. In this example, the video coder may then scale MV2 according to the temporal distances between the current picture and the reference frames specified by the first and second uni-directional merge candidates. The video coder may then generate an artificial uni-directional merge candidate that specifies the scaled version of MV2. For instance, the reference picture associated with the first uni-directional merge candidate may occur one picture after the current picture, and the reference picture associated with the second uni-directional merge candidate may occur four pictures after the current picture. In this instance, the video coder may divide both the horizontal and vertical components of MV2 by four and use the scaled MV2, along with the reference picture index corresponding to MV1, as an artificial candidate. Similar scaling may be performed on MV1 based on MV2.
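A minimal sketch of this scaling step, assuming picture-order-count distances stand in for the temporal distances and using integer division in place of the codec's exact rounding rules:

```python
def scale_mv(mv, dist_target, dist_source):
    """Rescale mv, defined over dist_source pictures, to dist_target
    pictures (integer division approximates the real rounding)."""
    return (mv[0] * dist_target // dist_source,
            mv[1] * dist_target // dist_source)

mv2 = (16, -8)              # MV2 points four pictures ahead
print(scale_mv(mv2, 1, 4))  # (4, -2): MV2 rescaled to one picture ahead
```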
In another example, the video coder may generate an artificial uni-directional merge candidate that specifies one of the motion vectors specified by a bi-directional merge candidate. For example, a bi-directional merge candidate may specify a list 0 motion vector and a list 1 motion vector. In this example, the video coder may generate an artificial uni-directional merge candidate that specifies the list 0 motion vector but not the list 1 motion vector. The video coder may generate another artificial uni-directional merge candidate that specifies the list 1 motion vector but not the list 0 motion vector. In this way, the video coder may generate uni-directional artificial merge candidates from bi-directional spatial or temporal merge candidates by separating a bi-directional merge candidate into two uni-directional merge candidates (one from the list 0 motion vector and the other from the list 1 motion vector). The video coder may include either or both of the resulting uni-directional merge candidates in the merge candidate list. In other words, each artificial uni-directional merge candidate specifies a motion vector that was specified by a bi-directional merge candidate.
In examples where the video coder generates artificial uni-directional merge candidates based on the motion vectors specified by bi-directional merge candidates, the video coder may add the artificial uni-directional merge candidates to the merge candidate list in various orders. For example, the video coder may add an artificial uni-directional merge candidate based on the list 0 motion vector of the first bi-directional merge candidate, then an artificial uni-directional merge candidate based on the list 1 motion vector of the first bi-directional merge candidate, then an artificial uni-directional merge candidate based on the list 0 motion vector of the second bi-directional merge candidate, then an artificial uni-directional merge candidate based on the list 1 motion vector of the second bi-directional merge candidate, and so on.
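A sketch of this splitting step, again under the hypothetical candidate representation used above, appending the two halves of each bi-directional candidate in the interleaved order just described:

```python
def split_bi_candidates(merge_list):
    """Produce uni-directional artificial candidates by splitting each
    bi-directional candidate into its list 0 and list 1 halves."""
    artificial = []
    for cand in merge_list:
        if cand["l0"] is not None and cand["l1"] is not None:
            artificial.append({"l0": cand["l0"], "l1": None})
            artificial.append({"l0": None, "l1": cand["l1"]})
    return artificial

bi = {"l0": ((3, -1), 0), "l1": ((2, 5), 1)}
print(split_bi_candidates([bi]))  # two uni-directional candidates
```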
If the current PU is not restricted to uni-directional inter prediction ("no" of 354), the video coder may generate an artificial bi-directional merge candidate (360). As mentioned above, the video coder may determine whether the current PU is restricted to uni-directional inter prediction based on various factors, such as the size characteristics of the PU or signaled parameters. The video coder may generate artificial bi-directional merge candidates in various ways. For example, the video coder may select a combination of two merge candidates in the merge candidate list. In this example, the video coder may determine whether the first merge candidate of the combination specifies a reference picture in list 0, whether the second merge candidate of the combination specifies a reference picture in list 1, and whether the specified reference pictures have different picture order counts. If each of these conditions is true, the video coder may generate an artificial bi-directional merge candidate that specifies the list 0 motion vector of the first merge candidate in the combination and the list 1 motion vector of the second merge candidate in the combination. In some examples, such as the example of FIG. 4, where the merge candidate list may include uni-directional merge candidates and bi-directional merge candidates, process 350 does not include acts 354, 356, and 358. Instead, the video coder may generate artificial bi-directional merge candidates in the merge candidate list for PUs in B slices.
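The combination check described above might be sketched as follows; poc_l0 and poc_l1 are hypothetical mappings from reference index to picture order count for the two reference picture lists:

```python
def combined_bi_candidates(merge_list, poc_l0, poc_l1):
    """Pair candidates: list 0 motion from one, list 1 motion from
    another, kept only when the reference pictures differ in POC."""
    out = []
    for first in merge_list:
        for second in merge_list:
            if first is second:
                continue
            if first["l0"] is None or second["l1"] is None:
                continue
            mv0, ref0 = first["l0"]
            mv1, ref1 = second["l1"]
            if poc_l0[ref0] != poc_l1[ref1]:   # distinct reference pictures
                out.append({"l0": (mv0, ref0), "l1": (mv1, ref1)})
    return out

cands = [{"l0": ((1, 0), 0), "l1": None},
         {"l0": None, "l1": ((0, 2), 0)}]
print(combined_bi_candidates(cands, poc_l0={0: 8}, poc_l1={0: 16}))
```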
After generating the artificial bi-directional merge candidate, the video coder may include the artificial bi-directional merge candidate in the merge candidate list for the current PU (362). The video coder may then determine whether to generate another artificial merge candidate (352), and so on.
FIG. 8 is a flow diagram illustrating an example operation 400 for determining the motion information of a PU using AMVP mode. A video coder, such as video encoder 20 or video decoder 30, may perform operation 400 to determine the motion information of a PU using AMVP mode.
After the video coder begins operation 400, the video coder may determine whether inter prediction for the current PU is based on list 0 (402). If the inter prediction of the current PU is based on list 0 ("yes" of 402), the video coder may generate a list 0 MV predictor candidate list for the current PU (404). The list 0 MV predictor candidate list may include two list 0 MV predictor candidates. Each of the list 0 MV predictor candidates may specify a list 0 motion vector.
After generating the list 0 MV predictor candidate list, the video coder may determine a selected list 0 MV predictor candidate in the list 0 MV predictor candidate list (406). The video coder may determine the selected list 0 MV predictor candidate based on a list 0 MV predictor flag ("mvp_l0_flag"). The video coder may then determine the list 0 motion vector of the current PU based on the list 0 MVD of the current PU and the list 0 motion vector specified by the selected list 0 MV predictor candidate (408).
Furthermore, after determining that the inter prediction of the current PU is not based on list 0 ("no" of 402), or after determining the list 0 motion vector of the current PU (408), the video coder may determine whether the inter prediction of the current PU is based on list 1 or whether the PU is bi-directionally inter predicted (410). If the inter prediction of the current PU is not based on list 1 and the current PU is not bi-directionally inter predicted ("no" of 410), the video coder has completed determining the motion information of the current PU using AMVP mode. In response to determining that the inter prediction of the current PU is based on list 1 or that the current PU is bi-directionally inter predicted ("yes" of 410), the video coder may generate a list 1 MV predictor candidate list for the current PU (412). The list 1 MV predictor candidate list may include two list 1 MV predictor candidates. Each of the list 1 MV predictor candidates may specify a list 1 motion vector.
After generating the list 1 MV predictor candidate list, the video coder may determine a selected list 1 MV predictor candidate in the list 1 MV predictor candidate list (414). The video coder may determine the selected list 1 MV predictor candidate based on a list 1 MV predictor flag ("mvp_l1_flag"). The video coder may then determine the list 1 motion vector of the current PU based on the list 1 MVD of the current PU and the list 1 motion vector specified by the selected list 1 MV predictor candidate (416).
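The reconstruction step that these flags and MVDs feed might be sketched as follows; the function and variable names are illustrative rather than the codec's actual syntax structures:

```python
def amvp_reconstruct(predictor_candidates, mvp_flag, mvd):
    """Select one of two MV predictor candidates via the signaled flag
    and add the decoded motion vector difference (MVD)."""
    px, py = predictor_candidates[mvp_flag]
    return (px + mvd[0], py + mvd[1])

list0_predictors = [(4, -2), (6, 0)]
# mvp_flag selects the second predictor; the MVD refines it:
print(amvp_reconstruct(list0_predictors, mvp_flag=1, mvd=(-1, 3)))  # (5, 3)
```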
In some examples, the video coder may not add bi-directional MV predictor candidates to the list 0 and list 1 MV predictor candidate lists. In other words, if an MV predictor candidate specifies both a list 0 motion vector and a list 1 motion vector, the video coder may exclude the MV predictor candidate from the list 0 and list 1 MV predictor candidate lists. Rather, the video coder may add only uni-directional MV predictor candidates to the list 0 and list 1 MV predictor candidate lists. The video coder may achieve this by checking whether each possible and available MV predictor candidate is uni-directional and including only the uni-directional MV predictor candidates in the MV predictor candidate lists.
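A sketch of that uni-directional filter, reusing the hypothetical candidate representation from the earlier sketches:

```python
def uni_only_predictors(candidates, max_cands=2):
    """Keep only uni-directional, non-duplicate MV predictor candidates."""
    out = []
    for cand in candidates:
        is_uni = (cand["l0"] is None) != (cand["l1"] is None)
        if is_uni and cand not in out:
            out.append(cand)
        if len(out) == max_cands:
            break
    return out

cands = [
    {"l0": ((1, 1), 0), "l1": None},          # uni-directional: kept
    {"l0": ((2, 2), 0), "l1": ((3, 3), 0)},   # bi-directional: excluded
]
print(uni_only_predictors(cands))
```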
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media corresponding to tangible media, such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another (e.g., according to a communication protocol). In this manner, a computer-readable medium may generally correspond to (1) a non-transitory tangible computer-readable storage medium, or (2) a communication medium such as a signal or carrier wave. A data storage medium may be any available medium that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. The computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. However, it should be understood that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but instead are directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The instructions may be executed by one or more processors, such as one or more Digital Signal Processors (DSPs), general purpose microprocessors, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Thus, the term "processor," as used herein, may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within specialized hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques may be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including wireless handsets, Integrated Circuits (ICs), or groups of ICs (e.g., chipsets). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, and do not necessarily require realization by different hardware or software units. Rather, as noted above, the various components, modules and units may be combined in a codec hardware unit or provided by an interoperating hardware unit (including one or more processors as described above) in conjunction with a suitable set of software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.

Claims (29)

1. A method for decoding video data, the method comprising:
determining that a Coding Unit (CU) in a B slice is partitioned into one or more Prediction Units (PUs); and
for at least one PU of the CU:
determining, based on a size characteristic of the PU, whether the PU is restricted to uni-directional inter prediction; and
parsing an inter-prediction mode indicator for the PU from a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator can indicate that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator can indicate that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
2. The method of claim 1, wherein:
when the PU is restricted to uni-directional inter prediction, only a single bit is used to represent the inter-prediction mode indicator; and
when the PU is not restricted to uni-directional inter prediction, two bits are used to represent the inter-prediction mode indicator.
3. The method of claim 1, wherein determining whether the PU is restricted to uni-directional inter prediction comprises determining that the PU is restricted to uni-directional inter prediction if a height or width of a video block associated with the PU is below a threshold.
4. The method of claim 3, wherein the threshold is 8.
5. The method of claim 1, wherein determining whether the PU is restricted to uni-directional inter prediction comprises determining that the PU is restricted to uni-directional inter prediction if a first size of a video block associated with the PU is less than a threshold and a second size of the video block associated with the PU is less than or equal to the threshold.
6. The method of claim 1, further comprising entropy decoding the inter-prediction mode indicator using different contexts depending on whether the PU is restricted to uni-directional inter prediction.
7. A method for encoding video data, the method comprising:
partitioning a Coding Unit (CU) in a B slice into one or more Prediction Units (PUs); and
for at least one PU of the CU:
determining, based on a size characteristic of the PU, whether the PU is restricted to uni-directional inter prediction; and
signaling an inter-prediction mode indicator for the PU in a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
8. The method of claim 7, wherein:
when the PU is restricted to uni-directional inter prediction, only a single bit is used to represent the inter-prediction mode indicator; and
when the PU is not restricted to uni-directional inter prediction, two bits are used to represent the inter-prediction mode indicator.
9. The method of claim 7, wherein determining whether the PU is restricted to uni-directional inter prediction comprises determining that the PU is restricted to uni-directional inter prediction if a height or width of a video block associated with the PU is below a threshold.
10. The method of claim 9, wherein the threshold is 8.
11. The method of claim 7, wherein determining whether the PU is restricted to uni-directional inter prediction comprises determining that the PU is restricted to uni-directional inter prediction if a first size of a video block associated with the PU is less than a threshold and a second size of the video block associated with the PU is less than or equal to the threshold.
12. The method of claim 7, further comprising outputting encoded video data based at least in part on a predictive video block of the PU.
13. The method of claim 7, further comprising entropy encoding the inter-prediction mode indicator using different contexts depending on whether the PU is restricted to uni-directional inter-prediction.
14. A device configured to decode video data, the device comprising:
a storage medium configured to store the video data; and
one or more processors configured to:
determining that a Coding Unit (CU) in a B slice is partitioned into one or more Prediction Units (PUs); and
for at least one PU of the CU:
determining, based on a size characteristic of the PU, whether the PU is restricted to uni-directional inter prediction; and
parsing an inter-prediction mode indicator for the PU from a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator can indicate that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator can indicate that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
15. The device of claim 14, wherein:
when the PU is restricted to uni-directional inter prediction, only a single bit is used to represent the inter-prediction mode indicator; and
when the PU is not restricted to uni-directional inter prediction, two bits are used to represent the inter-prediction mode indicator.
16. The device of claim 14, wherein the one or more processors are configured to determine that the PU is restricted to uni-directional inter prediction if a height or width of a video block associated with the PU is below a threshold.
17. The device of claim 16, wherein the threshold is 8.
18. The device of claim 14, wherein the one or more processors are configured to determine that the PU is restricted to uni-directional inter prediction if a first size of a video block associated with the PU is less than a threshold and a second size of the video block associated with the PU is less than or equal to the threshold.
19. The device of claim 14, wherein the one or more processors are further configured to entropy decode the inter-prediction mode indicator using a different context depending on whether the PU is restricted to uni-directional inter prediction.
20. A device configured to encode video data, the device comprising:
a storage medium configured to store the video data; and
one or more processors configured to:
partitioning a Coding Unit (CU) in a B slice into one or more Prediction Units (PUs); and
for at least one PU of the CU:
determining, based on a size characteristic of the PU, whether the PU is restricted to uni-directional inter prediction; and
signaling an inter-prediction mode indicator for the PU in a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
21. The device of claim 20, wherein:
when the PU is restricted to uni-directional inter prediction, only a single bit is used to represent the inter-prediction mode indicator; and
when the PU is not restricted to uni-directional inter prediction, two bits are used to represent the inter-prediction mode indicator.
22. The device of claim 20, wherein the one or more processors are configured to determine that the PU is restricted to uni-directional inter prediction if a height or width of a video block associated with the PU is below a threshold.
23. The device of claim 22, wherein the threshold is 8.
24. The device of claim 20, wherein the one or more processors are configured to determine that the PU is restricted to uni-directional inter prediction if a first size of a video block associated with the PU is less than a threshold and a second size of the video block associated with the PU is less than or equal to the threshold.
25. The device of claim 20, wherein the one or more processors are further configured to entropy encode the inter-prediction mode indicator using a different context depending on whether the PU is restricted to uni-directional inter-prediction.
26. A device configured to decode video data, the device comprising:
means for determining that a Coding Unit (CU) in a B slice is partitioned into one or more Prediction Units (PUs); and
for at least one PU of the CU:
means for determining whether the PU is restricted to uni-directional inter prediction based on a size characteristic of the PU; and
means for parsing an inter-prediction mode indicator for the PU from a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
27. A device configured to encode video data, the device comprising:
means for partitioning a Coding Unit (CU) in a B slice into one or more Prediction Units (PUs); and
for at least one PU of the CU:
means for determining whether the PU is restricted to uni-directional inter prediction based on a size characteristic of the PU; and
means for signaling an inter-prediction mode indicator for the PU in a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
28. A computer-readable storage medium having instructions stored thereon that, when executed, configure one or more processors to:
determining that a Coding Unit (CU) in a B slice is partitioned into one or more Prediction Units (PUs); and
for at least one PU of the CU:
determining, based on a size characteristic of the PU, whether the PU is restricted to uni-directional inter prediction; and
parsing an inter-prediction mode indicator for the PU from a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator can indicate that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator can indicate that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.
29. A computer-readable storage medium having instructions stored thereon that, when executed, configure one or more processors to:
partitioning a Coding Unit (CU) in a B slice into one or more Prediction Units (PUs); and
for at least one PU of the CU:
determining, based on a size characteristic of the PU, whether the PU is restricted to uni-directional inter prediction; and
signaling an inter-prediction mode indicator for the PU in a bitstream, wherein, when the PU is restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0 or uni-directionally inter-predicted based on list 1, and wherein, when the PU is not restricted to uni-directional inter prediction, the inter-prediction mode indicator is capable of indicating that the PU is uni-directionally inter-predicted based on list 0, uni-directionally inter-predicted based on list 1, or bi-directionally inter-predicted.

Applications Claiming Priority (6)
US201261596597P: priority 2012-02-08, filed 2012-02-08
US 61/596,597: 2012-02-08
US201261622968P: priority 2012-04-11, filed 2012-04-11
US 61/622,968: 2012-04-11
US13/628,562 (US9451277B2, en): priority 2012-02-08, filed 2012-09-27, Restriction of prediction units in B slices to uni-directional inter prediction
US 13/628,562: 2012-09-27

Related Parent Applications (1)
HK15101228.5A (Addition, HK1201000A1, en): priority 2012-02-08, filed 2013-02-07, Restriction of prediction units in B slices to uni-directional inter prediction

Related Child Applications (1)
HK15101228.5A (Division, HK1201000A1, en): priority 2012-02-08, filed 2013-02-07, Restriction of prediction units in B slices to uni-directional inter prediction

Publications (2)
HK1216273A1 (en): published 2016-10-28
HK1216273B (en): published 2019-07-12
