
HK1208577B - High precision up-sampling in scalable coding of high bit-depth video - Google Patents


Info

Publication number
HK1208577B
HK1208577B (application HK15109134.1A)
Authority
HK
Hong Kong
Prior art keywords
parameter
rounding
bit depth
data
ioffset2
Prior art date
Application number
HK15109134.1A
Other languages
Chinese (zh)
Other versions
HK1208577A1 (en)
Inventor
尹鹏 (Peng Yin)
吕陶然 (Taoran Lu)
陈涛 (Tao Chen)
Original Assignee
Dolby Laboratories Licensing Corporation (杜比实验室特许公司)
Application filed by Dolby Laboratories Licensing Corporation (杜比实验室特许公司)
Priority claimed from PCT/US2013/073006 external-priority patent/WO2014099370A1/en
Publication of HK1208577A1 publication Critical patent/HK1208577A1/en
Publication of HK1208577B publication Critical patent/HK1208577B/en

Description

High precision upsampling in scalable coding of high bit depth video
Cross Reference to Related Applications
This application claims priority from U.S. Provisional Patent Application No. 61/745,050, filed on December 21, 2012, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates generally to images. More particularly, embodiments of the present invention relate to high precision upsampling in scalable video codecs for high bit depth video.
Background
Audio and video compression are key components of the development, storage, distribution, and consumption of multimedia content. The choice of a compression method involves a trade-off among coding efficiency, coding complexity, and delay. As available processing power increases relative to its cost, more complex compression techniques that allow for more efficient compression can be developed. As an example, in video compression, the Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO) has continued improving upon the original MPEG-1 video standard by releasing the MPEG-2, MPEG-4 (part 2), and H.264/AVC (or MPEG-4, part 10) coding standards.
Despite the compression efficiency and success of H.264, a new generation of video compression technology, known as High Efficiency Video Coding (HEVC), is currently under development. HEVC is expected to provide improved compression capability over the existing H.264 (also known as AVC) standard. A draft of HEVC is available in "High Efficiency Video Coding (HEVC) text specification draft 9," by B. Bross, W.-J. Han, G. J. Sullivan, J.-R. Ohm, and T. Wiegand, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T/ISO/IEC, document JCTVC-K1003, October 2012, the entire contents of which are incorporated herein by reference, while the H.264 standard is published as "Advanced Video Coding for generic audio-visual services," ITU-T Rec. H.264 and ISO/IEC 14496-10, the entire contents of which are incorporated herein by reference.
A video signal may be characterized by a number of parameters, such as bit depth, color space, color gamut, and resolution. Modern televisions and video playback devices (e.g., Blu-ray players) support a variety of resolutions, including standard definition (e.g., 720 x 480i) and high definition (HD) (e.g., 1920 x 1080p). Ultra high definition (UHD) is a next-generation resolution format with at least a 3840 x 2160 resolution. Ultra high definition may also be referred to as Ultra HD, UHDTV, or super high-vision. As used herein, UHD denotes any resolution higher than HD resolution.
Another aspect of a video signal's characteristics is its dynamic range. Dynamic Range (DR) refers to the range of intensities (e.g., luminance, luma) in an image, for example, from darkest darks to brightest brights. As used herein, the term "dynamic range" (DR) may relate to the capability of the human psychovisual system (HVS) to perceive a range of intensities (e.g., luminance, luma) in an image, e.g., from darkest darks to brightest brights. In this sense, DR relates to a "scene-referred" intensity. DR may also relate to the ability of a display device to adequately or approximately render an intensity range of a particular breadth. In this sense, DR relates to a "display-referred" intensity. Unless a particular sense is explicitly specified to have particular significance at any point in the description herein, it should be inferred that the term may be used in either sense, e.g., interchangeably.
As used herein, the term high dynamic range (HDR) relates to a DR breadth that spans some 14 to 15 orders of magnitude of the human visual system (HVS). For example, a substantially normal, well-adapted human (e.g., in one or more of a statistical, biometric, or ophthalmological sense) has an intensity range that spans about 15 orders of magnitude. An adapted human may perceive dim light sources of as few as a mere handful of photons. Yet, these same humans can perceive the near painfully brilliant intensity of the noonday sun in the desert, at sea, or on snow (or even glance at the sun, albeit briefly to prevent injury). This span, though, is available to "adapted" humans, e.g., those whose HVS has had a period of time in which to reset and adjust.
In contrast, the DR over which a human can simultaneously perceive an extensive breadth in the intensity range is somewhat truncated in relation to HDR. As used herein, the terms "enhanced dynamic range" (EDR), "visual dynamic range," or "variable dynamic range" (VDR) may individually or interchangeably relate to the DR that is simultaneously perceivable by the HVS. As used herein, EDR may relate to a DR that spans 5 to 6 orders of magnitude. Thus, while perhaps somewhat narrower in relation to true scene-referred HDR, EDR nonetheless represents a wide DR breadth. As used herein, the term "simultaneous dynamic range" may relate to EDR.
As used herein, the term image or video "bit depth" refers to the number of bits used to represent or store the pixel values of the color components of an image or video signal. For example, the term N-bit video (e.g., N = 8) indicates that the pixel values of a color component (e.g., R, G, or B) in the video signal may take values in the range 0 to 2^N − 1.
As used herein, the term "high bit depth" denotes any bit depth greater than 8 bits (e.g., N = 10 bits). Note that while HDR images and video signals are typically associated with high bit depths, high bit depth images do not necessarily have a high dynamic range. Thus, as used herein, high bit depth imaging may be associated with both HDR and SDR signals.
To support backward compatibility with legacy playback devices as well as new display technologies, multiple layers may be used to deliver UHD and HDR (or SDR) video data from an upstream device to downstream devices. Given such a multi-layer stream, legacy decoders may use the base layer to reconstruct an HD SDR version of the content. Advanced decoders may use both the base layer and the enhancement layers to reconstruct a UHD EDR version of the content to render it on more capable displays. As appreciated by the inventors here, improved techniques for coding high bit depth video using scalable codecs are desirable.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Accordingly, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, issues identified with respect to one or more approaches should not be assumed to have been recognized in any prior art on the basis of this section, unless otherwise indicated.
Drawings
Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
FIG. 1 depicts an exemplary implementation of a scalable coding system according to an embodiment of the invention;
FIG. 2 depicts an exemplary implementation of a scalable decoding system according to an embodiment of the invention;
FIG. 3 depicts an exemplary process for upsampling image data according to an embodiment of the present invention.
Detailed Description
High precision upsampling in scalable coding of video inputs with high bit depth is described herein. Given the parameters related to the bit depth of the intermediate result, the internal input bit depth, and the filter precision bit depth, scaling and rounding factors may be determined to maintain the precision of the operation and prevent overflow.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It may be evident, however, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are not described in detail to avoid unnecessarily obscuring the present invention.
Overview
The exemplary embodiments described herein relate to high precision upsampling in layered encoding and decoding of video signals with high bit depth. In response to the bit depth requirements of a video coding system, input data, filter coefficients, and scaling and rounding parameters are determined for a separable up-scaling filter. First, the input data are filtered in a first spatial direction using a first rounding parameter to generate first upsampled data. First intermediate data are generated by scaling the first upsampled data using a first offset parameter. The intermediate data are then filtered in a second spatial direction using a second rounding parameter to generate second upsampled data. Second intermediate data are generated by scaling the second upsampled data using a second offset parameter. Final upsampled data may be generated by pruning the second intermediate data.
High precision separable upsampling
Existing display and playback devices, such as HDTVs, set-top boxes, or Blu-ray players, typically support signals of up to 1080p HD resolution (e.g., 1920 x 1080 at 60 frames per second). For consumer applications, such signals are now typically compressed using a bit depth of 8 bits per pixel per color component in a luma-chroma color format where the chroma components typically have a lower resolution than the luma component (e.g., the YCbCr or YUV 4:2:0 color format). Because of the 8-bit depth and the corresponding low dynamic range, such signals are typically referred to as signals with standard dynamic range (SDR).
As new television standards, such as Ultra High Definition (UHD), are being developed, it is desirable to encode signals with increased resolution and/or higher bit depth in scalable formats.
Fig. 1 depicts an embodiment of an exemplary implementation of a scalable coding system. In an exemplary embodiment, the Base Layer (BL) input signal 104 may represent an HD SDR signal and the Enhancement Layer (EL) input 102 may represent a high bit depth UHD HDR (or SDR) signal. The BL input 104 is compressed (or encoded) using a BL encoder 105 to generate an encoded BL bit stream 107. The BL encoder 105 may compress or encode the BL input signal 104 using any of the known or future video compression algorithms such as MPEG-2, MPEG-4 part 2, h.264, HEVC, VP8, etc.
Given the BL input 104, the encoding system 100 generates not only the encoded BL bit stream 107 but also a BL signal 112, where the BL signal 112 represents the BL signal 107 as it will be decoded by a corresponding receiver. In some embodiments, the signal 112 may be generated by a separate BL decoder (110) following the BL encoder 105. In some other embodiments, the signal 112 may be taken from the feedback loop used to perform motion compensation in the BL encoder 105. As shown in FIG. 1, the signal 112 may be processed by an inter-layer processing unit 115 to generate a signal suitable for use by the inter-layer prediction process 120. In some embodiments, the inter-layer processing unit 115 may up-scale the signal 112 to match the spatial resolution of the EL input 102 (e.g., from HD resolution to UHD resolution). After inter-layer prediction 120, a residual 127 is computed and then coded by the EL encoder (130) to generate an encoded EL bit stream 132. The BL bit stream 107 and the EL bit stream 132 are typically multiplexed into a single encoded bit stream, which is transmitted to a suitable receiver.
The term SHVC denotes the scalable extension of the new-generation video compression technology known as High Efficiency Video Coding (HEVC) [1], which enables significantly higher compression capability than the existing AVC (H.264) standard [2]. SHVC is currently being developed jointly by the ISO/IEC MPEG and ITU-T WP3/16 groups. One of the key aspects of SHVC is spatial scalability, where inter-layer texture prediction (e.g., 120 or 210) provides the most significant gain. An example of an SHVC decoder is shown in FIG. 2. As part of inter-layer prediction, an upsampling process (220) upsamples or up-converts pixel data from the base layer (215) to match the pixel resolution of the data received in the enhancement layer (e.g., 202 or 230). In an embodiment, the upsampling process may be performed by applying an upsampling or interpolation filter. In the scalable extension of H.264 (SVC) and in the SHVC SMuC 0.1.1 software [3], separable polyphase upsampling/interpolation filters are applied. While these filters perform well on input data with a standard bit depth (e.g., images using 8 bits per pixel per color component), they may overflow for input data with a high bit depth (e.g., images using 10 or more bits per pixel per color component).
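The overflow risk can be illustrated numerically. The following Python sketch uses a hypothetical 8-tap filter with 6-bit precision (the coefficient values are illustrative assumptions, not the actual SMuC filter taps): because some taps are negative, the sum of the absolute coefficient values exceeds 2^6, and a worst-case first-stage accumulation of 10-bit samples no longer fits in a signed 16-bit intermediate.

```python
# Hypothetical 8-tap upsampling filter with 6-bit precision:
# the coefficients sum to 2**6 = 64 (unity DC gain), but the sum of
# their absolute values is larger because of the negative taps.
taps = [-1, 4, -11, 52, 26, -8, 3, -1]
assert sum(taps) == 64

abs_gain = sum(abs(t) for t in taps)   # worst-case amplification factor

for bit_depth in (8, 10):
    max_sample = (1 << bit_depth) - 1
    worst = abs_gain * max_sample      # largest possible |accumulator| value
    fits_int16 = worst <= 32767        # signed 16-bit range
    print(bit_depth, worst, fits_int16)
```

With these illustrative taps, the worst case is 106 x 255 = 27,030 for 8-bit data (which fits signed 16-bit arithmetic), but 106 x 1023 = 108,438 for 10-bit data, which overflows.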
In a 2D upsampling or interpolation process, it is common practice to apply separable filters to reduce the processing complexity. Such a filter first upsamples the image in one spatial direction (e.g., the horizontal or vertical direction) and then in the other direction (e.g., the vertical or horizontal direction). Without loss of generality, in the following description it is assumed that horizontal upsampling is followed by vertical upsampling. The filtering process can then be described as:
horizontal up-sampling:
tempArray[x,y]=∑j(eF[xPhase,j]*refSampleArray[xRef+j,y]) (1)
vertical upsampling
predArray[x,y]=Clip((∑j(eF[yPhase,j]*tempArray[x,yRef+j])+offset)>>nShift) (2)
where eF stores the polyphase upsampling filter coefficients, refSampleArray contains reference sample values from the reconstructed base layer, tempArray stores the intermediate values after the first 1D filtering, predArray stores the final values after the second 1D filtering, xRef and yRef correspond to the relative pixel positions of the upsampling, nShift represents a scaling or normalization parameter, offset represents a rounding parameter, and Clip() represents a clipping (pruning) function. For example, given data x and thresholds A and B, in an exemplary embodiment the function y = Clip(x, A, B) returns A if x < A, B if x > B, and x otherwise.
For example, for N-bit image data, example values of A and B may include A = 0 and B = 2^N − 1.
In equation (2), the operation a = b >> c denotes dividing b by 2^c by shifting the binary representation of b to the right by c bits (i.e., a = b / 2^c). Note that in equation (1), no clipping or shifting operations are applied in the first-stage filtering. It should also be noted that under this implementation the order of horizontal and vertical filtering is not important: applying vertical filtering first and then horizontal filtering produces the same result as applying horizontal filtering first and then vertical filtering.
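As a quick illustration of the two primitives just defined, the following Python sketch implements the Clip() function and the right-shift division (for nonnegative values); the function names are illustrative:

```python
def clip(x, a, b):
    # y = Clip(x, A, B): saturate x to the range [A, B]
    return a if x < a else b if x > b else x

# a = b >> c divides b by 2**c (for nonnegative b) by shifting
# the binary representation of b to the right by c bits.
b, c = 1024, 3
assert (b >> c) == b // (2 ** c) == 128

# For N = 8-bit data: A = 0, B = 2**8 - 1 = 255
assert clip(-7, 0, 255) == 0    # below range, saturates to A
assert clip(300, 0, 255) == 255 # above range, saturates to B
assert clip(128, 0, 255) == 128 # in range, passes through
```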
In SMuC 0.1.1 [3], the filter precision of eF (denoted US_FILTER_PREC) is set to 6 bits. When the internal bit depth of refSampleArray is 8 bits, tempArray remains within the target implementation bit depth (e.g., 14 or 16 bits); but when the internal bit depth of refSampleArray exceeds 8 bits, the output of equation (1) may overflow.
In an embodiment, such overflow may be avoided by: (a) fix the order of operations in the upsampling process, and (b) include intermediate scaling operations. In an embodiment, when vertical filtering is performed after horizontal filtering, upsampling may be implemented as follows:
horizontal up-sampling:
tempArray[x,y]=(∑j(eF[xPhase,j]*refSampleArray[xRef+j,y])+iOffset1)>>nShift1 (3)
vertical upsampling
predArray[x,y]=Clip((∑j(eF[yPhase,j]*tempArray[x,yRef+j])+iOffset2)>>nShift2) (4)
Without loss of generality, let INTERM_BITDEPTH represent the bit depth (or bit resolution) requirement of the intermediate filtering process; that is, no result may be represented by more bits than INTERM_BITDEPTH (e.g., INTERM_BITDEPTH = 16). Let INTERNAL_INPUT_BITDEPTH denote the bit depth used to represent the input video signal in the processor. Note that INTERNAL_INPUT_BITDEPTH may be equal to or greater than the original bit depth of the input signal. For example, in some embodiments, 8-bit input video data may be represented internally with INTERNAL_INPUT_BITDEPTH = 10. Alternatively, in another example, a 14-bit input video may be represented with INTERNAL_INPUT_BITDEPTH = 14.
In one embodiment, the scaling parameters in equations (3) and (4) may be calculated as
nShift1=(US_FILTER_PREC+INTERNAL_INPUT_BITDEPTH)-INTERM_BITDEPTH, (5)
nShift2=2*US_FILTER_PREC-nShift1. (6)
In an embodiment, the values of nShift1 and nShift2 cannot be negative. For example, a negative value of nShift1 would indicate that the bit resolution allowed for the intermediate results is more than sufficient to prevent overflow; hence, when the computed value is negative, nShift1 can be set to 0.
If rounding is used in both (3) and (4) (highest complexity, highest precision):
iOffset1=1<<(nShift1-1), (7)
iOffset2=1<<(nShift2-1), (8)
where a = 1 << c denotes shifting the binary representation of "1" to the left by c bits, i.e., a = 2^c.
Alternatively, if rounding is not used in both (3) and (4) (lowest complexity, lowest precision):
iOffset1=0, (9)
iOffset2=0. (10)
alternatively, if rounding is used in (3) and not used in (4):
iOffset1=1<<(nShift1-1), (11)
iOffset2=0. (12)
alternatively, if rounding is used in (4) and not in (3) (which is common):
iOffset1=0, (13)
iOffset2=1<<(nShift2-1). (14)
In an exemplary embodiment, let INTERM_BITDEPTH = 14, US_FILTER_PREC = 6, and INTERNAL_INPUT_BITDEPTH = 8; then, from equations (5) and (6), nShift1 = 0 and nShift2 = 12. In another example, for US_FILTER_PREC = 6, if INTERNAL_INPUT_BITDEPTH = 10 and INTERM_BITDEPTH = 14, then nShift1 = 2 and, depending on the selected rounding mode, iOffset1 = 0 or 2. Further, nShift2 = 10 and, depending on the selected rounding mode, iOffset2 = 0 or 2^9 = 512.
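The parameter derivation of equations (5)-(14) can be collected into a small helper. The following sketch (the function name and the boolean rounding-mode flags are illustrative choices, not from the specification) reproduces both worked examples above:

```python
def upsample_params(us_filter_prec, internal_input_bitdepth, interm_bitdepth,
                    round1=False, round2=True):
    # Equations (5)-(6): split the total 2*US_FILTER_PREC normalization
    # between the two stages so intermediates fit in INTERM_BITDEPTH.
    # max(0, ...) applies the clamp for negative nShift1 values.
    nshift1 = max(0, us_filter_prec + internal_input_bitdepth - interm_bitdepth)
    nshift2 = 2 * us_filter_prec - nshift1
    # Equations (7)-(14): per-stage rounding offsets, selectable per mode.
    ioffset1 = (1 << (nshift1 - 1)) if (round1 and nshift1 > 0) else 0
    ioffset2 = (1 << (nshift2 - 1)) if (round2 and nshift2 > 0) else 0
    return nshift1, nshift2, ioffset1, ioffset2

# Worked example 1: INTERM_BITDEPTH=14, US_FILTER_PREC=6, 8-bit input
assert upsample_params(6, 8, 14) == (0, 12, 0, 1 << 11)
# Worked example 2: 10-bit internal input, rounding in both stages
assert upsample_params(6, 10, 14, round1=True) == (2, 10, 2, 1 << 9)
```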
Note that with the implementation shown in equations (3) and (4), horizontal filtering followed by vertical filtering may yield different results than vertical filtering followed by horizontal filtering. Hence, in a decoder, the filtering order can either be fixed and predetermined for all decoders (e.g., by a decoding standard or specification), or, in some embodiments, the appropriate order can be signaled to the decoder by the encoder using appropriate flags in the metadata.
FIG. 3 depicts an exemplary process for image data upsampling according to an embodiment of the present invention. First (305), an encoder or decoder in a layered coding system determines the appropriate filtering order (e.g., vertical filtering after horizontal filtering) and the scaling and rounding parameters. In an embodiment, the scaling and rounding parameters may be determined according to equations (5)-(14) based on the bit depth required for intermediate storage (e.g., INTERM_BITDEPTH), the filter coefficient precision (e.g., US_FILTER_PREC), and the internal input representation (e.g., INTERNAL_INPUT_BITDEPTH). In step 310, the image data are upsampled in a first direction (e.g., the horizontal direction). The output of this stage is rounded and scaled using a first offset parameter (e.g., nShift1) and a first rounding parameter (e.g., iOffset1) prior to intermediate storage. Next (315), the intermediate results are upsampled in a second direction (e.g., the vertical direction). The output of this stage is rounded and scaled using a second offset parameter (e.g., nShift2) and a second rounding parameter (e.g., iOffset2). Finally (320), the output data of the second stage are pruned (clipped) before final output or storage.
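The two-stage process above can be sketched end-to-end for the simplest case: 2x upsampling with a hypothetical two-phase, 2-tap filter of 6-bit precision (phase 0 copies the nearest sample, phase 1 averages two neighbors). The filter taps, edge handling, and default bit depths here are illustrative assumptions, not the SMuC filters:

```python
US_FILTER_PREC = 6
eF = [[64, 0], [32, 32]]   # eF[phase][j]; each phase sums to 2**6

def clip(x, a, b):
    return a if x < a else b if x > b else x

def upsample2x(ref, in_depth, interm_depth=16):
    # Step 305: derive scaling parameters per equations (5)-(6), clamping
    # nShift1 at 0, with rounding only in the second stage (the common
    # case, equations (13)-(14)).
    nshift1 = max(0, US_FILTER_PREC + in_depth - interm_depth)
    nshift2 = 2 * US_FILTER_PREC - nshift1
    ioffset1, ioffset2 = 0, 1 << (nshift2 - 1)

    h, w = len(ref), len(ref[0])
    # Step 310: horizontal pass, equation (3), with edge-sample replication
    temp = [[0] * (2 * w) for _ in range(h)]
    for y in range(h):
        for x in range(2 * w):
            xref, phase = x // 2, x % 2
            acc = sum(eF[phase][j] * ref[y][min(xref + j, w - 1)]
                      for j in range(2))
            temp[y][x] = (acc + ioffset1) >> nshift1

    # Steps 315/320: vertical pass, equation (4), then prune to output range
    out = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        yref, phase = y // 2, y % 2
        for x in range(2 * w):
            acc = sum(eF[phase][j] * temp[min(yref + j, h - 1)][x]
                      for j in range(2))
            out[y][x] = clip((acc + ioffset2) >> nshift2,
                             0, (1 << in_depth) - 1)
    return out
```

For a 2x2 10-bit input [[0, 1023], [0, 1023]], every output row is [0, 512, 1023, 1023]: the averaging phase yields (0 + 1023) / 2 rounded up to 512 by iOffset2, and the pruning step keeps all values within [0, 1023].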
The methods described herein may also be applied to other imaging applications that utilize separable filtering of image data at high bit depth, such as down-scaling, noise filtering, or frequency translation.
Exemplary computer System implementation
Embodiments of the invention may be implemented in computer systems, systems configured in electronic circuits and components, Integrated Circuit (IC) devices such as microcontrollers, Field Programmable Gate Arrays (FPGAs), or other configurable or Programmable Logic Devices (PLDs), discrete-time or Digital Signal Processors (DSPs), Application Specific ICs (ASICs), and/or apparatus including one or more of such systems, devices or components. The computer and/or IC may carry out, control, or execute instructions related to high precision upsampling, such as those described herein. The computer and/or IC may calculate any of a variety of parameters or values related to the high precision upsampling described herein. The encoding and decoding embodiments may be implemented in hardware, software, firmware, and various combinations thereof.
Some implementations of the invention include a computer processor executing software instructions that cause the processor to perform the methods of the invention. For example, one or more processors in a display, encoder, set-top box, transcoder, etc. may implement the methods described above with respect to high precision upsampling by executing software instructions in a program memory accessible to the processors. The present invention may also be provided as a program product. The program product may comprise any medium carrying a set of computer-readable signals comprising instructions which, when executed by a data processor, cause the data processor to carry out the method of the invention. The program product according to the invention may have any of various forms. For example, the program product may include physical media such as magnetic data storage media including floppy disks, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, and the like. The computer readable signal on the program product may optionally be compressed or encrypted.
Where a component (e.g., a software module, processor, assembly, device, circuit, etc.) is referred to above, unless otherwise indicated, reference to that component (including a reference to a "means") should be interpreted as including as equivalents of that component any component which performs the function of the described component (e.g., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.
Equivalence, extension, substitution and mixing
Exemplary embodiments related to high precision upsampling for scalable coding of high bit depth video are thus described. In the foregoing specification, embodiments of the invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. For a term contained in a claim, any definition explicitly set forth herein shall govern the meaning of that term as used in the claim. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Reference to the literature
[1] B. Bross, W.-J. Han, G. J. Sullivan, J.-R. Ohm, and T. Wiegand, "High Efficiency Video Coding (HEVC) text specification draft 9," Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T/ISO/IEC, document JCTVC-K1003, Oct. 2012.
[2] ITU-T and ISO/IEC JTC 1, "Advanced Video Coding for generic audio-visual services," ITU-T Rec. H.264 and ISO/IEC 14496-10 (AVC).
[3] SMuC 0.1.1 software for SHVC (scalable extension of HEVC): https://hevc.hhi.fraunhofer.de/svn/svn_SMuCSoftware/tags/0.1.1

Claims (13)

1. A method for upsampling image data from a first layer to a second layer in a scalable video system, the method comprising:
determining, with a processor, scaling and rounding parameters in response to a bit depth requirement of a scalable video system, wherein the bit depth requirement of the scalable video system comprises an intermediate value bit depth, an internal input bit depth, and a filter coefficient precision bit depth;
generating first upsampled data by filtering image data from a first layer, wherein the filtering of the image data is performed in a first spatial direction using a first rounding parameter;
generating first intermediate data by scaling the first up-sampled data by a first offset parameter;
generating second upsampled data by filtering the first intermediate data, wherein the filtering of the first intermediate data is performed in a second spatial direction using a second rounding parameter;
generating second intermediate data by scaling the second up-sampled data by a second offset parameter; and
generating output upsampled data for the second layer by pruning the second intermediate data, wherein determining the first offset parameter comprises adding the internal input bit depth to the filter coefficient precision bit depth and subtracting the intermediate value bit depth from their sum, and wherein determining the second offset parameter comprises subtracting the first offset parameter from twice the filter coefficient precision bit depth.
2. The method of claim 1, wherein the scalable video system comprises a video encoder.
3. The method of claim 1, wherein the scalable video system comprises a video decoder.
4. The method of claim 1, wherein determining a first rounding parameter and a second rounding parameter comprises calculating:
iOffset1=1<<(nShift1-1), and
iOffset2=1<<(nShift2-1),
where iOffset1 is the first rounding parameter, iOffset2 is the second rounding parameter, nShift1 is the first offset parameter, and nShift2 is the second offset parameter.
5. The method of claim 1, wherein the first rounding parameter and the second rounding parameter are determined to be equal to zero.
6. The method of claim 1, wherein determining a first rounding parameter and a second rounding parameter comprises calculating
iOffset1=1<<(nShift1-1), and
iOffset2=0,
where iOffset1 is the first rounding parameter, iOffset2 is the second rounding parameter, and nShift1 is the first offset parameter.
7. The method of claim 1, wherein determining a first rounding parameter and a second rounding parameter comprises calculating
iOffset1=0, and
iOffset2=1<<(nShift2-1),
where iOffset1 is the first rounding parameter, iOffset2 is the second rounding parameter, and nShift2 is the second offset parameter.
8. The method of claim 1, wherein the first spatial direction is a horizontal direction and the second spatial direction is a vertical direction.
9. The method of claim 1, wherein the first spatial direction is a vertical direction and the second spatial direction is a horizontal direction.
10. The method of claim 1, wherein the first spatial direction is fixed and predetermined by a specification of a video decoding standard.
11. The method of claim 1, wherein the first spatial direction is determined by an encoder and communicated by the encoder to a decoder.
12. An apparatus for upsampling image data from a first layer to a second layer in a scalable video system, comprising a processor configured to perform the method recited in claim 1.
13. A non-transitory computer-readable storage medium having stored thereon computer-executable instructions for performing the method of claim 1.
HK15109134.1A 2012-12-21 2013-12-04 High precision up-sampling in scalable coding of high bit-depth video HK1208577B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201261745050P 2012-12-21 2012-12-21
US61/745,050 2012-12-21
PCT/US2013/073006 WO2014099370A1 (en) 2012-12-21 2013-12-04 High precision up-sampling in scalable coding of high bit-depth video

Publications (2)

Publication Number Publication Date
HK1208577A1 HK1208577A1 (en) 2016-03-04
HK1208577B true HK1208577B (en) 2018-09-07

