GB2510814A - Luma-indexed chroma sub-sampling - Google Patents
- Publication number
- GB2510814A GB2510814A GB1301743.9A GB201301743A GB2510814A GB 2510814 A GB2510814 A GB 2510814A GB 201301743 A GB201301743 A GB 201301743A GB 2510814 A GB2510814 A GB 2510814A
- Authority
- GB
- United Kingdom
- Prior art keywords
- attenuation
- pixels
- video frames
- region
- dropping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3182—Colour adjustment, e.g. white balance, shading or gamut
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/64—Systems for the transmission or the storage of the colour picture signal; Details therefor, e.g. coding or decoding means therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
- H04N9/3147—Multi-projection systems
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
An apparatus and method of processing video frames to generate compressed video frames, comprising attenuating luminance components of pixels contained in an attenuation region of the video frames by applying attenuation coefficients and compressing pixels of the attenuation region based on the attenuation coefficients associated with the pixels. Preferably, the video frames are for display by a multi-screen projection system and the attenuation regions are in areas of overlap between adjacent projectors. The overlap regions ensure no gaps are present in the projection and the attenuation means the image is viewed with the correct brightness. Preferably the chroma or colour components are compressed more in regions with a greater amount of luma attenuation or dimming. Coding techniques may include subsampling or truncation of the data words. The invention also relates to a decompression method of processing encoded video frames to generate uncompressed video frames, and the corresponding apparatus.
Description
TITLE OF THE INVENTION
Luma-indexed chroma sub-sampling
BACKGROUND OF THE INVENTION
This invention relates in general to video data communication and, in particular, to processing uncompressed video streams for transmission over a communication network.
Recent multimedia applications, such as high-definition audio/video streaming, require transmission of uncompressed video at high data rates of several Gbps (gigabits per second) with low latency. It is also quite common for the transmission to be performed wirelessly.
One example of such a high-definition audio/video streaming application is the large high-resolution display created by tiling together multiple video projectors, which is nowadays commonly used for applications such as visualization, training, simulation and collaboration. Each video projector displays a high-definition video so as to form a large composite display wall.
Edge-blending is a commonly used technique to achieve a large composite seamless image by overlapping the video frames displayed by the plurality of video projectors, creating what can be referred to as an overlap region, or edge-blending area. In this respect, performing edge-blending requires sending redundant data over the communication medium, i.e. the data belonging to the overlap regions is sent twice over the network (an edge-blending area typically represents 20%, and in some particular cases up to 25%, of a displayed video frame), thus leading to non-optimized usage of the channel bandwidth.
One approach to solving this issue would consist in reducing the size of the edge-blending area according to the actually available channel bandwidth. However, this may impact the seamless transition between frames in the composite image due to the reduced size of the overlap region, potentially creating "gaps" between frames if one is not physically overlapped with the other.
The present invention has been devised to address at least the foregoing concern. More specifically, an object of the present invention is to obtain a rate-reduced video stream while maintaining the rendering quality of the video frames. A further object is to address this concern in the context of a multi-projection system.
SUMMARY OF THE INVENTION
To this end, the present invention provides according to a first aspect a method of processing video frames to generate compressed video frames. The method comprises: attenuating luminance components of pixels contained in an attenuation region of the video frames by applying attenuation coefficients, where an attenuation coefficient represents a percentage by which the luminance component value is attenuated; and compressing pixels of the attenuation region based on the attenuation coefficients associated with the pixels.
By relating the compressing to the magnitude of attenuation of the luminance components, rate-reduced video frames are obtained while maintaining good rendering quality of the video frames when displayed. The compression can be lossless or involve information loss, as will be discussed with regard to the embodiments of the invention.
Preferably, the compressing comprises choosing a compression ratio based on the attenuation coefficients, wherein the greater the attenuation coefficient associated with a pixel, the lower the compression ratio. Here the compression ratio is defined as the size of the compressed data divided by the size of the original data, so the lower the compression ratio, the better the compression.
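As a minimal sketch of this attenuation-to-ratio mapping, the following illustrative function assigns a compression ratio per pixel; the thresholds and ratio values are hypothetical choices, not taken from the patent:

```python
def choose_compression_ratio(attenuation: float) -> float:
    """Return a compressed/original size ratio for one pixel.

    `attenuation` is the fraction by which luminance is attenuated
    (0.0 = no attenuation, 1.0 = fully attenuated). The greater the
    attenuation, the lower (i.e. the better) the compression ratio.
    Thresholds below are hypothetical, for illustration only.
    """
    if attenuation < 0.25:
        return 1.0    # near full brightness: keep all information
    elif attenuation < 0.5:
        return 0.75   # drop some chroma information
    elif attenuation < 0.75:
        return 0.5    # drop more chroma information
    else:
        return 0.25   # heavily dimmed pixel: keep little information
```

A step function is shown for simplicity; a smooth monotonically decreasing mapping would serve the same purpose.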
According to one embodiment, the attenuation region is divided into sub-regions and wherein a same compression ratio is chosen for all pixels of each sub-region based on the attenuation coefficients applied to the pixels of said sub-region.
According to one implementation, the compressing comprises dropping chrominance components from the pixels of the attenuation region, the number of dropped chrominance components being dependent on the chosen compression ratio, thereby relating the number of dropped chrominance components to the value of the attenuation coefficients.
In particular, the dropping comprises applying a plurality of different chrominance-dropping operations to respective sub-regions, where two different dropping operations applied to two different sub-regions drop different numbers of chrominance components.
Advantageously, each chrominance dropping operation is defined by a chrominance dropping profile selected from a set of predetermined dropping profiles.
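The dropping-profile idea can be sketched as keep/drop masks applied over a small block of pixels. The profile names and patterns below are illustrative assumptions (the patent's Figure 11 defines its own set of ten profiles):

```python
# Hypothetical dropping profiles: each is a keep/drop mask over a block
# of 4 horizontally adjacent pixels. Names and patterns are illustrative.
PROFILES = {
    "keep_all":  (True, True, True, True),     # akin to 4:4:4 (no drop)
    "keep_half": (True, False, True, False),   # akin to 4:2:2 subsampling
    "keep_one":  (True, False, False, False),  # akin to 4:1:1 subsampling
}

def drop_chroma(block, profile_name):
    """block: list of 4 (Y, Cb, Cr) tuples. Returns the block with the
    chroma components set to None wherever the profile drops them."""
    mask = PROFILES[profile_name]
    return [(y, cb, cr) if keep else (y, None, None)
            for (y, cb, cr), keep in zip(block, mask)]
```

The luma component Y is never dropped; only the enhancement (chroma) information is removed.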
According to one embodiment, luminance component values of the frame pixels are encoded using N bits and the compressing comprises shortening the encoding of the attenuated luminance component value of each pixel of the attenuation region by removing the L most significant bits, where L = N - M, M being the maximum size in bits of the attenuated luminance component value of said pixel.
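The intuition is that after attenuation by a coefficient a, a luma value can never exceed (2^N - 1)(1 - a), so its high-order bits are guaranteed to be zero and need not be transmitted. A small sketch of computing M:

```python
def luma_bits_after_attenuation(n_bits: int, attenuation: float) -> int:
    """M: the number of bits sufficient to encode any luminance value
    after attenuation by the given coefficient (fraction removed)."""
    max_value = int((2 ** n_bits - 1) * (1.0 - attenuation))
    return max(1, max_value.bit_length())
```

For example, an 8-bit luma attenuated by 50% cannot exceed 127, which fits in M = 7 bits, so L = 8 - 7 = 1 most significant bit can be removed.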
According to one embodiment, chrominance component values of the frame pixels are encoded using N bits and the compressing comprises shortening the encoding of chrominance component values of pixels of the attenuation region by removing the R least significant bits, where R is related to the attenuation coefficients applied to said pixels: the greater the attenuation coefficient, the greater R. According to one embodiment, chrominance component values of the frame pixels are encoded using N bits and the compressing comprises sub-sampling chrominance component values of pixels of the attenuation region and re-encoding said values with N' bits, where N' < N and N' for a given pixel is inversely related to the attenuation coefficient applied to said pixel: the greater the attenuation coefficient, the lower N'.
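The least-significant-bit removal can be sketched as a right shift, with R derived from the attenuation coefficient; the mapping from attenuation to R below is a hypothetical choice:

```python
def truncate_chroma(value: int, attenuation: float, n_bits: int = 8):
    """Remove R least significant bits from an n_bits chroma value.
    Returns (truncated_value, R). The attenuation-to-R mapping is a
    hypothetical linear choice, for illustration only."""
    r = min(n_bits - 1, int(attenuation * n_bits))
    return value >> r, r
```

At zero attenuation nothing is removed; a heavily dimmed pixel keeps only its top few chroma bits, which is acceptable because the eye resolves little colour detail in dark, blended regions.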
According to one implementation, the method of processing video frames further comprises transmitting the compressed video frames resulting from the compressing of pixels components values over a communication network to a projection display apparatus for projecting the video frames.
Advantageously, the compressing is furthermore based on the bandwidth available in the communication network for transmitting the compressed video frames.
According to a preferred implementation, the frames are to be displayed by a multi-projection system comprising a plurality of projection display apparatus for displaying the frames, the attenuation region being defined by the region of overlap between two frames to be displayed by adjacent projection display apparatus.
In this implementation, the attenuation coefficients are advantageously obtained from a brightness attenuation function defined for edge-blending the overlap regions.
The present invention provides according to a second aspect a method of processing compressed video frames to generate uncompressed video frames. The method comprises: obtaining attenuation coefficients used to attenuate luminance components of pixels of an attenuation region contained in the compressed video frames; and decompressing pixels of the attenuation region based on the obtained attenuation coefficients.
According to one embodiment, the decompressing comprises recovering missing chrominance components in pixels of the attenuation region from the components of neighbouring pixels.
In particular, the recovering comprises: identifying sub-regions in the attenuation region; determining the compression ratios used for compressing pixels of the respective sub-regions based on the attenuation coefficients applied to the pixels of said sub-regions; obtaining the chrominance dropping profile applied to each sub-region based on the determined compression ratio, the chrominance dropping profiles being selected from a set of predetermined dropping profiles; and recovering missing chrominance components according to the obtained chrominance dropping profiles.
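One simple way to recover a dropped chroma sample is to copy it from the nearest retained neighbour on the same row; the patent leaves the exact reconstruction open, so this nearest-neighbour choice is only an illustrative assumption:

```python
def recover_chroma(row):
    """row: list of (Y, Cb, Cr) tuples where Cb/Cr may be None for
    dropped samples. Fills each missing chroma pair from the nearest
    kept sample on the row (assumes at least one sample was kept)."""
    kept = [(i, cb, cr) for i, (_, cb, cr) in enumerate(row)
            if cb is not None]
    out = []
    for i, (y, cb, cr) in enumerate(row):
        if cb is None:
            # pick the retained sample with the smallest index distance
            _, cb, cr = min(kept, key=lambda k: abs(k[0] - i))
        out.append((y, cb, cr))
    return out
```

Averaging the two surrounding kept samples would be an equally valid interpolation choice.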
According to one embodiment, the decompressing comprises: determining the maximum size in bits M of the attenuated luminance component values of pixels of the attenuation region based on the obtained attenuation coefficients; and appending L most significant bits with null value to the M bits of each attenuated luminance component value, thereby reconstructing fully encoded values of N = L + M bits.
According to one embodiment, the decompressing comprises: determining the number R of least significant bits removed from the encoding of each chrominance component of pixels of the attenuation region when compressing the video frames, based on the obtained attenuation coefficients; and appending R least significant bits to the N - R bits of each chrominance component value, thereby reconstructing fully encoded values of N bits.
According to one embodiment, the decompressing comprises: determining the number of bits N' used to encode the sub-sampled chrominance component values of pixels of the attenuation region when compressing the video frames, based on the obtained attenuation coefficients; and oversampling the chrominance component values and re-encoding said values with N bits, where N > N'.
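The two bit-level reconstructions above can be sketched as follows, assuming 8-bit original components; appended bits are taken as zeros here, which is one simple choice:

```python
def expand_luma(value_m_bits: int) -> int:
    """Appending L zero-valued most significant bits to an M-bit luma
    value just means reading it as an N-bit integer: the numeric value
    is unchanged, since the removed MSBs were guaranteed to be zero."""
    return value_m_bits

def expand_chroma(value_truncated: int, r: int) -> int:
    """Re-append R least significant bits (zeros, as one simple choice)
    to a chroma value whose R LSBs were removed by the encoder."""
    return value_truncated << r
```

Note that the luma reconstruction is exact, while the chroma reconstruction only approximates the original (e.g. an original value of 200 truncated by R = 4 bits comes back as 192).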
According to a preferred implementation, the uncompressed video frames are to be displayed by a projection display apparatus of a multi-projection system, the multi-projection system comprising a further projection apparatus adapted to display adjacent video frames where a region of overlap between the video frames and the adjacent video frames defines the attenuation region.
In this implementation, the attenuation coefficients are obtained from a brightness attenuation function defined for edge-blending the overlap region.
The present invention provides according to a third aspect an apparatus for processing video frames to generate compressed video frames. The apparatus comprises: attenuating means for attenuating luminance components of pixels contained in an attenuation region of the video frames by applying attenuation coefficients, where an attenuation coefficient represents a percentage by which the luminance component value is attenuated; and a compression module for compressing pixels of the attenuation region based on the attenuation coefficients associated with the pixels.
The present invention provides according to a fourth aspect an apparatus for processing compressed video frames to generate uncompressed video frames. The apparatus comprises: obtaining means for obtaining attenuation coefficients used to attenuate luminance components of pixels of an attenuation region contained in the compressed video frames; and a decompression module for decompressing pixels of the attenuation region based on the obtained attenuation coefficients.
The present invention also extends to programs which, when run on a computer or processor, cause the computer or processor to carry out the method described above or which, when loaded into a programmable device, cause that device to become the device described above. The program may be provided by itself, or carried by a carrier medium. The carrier medium may be a storage or recording medium, or it may be a transmission medium such as a signal. A program embodying the present invention may be transitory or non-transitory.
The particular features and advantages of the transmitting and receiving devices and of the program being similar to those of the methods of processing video frames, they are not repeated here.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 depicts for illustrative purposes a multi-projection system comprising multiple projection display apparatus in which an embodiment of the invention can be implemented.
Figure 2 shows a frame split example that may be associated with the multi-projection system of figure 1.
Figures 3a and 3b illustrate s-curve functions typically used for the video frame edge-blending process.
Figure 4 illustrates, through functional blocks, a wireless communication system, and particularly the multi-projection system of figure 1.
Figure 5 is a flowchart illustrating general steps of a method of processing video frames according to one embodiment of the invention.
Figure 6 is a flowchart illustrating general steps of a method of processing compressed video frames according to one embodiment of the invention.
Figure 7 illustrates the sub-division of horizontal and vertical attenuation regions into regular sub-regions.
Figure 8 illustrates the sub-division of an attenuation region into irregular sub-regions.
Figure 9 illustrates the structure of a high resolution Pixel Block, a medium resolution Pixel Block and a low resolution Pixel Block.
Figure 10 represents an exemplary Macro Pixel Block (MPB) resulting from the implementation of a process according to a variant of the present invention.
Figure 11 illustrates a set of ten predetermined dropping profiles each defining a pattern of chroma pixel information to drop within a MPB.
Figure 12 illustrates the recovering of missing chrominance components in a Macro Pixel Block based on chroma information dropping profiles.
Figure 13 schematically illustrates a processing device configured to implement at least one embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The invention provides methods and systems of processing video frames, in particular uncompressed video frames whose luminance components are subject to attenuation prior to their display.
As illustrated below, the invention may come within the scope of transmission data rate adaptation for uncompressed video communication in wireless systems. The video frames comprise pixels of video information. Video information is transmitted over a wireless communication medium from a wireless transmitting device to a wireless receiving device. Transmission data rate adaptation is typically performed by dropping some video information from pixels of the video frames (compressing) to obtain rate-reduced video frames. The video information dropping takes into account the luminance information of these pixels so as to reduce the visual impact when the video frames are displayed. The rate-reduced video frames are transmitted over the wireless communication medium to the wireless receiving device. At the receiver, a reverse operation is performed to recover uncompressed video pixel information as close as possible to the original pixel information.
Figure 1 depicts for illustrative purposes a multi-projection system 100 comprising multiple projection display apparatus 111, 112, 113 and 114 for projecting on a screen video frames delivered by a source device 101. Each projection display apparatus, referred to as a projector for simplicity, is typically a video projector that projects a video stream but may encompass any type of projector such as for example a still image projector.
The projectors receive video information from the source device 101 over communication links 121, 122, 123 and 124, forming a communication network.
According to one implementation example, the communication network is a wireless communication network, i.e. all the communication links are wireless. This wireless communication network may for example operate in the 57-66 GHz millimeter-wave unlicensed spectrum to provide the bandwidth necessary to transport video data, particularly if the latter is high-definition (HD) video data.
Alternatively, the communication links 121 to 124 are wired communication links, e.g. HDMI.
A frame to be projected by the multi-projection system 100 is split into a plurality of sub-frames. The number of sub-frames per frame is typically equal to the number of projectors in use in the multi-projection system. The size and shape of each sub-frame is chosen so that a full composite frame can be reconstructed when all the sub-frames are projected by their corresponding projectors.
Figure 2 shows a frame split example that may be associated with the multi-projection system 100 of figure 1.
An original frame 200 is split into four sub-frames 210, 220, 230 and 240 to be projected respectively by projectors 111, 112, 113 and 114 of the multi-projection system 100. The resulting frame projected on the screen is a composite frame 250.
It is to be noted that the split of the frame 200 is done with overlap to seamlessly reconstruct the composite frame 250. This technique is known as edge-blending and consists in overlapping regions at edges of adjacent sub-frames and blending the brightness of these regions during projection. Indeed, the composite frame naturally appears brighter where projected sub-frames overlap and thus a brightness adjustment is necessary to resolve the brightness distortion.
In the present example, overlap regions 212 and 222 are associated respectively with adjacent sub-frames 210 and 220; overlap regions 211 and 231 are associated respectively with adjacent sub-frames 210 and 230; overlap regions 232 and 242 are associated respectively with adjacent sub-frames 230 and 240; overlap regions 241 and 221 are associated respectively with adjacent sub-frames 240 and 220; and overlap regions 213, 223, 233 and 243 are associated respectively with the four sub-frames 210, 220, 230 and 240. The existence of these latter four overlap regions is peculiar to the arrangement in rows and columns of the multi-projection cluster of figure 1.
Brightness adjustment is performed by adjusting the light intensity (the luminance component of the pixels) in the overlap regions to match the rest of the projected sub-frames. Adjustment is typically performed by gradually decreasing the luminance along the width of an overlap region associated with one sub-frame while gradually increasing the luminance along the width of the corresponding overlap region associated with the adjacent sub-frame. How the brightness varies along the width of an overlap region is defined by a function referred to as the brightness attenuation function f(x). This function is typically an s-curve function, as illustrated in figures 3a and 3b.
Figure 3a illustrates an s-curve function f(x) 301 and its complementary function g(x) = 1 - f(x) 302, normalized between 0 and 1. Attenuating the brightness of the overlap regions of adjacent sub-frames according to the functions f(x) and g(x) keeps the brightness of the superposed regions nearly constant and similar to the rest of the associated sub-frames.
Attenuating the brightness of a pixel consists in multiplying its luminance component value L(c,r), where c and r are the column and row coordinates of the pixel, either by f(x0) or by 1 - f(x0), depending on whether the pixel is located in one overlap region or in the adjacent overlap region, where x0 = c in the case of a vertical overlap region (212, 222, 232, 242) or x0 = r in the case of a horizontal overlap region (211, 221, 231, 241). Thus, the light intensity projected for that pixel corresponds to the superposition of the light intensities L(c,r) x f(x0) and L(c,r) x (1 - f(x0)) projected by the respective adjacent projectors.
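The complementary-attenuation property can be sketched with a smoothstep-style s-curve; the patent does not fix a specific formula for f(x), so the polynomial below is only one common illustrative choice:

```python
def f(x: float) -> float:
    """An s-curve normalized so that f(0) = 0 and f(1) = 1.
    Smoothstep is one common choice; the patent leaves f(x) open."""
    return x * x * (3.0 - 2.0 * x)

def blended_intensity(luma: float, x: float) -> float:
    """Superposed intensity for a pixel at relative position x across
    the overlap: L*f(x) from one projector plus L*(1 - f(x)) from the
    adjacent one. The sum is constant and equal to L for every x."""
    return luma * f(x) + luma * (1.0 - f(x))
```

Because f(x) and 1 - f(x) sum to unity, the overlap region appears exactly as bright as the rest of the sub-frames.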
In the above illustrative example, two different functions f(x) and g(x) are defined because a common coordinate system 300 is considered for both adjacent overlap regions. In a practical implementation, however, and in order to keep processing of sub-frames independent, one coordinate system (311, 312) relative to each overlap region may be considered, in which the brightness attenuation function has the same expression, say f(x). In this relative coordinate system, edge-blending in each overlap region always consists in multiplying the luminance of a pixel by the same function f(x), but taken at a different relative abscissa. The superposed light intensities would then be, in the case of a vertical overlap region (315, 316) as illustrated in the example of figure 3b, L(c1,r) x f(c1) and L(c2,r) x f(c2), where c2 = w - c1 and w is the width of the overlap region. Indeed, a pixel positioned at column c1 in overlap region 315 corresponds to the same pixel at position c2 = w - c1 in overlap region 316.
In figure 3b, the illustrated overlap regions 315 and 316 represent, respectively, the regions 212 and 222 of sub-frames 210 and 220 of figure 2. The same concept applies for the horizontal overlap regions 211/231 or 221/241; in this situation, the differentiating coordinate is the pixel row instead of the column. For the particular case of overlap regions 213, 223, 233 and 243 (central overlap regions), four s-curve functions whose sum is unity can be considered (not illustrated).
Also, the zero of the abscissa axis (columns or rows) of each coordinate system coincides with the start of the respective overlap region and extends towards the edge of the corresponding sub-frame.
The partitioning of each video frame 200 into sub-frames 210, 220, 230 and 240 and the edge-blending of the sub-frames may be performed centrally at the source device 101 for example. The sub-frames are then delivered through the communication links 121, 122, 123 and 124 to the corresponding projectors.
Figure 4 illustrates, through functional blocks, such a wireless communication system, and particularly the multi-projection system 100. The figure illustrates the functions of a transmitting device 400 such as the source device 101 and the functions of a receiving device 450 such as one of the projectors 111 to 114.
The transmitting device 400 contains an application module 405 which delivers a well-known 4:4:4 uncompressed high-definition video stream.
For instance, a typical HD uncompressed video format such as 1080p is characterized by video frames of 1920 vertical lines (columns) and 1080 horizontal lines (rows), 24 bits per pixel (4:4:4 sampling) and a 60 Hz frame rate.
Each pixel of the uncompressed video frame is defined by base and enhancement pixel information, for example by three video colour components Y, Cb and Cr, where:
- Y is the base pixel information, known as the luminance component (or luma component), and represents the brightness of the image, i.e. the achromatic or "black & white" portion of the image;
- Cb and Cr are the enhancement pixel information, known as chrominance components (or chroma components), and represent the colour information of the pixel: the blue information minus the brightness for Cb and the red information minus the brightness for Cr.
Each pixel colour component (Y, Cb or Cr) may be displayed with various colour depths, which means that the information relating to a pixel colour component may be coded over 8 bits, 12 bits, 16 bits or 32 bits, for instance.
For the purpose of illustration below, a pixel colour component will be assumed to be coded over 8 bits (i.e. 1 byte). A pixel is thus made of 24 bits of colour components.
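Under that 8-bit assumption, a 4:4:4 pixel occupies exactly 3 bytes, which can be sketched as:

```python
def pack_pixel(y: int, cb: int, cr: int) -> bytes:
    """Pack one 8-bit-per-component Y'CbCr pixel into 24 bits (3 bytes).
    The Y, Cb, Cr byte order is an illustrative assumption."""
    for component in (y, cb, cr):
        if not 0 <= component <= 255:
            raise ValueError("component out of 8-bit range")
    return bytes([y, cb, cr])
```

At 1920 x 1080 pixels x 3 bytes x 60 frames per second, this is where the roughly 3 Gbps full source rate mentioned below comes from.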
The human vision system is less sensitive to colour than to brightness. More precisely, the human eye is able to distinguish finer details conveyed by differences in the luma component than details conveyed by differences in the chroma components. This is why the luma component Y of a pixel is generally considered the base pixel information, whereas the less significant chroma components are only enhancement pixel information that improves the primary pixel rendering based on the luma component alone. Based on this fact, the luma component is taken as the basis for compressing the frames and dropping video information: the lower the brightness of the pixels, the more video information is dropped from those pixels. The brightness attenuation function described above is relied upon to obtain the evolution of the maximum level of brightness the pixels may be subject to (which corresponds to the case where the video information to be displayed is white), as will be detailed hereinafter according to the embodiments of the invention.
Still referring to Figure 4, the transmitting device 400 also comprises a communication module 410 including a Protocol Adaptation Layer (PAL) 415, a Medium Access Control (MAC) layer 425 and a physical (PHY) layer 430.
The application module 405 sends the uncompressed video frames to the Protocol Adaptation Layer (PAL) 415 at a given rate, referred to as the full source rate. For instance, the full source rate is equal to 3 Gbps for a 4:4:4 uncompressed high-definition video stream.
The Protocol Adaptation Layer (PAL) 415 comprises a brightness attenuation module 421 in charge of attenuating luminance component values of pixels contained in an attenuation region of the video frames. For example, in the case of the multi-projection system of figure 1, the module 421 is in charge of attenuating the brightness of edge-blending areas (overlap regions) according to a brightness attenuation function as described above. The attenuation function is applied either to the edges of the video frames as received from the application module 405, or to sub-frames obtained by splitting up the received video frames. In this latter case, the transmitting device 400 is assumed to be in charge of generating the sub-frames for the different projectors. The PAL 415 may thus optionally comprise a splitting-up module (not illustrated) to handle the division of the received video frames. The PAL 415 further comprises a compression module 422 for compressing the attenuated frames or sub-frames. The brightness attenuation module 421 and the compression module 422 implement embodiments of the present invention and are further described below.
Optionally, the Protocol Adaptation Layer (PAL) 415 is also connected to the physical (PHY) layer 430 in order for the PAL to retrieve information about the network conditions to drive the rate adaptation and the compressing of the video information in the compression module 422. Information about the network conditions may include quality information of each transmission path, characterized by, for example, a Radio Signal Strength Indication (RSSI) or a Signal-to-Noise Ratio (SNR), the available bandwidth, or the time for which the wireless transmitting device 400 has access to the wireless communication medium 40. From this network condition information, the PHY layer indicates to the PAL the current network state or conditions. This information may be obtained periodically, even during the processing of a current video frame, thus enabling the module 422 to dynamically adapt the compression ratio and thus the rate of the compressed video frame information, as is apparent from the following description.
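A hypothetical sketch of this rate-adaptation driver: given the bandwidth the PHY layer reports and the full source rate, derive the overall target compression ratio the compression module must achieve (the function name and mapping are assumptions for illustration):

```python
def target_ratio(full_source_rate_bps: float, available_bps: float) -> float:
    """Compressed/original target ratio for the compression module,
    capped at 1.0 (i.e. no compression needed when bandwidth suffices).
    A purely illustrative mapping, not taken from the patent."""
    if full_source_rate_bps <= 0:
        raise ValueError("full source rate must be positive")
    return min(1.0, available_bps / full_source_rate_bps)
```

For example, with a 3 Gbps full source rate and 1.5 Gbps of available bandwidth, the module must compress to half the original size; with 4 Gbps available, no compression is required.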
The PAL 415 then sends the obtained rate-reduced (compressed) video frames to the MAC layer 425 where they are packetized to construct MAC Protocol Data Units for transmission over the communication links. The MAC layer 425 also provides addressing and channel access control mechanisms.
Each MAC Protocol Data Unit is then transferred to the PHY layer 430 to construct a corresponding PHY Protocol Data Unit that is then transmitted over a respective wireless communication link of the network 40 to the wireless receiving device 450.
In order to transmit data as HD video, the physical (PHY) layer 430 relies upon Millimetre Wave frequency bands, as for instance 60 GHz (or even higher) and the data are sent over the wireless medium 40 using one or more smart antennas 435.
The wireless receiving device 450 comprises an application module 455 and a communication module 460 which includes a Protocol Adaptation Layer (PAL) 465, a MAC layer 475 and a physical (PHY) layer 480.
The PHY layer 480 receives the packets sent by the wireless transmitting device 400 over at least one of the communication links of the communication network 40, via one or more smart antennas 485. The physical (PHY) layer 480 then passes the received packets up to the MAC layer 475, which then passes the received data packets up to the Protocol Adaptation Layer (PAL) 465.
In the Protocol Adaptation Layer (PAL) 465, a decompression module 470 decompresses the rate-reduced video frames contained in the received data packets to generate uncompressed video frames. In particular cases as illustrated below, the decompressing comprises recovering missing information using information from neighbouring pixels or by applying an over-sampling process to obtain the particular missing information.
The decompression module 470 then provides the application module 455 with uncompressed video frames made of all the reconstructed pixel blocks, for example 4:4:4 video frames. The application module 455 is in charge of the final processing of the video frames for rendering. In the case of the multi-projection system of figure 1, the rendering consists in displaying the video frames or sub-frames by the projector embodying the receiving device 450.
Figure 5 is a flowchart illustrating general steps of a method of processing video frames to generate compressed video frames and to transmit those compressed video frames over the communication network according to one embodiment of the invention.
These steps may be performed, for example, by source device 101 of multi-projection system 100 and may be implemented in hardware, software, firmware or any combination thereof by the communication module 410 of the source device 101. If implemented in software, the flowchart may correspond to a segment of a program stored in the ROM 1307 of source device 101 as illustrated by figure 13.
In a first step S500, a region of the video frames whose brightness is to be attenuated is determined. This region is referred to generally as an attenuation region.
In the context of multi-projection system 100, source device 101 processes video sub-frames where each sub-frame is equivalent to one frame intended for a projector. In this context, an attenuation region may correspond to any one of the vertical, horizontal and central overlap regions of each sub-frame, or to a combination of these overlap regions.
More generally, the attenuation region does not necessarily relate to a multi-projection system and may correspond to any region of a video frame for which the function of variation of the brightness to be applied can be determined. This function of variation is also referred to for convenience as f(x).
In a second step S510, luminance components of pixels contained in the attenuation region of the video frames are attenuated by applying attenuation coefficients.
This attenuation is typically performed by the brightness attenuation module 421 of source device 101.
The attenuation coefficients represent here the percentage by which the luminance component values are attenuated. These coefficients are derived in the present embodiment from the brightness attenuation function f(x). As f(x) represents the factor by which the luma values are multiplied, resulting in an attenuated luma value, the attenuation coefficients can be written as A = 1 - f(x). For example, a multiplicative factor of 0.8 corresponds to an attenuation coefficient of 20%.
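The relation between the attenuation factor f(x) and the attenuation coefficient A can be sketched as follows (a minimal illustration; the function names are assumptions, not taken from the document):

```python
def attenuation_coefficient(f_x):
    """A = 1 - f(x): the percentage by which the luma value is attenuated."""
    return 1.0 - f_x

def attenuate_luma(luma, f_x):
    """Apply the brightness attenuation factor to an 8-bit luma value."""
    return int(luma * f_x)
```

For instance, a multiplicative factor of 0.8 attenuates a luma value of 200 down to 160, which corresponds to a 20% attenuation coefficient.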
In a third step S520, the pixels of the attenuation region are compressed based on the attenuation coefficients associated with the pixels. The compression is performed by the compression module 422 of source device 101. Several compression techniques may be used to compress the pixels, as will be detailed below.
In a fourth step S530, the resulting compressed video frames are transmitted over the communication network, for example to the intended projector. The transmission is performed by the MAC and PHY layers 425/430 as discussed above and may use conventional mechanisms to that end.
Figure 6 is a flowchart illustrating general steps of a method of processing compressed video frames to generate uncompressed video frames according to one embodiment of the invention. These steps may be performed, for example, by each projector 111-114 of multi-projection system 100 and may be implemented in hardware, software, firmware or any combination thereof by the communication module 460 of the projector device. If implemented in software, the flowchart may correspond to a segment of a program stored in the ROM 1307 of the projector as illustrated by figure 13.
In a step S600, the attenuation coefficients used to attenuate, at the transmitting device, luminance components of pixels of the attenuation region are obtained. The coefficients are derived for example from the brightness attenuation function as follows: A = 1 - f(x). This information is generally predetermined and shared between the transmitting device, e.g. source device 101, and the receiving device, e.g. one of the projectors 111-114.
In a step S610, the pixels of the attenuation region are decompressed based on the attenuation coefficients associated with the pixels. The decompression is performed by the decompression module 470 of a projector device. Decompression techniques symmetrical to the compression ones applied at the transmitting device are used. The compression techniques applied can either be predetermined and known to the receiving device, or signalled in the compressed video stream to the receiving device. More details on the decompression techniques that may be used are given below.
Going back to the compression step S520 of Figure 5, different compression techniques are now described according to different embodiments of the invention. These techniques can be used individually or cumulatively. They aim to reduce the compression ratio (compressed data size divided by original data size) when the attenuation coefficient is high. This advantageously makes it possible to obtain rate-reduced video frames while maintaining good rendering quality. In particular, when the compression is performed with information loss, a good rendering quality is maintained because the human eye is less sensitive to chroma distortions when the brightness is low, as will be detailed hereinafter.
A first technique relates to optimizing the encoding of each luma component once the brightness attenuation function, as described in Figure 3b, is applied, i.e. after the attenuated value of the luma component is obtained. As the brightness attenuation function reduces the luma component value of each pixel within the edge-blending area, most pixels in this edge-blending area may now have their luma component encoding length reduced, i.e. some of the MSBs (Most Significant Bits) now have a '0' value. Thus, due to the attenuation coefficient imposed by the brightness correction function, a luma component initially encoded over 8 bits may now be encoded over fewer than 8 bits. The coding length is related to the maximum component value allowed by the attenuation coefficient. Indeed, relying on the brightness attenuation function makes it possible to know the maximum value that the luma components may take, independently of the content of the video information transported. Because the removed MSBs carry no information, this first compression technique is a lossless compression technique.
More generally, if we consider a color depth of N bits, which means that the color components are encoded using N bits where N is 8, 12, 16 or 32 bits, for instance, the compression would comprise shortening the encoding of the attenuated luminance component value of each pixel of the attenuation region by removing the L most significant bits, where L = N - M, M being the maximum size in bits of the attenuated luminance component value of said pixel. The maximum size M is obtained for each pixel from the attenuation coefficient applied to that pixel, independently of the actual video information carried. For example, at least L = 1 bit per luma component is saved if the attenuation coefficient is greater than 0.5.
The decompressing step S610 corresponding to this first compression technique then comprises determining the maximum size in bits M based on the attenuation coefficients, and appending L most significant bits with null value to the M bits of each attenuated luminance component value, thereby reconstructing a fully encoded value of N = L + M bits. By deriving the parameter M from the attenuation coefficients, it is not necessary to signal this parameter in the transmitted bitstream, which saves bandwidth.
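This first, lossless technique can be sketched as follows (an illustrative sketch; the function names are assumptions, not from the document):

```python
def max_luma_bits(f_x, n=8):
    """M: bits needed for the largest possible attenuated luma value,
    derived from the attenuation factor alone, independently of content."""
    max_attenuated = int(f_x * (2 ** n - 1))
    return max(1, max_attenuated.bit_length())

def compress_luma(attenuated_luma, f_x, n=8):
    """Lossless: the L = N - M removed MSBs are known to be zero, so the
    value itself is unchanged and simply re-encoded over M bits."""
    m = max_luma_bits(f_x, n)
    assert attenuated_luma < 2 ** m
    return attenuated_luma, m

def decompress_luma(m_bit_value, f_x, n=8):
    """Appending L zero MSBs reconstructs the full N-bit encoding."""
    return m_bit_value  # numerically identical; only the bit-length changes
```

With f(x) = 0.5 (attenuation coefficient of 50%), the attenuated luma cannot exceed 127, so M = 7 and at least one bit per luma component is saved.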
Along with the aforementioned first compression technique for the luma component, a second technique directed to optimizing the encoding of the chroma component can also be applied.
In one implementation variant, the compressing comprises shortening the encoding of chrominance component values of pixels of the attenuation region by removing the R least significant bits, where R is related to the attenuation coefficients applied to said pixels. More specifically, the greater the attenuation coefficient, the greater R, and thus the lower the number of bits used for encoding the value of the chroma components. For instance, considering 8-bit chroma components when no attenuation coefficient is applied, the chroma component values may be encoded over 6 bits when the attenuation coefficient is 25%, and over 4 bits when the attenuation coefficient is 50%. The decompressing step S610 corresponding to this implementation variant then comprises determining the number R of least significant bits removed from the encoding of each chrominance component of pixels of the attenuation region when compressing the video frames, based on the attenuation coefficients; and appending R least significant bits to the N - R bits of each chrominance component value, thereby reconstructing a fully encoded value of N bits. The values of the R least significant bits added can be predetermined values like '100...0', where the number of zeros is equal to R - 1, representing a median value. Error concealment techniques can also be used to set the R bit values based on color information of neighboring pixels. By deriving the parameter R from the attenuation coefficients, it is not necessary to signal this parameter in the transmitted bitstream, which saves bandwidth.
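A minimal sketch of this LSB-removal variant (the mapping from attenuation coefficient to R follows the two figures quoted above and is otherwise an assumption):

```python
def r_for_coefficient(a):
    """Assumed mapping: 25% attenuation -> R = 2 (6-bit chroma),
    50% -> R = 4 (4-bit chroma), no attenuation -> R = 0."""
    if a >= 0.5:
        return 4
    if a >= 0.25:
        return 2
    return 0

def drop_lsbs(chroma, r):
    """Shorten the encoding by removing the R least significant bits."""
    return chroma >> r

def restore_lsbs(short_value, r):
    """Append R LSBs as the median pattern '100...0' (R - 1 zeros)."""
    if r == 0:
        return short_value
    return (short_value << r) | (1 << (r - 1))
```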
Alternatively, the compressing may consist in re-quantizing the value of the chroma component, from 8 bits to a lower number of bits. Stated in general terms, the compressing would comprise re-quantizing chrominance component values of pixels of the attenuation region originally encoded with a depth of N bits and re-encoding said values with N' bits, where N' < N. Here N' for a given pixel is inversely related to the attenuation coefficient applied to said pixel, i.e. the greater the attenuation coefficient, the lower N'. The decompressing step S610 corresponding to this implementation variant then comprises determining the number of bits N' based on the obtained attenuation coefficients, oversampling the chrominance component values and re-encoding said values with N bits. By deriving the parameter N' from the attenuation coefficients, it is not necessary to signal this parameter in the transmitted bitstream, which saves bandwidth.
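The re-quantization variant can be sketched as follows (a hedged illustration; the rounding scheme is an assumption, and N' = 4 is only an example value):

```python
def requantize(chroma, n=8, n_prime=4):
    """Re-encode an N-bit chroma value over N' < N bits (lossy)."""
    return round(chroma * (2 ** n_prime - 1) / (2 ** n - 1))

def oversample(q, n=8, n_prime=4):
    """Decompression: map the N'-bit value back onto the N-bit range."""
    return round(q * (2 ** n - 1) / (2 ** n_prime - 1))
```

The round trip preserves the extreme values (0 and 255) exactly and intermediate values to within about half a quantization step.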
The second compression technique is a compression technique with information loss.
The above first and second compression techniques relate to optimizing the encoding of the luma and chroma components. A third compression technique is now described, which relates to reducing the quantity of information transported by dropping chrominance components from the pixels of the attenuation region (thus a compression technique with information loss). Here, the number of dropped chrominance components is made dependent on the chosen compression ratio, thereby relating the number of dropped chrominance components to the value of the attenuation coefficients: a low compression ratio, or high attenuation coefficient, leads to a high number of dropped chrominance components. The chrominance components that are to be deleted are chosen so that their reconstruction at the receiving side, using concealment techniques, introduces as little distortion as possible.
For this third compression technique, the attenuation region may be divided into sub-regions where a same compression ratio is chosen for all pixels of each sub-region based on the attenuation coefficients applied to the pixels of said sub-region. For example, one global attenuation coefficient is determined for each sub-region corresponding to the mean value of the attenuation coefficients applied to the pixels of the sub-region. The compression ratio for the sub-region defining the amount of dropped information is then set based on the global attenuation coefficient.
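The per-sub-region selection described above can be sketched in a few lines (the linear relation between the global attenuation coefficient and the target compression ratio is an assumption, used for illustration only):

```python
def global_attenuation_coefficient(pixel_coefficients):
    """One global coefficient per sub-region: the mean of the attenuation
    coefficients applied to the pixels of that sub-region."""
    return sum(pixel_coefficients) / len(pixel_coefficients)

def target_compression_ratio(global_a):
    """Assumed sketch: the higher the global attenuation coefficient,
    the lower the target compressed-size/original-size ratio."""
    return 1.0 - global_a
```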
For example, horizontal attenuation region 701 in Figure 7 may be divided into a plurality of sub-regions 710, 711, 712, 713, 714, 715, 716 and 717, each made of a plurality of pixel rows, while vertical attenuation region 702 in the same figure may be divided into a plurality of sub-regions 720, 721, 722, 723, 724, 725, 726 and 727, each made of a plurality of pixel columns. Horizontal attenuation region 701 may represent any one of the horizontal edge-blending regions 211, 221, 231 and 241 of sub-frames 210, 220, 230 and 240, and vertical attenuation region 702 may represent any one of the vertical edge-blending regions 212, 222, 232 and 242 of these same sub-frames.
It should be noted that the number of sub-regions in each region and their relative size is not necessarily fixed. For example, it is not very important to have many sub-divisions in regions where the brightness attenuation function varies slowly. This is illustrated by Figure 8 in which sub-regions 810 and 850 correspond to slowly varying attenuation coefficients and sub-regions 820, 830 and 840 to quickly varying attenuation coefficients.
The dropping of chrominance components comprises applying a plurality of different chrominance-dropping operations to respective sub-regions; two different chrominance dropping operations applied to two different sub-regions drop different numbers of chrominance components.
In one implementation variant, each chrominance dropping operation is defined by a chrominance dropping profile selected from a set of predetermined dropping profiles.
Description of this implementation variant and of the dropping profiles is now given by reference to figures 9 to 12.
Figure 9 illustrates a high resolution Pixel Block 900, a medium resolution Pixel Block 910 and a low resolution Pixel Block 920.
The high resolution Pixel Block 900 contains four pixels 901, 902, 903 and 904.
Each pixel is defined by three video components Y, Cb and Cr (4:4:4 sampling). A high resolution Pixel Block is then made in total of four luminance video components Y, four chroma video components Cb, and four chroma components Cr. Thus, the size of a high resolution Pixel Block 900 is 12 bytes (8 bits depth per component is assumed). Such high resolution Pixel Block 900 is referenced HR_PB. The 4:4:4 scheme corresponds to the scheme of the uncompressed video frame provided by the application module 405.
Consequently, to generate a pixel block having this 4:4:4 scheme (i.e. a HR_PB), the module 422 implements an operation that does not drop any chrominance pixel information from the source uncompressed pixel block, since the 12 pixel colour components are kept or remain "invariant".
The medium resolution Pixel Block 910 contains four pixels 911, 912, 913 and 914. As 4:2:2 sub-sampling is applied, the chroma components in this Pixel Block are sampled at half the horizontal resolution of the luminance component, compared to a 4:4:4 sampling scheme. Thus, pixels 911 and 913 are made of one luminance component and two chroma components, while pixels 912 and 914 are made of only one luminance component. A medium resolution Pixel Block is then made of four luminance video components Y, two chroma video components Cb, and two chroma video components Cr. Thus, the size of a medium resolution Pixel Block 910 is 8 bytes, since only 8 pixel colour components remain "invariant". Such a Pixel Block is referenced MR_PB. The 4:2:2 sub-sampling scheme then corresponds to a chrominance pixel information dropping operation that drops at least one chroma component from the pixel block.
The low resolution Pixel Block 920 contains four pixels 921, 922, 923 and 924.
As 4:2:0 sub-sampling is applied, the chroma components are sampled at half the horizontal and vertical resolutions of the luminance Y video component, compared to a 4:4:4 sub-sampling scheme. Thus, pixel 921 is made of one luma component and one chroma component Cr, pixel 923 is made of one luminance component and one chroma component Cb, while pixels 922 and 924 are made of only one luminance component. A low resolution Pixel Block is then made of four luminance video components Y, one chroma video component Cb, and one chroma video component Cr. Thus, the size of a low resolution Pixel Block 920 is 6 bytes, since only 6 pixel colour components remain "invariant". Such a Pixel Block is referenced LR_PB. The 4:2:0 sub-sampling scheme then corresponds to a chrominance pixel information dropping operation that drops more chroma components within the pixel block than the 4:2:2 sub-sampling scheme.
From these examples, one may understand that depending on the chrominance components dropping operation used, the amount of chroma colour components that is dropped or kept varies. In particular, one may note that, whatever the chrominance components dropping operation used, some pixel colour information is kept and remains unchanged. This is the case for all the luma components, as well as for the Cr component of the first pixel and for the Cb component of the third (bottom left) pixel. Below, they are referred to as "fully invariant" pixel information or component.
All these three 4:4:4, 4:2:2 and 4:2:0 sub-sampling schemes may be considered as forming a set of predetermined dropping profiles with respective compression ratios of 1, 0.66 and 0.5 (with no optimization at components encoding level applied). It should be noted that for simplicity, the 4:4:4 sub-sampling scheme is also considered as forming a dropping profile although it does not drop any information.
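The sizes and compression ratios of these three dropping profiles can be checked with a short sketch (8-bit component depth assumed, as above; the helper names are illustrative):

```python
# Colour components kept per 2x2 Pixel Block for each sub-sampling scheme:
COMPONENTS = {"4:4:4": 12, "4:2:2": 8, "4:2:0": 6}

def pixel_block_bytes(scheme, depth_bits=8):
    """Size in bytes of one Pixel Block under the given scheme."""
    return COMPONENTS[scheme] * depth_bits // 8

def dropping_profile_ratio(scheme):
    """Compression ratio: compressed size divided by the 4:4:4 size."""
    return pixel_block_bytes(scheme) / pixel_block_bytes("4:4:4")
```

This reproduces the 12-, 8- and 6-byte block sizes and the ratios 1, 0.66 and 0.5 quoted above.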
This then makes it possible to select, for each sub-region, the chrominance dropping profile among the set of profiles whose compression ratio is the closest to the global attenuation coefficient of the sub-region.
It may happen, however, that the granularity in terms of compression ratios of the different dropping profiles is not fine enough to cover distinctly the different global attenuation coefficients of the sub-regions. The example of figure 7 shows 8 sub-regions which may be sufficiently covered by 6 levels of compression ratio, if it is admitted that the global attenuation coefficients of the first two sub-regions 710-711 or 720-721 are too close to be represented by distinct dropping profiles, as are those of the last two sub-regions 716-717 or 726-727. For the example of figure 8, 5 levels of compression ratio are assumed to be needed, as the size of the sub-regions is adapted according to the slope of the brightness attenuation function, which makes the global attenuation coefficients corresponding to each sub-region distinct enough. In case the granularity in terms of compression ratios is judged insufficient, it can be extended in several ways.
A first way is to combine the different dropping profiles with the first and/or the second compression techniques discussed above. For example, let us consider a sub-region that is eligible for compression by the first technique, and assume that the highest attenuation coefficient applied in this sub-region is greater than 0.75 but lower than 0.875. The compression ratio for this first technique would then correspond to 25%. If the dropping profile applied to this sub-region corresponds to a compression ratio of 50%, as shown by LR_PB, the cumulative compression ratio obtained by applying both techniques is then 12.5%. The granularity of the compression ratios is thus extended to the set of 1, 0.66, 0.5 and 0.125. The same logic could be applied with the second compression technique.
A second way is just to consider other sub-sampling schemes such as for example 4:1:0, i.e. a scheme that keeps only one chroma colour component in a block of 8 pixels leading to a compression ratio of 0.375.
A third way is to further refine the granularity of the compression ratios by constructing combinations of Pixel Blocks, referred to as Macro Pixel Blocks. A Macro Pixel Block comprises an array of Pixel Blocks mixing HR_PB, MR_PB and LR_PB according to a plurality of patterns. These patterns would then form the set of predetermined dropping profiles.
Figure 10 represents an exemplary Macro Pixel Block (MPB) resulting from the implementation of a process according to a variant of the present invention.
A Macro Pixel Block, or MPB, is made of a plurality of N x M adjacent Pixel Blocks (PBs). In the example of the Figure, a MPB is made of 4x4 PBs, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015 and 1016, also referenced PB(i,j) with the row index i from 1 to 4 and the column index j from 1 to 4. An attenuation region representing 20% of an HD video frame may then comprise 32,400 x 0.2 = 6480 MPBs of such size (i.e. of sixteen 4-pixel PBs).
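The MPB count quoted above can be checked by a short computation (a 1920 x 1080 HD frame is assumed):

```python
pixels_per_pb = 2 * 2          # one Pixel Block covers 2x2 pixels
pbs_per_mpb = 4 * 4            # a Macro Pixel Block is 4x4 Pixel Blocks
pixels_per_mpb = pixels_per_pb * pbs_per_mpb        # 64 pixels per MPB

mpbs_per_frame = (1920 * 1080) // pixels_per_mpb    # 32,400 MPBs per frame
mpbs_in_region = int(mpbs_per_frame * 0.2)          # 6,480 MPBs for a 20% region
```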
Other MPB sizes may be contemplated within the scope of this variant of the invention, for example 2x2, 2x4, 4x2, 4x8, 8x4, 8x8 PBs, etc. Different chroma dropping operations may be applied to the pixel blocks of the MPB as illustrated in the Figure. In that case, each Pixel Block of a resulting MPB is either a high, medium or low resolution Pixel Block. In particular some pixel blocks are of high resolution (HR_PB), namely pixel blocks 1001, 1003, 1006, 1008, 1009, 1011, 1014 and 1016 in the example of the figure; some are of medium resolution (MR_PB), namely pixel blocks 1004, 1007, 1010 and 1013; and some are of low resolution (LR_PB), namely pixel blocks 1002, 1005, 1012 and 1015.
The pattern of HR_PB, MR_PB and LR_PB within the MPB of Figure 10 defines a chroma information dropping profile that can be applied to any MPB of the attenuation region of the uncompressed video frame to generate rate-reduced video frames.
Figure 11 illustrates a set of ten predetermined dropping profiles, each defining a pattern of chroma pixel information to drop within a MPB.
Based on the above three sub-sampling schemes (4:4:4, 4:2:2 or 4:2:0), up to 3^16 different dropping profiles can be built for a 4x4 MPB. In the example of the Figure, only ten dropping profiles are defined.
The ten dropping profiles are referenced 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109 and 1110 in the Figure. Eight of them combine pixel blocks associated with different sub-sampling schemes. This means that, when applying one of these eight profiles to an uncompressed MPB, two different chroma information dropping operations are applied to two pixel blocks within the same MPB, to generate different numbers of dropped chroma components and thus different compression ratios.
As may be observed in these eight dropping profiles, each of the chroma information dropping operations applied to the four pixel blocks adjacent to a central pixel block keeps more chroma information than the chroma information dropping operation applied to the central pixel block. In other words, the high, medium and low resolution Pixel Blocks are interleaved within a Macro Pixel Block so that a low-resolution Pixel Block is adjacent to four higher-resolution Pixel Blocks (either medium or high resolution).
Preferably, this adjacency is performed inside the Macro Pixel Block but also between any two adjacent Macro Pixel Blocks.
One main advantage of using this adjacency scheme is to ensure that during the reconstructing process performed by the decompression module 470, a missing chroma component (i.e. a video component dropped during application of a chroma information dropping operation by the module 422) is recovered from an adjacent video component.
This is for example illustrated below with reference to Figure 12. Therefore, the propagation of an error, resulting from the mismatch between the recovered video component and the original component before dropping, is significantly reduced.
The dropping profiles 1101, 1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109 and 1110 allow MPBs to be built with different compression ratios, depending on how many pixel chroma components are dropped. These dropping profiles have respectively the following compression ratios: 1, 0.916, 0.875, 0.833, 0.791, 0.75, 0.708, 0.666, 0.625 and 0.583. This provides a finer granularity of the compression ratios, and makes it possible to achieve a better match with a given global attenuation coefficient. Also, one may easily understand that by switching between the dropping profiles to follow the brightness variation, the video rate will change.
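These per-profile compression ratios follow directly from the mix of 12-, 8- and 6-byte Pixel Blocks in each profile, as this sketch shows (the block counts are read off figure 11; the helper names are assumptions):

```python
PB_BYTES = {"HR": 12, "MR": 8, "LR": 6}  # 4:4:4, 4:2:2 and 4:2:0 blocks

def profile_size(hr, mr, lr):
    """Size in bytes of a 16-block MPB profile (hr + mr + lr == 16)."""
    return hr * PB_BYTES["HR"] + mr * PB_BYTES["MR"] + lr * PB_BYTES["LR"]

def profile_ratio(hr, mr, lr):
    """Compression ratio relative to the all-4:4:4 profile (192 bytes)."""
    return profile_size(hr, mr, lr) / profile_size(16, 0, 0)

# Profile 1101: 16 HR blocks        -> 192 bytes, ratio 1
# Profile 1105: 8 HR, 4 MR, 4 LR    -> 152 bytes, ratio ~0.791
# Profile 1110: 8 MR, 8 LR          -> 112 bytes, ratio ~0.583
```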
In the dropping profile 1101, the operations to apply to all the pixel blocks do not drop any chroma information but keep the pixel information invariant. The dropping profile 1101 (to be correct, a non-dropping profile or invariant dropping profile) then corresponds to only 4:4:4 Pixel Blocks, and the size of the dropping profile 1101 is defined as 192 bytes (one byte for each of the three colour components of the four pixels of the 16 pixel blocks).
In the dropping profile 1102, the chroma information dropping operations to apply to the Pixel Blocks PB(1,4), PB(2,3), PB(3,2) and PB(4,1) correspond to the 4:2:2 sub-sampling scheme, these Pixel Blocks thus becoming 4:2:2 PBs while the other PBs remain 4:4:4 Pixel Blocks. The size of this dropping profile 1102 is 176 bytes.
In the dropping profile 1103, the chroma information dropping operations to apply to the Pixel Blocks PB(1,4), PB(2,3), PB(3,2) and PB(4,1) correspond to the 4:2:0 sub-sampling scheme, these Pixel Blocks thus becoming 4:2:0 PBs while the others remain 4:4:4 Pixel Blocks. The size of this dropping profile 1103 is 168 bytes.
In the dropping profile 1104, the chroma information dropping operations to apply to the Pixel Blocks PB(1,2), PB(1,4), PB(2,1), PB(2,3), PB(3,2), PB(3,4), PB(4,1) and PB(4,3) correspond to the 4:2:2 sub-sampling scheme, these Pixel Blocks thus becoming 4:2:2 PBs while the others remain 4:4:4 Pixel Blocks. The size of this dropping profile 1104 is 160 bytes.
In the dropping profile 1105, the chroma information dropping operations to apply to the Pixel Blocks PB(1,4), PB(2,3), PB(3,2) and PB(4,1) correspond to the 4:2:2 sub-sampling scheme, and the dropping operations to apply to the Pixel Blocks PB(1,2), PB(2,1), PB(3,4) and PB(4,3) correspond to the 4:2:0 sub-sampling scheme; the latter thus become 4:2:0 PBs and the former 4:2:2 PBs, while the others remain 4:4:4 Pixel Blocks. The size of this dropping profile 1105 is 152 bytes.
In the dropping profile 1106, the operations to apply to the Pixel Blocks PB(1,4), PB(2,3), PB(3,2) and PB(4,1) do not drop any chroma information (i.e. correspond to the 4:4:4 sub-sampling scheme), while the chroma information dropping operations to apply to the other PBs correspond to the 4:2:2 sub-sampling scheme. The size of this dropping profile 1106 is 144 bytes.
In the dropping profile 1107, the chroma information dropping operations to apply to the Pixel Blocks PB(1,4), PB(2,3), PB(3,2) and PB(4,1) correspond to the 4:2:0 sub-sampling scheme, the operations to apply to the Pixel Blocks PB(1,1), PB(2,2), PB(3,3) and PB(4,4) do not drop any chroma information (i.e. correspond to the 4:4:4 sub-sampling scheme), while the chroma information dropping operations to apply to the other PBs correspond to the 4:2:2 sub-sampling scheme. The size of this dropping profile 1107 is 136 bytes.
In the dropping profile 1108, the chroma information dropping operations to apply to all the Pixel Blocks correspond to the 4:2:2 sub-sampling scheme. The size of this dropping profile 1108 is 128 bytes.
In the dropping profile 1109, the chroma information dropping operations to apply to the Pixel Blocks PB(1,4), PB(2,3), PB(3,2) and PB(4,1) correspond to the 4:2:0 sub-sampling scheme, and the chroma information dropping operations to apply to the other PBs correspond to the 4:2:2 sub-sampling scheme. The size of this dropping profile 1109 is 120 bytes.
In the dropping profile 1110, the chroma information dropping operations to apply to the Pixel Blocks PB(1,2), PB(1,4), PB(2,1), PB(2,3), PB(3,2), PB(3,4), PB(4,1), PB(4,3) correspond to the 4:2:0 sub-sampling scheme, and the chroma information dropping operations to apply to the other PBs correspond to the 4:2:2 sub-sampling scheme. The size of this dropping profile 1110 is 112 bytes.
The selection of one chrominance dropping profile among the set of predetermined profiles for a given sub-region is based on the attenuation coefficients as discussed above. Taking a global attenuation coefficient as the mean value for representing all possible attenuation coefficients applied in the sub-region is one example.
Another example is to choose the global attenuation coefficient as the lowest attenuation coefficient value applied in the sub-region, and to choose a compression technique, or a combination of compression techniques, leading to a compression ratio that is the closest to this chosen global attenuation coefficient. This advantageously ensures that the compression ratio obtained drops just enough information to maintain a good visual quality for the pixels in the sub-region with the least attenuated luminance components (corresponding to the lowest attenuation coefficient, and thus to the brightest pixels in the sub-region).
In the above examples, compression ratios are chosen to be the closest from the (global) attenuation coefficients in each sub-region. In another variant, it is sufficient to ensure that the variation of the compression ratio goes opposite the variation of the attenuation coefficients, so that the greater is the (global) attenuation coefficient, the lower is the compression ratio. By considering monotonic functions, this means that the attenuation coefficients function A is a monotonically increasing function while the compression ratio is a monotonically decreasing function.
If we apply this other variant to the 8 sub-divisions of the attenuation region of figure 7 and to the set of predetermined dropping profiles of figure 11, it is enough to map 8 dropping profiles sorted in decreasing order of compression ratio, e.g. from 1102 to 1109, on the 8 sub-regions sorted in increasing order of the attenuation coefficients, e.g. from 710 to 717. In explicit terms, we have:
- edge-blending sub-region 710 may be encoded using the chroma information dropping profile 1102;
- edge-blending sub-region 711 may be encoded using the chroma information dropping profile 1103;
- edge-blending sub-region 712 may be encoded using the chroma information dropping profile 1104;
- edge-blending sub-region 713 may be encoded using the chroma information dropping profile 1105;
- edge-blending sub-region 714 may be encoded using the chroma information dropping profile 1106;
- edge-blending sub-region 715 may be encoded using the chroma information dropping profile 1107;
- edge-blending sub-region 716 may be encoded using the chroma information dropping profile 1108;
- edge-blending sub-region 717 may be encoded using the chroma information dropping profile 1109.
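This monotonic mapping can be sketched in a few lines (the sub-region and profile identifiers follow figures 7 and 11):

```python
# Sub-regions sorted in increasing order of global attenuation coefficient:
sub_regions = [710, 711, 712, 713, 714, 715, 716, 717]
# Dropping profiles sorted in decreasing order of compression ratio:
profiles = [1102, 1103, 1104, 1105, 1106, 1107, 1108, 1109]

# Pairing the two sorted lists gives the monotonic mapping:
mapping = dict(zip(sub_regions, profiles))
```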
So far, only the attenuation coefficients, or information derived from these coefficients, have been used as a parameter for compressing the pixels of the attenuation region and thus generating rate-reduced video frames. In another embodiment of the invention, the compressing is furthermore based on the bandwidth available in the communication network for transmitting the compressed video frames. To take the available bandwidth into account, a weighting factor may be used to further reduce the rate of the compressed video frames. For example, the weighting factor may take a value between 1 and 0.5 and is applied to the attenuation coefficients prior to deriving the compression ratio according to one of the compression techniques described above.
This weighting factor does not affect the actual attenuation coefficients used to attenuate the luminance component values, but only those used to determine the compression ratio.
If the weighting factor is 1, the available bandwidth is not taken into account. If the weighting factor is 0.5, for example, this is equivalent to further attenuating the luma components by 50%, which eventually leads to further reducing the compression ratio. It can also be envisaged to apply the weighting factor directly to the compression ratio instead of the attenuation coefficients. The weighting factor makes it possible to tune the rate of the compressed video frames to better match the available bandwidth of the network.
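A minimal sketch of the weighting step, with a hypothetical function name; note that, as stated above, the weighted coefficient is only used for rate control, not for the luminance attenuation itself:

```python
def coefficient_for_rate_control(attenuation_coeff, weight=1.0):
    """Apply the bandwidth weighting factor (between 0.5 and 1.0) to the
    attenuation coefficient used for deriving the compression ratio.
    The coefficient used to attenuate the luminance component itself
    is left unchanged."""
    if not 0.5 <= weight <= 1.0:
        raise ValueError("weighting factor expected in [0.5, 1.0]")
    return attenuation_coeff * weight

# weight = 1.0: available bandwidth ignored, coefficient passes through.
# weight = 0.5: equivalent to a further 50% attenuation for rate-control
# purposes only.
```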
The decompressing step S610 corresponding to the above-described third compression technique then comprises recovering missing chrominance components in pixels of the attenuation region from the components of neighbouring pixels. More particularly, the recovering comprises: identifying sub-regions in the attenuation region; determining the compression ratios used for compressing pixels of the respective sub-regions based on the attenuation coefficients applied to the pixels of said sub-regions; obtaining the chrominance dropping profile applied to each sub-region, from a set of predetermined dropping profiles, based on the determined compression ratio; and recovering the missing chrominance components according to the obtained chrominance dropping profiles.
The set of dropping profiles can be either predetermined or signalled, for example during the transmission of each compressed video frame by the transmitting device. The receiving device can then obtain the dropping profile applied by the transmitting device from the set of predetermined dropping profiles, reconstructed at the receiving device, based on the determined compression ratio.
Figure 12 illustrates the recovering of missing chrominance components in the case where Macro Pixel Blocks are used in the chroma information dropping profiles according to figures 10 and 11. It shows one advantage of using the adjacency scheme introduced by the concept of Macro Pixel Blocks. In particular, it makes it possible to use an efficient process for reconstructing uncompressed video pixels from the received compressed video pixels (in the rate-reduced video frame). The process takes place within the decompression module 470 to recover the missing pixel information due to the above dropping operations applied to the uncompressed video frame.
Reconstructing missing video pixel chroma information may be based on interpolation or duplication from neighbouring pixels.
The decompression module 470 may receive the rate-reduced video frame in one shot, or may receive it by streaming, i.e. packet by packet, each packet carrying part of a video frame.
Flags or structure within the data in the packets make it possible for the receiving device to parse the data corresponding to each Macro Pixel Block.
For each received MPB, the decompression module 470 is able to retrieve the corresponding pixel information dropping profile applied by the compression module 422.
This may be done based on the compression ratio used for compressing the pixels of the MPB, determined from the (global) attenuation coefficient applied for that MPB at the transmitting device. No signalling is thus necessary. Alternatively, the dropping profile is retrieved based on an additional header provided together with the pixel information of the MPB, for example through an identifier of the applied pixel information dropping profile associated with the rate-reduced MPB.
Based on the retrieved pixel information dropping profile, the decompression module 470 knows the dropping operations applied to each Pixel Block, and then the resolution of each PB within the current MPB, i.e. the pixel information missing from each Pixel Block.
Upon reception of the MPB, the decompression module 470 first reconstructs the LR_PB pixel blocks (4:2:0) to obtain HR_PB pixel blocks (4:4:4), and then reconstructs the MR_PB pixel blocks (4:2:2) to obtain HR_PB pixel blocks (4:4:4). This results in having only HR_PB pixel blocks (4:4:4), i.e. the original uncompressed video frame has been recovered.
Figure 12 illustrates the recovery of pixel information in a LR_PB and in a MR_PB, where the above adjacency scheme has been applied by the compression module 420.
The decompression module 470 obtains the Pixel Blocks 1200, 1210, 1220, 1230 and 1240, where the central pixel block 1200 is LR_PB and its adjacent pixel blocks are of higher resolution, i.e. either HR_PB (PB 1240) or MR_PB (PBs 1210, 1220 and 1230).
The reconstruction of pixel block 1200 is done by interpolation from the adjacent pixel blocks.
The interpolation of pixel block 1200 comprises:
- to recover the Cb component of pixel 1201: interpolation (e.g. computing the mean) of the Cb components of its adjacent pixels from the adjacent pixel blocks (i.e. from pixels 1211 and 1241). In particular, only the adjacent pixel blocks having the lowest resolution (here MR_PB) may be taken into account. In the example of the Figure, this results in the duplication of the Cb video component of pixel 1211 (since 1210 is MR_PB while 1240 is HR_PB);
- to recover the Cb and Cr components of pixel 1202: interpolation (mean) of respectively the Cb and Cr components of its adjacent pixels from the adjacent pixel blocks (i.e. from pixels 1212 and 1221). In particular, only the adjacent pixel blocks having the lowest resolution may be taken into account. In the example of the Figure, this results in the duplication of the Cb and Cr video components of pixel 1221 (since pixel 1212 does not have any Cb or Cr component);
- to recover the Cb and Cr components of pixel 1203: interpolation (mean) of respectively the Cb and Cr components of its adjacent pixels from the adjacent pixel blocks (i.e. from pixels 1222 and 1231). In particular, only the adjacent pixel blocks having the lowest resolution may be taken into account. In the example of the Figure, this results in the duplication of the Cb and Cr video components of pixel 1222 (since pixel 1231 does not have any Cb or Cr component);
- to recover the Cr component of pixel 1204: interpolation (mean) of the Cr components of its adjacent pixels from the adjacent pixel blocks (i.e. from pixels 1232 and 1242). In particular, only the adjacent pixel blocks having the lowest resolution may be taken into account. In the example of the Figure, this results in the duplication of the Cr video component of pixel 1232 (since 1230 is MR_PB while 1240 is HR_PB).
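The recovery rule illustrated above can be sketched as follows. This is a simplified, hypothetical model (not the patented implementation): each candidate neighbour is tagged with its block resolution, only neighbours from the lowest-resolution adjacent blocks are kept, and the mean degenerates to duplication when a single candidate remains. The resolution ranks and sample values are made up:

```python
def recover_component(neighbours):
    """neighbours: list of (block_resolution_rank, component_value), where a
    lower rank means a lower-resolution block (e.g. 1 = MR_PB, 2 = HR_PB).
    Returns the component interpolated from the lowest-resolution neighbours
    only, as a mean (duplication when only one candidate remains)."""
    lowest = min(rank for rank, _ in neighbours)
    values = [v for rank, v in neighbours if rank == lowest]
    return sum(values) / len(values)

# Pixel 1201's Cb from pixels 1211 (in MR_PB 1210) and 1241 (in HR_PB 1240):
# only the MR_PB neighbour is kept, so its Cb value is duplicated.
cb = recover_component([(1, 112), (2, 120)])   # -> 112.0
```

When both neighbours come from blocks of the same (lowest) resolution, the function returns their mean, matching the general interpolation case described above.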
When this interpolation process has been applied to all the LR_PBs to recover HR_PBs, the decompression module 470 then processes the MR_PBs in a similar manner. The decompression module 470 obtains the Pixel Blocks 1250, 1260, 1270, 1280 and 1290, where the central pixel block 1250 is MR_PB and its adjacent pixel blocks are of the same or higher resolution, i.e. either HR_PB (PBs 1260, 1280 and 1290) or MR_PB (PB 1270).
The reconstruction of pixel block 1250 is done by interpolation from the adjacent pixel blocks.
The interpolation of pixel block 1250 comprises:
- to recover the Cb and Cr components of pixel 1252: interpolation (mean) of respectively the Cb and Cr components of its adjacent pixels from the adjacent pixel blocks (i.e. from pixels 1261 and 1271). In particular, only the adjacent pixel blocks having the lowest resolution (here MR_PB) may be taken into account. In the example of the Figure, this results in the duplication of the Cb and Cr video components of pixel 1271 (since 1270 is MR_PB while 1260 is HR_PB);
- to recover the Cb and Cr components of pixel 1253: interpolation (mean) of respectively the Cb and Cr components of its adjacent pixels from the adjacent pixel blocks (i.e. from pixels 1272 and 1281). In particular, only the adjacent pixel blocks having the lowest resolution may be taken into account. In the example of the Figure, this results in the duplication of the Cb and Cr video components of pixel 1272 (since 1270 is MR_PB while 1280 is HR_PB).
Following the interpolation of the MR_PBs, the decompression module 470 has thus reconstructed a high resolution video frame (4:4:4), i.e. an uncompressed video frame. The 4:4:4 video frame is then transferred to the Application Layer 455 for further processing (such as rendering).
One skilled in the art will easily understand that reconstruction schemes other than the above interpolations may be implemented within the scope of the present invention.
This embodiment of the invention has been illustrated above through the use of pixel information dropping profiles (Figure 11) made of dropping operations defined by 4:4:4 PBs and/or 4:2:2 PBs and/or 4:2:0 PBs. The decompression of a rate-reduced video frame may then involve interpolation of 4:2:2 PBs and 4:2:0 PBs from 4:2:2 PBs and 4:4:4 PBs to recover an uncompressed video frame.
Figure 13 schematically illustrates a processing device 1300, either a wireless transmitting station, or a wireless receiving station, or a station embedding both functionalities, configured to implement at least one embodiment of the present invention.
The processing device 1300 may be a device such as a micro-computer, a workstation or a light portable device. The device 1300 comprises a communication bus 1313 to which there are preferably connected:
- a central processing unit 1311, such as a microprocessor, denoted CPU;
- a read only memory 1307, denoted ROM, for storing computer programs for implementing the invention;
- a random access memory 1312, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing methods according to embodiments of the invention; and
- a communication interface 1302 connected to a communications network 1303 over which digital data to be processed are transmitted, for example a wireless communication network.
Optionally, the apparatus 1300 may also include the following components:
- a data storage means 1304, such as a hard disk, for storing computer programs for implementing methods of one or more embodiments of the invention and data used or produced during the implementation of one or more embodiments of the invention;
- a disk drive 1305 for a disk 1306, the disk drive being adapted to read data from the disk 1306 or to write data onto said disk;
- a screen or projection means 1309 for displaying or projecting data;
- a keyboard 1310 or any other pointing means for user input.
The apparatus 1300 can be connected to various peripherals, such as for example a digital camera 1335 or a microphone 1308, each being connected to an input/output card (not shown) so as to supply multimedia data to the apparatus 1300.
The communication bus provides communication and interoperability between the various elements included in the apparatus 1300 or connected to it. The representation of the bus is not limiting and in particular the central processing unit is operable to communicate instructions to any element of the apparatus 1300 directly or by means of another element of the apparatus 1300.
The disk 1306 can be replaced by any information medium such as, for example, a compact disk (CD-ROM), rewritable or not, a ZIP disk or a memory card and, in general terms, by an information storage means that can be read by a microcomputer or by a microprocessor, integrated or not into the apparatus, possibly removable and adapted to store one or more programs whose execution enables a method according to the invention to be implemented.
The executable code may be stored either in read only memory 1307, on the hard disk 1304 or on a removable digital medium such as for example a disk 1306 as described previously. According to a variant, the executable code of the programs can be received by means of the communication network 1303, via the interface 1302, in order to be stored in one of the storage means of the apparatus 1300, such as the hard disk 1304, before being executed.
The central processing unit 1311 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to the invention, which instructions are stored in one of the aforementioned storage means. On powering up, the program or programs that are stored in a non-volatile memory, for example on the hard disk 1304 or in the read only memory 1307, are transferred into the random access memory 1312, which then contains the executable code of the program or programs, as well as registers for storing the variables and parameters necessary for implementing the invention.
In this embodiment, the apparatus is a programmable apparatus which uses software to implement the invention. However, alternatively, the present invention may be implemented in hardware (for example, in the form of an Application Specific Integrated Circuit or ASIC).
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications which lie within the scope of the present invention will be apparent to a person skilled in the art. Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention as determined by the appended claims. In particular different features from different embodiments may be interchanged, where appropriate.
Claims (13)
CLAIMS

- 1. A method of processing video frames to generate compressed video frames, comprising: attenuating luminance components of pixels contained in an attenuation region of the video frames by applying attenuation coefficients, where an attenuation coefficient represents a percentage by which the luminance component value is attenuated; and compressing pixels of the attenuation region based on the attenuation coefficients associated with the pixels.
- 2. The processing method of claim 1, wherein the compressing comprises choosing a compression ratio based on the attenuation coefficients, wherein the greater is the attenuation coefficient associated with a pixel, the lower is the compression ratio.
- 3. The processing method of claim 2, wherein the attenuation region is divided into sub-regions and wherein a same compression ratio is chosen for all pixels of each sub-region based on the attenuation coefficients applied to the pixels of said sub-region.
- 4. The processing method of claim 3, wherein the compressing comprises dropping chrominance components from the pixels of the attenuation region, the number of dropped chrominance components being dependent on the chosen compression ratio, thereby relating the number of dropped chrominance components and the value of the attenuation coefficients.
- 5. The processing method of claim 4, wherein the dropping comprises applying a plurality of different operations for dropping chrominance components to respective sub-regions, two different chrominance dropping operations applied to two different sub-regions dropping different numbers of chrominance components.
- 6. The processing method of claim 5, wherein each chrominance dropping operation is defined by a chrominance dropping profile selected from a set of predetermined dropping profiles.
- 7. The processing method of any one of claims 1 to 6, wherein luminance component values of the frame pixels are encoded using N bits and wherein the compressing comprises shortening the encoding of the attenuated luminance component value of each pixel of the attenuation region by removing the L most significant bits, where L = N - M and M being the maximum size in bits of the attenuated luminance component value of said pixel.
- 8. The processing method of any one of claims 1 to 7, wherein chrominance component values of the frame pixels are encoded using N bits and wherein the compressing comprises shortening the encoding of chrominance component values of pixels of the attenuation region by removing the R least significant bits, where R is related to the attenuation coefficients applied to said pixels, wherein the greater is the attenuation coefficient, the greater is R.
- 9. The processing method of any one of claims 1 to 7, wherein chrominance component values of the frame pixels are encoded using N bits and wherein the compressing comprises upsampling chrominance component values of pixels of the attenuation region and re-encoding said values with N' bits, where N' < N and wherein N' for a given pixel is inversely related to the attenuation coefficient applied to said pixel, the greater is the attenuation coefficient, the lower is N'.
- 10. The processing method of any one of claims 1 to 9, further comprising transmitting the compressed video frames resulting from the compressing of pixel component values over a communication network to a projection display apparatus for projecting the video frames.
- 11. The processing method of claim 10, wherein the compressing is furthermore based on the bandwidth available in the communication network for transmitting the compressed video frames.
- 12. The processing method of any one of claims 1 to 11, wherein the frames are to be displayed by a multi-projection system comprising a plurality of projection display apparatus for displaying the frames, the attenuation region being defined by the region of overlap between two frames to be displayed by adjacent projection display apparatus.
- 13. The processing method of claim 12, wherein the attenuation coefficients are obtained from a brightness attenuation function defined for edge-blending the overlap regions.
- 14. A method of processing compressed video frames to generate uncompressed video frames, comprising: obtaining attenuation coefficients used to attenuate luminance components of pixels of an attenuation region contained in the compressed video frames; and decompressing pixels of the attenuation region based on the obtained attenuation coefficients.
- 15. The processing method of claim 14, wherein the decompressing comprises recovering missing chrominance components in pixels of the attenuation region from the components of neighbouring pixels.
- 16. The processing method of claim 15, wherein the recovering comprises: identifying sub-regions in the attenuation region; determining compression ratios used for compressing pixels of respective sub-regions based on the attenuation coefficients applied to the pixels of said sub-regions; obtaining a chrominance dropping profile applied to each sub-region based on the determined compression ratio, the chrominance dropping profiles being selected from a set of predetermined dropping profiles; and recovering missing chrominance components according to the obtained chrominance dropping profiles.
- 17. The processing method of any one of claims 14 to 16, wherein the decompressing comprises: determining the maximum size in bits M of the attenuated luminance component values of pixels of the attenuation region based on the obtained attenuation coefficients; and appending L most significant bits with null value to the M bits of each attenuated luminance component value, thereby reconstructing a fully encoded value of N = L + M bits.
- 18. The processing method of any one of claims 14 to 17, wherein the decompressing comprises: determining a number of least significant bits R removed from the encoding of each chrominance component of pixels of the attenuation region when compressing the video frames, based on the obtained attenuation coefficients; and appending R least significant bits to the N - R bits of each chrominance component value, thereby reconstructing a fully encoded value of N bits.
- 19. The processing method of any one of claims 14 to 17, wherein the decompressing comprises: determining a number of bits N' used to encode, after upsampling, chrominance component values of pixels of the attenuation region when compressing the video frames, based on the obtained attenuation coefficients; and oversampling the chrominance component values and re-encoding said values with N bits, where N > N'.
- 20. The processing method of any one of claims 14 to 19, wherein the uncompressed video frames are to be displayed by a projection display apparatus of a multi-projection system, the multi-projection system comprising a further projection apparatus adapted to display adjacent video frames, where a region of overlap between the video frames and the adjacent video frames defines the attenuation region.
- 21. The processing method of claim 20, wherein the attenuation coefficients are obtained from a brightness attenuation function defined for edge-blending the overlap region.
- 22. A non-transitory computer-readable storage medium storing a program for causing a computer to execute the method according to any one of claims 1 to 21.
- 23. A program which, when run by a programmable apparatus, causes the programmable apparatus to execute the method according to any one of claims 1 to 21.
- 24. An apparatus for processing video frames to generate compressed video frames, comprising: attenuating means for attenuating luminance components of pixels contained in an attenuation region of the video frames by applying attenuation coefficients, where an attenuation coefficient represents a percentage by which the luminance component value is attenuated; and a compression module for compressing pixels of the attenuation region based on the attenuation coefficients associated with the pixels.
- 25. An apparatus for processing compressed video frames to generate uncompressed video frames, comprising: obtaining means for obtaining attenuation coefficients used to attenuate luminance components of pixels of an attenuation region contained in the compressed video frames; and a decompression module for decompressing pixels of the attenuation region based on the obtained attenuation coefficients.
- 26. A method of processing video frames substantially as herein described with reference to, and as shown in, Figures 4 to 12 of the accompanying drawings.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1301743.9A GB2510814B (en) | 2013-01-31 | 2013-01-31 | Luma-indexed chroma sub-sampling |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB201301743D0 GB201301743D0 (en) | 2013-03-20 |
| GB2510814A true GB2510814A (en) | 2014-08-20 |
| GB2510814B GB2510814B (en) | 2016-05-18 |
Family
ID=47988499
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1301743.9A Active GB2510814B (en) | 2013-01-31 | 2013-01-31 | Luma-indexed chroma sub-sampling |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2510814B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117574329B (en) * | 2024-01-15 | 2024-04-30 | 南京信息工程大学 | Nitrogen dioxide refined space distribution method based on ensemble learning |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP0473820A1 (en) * | 1988-01-14 | 1992-03-11 | Metavision | Seamless display for multiple video images |
| US5162898A (en) * | 1988-10-06 | 1992-11-10 | Sharp Kabushiki Kaisha | Color image data compressing apparatus and method |
| EP0535963A2 (en) * | 1991-10-02 | 1993-04-07 | Matsushita Electric Industrial Co., Ltd. | Orthogonal transformation encoder |
| JPH08102967A (en) * | 1994-09-30 | 1996-04-16 | Canon Inc | Image coding method and apparatus |
| US6456655B1 (en) * | 1994-09-30 | 2002-09-24 | Canon Kabushiki Kaisha | Image encoding using activity discrimination and color detection to control quantizing characteristics |
| US6480175B1 (en) * | 1999-09-17 | 2002-11-12 | International Business Machines Corporation | Method and system for eliminating artifacts in overlapped projections |
| JP2003047024A (en) * | 2001-05-14 | 2003-02-14 | Nikon Corp | Image compression apparatus and image compression program |
| US6570623B1 (en) * | 1999-05-21 | 2003-05-27 | Princeton University | Optical blending for multi-projector display wall systems |
Non-Patent Citations (1)
| Title |
|---|
| (MURCHING & WOODS) Adaptive subsampling of color images, Proceedings of International Conference on Image Processing 1994. * |