GB2526062A - Method for transmitting video data defining images, and next displaying said images, comprising reordering the video data for transmission - Google Patents
Method for transmitting video data defining images, and next displaying said images, comprising reordering the video data for transmission
- Publication number
- GB2526062A GB2526062A GB1407449.6A GB201407449A GB2526062A GB 2526062 A GB2526062 A GB 2526062A GB 201407449 A GB201407449 A GB 201407449A GB 2526062 A GB2526062 A GB 2526062A
- Authority
- GB
- United Kingdom
- Prior art keywords
- video
- pixels
- node
- pixel
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 230000005540 biological transmission Effects 0.000 title claims abstract description 67
- 238000000034 method Methods 0.000 title claims description 44
- 238000012545 processing Methods 0.000 claims abstract description 89
- 238000012937 correction Methods 0.000 claims abstract description 21
- 238000002156 mixing Methods 0.000 claims abstract description 19
- 238000005192 partition Methods 0.000 claims description 39
- 230000000875 corresponding effect Effects 0.000 claims description 28
- 238000004891 communication Methods 0.000 claims description 16
- 239000011159 matrix material Substances 0.000 claims description 8
- 238000000638 solvent extraction Methods 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 4
- 230000006978 adaptation Effects 0.000 claims description 2
- 230000003139 buffering effect Effects 0.000 abstract description 4
- 238000004364 calculation method Methods 0.000 description 8
- 238000010586 diagram Methods 0.000 description 4
- 238000007792 addition Methods 0.000 description 2
- 230000006870 function Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 230000009467 reduction Effects 0.000 description 2
- 230000000295 complement effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000011084 recovery Methods 0.000 description 1
- 238000009877 rendering Methods 0.000 description 1
- 230000004044 response Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/37—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability with arrangements for assigning different transmission priorities to video input data or to video coded data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/88—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving rearrangement of data among different coding units, e.g. shuffling, interleaving, scrambling or permutation of pixel data or permutation of transform coefficient data among different blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/89—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
- H04N19/895—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
- H04N9/3147—Multi-projection systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3182—Colour adjustment, e.g. white balance, shading or gamut
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Geometry (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
Video data defining images is transmitted from a first node (e.g. data store) to a second node (e.g. video projector) of a network, processed (e.g. geometric distortion correction using bilinear interpolation, edge-blending or photometric/colorimetric adjustment), and then displayed. The pixel data defining the image is reordered for transmission, depending on the image processing to be performed at the second node. Video data transmission is optimised, allowing the destination processor to start processing the data after receiving the first data in parallel with receiving subsequent data, reducing buffering requirements and end-to-end latency. The transmission order may depend either on the complexity or on the order of the processing steps performed at the second node. Data dependencies may be identified between the video data pixels. The processing may comprise transmission error concealment, and the relative importance of each pixel or pixel group may be determined to influence transmission order. A property of a pixel to be displayed may be interpolated (e.g. using bilinear or bicubic interpolation) from the properties of neighbouring pixels. The second node may be a projector in a multi-projector video projection system.
Description
Method for transmitting video data defining images, and next displaying said images, comprising reordering the video data for transmission
FIELD OF THE INVENTION
The present invention relates to a method and device for transmitting and displaying images of video data.
The invention relates more particularly to transmission of video data, in particular raw video data, from a video source to a display device which displays the video after application of image processing. Such a display device may comprise for example a display screen, a video projector, or a group of aggregated video projectors (multi-projector video system). The video source delivers a video sequence to the display apparatus at a given resolution, color depth and frame rate (e.g. 1920 x 1080 pixels at 24 bits per pixel and 60 frames per second) over a suitable transmission network. The video source may comprise for example a camera, a DVD or Blu-ray player, a personal computer (PC) or a Set-top Box.
BACKGROUND OF THE INVENTION
Commonly, the video display devices used to display a video sequence have to perform processing on the received video data prior to displaying the images forming the video sequence. Such processing is often called video "pre-processing".
For example, if the apparatus is a video projector (whether or not part of a multi-projector system), geometric distortion (keystone) correction may be required. When a projector is part of a multi-projector system, edge blending may be required. Other examples of such pre-processing include photometric or colorimetric adjustments.
A large amount of buffer memory, computer power (calculation capacities) and/or dedicated hardware may be needed to implement such processing.
In addition, the transmission of data between the video source and the destination display apparatus may be subject to transmission errors and data losses. In this case, the processing performed at the destination node of the network (i.e. at the display device end) may include error concealment. In a video transmission and display process, such error concealment and data-loss recovery through retransmission capabilities are limited because video is displayed in "real-time".
Thus, the processing which has to be performed on the received video data for display may be quite time consuming or may require expensive hardware resources. For example, geometric distortion correction for video projection may require bi-linear or bi-cubic interpolation. Bi-linear interpolation requires performing at least four (respectively sixteen for bi-cubic interpolation) floating point multiplications and additions for each one of the projector's pixels, for each one of the three color channels, and for each video frame.
In the case of a 1920 x 1080 projector resolution, bi-linear interpolation requires almost 25 million floating point multiplications to be performed for each video frame, i.e. 1.5 x 10^9 multiplications per second at 60 frames per second. Bi-cubic interpolation yields higher image quality but, under the same conditions, requires nearly 100 million multiplications per frame and 6 x 10^9 multiplications per second.
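These figures can be checked with a short calculation. The sketch below is ours, not part of the patent; it simply counts one multiplication per interpolation tap, per colour channel, per pixel, for the 1920 x 1080, 60 fps example above:

```python
# Rough multiplication counts for per-pixel interpolation
# (illustrative check of the figures quoted in the text).
width, height, channels, fps = 1920, 1080, 3, 60

def mults_per_frame(taps: int) -> int:
    # One multiplication per interpolation tap, per pixel, per colour channel.
    return width * height * channels * taps

bilinear = mults_per_frame(4)    # 4 taps  -> ~25 million per frame
bicubic = mults_per_frame(16)    # 16 taps -> ~100 million per frame

print(bilinear, bilinear * fps)  # ~2.5e7 per frame, ~1.5e9 per second
print(bicubic, bicubic * fps)    # ~1e8 per frame, ~6e9 per second
```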
In a general manner, it is then advisable to conceive methods and devices which limit the buffer memory needed for storing video data between reception and display of a video frame by a display apparatus, keep the needed computing power and specific hardware at an acceptable level, reduce the delay between reception and display of a video frame, and/or facilitate error concealment.
The document US20100309379 describes reordering video frames for processing, in particular de-noising, in an example of image processing executed by a video projector which is part of a multi-projector system and which receives the video data to be displayed through a communication network.
However, other methods and devices could advantageously be conceived to optimize processing of video data for display.
SUMMARY OF THE INVENTION
According to a first aspect of the invention, there is provided a method for transmitting images of raw video data from a first node to a second node of a communication network and next displaying said images, said images comprising pixels having properties defined by pixel data. The method comprises processing the images at the second node before display. In the method, the pixel data are reordered for transmission, depending on the processing of the images at the second node.
In a general manner, this makes it possible to optimise the transmission of video data (in terms of speed, reliability, needed resources, and so on). In particular, reordering the data defining the pixels before transmission may allow the destination display device to start processing the data after reception of the first data fraction, e.g. a data partition, in parallel with reception of subsequent data fractions. This results in a reduction of buffering requirements and end-to-end latency.
The processing of the images may comprise a plurality of processing steps applied to the pixels of the image, the order of transmission of the pixel data depending on the complexity of the processing steps applied to the corresponding pixel.
The processing of the images may comprise a plurality of ordered processing steps applied to the pixels of the image, the order of transmission of the pixel data depending on the order of the plurality of ordered processing steps applied to the corresponding pixel.
The method may comprise identifying data dependencies between the pixels of the video data and the corresponding pixels to be displayed to characterize processing of the images at the second node.
Processing the images at the second node may include transmission error concealment. In such a case, the method may comprise partitioning the video data for transmission into partitions; the relative importance of each pixel or predefined group of pixels of an image of the video data is determined in view of possible concealment of transmission errors or data loss during processing at the second node, and the content of the partitions and their transmission order depend on the importance of the pixels.
According to an embodiment, the method may comprise partitioning the video data for transmission into partitions, and a property of a pixel to be displayed is interpolated from the corresponding properties of neighbouring pixels of an image of the video data, and the pixel data of each neighbouring pixel are put in a different partition of the said image of the video data for transmission. In a variant of this embodiment, bilinear interpolation may be used to determine the property of the pixel to be displayed based on the corresponding properties of four neighbouring pixels. The pixels of the image of the video data may be arranged in a matrix having rows and columns, each pixel having a first coordinate corresponding to a column number, and a second coordinate corresponding to a row number, thus defining the position of the pixel in said matrix, and the image is split into four partitions, said partitions being respectively composed of:
a) the pixels having an even first coordinate and an even second coordinate;
b) the pixels having an even first coordinate and an odd second coordinate;
c) the pixels having an odd first coordinate and an even second coordinate;
d) the pixels having an odd first coordinate and an odd second coordinate.
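The four-partition split a) to d) can be sketched as follows (an illustration of ours, using NumPy stride slicing; note that in `image[row, column]` the row index corresponds to the y coordinate and the column index to the x coordinate):

```python
import numpy as np

def split_into_parity_partitions(image: np.ndarray):
    """Split an image (rows x columns [x channels]) into the four
    partitions a)-d), keyed by the parity of the x (column) and
    y (row) coordinates of each pixel."""
    return {
        ("even", "even"): image[0::2, 0::2],  # a) x even, y even
        ("even", "odd"):  image[1::2, 0::2],  # b) x even, y odd
        ("odd", "even"):  image[0::2, 1::2],  # c) x odd, y even
        ("odd", "odd"):   image[1::2, 1::2],  # d) x odd, y odd
    }
```

Because the four taps of any bilinear interpolation fall one in each partition, a receiver holding the corresponding region of all four partitions can interpolate any projector pixel in that region, without waiting for the rest of the frame.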
In another variant of this embodiment, bicubic interpolation may be used to determine the property of the pixel to be displayed based on the corresponding properties of sixteen neighbouring pixels.
In any embodiment of the invention, the second node may comprise a projector of a video projection system, and processing the images at the second node may include correction of a geometric distortion of the image. Processing the images at the second node may include photometric and/or colorimetric adjustment. If the second node comprises a projector of a multi-projector video projection system, processing the images at the second node may include brightness adaptation for edge blending of images from different projectors of the multi-projector video projection system.
According to a second aspect of the invention, there is provided a video system comprising a video source at a first node of a communication network, a video display device at a second node of said communication network, and means configured to transmit video data defining images from said first node to said second node, said images comprising pixels having properties defined by pixel data, the video system further comprising means configured to process the images at the second node before display, and the video system comprising means configured to reorder the pixel data for transmission, depending on the processing of the images at the second node.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Other particularities and advantages of the invention will also emerge from the following description.
In the accompanying drawings, given by way of non-limiting examples:
* Figure 1 represents an example of a system in which a method according to the invention may be used;
* Figure 2A represents an example of a projection zone of a projector of a multi-projector system, the displayed image requiring geometric distortion correction and edge blending;
* Figure 2B represents the same example of a projection zone as Figure 2A, as seen from the projector point of view;
* Figure 3 illustrates an example of bi-linear interpolation for geometric distortion correction, the image being partitioned for transmission according to an embodiment of the invention;
* Figure 4 illustrates on a time-line video frame transmission, processing, and display, using a method according to an embodiment of the invention;
* Figure 5A shows an example of a step of a method according to an embodiment of the invention, consisting in assigning different importance levels to source pixels for bi-linear interpolation;
* Figure 5B shows on a time-line video frame transmission with acknowledgements, processing, and display, which may be implemented taking into account the importance of the pixels as defined in reference to Figure 5A;
* Figure 6 illustrates on a time-line transmission and processing of video data with chroma subsampling, in which the transmission order of different partitions of data is adapted, according to an embodiment of the invention, to the duration of required processing at the display target device;
* Figure 7 is a functional diagram of a video display device as used in an embodiment of the invention;
* Figure 8 is a functional diagram of a video source device as used in an embodiment of the invention.
Figure 1 illustrates an example of a system in which a method according to the invention may be used. The system comprises a video source 100, which may comprise for example a camera, a DVD or Blu-ray player, a hard-drive-based video storage, a PC (Personal Computer) or a Set-top Box. The system also comprises a video display device 110, which may comprise for example a screen (e.g. a computer or television screen), a video projector, a group of aggregated screens arranged to each display a part of a video frame so as to form the full video frame, or projectors configured to display adjacent, possibly partially overlapping, parts of a full video frame on a projection screen, the association of the different parts forming the full display.
A network 120, which may be wired or wireless, is used for data transmission between the video source 100 and the video display device 110. The source constitutes a first node of the network; the display device constitutes a second node of the network. The network 120 may be for example composed of a simple point-to-point link connecting the source 100 to the target, i.e. the video display device 110. The network 120 may be, in another embodiment, a larger network interconnecting said first and second nodes and other nodes as well, sharing the communication medium among different connections between different nodes.
If the display device 110 comprises several display units (e.g. screens or projectors), the network 120 may be used to distribute the video data from the source 100 to each display unit.
A method according to the invention may be used for video data transmission over such a system.
For example, if the video source 100 delivers video images at a given resolution (e.g. 1920 x 1080 pixels) and colour depth (e.g. 24 bits/pixel) that the video display device 110 is designed to handle, said video display device 110 may have to perform some heavy processing on each video frame received from the source through the communication link 120.
In the example embodiments described hereafter to illustrate the invention, the display device 110 is a multi-projector system composed of several video projectors projecting images on adjacent overlapping areas of a projection screen so as to create a seamless aggregate display.
In such a case, video processing performed by the display device 110 (i.e. at the second node of the network) consists in both geometric distortion correction and determination of edge blending in the overlapping areas. Such processing has to be performed for each individual projector composing the aggregate device 110 (i.e. the multi-projector video system).
Other kinds of processing applicable to single- and multi-projector video systems, and to other kinds of display devices, comprise photometric and colorimetric image adjustments or up- and downscaling of images.
Figure 2A represents an example of a projection zone of a projector of a multi-projector system. The displayed image requires geometric distortion correction and edge blending. Indeed, the projector covers, on the projection screen, the quadrilateral area having the four corners denoted P1, P2, P3 and P4.
Because the optical axis of the projector is not orthogonal to the center of the screen, the illuminated quadrilateral is not necessarily rectangular: the opposite sides of the quadrilateral may not be parallel to each other or to the horizontal and vertical screen borders. Distortion correction is thus needed. In particular, in a multi-projector video system, such a correction is needed to make the images of two projectors of the system match in the overlapping zones.
In the represented example, the projector has to display a rectangular part of the full video image. In this example, this part of the image is situated in the upper left corner of said image. The rectangular area (corresponding to the upper left part of a projection screen), in which the projector's part of the video image is displayed, has the four corners marked Q1, Q2, Q3 and Q4.
In this part of the screen, the image situated in the rectangle having the four corners denoted R1, R2, R3 and R4 has to be displayed with full brightness.
Indeed, the considered projector is the only projector of the multi-projector system which illuminates this part of the screen.
In the rest of the image area, image blending with the images projected by neighbouring projectors needs to be performed. In particular, the rectangular area with the corners R2, Q2, Q23 and R3 constitutes a vertical edge-blending zone with the right neighbour of the considered projector. The projector has to reduce the brightness of the image projected in this area, as represented by the brightness curve 201, while the right neighbour projector (not represented in the drawing) applies a complementary brightness curve in such a manner that uniform brightness is obtained through superposition of the respective projected images.
Gamma correction, known to the skilled in the art, may have to be applied to compensate non-linear brightness response to the numerical values which define the pixels.
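By way of illustration only (the patent does not prescribe a particular curve or gamma value), a common way to build such complementary, gamma-compensated brightness curves is a linear ramp in linear-light space. The gamma of 2.2 below is an assumption:

```python
def blend_factor(t: float, gamma: float = 2.2) -> float:
    """Brightness factor across an edge-blending zone. t runs from 0 at
    the full-brightness edge to 1 at the outer edge of the overlap.
    gamma = 2.2 is an illustrative display gamma, not a patent value."""
    linear = 1.0 - t                 # linear-light ramp down across the overlap
    return linear ** (1.0 / gamma)   # pre-compensate the display's gamma
```

The neighbouring projector applies `blend_factor(1 - t)` at the same screen position; since the two linear-light contributions (each factor raised to the power gamma) sum to 1, the superposed images have uniform brightness across the overlap.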
In a similar manner, the rectangular area with the corners R4, R3, Q34 and Q4 constitutes a horizontal edge-blending zone with the bottom neighbour of the considered projector. The projector has to reduce the brightness of the image projected in this area, as represented by the vertical brightness curve 202.
The rectangular area with the corners R3, Q23, Q3 and Q34 constitutes the intersection of the horizontal and vertical edge-blending zones. Four projectors illuminate this area: the considered projector, its right neighbour, its bottom neighbour and its bottom-right neighbour. Both horizontal and vertical brightness curves 201 and 202 are used to adapt the brightness of the pixels projected in this area.
In the area outside the image rectangle defined by the corners Q1, Q2, Q3 and Q4, the projector in question projects black pixels, as this area is either situated outside the display area of the full image or overlaps an area in which another projector displays a part of the image.
In the depicted example, in which the considered projector is in charge of projecting the top-left corner of the input image, the points Q1 and R1 coincide, the point R2 is situated on the line connecting points Q1 and Q2, and the point R4 is situated on the line connecting points Q1 and Q4.
For the other projectors of the multi-projection system, depending on the position of the area of the screen on which they display a part of the full image, other analogous coincidences may exist or not. The horizontal brightness curve 201, the vertical brightness curve 202, or both, are modified accordingly, to take into account the presence or absence of left, right, top and bottom neighbour projectors.
Figure 2B represents the same example of a projection zone as Figure 2A, as seen "from the projector point of view". This figure is thus equivalent to figure 2A, showing the same information, but from the point of view of the considered projector.
The points represented in Figure 2B bear the same references as the corresponding points in Figure 2A, with an additional prime ('). In other words, point P'1 projected by the projector as shown in Figure 2B corresponds to point P1 as displayed on the projection screen as shown in Figure 2A.
Thus, points P'1, P'2, P'3 and P'4 correspond to the four corners of the projector's image, which is rectangular in the projector's co-ordinates. The quadrilateral areas delimited by the corners Q'1, Q'2, Q'3 and Q'4, respectively R'1, R'2, R'3 and R'4, are obtained by applying to the image of the video stream a geometric distortion. This distortion is the inverse of the geometric distortion engendered by the geometrical configuration of the projection system, as represented by the quadrilateral defined by points P1, P2, P3 and P4 in Figure 2A.
In the preferred embodiment, the projection screen is flat and the geometric distortion to be applied is called a homography. The homography may be described in a known manner using a 3 x 3 matrix with real coefficients, which can be determined from four corresponding points, for example P1, P2, P3 and P4 and their respective counterparts P'1, P'2, P'3 and P'4.
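As an illustrative sketch (ours, not part of the patent), the 3 x 3 homography matrix can be recovered from four point correspondences such as P1..P4 and P'1..P'4 by the standard direct linear solution, fixing the bottom-right coefficient to 1 and solving the resulting 8 x 8 linear system:

```python
import numpy as np

def homography_from_points(src_pts, dst_pts):
    """Solve for H (3x3, bottom-right coefficient fixed to 1) such that
    dst ~ H * src, from four (x, y) correspondences."""
    A, b = [], []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        # Two linear equations per correspondence in the 8 unknowns.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, x, y):
    # Map a point through H in homogeneous coordinates.
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

As the text notes, this solve is done once at calibration time; only the resulting matrix (or the per-pixel lookups derived from it) is used during on-going projection.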
If a curved screen is used, the geometric distortion can be described e.g. through interpolation tables, using algorithms known in the state of the art. The distortion calculations are advantageously executed during initial projection system setup and calibration, and their results are stored for reuse during on-going video projection, as long as the projectors and the projection screen do not move relative to each other.
Calibration of the system is advantageously performed using a camera to gather the coordinates on the screen of particular points such as P1, P2, P3 and P4.
The position of other particular points such as Q1, Q2, Q3 and Q4 delimiting the projection zone on the screen, and R1, R2, R3 and R4 delimiting the edge blending zones are determined by fitting a rectangular projection area having an aspect ratio matching the source video format inside the zone of the screen in which the video has to be displayed and fitting rectangular horizontal and vertical edge blending zones within the overlapping zones of projection of adjacent video projectors.
In the preferred embodiment, each projector applies a specific geometric distortion correction to its rectangular input image part before projection. Image processing (distortion correction) is thus generally distributed within the multi-projector system.
Figure 3 illustrates an example of bi-linear interpolation for geometric distortion correction (as required for example in the case shown in Figures 2A and 2B), the image being partitioned for transmission according to an embodiment of the invention.
The figure presents a small part of the projection screen, on which a 4 x 4 pixel part of the source image has to be displayed. The values of the parameters defining the pixels of said source image are used to determine the values of the corresponding parameters that have to be assigned to the pixels projected by the projector to make the source image look like it is geometrically well-aligned on the screen.
The pixels of the source image ("source pixels") are disposed in a matrix having vertical columns and horizontal rows. The columns and rows are given integer numbers as coordinates, denoted respectively x and y. Each pixel has a first coordinate (x coordinate) corresponding to a column number, and a second coordinate (y coordinate) corresponding to a row number, thus defining the position of the pixel in the matrix.
In Figure 3, the pixels of the source image are referenced "ee" if both the x and the y coordinate are even, "eo" if the x coordinate is even and the y coordinate is odd, "oe" if the x coordinate is odd and the y coordinate is even and finally "oo" if both coordinates are odd.
Figure 3 also shows the position of sixteen of the projector's pixels that are situated in the corresponding represented part of the screen. The coordinates (x; y) of the center of each pixel of the projector are noted using the coordinate system of the source image, rounded to two decimal places.
Bi-linear interpolation comprises determining the value of each projector's pixel as a weighted mean of its four closest neighbouring source pixels (i.e. pixels of the source image). The closest neighbouring source pixels of the projector pixel (x; y) have the following coordinates: (⌊x⌋; ⌊y⌋), (⌊x⌋ + 1; ⌊y⌋), (⌊x⌋; ⌊y⌋ + 1) and (⌊x⌋ + 1; ⌊y⌋ + 1).
⌊x⌋ denotes the floor of x, i.e. the greatest integer less than or equal to x.
The weight of each source pixel depends on the respective distances of the considered projector pixel (x; y) to said four source pixels: the closer a source pixel to the resulting pixel, the greater its weight.
The formula used is: Result(x; y) = (1 − {x}) · (1 − {y}) · Src(⌊x⌋; ⌊y⌋) + {x} · (1 − {y}) · Src(⌊x⌋ + 1; ⌊y⌋) + (1 − {x}) · {y} · Src(⌊x⌋; ⌊y⌋ + 1) + {x} · {y} · Src(⌊x⌋ + 1; ⌊y⌋ + 1), wherein ⌊x⌋ denotes the floor of x, and {x} denotes the fractional part of x, with {x} = x − ⌊x⌋ and 0 ≤ {x} < 1.
For example, the value set for a given parameter of the projector's pixel having the source coordinates (3.76; 2.36) in Figure 3 is: (1 − 0.76) · (1 − 0.36) · Src(3; 2) + 0.76 · (1 − 0.36) · Src(4; 2) + (1 − 0.76) · 0.36 · Src(3; 3) + 0.76 · 0.36 · Src(4; 3) = 0.1536 · Src(3; 2) + 0.4864 · Src(4; 2) + 0.0864 · Src(3; 3) + 0.2736 · Src(4; 3), in which Src(n; m) is the value of the corresponding parameter of the source pixel having the coordinates (n; m).
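The interpolation formula and the worked example above can be sketched in a few lines (an illustration of ours; the source image is assumed to be indexed src[row][column], i.e. src[y][x]):

```python
from math import floor

def bilinear(src, x, y):
    """Weighted mean of the four nearest source pixels, following the
    formula in the text. src is indexed src[row][column], i.e. src[y][x]."""
    fx, fy = floor(x), floor(y)
    dx, dy = x - fx, y - fy                      # fractional parts {x}, {y}
    return ((1 - dx) * (1 - dy) * src[fy][fx]
            + dx * (1 - dy) * src[fy][fx + 1]
            + (1 - dx) * dy * src[fy + 1][fx]
            + dx * dy * src[fy + 1][fx + 1])
```

For the pixel at (3.76; 2.36), the four weights come out as 0.1536, 0.4864, 0.0864 and 0.2736, matching the worked example.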
This calculation has to be typically repeated for each pixel of the projector, for each of the three colour channels (e.g. red/green/blue or Y/Cb/Cr).
The four weight coefficients (e.g. 0.1536, 0.4864, 0.0864 and 0.2736, as noted in Figure 3 for the projector pixel having the coordinates (3.76; 2.36)) are advantageously pre-calculated and stored during the initial system calibration, so that they can be used for each frame of the source video.
The projector pixels situated outside the image area are easily detectable because they have out-of-range coordinates in the source image coordinate system (i.e. they are outside the matrix in which the pixels of the source image are disposed). No projection is carried out for these pixels (in other words, they may be defined as black pixels).
Furthermore, the projector pixels situated in edge blending zones, also detectable by their coordinates, are given colour values reduced for example by using a coefficient (between 0 and 1) according to the horizontal and vertical edge blending curves 201 and 202 shown in Figure 2A.
Edge-blending calculation may be performed by adjusting the aforementioned pre-calculated weight coefficients during system setup and calibration, so that the adjusted weight coefficients are stored to be used for each video frame.
In such a bi-linear interpolation, each projector pixel (except the potential "black" pixels outside the display area of the screen) is defined based on four pixels respectively marked "ee", "eo", "oe" and "oo", that is:
* one pixel having an even first coordinate and an even second coordinate;
* one pixel having an even first coordinate and an odd second coordinate;
* one pixel having an odd first coordinate and an even second coordinate;
* one pixel having an odd first coordinate and an odd second coordinate.
Furthermore, the calculation of each projector pixel value, with weight coefficients which are pre-calculated and stored as explained above, consists of four multiplications and additions per colour channel, advantageously implemented as fused multiply-add (FMA) operations commonly available on known floating-point computation hardware.
Figure 4 illustrates on a time-line video frames transmission, processing, and display, using a method according to an embodiment of the invention. The process illustrated in Figure 4 may in particular be advantageously used when geometric distortion correction and edge-blending calculation as presented in Figure 3 has been performed on the video for display.
In the illustrated example, the video frame rate is 60 frames per second.
At the top of Figure 4, a time-line is represented. The origin of the time-line (time "zero") corresponds to the start of transmission of video data.
Three lines are drawn under the time line, respectively illustrating the sequences of transmission, processing, and display of the video.
As shown on the "Transmission" line, the video source device reorders the video data in order to split the video data of each video frame into four partitions composed of data corresponding to pixels having respectively:
* both odd x and y coordinates, denoted "oo",
* both even x and y coordinates, denoted "ee",
* even x and odd y coordinates, denoted "eo", and
* odd x and even y coordinates, denoted "oe".
The so-formed partitions are successively transmitted in separate data packets, in time slots each having a duration of at most one fourth of the time slot allotted to the transmission of the whole video frame (1/60 s in the shown example).
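The parity-based split above can be sketched as follows. This is a minimal illustration; the function name and the coordinate-list representation are assumptions:

```python
# Illustrative sketch: grouping the pixel coordinates of a frame into the
# four parity partitions "oo", "ee", "eo" and "oe" before transmission.
def partition_frame(width, height):
    """Partition the (x, y) coordinates of a width x height frame by the
    parity of x (first letter) and y (second letter)."""
    parts = {"oo": [], "ee": [], "eo": [], "oe": []}
    for y in range(height):
        for x in range(width):
            key = ("e" if x % 2 == 0 else "o") + ("e" if y % 2 == 0 else "o")
            parts[key].append((x, y))
    return parts

parts = partition_frame(4, 4)
# each of the four partitions holds exactly one quarter of the 16 pixels
```

Each partition would then be sent in its own quarter-frame time slot, as shown on the "Transmission" line of Figure 4.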
The total quantity of data, and hence the required transmission bandwidth, remains the same as for conventional raw video transmission.
In this example, the destination projector receiving the partitioned data performs geometric distortion correction and edge blending calculation as represented on the "Processing" line: processing performed on each partition is started as soon as the partition has been fully received, in parallel with the ongoing reception of subsequent data partitions.
For example, the processing comprises, for each projector pixel and colour channel, multiplying the value associated with a parameter of a neighbouring source pixel (obtained from the received data partition) by the corresponding pre-calculated weight coefficient, and adding the so-obtained values based on each of the four neighbouring pixels. Adding the values based on each neighbouring pixel is advantageously done using an intermediate value storage in which the values are accumulated as soon as they have been calculated.
Advantageously, this is done using one FMA instruction per projector pixel and colour channel and per partition. There are four partitions, and each of the four neighbouring pixels is contained in a different partition. Consequently, each update per received partition involves only one of the neighbouring pixels.
Thus, the execution of the four FMA instructions necessary per projector pixel and colour channel for each video source frame is spread in time and can be performed during a parallel reception of data. Contrary to conventional processing in which a whole video frame has to be received and stored prior to beginning the processing, the delay between the end of reception of the last data partition of a given video frame and the moment the processed video frame becomes available for display (denoted "display delay" in Figure 4) is shortened from at least one frame duration (e.g. 1/60 s) to about one fourth of a frame duration (e.g. 1/240 s), without needing faster (and consequently more expensive and power-consuming) floating-point calculation hardware.
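The partition-by-partition accumulation can be sketched as follows. This is illustrative Python; a real implementation would use hardware FMA instructions, emulated here by a plain multiply-add:

```python
# Illustrative sketch: one multiply-add per projector pixel, colour channel
# and received partition, accumulated into an intermediate storage.
def fma_step(accum, weight, src_value):
    """Emulate one fused multiply-add: accum + weight * src_value."""
    return accum + weight * src_value

# Projector pixel (3.76; 2.36) of Figure 3: its four neighbouring source
# pixels arrive in four different partitions; assume each has value 100.
acc = 0.0
for w, v in [(0.1536, 100), (0.4864, 100), (0.0864, 100), (0.2736, 100)]:
    acc = fma_step(acc, w, v)   # executed as each partition is received
# a uniform neighbourhood of value 100 accumulates back to 100
```

Because each loop iteration can run while the next partition is still being received, the final value is ready about one quarter-frame after the last partition arrives.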
Furthermore, the size of the buffering memory is reduced, since the buffer storing a given video data partition may be emptied and reused as soon as the processing performed on the partition is finished.
Figure 5A shows an embodiment wherein different importance levels are assigned to the pixels in the source video according to their importance in determining projector's pixel values through bi-linear interpolation. The so defined importance of the pixels may be used to optimize the transmission order of the data, and to optimize processing related to retransmission of data and error concealment.
In particular, Figure 5A presents a part of the projection screen, on which a 4 x 3 pixel part of the source image is displayed. It also presents the position of four of the projector pixels with their respective coordinates (x; y). The coordinates of the center of each pixel of the projector are noted using the coordinate system of the source image, rounded to two decimal places, as in Figure 3.
For each projector pixel, an arrow is drawn from each one of its four neighbouring source pixels, labelled with the respective weight coefficient for bi-linear interpolation, as explained in reference to Figure 3.
These weight coefficients lie between 0 and 1, their sum for a given projector pixel equals 1, and their values depend on the respective pixel distances from the center of the considered projector pixel (the closer the projector pixel is to a source pixel, the greater the respective coefficient).
Based on its associated weight coefficients, it may be determined whether a video pixel is important or not for the determination of the properties of a given projector pixel. For example, during setup of the system (comprising the calibration for correction of geometric distortion and determination of the weight coefficients), a threshold value for the weight coefficients is fixed in such a manner that a low-importance flag is assigned to source pixels that contribute to the determination of no projector pixel with a weight above said threshold. A high-importance flag is associated with the other pixels (which contribute to the determination of at least one projector pixel with a weight above the threshold).
In the example of embodiment represented in Figure 5A, the used threshold is 0.3. The weight coefficients above this value are presented in bold.
Consequently, five of the represented source pixels are low-importance pixels (denoted "lo") and six of the source pixels are high-importance pixels (denoted "hi").
Some source pixels may not contribute to the determination of any projector pixel (for example if the density of the projector's pixels is below the density of the source pixels, i.e. if downscaling is required). Such a pixel (denoted "un" for "unused") is shown in Figure 5A. Unused pixels do not need to be transmitted from the video source device to the video display device. The classification of the source pixels into unused, low-importance and high-importance pixels may be communicated during system setup and calibration to both the video source device and the video display device.
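The classification step can be sketched as follows, using hypothetical contribution data and the 0.3 threshold of Figure 5A (the function name and the input representation are assumptions):

```python
# Illustrative sketch (hypothetical data): flagging source pixels as
# high-importance ("hi"), low-importance ("lo") or unused ("un") from the
# bilinear weights with which they contribute to projector pixels.
def classify_source_pixels(contributions, threshold=0.3):
    """contributions maps each source pixel to the list of weights with
    which it contributes to projector pixels (empty list = no use)."""
    flags = {}
    for pixel, weights in contributions.items():
        if not weights:
            flags[pixel] = "un"      # contributes to no projector pixel
        elif max(weights) > threshold:
            flags[pixel] = "hi"      # at least one weight above the threshold
        else:
            flags[pixel] = "lo"      # contributes, but only with small weights
    return flags

flags = classify_source_pixels({
    (4, 2): [0.4864, 0.12],   # one weight above 0.3 -> "hi"
    (3, 2): [0.1536, 0.28],   # all weights below 0.3 -> "lo"
    (0, 0): [],               # no contribution       -> "un"
})
```

The resulting flags would be fixed once at calibration time and shared by source and display devices.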
In other embodiments, more than one threshold and two importance levels may be used.
Figure 5B shows on a time-line video frames transmission with acknowledgements, processing, and display, which may be implemented taking into account the importance of the pixels as defined in reference to Figure 5A.
Four lines are drawn under the time line, respectively illustrating the sequences of transmission, reception acknowledgement, processing, and display of the video.
The video has a rate of 60 frames per second: the transmission of each video frame has to be done within a time slot of 1/60 s. The pixels of each video frame are split at the video source (source node or first node of the considered communication network) into high-importance and low-importance pixels (e.g. based on a method as described in reference to Figure 5A). In this example embodiment of the invention, the video data are reordered for transmission in the following way: the data defining high-importance pixels and the data defining low-importance pixels are put in separate data packets. The high-importance pixels and low-importance pixels are transmitted in said separate data packets, referenced "hi" and "lo" respectively in Figure 5B, in allocated time slots, as shown in the line representing the transmission sequence and labelled "Data transmission Source → Dest.". The data packet comprising high-importance pixels is transmitted first.
In figure 5B, the vertical arrows symbolise the dependencies between actions represented on different lines.
Data transmission over an unreliable communication channel may be subject to occasional errors and packet losses.
The following three cases have to be distinguished: 1st Case: No error occurs, as is illustrated for "Frame n" and "Frame n+3" in Figure 5B.
In this case, the display device (destination or second node of the implemented communication network), after having received the packet enclosing data relating to the high-importance pixels, returns a positive acknowledgement to the video source device. Acknowledgement is denoted "ack" in the line labelled "ACK transmission Dest. → Source".
Once the positive acknowledgment message ack has been received by the source device, the source device transmits the packet comprising the data relating to the low-importance pixels to the display device.
In the represented example embodiment, no acknowledgement method is implemented for the packet comprising the data relating to the low-importance pixels.
Data relating to the high and low importance pixels are processed by the destination display device as shown in the line labelled "Processing by Destination". Processing of the data relating to the high-importance pixels may be performed in parallel with sending the acknowledgement message and receiving the packet comprising the data relating to the low-importance pixels of the same video frame. Respectively, processing of the data relating to the low-importance pixels may be performed in parallel with receiving the packet comprising the data relating to the high-importance pixels of the next video frame.
The performed processing may consist in determining the properties of the pixels which have to be displayed, according to a method similar to the method described in reference to Figure 4. Finally, the video frame becomes ready for display, and is displayed by the destination device (second node of the implemented communication network) as illustrated on the line labelled "Display by Destination".
2nd case: The packet enclosing the data relating to the high importance pixels is lost, as shown for "Frame n+1" in Figure 5B. In this case the display device returns a negative acknowledgement (denoted "nack" in the figure) to the video source device.
A common manner for the destination device to return a negative acknowledgement nack is to stay silent (i.e. to send nothing, or at least not to send a positive acknowledgement) during the time slot allotted to positive acknowledgement transmission.
After detection of a negative acknowledgement, the source device retransmits the same packet comprising the data relating to the high importance pixels.
After reception, the display device, having received only the packet comprising the data relating to the high-importance pixels for the considered video frame, performs processing comprising error concealment. For example, for each projector pixel, each weight coefficient associated with a high-importance source pixel is adjusted by dividing it by the sum of all the available coefficients for determining the considered projector pixel.
For example, regarding the projector pixel having the coordinates (3.76; 2.36) in Figure 5A, the only weight coefficient associated with a neighbouring pixel above the threshold of 0.3 is 0.4864. This weight coefficient is associated with the high-importance source pixel having the coordinates (4; 2). The coefficient is adjusted to 0.4864 / 0.4864 = 1.
Regarding the projector pixel having the coordinates (1.86; 3.59) in Figure 5A, the weight coefficients 0.5074 and 0.3526 associated with two neighbouring pixels are above the 0.3 threshold. The associated high-importance source pixels have the coordinates (2; 4) and (2; 3). The associated coefficients are thus adjusted to 0.5074 / (0.5074 + 0.3526) = 0.59 and 0.3526 / (0.5074 + 0.3526) = 0.41 respectively.
In this second case, properties (typically the colour values) of the projector pixels are determined based on the high-importance source pixels only.
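The renormalisation used for concealment can be sketched as follows (illustrative Python; the two small weights 0.1 and 0.04 in the second example are hypothetical low-importance weights chosen so that the four weights sum to 1):

```python
# Illustrative sketch: error concealment by renormalising the surviving
# high-importance weights so they again sum to 1 (values from Figure 5A).
def conceal_weights(weights, threshold=0.3):
    """Keep only weights above `threshold` and rescale them to sum to 1."""
    kept = [w for w in weights if w > threshold]
    total = sum(kept)
    return [w / total for w in kept]

# Pixel (3.76; 2.36): single surviving weight 0.4864 -> adjusted to 1.
single = conceal_weights([0.1536, 0.4864, 0.0864, 0.2736])
# Pixel (1.86; 3.59): 0.5074 and 0.3526 survive; the 0.1 and 0.04 below are
# hypothetical low-importance weights. Result: 0.59 and 0.41.
pair = [round(w, 2) for w in conceal_weights([0.5074, 0.3526, 0.1, 0.04])]
```

The projector pixel value is then computed from the high-importance neighbours only, with the adjusted weights.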
The impact on quality of the displayed image remains relatively low.
3rd case: the packet comprising the data relating to the low-importance pixels is lost, as shown for "Frame n+2" in Figure 5B. In this case, the display device discards the processing possibly started on the data relating to the high-importance pixels, and implements processing with error concealment, as described above for the 2nd case.
Figure 6 illustrates on a time line transmission and processing of video data with chroma subsampling, in which the transmission order of different partitions of data is adapted, according to an embodiment of the invention, to the duration of required processing at the display target device.
Four lines are drawn under the time line, respectively illustrating the sequences of transmission, chroma processing, luma processing, and display of the video.
In this example, 4:2:0 chroma subsampling is used. This kind of subsampling results in the luma (Y) information having the same spatial resolution as the original picture, while both chroma components (Cb and Cr) have their horizontal and vertical resolutions divided by two.
Consequently, the whole video data belonging to a given frame is reordered according to an embodiment of the invention to be split into six equal partitions, as shown in the line labelled "Transmission". In the represented example, each partition is transmitted in a dedicated time slot, in the following order:
* the partition comprising chroma component Cb,
* the partition comprising luma information relating to all pixels having both odd x and y coordinates, Y(oo),
* the partition comprising luma information relating to pixels with both even x and y coordinates, Y(ee),
* the partition comprising chroma component Cr,
* the partition comprising luma information relating to pixels with even x and odd y coordinates, Y(eo), and
* the partition comprising luma information relating to pixels with odd x and even y coordinates, Y(oe).
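A quick sanity check on why the six partitions are equal (illustrative Python, names assumed): with 4:2:0 subsampling each chroma plane carries one quarter of the luma samples, the same count as each luma parity partition.

```python
# Illustrative sketch: sizes of the six equal partitions of a 4:2:0 frame.
def partition_sizes(width, height):
    """Each luma parity partition and each chroma plane holds W*H/4 samples."""
    quarter = (width * height) // 4
    order = ["Cb", "Y(oo)", "Y(ee)", "Cr", "Y(eo)", "Y(oe)"]
    return {name: quarter for name in order}

sizes = partition_sizes(1920, 1080)
# six partitions of 518400 samples each; together 1.5 * W * H samples,
# the usual 4:2:0 data volume
```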
In such a method, partitioning the luma (Y) data is thus similar to partitioning the pixel data as described in reference to Figures 3 and 4.
Geometric distortion correction by bi-linear interpolation using the formula given with reference to Figure 3 may have to be performed for display, requiring four FMA operations per projector pixel for each of the colour components Cb, Cr and Y. The processing performed on luma and chroma components is advantageously performed in parallel. An example of the timing used for processing is illustrated on the lines labelled "Chroma Processing" and "Luma Processing" in Figure 6.
The Luma processing is spread in time in a manner analogous to what was described in reference to Figure 4, with one FMA operation per projector pixel per luma partition.
Using such a method for transmission, processing and display, the delay between the end of data reception of a given video frame by the display device and the availability for displaying of the processed video data is reduced.
Figure 7 is a functional diagram of a video display device as used in an embodiment of the invention.
A processor 701 manages most of the configuration tasks, and executes data processing algorithms like those described in relation with the previous figures.
These algorithms generate configuration values that can be set in corresponding functional blocks thanks to the processor interconnection bus 703.
All the functional blocks that have to be configured are linked to the bus 703.
The illustrated video display device comprises a random access memory (RAM) 700 and a processor 701. The random access memory 700 makes it possible to store program instructions and data handled by the processor 701.
During initial system setup and calibration, the processor 701 determines calibration data, for example the weight coefficients for geometric distortion correction using bi-linear interpolation as described in reference to Figure 3 or Figure 5A, and provides said calibration data to a video processing unit 711.
Video data are received from the network 120 through a network controller 708. A video buffer controller 707 manages access to a video buffer 706, implemented using a Random Access Memory, and forming a temporary storage for the received video partitions.
The network controller 708 provides synchronization signals to a synchronization controller 709. The synchronization controller 709 generates video synchronization signals used to manage video rendering. The video buffer controller 707 also reads video data from the video buffer 706, and provides it to the video processing unit 711 performing processing for example such as geometric distortion correction or edge-blending determination as described in reference to Figures 3 to 6.
The intermediate and final results of this processing (according e.g. to Figure 4, 5B or 6) are stored by the controller 707 in the buffer 706. The final processed video data are transmitted to a local display controller 710, performing a display update for each completely processed video frame, following a signal of the synchronization controller 709.
Figure 8 is a functional diagram of a video source device as used in an embodiment of the invention. The functions of the processor 801 (denoted CPU), the Random Access Memory (RAM) 800 and the bus 803 are analogous to the functions of the corresponding elements in figure 7.
The video source 811 may for example be a camera module, a mass storage module, a TV or cable tuner or similar device, depending on the nature of the source device 100 represented in Figure 1.
The video source interface module (denoted Video source IF) 804 receives the video data and synchronization information from the video source.
The video source interface module 804 outputs the video data which are transmitted to a video buffer controller 807. The video source interface module 804 also outputs synchronization signals which are transmitted to a synchronization controller 809. The video buffer controller 807 manages access to the video buffer 806, implemented using a Random Access Memory.
The video buffer controller 807 writes data from the video source interface module 804 into the video buffer 806. The video buffer controller 807 also reads video data from the video buffer 806 (partitioning said data for example according to one of the embodiments described in Figures 3 to 6).
The details of the partitioning parameters may be determined through a communication between the source device 100 and the display device 110 over network 120 during calibration stage and setup of the video system. Video buffer controller 807 finally provides the partitioned data to a network controller 808 for transmission over said network.
The present invention thus provides a method which makes it possible to optimize the transmission of video data (in terms of speed, reliability, needed resources, etc.). In particular, re-ordering the data defining the pixels before transmission, depending on the data processing performed at the destination side, may allow the destination display device to start processing the data after reception of the first data fraction, e.g. data partition, in parallel with reception of subsequent data fractions. This results in a reduction of buffering requirements and end-to-end latency. In various embodiments, the provided method may allow improving transmission error concealment, and/or improving parallelization of the data processing steps performed at the destination end.
Although the present invention has been described hereinabove with reference to specific embodiments, the present invention is not limited to the specific embodiments, and modifications will be apparent to a person skilled in the art which lie within the scope of the present invention.
Many further modifications and variations will suggest themselves to those versed in the art upon making reference to the foregoing illustrative embodiments, which are given by way of example only and which are not intended to limit the scope of the invention, that being determined solely by the appended claims. In particular the different features from different embodiments may be interchanged, where appropriate.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.
Claims (16)
- CLAIMS
- 1. A method for transmitting video data defining images, from a first node to a second node of a communication network, and next displaying said images, said images comprising pixels having properties defined by pixel data, the method comprising processing the images at the second node before display, wherein the pixel data are reordered for transmission, depending on the processing of the images at the second node.
- 2. The method of claim 1, wherein the processing of the images comprises a plurality of processing steps applied to the pixels of the image, the order of transmission of the pixel data depending on the complexity of the processing steps applied to the corresponding pixel.
- 3. The method of claim 1, wherein the processing of the image comprises a plurality of ordered processing steps applied to the pixels of the image, the order of transmission of the pixel data depending on the order of the plurality of ordered processing steps applied to the corresponding pixel.
- 4. The method of claim 1, comprising identifying data dependencies between the pixels of the video data and the corresponding pixels to be displayed to characterize processing of the images at the second node.
- 5. The method of claim 1, wherein processing the images at the second node includes transmission error concealment.
- 6. The method of claim 5, comprising partitioning the video data for transmission into partitions, wherein the relative importance of each pixel or predefined group of pixels of an image of the video data is determined in view of possible concealment of transmission errors or data loss as processing at the second node, and wherein the content of the partitions and their transmission order depend on the importance of the pixels.
- 7. The method of claim 1, comprising partitioning the video data for transmission into partitions, wherein a property of a pixel to be displayed is interpolated from the corresponding properties of neighbouring pixels of an image of the video data, wherein the pixel data of each neighbouring pixel are put in a different partition of the said image of the video data for transmission.
- 8. The method of claim 7, wherein bilinear interpolation is used to determine the property of the pixel to be displayed based on the corresponding properties of four neighbouring pixels.
- 9. The method of claim 8, the pixels of the image of the video data being arranged in a matrix having rows and columns, each pixel having a first coordinate corresponding to a column number, and a second coordinate corresponding to a row number, thus defining the position of the pixel in said matrix, wherein the image is split into four partitions, said partitions being respectively composed of: a) the pixels having an even first coordinate and an even second coordinate; b) the pixels having an even first coordinate and an odd second coordinate; c) the pixels having an odd first coordinate and an even second coordinate; d) the pixels having an odd first coordinate and an odd second coordinate.
- 10. The method of claim 7, wherein bicubic interpolation is used to determine the property of the pixel to be displayed based on the corresponding properties of sixteen neighbouring pixels.
- 11. The method of any of claims 1 to 10, the second node comprising a projector of a video projection system, wherein processing the images at the second node includes correction of a geometric distortion of the image.
- 12. The method of any of claims 1 to 11, the second node comprising a projector of a video projection system, wherein processing the images at the second node includes photometric and/or colorimetric adjustment.
- 13. The method of any of claims 1 to 12, the second node comprising a projector of a multi-projector video projection system, wherein processing the images at the second node includes brightness adaptation for edge blending of images from different projectors of the multi-projector video projection system.
- 14. A video system comprising a video source at a first node of a communication network, a video display device at a second node of said communication network, and means configured to transmit video data defining images from said first node to said second node, said images comprising pixels having properties defined by pixel data, the video system further comprising means configured to process the images at the second node before display, wherein the video system comprises means configured to reorder the pixel data for transmission, depending on the processing of the images at the second node.
- 15. A method for transmitting video data defining images, and next displaying said images, as hereinbefore described with reference to, and as shown in Figure 4, 5B or 6 of the accompanying drawings.
- 16. A video system comprising a video source as hereinbefore described with reference to, and as shown in Figure 8 of the accompanying drawings at a first node of a communication network, and a video display device as hereinbefore described with reference to, and as shown in Figure 7 of the accompanying drawings at a second node of said communication network.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1407449.6A GB2526062B (en) | 2014-04-28 | 2014-04-28 | Method for transmitting video data defining images, and next displaying said images, comprising reordering the video data for transmission |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB201407449D0 GB201407449D0 (en) | 2014-06-11 |
| GB2526062A true GB2526062A (en) | 2015-11-18 |
| GB2526062B GB2526062B (en) | 2016-09-07 |
Family
ID=50971988
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1407449.6A Expired - Fee Related GB2526062B (en) | 2014-04-28 | 2014-04-28 | Method for transmitting video data defining images, and next displaying said images, comprising reordering the video data for transmission |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2526062B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006075302A1 (en) * | 2005-01-17 | 2006-07-20 | Koninklijke Philips Electronics N.V. | System, transmitter, receiver, method and software for transmitting and receiving ordered sets of video frames |
| US20100169390A1 (en) * | 2008-12-30 | 2010-07-01 | Samsung Electronics Co., Ltd. | File transfer method and terminal adapted thereto |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108280801A (en) * | 2018-01-10 | 2018-07-13 | 武汉精测电子集团股份有限公司 | Method, apparatus and programmable logic device are remapped based on bilinear interpolation |
| CN108280801B (en) * | 2018-01-10 | 2021-08-17 | 武汉精测电子集团股份有限公司 | Remapping method and device based on bilinear interpolation and programmable logic device |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2526062B (en) | 2016-09-07 |
| GB201407449D0 (en) | 2014-06-11 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 2018-04-28 | PCNP | Patent ceased through non-payment of renewal fee | Effective date: 20180428 |