GB2492749A - Synchronising Wireless Video Data Nodes - Google Patents
- Publication number
- GB2492749A GB2492749A GB1111349.5A GB201111349A GB2492749A GB 2492749 A GB2492749 A GB 2492749A GB 201111349 A GB201111349 A GB 201111349A GB 2492749 A GB2492749 A GB 2492749A
- Authority
- GB
- United Kingdom
- Prior art keywords
- video
- node
- text
- clock
- source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4305—Synchronising client clock from received content stream, e.g. locking decoder clock with encoder clock, extraction of the PCR packets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L7/00—Arrangements for synchronising receiver with transmitter
- H04L7/02—Speed or phase control by the received code signals, the signals containing no special synchronisation information
- H04L7/033—Speed or phase control by the received code signals, the signals containing no special synchronisation information using the transitions of the received signal to control the phase of the synchronising-signal-generating means, e.g. using a phase-locked loop
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43632—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wired protocol, e.g. IEEE 1394
- H04N21/43635—HDMI
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
- H04N21/4363—Adapting the video stream to a specific local network, e.g. a Bluetooth® network
- H04N21/43637—Adapting the video stream to a specific local network, e.g. a Bluetooth® network involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Synchronisation In Digital Transmission Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
In a wireless video display system requiring synchronisation between the respective clocks of an RF video send node (102) and a receive node (103), a Phase-Locked Loop (PLL) in a source part 206 of the sender node generates HDMI video data in synchronism with a source clock, including timestamps (306) from a Vsync sampler 217. A similar source part in the receiver node re-uses its own PLL, driven by the timestamp information in the received video data, to synchronise a local clock via an RX synchro manager 213, avoiding the need for an additional PLL.
Description
Receiving video data
Field of the invention
This invention relates to receiving video data. More particularly the invention is applicable to the real time display synchronization of a transmitted video, particularly using wireless transmission.
Description of the prior art
Wireless video transmission systems are generally used to replace traditional video cabling between video sources and video display devices. One known issue arising with the transition from wired to wireless is real-time video display synchronization after transmission.
This problem arises because, on one side, the video is generated synchronously with a first video clock (the clock of the video source device), while on the other side the video is displayed synchronously with a second video clock (the receiver node's clock).
Generally, these two video clocks are not running exactly at the same speed.
Without any synchronization solution, the video receiver node could display the video too quickly or too slowly. In addition to the annoying visual problem of viewing a video with an incorrect frame rate, this could also induce memory underflow or overflow at the receiver side.
Video receiver clock synchronization techniques exist to overcome this problem.
A first technique aiming at synchronizing the video display rate with the source device rate is based on video frame dropping and duplication. This technique is based on monitoring the receiver node's network input buffer level. If the network input buffer level is increasing over a period of time, the receiver node's video clock is slower than the video source clock. If the network input buffer level is decreasing over a period of time, the receiver node's video clock is faster than the video source clock. In order to prevent an abrupt buffer overflow or buffer under-run at the receiver node, the receiver duplicates a previously received video frame if the buffer level is getting lower, or suppresses a video frame from the network input buffer if the buffer level is getting higher. This technique requires a large amount of buffering (one to two video frames), which increases the cost of the devices and the system latency. In addition, it may induce a visible artifact for some sequences. The document US5621775 describes such a synchronization mechanism as applied more generically to network synchronization.
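The buffer-level technique described above can be sketched as a simple decision function. This is an illustrative sketch only: the function name, the target level and the margin are assumptions for the example, not values taken from the patent.

```python
def sync_action(buffer_level, target, margin):
    """Return 'drop', 'duplicate' or 'none' based on buffer occupancy.

    Hypothetical sketch of the frame drop/duplication technique: the
    receiver watches its network input buffer and steers it back towards
    a target level.
    """
    if buffer_level > target + margin:
        # Buffer filling up: the local clock is slower than the source
        # clock, so a frame is suppressed from the buffer.
        return "drop"
    if buffer_level < target - margin:
        # Buffer draining: the local clock is faster than the source
        # clock, so a previously received frame is duplicated.
        return "duplicate"
    return "none"
```

In a real device this decision would be taken per frame period, at the cost of the one-to-two-frame buffer noted above.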
A second technique for video clock synchronization is based on video clock transport. The sender node sends the source device's video clock to the receiver node. The receiver node displays the video stream with the source device's video clock. In some wired systems the video clock is used as the data serialization clock (HDMI, SDI). Other wired systems use a dedicated wire for the video clock. One can imagine using the video clock as input for a wireless modem so that it can be recovered at the receiver node.
This wired solution overcomes the buffering cost of the first solution. However it is hardly applicable to the wireless transmission case.
The third and most used technique consists of giving the receiver node the means to reconstruct the video source's clock. The sender node samples the video source's clock at regular intervals. The sender node sends information representative of the sampled video clock to the receiver node. This information is called time stamps. The receiver node changes its own video clock rate based on the received time stamp information. In order to change its own clock rate the receiver node uses a PLL (Phase-Locked Loop) apparatus.
This technique is used in IEC 61883 (IEC-PAS 61883-6, Edition 1.0, 1998-05, Consumer audio/video equipment - Digital interface - Part 6: Audio and music data transmission protocol) based video streaming systems as well as RTP based streaming systems. It overcomes the buffering cost of the first solution, and unlike the second solution this technique is well adapted to wireless systems. However, this technique adds an extra cost for the PLL apparatus.
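The error signal at the heart of the time-stamp technique can be illustrated with a short sketch. The function below is an assumption-laden simplification: it compares the interval between two received time stamps (expressed in the shared network time base) with the receiver's own frame period; a real system would feed such an error into a PLL rather than use it directly.

```python
def clock_correction(src_ts_prev, src_ts_curr, local_period):
    """Return a signed clock error from two sender time stamps.

    A positive result means the local frame period is longer than the
    source's, i.e. the local clock is running slow and should be sped up;
    a negative result means it should be slowed down.
    """
    # Source frame period, measured in network-time units between the
    # Vsync time stamps of two consecutive images.
    src_period = src_ts_curr - src_ts_prev
    return local_period - src_period
```

All units here are arbitrary network-time ticks; the point is only the sign and magnitude of the error.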
Figure 1 represents a wireless video transmission system. This system is based on video sender nodes and video receiver nodes. As represented in figure 1, a video sender node 102 is connected to a video source device 101 using standard video cables 105. The video sender node 102 sends the video stream through a wireless communication path to a video receiver node 103. The video receiver node 103 is connected to a video display device 104 through standard cables 105. The video receiver node 103 causes the wirelessly received video stream to be displayed on the attached display device 104.
Both receiver node 103 and sender node 102 are called network nodes in this system. Network nodes have two parts: a video controller part and a network part.
The video controller part is in charge of driving the video interface to the video source device in case of the sender node, and of driving the video interface to the display device in case of the receiver node. These two video functions are different in nature and are implemented in separate electronic components.
The network controller part is in charge of sending and receiving wireless data packets. This part is common to the sender and the receiver node.
As a consequence we can say that the function of a network node is determined by the video controller specifications. If the video controller includes a video source interface driver then the network node is a sender node. If the video controller includes a video display interface driver then it is a receiver node.
For reasons of development and manufacturing cost, a manufacturer may be motivated to manufacture "generic" network nodes with a video controller embedding both video interface capabilities. This particular video controller is called a generic video controller. In that case, the end user benefits from flexibility, each network node supporting either the sender or the receiver functionality.
It is particularly desirable to find a video synchronization technique suitable for use in generic network nodes.
It is also particularly desirable to take advantage of the use of generic nodes, embedding a generic video controller, to perform the synchronization process.
Summary of the invention
To that end, a first aspect of the invention relates to a method of receiving video data in a video node that is capable of acting selectively as a sender node or as a receiver node and that has a source part, including a PLL, which is used to generate video data for transmission from said video node to another node when said video node is acting as a sender node, the video data being generated by a source device in synchronism with a source clock and being transmitted to the video node by a sender node other than said video node, the method comprising, when the video node is acting as a receiving node, employing said PLL in said source part to synchronize with said source clock a local clock of the receiving node, in synchronism with which local clock the received video data is processed.
Thanks to this, the introduction of an additional PLL is not necessary, which allows the manufacturing cost of the node to be decreased.
According to a preferred embodiment, said local clock of the receiver node is synchronized with respect to a synchronization signal extracted from the received video data.
According to another embodiment, the synchronization with respect to a synchronization signal extracted from the received video data comprises generating an analog synchronization signal as a function of said extracted video synchronization information.
According to another embodiment, said analog synchronization signal is digitally converted by the source part, where the PLL comprised in said source part of the receiving node is used to generate said local clock synchronized with said source clock based on the digitally converted analog synchronization signal. This feature avoids introducing an additional PLL means on the receiver node.
According to another embodiment, said analog synchronization signal generation step is performed as a function of a preliminary step of comparing said source clock and said local clock. This feature makes it possible to avoid adjusting the local synchronization information when it is not necessary.
In addition, according to an embodiment, if the result of the comparison indicates that said source clock is slower than the local clock, the synchronization of said local clock is performed by slowing down said analog synchronization signal rate.
This feature makes it possible to avoid a memory underflow on the receiver side.
According to another embodiment, if the result of the comparison indicates that said source clock is faster than the local clock, the synchronization of said local clock is performed by speeding up said analog synchronization signal rate. This feature makes it possible to avoid a memory overflow on the receiver side.
According to another embodiment, said analog synchronization signal is a dummy analog image signal. This feature avoids converting digital video data to analog video data. Only synchronization information is converted.
A second aspect of the invention relates to a video node that is capable of acting selectively as a sender node or as a receiver node and that has a source part, including a PLL, which is used to generate video data for transmission from said video node to another node when said video node is acting as a sender node, the video data being generated by a source device in synchronism with a source clock and being transmitted to the video node by a sender node other than said video node, the claimed video node comprising synchronization means operable, when the video node is acting as a receiving node, to employ said PLL in said source part to synchronize with said source clock a local clock of the receiving node, in synchronism with which local clock the received video data is processed.
Preferably, the video node according to the second aspect of the invention comprises means for implementing all the steps of the method embodying the aforesaid first aspect of the invention as briefly described above.
The invention also relates to an information storage means that can be read by a computer or a microprocessor and that stores instructions of a computer program for the implementation of the method embodying the aforesaid first aspect of the invention.
The invention also relates to a computer program which, when executed by a computer or a processor in a video node, causes the video node to carry out a method embodying the aforesaid first aspect of the present invention.
A computer program embodying the present invention may be provided by itself or may be carried on or by a carrier medium. The carrier medium may be a recording medium, such as a computer-readable storage medium. The carrier medium may be a transmission medium, such as a signal. For example, a program embodying the invention may be supplied via a network such as the Internet.
The particular characteristics and advantages of the video node, of the storage means and of the computer program being similar to those of the method embodying the first aspect of the invention, they are not repeated here.
Brief description of the drawings
Other features and advantages will appear in the following description, which is given solely by way of non-limiting example and made with reference to the accompanying drawings, in which: -Figure 1, described hereinbefore, shows an example of a wireless video system.
-Figure 2 shows a block diagram of a generic node suitable for use as a sender node or as a receiver node.
-Figure 3 shows, in more detail than figure 2, interconnections in a part of the generic node of figure 2.
-Figure 4 shows a flow chart of an algorithm executed by a video packet transmitter module.
-Figure 5 shows a detailed block diagram of a display part of the generic node of figure 2.
-Figure 6 shows a flowchart of an algorithm executed by a Drift computer module shown in figure 5.
-Figure 7 shows a flowchart of an algorithm executed by a Rx from network module, shown in figure 5.
-Figure 8 shows a video packet format.
-Figure 9 shows a flowchart explaining a principle of the invention.
Detailed description of the embodiment
Figure 1, described hereinbefore, shows an example of a wireless video system.
The video source device 101 can be, for example, a Blu-ray player, a set-top box or a multimedia hard drive. The video source 101 is connected to the sender node 102 through a standard digital video connection 105 (for example HDMI). The sender node 102 communicates with a receiver node 103 using wireless transmission (for example 60 GHz IEEE 802.15.3 (IEEE Standard for Information technology, Telecommunications and information exchange between systems, Local and metropolitan area networks, Specific requirements: Part 15.3: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for High Rate Wireless Personal Area Networks (WPANs): Amendment 2: Millimeter-wave-based alternative physical layer extension)). The receiver node 103 is connected to a display device 104 through another standard digital video connection 105 (for example HDMI).
Figure 2 depicts a generic node suitable for use as a sender node 102 or as a receiver node 103.
The generic node comprises a network controller part 207 and a video controller part 204. The network controller part 207 is in charge of implementing the wireless communication of the video data and control data. The video controller part 204 comprises a display part 205 and a source part 206. The display part 205 is in charge of causing the video received from the network controller part 207 to be displayed on a video display device. The source part 206 is in charge of receiving a video stream from a video source device and sending it to the receiver node through the network controller part 207. The two parts 204 and 207 are controlled by a CPU subsystem composed of a CPU 201, a RAM memory 202 and a ROM memory 203.
The network controller part is a standard 60 GHz communication subsystem implementing either the IEEE 802.15.3 or WirelessHD (http://www.wirelesshd.org/) standards. It is composed of a MAC module 208, a PHY module 209, a TX antenna module 210 and an RX antenna module 211. The detailed description of these modules is not provided in this specification as it can be obtained from either the IEEE 802.15.3 amendment c or from the WirelessHD consortium. The network controller part also applies a network time protocol that ensures that all nodes in the network have access to a single time reference. For example the network controller can implement the IEEE 1588 time protocol to achieve this.
The source part 206 of the video controller part 204 comprises an HDMI receiver module 215 capable of receiving an HDMI flow from a video source through an HDMI connector and comprising a PLL used to control a clock rate. The HDMI receiver module 215 outputs pixel data to a pixel data bus and also outputs three video synchronization signals, namely a pixel clock, a horizontal synchronization signal and a vertical synchronization signal. The HDMI receiver module 215 can be the AD9880 from Analog Devices (http://www.analog.com/en/audiovideo-products/analoghdmidvi-interfaces/ad9880/products/product.html). The source part 206 comprises a video packet transmitter module 216 which receives video data from the HDMI receiver 215 and also receives time stamp information from the Vsync sampler module 217 included in the source part 206. The video packet transmitter module 216 builds video packets and sends these video packets to the network controller part 207. The function of the video packet transmitter module 216 will be further detailed in figure 4. The Vsync sampler module 217 is responsible for sampling the occurrences of the vertical synchronization signal coming from the HDMI receiver module 215. The sampling is done by generating a time stamp relative to the network time as received from the network controller part 207. The resulting time stamp is sent to the video packet transmitter module 216 for inclusion in the packet header.
The display part 205 of the video controller part 204 comprises a video packet receiver module 214. The video packet receiver module receives video packets from the network controller part 207, extracts pixel data for internal storage, extracts time stamp information and sends the pixel data and time stamp information to an RX synchronization manager module 213. The RX synchronization manager module 213 is responsible for generating video synchronization signals for an HDMI transmitter module 212 and for reading the pixel data stored in the video packet receiver module 214 and sending them to the HDMI transmitter module 212. For the purpose of generating the video synchronization signals, the RX synchronization manager 213 advantageously uses resources of the HDMI receiver module 215, and more precisely of the PLL contained in this module, as later described in detail with reference to figures 7 and 6. The HDMI transmitter module 212 implements an HDMI TX controller for display devices. It takes as inputs pixel data supplied via a pixel bus and three video synchronization signals, namely a pixel clock, a horizontal synchronization signal and a vertical synchronization signal. On the output side the HDMI transmitter module 212 drives the HDMI connector. The HDMI transmitter module can be the AD9889B circuit from Analog Devices (http://www.analog.com/en/audiovideo-products/analoghdmidvi-interfaces/ad9889b/products/product.html).
The HDMI receiver and transmitter modules 215 and 212 are controlled and initialized by the CPU 201 by means of an I2C bus, not represented in this figure in the interests of clarity. The CPU implements an HDMI software driver as can be obtained from the HDMI chip manufacturer.
Typical sender nodes have a video controller part 204 that includes only the source part 206 and no display part 205.
Typical receiver nodes have a video controller part 204 that includes only the display part 205 and no source part 206. Without the source part included in the generic node, a typical receiver node cannot implement the synchronization method as described below with reference to figure 9.
Figure 3 gives details of interconnections in the source part 206 of the video controller part 204.
The HDMI receiver module 215 receives HDMI video data 308 from an HDMI connector. It outputs the received video as pixel data on a pixel bus 300 and also outputs the aforementioned video synchronization signals (the pixel clock 301, the horizontal synchronization signal 302 and the vertical synchronization signal 304).
The HDMI receiver receives also an analog input video signal 310 from the display part 205. This signal 310 is a dummy analog video signal. The video synchronization signals 301, 302, 304 generated by the HDMI receiver are also sent to the display part 205 (interface 309).
The Vsync sampler module 217 receives the Vsync signal 304 from the HDMI receiver module 215, and it receives a network time reference 305 from the network controller part 207. It sends time stamps 306 to the video packet transmitter module 216.
The video packet transmitter module 216 receives the pixel bus 300 and the video synchronization signals 301, 302, 304 from the HDMI receiver 215. It receives time stamp information 306 from the Vsync sampler 217 and it sends video packets 307 to the network controller 207.
Figure 4 represents the algorithm executed by the video packet transmitter module 216 of the sender node 102.
In an initial step 400 we wait for the start of an image. The start of an image occurs when the HDMI receiver module 215 asserts the vertical synchronization signal Vsync.
Once we detect the start of a new image we move to the next step 401.
In step 401 we get a time stamp from the Vsync sampler module. The time stamp indicates the time of occurrence of the Vsync signal as detected in previous step 400. The time is relative to the network time as given by the network controller 207. Then we move to the next step 402.
In step 402 we build a video packet header including the time stamp obtained in 401. Then we move to step 403.
In step 403 we build a video packet payload by aggregating several video lines (16, for example). Then we move to step 404.
In step 404 we send a video packet comprising the header to the network controller 207. Then we move to step 405.
In step 405 we test if we have reached the end of the image. The end of the image is reached when we have sent all the lines of the image. The number of lines per image is dependent on the video format. The video format information is sent by the HDMI receiver module 215 to the CPU 201. The CPU 201 then sends the information of the number of lines per video frame to the video packet transmitter module 216. If we have not reached the end of the image we move to step 406; otherwise we return to the initial step 400 to wait for the start of a new image.
In step 406 we build a video packet header for a video packet not containing the start of the image. In this video packet header the time stamp information is not meaningful. Then we loop back to the step 403 for building the video packet payload.
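The loop of steps 400 to 406 can be condensed into a short sketch. This is a simplified software model: the dictionary header fields are illustrative names, not fields defined by the patent, and only the packet carrying the start of an image receives a meaningful time stamp, as step 406 specifies.

```python
LINES_PER_PACKET = 16  # step 403 aggregates several video lines, e.g. 16

def packetize_frame(lines, timestamp):
    """Split one image into (header, payload) packets.

    Only the first packet of the image carries the Vsync time stamp;
    headers of the remaining packets mark the stamp as not meaningful.
    """
    packets = []
    for i in range(0, len(lines), LINES_PER_PACKET):
        header = {
            "start_of_image": i == 0,
            "timestamp": timestamp if i == 0 else None,  # step 406
        }
        packets.append((header, lines[i:i + LINES_PER_PACKET]))
    return packets
```

A real implementation would also carry the video format information supplied by the CPU so the receiver knows how many lines complete an image.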
Figure 5 gives a detailed view of the display part 205 of the video controller 204. It comprises the aforementioned HDMI transmitter module 212, video packet receiver module 214 and RX synchronization manager module 213. The HDMI receiver module 215 is shown here as part of the RX synchronization manager for convenience, but it is in reality part of the source part 206 as shown in figure 2.
The video packet receiver module 214 comprises a RX from network module 500 responsible for packet reception management as further described in figure 7. It extracts pixels from video packets and stores them in a video FIFO 501. The module 500 also extracts time stamp information and sends them to the drift computer 504 (part of the RX synchronization manager 213). The video FIFO 501 is read by a playback manager 507 (part of the RX synchronization manager 213), and its output is connected to the pixel bus of the HDMI transmitter module 212.
The RX synchronization manager module 213 comprises a drift computer module 504 responsible for computing a drift between the local Vsync signal 508 and the source Vsync signal as reflected by the time stamps. Operations of this module are described later with reference to figure 6.
The local Vsync 508 is obtained as a sum of clock ticks. We use a local clock 509, a counter 502, a register 503 and a comparator 505. The nominal value of the Vsync signal for a given image frequency is computed as a number of local clock ticks. For example an image frequency of 60 Hz can be obtained from a local clock running at 25 MHz by counting 416666 ticks, with a precision of ±40 nanoseconds. So when the current value of the counter 502 reaches the nominal value (i.e. 416666) registered in the register 503, the comparator 505 asserts the local Vsync signal 508. As described later with reference to figure 6, the drift computer 504 will increase or decrease the nominal value registered in register 503, depending on the computed drift, to make the local Vsync signal 508 as close as possible to the sender node Vsync signal.
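The arithmetic behind the counter/register/comparator arrangement above can be checked with a few lines. The function names are illustrative; the 25 MHz clock and 60 Hz example come from the description itself.

```python
LOCAL_CLOCK_HZ = 25_000_000  # local clock 509 from the example above

def nominal_ticks(frame_rate_hz):
    """Ticks counted per frame: the nominal value held in register 503."""
    return LOCAL_CLOCK_HZ // frame_rate_hz

def adjust_nominal(nominal, drift_ticks):
    """Model of the drift computer 504 nudging the register 503 value.

    A positive drift lengthens the local frame period (slows the local
    Vsync); a negative drift shortens it.
    """
    return nominal + drift_ticks
```

At 25 MHz one tick is 40 ns, which is the adjustment granularity (and hence the stated precision) of the local Vsync period.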
The local Vsync signal 508 is inputted to an analog dummy generator module 506.
This module is responsible for generating a dummy analog video signal from the local Vsync signal 508. Generally an analog video signal is a combination of Vsync, Hsync and color level signals multiplexed together. The dummy analog video signal is in fact made up exclusively of the Vsync signal. This signal can be considered as an analog synchronization signal. So the role of the analog dummy generator is only to adapt the local Vsync signal from digital levels to an analog level acceptable to the HDMI receiver module 215. The acceptable analog level can be found, for example, in the data sheet describing the considered HDMI receiver module 215.
The HDMI receiver module 215 is configured by the CPU module 201 to take an analog video signal as input and to generate a digital video signal as output. The means of configuration is defined in the manufacturer's data sheet (i.e. AD9880 data sheet).
The playback manager receives the digital video signals resulting from the digitization by the HDMI receiver module 215. It is not necessary to supply the pixel data to the playback manager since we are only interested in the resulting video synchronization signals generated by the HDMI receiver module 215, namely the pixel clock, the horizontal synchronization signal and the vertical synchronization signal. Based on these video synchronization signals the playback manager reads the video FIFO 501 to generate the pixels along with the video synchronization signals to the HDMI transmitter module 212. The playback manager follows the video format timing as defined in the document CEA-861-D (HDMI Specification 1.3a, HDMI Licensing, LLC, 2006-11-10). The video format information is given by the CPU module.
Referring now to figure 9, in a first step 900 the receiver node gets temporal references from the sender node. The temporal references are time stamps representative of the Vsync signal as observed at the sender node.
Then in step 901, the receiver builds a dummy analog video signal based on a Vsync signal that is generated from the temporal reference. As already seen above, a dummy analog video signal is an analog video signal that has vertical synchronization pulses only: there are no horizontal synchronization pulses and no color (pixel) signals. The result of this step is a dummy analog video signal that follows the image rate of the source connected to the sender node.
Then in step 902 the receiver node configures the HDMI receiver module 215 so that it converts the dummy analog video signal into a digital video signal. In the present embodiment, the receiver node is actually a generic node capable of being used selectively as a receiver node or a sender node. Such a generic node has an HDMI receiver that is used, when the node is acting as a sender node, to receive digital video signals from the HDMI connector; the HDMI receiver module 215 is therefore otherwise unused when the generic node is acting as a receiver node. The receiver node thus configures the HDMI receiver module 215 to convert the dummy analog video signal to a digital video signal. The configuration includes the specification that the input analog video signal is based on Vsync synchronization only; it also includes the specification of the video format. Based on this information the HDMI receiver module 215 will convert the dummy analog video signal to a digital video signal including pixel clock, horizontal synchronization and vertical synchronization signals. The pixel clock and the horizontal synchronization signals are obtained thanks to the HDMI receiver's internal PLL and VCO, through Vsync signal division based on the video format information. The result of this step is a digital image synchronization signal that is locked on the sender Vsync signal.
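The relationship the PLL/VCO exploits can be made concrete with the raster totals that CEA-861-D assigns to each format: the pixel clock is the Vsync rate multiplied by the total raster size of the configured video format. A short numeric check (the function name is mine; the 720p figures are the CEA-861-D totals):

```python
def pixel_clock_hz(total_pixels_per_line: int, total_lines: int, vsync_hz: float) -> float:
    """Pixel clock implied by a format's total raster and its Vsync rate.

    The HDMI receiver's internal PLL/VCO effectively performs this
    multiplication when it locks its clocks onto the incoming Vsync pulses.
    """
    return total_pixels_per_line * total_lines * vsync_hz

# 720p60 in CEA-861-D: 1650 total pixels per line, 750 total lines
print(pixel_clock_hz(1650, 750, 60.0))  # 74250000.0, i.e. 74.25 MHz
```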
Finally in step 903 the receiver node uses the resulting digital image synchronization signal to display on the display device the digital video pixels received over the wireless interface (sent by the sender node). The result of this final step is the display of the received video locked on the sender's Vsync rate.
Figure 6 depicts the algorithm executed by the Drift computer module 504 of the receiver node 103.
In an initial state 600 we load the nominal Vsync register with a value representative of the targeted image frequency and of the local clock 509 frequency. These two values are set by the CPU 201. For example, for a local clock frequency of 25 MHz and a target image frequency of 60 Hz, we set the nominal Vsync register 503 to (1/60)/(1/(25×10^6)) ≈ 416666. Then we move to step 601.
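The register value is simply the number of local-clock ticks in one image period. A one-line check of the worked example (the function name is mine):

```python
def nominal_vsync_count(local_clock_hz: int, image_hz: int) -> int:
    """Local-clock ticks per image period, truncated as in the worked example."""
    return int(local_clock_hz / image_hz)

print(nominal_vsync_count(25_000_000, 60))  # 416666
```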
In step 601 we wait for a time stamp representing video synchronization information representative of the video source clock from the Rx from network module 500. At time stamp reception we move to step 602.
In step 602 we start the counter 502. As a result, the counter 502 is synchronized with the start of image Vsync. We set the variables remoteVi to the time stamp value and localVi to the network time. Then we move to step 604.
In step 604 we wait both for the next time stamp from the Rx from network module 500 and for a local time stamp triggered by the local Vsync signal 508. When the local Vsync signal rises, we take the value of the network time 510 as the value of the local time stamp. The network time 510 is driven by the network controller 207. Then we move to step 605.
In step 605 we test if the remote Vsync is slower than the local Vsync 508. For this purpose we compare (remote time stamp - remoteVi) and (local time stamp - localVi). If (remote time stamp - remoteVi) is greater than (local time stamp - localVi), then the remote Vsync is slower and we move to step 606 to slow down the local Vsync.
Else we move to step 607.
In step 606 we slow down the local Vsync by one local clock 509 step (40 nanoseconds) by activating once the decrement command of the nominal Vsync register 503. Then we update the variables remoteVi to the new remote time stamp and localVi to the new local time stamp. Then we loop back to step 604 to wait for the next time stamps.
In step 607 we test if the remote Vsync is faster than the local Vsync 508. For this purpose we compare (remote time stamp - remoteVi) and (local time stamp - localVi). If (remote time stamp - remoteVi) is lower than (local time stamp - localVi), then the remote Vsync is faster and we move to step 608 to speed up the local Vsync. Else we update the variables remoteVi = new remote time stamp and localVi = new local time stamp and we move to step 604.
In step 608 we speed up the local Vsync by one local clock 509 step (40 nanoseconds) by activating once the increment command of the nominal Vsync register 503. Then we update the variables remoteVi = new remote time stamp and localVi = new local time stamp. Then we loop back to step 604 to wait for the next time stamps.
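Steps 604 to 608 amount to an interval comparison that nudges the nominal Vsync register 503 by at most one clock step per time-stamp pair. A minimal software sketch (the class and variable names are mine; the actual module is hardware fed by network time stamps):

```python
class DriftComputer:
    """Sketch of steps 604-608 of figure 6."""

    def __init__(self, nominal: int):
        self.nominal = nominal            # nominal Vsync register 503
        self.remote_vi = self.local_vi = None

    def start(self, remote_ts: int, local_ts: int) -> None:
        # Step 602: latch the first remote/local time-stamp pair.
        self.remote_vi, self.local_vi = remote_ts, local_ts

    def update(self, remote_ts: int, local_ts: int) -> None:
        # Steps 605 and 607: compare the intervals elapsed since the last pair.
        remote_dt = remote_ts - self.remote_vi
        local_dt = local_ts - self.local_vi
        if remote_dt > local_dt:
            self.nominal -= 1             # step 606: decrement command, slow down local Vsync
        elif remote_dt < local_dt:
            self.nominal += 1             # step 608: increment command, speed up local Vsync
        # Step 604 bookkeeping: the new pair becomes the reference.
        self.remote_vi, self.local_vi = remote_ts, local_ts
```

Each correction is a single 40 ns local-clock step, so the local Vsync converges gradually rather than jumping.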
Figure 7 depicts the algorithm executed by the Rx from network module 500 of the receiver node 103.
At an initial step 700 we wait for the reception of a video packet from the network controller 207. Once a video packet is received we move to step 701.
In step 701 we inspect the video packet header to determine if it contains a start of image. If it is not a start of image we loop back to the initial step 700. If it is a start of image we move to step 702.
In step 702 we extract the time stamp from the video packet header. Then we move to step 703.
In step 703 we send the time stamp to the drift computer module 504. Then we move to step 704.
In step 704 we store the video line from the video packet payload in the video FIFO 501. Then we move to step 705.
In step 705 we test whether the end of the image has been reached, which is the case when all the lines of the image have been stored. The number of lines per image depends on the video format. The video format information is sent by the HDMI receiver module 215 to the CPU 201 of the sender node. The CPU 201 of the sender node then sends this information to the receiver node CPU 201 through control data of the network controller 207. The CPU 201 of the receiver node in turn passes the number of lines per video frame to the Rx from network module 500. If the end of the image has not been reached we move to step 706 to wait for the next video packet; if it has been reached we loop back to the initial step 700.
In step 706 we wait for the next video packet reception from the network controller 207. On video packet reception we loop back to step 704.
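The packet handling of figure 7 amounts to a small state machine: discard packets until a start of image arrives, forward that packet's time stamp to the drift computer, then buffer lines until the frame is complete. A sketch under the assumption that packets are dictionaries (the field names are mine, mirroring the header fields of figure 8):

```python
def rx_from_network(packets, fifo, send_timestamp, lines_per_image):
    """Sketch of steps 700-706 of figure 7."""
    stored = 0
    for pkt in packets:                       # steps 700/706: packet reception
        if stored == 0:
            if not pkt["start_of_image"]:     # step 701: wait for a start of image
                continue
            send_timestamp(pkt["timestamp"])  # step 703: to the drift computer 504
        fifo.extend(pkt["lines"])             # step 704: store lines in video FIFO 501
        stored += len(pkt["lines"])
        if stored >= lines_per_image:         # step 705: end of image reached
            stored = 0                        # loop back to the initial step 700

timestamps, fifo = [], []
rx_from_network(
    [{"start_of_image": False, "timestamp": 0, "lines": [b"x"]},
     {"start_of_image": True, "timestamp": 42, "lines": [b"a", b"b"]},
     {"start_of_image": False, "timestamp": 0, "lines": [b"c"]}],
    fifo, timestamps.append, lines_per_image=3)
# timestamps == [42]; fifo == [b"a", b"b", b"c"]; the pre-start packet was dropped
```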
Figure 8 depicts the video packet format.
The video packet is composed of a header 800 and a payload 807.
The header 800 contains a start of image flag 801. This flag is set by the Video Packet transmitter 216 when it builds the first packet of an image.
The header 800 also contains a time stamp field 802. This field is set by the Video Packet transmitter 216 when it builds the first packet of an image.
The header 800 also contains a first line number field 803. This field is set with the number of the first line in the payload 807. It is set by the Video Packet transmitter 216 when it builds the payload 807.
The header 800 also contains a last line number field 804. This field is set with the number of the last line in the payload 807. It is set by the Video Packet transmitter 216 when it builds the payload 807.
The payload 807 contains the video lines from the line number 803, shown as 805, to the line number 804, shown as 806.
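For illustration, the header of figure 8 can be serialised with fixed-size fields. The field widths below are my assumption; the patent does not specify them:

```python
import struct

# Hypothetical layout: start-of-image flag 801 (1 byte), time stamp 802
# (32 bits), first line number 803 and last line number 804 (16 bits each),
# big-endian with no padding.
HEADER = struct.Struct("!BIHH")

def build_packet(start_of_image, timestamp, first_line, last_line, lines):
    """Assemble header 800 followed by payload 807 (lines 805..806)."""
    header = HEADER.pack(1 if start_of_image else 0, timestamp, first_line, last_line)
    return header + b"".join(lines)

def parse_header(packet):
    flag, timestamp, first_line, last_line = HEADER.unpack_from(packet)
    return bool(flag), timestamp, first_line, last_line
```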
Claims (1)
- <claim-text>Claims 1. A method of receiving video data in a video node that is capable of acting selectively as a sender node or as a receiver node and that has a source part, including a PLL, which is used to generate video data for transmission from said video node to another node when said video node is acting as a sender node, the video data being generated by a source device in synchronism with a source clock and being transmitted to the video node by a sender node other than said video node, the method comprising: when the video node is acting as a receiving node, employing said PLL in said source part to synchronize with said source clock a local clock of the receiving node, in synchronism with which local clock the received video data is processed.</claim-text> <claim-text>2. A method according to claim 1, in which said local clock of the receiver node is synchronized with respect to a synchronization signal extracted from the received video data.</claim-text> <claim-text>3. A method according to claim 2, in which the synchronization with respect to a synchronization signal extracted from the received video data comprises generation of an analog synchronization signal in dependence upon said extracted video synchronization information.</claim-text> <claim-text>4. A method according to claim 3, in which said analog synchronization signal is digitally converted by the source part, where the PLL comprised in said source part of the receiving node is used to generate said local clock synchronized with said source clock based on the digitally converted analog synchronization signal.</claim-text> <claim-text>5. A method according to claim 4, in which said analog synchronization signal generation step is performed in dependence upon a preliminary step of comparison of said source clock and said local clock.</claim-text> <claim-text>6. 
A method according to claim 5, in which if the result of the comparison indicates that said source clock is slower than the local clock, the synchronization of said local clock is performed by slowing down a rate of said analog synchronization signal.</claim-text> <claim-text>7. A method according to claim 5, in which if the result of the comparison indicates that said source clock is faster than the local clock, the synchronization of said local clock is performed by speeding up a rate of said analog synchronization signal.</claim-text> <claim-text>8. A method according to any one of claims 3 to 7, in which said analog synchronization signal is a dummy analog image signal.</claim-text> <claim-text>9. A video node that is capable of acting selectively as a sender node or as a receiver node and that has a source part, including a PLL, which is used to generate video data for transmission from said video node to another node when said video node is acting as a sender node, the video data being generated by a source device in synchronism with a source clock and being transmitted to the video node by a sender node other than said video node, the claimed video node comprising synchronization means operable, when the video node is acting as a receiving node, to employ said PLL in said source part to synchronize with said source clock a local clock of the receiving node, in synchronism with which local clock the received video data is processed.</claim-text> <claim-text>10. A computer program which, when executed by a computer or a processor in a video node, causes the video node to carry out a method as claimed in any one of claims 1 to 8.</claim-text> <claim-text>11. A computer program as claimed in claim 10, carried by a carrier medium.</claim-text> <claim-text>12. A method, video node or computer program substantially as hereinbefore described with reference to Figures 2 to 9 of the accompanying drawings.</claim-text>
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1111349.5A GB2492749B (en) | 2011-07-04 | 2011-07-04 | Receiving video data |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB201111349D0 GB201111349D0 (en) | 2011-08-17 |
| GB2492749A true GB2492749A (en) | 2013-01-16 |
| GB2492749B GB2492749B (en) | 2013-11-20 |
Family
ID=44512026
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1111349.5A Active GB2492749B (en) | 2011-07-04 | 2011-07-04 | Receiving video data |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2492749B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2093926A1 (en) * | 2008-02-01 | 2009-08-26 | Thomson Licensing | Method of receiving and method of sending data over a network |
| US20100061406A1 (en) * | 2007-03-28 | 2010-03-11 | Akihiro Tatsuta | Clock synchronization method for use in communication system for transmitting at least one of video data and audio data |
- 2011-07-04 GB GB1111349.5A patent/GB2492749B/en active Active
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11075971B2 (en) | 2019-04-04 | 2021-07-27 | Evertz Microsystems Ltd. | Systems and methods for operating a media transmission network |
| US11323780B2 (en) | 2019-04-04 | 2022-05-03 | Evertz Microsystems Ltd. | Systems and methods for determining delay of a plurality of media streams |
| US11695999B2 (en) | 2019-04-04 | 2023-07-04 | Evertz Microsystems Ltd. | Systems and methods for determining delay of a plurality of media streams |
| US11722541B2 (en) | 2019-04-04 | 2023-08-08 | Evertz Microsystems Ltd. | Systems and methods for operating a media transmission network |
| US12143431B2 (en) | 2019-04-04 | 2024-11-12 | Evertz Microsystems Ltd. | Systems and methods for operating a media transmission network |
Similar Documents
| Publication | Title |
|---|---|
| AU2022200777B2 (en) | Methods and apparatus for an embedded appliance |
| US6429902B1 (en) | Method and apparatus for audio and video end-to-end synchronization |
| CN102474656B (en) | Signal receiving device and camera system |
| US9451313B2 (en) | Network media adapter |
| EP2866458B1 (en) | Reception device, and synchronous processing method therefor |
| EP2866451A1 (en) | Method and apparatus for IP video signal synchronization |
| GB2485977A (en) | Audio playback system |
| CN102017645B (en) | Time labelling associated with an equipment synchronisation system connected to a network |
| US9425948B2 (en) | Techniques for synchronizing a clock of a wired connection when transmitted over a wireless channel |
| US10231007B2 (en) | Transmission device, transmitting method, reception device, and receiving method |
| JP2014510426A (en) | Clock recovery mechanism for streaming content transmitted over packet communication networks |
| CN115529481A (en) | Video synchronous display system, method and input device based on fusion signal source |
| GB2492749A (en) | Synchronising Wireless Video Data Nodes |
| CN117440063A (en) | A signal transmission method and device |
| US8792484B2 (en) | Device for receiving of high-definition video signal with low-latency transmission over an asynchronous packet network |
| JP6318953B2 (en) | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
| EP4539471A1 (en) | Media transmission system, sending device, sending system, reception device, and reception system |
| Savino et al. | A framework for adaptive PCR jitter correction in MPEG-2 TS processors |
| JP2014150335A (en) | Information processing apparatus, information processing method, and program |
| CN121357337A (en) | Live media frame synchronization method, device, computer equipment and storage medium |
| CA3203196C (en) | Methods and apparatus for an embedded appliance |
| CN115695880A (en) | Large screen splicing synchronous display method based on FPGA and ARM hardware platform |
| WO2015151781A1 (en) | Transmission device, transmission method, reception device, and reception method |