
HK1169205A - A method and system for processing electronic image content for display - Google Patents



Publication number
HK1169205A
Authority
HK
Hong Kong
Prior art keywords
image
array
display
pixels
computing system
Prior art date
Application number
HK12109914.0A
Other languages
Chinese (zh)
Inventor
H.拉维德拉巴布
刘祖江
Original Assignee
NCS Pte. Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NCS Pte. Ltd.
Publication of HK1169205A publication Critical patent/HK1169205A/en


Abstract

The invention provides a method, system and apparatus for processing electronic image content for display. In the method, a computing system (12) receives a first image comprising a first array of pixels and a second image comprising a second array of pixels. A comparator (14) of the computing system (12) identifies which pixels of the second array are different from corresponding pixels of the first array by comparing the two arrays, and outputs the different pixels of the second array to at least one image editor (16). The image editor (16) constructs a new image for display comprising the different pixels of the second array identified by the comparator and the pixels of the first array complementary to those different pixels. The new image is then output for display.

Description

Method and system for processing electronic image content for display
Technical Field
The present invention relates to a method and system for processing electronic image content for display, and in particular to an image processor and computer program code for performing the method. The invention is described with particular reference to displaying the content of a computer screen on a television or other display device such as a projector, but is not limited to that application.
Background
Electronic image content is typically displayed on a display device, such as a projector, television, or the like, by transmitting the content in a format suitable for receipt by the display device, or via an intermediary device that typically receives the original electronic image content and outputs the content in a suitable display format, such as VGA or HDMI for standard interfacing.
One existing intermediary device is a digital media receiver for use within a home network to enable the content of a home computer to be displayed on a television, to enjoy image content in a location separate from the computer (e.g., a living room), or to present the content of the computer on a more convenient viewing device, such as a projection screen. The digital media receiver enables content previously accessible on a computer to be displayed on and accessible by any display device. However, although such devices are capable of displaying the content of a home computer, there are a number of problems associated with the quality and robustness of the display.
Modern display devices, such as plasma displays, LCD screens and projectors, are capable of displaying high resolution image content, which requires a large bandwidth when received as a stream of input data from a computer. Compression techniques may be employed to reduce bandwidth requirements, but these techniques may reduce image quality and are both time consuming and processor intensive. This is particularly evident when multiple computers and/or multiple display devices are connected. Another technique employed is to reduce the number of image frames transmitted per second, but this typically results in jitter in the displayed image and/or audio.
Disclosure of Invention
According to one aspect of the invention, there is provided a method of processing electronic image content for display, the method comprising: a computing system receiving a first image comprising a first array of pixels; the computing system receiving a second image comprising a second array of pixels; a comparator of the computing system identifying which pixels of the second array are different from corresponding pixels of the first array by comparing the first array and the second array, and outputting the different pixels of the second array to at least one image editor; the image editor constructing a new image for display comprising the different pixels of the second array identified by the comparator and the pixels of the first array complementary to the different pixels of the second array; and outputting the new image for display.
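The compare-and-compose steps of this aspect can be sketched in a few lines. The following Python is a minimal illustration only (the function names are ours, not the patent's), treating an image as a flat list of pixel values:

```python
# A minimal sketch of the claimed method, assuming images are flat
# lists of pixel values; names are illustrative, not from the patent.

def find_different_pixels(first, second):
    """Comparator: return {index: pixel} for pixels of the second
    array that differ from the corresponding pixels of the first."""
    return {i: p for i, (q, p) in enumerate(zip(first, second)) if q != p}

def compose_new_image(first, patch):
    """Image editor: overlay the patch onto the first image."""
    new_image = list(first)        # complementary pixels come from the first array
    for i, p in patch.items():     # changed pixels come from the second array
        new_image[i] = p
    return new_image

first  = [0, 0, 0, 0]
second = [0, 7, 0, 9]
patch = find_different_pixels(first, second)   # only the changed pixels travel
assert patch == {1: 7, 3: 9}
assert compose_new_image(first, patch) == second
```

The reconstruction is exact because every pixel is either in the patch or unchanged from the first array.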
The computing system may be a distributed computing system and may include one or more computers. The computing system may also include other devices with computing capabilities, such as digital cameras, PDAs, mobile phones, and the like.
In one embodiment, the image editor is connected to the computing system, typically a computer, via a telecommunications network. However, those skilled in the art will appreciate that the computing system may itself include a display device. Likewise, any computer in the computing system may output the different pixels as image patches to at least one image editor, where the image editor may be incorporated into the display device, or into an intermediate image processing device, as an image processor remote from the computer that receives the image content for display on a different kind of display device. In one example, the computer that outputs the image patches is a source computer that may additionally output control information to control the display of the image content. In this case, the telecommunications network for transmitting the image and control information may be a wired or wireless LAN, and the communication protocol may be TCP/IP for reliable data transfer.
It will be appreciated by those skilled in the art that the comparator forms image data, and in particular image patches, from the different pixels identified in the second array. Also, it should be understood that an image is formed by an array of pixels, each pixel in the array being specified as a collection of bits according to pixel qualities such as color and transparency. For example, the image may use 8 bits per pixel to specify these qualities. In this way, an image patch includes only those pixels that differ from the corresponding pixels of the first array, and a complementary pixel is a pixel for which no difference from the first array is found. Typically, only a portion of the image changes between frames, and therefore, preferably, only that portion is output to the image editor via TCP/IP.
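A back-of-envelope comparison shows why sending only the patch reduces bandwidth. The figures below (8 bits per pixel, a 4-byte coordinate per patch entry, 20,000 changed pixels) are illustrative assumptions, not values from the patent:

```python
# Illustrative bandwidth comparison: full frame vs. image patch,
# assuming 8 bits (1 byte) per pixel and a 4-byte coordinate per
# patch entry. All numbers are hypothetical.

WIDTH, HEIGHT = 1920, 1080
full_frame_bytes = WIDTH * HEIGHT * 1      # every pixel, every frame

changed_pixels = 20_000                    # e.g. a small moving region
patch_bytes = changed_pixels * (4 + 1)     # coordinate + pixel value

assert patch_bytes < full_frame_bytes
print(f"full frame: {full_frame_bytes} B, patch: {patch_bytes} B")
```

When most of the frame is static, the patch is a small fraction of the full array, which is the effect the patent relies on.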
For security reasons, the method may further include the image editor authenticating the computer from which it receives data when the image editor is remote from the computer. If the image editor is included in the computing system, authentication may be performed between components of the system to ensure reliability. In another embodiment, all data communication between the remote image editor and the computing system may be performed via a handshake authentication protocol for reliable data transfer.
It will be understood by those skilled in the art that the display device may be referred to as a television in this specification, but may also include LCD screens, projectors, etc., all of which require the use of a standardized audio/video interface, such as HDMI, VGA, component video, etc., to receive video content, particularly image portions thereof, in a standardized format.
In addition, it will be understood by those skilled in the art that the display device may also be a computer screen of a destination computer. In this case, the image content may be received using a standardized interface such as VGA, or the destination computer may include an image editor to construct a new image using the received image fragments for display on a connected computer screen.
As described above, it is desirable to control the display of image content, and in one embodiment, control information is output in addition to image data to control the display of image content on one or more display devices. The control information may be used to remotely control the display of the display device, including pausing, continuing, starting, and stopping the display of the image content. Moreover, the control information may be used to allow a source computer of the computing system to adjust a display of a destination display device, including adjusting a size or resolution of the display.
In another embodiment, the control information includes the pixel array size of the first and second arrays. The pixel array size information enables the display device to adapt its display to image content of a different size from a switched source without resizing the display.
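A control packet of this kind could carry the array size alongside a display command. This is a hypothetical wire format (the field names and JSON encoding are our assumptions; the patent does not specify an encoding):

```python
# Hypothetical control packet carrying the pixel array size alongside
# a display command, so the display can adapt when sources switch.
import json

def make_control_packet(command, width, height):
    """Encode a display command plus the source's pixel array size."""
    return json.dumps({
        "command": command,          # e.g. "start", "pause", "stop"
        "array_width": width,
        "array_height": height,
    })

pkt = make_control_packet("start", 1920, 1080)
decoded = json.loads(pkt)
assert decoded["array_width"] == 1920
```

On a source switch, the receiver would read the new array size from the packet before composing the next frame.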
It is also desirable to receive audio and/or video content or data for display, in particular for the display of video clips, in addition to the image content described above. In such situations, it is desirable to transmit the audio, video and/or image content as streaming data over a telecommunications network to reduce bandwidth requirements. In one example, a video clip is displayed within a computer desktop image at a display size smaller than the entire computer screen. In this example, the pixel differences between the first and second arrays, separated in time, may apply only to changes in the video clip being displayed, and not to the rest of the computer desktop image. One of ordinary skill in the art will appreciate that one way to display such a video clip is for the computer to identify which pixels of the portion of the screen displaying the clip have changed, at a suitably high refresh rate. However, such high refresh rates typically impose high bandwidth requirements, and jitter may occur. Another approach is to output the different pixels of the desktop image, excluding the video clip pixels, to the image editor, while outputting the audio and video data of the video clip as streaming data. In this approach, the audio and video data are overlaid with the data comprising the different pixels, so that image content including a video clip is displayed without smearing and without requiring such a high refresh rate, further reducing bandwidth requirements.
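Excluding the clip region from the pixel comparison can be sketched as follows. The rectangle representation and function name are ours, for illustration only:

```python
# Sketch of excluding a video-clip region from the pixel comparison,
# so desktop diffs and the streamed clip travel separately.
# The region bounds are illustrative.

def diff_excluding_region(first, second, width, region):
    """Return changed pixels outside the (x0, y0, x1, y1) clip region."""
    x0, y0, x1, y1 = region
    patch = {}
    for i, (a, b) in enumerate(zip(first, second)):
        x, y = i % width, i // width
        if a != b and not (x0 <= x < x1 and y0 <= y < y1):
            patch[i] = b
    return patch

first  = [0] * 16                 # 4x4 desktop
second = [0] * 16
second[5] = 9                     # change inside the clip region (1,1)-(3,3)
second[0] = 7                     # change outside it
patch = diff_excluding_region(first, second, 4, (1, 1, 3, 3))
assert patch == {0: 7}            # clip-region change is left to the stream
```

A full-screen clip region would yield an empty patch, matching the "zero difference" case described later in the specification.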
In another example, the streaming data is output to the image editor via an intermediary memory, typically located in a server. The stored data may be compressed, and the memory may be a random access memory to reduce processing time. The server may also act as an intermediary for more than one image editor, such that multiple image editors can receive the same streaming data for display. In addition, since the streaming data is recorded in the memory, each display device may be customized to display a different portion of the streaming data at any one time. Alternatively, a memory and an image editor may be located in each image processing apparatus so that the streaming data is stored locally.
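The server-mediated fan-out can be sketched with each client keeping its own read position into the stored stream. The class and data below are hypothetical stand-ins for the server memory and client image editors:

```python
# Sketch of server-mediated fan-out: the same stored streaming data is
# served to several image editors, each reading at its own position,
# so displays need not be in lockstep. All names are illustrative.

stored_stream = [b"frame0", b"frame1", b"frame2"]   # data recorded in server memory

class EditorClient:
    """Each client keeps its own cursor into the stored stream."""
    def __init__(self):
        self.pos = 0

    def next_chunk(self):
        chunk = stored_stream[self.pos]
        self.pos += 1
        return chunk

a, b = EditorClient(), EditorClient()
assert a.next_chunk() == b"frame0"
assert a.next_chunk() == b"frame1"
assert b.next_chunk() == b"frame0"   # second client reads independently
```

Because the stream is recorded rather than forwarded live, each display can show a different portion of it at any one time, as the paragraph above describes.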
In embodiments where there are multiple image editors, the skilled person will appreciate that there may also be at least one display device corresponding to each image editor, in particular an image editor located in an image processing device. In this case, the output of the different pixels identified by the comparator to each image editor is synchronised to provide a synchronised display on the corresponding display devices. For example, the method may include a source computer displaying and controlling the display of image content on a plurality of display devices, employing a plurality of image editors to process the image and control data for each corresponding display device.
The method may also include monitoring CPU usage of the computer to avoid reducing the resources available for processing information other than image data. In one embodiment, when a CPU usage threshold is exceeded, the computer pauses the comparator from comparing the first array and the second array to identify which pixels of the second array are different from corresponding pixels of the first array, and from outputting the different pixels of the second array to the image editor. In one embodiment, the threshold is 30%.
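The threshold check itself is simple; the following sketch assumes a CPU-usage figure is obtained elsewhere (e.g. from the operating system) and only shows the gating logic, with the 30% figure taken from the embodiment above:

```python
# Sketch of the CPU-usage guard: skip the compare/output step while
# usage is above a threshold (the embodiment above uses 30%).
# Obtaining the actual CPU figure is platform-specific and omitted.

CPU_THRESHOLD = 30.0   # percent

def should_run_comparator(cpu_usage_percent):
    """Run the comparator only when usage is at or below the threshold."""
    return cpu_usage_percent <= CPU_THRESHOLD

assert should_run_comparator(12.5)
assert not should_run_comparator(45.0)
```

In a real loop, frames arriving while the comparator is paused would simply be skipped, and comparison would resume against the most recent frame once usage falls back below the threshold.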
According to another aspect of the present invention there is provided a system for processing electronic image content for display, the system comprising: a computing system arranged to receive a first image comprising a first array of pixels and a second image comprising a second array of pixels; a comparator of the computing system arranged to identify which pixels of the second array are different from corresponding pixels of the first array by comparing the first array and the second array, and to output the different pixels of the second array to at least one image editor; whereby the image editor is arranged to construct a new image for display comprising the different pixels of the second array identified by the comparator and the pixels of the first array complementary to the different pixels of the second array, and to output the new image for display.
According to another aspect of the present invention there is provided an apparatus for processing electronic image content for display, the apparatus comprising an image editor arranged to: receive a first image comprising a first array of pixels; receive pixels of a second image comprising a second array of pixels, the received pixels being those of the second array that are different from corresponding pixels of the first array; construct a new image for display comprising the received different pixels of the second array and the pixels of the first array complementary to those different pixels; and output the new image for display.
According to another aspect of the invention, there is provided computer program code which, when executed, implements the above method.
According to another aspect of the present invention, there is provided a computer readable medium having the above program code recorded thereon.
According to another aspect of the invention, a data signal is provided that includes the above program code.
Drawings
In order that the invention may be more clearly defined, embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a system for processing electronic image content for display in accordance with an embodiment of the present invention;
FIG. 2 is a flow diagram of a method implemented by the system of FIG. 1 in accordance with the present invention;
FIG. 3 is a flow chart illustrating the method of FIG. 2 for transmitting information between a computing system, an image processing device and a display device over a network;
FIG. 4A is a schematic diagram of an embodiment of the system of FIG. 1 in which the comparator and the image editor are both included in a computer;
FIG. 4B is a schematic diagram of an embodiment of the system of FIG. 1 in which the image editor is included in a display device;
FIG. 4C is a schematic diagram of an embodiment of the system of FIG. 1 in which the image editor is included in a separate image processor;
FIG. 5 is a schematic diagram of the system of FIG. 1 showing multiple computers connected over a network to multiple devices for processing electronic image content, which are in turn connected to multiple display devices;
FIG. 6 is a schematic diagram of the system of FIG. 5, showing a server, intermediary to the devices, that includes a memory for access by each corresponding device;
FIG. 7 is a schematic diagram of the system of FIG. 5 showing a processing device having a memory;
FIG. 8 is a state diagram of the system of FIG. 1.
Detailed Description
In accordance with an embodiment of the present invention, there is provided a system 10 for processing electronic image content for display, comprising a computing system 12 arranged to receive first and second images, the computing system having a comparator 14 arranged to identify which pixels of the second image are different from corresponding pixels of the first image and to output the different pixels as image patches to an image editor 16. As described, the computing system may include one or more computers, or devices with computing capabilities, such as digital cameras, PDAs, mobile phones, etc., and thus includes components such as a display, processor, input device, hard drive, etc. Moreover, because the image editor 16 may be located remotely from the computing system and comparator, one of ordinary skill will appreciate that the image editor 16 may include similar hardware, such as a processor, to construct the image for display. If the image editor is located within the computing system, this hardware may be shared. Likewise, if the image editor is located within a display device such as a projector, hardware such as the power supply and network port may also be shared.
In one embodiment, the image editor 16 constructs a new image, for display by a display device such as a television, from the image patches received from the comparator 14 and the pixels of the first image at positions complementary to the pixel array of the image patches. The image editor 16 therefore requires hardware to receive and forward data, in particular to receive image data and to output display data in a form readily accepted by televisions or similar devices, such as HDMI. The display device may also be a plasma display, an LCD screen, a projector, a handheld viewing device, and the like.
FIG. 2 is a flow diagram of a method 18 implemented by the system for processing electronic image content. The method 18 includes first receiving a first image 20 comprising a first array of pixels. As described, an image is typically formed by an array of pixels, each pixel in the array being specified as a collection of bits according to pixel qualities. Likewise, it will be understood by those of ordinary skill in the art that the electronic image content for display may be an image displayed on a source computer screen of the computing system or on the screen of a source electronic device such as a PDA, and may include a static electronic image, such as a desktop image, or a video clip shown on a desktop that includes both video and audio data. Likewise, the desktop may be playing only an audio clip, in which case the audio data is output to the display device for playback. Where the image content comprises a sequence of still images or frames, such as the display of a computer screen, typically only a portion of the image changes between successive frames.
In one embodiment, method 18 further includes receiving a second image 22 including a second array of pixels. In one embodiment, the first and second images may be received by a computer. Furthermore, the computer may comprise a comparator for identifying which pixels of the second array of pixels differ 24 from the corresponding pixels of the first array. That is, the pixels of the corresponding coordinates of each array are compared to identify which pixels are different. Typically, the array size of the first and second images is the same, but in the event that they are different, the computer may implement a transformation algorithm to expand or contract the array to ensure an accurate comparison between corresponding pixels and to identify the different pixels.
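The transformation algorithm mentioned above, which expands or contracts one array to match the other before comparison, is not specified in the patent; nearest-neighbour resampling is one plausible choice, sketched here with illustrative names:

```python
# A simple nearest-neighbour transform to expand or contract one array
# to the other's size before comparison. The patent does not specify
# the algorithm; this is only one plausible choice.

def resize_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """Resample a flat src_w*src_h pixel list to dst_w*dst_h."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h          # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w      # nearest source column
            out.append(pixels[sy * src_w + sx])
    return out

img = [1, 2,
       3, 4]                             # 2x2 source
up = resize_nearest(img, 2, 2, 4, 4)     # expand to 4x4
assert up == [1, 1, 2, 2,
              1, 1, 2, 2,
              3, 3, 4, 4,
              3, 3, 4, 4]
```

After both arrays share a size, the pixel-by-pixel comparison at corresponding coordinates proceeds as described above.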
The comparator of the computing system may then output the different pixels 26 found, for use in constructing a new image 28 for display. In one example, the different pixels are output over a network to an image editor that is remote from the computing system. In another example, the image editor is included in the computing system. The image editor may perform the step of constructing the new image for display, the new image comprising the different pixels and the pixels of the first array complementary to the different pixels. In one example, the new image is formed by overlaying the different pixels at their corresponding positions in the pixel array of the first image, but other methods of constructing the new image are conceivable, such as combining the different pixels with the pixels of the first image. In any event, the new image is output 30 for display. The method may also include outputting the new image to any number of display devices for display, for example, where multiple televisions are located at multiple viewing locations, such as in the living room and bedroom of a house.
The method of processing electronic image content may also include outputting the different pixels over a network to a plurality of image editors remote from the computing system. An example of such an approach is a teaching environment in which a lecturer with a source computer wishes to display and control electronic image content, in the form of desktop content transmitted wirelessly, on the displays of multiple students, such as computer screens and projectors.
FIG. 3 shows a flow chart 32 of a method of processing electronic image content, in which the image content is sent between a computing system (source computer) 34 and an image processing device (image processor) 36 over a telecommunications network, and ultimately to a display device (television) 38 for display. In the illustrated embodiment, the image editor 42 is remote from the computer 34 and its comparator 40, and is contained in the image processor 36. It is contemplated that the image processor 36 includes the features necessary to perform its functions independently, such as a processor and power supply, as well as features enabling it to communicate over a network with any display device, such as appropriate ports, interfaces, etc.
The flow chart 32 also illustrates the method implemented by the system of FIG. 1 over time. In one example, the computer 34 first receives first and second images of the type described above, but in this case the comparator 40 identifies that the first image is a null image, and therefore the different pixels output correspond to the entire second image received. In the illustrated example, the image 44 is output to the image processor 36, and the image processor 36 then recognises that the complementary pixels are empty and outputs the image for display by the television 38. If the method requires, the image processor 36 returns a confirmation packet confirming receipt of the image or successful display of the image.
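The null-first-image case simply degenerates to sending the whole frame: every pixel of the second image "differs". A sketch (with None standing in for the null image, an assumption of ours):

```python
# When the first image is null, every pixel of the second image differs,
# so the "patch" is simply the full second image. None stands in for
# the null image here; this representation is illustrative only.

def find_different_pixels(first, second):
    if first is None:                    # null first image: whole frame differs
        return dict(enumerate(second))
    return {i: p for i, (q, p) in enumerate(zip(first, second)) if q != p}

assert find_different_pixels(None, [5, 6]) == {0: 5, 1: 6}
assert find_different_pixels([5, 6], [5, 9]) == {1: 9}
```

On the receiving side, "complementary pixels are empty" means the composer can output the patch directly as the first displayed frame.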
In the example shown, the second image received by the computer forms the first image 44 which is then received by the comparator 40. A subsequent second image 46 is also received. The comparator 40 then identifies which pixels differ between the two images and outputs only the different pixels 48 as image patches over the network to the image processor 36 and hence to the included image editor 42, rather than outputting an image having a full pixel array of larger packet size. The receipt of image fragment 48 may be confirmed if the method requires. By way of example, a handshake authentication protocol is employed between the image processor 36 and the computer 34, wherein data packets are not transmitted until an acknowledgement is received.
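The send-and-acknowledge exchange can be sketched with two queues standing in for the network; no further packet is sent until the previous one is acknowledged. The socket and packet details are omitted, and all names are illustrative:

```python
# Sketch of the acknowledge-before-send exchange between computer and
# image processor. Queues stand in for the network; real transport and
# packet formats are omitted.
from queue import Queue
import threading

to_processor, to_computer = Queue(), Queue()

def computer_send(fragment):
    """Source computer: send a fragment, then block until acknowledged."""
    to_processor.put(fragment)
    return to_computer.get(timeout=1) == "ACK"

def processor_receive():
    """Image processor: receive a fragment and confirm receipt."""
    to_processor.get(timeout=1)
    to_computer.put("ACK")

receiver = threading.Thread(target=processor_receive)
receiver.start()
ok = computer_send({"patch": {1: 7}})   # next fragment would wait for this ACK
receiver.join()
assert ok
```

Blocking on the acknowledgement is what gives the "data packets are not transmitted until an acknowledgement is received" behaviour described above; a production version would add retransmission on timeout.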
It will be appreciated by those of ordinary skill in the art that the telecommunications network may be a wired or wireless LAN, and the communications protocol is typically TCP/IP. The protocol allows the image processor to be located anywhere away from the computer, not just within the same house as in the home network example, or the same university as in the educational environment example. Also, it should be understood that other networks and protocols, such as UDP, may be employed.
The image editor 42 may receive the image fragment 48 and construct a new image for display comprising the image fragment 48 and the pixels 50 of the first array at positions complementary to the pixel array of the image fragment. This new image 52 is then output as image data over a suitable cable to the television 38, in a form suitable for the television, for example VGA or HDMI.
It will also be understood by those of ordinary skill in the art that the image data output to the display device may also be output through a wired or wireless telecommunication network, and allow the display device to be located anywhere.
In another example, a system for processing electronic image content is further described with reference to FIGS. 4A, 4B and 4C. Referring to FIG. 4A, it can be seen from the schematic diagram that a system 54 for processing electronic image content can be implemented in a computer 56. In this embodiment, the computer 56 includes: a comparator 14 arranged to identify which pixels of the second image are different from corresponding pixels of the first image and to output these different pixels as image patches; and an image editor 16 arranged to construct, from the image patches received from the comparator and the pixels of the first image at positions complementary to the pixel array of the image patches, a new image for display by a display device 58 with a display screen 59, which may be, for example, a television screen. The image editor 16, and hence the computer 56, outputs the new image to the display device 58 for display on the screen 59.
Those skilled in the art will appreciate that an example of another embodiment of the system shown in fig. 4A may be the case when an image processor is used as a stand-alone system for processing and displaying electronic image content. For example, a user wishing to display image content from a recorded optical disc may directly insert the optical disc into the image processor and display the image content because the image processor may contain computer processing capabilities.
FIG. 4B shows another embodiment. As can be seen, the system 60 for processing image content also includes a computer 56 having a comparator 14, but the image editor 16 is located remotely from the computer and is included in a display device 58. In this embodiment the display device may be a television set with a screen 59, and it is envisaged that, if there is more than one display device, each display device has the processing capability to use a separate image editor 16 to construct a new image for display. Further, in this embodiment, the comparator 14 and the image editor 16 located in the television may communicate via a telecommunications network of the type described above. Furthermore, those skilled in the art will appreciate that there may be more than one computer 56, each having its own comparator 14 to identify image patches and output them to the image editor 16 for display by the television 58. Thus, in such instances where the image editor 16 is included in a television, appropriate hardware and software is provided to receive image patches from one or more connected computers over a network so that a new image can be constructed for display on the screen 59.
FIG. 4C shows yet another embodiment. As can be seen, the system 62 for processing image content may include a computer 56 having a comparator 14, as described above, configured to identify the different pixels as image patches. In addition, it can be seen that the comparator 14 outputs the image patches for transmission by the computer 56, via a telecommunications network, to the image editor 16 located in a separate image processor 64. The image editor 16 then constructs a new image for display on the screen 59 of the display device 58 using the same method as described above.
The embodiment shown in FIG. 4C is shown in more detail in FIG. 5, where it can be seen that there may be more than one computer 56 in the system 66 providing image content, via a telecommunications network 68, to more than one image processor 64, and that the image content may be displayed on more than one display device, such as a television or computer screen 59 and a projector 70. The network shown is an internet cloud, but it could also be a LAN as described above. It can be seen that, with the method implemented by the system for processing image content described above, the computer 56 displays the image as desktop content on the computer screen 72, and a user may view the image on the screen 59 or via the projector 70.
In one example, the image processor 64 may output audio content by streaming a sequence of images with audio data to the television screen 59. Thus, in this example, in addition to the computer desktop content image being output, audio data is also output from the computer to the display device 58 for playback by the speakers of the display device.
In another example, the image processor 64 may also output video content in addition to image content by streaming a sequence of images with audio and video data to a display device. For example, the source computer may wish to display a video clip on a display device. In this case, the audio and video data of the video clip is transmitted as streaming data over the network 68 to the image processor 64. In the example where the video clip is displayed within a source computer desktop image on the computer screen 72, it may be displayed at a size smaller than the computer screen. In this case, the different pixels identified by the comparator 14 of the computer apply only to the portion of the desktop image that does not include the video clip. Thus, in the case where the video clip is full screen, the comparator outputs zero differences. In addition, the computer may output control information for the video clip with the streaming audio and video data to control the display of the video clip on the display device, e.g. pause, play, etc.
In one example, the source computer 56 and the image processor 64 run an application to communicate information from a customized media player on the source computer, which processes commands and video clip data to output the audio and video data as streaming data. For example, the streaming data may be communicated over the network 68 using known communication channels. Commands and player data from the source computer are then sent over these channels in a format that can be understood by the image processor, and ultimately the display device. Furthermore, once communication is established, there is continuous communication to keep the source computer and the image processor synchronized.
A portion of the application, which may run on the source computer or the image processor, acts as an agent that receives the commands and player synchronization data; in the first example, the agent is located on the source computer and controls the video clip data within the image environment of the source computer desktop. If the agent receives commands such as connect, pause, play and resume, it acts on them accordingly; these are not player controls but commands for the respective display device. For example, where the display device is a projector and the user wishes to pause the projection, the agent controls the video clip data accordingly. In addition to these projection commands, the agent may also receive player-related commands, such as resize, mute and change volume, which are likewise transmitted to the agent to be acted upon.
In the above example, the video clip itself may respond to the control of the above agent. In this case, the video clip is overlaid with the image patches identified by the comparator 14 from the image environment of the source computer. The video clip includes data that enables playback of the video and audio data, and data that synchronizes it with the application running on the image processor 64. In one example, the image processor 64 is located within a user's computer, such as a laptop; in another example, the image processor 64 is remote from the laptop, which corresponds to the display device and includes the laptop screen 59. Thus, it can be seen that the synchronization data can be used to convert between resolutions where the resolutions of the source computer and the user's laptop differ, so that the size of the image can be adjusted accordingly. The size of the video clip overlaid on the image environment of the display screen may also be adjusted relative to the image environment to provide a displayed image consistent with the source desktop image. Those skilled in the art will also appreciate that additional algorithms and modules may be required to implement this step.
In another example, the image processor 64 may be controlled and operated remotely through communication between software applications running on the image processor 64 and/or an operating system of the computer 56. In this example, the display device 58 and the screen 59 may be operated remotely over a desired communication channel. For example, the television may receive basic ASCII operation commands, such as turn on and turn off. Further, the television may be remotely controlled using a control application on the computer 56, wherein the computer 56 outputs control data to the image processor 64 in addition to the streaming image and audio/video data described above. The control data, image and audio/video data may be packaged together for transmission over a telecommunications network using TCP/IP, and may also be packaged together with an authentication protocol to secure the transmission of the data. In one example, a handshake protocol is used for all communications over the telecommunications network to reduce instances of unauthorized use and the risk of data theft.
Another embodiment is shown in fig. 6. As can be seen, the medium of the image processor 64 is a memory 72 for storing streaming image and/or audio and video data retrieved by the at least one image processor 64. In the illustrated embodiment, the memory is located in a server 73, the server 73 being connected to the computing system and thus the source computer via the telecommunications network 68. The client-server arrangement of the server 73 and the image processor 64 (client) allows each client to individually control the display of received streaming data. An alternative embodiment is shown in fig. 7, in which the image processor 64 includes a memory 72 to store received streaming data for subsequent retrieval upon request by a user. Those skilled in the art will appreciate that the memory 72 may be located remotely from the image processor 64, for example, the memory 72 may be contained in a separate hard disk drive.
In addition, the computer 56 may run additional applications, such as a custom media player, to provide an intuitive user interface to display image content on a display device. The media player may also have a function capable of recording the displayed image content, in which case the recording is performed using a memory.
By way of example, the customized media player may be implemented with software operating on the source computer 56 and the image processor 64 in a client-server setting in which the source computer functions as a server. It is envisaged that the server may be distinct from the source computer, for example as described above. In this example, after each image processor 64 receives the streaming image data and the audio and video data, the display screen 59 and the projector 70 display the video and audio clips from the source computer. This client-server arrangement also provides various other functions, such as controlling a remote media player from the server player. For example, control functions (e.g., stop, mute, play, pause, close, etc.) may be transmitted from the server to control the display on the client display screen. In addition, a connection may be established from the server to a particular client player or group of client players running on the client image processors to control each client player independently.
In the above example, the customized media player server component is divided into three main areas, namely content delivery, control delivery and connection delivery. There are two channels for communication from the server to the client: one for content transfer and the other for control and connection transfer. The content transfer is the actual video, audio, or both video and audio streamed from the server to the client component. Controls such as mute, stop, pause, play, resize, reposition, skew, zoom in, turn off, turn on, etc. may be issued remotely from the server component. Connection controls such as connect, disconnect, pause, and unpause may also be issued from one server to many other servers running customized media players, each of which may be paused, connected, unpaused, and disconnected at any point in time without disturbing the other client players. Video and audio content is transmitted through the VLC streaming module and, when a change occurs in the user interface of the server component, data packets comprising control transmissions are sent over the remote interface to the connected customized media player clients to synchronize them. Further, each synchronization control transport packet and connection transport packet is transmitted to all connected clients. They are sent in addition to the image content and control transport packets and enable the server player to initiate a connection with a particular client or group of clients. Thus, those skilled in the art will appreciate that these commands include the following high-priority commands: connect, disconnect, pause, and resume.
Furthermore, the client components of the above examples typically run on a remote system that is passive until it receives information from the server. After the connection is initially established, active communication between the connected client and server components is maintained, and each client receives three types of data from the server, namely content transfer, control transfer, and connection transfer. Furthermore, each client handles its data without interfering with other clients. The first information received by the client is a connection transfer, which provides information about the type of connection and about the server with which the connection has been established. Other connection commands include disconnect, pause, and resume. The second information received is a content transfer, which is the actual video and audio content being streamed from the server. Finally, control commands are received from the server to synchronize the clients: controls such as volume control, stop, play, pause, resume, close, resize, reposition, and fullscreen are received at each client, so that the client display is fully controlled by the server and thus mirrors the display of the source computer.
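The three transfer types described above (connection, control, and content) could be framed over the two channels roughly as follows; the packet layout, type codes, and JSON control payload are illustrative assumptions, not the actual protocol of the system.

```python
import json
import struct

# Hypothetical type codes for the three transfer kinds.
CONNECTION, CONTROL, CONTENT = 0, 1, 2

def pack(ptype, payload):
    """Frame a packet: 1-byte type + 4-byte big-endian length + payload."""
    return struct.pack(">BI", ptype, len(payload)) + payload

def unpack(buf):
    """Parse one framed packet; returns (type, payload, remaining bytes)."""
    ptype, length = struct.unpack(">BI", buf[:5])
    return ptype, buf[5:5 + length], buf[5 + length:]

# A control transfer such as "pause", as it might be sent from the
# server player to synchronize a connected client:
msg = pack(CONTROL, json.dumps({"cmd": "pause"}).encode())
ptype, payload, rest = unpack(msg)
```

A content transfer would carry raw stream bytes in the same frame, so both channels can share one parsing routine.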
In another example including a plurality of computers 56, the control information may include detailed information about the size of each computer image to be displayed. Different finite automaton algorithms may be used to determine the size of the static desktop image content when switching the connection between each source computer and the image processor 64. For example, if a state change (e.g., pause, start, stop, resume, etc.) occurs, the image processing algorithm sends image data from each computer to the image processor 64 only when a change in desktop content is detected, and otherwise sends no image data, thereby avoiding unnecessary use of network bandwidth. Furthermore, the finite automaton algorithms may also determine the size of the static computer desktop image when switching the connection between each computer and the image processor 64 without any discontinuity, thereby avoiding connection delays. This example illustrates a case in which each computer is simultaneously connected to a plurality of image processors. That is, each image processor may be connected to a single output display device and may receive image content as input from many source computers at once, while each source computer may be connected to multiple image processors at once, such that the connection topology of the system is many-to-many. In this situation, the selection of any computer as the primary source or destination can be switched on the fly, in real time, using the application described above. Further, at any point in time, the display or projection may be paused, resumed, started, or stopped.
FIG. 8 shows a state diagram for implementing the method which, in the many-to-many topology example, includes outputting control information to control one or more display devices in addition to the image data. Here, it can be seen that the finite automaton decides that the static desktop image must be resized when the connections between source computers and/or image processors are switched in time without disconnection, or when a state change such as pause, start, stop, or resume has occurred.
The finite automaton can be implemented in any suitable computer language using the state design pattern.
The following is a description of the finite automaton:
The value n in each state indicates whether the image must be reset to the full image, where 0 is false and 1 is true.
Q is a finite set of states; E is a finite set of input symbols; and d is a transition function from Q×E to Q.
Q={S0,SC,SPS,NC,SPA,SRS,SRA,URS,UPA,URA,UPS}
E={DS,DA,CS,CA,PS,PA,RS,RA}
q0 is the initial state
Symbol table
S0 - start state
SC - synchronous connection
SPS - synchronous pause single
NC - no connection
SPA - synchronous pause all
SRA - synchronous restart all
SRS - synchronous restart single
URS - asynchronous restart single
URA - asynchronous restart all
UPS - asynchronous pause single
UPA - asynchronous pause all
DS - disconnect single
DA - disconnect all
CS - connect single
CA - connect all
PS - pause single
PA - pause all
RS - restart single
RA - restart all
In fig. 8, the value n of S0, SPS, SPA, SRS, UPA, and UPS is 0, and the value n of SC, NC, URS, and URA is 1. Further, the symbols indicating the transitions between states are listed below:
S0-SC (CS)
S0-NC (CA)
SC-S0(DS,DA)
SC-SPS (PS)
SC-NC(CS,CA)
SPS-S0(DA)
SPS-SC (DS)
SPS-SPS(PS,RS,DS)
SPS-SPA (PA)
SPS-NC(CS,CA)
NC-SPS (DS)
NC-NC(CS,CA)
NC-SPA (PA)
NC-UPS (PS)
NC-S0(DA)
SPA-SRS (RS)
SPA-URS(CS,CA)
SPA-SRA (RA)
SPA-S0(DA)
SPA-SPA (DS)
SPA-SPS (DS)
SRA-S0(DA)
SRA-NC(CS,CA)
SRA-SPA (PA)
SRA-SPS (DA)
SRA-SRA (DS)
SRA-UPS (PS)
SRS-NC(CS,CA)
SRS-S0(DA)
SRS-SRS(DS,PS)
SRS-URA (RA)
SRS-URS (RS)
URS-UPA (PA)
URS-URA (PA)
URS-UPS(RS,PS)
URS-URS(RS,CS,CA)
URA-UPS(CS,PS)
URA-SPA (PA)
URA-NC(CS,CA)
URA-S0(DA)
UPS-URS(RS,CS,CA,DS)
UPS-SPS (PA)
UPS-URA (RA)
UPS-UPA (PA)
UPS-UPS(PS,DS)
UPS-S0(DA)
UPA-S0(DA)
UPA-URS(RS,CS,CA)
UPA-UPS(DS)
UPA-URA (RA)
In yet another example, in addition to the algorithms that transmit control and image packet data, the system may execute additional algorithms to maintain low CPU usage, high image sharpness, ease of switching, and control transitions in the many-to-many connections of the source computers. The CPU usage algorithm first reads the CPU usage of the source computer and adjusts the other algorithms to cap the required CPU usage at a preconfigured limit. Thus, the algorithm makes proper use of the available resources and leaves the user of the source computer headroom for other applications.
Additional details of the algorithm are given below.
Automatic initial CPU upper limit limiting algorithm
The automatic initial CPU upper limit algorithm first reads the user's CPU utilization and automatically adjusts the algorithms to a pre-provisioned value. Thus, the available resources are used appropriately and the user is left headroom for other applications. During the first 10 seconds, the algorithm observes the threads being learned and calculates the interval for which each thread must wait between executions so as to remain within the CPU upper limit.
Pseudo code:
Function 1
This function performs an important role: capturing the desktop image. It runs synchronously with the image difference algorithm. After the desktop capture executes for one cycle, it waits for the image difference algorithm to complete its execution.
Function 2
The function computes the difference image that needs to be sent to the remote client system. The thread works with the image capture thread. The wait value calculated by the algorithm will be used for the thread.
Function 3
This function calculates the latency to be used by the threads so that their CPU utilization stays below a given limit. It uses a number of statistics to derive the latency, which is the arithmetic mean of ten collected samples. Once the latency has been found, the calculation is not performed again.
Function 4
To calculate the required latency, the identities of all threads running on the current CPU and their CPU usage are recorded; these values are used in the following calculations.
Function 5
Since the algorithm involves two major CPU-consuming threads, the threads that do not significantly affect or use the CPU need to be filtered out. Thus, this function filters out the unnecessary threads and considers only the two CPU-intensive threads described above.
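A hedged sketch of how the wait interval of Functions 3 and 4 might be derived from the measured CPU time of a cycle; the cap value, sample count, and function name are assumptions, and a real implementation would also perform the thread filtering described above.

```python
import time

CPU_CAP = 0.30  # hypothetical preconfigured upper limit (30% of one core)

def measure_wait(work, samples=10):
    """Estimate the sleep interval a thread needs between cycles so
    that its CPU share stays under CPU_CAP. As described above, the
    latency is the arithmetic mean of ten collected measurements and
    is computed only once, during the learning phase."""
    waits = []
    for _ in range(samples):
        start = time.process_time()
        work()  # one cycle of a CPU-intensive thread (capture or diff)
        busy = time.process_time() - start
        # busy / (busy + wait) <= CPU_CAP  =>  wait >= busy * (1/CPU_CAP - 1)
        waits.append(busy * (1.0 / CPU_CAP - 1.0))
    return sum(waits) / len(waits)
```

After learning, each cycle would simply `time.sleep()` for the returned interval.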
Partial image transmission processing algorithm
The image processing algorithm sends image data to the image processor only when changes in desktop content are detected, and does not otherwise send image data to the image processor, thereby avoiding unnecessary use of network bandwidth. The image transitions that occur based on decisions made by the algorithm are explained below.
Pseudo code:
function 1
This function captures the desktop image and notifies the thread that computes the difference image. The image and its associated data are placed in a dataContainer, a structure that packages them together.
Function 2
This function computes a difference image of the current desktop image and the previous desktop image if _resetImage is not set to true; _resetImage is set to true when a full image is needed instead of a partial image. The function returns an image that contains only the changes and no redundant parts. The conditions for setting _resetImage to true are: restarting a connection, connecting a single client, a new connection, resending data, and a resolution change.
This function works synchronously with StartCapturing, one cycle at a time.
Function 3
Once the difference image or full image is ready, it is sent to the destination program, and a return value indicates whether the image has been used or updated. This is necessary so that the source (the client desktop) and the receiving computer maintain the same previous image throughout. Only then can the difference image be applied without any frame loss; otherwise, the source and destination images would differ.
Function 4
Computing difference images
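The difference-image computation could be sketched as follows for a flat pixel array; real desktop images are two-dimensional, and the `reset_image` flag here only illustrates the conditions listed above under which the full image is sent instead of a partial one.

```python
def difference_image(prev, curr, reset_image=False):
    """Return only the pixels of `curr` that differ from `prev`, as
    (index, pixel) pairs. When reset_image is True (restarted
    connection, new/single connection, data resend, or resolution
    change), the full image is returned instead of a partial one."""
    if reset_image or prev is None or len(prev) != len(curr):
        return list(enumerate(curr))
    return [(i, p) for i, (q, p) in enumerate(zip(prev, curr)) if q != p]

prev = [0, 0, 5, 7]   # previous desktop image (flattened)
curr = [0, 9, 5, 8]   # current desktop image
diff = difference_image(prev, curr)   # only the changed pixels
```

Sending `diff` rather than `curr` is what avoids the unnecessary network bandwidth described above.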
Algorithm for connecting from single client to multiple image processors
A user display device, such as a user laptop, running the client component program may be connected to more than one image processor at a time. All these connections are made through a single socket connection. The laptop maintains a list of all active image processors, which act as servers. An active image processor is an image processor that is connected at this point in time and in active communication. Each packet is passed to the entire list. Furthermore, a synchronization attribute is built in to ensure that all image processors receive the same packet without differences.
Pseudo code:
function 1
This function connects the client laptop (display device) to an image processor. It also contains an authentication mechanism for the connection. Only when the connection is active is the image processor added to the list of active servers.
Function 2
This function is responsible for sending the image and mouse packets to all servers in the list built by function 1. After each data packet has been sent, an acknowledgement from the image processor is awaited before the next packet is sent. There is a threshold time for which the client will wait for each image processor to acknowledge. If the timer expires, the connection to that image processor is closed and it is removed from the list of image processors.
Function 3
This function is responsible for removing an image processor from the connected image processor list once the threshold waiting time for an acknowledgement has elapsed.
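Functions 2 and 3 might be sketched together as follows; the `ACK` marker, timeout value, and function name are illustrative assumptions, since the actual wire format is not given.

```python
import socket

ACK_TIMEOUT = 2.0  # hypothetical threshold time for acknowledgements (s)

def send_to_all(servers, packet):
    """Send one image/mouse packet to every image processor in the
    active list, waiting for an acknowledgement after each send; a
    processor that fails to acknowledge in time is closed and removed
    from the list (Functions 2 and 3 above)."""
    for srv in list(servers):
        try:
            srv.settimeout(ACK_TIMEOUT)
            srv.sendall(packet)
            if srv.recv(4) != b"ACK\n":   # hypothetical ACK marker
                raise socket.timeout()
        except (socket.timeout, OSError):
            srv.close()
            servers.remove(srv)           # prune the dead processor
```

Iterating over a copy (`list(servers)`) lets the loop safely remove timed-out processors from the live list.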
Connection from a single image processor to multiple clients
A single image processor may be connected to a plurality of client display devices that output image content to be displayed. This is made possible by maintaining a list of clients, or sources, actively connected to the image processor. All connections are made through a single socket. Data packets received from all sources are sorted based on their source IP addresses; the display for each connection is assembled and updated, including reassembly of any packets that were previously fragmented.
Pseudo code
Function 1
This function establishes a connection to the source laptop. It also contains an authentication mechanism for the connection. Only when the connection is active is the source added to the list of active sources.
Function 2
This function performs a number of operations on the received data packet based on the type of packet received and the source that sent the packet.
It performs the following operations:
■ It accepts connections from sources that need to deliver their content to the image processor.
■ When receiving a connection, it authenticates the connection and then adds the source IP address to its list of active casting sources.
■ It separates the packets based on the header, ascertains whether each is a valid packet by checking the packet size, and, if valid, sends it to its respective processing function. If a packet is incomplete, it is stored in a temporary buffer and subsequent packets are appended to it until a valid packet is formed.
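The per-source buffering described in the last bullet could be sketched as follows; the 4-byte length prefix and function names are assumptions, since the actual header format is not given.

```python
import struct

buffers = {}  # per-source reassembly buffers, keyed by source IP

def on_data(src_ip, data):
    """Append received bytes to the buffer of the sending source and
    return every complete packet. A 4-byte big-endian length prefix
    is assumed as the header; an incomplete packet stays buffered
    until the following data completes it."""
    buf = buffers.setdefault(src_ip, b"") + data
    packets = []
    while len(buf) >= 4:
        (length,) = struct.unpack(">I", buf[:4])
        if len(buf) < 4 + length:
            break                # not yet a valid packet: keep buffering
        packets.append(buf[4:4 + length])
        buf = buf[4 + length:]
    buffers[src_ip] = buf
    return packets
```

Keying the buffers by source IP is what keeps the many connected sources from interfering with one another on the single socket.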

Claims (31)

1. A method of processing electronic image content for display, the method comprising:
a computing system receiving a first image comprising a first array of pixels;
the computing system receiving a second image comprising a second array of pixels;
a comparator of the computing system identifying which pixels of the second array are different from corresponding pixels of the first array by comparing the first array and the second array, and outputting the different pixels of the second array to at least one image editor;
the image editor constructing a new image for display comprising the different pixels identified by the comparator from the second array and pixels in the first array that are complementary to the different pixels of the second array; and
the new image is output for display.
2. The method of claim 1, wherein the image editor is connected to the computing system through a telecommunications network.
3. The method of claim 2, further comprising outputting, from the computing system over the telecommunications network via TCP/IP, the different pixels identified by the comparator from the second array.
4. The method of claim 3, further comprising the image editor acknowledging receipt of the different pixels.
5. The method of claim 1, further comprising outputting the new image for display by at least one display device.
6. The method of claim 5, further comprising outputting to the display device using an audio/video interface.
7. The method of claim 5, further comprising the computing system outputting control information to control display of the image content on the display device.
8. The method of claim 7, wherein the control information includes a pixel array size of the first and second arrays.
9. The method of claim 1, further comprising the computing system receiving audio and/or video data and outputting the audio and/or video data as streaming data to the image editor in addition to the different pixels identified by the comparator from the second array.
10. The method of claim 9, further comprising the computing system outputting control information to control display of the audio and/or video data for display by at least one display device.
11. The method of claim 9, further comprising outputting the streaming data to a memory for storage.
12. The method of claim 11, wherein the memory is located in a server connected to the computing system through a telecommunications network.
13. The method of claim 1, further comprising outputting, from the computing system to each of the at least one image editor, the different pixels identified by the comparator to construct the new image for display by each display device corresponding to the image editor.
14. The method of claim 13, further comprising the computing system synchronously outputting the different pixels identified by the comparator to each image editor to provide a synchronized display on each corresponding display device.
15. The method of claim 14, further comprising the computing system outputting control information to synchronize display of the image content on each corresponding display device.
16. The method of claim 1, further comprising compressing the different pixels identified by the comparator from the second array prior to output to the image editor.
17. The method of claim 1, wherein the computing system comprises at least one computer.
18. The method of claim 17, wherein the image content is derived from the at least one computer as an electronic image.
19. The method of claim 1, further comprising, when the comparator exceeds a CPU usage threshold of the computing system, the computing system suspending the comparator from identifying which pixels of the second array are different from corresponding pixels of the first array by comparing the first array and the second array and from outputting the different pixels of the second array to the image editor.
20. The method of claim 19, wherein the threshold is 30% of CPU usage of the computing system.
21. A system for processing an electronic image for display, the system comprising:
a computing system arranged to receive a first image comprising a first array of pixels and a second image comprising a second array of pixels;
a comparator of the computing system arranged to identify which pixels of the second array are different from corresponding pixels of the first array by comparing the first array and the second array and to output the different pixels of the second array to at least one image editor,
whereby the image editor is arranged to construct a new image for display comprising the different pixels identified by the comparator from the second array and pixels of the first array that are complementary to the different pixels of the second array, and to output the new image for display.
22. The system of claim 21, wherein the image editor is connected to the computing system through a telecommunications network.
23. The system of claim 22, wherein the computing system outputs the different pixels identified by the comparator from the second array via TCP/IP over the telecommunications network.
24. The system of claim 21, wherein the image editor outputs the new image for display by at least one display device.
25. The system of claim 21, wherein the image editor outputs to the display device using an audio/video interface.
26. The system of claim 21, wherein the computing system outputs the different pixels identified by the comparators to a plurality of image editors to construct the new images for display by a plurality of corresponding display devices.
27. The system of claim 21, wherein the computing system comprises at least one computer.
28. An apparatus for processing electronic image content for display, the apparatus comprising:
an image editor arranged to:
receiving a first image comprising a first array of pixels;
receiving those pixels of a second image comprising a second array of pixels that are different from corresponding pixels of the first array;
constructing a new image for display, the new image comprising the received different pixels from the second array and pixels in the first array that are complementary to the different pixels of the second array; and
The new image is output for display.
29. Computer program code which, when executed, implements the method of any of claims 1 to 20.
30. A computer readable medium comprising the program code of claim 29.
31. A data file comprising the program code of claim 29.
HK12109914.0A 2009-04-02 2010-03-31 A method and system for processing electronic image content for display HK1169205A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
SG200902294-8 2009-04-02

Publications (1)

Publication Number Publication Date
HK1169205A true HK1169205A (en) 2013-01-18
