
HK1173804A - Offloading content retrieval and decoding in pluggable content-handling systems - Google Patents


Info

Publication number
HK1173804A
HK1173804A (application HK13100857.7A)
Authority
HK
Hong Kong
Prior art keywords
media
server
image
client
content handler
Prior art date
Application number
HK13100857.7A
Other languages
Chinese (zh)
Inventor
R. Mahajan
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1173804A


Description

Offloading content retrieval and decoding in a pluggable content processing system
Background
While computers were once isolated and had little or no interaction with other computers, computers now interact with a wide variety of other computers through Local Area Networks (LANs), Wide Area Networks (WANs), dial-up connections, and the like. With the proliferation of the Internet, connections between computers have become more important, and many new applications and technologies have been developed. The growth of large-scale networks and the widespread availability of low-cost personal computers have fundamentally transformed the way many people work, interact, communicate, and play.
An increasingly popular form of network communication is commonly referred to as a remote presentation system, which can share desktops and other applications with remote clients using protocols such as Remote Desktop Protocol (RDP), Independent Computing Architecture (ICA), and others. Such computing systems typically relay keyboard presses and mouse clicks or selections from a client computing device to a server computing device over a communication network (e.g., the INTERNET™), and relay screen updates in the other direction. Thus, the user has the experience as if his session were executing entirely on his client computer, when in fact the client is only sent screenshots, or frames, of the application as it appears on the server side.
Typically in a remote presentation system, graphical data is encoded on a server and then transmitted to a client for rendering on a client display. To remotely display media such as video or animation, the media is first decoded from a native format (e.g., H.264 or WMV) to another format, such as a bitmap. The bitmap is then encoded for transmission to the client. This decoding and encoding process is computationally expensive for the server, especially when it conducts many such remote presentation sessions concurrently, and transferring the decoded and re-encoded media to the client requires a large amount of bandwidth (as measured relative to the bandwidth required to transfer the natively encoded media). This may result in the server needing to drop frames, either because it cannot decode all frames or because it cannot send all frames to the client, and thus in a degraded client experience.
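The bandwidth cost described above can be made concrete with some illustrative arithmetic. The resolution, frame rate, and native bitrate below are assumptions chosen for the sketch, not figures from this document:

```python
# Rough, illustrative comparison of the bandwidth needed to send decoded
# bitmap frames versus natively encoded media. The resolution, frame rate,
# and native bitrate are assumed figures.

def raw_bitmap_bandwidth_bps(width, height, bytes_per_pixel, fps):
    """Bits per second needed to send uncompressed decoded frames."""
    return width * height * bytes_per_pixel * 8 * fps

# A 640x480, 30 fps video at 24-bit color, decoded to bitmaps on the server:
decoded_bps = raw_bitmap_bandwidth_bps(640, 480, 3, 30)

# An assumed native (e.g., WMV) bitrate of 1.5 Mbps for the same video:
native_bps = 1_500_000

# How much more bandwidth the decode-then-send path needs than the native path:
ratio = decoded_bps / native_bps
```

Even before the server's re-encoding step, the decoded path here needs over a hundred times the bandwidth of the native stream, which is the motivation for sending the natively encoded media to the client instead.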
Disclosure of Invention
It would therefore be an improvement over the prior art to reduce the demand placed on the server in a remote presentation session when media is displayed using a content handler for an application.
In one embodiment, the content handler is a plug-in. For example, the application may be MICROSOFT INTERNET EXPLORER™, the media may be video in the WINDOWS MEDIA VIDEO™ (WMV) format, and the content handler may be the MICROSOFT SILVERLIGHT™ ACTIVEX™ content handler plug-in used by INTERNET EXPLORER™ to decode and present the media when the web page is viewed. In another embodiment, the video is in the FLASH™ format, and the content handler is the FLASH™ ACTIVEX™ content handler.
This improvement is achieved by the server sending the client a frame that includes the media in two parts: the encoded media, and the rest of the frame. Using the above embodiment, the encoded media would comprise the video, while the remainder of the frame would comprise the portion of the INTERNET EXPLORER™ application window not occupied by the video (e.g., the navigation buttons and borders, and the rest of the web page on which the video is presented). The client then uses the content handler in conjunction with a stub container to decode an image corresponding to the encoded media and combines the image with the rest of the frame to recreate the frame as it appeared on the server (less any loss introduced by lossy encoding during the remote presentation session, and the like).
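The two-part frame above amounts to a compositing step on the client. A minimal sketch, modeling a frame as a 2D array of pixel values (an illustrative model, not the actual remote presentation wire format):

```python
def composite(frame, media, x, y):
    """Return a copy of `frame` with the decoded `media` image overlaid,
    its upper-left corner placed at (x, y) within the frame."""
    out = [row[:] for row in frame]           # leave the received frame intact
    for dy, media_row in enumerate(media):
        for dx, pixel in enumerate(media_row):
            out[y + dy][x + dx] = pixel
    return out

# The "rest of the frame" (R) with a 2x2 decoded media image (M) placed at (1, 1):
rest_of_frame = [["R"] * 4 for _ in range(4)]
decoded_media = [["M", "M"], ["M", "M"]]
recreated = composite(rest_of_frame, decoded_media, 1, 1)
```

The recreated frame matches what the server would have produced had it decoded and composited the media itself.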
The stub container may comprise a lightweight application configured to manage the content handler in the same manner that its corresponding application would manage it. Using the above example, the stub container provides the SILVERLIGHT™ ACTIVEX™ content handler with the same communications the content handler would receive while executing inside INTERNET EXPLORER™, even though the stub container may not implement other INTERNET EXPLORER™ functionality, such as the ability to render web pages (hence the designation of the stub container as a "lightweight" application).
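The stub-container idea can be sketched as follows. The class and method names here are hypothetical; the point is that the handler only needs the small host interface it actually calls, not the full application around it:

```python
class ContentHandler:
    """A pluggable handler that only requires a host exposing on_notify()."""
    def __init__(self, host):
        self.host = host

    def decode(self, data):
        image = data.upper()                 # stand-in for real media decoding
        self.host.on_notify("decoded")       # notify the host, as it would the app
        return image

class StubContainer:
    """Lightweight host: implements only the interface the handler uses,
    none of the full application's other features (page rendering, etc.)."""
    def __init__(self):
        self.notifications = []

    def on_notify(self, message):
        self.notifications.append(message)

stub = StubContainer()
handler = ContentHandler(stub)
image = handler.decode("abc")
```

From the handler's point of view, the stub is indistinguishable from the full application, which is what lets it run on the client without the application present.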
In embodiments where the media is stored on a media server computing device separate from the server, the media may be sent directly to the client, bypassing the server.
In one embodiment, the server retrieves media stored on the media server and passes it to the client for decoding. It may do this by using a proxy content handler that communicates with the media server as the content handler would, and then transfers the media data it receives to the client.
The present disclosure encompasses systems, methods, and computer-readable storage media for implementing these teachings.
The primary embodiments described herein discuss computer-executable instructions executed by one or more processors of a computing device. However, it is understood that these techniques may be implemented entirely in hardware, such as by a suitably programmed Field Programmable Gate Array (FPGA), or some combination thereof. One of ordinary skill in the art can appreciate that one or more aspects of the present disclosure can include, but are not limited to, circuitry and/or programming for implementing the herein-referenced aspects of the present disclosure; the circuitry and/or programming can be virtually any combination of hardware, software, and/or firmware configured to effect the herein-referenced aspects depending upon the design choices of the system designer.
The foregoing is a summary and thus contains, by necessity, simplifications, generalizations and omissions of detail. Those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting.
Drawings
The systems, methods, and computer-readable media for offloading content retrieval and decoding in a pluggable content processing system are further described with reference to the accompanying drawings in which:
FIG. 1 illustrates an example general-purpose computing environment in which the techniques described herein may be embodied.
FIG. 2 illustrates an example system in which a client communicates with a server in a remote presentation session, and in which the server acts as a proxy to retrieve media from a media server for the client.
FIG. 3 illustrates an example system in which a client communicates with a server in a remote presentation session, as described in FIG. 2, and in which the client retrieves media from a media server to be displayed in the remote presentation session.
FIG. 4 illustrates an example operational procedure for a client to participate in a remote presentation session in which there is offloading of content retrieval and decoding in a pluggable content processing system.
FIG. 5 illustrates an example operational procedure for a server participating in a remote presentation session in which there is offloading of content retrieval and decoding in a pluggable content processing system.
FIG. 6A depicts a first browser window displaying media and a second browser window referenced in FIG. 2.
FIG. 6B depicts the first and second browser windows of FIG. 6A, wherein the second browser window obscures a portion of the media of the first browser window.
Detailed description of illustrative embodiments
FIG. 1 is a block diagram of a general-purpose computing device in which the techniques described herein may be implemented. The computing system environment 120 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 120 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the example operating environment 120. In some embodiments, the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term "circuitry" as used in this disclosure may include dedicated hardware components configured to perform functions through firmware or switches. In other examples, the term circuitry may include a general purpose processing unit, memory, etc., configured by software instructions that implement logic that may be used to perform functions. In example embodiments where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic and the source code can be compiled into machine readable code that can be processed by the general purpose processing unit. Because those skilled in the art will appreciate that the prior art has evolved to the point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware or software to implement a particular function is a design choice left to the implementer. More specifically, those skilled in the art will appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the choice of hardware or software implementation is one of design choice and left to the implementer.
Computer 141 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 141 and includes both volatile and nonvolatile media, removable and non-removable media. The system memory 122 includes computer-readable storage media in the form of volatile and/or nonvolatile memory such as Read Only Memory (ROM) 123 and Random Access Memory (RAM) 160. A basic input/output system 124 (BIOS), containing the basic routines that help to transfer information between elements within computer 141, such as during start-up, is typically stored in ROM 123. RAM 160 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 159. By way of example, and not limitation, FIG. 1 illustrates operating system 125, application programs 126, other program modules 127, and program data 128.
The computer 141 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, FIG. 1 illustrates a hard disk drive 138 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 139 that reads from or writes to a removable, nonvolatile magnetic disk 154, and an optical disk drive 140 that reads from or writes to a removable, nonvolatile optical disk 153 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 138 is typically connected to the system bus 121 through a non-removable memory interface such as interface 134, and magnetic disk drive 139 and optical disk drive 140 are typically connected to the system bus 121 by a removable memory interface, such as interface 135.
The drives and their associated computer storage media discussed above and illustrated in FIG. 1, provide storage of computer readable instructions, data structures, program modules and other data for the computer 141. In FIG. 1, for example, hard disk drive 138 is illustrated as storing operating system 158, application programs 157, other program modules 156, and program data 155. Note that these components can either be the same as or different from operating system 125, application programs 126, other program modules 127, and program data 128. Operating system 158, application programs 157, other program modules 156, and program data 155 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 141 through input devices such as a keyboard 151 and pointing device 152, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 159 through a user input interface 136 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a Universal Serial Bus (USB). A monitor 142 or other type of display device is also connected to the system bus 121 via an interface, such as a video interface 132. In addition to the monitor, computers may also include other peripheral output devices such as speakers 144 and printer 143, which may be connected through an output peripheral interface 133.
The computer 141 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 146. The remote computer 146 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 141, although only a memory storage device 147 has been illustrated in FIG. 1. The logical connections depicted in FIG. 1 include a Local Area Network (LAN) 145 and a Wide Area Network (WAN) 149, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
When used in a LAN networking environment, the computer 141 is connected to the LAN 145 through a network interface or adapter 137. When used in a WAN networking environment, the computer 141 typically includes a modem 150 or other means for establishing communications over the WAN 149, such as the Internet. The modem 150, which may be internal or external, may be connected to the system bus 121 via the user input interface 136, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 141, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 1 illustrates remote application programs 148 as residing on memory device 147. It will be appreciated that the network connections shown are examples and other means of establishing a communications link between the computers may be used.
FIG. 2 illustrates an example system in which a client 204 communicates with a server 202 in a remote presentation session, and in which the server 202 acts as a proxy to retrieve media from a media server 208 for the client 204.
The components of this system, and the other systems discussed herein, are logically organized, and it is understood that in one embodiment they can be combined in various permutations, and that not every component is present in every embodiment.
In one embodiment, server 202 and client 204 communicate across communication network 206 in a remote presentation session. Server 202 and client 204 communicate via communication link 226. The server 202 and the media server 208 communicate via a communication link 228. When client 204 communicates with media server 208, it does so through server 202, via communication link 226 and communication link 228. In one embodiment, each of the server 202, client 204, and media server 208 may be implemented in the computing device of FIG. 1. During a remote presentation session, server 202 sends the client 204 a plurality of frames corresponding to the graphical output of one or more applications executing on server 202. In transmitting these frames, the server 202 may need to transmit frames that include the graphical output of decoded media along with the graphical data that the server 202 typically sends to the client 204 (in which case the server 202 retrieves the natively encoded media from the media server 208 and sends it to the client 204 for decoding and rendering). For example, this may include a web browser displaying a web page on which a video is to be played.
As in the example discussed in the disclosure above, the web browser may be INTERNET EXPLORER™, and the media may be a video in WMV format to be played in the web browser using the SILVERLIGHT™ ACTIVEX™ content handler 212. In another embodiment, the media is a video in the FLASH™ format to be played using the FLASH™ content handler 212.
In the current embodiment, clients 204 include a remote presentation session client 210, a content handler 212, a stub container 214 for content handler 212, and a geometry tracker client 216.
Stub container 214, as used herein, refers to circuitry, computer-executable instructions, etc. that mimic the application 222 associated with content handler 212. The content handler 212 may not typically be executed by itself, but rather in connection with instructions sent to or received from the associated application 222. When a client 204 is engaged in a remote presentation session with a server 202, the associated application 222 runs on the server 202, and the graphical output of that application 222 is sent to the client 204. Client 204 does not have to store or execute a copy of application 222 in order to use it through a remote presentation session. Even where client 204 does have application 222, it need not execute a full copy of application 222, with the associated cost in computing resources, in order to utilize content handler 212. Client 204 may execute only stub container 214, which may then interface with content handler 212 as necessary within the remote presentation session. Executing the stub container 214 to interface with the content handler 212 typically requires fewer computing resources than executing the corresponding full application 222 for the same purpose. (There are several other reasons that an application such as application 222 may be executed remotely in a remote presentation session rather than locally.)
In the current embodiment, server 202 includes a remote presentation session server 218, a proxy content handler 220, an application 222, and a geometry tracker server 224. Application 222 corresponds to stub container 214 on client 204. For example, when application 222 is INTERNET EXPLORER™, stub container 214 is an INTERNET EXPLORER™ stub container.
The server 202 and client 204 may communicate to determine whether the client 204 is to handle retrieval and/or presentation of the media, and if so, to what extent. The communication may address whether the client 204 is capable of retrieving and decoding the media locally. Such communications may cover the availability of content handler 212 on client 204, network conditions, the computing resources of client 204, and administrator or user preferences. This communication may occur, for example, when a remote presentation session is initiated, or when a particular content handler 212 is first needed within a remote presentation session. The communication may also address the capabilities of the server 202, such as whether an appropriate proxy content handler is available.
When this communication occurs can be optimized based on the details of the system. In one embodiment, the communication occurs when a remote presentation session is initiated, as this will likely reduce lag when media is later selected for presentation. The user may find a longer session-initiation period less cumbersome than a pause in the middle of a remote presentation session while the communication takes place. In another embodiment, the communication occurs when a particular content handler 212 is first needed within a remote presentation session. This may be preferable when the content handler 212 is unlikely to be used in a session, so that the processing resources involved in the communication are expended only when necessary.
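The negotiation described in the preceding paragraphs amounts to choosing an offload level from the factors exchanged. A minimal decision sketch; the inputs and the level names are assumptions, not terms from the document:

```python
def choose_offload(client_has_handler, client_can_reach_media_server,
                   server_has_proxy):
    """Pick how much media work moves to the client, given the negotiated
    capabilities. Returns one of: 'none', 'decode_only', 'retrieve_and_decode'."""
    if not client_has_handler:
        return "none"                    # server decodes and encodes as usual
    if client_can_reach_media_server:
        return "retrieve_and_decode"     # client fetches media directly (FIG. 3)
    if server_has_proxy:
        return "decode_only"             # server proxies retrieval (FIG. 2)
    return "none"
```

A real implementation would also weigh network conditions, client resources, and administrator preferences, as described above.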
For example, where the client 204 lacks the appropriate content handler 212 to decode the media, the server 202 can retrieve and decode the media before sending its image output to the client 204. This corresponds to a remote presentation session without content offloading.
In addition, it may be determined that the client 204 has the ability both to decode the media and to access it from the media server 208. In this case, the client 204 may retrieve media from the media server 208 without the involvement of server 202, and also decode the media. This embodiment is discussed in more detail with reference to FIG. 3.
Further, it may be determined that client 204 has the ability to decode the media, but not the ability to access the media directly from the media server 208 on which it is stored, while server 202 can access the media from the media server 208. This may occur, for example, when the media server 208 and the server 202 are connected to an intranet communication network that is not accessible to the client 204. This is the embodiment discussed in detail with reference to FIG. 2.
In embodiments where both server 202 and client 204 are able to decode the media, they may negotiate which of them will do so, such as based on the available processing resources of each computing device.
Upon determining through the communication that the client 204 is to decode the media, the proxy content handler 220 sends initialization parameters to the stub container 214 on the client 204. Such initialization parameters may include things like the address of the media server 208, the visual location of the content output on the server 202, and security settings. Such parameters may also include parameters from the media server 208, such as the dimensions at which the media should be decoded. The stub container 214 uses the initialization parameters to invoke the content handler 212 in a manner similar to how the proxy content handler 220 was initialized on the server 202. From this point forward, the content handler 212 can retrieve the media from the media server 208 and decode it locally on the client 204. If content retrieval is to be performed by the server 202, the proxy content handler 220 retrieves the data from the media server 208 and tunnels it to the stub container 214, which passes the data to the content handler 212 for decoding.
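The initialization parameters above might be modeled as a small structured message from the proxy content handler to the stub container. All field names here are illustrative assumptions, not fields of any real protocol:

```python
def build_init_params(media_url, x, y, width, height, allow_scripts=False):
    """Sketch of the parameters a proxy content handler might forward to the
    stub container: media server address, visual placement, security settings."""
    return {
        "media_server": media_url,
        "placement": {"x": x, "y": y, "width": width, "height": height},
        "security": {"allow_scripts": allow_scripts},
    }

# Hypothetical values: a WMV clip to be decoded at 800x400, placed at (30, 120):
params = build_init_params("http://media.example/clip.wmv", 30, 120, 800, 400)
```

The stub container would use such a message to invoke the content handler with the same configuration the proxy received on the server.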
When the server 202 is to offload presentation of media to the client 204, the server 202 runs a proxy for the content handler 212 that is configured to communicate with the media server 208 in the same manner that the content handler 212 would. For example, the proxy content handler 220 is configured to present the same interface as the content handler 212, allowing the proxy content handler 220 to be loaded into the container application 222 running on the server 202 in place of the content handler 212. The proxy content handler 220 may also be configured to accept incoming commands intended for the content handler 212, allowing the proxy content handler 220 to receive the various commands and calls from the application 222 that the content handler 212 would have received had it, rather than the proxy content handler 220, been located on the server 202. The proxy content handler 220 may also be configured to issue notifications and outgoing commands, so that the container application 222 receives from the proxy content handler 220 the notifications it would have received from the content handler 212.
During the course of a remote presentation session, client 204 may send server 202 an event that causes content handler 212 to be executed in conjunction with application 222. For example, the client 204 may instruct the server 202 to open, within a web browser, a website that contains media, such as a video, to be rendered via the content handler 212. Application 222 executes on server 202. When it encounters the media, it performs operations consistent with loading content handler 212. However, when the server 202 and client 204 have determined that the content handler 212 on the client 204 is to decode the media, the application 222 instead loads the proxy content handler 220.
The application 222 interacts with the media server 208 to retrieve media. It passes the media and instructions received from the media server 208 to the proxy content handler 220, and the proxy content handler 220 passes them on to the content handler 212 on the client 204. This may be accomplished, for example, by the proxy content handler 220 passing them to the remote presentation session server 218, which passes them to the remote presentation session client 210, which passes them to the stub container 214, which passes them to the content handler 212.
The content handler 212 may include functionality affecting how the media is experienced, and use of that functionality may involve instructions to be sent to the media server 208. For example, the MICROSOFT™ SILVERLIGHT™ content handler 212 has functionality to provide navigation features, such as selecting new media to retrieve and view from a list provided within MICROSOFT™ SILVERLIGHT™; use of a navigation feature is carried out on the media in a corresponding manner. When the server 202 acts as a proxy for the content handler 212, use of these navigation features is relayed through the server 202. For example, when new media is selected from within content handler 212, the stub container 214 receives the selection, the stub container 214 sends it to the remote presentation session client 210, the remote presentation session client 210 sends it to the remote presentation session server 218, the remote presentation session server 218 sends it to the application 222, and the application 222 sends it to the media server 208, where the selection is carried out. Likewise, communications are passed from the media server 208 to the content handler 212 in a similar manner.
During content decoding and/or retrieval, proxy content handler 220 communicates incoming commands from application 222 to stub container 214, which in turn sends them to content handler 212. Any notifications or outgoing commands from the content handler 212 are passed to the stub container 214 on the client 204, which sends them to the proxy content handler 220 on the server 202. The proxy content handler 220 replays these notifications and outgoing commands, providing them to the application 222.
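The relay path described in the preceding paragraphs can be sketched as a chain of hops, each of which records what it receives and forwards it onward. The class and the hop names are hypothetical, chosen to mirror the reference numerals above:

```python
class Hop:
    """One component in the relay chain; records each message and forwards it."""
    def __init__(self, name, next_hop=None):
        self.name = name
        self.next_hop = next_hop
        self.seen = []

    def deliver(self, message):
        self.seen.append(message)
        if self.next_hop is not None:
            self.next_hop.deliver(message)

# Downstream chain, server to client (names taken from the description above):
content_handler = Hop("content handler 212")
stub_container = Hop("stub container 214", content_handler)
rps_client = Hop("remote presentation session client 210", stub_container)
rps_server = Hop("remote presentation session server 218", rps_client)
proxy_handler = Hop("proxy content handler 220", rps_server)

proxy_handler.deliver("media-chunk-1")   # every hop sees the chunk, in order
```

The upstream path (notifications from the content handler back to the application) would be the same structure with the hops reversed.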
The display elements of the proxy content handler 220 are synchronized with the stub container 214 (and thus with the content handler 212), so that the frame rendered on the client 204 is the same image that would result if the entire frame had been rendered on the server 202 and sent to the client 204 for display.
Once the application 222 is finished with the content, the proxy content handler 220 is unloaded. Prior to unloading, the proxy content handler 220 sends a command to the client 204 to unload the stub container 214 and the content handler 212.
During a remote presentation session, the decoded media may become occluded, and this may be monitored by the geometry tracker. Geometry tracking can be thought of as the process by which server 202 and client 204 communicate to each other the shape of the window in which the media will be displayed on client 204. For example, when a session displays two web browser windows, the media may be decoded in one window, and the second window may then be moved over part or all of the first window (as depicted in FIGS. 6A and 6B). Geometry tracker server 224 operates to monitor the display and arrangement of windows and media on server 202. When it detects that the media has been occluded, it determines the shape of the viewable area of the media and transmits an indication of that viewable area to the geometry tracker client 216.
For example, the media may comprise an 800x400 pixel rectangle, and the right half of the rectangle may become occluded. The geometry tracker server 224 may determine that this has occurred and send an indication to the geometry tracker client 216 that the rightmost 400x400 pixel area of the media will not be displayed, and that only the leftmost 400x400 pixel area of the media should be displayed. The shape in which the media is displayed may thus be non-rectangular, or otherwise differ from its native shape, as a result of being partially occluded by a window on the server 202.
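The occlusion determination in this example reduces to rectangle intersection. A minimal sketch using (x, y, width, height) tuples; the coordinates of the occluding window are assumed values consistent with the example above:

```python
def intersect(a, b):
    """Intersection of two (x, y, w, h) rectangles, or None if they are disjoint."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    right = min(a[0] + a[2], b[0] + b[2])
    bottom = min(a[1] + a[3], b[1] + b[3])
    if right <= x or bottom <= y:
        return None
    return (x, y, right - x, bottom - y)

media = (0, 0, 800, 400)       # the 800x400 media rectangle
window = (400, 0, 600, 500)    # an assumed window covering its right half

occluded = intersect(media, window)   # the rightmost 400x400 area of the media
```

The geometry tracker server would report the complement of the occluded area (here, the leftmost 400x400 region) as the viewable shape; a full implementation must handle non-rectangular remainders when the occluder only partially overlaps an edge.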
FIG. 3 illustrates an example system in which a client 204 communicates with a server 202 in a remote presentation session, as described with reference to FIG. 2, and in which the client 204 retrieves media from a media server 208 to be displayed in the remote presentation session.
As discussed with reference to FIG. 2, when a client 204 is engaged in a remote presentation session with a server 202, the client 204 may request that the server 202 execute an application 222 and associated content handler 212 and send the resulting graphical output to the client 204 for display. The server 202 and client 204 communicate via a communication link 226 as in FIG. 2, and the server 202 and media server 208 communicate via a communication link 228 as in FIG. 2. In contrast to FIG. 2, however, the client 204 and the media server 208 are configured to communicate via a communication link 230, independently of the server 202. In embodiments where client 204 and media server 208 are configured to communicate via communication link 230, they may also communicate through server 202 via communication link 226 and communication link 228.
During the negotiation process between the client 204 and the server 202 described with reference to FIG. 2, it may be determined that the client 204 is able to access media from the media server 208 without the involvement of the server 202. This may be determined, for example, by the client 204 pinging the media server 208 and receiving a response, or by the client 204 successfully downloading a portion of the media from the media server 208.
In this embodiment, the server 202 neither retrieves media from the media server 208 nor sends media (decoded or not) to the client 204. This leaves a hole in the frames sent by the server 202 to the client 204 in the remote presentation session: the hole to be occupied by the decoded and presented media.
In one embodiment, the server 202 signals to the client 204 that it will not send the client 204 the portion of the frame corresponding to where the media is to be displayed. In one embodiment, the server 202 fills the area of the frame occupied by the media with an image (which will then be occluded on the client 204 when the decoded and rendered media is overlaid on top of it). In one embodiment, the server 202 fills the area with something highly and/or easily compressible, such as a single color (e.g., white). By making the area compressible, the server 202 can reduce both the processing resources it needs to compress it and the bandwidth required to send it to the client 204.
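The compressibility argument can be demonstrated directly: a region filled with a single color compresses to a tiny fraction of its raw size, while complex content does not. The region size (matching the 800x400 example earlier) and the use of zlib here are illustrative assumptions, not the codec a real remote presentation stack would use:

```python
import os
import zlib

region = 800 * 400 * 3                 # bytes in an 800x400 24-bit placeholder area

uniform = bytes(region)                # single-color fill (all one byte value)
varied = os.urandom(region)            # stand-in for complex, incompressible content

uniform_size = len(zlib.compress(uniform))
varied_size = len(zlib.compress(varied))
```

The uniform fill shrinks by several orders of magnitude, while the complex content barely compresses at all, which is why the server fills the media hole with a single color.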
When the client 204 receives both the media and the frame, it combines the two to create the image requested through the remote presentation session. In one embodiment, the server 202 indicates to the client 204 the location within the frame where the media is to be displayed, and the client 204 displays the decoded media there. For example, where the remote presentation session includes a window of the application 222 on the client 204, the server 202 can indicate a location within the window (e.g., a number of pixels to the right of and down from the window's upper-left corner). In this way, the media remains correctly positioned when the application 222 window is moved.
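Combining the frame and the decoded media can be sketched as a simple overlay at a window-relative offset. The image representation (lists of pixel rows) and the function name are illustrative assumptions:

```python
def composite(frame, media, offset):
    """Overlay the decoded media image onto the frame received from the
    server, at a position given relative to the application window's
    upper-left corner (so the overlay tracks the window when it moves).
    `offset` is (dx, dy). Returns a new combined image, leaving the
    inputs untouched."""
    dx, dy = offset
    out = [row[:] for row in frame]
    for y, row in enumerate(media):
        for x, px in enumerate(row):
            out[dy + y][dx + x] = px
    return out
```

The result corresponds to the "third image" of operation 416: the frame with the decoded media pasted into the hole.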
FIG. 4 illustrates an example operational procedure for client 204 to participate in a remote presentation session in which there is offloading of content retrieval and decoding in a pluggable content processing system.
Operation 402 depicts communicating with the server 202 across a communication network in a remote presentation session.
Operation 404 depicts requesting, from the server 202, a remote display of a frame that includes media from the server 202 that is decodable by the content handler 212, the content handler 212 interacting with a stub container 214, the stub container 214 corresponding to an application 222 associated with the content handler 212.
Operation 406 depicts receiving an image corresponding to a frame from the server 202.
Operation 408 depicts determining a data offload level for the media from the server 202. In one embodiment, the data offload level comprises one of: offloading access to the media, and offloading presentation of the media.
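The two offload levels named above can be modeled as a flag set. The enumeration and the selection logic below are one illustrative reading of the description, not part of the patent:

```python
from enum import Flag, auto

class Offload(Flag):
    """Hypothetical model of the data offload levels described above."""
    NONE = 0
    ACCESS = auto()        # client retrieves the media directly
    PRESENTATION = auto()  # client decodes and renders the media

def negotiate_offload(can_reach_media_server, has_content_handler):
    """Illustrative negotiation: presentation can be offloaded when the
    client has the matching content handler; access can additionally be
    offloaded when the client can also reach the media server directly."""
    level = Offload.NONE
    if has_content_handler:
        level |= Offload.PRESENTATION
        if can_reach_media_server:
            level |= Offload.ACCESS
    return level
```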
Operation 410 depicts receiving the media. In one embodiment, receiving the media includes receiving the media from the second server 208, the media received from the second server 208 not having been transmitted by the server 202. In one embodiment, this includes receiving an indication from the server 202 of how to receive the media from the second server 208.
In one embodiment, receiving media includes receiving media from the server 202, the server 202 having received media from the second server 208. In embodiments where the current operational procedure is performed by the client 204, the server 202 is configured to receive media from the second server 208, but the client 204 is not configured to receive media from the second server 208.
Operation 412 depicts the stub container 214 instructing the content handler 212 to decode the media.
Operation 414 depicts the content handler 212 decoding a second image corresponding to the media.
Operation 416 depicts displaying a third image, the third image comprising the second image overlaid on the image, the third image representing the frame. The third image may be a composed frame, such as a SILVERLIGHT™ video overlaid on top of its enclosing web browser window.
In one embodiment, a client performing the operations of FIG. 4 may accomplish this by creating a parent window for the remote presentation session on the client display (such as in a frame buffer in memory, which is then flushed to the display screen to generate the graphical output). The client may display the frame received from the server (e.g., a web browser window) in the parent window and delegate a child window (a portion of the parent window) to a content handler, which then renders the second image (e.g., a video embedded in a web page). Thus, the client renders the image to the parent window at its own refresh rate, while the content handler renders the second image to the child window. In this way, the frame rates of the image and the second image are independent of each other. In this embodiment, the resulting third image is the full parent window in which the child window is included.
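The parent/child delegation just described can be sketched as follows. The class, the surface representation, and the callback-based renderer are illustrative assumptions standing in for real windowing APIs:

```python
class ParentWindow:
    """Sketch of the delegation described above: the client owns the
    parent surface and hands a child rectangle to the content handler,
    which renders into it on its own timeline."""

    def __init__(self, width, height):
        self.surface = [[0] * width for _ in range(height)]
        self.child_rect = None     # (x, y, w, h) delegated to the handler
        self.render_child = None   # content handler's render callback

    def delegate(self, rect, render_cb):
        self.child_rect = rect
        self.render_child = render_cb

    def refresh_parent(self, frame):
        # The client paints the server frame at its own refresh rate;
        # the child region belongs to the content handler and is skipped.
        x, y, w, h = self.child_rect
        for r, row in enumerate(frame):
            for c, px in enumerate(row):
                if not (y <= r < y + h and x <= c < x + w):
                    self.surface[r][c] = px

    def refresh_child(self):
        # Called on the media's own timeline, independent of the parent.
        x, y, w, h = self.child_rect
        for r in range(h):
            for c in range(w):
                self.surface[y + r][x + c] = self.render_child(c, r)
```

Because `refresh_parent` and `refresh_child` are separate entry points, either can run without the other, mirroring the independent frame rates described above.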
A portion of the video image may be occluded, such as when it is partially covered by another web browser window. In one embodiment, operation 416 includes receiving an indication that a portion of the second image is occluded by the image, and displaying the third image comprising the second image overlaid on the image further comprises displaying only the portion of the second image overlaid on the image that is not occluded.
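The occlusion handling above amounts to rectangle intersection plus a clipped blit. The functions below are an illustrative sketch; the per-pixel approach and all names are assumptions (a real implementation would use the windowing system's clip regions):

```python
def intersect(a, b):
    """Intersection of two (x, y, w, h) rectangles, or None if they
    do not overlap. The intersection is the occluded part of the media."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2 - x1, y2 - y1)

def draw_unoccluded(surface, media, media_rect, occluded):
    """Copy only the media pixels that fall outside the occluded
    rectangle indicated by the server."""
    mx, my, mw, mh = media_rect
    ox, oy, ow, oh = occluded
    for r in range(mh):
        for c in range(mw):
            px, py = mx + c, my + r
            if not (ox <= px < ox + ow and oy <= py < oy + oh):
                surface[py][px] = media[r][c]
```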
In one embodiment, this includes receiving an indication of a location at which the second image is displayed, wherein displaying the second image includes displaying the second image at the location.
FIG. 5 illustrates an example operational procedure for the server 202 to participate in a remote presentation session in which there is offloading of content retrieval and decoding in a pluggable content processing system.
Operation 502 depicts communicating with the client 204 across a communication network in a remote presentation session.
Operation 504 depicts receiving a request to send an image including media to the client 204, the media corresponding to the content handler 212. In one embodiment, the image is an application output frame. For example, the image may be a web browser window that includes embedded video (media) that is to be displayed within the browser window.
Operation 506 depicts determining that the client 204 can decode the media with the content handler 212.
Operation 508 depicts retrieving media from the media server 208.
Operation 510 depicts determining a data offload level for the media to be offloaded to the client 204. In one embodiment, the data offload level corresponds to media retrieval or media decoding.
Operation 512 depicts sending media to the client 204. In one embodiment, this includes sending an indication of the location of the media on the media server 208 to the client 204.
Operation 514 depicts determining a portion of the image corresponding to the media and replacing that portion of the image with a third image. For example, where the image comprises a web browser window sent as a frame to the client 204, there will be a "hole" in the image where the media would normally be located but is missing, because the media will be sent to the client 204 separately.
Operation 516 depicts sending the image to the client 204, the client 204 overlaying the image with a second image, the second image being created by the client 204 decoding the media with the content handler 212.
Operation 518 depicts sending an indication to the client 204 of where to display the decoded media.
Operation 520 depicts determining that the location where the decoded media is displayed is partially occluded, and sending an indication to the client 204 that the location is partially occluded.
Operation 522 depicts receiving a request to navigate the media from the content handler 212 of the client 204; transmitting the request to navigate the media to the media server 208 where the media is stored; receiving a response from the media server 208; and transmitting the response to the content handler 212.
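The relay in operation 522 can be sketched as a thin forwarding layer on the server 202. The class, the dictionary request shape, and the callable standing in for the real transport to the media server 208 are all illustrative assumptions:

```python
class NavigationRelay:
    """Sketch of operation 522: the server forwards navigation requests
    (e.g. a seek) from the client's content handler to the media server
    and relays the response back unchanged."""

    def __init__(self, media_server):
        # `media_server` is any callable taking a request and returning
        # a response; it stands in for the link 228 transport.
        self.media_server = media_server
        self.log = []  # requests relayed so far, for illustration

    def handle(self, request):
        self.log.append(request)
        response = self.media_server(request)
        return response
```

Note that the server 202 does not interpret the request; it only shuttles it between the content handler 212 and the media server 208.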
FIG. 6A depicts a first browser window 602 displaying media 604, and a second browser window 606, as described with reference to FIG. 2.
FIG. 6B depicts the first browser window 602 and the second browser window 606 of FIG. 6A, wherein the second browser window 606 obscures a portion of the media 604 displayed in the first browser window 602.
Conclusion
While the present invention has been described in connection with the preferred aspects illustrated in the various figures, it is to be understood that other similar aspects may be used or modifications and additions may be made to the described aspects for performing the same function of the present invention without deviating therefrom. Accordingly, the present invention should not be limited to any single aspect, but rather should be construed in breadth and scope in accordance with the appended claims. For example, the various processes described herein may be implemented in hardware or software, or a combination of both. Thus, the methods and apparatus of the disclosed embodiments, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus configured to practice the disclosed embodiments. In addition to the specific implementations explicitly set forth herein, other aspects and implementations will be apparent to those skilled in the art from consideration of the specification disclosed herein. It is intended that the specification and illustrated implementations be considered as examples only.

Claims (15)

1. A method, comprising:
communicating with a server across a communications network in a remote presentation session, wherein the server sends a frame for display, the frame comprising media from the server that is decodable by a content handler that interacts with a stub container, the stub container corresponding to an application associated with the content handler;
receiving the media;
the stub container instructs the content handler to decode the media;
the content handler decoding an image corresponding to the media; and
displaying the image.
2. The method of claim 1, further comprising:
receiving a second image from the server; and displaying the image comprises:
displaying a third image comprising the second image overlaid on the image, the third image representing the frame.
3. The method of claim 1, wherein the server includes a proxy content handler corresponding to the content handler and a second application corresponding to the application.
4. The method of claim 1, wherein receiving the media comprises:
receiving the media from a second server, the media received from the second server not having been transmitted by the server.
5. The method of claim 4, further comprising:
receiving, from the server, an indication of how to receive media from the second server.
6. The method of claim 1, wherein receiving the media comprises:
receiving the media from the server, the server having received the media from a second server.
7. The method of claim 6, wherein the method is performed by a client, wherein the server is configured to receive the media from the second server, but the client is not configured to receive the media from the second server.
8. The method of claim 1, further comprising:
determining a data offload level for media to the server, the data offload level comprising offloading access to the media or offloading presentation of the media.
9. The method of claim 1, further comprising:
receiving an indication that a portion of the second image is occluded by the image; and
displaying a third image comprising the second image overlaid on the image further comprises: displaying only the portion of the second image that is overlaid on the image that is not obscured by the image.
10. The method of claim 1, further comprising:
receiving an indication of a location at which the second image is displayed; and
wherein displaying the second image comprises displaying the second image at the location.
11. A system, comprising:
circuitry for communicating with a client across a communications network in a telepresence session;
circuitry for determining to send an image comprising media and a stub image to the client;
circuitry for determining that the client can decode the media with a content handler;
circuitry for transmitting the media to the client; and
circuitry for sending the stub image to the client, the client displaying a representation of the image, the representation comprising the stub image overlaid with a second image, the second image created by the client decoding the media with the content handler.
12. The system of claim 11, further comprising:
circuitry for receiving a communication from the client, the communication involving a media server storing the media;
circuitry for transmitting the communication to a media server storing the media;
circuitry for receiving a response from the media server; and
circuitry for transmitting the response to the client.
13. The system of claim 11, wherein the communication includes an indication to navigate the media.
14. The system of claim 11, further comprising:
circuitry for determining a portion of the stub image corresponding to the media; and
circuitry for replacing the portion of the stub image corresponding to the media with a third image.
15. The system of claim 11, further comprising:
for sending an indication to the client of where to overlay the second image.
HK13100857.7A 2009-12-18 2010-11-18 Offloading content retrieval and decoding in pluggable content-handling systems HK1173804A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/642,581 2009-12-18

Publications (1)

Publication Number Publication Date
HK1173804A true HK1173804A (en) 2013-05-24

