HK1181930B - A method, server and system in a caching server - Google Patents

A method, server and system in a caching server

Info

Publication number: HK1181930B
Application number: HK13108920.3A
Authority: HK (Hong Kong)
Prior art keywords: content, request, client application, additional content, additional
Other languages: Chinese (zh)
Other versions: HK1181930A1 (en)
Inventors: J.R. Tuliani, N.L. Holt, C. Huang
Original assignee: Microsoft Technology Licensing, LLC
Priority claimed from US 13/328,444, external-priority patent US9294582B2 (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Microsoft Technology Licensing, LLC
Publication of HK1181930A1
Publication of HK1181930B
Description

Method, server and system in a caching server
Technical Field
The present invention relates to pre-caching, in particular for application-driven content delivery networks.
Background
A Content Delivery Network (CDN) is a computer network that contains copies of data placed on different network nodes. CDNs provide an interface for data between origin servers and end user computers. The origin server is the primary source of the content, and the servers of the CDN cache copies of the content with the highest demand. The servers of the CDN may be strategically placed closer to the end user computer than the origin server. Rather than having to directly access data from the origin server, the end user computer may access the high demand data at the servers of the CDN. As such, CDNs improve access to data by increasing access bandwidth, increasing redundancy, and reducing access latency.
Bing® Maps, available from Microsoft Corporation of Redmond, Washington, is an example of an online application that provides content using a CDN. The application has a large amount of static content in the form of map tiles (images of portions of the map) that are stored on origin servers and delivered to end users through CDNs. For example, a user may use a web browser at their computing device to browse a map, such as by panning across the map, zooming in or out on portions of the map, and so forth. As the user browses the map, the browser sends requests for new map tiles to the CDN.
Various techniques have been developed to allow content (e.g., map tiles) to be provided more quickly to a web browser at a user computing device. According to a first technique, an origin server of an online application may predict future content requests. The predicted future content may be pre-cached in the CDN such that it may be more quickly accessed by the user computing device if the predicted future content is actually requested. According to a second technique, a client web browser at a user computing device may predict what content will likely be needed in the future and may prefetch the content to the web browser. According to a third technique, the CDN server may predict future content requests for the user computing device and may pre-cache the predicted future content in the CDN.
However, these techniques have drawbacks. For example, having the origin server predict future content has the disadvantage of predetermining what content is to be pre-cached. Such pre-caching may be unacceptable and/or may not provide the required performance in cases where the user incurs data transfer costs or has limited bandwidth (such as in mobile applications) for pre-fetching content to the web browser. Furthermore, for the CDN to predict and pre-cache future content, the CDN may have to be very complex to understand the type of content request that the CDN receives in order to be able to infer the future content request.
As such, current caching techniques are less than desirable. In the case of Bing® Maps, the "cache hit" rate (the proportion of requested data that can be served from the cache) is undesirably low (e.g., less than 50%) due to the number of available map tiles and the different areas of focus for different users. This results in relatively high latency in map loading, as map tiles frequently have to be retrieved for the user from the origin server rather than from a cache at the CDN.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Methods, systems, and computer program products are provided for caching content before it is actually requested. The client application may predict the content likely to be requested next as desired content (rather than making the prediction at the origin server or at a content delivery network server). Further, the predicted content may be cached at a cache server of the content delivery network (rather than being cached at the web browser). Such techniques may provide improved cache hit rates, reduced latency, and/or further benefits.
In a method implementation in a cache server, a first request for desired content is received from a client application. The first request may also indicate additional content related to the desired content that may be subsequently requested by the client application. The indicated additional content is not currently indicated as being required for consumption, but is predicted as potentially being required for future consumption. The desired content and the indicated additional content are retrieved from the origin server. The desired content is sent to a client application at the user device, while the additional content is cached at a cache server. Subsequently, a second request including a request for additional content may be received from the client application (e.g., because the additional content now needs to be used for consumption at the client application). The additional content cached at the cache server in response to the first request is provided to the client application by the cache server in response to the second request.
The first request for desired content may include a likelihood indication indicating a likelihood that additional content is subsequently requested by the client application. Requests for the indicated additional content at the origin server (relative to other content requests) may be prioritized based on the likelihood indication and/or other information.
In one system implementation, a cache server may include a content request parser, a content retriever module, and a content provider module. A content request parser receives a request for desired content from a client application in a user device. The request indicates additional content related to the desired content that may be subsequently requested by the client application. The content retriever module sends at least one request for the desired content and the indicated additional content to the origin server, receives the desired content and the additional content from the origin server, and caches the additional content in the storage. The content provider module sends the desired content to a client application at the user device. The content request parser receives a second request from the client application that includes a request for additional content. In response to the second request, the content provider module provides the cached additional content to the client application.
Further, the content retriever module may include a request prioritizer that prioritizes the sending of requests for the indicated additional content to the origin server based on the likelihood indications.
In one client application implementation, the client application may include a communication interface and an additional content predictor. The additional content predictor receives an indication of currently desired content of the client application and predicts additional content that may be subsequently requested by the client application. The communication interface generates a first request for the desired content, the first request also indicating the predicted additional content. The communication interface receives the desired content from a cache server that retrieved the desired content from the origin server in response to the first request. Subsequently, the communication interface generates a second request for the predicted additional content as the currently desired content. The predicted additional content is received from the cache server, which previously retrieved the additional content from the origin server and cached it in response to the first request.
Further, the additional content predictor may include an additional content prioritizer. The additional content prioritizer generates a likelihood indication indicating a likelihood that additional content may be subsequently requested by the client application.
Also described herein are a client application for predicting future content requests, a caching server for caching the predicted future content, and computer program products of further embodiments.
Further features and advantages of the invention, as well as the structure and operation of various embodiments of the invention, are described in detail below with reference to the accompanying drawings. It should be noted that the present invention is not limited to the particular embodiments described herein. These examples are presented herein for illustrative purposes only. Other embodiments will be apparent to persons skilled in the relevant art based on the description contained herein.
Drawings
The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
FIG. 1 shows a block diagram of a content delivery network that delivers content to user devices, according to an example embodiment.
FIG. 2 illustrates a block diagram of a content delivery network where predicted content is retrieved and cached by a cache server, according to an example embodiment.
FIG. 3 illustrates a flow diagram providing a process for a cache server to cache content that a client application may request in the future, according to an example embodiment.
FIG. 4 shows a flowchart of a process for providing previously cached content to a client application, according to an example embodiment.
FIG. 5 illustrates a flow diagram providing a process for a client application to request and receive desired content and indicate content that may be requested in the future, according to an example embodiment.
FIG. 6 illustrates a block diagram of a user device having a client application configured to request and receive desired content and indicate content that may be requested in the future, according to an example embodiment.
FIG. 7 shows a block diagram of an additional content predictor including an additional content prioritizer, according to an example embodiment.
FIG. 8 illustrates a process for indicating the likelihood that content may be requested by a client application in the future, according to an example embodiment.
FIG. 9 illustrates a block diagram of a cache server, according to an example embodiment.
FIG. 10 illustrates a block diagram of a content retriever module including a request prioritizer, according to an example embodiment.
FIG. 11 illustrates a process for prioritizing requests to an origin server for content that a client application may request in the future, according to an example embodiment.
FIG. 12 illustrates a block diagram of an example computer that can be used to implement embodiments of the invention.
The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.
Detailed Description
I. Introduction
This specification discloses one or more embodiments that include various features of the invention. The disclosed embodiments are merely illustrative of the invention. The scope of the invention is not limited to the disclosed embodiments. The invention is defined by the appended claims.
References in the specification to "one embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Various exemplary embodiments of the present invention are described below. It should be understood that any section/sub-section headings provided herein are not intended to be limiting. Embodiments are described in this document, and any type of embodiment can be included under any section/sub-section.
II. Example embodiments
A Content Delivery Network (CDN) is a computer network that contains copies of data placed on different network nodes. CDNs provide an interface for data between origin servers and end user computers. The origin server is the primary source of the content, and the servers of the CDN cache copies of the origin server's highest-demand content. The servers of the CDN may be strategically placed closer to the end user computer than the origin server. Rather than having to access the data directly from the origin server, the end user computer may access the high demand data at the servers of the CDN. As such, CDNs improve access to data by increasing access bandwidth, increasing redundancy, and reducing access latency. Types of content that may be cached in a CDN include web objects (text, graphics, URLs, and scripts), downloadable objects (media files, software, documents), applications, live streaming media, and database queries.
Various techniques have been developed to allow content (e.g., map tiles) to be provided more quickly to a web browser at a user computing device. According to a first technique, an origin server of an online application may predict future content requests. The predicted future content may be pre-cached in the CDN such that it may be more quickly accessed by the user computing device if the predicted future content is actually requested. According to a second technique, a client web browser at a user computing device may predict what content may be needed in the future and may prefetch the content to the web browser at the user computing device (e.g., using AJAX (asynchronous JavaScript and XML) techniques). For example, a user may view a series of images using a web browser at a computing device. Each time an image is viewed, the next image to be viewed may be predicted, and the predicted next image may be pre-fetched to the web browser to allow for a smooth and near-instantaneous transition from the current image to the next image. According to a third technique, the CDN server may predict future content requests for the user computing device and may pre-cache the predicted future content in the CDN.
However, these techniques have drawbacks. For example, having the origin server predict future content has the disadvantage of predetermining what content is to be pre-cached. Such pre-caching may be unacceptable and/or may not provide the required performance in cases where the user incurs data transfer costs or has limited bandwidth (such as in mobile applications) for pre-fetching content to the web browser. Furthermore, for the CDN to predict and pre-cache future content, the CDN may have to be very complex to understand the type of content request that the CDN receives in order to be able to infer the future content request.
Embodiments of the present invention overcome such disadvantages. In an embodiment, the logic for predicting content that may be needed in the future may be implemented in a client application at a user computing device (e.g., in a web browser or other client application). As such, the logic may be customized more specifically for the client application and may evolve (e.g., through updating) as the client application evolves.
Further, in an embodiment, the predicted future content may be cached in storage in the CDN instead of being pre-cached in the web browser. Pre-caching into storage in the CDN does not significantly increase the data transferred to the web browser; it therefore does not adversely affect web browser performance, and is not problematic if data transfer is costly or bandwidth is limited.
As such, embodiments allow for pre-caching of content in the CDN's cache servers rather than in the client applications themselves. The business logic that decides what content is to be pre-cached is allowed to reside in the client application. The client application may directly prompt the CDN as to what content to pre-cache based on user context/behavior and/or other factors. The pre-cache hint may optionally include a likelihood indication that the content will be used in the future, allowing the CDN to prioritize the pre-cache requests according to available capacity.
Embodiments may be implemented in any type of CDN. For example, FIG. 1 shows a block diagram of a content delivery network (CDN) 100 that delivers content to user devices, according to an example embodiment. CDN 100 is shown as an example type of content delivery network and is not intended to be limiting. As shown in FIG. 1, CDN 100 includes an origin server 102 and first and second cache servers 104a and 104b. Further, as shown in FIG. 1, CDN 100 delivers the content to first through fourth user devices 106a-106d. Although a single origin server 102, two cache servers 104a and 104b, and four user devices 106a-106d are shown in FIG. 1 for purposes of illustration, any number of these features may be present, including one or more additional origin servers, one or more additional cache servers, and/or one or more additional user devices, including tens, hundreds, thousands, and even greater numbers of servers and/or user devices. In an embodiment, cache servers 104a and 104b may or may not be included in a cache server cluster (optionally with other cache servers), and there may be any number of cache server clusters.
Each of the user devices 106a-106d may be any type of stationary or mobile computing device, including a desktop computer (e.g., a personal computer, etc.), a mobile computer or computing device (e.g., a Palm® device, an RIM Blackberry® device, a personal digital assistant (PDA), a laptop computer, a notebook computer, a tablet computer (e.g., an Apple iPad™), a netbook, etc.), a smart phone (e.g., an Apple iPhone, a Google Android™ phone, a Microsoft Windows® phone, etc.), or another type of computing device.
As shown in FIG. 1, the origin server 102 and the cache servers 104a and 104b are communicatively coupled via a network 108a, while the cache servers 104a and 104b and the user devices 106a-106d are communicatively coupled via a network 108b. The networks 108a and 108b may be separate networks or may be included in a single network. Examples of networks 108a and 108b include a LAN (local area network), a WAN (wide area network), or a combination of networks such as the Internet. Examples of communication links that may be included in networks 108a and 108b include IEEE 802.11 wireless LAN (WLAN) links, Worldwide Interoperability for Microwave Access (WiMAX) links, cellular network links, wireless personal area network (PAN) links (e.g., Bluetooth™ links), Ethernet links, USB (universal serial bus) links, etc. The origin server 102 and cache servers 104a and 104b may each be any type of computing device described herein or otherwise known.
As shown in FIG. 1, each of the user devices 106a-106d includes a corresponding one of the client applications 110a-110d. Client applications 110a-110d are applications running in the user devices 106a-106d that access content through CDN 100. Examples of client applications 110a-110d include web browsers, media players (e.g., video players, image viewers, audio players, etc.), and other types of client applications. As shown in FIG. 1, the origin server 102 stores content 112 in a store that the client applications 110a-110d may wish to access. The content 112 may be any type of content, including web objects (e.g., text, graphics/images/video, URLs (uniform resource locators), scripts, etc.), downloadable objects (e.g., media files, software, documents, etc.), applications, live streaming media, and database data. In some cases, the client applications 110a-110d may access the content 112 directly at the origin server 102. Further, each of cache servers 104a-104b may cache portions of content 112 as cached content 114a and cached content 114b, respectively. As such, in some cases, the client applications 110a-110d may access the content 112 at the cache servers 104a and 104b (e.g., cached content 114a and 114b) rather than having to obtain the content 112 from the origin server 102 (which may be a bottleneck).
For example, as shown in FIG. 1, a client application 110a of a user device 106a may send a content request 116 to cache server 104a. The content request 116 indicates content that is needed by the client application 110a immediately as well as content that is predicted to be requested by the client application 110a in the future. In this manner, client application 110a informs cache server 104a of content that may be requested in the future, so that it can be cached by cache server 104a. Note that in the embodiment of FIG. 1, the content request 116 indicates both content that is required by the client application 110a and content that is predicted to be requested by the client application 110a in the future. In another embodiment, the content request 116 may indicate only the content that is predicted to be requested by the client application 110a in the future, and the content required by the client application 110a may be indicated in a separate request sent from the client application 110a to cache server 104a.
Cache server 104a may generate a content request 118 that is sent to origin server 102 requesting the predicted future content indicated in content request 116. If the desired content requested by the client application 110a has not been cached at cache server 104a, cache server 104a may indicate the desired content in the content request 118, or may send a separate request for the desired content to the origin server 102 (or another origin server). In response to the content request 118, the origin server 102 may send the content of the content 112 to cache server 104a in a response 120. The response 120 may include the desired content and the predicted future content. Alternatively, the desired content and the predicted future content may be sent from the origin server 102 to cache server 104a in separate transmissions, or the desired content and the predicted future content may be sent from different origin servers. Cache server 104a may cache the desired content and the predicted future content as cached content 114a and may send the desired content to client application 110a as desired content 122. If client application 110a does subsequently request content that was cached as predicted future content, cache server 104a may send cached content 114a, including the predicted future content, to client application 110a. Due to the caching of predicted future content at the cache servers 104a and 104b, the client applications 110b-110d may similarly interact with one or both of the cache servers 104a and 104b to receive the content of the origin server 102 in a more efficient manner than conventional techniques.
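To make the exchange of FIG. 1 concrete, the following minimal Python sketch models a combined request such as content request 116, carrying both the immediately desired content and the predicted additional content. All names and the tile URL scheme are hypothetical illustrations, not part of the described embodiments:

from dataclasses import dataclass, field

@dataclass
class ContentRequest:
    # Combined request: content needed now plus pre-cache hints.
    desired: list[str]
    predicted: list[str] = field(default_factory=list)

# The client requests the map tile it must render now, and hints at the
# neighboring tiles it expects to need if the user keeps panning.
request_116 = ContentRequest(
    desired=["/tiles/z10/x512/y340.png"],
    predicted=["/tiles/z10/x513/y340.png", "/tiles/z10/x512/y341.png"],
)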
As described above, in an embodiment, the client application may predict content to be accessed in the future, and the predicted content may be cached at the cache server for faster response. In an embodiment, the client applications 110a-110d of FIG. 1 may predict content to be accessed in the future and may indicate the predicted future content to the cache servers 104a and 104b for caching. For example, FIG. 2 shows a block diagram of a CDN 200 according to an example embodiment. As shown in FIG. 2, CDN 200 includes the origin server 102, a cache server 202, and the user device 106a. Cache server 202 is an example of one of cache servers 104a and 104b of FIG. 1. As shown in FIG. 2, the user device 106a includes a client application 204, and the client application 204 includes an additional content predictor 208. The client application 204 is an example of one of the client applications 110a-110d of FIG. 1. In FIG. 2, client application 204 is configured to predict content that may be subsequently requested, and the predicted content is retrieved and cached by cache server 202.
FIG. 2 is described below in conjunction with FIG. 3. FIG. 3 shows a flowchart 300 providing a process for a caching server to cache content that may be requested by a client application in the future, according to an example embodiment. In one embodiment, the flowchart 300 may be performed by the cache server 202 of FIG. 2. In one embodiment, flowchart 300 may be performed jointly by multiple cache servers sharing a cache storage. Further, the communication with origin servers in flowchart 300 may be performed with a single origin server or with multiple origin servers. Other structural and operational embodiments will be apparent to persons skilled in the relevant arts based on the following discussion of flowchart 300 and cache server 202.
Flowchart 300 begins with step 302. In step 302, a request for desired content is received from a client application in a user device, the request indicating additional content related to the desired content that may be subsequently requested by the client application. For example, as shown in FIG. 2, the cache server 202 may receive a first desired content request 210 from the client application 204 of the user device 106a. The first desired content request 210 is a request for desired content, such as a map tile of a map being viewed by the user using the client application 204, an image in a series of images being viewed by the user using the client application 204, a video frame of a video object (e.g., a video file) being viewed by the user using the client application 204, an audio frame of an audio object (e.g., an audio file) being played by the user using the client application 204, a content item of a web page being viewed by the user using the client application 204, and/or other content discussed elsewhere herein or otherwise known.
Further, the first desired content request 210 includes an indication of additional content related to the desired content that may be subsequently requested by the client application 204. In an embodiment, the additional content predictor 208 of the client application 204 predicts additional content related to the desired content that may be subsequently requested by the client application 204. For example, the predicted additional content may be one or more additional map tiles of a map being viewed by the user that are predicted to be subsequently viewed by the user, one or more additional images of a series of images being viewed by the user that are predicted to be subsequently viewed by the user, one or more additional video frames of a video object being viewed by the user that are predicted to be subsequently viewed by the user, one or more additional audio frames of an audio object being played by the user that are predicted to be subsequently played, one or more additional content items referenced by a web page that are predicted to be subsequently viewed, and so forth. An indication of the predicted additional content is included by the client application 204 in the first desired content request 210.
Note that the first desired content request 210 (and its response) may be included in one or more communication connections (e.g., TCP connections) between the client application 204 and the cache server 202. Any number of connections may be formed between client application 204 and cache server 202, and each connection may include a request for desired content and/or may indicate one or more predicted additional content items. In some cases, the desired content may already be cached at the cache server 202 while the predicted future content is not yet cached. In such a case, the cache server 202 may simply request the predicted future content from the origin server (i.e., without requesting from the origin server the desired content that is already cached at the cache server 202).
Returning to FIG. 3, at step 304, a request for at least the indicated additional content is sent to the origin server. For example, as shown in FIG. 2, the cache server 202 may send a server request 212 indicating the additional content indicated in the first desired content request 210. The server request 212 is received by the origin server 102. If the desired content has not been cached by the cache server 202, the cache server 202 may optionally send a request to the origin server 102 for the desired content indicated in the first desired content request 210. In an embodiment, the server request 212 may include both a request for the predicted additional content and a request for the desired content.
At step 306, additional content is received from the origin server. For example, in response to the server request 212, the origin server 102 may send a server response 214 that includes the desired content (if requested). For example, the origin server 102 may access desired content in a store associated with the origin server 102. The cache server 202 receives the server response 214 from the origin server 102. Further, the origin server 102 may access the predicted additional content in storage associated with the origin server 102 and may send the predicted additional content to the cache server 202 in the server response 214 or in a separate response.
In step 308, the desired content is sent to a client application at the user device. For example, as shown in FIG. 2, the cache server 202 may send the requested desired content to the client application 204 in a first desired content response 216.
At step 310, the additional content is cached. For example, as shown in FIG. 2, the caching server 202 can include a store 222, and the predicted additional content received in the server response 214 from the origin server 102 can be cached in the store 222 as cached content 224. Storage 222 may include one or more of any type of storage mechanism to cache content, including a magnetic disk (e.g., in a hard disk drive), an optical disk (e.g., in an optical disk drive), a magnetic tape (e.g., in a tape drive), a memory device such as a RAM (random access memory) device, and/or any other suitable type of storage medium. Further, in one embodiment, storage 222 may be shared among multiple cache servers.
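The steps of flowchart 300 can be summarized in a brief Python sketch. This is an illustrative reading of steps 302-310 under assumed names (CacheServer, fetch_from_origin, and an in-memory dictionary standing in for storage 222 are all hypothetical):

from typing import Callable, Dict, List

class CacheServer:
    # Sketch of flowchart 300: serve desired content, pre-cache hinted content.

    def __init__(self, fetch_from_origin: Callable[[str], bytes]):
        self.fetch_from_origin = fetch_from_origin  # server request 212 / response 214
        self.store: Dict[str, bytes] = {}           # stands in for storage 222

    def handle_request(self, desired: List[str], predicted: List[str]) -> List[bytes]:
        # Steps 304/306: request from the origin only what is not already cached.
        for url in desired + predicted:
            if url not in self.store:
                self.store[url] = self.fetch_from_origin(url)
        # Step 308: only the desired content is returned to the client now.
        # Step 310: the predicted content remains cached (cached content 224)
        # in anticipation of a later request.
        return [self.store[url] for url in desired]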
According to flowchart 300, the client application has informed the CDN of a prediction of additional content that may be needed in the future, and the predicted additional content is pre-cached in the CDN (e.g., in cache server 202 of the CDN). This pre-caching increases the cache hit rate for content that is actually requested subsequently, thus improving the end user experience overall. Note that not all pre-cached content will necessarily be used, and pre-cache requests may optionally be prioritized below conventional CDN content requests. Example embodiments of such prioritization are described further below. The benefit of pre-caching is better utilization of existing free capacity to improve the overall user experience. The use of prioritization (e.g., likelihood indicators) helps to optimize the pre-caching.
For example, FIG. 4 shows a flowchart 400 of a process for providing previously cached content to a client application, according to an example embodiment. For example, in an embodiment, the flow diagram 400 may follow the flow diagram 300 of fig. 3. Flowchart 400 is described below with reference to fig. 2.
The flowchart 400 begins at step 402. At step 402, a second request is received from a client application that includes a request for additional content. For example, as shown in FIG. 2, the cache server 202 may receive a second desired content request 218 from the client application 204 of the user device 106a. The second desired content request 218 is a request by the client application 204 for content (e.g., map tiles, images, video, audio, web pages, etc.) that was previously predicted to be potentially requested by the client application 204 in the future and was indicated in the first desired content request 210 (or another previous desired content request).
At step 404, the cached additional content is sent to a client application at the user device. In an embodiment, the cache server 202 may analyze the second desired content request 218 to determine whether any content requested therein is already cached at the cache server 202 (e.g., in the store 222). For example, in an embodiment, cache server 202 may access a cached content map or other data structure that maps content identifiers (e.g., identification numbers, such as hash values, etc.) to content cached in storage 222. The cache server 202 can compare the content identifier of the desired content received in the second desired content request 218 to the content identifiers in the cached content map to determine whether the desired content has already been cached. If the content identifier of the desired content is found in the cached content map, the desired content is already cached at the cache server 202. In this case, cache server 202 may access the content that has been cached in storage 222 (e.g., as cached content 224) and may provide the cached content to client application 204 in a cached content response 220.
In the event that the desired content indicated in the second desired content request 218 has not yet been cached at the cache server 202, the cache server 202 may request the desired content from the origin server 102, as previously described in connection with steps 304 and 306 of the flow diagram 300 (FIG. 3). Further, the second desired content request 218 may optionally indicate additional content predicted by the additional content predictor 208 that is potentially subsequently needed in a manner similar to the first desired content request 210. In this case, the cache server 202 may request the indicated additional content from the origin server 102 and may cache the indicated additional content in the store 222 for subsequent content requests in a manner similar to that described above in connection with steps 304, 306, and 310 of flowchart 300.
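A corresponding sketch of the cache-hit path of flowchart 400 is shown below, using a hash of the content URL as the content identifier in the cached content map; the hashing scheme is an assumption for illustration only:

import hashlib
from typing import Callable, Dict

def content_id(url: str) -> str:
    # Stand-in for the identifiers (e.g., hash values) of the cached content map.
    return hashlib.sha256(url.encode()).hexdigest()

cached_content_map: Dict[str, bytes] = {}  # content identifier -> cached content

def serve(url: str, fetch_from_origin: Callable[[str], bytes]) -> bytes:
    # Steps 402/404: a hit is served from the cache; a miss falls back to the
    # origin server exactly as in flowchart 300.
    key = content_id(url)
    if key not in cached_content_map:
        cached_content_map[key] = fetch_from_origin(url)
    return cached_content_map[key]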
Example embodiments of a client application and a cache server are described in subsequent subsections.
A. Example client application embodiments
As described above, in an embodiment, a client application (e.g., client application 204) may predict content to be accessed in the future, and the predicted content may be cached at a cache server for faster response. For example, FIG. 5 illustrates a flowchart 500 that provides a process for a client application to request and receive desired content and to indicate content that may be requested in the future, according to an example embodiment. In one embodiment, flowchart 500 may be performed by client application 204 of FIG. 2. Flowchart 500 is described with reference to FIG. 6. FIG. 6 shows a block diagram of a user device 600 including an example embodiment of a client application 204. As shown in FIG. 6, the client application 204 includes a communication interface 602 and an additional content predictor 208, and the communication interface 602 includes a request formatter 604. Further structural and operational embodiments will be apparent to persons skilled in the relevant arts based on the following discussion regarding flowchart 500 and client application 204 of FIG. 6.
Flowchart 500 begins with step 502. At step 502, a first request for desired content is generated that indicates additional content related to the desired content that may be subsequently requested by the client application. For example, as shown in FIG. 6, the client application 204 may generate a first desired content request 210, which is a request for desired content, as described above. Further, as described above, the first desired content request 210 includes an indication of additional content related to the desired content that may be subsequently requested by the client application 204.
In an embodiment, the additional content predictor 208 of the client application 204 predicts additional content related to the desired content that may be subsequently requested by the client application 204. For example, as shown in FIG. 6, the additional content predictor 208 may receive a desired content indication 606. The desired content indication 606 may be generated internally within the client application 204 and/or may be input by a user to a user interface provided by the client application 204. The desired content indication 606 indicates content that needs to be displayed, played, or otherwise interacted with. For example, the desired content indication 606 may indicate one or more of a map tile, a video frame, an audio frame, an image, a web page, etc. (e.g., via a URL, filename, etc.). The additional content predictor 208 may predict additional content based on the desired content indicated by the desired content indication 606 and may output the predicted additional content as a predicted additional content indication 608.
For example, when the desired content indication 606 indicates a tile of a viewed map as desired content, the additional content predictor 208 may predict one or more additional map tiles of the map to be subsequently viewed by the user. In such a case, the additional content predictor 208 may generate the predicted additional content indication 608 to indicate one or more map tiles that are spatially adjacent to the indicated map tile, map tiles within the indicated map tile (zoom in), map tiles that include the indicated map tile (zoom out), and/or other map tiles of the map that are likely to be subsequently viewed by the user.
In another example, where the desired content indication 606 indicates an image of a series of images as the desired content, the additional content predictor 208 may predict one or more additional images of the series to be subsequently viewed by the user, such as one or more subsequent images of the series that are (temporally or spatially) adjacent to the indicated image (potentially including all remaining images of the series), an image of a portion of the indicated image (e.g., a zoomed-in image), or an image that includes the entire indicated image (e.g., a zoomed-out image).
In another example, where the desired content indication 606 indicates a video frame of a video as the desired content, the additional content predictor 208 may predict one or more additional video frames of the video to be subsequently viewed by the user, such as one or more subsequent video frames of the video (potentially including all remaining frames of the video).
In yet another example, where the desired content indication 606 indicates an audio frame of audio as the desired content, the additional content predictor 208 may predict one or more additional audio frames of the audio to be subsequently played by the user, such as one or more subsequent audio frames of the audio (potentially including all remaining frames of the audio).
In yet another example, where the desired content indication 606 indicates a web page as the desired content, the additional content predictor 208 may predict one or more additional web pages or other web objects (e.g., images, video, audio, etc.) to be subsequently viewed by the user, such as one or more web pages or other web objects linked within the indicated web page, one or more web pages of a web site that includes the indicated web page, and so forth.
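As an illustration of the map example above, a predictor for a conventional quad-tree tile addressing scheme (tile coordinates x, y at zoom level z; this addressing scheme is an assumption, not specified herein) might look as follows:

from typing import List, Tuple

def predict_map_tiles(x: int, y: int, z: int) -> List[Tuple[int, int, int]]:
    # Spatially adjacent tiles at the same zoom level.
    neighbors = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z), (x, y - 1, z)]
    # Zoom out: the parent tile containing the indicated tile.
    parent = [(x // 2, y // 2, z - 1)] if z > 0 else []
    # Zoom in: the four child tiles within the indicated tile.
    children = [(2 * x + dx, 2 * y + dy, z + 1) for dy in (0, 1) for dx in (0, 1)]
    return neighbors + parent + children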
As shown in fig. 6, communication interface 602 receives a desired content indication 606 and a predicted additional content indication 608. In an embodiment, the request formatter 604 generates one or more requests to request the desired content indicated by the desired content indication 606 and the predicted additional content indicated by the predicted additional content indication 608. The communication interface 602 is configured to send the request generated by the request formatter 604 as the first desired content request 210 from the client application 204. In embodiments, the request formatter 604 may generate the request to have any format required by the particular communication technology.
For example, in one embodiment, the request formatter 604 may generate the request in the form of an HTTP (HyperText transfer protocol) request message. In such embodiments, the HTTP request message may be configured to request the desired content, and the predicted additional content may be indicated in the HTTP request message in any suitable form, such as indicated in a header. For example, portions of an example HTTP request message are shown as follows:
GET /images/logo.png HTTP/1.1
HINTS: <URL1>=20;<URL2>=60
In this example, the HTTP request message includes a request line requesting the web object "/images/logo.png". Further, the example HTTP request message includes an HTTP header "HINTS" that indicates two URLs, URL1 and URL2, as the predicted additional content. In this example, each of "URL1" and "URL2" would be replaced in the "HINTS" header with a complete URL of the corresponding predicted additional content (e.g., http://tv.msn.com/tv/article.aspx). Although two URLs are shown in the example header above, any number of items of predicted additional content may be indicated in the header by URLs or other identifiers in this manner.
Thus, in an embodiment, the request formatter 604 may generate an HTTP request message (e.g., using a "GET" instruction) indicating the desired content and indicating the predicted additional content in a header (e.g., a "HINTS" header or another predetermined type of header for the predicted additional content). In other embodiments, the desired content and the predicted additional content may be indicated in the request in other ways by the request formatter 604. Note that, as shown in the example above, a single "HINTS" header may be present in the HTTP request message, or multiple "HINTS" headers may be present. Still further, in an example, the HTTP request message may indicate the predicted additional content without indicating any required content (i.e., specifying/requesting only the predicted additional content). In such an HTTP request message, a "GET" instruction may be used to specify the predicted additional content, and the "HINTS" or other header may not be used, or there may be a header (e.g., a "cache" header) that indicates to the cache server that the requested content is to be cached but not immediately returned to the client.
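Under the header syntax of the example message above, a client and a cache server might serialize and parse the "HINTS" header as in the following sketch; the exact grammar of the header is illustrative only:

from typing import Dict

def format_hints(predictions: Dict[str, int]) -> str:
    # Serialize predicted-content URLs and likelihood values (1-100).
    return ";".join(f"<{url}>={p}" for url, p in predictions.items())

def parse_hints(header_value: str) -> Dict[str, int]:
    # Inverse operation, as the cache server's content request parser might run.
    hints = {}
    for item in header_value.split(";"):
        url, _, likelihood = item.strip().partition("=")
        hints[url.strip("<>")] = int(likelihood)
    return hints

header = "HINTS: " + format_hints({"URL1": 20, "URL2": 60})
# header == "HINTS: <URL1>=20;<URL2>=60"
assert parse_hints(header[len("HINTS: "):]) == {"URL1": 20, "URL2": 60}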
The communication interface 602 may send the first desired content request 210 to the cache server. In various embodiments, the communication interface 602 may be configured to send the first desired content request in an HTTP message, and/or may be configured to send the request in other manners that may be known to those skilled in the relevant arts.
Returning to FIG. 5, in step 504, the desired content is received from a cache server that retrieves the desired content from an origin server in response to a first request for the desired content. For example, as shown in fig. 6, the communication interface 602 may receive the first desired content response 216 from the cache server in response to the first desired content request 210. First desired content response 216 includes the desired content requested in request 210. The client application 204 may display, play, and/or otherwise allow a user at the user device 600 to interact with the received desired content.
At step 506, a second request for additional content is generated. For example, as shown in FIG. 6, the client application 204 may generate a second desired content request 218. As described above, the second desired content request 218 is a request by the client application 204 for content (e.g., map tiles, images, video, audio, web pages, etc.) that was previously predicted to be likely to be subsequently requested by the client application 204 and so indicated in the first desired content request 210. The second desired content request 218 may be generated in a similar manner as the first desired content request 210. For example, the additional content predictor 208 may receive a second desired content indication 606 that indicates content that needs to be displayed, played, or otherwise interacted with. The additional content predictor 208 may optionally predict additional content based on the desired content indicated by the second desired content indication 606 and may output the predicted additional content as a second predicted additional content indication 608. The communication interface 602 receives a second desired content indication 606 and optionally a second predicted additional content indication 608. In an embodiment, the request formatter 604 generates a request for second desired content and optionally includes the second predicted additional content in the generated request. The communication interface 602 is configured to send the request from the client application 204 as the second desired content request 218.
At step 508, additional content is received from the caching server, which retrieved and cached the additional content from the origin server in response to the first request for desired content. For example, as shown in FIG. 6, the communication interface 602 may receive the cached content response 220, which includes the cached content from the cache server as the desired content indicated in the second desired content request 218. The caching server previously obtained this content from the origin server and cached it in anticipation of a possible subsequent request.
As described above, requests for content to be cached may optionally be prioritized below normal CDN content requests. In this manner, content that is actually needed may be requested before content that is being requested only for caching purposes. Likewise, content being requested for caching purposes that is more likely to actually be requested may be cached before content being requested for caching purposes that is less likely to actually be requested.
For example, fig. 7 shows a block diagram of the additional content predictor 208 including an additional content prioritizer 702, according to an example embodiment. The additional content prioritizer 702 is configured to analyze the predicted additional content to determine how likely it is to actually be subsequently requested by the client application. For example, in one embodiment, the additional content prioritizer 702 may operate in accordance with step 802 shown in FIG. 8. In step 802, a likelihood indication is generated indicating a likelihood that additional content may be subsequently requested by the client application. In an embodiment, for each predicted additional content item, the additional content prioritizer 702 may generate a corresponding likelihood indication indicating a likelihood that additional content may be subsequently requested by the client application 204. The generated likelihood indications may be associated with corresponding predicted additional content items and included in the desired content request sent to the cache server. The cache server may use the likelihood indication to prioritize caching of the predicted additional content item relative to other content. Further, the cache server may send the likelihood indication to the origin server when requesting the predicted content, such that the origin server may prioritize the provision of the predicted content to the cache server.
The additional content prioritizer 702 may generate the likelihood indication in any manner, including based on the content being viewed and/or based on user behavior. For example, likelihood indications may be generated differently for different content types (e.g., higher likelihood indication values may be assigned to video frames relative to map tiles, etc.). In another embodiment, likelihood indications may be generated differently for predicted additional content by the additional content prioritizer 702 based on proximity (e.g., temporal and/or spatial) to the actually requested content. For example, in the map example, a greater likelihood indication value may be assigned to a map tile immediately adjacent to the map tile currently being viewed, relative to a map tile separated from the viewed map tile by one or more intermediate map tiles. The immediately next video frame of a video stream may be assigned a higher likelihood indication value relative to video frames that come later in the stream. The images of an image stream and/or the audio frames of an audio stream may be treated similarly and/or differently, depending on the particular implementation. In a web page, content items (e.g., URLs) located near the top of the web page may be assigned higher likelihood indication values relative to content items located at the bottom of the web page. In various embodiments, likelihood indications may be generated in various ways, and may have any suitable value and range of values (e.g., a numerical range, a textual range (e.g., "high," "medium," "low"), etc.), as desired for a particular implementation. With regard to user behavior, when a user is panning the map in a particular direction, map tiles in that direction are more likely to be requested in the future than other tiles, and therefore may be assigned higher likelihood indications. When a user is fast-forwarding through video or audio, the "next" frame may be several frames ahead, and frames further ahead in the stream may be assigned higher likelihood indication values relative to the more immediately adjacent frames.
For example, in the example HTTP message shown above, a likelihood indication having a value of 20 has been generated for URL1, and a likelihood indication having a value of 60 has been generated for URL2. In this example, the likelihood indication may have a value in the range of 1-100: a lower likelihood value means that the corresponding predicted additional content item has a lower likelihood of being requested by the client application in the future, and a higher likelihood value means that the corresponding predicted additional content item has a higher likelihood of being requested by the client application in the future. As such, in this example, URL2 has a higher likelihood of being requested by the client application in the future relative to URL1, and is therefore cached prior to URL1.
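One possible way (among many) to produce such values for the map example is sketched below; the scoring constants and the Manhattan-distance heuristic are illustrative assumptions:

from typing import Tuple

def tile_likelihood(tile: Tuple[int, int], current: Tuple[int, int],
                    pan_direction: Tuple[int, int]) -> int:
    # Likelihood (1-100) that a predicted tile will actually be requested:
    # nearer tiles score higher, and tiles in the pan direction get a bonus.
    dx, dy = tile[0] - current[0], tile[1] - current[1]
    distance = abs(dx) + abs(dy)               # Manhattan distance in tiles
    score = max(10, 80 - 20 * (distance - 1))  # immediate neighbors start at 80
    if dx * pan_direction[0] + dy * pan_direction[1] > 0:
        score += 20                            # user is panning toward this tile
    return min(score, 100)

# Panning east: the eastern neighbor outranks the western one.
assert tile_likelihood((513, 340), (512, 340), (1, 0)) > \
       tile_likelihood((511, 340), (512, 340), (1, 0))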
B. Example cache server embodiments
FIG. 9 illustrates a block diagram of a cache server 902, according to an example embodiment. Cache server 902 is an example of cache server 202 shown in FIG. 2. As shown in FIG. 9, cache server 902 includes a content retriever module 904, a content request parser 906, a content provider module 908, and storage 222. Cache server 902 may pre-cache predicted future content in various ways. For example, in various embodiments, cache server 902 may perform flowcharts 300 and 400.
For example, content request parser 906 may perform step 302 of flowchart 300, where a request for desired content is received from a client application in a user device, the request indicating additional content related to the desired content that may be subsequently requested by the client application. The content request parser 906 may receive and parse the first desired content request 210 to identify any desired content and any predicted additional content. The content request parser 906 sends the first identified desired content and the predicted additional content 912 to the content retriever module 904.
Content retriever module 904 can perform step 304 of flowchart 300, where a request for the indicated additional content is sent to the origin server. As shown in FIG. 9, the content retriever module 904 can send a server request 212 to the origin server indicating the predicted additional content, and can also indicate the desired content (if not already cached). The content retriever module 904 can include both the request for the predicted additional content and the request for the desired content in the server request 212, or send them in separate communications.
Content retriever module 904 can perform step 306 of flowchart 300, where the desired content and additional content are received from the origin server. The content retriever module 904 can receive the server response 214 from the origin server, including the desired content (if requested) and the predicted additional content, or can receive the desired content and the predicted additional content in separate communications.
The content provider module 908 performs step 308 of flowchart 300 where the desired content is sent to the client application at the user device. As shown in FIG. 9, the content retriever module 904 can send the retrieved desired content 914, including the desired content received from the origin server in the server response 214. The content provider module 908 may receive the retrieved desired content 914 and may send the desired content to the client application in the first desired content response 216.
Content retriever module 904 can perform step 310 of flowchart 300, where the additional content is cached. As shown in fig. 9, the content retriever module 904 can cache the predicted additional content in the store 222 as cached content 224.
Content request parser 906 may perform step 402 of flowchart 400 (FIG. 4), where a second request is received from the client application, including a request for additional content. As shown in FIG. 9, the content request parser 906 may receive the second desired content request 218 from the client application. The content request parser 906 may receive and parse the second desired content request 218 to identify any desired content and any predicted additional content. The content request parser 906 sends the second identified desired content and predicted additional content 916 to the content retriever module 904. In the example shown in FIG. 9, the second identified desired content and predicted additional content 916 includes, as its desired content, the additional content that was predicted in the first identified desired content and predicted additional content 912.
Content retriever module 904 and content provider module 908 may perform step 404 of flowchart 400, where the cached content is provided to a client application at the user device. The content retriever module 904 can analyze the second identified desired content and the predicted additional content 916 to determine whether any content requested therein has been cached in the store 222. For example, in one embodiment, the content retriever module 904 may access the cached content map or other data structure described above that maps content identifiers to content cached in the storage 222. If the content retriever module 904 determines that the desired content has been cached in the storage 222, the content retriever module 904 can access the cached content in the storage 222 (e.g., as the cached content 224) and can provide the cached content to the content provider module 908 as the cached content 918. The content provider module 908 may provide the cached content to the client application in the cached content response 220.
In the event that the desired content indicated in the second identified desired content and predicted additional content 916 has not been cached in the storage 222, the content retriever module 904 can request the desired content from the origin server as described above. Further, the second identified desired content and predicted additional content 916 may optionally indicate other additional content that is predicted to be subsequently requested. In such a case, the content retriever module 904 can request the indicated other additional content from the origin server, and can cache it in the storage 222 for subsequent content requests, as described above.
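Putting steps 402 and 404 together, the following sketch shows one way a cache server might serve a request from its cache and pre-cache newly predicted content. It builds on the hypothetical fetch_from_origin helper sketched above; the in-memory dict standing in for the storage 222, and all names, are illustrative assumptions.

```python
cache = {}  # content identifier -> bytes; an in-memory stand-in for storage 222

def serve_request(desired_path, predicted):
    """Return the desired content, from cache when present, and pre-cache
    any predicted additional content that is not already cached."""
    # Likelihood values are ignored here; prioritization is discussed below.
    new_paths = [p for p, _ in predicted
                 if p not in cache and p != desired_path]
    if desired_path in cache:
        body = cache[desired_path]          # cache hit: no origin round trip
        fetched = fetch_from_origin(new_paths) if new_paths else {}
    else:
        # Cache miss: fetch the desired and predicted content together.
        fetched = fetch_from_origin([desired_path] + new_paths)
        body = fetched.pop(desired_path)
    cache.update(fetched)                   # cached for subsequent requests
    return body
```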
Note that, as one skilled in the relevant art will appreciate, the content retriever module 904, the content request parser 906, and the content provider module 908 may be configured to generate requests, receive responses, and so on according to any suitable communication protocol and format, including HTTP messages.
Further, as described above, requests for content to be pre-cached may optionally be prioritized relative to ordinary CDN content requests. For example, FIG. 10 shows a block diagram of the content retriever module 904 of FIG. 9, including a request prioritizer 1002, according to an example embodiment. Request prioritizer 1002 is configured to prioritize requests for content to be pre-cached. For example, in one embodiment, request prioritizer 1002 may operate according to step 1102 shown in FIG. 11. At step 1102, the sending of the request for the indicated additional content to the origin server is prioritized based on the likelihood indication received in the client request. For example, in an embodiment, for each predicted additional content item, request prioritizer 1002 may prioritize the request for that item relative to other content based on the corresponding likelihood indication received from client application 204.
For example, in an embodiment, request prioritizer 1002 may generate and maintain a priority list or other data structure that lists the predicted additional content (e.g., by identifier) and the corresponding likelihood indications. Request prioritizer 1002 may order the list by likelihood indication, or may otherwise compose the list such that the content retriever module 904 requests the predicted additional content in a prioritized manner according to the likelihood indications. The predicted additional content and likelihood indications may be listed for a single client application or for multiple client applications. In this manner, the content retriever module 904 can prioritize requests to the origin server for a single client application or for multiple client applications.
For example, with respect to the example HTTP message illustrated above, request prioritizer 1002 may maintain (e.g., store and update as needed) a list of predicted additional content that includes URL1 and URL2 and their corresponding likelihood indications. Because URL2 has a likelihood indication with a value of 60, URL2 is prioritized higher than URL1, which has a likelihood indication value of 20. Accordingly, in this example, the content retriever module 904 retrieves the content of URL2 from the origin server before retrieving the content of URL1, according to the priority maintained by request prioritizer 1002.
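One simple data structure for such a priority list is a max-priority queue keyed on the likelihood indication, sketched below under the assumption that likelihood indications are integers as in the URL1/URL2 example; the function names are illustrative.

```python
import heapq

pending = []  # heap of (-likelihood, path); negated because heapq is a min-heap

def enqueue_predicted(path, likelihood):
    """Record a predicted content item and its likelihood indication."""
    heapq.heappush(pending, (-likelihood, path))

def next_prefetch():
    """Return the pending predicted item with the highest likelihood."""
    _, path = heapq.heappop(pending)
    return path

# Matching the example above, URL2 (likelihood 60) is fetched before URL1 (20).
enqueue_predicted("URL1", 20)
enqueue_predicted("URL2", 60)
assert next_prefetch() == "URL2"
```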
Further, in various embodiments, the content retriever module 904 can use additional and/or alternative information, such as available storage capacity, available network capacity, available processing (e.g., CPU) capacity, and the like, to prioritize requests for content. Still further, in an embodiment, when requesting the predicted future content, the cache server may send the likelihood indications to the origin server so that the origin server can prioritize the sending of the predicted content to the cache server relative to other content (e.g., prioritize requests for predicted future content below content that is immediately required). As such, in an embodiment, the origin server may include a request prioritizer, similar to request prioritizer 1002, that prioritizes the transmission of predicted content to the cache servers based on the likelihood indications and/or the additional and/or alternative information.
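How such capacity signals might combine with the likelihood indication is left open by the description; the following weighting is a purely hypothetical illustration, not a scheme given in the patent.

```python
def effective_priority(likelihood, free_storage, free_network, free_cpu):
    """Scale the client-supplied likelihood (0-100) by the scarcest of the
    server's free storage, network, and CPU fractions (each 0.0-1.0)."""
    return likelihood * min(free_storage, free_network, free_cpu)

# A likelihood of 60 is discounted to 15 when only 25% CPU headroom remains.
assert effective_priority(60, 0.9, 0.8, 0.25) == 15.0
```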
Example computing device embodiments
Client applications 110a-110d, client application 204, additional content predictor 208, communication interface 602, request formatter 604, additional content prioritizer 702, content retriever module 904, content request parser 906, content provider module 908, request prioritizer 1002, flowchart 300, flowchart 400, flowchart 500, step 802, and step 1102 may be implemented in hardware, software, firmware, or any combination thereof. For example, the client applications 110a-110d, the client application 204, the additional content predictor 208, the communication interface 602, the request formatter 604, the additional content prioritizer 702, the content retriever module 904, the content request parser 906, the content provider module 908, the request prioritizer 1002, the flowchart 300, the flowchart 400, the flowchart 500, the step 802, and the step 1102 may be implemented as computer program code/instructions/logic configured for execution in one or more processors. Alternatively, the client applications 110a-110d, the client application 204, the additional content predictor 208, the communication interface 602, the request formatter 604, the additional content prioritizer 702, the content retriever module 904, the content request parser 906, the content provider module 908, the request prioritizer 1002, the flowchart 300, the flowchart 400, the flowchart 500, the step 802, and the step 1102 may be implemented as hardware logic/electronic circuitry. For example, in an embodiment, one or more of client applications 110a-110d, client application 204, additional content predictor 208, communication interface 602, request formatter 604, additional content prioritizer 702, content retriever module 904, content request parser 906, content provider module 908, request prioritizer 1002, flowchart 300, flowchart 400, flowchart 500, step 802, and step 1102 may be implemented together in a system on a chip (SoC). The SoC may include an integrated circuit chip including one or more of: a processor (e.g., a microcontroller, microprocessor, Digital Signal Processor (DSP), etc.), a memory, one or more communication interfaces, and/or further circuitry for performing its functions and/or embedded firmware.
FIG. 12 depicts an exemplary implementation of a computer 1200 in which embodiments of the present invention may be implemented. For example, each of the origin server 102, cache servers 104a and 104b, user devices 106a-106d, cache server 202, user device 600, and cache server 902 may be implemented in one or more computer systems like computer 1200 that include one or more features of computer 1200 and/or alternative features. Computer 1200 may be a general-purpose computing device in the form of a conventional personal computer, a mobile computer, a server, or a workstation, for example, or computer 1200 may be a special purpose computing device. The description of the computer 1200 provided herein is provided for purposes of illustration, and is not intended to be limiting. As will be appreciated by one skilled in the relevant art, embodiments of the invention may be implemented in other types of computer systems.
As shown in FIG. 12, computer 1200 includes one or more processors 1202, a system memory 1204, and a bus 1206 that couples various system components including the system memory 1204 to the processors 1202. Bus 1206 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. The system memory 1204 includes Read Only Memory (ROM) 1208 and Random Access Memory (RAM) 1210. A basic input/output system 1212 (BIOS) is stored in ROM 1208.
The computer 1200 also has one or more of the following drives: a hard disk drive 1214 for reading from and writing to a hard disk, a magnetic disk drive 1216 for reading from or writing to a removable magnetic disk 1218, and an optical disk drive 1220 for reading from or writing to a removable optical disk 1222 such as a CD-ROM, DVD-ROM, or other optical medium. Hard disk drive 1214, magnetic disk drive 1216, and optical disk drive 1220 are connected to bus 1206 by a hard disk drive interface 1224, a magnetic disk drive interface 1226, and an optical drive interface 1228, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules and other data for the computer. Although a hard disk, a removable magnetic disk and a removable optical disk are described, other types of computer-readable storage media can be used to store data, such as flash memory cards, digital video disks, Random Access Memories (RAMs), Read Only Memories (ROMs), and the like.
Several program modules may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. These programs include an operating system 1230, one or more application programs 1232, other program modules 1234, and program data 1236. The application programs 1232 or program modules 1234 may include, for example, computer program logic (e.g., computer program code) for implementing the client applications 110a-110d, the client application 204, the additional content predictor 208, the communication interface 602, the request formatter 604, the additional content prioritizer 702, the content retriever module 904, the content request parser 906, the content provider module 908, the request prioritizer 1002, the flowchart 300, the flowchart 400, the flowchart 500, step 802, and/or step 1102 (including any of the steps of flowcharts 300, 400, and 500), and/or other embodiments described herein.
A user may enter commands and information into the computer 1200 through input devices such as a keyboard 1238 and pointing device 1240. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processor 1202 through a serial port interface 1242 that is coupled to bus 1206, but may be connected by other interfaces, such as a parallel port, game port, or a Universal Serial Bus (USB).
A display device 1244 is also connected to the bus 1206 via an interface, such as a video adapter 1246. In addition to the monitor, computer 1200 may also include other peripheral output devices (not shown), such as speakers and printers.
Computer 1200 is connected to a network 1248 (e.g., the Internet) through an adapter or network interface 1250, a modem 1252, or other means for establishing communications over the network. The modem 1252, which may be internal or external, may be connected to the bus 1206 via a serial port interface 1242, as shown in FIG. 12, or may be connected to the bus 1206 using another interface type, including a parallel interface.
As used herein, the terms "computer program medium," "computer-readable medium," and "computer-readable storage medium" are used to generally refer to media such as the hard disk associated with hard disk drive 1214, removable magnetic disk 1218, and removable optical disk 1222, as well as other media such as flash memory cards, digital video disks, Random Access Memories (RAMs), Read Only Memories (ROMs), and the like. These computer-readable storage media are distinct from, and non-overlapping with, communication media (they do not include communication media). Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wireless media such as acoustic, RF, infrared and other wireless media. Embodiments are also directed to these communication media.
As indicated above, computer programs and modules (including application programs 1232 and other program modules 1234) may be stored on the hard disk, magnetic disk, optical disk, ROM, or RAM. Such computer programs may also be received via network interface 1250, serial port interface 1242, or any other interface type. Such computer programs, when executed or loaded by an application, enable computer 1200 to implement features of the present invention as discussed herein. Accordingly, such computer programs represent controllers of the computer 1200.
The present invention also relates to a computer program product comprising software stored on any computer usable medium. Such software, when executed in one or more data processing devices, causes the data processing devices to operate as described herein. Embodiments of the present invention employ any computer-usable or computer-readable medium, known now or in the future. Examples of computer-readable media include, but are not limited to, storage devices such as RAM, hard drives, floppy disks, CD-ROMs, DVD-ROMs, Zip disks, tapes, magnetic storage devices, optical storage devices, MEMS (microelectromechanical systems)-based storage devices, nanotechnology-based storage devices, and the like.
VI. Conclusion
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be understood by those of ordinary skill in the relevant art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims. Thus, the scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (16)

1. A method in a cache server, comprising:
receiving from a client application in a user device a request for desired content and an indication of additional content related to the desired content that may be subsequently requested by the client application, the request including a likelihood indication generated by the client application indicating a likelihood that additional content will be subsequently requested by the client application;
prioritizing sending requests to an origin server for the indicated additional content, and sending requests to the origin server for at least the indicated additional content;
receiving the additional content from the origin server;
sending the desired content to the client application at the user device; and
caching the additional content.
2. The method of claim 1, further comprising:
receiving a second request from the client application, the second request comprising a request for the additional content; and
providing the cached additional content to the client application at the user device.
3. The method of claim 2, wherein the second request indicates additional content related to the additional content that may be subsequently requested by the client application, the method further comprising:
sending a request for the indicated additional content to the origin server;
receiving the additional content from the origin server; and
caching the additional content.
4. The method of claim 1, wherein the prioritizing is based on one of the likelihood indication, available storage capacity, available network capacity, or available processing capacity.
5. The method of claim 1, wherein the request for the desired content received from the client application is a hypertext transfer protocol (HTTP) message indicating the additional content in a header.
6. The method of claim 1, wherein the desired content comprises a first image and the additional content is a second image spatially adjacent to the first image, temporally adjacent to the first image, part of the first image, or comprises the first image.
7. The method of claim 1, wherein the request for desired content is received from the client application separately from the indication of additional content.
8. A cache server, comprising:
a content request parser that receives a request for desired content from a client application in a user device and an indication of additional content related to the desired content that may be subsequently requested by the client application, the request including a likelihood indication generated by the client application indicating a likelihood that additional content will be subsequently requested by the client application;
a content retriever module that prioritizes sending a request for the indicated additional content to an origin server, sends a request for at least the indicated additional content to the origin server, receives the additional content from the origin server, and caches the additional content in storage; and
a content provider module that sends desired content to the client application at the user device.
9. The cache server of claim 8, wherein the content request parser receives a second request from the client application, the second request including a request for the additional content, and the content provider module provides the cached additional content to the client application.
10. A system in a cache server, comprising:
means for receiving from a client application in a user device a request for desired content and an indication of additional content related to the desired content that may be subsequently requested by the client application, the request including a likelihood indication generated by the client application indicating a likelihood that additional content will be subsequently requested by the client application;
means for prioritizing sending requests for the indicated additional content to an origin server, and for sending requests for at least the indicated additional content to the origin server;
means for receiving the additional content from the origin server;
means for sending the desired content to the client application at the user device; and
means for caching the additional content.
11. The system of claim 10, further comprising:
means for receiving a second request from the client application, the second request comprising a request for the additional content; and
means for providing the cached additional content to the client application at the user device.
12. The system of claim 11, wherein the second request indicates additional content related to the additional content that may be subsequently requested by the client application, the system further comprising:
means for sending a request for the indicated additional content to the origin server;
means for receiving the additional content from the origin server; and
means for caching the additional content.
13. The system of claim 10, wherein the prioritization is based on one of the likelihood indication, available storage capacity, available network capacity, or available processing capacity.
14. The system of claim 10, wherein the request for the desired content received from the client application is a hypertext transfer protocol (HTTP) message indicating the additional content in a header.
15. The system of claim 10, wherein the desired content comprises a first image and the additional content is a second image spatially adjacent to the first image, temporally adjacent to the first image, part of the first image, or comprises the first image.
16. The system of claim 10, wherein the request for desired content is received from the client application separately from the indication of additional content.
HK13108920.3A 2011-12-16 2013-07-30 A method, server and system in a caching server HK1181930B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/328,444 2011-12-16
US13/328,444 US9294582B2 (en) 2011-12-16 2011-12-16 Application-driven CDN pre-caching

Publications (2)

Publication Number Publication Date
HK1181930A1 HK1181930A1 (en) 2013-11-15
HK1181930B true HK1181930B (en) 2016-07-15


Similar Documents

Publication Publication Date Title
JP6073366B2 (en) Application driven CDN pre-caching
JP2015509229A5 (en)
US10013500B1 (en) Behavior based optimization for content presentation
KR102294326B1 (en) Prefetching application data for periods of disconnectivity
US10331769B1 (en) Interaction based prioritized retrieval of embedded resources
CN103650518B (en) Predictive, multi-layer caching architectures
US10291738B1 (en) Speculative prefetch of resources across page loads
US9967361B2 (en) Physical location influenced caching
CN104796439B (en) Web page push method, client, server and system
US20130007260A1 (en) Access to network content
CN107251525A (en) For supporting the predictive content of mobile device user to prefetch the distributed server architecture of service
CN107197359B (en) Video file caching method and device
WO2012174070A2 (en) Improving access to network content
US9785619B1 (en) Interaction based display of visual effects
US9594846B2 (en) Client side caching
CN118916096A (en) Front-end resource loading intelligent scheduling strategy method
JP2022549076A (en) Methods, systems and programs for improving cacheability of single page applications
US10341454B2 (en) Video and media content delivery network storage in elastic clouds
KR20150011087A (en) Distributed caching management method for contents delivery network service and apparatus therefor
HK1181930B (en) A method, server and system in a caching server
KR20120016335A (en) Web page precaching system and method for offline execution
KR20150010415A (en) Contents delivery network service method and broker apparatus for distributed caching