
GB2543279A - Methods, devices and computer programs for optimizing use of bandwidth when pushing data in a network environment comprising cache servers - Google Patents

Info

Publication number
GB2543279A
GB2543279A
Authority
GB
United Kingdom
Prior art keywords
resource
request
main
received
application server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1518042.5A
Other versions
GB2543279B (en)
GB201518042D0 (en)
Inventor
Fablet Youenn
Bellessort Romain
Ruellan Hervé
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB201518042A priority Critical patent/GB2543279B/en
Publication of GB201518042D0 publication Critical patent/GB201518042D0/en
Publication of GB2543279A publication Critical patent/GB2543279A/en
Application granted granted Critical
Publication of GB2543279B publication Critical patent/GB2543279B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/55Push-based network services
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/02Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to pushing a resource associated with a main resource, such as an HTML resource, from an application server to an intermediary component such as a cache server. On receiving a request from a client device 500, the intermediary component updates the request so that it differs in push policy 525. This policy update may occur, for example, where the intermediary component only wishes to receive information and not content, or where it has amended the client's decision not to receive any pushed data. The updated request is forwarded 535 to an application server which identifies which sub-resources associated with the main resource are to be pushed. The identified supplementary resources, or references to them, are pushed to the intermediary component along with the main resource. The intermediary may then push the resources to the client.

Description

METHODS, DEVICES AND COMPUTER PROGRAMS FOR OPTIMIZING USE OF BANDWIDTH WHEN PUSHING DATA IN A NETWORK ENVIRONMENT COMPRISING CACHE SERVERS
FIELD OF THE INVENTION
The present invention relates in general to transmission of data between an application server and a client device in a communication network comprising cache servers, and in particular to methods, devices and computer programs for optimizing use of bandwidth when pushing data in a communication network environment comprising cache servers.
BACKGROUND OF THE INVENTION
HTTP (HyperText Transfer Protocol) is an application protocol used on top of TCP/IP (Transmission Control Protocol/Internet Protocol) to transfer data over the Internet. HTTP is the protocol used to exchange or transfer hypertext. It functions as a request-response protocol in the client-server computing model. Accordingly, to obtain data, a client device sends an HTTP request to an application server to obtain these data (also referred to as a resource) and, in response, the application server sends back the data to the client device inside an HTTP response.
It is noted that a web page displayed by a web browser is generally made up of many resources, typically about one hundred elements, also referred to as auxiliary resources. For the sake of illustration, these resources can be HTML (HyperText Markup Language) resources (structuring the content of the page), CSS (Cascading Style Sheets) resources (describing the layout and the style of the content), JavaScript resources (providing dynamic behaviour for the page), image resources and other media or font resources.
In order to display a web page, a client device sends a request to an application server for obtaining the main HTML resource. Once it has received this HTML resource, it parses it to identify other resources that are needed to display the web page. Each time a resource is identified, the client sends a corresponding request to the application server. Such a parsing process is repeated for each newly requested resource.
For example, upon parsing a main HTML resource, a client device may identify a link to a CSS stylesheet. Upon identification, it requests this stylesheet from the application server. Then, when parsing the CSS stylesheet after it has been received, the client device may determine that the stylesheet refers to an image to be used as a background of the web page. Therefore, it will also request this image from the application server.
Figure 1, comprising Figures 1a to 1c, illustrates examples of the structure of HTML web pages formed from several resources (i.e. comprising several elements).
As illustrated in Figure 1a, web page 100 is composed of a main HTML resource denoted H1 that uses a JavaScript resource, denoted JS1, for handling some dynamic content, and that uses a CSS resource, denoted CSS1, for styling the HTML content of H1.
The structure of web page 105 illustrated in Figure 1b is more complex than that of web page 100 in that CSS resource CSS1 comprises a sub-resource consisting of an image denoted IMG1, for example an image to be used as the background of the web page 105.
It is to be noted that both web pages 100 and 105 are simplified examples in comparison to actual web pages. However, although the number of resources of actual web pages may be far greater than the number of resources of web pages 100 and 105, the structures of the web pages are very similar.
In order to display these web pages, the client device should first request HTML resource H1 from the application server. In response to this request, the application server sends the HTML resource to the client device.
Once received, HTML resource H1 is parsed to identify the JS1 and CSS1 resources that are needed to display the web page. After they have been identified, they are requested. Likewise, the IMG1 resource is requested after it is identified when parsing the CSS1 resource.
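The sequential discovery process described above (request a resource, parse it, request what it references) can be sketched as follows; the dependency map stands in for real parsing of the HTML and CSS resources of Figure 1b, and all names are illustrative:

```python
# Hypothetical dependency map standing in for real parsing of H1 and CSS1
# (Figure 1b): H1 references JS1 and CSS1, and CSS1 references IMG1.
REFERENCES = {
    "H1": ["JS1", "CSS1"],
    "JS1": [],
    "CSS1": ["IMG1"],
    "IMG1": [],
}

def fetch_page(main_resource):
    """Request the main resource, then request every sub-resource
    discovered while parsing each received resource."""
    received = []
    to_request = [main_resource]
    while to_request:
        resource = to_request.pop(0)             # send request, receive response
        received.append(resource)
        to_request.extend(REFERENCES[resource])  # parse and discover links
    return received

print(fetch_page("H1"))  # H1 first, then its discovered sub-resources
```

Note that IMG1 can only be requested after CSS1 has been received and parsed, which is exactly the extra round trip that the push feature described below removes.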
It is observed here that web pages are often dynamic.
Figure 1c illustrates an updated version of web page 100 illustrated in Figure 1a, denoted 100’. As illustrated, the CSS resource is no longer CSS1. It has been replaced by CSS2. Accordingly, when parsing the HTML resource H1, after the web page has been updated, the client device will no longer request the CSS1 resource but will request the CSS2 resource.
In order to optimize data communication between application servers and client devices, HTTP defines a mechanism allowing a client device or an intermediary component (that may be, in particular, an independent device or a part of a client device) to store resources received from an application server in a temporary memory (known as a cache memory). This allows a client device to reuse resources that have been previously received without requesting them again from an application server.
For the sake of illustration, since a logo of a company is typically displayed on all the pages of the web site of that company, it is advantageously stored after it has been received so as to be reused to display other pages of the same web site.
In order to optimize the transfer of data between an application server and a client device, HTTP/2 makes it possible for an application server to push data that have not been requested by a client device. Pushing a resource is useful in that it makes it possible for a client device to obtain the different resources it needs more quickly.
To that end, an application server can send a push promise to a client device to forewarn the latter that the application server is intending to push a resource to it. This push promise contains a request to which the pushed resource is a response. Basically, the push promise contains a URI (Uniform Resource Identifier) of the resource the application server is intending to push. This enables the client to know what the application server is promising. The push promise is made by sending a frame of the PUSH_PROMISE type. After having sent the push promise, the application server sends the advertised resource in the same way it would send a response corresponding to a client device’s request.
It is to be noted that an application server can only push resources that are in relation to a request sent by a client device. More specifically, a PUSH_PROMISE frame identifies both a client device request and a resource that will be pushed.
Turning back to Figures 1a and 1b, an application server can take advantage of the HTTP/2 push feature by pushing the auxiliary resources to improve the loading time of the page. In such a case, the JS1 and CSS1 resources (Figure 1a) or the JS1, CSS1, and IMG1 resources (Figure 1b) can be pushed in response to the request directed to the H1 resource.
For the sake of illustration, Jetty is an open source Web server that supports the HTTP/2 push feature. To decide which resources to push in response to a request, Jetty records client device requests. Upon receiving a request for a first resource, Jetty monitors for two seconds all the requests coming from the same client device. Any request referring to the first resource (as determined by the 'Referer' header field) is assumed to be useful for the client device to process the first resource and is therefore added to the list of resources to be pushed upon receiving a request for the first resource.
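A minimal sketch of this recording strategy (the two-second window and the 'Referer' header follow the description above; the class and method names are illustrative, not Jetty's actual API):

```python
OBSERVATION_WINDOW = 2.0  # seconds, as in the strategy described above

class PushListRecorder:
    """Learn which resources to push with a first resource by watching
    which follow-up requests name it in their Referer header."""
    def __init__(self):
        self.push_lists = {}  # first resource -> list of resources to push
        self.windows = {}     # first resource -> time its window closes

    def on_request(self, url, referer, now):
        if referer is None:
            # A top-level request: start observing for OBSERVATION_WINDOW.
            self.push_lists.setdefault(url, [])
            self.windows[url] = now + OBSERVATION_WINDOW
        elif referer in self.windows and now <= self.windows[referer]:
            # Request made while processing the first resource: record it.
            self.push_lists[referer].append(url)

    def resources_to_push(self, url):
        return self.push_lists.get(url, [])

rec = PushListRecorder()
rec.on_request("/H1", None, now=0.0)
rec.on_request("/CSS1", "/H1", now=0.5)
rec.on_request("/JS1", "/H1", now=1.0)
rec.on_request("/late.png", "/H1", now=5.0)  # outside the window: ignored
print(rec.resources_to_push("/H1"))          # ['/CSS1', '/JS1']
```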
It is to be noted that if a cache server arranged between a client device and an application server is used to reply to a request from this client device, resources should be pushed in a similar way. A cache server makes it possible to decrease the response time to a client device request. To that end, the cache server passes responses received from application servers to the requesting client devices and stores the responses along with information regarding the corresponding requests. Then, when the same request is received again, the cache server is able to directly reply to the client device by retrieving the stored response. This enables the client device to receive the response faster and also to decrease the data traffic between a cache server and the application server.
The HTTP/2 push feature also makes it possible to increase the responsiveness viewed from the client device end since an application server sends resources to the client device without waiting for the client device to request them.
Figure 2, comprising Figures 2a to 2d, illustrates an example for obtaining data from an application server through a cache server on a standard basis (Figures 2a and 2b) and using the push feature (Figures 2c and 2d).
As illustrated in Figure 2a, the round-trip-time for client device 200 to obtain data from application server 210 via cache server 205 is equal to 300 milliseconds (100 milliseconds from the client device to the cache server times two (going back and forth) plus 50 milliseconds from the cache server to the application server, times two (going back and forth)).
If the same data are cached in cache server 205, the round-trip-time for client device 200 to obtain the data is equal to 200 milliseconds (100 milliseconds from the client device to the cache server times two (going back and forth)), as illustrated in Figure 2b.
When using the push feature, the round-trip-time for client device 200 to obtain the same data from application server 210 is equal to 150 milliseconds (50 milliseconds from the application server to the cache server plus 100 milliseconds from the cache server to the client device), as illustrated in Figure 2c.
Finally, when using the push feature, the round-trip-time for client device 200 to obtain the same data from cache server 205 is equal to 100 milliseconds (100 milliseconds from the cache server to the client device), as illustrated in Figure 2d.
Accordingly, the time period needed to obtain data is significantly reduced when using the push feature and when the data to be obtained are cached.
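The four round-trip times above follow directly from the two one-way link latencies used in Figure 2:

```python
CLIENT_CACHE_MS = 100  # one-way latency, client device <-> cache server
CACHE_SERVER_MS = 50   # one-way latency, cache server <-> application server

# Figure 2a: request and response both traverse both links.
standard_uncached = 2 * CLIENT_CACHE_MS + 2 * CACHE_SERVER_MS  # 300 ms
# Figure 2b: the cache replies directly, so only the first link is used.
standard_cached = 2 * CLIENT_CACHE_MS                          # 200 ms
# Figure 2c: the server pushes, so each link is traversed only once.
push_uncached = CACHE_SERVER_MS + CLIENT_CACHE_MS              # 150 ms
# Figure 2d: the cache pushes directly to the client.
push_cached = CLIENT_CACHE_MS                                  # 100 ms
```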
This is of particular interest in the case of DASH (Dynamic Adaptive Streaming over HTTP) for enabling fast start and fast seek as well as for obtaining fast MPD (Media Presentation Description) updates.
However, combining a cache server and the HTTP/2 push feature is complex since an application server (having the knowledge of which resources are to be pushed with regard to a particular request) is not contacted by a cache server to respond to a request (if the response is cached). Therefore, in order for a client device to receive pushed resources when the cache server is replying to a request, the knowledge of the application server has to be transmitted to the cache server. This leads to duplicating the knowledge and to increasing the processing load of the cache server.
It should be noted that the HTTP/2 push feature is defined hop-by-hop: there are no constraints in the HTTP/2 specification for establishing a relation between the resources pushed by an application server to a cache server in response to a given request and the resources pushed by the cache server to a client in response to the same request. The cache server may push the same resources, it may push only some of them, or it may push other resources.
The applicant has developed a solution for pushing data, in particular for pushing cached data, so that cache servers are able to replicate push decisions made by application servers.
According to this solution, the behaviour of the push mechanism implemented in an application server is replicated in cache servers, without its complexity, so that the cache servers can answer client devices in lieu of application servers with cached versions of the resources. Accordingly, not only are the resources cached but also the links between main resources and pushed resources. More precisely, each link associates a main resource with at least one pushed resource and comprises an item of information (that may also be called a type characterizing the link) provided by an application server, directly or via another cache server.
Specific headers can be used in the requests and in the responses. More precisely, a 'Push-Policy' header can be introduced in responses sent by application servers. It identifies the push policy used by the application server to select a resource to push.
An ‘Accept-Push-Policy’ header corresponding to such a Push-Policy header can be used by a client device to indicate which kind of pushed resources the client device agrees to receive.
This push-policy header is useful for cache servers to cache not only the resources but also the links between resources (main resources and pushed resources) as well as the validity of the links.
Cache servers may use the ‘Accept-Push-Policy’ headers of client device requests to identify links between cached main resources and auxiliary resources that can be pushed. Accordingly, each link that is identified as a function of a received request leads to pushing the corresponding auxiliary resources to the corresponding client device.
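The caching of links alongside resources can be sketched as follows; the 'Accept-Push-Policy' header name is taken from the text, while the policy value and the data layout are illustrative assumptions:

```python
class LinkAwareCache:
    """Cache that stores, alongside each main resource, the links to the
    resources an application server pushed with it, keyed by push policy,
    so that push decisions can be replicated when serving from cache."""
    def __init__(self):
        self.responses = {}  # url -> response body
        self.links = {}      # (url, policy) -> list of pushed resource urls

    def store_response(self, url, body, push_policy, pushed_urls):
        self.responses[url] = body
        self.links[(url, push_policy)] = list(pushed_urls)

    def reply(self, url, request_headers):
        """Serve a cached main resource and decide which auxiliary
        resources to push with it, based on the client's header."""
        policy = request_headers.get("Accept-Push-Policy")
        body = self.responses.get(url)
        to_push = self.links.get((url, policy), [])
        return body, to_push

cache = LinkAwareCache()
# Learned from an earlier exchange with the application server
# (the "web-page" policy value is an illustrative assumption).
cache.store_response("/H1", "<html>...</html>", "web-page", ["/CSS1", "/JS1"])
body, pushed = cache.reply("/H1", {"Accept-Push-Policy": "web-page"})
```

With this structure the cache answers the request for /H1 and pushes /CSS1 and /JS1 in lieu of the application server; a request carrying a different (or no) push policy yields the cached body but nothing to push.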
Although such a solution has proven to be efficient, there is a need for optimizing use of bandwidth between cache servers and application servers while offering push features for pushing cached resources.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
In this context, there is provided a solution for optimizing use of bandwidth when pushing data from an application server to a client device through one or more cache servers.
According to a first object of the invention, there is provided a method for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the method being carried out in an intermediary component comprising a cache memory, the method comprising: receiving a request for a main resource; updating the received request for the main resource so that the received request and the updated request differ at least in a push policy information, the updated request enabling an application server to apply a push policy for identifying the at least one resource associated with the main resource; transmitting the updated request for the main resource to an application server; and further to a response to the transmitted updated request, pushing the at least one resource associated with the main resource to a component, wherein the component to which the at least one resource is pushed is a component from which the received request is received or a component from which another request for the main resource is received.
Therefore, the method of the invention makes it possible to optimize the use of the bandwidth between cache servers and application servers and to improve the responsiveness for pushing resources in a communication network comprising cache servers.
In an embodiment, the step of updating the received request comprises a step of modifying a push policy information of the received request.
In an embodiment, the step of updating the received request comprises a step of withdrawing a push policy information of the received request.
In an embodiment, the push policy information that differentiates the received request from the updated request makes it possible for an application server to determine how the at least one resource associated with the main resource is to be transmitted to the intermediary component.
In an embodiment, updating the received request is based on a latency between the intermediary component and the application server to which the updated request is transmitted, a history of resources pushed redundantly to the intermediary component by the application server to which the updated request is transmitted, a status of the cache memory of the intermediary component, a state of a connection between the intermediary component and a component from which the received request is received, and/or a state of a component having generated or transmitted the received request.
In an embodiment, the push policy information that differentiates the received request from the updated request indicates that only headers of the at least one resource associated with the main resource are to be transmitted to the intermediary component, by the application server to which the updated request is sent, for transmitting the at least one resource associated with the main resource.
In an embodiment, the push policy information that differentiates the received request from the updated request indicates that headers of the at least one resource associated with the main resource and a portion of the content of the at least one resource associated with the main resource are to be transmitted to the intermediary component, by the application server to which the updated request is sent, for transmitting the at least one resource associated with the main resource.
In an embodiment, the push policy information that differentiates the received request from the updated request indicates that headers and content of the at least one resource associated with the main resource are to be transmitted to the intermediary component, by the application server to which the updated request is sent, for transmitting the at least one resource associated with the main resource.
In an embodiment, the intermediary component is a browser network layer and wherein the request for the main resource is received from the execution of a script or from the parsing of a web page in a browser.
In an embodiment, the method further comprises a step of receiving an indication from the application server to which the updated request is sent that indicates that the at least one resource associated with the main resource is to be pushed.
In an embodiment, the method further comprises a step of receiving an indication from the application server to which the updated request is sent, that indicates that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by the intermediary component.
In an embodiment, the method comprises a further step of requesting the at least one resource associated with the main resource.
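The request-updating step of the first object, in which the forwarded request differs from the received one at least in its push policy information, can be sketched as follows; the header values 'headers-only', 'none' and 'default' are illustrative assumptions, not values defined by the text:

```python
def update_request(headers, cached_urls, policy_header="Accept-Push-Policy"):
    """Sketch of the request-updating step carried out by the
    intermediary component before forwarding to the application server.
    The policy values used here are illustrative assumptions."""
    updated = dict(headers)
    if cached_urls:
        # The cache already holds the sub-resources: ask the application
        # server for push information (headers) rather than full content.
        updated[policy_header] = "headers-only"
    elif updated.get(policy_header) == "none":
        # Override a client decision not to receive pushed data, so the
        # cache can still learn and store the pushed sub-resources.
        updated[policy_header] = "default"
    return updated
```

In the first branch the intermediary saves bandwidth by receiving only references; in the second it amends the client's push policy, matching the two update cases given in the abstract.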
According to a second object of the invention, there is provided a method for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the method being carried out in an application server, the method comprising: receiving a request for a main resource; identifying at least one resource associated with the main resource, as a function of a push policy; and transmitting an indication to indicate that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by an intermediary component.
Therefore, the method of the invention makes it possible to optimize the use of the bandwidth between cache servers and application servers and to improve the responsiveness for pushing resources in a communication network comprising cache servers.
In an embodiment, the received request comprises an indication of the push policy used to identify the at least one resource.
In an embodiment, the indication that indicates that the at least one resource associated with the main resource is to be pushed further indicates that the at least one resource associated with the main resource is to be requested by an intermediary component and wherein the indication that indicates that the at least one resource associated with the main resource is to be pushed is determined as a function of the indication of the push policy received in the received request.
According to a third object of the invention, there is provided a device for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the device comprising a cache memory and a processor configured for carrying out the steps of: receiving a request for a main resource; updating the received request for the main resource so that the received request and the updated request differ at least in a push policy information, the updated request enabling an application server to apply a push policy for identifying the at least one resource associated with the main resource; transmitting the updated request for the main resource to an application server; and further to a response to the transmitted updated request, pushing the at least one resource associated with the main resource to a component, wherein the component to which the at least one resource is pushed is a component from which the received request is received or a component from which another request for the main resource is received.
Therefore, the device of the invention makes it possible to optimize the use of the bandwidth between cache servers and application servers and to improve the responsiveness for pushing resources in a communication network comprising cache servers.
In an embodiment, the processor is further configured so that the step of updating the received request comprises a step of modifying a push policy information of the received request.
In an embodiment, the processor is further configured so that the step of updating the received request comprises a step of withdrawing a push policy information of the received request.
In an embodiment, the push policy information that differentiates the received request from the updated request makes it possible for an application server to determine how the at least one resource associated with the main resource is to be transmitted to the device.
In an embodiment, the processor is further configured so that updating the received request is based on a latency between the device and the application server to which the updated request is transmitted, a history of resources pushed redundantly to the device by the application server to which the updated request is transmitted, a status of the cache memory of the device, a state of a connection between the device and a component from which the received request is received, and/or a state of a component having generated or transmitted the received request.
In an embodiment, the processor is further configured so that the push policy information that differentiates the received request from the updated request indicates that only headers of the at least one resource associated with the main resource are to be transmitted to the device, by the application server to which the updated request is sent, for transmitting the at least one resource associated with the main resource.
In an embodiment, the processor is further configured so that the push policy information that differentiates the received request from the updated request indicates that headers of the at least one resource associated with the main resource and a portion of the content of the at least one resource associated with the main resource are to be transmitted to the device, by the application server to which the updated request is sent, for transmitting the at least one resource associated with the main resource.
In an embodiment, the processor is further configured so that the push policy information that differentiates the received request from the updated request indicates that headers and content of the at least one resource associated with the main resource are to be transmitted to the device, by the application server to which the updated request is sent, for transmitting the at least one resource associated with the main resource.
In an embodiment, the processor is further configured so as to carry out a step of receiving an indication from the application server to which the updated request is sent that indicates that the at least one resource associated with the main resource is to be pushed.
In an embodiment, the processor is further configured so as to carry out a step of receiving an indication from the application server to which the updated request is sent, that indicates that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by the device.
In an embodiment, the processor is further configured so as to carry out a further step of requesting the at least one resource associated with the main resource.
According to a fourth object of the invention, there is provided a server for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the server comprising a processor configured for carrying out the steps of: receiving a request for a main resource; identifying at least one resource associated with the main resource, as a function of a push policy; and transmitting an indication to indicate that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by an intermediary component.
Therefore, the server of the invention makes it possible to optimize the use of the bandwidth between cache servers and application servers and to improve the responsiveness for pushing resources in a communication network comprising cache servers.
In an embodiment, the received request comprises an indication of the push policy used to identify the at least one resource.
In an embodiment, the processor is further configured so that the indication that indicates that the at least one resource associated with the main resource is to be pushed further indicates that the at least one resource associated with the main resource is to be requested by an intermediary component and wherein the indication that indicates that the at least one resource associated with the main resource is to be pushed is determined as a function of the indication of the push policy received in the received request.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium, and in particular a suitable tangible carrier medium or suitable transient carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1, comprising Figures 1a to 1c, illustrates examples of the structure of HTML web pages formed from several resources;
Figure 2, comprising Figures 2a to 2d, illustrates an example for obtaining data from an application server through a cache server on a standard basis (Figures 2a and 2b) and using the push feature (Figures 2c and 2d);
Figure 3, comprising Figures 3a and 3b, illustrates an example for obtaining different data from an application server through a cache server when no data are cached in the cache server (Figure 3a) and when items of data are cached in the cache server (Figure 3b);
Figure 4 illustrates steps of an example of algorithm carried out in an application server for processing a request and for generating a response to this request according to embodiments of the invention;
Figure 5 illustrates steps of an example of algorithm carried out in a cache server for processing a request according to embodiments of the invention;
Figure 6 illustrates steps of an example of algorithm carried out in a cache server for processing a response received from an application server or another cache server after a request has been sent to this application server or cache server, according to embodiments of the invention;
Figure 7 illustrates an example of use of embodiments of the invention in a web runtime environment;
Figure 8 illustrates steps of an example of algorithm carried out in a cache server for processing a request in a case according to which the cache server requests an application server to receive only headers of sub-resources to be pushed and not the content of these sub-resources, according to embodiments of the invention; and

Figure 9 is a schematic illustration of devices according to embodiments.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
According to general embodiments, requests received from client devices by cache servers are updated before being transmitted to application servers so that the latter can determine which sub-resources are to be pushed. This makes it possible for application servers to indicate to cache servers which sub-resources are to be pushed, without actually pushing the sub-resources to be pushed. Moreover, such modification of the requests makes it possible for the application servers to send only sub-resources to be pushed that are not memorized within a cache memory of the cache servers.
To that end, cache server specific information can be added to incoming requests so as to control how an application server should push sub-resources. It is to be noted that except in cases according to which the cache server knows exactly what is to be pushed, the added information is not used to control what is to be pushed (as it is the client device which may control that). The added information can be used by an application server to determine whether the sub-resources to be pushed should be pushed as regular pushed sub-resources (i.e. a response including a response header and the sub-resource content as a response body) or as sub-resource references (i.e. a response including the response header but not the sub-resource content).
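For the sake of illustration, the distinction between a regular pushed sub-resource and a sub-resource reference may be sketched as follows (the function name, object shapes and the `asReference` flag are illustrative assumptions, not part of any protocol):

```javascript
// Build the data an application server sends for one sub-resource to push.
// When `asReference` is true, only the response header block is sent
// (a sub-resource reference); otherwise the content is included as the
// response body (a regular pushed sub-resource).
function buildPushedResponse(subResource, asReference) {
  const response = {
    headers: {
      ':status': 200,
      'content-type': subResource.contentType,
      'content-length': String(subResource.content.length),
    },
  };
  if (!asReference) {
    response.body = subResource.content; // regular pushed sub-resource
  }
  return response;
}
```

A reference carries the same header block as a full push, so the cache server can later request the body explicitly if it is not already stored.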
According to general embodiments of the invention, the cache servers and application servers (or end server) are to be considered in their general meaning. A cache server is situated in between a client device and an application server. It is to be noted that a client device may be another cache server. Similarly, an application server may be another cache server.
It is indeed typical of current web infrastructure to have several cache servers between the actual client device (e.g. a web browser running on a smartphone) and the actual final server (typically a server generating the requested response without forwarding the corresponding request to another processing entity).
The terms cache server may also encompass different applications:
- a cache server may be a usual web cache proxy that can be used in an existing web infrastructure or in content delivery networks (CDNs);
- a front-end server that performs load balancing for back-end servers and general optimizations, such as a reverse-proxy;
- a software proxy between applications and a network, for example the module responsible for handling the HTTP cache and making HTTP requests from parsed HTML or JavaScript in a browser.
Figure 3, comprising Figures 3a and 3b, illustrates an example for obtaining different data from an application server through a cache server when no data are cached in the cache server (Figure 3a) and when items of data are cached in the cache server (Figure 3b).
As illustrated in Figure 3a, a first client device denoted 300 sends a request to a cache server denoted 305 for obtaining resources corresponding to a first URL (Uniform Resource Locator), denoted URL1. The request is forwarded to an end server (i.e. an application server) denoted 310.
In response, the end server transmits the requested resource (resource 1) to the cache server and pushes two sub-resources (resources 2 and 3) associated with resource 1. These resources are then forwarded to the client device.
Before being transmitted to the client device, the resources can be stored by the cache server so as to optimize response time when these resources are requested again, at a later stage, by another client device, as explained above.
As illustrated in Figure 3b, a second client device denoted 300’ sends a request to cache server 305 for obtaining resources corresponding to a second URL denoted URL4. The request is forwarded to end server 310.
For the sake of illustration, it is assumed that the steps described by reference to Figure 3b are carried out after those described by reference to Figure 3a. Accordingly, as shown in Figure 3b, cache server 305 stores resources 1, 2, and 3 in its cache memory when it receives the request from client device 300’.
In response to the received request, the end server may transmit the requested resource (resource 4) to the cache server and may push two sub-resources (resources 2 and 5) associated with resource 4 so that these resources can be forwarded to the client device and stored within the cache memory of cache server 305.
Alternatively, in response to the received request, the end server may transmit the requested resource (resource 4) to the cache server and push sub-resource 5 as well as a reference to sub-resource 2, which is stored within the cache memory of cache server 305, so that the latter transmits the requested resource (resource 4) as well as the sub-resources associated with this resource (i.e. resources 2 and 5) to client device 300’.
Transmitting only a reference to sub-resource 2 makes it possible for end server 310 to control which resources are to be pushed in relation with a requested resource while optimizing use of bandwidth between the end server and the cache server.
Transmitting a resource content can be done by using a response of the type corresponding to a request of the GET type while transmitting a resource reference can be done by using a response of the type corresponding to a request of the HEAD type.
It is to be noted that response headers, especially when using header compression, are usually small in size in comparison with the resource content. Therefore, sending a pushed response header allows a cache server to be notified of what should be pushed to a client device without transmitting a large amount of data, and thus allows optimizing use of bandwidth.
If a cache server already has a resource in its cache memory (i.e. a resource previously pushed or regularly obtained through a GET or HEAD request), the cache server may ignore newly received HEAD-pushed information directed to the same resource. It may also use that information to update its cache, for instance by extending the freshness of the corresponding resource.
If a cache server does not have a sub-resource to be pushed in its cache memory, the cache server requests this sub-resource from an application server and retrieves it after an additional round trip. It is observed that although such a round trip is generally not noticeable, it may be noticeable by client devices in particular cases. Indeed, such a round trip is generally not noticeable because the cache server has data to transmit (the main resource response and some sub-resources to be pushed) and the round trip between the cache server and the application server is smaller (often significantly) than the total round trip between the client device and the application server.
In order to avoid a noticeable round trip time, information guiding how sub-resources to be pushed should be transmitted by the application server is advantageously determined as a function of the network state and of the state of the cache memory, as described hereafter. It is to be noted that although pushing mostly applies to sub-resources, other resources may be pushed. For example, resources to be pushed may correspond to the next web pages most people go to after the current web pages.
As mentioned above, according to embodiments of the invention, cache servers add specific information to incoming requests so as to control how an application server should push sub-resources but not to control which sub-resources are to be pushed.
As an exception, cache servers may modify a push policy of a received request to indicate which sub-resources are to be pushed, for example in the case where a cache server receives a request comprising the push policy denoted ‘Accept-Push-Policy: no’, meaning that the client device does not want to receive any pushed sub-resource. In such a case, a cache server may decide to benefit from this request to prefetch sub-resources that may be pushed. To avoid increasing the amount of data transmitted from the application server to the cache server, the latter may decide to change the information in the ‘Accept-Push-Policy’ header so as to receive information regarding sub-resources which should be pushed while asking the application server not to transmit the content of the sub-resources to be pushed.
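Such a rewrite may be sketched as follows (the combined ‘sub-resources;head-only’ header value is an illustrative assumption based on the policy and transmission-mode examples given in the text):

```javascript
// Sketch of a cache server rewriting an incoming 'Accept-Push-Policy: no'
// header so that it can prefetch: the application server is asked to
// describe pushable sub-resources while sending headers only, so no
// sub-resource content is transmitted to the cache server.
function rewriteForPrefetch(requestHeaders) {
  const headers = { ...requestHeaders }; // do not mutate the original request
  if (headers['accept-push-policy'] === 'no') {
    headers['accept-push-policy'] = 'sub-resources;head-only';
  }
  return headers;
}
```

Any other policy value is forwarded unchanged, since the cache server does not control which sub-resources are to be pushed.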
The additional information that is added by cache servers to control how an application server should push sub-resources may take the form of new headers. It may also be conveyed within the ‘Accept-Push-Policy’ header as one or several parameters.
This additional information that is added by a cache server (and that may also be added by a client device) may take the form of a list of name-value pairs. This list may be encoded in the JSON (JavaScript Object Notation) format or another textual format (JavaScript is a trademark). This type of information may be put as an HTTP header value. In the context of a WebSocket connection that would be based on the same principles, this type of information may be defined as follows: • SchemeID: indicates the protocol scheme used to interpret this message; • PushType: indicates the push type and has the JSON name “PushType”; • PushParams: indicates additional parameters for the push directive. The JSON names of allowed parameters are “PushCount” and “PushDuration”; • URL: indicates the URL of the requested resource. The corresponding JSON parameter is “url”.
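Such a directive may be serialized as a JSON object as sketched below (the ‘SchemeID’ value, the push type string and the URL are illustrative assumptions; the parameter names follow the list above):

```javascript
// Encode the push directive described above as a JSON string suitable
// for a WebSocket message.
function encodePushDirective(url, pushType, params) {
  return JSON.stringify({
    SchemeID: 'push-policy/1.0', // assumed scheme identifier
    PushType: pushType,
    PushParams: {
      PushCount: params.pushCount,
      PushDuration: params.pushDuration,
    },
    url: url,
  });
}
```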
Naturally, an ‘Accept-Push-Policy’ header may comprise a list of push policies. This list of policies may be ordered according to a ‘q’ parameter that indicates the preferences of the client device. For the sake of illustration, ‘Accept-Push-Policy: policy1, policy2;q=0.8, policy3;q=0.5’ would mean that a client would prefer policy1, or policy2 if policy1 is not available, or policy3 if policy1 and policy2 are not available. In a WebSocket communication, the ‘Accept-Push-Policy’ information would be sent as a JSON object. It could be a JSON array of three objects: one object related to policy1, one object related to policy2 with an additional q parameter equal to 0.8, and one object related to policy3 with an additional q parameter equal to 0.5. Alternatively, the JSON array may be ordered according to the q value.
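Parsing such a header into an ordered preference list may be sketched as follows (a minimal illustration; real HTTP header parsing would also need to handle quoting and whitespace variants):

```javascript
// Parse an 'Accept-Push-Policy' value such as
// 'policy1, policy2;q=0.8, policy3;q=0.5' into a list of
// { name, q } objects ordered by decreasing preference.
// A policy without a 'q' parameter defaults to q = 1.
function parsePushPolicies(headerValue) {
  return headerValue
    .split(',')
    .map((entry) => {
      const [name, ...params] = entry.trim().split(';');
      let q = 1;
      for (const p of params) {
        const [key, value] = p.trim().split('=');
        if (key === 'q') q = parseFloat(value);
      }
      return { name: name.trim(), q };
    })
    .sort((a, b) => b.q - a.q); // stable sort keeps header order for equal q
}
```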
Figure 4 illustrates steps of an example of algorithm carried out in an application server for processing a request and for generating a response to this request, according to embodiments of the invention.
Upon reception of a request from a cache server (step 400), the request being generated by the cache server or being received by the latter from a client device or another cache server, the application server retrieves the requested resource (or main resource) and may start transmitting this requested resource (step 405).
The requested resource can be stored locally or can be obtained by the application server from a remote data repository.
Since the application servers are responsible for determining whether or not sub-resources should be pushed and for identifying the sub-resources to be pushed, a test is carried out for determining whether sub-resources are to be pushed or not (step 410).
This can be done in parallel with the transmission of the requested resource.
Determining whether or not sub-resources should be transmitted is based on the received request. More precisely, the sub-resources to be transmitted, if any, may be identified as a function of the requested resource or URL, of an application-specific header, like a push-specific header that may comprise push information such as a push policy (e.g. the ‘Accept-Push-Policy’ header), and/or of other parameters.
If no sub-resource is to be pushed, the requested resource is transmitted and the process ends (step 415).
On the contrary, if sub-resources are to be pushed, they are identified (step 420). As mentioned above, they may be identified as a function of parameters of the received request.
Once the sub-resources to push are identified, they are retrieved and corresponding push promises may be sent to the cache server that sent the received request.
Next, data to be sent in relation with the sub-resources to be pushed are identified and sent (steps 425 and 430).
At this time, the requested resource may be sent entirely (i.e. step 415 may occur while steps 420 to 430 are carried out).
The data to be sent in relation with the sub-resources to be pushed may correspond to the pushed resource response headers only. Alternatively, they may comprise a part of the content of the sub-resources to be pushed, for instance the first 1,000 bytes of each sub-resource to be pushed. Still alternatively, they may comprise the full content of the sub-resources to be pushed (which may be a default strategy).
According to particular embodiments, the data to be sent is determined as a function of request header information that informs the application server of the needs of the cache server or the client device.
The request header information may be part of an ‘Accept-Push-Policy’ header. For the sake of illustration, setting the ‘Accept-Push-Policy’ header to ‘policy1;head-only’ may indicate that the requester (e.g. the client device) wants an application server to push sub-resources according to the push policy denoted ‘policy1’, but that only the headers of pushed sub-resources should be sent (i.e. the sub-resource content should not be sent).
Still for the sake of illustration, a specific ‘Push-Policy-Transmission’ header may be set to ‘1,000’ so as to indicate that the requester may only want to retrieve 1,000 bytes in total of the pushed sub-resource content. In such a case, the application server may decide to send only a portion of the content of the sub-resources, for example using byte range requests (e.g. as defined in http://www.rfc-editor.org/rfc/rfc7233.txt).
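The application server side of this selection (steps 425 and 430) may be sketched as follows, using the two headers introduced above (the header names follow the examples in the text; treating the budget as a per-resource prefix length is an illustrative simplification):

```javascript
// Select which data an application server sends for one sub-resource
// to be pushed: headers only, a leading portion of the content, or the
// full content (the default strategy).
function selectPushData(subResource, requestHeaders) {
  const policy = requestHeaders['accept-push-policy'] || '';
  if (policy.includes('head-only')) {
    return { headers: subResource.headers }; // response header block only
  }
  const budget = parseInt(requestHeaders['push-policy-transmission'], 10);
  if (!Number.isNaN(budget) && subResource.content.length > budget) {
    // Send only a leading byte range of the content.
    return { headers: subResource.headers, body: subResource.content.slice(0, budget) };
  }
  return { headers: subResource.headers, body: subResource.content };
}
```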
It is to be noted that information used to determine how sub-resources should be transmitted may be based on persistent information negotiated between a client device and an application server that can be stored within a cookie or a server configuration file.
Figure 5 illustrates steps of an example of algorithm carried out in a cache server for processing a request according to embodiments of the invention.
After having received a request from a client device or from another cache server (step 500), a first test is performed to determine whether or not the requested resource (or main resource) is already stored in the cache memory (step 505). According to particular embodiments, this step further comprises determining whether or not sub-resources associated with the requested resource are to be pushed and whether or not they are stored in the cache memory.
If the requested resource (and the sub-resources to be pushed if any) is stored in the cache memory, another test is performed to determine whether or not the requested resource (or main resource) stored in the cache memory is valid (step 510). Again, according to particular embodiments, this step further comprises determining whether or not sub-resources associated with the requested resource, to be pushed, are valid.
If the requested resource (and the sub-resources to be pushed if any) is stored in the cache memory and is valid, the requested resource (and the sub-resources to be pushed if any) is transmitted to the requester (step 515).
On the contrary, if the requested resource is not stored in the cache memory or is not valid, the request is to be forwarded to an application server or to another cache server. If the requested resource is stored in the cache memory but is not valid, the request to be forwarded to the application server or to the other cache server may be a conditional request.
According to particular embodiments, the same applies if the sub-resources associated with the requested resource, to be pushed, if any, are not stored in the cache server or are not valid.
Before forwarding the request to the application server or to the other cache server, the push policy to be used is determined and a test is performed to determine whether or not the push policy associated with the request should be modified so as to correspond to the push policy to be used (step 520).
If the push policy associated with the request should be modified, the new push policy is set in the request to be forwarded (step 525).
Modifying the push policy associated with a request may happen, for example, when a client device does not want to receive any sub-resource to be pushed (for example by setting the ‘Accept-Push-Policy’ header to the ‘Accept-Push-Policy: no’ value). Despite this setting, the cache server may be interested in receiving pushed sub-resources as a way to prefetch sub-resources.
Accordingly, the cache server may set the ‘Accept-Push-Policy’ header to the ‘sub-resources’ value or may remove the ‘Accept-Push-Policy: no’ value from the ‘Accept-Push-Policy’ header so that the application server is free to pick the best push strategy for the requested resource.
Since the pushed sub-resources may not be useful to the client device, the cache server may only want to retrieve the pushed sub-resource headers, not the sub-resource content.
Next, it is determined how the sub-resources to be pushed are transmitted, that is to say how data related to the sub-resources to be pushed are selected (step 530). As mentioned above, these data may comprise only the sub-resource headers, the sub-resource headers and a portion of the sub-resource content, or the sub-resource headers and the sub-resource content.
The way the sub-resources to be pushed are transmitted is referred to as the push resource transmission means or the push resource transmission mode. According to particular embodiments, the transmission mode is determined as a function of the latency between the cache server and the application server, of the history of sub-resources pushed redundantly, of the cache status and of the state of the connection (e.g. bandwidth and latency) between the client device and the cache server.
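One possible heuristic combining these criteria is sketched below (the thresholds and the cache-fill signal are illustrative assumptions, not values taken from the text):

```javascript
// Choose a push resource transmission mode from the latency to the
// application server, the latency to the client device, and how full
// the cache memory is: a short server round trip plus a well-filled
// cache favours 'head-only'; a long server round trip favours 'full'.
function decideTransmissionMode(serverRttMs, clientRttMs, cacheFillRatio) {
  if (serverRttMs <= clientRttMs / 4 && cacheFillRatio > 0.5) {
    return 'head-only'; // likely cached already; missing content is cheap to fetch
  }
  if (serverRttMs >= clientRttMs) {
    return 'full'; // an extra round trip would be visible to the client
  }
  return 'partial'; // receive some content, fetch the rest explicitly
}
```

The two examples that follow (small latency with a full cache, and high latency) correspond to the ‘head-only’ and ‘partial’/‘full’ branches respectively.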
According to the previous example directed to prefetching, the cache server does not want to receive any pushed sub-resource content. Therefore, in such a case, the sub-resources should be pushed in a transmission mode according to which only the sub-resource headers are transmitted, that is to say using the ‘head-only’ transmission mode.
Additional policies may be defined as a combination of these different policies. For example, a ‘specific-sub-resources’ policy may be defined as an extension to the sub-resource policy to fully push resources that are specific to the requested document (i.e. the resources that are only linked directly or indirectly to the document) and to push resources that are not specific to the document in ‘head-only’ mode. This 'specific-sub-resources' policy may also be defined to push only the resources specific to the requested document.
While it is most likely that the resources that are specific to the requested document are not stored in the cache memory, the resources that may be linked to other documents have higher chances of being stored in the cache memory. Such a policy may for instance be used by a proxy when forwarding a client device request as a replacement of the ‘sub-resources’ policy that is used in the original client request.
According to another example, the cache server is aware of a small latency between itself and the application server and of a high filling level of its cache memory, which is already quite full of resources received from the application server. In such a case, there is a risk that the application server pushes sub-resources that have already been pushed, i.e. that are already stored within the cache memory of the cache server. Based on these criteria, the cache server may decide that sub-resources should be pushed according to the ‘head-only’ transmission mode.
Accordingly, the cache server will be able to send the main resource and the pushed sub-resources stored in its cache memory. Regarding sub-resources to be pushed whose content is missing from the cache memory, the cache server explicitly requests the corresponding content and sends it to the client device as soon as it is received from the application server. Since the latency between the cache server and the application server is generally small, there is generally no visible latency from the client perspective.
Still according to another example, the cache server is aware of a high latency between the cache server and the application server, for example a latency as high as the latency between the cache server and the client device. In such a case, the cache server may decide to receive the content of some of the sub-resources to be pushed, but not all content, although its cache memory is full of sub-resources received from the application server. This limits the risk of wasting application server bandwidth while ensuring that the cache server is able to send enough data to the client device while explicitly retrieving missing content.
Next, after having determined how the sub-resources to be pushed are transmitted, that is to say how data related to the sub-resources to be pushed are selected, the request is updated (if it is to be updated), stored, and sent to an application server or to another cache server (step 535).
Processing of the response to the request sent to the application server or to another cache server is described by reference to Figure 6.
Figure 6 illustrates steps of an example of algorithm carried out in a cache server for processing a response received from an application server or another cache server after a request has been sent to this application server or cache server, according to embodiments of the invention.
As illustrated, first steps (steps 500 and 540) are directed to receiving a request from a client device or from another cache server and to transmitting this request (or this request after it has been updated) to an application server or another cache server if the requested resource and/or associated sub-resources to be pushed are not stored locally in the cache memory, as described by reference to Figure 5.
After the request has been sent, the cache server waits for the corresponding response that is to be received from the application server or the cache server to which the request has been sent (step 600). The received response is stored in the cache memory of the cache server (step 605). If sub-resources are to be pushed, the corresponding information (e.g. the corresponding push promises) is also stored in the cache memory, in connection with the received response.
The received response is then sent to the client device or the cache server from which the request has been received (step 610).
For each sub-resource to be pushed, for which information has been received from the application server or another cache server (for example as a push promise), the cache server determines whether or not the sub-resource should be pushed to the client device or to another cache server (step 615). In most cases, the sub-resource should be pushed. However, the cache server should filter out some pushed resources, for example if the cache server changed the push policy to be used when transmitting the request.
In order to push sub-resources, the cache server sends the corresponding push promises (step 620), one push promise being sent for each sub-resource to be pushed. This should be done as fast as possible to ensure that the client device will not make an explicit request for the pushed resources.
If one or several sub-resources are to be pushed, the cache server determines whether or not the corresponding contents are available in its cache memory or if they have been received from the application server or another cache server (step 625).
If the resource content of all the sub-resources to be pushed is available in the cache server, the content of the sub-resources to be pushed is transmitted to the client device or to the cache server from which the request has been received (step 630).
On the contrary, if the resource content of some of the sub-resources to be pushed is not available in the cache server, the latter explicitly requests the missing resource content from the application server or from the cache server from which the response has been received (step 635). Upon reception of the missing content, it is stored in the cache memory (step 640) and sent to the client device or the cache server from which the request has been received (step 630). The process ends when all sub-resources to be pushed have been processed.
For pushed sub-resources that should not be transmitted to the client device, the cache server may decide to prefetch these sub-resources. In general, the cache server tries to prefetch sub-resources that may be requested in the future. Various heuristics may be used for that purpose. The cache server should only request these sub-resources if its processing load and/or network load is low. The cache server may be particularly interested in prefetching pushed sub-resources that the client device already has in its cache memory but are, for example, obsolete or not fresh anymore.
Particular embodiments of the invention are directed to rendering and execution of web pages within a web browser or a web viewer.
Figure 7 illustrates an example of use of embodiments of the invention in a web runtime environment.
As illustrated, a first step (step 700) is directed to the execution of a script or the parsing of a web page, for example an HTML web page (HyperText Markup Language). Such a step is carried out in a device, for example a personal computer or a handheld device connected to the Internet network.
This may trigger the need to fetch a resource from the network. For the sake of illustration, the HTML web page may contain a ‘script’ tag whose ‘src’ attribute is a URL that should be retrieved from a remote HTTP server. Still for the sake of illustration, the executed script may call the XMLHttpRequest or Fetch API to programmatically retrieve some remote resources.
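A script stating its push policy preference when using the Fetch API might look as follows (the URL is a placeholder and the header value reuses the examples in the text; ‘Accept-Push-Policy’ is not a standard Fetch header):

```javascript
// Build the Fetch API init object carrying a push policy preference.
function pushPolicyInit(policy) {
  return { headers: { 'Accept-Push-Policy': policy } };
}

// Usage (in a browser):
//   fetch('https://example.com/app.js', pushPolicyInit('sub-resources'));
```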
Accordingly, the script or web page plays the role of the client device as described above.
This results in generating a programmatic object representing an HTTP request that may contain push policy requirements, for instance through HTTP headers stored in a dictionary (step 705). The request object is then sent to the browser network layer of the device wherein the script is executed or the web page is parsed (step 710).
Accordingly, the browser network layer plays the role of the cache server as described above. This browser network layer has a cache memory that may be full or empty.
Based on the filling level of the cache memory, the cache history, and network statistics, the browser network layer may decide to apply an algorithm like the one described by reference to Figure 5 and to set push policy parameters accordingly.
The browser may use additional information to set the push policy parameters. For instance, an HTTP request may come from a web page that is in the foreground (i.e. a tab currently visible to the user), in which case the browser may request an aggressive push policy. On the contrary, an HTTP request may come from a background tab (i.e. a tab that is not visible to the user), in which case a less aggressive push policy may be more appropriate (for example using the ‘head-only’ transmission mode).
Similarly, a part of a web application acting as a software proxy between the web application code and the browser network layer may decide to update the push policy according to information such as the visibility of a particular element. Such a software proxy may be some JavaScript code executed within a Service Worker (http://www.w3.org/TR/service-workers/). For instance, if the web application detects that a video element is not visible (background tab, or the element is in a region that the user should scroll to in order for it to be visible), the push policy transmission means may be set to a less aggressive level than if the video element is visible.
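This visibility-based choice may be sketched as follows (the policy strings reuse the examples in the text; mapping visibility to exactly these two values is an illustrative assumption):

```javascript
// Map visibility (foreground tab, on-screen element) to a push policy:
// visible content justifies an aggressive push, while hidden content
// only warrants learning what would be pushed ('head-only').
function policyForVisibility(isVisible) {
  return isVisible
    ? 'sub-resources'            // aggressive: push content fully
    : 'sub-resources;head-only'; // background/off-screen: references only
}
```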
For the sake of illustration, a browser network layer having an empty cache memory may decide to set an ‘Accept-Push-Policy’ header to the value ‘sub-resources’ so that an application server may push additional data as quickly as possible.
On the contrary, if the browser network layer knows that its cache memory is full, the browser network layer may set the push policy transmission mode to ‘head-only’. It should be noted that in such a context, the latency between the cache server and the web application is very small while the bandwidth is large. The bandwidth in particular may be estimated in terms of time to process and use data. One minute of video data can then be compared to one minute of latency between the browser network layer and the remote server.
As a result, the browser network layer generates a request comprising an adapted push policy (step 715) and sends it over the Internet network to a remote application server (step 720).
Upon reception of a response to this request, the browser network layer applies an algorithm like the one described by reference to Figure 6.
Figure 8 illustrates steps of an example of algorithm carried out in a cache server for processing a request in a case according to which the cache server requests an application server to receive only headers of sub-resources to be pushed and not the content of these sub-resources, according to embodiments of the invention.
As illustrated, after having received and stored a request from a client device or another cache server (step 800), the push policy defined in the received request is updated to indicate that only headers of sub-resources to be pushed should be transmitted, without the content of these sub-resources (step 805).
The updated request is transmitted to an application server or to another cache server (step 810). Then, the cache server carrying out the steps illustrated in Figure 8 waits for a response to the transmitted request.
When receiving such a request, the application server may identify which sub-resources are to be pushed in a transmission mode according to which only the sub-resource headers are transmitted so as to transmit the corresponding push promises (i.e. HEAD push promises). Therefore, the pushed sub-resource responses only contain headers, without any content.
However, since the cache server may have to disambiguate the HEAD push promises which should be forwarded to the client device as such from the HEAD push promises which the cache server should convert into GET push promises before transmitting them to the client device (so as to transmit the corresponding sub-resource content), it is necessary to add such information into each push promise. A default setting may be defined. For example, a HEAD push promise may be converted into a GET push promise by the cache server if the push policy is set to ‘head-only’. According to a particular embodiment, a HEAD push promise that should be converted into a GET push promise may be defined as a HEAD push promise having an ‘Accept-Push-Policy’ header set to the ‘head-only’ value.
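The disambiguation described above may be sketched as follows (the promise object shape and the `needsContent` flag are illustrative assumptions):

```javascript
// Decide how a cache server forwards a received HEAD push promise:
// a HEAD promise marked 'head-only' is assumed to be one the cache
// server must convert into a GET promise (and supply the content);
// any other promise is forwarded as received.
function forwardedPromise(pushPromise) {
  const mustConvert =
    pushPromise.method === 'HEAD' &&
    pushPromise.headers['accept-push-policy'] === 'head-only';
  return mustConvert
    ? { ...pushPromise, method: 'GET', needsContent: true }
    : pushPromise;
}
```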
Therefore, when the cache server is receiving the response (step 815), the received response is forwarded to the client device or to the cache server from which the request has been received (step 820) and a test is carried out to determine if there are sub-resources to be pushed or not (step 825).
If there is no sub-resource to be pushed, the algorithm ends.
On the contrary, if there are sub-resources to be pushed, the received push promises are forwarded to the client device or the cache server from which the request has been received (step 830). Transmitting the push promises can be done in parallel with the transmission of the response.
Push promises may be converted from the HEAD type to the GET type whenever needed (in step 830). For each sub-resource to be pushed whose content is to be transmitted (step 835), the cache server checks whether the pushed sub-resource content is available in its cache. If it is available, it is transmitted to the client device or to the cache server from which the request has been received. On the contrary, if the pushed sub-resource content is not available, the cache server sends a request to the application server (step 840) and, upon reception, transmits the content to the client device or to the cache server from which the request has been received. The process ends when all pushed sub-resources have been processed and transmitted to the client.
The case described by reference to Figure 8 may happen in various scenarios.
As an example, the case may arise when a cache server receives an MPD request with a ‘Fast Start’ push policy. In such a case, the cache server may already have received an MPD request or some related video segment requests. For example, the cache server may have received in the past an MPD request with a ‘Fast Start’ push policy requesting three segments, while the new request targets the same MPD with a ‘Fast Start’ push policy requesting four segments.
Based on its cache memory state, the cache server may determine that it has sub-resources to push in its cache memory. The cache server may compute the total byte amount of cached sub-resources to push. Based on this amount, the cache server may compute an estimation of the transmission time needed to push those cached sub-resources to the client device. If this transmission time is above the round trip time between the cache server and the application server, the cache server should use the ‘HEAD-only’ push transmission mode, since there will be no delay penalty for the client device and no bandwidth will be wasted between the cache server and the application server.
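The decision just described amounts to comparing an estimated transmission time against the round trip time. A minimal sketch, with the downstream bandwidth and RTT assumed to be measured elsewhere:

```python
def choose_transmission_mode(cached_bytes, downstream_bandwidth_bps, rtt_s):
    """Estimate how long the cache server needs to push its cached
    sub-resources to the client; if that exceeds one round trip to the
    application server, 'head-only' costs the client no extra delay
    and saves upstream bandwidth."""
    transmission_time_s = (8 * cached_bytes) / downstream_bandwidth_bps
    return "head-only" if transmission_time_s > rtt_s else "full"
```

For instance, 1 MB of cached segments on an 8 Mbit/s downstream link takes about one second to transmit, far above a 50 ms RTT, so the ‘HEAD-only’ mode is selected.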
On the contrary, if the transmission time is close or below the round trip time, the cache server may request the application server to accept some pushed subresource content to be pushed, up to a given amount that ensures that there is no delay in the data transmission to the client device.
The cache server may also use pushed sub-resource history to set the transmission mode. In particular, it may compute the ratio of redundant pushed sub-resources to the total number of pushed sub-resources. If the ratio is above a certain threshold value, for example 25%, the cache server may decide to choose the ‘HEAD-only’ transmission mode. This threshold value may be set according to the round trip time: if the round trip time is low, the threshold may be decreased, and if the round trip time is large, the threshold may be increased. A static table may be used to dynamically assign the threshold value according to the estimated round trip time.
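A possible shape for such a static table is sketched below; the bucket boundaries and threshold values are illustrative assumptions, not values taken from this document (only the 25% example is):

```python
# Static table mapping round-trip-time buckets (seconds) to the
# redundancy-ratio threshold above which 'head-only' is chosen.
# Low RTT -> lower threshold (re-fetching from the origin is cheap).
RTT_THRESHOLDS = [
    (0.010, 0.15),
    (0.050, 0.25),
    (float("inf"), 0.40),
]

def pick_threshold(rtt_s):
    for rtt_limit, threshold in RTT_THRESHOLDS:
        if rtt_s <= rtt_limit:
            return threshold

def use_head_only(redundant_pushes, total_pushes, rtt_s):
    """Choose 'HEAD-only' when the observed fraction of redundantly
    pushed sub-resources exceeds the RTT-dependent threshold."""
    if total_pushes == 0:
        return False
    return redundant_pushes / total_pushes > pick_threshold(rtt_s)
```

With a 30 ms RTT the threshold is 25%, so a history of 3 redundant pushes out of 10 triggers the ‘HEAD-only’ mode.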
According to particular embodiments, the cache server may also change incoming request information related to push policy in order to exchange additional information on its cache memory with the application server. Indeed, since push sub-resources may be organized in groups or may be versioned using ETags or similar methods, it may be valuable for the cache server to provide precise information on the push sub-resource groups or versions available in its cache: the application server may then decide to push sub-resource content accordingly.
The application server may decide not to push sub-resources that the cache server explicitly said it already has. The problem is that these sub-resources should still be sent to the client. One solution is for the cache server to request the application server to send all push sub-resources, those that the cache server already has being sent as HEAD-only push sub-resources. This uses some limited bandwidth but allows very simple processing of application server responses by the cache server.
One additional benefit of modifying push policy within the cache server is the ability for the application server to mark some sub-resources as obsolete. The application server may do so by pushing these sub-resources again (typically as HEAD-only push sub-resources), the response headers of these sub-resources marking that they are no longer valid in the cache. This may be done using the ‘Cache-Control’ header.
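The invalidation mechanism above can be sketched as follows; the exact ‘Cache-Control’ directives an application server would use (`no-store`, `max-age=0`) are standard HTTP cache directives, but their use here as an obsolescence marker follows the embodiment described in this document:

```python
def apply_push_revalidation(cache, path, response_headers):
    """When the application server re-pushes a sub-resource (typically
    headers only) with a 'Cache-Control' header marking it as no longer
    valid, drop the stored copy from the cache."""
    cache_control = response_headers.get("cache-control", "")
    if "no-store" in cache_control or "max-age=0" in cache_control:
        cache.pop(path, None)   # entry is obsolete
        return "invalidated"
    return "kept"
```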
Figure 9 is a schematic illustration of a device 900 according to embodiments. The device may be an application server, a client or a cache server. The device comprises a RAM memory 910 which may be used as a working memory for a control unit 920 configured for implementing a method according to embodiments. For example, the control unit may be configured to execute instructions of a computer program loaded from a ROM memory 930. The program may also be loaded from a hard drive 940.
The device also comprises a network interface 950 which may be a single network interface, or comprise a set of network interfaces (for instance several wireless interfaces, or several types of wired or wireless interfaces). The device may comprise a user interface 960 for displaying information to a user and for receiving inputs from the user.
The device may also comprise an input/output module 970 for receiving and/or sending data from/to external devices.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention not being restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. It is to be noted that resources or main resources may be sub-resources of other resources and that sub-resources or auxiliary resources may be requested as main resources.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims.
The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.

Claims (33)

1. A method for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the method being carried out in an intermediary component comprising a cache memory, the method comprising: receiving a request for a main resource; updating the received request for the main resource so that the received request and the updated request differ at least in a push policy information, the updated request enabling an application server to apply a push policy for identifying the at least one resource associated with the main resource; transmitting the updated request for the main resource to an application server; and further to a response to the transmitted updated request, pushing the at least one resource associated with the main resource to a component, wherein the component to which the at least one resource is pushed is a component from which the received request is received or a component from which another request for the main resource is received.
2. The method of claim 1, wherein the step of updating the received request comprises a step of modifying a push policy information of the received request.
3. The method of claim 1, wherein the step of updating the received request comprises a step of withdrawing a push policy information of the received request.
4. The method of any one of claims 1 to 3, wherein the push policy information that differentiates the received request from the updated request makes it possible for an application server to determine how the at least one resource associated with the main resource is to be transmitted to the intermediary component.
5. The method of claim 1, wherein updating the received request is based on a latency between the intermediary component and the application server to which the updated request is transmitted, a history of resources pushed redundantly by the application server to which the updated request is transmitted, to the intermediary component, a status of the cache memory of the intermediary component, a state of a connection between the intermediary component and a component from which the received request is received, and/or a state of a component having generated or transmitted the received request.
6. The method of any one of claims 1 to 5, wherein the push policy information that differentiates the received request from the updated request indicates that only headers of the at least one resource associated with the main resource are to be transmitted by the application server to which the updated request is sent, to the intermediary component, for transmitting the at least one resource associated with the main resource.
7. The method of any one of claims 1 to 5, wherein the push policy information that differentiates the received request from the updated request indicates that headers of the at least one resource associated with the main resource and a portion of the content of the at least one resource associated with the main resource are to be transmitted by the application server to which the updated request is sent, to the intermediary component, for transmitting the at least one resource associated with the main resource.
8. The method of any one of claims 1 to 5, wherein the push policy information that differentiates the received request from the updated request indicates that headers and content of the at least one resource associated with the main resource are to be transmitted by the application server to which the updated request is sent, to the intermediary component, for transmitting the at least one resource associated with the main resource.
9. The method of any one of claims 1 to 8, wherein the intermediary component is a browser network layer and wherein the request for the main resource is received from the execution of a script or the parsing of a web page in a browser.
10. The method of any one of claims 1 to 9, further comprising a step of receiving an indication from the application server to which the updated request is sent that indicates that the at least one resource associated with the main resource is to be pushed.
11. The method of any one of claims 1 to 9, further comprising a step of receiving an indication from the application server to which the updated request is sent, that indicates that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by the intermediary component.
12. The method of claim 10 or claim 11, further comprising a step of requesting the at least one resource associated with the main resource.
13. A method for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the method being carried out in an application server, the method comprising: receiving a request for a main resource; identifying at least one resource associated with the main resource, as a function of a push policy; and transmitting an indication to indicate that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by an intermediary component.
14. The method of claim 13, wherein the received request comprises an indication of the push policy used to identify the at least one resource.
15. The method of claim 14, wherein the indication that indicates that the at least one resource associated with the main resource is to be pushed further indicates that the at least one resource associated with the main resource is to be requested by an intermediary component and wherein the indication that indicates that the at least one resource associated with the main resource is to be pushed is determined as a function of the indication of the push policy received in the received request.
16. A computer program product for a programmable apparatus, the computer program product comprising instructions for carrying out each step of the method according to any one of claims 1 to 15 when the program is loaded and executed by a programmable apparatus.
17. A computer-readable storage medium storing instructions of a computer program for implementing the method according to any one of claims 1 to 15.
18. A device for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the device comprising a cache memory and a processor configured for carrying out the steps of: receiving a request for a main resource; updating the received request for the main resource so that the received request and the updated request differ at least in a push policy information, the updated request enabling an application server to apply a push policy for identifying the at least one resource associated with the main resource; transmitting the updated request for the main resource to an application server; and further to a response to the transmitted updated request, pushing the at least one resource associated with the main resource to a component, wherein the component to which the at least one resource is pushed is a component from which the received request is received or a component from which another request for the main resource is received.
19. The device of claim 18, wherein the processor is further configured so that the step of updating the received request comprises a step of modifying a push policy information of the received request.
20. The device of claim 18, wherein the processor is further configured so that the step of updating the received request comprises a step of withdrawing a push policy information of the received request.
21. The device of any one of claims 18 to 20, wherein the push policy information that differentiates the received request from the updated request makes it possible for an application server to determine how the at least one resource associated with the main resource is to be transmitted to the device.
22. The device of claim 18, wherein the processor is further configured so that updating the received request is based on a latency between the device and the application server to which the updated request is transmitted, a history of resources pushed redundantly by the application server to which the updated request is transmitted, to the device, a status of the cache memory of the device, a state of a connection between the device and a component from which the received request is received, and/or a state of a component having generated or transmitted the received request.
23. The device of any one of claims 18 to 22, wherein the processor is further configured so that the push policy information that differentiates the received request from the updated request indicates that only headers of the at least one resource associated with the main resource are to be transmitted by the application server to which the updated request is sent, to the device, for transmitting the at least one resource associated with the main resource.
24. The device of any one of claims 18 to 22, wherein the processor is further configured so that the push policy information that differentiates the received request from the updated request indicates that headers of the at least one resource associated with the main resource and a portion of the content of the at least one resource associated with the main resource are to be transmitted by the application server to which the updated request is sent, to the device, for transmitting the at least one resource associated with the main resource.
25. The device of any one of claims 18 to 22, wherein the processor is further configured so that the push policy information that differentiates the received request from the updated request indicates that headers and content of the at least one resource associated with the main resource are to be transmitted by the application server to which the updated request is sent, to the device, for transmitting the at least one resource associated with the main resource.
26. The device of any one of claims 18 to 25, wherein the processor is further configured so as to carry out a step of receiving an indication from the application server to which the updated request is sent that indicates that the at least one resource associated with the main resource is to be pushed.
27. The device of any one of claims 18 to 25, wherein the processor is further configured so as to carry out a step of receiving an indication from the application server to which the updated request is sent, that indicates that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by the device.
28. The device of claim 26 or claim 27, wherein the processor is further configured so as to carry out a further step of requesting the at least one resource associated with the main resource.
29. A server for optimizing pushing of at least one resource associated with a main resource, in response to a request for this main resource, the server comprising a processor configured for carrying out the steps of: receiving a request for a main resource; identifying at least one resource associated with the main resource, as a function of a push policy; and transmitting an indication to indicate that the at least one resource associated with the main resource is to be pushed and that the at least one resource associated with the main resource is to be requested by an intermediary component.
30. The server of claim 29, wherein the received request comprises an indication of the push policy used to identify the at least one resource.
31. The server of claim 30, wherein the processor is further configured so that the indication that indicates that the at least one resource associated with the main resource is to be pushed further indicates that the at least one resource associated with the main resource is to be requested by an intermediary component and wherein the indication that indicates that the at least one resource associated with the main resource is to be pushed is determined as a function of the indication of the push policy received in the received request.
32. A method for optimizing pushing of resources substantially as hereinbefore described with reference to, and as shown in Figures 4 to 8.
33. A device for optimizing pushing of resources substantially as hereinbefore described with reference to, and as shown in Figure 9.
GB201518042A 2015-10-12 2015-10-12 Methods, devices and computer programs for optimizing use of bandwidth when pushing data in a network environment comprising cache servers Active GB2543279B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB201518042A GB2543279B (en) 2015-10-12 2015-10-12 Methods, devices and computer programs for optimizing use of bandwidth when pushing data in a network environment comprising cache servers

Publications (3)

Publication Number Publication Date
GB201518042D0 GB201518042D0 (en) 2015-11-25
GB2543279A true GB2543279A (en) 2017-04-19
GB2543279B GB2543279B (en) 2020-01-01

Family

ID=55130937

Family Applications (1)

Application Number Title Priority Date Filing Date
GB201518042A Active GB2543279B (en) 2015-10-12 2015-10-12 Methods, devices and computer programs for optimizing use of bandwidth when pushing data in a network environment comprising cache servers

Country Status (1)

Country Link
GB (1) GB2543279B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111200634A (en) * 2019-12-06 2020-05-26 中国联合网络通信集团有限公司 Cache resource linkage update method, system and server
EP3962023A4 (en) * 2018-12-04 2022-04-27 Hong Kong Sunstar Technology Co., Limited METHOD AND DEVICE FOR TRANSMITTING LIST INFORMATION
US12483747B2 (en) 2021-04-16 2025-11-25 Beijing Bytedance Network Technology Co., Ltd. Minimizing initialization delay in live streaming

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114301975B (en) * 2021-11-30 2023-07-28 乐美科技股份私人有限公司 Method, device, equipment and storage medium for processing push information in application
CN120416572A (en) * 2024-01-31 2025-08-01 抖音视界有限公司 Method, device, equipment and medium for streaming media data transmission

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140317230A1 (en) * 2006-08-07 2014-10-23 Unwired Planet, Llc Cache based enhancement to optimization protocol
US20140344345A1 (en) * 2005-05-26 2014-11-20 Citrix Systems, Inc. Systems and methods for using an http-aware client agent

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140344345A1 (en) * 2005-05-26 2014-11-20 Citrix Systems, Inc. Systems and methods for using an http-aware client agent
US20140317230A1 (en) * 2006-08-07 2014-10-23 Unwired Planet, Llc Cache based enhancement to optimization protocol

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Belshe et al., "RFC 7540: Hypertext Transfer Protocol Version 2 (HTTP/2)", 2015, IETF *
Ruellan et al. "Httpbis Internet-Draft: Accept-Push-Policy Header Field", 2015, IETF *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3962023A4 (en) * 2018-12-04 2022-04-27 Hong Kong Sunstar Technology Co., Limited METHOD AND DEVICE FOR TRANSMITTING LIST INFORMATION
US11838349B2 (en) 2018-12-04 2023-12-05 Hong Kong Sunstar Technology Co., Limited Method and device for transmitting list information
CN111200634A (en) * 2019-12-06 2020-05-26 中国联合网络通信集团有限公司 Cache resource linkage update method, system and server
CN111200634B (en) * 2019-12-06 2023-04-18 中国联合网络通信集团有限公司 Cache resource linkage updating method, system and server
US12483747B2 (en) 2021-04-16 2025-11-25 Beijing Bytedance Network Technology Co., Ltd. Minimizing initialization delay in live streaming

Also Published As

Publication number Publication date
GB2543279B (en) 2020-01-01
GB201518042D0 (en) 2015-11-25

Similar Documents

Publication Publication Date Title
US9055124B1 (en) Enhanced caching of network content
US10110695B1 (en) Key resource prefetching using front-end optimization (FEO) configuration
US8990357B2 (en) Method and apparatus for reducing loading time of web pages
US20220006878A1 (en) Method and apparatus for reducing loading time of web pages
US9819721B2 (en) Dynamically populated manifests and manifest-based prefetching
US8762490B1 (en) Content-facilitated speculative preparation and rendering
CN107211022B (en) Improved client-driven resource pushing with server devices
US7752258B2 (en) Dynamic content assembly on edge-of-network servers in a content delivery network
EP2791815B1 (en) Application-driven cdn pre-caching
US9158845B1 (en) Reducing latencies in web page rendering
US20180239794A1 (en) Caching of updated network content portions
US10469560B1 (en) Reduced latency for subresource transfer
US9058399B2 (en) System and method for providing network resource identifier shortening service to computing devices
GB2543279A (en) Methods, devices and computer programs for optimizing use of bandwidth when pushing data in a network environment comprising cache servers
US11201934B2 (en) Methods, device, server and computer program products for pushing data in a network environment comprising cache servers
US10419573B2 (en) Methods, devices and computer programs enabling data to be pushed in a network environment comprising proxies
Van de Vyvere et al. Comparing a polling and push-based approach for live open data interfaces
Grocevs et al. Modern approaches to reduce webpage load times
Huang A pre-fetching and caching system for web service registries