
WO2017213562A1 - Invalidation of cached data - Google Patents


Info

Publication number
WO2017213562A1
WO2017213562A1 (PCT/SE2016/050557)
Authority
WO
WIPO (PCT)
Prior art keywords
content data
data
server
cache
cache server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SE2016/050557
Other languages
French (fr)
Inventor
Tommy Arngren
Viktor GUNNARSON
Fredrik HULTKRANTZ
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Priority to PCT/SE2016/050557 priority Critical patent/WO2017213562A1/en
Publication of WO2017213562A1 publication Critical patent/WO2017213562A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/56Provisioning of proxy services
    • H04L67/568Storing data temporarily at an intermediate stage, e.g. caching
    • H04L67/5682Policies or rules for updating, deleting or replacing the stored data
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0808Multiuser, multiprocessor or multiprocessing cache systems with cache invalidating means

Definitions

  • Fig. 5A is a schematic diagram showing some components of the server 6.
  • a processor 10 may be provided using any combination of one or more of a suitable central processing unit, CPU, multiprocessor, microcontroller, digital signal processor, DSP, application specific integrated circuit etc., capable of executing software instructions of a computer program 14 stored in a memory.
  • the memory can thus be considered to be or form part of the computer program product 12.
  • the processor 10 may be configured to execute methods described herein with reference to Fig. 6A.
  • the memory may be any combination of read and write memory, RAM, and read only memory, ROM.
  • the memory may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
  • the cache manager 60 is for invalidation of cached data.
  • This module corresponds to the map step 33 of Fig. 4A.
  • This module can e.g. be
  • the communication manager 61 is for communication of data and

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

It is presented a method for invalidation of cached data. The method is performed by a server (6) and comprises: receiving (32) a notification indicating new content data for a cache server, mapping (33) the new content data indicated in the received notification to content data already cached by the cache server, and sending (34) an instruction to the cache server, to invalidate mapped content data already cached by the cache server. It is also presented a server, a cache service system, a computer program and a computer program product thereof.

Description

INVALIDATION OF CACHED DATA
TECHNICAL FIELD
The invention relates to a method for invalidation of cached data, and a server, a cache service system, a computer program and a computer program product thereof.
BACKGROUND
A cache is a place to store something temporarily in a computing environment. There are different types of caches, such as Web browser caches, dedicated network servers, and disk, RAM or Flash caches with proximity to a CPU. A Web browser, such as Internet Explorer, Firefox, Safari or Chrome, uses a local browser cache to improve performance for frequently accessed webpages for a user. Dedicated network servers, or services acting as a server, save webpages or other digital content more centrally for multiple end user clients (for example a proxy cache or a reverse proxy cache).
Cached content data will typically with time be outdated and thus no longer be valid. Time is typically used to invalidate cached content data.
A method performed by a cache server is disclosed in PCT/SE2015/050794.
SUMMARY
It is an object of the invention to minimize the amount of cached data to be invalidated.
According to a first aspect, a method for invalidation of cached data is presented. The method is performed by a server and comprises receiving a notification indicating new content data for a cache server, mapping the new content data indicated in the received notification to content data already cached by the cache server, and sending an instruction to the cache server, to invalidate mapped content data already cached by the cache server. By the presented method, invalidation of cached data is minimized by mapping new content data to content data already cached by a cache server. In this way, time based invalidation is avoided, which occasionally may invalidate unchanged content data. The new content data may be modelled after a first data model, and the content data, already cached by the cache server, may be modelled after a second data model.
The new content data may comprise a first parameter and may be mapped to the content data, already cached by the cache server, comprising a second parameter.
The new content data may be for a first data object and may be mapped to the content data, already cached by the cache server, for a second data object.
The mapping may comprise a set of rules for mapping the new content data to content data already cached by the cache server. At least one rule of the set of rules may be determined by machine learning.
According to a second aspect, a method for managing cached data is presented. The method is performed by a cache service system and comprises caching content data in a cache server, obtaining new content data from a data source, mapping, in a server, the new content data, obtained from the data source, to content data already cached by the cache server, and invalidating, in the cache server, mapped content data already cached by the cache server.
By the presented method, invalidation of cached data is minimized by mapping new content data to content data already cached by a cache server. In this way, time based invalidation is avoided, which occasionally may invalidate unchanged content data.
According to a third aspect, a server for invalidation of cached data is presented. The server comprises a processor and a computer program product. The computer program product stores instructions that, when executed by the processor, causes the server to receive a notification indicating new content data for a cache server, to map the new content data indicated in the received notification to content data already cached by the cache server, and to send an instruction to the cache server, to invalidate mapped content data already cached by the cache server.
According to a fourth aspect, a cache service system for managing cached data is presented. The cache service system comprises a processor and a computer program product. The computer program product stores instructions that, when executed by the processor, causes the cache service system to cache content data in a cache server, to obtain new content data from a data source, to map, in a server, the new content data, obtained from the data source, to content data already cached by the cache server, and to invalidate, in the cache server, mapped content data already cached by the cache server. According to a fifth aspect, a server for invalidation of cached data is presented. The server comprises a communication manager and a cache manager. The communication manager is for receiving a notification indicating new content data for a cache server and for sending an instruction to the cache server, to invalidate mapped content data already cached by the cache server. The cache manager is for mapping the new content data indicated in the received notification to content data already cached by the cache server.
According to a sixth aspect, a cache service system for managing cached data is presented. The cache service system comprises a communication manager and a cache manager. The communication manager is for obtaining new content data from a data source. The cache manager is for caching content data in a cache server, for mapping, in a server, the new content data, obtained from the data source, to content data already cached by the cache server, and for invalidating, in the cache server, mapped content data already cached by the cache server. According to a seventh aspect, a computer program for invalidation of cached data is presented. The computer program comprises computer program code which, when run on a server, causes the server to receive a notification indicating new content data for a cache server, to map the new content data indicated in the received notification to content data already cached by the cache server, and to send an instruction to the cache server, to invalidate mapped content data already cached by the cache server.
According to an eighth aspect, a computer program for managing cached data is presented. The computer program comprises computer program code which, when run on a cache service system, causes the cache service system to cache content data in a cache server, to obtain new content data from a data source, to map, in a server, the new content data, obtained from the data source, to content data already cached by the cache server, and to invalidate, in the cache server, mapped content data already cached by the cache server. According to a ninth aspect, a computer program product is presented. The computer program product comprises a computer program and a computer readable storage means on which the computer program is stored.
Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the element, apparatus, component, means, step, etc." are to be interpreted openly as referring to at least one instance of the element, apparatus, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is now described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 is a schematic diagram illustrating an environment where
embodiments presented herein can be applied; Fig. 2 is a schematic diagram illustrating an embodiment presented herein;
Figs. 3A and 3B are schematic diagrams illustrating signalling according to embodiments presented herein;
Figs. 4A and 4B are flow charts illustrating methods for embodiments presented herein;
Figs. 5A and 5B are schematic diagrams illustrating some components of devices presented herein; and
Figs. 6A and 6B are schematic diagrams showing functional modules of devices presented herein.
DETAILED DESCRIPTION
The invention will now be described more fully hereinafter with reference to the accompanying drawings, in which certain embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided by way of example so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Like numbers refer to like elements throughout the description.
Existing solutions for cache services invalidate cached data based on time, resulting in occasional invalidation of unchanged content data. Existing solutions for cache services have no understanding of data models, neither of a data model used by incoming source data to a cache server nor of a data model used towards end user clients accessing the cache server. Incoming source data can thus not be connected to outgoing data when cached data is invalidated. Further, existing solutions are not decoupled from a caching service, making them hard to extend or replace. A solution is presented herein that enables efficient and accurate mapping between content data from data sources and content data accessible at a cache service.
A method is presented for how different data models of external data sources and cached data, can be mapped to each other, and at the same time data following one model can be invalidated based on data following a different model, minimizing the amount of unchanged data to be invalidated, at the same time efficiently managing a cache service.
A communication network wherein the embodiments described herein can be implemented is presented in Fig. 1. A communication device (CD) 1 is wirelessly connectable to a base station (BS) 2. The BS 2 is connected to a core network (CN) 3, in turn connected to a cache server (CS) 4. The CD 1 may e.g. be a mobile communication device, a wireless device, a mobile device, a user equipment, a user device, a wired computer, a laptop, a wired laptop, a set-top box (STB), a TV, a machine-to-machine device, a smartphone, a tablet, or a telematics unit embedded in a vehicle and connected to a vehicle-internal network for exchange of e.g. vehicle or driver data with a fleet management system connected to the vehicle via a Wide Area Network (WAN) such as the Internet. The CD 1 may also be a unit mounted in a dashboard of a vehicle for displaying information and communicating with the driver or passengers of the vehicle, being connected to a telematics unit embedded in the vehicle.
A solution may involve the following components, as presented in Fig. 2. End user clients in CDs 1 access content data via a WAN, which here is illustrated as the Internet but may e.g. also be the dark web, web 2.0 or software-defined networking (SDN). Cache servers 4 provide content data via the Internet. A server/invalidation server/invalidation module 6 maps data between external data sources 7 and cache servers 4, for invalidation of content data cached on the cache servers 4. A data processing server 5 provides meta data of new data to the server 6, and the new data to the cache servers 4, from the external data sources 7, which are updated in real time. The end user clients may e.g. use CDs of a cellular network or computer devices in other ways connected to a WAN. The end user client may e.g. be a mobile app in a CD.
A solution presented herein allows data models to change without affecting each other, while still minimizing the amount of content data to invalidate. If a data model for incoming data from an external data source 7 is changed, a data model used for end user clients in CDs 1 is unaffected, since the mapping in the server 6 for invalidation is adapted thereto.
A service, such as a cache service, is a cache function. A cache service is a seamless function between external data sources 7 and CDs 1. The service may be implemented as a software component running in a server/server host. A server may comprise one or more physical servers/server hosts in the same server hall/server farm, or may be implemented with a cloud solution providing separate resources for computation and storage in a distributed manner spanning more than one server site.
An invalidation service, implemented as a software component running in the server 6, preferably close to the cache servers 4, may be independent of both the cache servers 4 and the data processing server 5. Only a way to communicate with both the cache servers and the data processing server is needed, together with an understanding of the data models used by either part. The invalidation service may instead be implemented in connection with the cache servers 4 and/or the data processing server 5, but the invalidation service will in this case not be decoupled therefrom.
The flow of actions for a solution is presented with reference to Fig. 2. Content requests come to a content service from CDs 1, over a WAN here illustrated as the Internet and reach the cache servers 4.
The requested content data is cached in the cache servers 4, and this content data is not invalidated based on time. This content data follows a data model that is defined to be easily understood by the CDs 1. New data is pushed or pulled into the cache service system 8 from one or more external data sources 7 to the data processing server 5. This new data follows a data model different from that of the cached data that is available to the end user client on the CD 1, and is (often) stored in a data persistence service.
The new data makes some of the cached data invalid. The invalid data needs to be handled and the new data needs to be communicated to the cache servers 4.
The data processing server 5 sends a notification containing meta data about the obtained new data, to the server 6, which is configured to understand both the data model used by the external data sources 7, and the data model used by the cache servers 4 for the CDs 1.
The server 6 transforms the meta data about the obtained new data to invalidations and sends them to the cache servers 4. The cache servers 4 receive the invalidations and mark corresponding content data as invalid or evict corresponding content data.
Protocols used for communication between the server 6 and the cache servers 4 and between the server 6 and the data processing server 5 can be of any kind, but may be Hypertext Transfer Protocol (HTTP), secure HTTP (HTTPS) or Extensible Messaging and Presence Protocol (XMPP). HTTP/HTTPS messages may be HTTP get, HTTP post or a custom defined HTTP call. For example, a call between server 6 and the cache servers 4 may in HTTP look like this. A custom defined HTTP call named ban is made against cache servers 4 from server 6: BAN / HTTP/1.1
Host: 172.31.6.242
X-Ban: /api/cd/startlist/CCW010*
The cache servers 4 will understand that this is an invalidation instruction, and the X-Ban header explains that they should invalidate all cached data that is cached from calls to URLs starting with /api/cd/startlist/CCW010.
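As a sketch, the BAN call above could be prepared programmatically. The following Python helper is hypothetical: the method name and the X-Ban header mirror the example in this description, not any HTTP standard.

```python
def format_ban_request(host: str, pattern: str) -> str:
    """Build the raw text of the custom BAN call shown above.

    Hypothetical helper: "BAN" and "X-Ban" follow the example in this
    description; they are not standard HTTP.
    """
    return (
        "BAN / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"X-Ban: {pattern}\r\n"
        "\r\n"
    )
```

The resulting text can then be written to a socket connected to a cache server, or the same header can be attached to a request made with any HTTP client that allows custom methods.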
An example of communication between the data processing server 5, and the server 6 may in HTTP look like this. POST /cache_invalidation/invalidator HTTP/1.1
Host: 172.31.6.241
Content-Type: application/xml ; charset=utf-8
<?xml version="1.0" encoding="UTF-8"?>
<notification>
<ODFMessageType>DT_STARTLIST</ODFMessageType>
<affectedEventIds>CCW010</affectedEventIds>
<affectedAthletes>CC12345, CC23456, CC34567, CC45678</affectedAthletes>
</notification>
The XML tells the server 6 that the data processing server 5 got a new start list for a specific event, and which athletes this had an impact on. The server 6 can then parse this data and map which data it needs to send invalidation instructions for. The rules for the invalidation of content data, used in the server 6, can be defined in any language (anything from normal programming languages to higher level ones). The rules may map incoming content data directly to cached content data, or do more elaborate mapping, where incoming changed content data may lead to multiple invalidations of content data in the cache servers 4.
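A minimal sketch of such parsing and mapping, using the Python standard-library XML parser, is shown below. The URL pattern produced is an assumption based on the earlier BAN example; the rule shown (a start list invalidates all start-list URLs for the affected events) is a simplification.

```python
import xml.etree.ElementTree as ET

NOTIFICATION = """<?xml version="1.0" encoding="UTF-8"?>
<notification>
  <ODFMessageType>DT_STARTLIST</ODFMessageType>
  <affectedEventIds>CCW010</affectedEventIds>
  <affectedAthletes>CC12345, CC23456, CC34567, CC45678</affectedAthletes>
</notification>"""

def notification_to_invalidations(xml_text: str) -> list:
    """Parse a notification like the one above and derive invalidation
    URL patterns for the cache servers (simplified, assumed rule set)."""
    root = ET.fromstring(xml_text)
    msg_type = root.findtext("ODFMessageType")
    event_ids = (root.findtext("affectedEventIds") or "").split(",")
    patterns = []
    if msg_type == "DT_STARTLIST":
        # A new start list invalidates every cached start-list URL
        # for each affected event.
        for event_id in event_ids:
            patterns.append(f"/api/cd/startlist/{event_id.strip()}*")
    return patterns
```

Each returned pattern would then be sent to the cache servers, e.g. in an X-Ban header as in the previous example.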
A cross country ski competition will be used as an example to illustrate how mapping of data models and invalidation of content data may be implemented.
The server 6 receives a notification from the data processing server 5 about new data from an external data source 7. As an example, if the type of the new data is only a time for an athlete, this may in the server 6 be mapped to only invalidate directly corresponding data in the cache servers 4, i.e. the time for the athlete on the cache servers 4. However, if the type of the new data is a finish result for the athlete, this may in the server 6 be mapped to not only invalidate directly corresponding data in the cache servers 4, such as the time for the athlete, but may also invalidate a result list comprising the times for many athletes.
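One possible shape of such type-dependent rules is sketched below; the type names and URL patterns are illustrative assumptions, not taken from the description.

```python
def map_new_data(data_type: str, athlete_id: str) -> list:
    """Map incoming data to invalidation targets: a bare timing update
    only touches the athlete's own cached entry, while a finish result
    also invalidates the shared result list (simplified sketch rules)."""
    targets = [f"/api/cd/athlete/{athlete_id}/time"]
    if data_type == "finish_result":
        # A finish result changes rankings, so the result list that
        # aggregates many athletes must also be invalidated.
        targets.append("/api/cd/resultlist*")
    return targets
```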
The notification regarding a new result may contain information about which events/races/heats this affects, and all such events/races/heats will be invalidated by the server 6. In this way, any data of such events/races/heats will be invalidated on the cache servers.
The notification may also contain information about affected athletes, and if the result affects more athletes than a given threshold a generic invalidation for all athletes may be issued; otherwise only information for the given athletes is invalidated. The threshold may e.g. be a number of athletes.
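The threshold rule could be sketched as follows; the pattern format and the default threshold value are assumptions for illustration.

```python
def athlete_invalidations(affected: list, threshold: int = 3) -> list:
    """If a result affects more athletes than the threshold, issue one
    generic invalidation covering all athletes; otherwise invalidate
    only the affected athletes individually."""
    if len(affected) > threshold:
        # One wildcard ban is cheaper than many individual ones.
        return ["/api/cd/athlete/*"]
    return [f"/api/cd/athlete/{athlete}*" for athlete in affected]
```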
All the invalidations are sent to the cache servers 4. A cache server may evict invalidated content data immediately, or some time later.
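The cache-server side can be sketched as a cache that marks matching entries invalid on a ban and refreshes them lazily from the backend on the next lookup. Class and method names below are illustrative, and wildcard matching is done here with Python's `fnmatch`.

```python
from fnmatch import fnmatch

class CacheServer:
    """Minimal sketch: entries are marked invalid on a ban and
    re-fetched from the backend on the next lookup."""

    def __init__(self):
        self._store = {}  # url -> (data, valid flag)

    def put(self, url, data):
        self._store[url] = (data, True)

    def invalidate(self, pattern):
        # Mark every cached URL matching the wildcard pattern invalid.
        for url, (data, _) in self._store.items():
            if fnmatch(url, pattern):
                self._store[url] = (data, False)

    def get(self, url, fetch):
        entry = self._store.get(url)
        if entry is None or not entry[1]:
            data = fetch(url)  # go to the backend for fresh data
            self._store[url] = (data, True)
            return data
        return entry[0]
```

Whether invalid entries are evicted eagerly or, as here, replaced on the next request is a policy choice left to the cache server.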
A cross country ski competition example is presented with reference to Fig. 3 A. A user, or an end user client in a CD 1, is following a live cross country ski competition via a client application in the CD 1. All the top ranked
competitors are out on the track and the excitement is increasing.
A data processing server 5 obtains the new data, and related meta data, related to the competitor from the external data source 7. The data processing server 5 notifies the server 6 of meta data of the new data. The server 6 translates or maps the meta data to invalidations.
As soon as a competitor passes a timing station or crosses the finish line new data is pushed to the data processing server 5 and the server 6 is then notified of meta data about the new data. The data processing server 5 also receives the related meta data or may extract related meta data from the notification.
A new result for an athlete/competitor that has passed the finish line will for example potentially change ranking for other athletes and perhaps also medal information for the competition. The server 6 translates or maps the meta data to invalidations, and sends the invalidations to the cache servers 4.
An end user client in a CD 1 requests a result list from the cache service and accesses the cache servers 4. The invalidation of the requested data is transparent to the end user client in the CD 1, and the cache servers 4 will get a new result list from the data processing server 5 when the requested one is invalid. After the cache servers 4 have received and cached the new result list, the new result list is sent to the end user client in the CD 1.
Another similar scenario that describes how mapping and invalidation may be implemented relates to an occurrence that has an impact on an event, such as a ski jump competition being delayed 2 hours due to bad weather.
In this scenario the server 6, based on predefined rules, knows which data objects need to be invalidated in the cache service 4 when a delay notification is received by the server 6 for the ski jump competition. The delay notification may e.g. be mapped to a time schedule for the competition, to a status message, to a rolling banner, to images from the arena, and/or to a recorded video clip, which then are invalidated in the cache service 4. The cache service 4 will then, when an end user client requests a time schedule for the ski jump competition, get a new time schedule from the data processing server 5, which will be two hours later. The cache service 4 may also or alternatively get the old time schedule, or a new time schedule, with a status message, a rolling banner, images from the arena, and/or a recorded video clip informing that the ski jump competition is delayed two hours.
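Such predefined rules might be expressed as a simple table from notification type to the kinds of affected data objects; the type name and object names below are illustrative assumptions.

```python
# Hypothetical rule table: a notification type maps to the kinds of
# cached data objects it invalidates.
INVALIDATION_RULES = {
    "EVENT_DELAY": [
        "time_schedule",
        "status_message",
        "rolling_banner",
        "arena_images",
        "video_clip",
    ],
}

def objects_to_invalidate(notification_type: str) -> list:
    """Look up which cached data objects a notification invalidates;
    unknown types invalidate nothing under these sketch rules."""
    return INVALIDATION_RULES.get(notification_type, [])
```

Keeping the rules in data rather than code makes it easy to extend the mapping without touching the invalidation service itself.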
Yet another example is presented with reference to Fig. 3B, where sales data for books is updated. A best-seller-list based on sales data from publishers is provided in a web page, and information for a specific book is also provided on that page, as well as a top-3-list provided on a separate page.
A new sales data report is received from a publisher 7 by an online bookstore, here illustrated as Amazon 5 (Amazon backend). Other online bookstores may e.g. be Barnes & Noble and Kobo. The report is a list with ISBN numbers and the number of new sales for each ISBN number since the last report. This entails that the best-seller-list needs to be updated, but also that the individual pages for the books in the report, and possibly the top-3-list, need to be updated. A notification is thus sent to the server 6 with information, i.e. meta data, about which books have new information, but not the actual sales data.
The server 6 evaluates, translates or maps the information and based thereon invalidates the best-seller-list in addition to the individual pages/books, and determines if the top-3-list has changed. The mapping may be configured manually or through machine learning.
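The evaluation could be sketched like this, where the top-3-list is invalidated only when it has actually changed; the URL scheme is an assumption for illustration.

```python
def book_invalidations(report: dict, old_top3: list, new_top3: list) -> list:
    """Derive invalidation patterns from a sales report (ISBN -> new
    sales count): always the best-seller list and each reported book
    page, plus the top-3 list only if it changed."""
    patterns = ["/bestsellers*"]
    patterns += [f"/books/{isbn}*" for isbn in sorted(report)]
    if new_top3 != old_top3:
        patterns.append("/top3*")
    return patterns
```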
The invalidations are sent to the cache servers 4.
A user thereafter requests an Amazon best-seller-list, or the web page/URL that contains the best-seller-list, and the cache servers 4 notice that the requested information has been invalidated. The cache servers 4 then go to the Amazon backend 5 to retrieve the updated list, the Amazon backend 5 responds with the updated list, and the cache servers 4 cache the updated list and respond to the user with the updated list.
A method, according to an embodiment, for invalidation of cached data is presented with reference to Fig. 4A. The method is performed by a server 6 and comprises receiving 32 a notification indicating new content data for a cache server, mapping 33 the new content data indicated in the received notification to content data already cached by the cache server, and sending 34 an instruction to the cache server, to invalidate mapped content data already cached by the cache server. The new content data may be modelled after a first data model, and the content data already cached by the cache server may be modelled after a second data model. The first and the second data models may be logical data models, wherein a logical data model standardizes people, places, and things, and rules, relationships and the events between them. The first data model may e.g. comprise a new time for an athlete and the second data model may e.g. comprise a result list and a medal list. The first and second data models may be different from each other.
The new content data may comprise a first parameter and may be mapped to the content data already cached by the cache server comprising a second parameter. The first parameter may e.g. be a new time for an athlete and the second parameter may e.g. be a result list or a medal list. The first and second parameters may be different from each other.
The new content data may be for a first data object and may be mapped to the content data already cached by the cache server for a second data object. The first data object may e.g. be associated with a new time for an athlete and the second data object may e.g. be associated with a result list and a medal list. The first and second data objects may be different from each other.
The mapping may comprise a set of rules for mapping the new content data to content data already cached by the cache server. At least one rule of the set of rules may be determined by machine learning. Other rules may be manually configured.
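Such a rule set can be sketched as a lookup from notification meta data to the cache keys that depend on it. The rule table, field names and key names below are illustrative assumptions, not the described implementation; the point is only that the mapping consumes meta data (e.g. which ISBNs have new sales), never the sales data itself:

```python
# Sketch: map meta data in a notification to the cached entries it affects.
# The rule set here is hand-written; in the described method at least one
# rule may instead be determined by machine learning.

MAPPING_RULES = {
    # notification field -> cache keys whose content depends on that field
    "athlete_time": ["result-list", "medal-list"],
    "book_sales":   ["best-seller-list", "top-3-list"],
}

def map_to_invalidations(notification):
    """Return the set of cached keys to invalidate for a notification.

    `notification` is a dict of meta data fields, e.g. the ISBNs that
    have new sales, but not the actual sales figures.
    """
    keys = set()
    for field, value in notification.items():
        keys.update(MAPPING_RULES.get(field, []))
        if field == "book_sales":
            # each reported book also has an individual page to invalidate
            keys.update(f"book/{isbn}" for isbn in value)
    return keys

print(sorted(map_to_invalidations({"book_sales": ["978-0-13-468599-1"]})))
# → ['best-seller-list', 'book/978-0-13-468599-1', 'top-3-list']
```

A learned rule would simply add or adjust entries in the same table, so the invalidation logic is unchanged whether a rule was configured manually or derived by machine learning.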
A method, according to an embodiment, for managing cached data is presented with reference to Fig. 4B. The method is performed by a cache service system and comprises caching 30 content data in a cache server 4, obtaining 31 new content data from a data source 7, mapping 33, in an invalidation server 6, the new content data, obtained from the data source, to content data already cached by the cache server, and invalidating 35, in the cache server, mapped content data already cached by the cache server.

A server, according to an embodiment, for invalidation of cached data is presented with reference to Fig. 5A. The server 6 comprises a processor 10 and a computer program product 12, 13. The computer program product stores instructions that, when executed by the processor, cause the server to receive 32 a notification indicating new content data received from a data source, to map 33 the new content data indicated in the received notification to content data already cached by the cache server, and to send 34 an instruction to the cache server, to invalidate mapped content data already cached by the cache server.

A cache service system, according to an embodiment, for managing cached data is presented with reference to Fig. 5B. The cache service system 8 comprises a processor 10 and a computer program product 12, 13. The computer program product stores instructions that, when executed by the processor, cause the cache service system to cache 30 content data in a cache server 4, to obtain 31 new content data from a data source 7, to map 33, in an invalidation server 6, the new content data, obtained from the data source, to content data already cached by the cache server, and to invalidate 35, in the cache server, mapped content data already cached by the cache server.
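Taken together, the steps of Fig. 4B can be sketched as one small system. This is again a hedged illustration with assumed names, collapsing the cache server and the invalidation server behind a single interface:

```python
# Sketch of the Fig. 4B pipeline: cache (30), obtain (31), map (33),
# invalidate (35). All names and interfaces are illustrative assumptions.

class CacheServiceSystem:
    def __init__(self, mapping_rules):
        self.cached = {}             # cache server storage: key -> content
        self.rules = mapping_rules   # invalidation server: mapping rules

    def cache(self, key, content):
        """Step 30: cache content data in the cache server."""
        self.cached[key] = content

    def on_new_content(self, notification):
        """Steps 31 + 33 + 35: obtain meta data about new content from the
        data source, map it to cached entries, and invalidate those entries."""
        for field in notification:
            for key in self.rules.get(field, []):
                self.cached.pop(key, None)   # invalidate mapped entry

system = CacheServiceSystem({"athlete_time": ["result-list", "medal-list"]})
system.cache("result-list", ["9.63 s", "9.75 s"])
system.cache("medal-list", ["gold: A", "silver: B"])
system.on_new_content({"athlete_time": "9.58 s"})
print("result-list" in system.cached)   # → False: invalidated
```

Only the entries that the mapping identifies as affected are invalidated; unrelated cached content stays valid, which is the advantage over time-to-live expiry of the whole cache.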
A server, according to an embodiment, for invalidation of cached data is presented with reference to Fig. 6A. The server 6 comprises a communication manager 61 for receiving 32 a notification indicating new content data for a cache server and for sending 34 an instruction to the cache server, to invalidate mapped content data already cached by the cache server, and a cache manager 60 for mapping 33 the new content data indicated in the received notification to content data already cached by the cache server.
A cache service system, according to an embodiment, for managing cached data is presented with reference to Fig. 6B. The cache service system 8 comprises a communication manager 61 for obtaining 31 new content data from a data source 7, and a cache manager 60 for caching 30 content data in a cache server 4, for mapping 33, in an invalidation server 6, the new content data, obtained from the data source, to content data already cached by the cache server, and for invalidating 35, in the cache server, mapped content data already cached by the cache server.
The communication manager 61 may further be for receiving 32 a notification indicating new content data for the cache server 4, and for sending 34 an instruction to the cache server 4, to invalidate mapped content data already cached by the cache server.
A computer program 14, 15, according to an embodiment, for invalidation of cached data is presented with reference to Fig. 5A. The computer program comprises computer program code which, when run on a server 6, causes the server 6 to receive 32 a notification indicating new content data for a cache server, to map 33 the new content data indicated in the received notification to content data already cached by the cache server, and to send 34 an instruction to the cache server, to invalidate mapped content data already cached by the cache server.

A computer program 14, 15, according to an embodiment, for managing cached data is presented with reference to Fig. 5B. The computer program comprises computer program code which, when run on a cache service system 8, causes the cache service system 8 to cache 30 content data in a cache server 4, to obtain 31 new content data from a data source 7, to map 33, in an invalidation server 6, the new content data, obtained from the data source, to content data already cached by the cache server, and to invalidate 35, in the cache server, mapped content data already cached by the cache server.
A computer program product 12, 13, according to an embodiment, comprises a computer program 14, 15 and a computer readable storage means on which the computer program 14, 15 is stored.
Fig. 5A is a schematic diagram showing some components of the server 6. A processor 10 may be provided using any combination of one or more of a suitable central processing unit, CPU, multiprocessor, microcontroller, digital signal processor, DSP, application specific integrated circuit etc., capable of executing software instructions of a computer program 14 stored in a memory. The memory can thus be considered to be or form part of the computer program product 12. The processor 10 may be configured to execute methods described herein with reference to Fig. 6A. The memory may be any combination of read and write memory, RAM, and read only memory, ROM. The memory may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. A second computer program product 13 in the form of a data memory may also be provided, e.g. for reading and/or storing data during execution of software instructions in the processor 10. The data memory can be any combination of read and write memory, RAM, and read only memory, ROM, and may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. The data memory may e.g. hold other software instructions 15, to improve functionality for the server 6.
The server 6 may further comprise an input/output, I/O, interface 11 including e.g. a user interface. The server or system may further comprise a receiver configured to receive signalling from other nodes, and a transmitter configured to transmit signalling to other nodes (not illustrated). Other components of the server are omitted in order not to obscure the concepts presented herein.
Fig. 5B is a schematic diagram showing some components of the cache service system 8. A processor 10 may be provided using any combination of one or more of a suitable central processing unit, CPU, multiprocessor, microcontroller, digital signal processor, DSP, application specific integrated circuit etc., capable of executing software instructions of a computer program 14 stored in a memory. The memory can thus be considered to be or form part of the computer program product 12. The processor 10 may be configured to execute methods described herein with reference to Fig. 6B.
The memory may be any combination of read and write memory, RAM, and read only memory, ROM. The memory may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
A second computer program product 13 in the form of a data memory may also be provided, e.g. for reading and/or storing data during execution of software instructions in the processor 10. The data memory can be any combination of read and write memory, RAM, and read only memory, ROM, and may also comprise persistent storage, which, for example, can be any single one or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory. The data memory may e.g. hold other software instructions 15, to improve functionality for the cache service system 8.
The cache service system may further comprise an input/output, I/O, interface 11 including e.g. a user interface. The system may further comprise a receiver configured to receive signalling from other nodes, and a transmitter configured to transmit signalling to other nodes (not illustrated). Other components of the cache service system are omitted in order not to obscure the concepts presented herein.
Fig. 6A is a schematic diagram showing functional blocks of the server 6. The modules may be implemented as only software instructions such as a computer program executing in the server, or only hardware, such as application specific integrated circuits, field programmable gate arrays, discrete logical components, transceivers, etc., or as a combination thereof. In an alternative embodiment, some of the functional blocks may be implemented by software and others by hardware. The modules correspond to the steps in the method illustrated in Fig. 4A, comprising a cache manager unit 60 and a communication manager unit 61. In the embodiments where one or more of the modules are implemented by a computer program, it shall be understood that these modules do not necessarily correspond to process modules, but can be written as instructions according to a programming language in which they would be implemented, since some programming languages do not typically contain process modules.
The cache manager 60 is for invalidation of cached data. This module corresponds to the map step 33 of Fig. 4A. This module can e.g. be
implemented by the processor 10 of Fig. 5A, when running the computer program.
The communication manager 61 is for communication of data and instructions. This module corresponds to the receive step 32 and the send step 34 of Fig. 4A. This module can e.g. be implemented by the processor 10 of Fig. 5A, when running the computer program.

Fig. 6B is a schematic diagram showing functional blocks of the cache service system 8. The modules may be implemented as only software instructions such as a computer program executing in the cache server, or only hardware, such as application specific integrated circuits, field programmable gate arrays, discrete logical components, transceivers, etc., or as a combination thereof. In an alternative embodiment, some of the functional blocks may be implemented by software and others by hardware. The modules correspond to the steps in the method illustrated in Fig. 4B, comprising a cache manager unit 60 and a communication manager unit 61. In the embodiments where one or more of the modules are implemented by a computer program, it shall be understood that these modules do not necessarily correspond to process modules, but can be written as instructions according to a programming language in which they would be implemented, since some programming languages do not typically contain process modules.
The cache manager 60 is for managing cached data. This module corresponds to the cache step 30, the map step 33, and the invalidate step 35 of Fig. 4B. This module can e.g. be implemented by the processor 10 of Fig. 5B, when running the computer program.
The communication manager 61 is for communication of data and instructions. This module corresponds to the obtain step 31, the receive step 32, and the send step 34 of Fig. 4B. This module can e.g. be implemented by the processor 10 of Fig. 5B, when running the computer program.
The invention has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims.

Claims

1. A method for invalidation of cached data, the method being performed by a server (6) and comprising: receiving (32) a notification indicating new content data for a cache server; mapping (33) the new content data indicated in the received notification to content data already cached by the cache server; and sending (34) an instruction to the cache server, to invalidate mapped content data already cached by the cache server.
2. The method according to claim 1, wherein the new content data is modelled after a first data model, and the content data, already cached by the cache server, is modelled after a second data model.
3. The method according to claim 1 or 2, wherein the new content data comprises a first parameter and is mapped to the content data, already cached by the cache server, comprising a second parameter.
4. The method according to any one of claims 1 to 3, wherein the new content data is for a first data object and is mapped to the content data, already cached by the cache server, for a second data object.
5. The method according to any one of claims 1 to 4, wherein the mapping comprises a set of rules for mapping the new content data to content data already cached by the cache server.
6. The method according to claim 5, wherein at least one rule of the set of rules is determined by machine learning.
7. A method for managing cached data, the method being performed by a cache service system and comprising: caching (30) content data in a cache server (4); obtaining (31) new content data from a data source (7); mapping (33), in a server (6), the new content data, obtained from the data source, to content data already cached by the cache server; and invalidating (35), in the cache server, mapped content data already cached by the cache server.
8. The method according to claim 7, wherein the new content data is modelled after a first data model, and the content data, already cached by the cache server, is modelled after a second data model.
9. The method according to claim 7 or 8, wherein the new content data comprises a first parameter and is mapped to the content data, already cached by the cache server, comprising a second parameter.
10. The method according to any one of claims 7 to 9, wherein the new content data is for a first data object and is mapped to the content data, already cached by the cache server, for a second data object.
11. The method according to any one of claims 7 to 10, wherein the mapping comprises a set of rules for mapping the new content data to content data already cached by the cache server.
12. The method according to claim 11, wherein at least one rule of the set of rules is determined by machine learning.

13. A server (6) for invalidation of cached data, the server (6) comprising: a processor (10); and a computer program product (12, 13) storing instructions that, when executed by the processor, cause the server (6) to: receive (32) a notification indicating new content data for a cache server; map (33) the new content data indicated in the received notification to content data already cached by the cache server; and send (34) an instruction to the cache server, to invalidate mapped content data already cached by the cache server.
14. The server (6) according to claim 13, wherein the new content data is modelled after a first data model, and the content data, already cached by the cache server, is modelled after a second data model.
15. The server (6) according to claim 13 or 14, wherein the new content data comprises a first parameter and is mapped to the content data, already cached by the cache server, comprising a second parameter.
16. The server (6) according to any one of claims 13 to 15, wherein the new content data is for a first data object and is mapped to the content data, already cached by the cache server, for a second data object.
17. The server (6) according to any one of claims 13 to 16, wherein the mapping comprises a set of rules for mapping the new content data to content data already cached by the cache server.
18. The server (6) according to claim 17, wherein at least one rule of the set of rules is determined by machine learning.
19. A cache service system (8) for managing cached data, the cache service system (8) comprising: a processor (10); and a computer program product (12, 13) storing instructions that, when executed by the processor, cause the cache service system to: cache (30) content data in a cache server (4); obtain (31) new content data from a data source (7); map (33), in a server (6), the new content data, obtained from the data source, to content data already cached by the cache server; and invalidate (35), in the cache server, mapped content data already cached by the cache server.
20. The cache service system (8) according to claim 19, wherein the new content data is modelled after a first data model, and the content data already cached by the cache server is modelled after a second data model.
21. The cache service system (8) according to claim 19 or 20, wherein the new content data comprises a first parameter and is mapped to the content data, already cached by the cache server, comprising a second parameter.
22. The cache service system (8) according to any one of claims 19 to 21, wherein the new content data is for a first data object and is mapped to the content data, already cached by the cache server, for a second data object.
23. The cache service system (8) according to any one of claims 19 to 22, wherein the mapping comprises a set of rules for mapping the new content data to content data already cached by the cache server.
24. The cache service system (8) according to claim 23, wherein at least one rule of the set of rules is determined by machine learning.
25. A server for invalidation of cached data, the server (6) comprising: a communication manager (61) for receiving (32) a notification indicating new content data for a cache server and for sending (34) an instruction to the cache server, to invalidate mapped content data already cached by the cache server; and a cache manager (60) for mapping (33) the new content data indicated in the received notification to content data already cached by the cache server.
26. A cache service system for managing cached data, the cache service system (8) comprising: a communication manager (61) for obtaining (31) new content data from a data source (7); and a cache manager (60) for caching (30) content data in a cache server (4), for mapping (33), in a server (6), the new content data, obtained from the data source, to content data already cached by the cache server, and for
invalidating (35), in the cache server, mapped content data already cached by the cache server.
27. A computer program (14, 15) for invalidation of cached data, the computer program comprising computer program code which, when run on a server (6), causes the server (6) to: receive (32) a notification indicating new content data for a cache server; map (33) the new content data indicated in the received notification to content data already cached by the cache server; and send (34) an instruction to the cache server, to invalidate mapped content data already cached by the cache server.
28. A computer program (14, 15) for managing cached data, the computer program comprising computer program code which, when run on a cache service system (8), causes the cache service system (8) to: cache (30) content data in a cache server (4); obtain (31) new content data from a data source (7); map (33), in a server (6), the new content data, obtained from the data source, to content data already cached by the cache server; and invalidate (35), in the cache server, mapped content data already cached by the cache server.
29. A computer program product (12, 13) comprising a computer program (14, 15) according to claim 27 or 28 and a computer readable storage means on which the computer program (14, 15) is stored.
PCT/SE2016/050557 2016-06-09 2016-06-09 Invalidation of cached data Ceased WO2017213562A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/SE2016/050557 WO2017213562A1 (en) 2016-06-09 2016-06-09 Invalidation of cached data


Publications (1)

Publication Number Publication Date
WO2017213562A1 true WO2017213562A1 (en) 2017-12-14

Family

ID=60578092



Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6584548B1 (en) * 1999-07-22 2003-06-24 International Business Machines Corporation Method and apparatus for invalidating data in a cache
US6934720B1 (en) * 2001-08-04 2005-08-23 Oracle International Corp. Automatic invalidation of cached data
US20140310293A1 (en) * 2013-04-13 2014-10-16 Oracle International Corporation System for replication-driven repository cache invalidation across multiple data centers



Legal Events

- 121 (EP): the EPO has been informed by WIPO that EP was designated in this application. Ref document number: 16904766; Country of ref document: EP; Kind code of ref document: A1.
- NENP: Non-entry into the national phase. Ref country code: DE.
- 122 (EP): PCT application non-entry in European phase. Ref document number: 16904766; Country of ref document: EP; Kind code of ref document: A1.