WO2019028524A1 - Multiple cache architecture - Google Patents
- Publication number
- WO2019028524A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- content
- cache architecture
- digital content
- partner
- cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
- G06Q30/0241—Advertisements
- G06Q30/0277—Online advertisement
Definitions
- the digital content may be pushed to the computing device and stored in the cache architecture.
- the digital content may be retrieved or pulled, by the computing device over a network (e.g., the internet) from various content sources including e.g., data exchanges and stored into the cache architecture.
- for the cache architecture described herein, it may be desirable to serve content within a predetermined window of opportunity (WOO).
- WOO may be about 1 sec (e.g., about 0.5 sec, 0.75 sec, 1 sec, 1.25 sec, 1.5 sec, etc.).
- the desired content may be readily available in a cache architecture configured to enable the desired speed.
- content may be retrieved before the content is requested (e.g., before a triggering event). Determining when, how, and/or what content to retrieve may be advantageous for the cache architecture described herein.
- the digital content may not be reliably retrieved to store in the cache because the request for the digital content may fail.
- the computing device may be in a standby/sleep/low-power mode, which may result in an inability to execute the desired requests over the network until the computing device is available again (e.g., wakes up).
- the computing device may not have a stable network connection because of external factors (e.g. poor connection to the network, poor WiFi, poor cellular reception, bandwidth limitations, data shaping by the internet service provider, etc.).
- the radios in the device may not be started until after the device is awoken by the user interacting with it, which may, in some embodiments, be too late to retrieve the digital data in a timely fashion.
- digital content that is already cached within the cache architecture may expire if not utilized (e.g., displayed) before the expiry time specified by a content provider. Showing the content after it has expired may not be desirable for any number of reasons.
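The expiry handling described above can be sketched as follows; the class and helper names are hypothetical and stand in for whatever bookkeeping an implementation actually uses:

```python
import time

class CachedContent:
    """One piece of cached digital content with a provider-specified expiry."""
    def __init__(self, payload, ttl_seconds):
        self.payload = payload
        # expiry time derived from the lifetime the content provider specified
        self.expires_at = time.time() + ttl_seconds

    def is_expired(self):
        return time.time() >= self.expires_at

def get_displayable(item):
    """Return the payload only if the cached content has not expired."""
    if item is None or item.is_expired():
        return None  # expired content is never shown; a refresh would be triggered instead
    return item.payload

fresh = CachedContent("ad-creative", ttl_seconds=3600)
stale = CachedContent("old-creative", ttl_seconds=-1)  # already past its expiry
```

An expired entry simply reads back as empty, which would prompt the architecture to fetch fresh content rather than show stale material.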
- digital content may be loaded to the cache architecture in two different ways: directly, by downloading the digital content, or indirectly, by asking a third party application (e.g., an SDK) to download the content.
- when the digital content is downloaded directly, the content is retrieved from an external device and can be stored in cache on the computing device. In some embodiments, this storage of content may persist in the cache after the computing device is restarted and/or powered off.
- many large content providers (e.g., the Facebook and Google content networks) may provide their content through such a third party SDK. An SDK is a library of code which may be embedded within a larger application or computing device.
- the computing device and corresponding cache architecture may include an application or process (e.g., an API) for the SDK to load the content and later, after the content is loaded, may call another API to provide (e.g., retrieve and/or display) the content to the user.
- when content is requested, an API in the cache architecture (e.g., an application that is part of the architecture) calls a second API to query the Facebook SDK to provide (e.g., display) the requested content. Since the cache architecture does not have direct access to the content held within the third party SDK, it cannot cache the content in a resilient manner before the content is requested. This may result in the need to request new content from the third party SDK when the computing device is restarted or turned on.
- the application responsible for loading the content may only be in the foreground when displaying content, which means it may be constantly at risk of being stopped by the operating system on the computing device.
- the stoppage of certain applications may occur in accordance with certain rules (e.g., an importance hierarchy) whereby background processes may be stopped at any time if resources such as processing and memory are needed by foreground processes.
- a set of interrelated solutions may be developed to implement a cache architecture in which content is cached and ready to be provided to (e.g., displayed to) the user upon a triggering event.
- a durable cache architecture may be implemented such that direct served content, which can be downloaded over the network directly, is stored in persistent storage (e.g., a disk or memory card) on the computing device. In some embodiments, this durable cache may survive application and/or device restarts.
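As a rough illustration of such a durable cache, the sketch below persists content to a JSON file so that a new instance (standing in for an application or device restart) can still read it back; the class name and file path are invented for this example:

```python
import json
import os
import tempfile

class DurableCache:
    """Direct-served content persisted to disk so it survives app/device restarts."""
    def __init__(self, path):
        self.path = path

    def store(self, key, content):
        data = self._load_all()
        data[key] = content
        with open(self.path, "w") as f:
            json.dump(data, f)  # write-through to persistent storage

    def fetch(self, key):
        return self._load_all().get(key)

    def _load_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

path = os.path.join(tempfile.gettempdir(), "durable_cache_demo.json")
DurableCache(path).store("slot-1", "<html>ad</html>")
restarted = DurableCache(path)  # a fresh instance stands in for a restart
```

Because the content lives on disk rather than only in process memory, a restarted instance sees the same entries.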
- the cache architecture may fill the cache from a content source chosen by an external decision engine.
- an application associated with the cache architecture may query an external decision engine (e.g. a content server) which content source (e.g. content network) the computing device should query to retrieve content.
- the decision of which content source to select may be based, at least in part, on a ranking of content sources determined by a third party and/or based on user properties (such as age, gender, interests, phone model, location, time of day, etc.). If a content source does not have content available then the cache architecture may ask the external decision engine for the next best content source, continuing down the list until content is returned or the list is exhausted.
- FIG. 1 is a schematic representation of an embodiment of the cache architecture described, according to some embodiments.
- the cache architecture 100 includes a memory 110 for storing digital content.
- the cache architecture 100 further includes a query engine 120 configured to query a remotely located decision engine 130 to determine which content partner 140 should be queried for content.
- the decision engine may return content directly (e.g., provide direct content to the cache architecture).
- the decision engine 130 makes its choice based on a set of predefined rules.
- the cache architecture then utilizes a Requesting API 150 to request content from the selected content partner 140 (e.g., by calling a third party SDK 160).
- if a query is unsuccessful, the third party SDK will retry (e.g., retry up to N times, where N is a number configured in the third party SDK or the Requesting API).
- the query engine 120 may query the remote decision engine 130 to ask which content partner 140 to query next.
- these steps may be repeated until digital content is returned to the cache architecture or the decision engine 130 indicates there are no more content partners 140 to query.
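The fill loop described in the preceding bullets might be sketched as follows; the `next_partner` interface and partner-query callables are hypothetical names invented for illustration, not part of the disclosure:

```python
def fill_cache(decision_engine, partners, max_retries=2):
    """Walk the decision engine's partner choices until content is returned
    or there are no more partners to query (hypothetical interfaces)."""
    while True:
        partner_name = decision_engine.next_partner()  # which partner to try next
        if partner_name is None:                       # decision engine is exhausted
            return None
        partner = partners[partner_name]
        for _ in range(1 + max_retries):               # initial attempt plus retries
            content = partner()
            if content is not None:
                return content
        # partner failed after all retries; ask the decision engine for the next one

class FakeDecisionEngine:
    """Stand-in for the remote decision engine: yields a ranked partner list."""
    def __init__(self, ranking):
        self._ranking = iter(ranking)
    def next_partner(self):
        return next(self._ranking, None)

partners = {"MoPub": lambda: None, "AdMob": lambda: "admob-ad"}
engine = FakeDecisionEngine(["MoPub", "AdMob"])
result = fill_cache(engine, partners)
```

Here the first partner never returns content, so after its retries are exhausted the loop asks the decision engine for the next partner, mirroring the repeat-until-content-or-exhaustion behavior described above.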
- when the content is requested after a triggering event, the content may be retrieved from the third party SDK by a load API 170 and stored in memory 110.
- third party content networks may include Facebook, Google's AdMob or Twitter's MoPub.
- the cache architecture may be configured to directly show content (e.g., an HTML document with CSS, images and javascript), or the cache architecture may embed the third party SDKs within the cache architecture (e.g., embed the MoPub SDK inside an application on the computing device).
- the cache architecture calls the load API, and once the digital content is loaded into memory, a show API may display the content to the user.
- the decision engine may be responsible for determining what content to provide to the cache architecture.
- the content server may receive an HTTP/S request from the cache architecture and make a decision about what content should be served to the cache architecture at that time.
- the cache architecture may send additional data to help the content server make its decision.
- the data may include any combination of one or more of: latitude and longitude, age, age range, gender, interests, how many days the user has been registered on a particular platform, etc.
- one or more line items set up as a waterfall may be included.
- waterfall refers to a set of rules configured in such a way that if the rules for the first line item don't apply, the decision falls through to the next line item.
- the rules on each line item are applied to the request, one line item at a time, flowing from the first line item to the next.
- the decision lands on a particular line item, then the content or instruction associated with that item is returned.
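The line-item fall-through described above can be illustrated with a minimal sketch; the rule predicates and result strings below are invented for the example:

```python
def evaluate_waterfall(line_items, request):
    """Apply each line item's rules in order; the first line item whose rules
    all match the request wins, otherwise the decision falls through."""
    for item in line_items:
        if all(rule(request) for rule in item["rules"]):
            return item["result"]  # content, or a mediated instruction string
    return None                    # list exhausted: nothing to serve

line_items = [
    {"rules": [lambda r: r["age"] >= 21], "result": "ad-for-adults"},
    {"rules": [lambda r: True],           "result": "MoPub:180:2"},  # mediated instruction
]
decision = evaluate_waterfall(line_items, {"age": 18})
```

A request that fails the first line item's rule falls through to the catch-all second line item, which in this sketch returns a mediated instruction.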
- line item 2 is identified as a mediated instruction.
- the content server can return an ad (html or image etc.) directly but it can also return some arbitrary string, an instruction, which the cache architecture may be configured to understand and use to take an action.
- a mediated instruction therefore, is an instruction that tells the cache architecture to call one of the third party SDKs.
- the content server might return the string "MoPub:180:2", which says the cache architecture should query the MoPub SDK for content.
- the content can be cached for 180 minutes and the cache architecture can retry twice if it fails to load content.
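A minimal parser for an instruction string of this `SDK:minutes:retries` shape might look like the following; the function name is an assumption, not part of the disclosure:

```python
def parse_mediated_instruction(instruction):
    """Split an instruction like "MoPub:180:2" into the SDK to call,
    the cache lifetime in minutes, and the permitted number of retries."""
    sdk_name, cache_minutes, retries = instruction.split(":")
    return sdk_name, int(cache_minutes), int(retries)

sdk, ttl_minutes, retries = parse_mediated_instruction("MoPub:180:2")
```

The cache architecture would then call the named SDK, cache whatever it loads for `ttl_minutes`, and allow `retries` re-attempts on failure.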
- FIG. 2 illustrates the operation of an exemplary cache architecture, according to certain embodiments.
- the digital content is an advertisement but the process can be applied to any type of digital content.
- a trigger event occurs to initiate the process.
- the trigger event could be, for example, a dismiss, a screen off detection by the computing device, a screen on detection by the computing device, the detection of some other user interaction by the computing device, etc.
- the cache architecture makes a request (e.g., a call over HTTPS) to the content server. If a response (e.g., an HTTP response) is received from the content server, the process continues, otherwise, the cache architecture logs a content failure at operation 8.
- if the maximum number of retries has been reached, the process stops at operation 13 until a new trigger event occurs. Otherwise, the process continues back to operation 2 to make another content request.
- the maximum number of retries may be provided by the third party content partner (e.g., mediated partner or the cache architecture).
- the response is parsed to determine whether it is digital content or a failure message. If the response is a failure message from the content server, this may indicate there is no content in the content server that satisfies the waterfall criteria (e.g., no more line items), so the failure is logged at operation 8.
- Operation 5 corresponds to the reception of digital content. If the response is actual content, the content is cached into the cache architecture at operation X. If the response is a mediation request (e.g., an instruction to use the third party SDK to deliver the content), then the cache architecture checks to determine if the content is cacheable. In the illustrated embodiment, all third party content is cacheable, so the false path is not illustrated. However, in some embodiments, the false path may result in a failed process or similar event. If the third party content is cacheable, the third party SDK (e.g., AdMob or Facebook SDK) is queried to load content at operation 24. If the content is received, the content (or a reference to the SDK where the content is stored) is cached in memory so it is ready for the user when necessary (e.g., when the computing device is initiated or when the mobile device screen is unlocked).
- the cycle repeats (i.e., the request at operation 24 is made until it either succeeds or fails). If it fails, the cache architecture retries if the retry limit for that third party has not been reached. If it succeeds, content is loaded within the third party SDK, which is ready to provide (e.g., display) the content when the user unlocks the mobile device. As illustrated, if the cache architecture queries the third party to load content until it runs out of retries and there is no content (operation 27), then the cache architecture may make another call to the content server and ask it for other content or instructions. In some embodiments, if this occurs, the cache architecture may instruct the content server to skip the third party that just failed.
- the process described above may occur multiple times before any content is requested, such that multiple pieces of content are cached (i.e., multiple caches are filled).
- the cache architecture attempts to retrieve content from the first cache and, if it is empty, uses the second cache.
- the cache architecture described herein may include 2, 3, 4, 5, 6, 7, 8, 9, 10, or more than 10 caches.
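The first-cache-then-second-cache retrieval can be sketched as below, modeling each cache as a simple list of pending content items (an assumption made purely for illustration):

```python
def retrieve(caches):
    """Try each cache in order and return the first piece of content found."""
    for cache in caches:
        content = cache.pop(0) if cache else None  # take the oldest cached item
        if content is not None:
            return content
    return None  # every cache was empty

first_cache = []               # empty: already consumed or never filled
second_cache = ["cached-ad"]
content = retrieve([first_cache, second_cache])
```

The same loop generalizes to any number of caches: retrieval falls through each empty cache until it finds content or exhausts the list.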
- FIG. 3 is a schematic diagram of an embodiment of the waterfall architecture, according to certain embodiments.
- the content server (SaS) is queried to fill the cache.
- the configuration of the cache architecture assumes that a content request is retried up to nine times before failing and the third party (mediated) partners are retried zero times.
- the process is initiated by an event that triggers the SDK to load content into one of its caches.
- the SDK instructs the SaS to get content (i.e. the cache architecture queries the content server to make a decision about what content it should show).
- the SaS responds to the SDK that it should use "MoPub High Floor" (first line in the SaS waterfall).
- the SDK queries the MoPub partner for content. Assuming MoPub responds to the SDK that there is no content, the SDK initiates a first retry with SaS to get content. In this iteration, SaS may respond to the SDK to use "AdMob High Floor" (second line in the SaS waterfall).
- the SDK queries the AdMob partner for content.
- the SDK initiates a second retry with SaS to get content.
- SaS may respond to the SDK to use "MoPub Medium Floor" (third line in the SaS waterfall).
- the SDK queries the MoPub partner for content.
- the SDK initiates a third retry with SaS to get content.
- SaS may respond to the SDK to use "AdMob Medium Floor" (fourth line in the SaS waterfall).
- the SDK queries the AdMob partner for content.
- the SDK initiates a fourth retry with SaS to get content.
- SaS may respond to the SDK to use "MoPub Low Floor" (fifth line in the SaS waterfall).
- the SDK queries the MoPub partner for content. In this instance, assuming MoPub responds to the SDK that there is content, it provides the content to the SDK.
Landscapes
- Business, Economics & Management (AREA)
- Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Strategic Management (AREA)
- Finance (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Marketing (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Information Transfer Between Computers (AREA)
Abstract
A cache architecture comprising: a data storage device; a query engine configured to query an external decision engine for information about where to retrieve digital content and receive information about a content partner upon a first trigger event; a third party SDK to query the content partner for digital content and store the digital content received from the content partner; and an API configured to retrieve the digital content from the third party SDK upon the occurrence of a second trigger event and store the digital content in the data storage device.
Description
MULTIPLE CACHE ARCHITECTURE
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application No. 62/543,707, filed August 10, 2017. This application is also related to International Application No.
PCT/AU2014/000793, filed on August 7, 2014 and International Application No.
PCT/AU2014/000794, filed on August 7, 2014. Each of these priority and related applications is incorporated herein by reference in its entirety.
FIELD
[0002] The present disclosure relates to a multiple cache architecture and more particularly, to a multiple cache architecture for improving the availability and speed at which digital content can be accessed.
BACKGROUND
[0003] Cache is a hardware/software component for storing digital data so it can be retrieved at a later time. Cache is typically utilized in computing so that requests for data can be served faster than if the data were stored in another type of memory (e.g., remote main memory). In some situations, the data stored in cache memory may be a duplicate copy of data stored in another type of memory. In some situations, the cache may be located closer in proximity to a processor than the other memory.
[0004] While data stored in cache can typically be accessed faster than data stored in another memory, in some situations, it may be necessary to make the access speed to the cache more predictable and/or consistent. Accordingly, there is a need for a cache architecture that improves the availability and speed at which digital content can be accessed.
[0005] The subject matter claimed herein is not limited to embodiments that solve one or more disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one exemplary technology area where some embodiments described herein may be practiced.
SUMMARY
[0006] The disclosed embodiments consist of features and combinations of parts hereinafter fully described and illustrated in the accompanying drawings, it being understood that various changes in the details may be made without departing from the scope of the disclosed embodiments or sacrificing one or more of the advantages of the present disclosure.
[0007] Some embodiments described herein may provide for a cache architecture comprising: a data storage device; a query engine configured to query an external decision engine for information about where to retrieve digital content and receive information about a content partner upon a first trigger event; a third party SDK to query the content partner for digital content and store the digital content received from the content partner; and an API configured to retrieve the digital content from the third party SDK upon the occurrence of a second trigger event and store the digital content in the data storage device. In some embodiments, the decision engine may return digital content directly.
[0008] In some embodiments, the third party SDK may retry to query the content partner for a predetermined number of times if the previous query is unsuccessful.
[0009] In some embodiments, the query engine may query the external decision engine again if the content partner does not return digital content to the cache architecture after the predetermined number of retries.
[0010] In some embodiments, the cache architecture may be implemented in a mobile device.
[0011] In some embodiments, the cache architecture may be implemented in a stationary device.
[0012] In some embodiments, the digital content may be an advertisement.
[0013] In some embodiments, the digital content may be news content.
[0014] In some embodiments, the digital content may be loyalty offers or membership deals.
[0015] In some embodiments, the digital content may be audio content.
[0016] In some embodiments, the digital content may be audio/visual content.
[0017] In some embodiments, the content partner may be an ad server.
[0018] In some embodiments, the first trigger event may be one or more of a dismiss, a screen off detection by the cache architecture, a screen on detection by the cache architecture, or the detection of some other user interaction by the cache architecture.
[0019] In some embodiments, the second trigger event may be an unlocking of a mobile device.
[0020] In some embodiments, multiple first trigger events may occur before the second trigger event occurs.
[0021] Some embodiments described herein may provide for a method for loading digital content into a cache architecture comprising: identifying a first trigger event; querying an external decision engine for information about where to retrieve digital content from; receiving information about a content partner; querying a content partner for digital content; storing the digital content received from the content partner as part of a third party SDK; identifying a second trigger event; and retrieving the digital content from the third party SDK for display to the user.
[0022] In some embodiments, the method may further comprise retrying to query the content partner for a predetermined number of times if the previous query is unsuccessful.
[0023] In some embodiments, the method may further comprise querying the external decision engine again if the content partner does not return digital content after the predetermined number of retries.
[0024] In some embodiments, the method may be implemented in a mobile device.
[0025] In some embodiments, the method may be implemented in a stationary device.
[0026] In some embodiments, the digital content may be an advertisement.
[0027] In some embodiments, the digital content may be news content.
[0028] In some embodiments, the digital content may be loyalty offers or membership deals.
[0029] In some embodiments, the digital content may be audio content.
[0030] In some embodiments, the digital content may be audio/visual content.
[0031] In some embodiments, the content partner may be an ad server.
[0032] In some embodiments, the first trigger event may be one or more of a dismiss, a screen off detection by the cache architecture, a screen on detection by the cache architecture, or the detection of some other user interaction by the cache architecture.
[0033] In some embodiments, the second trigger event may be an unlocking of a mobile device.
[0034] In some embodiments, multiple first trigger events may occur before the second trigger event occurs.
[0035] In some embodiments, the digital content that may be delivered to devices includes any combination of one or more of the following: advertisements, news content, social media content, weather content, sports content, loyalty offerings, membership deals, audio content, visual content, audio-visual content, radio content, television content, games, financial market information and other forms of information.
[0036] The digital content may be delivered via a number of different approaches including one or more of the following: the Internet, a local area network (LAN), a wide area network (WAN) and a cellular network, or some other telecommunications network. In addition, the user device may be connected to the telecommunications network in one or more of the following ways: wireless, wired, coaxial, Ethernet, and fiber optics.
[0037] In some embodiments, the device may be configured to be unlocked by receiving an input from a user of the device. For example, in certain embodiments, the input provided by the user may be one or more of the following inputs into the device: touch sensitive display, one or more buttons associated with the device, mouse, keyboard, voice activation, motion activation and gesture activation.
[0038] As well as the embodiments discussed in the summary, other embodiments are disclosed in the specification, drawings and claims.
BRIEF DESCRIPTION OF ACCOMPANYING DRAWINGS
[0039] To further clarify various aspects of certain embodiments, a more particular description of certain embodiments is provided by reference to specific embodiments thereof, which are illustrated in the appended drawings. These drawings depict exemplary embodiments and are therefore not to be considered limiting of their scope. The exemplary embodiments are described and explained with additional specificity and detail through the accompanying drawings.
[0040] FIG. 1 is a schematic representation of an embodiment of the cache architecture described, according to some embodiments.
[0041] FIG. 2 illustrates the operation of an exemplary cache architecture, according to certain embodiments.
[0042] FIG. 3 is a schematic diagram of an embodiment of the waterfall architecture, according to certain embodiments.
DETAILED DESCRIPTION
[0043] The present disclosure relates to a multiple cache architecture and more particularly, to a multiple cache architecture for improving the availability and speed at which digital content can be accessed.
[0044] Although certain embodiments described herein discuss the use of a computing device generally, the embodiments are also applicable to more specialized computing devices such as mobile devices, etc.
[0045] The present disclosure is described in further detail with reference to one or more embodiments, some examples of which are illustrated in the accompanying drawings. The examples and embodiments are provided by way of explanation and are not to be taken as limiting to the scope of the disclosure. Furthermore, features illustrated or described as part of one embodiment may be used by themselves to provide other embodiments and features illustrated or described as part of one embodiment may be used with one or more other embodiments to provide further embodiments. The present disclosure covers these variations and embodiments as well as other variations and/or modifications.
[0046] The caching solutions described herein may be used for a variety of purposes where reliable digital content retrieval is desired. For example, in some embodiments, it may be desirable to have fast, reliable retrieval of digital content on a computing device (e.g., a mobile device). In some embodiments, the cache architecture described herein may be utilized on a computing device to improve the time required to retrieve and display digital content on a display.
[0047] In some embodiments, the cache architecture described herein may be implemented to ensure (or improve) the availability of digital content on the computing device so there is content available to retrieve (e.g., display) for the user at the appropriate time (e.g., when requested by the computing device, an application running on the computing device, or an input from a user). For example, in some embodiments, the input from the user may be the unlocking of the mobile device.
[0048] In some embodiments, the digital content may be pushed to the computing device and stored in the cache architecture. In some embodiments, the digital content may be retrieved, or pulled, by the computing device over a network (e.g., the internet) from various content sources (e.g., data exchanges) and stored in the cache architecture.
[0049] In some embodiments, it may be desirable to utilize the cache architecture described herein to serve content within a predetermined window of opportunity (WOO). In some embodiments, the WOO may be about 1 sec (e.g., about 0.5 sec, 0.75 sec, 1 sec, 1.25 sec, 1.5 sec, etc.). To achieve these types of retrieval times, in some embodiments, it may be advantageous for the desired content to be readily available in a cache architecture configured to enable the desired speed.
[0050] Due to network latency and reliability, in some instances, it can take up to several seconds to retrieve digital content over a network (e.g., the internet). This makes real-time (or substantially real-time) retrieval of digital content from such a network too slow for certain applications. To overcome this issue, in some embodiments, content may be retrieved before the content is requested (e.g., before a triggering event). Determining when, how, and/or what content to retrieve may be advantageous for the cache architecture described herein.
[0051] Several problems may arise when determining how and when to fill the cache architecture described herein.
[0052] First, the digital content may not be reliably retrieved to store in the cache because the request for the digital content may fail. For example, the computing device may be in a standby/sleep/low-power mode, which may result in an inability to execute the desired requests over the network until the computing device is available again (e.g., wakes up). The computing device may not have a stable network connection because of external factors (e.g., poor connection to the network, poor WiFi, poor cellular reception, bandwidth limitations, data shaping by the internet service provider, etc.). The radios in the device may not be started until after the device is awoken by the user interacting with it, which may, in some embodiments, be too late to retrieve the digital data in a timely fashion. There may not be any content available from external content providers (e.g., content servers or content networks) when the request is made. As a result, it may be desirable, in some embodiments, to identify times when content can be retrieved reliably.
[0053] Second, digital content that is already cached within the cache architecture may expire if not utilized (e.g., displayed) before the expiry time specified by a content provider.
Showing the content after it has expired may not be desirable for any number of reasons.
Accordingly, in some embodiments, it may be desirable to retrieve new content to replace expired content within the cache architecture.
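As a non-limiting illustration of the expiry handling described above, the following Python sketch shows a cache entry that carries a provider-specified expiry time and is replaced when stale. All names (CachedItem, get_fresh, the refetch callback) are illustrative, not part of any claimed implementation.

```python
import time

class CachedItem:
    """A cached piece of content with a provider-specified time-to-live (illustrative)."""
    def __init__(self, content, ttl_seconds):
        self.content = content
        self.expires_at = time.time() + ttl_seconds

    def is_expired(self):
        return time.time() >= self.expires_at

def get_fresh(cache, key, refetch):
    """Return cached content, replacing the entry via refetch() if it has expired."""
    item = cache.get(key)
    if item is None or item.is_expired():
        content, ttl = refetch()  # retrieve new content to replace the expired entry
        cache[key] = item = CachedItem(content, ttl)
    return item.content
```

In practice the refetch step would be the network retrieval described elsewhere in this disclosure; here it is a simple callback so the expiry logic stands alone.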
[0054] Third, digital content may be loaded to the cache architecture in two different ways: directly, by downloading the digital content, or indirectly, by asking a third party application (e.g., an SDK) to download the content. When the digital content is downloaded directly, the content is retrieved from an external device and can be stored in cache on the computing device. In some embodiments, this storage of content may persist in the cache after the computing device is restarted and/or powered off. However, many large content providers (e.g., the Facebook and Google content networks) do not provide a mechanism to directly access the content and instead require the use of their application. An SDK is a library of code which may be embedded within a larger application or computing device. To load content through the third party application (e.g., SDK), the computing device and corresponding cache architecture may include an application or process (e.g., an API) that asks the SDK to load the content and later, after the content is loaded, may call another API to provide (e.g., retrieve and/or display) the content to the user. Since the SDK is provided by a third party, the functionality of the SDK may not be known to the computing device. For example, if the computing device requests content from the Facebook network, an API in the cache architecture (e.g., an application that is part of the architecture) may query the Facebook SDK, which is embedded in the cache architecture, to retrieve the requested content. Then, when a triggering event (e.g., mobile device activation and/or unlock) occurs, the cache architecture calls a second API to query the Facebook SDK to provide (e.g., display) the requested content. Since the cache architecture does not have direct access to the content held within the third party SDK, it cannot cache the content in a resilient manner before the content is requested. This may result in the need to request new content from the third party SDK when the computing device is restarted or turned on.
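The two-API pattern described above (one call to ask the third party SDK to load content ahead of time, a later call to hand the loaded content over) can be sketched as follows. This is a minimal stand-in, not any real network SDK's interface; ThirdPartySDK, request_api and load_api are all hypothetical names.

```python
class ThirdPartySDK:
    """Stand-in for an embedded third party SDK whose internals are opaque
    to the cache architecture; real SDKs expose similar load/get calls."""
    def __init__(self):
        self._content = None

    def load_content(self):
        # In a real SDK this would download over the network; simulated here.
        self._content = "partner-content"

    def get_content(self):
        return self._content

def request_api(sdk):
    """First API: ask the SDK to load content before it is needed."""
    sdk.load_content()

def load_api(sdk, data_store):
    """Second API: on a trigger event, pull the loaded content out of the
    SDK and keep it in the cache architecture's own storage."""
    content = sdk.get_content()
    if content is not None:
        data_store["cached"] = content
    return content
```

The key point the sketch captures is that the content lives inside the SDK until the second call, which is why it cannot be persisted resiliently beforehand.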
[0055] Fourth, in some embodiments, the application responsible for loading the content may only be in the foreground when displaying content, which means it may be constantly at risk of being stopped by the operating system on the computing device. In some embodiments, the stoppage of certain applications may occur in accordance with certain rules (e.g., an importance hierarchy) whereby background processes may be stopped at any time if resources such as processing and memory are needed by foreground processes. By storing the digital content in a persistent cache, the content may be available and ready to display when a triggering event occurs.
[0056] To address the issues described above, a set of interrelated solutions may be developed to implement a cache architecture in which content is cached and ready to be provided to (e.g., displayed to) the user upon a triggering event.
[0057] In some embodiments, a durable cache architecture may be implemented such that direct served content, which can be downloaded over the network directly, can be stored in persistent storage (e.g., a disk or memory card) on the computing device. In some embodiments, this durable cache may survive application and/or device restarts.
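One simple way to realize a durable cache of the kind described above is to serialize the directly served entries to persistent storage and reload them on startup. The sketch below is illustrative only (JSON on disk is one possible encoding, not a required one).

```python
import json
import os

def save_cache(path, entries):
    """Persist directly served content to disk so it survives restarts."""
    with open(path, "w") as f:
        json.dump(entries, f)

def load_cache(path):
    """Reload the persisted cache on startup; an absent file means an empty cache."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```

Note this only works for direct served content; as discussed above, content held inside a third party SDK cannot be persisted this way.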
[0058] In some embodiments, the cache architecture may fill the cache from a content source chosen by an external decision engine. For example, an application associated with the cache architecture may query an external decision engine (e.g. a content server) which content source (e.g. content network) the computing device should query to retrieve content. In some embodiments, the decision of which content source to select may be based, at least in part, on a ranking of content sources determined by a third party and/or based on user properties (such as age, gender, interests, phone model, location, time of day, etc.). If a content source does not have content available then the cache architecture may ask the external decision engine for the next best content source, continuing down the list until content is returned or the list is exhausted.
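The fill loop described above (ask the decision engine for a source, query it, and on failure ask for the next best source until content is returned or the list is exhausted) can be sketched in Python as follows. The decision_engine and partners callables are hypothetical stand-ins for the remote services.

```python
def fill_cache(decision_engine, partners, max_retries=2):
    """Walk the decision engine's ranked content sources until one returns
    content or the ranking is exhausted (illustrative names throughout)."""
    skip = []
    while True:
        source = decision_engine(skip)      # in practice, an HTTPS call
        if source is None:                  # no more content sources to try
            return None
        for _ in range(max_retries + 1):    # initial attempt plus retries
            content = partners[source]()
            if content is not None:
                return content
        skip.append(source)                 # source declined; ask for next best
```

A toy decision engine that simply returns the highest-ranked source not yet skipped is enough to exercise the loop.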
[0059] FIG. 1 is a schematic representation of an embodiment of the cache architecture described, according to some embodiments. As illustrated, the cache architecture 100 includes a memory 110 for storing digital content.
[0060] The cache architecture 100 further includes a query engine 120 configured to query a remotely located decision engine 130 to ask which content partner 140 should be queried for content. In some embodiments, the decision engine may return content directly (e.g., provide direct content to the cache architecture). The decision engine 130 makes its choice based on a set of predefined rules. The cache architecture then utilizes a Requesting API 150 to request content from the selected content partner 140 (e.g., by calling a third party SDK 160). In some embodiments, if there is an error retrieving content from the content partner 140, the third party SDK will retry (e.g., retry up to N times, where N is a number configured in the third party SDK or Requesting API). If the content partner 140 returns a response indicating that it declines to provide content, then the query engine 120 may query the remote decision engine 130 to ask which content partner 140 to query next. In some embodiments, these steps may be repeated until digital content is returned to the cache architecture or the decision engine 130 indicates there are no more content partners 140 to query.
[0061] When the content is requested after a triggering event, the content may be retrieved from the third party SDK by a load API 170 and stored in memory 110.
[0062] In some embodiments, third party content networks may include Facebook, Google's AdMob, or Twitter's MoPub. The cache architecture may be configured to directly show content (e.g., an HTML document with CSS, images and javascript), or the cache architecture may embed the third party SDKs within the cache architecture (e.g., embed the MoPub SDK inside an application on the computing device). In this case, when a triggering event requires the display of digital content, the cache architecture calls the load API and, once the digital content is loaded into memory, a show API may display the content to the user.
[0063] As described herein, the decision engine (e.g., content server) may be responsible for determining what content to provide to the cache architecture. In some embodiments, the content server may receive an HTTP/S request from the cache architecture and make a decision about what content should be served to the cache architecture at that time. When a request is made from the cache architecture to the content server, the cache architecture may send additional data to help the content server make its decision. The data may include any combination of one or more of: latitude and longitude, age, age range, gender, interests, how many days the user has been registered on a particular platform, etc. Within the content server structure, one or more line items set up as a waterfall may be included. As used herein, the term waterfall refers to a set of rules configured in such a way that if the rules for the first line item don't apply, the decision falls through to the next line item. In other words, the rules on each line item are applied to the request, one line item at a time, flowing from the first line item to the next. When the decision lands on a particular line item, the content or instruction associated with that item is returned.
[0064] For example, in an embodiment, there may be three line items as follows:
• Line item 1: direct content (i.e., HTML content). Rule: display a maximum of 100,000 times per day, to women only.
• Line item 2: MoPub mediated instruction. Rule: display a maximum of 5 times per hour.
• Line item 3: Facebook mediated instruction. Rule: unlimited.
[0065] In this example, when a request comes into the content server for a woman, it may land on line item 1 first and direct HTML content may be returned. However, after the content has been served 100,000 times in the day, that line may be skipped and the decision would move to line item 2, the MoPub instruction. However, if a particular user has already been served the MoPub content 5 times in the last hour, then it will also skip line item 2 and land on item 3, which is the Facebook instruction. Since there is no cap on the number of times it can be served, item 3 is always there as a fallback if none of the previous rules apply.
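The fall-through evaluation of the three line items above can be sketched as a list of rules applied in order, where the first rule that passes wins. The field names in the request dictionary (gender, direct_today, mopub_last_hour) are illustrative counters, not part of any defined protocol.

```python
def choose_line_item(line_items, request):
    """Apply each line item's rule to the request in order; the decision
    lands on the first item whose rule passes (the waterfall)."""
    for item in line_items:
        if item["rule"](request):
            return item["name"]
    return None  # no line item applies: a failure response would be returned

line_items = [
    # Line item 1: direct HTML content, capped at 100,000/day, women only.
    {"name": "direct",
     "rule": lambda r: r["gender"] == "female" and r["direct_today"] < 100_000},
    # Line item 2: MoPub mediated instruction, capped at 5 per hour per user.
    {"name": "mopub",
     "rule": lambda r: r["mopub_last_hour"] < 5},
    # Line item 3: Facebook mediated instruction, unlimited (always a fallback).
    {"name": "facebook",
     "rule": lambda r: True},
]
```

Because the last rule is unconditional, the waterfall in this configuration always resolves to some line item.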
[0066] As described above, line item 2 is identified as a mediated instruction. The content server can return an ad (HTML or image, etc.) directly, but it can also return some arbitrary string, an instruction, which the cache architecture may be configured to understand and use to take an action. A mediated instruction, therefore, is an instruction that tells the cache architecture to call one of the third party SDKs. For example, the content server might return the string "MoPub:180:2", which says the cache architecture should query the MoPub SDK for content, the content can be cached for 180 minutes, and the cache architecture can retry twice if it fails to load content.
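Decoding a mediated instruction of the "MoPub:180:2" form described above amounts to splitting the string into a partner name, a cache duration in minutes, and a retry count. The dictionary keys below are illustrative; the disclosure does not prescribe a parsed representation.

```python
def parse_mediated_instruction(instruction):
    """Parse a 'partner:cache_minutes:retries' string such as 'MoPub:180:2'."""
    partner, cache_minutes, retries = (part.strip() for part in instruction.split(":"))
    return {
        "partner": partner,                  # which third party SDK to query
        "cache_minutes": int(cache_minutes), # how long the content may be cached
        "retries": int(retries),             # how many times to retry on failure
    }
```

A malformed instruction (wrong number of fields) would raise a ValueError here, which a real implementation would presumably treat as a failed response.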
[0067] FIG. 2 illustrates the operation of an exemplary cache architecture, according to certain embodiments. In the embodiment illustrated in FIG. 2, the digital content is an advertisement, but the process can be applied to any type of digital content. Initially, a trigger event occurs to initiate the process. The trigger event could be, for example, a dismiss, a screen off detection by the computing device, a screen on detection by the computing device, the detection of some other user interaction by the computing device, etc. At operation 2, the cache architecture makes a request (e.g., a call over HTTPS) to the content server. If a response (e.g., an HTTP response) is received from the content server, the process continues; otherwise, the cache architecture logs a content failure at operation 8. If the number of retries reaches a predetermined threshold, the process stops at operation 13 until a new trigger event occurs. Otherwise, the process continues back to operation 2 to make another content request. In some embodiments, the maximum number of retries may be provided by the third party content partner (e.g., mediated partner) or the cache architecture.
[0068] If a valid HTTP response is received from the content server, the response is parsed to determine whether it contains digital content or a failure message. If the response is a failure message from the content server, this may indicate there is no content in the content server that satisfies the waterfall criteria (e.g., no more line items), so the log a failure operation (operation 8) occurs.
[0069] Operation 5 corresponds to the reception of digital content. If the response is actual content, the content is cached into the cache architecture at operation X. If the response is a mediation request (e.g., an instruction to use the third party SDK to deliver the content), then the cache architecture checks to determine if the content is cacheable. In the illustrated embodiment, all third party content is cacheable, so the false path is not illustrated. However, in some embodiments, the false path may result in a failed process or similar event. If the third party content is cacheable, the third party SDK (e.g., the AdMob or Facebook SDK) is queried to load content at operation 24. If the content is received, the content (or a reference to the SDK where the content is stored) is cached in memory so it is ready for the user when necessary (e.g., when the computing device is initiated or when the mobile device screen is unlocked).
[0070] If the third party SDK couldn't load the content, the cycle repeats (i.e., the request at operation 24 is made until it either succeeds or fails). If it fails, the cache architecture retries if the retry limit for that third party has not been reached. If it succeeds, then content is loaded within the third party SDK, which is ready to provide (e.g., display) the content when the user unlocks the mobile device. As illustrated, if the cache architecture queries the third party to load content until it runs out of retries and there is no content (operation 27), then the cache architecture may make another call to the content server and ask it for other content or instructions. In some embodiments, if this occurs, the cache architecture may instruct the content server to skip the third party that just failed.
[0071] In some embodiments, the process described above may occur multiple times before any content is requested, such that multiple pieces of content are cached (i.e., multiple caches are filled). In some embodiments, when the content is requested by the user, the cache architecture attempts to retrieve content from the first cache and, if it is empty, uses the second cache. In some embodiments, the cache architecture described herein may include 2, 3, 4, 5, 6, 7, 8, 9, 10, or more than 10 caches.
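The first-cache-then-second-cache fallback described above can be sketched as walking an ordered list of caches and serving from the first non-empty one. The list-of-lists representation is illustrative only.

```python
def retrieve(caches):
    """Serve content from the first non-empty cache in order, consuming it;
    return None if every cache is empty."""
    for cache in caches:
        if cache:
            return cache.pop(0)
    return None
```

With, say, three caches where only the second and third are filled, the second cache serves first and the third acts as the next fallback.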
[0072] FIG. 3 is a schematic diagram of an embodiment of the waterfall architecture, according to certain embodiments. As illustrated in FIG. 3, the content server (SaS) is queried to fill the cache. For purposes of this example, the configuration of the cache architecture assumes that a content request is retried up to nine times before failing and the third party (mediated) partners are retried zero times.
[0073] The process is initiated by an event that triggers the SDK to load content into one of its caches. The SDK instructs the SaS to get content (i.e., the cache architecture queries the content server to make a decision about what content it should show). The SaS responds to the SDK that it should use "MoPub High Floor" (first line in the SaS waterfall). The SDK then queries the MoPub partner for content. Assuming MoPub responds to the SDK that there is no content, the SDK initiates a first retry with the SaS to get content. In this iteration, the SaS may respond to the SDK to use "AdMob High Floor" (second line in the SaS waterfall). The SDK then queries the AdMob partner for content. Assuming AdMob responds to the SDK that there is no content, the SDK initiates a second retry with the SaS to get content. In this iteration, the SaS may respond to the SDK to use "MoPub Medium Floor" (third line in the SaS waterfall). The SDK then queries the MoPub partner for content. Assuming MoPub responds to the SDK that there is no content, the SDK initiates a third retry with the SaS to get content. In this iteration, the SaS may respond to the SDK to use "AdMob Medium Floor" (fourth line in the SaS waterfall). The SDK then queries the AdMob partner for content. Assuming AdMob responds to the SDK that there is no content, the SDK initiates a fourth retry with the SaS to get content. In this iteration, the SaS may respond to the SDK to use "MoPub Low Floor" (fifth line in the SaS waterfall). The SDK then queries the MoPub partner for content. In this instance, assuming MoPub responds to the SDK that there is content, it provides the content to the SDK.
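The FIG. 3 walkthrough above can be replayed with a short simulation: the SaS waterfall is a ranked list of lines, each line names a partner floor, and the SDK retries down the list until a partner has content. The line names follow the example; the run_waterfall function and availability mapping are illustrative.

```python
def run_waterfall(saas_lines, partner_content):
    """Replay the FIG. 3 sequence: for each SaS waterfall line in order,
    query the named partner and stop at the first one that has content."""
    for line in saas_lines:
        content = partner_content.get(line)  # partner query for this floor
        if content is not None:
            return line, content
    return None, None  # waterfall exhausted with no content

lines = ["MoPub High Floor", "AdMob High Floor", "MoPub Medium Floor",
         "AdMob Medium Floor", "MoPub Low Floor"]
# In the example, only the fifth line ("MoPub Low Floor") has content.
available = {"MoPub Low Floor": "ad-content"}
```

Running this with the example's availability reproduces the four failed queries followed by success on the fifth line.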
[0074] The disclosure has been described with reference to particular exemplary embodiments. However, it may be readily apparent to those skilled in the art that it is possible to embody the disclosure in specific forms other than those of the embodiments described herein. The embodiments are merely illustrative and should not be considered restrictive. The scope of the disclosure is given by the appended claims, rather than the preceding description, and variations and equivalents that fall within the range of the claims are intended to be embraced therein.
Claims
1. A cache architecture comprising:
a data storage device;
a query engine configured to query an external decision engine for information about where to retrieve digital content and receive information about a content partner upon a first trigger event;
a third party SDK to query the content partner for digital content and store the digital content received from the content partner; and
an API configured to retrieve the digital content from the third party SDK upon the occurrence of a second trigger event and store the digital content in the data storage device.
2. The cache architecture of claim 1, wherein the third party SDK retries to query the content partner for a predetermined number of times if the previous query is unsuccessful.
3. The cache architecture of claim 2, wherein the query engine queries the external decision engine again if the content partner does not return digital content to the cache architecture after the predetermined number of retries.
4. The cache architecture of claims 1-3, wherein the cache architecture is implemented in a mobile device.
5. The cache architecture of claims 1-3, wherein the cache architecture is implemented in a stationary device.
6. The cache architecture of one or more of the above claims, wherein the digital content is an advertisement.
7. The cache architecture of one or more of the above claims, wherein the digital content is news content.
8. The cache architecture of one or more of the above claims, wherein the digital content is loyalty offers or membership deals.
9. The cache architecture of one or more of the above claims, wherein the digital content is audio content.
10. The cache architecture of one or more of the above claims, wherein the digital content is audio/visual content.
11. The cache architecture of one or more of the above claims, wherein the content partner is an ad server.
12. The cache architecture of one or more of the above claims, wherein the first trigger event is one or more of a dismiss, a screen off detection by the cache architecture, a screen on detection by the cache architecture, the detection of some other user interaction by cache architecture.
13. The cache architecture of one or more of the above claims, wherein the second trigger event is an unlocking of a mobile device.
14. The cache architecture of one or more of the above claims, wherein multiple first trigger events occur before the second trigger event occurs.
15. A method for loading digital content into a cache architecture comprising:
identifying a first trigger event;
querying an external decision engine for information about where to retrieve digital content from;
receiving information about a content partner;
querying a content partner for digital content;
storing the digital content received from the content partner as part of a third party SDK;
identifying a second trigger event; and
retrieving the digital content from the third party SDK for display to the user.
16. The method of claim 15, further comprising retrying to query the content partner for a predetermined number of times if the previous query is unsuccessful.
17. The method of claim 16, further comprising querying the external decision engine again if the content partner does not return digital content after the predetermined number of retries.
18. The method of claims 15-17, wherein the method is implemented in a mobile device.
19. The method of claims 15-17, wherein the method is implemented in a stationary device.
20. The method of one or more of claims 15-19, wherein the digital content is an advertisement.
21. The method of one or more of claims 15-20, wherein the digital content is news content.
22. The method of one or more of claims 15-21, wherein the digital content is loyalty offers or membership deals.
23. The method of one or more of claims 15-22, wherein the digital content is audio content.
24. The method of one or more of claims 15-23, wherein the digital content is audio/visual content.
25. The method of one or more of claims 15-24, wherein the content partner is an ad server.
26. The method of one or more of claims 15-25, wherein the first trigger event is one or more of a dismiss, a screen off detection by the cache architecture, a screen on detection by the cache architecture, the detection of some other user interaction by cache architecture.
27. The method of one or more of claims 15-26, wherein the second trigger event is an unlocking of a mobile device.
28. The method of one or more of claims 15-27, wherein multiple first trigger events occur before the second trigger event occurs.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201762543707P | 2017-08-10 | 2017-08-10 | |
| US62/543,707 | 2017-08-10 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019028524A1 (en) | 2019-02-14 |
Family
ID=65273019
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/AU2018/050845 (WO2019028524A1, Ceased) | Multiple cache architecture | 2017-08-10 | 2018-08-10 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019028524A1 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6826614B1 (en) * | 2001-05-04 | 2004-11-30 | Western Digital Ventures, Inc. | Caching advertising information in a mobile terminal to enhance remote synchronization and wireless internet browsing |
| US20130110637A1 (en) * | 2011-11-02 | 2013-05-02 | Ross Bott | Strategically timed delivery of advertisements or electronic coupons to a mobile device in a mobile network |
| US20140006538A1 (en) * | 2012-06-28 | 2014-01-02 | Bytemobile, Inc. | Intelligent Client-Side Caching On Mobile Devices |
| US8805950B1 (en) * | 2007-09-12 | 2014-08-12 | Aol Inc. | Client web cache |
| WO2015017891A1 (en) * | 2013-08-07 | 2015-02-12 | Unlockd Media Pty. Ltd. | Systems, devices and methods for displaying digital content on a display |
| US20150294342A1 (en) * | 2014-04-15 | 2015-10-15 | Xperiel, Inc. | Platform for Providing Customizable Brand Experiences |
Non-Patent Citations (1)
| Title |
|---|
| KHAN, A J I ET AL.: "CAMEO: A Middleware for Mobile Advertisement Delivery", PROCEEDING OF THE 11TH ANNUAL INTERNATIONAL CONFERENCE ON MOBILE SYSTEMS, APPLICATIONS, AND SERVICES, 2013, pages 125 - 138, XP055576980 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18844860; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18844860; Country of ref document: EP; Kind code of ref document: A1 |