
CN110598138B - Processing method and device based on cache - Google Patents


Info

Publication number
CN110598138B
Authority
CN
China
Prior art keywords
target data
cache pool
pool
request
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810600524.2A
Other languages
Chinese (zh)
Other versions
CN110598138A (en)
Inventor
陈然 (Chen Ran)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201810600524.2A
Publication of CN110598138A
Application granted
Publication of CN110598138B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/547Remote procedure calls [RPC]; Web services
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/54Indexing scheme relating to G06F9/54
    • G06F2209/541Client-server

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache-based processing method and device, relating to the field of computer technology. The method comprises the following steps: querying a local cache according to a data query request to obtain target data; when the target data does not exist in the local cache, obtaining the target data from a server and counting the number of requests for the target data; and writing the target data obtained from the server into the local cache when the number of requests exceeds an activation threshold. Through these steps, the access pressure that high-frequency network requests place on the server can be effectively relieved and request response efficiency improved; moreover, the caching mechanism can be triggered automatically according to request statistics, improving the flexibility of the caching mechanism.

Description

Processing method and device based on cache
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a processing method and apparatus based on cache.
Background
In existing RPC (remote procedure call) technology, a server provides a set of function interfaces to clients, and all caching policies are executed on the server side. Moreover, the server starts and stops the cache according to switch settings in a configuration file or a database configuration table.
In the process of implementing the present invention, the inventor found that the prior art has at least the following problems. First, because caching policies are all implemented on the server side, network congestion readily occurs in high-concurrency scenarios, seriously affecting the timely return of RPC response data. Second, starting and stopping the cache relies on manual configuration (modifying or setting configuration files, database configuration tables, etc.), so the working mechanism is inflexible. Third, the prior art relies mainly on a timeout mechanism to manage cached data, which is a single management means. In addition, the prior art does not manage cached data hierarchically.
Disclosure of Invention
In view of the above, the invention provides a cache-based processing method and device that can effectively relieve the access pressure that high-frequency network requests place on a server and improve request response efficiency, and that can automatically trigger the caching mechanism according to request statistics, improving the flexibility of the caching mechanism.
To achieve the above object, according to one aspect of the present invention, there is provided a cache-based processing method.
The cache-based processing method of the invention comprises the following steps: querying a local cache according to a data query request to obtain target data; and, when the target data does not exist in the local cache, obtaining the target data from a server, counting the number of requests for the target data, and writing the target data obtained from the server into the local cache when the number of requests exceeds an activation threshold.
Optionally, the local cache includes a first-level cache pool and a second-level cache pool, and the step of querying the local cache according to the data query request to obtain the target data comprises: querying the first-level cache pool according to the data query request; when the target data exists in the first-level cache pool, obtaining the target data from the first-level cache pool; when the target data does not exist in the first-level cache pool, querying the second-level cache pool according to the data query request; and when the target data exists in the second-level cache pool, obtaining the target data from the second-level cache pool.
Optionally, the method further comprises: before the step of obtaining the target data from the first-level cache pool is executed, confirming that the target data in the first-level cache pool is within a first effective duration and that the number of requests for it within the first effective duration does not exceed a first wear threshold.
Optionally, the method further comprises: when the target data in the first-level cache pool exceeds the first effective duration, or the number of requests within the first effective duration is greater than the first wear threshold, obtaining the target data from the server; when the target data in the first-level cache pool exceeds the first effective duration, deleting the target data from the first-level cache pool; and when the number of requests for the target data within the first effective duration is greater than the first wear threshold, updating the target data in the first-level cache pool.
Optionally, the method further comprises: before the step of obtaining the target data from the second-level cache pool is executed, confirming that the target data in the second-level cache pool is within a second effective duration and that the number of requests for it within the second effective duration does not exceed a second wear threshold.
Optionally, the method further comprises: when the target data in the second-level cache pool exceeds the second effective duration, or the number of requests within the second effective duration is greater than the second wear threshold, obtaining the target data from the server; when the target data in the second-level cache pool exceeds the second effective duration, deleting the target data from the second-level cache pool; and when the number of requests for the target data within the second effective duration is greater than the second wear threshold, updating the target data in the second-level cache pool.
Optionally, the method further comprises: after the step of confirming that the target data in the second-level cache pool is within the second effective duration is executed, judging whether the number of requests for the target data within a transition statistical period is greater than a transition threshold; if so, writing the target data obtained from the server into the first-level cache pool and deleting the target data from the second-level cache pool; wherein the transition statistical period is shorter than the second effective duration.
Optionally, the local cache further comprises a third-level cache pool, and the step of counting the number of requests for the target data comprises: querying the third-level cache pool according to the data query request to obtain a corresponding statistical record; when the corresponding statistical record is within a third effective duration, incrementing the number of requests in the record by one; and when the corresponding statistical record exceeds the third effective duration, setting the number of requests in the record to one.
Optionally, the first-level cache pool, the second-level cache pool and/or the third-level cache pool adopt an LRU (least recently used) storage mechanism.
To achieve the above object, according to another aspect of the present invention, there is provided a cache-based processing apparatus.
The cache-based processing device of the invention comprises: an acquisition module for querying a local cache according to a data query request to obtain target data; a communication module for obtaining the target data from a server when the target data does not exist in the local cache; and a cache starting module for counting the number of requests for the target data when the target data does not exist in the local cache, and for writing the target data obtained from the server into the local cache when the number of requests exceeds an activation threshold.
To achieve the above object, according to still another aspect of the present invention, there is provided an electronic apparatus.
The electronic device of the present invention includes: one or more processors; and a storage means for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the cache-based processing methods of the present invention.
To achieve the above object, according to still another aspect of the present invention, a computer-readable medium is provided.
The computer readable medium of the present invention has stored thereon a computer program which, when executed by a processor, implements the cache-based processing method of the present invention.
One embodiment of the above invention has the following advantages or beneficial effects. Because a local cache is arranged at the client, target data is obtained from the local cache according to the data query request, and the target data is obtained from the server only when it does not exist in the local cache, a large number of data query requests can be handled locally. This markedly reduces the number of requests transmitted to the server over the network, effectively relieves the access pressure that high-frequency network requests place on the server, and improves the response efficiency of data query requests. In addition, by counting the number of requests for the target data when it is absent from the local cache, and writing the target data obtained from the server into the local cache once the number of requests exceeds an activation threshold, the client's local caching mechanism can be triggered automatically according to request statistics, improving the flexibility of the caching mechanism.
Further effects of the above optional implementations are described below in connection with specific embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main steps of a cache-based processing method according to one embodiment of the invention;
FIG. 2 is a schematic diagram of the main steps of a cache-based processing method according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of partial steps of a cache-based processing method according to yet another embodiment of the present invention;
FIG. 4 is a schematic diagram of partial steps of a cache-based processing method according to yet another embodiment of the present invention;
FIG. 5 is a schematic diagram of the main blocks of a cache-based processing apparatus according to one embodiment of the invention;
FIG. 6 is a schematic diagram of the composition of a local cache according to an embodiment of the invention;
FIG. 7 is a schematic diagram of the main blocks of a cache-based processing apparatus according to another embodiment of the present invention;
FIG. 8 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 9 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present invention are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It is noted that embodiments of the invention and features of the embodiments may be combined with each other without conflict.
FIG. 1 is a schematic diagram of the main steps of a cache-based processing method according to one embodiment of the invention. The method of the embodiment of the invention can be executed by a client. As shown in fig. 1, the cache-based processing method according to the embodiment of the present invention includes:
Step S101, inquiring a local cache according to the data inquiry request so as to acquire target data.
The data query request may take a request format such as an RPC (remote procedure call) request or an HTTP request. In a specific example, the caller makes an RPC request through parameters defined in a function interface provided by the server. After receiving the caller's RPC request, the client queries the local cache according to the parameters in the request; if the target data exists in the local cache, the client obtains it directly from the local cache. The target data may be understood as the result data of a query. For example, if the request parameter is a commodity identifier, the target data is the commodity's detail information; if the request parameter is a merchant name, the target data is the merchant's transaction information.
The local cache may be disposed in the client's memory and may cache data in the form of key-value pairs. Further, the local cache may be a multi-level cache comprising a first-level cache pool and a second-level cache pool. Both pools are mainly used to cache target data with a higher request frequency, and the request frequency of data cached in the first-level cache pool is higher than that of data cached in the second-level cache pool.
Step S102, when the target data does not exist in the local cache, the target data is obtained from a server.
Specifically, when the target data does not exist in the local cache, the client may send a data query request to the server, and then receive the result data corresponding to the data query request returned by the server, that is, the target data.
Step S103, when the local cache does not have the target data, counting the number of requests of the target data, and writing the target data acquired from the server into the local cache when the number of requests of the target data exceeds an activation threshold.
In step S103, the number of requests for the target data may be understood as "number of calls for the request parameter". For example, if the request parameter in the data query request is a commodity identifier, the client counts the number of calls of the received commodity identifier.
In the embodiment of the invention, a local cache is arranged at the client, target data is obtained from the local cache according to the data query request, and the target data is obtained from the server only when it does not exist in the local cache, so a large number of data query requests can be handled locally. This markedly reduces the number of requests transmitted to the server over the network, effectively relieves the access pressure that high-frequency network requests place on the server, and improves the response efficiency of data query requests. In addition, by counting the number of requests for the target data when it is absent from the local cache, and writing the target data obtained from the server into the local cache once the number of requests exceeds an activation threshold, the client's local caching mechanism can be triggered automatically according to request statistics, improving the flexibility of the caching mechanism.
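Steps S101 to S103 can be sketched as a minimal client-side wrapper (illustrative only, not the patent's reference implementation; the class name, the fetch callback and the default threshold are assumptions):

```python
class CacheActivatingClient:
    """Sketch of steps S101-S103: serve from a local cache, and only
    start caching a key once its request count exceeds an activation
    threshold. All names here are invented for illustration."""

    def __init__(self, fetch_from_server, activation_threshold=100):
        self.fetch = fetch_from_server      # stands in for the RPC to the server
        self.activation_threshold = activation_threshold
        self.local_cache = {}               # request parameter -> target data
        self.request_counts = {}            # request parameter -> miss count

    def query(self, key):
        # S101: query the local cache first.
        if key in self.local_cache:
            return self.local_cache[key]
        # S102: cache miss, obtain the target data from the server.
        data = self.fetch(key)
        # S103: count the request; admit only "hot" keys into the cache.
        self.request_counts[key] = self.request_counts.get(key, 0) + 1
        if self.request_counts[key] > self.activation_threshold:
            self.local_cache[key] = data
        return data
```

With a threshold of 2, the third miss on the same key activates caching for it; later queries are then served locally without touching the server.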
Fig. 2 is a schematic diagram of the main steps of a cache-based processing method according to another embodiment of the present invention. In the embodiment shown in fig. 2, the local cache includes a first-level cache pool, a second-level cache pool and a third-level cache pool. The first-level and second-level cache pools are mainly used to cache target data with a higher request frequency, while the third-level cache pool is mainly used to cache statistical records of data query requests. The request frequency of data cached in the first-level cache pool is higher than that of data cached in the second-level cache pool. As shown in fig. 2, the cache-based processing method according to the embodiment of the present invention includes:
step S201, inquiring the first-level cache pool according to the data inquiry request.
The data query request may be in a request format such as an RPC request or an Http request. In particular, the caller may make the RPC request via parameters defined in the function interface provided by the server. After receiving the RPC request, the client may first query a first level cache pool in the local cache according to parameters in the RPC request. The primary cache pool may cache data in the form of key-value pairs. Further, the primary cache pool employs an LRU (least recently used) storage mechanism to ensure that recently requested target data is properly recorded and that earlier requested target data is deleted from the primary cache pool.
Step S202, when target data exists in the first-level cache pool, the target data is acquired from the first-level cache pool. Further, after the target data is obtained from the primary cache pool, the obtained target data may be returned to the caller.
And step S203, inquiring the secondary cache pool according to the data inquiry request when the primary cache pool does not have target data.
The secondary cache pool may cache data in the form of key-value pairs. Further, the secondary cache pool employs an LRU (least recently used) storage mechanism to ensure that recently requested target data is properly retained and that target data requested earlier is deleted from the pool. Adopting an LRU storage mechanism in the first-level and second-level cache pools ensures that hot-spot data does not occupy a large amount of the client's memory resources. Alternatively, the first-level and second-level cache pools may adopt a FIFO (first-in-first-out queue) storage mechanism.
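The LRU storage mechanism described above can be sketched minimally on top of Python's `collections.OrderedDict` (illustrative only; the class name and the default capacity are assumptions):

```python
from collections import OrderedDict

class LRUPool:
    """Minimal LRU cache pool: the least recently used entry is evicted
    once the pool exceeds its capacity. Capacity is illustrative."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> value, oldest first

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)         # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

A FIFO variant, also mentioned above, would simply omit the `move_to_end` call in `get`, so eviction order depends only on insertion order.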
And step S204, when the target data exists in the secondary cache pool, acquiring the target data from the secondary cache pool. Further, after the target data is obtained from the secondary cache pool, the obtained target data may be returned to the caller.
Step S205, when the target data does not exist in the secondary cache pool, the target data is acquired from the server.
Specifically, when the first-level cache pool and the second-level cache pool do not have the requested target data, the client may send the data query request to the server, and then receive the result data corresponding to the data query request returned by the server, that is, the target data. Further, after the target data is acquired from the server, the acquired target data may be returned to the caller.
Step S206, inquiring the three-level cache pool according to the data inquiry request so as to acquire a corresponding statistical record.
The tertiary cache pool may cache statistical records in the form of key-value pairs. Further, the tertiary cache pool employs an LRU (least recently used) storage mechanism to ensure that statistical records of recent data query requests are properly retained and older statistical records are deleted from the pool. Specifically, when neither the first-level nor the second-level cache pool holds the requested target data, the client can query the third-level cache pool according to the request parameters in the data query request to obtain the statistical record corresponding to those parameters. The statistical record may include: the request parameters, a third effective duration (or "statistical period duration"), the number of requests within the third effective duration, and an activation threshold. In specific implementations, the third effective duration and the activation threshold can be set flexibly as required; for example, the third effective duration may be set to 5 minutes and the activation threshold to 100 requests.
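One possible in-memory shape for such a statistical record is sketched below (illustrative only; the field names, defaults and methods are assumptions, and the expiry check follows the cycle-end-time variant described below under step S207):

```python
import time
from dataclasses import dataclass, field

@dataclass
class StatRecord:
    """Illustrative statistical record for the third-level cache pool."""
    request_param: str
    valid_seconds: float = 300.0          # "third effective duration" (5 min)
    activation_threshold: int = 100
    request_count: int = 0
    cycle_end: float = field(default_factory=lambda: time.time() + 300.0)

    def is_valid(self, now=None):
        # Step S207: the record is within its period while the cycle end
        # time is later than the current time.
        now = time.time() if now is None else now
        return self.cycle_end > now

    def count_request(self, now=None):
        # Steps S208 / S212: increment within the period; otherwise reset
        # the count to one and start a new period.
        now = time.time() if now is None else now
        if self.is_valid(now):
            self.request_count += 1
        else:
            self.request_count = 1
            self.cycle_end = now + self.valid_seconds
        # Step S209: report whether the activation threshold is exceeded.
        return self.request_count > self.activation_threshold
```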
Step S207, judging whether the corresponding statistical record is within a third effective duration. If yes, go to step S208; if not, go to step S212.
In an alternative embodiment, the statistical record may further include a cycle end time. In this case, step S207 further includes: comparing the cycle end time in the obtained statistical record with the current time. If the cycle end time is later than the current time, the obtained statistical record is judged to be within the third effective duration; otherwise, it is judged to have exceeded the third effective duration. In another alternative embodiment, the statistical record may instead include a state identifier. In this case, step S207 further includes: when the value of the state identifier is "true", the obtained statistical record is within the third effective duration; when the value is "false", the obtained statistical record has exceeded the third effective duration.
Step S208, the request times in the corresponding statistical record are increased by one. After step S208, step S209 is performed.
Step S209, determining whether the number of requests in the corresponding statistics record exceeds an activation threshold. If yes, go to step S210; if not, step S211 is performed.
In the embodiment of the present invention, when the number of requests for the target data exceeds the activation threshold, the data query request is relatively active and the caching policy should be enabled for it; that is, the requested target data is placed into the local cache via step S210.
Step S210, writing the target data acquired from the server into a secondary cache pool.
Step S211, no operation is performed. Note that "no operation" in step S211 means that the operation in step S210 is not performed.
Step S212, the request times in the corresponding statistical record are set as one. Further, after step S212, the method according to the embodiment of the present invention further includes the following steps: updating the cycle end time or the value of the state identifier in the statistical record.
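Putting the above together, the flow of steps S201 to S212 can be sketched as a single lookup function (illustrative only; the data layouts, names and default values are assumptions, not the patent's implementation):

```python
import time

def lookup(key, pool1, pool2, stats, fetch, activation_threshold=100,
           period_seconds=300.0, now=None):
    """Sketch of steps S201-S212. pool1/pool2 map keys to cached data,
    stats maps keys to (count, cycle_end) records, and fetch() stands in
    for the RPC to the server."""
    now = time.time() if now is None else now
    if key in pool1:                       # S201-S202: first-level hit
        return pool1[key]
    if key in pool2:                       # S203-S204: second-level hit
        return pool2[key]
    data = fetch(key)                      # S205: fall through to the server
    count, cycle_end = stats.get(key, (0, now + period_seconds))
    if cycle_end > now:                    # S207-S208: within the period
        count += 1
    else:                                  # S212: period over, start anew
        count, cycle_end = 1, now + period_seconds
    stats[key] = (count, cycle_end)
    if count > activation_threshold:       # S209-S210: activate caching
        pool2[key] = data
    return data
```

Newly activated data enters the second-level pool here; promotion into the first-level pool is handled by the transition rule of the later embodiments.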
In the embodiment of the invention, a local cache comprising a first-level cache pool, a second-level cache pool and a third-level cache pool is arranged at the client, so a large number of data query requests can be handled locally, markedly reducing the number of requests transmitted to the server over the network, effectively relieving the access pressure that high-frequency network requests place on the server, and improving the response efficiency of data query requests. In addition, through the above steps, the client's local caching mechanism can be triggered automatically and in real time according to request statistics, so no manual configuration is needed, improving the flexibility of the caching mechanism. Moreover, adopting an LRU storage mechanism in the three cache pools ensures that cached data does not occupy a large amount of the client's memory resources.
The present invention has been further improved on the basis of the embodiment shown in fig. 2, whereby a further cache-based processing method is proposed. In still another embodiment of the present invention, improvements are mainly made to the processing flows of step S201 to step S204. In the following, the steps related to improvement in still another embodiment of the present invention will be described in detail with reference to fig. 3 and 4, and the non-improved steps will not be described again.
FIG. 3 is a schematic diagram of some steps of a cache-based processing method according to still another embodiment of the present invention. As shown in fig. 3, the method according to the embodiment of the present invention includes the following steps:
step S301, judging that target data exists in the first-level cache pool.
Step S302, judging whether the target data is in a first effective duration. If yes, go to step S303; if not, go to step S306.
In the embodiment of the invention, besides the request parameters and the target data corresponding to them, the first-level cache pool further includes: a first effective duration, the number of requests for the target data within the first effective duration, and a first wear threshold. In specific implementations, the first effective duration and the first wear threshold can be set flexibly as required; for example, the first effective duration may be set to 5 minutes and the first wear threshold to 10000 requests.
In an alternative embodiment, the first-level cache pool further includes an effective deadline for the target data. In this case, step S302 further includes: comparing the effective deadline of the queried target data with the current time. If the effective deadline is later than the current time, the target data is judged to be within the first effective duration; otherwise, it is judged to have exceeded the first effective duration. In another alternative embodiment, the first-level cache pool may instead include a state identifier for the target data. In this case, step S302 further includes: when the value of the target data's state identifier is "true", the target data is within the first effective duration; when the value is "false", the target data has exceeded the first effective duration.
Further, after determining that the target data is in the first valid duration, and before executing step S303, the method according to the embodiment of the present invention further includes the following steps: and updating the request times of the target data in the first-level cache pool. Specifically, the update operation may be to increase the number of requests by 1.
Step S303, judging whether the number of requests for the target data does not exceed the first wear threshold. If yes, execute step S304; if not, execute step S305.
Step S304, obtaining target data from the first-level cache pool.
Step S305, obtaining the target data from the server, and updating the target data in the first-level cache pool.
Specifically, when the number of requests for the target data within the first effective duration is greater than the first wear threshold, the client may send a data query request to the server and then receive the target data returned by it. The client may then return the target data obtained from the server to the caller and update the cached data in the first-level cache pool accordingly.
Step S306, the target data are obtained from the server side, and the target data are deleted from the first-level cache pool.
Specifically, when the target data exceeds the first effective duration, the client may send a data query request to the server and then receive the target data returned by it. The client may then return the target data obtained from the server to the caller, delete the target data corresponding to the data query request from the first-level cache pool, and write a statistical record of the request into the third-level cache pool.
In the embodiment of the invention, through steps S301 to S306, the cached data in the first-level cache pool can be managed along multiple dimensions (the first effective duration and the first wear threshold), and the cached data can be updated and deleted in a timely manner.
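The flow of steps S301 to S306 can be sketched as follows (illustrative only; the entry layout and all names are assumptions, and resetting the request counter or the deadline after a refresh is omitted for brevity):

```python
import time

def read_level1(key, pool1, fetch, now=None):
    """Sketch of steps S301-S306 for an entry already present in the
    first-level pool (step S301). fetch() stands in for the server call."""
    now = time.time() if now is None else now
    entry = pool1[key]
    if entry["deadline"] <= now:              # S302 "no": duration exceeded
        del pool1[key]                        # S306: delete, refetch
        return fetch(key)
    entry["count"] += 1                       # count the request
    if entry["count"] > entry["wear_threshold"]:
        entry["data"] = fetch(key)            # S305: worn out, refresh in place
    return entry["data"]                      # S304: serve from the pool
```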
FIG. 4 is a schematic diagram of some steps of a cache-based processing method according to yet another embodiment of the present invention. As shown in fig. 4, the method according to the embodiment of the present invention includes the following steps:
In step S401, target data exists in the secondary cache pool. After step S401, step S402 is performed.
Step S402, judging whether the target data is in a second effective duration. If yes, go to step S403; if not, go to step S407.
In the embodiment of the invention, besides the request parameters and the target data corresponding to them, the second-level cache pool further includes: a second effective duration, the number of requests for the target data within the second effective duration, a second wear threshold, a transition statistical period, and a transition threshold, where the transition statistical period is shorter than the second effective duration. In specific implementations, these values can be set flexibly as required; for example, the second effective duration may be set to 2 minutes, the transition statistical period to 30 seconds, the second wear threshold to 1000 requests, and the transition threshold to 900 requests.
In an alternative embodiment, the secondary cache pool further includes the effective deadline of the target data. In this alternative embodiment, step S402 further includes comparing the effective deadline of the queried target data with the current time: if the effective deadline is later than the current time, the target data is judged to be within the second effective duration; otherwise, the target data is judged to have exceeded the second effective duration. In another alternative embodiment, the secondary cache pool may further include a status identifier of the target data. In this alternative embodiment, step S402 further includes: when the value of the status identifier is "true", the target data is within the second effective duration; when the value is "false", the target data has exceeded the second effective duration.
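The deadline-based check of step S402 can be sketched as follows. This is a minimal illustration, not the patented implementation; the field name `deadline` is an assumption standing in for "effective deadline".

```python
import time

def within_effective_duration(entry, now=None):
    """Return True if the cached entry's effective deadline has not passed.

    entry["deadline"] is assumed to hold an absolute expiry timestamp
    (write time + the second effective duration), as described in step S402.
    """
    now = time.time() if now is None else now
    return entry["deadline"] > now

# A 2-minute effective duration, matching the example values above.
entry = {"value": "target-data", "deadline": time.time() + 120}
```

The status-identifier variant described above would simply read a precomputed boolean instead of comparing timestamps on every lookup.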
Step S403, updating the total number of requests for the target data and the number of requests within the transition statistical period. After step S403, step S404 may be performed.
Specifically, the update operation in step S403 may be: increment the total number of requests for the target data by 1, and increment the number of requests within the transition statistical period by 1.
Step S404, judging whether the number of requests within the transition statistical period is not greater than the transition threshold. If yes, go to step S405; if not, go to step S408.
Step S405, judging whether the total number of requests is not greater than the second wear threshold. If yes, go to step S406; if not, go to step S409.
Step S406, obtaining target data from the secondary cache pool.
Step S407, obtaining the target data from the server side, and deleting the target data from the secondary cache pool.
Specifically, when the target data exceeds the second effective duration, the client may send a data query request to the server and receive the target data returned by the server. The client may then return the target data obtained from the server to the caller, delete the target data corresponding to the data query request from the secondary cache pool, and write the statistical record of the data query request into the tertiary cache pool.
Step S408, writing the target data obtained from the server into the first-level cache pool, and deleting the target data from the second-level cache pool.
Specifically, when the number of requests for the target data within the transition statistical period is greater than the transition threshold, the client may send a data query request to the server and receive the target data returned by the server. The client may then return the target data obtained from the server to the caller, write the returned target data into the first-level cache pool, and delete the original target data from the secondary cache pool.
Step S409, acquiring the target data from the server, and updating the target data in the secondary cache pool.
Specifically, when the number of requests for the target data within the second effective duration is greater than the second wear threshold, the client may send a data query request to the server and receive the target data returned by the server. The client may then return the target data obtained from the server to the caller, and update the cache data in the secondary cache pool according to the returned target data.
Through the above steps, the embodiment of the invention can manage the cache data in the secondary cache pool across multiple dimensions (the second effective duration, the second wear threshold, and the transition threshold), so that the cache data can be classified, and updated and deleted in time.
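The branch logic of steps S402 to S409 can be sketched in Python as follows. The field names (`deadline`, `total_requests`, `period_requests`) and the `fetch_from_server` callback are illustrative assumptions, not taken from the embodiment.

```python
import time

def query_secondary_pool(pool, primary_pool, key, fetch_from_server,
                         wear_threshold=1000, transition_threshold=900):
    """Walk through steps S402-S409 for a key present in the secondary pool.

    pool maps key -> {"value", "deadline", "total_requests", "period_requests"};
    fetch_from_server(key) stands in for the data query request sent to the
    server. Returns (value, branch) where branch names the step taken.
    """
    entry = pool[key]
    if entry["deadline"] <= time.time():        # S402 "no" -> S407
        value = fetch_from_server(key)
        del pool[key]                           # drop the stale entry
        return value, "expired"
    entry["total_requests"] += 1                # S403
    entry["period_requests"] += 1
    if entry["period_requests"] > transition_threshold:   # S404 "no" -> S408
        value = fetch_from_server(key)
        primary_pool[key] = value               # promote to the primary pool
        del pool[key]
        return value, "promoted"
    if entry["total_requests"] > wear_threshold:          # S405 "no" -> S409
        entry["value"] = fetch_from_server(key) # refresh in place
        return entry["value"], "refreshed"
    return entry["value"], "hit"                # S406

# Usage sketch with the example thresholds above.
pool = {"sku-1": {"value": "cached", "deadline": time.time() + 120,
                  "total_requests": 0, "period_requests": 0}}
primary = {}
value, branch = query_secondary_pool(pool, primary, "sku-1", lambda k: "fresh")
```

The "promoted" branch is what moves frequently requested data from the active sample pool up into the pressure sample pool.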
FIG. 5 is a schematic diagram of the main blocks of a cache-based processing device according to one embodiment of the invention. As shown in fig. 5, the cache-based processing apparatus 500 according to the embodiment of the present invention includes: an acquisition module 501, a communication module 502 and a cache starting module 503.
The obtaining module 501 is configured to query the local cache according to the data query request to obtain the target data.
The data query request may be in a request format such as an RPC (remote procedure call) request or an HTTP request. In a specific example, the caller may make an RPC request through parameters defined in a function interface provided by the server. After receiving the caller's RPC request, the obtaining module 501 may query the local cache according to the parameters in the request. If the target data exists in the local cache, the obtaining module 501 obtains it directly from the local cache. The target data may be understood as the "result data of a query". For example, if the request parameter is a commodity identifier, the target data is the detailed information of that commodity; if the request parameter is a merchant name, the target data is the transaction information of that merchant.
The local cache may be disposed in the memory of the client, and may cache data in the form of key-value pairs. Further, the local cache may be a multi-level cache, including a first-level cache pool and a second-level cache pool. Both pools are mainly used for caching target data with a higher request frequency, and the request frequency of the cache data in the first-level cache pool is higher than that in the second-level cache pool.
And the communication module 502 is configured to obtain the target data from the server when the local cache does not exist the target data.
Specifically, when the local cache does not have the target data, the communication module 502 may send the data query request to the server, and then receive the result data corresponding to the data query request returned by the server, that is, the target data.
The cache starting module 503 is configured to count the number of requests for the target data when the target data does not exist in the local cache; it is further configured to write the target data acquired from the server into the local cache when the number of requests for the target data exceeds an activation threshold.
The number of requests of the target data can be understood as "number of calls of the request parameter". For example, if the request parameter in the data query request is a commodity identifier, the client counts the number of calls of the received commodity identifier.
In the embodiment of the invention, the local cache is disposed at the client: the obtaining module 501 acquires the target data from the local cache, and the communication module 502 acquires it from the server only when it is absent from the local cache. As a result, a large number of data query requests can be processed locally, the number of requests transmitted to the server over the network is significantly reduced, the access pressure that high-frequency network requests place on the server is effectively relieved, and the response efficiency of data query requests is improved. In addition, the cache starting module 503 writes the target data acquired from the server into the local cache once the number of requests exceeds the activation threshold, so that the client's local cache mechanism is triggered automatically based on request-count statistics, which improves the flexibility of the cache mechanism.
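The division of labor between the three modules can be sketched as a single request-handling function. This is a hedged illustration: `handle_request`, `stats`, and `fetch_from_server` are invented names standing in for the obtaining module, the tertiary statistics, and the communication module respectively.

```python
def handle_request(key, local_cache, stats, fetch_from_server,
                   activation_threshold=100):
    """Serve hot keys from the local cache; on a miss, fetch from the
    server, count the miss, and start caching the key once its request
    count exceeds the activation threshold."""
    if key in local_cache:                    # handled locally, no network trip
        return local_cache[key]
    value = fetch_from_server(key)            # data query request via RPC/HTTP
    stats[key] = stats.get(key, 0) + 1        # cache starting module's counter
    if stats[key] > activation_threshold:     # cache mechanism auto-triggers
        local_cache[key] = value
    return value

# Usage sketch: with a threshold of 2, the fourth request is served locally.
server_calls = []
def fetch(key):
    server_calls.append(key)
    return "detail-of-" + key

cache, stats = {}, {}
for _ in range(4):
    handle_request("sku-1", cache, stats, fetch, activation_threshold=2)
```

Note how the server is contacted only until the key becomes hot enough to cache, which is exactly the pressure-relief effect described above.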
FIG. 6 is a schematic diagram of the composition of a local cache according to an embodiment of the invention. As shown in fig. 6, the local cache 600 according to the embodiment of the present invention includes: a first level cache pool 601, a second level cache pool 602, and a third level cache pool 603.
The first-level cache pool 601 is mainly used for caching target data with a higher request frequency, and may also be called a "pressure sample pool". In addition to the request parameters and the target data corresponding to the request parameters, the first-level cache pool 601 includes: the first effective duration, the number of requests for the target data within the first effective duration, and the first wear threshold. In a specific implementation, the first effective duration and the first wear threshold can be flexibly set as required. For example, the first effective duration may be set to 5 minutes and the first wear threshold to 10000 requests.
The secondary cache pool 602 is also mainly used for caching target data with a higher request frequency, and may be called an "active sample pool". In addition to the request parameters and the target data corresponding to the request parameters, the secondary cache pool includes: the second effective duration, the number of requests for the target data within the second effective duration, the second wear threshold, the transition statistical period, and the transition threshold. The transition statistical period is less than the second effective duration. In a specific implementation, the second effective duration, the transition statistical period, the second wear threshold, and the transition threshold can be flexibly set as required. For example, the second effective duration may be set to 2 minutes, the transition statistical period to 30 seconds, the second wear threshold to 1000 requests, and the transition threshold to 900 requests.
The tertiary cache pool 603 is mainly used for caching statistical records of data query requests, and may also be called a "sample screening pool". It differs from the primary and secondary cache pools in that it does not hold the requested target data, whereas the primary and secondary cache pools do. A statistical record may include the request parameters, a third effective duration (or "statistical period duration"), the number of requests within the third effective duration, and the activation threshold. In a specific implementation, the third effective duration and the activation threshold can be flexibly set as required. For example, the third effective duration may be set to 5 minutes and the activation threshold to 100 requests.
In the embodiment of the present invention, the primary cache pool 601, the secondary cache pool 602, and the tertiary cache pool 603 may cache data in the form of key-value pairs. Further, the three pools may adopt an LRU (least recently used) storage mechanism, ensuring that the most recently requested target data or statistical records are retained while the least recently requested ones are evicted. Alternatively, the three pools may adopt a FIFO (first-in, first-out) storage mechanism. In a specific implementation, the storage capacities of the primary cache pool 601, the secondary cache pool 602, and the tertiary cache pool 603 may be defined at initialization. In a preferred embodiment, the storage capacities of the primary, secondary, and tertiary cache pools follow a proportional relationship of 1:4:16. For example, the storage capacity of the primary cache pool may be set to 128KB, that of the secondary cache pool to 512KB, and that of the tertiary cache pool to 2048KB.
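A fixed-capacity pool with the LRU eviction behavior described above can be sketched in a few lines of Python. This is a generic illustration of the mechanism, not the patented data structure; capacities here are entry counts rather than bytes, purely for demonstration.

```python
from collections import OrderedDict

class LRUPool:
    """Minimal fixed-capacity pool using LRU eviction: when capacity is
    exceeded, the least recently requested entry is evicted."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        value = self._data.pop(key)     # raises KeyError on a miss
        self._data[key] = value         # move to most-recently-used position
        return value

    def put(self, key, value):
        self._data.pop(key, None)       # drop any stale position
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict least recently used

    def __contains__(self, key):
        return key in self._data

# Pools sized in the 1:4:16 ratio suggested above (illustrative entry counts).
primary, secondary, tertiary = LRUPool(8), LRUPool(32), LRUPool(128)
```

Because eviction is bounded by capacity, cached data cannot grow without limit in the client's memory, which is the memory-footprint guarantee the embodiment attributes to the LRU mechanism.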
Fig. 7 is a schematic diagram of main modules of a cache-based processing apparatus according to another embodiment of the present invention. In the embodiment of the present invention, the local cache adopts the structure shown in fig. 6. As shown in fig. 7, a cache-based processing apparatus 700 according to an embodiment of the present invention includes: an acquisition module 701, a first confirmation module 702, a cache management module 703, a communication module 704, a second confirmation module 705, and a cache initiation module 706.
The obtaining module 701 is configured to query the local cache according to a data query request to obtain target data, and specifically: 1) the obtaining module 701 queries the first-level cache pool according to the data query request; when the target data exists in the first-level cache pool, and the first confirmation module 702 confirms that the target data is within the first effective duration and that its number of requests within the first effective duration is not greater than the first wear threshold, the obtaining module 701 obtains the target data from the first-level cache pool. 2) When the target data does not exist in the first-level cache pool, the obtaining module 701 queries the secondary cache pool according to the data query request; when the target data exists in the secondary cache pool, and the second confirmation module 705 confirms that the target data is within the second effective duration and that its number of requests within the second effective duration is not greater than the second wear threshold, the obtaining module 701 obtains the target data from the secondary cache pool.
The first confirmation module 702 is configured to judge whether the target data in the first-level cache pool is within the first effective duration, and whether its number of requests within the first effective duration is not greater than the first wear threshold. In a preferred embodiment, the first confirmation module 702 may first execute the logic for judging whether the target data is within the first effective duration, and only after confirming that it is, execute the logic for judging whether the number of requests within the first effective duration is not greater than the first wear threshold.
The cache management module 703 is configured to delete the target data from the first-level cache pool and write the statistical record of the data query request into the tertiary cache pool when the first confirmation module 702 determines that the target data in the first-level cache pool has exceeded the first effective duration. The cache management module 703 is further configured to update the target data in the first-level cache pool when the first confirmation module 702 determines that its number of requests within the first effective duration is greater than the first wear threshold.
The second confirmation module 705 is configured to judge whether the target data in the secondary cache pool is within the second effective duration, and whether its number of requests within the second effective duration is not greater than the second wear threshold. In a preferred embodiment, the second confirmation module 705 may first execute the logic for judging whether the target data is within the second effective duration, and only after confirming that it is, execute the logic for judging whether the number of requests within the second effective duration is not greater than the second wear threshold.
The cache management module 703 is further configured to delete the target data from the secondary cache pool and write the statistical record of the data query request into the tertiary cache pool when the second confirmation module 705 determines that the target data in the secondary cache pool has exceeded the second effective duration. The cache management module 703 is further configured to update the target data in the secondary cache pool when the second confirmation module 705 determines that its number of requests within the second effective duration is greater than the second wear threshold.
In a preferred embodiment, the second confirmation module 705 is further configured to judge, after confirming that the target data in the secondary cache pool is within the second effective duration, whether the number of requests for the target data within the transition statistical period is greater than the transition threshold. In this preferred embodiment, the cache management module 703 is further configured to write the target data obtained from the server into the primary cache pool and delete the target data from the secondary cache pool when that number of requests is greater than the transition threshold.
The communication module 704 is configured to obtain the target data from the server when the target data does not exist in the local cache. The communication module 704 is further configured to obtain the target data from the server when the first confirmation module 702 determines that the target data in the primary cache pool has exceeded the first effective duration, or that its number of requests within the first effective duration is greater than the first wear threshold. The communication module 704 is further configured to obtain the target data from the server when the second confirmation module 705 determines that the target data in the secondary cache pool has exceeded the second effective duration, or that its number of requests within the second effective duration is greater than the second wear threshold.
The cache starting module 706 is configured to count the number of requests for the target data when the target data does not exist in the local cache, and is further configured to write the target data acquired from the server into the local cache when the number of requests for the target data exceeds the activation threshold.
Illustratively, the cache starting module 706 counts the number of requests for the target data as follows: the cache starting module 706 queries the tertiary cache pool according to the data query request to obtain the corresponding statistical record; if the corresponding statistical record is within the third effective duration, the cache starting module 706 increments the number of requests in the record by one; if the record has exceeded the third effective duration, the cache starting module 706 resets the number of requests in the record to one.
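The counting rule just described can be sketched as a small function over the tertiary pool. The record layout (`count`, `deadline`) is an assumption chosen for illustration; the embodiment only specifies the behavior, not the fields.

```python
import time

def count_request(tertiary_pool, key, third_effective_duration=300):
    """Increment the request count while the statistical record is within
    the third effective duration; otherwise start a new record with the
    count reset to one. Mirrors the cache starting module's counting rule."""
    now = time.time()
    record = tertiary_pool.get(key)
    if record is None or now > record["deadline"]:
        record = {"count": 1, "deadline": now + third_effective_duration}
        tertiary_pool[key] = record          # new statistical period
    else:
        record["count"] += 1                 # still within the period
    return record["count"]

# Usage sketch: two requests for the same key within one statistical period.
pool = {}
count_request(pool, "sku-1")
count_request(pool, "sku-1")
```

Resetting the count to one on expiry keeps stale popularity from activating the cache for data that is no longer hot.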
In the embodiment of the invention, the following technical effects can be achieved through the device: 1) A local cache comprising the primary, secondary, and tertiary cache pools is disposed at the client, and the obtaining module queries it according to the data query request, so that a large number of data query requests can be processed locally, the number of requests transmitted to the server over the network is significantly reduced, the access pressure that high-frequency network requests place on the server is effectively relieved, and the response efficiency of data query requests is improved. 2) The cache starting module automatically triggers the client's local cache mechanism in real time based on request-count statistics, so no manual configuration is needed, which improves the flexibility of the cache mechanism. 3) The cache management module manages the data in the primary and secondary cache pools hierarchically across multiple dimensions, updating and deleting it in time. 4) The LRU storage mechanism adopted by the primary, secondary, and tertiary cache pools ensures that cached data does not occupy a large amount of the client's memory resources.
Fig. 8 illustrates an exemplary system architecture 800 for a cache-based processing method or cache-based processing device to which embodiments of the present invention may be applied.
As shown in fig. 8, a system architecture 800 may include terminal devices 801, 802, 803, a network 804, and a server 805. The network 804 serves as a medium for providing communication links between the terminal devices 801, 802, 803 and the server 805. The network 804 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with a server 805 through a network 804 using terminal devices 801, 802, 803 to receive or send messages (such as data query requests), etc. Various client applications may be installed on the terminal devices 801, 802, 803, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, and the like.
The terminal devices 801, 802, 803 may be a variety of electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 805 may be a server providing various services, such as a background management server providing support for shopping-type websites browsed by the user using the terminal devices 801, 802, 803. The background management server may analyze and process the received data such as the data query request, and feed back the processing result (for example, the response data of the data query request) to the terminal device.
It should be noted that the cache-based processing method provided by the embodiment of the present invention is generally executed by the terminal device; accordingly, the cache-based processing device is generally disposed in the terminal device.
It should be understood that the number of terminal devices, networks and servers in fig. 8 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Fig. 9 shows a schematic diagram of a computer system 900 suitable for use in implementing an electronic device of an embodiment of the invention. The electronic device shown in fig. 9 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in fig. 9, the computer system 900 includes a Central Processing Unit (CPU) 901, which can execute various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 902 or a program loaded from a storage section 908 into a Random Access Memory (RAM) 903. In the RAM 903, various programs and data necessary for the operation of the system 900 are also stored. The CPU 901, ROM 902, and RAM 903 are connected to each other through a bus 904. An input/output (I/O) interface 905 is also connected to the bus 904.
The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, and the like; an output portion 907 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs communication processing via a network such as the internet. The drive 910 is also connected to the I/O interface 905 as needed. A removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 910 so that a computer program read out therefrom is installed into the storage section 908 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from the network via the communication section 909 and/or installed from the removable medium 911. When the computer program is executed by the central processing unit (CPU) 901, the above-described functions defined in the system of the present invention are performed.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example: a processor comprising an acquisition module, a communication module, and a cache starting module. The names of these modules do not, in some cases, limit the modules themselves; for example, the acquisition module may also be described as "a module that queries a local cache according to a data query request to acquire target data".
As another aspect, the present invention also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: query the local cache according to a data query request to acquire target data; and, when the target data does not exist in the local cache, acquire the target data from a server, count the number of requests for the target data, and write the target data acquired from the server into the local cache when the number of requests for the target data exceeds an activation threshold.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A cache-based processing method, the method comprising:
querying the local cache according to a data query request to acquire target data;
when the local cache does not contain the target data, acquiring the target data from a server, counting the number of requests for the target data, and writing the target data acquired from the server into the local cache when the number of requests for the target data exceeds an activation threshold;
the local cache includes: a first-level cache pool and a second-level cache pool;
the step of querying the local cache according to the data query request to acquire the target data comprises: querying the first-level cache pool according to the data query request; when the target data exists in the first-level cache pool, acquiring the target data from the first-level cache pool; when the target data does not exist in the first-level cache pool, querying the secondary cache pool according to the data query request; and when the target data exists in the secondary cache pool, acquiring the target data from the secondary cache pool;
before the step of acquiring the target data from the secondary cache pool is executed, confirming that the target data in the secondary cache pool is within a second effective duration and that the number of requests for the target data in the secondary cache pool within the second effective duration is not greater than a second wear threshold;
after the step of confirming that the target data in the secondary cache pool is within the second effective duration is executed, judging whether the number of requests for the target data within a transition statistical period is greater than a transition threshold; if yes, writing the target data acquired from the server into the first-level cache pool, and deleting the target data from the secondary cache pool; wherein the transition statistical period is less than the second effective duration.
2. The method according to claim 1, wherein the method further comprises:
before the step of acquiring the target data from the primary cache pool is executed, confirming that the target data in the primary cache pool is within a first effective duration, and that the number of requests for the target data in the primary cache pool within the first effective duration is not greater than a first wear threshold.
3. The method according to claim 2, wherein the method further comprises:
when the target data in the first-level cache pool exceeds the first effective duration, or the number of requests within the first effective duration is greater than the first wear threshold, acquiring the target data from the server; and
Deleting target data in the first-level cache pool when the target data in the first-level cache pool exceeds a first effective duration; and updating the target data in the first-level cache pool when the request times of the target data in the first-level cache pool in the first effective duration are greater than a first abrasion threshold.
4. The method according to claim 1, wherein the method further comprises:
when the target data in the second-level cache pool exceeds the second effective duration, or the number of requests within the second effective duration is greater than the second wear threshold, acquiring the target data from a server; and
deleting the target data in the second-level cache pool when the target data in the second-level cache pool exceeds the second effective duration; and updating the target data in the second-level cache pool when the number of requests for the target data in the second-level cache pool within the second effective duration is greater than the second wear threshold.
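Claims 3 and 4 distinguish two refresh triggers for a pool entry: expiry of the effective duration (the entry is deleted) and exceeding the wear threshold (the entry is updated in place), with both triggers falling back to the server. A minimal sketch, with hypothetical names and entry layout:

```python
import time

def refresh_entry(pool, key, fetch_from_server, effective_duration, wear_threshold):
    """Hypothetical helper applying the claimed expiry/wear rules to one pool entry.

    pool maps key -> dict(value=..., written_at=..., requests=...).
    Expired entries are deleted; worn entries are refreshed in place.
    """
    entry = pool.get(key)
    if entry is None:
        return fetch_from_server(key)
    expired = time.time() - entry["written_at"] > effective_duration
    worn = entry["requests"] > wear_threshold
    if not (expired or worn):
        entry["requests"] += 1
        return entry["value"]
    value = fetch_from_server(key)    # both triggers acquire from the server
    if expired:
        del pool[key]                 # expiry: delete the stale entry
    else:
        # wear: update the entry in place and reset its request count
        pool[key] = {"value": value, "written_at": time.time(), "requests": 0}
    return value
```

The same helper would serve either pool level by passing that level's duration and wear threshold.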
5. The method of any one of claims 1 to 4, wherein the local cache further comprises a third-level cache pool;
the step of counting the number of requests for the target data comprises: querying the third-level cache pool according to the data query request to obtain a corresponding statistical record; incrementing the number of requests in the corresponding statistical record by one when the corresponding statistical record is within a third effective duration; and resetting the number of requests in the corresponding statistical record to one when the corresponding statistical record exceeds the third effective duration.
6. The method of claim 5, wherein the first-level cache pool, the second-level cache pool, and/or the third-level cache pool employ an LRU (least recently used) storage mechanism.
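The counting step of claim 5 (increment within the third effective duration, reset to one after it) can be sketched as follows. The class and field names are hypothetical:

```python
import time

class RequestCounter:
    """Hypothetical sketch of the third-level pool's statistical records (claim 5)."""

    def __init__(self, effective_duration=30.0):
        self.records = {}  # key -> (window_start, request_count)
        self.effective_duration = effective_duration

    def count(self, key):
        now = time.time()
        record = self.records.get(key)
        if record is not None and now - record[0] <= self.effective_duration:
            record = (record[0], record[1] + 1)  # within validity: increment
        else:
            record = (now, 1)                    # expired or absent: reset to one
        self.records[key] = record
        return record[1]
```

The returned count is what the activation logic would compare against the activation threshold before writing server data into the local cache.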
7. A cache-based processing apparatus, the apparatus comprising:
an acquisition module, configured to query the local cache according to a data query request to acquire target data;
a communication module, configured to acquire the target data from a server when the target data does not exist in the local cache; and
a cache activation module, configured to count the number of requests for the target data when the target data does not exist in the local cache, and further configured to write the target data acquired from the server into the local cache when the number of requests for the target data exceeds an activation threshold;
wherein the local cache comprises a first-level cache pool and a second-level cache pool;
the acquisition module is further configured to: query the first-level cache pool according to the data query request; when the target data exists in the first-level cache pool, acquire the target data from the first-level cache pool; when the target data does not exist in the first-level cache pool, query the second-level cache pool according to the data query request; and when the target data exists in the second-level cache pool, acquire the target data from the second-level cache pool;
before the step of acquiring the target data from the second-level cache pool is executed, confirm that the target data in the second-level cache pool is within a second effective duration and that the number of requests for the target data in the second-level cache pool within the second effective duration does not exceed a second wear threshold; and
after the step of confirming that the target data in the second-level cache pool is within the second effective duration is executed, judge whether the number of requests for the target data within a jump statistics period exceeds a jump threshold; if so, write the target data acquired from the server into the first-level cache pool and delete the target data from the second-level cache pool; wherein the jump statistics period is shorter than the second effective duration.
8. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 6.
9. A computer-readable medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 6.
CN201810600524.2A 2018-06-12 2018-06-12 Processing method and device based on cache Active CN110598138B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810600524.2A CN110598138B (en) 2018-06-12 2018-06-12 Processing method and device based on cache

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810600524.2A CN110598138B (en) 2018-06-12 2018-06-12 Processing method and device based on cache

Publications (2)

Publication Number Publication Date
CN110598138A CN110598138A (en) 2019-12-20
CN110598138B true CN110598138B (en) 2024-10-18

Family

ID=68848959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810600524.2A Active CN110598138B (en) 2018-06-12 2018-06-12 Processing method and device based on cache

Country Status (1)

Country Link
CN (1) CN110598138B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111245822B (en) * 2020-01-08 2023-03-14 北京小米松果电子有限公司 Remote procedure call processing method and device and computer storage medium
CN111414383B (en) * 2020-02-21 2024-03-15 车智互联(北京)科技有限公司 Data request method, data processing system and computing device
CN111522836B (en) * 2020-04-22 2023-10-10 杭州海康威视系统技术有限公司 Data query method and device, electronic equipment and storage medium
CN113722023A (en) * 2020-05-22 2021-11-30 华为技术有限公司 Application data processing method and device
CN111782391A (en) * 2020-06-29 2020-10-16 北京达佳互联信息技术有限公司 Resource allocation method, apparatus, electronic device and storage medium
CN112131260B (en) * 2020-09-30 2024-08-06 中国民航信息网络股份有限公司 Data query method and device
CN112398852B (en) * 2020-11-12 2022-11-15 北京天融信网络安全技术有限公司 Message detection method, device, storage medium and electronic equipment
CN112398849B (en) * 2020-11-12 2022-12-20 北京天融信网络安全技术有限公司 Method and device for updating embedded threat information data set
CN112506973B (en) * 2020-12-14 2023-12-15 中国银联股份有限公司 A method and device for storage data management
CN113760982B (en) * 2021-01-18 2024-05-17 西安京迅递供应链科技有限公司 Data processing method and device
CN112685454A (en) * 2021-03-10 2021-04-20 江苏金恒信息科技股份有限公司 Industrial data hierarchical storage system and method and industrial data hierarchical query method
CN115174471B (en) * 2021-04-07 2024-03-26 中国科学院声学研究所 Cache management method for storage unit of ICN router
CN113849751A (en) * 2021-09-03 2021-12-28 深圳Tcl新技术有限公司 Data caching method, apparatus, electronic device, and computer-readable storage medium
CN114143376A (en) * 2021-11-18 2022-03-04 青岛聚看云科技有限公司 Server for loading cache, display equipment and resource playing method
CN113900830B (en) * 2021-12-10 2022-04-01 北京达佳互联信息技术有限公司 Resource processing method and device, electronic equipment and storage medium
CN115964391A (en) * 2022-10-21 2023-04-14 北京百度网讯科技有限公司 Cache management method, device, equipment and storage medium
CN119484640B (en) * 2024-10-25 2025-09-30 中国平安财产保险股份有限公司 Cache adjustment method, device, server and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217019A (en) * 2014-09-25 2014-12-17 中国人民解放军信息工程大学 Content inquiry method and device based on multiple stages of cache modules
CN106815287A (en) * 2016-12-06 2017-06-09 中国银联股份有限公司 A kind of buffer memory management method and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521252A (en) * 2011-11-17 2012-06-27 四川长虹电器股份有限公司 Access method of remote data
US9298620B2 (en) * 2013-11-25 2016-03-29 Apple Inc. Selective victimization in a multi-level cache hierarchy
CN107623702B (en) * 2016-07-13 2020-09-11 阿里巴巴集团控股有限公司 Data caching method, device and system
CN106446097B (en) * 2016-09-13 2020-02-07 苏州浪潮智能科技有限公司 File reading method and system
CN107122410A (en) * 2017-03-29 2017-09-01 武汉斗鱼网络科技有限公司 A kind of buffering updating method and device
CN107301215B (en) * 2017-06-09 2020-12-18 北京奇艺世纪科技有限公司 Search result caching method and device and search method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104217019A (en) * 2014-09-25 2014-12-17 中国人民解放军信息工程大学 Content inquiry method and device based on multiple stages of cache modules
CN106815287A (en) * 2016-12-06 2017-06-09 中国银联股份有限公司 A kind of buffer memory management method and device

Also Published As

Publication number Publication date
CN110598138A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110598138B (en) Processing method and device based on cache
CN109947668B (en) Method and device for storing data
CN109684358B (en) Data query method and device
CN108629029B (en) Data processing method and device applied to data warehouse
CN108804447B (en) Method and system for responding to data request by using cache
CN113760982B (en) Data processing method and device
CN112631504B (en) Method and device for implementing local cache using off-heap memory
CN112445988B (en) A data loading method and device
CN112118352B (en) Method and device for processing notification trigger message, electronic equipment and computer readable medium
CN110648216A (en) Wind control method and device
CN116846831A (en) Current limiting processing method and device, electronic equipment and computer readable medium
CN112784139B (en) Query method, device, electronic equipment and computer readable medium
CN113360528B (en) Data query method and device based on multi-level cache
CN111698273B (en) Method and device for processing request
CN113760965B (en) Data query method and device
CN114595069A (en) Service offline method, device, electronic device and storage medium
CN113742131B (en) Method, electronic device and computer program product for storage management
CN111209308B (en) Method and device for optimizing distributed cache
CN114374657B (en) Data processing method and device
CN109213815B (en) Method, device, server terminal and readable medium for controlling execution times
CN109087097B (en) Method and device for updating same identifier of chain code
CN112699116A (en) Data processing method and system
CN113138943B (en) Method and device for processing request
CN113778909B (en) Method and device for caching data
CN113722193A (en) Method and device for detecting page abnormity

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant