CN120386801B - Data analysis system and method based on distributed caching technology - Google Patents
Data analysis system and method based on distributed caching technology
- Publication number
- CN120386801B (application CN202510779687.1A)
- Authority
- CN
- China
- Prior art keywords
- data
- node
- cache
- capacity
- business
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2455—Query execution
- G06F16/24552—Database cache management
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/27—Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/2866—Architectures; Arrangements
- H04L67/288—Distributed intermediate devices, i.e. intermediate devices for interaction with other intermediate devices on the same level
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a data analysis system and method based on distributed caching technology, in the technical field of data analysis. The method locates service nodes based on target service data and acquires service node state data; analyzes each service node's data request demand in real time from that state data and judges the database access state; constructs a data cache node for each service node according to the access-state judgment result and analyzes cache memory allocation based on the corresponding node's request demand data; analyzes the real-time memory utilization of each cache node from its periodic data-reading state and dynamically regulates cache node memory each period; and generates regulation instructions from the periodic regulation data and feeds them back to a management port. This achieves flexible regulation and analysis of cache demand under different demand scenarios during service node data calls.
Description
Technical Field
The invention relates to the field of data analysis, in particular to a data analysis system and method based on a distributed caching technology.
Background
Distributed caching is a technical scheme for coping with high-concurrency and big-data scenarios: by spreading data across multiple server nodes, it improves system throughput and markedly reduces read-write latency under high-concurrency requests. However, when cache node allocation and regulation are static, cache nodes cannot adapt to a dynamic node-request environment, and cache capacity and node state cannot be regulated to follow complex data-change scenarios; as a result, most cache nodes today waste memory resources through unreasonable capacity allocation.
Disclosure of Invention
The invention aims to provide a data analysis system and a data analysis method based on a distributed caching technology, which are used for solving the problems in the prior art.
In order to achieve the above purpose, the present invention provides the following technical solutions:
A data analysis method based on a distributed caching technique, the method comprising the steps of:
determining target service data, positioning service nodes based on the target service data, and acquiring service node state data;
analyzing each service node's data request demand in real time based on the service node state data, and judging the database access state from the demand analysis data; constructing a data cache node for each service node according to the access-state judgment result, and analyzing cache memory allocation based on the corresponding service node's request demand data;
analyzing the real-time memory utilization of each cache node according to its periodic data-reading state, dynamically regulating cache node memory each period according to that utilization, and generating regulation instructions from the periodic regulation data and feeding them back to the management port.
Further, target service data is determined by searching service data from the management port, wherein the service data comprises service name data and service node data, the service node data being the service node number;
the service node is located through the target service node data, and the corresponding node's state data is acquired by collecting its request data, wherein the service node state data comprises the number of request data packets and the request data packet capacity of the service node.
Further, based on the state data of each service node of the target service, a corresponding service node state set is constructed, the real-time data request demand of the target service is analyzed according to the service node state set, the real-time data request data capacity of the target service is obtained, and the calculation formula is as follows
P(t) = p · Σ_{j=1}^{n} m_j;
where P(t) is the data request demand capacity of the target service at the current time t, p is the single data packet capacity, m_j is the number of data packets requested at time t by the service node numbered j, and n is the number of service nodes of the target service. Based on the actual target service request demand analysis data, the real-time access state of the database is judged: the maximum single-moment response capacity of the database is computed from the maximum number of request data packets it can answer at a single moment, and the access state at the current moment, either busy or idle, is determined by comparing the single-moment target service demand with that maximum response capacity. The specific comparison is: an early-warning coefficient k is applied to the maximum single-moment response request packet capacity P_max,s to obtain the early-warning response capacity P_e,s = k · P_max,s; the current demand capacity P(t) is then compared with P_e,s, and if P(t) ≥ P_e,s the database access state is judged busy, otherwise idle;
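The demand calculation and busy/idle judgment above can be sketched as follows; the function names and the sample values of p, k, and P_max,s are illustrative assumptions, not taken from the patent.

```python
# Sketch of P(t) = p * sum(m_j) and the early-warning comparison P(t) vs k * P_max,s.

def request_demand_capacity(p: float, packet_counts: list[int]) -> float:
    """P(t): total requested capacity across the n service nodes at time t."""
    return p * sum(packet_counts)

def database_access_state(p_t: float, p_max_s: float, k: float = 0.8) -> str:
    """Compare P(t) with the early-warning capacity P_e,s = k * P_max,s."""
    p_es = k * p_max_s
    return "busy" if p_t >= p_es else "idle"

demand = request_demand_capacity(p=2.0, packet_counts=[120, 95, 140])  # 710.0
state = database_access_state(demand, p_max_s=1000.0, k=0.8)           # "idle": 710 < 800
```

With three nodes requesting 120, 95, and 140 packets of 2.0 capacity units each, the demand (710) stays under the early-warning capacity (800), so the database is judged idle and no cache nodes need be built yet.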
when the database access state is busy, a cache node is constructed for each service node of the target service, and capacity prediction analysis is performed for each cache node based on the node's real-time service data request demand data to obtain the cache node's real-time capacity data, calculated as
C(j, T) = r · p · m_j(T);
where C(j, T) is the capacity prediction value at time T of the cache node corresponding to the service node numbered j, and r is the single-data-packet cache capacity pre-storage coefficient, meaning that extra storage space beyond the raw packet size is reserved when a single data packet is cached, to prevent data storage loss. An observation period T1 is divided out, capacity prediction analysis is performed for each cache node within the period, and the maximum value is taken as the cache node's storage capacity value for the corresponding service node period T1. The request demand data within the period T1 are then divided by type and counted: the quantity of each type of request data in T1 is determined, each type's share of the total request volume is analyzed, and a proportion threshold is introduced. Each type's request-volume proportion is compared with the threshold: a type whose proportion is greater than or equal to the threshold is judged high-request-type data, otherwise low-request-type data. According to this comparison, the cache space of each node is divided into a first-level cache space and a second-level cache space, where the first-level cache space stores the high-request-type data and the second-level cache space stores the low-request-type data.
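The proportion-threshold split into high- and low-request-type data can be sketched as follows; the request-type labels and the threshold value are illustrative assumptions.

```python
from collections import Counter

def split_cache_levels(request_types: list[str], ratio_threshold: float = 0.2):
    """Classify each request type observed in period T1 as high- or low-request
    by comparing its share of the total request volume with the threshold."""
    counts = Counter(request_types)
    total = len(request_types)
    high = {t for t, c in counts.items() if c / total >= ratio_threshold}
    low = set(counts) - high
    # high-request types -> first-level cache space; low-request -> second-level
    return high, low

high, low = split_cache_levels(["read"] * 8 + ["write"] * 1 + ["scan"] * 1, 0.2)
# "read" (share 0.8) is high-request; "write" and "scan" (0.1 each) are low-request
```

Here "read" traffic dominates the observation period, so it would be placed in the first-level cache space, while the rarer "write" and "scan" data fall to the second level.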
Further, based on the capacity prediction analysis and data storage division results for each service node's cache node, a cache node update period T2 is set; the calling of the data stored in each cache node within the period is analyzed, the utilization of the first-level and second-level cache spaces is analyzed in real time, and the call hit rates of the data stored in the two spaces are computed respectively by the formula
hit1 = m(1, T2) / m(T2), hit2 = m(2, T2) / m(T2);
where hit1 and hit2 are the call hit rates within the storage period T2 of the first-level and second-level storage spaces of each cache node respectively, m(1, T2) and m(2, T2) are the numbers of calls within T2 to data stored in the first-level and second-level storage spaces respectively, and m(T2) is the total number of calls within T2 to data stored in the cache node;
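A minimal sketch of the hit-rate formulas above; the zero-call guard is an added assumption for the case of an idle period.

```python
def call_hit_rates(m1: int, m2: int, m_total: int) -> tuple[float, float]:
    """hit1 = m(1,T2)/m(T2), hit2 = m(2,T2)/m(T2) for the two cache levels.
    Returns (0.0, 0.0) if no calls occurred in the period (assumed behaviour)."""
    if m_total == 0:
        return 0.0, 0.0
    return m1 / m_total, m2 / m_total

hit1, hit2 = call_hit_rates(m1=450, m2=50, m_total=500)  # (0.9, 0.1)
```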
Based on the data call hit rates of the first-level and second-level storage spaces of the corresponding cache node, the memory occupation of the two storage spaces within the period T2 is analyzed by the formula
C(1, j, T2) = hit1 · C(1, j) · (1 + S), C(2, j, T2) = hit2 · C(2, j) · (1 + S);
wherein C(1, j, T2) and C(2, j, T2) are the period memory occupations within T2 of the first-level and second-level storage spaces of the cache node corresponding to the service node numbered j, C(1, j) and C(2, j) are the memory capacities of those two storage spaces, and S is the cache fragmentation rate parameter of the cache node. The memory update capacities Cg1 and Cg2 of the first-level and second-level storage spaces of the corresponding cache node are analyzed, a cache prompt capacity Cv is set, the memory occupations of the two storage spaces are updated and judged at consecutive time points within the period T2, and a memory capacity regulation strategy for the corresponding cache node is generated from the judgment result. Specifically: when the occupation of a storage space at some time point within the period exceeds its memory update capacity, a capacity expansion strategy for that space is generated; when the occupation is greater than the cache prompt capacity Cv but less than or equal to the memory update capacity, a prompt is generated and fed back to the management port without resizing; and when the occupation remains below the prompt capacity, the storage space is judged idle and no memory regulation is performed. The memory update capacities of the two storage spaces of the corresponding cache node are calculated as Cg1 = C(1, j) · (1 − S) and Cg2 = C(2, j) · (1 − S).
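The three-tier judgment against the thresholds Cv and Cg1/Cg2 can be sketched as a single decision rule per storage space; the action labels are illustrative assumptions, and Cv < Cg is assumed.

```python
def regulation_action(occupied: float, cg: float, cv: float) -> str:
    """Hypothetical decision rule for one storage level of a cache node,
    assuming cv < cg: expand above the update capacity Cg, prompt the
    management port between the prompt capacity Cv and Cg, else do nothing."""
    if occupied > cg:
        return "expand"        # occupation exceeds the memory update capacity
    if occupied > cv:
        return "prompt"        # notify the management port, no resize yet
    return "no-regulation"     # utilization is low; keep the current capacity

# e.g. first-level space with Cg1 = 90, Cv = 70 at three consecutive time points:
actions = [regulation_action(o, cg=90, cv=70) for o in (50, 80, 95)]
# -> ["no-regulation", "prompt", "expand"]
```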
Further, the service node state data of the target service and the state data of the corresponding cache nodes are fed back through a visual window, wherein the cache node state data comprises the capacity data of the cache node and the data call records of the corresponding service node;
the generated memory capacity regulation strategy of each cache node is output to the management port, and each cache node's corresponding memory capacity regulation strategy is executed.
A data analysis system based on a distributed caching technology, the system comprising a node positioning module, a node analysis module, a dynamic regulation module, and a data feedback module;
The node positioning module determines target service data, locates service nodes based on it, and acquires service node state data. The node analysis module analyzes each service node's data request demand in real time from the state data, judges the database access state from the demand analysis data, constructs a data cache node for each service node according to the access-state judgment result, and analyzes cache memory allocation based on the corresponding node's request demand data. The dynamic regulation module analyzes real-time cache node memory utilization from each node's periodic data-reading state, dynamically regulates cache node memory each period accordingly, and generates regulation instructions from the periodic regulation data, feeding them back to the management port. The data feedback module outputs the service node state data and the corresponding cache node state data through a visual window and executes the regulation instructions.
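The four-module division above can be sketched structurally as follows; all class and method names and the stubbed logic are illustrative assumptions, not the patent's implementation.

```python
# Structural sketch of the four-module pipeline: locate -> analyze -> regulate -> feed back.

class NodePositioningModule:
    def locate(self, target_service_data: str) -> dict:
        """Locate the target service's nodes and collect state data (stubbed)."""
        return {"node-1": {"packets": 120}, "node-2": {"packets": 95}}

class NodeAnalysisModule:
    def judge_access_state(self, node_states: dict, capacity_limit: int) -> str:
        """Judge the database access state from the aggregate request demand."""
        demand = sum(s["packets"] for s in node_states.values())
        return "busy" if demand >= capacity_limit else "idle"

class DynamicRegulationModule:
    def regulate(self, cache_nodes: dict) -> dict:
        """Produce one regulation instruction per cache node (stubbed)."""
        return {name: "no-regulation" for name in cache_nodes}

class DataFeedbackModule:
    def report(self, instructions: dict) -> list:
        """Feed the regulation instructions back to the management port."""
        return list(instructions.items())
```

A driver would chain these: the analysis module's busy judgment triggers cache node construction, whose state the regulation module then inspects each period.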
Further, the node positioning module comprises a target service determining unit and a service node positioning unit;
The target service determining unit determines target service data by searching the service data based on the management port, wherein the service data comprises service name data and service node data, and the service node data is a service node number;
The service node positioning unit positions the service node through target service node data, and acquires state data of the corresponding service node through acquiring request data of the corresponding service node, wherein the state data of the service node comprises the number of request data packets and the request data packet capacity of the service node.
Further, the node analysis module comprises a service node demand analysis unit and a cache node construction unit;
The service node demand analysis unit constructs a corresponding service node state set based on the state data of each service node of the target service, and analyzes the target service's real-time data request demand from the set to obtain its real-time request data capacity. Based on the actual demand analysis data, it judges the real-time access state of the database: it computes the maximum single-moment response request packet capacity of the database from the maximum number of request packets the database can answer at a single moment, and determines the current access state by comparing the single-moment demand analysis data with that maximum response capacity, wherein the access state comprises busy and idle;
When the database access state is busy, the cache node construction unit constructs a cache node for each service node of the target service and performs capacity prediction analysis for each cache node based on the node's real-time service data request demand data to obtain its real-time capacity data. An observation period T1 is divided out, capacity prediction analysis is performed for each cache node within the period, and the maximum value is taken as the cache node's storage capacity value for the corresponding service node period T1. The request demand data within the period T1 are divided by type, the quantity of each type of request data in T1 is determined, and each type's share of the request volume is analyzed; a proportion threshold is introduced, each type's request-volume proportion is compared with the threshold, and high-request-type and low-request-type data are determined. According to this comparison, the cache space of each node is divided into a first-level cache space storing the high-request-type data and a second-level cache space storing the low-request-type data.
Further, the dynamic regulation and control module comprises a cache node memory analysis unit and a cache node memory regulation and control unit;
The cache node memory analysis unit sets a cache node update period T2 based on capacity prediction analysis and data storage division results of corresponding cache nodes of each service node of target service, analyzes the calling condition of each service node in the period for data stored in the cache nodes, and analyzes the utilization degree of a first-level cache space and a second-level cache space in real time;
The cache node memory regulation unit performs dynamic memory regulation analysis for each service node's cache node based on the call-condition analysis data for the data stored in the node within the update period T2. It analyzes the memory occupation of the first-level and second-level storage spaces of the corresponding cache node within T2 from the two spaces' data call hit rates, analyzes the memory update capacities Cg1 and Cg2 of the two spaces, sets a cache prompt capacity Cv, updates and judges the memory occupation of the two spaces at consecutive time points within the period, and generates the memory capacity regulation strategy of the corresponding cache node according to the judgment result.
Further, the data feedback module comprises a visualization unit and a strategy execution unit;
The visualization unit feeds back the service node state data of the target service and the state data of the corresponding cache nodes through a visual window, wherein the cache node state data comprises the capacity data of the cache node and the data call records of the corresponding service node;
the strategy execution unit outputs the generated memory capacity regulation strategy of each cache node to the management port and executes each cache node's corresponding memory capacity regulation strategy.
Compared with the prior art, the invention has the beneficial effects that:
By determining the nodes of the target service, performing node request-volume analysis and database access-state analysis, and combining cache node construction with cache node memory prediction and regulation analysis, the application achieves flexible regulation and analysis of cache demand under different demand scenarios during service node data calls. It can adaptively regulate cache nodes for dynamic node-request environments and adjust cache capacity and node state for complex data-change scenarios, thereby improving the current situation in which unreasonable capacity allocation wastes memory resources in most cache nodes.
Drawings
FIG. 1 is a schematic diagram of a data analysis system based on a distributed caching technique according to the present invention;
FIG. 2 is a flow chart of a data analysis method based on a distributed caching technology according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Embodiment: as shown in FIG. 1, the invention provides the following technical scheme:
the data analysis system based on the distributed caching technology comprises a node positioning module, a node analysis module, a dynamic regulation and control module and a data feedback module;
The node positioning module determines target service data, locates service nodes based on it, and acquires service node state data. The node analysis module analyzes each service node's data request demand in real time from the state data, judges the database access state from the demand analysis data, constructs a data cache node for each service node according to the access-state judgment result, and analyzes cache memory allocation based on the corresponding node's request demand data. The dynamic regulation module analyzes real-time cache node memory utilization from each node's periodic data-reading state, dynamically regulates cache node memory each period accordingly, and generates regulation instructions from the periodic regulation data, feeding them back to the management port. The data feedback module outputs the service node state data and the corresponding cache node state data through a visual window and executes the regulation instructions.
Further, the node positioning module comprises a target service determining unit and a service node positioning unit;
The target service determining unit determines target service data by searching the service data based on the management port, wherein the service data comprises service name data and service node data;
The service node positioning unit positions the service node through the target service node data, and acquires the state data of the corresponding service node through acquiring the corresponding service node request data, wherein the state data of the service node comprises the number of the request data packets and the request data packet capacity of the service node.
Further, the node analysis module comprises a service node demand analysis unit and a cache node construction unit;
The service node demand analysis unit constructs a corresponding service node state set based on the state data of each service node of the target service, and analyzes the target service's real-time data request demand from the set to obtain its real-time request data capacity. Based on the actual demand analysis data, it judges the real-time access state of the database: it computes the maximum single-moment response request packet capacity of the database from the maximum number of request packets the database can answer at a single moment, and determines the current access state by comparing the single-moment demand analysis data with that maximum response capacity;
When the database access state is busy, the cache node construction unit constructs a cache node for each service node of the target service and performs capacity prediction analysis for each cache node based on the node's real-time service data request demand data to obtain its real-time capacity data. An observation period T1 is divided out, capacity prediction analysis is performed for each cache node within the period, and the maximum value is taken as the cache node's storage capacity value for the corresponding service node period T1. The request demand data within the period T1 are divided by type, the quantity of each type of request data in T1 is determined, and each type's share of the request volume is analyzed; a proportion threshold is introduced, each type's request-volume proportion is compared with the threshold, and high-request-type and low-request-type data are determined. According to this comparison, the cache space of each node is divided into a first-level cache space storing the high-request-type data and a second-level cache space storing the low-request-type data.
Further, the dynamic regulation and control module comprises a cache node memory analysis unit and a cache node memory regulation and control unit;
The cache node memory analysis unit sets a cache node update period T2 based on capacity prediction analysis and data storage division results of corresponding cache nodes of each service node of the target service, analyzes the calling condition of the data stored by the cache nodes by each service node in the period, and analyzes the utilization degree of a first-level cache space and a second-level cache space in real time;
The cache node memory regulation unit performs dynamic memory regulation analysis for each service node's cache node based on the call-condition analysis data for the data stored in the node within the update period T2: it analyzes the memory occupation of the first-level and second-level storage spaces of the corresponding cache node within T2, updates and judges those occupations at consecutive time points within the period, and generates the memory capacity regulation strategy of the corresponding cache node according to the judgment result.
Further, the data feedback module comprises a visualization unit and a strategy execution unit;
The visualization unit feeds back the service node state data of the target service and the state data of the corresponding cache nodes through a visual window, wherein the cache node state data comprises the capacity data of the cache node and the data call records of the corresponding service node;
the strategy execution unit outputs the generated memory capacity regulation strategy of each cache node to the management port and executes each cache node's corresponding memory capacity regulation strategy;
As shown in FIG. 2, the present invention provides another technical solution:
A data analysis method based on a distributed caching technique, the method comprising the steps of:
determining target service data, positioning service nodes based on the target service data, and acquiring service node state data;
analyzing each service node's data request demand in real time based on the service node state data, and judging the database access state from the demand analysis data; constructing a data cache node for each service node according to the access-state judgment result, and analyzing cache memory allocation based on the corresponding service node's request demand data;
analyzing the real-time memory utilization of each cache node according to its periodic data-reading state, dynamically regulating cache node memory each period according to that utilization, and generating regulation instructions from the periodic regulation data and feeding them back to the management port.
Further, target service data is determined by searching service data from the management port, wherein the service data comprises service name data and service node data;
And positioning the service node through the target service node data, and acquiring the state data of the corresponding service node through acquiring the request data of the corresponding service node, wherein the state data of the service node comprises the number of the request data packets and the request data packet capacity of the service node.
Further, based on the state data of each service node of the target service, a corresponding service node state set is constructed, the real-time data request demand of the target service is analyzed according to the service node state set, and the real-time data request capacity of the target service is obtained; the calculation formula is
P(t) = P × Σ_{j=1}^{n} m_j;
wherein P(t) is the data request demand capacity of the target service at the current time t, P is the single data packet capacity, m_j is the data packet request quantity at time t of the service node numbered j, and n is the number of service nodes of the target service. Based on the target service data request demand analysis data, the real-time access state of the database is judged and analyzed: the request data packet quantity of the maximum single response of the database is acquired, and the maximum single-response request data packet capacity of the database is calculated. The access state of the database at the current time is determined by comparing the target service data request demand with the maximum single-response request data packet capacity, the access state comprising busy and idle. The specific comparison steps are as follows: an early warning coefficient k is applied to the maximum single-response request data packet capacity to obtain the single-response early-warning request data packet capacity Pe,s, calculated as Pe,s = k × Pmax,s, wherein Pmax,s is the maximum single-response request data packet capacity; the current data request demand capacity P(t) is then compared with Pe,s, and if P(t) ≥ Pe,s, the current access state of the database is judged to be busy; otherwise, the access state of the database is judged to be idle;
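The busy/idle judgment above reduces to two small calculations. A minimal sketch follows; the function names and the sample coefficient value are illustrative assumptions, not taken from the patent:

```python
def demand_capacity(p: float, requests: list[int]) -> float:
    """P(t) = P * sum of per-node packet request counts m_j."""
    return p * sum(requests)


def access_state(p_t: float, p_max_s: float, k: float) -> str:
    """Compare demand P(t) with the early-warning capacity Pe,s = k * Pmax,s."""
    p_e_s = k * p_max_s
    return "busy" if p_t >= p_e_s else "idle"


# Example: 3 service nodes, single packet capacity 2.0 units
p_t = demand_capacity(2.0, [100, 150, 50])        # 2.0 * 300 = 600.0
state = access_state(p_t, p_max_s=1000.0, k=0.5)  # Pe,s = 500.0, so "busy"
```

The early-warning coefficient k < 1 makes the system declare "busy" before the database actually saturates, which is what triggers cache-node construction in the next step.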
When the access state of the database is busy, cache nodes are constructed for each service node of the target service, and capacity prediction analysis is carried out on each cache node based on the real-time service data request demand data of each service node to obtain the real-time capacity data of the cache node corresponding to each service node; the calculation is
C(j, T) = r × P × m_j(T);
wherein C(j, T) is the capacity prediction value at time T of the cache node corresponding to the service node numbered j, and r is the single data packet cache capacity pre-storage coefficient. An observation period T1 is divided, and the maximum capacity prediction value within the period is taken as the capacity value of the cache node of the corresponding service node for the period T1. The request data within the period T1 of the corresponding service node are divided by type and counted as a whole, the quantity of each type of request data within the period T1 is determined respectively, and the request data volume ratio of each type of request data within the period T1 is analyzed. A ratio threshold is introduced according to the analysis results, and the request data volume ratio of each type of request data is compared with the ratio threshold: request data whose ratio is greater than or equal to the ratio threshold are judged to be high request type data, and otherwise are judged to be low request type data. Each cache node is divided into a primary cache space and a secondary cache space, wherein the primary cache space is used for storing the high request type data and the secondary cache space is used for storing the low request type data.
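The request-type division can be sketched as one pass over per-type counts: ratios are compared against the threshold, and high-request types are routed to the primary cache space. The helper name, the type labels, and the threshold value below are our illustrative assumptions:

```python
def split_by_request_ratio(type_counts: dict[str, int], threshold: float):
    """Classify request-data types: ratio >= threshold goes to the primary
    cache space (high request type), the rest to the secondary (low)."""
    total = sum(type_counts.values())
    high = {t for t, c in type_counts.items() if c / total >= threshold}
    low = set(type_counts) - high
    return high, low


# Within one observation period T1: "query" holds 70% of requests,
# so it is high request type; "report" (20%) and "export" (10%) are low.
high, low = split_by_request_ratio({"query": 70, "report": 20, "export": 10}, 0.25)
```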
Further, based on the capacity prediction analysis and data storage division results of the cache node corresponding to each service node of the target service, a cache node update period T2 is set, the calling condition of the data stored by the cache node of each service node within the period is analyzed, the utilization degree of the primary cache space and the secondary cache space is analyzed in real time, and the calling hit rates of the data stored in the primary cache space and the secondary cache space are analyzed respectively; the calculation formulas are
hit1 = m(1, T2) / m(T2), hit2 = m(2, T2) / m(T2);
wherein hit1 and hit2 are respectively the calling hit rates within the period T2 of the data stored in the primary storage space and the secondary storage space of the cache node of each service node, m(1, T2) and m(2, T2) are respectively the calling quantities within the period T2 of the data stored in the primary storage space and the secondary storage space of the cache node of each service node, and m(T2) is the total calling quantity within the period T2 of the data stored in the cache node of each service node;
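The two hit rates are simple ratios over the period's total call count; a sketch (function name assumed), including the degenerate case of a period with no calls:

```python
def call_hit_rates(m1: int, m2: int, m_total: int) -> tuple[float, float]:
    """hit1 = m(1,T2)/m(T2), hit2 = m(2,T2)/m(T2) for one cache node."""
    if m_total == 0:
        return 0.0, 0.0  # no calls within the period T2
    return m1 / m_total, m2 / m_total


# 80 of 100 calls hit the primary space, 20 the secondary
hit1, hit2 = call_hit_rates(m1=80, m2=20, m_total=100)  # (0.8, 0.2)
```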
Based on the data calling hit rates of the primary storage space and the secondary storage space in the corresponding cache node, the memory occupation amounts of the primary storage space and the secondary storage space in the corresponding cache node within the period T2 are analyzed; the calculation formulas are
C(1, j, T2) = hit1 × C(1, j) × (1 + S), C(2, j, T2) = hit2 × C(2, j) × (1 + S);
Wherein C(1, j, T2) and C(2, j, T2) are respectively the period memory occupation amounts within the period T2 of the primary storage space and the secondary storage space in the cache node corresponding to the service node numbered j, C(1, j) and C(2, j) are respectively the memory capacities of the primary storage space and the secondary storage space in the cache node corresponding to the service node numbered j, and S is the cache fragmentation rate parameter of the cache node. The memory update capacity Cg1 of the primary storage space and the memory update capacity Cg2 of the secondary storage space in the cache node corresponding to each service node are analyzed, and a memory prompt capacity Cv is set. The memory occupation amounts of the primary storage space and the secondary storage space at consecutive time points within the period T2 are updated and judged, and a memory capacity regulation strategy for the corresponding cache node is generated according to the judgment results: when, at a certain time point within the period, the memory occupation amount of a storage space is greater than the memory prompt capacity Cv and less than or equal to the corresponding memory update capacity, a capacity prompt is generated for that storage space; when the memory occupation amount is greater than the corresponding memory update capacity, the memory capacity of that storage space is directly expanded; and when the memory occupation amount remains below the memory prompt capacity, the storage space is judged to be idle and no memory regulation is performed. The memory update capacity Cg1 of the primary storage space and the memory update capacity Cg2 of the secondary storage space of the corresponding cache node are each calculated from the memory capacity of the respective storage space.
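The per-space regulation decision is a three-way comparison against the prompt capacity Cv and the update capacity Cg. A minimal sketch follows; the action labels and function name are our assumptions, since the patent gives the exact Cg1/Cg2 formulas only in a figure:

```python
def regulation_action(occupied: float, cv: float, cg: float) -> str:
    """Decide the memory-capacity action for one storage space of a cache node.

    occupied: period memory occupation amount, e.g. C(1, j, T2)
    cv:       memory prompt capacity Cv
    cg:       memory update capacity for this space (Cg1 or Cg2)
    """
    if occupied > cg:
        return "expand"  # occupation exceeds update capacity: enlarge the space
    if occupied > cv:
        return "prompt"  # between prompt and update capacity: issue a prompt
    return "none"        # space is idle: no memory regulation


action = regulation_action(120.0, cv=80.0, cg=100.0)  # "expand"
```

Running the decision per storage space per time point within T2, and emitting the resulting strategy to the management port, matches the flow described above.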
Further, the service node state data of the target service and the state data of the corresponding cache node are fed back through a visual window, wherein the cache node state data comprises capacity data of the cache node and data calling records of the corresponding service node;
outputting the generated memory capacity regulating strategy corresponding to each cache node to a management port, and executing the memory capacity regulating strategy corresponding to each cache node.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202510779687.1A CN120386801B (en) | 2025-06-12 | 2025-06-12 | Data analysis system and method based on distributed caching technology |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN120386801A CN120386801A (en) | 2025-07-29 |
| CN120386801B true CN120386801B (en) | 2025-10-14 |
Family
ID=96489760
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202510779687.1A Active CN120386801B (en) | 2025-06-12 | 2025-06-12 | Data analysis system and method based on distributed caching technology |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120386801B (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN119652907A (en) * | 2025-02-18 | 2025-03-18 | 江苏智檬智能科技有限公司 | A distributed cache data control security management system and method |
| CN119719232A (en) * | 2025-03-04 | 2025-03-28 | 浙江爱客智能科技有限责任公司 | Distributed database management method and system based on artificial intelligence |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN107093082A (en) * | 2017-04-21 | 2017-08-25 | 北京恒冠网络数据处理有限公司 | The Data Collection and management method of a kind of technical transaction platform |
| US11586630B2 (en) * | 2020-02-27 | 2023-02-21 | Sap Se | Near-memory acceleration for database operations |
| CN114428796A (en) * | 2022-01-26 | 2022-05-03 | 麒麟合盛网络技术股份有限公司 | A data acquisition method and device |
| US12417136B2 (en) * | 2023-03-24 | 2025-09-16 | AtomBeam Technologies Inc. | System and method for adaptive protocol caching in event-driven data communication networks |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110166282B (en) | Resource allocation method, device, computer equipment and storage medium | |
| CN107404409B (en) | Prediction method and system of container cloud elastic supply container quantity for sudden load | |
| CN118012906A (en) | Multi-level cache adaptive system and strategy based on machine learning | |
| CN119938942B (en) | A method for generating distributed data storage network based on knowledge graph | |
| CN119862907B (en) | A hybrid expert model training optimization method based on matrix routing and token allocation | |
| CN119271398B (en) | Heterogeneous computing power resource allocation optimization method for deep reinforcement learning model training | |
| CN119691305A (en) | Intelligent caching method, device, equipment and medium for front-end resource loading as required | |
| CN115981863A (en) | Intelligent cloud resource elastic expansion method and system combining business characteristics | |
| CN120386801B (en) | Data analysis system and method based on distributed caching technology | |
| CN111629216B (en) | VOD service cache replacement method based on random forest algorithm under edge network environment | |
| CN115190135B (en) | Distributed storage system and copy selection method thereof | |
| CN114785856A (en) | Edge calculation-based collaborative caching method, device, equipment and storage medium | |
| CN110535894A (en) | A method and system for dynamic allocation of container resources based on load feedback | |
| CN111143411A (en) | Dynamic streaming pre-calculation method and device and storage medium | |
| CN111190737A (en) | Memory allocation method for embedded system | |
| CN110944050B (en) | Reverse proxy server cache dynamic configuration method and system | |
| CN117407921A (en) | Differential privacy histogram publishing method and system based on must-connect and do-not-connect constraints | |
| CN116541147A (en) | Heterogeneous multi-core task scheduling method and system based on improved whale optimization algorithm | |
| Kim et al. | T-cachenet: Transformer-based deep reinforcement learning for next-generation internet content caching | |
| CN114882713A (en) | Multi-scene-based signal control method, system, device and storage medium | |
| CN116644047B (en) | A Spark-based file size adaptive processing method and system | |
| CN121070629B (en) | Load balancing method and system for low-power-consumption AI processor | |
| CN120448054B (en) | Scheduling methods, inference methods, devices and electronic equipment for large model parameters | |
| CN120653451B (en) | A method, apparatus, device, medium, and program product for cache allocation. | |
| CN113742383B (en) | Data storage method, device, equipment and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||