
CN120407174A - Fixed cache implementation method, system, device and readable storage medium - Google Patents

Fixed cache implementation method, system, device and readable storage medium

Info

Publication number
CN120407174A
Authority
CN
China
Prior art keywords
buf
cache
local
queue
available
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510493169.3A
Other languages
Chinese (zh)
Inventor
胡进
张欢
朱正东
郭晓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anchao Cloud Software Co Ltd
Original Assignee
Anchao Cloud Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anchao Cloud Software Co Ltd filed Critical Anchao Cloud Software Co Ltd
Priority to CN202510493169.3A priority Critical patent/CN120407174A/en
Publication of CN120407174A publication Critical patent/CN120407174A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5016Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F9/524Deadlock detection or avoidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract


The present invention discloses a fixed cache implementation method, system, device and readable storage medium. It includes: creating a global lock-free queue and a local cache queue; in the local cache queue, electing a local active BUF; in response to a user's cache request instruction, returning an available cache unit from the local active BUF for storing data to be cached; if the local active BUF has no available cache unit, marking that local BUF as an exhausted BUF and re-electing a local active BUF; if all BUFs in the local cache queue are marked as exhausted BUFs, obtaining an available BUF from the global lock-free queue, placing it into the local cache queue, and designating it as the new local active BUF. Compared with the prior art, the present invention requires no locking, reduces competition for global resources, and reduces lock contention when global resources are accessed. It also ensures that threads always run on the same core, which improves the cache hit rate.

Description

Fixed cache implementation method, system, device and readable storage medium
Technical Field
The invention belongs to the technical field of computer storage, and particularly relates to a method, a system, a device and a readable storage medium for realizing fixed cache.
Background
A cache is a buffer for data exchange: a temporary store for frequently used data. Caching is one of the key factors that allows modern computer systems to achieve high performance. When a user queries data, the cache is searched first; if the data is found, it is used directly, and only otherwise is the database searched.
The essence of caching is trading space for time: data freshness is sacrificed, and data held in server memory temporarily stands in for the latest data read from the database, which reduces database IO, lightens the load on the server, lowers network latency, and speeds up page loading.
Conventional caches are typically built on a global queue protected by locks, a combination often used in concurrent programming. Under this model only one thread can operate on the queue at a time, so when multiple threads concurrently acquire and release cache entries the lock is contended, threads must sleep while queuing for the lock, and performance suffers badly; cache performance therefore hits a bottleneck and cannot scale with the number of cores. In addition, allocation and release usually access the same shared resources, resulting in a low hit rate in the CPU hardware cache.
Therefore, in view of the above technical problems, it is necessary to provide a fixed cache implementation method, system, device and readable storage medium.
The information disclosed in this background section is only for enhancement of understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that this information forms the prior art already known to a person of ordinary skill in the art.
Disclosure of Invention
The invention aims to provide a fixed cache implementation method, system, device and readable storage medium that avoid the performance degradation caused by locking, reduce access to global resources, and improve system performance.
In order to achieve the above object, a specific embodiment of the present invention provides the following technical solution:
in a first aspect, the present invention provides a fixed cache implementation method, including:
creating a global lock-free queue and a local cache queue, wherein the global lock-free queue and the local cache queue comprise a plurality of BUFs;
in the local cache queue, electing a local active BUF based on the number of available cache units in each BUF, and, in response to a cache request instruction from a user, returning an available cache unit from the local active BUF for storing the data to be cached;
if the local active BUF has no available cache unit, marking that local active BUF as an exhausted BUF and re-electing a local active BUF;
If all BUFs in the local cache queue are marked as exhausted BUFs, obtaining an available BUF from the global lock-free queue, placing it into the local cache queue, and designating it as the new local active BUF.
In one or more embodiments of the invention, the method further comprises:
Requesting memory from the system based on the expected memory footprint of a single cache unit and the expected number of cache units, wherein the requested memory is greater than or equal to the product of the two;
Dividing the requested memory into a plurality of BUFs, each BUF being partitioned into one or more cache units.
In one or more embodiments of the present invention, electing the local active BUF based on the number of available cache units in each BUF includes:
traversing the number of remaining available cache units in each BUF in a local cache queue;
and selecting the BUF with the largest number of remaining available cache units in the local cache queue as the local active BUF.
In one or more embodiments of the present invention, obtaining a BUF from the global lock-free queue and placing it into the local cache queue includes:
acquiring an available BUF in a global lock-free queue;
dividing the acquired BUF into a plurality of cache units based on the number of cache units in each BUF in a local cache queue;
and placing the segmented BUF into the local cache queue.
In a second aspect, the present invention provides a fixed cache implementation method, which includes:
in response to a cache release instruction from a user, releasing the data stored in the corresponding cache unit;
Obtaining, from metadata at the tail of the cache unit, the BUF to which the cache unit belongs, and returning the cache unit to that BUF;
Updating the mark information of that BUF based on its state before the cache unit was returned;
And returning BUFs to the global lock-free queue based on the number of BUFs in the current local cache queue.
In one or more embodiments of the present invention, updating the mark information of the BUF based on its state before the cache unit is returned includes:
If the BUF to which the cache unit belongs already has an available cache unit, skipping the BUF mark update;
And if the BUF to which the cache unit belongs is an exhausted BUF, updating its mark from exhausted BUF to available BUF.
In one or more embodiments of the present invention, returning BUFs to the global lock-free queue based on the number of BUFs in the current local cache queue includes:
If, after the cache unit is returned, every cache unit in its BUF is available and the number of local BUFs is greater than a preset threshold, placing that BUF into the global lock-free queue;
And if, after the cache unit is returned, some cache unit in its BUF is still unavailable, or the number of local BUFs is less than or equal to the preset threshold, not returning the BUF to the global lock-free queue.
In a third aspect, the present invention provides a fixed cache implementation system, including:
A creation module, configured to create a global lock-free queue and a local cache queue, the global lock-free queue and the local cache queue each comprising a plurality of BUFs;
A response module, configured to elect a local active BUF in the local cache queue based on the number of available cache units in each BUF, and, in response to a cache request instruction from a user, return an available cache unit from the local active BUF for storing the data to be cached;
An election module, configured to mark the local active BUF as an exhausted BUF when it has no available cache unit, and re-elect a local active BUF;
And a migration module, configured to, when all BUFs in the local cache queue are marked as exhausted BUFs, obtain an available BUF from the global lock-free queue, place it into the local cache queue, and designate it as the new local active BUF.
In a fourth aspect, the present invention provides a computer device comprising a memory and a processor communicatively connected to each other, the memory storing computer instructions that the processor executes to perform the above fixed cache implementation method.
In a fifth aspect, the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to perform the fixed cache implementation method.
Compared with the prior art, the fixed cache implementation method provided by the invention establishes local cache queues per CPU core, so that each core has an independent data structure and most operations only touch core-local data without locking. Each core also has an independent L1 cache, and buffers are preferentially acquired from and returned to the local cache, which reduces competition for global resources. Managing the central buffer pool with a lock-free queue reduces lock contention during global resource access. Furthermore, the invention allocates several buffers at a time, which lowers allocation frequency, and it defers global state updates until changes have accumulated, which reduces frequent access to global resources; keeping threads running on the same core throughout improves the cache hit rate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present invention, and other drawings may be obtained according to the drawings without inventive effort to those skilled in the art.
FIG. 1 is a schematic diagram of a fixed cache implementation scenario in an embodiment of the present invention;
FIG. 2 is a flow chart of a method for implementing a fixed buffer in an embodiment of the invention;
FIG. 3 is a flow chart of a method for implementing a fixed buffer in another embodiment of the present invention;
FIG. 4 is a block diagram illustrating a fixed cache implementation system in accordance with an embodiment of the present invention;
FIG. 5 is a block diagram illustrating a fixed cache implementation system in accordance with another embodiment of the present invention;
Fig. 6 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the technical solution of the present invention better understood by those skilled in the art, the technical solution of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, shall fall within the scope of the present invention.
Throughout the specification and claims, unless explicitly stated otherwise, the term "comprise" or variations thereof such as "comprises" or "comprising", etc. will be understood to include the stated element or component without excluding other elements or components.
Prior art cache implementations are typically based on a global queue used together with a lock. A queue, as a data structure, follows the first-in first-out principle; in a computer, tasks or data may be stored in a queue for sequential processing. A lock is a synchronization mechanism: in a multithreaded or multiprocess environment, to ensure data consistency and integrity, a thread or process takes the lock when accessing a shared resource, and other threads or processes that want the resource must wait for the lock to be released.
Under this model, when multiple threads concurrently acquire and release cache entries, the lock is contended and threads must sleep while queuing to acquire it, which severely affects performance. In addition, allocation and release usually access the same shared resources, resulting in a low hit rate in the CPU hardware cache.
The inventors identified these main defects in the prior art and, based on them, propose a new technical approach: the cache queue is scaled out per CPU core, so that each core has an independent cache queue that requires no locking. A global lock-free queue is additionally provided to supply extra BUFs when local cache resources are insufficient. This approach reduces contention and conflict over global resources and can improve cache hit rate and utilization.
Referring to fig. 1, an application scenario of a fixed cache implementation provided in an embodiment of the present invention is shown, where the scenario specifically includes a global lock-free queue 101, a local cache queue 102, and a user terminal 103.
The global lock-free queue 101 and the local cache queue 102 each include a plurality of BUFs. In particular, local cache queues are established per CPU core, each core having its own independent local cache queue. Meanwhile, to implement fixed caching, each BUF of a local cache queue includes a plurality of cache units for storing data.
The local cache queue elects a local active BUF, which preferentially serves cache tasks, by the number of available cache units in each BUF. If all local BUFs are used up, one BUF is obtained from the global lock-free queue, partitioned into the cache-unit format of the local cache queue, and placed into the local cache queue as the local active BUF. After receiving a cache request, the local active BUF hands out an available cache unit for storing the corresponding data.
When a user program running on a CPU core releases a cache unit, the BUF to which the unit belongs is read from metadata at the tail of the cache unit, and the unit is returned to that BUF in the local queue. If that BUF was in the exhausted state, it is updated to the available state. If every cache unit in the BUF is available and the number of BUFs currently local to the CPU core is greater than or equal to a constant X (X is configured according to actual service testing), the BUF is placed into the global lock-free queue.
It should be noted that the user terminal 103 is installed with a computer software program matching the fixed cache implementation method provided herein, and the user terminal 103 may include, but is not limited to, a desktop computer (PC), or a portable or wearable electronic device such as a smart phone, a handheld computer, a tablet computer, a Personal Digital Assistant (PDA), and the like, which is not limited in the embodiments of the present invention.
It should be further noted that the fixed cache implementation method according to the embodiments of the present invention may be applied to the fixed cache implementation system according to the embodiments of the present invention. The fixed cache implementation system can be configured at a terminal. Terminals may include, but are not limited to, PCs (personal computers), PDAs (personal digital assistants), tablet computers, smartphones, smart wearable devices, and the like.
Fig. 2 is a flow chart of a fixed buffer implementation according to an embodiment of the invention. The fixed cache implementation method specifically comprises the following steps:
S201, creating a global lock-free queue and a local cache queue, wherein the global lock-free queue and the local cache queue each comprise a plurality of BUFs;
It should be noted that, in the embodiment of the present invention, the establishment of the local cache queue needs to be based on the CPU core. That is, a corresponding local cache queue needs to be created for each CPU core, so that each core has an independent cache, and a physically isolated cache space is formed through such a fine-grained resource partitioning mechanism. Each processor core, when performing memory access operations, preferentially operates its bound local queue. The queue binding mechanism based on CPU affinity naturally avoids lock contention problems of cross-core operations. The method reduces the competition for global resources and can avoid the reduction of the cache performance caused by frequent lock acquisition/release.
Specifically, the independent cache queue set based on the CPU core may not be locked, for two main reasons. First, the independence design ensures contention-free access. Each CPU core maintains a dedicated cache queue, and this architecture design provides complete isolation of cache operations between different cores. Because the concurrent access scene of the shared resource does not exist, each core can independently perform read-write operation on the own queue, the possibility of data competition is fundamentally eliminated, and therefore, a lock mechanism is not required to be introduced to ensure data consistency.
Second, the design takes full advantage of the locality of CPU access. Modern CPU access patterns have significant spatial and temporal locality characteristics, i.e., processors tend to centrally access recently used data and its neighboring data. The independent cache queues not only optimize the data access efficiency, but also greatly reduce the requirement of cross-core data interaction by storing the associated data in the local queues of the same core. The design not only avoids the performance overhead caused by the lock operation, but also simplifies the complexity of system implementation, thereby realizing efficient lock-free concurrent processing.
In an embodiment of the invention, the global lock-free queue is used to provide additional available BUFs when local caches are insufficient. Managing the central buffer pool with a lock-free queue reduces locking during global resource access. The choice of global lock-free queue can be adjusted to the actual usage scenario, and specifically includes, but is not limited to, an array-based lock-free queue (ring buffer queue), a linked-list-based lock-free queue (Michael-Scott queue), a sliced/segmented queue, and the like. The embodiment of the present invention is not particularly limited thereto.
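As an illustration of the array-based variant mentioned above, the following is a minimal Python sketch of a single-producer, single-consumer ring buffer queue. It is a teaching aid, not the patent's implementation: a real global queue shared by many cores would need atomic compare-and-swap operations and memory barriers, which Python does not expose. The class and method names are assumptions.

```python
class SPSCRing:
    """Minimal array-based lock-free queue sketch (single producer,
    single consumer). One slot is left empty so that full and empty
    states can be distinguished without a shared counter."""

    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot always unused
        self.head = 0  # consumer index
        self.tail = 0  # producer index

    def try_put(self, item):
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:        # queue full
            return False
        self.buf[self.tail] = item
        self.tail = nxt             # publishing store: consumer sees item only now
        return True

    def try_get(self):
        if self.head == self.tail:  # queue empty
            return None
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

Because the producer only writes `tail` and the consumer only writes `head`, neither index is contended, which is what makes the array-based design lock-free in the single-producer, single-consumer case.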
Further, a BUF (Buffer) is a block of memory in a computer system for temporarily storing data, mainly used to solve data transfer between devices or components with different speeds or timings. In an exemplary embodiment of the present invention, configuring the BUF structure of the local cache queue specifically includes requesting memory from the system based on the expected memory footprint of a single cache unit and the expected number of cache units, and dividing the requested memory into a plurality of BUFs, each BUF being partitioned into one or more cache units.
It will be appreciated that the requested memory should be greater than or equal to the product of the expected memory footprint of a single cache unit and the expected number of cache units, so that the desired BUFs can be established. For example, in a specific embodiment, a cache unit comprises a cache application unit and cache unit metadata. Given the expected cache application unit size UN, the expected cache unit metadata size UM, and the expected number of cache units N, the total cache size to request is (UN+UM) × N. After memory of at least (UN+UM) × N is obtained, it is further divided into M parts, each named a BUF, each BUF containing N/M cache application units. In this embodiment, M may be configured so that it divides N exactly, or the code may automatically round after computing N/M.
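The sizing rule above can be written out as a short Python sketch. The function name and parameter names are illustrative, not from the patent; the rounding follows the text's note that N/M is rounded automatically when M does not divide N.

```python
def plan_fixed_cache(unit_size, meta_size, n_units, n_bufs):
    """Compute the layout described above: the total request is at
    least (UN + UM) * N bytes, divided into M BUFs of N/M cache
    application units each (rounded down when M does not divide N)."""
    total_bytes = (unit_size + meta_size) * n_units  # (UN + UM) * N
    units_per_buf = n_units // n_bufs                # N / M, auto-rounded
    return total_bytes, units_per_buf

# e.g. 4 KiB application units with 64-byte metadata,
# 1000 units split across 8 BUFs
total_bytes, units_per_buf = plan_fixed_cache(4096, 64, 1000, 8)
```

With these example numbers the request is 4,160,000 bytes and each BUF holds 125 cache application units.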
S202, in the local cache queue, electing a local active BUF based on the number of available cache units in each BUF, and, in response to a cache request instruction from a user, returning an available cache unit from the local active BUF for storing the data to be cached;
As can be seen from the above, in the embodiment of the present invention, the BUF includes a plurality of the cache units and cache unit metadata. The cache metadata is suitable for managing the data of the cache unit, such as tags (tags), valid bits (Valid bits), dirty bits (Dirty bits) and the like.
In an exemplary embodiment of the present invention, the electing the locally active BUF based on the number of available buffer units in each BUF includes traversing the number of remaining available buffer units in each BUF in a local buffer queue, and electing the BUF with the largest number of remaining available buffer units in the local buffer queue as the locally active BUF.
It will be appreciated that, since physically having available cache units is the basic condition for performing a cache task, a BUF must have available cache units to serve as the local active BUF that preferentially performs cache tasks. Second, electing the BUF holding the most available cache units as the primary execution unit noticeably optimizes the efficiency of the election mechanism and reduces the resource cost of frequent elections. In addition, making the fullest use of the highest-capacity node effectively reduces resource fragmentation and improves cache hit rate and data throughput.
It should be noted that, the local cache queues are independently set based on different central processing unit cores, so that the cache task is executed on the corresponding core, only the local data is required to be accessed, and locking is not required. In a multi-core processor, the core number is a unique identifier for each physical or logical processing core in the multi-core CPU. When executing the cache task, the core number corresponding to the central processing unit core should be obtained, so that the thread is ensured to run on the same core all the time, and the cache hit rate is improved.
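The election rule described above (traverse the local cache queue, pick the BUF with the most remaining available cache units) can be sketched in a few lines of Python. The representation of the local queue as a mapping from BUF id to free-unit count is an assumption made for illustration.

```python
def elect_active_buf(local_queue):
    """Election rule from the text: among BUFs that still have
    available cache units, return the one with the largest remaining
    count; return None if every BUF is exhausted, signalling that the
    caller must refill from the global lock-free queue."""
    candidates = {buf: free for buf, free in local_queue.items() if free > 0}
    if not candidates:
        return None
    return max(candidates, key=candidates.get)
```

Picking the fullest BUF, rather than any non-empty one, spaces elections as far apart as possible, which is the efficiency argument made in the paragraph above.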
S203, if the local active BUF has no available cache unit, marking it as an exhausted BUF and re-electing a local active BUF;
As described above, to reduce load overhead while maximizing node cache capacity, the local BUF with the largest remaining number of available cache units is preferentially configured as the local active BUF and preferentially executes cache tasks. When the available cache units in the local active BUF are exhausted, it no longer has the objective conditions for performing cache tasks, so it must be marked and a new local active BUF elected. The election strategy is as described above: among the remaining local BUFs, the one with the largest remaining number of available cache units is preferably elected as the new local active BUF.
In a specific embodiment, marking a local active BUF whose available cache units are depleted as an exhausted BUF specifically includes establishing an exhausted-BUF linked list and adding the depleted local active BUF to it. Similarly, an available-BUF linked list and a local-active-BUF list can be configured to classify and mark the local BUFs.
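The marking scheme just described, with its available and exhausted lists plus one active BUF, can be sketched as a small class. Python lists stand in for the linked lists of the embodiment; all field and method names are assumptions for illustration only.

```python
class LocalBufLists:
    """Sketch of the BUF classification from the text: BUFs move
    between an 'available' list and an 'exhausted' list, and one
    available BUF is designated as the local active BUF."""

    def __init__(self, bufs):
        self.free = dict(bufs)  # BUF id -> remaining available units
        self.available = [b for b, n in bufs.items() if n > 0]
        self.exhausted = [b for b, n in bufs.items() if n == 0]
        self.active = None

    def mark_exhausted(self, buf):
        """Called when the active BUF hands out its last unit."""
        if buf in self.available:
            self.available.remove(buf)
        self.exhausted.append(buf)
        if self.active == buf:
            self.active = None  # forces a re-election

    def reelect(self):
        """Re-elect: pick the available BUF with the most free units."""
        if not self.available:
            return None  # all exhausted; refill from the global queue
        self.active = max(self.available, key=lambda b: self.free[b])
        return self.active
```

A `reelect()` returning `None` is the trigger for step S204 below: every local BUF is exhausted and a fresh BUF must come from the global lock-free queue.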
S204, if all BUFs in the local cache queue are marked as exhausted BUFs, acquiring an available BUF from the global lock-free queue, placing it into the local cache queue, and designating it as the new local active BUF.
It will be appreciated that when all BUFs in the local cache queue are marked as exhausted BUFs, the local cache corresponding to the current CPU core can no longer serve cache tasks. At this point, the global lock-free queue can place one of its available BUFs into the local cache queue to let it continue performing the corresponding cache function.
It should be noted that the present invention is intended to implement fixed caching. That is, the storage space of each cache unit in a BUF of the local cache queue is configured by the user terminal and has a relatively fixed size. Therefore, a BUF handed from the global lock-free queue to the local cache queue must also be partitioned into cache units according to the local cache queue's configuration. That is, in an exemplary embodiment of the present invention, acquiring a BUF from the global lock-free queue and placing it into the local cache queue includes acquiring an available BUF in the global lock-free queue, dividing the acquired BUF into a plurality of cache units based on the number of cache units per BUF in the local cache queue, and placing the divided BUF into the local cache queue.
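The refill path of S204 can be sketched as follows. A plain Python list models the global queue (its `pop` stands in for a lock-free dequeue), raw BUFs are modeled as `(id, size)` pairs, and cache units as `(buf_id, offset)` pairs; all of these representations and names are assumptions for illustration.

```python
def refill_from_global(global_queue, local_queue, units_per_buf):
    """S204 sketch: when every local BUF is exhausted, take one
    available BUF from the global queue, partition it into the local
    cache queue's fixed unit format, and install it locally as the
    new active BUF. Returns the new BUF's id, or None on failure."""
    if not global_queue:
        return None  # global queue empty: allocation fails upward
    buf_id, size = global_queue.pop(0)  # stand-in for a lock-free pop
    unit_size = size // units_per_buf
    # carve the raw BUF into fixed-size units, recorded by byte offset
    units = [(buf_id, i * unit_size) for i in range(units_per_buf)]
    local_queue[buf_id] = units  # placed locally; becomes the active BUF
    return buf_id
```

The partitioning step is the point the paragraph stresses: the global queue hands out raw BUFs, and it is the receiving local queue's fixed-unit configuration that dictates how each one is sliced.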
Fig. 3 is a flow chart of a fixed buffer implementation according to an embodiment of the invention. The fixed cache implementation method specifically comprises the following steps:
S301, in response to a cache release instruction from a user, releasing the data stored in the corresponding cache unit;
It should be noted that, releasing the information in the cache unit is an important operation for managing the memory and the data access efficiency in the computer system, which means that the temporary data stored in the cache unit is cleared or released, so as to recover the memory or the storage space, and ensure that the system or the application program can operate efficiently.
In an embodiment of the invention, the cache release instruction is interpreted broadly. That is, it covers not only a user directly deleting a specific cache entry through a programming instruction to release a cache unit, but also automatic clearing of time-limited cache data based on a time-to-live set by the user, and cache unit release driven by a preconfigured algorithm for other purposes such as preserving data consistency.
S302, acquiring, based on the metadata at the tail of a cache unit, information on the BUF to which the cache unit belongs, and placing the cache unit back into that BUF;
As can be seen from the above, in the embodiment of the present invention a BUF includes a plurality of cache units together with cache unit metadata. The cache unit metadata is used to manage the data of the cache unit, for example tags, valid bits, and dirty bits. In particular, the cache unit metadata in the invention also records the BUF to which the cache unit belongs, so that a released cache unit can be returned to its home BUF in an orderly manner.
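The owning-BUF back-link in the metadata can be sketched as follows. The structure is a simplification invented for illustration (a dict standing in for tail metadata, Python object references standing in for pointers); it is not the patent's data layout.

```python
class Buf:
    """A BUF that collects its released (free) cache units."""

    def __init__(self, buf_id: int, n_units: int):
        self.buf_id = buf_id
        self.n_units = n_units
        self.free_units = []


class CacheUnit:
    """Cache unit whose tail metadata records the BUF it belongs to,
    so a released unit can be homed back to the right BUF."""

    def __init__(self, owner: Buf):
        self.meta = {"owner": owner, "valid": False, "dirty": False}


def release_unit(unit: CacheUnit) -> Buf:
    unit.meta["valid"] = False
    owner = unit.meta["owner"]      # read the owning BUF from the metadata
    owner.free_units.append(unit)   # put the unit back into its own BUF
    return owner
```

Because the back-link travels with the unit, the release path never needs to search the local queue to find the correct BUF.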
S303, updating the marking information of the BUF based on the state of the BUF to which the cache unit belongs before the cache unit is placed back into it;
In an exemplary embodiment of the present invention, if an available cache unit already exists in the BUF to which the released cache unit belongs, the BUF mark update is skipped; if the BUF to which the cache unit belongs is an exhausted BUF, the mark of that BUF is updated from exhausted BUF to available BUF.
It should be noted that once a cache unit has been released and returned to the BUF it belongs to, that BUF has at least one available cache unit. That is, if the BUF was marked as an exhausted BUF before the released cache unit was placed back, it should now be updated to an available BUF. On the other hand, if the BUF already contained an available cache unit before the released unit was placed back, adding one more available unit does not change the state of the BUF, and the corresponding mark-update flow may be skipped.
In a specific embodiment, updating the mark information of the BUF is determined as follows: if, after the cache unit is placed back, the BUF is a locally active BUF or an available BUF, the update is skipped; if it is an exhausted BUF, it is removed from the corresponding exhausted linked list and, if an available linked list exists, moved to the available linked list.
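The mark-update decision just described can be sketched as below. Plain Python lists stand in for the exhausted and available linked lists, and dicts stand in for BUFs; both are simplifications for illustration.

```python
def update_buf_mark(buf: dict, exhausted_list: list, available_list: list) -> str:
    """Update a BUF's mark after a released unit is put back into it.

    - A locally active or already-available BUF needs no mark change.
    - An exhausted BUF has just regained a free unit, so it moves from
      the exhausted linked list to the available linked list.
    """
    if buf["state"] in ("active", "available"):
        return buf["state"]          # skip: mark unchanged
    if buf["state"] == "exhausted":
        exhausted_list.remove(buf)
        buf["state"] = "available"
        available_list.append(buf)
    return buf["state"]
```

Skipping the update for active/available BUFs keeps the common release path free of list manipulation.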
S304, returning a BUF to the global lock-free queue based on the number of BUFs in the current local cache queue.
To ensure efficient and stable operation of the global lock-free queue, each CPU core must follow a preset resource-return mechanism after acquiring a BUF from the global lock-free queue. Moreover, when a core has finished operating on a BUF and returns it to the global queue, it must guarantee that every cache unit in that BUF is in a valid state and immediately available for allocation. This mandatory checking mechanism not only preserves the integrity of memory resources but is also an important basis for maintaining the high-performance characteristics of the lock-free queue.
On the other hand, to meet the cache-processing needs of the local cache queue while preventing resource redundancy, the scheme avoids two failure modes: returning a BUF too eagerly, which would leave local cache resources short and force a fresh request to the global lock-free queue, and retaining too many BUFs locally, which wastes cache resources. In one embodiment of the present invention, a preset threshold may be set: a BUF is returned to the global lock-free queue only if the number of BUFs in the local cache queue is greater than the preset threshold and all cache units of that BUF are available. The specific value of the preset threshold may change dynamically based on the estimated cache data size of the current CPU core, which the embodiment of the present invention does not limit.
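The two-condition return check can be sketched as a single predicate. The function name and parameter names are illustrative, not from the patent.

```python
def should_return_buf(local_buf_count: int, threshold: int,
                      units_total: int, units_free: int) -> bool:
    """Return a BUF to the global lock-free queue only when the local queue
    holds more BUFs than the preset threshold AND every cache unit in the
    candidate BUF is free (fully available)."""
    return local_buf_count > threshold and units_free == units_total
```

Requiring both conditions means a partially used BUF is never handed back, so the global queue only ever holds BUFs that are immediately allocatable in full.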
For example, in one embodiment, the preset threshold is adaptively adjusted by an elastic calculation method according to the real-time load characteristics and estimated data throughput of the current CPU core. This dynamic adjustment can be carried out through a sliding-window algorithm implemented by a monitoring module, keeping the threshold setting synchronized with the running state of the system.
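One plausible shape for such a sliding-window adjustment is shown below. The patent does not specify the formula, so the averaging rule, window size, and floor value here are all assumptions made for the sketch.

```python
import math
from collections import deque


class SlidingWindowThreshold:
    """Sliding-window estimator: the return threshold tracks the average
    number of BUFs the core actually needed over the last N samples."""

    def __init__(self, window: int, floor: int = 1):
        self.samples = deque(maxlen=window)  # old samples fall out automatically
        self.floor = floor

    def observe(self, bufs_in_use: int) -> int:
        self.samples.append(bufs_in_use)
        avg = sum(self.samples) / len(self.samples)
        # Round up and never drop below the floor: a burst of load raises
        # the threshold quickly, while idle periods lower it gradually.
        return max(self.floor, math.ceil(avg))
```

With `deque(maxlen=window)` the monitoring module only ever keeps the most recent samples, so the threshold naturally follows the core's current load.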
Referring to fig. 4, based on the same inventive concept as the above-mentioned fixed cache implementation method, an embodiment of the present invention provides a fixed cache implementation system 400, which includes a creation module 401, a response module 402, an election module 403, and a migration module 404.
Specifically, the creation module 401 is configured to create a global lock-free queue and a local cache queue, each of which includes a plurality of BUFs. The response module 402 is configured to elect a local active BUF in the local cache queue based on the number of available cache units in each BUF and, in response to a cache application instruction of a user, return an available cache unit from the local active BUF to store the data to be cached. The election module 403 is configured to, when the local active BUF has no available cache unit, mark that BUF as an exhausted BUF and re-elect a local active BUF. The migration module 404 is configured to, when the BUFs in the local cache queue are all marked as exhausted BUFs, acquire a BUF from the global lock-free queue, place it in the local cache queue, and define the acquired BUF as the new local active BUF.
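The election step performed by modules 402/403 can be sketched as follows. Dicts stand in for BUFs and the tie-breaking behavior of `max` is an implementation detail of the sketch, not specified by the patent.

```python
def elect_local_active(local_queue: list):
    """Elect the BUF with the most remaining free cache units as the local
    active BUF; exhausted BUFs (zero free units) can never win. Returning
    None signals that a fresh BUF must be fetched from the global queue."""
    candidates = [b for b in local_queue if b["free"] > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda b: b["free"])
```

Picking the BUF with the most free units keeps allocations concentrated in one BUF, so other BUFs drain to fully-free sooner and become eligible for return to the global queue.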
Referring to fig. 5, based on the same inventive concept as the above-mentioned fixed cache implementation method, another embodiment of the present invention provides a fixed cache implementation system 500, which includes a releasing module 501, an obtaining module 502, an updating module 503, and a returning module 504.
Specifically, the releasing module 501 is configured to respond to a cache release instruction of a user and release the data stored in the corresponding cache unit. The obtaining module 502 is configured to acquire, based on the metadata at the tail of the cache unit, information on the BUF to which the cache unit belongs, and to place the cache unit back into that BUF. The updating module 503 is configured to update the mark information of the BUF based on the state of the BUF before the cache unit is placed back into it. The returning module 504 is configured to return a BUF to the global lock-free queue based on the number of BUFs in the current local cache queue.
Referring to fig. 6, an embodiment of the present invention further provides an electronic device 600. The electronic device 600 includes at least one processor 601, a storage 602 (e.g., a nonvolatile storage), a memory 603, and a communication interface 604, which are connected together via an internal bus 605. The at least one processor 601 is configured to invoke at least one program instruction stored or encoded in the storage 602, causing the at least one processor 601 to perform the operations and functions of the fixed cache implementation method described in the various embodiments of this specification.
In embodiments of the present description, electronic device 600 may include, but is not limited to, a personal computer, a server computer, a workstation, a desktop computer, a laptop computer, a notebook computer, a mobile electronic device, a smart phone, a tablet computer, a cellular phone, a Personal Digital Assistant (PDA), a handheld device, a messaging device, a wearable electronic device, a consumer electronic device, and the like.
The embodiment of the invention also provides a computer readable medium, which carries computer executing instructions, and when the computer executing instructions are executed by a processor, the computer executing instructions can be used for realizing various operations and functions of the fixed cache realizing method described in various embodiments of the specification.
The computer readable medium in the present invention may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to, an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus, systems, and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing descriptions of specific exemplary embodiments of the present invention are presented for purposes of illustration and description. It is not intended to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teaching. The exemplary embodiments were chosen and described in order to explain the specific principles of the invention and its practical application to thereby enable one skilled in the art to make and utilize the invention in various exemplary embodiments and with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims and their equivalents.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is adopted merely for clarity; the specification should be taken as a whole, and the technical solutions in the embodiments may be suitably combined to form other embodiments that will be apparent to those skilled in the art.

Claims (10)

1. A fixed cache implementation method, characterized by comprising the following steps:
creating a global lock-free queue and a local cache queue, wherein the global lock-free queue and the local cache queue comprise a plurality of BUFs;
in the local buffer queue, selecting a local active BUF based on the number of available buffer units in each BUF, and returning an available buffer unit based on the local active BUF for storing data to be buffered in response to a buffer application instruction of a user;
if the local active BUF has no available cache unit, marking that local active BUF as an exhausted BUF and reselecting a local active BUF;
If the BUFs in the local cache queue are marked as used BUFs, acquiring an available BUF based on the global lock-free queue, putting the available BUF into the local cache queue, and defining the acquired available BUF as a new local active BUF.
2. The fixed cache implementation method of claim 1, wherein the method further comprises:
Applying for memory to the system based on the memory occupation of the expected single cache unit and the number of the expected cache units, wherein the applied memory is greater than or equal to the product of the memory occupation of the expected single cache unit and the number of the expected cache units;
Dividing the memory of the application into a plurality of BUFs, and dividing one or a plurality of cache units in each BUF.
3. The fixed cache implementation method according to claim 1, wherein the electing a locally active BUF based on the number of available cache units within each BUF comprises:
traversing the number of remaining available cache units in each BUF in a local cache queue;
and selecting the BUF with the largest number of remaining available cache units in the local cache queue as the local active BUF.
4. The fixed cache implementation method according to claim 1, wherein the acquiring a BUF based on the global lock-free queue and placing it in the local cache queue comprises:
acquiring an available BUF in a global lock-free queue;
dividing the acquired BUF into a plurality of cache units based on the number of cache units in each BUF in a local cache queue;
and placing the segmented BUF into the local cache queue.
5. A fixed cache implementation method, characterized by comprising the following steps:
responding to a buffer release instruction of a user, and releasing data stored in a corresponding buffer unit;
Based on metadata at the rear end of a cache unit, acquiring BUF information of the cache unit and placing the cache unit into BUF of the cache unit;
Updating the marking information of the BUF based on the state of the BUF to which the caching unit belongs before being put into the caching unit;
And returning BUFs to the global lock-free queue based on the number of BUFs of the current local cache queue.
6. The fixed cache implementation method according to claim 5, wherein updating the tag information of the BUF based on the state of the BUF to which the cache unit belongs before being placed in the cache unit includes:
If available cache units exist in the BUF to which the cache units belong, skipping BUF mark updating;
And if the BUF to which the cache unit belongs is the used BUF, updating the mark of the BUF to which the cache unit belongs from the used BUF to the available BUF.
7. The method according to claim 5, wherein returning the BUF to the global lock-free queue based on the number of BUFs of the current local cache queue comprises:
if, after the cache unit is placed back into the BUF, all cache units in the BUF are available and the number of local BUFs is greater than a preset threshold, returning the BUF to which the cache unit belongs to the global lock-free queue;
and if, after the cache unit is placed back into the BUF, an unavailable cache unit exists in the BUF or the number of local BUFs is less than or equal to the preset threshold, not returning the BUF to the global lock-free queue.
8. A fixed cache implementation system, comprising:
The system comprises a creation module, a storage module and a storage module, wherein the creation module is used for creating a global lock-free queue and a local cache queue, and the global lock-free queue and the local cache queue comprise a plurality of BUFs;
The response module is used for electing a local active BUF in the local cache queue based on the number of available cache units in each BUF, and for responding to a cache application instruction of a user by returning an available cache unit from the local active BUF to store data to be cached;
the election module is used for updating the local active BUF of the unavailable cache unit into an exhausted BUF when the local active BUF has no available cache unit, and reelecting the local active BUF;
And the migration module is used for acquiring a BUF based on the global lock-free queue and placing the BUF into the local cache queue when the BUFs in the local cache queue are marked as used-up BUFs, and defining the acquired BUFs as new local active BUFs.
9. A computer device comprising a memory and a processor, said memory and said processor being communicatively coupled to each other, said memory having stored therein computer instructions, said processor executing said computer instructions to perform the fixed cache implementation method of any of claims 1-7.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform the fixed cache implementation method of any one of claims 1-7.
Publications (1)

Publication Number Publication Date
CN120407174A true CN120407174A (en) 2025-08-01