CN119127481B - Memory allocation method for novel nonvolatile memory - Google Patents
- Publication number: CN119127481B (application CN202411151881.7A)
- Authority
- CN
- China
- Prior art keywords
- memory
- block
- super block
- super
- size
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention belongs to the technical field of computer storage, and particularly relates to a memory allocation method for a novel nonvolatile memory. The method comprises the steps of dividing a plurality of independent memory areas from a persistent memory heap of the nonvolatile memory so that each working thread can be assigned one memory area; logically dividing each memory area into a plurality of super blocks of equal size and managing the free super blocks in doubly linked lists; during global memory application, splitting different super blocks according to different size classes, with any one super block split into a plurality of memory blocks of the same size; setting up a private memory pool for each working thread; and, according to the size of a working thread's memory allocation request, locating a super block of the corresponding size class in the private memory pool and allocating a free memory block from it.
Description
Technical Field
The invention belongs to the technical field of computer storage, and particularly relates to a memory allocation method for a novel nonvolatile memory.
Background
Novel nonvolatile memory (also called persistent memory) is an emerging computer storage medium characterized by low access latency, large storage capacity, byte addressability, and long-term data retention after power loss. Nonvolatile memory devices include memristors, phase-change memories, and the like.
The existing computer storage system adopts a structure in which internal memory and external storage are separated. Internal memory uses dynamic random access memory (DRAM), which offers low access latency and byte addressability, to cache the temporary data needed while programs run. However, because DRAM is volatile, data in memory is lost after a power failure. External storage uses larger-capacity, nonvolatile disk (or solid-state drive) devices to hold data that must be kept long term. However, external storage has extremely high access latency (tens of thousands of times that of memory), cannot be addressed by bytes, and can only be read and written through operating-system I/O, making it very inefficient. The emergence of novel nonvolatile memory bridges the performance gap between external storage and memory and provides a basis for building more efficient storage systems.
In order to fully utilize the performance advantage of the nonvolatile memory, the operating system provides a direct access mechanism, so that an application program (i.e. a user) can bypass the operating system kernel and directly read and write the nonvolatile memory in a user mode. Unlike conventional magnetic disks or solid state disks (which can only read and write with a granularity of 4 KB), nonvolatile memory can be accessed with a granularity of bytes, thus requiring a finer granularity memory allocation mechanism.
Existing work mainly performs fine-grained memory allocation through different size classes, achieves concurrent memory allocation by establishing a private memory pool for each application program, and ensures the crash consistency of metadata through write-ahead logging.
However, existing work generally suffers from heavy concurrency conflicts during global memory allocation. A thread's private memory pool becomes unusable when it is exhausted or when the requested size exceeds the largest memory block the pool can provide. When the private memory pool fails in this way, the thread must allocate memory from the global persistent memory heap. In some cases, multiple worker threads perform global memory allocation at the same time, generating conflicts and contention. Furthermore, existing designs require a worker thread to write a log and modify metadata after every memory allocation, causing a large number of repeated cache-line flushes; this adds time overhead and amplifies the write-wear problem of the nonvolatile memory. In addition, the mutual isolation between different super blocks in the same size class leads to memory fragmentation, reducing the utilization of memory space.
Disclosure of Invention
Aiming at the defects and improvement demands of the prior art, the invention provides a memory allocation method for a novel nonvolatile memory, which aims to reduce cache-line flushes during memory block management and thereby avoid the associated time overhead and write wear.
In order to achieve the above object, according to an aspect of the present invention, there is provided a memory allocation method for a novel nonvolatile memory, including:
The initialization stage comprises: dividing, based on a main thread, a plurality of independent memory areas from the persistent memory heap of the nonvolatile memory; logically dividing each memory area into a plurality of super blocks of equal size and linking them into doubly linked lists; and setting up a plurality of private memory pools, initializing each private memory pool to be empty by generating a plurality of empty super block doubly linked lists, wherein each doubly linked list corresponds to one memory block size class;
the distribution phase is to execute global memory application and memory block distribution based on each working thread, wherein:
The global memory application comprises: after receiving a memory allocation request, judging whether the private memory pool of the worker thread is empty; if so, applying for a free super block, via the doubly linked list, from the memory area of the worker thread and placing it into the private memory pool; determining, by aligning upward, the size class closest to the requested size from a plurality of preset selectable splitting size classes; splitting the free super block according to that closest size class to obtain a plurality of memory blocks; adding the super block to the doubly linked list corresponding to that memory block size class; and executing memory block allocation;
the memory block allocation is to locate a super block of the corresponding size class in the private memory pool according to the requested size, and to allocate a free memory block from that super block to the user. The memory blocks of the super block are managed with a batch persistence mechanism: the header of the super block is cached in DRAM to obtain a metadata copy; when memory blocks are allocated from the super block repeatedly, only the metadata copy of the super block is updated; and when the number of updates reaches a batch value, the updates to the metadata copy are synchronized to the header of the super block in the private memory pool.
Further, the method further comprises the following steps:
Before the initialization phase, the non-volatile memory is mapped in a direct access mode to a virtual address space of the user, which virtual address space acts as the persistent memory heap.
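As an illustrative sketch only (not the patent's implementation), the direct-access mapping step can be modeled with Python's mmap module. Here an anonymous mapping stands in for the nonvolatile device; a real DAX deployment would instead map an fsdax-backed file with MAP_SHARED_VALIDATE | MAP_SYNC so that stores reach the persistent media directly. The "HEAP" formatting mark is a hypothetical placeholder:

```python
import mmap

def map_persistent_heap(size: int) -> mmap.mmap:
    """Map a region of the virtual address space to act as the persistent
    memory heap.

    Stand-in: an anonymous mapping. A real DAX setup would mmap a file on
    an fsdax namespace (MAP_SHARED_VALIDATE | MAP_SYNC) instead.
    """
    return mmap.mmap(-1, size)

heap = map_persistent_heap(1 << 20)   # 1 MiB demo heap
heap[0:4] = b"HEAP"                   # hypothetical formatting mark in the heap header
```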
Further, the dividing a plurality of independent memory areas from the persistent memory heap of the nonvolatile memory specifically includes:
Taking part of head space of the persistent memory heap as a heap head area for storing metadata information of the persistent memory heap, and dividing the rest part into a plurality of independent memory areas;
the metadata information comprises a formatting mark of a persistent memory heap, a starting address and total size of the persistent memory heap, a starting address of the persistent memory heap in the last running process, metadata of a private memory pool of each working thread and log data.
Further, the logical partitioning of each memory area into a plurality of super blocks with equal sizes is specifically:
The header of each memory area is used to store state information of the memory area, including the number of super blocks contained in the memory area, the number of currently free super blocks in the memory area, a pointer to the first free super block in the memory area, and a pointer to the last free super block in the memory area; the size of the header is the same as that of the super blocks to be divided;
the remainder is logically divided into a plurality of super blocks of equal size, and the header of each super block is used to store state information of the super block, specifically including the size-class number of the super block, the size of each memory block in the super block, the total number of memory blocks in the super block, the number of currently free memory blocks in the super block, a bitmap marking the states of all memory blocks in the super block, the number of the first free memory block in the super block, a pointer to the next super block in the same size class, and a pointer to the previous super block in the same size class.
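For illustration, the two headers just described can be sketched as Python dataclasses; the field names are assumptions, and the patent itself stores these values as raw fields at the head of each memory area and super block:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuperblockHeader:
    size_class: int                      # size-class number of the super block
    block_size: int                      # size of each memory block
    total_blocks: int                    # total memory blocks in the super block
    free_blocks: int                     # currently free memory blocks
    bitmap: int = 0                      # one bit per block; 1 = allocated
    first_free: int = 0                  # number of the first free memory block
    next_sb: Optional["SuperblockHeader"] = None  # next in same size class
    prev_sb: Optional["SuperblockHeader"] = None  # previous in same size class

@dataclass
class RegionHeader:
    total_superblocks: int               # super blocks contained in the area
    free_superblocks: int                # currently free super blocks
    first_free: Optional[SuperblockHeader] = None  # first free super block
    last_free: Optional[SuperblockHeader] = None   # last free super block
```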
Further, each working thread selects and determines a memory area from the persistent memory heap based on a hash function to serve as the memory area of the working thread;
Wherein the hash function is H = (TID + rand()) mod R, where TID is the thread number of the worker thread, rand() is a random number generation function, R is the number of memory areas, and H is the computed memory area number.
Further, the state information stored in the header of each memory region also includes a 64-bit tag variable;
After the memory area of each worker thread is determined, a CAS atomic instruction is used to change the tag variable of that memory area to the thread number of the worker thread, preventing other worker threads from simultaneously allocating memory in the same area; after the worker thread finishes its memory allocation, the tag variable of the memory area is reset.
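A minimal sketch of this per-area claiming protocol follows. Python has no hardware CAS instruction, so the atomicity of CAS is modeled with a lock-protected compare-and-set; the class and function names are assumptions for illustration:

```python
import threading

class MemoryRegion:
    def __init__(self):
        self.tag = 0                      # 64-bit tag variable; 0 = unclaimed
        self._atomic = threading.Lock()   # models the atomicity of the CAS instruction

    def cas_tag(self, expected: int, new: int) -> bool:
        """Model of a CAS atomic instruction on the tag variable."""
        with self._atomic:
            if self.tag == expected:
                self.tag = new
                return True
            return False

def try_claim(region: MemoryRegion, tid: int) -> bool:
    # A worker thread claims the region by writing its thread number.
    return region.cas_tag(0, tid)

def release(region: MemoryRegion, tid: int) -> None:
    # Reset the tag after the thread finishes its allocation.
    region.cas_tag(tid, 0)
```

A thread whose `try_claim` fails must retry (or pick another region) until the earlier thread releases the tag.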
Further, the method further comprises the following steps:
In the initialization stage, a plurality of background threads are started to monitor in real time the usage of each worker thread's private memory pool and of the global persistent memory heap, and to perform a defragmentation operation when specific conditions are met;
the specific condition comprises: for any size class in any worker thread's private memory pool, if the proportion of free memory in that size class is higher than a first preset value and the currently available memory space in the persistent memory heap is lower than a second preset value, a super block merging operation is executed for that size class of the private memory pool.
Further, the first preset value is 80% and the second preset value is 20%.
The invention also provides an electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the memory allocation method for a novel nonvolatile memory described above.
The invention also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor implements a memory allocation method for a novel non-volatile memory as described above.
In general, through the above technical solutions conceived by the present invention, the following beneficial effects can be obtained:
(1) The invention provides a memory allocation method for a novel nonvolatile memory. A direct access mechanism maps the nonvolatile memory into a persistent memory heap, and the heap is divided into a plurality of independent areas to support concurrent memory allocation by multithreaded applications. A private memory pool is set up for each active thread, which relieves concurrency conflicts in multithreaded memory allocation and shortens the critical path of memory allocation, greatly reducing its response time. Write-ahead logging ensures the crash consistency of metadata and avoids memory leaks caused by partially persisted metadata. Batch persistence accelerates metadata updates and reduces the time overhead of write-ahead logging, while redirecting a large number of writes to the nonvolatile memory into writes to DRAM, which further improves performance and at the same time relieves the write-wear problem of the nonvolatile memory.
(2) Because different size classes are isolated from one another, a large amount of memory fragmentation can appear in some cases, reducing memory utilization. To reduce fragmentation and improve utilization, the memory management method of the invention creates a background thread to monitor memory usage in real time; when the available memory space is insufficient or fragmentation is severe, it performs a super block merging operation that consolidates the fragments within super blocks of the same size class and releases the free super blocks obtained after consolidation, thereby producing available memory space. This effectively reduces memory fragmentation and improves the utilization of memory space.
Drawings
FIG. 1 is a flow chart of a memory allocation method for a novel nonvolatile memory according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a batch persistence scheme provided by an embodiment of the present invention;
fig. 3 is a general schematic diagram of a memory allocation method for a novel nonvolatile memory according to an embodiment of the present invention;
fig. 4 is a flowchart of a memory allocation request processing provided in an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Example 1
A memory allocation method for a novel nonvolatile memory, as shown in figure 1, comprises the following steps:
the initialization stage comprises: dividing, based on a main thread, a plurality of independent memory areas from the persistent memory heap of the nonvolatile memory; logically dividing each memory area into a plurality of super blocks of equal size and linking them into doubly linked lists; and setting up a plurality of private memory pools, initializing each private memory pool to be empty by generating a plurality of empty super block doubly linked lists, wherein each doubly linked list corresponds to one memory block size class.
It should be noted that re-initialization is required after the program or the computer restarts. One user is one application program, corresponding to one main thread and several worker threads. The number of private memory pools is not less than the number of worker threads. Each worker thread's private memory pool belongs to that thread only and cannot be accessed by other threads. Each pool contains a thread identifier: if the identifier is 0, the private memory pool is not occupied by any thread; otherwise it indicates the ID of the thread occupying the pool. When a thread occupies a private memory pool for the first time, the identifier is set to that thread's ID, and other threads can no longer access that pool.
The distribution phase is to execute global memory application and memory block distribution based on each working thread, wherein:
The global memory application comprises: after receiving a memory allocation request, judging whether the private memory pool of the worker thread is empty; if so, applying for a free super block, via the doubly linked list, from the memory area of the worker thread according to the size of the memory allocation request and placing it into the private memory pool; determining, by aligning upward, the size class closest to the requested size from a plurality of preset selectable splitting size classes (so that the memory blocks are usable without waste); splitting the free super block according to that closest size class to obtain a plurality of memory blocks; adding the super block to the doubly linked list corresponding to that memory block size class; and executing memory block allocation;
The memory block allocation is to locate a super block of the corresponding size class in the private memory pool according to the requested size and to allocate a free memory block from that super block to the user, managing the memory blocks of the super block with a batch persistence mechanism. Concretely, the header of the super block is cached in DRAM to obtain a metadata copy; each time memory blocks are allocated from the super block, only the metadata copy is updated and an update counter is incremented by one; when the value of the update counter reaches the batch size, the updates to the metadata copy are synchronized to the header of the super block in the private memory pool and the update counter is cleared.
A small number of super blocks allocated from the global persistent memory heap are cached in each worker thread's private memory pool and organized into a plurality of doubly linked lists according to their size classes. Super blocks of the same size class are in the same linked list, and each is split into a number of small memory blocks of the same size. When the super blocks in a thread's private memory pool are insufficient, several free super blocks are allocated from the global persistent memory heap to replenish it.
The method of this embodiment addresses the problem of fine-grained memory allocation under the direct access mode of novel nonvolatile memory. It performs fine-grained allocation and reclamation of the nonvolatile memory mapped into the application's user-mode virtual address space (this memory space is called the persistent memory heap). Specifically, free super blocks are managed and allocated via doubly linked lists, global memory application and reclamation use the super block as their granularity, multithreaded concurrent memory allocation is supported, and the crash consistency of metadata is guaranteed.
The memory allocation method first divides a plurality of independent memory areas from the persistent memory heap of the nonvolatile memory, ensuring that each worker thread can be assigned one memory area for memory application. Each memory area in the persistent memory heap is logically divided into a plurality of super blocks of equal size; the free super blocks are managed and allocated via doubly linked lists, and global memory application is performed in units of super blocks. During global memory application, different super blocks are split according to different size classes: one super block is split into a plurality of memory blocks of the same size, while the memory block sizes in different super blocks may differ. Meanwhile, a private memory pool is set up for each worker thread for thread-internal memory allocation. According to the size of a worker thread's memory allocation request, a super block of the corresponding size class is located in the private memory pool and a free memory block is allocated from it. During memory block allocation, a batch persistence mechanism is provided, and crash consistency of metadata is ensured by writing a log before modifying the metadata.
This embodiment adopts write-ahead logging to ensure the crash consistency of metadata. Because of the persistence characteristic of the novel nonvolatile memory, when the system crashes unexpectedly or power is cut off, metadata in the CPU cache may not yet have been fully written back to the nonvolatile memory, leaving the metadata only partially persisted. Partial persistence of metadata causes inconsistency in the system state, leading to serious problems such as memory leaks and runtime errors. To avoid this, the embodiment employs write-ahead logging: when an application process allocates a memory block from a super block, the super block's header metadata is first written to the log area and persisted before the super block's metadata may be modified. Thus, after a system crash, the header metadata of the super block can be recovered from the metadata entries stored in the log.
This embodiment provides a batch persistence mechanism to reduce the time overhead of frequent log writes. Because of the loop structure of programs, memory blocks of the same size may be allocated many times in succession, and such blocks are often obtained from the same super block. If the metadata of the super block (i.e., its header) were logged and persisted after every allocation, the same memory address would be written repeatedly, causing unnecessary time overhead. In view of this, as shown in fig. 2, the invention adopts a batch persistence technique: the metadata item in the nonvolatile memory is cached in DRAM to obtain a metadata copy (called the volatile header); when memory blocks are allocated from the same super block multiple times in succession, only the metadata backup in DRAM is updated and an update counter is incremented by one. When the value of the update counter reaches the batch size, the metadata updates in DRAM are synchronized to the nonvolatile memory and the counter is cleared; the synchronization frequency is thus determined by the batch size. Because of the volatility of DRAM, no log needs to be written when updating the metadata copy in DRAM; a log is written only during the synchronization operation to the nonvolatile memory. The batch persistence mechanism effectively reduces the repeated operations of writing logs and writing the nonvolatile memory, while redirecting a large number of writes to the nonvolatile memory into writes to DRAM, further improving performance and at the same time relieving the write-wear problem of the nonvolatile memory.
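The counting logic of the batch persistence mechanism can be sketched as follows. Plain dicts stand in for the NVM-resident and DRAM-resident headers, and the actual log write and cache-line flush on synchronization are reduced to a counter for illustration; the class and field names are assumptions:

```python
class BatchedHeader:
    """Volatile DRAM copy of a super block header, synced in batches."""

    def __init__(self, nvm_header: dict, batch_size: int = 8):
        self.nvm = nvm_header              # stands in for the NVM-resident header
        self.volatile = dict(nvm_header)   # the "volatile header" cached in DRAM
        self.batch_size = batch_size
        self.update_count = 0
        self.sync_count = 0                # counts log-write + persist events

    def allocate_block(self) -> None:
        self.volatile["free_blocks"] -= 1  # update only the DRAM copy
        self.update_count += 1
        if self.update_count == self.batch_size:
            # Synchronize: write-ahead log the header, then persist it to NVM.
            self.nvm.update(self.volatile)
            self.update_count = 0
            self.sync_count += 1
```

With a batch size of 4, seven consecutive allocations persist the header only once instead of seven times; only that one synchronization needs a log entry.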
As a preferred embodiment, the method further comprises:
Before the initialization stage, the nonvolatile memory device is mounted on a file system, and the nonvolatile memory is mapped to a virtual address space of an application program in a memory mapping mode, wherein the virtual address space is used as a persistent memory heap.
For example, the nonvolatile memory in this embodiment is mounted by the following commands:
sudo ndctl create-namespace --mode=devdax --map=mem
A preferred mode is a memory allocation method for a novel nonvolatile memory in which a direct access mechanism maps the nonvolatile memory into a persistent memory heap, and the heap is divided into a plurality of independent areas to support concurrent memory allocation by multithreaded applications. A private memory pool is set up for each active thread, which relieves concurrency conflicts in multithreaded memory allocation and shortens the critical path of memory allocation, greatly reducing its response time. Write-ahead logging ensures the crash consistency of metadata and avoids memory leaks caused by partially persisted metadata. Batch persistence accelerates metadata updates and reduces the time overhead of write-ahead logging, while redirecting a large number of writes to the nonvolatile memory into writes to DRAM, which further improves performance and at the same time relieves the write-wear problem of the nonvolatile memory.
As a preferred embodiment, the above-mentioned dividing a plurality of independent memory areas from the persistent memory heap of the nonvolatile memory specifically includes:
Part of the head space of the persistent memory heap, 1 MB in size, serves as the heap header area for storing important metadata information of the persistent memory heap; the remainder is divided into a plurality of independent memory areas for memory allocation by different threads;
the important metadata information comprises an 8-byte formatting mark of the persistent memory heap, the starting address and total size of the persistent memory heap, the starting address of the persistent memory heap during its previous run, the metadata of each worker thread's private memory pool, and log data.
As shown in fig. 3, the remaining memory space apart from the heap header is divided into a plurality of independent memory areas of the same size; in this embodiment, the size of each memory area is set to 256 MB. Further, the header of each memory region (for example, its first 16 KB) is used to store state information for that memory region.
Then, as a preferred embodiment, the above-mentioned memory areas are logically divided into a plurality of super blocks with equal sizes, specifically:
The header of each memory area is used to store state information of the memory area, specifically including the number of super blocks contained in the memory area, the number of currently free super blocks in the memory area, a pointer to the first free super block in the memory area, and a pointer to the last free super block in the memory area; the size of the header is the same as that of the super blocks to be divided, for example 16 KB;
the remainder is logically divided into a plurality of super blocks of equal size, and the header of each super block (for example, its first 128 bytes) is used to store state information of the super block, specifically including the size-class number of the super block, the size of each memory block in the super block, the total number of memory blocks in the super block, the number of currently free memory blocks in the super block, a bitmap marking the states of all memory blocks in the super block, the number of the first free memory block in the super block, a pointer to the next super block in the same size class, and a pointer to the previous super block in the same size class.
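Using the bitmap field described above, allocating a free memory block from a super block can be sketched as follows. A dict stands in for the 128-byte header, and the field names are assumptions; the sketch returns the block's byte offset within the super block:

```python
def alloc_block(sb: dict):
    """Return the offset of a newly allocated block, or None if the
    super block is full."""
    if sb["free_blocks"] == 0:
        return None
    for i in range(sb["total_blocks"]):
        if not (sb["bitmap"] >> i) & 1:    # bit i == 0: block i is free
            sb["bitmap"] |= 1 << i         # mark block i as allocated
            sb["free_blocks"] -= 1
            return i * sb["block_size"]    # offset within the super block
    return None
```

A real implementation would start the scan from the recorded first-free-block number instead of bit 0 to avoid rescanning allocated prefixes.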
In this embodiment, global memory application is performed in units of super blocks; that is, each global application obtains several consecutive free super blocks. The free super blocks in each memory region are organized in a doubly linked list arranged in order of address from low to high. When super blocks are allocated from a memory area, the super block linked list is traversed from front to back and allocation follows the first-fit principle; when super blocks are reclaimed, the linked list is traversed from front to back and an appropriate position is chosen for insertion so as to preserve the address ordering. After insertion, consecutive free super blocks are merged.
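The first-fit allocation and address-ordered reclamation with merging can be sketched over runs of free super blocks. For brevity the doubly linked list is modeled as a sorted Python list of (start, length) runs; names are assumptions:

```python
class SuperblockFreeList:
    """Address-ordered free list of super block runs with first-fit allocation."""

    def __init__(self, total: int):
        self.runs = [(0, total)]          # (first super block index, run length)

    def alloc(self, n: int):
        for i, (start, length) in enumerate(self.runs):
            if length >= n:               # first-fit principle
                if length == n:
                    self.runs.pop(i)
                else:
                    self.runs[i] = (start + n, length - n)
                return start
        return None                       # no run is large enough

    def free(self, start: int, n: int) -> None:
        self.runs.append((start, n))
        self.runs.sort()                  # keep address order, low to high
        merged = []
        for s, length in self.runs:       # merge consecutive free runs
            if merged and merged[-1][0] + merged[-1][1] == s:
                ps, pl = merged[-1]
                merged[-1] = (ps, pl + length)
            else:
                merged.append((s, length))
        self.runs = merged
```

Freeing two adjacent allocations restores a single contiguous run, mirroring the merging of consecutive free super blocks after insertion.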
The preset plurality of selectable split size classes may include:
{8,16,24,32,40,48,56,64,80,96,112,128,160,192,224,256,320,384,448,512,640,768,896,1024,1280,1536,1792,2048,2560,3072,3584,4096,5120,6144,7168,8192}
In this embodiment, any requested size will be aligned up to the nearest size class. For example, a memory allocation request of 500 bytes in size will be aligned up to 512 bytes. As shown in fig. 4, memory allocation requests exceeding 8192 bytes (8 KB) in size will be allocated directly to a number of consecutive free superblocks. For example, a memory allocation request of 50KB will result in 4 consecutive idle superblocks, totaling 64KB of consecutive memory space. And memory allocation requests exceeding 128MB in size directly map a new memory region.
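The upward alignment and the over-8-KB path can be sketched directly from the size-class table above; the 16 KB super block size follows the embodiment's example (50 KB → 4 super blocks = 64 KB), and the function name is an assumption:

```python
import bisect

SIZE_CLASSES = [8, 16, 24, 32, 40, 48, 56, 64, 80, 96, 112, 128,
                160, 192, 224, 256, 320, 384, 448, 512, 640, 768,
                896, 1024, 1280, 1536, 1792, 2048, 2560, 3072,
                3584, 4096, 5120, 6144, 7168, 8192]
SUPERBLOCK = 16 * 1024                    # 16 KB per super block (embodiment)

def round_up(request: int) -> int:
    """Align a request up to the nearest size class; larger requests
    receive whole consecutive super blocks."""
    if request <= SIZE_CLASSES[-1]:
        return SIZE_CLASSES[bisect.bisect_left(SIZE_CLASSES, request)]
    n_superblocks = -(-request // SUPERBLOCK)   # ceiling division
    return n_superblocks * SUPERBLOCK
```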
A small number of super blocks allocated from the global persistent memory heap are cached in each worker thread's private memory pool and organized into a plurality of doubly linked lists according to their size classes. Super blocks of the same size class are in the same linked list, and each is split into a number of small memory blocks of the same size. In this embodiment, 36 different size classes are used, so there are at most 36 doubly linked lists in each thread's private memory pool, the actual number being determined by the user's needs. The super blocks in the lists are split into small memory blocks of {8, 16, 24, 32, ..., 8192} bytes.
When the super blocks in the thread private memory pool are insufficient, the thread applies for a plurality of idle super blocks from the global persistent memory heap to be supplemented.
When the thread applies for global memory, a hash function is used to assign a memory area for the thread, that is, as a preferred implementation manner, each working thread selects and determines a memory area from the persistent memory heap based on the hash function to be used as the memory area of the working thread;
Wherein the hash function is H = (TID + rand()) mod R, where TID is the thread number of the worker thread, rand() is a random number generation function, R is the number of memory areas, and H is the computed memory area number.
As a preferred embodiment, the state information stored in the header of each memory region also includes a 64-bit tag variable;
After the target memory area of each working thread is determined, a CAS atomic instruction is used to set the tag variable of the target memory area to the thread number of the working thread, preventing other working threads from simultaneously performing memory allocation in that area; after the working thread finishes its allocation, the tag variable of the memory area is reset.
That is, to prevent multiple threads from allocating memory in the same memory area at the same time, the thread that reaches the area first uses a CAS atomic instruction to set the area's tag variable to its own thread number; threads that arrive later cannot enter the area to allocate memory until the previous thread finishes its allocation and exits the area.
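The CAS-based claim/release protocol can be sketched with C11 atomics. The patent specifies a 64-bit tag variable in the region header; the structure layout and the convention that 0 means "unclaimed" are assumptions for illustration.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical region header: the patent specifies a 64-bit tag
 * variable; 0 is assumed here to mean "unclaimed". */
struct region_header {
    _Atomic uint64_t tag;   /* 0 = free, otherwise the owner's thread id */
};

/* Try to claim the region with a compare-and-swap; succeeds only if the
 * tag is still 0, i.e. no other thread currently owns the region. */
bool region_try_claim(struct region_header *r, uint64_t tid) {
    uint64_t expected = 0;
    return atomic_compare_exchange_strong(&r->tag, &expected, tid);
}

/* Reset the tag after allocation finishes, releasing the region. */
void region_release(struct region_header *r) {
    atomic_store(&r->tag, 0);
}
```

A thread whose `region_try_claim` fails would, per the description above, either wait or rehash to another region until the current owner calls `region_release`.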
As shown in fig. 4, when a user's memory allocation request does not exceed 8KB, it is served from the thread's private memory pool. If the thread has not yet claimed a private memory pool, it claims a free one. Preferably, in this embodiment, 1024 free private memory pools are provided, so up to 1024 working threads can perform memory allocation simultaneously. Further preferably, when the private memory pool of a thread is exhausted, super blocks are requested from the global persistent memory heap to replenish it.
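The routing decision above reduces to a simple threshold test. This is a minimal sketch; the enum names are illustrative, and the actual pool and heap allocators are omitted.

```c
#include <stddef.h>

#define SMALL_LIMIT 8192   /* requests up to 8 KB are served by the private pool */

enum alloc_source { FROM_PRIVATE_POOL, FROM_GLOBAL_HEAP };

/* Hypothetical routing decision: per the description, requests of at
 * most 8 KB go to the thread's private memory pool, while larger
 * requests are served directly from the global persistent heap. */
enum alloc_source route_request(size_t size) {
    return (size <= SMALL_LIMIT) ? FROM_PRIVATE_POOL : FROM_GLOBAL_HEAP;
}
```

Keeping small allocations thread-local avoids taking the global heap's region lock on the common path; only large requests (and pool refills) touch shared state.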
As a preferred embodiment, the method further comprises:
Starting a plurality of background threads during initialization, monitoring in real time the usage of each working thread's private memory pool and of the global persistent memory heap, and performing a defragmentation operation when a specific condition is met;
The specific condition is: for any size class in the private memory pool of any working thread, if the proportion of free memory in that size class is higher than a first preset value and the currently available memory space in the persistent memory heap is lower than a second preset value, a super block merging operation is executed for that size class of the private memory pool. Preferably, the preset values are 80% and 20%, respectively.
Because different size classes are isolated from one another, severe memory fragmentation can appear in some cases, reducing memory utilization. To reduce fragmentation and improve memory utilization, the memory management method provided by the invention creates a background thread to monitor memory usage in real time, performs super block merging when available memory space is insufficient or fragmentation is severe, consolidates the fragments within super blocks of the same size class, and releases the super blocks that become free after consolidation, thereby producing available memory space. This method effectively reduces memory fragmentation and improves memory space utilization.
When the super block merging operation is performed in this embodiment, the background thread selects the super block B1 with the lowest utilization rate in the current super block linked list, and the super block B2 with the highest utilization rate among the super blocks whose utilization rates are not higher than 1 − r(B1), where r(B1) denotes the utilization rate of super block B1. B1 and B2 are removed from the current linked list, all allocated memory blocks in B1 are copied to free memory blocks in B2, B1 is returned to the global persistent memory heap after copying finishes, and B2 is added back to the super block linked list. Since r(B2) ≤ 1 − r(B1), it can be shown that the free memory blocks in B2 are no fewer than the allocated memory blocks in B1, i.e., the free memory blocks in B2 are sufficient to accommodate the copies of all allocated memory blocks in B1.
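The selection of the merge pair (B1, B2) can be sketched as below. The superblock descriptor fields are stand-ins (the patent does not specify how utilization is tracked), and the list is shown as an array rather than the doubly linked list used in the implementation.

```c
#include <stddef.h>

/* Hypothetical superblock descriptor: the patent tracks per-superblock
 * utilization; these fields are illustrative stand-ins. */
struct superblock {
    int used;    /* allocated small memory blocks */
    int total;   /* total small memory blocks in the superblock */
};

static double util(const struct superblock *b) {
    return (double)b->used / (double)b->total;
}

/* Pick B1 (lowest utilization) and B2 (highest utilization among blocks
 * with r(B) <= 1 - r(B1)).  Returns 0 on success, -1 if no valid pair. */
int pick_merge_pair(const struct superblock *list, int n, int *b1, int *b2) {
    if (n < 2) return -1;
    *b1 = 0;
    for (int i = 1; i < n; i++)
        if (util(&list[i]) < util(&list[*b1])) *b1 = i;

    double limit = 1.0 - util(&list[*b1]);
    *b2 = -1;
    for (int i = 0; i < n; i++) {
        if (i == *b1 || util(&list[i]) > limit) continue;
        if (*b2 < 0 || util(&list[i]) > util(&list[*b2])) *b2 = i;
    }
    /* Since r(B2) <= 1 - r(B1), B2's free blocks can absorb all of B1's
     * allocated blocks (assuming equal-sized superblocks). */
    return (*b2 < 0) ? -1 : 0;
}
```

The constraint r(B2) ≤ 1 − r(B1) is exactly what guarantees the copy succeeds: with equal-sized superblocks, B2's free fraction is at least B1's used fraction.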
In general, as shown in fig. 3, to fully exploit the performance advantages of the novel nonvolatile memory, the invention maps the nonvolatile memory into a persistent memory heap in direct access mode, divides the persistent memory heap into a plurality of independent memory areas, and manages and allocates each memory area in units of super blocks. In addition, a private memory pool is set for each working thread, reducing conflicts during concurrent multi-thread memory allocation. To ensure the crash consistency of the metadata and avoid memory leaks and operational errors caused by system crashes, the method writes logs first and modifies the metadata afterwards. Further, the invention adopts a batch persistence mechanism to reduce the time overhead of writing logs. In addition, the invention creates a background thread to monitor the usage and fragmentation of the persistent memory heap in real time and performs super block merging when necessary, thereby reducing memory fragmentation and improving memory space utilization.
Embodiment Two
An electronic device comprising a memory storing a computer program and a processor that implements the steps of the method described above when executing the computer program.
The electronic device can be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the electronic device by running or executing the computer programs and/or modules stored in the memory and invoking data stored in the memory.
The related technical solution is the same as the first embodiment, and will not be described herein.
Embodiment Three
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method described above.
In particular, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The related technical solution is the same as the first embodiment, and will not be described herein.
It will be readily appreciated by those skilled in the art that the foregoing description is merely a preferred embodiment of the invention and is not intended to limit the invention, but any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within the scope of the invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411151881.7A CN119127481B (en) | 2024-08-21 | Memory allocation method for novel nonvolatile memory |
Publications (2)
Publication Number | Publication Date |
---|---|
CN119127481A CN119127481A (en) | 2024-12-13 |
CN119127481B true CN119127481B (en) | 2025-09-16 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102866954A (en) * | 2012-08-31 | 2013-01-09 | 华为技术有限公司 | Method and device for allocating internal memory |
CN103914265A (en) * | 2014-04-09 | 2014-07-09 | 江苏物联网研究发展中心 | Cluster fine-grained memory management method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |