CN119002815A - Data processing method, device, storage equipment and storage medium - Google Patents


Info

Publication number
CN119002815A
CN119002815A
Authority
CN
China
Prior art keywords
data
cache
written
target data
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411140940.0A
Other languages
Chinese (zh)
Inventor
王飞
王陆
方浩俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Dapu Microelectronics Co Ltd
Original Assignee
Shenzhen Dapu Microelectronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Dapu Microelectronics Co Ltd filed Critical Shenzhen Dapu Microelectronics Co Ltd
Priority to CN202411140940.0A
Publication of CN119002815A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601: Interfaces specially adapted for storage systems
    • G06F3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061: Improving I/O performance
    • G06F3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656: Data buffering arrangements
    • G06F3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671: In-line storage system
    • G06F3/0673: Single storage device
    • G06F3/0679: Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The embodiment of the application discloses a data processing method, a data processing apparatus, a storage device, and a storage medium, which are used to optimize the write performance and efficiency of the storage device. The method of the embodiment comprises the following steps: receiving a data write request, where the data write request comprises target data and the logical block address (LBA) of a first cache to be accessed; judging, according to the LBA, whether the target data has a write conflict in the first cache; if a write conflict exists, writing the target data into a second cache; and if the target data written into the second cache no longer has a write conflict in the first cache, transferring it to the first cache, so that the main control chip transmits the target data written into the first cache to the flash memory chip. By performing conflict handling between different caches, the memory read-write frequency of the storage device is reduced and the host's write performance is improved, thereby addressing the prior-art problem of limited write performance of the storage device and the host.

Description

Data processing method, device, storage equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of data storage, in particular to a data processing method, a data processing device, storage equipment and a storage medium.
Background
A storage device is frequently used to read and write stored data: write data transmitted by the host must first be transferred to the memory (DDR) of the storage device through the device's main control chip, and the write data in that memory is then written into the flash memory chip of the storage device.
In the prior art, implementing such a data write requires allocating, from the memory of the storage device, a memory space dedicated to temporarily holding the write data. The data to be written is transferred to the memory of the storage device through the main control chip; a software program in the storage device then performs management, grouping, and conflict handling on the write data temporarily stored in the memory; finally, the write data is written into the flash memory particles and the dedicated memory space is released.
However, the write performance of the storage device and the host in the prior art is limited mainly by the read-write performance of the storage device's memory. Because data is frequently written into and read out of that memory, the host's write performance reaches only about half of the memory performance, since the write and read directions each occupy half of the available bandwidth, which lowers the efficiency of host writes.
Disclosure of Invention
Based on the above problems, the embodiments of the present application provide a data processing method, apparatus, storage device, and storage medium, so as to optimize host write performance.
In a first aspect, an embodiment of the present application provides a data processing method, applied to a storage device, where the storage device includes a main control chip, a first cache, a second cache, and a flash memory chip, and the method includes:
receiving a data write request, wherein the data write request comprises target data and a logical block address (LBA, Logical Block Address) of a first cache to be accessed;
judging, according to the LBA, whether the target data has a write conflict in the first cache;
if a write conflict exists, writing the target data into the second cache;
judging, according to the LBA, whether the target data written into the second cache has a write conflict in the first cache;
and if no write conflict exists, transferring the target data written into the second cache to the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
In an embodiment, after determining according to the LBA whether the target data has a write conflict in the first cache, the method further includes:
if no write conflict exists, writing the target data into the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
In an embodiment, after writing the target data into the second cache when a write conflict exists, the method further includes:
if the target data is non-aligned data and the start address of the target data is contiguous with the end address of the written data of the LBA, transferring the target data written into the second cache to the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
In an embodiment, determining according to the LBA whether the target data has a write conflict in the first cache includes:
in response to receiving a plurality of data write requests, determining whether the LBAs of the plurality of data write requests overlap;
if the LBAs of the plurality of data write requests overlap, executing the received data write requests sequentially;
and if the LBAs of the plurality of data write requests do not overlap, executing the received data write requests concurrently.
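As an illustrative aside (not part of the claimed method), the ordering rule above, serialize overlapping requests and run disjoint ones concurrently, can be sketched in Python; every name here is an assumption made for illustration:

```python
def ranges_overlap(a, b):
    """True if two (start_lba, sector_count) ranges share any LBA."""
    a_start, a_len = a
    b_start, b_len = b
    return a_start < b_start + b_len and b_start < a_start + a_len

def schedule_writes(requests):
    """Group write requests into batches: requests in one batch touch
    disjoint LBA ranges and may execute concurrently; a request that
    overlaps an earlier one is deferred to a later batch, preserving
    the received order for conflicting LBAs."""
    batches = []
    for req in requests:
        # the request must run after every earlier request it overlaps
        last_conflict = -1
        for i, batch in enumerate(batches):
            if any(ranges_overlap(req, r) for r in batch):
                last_conflict = i
        if last_conflict + 1 == len(batches):
            batches.append([])
        batches[last_conflict + 1].append(req)
    return batches
```

For instance, `schedule_writes([(0, 4), (8, 4), (2, 4)])` defers `(2, 4)` to a second batch because its LBA range overlaps `(0, 4)`.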
In an embodiment, determining according to the LBA whether the target data has a write conflict in the first cache includes:
obtaining a pre-stored cache table, where the cache table comprises storage position information of the data already written in the first cache;
and determining, according to the LBA of the first cache to be accessed and the storage position information of the written data of the first cache, whether the target data has a write conflict in the first cache.
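A minimal sketch of this lookup, assuming the cache table is a mapping from each occupied LBA to the storage position of its written data (the structure and names are assumptions for illustration):

```python
def has_write_conflict(cache_table, lba, sector_count):
    """A write conflicts when any LBA it covers already appears in
    the pre-stored cache table of the first cache."""
    return any(l in cache_table for l in range(lba, lba + sector_count))
```

With a table such as `{100: 0x1000, 101: 0x1200}`, a two-sector write at LBA 101 conflicts, while one at LBA 102 does not.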
In an embodiment, the first cache includes a plurality of logical blocks, and transmitting the target data written into the first cache to the flash memory chip includes:
when the amount of data stored in any logical block reaches a preset data-amount threshold, transmitting the written data in that logical block to the flash memory chip.
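Illustratively, with the 4096-byte logical block of the description's worked example and names that are assumptions rather than part of this embodiment, the threshold test might look like:

```python
LOGICAL_BLOCK_SIZE = 4096  # bytes per logical block, per the description's example

def blocks_to_flush(block_fill, threshold=LOGICAL_BLOCK_SIZE):
    """Return indices of logical blocks whose stored data amount has
    reached the preset data-amount threshold; their written data is
    then transmitted to the flash memory chip."""
    return [i for i, filled in enumerate(block_fill) if filled >= threshold]
```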
In an embodiment, the first cache includes a plurality of logical blocks, each logical block includes a plurality of sectors, and the flash memory chip stores a preset aligned-write rule; transmitting the target data written into the first cache to the flash memory chip includes:
when the caching duration of a logical block reaches a preset time threshold, if the written data in the logical block does not satisfy the preset aligned-write rule, determining the target sectors that fail the rule and obtaining their indexes;
obtaining first supplementary data from the flash memory chip according to the storage position information of the written data in the target sectors and the indexes of the target sectors;
and merging the first supplementary data with the written data of the logical block and transmitting the merged data to the flash memory chip.
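The merge step above resembles a conventional read-modify-write: sectors that fail the aligned-write rule are filled with first supplementary data fetched from flash before the whole block is written out. A hedged Python sketch (all names, and the 8-sector block geometry, are assumptions for illustration):

```python
SECTOR_SIZE = 512        # bytes per sector, per the description's example
SECTORS_PER_BLOCK = 8    # 4096-byte logical block

def merge_block(written, read_from_flash):
    """written: dict mapping sector index -> 512-byte chunk present in
    the logical block; read_from_flash(index) supplies supplementary
    data for each missing (target) sector. Returns the full aligned
    block ready to transmit to the flash memory chip."""
    out = bytearray()
    for i in range(SECTORS_PER_BLOCK):
        chunk = written.get(i)
        if chunk is None:                # target sector failing the rule
            chunk = read_from_flash(i)   # first supplementary data
        assert len(chunk) == SECTOR_SIZE
        out += chunk
    return bytes(out)
```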
In a second aspect, an embodiment of the present application further provides a data processing apparatus, which is applied to a storage device, where the storage device includes a main control chip, a first cache, a second cache, and a flash memory chip, and the data processing apparatus includes:
a request receiving unit, configured to receive a data write request, where the data write request comprises target data and the logical block address LBA of a first cache to be accessed;
a conflict detection unit, configured to determine, according to the LBA, whether the target data has a write conflict in the first cache;
a data transmission unit, configured to write the target data into the second cache if a write conflict exists;
the conflict detection unit is further configured to determine, according to the LBA, whether the target data written into the second cache has a write conflict in the first cache;
and the data transmission unit is further configured to transfer, if no write conflict exists, the target data written into the second cache to the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
In a third aspect, an embodiment of the present application further provides a storage device, including a main control chip, a first cache, a second cache, and a flash memory chip that are communicatively connected to one another; the main control chip includes a memory and a processor, and executes a computer program to implement the data processing method according to the first aspect.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements a data processing method as described in the first aspect above.
From the above technical solutions, the embodiment of the present application has the following advantages: after a data write request is received, whether a write conflict exists is judged according to the LBA; when a write conflict exists in the first cache, the target data is written into the second cache, transferred to the first cache once the conflict is resolved, and finally written into the flash memory chip. Through this multi-level storage structure, the second cache flexibly absorbs write conflicts, which reduces the data read-write frequency in memory and the delay caused by write conflicts in the first cache, improves the overall read-write performance of the storage device, and optimizes the write efficiency of the host.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic diagram of a system architecture according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a storage device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Furthermore, the terms "first," "second," and the like, if any, are used merely for distinguishing between descriptions and not for indicating or implying a relative importance. It should be noted that, without conflict, the features of the embodiments of the present application may be combined with each other.
A storage device is frequently used to read and write stored data: write data transmitted by the host must first be transferred to the memory of the storage device through the device's main control chip, and the write data in that memory is then written into the flash memory chip of the storage device.
Referring to FIG. 1, FIG. 1 is a diagram illustrating a system architecture of a data processing system according to an embodiment of the present application. Data processing system 100 provided by embodiments of the present application may include a host 101 and a storage device 102, where the host may refer to a device that accesses the storage device to access data in the storage device. The host may be virtual, such as a virtual machine. The host may also be a physical device, for example, a personal computer, a server, a notebook computer, a smart phone, a tablet computer, and the like. The storage device 102 may be in communication with a host 101, and data may be written to the storage device 102 by the host 101. The storage device 102 may include a main control chip 103, a first cache 104, a second cache 105, and a flash memory chip 106, where the main control chip 103 is in communication connection with the first cache 104, the second cache 105, and the flash memory chip 106, and the storage device 102 communicates with the host 101 through the main control chip 103. The main control chip 103 executes a computer program or instructions to perform any one of the data processing methods described below.
In the embodiment of the present application, the storage device 102 may include a solid state drive (SSD, Solid State Drive) or a mobile device (such as a mobile phone or a tablet computer), and the caches used for temporarily storing write data or for conflict management may include the first cache 104 and the second cache 105. The first cache 104 may use a storage medium with a higher read-write speed, such as a static random access memory (SRAM, Static Random Access Memory). The first cache 104 may be understood as a hardware cache, i.e., a section of SRAM space managed by a hardware module of the storage device 102: allocation of the SRAM space is performed entirely and automatically by the hardware module, and the software module is responsible only for releasing the space. The second cache 105 may be a dynamic random access memory (DRAM, Dynamic Random Access Memory) or a double data rate synchronous dynamic random access memory (DDR, Double Data Rate Synchronous Dynamic Random Access Memory); it may be understood as a software cache, i.e., a section of DDR space managed by software logic, whose entire life cycle, including allocation, use, management, and release of space, is controlled by software modules. By means of this hierarchical caching strategy, the storage device can balance the requirements of speed and capacity and achieve high-performance, high-efficiency data access. The flash memory chip 106 is a non-volatile memory chip, typically NAND flash, used to persist data in the storage device 102. The main control chip 103 may include a central processing unit (CPU, Central Processing Unit), a system on a chip (SoC, System on a Chip), and the like.
Under identical conditions, when only a dynamic random access memory such as DDR is used as a software cache, the host write performance is only about 10 GB/s; when only a static random access memory such as SRAM is used as a hardware cache, the host write performance can reach 11 GB/s. In the data processing method provided by the application, however, conflict management is performed on the write data by combining the first cache and the second cache (i.e., combining the software cache and the hardware cache), and the host write performance can be raised to 12 GB/s. The embodiment of the application therefore not only speeds up the processing of data writes, but also improves the write performance of the storage device and the host.
Various embodiments of the present application are described in further detail below with respect to the attached drawings and the system architecture of the data processing system described above.
Referring to fig. 2, fig. 2 is a schematic flow chart of a data processing method according to an embodiment of the application. It should be noted that, the data processing method provided in the embodiment of the present application is not limited by fig. 2 and the following specific order, and it should be understood that, in other embodiments, the order of part of the steps in the data processing method in the embodiment of the present application may be interchanged according to actual needs, or part of the steps may be omitted or deleted.
The specific flow shown in fig. 2 will be described in detail, and the data processing method provided in the embodiment of the present application includes steps S201 to S205.
S201: receive a data write request, wherein the data write request comprises target data and the logical block address LBA of the first cache to be accessed.
In the embodiment of the application, the data processing method may be applied to a storage device, and the data write request may be one sent by a host. Illustratively, the data write request may include an identifier of the logical partition of the first cache to be accessed, the logical block address LBA within that logical partition, the target data (including information such as the data size or offset), and so on. A logical partition of the first cache may contain a plurality of LBAs, with each LBA pointing to a specific storage space or data block within the logical partition. The identifier of the logical partition may be an identifier of a volume, a logical unit number (LUN, Logical Unit Number), a partition, or another logical storage unit. In addition, to improve the processing efficiency of critical tasks and critical data, the data write request may also carry a priority.
Of course, existing storage media adopting NAND flash technology use FLBA (Flatten Logic Block Address) addressing. FLBA addressing can be regarded as one mode of LBA addressing: because these storage media have no physical heads or cylinders but use NAND flash to store data, they are better suited to a flattened addressing mode, and FLBA addressing also simplifies the addressing structure and improves read-write performance and efficiency. For this addressing scheme, the data write request may include the logical block address FLBA of the first cache to be accessed, the target data (including information such as the data size or offset), and so on. It should be noted that "the logical block address LBA of the logical partition to be accessed" and "the logical block address FLBA of the first cache to be accessed" are hereinafter referred to simply as "the LBA to be accessed".
After the data write request is received, the storage position of the target data, i.e., its storage start position and storage end position in the first cache, can be determined from the information carried by the request. The cache comprises a plurality of logical blocks for data storage, each logical block comprises a plurality of sectors, and every sector has the same storage capacity. For example, with a storage capacity of 4096 bytes (i.e., 4K) per logical block and 512 bytes per sector, the start position of logical block [0] is 0K and its end position is 4K; the start position of logical block [1] is 4K and its end position is 8K.
In general, to avoid omissions and errors when the storage device processes data write requests, the host transmits the data write requests sequentially through a queue management mechanism, in a specific order; on the storage device side, the data write requests are received and processed one by one, so that every request is handled correctly and data collision and corruption are avoided.
It will be appreciated that, assuming an LBA of 100 (i.e., logical block [100]), a sector size of 512 bytes, and a target data size of 1024 bytes (i.e., 2 sectors), the start position and end position of the target data in the cache can be determined from the target data and the LBA in the data write request. The start address is calculated as LBA x sector size = 100 x 512 = 51200 bytes, and the end address as start position + target data size - 1 = 51200 + 1024 - 1 = 52223 bytes.
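The arithmetic in this example can be checked with a few lines of Python (the function name is an illustrative assumption):

```python
SECTOR_SIZE = 512  # bytes per sector, as assumed in the example

def target_range(lba, data_len):
    """Byte range occupied by the target data: start = LBA x sector
    size; end = start + data length - 1 (inclusive)."""
    start = lba * SECTOR_SIZE
    return start, start + data_len - 1
```

`target_range(100, 1024)` yields the `(51200, 52223)` range of the example.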
S202: judge, according to the LBA, whether the target data has a write conflict in the first cache.
The first cache (e.g., SRAM) generally has a faster read-write speed than the second cache (e.g., DRAM), so when there is no conflict it can satisfy fast-response requirements and reduce the number of accesses to the slower storage medium, improving the efficiency of the whole data write process. The host therefore writes data into the first cache preferentially, and whether the target data has a write conflict in the first cache must be judged according to the LBA carried in the data write request.
Specifically, it may be confirmed whether the LBA of the first cache to be accessed is already occupied or is being accessed in the first cache. Referring to the foregoing example, in which the start and end positions in the cache are determined from the target data and the LBA in the data write request, it is checked whether any data exists in the range of 51200 to 52223 bytes in the first cache; if, say, data exists in the range of 51200 to 51600 bytes, there is a write conflict. Alternatively, if the LBA to be accessed in the data write request already holds data, it may likewise be determined that a write conflict exists; this can be understood as a same-LBA conflict.
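The range check described here is an ordinary interval-overlap test on inclusive byte addresses; a minimal sketch under that assumption:

```python
def overlaps(new_start, new_end, written_start, written_end):
    """True when the target data's byte range intersects a range
    already occupied in the first cache (inclusive end addresses)."""
    return new_start <= written_end and written_start <= new_end
```

Consistent with the example above, `overlaps(51200, 52223, 51200, 51600)` reports a conflict.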
S203: if a write conflict exists, write the target data into the second cache.
If a conflict exists in the first cache, the target data can be temporarily stored in the second cache to avoid data loss and errors. For example, based on the foregoing example in which the start and end positions in the cache are determined from the target data and the LBA in the data write request, if the first cache does not have enough contiguous space to store the 1024 bytes of target data, the system may need to write part of the data to the second cache, or may directly transfer the entire 1024 bytes of target data to the second cache. Further, after the target data is written into the second cache, the data write request is marked as complete.
S204: judge, according to the LBA, whether the target data written into the second cache has a write conflict in the first cache.
If a write conflict was confirmed at the first judgment, the target data is temporarily written into the second cache. During a subsequent period, the conflicting data in the first cache may already have been flushed to the flash memory chip, in which case the original write conflict no longer exists; it is therefore necessary to check again whether the conflict has been resolved, so that the target data can be written back to the first cache.
S205: if no write conflict exists, transfer the target data written into the second cache to the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
By contrast, the prior art first allocates a memory space (generally 4 KB) from the memory and then transfers the data into the allocated space; the main control chip performs management, grouping, and conflict handling on the data buffered in the memory, then hands the data to the back-end software to be written into the flash memory particles (i.e., the flash memory chip, for example NAND flash), and the previously allocated memory space is released once the data has been successfully written. If the data size does not match the allocated memory space, write amplification may occur, that is, the amount of data actually written into the flash memory exceeds the original amount, which shortens the memory's service life. Moreover, every received request requires one write, after which the memory space must be erased or released before the next request's data can be written; such frequent writing and reading in memory affects the memory's service life and reduces the efficiency of host writes.
According to the embodiment of the application, by introducing the first cache and the second cache, after a data write request is received, whether the target data has a write conflict in the first cache is judged according to the LBA of the first cache to be accessed; if a write conflict exists, the target data is written into the second cache; if the target data written into the second cache no longer has a write conflict in the first cache, it is transferred to the first cache, so that the main control chip transmits the target data written into the first cache to the flash memory chip. Through this multi-layer storage structure, the flexible use of the first and second caches mitigates write conflicts and frequent erasing, reduces the data read-write frequency in memory and the delay caused by write conflicts in the first cache, improves the overall read-write performance of the storage device, and optimizes the write efficiency of the host.
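The overall S201 to S205 flow can be summarized in a small Python sketch that models each cache as a dict keyed by LBA; this is a simplified illustration of the routing logic, not the claimed implementation:

```python
def handle_write(lba, data, first_cache, second_cache):
    """S202/S203: on a write conflict in the first cache, stage the
    target data in the second cache; otherwise write it directly."""
    if lba in first_cache:          # write conflict (same-LBA case)
        second_cache[lba] = data    # S203: temporarily store
        return "staged"
    first_cache[lba] = data         # no conflict: fast path
    return "written"

def retry_staged(first_cache, second_cache):
    """S204/S205: once the conflicting data has been flushed to flash,
    move staged data back to the first cache for the main control
    chip to transmit onward."""
    moved = []
    for lba in list(second_cache):
        if lba not in first_cache:  # conflict resolved
            first_cache[lba] = second_cache.pop(lba)
            moved.append(lba)
    return moved
```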
Further, if the data write request carries a priority and a first data write request accesses an LBA of the first cache that is also accessed by a second data write request, the priorities of the two requests may be compared; if the priority of the second data write request is higher than that of the first, execution of the first data write request is deferred until the second data write request is marked as complete.
The data processing method provided in the following embodiments of the present application may be understood with reference to FIG. 3, which is a schematic flow chart of a data processing method according to an embodiment of the application. It should be noted that the data processing method provided in the embodiment of the present application is not limited by FIG. 3 and the specific order below; in other embodiments, the order of some steps of the data processing method may be interchanged according to actual needs, or some steps may be omitted or deleted.
In one embodiment, after determining according to the LBA whether the target data has a write conflict in the first cache, the method further includes: if no write conflict exists, writing the target data into the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
Using the first cache (such as SRAM) as a cache greatly improves the speed of data writing and reading. Because the read-write speed of the first cache is far higher than that of the flash memory, writing the target data into the first cache first improves overall performance, and multiple write requests and conflict management can be handled more efficiently there. The flash memory chip (e.g., NAND flash) has a limited write lifetime, and frequent writes shorten it; writing data into the first cache first reduces the frequency of direct writes to the flash memory and can thereby prolong the service life of the flash memory chip. Further, after the target data is written into the first cache, the data write request is marked as complete.
In a possible embodiment, assuming that the storage capacity of the currently set logical block is 4096 bytes (i.e., 4K), the target data for which a write conflict exists is one complete 4K block; such complete 4K data can be understood as aligned data. It can then be detected whether the target data has a write conflict in the first cache. If no conflict exists, copying the complete 4K target data from the second cache to the first cache is triggered. If a conflict exists, then after the written data that conflicts with the target data in the first cache has been written into the flash memory chip, the target data is marked as flushable and its copy to the first cache is triggered; after the copy completes, the target data can be flushed, i.e., written into the flash memory chip.
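A minimal check of whether a piece of target data is "aligned" in this sense, assuming the 4096-byte logical block of the example above (the function name and the byte-addressing convention are illustrative assumptions):

```python
BLOCK_SIZE = 4096  # logical block capacity used in the example above (4K)

def is_aligned(start_addr: int, length: int, block: int = BLOCK_SIZE) -> bool:
    """Target data is aligned when it starts on a logical-block boundary and
    spans a whole number of blocks, e.g. one complete 4K block."""
    return length > 0 and start_addr % block == 0 and length % block == 0
```

For instance, a complete 4K write at address 0 is aligned, while a 2K write starting mid-block is not.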
In one embodiment, after writing the target data into the second cache if there is a write collision, the method further includes: if the target data is non-aligned data and the starting address of the target data is continuous with the ending address of the written data of the LBA, the target data written into the second buffer memory is transmitted to the first buffer memory, so that the main control chip of the storage device transmits the target data written into the first buffer memory to the flash memory chip.
Unaligned data means that the starting address of the data is not aligned with the boundary of a storage unit (e.g., a sector or block) of the storage device. When the target data is unaligned, a logical block of the first cache is not fully filled. When the target data is transferred to the second cache because the LBA to be accessed already holds written data, a write conflict exists (i.e., a same-LBA conflict), so the storage space corresponding to the LBA to be accessed cannot be written. In practice, however, the written data does not fill the storage space corresponding to that LBA, which wastes storage space and delays data writing.
Based on the above phenomenon, the embodiment of the present application detects the start address of the target data temporarily held in the second cache and the end address of the written data at the LBA to be accessed in the first cache. If the start address of the target data is continuous with the end address of the written data in the first cache, this part of unaligned data can be merged with the written data in the first cache, achieving full utilization of the partially filled logical block in the first cache and reasonable scheduling of the target data temporarily stored in the second cache.
Illustratively, assume that one logical block LBA [0] of the first cache has the following cache information:
Total size: 4KB;
Written data: 3KB, occupying space from address 0 to address 3071 (in bytes);
Now, a data writing request is received, and there is a target data to be written, and the information carried is as follows:
LBA to be accessed: LBA [0];
Size: 2KB;
start address: address 3072;
In this case, the start address (3072) of the target data is continuous with the end address (3071) of the written data in the first cache. However, since LBA [0] already holds written data, a write conflict exists, so the target data cannot be written directly into the remaining 1KB space of LBA [0], and that 1KB space goes unused.
Referring to fig. 3, in the embodiment of the present application, if the start address of the target data is detected to be continuous with the end address (3071) of the written data in the first cache, the target data in the second cache can be migrated back to the first cache. Specifically, the entire 2KB of target data can be written starting at the next available address of LBA [0], address 3072. In this case, it can also be understood that if the start address of the target data is detected to be continuous with the end address of the written data in the first cache, the target data in the second cache is merged with the written data in the first cache. In another possible embodiment, if the check finds that the starting address of LBA [1] is already occupied by other data, so that the remaining rear 1KB of target data (from 4096 to 5119) cannot be written together, the first 1KB of target data (from 3072 to 4095) may be written to LBA [0], while the remaining rear 1KB continues to be cached in the second cache. The system then continuously monitors the first cache; once enough free space appears in the corresponding logical block, the cached remaining 1KB is migrated to the first cache, completing the write of the target data into the first cache, so that the main control chip of the storage device can transmit the target data written into the first cache to the flash memory chip.
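The continuity test and the split used in the worked example above can be sketched as follows (the helper names are illustrative; byte addressing and the 4096-byte logical block follow the example):

```python
BLOCK = 4096  # logical block size from the example (4K)

def is_continuous(written_end: int, target_start: int) -> bool:
    """The target's start address is continuous with the written data's end
    address when it is exactly the next byte."""
    return target_start == written_end + 1

def split_at_block_boundary(target_start: int, target_len: int) -> tuple:
    """Return (bytes that fit in the current logical block, bytes left over).
    The leftover stays in the second cache until space becomes available."""
    boundary = (target_start // BLOCK + 1) * BLOCK  # end of the current block
    head = min(target_len, boundary - target_start)
    return head, target_len - head
```

With the example's numbers, a 2KB write starting at 3072 is continuous with written data ending at 3071 and splits into 1KB that fills LBA [0] plus 1KB that remains cached.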
According to the embodiment of the application, unaligned data and target data with write conflicts can be consolidated, so that storage space is used effectively and reasonably, the storage waste caused by unaligned data is reduced, and data management efficiency is improved. Each logical block can be filled with data and written smoothly into the flash memory chip, reducing the fragmentation of write operations and improving write efficiency.
Referring to fig. 3, in some possible embodiments, if the target data is unaligned data and cannot be merged with the written data of the first cache (i.e., the start address of the target data is not continuous with the end address of the written data of the first cache), then after the written data conflicting with the target data in the first cache has been written into the flash memory chip, the target data is marked as flushable and its copy to the first cache is triggered; after the copy completes, the target data can be flushed, i.e., written into the flash memory chip.
In one embodiment, determining whether the target data has a write conflict in the first cache according to the LBA includes: in response to receiving a plurality of data write requests, determining whether the LBAs of the plurality of data write requests overlap; if the LBAs of the plurality of data write requests overlap, executing the received data write requests sequentially; and if the LBAs of the plurality of data write requests do not overlap, executing the received data write requests simultaneously.
As mentioned in the foregoing embodiments, the host generally sends data write requests through a queue management mechanism in a specific order, and the storage device receives and processes them one by one. Therefore, if a first data write request is received and then a second data write request is received, the main control chip may determine whether the LBA carried by the second request is the same as, or overlaps, the LBA carried by the first request. If overlap is detected, i.e., multiple requests point to the same logical block or address range, the system executes the write requests in the order received, on a first-come-first-served principle, to avoid overwriting earlier data. If no overlap is detected, i.e., each request points to a different logical block or address range, the system may execute the requests simultaneously.
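A sketch of this dispatch decision (the data layout and function name are illustrative assumptions): requests whose LBAs do not overlap any earlier pending request may run concurrently, while a request that overlaps an earlier one is queued behind it in arrival order:

```python
def dispatch(requests):
    """Split an ordered list of (lba, data) write requests into a batch that
    can run concurrently and a first-come-first-served deferred queue."""
    parallel, deferred, busy = [], [], set()
    for lba, data in requests:
        if lba in busy:           # overlaps an earlier request: run later, in order
            deferred.append((lba, data))
        else:                     # distinct LBA: safe to run concurrently
            busy.add(lba)
            parallel.append((lba, data))
    return parallel, deferred
```

For example, three requests to LBAs 0, 1, 0 would yield the first two in the concurrent batch and the third in the deferred queue.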
In this way, write conflicts can be avoided during data writing, and the integrity and consistency of the data are ensured. Even in the case of a high load or reception of a large number of requests, data writing can be effectively managed, throughput of the storage device is improved by parallel processing in the case where LBAs of a plurality of requests overlap, and unnecessary data overwriting or rewriting is avoided by sequential processing in the case where LBAs of a plurality of requests do not overlap.
In order to ensure the accuracy and security of the data writing operation, in an embodiment, determining whether the target data has a write conflict in the first cache according to the LBA includes: obtaining a pre-stored cache table, where the cache table includes storage location information of the written data in the first cache; and determining whether the target data has a write conflict in the first cache according to the LBA of the first cache to be accessed and the storage location information of the written data in the first cache.
The cache table may record, for example, for each occupied LBA of the first cache, the storage location information (such as the start and end addresses) of the written data at that LBA.
When a data write request is received, the LBA carried in the request is detected and compared with the storage location information in the cache table to determine whether the target data in the request would overlap or conflict with data already in the first cache. If no corresponding storage location information is found for the LBA to be accessed, i.e., the LBA is unoccupied, the write operation can be executed and the target data written to that LBA; otherwise, a write conflict is determined to exist, and the target data is transmitted to the second cache. By using the cache table for conflict detection, the system can effectively process different data write requests, optimize the use of storage resources, prevent data loss or corruption, and improve the management efficiency and performance of the storage device.
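The conflict check against the cache table can be sketched as follows (the table layout, an LBA-to-location mapping, is an assumption; the application only states that the table records storage location information of the written data):

```python
class ConflictRouter:
    """Routes a write either into the first cache or, on a same-LBA
    conflict, into the second cache, using a pre-stored cache table."""

    def __init__(self):
        # cache table: LBA -> (start, end) of the written data at that LBA
        self.cache_table = {}

    def route(self, lba: int, data: bytes) -> str:
        if lba in self.cache_table:        # location info found: write conflict
            return "second_cache"
        self.cache_table[lba] = (0, len(data) - 1)
        return "first_cache"
```

A first write to an unoccupied LBA lands in the first cache and registers its location; a second write to the same LBA is diverted to the second cache.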
In one embodiment, the first buffer includes a plurality of logic blocks, and transmitting the target data written in the first buffer to the flash memory chip includes: and when the stored data quantity of any logic block meets the preset data quantity threshold value, transmitting the written data in the logic block to the flash memory chip.
In the embodiment of the application, to reduce frequent small write operations and lower the access frequency to the flash memory chip, after a data write instruction is received, whether to take the written data out of a logical block and write it into the flash memory chip can be determined according to the amount of data stored in that logical block of the first cache. The preset data amount threshold defines the stored data amount that triggers flushing the written data to the flash memory chip: if the used amount or stored data amount of a logical block is greater than or equal to the preset data amount threshold, the written data in that logical block is transferred to the flash memory chip. For example, if the preset data amount threshold is 4K and the written data in LBA [0] reaches a full 4K, the written data in that logical block is transferred to the flash memory chip. It should be noted that the embodiment of the present application does not limit the preset data amount threshold in detail; it may be set according to actual conditions.
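The threshold-triggered flush can be sketched as follows, using the 4K example threshold (the class shape and the flush callback are illustrative assumptions standing in for the transfer to the flash memory chip):

```python
class LogicalBlock:
    """Accumulates written bytes and flushes once the preset
    data-amount threshold is reached."""

    def __init__(self, flush, threshold: int = 4096):
        self.flush = flush           # stand-in for the transfer to the flash chip
        self.threshold = threshold   # preset data-amount threshold (4K example)
        self.filled = 0              # bytes currently stored in this block

    def write(self, nbytes: int) -> None:
        self.filled += nbytes
        if self.filled >= self.threshold:
            self.flush(self.filled)  # flush the whole block's written data
            self.filled = 0
```

With a 3K write followed by a 1K write, the block flushes exactly once, when it reaches 4K.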
In order to optimize storage efficiency and device performance, in an embodiment, the first buffer includes a plurality of logic blocks, each logic block includes a plurality of sectors, the flash memory chip stores a preset alignment writing rule, and the method for transmitting target data written in the first buffer to the flash memory chip includes: when the caching duration of the logic block meets a preset time threshold, if the written data in the logic block does not meet a preset alignment writing rule, determining a target sector which does not meet the preset alignment writing rule, and acquiring an index of the target sector; acquiring first supplementary data from a flash memory chip according to storage position information of written data in a target sector and an index of the target sector; and combining the first supplementary data with the written data of the logic block and transmitting the combined first supplementary data and the written data of the logic block to the flash memory chip.
Here, the preset alignment writing rule may be an alignment rule in a nonvolatile storage medium protocol. In an embodiment of the present application, a cache duration may be tracked for each logical block. If the cache duration since data was written into the logical block reaches the preset time threshold (e.g., 30 seconds or 1 minute), the written data of that logical block is written into the flash memory chip. Before the write operation is performed, it must be determined whether the target data satisfies the preset alignment rule.
Specifically, for example, when the preset alignment rule is 4K (i.e., the size of a logical block), it is determined whether the start address of the written data in each sector under the logical block is aligned with (consistent with) the start address of that sector, and similarly whether the end address of the written data is aligned with the end address of the sector. If every sector is aligned, the data may be written to the flash memory chip. If the written data in a sector does not fill the sector (for example, the end address of the written data is not aligned with the end address of the sector), or no data has been written to a sector at all, that sector does not satisfy the preset alignment writing rule. In that case, the index of the non-conforming sector, the storage location information of the written data (including at least one of its start and end addresses), and the end address of the sector must be determined, so that supplementary data can be obtained from the flash memory chip to fill the gap.
For example, suppose the written data occupies addresses 2560 to 3000 in sector [5]. Then, according to the index of the sector (for example, sector [5] of logical block [0] has index S4), the end address 3000 of the written data in sector [5], and the end address 3071 of the sector, the supplementary data corresponding to addresses 3001 to 3071 of sector [5] is obtained from the flash memory chip to fill the gap in the sector, and the supplementary data is merged with the written data in the first cache. Alternatively, if no data has been written in sector [5], the gap can be filled with supplementary data read from the corresponding addresses of sector [5] of logical block [0] in the flash memory chip; finally, the supplementary data written into the sector is merged with the data of the other sectors, achieving data alignment for each sector and thus for the whole logical block. In this way, alignment of the written data is ensured, and both the data write efficiency and the service life of the flash memory chip are improved.
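The gap computation in this sector-padding step can be sketched as follows, using the addresses of the example (sector [5] spanning 2560 to 3071; the function shape is an illustrative assumption):

```python
def gaps_to_fill(sector_start, sector_end, written_start=None, written_end=None):
    """Return the address ranges that must be read back from the flash chip
    as supplementary data so the sector satisfies the alignment rule.
    written_start/written_end of None means the sector holds no data."""
    if written_start is None:                 # empty sector: fill it entirely
        return [(sector_start, sector_end)]
    gaps = []
    if written_start > sector_start:          # hole before the written data
        gaps.append((sector_start, written_start - 1))
    if written_end < sector_end:              # hole after the written data
        gaps.append((written_end + 1, sector_end))
    return gaps
```

For the example's partially filled sector, the only supplementary range is 3001 to 3071; an entirely empty sector would be read back in full.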
In order to implement the data processing method of the embodiment of the present application, the embodiment of the present application further provides a data processing apparatus, which is applied to a storage device, where the storage device includes a main control chip, a first cache, a second cache, and a flash memory chip, as shown in fig. 4, and the data processing apparatus includes:
A request receiving unit 401, configured to receive a data writing request, where the data writing request includes a logical block address LBA of a first cache to be accessed and target data;
a conflict detection unit 402, configured to determine whether a write conflict exists in the first cache for the target data according to the LBA;
a data transmission unit 403, configured to write the target data into the second buffer if there is a write collision;
The conflict detection unit 402 is further configured to determine, according to the LBA, whether a write conflict exists in the first cache in the target data written into the second cache;
the data transmission unit 403 is further configured to, if no write conflict exists, transmit the target data written into the second buffer to the first buffer, so that the main control chip of the storage device transmits the target data written into the first buffer to the flash memory chip.
In an embodiment, the data transmission unit is further configured to write the target data into the first cache if no write collision exists, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
In an embodiment, the data transmission unit is further configured to transmit the target data written into the second cache to the first cache if the target data is unaligned data and the start address of the target data is consecutive to the end address of the written data of the LBA, so that the host chip of the storage device transmits the target data written into the first cache to the flash memory chip.
In one embodiment, the apparatus further comprises: and a processing unit. The processing unit is used for responding to the received multiple data writing requests and determining whether the LBAs of the multiple data writing requests are overlapped; if the LBAs of the plurality of data writing requests are overlapped, sequentially executing the received data writing requests; and if the LBAs of the plurality of data writing requests do not overlap, executing the received data writing requests at the same time.
In one embodiment, the apparatus further comprises: and an acquisition unit. The acquisition unit is used for acquiring a prestored cache table; the cache table comprises storage position information of written data in the first cache;
The processing unit is further configured to determine, according to LBA of a first cache to be accessed and storage location information of written data of the first cache, whether write conflict exists in the first cache for the target data.
In an embodiment, the data transmission unit is further configured to transmit the written data in the logic block to the flash memory chip when the stored data amount of any one of the logic blocks satisfies a preset data amount threshold.
In an embodiment, when the cache duration of the logic block meets a preset time threshold, if the written data in the logic block does not meet the preset alignment writing rule, determining a target sector that does not meet the preset alignment writing rule, and obtaining an index of the target sector;
The obtaining unit is further configured to obtain first supplementary data from the flash memory chip according to storage location information of the written data in the target sector and an index of the target sector;
The data transmission unit is further configured to combine the first supplemental data with the written data of the logic block, and transmit the combined first supplemental data and the written data to the flash memory chip.
In practical applications, the processing unit may be implemented by the main control chip in the storage device in combination with a communication interface, and the request receiving unit, the conflict detection unit, the data transmission unit, and the acquisition unit may be implemented by the communication interface in the data processing apparatus.
It should be noted that the data processing apparatus provided in the above embodiments is illustrated by the division of the program modules described above; in practical applications, the processing may be allocated to different program modules as needed, i.e., the internal structure of the apparatus may be divided into different program modules to complete all or part of the processing described above. In addition, the data processing apparatus provided in the foregoing embodiments and the data processing method embodiments belong to the same concept; the specific implementation process of the apparatus is detailed in the method embodiments and is not repeated here.
Based on the hardware implementation of the program module, and in order to implement a data processing method provided by the embodiment of the present application, the embodiment of the present application further provides a storage device, as shown in fig. 5, the storage device 500 includes a main control chip 501, a first cache 502, a second cache 503, and a flash memory chip 504, where communication connection is implemented among the main control chip 501, the first cache 502, the second cache 503, and the flash memory chip 504, and the main control chip 501 runs a computer program to implement the data processing method as described in any one of the above.
Of course, in actual practice, the various components in the storage device 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connected communications between these components. The bus system 505 includes a power bus, a control bus, a status signal bus, and the like in addition to a data bus. But for clarity of illustration the various buses are labeled as bus system 505 in fig. 5.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, the computer program implementing the data processing method according to any one of the above when being executed by a processor.
The embodiment of the application also provides a computer program product, on which a computer program/instruction is stored, the computer program/instruction being executed by a processor for implementing the data processing method according to any one of the above.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of it, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a storage device (which may be a personal computer, a server, a solid state disk, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.

Claims (10)

1. The data processing method is characterized by being applied to a storage device, wherein the storage device comprises a main control chip, a first cache, a second cache and a flash memory chip, and the method comprises the following steps:
receiving a data writing request, wherein the data writing request comprises a logic block address LBA of a first cache to be accessed and target data;
judging whether the target data have write-in conflict in the first cache according to the LBA;
if the writing conflict exists, writing the target data into the second cache;
Judging whether the target data written into the second cache has write conflict in the first cache according to the LBA;
and if no write-in conflict exists, transmitting the target data written in the second cache to the first cache, so that the main control chip of the storage device transmits the target data written in the first cache to the flash memory chip.
2. The data processing method of claim 1, wherein after said determining from said LBA whether said target data has a write collision in said first cache, said method further comprises:
If no write conflict exists, the target data is written into the first cache, so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
3. The data processing method of claim 1, wherein after the target data is written to the second cache if a write conflict exists, the method further comprises:
And if the target data is non-aligned data and the starting address of the target data is continuous with the ending address of the written data of the LBA, transmitting the target data written into the second cache to the first cache so that the main control chip of the storage device transmits the target data written into the first cache to the flash memory chip.
4. The method of claim 1, wherein determining whether the target data has a write collision in the first cache based on the LBA comprises:
in response to receiving a plurality of data write requests, determining whether LBAs of the plurality of data write requests overlap;
If the LBAs of the plurality of data writing requests are overlapped, sequentially executing the received data writing requests;
and if the LBAs of the plurality of data writing requests do not overlap, executing the received data writing requests at the same time.
5. The method of claim 1, wherein determining whether the target data has a write collision in the first cache based on the LBA comprises:
Obtaining a pre-stored cache table; the cache table comprises storage position information of written data in the first cache;
and determining whether the target data has write-in conflict in the first cache according to the LBA of the first cache to be accessed and the storage position information of the written data of the first cache.
6. The method of claim 1, wherein the first buffer comprises a plurality of logic blocks, and wherein transferring the target data written in the first buffer to the flash memory chip comprises:
and when the stored data quantity of any logic block meets a preset data quantity threshold value, transmitting the written data in the logic block to the flash memory chip.
7. The method according to claim 1, wherein the first buffer includes a plurality of logic blocks, each logic block includes a plurality of sectors, the flash memory chip stores a preset aligned write rule, and the transferring the target data written in the first buffer to the flash memory chip includes:
When the caching duration of the logic block meets a preset time threshold, if the written data in the logic block does not meet the preset alignment writing rule, determining a target sector which does not meet the preset alignment writing rule, and acquiring an index of the target sector;
Acquiring first supplementary data from the flash memory chip according to storage position information of the written data in the target sector and the index of the target sector;
and combining the first supplementary data with the written data of the logic block and transmitting the combined data to the flash memory chip.
8. A data processing apparatus, characterized in that it is applied to a storage device, the storage device includes a main control chip, a first cache, a second cache, and a flash memory chip, the data processing apparatus includes:
A request receiving unit, configured to receive a data writing request, where the data writing request includes a logical block address LBA of a first cache to be accessed and target data;
A conflict detection unit, configured to determine, according to the LBA, whether a write conflict exists in the first cache in the target data;
The data transmission unit is used for writing the target data into the second cache if the writing conflict exists;
The conflict detection unit is further configured to determine, according to the LBA, whether a write conflict exists in the first cache in the target data written into the second cache;
And the data transmission unit is further used for transmitting the target data written into the second buffer memory to the first buffer memory if no writing conflict exists, so that the main control chip of the storage device transmits the target data written into the first buffer memory to the flash memory chip.
9. A memory device, characterized in that the memory device comprises a main control chip, a first cache, a second cache and a flash memory chip, wherein communication connection is realized among the main control chip, the first cache, the second cache and the flash memory chip, and the main control chip runs a computer program to realize the data processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the data processing method according to any one of claims 1 to 7.
CN202411140940.0A 2024-08-19 2024-08-19 Data processing method, device, storage equipment and storage medium Pending CN119002815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411140940.0A CN119002815A (en) 2024-08-19 2024-08-19 Data processing method, device, storage equipment and storage medium


Publications (1)

Publication Number Publication Date
CN119002815A true CN119002815A (en) 2024-11-22

Family

ID=93474110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411140940.0A Pending CN119002815A (en) 2024-08-19 2024-08-19 Data processing method, device, storage equipment and storage medium

Country Status (1)

Country Link
CN (1) CN119002815A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN120386485A (en) * 2025-03-14 2025-07-29 珠海妙存科技有限公司 Data operation method, device, electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180173418A1 (en) * 2016-12-20 2018-06-21 Intel Corporation Apparatus, system and method for offloading collision check operations in a storage device
CN112992207A (en) * 2019-12-12 2021-06-18 英特尔公司 Write amplification buffer for reducing misaligned write operations
CN113485640A (en) * 2021-06-23 2021-10-08 至誉科技(武汉)有限公司 Data writing method, device, equipment and readable storage medium
CN115292212A * 2022-07-31 2022-11-04 南斗六星系统集成有限公司 Processing method and system based on EEPROM high-speed data storage
CN117193634A (en) * 2023-08-15 2023-12-08 珠海云洲智能科技股份有限公司 Data caching method, device, electronic equipment and computer readable storage medium
CN117406935A (en) * 2023-12-13 2024-01-16 苏州萨沙迈半导体有限公司 Data reading method and device and read-write controller

Similar Documents

Publication Publication Date Title
KR102688570B1 (en) Memory System and Operation Method thereof
US12321628B2 (en) Data migration method, host, and solid state disk
US10289314B2 (en) Multi-tier scheme for logical storage management
US8924659B2 (en) Performance improvement in flash memory accesses
US10572391B2 (en) Methods and apparatus for implementing a logical to physical address mapping in a solid state drive
KR102505913B1 (en) Memory module and memory system including memory module)
KR102782783B1 (en) Operating method of controller and memory system
US20110161552A1 (en) Command Tracking for Direct Access Block Storage Devices
CN109164976B (en) Optimizing storage device performance using write caching
CN114385235B (en) Command exhaust using host memory buffering
EP3637242B1 (en) Data access method and apparatus
CN107066202B (en) Storage device with multiple solid state disks
US10452313B2 (en) Apparatuses and methods for multiple address registers for a solid state device
US20200104072A1 (en) Data management method and storage controller using the same
US20210397511A1 (en) Nvm endurance group controller using shared resource architecture
KR102818037B1 (en) System including a storage device for providing data to an application processor
CN108228483B (en) Method and apparatus for processing atomic write commands
CN119002815A (en) Data processing method, device, storage equipment and storage medium
CN120687382A (en) Memory access method and electronic device
US20210191626A1 (en) Data processing system
US12216597B2 (en) Memory system, including a plurality of memory controllers and operation method thereof
CN118113461B (en) A CXL memory expansion device, atomic operation method and atomic operation system
US20260037183A1 (en) Memory controllers, memory systems and control methods thereof, memory mediums, and program products
US20260030169A1 (en) Write Amplification Reduction with Sub-Indirection Unit (IU) Hinting
US20250044987A1 (en) Memory controller, memory system, and operating method of memory system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination