US20160335024A1 - Assisting data deduplication through in-memory computation - Google Patents
- Publication number
- US20160335024A1 (application US 15/154,044)
- Authority
- US
- United States
- Prior art keywords
- data
- storage device
- data storage
- chunk
- hashing engine
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- All under G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers:
- G06F3/061—Improving I/O performance
- G06F3/0605—Improving or facilitating administration, e.g. storage management, by facilitating the interaction with a user or administrator
- G06F3/0608—Saving storage space on storage systems
- G06F3/0641—De-duplication techniques
- G06F3/0658—Controller construction arrangements
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
Description
- This application claims priority to co-pending U.S. Provisional Patent Application Ser. No. 62/161,928, filed May 15, 2015, which is hereby incorporated herein by reference as though fully set forth.
- The present invention relates to the field of data storage and processing, and particularly to providing unified in-memory data storage and data deduplication services in computing systems.
- Aiming to eliminate data redundancy and hence reduce the data storage footprint and/or data transfer volume, data deduplication has become an indispensable feature in almost all storage archive/backup systems and many front-end computing/storage systems. The basic principle of data deduplication can be described as follows. First, files are split into multiple chunks, which may have fixed or variable sizes (typically a few kB or larger), depending on whether content awareness is incorporated into the chunking process. In general, content-aware chunking (and hence variable chunk sizes) tends to achieve better data deduplication efficiency. Content-aware chunking is typically realized by using a rolling hash scheme such as Rabin fingerprinting.
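The content-aware chunking step described above can be sketched in Python. This is an illustrative sketch, not the patent's implementation: a simple polynomial rolling hash stands in for Rabin fingerprinting, and the window size, modulus, and boundary constants D and R are made-up example values.

```python
# Content-defined chunking sketch: a polynomial rolling hash over a sliding
# window, with a cut wherever h(W) mod D == R. All constants are illustrative.
WINDOW = 48               # sliding-window length (bytes)
BASE, MOD = 257, 1 << 61  # polynomial base and modulus
D, R = 4096, 7            # boundary condition: h(W) mod D == R
MIN_CHUNK, MAX_CHUNK = 2048, 65536  # chunk-size guards

def chunk_boundaries(data: bytes) -> list[int]:
    """Return the end offsets of content-defined chunks in `data`."""
    top = pow(BASE, WINDOW - 1, MOD)  # weight of the byte leaving the window
    boundaries, h, start = [], 0, 0
    for i, b in enumerate(data):
        if i >= WINDOW:               # slide: drop the oldest byte's term
            h = (h - data[i - WINDOW] * top) % MOD
        h = (h * BASE + b) % MOD      # slide: admit the new byte
        size = i + 1 - start
        if (size >= MIN_CHUNK and h % D == R) or size >= MAX_CHUNK:
            boundaries.append(i + 1)  # declare a chunk boundary here
            start = i + 1
    if start < len(data):
        boundaries.append(len(data))  # final partial chunk
    return boundaries
```

Because boundaries depend only on local window content, inserting bytes near the start of a file shifts boundaries only locally instead of re-aligning every subsequent fixed-size block, which is why content-aware chunking tends to deduplicate better.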
- Given each data chunk, data deduplication aims to discover and accordingly exploit the redundancy between a current chunk and the other chunks that have been stored or transferred in the system. This is realized in either a locality-based or similarity-based manner, where the former only focuses on exactly identical chunks and the latter considers both identical and highly similar chunks.
- In the context of locality-based data deduplication, for each individual chunk, a locality-oriented signature (or fingerprint) with a reasonably small size (e.g., 20 bytes or 32 bytes) is calculated using a hash function (e.g., SHA-1 or SHA-256). The signature is used to determine whether two chunks are identical (i.e., if the signatures of two same-sized chunks are identical, then the two chunks are considered to be identical).
- In the context of similarity-based data deduplication, more complicated hashing schemes are used to calculate similarity-oriented signatures for each chunk.
- The signatures are used to determine whether two chunks are identical or highly similar. Once multiple identical data chunks are identified, the system can physically store or transfer only a single copy through appropriate data structure management. For similarity-based data deduplication, once multiple highly similar data chunks are identified, the system can physically store or transfer only a single copy plus the inter-chunk differences through appropriate data structure management.
- The system maintains a signature index consisting of the signatures of all, or a portion of, the data chunks that have been stored or transferred. One critical process is, given the signature of a new data chunk, to determine whether this signature already exists in the current signature index. Such a signature index look-up operation can be very time- and resource-consuming and hence degrade the overall data deduplication speed, especially for large-scale systems. In practical implementations, a Bloom filter is typically used to eliminate unnecessary signature index look-ups and hence speed up the data deduplication. The objective of a Bloom filter is, without carrying out any index look-up, to quickly identify those signatures that are definitely not in the current signature index. This can obviate a significant amount of costly and unnecessary signature index look-up operations. The core operation of a Bloom filter is to apply several (e.g., k) hash functions to the signature in order to obtain k integers, h1, . . . , hk, whose values all fall into a given range [0, m-1]. If any one of the k bits at positions h1, . . . , hk in the m-bit summary vector is 0, then it is guaranteed that this signature is not in the current signature index. For each signature being added to the signature index, the corresponding k bits in the summary vector are set to 1.
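The summary-vector mechanics just described can be sketched as a minimal Bloom filter. Deriving the k positions from keyed SHA-256 digest slices is an illustrative choice, not a detail taken from the patent.

```python
import hashlib

class BloomFilter:
    """m-bit summary vector queried through k hash functions, as described
    above. The position-derivation scheme here is illustrative."""

    def __init__(self, m: int, k: int):
        self.m, self.k = m, k
        self.bits = bytearray((m + 7) // 8)   # m-bit summary vector

    def _positions(self, signature: bytes):
        # k integers h1..hk, each falling in [0, m-1]
        for i in range(self.k):
            d = hashlib.sha256(bytes([i]) + signature).digest()
            yield int.from_bytes(d[:8], "big") % self.m

    def add(self, signature: bytes):
        # set the k corresponding bits when a signature joins the index
        for p in self._positions(signature):
            self.bits[p // 8] |= 1 << (p % 8)

    def may_contain(self, signature: bytes) -> bool:
        # any 0 bit => the signature is definitely not in the index,
        # so the costly index look-up can be skipped
        return all((self.bits[p // 8] >> (p % 8)) & 1
                   for p in self._positions(signature))
```

A `False` answer is definitive; a `True` answer only means the signature index must actually be consulted, which is exactly the filtering role described above.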
- For computing systems in which host processing chips such as CPUs/GPUs implement data deduplication, systems and methods are provided that implement various hash functions and data processing throughout the data deduplication process on data storage devices. For example, flash memory based solid-state data storage devices may utilize in-storage hash function computations to gain various advantages such as reducing the host computational workload, reducing host main memory capacity, and reducing data traffic across the storage-memory-CPU hierarchy.
- In a first aspect, the invention provides a data storage device, comprising: a storage medium; and a controller, wherein the controller includes a hashing engine for implementing a data deduplication process on data stored in the storage medium, wherein the hashing engine: inputs parameters from a host that specify a sliding window size and a boundary condition; implements a rolling hash function; and outputs a data chunk boundary.
- In a second aspect, the invention provides a data storage device, comprising: a storage medium; and a controller, wherein the controller includes a hashing engine for implementing a data deduplication process on data stored in the storage medium, wherein the hashing engine: inputs a location and size of a data chunk from a host; implements a hash function on the data chunk that calculates a chunk signature; and outputs the chunk signature.
- In a third aspect, the invention provides a data storage device, comprising: a storage medium; and a controller having a delta compression engine for processing data chunks in the storage medium identified as similar, wherein the controller includes: a data read module that receives addresses of a pair of data chunks from a host, a first algorithm for calculating a difference between the data chunks, and a second algorithm for compressing the difference.
- The features alluded to herein may also be implemented as a chip; a controller (e.g., a card that plugs into a memory system); a method; and/or a program product for hashing or otherwise processing data in a data storage device to improve data deduplication efficiency.
- The numerous embodiments of the present invention may be better understood by those skilled in the art by reference to the accompanying figures in which:
- FIG. 1 illustrates the diagram of integrating a hashing engine in a storage device controller according to embodiments;
- FIG. 2 illustrates the diagram for using the hashing engine to carry out data chunking calculation according to embodiments;
- FIG. 3 illustrates the diagram for using the hashing engine to carry out data chunk signature calculation according to embodiments;
- FIG. 4 illustrates the diagram of using the hashing engine to carry out data chunking and chunk signature calculation concurrently according to embodiments;
- FIG. 5 illustrates the diagram of using the hashing engine to calculate the Bloom filter location parameters according to embodiments;
- FIG. 6 illustrates the diagram of using the hashing engine to directly report a Bloom filter hit/miss result with a summary vector according to embodiments;
- FIG. 7 illustrates the diagram of using the hashing engine to carry out data chunking, chunk signatures, and Bloom filter location parameter calculation concurrently according to embodiments;
- FIG. 8 illustrates the diagram of using the hashing engine to carry out data chunking, chunk signatures, and Bloom filter hit/miss calculation concurrently according to embodiments; and
- FIG. 9 illustrates the diagram of adding one delta-compression engine in a storage device controller for similarity-based data deduplication according to embodiments.
- Reference will now be made in detail to embodiments of the invention, examples of which are illustrated in the accompanying drawings.
- The disclosed embodiments deal with computing systems in which host processing chips carry out data deduplication in order to reduce the data storage footprint and/or transfer volume in computing systems. Throughout the entire data deduplication process, various hashing functions are utilized to implement content-aware data chunking, locality-oriented or similarity-oriented signature computation, and Bloom filters.
- As illustrated in FIG. 1, embodiments are provided to directly realize all or some of these hashing functions inside a data storage device 10, which stores data for a host 12 in a storage media 16, such as flash memory. In particular, a hashing engine 18 in the storage device controller 14 is provided to perform various hashing activities implemented during a data deduplication process based on a set of parameters provided by an external system such as host 12.
-
FIG. 2 depicts an illustration of hashing engine 18 implemented to perform content-aware data chunking inside data storage device 10. To realize content-aware data chunking, a rolling hash function (e.g., Rabin fingerprinting, Rabin-Karp string search, etc.) is applied to a sliding window across a bitstream of data 22 stored in the storage media 16. The hashing result is continuously updated with the sliding window and checked against a pre-defined boundary condition (e.g., h(W) modulo D = r, where h denotes the rolling hash function, W denotes the current data within the sliding window, and D and r are constants).
- As shown in FIG. 2, the host 12 may provide the following parameters 20 to the hashing engine 18:
- 1. The particular rolling hash function to be used for data chunking;
- 2. The length of the sliding window; and
- 3. The parameters being used in the boundary condition check.
During runtime, the hashing engine 18 in the storage device 10 carries out the entire data chunking process and sends the chunking results (i.e., the locations of all the chunk boundaries) to the host 12. Note that the hashing engine 18 may be implemented with a set of rolling hash functions to allow the host 12 to select the most appropriate one. Alternatively, the hashing engine may be implemented with a single or default rolling hash function, in which case it need not be specified within parameters 20.
- After data chunking, the next operation is to calculate the chunk signatures for each individual data chunk. For locality-oriented data deduplication, typically only a single signature is required for each chunk. For similarity-oriented data deduplication, multiple signatures are required. Implementation of a data chunk signature calculation can be separate from or integrated with the data chunking process.
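The integrated alternative — one pass that advances the rolling hash for boundary detection while an incremental signature hash absorbs the same bytes — can be sketched as follows. All constants and the SHA-256 signature choice are illustrative, not taken from the patent.

```python
import hashlib

BASE, MOD, WINDOW = 257, 1 << 61, 48  # rolling-hash constants (illustrative)
D, R = 1024, 7                        # boundary condition: h mod D == R
MIN_CHUNK, MAX_CHUNK = 512, 16384     # chunk-size guards (illustrative)

def chunk_and_sign(data: bytes):
    """Yield (end_offset, sha256_signature) for each content-defined chunk,
    computing the boundary hash and the signature in a single pass."""
    top = pow(BASE, WINDOW - 1, MOD)
    h, start = 0, 0
    sig = hashlib.sha256()            # signature of the chunk in progress
    for i, b in enumerate(data):
        if i >= WINDOW:               # slide the window
            h = (h - data[i - WINDOW] * top) % MOD
        h = (h * BASE + b) % MOD      # rolling hash for boundary detection
        sig.update(bytes([b]))        # the same byte feeds the signature hash
        size = i + 1 - start
        if (size >= MIN_CHUNK and h % D == R) or size >= MAX_CHUNK:
            yield i + 1, sig.digest() # boundary found: signature is ready
            sig, start = hashlib.sha256(), i + 1
    if start < len(data):
        yield len(data), sig.digest() # final partial chunk
```

Because the signature hash finishes exactly when the boundary is found, each chunk's boundary and signature can be reported together, with the data streamed through both computations once.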
-
FIG. 3 shows the case when data chunk signature calculation is separate from the data chunking process. The host 12 provides the following parameters 26 to the hashing engine 18:
- 1. The particular hash functions to be used for calculating the data chunk signatures (if multiple options are included in the hashing engine 18); and
- 2. The location and size of the data chunk to be processed.
During runtime, the hashing engine 18 in the storage device 10 carries out the hashing operation to obtain the signatures 28 for each data chunk and, if required, sends the signatures 28 to the host 12.
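The separate signature step of FIG. 3 reduces to hashing the chunk identified by the host's (location, size) parameters. A minimal sketch using SHA-1/SHA-256, the hash choices named in the description:

```python
import hashlib

def chunk_signature(data: bytes, offset: int, size: int,
                    algo: str = "sha256") -> bytes:
    """Compute a locality-oriented signature for data[offset:offset+size],
    mirroring the (location, size) parameters the host provides."""
    return hashlib.new(algo, data[offset:offset + size]).digest()
```

Identical chunks yield identical signatures, which is the property the index comparison relies on; SHA-256 gives the 32-byte signatures and SHA-1 the 20-byte signatures mentioned earlier.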
-
FIG. 4 shows an embodiment of hashingengine 18 in which the datachunk signature calculation 32 is integrated with thedata chunking process 30. Starting from the very beginning of a new data chunk, the hashingengine 18 carries out the rolling hash for data chunking and hashing for data chunk signature calculation concurrently. As shown inFIG. 4 , the data go through the two 30, 32 in serial (e.g., bit-by-bit or byte-by-byte). Once the hashingcomputational components engine 18 identifies the chunk boundary 24 (i.e., the end point of current data chunk), it accordingly finishes up the datachunk signature calculation 32, and sends both chunkingresults 24 andsignature calculation 28 results to the host 12 (FIG. 1 ). - In a further embodiment, the output may be processed by an in-memory Bloom filter. Recall that the core operation of a Bloom filter is to apply several (e.g., k) hash functions onto the signature in order to obtain k integers, h1, . . . , hk, whose values all fall into a given range [0, m-1]. As shown in
FIG. 5 , these k integers are referred to asfilter locations 34. Given thedata chunk signatures 28, the hashingengine 18 carries out the Bloom filter computation to obtain thek filter locations 34. If the hashingengine 18 is not aware of the current value of the summary vector, the hashingengine 18 sends the obtained k filter locations to thehost 12. - As shown in
FIG. 6 , if the hashingengine 18 knows the current value of the summary vector 36 (e.g., thehost 12 keeps updating the hashingengine 18 with the summary vector 36), the hashingengine 18 checks thesummary vector 18 based upon the obtained k filter locations, and accordingly determines whether there is a data chunk signature match and reports the Bloom filter match/miss result 40 to thehost 12. - In addition, as shown in
FIG. 7 andFIG. 8 , all the above processes can be carried out by the hashingengine 18 concurrently, which can most effectively facilitate the realization of data deduplication. In particular,FIG. 7 presents an embodiment in which aBloom filter 42 is incorporated for the case where the hashingengine 18 is not aware of the current value of the summary vector (FIG. 5 ) andFIG. 8 presents an embodiment in which theBloom filter 42 is incorporated for the case where the hashingengine 18 is aware of the current value of the summary vector 36 (FIG. 6 ). - In a further embodiment, delta compression may be implemented in-memory for data chunks identified as similar. In the context of similarity-based data deduplication, once significant similarity has been detected between the current data chunk and an existing data chunk, the system only stores the difference between them in order to reduce the data volume. The difference between similar data chunks is typically compressed by delta compression, i.e., let Da and Db represent the two data chunks with significant similarity, the process first obtains their bit-wise XOR Dab=DaaDb, and then compress the difference Dab using algorithms such as run-length encoding, which is typically referred to as delta compression. As shown in
FIG. 9, the storage device controller 14 can also integrate a delta compression engine 46 to facilitate similarity-based data deduplication. Given the address 44 of the two similar data chunks inputted via a data read module 50, the storage device controller 14 uses the integrated delta compression engine 46 to generate the compressed difference 48 for the host 12. - The embodiments of the present disclosure are applicable to various types of storage devices without departing from the spirit and scope of the present disclosure. It is also contemplated that the term host may refer to various devices capable of sending read/write commands to the storage devices. It is understood that such devices may be referred to as processors, hosts, initiators, requesters, or the like, without departing from the spirit and scope of the present disclosure.
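- For illustration only, the Bloom filter flow described above (FIG. 5 and FIG. 6) can be sketched in software. In the following hedged Python sketch, the summary vector length M, the number of filter locations K, and the double-hashing construction over a chunk signature are assumptions standing in for the filter locations 34, summary vector 36, and match/miss result 40; the patent does not prescribe these particulars.

```python
# Illustrative sketch (not the patented implementation): derive k filter
# locations from a data chunk signature, then test/update a summary vector.
import hashlib

M = 1 << 20   # summary vector length in bits (assumed for this sketch)
K = 4         # number of filter locations per signature (assumed)

def filter_locations(signature: bytes, m: int = M, k: int = K) -> list[int]:
    """Map one chunk signature to k filter locations via double hashing."""
    digest = hashlib.sha256(signature).digest()
    h1 = int.from_bytes(digest[:8], "big")
    h2 = int.from_bytes(digest[8:16], "big") | 1  # odd stride
    return [(h1 + i * h2) % m for i in range(k)]

def maybe_duplicate(signature: bytes, summary_vector: bytearray) -> bool:
    """Bloom filter membership test: all k bits set => possible duplicate."""
    return all(summary_vector[loc >> 3] & (1 << (loc & 7))
               for loc in filter_locations(signature))

def record(signature: bytes, summary_vector: bytearray) -> None:
    """Set the k bits for a newly stored chunk (host-side vector update)."""
    for loc in filter_locations(signature):
        summary_vector[loc >> 3] |= 1 << (loc & 7)
```

Here filter_locations plays the role of the hashing engine's computation (FIG. 5), while maybe_duplicate corresponds to the on-engine summary vector check that yields the match/miss result (FIG. 6). A Bloom filter match means only "possible duplicate" (so the host still verifies the signature), whereas a miss guarantees the chunk is new.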
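- The delta compression step described above (computing Dab = Da ⊕ Db, then run-length encoding the difference) can likewise be sketched. The Python functions below are an illustrative assumption, not the patented delta compression engine 46; in particular, the (count, value) byte-pair encoding format is invented for the example.

```python
# Illustrative sketch of delta compression for similar chunks: XOR the two
# chunks, then run-length encode the (mostly zero) difference.
def xor_diff(da: bytes, db: bytes) -> bytes:
    """Bit-wise XOR of two equal-length data chunks: Dab = Da XOR Db."""
    return bytes(a ^ b for a, b in zip(da, db))

def rle_encode(data: bytes) -> bytes:
    """Simple run-length encoding as (count, value) byte pairs, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    """Invert rle_encode: expand each (count, value) pair."""
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)
```

Because XOR is its own inverse, Db can be recovered from Da and the decoded difference, which is why storing only the compressed Dab suffices in similarity-based deduplication.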
- Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by processing logic in hardware and/or by computer readable program instructions.
- Computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
- The foregoing description of various aspects of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously, many modifications and variations are possible. Such modifications and variations that may be apparent to an individual skilled in the art are included within the scope of the invention as defined by the accompanying claims.
Claims (19)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US15/154,044 US10416915B2 (en) | 2015-05-15 | 2016-05-13 | Assisting data deduplication through in-memory computation |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201562161928P | 2015-05-15 | 2015-05-15 | |
| US15/154,044 US10416915B2 (en) | 2015-05-15 | 2016-05-13 | Assisting data deduplication through in-memory computation |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20160335024A1 | 2016-11-17 |
| US10416915B2 | 2019-09-17 |
Family
ID=57276036
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/154,044 Active 2036-07-09 US10416915B2 (en) | 2015-05-15 | 2016-05-13 | Assisting data deduplication through in-memory computation |
Country Status (1)
| Country | Link |
|---|---|
| US (1) | US10416915B2 (en) |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040174276A1 (en) * | 2002-10-30 | 2004-09-09 | Nbt Technology, Inc. | Content-based segmentation scheme for data compression in storage and transmission including hierarchical segment representation |
| US20100125553A1 (en) * | 2008-11-14 | 2010-05-20 | Data Domain, Inc. | Delta compression after identity deduplication |
| US20110218972A1 (en) * | 2010-03-08 | 2011-09-08 | Quantum Corporation | Data reduction indexing |
| US20110238635A1 (en) * | 2010-03-25 | 2011-09-29 | Quantum Corporation | Combining Hash-Based Duplication with Sub-Block Differencing to Deduplicate Data |
| US20110307447A1 (en) * | 2010-06-09 | 2011-12-15 | Brocade Communications Systems, Inc. | Inline Wire Speed Deduplication System |
| US9514146B1 (en) * | 2013-09-26 | 2016-12-06 | Emc Corporation | System and method for improving data compression of a storage system in an online manner |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090204636A1 (en) * | 2008-02-11 | 2009-08-13 | Microsoft Corporation | Multimodal object de-duplication |
| US20110055471A1 (en) * | 2009-08-28 | 2011-03-03 | Jonathan Thatcher | Apparatus, system, and method for improved data deduplication |
| US8364652B2 (en) * | 2010-09-30 | 2013-01-29 | Commvault Systems, Inc. | Content aligned block-based deduplication |
- 2016-05-13: US application US15/154,044 filed; patented as US10416915B2 (status: active)
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20170038978A1 (en) * | 2015-08-05 | 2017-02-09 | HGST Netherlands B.V. | Delta Compression Engine for Similarity Based Data Deduplication |
| US9864542B2 (en) * | 2015-09-18 | 2018-01-09 | Alibaba Group Holding Limited | Data deduplication using a solid state drive controller |
| CN106997391A (en) * | 2017-04-10 | 2017-08-01 | North China Electric Power University (Baoding) | A method for quickly screening steady-state condition data in large-scale process data |
| US10282127B2 (en) | 2017-04-20 | 2019-05-07 | Western Digital Technologies, Inc. | Managing data in a storage system |
| US10809928B2 (en) | 2017-06-02 | 2020-10-20 | Western Digital Technologies, Inc. | Efficient data deduplication leveraging sequential chunks or auxiliary databases |
| CN107391034A (en) * | 2017-07-07 | 2017-11-24 | Huazhong University of Science and Technology | A duplicate data detection method based on local optimization |
| US10503608B2 (en) | 2017-07-24 | 2019-12-10 | Western Digital Technologies, Inc. | Efficient management of reference blocks used in data deduplication |
| TWI780168B (en) * | 2017-10-11 | 2022-10-11 | 南韓商三星電子股份有限公司 | Data storage device and bridge device |
| TWI751374B (en) * | 2017-10-11 | 2022-01-01 | 南韓商三星電子股份有限公司 | Bridge device and method for providing near storage compute |
| US11487696B2 (en) | 2017-10-11 | 2022-11-01 | Samsung Electronics Co., Ltd. | System and method for providing in-storage acceleration (ISA) in data storage devices |
| US12001374B2 (en) | 2017-10-11 | 2024-06-04 | Samsung Electronics Co., Ltd. | System and method for providing in-storage acceleration (ISA) in data storage devices |
| US12450188B2 (en) | 2017-10-11 | 2025-10-21 | Samsung Electronics Co., Ltd. | System and method for providing in-storage acceleration (ISA) in data storage devices |
| US11055006B1 (en) * | 2017-10-30 | 2021-07-06 | EMC IP Holding Company LLC | Virtual storage domain for a content addressable system |
| US20200034244A1 (en) * | 2018-07-26 | 2020-01-30 | EMC IP Holding Company LLC | Detecting server pages within backups |
| US11449465B2 (en) * | 2019-09-11 | 2022-09-20 | International Business Machines Corporation | Fixed chunk size deduplication with variable-size chunking |
| US20220035546A1 (en) * | 2020-08-03 | 2022-02-03 | Cornell University | Base and compressed difference data deduplication |
| US11797207B2 (en) * | 2020-08-03 | 2023-10-24 | Cornell University | Base and compressed difference data deduplication |
Also Published As
| Publication number | Publication date |
|---|---|
| US10416915B2 (en) | 2019-09-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US10416915B2 (en) | Assisting data deduplication through in-memory computation | |
| US11334255B2 (en) | Method and device for data replication | |
| US8442942B2 (en) | Combining hash-based duplication with sub-block differencing to deduplicate data | |
| US20140358872A1 (en) | Storage system and method for performing deduplication in conjunction with host device and storage device | |
| US8180740B1 (en) | System and method for eliminating duplicate data by generating data fingerprints using adaptive fixed-length windows | |
| US8335877B2 (en) | Hardware acceleration of commonality factoring on removable media | |
| CN102246137B (en) | Delta compression after the deletion of identity copy | |
| US11627207B2 (en) | Systems and methods for data deduplication by generating similarity metrics using sketch computation | |
| CN103729225A (en) | Content-defined chunking remote file real-time updating method | |
| US11995050B2 (en) | Systems and methods for sketch computation | |
| CN105912268B (en) | Distributed repeated data deleting method and device based on self-matching characteristics | |
| JP2018527681A (en) | Data deduplication using a solid-state drive controller | |
| US20210191640A1 (en) | Systems and methods for data segment processing | |
| US11379524B2 (en) | Multiple overlapping hashes at variable offset in a hardware offload | |
| EP4078340A1 (en) | Systems and methods for sketch computation | |
| WO2021082926A1 (en) | Data compression method and apparatus | |
| JP2012164130A (en) | Data division program | |
| CN114072759B (en) | Data processing method and device in storage system and computer readable storage medium | |
| US11748307B2 (en) | Selective data compression based on data similarity | |
| CN115809013A (en) | Data deduplication method and related device | |
| CN103885859B (en) | It is a kind of to go fragment method and system based on global statistics | |
| WO2018036290A1 (en) | Data compression method and terminal | |
| Abdulsalam et al. | Evaluation of Two Thresholds Two Divisor Chunking Algorithm Using Rabin Finger print, Adler, and SHA1 Hashing Algorithms | |
| US20240345955A1 (en) | Detecting Modifications To Recently Stored Data | |
| CN115543688B (en) | Backup method, backup device, proxy terminal and storage medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: SCALEFLUX, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHONG, HAO;SUN, FEI;LIU, YANG;REEL/FRAME:038599/0926 Effective date: 20160511 |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
| STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |