US8484536B1 - Techniques for data storage, access, and maintenance
- Publication number: US8484536B1
- Application number: US12/748,066
- Authority: United States (US)
- Prior art keywords: chunks, error, storage nodes, correcting code, data
- Prior art date: 2010-03-26
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2053—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
- G06F11/2094—Redundant storage or storage space
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2211/00—Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
- G06F2211/10—Indexing scheme relating to G06F11/10
- G06F2211/1002—Indexing scheme relating to G06F11/1076
- G06F2211/109—Sector level checksum or ECC, i.e. sector or stripe level checksum or ECC in addition to the RAID parity calculation
Definitions
- This specification relates to data storage, access, and maintenance.
- Important data is stored on storage devices that can fail.
- The data can be backed up and stored redundantly so that the data can be recovered if a storage device fails.
- Data centers can store large amounts of data. Some data is stored redundantly across multiple data centers so that even if an entire data center fails the data can be recovered.
- Data can be stored using error-detecting codes.
- An error-detecting code adds extra data to the data that enables detection of certain errors in the data.
- One example of an error-detecting code is a cyclic redundancy check (CRC).
- CRC codes are used to detect failures on storage devices (e.g., hard disk drives).
- Data can also be stored using error-correcting codes.
- An error-correcting code adds extra data to the data that enables correction of errors in the data. The number of errors that can be corrected is limited by the amount of extra data that is added. Examples of error-correcting codes include Reed-Solomon codes.
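To make the distinction concrete, here is a minimal Python sketch (the function names and the 256-byte chunk size are illustrative assumptions, not from the patent) of storing a chunk with a CRC, an error-detecting code that can reveal damage but not repair it:

```python
import zlib

def store_with_crc(chunk: bytes) -> bytes:
    # Append a 4-byte CRC-32 checksum (an error-detecting code) to the chunk.
    return chunk + zlib.crc32(chunk).to_bytes(4, "big")

def is_healthy(stored: bytes) -> bool:
    # Recompute the CRC over the data portion and compare it to the stored CRC.
    chunk, crc = stored[:-4], stored[-4:]
    return zlib.crc32(chunk) == int.from_bytes(crc, "big")

stored = store_with_crc(bytes(256))        # a 256-byte data chunk
assert is_healthy(stored)
damaged = b"\x01" + stored[1:]             # a single corrupted byte is detected
assert not is_healthy(damaged)
```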
- A computer-implemented method includes generating a plurality of error-correcting code chunks from a plurality of data chunks.
- The error-correcting code chunks can be used to reconstruct one or more of the data chunks.
- The data chunks are allocated to a local group of storage nodes.
- The error-correcting code chunks are allocated between the local group of storage nodes and one or more remote groups of storage nodes.
- Each remote group of storage nodes is allocated one or more unique error-correcting code chunks from the error-correcting code chunks. Any of the error-correcting code chunks not allocated to a remote group of storage nodes are allocated to the local group of storage nodes.
- Other implementations of this aspect include corresponding systems, apparatus, and computer program products.
- Each remote group of storage nodes is allocated the same number of the error-correcting code chunks.
- Each data chunk is stored at a distinct storage node of the local group of storage nodes.
- Each error-correcting code chunk at each remote group of storage nodes is stored at a distinct storage node of that group.
- Each error-correcting code chunk and each data chunk is the same size.
- Each data chunk and each error-correcting code chunk is stored using an error-detecting code so that damaged chunks can be identified.
- Generating the error-correcting code chunks includes using a maximum distance separable (MDS) error-correcting code.
- The local group of storage nodes is a first data center and each of the remote groups of storage nodes is a distinct data center.
- A number of error-correcting code chunks generated is based on the formula ((R - 1)*d + R*c), where R is the total number of groups of storage nodes including the local group of storage nodes and the one or more remote groups of storage nodes, d is the number of data chunks, and c is a variable parameter related to a level of redundancy.
- Each remote group of storage nodes is allocated (d + c) unique error-correcting code chunks.
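As a worked example of this formula, using the values of FIG. 4B (R = 2, d = 6, c = 3, assumed here for illustration), the counts fall out as follows; this is a sketch, not the patent's code:

```python
# Chunk counts for the local-group allocation of FIG. 4B.
R, d, c = 2, 6, 3
code_chunks = (R - 1) * d + R * c                  # 12 code chunks generated
per_remote = d + c                                 # 9 unique code chunks per remote group
local_code = code_chunks - (R - 1) * per_remote    # 3 code chunks stay at the local group
assert d + local_code == per_remote                # every group stores d + c = 9 chunks
```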
- A computer-implemented method includes generating a plurality of error-correcting code chunks using a plurality of data chunks.
- The error-correcting code chunks can be used to reconstruct one or more of the data chunks.
- The data chunks are allocated to each of two or more groups of storage nodes.
- The error-correcting code chunks are allocated between the two or more groups of storage nodes.
- Each group of storage nodes is allocated one or more unique error-correcting code chunks.
- Other implementations of this aspect include corresponding systems, apparatus, and computer program products.
- Implementations can include any or all of the following features.
- Each group of storage nodes is allocated the same number of error-correcting code chunks.
- The number of error-correcting code chunks generated is based on the formula (R*c), where R is the total number of groups of storage nodes and c is a variable parameter related to a level of redundancy.
- Each group of storage nodes is allocated c unique error-correcting code chunks.
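As a worked example of this formula with the values of FIG. 4C (R = 2, d = 6, c = 3, assumed here for illustration):

```python
# Chunk counts for the mirrored-data allocation of FIG. 4C.
R, d, c = 2, 6, 3
assert R * c == 6    # code chunks generated in total
assert d + c == 9    # chunks at each group: all d data chunks plus c unique code chunks
```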
- A computer-implemented method includes identifying a damaged chunk of data.
- The damaged chunk is associated with a full stripe of d data chunks and one or more error-correcting code chunks (where d is greater than one).
- The damaged chunk is stored at a first storage node of a first group of storage nodes.
- h healthy chunks are identified at the first group of storage nodes.
- Each of the h healthy chunks is one of the d data chunks or the error-correcting code chunks of the full stripe.
- h is less than d, and h is greater than or equal to zero.
- (d - h) healthy chunks are identified among one or more second groups of storage nodes, where each of the second groups of storage nodes is distinct from the first group of storage nodes.
- Each of the (d - h) healthy chunks is a unique one of the d data chunks or error-correcting code chunks of the stripe.
- The damaged chunk is reconstructed using the identified healthy chunks.
- The reconstructed chunk is stored in an available storage node.
- Other implementations of this aspect include corresponding systems, apparatus, and computer program products.
- Implementations can include any or all of the following features.
- A request for the damaged chunk is received.
- The storage nodes of the first group of storage nodes are periodically polled to determine whether any storage nodes are damaged. Identifying the damaged chunk of data includes using an error-detecting code. Reconstructing the damaged chunk includes using a maximum distance separable (MDS) error-correcting code.
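A small numeric sketch of the counts involved (the values of d and h are assumed for illustration): when only h healthy chunks of the stripe remain at the first group and h < d, exactly (d - h) additional unique healthy chunks must be fetched from the second groups before reconstruction can proceed:

```python
# Counts for cross-group reconstruction (illustrative values).
d, h = 6, 4            # the stripe has d data chunks; h healthy chunks were found locally
assert 0 <= h < d      # the local chunks alone cannot reconstruct the stripe
remote_needed = d - h  # 2 unique healthy chunks must come from other groups
```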
- Data can be stored, accessed, and maintained at groups of storage nodes while reducing either communication between groups or total storage space or both. Communication between groups can be reduced when a group (e.g., a “local” group) can be identified that receives more requests for data than other groups. In some cases, reliability can be greatly improved without increasing storage or costs associated with communication between groups.
- FIG. 1 is a schematic diagram of an example encoding system, an example local data center, and example remote data centers.
- FIG. 2 is an illustration of an example file comprising partial stripes of data chunks.
- FIG. 3A is an illustration of an example full stripe comprising a partial stripe of data chunks and error-correcting code chunks.
- FIG. 3B is an illustration of an example full stripe comprising a partial stripe of data chunks and error-correcting code chunks.
- FIG. 3C illustrates an example full stripe comprising a partial stripe of data chunks and error-correcting code chunks.
- FIG. 4A is a diagram showing two example data centers that can communicate using a network.
- FIG. 4B is a diagram showing an example local data center and an example remote data center that can communicate using a network.
- FIG. 4C is a diagram showing an example first data center and an example second data center that can communicate using a network.
- FIG. 5 is a flow diagram of an example technique for storing data at groups of storage nodes.
- FIG. 6 is a flow diagram of an example technique for storing data at groups of storage nodes.
- FIG. 7 is a flow diagram of a technique for storing a file at groups of storage nodes.
- FIG. 8 is a flow diagram of a technique for data access and maintenance.
- FIG. 9 is a schematic diagram of an example system configured for data storage, access, and maintenance.
- FIG. 1 is a diagram of an encoding system 102 , a local data center 104 , and remote data centers 106 .
- The encoding system 102, comprising one or more data processing apparatuses, can store data from a file 108 across storage nodes 110 at the local data center 104 and the remote data centers 106. Redundant copies of the data and error-correcting code chunks can also be stored at storage nodes 110.
- The encoding system communicates with the local data center 104 and the remote data centers 106 using a network 112 (e.g., a local area network (LAN), a wide area network (WAN), a cellular network, the Internet, combinations of networks, and the like).
- A storage node comprises one or more computer storage mediums.
- A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of these.
- A computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple compact discs, disk drives, or other storage devices).
- A storage node can be a data server, for example, a server including a data processing apparatus and a redundant array of independent disks (RAID) that can divide data among multiple hard disk drives.
- A group of storage nodes can include a rack, a subnetwork, a data center, or various other collections of servers or storage nodes.
- A data center is a group of storage nodes.
- A data center is a facility with physical space for computer systems.
- Data centers include telecommunication systems, backup power supplies, climate controls, security, and the like.
- Storing additional redundant copies of data requires more storage space at the storage nodes 110.
- Additional storage space can require, for example, more physical space in a data center, more electricity, more climate control, more money, and so on.
- The encoding system 102 can access data at the local data center 104 faster than it can access data at the remote data centers 106 (e.g., because the remote data centers are on a network with more traffic or less bandwidth or both, or are physically farther away, or for other reasons). Thus, recovery of damaged data takes more time when data at the remote data centers 106 needs to be accessed more frequently.
- Communication between storage nodes in a data center is typically less expensive (e.g., faster, or requiring less money, or the like) than communication between data centers.
- Recovery of damaged data takes more time when data centers need to communicate with each other than when data can be recovered at a single data center.
- Failures of storage nodes within a data center are correlated (e.g., because failures occur when power is lost to the whole data center, a hurricane strikes the data center, or the like), while failures between storage nodes in different data centers are uncorrelated or weakly correlated. Consequently, data is generally stored using techniques that tolerate at least the loss of a single data center (that is, techniques that can recover data despite an entire data center failing).
- Metadata is used at the encoding system 102 or the data centers 104 and 106 or both to keep track of data.
- The metadata can specify which parts of a file are stored at which data centers, where redundant copies of data are stored, and the like.
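As a minimal sketch of such metadata (all type and field names here are hypothetical, not from the patent), each stripe can be tracked with a record mapping every chunk to the group and storage node that holds it:

```python
from dataclasses import dataclass

@dataclass
class ChunkLocation:
    chunk_id: str       # e.g., "D1" or "C7"
    data_center: str    # which group of storage nodes holds the chunk
    storage_node: str   # distinct node within that group

@dataclass
class StripeMetadata:
    file_name: str
    stripe_index: int                # position of the partial stripe in the file
    locations: list[ChunkLocation]   # one entry per data or code chunk

meta = StripeMetadata("file-108", 0, [
    ChunkLocation("D1", "local-104", "node-1"),
    ChunkLocation("C7", "remote-106", "node-3"),
])
```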
- FIG. 2 is an illustration of an example file 202 comprising partial stripes 204 and 206 of data chunks 208 .
- A data chunk is a specified amount of data.
- In some implementations, a data chunk is a contiguous portion of data from a file.
- In other implementations, a data chunk is one or more non-contiguous portions of data from a file.
- A data chunk can be 256 bytes or another specified unit of data.
- In FIGS. 2-4, data chunks are illustrated as squares and labeled with the letter "D" and a number. The number indicates the position of the data chunk in the file 202. For example, "D1" indicates the first data chunk in the file 202.
- Squares with the same label represent chunks containing the same data; for example, every square labeled "D1" represents the first data chunk of the file 202.
- A partial stripe is a specified number of data chunks.
- The partial stripes 204 and 206 of the example file 202 each have six data chunks. Any file or any amount of data can be divided into partial stripes of data chunks.
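A minimal sketch of this chunking (the chunk size and stripe width are taken from the examples above; the function name is an assumption):

```python
# Split a file's bytes into 256-byte data chunks grouped into 6-chunk partial stripes.
CHUNK_SIZE = 256    # bytes per data chunk
STRIPE_WIDTH = 6    # data chunks per partial stripe (d)

def to_partial_stripes(data: bytes) -> list[list[bytes]]:
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [chunks[i:i + STRIPE_WIDTH] for i in range(0, len(chunks), STRIPE_WIDTH)]

stripes = to_partial_stripes(bytes(3072))    # 12 chunks -> 2 partial stripes
assert len(stripes) == 2 and len(stripes[0]) == STRIPE_WIDTH
```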
- FIG. 3A is an illustration of an example full stripe 302 comprising a partial stripe 204 of data chunks and error-correcting code chunks 304 .
- A full stripe comprises a partial stripe and one or more error-correcting code chunks.
- The depicted full stripe 302 includes the partial stripe 204 from FIG. 2 and three error-correcting code chunks 304.
- An error-correcting code chunk ("code chunk" hereinafter) comprises a chunk of data based on one or more data chunks of a partial stripe.
- Each code chunk in a full stripe is the same specified size (e.g., 256 bytes) as the data chunks.
- The letter "d" is used in this specification to refer to the number of data chunks in a partial stripe or a full stripe.
- The letter "c" is used in this specification to refer to a variable parameter related to a level of redundancy. In some implementations, c is the number of code chunks in a full stripe. In other implementations, the number of code chunks is based on c.
- The code chunks are generated using an error-correcting code.
- In some implementations, generating the code chunks comprises using a maximum distance separable (MDS) code.
- MDS codes include Reed-Solomon codes.
- Various techniques can be used to generate the code chunks.
- In general, any error-correcting code can be used that can reconstruct the d data chunks from any set of d unique, healthy chunks (either data chunks or code chunks) out of a full stripe.
- With such a code, any number of failures up to the total number of code chunks in a full stripe can be tolerated: the full stripe can be reconstructed using the healthy chunks. A sketch of one such code follows.
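As a sketch of how such a code can work, the following example implements a small MDS erasure code by polynomial evaluation over the prime field GF(65537), a generalized Reed-Solomon construction; the field choice, symbol values, and function names are illustrative assumptions, not the patent's implementation. Each chunk is reduced to a single field symbol here; a real chunk would be a vector of symbols with the code applied position by position:

```python
# A minimal MDS erasure code (a sketch). Chunk i of a full stripe stores f(i),
# where f is the unique polynomial of degree < d through the d data symbols.
P = 65537  # prime field size, so every nonzero element has a modular inverse

def interpolate_at(points, x):
    """Lagrange-evaluate, at x, the degree < len(points) polynomial through
    the given (xi, yi) points; all arithmetic is mod P."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data_symbols, c):
    """Produce c code symbols: evaluations of the interpolating polynomial
    at the points d, d + 1, ..., d + c - 1."""
    d = len(data_symbols)
    points = list(enumerate(data_symbols))
    return [interpolate_at(points, d + j) for j in range(c)]

def reconstruct(healthy, d, n):
    """healthy: dict of chunk index -> symbol with at least d entries.
    Any d unique healthy chunks recover the entire n-chunk full stripe."""
    points = list(healthy.items())[:d]
    return [interpolate_at(points, x) for x in range(n)]

data = [10, 20, 30, 40, 50, 60]                   # d = 6 data chunks
stripe = data + encode(data, 6)                   # plus c = 6 code chunks, as in FIG. 3C
survivors = {i: stripe[i] for i in (1, 3, 7, 8, 10, 11)}
assert reconstruct(survivors, 6, 12) == stripe    # six damaged chunks are tolerated
```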
- A damaged chunk is a chunk containing one or more errors.
- A damaged chunk is identified using an error-detecting code.
- A damaged chunk can be completely erased (e.g., if the chunk was stored in a hard drive destroyed in a hurricane), or a damaged chunk can have a single bit flipped.
- A healthy chunk is a chunk that is not damaged.
- FIG. 3B is an illustration of an example full stripe 306 comprising a partial stripe 204 of data chunks and error-correcting code chunks 308 .
- The example full stripe 306 has 12 code chunks.
- The entire example full stripe 306 can be regenerated as long as any six chunks (code chunks or data chunks) are healthy. Consequently, data is only lost when there are more than 12 damaged chunks.
- The full stripe 306 can be reconstructed using any six of the remaining healthy chunks (e.g., C1-C6, or C7-C12, or C1, C3, C5, C7, C9, and C11).
- FIG. 3C illustrates an example full stripe 310 comprising a partial stripe 204 of data chunks and error-correcting code chunks 312 .
- The example full stripe 310 has six code chunks.
- The entire example full stripe 310 can be regenerated as long as any six chunks (code chunks or data chunks) are healthy. Consequently, data is only lost when there are more than six failures.
- FIG. 4A is a diagram showing two example data centers 402 and 404 that can communicate using a network 406 .
- FIG. 4A illustrates an example possibility for how to store and maintain the example full stripe 302 illustrated in FIG. 3A .
- Identical copies of the full stripe 302 are allocated to each data center 402 and 404.
- In some implementations, each chunk is stored at a distinct storage node at each data center.
- In one example scenario, the first data center 402 can reconstruct the full stripe 302 using the healthy remaining chunks, D4-D6 and C1-C3.
- Alternatively, the first data center 402 can retrieve the copies of D1-D3 from the second data center 404 (e.g., to minimize the amount of processing resources used for reconstruction).
- In another scenario, the first data center 402 cannot reconstruct the full stripe 302 using only the healthy chunks at the first data center 402. Nonetheless, the first data center 402 can retrieve the copies of D1-D4 from the second data center 404.
- Alternatively, the first data center 402 can retrieve a healthy chunk (any chunk from D1-D4) from the second data center 404, and it can then reconstruct the full stripe 302.
- Similarly, the second data center 404 can retrieve a healthy chunk (any one of D6 or C1-C3) from the first data center 402 and reconstruct the full stripe 302.
- FIG. 4B is a diagram showing an example local data center 408 and an example remote data center 410 that can communicate using a network 406 .
- FIG. 4B illustrates an example possibility for how to store and maintain the example full stripe 306 illustrated in FIG. 3B .
- The data chunks are allocated to the local data center 408.
- The code chunks are allocated between the local data center 408 and the remote data center 410.
- Each data center has the same total number of chunks.
- In some implementations, each chunk is stored at a distinct storage node at each data center.
- When enough of its chunks are healthy, the local data center 408 can reconstruct the full stripe 306 (and hence the damaged data chunks) using its healthy chunks, so no between-group communication costs are incurred.
- Similarly, the remote data center 410 can reconstruct the full stripe 306 using its healthy chunks without between-group communication.
- When it lacks enough healthy chunks, the local data center 408 can retrieve a healthy chunk (any chunk from C8-C12) from the remote data center 410, and it can then reconstruct the full stripe 306.
- Likewise, the remote data center 410 can retrieve a healthy chunk (any one of D5-D6 or C1-C3) from the local data center 408 and reconstruct the full stripe 306.
- An advantage of the example allocation of chunks illustrated in FIG. 4B is that the full stripe 306 can be reconstructed even if a large number of chunks (12 in total) are damaged. Nonetheless, the remote data center 410 does not have the data chunks immediately available. If the remote data center 410 receives a request for the data chunks, it can reconstruct them using its code chunks (e.g., to minimize the costs of communicating between data centers), or it can forward the request to the local data center 408 (e.g., to minimize usage of processing resources) and retrieve the data chunks from there. If the local data center 408 receives a request for the data chunks, it can serve them directly from its storage nodes. Consequently, this example allocation is generally suitable where more data requests go to the local data center 408 than to the remote data center 410, as in the routing sketch below.
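A minimal sketch of that request-routing choice (the function name and cost flags are hypothetical, for illustration only):

```python
# Hypothetical routing of a data-chunk request under the FIG. 4B allocation.
def route_request(has_data_chunks: bool, prefer_low_network_cost: bool) -> str:
    if has_data_chunks:
        return "serve directly from storage nodes"         # local data center 408
    if prefer_low_network_cost:
        return "reconstruct data chunks from code chunks"  # spends CPU, saves network
    return "forward request to the local data center"      # spends network, saves CPU

assert route_request(False, True) == "reconstruct data chunks from code chunks"
```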
- FIG. 4C is a diagram showing an example first data center 402 and an example second data center 404 that can communicate using a network 406 .
- FIG. 4C illustrates an example possibility for how to store and maintain the example full stripe 310 illustrated in FIG. 3C .
- The data chunks are allocated to both the first data center 402 and the second data center 404.
- The code chunks are allocated between the first data center 402 and the second data center 404.
- Each data center has the same total number of chunks. In some implementations, each chunk is stored at a distinct storage node at each data center.
- When enough of its chunks are healthy, the first data center 402 can reconstruct the full stripe 310 using its healthy chunks, so no between-group communication costs are incurred.
- Similarly, the second data center 404 can reconstruct the full stripe 310 using its healthy chunks without between-group communication.
- When it lacks enough healthy chunks, the first data center 402 can retrieve a healthy chunk (any chunk from C4-C6, but not D5 or D6 because it already has those chunks) from the second data center 404, and it can then reconstruct the full stripe 310.
- Likewise, the second data center 404 can retrieve a healthy chunk (any one of C1-C3, but not D5 or D6 because it already has those) from the first data center 402 and reconstruct the full stripe 310.
- FIG. 5 is a flow diagram of an example technique 500 for storing data at groups of storage nodes.
- The technique 500 is performed by a system, for example, the encoding system 102, or a system in a local data center 104, or the like.
- The technique 500 will be described with respect to a system that performs the technique 500.
- The technique 500 can be used, for example, to achieve the example allocation of code chunks and data chunks between the data centers illustrated in FIG. 4B.
- The system identifies data chunks (step 502).
- The data chunks can be from a partial stripe of data.
- The system receives the data chunks with a request to store the data chunks.
- The system generates code chunks using the data chunks (step 504).
- The code chunks can be generated using an MDS code.
- The number of code chunks generated is based on the formula ((R - 1)*d + R*c), where R is the total number of groups of storage nodes, d is the number of data chunks, and c is a variable parameter related to a level of redundancy.
- Each code chunk and each data chunk are the same size (e.g., the same number of bytes).
- The system allocates the data chunks to a local group of storage nodes (step 506).
- Allocating the data chunks comprises sending them to the local group.
- The local group can be, for example, a data center, a group of servers in a data center, an array of hard drives, or the like.
- Each data chunk is stored at a distinct storage node of the local group of storage nodes.
- Each data chunk is stored using an error-detecting code so that damaged chunks can be identified. For example, each data chunk can be stored with a CRC.
- The system allocates the code chunks between the local group of storage nodes and one or more remote groups of storage nodes (step 508).
- In some implementations, the local group of storage nodes is a first data center and each of the remote groups of storage nodes is a distinct data center.
- Each remote group of storage nodes is allocated one or more unique code chunks from the code chunks generated in step 504.
- The code chunks are unique because they were created using an error-correcting code specifying a number of unique code chunks.
- Each remote group of storage nodes is allocated the same number of code chunks. Any of the code chunks not allocated to a remote group of storage nodes are allocated to the local group of storage nodes (in addition to the data chunks).
- Each code chunk at each remote group of storage nodes is stored at a distinct storage node.
- Each code chunk is stored using an error-detecting code so that damaged chunks can be identified. For example, each code chunk can be stored with a CRC.
- In some implementations, allocating data or code chunks includes sending those chunks to the group of storage nodes that is allocated the chunks. For example, the encoding system can generate all of the code chunks and then send each allocated code chunk to its allocated group of storage nodes.
- In other implementations, each group of storage nodes generates its allocated code chunks after receiving the data chunks (or d chunks of either data or code chunks). For example, the data chunks can be sent to each of the remote groups of storage nodes, and each remote group can then generate its allocated code chunks (e.g., by reconstructing an entire full stripe using an error-correcting code and retaining only its allocated code chunks). A sketch of the allocation follows.
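A minimal sketch of this allocation (a hypothetical helper; chunk labels are illustrative and R, d, c are as defined above):

```python
# Sketch of steps 506/508: the local group gets the data chunks plus the leftover
# code chunks; each of the R - 1 remote groups gets d + c unique code chunks.
def allocate(data, code, R):
    d = len(data)
    c = (len(code) - (R - 1) * d) // R           # from len(code) == (R - 1)*d + R*c
    per_remote = d + c
    groups = {"local": data + code[(R - 1) * per_remote:]}
    for g in range(R - 1):
        groups[f"remote-{g}"] = code[g * per_remote:(g + 1) * per_remote]
    return groups

data = [f"D{i}" for i in range(1, 7)]            # d = 6
code = [f"C{j}" for j in range(1, 13)]           # (2 - 1)*6 + 2*3 = 12 code chunks
groups = allocate(data, code, R=2)
assert groups["remote-0"] == [f"C{j}" for j in range(1, 10)]   # 9 unique code chunks
assert groups["local"] == data + ["C10", "C11", "C12"]         # data plus c = 3 code chunks
```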
- FIG. 6 is a flow diagram of an example technique 600 for storing data at groups of storage nodes.
- The technique 600 is performed by a system, for example, the encoding system 102, or a system in a data center, or the like.
- The technique 600 will be described with respect to a system that performs the technique 600.
- The technique 600 can be used, for example, to achieve the example allocation of code chunks and data chunks between the data centers illustrated in FIG. 4C.
- The system identifies data chunks (step 602).
- The system generates code chunks using the data chunks (step 604).
- The number of code chunks generated is based on the formula (R*c), where R is the total number of groups of storage nodes and c is a variable parameter related to a level of redundancy.
- The system allocates the data chunks to each of two or more groups of storage nodes (step 606). Typically, the data chunks are sent to each group of storage nodes. The system allocates the code chunks between the groups of storage nodes (step 608). In some implementations, each group of storage nodes is a distinct data center.
- Each group of storage nodes is allocated one or more unique code chunks from the code chunks generated in step 604 (step 608).
- In some implementations, the system generates the code chunks and sends the allocated code chunks to the groups of storage nodes.
- In other implementations, each group of storage nodes generates its allocated code chunks. For example, each group can reconstruct a full stripe using the data chunks allocated to it and then retain its allocated code chunks.
- Each group of storage nodes is allocated the same number of code chunks; a sketch of this allocation follows.
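A minimal sketch of this mirrored-data allocation (a hypothetical helper; chunk labels follow FIG. 4C):

```python
# Sketch of steps 606/608: every group stores all data chunks plus c unique code chunks.
def allocate_mirrored(data, code, R):
    c = len(code) // R                            # from len(code) == R * c
    return [data + code[g * c:(g + 1) * c] for g in range(R)]

data = [f"D{i}" for i in range(1, 7)]             # d = 6 data chunks
code = [f"C{j}" for j in range(1, 7)]             # R*c = 6 code chunks
groups = allocate_mirrored(data, code, R=2)
# As in FIG. 4C: group 0 holds D1-D6 plus C1-C3; group 1 holds D1-D6 plus C4-C6.
assert groups[0][-3:] == ["C1", "C2", "C3"] and groups[1][-3:] == ["C4", "C5", "C6"]
```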
- FIG. 7 is a flow diagram of a technique 700 for storing a file at groups of storage nodes.
- The technique 700 is performed by a system, e.g., the encoding system 102, or a system at a data center.
- The technique 700 will be described with respect to a system.
- The system identifies the file (step 702).
- A file is a collection of data.
- The system identifies a partial stripe of data chunks in the file (step 704). Typically the system starts at the beginning of the file and works through the file one partial stripe at a time.
- The system allocates a full stripe (based on the partial stripe) to the groups of storage nodes (step 706). For example, the technique 500 described in FIG. 5 or the technique 600 described in FIG. 6 can be used. If there are more partial stripes in the file (step 708), the system repeats steps 704 and 706 for the additional partial stripes; a sketch of this loop follows.
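A compact sketch of this per-stripe loop (the helpers `encode` and `allocate` stand in for technique 500 or 600 and are passed in; the names are assumptions):

```python
# Sketch of technique 700: encode and allocate a file one partial stripe at a time.
def store_file(file_bytes, chunk_size, d, c, encode, allocate):
    chunks = [file_bytes[i:i + chunk_size] for i in range(0, len(file_bytes), chunk_size)]
    for s in range(0, len(chunks), d):         # steps 704-708: one partial stripe per pass
        partial = chunks[s:s + d]              # step 704: identify the partial stripe
        full = partial + encode(partial, c)    # build the full stripe
        allocate(full)                         # step 706: technique 500 or 600
```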
- FIG. 8 is a flow diagram of a technique 800 for data access and maintenance.
- The technique 800 is performed by a system, e.g., the encoding system 102, or a system at a data center.
- The technique 800 will be described with respect to a system.
- The system identifies a damaged chunk of data (step 802).
- The damaged chunk is associated with a full stripe of data including d data chunks (d > 1) and one or more code chunks.
- The damaged chunk is stored at a first storage node of a first group of storage nodes.
- In some implementations, identifying the damaged chunk of data includes using an error-detecting code.
- In some cases, the system identifies the damaged chunk when the system receives a request for the damaged chunk. In other cases, the system identifies the damaged chunk while periodically polling the storage nodes of the first group of storage nodes to determine whether any storage nodes are damaged. In still other cases, the system identifies the damaged chunk after receiving notification of a problem, for example, a power failure, a hard drive failure, a memory error, or the like.
- The system identifies h healthy chunks at the first group of storage nodes, where h is less than d. The system identifies both healthy data chunks and healthy code chunks. In some cases, there are no healthy chunks at the first group of storage nodes. In some implementations, identifying healthy chunks includes using an error-detecting code.
- The system determines whether (d - h) healthy, unique chunks are available from one or more second groups of storage nodes (step 808). In some implementations, the system polls each group of storage nodes to determine which chunks of the full stripe are available and healthy.
- If the chunks are not available, the system can search for healthy chunks at other locations. In some cases, the system is unable to reconstruct the damaged chunk and reports a problem (e.g., by sending a message to the encoding system 102, sending a message that a requested chunk is not available, displaying an error message on a display device, or the like) (step 810).
- Otherwise, the system retrieves the (d - h) healthy, unique chunks (step 812).
- The system reconstructs the damaged chunk (step 814).
- In some implementations, reconstructing the damaged chunk includes using a maximum distance separable (MDS) code.
- The system stores the reconstructed chunk in an available storage node (step 816).
- In some cases, the reconstructed chunk is stored in the first storage node.
- In other cases, the first storage node is suspected to be damaged and the reconstructed chunk is stored in another storage node. A sketch of this technique, reusing the MDS example above, follows.
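A minimal end-to-end sketch of technique 800, reusing the `reconstruct` helper and `stripe` values from the MDS example above (the group layout and values are illustrative assumptions):

```python
# Gather h local healthy chunks, fetch (d - h) unique healthy chunks from
# other groups, then rebuild the damaged chunk.
def repair(damaged_idx, local_healthy, remote_groups, d, n):
    available = dict(local_healthy)                   # the h chunks found locally
    for group in remote_groups:                       # step 812: fetch (d - h) more
        for idx, sym in group.items():
            if len(available) == d:
                break
            if idx not in available:
                available[idx] = sym                  # a unique healthy chunk
    if len(available) < d:
        raise RuntimeError("cannot reconstruct")      # step 810: report a problem
    return reconstruct(available, d, n)[damaged_idx]  # steps 814-816

local = {0: stripe[0], 2: stripe[2]}                  # h = 2 healthy chunks locally
remote = [{6: stripe[6], 7: stripe[7], 8: stripe[8], 9: stripe[9]}]
assert repair(1, local, remote, d=6, n=12) == stripe[1]
```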
- FIG. 9 is a schematic diagram of an example system configured for data storage, access, and maintenance.
- The system generally consists of a server 902.
- The server 902 is optionally connected to one or more user or client computers 990 through a network 980.
- The server 902 consists of one or more data processing apparatus. While only one data processing apparatus is shown in FIG. 9, multiple data processing apparatus can be used.
- The server 902 includes various modules, e.g., executable software programs, including an error-correcting code engine 904 for generating code chunks and reconstructing damaged chunks.
- An error-detecting code engine 906 is configured to identify damaged chunks of data.
- An allocation engine 908 allocates code chunks and data chunks between one or more groups of storage nodes.
- Each module can run, for example, as part of the operating system on the server 902, as an application on the server 902, or as part of the operating system and part of an application on the server 902.
- The software modules can be distributed on one or more data processing apparatus connected by one or more networks or other suitable communication mediums.
- The server 902 also includes hardware or firmware devices including one or more processors 912, one or more additional devices 914, a computer-readable medium 916, a communication interface 918, and optionally one or more user interface devices 920.
- Each processor 912 is capable of processing instructions for execution within the server 902.
- The processor 912 can be a single-threaded or multi-threaded processor.
- Each processor 912 is capable of processing instructions stored on the computer-readable medium 916 or on a storage device such as one of the additional devices 914.
- The server 902 uses its communication interface 918 to communicate with one or more computers 990, for example, over a network 980.
- In some implementations, the server 902 does not have any user interface devices; in others, it includes one or more user interface devices. Examples of user interface devices 920 include a display, a camera, a speaker, a microphone, a tactile feedback device, a keyboard, and a mouse.
- The server 902 can store instructions that implement operations associated with the modules described above, for example, on the computer-readable medium 916 or on one or more additional devices 914, for example, one or more of a floppy disk device, a hard disk device, an optical disk device, or a tape device.
- Embodiments of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
- Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus.
- The program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
- A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them.
- While a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal.
- The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
- The operations described in this specification can be implemented as operations performed by a data processing apparatus on data stored on one or more computer-readable storage devices or received from other sources.
- The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing.
- The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
- The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
- The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
- A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
- A computer program may, but need not, correspond to a file in a file system.
- A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
- A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
- The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
- The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
- Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both.
- The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
- Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks.
- However, a computer need not have such devices.
- A computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
- Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
- To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user, and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
- Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
- In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
- Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
- The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
- Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), an inter-network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
- The computing system can include clients and servers.
- A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
- In some embodiments, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/748,066 US8484536B1 (en) | 2010-03-26 | 2010-03-26 | Techniques for data storage, access, and maintenance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/748,066 US8484536B1 (en) | 2010-03-26 | 2010-03-26 | Techniques for data storage, access, and maintenance |
Publications (1)
Publication Number | Publication Date |
---|---|
US8484536B1 true US8484536B1 (en) | 2013-07-09 |
Family
ID=48701552
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/748,066 Expired - Fee Related US8484536B1 (en) | 2010-03-26 | 2010-03-26 | Techniques for data storage, access, and maintenance |
Country Status (1)
Country | Link |
---|---|
US (1) | US8484536B1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180165155A1 (en) * | 2015-02-19 | 2018-06-14 | Netapp, Inc. | Layering a distributed storage system into storage groups and virtual chunk spaces for efficient data recovery |
US10705907B1 (en) * | 2016-03-24 | 2020-07-07 | EMC IP Holding Company LLC | Data protection in a heterogeneous random access storage array |
US10705911B2 (en) * | 2017-04-24 | 2020-07-07 | Hewlett Packard Enterprise Development Lp | Storing data in a distributed storage system |
CN111506450A (en) * | 2019-01-31 | 2020-08-07 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for data processing |
US11036583B2 (en) * | 2014-06-04 | 2021-06-15 | Pure Storage, Inc. | Rebuilding data across storage nodes |
EP3916559A1 (en) * | 2014-01-31 | 2021-12-01 | Google LLC | Prioritizing data reconstruction in distributed storage systems |
Citations (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6032269A (en) * | 1998-06-30 | 2000-02-29 | Digi-Data Corporation | Firmware recovery from hanging channels by buffer analysis |
US6151641A (en) * | 1997-09-30 | 2000-11-21 | Lsi Logic Corporation | DMA controller of a RAID storage controller with integrated XOR parity computation capability adapted to compute parity in parallel with the transfer of data segments |
US6216247B1 (en) * | 1998-05-29 | 2001-04-10 | Intel Corporation | 32-bit mode for a 64-bit ECC capable memory subsystem |
US6378038B1 (en) * | 1999-03-31 | 2002-04-23 | International Business Machines Corporation | Method and system for caching data using raid level selection |
US20030056068A1 (en) | 2000-04-29 | 2003-03-20 | Mcallister Curtis R. | Arrangement of data within cache lines so that tags are first data received |
US20030149750A1 (en) * | 2002-02-07 | 2003-08-07 | Franzenburg Alan M. | Distributed storage array |
US6721317B2 (en) * | 1999-03-04 | 2004-04-13 | Sun Microsystems, Inc. | Switch-based scalable performance computer memory architecture |
US20050091234A1 (en) | 2003-10-23 | 2005-04-28 | International Business Machines Corporation | System and method for dividing data into predominantly fixed-sized chunks so that duplicate data chunks may be identified |
US20060080505A1 (en) * | 2004-10-08 | 2006-04-13 | Masahiro Arai | Disk array device and control method for same |
US7356752B2 (en) * | 2000-03-14 | 2008-04-08 | Comtech Telecommunications Corp. | Enhanced turbo product codes |
US7398459B2 (en) * | 2003-01-20 | 2008-07-08 | Samsung Electronics Co., Ltd. | Parity storing method and error block recovering method in external storage subsystem |
US7505890B2 (en) * | 2003-01-15 | 2009-03-17 | Cox Communications, Inc. | Hard disk drive emulator |
US7546484B2 (en) * | 2006-02-08 | 2009-06-09 | Microsoft Corporation | Managing backup solutions with light-weight storage nodes |
US20090262839A1 (en) | 2007-07-05 | 2009-10-22 | Shelby Kevin A | Transmission of Multimedia Streams to Mobile Devices With Uncoded Transport Tunneling |
US20090265578A1 (en) * | 2008-02-12 | 2009-10-22 | Doug Baloun | Full Stripe Processing for a Redundant Array of Disk Drives |
US7624229B1 (en) * | 2006-09-29 | 2009-11-24 | Emc Corporation | Spillover slot |
US20100037117A1 (en) * | 2008-08-05 | 2010-02-11 | Advanced Micro Devices, Inc. | Data error correction device and methods thereof |
US7676730B2 (en) * | 2005-09-30 | 2010-03-09 | Quantum Corporation | Method and apparatus for implementing error correction coding in a random access memory |
US7739446B2 (en) * | 2005-04-21 | 2010-06-15 | Hitachi, Ltd. | System and method for managing disk space in a thin-provisioned storage subsystem |
US7774681B2 (en) * | 2004-06-03 | 2010-08-10 | Inphase Technologies, Inc. | Data protection system |
US20100217915A1 (en) | 2009-02-23 | 2010-08-26 | International Business Machines Corporation | High availability memory system |
US7831764B2 (en) * | 2007-02-19 | 2010-11-09 | Hitachi, Ltd | Storage system having plural flash memory drives and method for controlling data storage |
US7861052B2 (en) * | 2006-05-16 | 2010-12-28 | Hitachi, Ltd. | Computer system having an expansion device for virtualizing a migration source logical unit |
US7861035B2 (en) * | 2006-06-20 | 2010-12-28 | Korea Advanced Institute Of Science And Technology | Method of improving input and output performance of raid system using matrix stripe cache |
US20110258161A1 (en) | 2010-04-14 | 2011-10-20 | International Business Machines Corporation | Optimizing Data Transmission Bandwidth Consumption Over a Wide Area Network |
US8065555B2 (en) * | 2006-02-28 | 2011-11-22 | Intel Corporation | System and method for error correction in cache units |
US8082393B2 (en) * | 2008-06-06 | 2011-12-20 | Pivot3 | Method and system for rebuilding data in a distributed RAID system |
US8090792B2 (en) * | 2007-03-08 | 2012-01-03 | Nec Laboratories America, Inc. | Method and system for a self managing and scalable grid storage |
US20120036333A1 (en) | 2004-09-30 | 2012-02-09 | Lecrone Douglas E | Triangular asynchronous replication |
US20120042142A1 (en) | 2008-08-08 | 2012-02-16 | Amazon Technologies, Inc. | Providing executing programs with reliable access to non-local block data storage |
US20120042200A1 (en) * | 2010-08-11 | 2012-02-16 | The University Of Tokyo | Control device and data storage device |
US20120042201A1 (en) * | 2009-06-05 | 2012-02-16 | Resnick David R | Failure recovery memory devices and methods |
US8145865B1 (en) * | 2006-09-29 | 2012-03-27 | Emc Corporation | Virtual ordered writes spillover mechanism |
US8176247B2 (en) * | 2008-10-28 | 2012-05-08 | Pivot3 | Method and system for protecting against multiple failures in a RAID system |
US8180954B2 (en) * | 2008-04-15 | 2012-05-15 | SMART Storage Systems, Inc. | Flash management using logical page size |
US8213205B2 (en) * | 2005-09-02 | 2012-07-03 | Google Inc. | Memory system including multiple memory stacks |
US8234539B2 (en) * | 2007-12-06 | 2012-07-31 | Sandisk Il Ltd. | Correction of errors in a memory array |
US8255761B1 (en) | 2007-07-12 | 2012-08-28 | Samsung Electronics Co., Ltd. | Methods and apparatus to compute CRC for multiple code blocks |
US20120246548A1 (en) | 2007-09-14 | 2012-09-27 | Motorola Mobility, Inc. | Multi-layer cyclic redundancy check code in wireless communication system |
US8279755B2 (en) | 2004-09-16 | 2012-10-02 | Digital Fountain, Inc. | FEC architecture for streaming services including symbol based operations and packet tagging |
US8307258B2 (en) | 2009-05-18 | 2012-11-06 | Fusion-10, Inc | Apparatus, system, and method for reconfiguring an array to operate with less storage elements |
US8327234B2 (en) | 2009-02-27 | 2012-12-04 | Research In Motion Limited | Code block reordering prior to forward error correction decoding based on predicted code block reliability |
- 2010-03-26: US application US12/748,066 filed; granted as US8484536B1; status: Expired - Fee Related
Patent Citations (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6151641A (en) * | 1997-09-30 | 2000-11-21 | Lsi Logic Corporation | DMA controller of a RAID storage controller with integrated XOR parity computation capability adapted to compute parity in parallel with the transfer of data segments |
US6216247B1 (en) * | 1998-05-29 | 2001-04-10 | Intel Corporation | 32-bit mode for a 64-bit ECC capable memory subsystem |
US6032269A (en) * | 1998-06-30 | 2000-02-29 | Digi-Data Corporation | Firmware recovery from hanging channels by buffer analysis |
US6721317B2 (en) * | 1999-03-04 | 2004-04-13 | Sun Microsystems, Inc. | Switch-based scalable performance computer memory architecture |
US6378038B1 (en) * | 1999-03-31 | 2002-04-23 | International Business Machines Corporation | Method and system for caching data using raid level selection |
US7356752B2 (en) * | 2000-03-14 | 2008-04-08 | Comtech Telecommunications Corp. | Enhanced turbo product codes |
US20030056068A1 (en) | 2000-04-29 | 2003-03-20 | Mcallister Curtis R. | Arrangement of data within cache lines so that tags are first data received |
US20030149750A1 (en) * | 2002-02-07 | 2003-08-07 | Franzenburg Alan M. | Distributed storage array |
US7505890B2 (en) * | 2003-01-15 | 2009-03-17 | Cox Communications, Inc. | Hard disk drive emulator |
US7398459B2 (en) * | 2003-01-20 | 2008-07-08 | Samsung Electronics Co., Ltd. | Parity storing method and error block recovering method in external storage subsystem |
US20050091234A1 (en) | 2003-10-23 | 2005-04-28 | International Business Machines Corporation | System and method for dividing data into predominantly fixed-sized chunks so that duplicate data chunks may be identified |
US7774681B2 (en) * | 2004-06-03 | 2010-08-10 | Inphase Technologies, Inc. | Data protection system |
US8279755B2 (en) | 2004-09-16 | 2012-10-02 | Digital Fountain, Inc. | FEC architecture for streaming services including symbol based operations and packet tagging |
US20120036333A1 (en) | 2004-09-30 | 2012-02-09 | Lecrone Douglas E | Triangular asynchronous replication |
US20060080505A1 (en) * | 2004-10-08 | 2006-04-13 | Masahiro Arai | Disk array device and control method for same |
US7739446B2 (en) * | 2005-04-21 | 2010-06-15 | Hitachi, Ltd. | System and method for managing disk space in a thin-provisioned storage subsystem |
US8213205B2 (en) * | 2005-09-02 | 2012-07-03 | Google Inc. | Memory system including multiple memory stacks |
US7676730B2 (en) * | 2005-09-30 | 2010-03-09 | Quantum Corporation | Method and apparatus for implementing error correction coding in a random access memory |
US7546484B2 (en) * | 2006-02-08 | 2009-06-09 | Microsoft Corporation | Managing backup solutions with light-weight storage nodes |
US8065555B2 (en) * | 2006-02-28 | 2011-11-22 | Intel Corporation | System and method for error correction in cache units |
US7861052B2 (en) * | 2006-05-16 | 2010-12-28 | Hitachi, Ltd. | Computer system having an expansion device for virtualizing a migration source logical unit |
US7861035B2 (en) * | 2006-06-20 | 2010-12-28 | Korea Advanced Institute Of Science And Technology | Method of improving input and output performance of raid system using matrix stripe cache |
US7624229B1 (en) * | 2006-09-29 | 2009-11-24 | Emc Corporation | Spillover slot |
US8145865B1 (en) * | 2006-09-29 | 2012-03-27 | Emc Corporation | Virtual ordered writes spillover mechanism |
US7831764B2 (en) * | 2007-02-19 | 2010-11-09 | Hitachi, Ltd | Storage system having plural flash memory drives and method for controlling data storage |
US8090792B2 (en) * | 2007-03-08 | 2012-01-03 | Nec Laboratories America, Inc. | Method and system for a self managing and scalable grid storage |
US20090262839A1 (en) | 2007-07-05 | 2009-10-22 | Shelby Kevin A | Transmission of Multimedia Streams to Mobile Devices With Uncoded Transport Tunneling |
US8255761B1 (en) | 2007-07-12 | 2012-08-28 | Samsung Electronics Co., Ltd. | Methods and apparatus to compute CRC for multiple code blocks |
US8327237B2 (en) | 2007-09-14 | 2012-12-04 | Motorola Mobility Llc | Multi-layer cyclic redundancy check code in wireless communication system |
US20120246548A1 (en) | 2007-09-14 | 2012-09-27 | Motorola Mobility, Inc. | Multi-layer cyclic redundancy check code in wireless communication system |
US8234539B2 (en) * | 2007-12-06 | 2012-07-31 | Sandisk Il Ltd. | Correction of errors in a memory array |
US20090265578A1 (en) * | 2008-02-12 | 2009-10-22 | Doug Baloun | Full Stripe Processing for a Redundant Array of Disk Drives |
US8180954B2 (en) * | 2008-04-15 | 2012-05-15 | SMART Storage Systems, Inc. | Flash management using logical page size |
US8140753B2 (en) * | 2008-06-06 | 2012-03-20 | Pivot3 | Method and system for rebuilding data in a distributed RAID system |
US8082393B2 (en) * | 2008-06-06 | 2011-12-20 | Pivot3 | Method and system for rebuilding data in a distributed RAID system |
US20100037117A1 (en) * | 2008-08-05 | 2010-02-11 | Advanced Micro Devices, Inc. | Data error correction device and methods thereof |
US20120042142A1 (en) | 2008-08-08 | 2012-02-16 | Amazon Technologies, Inc. | Providing executing programs with reliable access to non-local block data storage |
US8176247B2 (en) * | 2008-10-28 | 2012-05-08 | Pivot3 | Method and system for protecting against multiple failures in a RAID system |
US20120131383A1 (en) * | 2008-10-28 | 2012-05-24 | Pivot3 | Method and system for protecting against multiple failures in a raid system |
US8086783B2 (en) | 2009-02-23 | 2011-12-27 | International Business Machines Corporation | High availability memory system |
US20100217915A1 (en) | 2009-02-23 | 2010-08-26 | International Business Machines Corporation | High availability memory system |
US8327234B2 (en) | 2009-02-27 | 2012-12-04 | Research In Motion Limited | Code block reordering prior to forward error correction decoding based on predicted code block reliability |
US8307258B2 (en) | 2009-05-18 | 2012-11-06 | Fusion-10, Inc | Apparatus, system, and method for reconfiguring an array to operate with less storage elements |
US20120042201A1 (en) * | 2009-06-05 | 2012-02-16 | Resnick David R | Failure recovery memory devices and methods |
US20110258161A1 (en) | 2010-04-14 | 2011-10-20 | International Business Machines Corporation | Optimizing Data Transmission Bandwidth Consumption Over a Wide Area Network |
US20120042200A1 (en) * | 2010-08-11 | 2012-02-16 | The University Of Tokyo | Control device and data storage device |
Non-Patent Citations (4)
Title |
---|
Duminuco, Alessandro; "Hierarchical Codes: How to Make Erasure Codes Attractive for Peer-to-Peer Storage Systems;" Proceedings of the Eighth International Conference on Peer-to-Peer Computing; 2008 (P2P'08), pp. 8-11; 10 pages. |
Hafner, James; "HoVer Erasure Codes for Disk Arrays," Proceedings of the 2006 International Conference on Dependable Systems and Networks; pp. 217-226; 2006; 10 pages. |
Li, Mingqiang; "GRID Codes: Strip-Based Erasure Codes with High Fault Tolerance for Storage Systems;" ACM Transactions on Storage, vol. 4, No. 4, Article 15, Jan. 2009; 22 pages. |
Wikipedia; Reed Solomon; http://en.wikipedia.org/wiki/Reed-Solomon; last modified Sep. 13, 2006, 14 pages. |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3916559A1 (en) * | 2014-01-31 | 2021-12-01 | Google LLC | Prioritizing data reconstruction in distributed storage systems |
US11036583B2 (en) * | 2014-06-04 | 2021-06-15 | Pure Storage, Inc. | Rebuilding data across storage nodes |
US12066895B2 (en) * | 2014-06-04 | 2024-08-20 | Pure Storage, Inc. | Heterogenous memory accommodating multiple erasure codes |
US11593203B2 (en) | 2014-06-04 | 2023-02-28 | Pure Storage, Inc. | Coexisting differing erasure codes |
US11023340B2 (en) * | 2015-02-19 | 2021-06-01 | Netapp, Inc. | Layering a distributed storage system into storage groups and virtual chunk spaces for efficient data recovery |
US10795789B2 (en) | 2015-02-19 | 2020-10-06 | Netapp, Inc. | Efficient recovery of erasure coded data |
US20180165155A1 (en) * | 2015-02-19 | 2018-06-14 | Netapp, Inc. | Layering a distributed storage system into storage groups and virtual chunk spaces for efficient data recovery |
US10503621B2 (en) | 2015-02-19 | 2019-12-10 | Netapp, Inc. | Manager election for erasure coding groups |
US10489210B2 (en) * | 2015-02-19 | 2019-11-26 | Netapp, Inc. | Layering a distributed storage system into storage groups and virtual chunk spaces for efficient data recovery |
US10353740B2 (en) | 2015-02-19 | 2019-07-16 | Netapp, Inc. | Efficient recovery of erasure coded data |
US10152377B2 (en) * | 2015-02-19 | 2018-12-11 | Netapp, Inc. | Layering a distributed storage system into storage groups and virtual chunk spaces for efficient data recovery |
US10705907B1 (en) * | 2016-03-24 | 2020-07-07 | EMC IP Holding Company LLC | Data protection in a heterogeneous random access storage array |
US10705911B2 (en) * | 2017-04-24 | 2020-07-07 | Hewlett Packard Enterprise Development Lp | Storing data in a distributed storage system |
CN111506450A (en) * | 2019-01-31 | 2020-08-07 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for data processing |
CN111506450B (en) * | 2019-01-31 | 2024-01-02 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for data processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8719675B1 (en) | Orthogonal coding for data storage, access, and maintenance | |
US8856619B1 (en) | Storing data across groups of storage nodes | |
EP3014450B1 (en) | Erasure coding across multiple zones | |
US10951236B2 (en) | Hierarchical data integrity verification of erasure coded data in a distributed computing system | |
EP3014451B1 (en) | Locally generated simple erasure codes | |
US10025666B2 (en) | RAID surveyor | |
US9384232B2 (en) | Confirming data consistency in a data storage environment | |
US8370307B2 (en) | Cloud data backup storage manager | |
US20080155191A1 (en) | Systems and methods for providing heterogeneous storage systems | |
US8484536B1 (en) | Techniques for data storage, access, and maintenance | |
WO2020027911A1 (en) | Storage systems with peer data recovery | |
US20100199123A1 (en) | Distributed Storage of Recoverable Data | |
US8484506B2 (en) | Redundant array of independent disks level 5 (RAID 5) with a mirroring functionality | |
US9058291B2 (en) | Multiple erasure correcting codes for storage arrays | |
US8543864B2 (en) | Apparatus and method of performing error recovering process in asymmetric clustering file system | |
US10740182B2 (en) | Erased memory page reconstruction using distributed coding for multiple dimensional parities | |
US8621317B1 (en) | Modified orthogonal coding techniques for storing data | |
US8615698B1 (en) | Skewed orthogonal coding techniques | |
US9229811B2 (en) | Folded codes for correction of latent media errors | |
US8316258B2 (en) | System and method for error detection in a data storage system | |
US8510625B1 (en) | Multi-site data redundancy | |
US9830220B1 (en) | Enhanced error recovery for data storage drives |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: GOOGLE INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CYPHER, ROBERT;REEL/FRAME:024441/0303 Effective date: 20100325 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: GOOGLE LLC, CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044101/0299 Effective date: 20170929 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20250709 |