
CN1302392C - Online method for reorganizing magnetic disk - Google Patents

Online method for reorganizing magnetic disk Download PDF

Info

Publication number
CN1302392C
CN1302392C CNB031024629A CN03102462A
Authority
CN
China
Prior art keywords
group
lun
cache
buffer
raid
Prior art date
Legal status
Expired - Lifetime
Application number
CNB031024629A
Other languages
Chinese (zh)
Other versions
CN1519726A (en)
Inventor
张巍
张国彬
任雷鸣
陈绍元
郑珉
胡鹏
罗传藻
Current Assignee
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CNB031024629A priority Critical patent/CN1302392C/en
Publication of CN1519726A publication Critical patent/CN1519726A/en
Application granted granted Critical
Publication of CN1302392C publication Critical patent/CN1302392C/en

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The present invention discloses an online disk reconstruction method comprising the following steps: logical units (LUNs) with similar stripe depths in a redundant array of independent disks (RAID) are divided in advance into at least one group; when the disk reconstruction process begins, a block of cache is allocated as a dedicated cache sized for the LUN with the largest stripe depth in the RAID, and reconstruction proceeds group by group; when any group is being reconstructed, a block of cache is allocated as a free cache sized for the LUN with the largest stripe depth in that group, and a reserved cache is allocated in the same way as the free cache. The scheme of the present invention keeps the number of memory allocations and releases in the buffer manager low while also reducing memory waste.

Description

Online Disk Reconstruction Method

Technical Field

The present invention relates to the field of redundant array of independent disks (RAID) systems, and in particular to an online disk reconstruction method.

Background

As the data-processing capability of computers has grown, hard-disk read speed can no longer keep up with the demand for reading large volumes of data. RAID technology emerged to address this, and with its low cost, low power consumption, high transfer rate, and simple implementation it is widely used in network servers and similar equipment.

One advantage of RAID is the ability to perform online disk reconstruction when a disk fails. The current mainstream disk reconstruction method is as follows:

Construct N processes: N-1 processes correspond to the N-1 disks that are still working normally, and one process corresponds to the hot spare disk.

Each process attached to a working disk proceeds as follows:

Step 1: Find the stripe unit with the lowest address on the disk.

Step 2: If the buffer has enough space to receive data, issue a low-priority read request for that stripe unit and read it into the buffer (skipping stripes that have already been reconstructed).

Step 3: Wait for the read to complete. If the buffer manager has a buffer that can accept the data, submit the data to that buffer for the XOR operation and return to Step 1; otherwise wait. Repeat until all stripe units have been read.

The process attached to the hot spare disk proceeds as follows:

Step 4: If the buffer manager holds a buffer for which the XOR over all stripe units has completed, take it out and go to the next step; otherwise wait.

Step 5: Issue a low-priority write request to the hot spare disk, write the data from the fetched buffer to it, and return to Step 4 to wait for the write to complete. Repeat until all data of the failed disk has been reconstructed.
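The steps above amount to: readers on the surviving disks stream stripe units into the buffer manager, the manager XORs them together, and the hot-spare writer drains completed buffers. A minimal sequential sketch of that rebuild follows (the real method is process-per-disk and asynchronous; the function and variable names here are invented for illustration):

```python
from functools import reduce

def rebuild_failed_disk(surviving_disks, stripe_unit):
    """Simulate the disk-oriented rebuild: for each stripe, read one
    stripe unit from every surviving disk (Steps 1-3), XOR the units in
    the buffer manager, and write the result to the hot spare (Steps
    4-5). Works for RAID-5-style single-parity redundancy."""
    n_units = len(surviving_disks[0]) // stripe_unit
    hot_spare = bytearray()
    for i in range(n_units):                           # lowest address first
        lo, hi = i * stripe_unit, (i + 1) * stripe_unit
        units = [d[lo:hi] for d in surviving_disks]    # low-priority reads
        xored = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), units)
        hot_spare += xored                             # low-priority write
    return bytes(hot_spare)
```

For example, dropping one data disk from a three-disk-plus-parity set and XORing the survivors with the parity recovers the lost disk's contents.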

Current mainstream RAID technology includes division of the array into logical units (LUN, Logical Unit Number). The stripe depth, i.e., the amount of data that can be read or written contiguously on a single disk, is specified according to the size of the LUN, so the stripe depths of different LUNs differ from one another.

As the flow above shows, in a disk-oriented reconstruction algorithm, because user requests arrive at the different disks at random, some processes may read more data blocks than others, and the cache-management process must keep track of this information until the slowest process in the system submits its data. Each process's demand for cache therefore varies over time with no fixed value, yet the system's cache is limited, so the cache demand of each process must be bounded. Since reconstruction proceeds through stripe-unit addresses from low to high, the size of the buffers to request in the buffer manager is hard to determine; but if the buffer size is not fixed, the number of cache allocation and release operations grows, hurting system performance. The common practice today is therefore to size the buffers in the buffer manager according to the largest stripe unit. Although this unifies the buffer size, it can waste memory. The waste is especially severe when stripe depths differ greatly: across different applications, the stripe depths of logical units can differ by a factor of a thousand, so sizing every allocation to the largest stripe depth ties up large blocks of the buffer manager's cache for long periods and wastes a great deal of memory.

Summary of the Invention

In view of this, the object of the present invention is to provide a cache-management method for online reconstruction of disk logical units that keeps the number of buffer-manager memory allocations and releases low while reducing memory waste.

To achieve the above object, the technical scheme of the present invention is realized as follows:

A cache-management method for online reconstruction of disk logical units comprises the following steps:

a) dividing the logical units with similar stripe depths in the redundant array of independent disks into at least one group in advance;

b) when the disk reconstruction process begins, allocating a block of cache as the dedicated cache according to the LUN with the largest stripe depth in the RAID, and reconstructing group by group; when reconstruction reaches a given group, allocating a block of cache as the free cache according to the LUN with the largest stripe depth in that group, and allocating the reserved cache according to the LUN with the largest stripe depth in that group.

Preferably, the sums of the stripe depths of all LUNs in each group divided in step a) are similar.

The method further comprises: setting a threshold; if the ratio of the cache required by the LUN with the largest stripe depth in the RAID to that required by the LUN with the smallest stripe depth is less than this threshold, the number of groups in step a) is set to 1; otherwise, the number of groups in step a) is set to between 3 and 5.

The threshold is 2 or 3.

As the above scheme shows, to address the shortcomings of the traditional disk-oriented reconstruction algorithm, the present invention adds a distinction between LUNs, so that the buffer size in the buffer manager can be adjusted as the LUN group changes. This achieves lower memory waste and fewer memory allocation and release operations.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of an implementation of an embodiment of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

A RAID reconstruction cache generally comprises three parts: a dedicated cache, a reserved cache, and a free cache. The dedicated cache is used exclusively for reading disk data during reconstruction. The free cache holds the data awaiting the XOR operation. The reserved cache is part of the free cache and is used mainly for forced reconstruction, i.e., when the stripe targeted by a user write request is currently being reconstructed: if the dedicated cache cannot submit data to the free cache for the XOR operation, it submits the data to the reserved cache to complete the XOR.

To achieve the stated object, the present invention adopts the following strategy: first group LUNs with similar stripe depths together, then reconstruct group by group. When reconstruction reaches a group, allocate the free cache and reserved cache according to the largest stripe depth in that group, and release them once the group's reconstruction completes. The dedicated cache used for pre-reading disk data is allocated according to the LUN with the largest stripe depth in the entire RAID; once allocated, it is not released until reconstruction ends. Grouping LUNs in this way both reduces cache usage and keeps the number of cache allocations and releases as small as possible.

In practice, it must also be considered that if the stripe depths of the LUNs in the RAID differ little, there is no need to group them.

Therefore, a threshold M is first given, and N = MAX_Memery_Requied / MIN_Memery_Requied is defined, where Memery_Requied is the cache size required to reconstruct each LUN, MAX_Memery_Requied is the maximum of these values over the LUNs in the RAID, and MIN_Memery_Requied is the minimum.

There are two cases:

1. N is less than M. The gap between stripe depths is small, so instead of grouping, a single cache pool can be allocated, with the dedicated, free, and reserved caches all sized according to the largest stripe depth of any LUN in the RAID. (Equivalently, the entire RAID can be considered one group.) This both simplifies memory management and reduces the frequency of memory releases. For example, if LUN1 through LUN5 have a maximum stripe-unit depth of 45 sectors, the required reconstruction cache will not exceed 1 MB, and LUN1 through LUN5 can simply share one fixed cache pool.

2. N is greater than M, meaning the stripe depths differ considerably. Disk reconstruction must then be performed in groups as described above.

The threshold M generally lies between 2 and 3. Its specific value can be set sensibly according to the application of the RAID group, together with the system configuration. For multimedia applications, for example, where a stripe unit is relatively large, generally between 512 KB and 4 MB, M can be set slightly larger, e.g., 3.
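The two-case decision above can be sketched as follows. The text fixes only the rule (one group when the max/min required-cache ratio N is below the threshold M, otherwise 3 to 5 groups); deriving the exact count within 3 to 5 from the number of LUNs is an assumption of this sketch, as are the function and parameter names:

```python
def choose_group_count(required_caches, m=3):
    """Decide the number of LUN groups. required_caches lists the
    per-LUN cache sizes (the Memery_Requied values); m is the threshold
    M. Returns 1 when the depths are close enough to share one cache
    pool, otherwise a count in 3..5 (the 3..5 choice by LUN count is
    this sketch's assumption, not fixed by the text)."""
    n = max(required_caches) / min(required_caches)
    if n < m:
        return 1            # case 1: one shared cache pool for the whole RAID
    return max(3, min(5, len(required_caches) // 2))  # case 2: group rebuild
```

For instance, five LUNs whose required caches differ by only 20% fall into case 1, while a thousand-fold spread forces grouped reconstruction.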

Referring to FIG. 1, RAID 5 is taken as an example to illustrate the LUN-based online reconstruction method.

In general, the stripe-depth differences between LUNs are not very uniform. In the situation of FIG. 1, an allowable difference δ can first be determined; the stripe depths of all LUNs in the RAID group are compared, and LUNs satisfying |LUNi − LUNj| ≤ δ are gathered into the same group. Caches are then allocated per group, according to the grouping, for use during reconstruction.

For example, with δ = 50 sectors, LUN1 through LUN5 are divided into three groups: group 1: LUN1, LUN3; group 2: LUN2, LUN4; group 3: LUN5. During reconstruction the dedicated cache stays constant at 800 sectors, while the free cache (or reserved cache) is sized in turn at 250 sectors, 350 sectors, and 800 sectors as reconstruction proceeds. The groups are reconstructed in order: first LUN1 and LUN3, then LUN2 and LUN4, then LUN5; each time a group of LUNs finishes, the free cache is re-allocated.
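This δ-based grouping can be reproduced with a greedy pass over the LUNs sorted by stripe depth (sorting ensures max − min ≤ δ within each group, which is equivalent to the pairwise |LUNi − LUNj| ≤ δ condition). The function name and the concrete depths below are illustrative assumptions chosen to be consistent with the example's grouping; the text does not list the individual depths:

```python
def group_by_delta(depths, delta):
    """Group LUNs so that stripe depths within a group differ by at most
    delta sectors: walk the LUNs in increasing depth order and start a
    new group whenever a depth exceeds the current group's smallest
    member by more than delta."""
    groups, current = [], []
    for lun in sorted(depths, key=depths.get):
        if current and depths[lun] - depths[current[0]] > delta:
            groups.append(current)
            current = []
        current.append(lun)
    if current:
        groups.append(current)
    return groups

# Hypothetical stripe depths (sectors), chosen to match the example:
depths = {"LUN1": 220, "LUN2": 310, "LUN3": 250, "LUN4": 350, "LUN5": 800}
```

With δ = 50 this yields the three groups of the example, and sizing the free cache by each group's maximum depth gives 250, 350, and 800 sectors in turn.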

The specific steps are:

Step 1: Allocate a dedicated cache of 800 sectors.

Step 2: Read the data of LUN1 into the dedicated cache, allocate a free cache of 250 sectors, submit LUN1's data to the free cache for the XOR operation, and send the result to the hot spare disk when done. Keeping this free cache, reconstruct LUN3 by the same steps: read LUN3's data into the dedicated cache, submit it to the free cache, and once the data of all of LUN3's stripe units has been XORed, send the result to the hot spare disk, completing the reconstruction of group 1.

After group 1 is reconstructed, release the free cache; re-allocate a free cache of 350 sectors and reconstruct LUN2 and LUN4 of group 2 in turn by the steps above, releasing the free cache when done; then allocate a free cache of 800 sectors and reconstruct LUN5 of group 3.
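The allocation schedule these steps describe, one long-lived dedicated cache sized by the RAID-wide maximum depth plus a free cache re-sized per group, can be sketched as an event log (the function name, event-log representation, and the 220/310-sector depths are this sketch's assumptions):

```python
def rebuild_by_group(groups, depths, raid_max_depth):
    """Return the cache alloc/release schedule of a group-wise rebuild:
    the dedicated (read) cache is allocated once from the RAID-wide
    maximum stripe depth and held until the whole rebuild ends; the
    free cache is allocated per group from that group's maximum depth
    and released when the group completes."""
    log = [("alloc_dedicated", raid_max_depth)]
    for group in groups:
        free = max(depths[lun] for lun in group)
        log.append(("alloc_free", free))
        for lun in group:
            log.append(("rebuild", lun))   # XOR via free cache, write to spare
        log.append(("release_free", free))
    log.append(("release_dedicated", raid_max_depth))
    return log

groups = [["LUN1", "LUN3"], ["LUN2", "LUN4"], ["LUN5"]]
depths = {"LUN1": 220, "LUN3": 250, "LUN2": 310, "LUN4": 350, "LUN5": 800}
schedule = rebuild_by_group(groups, depths, raid_max_depth=800)
```

The schedule then shows exactly three free-cache allocations (250, 350, and 800 sectors) against a single dedicated-cache allocation, which is the memory-saving trade-off the method aims for.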

When forced reconstruction occurs and data cannot be submitted from the dedicated cache to the free cache for the XOR operation, the data is submitted to the reserved cache to complete the XOR; the steps and the cache-allocation method are the same as for the free cache.

The value of δ should be chosen so that the number of groups falls between 3 and 5; a value too large or too small defeats the purpose of grouping. When grouping, it is also best that the sums of the LUN sizes in each group be similar, so that the groups to be reconstructed are of similar size. If the host accesses each group at a similar frequency, the groups then take similar times to reconstruct, avoiding a situation where one group's reconstruction takes very long and another's very little. Since the host's access efficiency for a LUN always suffers while that LUN is being reconstructed, reducing the reconstruction time of the group containing a LUN helps improve the host's access efficiency for that LUN.

If the stripe depths of the LUNs happen to be spread quite evenly, e.g., a RAID system with the structure of FIG. 1 in which the stripe depths from LUN1 to LUN5 are exactly 100, 200, 300, 400, and 500 sectors, it is hard to choose a suitable δ. In that case, following the principle above that the sums of LUN sizes in each group be similar, LUN1 through LUN3 can be placed in one group, LUN4 in another, and LUN5 in a third. The reconstruction process is the same as above.
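The balanced-sum fallback described here can be sketched as a greedy contiguous partition against the running average depth sum. The function name and the exact greedy rule are assumptions of this sketch; they happen to reproduce the LUN1–LUN3 / LUN4 / LUN5 split for the 100–500-sector example:

```python
def balance_groups(depths, k):
    """Partition the LUNs (kept in their given order) into k contiguous
    groups whose depth sums are roughly equal: for each group, accumulate
    LUNs until the group's sum reaches the average of what remains,
    while always leaving at least one LUN for each later group."""
    items = list(depths.items())
    groups, i, n = [], 0, len(items)
    for g in range(k, 0, -1):               # g = groups still to fill
        target = sum(d for _, d in items[i:]) / g
        group, s = [], 0
        # stop before stealing LUNs needed by the remaining g-1 groups
        while i < n - (g - 1) and (not group or s < target):
            group.append(items[i][0])
            s += items[i][1]
            i += 1
        groups.append(group)
    return groups

depths = {"LUN1": 100, "LUN2": 200, "LUN3": 300, "LUN4": 400, "LUN5": 500}
```

With k = 3 this gives group sums of 600, 400, and 500 sectors, close to the ideal 500 per group.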

Another case, again taking the RAID structure of FIG. 1 as an example: if LUN1 through LUN5 are 100, 210, 300, 410, and 500, then although grouping by a set difference δ is possible, it cannot satisfy the principle that the sums of LUN sizes in each group be similar. In this case, LUN1 through LUN3 can likewise be placed in one group, LUN4 in another, and LUN5 in a third.

In short, the grouping principles are: keep the number of groups small, usually between 3 and 5, to ensure few cache allocations and releases; and keep the sums of the LUN sizes in each group similar. A better grouping scheme can be chosen for the actual situation by weighing these two principles together. The specific implementation is of course not limited to the grouping methods above; any other grouping method that both ensures few cache allocations and releases and achieves low memory waste can also attain the object of the present invention.

Claims (4)

1. An online disk reconstruction method, characterized by comprising the following steps:
a) dividing the logical units (LUNs) with similar stripe depths in a redundant array of independent disks (RAID) into at least one group in advance;
b) when the disk reconstruction process begins, allocating a block of cache as a dedicated cache according to the LUN with the largest stripe depth in the RAID, and reconstructing group by group; when reconstruction reaches a given group, allocating a block of cache as a free cache according to the LUN with the largest stripe depth in that group, and allocating a reserved cache according to the LUN with the largest stripe depth in that group.
2. The method according to claim 1, characterized in that the sums of the stripe depths of all LUNs in each group divided in step a) are similar.
3. The method according to claim 1, characterized by further comprising: setting a threshold; if the ratio of the cache required by the LUN with the largest stripe depth in the RAID to that required by the LUN with the smallest stripe depth is less than the threshold, setting the number of groups in step a) to 1, and otherwise setting the number of groups in step a) to between 3 and 5.
4. The method according to claim 3, characterized in that the threshold is 2 or 3.
CNB031024629A 2003-01-24 2003-01-24 Online method for reorganizing magnetic disk Expired - Lifetime CN1302392C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB031024629A CN1302392C (en) 2003-01-24 2003-01-24 Online method for reorganizing magnetic disk

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNB031024629A CN1302392C (en) 2003-01-24 2003-01-24 Online method for reorganizing magnetic disk

Publications (2)

Publication Number Publication Date
CN1519726A CN1519726A (en) 2004-08-11
CN1302392C true CN1302392C (en) 2007-02-28

Family

ID=34281735

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB031024629A Expired - Lifetime CN1302392C (en) 2003-01-24 2003-01-24 Online method for reorganizing magnetic disk

Country Status (1)

Country Link
CN (1) CN1302392C (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100388237C (en) * 2004-10-20 2008-05-14 北京织女星网格技术有限公司 Data Reorganization Method Based on Lightweight Computing
CN102023810B (en) * 2009-09-10 2012-08-29 成都市华为赛门铁克科技有限公司 Method and device for writing data and redundant array of inexpensive disk
CN101840313B (en) * 2010-04-13 2011-11-16 杭州华三通信技术有限公司 LUN mirror image processing method and equipment
CN101923501B (en) * 2010-07-30 2012-01-25 华中科技大学 Disk array multi-level fault tolerance method
CN101901273B (en) * 2010-08-13 2012-09-05 优视科技有限公司 Memory disk-based high-performance storage method and device
CN101980137B (en) * 2010-10-19 2012-05-30 成都市华为赛门铁克科技有限公司 Method, device and system for reconstructing redundant array of inexpensive disks
CN102096557B (en) * 2010-12-31 2013-08-14 华为数字技术(成都)有限公司 Capacity expansion method, device and system for independent redundant array of inexpensive disc (RAID)
CN102521058A (en) * 2011-12-01 2012-06-27 北京威视数据系统有限公司 Disk data pre-migration method of RAID (Redundant Array of Independent Disks) group

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5857112A (en) * 1992-09-09 1999-01-05 Hashemi; Ebrahim System for achieving enhanced performance and data availability in a unified redundant array of disk drives by using user defined partitioning and level of redundancy
CN1324462A (en) * 1998-10-19 2001-11-28 英特尔公司 Raid striping using multiple virtual channels
US6480969B1 (en) * 1993-06-04 2002-11-12 Network Appliance, Inc. Providing parity in a RAID sub-system using non-volatile memory

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5857112A (en) * 1992-09-09 1999-01-05 Hashemi; Ebrahim System for achieving enhanced performance and data availability in a unified redundant array of disk drives by using user defined partitioning and level of redundancy
US6480969B1 (en) * 1993-06-04 2002-11-12 Network Appliance, Inc. Providing parity in a RAID sub-system using non-volatile memory
CN1324462A (en) * 1998-10-19 2001-11-28 英特尔公司 Raid striping using multiple virtual channels

Also Published As

Publication number Publication date
CN1519726A (en) 2004-08-11

Similar Documents

Publication Publication Date Title
US6671772B1 (en) Hierarchical file system structure for enhancing disk transfer efficiency
US7792882B2 (en) Method and system for block allocation for hybrid drives
US10621057B2 (en) Intelligent redundant array of independent disks with resilvering beyond bandwidth of a single drive
CN1148658C (en) Method and system for managing RAID storage system using cache
US8090924B2 (en) Method for the allocation of data on physical media by a file system which optimizes power consumption
US20090217067A1 (en) Systems and Methods for Reducing Power Consumption in a Redundant Storage Array
CN1624670A (en) Method of local data migration
CN104484130A (en) Construction method of horizontal expansion storage system
CN1545030A (en) Method of Dynamic Mapping of Data Distribution Based on Disk Characteristics
CN104778018A (en) Broad-strip disk array based on asymmetric hybrid type disk image and storage method of broad-strip disk array
CN1302392C (en) Online method for reorganizing magnetic disk
US11561695B1 (en) Using drive compression in uncompressed tier
US10572464B2 (en) Predictable allocation latency in fragmented log structured file systems
US10474572B2 (en) Intelligent redundant array of independent disks with high performance recompaction
CN107678690A (en) A kind of implementation method of solid state hard disc and its RAID array
US20210216403A1 (en) Dynamically adjusting redundancy levels of storage stripes
CN120123154B (en) Storage system rapid reconstruction method based on fine-grained hard disk partition
WO2023020136A1 (en) Data storage method and apparatus in storage system
US20190056865A1 (en) Intelligent Redundant Array of Independent Disks
CN110600070B (en) Coding and repairing method for improving repairing performance of solid state disk array system
US20220129174A1 (en) Method, device and computer program product for storage management
CN117785026B (en) Cache method based on SSD RAID-5 system high-efficiency writing
US6934803B2 (en) Methods and structure for multi-drive mirroring in a resource constrained raid controller
US8312210B2 (en) Apparatus, system, and method for storing and retrieving compressed data
CN1276360C (en) Management method of reconstructing memory

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20211229

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Patentee after: xFusion Digital Technologies Co., Ltd.

Address before: 518057 HUAWEI building, road, Shenzhen science and Technology Park

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.

CX01 Expiry of patent term
CX01 Expiry of patent term

Granted publication date: 20070228