WO2015065399A1 - Data center replication (Reproduction de centre de données) - Google Patents
- Publication number
- WO2015065399A1 (PCT/US2013/067629)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data center
- data
- volume
- center
- recovery
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2041—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant with more than one idle spare processing component
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2048—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant where the redundant components share neither address space nor persistent storage
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/2097—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements maintaining the standby controller/processing unit updated
Definitions
- Datacenters are widely used to store data for various entities and organizations.
- Large enterprises may store data at one or more datacenters that may be centrally located.
- The data stored at such datacenters may be vital to the operations of the enterprise and may include, for example, employee data, customer data, etc.
- Figure 1 illustrates a system in accordance with an example
- Figure 2 illustrates the example system of Figure 1 forming two three-datacenter (3DC) arrangements
- Figure 3 is a flow chart illustrating an example method.
Detailed Description
- Various examples described herein provide successful disaster recovery for datacenters with data currency, while also preventing propagation of data corruption.
- The present disclosure provides a solution which may be implemented as a four-data-center arrangement that simultaneously forms a first three-datacenter (3DC) arrangement for replication of the primary data volume and a second 3DC arrangement for replication of a recovery data volume.
- The recovery data volume may lag in time, thereby preventing propagation of data corruption which may occur in the primary data volume.
- The example system 100 of Figure 1 includes four data centers 110, 120, 130, 140.
- The first data center (Data Center A) 110 and the second data center (Data Center B) 120 may be located close to one another, while the third data center (Data Center C) 130 and the fourth data center (Data Center D) 140 may be located close to one another, but away from the first data center 110 and the second data center 120.
- For example, the first data center 110 and the second data center 120 may be located in a first region (e.g., the same neighborhood, city, state, etc.), while the third data center (Data Center C) 130 and the fourth data center (Data Center D) 140 are located in a second region.
- Thus, a disaster (e.g., natural, political, economic, etc.) affecting one region is unlikely to affect the data centers located in the other region.
- The first data center 110 includes a primary data volume 112 for storage of data.
- The primary data volume 112 may include any of a variety of non-transitory storage media, including hard drives, flash drives, etc.
- The data stored on the primary data volume 112 may include any type of data, including databases, program data, software programs, etc.
- The primary data volume 112 of the first data center 110 may be the primary storage of data for the enterprise.
- The primary data volume 112 of the first data center 110 may be the storage source accessed for all data read and write requests.
- The first data center 110 may be provided with a second data volume, shown in Figure 1 as a recovery data volume 118.
- The recovery data volume 118 may be either separate storage media or a separate portion (e.g., virtual) of a single storage medium.
- The first data center 110 may also include various other components, such as a cache 114 and a server 116, which may include a processor and a memory.
- The cache 114 and the server 116 may facilitate the storage, writing and access of the data on the primary data volume 112.
- The second data center 120, the third data center 130, and the fourth data center 140 may each be provided with similar components as the first data center 110.
- The second data center 120 includes a primary data volume 122, a cache 124 and a server 126.
- The example third data center 130 includes a primary data volume 132, a cache 134, a server 136 and a recovery data volume 138.
- The example fourth data center 140 includes a recovery data volume 148, a cache 144 and a server 146.
- The first data center 110 and the second data center 120 are located close to each other.
- The primary data volume 112 of the first data center 110 and the primary data volume 122 of the second data center 120 replicate each other, as indicated by the arrow labeled "A".
- The server 116 may write data to the primary data volume 112 through the cache 114.
- The primary data volume 112 of the first data center 110 may write to the primary data volume 122 of the second data center 120.
- Synchronous replication may assure that all data written to the primary data volume 112 of the first data center 110 is replicated to the primary data volume 122 of the second data center 120. Synchronous replication is effective when the replicated data volumes are located relatively close to one another (e.g., less than 100 miles).
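The synchronous mode described above can be sketched in a few lines; the `Volume` class and function names below are illustrative, not taken from the patent.

```python
class Volume:
    """Minimal in-memory stand-in for a data volume."""
    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data


def synchronous_write(primary, replica, block_id, data):
    """Write to the primary volume and its nearby replica.

    The write is acknowledged to the caller only after BOTH volumes
    have persisted it, so the replica can never lag behind the primary.
    """
    primary.write(block_id, data)
    replica.write(block_id, data)  # must complete before acknowledgement
    return "ack"
```

Because the acknowledgement waits on the remote write, the round trip to the replica is added to every write, which is why this mode suits data volumes located less than roughly 100 miles apart.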
- The primary data volume 112 from the first data center 110 is replicated to the primary data volume 132 at the third data center 130, as indicated by the arrow labeled "B".
- The replication indicated as B is performed asynchronously.
- The server 116 may write data to the primary data volume 112 through the cache 114, and the write is immediately acknowledged to the server 116 of the first data center 110.
- The cached write data may then be pushed to the primary data volume 132 of the third data center 130.
- The cached write data is indicated in a journal which may be stored within the first data center 110.
- The third data center 130 may periodically poll the first data center 110 to read journal information and, if needed, retrieve data for writing to the primary data volume 132 of the third data center 130.
- Asynchronous replication may not assure that all data written to the primary data volume 112 of the first data center 110 is replicated to the primary data volume 132 of the third data center 130.
- Asynchronous replication can be effectively implemented for data centers that are located far apart from one another (e.g., more than 100 miles).
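The journal-and-poll flow described above can be sketched as follows; the class names and polling API are hypothetical, intended only to show why a write can be acknowledged before it reaches the distant copy.

```python
class AsyncPrimary:
    """Primary volume that acknowledges writes immediately and journals them."""
    def __init__(self):
        self.blocks = {}
        self.journal = []  # ordered writes not yet confirmed at the remote site

    def write(self, block_id, data):
        self.blocks[block_id] = data
        self.journal.append((block_id, data))  # remote copy happens later
        return "ack"  # acknowledged before replication completes


class AsyncReplica:
    """Distant replica that periodically polls the primary's journal."""
    def __init__(self):
        self.blocks = {}
        self.applied = 0  # journal position already replayed here

    def poll(self, primary):
        # Read and apply any journal entries not yet replayed.
        for block_id, data in primary.journal[self.applied:]:
            self.blocks[block_id] = data
        self.applied = len(primary.journal)
```

Between polls the replica lags the primary, which is the sense in which asynchronous replication "may not assure" that every acknowledged write has already been replicated.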
- While Figure 1 illustrates that the primary data volume 132 of the third data center 130 is asynchronously replicated from the first data center 110 (line B), in other examples, the primary data volume 132 of the third data center 130 may be replicated from the second data center 120.
- The primary data volume 132 is used to generate, update or synchronize a recovery data volume 138, as indicated by the arrow labeled "C".
- The recovery data volume 138 may be generated during an initial copy step. Thereafter, the recovery data volume 138 may be updated or synchronized with the primary data volume 132 at regular intervals, for example.
- While Figure 1 illustrates the recovery data volume 138 as being a separate storage device from the primary data volume 132, in various examples, the recovery data volume 138 may be provided on the same storage device in, for example, a virtually separated portion of the storage device.
- The generation of the recovery data volume 138 may be performed periodically based on, for example, a predetermined schedule.
- The frequency at which the recovery data volume 138 is generated may be determined based on the needs of the particular implementation. In one example, the recovery data volume 138 is generated every six hours.
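A minimal sketch of this periodic snapshot step, using the six-hour interval from the example above; the function names and the hour-based clock are illustrative assumptions.

```python
def sync_due(last_sync_hour, now_hour, interval_hours=6):
    """Return True when the recovery volume is due for an update
    (six hours is the interval used in the example above)."""
    return now_hour - last_sync_hour >= interval_hours


def refresh_recovery(primary_blocks, recovery_blocks):
    """Point-in-time copy: the recovery volume becomes a snapshot of
    the primary, then stays frozen (and therefore lags, shielding it
    from later corruption) until the next scheduled refresh."""
    recovery_blocks.clear()
    recovery_blocks.update(primary_blocks)
```

Corruption written to the primary at, say, hour 7 does not reach the recovery volume until the hour-12 refresh, leaving a window in which the hour-6 snapshot can still be restored.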
- The recovery data volume 138 from the third data center 130 is replicated to the recovery data volume 148 at the fourth data center 140, as indicated by the arrow labeled "D".
- The third data center 130 and the fourth data center 140 are located relatively close to one another.
- Thus, the replication of the recovery data volume 138 to the recovery data volume 148 may be effectively performed synchronously.
- The recovery data volume 138 of the third data center 130 may also be replicated to the recovery data volume 118 at the first data center 110, as indicated by the arrow labeled "E".
- The third data center 130 and the first data center 110 are located relatively far from one another.
- Thus, the replication of the recovery data volume 138 to the recovery data volume 118 may be effectively performed asynchronously.
- While Figure 1 illustrates the replication of the recovery volume 118 of the first data center 110 from the third data center 130, in other examples, the recovery volume 118 may be replicated from the recovery volume 148 of the fourth data center 140.
- Figure 1 illustrates each data center 110, 120, 130, 140 provided with a server and a cache.
- The server may not be required in certain data centers.
- For example, the second data center 120 of Figure 1 may not require the server 126 under normal operation.
- The second data center 120 may only require the server 126 during a recovery mode, for example, in the event of a disaster during which the second data center 120 may be required to serve as the primary data center.
- The third data center 130 and the fourth data center 140 may also not require a server in normal operation.
- In Figure 2, the example system 100 of Figure 1 is illustrated as forming two three-datacenter (3DC) arrangements.
- The first 3DC arrangement 210 is illustrated by the solid-lined triangle.
- The first 3DC arrangement 210 includes the first data center 110, more particularly, the primary data volume 112 of the first data center 110.
- The first 3DC arrangement 210 is further formed by the second data center 120 and the third data center 130 (more particularly, the primary data volume 132 of the third data center 130).
- The second data center 120 includes the primary data volume 122 having a synchronous replication of the primary data volume 112 from the first data center 110.
- The third data center 130 includes an asynchronous replication of the primary data volume 112 from the first data center 110.
- The example system 100 further includes a second 3DC arrangement 220, as illustrated by the dashed-lined triangle.
- The second 3DC arrangement 220 includes the third data center 130, more particularly, the recovery data volume 138 of the third data center 130.
- The second 3DC arrangement 220 is further formed by the fourth data center 140 and the first data center 110 (more particularly, the recovery data volume 118 of the first data center 110).
- The fourth data center 140 includes the recovery data volume 148 having a synchronous replication of the recovery data volume 138 from the third data center 130.
- The first data center 110 includes an asynchronous replication of the recovery data volume 138 from the third data center 130.
- Thus, the first data center 110 may include the primary data volume 112 having current data, as well as a recovery data volume having time-lagging data. Further, the replicated recovery data volume is isolated from the primary data volumes. Therefore, protection is provided against a local disaster, which may affect the entire region in which both the first data center 110 and the second data center 120 are located, as well as against propagation of data corruption.
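The isolation argument above can be made concrete with a small reachability check over the five replication arrows of Figure 1; the volume labels are shorthand invented here (e.g., `A.primary` for the primary data volume 112).

```python
# The five replication links of Figure 1: (source, destination, mode).
LINKS = [
    ("A.primary",  "B.primary",  "sync"),      # arrow A
    ("A.primary",  "C.primary",  "async"),     # arrow B
    ("C.primary",  "C.recovery", "periodic"),  # arrow C
    ("C.recovery", "D.recovery", "sync"),      # arrow D
    ("C.recovery", "A.recovery", "async"),     # arrow E
]


def reachable(start, links):
    """All volumes that receive data, directly or transitively, from start."""
    reached, frontier = set(), {start}
    while frontier:
        frontier = {dst for src, dst, _ in links
                    if src in frontier and dst not in reached}
        reached |= frontier
    return reached
```

Dropping the single "periodic" arrow shows that continuous replication from the primary never touches a recovery volume; corruption can only cross over at the scheduled, time-lagged snapshot.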
- Referring now to Figure 3, a flow chart illustrating an example method is provided.
- Primary data volumes at the first data center 110 and the second data center 120 are synchronously replicated (block 310), as illustrated by the line labeled "A" in Figures 1 and 2 above.
- In examples where the first and second data centers are located relatively close to one another, synchronous replication is an effective mode.
- The primary data volume at the first data center 110 is asynchronously replicated to the third data center 130 (block 312), as illustrated by the line labeled "B" in Figures 1 and 2 above. Further, in examples where the third data center is located relatively far from the first data center, asynchronous replication is the most effective mode.
- As described above, the recovery volume may be initially generated as a copy of the primary volume and may be updated or synchronized with the primary volume on a periodic basis. If the determination is made that the time for updating or synchronization of the recovery volume has not yet arrived, the process returns to block 310 and continues the synchronous replication of the first and second data centers and the asynchronous replication of the third data center.
- If the time for updating or synchronization has arrived, the process proceeds to block 316, and the recovery data volume is updated or synchronized with the primary volume at the third data center 130, as illustrated by the line labeled "C" in Figures 1 and 2.
- As noted above, the frequency at which the recovery data volume is updated or synchronized may be set for the particular implementation.
- The recovery data volume at the third data center may then be synchronously replicated to the fourth data center (block 318), as illustrated by the line labeled "D" in Figures 1 and 2. Since the third and fourth data centers are located in close proximity to each other, synchronous replication can be effectively achieved.
- The recovery data volume at the third data center 130 is also asynchronously replicated to the first data center 110 (block 320), as illustrated by the line labeled "E" in Figures 1 and 2 above. Further, in examples where the third data center is located relatively far from the first data center, asynchronous replication is the most effective mode. The process 300 then returns to block 310.
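The loop of Figure 3 can be sketched as a single function; the dictionary-of-volumes model and names are illustrative, and the asynchronous legs are simplified to eventual full copies.

```python
def replication_cycle(state, recovery_sync_due):
    """One pass through the example method of Figure 3.

    `state` maps shorthand volume names to their contents;
    `recovery_sync_due` is the periodic-update check that gates
    blocks 316-320.
    """
    # Block 310: first <-> second data center, synchronous.
    state["B.primary"] = dict(state["A.primary"])
    # Block 312: first -> third data center, asynchronous (modelled
    # here as an eventual full copy).
    state["C.primary"] = dict(state["A.primary"])
    if not recovery_sync_due:
        return state  # not yet time: loop back to block 310
    # Block 316: refresh the recovery volume at the third data center.
    state["C.recovery"] = dict(state["C.primary"])
    # Block 318: third -> fourth data center, synchronous.
    state["D.recovery"] = dict(state["C.recovery"])
    # Block 320: third -> first data center, asynchronous.
    state["A.recovery"] = dict(state["C.recovery"])
    return state
```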
- As described above, four data centers may be used to form two separate 3DC arrangements.
- One of the 3DC arrangements provides replication of the primary data of a data center at a nearby data center (synchronously) and a distant data center (asynchronously), while the other 3DC arrangement provides similar replication for a recovery data volume which lags in time.
- Thus, data protection is provided against regional disasters, as well as against propagation of data corruption.
Abstract
An example system may include a first data center having a primary data volume; a second data center having a replication of the primary data volume from the first data center; a third data center having a replication of the primary data volume from the first data center, the third data center having a recovery data volume updated or synchronized at predetermined intervals, the recovery data volume being a copy of the primary data volume; and a fourth data center having a replication of the recovery data volume from the third data center. The first data center may include a replication of the recovery data volume from at least one of the third data center or the fourth data center.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2013/067629 WO2015065399A1 (fr) | 2013-10-30 | 2013-10-30 | Reproduction de centre de données |
| EP13896501.7A EP3063638A4 (fr) | 2013-10-30 | 2013-10-30 | Reproduction de centre de données |
| CN201380081317.6A CN105980995A (zh) | 2013-10-30 | 2013-10-30 | 数据中心复制 |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2013/067629 WO2015065399A1 (fr) | 2013-10-30 | 2013-10-30 | Reproduction de centre de données |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015065399A1 (fr) | 2015-05-07 |
Family
ID=53004812
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2013/067629 Ceased WO2015065399A1 (fr) | 2013-10-30 | 2013-10-30 | Reproduction de centre de données |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP3063638A4 (fr) |
| CN (1) | CN105980995A (fr) |
| WO (1) | WO2015065399A1 (fr) |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030126388A1 (en) * | 2001-12-27 | 2003-07-03 | Hitachi, Ltd. | Method and apparatus for managing storage based replication |
| US20040230859A1 (en) * | 2003-05-15 | 2004-11-18 | Hewlett-Packard Development Company, L.P. | Disaster recovery system with cascaded resynchronization |
| US20040230756A1 (en) * | 2001-02-28 | 2004-11-18 | Hitachi. Ltd. | Three data center adaptive remote copy |
| US20120290787A1 (en) * | 2003-06-27 | 2012-11-15 | Hitachi, Ltd. | Remote copy method and remote copy system |
| US8359491B1 (en) * | 2004-03-30 | 2013-01-22 | Symantec Operating Corporation | Disaster recovery rehearsal using copy on write |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7117386B2 (en) * | 2002-08-21 | 2006-10-03 | Emc Corporation | SAR restart and going home procedures |
| US7979651B1 (en) * | 2006-07-07 | 2011-07-12 | Symantec Operating Corporation | Method, system, and computer readable medium for asynchronously processing write operations for a data storage volume having a copy-on-write snapshot |
| US8745006B2 (en) * | 2009-04-23 | 2014-06-03 | Hitachi, Ltd. | Computing system and backup method using the same |
| US8281094B2 (en) * | 2009-08-26 | 2012-10-02 | Hitachi, Ltd. | Remote copy system |
| CN103197988A (zh) * | 2012-01-05 | 2013-07-10 | 中国移动通信集团湖南有限公司 | 一种数据备份、恢复的方法、设备和数据库系统 |
- 2013-10-30 WO PCT/US2013/067629 patent/WO2015065399A1/fr not_active Ceased
- 2013-10-30 EP EP13896501.7A patent/EP3063638A4/fr not_active Withdrawn
- 2013-10-30 CN CN201380081317.6A patent/CN105980995A/zh active Pending
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040230756A1 (en) * | 2001-02-28 | 2004-11-18 | Hitachi. Ltd. | Three data center adaptive remote copy |
| US20030126388A1 (en) * | 2001-12-27 | 2003-07-03 | Hitachi, Ltd. | Method and apparatus for managing storage based replication |
| US20040230859A1 (en) * | 2003-05-15 | 2004-11-18 | Hewlett-Packard Development Company, L.P. | Disaster recovery system with cascaded resynchronization |
| US20120290787A1 (en) * | 2003-06-27 | 2012-11-15 | Hitachi, Ltd. | Remote copy method and remote copy system |
| US8359491B1 (en) * | 2004-03-30 | 2013-01-22 | Symantec Operating Corporation | Disaster recovery rehearsal using copy on write |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP3063638A4 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN105980995A (zh) | 2016-09-28 |
| EP3063638A1 (fr) | 2016-09-07 |
| EP3063638A4 (fr) | 2017-07-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US7308545B1 (en) | Method and system of providing replication | |
| CN108376109B (zh) | 将来源阵列的卷复制到目标阵列的设备和方法、存储介质 | |
| US9251008B2 (en) | Client object replication between a first backup server and a second backup server | |
| US10565071B2 (en) | Smart data replication recoverer | |
| US7299378B2 (en) | Geographically distributed clusters | |
| US9772789B1 (en) | Alignment fixing on a data protection system during continuous data replication to deduplicated storage | |
| US7191299B1 (en) | Method and system of providing periodic replication | |
| US7987158B2 (en) | Method, system and article of manufacture for metadata replication and restoration | |
| US7941622B2 (en) | Point in time remote copy for multiple sites | |
| US9672117B1 (en) | Method and system for star replication using multiple replication technologies | |
| KR101662212B1 (ko) | 부분동기화 지원 데이터베이스 관리 시스템 및 데이터베이스 관리 시스템에서 부분동기화 방법 | |
| US10229056B1 (en) | Alignment fixing on a storage system during continuous data replication to deduplicated storage | |
| US10082973B2 (en) | Accelerated recovery in data replication environments | |
| US20150213100A1 (en) | Data synchronization method and system | |
| US10430290B2 (en) | Method and system for star replication using multiple replication technologies | |
| US20140108753A1 (en) | Merging an out of synchronization indicator and a change recording indicator in response to a failure in consistency group formation | |
| US7457830B1 (en) | Method and system of replicating data using a recovery data change log | |
| US20140156595A1 (en) | Synchronisation system and method | |
| JP2007183930A (ja) | 異なるコピー技術を用いてデータをミラーリングするときの整合性の維持 | |
| US6859811B1 (en) | Cluster database with remote data mirroring | |
| US8677088B1 (en) | Systems and methods for recovering primary sites after failovers to remote secondary sites | |
| US10339010B1 (en) | Systems and methods for synchronization of backup copies | |
| US8010758B1 (en) | System and method for performing secondary site synchronization based on a single change map | |
| CN106855869B (zh) | 一种实现数据库高可用的方法、装置和系统 | |
| US7979396B1 (en) | System and method for performing consistent resynchronization between synchronized copies |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13896501; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | REEP | Request for entry into the european phase | Ref document number: 2013896501; Country of ref document: EP |
| | WWE | Wipo information: entry into national phase | Ref document number: 2013896501; Country of ref document: EP |