US20130232300A1 - System for maintaining coherency during offline changes to storage media
- Publication number: US20130232300A1
- Authority: US (United States)
- Legal status: Abandoned
Classifications
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0635—Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
- G06F3/065—Replication mechanisms
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
- H04L67/1095—Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
Definitions
- Fibre Channel (FC) provides a practical and expandable means of transferring data between workstations, mainframes, supercomputers, desktop computers, and storage devices at fast data rates.
- Fibre Channel (FC) is especially suited for connecting computer servers to shared storage devices and for interconnecting storage controllers and drives.
- a proxy device may be connected by a FC network between a client computer and a storage device.
- the proxy device may contain a tiering media that needs to maintain an identical state as the storage device, so that consistent and correct data can be provided to the client computer.
- the proxy device may not have access to all operations performed on the storage device. These “off line” operations may leave different versions of data in the tiering media and in the data storage device.
- the proxy device may provide incorrect data from the tiering media unless made aware of the offline activity.
- FIG. 1 shows a storage access system coupled between client devices and storage media
- FIGS. 2 and 3 show how data snapshots are performed for data contained in the storage media of FIG. 1;
- FIG. 4 is a flow diagram showing Small Computer System Interface (SCSI) operations performed for the snapshot operations of FIGS. 2 and 3 ;
- FIG. 5 shows how the storage access system in FIG. 1 uses SCSI operations to identify snapshot operations;
- FIG. 6 is a flow diagram showing how the storage access system in FIG. 1 invalidates data in a tiering media responsive to a SCSI bus rescan;
- FIG. 7 is a flow diagram showing how the storage access system in FIG. 1 invalidates data in a tiering media responsive to a SCSI bus rescan and a SCSI device inquiry.
- FIG. 1 shows a storage access system 100 connected between client devices 106 and a storage media 114 .
- the client devices 106 can be servers, personal computers, terminals, portable digital devices, routers, switches, or any other wired or wireless computing device that needs to access data on storage media 114 .
- the client devices 106 conduct different storage operations 102 with the storage media 114 through the storage access system 100 .
- the storage operations 102 may include write operations 102 A and read operations 102 B.
- the storage media 114 may contain multiple media devices 120 , such as multiple storage disks that are referred to generally as a disk array.
- the storage access system 100 and the storage media 114 are stand-alone appliances, devices, or blades.
- the client devices 106 , storage access system 100 , and storage media 114 might be coupled to each other via wired or wireless connections 112 capable of transporting the storage operations 102 and any associated data between client devices 106 and storage media 114 .
- connection 112 is a Fibre Channel network that uses the Small Computer System Interface (SCSI) protocol for storage operations.
- Client devices 106 , storage access system 100 , and storage media 114 may use fibre channel interface cards or Host Bus Adapters (HBA) (not shown).
- the fibre channel HBAs allow the client devices 106 and storage media 114 to communicate over the fibre channel medium 112 using the SCSI protocol.
- Most FC networks utilize SCSI as the underlying storage protocol, and any non-SCSI disk, such as a Serial ATA (SATA) disk, within storage media 114 will typically be virtualized as a SCSI entity.
- SATA Serial ATA
- the client devices 106 may access one or more of the media devices 120 in storage media 114 over an internal or external data bus.
- the storage media 114 in this embodiment could be located in personal computers or servers, or could also be a stand-alone device coupled to the client computer/server 106 via a Fibre Channel SCSI bus, Universal Serial Bus (USB), or packet switched network connections 112 .
- the storage access system 100 contains one or more processors or processing elements 105 that operate as a proxy for the storage operations 102 between the client devices 106 and storage media 114 .
- Tiering media 110 in storage access system 100 includes different combinations of Flash memory and Dynamic Random Access Memory (DRAM) that typically provide faster access speeds than the disks that may be used in storage media 114 .
- the storage access system 100 receives the read and write operations 102 from the client devices 106 that are directed to the storage media 114 .
- the media devices 120 contain multiple storage blocks that have associated block addresses.
- some of the blocks of data from the storage media 114 are temporarily copied into the tiering media 110 .
- the storage access system 100 then uses the data in the faster tiering media 110 to service certain storage access operations 102 from the client devices 106 .
- storage access system 100 monitors all of the storage operations 102 performed in storage media 114 and maintains the same version of data in the tiering media 110 and storage media 114 .
- Proxy 105 is responsible for maintaining this data coherency between the tiering media 110 and the storage media 114 and must see all write operations to storage media 114 .
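The tiering behavior described above can be sketched as a write-through block cache keyed by device and block address. This is an illustrative sketch only; the class and method names are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of the proxy's tiering behavior: a write-through
# cache in which every write updates both the backing storage media and
# the tiering media, keeping the two copies identical.

class TieringProxy:
    def __init__(self, storage):
        self.storage = storage   # backing storage media: {(device, lba): data}
        self.tier = {}           # faster tiering media (cache)

    def write(self, dev, lba, data):
        # The proxy must see every write to keep both copies coherent.
        self.storage[(dev, lba)] = data
        self.tier[(dev, lba)] = data

    def read(self, dev, lba):
        # Serve from the faster tiering media when possible; on a miss,
        # temporarily copy the block from the backing media into the tier.
        key = (dev, lba)
        if key in self.tier:
            return self.tier[key]
        data = self.storage[key]
        self.tier[key] = data
        return data
```

On a read miss, the block is fetched from the backing media and tiered; on every write, both copies are updated, which is why the proxy must see all write operations.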
- FIGS. 2 and 3 show how snapshot operations might be performed in the storage media 114 .
- a snapshot operation is used for capturing data in storage media 114 at a particular instance in time.
- a client application 116 operating on client device 106 may need to conduct a backup operation for the data currently stored in storage media 114 or may need to process the data in storage media 114 as of a particular time.
- a client database application 116 may need to generate reports for stock market transactions from the previous day. Stock transactions are used as an example below, but of course any type of software application and data may be used.
- the client device 106 uses a storage controller 130 to capture a stable state or “snapshot” for the stock transactions from the previous trading day.
- the storage controller 130 copies a particular set of snapshot data from storage media 114 into other media devices 119 or to a different location in storage media 114 .
- the storage media containing the snapshot data is referred to generally as snapshot storage media 118 and is shown separately from storage media 114 in FIGS. 3 and 4 for illustration purposes.
- the snapshot storage media 118 could be a particular directory or particular media devices 119 within the same storage media 114 .
- snapshot storage media 118 will not be constantly updated with new transaction data and thus has superior performance from the perspective of client database application 116 . Reports run against storage media 114 would generate the same results, but would contend with real-time updates and thus run slower.
- FIG. 3 shows an alternative embodiment where the storage controller 130 generates a logical snapshot using pointers 122 in the snapshot storage media 118 .
- Instead of copying all of the related data from storage media 114 into snapshot storage media 118 , the storage controller 130 generates pointers 122 that point to data in the storage media 114 that has not changed since the last snapshot operation. However, any data 124 that has changed since the last snapshot operation is copied from the storage media 114 into the snapshot storage media 118 . Again, this could comprise the storage controller 130 copying the stock transactions from the previous day into a particular read-only directory in the storage media 114 reserved for the snapshot pointers 122 and changed snapshot data 124 .
- the snapshot method chosen often reflects the percentage of snapshot data that is dynamically changed within the real-time storage. If little change is expected, such as the previous day's stock transaction data that is not expected to be modified during the current day, a pointer system is usually more efficient.
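The pointer-based snapshot of FIG. 3 can be sketched as follows. The function name and the "pointer"/"copy" tagging are illustrative assumptions, not the patent's implementation: unchanged blocks are recorded as pointers back into the live storage, and only blocks that changed since the last snapshot are copied.

```python
# Hypothetical sketch of the FIG. 3 logical snapshot: pointers 122 are
# created for unchanged data, while changed data 124 is copied into the
# snapshot storage media.

def take_pointer_snapshot(live, changed_since_last):
    snapshot = {}
    for addr, data in live.items():
        if addr in changed_since_last:
            snapshot[addr] = ("copy", data)      # changed data 124: copy it
        else:
            snapshot[addr] = ("pointer", addr)   # pointer 122 into live media
    return snapshot
```

As the text notes, this approach is efficient when little of the snapshot data is expected to change, since most entries are cheap pointers rather than copies.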
- the storage controller 130 needs to ensure that the data in snapshot storage media 118 is accurate with respect to a particular point in time. Data operations should not be in transit when the snapshot operations are performed. For example, the client application 116 should not be performing account balance updates for the stock transactions for the previous day while the storage controller 130 is generating the snapshot data in media 118 . Otherwise, the account balance updates may be inconsistent with the stock transactions in snapshot media 118 .
- the snapshot operation may be performed within storage controller 130 and not be visible to storage access system 100 , since no write operations are performed while the snapshot is created.
- FIG. 4 shows how data is isolated during a snapshot operation.
- the client application 116 is shut down to temporarily stop any read or write operations 102 to storage media 114 .
- the client device 106 in block 302 unmounts the media devices 120 in the storage media 114 .
- the client device 106 may send unmount commands to its operating system.
- the unmount commands also clear any data that might be cached in the client device 106 , such as within the operating system block cache.
- the storage controller 130 in block 304 then logically removes all media devices 120 from the SCSI network 112 using the method supported by the client device operating system. By clearing its caches, client device 106 assures data integrity when the devices are eventually restored.
- the client application may have its own caches which are cleared upon shut down.
- the storage controller 130 is then free to perform the snapshot operations described above in FIGS. 2 and 3 without the client devices 106 or media device 120 changing any data.
- the storage controller 130 in block 308 adds the media devices 120 back to the SCSI bus 112 by requesting the client operating system to rescan the SCSI bus and add available devices. In most cases, these new devices will have the same identities as those unmounted in block 302 .
- the application thus requires no change or reconfiguration, a key advantage of the snapshot process.
- the client device 106 in block 310 remounts the media devices 120 for example by sending mount requests to the client operating system.
- the client application 116 is then restarted on the client device 106 in block 312 .
- the client application 116 can then go back to performing real-time write and read operations 102 with the storage media 114 .
- the client database application 116 can also start generating the stock transaction reports for the previous day from the data in snapshot storage media 118 .
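The quiesce-and-snapshot sequence of FIG. 4 can be summarized as ordered steps. All object and method names below are hypothetical placeholders for application, operating system, and storage controller actions; real systems would use the operating system's own unmount/mount and SCSI bus rescan mechanisms.

```python
# Hypothetical sketch of the FIG. 4 snapshot isolation sequence.

def snapshot_with_isolation(client_app, client_os, controller):
    client_app.shutdown()        # stop read/write operations, flush app caches
    client_os.unmount_all()      # block 302: unmount devices, clear OS block cache
    controller.remove_devices()  # block 304: logically remove devices from SCSI bus
    controller.take_snapshot()   # perform the snapshot of FIGS. 2 and 3
    controller.add_devices()     # block 308: rescan SCSI bus, restore devices
    client_os.mount_all()        # block 310: remount the media devices
    client_app.restart()         # block 312: resume real-time operations
```

Because the restored devices keep the same identities, the application needs no reconfiguration when it restarts.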
- One of the problems with these snapshot operations, or any other offline operations, is that data is changed or updated by the storage controller 130 offline from the read and write operations that normally pass through storage access system 100 . Because the storage access system 100 cannot monitor these snapshot operations, the proxy device 105 cannot keep the data in tiering media 110 coherent with the data in storage media 114 . Other than the rescan operation, a client's requests to its operating system to mount and unmount devices are not visible on the storage interface.
- the tiering media 110 may currently contain some of the snapshot data for stock transactions that happened two days ago. However, after the snapshot operations in FIGS. 2 and 3 , the snapshot storage media 118 contains the stock transactions from one day ago, while the tiering media 110 still contains the stock transactions from two days ago. If the data in tiering media 110 is not invalidated or cleared, the storage access system 100 may provide some of the two day old data to the client application 116 instead of the one day old data in snapshot storage media 118 . Because the snapshot operations were conducted offline by the storage controller 130 , the storage access system 100 has no way of knowing if or when to clear tiering media 110 .
- Table 1 below shows two control operations conducted using the Small Computer System Interface (SCSI) protocol.
- the proxy 105 uses these control operations to determine when to invalidate or clear data in tiering media 110 .
- a first SCSI bus rescan operation enumerates all devices on the SCSI bus.
- the rescan operation references each device on the SCSI bus and is used for adding devices to the SCSI bus or to identify a removed device.
- the rescan operation is typically performed after a snapshot operation when the media devices 120 are remounted in block 310 in FIG. 4 .
- a second SCSI device inquiry message obtains parameters for specified SCSI target devices that have already been scanned and applies to the SCSI devices specifically referenced in the device inquiry message.
- the SCSI bus rescan indicates a particular number of media devices 120 in the storage media 114 and the SCSI device inquiry identifies the size and other parameters of the individual media devices 120 .
- the SCSI bus rescan is typically associated with a complete reconfiguration of a SCSI device.
- SCSI device inquiry can happen at any time and is not necessarily associated with the reconfiguration of a SCSI device.
- an initiator may issue a SCSI device inquiry to check the status of a target device.
- the exact cases during which rescan and inquiry operations occur depend on the operating system of the client and the exact configuration of the operating system and applications software.
- FIG. 5 shows one embodiment of the storage access system 100 that monitors control operations 103 sent between the client devices 106 and storage media 114 , in addition to the read and write memory access operations 102 described above in FIG. 1 .
- the control operations 103 include SCSI commands for the SCSI protocol used over a SCSI Fibre Channel network 112 .
- the control operations 103 could be any operations used in any protocol that can be associated with potentially non-concurrent data in tiering media 110 .
- the storage access system 100 includes registers, buffers, or memory that stores configuration data 107 .
- the configuration data 107 is used by the proxy 105 to determine when to clear or invalidate data in tiering media 110 .
- the configuration information 107 can be entered by a system administrator based on the type of control operations 103 performed in the system in FIG. 5 .
- the configuration information 107 can also be dynamically changed, for example using a script or Application Programming Interface (API), according to the particular control operations 103 currently being performed on the SCSI bus 112 and/or based on the frequency of the control operations 103 .
- the proxy 105 in block 702 detects control operations 103 sent from the client device 106 to the storage media 114 .
- the control operations 103 are SCSI messages.
- the proxy 105 in block 704 checks to see if the control operation 103 is a SCSI bus rescan operation. For example, the proxy 105 looks for a designator in SCSI control messages that indicates a bus rescan message. If the message is not a bus rescan, the proxy 105 continues to monitor the control operations in block 702 .
- the proxy 105 in block 706 invalidates all of the data in tiering media 110 .
- the proxy 105 assumes that the bus rescan operation 103 followed some offline operation that possibly changed the data in storage media 114 .
- the bus rescan could have followed the snapshot operation described in FIG. 2 . Accordingly, the proxy 105 invalidates all of the data in tiering media 110 to prevent out of date data from being supplied to the client application 116 .
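The invalidate-on-rescan policy of FIG. 6 can be sketched as a simple dispatch on parsed control messages. The dictionary keys and the "bus_rescan" marking are illustrative assumptions; the patent describes the detection in terms of SCSI message designators.

```python
# Hypothetical sketch of the FIG. 6 policy: on any SCSI bus rescan, the
# proxy conservatively invalidates the entire tiering media, since the
# rescan may follow an offline operation that changed the backing storage.

def handle_control_operation(op, tier):
    if op.get("type") == "bus_rescan":
        tier.clear()             # block 706: invalidate all tiered data
```

This is intentionally conservative: clearing everything guarantees no stale data is served, at the cost of refilling the tier from the slower storage media afterward.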
- client devices 106 may assume that the media devices 120 maintain the same configuration after a snapshot operation. Accordingly, the client devices 106 may not issue bus rescans after snapshot operations or after other offline operations. If there is no SCSI bus rescan, the proxy 105 will not clear the data in tiering media 110 and could supply out of date data to the client device 106 .
- the proxy device 105 could be programmed to clear the tiering media 110 after some other SCSI operation affiliated with an offline operation that changes data in storage media 114 .
- the proxy device 105 could be programmed to clear the tiering media 110 responsive to the SCSI device inquiry message described above in Table 1.
- the client device 106 issues the SCSI device inquiry after a snapshot operation and before the media devices 120 are remounted in operation 310 .
- the client devices 106 may frequently issue SCSI device inquiries to the media devices 120 to obtain device status information. Frequently clearing the tiering media 110 after each SCSI device inquiry would substantially slow down the storage access system 100 . If the data in tiering media 110 is frequently invalidated, the storage access system 100 could not provide as many hits from the faster memory devices contained in tiering media 110 . The storage access system 100 could even slow memory access times below the typical speeds provided by storage media 114 .
- FIG. 7 shows how the storage access system 100 ensures correct data is provided to the client devices 106 and also prevents invalidation of the data in tiering media 110 from significantly slowing down memory access times.
- the proxy 105 in block 802 monitors the SCSI control operations 103 exchanged between the client device 106 and storage media 114 .
- the proxy 105 in block 804 checks to see if the control operation 103 is a SCSI bus rescan. If the control operation 103 is a bus rescan in block 804 , the proxy 105 in block 806 invalidates all of the data in tiering media 110 . This prevents the storage access system 100 from providing out-of-date data when the client application 116 makes subsequent memory access requests 102 ( FIG. 1 ) to storage media 114 .
- the proxy 105 in block 808 checks to see if the control operation 103 is a SCSI device inquiry. If the control operation 103 is not a SCSI device inquiry, the proxy 105 goes back to monitoring the control operations 103 in block 802 . If the control operation 103 is a SCSI device inquiry, the proxy 105 in block 810 checks the configuration data 107 . Alternatively, the proxy 105 could have checked the configuration data 107 earlier during initial device configuration.
- Different computing systems may perform SCSI bus rescans and SCSI device inquiries in different situations. For example, some computing systems may not perform snapshot operations. Other computer systems may decide to issue the SCSI device inquiries in conjunction with the mounting of media devices after snapshot operations.
- An administrator or client device 106 programs the configuration data 107 in a register or memory device.
- the configuration data 107 either enables or disables the proxy 105 to invalidate data in tiering media 110 .
- the configuration data 107 may remain static during subsequent system operations or the administrator or client device 106 may dynamically set or change the configuration data 107 when a snapshot operation is performed.
- the proxy device reads the configuration data 107 in block 810 to determine if SCSI device inquiries are associated with an operation, such as a snapshot operation, that requires invalidation of at least some data in tiering media 110 .
- the configuration data 107 may be a bit or flag that is set to notify the proxy 105 to clear data in the tiering media 110 whenever a SCSI device inquiry is detected.
- the configuration data 107 can be set via an administration script based on a time of day, initiation of a snapshot operation, or based on any other event that can change coherency between data in storage media 114 and data in tiering media 110 .
- If the configuration data 107 does not associate SCSI device inquiries with invalidation, the proxy 105 moves back to block 802 and waits for the next control operation. Otherwise, the proxy 105 in block 812 invalidates the data in tiering media 110 associated with the particular media device 120 identified in the SCSI device inquiry.
- data in tiering media 110 is mapped to a particular media device 120 and to a particular address or block address in the media device 120 .
- the proxy 105 searches for any data in tiering media 110 that maps to the media device 120 identified in the SCSI device inquiry.
- the proxy 105 then invalidates the identified data or blocks of data in operation 812 .
- the device referenced in the SCSI device inquiry may represent multiple disks or a stripe of data across multiple disks in a device volume.
- the proxy 105 in operation 812 only invalidates the data in tiering media 110 associated with those particular disks or device volume.
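The combined FIG. 7 policy can be sketched as follows. The message dictionary, the "invalidate_on_inquiry" flag name, and the (device, block address) cache keys are illustrative assumptions standing in for the configuration data 107 and the tiering media mapping described above.

```python
# Hypothetical sketch of the FIG. 7 policy: a bus rescan clears the whole
# tiering media, while a device inquiry clears only the blocks mapped to
# the queried device, and only when the configuration data enables
# inquiry-triggered invalidation.

def handle_control_op(op, tier, config):
    if op["type"] == "bus_rescan":
        tier.clear()                                 # block 806: clear everything
    elif op["type"] == "device_inquiry":             # block 808
        if config.get("invalidate_on_inquiry"):      # block 810: check config 107
            dev = op["device"]
            for key in [k for k in tier if k[0] == dev]:
                del tier[key]                        # block 812: selective clear
```

The selective path preserves tiered data for devices unrelated to the inquiry, avoiding the performance penalty of clearing the whole tier on every status check.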
- the system described above can use dedicated processor systems, microcontrollers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware.
Description
- This application is a continuation of U.S. patent application Ser. No. 12/794,057 filed on Jun. 4, 2010, which is a continuation in part of U.S. patent application Ser. No. 12/619,609 filed Nov. 16, 2009, which claims priority to U.S. provisional patent application Ser. No. 61/115,426, filed Nov. 17, 2008, and which are both herein incorporated by reference in their entirety. U.S. patent application Ser. No. 12/794,057, to which priority is claimed herein, is also a continuation in part of U.S. patent application Ser. No. 12/568,612 filed on Sep. 28, 2009, now U.S. Pat. No. 8,160,070, which claims priority to U.S. Provisional Application Ser. No. 61/101,645 filed Sep. 30, 2008, which are also incorporated by reference in their entirety.
- Fibre Channel (FC) provides practical and expandable means of transferring data between workstations, mainframes, supercomputers, desktop computers, and storage devices at fast data rates. Fibre Channel (FC) is especially suited for connecting computer servers to shared storage devices and for interconnecting storage controllers and drives.
- A proxy device may be connected by a FC network between a client computer and a storage device. The proxy device may contain a tiering media that needs to maintain an identical state as the storage device, so that consistent and correct data can be provided to the client computer. However, the proxy device may not have access to all operations performed on the storage device. These “off line” operations may leave different versions of data in the tiering media and in the data storage device. When the client computer goes back “on line” and tries to access the storage device through the proxy device, the proxy device may provide incorrect data from the tiering media unless made aware of the offline activity.
-
FIG. 1 shows a storage access system coupled between client devices and storage media; -
FIGS. 2 and 3 show how data snapshots are performed for data contained in the storage media ofFIG. 1 ; -
FIG. 4 is a flow diagram showing Small Computer System Interface (SCSI) operations performed for the snapshot operations ofFIGS. 2 and 3 ; -
FIG. 5 shows how the storage access system inFIG. 1 uses SCSI operations to identity snapshot operations; -
FIG. 6 is a flow diagram showing how the storage access system inFIG. 1 invalidates data in a tiering media responsive to a SCSI bus rescan; and -
FIG. 7 is a flow diagram showing how the storage access system inFIG. 1 invalidates data in a tiering media responsive to a SCSI bus rescan and a SCSI device inquiry. - Several preferred examples of the present application will now be described with reference to the accompanying drawings. Various other examples are also possible and practical. This application may be exemplified in many different forms and should not be construed as being limited to the examples set forth herein.
-
FIG. 1 shows astorage access system 100 connected betweenclient devices 106 and astorage media 114. Theclient devices 106 can be servers, personal computers, terminals, portable digital devices, routers, switches, or any other wired or wireless computing device that needs to access data onstorage media 114. Theclient devices 106 conduct different storage operations 102 with thestorage media 114 though thestorage access system 100. The storage operations 102 may includewrite operations 102A and readoperations 102B. Thestorage media 114 may containmultiple media devices 120, such as multiple storage disks that are referred to generally as a disk array. - In one embodiment, the
storage access system 100 and thestorage media 114 are stand-alone appliances, devices, or blades. In one embodiment, theclient devices 106,storage access system 100, andstorage media 114 might be coupled to each other via wired orwireless connections 112 capable of transporting the storage operations 102 and any associated data betweenclient devices 106 andstorage media 114. - One example of a
connection 112 is a Fibre Channel network that uses the Small Computer System Interface (SCSI) protocol for storage operations.Client devices 106,storage access system 100, andstorage media 114 may use fibre channel interface cards or Host Bus Adapters (HBA) (not shown). The fibre channel HBAs allow theclient devices 106 andstorage media 114 to communicate over thefibre channel medium 112 using the SCSI protocol. Most FC networks utilize SCSI as the underlying storage protocol, and any non-SCSI disk, such as a Serial ATA (SATA) disk, withinstorage media 114 will typically be virtualized as a SCSI entity. - In another embodiment, the
client devices 106 may access one or more of themedia devices 120 instorage media 114 over an internal or external data bus. Thestorage media 114 in this embodiment could be located in personal computers or servers, or could also be a stand-alone device coupled to the client computer/server 106 via a fiber channel SCSI bus, Universal Serial Bus (USB), or packet switchednetwork connections 112. - The
storage access system 100 contains one or more processors or processing elements 105 that operate as a proxy for the storage operations 102 between the client devices 106 and storage media 114. Tiering media 110 in storage access system 100 includes different combinations of Flash memory and Dynamic Random Access Memory (DRAM) that typically provide faster access speeds than the disks that may be used in storage media 114. - The
storage access system 100 receives the read and write operations 102 from the client devices 106 that are directed to the storage media 114. In one embodiment, the media devices 120 contain multiple storage blocks that have associated block addresses. To improve throughput and/or to reduce latency to the data in the storage media 114, some of the blocks of data from the storage media 114 are temporarily copied into the tiering media 110. The storage access system 100 then uses the data in the faster tiering media 110 to service certain storage access operations 102 from the client devices 106. - In order to maintain data coherency,
storage access system 100 monitors all of the storage operations 102 performed in storage media 114 and maintains the same version of data in the tiering media 110 and storage media 114. Proxy 105 is responsible for maintaining this data coherency between the tiering media 110 and the storage media 114 and must see all write operations to storage media 114. -
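The tiering and write-through coherency described above can be modeled in a short sketch (all class and method names are illustrative, and the dict-based media are stand-ins for real block devices; this is not taken from the patent itself):

```python
class TieringProxy:
    """Illustrative model of storage access system 100: reads are served
    from the fast tier when possible, and every write passes through the
    proxy so the tier copy never diverges from the backing storage media."""

    def __init__(self, backing):
        self.backing = backing  # models storage media 114: block address -> data
        self.tier = {}          # models tiering media 110 (Flash/DRAM)

    def read(self, addr):
        if addr in self.tier:        # tier hit: the faster media services the read
            return self.tier[addr]
        data = self.backing[addr]    # tier miss: fetch from storage media
        self.tier[addr] = data       # temporarily copy the block into the tier
        return data

    def write(self, addr, data):
        self.backing[addr] = data    # the proxy sees every write...
        self.tier[addr] = data       # ...so both copies stay the same version
```

Because every write passes through `write()`, the tier never holds an older version of a block than the backing store; it is exactly the offline operations that bypass this path which break the invariant, as discussed below.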
FIGS. 2 and 3 show how snapshot operations might be performed in the storage media 114. A snapshot operation is used for capturing data in storage media 114 at a particular instance in time. A client application 116 operating on client device 106 may need to conduct a backup operation for the data currently stored in storage media 114 or may need to process the data in storage media 114 as of a particular time. For example, a client database application 116 may need to generate reports for stock market transactions from the previous day. Stock transactions are used as an example below, but of course any type of software application and data may be used. - The
client device 106 uses a storage controller 130 to capture a stable state or “snapshot” for the stock transactions from the previous trading day. The storage controller 130 copies a particular set of snapshot data from storage media 114 into other media devices 119 or to a different location in storage media 114. The storage media containing the snapshot data is referred to generally as snapshot storage media 118 and is shown separately from storage media 114 in FIGS. 3 and 4 for illustration purposes. However, the snapshot storage media 118 could be a particular directory or particular media devices 119 within the same storage media 114. - After the snapshot operation, real-time read and write data can continue to be accessed in
storage media 114 while the stock transactions from the previous day are isolated as read-only data in snapshot storage media 118. The client database application 116 is then free to generate reports for the stock transactions from the previous day from snapshot storage media 118. The advantage of this method is that snapshot storage media 118 will not be constantly updated with new transaction data and thus offers superior performance from the perspective of client database application 116. Reports run against storage media 114 would generate the same result, but would include content with real-time updates and thus be slower. -
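The full-copy snapshot of FIG. 2 can be sketched as follows (function and variable names are hypothetical; a dict models the block storage):

```python
def full_copy_snapshot(storage, keys):
    """Sketch of the FIG. 2 snapshot operation: the selected blocks are
    physically copied out of live storage (modeling storage media 114)
    into separate snapshot media (modeling snapshot storage media 118),
    so reports can run against a stable read-only copy while real-time
    updates continue against the live storage."""
    snapshot = {k: storage[k] for k in keys}  # physical copy into snapshot media
    return snapshot
```

After the copy, writes to `storage` leave `snapshot` untouched, which is why reports run against the snapshot see a fixed point in time.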
FIG. 3 shows an alternative embodiment where the storage controller 130 generates a logical snapshot using pointers 122 in the snapshot storage media 118. Instead of copying all of the related data from storage media 114 into snapshot storage media 118, the storage controller 130 generates pointers 122 that point to data in the storage media 114 that has not changed since the last snapshot operation. However, any data 124 that has changed since the last snapshot operation is copied from the storage media 114 into the snapshot storage media 118. Again, this could comprise the storage controller 130 copying the stock transactions from the previous day into a particular read-only directory in the storage media 114 reserved for the snapshot pointers 122 and changed snapshot data 124. To minimize the time required to perform the snapshot operation, the snapshot method chosen often reflects the percentage of snapshot data that is dynamically changed within the real-time storage. If little change is expected, such as the previous day's stock transaction data that is not expected to be modified during the current day, a pointer system is usually more efficient. - The
storage controller 130 needs to ensure that the data in snapshot storage media 118 is accurate with respect to a particular point in time. Data operations should not be in transit when the snapshot operations are performed. For example, the client application 116 should not be performing account balance updates for the stock transactions for the previous day while the storage controller 130 is generating the snapshot data in media 118. Otherwise, the account balance updates may be inconsistent with the stock transactions in snapshot media 118. Specifically, the snapshot operation may be performed within storage controller 130 and not be visible to storage access system 100, as no write operations are performed as the snapshot is created. -
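The pointer-based logical snapshot of FIG. 3 can be sketched as follows (a simplified model with illustrative names; real implementations track blocks, not dict keys):

```python
def pointer_snapshot(storage, changed_addrs):
    """Sketch of the FIG. 3 logical snapshot: blocks unchanged since the
    last snapshot are recorded as pointers 122 into live storage, while
    changed data 124 is physically copied into the snapshot media."""
    snap = {}
    for addr in storage:
        if addr in changed_addrs:
            snap[addr] = ("copy", storage[addr])  # changed data 124 is copied
        else:
            snap[addr] = ("ptr", addr)            # pointer 122 into storage media 114
    return snap

def snapshot_read(snap, storage, addr):
    """Resolve a snapshot read: follow a pointer into live storage (valid
    because pointed-to blocks are not expected to change), or return the
    copied block directly."""
    kind, value = snap[addr]
    return storage[value] if kind == "ptr" else value
```

When few blocks have changed, most entries are cheap pointers rather than copies, which is why the text notes a pointer system is usually more efficient for mostly static data.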
FIG. 4 shows how data is isolated during a snapshot operation. In block 300 the client application 116 is shut down to temporarily stop any read or write operations 102 to storage media 114. The client device 106 in block 302 unmounts the media devices 120 in the storage media 114. For example, the client device 106 may send unmount commands to its operating system. The unmount commands also clear any data that might be cached in the client device 106, such as within the operating system block cache. The storage controller 130 in block 304 then logically removes all media devices 120 from the SCSI network 112 using the method supported by the client device operating system. By clearing its caches, client device 106 assures data integrity when the devices are eventually restored. The client application may have its own caches, which are cleared upon shutdown. - In
block 306 the storage controller 130 is then free to perform the snapshot operations described above in FIGS. 2 and 3 without the client devices 106 or media device 120 changing any data. After the snapshot data is successfully copied into snapshot media 118, the storage controller 130 in block 308 adds the media devices 120 back to the SCSI bus 112 by requesting the client operating system to rescan the SCSI bus and add available devices. In most cases, these new devices will have the same identities as those unmounted in block 302. The application thus requires no change or reconfiguration, a key advantage of the snapshot process. - The
client device 106 in block 310 remounts the media devices 120, for example by sending mount requests to the client operating system. The client application 116 is then restarted on the client device 106 in block 312. The client application 116 can then go back to performing real-time write and read operations 102 with the storage media 114. The client database application 116 can also start generating the stock transaction reports for the previous day from the data in snapshot storage media 118. - One of the problems with these snapshot operations, or any other offline operations, is that data is changed or updated by the
storage controller 130 offline from the read and write operations that normally pass through storage access system 100. Because the storage access system 100 cannot monitor these snapshot operations, the proxy device 105 cannot keep the data in tiering media 110 coherent with the data in storage media 114. Other than the rescan operation, client requests to its operating system to mount and unmount devices are not visible on the storage interface. - For example, the
tiering media 110 may currently contain some of the snapshot data for stock transactions that happened two days ago. However, after the snapshot operations in FIGS. 2 and 3, the snapshot storage media 118 contains the stock transactions from one day ago, while the tiering media 110 still contains the stock transactions from two days ago. If the data in tiering media 110 is not invalidated or cleared, the storage access system 100 may provide some of the two-day-old data to the client application 116 instead of the one-day-old data in snapshot storage media 118. Because the snapshot operations were conducted offline by the storage controller 130, the storage access system 100 has no way of knowing if or when to clear tiering media 110. - Table 1 below shows two control operations conducted using the Small Computer System Interface (SCSI) protocol. The
proxy 105 uses these control operations to determine when to invalidate or clear data in tiering media 110. A first SCSI bus rescan operation enumerates all devices on the SCSI bus. The rescan operation references each device on the SCSI bus and is used for adding devices to the SCSI bus or to identify a removed device. The rescan operation is typically performed after a snapshot operation when the media devices 120 are remounted in block 310 in FIG. 4. -
TABLE 1

| TYPE | PURPOSE | NUMBER OF MEDIA DEVICES REFERENCED |
|---|---|---|
| SCSI BUS RESCAN | ENUMERATE ALL DEVICES ON SCSI BUS | ALL DEVICES ON SCSI BUS |
| SCSI DEVICE INQUIRY | OBTAIN DEVICE PARAMETERS FOR SPECIFIC SCSI DEVICE | ONLY SPECIFIED DEVICE |

- A second SCSI device inquiry message obtains parameters for specified SCSI target devices that have already been scanned and applies to the SCSI devices specifically referenced in the device inquiry message. For example, the SCSI bus rescan indicates a particular number of
media devices 120 in the storage media 114, and the SCSI device inquiry identifies the size and other parameters of the individual media devices 120. - The SCSI bus rescan is typically associated with a complete reconfiguration of a SCSI device. However, a SCSI device inquiry can happen at any time and is not necessarily associated with the reconfiguration of a SCSI device. For example, an initiator may issue a SCSI device inquiry to check the status of a target device. The exact cases during which rescan and inquiry operations occur depend on the operating system of the client and the exact configuration of the operating system and application software.
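On the wire, the two control operations of Table 1 can be distinguished by the operation code in the first byte of the SCSI Command Descriptor Block (CDB). One caveat: a host "bus rescan" is not itself a single SCSI command; it typically appears to a proxy as a REPORT LUNS command followed by per-device INQUIRY commands. A rough classifier sketch (function name is illustrative):

```python
INQUIRY = 0x12       # SCSI INQUIRY opcode: obtain parameters for the addressed device
REPORT_LUNS = 0xA0   # SCSI REPORT LUNS opcode: enumerate logical units, typical of a rescan

def classify_cdb(cdb):
    """Map a raw Command Descriptor Block to the Table 1 categories.
    Only the opcode byte is examined; a real proxy would also track
    which initiator and target the command addresses."""
    opcode = cdb[0]
    if opcode == REPORT_LUNS:
        return "bus rescan"
    if opcode == INQUIRY:
        return "device inquiry"
    return "other"
```

All other opcodes (reads, writes, and so on) fall through to "other" and are handled by the normal tiering data path.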
-
FIG. 5 shows one embodiment of the storage access system 100 that monitors control operations 103 sent between the client devices 106 and storage media 114, in addition to the read and write memory access operations 102 described above in FIG. 1. In one example, the control operations 103 include SCSI commands for the SCSI protocol used over a SCSI Fibre Channel network 112. However, the control operations 103 could be any operations used in any protocol that can be associated with potentially non-concurrent data in tiering media 110. - The
storage access system 100 includes registers, buffers, or memory that stores configuration data 107. The configuration data 107 is used by the proxy 105 to determine when to clear or invalidate data in tiering media 110. The configuration information 107 can be entered by a system administrator based on the type of control operations 103 performed in the system in FIG. 5. The configuration information 107 can also be dynamically changed, for example using a script or Application Programming Interface (API), according to the particular control operations 103 currently being performed on the SCSI bus 112 and/or based on the frequency of the control operations 103. - Referring to
FIG. 6, in one embodiment, the proxy 105 in block 702 detects control operations 103 sent from the client device 106 to the storage media 114. Again, in one example, the control operations 103 are SCSI messages. The proxy 105 in block 704 checks to see if the control operation 103 is a SCSI bus rescan operation. For example, the proxy 105 looks for a designator in SCSI control messages that indicates a bus rescan message. If the message is not a bus rescan, the proxy 105 continues to monitor the control operations in block 702. - If the
control operation 103 is a bus rescan in block 704, the proxy 105 in block 706 invalidates all of the data in tiering media 110. The proxy 105 assumes that the bus rescan operation 103 followed some offline operation that possibly changed the data in storage media 114. For example, the bus rescan could have followed the snapshot operation described in FIG. 2. Accordingly, the proxy 105 invalidates all of the data in tiering media 110 to prevent out-of-date data from being supplied to the client application 116. - In some computer systems,
client devices 106 may assume that the media devices 120 maintain the same configuration after a snapshot operation. Accordingly, the client devices 106 may not issue bus rescans after snapshot operations or after other offline operations. If there is no SCSI bus rescan, the proxy 105 will not clear the data in tiering media 110 and could supply out-of-date data to the client device 106. - The
proxy device 105 could be programmed to clear the tiering media 110 after some other SCSI operation affiliated with an offline operation that changes data in storage media 114. For example, the proxy device 105 could be programmed to clear the tiering media 110 responsive to the SCSI device inquiry message described above in Table 1. Referring briefly back to FIG. 4, the client device 106 issues the SCSI device inquiry after a snapshot operation and before the media devices 120 are remounted in operation 310. - However, the
client devices 106 may frequently issue SCSI device inquiries to the media devices 120 to obtain device status information. Clearing the tiering media 110 after each SCSI device inquiry would substantially slow down the storage access system 100. If the data in tiering media 110 is frequently invalidated, the storage access system 100 cannot provide as many hits from the faster memory devices contained in tiering media 110. The storage access system 100 could even slow memory access times below the typical speeds provided by storage media 114. -
FIG. 7 shows how the storage access system 100 ensures correct data is provided to the client devices 106 while also preventing invalidation of the data in tiering media 110 from significantly slowing down memory access times. The proxy 105 in block 802 monitors the SCSI control operations 103 exchanged between the client device 106 and storage media 114. The proxy 105 in block 804 checks to see if the control operation 103 is a SCSI bus rescan. If the control operation 103 is a bus rescan in block 804, the proxy 105 in block 806 invalidates all of the data in tiering media 110. This prevents the storage access system 100 from providing out-of-date data when the client application 116 makes subsequent memory access requests 102 (FIG. 1) to storage media 114. - If the
control operation 103 is not a SCSI bus rescan, proxy 105 in block 808 checks to see if the control operation 103 is a SCSI device inquiry. If the control operation 103 is not a SCSI device inquiry, the proxy 105 goes back to monitoring the control operations 103 in block 802. If the control operation 103 is a SCSI device inquiry, the proxy 105 in block 810 checks the configuration data 107. Alternatively, the proxy 105 could have checked the configuration data 107 earlier during initial device configuration. - As explained above, different computer systems may perform SCSI bus rescans and SCSI device inquiries in different situations. For example, some computing systems may not perform snapshot operations. Other computer systems may decide to issue the SCSI device inquiries in conjunction with the mounting of media devices after snapshot operations.
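The combined FIG. 6/FIG. 7 decision flow can be sketched as follows, with the tier modeled as a dict keyed by (media device, block address) and the configuration data 107 as a boolean flag; all names are illustrative, not from the patent:

```python
def handle_control_op(op, device_id, tier, inquiry_invalidates):
    """Sketch of blocks 802-812: a bus rescan invalidates the whole tier
    (blocks 804-806); a device inquiry invalidates only the entries mapped
    to the referenced media device, and only when configuration data 107
    enables inquiry-triggered invalidation (blocks 808-812)."""
    if op == "bus rescan":                           # block 804
        tier.clear()                                 # block 806: invalidate everything
    elif op == "device inquiry":                     # block 808
        if inquiry_invalidates:                      # block 810: configuration data 107
            for key in [k for k in tier if k[0] == device_id]:
                del tier[key]                        # block 812: selective invalidation
    # any other control operation leaves the tier untouched (back to block 802)
```

Keying the tier by media device makes the selective path cheap to express: only the entries for the inquired device are dropped, so unrelated tiered data keeps serving hits.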
- An administrator or
client device 106 programs the configuration data 107 in a register or memory device. The configuration data 107 either enables or disables invalidation of data in tiering media 110 by the proxy 105. The configuration data 107 may remain static during subsequent system operations, or the administrator or client device 106 may dynamically set or change the configuration data 107 when a snapshot operation is performed. - The proxy device reads the
configuration data 107 in block 810 to determine if SCSI device inquiries are associated with an operation, such as a snapshot operation, that requires invalidation of at least some data in tiering media 110. For example, the configuration data 107 may be a bit or flag that is set to notify the proxy 105 to clear data in the tiering media 110 whenever a SCSI device inquiry is detected. The configuration data 107 can be set via an administration script based on a time of day, initiation of a snapshot operation, or based on any other event that can change coherency between data in storage media 114 and data in tiering media 110. - If the
configuration data 107 is not set in block 810, the proxy 105 moves back to block 802 and waits for the next control operation. Otherwise, the proxy 105 in block 812 invalidates the data in tiering media 110 associated with the particular media device 120 identified in the SCSI device inquiry. - For example, data in
tiering media 110 is mapped to a particular media device 120 and to a particular address or block address in the media device 120. The proxy 105 searches for any data in tiering media 110 that maps to the media device 120 identified in the SCSI device inquiry. The proxy 105 then invalidates the identified data or blocks of data in operation 812. In another example, the device referenced in the SCSI device inquiry may represent multiple disks or a stripe of data across multiple disks in a device volume. The proxy 105 in operation 812 only invalidates the data in tiering media 110 associated with those particular disks or device volume. - Thus, outdated data is invalidated in the
tiering media 110 even when the client device 106 fails to issue SCSI bus rescans after snapshot operations. Invalidation based on SCSI device inquiries is programmable. Therefore, the proxy 105 will also not unnecessarily invalidate data in the tiering media 110 for SCSI device inquiries not associated with snapshot operations or for other operations that do not require invalidation of the data in tiering media 110. - Several preferred examples have been described above with reference to the accompanying drawings. Various other examples of the application are also possible and practical. The system may be exemplified in many different forms and should not be construed as being limited to the examples set forth above.
- The figures listed above illustrate preferred examples of the application and the operation of such examples. In the figures, the size of the boxes is not intended to represent the size of the various physical components. Where the same element appears in multiple figures, the same reference numeral is used to denote the element in all of the figures where it appears.
- Only those parts of the various units are shown and described which are necessary to convey an understanding of the examples to those skilled in the art. Those parts and elements not shown may be conventional and known in the art.
- The system described above can use dedicated processor systems, micro controllers, programmable logic devices, or microprocessors that perform some or all of the operations. Some of the operations described above may be implemented in software and other operations may be implemented in hardware.
- For the sake of convenience, the operations are described as various interconnected functional blocks or distinct software modules. This is not necessary, however, and there may be cases where these functional blocks or modules are equivalently aggregated into a single logic device, program or operation with unclear boundaries. In any event, the functional blocks and software modules or features of the flexible interface can be implemented by themselves, or in combination with other operations in either hardware or software.
Claims (20)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/858,533 US20130232300A1 (en) | 2008-09-30 | 2013-04-08 | System for maintaining coherency during offline changes to storage media |
Applications Claiming Priority (6)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US10164508P | 2008-09-30 | 2008-09-30 | |
| US11542608P | 2008-11-17 | 2008-11-17 | |
| US12/568,612 US8160070B2 (en) | 2008-09-30 | 2009-09-28 | Fibre channel proxy |
| US12/619,609 US8838850B2 (en) | 2008-11-17 | 2009-11-16 | Cluster control protocol |
| US12/794,057 US8417895B1 (en) | 2008-09-30 | 2010-06-04 | System for maintaining coherency during offline changes to storage media |
| US13/858,533 US20130232300A1 (en) | 2008-09-30 | 2013-04-08 | System for maintaining coherency during offline changes to storage media |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/794,057 Continuation US8417895B1 (en) | 2008-09-30 | 2010-06-04 | System for maintaining coherency during offline changes to storage media |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20130232300A1 true US20130232300A1 (en) | 2013-09-05 |
Family
ID=47999367
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/794,057 Expired - Fee Related US8417895B1 (en) | 2008-09-30 | 2010-06-04 | System for maintaining coherency during offline changes to storage media |
| US13/858,533 Abandoned US20130232300A1 (en) | 2008-09-30 | 2013-04-08 | System for maintaining coherency during offline changes to storage media |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US12/794,057 Expired - Fee Related US8417895B1 (en) | 2008-09-30 | 2010-06-04 | System for maintaining coherency during offline changes to storage media |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US8417895B1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150286420A1 (en) * | 2014-04-03 | 2015-10-08 | ANALYSIS SOLUTION LLC, dba Gearbit | High-Speed Data Storage |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8417895B1 (en) * | 2008-09-30 | 2013-04-09 | Violin Memory Inc. | System for maintaining coherency during offline changes to storage media |
| US8819317B1 (en) | 2013-06-12 | 2014-08-26 | International Business Machines Corporation | Processing input/output requests using proxy and owner storage systems |
| US9940019B2 (en) | 2013-06-12 | 2018-04-10 | International Business Machines Corporation | Online migration of a logical volume between storage systems |
| US9274989B2 (en) | 2013-06-12 | 2016-03-01 | International Business Machines Corporation | Impersonating SCSI ports through an intermediate proxy |
| US9769062B2 (en) | 2013-06-12 | 2017-09-19 | International Business Machines Corporation | Load balancing input/output operations between two computers |
| US9274916B2 (en) | 2013-06-12 | 2016-03-01 | International Business Machines Corporation | Unit attention processing in proxy and owner storage systems |
| US9779003B2 (en) | 2013-06-12 | 2017-10-03 | International Business Machines Corporation | Safely mapping and unmapping host SCSI volumes |
| US12355623B2 (en) * | 2023-04-17 | 2025-07-08 | Dell Products L.P. | Configuration checking of asset protection infrastructure |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20110231615A1 (en) * | 2010-03-19 | 2011-09-22 | Ober Robert E | Coherent storage network |
| US8417895B1 (en) * | 2008-09-30 | 2013-04-09 | Violin Memory Inc. | System for maintaining coherency during offline changes to storage media |
Family Cites Families (76)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5544347A (en) | 1990-09-24 | 1996-08-06 | Emc Corporation | Data storage system controlled remote data mirroring with respectively maintained data indices |
| US5954796A (en) | 1997-02-11 | 1999-09-21 | Compaq Computer Corporation | System and method for automatically and dynamically changing an address associated with a device disposed in a fire channel environment |
| US6041366A (en) | 1998-02-02 | 2000-03-21 | International Business Machines Corporation | System and method for dynamic specification of input/output attributes |
| EP1095373A2 (en) | 1998-05-15 | 2001-05-02 | Storage Technology Corporation | Caching method for data blocks of variable size |
| US6401147B1 (en) | 1999-05-24 | 2002-06-04 | Advanced Micro Devices, Inc. | Split-queue architecture with a first queue area and a second queue area and queue overflow area having a trickle mode and an overflow mode based on prescribed threshold values |
| US8108590B2 (en) | 2000-01-06 | 2012-01-31 | Super Talent Electronics, Inc. | Multi-operation write aggregator using a page buffer and a scratch flash block in each of multiple channels of a large array of flash memory to reduce block wear |
| US8266367B2 (en) | 2003-12-02 | 2012-09-11 | Super Talent Electronics, Inc. | Multi-level striping and truncation channel-equalization for flash-memory system |
| US6636982B1 (en) | 2000-03-03 | 2003-10-21 | International Business Machines Corporation | Apparatus and method for detecting the reset of a node in a cluster computer system |
| US20020175998A1 (en) | 2000-05-31 | 2002-11-28 | Hoang Khoi Nhu | Data-on-demand digital broadcast system utilizing prefetch data transmission |
| US8204082B2 (en) | 2000-06-23 | 2012-06-19 | Cloudshield Technologies, Inc. | Transparent provisioning of services over a network |
| US6810470B1 (en) | 2000-08-14 | 2004-10-26 | Ati Technologies, Inc. | Memory request interlock |
| US6678795B1 (en) | 2000-08-15 | 2004-01-13 | International Business Machines Corporation | Method and apparatus for memory prefetching based on intra-page usage history |
| US20020035655A1 (en) | 2000-09-15 | 2002-03-21 | Dawn Finn | Method of checking for and recovering from underruns and overrun slips when writing to circular buffers in dynamic bandwidth circuit emulation services |
| US7110359B1 (en) | 2001-03-05 | 2006-09-19 | Advanced Micro Devices, Inc. | System and method for dynamically updating weights of weighted round robin in output queues |
| US7398302B2 (en) | 2001-03-30 | 2008-07-08 | Hitachi, Ltd. | Remote copy with path selection and prioritization |
| US6721870B1 (en) | 2001-06-12 | 2004-04-13 | Emc Corporation | Prefetch algorithm for short sequences |
| JP2002373109A (en) | 2001-06-13 | 2002-12-26 | Nec Corp | Data look-ahead system and its method |
| US6985490B2 (en) | 2001-07-11 | 2006-01-10 | Sancastle Technologies, Ltd. | Extension of fibre channel addressing |
| JP3888095B2 (en) | 2001-07-26 | 2007-02-28 | 株式会社日立製作所 | Gas turbine equipment |
| US7017084B2 (en) | 2001-09-07 | 2006-03-21 | Network Appliance Inc. | Tracing method and apparatus for distributed environments |
| US6976134B1 (en) | 2001-09-28 | 2005-12-13 | Emc Corporation | Pooling and provisioning storage resources in a storage network |
| US7080140B2 (en) * | 2001-10-05 | 2006-07-18 | International Business Machines Corporation | Storage area network methods and apparatus for validating data from multiple sources |
| US7430593B2 (en) * | 2001-10-05 | 2008-09-30 | International Business Machines Corporation | Storage area network for topology rendering |
| US7599360B2 (en) | 2001-12-26 | 2009-10-06 | Cisco Technology, Inc. | Methods and apparatus for encapsulating a frame for transmission in a storage area network |
| US6891543B2 (en) | 2002-05-08 | 2005-05-10 | Intel Corporation | Method and system for optimally sharing memory between a host processor and graphics processor |
| US6789171B2 (en) | 2002-05-31 | 2004-09-07 | Veritas Operating Corporation | Computer system implementing a multi-threaded stride prediction read ahead algorithm |
| US7171469B2 (en) | 2002-09-16 | 2007-01-30 | Network Appliance, Inc. | Apparatus and method for storing data in a proxy cache in a network |
| JP2004222070A (en) | 2003-01-16 | 2004-08-05 | Ntt Docomo Inc | Route control device and route control method |
| US7194568B2 (en) | 2003-03-21 | 2007-03-20 | Cisco Technology, Inc. | System and method for dynamic mirror-bank addressing |
| US7089394B2 (en) | 2003-04-22 | 2006-08-08 | Intel Corporation | Optimally mapping a memory device |
| US7853699B2 (en) | 2005-03-15 | 2010-12-14 | Riverbed Technology, Inc. | Rules-based transaction prefetching using connection end-point proxies |
| US7089370B2 (en) | 2003-09-30 | 2006-08-08 | International Business Machines Corporation | Apparatus and method for pre-fetching page data using segment table data |
| JP2005251078A (en) | 2004-03-08 | 2005-09-15 | Hitachi Ltd | Information processing apparatus and control method of information processing apparatus |
| US7975108B1 (en) | 2004-03-25 | 2011-07-05 | Brian Holscher | Request tracking data prefetcher apparatus |
| CA2564967C (en) | 2004-04-30 | 2014-09-30 | Commvault Systems, Inc. | Hierarchical systems and methods for providing a unified view of storage information |
| US8018936B2 (en) | 2004-07-19 | 2011-09-13 | Brocade Communications Systems, Inc. | Inter-fabric routing |
| US7500063B2 (en) | 2004-08-09 | 2009-03-03 | Xiv Ltd. | Method and apparatus for managing a cache memory in a mass-storage system |
| US7756841B2 (en) * | 2005-03-22 | 2010-07-13 | Microsoft Corporation | System and method for identity decisions and invalidation |
| US8260982B2 (en) | 2005-06-07 | 2012-09-04 | Lsi Corporation | Method for reducing latency |
| US7424577B2 (en) | 2005-08-26 | 2008-09-09 | Network Appliance, Inc. | Dynamic optimization of cache memory |
| US7296135B2 (en) | 2005-10-05 | 2007-11-13 | Hewlett-Packard Development Company, L.P. | Data misalignment detection and correction in a computer system utilizing a mass storage subsystem |
| US8010485B1 (en) | 2005-10-20 | 2011-08-30 | American Megatrends, Inc. | Background movement of data between nodes in a storage cluster |
| JP4856932B2 (en) | 2005-11-18 | 2012-01-18 | 株式会社日立製作所 | Storage system and data movement method |
| US7380074B2 (en) | 2005-11-22 | 2008-05-27 | International Business Machines Corporation | Selecting storage clusters to use to access storage |
| US8595313B2 (en) | 2005-11-29 | 2013-11-26 | Netapp. Inc. | Systems and method for simple scale-out storage clusters |
| JP2007272357A (en) | 2006-03-30 | 2007-10-18 | Toshiba Corp | Storage cluster system, data processing method, and program |
| US8250316B2 (en) | 2006-06-06 | 2012-08-21 | Seagate Technology Llc | Write caching random data and sequential data simultaneously |
| US7809919B2 (en) | 2006-07-26 | 2010-10-05 | Hewlett-Packard Development Company, L.P. | Automatic data block misalignment detection and correction in a computer system utilizing a hard disk subsystem |
| US9697253B2 (en) * | 2006-10-20 | 2017-07-04 | Oracle International Corporation | Consistent client-side cache |
| US7636832B2 (en) | 2006-10-26 | 2009-12-22 | Intel Corporation | I/O translation lookaside buffer performance |
| JP2008152464A (en) | 2006-12-15 | 2008-07-03 | Toshiba Corp | Storage device |
| US7685401B2 (en) * | 2006-12-27 | 2010-03-23 | Intel Corporation | Guest to host address translation for devices to access memory in a partitioned system |
| WO2008106686A1 (en) | 2007-03-01 | 2008-09-04 | Douglas Dumitru | Fast block device and methodology |
| US7882304B2 (en) | 2007-04-27 | 2011-02-01 | Netapp, Inc. | System and method for efficient updates of sequential block storage |
| US8849793B2 (en) * | 2007-06-05 | 2014-09-30 | SafePeak Technologies Ltd. | Devices for providing distributable middleware data proxy between application servers and database servers |
| US8782322B2 (en) | 2007-06-21 | 2014-07-15 | International Business Machines Corporation | Ranking of target server partitions for virtual server mobility operations |
| US20090006745A1 (en) | 2007-06-28 | 2009-01-01 | Cavallo Joseph S | Accessing snapshot data image of a data mirroring volume |
| US8411566B2 (en) | 2007-10-31 | 2013-04-02 | Smart Share Systems APS | Apparatus and a method for distributing bandwidth |
| US7856533B2 (en) | 2007-11-01 | 2010-12-21 | International Business Machines Corporation | Probabilistic method for performing memory prefetching |
| US7870351B2 (en) | 2007-11-15 | 2011-01-11 | Micron Technology, Inc. | System, apparatus, and method for modifying the order of memory accesses |
| US7873619B1 (en) | 2008-03-31 | 2011-01-18 | Emc Corporation | Managing metadata |
| US8566505B2 (en) | 2008-04-15 | 2013-10-22 | SMART Storage Systems, Inc. | Flash management using sequential techniques |
| US8051243B2 (en) | 2008-04-30 | 2011-11-01 | Hitachi, Ltd. | Free space utilization in tiered storage systems |
| GB2460841B (en) | 2008-06-10 | 2012-01-11 | Virtensys Ltd | Methods of providing access to I/O devices |
| TWI398770B (en) | 2008-07-08 | 2013-06-11 | Phison Electronics Corp | Data accessing method for flash memory and storage system and controller using the same |
| US8352519B2 (en) | 2008-07-31 | 2013-01-08 | Microsoft Corporation | Maintaining large random sample with semi-random append-only operations |
| US8838850B2 (en) | 2008-11-17 | 2014-09-16 | Violin Memory, Inc. | Cluster control protocol |
| US8160070B2 (en) | 2008-09-30 | 2012-04-17 | Gridiron Systems, Inc. | Fibre channel proxy |
| JP4809413B2 (en) | 2008-10-08 | 2011-11-09 | 株式会社日立製作所 | Storage system |
| US8214599B2 (en) | 2008-11-04 | 2012-07-03 | Gridiron Systems, Inc. | Storage device prefetch system using directed graph clusters |
| US8214608B2 (en) | 2008-11-04 | 2012-07-03 | Gridiron Systems, Inc. | Behavioral monitoring of storage access patterns |
| US8285961B2 (en) | 2008-11-13 | 2012-10-09 | Grid Iron Systems, Inc. | Dynamic performance virtualization for disk access |
| CN102257482B (en) | 2008-12-19 | 2015-06-03 | 惠普开发有限公司 | Redundant data storage for uniform read latency |
| KR101028929B1 (en) | 2008-12-31 | 2011-04-12 | 성균관대학교산학협력단 | Log block association distribution method for real-time system and flash memory device |
| US8161246B2 (en) | 2009-03-30 | 2012-04-17 | Via Technologies, Inc. | Prefetching of next physically sequential cache line after cache line that includes loaded page table entry |
| US8612718B2 (en) | 2009-08-19 | 2013-12-17 | Seagate Technology Llc | Mapping alignment |
Application events:
- 2010-06-04: US application US12/794,057 filed; granted as US8417895B1 (status: Expired - Fee Related)
- 2013-04-08: US application US13/858,533 filed; published as US20130232300A1 (status: Abandoned)
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8417895B1 (en) * | 2008-09-30 | 2013-04-09 | Violin Memory Inc. | System for maintaining coherency during offline changes to storage media |
| US20110231615A1 (en) * | 2010-03-19 | 2011-09-22 | Ober Robert E | Coherent storage network |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150286420A1 (en) * | 2014-04-03 | 2015-10-08 | ANALYSIS SOLUTION LLC, dba Gearbit | High-Speed Data Storage |
| US9619157B2 (en) * | 2014-04-03 | 2017-04-11 | Analysis Solution Llc | High-speed data storage |
Also Published As
| Publication number | Publication date |
|---|---|
| US8417895B1 (en) | 2013-04-09 |
Similar Documents
| Publication | Title |
|---|---|
| US8417895B1 (en) | System for maintaining coherency during offline changes to storage media |
| US11243708B2 (en) | Providing track format information when mirroring updated tracks from a primary storage system to a secondary storage system |
| US7788453B2 (en) | Redirection of storage access requests based on determining whether write caching is enabled |
| US9152339B1 (en) | Synchronization of asymmetric active-active, asynchronously-protected storage |
| US9244997B1 (en) | Asymmetric active-active access of asynchronously-protected data storage |
| US20230280944A1 (en) | Tiering Data Strategy for a Distributed Storage System |
| US9081842B1 (en) | Synchronous and asymmetric asynchronous active-active-active data access |
| US9087112B1 (en) | Consistency across snapshot shipping and continuous replication |
| US9383937B1 (en) | Journal tiering in a continuous data protection system using deduplication-based storage |
| US7370163B2 (en) | Adaptive cache engine for storage area network including systems and methods related thereto |
| US9910621B1 (en) | Backlogging I/O metadata utilizing counters to monitor write acknowledgements and no acknowledgements |
| US9367260B1 (en) | Dynamic replication system |
| US9110914B1 (en) | Continuous data protection using deduplication-based storage |
| US9916244B1 (en) | Techniques for maintaining cache coherence by atomically processing groups of storage commands |
| US9864683B1 (en) | Managing cache for improved data availability by associating cache pages with respective data objects |
| US9684576B1 (en) | Replication using a virtual distributed volume |
| US10235087B1 (en) | Distributing journal data over multiple journals |
| WO2014051639A1 (en) | Storage architecture for server flash and storage array operation |
| US11068299B1 (en) | Managing file system metadata using persistent cache |
| US11315028B2 (en) | Method and apparatus for increasing the accuracy of predicting future IO operations on a storage system |
| US20220019359A1 (en) | Alert Tracking In Storage |
| US8443150B1 (en) | Efficient reloading of data into cache resource |
| US20180052750A1 (en) | Online NVM format upgrade in a data storage system operating with active and standby memory controllers |
| US11188425B1 (en) | Snapshot metadata deduplication |
| EP4163780A1 (en) | Systems, methods, and devices for near storage elasticity |
Legal Events
| Code | Title | Description |
|---|---|---|
| AS | Assignment | Owner name: VIOLIN MEMORY INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:IGLESIA, ERIC DE LA;REEL/FRAME:030942/0964. Effective date: 20130718 |
| AS | Assignment | Owner name: SILICON VALLEY BANK, CALIFORNIA. Free format text: SECURITY INTEREST;ASSIGNOR:VIOLIN MEMORY, INC.;REEL/FRAME:033645/0834. Effective date: 20140827 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
| AS | Assignment | Owner name: VSIP HOLDINGS LLC (F/K/A VIOLIN SYSTEMS LLC (F/K/A VIOLIN MEMORY, INC.)), NEW YORK. Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:SILICON VALLEY BANK;REEL/FRAME:056600/0186. Effective date: 20210611 |