
WO2016175762A1 - Storage cache management - Google Patents


Info

Publication number
WO2016175762A1
WO2016175762A1 (PCT/US2015/027999)
Authority
WO
WIPO (PCT)
Prior art keywords
data
storage
volatile memory
storage system
memory module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2015/027999
Other languages
French (fr)
Inventor
Nathaniel S Deneui
Keith ORSAK
Joseph David BLACK
James Kenneth Yates
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Enterprise Development LP
Original Assignee
Hewlett Packard Enterprise Development LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development LP filed Critical Hewlett Packard Enterprise Development LP
Priority to PCT/US2015/027999 priority Critical patent/WO2016175762A1/en
Publication of WO2016175762A1 publication Critical patent/WO2016175762A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02 Addressing or allocation; Relocation
    • G06F12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0804 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, with main memory updating
    • G06F12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches, for peripheral storage systems, e.g. disk cache
    • G06F12/0868 Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/10 Providing a specific technical effect
    • G06F2212/1016 Performance improvement
    • G06F2212/22 Employing cache memory using specific memory technology
    • G06F2212/225 Hybrid cache memory, e.g. having both volatile and non-volatile portions
    • G06F2212/28 Using a specific disk cache architecture
    • G06F2212/283 Plural cache memories
    • G06F2212/284 Plural cache memories being distributed
    • G06F2212/31 Providing disk cache in a specific location of a storage system
    • G06F2212/312 In storage controller
    • G06F2212/313 In storage device
    • G06F2212/72 Details relating to flash memory management
    • G06F2212/7207 Management of metadata or control data
    • G06F2212/7209 Validity control, e.g. using flags, time stamps or sequence numbers

Definitions

  • Computer systems include host computers that communicate with storage systems.
  • the storage systems may include storage devices to store data for later retrieval.
  • the host computers may send commands to the storage systems to have the storage systems store data at the storage devices as well as retrieve data from the storage devices.
  • FIG. 1 depicts an example system comprising storage cache management in accordance with an example of the techniques of the present application
  • FIG. 2 depicts an example flow chart of processes for storage cache management in accordance with an example of the techniques of the present application
  • FIG. 3 depicts an example flow diagram of processes for storage cache management in accordance with an example of the techniques of the present application.
  • FIG. 4 depicts an example block diagram showing a non-transitory, computer-readable medium that stores instructions for storage cache management in accordance with an example of the techniques of the present application.
  • Computer systems may include a storage controller to interface between host computers and storage systems.
  • the storage controller may include non-volatile memory configured as write cache memory.
  • the storage controller may use the write cache memory to construct or build volumes or connect to the storage system in the form of physical disks of the storage devices.
  • the storage controller may use the write cache to redirect write commands to the write cache for buffering Input-Output (IO) processing to help improve write completion, write coalescing, and write sorting.
  • write completion may include a condition or situation where the storage controller has stored the data to persistent memory such as non-volatile memory without having committed the data to the storage device (end device) which may improve performance of a host application.
  • write coalescing may include a condition or situation of combining adjacent Logical Block Address (LBA) data into large flush requests, which may reduce the amount of IO activity and increase the size of the transfer, which may improve the performance of the storage device (end device).
  • write sorting may include a condition or situation where, for certain storage devices such as Hard Disk Drives (HDDs), IO activity may be sorted relative to LBA location to help minimize seek times and improve performance.
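The write coalescing and write sorting described above can be sketched as a small helper that merges adjacent-LBA writes into contiguous flush requests issued in ascending LBA order. This is a minimal illustration, not code from the application; the `pending` map of LBA to data block and the per-block granularity are assumptions:

```python
def build_flush_requests(pending):
    """Coalesce adjacent-LBA writes into contiguous flush requests.

    `pending` maps LBA -> data block. Returns a list of
    (start_lba, [blocks]) tuples sorted by LBA, so adjacent dirty
    blocks are flushed as one larger transfer (write coalescing)
    and requests are issued in ascending LBA order (write sorting).
    """
    requests = []
    run_start, run_blocks = None, []
    for lba in sorted(pending):
        if run_start is not None and lba == run_start + len(run_blocks):
            run_blocks.append(pending[lba])   # extend the current run
        else:
            if run_start is not None:
                requests.append((run_start, run_blocks))
            run_start, run_blocks = lba, [pending[lba]]
    if run_start is not None:
        requests.append((run_start, run_blocks))
    return requests
```

Out-of-order arrivals at LBAs 12 and 13 are merged into the run starting at 10, so five buffered writes become two transfers.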
  • the storage device of the storage system (end device) may comprise a volatile memory module configured as write cache memory.
  • the write cache may not be configured to support power loss or interruption which may result in exposing the data written to the write cache to possible loss if the computer system or components of the computer system experiences power loss or interruption.
  • the storage system performance may be improved by enabling or turning on the volatile memory write cache. For example, this may improve storage system write coalescing, storage system write sorting, additional staging area for read-modify-write operations, and enhanced performance for emulation type devices such as HDDs, Shingled Magnetic Recording (SMR) devices, and the like.
  • the storage controller may be configured to implement techniques for flushing or writing data to the storage devices such as HDDs which may help reduce the possibility of data loss. This may improve storage system performance by enabling the volatile memory write cache of the storage system and keeping all data in the storage controller cache until a subsequent cache sync command completion message is successfully returned from the storage system.
  • the storage controller cache may be configured without enabling the storage system write cache. In this case, in operation, the host computer may send or issue a write command to the storage controller to cause the storage controller to write data to a LBA X at the storage system, where X represents any LBA on the storage device.
  • the storage controller buffers the data associated with LBA X in the write cache of the storage controller and marks the metadata associated with the data for LBA X to the Dirty state. At some later time, based on particular criteria such as a sorted list or a time period, the storage controller determines to flush or write the data for LBA X to the storage device. In this case, the storage controller generates and sends a write command to write the data to LBA X at the storage device. At some point, the LBA X operation completes and the storage controller marks the metadata associated with the data for LBA X in its cache to the Clean state to allow the storage controller to reuse the space used by the data. However, it may be desirable for the storage controller to be able to provide proper management of power loss or interruption.
  • the storage controller may include a non-volatile memory module configured as a write cache (Tier 1) and the storage device may include a volatile memory module configured as a write cache (Tier 2).
  • the storage controller may be configured to receive from the host computer write commands to write data to locations such as LBA X at storage device. The storage controller buffers the data for LBA X in the non-volatile memory module configured as write cache and marks the associated metadata for LBA X to the Dirty state.
  • the storage controller determines to flush the data for LBA X to the storage device. In this case, the storage controller generates and sends a write command to write the data to LBA X to the storage device.
  • the LBA X operation completes and the storage controller marks the metadata associated with the data for LBA X in its cache to the Dirty-Flushed state.
  • the storage controller sends or issues to the storage system a cache sync command to cause the data to be flushed or written from the volatile memory configured as cache to the storage device.
  • the storage system sends to the storage controller a message indicating completion of the cache sync command.
  • the storage controller marks the associated metadata for LBA X to the Clean state, allowing it to be reused for subsequent write commands from the host computer or other purposes.
  • the two-tier cache technique allows the end-to-end throughput of the system to improve, since the storage system may now be able to employ or implement additional optimization operations using the write cache on the storage system.
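The two-tier flow above amounts to a per-LBA state machine in the controller's Tier-1 cache: Dirty on host write, Dirty-Flushed after the write is forwarded to the Tier-2 volatile cache, and Clean only once a cache sync completion is received. The sketch below is an illustrative model under those assumptions, not an implementation from the application; `send_write` stands in for whatever transport carries the write command to the storage system:

```python
DIRTY, DIRTY_FLUSHED, CLEAN = "Dirty", "Dirty-Flushed", "Clean"

class ControllerCache:
    """Tier-1 (non-volatile) write-cache metadata, per the flow above."""

    def __init__(self):
        self.meta = {}   # lba -> state
        self.data = {}   # lba -> buffered data block

    def host_write(self, lba, block):
        # Buffer in the non-volatile cache and mark Dirty.
        self.data[lba] = block
        self.meta[lba] = DIRTY

    def flush_to_device_cache(self, lba, send_write):
        # Forward to the storage system's volatile cache (Tier 2),
        # but keep our copy: mark Dirty-Flushed, not Clean.
        send_write(lba, self.data[lba])
        self.meta[lba] = DIRTY_FLUSHED

    def cache_sync_complete(self, lba):
        # Storage system confirmed the media commit; the space is
        # now safe to reuse.
        self.meta[lba] = CLEAN
        del self.data[lba]
```

The key design point the application stresses is that the controller's copy survives until `cache_sync_complete`, so a power loss between the flush and the sync never loses the only copy.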
  • Power loss or interruption conditions may be defined as events in which components of the system, such as the memory components of the storage controller and storage system, may no longer be able to maintain data without backing the data up to non-volatile storage.
  • Power restoration conditions may be defined as events in which power is restored to components of the system, such as the memory of the storage controller and storage system, so that they are able to maintain data without backing the data up to non-volatile storage.
  • the storage controller backs up the data to non-volatile memory and loads it back into volatile memory upon a subsequent power-on or power restoration event.
  • the storage controller may attempt to write to the storage device data corresponding to the LBAs that still have metadata marked Dirty or Dirty-Flushed state.
  • since the loss of power may have caused the data associated with the write command operation not to have been written to the storage device, the storage controller is configured to resend write commands to the storage system for the corresponding LBAs to ensure that the data is written or placed on the storage device. In this manner, these techniques may help provide data integrity and reduce any negative impact from loss of data that was stored in volatile memory configured as write cache on the storage system but not committed to the storage device.
  • the present application discloses techniques which may help increase storage system performance by allowing the storage system to use a volatile memory configured as write cache with a backup stored in a non-volatile memory configured as power loss safe cache managed by the storage controller.
  • the storage controller may be configured to retain or hold on to data that was flushed from the non-volatile memory configured as write cache until it receives a response or message from the storage system indicating completion of the data written to the storage device. In this manner, this technique may help ensure that the data written to the storage device was successfully completed.
  • these techniques may enable storage systems with volatile memory, which may encounter data loss under power loss conditions, to be utilized in a power-safe manner. In this manner, storage systems with volatile memory configured as write cache may improve performance and endurance.
  • the present application discloses techniques for storage cache management that include a storage controller configured to receive a write command to write data to an LBA at a storage device of a storage system, store the data to a non-volatile memory module, and mark metadata associated with the data as the Dirty state.
  • the storage controller is configured to send a write command to cause the data to be written to the volatile memory module at the storage system and mark the metadata associated with the data as Dirty-Flushed state.
  • the storage controller is configured to send a cache sync command to cause the data from the volatile memory module to be copied to the LBA at the storage device at the storage system.
  • the storage controller is configured to, upon receipt of a message indicating completion of the cache sync command, mark the metadata associated with the data as the Clean state to allow reuse of the memory space used by the data at the non-volatile memory module.
  • the storage controller is configured to, upon power interruption and subsequent power restoration, send a write command to cause the data to be written from the non-volatile memory module having metadata marked as Dirty state or Dirty-Flushed state to the volatile memory module at the storage system.
  • the data at the volatile memory module may not have been successfully written to the storage device.
  • the storage controller still maintains the data in the non-volatile memory module that has metadata in the Dirty or Dirty-Flushed state.
  • the storage controller sends write commands to the storage system to resend the data from the non-volatile memory module to the volatile memory module so that the data may be written to the storage device, which may not have occurred because of the power loss condition at the storage system.
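The power-restoration behavior above can be sketched as a replay pass over the controller's metadata: every entry not yet confirmed Clean is rewritten from the non-volatile copy. This is an illustrative sketch under the application's state names; the `meta`/`data` dictionaries and `send_write` callback are assumptions of this example:

```python
DIRTY, DIRTY_FLUSHED, CLEAN = "Dirty", "Dirty-Flushed", "Clean"

def replay_after_power_restore(meta, data, send_write):
    """Resend every cache entry whose media commit was unconfirmed.

    `meta` maps LBA -> metadata state and `data` maps LBA -> the block
    still held in the controller's non-volatile cache. Anything marked
    Dirty or Dirty-Flushed may have been lost from the storage system's
    volatile cache, so it is rewritten; entries remain Dirty-Flushed
    until a subsequent cache sync command completes.
    """
    replayed = []
    for lba in sorted(meta):
        if meta[lba] in (DIRTY, DIRTY_FLUSHED):
            send_write(lba, data[lba])        # repopulate Tier-2 cache
            meta[lba] = DIRTY_FLUSHED         # still awaiting cache sync
            replayed.append(lba)
    return replayed
```

Clean entries are skipped because their data is already known to be on the storage device, so the replay cost scales only with the unconfirmed working set.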
  • the present application may provide storage cache management techniques which may help improve the performance of storage systems.
  • these techniques may enable storage systems with volatile memory configured as write cache to increase throughput with reduced risk of data loss in the event of power loss or interruption.
  • these techniques may allow management of relatively low IO loads and allow storage systems to observe lower latencies for operations that may have blocked storage controller write operations directed to the cache of the storage controllers.
  • these techniques may allow systems with storage devices whose physical block size is relatively large compared to the logical block size to potentially utilize additional memory for read-modify-write operations. In one example, this may relate to devices with a larger physical block size relative to the logical block size. In order to access large blocks, the physical blocks may first be read, modified at the logical block level, then re-written to a physical block.
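The read-modify-write staging just described can be illustrated concretely. The 512-byte logical and 4 KiB physical sizes are illustrative assumptions (a common pairing for such devices, not values stated in the application), as are the `read_phys`/`write_phys` callbacks:

```python
def read_modify_write(read_phys, write_phys, lba, block,
                      logical=512, physical=4096):
    """Update one logical block inside a larger physical block.

    The full physical block is read into a staging buffer, the target
    logical block is modified in place, and the physical block is
    rewritten -- the staging operation described above.
    """
    ratio = physical // logical          # logical blocks per physical
    pba, slot = divmod(lba, ratio)       # physical block and offset
    buf = bytearray(read_phys(pba))      # read the full physical block
    buf[slot * logical:(slot + 1) * logical] = block   # modify one slot
    write_phys(pba, bytes(buf))          # write the whole block back
```

Each logical-block update costs one physical read plus one physical write, which is why extra staging memory (and the write cache) helps batch such updates.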
  • the techniques may allow storage systems with storage devices to perform bookkeeping operations and reduce impacts due to write buffering being utilized.
  • FIG. 1 depicts an example system 100 for storage cache management in accordance with an example of the techniques of the present application.
  • the system 100 includes a storage controller 104 configured to communicate with a host computer 102 and communicate with a storage system 110.
  • the host computer 102 can be any electronic device configured to perform data processing.
  • the host computer 102 may be implemented in hardware, software, or a combination thereof.
  • the host computer 102 is configured to communicate with storage controller 104.
  • host computer 102 may send to storage controller 104 write commands to write data to locations such as LBAs at storage device 116 of storage system 110.
  • host computer 102 may send to storage controller 104 read commands to read data from locations such as LBAs at storage device 116 of storage system 110.
  • the storage controller 104 includes a cache management module 106 and a non-volatile memory module 108.
  • the storage controller 104 may be any electronic device configured to perform data processing.
  • the storage controller 104 and the components of the storage controller such as cache management module 106 may be implemented in hardware, software, or a combination thereof.
  • the cache management module 106 may be configured to manage communications between host computer 102 and storage system 110.
  • the cache management module 106 may be configured to keep track of write commands from host computer 102 using metadata and state logic associated with the different states of the operation associated with the write commands. For example, as explained below in further detail, the cache management module 106 is configured to mark metadata associated with the data to be written to storage device 116 in accordance with three states: the Dirty state, the Dirty-Flushed state, and the Clean state.
  • the Dirty state represents a condition or state where storage controller 104 has written the data, received from a write command from host computer 102, to non-volatile memory module 108.
  • the Dirty-Flushed state represents a condition where storage controller 104 has caused storage system 110 to copy or write the data from non-volatile memory module 108 to volatile memory module 114.
  • the Clean state represents a condition where storage controller 104 has caused storage system 110 to write the data from volatile memory module 114 to storage device 116.
  • cache management module 106 may receive write commands to write data to LBAs at storage device 116 of storage system 110.
  • the cache management module 106 may store the data to non-volatile memory module 108 and mark metadata associated with the data as Dirty state.
  • the cache management module 106 may send a write command to storage management module 112 to cause the data to be written to volatile memory module 114 and mark the metadata associated with the data as Dirty-Flushed state.
  • the cache management module 106 may send, to storage management module 112, a cache sync command to cause the storage management module to copy the data from volatile memory module 114 to the LBA at storage device 116 at storage system 110.
  • the cache sync commands may be sent to the storage system at various times or conditions such as when the cache is full, when a stale timer has expired, when the device optimal write boundary is reached and the like.
  • the cache management module 106, upon receipt from storage system 110 of a message indicating completion of the cache sync command, may mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at non-volatile memory module 108.
  • the cache management module 106, upon power interruption and subsequent power restoration, sends to storage system 110 a write command to cause storage management module 112 to write data from non-volatile memory module 108 having metadata marked as Dirty state or Dirty-Flushed state to volatile memory module 114 at the storage system.
  • the data at volatile memory module 114 may not have been successfully written to storage device 116.
  • the storage controller 104 still maintains the data in non-volatile memory module 108 that has metadata in the Dirty or Dirty-Flushed state.
  • storage controller 104 sends write commands to storage system 110 to resend the data from non-volatile memory module 108 to volatile memory module 114 so that the data may be written to the storage device, which may not have occurred because of the power loss condition of the system.
  • the cache management module 106 may be configured to manage the timing of the generation and transmission of cache sync commands or operations directed to storage system 110. For example, cache management module 106 may be configured to send cache sync commands to storage system 110 based on a pattern of previous cache sync commands. In one example, the pattern of previous cache sync commands may be based on measurements or metrics of storage device 116 performance. In another example, cache management module 106 may be configured to send cache sync commands to storage system 110 based on the hit rate of the non-volatile memory module 108.
  • the hit rate of the non-volatile memory 108 may be based on the amount of data stuck in memory; for example, a relatively large amount of data stuck in the Dirty-Flushed metadata state may suggest the storage controller may not have enough memory space for future operations.
  • cache management module 106 may be configured to send cache sync commands to storage system 110 based on a pattern of write commands sent to storage system 110. For example, cache management module 106 may encounter highly sequential patterns, which may result in higher throughput and write coalescing, and may thereby generate cache sync commands at a higher rate to lower the amount of data that has metadata marked as the Dirty-Flushed state.
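The scheduling conditions named above (cache pressure from Dirty-Flushed data, a stale timer, and sequential write patterns) can be combined into a simple predicate. The threshold values below are illustrative assumptions, not figures from the application:

```python
def should_cache_sync(dirty_flushed_bytes, cache_capacity,
                      seconds_since_last_sync, sequential_ratio,
                      *, occupancy_limit=0.5, stale_limit=5.0,
                      seq_threshold=0.8):
    """Heuristic sketch for when to issue a cache sync command.

    Sync when too much non-volatile cache is stuck in the
    Dirty-Flushed state, when a stale timer has expired, or when a
    highly sequential write pattern makes frequent syncs profitable.
    """
    if dirty_flushed_bytes / cache_capacity >= occupancy_limit:
        return True     # cache pressure: reclaim Dirty-Flushed space
    if seconds_since_last_sync >= stale_limit:
        return True     # stale timer expired
    if sequential_ratio >= seq_threshold:
        return True     # sequential workload: sync at a higher rate
    return False
```

A real controller would also consider the device's optimal write boundary, mentioned earlier as another trigger; it is omitted here to keep the sketch minimal.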
  • the storage system 110 includes storage management module 112, a volatile memory module 114, and storage device 116.
  • the storage system 110 may be any electronic device configured to perform data processing.
  • the storage system 110 and the components of the storage system may be implemented in hardware, software, or a combination thereof.
  • storage management module 112 may be configured to manage communication with storage controller 104.
  • storage management module 112 may be configured to receive write commands from storage controller 104 to write data to volatile memory module 114 of storage system 110. In this case, storage management module 112 may mark metadata associated with the data as the Dirty state.
  • storage management module 112 may be configured to receive cache sync commands from storage controller 104 to write data from volatile memory module 114 to storage device 116. Upon successful completion of a cache sync command, storage management module 112 may respond to storage controller 104 with a message indicating completion of the cache sync command. In this case, storage management module 112 may mark metadata associated with the data as the Clean state. In this manner, the storage controller may now be able to reuse the memory space used by the data at non-volatile memory module 108.
  • the storage controller 104 is configured with non-volatile memory module 108, which may comprise any electronic means for storing data for later retrieval by storage controller 104 even after a power loss condition.
  • the non-volatile memory module 108 may be defined as memory that may be configured to store data or information and be later retrieved even after experiencing a power loss condition. That is, non-volatile memory module 108 may retain data even without power.
  • Some examples of non-volatile memory module 108 may include Flash memory, magnetic type computer devices, such as hard disk drives and the like.
  • the storage controller 104 may configure non-volatile memory module 108 to provide cache memory space at the storage controller separate from volatile memory module 114 to provide cache memory space at the storage system.
  • the cache memory space may be defined as memory space to store data that is frequently accessed by storage controller 104.
  • the cache memory space may include configuration of cache memory as write cache memory, such that data frequently written to storage is stored to the write cache memory.
  • the cache memory space may include configuration of the cache memory as read cache memory, such that data frequently read from storage is stored to the read cache memory.
  • the storage controller 104 may configure cache memory space as write cache, read cache or a combination thereof.
  • the storage system 110 may be configured with volatile memory module 114, which may comprise any electronic means for storing data for later retrieval by storage system 110 but which is not able to maintain the data after a power loss condition.
  • the volatile memory module 114 may be defined as memory that may store data or information but requires power to maintain the stored information. That is, the stored information is lost if module 114 experiences a power loss condition. Examples of volatile memory module 114 may include general Random Access Memory (RAM) and the like.
  • the storage system 110 may configure volatile memory module 114 to provide cache memory space at the storage system separate from non-volatile memory module 108, which provides cache memory space at the storage controller 104.
  • the storage system 110 may configure cache memory space as write cache, read cache, or a combination thereof.
  • the techniques of the present application may provide storage cache management techniques to help improve the performance of storage systems.
  • system 100 of Fig. 1 shows an example storage controller 104, and it should be understood that other configurations may be employed to practice the techniques of the present application.
  • system 100 may be configured with a plurality of storage controllers 104, a plurality of host computers 102, and a plurality of storage systems 110.
  • the components of system 100 may be implemented in hardware, software or a combination thereof.
  • the functionality of the components of system 100 may be implemented using Personal Computers (PCs), server computers, tablet computers, mobile computers and the like.
  • the storage controller 104, host computer 102 and storage system 1 10 may communicate using any communications means such as Fibre Channel, Ethernet and the like.
  • Fig. 2 depicts an example flow chart 200 of a process for storage cache management in accordance with an example of the techniques of the present application. To illustrate operation, it may be assumed that storage controller 104 is configured to communicate with host computer 102 and storage system 110 as described herein. It may be assumed that storage controller 104 includes cache management module 106 to implement the functionality described herein. It should be understood that the process depicted in Fig. 2 represents a generalized illustration, and that other processes may be added or existing processes may be removed, modified, or rearranged without departing from the scope and spirit of the present application.
  • the processes may represent instructions stored on a processor-readable storage medium that, when executed, may cause a processor to respond, to perform actions, to change states, and/or to make decisions.
  • the processes may represent functions and/or actions performed by functionally equivalent circuits like analog circuits, digital signal processing circuits, application specific integrated circuits (ASICs), or other hardware components associated with the system.
  • the flow charts are not intended to limit the implementation of the present application, but rather the flow charts illustrate functional information to design/fabricate circuits, generate software, or use a combination of hardware and software to perform the illustrated processes.
  • the process 200 may begin at block 202, where storage controller 104 receives a write command to write data to an LBA at storage device 116 of storage system 110.
  • cache management module 106 receives a write command from host computer 102. Processing proceeds to block 204.
  • storage controller 104 stores the data to non-volatile memory module 108 and marks metadata associated with the data as Dirty state.
  • storage controller 104 may send a write completion message to the host computer 102 after the metadata of the data is marked as Dirty state. Processing proceeds to block 206.
  • storage controller 104 sends a write command to storage management module 112 to cause the data to be written to volatile memory module 114 and marks the metadata associated with the data as Dirty-Flushed state. Processing proceeds to block 208.
  • storage controller 104 sends, to storage management module 112, a cache sync command to cause the storage management module to copy the data from volatile memory module 114 to the LBA at storage device 116 at storage system 110. Processing proceeds to block 210.
  • storage controller 104, upon receipt from storage system 110 of a message indicating completion of the cache sync command, marks the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at non-volatile memory module 108. Processing proceeds to block 212.
  • storage controller 104, upon power interruption and subsequent power restoration, sends to storage system 110 a write command to cause storage management module 112 to write data from non-volatile memory module 108 having metadata marked as Dirty state or Dirty-Flushed state to volatile memory module 114 at the storage system.
  • the data at volatile memory module 114 may not have been successfully written to storage device 116.
  • the storage controller 104 still maintains the data in non-volatile memory module 108 in the Dirty or Dirty-Flushed state.
  • storage controller 104 sends write commands to storage system 110 to resend the data from non-volatile memory module 108 to volatile memory module 114 so that the data may be written to storage device 116 because of the power loss condition of the system. Processing proceeds to the End block. In another example, processing may proceed back to block 202 to process further write commands from host computer 102.
  • the techniques of the present application may provide storage cache management techniques to help improve the performance of storage systems.
  • process 200 of Fig. 2 shows an example process and it should be understood that other configurations can be employed to practice the techniques of the present application.
  • process 200 may be configured to communicate with a plurality of storage devices.
  • FIG. 3 depicts an example flow diagram 300 of processes for storage cache management in accordance with an example of the techniques of the present application. To illustrate operation, it may be assumed that storage controller 104 is configured to communicate with host computer 102 and storage system 110 as described herein. It may be assumed that storage controller 104 includes cache management module 106 to implement the functionality described herein.
  • the process 300 may begin at block 301, where storage controller 104 receives, from host computer 102, a write command to write data to an LBA at storage device 116 of storage system 110. Processing proceeds to block 302.
  • storage controller 104 writes or stores the data to non-volatile memory module 108 and marks metadata associated with the data as Dirty state. Processing proceeds to block 303.
  • storage controller 104 sends to host computer 102 a response or message indicating completion of the write command. Processing proceeds to block 304.
  • storage controller 104 flushes the write command or request so as to cause the data stored at non-volatile memory module 108 to be copied or written to volatile memory module 114.
  • cache management module 106 sends, to storage system 110, a write command to cause the storage system to write the data from non-volatile memory module 108 to volatile memory module 114 and marks the metadata associated with the data as Dirty-Flushed state. Processing proceeds to block 305.
  • storage system 110 writes the data to volatile memory module 114 and marks the associated metadata from Clean state to Dirty state. In one example, after the data is written to volatile memory module 114 at storage system 110, the storage system marks the metadata at the storage system associated with the data to the Dirty state. Processing proceeds to block 306.
  • storage controller 104 sends a cache sync command to storage system 110.
  • the cache sync command causes storage management module 112 to copy the data from volatile memory module 114 to the LBA at storage device 116 at storage system 110.
  • storage controller 104 or cache management module 106 may send the cache sync command to storage system 110 based on at least one of a pattern of previous cache sync commands, a hit rate of non-volatile memory module 108, and a pattern of write commands sent to storage system 110. Processing proceeds to block 307.
  • storage system 110 writes the data from volatile memory module 114 to the LBA at storage device 116. Processing proceeds to block 308.
  • storage system 110 marks metadata associated with the data written to the LBA from the Dirty state to Clean state.
  • the storage system marks the metadata at storage system 110 associated with the data to the Clean state. Processing proceeds to block 309.
  • storage system 110 sends a message or response to storage controller 104 indicating completion of the cache sync command that was previously sent by storage controller 104 at block 306. Processing proceeds to block 310.
  • storage controller 104 marks metadata associated with the data from Dirty-Flushed state to Clean state. In this manner, marking the metadata associated with the data as Clean state may allow reuse of the memory space used by the data at non-volatile memory module 108. In one example, processing may proceed back to block 301 to continue processing other write commands from host computer 102.
  • the storage controller 104 may be configured to perform cache storage functions related to power loss or interruption and then power restoration.
  • storage controller 104 may send to storage system 110 write commands to cause storage management module 112 to write data from non-volatile memory module 108 having metadata marked as Dirty state or Dirty-Flushed state to volatile memory module 114 at the storage system.
  • storage controller 104 may back up the data and restore the data from non-volatile memory module 108 upon power restoration.
  • storage controller 104 may cause data at non-volatile memory module 108 having metadata marked as Dirty-Flushed state to be flushed or written to storage system 110. In this case, storage controller 104 may perform the process of blocks 304 through 310 to help ensure data integrity of the data at storage system 110.
  • these techniques may help ensure data integrity of the data at storage system 110.
  • the data at volatile memory module 114 may not have been successfully written to storage device 116.
  • the storage controller 104 still maintains the data in non-volatile memory module 108 which has metadata marked as the Dirty or Dirty-Flushed state.
  • storage controller 104 resends write commands to storage system 110 to cause the data from non-volatile memory module 108 to be written to volatile memory module 114 so that the data may be written to the storage device because of the power loss condition of the system.
  • the storage controller 104 may be configured to handle overlapping or multiple write commands from host computer 102.
  • host computer 102 may send to storage controller 104 a second write command with data that overlaps with data in non-volatile memory module 108 that has metadata associated with the data marked as Dirty-Flushed state.
  • storage controller 104 marks the metadata associated with the data as Dirty state and performs the process above of blocks 306 through 308.
  • the techniques of the present application may provide storage cache management techniques to help improve the performance of storage systems.
  • the process 300 of Fig. 3 shows an example process and it should be understood that other configurations can be employed to practice the techniques of the present application.
  • process 300 may be configured to communicate with a plurality of storage devices.
  • Fig. 4 is an example block diagram showing a non-transitory, computer- readable medium that stores code for operation in accordance with an example of the techniques of the present application.
  • the non-transitory, computer-readable medium is generally referred to by the reference number 400 and may be included in the system in relation to Fig. 1.
  • the non-transitory, computer-readable medium 400 may correspond to any typical storage device that stores computer-implemented instructions, such as programming code or the like.
  • the non-transitory, computer-readable medium 400 may include one or more of a non-volatile memory, a volatile memory, and/or one or more storage devices.
  • non-volatile memory examples include, but are not limited to, electrically erasable programmable read only memory (EEPROM) and read only memory (ROM).
  • volatile memory examples include, but are not limited to, static random access memory (SRAM), and dynamic random access memory (DRAM).
  • storage devices include, but are not limited to, hard disk drives, compact disc drives, digital versatile disc drives, optical drives, and flash memory devices.
  • a processor 402 generally retrieves and executes the instructions stored in the non-transitory, computer-readable medium 400 to operate the present techniques in accordance with an example.
  • the tangible, machine-readable medium 400 can be accessed by the processor 402 over a bus 404.
  • a first region 406 of the non- transitory, computer-readable medium 400 may include cache management module 106 functionality as described herein.
  • the cache management module 106 functionality may be implemented in hardware, software or a combination thereof.
  • block 408 provides store data instructions which may include instructions to store data to non-volatile memory module 108 and mark metadata associated with the data as Dirty state, as described herein.
  • block 410 provides send write command instructions which may include instructions to send a write command to write the data to volatile memory module 114 at storage system 110 and mark the metadata associated with the data as Dirty-Flushed state, as described herein.
  • block 412 provides send cache sync command instructions which may include instructions to send a cache sync command to copy the data from volatile memory module 114 to an LBA at storage device 116 at storage system 110, as described herein.
  • block 414 provides mark metadata instructions which may include instructions to mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at non-volatile memory module 108, as described herein.
  • block 416 provides send write command instructions which may include instructions to, upon power interruption and subsequent power restoration, send a write command to write data having metadata marked as Dirty state or Dirty-Flushed state, as described herein.
  • the software components can be stored in any order or configuration. For example, if the non-transitory, computer-readable medium 400 is a hard drive, the software components can be stored in non-contiguous, or even overlapping, sectors.
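The metadata state machine described throughout the bullets above — Dirty on host write, Dirty-Flushed after the data is forwarded to the storage system's volatile write cache, and Clean once the cache sync completes, with replay of any non-Clean lines after power restoration — can be sketched as a small model. This is an illustrative sketch only; the class and method names (`ControllerCacheModel`, `replay_after_power_loss`, and so on) are hypothetical and not part of the disclosure.

```python
from enum import Enum

class CacheState(Enum):
    CLEAN = "Clean"
    DIRTY = "Dirty"
    DIRTY_FLUSHED = "Dirty-Flushed"

class ControllerCacheModel:
    """Toy model of the storage controller's non-volatile write cache."""

    def __init__(self):
        self.lines = {}  # LBA -> (data, CacheState)

    def host_write(self, lba, data):
        # Blocks 202/204: store the data in the NV cache, mark metadata Dirty.
        self.lines[lba] = (data, CacheState.DIRTY)

    def flush_to_volatile(self, lba):
        # Block 206: write command sent to the storage system's volatile
        # cache; controller marks the metadata Dirty-Flushed.
        data, _ = self.lines[lba]
        self.lines[lba] = (data, CacheState.DIRTY_FLUSHED)

    def cache_sync_complete(self, lba):
        # Block 210: storage device commit confirmed; mark Clean so the
        # NV cache space used by the data can be reused.
        data, _ = self.lines[lba]
        self.lines[lba] = (data, CacheState.CLEAN)

    def replay_after_power_loss(self):
        # Block 212: any line still Dirty or Dirty-Flushed must be re-sent
        # to the storage system's volatile memory module.
        return [lba for lba, (_, s) in self.lines.items()
                if s in (CacheState.DIRTY, CacheState.DIRTY_FLUSHED)]
```

In use, a line walks Dirty → Dirty-Flushed → Clean, and only lines that have not reached Clean are replayed after a power interruption.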

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

In one example, techniques for storage cache management include receiving a write command to write data to a logical block address (LBA) at a storage device of a storage system, storing the data to a non-volatile memory module and marking metadata associated with the data as Dirty state, sending a write command to write the data to a volatile memory module at the storage system and marking the metadata associated with the data as Dirty-Flushed state, sending a cache sync command to copy the data from the volatile memory module to the LBA at the storage device at the storage system, and upon receipt, from the storage system, of a message indicating completion of the cache sync command, marking the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at the non-volatile memory module.

Description

STORAGE CACHE MANAGEMENT
BACKGROUND
[0001] Computer systems include host computers that communicate with storage systems. The storage systems may include storage devices to store data for later retrieval. The host computers may send commands to the storage systems to have the storage systems store data at the storage devices as well as retrieve data from the storage devices.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Examples are described in the following detailed description and in reference to the drawings, in which:
[0003] Fig. 1 depicts an example system comprising storage cache management in accordance with an example of the techniques of the present application;
[0004] Fig. 2 depicts an example flow chart of processes for storage cache management in accordance with an example of the techniques of the present application;
[0005] Fig. 3 depicts an example flow diagram of processes for storage cache management in accordance with an example of the techniques of the present application; and
[0006] Fig. 4 depicts an example block diagram showing a non-transitory, computer-readable medium that stores instructions for storage cache management in accordance with an example of the techniques of the present application.
DETAILED DESCRIPTION
[0007] Computer systems include host computers that communicate with storage systems. The storage systems may include storage devices to store data for later retrieval. The host computers may send commands to the storage systems to have the storage systems store data at the storage devices as well as retrieve data from the storage devices. Computer systems may include a storage controller to interface between host computers and storage systems. The storage controller may include non-volatile memory configured as write cache memory. The storage controller may use the write cache memory to construct or build volumes or connect to the storage system in the form of physical disks of the storage devices. The storage controller may use the write cache to redirect write commands to the write cache for buffering Input-Output (IO) processing to help improve write completion, write coalescing, and write sorting. In one example, write completion may include a condition or situation where the storage controller has stored the data to persistent memory such as non-volatile memory without having committed the data to the storage device (end device), which may improve performance of a host application. In another example, write coalescing may include a condition or situation of combining adjacent Logical Block Address (LBA) data into large flush requests, which may reduce the amount of IO activity and increase the size of the transfer, which may improve the performance of the storage device (end device). In one example, write sorting may include a condition or situation where, for certain storage devices such as Hard Disk Drives (HDDs), IO activity may be sorted relative to LBA location to help minimize seek times and improve performance. The storage device of the storage system (end device) may comprise a volatile memory module configured as write cache memory.
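The write sorting and write coalescing described above — sorting buffered writes by LBA and combining adjacent LBA data into larger flush requests — can be illustrated with a short sketch. The helper name `coalesce_writes` and the (LBA, block count) representation are illustrative assumptions, not part of the disclosure.

```python
def coalesce_writes(writes):
    """Sort (lba, block_count) write requests by LBA (write sorting) and
    merge adjacent or overlapping requests into larger contiguous flush
    ranges (write coalescing)."""
    merged = []
    for lba, count in sorted(writes):
        if merged and lba <= merged[-1][0] + merged[-1][1]:
            # Adjacent or overlapping with the previous range: extend it.
            prev_lba, prev_count = merged[-1]
            merged[-1] = (prev_lba, max(prev_count, lba + count - prev_lba))
        else:
            merged.append((lba, count))
    return merged
```

For example, buffered writes at LBAs 0-3, 4-7, 100-107, and 108-109 would coalesce into two flush requests, (0, 8) and (100, 10), reducing IO activity while increasing transfer size.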
The write cache may not be configured to support power loss or interruption, which may result in exposing the data written to the write cache to possible loss if the computer system or components of the computer system experience power loss or interruption. The storage system performance may be improved by enabling or turning on the volatile memory write cache. For example, this may improve storage system write coalescing, storage system write sorting, additional staging area for read-modify-write operations, and enhanced performance for emulation type devices such as HDDs, Shingled Magnetic Recording (SMR) devices, and the like.
[0008] The storage controller may be configured to implement techniques for flushing or writing data to the storage devices such as HDDs which may help reduce the possibility of data loss. This may improve storage system performance by enabling the volatile memory write cache of the storage system and keeping all data in the storage controller cache until a subsequent cache sync command completion message is successfully returned from the storage system. In one example, the storage controller cache may be configured without enabling the storage system write cache. In this case, in operation, the host computer may send or issue a write command to the storage controller to cause the storage controller to write data to an LBA X at the storage system, where X represents any LBA on the storage device. The storage controller buffers data associated with LBA X in the write cache of the storage controller and marks metadata associated with the data for LBA X to the Dirty state. At some later time, based on particular criteria such as a sorted list or a time period, the storage controller determines to flush or write the data for LBA X to the storage device. In this case, the storage controller generates and sends a write command to write the data to LBA X at the storage device. At some point, the LBA X operation completes and the storage controller marks the metadata associated with the data for LBA X in its cache to the Clean state to allow the storage controller to reuse the space used by the data. However, it may be desirable for the storage controller to be able to provide proper management of power loss or interruption.
[0009] To help improve storage performance of storage systems, in one example, techniques are disclosed that provide a two-tier cache configuration where the storage controller may include a non-volatile memory module configured as a write cache (Tier 1) and the storage device may include a volatile memory module configured as a write cache (Tier 2). In one example, the storage controller may be configured to receive from the host computer write commands to write data to locations such as LBA X at the storage device. The storage controller buffers the data for LBA X in the non-volatile memory module configured as write cache and marks the associated metadata for LBA X to the Dirty state. At some later time, based on particular criteria such as a sorted list (e.g., the sorting process described above) or a time period (e.g., a stale data timer), the storage controller determines to flush the data for LBA X to the storage device. In this case, the storage controller generates and sends a write command to write the data for LBA X to the storage device. At some point (e.g., upon completion of the write to secondary or volatile storage), the LBA X operation completes and the storage controller marks the metadata associated with the data for LBA X in its cache to the Dirty-Flushed state. At some future point, the storage controller sends or issues to the storage system a cache sync command to cause the data to be flushed or written from the volatile memory configured as cache to the storage device. Once the storage device completes the write operation, the storage system sends to the storage controller a message indicating completion of the cache sync command. At this point, the storage controller marks the associated metadata for LBA X to the Clean state, allowing it to be reused for subsequent write commands from the host computer or other purposes.
[00010] In this manner, the two-tier cache technique allows the end-to-end throughput of the system to improve, since the storage system may now be able to employ or implement additional optimization operations using the write cache on the storage system.
[00011] These techniques may allow storage systems to better handle power loss or interruption conditions. Power loss or interruption conditions may be defined as conditions or events such that components of the system, such as the memory components of the storage controller and storage system, may no longer be able to maintain data without having to back up the data to non-volatile storage. Power restoration conditions may be defined as conditions or events where power is restored to components of the system, such as the storage controller and storage system memory, so as to be able to maintain data without having to back up the data to non-volatile storage. For example, in the event of a power loss or interruption, the storage controller backs up the data to non-volatile memory and loads it back upon a subsequent power on or power restoration event. In this case, the storage controller may attempt to write to the storage device the data corresponding to the LBAs that still have metadata marked in the Dirty or Dirty-Flushed state. For the storage system, since the loss of power may have caused the data associated with the write command operation to not have been written to the storage device, the storage controller is configured to resend write commands to the storage system for the corresponding LBAs to ensure that the data is written or placed on the storage device. In this manner, these techniques may help provide data integrity and reduce any negative impact of loss of data that was stored on volatile memory configured as write cache on the storage system but not committed to the storage device.
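The power-restoration behavior described above — re-sending every LBA whose metadata is still Dirty or Dirty-Flushed, then forcing a commit to the storage device — can be sketched as a recovery routine. This is a minimal sketch, assuming the non-volatile cache is a dict of LBA to (data, state) pairs; the callbacks `send_write` and `send_cache_sync` are hypothetical stand-ins for the commands sent to the storage system.

```python
DIRTY, DIRTY_FLUSHED, CLEAN = "Dirty", "Dirty-Flushed", "Clean"

def recover_after_power_restore(nv_cache, send_write, send_cache_sync):
    """On power restoration, re-send every LBA whose metadata is still
    Dirty or Dirty-Flushed, then issue a cache sync so the storage
    system commits the replayed data to the storage device.

    nv_cache: dict mapping LBA -> (data, state), restored from the
    non-volatile memory module after the power loss.
    """
    replayed = []
    for lba, (data, state) in sorted(nv_cache.items()):
        if state in (DIRTY, DIRTY_FLUSHED):
            send_write(lba, data)              # refill the volatile write cache
            nv_cache[lba] = (data, DIRTY_FLUSHED)
            replayed.append(lba)
    if replayed:
        send_cache_sync()                      # force commit to the storage device
    return replayed
```

Clean lines are skipped because their data was already confirmed on the storage device before the interruption.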
[00012] In some examples, the present application discloses techniques which may help increase storage system performance by allowing the storage system to use a volatile memory configured as write cache with a backup stored in a non-volatile memory configured as a power loss safe cache managed by the storage controller. In one example, the storage controller may be configured to retain or hold on to data that was flushed from the non-volatile memory configured as write cache until it receives a response or message from the storage system indicating completion of the data written to the storage device. In this manner, this technique may help ensure that the write of the data to the storage device was successfully completed. In another example, these techniques may enable storage systems with volatile memory, which may encounter data loss in power loss conditions, to be utilized in a power safe manner. In this manner, storage systems with volatile memory configured as write cache may improve performance and endurance.
[00013] In one example, the present application discloses techniques for storage cache management that include a storage controller configured to receive a write command to write data to an LBA at a storage device of a storage system, store the data to a non-volatile memory module, and mark metadata associated with the data as Dirty state. The storage controller is configured to send a write command to cause the data to be written to the volatile memory module at the storage system and mark the metadata associated with the data as Dirty-Flushed state. The storage controller is configured to send a cache sync command to cause the data from the volatile memory module to be copied to the LBA at the storage device at the storage system. The storage controller is configured to, upon receipt of a message indicating completion of the cache sync command, mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at the non-volatile memory module. The storage controller is configured to, upon power interruption and subsequent power restoration, send a write command to cause the data to be written from the non-volatile memory module having metadata marked as Dirty state or Dirty-Flushed state to the volatile memory module at the storage system. In other words, in the event of a power loss condition, the data at the volatile memory module may not have been successfully written to the storage device. The storage controller still maintains the data in the non-volatile memory module that has metadata of the data in the Dirty or Dirty-Flushed state. To help ensure that the data at the volatile memory module is successfully written to the storage device, the storage controller sends write commands to the storage system to resend the data from the non-volatile memory module to the volatile memory module so that the data may be written to the storage device because of the power loss condition of the storage system.
[00014] In this manner, in some examples, the present application may provide storage cache management techniques which may help improve the performance of storage systems. For example, these techniques may enable storage systems with volatile memory configured as write cache to increase throughput with reduced risk of data loss in the event of power loss or interruption. In another example, these techniques may allow management of relatively low IO loads and allow storage systems to observe lower latencies for operations that may have blocked storage controller write operations directed to the cache of the storage controllers. In one example, these techniques may allow systems with storage devices having relatively large physical block size compared to logical block size to utilize additional memory for read-modify-write operations. In one example, this may relate to devices with larger physical block size relative to logical block size: in order to access such large blocks, the physical blocks may first be read, modified at the logical block level, and then re-written to a physical block. In another example, the techniques may allow storage systems with storage devices to perform bookkeeping operations and reduce impacts due to write buffering being utilized.
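The read-modify-write sequence in paragraph [00014] — read a whole physical block, modify only the logical region within it, and write the whole block back — can be sketched as follows. The callback names `read_phys` and `write_phys` are illustrative stand-ins for device IO, not part of the disclosure.

```python
def read_modify_write(read_phys, write_phys, phys_block, phys_size,
                      logical_offset, logical_data):
    """Read-modify-write for a device whose physical block is larger than
    the logical block: read the full physical block, patch the logical
    region in a staging buffer, and write the full block back."""
    buf = bytearray(read_phys(phys_block))        # read the whole physical block
    assert len(buf) == phys_size
    buf[logical_offset:logical_offset + len(logical_data)] = logical_data
    write_phys(phys_block, bytes(buf))            # re-write the whole block
    return bytes(buf)
```

For instance, updating two bytes inside an 8-byte physical block reads 8 bytes, patches 2, and writes 8 back, which is why extra staging memory helps such devices.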
[00015] Fig. 1 depicts an example system 100 for storage cache management in accordance with an example of the techniques of the present application. The system 100 includes a storage controller 104 configured to communicate with a host computer 102 and communicate with a storage system 110.
[00016] The host computer 102 can be any electronic device configured to perform data processing. The host computer 102 may be implemented in hardware, software, or a combination thereof. The host computer 102 is configured to communicate with storage controller 104. For example, host computer 102 may send to storage controller 104 write commands to write data to locations such as LBAs at storage device 116 of storage system 110. In another example, host computer 102 may send to storage controller 104 read commands to read data from locations such as LBAs at storage device 116 of storage system 110. The techniques of the present application are described in the context of reading and writing data or groups of data such as data blocks at LBA address locations at a storage device, but it should be understood that the techniques may be applicable to other location addressing techniques such as physical address blocks, Logical Unit Number (LUN), and the like.
[00017] The storage controller 104 includes a cache management module 106 and a non-volatile memory module 108. The storage controller 104 may be any electronic device configured to perform data processing. The storage controller 104 and the components of the storage controller such as cache management module 106 may be implemented in hardware, software, or a combination thereof.
[00018] The cache management module 106 may be configured to manage communications between host computer 102 and storage system 110. The cache management module 106 may be configured to keep track of write commands from host computer 102 using metadata and state logic associated with the different states of the operation associated with the write commands. For example, as explained below in further detail, the cache management module 106 is configured to mark metadata associated with the data to be written to storage device 116 in accordance with three states: Dirty state, Dirty-Flushed state, and Clean state. The Dirty state represents a condition or state where storage controller 104 has written the data, received from a write command from host computer 102, to non-volatile memory module 108. The Dirty-Flushed state represents a condition where storage controller 104 has caused storage system 110 to copy or write the data from non-volatile memory module 108 to volatile memory module 114. The Clean state represents a condition where storage controller 104 has caused storage system 110 to write the data from volatile memory module 114 to storage device 116.
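The three metadata states and the transitions between them implied by the description can be captured in a small transition table. This is an illustrative model under stated assumptions: the Dirty-Flushed to Dirty transition comes from the overlapping-write case described in the document, and any transition not listed is treated as invalid in this sketch only.

```python
# Metadata state transitions implied by the description (assumption:
# transitions not listed here are invalid in this model).
TRANSITIONS = {
    "Clean": {"Dirty"},                  # host write buffered in the NV cache
    "Dirty": {"Dirty-Flushed"},          # data copied to the volatile write cache
    "Dirty-Flushed": {"Clean",           # cache sync completion confirmed
                      "Dirty"},          # overlapping host write re-dirties the line
}

def can_transition(src, dst):
    """Return True if dst is a legal next metadata state after src."""
    return dst in TRANSITIONS.get(src, set())
```

Notably, a line can never move straight from Dirty to Clean: the Dirty-Flushed stage is what lets the controller keep its non-volatile copy until the storage device commit is confirmed.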
[00019] In one example, in operation, cache management module 106 may receive write commands to write data to LBAs at storage device 116 of storage system 110. The cache management module 106 may store the data to non-volatile memory module 108 and mark metadata associated with the data as Dirty state. The cache management module 106 may send a write command to storage management module 112 to cause the data to be written to volatile memory module 114 and mark the metadata associated with the data as Dirty-Flushed state. The cache management module 106 may send, to storage management module 112, a cache sync command to cause the storage management module to copy the data from volatile memory module 114 to the LBA at storage device 116 at storage system 110. In one example, the cache sync commands may be sent to the storage system at various times or conditions, such as when the cache is full, when a stale timer has expired, when the device optimal write boundary is reached, and the like. The cache management module 106, upon receipt, from storage system 110, of a message indicating completion of the cache sync command, may mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at non-volatile memory module 108. The cache management module 106 may, upon power interruption and subsequent power restoration, send to storage system 110 a write command to cause storage management module 112 to write data from non-volatile memory module 108 having metadata marked as Dirty state or Dirty-Flushed state to volatile memory module 114 at the storage system.
[00020] In other words, in the event of a power loss condition, the data at volatile memory module 114 may not have been successfully written to storage device 116. The storage controller 104 still maintains the data in non-volatile memory module 108 that has metadata of the data in the Dirty or Dirty-Flushed state. To help ensure that the data at volatile memory module 114 is successfully written to storage device 116, storage controller 104 sends write commands to storage system 110 to resend the data from non-volatile memory module 108 to volatile memory module 114 so that the data may be written to the storage device because of the power loss condition of the system.
[00021] The cache management module 106 may be configured to manage the timing of the generation and transmission of cache sync commands or operations directed to storage system 110. For example, cache management module 106 may be configured to send cache sync commands to storage system 110 based on a pattern of previous cache sync commands. In one example, the pattern of previous cache sync commands may be based on a measurement or metrics of storage device 116 performance. In another example, cache management module 106 may be configured to send cache sync commands to storage system 110 based on the hit rate of non-volatile memory module 108. For example, the hit rate of non-volatile memory module 108 may be based on the amount of data stuck in memory, such as a relatively large amount of data stuck in the Dirty-Flushed metadata state, which may suggest the storage controller may not have enough memory space for future operations. In another example, cache management module 106 may be configured to send cache sync commands to storage system 110 based on a pattern of write commands sent to storage system 110. For example, cache management module 106 may encounter highly sequential patterns, which may result in higher throughput and write coalescing, and may thereby generate cache sync commands at a higher rate to lower the amount of data having metadata marked as the Dirty-Flushed state.
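Two of the cache sync timing criteria above — too much data stuck in the Dirty-Flushed state, and a highly sequential write pattern — can be combined into a toy policy. The function name, parameters, and threshold values here are illustrative assumptions; the document does not specify concrete thresholds.

```python
def should_send_cache_sync(dirty_flushed_bytes, cache_capacity_bytes,
                           sequential_run_length,
                           occupancy_threshold=0.5, seq_threshold=64):
    """Decide whether to issue a cache sync command now.

    Fires when a large share of the NV cache is stuck in the
    Dirty-Flushed state (space pressure), or when the recent write
    pattern is highly sequential (cheap to commit at a higher rate).
    """
    occupancy = dirty_flushed_bytes / cache_capacity_bytes
    return (occupancy >= occupancy_threshold
            or sequential_run_length >= seq_threshold)
```

A real controller would also fold in the pattern of previous cache sync completions (a measurement of storage device performance), which this sketch omits.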
[00022] The storage system 1 10 includes storage management module 1 12, a volatile memory module 1 14, and storage device 1 16. The storage system 1 10 may be any electronic device configured to perform data processing. The storage system 1 10 and the components of the storage system may be implemented in hardware, software, or a combination thereof.
[00023] In one example, storage management module 112 may be configured to manage communication with storage controller 104. For example, storage management module 112 may be configured to receive write commands from storage controller 104 to write data to volatile memory module 114 of storage system 110. In this case, storage management module 112 may mark metadata associated with the data as the Dirty state. In another example, storage management module 112 may be configured to receive cache sync commands from storage controller 104 to write data from volatile memory module 114 to storage device 116. Upon successful completion of the cache sync command, storage management module 112 may respond to storage controller 104 with a message indicating completion of the cache sync command. In this case, storage management module 112 may mark metadata associated with the data as the Clean state. In this manner, storage controller 104 may now be able to reuse the memory space used by the data at non-volatile memory module 108.
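As one way to picture the behavior described in paragraph [00023], the storage-system side can be modeled as a small state machine: write commands mark data Dirty in the volatile cache, and a cache sync copies it to the storage device, marks it Clean, and acknowledges the controller. The class and member names below are assumptions for illustration; the application does not specify an implementation.

```python
# Assumed-name sketch of the storage-system side of the protocol.
class StorageSystem:
    def __init__(self):
        self.volatile_cache = {}  # volatile memory module (lost on power loss)
        self.device = {}          # storage device (persistent)
        self.state = {}           # lba -> "Dirty" or "Clean"

    def write(self, lba, data):
        # A write command from the controller lands in the volatile cache.
        self.volatile_cache[lba] = data
        self.state[lba] = "Dirty"

    def cache_sync(self):
        # A cache sync persists the volatile cache to the storage device.
        for lba, data in self.volatile_cache.items():
            self.device[lba] = data
            self.state[lba] = "Clean"
        return "sync-complete"    # completion message to the controller
```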
[00024] The storage controller 104 is configured with non-volatile memory module 108, which may comprise any electronic means for storing data for later retrieval by storage controller 104 even after a power loss condition. The non-volatile memory module 108 may be defined as memory that may be configured to store data or information that may be retrieved later even after experiencing a power loss condition. That is, non-volatile memory module 108 may retain data even without power. Some examples of non-volatile memory module 108 may include Flash memory, magnetic type computer devices such as hard disk drives, and the like. The storage controller 104 may configure non-volatile memory module 108 to provide cache memory space at the storage controller separate from volatile memory module 114, which provides cache memory space at the storage system. The cache memory space may be defined as memory space to store data that is frequently accessed by storage controller 104. In one example, the cache memory space may include configuration of cache memory as write cache memory such that data frequently written to storage is stored to the write cache memory. In another example, the cache memory space may include configuration of the cache memory as read cache memory such that data frequently read from storage is stored to the read cache memory. The storage controller 104 may configure cache memory space as write cache, read cache, or a combination thereof.
[00025] The storage system 110 may be configured with volatile memory module 114, which may comprise any electronic means for storing data for later retrieval by storage system 110 but which is not able to maintain the data after a power loss condition. The volatile memory module 114 may be defined as memory that may store data or information but requires power to maintain the stored information. That is, the stored information is lost if module 114 experiences a power loss condition. Examples of volatile memory module 114 may include general Random Access Memory (RAM) and the like. The storage system 110 may configure volatile memory module 114 to provide cache memory space at the storage system separate from non-volatile memory module 108, which provides cache memory space at storage controller 104. The storage system 110 may configure cache memory space as write cache, read cache, or a combination thereof.
[00026] In this manner, in some examples, the techniques of the present application may provide storage cache management techniques to help improve the performance of storage systems.
[00027] The system 100 of Fig. 1 shows an example storage controller 104, and it should be understood that other configurations may be employed to practice the techniques of the present application. For example, system 100 may be configured to communicate with a plurality of storage controllers 104, a plurality of host computers 102, and a plurality of storage systems 110. The components of system 100 may be implemented in hardware, software, or a combination thereof. For example, the functionality of the components of system 100 may be implemented using Personal Computers (PCs), server computers, tablet computers, mobile computers, and the like. The storage controller 104, host computer 102, and storage system 110 may communicate using any communications means, such as Fibre Channel, Ethernet, and the like.
[00028] Fig. 2 depicts an example flow chart 200 of a process for storage cache management in accordance with an example of the techniques of the present application. To illustrate operation, it may be assumed that storage controller 104 is configured to communicate with host computer 102 and storage system 110 as described herein. It may be assumed that storage controller 104 includes cache management module 106 to implement the functionality described herein.

[00029] It should be understood that the process depicted in Fig. 2 represents a generalized illustration, and that other processes may be added or existing processes may be removed, modified, or rearranged without departing from the scope and spirit of the present application. In addition, it should be understood that the processes may represent instructions stored on a processor-readable storage medium that, when executed, may cause a processor to respond, to perform actions, to change states, and/or to make decisions. Alternatively, the processes may represent functions and/or actions performed by functionally equivalent circuits such as analog circuits, digital signal processing circuits, application specific integrated circuits (ASICs), or other hardware components associated with the system. Furthermore, the flow charts are not intended to limit the implementation of the present application; rather, the flow charts illustrate functional information that may be used to design/fabricate circuits, generate software, or use a combination of hardware and software to perform the illustrated processes.
[00030] The process 200 may begin at block 202, where storage controller 104 receives a write command to write data to an LBA at storage device 116 of storage system 110. In one example, cache management module 106 receives a write command from host computer 102. Processing proceeds to block 204.
[00031] At block 204, storage controller 104 stores the data to non-volatile memory module 108 and marks metadata associated with the data as Dirty state. In one example, storage controller 104 may send a write completion message to the host computer 102 after the metadata of the data is marked as Dirty state. Processing proceeds to block 206.
[00032] At block 206, storage controller 104 sends a write command to storage management module 112 to cause the data to be written to volatile memory module 114 and marks the metadata associated with the data as Dirty-Flushed state. Processing proceeds to block 208.
[00033] At block 208, storage controller 104 sends, to storage management module 112, a cache sync command to cause the storage management module to copy the data from volatile memory module 114 to the LBA at storage device 116 at storage system 110. Processing proceeds to block 210.
[00034] At block 210, storage controller 104, upon receipt, from storage system 110, of a message indicating completion of the cache sync command, marks the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at non-volatile memory module 108. Processing proceeds to block 212.
[00035] At block 212, storage controller 104, upon power interruption and subsequent power restoration, sends to storage system 110 a write command to cause storage management module 112 to write data from non-volatile memory module 108 having metadata marked as Dirty state or Dirty-Flushed state to volatile memory module 114 at the storage system. In other words, in the event of a power loss condition, the data at volatile memory module 114 may not have been successfully written to storage device 116. The storage controller 104 still maintains the data in non-volatile memory module 108 in the Dirty or Dirty-Flushed state. To help ensure that the data is successfully written to storage device 116, storage controller 104 sends write commands to storage system 110 to resend the data from non-volatile memory module 108 to volatile memory module 114 so that the data lost because of the power loss condition may be written to storage device 116. Processing proceeds to the End block. In another example, processing may proceed back to block 202 to process further write commands from host computer 102.
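The flow of blocks 202 through 212 may be sketched as follows. This is a minimal model under assumed names (the application specifies no code); the `FakeStorageSystem` stub stands in for storage management module 112.

```python
# Illustrative sketch of the controller-side flow of blocks 202-212.
# All class and member names are assumptions for illustration only.

class FakeStorageSystem:
    def __init__(self):
        self.volatile = {}   # volatile cache, lost on power loss
        self.device = {}     # persistent storage device

    def write(self, lba, data):
        self.volatile[lba] = data

    def cache_sync(self):
        self.device.update(self.volatile)
        return "sync-complete"

class StorageController:
    def __init__(self, storage_system):
        self.nv_cache = {}   # non-volatile module 108, survives power loss
        self.meta = {}       # lba -> "Dirty" | "Dirty-Flushed" | "Clean"
        self.storage = storage_system

    def handle_write(self, lba, data):
        self.nv_cache[lba] = data
        self.meta[lba] = "Dirty"            # block 204
        self.storage.write(lba, data)
        self.meta[lba] = "Dirty-Flushed"    # block 206
        if self.storage.cache_sync() == "sync-complete":
            self.meta[lba] = "Clean"        # blocks 208-210

    def recover_after_power_restore(self):
        # Block 212: resend anything not known to have reached the device.
        for lba, state in self.meta.items():
            if state in ("Dirty", "Dirty-Flushed"):
                self.storage.write(lba, self.nv_cache[lba])
```

Note that the non-volatile cache entry is only reusable once it reaches the Clean state, which happens strictly after the completion message for the cache sync arrives.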
[00036] In this manner, in some examples, the techniques of the present application may provide storage cache management techniques to help improve the performance of storage systems.
[00037] The process 200 of Fig. 2 shows an example process and it should be understood that other configurations can be employed to practice the techniques of the present application. For example, process 200 may be configured to communicate with a plurality of storage devices.
[00038] Fig. 3 depicts an example flow diagram 300 of processes for storage cache management in accordance with an example of the techniques of the present application. To illustrate operation, it may be assumed that storage controller 104 is configured to communicate with host computer 102 and storage system 110 as described herein. It may be assumed that storage controller 104 includes cache management module 106 to implement the functionality described herein.
[00039] The process 300 may begin at block 301, where storage controller 104 receives, from host computer 102, a write command to write data to an LBA at storage device 116 of storage system 110. Processing proceeds to block 302.
[00040] At block 302, storage controller 104 writes or stores the data to non-volatile memory module 108 and marks metadata associated with the data as Dirty state. Processing proceeds to block 303.
[00041] At block 303, storage controller 104 sends to host computer 102 a response or message indicating completion of the write command. Processing proceeds to block 304.

[00042] At block 304, storage controller 104 flushes the write command or request to cause the data stored at non-volatile memory module 108 to be copied or written to volatile memory module 114. In one example, cache management module 106 sends, to storage system 110, a write command to cause the storage system to write the data from non-volatile memory module 108 to volatile memory module 114 and marks the metadata associated with the data as Dirty-Flushed state. Processing proceeds to block 305.
[00043] At block 305, storage system 110 writes the data to volatile memory module 114 and marks the metadata associated with the data from the Clean state to the Dirty state. In one example, after the data is written to volatile memory module 114 at storage system 110, the storage system marks the metadata at the storage system associated with the data as the Dirty state. Processing proceeds to block 306.
[00044] At block 306, storage controller 104 sends a cache sync command to storage system 110. In one example, cache management module 106 sends a cache sync command to cause storage management module 112 to copy the data from volatile memory module 114 to the LBA at storage device 116 at storage system 110. In another example, storage controller 104 or cache management module 106 may send the cache sync command to storage system 110 based on at least one of a pattern of previous cache sync commands, hit rate of non-volatile memory module 108, and a pattern of write commands sent to storage system 110. Processing proceeds to block 307.
[00045] At block 307, storage system 110 writes the data from volatile memory module 114 to the LBA at storage device 116. Processing proceeds to block 308.
[00046] At block 308, storage system 110 marks the metadata associated with the data written to the LBA from the Dirty state to the Clean state. In one example, after storage system 110 writes the data from volatile memory module 114 to storage device 116, the storage system marks the metadata at storage system 110 associated with the data as the Clean state. Processing proceeds to block 309.
[00047] At block 309, storage system 110 sends a message or response to storage controller 104 indicating completion of the cache sync command that was previously sent by storage controller 104 at block 306. Processing proceeds to block 310.
[00048] At block 310, storage controller 104 marks the metadata associated with the data from the Dirty-Flushed state to the Clean state. In this manner, marking the metadata associated with the data as Clean state may allow reuse of the memory space used by the data at non-volatile memory module 108. In one example, processing may proceed back to block 301 to continue processing other write commands from host computer 102.

[00049] The storage controller 104 may be configured to perform cache storage functions related to power loss or interruption and subsequent power restoration. In one example, upon power interruption and subsequent power restoration conditions, storage controller 104 may send to storage system 110 write commands to cause storage management module 112 to write data from non-volatile memory module 108 having metadata marked as Dirty state or Dirty-Flushed state to volatile memory module 114 at the storage system. In another example, upon power loss or interruption, storage controller 104 may back up the data and restore the data from the non-volatile memory module upon power restoration. In another example, in the event of power loss or interruption during the process above between blocks 302 and 310, storage controller 104 may cause data at non-volatile memory module 108 having metadata marked as Dirty-Flushed state to be flushed or written to storage system 110. In this case, storage controller 104 may perform the process of blocks 304 through 310 to help ensure data integrity of the data at storage system 110.
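The power-restoration behavior of paragraph [00049] reduces to replaying every non-volatile cache entry not yet known to be on the storage device. A minimal sketch, with assumed names (the application specifies no code):

```python
# Hypothetical sketch of the power-restoration replay: any entry still
# marked Dirty or Dirty-Flushed in the non-volatile cache is rewritten to
# the storage system so a later cache sync can persist it. Names are
# illustrative assumptions only.

def replay_after_power_restore(nv_cache, meta, send_write):
    """send_write(lba, data) resends one write command to the storage system."""
    for lba, state in meta.items():
        if state in ("Dirty", "Dirty-Flushed"):
            send_write(lba, nv_cache[lba])
```

Entries already marked Clean are skipped, since the completion message for their cache sync confirmed they reached the storage device.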
[00050] In another example, these techniques may help ensure data integrity of the data at storage system 110. For example, in the event of a power loss condition, the data at volatile memory module 114 may not have been successfully written to storage device 116. The storage controller 104 still maintains the data in non-volatile memory module 108 whose metadata is marked as the Dirty or Dirty-Flushed state. To help ensure that the data is successfully written to storage device 116, storage controller 104 resends write commands to storage system 110 to cause the data from non-volatile memory module 108 to be written to volatile memory module 114 so that the data lost because of the power loss condition may be written to the storage device.
[00051] The storage controller 104 may be configured to handle overlapping or multiple write commands from host computer 102. In one example, host computer 102 may send to storage controller 104 a second write command with data that overlaps data in non-volatile memory module 108 whose metadata is marked as the Dirty-Flushed state. In this case, storage controller 104 marks the metadata associated with the data as the Dirty state and performs the process above of blocks 306 through 308.
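The overlapping-write handling of paragraph [00051] may be sketched as follows; the function and variable names are assumptions for illustration only.

```python
# Hypothetical sketch of overlapping-write handling: a second host write
# that overlaps Dirty-Flushed data moves the metadata back to Dirty, so
# the flush and cache sync steps run again for the new data.

def handle_overlapping_write(nv_cache, meta, lba, new_data):
    nv_cache[lba] = new_data
    # Dirty-Flushed means the old data was sent to the storage system but
    # is not yet confirmed on the storage device; marking Dirty forces the
    # sync cycle to repeat with the new data.
    if meta.get(lba) == "Dirty-Flushed":
        meta[lba] = "Dirty"
```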
[00052] In this manner, in some examples, the techniques of the present application may provide storage cache management techniques to help improve the performance of storage systems.

[00053] The process 300 of Fig. 3 shows an example process and it should be understood that other configurations can be employed to practice the techniques of the present application. For example, process 300 may be configured to communicate with a plurality of storage devices.
[00054] Fig. 4 is an example block diagram showing a non-transitory, computer-readable medium that stores code for operation in accordance with an example of the techniques of the present application. The non-transitory, computer-readable medium is generally referred to by the reference number 400 and may be included in the system described in relation to Fig. 1. The non-transitory, computer-readable medium 400 may correspond to any typical storage device that stores computer-implemented instructions, such as programming code or the like. For example, the non-transitory, computer-readable medium 400 may include one or more of a non-volatile memory, a volatile memory, and/or one or more storage devices. Examples of non-volatile memory include, but are not limited to, electrically erasable programmable read only memory (EEPROM) and read only memory (ROM). Examples of volatile memory include, but are not limited to, static random access memory (SRAM) and dynamic random access memory (DRAM). Examples of storage devices include, but are not limited to, hard disk drives, compact disc drives, digital versatile disc drives, optical drives, and flash memory devices.
[00055] A processor 402 generally retrieves and executes the instructions stored in the non-transitory, computer-readable medium 400 to operate the present techniques in accordance with an example. In an example, the non-transitory, computer-readable medium 400 can be accessed by the processor 402 over a bus 404. A first region 406 of the non-transitory, computer-readable medium 400 may include cache management module 106 functionality as described herein. The cache management module 106 functionality may be implemented in hardware, software, or a combination thereof.
[00056] For example, block 408 provides store data instructions which may include instructions to store data to non-volatile memory module 108 and mark metadata associated with the data as Dirty state, as described herein.
[00057] For example, block 410 provides send write command instructions which may include instructions to send a write command to write the data to volatile memory module 114 at storage system 110 and mark the metadata associated with the data as Dirty-Flushed state, as described herein.
[00058] For example, block 412 provides send cache sync command instructions which may include instructions to send a cache sync command to copy the data from volatile memory module 114 to an LBA at storage device 116 at storage system 110, as described herein.
[00059] For example, block 414 provides mark metadata instructions which may include instructions to mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at non-volatile memory module 108, as described herein.
[00060] For example, block 416 provides send write command instructions which may include instructions to, upon power interruption and subsequent power restoration, send a write command to write data having metadata marked as Dirty state or Dirty-Flushed state, as described herein.
[00061] Although shown as contiguous blocks, the software components can be stored in any order or configuration. For example, if the non-transitory, computer-readable medium 400 is a hard drive, the software components can be stored in non-contiguous, or even overlapping, sectors.
[00062] The foregoing describes a novel and previously unforeseen approach for storage cache management. While the above application has been shown and described with reference to the foregoing examples, it should be understood that other forms, details, and implementations may be made without departing from the spirit and scope of this application.

Claims

[00063] WHAT IS CLAIMED IS:
1. A storage controller comprising:
a non-volatile memory module to provide memory space to store data; and a cache management module to:
receive, from a host computer, a write command to write data to a logical block address (LBA) at a storage device of a storage system,
store the data to the non-volatile memory module and mark metadata associated with the data as Dirty state,
send, to the storage system, a write command to write the data to volatile memory module at the storage system and mark the metadata associated with the data as Dirty-Flushed state,
send, to the storage system, a cache sync command to copy the data from the volatile memory module to the LBA at the storage device at the storage system,
receive a message indicating completion of the cache sync command and mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at the non-volatile memory module; and
upon power interruption and subsequent power restoration, send to the storage system a write command to write data having metadata marked as Dirty state or Dirty-Flushed state.
2. The storage controller of claim 1, wherein the cache management module to send a write completion message to the host after the metadata of the data is marked as Dirty state.
3. The storage controller of claim 1, wherein after the data is written to volatile memory module at the storage system, metadata at the storage system associated with the data is marked as Dirty state, and after the data is written from the volatile memory module to the storage device, the metadata at the storage system associated with the data is marked as Clean state.
4. The storage controller of claim 1, wherein the cache management module to send the cache sync command to the storage system based on at least one of a pattern of previous cache sync commands, hit rate of the non-volatile memory, and pattern of write commands sent to the storage system.
5. The storage controller of claim 1, wherein the non-volatile memory module to provide cache memory space at the storage controller separate from the volatile memory module to provide cache memory space at the storage system.
6. A method of storage cache management comprising:
receiving, at a cache management module of a storage controller, a write command to write data to a logical block address (LBA) at a storage device of a storage system;
storing the data to non-volatile memory module, at the storage controller, and marking metadata associated with the data as Dirty state;
sending, by the cache management module, a write command, to a storage management module at the storage system, to cause the data to be written to the volatile memory module at the storage system and marking the metadata associated with the data as Dirty-Flushed state;
sending, to the storage management module, a cache sync command to cause the storage management module to copy the data from the volatile memory module to the LBA at the storage device at the storage system;
upon receipt, from the storage system, of a message indicating completion of the cache sync command, marking the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at the non-volatile memory module; and
upon power interruption and subsequent power restoration, sending to the storage system a write command to cause the storage management module to write data from the non-volatile memory module having metadata marked as Dirty state or Dirty-Flushed state to the volatile memory module at the storage system.
7. The method of claim 6, further comprising sending a write completion message to the host after the metadata of the data is marked as Dirty state.
8. The method of claim 6, wherein after the data is written to volatile memory module at the storage system, metadata at the storage system associated with the data is marked as Dirty state, and after the data is written from the volatile memory module to the storage device, the metadata at the storage system associated with the data is marked as Clean state.
9. The method of claim 6, wherein sending the cache sync command to the storage system is based on at least one of a pattern of previous cache sync commands, hit rate of the non-volatile memory, and pattern of write commands sent to the storage system.
10. The method of claim 6, further comprising providing the non-volatile memory module as cache memory space at the storage controller separate from providing the volatile memory module as cache memory space at the storage system.
11. A non-transitory computer-readable medium having computer executable instructions stored thereon for storage cache management, the instructions executable by a processor to:
store data to non-volatile memory module and mark metadata associated with the data as Dirty state;
send a write command to write the data to volatile memory module at the storage system and mark the metadata associated with the data as Dirty-Flushed state;
send a cache sync command to copy the data from the volatile memory module to a logical block address (LBA) at the storage device at the storage system;
receive a message indicating completion of the cache sync command and mark the metadata associated with the data as Clean state to allow reuse of the memory space used by the data at the non-volatile memory module; and
upon power interruption and subsequent power restoration, send a write command to write data having metadata marked as Dirty state or Dirty-Flushed state.
12. The non-transitory computer-readable medium of claim 11, further comprising instructions that if executed cause a processor to send a write completion message to the host after the metadata of the data is marked as Dirty state.
13. The non-transitory computer-readable medium of claim 11, wherein after the data is written to volatile memory module at the storage system, metadata at the storage system associated with the data is marked as Dirty state, and after the data is written from the volatile memory module to the storage device, the metadata at the storage system associated with the data is marked as Clean state.
14. The non-transitory computer-readable medium of claim 11, further comprising instructions that if executed cause a processor to send the cache sync command to the storage system based on at least one of a pattern of previous cache sync commands, hit rate of the non-volatile memory, and pattern of write commands sent to the storage system.
15. The non-transitory computer-readable medium of claim 11, further comprising instructions that if executed cause a processor to provide the non-volatile memory module as cache memory space at the storage controller separate from providing the volatile memory module as cache memory space at the storage system.
PCT/US2015/027999 2015-04-28 2015-04-28 Storage cache management Ceased WO2016175762A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/027999 WO2016175762A1 (en) 2015-04-28 2015-04-28 Storage cache management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/027999 WO2016175762A1 (en) 2015-04-28 2015-04-28 Storage cache management

Publications (1)

Publication Number Publication Date
WO2016175762A1 true WO2016175762A1 (en) 2016-11-03

Family

ID=57198617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/027999 Ceased WO2016175762A1 (en) 2015-04-28 2015-04-28 Storage cache management

Country Status (1)

Country Link
WO (1) WO2016175762A1 (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080104344A1 (en) * 2006-10-25 2008-05-01 Norio Shimozono Storage system comprising volatile cache memory and nonvolatile memory
US20100180065A1 (en) * 2009-01-09 2010-07-15 Dell Products L.P. Systems And Methods For Non-Volatile Cache Control
WO2013095465A1 (en) * 2011-12-21 2013-06-27 Intel Corporation High-performance storage structures and systems featuring multiple non-volatile memories
US20130297880A1 (en) * 2006-12-06 2013-11-07 Fusion-Io, Inc. Non-volatile cache
US20150012690A1 (en) * 2013-03-15 2015-01-08 Rolando H. Bruce Multi-Leveled Cache Management in a Hybrid Storage System


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108572790A (en) * 2017-04-20 2018-09-25 立而鼎科技(深圳)有限公司 A kind of solid state disk write-in instruction reorder method, solid state disk
CN108572790B (en) * 2017-04-20 2021-11-26 深圳市得一微电子有限责任公司 Solid state disk writing instruction ordering method and solid state disk
CN112764671A (en) * 2019-11-06 2021-05-07 北京忆芯科技有限公司 Metadata aggregation processing method of storage device and storage device
CN113127388A (en) * 2021-04-13 2021-07-16 郑州云海信息技术有限公司 Metadata writing method and related device

Similar Documents

Publication Publication Date Title
US10223272B2 (en) Latency sensitive metadata object persistence operation for storage device
JP6690891B2 (en) Data mirroring control device and method
JP5990430B2 (en) SSD (solid state drive) device
KR101870521B1 (en) Methods and systems for improving storage journaling
KR101678868B1 (en) Apparatus for flash address translation apparatus and method thereof
JP2008015769A (en) Storage system and write distribution method
CN107608628A (en) Flash memory controller
CN110674056B (en) Garbage recovery method and device
US9639287B1 (en) Write command reporting
US9785438B1 (en) Media cache cleaning based on workload
CN105096964A (en) Vibration mitigation for a data storage device
JP2014096072A (en) Disk array having mirror configuration and rebuild method of the same
US9007710B1 (en) Rewrite operation for recording bands
US9933955B1 (en) Power safe write buffer for data storage device
CN104932834A (en) Data management of data storage device based on ultimate means areas
US8922930B1 (en) Limit disc nodes by band usage
US9047923B1 (en) Fast shingled tracks recording
WO2016175762A1 (en) Storage cache management
US8667218B2 (en) Storage apparatus comprising RAID groups of RAID 1 series and control method of writing to RAID group of RAID 1 series
JP6015700B2 (en) Information storage device and information storage method
US20130031320A1 (en) Control device, control method and storage apparatus
JP5953245B2 (en) Information processing system
JP4837121B1 (en) Data storage device and data writing method
JP7572520B1 (en) STORAGE APPARATUS, STORAGE CONTROL METHOD, AND PROGRAM
CN106569908B (en) Data backup system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15890900

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15890900

Country of ref document: EP

Kind code of ref document: A1