US11194720B2 - Reducing index operations in a cache - Google Patents
- Publication number
- US11194720B2 (application US16/580,362)
- Authority
- US
- United States
- Prior art keywords
- cache
- data
- location
- index
- flash
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
- G06F12/0833—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means in combination with broadcast means (e.g. for invalidation or updating)
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/12—Replacement control
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/62—Details of cache specific to multiprocessor cache arrangements
-
- G06F2212/69—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/70—Details relating to dynamic memory management
Definitions
- Embodiments of the present invention relate to systems and methods for reducing index operations in a cache when accessing data. More specifically, embodiments of the invention relate to reducing index I/O (Input/Output) operations in a cache such as a flash cache.
- Flash caches such as solid-state drives (SSDs) can be incorporated into storage systems and can be quite large. Data stored in the cache is accessed using a cache index, which identifies the location of data in the cache. Because the flash cache may be large, the cache index may need to be stored in the flash cache itself because the cache index may be too large to fit in memory (e.g., RAM).
- When the cache index is stored in the flash cache, accessing data in the flash cache becomes more expensive at least in terms of I/O operations. Because the cache index is stored in the flash cache, accessing the cache index is equivalent to accessing the flash cache. The number of I/O operations to the flash therefore increases because any request to access data stored in the cache usually requires that the cache index be accessed first. If each data access requires a corresponding cache index lookup, the flash cache is effectively accessed at least twice for each read operation. Even though a flash cache can be fast, the response time of the flash cache is affected.
- cache index updates are also expensive operations at least in terms of I/O, and also in terms of media longevity, because a cache index update requires both an erasure operation and a write operation.
- FIG. 1 illustrates an example of a computing system in which index lookup operations can be reduced or minimized
- FIG. 2 illustrates an example of a block that is returned in response to a cache access operation and that includes location information in addition to the requested data
- FIG. 3 illustrates an example of systems and methods for performing read-modify-write operations while reducing or minimizing cache index lookups
- FIG. 4 illustrates an example of systems and methods for re-inserting previously read data into a cache while minimizing or reducing cache index lookups
- FIG. 5 illustrates an example of systems and methods for avoiding index lookups by invalidating entries in a cache when the data is read
- FIG. 6 illustrates another example of systems and methods for reducing cache index lookups
- FIG. 7 illustrates another example of a cache index
- FIG. 8 is a flow diagram for accessing data in the context of performing a cache index lookup.
- Embodiments of the invention generally relate to reducing index input/outputs (I/Os). Embodiments of the invention further relate to reducing index I/Os when performing read operations, write operations, modify operations, or the like. Embodiments further relate to minimizing the number of times that a cache index of a flash cache is accessed during operations including read operations, write operations, and modify operations.
- Embodiments of the invention can be implemented in a computing environment that includes, by way of example, one or more clients, at least one cache, and at least one storage system that includes one or more storage devices.
- the clients can include any device that can interact with the cache and/or the storage system.
- Example clients include, but are not limited to, smartphones or other cellular devices, tablet devices, laptop computers, desktop computers, server computers or the like.
- the communications between the clients, cache and storage system can occur over direct connections or network connections or multi-network connections and can include wireless connections and/or wired connections.
- the computing systems can vary in size and complexity and may include, but are not limited to, a single device, a high availability system, a local area network, a datacenter, the Internet, or the like or any combination thereof.
- the storage system includes hard disk drives (HDDs).
- the cache may include a faster storage device such as a solid-state drive (SSD) as a flash cache.
- An SSD flash cache can be very large and can store a significant portion of the underlying storage system.
- the SSD cache may have a capacity equal to 5-10% of the storage system. The capacity of the cache is not limited to this range however and can be smaller or larger.
- a large cache also requires a large cache index to track what segments or data are in the cache and where the segments (or data) are located in the cache.
- the cache index may not fit in memory economically and may be either partially or wholly kept on storage media such as within the flash cache itself.
- looking up a segment or data in the cache index can be expensive in terms of I/Os because of the associated queries to the index and subsequent queries for the data once the location is determined from the cache index lookup.
- there may be multiple indexes within the flash cache that must be queried to determine whether a segment or data is located in the cache and its location. Embodiments of the invention reduce the number of times that the cache index is accessed and can improve the performance of the flash cache.
- the cache index is queried for several different reasons.
- De-duplication means that only unique data is stored, though there may be multiple references to a data item or segment.
- the cache index may be queried when there is a request for data from a client.
- the cache index may also be queried when there is an insertion request to ensure that a duplicate copy of the data is not inserted into the cache.
- the cache index may also be queried when a segment or data is invalidated from the cache.
- the same segment or data may be queried within a short period of time to complete the client request.
- a read-modify-write request will query both the cache index and the cache to read the data and then query the cache index to invalidate the overwritten data (which was just read).
- Embodiments of the invention preserve the location information or cache index information during an operation. As a result, the second query to the cache index is not necessary because the location information is preserved.
- the returned data may be cached at a higher level cache (such as a memory buffer cache).
- this data is evicted from the memory buffer cache, an attempt may be made to insert the data back into the lower level cache (e.g., the flash cache). This reinsertion attempt would require a cache index lookup.
- this cache index lookup can be avoided by providing some hints as to the origin of the data, for example whether the data came from the flash cache or the storage system. In this way, a second cache index lookup is avoided if the origin was from the flash cache and the data is assumed to still reside in the cache.
- Embodiments of the invention reduce the number of index queries for at least these scenarios where data may be read and shortly after either invalidated or reinserted.
- meta-data such as location data (potentially opaque to the client) is included with the returned data and the location data indicates the origin or location of the data in some embodiments.
- the location data can be used when writing back to the cache or when inserting or re-inserting the data into the cache. This allows for optimizations by avoiding a cache index lookup.
- a segment or data in the cache may be invalidated. Marking a segment or data as invalid indicates that the segment or data is not a valid response to subsequent queries for the data. Segments or data may be invalidated because the segment or data is deleted, the corresponding file/LUN position is overwritten, the age of the segment or data has passed a threshold, or the like. There may be situations, however, where the old data is retained in addition to any modified data that may be written to the cache.
- the location data or other meta-data (indicating a segment location from a previous index lookup or, in other words, a data location) would be used to mark an in-memory data structure, for example a location manager such as a bitmap, to indicate that the segment is invalid.
- a location manager has a bit for each entry in the cache.
- Other data structures besides a bitmap may serve the same purpose.
- the location manager is used to record this information since updating the cache index in the flash cache may take place in an offline manner with some delay between invalidation operations and updates to the cache index. This process does not incur additional I/O to the cache index because the location manager is in-memory and invalidated segments are batched for cleaning later.
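To make the in-memory invalidation concrete, the following is a minimal sketch, in Python, of a bitmap-style location manager with one bit per cache-index entry. The class name `LocationManager` and the methods `mark_invalid`, `is_valid`, and `drain_invalid` are illustrative assumptions rather than anything specified by the description; the point is only that marking and checking validity touch memory, while the on-flash cache index is cleaned later in a batch.

```python
from typing import List


class LocationManager:
    """Sketch of an in-memory bitmap with one bit per cache-index entry.

    A 0 bit means the corresponding entry is valid; a 1 bit means the entry
    has been invalidated and is waiting for a batched cache-index cleanup.
    """

    def __init__(self, num_entries: int) -> None:
        self._num_entries = num_entries
        self._bits = bytearray((num_entries + 7) // 8)

    def mark_invalid(self, entry: int) -> None:
        # Pure memory operation: no I/O is issued to the flash cache index.
        self._bits[entry // 8] |= 1 << (entry % 8)

    def is_valid(self, entry: int) -> bool:
        return ((self._bits[entry // 8] >> (entry % 8)) & 1) == 0

    def drain_invalid(self) -> List[int]:
        """Return (and clear) every invalidated entry so a background job can
        update the on-flash cache index for all of them in one batch."""
        invalid = [i for i in range(self._num_entries) if not self.is_valid(i)]
        for i in invalid:
            self._bits[i // 8] &= ~(1 << (i % 8)) & 0xFF
        return invalid
```

In this sketch, invalidations are absorbed in memory as they happen; only the periodic drain generates erase/write traffic against the cache index.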
- the client may not have a mechanism to track where the data came from, and the client may attempt to reinsert the segment into the cache.
- the location data indicating segment location would allow a quick in-memory check to validate that the container or data is still available in the cache. If the container or data is still available, then no cache index query or reinsertion is necessary.
- the request for data is implemented as a read-and-invalidate call to the cache.
- the segment data would be queried and returned to the client.
- the cached copy would be invalidated, for example by marking a corresponding entry in a location manager. Subsequent requests for that segment would miss in the cache.
- the read-and-invalidate call could be made for read-modify-write operations to prevent a second call to the cache to invalidate the old version of the segment or data.
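As a rough illustration of the read-and-invalidate call described above, the sketch below performs a single cache-index lookup and reuses that result to invalidate the cached copy through the in-memory location manager from the previous sketch. The objects `cache_index` and `cache_media` and their methods are hypothetical interfaces, not APIs defined by this description.

```python
def read_and_invalidate(cache_index, cache_media, location_manager, key):
    """Sketch of a read-and-invalidate call.

    One cache-index lookup locates the segment; the data is read and returned,
    and the same lookup result is used to invalidate the cached copy in memory,
    so writing the modified segment later needs no second index lookup.
    """
    entry = cache_index.lookup(key)                # the only index I/O
    if entry is None:
        return None                                # cache miss
    data = cache_media.read(entry.location)        # read the segment itself
    location_manager.mark_invalid(entry.location)  # in-memory invalidation
    return data
```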
- a small in-memory cache of recently read index entries is maintained and can be used in these situations.
- the newly modified data or segment is written to the cache.
- the location of the newly modified segment is added to the cache index immediately or in an offline manner. Additions to the cache can be batched, for example.
- FIG. 1 illustrates an example of a computing system in which embodiments of the invention may be implemented.
- FIG. 1 illustrates a client 102 that can communicate with a cache 104 and a storage system 106 .
- the storage system stores data 128 .
- the cache 104 may be a flash cache (e.g., an SSD) and may be an intermediary storage between the storage system 106 and the client 102 .
- the cache 104 is typically faster and smaller than the storage system 106 .
- the cache 104 stores data 108 and a cache index 110 .
- the cache index 110 is maintained in the flash cache 104 itself in one embodiment.
- the data 108 stored in the cache 104 includes data that has been recently used or accessed by the client 102 or that is frequently used by the client 102 , or the like.
- the data may be located in the cache using a key or another appropriate mechanism.
- the cache index 110 references the data 108 stored in the cache.
- the entry 112 identifies at least a location of the data 122 in the cache 104 and the entry 114 identifies at least a location of the data 124 .
- the manner in which the location is identified can depend on how the data is stored and on the structure of the cache or the format of the cache.
- An entry in the index 110 may alternatively identify a location of the data 126 stored in the storage system 106.
- an entry in the cache index 110 may identify the location of data in both the cache 104 and in the storage system 106 .
- a key may be provided and the cache index 110 is checked first using the key.
- the key may be, by way of example, a hash, a file and an offset, a logical unit number and a logical block address, an object identifier, or other identifier that can be used to identify the location of the data in the cache corresponding to the key.
- entries in the cache index 110 are updated in a batch process where multiple entries are updated during the same process. For example, a group of invalid entries may be cleaned at the same time.
- An entry in the cache index 110 needs to be updated, for example, when the corresponding data is invalidated. Data can become invalidated for many reasons, including but not limited to, overwrites, file deletions, cache evictions, data corruption, hardware failures, cache shrink operations, time or the like or combinations thereof.
- a location manager 116 may be used to track which entries in the cache index 110 are invalid.
- the location manager 116 is maintained in a memory (e.g., RAM) 130 .
- the memory 130 may be associated with the storage system 106 and/or the cache 104 .
- the memory 130 may reside on the client 102 .
- Each entry in the location manager may correspond to an entry in the cache index 110 .
- the entries 118 and 120 in the location manager 116 may correspond to the entries 112 and 114 in the cache index 110 . In effect, entries in the location manager 116 also correspond to locations of the data 108 .
- each entry in the location manager 116 may be a single bit.
- a 0 may represent a valid entry and a 1 may represent an invalid entry.
- the corresponding entry in the location manager 116 is set to a 1 in this example.
- Other data structures may be used as the location manager 116 to track the entries in the cache index 110 .
- entries in the location manager 116 may include additional information about the entries or the corresponding data.
- one or more of the cache 104 , the memory 130 , and the storage system 106 may be part of a server computer or part of a plurality of servers or a server system that is configured to serve client requests.
- a server system may be a file server, a database server, an email server, a backup server, or the like.
- the memory 130 is included in the server computer and the location manager 116 may be maintained in the memory 130 .
- the memory 130 may be RAM or another memory buffer.
- FIG. 2 illustrates an example of a block returned in response to an access operation or a cache index lookup.
- Embodiments of the invention can reduce the number of cache index lookups in one example with a block 200 .
- the block 200 includes, in one example, data 202 and location information 204 .
- the data 202 corresponds to the data in the cache that was requested by the client.
- the location information 204 includes information about the data 202 .
- the metadata included in the location information 204 can vary in size from a single bit to a larger structure. The size of the location information 204, however, may affect the information conveyed by the location information 204.
- the location information 204 identifies where the data is stored.
- the location can be as general as the cache or the storage system. The location can be more specific and specify the exact location of the data in the cache or the storage system.
- the location information 204 may include an origin of the data 202 (e.g., the flash cache, the storage system, or the like), a container identifier (an identifier that can be used to address the location manager in memory), a block identifier (a physical identifier from which data can be read), a block ordinal position (a position inside the container), a block byte offset (an offset inside the container), and/or a cache generation identifier.
- the container identifier, block identifier, block ordinal position, and block byte offset may specify a precise position in memory or in the cache.
- One or more of the foregoing examples can be included in the location information.
- the location information is not limited to this information however.
- the cache generation identifier may relate to cache generations.
- a cache can ensure that data is valid for a certain number of generations.
- the cache generation identifier can be used to determine whether the data is still in the cache when the current cache generation is in an appropriate range.
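The fields listed above suggest a small structure that could travel with the returned data. Below is a hedged sketch of such a block: the names `Origin`, `LocationInfo`, and `Block`, and the generation check, are assumptions for illustration; the description only requires that the metadata identify the data's origin and, optionally, its precise position and cache generation.

```python
from dataclasses import dataclass
from enum import Enum


class Origin(Enum):
    FLASH_CACHE = "flash_cache"
    STORAGE = "storage"


@dataclass(frozen=True)
class LocationInfo:
    """Metadata returned with the data; may be opaque to the client."""
    origin: Origin      # where the data was read from
    container_id: int   # used to address the location manager in memory
    block_id: int       # physical identifier from which data can be read
    ordinal: int        # position of the block inside the container
    byte_offset: int    # byte offset inside the container
    generation: int     # cache generation at the time of the read


@dataclass(frozen=True)
class Block:
    """The block 200 of FIG. 2: requested data plus location information."""
    data: bytes
    location: LocationInfo


def assumed_still_cached(loc: LocationInfo, current_generation: int,
                         retention: int) -> bool:
    # Generation check: the data is assumed to remain in the cache only while
    # the current generation is within the retention window of the generation
    # recorded when the data was read.
    return (loc.origin is Origin.FLASH_CACHE
            and current_generation - loc.generation <= retention)
```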
- a read-modify-write operation requires at least two cache index lookups.
- the client may read from the cache. This requires a first cache index lookup to determine if the data is stored in the cache. In one example, more than one entry in the cache index may be checked. The data is then returned to the client by reading the data from the location indicated by the cache index. The client may then modify the data.
- the client may write the new or modified data to the cache.
- a request to invalidate the previous version of the data is necessary in this case and this requires a second cache index lookup to invalidate the old entry of the data.
- the cache index is read multiple times in read-modify-write operations. A similar issue arises when inserting previously read data that may have been stored in RAM or other memory.
- FIG. 3 illustrates an example of a method for performing a read-modify-write operation while reducing or minimizing cache index lookups.
- the cache is read in response to a request from a client 102.
- the request may identify or include a key in one example.
- Reading the cache 104 requires an index access or an index lookup operation to determine a location of the requested data.
- a block 200 is returned to the client.
- the block 200 includes the requested data 202 (which may have been read from the cache 104 or from the storage system 106).
- the block 200 also includes location information 204 about the requested data.
- the location information 204 may be opaque from the perspective of the client 102 .
- the location information 204 may provide an indication as to the origin of the data 202 .
- the specificity of the location information 204 can vary.
- the location information 204 may be a single bit (Boolean) that identifies its origin as the cache 104 or the storage system 106 .
- the location information 204 may, however, be more robust and include information similar to that maintained in the cache index 110. Other variations are also possible.
- the client 102 may keep the location information 204 in memory.
- the location information 204 may be a copy of at least some of the location information that was stored in the entry of the cache index 110 associated with the requested data.
- the data 202 may be modified by the client 102 .
- the new data may be written to the cache 104 as data 312 in box 308 .
- the data 202 is invalidated because the new or modified data is now being written to the cache 104 .
- the old data 202 can be invalidated, in one example, by making an appropriate change in the location manager 116 , which may be stored in memory 130 of the server in one example.
- Because the location information 204 has been retained during the read-modify-write operation, the location of the data 202 can be invalidated without having to access the cache index 110.
- the location information 204 allows the data 202 or the location of the data 202 to be invalidated because the location is known from the location information.
- the corresponding entry in the cache index 110 is also known and may be cleaned immediately or at another time.
- the location information 204 identifies an entry in the cache index 110 .
- the data 202 can be invalidated by marking the corresponding entry in the location manager 116 that corresponds to the entry in the cache index 110 associated with the data 202 .
- the data 202 can be marked as invalid in the location manager 116 without having to access the cache index to find the location of the data 202 .
- a cache index lookup is avoided in this example and the data can be effectively invalidated without performing a cache index lookup.
- the cleanup process can be performed in a batch process at a later time if desired.
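Putting the FIG. 3 pieces together, the following sketch performs a read-modify-write with one cache-index lookup instead of two. It builds on the `Block`/`Origin` and `LocationManager` sketches above; `cache.read` and `cache.write` are hypothetical interfaces, and the mapping from location information to a location-manager entry is simplified to the container identifier.

```python
def read_modify_write(cache, location_manager, key, modify):
    """Sketch of the FIG. 3 flow with a single cache-index lookup."""
    block = cache.read(key)        # returns a Block; the only index lookup
    new_data = modify(block.data)  # client-side modification
    cache.write(key, new_data)     # write the modified segment; the cache
                                   # index can be updated later in a batch
    if block.location.origin is Origin.FLASH_CACHE:
        # Invalidate the old version purely in memory using the retained
        # location information; no second cache-index lookup is needed.
        location_manager.mark_invalid(block.location.container_id)
    return new_data
```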
- FIG. 4 illustrates an example of a method for efficiently determining whether to insert previously read data into a cache.
- a request for data is made by the client 102 .
- Accessing the cache 104 for the requested data 202 requires a cache index lookup in the cache index 110 .
- a key may be used to access the cache index 110 and identify the location of the data 202 .
- the data 202 is then returned to the client 102 in box 404 as the block 200 .
- the block 200 is inserted into the memory 130 , which is an example of another cache level.
- the block 200 and thus the data 202 may remain in the memory 130 for a period of time.
- the data 202 is removed from the memory 130 .
- the data 202 may be evicted from the memory 130 .
- the location information 204 could be a Boolean value that identifies the origin of the data 202 .
- the location information 204 may include metadata that allows the location manager 116 to be checked to see if the data is still cached.
- additional information such as the container ID and the like may be required. This information can be compared with the corresponding entry in the location manager 116 . If the location manager 116 indicates that the location is still valid and the location information identifies the cache as the origin of the data, then the data is not inserted into the cache 104 . If the location manager 116 indicates that the location is invalid, then the data is written to the cache.
- the location information 204 can be used to determine if the data 202 should be inserted back into the cache 104 .
- the location information 204 indicates, in one example, that the data 202 originated from the cache and it is determined that the data 202 in the cache is still valid, the data 202 is not inserted because the data 202 is already in the cache. This avoids an index lookup to determine if the data is still cached and avoids inserting duplicate data in the cache 104 .
- the location information 204 may include a location value. With this value, the location manager 116 can be checked to determine whether the location is still valid or whether the data is still located in the cache 104. If the data is not in the cache, then a determination may be made to insert the data 202 back into the cache 104.
- the corresponding entry in the location manager can be marked as invalid and the method may proceed as previously described.
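For the FIG. 4 eviction path, a sketch of the reinsertion decision is shown below, again reusing the hypothetical `Origin` and `LocationManager` sketches. If the retained location information says the data came from the flash cache and the location manager still shows that location as valid, the reinsertion (and its index lookup) is skipped.

```python
def maybe_reinsert(cache, location_manager, block):
    """Sketch of deciding whether evicted data must be reinserted (FIG. 4)."""
    loc = block.location
    if loc.origin is Origin.FLASH_CACHE and location_manager.is_valid(loc.container_id):
        # Assumed still present and valid in the flash cache: no index lookup,
        # no duplicate insertion.
        return False
    cache.insert(block.data)   # insert as new data; the index is updated later
    return True
```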
- FIG. 5 illustrates an example of a method for avoiding index lookups by invalidating entries in a cache when the data is read.
- a request is made to access data 510 in the cache 104 .
- an entry in the location manager 116 is changed to reflect that the data 510 is considered to be invalid.
- the valid data is returned to the client in box 504 .
- the data 510 is modified.
- the new data is written to the cache 104 as the data 512 .
- the cache index 110 may be changed to reflect the new data.
- the write performed in box 508 does not need to invalidate the previous version which would require another cache index lookup.
- when the cache index key is a content-defined hash, the index keys of the data 510 and the data 512 will be dramatically different and would require lookups in different locations of the index.
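A short illustration of why content-defined keys force separate lookups: hashing the old and modified segments yields unrelated keys, so the two versions would land in different parts of the cache index. SHA-1 is used here only as a stand-in; the description does not name a particular hash.

```python
import hashlib

old_segment = b"original segment contents"
new_segment = b"original segment contents, modified"

old_key = hashlib.sha1(old_segment).hexdigest()
new_key = hashlib.sha1(new_segment).hexdigest()

# The keys bear no relationship to one another, so the entries for data 510
# and data 512 would live in different locations of the cache index.
print(old_key, new_key)
```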
- FIG. 6 illustrates another example for reducing cache index lookups.
- data is read from the cache 104 .
- the read operation requires a cache index lookup as previously described.
- the data is returned to the client 102 .
- a block 200 may be returned that includes location information.
- the data is modified in box 606 .
- FIG. 6 also illustrates that a cache of index entries 610 may be maintained in the memory 130 of the server (or in another location or memory).
- the cache of index entries 610 may include a cache of, for example, the location information associated with recently accessed data.
- the cache of index entries can be checked for the location information.
- a cache index lookup operation can be avoided. Rather, the location information stored in the cache of index entries 610 can be used to determine how to handle the data being written to the cache.
- the new data is written to the cache and the appropriate entry in the location manager 116 for the old version of the data can be marked as invalid based on the location information maintained in the cache of index entries 610 in the memory 130 .
- location information is returned with the data such that the location information can be tracked. More specifically, the location information can be used to access the location manager to mark a particular entry, which corresponds to data in the cache or to a location in the cache, as invalid.
- the entry in the location manager can be marked as invalid when the read operation is performed. In this example, it may not be necessary to return location information with the data because the appropriate entry in the location manager has already been marked as invalid.
- a cache of recent cache index entries that were looked up is maintained in memory other than the cache and different from the location manager.
- the cache of recent index entries can be used to invalidate the entry in the location manager instead of performing a cache index lookup.
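The small cache of recently read index entries can be sketched as a bounded LRU map from keys to location information. The class and method names below are illustrative; the only behavior taken from the description is that a later write can consult this in-memory structure, and then the location manager, instead of re-reading the on-flash cache index.

```python
from collections import OrderedDict


class RecentIndexEntries:
    """Sketch of a small in-memory LRU of recently looked-up index entries."""

    def __init__(self, capacity: int = 1024) -> None:
        self._entries = OrderedDict()   # key -> location information
        self._capacity = capacity

    def remember(self, key, location) -> None:
        self._entries[key] = location
        self._entries.move_to_end(key)
        if len(self._entries) > self._capacity:
            self._entries.popitem(last=False)   # evict the least recently used

    def lookup(self, key):
        location = self._entries.get(key)
        if location is not None:
            self._entries.move_to_end(key)      # refresh recency on a hit
        return location
```

On a write of modified data, a hit in this structure supplies the old location, which can then be marked invalid in the location manager without any cache-index I/O.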
- FIG. 7 illustrates another example of a cache index.
- the cache index 700 includes entries that may be associated with more than one location.
- the entry 702, for example, is associated with the data 706 in the cache 104 and with the data 706 in the storage system 106.
- the entry 704 may only include a location of the data 708 in the cache 104 .
- the entry 718 may identify a location of the data 720 , which may only be present in the storage system 106 .
- the location manager 716 may be similarly configured.
- the entry 714 may also correspond to the data 706 in both the cache 104 and the storage system 106 .
- the appropriate locations in the location manager 716 can be marked as invalid.
- the location manager 716 is thus configured to handle multiple locations in each entry. When the location manager 716 is marked to invalidate the data in the cache, the entry for the copy of the data in storage may remain valid.
- a cache index 700 can include more than one location in each entry and allows the location of data to be identified from a single cache index lookup operation instead of two index lookup operations.
- a process may then be performed to determine which copy of the data should be returned.
- a cost function for example, may be used to determine which location to use when responding to a request for data.
- the locations may include, by way of example only, both caches and tiers of storage.
- data could be stored in a local storage system, or in a local cache, or in a remote storage system, or a remote cache.
- Remote storage systems could include cloud storage. Data can be stored on different types of media, some being faster to access, some being more expensive to use.
- An index lookup operation may include a cache index lookup in box 710 .
- the cache index may identify more than one location for the requested data and a decision is made with respect to which location is used.
- the results are returned.
- the locations may be ordered according to some property. For example, the locations may be ordered based on expected access time, where data stored in locations with faster expected access times are returned before data stored in locations with slower expected access times. Another property to consider is the financial cost of accessing data. Some storage systems charge for accesses, such as cloud storage providers, so accesses that are less expensive may be preferred. Alternatively, a single location may be returned based on the same property.
- the number of I/Os can be reduced and the appropriate one or more entries in the location manager 716 can be marked as invalid.
- FIG. 8 illustrates an example of a method 800 for returning data in response to a request for the data.
- a cache index lookup is performed in box 802 .
- the entries in the cache index (e.g., the cache index 700) may identify more than one location for the requested data.
- the locations are ordered.
- the locations may be ordered based on at least one factor. Example factors include those previously mentioned such as expected access time, financial cost, and validity of location. Other factors may include the urgency of the request, the status of the requestor, or the like.
- the location manager may also represent multiple locations per entry.
- the data in the best location (e.g., first in the ordered results) is returned to the client 102 .
- the method 800 may determine that the location in the cache should be returned because the cache provides faster access and because there is no cost associated with returning the data from the cache. If the data in the cache is valid, the data can be returned to the client. If the copy in the cache is determined to be invalid, then the next location in the ordered results is used. In this case, the data may be returned from the storage system.
- the validity of the locations may be determined before the ordered locations are determined or ordered.
- the validity of the locations is another factor that can be used when ordering the locations.
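A sketch of the FIG. 7/FIG. 8 idea follows: a single index entry lists every known copy of the data, the copies are ordered by a simple cost function combining expected access time and access cost, and the first valid copy is returned. The `Location` fields, the weighting, and the tier names are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Location:
    tier: str                   # e.g. "flash_cache", "local_storage", "cloud"
    expected_latency_ms: float  # expected access time for this copy
    access_cost: float          # e.g. monetary cost per access (cloud tiers)
    valid: bool = True          # validity as tracked by the location manager


@dataclass
class MultiLocationEntry:
    key: str
    locations: List[Location] = field(default_factory=list)


def choose_location(entry: MultiLocationEntry) -> Optional[Location]:
    """Order every known copy by a cost function and return the best valid one."""
    ordered = sorted(
        entry.locations,
        key=lambda loc: loc.expected_latency_ms + 1000.0 * loc.access_cost,
    )
    for loc in ordered:
        if loc.valid:
            return loc
    return None   # no valid copy; the caller treats this as a miss


# Example: the flash cache copy is preferred unless it has been invalidated.
entry = MultiLocationEntry("segment-42", [
    Location("cloud", expected_latency_ms=80.0, access_cost=0.01),
    Location("flash_cache", expected_latency_ms=0.2, access_cost=0.0),
    Location("local_storage", expected_latency_ms=8.0, access_cost=0.0),
])
best = choose_location(entry)
```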
- a computer may include a processor and computer storage media carrying instructions that, when executed by the processor, or caused by the processor to be executed, perform any one or more of the methods disclosed herein.
- embodiments within the scope of the present invention also include computer storage media, which are physical media for carrying or having computer-executable instructions or data structures stored thereon.
- Such computer storage media can be any available physical media that can be accessed by a general purpose or special purpose computer.
- such computer storage media can comprise hardware such as solid state disk (SSD), RAM, ROM, EEPROM, CD-ROM, flash memory, phase-change memory (“PCM”), or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other hardware storage devices which can be used to store program code in the form of computer-executable instructions or data structures, which can be accessed and executed by a general-purpose or special-purpose computer system to implement the disclosed functionality of the invention. Combinations of the above should also be included within the scope of computer storage media. As well, such media are examples of non-transitory storage media, and non-transitory storage media also embraces cloud-based storage systems and structures, although the scope of the invention is not limited to these examples of non-transitory storage media.
- Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- the terms ‘module’ or ‘component’ can refer to software objects or routines that execute on the computing system.
- the different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system, for example, as separate threads. While the system and methods described herein can be implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated.
- a ‘computing entity’ may be any computing system as previously defined herein, or any module or combination of modules running on a computing system.
- a hardware processor is provided that is operable to carry out executable instructions for performing a method or process, such as the methods and processes disclosed herein.
- the hardware processor may or may not comprise an element of other hardware, such as the computing devices and systems disclosed herein.
- embodiments of the invention can be performed in client-server environments, whether network or local environments, or in any other suitable environment.
- Suitable operating environments for at least some embodiments of the invention include cloud computing environments where one or more of a client, server, or target virtual machine may reside and operate in a cloud environment.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/580,362 US11194720B2 (en) | 2015-03-31 | 2019-09-24 | Reducing index operations in a cache |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/674,879 US10210087B1 (en) | 2015-03-31 | 2015-03-31 | Reducing index operations in a cache |
US16/151,028 US10430337B2 (en) | 2015-03-31 | 2018-10-03 | Reducing index operations in a cache |
US16/580,362 US11194720B2 (en) | 2015-03-31 | 2019-09-24 | Reducing index operations in a cache |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/151,028 Continuation US10430337B2 (en) | 2015-03-31 | 2018-10-03 | Reducing index operations in a cache |
Publications (2)
Publication Number | Publication Date |
---|---|
US20200019505A1 (en) | 2020-01-16 |
US11194720B2 (en) | 2021-12-07 |
Family
ID=65038669
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/674,879 Active 2035-07-23 US10210087B1 (en) | 2015-03-31 | 2015-03-31 | Reducing index operations in a cache |
US16/151,028 Active US10430337B2 (en) | 2015-03-31 | 2018-10-03 | Reducing index operations in a cache |
US16/580,362 Active 2035-09-01 US11194720B2 (en) | 2015-03-31 | 2019-09-24 | Reducing index operations in a cache |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/674,879 Active 2035-07-23 US10210087B1 (en) | 2015-03-31 | 2015-03-31 | Reducing index operations in a cache |
US16/151,028 Active US10430337B2 (en) | 2015-03-31 | 2018-10-03 | Reducing index operations in a cache |
Country Status (1)
Country | Link |
---|---|
US (3) | US10210087B1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11537617B2 (en) | 2019-04-30 | 2022-12-27 | Dremio Corporation | Data system configured to transparently cache data of data sources and access the cached data |
CN111580742B (en) * | 2019-08-30 | 2021-06-15 | 上海忆芯实业有限公司 | Method for processing read (Get)/Put (write) request using accelerator and information processing system thereof |
CN112698935B (en) * | 2019-10-22 | 2025-06-10 | 深圳市茁壮网络股份有限公司 | Data access method, data server and data storage system |
JP2023104400A (en) | 2022-01-17 | 2023-07-28 | 富士通株式会社 | Data management method and data management program |
Citations (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5581729A (en) | 1995-03-31 | 1996-12-03 | Sun Microsystems, Inc. | Parallelized coherent read and writeback transaction processing system for use in a packet switched cache coherent multiprocessor system |
US5644753A (en) | 1995-03-31 | 1997-07-01 | Sun Microsystems, Inc. | Fast, dual ported cache controller for data processors in a packet switched cache coherent multiprocessor system |
US6018763A (en) | 1997-05-28 | 2000-01-25 | 3Com Corporation | High performance shared memory for a bridge router supporting cache coherency |
US6134634A (en) | 1996-12-20 | 2000-10-17 | Texas Instruments Incorporated | Method and apparatus for preemptive cache write-back |
US6295582B1 (en) | 1999-01-15 | 2001-09-25 | Hewlett Packard Company | System and method for managing data in an asynchronous I/O cache memory to maintain a predetermined amount of storage space that is readily available |
US6415362B1 (en) * | 1999-04-29 | 2002-07-02 | International Business Machines Corporation | Method and system for write-through stores of varying sizes |
US6516386B1 (en) | 1997-12-31 | 2003-02-04 | Intel Corporation | Method and apparatus for indexing a cache |
US6772298B2 (en) | 2000-12-20 | 2004-08-03 | Intel Corporation | Method and apparatus for invalidating a cache line without data return in a multi-node architecture |
US6782444B1 (en) | 2001-11-15 | 2004-08-24 | Emc Corporation | Digital data storage subsystem including directory for efficiently providing formatting information for stored records |
US20040199727A1 (en) | 2003-04-02 | 2004-10-07 | Narad Charles E. | Cache allocation |
US20070043914A1 (en) * | 2005-08-22 | 2007-02-22 | Fujitsu Limited | Non-inclusive cache system with simple control operation |
US20070136535A1 (en) * | 2005-01-11 | 2007-06-14 | Ramakrishnan Rajamony | System and Method for Reducing Unnecessary Cache Operations |
US20070174554A1 (en) * | 2006-01-25 | 2007-07-26 | International Business Machines Corporation | Disowning cache entries on aging out of the entry |
US7269825B1 (en) | 2002-12-27 | 2007-09-11 | Unisys Corporation | Method and system for relative address translation |
US20070288692A1 (en) * | 2006-06-08 | 2007-12-13 | Bitmicro Networks, Inc. | Hybrid Multi-Tiered Caching Storage System |
US20090172287A1 (en) | 2007-12-28 | 2009-07-02 | Lemire Steven Gerard | Data bus efficiency via cache line usurpation |
US20090182948A1 (en) | 2008-01-16 | 2009-07-16 | Via Technologies, Inc. | Caching Method and Apparatus for a Vertex Shader and Geometry Shader |
US20110161594A1 (en) | 2009-12-25 | 2011-06-30 | Fujitsu Limited | Information processing device and cache memory control device |
US20110258391A1 (en) * | 2007-12-06 | 2011-10-20 | Fusion-Io, Inc. | Apparatus, system, and method for destaging cached data |
US20120198174A1 (en) * | 2011-01-31 | 2012-08-02 | Fusion-Io, Inc. | Apparatus, system, and method for managing eviction of data |
US8244960B2 (en) * | 2009-01-05 | 2012-08-14 | Sandisk Technologies Inc. | Non-volatile memory and method with write cache partition management methods |
US20120221776A1 (en) | 2009-12-18 | 2012-08-30 | Kabushiki Kaisha Toshiba | Semiconductor storage device |
US20120239860A1 (en) | 2010-12-17 | 2012-09-20 | Fusion-Io, Inc. | Apparatus, system, and method for persistent data management on a non-volatile storage media |
US20120239854A1 (en) | 2009-05-12 | 2012-09-20 | Stec., Inc. | Flash storage device with read cache |
US8275935B2 (en) | 2010-01-29 | 2012-09-25 | Kabushiki Kaisha Toshiba | Semiconductor storage device and control method thereof |
US20130067245A1 (en) | 2011-09-13 | 2013-03-14 | Oded Horovitz | Software cryptoprocessor |
US20130166816A1 (en) * | 2011-02-25 | 2013-06-27 | Fusion-Io, Inc. | Apparatus, System, and Method for Managing Contents of a Cache |
US20130185508A1 (en) | 2012-01-12 | 2013-07-18 | Fusion-Io, Inc. | Systems and methods for managing cache admission |
US20130204854A1 (en) | 2012-02-08 | 2013-08-08 | International Business Machines Corporation | Efficient metadata invalidation for target ckd volumes |
US8527544B1 (en) | 2011-08-11 | 2013-09-03 | Pure Storage Inc. | Garbage collection in a storage system |
US20140013025A1 (en) | 2012-07-06 | 2014-01-09 | Seagate Technology Llc | Hybrid memory with associative cache |
US20140032853A1 (en) | 2012-07-30 | 2014-01-30 | Futurewei Technologies, Inc. | Method for Peer to Peer Cache Forwarding |
US8688951B2 (en) | 2009-06-26 | 2014-04-01 | Microsoft Corporation | Operating system virtual memory management for hardware transactional memory |
US8700840B2 (en) * | 2009-01-05 | 2014-04-15 | SanDisk Technologies, Inc. | Nonvolatile memory with write cache having flush/eviction methods |
US20140115251A1 (en) | 2012-10-22 | 2014-04-24 | International Business Machines Corporation | Reducing Memory Overhead of Highly Available, Distributed, In-Memory Key-Value Caches |
US20140223103A1 (en) | 2010-06-09 | 2014-08-07 | Micron Technology, Inc. | Persistent memory for processor main memory |
US20140331013A1 (en) | 2011-12-07 | 2014-11-06 | Fujitsu Limited | Arithmetic processing apparatus and control method of arithmetic processing apparatus |
US20140372686A1 (en) * | 2010-07-14 | 2014-12-18 | Nimble Storage, Inc. | Methods and systems for marking data in a flash-based cache as invalid |
US20150012690A1 (en) * | 2013-03-15 | 2015-01-08 | Rolando H. Bruce | Multi-Leveled Cache Management in a Hybrid Storage System |
US20150149720A1 (en) | 2012-09-05 | 2015-05-28 | Fujitsu Limited | Control method, control device, and recording medium |
US20150261468A1 (en) | 2014-03-13 | 2015-09-17 | Open Text S.A. | System and Method for Data Access and Replication in a Distributed Environment Utilizing Data Derived from Data Access within the Distributed Environment |
US20150347298A1 (en) * | 2014-05-29 | 2015-12-03 | Green Cache AB | Tracking alternative cacheline placement locations in a cache hierarchy |
US9213649B2 (en) | 2012-09-24 | 2015-12-15 | Oracle International Corporation | Distributed page-table lookups in a shared-memory system |
US9390116B1 (en) | 2013-09-26 | 2016-07-12 | Emc Corporation | Insertion and eviction schemes for deduplicated cache system of a storage system |
US9639481B2 (en) * | 2014-08-08 | 2017-05-02 | PernixData, Inc. | Systems and methods to manage cache data storage in working memory of computing system |
US20170123725A1 (en) | 2015-10-28 | 2017-05-04 | International Business Machines Corporation | Reducing page invalidation broadcasts in virtual storage management |
US10169365B2 (en) * | 2016-03-02 | 2019-01-01 | Hewlett Packard Enterprise Development Lp | Multiple deduplication domains in network storage system |
-
2015
- 2015-03-31 US US14/674,879 patent/US10210087B1/en active Active
-
2018
- 2018-10-03 US US16/151,028 patent/US10430337B2/en active Active
-
2019
- 2019-09-24 US US16/580,362 patent/US11194720B2/en active Active
Non-Patent Citations (9)
Title |
---|
Costanzo, Carlo. "Enabling FAST Cache on your EMC Clariion with Flash Drives" vCloudinfo.com Nov. 18, 2010. |
Krzyzanowski, Paul. "Memory Management: Paging." Rutgers University, Mar. 21, 2012. |
Li, Cheng, et al. "Nitro: a capacity-optimized SSD cache for primary storage." 2014 USENIX Annual Technical Conference (USENIX ATC 14). 2014. |
Liu, Yang, and Wang Wei. "FLAP: Flash-aware prefetching for improving SSD-based disk cache." Journal of Networks 9.10 (2014): 2766-2775. (Year: 2014). |
Lu, Youyou, Jiwu Shu, and Weimin Zheng. "Extending the lifetime of flash-based storage through reducing write amplification from file systems." Presented as part of the 11th USENIX Conference on File and Storage Technologies (FAST 13). 2013. |
Mao, Bo, et al. "Read-performance optimization for deduplication-based storage systems in the cloud." ACM Transactions on Storage (TOS) 10.2 (2014): 6. |
Saxena, Mohit, Michael M. Swift, and Yiying Zhang. "Flashtier: a lightweight, consistent and durable storage cache." Proceedings of the 7th ACM european conference on Computer Systems. ACM, 2012. |
Silberschatz, Abraham, and Peter Baer Galvin. "Demand Paging: Operating system concepts". Silberschatz and Galvin, 1999. |
VNX FAST Cache, EMC, Dec. 2013. |
Also Published As
Publication number | Publication date |
---|---|
US10430337B2 (en) | 2019-10-01 |
US20200019505A1 (en) | 2020-01-16 |
US20190034341A1 (en) | 2019-01-31 |
US10210087B1 (en) | 2019-02-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9779027B2 (en) | Apparatus, system and method for managing a level-two cache of a storage appliance | |
US11194720B2 (en) | Reducing index operations in a cache | |
US9892045B1 (en) | Methods to select segments of an evicted cache unit for reinsertion into the cache | |
US9772949B2 (en) | Apparatus, system and method for providing a persistent level-two cache | |
US11567871B2 (en) | Input/output patterns and data pre-fetch | |
US8930648B1 (en) | Distributed deduplication using global chunk data structure and epochs | |
CN108810041B (en) | Data writing and capacity expansion method and device for distributed cache system | |
US9921963B1 (en) | Method to decrease computation for cache eviction using deferred calculations | |
US9996542B2 (en) | Cache management in a computerized system | |
US9720835B1 (en) | Methods to efficiently implement coarse granularity cache eviction based on segment deletion hints | |
US20140195551A1 (en) | Optimizing snapshot lookups | |
US20190129621A1 (en) | Intelligent snapshot tiering | |
US10061523B2 (en) | Versioning storage devices and methods | |
US8935481B2 (en) | Apparatus system and method for providing raw data in a level-two cache | |
US11113199B2 (en) | Low-overhead index for a flash cache | |
CN108885589B (en) | Approach to Flash-Friendly Cache for CDM Workloads | |
US11200116B2 (en) | Cache based recovery of corrupted or missing data | |
US10366011B1 (en) | Content-based deduplicated storage having multilevel data cache | |
US20210286730A1 (en) | Method, electronic device and computer program product for managing cache | |
US9892044B1 (en) | Methods to efficiently implement coarse granularity cache eviction | |
US10719240B2 (en) | Method and device for managing a storage system having a multi-layer storage structure | |
US20240264939A1 (en) | Methods for cache insertion using ghost lists | |
US20170168956A1 (en) | Block cache staging in content delivery network caching system | |
CN109002400A (en) | A kind of perception of content type Computer Cache management system and method | |
US20140115246A1 (en) | Apparatus, system and method for managing empty blocks in a cache |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS
Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;WYSE TECHNOLOGY L.L.C.;AND OTHERS;REEL/FRAME:051302/0528
Effective date: 20191212
|
AS | Assignment |
Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, NORTH CAROLINA
Free format text: SECURITY AGREEMENT;ASSIGNORS:DELL PRODUCTS L.P.;EMC IP HOLDING COMPANY LLC;WYSE TECHNOLOGY L.L.C.;AND OTHERS;REEL/FRAME:051449/0728
Effective date: 20191230
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS
Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001
Effective date: 20200409
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS
Free format text: SECURITY INTEREST;ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:053311/0169
Effective date: 20200603
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
|
AS | Assignment |
Owner names: EMC CORPORATION, MASSACHUSETTS; SECUREWORKS CORP., DELAWARE; WYSE TECHNOLOGY L.L.C., CALIFORNIA; EMC IP HOLDING COMPANY LLC, TEXAS; DELL PRODUCTS L.P., TEXAS
Free format text: RELEASE OF SECURITY INTEREST AT REEL 051449 FRAME 0728;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058002/0010
Effective date: 20211101
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner names: EMC IP HOLDING COMPANY LLC, TEXAS; EMC CORPORATION, MASSACHUSETTS; DELL PRODUCTS L.P., TEXAS
Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053311/0169);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0742
Effective date: 20220329

Owner names: SECUREWORKS CORP., DELAWARE; DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO WYSE TECHNOLOGY L.L.C.), TEXAS; EMC IP HOLDING COMPANY LLC, TEXAS; DELL PRODUCTS L.P., TEXAS
Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (051302/0528);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060438/0593
Effective date: 20220329

Owner names: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS; DELL INTERNATIONAL L.L.C., TEXAS; DELL PRODUCTS L.P., TEXAS; DELL USA L.P., TEXAS; EMC CORPORATION, MASSACHUSETTS; DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS; EMC IP HOLDING COMPANY LLC, TEXAS
Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (053546/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:071642/0001
Effective date: 20220329
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
Year of fee payment: 4