CN103250141B - Read-ahead processing in a networked client-server architecture - Google Patents
- Publication number: CN103250141B
- Authority
- CN
- China
- Prior art keywords: read, sequence, offset, ahead, data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/067—Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/16—General purpose computing application
- G06F2212/163—Server or database system
Abstract
Description
Technical Field
The present invention relates generally to computers and, more specifically, to read-ahead processing in a networked client-server architecture within a computing storage environment.
Background
When sequential read operations are performed, a read-ahead mechanism improves the efficiency of the read process by executing background read-ahead operations that load data from the storage device into a memory-based cache; subsequent read operations then read this data directly from the cache. This makes efficient use of the storage channels and devices and balances I/O access over time, improving the efficiency of the overall read process. Specifically, when a read operation is processed, rather than waiting for data to be retrieved from the storage device, the data is generally already available in the read-ahead cache; because cache access (typically memory-based) is faster than I/O access, the overall read process is more efficient.
Summary of the Invention
Read-ahead mechanisms are generally optimized for sequential read use cases. In the architecture considered in the embodiments shown below and in the claimed subject matter, several factors may reduce the efficiency of the read-ahead mechanism. Primarily, because messages are assumed to be subject to reordering while traversing the network, the order in which messages are received at the destination may differ from the order in which they were generated and sent. This may cause read and read-ahead messages issued consecutively by the client to be received by the storage system out of order. Specifically, these messages may appear to exhibit gaps and read-behind behavior. Both behaviors may reduce the efficiency of the read-ahead mechanism operating in the storage system, because in such cases it is more difficult to determine which data is most beneficial to keep resident in the storage system's read-ahead cache.
In addition, as a client application moves from reading one storage segment to another, read-ahead messages issued by the client for the previous segment may arrive at the storage system after the read and read-ahead messages associated with the following segment have already been processed. Processing such stale messages associated with previous segments is inefficient, because it consumes resources. Furthermore, processing such stale messages may shift the read-ahead mechanism operating in the storage system back to previous segments, which also reduces the efficiency of the read process.
In view of the foregoing, mechanisms are needed that address the above challenges. Accordingly, embodiments are provided for read-ahead processing by a processor device in a networked client-server architecture. Read messages are grouped by a number of unique sequence identifiers (IDs), where each of the sequence IDs corresponds to a specific read sequence comprising all read and read-ahead requests related to a particular storage segment being read sequentially by a thread of execution in the client application. The storage system uses the sequence ID values to identify and filter read-ahead messages that are already stale when received by the storage system, because the client application has moved on to read a different storage segment. Essentially, a message is discarded when its sequence ID value is earlier than the most recent value already seen by the storage system.
The sequence ID is also used by the storage system to determine the corresponding read-ahead data to be loaded into the read-ahead cache that the storage system maintains for each client application read session. The read-ahead cache is logically partitioned into leading and trailing logically contiguous buffers for data processing; when the data content of the read-ahead cache is advanced, according to the manner in which the read requests of the client application read session advance, data is loaded into the trailing logical buffer beginning at an offset one byte after the end offset of the leading logical buffer. As long as a continuous read stream, derived by observing the incoming and maintained values of the sequence ID, is maintained by the client application read session, the read-ahead cache position within the data segment being read advances using the method broadly described above, and read requests are served from the cached content or retrieved from the storage devices (if the data they reference is not fully contained in the cache). When a new continuous read stream is identified, again derived by observing the incoming and maintained values of the sequence ID, the cache position within the data segment being read is modified based on the offset of the incoming read request, and the requested data is served from the cache.
In addition to the foregoing exemplary method embodiments, other exemplary system and computer program product embodiments are provided that supply related advantages.
Brief Description of the Drawings
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
Figure 1 illustrates an exemplary read-ahead architecture in a computing storage environment;
Figure 2 illustrates gaps in a sequential read stream;
Figure 3 illustrates an exemplary method for processing read requests that takes into account the incoming and maintained sequence ID values;
Figure 4 illustrates an exemplary method for processing read requests that takes into account the incoming and maintained sequence ID values as well as the farthest offset value;
Figure 5 illustrates an exemplary computation of the updated data range of a read request using the farthest offset;
Figures 6 and 7 illustrate exemplary layouts of the logical buffers within the physical buffer implemented as the read-ahead cache;
Figure 8 illustrates an exemplary condition, based on a predetermined threshold, for triggering advancement of the data content of the logical buffers first depicted in Figure 6;
Figure 9 illustrates an exemplary method for processing incoming read requests using the cache buffers; and
Figure 10 illustrates exemplary hardware suitable for implementing aspects of the claimed subject matter.
Detailed Description
In the embodiments shown below, a networked client-server architecture is considered in which client applications issue read requests for data stored in a storage system (the server in this architecture). The client applications and the storage system are attached via a network. Figure 1 illustrates such an exemplary networked client-server architecture 10. A client system 12 contains a client application 14, whose read requests are issued through a client agent 20 that resides locally relative to the client application (i.e., on the same processor) and uses a read-ahead cache 18. The client agent 20 is an agent of the storage system 26 on the processor running the client application 14. The client agent 20 (rather than the client application) communicates with the storage system 26 over a network 28.
The client agent 20 and the storage system 26 communicate over the network 28 using messages (e.g., read and read-ahead requests 22). As is commonly assumed for networks, it is assumed in this architecture that messages 22 may be reordered, relative to the order in which they were generated, while traversing the network. In architecture 10, both the client agent 20 and the storage system 26 may apply their own read-ahead mechanisms. That is, the client agent 20 may generate read-ahead operations based on the read requests issued by the client application 14 and store the read-ahead data in its own cache 18. In addition, the storage system 26 may generate read-ahead operations based on the read requests 22 received from the client agent 20 and store the read-ahead data in a dedicated cache 24. The storage system 26 uses storage network connectivity 30 to send read and read-ahead requests 32 to the storage devices 34, as shown.
Although the read requests issued by the client application 14 are assumed to be generally sequential (hence, in this context, the benefit of a read-ahead mechanism), the high-level read pattern of the client application is assumed to be random. An example of such a read pattern would be an application that reads relatively large sections of data from multiple storage entities (e.g., files), each stored independently in the storage system, using sequential read operations of smaller subsections.
As mentioned above, read-ahead mechanisms are generally optimized for sequential read use cases. In the architecture 10 considered in the illustrated embodiments, several factors may reduce the efficiency of the read-ahead mechanism. Primarily, because messages are assumed to be subject to reordering while traversing the network, the order in which messages are received at the destination may differ from the order in which they were generated and sent. This may cause read and read-ahead messages issued consecutively by the client agent to appear out of order when received by the storage system. Specifically, these messages may appear to exhibit gaps and read-behind behavior. Both behaviors may reduce the efficiency of the read-ahead mechanism operating in the storage system, because in such cases it is more difficult to determine which data is most beneficial to keep resident in the storage system's read-ahead cache.
In addition, again as mentioned above, as a client application moves from reading one storage segment to another, read-ahead messages issued by the client agent for the previous segment may arrive at the storage system after the read and read-ahead messages associated with the following segment have already been processed. Processing such stale messages associated with previous segments is inefficient, because it consumes resources. Furthermore, processing such stale messages may shift the read-ahead mechanism operating in the storage system back to previous segments, which also reduces the efficiency of the read process.
The illustrated embodiments serve to effectively address the above challenges. In the mechanisms of the illustrated embodiments, each read and read-ahead message sent from the client agent to the storage system carries what is referred to herein as a sequence ID value. The sequence ID value groups read messages into specific read sequences, such that all read and read-ahead requests associated with a particular storage segment being read sequentially by a thread of execution in the client application are assigned the same unique sequence ID value and are thus grouped together. The storage system uses the sequence ID values to identify and filter read-ahead messages that are already stale when received by the storage system, because the client application has moved on to read a different storage segment. Broadly, a message is discarded when its sequence ID value is earlier than the most recent value already seen by the storage system.
Where the client agent's implementation of its read-ahead mechanism involves generating, in each iteration, read-ahead requests covering all the data required to be loaded into its read-ahead cache, without regard to previously issued read-ahead requests or to responses to read-ahead requests currently being generated or sent, the mechanisms of the illustrated embodiments allow the storage system to process such read-ahead requests efficiently. This approach simplifies the client agent implementation and ultimately enables the storage system to ensure that the read accesses applied to the storage devices through its read-ahead mechanism are in practice serialized with respect to their offsets, thus enhancing the effectiveness of the read-ahead mechanism used by the storage system. In this approach, the read-ahead requests generated by the client agent may overlap in their data ranges, which in turn requires the storage system to also filter and modify read requests based on their requested data ranges.
In the following description, a read session associated with a thread of execution in the client application is referred to as a "client application read session." According to the mechanisms of the illustrated embodiments, the storage system maintains, for each client application read session, the current farthest offset it has processed in the data segment being read (in addition to the maintained sequence ID value). Generally, if the sequence ID value of the read request specified by a message equals the maintained sequence ID value, and the end offset of the received read request is less than or equal to the maintained farthest offset, the storage system discards the incoming message. If the sequence ID values are equal and the end offset of the read request is greater than the farthest offset, then the farthest offset is modified to be the end offset of the read request, and the range of data to be read and sent to the client agent is computed as the range starting one byte after the previous value of the farthest offset and ending at the new value of the farthest offset.
The storage system maintains a read-ahead cache for each client application read session and uses the incoming and maintained values of the sequence ID to determine the data content to be loaded into the read-ahead cache. The physical buffer constituting the read-ahead cache is logically partitioned into two buffers, which are always logically contiguous with respect to the offsets of their data. Either of the logical buffers, regardless of its layout within the physical buffer, may be the first logical buffer in terms of its offset in the data, with the other buffer then being the second logical buffer. The data content of the buffers may advance according to the manner in which the read requests of the client application read session advance. The data content of the buffers can only move forward in the data segment being read; it does not track backward. Advancement is triggered by exceeding a threshold on the number of read requests whose end offsets exceed a threshold offset within the second logical buffer, where the latter offset is defined as a percentage of the data range covered by the second logical buffer. When such an advancement is activated, the start offset of the first logical buffer is set to the end offset of the second logical buffer plus one byte, and data is then loaded into the newly defined second logical buffer.
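The patent specification defines this advancement trigger only abstractly. As an illustrative sketch only (the class name, the concrete fraction, and the request-count threshold are assumptions, not part of the specification), the trigger might be modeled as follows:

```python
class ReadAheadWindow:
    """Sketch of the advancement trigger for the two logically contiguous
    read-ahead buffers. The trigger counts read requests whose end offsets
    pass a percentage-based threshold offset inside the second logical
    buffer; when the count threshold is exceeded, the window advances."""

    def __init__(self, first_start, buf_size, pct=0.5, max_hits=2):
        self.first_start = first_start  # start offset of the first logical buffer
        self.buf_size = buf_size        # size of each logical buffer, in bytes
        self.pct = pct                  # threshold offset as a fraction of buffer 2
        self.max_hits = max_hits        # request-count threshold
        self.hits = 0

    @property
    def second_start(self):
        # the second logical buffer starts one byte past the first's end
        return self.first_start + self.buf_size

    def on_read(self, end_offset):
        """Record a read request's end offset; advance when triggered.

        Returns True when an advancement was activated (the caller would
        then load data into the newly defined second logical buffer)."""
        threshold = self.second_start + int(self.pct * self.buf_size)
        if end_offset >= threshold:
            self.hits += 1
            if self.hits >= self.max_hits:
                # the old second buffer becomes the new first buffer; the
                # new second buffer starts one byte after its end offset
                self.first_start += self.buf_size
                self.hits = 0
                return True
        return False
```

With `buf_size=1000` and `pct=0.5`, requests ending at or beyond offset 1500 count toward the trigger; once enough are seen, the window advances by one buffer size.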
When processing an incoming read request, the data contents of the two logical buffers are treated as a contiguous data segment within a single buffer. In one embodiment, incoming read requests may be processed using the following method, briefly described here. As long as a continuous read stream, derived by observing the incoming and maintained values of the sequence ID, is maintained by the client application read session, the position of the buffers within the data segment being read is modified only using the method broadly described above, and read requests are served from the contents of the buffers or retrieved from the storage devices (if the data they reference is not fully contained in the buffers). When a new continuous read stream is identified, again derived by observing the incoming and maintained values of the sequence ID, the position of the buffers within the data segment being read is modified based on the offset of the incoming read request, and the requested data is served from the buffers.
When sending the data requested by a read operation to the client agent, the storage system partitions the returned data into multiple non-overlapping segments and sends each segment in a separate network message. The storage system sends these response messages in parallel, using multiple threads of execution and multiple network connections (i.e., each response message may be sent over a different network connection), thus balancing the response messages across the network connections. Because of this method, network bandwidth utilization between the storage system and the client agent is significantly improved. The client agent collects the response messages sent by the storage system and assembles the data of the read and read-ahead requests from the data segments conveyed in the response messages. Because network bandwidth is better utilized with the above method, overall read performance is enhanced.
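The partitioning of a response into non-overlapping segments can be sketched as follows (an illustrative sketch only; the fixed segment size and function name are assumptions, since the patent does not prescribe how segment boundaries are chosen):

```python
def split_response(start, end, seg_size):
    """Partition the inclusive byte range [start, end] into non-overlapping
    segments of at most seg_size bytes, one per response message. Each
    segment could then be sent over a different network connection."""
    segments = []
    pos = start
    while pos <= end:
        seg_end = min(pos + seg_size - 1, end)
        segments.append((pos, seg_end))
        pos = seg_end + 1  # next segment starts one byte past this one
    return segments
```

For example, a 2501-byte response covering offsets 0 through 2500, split with `seg_size=1000`, yields three messages covering (0, 999), (1000, 1999), and (2000, 2500); the client agent reassembles the original range from these segments.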
When a client application read session moves on to read a different storage segment, the read-ahead messages generated by the client agent may become stale if they are received at the storage system after the messages associated with the next segment have already been processed. According to the mechanisms of the illustrated embodiments, such messages can be filtered at the storage system using the following methods.
Each read and read-ahead message sent from the client agent to the storage system carries a sequence ID value that groups read messages into specific read sequences, such that all read and read-ahead requests associated with a particular storage segment being read sequentially by a thread of execution in the client application are assigned the same unique sequence ID value and are thus grouped together. There is an ordering relationship between sequence ID values. Sequence ID values are generated by the client agent independently for each client application read session and make it possible to identify the different storage segments being read sequentially by a session. Read and read-ahead requests remain associated with a specific sequence ID value as long as that value is not modified based on the client agent logic specified next.
In one embodiment, the client agent generates a new sequence ID value for a client application read session when: (1) there is no previous sequence ID value for the session, or (2) a new sequential read stream is initiated by the session. In one embodiment, a new sequential read stream may be identified by observing a gap (either a forward gap or a backward gap) in the current read stream, as shown in Figure 2 below. Specifically, a gap exists when the difference between the start offset of a new read request and the end offset of the most recent read request differs from one byte (this difference may be positive or negative). Observing that a read session has moved to read a different data entity in storage (e.g., a different independent file) also identifies a new sequential read stream. Such events are identified by observing the session using a new identifier of the storage entity.
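The conditions above for starting a new sequential read stream can be sketched as follows (an illustrative sketch only; the function and parameter names are assumptions, not part of the specification):

```python
def starts_new_stream(prev_entity, prev_end_offset, entity, start_offset):
    """Return True if a read request begins a new sequential read stream.

    A new stream begins when there is no previous request for the session,
    when the session switches to a different storage entity (e.g., a
    different independent file), or when the request leaves a forward or
    backward gap: its start offset differs from the previous request's
    end offset by anything other than one byte."""
    if prev_entity is None:    # condition (1): no previous sequence ID
        return True
    if entity != prev_entity:  # moved to a different storage entity
        return True
    # condition (2): forward or backward gap in the offset sequence
    return start_offset - prev_end_offset != 1
```

A request starting exactly one byte past the previous end offset continues the current stream; any other start offset (ahead or behind) triggers a new sequence ID.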
Figure 2 depicts exemplary ranges 50 in a particular data segment being read, illustrating gaps in a sequential read stream. The data range of a next read request is exemplified either as a data range 54 preceding, or as a data range 60 following, the data range 56 of the most recent read request. In the first case the read request creates a backward gap 52, and in the second case the read request creates a forward gap 58.
Referring now to Figure 3, an exemplary method 70 is shown for processing read requests by the storage system, applying the read-ahead logic and taking into account the incoming and maintained sequence ID values. For each client application read session, the storage system maintains a current sequence ID value, initialized to a null value. For a newly received read request associated with a client application read session (step 74): if there is no previous sequence ID value for this session (step 76), or if the received sequence ID value is newer than the maintained value (step 78), the maintained value is set to the value sent with the new read request (step 80), and the read request is processed further (step 82); if the received sequence ID value equals the maintained value (again, step 78), the maintained value is not changed, and the read request is processed further (step 82); if the received sequence ID value is earlier than the maintained value (again, step 78), the associated read request and its sequence ID value are discarded (step 84). The method 70 then ends (step 86).
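As an illustrative sketch of the decision logic of method 70 (names are hypothetical, and sequence IDs are modeled as integers with larger meaning newer; the patent requires only that an ordering relationship exist between sequence ID values):

```python
def screen_sequence_id(maintained_id, incoming_id):
    """Decide how the storage system treats an incoming read request
    based on its sequence ID. Returns (new_maintained_id, action),
    where action is 'process' or 'discard'."""
    if maintained_id is None or incoming_id > maintained_id:
        return incoming_id, "process"    # steps 76/78 -> 80, 82
    if incoming_id == maintained_id:
        return maintained_id, "process"  # step 78 -> 82
    return maintained_id, "discard"      # stale request: step 84
```

Note that a newer incoming ID both advances the maintained value and is processed, so stale requests from an earlier read sequence that arrive late are subsequently discarded.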
In one embodiment, the client agent maintains a read-ahead cache for each client application read session in order to efficiently process the read requests issued by the session. The client agent generates read-ahead requests to load data into its read-ahead cache. These requests are generated, and their responses from the storage system are processed, in an asynchronous (background) manner.
In one possible embodiment, the client agent records the farthest offset to which it has issued read-ahead requests and generates further read-ahead requests from that offset onward. In this embodiment, such read-ahead requests will not overlap in their data ranges, so the storage system processes incoming read requests according to their ranges without having to filter or modify the read requests due to overlapping ranges.
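This non-overlapping variant can be sketched as follows (an illustrative sketch only; the class and attribute names are assumptions):

```python
class ReadAheadIssuer:
    """Sketch of the client-agent variant that records the farthest offset
    already covered by issued read-ahead requests and generates each new
    request starting one byte past that offset, so ranges never overlap."""

    def __init__(self):
        self.farthest_issued = -1  # no read-ahead issued yet

    def next_request(self, length):
        """Return the next non-overlapping (start, end) read-ahead range."""
        start = self.farthest_issued + 1
        end = start + length - 1
        self.farthest_issued = end  # remember how far we have requested
        return (start, end)
```

Because each request begins immediately after the previous one ends, the storage system can process these requests as-is, without the range filtering required by the overlapping variant described next.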
In another, alternative embodiment, the client agent generates, in each iteration, read-ahead requests covering all the data required to be loaded into its read-ahead cache, without regard to previously issued read-ahead requests or to responses to read-ahead requests currently being generated or sent. This method simplifies the client agent implementation and results in read-ahead requests generated by the client agent that may overlap in their data ranges. This requires the storage system to also filter and modify incoming read requests based on their requested data ranges. As a result of this processing, the storage system can ensure that the read accesses applied to the storage devices through its read-ahead mechanism are in practice serialized with respect to their offsets, thus enhancing the effectiveness of the read-ahead mechanism used by the storage system. In this approach, the storage system filters and modifies read requests using the following method, shown in Figure 4 below.
FIG. 4 illustrates an exemplary method 90 for processing read requests by the storage system, which considers the incoming and maintained sequence ID values as well as a farthest offset value. For each client application read session, the storage system maintains the current farthest offset it has processed in the data segment being read. This value is initialized to null, and is maintained in addition to the maintained sequence ID value. For a new read request received from a client application read session (step 94), if the sequence ID value of the read request equals the maintained sequence ID value (step 98), then: if the end offset of the read request is less than or equal to the farthest offset (step 100), the request is discarded, because the requested range has already been processed and sent to the client agent (step 108). If the end offset of the read request is greater than the farthest offset (again, step 100), the farthest offset is modified to be the end offset of the read request (step 102), and the range of data to be read and sent to the client agent is computed as the range beginning one byte after the previous value of the farthest offset and ending at the new value of the farthest offset (step 104). This computation 120 is illustrated in FIG. 5, where an exemplary data range 122 of a read request with a start offset 124 and an end offset 132, together with the previous value 126 of the farthest offset, yields an updated data range 128 for the read request that ends at the new value 130 of the farthest offset.
If the sequence ID value of the read request is greater than the maintained sequence ID value (again, step 98), or if there is no previous sequence ID value for this session (step 96), the maintained sequence ID value is set to the value sent with the new read request (step 110), the farthest offset is set to the end offset of the new read request (step 112), and the read request is processed further without any change to its range (step 106). If the sequence ID value of the read request is less than the maintained value (again, step 98), the associated read request and its sequence ID value are discarded (again, step 108). The method 90 then ends (step 114).
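The filtering and trimming logic of method 90 can be sketched as follows; this is an illustrative interpretation, not the patent's implementation, and the `Session`, `ReadRequest`, and function names (with inclusive byte offsets) are assumptions made for this sketch:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Session:
    sequence_id: Optional[int] = None      # maintained sequence ID (null at start)
    farthest_offset: Optional[int] = None  # maintained farthest offset (null at start)

@dataclass
class ReadRequest:
    sequence_id: int
    start_offset: int
    end_offset: int

def filter_read_request(session: Session, req: ReadRequest) -> Optional[Tuple[int, int]]:
    """Return the data range to read and send to the client agent, or None
    if the request is discarded (steps 94-114 of method 90)."""
    if session.sequence_id is None or req.sequence_id > session.sequence_id:
        # New (or first) sequential stream: adopt its sequence ID and end
        # offset, and process the request unchanged (steps 110, 112, 106).
        session.sequence_id = req.sequence_id
        session.farthest_offset = req.end_offset
        return (req.start_offset, req.end_offset)
    if req.sequence_id < session.sequence_id:
        return None                        # stale stream: discard (step 108)
    if req.end_offset <= session.farthest_offset:
        return None                        # range already served (steps 100, 108)
    # Trim the range to start one byte past the previous farthest offset
    # (steps 102, 104), serializing storage accesses by offset.
    new_start = session.farthest_offset + 1
    session.farthest_offset = req.end_offset
    return (new_start, req.end_offset)
```

Under this sketch, overlapping requests from the same stream are trimmed so that each byte is read from the storage device at most once.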
In one embodiment, the storage system maintains a read-ahead cache for each client application read session. The following is an exemplary method for determining the data content to be loaded into the read-ahead cache, and for using the cache to serve read requests. The physical buffer constituting the read-ahead cache is logically partitioned into two buffers, whose data contents are determined using the following method. The two buffers are always logically contiguous with respect to the offsets of their data; that is, the start offset of the second logical buffer always begins one byte after the end offset of the first logical buffer. Either of the logical buffers, regardless of its layout within the physical buffer, may be the first logical buffer in terms of its offset in the data, with the other buffer then being the second logical buffer. This partitioning 140, 150 of exemplary data segments 148, 158 is shown in FIGS. 6 and 7 as cases (A) and (B), respectively, in which the physical buffers 142, 152 are partitioned into first and second logical buffers 144, 146 and 154, 156.
Initially, when both logical buffers are empty and the first read request in a client application read session is being processed, the following exemplary method may be applied. The start offset of one buffer (e.g., the buffer that is physically first within the physical buffer) is set to the start offset of the read request. The start offset of the other buffer is set to the end offset of the first logical buffer plus one byte. The size of the data to be loaded into the buffers is their total size (i.e., the size of the physical buffer). The data is loaded into both buffers (generally with a single read operation to the storage device), and incoming read requests are served from the buffers.
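The initial partitioning can be sketched as follows, assuming inclusive byte offsets and a physical buffer split into two equal halves (the function and field names are illustrative):

```python
def init_logical_buffers(request_start, physical_size):
    """Sketch of the initial partitioning (FIGS. 6-7): the physical buffer
    is split into two logically contiguous halves; the second logical
    buffer begins one byte after the first one ends."""
    half = physical_size // 2
    first = {"start": request_start, "end": request_start + half - 1}
    second = {"start": first["end"] + 1, "end": first["end"] + half}
    return first, second
```

Both halves would then be filled with a single read operation covering the combined range.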
The data content in the buffers may be advanced, in accordance with the way the read requests of the client application read session advance, using, for example, the following method. Advancing the data content in the buffers is done by setting the start offset of the first logical buffer to the end offset of the second logical buffer plus one byte. This switches the first and second logical buffers. Data is then loaded into the current second logical buffer (formerly the first logical buffer).
The trigger for advancing the data content of the buffers using the exemplary method specified above is that the number of read requests whose end offsets exceed an offset threshold exceeds a threshold on the number of such read requests. The offset threshold is recalculated whenever the data content of the logical buffers changes (i.e., the first and second logical buffers switch), and its value is associated with a percentage of the data range covered by the second logical buffer. In our method this percentage is 50%, meaning that when read requests begin to reference the latter half of the second logical buffer, the data content of the first logical buffer has a low probability of being accessed further, so the first logical buffer is advanced and becomes the second logical buffer. In one embodiment, the threshold on the number of such read requests is two. These thresholds 166 and the condition for triggering advancement of the data content of the buffers 168, 170 (e.g., more than two read requests 162 whose end offsets exceed the offset threshold 164) are shown for an exemplary data segment 172 in FIG. 8.
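The advancement trigger might be sketched as follows; the class name, the offset bookkeeping, and reading "exceeds the threshold of two" as "more than two requests" are assumptions made for illustration:

```python
class ReadAheadBuffers:
    """Sketch of the advancement trigger (FIG. 8): when more than
    `trigger_count` read requests end past the offset threshold (the
    midpoint of the second logical buffer), the buffers switch roles."""

    def __init__(self, half_size, trigger_count=2, first_start=0):
        self.half_size = half_size        # size of each logical buffer
        self.trigger_count = trigger_count
        self.first_start = first_start    # start offset of first logical buffer
        self.hits_past_threshold = 0

    @property
    def second_start(self):
        # The second logical buffer begins right after the first one ends.
        return self.first_start + self.half_size

    @property
    def offset_threshold(self):
        # 50% into the second logical buffer's data range.
        return self.second_start + self.half_size // 2

    def on_read(self, end_offset):
        """Count requests ending past the threshold; advance when the
        count exceeds the configured trigger."""
        if end_offset > self.offset_threshold:
            self.hits_past_threshold += 1
            if self.hits_past_threshold > self.trigger_count:
                self._advance()

    def _advance(self):
        # The first buffer moves past the second (the buffers switch
        # roles); new data would be loaded into the new second buffer
        # in the background, and the threshold is recalculated via the
        # properties above.
        self.first_start = self.second_start
        self.hits_past_threshold = 0
```

The offset threshold is a derived property here, so it is automatically "recalculated" whenever the buffers switch.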
When advancing the data content of the buffers, the loading of data into the newly defined second logical buffer is done as an asynchronous (background) process relative to the processing of read requests. If any read request must access data that is in the process of being loaded into the second logical buffer, that read request is blocked (using a synchronization mechanism) until the data has been loaded and is available in the second logical buffer.
When processing incoming read requests, the data content in the two logical buffers is treated as a contiguous data segment within a single cache buffer. In one embodiment, incoming read requests may be processed using the following method 180, shown in FIG. 9. Method 180 begins (step 182) with the receipt of a read request (step 184). If the cache buffer is empty (step 186), data is loaded into the two logical buffers using the method described previously (step 188), and the data of the read request is served from the cache buffer (step 196).
If the cache buffer is not empty (again, step 186), and the start and end offsets of the read request fall within the offsets of the cache buffer (step 190), the data of the read request is served from the cache buffer (again, step 196). If the sequence ID of the read request is greater than the current sequence ID (step 192), a flag is set indicating that the cache buffer is to be reset upon the first subsequent read request that exceeds the bounds of the cache buffer (as specified below), and the current sequence ID is set to the sequence ID of that read request (step 194). A read request whose sequence ID is less than the current sequence ID (again, step 192) would already have been discarded by the sequence ID screening described previously.
If the cache buffer is not empty (again, step 186), the offsets of the read request exceed the offsets of the cache buffer (again, step 190), the sequence ID of the read request equals the current sequence ID, and the flag indicating a cache buffer reset is off (indicating that this is still the same sequential read stream) (step 198), then the data referenced by the read request is generally retrieved from the storage device, with the following exceptions. (1) If a portion of the data referenced by the read request exists in the cache buffer, that portion may be served from the cache buffer. (2) If the current read request triggers a modification of the data content of the cache buffer, or if such a modification is already in progress, and if the data it references will exist in the modified data content of the cache buffer, the read request may block until the updated data content of the cache buffer has been loaded (step 200). It is implied above that read requests that trail behind the data content of the cache buffer retrieve their data from the storage device (specifically, those portions not present in the cache buffer), and never wait for modifications of the cache buffer's content (which always move forward).
If the sequence ID of the read request is greater than the current sequence ID, or the flag indicating a cache buffer reset is on (indicating that this is a new read stream) (step 198), the data content of the cache buffer is updated using the following method: the start offset of one logical buffer is set to the start offset of the read request; the start offset of the other logical buffer is set to the end offset of the first logical buffer plus one byte; the size read into the buffers is their total size; and the data is then loaded into the cache buffer (using a single read request to the storage device) (step 202). The flag indicating a cache buffer reset is turned off (again, step 202), and the read request is served from the cache buffer (step 196). Finally, a read request whose sequence ID is less than the current sequence ID would have been filtered on receipt by the preceding processing (described previously). Method 180 then ends (step 204).
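A condensed, single-threaded sketch of method 180 might look like the following; it omits the blocking/asynchronous-loading details of steps 198-202, uses a toy in-memory stand-in for the storage device, and all names are illustrative assumptions:

```python
STORAGE = bytes(range(256)) * 16          # toy stand-in for the storage device

def read_from_storage(start, end):
    """Inclusive-offset read from the toy storage device."""
    return STORAGE[start:end + 1]

class CacheBuffer:
    """Tracks the combined offset range of the two logical buffers as one
    coherent cached segment; background loading and blocking are omitted."""

    def __init__(self, size):
        self.size = size                  # total size of both logical buffers
        self.start = None                 # None while the cache is empty
        self.sequence_id = -1
        self.reset_flag = False
        self.data = b""

    def is_empty(self):
        return self.start is None

    def load(self, start):                # steps 188 / 202: fill both buffers
        self.start = start
        self.data = read_from_storage(start, start + self.size - 1)

    def contains(self, start, end):
        return (not self.is_empty()
                and start >= self.start
                and end < self.start + self.size)

    def serve(self, start, end):          # step 196
        off = start - self.start
        return self.data[off:off + (end - start + 1)]

def handle_read(cache, seq_id, start, end):
    if cache.is_empty():                                       # step 186
        cache.load(start)                                      # step 188
        cache.sequence_id = seq_id
        return cache.serve(start, end)                         # step 196
    if cache.contains(start, end):                             # step 190
        if seq_id > cache.sequence_id:                         # step 192
            cache.reset_flag = True      # reset on next out-of-range request
            cache.sequence_id = seq_id                         # step 194
        return cache.serve(start, end)
    if seq_id == cache.sequence_id and not cache.reset_flag:   # step 198
        return read_from_storage(start, end)   # step 200 (exceptions omitted)
    cache.load(start)                          # step 202: new stream, reset cache
    cache.reset_flag = False
    cache.sequence_id = seq_id
    return cache.serve(start, end)
```

In this sketch the reset flag defers cache invalidation until a new-stream request actually falls outside the cached range, matching the flag semantics described above.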
When sending the data requested by a read operation to the client agent, the storage system divides the returned data into multiple non-overlapping segments and sends each segment in a separate network message. The storage system sends these response messages in parallel, using multiple threads of execution and multiple network connections (i.e., each response message may be sent over a different network connection), thus balancing the response messages across the network connections. As a result, network bandwidth utilization between the storage system and the client agent improves significantly. The client agent collects the response messages sent by the storage system and assembles the data of the read and read-ahead requests from the data segments conveyed in the response messages. Because network bandwidth is better utilized with the above mechanism, overall read performance is enhanced.
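The segmentation and reassembly described here can be sketched as follows; the dictionary message format and the `segment_size` parameter are illustrative assumptions, and the actual transport over multiple connections is omitted:

```python
def split_into_segments(data, start_offset, segment_size):
    """Storage-system side: divide a read response into non-overlapping
    segments, each of which could be sent as a separate message over a
    different network connection."""
    return [{"offset": start_offset + off,
             "payload": data[off:off + segment_size]}
            for off in range(0, len(data), segment_size)]

def reassemble(segments, start_offset, total_len):
    """Client-agent side: compose the requested data from response
    messages, which may arrive out of order."""
    buf = bytearray(total_len)
    for seg in segments:
        pos = seg["offset"] - start_offset
        buf[pos:pos + len(seg["payload"])] = seg["payload"]
    return bytes(buf)
```

Because each segment carries its own offset, the client agent can reconstruct the data correctly regardless of the order in which messages arrive on the different connections.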
FIG. 10 illustrates exemplary hardware 250 suitable for implementing aspects of the claimed subject matter below. In the depicted embodiment, an exemplary portion 252 of architecture 10 (FIG. 1) is shown. Portion 252 of architecture 10 is operable as part of a computer environment in which the mechanisms of the previously illustrated embodiments may be implemented. It should be understood, however, that FIG. 10 is exemplary only and is not intended to assert or imply any limitation with regard to the particular architectures in which the exemplary aspects of the various embodiments may be implemented. Many modifications to the architecture depicted in FIG. 10 may be made without departing from the scope of the following description and claimed subject matter.
Portion 252 includes a processor 254 and a memory 256, such as random access memory (RAM). Portion 252 may be operatively coupled to a number of components not shown for convenience, including a display that presents images such as windows to the user on a graphical user interface, a keyboard, a mouse, a printer, and the like. Of course, those skilled in the art will recognize that any combination of the above components, or any number of different components, peripherals, and other devices, may be used with portion 252.
In the illustrated embodiment, portion 252 operates under the control of an operating system (OS) 258 (e.g., z/OS, OS/2, LINUX, UNIX, WINDOWS, MAC OS) stored in memory 256, and interfaces with the user to accept inputs and commands and to present results. In one embodiment of the present invention, the OS 258 facilitates read-ahead functionality according to the present invention. To this end, OS 258 includes a read-ahead module 264 that may be adapted to carry out the various processes and mechanisms of the exemplary methods described in the previously illustrated embodiments.
Portion 252 may implement a compiler 262 that allows an application program 260, written in a programming language such as COBOL, PL/1, C, C++, JAVA, ADA, BASIC, VISUAL BASIC, or any other programming language, to be translated into code readable by processor 254. After completion, the application program 260 accesses and manipulates data stored in the memory 256 of portion 252 using the relationships and logic generated with compiler 262.
In one embodiment, instructions implementing the operating system 258, the application program 260, and the compiler 262 are tangibly embodied in a computer-readable medium, which may include one or more fixed or removable data storage devices, such as a zip drive, disc, hard drive, DVD/CD-ROM, digital tape, solid state drive (SSD), and the like. Further, the operating system 258 and the application program 260 may comprise instructions which, when read and executed by portion 252, cause portion 252 to perform the steps necessary to implement and/or use the present invention. Application program 260 and/or operating system 258 instructions may also be tangibly embodied in memory 256. As such, the terms "article of manufacture," "program storage device," and "computer program product," as may be used herein, are intended to encompass a computer program accessible and/or operable from any computer-readable device or medium.
Embodiments of the present invention may include one or more associated software application programs 260 that include, for example, functions for managing a distributed computer system comprising a network of computing devices, such as a storage area network (SAN). Accordingly, processor 254 may comprise one or more storage management processors (SMPs). The application program 260 may operate within a single computer, or as part of a distributed computer system comprising a network of computing devices. The network may encompass one or more computers connected via a local area network and/or Internet connection (which may be public or secure, e.g., through a virtual private network (VPN) connection), or via a Fibre Channel SAN or other known network types as will be understood by those skilled in the art.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects, all of which may generally be referred to herein as a "circuit," "module," or "system." Furthermore, in some embodiments, aspects of the present invention may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.
Any combination of one or more computer-readable media may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
While one or more embodiments of the present invention have been illustrated in detail, the skilled artisan will appreciate that modifications and adaptations to those embodiments may be made without departing from the scope of the present invention as set forth in the following claims.
Claims (16)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/958,196 | 2010-12-01 | ||
US12/958,196 US20120144123A1 (en) | 2010-12-01 | 2010-12-01 | Read-ahead processing in networked client-server architecture |
PCT/EP2011/070285 WO2012072418A1 (en) | 2010-12-01 | 2011-11-16 | Read-ahead processing in networked client-server architecture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103250141A CN103250141A (en) | 2013-08-14 |
CN103250141B true CN103250141B (en) | 2015-12-16 |
Family
ID=44971039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201180057801.6A Expired - Fee Related CN103250141B (en) | 2010-12-01 | 2011-11-16 | Networking client-server architecture structure in pre-read process |
Country Status (5)
Country | Link |
---|---|
US (7) | US20120144123A1 (en) |
CN (1) | CN103250141B (en) |
DE (1) | DE112011103276T5 (en) |
GB (1) | GB2499946B (en) |
WO (1) | WO2012072418A1 (en) |
2010
- 2010-12-01 US US12/958,196 patent/US20120144123A1/en not_active Abandoned
2011
- 2011-11-16 CN CN201180057801.6A patent/CN103250141B/en not_active Expired - Fee Related
- 2011-11-16 DE DE112011103276T patent/DE112011103276T5/en not_active Ceased
- 2011-11-16 WO PCT/EP2011/070285 patent/WO2012072418A1/en active Application Filing
- 2011-11-16 GB GB1310506.9A patent/GB2499946B/en active Active
2012
- 2012-06-04 US US13/488,157 patent/US8832385B2/en active Active
2013
- 2013-03-08 US US13/789,932 patent/US9251082B2/en not_active Expired - Fee Related
- 2013-03-08 US US13/789,924 patent/US8578102B2/en not_active Expired - Fee Related
- 2013-03-08 US US13/789,927 patent/US8595444B2/en not_active Expired - Fee Related
- 2013-03-08 US US13/789,914 patent/US8578101B2/en not_active Expired - Fee Related
- 2013-03-08 US US13/789,907 patent/US8949543B2/en not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1109306C (en) * | 1996-08-19 | 2003-05-21 | International Business Machines Corporation | Device-independent, transfer-optimized interactive client-server dialog system |
Also Published As
Publication number | Publication date |
---|---|
GB2499946B (en) | 2019-01-16 |
GB2499946A (en) | 2013-09-04 |
US20130191448A1 (en) | 2013-07-25 |
US20120239749A1 (en) | 2012-09-20 |
WO2012072418A1 (en) | 2012-06-07 |
US20120144123A1 (en) | 2012-06-07 |
GB201310506D0 (en) | 2013-07-24 |
DE112011103276T5 (en) | 2013-07-18 |
CN103250141A (en) | 2013-08-14 |
US20130205095A1 (en) | 2013-08-08 |
US20130185518A1 (en) | 2013-07-18 |
US8595444B2 (en) | 2013-11-26 |
US20130191602A1 (en) | 2013-07-25 |
US20130191490A1 (en) | 2013-07-25 |
US8949543B2 (en) | 2015-02-03 |
US8578101B2 (en) | 2013-11-05 |
US8578102B2 (en) | 2013-11-05 |
US8832385B2 (en) | 2014-09-09 |
US9251082B2 (en) | 2016-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103250141B (en) | Read-ahead processing in a networked client-server architecture | |
US11301165B2 (en) | Accelerating shared file checkpoint with local burst buffers | |
CN102473112B (en) | Cache prefill method, product and system for thread migration | |
CN102906738B (en) | Accelerator system and data access method in out-of-core processing environment | |
US8874877B2 (en) | Method and apparatus for preparing a cache replacement catalog | |
US9135032B2 (en) | System, method and computer program product for data processing and system deployment in a virtual environment | |
JP5833897B2 (en) | Method, system, and computer program for data processing | |
JP2023504680A (en) | Training neural networks with data flow graphs and dynamic memory management | |
JP2014503886A (en) | Deduplication storage system, method and program for facilitating synthetic backup inside thereof | |
US10970254B2 (en) | Utilization of tail portions of a fixed size block in a deduplication environment by deduplication chunk virtualization | |
JP5613139B2 (en) | Method, system, and computer program for writing data (sliding write window mechanism for writing data) | |
CN110046047A (en) | Inter-process communication method, device and computer-readable storage medium |
CN114365109A (en) | RDMA-enabled key-value store | |
US10983949B2 (en) | File system quota versioning | |
JP2019537097A (en) | Tracking I-node access patterns and pre-empting I-nodes | |
CN118276950A (en) | Instruction processing method, apparatus, electronic device, storage medium and program product | |
CN111198843B (en) | File system writing acceleration method based on bus control on application processor chip | |
CN114721584A (en) | Method, apparatus and computer program product for writing data | |
CN104333803A (en) | Method, equipment and system for preventing frame loss in process of video editing | |
US11106588B2 (en) | Deferred method of allocating disk space for lightning segments | |
CN117667307A (en) | Virtual machine starting method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | ||
Granted publication date: 2015-12-16 |