CN102354301A - Cache partitioning method - Google Patents
- Publication number
- CN102354301A CN102354301A CN2011102864226A CN201110286422A CN102354301A CN 102354301 A CN102354301 A CN 102354301A CN 2011102864226 A CN2011102864226 A CN 2011102864226A CN 201110286422 A CN201110286422 A CN 201110286422A CN 102354301 A CN102354301 A CN 102354301A
- Authority
- CN
- China
- Prior art keywords
- data block
- cache
- record
- bit
- caching data
- Prior art date
- Legal status
- Granted
Landscapes
- Memory System Of A Hierarchy Structure (AREA)
Abstract
The invention discloses a cache partitioning method comprising the following steps. Partitioning: the last-level cache is logically divided into two regions of equal size, partition one and partition two. Adding cache-block information bits: an access-count field of 2 bits is added to record the number of times each cache block has been accessed. Adding a history access record table: a new table records the cache blocks that have been accessed; each record consists of a cache block's tag bits and a valid bit. The invention partitions the last-level cache to improve its utilization. For large caches, frequently accessed blocks are kept in the cache while rarely accessed blocks are moved to main memory, improving the cache hit rate and overall system performance.
Description
Technical Field
The invention belongs to the field of storage technology and relates to large-capacity caches and a cache partitioning method for multi-core architectures.
Background Art
The performance of most computer systems today is largely determined by average memory access latency; improving the cache hit rate reduces the number of memory accesses and thus improves system performance. Current processors all employ caching, whose main purpose is to bridge the mismatch in speed and performance between the processor and slow main memory. Caches are organized hierarchically: most current processors use three cache levels (L1, L2, L3), with the last-level cache (L3) closest to main memory. As the capacity of the last-level cache keeps growing, the corresponding management policies must also keep improving in order to raise cache utilization and reduce the number of main-memory accesses.
Cache management policies comprise an insertion policy and a replacement policy. The insertion policy decides where in the cache a block read from main memory should be placed. The replacement policy is needed because cache capacity is limited: when a new block arrives, an existing cache block must be moved from the cache back to main memory to make room for it. The policy used by most processors today is least recently used (LRU). LRU treats a cache set as a linked list: when a new block is to be inserted, the block at the tail of the list is moved to main memory, the other blocks shift back one position, and the new block is placed at the head. During cache accesses, if a block is hit, LRU moves that block to the head of the list. The LRU algorithm is effective for managing small caches but becomes rather inefficient for large ones. Today's last-level caches are large, and how to manage them is a question many researchers are studying; hence it is necessary to provide a method that solves this problem.
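The linked-list view of LRU described above can be sketched in Python; this is a minimal illustration, and the class and method names are ours, not from the patent:

```python
from collections import OrderedDict

class LRUSet:
    """One cache set managed with the LRU policy: head = most recently used."""

    def __init__(self, ways):
        self.ways = ways
        self.blocks = OrderedDict()

    def access(self, tag):
        """Return True on a hit; on a miss, insert tag, evicting the LRU block."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag, last=False)  # hit block goes to the head
            return True
        if len(self.blocks) >= self.ways:
            self.blocks.popitem(last=True)            # evict the block at the tail
        self.blocks[tag] = None
        self.blocks.move_to_end(tag, last=False)      # new block goes to the head
        return False
```

For a 16-way set, `LRUSet(16)` models one set; a real cache would hold one such structure per set index.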
Summary of the Invention
The purpose of the embodiments of the present invention is to provide a cache partitioning method that improves the cache hit rate and system performance.
An embodiment of the present invention is realized as a cache partitioning method comprising the following steps:
Partitioning: logically divide the last-level cache into two regions of equal size, partition one and partition two;
Adding cache-block information bits: add an access-count field, using 2 bits to record the number of times a cache block has been accessed;
Adding a history access record table: add a table that records the cache blocks that have been accessed; each record consists of a cache block's tag bits and a valid bit.
Further, partition one and partition two have identical cache configurations; partition one stores cache blocks that have not been accessed before, while partition two stores blocks that were accessed previously but were moved to main memory.
Further, the added cache-block information is an access-count field represented by 2 bits.
Further, each cache block carries several information bits, chiefly the tag bits, valid bit, LRU bits, read/write bit, and access-count bits.
Further, the history access record table stores the access records of replaced cache blocks; each record consists of a block's tag bits and a valid bit.
Further, the number of records the history access record table can hold equals the number of cache blocks a partition can hold.
Further, the history access record table stores the tag bits of cache blocks previously evicted to main memory; when a cache block is to be moved to main memory, its tag bits are stored in this table.
Further, when a cache block is read from main memory into the cache, a table lookup is performed: if the block's tag bits are found in the record table, the block is stored in partition two and the valid bit of its record is set to 0; otherwise the block is stored in partition one.
Further, the record table uses a first-in first-out replacement method: when a cache block's tag bits are to be stored in the table, the table is first searched for a record whose valid bit is 0; if one exists, the tag bits are stored in that record and its valid bit is set to 1; otherwise, the tag bits of the last record in the table are overwritten with the tag bits to be stored.
The present invention partitions the last-level cache to improve its utilization. For large caches, frequently accessed blocks are kept in the cache while rarely accessed blocks are moved to main memory, improving the cache hit rate and system performance.
Brief Description of the Drawings
Figure 1 is a flow diagram of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
The cache partitioning method of the present invention divides a large cache into two regions of equal size; it is a cache management strategy based on the least-recently-used policy. The two partitions store blocks of different natures: rarely accessed blocks are moved out of the cache, leaving more space for frequently accessed blocks and thereby improving cache utilization. A history access record table records the cache blocks that have been accessed. When a cache block must be read from main memory into the cache, this table is consulted: if the block's tag bits are found in the table, the block is placed in partition two; otherwise it is placed in partition one.
Referring to Figure 1, the cache partitioning method of the present invention comprises the following steps:
Partitioning: logically divide the last-level cache into two regions of equal size, partition one and partition two;
Adding cache-block information bits: add an access-count field, using 2 bits to record the number of times a cache block has been accessed;
Adding a history access record table: add a table that records the cache blocks that have been accessed; each record consists of a cache block's tag bits and a valid bit.
Partition one and partition two have identical cache configurations: they have the same set associativity and access latency, and both use the least-recently-used (LRU) management policy. Partition one stores cache blocks that have not been accessed before, while partition two stores blocks that were accessed previously but were moved to main memory.
The advantage of splitting the cache into two smaller partitions is lower access latency: cache access latency grows with capacity, so a larger cache has higher latency and a smaller one has lower latency. Partition one stores blocks that have not been accessed and are therefore assumed to be rarely used. Partition two stores blocks that were accessed previously but were moved to main memory and are therefore assumed to be frequently used. Storing frequently and rarely accessed blocks separately in this way reduces cache access latency.
The added cache-block information is a 2-bit access-count field. Each cache block carries several information bits, chiefly the tag bits, valid bit, LRU bits, and read/write bit. Which information bits are needed depends on the cache replacement algorithm: for example, in a 16-way set-associative cache using the least-recently-used (LRU) replacement algorithm, each block needs 4 bits to store its LRU information. The present invention adds to these a 2-bit field that records the number of times the cache block has been accessed, abbreviated UC.
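As a quick check of the bit counts mentioned above (the helper name is ours):

```python
import math

def lru_bits(ways):
    # An N-way set needs ceil(log2(N)) bits per block to encode its LRU position.
    return math.ceil(math.log2(ways))

print(lru_bits(16))  # 4 LRU bits for a 16-way set, plus the 2-bit UC field proposed here
```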
When a new cache block is read from main memory into the cache (partition one or partition two), its UC value is set to 0; the other information bits are set in the usual way. During cache accesses, if a block is hit and its UC value is less than 3, its UC value is incremented by 1. The limit exists because UC is stored in 2 bits and therefore ranges from 0 to 3.
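The 2-bit saturating counter behavior can be sketched as follows (function names are illustrative):

```python
UC_MAX = 3  # 2 bits can hold values 0..3

def on_fill(block):
    """A block newly read from main memory starts with UC = 0."""
    block['uc'] = 0

def on_hit(block):
    """On a hit, increment UC, saturating at UC_MAX so it fits in 2 bits."""
    if block['uc'] < UC_MAX:
        block['uc'] += 1

blk = {}
on_fill(blk)
for _ in range(5):
    on_hit(blk)
print(blk['uc'])  # 3: the counter saturates even after five hits
```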
When cache capacity is exhausted and a replacement must be performed, suppose as an example that the incoming block is to be stored in partition one. The LRU policy first selects the block in partition one with the largest LRU value; this block is called the victim. The victim's UC value is then examined. If it is less than 2, the victim's tag bits are stored in the history access record table and the victim is moved to main memory. Otherwise, its UC field is reset to 0 and the block is moved to partition two, which in turn requires evicting the block in partition two with the largest LRU value; that block becomes the new victim. Its UC value is likewise checked: if less than 2, it is moved directly to main memory and its tag bits are stored in the history table; otherwise its UC field is cleared and it is moved to partition one. This repeats until a block is found whose LRU value is the largest in its partition and whose UC value is less than 2; that block is moved to main memory and its tag bits are stored in the history access record table.
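The back-and-forth victim search between the two partitions can be sketched as follows; partitions are modeled as plain lists of dicts, and all names are ours:

```python
def evict(partitions, start, history):
    """Find a block to send back to main memory, starting in partition
    `start`. A victim with UC >= 2 is demoted to the other partition with
    its UC (and, in this sketch, LRU position) reset instead of being evicted."""
    part = start
    while True:
        victim = max(partitions[part], key=lambda b: b['lru'])  # largest LRU value
        partitions[part].remove(victim)
        if victim['uc'] < 2:
            history.append(victim['tag'])  # record the evicted tag
            return victim                  # this block goes to main memory
        victim['uc'] = 0                   # second chance: clear the counter,
        victim['lru'] = 0                  # treat it as most recently used,
        part = 1 - part                    # and move it to the other partition
        partitions[part].append(victim)
```

Because every demoted block has its counter cleared, the loop terminates: a block with UC below 2 is eventually found in one of the two partitions.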
The newly added history access record table records the cache blocks that have been accessed. It stores the access records of replaced cache blocks; each record consists of a block's tag bits and a valid bit. The number of records the table can hold equals the number of cache blocks a partition can hold. For example, with a 2MB cache split into two 1MB partitions and a cache block size of 64B, each partition holds 16K blocks, so the table holds 16K records (here K denotes 2^10, M denotes 2^20, and B denotes bytes). The table has the same set associativity as the partitions: for a 16-way set-associative cache, the table is also 16-way set-associative.
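The sizing arithmetic in this example checks out as follows:

```python
KiB = 2 ** 10
MiB = 2 ** 20

cache_size = 2 * MiB              # total last-level cache
partition_size = cache_size // 2  # two equal 1 MiB partitions
block_size = 64                   # bytes per cache block

blocks_per_partition = partition_size // block_size
print(blocks_per_partition)  # 16384, i.e. 16K records needed in the history table
```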
The history access record table stores the tag bits of cache blocks previously evicted to main memory. When a cache block is to be moved to main memory, its tag bits are stored in this table. When a cache block is read from main memory into the cache, a table lookup is performed: if the block's tag bits are in the table, the block is stored in partition two and the valid bit of its record in the table is set to 0; otherwise the block is stored in partition one.
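The fill-time lookup can be sketched as follows (a simplified fully associative table; all names are ours):

```python
def place_on_fill(tag, history, part1, part2):
    """Place a block fetched from main memory: partition two if its tag is
    a valid record in the history table (consuming that record), else
    partition one. UC starts at 0 either way."""
    for rec in history:
        if rec['valid'] and rec['tag'] == tag:
            rec['valid'] = 0                   # invalidate the matching record
            part2.append({'tag': tag, 'uc': 0})
            return 'partition two'
    part1.append({'tag': tag, 'uc': 0})
    return 'partition one'
```

In the actual scheme the table is set-associative like the partitions, so only one set of records would need to be searched per lookup.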
The record table uses a first-in first-out replacement policy. When a cache block's tag bits are to be stored in the table, the table is first searched for a record whose valid bit is 0. If one exists, the tag bits are stored in that record and its valid bit is set to 1; otherwise, the tag bits of the last record in the table are overwritten with the tag bits to be stored.
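The table-insert rule above can be sketched as follows (list position stands in for FIFO age; names are ours):

```python
def record_tag(table, tag):
    """Store an evicted block's tag: reuse the first invalid record if any,
    otherwise overwrite the last (oldest, FIFO-style) record."""
    for rec in table:
        if rec['valid'] == 0:
            rec['tag'], rec['valid'] = tag, 1
            return
    table[-1]['tag'] = tag   # no free slot: overwrite the last record
    table[-1]['valid'] = 1
```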
Consider the last-level cache (L3) of a processor with the following configuration: 2MB capacity, 16-way set-associative, 4 bits for the LRU value, 64B cache blocks, managed with the LRU policy. It is divided into two partitions A and B, each configured as: 1MB capacity, 16-way set-associative, 4 bits for the LRU value, 64B cache blocks, managed with the LRU policy. The history table contains 16K records, is 16-way set-associative, and uses a first-in first-out policy (K denotes 2^10, M denotes 2^20, and B denotes bytes). When a cache block D is read from main memory into the cache, D's tag bits are first looked up in the record table. If found, the valid bit of the matching record is set to 0 and D is stored in partition B; otherwise D is stored in partition A. Whichever partition it is stored in, its UC value is set to 0. Suppose D is stored in partition A. First the block in A with LRU value 15 is selected as the victim, and its UC value is tested. If the UC value is less than 2, the victim's tag bits are stored in the history table and the victim is moved to main memory. Otherwise, its UC value is set to 0 and it is moved to partition B, which requires selecting the block in B with LRU value 15 as the new victim and testing that victim's UC value in turn. The loop repeats until a victim is found whose LRU value is 15 and whose UC value is less than 2; that victim's tag bits are stored in the history access record table and the victim is moved to main memory.
The present invention mainly targets large-capacity caches: by improving the cache management policy, frequently accessed cache blocks are kept in the cache while rarely accessed blocks are moved to main memory, improving the cache hit rate and system performance.
The above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (9)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201110286422.6A (granted as CN102354301B) | 2011-09-23 | 2011-09-23 | Cache partitioning method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN102354301A (en) | 2012-02-15 |
| CN102354301B (en) | 2014-03-19 |
Family
ID=45577867
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201110286422.6A (Expired - Fee Related, granted as CN102354301B) | Cache partitioning method | 2011-09-23 | 2011-09-23 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN102354301B (en) |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN103634231A (en) * | 2013-12-02 | 2014-03-12 | 江苏大学 | Content popularity-based CCN cache partition and substitution method |
| CN104239233A (en) * | 2014-09-19 | 2014-12-24 | 华为技术有限公司 | Cache managing method, cache managing device and cache managing equipment |
| CN105743975A (en) * | 2016-01-28 | 2016-07-06 | 深圳先进技术研究院 | Cache placing method and system based on data access distribution |
| CN109032970A (en) * | 2018-06-16 | 2018-12-18 | 温州职业技术学院 | A kind of method for dynamically caching based on lru algorithm |
| CN110059482A (en) * | 2019-04-26 | 2019-07-26 | 海光信息技术有限公司 | The exclusive update method and relevant apparatus of exclusive spatial cache unit |
| CN116560585A (en) * | 2023-07-05 | 2023-08-08 | 支付宝(杭州)信息技术有限公司 | Method and system for hierarchical data storage |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN1543605A (en) * | 2001-06-29 | 2004-11-03 | | Storage of Cache Metadata |
| US20050216693A1 (en) * | 2004-03-23 | 2005-09-29 | International Business Machines Corporation | System for balancing multiple memory buffer sizes and method therefor |
| CN101320353A (en) * | 2008-07-18 | 2008-12-10 | 四川长虹电器股份有限公司 | Design method of embedded type browser caching |
- 2011-09-23: CN application CN201110286422.6A, granted as CN102354301B; status: not active (Expired - Fee Related)
Non-Patent Citations (1)
| Title |
|---|
| XIANG Lingxiang et al., "Less reused filter: improving L2 cache performance via filtering less reused lines", Proceedings of the 23rd International Conference on Supercomputing, ACM, 12 June 2009, pages 1-12 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN102354301B (en) | 2014-03-19 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN102314397B (en) | Method for processing cache data block | |
| CN105094686B (en) | Data cache method, caching and computer system | |
| CN104794064B (en) | A kind of buffer memory management method based on region temperature | |
| CN104166634A (en) | Management method of mapping table caches in solid-state disk system | |
| US9489239B2 (en) | Systems and methods to manage tiered cache data storage | |
| CN104834607B (en) | A kind of hit rate for improving distributed caching and the method for reducing solid state hard disc abrasion | |
| CN103246613B (en) | Buffer storage and the data cached acquisition methods for buffer storage | |
| CN103150136B (en) | Implementation method of least recently used (LRU) policy in solid state drive (SSD)-based high-capacity cache | |
| CN107391398B (en) | A management method and system for flash memory cache area | |
| CN102354301B (en) | Cache partitioning method | |
| CN103885728A (en) | Magnetic disk cache system based on solid-state disk | |
| CN104102591A (en) | Computer subsystem and method for implementing flash translation layer therein | |
| CN105389135B (en) | A kind of solid-state disk inner buffer management method | |
| JP2018537770A (en) | Profiling cache replacement | |
| CN103678169A (en) | Method and system for efficiently utilizing solid-state disk for caching | |
| GB2444818A (en) | Line swapping scheme to reduce back invalidations in a snoop filter | |
| CN107451071A (en) | A kind of caching replacement method and system | |
| CN109240944B (en) | A data read and write method based on variable length cache line | |
| CN111580754B (en) | A Write-Friendly Flash SSD Cache Management Method | |
| CN107423229A (en) | A kind of buffering area improved method towards page level FTL | |
| CN103019963B (en) | The mapping method of a kind of high-speed cache and storage device | |
| CN103885890B (en) | Replacement processing method and device for cache blocks in caches | |
| CN102521161B (en) | Data caching method, device and server | |
| CN111290974B (en) | Cache elimination method for storage device and storage device | |
| KR101976320B1 (en) | Last level cache memory and data management method thereof |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2014-03-19; Termination date: 2018-09-23 |