
CN103810115A - Management method and device of memory pool - Google Patents

Management method and device of memory pool

Info

Publication number
CN103810115A
Authority
CN
China
Prior art keywords
memory block
memory
linked list
LRU linked list
pool
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201210460482.XA
Other languages
Chinese (zh)
Other versions
CN103810115B (en)
Inventor
黄明生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tencent Computer Systems Co Ltd
Original Assignee
Shenzhen Tencent Computer Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tencent Computer Systems Co Ltd filed Critical Shenzhen Tencent Computer Systems Co Ltd
Priority to CN201210460482.XA priority Critical patent/CN103810115B/en
Publication of CN103810115A publication Critical patent/CN103810115A/en
Application granted granted Critical
Publication of CN103810115B publication Critical patent/CN103810115B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a method and device for managing a memory pool, and belongs to the field of communications. The method comprises: when a memory request from a user is received, obtaining a preset number of memory blocks from the head of a least recently used (LRU) linked list added in advance to the memory pool; searching the preset number of memory blocks at the head of the LRU linked list for a free memory block, and returning the free memory block found to the user; and, when the free memory in the memory pool exceeds a preset threshold, releasing free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold. The method and device solve two problems of the prior art: when a user requests memory, an address mapping must be established in the TLB every time; and when memory is released back to the system, eviction always starts from the larger memory blocks regardless of whether those blocks are frequently used. The CPU's access speed and the pool's memory-allocation efficiency are thereby improved, and the probability of thrashing in the memory pool is reduced.

Description

Management method and device for a memory pool
Technical field
The present invention relates to the field of communications, and in particular to a method and device for managing a memory pool.
Background technology
A memory pool is a memory-allocation scheme adopted to address the heavy memory fragmentation that arises when memory is allocated directly from the system: with frequent memory use, the sizes of the requested blocks vary, producing a large number of fragments. Before memory is actually used, the pool first requests a number of memory blocks, generally of equal size, and sets them aside. When a new memory demand arrives, some blocks are simply taken from the pool and used, and if the pooled blocks do not suffice, new memory is requested to extend the pool. Memory fragmentation is thus avoided as far as possible, and memory-allocation efficiency is improved.
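As a concrete illustration of the pooling idea described above, the following minimal Python sketch pre-allocates equal-sized blocks once and then reuses them; the names `Chunk`, `alloc`, and `free` are illustrative and do not appear in the patent.

```python
class Chunk:
    """A pool of `count` equal-sized, pre-allocated memory blocks."""

    def __init__(self, block_size, count):
        self.block_size = block_size
        self.free_blocks = [bytearray(block_size) for _ in range(count)]

    def alloc(self):
        # Reusing a pooled block avoids a fresh system allocation.
        if self.free_blocks:
            return self.free_blocks.pop()
        # Pool exhausted: fall back to requesting new memory.
        return bytearray(self.block_size)

    def free(self, block):
        # Set the block aside for reuse instead of returning it to the system.
        self.free_blocks.append(block)
```

Because every block in a chunk has the same size, returning a block to `free_blocks` can never create fragmentation inside the chunk.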
At present, when a user of the memory pool requests memory, a free memory block matching the requested size is taken out of the pool and given to the user; the user establishes an address mapping for the returned block in the TLB (Translation Lookaside Buffer) so that the CPU (Central Processing Unit) can access it. If there is no free block in the pool, a new memory block is requested from the system and given to the user. When there is too much free memory in the pool, free memory blocks are released from the pool back to the system, generally starting with the larger blocks.
In the course of making the present invention, the inventor found that the prior art has at least the following problems:
when a user requests memory, a free block is taken directly from the pool, and an address mapping must be established in the TLB every time, which reduces the CPU's access speed. When memory is released back to the system, eviction generally starts from the larger blocks regardless of whether those blocks are frequently used, so a block may be released back to the system only to be requested from the system again shortly afterwards; this memory-pool thrashing reduces the pool's allocation efficiency.
Summary of the invention
In order to solve the problems of the prior art, embodiments of the present invention provide a method and a device for managing a memory pool. The technical solution is as follows:
In one aspect, a method for managing a memory pool is provided, the method comprising:
when a memory request from a user is received, obtaining a preset number of memory blocks from the head of a least recently used (LRU) linked list added in advance to the memory pool;
searching the preset number of memory blocks at the head of the LRU linked list for a free memory block, and returning the free memory block found to the user; and
when the free memory in the memory pool exceeds a preset threshold, releasing free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
Specifically, before the memory request from the user is received, the method comprises:
adding the LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back.
Specifically, after the free memory block found is returned to the user, the method comprises:
modifying the LRU linked list so that the returned free memory block is placed at the very front of the head of the LRU linked list.
Specifically, the searching the preset number of memory blocks at the head of the LRU linked list for a free memory block further comprises:
if no free memory block is found among the preset number of memory blocks at the head of the LRU linked list, retrieving a free memory block from elsewhere in the memory pool, returning it to the user, and placing the returned free memory block at the very front of the head of the LRU linked list; and
if there is no free memory block in the memory pool, requesting a new memory block from the system, returning it to the user, and placing the newly requested memory block at the very front of the head of the LRU linked list.
Specifically, the searching the preset number of memory blocks at the head of the LRU linked list for a free memory block and returning the free memory block found to the user comprises:
if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, returning the free memory block closest to the front of the LRU linked list to the user.
In another aspect, a device for managing a memory pool is provided, the device comprising:
an acquisition module, configured to, when a memory request from a user is received, obtain a preset number of memory blocks from the head of an LRU linked list added in advance to the memory pool;
a search module, configured to search the preset number of memory blocks at the head of the LRU linked list for a free memory block and return the free memory block found to the user; and
a release module, configured to, when the free memory in the memory pool exceeds a preset threshold, release free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
Specifically, the device comprises:
an adding module, configured to, before the acquisition module receives the memory request from the user, add the LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back.
Specifically, the device comprises:
a modification module, configured to, after the search module returns the free memory block found to the user, modify the LRU linked list so that the returned free memory block is placed at the very front of the head of the LRU linked list.
Specifically, the device further comprises:
a retrieval module, configured to, if the search module finds no free memory block among the preset number of memory blocks at the head of the LRU linked list, retrieve a free memory block from elsewhere in the memory pool, return it to the user, and place the returned free memory block at the very front of the head of the LRU linked list; and, if there is no free memory block in the memory pool, request a new memory block from the system, return it to the user, and place the newly requested memory block at the very front of the head of the LRU linked list.
Specifically, the search module is configured to, if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, return the free memory block closest to the front of the LRU linked list to the user.
The technical solution provided by the embodiments of the present invention brings the following beneficial effects:
when a memory request from a user is received, a preset number of memory blocks are obtained from the head of an LRU linked list added in advance to the memory pool, and a free memory block found among them is returned to the user; when the free memory in the pool exceeds a preset threshold, free memory blocks are released back to the system starting from the tail of the LRU linked list. This solves the prior-art problems that an address mapping must be established in the TLB every time a user requests memory from the pool, and that release back to the system starts from the larger blocks regardless of whether those blocks are frequently used; the CPU's access speed and the pool's memory-allocation efficiency are improved, and the probability of thrashing in the memory pool is reduced.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a memory-pool management method provided by embodiment one of the present invention;
Fig. 2 is a flowchart of a memory-pool management method provided by embodiment two of the present invention;
Fig. 3 is a first schematic structural diagram of a memory-pool management device provided by embodiment three of the present invention;
Fig. 4 is a second schematic structural diagram of the memory-pool management device provided by embodiment three of the present invention;
Fig. 5 is a third schematic structural diagram of the memory-pool management device provided by embodiment three of the present invention;
Fig. 6 is a fourth schematic structural diagram of the memory-pool management device provided by embodiment three of the present invention;
Fig. 7 is a fifth schematic structural diagram of the memory-pool management device provided by embodiment three of the present invention;
Fig. 8 is a sixth schematic structural diagram of the memory-pool management device provided by embodiment three of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings.
Embodiment one
Referring to Fig. 1, an embodiment of the present invention provides a method for managing a memory pool, the method comprising:
101: when a memory request from a user is received, obtain a preset number of memory blocks from the head of an LRU (Least Recently Used) linked list added in advance to the memory pool;
102: search the preset number of memory blocks at the head of the LRU linked list for a free memory block, and return the free memory block found to the user;
103: when the free memory in the memory pool exceeds a preset threshold, release free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
Specifically, before the memory request from the user is received, the method comprises:
adding the LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back.
Specifically, after the free memory block found is returned to the user, the method comprises:
modifying the LRU linked list so that the returned free memory block is placed at the very front of the head of the LRU linked list.
Specifically, the searching the preset number of memory blocks at the head of the LRU linked list for a free memory block further comprises:
if no free memory block is found among the preset number of memory blocks at the head of the LRU linked list, retrieving a free memory block from elsewhere in the memory pool, returning it to the user, and placing the returned free memory block at the very front of the head of the LRU linked list; and
if there is no free memory block in the memory pool, requesting a new memory block from the system, returning it to the user, and placing the newly requested memory block at the very front of the head of the LRU linked list.
Specifically, the searching the preset number of memory blocks at the head of the LRU linked list for a free memory block and returning the free memory block found to the user comprises:
if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, returning the free memory block closest to the front of the LRU linked list to the user.
The method provided by the embodiment of the present invention obtains, when a memory request from a user is received, a preset number of memory blocks from the head of an LRU linked list added in advance to the memory pool, and returns a free memory block found among them to the user; when the free memory in the pool exceeds a preset threshold, free memory blocks are released back to the system starting from the tail of the LRU linked list. This solves the prior-art problems that an address mapping must be established in the TLB every time a user requests memory from the pool, and that release back to the system starts from the larger blocks regardless of whether those blocks are frequently used; the CPU's access speed and the pool's memory-allocation efficiency are improved, and the probability of thrashing in the memory pool is reduced.
Embodiment two
Referring to Fig. 2, an embodiment of the present invention provides a method for managing a memory pool, the method comprising:
201: add an LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back;
Specifically, referring to Fig. 3, the memory pool is made up of multiple chunks, and each chunk consists of multiple memory blocks of equal size, such as the 32-byte chunk 32, the 64-byte chunk 64, and the 128-byte chunk 128 in Fig. 3. Each chunk may be a contiguous fixed region of memory or a linked list of blocks.
Referring to Fig. 4, in the embodiment of the present invention, an LRU linked list containing all the memory blocks is added to the memory pool; for example, in Fig. 4 all the memory blocks contained in chunk 32, chunk 64, and chunk 128 form one global LRU linked list. This LRU linked list records the order in which all the memory blocks in the pool were used: the most recently used block sits at the front of the list, and the remaining blocks follow in order of use.
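Under the assumption that only block identities and their usage order matter, the global LRU linked list of step 201 can be sketched with an ordered mapping; `lru`, `touch`, and the block ids are hypothetical names for this sketch, not from the patent.

```python
from collections import OrderedDict

# Global LRU list spanning all chunks: front = most recently used,
# back = least recently used. Values record (size_in_bytes, is_free).
lru = OrderedDict()
for blk_id, size in [("a", 32), ("b", 32), ("c", 64), ("d", 128)]:
    lru[blk_id] = (size, True)  # newly pooled blocks start out free

def touch(blk_id):
    """Record a use by moving the block to the very front of the LRU list."""
    lru.move_to_end(blk_id, last=False)
```

A doubly linked list with a hash index gives the same O(1) move-to-front behaviour in a language without `OrderedDict`.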
202: when a memory request from a user is received, obtain a preset number of memory blocks from the head of the LRU linked list added in advance to the memory pool;
Specifically, when a memory request from a user is received, the requested memory size is obtained and rounded up, because all the blocks in the pool have fixed sizes. For example, if the user requests 50 bytes, a 64-byte block needs to be taken for the user.
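The rounding-up step can be sketched as follows, assuming the chunk sizes shown in Fig. 3 (32, 64, and 128 bytes); the function name is illustrative.

```python
def round_up_block(request_size, chunk_sizes=(32, 64, 128)):
    """Return the smallest pooled block size that fits the request,
    or None if the request exceeds every pooled size."""
    for size in chunk_sizes:  # chunk_sizes assumed sorted ascending
        if request_size <= size:
            return size
    return None  # would have to be served outside the fixed-size pool
```

For example, a 50-byte request is served from the 64-byte chunk.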
After the memory request from the user is received, the preset number of memory blocks at the head of the LRU linked list added in step 201 is obtained first, for example the 10 blocks at the head of the list. The preset number can be set flexibly according to the actual situation, and the embodiment of the present invention does not limit it.
203: search the preset number of memory blocks at the head of the LRU linked list for a free memory block, return the free memory block found to the user, and place the returned free memory block at the very front of the head of the LRU linked list.
Specifically, after the preset number of memory blocks at the head of the LRU linked list is obtained, these blocks are searched for a free block whose size equals the rounded-up request size. For example, if a 64-byte block needs to be taken for the user, the 10 blocks at the head of the LRU linked list are searched for a free 64-byte block; if one is found, it is returned to the user, and the LRU linked list is modified so that the returned block is placed at the very front of the head.
Further, if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, the free block closest to the front of the LRU linked list is returned to the user.
If no free memory block is found among the preset number of memory blocks at the head of the LRU linked list, a free block is retrieved from elsewhere in the memory pool, returned to the user, and placed at the very front of the head of the LRU linked list. For example, if no free 64-byte block is found among the 10 blocks at the head of the list, the pool is searched for a free 64-byte block, and the returned 64-byte block is placed at the very front of the head;
if the pool contains no free block of the rounded-up size, a larger free block is taken from the pool, split, and part of it is given to the user. For example, if the pool contains no free 64-byte block, a free 128-byte block is found and split in two, and one half is returned to the user; the returned half is placed at the very front of the head of the LRU linked list;
if there is no free memory block in the pool at all, a new block is requested from the system, returned to the user, and placed at the very front of the head of the LRU linked list.
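The head-first search of step 203, together with the whole-pool fallback, can be sketched as follows. Splitting a larger block and requesting fresh memory from the system are deliberately omitted, and representing the pool as an `OrderedDict` of `(size, is_free)` entries is an assumption of this sketch.

```python
from collections import OrderedDict

def allocate(lru, want_size, preset_n=10):
    """Prefer a free block among the first `preset_n` LRU-head entries
    (their TLB mappings are most likely still valid), then fall back
    to scanning the rest of the pool."""
    head = list(lru)[:preset_n]
    rest = [blk_id for blk_id in lru if blk_id not in head]
    for blk_id in head + rest:
        size, free = lru[blk_id]
        if free and size == want_size:
            lru[blk_id] = (size, False)          # mark the block as in use
            lru.move_to_end(blk_id, last=False)  # very front of the head
            return blk_id
    return None  # a new block would be requested from the system here
```

Searching only a bounded prefix of the list keeps the common allocation path short while still favouring recently used blocks.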
204: when the free memory in the memory pool exceeds a preset threshold, release free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
Specifically, when the free memory in the pool exceeds a preset threshold, for example 2 MB, free blocks need to be released back to the system to reduce the idle memory. The threshold can be set flexibly according to the situation and may also be a ratio; for example, it can be set so that free blocks are released back to the system when the free memory exceeds 50% of the pool's total memory.
When releasing idle memory back to the system, the embodiment of the present invention first obtains the LRU linked list and starts releasing blocks from its tail. Because the blocks at the tail of the LRU linked list are the ones that have gone unused the longest, evicting them reduces the probability of thrashing in the memory pool.
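Step 204's tail-first release can be sketched as follows, again over the hypothetical `OrderedDict` pool representation used above; `threshold` is in bytes, matching the 2 MB example.

```python
from collections import OrderedDict

def release_idle(lru, threshold):
    """Release free blocks to the system, starting from the LRU tail
    (least recently used), until the free total no longer exceeds
    `threshold`. Returns the remaining amount of free memory."""
    idle = sum(size for size, free in lru.values() if free)
    for blk_id in list(reversed(lru)):  # tail first
        if idle <= threshold:
            break
        size, free = lru[blk_id]
        if free:
            del lru[blk_id]  # hand the block back to the system
            idle -= size
    return idle
```

Evicting from the tail keeps the frequently used head blocks in the pool, which is what lowers the probability of thrashing.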
The method provided by the embodiment of the present invention obtains, when a memory request from a user is received, a preset number of memory blocks from the head of an LRU linked list added in advance to the memory pool, and returns a free memory block found among them to the user; when the free memory in the pool exceeds a preset threshold, free memory blocks are released back to the system starting from the tail of the LRU linked list. This solves the prior-art problems that an address mapping must be established in the TLB every time a user requests memory from the pool, and that release back to the system starts from the larger blocks regardless of whether those blocks are frequently used; the CPU's access speed and the pool's memory-allocation efficiency are improved, and the probability of thrashing in the memory pool is reduced.
Embodiment three
Referring to Fig. 5, an embodiment of the present invention provides a device for managing a memory pool, the device comprising:
an acquisition module 301, configured to, when a memory request from a user is received, obtain a preset number of memory blocks from the head of an LRU linked list added in advance to the memory pool;
a search module 302, configured to search the preset number of memory blocks at the head of the LRU linked list for a free memory block and return the free memory block found to the user; and
a release module 303, configured to, when the free memory in the memory pool exceeds a preset threshold, release free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
Specifically, referring to Fig. 6, the device comprises:
an adding module 304, configured to, before the acquisition module 301 receives the memory request from the user, add the LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back.
Specifically, referring to Fig. 7, the device comprises:
a modification module 305, configured to, after the search module 302 returns the free memory block found to the user, modify the LRU linked list so that the returned free memory block is placed at the very front of the head of the LRU linked list.
Specifically, referring to Fig. 8, the device further comprises:
a retrieval module 306, configured to, if the search module 302 finds no free memory block among the preset number of memory blocks at the head of the LRU linked list, retrieve a free memory block from elsewhere in the memory pool, return it to the user, and place the returned free memory block at the very front of the head of the LRU linked list; and, if there is no free memory block in the memory pool, request a new memory block from the system, return it to the user, and place the newly requested memory block at the very front of the head of the LRU linked list.
Specifically, the search module 302 is configured to, if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, return the free memory block closest to the front of the LRU linked list to the user.
The device provided by the embodiment of the present invention obtains, when a memory request from a user is received, a preset number of memory blocks from the head of an LRU linked list added in advance to the memory pool, and returns a free memory block found among them to the user; when the free memory in the pool exceeds a preset threshold, free memory blocks are released back to the system starting from the tail of the LRU linked list. This solves the prior-art problems that an address mapping must be established in the TLB every time a user requests memory from the pool, and that release back to the system starts from the larger blocks regardless of whether those blocks are frequently used; the CPU's access speed and the pool's memory-allocation efficiency are improved, and the probability of thrashing in the memory pool is reduced.
It should be noted that when the memory-pool management device provided by the above embodiment manages memory blocks, the division into the functional modules described above is only an example; in practical applications, the above functions can be assigned to different functional modules as needed, that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the memory-pool management device provided by the above embodiment and the embodiments of the memory-pool management method belong to the same conception; for its specific implementation process, refer to the method embodiments, which are not repeated here.
The sequence numbers of the above embodiments of the present invention are for description only and do not indicate the relative merit of the embodiments.
A person of ordinary skill in the art will understand that all or part of the steps of the above embodiments can be implemented by hardware, or by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium, and the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A method for managing a memory pool, characterized in that the method comprises:
when a memory request from a user is received, obtaining a preset number of memory blocks from the head of a least recently used (LRU) linked list added in advance to the memory pool;
searching the preset number of memory blocks at the head of the LRU linked list for a free memory block, and returning the free memory block found to the user; and
when the free memory in the memory pool exceeds a preset threshold, releasing free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
2. The method according to claim 1, characterized in that, before the memory request from the user is received, the method comprises:
adding the LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back.
3. The method according to claim 1, characterized in that, after the free memory block found is returned to the user, the method comprises:
modifying the LRU linked list so that the returned free memory block is placed at the very front of the head of the LRU linked list.
4. The method according to claim 1, characterized in that the searching the preset number of memory blocks at the head of the LRU linked list for a free memory block further comprises:
if no free memory block is found among the preset number of memory blocks at the head of the LRU linked list, retrieving a free memory block from elsewhere in the memory pool, returning it to the user, and placing the returned free memory block at the very front of the head of the LRU linked list; and
if there is no free memory block in the memory pool, requesting a new memory block from the system, returning it to the user, and placing the newly requested memory block at the very front of the head of the LRU linked list.
5. The method according to claim 1, characterized in that the searching the preset number of memory blocks at the head of the LRU linked list for a free memory block and returning the free memory block found to the user comprises:
if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, returning the free memory block closest to the front of the LRU linked list to the user.
6. A device for managing a memory pool, characterized in that the device comprises:
an acquisition module, configured to, when a memory request from a user is received, obtain a preset number of memory blocks from the head of an LRU linked list added in advance to the memory pool;
a search module, configured to search the preset number of memory blocks at the head of the LRU linked list for a free memory block and return the free memory block found to the user; and
a release module, configured to, when the free memory in the memory pool exceeds a preset threshold, release free memory blocks back to the system starting from the tail of the LRU linked list, until the free memory in the memory pool no longer exceeds the threshold.
7. The device according to claim 6, characterized in that the device comprises:
an adding module, configured to, before the acquisition module receives the memory request from the user, add the LRU linked list to the memory pool, the LRU linked list recording all the memory blocks in the memory pool in order of use, from the most recently used at the front to the least recently used at the back.
8. The device according to claim 6, characterized in that the device comprises:
a modification module, configured to, after the search module returns the free memory block found to the user, modify the LRU linked list so that the returned free memory block is placed at the very front of the head of the LRU linked list.
9. The device according to claim 6, characterized in that the device further comprises:
a retrieval module, configured to, if the search module finds no free memory block among the preset number of memory blocks at the head of the LRU linked list, retrieve a free memory block from elsewhere in the memory pool, return it to the user, and place the returned free memory block at the very front of the head of the LRU linked list; and, if there is no free memory block in the memory pool, request a new memory block from the system, return it to the user, and place the newly requested memory block at the very front of the head of the LRU linked list.
10. The device according to claim 6, characterized in that the search module is configured to, if multiple free memory blocks are found among the preset number of memory blocks at the head of the LRU linked list, return the free memory block closest to the front of the LRU linked list to the user.
CN201210460482.XA 2012-11-15 2012-11-15 Management method and device of a memory pool Active CN103810115B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210460482.XA CN103810115B (en) 2012-11-15 2012-11-15 Management method and device of a memory pool

Publications (2)

Publication Number Publication Date
CN103810115A true CN103810115A (en) 2014-05-21
CN103810115B CN103810115B (en) 2017-10-13

Family

ID=50706909

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210460482.XA Active CN103810115B (en) Management method and device of a memory pool

Country Status (1)

Country Link
CN (1) CN103810115B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1722106A (en) * 2004-07-13 2006-01-18 中兴通讯股份有限公司 Method for memory allocation in an embedded real-time operating system
US7469329B2 (en) * 2006-03-30 2008-12-23 International Business Machines Corporation Methods for dynamically resizing memory pools
CN102455974A (en) * 2010-10-21 2012-05-16 上海宝信软件股份有限公司 High-speed memory allocation and release management system and method with controllable memory consumption

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016078388A1 (en) * 2014-11-21 2016-05-26 中兴通讯股份有限公司 Data aging method and apparatus
WO2017070869A1 (en) * 2015-10-28 2017-05-04 华为技术有限公司 Memory configuration method, apparatus and system
CN106294731A (en) * 2016-08-09 2017-01-04 四川网达科技有限公司 Management method and device for warehousing database data
CN106294731B (en) * 2016-08-09 2019-05-28 四川网达科技有限公司 Management method and device for warehousing data
CN106776375A (en) * 2016-12-27 2017-05-31 东方网力科技股份有限公司 Data caching method and device inside a disk
CN107179997A (en) * 2017-06-12 2017-09-19 合肥东芯通信股份有限公司 Method and device for configuring a memory cell
CN110162395B (en) * 2018-02-12 2021-07-20 杭州宏杉科技股份有限公司 Memory allocation method and device
CN110162395A (en) * 2018-02-12 2019-08-23 杭州宏杉科技股份有限公司 Memory allocation method and device
CN110716941A (en) * 2019-10-18 2020-01-21 网络通信与安全紫金山实验室 A HANDLE ID parsing system and data query method
CN111007986A (en) * 2019-11-04 2020-04-14 厦门天锐科技股份有限公司 Text segmentation transfer method and device based on memory
CN111007986B (en) * 2019-11-04 2022-09-30 厦门天锐科技股份有限公司 Text segmentation transfer method and device based on memory
CN110928680A (en) * 2019-11-09 2020-03-27 上交所技术有限责任公司 Order memory allocation method suitable for securities trading system
CN110928680B (en) * 2019-11-09 2023-09-12 上交所技术有限责任公司 Order memory allocation method suitable for securities trading system
CN111274039A (en) * 2020-02-14 2020-06-12 Oppo广东移动通信有限公司 Memory recovery method and device, storage medium and electronic equipment
CN111274039B (en) * 2020-02-14 2023-12-08 Oppo广东移动通信有限公司 Memory recycling method and device, storage medium and electronic equipment
CN113076266A (en) * 2021-06-04 2021-07-06 深圳华云信息系统有限公司 Memory management method and device, electronic equipment and storage medium
CN113076266B (en) * 2021-06-04 2021-10-29 深圳华云信息系统有限公司 Memory management method and device, electronic equipment and storage medium
CN115617902A (en) * 2021-07-16 2023-01-17 西安宇视信息科技有限公司 Bitmap processing method and device

Also Published As

Publication number Publication date
CN103810115B (en) 2017-10-13

Similar Documents

Publication Publication Date Title
CN103810115A (en) Management method and device of memory pool
US10114749B2 (en) Cache memory system and method for accessing cache line
WO2022016861A1 (en) Hotspot data caching method and system, and related device
CN100543750C (en) A matrix data cache method and device based on WEB application
CN103995855B (en) Method and device for storing data
US9594682B2 (en) Data access system, memory sharing device, and data reading method
CN112632069B (en) Hash table data storage management method, device, medium and electronic equipment
CN102629941A (en) Caching method of a virtual machine mirror image in cloud computing system
CN105095109B (en) cache access method, cache access router and computer system
CN105518631B (en) EMS memory management process, device and system and network-on-chip
CN104462225A (en) Data reading method, device and system
CN102279810A (en) A network storage server and its method for caching data
EP3131015B1 (en) Memory migration method and device
CN102291298B (en) Efficient computer network communication method oriented to long message
CN108139966A (en) Management turns the method and multi-core processor of location bypass caching
CN103227778A (en) Method, device and system for accessing memory
CN103548004B (en) The method and apparatus of dynamic data attemper is realized in file system
CN116155828B (en) Message order keeping method and device for multiple virtual queues, storage medium and electronic equipment
CN101673271A (en) Distributed file system and file sharding method thereof
US20130061009A1 (en) High Performance Free Buffer Allocation and Deallocation
CN103106147B (en) Memory allocation method and system
CN103902472B (en) Internal storage access processing method, memory chip and system based on memory chip interconnection
WO2024260039A1 (en) Data access method and apparatus, non-volatile readable storage medium, and electronic device
CN103778120A (en) Global file identification generation method, generation device and corresponding distributed file system
WO2019174206A1 (en) Data reading method and apparatus of storage device, terminal device, and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant