
CN102497431B - Memory application method and system for caching application data of transmission control protocol (TCP) connection - Google Patents

Memory application method and system for caching application data of transmission control protocol (TCP) connection

Info

Publication number
CN102497431B
CN102497431B (application CN201110415220.7A)
Authority
CN
China
Prior art keywords
cache node
module
application
stream
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110415220.7A
Other languages
Chinese (zh)
Other versions
CN102497431A (en)
Inventor
刘灿
刘朝辉
窦晓光
纪奎
邵宗有
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dawning Information Industry Beijing Co Ltd
Dawning Information Industry Co Ltd
Original Assignee
Dawning Information Industry Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dawning Information Industry Beijing Co Ltd filed Critical Dawning Information Industry Beijing Co Ltd
Priority to CN201110415220.7A priority Critical patent/CN102497431B/en
Publication of CN102497431A publication Critical patent/CN102497431A/en
Application granted granted Critical
Publication of CN102497431B publication Critical patent/CN102497431B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method for caching the application data of a transmission control protocol (TCP) connection. The method is characterized in that the TCP connection obtains fixed-length buffer blocks from a static buffer pool when the application's memory load is light, and dynamically obtains fixed-length buffer blocks from the operating system when the load is heavy. Compared with the prior art, the method has the advantages that upper-layer applications are well supported in temporarily storing payloads for content analysis; when the upper-layer application's memory load is light, resources can be acquired quickly from the static buffer pool; and when the upper-layer application's memory load is heavy, data can still be buffered appropriately to avoid packet loss.

Description

Memory Application Method and System for Caching the Application Data of a TCP Connection

Technical Field

The invention belongs to the field of network security, and in particular relates to a memory application method and system for caching the application data of a TCP connection.

Background Art

With the rapid development of the Internet, the network has brought people convenience but has also brought many problems: pornography, anti-government opinion and similar content can all be transmitted over the network, so monitoring the network has become increasingly important. Most current networks adopt the four-layer TCP/IP model; to monitor the content carried in the application layer, the application payload of each packet must be inspected. Under the TCP/IP model it suffices to analyze the payload at the transport layer: for applications carried over TCP connections, the data of each TCP connection can be inspected to determine whether its content is illegal.

Patent No. CN200580031571.0 (Caching content and state data at a network element) discloses methods for caching content and state data at a network element. In one embodiment, data packets are intercepted at the network element. An application-layer message specifying a request to a server application for particular data is determined from the data packets. A first portion of the requested data that is already held in the network element's cache is identified, and a message requesting the second portion of the data, which is not in the cache, is sent to the server application. A first response containing the second portion but not the first portion is received, and a second response containing both portions is sent to the client application. In another embodiment, data packets are intercepted at the network element, an application-layer message specifying session or database-connection state information is determined from them, and the state information is cached at the network element.

Patent No. CN200680012181.3 (Distributed data management system and method for dynamically subscribing to data) discloses a distributed data management system comprising an application module (1) and a data manager (2). The application module (1) contains a data access module (11) and a data cache (12); the data manager (2) contains a subscription management module (21), a subscription list module (22), a notification module (23) and a data store (24). The application module (1) further contains a dynamic subscription management module (14) and a data recording module (15), and the data manager (2) further contains a data publishing module (25) connected to the data store (24). The dynamic subscription management module (14) is connected to the data recording module (15), the data cache (12) and the data access module (11), and communicates with the subscription management module (21), the notification module (23) and the data publishing module (25). The invention also includes a method for dynamically subscribing to data. Adopting it effectively reduces the amount of data transmitted over the network and processed by the system, lightens the network burden and improves system performance.

In the TCP offload systems described above, the software and hardware configure either no buffers or only a small number of buffers for caching application data.

The drawback of these techniques is that, with no buffers or only a few buffers configured for caching application data, the payload of a TCP connection is not inspected and temporarily caching part of the data for the upper-layer application is not supported. The system therefore cannot cooperate well with the upper-layer application's content analysis, and when the upper-layer application is busy it can only drop packets.

Summary of the Invention

The present invention overcomes the deficiencies of the prior art by providing a cache allocation mechanism for applications: a certain amount of memory is statically allocated for each connection, and when it is insufficient, additional memory is obtained from the operating system through dynamic allocation. Combining static and dynamic allocation both saves resources and satisfies application demand as quickly as possible.

The present invention provides a memory application method for caching the application data of a TCP connection, comprising the following steps:

(1) Initialization: according to the application scale, several nodes of multiple sizes (for example three sizes: 5 KB, 1.5 KB and 0.5 KB) are requested for stream cache nodes to form a static pool; go to step (2);

(2) For a stream node application, go to step (3); for a stream node release, go to step (7);

(3) Request an idle node from the static pool; if the request succeeds, go to step (5), otherwise go to step (4);

(4) Request a dynamic stream cache node from the operating system, its size being the smallest static size that satisfies the demand; if the request succeeds, go to step (5), otherwise go to step (6);

(5) Return the node head pointer; go to step (11);

(6) Return a null pointer; go to step (11);

(7) If the stream cache node carries a dynamic-application flag, go to step (8); otherwise go to step (9);

(8) If, in the static pool, the number of stream cache nodes of the same size is less than a set threshold (for example 1 k nodes), go to step (9); otherwise go to step (10);

(9) Put the stream cache node into the static pool; go to step (11);

(10) Return the stream cache node to the operating system; go to step (11);

(11) End.

In the memory application method for caching TCP connection application data provided by the present invention, in step (3) the TCP connection requests a buffer block of fixed length len from the free-buffer linked list of the static buffer pool.

In the memory application method provided by the present invention, in step (4) the TCP connection dynamically requests a buffer block of fixed length len from the operating system.

In the memory application method provided by the present invention, in steps (7) to (10) whether a node is reclaimed by the static pool or by the operating system is determined from the flag of the released node and from the number of nodes of the same size in the static pool, compared against the preset threshold for that size.

In the memory application method provided by the present invention, when the TCP connection is closed or evicted, the node reclamation steps (7) to (10) are likewise used for processing.
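The allocation path of steps (1) to (6) can be illustrated with the following C sketch. It is purely illustrative and not part of the patent: all identifiers (stream_node_t, static_pool_t, pool_init, class_for, stream_node_alloc) and the node counts are assumptions; only the order of operations — request from the static pool first, fall back to the operating system, and mark dynamically obtained nodes with a flag — follows the steps described above.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch only; names, counts and layout are assumptions, not the patent's code. */

#define NUM_CLASSES 3
static const size_t class_size[NUM_CLASSES] = { 512, 1536, 5120 };   /* 0.5 KB, 1.5 KB, 5 KB */

typedef struct stream_node {
    struct stream_node *next;    /* free-list / per-connection chaining                */
    size_t              size;    /* capacity of this buffer block                      */
    int                 dynamic; /* 1 = obtained from the OS, 0 = from the static pool */
    unsigned char       data[];  /* the buffer block holding cached application data   */
} stream_node_t;

typedef struct {
    stream_node_t *free_list[NUM_CLASSES];   /* one free list per size class           */
    size_t         free_count[NUM_CLASSES];  /* idle nodes currently held per class    */
    size_t         threshold[NUM_CLASSES];   /* release threshold per class, e.g. 1024 */
} static_pool_t;

/* Step (1): pre-allocate a fixed number of nodes per size class to form the static pool. */
static int pool_init(static_pool_t *p, size_t nodes_per_class, size_t threshold)
{
    memset(p, 0, sizeof(*p));
    for (int c = 0; c < NUM_CLASSES; c++) {
        p->threshold[c] = threshold;
        for (size_t i = 0; i < nodes_per_class; i++) {
            stream_node_t *n = malloc(sizeof(*n) + class_size[c]);
            if (n == NULL)
                return -1;
            n->size = class_size[c];
            n->dynamic = 0;
            n->next = p->free_list[c];
            p->free_list[c] = n;
            p->free_count[c]++;
        }
    }
    return 0;
}

/* Smallest size class that can hold len bytes (the sizing rule of step (4)). */
static int class_for(size_t len)
{
    for (int c = 0; c < NUM_CLASSES; c++)
        if (len <= class_size[c])
            return c;
    return -1;
}

/* Steps (2)-(6): static pool first, then the operating system; NULL means resources exhausted. */
stream_node_t *stream_node_alloc(static_pool_t *p, size_t len)
{
    int c = class_for(len);
    if (c < 0)
        return NULL;

    if (p->free_list[c] != NULL) {            /* step (3): take an idle node from the static pool */
        stream_node_t *n = p->free_list[c];
        p->free_list[c] = n->next;
        p->free_count[c]--;
        n->next = NULL;
        return n;                             /* step (5): return the node head pointer           */
    }

    stream_node_t *n = malloc(sizeof(*n) + class_size[c]);   /* step (4): ask the OS */
    if (n == NULL)
        return NULL;                          /* step (6): null pointer               */
    n->size = class_size[c];
    n->dynamic = 1;                           /* remember this node was dynamically applied for   */
    n->next = NULL;
    return n;
}
```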

The present invention further provides a system for the memory application of caching the application data of a TCP connection, comprising the following modules:

(1) An initialization module, which, according to the application scale, requests several nodes of multiple sizes for stream cache nodes to form a static pool;

(2) A static buffer block application module: for a stream node application, go to module (3); for a stream node release, go to module (7);

(3) Request an idle node from the static pool; if the request succeeds, go to module (5), otherwise go to module (4);

(4) A dynamic buffer application module: request a dynamic stream cache node from the operating system; if the request succeeds, go to module (5), otherwise go to module (6);

(5) Return the node head pointer; go to module (11);

(6) Return a null pointer; go to module (11);

(7) If the stream cache node carries a dynamic-application flag, go to module (8); otherwise go to module (9);

(8) If, in the static pool, the number of stream cache nodes of the same size is less than the set threshold, go to module (9); otherwise go to module (10);

(9) Put the stream cache node into the static pool; go to module (11);

(10) Return the stream cache node to the operating system; go to module (11);

(11) End.

Wherein the sizes described in the initialization module include three kinds, namely 5 KB, 1.5 KB and 0.5 KB, and the node size described in the dynamic buffer application module is the smallest static size that satisfies the demand.
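As a usage illustration of the earlier allocation sketch (the identifiers and pool parameters remain assumptions, not part of the patent), a request for a 1 KB payload would be served from the 1.5 KB class, the smallest static size that satisfies it:

```c
/* Usage illustration of the earlier allocation sketch; all names and numbers are assumptions. */
int example(void)
{
    static_pool_t pool;
    if (pool_init(&pool, 4096, 1024) != 0)       /* e.g. 4096 nodes per size class, threshold 1024 */
        return -1;

    /* A 1000-byte payload is served by the 1.5 KB class, the smallest size that fits it. */
    stream_node_t *n = stream_node_alloc(&pool, 1000);
    if (n == NULL)
        return -1;                                /* static pool and operating system both exhausted */

    /* ... copy the TCP payload into n->data for upper-layer content analysis ...            */
    /* The node is later handed back through modules (7)-(10); see the release sketch below. */
    return 0;
}
```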

In the memory application system for caching TCP connection application data provided by the present invention, in module (2) the TCP connection requests a buffer block of fixed length len from the free-buffer linked list of the static buffer pool.

In the memory application system provided by the present invention, in module (4) the TCP connection requests a buffer block of fixed length len from the free-buffer linked list of the dynamic buffer pool.

In the memory release system for caching TCP connection application data provided by the present invention, in modules (7) to (10) whether a buffer node is reclaimed to the static pool or to the operating system is determined from the flag of the stream buffer node (dynamic or static application) and from the number of stream cache nodes in the static buffer pool.

In the memory application system provided by the present invention, when the TCP connection is closed or evicted, the node reclamation modules (7) to (10) are likewise used for processing.

Compared with the prior art, the beneficial effects of the present invention are as follows. The upper-layer application is well supported in temporarily caching payloads for content analysis, and even when the upper-layer application's CPU load is high, appropriate buffering can be performed to avoid packet loss. Combining static application with dynamic application balances application speed against efficiency: when the system consumes few stream cache nodes, nodes are taken directly from the static pool, giving fast application; when the system consumes many stream cache nodes, they are obtained from the operating system, making effective use of operating system resources. Static release and dynamic release are likewise combined: stream cache nodes obtained from the static pool are released back to the static pool, while dynamic nodes obtained from the operating system are released either to the static pool or to the operating system, depending on how many static nodes the system has consumed. This can be summarized in three points: 1. Frequent application to and release from the operating system is avoided; a certain number of buffers are applied for and released directly through the preferentially used stream cache node pool. 2. When the stream cache nodes in the static pool are insufficient, nodes can be requested from the operating system to satisfy application demand. 3. When a stream cache node is released, whether it is released to the static pool or to the operating system is decided from the node's flag (obtained from the static pool or dynamically from the operating system) and from the number of idle stream cache nodes of that size in the static pool, as sketched below.
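Continuing the earlier illustrative C fragments (the function and field names remain assumptions), the release policy of points 1 to 3 could look as follows: a node is kept in the static pool unless it was dynamically applied for and the pool already holds at least the threshold number of idle nodes of that size.

```c
/* Steps/modules (7)-(11), continuing the earlier sketch; names and policy details are assumptions. */
void stream_node_release(static_pool_t *p, stream_node_t *n)
{
    int c = class_for(n->size);

    /* Modules (7)/(8): a dynamically applied node is returned to the OS once the static
     * pool already holds the threshold number of idle nodes of this size class.          */
    if (n->dynamic && p->free_count[c] >= p->threshold[c]) {
        free(n);                                  /* module (10): return to the operating system */
        return;
    }

    /* Module (9): otherwise the node is kept in the static pool for reuse. Treating an
     * adopted dynamic node as static from then on is one reasonable choice, not something
     * mandated by the patent text.                                                       */
    n->dynamic = 0;
    n->next = p->free_list[c];
    p->free_list[c] = n;
    p->free_count[c]++;
}
```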

Brief Description of the Drawings

Fig. 1 is a schematic flow chart of the present invention.

Detailed Description of the Embodiments

Referring to the schematic flow chart of the present invention in Fig. 1, the method of the present invention proceeds as follows:

1. For the TCP connection, a buffer block of fixed length len (hereinafter, len always denotes this fixed length) is requested from the free-buffer linked list of the static buffer pool.

2. If step 1 fails, the static memory is insufficient and a buffer of length len is requested dynamically. If the dynamic request also fails, the system resources are exhausted and null is returned. If the dynamic request succeeds, the buffer's information node is linked into the dynamic linked list and the dynamic flag is recorded in the information node. If step 1 succeeds, the buffer block is linked into the static linked list and the corresponding (static) flag is recorded in the information node.

3. When the TCP connection is closed or evicted, the buffer is returned to the system or put back into the static free list, according to the flag in the information node.
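A hypothetical close/eviction handler, built on the same illustrative sketch (the tcp_conn_t structure and the per-connection buffer chain are assumptions), would simply walk the connection's cached blocks and let the release routine decide, from the information-node flag and the pool occupancy, whether each block goes back to the static free list or to the operating system:

```c
/* Hypothetical close/eviction handler built on the earlier sketch; not the patent's code. */
typedef struct tcp_conn {
    stream_node_t *buffers;       /* chain of buffer blocks caching this connection's data */
    /* ... other per-connection TCP state ...                                              */
} tcp_conn_t;

void tcp_conn_close(static_pool_t *pool, tcp_conn_t *conn)
{
    stream_node_t *n = conn->buffers;
    while (n != NULL) {
        stream_node_t *next = n->next;    /* save the chain link before the node is recycled */
        stream_node_release(pool, n);     /* flag and pool occupancy decide the destination  */
        n = next;
    }
    conn->buffers = NULL;
}
```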

The present invention first statically allocates a region of memory as a buffer pool, which satisfies the caching needs of TCP streams under normal traffic; when traffic is heavy, buffers are allocated dynamically, and when traffic returns to a normal level the dynamically allocated buffers are returned to the system according to a certain policy. The buffers hold the data to be delivered to the application and deliver it on demand. This solves the problem of providing a memory allocation mechanism for caching application-layer data of TCP connections.

The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the specific embodiments of the present invention may still be modified or equivalently replaced, and any modification or equivalent replacement that does not depart from the spirit and scope of the present invention shall fall within the scope of the claims of the present invention.

Claims (8)

1. A memory application method for caching the application data of a TCP connection, comprising the following steps:
(1) Initialization: according to the application scale, several nodes of multiple sizes are requested for stream cache nodes to form a static pool; go to step (2);
(2) For a stream cache node application, go to step (3); for a stream cache node release, go to step (7);
(3) Request an idle node from the static pool; if the request succeeds, go to step (5), otherwise go to step (4);
(4) Request a dynamic stream cache node from the operating system; if the request succeeds, go to step (5), otherwise go to step (6);
(5) Return the node head pointer; go to step (11);
(6) Return a null pointer; go to step (11);
(7) If the stream cache node carries a dynamic-application flag, go to step (8); otherwise go to step (9);
(8) If, in the static pool, the number of stream cache nodes of the same size as the node currently being released is less than a set threshold, go to step (9); otherwise go to step (10);
(9) Put the stream cache node into the static pool; go to step (11);
(10) Return the stream cache node to the operating system; go to step (11);
(11) End;
Wherein the sizes of the stream cache nodes described in step (1) include three kinds, namely 5 KB, 1.5 KB and 0.5 KB, and the size of the dynamic stream cache node described in step (4) is the smallest stream cache node size in the static pool that satisfies the demand of the requested dynamic stream cache node;
In steps (7) to (10), whether a stream cache node is reclaimed by the static pool or by the operating system is determined from the node's dynamic-application flag and from the number of stream cache nodes in the static pool of the same size as the node currently being released.
2. The method according to claim 1, characterized in that, in step (3), the TCP connection requests a buffer block of fixed length len from the free-buffer linked list of the static pool.
3. The method according to claim 1 or 2, characterized in that, in step (4), the TCP connection dynamically requests a buffer block of fixed length len from the operating system.
4. The method according to claim 1, characterized in that, when the TCP connection is closed or evicted, the node reclamation steps (7) to (10) are likewise used for processing.
5. A system for the memory application of caching the application data of a TCP connection, comprising the following modules (1) to (11):
Module (1): an initialization module, which, according to the application scale, requests several nodes of multiple sizes for stream cache nodes to form a static pool;
Module (2): for a stream cache node application, go to module (3); for a stream cache node release, go to module (7);
Module (3): request an idle node from the static pool; if the request succeeds, go to module (5), otherwise go to module (4);
Module (4): request a dynamic stream cache node from the operating system; if the request succeeds, go to module (5), otherwise go to module (6);
Module (5): return the node head pointer; go to module (11);
Module (6): return a null pointer; go to module (11);
Module (7): if the stream cache node carries a dynamic-application flag, go to module (8); otherwise go to module (9);
Module (8): if, in the static pool, the number of stream cache nodes of the same size as the node currently being released is less than the set threshold, go to module (9); otherwise go to module (10);
Module (9): put the stream cache node into the static pool; go to module (11);
Module (10): return the stream cache node to the operating system; go to module (11);
Module (11): end;
Wherein the sizes of the stream cache nodes described in the initialization module include three kinds, namely 5 KB, 1.5 KB and 0.5 KB, and the size of the dynamic stream cache node described in module (4) is the smallest stream cache node size in the static pool that satisfies the demand of the requested dynamic stream cache node;
In modules (7) to (10), whether a stream cache node is reclaimed to the static pool or to the operating system is determined from the node's dynamic-application flag and from the number of stream cache nodes in the static pool of the same size as the node currently being released.
6. The system according to claim 5, characterized in that, in module (3), the TCP connection requests a buffer block of fixed length len from the free-buffer linked list of the static pool.
7. The system according to claim 5 or 6, characterized in that, in module (4), the TCP connection dynamically requests a buffer block of fixed length len from the operating system.
8. The system according to claim 5, characterized in that, when the TCP connection is closed or evicted, the node reclamation modules (7) to (10) are likewise used for processing.
CN201110415220.7A 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection Active CN102497431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110415220.7A CN102497431B (en) 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110415220.7A CN102497431B (en) 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection

Publications (2)

Publication Number Publication Date
CN102497431A CN102497431A (en) 2012-06-13
CN102497431B true CN102497431B (en) 2014-10-22

Family

ID=46189216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110415220.7A Active CN102497431B (en) 2011-12-13 2011-12-13 Memory application method and system for caching application data of transmission control protocol (TCP) connection

Country Status (1)

Country Link
CN (1) CN102497431B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761192B (en) * 2014-01-20 2016-08-17 华为技术有限公司 A kind of method and apparatus of Memory Allocation
CN106250239A (en) * 2016-07-26 2016-12-21 汉柏科技有限公司 The using method of memory cache cache and device in a kind of network equipment
CN113992731B (en) * 2021-11-02 2024-04-30 四川安迪科技实业有限公司 Abnormal control method and device based on STOMP protocol
CN119166541B (en) * 2024-08-27 2025-07-25 珠海妙存科技有限公司 Mapping table management method, system, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1444812A (en) * 2000-07-24 2003-09-24 睦塞德技术公司 Method and apparatus for reducing memory pool starvation in a shared memory switch
CN1798094A (en) * 2004-12-23 2006-07-05 华为技术有限公司 Method of using buffer area
EP1890425A1 (en) * 2005-12-22 2008-02-20 Huawei Technologies Co., Ltd. A distributed data management system and a method for data dynamic subscribing
CN101069169B (en) * 2004-11-23 2010-10-27 思科技术公司 Caching content and state data at a network element


Also Published As

Publication number Publication date
CN102497431A (en) 2012-06-13

Similar Documents

Publication Publication Date Title
CN110191148B (en) Statistical function distributed execution method and system for edge calculation
CN104243481B (en) A method and system for pre-processing data of electricity consumption information collection
CN100477643C (en) Data Packet Capture Method Based on Shared Memory
CN101917490B (en) Method and system for reading cache data
WO2012024909A1 (en) Long connection management apparatus and link resource management method for long connection communication
US9064124B1 (en) Distributed caching system
WO2013078875A1 (en) Content management method, device and system
CN103902355B (en) A kind of quick loading method of medical image
WO2010072083A1 (en) Web application based database system and data management method therof
CN105450780A (en) CDN system and source tracing method thereof
CN110121863A (en) For providing the system and method for message to multiple subscribers
CN101917350A (en) Network card drive-based zero copy Ethernet message capturing and transmitting implementation method under Linux
CN104239509B (en) Multi version GIS section service systems
CN102497431B (en) Memory application method and system for caching application data of transmission control protocol (TCP) connection
US10404603B2 (en) System and method of providing increased data optimization based on traffic priority on connection
CN103414693B (en) Get method and device for dotting ready
CN104865953A (en) Vehicle data processing method and device
CN108183893A (en) A kind of fragment packet inspection method, detection device, storage medium and electronic equipment
WO2015062228A1 (en) Method and device for accessing shared memory
CN105357286A (en) Web-based real-time directional message pushing method
CN107615259B (en) Data processing method and system
CN110147345A (en) A kind of key assignments storage system and its working method based on RDMA
CN102375789A (en) Non-buffer zero-copy method of universal network card and zero-copy system
CN104994152B (en) A kind of Web collaboration caching system and method
CN104410725A (en) Processing method and processing system of GPU (Graphics Processing Unit) as well as method and system for DNS (Domain Name Server) resolution based on GPU

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220728

Address after: 100193 No. 36 Building, No. 8 Hospital, Wangxi Road, Haidian District, Beijing

Patentee after: Dawning Information Industry (Beijing) Co.,Ltd.

Patentee after: DAWNING INFORMATION INDUSTRY Co.,Ltd.

Address before: 100084 Beijing Haidian District City Mill Street No. 64

Patentee before: Dawning Information Industry (Beijing) Co.,Ltd.