
CN110851658B - Tree index data structure, content storage pool, router and tree index method - Google Patents


Info

Publication number
CN110851658B
CN110851658B
Authority
CN
China
Prior art keywords
data
data packet
slot
name
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910967770.6A
Other languages
Chinese (zh)
Other versions
CN110851658A (en)
Inventor
李卓
周美丽
刘开华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201910967770.6A
Publication of CN110851658A
Application granted
Publication of CN110851658B
Legal status: Expired - Fee Related


Classifications

    • G — PHYSICS
    • G06 — COMPUTING OR CALCULATING; COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/901 — Indexing; Data structures therefor; Storage structures
    • G06F16/9027 — Trees
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a tree index data structure in which every node contains a right pointer to a node in the same layer and a child pointer to a node in the next layer, and each data node stores the complete name of a piece of real data, where real data means name data that arrives at the router directly. The invention also discloses a content storage pool comprising an on-chip storage unit and an off-chip storage unit: the on-chip unit holds a neural network model for mapping data and a bitmap that records data information, and the off-chip unit holds multiple instances of the above tree index data structure. The invention also discloses a router equipped with this content storage pool, and a tree index method using it: real data is inserted into the tree index data structure as the best-matching data packet, and the best-matching packet is dynamically selected and updated. The content storage pool and router of the invention can serve as the data cache of the forwarding plane of a Named Data Network.

Figure 201910967770 (abstract drawing)

Description

Tree Index Data Structure, Content Storage Pool, Router, and Tree Index Method

Technical Field

The present invention relates to the field of high-performance router architecture design, and in particular to a tree index data structure, a content storage pool, a router, and a tree index method.

Background

At present, with the explosive growth of the Internet and the continuous emergence of innovative technologies and computing models, the role of the Internet is shifting from a "communication channel" to a "data processing platform". To meet future service demands such as content orientation, personalization, ultra-high mobility, "zero" latency, and ultra-high traffic density, and to thoroughly resolve the many problems of the current IP-based Internet architecture, the Named Data Network (NDN), a content-oriented architecture featuring in-network content caching, has emerged.

NDN not only realizes a content-oriented communication model for the Internet through the use of name data; by deploying buffer memory in routing nodes, it also shortens the response time for users accessing cached data, achieves content sharing in the true sense, greatly reduces network load, and effectively increases the network data transmission rate. It is therefore regarded as one of the most promising directions for future Internet architecture.

However, NDN also faces a series of urgent problems and challenges [1], in particular the memory storage and wire-speed processing requirements of the Content Store in the routing data plane [2]. On the one hand, the entries of an NDN routing table are usually composed of digits and characters and are named by variable-length, unbounded strings, so the Content Store must be able to hold data sets on the scale of millions of entries. On the other hand, every data packet forwarded from the Pending Interest Table (PIT) and every Interest packet arriving at the router must access the Content Store, so frequent memory accesses occur there. Since the Content Store is part of the operational flow of an NDN router, it should support wire-speed operation, including name lookup and cache replacement. To return a data packet for a requested Interest packet whenever possible, the Content Store must support an all-subname matching mechanism; to judge arriving data packets accurately, it must also support an exact matching mechanism [2]. In addition, as a temporary content cache its capacity is limited, so the Content Store must compress stored data efficiently to reduce storage consumption and perform cache replacement in time to free space for newly inserted data packets.

References:

1. L. Zhang et al., "Named Data Networking," ACM SIGCOMM Computer Communication Review, vol. 44, no. 3, pp. 66-73, 2014.

2. Z. Li, Y. Xu, B. Zhang, L. Yan, and K. Liu, "Packet Forwarding in Named Data Networking: Requirements and Survey of Solutions," IEEE Communications Surveys & Tutorials, DOI: 10.1109/COMST.2018.2880444, 2018.

Summary of the Invention

To solve the above technical problems in the known art, the present invention provides a tree index data structure, a content storage pool, a router, and a tree index method that increase the network data transmission rate.

The technical solution adopted by the present invention to solve the technical problems in the known art is: a tree index data structure in which every node contains a right pointer to a node in the same layer and a child pointer to a node in the next layer; each data node stores the complete name of a piece of real data, where real data means name data that arrives at the router directly.
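As an illustration only, and not the patent's actual implementation, the node layout just described can be sketched in Python. The class name `NameNode` and the helper `components` are hypothetical; the fields `next` and `child` follow the pointer names used later in this description:

```python
class NameNode:
    """One node of the tree index data structure (illustrative sketch).

    Each node stores the COMPLETE name of a piece of real data
    (e.g. "/ICN/NDN/maps"), not a single name component, plus two
    pointers: `next` to a node in the same layer and `child` to a
    node in the next layer.
    """

    def __init__(self, full_name):
        self.full_name = full_name  # complete hierarchical name
        self.next = None            # right pointer: same-layer node
        self.child = None           # child pointer: next-layer node

    def components(self):
        # "/ICN/NDN/maps" -> ["ICN", "NDN", "maps"]
        return [c for c in self.full_name.split("/") if c]
```

Storing the full name in every node, rather than one component per node, is what keeps the tree shallow at the cost of repeating prefixes.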

Further, the tree index data structure dynamically allocates the pointers of inserted data nodes according to the number of components of the real data and the insertion order.

The present invention also provides a content storage pool, comprising an on-chip storage unit and an off-chip storage unit. The on-chip storage unit holds a bitmap for storing information; the off-chip storage unit holds multiple instances of the above tree index data structure, which store the location of each data name in the off-chip storage unit as well as the relationships between data names.

Further, the on-chip storage unit uses high-speed memory, in which a neural network model is deployed to map data names uniformly.

The present invention also provides a router equipped with the above content storage pool.

The present invention also provides a tree index method using the above content storage pool: real data is inserted into the tree index data structure as the best-matching data packet, the logical relationships between name data are stored automatically, and the best-matching packet is dynamically selected and updated.

Further, all parent name data of the real data are indexed and, according to the number of name components and the insertion order of the real data recorded in each index address slot, the data packet with the smallest difference in component count and the earliest insertion time is selected as the best-matching packet of that slot; the index addresses of all parent name data are then linked to the best-matching packet of the slot.

Further, when the next layer contains several subname data with the same minimal component difference from the real data, the earliest inserted subname, in order of insertion into the tree index data structure, is chosen as the next-layer node, and the other subnames become same-layer nodes.

Further, the steps for inserting a data packet are as follows:

Step a1, input the data packet: the data packet is input into the content storage pool;

Step a2, fixed-length processing: the name of the input data packet is processed to a fixed length;

Step a3, compute the input vector: an XOR is performed between corresponding bits of the fixed-length result to obtain the input vector of the data packet;

Step a4, compute the mapping label: the input vector of the data packet is fed to the neural network model, which outputs a value between 0 and 1; this value is multiplied by the total number of slots in the bitmap to obtain the mapping label of the data packet on the bitmap, i.e. its mapping slot;

Step a5, judge whether the data packet already exists: if the current value of the mapping slot is 1, the data packet exists and step a10 is executed; if the current value is 0, the data packet does not exist and step a6 is executed;

Step a6, insert into the tree index data structure: if the data packet is real data and the mapping slot is empty, the packet is inserted directly into the tree index data structure as the best packet and step a8 follows; if the mapping slot is not empty, the packet is inserted as the best packet, its child pointer is set to the node currently pointed to by the slot pointer recorded in the mapping slot, and step a8 follows. If the data packet is subname data and the mapping slot is empty, the most recently inserted real data becomes the best packet and step a8 follows; if the mapping slot is not empty, step a7 follows;

Step a7, update the best packet: if the component count of the data packet is greater than the level recorded in the slot, the class pointer of the name node pointed to by the slot pointer is set to the most recently inserted real data and the best packet is not updated; otherwise, the most recently inserted real data becomes the new best packet, and its class pointer is set to the node currently pointed to by the slot pointer;

Step a8, update the bitmap slot: the slot pointer is set to the actual storage address of the best packet, and the level is updated to the number of name components of the best packet; if the data packet is real data, the current value changes from the default 0 to 1, otherwise it remains unchanged;

Step a9, judge whether the packet name can be further layered: if the level in the slot is greater than 1, the component count of the name is reduced by one from the end to form a new name for hierarchical processing, and steps a2 to a9 are repeated; otherwise the packet is not layered and step a11 follows;

Step a10, discard the data packet directly;

Step a11, the insertion of the data packet ends.
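A heavily simplified sketch of steps a2 to a5 follows, assuming a 32-byte fixed length and a 1024-slot bitmap (both hypothetical), and replacing the trained neural network model with a stand-in hash so that the mapping-label arithmetic of step a4 is concrete; the XOR of step a3 is interpreted here as XORing the two halves of the fixed-length name, which is one possible reading:

```python
NAME_BYTES = 32        # assumed fixed length for step a2
SLOT_TOTAL = 1024      # assumed total number of bitmap slots

def fixed_length(name: str) -> bytes:
    """Step a2: pad or truncate the name to a fixed byte length."""
    raw = name.encode("utf-8")[:NAME_BYTES]
    return raw.ljust(NAME_BYTES, b"\x00")

def input_vector(fixed: bytes) -> int:
    """Step a3: XOR corresponding bytes of the two halves together
    (one possible reading of the bitwise XOR in the text)."""
    half = len(fixed) // 2
    return int.from_bytes(
        bytes(a ^ b for a, b in zip(fixed[:half], fixed[half:])), "big")

def model_stub(vec: int) -> float:
    """Stand-in for the neural network model of step a4: any mapping
    from the input vector to a value in [0, 1)."""
    return (vec * 2654435761 % 2**32) / 2**32

def mapping_slot(name: str) -> int:
    """Step a4: model output in [0, 1) times the slot total."""
    return int(model_stub(input_vector(fixed_length(name))) * SLOT_TOTAL)

bitmap = [0] * SLOT_TOTAL           # "current value" of each slot

def insert_packet(name: str) -> bool:
    """Steps a5 and a10 only: True if the packet is newly recorded,
    False if it already exists and is discarded. The tree insertion
    of steps a6-a8 is collapsed into setting the slot bit."""
    slot = mapping_slot(name)
    if bitmap[slot] == 1:           # step a5 -> a10: already present
        return False
    bitmap[slot] = 1                # simplified stand-in for steps a6-a8
    return True
```

The point of the sketch is the pipeline shape, name → fixed length → input vector → [0, 1) score → slot index, not the stand-in arithmetic itself.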

Further, the steps for retrieving a data packet are as follows:

Step b1, input the data packet: the data packet is input into the content storage pool;

Step b2, fixed-length processing: the input data packet is processed to a fixed length;

Step b3, compute the input vector: an XOR is performed between corresponding bits of the fixed-length result to obtain the input vector of the data packet;

Step b4, the input vector of the data packet is fed to the neural network model, which outputs a value between 0 and 1; this value is multiplied by the total number of slots in the bitmap to obtain the mapping label of the data packet on the bitmap, i.e. its mapping slot;

Step b5, judge whether a best-matching data packet exists: if the current value of the mapping slot is 1, a best-matching packet exists and step b6 is executed; if the current value is 0, no best-matching packet exists and step b7 is executed;

Step b6, discard the data packet: the learned-bitmap content storage pool discards the packet directly, then step b8 follows;

Step b7, insert the data packet: the packet insertion procedure is executed, then step b8 follows;

Step b8, the retrieval of the data packet in the learned-bitmap content storage pool ends.
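The retrieval decision of steps b5 to b7 can be reduced to a self-contained toy (all names hypothetical; steps b2 to b4 are collapsed into a stand-in hash, and the full insertion procedure of step b7 is stubbed):

```python
SLOTS = 256                          # assumed bitmap size
bitmap = [0] * SLOTS                 # 1 = a best-matching packet is cached

def slot_of(name: str) -> int:
    """Stand-in for steps b2-b4: fixed-length processing, XOR input
    vector, and neural model output times the slot total."""
    return hash(name) % SLOTS        # illustrative only, not the model

def retrieve(name: str) -> str:
    """Steps b5-b7: an arriving data packet is discarded if its slot
    says a best-matching packet is already cached; otherwise the
    insertion procedure (stubbed here) is run."""
    slot = slot_of(name)
    if bitmap[slot] == 1:            # step b5 -> b6: discard
        return "discard"
    bitmap[slot] = 1                 # step b7: run insertion (stubbed)
    return "insert"
```

Note the asymmetry with a conventional cache lookup: because the bitmap answers "is a best match already stored", a hit means the arriving copy is redundant and is dropped, while a miss triggers insertion.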

The advantages and positive effects of the present invention are as follows. Every node in the tree index data structure contains a right pointer to a node in the same layer and a child pointer to a node in the next layer, and every data node stores the complete name of the real data. While the data structure is being built, the logical relationships between name data are stored automatically, and the best-matching data packet is dynamically selected and updated. Consequently, once a name has been indexed by the tree index data structure, the Point field of its index address can return the actual address of the best-matching data packet directly, successfully moving the complex retrieval of variable-length name data forward into the table-building process. Moving the name retrieval process forward in this way simplifies retrieval and lowers its time complexity, while storing the logical relationships of name data during insertion lets the content storage pool support variable-length name lookup. By combining a neural network with the tree index data structure of the present invention, a novel content storage pool architecture and router can be designed. By mapping data uniformly, the content storage pool and router reduce overall storage consumption. By moving the complex name retrieval forward into the table-building process and storing the logical relationships between data in the tree index data structure, they support, at wire speed, all-subname matching, exact name matching, the name data retrieval mechanism, and the data cache replacement policy. The content storage pool and router of the present invention can serve as the data cache of the forwarding plane of a Named Data Network.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of a tree index data structure according to the present invention.

Fig. 2 is a structural block diagram of a node of a tree index data structure according to the present invention.

Fig. 3 is a structural block diagram of a content storage pool according to the present invention.

Fig. 4 is a flow chart of a content storage pool supporting the all-subname matching operation according to the present invention.

Fig. 5 is a flow chart of a content storage pool supporting the exact matching operation according to the present invention.

Fig. 6 is a flow chart of a data insertion operation of a content storage pool according to the present invention.

Fig. 7 is a flow chart of a data deletion operation of a content storage pool according to the present invention.

Detailed Description of the Embodiments

To further explain the content, features, and effects of the present invention, the following embodiments are described in detail with reference to the accompanying drawings:

The English terms used in this application are glossed as follows:

NDN: Named Data Network;

Content Store: the content storage pool;

Back Propagation Neural Networks (BPNN): back-propagation neural network;

Recent: the current value (of a slot);

Point: the pointer in a slot;

Slot: a slot;

Bitmap: a bitmap;

Trie: a word lookup tree;

Level: the level;

Info: information;

next node: a node in the same layer;

child node: a node in the next layer;

next: the right pointer;

child: the child pointer;

Pending Interest Table (PIT): the pending Interest forwarding table;

URL: Uniform Resource Locator;

Forwarding Information Base (FIB): the forwarding table.

Referring to Figs. 1 and 2, in a tree index data structure every node contains a right pointer to a node in the same layer and a child pointer to a node in the next layer; each data node stores the complete name of a piece of real data, where real data means name data that arrives at the router directly. Name data obtained by layering the real data is defined as a subname of that real data; for example, /A and /A/B are subname data of the real data /A/B/C.

Preferably, the tree index data structure dynamically allocates the pointers of inserted data nodes according to the number of components of the real data and the insertion order.

To reduce the memory consumed by hierarchical names and to limit the depth of the tree, each data node of the tree index data structure stores the complete name of the real data rather than the component information of the name. To store the logical relationships between data nodes, each node of the tree index data structure contains a right pointer (next) to a node in the same layer (next node) and a child pointer (child) to a node in the next layer (child node).

In the tree index data structure, for an arriving data packet whose content name is x, we define x as the real data name; each new name formed by successively reducing the number of components of x is parent name data of x, and x is subname data of its parent name data. For example, when data named /ICN/NDN/maps is input into the Content Store, /ICN/NDN and /ICN are parent name data of the real data /ICN/NDN/maps; conversely, /ICN/NDN/maps is subname data of /ICN/NDN and /ICN. We further define the child node as the subname data with the smallest component difference from the real data. When several subname data qualify, the earliest inserted subname, in order of insertion into the tree index data structure, is chosen as the child node and the other subnames become next nodes. For example, if the three content names /ICN/NDN/maps, /ICN/NDN/TJU, and /ICN/NDN are input in that order, /ICN/NDN/maps and /ICN/NDN/TJU are child nodes of /ICN/NDN, and /ICN/NDN/TJU is the next node of /ICN/NDN/maps.
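The parent-name and child-node rules above can be made concrete with a small sketch; both function names are illustrative and do not appear in the patent:

```python
def parent_names(name: str):
    """All parent name data of a real name, formed by successively
    dropping the last component: "/ICN/NDN/maps" -> ["/ICN/NDN", "/ICN"]."""
    comps = [c for c in name.split("/") if c]
    return ["/" + "/".join(comps[:k]) for k in range(len(comps) - 1, 0, -1)]

def pick_child(real: str, subnames_in_order):
    """Child node = the subname with the smallest component difference
    from the real name; ties are broken by earliest insertion (min() is
    stable, so the first-inserted candidate wins). The remaining
    candidates become next (same-layer) nodes."""
    depth = lambda n: len([c for c in n.split("/") if c])
    best = min(subnames_in_order, key=lambda s: abs(depth(s) - depth(real)))
    rest = [s for s in subnames_in_order if s != best]
    return best, rest
```

Run against the worked example in the text: for /ICN/NDN with candidates /ICN/NDN/maps and /ICN/NDN/TJU inserted in that order, both differ by one component, so /ICN/NDN/maps becomes the child node and /ICN/NDN/TJU a next node.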

The logical relationships of name data are stored during the table-building process of the tree index data structure as follows. First, the tree index data structure indexes the real data to obtain its exact index address. Then, the real data is inserted into the tree index data structure as the best-matching data packet. Finally, the tree index data structure indexes all parent name data of the real data and, according to the number of name components and the insertion order of the real data recorded in the index address slot, selects the data packet with the smallest difference in component count and the earliest insertion time as the best-matching packet of that index address slot. At the same time, the index addresses of all parent name data are linked to the best-matching packet of the slot.
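As a toy illustration of this build order, a dictionary can stand in for the index address slots; each slot keeps a (component difference, insertion order, packet) record, so Python's tuple comparison expresses "smallest component difference, earliest insertion on ties" in one step. All names here are hypothetical:

```python
slots = {}  # indexed name -> (component diff, insert order, best packet name)

def build_insert(real_name: str, order: int):
    """Index the real name and all of its parent names; each slot keeps
    the real packet with the smallest component difference from the
    indexed name, breaking ties by earliest insertion."""
    comps = [c for c in real_name.split("/") if c]
    for k in range(len(comps), 0, -1):        # real name first, then parents
        name = "/" + "/".join(comps[:k])
        cand = (len(comps) - k, order, real_name)
        if name not in slots or cand < slots[name]:
            slots[name] = cand

def lookup(name: str):
    """After building, any indexed name returns its best-matching
    packet directly, with no tree traversal at lookup time."""
    return slots[name][2] if name in slots else None

for i, n in enumerate(["/ICN/NDN/maps", "/ICN/NDN/TJU", "/ICN/NDN"]):
    build_insert(n, i)
```

After these three insertions, /ICN/NDN maps to itself (difference 0 beats the earlier candidates), and /ICN maps to /ICN/NDN (difference 1 beats difference 2), which is exactly the "complex retrieval moved forward into table building" effect the description claims.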

The present invention also provides an embodiment of a content storage pool, comprising an on-chip storage unit and an off-chip storage unit. The on-chip storage unit holds a bitmap for storing information; the off-chip storage unit holds multiple instances of the above tree index data structure, which store the location of each data name in the off-chip storage unit as well as the relationships between data names. This content storage pool can serve as the data cache of the forwarding plane of a Named Data Network.

Preferably, the on-chip storage unit may use high-speed memory, in which a neural network model may be deployed to map data names uniformly. The off-chip storage unit may use low-speed memory.

The present invention also provides an embodiment of a router equipped with the above content storage pool. The router can serve as the data cache of the forwarding plane of a Named Data Network.

The present invention also provides an embodiment of a tree index method using the above content storage pool, in which real data is inserted into the tree index data structure as the best-matching data packet, the logical relationships between name data are stored automatically, and the best-matching packet is dynamically selected and updated.

Preferably, all parent name data of the real data may be indexed and, according to the number of name components and the insertion order of the real data recorded in the index address slot, the data packet with the smallest difference in component count and the earliest insertion time may be selected as the best-matching packet of that index address slot; the index addresses of all parent name data may then be linked to the best-matching packet of the slot.

Preferably, when the next layer contains several subname data with the same minimal component difference from the real data, the earliest inserted subname, in order of insertion into the tree index data structure, may be chosen as the next-layer node, and the other subnames may serve as same-layer nodes.

Preferably, after indexing by the tree index data structure, the name data can obtain the actual address of the best-matching data packet directly from the slot pointer (Point) of the index address slot.

优选地,插入数据包的步骤可如下:Preferably, the steps of inserting a data packet may be as follows:

步骤a1:输入数据包:输入数据包到内容存储池中;Step a1: input data packet: input data packet to the content storage pool;

Step a2: Fixed-length processing: apply fixed-length processing to the name of the input data packet.

Step a3: Compute the input vector: XOR the corresponding bit positions of the fixed-length result to obtain the packet's input vector.

Step a4: Compute the mapping label: run the packet's input vector through the neural network model to obtain a value between 0 and 1; multiplying this value by the total number of bitmap slots gives the label of the bitmap slot the packet maps to, i.e. its mapping slot.

Step a5: Check whether the packet already exists: if the current value of the mapping slot is 1, the packet exists — go to step a10; if the current value is 0, the packet does not exist — go to step a6.

Step a6: Insert into the tree index data structure: if the packet is real data and the mapping slot is empty, insert the packet directly into the tree index data structure as the best packet and continue with step a8; if the mapping slot is not empty, insert the packet as the best packet, point its child pointer at the node currently referenced by the pointer recorded in the mapping slot, and continue with step a8. If the packet is subname data and the mapping slot is empty, the most recently inserted real data becomes the best packet — continue with step a8; if the mapping slot is not empty, go to step a7.

Step a7: Update the best packet: if the packet's component count is greater than the level recorded in the slot, point the sibling pointer of the name node referenced by the slot's pointer at the most recently inserted real data and leave the best packet unchanged; otherwise, the most recently inserted real data becomes the new best packet, and its sibling pointer is set to the node currently referenced by the slot's pointer.

Step a8: Update the bitmap slot: the slot's pointer is set to the actual storage address of the best packet and its level to the best packet's component count; if the packet is real data, the current value changes from the default 0 to 1, otherwise it remains unchanged.

Step a9: Check whether the packet name can be layered further: if the level in the slot is greater than 1, drop the last component of the name to form a new name and repeat steps a2 through a9; otherwise no further layering is performed — continue with step a11.

Step a10: Discard the packet directly.

Step a11: Packet insertion ends.
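The preprocessing of steps a2–a4 can be sketched as follows. This is a minimal illustration only: the equal-width 5-part split with trailing-zero padding follows the description given later in steps 1-2 and 1-3, and the `model` callable is a stand-in for the trained neural network, which is not reproduced here.

```python
def input_vector(name: str, parts: int = 5) -> list[int]:
    """Steps a2-a3 (sketch): split the name's bits into `parts` equal
    segments padded with trailing zeros, then XOR the segments
    position by position to form the input vector."""
    bits = "".join(f"{byte:08b}" for byte in name.encode())
    seg_len = -(-len(bits) // parts)               # ceiling division
    bits = bits.ljust(seg_len * parts, "0")        # trailing-zero padding
    vec = [0] * seg_len
    for i in range(parts):                         # positionwise XOR
        seg = bits[i * seg_len:(i + 1) * seg_len]
        vec = [v ^ int(c) for v, c in zip(vec, seg)]
    return vec

def mapping_slot(name: str, total_slots: int, model) -> int:
    """Step a4 (sketch): scale a model output in [0, 1) to a slot index."""
    y = model(input_vector(name))                  # `model` assumed trained
    return min(int(y * total_slots), total_slots - 1)
```

For example, with a placeholder model such as `lambda v: (sum(v) % 7) / 7`, `mapping_slot("/ICN", 1000, model)` returns a slot index in the range [0, 1000).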

Preferably, a data packet is retrieved as follows:

Step b1: Input the data packet into the content storage pool.

Step b2: Fixed-length processing: apply fixed-length processing to the input packet.

Step b3: Compute the input vector: XOR the corresponding bit positions of the fixed-length result to obtain the packet's input vector.

Step b4: Run the packet's input vector through the neural network model to obtain a value between 0 and 1; multiplying this value by the total number of bitmap slots gives the mapping label, i.e. the mapping slot, of the packet on the bitmap.

Step b5: Check whether a best matching packet exists: if the current value of the mapping slot is 1, a best matching packet exists — go to step b6; if the current value is 0, no best matching packet exists — go to step b7.

Step b6: Discard the packet: the learned-bitmap content storage pool discards the packet directly, then continues with step b8.

Step b7: Insert the packet: run the packet-insertion procedure, then continue with step b8.

Step b8: Retrieval of the packet in the learned-bitmap content storage pool ends.

The table-building process of the tree index data structure is illustrated below with two real data packets, /ICN/NDN/maps and /ICN, entering the structure for the first time in that order. For the real data /ICN/NDN/maps: first, assume the name is mapped by the tree index data structure to index-address slot 2; /ICN/NDN/maps is then inserted into the tree index as the best matching packet of slot 2, and slot 2's Point pointer is set to that packet. Finally, the subnames of /ICN/NDN/maps — /ICN/NDN and /ICN — are indexed through the tree index in turn; assuming they map to slots 4 and 6 respectively, the Point fields of slots 4 and 6 (initially empty) are both set to the memory address of the /ICN/NDN/maps node in the tree index. Processing of /ICN/NDN/maps is then complete. For the real data /ICN, it is mapped to index-address slot 6; the /ICN/NDN/maps data referenced by Point is compared with /ICN by component count, and /ICN is selected as the best match of slot 6. /ICN is therefore inserted into the tree index data structure, and slot 6 is linked to the off-chip /ICN node via Point.
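The subname sequence used in this example (/ICN/NDN/maps → /ICN/NDN → /ICN) is exactly what the layering loop of step a9 produces: drop the last "/"-separated component at each round. A minimal sketch:

```python
def subnames(name: str) -> list[str]:
    """Return the name followed by each parent name, e.g.
    /ICN/NDN/maps -> [/ICN/NDN/maps, /ICN/NDN, /ICN]."""
    parts = name.strip("/").split("/")
    return ["/" + "/".join(parts[:i]) for i in range(len(parts), 0, -1)]
```

Each returned name is what steps a2–a9 would process on one iteration of the layering loop.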

The content storage pool described above, and a router built from it, can be used in the forwarding plane of a named data network. By moving the complex name-retrieval work forward into the table-building process, and by storing the logical relationships between data items in the tree index data structure, they support all-subname matching, exact name matching, the name-data retrieval mechanism, and the data cache replacement strategy at line speed.

The workflow and operating principle of the present invention are illustrated below with a preferred embodiment: a content storage pool for the forwarding plane of a named data network.

A content storage pool for the NDN forwarding plane, shown in Fig. 3, consists of an on-chip storage unit and an off-chip storage unit. The on-chip storage unit is a high-speed memory in which a neural network model is deployed to map data names uniformly and thereby raise storage utilization (labelled "model" in Fig. 3), together with an improved bitmap (labelled "bitmap" in Fig. 3) that stores information about the best matching name data. The off-chip storage unit is a low-speed memory holding multiple tree index data structures, one per slot of the improved bitmap's index unit, which store the position of each data name in the content storage pool and the relationships between names; each tree-index node carries a sibling pointer to a node on the same layer and a child pointer to a node on the next layer. In Fig. 3, the neural network model and the improved bitmap together are called the learning tree; chevron arrows denote next pointers, solid arrows denote child pointers, and dashed arrows denote the correspondence between an index-unit slot and its tree index data structure.

The content storage pool of the present invention achieves fast, compact processing of data. The key is that, within the Content Store, neural-network mapping is used to raise storage efficiency and to support the different name-data retrieval algorithms, and the tree index data structure is designed both to support a fast cache replacement strategy and to store the logical relationships between name data. The specific design and implementation are as follows:

(1) The neural network model maps data names uniformly as follows:

First, training samples are collected: a large set of uniform resource locators (URLs), whose format resembles NDN name data, serves as sample data. Second, the cumulative distribution function value F(x) of each sample is computed and used as its label. Then a back-propagation neural network is trained, yielding a model that reflects the distribution of the indexed data. Finally, a data name's string is fed into the trained model, producing a real value between 0 and 1; multiplying this value by the total number of slots in the improved bitmap gives the mapping label, thus achieving a uniform mapping of data names.
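This is the learned-index idea: the model approximates the cumulative distribution function F(x) of the name population, so that slot = F(x) × (total slots) spreads names nearly uniformly across the bitmap. Purely as an illustration, the sketch below replaces the trained BPNN with an empirical CDF over a sorted sample of names (the class name and sample data are invented for the example):

```python
import bisect

class EmpiricalCDFMapper:
    """Stand-in for the trained model: maps a name to a slot via the
    empirical CDF of a sorted training sample of name strings."""

    def __init__(self, samples, total_slots):
        self.keys = sorted(samples)
        self.total_slots = total_slots

    def slot(self, name):
        # F(name): fraction of sample keys <= name
        rank = bisect.bisect_right(self.keys, name)
        f = rank / len(self.keys)
        return min(int(f * self.total_slots), self.total_slots - 1)
```

With `EmpiricalCDFMapper(["/a", "/b", "/c", "/d"], 100)`, names drawn from the same distribution land roughly evenly over the 100 slots, which is the property the trained model is meant to provide.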

(2) The structure of the neural network model is designed as follows:

The neural network model consists of two stages: one first-stage BPNN feeding 1,000 second-stage BPNNs, where each BPNN has 5 input neurons, 20 hidden neurons, and 1 output neuron. As Fig. 1 shows, each bitmap slot holds: a Recent flag indicating that the slot is occupied by an actual content name; a Point used to return the actual memory address of the best matching name (in all-subname matching, the best matching name is defined as the data packet stored in the name trie whose name has the smallest component difference from, and is a prefix of, the arriving Interest packet's name; in exact matching, it is defined as the data packet identical to the arriving one; note that when more than one matching name has the minimal component difference, the packet inserted into the tree index data structure earliest is chosen as the best matching name); a Level giving the level of the best matching name (expressed as the number of "/"-separated components); and an Info field recording the name-data information needed for cache replacement, such as how many times the data has been indexed or its arrival time.
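The four per-slot fields can be summarized as a record type; the field names follow the description above (Recent, Point, Level, Info), while the concrete types are assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class BitmapSlot:
    recent: int = 0                  # 1 when an actual content name occupies the slot
    point: Optional[Any] = None      # reference to the best matching name's node
    level: int = 0                   # component count of the best matching name
    info: dict = field(default_factory=dict)  # cache-replacement bookkeeping,
                                              # e.g. index count or arrival time
```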

Through the efficient cooperation of the data structures above, the proposed content storage pool, while improving memory utilization, can rapidly support the all-subname matching mechanism, the exact name matching mechanism, and the cache replacement strategy, as well as data insertion and deletion operations.

Fig. 4 is a flow chart of the all-subname matching operation supported by a content storage pool of the present invention.

Referring to Fig. 4, the content storage pool for the NDN forwarding plane retrieves Interest packets with the subname matching mechanism; each retrieval of an Interest packet proceeds as follows:

Step 1-1: Input Interest packet x: input Interest packet x into the content storage pool.

Step 1-2: Fixed-length processing: apply fixed-length processing to the name of x, to satisfy the constraint that Back Propagation Neural Networks (BPNN) only accept fixed-length input; that is, split x into 5 parts, taking 5 bits at a time, and pad with trailing zeros where bits are lacking.

Step 1-3: Compute the input vector: XOR the corresponding bit positions of the fixed-length result to obtain the 5-bit input value of x produced by the input stage.

Step 1-4: Compute the mapping label: the trained neural network maps the input value to a value between 0 and 1; multiplying it by the total number of slots in the improved bitmap gives the label of the slot the Interest packet maps to on the improved bitmap.

Step 1-5: Check whether a best matching packet exists: if the Point of the slot given by the mapping label is not empty, a best matching packet exists — go to step 1-6; otherwise no best matching packet exists — go to step 1-7.

Step 1-6: Return the best matching packet: the content storage pool directly returns the actual data packet referenced by the slot's Point, then continues with step 1-8.

Step 1-7: Forward Interest packet x: the content storage pool forwards the Interest packet to the PIT and the Forwarding Information Base (FIB) for further processing, then continues with step 1-8.

Step 1-8: Retrieval of the Interest packet in the content storage pool ends.
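Steps 1-5 through 1-7 reduce to a single branch on the mapped slot's Point field. A minimal sketch, in which `Slot` and the `forward_to_pit_fib` callback are illustrative stand-ins:

```python
class Slot:
    def __init__(self, point=None):
        self.point = point  # best matching packet node, or None

def lookup_interest(slot, forward_to_pit_fib):
    """All-subname match (steps 1-5..1-7): return the best matching
    Data packet recorded in the mapped slot, else forward the Interest."""
    if slot.point is not None:       # step 1-5: a best match exists
        return slot.point            # step 1-6: return it directly
    forward_to_pit_fib()             # step 1-7: hand over to PIT and FIB
    return None
```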

Fig. 5 is a flow chart of the exact matching operation supported by a content storage pool of the present invention.

Referring to Fig. 5, the content storage pool for the NDN forwarding plane retrieves data packets with the exact matching mechanism; each retrieval of a data packet proceeds as follows:

Step 2-1: Input data packet y: input data packet y into the content storage pool.

Step 2-2: Fixed-length processing: apply fixed-length processing to the name of y, to satisfy the constraint that the back-propagation neural network only accepts fixed-length input; that is, split y into 5 parts, taking 5 bits at a time, and pad with trailing zeros where bits are lacking.

Step 2-3: Compute the input vector: XOR the corresponding bit positions of the fixed-length result to obtain the 5-bit input value of y produced by the input stage.

Step 2-4: Compute the mapping label: the trained neural network maps the input value to a value between 0 and 1; multiplying it by the total number of slots in the improved bitmap gives the label of the slot data packet y maps to on the improved bitmap.

Step 2-5: Check whether a best matching packet exists: if Recent in the slot given by the mapping label is not 0, a best matching packet exists — go to step 2-6; otherwise no best matching packet exists — go to step 2-7.

Step 2-6: Discard packet y: the content storage pool discards y directly, then continues with step 2-8.

Step 2-7: Insert packet y: the packet enters the insertion stage, then continues with step 2-8.

Step 2-8: Retrieval of the data packet in the content storage pool ends.
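Steps 2-5 through 2-7 likewise reduce to one branch, this time on the Recent bit of the mapped slot; `insert_packet` below stands in for the insertion procedure of Fig. 6:

```python
def handle_data(recent_bit: int, insert_packet) -> str:
    """Exact-match decision (steps 2-5..2-7): a set Recent bit means the
    packet is already stored, so the duplicate is dropped; otherwise the
    packet enters the insertion stage."""
    if recent_bit == 1:
        return "discard"     # step 2-6: already cached, drop it
    insert_packet()          # step 2-7: run the insertion flow
    return "insert"
```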

Fig. 6 is a flow chart of the data insertion operation of a content storage pool of the present invention.

Referring to Fig. 6, data packets are inserted into the storage structure of the NDN forwarding-plane content storage pool; each insertion proceeds as follows:

Step 3-1: Input the data packet into the content storage pool.

Step 3-2: Fixed-length processing: apply fixed-length processing to the packet's name, to satisfy the constraint that the back-propagation neural network only accepts fixed-length input; that is, split the name into 5 parts, taking 5 bits at a time, and pad with trailing zeros where bits are lacking.

Step 3-3: Compute the input vector: XOR the corresponding bit positions of the fixed-length result to obtain the packet's input value produced by the input stage.

Step 3-4: Compute the mapping label: the trained neural network maps the input value to a value between 0 and 1; multiplying it by the total number of slots in the improved bitmap gives the mapping label, i.e. the mapping slot, of the packet on the improved bitmap.

Step 3-5: Check whether the packet exists: if Recent = 1 in the mapping slot, the packet exists — go to step 3-10; otherwise the packet does not exist — go to step 3-6.

Step 3-6: Insert into the tree index data structure: if the packet is real data and the mapping slot is empty, insert it directly into the tree index data structure as the best packet and continue with step 3-8; if the mapping slot is not empty, insert it as the best packet, point its child pointer at the node referenced by the Point recorded in the mapping slot, and continue with step 3-8. If the packet is subname data and the mapping slot is empty, the most recently inserted real data becomes the best packet — continue with step 3-8; if the mapping slot is not empty, go to step 3-7.

Step 3-7: Update the best packet: if the packet's component count is greater than the Level recorded in the slot, point the sibling pointer of the name node referenced by Point at the most recently inserted real data and leave the best packet unchanged; otherwise, the most recently inserted real data becomes the new best packet, and its sibling pointer is set to the node currently referenced by Point.

Step 3-8: Update the Bitmap slot: Point is set to the actual storage address of the best packet and Level to its component count; if the packet is real data, Recent changes from the default 0 to 1, otherwise Recent stays unchanged.

Step 3-9: Check whether the name can be layered: if Level > 1 in the slot, drop the last component of the name to form a new name (e.g. /A/B layers to /A) and repeat steps 3-2 through 3-9; otherwise the packet cannot be layered further — continue with step 3-11.

Step 3-10: Discard the packet directly.

Step 3-11: Insertion of the data packet into the content storage pool ends.

Fig. 7 is a flow chart of the data deletion operation of a content storage pool of the present invention.

Referring to Fig. 7, data packets are deleted from the storage structure of the NDN forwarding-plane content storage pool; each deletion proceeds as follows:

Step 4-1: Input the data packet into the content storage pool.

Step 4-2: Fixed-length processing: apply fixed-length processing to the packet's name, to satisfy the constraint that the back-propagation neural network only accepts fixed-length input; that is, split the name into 5 parts, taking 5 bits at a time, and pad with trailing zeros where bits are lacking.

Step 4-3: Compute the input vector: XOR the corresponding bit positions of the fixed-length result to obtain the packet's input value produced by the input stage.

Step 4-4: Compute the mapping label: the trained neural network maps the input value to a value between 0 and 1; multiplying it by the total number of slots in the improved bitmap gives the mapping label, i.e. the mapping slot, of the packet on the improved bitmap.

Step 4-5: Check whether the packet exists: if Recent = 1 in the mapping slot, the packet exists — continue with step 4-6; otherwise the packet does not exist — go to step 4-9.

Step 4-6: Update pointers: if the packet node's child pointer and sibling pointer are both non-empty, redirect the child pointer that originally pointed at the packet node to the node referenced by the packet's sibling pointer; the child pointer of that sibling node is then set to the child of the packet node, and the node referenced by the packet node's child pointer becomes the best packet. Otherwise, the best packet is empty.

Step 4-7: Update the Bitmap slot: Point is set to the actual storage address of the current best packet, Level to its component count, and Recent changes from 1 to 0.

Step 4-8: Delete the packet directly.

Step 4-9: Deletion of the data packet from the content storage pool ends.
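Step 4-6 is essentially a pointer splice. The sketch below models a tree-index node with only its two links (`nxt` for the same-layer sibling, `child` for the next layer — the names are illustrative) and relinks them when a node is removed:

```python
class Node:
    def __init__(self, name, nxt=None, child=None):
        self.name, self.nxt, self.child = name, nxt, child

def unlink(parent, node):
    """Step 4-6 sketch: remove `node` (a child of `parent`) by promoting
    its sibling, re-attaching the removed node's child, and returning
    the new best packet (the removed node's child, or None)."""
    if node.nxt is not None and node.child is not None:
        parent.child = node.nxt          # redirect the incoming child pointer
        node.nxt.child = node.child      # sibling adopts the removed node's child
        return node.child                # the child becomes the best packet
    return None                          # otherwise the best packet is empty
```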

As the insertion process above shows, while the tree index data structure is being built it automatically stores the logical relationships between name data and dynamically selects and updates the best matching packet. Hence, once a name has been indexed through the tree index data structure, the Point in its index address directly returns the actual address of the best matching packet, successfully moving the complex retrieval of variable-length name data forward into the table-building process.

The embodiments described above merely illustrate the technical ideas and features of the present invention; their purpose is to enable those skilled in the art to understand and practice it, and they do not limit the patent scope of the present invention. All equivalent changes or modifications made within the spirit disclosed by the present invention still fall within the patent scope of the present invention.

Claims (3)

1.一种基于内容存储池的树形索引方法,其特征在于,设置内容存储池,所述内容存储池包括:片内存储单元和片外存储单元,所述片内存储单元内设有用于存储信息的位图;所述片外存储单元设有多个树形索引数据结构,用于存储数据名称在所述片外存储单元的位置信息以及数据名称间的关系信息;1. A tree index method based on a content storage pool, characterized in that a content storage pool is set, and the content storage pool includes: an on-chip storage unit and an off-chip storage unit, and the on-chip storage unit is provided with a A bitmap for storing information; the off-chip storage unit is provided with multiple tree-shaped index data structures for storing position information of data names in the off-chip storage unit and relationship information between data names; 所述树形索引数据结构中的每个结点都含有指向同层结点的右指针和指向下一层结点的子指针;每个数据结点存储真实数据的完整名称,所述真实数据是指直接到达路由的名称数据;Each node in the tree index data structure contains a right pointer pointing to the same layer node and a subpointer pointing to the next layer node; each data node stores the full name of the real data, and the real data Refers to the name data that goes directly to the route; 所述树形索引数据结构根据真实数据的组件数和插入顺序动态分配插入数据结点的指针;The tree index data structure dynamically allocates pointers to insert data nodes according to the number of components of real data and the insertion order; 该树形索引方法将真实数据作为最佳匹配数据包插入到所述树形索引数据结构中,自动存储名称数据间的逻辑关系,且动态选择和更新最佳匹配数据包;The tree index method inserts real data as the best matching data package into the tree index data structure, automatically stores the logical relationship between name data, and dynamically selects and updates the best matching data package; 索引真实数据的所有父名称数据,并按照所述索引地址槽中记录的真实数据的名称组件数和插入顺序,选择组件数差异最小且插入时间最早的数据包作为所述索引地址槽的最佳匹配数据包;并将所有父名称数据对应的索引地址与该槽对应的最佳匹配数据包建立连接;Index all parent name data of the real data, and according to the number of name components and insertion order of the real data recorded in the index address slot, select the data packet with the smallest difference in the number of components and the earliest insertion time as the best package for the index address slot Match the data packet; and establish a connection between the 
index address corresponding to all the parent name data and the best matching data packet corresponding to the slot; 当下一层结点有多个与真实数据具有最小组件差异的子名称数据时,按照插入树形索引数据结构的先后顺序,选择最早插入的子名称作为下一层结点,其他子名称作为同层结点;When the next-level node has multiple sub-name data with the smallest component difference from the real data, select the earliest inserted sub-name as the next-level node according to the order of insertion into the tree index data structure, and other sub-names as the same layer node; 检索数据包的步骤如下:The steps to retrieve a packet are as follows: 步骤b1:输入数据包:输入数据包到内容存储池中;Step b1: input data packet: input data packet to the content storage pool; 步骤b2:定长处理:对输入的数据包进行定长处理;Step b2: fixed-length processing: perform fixed-length processing on the input data packet; 步骤b3:计算输入矢量:将定长处理的结果进行同位之间异或操作,得到数据包的输入矢量;Step b3: Calculate the input vector: perform the XOR operation between the same bits on the result of the fixed-length processing to obtain the input vector of the data packet; 步骤b4:数据包的输入矢量经神经网络模型运算得到一个0~1之间的数值,该值乘以位图的槽总数,得出该数据包映射到位图上的映射标号即映射槽;Step b4: The input vector of the data packet is calculated by the neural network model to obtain a value between 0 and 1, and this value is multiplied by the total number of slots of the bitmap to obtain the mapping number of the data packet mapped to the bitmap, that is, the mapping slot; 步骤b5:判断是否存在最佳匹配数据包:若映射槽中当前值为1,则最佳匹配数据包存在,执行步骤b6;若映射槽中当前值为0,则最佳匹配数据包不存在,执行步骤b7;Step b5: Determine whether there is a best matching data packet: if the current value in the mapping slot is 1, then the best matching data packet exists, go to step b6; if the current value in the mapping slot is 0, then the best matching data packet does not exist , execute step b7; 步骤b6:丢弃数据包:学习位图内容存储池直接丢弃数据包,并继续执行步骤b8;Step b6: Discard data packets: learn the bitmap content storage pool to directly discard data packets, and continue to execute step b8; 步骤b7:插入数据包:执行数据包插入流程,并继续执行步骤b8;Step b7: insert data packet: execute the data packet insertion process, and continue to execute 
step b8; 步骤b8:数据包在学习位图内容存储池结构中的检索结束;Step b8: the retrieval of the data packet in the learning bitmap content storage pool structure ends; 插入数据包的步骤如下:The steps to insert a packet are as follows: 步骤a1:输入数据包:输入数据包到内容存储池中;Step a1: input data packet: input data packet to the content storage pool; 步骤a2:定长处理:对输入的数据包的名称进行定长处理;Step a2: fixed-length processing: perform fixed-length processing on the name of the input data packet; 步骤a3:计算输入矢量:将定长处理的结果进行同位之间异或操作,得到数据包的输入矢量;Step a3: Calculate the input vector: perform the XOR operation between the same bits on the result of the fixed-length processing to obtain the input vector of the data packet; 步骤a4:计算映射标号:数据包的输入矢量经神经网络模型运算得到一个0~1之间的数值,该值乘以位图的槽总数,得出该数据包映射到位图上的映射标号即映射槽;Step a4: Calculate the mapping label: the input vector of the data packet is calculated by the neural network model to obtain a value between 0 and 1, and this value is multiplied by the total number of slots in the bitmap to obtain the mapping label of the data packet mapped to the bitmap, namely mapping slot; 步骤a5:判断该数据包是否存在:若映射槽中当前值为1,则该数据包存在,执行步骤a10;若映射槽中当前值为0,则该数据包不存在,执行步骤a6;Step a5: Determine whether the data packet exists: if the current value in the mapping slot is 1, then the data packet exists, and perform step a10; if the current value in the mapping slot is 0, then the data packet does not exist, and perform step a6; 步骤a6:插入树形索引数据结构:若数据包为真实数据,当该映射槽为空时,则将数据包作为最佳数据包直接插入到树形索引数据结构中,并继续执行步骤a8,当该映射槽不为空时,将数据包作为最佳数据包插入树形索引数据结构,并将该数据包的子指针指向此时映射槽中所记录的槽中指针所指向的节点,继续执行步骤a8;若数据包为子名字数据,当该映射槽为空时,则上一次插入的真实数据作为最佳数据包,继续执行步骤a8,若当该映射槽不为空时,则进行步骤a7;Step a6: Insert the tree index data structure: if the data packet is real data, when the mapping slot is empty, directly insert the data packet as the best data packet into the tree index data structure, and continue to execute step a8, When the mapping slot is not empty, the data packet is inserted into the tree index data structure as the best data packet, and the sub-pointer of the data packet points to the node pointed to 
by the pointer in the slot recorded in the mapping slot at this time, and continues Execute step a8; if the data packet is subname data, when the mapping slot is empty, the real data inserted last time is used as the best data packet, and continue to execute step a8, if the mapping slot is not empty, then proceed to step a7; 步骤a7:更新最佳数据包:若该数据包组件数大于槽中记录的级别,则将槽中指针指向的名称节点的类指针指向上一次插入的真实数据,最佳数据包不做更新;否则,上一次插入的真实数据作为新的最佳数据包,并将最佳数据包的类指针指向此时槽中指针指向的节点;Step a7: Update the best data package: if the number of components of the data package is greater than the level recorded in the slot, point the class pointer of the name node pointed to by the pointer in the slot to the real data inserted last time, and the best data package will not be updated; Otherwise, the real data inserted last time is used as the new best data packet, and the class pointer of the best data packet points to the node pointed to by the pointer in the slot at this time; 步骤a8:更新位图槽:槽中指针指向最佳数据包的实际存储地址,级别更新为最佳数据包的组件层数,当数据包为真实数据时,当前值由默认值0变为1,否则当前值保持不变;Step a8: Update the bitmap slot: the pointer in the slot points to the actual storage address of the best data packet, and the level is updated to the number of component layers of the best data packet. When the data packet is real data, the current value changes from the default value 0 to 1 , otherwise the current value remains unchanged; 步骤a9:判断数据包名字是否可以分层:若槽中级别>1,则从后往前对该名称数据包组件数减一,进行分层处理形成新的名称,循环执行步骤a2至a9;否则数据包不进行分层,继续步骤a11;Step a9: Determine whether the name of the data package can be layered: if the level in the slot is > 1, then subtract one from the number of data package components of the name from the back to the front, perform layering processing to form a new name, and execute steps a2 to a9 in a loop; Otherwise, the data packet is not stratified, and continues to step a11; 步骤a10:直接丢弃该数据包;Step a10: discarding the data packet directly; 步骤a11:数据包插入结束。Step a11: The data packet insertion ends. 
2. The content-storage-pool-based tree index method according to claim 1, wherein the on-chip storage unit uses a high-speed memory, and a neural network model for realizing uniform mapping of data names is deployed in the high-speed memory.

3. A router, characterized in that a content storage pool according to claim 2 is provided therein.
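The name-to-slot mapping that claim 2's deployed neural network performs (steps a2-a4: fixed-length processing, same-position XOR, model output in 0-1 scaled by the slot count) can be sketched as follows. The chunk size, fixed length, padding byte, and the trivial `toy_model` standing in for the neural network are illustrative assumptions, not values from the patent.

```python
CHUNK = 8        # bytes folded together per XOR position (assumed)
FIXED_LEN = 64   # fixed length the name is padded/truncated to (assumed)

def input_vector(name: str) -> bytes:
    # Step a2: fixed-length processing of the name.
    data = name.encode("utf-8")[:FIXED_LEN].ljust(FIXED_LEN, b"\0")
    # Step a3: XOR same-position bytes across chunks into one vector.
    vec = bytearray(CHUNK)
    for i, b in enumerate(data):
        vec[i % CHUNK] ^= b
    return bytes(vec)

def map_to_slot(name: str, num_slots: int, model) -> int:
    # Step a4: model output in [0, 1) scaled by the total slot count.
    score = model(input_vector(name))
    return min(int(score * num_slots), num_slots - 1)

def toy_model(vec: bytes) -> float:
    # Trivial stand-in for the learned model (assumption): any function
    # mapping the input vector into [0, 1) fits this interface.
    return (sum(vec) % 256) / 256.0

slot = map_to_slot("/tianjin/video/seg1", 1024, toy_model)
```

The same name always maps to the same slot, which is what lets retrieval (steps b1-b8) and insertion (steps a1-a11) agree on where a packet's bit lives.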
CN201910967770.6A 2019-10-12 2019-10-12 Tree index data structure, content storage pool, router and tree index method Expired - Fee Related CN110851658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910967770.6A CN110851658B (en) 2019-10-12 2019-10-12 Tree index data structure, content storage pool, router and tree index method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910967770.6A CN110851658B (en) 2019-10-12 2019-10-12 Tree index data structure, content storage pool, router and tree index method

Publications (2)

Publication Number Publication Date
CN110851658A CN110851658A (en) 2020-02-28
CN110851658B true CN110851658B (en) 2023-05-05

Family

ID=69597383

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910967770.6A Expired - Fee Related CN110851658B (en) 2019-10-12 2019-10-12 Tree index data structure, content storage pool, router and tree index method

Country Status (1)

Country Link
CN (1) CN110851658B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115134298A (en) * 2021-03-23 2022-09-30 华为技术有限公司 Message processing method and network device
CN114238316B (en) * 2021-12-03 2024-07-09 神州灵云(北京)科技有限公司 Data indexing method based on tree structure and tree structure generation method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110096458A (en) * 2019-04-02 2019-08-06 天津大学 Name data network content storage pool data retrieval method neural network based
CN110109616A (en) * 2019-04-02 2019-08-09 天津大学 Name data network content storage pool data-erasure method neural network based
CN110196938A (en) * 2019-04-02 2019-09-03 天津大学 A Neural Network-Based Method for Data Insertion in Content Storage Pool of Named Data Network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1107126A2 (en) * 1999-12-08 2001-06-13 Hewlett-Packard Company, A Delaware Corporation A fast, efficient, adaptive, hybrid tree
US6671694B2 (en) * 2001-06-04 2003-12-30 Hewlett-Packard Development Company, L.P. System for and method of cache-efficient digital tree with rich pointers
CN112214424B (en) * 2015-01-20 2024-04-05 Ultrata LLC Object memory architecture, processing node, memory object storage and management method
CN107908357B (en) * 2017-10-13 2020-08-21 天津大学 Named Data Network Forwarding Plane PIT Storage Structure and Data Retrieval Method
CN110019649A (en) * 2017-12-25 2019-07-16 北京新媒传信科技有限公司 A kind of method and device established, search for index tree
CN109271390B (en) * 2018-09-30 2022-03-01 天津大学 A neural network-based index data structure and its data retrieval method
CN110138661A (en) * 2019-04-02 2019-08-16 天津大学 Name data network content storage pool neural network based



Similar Documents

Publication Publication Date Title
US8724624B2 (en) Systolic array architecture for fast IP lookup
Van Lunteren Searching very large routing tables in wide embedded memory
Wang et al. CoPTUA: Consistent policy table update algorithm for TCAM without locking
US20070171911A1 (en) Routing system and method for managing rule entry thereof
CN109271390B (en) A neural network-based index data structure and its data retrieval method
WO2013040730A1 (en) Ip lookup method and device, and route updating method and device
CN103428093A (en) Route prefix storing, matching and updating method and device based on names
CN110851658B (en) Tree index data structure, content storage pool, router and tree index method
CN104780101B (en) Content center network Forwarding plane fib table structure and its search method
CN110460529A (en) Content router FIB storage structure and its data processing method
Li et al. Smart name lookup for NDN forwarding plane via neural networks
CN110096458B (en) Named data network content storage pool data retrieval method based on neural network
CN100496019C (en) A Method for Rapid Search and Update of IPv6 Routing Table
CN114817648A (en) High-energy-efficiency collaborative map calculation method and device
CN110138661A (en) Name data network content storage pool neural network based
Han et al. A novel routing algorithm for IoT cloud based on hash offset tree
CN100413285C (en) Design and Implementation of High Speed Multidimensional Packet Classification Algorithm Based on Network Processor
CN107729053A (en) A kind of method for realizing cache tables
CN111611348A (en) A Method of Searching ICN Network Information Name Based on Learning Bloom Filter
CN110109616B (en) Named data network content storage pool data deletion method based on neural network
WO2015032214A1 (en) High-speed routing lookup method and device simultaneously supporting ipv4 and ipv6
CN101521627A (en) Method and device for inserting nodes
JP2023534123A (en) Packet matching method, apparatus, network device and medium
CN110196938B (en) A Neural Network-Based Method for Data Insertion in Content Storage Pool of Named Data Network
CN102739550B (en) Based on the multi-memory flowing water routing architecture that random copy distributes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230505