
CN112818185A - Method for searching longest prefix matching hardware system based on SRAM - Google Patents


Info

Publication number
CN112818185A
CN112818185A
Authority
CN
China
Prior art keywords
module
hash
search
output
comparison
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110423364.0A
Other languages
Chinese (zh)
Inventor
项禹
卢笙
陈盈安
江悦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinqiyuan Nanjing Semiconductor Technology Co ltd
Original Assignee
Xinqiyuan Nanjing Semiconductor Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinqiyuan Nanjing Semiconductor Technology Co ltd filed Critical Xinqiyuan Nanjing Semiconductor Technology Co ltd
Priority to CN202110423364.0A
Publication of CN112818185A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9014Indexing; Data structures therefor; Storage structures hash tables
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/30Circuit design
    • G06F30/32Circuit design at the digital level

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Geometry (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method for lookup in an SRAM-based longest-prefix-match hardware system, relating to the technical field of network communication. The hardware comprises a storage module; the system comprises a HASH-based search algorithm module, a first comparison module supporting different lengths, and an output comparison module. The method comprises the following steps: S100: the search key obtains an output value through the search algorithm module; S200: the output value is output with priority through the first comparison module supporting different lengths; S300: the final information result is compressed and output by the output comparison module. Compared with the prior art, the invention designs various logic circuits in the HASH module to generate the HASH value, reducing the calculation load of the HASH module and improving operating speed, and designs comparison logic circuits with different bit widths in the comparison module, saving precious memory cells and offering more flexibility in bit-width selection.

Description

Method for searching longest prefix matching hardware system based on SRAM
Technical Field
The invention relates to the technical field of network communication, and in particular to a lookup method for a longest-prefix-match hardware system based on SRAM (static random access memory).
Background
When a router forwards an IP packet, the forwarding engine must look up the routing information corresponding to the packet's destination address in the routing table to determine how the packet should be forwarded. Of all the steps in packet forwarding, the route lookup is the most critical, so designing a fast route-lookup algorithm has become one of the keys to improving overall router performance. As router interface rates have risen, traditional software-based lookup mechanisms can no longer meet requirements, so hardware-based high-speed route lookup with Ternary Content Addressable Memory (TCAM) is widely used. TCAMs are a special class of memory. Conventional memories, such as static RAM (SRAM) and dynamic RAM (DRAM), read contents by address; a TCAM instead takes a datum (the search content, or search key) as input, compares the key in parallel against its stored entries, and outputs the matching address. If multiple entries match the search key, the smallest address is output.
Internet Protocol (IP) addresses are used here as an example; the same concept applies to network addresses of any other length and type, such as 32-bit IPv4 (Internet Protocol version 4) addresses, 64-bit Media Access Control (MAC) addresses, or 128-bit IPv6 addresses.
The longest-prefix-match mechanism is the route-lookup mechanism adopted by almost all routers in the industry today. When a router receives an IP packet, it compares the packet's destination IP address bit by bit against all entries in its local routing table. Because each routing-table entry designates a network, one destination address may match multiple entries. The most specific entry, i.e. the one with the longest subnet mask, is called the longest prefix match; this entry matches the greatest number of high-order bits of the destination address.
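As an illustrative sketch only (not part of the patent text; the route table and next-hop labels are invented for the example), the longest-prefix selection described above can be written as:

```python
def longest_prefix_match(dest_ip, routes):
    """Return the next hop of the most specific matching route.

    routes: list of (prefix, prefix_len, next_hop); prefixes and the
    destination are 32-bit IPv4 addresses held as Python ints.
    """
    best_hop, best_len = None, -1
    for prefix, plen, next_hop in routes:
        mask = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
        # a route matches when the destination agrees on all masked bits
        if dest_ip & mask == prefix & mask and plen > best_len:
            best_hop, best_len = next_hop, plen
    return best_hop

routes = [
    (0x0A000000, 8,  "A"),   # 10.0.0.0/8
    (0x0A010000, 16, "B"),   # 10.1.0.0/16
    (0x0A010100, 24, "C"),   # 10.1.1.0/24
]
```

For destination 10.1.1.5 all three entries match, and the /24 entry wins because it matches the most high-order bits.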
TCAMs were developed from CAMs. In general, each bit in a CAM cell has only two states, "0" or "1". Each bit in a TCAM has three states: besides "0" and "1" there is a "don't care" state, hence "ternary". The third state is implemented with one data bit (data) and one mask bit (mask), and it is this third state that lets a TCAM perform both exact matching and fuzzy searching. This makes TCAMs very useful for fast ACL lookup, longest-prefix matching in routing-table lookup, and fuzzy lookup.
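The data-bit/mask-bit encoding of the ternary state can be sketched as follows (a minimal model, not the patent's circuit; the mask polarity chosen here, 1 = care and 0 = don't care, is an assumption, since conventions vary by vendor):

```python
def tcam_cell_match(key, data, mask):
    # A stored word matches when every "cared" bit of the key equals the
    # stored data bit; masked-out bits are the ternary "don't care" state.
    return (key & mask) == (data & mask)
```

With data=0b1000 and mask=0b1100 the stored word behaves like the ternary pattern 10XX: keys 0b1011 and 0b1000 match, while 0b0011 does not.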
Traditional SRAM-based search methods include linear search, binary-tree search, HASH-table search, and the like, and their common characteristic is low search speed. The linear search method must traverse all entries in the table one by one; the binary-tree search method must traverse most nodes in the tree, and its speed is strongly affected by the tree's depth. HASH-table lookup is one of the faster software methods: according to a chosen hash function h(key) and a collision-handling scheme, a set of keywords over a large range is mapped into a limited address interval, and the image of a keyword in that interval is used as the record's storage location. Such a table is called a hash table, and the resulting storage location is called a hash address. Although hash-table lookup is relatively fast, it still cannot meet the extremely fast lookup requirements of high-speed real-time communication systems.
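The h(key) mapping and collision handling described above can be sketched as follows (a toy model with chaining; the modulo hash and table size are illustrative choices, not from the patent):

```python
def h(key, size):
    return key % size  # stands in for the hash function h(key) in the text

def build_hash_table(keys, size):
    table = [[] for _ in range(size)]
    for k in keys:
        table[h(k, size)].append(k)  # chaining resolves collisions
    return table

def hash_table_lookup(table, key):
    # only one bucket is examined, instead of a full linear scan
    return key in table[h(key, len(table))]
```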
With the continuous increase of network speed, the traditional software-based lookup mechanism can no longer meet requirements. The most widely used hardware lookup method in industry today uses a Content Addressable Memory (CAM); because route lookup requires longest-prefix matching, another CAM mechanism was proposed, and hardware-based TCAM lookup emerged. When a TCAM is used, all data in the entire entry space are queried simultaneously, so lookup speed is unaffected by the size of the entry space: one lookup completes per clock cycle, and the average lookup speed is 6 times, and can even reach 128 times, that of SRAM-algorithm-based lookup.
Although TCAM devices are very flexible about stored entry length, and key entries of any length can be stored in a TCAM chip, they also have drawbacks. First, they are expensive and of relatively small capacity. Second, TCAM uses parallel match comparison, which leads to high power consumption. Third, a TCAM must ensure that keys with longer prefixes are stored before keys with shorter prefixes, and this ordering makes updating TCAM entries extremely complicated.
Therefore, given the shortcomings of conventional TCAM hardware, current route lookup is mainly based on large-capacity memories, including static memory (SRAM) and dynamic memory (DRAM). Due to the particular characteristics of DRAM, it is currently carried mainly as an external (plug-in) part and is difficult to integrate into a chip; it also requires periodic refresh and switching time for reading and writing different regions, which makes lookup delays variable and uncertain. DRAM is therefore unsuitable for lookup requirements that demand high performance and fixed latency.
In terms of lookup method, traditional linear search and binary-tree search share the drawback of slow lookup; when the lookup table is huge, the performance loss is especially obvious. Although HASH-table lookup is relatively fast, HASH collisions occur under certain scenarios and data sets and require extra processing, which increases processing delay, prevents a uniform lookup latency from being guaranteed, and greatly degrades lookup performance.
Disclosure of Invention
Compared with the prior art, the SRAM-based longest-prefix-match hardware-system lookup method of the invention designs various logic circuits in the HASH module to generate HASH values, reducing the calculation load of the HASH module and increasing operating speed; it also designs comparison logic circuits with different bit widths, saving precious memory cells and making bit-width selection more flexible.
The invention is realized by the following technical scheme: a method for SRAM-based longest prefix match hardware system lookup, the hardware including a storage module, the system including a HASH-based search algorithm module, a first comparison module supporting different lengths, and an output comparison module, the method comprising the steps of:
s100: the search key obtains an output value through a search algorithm module;
s200: the output value is output with priority through a first comparison module supporting different lengths;
s300: and the final information result is compressed and processed by an output comparison module and output.
Preferably, the search algorithm module comprises a primary HASH function module and a secondary HASH function module. The primary HASH function module hashes key values over a large range evenly down to smaller returned HASH values for a given data set; if a HASH collision occurs during processing by the primary HASH function module, the secondary HASH function module processes the key value to obtain the HASH value.
Preferably, the secondary HASH function module comprises a HASH1 function module and a HASH2 function module;
the HASH1 function module is a lightweight HASH function, a predefined rule can be directly configured according to the storage unit, and when the HASH1 function module is directly matched, additional information which is directly matched is generated and enters the next round of operation;
the HASH2 function module is a HASH function with less operation parameters and less operation complexity.
Preferably, the first comparison module comprises an LPM_RAM module, a plurality of search modules with different table-entry widths, and a direct matching module, and step S200 comprises the following steps:
s201: addressing the output value in the LPM_RAM module and outputting first data;
s202: following step S201, the direct matching module outputs to the output comparison module and flags whether the direct match hit; if it did not hit, a search module with the matching table-entry width is selected for output via conditional operation.
Preferably, the search modules with different table-entry widths comprise a short-table search module with an entry width of 64 bits, a medium-table search module with an entry width of 128 bits, and a long-table search module with an entry width of 192 bits.
Preferably, the storage module is formed by splicing at least two storage units with the same depth and bit width.
Preferably, the bit width of the memory unit is 64 bits.
The invention discloses a method for lookup in an SRAM-based longest-prefix-match hardware system, and compared with the prior art:
The hardware circuit design of the invention uses SRAM to realize the functions required for longest-prefix matching well. The HASH module of the invention designs multiple logic circuits to generate the HASH value, with the aim of reducing the HASH module's calculation load and increasing operating speed; the first comparison module is redesigned with comparison logic circuits of different bit widths, saving precious memory cells and offering more flexibility in bit-width selection.
Drawings
FIG. 1 is a schematic diagram of an exemplary SRAM-based longest prefix match hardware system;
FIG. 2 is a schematic structural diagram of a search algorithm module in an embodiment;
FIG. 3 is a diagram illustrating a first comparison module according to an embodiment;
fig. 4 is an exemplary embodiment of a method for searching information in the present invention.
Detailed Description
The following examples are given for the detailed implementation and specific operation of the present invention, but the scope of the present invention is not limited to the following examples.
As shown in fig. 1, the present invention discloses a method for lookup in an SRAM-based longest-prefix-match hardware system, where the hardware comprises a storage module and the system comprises a HASH-based search algorithm module, a first comparison module supporting different lengths, and an output comparison module. The method comprises the following steps:
s100: the search key obtains an output value through a search algorithm module;
s200: the output value is output with priority through a first comparison module supporting different lengths;
s300: and the final information result is compressed and processed by an output comparison module and output.
The search algorithm module comprises a primary HASH function module and a secondary HASH function module. The primary HASH function module hashes key values over a large range evenly down to smaller returned HASH values for a given data set; if a HASH collision occurs during processing by the primary HASH function module, the secondary HASH function module processes the key value to obtain the HASH value.
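The primary/secondary fallback can be sketched as follows (the three callables are illustrative placeholders; in the real hardware the collision test comes from the stored entry, which this sketch abstracts away):

```python
def two_level_hash(key, primary, secondary, collides):
    """primary: well-distributed but costly HASH (HASH3 in the embodiment);
    secondary: cheap fallback; collides: reports whether the primary value
    collided for the current data set."""
    value = primary(key)
    if collides(value):
        value = secondary(key)  # resolve the collision at small extra cost
    return value
```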
As shown in fig. 2, a specific embodiment of the invention is disclosed. In this embodiment, key values are input into one or more HASH modules; fig. 2 includes HASH1, HASH2, and HASH3, with HASH3 serving as the primary HASH function. The primary HASH function (HASH3) has better hashing performance: under common data sets such as IPv4 and IPv6, it hashes a large range of key values evenly to a small returned HASH value, with only a low probability of HASH collision. The cost, of course, is greater functional complexity and more input parameters. Generally, the input parameters are pre-stored in a separate table entry; as a key value is input, the entry contents are indexed by the entry id number, so the input parameters are obtained indirectly.
In fig. 2, HASH1 is a very simple processing function that can be understood as a lightweight HASH function; in a specific embodiment it can be configured as a predefined rule (adding the non-X length of that rule) directly according to the configuration of the storage unit or table entry. If it matches directly, additional information such as DIRECT_MATCH is generated and enters the next round of operation. HASH2 is usually a conventional HASH function; such functions generally have fewer operation parameters and lower computational complexity. In a typical embodiment, a modulo operation that is fast and efficient in hardware is adopted, namely direct truncation of the low 14 bits of the key, so it can be implemented at very little cost, adding only a small amount of circuitry.
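The low-14-bit truncation mentioned for HASH2 reduces to a single bitwise AND (equivalently, key mod 2^14), which in hardware is pure wiring with no arithmetic logic:

```python
def hash2_value(key):
    # keep only the low 14 bits: equivalent to key % (2 ** 14)
    return key & 0x3FFF
```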
The purpose of providing several simple HASH functions (HASH1 and HASH2 in this embodiment) is to reduce or avoid collisions of the primary HASH function, or to obtain HASH values from another HASH function when the primary HASH function collides. Generally, for a given class of data set with a large sample size, a single primary HASH function alone is prone to collisions; employing a secondary HASH function achieves collision avoidance at a relatively small cost.
In the embodiment of fig. 2, the KEY value is input into the HASH3 path. In the preprocessing module, NX_REG[7:0] represents the length of the non-X portion of KEY; the non-X part of the key value is sent to the preprocessing-2 module for masking, to remove bits of less interest, and finally to the HASH3 module for the HASH operation, yielding HASH3_value. Meanwhile, a direct_match signal is obtained in the HASH1 path and a HASH2_value in the HASH2 path; according to the configuration parameters of the table entry, a HASH_value and direct_match are selected from HASH2_value and HASH3_value for use in the next stage.
The first comparison module comprises an LPM_RAM module and a plurality of search modules with different table-entry widths, and step S200 comprises the following steps:
s201: addressing the output value in the LPM_RAM module and outputting first data;
s202: following step S201, the direct matching module outputs to the output comparison module and flags whether the direct match hit; if it did not hit, the search module with the matching table-entry width is selected for output via conditional operation.
For ease of understanding, a specific embodiment is disclosed, as shown in fig. 3. The output HASH value is used for addressing in the LPM_RAM module (comparison module), and QDATA (the related data) is output to the next module for further processing, while DIRECT_MATCH is output to the next module to flag whether a direct match occurred. If the direct match did not hit, conditional operation determines which of three modes applies: short-table search (entry width 64 bits), medium-table search (entry width 128 bits), or long-table search (entry width 192 bits).
Furthermore, the search modules with different table-entry widths comprise a short-table search module with an entry width of 64 bits, a medium-table search module with an entry width of 128 bits, and a long-table search module with an entry width of 192 bits. The storage module is formed by splicing at least two storage units of identical depth and bit width, each storage unit having a bit width of 64 bits.
The memories LPM_RAM0 to LPM_RAM3 have identical depth and bit width, and by configuration the 4 memories can be spliced to support 4 entries of 64 bits (DATA0, DATA1, DATA2, DATA3), 2 entries of 128 bits (DATA4, DATA5), or 1 entry of 192 bits (DATA6). This entry width and splicing method are only one specific embodiment; the entry width can be changed flexibly for different scenarios, and the splicing method can be more flexible still.
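The splicing of the four 64-bit memories into 64/128/192-bit entries can be sketched as follows (the widths and word count follow this embodiment; the integer packing is an illustrative model, not the circuit):

```python
WORD = 64  # bit width of each storage unit (LPM_RAM0..LPM_RAM3)

def splice(words, mode):
    """words: the four 64-bit read-data words, as Python ints."""
    if mode == "short":        # 4 entries of 64 bits: DATA0..DATA3
        return list(words)
    if mode == "medium":       # 2 entries of 128 bits: DATA4, DATA5
        return [(words[0] << WORD) | words[1],
                (words[2] << WORD) | words[3]]
    if mode == "long":         # 1 entry of 192 bits: DATA6 (4th word unused)
        return [(words[0] << 2 * WORD) | (words[1] << WORD) | words[2]]
    raise ValueError(f"unknown mode: {mode}")
```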
To multiplex the three different modes, the circuit design must implement different circuit modes to support them. The circuit design in this example has three modes, SMALL_RECORD_SEARCH, MEDIUM_RECORD_SEARCH, and LONG_RECORD_SEARCH, whose comparison functions are implemented by the CMP_xxBITS_BLK, CMP_yyBITS_BLK, and CMP_zzBITS_BLK modules to obtain the result.
As shown in fig. 4, in a specific embodiment, for the 64-bit, 128-bit, and 192-bit table application scenarios, the functions are implemented by the CMP_64_BLK, CMP_128_BLK, and CMP_192_BLK modules of the circuit diagram in fig. 4, respectively; one GROUP_BLK contains 4 CMP_64_BLKs, 2 CMP_128_BLKs, and 1 CMP_192_BLK. Finally these modules output their comparison results to the output comparison module, which selects among them to obtain the final value.
In a specific embodiment, the first stage sends the KEY value and the PROFILE serial number of the search mode to the hash module for lookup, obtaining the corresponding information in 64-bit units. The comparison stage outputs up to seven comparison results (four short-table results, two medium-table results, and one long-table result) from the comparison modules of different bit widths, which are then passed to the next-stage processing module.
The role of OC_BLK shown in fig. 4 is output comparison. After the previous stage's comparison operations, seven comparison-module results are output; this module then judges according to the hit condition of the seven results and the configuration information of entry length. The output rule specifies that the longest entry has the highest priority, the medium-length entry the next highest, and the shortest entry the lowest. When one module has several outputs of the same type, the output with the highest priority is selected as that module's output; when several modules produce outputs, the highest-priority output among the modules is selected.
Finally, the match condition and the data information ADATA obtained on a match hit are output through OC_BLK.
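The priority rule of OC_BLK, longest entry class first and then module order, can be sketched as follows (the tuple encoding is an assumption made for the example, not the patent's signal format):

```python
LONG, MEDIUM, SHORT = 2, 1, 0  # entry-length classes, highest priority first

def oc_blk_select(results):
    """results: (hit, length_class, value) from the seven comparison
    modules, listed in module order. Python's max() returns the first of
    several equal-key items, reproducing the module-order tie-break."""
    hits = [r for r in results if r[0]]
    if not hits:
        return None  # no comparison module hit
    return max(hits, key=lambda r: r[1])[2]
```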
In summary, compared with the prior art, the hardware circuit design of the invention uses SRAM to realize the functions required for longest-prefix matching well. The invention designs various logic circuits in the HASH module to generate HASH values, with the aim of reducing the HASH module's calculation load and increasing operating speed; the comparison module is redesigned with comparison logic circuits of different bit widths, saving precious memory cells and offering more flexibility in bit-width selection.
The above description is only a preferred embodiment of the present invention, and the scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art could readily conceive within the technical scope and method concept disclosed by the present invention shall fall within the scope of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

Claims (7)

1. A method for searching a longest prefix matching hardware system based on SRAM, wherein the hardware comprises a storage module, the system comprises a HASH-based search algorithm module, a first comparison module supporting different lengths, and an output comparison module, the method comprises the following steps:
s100: the search key obtains an output value through a search algorithm module;
s200: the output value is output with priority through a first comparison module supporting different lengths;
s300: and the final information result is compressed and processed by an output comparison module and output.
2. The method of claim 1, wherein said search algorithm module comprises a primary HASH function module and a secondary HASH function module;
the primary HASH function module hashes key values over a large range evenly down to smaller returned HASH values for a given data set;
and if HASH conflict occurs in the processing process of the main HASH function module, the secondary HASH function module processes the key value to acquire the HASH value.
3. The method of claim 2, wherein said secondary HASH function module comprises a HASH1 function module and a HASH2 function module;
the HASH1 function module is a lightweight HASH function that can be configured directly as a predefined rule according to the storage unit; when the HASH1 function module matches directly, additional direct-match information is generated and passed into the next round of operation;
the HASH2 function module is a HASH function with fewer operation parameters and lower computational complexity.
4. The method of claim 3, wherein the first comparison module comprises an LPM_RAM module, a plurality of search modules with different table-entry widths, and a direct matching module, and step S200 comprises the steps of:
s201: addressing the output value in the LPM_RAM module and outputting first data;
s202: following step S201, the direct matching module outputs to the output comparison module and flags whether the direct match hit; if it did not hit, the search module with the matching table-entry width is selected for output via conditional operation.
5. The method of claim 3, wherein the search modules with different table-entry widths comprise a short-table search module with an entry width of 64 bits, a medium-table search module with an entry width of 128 bits, and a long-table search module with an entry width of 192 bits.
6. The method of claim 3, wherein the storage module is formed by splicing at least two storage units with the same depth and bit width.
7. The method of claim 6, wherein the bit width of the memory cell is 64 bits.
CN202110423364.0A 2021-04-20 2021-04-20 Method for searching longest prefix matching hardware system based on SRAM Pending CN112818185A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110423364.0A CN112818185A (en) 2021-04-20 2021-04-20 Method for searching longest prefix matching hardware system based on SRAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110423364.0A CN112818185A (en) 2021-04-20 2021-04-20 Method for searching longest prefix matching hardware system based on SRAM

Publications (1)

Publication Number Publication Date
CN112818185A true CN112818185A (en) 2021-05-18

Family

ID=75862588

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110423364.0A Pending CN112818185A (en) 2021-04-20 2021-04-20 Method for searching longest prefix matching hardware system based on SRAM

Country Status (1)

Country Link
CN (1) CN112818185A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113806403A (en) * 2021-09-22 2021-12-17 浙江锐文科技有限公司 Method for reducing search matching logic resources in intelligent network card/DPU
CN115622934A (en) * 2022-10-09 2023-01-17 苏州盛科通信股份有限公司 Routing information storage method, device, network equipment and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101827137A (en) * 2010-04-13 2010-09-08 西安邮电学院 Hash table-based and extended memory-based high-performance IPv6 address searching method
CN104572983A (en) * 2014-12-31 2015-04-29 北京锐安科技有限公司 Construction method based on hash table of memory, text searching method and corresponding device
CN105141525A (en) * 2015-06-30 2015-12-09 杭州华三通信技术有限公司 IPv6 routing lookup method and IPv6 routing lookup device
WO2016060715A1 (en) * 2014-10-16 2016-04-21 Cisco Technology, Inc. Hash-based address matching
CN106330716A (en) * 2015-06-30 2017-01-11 杭州华三通信技术有限公司 IP routing search method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
熊忠阳 (Xiong Zhongyang) et al.: "Research on Longest Prefix Matching Lookup Algorithms for Packet IP Routing", 《世界科技研究与发展》 (World Sci-Tech R&D) *

Similar Documents

Publication Publication Date Title
CN102377664B (en) TCAM (ternary content addressable memory)-based range matching device and method
US8150891B2 (en) System for IP address lookup using substring and prefix matching
EP1623347B1 (en) Comparison tree data structures and lookup operations
US6658482B1 (en) Method for speeding up internet protocol address lookups with efficient use of memory
US6985483B2 (en) Methods and systems for fast packet forwarding
US7415472B2 (en) Comparison tree data structures of particular use in performing lookup operations
US6928430B1 (en) Prefix match search scheme
US20200228449A1 (en) Exact match and ternary content addressable memory (tcam) hybrid lookup for network device
US20140086248A1 (en) Method for IP Longest Prefix Match Using Prefix Length Sorting
Bando et al. Flashtrie: Hash-based prefix-compressed trie for IP route lookup beyond 100Gbps
US20040233692A1 (en) Magnitude comparator based content addressable memory for search and sorting
US20040044868A1 (en) Method and apparatus for high-speed longest prefix match of keys in a memory
CN112818185A (en) Method for searching longest prefix matching hardware system based on SRAM
US20060209725A1 (en) Information Retrieval Architecture for Packet Classification
CN112667526B (en) Method and circuit for realizing access control list circuit
CN109039911B (en) Method and system for sharing RAM based on HASH searching mode
CN108075979B (en) Method and system for realizing longest mask matching
JP3569802B2 (en) Routing table search device and search method
US20060198379A1 (en) Prefix optimizations for a network search engine
US10476785B2 (en) IP routing search
Rojas-Cessa et al. Parallel search trie-based scheme for fast IP lookup
US7934198B2 (en) Prefix matching structure and method for fast packet switching
CN109194574B (en) IPv6 route searching method
CN107204926B (en) Rapid route searching method for preprocessing cache
Lin et al. Improved IP lookup technology for trie-based data structures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210518
