WO2019241926A1 - Access control list management method and device - Google Patents
- Publication number
- WO2019241926A1 (PCT/CN2018/091954)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sram
- node
- stored
- keyword
- compression
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/40—Network security protocols
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
Definitions
- the present application relates to the field of data processing technologies, and in particular, to a method and device for managing an access control list (Access Control List, ACL).
- An ACL is a list of instructions configured on forwarding devices such as routers and switches.
- The ACL includes multiple rules.
- When a forwarding device receives a packet, it can extract a keyword from the packet, find the rule that matches the extracted keyword among the multiple rules included in the ACL, and then process the received packet according to the result. Because the number of rules included in an ACL can be extremely large, to facilitate searching, a forwarding device usually creates a corresponding search tree according to the ACL and then stores the ACL according to the layers included in the search tree.
- the forwarding device may select multiple specific bits in the rules, according to the characteristics of the multiple rules included in the ACL, and use them to divide the multiple rules into multiple buckets.
- the maximum capacity of each of the multiple buckets is the same.
- the forwarding device may create a search tree according to the process of dividing the multiple rules, with each leaf node of the search tree eventually pointing to a bucket. For example, suppose the maximum capacity of a bucket is 3.
- the ACL includes 12 rules.
- the forwarding device can first divide the 12 rules into 2 groups according to the 6th bit: rules whose 6th bit is 0 form the first group, rules whose 6th bit is 1 form the second group, and rules whose 6th bit is the wildcard character must be placed in both groups at the same time.
- After the first group and the second group are obtained, since the number of rules in each of them is greater than 3, that is, greater than the maximum capacity of a bucket, both groups must be divided further. The first group is divided according to the 3rd bit to obtain the third and fourth groups, and the second group is divided according to the 1st bit to obtain the fifth and sixth groups. The numbers of rules included in the fourth group, the fifth group, and the sixth group are all less than 3, so the rules of the fourth, fifth, and sixth groups can each be stored in a bucket. The number of rules included in the third group, however, is still greater than 3.
- the third group can be further divided according to the 13th bit to obtain the seventh group and the eighth group. Because the numbers of rules included in the seventh and eighth groups are both less than 3, the rules included in the seventh and eighth groups can each be stored in a bucket. Through the above process, the 12 rules are divided into 5 buckets, and according to the division process the search tree shown in FIG. 2 can be created. Each leaf node of the search tree stores the storage address of the corresponding bucket, and each node other than a leaf node stores segmentation information indicating the specific bit and the specific division method used to divide the rules at that node.
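- The division process above can be sketched in software as follows. This is a hedged illustration, not the patent's method: a real forwarding device chooses a different split bit per group according to the rules' characteristics, while this sketch simply consumes a fixed list of bit positions.

```python
# Illustrative sketch only: recursively split ACL rules into buckets of
# bounded capacity by testing one bit per split. Rules are strings over
# {'0', '1', '*'}; a rule whose tested bit is the wildcard '*' must be
# placed in both resulting groups, as described in the text.

BUCKET_CAPACITY = 3

def split_rules(rules, bit_choices):
    """Return a list of buckets (lists of rules). A group is split on the
    next bit in bit_choices whenever it exceeds BUCKET_CAPACITY."""
    if len(rules) <= BUCKET_CAPACITY or not bit_choices:
        return [rules]
    bit, rest = bit_choices[0], bit_choices[1:]
    zeros = [r for r in rules if r[bit] in ('0', '*')]  # bit is 0 or wildcard
    ones = [r for r in rules if r[bit] in ('1', '*')]   # bit is 1 or wildcard
    return split_rules(zeros, rest) + split_rules(ones, rest)
```

Note how a wildcard rule such as `'*00'` is duplicated into both groups of a split, which is why the total number of stored rule copies can exceed the number of rules in the ACL.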
- the forwarding device can start from the first layer of the search tree, treat each preset number of adjacent layers as a compression node layer, store the multiple nodes in each compression node layer in at least one compression node, and then store each compression node layer in a separate static random access memory (SRAM).
- a compression node layer includes at least one compression node.
- the present application provides an access control list management method, device, and computer storage medium, which can be used to solve the problem in the related art of storage space wasted by each SRAM storing only one compression node layer.
- the technical scheme is as follows:
- a method for managing an access control list includes:
- obtaining the number N of compression node layers, where each compression node layer is formed by a preset number of adjacent layers of the search tree corresponding to the access control list ACL and includes at least one compression node, and N is an integer greater than or equal to 1;
- storing the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one of the N compression node layers.
- in this application, the forwarding device may store the N compression node layers in fewer than N SRAMs, that is, multiple compression node layers may share one SRAM, which effectively avoids the storage space wasted when each SRAM stores only one compression node layer and improves the utilization of the storage space of the SRAM.
- the storing the N compression node layers in the M SRAMs includes:
- the method further includes:
- the searching for a rule matching the keyword based on the keyword and the N compression node layers stored in the M SRAMs includes:
- if the k-th compression node layer is stored in the i-th SRAM, finding, based on the keyword, the leaf node corresponding to the keyword from the at least one compression node included in the k-th compression node layer.
- the method further includes:
- searching for a rule matching the keyword based on the address of a bucket stored in the found leaf node includes:
- an access control list management device is provided, and the access control list management device has a function of implementing the behavior of the access control list management method in the first aspect.
- the apparatus for managing an access control list includes at least one module, and the at least one module is configured to implement the method for managing an access control list provided by the first aspect.
- an access control list management device includes a processor and a memory, where the memory is configured to store a program that supports the access control list management device in performing the method provided by the first aspect.
- the processor is configured to execute a program stored in the memory.
- the access control list management device may further include a communication bus for establishing a connection between the processor and the memory.
- a computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the method for managing an access control list according to the first aspect.
- a computer program product containing instructions which, when executed on a computer, causes the computer to execute the access control list management method described in the first aspect.
- the technical solution provided by this application has the following beneficial effect: the number N of compression node layers, each formed by a preset number of adjacent layers of the search tree corresponding to the ACL, is obtained, and the N compression node layers are stored in M SRAMs, where M is less than N and each SRAM stores at least one compression node included in at least one of the N compression node layers. In the embodiments of the present application, the N compression node layers can thus be stored in fewer than N SRAMs, so that more than one compression node layer can be stored in each SRAM, which avoids the storage space wasted when each SRAM stores only one compression node layer and improves the utilization of the storage space of the SRAM.
- FIG. 1 is a schematic diagram of dividing multiple rules included in an ACL provided by related technologies
- FIG. 2 is a schematic diagram of a search tree created according to an ACL division process in the related art
- FIG. 3 is a schematic diagram of an applicable implementation environment of an ACL management method according to an embodiment of the present application.
- FIG. 4 is a schematic structural diagram of a forwarding device according to an embodiment of the present application.
- FIG. 5 is a flowchart of an ACL management method according to an embodiment of the present application.
- FIG. 6 is a schematic diagram of a compressed node layer of a search tree corresponding to an ACL according to an embodiment of the present application
- FIG. 7 is a schematic diagram illustrating storing N compression node layers on M SRAMs according to an embodiment of the present application.
- FIG. 8 is a flowchart of an ACL management method according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of a distribution of N compression node layers on M SRAMs provided by an embodiment of the present application.
- FIG. 10 is a schematic structural diagram of an access control list management apparatus according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of an implementation environment to which an access control list management method according to an embodiment of the present application is applied.
- the implementation environment may include a sending-end device 301, a forwarding device 302, and a receiving-end device 303.
- the sending-end device 301 may communicate with the forwarding device 302 through a wireless network or a wired network, and the receiving-end device 303 may likewise communicate with the forwarding device 302 through a wireless network or a wired network.
- the sending-end device 301 may be used to send a message to the forwarding device 302, and the message may be one destined for the receiving-end device 303.
- the forwarding device 302 includes multiple SRAMs, and the forwarding device 302 may be configured with multiple ACLs. Among them, each ACL can include multiple rules configured by the user.
- the forwarding device 302 may divide the multiple rules in each ACL into multiple buckets and create a search tree for each ACL according to the division process. Since the number of layers of an ACL's search tree may be large, the forwarding device 302 may divide the multiple layers included in the search tree of each ACL into multiple compression node layers, where the multiple nodes in each compression node layer are stored in at least one compression node.
- the forwarding device 302 may store multiple compressed node layers included in each ACL in multiple SRAMs.
- the forwarding device 302 may further include a ternary content addressable memory (TCAM).
- the forwarding device 302 can parse the received message sent by the sending-end device 301, extract a keyword from the message, look up the rule corresponding to the keyword through the search tree of each ACL, and process the message differently according to the search result. For example, if a matching rule is found, the forwarding device 302 may forward the message to the receiving-end device 303; if not, the forwarding device 302 may refuse to forward the message.
- the sending-end device 301 and the receiving-end device 303 may be computer devices such as a smart phone, a tablet computer, a notebook computer, and a desktop computer.
- the forwarding device 302 may be a router, a switch, or another routing and switching access integrated device.
- FIG. 4 is a schematic structural diagram of a forwarding device 302 in the foregoing implementation environment provided by an embodiment of the present application.
- the forwarding device mainly includes a transmitter 3021, a receiver 3022, a memory 3023, a processor 3024, and a communication bus 3025.
- the structure of the forwarding device 302 shown in FIG. 4 does not constitute a limitation on the forwarding device 302; it may include more or fewer components than shown in the figure, combine some components, or arrange the components differently, which is not limited in the embodiments of the present application.
- the transmitter 3021 may be configured to send data and/or signaling to the receiving-end device 303.
- the receiver 3022 may be used to receive data and/or signaling sent by the sending-end device 301.
- the memory 3023 may be used to store the multiple rules included in the ACL, the search tree of the ACL, and the data sent by the sending-end device 301 described above, and the memory 3023 may also be used to store the program code for executing the method provided by the embodiments of the present application.
- the memory 3023 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by the integrated circuit, but is not limited thereto.
- the memory 3023 may exist independently, and is connected to the processor 3024 through a communication bus 3025. The memory 3023 may also be integrated with the processor 3024.
- the processor 3024 is the control center of the forwarding device 302.
- the processor 3024 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program procedures of the solutions of this application.
- the processor 3024 can implement the ACL storage method provided by the embodiment of the present invention by running or executing software programs and / or modules stored in the memory 3023 and calling data stored in the memory 3023.
- processor 3024 and the memory 3023 can transmit information through a communication bus 3025.
- FIG. 5 is a flowchart of an ACL management method according to an embodiment of the present application.
- the method can be applied to the forwarding device 302 in the implementation environment shown in FIG. 3.
- the method includes the following steps:
- Step 501 Obtain the number N of compressed node layers.
- the forwarding device may start from the first layer of the search tree and treat each preset number of adjacent layers as a compression node layer, thereby dividing the search tree into multiple compression node layers; the forwarding device can then count the number N of compression node layers included in the search tree.
- each compression node layer includes multiple layers of the search tree.
- each compression node layer will include multiple nodes, and the forwarding device can store the multiple nodes in each compression node layer within at least one compression node.
- FIG. 6 is a schematic diagram of a compressed node layer of a search tree corresponding to an ACL according to an embodiment of the present application.
- the forwarding device may start from the first layer, and use the first layer to the fourth layer as the first compression node layer.
- the multiple nodes in the first compression node layer may be stored in one compression node, that is, the first compression node layer includes one compression node. Starting from the fifth layer, the fifth to eighth layers are used as the second compression node layer.
- it should be noted that, in the second compression node layer, the two leaf nodes in the fifth layer can be stored in one compression node, while each of the remaining 4 nodes in the fifth layer serves as a top-level node, and each top-level node together with the branches derived from it is stored in one compression node, thereby obtaining four compression nodes.
- the forwarding device can process according to the above method, so as to obtain N compressed node layers.
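- Under the assumption that every preset number of adjacent layers forms one compression node layer, the count N obtained in step 501 can be modeled as a simple ceiling division. This formula is an illustrative reading of the text, not language from the patent:

```python
import math

def count_compression_layers(tree_depth, layers_per_group):
    """N = number of compression node layers when every `layers_per_group`
    adjacent search-tree layers form one compression node layer."""
    return math.ceil(tree_depth / layers_per_group)
```

For the FIG. 6 example, a search tree of 8 layers grouped 4 at a time yields N = 2.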
- Step 502 Store the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one of the N compression node layers.
- in the related art, the forwarding device stores the at least one compression node included in each compression node layer in its own SRAM, that is, storing N compression node layers requires N SRAMs.
- the forwarding device may store N compression node layers in less than N SRAMs, that is, each SRAM may store at least one compression node layer.
- a plurality of compression node layers can share a SRAM, which effectively avoids the problem of wasted space of the SRAM.
- the forwarding device may store the N compressed node layers in M SRAMs in the following ways.
- The first method: according to the sequence of the N compression node layers, starting from the first SRAM of the M SRAMs and proceeding in order from the first SRAM to the M-th SRAM, one compression node layer is stored in each SRAM in turn. When the M-th SRAM has been reached, storage starts again from the first SRAM of the M SRAMs, with one compression node layer stored in each SRAM in order from the first SRAM to the M-th SRAM, until all N compression node layers are stored.
- the forwarding device may determine the sequence of the compressed node layer according to the position of the compressed node layer on the search tree and in a top-down order. That is, the compressed node layer at the top of the search tree is the first compressed node layer, the next layer pointed to by the last node in the first compressed node layer is the second compressed node layer, and so on.
- the M SRAMs may also be arranged in a certain order.
- the forwarding device may sequentially store the first compression node layer to the M-th compression node layer, in the order of the N compression node layers, on the first SRAM to the M-th SRAM.
- the first compression node layer is stored on the first SRAM
- the second compression node layer is stored on the second SRAM
- the i-th compression node layer is stored on the i-th SRAM
- the forwarding device can then return to the first SRAM, store the (M+1)-th compression node layer on the first SRAM, store the (M+2)-th compression node layer on the second SRAM, and so on, until all N compression node layers are stored.
- the i-th SRAM of the M SRAMs will store the i-th compression node layer, the (M+i)-th compression node layer, the (2M+i)-th compression node layer, and so on, where 2M+i is less than or equal to N.
- FIG. 7 is a schematic diagram of storing N compression node layers on M SRAMs according to an embodiment of the present application.
- SRAM1 is the first SRAM
- SRAM2 is the second SRAM
- Conde_stage_01 is the first compression node layer
- Conde_stage_02 is the second compression node layer
- Conde_stage_10 is the tenth compression node layer.
- the forwarding device can store Conde_stage_01 on SRAM1, Conde_stage_02 on SRAM2, Conde_stage_03 on SRAM3, and Conde_stage_04 on SRAM4.
- the forwarding device can return to SRAM1, store Conde_stage_05 on SRAM1, Conde_stage_06 on SRAM2, Conde_stage_07 on SRAM3, and Conde_stage_08 on SRAM4. After that, the forwarding device can return to SRAM1 again, store Conde_stage_09 on SRAM1, and store Conde_stage_10 on SRAM2. In this way, 10 compression node layers are stored on 4 SRAMs.
- Conde_stage_01, Conde_stage_05, and Conde_stage_09 are stored on SRAM1
- Conde_stage_02, Conde_stage_06, and Conde_stage_10 are stored on SRAM2
- Conde_stage_03 and Conde_stage_07 are stored on SRAM3
- Conde_stage_04 and Conde_stage_08 are stored on SRAM4.
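- The round-robin placement of the first method, in which the i-th SRAM ends up holding layers i, M+i, 2M+i, and so on, can be sketched as follows. This is an illustrative software model only; layer and SRAM numbers are 1-based to match the text:

```python
# Sketch of the first storage method: compression node layers are assigned
# to SRAMs round-robin, so the i-th SRAM (1-based) holds layers
# i, M+i, 2M+i, ...

def round_robin_assignment(n_layers, m_srams):
    """Map each compression node layer (1..n_layers) to an SRAM (1..m_srams)."""
    assignment = {}
    for layer in range(1, n_layers + 1):
        assignment[layer] = (layer - 1) % m_srams + 1
    return assignment
```

For the FIG. 7 example of 10 layers on 4 SRAMs, SRAM1 receives layers 1, 5, and 9, and SRAM2 receives layers 2, 6, and 10.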
- The second method: according to the sequence of the N compression node layers, the N compression node layers are divided into Q consecutive groups, where the number of compression node layers included in each of the first Q-1 groups is M and the number of compression node layers included in the last of the Q groups is less than or equal to M; the i-th compression node layer of the compression node layers included in each of the Q groups is stored in the i-th SRAM of the M SRAMs, where i is an integer greater than 0 and not greater than M.
- the foregoing first method mainly introduces the implementation process of storing N compression node layers one by one into M SRAMs.
- the forwarding device may first divide the N compressed node layers into Q groups in order, and then store the compressed node layers included in each group in the Q group in parallel.
- the forwarding device may, according to the sequence of the N compression node layers and starting from the first compression node layer, place each set of M adjacent compression node layers into a group, thereby obtaining Q groups. If N is an integer multiple of M, each of the Q groups includes M compression node layers; if N is not an integer multiple of M, the number of compression node layers included in the last of the Q groups will be less than M.
- for each group, the forwarding device can store the first compression node layer of the group on the first SRAM of the M SRAMs, the second compression node layer on the second SRAM, and so on, until the compression node layers included in each group are stored.
- the forwarding device can store the compression node layers included in each of the Q groups in parallel, to speed up storage.
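- A sketch of the second method: the N layers are first cut into Q consecutive groups of at most M layers, and within each group the i-th layer maps to the i-th SRAM, so all layers of one group can be written in parallel. This is an illustrative model, not the patent's implementation:

```python
# Sketch of the second storage method: group the layers, then map each
# group's i-th layer to the i-th SRAM for a parallel write.

def group_layers(n_layers, m_srams):
    """Split layer numbers 1..n_layers into consecutive groups of size m_srams."""
    layers = list(range(1, n_layers + 1))
    return [layers[i:i + m_srams] for i in range(0, len(layers), m_srams)]

def parallel_store_plan(n_layers, m_srams):
    """For each group, pair (layer, SRAM index), both 1-based, for one
    parallel write round."""
    plan = []
    for group in group_layers(n_layers, m_srams):
        plan.append([(layer, idx + 1) for idx, layer in enumerate(group)])
    return plan
```

Note that this produces the same final layer-to-SRAM mapping as the first method; only the write order (group-parallel rather than one by one) differs.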
- The third method: according to the sequence of the N compression node layers, starting from the first SRAM of the M SRAMs and proceeding in order from the first SRAM to the M-th SRAM, one compression node layer is stored in each SRAM in turn. When the M-th SRAM has been reached, if the number of remaining compression node layers is less than M, then, according to the size of the remaining storage space of each of the M SRAMs, as many SRAMs as there are remaining compression node layers are selected from the M SRAMs, and the remaining compression node layers are stored in the selected SRAMs.
- otherwise, storage restarts, in the sequence of the N compression node layers, from the first SRAM of the M SRAMs, in order from the first SRAM to the M-th SRAM, with one compression node layer stored in turn on each SRAM.
- in other words, the forwarding device may first store the first to the M-th compression node layers of the N compression node layers on the first to the M-th SRAMs in sequence. After that, the forwarding device can check whether the number of remaining unstored compression node layers is less than M. If it is not less than M, the forwarding device can start again from the first SRAM and, from the first SRAM to the M-th SRAM, store the (M+1)-th to the 2M-th compression node layers in turn.
- if the number of remaining compression node layers is less than M, the forwarding device can count the size of the remaining storage space of each of the M SRAMs, sort the SRAMs by remaining storage space in descending order, and select the first W SRAMs from the sorting result, where W is equal to the number of remaining compression node layers.
- the remaining compression node layers are then stored in the W selected SRAMs.
- for example, suppose 2 compression node layers remain and M is 4. The forwarding device can select, from the four SRAMs, the SRAM with the most remaining storage space and the SRAM ranked second. Assume the SRAM with the most remaining storage space is the third of the 4 SRAMs and the SRAM ranked second is the first SRAM. The first of the remaining compression node layers is then stored in the first SRAM, and the second of the remaining compression node layers is stored in the third SRAM.
- N compression node layers are stored on M SRAMs.
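- The selection step of the third method can be sketched as follows. `free_space` is a hypothetical list of each SRAM's remaining capacity; following the example above, the chosen SRAMs are used in ascending index order, so the first remaining layer goes to the lowest-numbered selected SRAM:

```python
# Sketch of the third storage method's remainder placement: the leftover W
# layers go to the W SRAMs with the most free space. free_space[i] is the
# remaining capacity of SRAM i+1 (SRAM numbers in the result are 1-based).

def place_remaining(remaining_layers, free_space):
    """Assign each remaining layer to one of the SRAMs with the most free space."""
    w = len(remaining_layers)
    # Rank SRAM indices by free space, descending, and take the top W.
    ranked = sorted(range(len(free_space)), key=lambda i: free_space[i], reverse=True)
    chosen = sorted(ranked[:w])  # use the chosen SRAMs in ascending index order
    return {layer: sram + 1 for layer, sram in zip(remaining_layers, chosen)}
```

With 2 layers left and free capacities of 5, 2, 8, and 1 units on SRAM1..SRAM4, SRAM3 ranks first and SRAM1 second, so the first remaining layer lands on SRAM1 and the second on SRAM3, matching the example in the text.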
- in this way, the forwarding device may store the N compression node layers in fewer than N SRAMs, that is, multiple compression node layers may share one SRAM, which effectively avoids the storage space wasted when each SRAM stores only one compression node layer and improves the utilization of the storage space of the SRAM.
- after the forwarding device receives a message, the method provided by the following embodiment can be used to find the rule that matches the message. Based on this, referring to FIG. 8, another ACL management method is provided. The method includes the following steps:
- Step 801 Obtain the number N of compressed node layers.
- For the implementation of this step, reference may be made to step 501 in the foregoing embodiment.
- Step 802 Store the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one of the N compression node layers.
- For the implementation of this step, reference may be made to step 502 in the foregoing embodiment.
- Step 803 When a message is received, keywords are extracted from the message.
- the forwarding device may extract corresponding keywords from the message according to the rules in the ACL.
- the rules in the ACL are used to classify packets based on the source address of the packet, and the forwarding device can extract the source address information in the packet to obtain keywords.
- the rules in the ACL are used to classify packets based on time period information, the forwarding device can obtain the time period information of the packet and use it as a keyword.
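- Keyword extraction depends on what the ACL rules classify on; the sketch below is purely illustrative, and the field names `src_addr` and `recv_time_period` are assumptions rather than terms from the patent:

```python
# Hypothetical sketch of keyword extraction for step 803: the field used
# as the lookup keyword is determined by what the ACL rules classify on,
# e.g. the packet's source address or its time period information.

def extract_keyword(packet, classify_on):
    """Build the lookup keyword from the packet field named by the ACL rules."""
    if classify_on == 'source_address':
        return packet['src_addr']
    if classify_on == 'time_period':
        return packet['recv_time_period']
    raise ValueError('unsupported classification field: ' + classify_on)
```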
- Step 804 Based on the keywords and the N compressed node layers stored in the M SRAMs, find a rule that matches the keywords.
- the forwarding device can find the leaf node that matches the keyword from the N compression node layers stored in the M SRAMs, and then, based on the address of the bucket stored in the found leaf node, obtain from the corresponding bucket the rule that matches the keyword. If the forwarding device does not find a leaf node matching the keyword in the N compression node layers, the forwarding device may discard the message.
- specifically, for the k-th compression node layer, the forwarding device finds, based on the keyword, the leaf node corresponding to the keyword from the at least one compression node included in the k-th compression node layer; if a leaf node corresponding to the keyword is found, the rule matching the keyword is then found based on the address of the bucket stored in the found leaf node.
- the forwarding device may start searching from the first SRAM, determine whether the first compression node layer is stored in the first SRAM, and if so, search the at least one compression node included in the first compression node layer for a leaf node matching the keyword. If a leaf node matching the keyword is found in the first compression node layer, the forwarding device can obtain the address of the bucket stored in the leaf node and pass the address of the bucket from the first SRAM to the second SRAM, from the second SRAM to the third SRAM, and so on, until the M-th SRAM is reached and the address of the bucket is output by the M-th SRAM. After that, the forwarding device can go to the corresponding bucket based on the bucket address and obtain the rule matching the keyword.
- if the first compression node layer is not stored in the first SRAM, the forwarding device may look for the first compression node layer in the second SRAM; if the second SRAM does not store the first compression node layer either, the forwarding device can continue searching in the next SRAM until it finds the first compression node layer, and then decide, according to the search result in the first compression node layer, whether to search the second compression node layer.
- when the N compression node layers are stored according to the first method or the second method, the compression node layers are distributed over the SRAMs in a fixed order: the first compression node layer is stored in the first SRAM, the second compression node layer in the second SRAM, and the i-th compression node layer correspondingly in the i-th SRAM, so the case where the k-th compression node layer is absent from the expected SRAM does not occur. In this case, the forwarding device need not perform the step of judging whether the k-th compression node layer is stored in the i-th SRAM, and may directly search the k-th compression node layer stored in the i-th SRAM for the leaf node corresponding to the keyword.
- when the N compression node layers are stored according to the third method, the forwarding device may first determine whether the k-th compression node layer is stored in the i-th SRAM. If it is, the forwarding device searches the k-th compression node layer for the leaf node corresponding to the keyword; if not, it continues to look for the k-th compression node layer in the next SRAM until it is found.
- if no leaf node corresponding to the keyword is found in the k-th compression node layer, the forwarding device can go to the SRAM after the i-th SRAM to look for the compression node layer that follows the k-th compression node layer. If that next compression node layer is stored in that SRAM, the forwarding device can search it for the leaf node corresponding to the keyword.
- the forwarding device needs to start again from the first SRAM, and find the next compression node layer of the kth compression node layer in the first SRAM.
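- Under the third method, locating the k-th compression node layer amounts to scanning the SRAMs cyclically, starting from the SRAM where the search currently stands, until one that stores the layer is found. A minimal sketch with 1-based indices; `sram_contents` is a hypothetical list of which layers each SRAM holds:

```python
# Sketch of the layer-location step for the third storage method: scan the
# M SRAMs cyclically from start_sram, wrapping around to the first SRAM
# after the M-th, until the SRAM storing layer k is found.

def find_layer_sram(k, sram_contents, start_sram):
    """Return the 1-based index of the SRAM storing compression node layer k,
    scanning cyclically from start_sram, or None if no SRAM stores it."""
    m = len(sram_contents)
    for step in range(m):
        idx = (start_sram - 1 + step) % m
        if k in sram_contents[idx]:
            return idx + 1
    return None
```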
- the forwarding device stores the N compressed node layers according to the first method or the second method.
- when performing a search, the forwarding device can start from SRAM1 and first look for the leaf node corresponding to the keyword in Conde_stage_01 stored in SRAM1. If a leaf node is found, the forwarding device can pass the address of the bucket stored in the leaf node from SRAM1 to SRAM2, then from SRAM2 to SRAM3 and from SRAM3 to SRAM4; the forwarding device obtains the address of the bucket from SRAM4 and, according to the address of the bucket, obtains the matching rule from the corresponding bucket.
- if no leaf node is found in Conde_stage_01, the forwarding device can search in Conde_stage_02 stored in SRAM2. If a leaf node is found there, it can be processed by referring to the foregoing method; if not, the search continues downward. If no leaf node has been found by Conde_stage_04 in SRAM4, the forwarding device may return to SRAM1 and continue looking for leaf nodes in order starting from Conde_stage_05.
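- The early-exit behavior of this search can be modeled as follows. This is a simplified software model, not the patent's hardware pipeline: `layer_lookup` is a hypothetical per-layer search function returning a bucket address or None, and the visit count assumes a found bucket address still rides through the remaining SRAMs of the current pass, as described above:

```python
# Sketch of the search pipeline under the first storage method: the lookup
# walks SRAM1..SRAMM in passes; as soon as some layer yields a leaf node,
# the bucket address is passed through the remaining SRAMs of that pass
# and output at the last one, ending the search early.

def pipeline_search(keyword, layers_per_sram, layer_lookup):
    """layers_per_sram: one list of layer numbers per SRAM, in SRAM order.
    Returns (bucket_address, srams_traversed), bucket_address None if no match."""
    m = len(layers_per_sram)
    pass_num = 0  # how many full passes over the M SRAMs have completed
    while True:
        advanced = False
        for layers in layers_per_sram:
            if pass_num < len(layers):
                advanced = True
                bucket = layer_lookup(layers[pass_num], keyword)
                if bucket is not None:
                    # Found: the address rides through to the M-th SRAM.
                    return bucket, pass_num * m + m
        if not advanced:  # no SRAM holds any further layers
            return None, pass_num * m
        pass_num += 1
```

For 10 layers on 4 SRAMs, a match in layer 6 (second pass, SRAM2) completes after two passes through the 4 SRAMs instead of traversing all 10 layers.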
- the forwarding device stores the N compressed node layers according to the third method.
- suppose the forwarding device is looking for Conde_stage_06. Since Conde_stage_06 is not stored on SRAM2, the forwarding device can continue to determine whether Conde_stage_06 is stored on SRAM3. If Conde_stage_06 is stored in SRAM3, the forwarding device looks for the leaf node corresponding to the keyword in Conde_stage_06. If it is found, the forwarding device can obtain the address of the bucket from the found leaf node and pass the address of the bucket to SRAM4. The forwarding device can then obtain the address of the bucket from SRAM4 and find the rule matching the keyword in the corresponding bucket according to the address of the bucket.
- the forwarding device can search for the leaf node corresponding to the keyword in the N compression node layers stored in the M SRAMs; as soon as the leaf node corresponding to the keyword is found, the forwarding device can pass the address of the bucket stored in that leaf node to the M-th SRAM and end the search process.
- in that case, the forwarding device can obtain the address of the bucket stored in the leaf node only from the N-th SRAM.
- so the search delay is equal to the delay of traversing from the first compression node layer to the last compression node layer. It can be seen that, through the storage method and the corresponding search method provided in the embodiments of the present application, when the forwarding device finds the leaf node corresponding to the keyword in a certain compression node layer, the search process can be ended as early as possible, which effectively reduces the search delay.
- FIG. 10 is a block diagram of an access control list management apparatus according to an embodiment of the present invention.
- the apparatus is applied to a forwarding device.
- the apparatus 1000 includes an acquisition module 1001 and a storage module 1002.
- the obtaining module 1001 is configured to execute step 501 in the foregoing embodiment;
- the storage module 1002 is configured to execute step 502 in the foregoing embodiment.
- the storage module 1002 is specifically configured to:
- the device further includes:
- an extraction module, configured to extract a keyword from a packet when the packet is received;
- a search module, configured to search, based on the keyword and the N compression node layers stored in the M SRAMs, for a rule matching the keyword.
- the search module includes:
- a first search submodule, configured to: if the k-th compression node layer is stored in the i-th SRAM, search, based on the keyword, among the at least one compression node included in the k-th compression node layer for a leaf node corresponding to the keyword;
- the second search submodule is configured to, if a leaf node corresponding to the keyword is found, find a rule matching the keyword based on the address of a bucket stored in the found leaf node.
- the search module further includes:
- the search module further includes:
- the search module further includes:
- the second search submodule is specifically configured to:
- the forwarding device can store the N compression node layers in fewer than N SRAMs, that is, multiple compression node layers can share one SRAM, which effectively avoids the waste of storage space caused by storing only one compression node layer in each SRAM and improves the utilization of the storage space of the SRAM.
- the leaf node corresponding to the keyword can be searched for in the N compression node layers stored in the M SRAMs; as soon as the leaf node corresponding to the keyword is found, the forwarding device can pass the address of the bucket stored in that leaf node to the M-th SRAM and end the search process.
- the search process can be ended as early as possible, effectively reducing the search delay.
- when the management apparatus for an access control list provided in the foregoing embodiment manages the access control list, the division into the functional modules described above is merely used as an example; in practical applications, the above functions can be allocated to different functional modules as needed.
- that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
- the device for managing an access control list provided by the foregoing embodiment belongs to the same concept as the method embodiments shown in FIG. 5 and FIG. 8, and the specific implementation process thereof is detailed in the method embodiment, and details are not described herein again.
- the computer program product includes one or more computer instructions.
- the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or Digital Subscriber Line (DSL)) or a wireless manner (for example, infrared, radio, or microwave).
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media.
- the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a Digital Versatile Disc (DVD)), a semiconductor medium (for example, a solid state disk (Solid State Disk, SSD)), or the like.
- a computer-readable storage medium is provided.
- when the instructions in the computer-readable storage medium are run on a computer, the computer is caused to execute the steps of the method for managing an access control list shown in FIG. 5 or FIG. 8.
- the program may be stored in a computer-readable storage medium.
- the storage medium mentioned above may be a read-only memory, a magnetic disk, or an optical disk.
Description
The present application relates to the field of data processing technologies, and in particular, to a method and device for managing an access control list (Access Control List, ACL).
An ACL is a list of instructions configured in a forwarding device such as a router or a switch. An ACL includes multiple rules. When a forwarding device receives a packet, it can extract a keyword from the packet, search the multiple rules included in the ACL for a rule matching the extracted keyword, and process the received packet according to the search result. Because the number of rules included in an ACL is extremely large, to facilitate searching, a forwarding device usually creates a corresponding search tree according to the ACL and then stores the ACL according to the layers included in the search tree.
Specifically, according to the characteristics of the multiple rules included in the ACL, the forwarding device may select multiple specific bits in the rules to split the multiple rules, thereby dividing the multiple rules into multiple buckets, where the maximum capacity of each bucket is the same. The forwarding device may create a search tree according to the process of dividing the multiple rules; each leaf node of the search tree eventually points to a bucket. For example, suppose the maximum capacity of a bucket is 3 and, as shown in FIG. 1, the ACL includes 12 rules. The forwarding device can first divide the 12 rules into 2 groups according to the 6th bit: rules whose 6th bit is 0 form the first group, rules whose 6th bit is 1 form the second group, and rules whose 6th bit is the specified (wildcard) character need to be placed in both groups at the same time. After the first group and the second group are obtained, because the number of rules in each of them is greater than 3, that is, greater than the maximum capacity of a bucket, the two groups can be divided further. The first group is divided according to the 3rd bit to obtain the third group and the fourth group, and the second group is divided according to the 1st bit to obtain the fifth group and the sixth group. The numbers of rules included in the fourth, fifth, and sixth groups are all less than 3, so the rules of these three groups can each be stored in a different bucket. Because the number of rules included in the third group is still greater than 3, the third group is further divided according to the 13th bit to obtain the seventh group and the eighth group. Since the numbers of rules included in the seventh and eighth groups are both less than 3, the rules included in these two groups can also each be stored in a different bucket. It can be seen that, through the above process, the 12 rules can be divided into 5 buckets, and according to the division process, the search tree shown in FIG. 2 can be created. Each leaf node of the search tree stores the storage address of the corresponding bucket, while each non-leaf node stores the specific bit used to divide the rules at that node and the split information describing the specific division.
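The splitting procedure just described can be sketched in Python. This is a minimal illustration rather than the patented implementation: rules are modeled as strings over '0', '1', and the wildcard '*', the bit used to split each group is assumed to be chosen by an external `choose_bit` policy (hypothetical here), and a rule whose selected bit is '*' is placed in both subgroups, as in the FIG. 1 example.

```python
BUCKET_CAPACITY = 3  # maximum bucket capacity assumed in the FIG. 1 example

def split_group(rules, bit):
    """Split a group of rules on one bit; '*' rules go into both subgroups."""
    zeros = [r for r in rules if r[bit] in ('0', '*')]
    ones = [r for r in rules if r[bit] in ('1', '*')]
    return zeros, ones

def divide_into_buckets(rules, choose_bit):
    """Recursively split until every group fits into one bucket (leaf node).
    Assumes choose_bit always picks a bit that actually shrinks the group."""
    if len(rules) <= BUCKET_CAPACITY:
        return [rules]  # this group becomes a single bucket
    zeros, ones = split_group(rules, choose_bit(rules))
    return (divide_into_buckets(zeros, choose_bit)
            + divide_into_buckets(ones, choose_bit))
```

For instance, five 2-bit rules split on bit 0 with capacity 3 yield two buckets, with a wildcard rule such as '0*' landing in the group whose bit 0 is '0'.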
After the search tree is created, because the number of rules in the ACL is extremely large, the created search tree often includes many layers. In this case, starting from the first layer of the search tree, the forwarding device can treat each group of a preset number of adjacent layers as one compression node layer, store the multiple nodes within that compression node layer in at least one compression node, and then store the at least one compression node included in each compression node layer in one block of static random-access memory (Static Random-Access Memory, SRAM).
When the above method is used to store the ACL, because the number of nodes in each layer of the search tree differs, the number of nodes in each compression node layer also differs. Consequently, the amount of capacity occupied in each SRAM differs as well, and for the SRAMs corresponding to compression node layers with few nodes, storage space is wasted.
Summary of the Invention
The present application provides a method, a device, and a computer storage medium for managing an access control list, which can be used to solve the storage space waste caused in the related art by storing one compression node layer per SRAM. The technical solution is as follows:
According to a first aspect, a method for managing an access control list is provided. The method includes:
obtaining the number N of compression node layers, where the compression node layers are formed by the layers of a search tree corresponding to the access control list ACL, each compression node layer includes at least one compression node, and N is an integer greater than or equal to 1; and
storing the N compression node layers in M SRAMs, where M is an integer greater than 0 and less than N, and each of the M SRAMs stores at least one of the N compression node layers.
In the embodiments of the present application, the forwarding device can store the N compression node layers in fewer than N SRAMs; that is, multiple compression node layers can share one SRAM. This effectively avoids the waste of storage space caused by storing only one compression node layer in each SRAM, and improves the utilization of the storage space of the SRAM.
Optionally, the storing the N compression node layers in the M SRAMs includes:
according to the sequence of the N compression node layers, starting from the first SRAM of the M SRAMs and proceeding in order from the first SRAM to the M-th SRAM, storing one compression node layer in each SRAM in turn; and
when the M-th SRAM has been reached, starting again from the first SRAM of the M SRAMs and, in order from the first SRAM to the M-th SRAM, storing one compression node layer in each SRAM in turn, until all N compression node layers are stored.
Optionally, the method further includes:
when a packet is received, extracting a keyword from the packet; and
searching, based on the keyword and the N compression node layers stored in the M SRAMs, for a rule matching the keyword.
Optionally, the searching, based on the keyword and the N compression node layers stored in the M SRAMs, for a rule matching the keyword includes:
setting i = 1 and k = 1, and determining whether the k-th compression node layer is stored in the i-th SRAM of the M SRAMs;
if the k-th compression node layer is stored in the i-th SRAM, searching, based on the keyword, among the at least one compression node included in the k-th compression node layer for a leaf node corresponding to the keyword; and
if a leaf node corresponding to the keyword is found, searching, based on the address of the bucket stored in the found leaf node, for a rule matching the keyword.
Optionally, after the determining whether the at least one compression node included in the k-th compression node layer is stored in the i-th SRAM of the M SRAMs, the method further includes:
if the k-th compression node layer is not stored in the i-th SRAM, setting i = i + 1, and returning to the step of determining whether the k-th compression node layer is stored in the i-th SRAM of the M SRAMs.
Optionally, after the searching, based on the keyword, among the at least one compression node included in the k-th compression node layer for a leaf node corresponding to the keyword, the method further includes:
if no leaf node corresponding to the keyword is found and i is less than M, setting i = i + 1 and k = k + 1, and returning to the step of determining whether the k-th compression node layer is stored in the i-th SRAM of the M SRAMs.
Optionally, after the searching, based on the keyword, among the at least one compression node included in the k-th compression node layer for a leaf node corresponding to the keyword, the method further includes:
if no leaf node corresponding to the keyword is found and i is equal to M, setting i = 1 and k = k + 1, and returning to the step of determining whether the k-th compression node layer is stored in the i-th SRAM of the M SRAMs.
Optionally, the searching, if a leaf node corresponding to the keyword is found, based on the address of the bucket stored in the found leaf node, for a rule matching the keyword includes:
if a leaf node corresponding to the keyword is found and i is less than M, obtaining the address of the bucket stored in the found leaf node;
passing the address of the bucket in turn through each SRAM from the i-th SRAM to the M-th SRAM, and outputting the address of the bucket from the M-th SRAM when it reaches the M-th SRAM; and
searching, based on the address of the bucket, for a rule matching the keyword.
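Taken together, the optional steps above describe one search loop over the SRAM index i and the layer index k. The following Python sketch of that control flow is illustrative only, with two hypothetical helpers: `layer_in_sram(i, k)` tells whether layer k is stored in SRAM i, and `find_leaf(k, key)` returns a bucket address or `None`; it assumes each layer is actually stored in some SRAM at or after index i, as in the layouts described here.

```python
def search(M, N, layer_in_sram, find_leaf, key):
    """One search following the optional steps: i indexes SRAMs (1..M),
    k indexes compression node layers (1..N)."""
    i, k = 1, 1
    while k <= N:
        if not layer_in_sram(i, k):
            i += 1                     # layer k is not here: check the next SRAM
            continue
        addr = find_leaf(k, key)       # search layer k for a matching leaf
        if addr is not None:
            return addr                # bucket address (then passed on to SRAM M)
        i = i + 1 if i < M else 1      # no leaf: wrap back to SRAM 1 after SRAM M
        k += 1
    return None
```

With the round-robin layout of the first method (layer k in SRAM ((k − 1) mod M) + 1), the loop visits SRAM1 through SRAMM repeatedly, advancing one layer per step until a leaf node is found or all N layers are exhausted.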
According to a second aspect, an apparatus for managing an access control list is provided. The apparatus has the function of implementing the behavior of the method for managing an access control list in the first aspect, and includes at least one module configured to implement the method for managing an access control list provided in the first aspect.
According to a third aspect, an apparatus for managing an access control list is provided. The structure of the apparatus includes a processor and a memory. The memory is configured to store a program that supports the apparatus in executing the method for managing an access control list provided in the first aspect, and to store the data involved in implementing that method. The processor is configured to execute the program stored in the memory. The apparatus may further include a communication bus, which is used to establish a connection between the processor and the memory.
According to a fourth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores instructions that, when run on a computer, cause the computer to execute the method for managing an access control list described in the first aspect.
According to a fifth aspect, a computer program product containing instructions is provided, which, when run on a computer, causes the computer to execute the method for managing an access control list described in the first aspect.
The technical effects obtained by the second, third, fourth, and fifth aspects are similar to those obtained by the corresponding technical means in the first aspect, and are not described here again.
The technical solution provided in the present application has the following beneficial effects: the number N of compression node layers, each formed by a preset number of adjacent layers of the search tree corresponding to the ACL, is obtained, and the N compression node layers are stored in M SRAMs, where M is less than N and each SRAM stores the at least one compression node included in at least one of the N compression node layers. It can be seen that, in the embodiments of the present application, the N compression node layers can be stored in fewer than N SRAMs, so that each SRAM can store more than one compression node layer. This avoids the waste of storage space caused by storing only one compression node layer in each SRAM, and improves the utilization of the storage space of the SRAM.
FIG. 1 is a schematic diagram of dividing the multiple rules included in an ACL in the related art;
FIG. 2 is a schematic diagram of a search tree created according to the ACL division process in the related art;
FIG. 3 is a schematic diagram of an implementation environment to which an ACL management method provided in an embodiment of the present application is applicable;
FIG. 4 is a schematic structural diagram of a forwarding device provided in an embodiment of the present application;
FIG. 5 is a flowchart of an ACL management method provided in an embodiment of the present application;
FIG. 6 is a schematic diagram of the compression node layers of a search tree corresponding to an ACL according to an embodiment of the present application;
FIG. 7 is a schematic diagram of storing N compression node layers in M SRAMs according to an embodiment of the present application;
FIG. 8 is a flowchart of an ACL management method provided in an embodiment of the present application;
FIG. 9 is a schematic diagram of the distribution of N compression node layers over M SRAMs provided in an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an apparatus for managing an access control list provided in an embodiment of the present application.
To make the objectives, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Before the embodiments of the present application are explained in detail, the implementation environment involved in the embodiments of the present application is introduced first.
FIG. 3 is a schematic diagram of an implementation environment to which the method for managing an access control list provided in the embodiments of the present application is applicable. As shown in FIG. 3, the implementation environment may include a sending-end device 301, a forwarding device 302, and a receiving-end device 303. The sending-end device 301 can communicate with the forwarding device 302 through a wireless or wired network, and the receiving-end device 303 and the forwarding device 302 can also communicate through a wireless or wired network.
Exemplarily, the sending-end device 301 may be configured to send a packet to the forwarding device 302, and the packet may be a packet sent by the sending-end device 301 to the receiving-end device 303.
The forwarding device 302 includes multiple SRAMs, and multiple ACLs may be configured in the forwarding device 302, where each ACL may include multiple rules configured by a user. The forwarding device 302 can divide the multiple rules in each ACL into multiple buckets and create a search tree for each ACL according to the division process. Because the search tree of an ACL may have many layers, the forwarding device 302 can divide the multiple layers included in the search tree of each ACL into multiple compression node layers, where the multiple nodes included in each compression node layer are stored in at least one compression node. The forwarding device 302 can store the multiple compression node layers included in each ACL in multiple SRAMs.
Optionally, the forwarding device 302 may further include a ternary content addressable memory (Ternary Content Addressable Memory, TCAM). When dividing the multiple rules in each ACL, the forwarding device 302 selects, from the multiple rules included in each ACL, the rules that would be divided into multiple buckets at the same time, and stores such rules in the TCAM.
The forwarding device 302 may be configured to parse a received packet sent by the sending-end device 301, extract a keyword from the packet, search for the rule corresponding to the keyword through the search tree of each ACL, and process the packet differently according to the search result. For example, if a matching rule is found, the forwarding device 302 may forward the packet to the receiving-end device 303; if not, the forwarding device 302 may refuse to forward the packet.
It should be noted that, in the embodiments of the present application, the sending-end device 301 and the receiving-end device 303 may be computer devices such as smartphones, tablet computers, laptop computers, or desktop computers. The forwarding device 302 may be a router, a switch, or another integrated routing and switching access device.
FIG. 4 is a schematic structural diagram of the forwarding device 302 in the above implementation environment provided in an embodiment of the present application. As shown in FIG. 4, the forwarding device mainly includes a transmitter 3021, a receiver 3022, a memory 3023, a processor 3024, and a communication bus 3025. Those skilled in the art can understand that the structure of the forwarding device 302 shown in FIG. 4 does not constitute a limitation on the forwarding device 302, which may include more or fewer components than shown, combine certain components, or arrange the components differently; this is not limited in the embodiments of the present application.
The transmitter 3021 may be configured to send data and/or signaling to the receiving-end device 303, and the receiver 3022 may be configured to receive data and/or signaling sent by the sending-end device 301.
The memory 3023 may be configured to store the multiple rules included in an ACL, the search tree of the ACL, and the data sent by the sending-end device 301, and may also be configured to store one or more running programs and/or modules for executing the ACL storage method provided in the embodiments of the present invention. The memory 3023 may be a read-only memory (read-only memory, ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (random access memory, RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or another magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by an integrated circuit, but is not limited thereto. The memory 3023 may exist independently and be connected to the processor 3024 through the communication bus 3025, or the memory 3023 may be integrated with the processor 3024.
The processor 3024 is the control center of the forwarding device 302. The processor 3024 may be a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), or one or more integrated circuits for controlling the execution of the program of this solution. The processor 3024 can implement the ACL storage method provided in the embodiments of the present invention by running or executing the software programs and/or modules stored in the memory 3023 and invoking the data stored in the memory 3023.
In addition, the processor 3024 and the memory 3023 can exchange information through the communication bus 3025.
The following describes the ACL management method provided in the embodiments of the present application.
FIG. 5 is a flowchart of an ACL management method provided in an embodiment of the present application. The method can be applied to the forwarding device 302 in the implementation environment shown in FIG. 3, and includes the following steps:
Step 501: Obtain the number N of compression node layers.
Because the number of rules in an ACL is huge, the search tree corresponding to the ACL often includes many layers. In this case, starting from the first layer of the search tree, the forwarding device can treat each group of a preset number of adjacent layers as one compression node layer, thereby dividing the search tree into multiple compression node layers; the forwarding device can then count the number N of compression node layers included in the search tree.
It should be noted that, because each compression node layer includes multiple layers of the search tree, each compression node layer will include multiple nodes, and the forwarding device can merge and store the multiple nodes within each compression node layer in at least one compression node.
FIG. 6 is a schematic diagram of the compression node layers of a search tree corresponding to an ACL according to an embodiment of the present application. As shown in FIG. 6, assuming the search tree includes n layers, the forwarding device can, starting from the first layer, treat the first to fourth layers as the first compression node layer; the multiple nodes included in this compression node layer can be stored in one compression node, that is, the first compression node layer includes one compression node. Starting from the fifth layer, the fifth to eighth layers form the second compression node layer. It should be noted that, in the second compression node layer, the leaf nodes of the fifth layer can be stored in one compression node, and each node other than a leaf node is treated as a top-level node, so that each top-level node together with its branches is stored in one compression node. For example, as shown in FIG. 6, assuming the fifth layer includes 2 leaf nodes, these two leaf nodes can be stored in one compression node; each of the remaining 4 non-leaf nodes is treated as a top-level node, and each top-level node together with the branches it leads to is stored in one compression node, yielding four compression nodes. For the ninth to n-th layers of the search tree, the forwarding device can proceed in the same way, thereby obtaining N compression node layers.
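Following the grouping just described, the mapping from a tree layer to its compression node layer, and the total count N, reduce to simple arithmetic. The sketch below is illustrative only and assumes the fixed group size of 4 used in the FIG. 6 example (the preset number of adjacent layers may differ in practice):

```python
import math

GROUP = 4  # preset number of adjacent tree layers per compression node layer

def stage_of_layer(layer):
    """1-based index of the compression node layer containing tree layer `layer`."""
    return (layer - 1) // GROUP + 1

def num_stages(n_layers):
    """Number N of compression node layers for a search tree with n_layers layers."""
    return math.ceil(n_layers / GROUP)
```

With GROUP = 4, tree layers 1 to 4 fall into the first compression node layer and layers 5 to 8 into the second, matching the FIG. 6 example.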
Step 502: Store the N compression node layers in M SRAMs, where M is a positive integer less than N, and each of the M SRAMs stores at least one of the N compression node layers.
In the related art, when the search tree corresponding to an ACL has N compression node layers, the forwarding device stores the at least one compression node included in each compression node layer in one SRAM; that is, storing N compression node layers requires N SRAMs. In this case, when a compression node layer includes few nodes, the storage space of the SRAM storing that compression node layer is wasted. In the embodiments of the present application, by contrast, the forwarding device can store the N compression node layers in fewer than N SRAMs; that is, each SRAM can store at least one compression node layer. In other words, in the embodiments of the present application, multiple compression node layers can share one SRAM, which effectively avoids wasting SRAM space.
Exemplarily, in the embodiments of the present application, the forwarding device can store the N compression node layers in the M SRAMs in the following ways.
First method: according to the sequence of the N compression node layers, starting from the first SRAM of the M SRAMs and proceeding in order from the first SRAM to the M-th SRAM, store one compression node layer in each SRAM in turn; when the M-th SRAM is reached, start again from the first SRAM of the M SRAMs and, in order from the first SRAM to the M-th SRAM, store one compression node layer in each SRAM in turn, until all N compression node layers are stored.
The forwarding device can determine the sequence of the compression node layers in top-down order according to their positions in the search tree. That is, the compression node layer at the top of the search tree is the first compression node layer, the layer pointed to by the last layer of nodes in the first compression node layer is the second compression node layer, and so on.
It should be noted that, in the embodiments of the present application, the M SRAMs can also be arranged in a certain order. When storing the N compression node layers, the forwarding device can, according to the sequence of the N compression node layers, store the first to the M-th compression node layers in the first to the M-th SRAMs in turn: the first compression node layer is stored in the first SRAM, the second compression node layer in the second SRAM, the i-th compression node layer in the i-th SRAM, and so on. After the M-th compression node layer has been stored in the M-th SRAM, the forwarding device can return to the first SRAM, store the (M+1)-th compression node layer in the first SRAM, the (M+2)-th compression node layer in the second SRAM, and so on, until all N compression node layers are stored. As a result, the i-th SRAM of the M SRAMs will store the i-th compression node layer, the (M+i)-th compression node layer, the (2M+i)-th compression node layer, and so on, where 2M+i is less than or equal to N.
FIG. 7 is a schematic diagram of storing N compressed node layers on M SRAMs according to an embodiment of the present application, assuming M = 4 and N = 10. SRAM1 is the first SRAM, SRAM2 the second, and so on; Conde_stage_01 is the first compressed node layer, Conde_stage_02 the second, and so on up to Conde_stage_10, the tenth. As shown in FIG. 7, when storing these 10 compressed node layers, the forwarding device may store Conde_stage_01 on SRAM1, Conde_stage_02 on SRAM2, Conde_stage_03 on SRAM3, and Conde_stage_04 on SRAM4. It may then return to SRAM1 and store Conde_stage_05 on SRAM1, Conde_stage_06 on SRAM2, Conde_stage_07 on SRAM3, and Conde_stage_08 on SRAM4. Finally, it may return to SRAM1 once more and store Conde_stage_09 on SRAM1 and Conde_stage_10 on SRAM2. In this way, the 10 compressed node layers are stored on 4 SRAMs: SRAM1 holds Conde_stage_01, Conde_stage_05, and Conde_stage_09; SRAM2 holds Conde_stage_02, Conde_stage_06, and Conde_stage_10; SRAM3 holds Conde_stage_03 and Conde_stage_07; and SRAM4 holds Conde_stage_04 and Conde_stage_08. That is, SRAMi of the M SRAMs stores Conde_stage_i, Conde_stage_(M+i), Conde_stage_(2M+i), and so on.
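As an illustration only (not part of the claimed embodiments), the round-robin placement described above amounts to mapping layer j to SRAM ((j - 1) mod M) + 1. A minimal Python sketch, in which the function name and data shapes are assumptions made for this illustration:

```python
def assign_layers_round_robin(n_layers, n_srams):
    """Map compressed node layer j (1-based) to SRAM ((j - 1) % n_srams) + 1."""
    placement = {s: [] for s in range(1, n_srams + 1)}
    for j in range(1, n_layers + 1):
        sram = (j - 1) % n_srams + 1
        placement[sram].append(j)
    return placement

# With M = 4 and N = 10 (the FIG. 7 example), SRAM1 holds layers 1, 5, 9
# and SRAM2 holds layers 2, 6, 10, matching the distribution above.
print(assign_layers_round_robin(10, 4))
```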
Second method: following the order of the N compressed node layers, divide the N compressed node layers into Q groups, where each of the first Q-1 groups contains M compressed node layers and the last of the Q groups contains at most M; then store the ith compressed node layer of each group in the ith SRAM of the M SRAMs, where i is an integer greater than 0 and not greater than M.
The first method above stores the N compressed node layers in the M SRAMs one by one. In another possible implementation, the forwarding device may first divide the N compressed node layers into Q groups in order, and then store the compressed node layers of each group in parallel.
For example, the forwarding device may, following the order of the N compressed node layers and starting from the first one, place each run of M adjacent compressed node layers into one group, thereby obtaining Q groups. If N is an integer multiple of M, every one of the Q groups contains M compressed node layers; if N is not an integer multiple of M, the last of the Q groups contains fewer than M.
After the N compressed node layers have been divided into Q groups, the forwarding device may store the 1st compressed node layer of each group on the 1st of the M SRAMs, the 2nd on the 2nd SRAM, and so on, until every compressed node layer in each group has been stored.
It can be seen that, with the second method, once the N compressed node layers have been divided into Q groups, the forwarding device can store the compressed node layers of the Q groups in parallel, thereby speeding up storage.
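As an illustration only (not part of the claimed embodiments), the grouping step of the second method can be sketched as follows; the function name is an assumption made for this sketch:

```python
def split_into_groups(n_layers, group_size):
    """Divide layers 1..n_layers into consecutive groups of group_size;
    the last group may contain fewer than group_size layers."""
    layers = list(range(1, n_layers + 1))
    return [layers[i:i + group_size] for i in range(0, len(layers), group_size)]

# With M = 4 and N = 10: Q = 3 groups; within each group the i-th layer
# goes to the i-th SRAM, and the groups can be written in parallel.
print(split_into_groups(10, 4))  # -> [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10]]
```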
Third method: following the order of the N compressed node layers, start from the 1st of the M SRAMs and store one compressed node layer in each SRAM in turn, in order from the 1st SRAM to the Mth SRAM. After a layer has been stored in the Mth SRAM, if the number of remaining compressed node layers is less than M, select from the M SRAMs, according to the remaining storage space of each SRAM, as many SRAMs as there are remaining node layers, and store the remaining compressed node layers in the selected SRAMs. If the number of remaining compressed node layers is not less than M, again follow the order of the N compressed node layers and, starting from the 1st of the M SRAMs, store one compressed node layer in each SRAM in order from the 1st SRAM to the Mth SRAM.
In this implementation, the forwarding device may first store the 1st through the Mth of the N compressed node layers on the 1st through the Mth SRAMs in turn. After storing a layer on the Mth SRAM, the forwarding device may check whether the number of remaining unstored compressed node layers is less than M. If it is not less than M, the device may start again from the 1st SRAM and store the (M+1)th through the 2Mth compressed node layers on the 1st through the Mth SRAMs in turn. If it is less than M, the forwarding device may measure the remaining storage space of each of the M SRAMs, sort the SRAMs in descending order of remaining space, and select the top W SRAMs from the sorted result, where W equals the number of remaining compressed node layers; the remaining compressed node layers are then stored in those W SRAMs.
For example, assume M = 4 and that two compressed node layers remain. The forwarding device may then select, from the 4 SRAMs, the SRAM with the most remaining storage space and the SRAM with the second most. If the SRAM with the most remaining space is the 3rd of the 4 SRAMs and the one with the second most is the 1st SRAM, the forwarding device may store the first of the remaining compressed node layers in the 1st SRAM and the second of the remaining compressed node layers in the 3rd SRAM.
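As an illustration only (not part of the claimed embodiments), the selection step of the third method can be sketched as follows. The function name and data shapes are assumptions, and the choice to assign the remaining layers to the chosen SRAMs in ascending index order is inferred from the example above (first remaining layer to SRAM1, second to SRAM3):

```python
def place_remaining(remaining_layers, free_space):
    """free_space: dict mapping SRAM index -> remaining storage space.
    Pick the W SRAMs with the most free space (W = number of remaining
    layers) and assign the remaining layers, in order, to the chosen
    SRAMs in ascending index order (an assumption based on the example)."""
    w = len(remaining_layers)
    chosen = sorted(sorted(free_space, key=free_space.get, reverse=True)[:w])
    return dict(zip(chosen, remaining_layers))

# M = 4, two layers (9 and 10) remain; SRAM3 has the most free space and
# SRAM1 the second most -> layer 9 goes to SRAM1, layer 10 to SRAM3.
print(place_remaining([9, 10], {1: 800, 2: 300, 3: 900, 4: 100}))
```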
The above are three implementations, provided by the embodiments of the present application, for storing N compressed node layers on M SRAMs. Optionally, in another possible implementation, the forwarding device may store the N compressed node layers on the M SRAMs in random order rather than sequentially.
In this embodiment of the present application, the forwarding device can store N compressed node layers in fewer than N SRAMs; that is, multiple compressed node layers can share one SRAM. This effectively avoids the storage space wasted when each SRAM holds only a single compressed node layer, and improves the utilization of SRAM storage space.
After the N compressed node layers have been stored on the M SRAMs as in the above embodiments, when the forwarding device receives a packet, it can use the method provided in the following embodiment to find the rule matching that packet. On this basis, referring to FIG. 8, another ACL management method is provided, which includes the following steps:
Step 801: Obtain the number N of compressed node layers.
For the implementation of this step, reference may be made to step 501 in the foregoing embodiment.
Step 802: Store the N compressed node layers in M SRAMs, where M is a positive integer less than N and each of the M SRAMs stores at least one of the N compressed node layers.
For the implementation of this step, reference may be made to step 502 in the foregoing embodiment.
Step 803: When a packet is received, extract a keyword from the packet.
Depending on the rules in the ACL, when a packet is received the forwarding device may extract the corresponding keyword from the packet according to those rules.
For example, if the rules in the ACL classify packets by source address, the forwarding device may extract the source address information from the packet as the keyword. As another example, if the rules in the ACL classify packets by time period information, the forwarding device may obtain the packet's time period information and use it as the keyword.
Step 804: Based on the keyword and the N compressed node layers stored in the M SRAMs, find the rule that matches the keyword.
After extracting the keyword, the forwarding device may search the N compressed node layers stored in the M SRAMs for a leaf node matching the keyword, and then obtain the matching rule according to the bucket address stored in the leaf node it finds. If the forwarding device finds no leaf node matching the keyword in the N compressed node layers, it may discard the packet.
Specifically, when searching for the rule matching the keyword, let i = 1 and k = 1, and determine whether the kth compressed node layer is stored in the ith of the M SRAMs. If the ith SRAM stores the kth compressed node layer, search, based on the keyword, the at least one compressed node included in the kth compressed node layer for a leaf node corresponding to the keyword. If a leaf node corresponding to the keyword is found, find the rule matching the keyword based on the bucket address stored in the found leaf node.
For example, the forwarding device may start from the 1st SRAM and determine whether it stores the 1st compressed node layer. If so, the device searches the at least one compressed node of the 1st compressed node layer for a leaf node matching the keyword. If such a leaf node is found in the 1st compressed node layer, the forwarding device may obtain the bucket address stored in that leaf node and pass it from the 1st SRAM to the 2nd SRAM, from the 2nd SRAM to the 3rd SRAM, and so on, until it reaches the Mth SRAM, which outputs the bucket address. The forwarding device can then use the bucket address to retrieve, from the corresponding bucket, the rule matching the keyword.
Optionally, if the kth compressed node layer is not stored in the ith SRAM, the forwarding device may set i = i + 1 and return to the step of determining whether the kth compressed node layer is stored in the ith of the M SRAMs.
For example, if the 1st compressed node layer is not stored in the 1st SRAM, the forwarding device may look for it in the 2nd SRAM; if the 2nd SRAM does not store the 1st compressed node layer either, the forwarding device may continue to the next SRAM until the 1st compressed node layer is found, and then decide, based on the search result in the 1st compressed node layer, whether to search the 2nd compressed node layer.
It should be noted that, if the forwarding device stores the N compressed node layers using the first or second method of the foregoing embodiment, then because the compressed node layers are stored one per SRAM in sequence, when the search starts from the 1st SRAM, the 1st SRAM stores the 1st compressed node layer, the 2nd SRAM stores the 2nd, and in general the ith SRAM correspondingly stores the ith, so the case where the ith SRAM does not store the kth compressed node layer does not arise. In this case, the forwarding device need not perform the step of determining whether the kth compressed node layer is stored in the ith SRAM, and may instead directly search the kth compressed node layer stored in the ith SRAM for the leaf node corresponding to the keyword.
If the forwarding device stores the N compressed node layers using the third method of the foregoing embodiment, or randomly, two adjacent compressed node layers may be stored in two non-adjacent SRAMs. The forwarding device may therefore first determine whether the ith SRAM stores the kth compressed node layer; if so, it searches the kth compressed node layer for the leaf node corresponding to the keyword, and if not, it continues to the next SRAM to look for the kth compressed node layer until it is found.
Optionally, if no leaf node corresponding to the keyword is found and i is less than M, set i = i + 1 and k = k + 1, and return to the step of determining whether the kth compressed node layer is stored in the ith of the M SRAMs.
Specifically, if the ith SRAM stores the kth compressed node layer but no leaf node corresponding to the keyword is found in that layer, and the ith SRAM is not the last of the M SRAMs, the forwarding device may look in the SRAM following the ith SRAM for the compressed node layer following the kth. If that next SRAM stores the next compressed node layer, the forwarding device may search it for the leaf node corresponding to the keyword.
Optionally, if no leaf node corresponding to the keyword is found and i equals M, set i = 1 and k = k + 1, and return to the step of determining whether the kth compressed node layer is stored in the ith of the M SRAMs.
For example, if the ith SRAM stores the kth compressed node layer but no leaf node corresponding to the keyword is found in that layer, and the ith SRAM is the last of the M SRAMs, the forwarding device must start again from the 1st SRAM and look there for the compressed node layer following the kth.
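As an illustration only (not part of the claimed embodiments), the i/k search walk of steps 803 and 804 can be sketched in software as follows. The function names, data shapes, and the guard against a layer stored on no SRAM are assumptions made for this sketch; in the described hardware, the bucket address found would additionally be relayed through the remaining SRAMs and output by the Mth SRAM:

```python
def lookup(srams, n_layers, key, match):
    """Software sketch of the i/k search walk (assumed semantics).

    srams: list of dicts; srams[i-1] maps a layer index k to that layer's
    data, for every compressed node layer stored on SRAM i.
    match(layer, key): returns the bucket address stored in the leaf node
    matching key, or None if the layer contains no matching leaf.
    """
    m = len(srams)
    i, k = 1, 1
    misses = 0                  # consecutive SRAMs checked without layer k
    while k <= n_layers:
        if k in srams[i - 1]:
            bucket = match(srams[i - 1][k], key)
            if bucket is not None:
                return bucket   # in hardware: relayed on to SRAM M and output
            k += 1              # no leaf in this layer: try the next layer
            misses = 0
        else:
            misses += 1
            if misses == m:     # layer k stored on none of the M SRAMs
                return None
        i = i % m + 1           # advance to the next SRAM, wrapping after SRAM M
    return None                 # no matching leaf: the packet is discarded


# FIG. 7-style placement (M = 4, N = 10); a leaf for key "k" sits in layer 6:
srams = [{1: {}, 5: {}, 9: {}},
         {2: {}, 6: {"k": "B6"}, 10: {}},
         {3: {}, 7: {}},
         {4: {}, 8: {}}]
print(lookup(srams, 10, "k", lambda layer, key: layer.get(key)))  # -> B6
```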
The search processes corresponding to the different storage methods described above are illustrated below with examples.
(1) The forwarding device stores the N compressed node layers using the first or the second method.
Assume M = 4 and N = 10. Whether the first or the second method is used, the final distribution of the N compressed node layers across the M SRAMs is as shown in FIG. 7. Taking this as an example, when performing a search, the forwarding device may start from SRAM1 and first search Conde_stage_01, stored on SRAM1, for the leaf node corresponding to the keyword. If a leaf node is found, the forwarding device may pass the bucket address stored in that leaf node from SRAM1 to SRAM2, then from SRAM2 to SRAM3 and from SRAM3 to SRAM4, obtain the bucket address from SRAM4, and use it to retrieve the rule matching the keyword from the corresponding bucket.
If no leaf node is found in Conde_stage_01, the forwarding device may search Conde_stage_02, stored on SRAM2; if a leaf node is found there, it is processed as described above, and otherwise the device continues downward. If no leaf node has been found by Conde_stage_04 on SRAM4, the forwarding device may return to SRAM1 and continue searching for the leaf node in order, starting from Conde_stage_05.
(2) The forwarding device stores the N compressed node layers using the third method.
Assume M = 4 and N = 10, with storage performed using the third method; the distribution of the 10 compressed node layers across the 4 SRAMs is shown in FIG. 9. In this case, suppose the forwarding device finds no leaf node in Conde_stage_01 through Conde_stage_04, stored on SRAM1 through SRAM4. The forwarding device then returns to SRAM1 and determines whether SRAM1 stores Conde_stage_05. Since SRAM1 does store Conde_stage_05, the forwarding device searches it for the leaf node; if still none is found, the forwarding device determines whether SRAM2 stores Conde_stage_06. Since SRAM2 does not store Conde_stage_06, the forwarding device continues downward and determines whether SRAM3 stores Conde_stage_06. SRAM3 does store Conde_stage_06, so the forwarding device searches Conde_stage_06 for the leaf node corresponding to the keyword. If it is found, the forwarding device may obtain the bucket address from the found leaf node and pass it to SRAM4; the forwarding device then obtains the bucket address from SRAM4 and uses it to look up, in the corresponding bucket, the rule matching the keyword.
In this embodiment of the present application, the forwarding device can search the N compressed node layers stored in the M SRAMs for the leaf node corresponding to the keyword; as soon as that leaf node is found, the forwarding device can pass the bucket address stored in it on to the Mth SRAM and thereby end the search process. In the related art, by contrast, because the N compressed node layers are stored sequentially in N SRAMs, whichever of the N compressed node layers the forwarding device finds the leaf node in, it can obtain the bucket address stored in that leaf node only at the Nth SRAM. That is, for any rule in the ACL, the search delay equals the delay from the first compressed node layer to the last. It can thus be seen that, with the storage method and the corresponding search method provided by the embodiments of the present application, when the forwarding device finds the leaf node corresponding to the keyword in some compressed node layer, the search process can end as early as possible, effectively reducing the search delay.
FIG. 10 is a block diagram of an access control list management apparatus according to an embodiment of the present invention. The apparatus is applied to a forwarding device. As shown in FIG. 10, the apparatus 1000 includes an obtaining module 1001 and a storage module 1002.
The obtaining module 1001 is configured to perform step 501 in the foregoing embodiment.
The storage module 1002 is configured to perform step 502 in the foregoing embodiment.
Optionally, the storage module 1002 is specifically configured to:
store, following the order of the N compressed node layers and starting from the 1st of the M SRAMs, one compressed node layer in each SRAM in turn, in order from the 1st SRAM to the Mth SRAM; and
when a layer has been stored in the Mth SRAM, start again from the 1st of the M SRAMs and store one compressed node layer in each SRAM in turn, in order from the 1st SRAM to the Mth SRAM, until all N compressed node layers have been stored.
Optionally, the apparatus further includes:
an extraction module, configured to extract a keyword from a packet when the packet is received; and
a search module, configured to find, based on the keyword and the N compressed node layers stored in the M SRAMs, the rule matching the keyword.
Optionally, the search module includes:
a judging submodule, configured to set i = 1 and k = 1 and determine whether the kth compressed node layer is stored in the ith of the M SRAMs;
a first search submodule, configured to, if the ith SRAM stores the kth compressed node layer, search, based on the keyword, the at least one compressed node included in the kth compressed node layer for a leaf node corresponding to the keyword; and
a second search submodule, configured to, if a leaf node corresponding to the keyword is found, find the rule matching the keyword based on the bucket address stored in the found leaf node.
Optionally, the search module further includes:
a first trigger submodule, configured to, if the kth compressed node layer is not stored in the ith SRAM, set i = i + 1 and trigger the judging submodule to determine whether the kth compressed node layer is stored in the ith of the M SRAMs.
Optionally, the search module further includes:
a second trigger submodule, configured to, if no leaf node corresponding to the keyword is found and i is less than M, set i = i + 1 and k = k + 1 and trigger the judging submodule to determine whether the kth compressed node layer is stored in the ith of the M SRAMs.
Optionally, the search module further includes:
a third trigger submodule, configured to, if no leaf node corresponding to the keyword is found and i equals M, set i = 1 and k = k + 1 and trigger the judging submodule to determine whether the kth compressed node layer is stored in the ith of the M SRAMs.
Optionally, the second search submodule is specifically configured to:
if a leaf node corresponding to the keyword is found and i is less than M, obtain the bucket address stored in the found leaf node;
pass the bucket address in turn through each SRAM between the ith SRAM and the Mth SRAM until it reaches the Mth SRAM, which outputs the bucket address; and
find, based on the bucket address, the rule matching the keyword.
In summary, in the embodiments of the present application, the forwarding device can store N compressed node layers in fewer than N SRAMs; that is, multiple compressed node layers can share one SRAM, which effectively avoids the storage space wasted when each SRAM holds only a single compressed node layer and improves the utilization of SRAM storage space. In addition, in the embodiments of the present application, the leaf node corresponding to the keyword can be found among the N compressed node layers stored in the M SRAMs, and as soon as that leaf node is found, the forwarding device can pass the bucket address stored in it on to the Mth SRAM and thereby end the search process. That is, with the storage method and the corresponding search method provided by the embodiments of the present application, when the forwarding device finds the leaf node corresponding to the keyword in some compressed node layer, the search process can end as early as possible, effectively reducing the search delay.
When the access control list management apparatus provided by the foregoing embodiment manages an access control list, the division into the functional modules described above is merely illustrative. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the access control list management apparatus provided by the foregoing embodiment and the method embodiments shown in FIG. 5 and FIG. 8 belong to the same concept; for its specific implementation process, refer to the method embodiments, and details are not described here again.
All or part of the foregoing embodiments may be implemented by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented wholly or partly in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (for example, infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), a semiconductor medium (for example, a solid-state drive (SSD)), or the like.
也即,在本发明实施例中,提供了一种计算机可读存储介质,当其在计算机上运行时,使得计算机执行上述图5或图8所示的访问控制列表的管理方法的步骤。That is, in the embodiment of the present invention, a computer-readable storage medium is provided. When the computer-readable storage medium runs on the computer, the computer is caused to execute the steps of the method for managing an access control list shown in FIG. 5 or FIG.
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成,也可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。A person of ordinary skill in the art may understand that all or part of the steps for implementing the foregoing embodiments may be implemented by hardware, or may be instructed by a program to complete related hardware. The program may be stored in a computer-readable storage medium. The storage medium mentioned may be a read-only memory, a magnetic disk or an optical disk.
The foregoing embodiments are provided by this application and are not intended to limit this application. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within the protection scope of this application.
Claims (16)
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201880091018.3A CN111819552B (en) | 2018-06-20 | 2018-06-20 | Access control list management method and device |
| PCT/CN2018/091954 WO2019241926A1 (en) | 2018-06-20 | 2018-06-20 | Access control list management method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2018/091954 WO2019241926A1 (en) | 2018-06-20 | 2018-06-20 | Access control list management method and device |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019241926A1 (en) | 2019-12-26 |
Family
ID=68982877
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/091954 Ceased WO2019241926A1 (en) | 2018-06-20 | 2018-06-20 | Access control list management method and device |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN111819552B (en) |
| WO (1) | WO2019241926A1 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113328948A (en) * | 2021-06-02 | 2021-08-31 | 杭州迪普信息技术有限公司 | Resource management method, device, network equipment and computer readable storage medium |
| CN116633865A (en) * | 2023-07-25 | 2023-08-22 | 北京城建智控科技股份有限公司 | Network flow control method and device, electronic equipment and storage medium |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101055535A (en) * | 2006-04-13 | 2007-10-17 | 国际商业机器公司 | Parallel computer and method for locating hardware faults in a parallel computer |
| CN102487374A (en) * | 2010-12-01 | 2012-06-06 | 中兴通讯股份有限公司 | Access control list realization method and apparatus thereof |
| CN102662855A (en) * | 2012-04-17 | 2012-09-12 | 华为技术有限公司 | Storage method and system of binary tree |
| CN103858392A (en) * | 2011-08-02 | 2014-06-11 | 凯为公司 | Incremental update of package classification rules |
| CN107577756A (en) * | 2017-08-31 | 2018-01-12 | 南通大学 | An Improved Recursive Data Flow Matching Method Based on Multilayer Iteration |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101577662B (en) * | 2008-05-05 | 2012-04-04 | 华为技术有限公司 | Method and device for matching longest prefix based on tree form data structure |
| WO2011091581A1 (en) * | 2010-01-26 | 2011-08-04 | 华为技术有限公司 | Method and device for storing and searching keyword |
2018
- 2018-06-20 WO PCT/CN2018/091954 patent/WO2019241926A1/en not_active Ceased
- 2018-06-20 CN CN201880091018.3A patent/CN111819552B/en active Active
Also Published As
| Publication number | Publication date |
|---|---|
| CN111819552B (en) | 2024-08-02 |
| CN111819552A (en) | 2020-10-23 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| EP2643762B1 (en) | Method and apparatus for high performance, updatable, and deterministic hash table for network equipment | |
| US7827182B1 (en) | Searching for a path to identify where to move entries among hash tables with storage for multiple entries per bucket during insert operations | |
| JP6004299B2 (en) | Method and apparatus for matching flow tables and switch | |
| CN112425131B (en) | ACL rule classification method, search method and device | |
| CN101604337B (en) | Apparatus and method for hash table storage, searching | |
| CN112231398B (en) | Data storage method, device, equipment and storage medium | |
| CN110955704A (en) | Data management method, device, equipment and storage medium | |
| WO2015127721A1 (en) | Data matching method and apparatus and computer storage medium | |
| CN106302172A (en) | Support Hash lookup and the storage of route querying, lookup method and device simultaneously | |
| CN118227518B (en) | Table entry storage and searching method and device, network equipment and storage medium | |
| WO2019241926A1 (en) | Access control list management method and device | |
| CN104253754A (en) | ACL (access control list) fast matching method and equipment | |
| CN115665066A (en) | Method, equipment and medium for expanding MAC address table capacity | |
| CN112242908B (en) | A network function deployment method, system and storage medium | |
| WO2024037243A1 (en) | Data processing method, apparatus and system | |
| CN114115696B (en) | Memory deduplication method, device and storage medium | |
| US9378140B2 (en) | Least disruptive cache assignment | |
| CN106302174A (en) | A kind of method and device realizing route querying | |
| CN119003385B (en) | A table item storage, search method, device, network equipment and storage medium | |
| CN111506658B (en) | Data processing method and device, first equipment and storage medium | |
| CN113225308B (en) | Network access control method, node equipment and server | |
| CN114448882B (en) | Design method for realizing high-performance high-capacity routing equipment | |
| CN114747193B (en) | switch chip | |
| Siyu et al. | An Efficient Packet Classification Method Based on Extended Bloom Filter and Cuckoo Hash | |
| JP2021524085A (en) | Message processing methods, devices and systems |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18923383; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18923383; Country of ref document: EP; Kind code of ref document: A1 |