
HK1194551A - Virtual trunking over physical links - Google Patents


Info

Publication number
HK1194551A
Authority
HK
Hong Kong
Prior art keywords
devices
port
coupled
data
endpoint
Prior art date
Application number
HK14107627.0A
Other languages
Chinese (zh)
Inventor
Biju Babu
Mohan Kalkunte
Original Assignee
Broadcom Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corporation
Publication of HK1194551A


Abstract

The invention provides virtual trunking over physical links, in which at least one controlling bridge controls data traffic among devices located lower in the hierarchy, below the controlling bridge. Those devices include a plurality of port transfer devices, such as line modules and port extenders, which ultimately communicate with an endpoint device, referred to as a station. At least two physical pathways from a controlling bridge to a station are grouped together into a virtual trunk to provide multiple physical pathways for packet transfer when operating in a dual-homed mode.

Description

Virtual trunking over physical links
Cross Reference to Related Applications
This application claims priority from provisional patent application No. 61/732,236, filed on November 30, 2012, which is incorporated herein by reference in its entirety for all purposes.
Technical Field
Embodiments of the present invention relate to wired communications and, more particularly, to connecting bridging devices to various intermediate routing and endpoint devices within a wired network.
Background
Various wired communication systems are now known that provide a communication link between devices, whether the devices are endpoint devices, intermediate routing devices, or bridging devices. The communication may be among devices within a particular network, or the connection may be established between networks. In one particular type of system, a bridging device is used to control data traffic between the components present on one side of the bridge (e.g. the downlink) and the components or networks present on the other side of the bridge (e.g. the uplink). One example of a bridging system is an enterprise system, in which a bridge controls the data traffic staged among multiple components that exist below the bridge, as well as the data flow between the bridge and the environment that exists above the bridge.
An exemplary prior art data transmission system using physical links is shown in fig. 1. The diagram of fig. 1 shows a system block diagram of the system 100, in which only high-level connectivity is shown. The system 100 includes a Control Bridge (CB) 101, which communicates with a plurality of Line Modules (LM) 102 located at the downlink of the CB 101. In the specific example of system 100, eight line modules LM0 to LM7 are shown. Each line module 102 is further coupled to a downstream device. One such device, endpoint device 103, is shown coupled to LM0. It should be noted that although not shown, each LM 102 also has a connection to an endpoint device. In one application of data networks employing control bridges, endpoint devices are referred to as virtual machines, or VMs. Endpoint device 103 is thus a VM coupled downstream from LM0 in fig. 1.
Data transmission in the system 100 is controlled by the CB 101. For example, if endpoint device 103 wants to transmit data to another endpoint device within system 100, the data is first transmitted from endpoint device 103 to LM0. LM0 then transfers the data to CB 101 under arbitration control provided by CB 101. The CB 101 receives the data and identifies the destination endpoint device using the destination address accompanying the data. CB 101 then transmits the data to the LM associated with the destination endpoint device, which in turn sends the data to the destination endpoint device. Alternatively, if data from endpoint device 103 is destined for a location beyond the scope of system 100, CB 101, after receiving the data from LM0, may send the data up the uplink to devices, components, and/or networks that exist at a higher level in the hierarchy above CB 101.
Typically, data transmission is accomplished by utilizing a particular communication protocol. A common communication protocol used by wired systems, such as system 100, is the protocol specification defined by the IEEE802.1 standard, which is applicable to network management. System 100 may be configured as an Ethernet™ network, wherein system 100 may employ IEEE802.3 or equivalent specifications to define, for example, a Media Access Control (MAC) layer of a Local Area Network (LAN).
The CB 101 may utilize a single bridging device or more than one bridging device. In FIG. 1, CB 101 is shown with two bridging devices, CB-A and CB-B, which may operate independently or together. In FIG. 1, data lines 105 and control lines 106 are shown connecting CB-A and CB-B so that the two bridging devices can operate together to balance the data flow. As indicated, the uplink of the CB 101 may be connected to other devices and/or networks that exist above the CB 101.
It should be noted that in the system 100, the respective components 101 to 103 are connected by physical links having one-to-one connections with each other. Even though a virtual channel may be assigned to communications between a particular VM and CB 101, the communications pass along a single physical path. That is, data traversing the hierarchy upward from the VM or LM to the CB 101 takes a prescribed physical path. Likewise, data traversing down the hierarchy from CB 101 to an LM or VM takes a single physical path. A break (failure) in a particular link severs the physical path. Furthermore, when only a single physical path is utilized, data load balancing cannot be achieved unless alternate paths are available.
Accordingly, there is a need for a system that utilizes more than one physical link to assign data paths to endpoint devices within a hierarchy managed by a bridging device.
Disclosure of Invention
According to an aspect of the invention, there is provided a system comprising: at least one control bridge; a plurality of port transfer devices coupled to the at least one control bridge, wherein the at least one control bridge and the plurality of port transfer devices are configured in a hierarchical arrangement, wherein in the hierarchical arrangement the plurality of port transfer devices are lower in hierarchy than the at least one control bridge; and a plurality of endpoint devices coupled to the plurality of port transport devices in the hierarchical arrangement to transport data within the system, wherein one of the plurality of endpoint devices is configured to have a plurality of different physical paths to the at least one control bridge, and wherein the different physical paths are grouped together as a virtual trunk, wherein the at least one control bridge identifies the virtual trunk and selects one of the different physical paths to the one endpoint device to transport data to the one endpoint device when sending data from the at least one control bridge to the one endpoint device.
According to one embodiment of this aspect of the invention, a plurality of control bridges are configured on top of the hierarchical arrangement having the plurality of port transfer devices and the plurality of endpoint devices.
According to one embodiment of this aspect of the invention, one of the plurality of control bridges maintains an identification of the virtual trunk to determine which of the plurality of different physical paths in the virtual trunk to use in transferring data from the control bridge to the endpoint device.
According to one embodiment of this aspect of the invention, the plurality of port transfer devices comprises a plurality of line modules coupled to the plurality of control bridges at a hierarchical layer below the control bridges, wherein the different physical paths in the virtual trunk are configured to use at least two line modules.
According to one embodiment of this aspect of the invention, the plurality of port transfer devices comprises a plurality of port extension devices coupled to the plurality of line modules at a hierarchical layer below the line modules.
According to one embodiment of this aspect of the invention, the one endpoint device is coupled to one of the plurality of port extension devices by a plurality of physical connection links.
According to one embodiment of this aspect of the invention, the data transmission from one control bridge is a unicast data stream.
According to one embodiment of this aspect of the invention, the data transmission from one control bridge is a multicast data stream.
According to one embodiment of this aspect of the invention, a second endpoint device within the system has only a single physical pathway coupled to the at least one control bridge, wherein the system operates with one or more endpoint devices coupled to the at least one control bridge via a plurality of physical pathways and one or more endpoint devices coupled to the at least one control bridge via a single physical pathway.
According to another aspect of the invention, there is provided an apparatus operating as a bridging device, comprising: at least one data interface coupled to a plurality of port transfer devices, wherein the apparatus and the plurality of port transfer devices are configured in a hierarchical arrangement, wherein in the hierarchical arrangement the plurality of port transfer devices are lower in hierarchy than the apparatus, and in the hierarchical arrangement a plurality of endpoint devices are coupled to the plurality of port transfer devices to transport data from the apparatus to one of the plurality of endpoint devices; and a controller connected to the at least one data interface, wherein the controller is operable to configure a virtual trunk having a plurality of different physical paths from the at least one data interface to one endpoint device and to select one of the different physical paths to transmit data to the one endpoint device.
According to an embodiment of this further aspect of the invention, the apparatus is configured on top of the hierarchical arrangement with the plurality of port transfer devices and the plurality of endpoint devices.
According to one embodiment of this further aspect of the invention, the plurality of port transfer devices comprises a plurality of line modules coupled to the apparatus and configured by the controller at a hierarchical level below the apparatus, wherein the different physical paths in the virtual trunk are configured to use at least two line modules.
According to one embodiment of this further aspect of the invention, the plurality of port transfer devices comprises a plurality of port extension devices coupled to the plurality of line modules and configured by the controller at a hierarchical layer below the plurality of line modules.
According to one embodiment of this further aspect of the invention, one endpoint device is coupled to one of the plurality of port extension devices by a plurality of physical connection links.
According to one embodiment of this further aspect of the invention, a second endpoint device within the hierarchical arrangement has only a single physical pathway coupled to the apparatus, wherein the apparatus operates within the hierarchical arrangement having one or more endpoint devices coupled to the at least one data interface via a plurality of physical pathways and one or more endpoint devices coupled to the at least one data interface via a single physical pathway.
According to yet another aspect of the invention, a method is presented, comprising: configuring a bridge controller to work with a plurality of port transfer devices coupled to the bridge controller, wherein the bridge controller and the plurality of port transfer devices are configured in a hierarchical arrangement, wherein in the hierarchical arrangement the plurality of port transfer devices are configured to be at a lower level than the bridge controller, and in the hierarchical arrangement a plurality of endpoint devices are coupled to the plurality of port transfer devices to transfer data from the bridge controller to one of the plurality of endpoint devices; and configuring a virtual trunk to have a plurality of different physical paths from the bridge controller to one of the endpoint devices and selecting one of the different physical paths to transmit data to that endpoint device.
According to an embodiment of this further aspect of the invention, the method further comprises configuring the plurality of port transfer devices to comprise a plurality of line modules coupled to the bridge controller at a hierarchical level below the bridge controller, wherein the different physical pathways of the virtual trunk are configured to use at least two line modules.
According to an embodiment of this further aspect of the invention, the method further comprises configuring the plurality of port transfer devices to comprise a plurality of port extension devices coupled to the plurality of line modules at a hierarchical layer below the plurality of line modules.
According to one embodiment of this further aspect of the invention, the method further comprises configuring an endpoint device to be coupled to one of the plurality of port extension devices by a plurality of physical connection links.
According to one embodiment of this further aspect of the invention, the method further comprises arranging the second endpoint device within the hierarchical arrangement to have only a single physical pathway coupled to the bridge controller, wherein the bridge controller operates within the hierarchical arrangement having one or more endpoint devices coupled to the bridge controller via a plurality of physical pathways and one or more endpoint devices coupled to the bridge controller via the single physical pathway.
Drawings
Fig. 1 shows a prior art diagram depicting a system having a bridge, a plurality of line modules, and at least one endpoint device, wherein the hierarchy of the system utilizes a single physical link to transport data between the bridge and the endpoint device.
Fig. 2 shows a system block diagram according to an embodiment of practicing the invention, in which virtual trunk lines are used on physical links to provide multiple paths to transfer data between control bridges of the system and endpoint devices in order to provide dual paths for data transfer.
Fig. 3 illustrates an exemplary unicast data flow for the system of fig. 2 when using one physical link in single-homed mode, according to one embodiment for practicing the invention.
Fig. 4 illustrates an example of an ETAG format used in data communication of the system of fig. 2 in accordance with an embodiment of the present invention.
Fig. 5 illustrates an exemplary multicast data flow for the system of fig. 2 when using one physical link in single-homed mode, according to one embodiment for practicing the invention.
Fig. 6 illustrates an exemplary unicast data flow of the system of fig. 2 when virtual trunking is used in dual-homed mode, according to one embodiment for practicing the invention.
Fig. 7 illustrates an exemplary multicast data flow of the system of fig. 2 when virtual trunking is used in dual-homed mode, according to one embodiment for practicing the invention.
FIG. 8 shows a block schematic diagram illustrating a hardware device that may be used in the line module or port expander of the system of FIG. 2, according to one embodiment for practicing the invention.
FIG. 9 shows a block schematic diagram illustrating a hardware arrangement of a control bridge that may be used in the system of FIG. 2, according to one embodiment for practicing the invention.
Detailed Description
Embodiments of the present invention may be implemented in various systems employing central or edge routing devices, such as bridging devices, to transfer data. Although the embodiments are described in terms of a control bridge on a network, the invention is readily implemented in other routing devices as well; the present invention is not necessarily limited to control bridges. For example, switches in a switching fabric may employ embodiments of the present invention. Likewise, devices other than the line modules, line cards, port components, and port expanders described herein may also be used to practice the present invention. Furthermore, embodiments of the present invention are described in accordance with, but are not necessarily limited to, the IEEE802.# standards or protocols (such as IEEE802.1, IEEE802.2, IEEE802.3, etc.). Embodiments of the present invention are also described in terms of physical links in a wired environment. However, other embodiments may use wireless links, or may be combined with systems in which some of the links are wireless. Thus, the physical links described herein and shown in the figures refer to wired connections, but it is noted that in other embodiments wireless communication paths, or a combination of wired and wireless paths, may be employed.
Fig. 2 shows a block diagram of a system 200 including a plurality of Control Bridges (CBs) 201, Line Modules (LMs) 202, and Port Expanders (PEs) 203. System 200 shows two CBs 201 (labeled CB0 and CB1). It is noted that other embodiments may have more than two CBs. The CBs 201 communicate with each other to transfer data and control information between the CBs through a data bus 240 and control lines 241. Each CB 201 communicates with a plurality of LMs 202 disposed at the downlink of the CB 201. Eight LMs (labeled LM0 to LM7) are indicated for system 200, but other embodiments may have a greater or lesser number of LMs 202. In an embodiment of system 200, the LMs 202 are positioned at a level below the CBs 201 in the hierarchical arrangement of system 200. A given LM 202 provides an interface between a CB 201 and the components and devices present downstream of the LM 202, such that the LM essentially operates as an extended port.
The system hierarchy of LMs 202 is followed by PEs 203, which provide functionality to increase (e.g. expand) the number of components that can be connected to each LM 202. For example, if one LM 202 has "N" downstream lines, it can be connected to "N" end devices or end stations (or stations). However, by using a PE on each LM line, the number of end stations connectable through the LM increases. For example, if a particular LM with "N" downstream lines connects each line to a PE with "M" downstream lines, then potentially NxM stations may be connected to the CB by the LM/PE combination. It is noted that each PE 203 may be further extended by having another PE or PEs positioned further downstream from the first PE. While the hierarchical structure described above may vary widely, the gist of an embodiment of the present invention is identified in fig. 2 and the following description. Note that in fig. 2, four PEs are shown (labeled PE0, PE1, PE3, and PE4). Further, it should be noted that in some cases a station may be coupled directly to an LM, or even directly to a CB without utilizing an LM 202.
Thus, for the particular structure shown in system 200, CB0 and CB1 are coupled to the LMs (LM0 to LM7) and to each other such that data transfer may occur between a particular CB and a particular LM. Likewise, each LM 202 may be coupled downstream to a station, a PE 203, or another device or component. As described above, a particular station may be directly coupled to an LM 202 (as shown by stations S4 and S5 connecting to LM5) or even directly connected to a CB 201 (as shown by station S3). In one embodiment, the devices of system 200 communicate with each other and transmit data within system 200 using one or more of the IEEE802.# standards or protocols (such as IEEE802.1, IEEE802.2, IEEE802.3, etc.). Further, data may be transmitted from a CB 201 to the uplink, or received from the uplink into a CB 201. In one embodiment, an Ethernet LAN provides the uplink connection for the CBs 201. However, other protocols, standards, and/or specifications may be used in other embodiments. Typically, upon startup initialization, when devices are added, or during other conditions, the various devices/components of the system 200 are identified in the system, and the CBs retain the preconfiguration information of the system 200.
System 200 may operate as a single-homed system, a dual-homed system, or a combination of the two. When operating in single-homed mode, the physical link coupling a particular station to the CB has a single physical path up the hierarchy to the designated CB 201. When operating in dual-homed mode, there are two alternative paths from a particular station to two CBs, routed through two different intermediate routing devices. A combined system employs both single-homed and dual-homed schemes. As will be described below, for a dual-homed system, a CB may establish and maintain a virtual connection (termed herein a "virtual trunk" or a "virtual channel") to a station over two different physical paths (links), such that data transfer over either or both of the two physical links may be achieved.
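The virtual trunk concept above can be sketched in code. The following is an illustrative sketch only (not taken from the patent): it assumes a hash-based selection among the grouped physical paths, and all names and path strings are hypothetical.

```python
# Hypothetical sketch: a control bridge groups the physical paths to a
# dual-homed station into a virtual trunk and picks one path per packet.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualTrunk:
    station: str                                    # endpoint device (station) this trunk reaches
    paths: List[str] = field(default_factory=list)  # physical paths, e.g. via different LMs

    def select_path(self, flow_hash: int) -> str:
        """Pick one physical path; multiple live paths allow failover/load balancing."""
        if not self.paths:
            raise RuntimeError(f"no physical path to {self.station}")
        return self.paths[flow_hash % len(self.paths)]

# Dual-homed station S0 reachable through two different line modules.
trunk = VirtualTrunk("S0", ["CB0->LM0->PE0", "CB0->LM1->PE0"])
print(trunk.select_path(flow_hash=7))   # → CB0->LM1->PE0
```

If one path fails, removing it from `paths` lets subsequent selections fall back to the surviving link, which is the dual-homed benefit described above.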
In the example embodiment of fig. 2, station S0 is coupled to PE0 using a single connection (link) labeled interface 1, and station S1 is coupled to PE3 using a single connection labeled interface 2. Station S6 is coupled to PE1 using a single connection link, and station S2 is coupled to PE4 using a single connection. At the PE layer, PE0 is shown with connections to LM0 and LM1. It should be noted that in other embodiments PE0 may be coupled to more LMs. Likewise, PE1 is shown coupled to LM0 and LM1. PE3 is coupled to LM6 and LM7. PE4, on the other hand, is shown coupled only to LM6. Because in the embodiment of fig. 2 each LM has separate connections to CB0 and CB1, PE0, PE1, and PE3 have dual paths to either CB. PE4, however, does not have a full dual path to the CBs, because PE4 has only a single path through LM6.
Thus, a station may establish dual paths from the station to the CBs using different LMs, while other stations (e.g. station S2) may only establish paths through a single LM. Further, it should be noted that stations S0, S1, S2, and S6 are shown with a single link between the station and the corresponding PE, but in other embodiments the interface connecting an end station to a PE may have multiple links. Such multiple-link coupling of stations may provide redundancy in the connection of the end stations. Thus, the dashed line at station S6 in fig. 2 shows a potential second link in the interface coupling S6 to PE1.
In the following description of figs. 3, 5, 6, and 7, the single-homed mode and the dual-homed mode are described. The single-homed mode is a mode in which a single physical path is available or configured for reaching the last PE connected to a station, or the end station itself. The dual-homed mode is a mode in which dual physical paths are available or configured for reaching the last PE connected to a station, or the station itself. It is noted that fig. 2 shows only one PE layer in the hierarchy, but other embodiments may use multiple PE layers.
Fig. 3 shows a single-homed mode of operation for unicast data flow from one station to another. In the example, station S0 (labeled as virtual machine 0, or VM0) generates a data packet for unicast transmission through one of the CBs to station S1 (VM1). In some cases, the connection may be to an interface coupled to a virtual station, designated as a Virtual Station Interface (VSI), which may be coupled to an edge relay. The data packet includes a Media Access Control (MAC) Source Address (SA) of station S0 to identify the source of the data packet and a MAC Destination Address (DA) to identify the destination of the data packet, which in the example is station S1. If the stations are operating within a virtual LAN, a Virtual LAN (VLAN) identifier may also be included. When station S0, which is coupled to PE0 via interface 1, sends the packet, PE0 assigns a tag to the packet. Although various tags may be assigned to packets, fig. 4 illustrates one format, known as an E-channel tag (ETAG), that may be used for Ethernet communications.
The ETAG 300 shown in fig. 4 uses a format specified by an IEEE802.# specification, such as IEEE802.1BR, which provides for bridge port extension. In this format, the ETAG Ethertype field 301 defines an IEEE802.3 type field for determining that the frame carries an ETAG. An E-channel identifier (ECID) field 302 identifies the downstream interface (e.g. VM/VSI) associated with the frame. For packets transmitted upstream, the ECID field 302 identifies the source VM/VSI. The ECID value may also indicate whether the transmission is unicast or multicast. In one embodiment involving the use of IEEE802.1BR, ECID values below 4096 are for unicast destinations, while values in the range of 4096 to 16383 represent multicast replication tree identifiers. It should be noted that other embodiments may use values other than those noted above.
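The ECID value split described above can be summarized as a small sketch. This uses the boundary values stated in the text (below 4096 for unicast, 4096 to 16383 for multicast); other embodiments may use other values, and the function name is illustrative.

```python
# Sketch of the ECID value ranges described above (boundaries per the text;
# other embodiments may differ).
def ecid_kind(ecid: int) -> str:
    if 0 <= ecid < 4096:
        return "unicast"      # identifies a single destination VM/VSI
    if 4096 <= ecid <= 16383:
        return "multicast"    # identifies a multicast replication tree
    raise ValueError("ECID outside the range used in this embodiment")

print(ecid_kind(100), ecid_kind(5000))   # → unicast multicast
```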
The ingress ECID field 303 is used for a pruning function to ensure that data is not sent back to senders in the same namespace within the hierarchy. The ingress ECID is valid only for downstream packet flows and identifies the VM/VSI from which the packet originated. If the source VM/VSI and the destination VM/VSI are in the same namespace domain, the packet is not transmitted back to the source. Although not essential to the understanding of the present invention, the ETAG format 300 also includes a Priority Code Point (PCP) field 304 and a Discard Eligibility (DE) field 305. The PCP is a value used for traffic differentiation, and the DE is a value indicating whether a frame may be dropped when congestion is experienced.
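The pruning rule above can be sketched as a predicate. This is an illustrative reading of the text, assuming an ingress ECID of 0 marks a packet from a different namespace domain; the function and parameter names are hypothetical.

```python
# Sketch of ingress-ECID pruning: a downstream packet whose ingress ECID
# matches a port's source VM/VSI is not forwarded back to that source.
def should_forward(dest_ecid_on_port: int, ingress_ecid: int) -> bool:
    # ingress_ecid == 0: the packet crossed into a different namespace domain,
    # so no pruning applies; nonzero: it names the originating VM/VSI here.
    if ingress_ecid == 0:
        return True
    return dest_ecid_on_port != ingress_ecid

print(should_forward(7, 7))   # → False  (would return to the sender: pruned)
print(should_forward(8, 7))   # → True   (different VM/VSI: forwarded)
```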
Referring again to fig. 3, an example packet flow is shown. Assuming station S0 is operating in single-homed mode, unicast packets from station S0 are sent to station S1 via PE0, which acts as an access PE. PE0 is coupled to LM0, where LM0 acts as a transport PE to carry traffic from S0 (VM0) upstream to one of the CBs. In single-homed mode, only one LM is selected. The CB acts as the central network policy authority for system 200 and performs forwarding functions to communicate the packet traffic to LM6. LM6 and PE3 transmit the traffic downstream to station S1, also indicated as VM1. In single-homed mode, only one LM (e.g. LM6) is used for the path between the CB and PE3. It should be noted that the PEs interfacing to a station are referred to as access PEs (APEs), while the other intermediate PEs are referred to as transport PEs (TPEs). An LM may be either an APE or a TPE, depending on where the station is attached. In the example shown, the LMs function as TPEs. APEs allocate ETAGs based on ingress ports, while TPEs do not allocate ETAGs.
For upstream traffic flows, PE0 is responsible for assigning ETAG 350 to packets based on the ingress port. The ECID (ETAG.ECID) identifies the source station of the data packet (S0 in the example). PE0 also populates the PCP and DE fields of the ETAG. The ingress ECID field is set to "0". A packet ingressing with ECID = 0 is handled as a non-ETAG packet (and will be assigned an ETAG based on the ingress port), except that the incoming PCP/DE values are preserved.
PE0 forwards traffic received on a downstream port to a preconfigured upstream port. The packet is typically not subject to layer 2 (L2) or layer 3 (L3) lookup or learning. LM0 considers all incoming packets at its downstream ports to have ETAGs, so the ingress ECID field is not examined by LM0. LM0 then performs a Reverse Path Forwarding (RPF) check based on the ECID field of the incoming ETAG. This check is performed to determine that the incoming ECID is known and present on the downstream port. LM0 then sends the traffic received on the downstream port to a preconfigured upstream port. The packets are not subject to L2 and/or L3 (L2/L3) lookup or learning.
Subsequently, when the CB receives a packet from LM0, the CB uses { ingress port, ETAG.ECID } to identify the station from which the packet originated (in this case, S0). Any policy for traffic from S0 is applied. The CB also learns the association between { MAC SA, VLAN } and { ingress port, ETAG.ECID }. The CB then performs an L2/L3 forwarding lookup on the { MAC DA, VLAN } of the packet, in which case the result may be a local station on the system 200 or a destination reachable through the Ethernet uplink of the CB, for example through an L2 switch coupled to the CB. If the packet is destined for an uplink, such as the Ethernet uplink through the L2 switch shown in fig. 3, the forwarding lookup yields only an egress port; the ETAG is deleted and the data packet is sent to the Ethernet uplink. If the destination is a local station (the case in the example), the forwarding lookup yields { egress port, egress ETAG.ECID }.
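The CB forwarding lookup just described can be sketched as a table keyed by { MAC DA, VLAN }. This is a hypothetical illustration: the table contents, MAC addresses, and port names are invented, and learning and policy steps are omitted.

```python
# Sketch of the CB forwarding lookup: a local station yields
# {egress port, egress ECID}; an uplink destination yields only the uplink
# port, and the ETAG is then removed before transmission.
fib = {
    ("00:11:22:33:44:01", 10): ("LM6", 42),      # local station S1 behind LM6/PE3
    ("00:11:22:33:44:ee", 10): ("uplink", None), # reachable via the Ethernet uplink
}

def cb_forward(mac_da: str, vlan: int):
    port, egress_ecid = fib[(mac_da, vlan)]
    keep_etag = egress_ecid is not None          # ETAG deleted for uplink ports
    return port, egress_ecid, keep_etag

print(cb_forward("00:11:22:33:44:01", 10))   # → ('LM6', 42, True)
```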
As shown in fig. 3, the downstream packet flow is through LM6 and PE3 to station S1. For packets traveling downstream, the ECID identifies the destination station (VM or VSI). If the egress port happens to be the same as the ingress port, the packet is sent back into the same namespace domain; the ingress ECID field of the ETAG is populated with the ECID of the input ETAG, and the egress ETAG.ECID is allocated from the forwarding lookup. If the egress port is different from the ingress port, the packet is directed to a different namespace domain, such that the ingress ECID field is set to "0". The egress ETAG.ECID 351 is allocated from the forwarding lookup at the CB.
For CB downstream traffic, the downstream TPE (LM6 in this example) expects packets from the CB to contain ETAG 351. LM6 discards all non-ETAG packets or copies them to its processing path. LM6 also checks whether the format of the ETAG is correct for the downstream packet flow. An RPF check is typically performed at this time. LM6 then forwards the packet based on a { ingress port, ETAG.ECID } lookup yielding the destination port (the downstream port to PE3). The ingress port in the key identifies the namespace (CB port) of the ECID, and the packet is forwarded to PE3. PE3 likewise forwards the packet based on a { ingress port, ETAG.ECID } lookup. Because the packet is now sent to station S1 (and not to another PE), PE3 deletes the ETAG from the packet before sending the packet to station S1. Fig. 3 shows the relevant parts of the data packets involved in the packet flow in the rectangular blocks in the lower part of the figure.
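The downstream TPE forwarding step can be sketched in a few lines. The table contents and port names below are hypothetical; the point illustrated is that the lookup key pairs the CB-facing ingress port (which scopes the ECID namespace) with the packet's ECID.

```python
# Sketch of the downstream TPE lookup: {ingress port, ECID} -> destination port.
# ECIDs are meaningful only within the namespace of the CB port they arrive on.
downstream_table = {
    ("cb0_port", 42): "port_to_pe3",   # ECID 42 in CB0's namespace -> toward PE3
    ("cb1_port", 42): "port_to_pe0",   # same ECID, different namespace
}

def tpe_forward(ingress_port: str, ecid: int) -> str:
    return downstream_table[(ingress_port, ecid)]

print(tpe_forward("cb0_port", 42))   # → port_to_pe3
```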
For multicast packet flows, fig. 5 shows an example of multicast packet traffic from station S0 to multiple destinations in single-homed mode. The flow from station S0 to the CB is identical to the upstream traffic flow described with respect to fig. 3, except that the ECID value indicates a multicast transmission. For example, under IEEE802.1BR, ECID values of 4096 or greater are specified for multicast destinations. In one implementation, ECID values in the range of 4096 to 16383 represent multicast replication tree identifiers.
In the upstream direction, all packets, whether unicast or multicast, are first sent to the CB. Each port forwards traffic to its associated upstream port(s). As indicated, when a multicast packet is received at PE0 from S0, the processing is consistent with the unicast case: an ETAG is inserted and the packet is forwarded to a pre-programmed upstream port.
At the CB, the CB performs a forwarding lookup based on {MACDA, VLAN} and determines the receivers for each packet. Multicast replication is driven by ETAG.ECID (e.g., ECID values from 4096 to 16383 identify the traffic as multicast). Thus, even if multiple multicast destinations are coupled to a particular PE, the CB sends only one copy of the packet to each PE connected downstream of it. Each downstream PE represents a single multicast replication tree with a unique multicast replication index (pointer); in one embodiment a 14-bit multicast replication index is used. When the CB receives a packet from LM0, the CB uses {ingress port, ETAG.ECID} to identify the station from which the packet originated (S0 in this case), and any policies for traffic from S0 are applied. The CB also learns the association between {MACSA, VLAN} and {ingress port, ETAG.ECID}. The CB performs an L2/L3 forwarding lookup on the packet's {MACDA, VLAN}. The ETAG from the CB is shown as ETAG 360. For uplink ports sending to one or more receivers over Ethernet, the ETAG is deleted and the packet is sent out the uplink port.
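The one-copy-per-downstream-PE replication at the CB might be sketched as follows; the port names are illustrative, and a real CB would derive the port list from the replication tree rather than take it as a parameter:

```python
def cb_replicate_multicast(tree_id, downstream_ports):
    """Emit one copy per downstream port; the egress ETAG.ECID carries the
    14-bit multicast replication tree id and the ingress ECID is cleared."""
    if not (4096 <= tree_id <= 16383):
        raise ValueError("not a multicast replication tree identifier")
    return [{"port": p, "etag": {"ecid": tree_id, "ingress_ecid": 0}}
            for p in downstream_ports]

copies = cb_replicate_multicast(4100, ["port_to_LM5", "port_to_LM6"])
print(len(copies))  # prints: 2
```

Each downstream LM then fans the single copy out to the tree members below it, as described next for fig. 5.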
As shown in fig. 5, multiple receivers are shown as destinations of the multicast transmission. A receiver may be coupled directly to the CB, as shown by station S3 (VM3). For those downstream ports where a station exists beyond a PE, the CB sends one copy of the packet with ETAG.ECID set to the multicast replication tree identifier. For a packet sent back out the ingress port, the ingress ECID of the ETAG is populated accordingly. In this example, LM6 is a TPE and LM5 is an APE. For packets output on ports connected directly to a station, the ETAG is deleted; therefore, for packets output from LM5, the ETAG is deleted and the packets are transmitted to stations S4 and S5 (VM4 and VM5). For these ports, LM5 also checks whether the packet originated from the same port (i.e., whether the ingress ECID in the ETAG matches).
For packets output on ports coupled to a PE, the ETAG is passed through (shown as ETAG 361). Thus, as shown in fig. 5, LM6 transmits ETAG 361 to PE4, where the ETAG is removed before the packet is forwarded downstream to station S2 (VM2). Similar to fig. 3, fig. 5 shows the relevant parts of the packets involved in the packet flow in rectangular blocks in the lower portion of the figure.
When the system 200 of fig. 2 is configured to operate in a dual-homed mode of operation, dual physical links are established in place of the single path used in single-homed mode. Thus, in the dual-homed mode of operation, station S0 is shown with two physical links (e.g., two separate physical paths) from PE0 to the CB through LM0 and LM1. In the example, one physical link couples S0 to PE0 (designated as interface 1); however, as described above, in other embodiments multiple links may be used to couple S0 to PE0. The two physical paths through LM0 and LM1 from either or both of the CBs to PE0 are designated as a single virtual trunk (VTRUNK, denoted VTRUNK-A) for station S0. This is illustrated in fig. 2, where two physical paths are configured from PE0 to the CB through different LMs. Fig. 2 also shows a second VTRUNK (denoted VTRUNK-B), in which two physical links are configured between PE3 and the CB through two different LMs (LM6 and LM7) to connect to station S1.
Further, it should be noted that PE1 can also have a VTRUNK dual-connectivity path configured for PE1 and station S6 through LM0 and LM1. It should be noted that for a dual-homed configuration, a station utilizes two different paths from an APE device to a CB, where the paths pass through different TPE devices.
As shown in fig. 2, the upstream coupling of VTRUNK-A from PE0 uses two physical links (denoted by grouping 210). By using separate LMs, an alternate physical link upstream to the CB remains available even if one LM fails. As shown, one physical link is coupled to LM0 and a second physical link is coupled to LM1. Similarly, fig. 2 shows that the dual upstream connection VTRUNK-B for station S1 has one physical link to LM6 and a second physical link to LM7, using grouping 212. PE1 may also be configured for dual-homed use, since PE1 may establish a VTRUNK by using grouping 211 to configure paths through LM0 and LM1 to the CB.
The upstream connection from LM0 to CB0 utilizes physical link 220, and the upstream connection from LM1 to CB0 utilizes physical link 221. Because a packet from S0 may take a path through either LM0 or LM1, two physical paths are available for upstream transmission of the packet to CB0. In this way, the dual paths of the VTRUNK use different intermediate routing devices/components at least at the TPE layer. If one LM fails, the second path to CB0 for the dual-homed configuration is still available. It should be noted that LM0 and LM1 may also provide dual physical link connections via links 222 and 223, respectively, to CB1. In this way, packets from S0 can still be sent to their intended destination even if a CB fails.
In order to associate the dual physical paths of a VTRUNK connecting to a particular CB, the concept of "virtual trunking" (also referred to as "virtual channelization") is implemented over the physical links. When establishing the individual connections for dual-homed operation, the CB creates VTRUNKs that identify the dual paths for a given end station. In the above example of station S0 and VTRUNK-A, CB 201 sets up a virtual trunk (or channel) that identifies both LM0 and LM1 as downstream destinations for station S0. The virtual connection is shown as a dashed line in grouping 230. That is, grouping 230 identifies that one virtual path, called VTRUNK-A, actually has two possible downstream paths (one to LM0 and one to LM1). This information is typically retained in the CB as part of a pre-configured system. Thus, when the CB receives upstream ETAG information, the CB checks whether the destination device is connected through a virtual trunk. If so, the dual-homing technique may be applied, in which the CB determines the VTRUNK and the downstream devices that are members of that VTRUNK.
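The per-CB virtual-trunk state described above might be modeled as a simple membership table; the link and trunk labels below are hypothetical names for the paths of fig. 2:

```python
# VTRUNK membership retained at a CB: each virtual trunk maps to the
# physical downstream links (through different LMs) that can reach a station.
VTRUNKS = {
    "VTRUNK-A": ["link_to_LM0", "link_to_LM1"],  # toward station S0
    "VTRUNK-B": ["link_to_LM6", "link_to_LM7"],  # toward station S1
}

def downstream_paths(destination):
    """All physical links usable toward the destination: the trunk members
    for a dual-homed station, or the single physical link itself otherwise."""
    return VTRUNKS.get(destination, [destination])

print(downstream_paths("VTRUNK-A"))    # prints: ['link_to_LM0', 'link_to_LM1']
print(downstream_paths("link_to_LM6"))  # prints: ['link_to_LM6']
```

Because a single-homed destination simply falls through to its one link, dual-homed and single-homed stations can coexist in the same table, as the text describes later.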
If an equivalent virtual connection is established for VTRUNK-A at CB1, the virtual connection (shown by the dashed line of grouping 231 in fig. 2 to identify the trunk) provides information at CB1 that physical links 222 and 223 are available to VTRUNK-A to reach station S0. Once LM0 or LM1 receives packet traffic destined for a station associated with a VTRUNK, the associated ETAG identifies the destination device so that the receiving LM may further transmit the packet downstream to the intended destination (e.g., station S0).
A similar technique may be used for VTRUNK-B with respect to station S1, where VTRUNK (or channel) groupings 232, 233 may be used to configure LM6 and LM7 as the downstream devices for reaching PE3 and station S1. This technique may be used to establish multiple VTRUNKs, where each CB retains information about which of its physical links belong to each VTRUNK. In this manner, a given CB may send packets downstream on either physical link based on the provisioned ETAG/ECID. It is noted that a particular physical link may be assigned to more than one VTRUNK.
Fig. 6 shows an example of a unicast packet flow from station S0 (VM0) to station S1 (VM1) using dual-homed virtual channels. As shown, a unicast packet is sent from station S0 on interface 1 to the downstream port of PE0. PE0 adds an ETAG with an interface-specific ECID value. After a physical link is resolved, the packet is forwarded to either LM0 or LM1 (since dual homing is enabled for PE0). Whichever LM receives the packet verifies that the incoming packet has the correct ETAG (i.e., that the ECID value is valid on the ingress port) and then forwards the packet on its upstream port toward CB0 (or CB1). When the CB receives this packet, it converts {ingress port, ECID} into {interface 1, VTRUNK-A}, because station S0 is configured for dual-homing, and learns the interface/VTRUNK value for the packet's {MAC_SA, VLAN}. L2/L3 forwarding at the CB then sends this packet toward the destination {interface 2, VTRUNK-B}, and VTRUNK-B is resolved to a physical link connected to either LM6 or LM7.
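The two CB lookups in this flow — the upstream {ingress port, ECID} to {interface, VTRUNK} translation, then L2 forwarding to the destination trunk — can be sketched as below. All table entries, names, and ECID values are invented for illustration:

```python
# Upstream translation: which interface/trunk a packet arrived from.
INGRESS_TO_VIF = {("port_from_LM0", 17): ("interface1", "VTRUNK-A")}
# L2 forwarding: destination {MAC DA, VLAN} to destination interface/trunk.
L2_FWD = {("mac_vm1", 10): ("interface2", "VTRUNK-B")}

def cb_forward_unicast(ingress_port, ecid, mac_da, vlan):
    """Return the destination (interface, VTRUNK), or None if unresolvable."""
    source = INGRESS_TO_VIF.get((ingress_port, ecid))
    if source is None:
        return None  # unknown source interface
    # (the source association would also be learned against {MAC_SA, VLAN} here)
    return L2_FWD.get((mac_da, vlan))

print(cb_forward_unicast("port_from_LM0", 17, "mac_vm1", 10))
# prints: ('interface2', 'VTRUNK-B')
```

The returned trunk is then resolved to one of its physical member links, as described for the CB above.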
In fig. 6, the forwarding resolves to two physical port members, one to LM6 and the other to LM7, for reaching PE3. The packet output from the CB is modified to replace the ETAG with a new ECID value representing interface 2 at PE3. The physical port resolution scheme picks either LM6 or LM7 and forwards the packet downstream. The receiving LM checks the ETAG status of the incoming downstream packet and forwards the packet to the downstream port toward PE3 based on an {ingress port, ETAG.ECID} lookup. PE3 does likewise and sends the packet to interface 2 after deleting the ETAG present in the packet. Note that the dual physical links are shown as groupings (circled) in fig. 6. As described above, a variation of the dual-homed configuration may retain a single connection between a station and its APE, such as the single connection between S6 and PE1 (shown in fig. 2).
Fig. 7 shows an example of a multicast transmission from station S0. For a multicast transmission, packets are forwarded in the upstream direction as unicast packets, and the L2/L3 forwarding process occurs at the selected CB. The forwarding lookup identifies the packet as destined for a designated multicast group. The CB uses a loop-free multicast replication tree rooted at the CB to reach the specific interfaces connected to the CB via VTRUNKs. The ECID of the outgoing packet's ETAG is replaced by the multicast tree ID identifying the downstream multicast packet replication tree. A single copy of the packet is then forwarded to each LM. The receiving LM checks the ETAG status of the incoming packet and performs a forwarding lookup based on {ingress port, ETAG.ECID}, which yields a list of downstream ports that are members of the multicast replication tree. The LM replicates the packet and forwards the copies to the downstream ports, which may be PEs or VMs. The PE then, after deleting the ETAG, forwards the packet to either or both of its two interfaces, depending on which are members of the selected multicast tree. If the receiving PE identifies a packet as having originated on one of the destination interfaces (one that exists in the multicast replication tree), pruning is performed: because the ingress ECID value in the packet's ETAG matches the ECID value of that interface, the packet is dropped and no copy is forwarded to it.
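The source-pruning rule at the receiving PE reduces to a single comparison; the variable names below are ours:

```python
def pe_prune(etag_ingress_ecid, interface_ecid):
    """True if a multicast copy originated on this interface and must be
    dropped rather than forwarded back to its own source."""
    return etag_ingress_ecid == interface_ecid

print(pe_prune(17, 17))  # prints: True  (copy returns to its source; drop)
print(pe_prune(0, 17))   # prints: False (came from elsewhere; forward)
```

An ingress ECID of 0 corresponds to the cross-namespace case described for fig. 3, so such copies are always forwarded.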
Fig. 7 shows an example of a multicast transmission in a dual-homed system. Multicast packet transmission from station S0 upstream to the CB is equivalent to the unicast packet transmission from station S0 described for fig. 6, but with the multicast rules described with respect to fig. 5. In fig. 7, station S3 is directly connected to one or both CBs, while S4 and S5 are connected directly to LM5 with no intervening PE. Fig. 7 also illustrates the case in which one of the stations is not configured for dual-homed operation. Fig. 2 shows station S2 connected only through a single LM (LM6), so that no VTRUNK is established for S2. Thus, in this example, station S2 does not operate in dual-homed mode, and its only physical path to the CB is through a single LM (LM6). The system 200 is nevertheless operable such that some stations are configured for dual-homed operation while other stations are configured for single-homed operation. Thus, in fig. 7, while other stations are configured for dual-homed operation (using VTRUNKs), the remaining stations (such as S2) may be configured for single-homed operation (not using VTRUNKs).
The same applies to the unicast case of fig. 6. That is, either the upstream station or the downstream station may be configured in dual-homed mode while the other is configured in single-homed mode. Thus, in implementations of various embodiments of the present invention, the system may be configured to operate as a dual-homed system, a single-homed system, or a combination of both schemes in which some end stations are configured for dual-homed operation while other stations operate single-homed. An outgoing link may be treated as a VTRUNK (with multiple physical paths to the station) or non-VTRUNK (with one physical path to the station). The multiple physical links may terminate at either the end station itself or at the APE that interfaces to the station.
Although various components and devices may be used for the port transport devices (port devices), such as the PEs and LMs described above, one embodiment is shown in fig. 8. Fig. 8 shows a port transport device 400, which may be a port extender, a line module, a line card, etc., providing hardware to perform the port functions of the LMs and PEs described above. The apparatus 400 includes an upstream interface 401 and a downstream interface 402 for receiving and transmitting data packets. Corresponding buffers 403 and 404 may be associated with the interfaces to buffer data; in some cases, there may be only one buffer or no buffer at all. A controller, processor, or processing circuit 405, utilizing an associated memory 406, may provide the control functions for port transport and routing of data packets.
Likewise, fig. 9 shows an embodiment in which a CB as described above is provided as apparatus 500. Interfaces 501 and 502, along with buffers 503 and 504, provide for the reception and transmission of data packets. In some cases, a single interface may be used for both reception and transmission with devices lower in the hierarchy. The uplink interface 510 and accompanying buffer 511 may provide port transport for an uplink device, component, or network. It should be noted that three buffers are shown, but one or any number of buffers may be used; in some cases, there may be no buffer. A controller, processor, or processing circuit 505, utilizing an associated memory 506, may provide the control functions for port transport and routing of data packets. In one embodiment, the VTRUNK information 507 for routing packets to dual-homed destinations as described above is retained in a portion of memory 506. It is noted that either or both of apparatus 400 and apparatus 500 may be integrated onto one or more integrated circuits, printed circuit boards, circuit cards, and other devices used to construct circuits.
Thus, by associating physical link information with a virtual port in the CB, packets destined for a virtual interface may be carried over multiple physical links. A physical link resolution scheme may be performed, and a physical member selected according to a member-selection algorithm. If a packet is destined for a virtual interface connected to a single-homed PE, the packet is sent over the single physical link connecting the CB and the LM. For a dual-homed virtual interface, two separate physical paths are allocated, where either or both of the paths may be used to transmit packets. However, the multiple paths are treated by the system as a single virtual path.
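One plausible member-selection algorithm — not specified by the text — is a flow-hash pick among the trunk members, which keeps a given flow on one physical path:

```python
def select_member(members, flow_key):
    """Pick one physical member for a flow. A single-homed interface has
    exactly one member, so the choice is trivial there; for a trunk, the
    flow hash spreads traffic while keeping each flow on a single link."""
    return members[hash(flow_key) % len(members)]

members = ["link_to_LM6", "link_to_LM7"]
chosen = select_member(members, ("mac_vm0", "mac_vm1", 10))
print(chosen in members)  # prints: True
```

Any deterministic member-selection function would serve; the key property is that both members resolve to the same virtual path from the system's point of view.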
Furthermore, the dual-homed mode of operation described above uses two physical paths. Other embodiments may readily extend the dual-homing technique to provide more than two physical paths when constructing an aggregated grouping. Thus, the present invention can be readily applied to various multi-homed systems. Likewise, the system may be implemented strictly as a dual-homed (or multi-homed) system, or as a combination of single-homed and multi-homed systems, such that some endpoints are connected by a single link while other endpoints are connected through an aggregated grouping.
Effectively, based on the destination virtual port, the CB may discover via VTRUNKs the multiple physical links connecting it to the LMs, and when the virtual interface is not dual-homed, the physical links connecting to the CB are treated as separate links. In practicing the invention, multiple virtual ports may exist on the same physical link, but the combinations of physical links reaching the endpoints may differ and may traverse different intermediate routing devices.
Thus, virtual trunking (or channelization) over multiple physical links has been described. The example embodiments herein use the two physical links of a dual-homed system; however, other embodiments may use additional paths to provide an X-homed system with X physical links allocated to the VTRUNK. The present invention may be implemented in a variety of systems including, without limitation, trunk lines, enterprise architectures, switching fabrics, and the like. It is further noted that the various connections shown in the figures may be provided by wired connections, wireless connections, or a combination of both. In addition, the virtual trunking system shown can transmit a variety of data, not just packets.
Embodiments of the present invention have been described above with the aid of functional building blocks illustrating the performance of certain functions. The boundaries of these functional building blocks have been arbitrarily defined for convenience of description; alternate boundaries may be defined so long as the specified functions are appropriately performed. One of ordinary skill in the art will further appreciate that the functional building blocks, and other illustrative blocks, modules, and components herein, may be implemented as shown or by discrete components, application-specific integrated circuits, processors executing appropriate software, and the like, or any combination thereof.
Also as used herein, the terms "controller," "processor," and/or "processing unit or circuit" may refer to a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, microcontroller, digital signal processor, microcomputer, central processing unit, field-programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, processing circuit, and/or processing unit may have an associated memory and/or integrated memory element, which may be a single memory device, multiple memory devices, and/or the embedded circuitry of another processing module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random-access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information.

Claims (10)

1. A system, comprising:
at least one control bridge;
a plurality of port transport devices coupled to the at least one control bridge, wherein the at least one control bridge and the plurality of port transport devices are configured in a hierarchical arrangement, wherein in the hierarchical arrangement the plurality of port transport devices are lower in hierarchy than the at least one control bridge; and
a plurality of endpoint devices coupled to the plurality of port transport devices in the hierarchical arrangement to transport data within the system, wherein one of the plurality of endpoint devices is configured to have a plurality of different physical pathways to the at least one control bridge, and wherein the different physical pathways are grouped together as a virtual trunk, wherein the at least one control bridge identifies the virtual trunk and selects one of the different physical pathways to the one endpoint device to transport the data to the one endpoint device when sending data from the at least one control bridge to the one endpoint device.
2. The system of claim 1, wherein a plurality of control bridges are configured on top of the hierarchical arrangement having the plurality of port transport devices and the plurality of endpoint devices.
3. The system of claim 2, wherein each of the plurality of control bridges retains an identification of the virtual trunk to determine which of the plurality of different physical pathways in the virtual trunk to use in transferring data from that control bridge to the endpoint device.
4. The system of claim 3, wherein the plurality of port transport devices comprises a plurality of line modules coupled to the plurality of control bridges at a hierarchical layer below the control bridges, wherein the different physical pathways in the virtual trunk are configured to use at least two line modules.
5. The system of claim 1, wherein a second endpoint device within the system has only a single physical pathway coupled to the at least one control bridge, wherein the system is operative to have one or more endpoint devices coupled to the at least one control bridge via a plurality of physical pathways and one or more endpoint devices coupled to the at least one control bridge via a single physical pathway.
6. An apparatus operating as a bridging device, comprising:
at least one data interface coupled to a plurality of port transport devices, wherein the apparatus and the plurality of port transport devices are configured in a hierarchical arrangement, wherein in the hierarchical arrangement the plurality of port transport devices are lower in hierarchy than the apparatus and a plurality of endpoint devices are coupled to the plurality of port transport devices to transport data from the apparatus to one of the plurality of endpoint devices; and
a controller connected to the at least one data interface, wherein the controller is operable to configure a virtual trunk having a plurality of different physical pathways from the at least one data interface to the one endpoint device, and to select one of the different physical pathways to the one endpoint device to transmit the data to the one endpoint device.
7. The apparatus of claim 6, wherein the apparatus is configured on top of the hierarchical arrangement having the plurality of port transport devices and the plurality of endpoint devices.
8. The apparatus of claim 6, wherein a second endpoint device within the hierarchical arrangement has only a single physical pathway coupled to the apparatus, wherein the apparatus operates within the hierarchical arrangement having one or more endpoint devices coupled to the at least one data interface via a plurality of physical pathways and one or more endpoint devices coupled to the at least one data interface via a single physical pathway.
9. A method, comprising:
configuring a bridge controller to work with a plurality of port transport devices coupled to the bridge controller, wherein the bridge controller and the plurality of port transport devices are configured in a hierarchical arrangement, wherein in the hierarchical arrangement the plurality of port transport devices are configured to be lower in hierarchy than the bridge controller and a plurality of endpoint devices are coupled to the plurality of port transport devices to transfer data from the bridge controller to one of the plurality of endpoint devices; and
configuring a virtual trunk to have a plurality of different physical pathways from the bridge controller to the one endpoint device and selecting one of the different physical pathways to the one endpoint device to transmit the data to the one endpoint device.
10. The method of claim 9, further comprising configuring the plurality of port transport devices to include a plurality of line modules coupled to the bridge controller at a hierarchical layer below the bridge controller, wherein the different physical pathways of the virtual trunk are configured to use at least two line modules.
HK14107627.0A 2012-11-30 2014-07-26 Virtual trunking over physical links HK1194551A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61/732,236 2012-11-30
US13/766,629 2013-02-13

Publications (1)

Publication Number Publication Date
HK1194551A true HK1194551A (en) 2014-10-17
