HK1144233B - Bandwidth admission control on link aggregation groups - Google Patents
Description
Background
Link aggregation (e.g., as set forth in IEEE 802.3ad) is a computer networking term for using multiple links (e.g., Ethernet cables and/or parallel ports) as a single logical port, to increase link speed beyond the limit of any one link and/or to provide link redundancy between two network elements. Other terms for link aggregation include link bonding, link bundling, and Link Aggregation Group (LAG); LAG will be used hereinafter to denote link aggregation. A LAG may be provided locally or virtually between a pair of network elements. A LAG in a network element may provide protection against line card failures by spanning ports on the same packet processing line card or across packet processing line cards.
A LAG allows the two network elements it interconnects to communicate simultaneously over all member links in the LAG. Network datagrams may be dynamically distributed across the member links in the LAG based on local rules, so that the decision of which datagrams actually flow through a given port is handled automatically within the LAG.
LAG, as set forth in IEEE 802.3ad, allows one or more links to be aggregated together to form a LAG. Once implemented, LAGs can be configured and reconfigured quickly and automatically, with minimal packet loss and without risk of duplication or reordering of frames.
LAG may be used to provide load balancing across multiple parallel links between two network devices. One load balancing method currently in use is based on Internet Protocol (IP) header source and destination addresses. Another approach, usable for non-IP protocols carried in Ethernet frames, is based on Media Access Control (MAC) source and destination addresses. In a typical network, the load may not be evenly divided among the links of the LAG. The statistical nature of traffic distribution across the parameters (e.g., IP addresses) used by typical hashing algorithms may leave some links in the LAG overloaded while other links in the LAG are underutilized.
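The hash-based distribution described above can be sketched as follows (a minimal Python illustration, not the hashing used by any particular device; the addresses, link count, and use of SHA-256 are assumed examples). Each flow's header fields hash to one member link, which is why different flows can cluster on a few links:

```python
import hashlib

def select_lag_member(src, dst, num_links):
    # Hash flow parameters (here, source/destination addresses) to a
    # member-link index; every packet of a given flow maps to the same link.
    digest = hashlib.sha256(f"{src}->{dst}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# Count how many of a set of flows land on each of 4 member links.
flows = [(f"10.0.0.{i}", "192.168.1.1") for i in range(20)]
load = [0, 0, 0, 0]
for src, dst in flows:
    load[select_lag_member(src, dst, 4)] += 1
# The per-link totals are rarely even: the statistical nature of the hash
# can overload some links while others stay underutilized.
```

Because the mapping depends only on the hashed fields, rebalancing cannot be done per packet without reordering a flow's frames.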
A LAG may provide local link protection. If one of the member links used in the LAG fails, network traffic (e.g., datagrams) may be dynamically redirected to flow through the remaining surviving links in the LAG. The LAG may redirect traffic to surviving links based on a hashing algorithm. However, there is no way to predict in advance which link traffic will be redirected through, and it is unpredictable which link in the LAG may fail. In a point-to-point Ethernet application that uses a Virtual Local Area Network (VLAN) Identifier (ID) to identify a connection between two edge Ethernet switches, a hashing algorithm may be performed on the VLAN and/or other Ethernet header and/or payload information (e.g., IP header information if the Ethernet payload contains an IP packet). This can make it difficult to predict the load on a given link in a LAG, and can make it difficult to efficiently and predictably design an Ethernet network that provides packet loss and bandwidth Service Level Agreement (SLA) guarantees for point-to-point services. Point-to-point services, called E-Line services (Ethernet Private Line (EPL) or Ethernet Virtual Private Line (EVPL)), may have the most stringent SLAs.
Drawings
FIG. 1 is an exemplary diagram of a network in which systems and methods described herein may be implemented;
FIG. 2 is a diagram of the exemplary network device of FIG. 1;
FIG. 3 is a diagram illustrating an exemplary class of service (CoS) queue of the network device of FIG. 2;
FIG. 4 is a diagram illustrating an exemplary VLAN allocator of the network device of FIG. 2;
FIG. 5 is a functional block diagram illustrating exemplary functional components of a control unit of the network device of FIG. 2; and
FIGS. 6-8 illustrate flow diagrams of exemplary processes of the network and/or network device of FIG. 1, according to implementations described herein.
Detailed Description
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention.
The systems and methods described herein may guarantee SLAs for point-to-point services in the presence of multipoint services on a Link Aggregation Group (LAG). In one implementation, the systems and methods may ensure that a point-to-point service can share a LAG with a multipoint service while still having predictable performance. In another implementation, the systems and methods may allocate respective point-to-point connections to queues on links of the LAG via a management mechanism and/or via signaling. In other implementations, the systems and methods may receive the bandwidth available on each link of a LAG, may allocate a primary LAG link and a redundant LAG link to a Virtual Local Area Network (VLAN), and may set the available bandwidth for primary and redundant link subscriptions.
Fig. 1 is a diagram illustrating an exemplary network 100 in which systems and methods described herein may be implemented. Network 100 may include, for example, a Local Area Network (LAN), a private network (e.g., a corporate intranet), a Wide Area Network (WAN), a Metropolitan Area Network (MAN), or another type of network. In one implementation, network 100 may include switched networks that provide point-to-point and multipoint services, networks that are capable of using VLANs, and so on.
As shown in fig. 1, network 100 may include network devices 110-0, 110-1, and 110-2 (collectively referred to as network devices 110) interconnected by links 120-0 through 120-N (collectively referred to as links 120). Although three network devices 110 and eight links 120 are shown in fig. 1, more or fewer network devices 110 and/or links 120 may be used in other implementations.
Network device 110 may include a variety of devices. For example, network device 110 may include a computer, router, switch, Network Interface Card (NIC), hub, bridge, and so forth. Links 120 may include paths, such as wired connections, input ports, output ports, etc., that allow communication between network devices 110. For example, network device 110-0 may include ports PORT0, PORT1, ..., PORTN; network device 110-1 may include ports PORT0, PORT1, PORT2, and PORT3; and network device 110-2 may include ports PORT0, PORT1, ..., PORT7. The ports of network device 110 may be considered part of the respective links 120, and each port may be an input port, an output port, or a combination of input and output ports. Although eight ports for network device 110-0, four ports for network device 110-1, and eight ports for network device 110-2 are shown in FIG. 1, more or fewer ports may be used in other implementations.
In an exemplary implementation, network device 110 may provide entry and/or exit points for datagrams (e.g., traffic) in network 100. The ports of network device 110-0 (e.g., PORT0, ..., PORTN) may send and/or receive datagrams. The ports of network device 110-1 (e.g., PORT0, PORT1, PORT2, and PORT3) and the ports of network device 110-2 (e.g., PORT0, ..., PORT7) may likewise send and/or receive datagrams.
In one implementation, a LAG may be established between network devices 110-0 and 110-1. For example, ports PORT0, PORT1, PORT2, and PORT3 of network device 110-0 may be grouped together as LAG110-0, which communicates bidirectionally with ports PORT0, PORT1, PORT2, and PORT3 of network device 110-1 via links 120-0, 120-1, 120-2, and 120-3. Datagrams may be dynamically distributed between the ports of network device 110-0 (e.g., PORT0, PORT1, PORT2, and PORT3) and the ports of network device 110-1 (e.g., PORT0, PORT1, PORT2, and PORT3), so that the management of which datagrams actually flow through a given link (e.g., links 120-0, ..., 120-3) may be handled automatically by LAG110-0.
In another implementation, a LAG may be established between network devices 110-0 and 110-2. For example, ports PORTN-3, ..., PORTN of network device 110-0 may be grouped together as LAG110-2, which communicates bidirectionally with ports PORT0, PORT1, PORT2, and PORT3 of network device 110-2 via links 120-N-3, 120-N-2, 120-N-1, and 120-N. Ports PORT0, PORT1, PORT2, and PORT3 of network device 110-2 may likewise be grouped together into LAG110-2. LAG110-2 may allow ports PORTN-3, ..., PORTN of network device 110-0 to communicate bidirectionally with ports PORT0, PORT1, PORT2, and PORT3 of network device 110-2. Datagrams may be dynamically distributed between the ports of network device 110-0 (e.g., PORTN-3, ..., PORTN) and the ports of network device 110-2 (e.g., PORT0, PORT1, PORT2, and PORT3), so that the management of which datagrams actually flow through a given link (e.g., links 120-N-3, ..., 120-N) may be handled automatically by LAG110-2. With such an arrangement, network device 110 may send and receive datagrams simultaneously on all links within the LAGs established by network device 110.
Although fig. 1 shows exemplary components of network 100, in other implementations, network 100 may contain fewer, different, or additional components than illustrated in fig. 1. In other implementations, one or more components of network 100 may perform tasks performed by one or more other components of network 100.
Fig. 2 is an exemplary diagram of a device that may correspond to one of network devices 110 of fig. 1. As shown, network device 110 may include an input port 210, an ingress packet processing block 220, a switching mechanism 230, an egress packet processing block 240, an output port 250, and a control unit 260. In one implementation, the ingress packet processing block 220 and the egress packet processing block 240 may be on the same line card.
Input port 210 may be an attachment point for a physical link (e.g., link 120) (not shown) and may be an entry point for incoming datagrams. Ingress packet processing block 220 may store a forwarding table and may perform a forwarding table lookup to determine the egress packet processing block and/or output port to which a datagram should be forwarded. Switching mechanism 230 may interconnect ingress packet processing blocks 220 and egress packet processing blocks 240, as well as the associated input ports 210 and output ports 250. Egress packet processing block 240 may store datagrams and may schedule them for service on an output link (e.g., link 120) (not shown). Output port 250 may be an attachment point for a physical link (e.g., link 120) (not shown) and may be an egress point for datagrams. Control unit 260 may run routing protocols and Ethernet control protocols, build forwarding tables and download them to ingress packet processing block 220 and/or egress packet processing block 240, and so forth.
Ingress packet processing block 220 may perform data link layer encapsulation and decapsulation. To provide quality of service (QoS) guarantees, ingress packet processing block 220 may classify datagrams into predefined service classes. Input ports 210 may run data link level protocols. In other implementations, input ports 210 may send (e.g., may be exit points) and/or receive (e.g., may be entry points) datagrams.
The switching mechanism 230 may be implemented using many different techniques. For example, switching mechanism 230 may include a bus, a crossbar, and/or a shared memory. The simplest switching mechanism 230 may be a bus linking input ports 210 and output ports 250. The crossbar may provide multiple simultaneous data paths through the switching mechanism 230. In the shared memory switching mechanism 230, incoming datagrams may be stored in shared memory and pointers to datagrams may be switched.
Egress packet processing block 240 may store datagrams prior to sending them on an output link (e.g., link 120). Egress packet processing block 240 may include scheduling algorithms that support priorities and guarantees. Egress packet processing block 240 may support data link layer encapsulation and decapsulation, and/or a variety of higher-level protocols. In other implementations, output ports 250 may send (e.g., may be exit points) and/or receive (e.g., may be entry points) datagrams.
Control unit 260 may be interconnected with input port 210, ingress packet processing block 220, switching mechanism 230, egress packet processing block 240, and output port 250. Control unit 260 may compute forwarding tables, implement routing protocols, and/or run software to configure and manage network device 110. In one implementation, control unit 260 may include a bus 260-1, and bus 260-1 may include a path that allows communication among processor 260-2, memory 260-3, and communication interface 260-4. Processor 260-2 may include a microprocessor or processing logic that may interpret and execute instructions. Memory 260-3 may include a Random Access Memory (RAM), a Read Only Memory (ROM) device, a magnetic and/or optical recording medium and its corresponding drive, and/or another type of static and/or dynamic storage device that may store information and instructions for execution by processor 260-2. Communication interface 260-4 may include any transceiver-like mechanism that enables control unit 260 to communicate with other devices and/or systems.
Network device 110 may perform certain operations as described herein. Network device 110 may perform these operations in response to processor 260-2 executing software instructions contained in a computer-readable medium, such as memory 260-3. A computer-readable medium may be defined as a physical or logical memory device.
The software instructions may be read into memory 260-3 from another computer-readable medium, such as a data storage device, or from another device via communication interface 260-4. The software instructions contained in memory 260-3 may cause processor 260-2 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
Although fig. 2 shows exemplary components of network device 110, in other implementations, network device 110 may include fewer, different, or additional components than illustrated in fig. 2. In other implementations, one or more components of network device 110 may perform tasks performed by one or more other components of network device 110.
Fig. 3 is a diagram illustrating exemplary class of service (CoS) queues for network device 110. For simplicity, it may be assumed that network device 110 defines one CoS for a point-to-point service and another CoS for a multipoint service. In other implementations, there may be more than one CoS for a point-to-point service and/or a multipoint service. As shown in fig. 3, network device 110 may include one or more CoS queues for each link of network device 110. For example, link 120-0 may be associated with CoS queuing system 310-0, link 120-1 may be associated with CoS queuing system 310-1, and link 120-2 may be associated with CoS queuing system 310-2. Each of CoS queuing systems 310-0, 310-1, and 310-2 (collectively, CoS queuing systems 310) may include a separate packet queue, assigned to the respective link, for each network service or for each CoS corresponding to a network service. For example, CoS queuing system 310-0 may include a CoS_PPS packet queue 320-0 assigned to link 120-0 for point-to-point services, and a CoS_MPS packet queue 330-0 assigned to link 120-0 for multipoint services. CoS queuing system 310-1 may include a CoS_PPS packet queue 320-1 assigned to link 120-1 for point-to-point services, and a CoS_MPS packet queue 330-1 assigned to link 120-1 for multipoint services. CoS queuing system 310-2 may include a CoS_PPS packet queue 320-2 assigned to link 120-2 for point-to-point services, and a CoS_MPS packet queue 330-2 assigned to link 120-2 for multipoint services.
Bandwidth may be allocated to CoS_PPS packet buffer queues 320-0, 320-1, and 320-2 (collectively, CoS_PPS packet buffer queues 320) over LAG 300 (e.g., as defined by links 120-0, 120-1, and 120-2) so that point-to-point services can have a minimum guaranteed bandwidth. Bandwidth may likewise be allocated to CoS_MPS packet buffer queues 330-0, 330-1, and 330-2 (collectively, CoS_MPS packet buffer queues 330) over LAG 300 so that multipoint services can have a minimum guaranteed bandwidth.
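The per-link queue structure of FIG. 3 can be sketched as follows (a Python sketch; the link capacity and the 600/400 split between the point-to-point and multipoint guarantees are assumed example values, since the text only requires that each class receive some minimum guaranteed bandwidth):

```python
LINK_CAPACITY = 1000  # assumed per-link capacity, in Mb/s

# One CoS queuing system per member link of LAG 300; each holds a
# CoS_PPS queue (point-to-point) and a CoS_MPS queue (multipoint),
# each with its own guaranteed bandwidth share on that link.
lag_300 = {
    link: {
        "CoS_PPS": {"queue": [], "guaranteed_bw": 600},
        "CoS_MPS": {"queue": [], "guaranteed_bw": 400},
    }
    for link in ("120-0", "120-1", "120-2")
}

# In this example the per-link guarantees together cover the full
# link capacity, so each service class has a predictable minimum.
for queues in lag_300.values():
    assert sum(c["guaranteed_bw"] for c in queues.values()) == LINK_CAPACITY
```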
In one implementation, a point-to-point connection may be identified by a VLAN value in the header, which may allow operation over native Ethernet. In other implementations, a point-to-point connection may be identified by any type of connection identifier (e.g., a generalized Multiprotocol Label Switching (MPLS) label).
Although fig. 3 shows exemplary components of network device 110, in other implementations, network device 110 may contain fewer, different, or additional components than those illustrated in fig. 3. In other implementations, one or more components of network device 110 may perform tasks performed by one or more other components of network device 110.
Fig. 4 is a diagram illustrating an exemplary VLAN allocator 400 of network device 110. In one implementation, a VLAN may be assigned to one or more links (e.g., links 120) of a LAG (e.g., LAG 300). Typical devices do not allow such assignments; they assign a VLAN to the LAG as a whole rather than to particular links within it. As shown in fig. 4, VLAN allocator 400 may assign a VLAN to one or more links in the LAG for redundancy purposes. For example, VLAN allocator 400 may assign VLAN 410 to link 120-0 via CoS queuing system 310-0 and PORT0, and may also assign VLAN 410 to link 120-1 via CoS queuing system 310-1 and PORT1. Traffic from a given VLAN (e.g., VLAN 410) may be sent on the links (e.g., links 120-0 and 120-1) of the LAG to which the VLAN is assigned. Although fig. 4 shows VLAN 410 assigned to two of the three illustrated links 120, in other implementations VLAN 410 may be assigned to any one or more of links 120.
If VLAN allocator 400 assigns a VLAN to a LAG (e.g., a LAG having a predetermined bandwidth), the VLAN may be admitted to a respective queue on the LAG only if the sum of the bandwidths of the active VLANs assigned to that queue does not exceed the bandwidth allocated to that queue multiplied by an oversubscription factor.
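The admission rule just stated can be sketched as follows (a minimal Python sketch; the function name, the bandwidth figures, and the oversubscription factor of 1.25 are assumed for illustration, not values from the text):

```python
def admit_vlan(active_vlan_bw, new_vlan_bw, queue_bw, oversubscription=1.0):
    # Admit the new VLAN only if the sum of the active VLAN bandwidths,
    # including the new VLAN, stays within queue_bw * oversubscription.
    return sum(active_vlan_bw) + new_vlan_bw <= queue_bw * oversubscription

active = [200, 300]  # Mb/s of VLANs already admitted to the queue (assumed)
assert admit_vlan(active, 100, 600)        # 600 <= 600: admitted
assert not admit_vlan(active, 200, 600)    # 700 >  600: rejected
assert admit_vlan(active, 200, 600, 1.25)  # 700 <= 750: admitted when oversubscribed
```

An oversubscription factor above 1.0 trades a stronger bandwidth guarantee for higher link utilization.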
Although fig. 4 shows exemplary components of network device 110, in other implementations, network device 110 may contain fewer, different, or additional components than those illustrated in fig. 4. In other implementations, one or more components of network device 110 may perform tasks performed by one or more other components of network device 110. In further implementations, network device 110 may include features set forth in co-pending application No. 11/949,164 (attorney docket No. 20070050), filed on even date herewith and entitled "networking and routing GROUP S," the disclosure of which is hereby incorporated by reference in its entirety.
Fig. 5 is a functional block diagram illustrating exemplary functional components of the control unit 260. As shown, the control unit 260 may include various functional components, such as a primary path bandwidth allocator 500, a redundant path bandwidth allocator 510, and a bandwidth pool holder 520. Each of the functional components shown in fig. 5 may be interrelated and may be implemented in a management system separate from network device 110.
Primary path bandwidth allocator 500 may receive the bandwidth (B) 530 available for point-to-point VLANs on each link in the LAG, and the number of links (N) 540 in the LAG, at a start time prior to allocating any VLANs. Primary path bandwidth allocator 500 may allocate available bandwidth (B-B/N) on each link for primary path reservation 550, and may provide bandwidth (B/N) 560 on each link to redundant path bandwidth allocator 510. The available bandwidths for primary and redundant path subscriptions may be set to values other than (B-B/N) and (B/N), respectively, and may be allocated such that, for each link, the bandwidths available to the two allocators sum to the link bandwidth (B). When a link is selected as the primary path for a VLAN, bandwidth for that VLAN may be allocated from the pool of available primary bandwidth. The available bandwidth (B-B/N) of primary path reservation 550 for each link may be provided to bandwidth pool holder 520.
Redundant path bandwidth allocator 510 may receive the available bandwidth (B/N) for redundant path reservation 560 on each link in the LAG, and the number of links (N) in the LAG. The available bandwidth (B/N) of redundant path reservation 560 for each link may be provided to bandwidth pool holder 520.
Bandwidth pool holder 520 may receive the available bandwidth (B-B/N) for primary path subscriptions 550 and the available bandwidth (B/N) for redundant path subscriptions 560, and may hold multiple (e.g., N+1) bandwidth pools. In one implementation, bandwidth pool holder 520 may hold the following bandwidth pools for each Link_n:
Link_n_0_Redundancy_available_Bandwidth
Link_n_1_Redundancy_available_Bandwidth
...
Link_n_Primary_available_Bandwidth
...
Link_n_N_Redundancy_available_Bandwidth.
At time "0", if no VLAN is assigned to the link, the bandwidth pools may be initialized as follows:
Link_n_0_Redundancy_available_Bandwidth=B/N
Link_n_1_Redundancy_available_Bandwidth=B/N
...
Link_n_Primary_available_Bandwidth=B-(B/N)
...
Link_n_N_Redundancy_available_Bandwidth=B/N.
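The initialization above can be sketched as follows (a Python sketch; the link names and the value B = 900 are assumed examples, and the flat dictionary layout is an assumption, not the patent's data structure). Each Link_n gets one primary pool of B - B/N plus a B/N redundancy pool for each of the other links:

```python
def init_bandwidth_pools(links, B):
    # For each Link_n: one primary pool of (B - B/N), and one (B/N)
    # redundancy pool per other link in the LAG, as in the listing above.
    N = len(links)
    pools = {}
    for n in links:
        pools[n] = {"Primary_available_Bandwidth": B - B / N}
        for m in links:
            if m != n:
                pools[n][f"Redundancy_available_Bandwidth_for_{m}"] = B / N
    return pools

pools = init_bandwidth_pools(["Link_0", "Link_1", "Link_n"], B=900)
assert pools["Link_n"]["Primary_available_Bandwidth"] == 600      # B - B/N
assert pools["Link_n"]["Redundancy_available_Bandwidth_for_Link_0"] == 300  # B/N
```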
each Link other than Link _ N may have up to (B/N) bandwidth protected on Link _ N. In one implementation, a set of VLANs whose primary links do not include Link _ N may be protected on Link _ N, and each of these links may obtain full (B/N) redundant bandwidth on Link _ N. Thus, the total bandwidth of the protected VLANs on Link _ N may be equal to ((N-1) × B/N). In this example, if it is assumed that the primary Bandwidth on Link _ n is allocated to the VLAN and the bandwidths Link _0_ reduction _ available _ Bandwidth and Link _1_ reduction _ available _ Bandwidth are allocated to protect the VLANs on Link _ n and Link _1, respectively, the status of the Bandwidth pool for Link n in the Bandwidth pool holder 520 may be as follows:
Link_n_0_Redundancy_available_Bandwidth=0
Link_n_1_Redundancy_available_Bandwidth=0
Link_n_2_Redundancy_available_Bandwidth=B/N
...
Link_n_Primary_available_Bandwidth=0
...
Link_n_N_Redundancy_available_Bandwidth=B/N.
If there is no link failure, the point-to-point traffic load on the primary link (e.g., Link_n) may be (B-(B/N)). If a link (e.g., Link_0) fails, the VLANs protected on the primary link (e.g., Link_n) may send traffic on Link_n, and the point-to-point traffic load on Link_n may rise to (B) (e.g., the maximum bandwidth available for point-to-point traffic on Link_n). When VLAN allocator 400 assigns a primary path to a link (e.g., Link_n), it can ensure that the VLAN bandwidth is less than or equal to the primary bandwidth available on that link (e.g., Link_n_Primary_available_Bandwidth). VLAN allocator 400 may also update bandwidth pool holder 520 with this allocation. Bandwidth pool holder 520 may then adjust the available primary bandwidth on Link_n by subtracting the VLAN bandwidth from Link_n_Primary_available_Bandwidth. If VLAN allocator 400 assigns the same VLAN to Link_0 for protection, it can ensure that the VLAN bandwidth is less than or equal to Link_0_n_Redundancy_available_Bandwidth on Link_0. VLAN allocator 400 may also update bandwidth pool holder 520 with this allocation. Bandwidth pool holder 520 may adjust the redundant bandwidth available on Link_0 for Link_n by subtracting the VLAN bandwidth from Link_0_n_Redundancy_available_Bandwidth. As shown in FIG. 5, if VLAN 570 is granted access to Link_n primary bandwidth and to Link_0 redundant bandwidth for Link_n, bandwidth pool holder 520 may reduce the available primary bandwidth on Link_n by the VLAN bandwidth, and reduce the available redundant bandwidth for Link_n on Link_0 by the VLAN bandwidth.
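The bookkeeping just described, debiting the primary pool on one link and the redundancy pool that the backup link holds for it, can be sketched as follows (Python; the dictionary layout, pool names, and numeric values are assumed examples for a 3-link LAG with B = 900, so B/N = 300):

```python
# Assumed pool state before any VLAN is admitted (B = 900, B/N = 300).
pools = {
    "Link_n": {"Primary": 600, "Redundancy_for_Link_0": 300, "Redundancy_for_Link_1": 300},
    "Link_0": {"Primary": 600, "Redundancy_for_Link_n": 300, "Redundancy_for_Link_1": 300},
    "Link_1": {"Primary": 600, "Redundancy_for_Link_n": 300, "Redundancy_for_Link_0": 300},
}

def allocate_vlan(pools, vlan_bw, primary, backup):
    # Admit a VLAN on `primary`, protected on `backup`: the VLAN bandwidth
    # must fit both in the primary pool of `primary` and in the redundancy
    # pool that `backup` holds for `primary`; both pools are then debited.
    red_key = f"Redundancy_for_{primary}"
    if pools[primary]["Primary"] < vlan_bw or pools[backup][red_key] < vlan_bw:
        return False
    pools[primary]["Primary"] -= vlan_bw
    pools[backup][red_key] -= vlan_bw
    return True

assert allocate_vlan(pools, 200, "Link_n", "Link_0")
assert pools["Link_n"]["Primary"] == 400
assert pools["Link_0"]["Redundancy_for_Link_n"] == 100
assert not allocate_vlan(pools, 500, "Link_n", "Link_0")  # exceeds primary pool
```

Rejecting the request before debiting either pool keeps the two pools consistent, which is the admission-control guarantee the text relies on.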
In the above example, for Link_0 and Link_1 traffic protected on Link_n, protection may be provided without overloading the primary link (e.g., Link_n) upon failure of a single link (e.g., failure of Link_0 or Link_1). However, if both links fail (e.g., Link_0 and Link_1 both fail), the bandwidth load on Link_n may be (B+B/N). This example employs an ((N-1):1) link protection scheme for a LAG of (N) links. In other implementations, VLANs may be selectively protected, and redundancy schemes may be applied based on an (X:1) link protection scheme, where (X) may range from "1" to (N-1). In other implementations, the schemes described herein may be applied to primary and redundant paths that are assigned, for a given VLAN, on different links and/or on LAGs terminating on adjacent network devices, which may provide both link and next-hop network device protection. In yet another implementation, protection may be provided on a per-link basis such that, for example, all VLANs whose primary links are Link_0 and Link_1 may be protected on Link_n. If Link_0 fails, protected traffic can be directed from Link_0 to Link_n; if Link_1 fails, traffic from Link_1 can be switched to Link_n, so that in either single-failure case Link_n is not overloaded. There may be various implementations and/or configurations, such as these example variations, that trade off the amount of protection, the amount of link overload that can be tolerated, and the amount of traffic that can be protected.
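The overload arithmetic in this example can be made explicit (a small Python sketch under the example's assumptions; each of X simultaneously failed links whose VLANs are protected on the primary link redirects up to B/N of traffic onto it):

```python
def worst_case_load(B, N, X):
    # Primary load (B - B/N) plus B/N of redirected traffic for each of
    # the X failed links whose VLANs are protected on this primary link.
    return (B - B / N) + X * (B / N)

assert worst_case_load(B=900, N=3, X=1) == 900   # single failure: exactly B
assert worst_case_load(B=900, N=3, X=2) == 1200  # two failures: B + B/N
```

Choosing X in an (X:1) scheme thus bounds the worst-case overload at (X-1)*(B/N) beyond the link bandwidth B.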
In the scheme described herein, protection of VLAN traffic is considered at the VLAN level. However, the scheme can also provide protection at the combined VLAN and CoS level, so that the assignment of VLAN primary and redundant paths to links can be done based on admission control against the bandwidth pools allocated for each CoS on a link. The bandwidth allocated for the primary path may likewise be subdivided across CoS, and bandwidth may be allocated per CoS for the redundant paths.
Although fig. 5 shows exemplary functional components of control unit 260, in other implementations, control unit 260 may contain fewer, different, or additional functional components than illustrated in fig. 5. In other implementations, one or more functional components of control unit 260 may perform tasks performed by one or more other functional components of control unit 260.
Fig. 6-8 illustrate flow charts of example processes for a network (e.g., network 100) and/or a network device (e.g., network device 110). In one implementation, the processes of fig. 6-8 may be performed by hardware and/or software components of a network device, or may be performed by hardware and/or software components of a device that is external to, but in communication with, a network. In other implementations, the processes of fig. 6-8 may be performed by hardware and/or software components of network device 110 (e.g., by control unit 260) and/or one or more devices in network 100.
Fig. 6 illustrates a flow diagram of an exemplary process 600 for allocating a LAG link to a packet buffer queue for a point-to-point service and to another packet buffer queue for a multipoint service. As shown in fig. 6, process 600 may begin by defining a service class for a point-to-point service (block 610) and defining a service class for a multipoint service (block 620). For example, in one implementation described above in connection with fig. 3, network device 110 may define a class of service (CoS) for a point-to-point service and may define another CoS for a multipoint service.
As further shown in fig. 6, links in the LAG may be allocated to a first packet buffer queue for a point-to-point service (block 630). For example, in one implementation described above in connection with FIG. 3, CoS queuing system 310-0 may include a CoS_PPS packet buffer queue 320-0 allocated to link 120-0 for point-to-point services, CoS queuing system 310-1 may include a CoS_PPS packet buffer queue 320-1 allocated to link 120-1 for point-to-point services, and CoS queuing system 310-2 may include a CoS_PPS packet buffer queue 320-2 allocated to link 120-2 for point-to-point services.
For multipoint services, a LAG link may be allocated to the second packet buffer queue (block 640). For example, in one implementation described above in connection with FIG. 3, CoS queuing system 310-0 may include a CoS_MPS packet buffer queue 330-0 allocated to link 120-0 for multipoint services, CoS queuing system 310-1 may include a CoS_MPS packet buffer queue 330-1 allocated to link 120-1 for multipoint services, and CoS queuing system 310-2 may include a CoS_MPS packet buffer queue 330-2 allocated to link 120-2 for multipoint services.
As further shown in fig. 6, bandwidth may be allocated to the first and second packet buffer queues such that the point-to-point and multipoint services have minimum guaranteed bandwidths (block 650). For example, in one implementation described above in connection with FIG. 3, bandwidth may be allocated to CoS_PPS packet buffer queues 320-0, 320-1, and 320-2 over LAG 300 (e.g., as defined by links 120-0, 120-1, and 120-2) so that point-to-point services can have a minimum guaranteed bandwidth, and bandwidth may be allocated to CoS_MPS packet buffer queues 330-0, 330-1, and 330-2 over LAG 300 so that multipoint services can have a minimum guaranteed bandwidth.
Fig. 7 illustrates a flow diagram of an exemplary process 700 for assigning VLANs to one or more links of a LAG. As shown in fig. 7, process 700 may begin by assigning a VLAN to a particular link in a LAG (block 710). For example, in one implementation described above in connection with fig. 4, VLAN allocator 400 may assign a VLAN to one or more links in the LAG for redundancy purposes. In one example, VLAN allocator 400 may assign VLAN 410 to link 120-0 via CoS queuing system 310-0 and PORT0, and may also assign VLAN 410 to link 120-1 via CoS queuing system 310-1 and PORT1.
As further shown in fig. 7, if the VLAN bandwidth does not exceed the bandwidth of the queue, the VLAN may be admitted to the queue corresponding to the assigned LAG link (block 720). For example, in one implementation as described above in connection with fig. 4, if the VLAN allocator 400 allocates VLANs to a LAG having a predetermined bandwidth, the VLANs may be admitted to respective queues on the LAG such that the sum of the bandwidths of the active VLANs allocated to the queues may not exceed the queue bandwidth multiplied by the oversubscription factor.
Traffic may be sent from the VLAN on the assigned LAG link (block 730). For example, in one implementation described above in connection with fig. 4, traffic from a given VLAN (e.g., VLAN 410) may be sent on the links (e.g., links 120-0 and 120-1) of the LAG to which the VLAN is assigned. In other implementations, where traffic from the same VLAN needs to be sent on a single link, traffic from a given VLAN (e.g., VLAN 410) may be sent on one link (e.g., link 120-0 or 120-1) of the LAG to which the VLAN is assigned, where one link may be active (e.g., link 120-0) and the other link may be standby (e.g., link 120-1).
Fig. 8 illustrates a flow diagram of an exemplary process 800 for assigning VLANs to links in a LAG based on an admission control mechanism. As shown in fig. 8, process 800 may begin by receiving the bandwidth (B) available on each link of a LAG that includes multiple links (N) (block 810). For example, in one implementation described above in connection with fig. 5, primary path bandwidth allocator 500 may receive the bandwidth (B) 530 available on each link in the LAG and the number of links (N) 540 in the LAG, at a start time prior to allocating any VLANs.
The primary and redundant LAG links may be assigned to VLANs (block 820). For example, in the implementation described above in connection with fig. 5, primary path bandwidth allocator 500 may allocate primary links in the LAG to VLANs, and redundant path bandwidth allocator 510 may allocate redundant links in the LAG to VLANs allocated to the primary links by primary path bandwidth allocator 500.
As further shown in Fig. 8, the available bandwidth for primary link reservation may be set to (B - B/N) (block 830), and the available bandwidth for redundant link reservation may be set to (B/N) (block 840). For example, in one implementation described above in connection with Fig. 5, primary path bandwidth allocator 500 may allocate the available bandwidth (B - B/N) on each link for primary path (or link) reservation 550. Redundant path bandwidth allocator 510 may allocate the available bandwidth (B/N) on each link in the LAG for redundant path reservation 560. The available bandwidth (B/N) of redundant path reservation 560 for each link may be provided to bandwidth pool holder 520.
Multiple (N+1) bandwidth pools may be maintained (block 850). For example, in one implementation described above in connection with Fig. 5, bandwidth pool holder 520 may receive the available bandwidth (B - B/N) for primary path reservation 550 and the available bandwidth (B/N) for redundant path reservation 560 for each link in the LAG, and may maintain multiple (e.g., N+1) bandwidth pools.
As further shown in Fig. 8, if no link failure has occurred in the LAG (block 860 - NO), a traffic load of (B - B/N) may be set on the primary LAG link (block 870). For example, in the implementation described above in connection with Fig. 5, if there is no link failure, the point-to-point traffic load on the primary link (e.g., Link_N) may be (B - B/N).
If a link failure occurs (block 860 - YES), a traffic load of (B) may be set on the primary LAG link (block 880). For example, in one implementation described above in connection with Fig. 5, if a link (e.g., Link_0) fails, a VLAN protected on the primary link (e.g., Link_N) may send traffic on Link_N, and the point-to-point traffic load set on Link_N may be (B) (e.g., the maximum bandwidth available for point-to-point traffic on Link_N).
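The bandwidth arithmetic of process 800 above can be sketched as follows. This is a hypothetical illustration under stated assumptions, not the patent's implementation: function and pool names are invented, and the example values (B = 10.0, N = 4) are chosen only to make the (B - B/N) and (B/N) reservations and the N+1 pool count concrete.

```python
# Hypothetical sketch of the process 800 arithmetic: with N links of
# bandwidth B each, every primary link carries a reservation of (B - B/N),
# a shared redundant reservation of (B/N) is also maintained, giving N + 1
# bandwidth pools in total. On a link failure, protected traffic fails over
# and a primary link may carry the full bandwidth B.

def build_bandwidth_pools(bandwidth_b, num_links_n):
    """Return N primary pools of (B - B/N) plus one redundant pool of (B/N)."""
    primary = bandwidth_b - bandwidth_b / num_links_n
    redundant = bandwidth_b / num_links_n
    pools = {f"primary_link_{i}": primary for i in range(num_links_n)}
    pools["redundant"] = redundant
    return pools  # N + 1 pools in total

def primary_link_load(bandwidth_b, num_links_n, link_failed):
    """Point-to-point traffic load on a primary link (blocks 870 and 880)."""
    if link_failed:
        return bandwidth_b                           # full B after failover
    return bandwidth_b - bandwidth_b / num_links_n   # (B - B/N) otherwise

pools = build_bandwidth_pools(bandwidth_b=10.0, num_links_n=4)
assert len(pools) == 4 + 1                           # N + 1 bandwidth pools
assert pools["primary_link_0"] == 7.5                # B - B/N = 10 - 2.5
assert primary_link_load(10.0, 4, link_failed=True) == 10.0
```

The reservation of (B/N) on each link is what leaves headroom for failover: if one of the N links fails, its (B - B/N) of protected traffic can be redistributed without oversubscribing the surviving links.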
The systems and methods described herein may guarantee bandwidth for point-to-point services on a LAG in the presence of multipoint services on the LAG. In one implementation, the systems and methods may ensure that point-to-point services can share a LAG with multipoint services while still providing the point-to-point services with predictable performance. In another implementation, the systems and methods may allocate respective point-to-point connections to queues on links of the LAG via a management mechanism and/or via signaling. In other implementations, the systems and methods may receive the bandwidth available on each link of a LAG, may allocate a primary LAG link and a redundant LAG link to a Virtual Local Area Network (VLAN), and may set the available bandwidth for primary and redundant link reservations.
The foregoing description provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention.
For example, while series of acts have been described with regard to the flowcharts of Figs. 6-8, the order of the acts may be different in other implementations. Further, non-dependent acts may be performed in parallel. In another example, while Fig. 5 illustrates tasks performed by functional components of control unit 260 of network device 110, in other implementations the tasks illustrated in Fig. 5 may be performed by other components of network device 110, such as switching mechanism 220. Alternatively, some of the tasks illustrated in Fig. 5 may be performed by another device external to network device 110.
It will be apparent that the embodiments, as described herein, may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement the embodiments described herein is not limiting of the invention. Thus, the operation and behavior of the embodiments were described without reference to the specific software code, it being understood that software and control hardware could be designed to implement the embodiments based on the description herein.
Further, portions of the invention may be implemented as "logic" that performs one or more functions. This logic may include hardware, such as an application specific integrated circuit or a field programmable gate array, software, or a combination of software and hardware.
Even though particular combinations of features are recited in the claims and/or disclosed in the present specification, these combinations are not intended to limit the invention. Indeed, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the present specification.
No element, act, or instruction used in the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, the indefinite article "a" or "an" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. Further, unless expressly stated otherwise, the phrase "based on" is intended to mean "based, at least in part, on".
Claims (20)
1. A method for controlling bandwidth grants of a link aggregation group, the method comprising:
receiving bandwidth (B) available on each link of a Link Aggregation Group (LAG) comprising a plurality (N) of links;
allocating a primary LAG link and a redundant LAG link to a Virtual Local Area Network (VLAN);
setting available bandwidth for primary link reservation to (B-B/N) on each link of the LAG; and
setting available bandwidth for redundant link reservation to (B/N) on each link of the LAG.
2. The method of claim 1, further comprising:
maintaining (N+1) bandwidth pools associated with the primary and redundant LAG links.
3. The method of claim 1, further comprising:
setting a traffic load (B-B/N) on the primary LAG link if there is no link failure for the LAG.
4. The method of claim 1, further comprising:
setting a traffic load (B) on the primary LAG link if there is a link failure for the LAG.
5. A method for controlling bandwidth grants of a link aggregation group, the method comprising:
associating links in a Link Aggregation Group (LAG) with a first packet buffer queue for a point-to-point service;
associating the LAG link with a second packet buffer queue for a multipoint service;
allocating bandwidth to the first and second packet buffer queues;
receiving bandwidth (B) available on each link of the LAG comprising a plurality (N) of links;
allocating a primary LAG link and a redundant LAG link to a Virtual Local Area Network (VLAN);
setting available bandwidth for primary link reservation to (B-B/N); and
the available bandwidth for redundant link reservation is set to (B/N).
6. The method of claim 5, further comprising:
maintaining (N+1) bandwidth pools associated with the primary and redundant LAG links.
7. The method of claim 5, further comprising:
setting a traffic load (B-B/N) on the primary LAG link if there is no link failure for the LAG.
8. The method of claim 5, further comprising:
setting a traffic load (B) on the primary LAG link if there is a link failure for the LAG.
9. The method of claim 5, further comprising:
defining a service class for the point-to-point service.
10. The method of claim 5, further comprising:
defining a service class for the multipoint service.
11. A system for controlling bandwidth grants of a link aggregation group, the system comprising:
means for associating a link in a Link Aggregation Group (LAG) with a first packet buffer queue for a point-to-point service,
means for associating the LAG link with a second packet buffer queue for a multipoint service,
means for allocating bandwidth to said first and second packet buffer queues, and
means for receiving bandwidth (B) available on each LAG link, the LAG including a plurality (N) of links,
means for assigning a primary LAG link and a redundant LAG link to a Virtual Local Area Network (VLAN),
means for setting available bandwidth for primary link reservation to (B-B/N), and
means for setting available bandwidth for redundant link reservation to (B/N).
12. The system of claim 11, further comprising:
means for maintaining (N+1) bandwidth pools associated with the primary LAG link and the redundant LAG link.
13. The system of claim 11, further comprising:
means for setting a traffic load (B-B/N) on the primary LAG link if there is no link failure for the LAG.
14. The system of claim 11, further comprising:
means for setting a traffic load (B) on the primary LAG link if there is a link failure on the LAG.
15. A system for controlling bandwidth grants of a link aggregation group, the system comprising:
means for receiving bandwidth (B) available on each link of a Link Aggregation Group (LAG) comprising a plurality (N) of links;
means for assigning a primary LAG link and a redundant LAG link to a Virtual Local Area Network (VLAN);
means for setting available bandwidth for primary link reservation to (B-B/N); and
means for setting available bandwidth for redundant link reservation to (B/N).
16. The system of claim 15, further comprising:
means for maintaining (N+1) bandwidth pools associated with the primary LAG link and the redundant LAG link.
17. The system of claim 15, further comprising:
means for setting a traffic load (B-B/N) on the primary LAG link if there is no link failure for the LAG.
18. The system of claim 15, further comprising:
means for setting a traffic load (B) on the primary LAG link if there is a link failure for the LAG.
19. The system of claim 15, further comprising:
means for associating a link in the LAG with a first packet buffer queue for a point-to-point service;
means for associating the LAG link with a second packet buffer queue for a multipoint service; and
means for allocating bandwidth to the first and second packet buffer queues.
20. The system of claim 19, further comprising:
means for defining a service class for the point-to-point service; and
means for defining a service class for the multipoint service.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US11/949,249 (US8284654B2) | 2007-12-03 | 2007-12-03 | Bandwidth admission control on link aggregation groups |
| PCT/US2008/084141 (WO2009073375A1) | 2007-12-03 | 2008-11-20 | Bandwidth admission control on link aggregation groups |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1144233A1 (en) | 2011-02-02 |
| HK1144233B (en) | 2013-02-01 |
Similar Documents
| Publication | Title |
|---|---|
| US8284654B2 (en) | Bandwidth admission control on link aggregation groups |
| CN101843045B (en) | Pinning and protection on link aggregation groups |
| EP3949293B1 (en) | Slice-based routing |
| US7974207B2 (en) | Bandwidth-based admission control mechanism |
| US9590914B2 (en) | Randomized per-packet port channel load balancing |
| US8630171B2 (en) | Policing virtual connections |
| JP3714238B2 (en) | Network transfer system and transfer method |
| US9667570B2 (en) | Fabric extra traffic |
| US20030067653A1 (en) | System and method for slot deflection routing |
| JP7288980B2 (en) | Quality of Service in Virtual Service Networks |
| EP2074770A2 (en) | Link aggregation |
| US10461873B1 (en) | Disaggregated hybrid optical transport network, internet protocol, and Ethernet switching system |
| US20050243852A1 (en) | Variable packet-size backplanes for switching and routing systems |
| US20070268825A1 (en) | Fine-grain fairness in a hierarchical switched system |
| CN106716940A (en) | Allocating capacity of a network connection to data streams based on type |
| HK1144233B (en) | Bandwidth admission control on link aggregation groups |
| HK1143676B (en) | Pinning and protection on link aggregation groups |
| US8295172B1 (en) | Network device traffic class mapping function |
| Douglas et al. | Harmonia: Tenant-provider cooperation for work-conserving bandwidth guarantees |
| Figueira et al. | New World Campus Networking |
| Gabler | Better bonded ethernet load balancing |