
HK1050970A - Method and system for controlling flows in sub-pipes of computer networks - Google Patents

Method and system for controlling flows in sub-pipes of computer networks

Info

Publication number
HK1050970A
HK1050970A (application no. HK03103170.3A)
Authority
HK
Hong Kong
Prior art keywords
sub
pipe
traffic
pipes
packets
Prior art date
Application number
HK03103170.3A
Other languages
Chinese (zh)
Inventor
布拉赫马南德‧库马尔‧高尔迪
黄东明
克拉克‧德布斯‧杰弗里斯
迈克尔‧斯蒂芬‧西格尔
卡迪克‧苏迪普
Original Assignee
International Business Machines Corporation (国际商业机器公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corporation
Publication of HK1050970A


Description

Method and system for controlling flow in a sub-pipe of a computer network
Technical Field
The present invention relates to computer networks, and more particularly to a method and system for providing differentiated services and finer-grained traffic control in computer networks.
Background
Computer networks are attracting increasing interest due to the growing use of network applications, such as those involving the internet. Fig. 1 illustrates conventional networks 10 and 30 connected via the internet 1. Networks 10 and 30 include hosts 11, 12, 17, 18, 21, 22 and 32, 34, respectively. Network 10 also includes switches 14, 16 and 19, while network 30 includes switch 36, and each network may include one or more servers, such as servers 15, 20, 24 and 38. In addition, each network 10 and 30 may include one or more gateways 13 and 35, respectively, connecting to the internet 1. Routers and other portions of networks 10 and 30, not shown in the figure, may also control traffic through the networks 10 and 30 and are considered to be inherently described by the switches 14, 16, 19 and 36 and the networks 10 and 30. In addition, the internet 1 comprises its own switches and networks, which are not explicitly shown either.
Switches such as switch 14 or 36, and the switches (not shown) in the internet 1 that connect to them, are considered to be at the edge of the network 10, the network 30 or the internet 1, respectively. This is because these switches send and/or receive traffic directly to and/or from entities that are not under the direct control of the network 10, the network 30 or the internet 1. For example, a network administrator or other user of network 10 may control the parameter settings used in configuring and running network 10. However, a typical network administrator controls neither the internet 1 nor the network 30. The switch 14 is directly connected to the gateway 13, which provides access to the internet 1, while switches 16 and 19 are not; switch 14 is therefore considered to be at the edge of network 10. Similarly, a switch (not shown) of the internet 1 also interfaces with the gateway 13. A network administrator or other user may control some or all of the operation of the internet 1 but not the corresponding portions of the network 10. Such a switch would be considered to be at the edge of the internet 1.
Fig. 2 depicts a high-level block diagram of a switch 40 that can be used in a computer network. Thus, switch 40 can be used in the internet 1 and for the switches 14, 16, 19 and 36. Switch 40 includes a switch fabric 41 connected to blades 47, 48 and 49. Each blade 47, 48 and 49 is typically a circuit board and includes at least one network processor 42 connected to ports 44. The ports 44 may be connected to hosts or other components in the network in which the switch is located. The blades 47, 48 and 49 provide traffic to and receive traffic from the switch fabric 41. Thus, any component connected to one of the blades 47, 48 or 49 can communicate with a component connected to another blade, or with another component connected to the same blade.
Fig. 3A depicts another simplified block diagram of the switch 40, illustrating some of the functions performed by network processors 51 and 55. The switch 40 connects components attached to port A 52 with components attached to port B 76. The switch 40 performs various functions, including classification of data packets entering the switch 40, transmission of data packets through the switch 40, and reassembly of data packets. These functions are performed by the classifier 58, the switch fabric 64 and the reassembler 70, respectively. The classifier 58 classifies incoming packets and divides each packet into segments of suitable length, called cells. The switch fabric 64 is a connection matrix through which the cells are transported along their respective paths through the switch 40. The reassembler 70 reassembles the cells into the proper packets. The packets may then be provided to the appropriate port B 76 and output to the destination host. The classifier 58 may be part of the network processor 51, while the reassembler 70 may be part of another network processor 55. The network processors 51 and 55 shown perform the functions of receiving traffic at port A 52 and outputting traffic from port B 76, respectively; they also perform the corresponding functions for traffic flowing in the opposite direction. Thus, each of the network processors 51 and 55 can perform both classification and reassembly.
Due to bottlenecks in transporting traffic through the switch 40, data packets may have to wait for the classification, transmission and reassembly functions to be performed. As a result, queues 56, 62, 68 and 74 are used. Coupled to the queues 56, 62, 68 and 74 are queuing mechanisms 54, 60, 66 and 72, respectively. The queuing mechanisms 54, 60, 66 and 72 place packets or cells into the corresponding queues 56, 62, 68 and 74 and may provide notification back to the host that generated the packet.
Although the queues 56, 62, 68 and 74 are depicted separately, one of ordinary skill in the art will readily recognize that some or all of them may be part of the same physical memory resource. Fig. 3B depicts such a switch 40'. Many of the components of switch 40' are similar to those of switch 40 and are therefore labeled similarly. For example, port A 52' in switch 40' corresponds to port A 52 in switch 40. In switch 40', the queues 56' and 62' share a single memory resource 59. Similarly, the queues 68' and 74' are part of another memory resource 71. Thus, in switch 40', the queues 56', 62', 68' and 74' are logical queues allocated from the memory resources 59 and 71.
Currently, most conventional switches 40 treat all traffic passing through the network in the same way. However, there is a trend toward providing different services to different customers, for example based on the fees the customers pay. Some customers may be willing to pay more to ensure faster responses, or to ensure that their traffic is delivered even when other traffic is dropped due to congestion. The concept of differentiated services has therefore been developed. Differentiated services can provide different levels of service, or different treatment of traffic flowing through the network, for different customers.
DiffServ is an Internet Engineering Task Force (IETF) standard developed for providing differentiated services (see IETF RFC 2475 and related RFCs). DiffServ aggregates flows on the basis of behavior. A behavior aggregate flow may be viewed as a pipe from one edge of the network to another. Within each behavior aggregate flow, there may be hundreds of sessions between individual hosts. However, DiffServ is not concerned with the individual sessions within a behavior aggregate flow. Instead, DiffServ is concerned with the allocation of bandwidth between the behavior aggregate flows. Under DiffServ, excess bandwidth is to be distributed fairly among the behavior aggregate flows. Furthermore, DiffServ provides criteria for measuring the level of service provided to each behavior aggregate flow, as discussed below.
One common mechanism for providing different levels of service uses a combination of weights and queue levels, as in products provided by Cisco Systems, Inc. of San Jose, Calif. Fig. 4 depicts such a conventional method 80. Queue level thresholds and weights are set, via step 82. Typically, in step 82 the network administrator sets the queue level thresholds by trial and error ("tuning"). Weights may be set for the different pipes, or flows, into a queue by the switch 40 or the network processor 42. The weights are thus typically set for different behavior aggregate flows. The queue level is observed at the end of a time period, via step 84. The pipe flows are then changed based on a comparison of the queue level to the queue level threshold, and on the weights, via step 86. A pipe flow with a higher weight undergoes a larger change in step 86. The pipe flows determine the amount of traffic provided to a queue, such as queue 56, being transmitted into the queue by the corresponding queuing mechanism, such as queuing mechanism 54. Traffic is then transmitted to the queues or dropped based on the pipe flows, via step 88. The network administrator then determines whether the desired level of service has been achieved, via step 90. If so, congestion avoidance is complete. If not, the queue level thresholds and weights may be reset and the method 80 repeated, via step 82. Information on the conventional method 80 may be found at http://www.cisco.com/univercd/cc/td/doc/product/software/ios120/12cqcr/qosc/qcpart2/qcconman.htm.
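The weighted queue-threshold adjustment of the conventional method 80 can be sketched as follows. This is a hypothetical illustration only: the function name, the step size, and the specific update rule are assumptions, not taken from the patent or from any Cisco product documentation.

```python
# Hypothetical sketch of the conventional weighted queue-threshold
# control of method 80. Names and numeric choices are illustrative
# assumptions, not the patent's or any vendor's actual algorithm.

def adjust_pipe_flows(flows, weights, queue_level, threshold, step=0.1):
    """Nudge each pipe's allowed flow based on one queue-level reading.

    If the observed queue level exceeds its threshold, every pipe is
    throttled; otherwise every pipe is allowed more traffic. Pipes with
    higher weights change by proportionally larger amounts (step 86).
    """
    adjusted = {}
    for pipe, flow in flows.items():
        delta = step * weights[pipe] * flow
        if queue_level > threshold:
            adjusted[pipe] = max(0.0, flow - delta)   # congested: cut back
        else:
            adjusted[pipe] = flow + delta             # room left: grow
    return adjusted

flows = {"gold": 40.0, "bronze": 40.0}
weights = {"gold": 2.0, "bronze": 1.0}   # "gold" changes twice as fast
after = adjust_pipe_flows(flows, weights, queue_level=0.9, threshold=0.5)
```

Note how the administrator's difficulty, described next, shows up here: the effect of `threshold` and `step` on the resulting pipe rates is indirect, so the values must be found by repeated experiment.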
Although the conventional method 80 functions, one of ordinary skill in the art will readily recognize that it is difficult for a network administrator to determine the impact that changing a queue level threshold will have on a particular pipe. Thus, a network administrator using the method 80 must typically experiment a great deal before arriving at the desired traffic rates for the different customers, or pipes (behavior aggregate flows), in a computer network.
Moreover, the method 80 operates only indirectly on the parameters typically used to measure quality of service. Queue level is not a typical measure of service. Typically, for example in DiffServ (see IETF RFC 2475 and related RFCs), the level of service is measured by four parameters: drop rate, bandwidth, latency and jitter. The drop rate is the percentage of traffic that is dropped as it passes through a switch. The bandwidth of a behavior aggregate flow is a measure of the total amount of the flow's traffic that passes through the switch and reaches its destination. Latency is the delay incurred in sending traffic through the network. Jitter is the variation in latency over time. Queue level is not considered a direct measure of any of these quality of service criteria. Thus, the method 80 does not directly address the criteria for quality of service, making it still more difficult for a network administrator to provide differentiated services using the method 80. Additionally, the method 80 performs flow control only for behavior aggregate flows; it does not control traffic at any finer granularity.
Other conventional methods control traffic using combinations of flows, minimum flow rates, weights, priorities, thresholds and a signal indicating the presence of excess bandwidth, or excess capacity to transmit traffic. However, these conventional methods are based on intuition: they may seem plausible, but they have no underlying support from control theory. It is not clear whether such a conventional method constitutes a stable mechanism for controlling traffic through the switch. Thus, these conventional approaches may be unable to adequately control traffic through the switch 40.
In addition, another IETF proposal is IntServ, for Integrated Services. In IntServ, every flow in the network is controlled at all times. IntServ thus proposes controlling each individual flow in each behavior aggregate flow, or pipe, in the network. As a result, IntServ could be used to provide differentiated services. However, it is impractical to control every flow at all times. In particular, as networks and the amount of traffic through them grow, the number of flows in a network increases rapidly. For most networks, controlling every flow at all times would consume significant resources and would be difficult to accomplish. Therefore, IntServ also cannot adequately control traffic through the network.
Therefore, what is needed is a system and method for providing differentiated services and for controlling traffic at a finer granularity. The present invention addresses such a need.
Disclosure of Invention
The present invention provides a method and system for controlling a plurality of sub-pipes in a computer network. The computer network includes at least one switch, and the plurality of sub-pipes carry traffic through the network via the switch. The method and system include allowing a minimum flow to be set for each of the plurality of sub-pipes and determining whether congestion exists in the pipe. The method and system further include controlling the flows in the plurality of sub-pipes only when congestion exists. The flow control can linearly increase the flow of a sub-pipe when that sub-pipe's flow is less than its minimum flow, and can exponentially decrease the flow of a sub-pipe when its flow is greater than the minimum flow. Traffic through the switch is thereby stabilized.
According to the system and method disclosed herein, the present invention provides a stable, practical mechanism for controlling traffic through a network at a finer granularity while providing differentiated services. In addition, the present invention can control traffic at the edge of the network, allowing redundant control within the network to be eliminated.
Drawings
Preferred embodiments of the present invention will be described with reference to the following drawings:
FIG. 1 is a block diagram of a computer network according to the prior art;
FIG. 2 is a high level block diagram of a switch according to the prior art;
FIG. 3A is a simplified block diagram of a switch according to the prior art;
FIG. 3B is a simplified block diagram of another switch according to the prior art;
FIG. 4 is a flow chart depicting a conventional method of providing different levels of service through a switch according to the prior art;
FIG. 5 is a diagram depicting a network in which the present invention may be used;
FIG. 6 is a high level flow chart depicting a method in accordance with the present invention for controlling traffic and providing different levels of service in a sub-pipe;
FIGS. 7A and 7B depict a more detailed flow chart of a method in accordance with the present invention for controlling traffic and providing different levels of service in a sub-pipe;
FIG. 8 is a detailed flow chart depicting a first embodiment of a method in accordance with the present invention for determining whether congestion exists in a pipe;
FIG. 9 is a detailed flow chart depicting a second embodiment of a method in accordance with the present invention for determining whether congestion exists in a pipe;
FIG. 10 is a detailed flow chart depicting a third embodiment of a method in accordance with the present invention for determining whether congestion exists in a pipe; and
FIG. 11 is a detailed flow chart depicting a fourth embodiment of a method in accordance with the present invention for determining whether congestion exists in a pipe;
Detailed Description
The present invention will be described in terms of particular systems and particular components. However, one of ordinary skill in the art will readily recognize that the method and system operate effectively for other components in a computer network. The invention will also be described in the context of queues. However, one of ordinary skill in the art will readily recognize that the present invention operates effectively whether the queues are logical queues that are part of a single memory resource or queues that each occupy a separate memory resource. Furthermore, the present invention operates similarly when controlling traffic into sub-queues of a particular logical queue. The present invention will also be discussed in terms of controlling network traffic by actively discarding packets. However, one of ordinary skill in the art will readily recognize that the method and system operate equally well through other mechanisms that control the rate at which packets arrive at a queue, such as a signal sent to the sources indicating the fraction of packets that should be sent. Control of a transmit fraction is thus analogous, for example, to control of the rate at which packets are provided by one or more sources. Furthermore, the invention will be described in terms of pipes and sub-pipes into a queue. However, one of ordinary skill in the art will readily recognize that the pipes could be behavior aggregate flows of different or the same class, or any rates provided for particular components that utilize the queue for storage. The present invention will also be described in the context of controlling the sub-pipes of a single pipe. However, one of ordinary skill in the art will readily recognize that the present invention can also be used to control flows in the sub-pipes of a plurality of pipes.
The method and system in accordance with the present invention can be understood in more detail with reference to Fig. 5, which depicts preferred embodiment networks 10' and 30' and internet 1', in which the present invention can be used. The networks 10' and 30' and the internet 1' are substantially the same as the networks 10 and 30 and the internet 1 of Fig. 1. However, control points 25, 26, 27 and 37 are also shown. The control points 25, 26, 27 and 37 are preferably general purpose computers coupled with the switches 14', 16', 19' and 36'. The control points 25, 26, 27 and 37 preferably perform functions related to the matrix and bit selection used in determining the filter rules, discussed below. The switches 14', 16', 19' and 36' preferably include software-managed decision trees (not shown in Fig. 5), discussed below, that are used to determine whether a key matches one or more filter rules. In addition, the switches 14', 16', 19' and 36' are preferably the same as the switches 40 and 40' depicted in Figs. 2, 3A and 3B. Although the networks 10' and 30' and the internet 1' are preferred, the present invention may also be used in conjunction with the networks 10 and 30 and the internet 1.
The method in accordance with the present invention can be understood in more detail with reference to Fig. 6, which depicts one embodiment of a method 100 in accordance with the present invention. The method 100 is preferably accomplished using the apparatus disclosed in U.S. patent application No. 09/384,691, filed on 8/27/1999, entitled "Network Processing Complex and Methods", which is assigned to the assignee of the present application.
The method 100 can be used with the switch 40 or 40' shown in Figs. 2, 3A and 3B. Thus, the method 100 can be carried out in a switch having multiple blades 47, 48 and 49 and multiple ports per blade. For clarity, the method 100 will be explained in conjunction with the queue 74 and queuing mechanism 72 of Fig. 3A. However, the method 100 can be used with other queues, such as queues 56, 62, 68, 56', 62', 68' and 74'. The method 100 can also be used with other queuing mechanisms, such as queuing mechanisms 54, 60, 66, 54', 60', 66' and 72'. In a preferred embodiment, the method 100 is used in a system in which multiple queues are part of the same memory resource. However, the method 100 can also be used in a system in which each queue has an independent memory resource. The method 100 is preferably used in a switch located at the edge of a network, such as the switch 14 or 14' depicted in Figs. 1 and 5. Indeed, the method 100 is preferably used only in switches at the edge of the network, and not in switches in the interior of the network. Additionally, although the method 100 is preferably used to control the sub-pipes feeding a queue and is carried out using a queuing mechanism, nothing prevents its use by other parts of the switch.
The method 100 is used to control traffic in the sub-pipes of a network. Each sub-pipe is a portion of a pipe, and the pipes flow through the network. Thus, a pipe of a particular network may be considered to terminate at the edge of that network. Preferably, each pipe is a behavior aggregate flow. A sub-pipe may comprise a single flow or a combination of flows within the pipe. Thus, a sub-pipe may also be considered to terminate at the network edge. In other words, the pipes and sub-pipes are defined within the network. The method 100 is also preferably used only in switches at the edge of the network, where the pipes and sub-pipes terminate. However, the method 100 could also be used in a switch in the interior of the network.
Referring to Figs. 3A and 6, a minimum flow is set, via step 102, for each sub-pipe of a particular pipe that provides traffic to the switch 40. The minimum flow set in step 102 is preferably the minimum flow guaranteed to be available to the sub-pipe; it corresponds to an amount of bandwidth that the sub-pipe is always allowed to consume. The minimum flows are therefore preferably set such that, if each sub-pipe carried exactly its minimum flow, the pipe would not be congested and traffic would pass through the switch and the network with the desired parameters. When the traffic offered to a sub-pipe is small, the sub-pipe may carry less than its minimum flow. A sub-pipe may also be allowed to carry more traffic than its minimum flow. Also in a preferred embodiment, a maximum possible flow may be set, for example to guard against denial of service attacks. In the preferred embodiment, a minimum flow is set for each sub-pipe entering a given queue, such as queue 74. In addition, each sub-pipe offers traffic to the queue 74 at a rate I_i(t), where i denotes the i-th sub-pipe. Depending on various factors, some of this traffic may be dropped rather than placed in the queue. The transmit fraction of the i-th sub-pipe, T_i(t), is the fraction of the traffic from the i-th sub-pipe that is transmitted into the queue 74. Thus, the instantaneous flow of the i-th sub-pipe is f_i(t) = I_i(t) * T_i(t).
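The relation between offered rate, transmit fraction and instantaneous flow can be stated in a few lines. The numeric rates below are made-up example values; only the relation f_i(t) = I_i(t) * T_i(t) comes from the text.

```python
# Minimal sketch of the quantities defined above. The offered rates and
# transmit fractions are illustrative values, not from the patent.

offered = [10.0, 20.0, 5.0]     # I_i(t): rate offered by each sub-pipe
transmit = [1.0, 0.5, 0.8]      # T_i(t): fraction actually admitted

# f_i(t): instantaneous flow of each sub-pipe into queue 74
flows = [I * T for I, T in zip(offered, transmit)]
```

For instance, the second sub-pipe offers traffic at rate 20.0 but, with a transmit fraction of 0.5, contributes an instantaneous flow of only 10.0 to the queue.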
A determination is made whether the pipe whose sub-pipes are being controlled is congested, via step 104. A pipe may be determined to be congested based on a variety of factors. These factors are typically estimated from the packets or traffic flowing through the pipe. Note that "packet" as used herein refers to various types of data units including, but not limited to, Ethernet packets (often called frames), ATM packets (often called cells), and IP packets. Some factors for determining whether a pipe is congested include the time it takes for a packet to travel from the sender to the receiver and for the receiver to send back an acknowledgement, the number of synchronization packets, the round trip time (RTT), and Explicit Congestion Notification (ECN) marks that may be carried by the packets, discussed below with reference to Figs. 8-11. Referring back to Fig. 6, in general a pipe may be considered congested when it carries enough traffic that packets are delayed or dropped at a rate deemed unacceptable by a user, such as a network administrator.
If it is determined that congestion is not present, the flows in the sub-pipes are not controlled, via step 108. Thus, the switch 40 need not expend resources controlling traffic in the sub-pipes when the performance of the pipe is acceptable. However, if it is determined that congestion exists, then the flows in the sub-pipes are controlled for a particular time T, via step 106. The time T over which the sub-pipes are controlled is typically many times (perhaps a thousand or a million times) the time increment Dt used to update the transmit fractions. During T, whenever the congestion signal currently indicates congestion, the flows in those sub-pipes above their guaranteed minimum rates decrease exponentially for as long as the congestion persists and the flows remain above the guaranteed minimums. Note that T differs from T_i: T is the time interval over which the flows in the sub-pipes are controlled, while T_i is the transmit fraction of a particular sub-pipe. In a preferred embodiment, step 106 includes exponentially decreasing the flows of the sub-pipes that exceed the corresponding minimum flows set in step 102. Thus, a sub-pipe's flow decreases in proportion to its current value. Also in a preferred embodiment, step 106 may include holding constant, at least initially, the flows of the sub-pipes whose flow is less than or equal to the corresponding minimum flow. However, during the time interval T, if congestion is not currently indicated, the flows of those sub-pipes below their maximum rates are allowed to increase linearly.
In particular, as the sub-pipes continue to be controlled, sub-pipes whose flows are below their set minimums are held constant or allowed to increase linearly, sub-pipes above their maximum flows (if set) are forced to decrease, and sub-pipes whose flows lie between the minimum and the maximum are either allowed to increase linearly or forced to decrease exponentially. This holds as long as the flows continue to be controlled in step 106. Because the flows either decrease exponentially or increase linearly, the sub-pipe flows are controlled in a stable manner. Step 106 preferably includes setting the transmit fraction for each controlled sub-pipe; the transmit fraction is then used to control the number of packets from the sub-pipe that enter the queue.
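One way the linear-increase/exponential-decrease rule just described might look per update interval Dt is sketched below. The constants and the exact update form are illustrative assumptions; the patent states only the qualitative behavior (linear increase, proportional decrease, protection of flows at or below the minimum).

```python
# Hedged sketch of the per-sub-pipe transmit-fraction update described
# above: linear increase when congestion is absent, exponential
# (multiplicative) decrease when congestion is present and the flow
# exceeds the guaranteed minimum. C_i and D_i values are assumptions.

def update_transmit_fraction(T_i, f_i, f_min, f_max, congested,
                             C_i=0.05, D_i=0.25):
    if f_i <= f_min:
        # At or below the guaranteed minimum: hold (if congested) or grow.
        return T_i if congested else min(1.0, T_i + C_i)
    if f_max is not None and f_i > f_max:
        return T_i * (1.0 - D_i)    # above the maximum: force down
    if congested:
        return T_i * (1.0 - D_i)    # exponential decrease, proportional to T_i
    return min(1.0, T_i + C_i)      # linear increase

# A congested interval drives an over-minimum sub-pipe down multiplicatively:
T = 0.8
T = update_transmit_fraction(T, f_i=12.0, f_min=5.0, f_max=None, congested=True)
```

The multiplicative decrease and additive increase are what make the control stable: a large flow is cut back in proportion to its size, while growth is bounded by the fixed increment C_i.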
Thus, the method 100 controls traffic in the sub-pipes only when the pipe is congested. As a result, the method 100 does not needlessly consume resources when congestion is absent and traffic is flowing through the network as desired. Moreover, because the method 100 linearly increases and exponentially decreases the flows in the sub-pipes, the flow control of the sub-pipes is stable. The method 100 is also preferably performed only at the edge of a network, for example in the switch 14' of the network 10' depicted in Fig. 5. The pipes and sub-pipes, however, flow through the entire network 10'. Thus, at the network edge, the method 100 of Fig. 6 controls congestion over the network's entire pipes by controlling the sub-pipes. The method 100 need not be performed at other points in the network, so computationally expensive, redundant control of the sub-pipes can be avoided. Nonetheless, even when other mechanisms operate in the core and/or at the edges of the network, the method 100 can be used in conjunction with those mechanisms to control traffic in the network.
Figs. 7A and 7B depict a more detailed flow chart of one embodiment of a method 110 for controlling flows in sub-pipes in accordance with the present invention. The method 110 preferably begins after the minimum flows for the controlled sub-pipes have been set.
The method 110 will be described in the context of controlling the flows in the sub-pipes of a single pipe. However, one of ordinary skill in the art will readily recognize that the method 110 can be extended to multiple pipes. Additionally, the method 110 is similar to the method 100 and can thus be carried out using the same apparatus. After the minimum flows for the sub-pipes have been set, constants are calculated for each sub-pipe based on the minimum and (if set) maximum flows of the sub-pipes, via step 112. For each sub-pipe i, a constant C_i and a constant D_i are calculated in step 112. The constant C_i is used to linearly increase the flow of sub-pipe i, as discussed below. Similarly, the constant D_i is used to exponentially decrease the flow of sub-pipe i, as discussed below. In a preferred embodiment, the constants C_i and D_i are based on the minimum flows. In another embodiment, weights may be provided for the different sub-pipes, in which case the constants C_i and D_i may also be calculated based on the weights provided.
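A hypothetical sketch of step 112 follows. The specific proportionality rules (C_i growing and D_i shrinking with a sub-pipe's weighted minimum flow) are assumptions made for illustration; the text says only that C_i and D_i are based on the minimum flows and, optionally, on per-sub-pipe weights.

```python
# Hypothetical sketch of step 112: deriving the per-sub-pipe constants
# C_i (linear increase) and D_i (exponential decrease) from the minimum
# flows and optional weights. The formulas are illustrative assumptions.

def compute_constants(min_flows, weights=None, c_scale=0.01, d_scale=0.1):
    n = len(min_flows)
    weights = weights or [1.0] * n
    total = sum(m * w for m, w in zip(min_flows, weights))
    # Larger guaranteed/weighted share -> faster linear growth (bigger C_i)
    C = [c_scale * (m * w) / total for m, w in zip(min_flows, weights)]
    # Larger share -> gentler multiplicative cut (D_i used as a decrease rate)
    D = [d_scale * total / (m * w) for m, w in zip(min_flows, weights)]
    return C, D

C, D = compute_constants([5.0, 15.0])
```

Under these assumed rules, a sub-pipe with three times the guaranteed minimum receives three times the linear-increase constant and one third the decrease rate.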
Once the constants are determined, traffic is allowed to flow through the sub-pipes to the queue 74, via step 114. It is then determined whether the pipe is congested, via step 116. In a preferred embodiment, step 116 is performed using a pipe congestion level (PCL) signal for each monitored pipe. The PCL of a pipe lies between 0 and 1; the closer the PCL is to 1, the more congested the pipe. Thus, a value between 0 and 1 may be chosen as the congestion threshold. Step 116 determines the PCL of the pipe. If the PCL is less than or equal to the threshold, step 116 indicates that congestion is not present. If the PCL is greater than the threshold, step 116 determines that congestion exists in the pipe. If there is no congestion, it is determined whether the time period over which control may be desired has ended, via step 134. Thus, if there is no congestion, the sub-pipe flows need not be controlled.
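The congestion test of step 116 reduces to a single comparison of the PCL against the chosen threshold. The threshold value here is an arbitrary example; the text says only that it lies between 0 and 1.

```python
# Sketch of step 116's congestion test using the pipe congestion level
# (PCL) described above. The threshold value 0.7 is an assumption.

PCL_THRESHOLD = 0.7   # chosen between 0 and 1, e.g. by the administrator

def pipe_is_congested(pcl, threshold=PCL_THRESHOLD):
    """PCL <= threshold: no congestion; PCL > threshold: congestion."""
    return pcl > threshold

results = [pipe_is_congested(0.3), pipe_is_congested(0.9)]
```

A PCL exactly at the threshold is, per the text, treated as not congested, so sub-pipe control is triggered only strictly above it.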
If congestion is determined to exist in step 116, then the flows in the sub-pipes are controlled through steps 118 to 132. If not already available, the queue level for the previous time period, an instantaneous extra bandwidth signal B, represented by a binary value of 0 or 1, and an extra bandwidth value E, represented by a value between 0 and 1, are determined, via step 118. In the preferred embodiment, the sub-pipe flows for the previous time period are also made available in step 118. In other words, the quantities needed to update the system in the method 110 are determined in step 118. In a preferred embodiment, step 118 is generally unnecessary because these quantities were determined by the method 110 in the previous time period. The queue level is the level of the queue into which the sub-pipes flow, and may be expressed as a fraction of the maximum queue level; for example, it may be the fraction of the queue 74 of Fig. 3A that is occupied. Referring back to Figs. 7A and 7B, in the preferred embodiment the queue level determined is the level of the entire memory resource. However, nothing prevents determining the queue level of a logical queue or sub-queue. The instantaneous extra bandwidth signal B and the extra bandwidth value E are discussed below.
It is determined whether extra bandwidth exists, via step 120. In one embodiment, extra bandwidth is determined to exist only if the level of the queue, such as queue 74, is zero, is below some small value, or is decreasing. However, the determination of whether extra bandwidth exists may instead depend on a single measurement of round trip time or other parameters, or on changes in such measurements. These other parameters may also be used to measure congestion, as described below. Thus, in the preferred embodiment, two definitions of congestion may be considered to be in use. The first definition of congestion is used to trigger flow control of the sub-pipes in step 116. A second definition of congestion may be used in step 120 to determine whether extra bandwidth exists, and it may differ from the first. For example, under the second definition used in step 120, congestion may be considered to exist at a lower traffic level than under the first definition used in step 116. In addition, the definition of extra bandwidth, and thus the second definition of congestion, may change over the time interval T. For example, early in the time interval T, the definition of extra bandwidth may include congestion measurements, to ensure that traffic in the sub-pipes is controlled so as to reduce congestion. Late in the time interval T, the definition of extra bandwidth may change to exclude congestion, to raise the traffic level required for congestion to be deemed present, and/or to rely solely on other parameters, such as queue level. In particular, the difference between the queue level and the maximum buffer capacity (referred to as the headroom) can be used to define congestion. However, nothing prevents the use of other criteria for determining whether extra bandwidth exists.
Generally, the instantaneous extra bandwidth signal indicates whether excess resources were present in the switch 40, or the pipe as a whole, in the previous time period to handle additional traffic. If extra bandwidth is not present, then the instantaneous extra bandwidth signal, B, is set to 0, via step 122. The signal B is referred to as instantaneous because it is based on a single measurement of the queue level, round trip time, or other parameter, or a single determination of a change in such a measurement. Thus, because the definition of extra bandwidth may depend on congestion, the instantaneous extra bandwidth signal, B, may also depend on the definition of congestion used to determine whether extra bandwidth is present. Via step 124, the extra bandwidth value, E, decreases exponentially toward 0 while extra bandwidth remains absent (just as it increases exponentially toward 1 while extra bandwidth is present). In the preferred embodiment, the extra bandwidth value is an exponentially weighted average of the instantaneous extra bandwidth signal. Thus, the extra bandwidth value provides a measure of the resources available over previous time periods. In addition, since the extra bandwidth value, E, depends on the instantaneous extra bandwidth signal, B, the extra bandwidth value, E, may also depend on the definition of congestion used to determine whether extra bandwidth is present.
If extra bandwidth is determined to be present in step 120, then the instantaneous extra bandwidth signal, B, is set to 1, via step 126. The extra bandwidth value is set to an exponentially weighted average of the instantaneous extra bandwidth signal, via step 128. In the preferred embodiment, the extra bandwidth value is a first constant multiplied by the previous extra bandwidth value, plus a second constant multiplied by the instantaneous extra bandwidth signal. The values of the instantaneous extra bandwidth signal, B, and the extra bandwidth value, E, set in steps 122 and 124 or steps 126 and 128 will preferably be used in the next time period to update the flow control for the sub-pipes.
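The updates of steps 120 through 128 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the smoothing weight W are assumptions, since the patent specifies only that E is a first constant times its previous value plus a second constant times B.

```python
def update_bandwidth_signals(extra_bw_present, E_prev, W=0.25):
    """One iteration of steps 120-128: set the instantaneous extra
    bandwidth signal B (steps 122/126) and fold it into the
    exponentially weighted average E (steps 124/128).

    W is an assumed smoothing weight in (0, 1); the two constants of
    the patent are taken here as (1 - W) and W.
    """
    B = 1 if extra_bw_present else 0   # binary signal, steps 122 / 126
    E = (1.0 - W) * E_prev + W * B     # EWMA in [0, 1], steps 124 / 128
    return B, E
```

Repeated calls with extra bandwidth present drive E exponentially toward 1; repeated calls without it drive E exponentially toward 0, matching the behavior described above.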
In the preferred embodiment, a transmission fraction is set for each sub-pipe i in parallel with steps 120 through 128, via step 130. However, in another embodiment, the transmission fractions may be updated serially with the instantaneous extra bandwidth signal, B, and the extra bandwidth value, E. If the sub-pipe traffic is above its maximum level, then the traffic is preferably decreased exponentially by setting the transmission fraction to 31/32 of the previous transmission fraction. If the sub-pipe traffic is not above the maximum level, then the transmission fraction changes as follows. If the previous instantaneous extra bandwidth signal, B, was 1 (extra bandwidth available), then step 130 sets the transmission fraction for each sub-pipe i based on the previous transmission fraction, a constant Ci, and the extra bandwidth value. Preferably, the transmission fraction set in step 130 when extra bandwidth was previously present is:
Ti(t+Dt) = Ti(t) + Ci * E(t) * Dt
wherein:
Dt is the length of the time period (since the transmission fraction was last calculated)
Preferably, the units are chosen such that Dt and the maximum possible queue level, Qmax, are both 1. Thus, if extra bandwidth exists, the transmission fraction of the sub-pipe increases linearly, and may continue to increase linearly as long as extra bandwidth continues to exist. Note, however, that in the preferred embodiment there will typically be no extra bandwidth when the sub-pipes are first controlled (when it is first determined that congestion exists in the pipe), so the sub-pipe flow will not increase at that point. The decision made in step 116 to control the sub-pipes is thus based on a definition of the onset of congestion in the pipe. The sub-pipe control is then applied for a time interval T, which is typically many times the time increment Dt. Early in T, the sub-pipe traffic is likely to be forced to decrease by the mechanism defined above, which is typically based on the second definition of congestion, used to determine whether extra bandwidth is present. However, within the time interval T there may be moments when the sub-pipe flow actually increases; this is especially true near the end of T, after some time has elapsed. Thus, one definition of congestion may be used to initiate flow control in the sub-pipes, while the second definition of congestion, used in defining the extra bandwidth, allows the instantaneous extra bandwidth signal, B, to be either 0 or 1, permitting the traffic in the sub-pipes to both increase and decrease during the time interval T.
If extra bandwidth is not present (B is 0) and the sub-pipe carried more than its minimum traffic in the previous time period, then in step 130 the transmission fraction Ti(t+Dt) of sub-pipe i is set based on the sub-pipe's previous transmission fraction Ti(t), a constant Di, and the previous sub-pipe flow fi(t). If B is 0 and the sub-pipe carries more than its minimum flow, the transmission fraction is preferably given by:
Ti(t+Dt) = Ti(t) - Di * fi(t)
In addition, when B is 0, if the sub-pipe transmits its minimum flow or less, the transmission fraction is preferably given by:
Ti(t+Dt) = Ti(t)
Thus, the transmission fractions set in step 130 ensure that the traffic of sub-pipes carrying more than their minimum traffic decreases exponentially as long as extra bandwidth continues to be absent. The constants Ci and Di are preferably based on the minimum flow values. Additionally, in the preferred embodiment, when the control is first applied, extra bandwidth will be absent because of congestion in the pipe. Thus, the traffic of sub-pipes exceeding their minimum traffic is decreased exponentially through the transmission fractions, while the traffic of sub-pipes not exceeding their minimum traffic remains unchanged while extra bandwidth is absent. Congestion is thereby controlled, since the minimum flows of the sub-pipes are preferably set such that if every sub-pipe delivered a flow equal to its minimum, congestion would not exist. Hence, if congestion exists, at least one sub-pipe carries more than its minimum flow, and exponentially decreasing the flow of such sub-pipes corrects the congestion. As discussed above, the traffic of all sub-pipes may increase linearly when extra bandwidth is available. Thus, congestion can be controlled and the flow of the sub-pipes regulated in a stable manner.
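The step 130 update rules described above can be sketched for a single sub-pipe as follows. This is a hedged illustration under the stated unit convention (Dt = 1 is the default); the function and parameter names are hypothetical, and the final clamp to [0, 1] is an added safeguard not spelled out in the text.

```python
def update_transmission_fraction(T_prev, B_prev, E, f_prev, f_min, f_max,
                                 C_i, D_i, Dt=1.0):
    """Step 130 sketch for one sub-pipe i.

    T_prev : previous transmission fraction Ti(t)
    B_prev : previous instantaneous extra bandwidth signal (0 or 1)
    E      : extra bandwidth value E(t)
    f_prev : previous sub-pipe flow fi(t)
    f_min, f_max : minimum and maximum flows set for the sub-pipe
    C_i, D_i     : per-sub-pipe constants based on the minimum flows
    """
    if f_prev > f_max:
        # above the maximum flow: exponential decrease to 31/32 of previous
        T = T_prev * 31.0 / 32.0
    elif B_prev == 1:
        # extra bandwidth was available: linear increase
        T = T_prev + C_i * E * Dt
    elif f_prev > f_min:
        # no extra bandwidth, above minimum: exponential decrease
        T = T_prev - D_i * f_prev
    else:
        # no extra bandwidth, at or below minimum: hold unchanged
        T = T_prev
    return min(max(T, 0.0), 1.0)  # keep the fraction in [0, 1] (safeguard)
```

The four branches correspond directly to the four cases above: over-maximum decrease, linear increase when B was 1, exponential decrease when B was 0 and the flow exceeds its minimum, and no change otherwise.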
Using the transmission fractions calculated in step 130, packets passing through the switch 40 are transmitted or dropped during the time period, via step 132. A packet is preferably discarded by not allowing it to enter a queue, such as queue 74. In the preferred embodiment, the dropping of packets is based not only on the transmission fraction of their flow through the pipe, but also on the priority of each packet. In another embodiment, packets are dropped randomly. In addition, a drop fraction may be used instead of a transmission fraction. The sub-pipe drop fraction is 1 minus the transmission fraction. Thus, the drop fraction indicates the fraction of packets that should be dropped in order for a particular sub-pipe to attain the desired transmission fraction, or traffic.
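The random-drop embodiment of step 132 can be sketched as below; the names are hypothetical, and the preferred priority-weighted dropping described above is deliberately omitted for brevity.

```python
import random

def transmit_or_drop(packets, T_i, rng=random.random):
    """Step 132 sketch: each packet of sub-pipe i is transmitted with
    probability equal to the transmission fraction T_i, i.e. dropped
    with probability equal to the drop fraction 1 - T_i.  A dropped
    packet is simply never admitted to the queue.
    """
    transmitted, dropped = [], []
    for pkt in packets:
        if rng() < T_i:          # admit to the queue with probability T_i
            transmitted.append(pkt)
        else:                    # drop fraction: 1 - T_i
            dropped.append(pkt)
    return transmitted, dropped
```

Over many packets, the fraction admitted converges to T_i, which is exactly what the transmission fraction is meant to enforce.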
It is determined whether the time period is complete, via step 134. If not, traffic continues to be transmitted based on the same transmission fractions, via step 132. If the time period has been completed, the method is repeated beginning at step 116. However, in one embodiment, step 116 (determining whether congestion exists) may be skipped for one or more time periods as the method 110 repeats. Skipping step 116 allows the flow control in the sub-pipes to continue for some amount of time after congestion no longer exists. In other words, skipping step 116 allows the time interval T set for controlling the flow in the sub-pipes to expire. Such an embodiment permits a high degree of control over the sub-pipes and allows traffic in the sub-pipes to increase, ensuring that extra bandwidth is allocated.
The length, Dt, of the time period over which the method 110 is performed is preferably set before the method 110 is started. However, in another embodiment, the length of the time period may vary. In addition, the time scale of the period is preferably relatively long, perhaps 1 to 10 milliseconds or more. The length of the time period should account for the delay between the ends of the pipe and sub-pipes, because the control of the sub-pipes in the method 110 may change at the end of each time period depending on whether there is congestion and whether extra bandwidth is available. To ensure that the control applied during a time period has the opportunity to clear congestion from the pipe and to provide the method 100 or 110 with proper feedback, packets should have sufficient time to reach the destination and for a notification of whether the destination was reached (if any) to be returned. In other words, the length of the time period should be long enough to allow changes in the transmission fractions to take effect and to provide the system with information about their impact.
Because the method 110 increases flow linearly and decreases flow exponentially, the method 110 is similar to the method 100 and provides many of the same benefits. Thus, the method 110 ensures that traffic through the sub-pipes automatically and progressively stabilizes. Furthermore, the increase and decrease in flow for each sub-pipe depend on the minimum flow for that sub-pipe and its previous flow. Thus, different sub-pipes may have different traffic levels, or services. As a result, the method 110 can produce stable behavior, fairly allocate extra bandwidth, and provide differentiated services. Note, however, that although the method 110 fairly allocates extra bandwidth, fair allocation of resources need not be a consideration in controlling traffic through the sub-pipes and congestion in the pipe. All a network administrator or other user needs to do to provide different levels of service for different sub-pipes, or customers, is to set different minimum and, if needed, maximum flows depending on the level of service desired for each customer. Thus, the method 110 may be used in a variety of applications, such as networks using DiffServ, by Internet service providers that want to provide different levels of service for different customers or different media.
Additionally, the methods 100 and 110 may be used only at the edge of the network being controlled. For example, for the network 10', the methods 100 and 110 may be performed only at the ingress of the switch 14', i.e., at the pipe. Thus, the computationally relatively expensive sub-pipe flow control need not be performed throughout the network 10'. However, the pipes and sub-pipes are typically defined throughout the network 10'. Thus, by controlling the sub-pipe traffic only at the edges, it can be ensured that the corresponding pipes throughout the network 10' are not congested. Congestion can therefore be controlled without performing redundant computations and sub-pipe control across the entire network. However, other (preferably simpler) control methods may also be used in conjunction with the methods 100 and 110 in the same switch and in other switches of the network. For example, a method of controlling traffic is described in the co-pending and commonly assigned PCT patent application No. GB00/04410, entitled "Method and System for Controlling Packet Transmission in a Computer Network". The method of controlling flow in a pipe described in that co-pending application may be used with the present invention. Thus, at least two levels of control may be provided: a finer level for the sub-pipes, and a coarser level for the pipes.
Various mechanisms may be used to determine whether congestion exists in steps 104 and 116 of the methods 100 and 110 depicted in FIGS. 6, 7A, and 7B, respectively. FIGS. 8-11 illustrate embodiments of methods used to determine whether congestion exists in a pipe. The methods described in FIGS. 8-11 may be used with multiple pipes, may be used in combination, and other methods (not shown) may also be used. Additionally, the methods described in FIGS. 8-11 may be used for the second definition of congestion, which may be used in step 120 of the method 110 to determine whether extra bandwidth is present.
Fig. 8 depicts one embodiment of a method 140 for determining whether congestion exists. Method 140 utilizes ECN. ECN is described in IETF recommendation RFC2481, which applies to protocols like TCP. In ECN, two unused bits in the packet are used to indicate congestion. The switch that supports ECN and through which the packet passes sets the first bit. The ECN capable switch through which the packet passes sets the second bit if the ECN capable switch is congested. Thus, the bit pattern indicates whether at least one switch of the sub-pipe through which the packet flows is congested. These bits are typically set as the packet flows from the source to the destination. When the receiver (destination) sends back an acknowledgement, the composite of these two bits is saved and provided to the source (sender). Thus, the ECN represented by these two bits can be used to determine whether the pipe is congested.
In particular, a metric of the number of packets whose ECN indicates passage through a congested switch is determined, via step 142. In one embodiment, step 142 includes determining the fraction of packets flowing through the pipe whose ECN indicates passage through a congested switch. As discussed above, this fraction can be used as the PCL. It is then determined whether congestion (as defined for the ECN) exists, via step 144. In one embodiment, step 144 includes determining whether the fraction of packets whose ECN indicates passage through a congested switch is greater than a threshold. However, some other congestion statistic using the ECN may be used. If it is determined that the threshold has not been exceeded, then the pipe is defined as not congested, via step 146. If it is determined that the threshold is exceeded, then the pipe is defined as congested, via step 148. Thus, using the method 140, it may be determined whether the pipe is congested.
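Steps 142 through 148 can be sketched as follows. The packet field names ("ect" for the ECN-capable bit, "ce" for the congestion-experienced bit) and the threshold value are assumptions for illustration; the patent leaves the threshold unspecified.

```python
def ecn_congestion(packets, threshold=0.05):
    """Method 140 sketch: the pipe is congested when the fraction of
    packets whose ECN bits mark passage through a congested switch
    exceeds a threshold.  Each packet is modeled as a dict with the
    two ECN bits as booleans (assumed field names).
    """
    # Step 142: fraction of packets marked as having seen congestion
    marked = sum(1 for p in packets if p["ect"] and p["ce"])
    fraction = marked / len(packets) if packets else 0.0
    # Step 144: compare the fraction (usable as the PCL) to a threshold
    return fraction > threshold
```

A True result corresponds to step 148 (pipe defined as congested), a False result to step 146.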
FIG. 9 depicts another embodiment of a method 150 for determining whether a pipe is congested. The method 150 utilizes synchronization (SYN) packets. In TCP, a SYN packet indicates the start of a session between two components, such as hosts. When session packets do not reach the destination, for example because they are dropped due to congestion, a new session is started between the components, and thus a new SYN packet is issued. Since SYN packets can be used to measure the number of session starts and restarts, they can be used to measure the amount of congestion in the pipe.
A metric of the number of SYN packets in the pipe is determined, via step 152. In one embodiment, step 152 includes determining the fraction of SYN packets out of the total number of packets flowing through the pipe. This fraction can be used as the PCL discussed above. It is then determined whether congestion, as defined for SYN packets, exists, via step 154. In one embodiment, step 154 includes determining whether the ratio of SYN packets to the total number of packets is greater than a threshold. However, some other congestion statistic using SYN packets may be used. If it is determined that the threshold has not been exceeded, then the pipe is defined as not congested, via step 156. If it is determined that the threshold is exceeded, then the pipe is defined as congested, via step 158. Thus, using the method 150, it may be determined whether the pipe is congested.
FIG. 10 depicts another embodiment of a method 160 for determining whether a pipe is congested. The method 160 utilizes the round trip time (RTT): the time for a packet to travel from a source (sender) to a destination (receiver) and for a notification of receipt to return to the source. The RTT is longer when packets of a session fail to reach the destination, for example because they are dropped due to congestion, or when packets take longer to reach the destination. A longer RTT indicates more congestion in the pipe, so the length of the RTT can be used to measure congestion in the pipe. In addition, the one-way time from sender to receiver may be used instead of the RTT.
A statistical measure of the RTT of the packets in the pipe is determined, via step 162. In one embodiment, step 162 includes determining the fraction of packets whose RTT is longer than some average; however, other RTT statistics may also be used. As discussed above, the RTT statistic may be used to determine the PCL. It is determined whether congestion, as defined for the RTT, exists, via step 164, for example by determining whether the RTT metric is above a threshold. If the RTT metric indicates that the pipe is not congested, then the pipe is defined as not congested, via step 166. If the RTT metric indicates that the pipe is congested, then the pipe is defined as congested, via step 168. Thus, using the method 160, it may be determined whether the pipe is congested.
FIG. 11 depicts one embodiment of a method 170 in which the ECN, SYN packets, and RTT are used in combination to determine whether a pipe is congested. A metric of the number of packets whose ECN indicates passage through a congested switch is determined, via step 172. In one embodiment, step 172 includes determining the fraction of packets whose ECN indicates passage through a congested switch. A metric of the number of SYN packets in the pipe is determined, via step 174. In one embodiment, step 174 includes determining the fraction of SYN packets out of the total number of packets flowing through the pipe. A statistical measure of the RTT of the packets in the pipe is determined, via step 176. The ECN, SYN packet, and RTT metrics are combined, via step 178, to provide a measure of congestion, such as the PCL. Whether congestion exists is determined, via step 180, preferably by determining whether the PCL exceeds a threshold. If the PCL indicates that the pipe is not congested, then the pipe is defined as not congested, via step 182. If the PCL indicates that the pipe is congested, then the pipe is defined as congested, via step 184. Thus, using the method 170, it may be determined whether the pipe is congested.
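Steps 172 through 180 can be sketched as below. The patent does not specify how the three metrics are combined, so the equal weighting and the threshold here are assumptions, as are the names.

```python
def pipe_congestion_level(ecn_frac, syn_frac, rtt_frac,
                          weights=(1/3, 1/3, 1/3), threshold=0.1):
    """Method 170 sketch: combine the ECN fraction (step 172), SYN
    fraction (step 174), and RTT statistic (step 176) into a single
    pipe congestion level, PCL (step 178), and compare it against a
    threshold (step 180).  Each input metric is assumed normalized
    to [0, 1]; the weights and threshold are illustrative only.
    """
    pcl = sum(w * m for w, m in zip(weights, (ecn_frac, syn_frac, rtt_frac)))
    return pcl, pcl > threshold
```

The boolean result corresponds to steps 182/184: True defines the pipe as congested, False as not congested.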
Thus, a variety of methods may be used to determine whether a pipe is congested. If the pipe is congested, the flow of the sub-pipes within the pipe is controlled; otherwise, the flow is not controlled at the sub-pipe level. The flow of a sub-pipe is controlled by exponentially decreasing the flow and, in some cases, linearly increasing the flow. Thus, resources are consumed to finely control traffic at the sub-pipe level only when the network is not operating well due to congestion. Also, congestion may be used to decide how to fine-tune the sub-pipe traffic, for example through the second definition of congestion used to determine whether extra bandwidth is present. In addition, different sub-pipes, and thus different customers, may be provided with different levels of service. Moreover, this level of control may be provided only at the network edge, so that redundant control of the flow in the sub-pipes becomes unnecessary. Other methods of controlling traffic in the network may also be used.
Methods and systems are disclosed for controlling traffic in the sub-pipes of a network. Software written according to the present invention may be stored in some form of computer-readable medium, such as a memory or a CD-ROM, or transmitted over a network, and executed by a processor.

Claims (32)

1. A method for controlling a plurality of sub-pipes in a pipe of a computer network including at least one switch, the plurality of sub-pipes utilizing the switch to transport traffic through the network, the method comprising the steps of:
(a) allowing a minimum flow to be set for each of the plurality of sub-pipes (102);
(b) determining whether congestion exists in the pipe (104, 116);
(c) if congestion exists, controlling flow (106, 132) to one of the plurality of sub-pipes.
2. The method of claim 1, wherein: the controlling step (c) further comprises the steps of:
(c1) if the flow of the sub-pipe is less than the minimum flow, linearly increasing (106) the flow of the sub-pipe or keeping it unchanged; and
(c2) if the flow of the sub-pipe is greater than the minimum flow, exponentially decreasing (106) the flow of the sub-pipe.
3. The method of claim 1, comprising the steps of:
(b1) determining whether there is additional bandwidth (120);
and, the flow control (c) further comprises the steps of:
(c1) linearly increasing (130) the flow of the sub-pipe if the sub-pipe flow is less than the minimum flow of the sub-pipe and additional bandwidth is present; and
(c2) if the sub-pipe traffic is greater than the sub-pipe's minimum traffic and additional bandwidth is not present, the sub-pipe's traffic is exponentially decreased (130).
4. The method of claim 3, wherein: the switch also includes a processor having a queue, the queue being used by the plurality of sub-pipes to transmit traffic through the switch, and wherein the bandwidth determining step (b1) determines whether extra bandwidth is present based on the queue.
5. The method of claim 3, wherein: the extra bandwidth determining step (b1) determines whether extra bandwidth is present based on determining whether the second type of congestion is present.
6. The method according to any one of claims 3 to 5, wherein: the extra bandwidth determining step (b1) further includes the steps of:
(b1i) setting the extra bandwidth value (128) as an exponentially weighted average of previous extra bandwidth values.
7. The method of claim 6, wherein: the flow of a sub-pipe of the plurality of sub-pipes is the provided rate multiplied by the transmission fraction, and wherein the linearly increasing step (c1) sets (130) the sub-pipe transmission fraction to the previous transmission fraction plus a first parameter, the first parameter being a first constant multiplied by the extra bandwidth value, and the exponentially decreasing step (c2) sets the sub-pipe transmission fraction to the previous transmission fraction minus a second parameter, the second parameter being the previous flow multiplied by a second constant.
8. The method of claim 7, wherein: the first constant and the second constant are dependent on the minimum flow of the pipeline.
9. The method of claim 7, wherein: the first constant is the weight multiplied by a third parameter, which is the queue service rate plus the sum of the pipe minimum traffic minus the minimum traffic for each of the plurality of sub-pipes.
10. The method of any of claims 2 to 9 further comprising the steps of:
(a1) allowing a maximum flow rate to be set for each of the plurality of sub-pipes;
and, the controlling step (c) further includes the steps of:
(c3) if the sub-pipe flow is greater than the maximum flow, then the sub-pipe flow is reduced (106, 130).
11. The method of any preceding claim, wherein: performing the controlling step (c) for each of the plurality of sub-pipes.
12. The method of any preceding claim, wherein: the network also includes an edge, and wherein the switch is located at the edge of the network.
13. The method of any preceding claim further comprising the steps of:
(d) repeating the congestion determining step (b) after a predetermined time period and repeating the controlling step (c) throughout the predetermined time period.
14. The method of any preceding claim, wherein: the traffic through the switch comprises a plurality of packets, and wherein the congestion determining step (b) determines whether congestion exists based on an Explicit Congestion Notification (ECN) (140) for a first portion of each of the plurality of packets.
15. The method according to any one of claims 1 to 13, wherein: the traffic through the switch comprises a plurality of packets, and wherein the congestion determining step (b) determines whether congestion exists based on a portion of the plurality of packets (150), the portion of the plurality of packets comprising a plurality of synchronization packets.
16. The method according to any one of claims 1 to 13, wherein: traffic through the switch includes a plurality of packets, each of the plurality of packets being sent by a sender and received by a receiver, and wherein the congestion determining step (b) determines whether congestion exists based on a round trip time between the sender and the receiver for each of the plurality of packets (160).
17. The method according to any one of claims 1 to 13, wherein: traffic through the switch includes a plurality of packets, a first portion of each of the plurality of packets being sent by a sender and received by a receiver, and wherein the congestion determining step (b) determines whether congestion exists based on an Explicit Congestion Notification (ECN) for a second portion of each of the plurality of packets, a round trip time between the sender and the receiver for the first portion of each of the plurality of packets, and a third portion of the plurality of packets, wherein the third portion of the plurality of packets includes a plurality of synchronization packets (170).
18. The method of any preceding claim, wherein: the flow in a sub-pipe of the plurality of sub-pipes is reduced by randomly dropping packets from the sub-pipe.
19. The method according to any one of claims 1 to 17, wherein: traffic in the sub-pipes is reduced by dropping packets based on their priorities.
20. A computer readable medium containing a program for controlling a plurality of sub-pipes in a pipe of a computer network, the computer network comprising at least one switch, the plurality of sub-pipes utilizing the switch to transport traffic through the network, the program comprising instructions for performing the method as claimed in claims 1 to 19.
21. A system for controlling a plurality of sub-pipes of a pipe in a computer network, the computer network including a switch overlapped by the plurality of sub-pipes, the system comprising:
queues used by the plurality of sub-pipes in transmitting traffic through the switch; and
a queuing mechanism coupled to the queue to control traffic through the switch using a minimum flow set by a user for each of the plurality of sub-pipes, the queuing mechanism determining whether congestion exists for the pipe and controlling flow of one of the plurality of sub-pipes only when congestion exists, such that if the flow of the one of the plurality of sub-pipes is less than the minimum flow, the sub-pipe flow may be linearly increased, and if the flow of the one of the plurality of sub-pipes is greater than the minimum flow, the flow may be exponentially decreased, such that traffic through the switch is stabilized.
22. The system of claim 21, wherein: the queuing mechanism also determines whether additional bandwidth is present and increases or decreases traffic for each of the plurality of sub-pipes based on whether additional bandwidth is present.
23. The system of claim 22, wherein: the queuing mechanism determines whether additional bandwidth is present by determining whether a second type of congestion is present, indicating that additional bandwidth is present if the second type of congestion is present, and otherwise indicating that additional bandwidth is not present.
24. The system of claim 21, wherein: traffic through the switch includes a plurality of packets, and wherein the queuing mechanism determines whether congestion exists based on an Explicit Congestion Notification (ECN) for a first portion of each of the plurality of packets.
25. The system of claim 21, wherein: the traffic through the switch includes a plurality of packets, and wherein the queuing mechanism determines whether congestion exists based on a portion of the plurality of packets, the portion of the plurality of packets including a plurality of synchronization packets.
26. The system of claim 21, wherein: traffic through the switch includes a plurality of packets, each of the plurality of packets being sent out by a sender and received by a receiver, and wherein the queuing mechanism determines whether congestion exists based on a round trip time between the sender and the receiver for each of the plurality of packets.
27. The system of claim 21, wherein: traffic through the switch includes a plurality of packets, a first portion of each of the plurality of packets being sent by a sender and received by a receiver, and wherein the queuing mechanism determines whether congestion exists based on an Explicit Congestion Notification (ECN) for a second portion of each of the plurality of packets, a round trip time between the sender and the receiver for the first portion of each of the plurality of packets, and a third portion of the plurality of packets, the third portion of the plurality of packets including a plurality of synchronization packets.
28. The system of claim 21, wherein: the switch includes a plurality of processors corresponding to the plurality of blade switches, each of the plurality of processors having a plurality of ports, and wherein the queues are for processors of the plurality of processors.
29. The system of claim 21, wherein: the network includes an edge, and the switch is located at the network edge.
30. A processor for use with a switch in a computer network, the processor coupled to a plurality of ports and a switching mechanism, the processor comprising:
receiving queues of traffic from a plurality of sub-pipes in a pipe of a computer network; and
a queuing mechanism coupled to the queue for controlling flow from a sub-pipe of the plurality of sub-pipes, the queuing mechanism determining whether congestion exists for the pipe and controlling sub-pipe flow for the plurality of sub-pipes only when congestion exists, such that if the sub-pipe flow for the plurality of sub-pipes is less than a minimum flow, the sub-pipe flow may be increased linearly, and if the sub-pipe flow for the plurality of sub-pipes is greater than the minimum flow, the sub-pipe flow may be decreased exponentially, such that traffic through the switch is stabilized.
31. A switch for a computer network including a plurality of hosts, the switch comprising:
a plurality of processors, each processor coupled to a plurality of ports, the plurality of ports coupled to a portion of the plurality of hosts, each processor including a queue that receives traffic from a plurality of sub-pipes in a pipe of the computer network, the plurality of sub-pipes coupled to a portion of the plurality of ports, a portion of the plurality of ports being coupled to a first processor and a portion of the plurality of ports being coupled to a second processor, and a queuing mechanism coupled to the queue, the queuing mechanism determining whether congestion exists and controlling the sub-pipe traffic of the plurality of sub-pipes only when congestion exists, such that, if the sub-pipe traffic of the plurality of sub-pipes is less than the minimum traffic, the sub-pipe traffic may be increased linearly, and if the sub-pipe traffic of the plurality of sub-pipes is greater than the minimum traffic, the sub-pipe traffic may be decreased exponentially, such that traffic through the switch is stable; and
A switching fabric coupled to the plurality of processors.
32. The switch of claim 31, wherein: the network includes an edge, and the switch is located at the network edge.
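The flow-control rule recited in claims 30 and 31 — adjust sub-pipe traffic only while the pipe is congested, increasing a sub-pipe's flow linearly when it is below a minimum and decreasing it exponentially (multiplicatively) when it is above — can be sketched as follows. This is a minimal illustration only, not the patented implementation; the function name, the additive step, and the decay factor are hypothetical parameters chosen for the example.

```python
def update_subpipe_flows(flows, min_flow, congested,
                         linear_step=0.01, decay=0.95):
    """One control interval of the linear-increase / exponential-decrease rule.

    flows       -- current per-sub-pipe traffic levels (e.g. fractions of pipe capacity)
    min_flow    -- the minimum flow threshold recited in the claims
    congested   -- whether congestion exists for the pipe
    linear_step -- additive increment applied below the minimum (hypothetical value)
    decay       -- multiplicative factor applied above the minimum (hypothetical value)
    """
    if not congested:
        # Sub-pipe traffic is controlled only when congestion exists.
        return list(flows)
    updated = []
    for f in flows:
        if f < min_flow:
            # Below the minimum: increase linearly (additive step).
            updated.append(f + linear_step)
        elif f > min_flow:
            # Above the minimum: decrease exponentially (multiplicative decay).
            updated.append(f * decay)
        else:
            # Exactly at the minimum: the claims recite neither case; leave unchanged.
            updated.append(f)
    return updated
```

Repeated over successive control intervals, this additive-increase / multiplicative-decrease shape drives each sub-pipe's traffic toward the minimum, which is the stabilizing behavior the claims describe.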
HK03103170.3A 2000-03-31 2001-03-30 Method and system for controlling flows in sub-pipes of computer networks HK1050970A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/540,428 2000-03-31

Publications (1)

Publication Number Publication Date
HK1050970A true HK1050970A (en) 2003-07-11

Similar Documents

Publication Publication Date Title
US8665892B2 (en) Method and system for adaptive queue and buffer control based on monitoring in a packet network switch
KR101075724B1 (en) Apparatus and method for limiting packet transmission rate in a communication system
US20070183332A1 (en) System and method for backward congestion notification in network
CN104272680A Signaling congestion
KR20040023719A (en) Method for supporting non-linear, highly scalable increase-decrease congestion control scheme
CN1531804A (en) Method for controlling queue buffer
US6985442B1 (en) Technique for bandwidth sharing in internet and other router networks without per flow state record keeping
CN1663195A (en) Calculation of Token Bucket Parameters for Guaranteed Services in Data Networks
US6724776B1 (en) Method and system for providing optimal discard fraction
JP2008507204A (en) How to manage inter-zone bandwidth in a two-way messaging network
Albuquerque et al. Network border patrol: Preventing congestion collapse and promoting fairness in the internet
CN1168265C (en) Method and system for controlling traffic in a subpipe of a computer network
Aweya et al. Multi-level active queue management with dynamic thresholds
CN110324255B (en) Data center network coding oriented switch/router cache queue management method
Cao et al. Rainbow fair queueing: theory and applications
HK1050970A (en) Method and system for controlling flows in sub-pipes of computer networks
CN101753407A (en) OBS framing method and OBS framing device
JP3394478B2 (en) Congestion avoidance apparatus and method using RED
Patel et al. A new active queue management algorithm: Altdrop
Filali et al. Fair bandwidth sharing between unicast and multicast flows in best-effort networks
Más et al. A model for endpoint admission control based on packet loss
Pan et al. CHOKe-A simple approach for providing Quality of Service through stateless approximation of fair queueing
Singh et al. Utilizing spare network bandwidth to improve TCP performance
Tamura et al. NBQ: neighbor-state based queuing for adaptive bandwidth sharing
Filali et al. SBQ: A Simple Scheduler for Fair Bandwidth Sharing Between Unicast and Multicast Flows